Has the postmodern revolution gone full circle?

By Colin Turfus

While discussions about the philosophical foundations of judgements of right and wrong are often framed in terms of rational versus irrational perspectives, viz. those based on the enlightened values of science and reason as opposed to those based on authority or faith, this is not altogether an accurate view of where the real centre of moral debate currently lies. The game-changer has been the arrival of postmodern ideology and the hegemony which it has established over most debate about public policy and morality. This assertion may come as a surprise to many who are aware of the existence of a philosophical perspective called “postmodernism” but do not see it as having much to do with how they frame their moral judgements or how society around them is ordered. They would, I suggest, probably be wrong to believe so.

In understanding postmodernism, it is important to recognise that it arose not as a logical corollary of the efforts in the “Enlightenment” period to establish a rational foundation for addressing moral dilemmas and resisting the tyranny of religious and traditionalist worldviews in the 18th and 19th centuries, but as a rejection of that project. While Hobbes, Locke, Bentham, Hume, Kant, Hegel and Feuerbach vied with one another to provide a theoretical foundation for moral discourse, ultimately none was able to prevail.

The great prophet who was ultimately to sound the death-knell of the Enlightenment was probably Friedrich Nietzsche, in his portrayal of the madman running around with a lantern proclaiming that God was dead. His suggestion was that the madman represented the Enlightenment philosophers who, in their critique of traditional values, looked to construct in their place a system of values which pared away the superstition and retained the essence; but that there was no such essence. Freed from the constraints of the prior expectations of our peers, we are free to steer whichever course we choose.

Postmodernism builds on this insight, pushing the corollary that there are no objective standards of right and wrong, only differences of perspective. According to the Encyclopaedia Britannica:

Reality, knowledge, and value are constructed by discourses; hence they can vary with them. This means that the discourse of modern science, when considered apart from the evidential standards internal to it, has no greater purchase on the truth than do alternative perspectives, including (for example) astrology and witchcraft. Postmodernists sometimes characterize the evidential standards of science, including the use of reason and logic, as “Enlightenment rationality.”

This point of view is often portrayed as moral relativism, but to do so is to miss an important feature of the postmodernist position: although it holds that there is no one correct point of view on questions of right and wrong, not all points of view are necessarily equal in validity. Indeed, echoing Orwell’s critique of communist society in his Animal Farm, some points of view are in practice “more equal than others.” For, as stated above, values are seen as arising in practice in “discourses” taking place in different social groups or communities. And some groups have greater power or “hegemony” to impose their view on other relatively disempowered groups. Without taking a position on whose views are more correct, the more powerful group’s or the less powerful group’s, postmodernists argue that it behoves us to take the side of the relatively disempowered so as to help redress the intrinsic injustice of the situation.

So, the conversation moves from one about being right to one about having rights. While a traditional perspective on human rights would be to argue that all human beings possess rights equally, the postmodernist position is that greater rights have to accrue to the relatively disempowered and so greater emphasis must be given to defending their values. Thus is born the concept of group rights: women’s rights, gay rights, transgender rights, black rights, Muslim rights, etc. It is one of the great achievements of the postmodernist agenda that, without any need for moral discourse, it has become possible to dismiss almost any moral position which is portrayed as disrespectful of any of those group rights, particularly if that moral position can also be portrayed as promoting the interests of some relatively more powerful group.

Not surprisingly, this approach leads quite quickly to inconsistency and even incoherence. For example, it is often argued in the corporate environment that “diversity” policies are necessary to ensure that the best people are chosen, by which is meant a sufficient number from relatively disempowered groups. But if that is one’s position, one needs to argue that members of different groups bring different talents and perspectives to the table by virtue of their belonging to those different groups, so there is a fundamental inequality between groups that demands to be recognised. This, it would appear, is acceptable if one were to suggest, say, that women bring a greater degree of empathy into leadership than men and should on that basis be favoured more than at present. But if one were to say something suggesting that men, by virtue of being men, are more likely to have some quality or qualities that qualify them for leadership, there would be outrage and claims of sexism or misogyny. Whether any of the supporting claims has a basis in truth is entirely irrelevant. The morality of the issue is determined by whose interests are served by taking a claim seriously.

Thus is the new irrationalism born, where matters of fact and evidence are swept aside in favour of identity politics, which is elevated as the determining principle in all disputes between competing moral perspectives. Just as, Nietzsche argued, Christianity exercised hegemony within 19th-century European society on the basis of authoritarian structures enforcing a morality which society internalised as the natural order of things, so postmodernism has achieved a similar hegemony: it backs up its strictures with laws and regulations which carry stringent penalties, and it ensures that its point of view is taught in all educational institutions, often even to the exclusion of parental rights to assert an alternative position.

Basing its power on an enforcing authority backed up with persistent indoctrination, it has effectively managed to marginalise dissenting opinions and severely curtail moral debate in the public space. It is the new orthodoxy, with divine-like authority to make truth claims on the basis of consistency with its asserted principles, which are immune to disproof or falsification by reason or evidence. Indeed, those seeking to bring evidence to contradict its claims are routinely vilified and marginalised. Thus have we come full circle, recreating the very conditions that the Enlightenment set out, but on its own terms failed, to address.

Happily, the inconsistency and incoherence of the postmodernist perspective is increasingly being challenged by a new generation of thinkers from across the political spectrum. For example, Ken Wilber in his Trump and a Post-Truth World notes how postmodernism has played itself out and, in attempting to create a new basis for determining truth, has ultimately undermined it:

And thus, postmodernism as a widespread leading-edge viewpoint slid into its extreme forms (e.g., not just that all knowledge is context-bound, but that all knowledge is nothing but shifting contexts; or not just that all knowledge is co-created with the knower and various intrinsic, subsisting features of the known, but that all knowledge is nothing but a fabricated social construction driven only by power). When it becomes not just that all individuals have the right to choose their own values (as long as they don’t harm others), but that hence there is nothing universal in (or held-in-common by) any values at all, this leads straight to axiological nihilism: there are no believable, real values anywhere. And when all truth is a cultural fiction, then there simply is no truth at all—epistemic and ontic nihilism. And when there are no binding moral norms anywhere, there’s only normative nihilism. Nihilism upon nihilism upon nihilism—“there was no depth anywhere, only surface, surface, surface.” And finally, when there are no binding guidelines for individual behaviour, the individual has only his or her own self-promoting wants and desires to answer to—in short, narcissism. And that is why the most influential postmodern elites ended up embracing, explicitly or implicitly, that tag team from postmodern hell: nihilism and narcissism—in short, aperspectival madness. The culture of post-truth.

Wilber looks forward to an evolution beyond postmodernism to a developmental model which is more “integrated” or “systemic”. His view is that when a system is broken, as ours currently is, it reverts to the last point at which it functioned effectively. Let’s hope he is right. Such ideas are a welcome breath of fresh air in a political culture in which the discourse revolves less and less around facts and evidence and consists more and more of ad hominem attacks on detractors and dissident voices, launched from within the relative security of group identity silos. Voices of those who, like Wilber, are critical of the failings of postmodernism and emphasise the need for new ideas are increasingly being heard, particularly on social media, where many of the new currents in popular thought are finding receptive audiences. It will be interesting to watch how all this plays out.


The Just Society: Equality or Freedom?

In A Theory of Justice John Rawls conducted a famous thought experiment. He asked: if we were to imagine, behind a ‘veil of ignorance’, being born into a world at some position on the scale between unalloyed privilege and crushing poverty, what type of social system would we advocate? Rawls assumed it would be reasonable to choose a society in which economic justice of a distributive nature prevailed, on the grounds that we would be more likely to be one of the multitude of the poor than of the small fraction of the privileged.

Rawls attempted to derive in a purely rational manner the proper balance between freedom and equality. This has, indeed, been the central narrative of political discourse for at least the last century. It has been assumed that the rational position is a centrist one, forging a middle point somewhere between the two poles of freedom and equality. Although in American terms it was considered radical, and A Theory of Justice remains a touchstone of the liberal-left academy, from the perspective of the present Rawls’s position seems mildly quaint. The middle ground is now largely out of favour, and this is perhaps a timely moment to reconsider the prevailing political narrative. I happen to think that Rawls is wrong: as a matter of logic, of ethics and of the facts of history.

To begin with, the contiguity of freedom and equality is the wrong juxtaposition. Freedom and equality are not opposite ends of a spectrum in which the Aristotelian mean is the just position; they are contradictory ideas which compete for the same space. Therefore, it is impossible to derive a stable balance between them. Logically, if you favour freedom, you cannot accept the idea of equality; similarly, if you favour equality, you cannot logically tolerate freedom. Some, like Rawls – though, I suspect, fewer than in the past – argue that we need to compromise: we accept limitations on our freedom for the sake of some equality (although, strangely, I never hear people arguing the opposite; it seems the argument only goes one way). However, the reality is that the advocates of equality are never content with some equality. In the end, everything must be levelled, to the point of absurdity. It would be reasonable to assume that this might have less to do with the idea of equality as an abstract principle than with its advocates; but even this obsession can be explained by an analysis of equality.

Freedom and equality are, in fact, only related by the concept of power, and who holds it. In freedom, power is distributed; so, the closest we get to equality is when we are free. In equality, power is concentrated in the hands of a few, and even the few controlled by the most powerful; so, the only sense in which we are equal is an equality of powerlessness. To paraphrase Orwell in Animal Farm, everyone is equal, only some are more equal than others.

Despite freedom and equality (or their advocates) competing for social space, there is an ethical difference between them, which to my mind is like the difference between light and darkness. Belief in freedom must logically be accompanied by a belief in tolerance. If I believe in freedom, I do not believe in it only for myself, but for everyone, because the fact that it is distributed guarantees my own freedom. Just as I have a right to my thoughts, words and actions, so do you and everyone else, except inasmuch as the exercise of your right would deprive me of mine, for example by intimidating me or killing me. This is a perfectly reasonable and realisable state, in a society in which everyone shares that fundamental belief. It does have vulnerabilities, though, to pathological liars, the intolerant, those living at the extreme and those whose tendency is to usurp power.

That vulnerability is exploited by ideologues, who manifest all the traits just described. The love of freedom grows out of the philosophical discovery of ignorance, expressed eloquently by Socrates, but reaffirmed in the scientific revolution and accompanying Enlightenment of the seventeenth century. The ideologue’s first trait is absolute certainty. That means, bluntly put, that they believe a lie, since truth is an evolving quality, ever pursued but never finally attained. If you are certain of something, there is a tendency to think everyone who disagrees with you is a fool or a rogue. There is, of course, a reasonable degree of certainty, which anyone having a point of view is expected to possess (otherwise we would have nothing to say), but it is tempered by an openness to correction and development. The ideologue, though, cannot bear correction and hates the open debate of ideas. The more unreasonable the belief, the more vociferously its opponents must be attacked and, in the most extreme cases, silenced. That is why the ideologue loves power: it is a means of controlling knowledge and protecting certainty.

The belief in equality is an ideological position tout court. There is no equality either in nature or in human society. The Procrustean critique of equality is already so well established that it needs no repeating. The socialist dogma of equality of outcomes is just an economic version of such crude egalitarianism and is impossible to realise where any spark of human creativity and freedom remains. It has been shown in practice not to result in greater equality, except in misery and fear for all but a tiny privileged minority. The liberal fudge of equality of opportunity is no more realisable, though a worthwhile goal if pursued intelligently, pragmatically and gradually. I will suggest such an approach (at least the theoretical foundations of one) towards the end of this article.

Being an ideological position, and embodying a fundamental untruth, any programme to implement equality must resort to lies, the denigration of critical voices, the capture of the levers of power in a society, and the use of those powers to force conformity to the dictates of the ideology. This is both the logical necessity of equality and the actual practice of its advocates. It is most obvious in totalitarian states of the left, though it is also manifested in totalitarian states of the right that have policies to ‘equalise’ society by removing undesirable elements. However, it is also seen in otherwise liberal democratic societies, where the equality agenda proceeds stepwise by advancing the cause of groups that are proclaimed to be disadvantaged, less by addressing the root causes of their disadvantage than by political activism and entryism to tear down the normative values of those societies and to brand the relatively advantaged as oppressors. Each step proceeds by labelling the cause promoted as addressing an ‘injustice’. However, the final result is not equality but conformity and the rule of a powerful minority.

If there is one sense in which I would accept the notion of equality, it is that we are at a deep metaphysical or mystical level of equal value as human beings, and that as members of the species Homo sapiens we have a value above all other species. I would qualify that by saying that in a secular context our assessment of the value of any specific individual is driven by a host of symbolic, aesthetic and practical concerns, such as whom they represent, how they present themselves and how they act. Nonetheless, a transcendent sense of human value, in which we feel called to work for the betterment of humanity and, particularly, for the lessening of inhumanity, is not only compatible with freedom; it seems to me to be the essence of freedom.

Turning then to freedom, it has become an accepted orthodoxy that science has demonstrated the non-existence of free will. This is only true, however, according to the canons of positivistic reductionism, and I’m not even sure of the status of this assertion in the light of quantum theory, which portrays indeterminism at a very fundamental level. Be that as it may, the experience of freedom is real at the human and social level; we know when we are free and when we are not free, because it is felt at the level of our perceived status in the social order and our experience of relative power or powerlessness, which is even manifested as a physiological response.

The moment I think or move of my own volition, I assert my freedom and my difference, which is manifest in the world, multiplied infinitely by all the individuals in the world. In the way I am, think and move I create inequality. Naturally, I am better at some things than any other random person and worse at some; which is true of all people, everywhere. Some of these attributes lead to power, influence and wealth, some to mediocrity and some to ruin. This effect is multiplied across all societies and creates the turbulent history of the world, an uncomfortable truth of how individuals, peoples and nations prosper, stagnate and decline.

The true dilemma of justice is not in the clash between freedom and equality, but the subtle negotiation between freedom and responsibility. Freedom guarantees the possibility of doing anything within your power. This is an exuberant, exhilarating, addictive human experience, and one in which individuals can blossom emotionally and intellectually and achieve unimaginable things. On the societal level it enables the conditions under which real human progress can be made. It also, of necessity, allows bad choices to be made, individually and socially. That includes the freedom to act criminally and psychopathically and endanger the lives and possessions of others. It also includes the freedom to be obnoxious or simply insensitive and offend others. It may also just include acts of kindness in good faith which are, nevertheless, unwelcome. It will inevitably include choices which impact on our health, education, career, livelihood, prospects for love and family, and overall happiness and quality of life. Since the outcomes can be so different, it is important to understand responsibility and the part that it plays in freedom.

Responsibility is not well understood, because few people think about it, and it is not part of our social discourse today, apart from sniping asides from the fringes of moral commentary. If most people have heard of it, it is probably as an admonition to bend to the will of the commentator rather than to act on their own will; that is, it is perceived as a threat to, or an imposed limitation of, one’s own freedom. It is true that the word is often uttered as a reactionary shibboleth without, however, having any specific content. This is to misunderstand the role of responsibility.

Responsibility is not the inhibitor of freedom; it is its guarantor. The first stance of the responsible person is to accept that they are free, in both an existential sense and as a social actor making choices. Without this affirmation there can be no responsibility, only obedience and, at worst, slavery. The great tragedy of much of human history, and of much of the world still today, is that social conditions do not allow people to be free and, therefore, to be responsible, though the number so constrained is, arguably, diminishing. The human thirst for freedom is unquenchable; we always choose it as an alternative to tyranny, especially when we have experienced the latter.

The second stance of the responsible person is to accept that their choices and the acts that flow from them all have consequences, for good and ill, for which they reap the benefits and the costs. With experience comes a greater ability to discern between the two and the wise person will not only make better choices but also choose to impose limits on their actions. The actions that destroy, deplete and offend are the ones that are most likely to result in a reaction that aims to curtail the freedom of the individual for the protection of the common good. For this to happen, the power of the community or the state must be invoked. Every invocation of the power of the greater collective or its authoritative representative entails a diminution of the freedom of the individual, which itself informs the state of freedom of the society. Consequently, that which guarantees the freedom of society is an act of self-limitation imposed on oneself for the sake of the greater good. It is something that emerges from the realisation and experience of the actual and potential consequences of one’s actions in the world and the harm that may occur because of them.

The third stance of the responsible person is to work for the common good, which is as close to a definition of social justice as I would allow. A commitment to justice in this sense is not a commitment to equality, but it can be compatible with a commitment to reducing inequality, particularly of opportunity. Justice, we might say, is relative freedom (rather than absolute freedom). Justice is the addressing of actual injustices, where there is the absence, limitation or oppression of freedom. It is not attempting to equalise everything by limiting the freedom of the majority in favour of a minority. People are not, and never will be, equal in freedom, but it is not unreasonable to address that issue by increasing the freedom of the less free. One of the ways of doing that is education about freedom and the values of freedom, which come through a grounding in science, the humanities, the arts and ethics. Another is through strengthening the character of people to be self-reliant and resilient, as well as generous in spirit. Physical disadvantages can be, and increasingly are being, addressed through technological development, empowering people who have life-limiting conditions.

It would be naïve to think that we could do without laws and rely simply on the self-realisation of all the individuals with whom we share a society. There is an argument to be made, though, that justice and the common good can only emerge when there is a keen sense of individual freedom and a commitment to be governed by a state that protects and fosters that sense based on an evolving notion of truth. An over-strong state or community has ideological motives, degrades its commitment to freedom and replaces it with coercion, precisely because it cannot command assent. In a free society, the justification for the state is that it protects the freedoms of the people that it represents, internally and externally, and not the interests of a ruling faction.

The quest to build an equal society, on the other hand, requires totalitarian government precisely because of its fundamental impossibility. In addition, radical egalitarians feel no need to exercise the type of self-control discussed here in their treatment of other people, and feel free to offend, demean and, ultimately, dispossess and eliminate those that they have determined to be the enemies of equality. Of course, this mistreatment of those they consider ideological enemies demonstrates the absurdity of their position as believers in equality, and ultimately warps both their sense and their realisation of justice.



Deserved honours and divisive honorifics: respect, rights and freedoms in an era of identity activism

The British actor Ben Kingsley, probably most famous for his title role in the film Gandhi, has, since being knighted in the 2001 honours list, apparently insisted on being referred to as ‘Sir Ben Kingsley’. This is his right and he is, from all accounts, quite offended if the honorific is overlooked. Not all recipients of such honours are quite so sensitive about the proprieties being so closely observed, nor is every slight redressed with quite the force of the Kingsley wrath. Nevertheless, although it is his right, we are not obliged to comply with his wishes. There is no law, outside the laws of polite society, that requires us to address a knight of the realm by his title, regardless of any damage inflicted on his self-esteem.

I should say I have nothing against the person, the persona or the onstage and on-screen performances of said Sir Ben. I have used him merely as a colourful illustration of the principle that the right to receive a particular good, service or respect does not automatically place upon other members of society an obligation to fulfil that right to the holder’s affective satisfaction. If I have bought a lottery ticket, I have a right to participate in the lottery, but not a right to win. If I have paid for some goods, I have the right to receive them in good condition, but not to be perfectly satisfied by them. In a court of law, I have a right to have my case heard, but not the right to receive the verdict I want. And so on. In no case I can think of do the privileges conferred by a right imply an additional right to be respected, or an additional obligation on the rest of us to make the bearer of the right happy.

Nevertheless, although the absence of a right to be happy does not entail the absence of a right to be respected, we have nonetheless travelled from the well-founded and established proposition that the rights of others must be respected to the belief that there is indeed a right to be respected. The confused logic of this pathway is not limited to the layperson; it permeates academia and the judiciary, instigated by intellectual sophistry and compounded by legal activism. The problematic issue, though, from a societal perspective, is not a logical one. It is that the notion of a right, and the necessity of respecting rights, is so fundamental to legal culture that anything that can be established as a right has the force of law behind it.

The entire debate has become confused around the conflation of rights with respect. Both terms have also suffered from an inflation of meaning over the past few decades. A right, at least as established under English law, meant a freedom that could not be interfered with by those in authority, such as the state, or by any other individual. These rights included the freedom of belief and conscience, the right to free speech and the right of assembly. Sir Ben Kingsley’s right to his honorific is a right in that mould; he has the freedom to append that title to his name, a freedom which those of us not similarly honoured do not have; it is, moreover, a freedom which he is not obliged to practise but which I am obliged to respect. However, and this is the crucial point, I am not obliged under law to acquiesce in, or respect, the choices he makes on the basis of that freedom. I am free to do so, but also free not to do so.

What has happened to the notion of rights in the post-war period is that they have been transformed from negative freedoms to positive goods for the individual, such as education and employment, and then to positive goods for groups, including the protection of identities. With each step there has been a move away from holding the authority of the state to account, towards empowering the state over goods which it is increasingly difficult to guarantee, resulting in it becoming more coercive in its attempts to deliver them.

At the same time there has been a confusion between and a conflation of two meanings of the word ‘respect’. The first of these is defined as the “deference to a right, privilege or privileged position”, the second as “esteem for or a sense of the worth or excellence of a person, a personal quality or ability”*. These meanings are clearly different, but they have become confused and conflated through the notion of group rights. Such rights are invariably based on ‘identities’, a slippery term as they are largely self-constructed and weakly bounded, meaning they can be restructured and multiplied virtually endlessly. Identity as a self-construction necessarily evokes the sense of worth of the group to which one belongs; and as a right commands deference to that judgment. Hence, the conflation of the two senses.

The particular case I have in mind, because it highlights the extremes to which this idea can be pushed, is the decision by some states in America and Canada – surely to be replicated more widely sooner or later – to give legal force to the concept of preferred names for transgender people; that is, a legal requirement that in any public setting transgender people, or people with specific gender identity requirements, must be referred to by their preferred pronouns, and not simply by the pronouns ‘he’ or ‘she’ based on a speaker’s own identification, assumption or assertion of their gender. For those who are not up to date with this area of identity advocacy, this extends far beyond the use of gender-neutral terms such as ‘they’ and its lexical cognates (which, despite their grammatical awkwardness, I, like many, have been using for the past 30-odd years when gender is unknown or unimportant). Facebook now lists as many as 70 varieties of gender identification.

The first thing to say, as others have pointed out, is that the standard gender pronouns are not an honorific, a mark of respect, but only a mark of categorisation, categories that have emerged organically within the evolution of language. It is an entirely fallacious claim, therefore, that because the use of standard gender pronouns confers respect, the same respect should be extended to any form of gender identification that one can conjure up.

We are accustomed in our society to making allowance for people’s individuality; and, although most of us do not consider this to be entangled with questions of rights, we are willing to accommodate people’s reasonable requests in the name of common humanity and shared social bonds. However, the requests – actually, demands – that are being foisted upon those working in the public domain, such as in businesses, schools and universities under those legal jurisdictions, to match every individual’s gender self-identification with their preferred pronoun or other title (and be aware that these include such tags as ‘gender fluidity’, which entail switching gender identity upon a whim), are so outrageous that we cannot help but suspect that there is an ideologically driven agenda.

Moreover, unlike the case of the be-knighted Sir Ben, whose honours – though rightly deserved – have mainly the force of tradition and convention, and whose demands for recognition we are under no obligation to meet, or only to the extent that we wish to remain on good terms with him, the case with gender identity is wholly different. There, the right is not predicated on any recognised accomplishment, but is demanded only on the basis of a belief. A specious one at that; one not founded on any credible empirical evidence. Yet it is one backed by the full force of the law. Of course, people have a right to their belief, regarding gender or any other marker of identity, but they do not have the right to respect for their belief, or acquiescence in the charade that their identity is sovereign. All identities are self-constructed and negotiated.

This seems to me to be the point on which legal activism has betrayed the inner meaning of a right, which is that a freedom always comes with consequences, including the consequences of disapproval and disavowal. Such advocates have carved out a realm of pure freedom purportedly protected from any consequences whatsoever, on the pernicious premise that others’ exercise of their own freedom in disagreeing with the social manifestation of that right – and, especially, with the coercion of respect for the choices made under it – can be categorised as ‘hate speech’ or ‘abuse’. The truth is that freedom is never without consequences, whether we choose to believe it or not, and in promoting the idea that it can be, such advocates give rise to a realm of chaos, both psychological and social.

I have dwelt on the issue of gender identity as a current manifestation and illustration of what happens when philosophy abandons logic and evidence, and of the ramifications of that at a societal level. On a personal level, I have been acquainted with a handful of people with complex gender identities. Some of those earned my respect for being people of integrity, whose character and accomplishments transcended the singular dimension of their gender. I suspect that for many more it has become a fashionable eccentricity engendered by the extreme liberality of the societies we inhabit in the West, to which the young are particularly susceptible. If so, it is a dangerous game that has unchained respect from accomplishment and freedom from consequences, one in which those in academia are particularly complicit.

*Note: definitions are adapted from the online dictionary Dictionary.com: “respect” (n.d.), Dictionary.com Unabridged, retrieved 31 January 2018 from http://www.dictionary.com/browse/respect

The Unintended Consequences of Law

By Colin Turfus

It is a little-known fact that the title of John Steinbeck’s famous novel “Of Mice and Men” has its origin in the ode “To a Mouse” by the Scots poet Robert Burns, written a century and a half earlier. The reference is specifically to the following passage from the end of the penultimate stanza:

The best laid schemes o’ Mice an’ Men,
Gang aft agley,
An’ lea’e us nought but grief an’ pain,
For promis’d joy!

The poet, reflecting on how in ploughing his field he has wreaked devastation on a poor mouse’s homestead, highlights a theme which is no less relevant for us today than it was for Burns’ contemporaries, including the mouse; namely our frequent inability to foresee the potentially negative consequences of our plans and actions.

The theme is a perennial one in literature. Most of Shakespeare’s tragedies could be said to be poignant illustrations, in one way or another, of this principle. A quick search on http://www.goodreads.com reveals no fewer than 235 volumes currently in print on the theme of unintended consequences!

A more recent author who devoted his life’s work to elaborating this theme in one way or another is the Nobel prize-winning economist and social philosopher Friedrich von Hayek. His general thesis is probably most succinctly stated in his final, and relatively accessible, work, “The Fatal Conceit” (1988), about which I have had cause to write elsewhere on the Societal Values website. The conceit of which he writes is that of governments, or their supporters, who imagine that the state, by acting with the intention of addressing a problem or achieving a certain end, has at its disposal – by virtue of its privileged position and power – a sufficient grasp of what it needs to know to succeed in achieving the desired end.

I should also mention in passing Robert Turley’s 2008 book “The Rule of Law and Unintended Consequences”, which overlaps to some extent with the theme I wish to explore below. He argues that the layers of amendments to the US Constitution and Bill of Rights – often interpreting language in the original documents in ways in which it would not or could not have been interpreted at the time of writing, often with the intention of bringing about changes in the ordering of society – are leading to unforeseen consequences which are often undesirable.

A real problem here is the ease with which those who enact the original rulings in law avoid any sense of blame for the undesired consequences on the basis that, well, they were not what was desired. One could argue that what is going on here is a conflation of philosophical categories. When it is asked who or what is responsible for the present unholy mess, moral responsibility is typically denied by those who took critical decisions, on the basis that there was no malevolent or mischievous intention. But that is a separate question from the question of causality: viz., whether, if a different decision had been made at the time, the current predicament could have been avoided. While the media are often energetic in pursuing the issue of moral culpability (we all love a scandal and generally celebrate when the scalps of politicians and lawmakers are collected), they show considerably less interest in what is surely the more important question, namely what can be learned from the situation to ensure a similarly unfavourable state of affairs does not recur. Such learning, if it is addressed at all, is likely to come through the setting up of a public enquiry, which typically drags on for years and only reports back when the media (and consequently the public at large) have largely lost interest in the issue being investigated.

But the passing of new laws onto the statute books is only one way in which the mischief of unintended consequences is perpetrated in modern society. An increasingly common response to perceived problems or injustices is to call for increased regulation. For example, the clamour has recently been renewed to implement the recommendation of the Leveson Report to set up a press watchdog, to whose jurisdiction newspapers would be required to sign up or else face draconian penalties. Whenever this proposal is resisted on the basis that it turns the clock back on hundreds of years of press freedom in this country, the argument is invariably countered by the suggestion that that is not the purpose of the watchdog, which is rather to empower the victims of media intrusion (mainly celebrities, it would appear).

Although the press continue bravely to resist, regulators encroach into virtually every other industry and professional arena – whether education, engineering, telecommunications, finance, insurance, aviation, or the energy and automobile industries – and the proliferation of red tape seems to know no bounds. Of course, whenever this trend is challenged, the same tired old argument is trotted out: that those who do not welcome each and every new initiative are in favour of deregulation and hence anarchy. The intention of the regulation, it is always argued, is to prevent abuse and/or allow it to be addressed, so what objection can be raised against that? Well, if you don’t see the fallacy in that line of argument, you have not been paying attention…

However, rather than just lamenting the ever-increasing scale and scope of the encroachment of regulatory intervention, I would like to point out a more fundamental fallacy in the project to regulate our way to safety, security and prosperity. Let us ask ourselves, on what basis should we decide what is appropriate regulatory intervention in an industry, the operation of which is critical for public health and safety, or for the successful functioning of the national economy on which we depend for our livelihood?

In a complex industry, this is invariably a task which requires expert judgement. We would not expect the governance of critical industries to be put in the hands of anyone who was less than an expert. So it is that financial institutions employ ever-increasing armies of risk personnel and compliance officers to protect against mishap. And while schools employ few dedicated health and safety officers, an ever-increasing portion of each teacher’s time is eaten up by tasks required to comply with one regulation or administrative requirement or another.

But is it “experts” who are, in this scheme of things, deciding how best to guard against risk or harm? While they may in moments of nostalgia have some recollection of a past life when human beings lived in a state of nature and evolved best practice based on experience, it is highly unlikely that any of the “experts” at the coal face of the industry, dealing with new issues as they arise, is afforded much autonomy to use their judgement or expertise to address them. Rather, they will be expected to implement a policy which has been drafted and imposed by an administrative or managerial layer whose job is to do just that. There is no problem there, you may suggest, provided the policy is based on best practice. But where does best practice come from? Precisely from the experts with frontline experience, who find themselves being instructed by administrators who often have considerably less, or certainly less recent, experience.

But the problem does not stop there, because frequently those inside an institution are deemed insufficiently trustworthy or capable of codifying and enforcing best practice. For that reason external regulators have to be appointed and given draconian powers to perform that task. But how can someone who is not even a part of your organisation ensure your compliance with best industry practice? Inevitably, resort has to be made to the imposition of mandatory reports following a standard template, tickboxes and audit trails which can be inspected at regular visits. And before you know it, safety, efficiency and best industry practice have become synonymous with compliance with the regulatory regime.

While this may work for a while, the inherent instability of this situation becomes apparent on a moment’s reflection. Prescriptive rules can only be written based on a fixed view of what the issues are. But if those rules are enforced and developed by a regulatory authority not working inside the organisations it seeks to control, it will not be aware of new issues as they arise. And those who are so aware are not empowered to address them, because they lack either the authority, time or motivation, or possibly all three, their main duty being to furnish evidence that regulations have been complied with. Furthermore, the administrators above them are motivated in the same way and, in their dealings with regulators, are unlikely to raise the subject of new issues they may be facing (even were they aware of them). More likely they will look to discuss with regulators only those issues brought up by the regulators themselves: the reward for being proactive is likely to be new requirements to implement yet more intrusive monitoring and evidence-collecting, all of which costs money and is damaging to their institution. It is not long before their priority has become to reduce to a minimum the number of points cited by regulators as requiring attention.

The end consequence of all this, an entirely foreseeable one, I would argue, is that organisations’ risk management policy ceases to look at the real risks arising inside the organisation and is reduced to addressing instead the risk of attracting regulatory criticism. Stultified, inflexible, mechanistic rules purporting to reflect “best practice” – usually focussed on addressing the last big crisis – drive out expert opinion based on more recent experience of current issues, as the practitioners whose previous insights provided the authority on which the regulatory prescriptions were justified are marginalised and reduced to passive “rule takers.”

Such consequences may not be intended; but they are foreseeable.

Nietzsche and Weber: Transcendent Individualism as Resistance to the ‘Iron Cage’ of Bureaucratic Rationalisation



Modernity has been characterised not only by the great benefits brought by the increase in scientific knowledge and the technologies that have flowed from it, such as increasing wealth and convenience, improvements in health and well-being, and access to enormous amounts of information by ordinary citizens, but also by the increased possibilities which technology has facilitated for the documentation, regulation and control of our individual lives by governments, corporations, or the cooperation of the two. This was already foreseen at the turn of the twentieth century by one of sociology’s founders, Max Weber, who coined the term ‘the iron cage’ to characterise the growth of bureaucratic rationalisation in capitalist society. Other writers also perceived this tendency within modernity, notably Friedrich Schiller before him and Franz Kafka after, in The Trial. Over the intervening century the bureaucratic state has slowly but inexorably been stretching its tentacles into every aspect of social life, and this development has gathered pace with the advent of big data. With the convergence of government with big data, such as the establishment of a social credit system in China, this tendency is now reaching its apotheosis in the ‘digital state’.

The argument made in this essay is that while technological developments have facilitated the drift towards the digital state, we have allowed ourselves to be seduced by the promises that the digital world holds, while neglecting the matter of our spiritual being, specifically the rationality, freedom and moral individualism which is the foundation of a sustainable democratic order. While responsibility for this neglect cannot ultimately be laid at the feet of anyone but each of us individually, there are cultural currents that define the social context in which we are brought up, educated and live our lives, and those currents are driven by thinkers of great perception and boldness. Friedrich Nietzsche (1844-1900) was one such thinker. His influence on the twentieth century, if largely unacknowledged, has been profound, as various aspects of his ideas contributed directly or indirectly to eugenics, National Socialism, existentialism, the sexual revolution, liberal theology and postmodern philosophy. By advocating hedonism as a positive virtue, Nietzsche unleashed the genie of irrationalism in Western culture, where it has played havoc with our thinking and institutions ever since.

I will briefly review four aspects of Nietzsche’s philosophy and their influence on European culture1: the Übermensch, the transvaluation of values, the death of God, and the eternal recurrence. As post-modernism is the contemporary intellectual legacy of Nietzschean philosophy, I will consider how this legacy is taking forward the programme of transvaluation, and the influence that this is having on modern culture and, specifically, on individualism as the bulwark against the bureaucratic state’s total dominance. Finally, I will re-evaluate Nietzsche for insights that might yet reinvigorate individualism and the democratic tradition.

The Transvaluation of European Thought

Like Weber, Nietzsche observed the increasing bureaucratisation of European society and, like Weber, saw it as rooted in Christian doctrine and values; unlike Weber, however, he was not content merely to objectivise these values as structural components in a ‘science’ of society; instead, he called for the wholesale transvaluation of our value system. Under the influence of Schopenhauer’s philosophy of the will, he developed his idea of the will-to-power. Nietzsche saw the phenomenon of bureaucratisation as a moral failure of Christian civilisation, particularly as represented in the bourgeois life of the middle classes, and this failure as arising from the weakening effect of Christian values such as humility, meekness, love and charity on the will-to-power. In place of these values, he sought to instil what he saw as the aristocratic values of the past, those of the warrior code of the pagan gods.

Nietzsche stands in opposition to much of what we think of as philosophy in the Western tradition, usually discussed in terms of the dual traditions of rationalism and empiricism, which can be traced back to the debates of the ancient Greeks, although inflected through the ideas of medieval scholasticism. Rather, he made a turn into mystification and mythologisation through the medium of analogy and aphorism. His most influential work, Thus Spoke Zarathustra, uses the figure of a hermit seer, nominally based on the actual founder of the Zoroastrian religion2, who descends from his mountain to speak about the Übermensch (Over-man, more commonly translated as Superman) and announce the death of God to the world.

The Übermensch is Nietzsche’s anthropological prototype, a heroic figure, nominally based on the pagan gods of German folklore, who rejects the values of contemporary society to live entirely by their own chosen values. The Übermensch – talented, ruthless, aristocratic and this-worldly – is the opposite of the stereotypical bourgeois middle-class person that Nietzsche despised. Despite the middle classes embodying many of the virtues of stable societies and their cultural values, they are consistently a target for elitist figures, including the totalitarian ideologues of left and right of the past century and their intellectual apologists. One can see Nietzsche’s point to some extent; although most of us in the West at least are middle class, to aspire to be middle class is to accept a place in Weber’s ‘iron cage’ of an increasingly regulated existence. To the extent that we are aware of this, we feel a call to resist, and the Übermensch offers us one model of resistance. For reasons that I will develop further below I think it is the wrong model; not wrong absolutely, but too partial to address our current requirements. What it does suggest is that resistance has an element of danger, both risk to ourselves and – at least potential – threat to others.

Surveying the conditions of his day, Nietzsche believed European civilisation was on the verge of sliding into nihilism. The cause of this catastrophe, he argued, was that Christianity was effectively emasculating the population; belief in the afterlife, values such as meekness, humility, love and forgiveness, and turning the other cheek in the face of hostility, were diluting the will-to-power necessary for the vitality of a culture. As part of his critique of Christianity, Nietzsche, through the mouthpiece of Zarathustra, announced the death of God, meaning that belief in God and in an afterlife no longer had any power to motivate European civilisation to greatness. His riposte to Christian belief was the doctrine of the eternal recurrence. This is best understood as a thought experiment: imagine that we had to live each moment of our life over and over again eternally; then imagine living it without a single regret. Nietzsche was not advocating living a blameless life, but a Dionysian existence of excess without shame.

Is it true that belief in an afterlife encourages apathy towards social development in this world? One can see logically why it could be true, but there is no compelling evidence of a causal relationship. The Victorian period in British history was marked not only by a strong religiosity, but also by substantial social reform frequently motivated by religious belief. Nietzsche evidently moved in more genteel circles, in which an insipid form of religious observance encouraged passivity rather than social engagement. This coincided with the rise of more bureaucratic states in Europe as urban populations rose with the development of capitalism and industrialisation driven by scientific discovery. Together they created a pliant cultural milieu, in which individuality was subsumed in a culture of mediocrity. Against this reality Nietzsche railed, calling for a transvaluation of values: the wholesale replacement of the Christian virtues and the values arising from the Enlightenment with the pagan virtues of the aristocratic warrior, the elevation of a Dionysian view of human life and potentialities.

One sees something like a need for a Nietzschean reaction to the present-day dominance of illiberal values, which, together with the rise of digital technology, have emasculated the vibrancy of Western and other developed cultures. We are a few steps away from becoming vassals of a totalitarian digital state. The implementation of a social credit system in China is the precursor of what may happen globally if present trends continue, because it has a logical inevitability as well as an intrinsic appeal to the powerful. However, there is a terrible paradox to Nietzsche’s revolt against the Christian and humanist traditions of European culture: standing outside the mainstream and preaching a philosophy of the extreme – a heady mixture of violence and hedonism – against the suffocating dictates of reason and conventional morality has weakened the very core values of European and Western identity and stability, and allowed the influx, cultivation and nurturing of extremist ideologies at the very heart of many of our academic institutions.

The Susceptibility of Post-Modern Societies to the ‘Iron Cage’

The ‘iron cage’ of Weber’s imagination is as apt a description of the social trends we see today as it was of his own time. Two new factors have been added: the emergence of digital technology, which has accelerated and augmented the bureaucratisation of the state and its intrusion into ever more areas of individual and family life; and the rise of a rights-based illiberalism which increasingly necessitates the use of the tools of state power to implement and police its diktats in every corner of society.

How have we been enticed into the iron cage, and how do we continue to live there for the most part unaware of our imprisonment? Answering those questions fully would require a historical and psychological account, and I am neither a historian nor a psychologist; but from a socio-philosophical perspective it can plausibly be argued that a Nietzschean transvaluation has in fact occurred. European civilisation has been based upon an individualism derived from both classical Enlightenment values and Christian values. This type of individualism has provided people with the tools for both internal resilience, that is, inner conviction in an extrinsic truth, and the ability to call out wrongdoing and transgression in the name of a greater good, not only moral but also social. At the same time, it has also bred a belief in fundamental freedom and tolerance, meaning an acceptance of that with which one did not necessarily agree. Beyond this, these fundamental values have provided the basis for a shared understanding and belonging in a web of communities, both secular and spiritual, in which disagreements could be discussed in a more-or-less civilised manner. It is this individualism which has now been severely weakened.

How is it, then, that the culture that underlay Western individualism has been so etiolated? I think the seeds lie already in how Christianity and humanism developed through their institutional embodiments. In some respects their positive strengths and values made them susceptible to the enticement of alternative – more extreme – interpretations of their virtues. These forces include the emergence of a culture of groupthink. At some point in the development of human rights thinking, the notion of group rights became accepted. This went against the very idea of human rights in its original form, which enshrined the right of the individual to be protected from the power of the state. The protection of the rights of a group requires an inversion of this priority, that is, the interference of the state in the rights of individuals freely expressing their views on groups considered vulnerable. Of course, it can be, and is, argued that this represents progress in social matters; nevertheless, it was a breach in the protection of individual rights. The expansion of this initially laudable idea of the protection of vulnerable groups has continued apace, until it has come to occupy almost the entire discourse on human rights; and where group rights conflict with the individual right of self-expression or conscience, almost invariably group rights – the protection of one’s rights as part of a collective identity – take precedence in any legal judgement.

A second related threat is the progressive undermining of the spiritual and secular values of European civilisation. For reasons that it is beyond the scope of this essay to consider, spiritual and secular values, while often in tension, exist in a symbiotic relationship. It has often been noted that the particular religious legacy of the West has been instrumental in creating its intellectual culture. Attempts to distil the essence of rationality shorn of this historical and cultural context have inevitably run into paradox. At least since the French Revolution, though, the intellectual culture of the West has been increasingly hostile to religion, and this has permeated almost every institution and medium of mass communication. For example, the EU is an attempt to create a European identity based entirely on secular values, without any reference to its shared religious history. To some extent this trend is understandable, as it can be seen as a reaction against the past historical abuses of power of the Christian churches and the wars of religion. However, the lessons of the French Revolution should disabuse us of the idea that reason alone is the guarantor of a just social order. I suspect (though I have no evidence for this) that religion creates a context of rules for an extended community in which reason can operate but is constrained; freed of this constraint, reason has nothing to operate on but itself, which at least explains the self-destructive tendencies in the hyper-rationalism of post-modern philosophies such as deconstructionism.

Post-modernism is doubtless the principal contemporary ideology with a Nietzschean lineage.3 Its indebtedness to Nietzsche is two-fold. On the one hand is its clear inheritance of Nietzsche’s diatribes against Christianity and rationality, though reinterpreted through a Marxist appeal to equality for the downtrodden (replacing the industrial proletariat with whoever can conveniently be labelled a victim of Western power structures) and the subtle use of dialectic that allows the play of meaning to the verge of semantic nihilism. On the other hand are its incessant narratives and barely concealed love of confrontation and transgression: from Foucault’s discourses on power and ‘symbolic violence’ (basically everything), through the anti-imperialist, radical feminist and queer theorists who subject even science and mathematics to their victimological hermeneutics, to the current vogue for ‘safe spaces’, ‘microaggressions’ and ‘trigger warnings’, which forecloses open debate and precipitates pre-emptively defensive acts of violence. Nowhere is this postmodern dialectic more revealing than in its apologetics for radical Islam, despite (or is it because of?) the latter’s anti-rationalist and anti-science fundamentalism, its oppression of women, its support for global jihad and its dreadful human rights record.

Resistance to the ‘Iron Cage’

Is it possible to interpret Nietzsche for a route out of the iron cage, into which, I have argued, he has unwittingly helped entice us by creating the cultural shift in values that is facilitating the advent of the totalitarian digital state? I believe that a reading of Nietzsche can be foundational to a reassessment of individualism moving into the emerging information age, both of its rationalistic elements and of its Christian morality. I will focus on two of Nietzsche’s concepts, the figure of the Übermensch and the eternal recurrence.

The Übermensch has been criticised as a type of proto-fascist ideal. They live by an aristocratic code of superiority, the will-to-power, which is what attracted the Nazi theorists to the idea, and it is certainly true that the Nazis appropriated the terminology for their own propaganda.4 To that extent, the delineation of the idea itself makes Nietzsche responsible. That, however, can be said of almost any idea: it is subject to misinterpretation and misappropriation. A reading of Nietzsche on the subject should be enough to answer the criticism. Fascism is a branch of socialism that identifies the state with national identity rather than the industrial proletariat. The Übermenschen live by their own values, not by the values of the collective. They have no allegiance to the state, to an ideology or to a collective identity, nor obedience to a Führer – which is where Nietzsche and fascism part company.

I think Nietzsche was right to critique the dominant values of the culture of his time, particularly the way in which Christianity, with its focus on sin and salvation, diminished the image of man and reduced the capacities and potentialities of life in this world with the promise of a better life in the next. He was also right in predicting the slide into nihilism that occurred with the two world wars. It is possible that the very culture of inadequacy and dependence which he lacerated was instrumental in the rise of Hitler, who came as a messianic saviour to the German people. However, the image of the Übermensch should not be appropriated wholesale, but accepted critically as a corrective to the weaknesses of the dominant European culture. Particularly at this time, as people fall increasingly in thrall to the new digital culture and to the possibilities for radical government control over the actions and thoughts of citizens, Nietzsche’s Übermensch holds out the possibility of the individual citizen becoming more dangerous to the power of the state.

That said, this does not require a total transvaluation of the sort proclaimed by Nietzsche. Many of the values that he criticised have an important place in our culture and our psychology. The fact is, we are physically and morally limited, and we fail and commit sins. All cultures have evolved methods for individual and societal healing – confession, punishment, contrition, mercy and forgiveness – depending on the nature of the crime. I suggest that, rather than rejecting the values of the culture of which we find ourselves a part, we should engage in a more critical appropriation and individualisation of those values, accepting the positive aspects while resisting attempts by the state to coerce us into its desired patterns of behaviour. The aim is to redress the balance in the relationship between the state and the citizen, which has flowed in the direction of state empowerment over the last 100 years. It is not a repudiation of statehood, but of the totalitarian bureaucratic state that looms threateningly just over the horizon. It is also to accept the responsibility of becoming a better citizen, one who holds the state to account.

The idea of the eternal recurrence (or eternal return as it is also known) is probably the most difficult of Nietzsche’s ideas to fathom. I have offered my interpretation above, and on the surface a more morally odious and nihilistic idea can barely be conceived. Yet I want to turn that on its head now, and consider how that might yet presage an important philosophical turn in European civilisation. The eternal recurrence, on Nietzsche’s own understanding, means to live beyond not merely belief in a life after death, but beyond belief itself, in a world of values. It is to live in the eternal present; not so much to live hedonistically in the present moment as such, but to live one’s values as if they are eternal values. Nietzsche therefore declares that the age to come is the new axial age, in which matters of value, whether they be religious or secular, take precedence over the matters of ontology and epistemology which have hitherto been the central concerns of philosophy.

Just as Nietzsche could not contemplate a transvaluation of European civilisation without a mythological underpinning, so too a reinterpretation of the eternal recurrence as a paradigm shift to a values-based culture has its own mythology. This is best described by Morris Berman’s concept of ‘the re-enchantment of the world’, which emerged in a book of the same name on the philosophy and psychology of science and was adopted as a tellingly evocative motif among certain environmental writers and theologians in the late twentieth century. Coming full circle, it was, ironically, a challenge to Weber’s characterisation of the predicament of post-Enlightenment societies through a phrase he had borrowed from Schiller, ‘the disenchantment of the world’. By ‘disenchantment’ Weber had in mind the distancing from the immediate experience of nature – and, indeed, from the experience of the sacred in nature that had predominated in the medieval mind – brought about by the emergence of the modern scientific viewpoint, and the increasing rationalisation and bureaucratisation of society enabled by the technological and economic advances of the age, which together created a sense of alienation of the individual from both the natural environment and the social other. The disenchantment of the world is the spiritual precursor of the iron cage of bureaucratic rationalisation.

The idea of re-enchantment fulfils the need in a thoroughly secularised age for a sense of the transcendent in human life. That could be transcendence in the religion of our own culture, in a new religious, philosophical or political movement, in great art, literature and music, in the experience and contemplation of nature, in creative pursuit, in surpassing human achievement in sport and adventure, or in love. Seeking transcendence of our ego, our experience of the self, is not only an expression of our freedom and individuality, but also of our desire, as individuals, to belong to the human community. Nietzsche’s concept of the Übermensch therefore finds a more benevolent interpretation in what I call transcendent individualism, a philosophy of the self that is at the heart of resistance to the iron cage.

To speak of transcendent individualism as benevolent does not, though, mask its threat to the forces of bureaucratic rationalisation. Modern capitalist society requires us to be good workers and consumers, whereas socialism requires us to be good citizens of the state. Of the two prospects, given the choice, people have on the whole chosen the former, and almost universally so after having experienced the latter. But the state in either case has no intrinsic interest in us as individuals, only as functional parts of its operational whole; it defends us against enemies, feeds us, educates us, keeps us in reasonable health, and perhaps employs us, because that is the requirement of its own survival – indeed, without doing those things we would call it a failed state. Paradoxically, then, though the state is, in the end, just individuals, as a deontological entity it abstracts away the individuality of the individual and, if it becomes too powerful, crushes the natural state of free thought, free expression, free action and free association that underlies authentic social belonging.

Transcendent individualism, by resisting the encroachment of the overgrown state into ever more areas of our lives, is the guarantor of the continuing vitality of the society of which the state is an important part. It addresses philosophically an issue which has been neglected in recent debates on democracy, the importance of individualism as the foundation of democratic societies, without reducing the individual to the consumer that capitalism requires. It does not shy away, either, from the notion of democracy as a messy, conflict-ridden and sometimes revolutionary force. I do not foresee a reduction in conflict in democratic society in the future, as there will inevitably be clashes of values; but this is the essence of a form of society that builds itself on the value of the individual, one that must be eternally vigilant against collectivist tendencies and the stultifying oppression of bureaucratic rationalisation.


  1. I have referred to European thought and European culture rather than the more general Western thought and culture, firstly because this is more representative of the cultural milieu in which Nietzsche moved and wrote, but also because, although there are continuities with Western thought and culture more generally, some of the criticisms discussed here, e.g. of the character of Christianity, do not necessarily apply outside Europe.
  2. The modern-day Parsees of India, a small but influential community, are the last remnants of the Zoroastrian religion, which was once widespread throughout central Asia. Its influence is even apparent in Jewish and early Christian theology.
  3. Nietzsche’s relationship to subsequent developments is disputed and paradoxical, as it seems he is held responsible for precipitating the things he warned against. He likened Christianity to a slave mentality, making a virtue of weakness. Today postmodernism – which does have an authentic Nietzschean heritage – underpins much of social justice rhetoric and activism, yet reproduces this mentality. Similarly, while he warned against nihilism, he is considered by some a nihilist philosopher.
  4. It is known definitively that Nietzsche’s links to Nazism arose through the posthumous emendation of his archive by his sister Elisabeth Förster-Nietzsche, who was married to a believer in Aryan supremacy and was later herself a National Socialist sympathiser. Through the bowdlerised works, Nietzsche came to the attention of Nazi theorists and leaders.


Selected Bibliography

Peter Baehr (2001). The “Iron Cage” and the “Shell as Hard as Steel”: Parsons, Weber, and the Stahlhartes Gehäuse Metaphor in the Protestant Ethic and the Spirit of Capitalism. History and Theory, Volume 40, Issue 2 (May 2001), pp. 153–169.

Ernst Bertram (2009[1918]). Nietzsche: Attempt at a New Mythology [Translated by Robert E. Norton]. University of Illinois Press.

Morris Berman (1981). The Reenchantment of the World. Ithaca, NY: Cornell University Press.

Simon Denyer (22 October 2016). China wants to give all of its citizens a score – and their rating could affect every area of their lives. The Independent (online): http://www.independent.co.uk/news/world/asia/china-surveillance-big-data-score-censorship-a7375221.html

Graeme Garrard (2008). Nietzsche for and against the Enlightenment. The Review of Politics, Vol. 70, No. 4 (Fall, 2008), pp. 595-608

Richard Jenkins (2000). Disenchantment, Enchantment and Re-Enchantment: Max Weber at the Millennium. [MWS 1 (2000) 11-32]. http://maxweberstudies.org/kcfinder/upload/files/MWSJournal/1.1pdfs/1.1%2011-32.pdf

Friedrich Nietzsche (2005). Thus Spoke Zarathustra: a book for everybody and nobody (translated by Graham Parkes). Oxford: Oxford University Press.

Legality and Morality: Can Both be Served (and Justice Preserved)?

By Colin Turfus

‘The rules of morality are not the conclusions of our reason’ (David Hume)

It is often suggested that obedience to the law is a virtue and, by implication, that respect for the law is a requirement of morality. But is this necessarily the case? Although it might at first appear obvious, I would suggest that on closer inspection the issue turns out not to be so at all. In the inherited concept of Common Law in the UK, what is required or enforced in law corresponds to established practice in society. So there is a natural coincidence between the requirements of legality and those of morality. No problem there.

However, many laws on the statute book were not a consequence of the way ordinary people chose to live their lives or of publicly accepted standards they were expected by their peers to live up to, but rather represented an imposition by the powerful upon the weak. Examples of this were the iniquitous Corn Laws, the permitting of profiteering through the trading of slaves and the denial of full legal personality to women. It took vigorous campaigning over many years for the law to be changed in these areas. Few if any these days would suggest that the people who challenged these laws and sought to have them struck down or modified were not acting virtuously and in accordance with the requirements of morality (although aspects of the former have been reintroduced in the guise of EU agricultural policy and we are at risk of backsliding on the latter as the pressure grows to accept Sharia Law principles in the UK). Of course, that did not prevent those who did campaign against these injustices being characterised by many of their peers as agitators and subversives.

Fast forward to the present and we have what many would see as the 21st-century equivalent: the Equality Acts 2006 and 2010, which enshrine rules preventing discrimination in a wide range of areas on the basis of a considerable number of characteristics such as race, age, belief and sex. Ostensibly this can be viewed as a continuation of the same process of eliminating injustice from society and giving greater rights and protection to the oppressed. But does this characterisation stand up to scrutiny?

In terms of the intent it is hard to argue against such laws, whether from a moral or other perspective. But there is one clear difference here from the cases I mentioned above from the 19th and early 20th centuries: it was very clear in those earlier cases which injustices were being corrected in what situations, whether that be the principle that no human being can be the property of another or that the right to inherit wealth or to vote should not be denied one on the basis of being female. The more problematic aspect of the Equality Acts is hinted at in their name: that intrinsic to the legislation is the idea that everyone is “equal” and should be treated as such in society.

Whereas it is a rather binary matter whether women have voting rights or not, it is not so clear how a society can by legislation be transformed from one where people are not equal (and if they were, what need would there be for legislation?) to one where they are. In the first case, the act of discrimination or denial of rights is clearly defined and its violation easily identifiable: specifically, if or when a woman is prevented from casting a vote at a public election. The unequivocal success of the older acts is evidenced in the number of people who have been prosecuted for violations in recent years (none) and, in the case of slavery, in the fact that no significant new legislation was considered necessary until the enactment of the Modern Slavery Act in 2015, nearly two hundred years later.

So, we might ask, does society today embody “equality” as envisaged by the acts? Few would suggest that it does. But, if it does not, who should be prosecuted and/or what remedial action should be taken? Here we begin to see the problematic nature of the idea of legislating for “equality”. For one, before the ink was even dry on the 2006 act, new protected characteristics were being added. This process was further extended by the 2010 act, followed by an amending act in 2011 and another in March of this year (2017), which among other things imposed a duty on public authorities to publish data showing compliance with all provisions, and indeed of the actions they are taking to enhance compliance. This has given rise to controversy over the NHS’s recent edict that doctors should henceforth interrogate all patients about their sexual preferences and record the responses in their medical history. It is argued that this is intrusive and an infringement of rights of privacy. But the NHS administration, in acting thus, is arguably only seeking to comply with the duties imposed by the Equality Acts. Are we as a consequence of all this nearer to an agreed state of equality, or is the targeted end-state receding ever further into the distance? Who can say (especially since the NHS is yet to prepare and publish its data; and good luck to the people whose job it is to interpret it!)?

Then we have to start looking at the number of court cases which have been and continue to be engendered by such legislation, many of which have gone all the way up to the European Court of Human Rights, often visiting the front pages of the tabloids several times along the way. Numerous alleged violations have been successfully prosecuted. But equally, in many other cases individuals have lost jobs, important privileges and personal reputations without any court case occurring, often to protect the company employing them after allegations have been made, and frequently on the basis of actions taken or comments made entirely outside of any work context. Such may or may not have been the intention of those drafting the legislation, but it has been the result.

The challenging question we then have to ask ourselves, in relation not only to Equality law but to all law, is: does the law as it stands conform to our moral perspective? Should we celebrate each time someone falls foul of its provisions and believe that through this our society has become just a bit more equal, and therefore better? Or is there not a need for us to stand back and scrutinise the legislation, whatever its intent, for the outcomes that flow from it, and make an independent judgement based not only on the intent but also on the effect? And is there not a case also for challenging whether the intent is even coherently enough defined and/or sufficiently attainable by the proposed legislative means to merit our moral consent in the first place?

But to do this requires an independent moral perspective; and clearly that cannot happen if we conflate legality with morality. Ultimately the latter must be defined by what people – society – believe to be right and wrong. There must be flexibility here as, self-evidently, not everyone has the same perspective. But changing the law to define something as illegal does not make it wrong in people’s minds. And while it may force people to behave as if it were, there is no guarantee that people’s perspective will change over time.

The difference, I believe, with the big issues from the past which I discussed previously is that there were independent compelling arguments which people in their conscience found difficult to resist (even though they found themselves harmed economically) and which eventually won the day. The problem with legislation which is not enforcing a binary distinction but rather initiating a process towards an (often ill-defined) end state, is that it is much more difficult, impossible even, to adduce compelling moral arguments in its support.

Also, at the heart of politics has always been a tension between equality (emphasised on the left) and freedom (emphasised on the right), which reflects a difference in outlook which is ultimately personal. To favour one over the other in legislation is to politicise the moral realm and potentially to invade the sacred space of individual conscience, which is of course itself protected as a human right.

Colin Turfus has a PhD in applied mathematics and works in risk management. He is co-founder of the website Societal Values.

National sovereignty considered as a rule-based game


The Cambridge dictionary defines sovereignty as “The power of a country to govern itself”. As opposed to what? The power of a country not to govern itself? Defined in this way, the idea of sovereignty is a tautology; power, nationhood and government are effectively a closed loop. This circularity is the conundrum that lies at the heart of every movement for secession, call for regional autonomy or declaration of independence: that is, the authority to declare sovereignty is predicated on the prior existence of that sovereignty. That is also why secession tends to be such a fraught issue, because it is played out as a struggle over power as a zero-sum game: if I gain, you lose; if you gain, I lose. In the worst-case scenarios, winning and losing are calibrated by acts of violence by a perpetrator on a victim, and the final tally is reckoned in standing armies, cowed populations and territory held.

The breakup of Yugoslavia was the exemplar of this worst-case scenario occurring in Europe within recent memory, though, further afield, the creation of South Sudan and the suppression of the Tamil independence movement were events notable for their savagery. Generally, democracies manage secessionist tendencies rather better, as negotiation and concession are built into their modus operandi. Yugoslavia, held together by the iron grip of Tito in the orbit of the Soviet Union, had never, as an independent nation, developed the democratic traditions needed to cope with the centrifugal forces operating on its constituent parts post-Tito. By comparison, Czechoslovakia, after a few brief years as a democratic entity, could negotiate a peaceful divorce (assisted, to be sure, by statesmen of real stature). For all the wind and thunder churned out by the mass media, the referendum on Scottish independence and even the Brexit negotiations have been carried out peacefully and with a sense of decorum.

Given this, the present standoff between the Spanish government and the Catalan authorities is unique in recent European history and extraordinary for a modern democratic nation. Having no vested interest in the struggle being played out, I am naturally inclined to side with the argument that in any sovereign democratic state there are laws governing the distribution of powers in society, which apply also to the powers ceded to regional authorities; within those laws negotiations can take place on the balance of that distribution. At a level more removed, there is a case for the negotiability of the laws themselves, if they are considered unjust, but this is a case for extreme constitutional discretion. Democracy is never just about voting or “the will of the people” – the war cry of demagogues – it is suffrage under the rule of law. Since the supreme court of Spain has effectively denied the legality of the Catalan referendum, it is within the national government’s constitutional rights to suspend the region’s autonomy and dismiss its elected government (although it seems only the latter is being proposed).

We can usefully lean upon Kant’s categorical imperative in this and other questions of secession: “Act only in accordance with that maxim through which you can at the same time will that it become a universal law.” If Mr Puigdemont were to achieve his aim of an independent Catalonia, would he countenance the further sundering of his independent state, given that the majority of Catalonia’s population did not vote for independence (the turnout was 43%) as the referendum was boycotted by pro-union parties? The question is purely rhetorical, of course, as secessionists are openly on a quest for sovereignty, i.e. power, nationhood and state governance, not for the enactment of abstract philosophical principles.

Yet the dissembling and hypocrisy of secessionists is only one side in the game of sovereignty; the other lies in the political reaction to, and decision-making about, regional claims to power, and the precedents thereby established. In a democracy, unlike an absolute monarchy or the socialist and fascist republics in which power is centralised and total, the state has a duty to make judgements about the just and wise distribution of power. Thus, in the UK, calls for a Cornish state, the mutterings of the Wessex Independence Party* and the proclamations of the self-declared Kingdom of Hay-on-Wye can safely be ignored. Welsh and Scottish regional autonomy, however, cannot. Even though these identities are largely literary creations, they exert a real influence on the imagination of populations in regions that are geographically and historically distinct. There is a fine judgement about when and to what extent to concede authority when demands are made. Too little risks resentment; too much fuels the ambitions of the unscrupulous and sets a precedent for those with less justification. In the case of Catalonia, the intransigence of the Rajoy government fails to accommodate the need in any dynamic society to be responsive to the genuine aspirations of a significant segment of its population. Politics is less like chess, the rules of which have been codified and frozen for centuries, and more like music or architecture, in which the most interesting things happen in moving beyond the accepted conventions and structures.

The broad thrust of history seems to have been the absorption of lesser kingdoms and fiefdoms into sovereign nations. Since in most cases the boundaries of nation states are accidents of history – the shadow of battlefronts or the ruled lines of imperial surveyors – it seems a dogmatic article of faith to claim that the sovereign nation state is the immutable and ultimate legitimate player in international politics. A greater law of history, if there can be such a thing, is the chaotic nature of change and the unknowability of the future. However, as rational beings we can somewhat mitigate the turbulence of change through dialogue and negotiation. All people, as free individuals and as individuals identified by their various forms of belonging, desire empowerment and the ownership it gives them over their lives, their community and their environment, something that the great centres of power ultimately need to recognise.


*Wessex was a medieval kingdom, resurrected as a fictional county in Thomas Hardy’s novels; I came up with the name for a projected satirical article, then checked it – the Party actually exists.


The Korean Dilemma

The Korean War was never formally concluded, and its wound has been festering for over 60 years; quietly sidelined by matters considered more geopolitically important, the two Koreas now threaten to be ground zero of a nuclear Armageddon. The hateful totalitarian nightmare of North Korea has nothing worthwhile to offer the community of nations, and its continuing existence in the modern world is an affront to all right-thinking people. Politics, however, is the art of the possible. Given the utter disregard the regime has for the suffering of its own people, there is no pressure that can be brought to bear that might divert it from its goal of acquiring full nuclear capability. The best policy under the circumstances is to let it acquire that capability.

The reality of nuclear weapons and the doctrine of mutually assured destruction has been one of the decisive factors in maintaining the balance of power globally and regionally since the end of the Second World War. North Korea, reflecting the Kim dynasty running it, manifests an enormous inferiority-superiority complex, born of its disastrous economy and continual humanitarian crisis, while at the same time being in the grip of a maniacal self-belief and belligerence towards its ideological enemies, which include just about every nation in the world. Acquiring nuclear capability is the only strategy the regime has to bolster its self-esteem. Despite the impression portrayed in the Western media, it is unlikely that Kim Jong Un is actually mad, so the likelihood of him launching a first-strike nuclear warhead at the United States, South Korea or Japan is remote. He understands that it would be all over in that event. While I have some sympathy for Donald Trump’s response, which has certainly not been less effective than Obama’s aloof indifference, or ‘strategic patience’, I think that repeated warnings of the dire consequences of the North’s repeated violation of every norm of international relations devalue a currency that is already worthless, and make America look impotent. Neither is diplomacy the answer. Every concession made to the Kim dynasty in return for compliance with nuclear non-proliferation has been a veil behind which it accelerated its nuclear programme. Sanctions, too, clearly have not worked and do not work; moreover, they rely on the wild card of Chinese compliance, which is unlikely to be forthcoming, given China’s own strategic interests in the region and its fear of further destabilising the North.

What would a nuclear-capable North Korea mean for the international community? There are both potential benefits and potential dangers. One possible benefit is that the North might stabilise geopolitically, having achieved a major strategic goal; its new-found self-esteem and confidence might induce it to be less belligerent to its neighbours, as it could boast of being amongst the minority of nuclear powers in the world. Moreover, it could have the confidence that its political system, though the most regressive amongst the world’s established states, would be safe from external threat. This could – even though it is unlikely – initiate a process of economic reform and a lightening of the burden on its people, following the course that China has taken since the death of Mao. The opposite could happen, of course, such is the unpredictability of the regime. It could use its nuclear threat to blackmail other countries in the region. It could become an exporter of nuclear technology to terrorist organisations such as Al Qaida and IS. Therefore, acceptance of a nuclear North Korea should be backed by siting nuclear weapons in every surrounding country which is presently non-nuclear – South Korea and Japan (China and Russia would object, but China particularly would be threatened by a nuclear Pyongyang) – and by imposing a blockade on all its shipping and air freight and a total travel embargo on all North Koreans. The nuclear balance should be permanent, the other measures dependent on the North meeting its obligations to coexist peacefully with its neighbours.

America, at the moment, is attempting to control the situation. This is impossible, given that there are no constraints that might plausibly be effective. It is possible, though, to manage it. The management of conflict, rather than the idealistic and impractical goal of eliminating it, is the only form of peace that is likely to be realised on the Korean peninsula in the foreseeable future.

Universal Basic Income and the promises and perils of a leisured economy

‘From each according to his abilities, to each according to his needs’ (Karl Marx, Critique of the Gotha Program)


Until recently few people would have heard of Universal Basic Income (UBI), despite the idea having been around for more than 200 years.1 Although it has gone under various names and in various proposed formats (such as basic endowment and negative taxation2), these are all basically the same proposal: that every single person should receive a fixed stipend from the state, regardless of wealth or need, and then be able to choose whether to exist on this or to top it up by working. Its attractiveness lies in its simplicity – cutting through the complications of means testing – and in an ethical appeal to the equal worthiness of human life, an appeal that has garnered support not just from the left, for whom it is obvious, but across the political spectrum. There is, though, a pragmatic reason for the growing interest in UBI: the effects that labour outsourcing and automation are having on the continuing existence of many jobs in developed economies, and the political fallout that may arise as a result (some will say it has already begun), are beginning to be taken seriously.

Like all grand ideas, especially those that unite opinion across a wide spectrum, it also attracts voices raising pertinent objections from various perspectives. One is around cost and effectiveness. Some maintain that it is affordable3 and that the only thing lacking is the political will to implement it; others that, if it were set at a rate that would be effective, it would be unaffordable4 and, moreover, that if it were set at a rate lower than this boundary of effectiveness, poorer households would be worse off than they are under the existing benefits system. There is also resistance to the idea of well-off individuals, in no need of financial support, receiving it as of right as a bonus, despite this being the point – that it is given universally, regardless of circumstances. Another area of contention is around socio-ethical and psychological issues. Advocates5 of UBI say that it will free people from anxiety in the face of job loss and from being chained to a waged job, allowing them to be creative: to dedicate themselves to art or music, say, or to engage in voluntary work, while others may choose to start their own businesses; thus, it has the potential to reinvigorate culture and even the economy. Opponents6, though, state that it will create a disincentive to work, even undermining the work ethic of entire nations.

I am not well placed to judge the economic merits of this idea. Those who are interested can follow up the sources cited in the endnotes. Nor am I qualified to express more than an opinion on the possible psycho-social implications; there are around a dozen medium-size pilot schemes7 of UBI taking place worldwide at present, and their outcomes will have some bearing on whether individual countries decide to implement it on a larger scale. However, such schemes will have to overcome political obstacles, even if they are roundly endorsed at the experimental level. My purpose in this essay is to evaluate critically the viability of UBI as a social policy and to advance a philosophical argument that work constitutes one important aspect of our human value, and that a work-free, or ‘leisured’, economy risks diminishing that value, to the extent that we should be wary of the radical restructuring of human society entailed by UBI.

The periodic recurrence of the idea

The fact that UBI has been around for a long time but has never been implemented suggests that it is a periodic response to economic crises that tends to fade once the particular crisis has passed. In this case I am not referring just to economic crises like the financial crash of 2008, which has many precursors, but to the economic threat posed to individuals’ means of support by new technologies. Today it is the advent of Artificial Intelligence and robotics through which many of the jobs we take for granted may disappear, but this threat has been a recurrent feature of the economic landscape over the past two centuries. The spread of new industrial centres in England in the eighteenth century, driven by steam power, created an enormous dislocation of the population from rural to urban areas. Much the same happened in America in the 1920s and 1930s and is happening today in China. The eventual outcome, in every case, has been increased prosperity (granted that this does not take into account the tally of human misery in the process, or the sacrifice of those who did not live to see the benefits). Moreover, work has never disappeared; only specific types of work have become extinct, to be replaced by jobs required by a new economic infrastructure.

Nevertheless, as in the story of the boy who cried wolf, past false alarms do not mean that the threat of the extinction of work per se is not a real one this time, or the next. Predictions that the era of free market capitalism, and the social relations it entailed, is over are not to be lightly dismissed, if indeed the very idea of work on which capitalism has been predicated is headed for oblivion. While it can be argued that the free market has successfully adapted to a largely service, finance and consumer economy, it could be countered that moving the agricultural and industrial jobs – that is, the jobs that ensure that we are fed and have things to buy and sell – offshore to more economically advantageous environments in the developing world, where labour costs are much lower, has condemned us to living in a virtual economy, running on extended credit. Certainly, the size of the deficits in post-industrial nations suggests that this might be the case. If so, when there are no more developing countries to which to relocate industries or outsource our agriculture, that might precipitate the end of capitalism.8

This is, of course, pure speculation. I find more persuasive the argument that the free market is a spontaneously arising feature of every human society, from the simplest to the most complex, because it is human nature to trade, both for necessities and for luxuries. And trade both necessitates, as a precondition, and involves, as a consequence, work of some form or other. That is not to say that the free market principle has not been expropriated and exploited by powerful institutions, of which the capitalist (and socialist) models of economic organisation are the most recent in our historical experience. However, I believe human ingenuity, fostered through the free market, will continue to deliver technological innovation and economic advancement, of which work will always be an integral part, though the nature of work will continue to evolve, and, probably and hopefully, the economic system will evolve in the direction of greater justice and individual empowerment.

Therefore, while we cannot rule out the possibility of UBI being adopted in advanced economies, the historical precedent, and human nature itself, suggest that we are more likely to see an evolution in the type of work and the types of jobs available than the disappearance of work and the emergence of a purely leisured economy, though that does not rule out the emergence of a different economic system and different economic relations. However, I doubt whether UBI will play a significant role in this future; in some respects, I believe it would be a dangerous and retrograde step, for reasons that I will set out below.

The nature of human value

One of the plausible moral arguments for UBI is that it appeals to our sense of fairness, that everyone should be treated equally, based on our common humanity. Though this intersects with a Rawlsian interpretation of justice ‘behind the veil’, this is not uniquely a sentiment of the secular left, but an idea deep in the anthropology of the monotheistic religions, of man as a created being possessing intrinsic value, and thus due a portion of the earth’s bounty. While I think the idea of intrinsic value is important for human societies, I do not think this is the way we think under normal circumstances; it is, rather, something we take refuge in in extremis, when the normal basis on which moral judgements are made and societies are ordered has broken down, as in natural disaster, in encountering the victim of violence or other criminal acts, or in war.

Intrinsic value makes sense in a theological context, for belief in a divine creator and sustainer, arbiter and dispenser of justice is the ultimate bulwark against chaos. The problem with intrinsic value as a secular concept is that it is essentially a static concept – that, of course, being its virtue and utility. But under normal circumstances we would want to know why something is valuable, not just that it is. This calls for a more dynamic concept of value, one in which the value of something is determined in a relationship between a valued object and a valuing subject, in which both the subject and object have conditions attached to them. From this perspective, intrinsic value can be seen to be a terminal, peripheral or extreme type of value, which sees value as only inherent in the object completely independent of the subject.

When the object under consideration is a member of human society, specifically a player in the economic order of society, this means that the economic value (which I am not claiming, by the way, is the only measure of human worth, just the relevant one in this context) is also established in a relationship, between what society, represented by an employer (of whatever type), needs and what an employee has to offer. From this point of view, no form of government support can ever, or should ever, be more than a transient solution to real financial need, not a permanent state instituted to fix a hypothetical problem. This is what existing systems of income support already accomplish in the real world, albeit imperfectly.

With all deductively derived arguments, there is a danger of reductio ad absurdum. If we were to state that there are no circumstances under which people are valued, and thereby economically supported, outside of their economic contribution, this would be the logic of the workhouse or the concentration camp. In developed economies we do not let people starve, and we try to ensure that, as far as possible, people have the means to attain the basic requirements of life, such as health, education and employment. What concerns me is that this could merely be a contingent state of affairs resting on tenuous philosophical foundations (such as intrinsic value, which could be eliminated by the progressive secularisation of society), rather than having a sound economic rationale supported by a more plausible theory of value.

The most obvious resolution of this issue is that developed economies invest in potential economic value as well as exploit actual, realised economic value, investing – through policy, education, the health services, professional training and guidance and, if necessary, income support – in those who are not presently economically viable. This does not undermine the premise that economic value is established in an economic relationship. Both historically and at present, bringing formerly marginalised groups, such as women and the disabled, into the workforce correlates strongly with economic development. However, establishing that there is a strong case for investing in potential, where the norm is to be economically active, is not the same as the case being made for UBI, in which the assumed norm is to be economically inactive and economic activity is a choice rather than a necessity. In some sense, this seems to establish that humans have no economic value and are therefore superfluous to the economic system, which would be a dangerous idea in the hands of a totalitarian state.

The presumption of freedom and development

I mentioned in the introduction that one of the arguments raised against UBI is that it would discourage initiative. This is yet to be proved; projections from the effects of existing benefits could be misleading, since UBI would be given to people across a range of very different social and economic circumstances, rather than targeted at specific groups in society – such as the unemployed and the disabled – which contain a proportionately larger percentage of individuals facing severe obstacles to employment than the general population. My criticism of UBI is not that it necessarily discourages individual initiative, but that its wholesale implementation in a country risks arresting social, political and economic development. I think one of the reasons that UBI has supporters on both the left and the right is that sizeable segments of both political persuasions believe in static visions of society: the left in a socialist utopia that will never exist and the right in a golden age that never existed. I believe they see in UBI at least an opportunity, and perhaps a tool, to realise their vision.

Both static and dynamic visions of society grapple with human nature, and any nation seeking to embody these visions would have to meet the requirements and desires of its population and utilise its talents. The problem is that how people define their needs is relative to broader social expectations, and needs tend to escalate once a certain basic level has been met (take health care in the NHS as an example), until they become indistinguishable from desires. It could be argued that advanced automation will remove the cap on our wants (being infinitely adjustable upwards) and the requirement for any productive skills, and that our lives can be geared towards leisure – with production serving only to satisfy aesthetic desires rather than to fulfil basic needs. It is interesting in this regard that Elon Musk and Mark Zuckerberg, two of the most powerful tech innovators, are public supporters of UBI.9 They have been at the forefront of the technological revolution that is changing the nature of work and making many of the jobs we do today redundant.

The examples of Musk and Zuckerberg and their ilk illustrate one of the paradoxes of UBI’s advocates and supporters. These innovators have taken advantage of the opportunities of their socio-economic environment – the scientific, technological, educational and cultural vectors of their time – and fashioned new possibilities for the rest of us to interact socially, intellectually, commercially and romantically (and, yes, erotically); yet in supporting UBI they implicitly assume that the next generation is to be condemned to a life of consumption and leisure, rather than working to create its own technological miracles. Although I would not want to dismiss genuinely held humanitarian fears, part of the resolution of the paradox may be that the Silicon Valley luminaries see their role as quasi-messianic – as ushering in the end of work and leading us into the land of milk and honey, represented by the leisured economy of UBI – rather than as particularly fortunate exemplars of the principle that work can be empowering and transformative. They have perhaps unintentionally fallen into an eschatological vision of the future of human society as static, as terminal, whereas all experience suggests that it is dynamic and endlessly evolving. UBI could potentially lock us into a static form of society in which there is no development, or – even worse – one in which development is entirely out of our hands and in the hands instead of autonomous technology.

These diverging visions, of static and dynamic societies, engage with human nature in diametrically opposite ways: the static by attempting to limit the needs and suppress the desires of their populations, the dynamic by being shaped by them. One of the reasons for caution regarding UBI is that it can only be administered by the state. Some, admittedly, see the state having a lighter touch in administering UBI than is the case for means-tested benefits, and support it for that reason. I do not think we should be so blasé. Until now, our relationship to the state in democracies has been fundamentally one of tolerating its power over our lives for the benefits it brings; in theory at least, however, we remain free agents with the power to hold it to account and limit its authority in certain regards. The basis of that freedom in law is property and the means to support oneself. If these were to become the sole monopoly of the state, we would be relinquishing the last vestiges of legal autonomy; essentially, every person would become a vassal of the state.

The verdict

While the future is open, and the challenges posed by AI, automation and the limits of capitalism are not to be underestimated, I think that UBI is unlikely to become the prevalent economic mode; nor do I think we will settle on a sustainable model for its widespread, let alone ‘universal’, implementation. The experiments are ongoing, and we should take heed of the economic, social and psychological outcomes of those trials. Until now, though, no economic case has been established, as all the existing programmes have required an enormous investment with no predicted economic output – the financial equivalent of nuclear fusion. Moreover, the purported technology, which is supposed to put us out of work and, at the same time, generate the wealth to support our lives, has not yet arrived, and I suspect that, also rather like nuclear fusion, it will prove to be permanently just around the next corner.

Apart from the practicalities, I think there are philosophical and ethical objections to UBI. In every society, from the most primitive to the present, humans have been economically active; indeed, human nature defines what the economy means. I do not believe that the financial transaction can be separated from human economic activity – in other words, that there exists an ‘economy’ apart from human economic activity. The financial transaction is the expression of an economic value which is determined in a relationship of trust between economic agents, one offering goods, skills and services and the other paying for them. I would go as far as to say that, were this separation to be possible, it would negate the economic value of human lives and diminish human value overall, and it should therefore be resisted at all costs.

The discussion around UBI raises legitimate issues around the quality of life, as many people – particularly young people – work in poorly paid jobs with no security and few prospects of advancement. Even for those in secure work, there is the prospect of a lifetime of “wage slavery”. There are no easy answers; pointing out that workers today live and work under better conditions than those of previous generations does little to allay the sense for many that life is unfair, because we invariably compare ourselves with others whose circumstances we consider more favourable. One of the more positive outcomes of the 2008 global economic downturn is the number of people who have started their own businesses. While this may not presage the next wave of innovation (which is more likely to arise initially from government funding of research in universities and in industries such as defence), it demonstrates a willingness of people to be more flexible, perhaps to accept a simpler life, and a desire to be more in control of their economic fortunes.

I expect that, rather than an abrupt collapse of capitalism, there will be a transition to a new form of economy marked by advanced technology, superabundant information and a free market in which the autonomous economic agent will be empowered. Work does not make us free, but it shapes our freedom and is the basis for almost everything else we do that is valuable. In this future, I consider that UBI will remain a recurring but peripheral phenomenon.



  1. Thomas Paine (1737–1809), one of the founding fathers of the United States and the author of The Rights of Man (1791), was an advocate of a ‘basic endowment’.
  2. The Republican president Richard Nixon proposed a UBI scheme, called ‘negative taxation’, in the 1970s. However, it was rejected by Democrats on the basis that the proposed rate was set too low to offer sufficient support.
  3. Affordable
  4. Unaffordable
  5. Supporters
  6. Opponents
  7. Ashifa Kassam (24 April 2017), ‘Ontario plans to launch universal basic income trial run this summer’, The Guardian (online): https://www.theguardian.com/world/2017/apr/24/canada-basic-income-trial-ontario-summer
  8. Paul Mason (2015), PostCapitalism: A Guide to Our Future. London: Allen Lane.
  9. Jathan Sadowski (22 June 2016), ‘Why Silicon Valley is embracing universal basic income’, The Guardian (online): https://www.theguardian.com/technology/2016/jun/22/silicon-valley-universal-basic-income-y-combinator

Values and Identity

By Colin Turfus

We hear much about “values” and “identity” in discussions in the media these days. Often the debate about values is specifically around so-called “British values”; and the discussion about identity is often in the context of what is referred to as “identity politics.” The discourse on both these topics in my experience tends to be tedious and unilluminating, so I would like to try to consider them from a fresh perspective, in particular to look at their relationship and consider what light the one can shine on the other.

What Do I Mean by Values?

In relation to values, it is often assumed that, to be worthy of the name, they need to be universal or universalisable, i.e. worthy or capable of being upheld by all people, conceptually at all times (although this last idea is rather difficult to square with the commonly held perception that values somehow evolve as the human race becomes more “enlightened”). One of the consequences of this approach is that the concept of “British values” becomes almost an oxymoron, and we tend only to list amongst them things like “fairness,” “democracy” and “respect for the rule of law” which we would advocate that all people in all nations should adopt on the basis of their self-evident merit, arguing to that end along the lines of Kant’s categorical imperative.

Personally, I consider Kant’s philosophy to be unduly influential in our public debate, not least in his insistence on universalisability as a means of determining rules for what constitutes the good. While such arguments are helpful if we wish to compel others to adopt a mandated value perspective, or at least to behave and speak in public as if they did, much of what we really mean by “values” is not really universal at all; indeed it is often quite idiosyncratic and personal. Not only that, I would even propose that idiosyncratic value perspectives are crucial in bringing people together in social groupings, enabling the members thereof to see themselves as distinct from members of other groupings on the basis not only of what Aristotle would refer to as the telos, or intrinsic purpose, of the group but also of the values to which this telos gives rise.

What I am arguing, therefore, is that we should think of values as inhabiting intersecting spheres, mirroring the fact that as multifaceted individuals we ourselves inhabit separate spheres in our lives, such as work, family, sports clubs, choirs, and discussion groups, each with its own telos and its own values, some of which may be universal and others of which may be highly exclusive. What I want to emphasise is that those which are universal are not necessarily higher or more important than others. Indeed they probably only offer a lowest common denominator. Who would wish it to be said as their epitaph only that they always did what was required of them? Or that they never strayed even once into political incorrectness? Surely a life replete with value has to go beyond the mundane and be infused with some idiosyncratic personal passion?

Problems with Conflicting Values

Another common misunderstanding—and this brings me on to the question of identity—is the opposite one: that when we feel the values of some particular group are antithetical to our own, there is some onus on us to show respect for the values of that group and indeed for whatever is the object of their valuing. To my mind that is not just nonsense but dangerous nonsense. The group and their values are different precisely because they are idiosyncratic, not universal. To demand that we respect the values and the valuing of another group is to suggest that we should adopt in some measure their idiosyncrasies. But in order to do so, we must at the same time relinquish hold on some of those associated with our own group (and identity). Thus we are required to pay homage to the values of groups to which we do not belong, and so to sublimate our own values. Presumably the other group is expected to reciprocate. As can be seen, if we were to take this process to its logical conclusion, our very identity, shaped as this is by the groups we belong to and the aims and activities we share with their members, will be undermined.

Much of such discussion about respect for other people’s values revolves around the idea of rights, which of course are inextricably linked with universalisability. Human rights law does indeed protect freedom of expression and freedom of conscience. Consequently there is an onus on each of us to respect the rights of individuals and groups to hold and give expression to their values, but not necessarily to respect the values (or objects of valuing) themselves. Of course, if the expression of “values” is antithetical to universal rights held by others, or worse illegal, even the right of expression is curtailed.

In summary, “values” may by their nature and the role they play in our social lives divide us as much as they unite us, and to believe or wish otherwise is not just mistaken, but potentially dangerous as it may result in drawing groups into unnecessary conflict and undermining their ethos and the seminal role they play in underpinning civil society. Protecting spheres of value requires a judicious measure of separation to be maintained between social groupings. There can be such a thing as too much “unity”.

Problems with Multiple Identities?

As I have suggested above, the idiosyncratic values we hold to are a reflection of the social groupings we belong to (or used to) and vice versa. These give rise in a natural way to our identity, which will naturally be multi-faceted. This state of affairs and the recognition of the ultimate incommensurability of diverse value perspectives has given rise in the modern era to the postmodernist narrative, or perhaps I should say multiplicity of narratives. (How this squares with the purportedly necessary Kantian condition of universalisability of values has never to my knowledge been elucidated.) In accordance with this perspective, no individual or group has the right to prioritise their perspective or values over any other.

Well, that’s the theory. The reality in the UK, and indeed across Western society in general, is of course what has become known as the multi-cultural society, whereby non-indigenous communities are encouraged to hold fast to their separate identity and culture, express their grievances against the majority community and demand preferential access to resources; and are often permitted, encouraged even, to flout the law, as with so-called sanctuary cities in the US and the refugee camps at Sangatte. Thus we enter the realm of identity politics, in accordance with which the basis of this entitlement to assert one’s identity or culture is a sense of victimhood, or the experience of prejudice or microaggression at the hands of the “majority” community. We have veritable industries now established in our universities manufacturing theories and devising ever more ways to fan the flames and identify new injustices which had hitherto gone unnoticed. How this can be reconciled with the idea of a society of universal shared values is hard to fathom.

But it does not even stop there. Under the postmodernist agenda new minority communities are all the time being identified, for example through the agency of gender dysphoria which has gone within a matter of years from being a pathological psychological state to being the major battleground in the crusade to evict inherited/traditional values from their erstwhile home at the centre of society. What was previously referred to as the LGBT community has become a veritable alphabet soup where soon we will be running out of letters to represent all the rainbow of gender perspectives which the theory (or, absent that, the ideology) seeks to accommodate. And don’t get me started on the haves versus the have-nots debate (where interestingly it appears uniquely to be the minority who are deemed to be the oppressors!).

Under this narrative, the authenticity of the perspective expressed is deemed to arise not from any coherent philosophical perspective or historical narrative but from the grievances evinced, so that civilised discussion is barely possible; only capitulation remains, lest one be seen as manifestly part of the problem and labelled as embodying this or that phobia or -ism.

Where Does This Leave Us?

This whole business seems to me to be a misuse of the idea of identity. From the perspective I outlined above, this should be about shared values within a community or social grouping, not shared grievances and enmities. Interestingly I would see the new revanchist nationalism evident in the US and Europe (it was rarely if ever absent anywhere else in the world, so is seldom remarked upon other than in Europe or North America) as a reaction against the perceived injustice of precisely this privileging of minority over majority interests. Unfortunately the manifestation of this tends all too often to be again through the expression of shared grievances and enmities, which is not really leading towards a resolution.

It would appear there is urgent need to put positive values back at the centre of the concept of identity and indeed of our moral/societal/political discourse.