From Certainty to Normative Agreement: a thought experiment in furthering the project of modernity

One of the accusations thrown at postmodern theorists and activists, such as the purveyors of identity politics, is that they are advocates of relativism and deniers of facts. I am going to argue that this is actually their greatest virtue. They go downhill from there, as they seek to impose their own brand of conformism on the majority, but in dismantling the ‘grand narratives’ of the past few centuries they have performed an important service to the future. Only, I would go further and turn this into a permanent state of knowledge at the edge of chaos, in order that the presently halting project of modernity may continue.

The virtue of relativism is precisely its undermining of certainty, that vice of the fearful and vengeful to which so many throughout history have been sacrificed, and to which postmodernist theorists and activists, after their initial foray into emancipatory rhetoric, have themselves succumbed. How to avoid the trap of what the philosopher William Warren Bartley III called the “retreat to commitment”? I propose as a first step the dissolution of ‘facts’ as badges of prestige and authority to which everyone wishes to lay claim. Through a number of further steps, the final goal is the reconstitution of a range of broad but shifting and tentative agreements on a normative account of the world.

This first stage can be referred to as epistemic levelling, and it is likely to be the most contested but, as I will argue, the most necessary step to get beyond the current ideological impasse. This is the proposition that there are no ‘facts’ upon which we can build a rock-solid case for whatever position we wish to hold on any particular issue, but rather sets of beliefs about the disposition of the world. The concept of a fact is like a fossil in the strata of knowledge, the immutable remains of a once-living idea. If the history of thought has taught us anything, it is that yesterday’s firm and accepted truths are now viewed with incredulity. We can and should flip that around: how likely is it that what is considered either orthodoxy or ‘edgy’ today will be viewed with anything other than stupefaction in the future, except perhaps by historians and philosophers?

The relativity of truths is something that we have come to accept even in such a hard-edged area as science. The mechanistic worldview constructed throughout the period of modernity from the Renaissance to the Enlightenment, with the contributions of such geniuses as Galileo and Newton, and which fostered the belief in the nineteenth century that almost all that could be known was already known, began to collapse in the twentieth century, first with Einstein’s theories of relativity and then with quantum mechanics. The primacy of historical relativism in the sciences was consolidated in Kuhn’s concept of the scientific paradigm; but even in Popper’s conception of scientific method, the bold conjecture on what the flow of information reveals about reality is only tentative, as it awaits its inevitable falsification. The generally accepted view of the nature of science today, Lakatos’s concept of ‘research programmes’, essentially combines elements of both Kuhn and Popper.

We can generalise what we have learned from science to say that one’s perspective on the world – whether that be physical reality, political partisanship, views on morality, and so on – is a theoretical ordering of the information that is available in any context. This is essentially to say that we all operate on belief systems sustained by the information available to us, or at least the information we are willing to entertain. This view is largely supported by a recent development in the cognitive sciences known as ‘active inference’, which asserts that humans – as well as all other sentient beings – are continually modelling their environment and their place in it, algorithmically correcting model error through informational signals coming through their sensory organs as they act (although, given what I have argued above, you are not obliged to take this as evidence, only perspective). There is a twist to this, though, to which I will return later.
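The error-correcting loop just described can be caricatured in a few lines of code. This is a toy sketch, not the free-energy formalism of active inference proper: a single numerical belief is nudged toward each incoming sensory signal in proportion to the prediction error, and repeated corrections draw the model toward what the world keeps reporting.

```python
# Toy sketch of belief updating by prediction-error correction.
# All names and numbers here are illustrative assumptions.

def update_belief(belief, signal, learning_rate=0.2):
    """Move the belief a fraction of the way toward the incoming signal."""
    error = signal - belief          # prediction error
    return belief + learning_rate * error

belief = 0.0                          # the agent's initial model of the world
signals = [10.0] * 30                 # the world repeatedly reports "10"

for s in signals:
    belief = update_belief(belief, s)

# After thirty corrections the belief has converged close to the signal.
print(round(belief, 2))
```

The ‘twist’ mentioned above corresponds to filtering the `signals` list so that only confirmatory values ever reach the update step, in which case the loop converges on the prior rather than on the world.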

There is one possible objection to the proposition of epistemic levelling and the dissolution of facts that I can think of: what can be called second-order perspective. If I state that ‘X believes Y’, it can be argued, irrespective of the facticity or otherwise of Y, that “It is a fact that ‘X believes Y’”. If statements of the sort ‘X believes Y’ are classed as Z statements, then it can be argued that there are classes of statements, of which Z statements are an example, which are facts. Such logical formalism finds its apotheosis in mathematics and, to take the most elemental mathematical statement ‘1 + 1 = 2’ as an example, such statements are clearly facts. Or are they?

There are several potential refutations of this stance, all interrelated and all connected to the idea of decontextualisation. One is that such formalisms tend to yield either tautologies or paradoxes. Such self-evident statements are formally true but yield no interesting information about the real world that we could categorise as insightful or practically useful. Kurt Gödel’s incompleteness theorems demonstrated that no consistent formal system rich enough to express arithmetic can prove all the truths expressible within it, nor establish its own consistency from within. Ultimately, all mathematics rests on a species of belief known as axioms. And once contextualised to the world of experience, even 1 + 1 does not always equal two, for example in system formation or in the fusion of two things into one.
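The contextual character of ‘1 + 1 = 2’ can be made concrete by varying the algebraic setting in which the addition is read. The three contexts below are illustrative choices of my own, not ones the tradition fixes:

```python
# '1 + 1' evaluated under three different interpretive contexts.

# Ordinary integer arithmetic (the familiar axioms): 1 + 1 = 2.
ordinary = 1 + 1

# Arithmetic modulo 2 (e.g. flipping a switch twice): 1 + 1 = 0.
modular = (1 + 1) % 2

# Boolean 'addition' read as logical OR (two truths merge): 1 + 1 = 1.
boolean = 1 | 1

print(ordinary, modular, boolean)   # 2 0 1
```

The symbols are identical in each line; only the background context, the analogue of the axioms, determines which answer is ‘the fact’.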

The proposition of epistemic levelling is that all perspectives are at bottom individual, rooted in the sets of beliefs that an individual has about the world they inhabit, and that at this level of intentionality there is an implicit equal validity: no hierarchy of opinions, no preferential beliefs, no orthodoxy, only the way that each individual sees the world. This is an important precursor to the rest of the argument, and its implications essentially shape it.

The next step of reasoning is as follows. Assuming we are not in a Cartesian dream or in the Matrix, but live in a real world which we share with others, then if each person’s set of beliefs is individual and unique, one of three moral stances must be true: either my perspective on the world is uniquely right and everyone else’s is wrong, or someone else’s is right, or we are all right about some things and wrong about others. There is no a priori way to know which of these three options is correct, only the a posteriori testing of one’s beliefs through gathering information as we act in the world. Acting in such a manner, though, implies that the third option is the most likely and reasonable, for in doing so we accept the first option as implausible, and the second is realistically untestable. This being so, it suggests that we should have at least a grudging respect for others’ views of the world, tolerance of views different from our own, and an openness to dialogue with others, for this is one of the mechanisms by which we discover confirmatory or disconfirmatory evidence.

The social and political implication of this is that freedom of expression, including the expression of unpopular views, and the encouragement of dialogue should be fundamental principles in every social institution, as this allows for error correction and the growth of knowledge.

Merely dialoguing with others will not necessarily lead us to have the same ideas; it is probably better if it doesn’t. What it does do is generate greater respect for difference, expose similarities of human experience, refine our own thinking and make us more emotionally resilient. It also prepares us for the final stage of the process, which is the differentiation of beliefs through their consequences experienced as social agents. As this takes place at the individual level, and as we are most likely to have a model that mixes beliefs more closely aligned with reality with those less so, the consequence for individuals is likely to be humbling at times but beneficial in the long run.

How can this very private experience of model testing and error correction be reconciled with the reality of our social belonging? After all, we typically inhabit different forms of life, such as families, neighbourhood associations, religious organisations, political parties, professional bodies, schools and colleges, trade guilds and unions, the various forces, voluntary organisations, musical bodies, clubs, and so on. The answer is not entirely clear, but goes something like this. The tenor of life in a developed society is one of multiple and voluntary belonging, in comparison with the past, when opportunities for belonging were limited and anyway mostly compulsory. In the evolution of the project of modernity this trend is likely to continue and accelerate, further fractionating the established forms of society and even nations. At the same time people will negotiate new forms of life, some long-term and some short-term, based on shared values. Societies will become ongoing experiments in living, with some better outcomes and some worse, but none being perfect. Relatively good outcomes can be expected to be more widely adopted, with the emergence of normative agreements, but through the natural osmosis of observation and dialogue, not through coercion.

Dream on; such an evolution of modernity cannot come to pass unless the dragon of certainty is slain first. I mentioned a twist in the process of active inference: normally, one updates the model of the world through error correction based on incoming information; however, it is also possible to act in the world in such a way that the incoming information is only that which reinforces prior views. This relates to the first of the three moral stances mentioned previously – the position that one’s own perspective is the only correct one. Unlikely as that is, it is a position that some people hold due to such factors as immaturity, bad experiences (such as abuse or persecution) or mental illness, and often a combination of these.

There are a number of ways in which people in this position can deal with the cognitive dissonance that must inevitably arise. One is to isolate themselves completely (the hermetic solution); another is to curate the information to which they are exposed, something that is now possible with the internet and social media (the ‘filter bubble’ solution); the third, and the most dangerous to a culture such as that of the West, which has thrived on freedom of expression, is to band together with those who share a subset of one’s view of the world and collectively pressure changes in society that essentially outlaw any other views (the collectivist solution). That last manifests as the ideological suppression of dissent.

All three phenomena are expressing themselves in developed nations today. But whereas the first two do more damage to the individuals concerned (although they can be a problem for society in the long run), it is collectivism that is damaging society most visibly and immediately. While the prior political settlement of left/liberal and right/conservative was not perfect, both positions had claims to legitimacy and were broad in philosophical scope, encompassing many on both sides who could comfortably have been on the other. The left led the way (as it often has done, embracing more creative and disruptive types) in adopting identity politics – essentially the interests of marginal minorities, ethnic, religious, sexual and lifestyle, that do not naturally represent the interests of the majority – and, through a process that does not need to be reiterated here, has made this the priority of almost every public social institution and has effectively banned discussion of whether this should be so.

Clearly, the phenomenon is far more complex than this. Moreover, postmodern identity politics does raise legitimate questions about the place and inclusion of marginal minorities that need to be debated. Collectivist coercion of the majority, however, does not allow debate. Such a model of reality can only accumulate errors as time goes on as corrective information is banned, denied and deflected. Three dangers now present themselves. Postmodern identity politics can only thrive in a society that is open, tolerant and prosperous, but itself not only does not contribute to these conditions but aggressively denigrates the culture which does and that has allowed it to appear and propagate, effectively becoming a form of parasitism, weakening and perhaps even terminally threatening the society that it inhabits. Then, while the collectivist front must inevitably disintegrate, as it is starting to do now that it is reaching almost total hegemony, the question is how much damage will be irreparably done to society in the process. The third danger is that of reaction, a backlash against minorities that will undo all the progress that has been achieved through democratic reform.

A curious admixture of ignorance and certainty has become the default position of too many in society today. The charge of ignorance is justified, and one to which we are all prey; what needs to be punctured is the mantle of certainty. I have suggested that this is achieved through an understanding that we are all operating with sets of beliefs, mostly wrong, some helpful, many innocuous and some downright dangerous, and that the best method of error correction is to engage in vigorous debate with others, particularly on points of disagreement. It is good to see that some governments and institutions are waking up to the threat that muzzling free speech poses and are taking corrective measures. This is a necessary but not sufficient condition for overcoming the impasse and restoring the project of modernity. There should also be encouragement for the practice of the free dialogue of ideas within our institutions, as widely as possible.

Transcendent Individualism, Part 1: Collectivism and the Intolerability of Uncertainty

The concept of transcendent individualism, which I have developed through a number of essays and also presented in lectures, has naturally, as all unfamiliar ideas do, come in for criticism. Some of this has been exploratory and clarificatory in nature, some has arisen because the concept challenges – or is perceived to challenge – the particular values of the interlocutor. This is exactly as it should be. The concept of transcendent individualism is still embryonic; it seems to me to capture something of the essence of what historic humanistic cultures have always rested upon in some form, and it has some claim to being what citizens of democratic societies aspire to; but to garner more than passing interest it should seed an ongoing debate, be not just explanatory of a process but part of a process of the “evolution” of society, and embody within itself the prospect of self-transcendence rather than hardening into a rigid doctrine. Like all evolving ideas, it runs the risk of being transformed in that process, even becoming redundant, to be replaced by something different and fitter, more appropriate to the social landscape of the future.

That said, in this essay I want to lay out where transcendent individualism stands in relation to what is today its chief competitor in anthropological and politico-philosophical terms, that is, collectivism in its various forms. In fact, transcendent individualism can be construed as a response both to what I see as the dangers of collectivism and to the false ascriptions of individualism that collectivists engage in. This brings me to the first point, which is that transcendent individualism is no different in essence from individualism as it has been understood in historical terms, but is a deeper exploration of some of its facets in light of the collectivist critique. In other words, transcendent individualism is not a new philosophical perspective but a new exploration of a very old philosophical position, perhaps emphasising certain aspects and re-infusing it into a society and political culture which has undergone a century or more of the depredations and false promises of collectivist ideologies.

It is probably good to start with a definition of collectivism, as collectivism is back in fashion at the moment, particularly with the young in the West, who have no experience of living in collectivist societies and who are a generation or two removed from the experience of their effects in the political sphere, and also with those who are enamoured of the moral kudos that comes with being associated with collectivist rhetoric but are largely insulated from the effects of collectivist economic systems, such as the wealthy, who can move their assets to places where they are secure. Collectivism can be defined as any ideological, political or economic system that posits a greater good than the individual good and, additionally, commands allegiance to that good through a central authority. Collectivist systems range from authoritarian to totalitarian and can include every social institution from the smallest up to an entire society. Collectivist societies are those in which all the important functions of society are centralised in the state and, typically, in a single person. They include those in which the dominant ideology is religious and those in which it is secular or, more commonly today, atheistic; those which allow some economic freedom and those in which the state controls all economic activity. Political freedoms, if allowed at all, are barely tolerated, and there is little respect for human rights.

Regarding the definition I have given, it could be objected that there actually are goods which are greater than the individual good, and that may well be the case. But if ten or a hundred people were asked to say what that greater good was, while we might not end up with ten or a hundred different answers, there would be a range, and certainly not a convergence on one. In other words, a decision about a greater good is almost by definition an individual decision; by what logic, therefore, could we justify imposing a single definition on an entire population, except by the logic of power? Naturally, people do not arrive at a decision about the greater good in a vacuum, but usually through pre-existing institutions, such as religions and political parties, on which common interests converge, and through shared customs. But the point is that there is still a range of options, which could not coexist with a single allowed definition. So, although most people arrive individually at the position that there are greater things than the individual, it does not follow that there is one such thing that must hold for all.

Just as we can individually decide that there is such a thing as a greater good than the individual good, we can individually subject ourselves to the rules, principles and values that flow from that decision. However, we cannot with consistency insist that the path which we have chosen individually be enforced on others. I might choose to be religious because I believe that serving God is a greater good, but I cannot insist that everyone be so. I might believe that society would be better if wealth were shared equally, but I cannot force people to share theirs. This is, of course, the central paradox of all forms of collectivism: collectivists fail to reason universally from the fact that a decision about the greater good is an individual one to the conclusion that there must be equally valid – on the basis of preference at least – competing visions of the good in the world. This creates a dilemma for the collectivist: the existence of alternative visions of the good throws their certainty into doubt, and if there is one characteristic common to all committed collectivists it is their intolerance of uncertainty. The resolution of this dilemma is the anathematising of alternative visions of the good, which then justifies, permits and even lauds the use of coercive measures to ensure conformity.

For the individualist, at the political level, there is no problem with competing visions of the good. The fact that one cannot always ‘have one’s way’ might be personally galling, but there is recognition that one might always be wrong and that there may be alternative ways of seeing things which are better. In other words, for the individualist the existence of competing ideas is itself a form of the good. The individualist, for this reason, is tolerant of uncertainty as a fundamental attitude towards the world.

That is not to say that individualists are total relativists. They start from the same assumption as collectivists, that there is no greater good than the common good; but rather than assume that this good is whatever they personally believe, they have the humility to accept that understanding what makes for the common good is not easy, and certainty about it not readily attainable. Ideas about the greater good, like all ideas that in fact serve the common good, must meet certain criteria: they must have an internally consistent view or logical structure, be able to withstand criticism, and have evidence in their favour. The form of society that facilitates the greater good, in the eyes of individualists, is precisely that which allows an ongoing debate about what constitutes the greater good, even as people individually and collectively pursue their own personal vision of the good together with those of like mind.

In fact, on analysis, we see that individualism and collectivism are exactly the opposite of what collectivists purport them to be. For collectivists, individualism is about selfishness and collectivism about the good of all, and is therefore self-evidently morally superior. What we see in reality is that individualists – certainly those who follow their individualist perspective through to its logical and moral conclusions – are committed to the greater good, but are prepared to endure some uncertainty about its exact nature, and construct their social and political institutions to reflect the view that as a society we are always learning, and that to the extent that we must rely on certainties and stable features in order to meet the requirements of the day, these should be ones which have met the criteria described above. Collectivists, on the other hand, being intolerant of uncertainty, are far more likely to insist on their personal (i.e., individual) views and preferences and to band together with like-minded individuals to enforce their point of view. For this reason, collectivist cultures are always dominated by a committed ideological minority, are characterised by conflict – because collectivists cannot tolerate dissent – and are ones in which the majority acquiesce through fear.

While individualists believe that the good of the individual is a great good, a consistent individualism must, paradoxically, posit the highest good to be the collective good of all individuals, and can only accept a political and social settlement in which that principle is embedded and affirmed. This is a logical inference from the suppositions of a consistent individualism; individualism is not philosophical or emotive solipsism. In a reworking of Rawls’s famous thought experiment, rather than considering one’s place in society from behind the “veil of ignorance” and assuming that the outcome must be the advocacy of some form of social justice mediated through state organs, one can empathetically identify with the value that another places on their own life and the desires they have for achieving happiness, and arrive at a concept of the common good that respects those things above all. The outcome of this thought experiment is that the good of the individual is realistically only achievable in a society in which individuals support each other and the political culture supports them in the realisation of their desires.

This differs from collectivism in two ways. First, it does not assume that the conditions for the common good are known and perhaps only unimplemented because of some systemic evil; on the contrary, as already stated, individualists believe that the knowledge needed to create a society that serves the interests of all is rather hard to come by and that it accumulates incrementally. Second, it does not defer the responsibility for the discovery – or, more likely, the proclamation – of those conditions to authority, be it religious, political or intellectual; rather, it falls to each individual, drawing, of course, on the wisdom and opportunities accumulated within our cultural institutions, to make some headway in the discovery of those conditions.

It is no big step to realise that the form of social settlement and political system which accommodates both the individual desire for self-betterment and fulfilment and our desire to belong collectively is one that prioritises freedom. With a few rare exceptions, all individuals choose to belong, in various ways, to collective contexts. A polity that prioritises freedom implies two things in contradistinction to collectivism. One is that belonging, and subjection to the authoritative dictates of the collective, is a matter of choice; if there are persuasive or coercive forces at work, such as family or community, they are at least local and escapable. The second is that in a culture that prioritises freedom, one is free to belong in multiple ways, both to different forms of life – even if logically, ideologically or aesthetically incompatible – and with different degrees of commitment.

In summary, both collectivists and individualists believe in a greater good and that this is defined more or less by what we can call the common good. Collectivists, by definition, know with certainty that the common good accords with their individually preferred or shared collectivist ideology. The only problem is to convince others of their correctness; if persuasion does not work, coercion is certainly justified. Once the intolerability of uncertainty is admitted, there are no limits to justified authority or measures to ensure conformity. For individualists the greater good is the good of the individual, but this means the good of each individual, not just what is good for the sole individual. The common good emerges from a consideration of the individual good and the only reasonable inference is that this is served by a form of society that both allows and facilitates the pursuit of the individual good. This is a far more complex proposition, one which we see rudiments of but which is incomplete and likely to remain in a state of perpetual becoming.

Reflections on the Nature of Truth in a Post-Relativist Age

If a man says that there is no such thing as truth, you should take him at his word and not believe him. – Roger Scruton

In classical times there were considered to be three absolute values – truth, beauty and goodness – rooted in the unbroken order of things, the relationship of mankind to the cosmos and the gods. In the period of modernity a spirit of relativism pervaded, and these values were no longer considered absolute. Hume and the sceptical tradition epitomised by Moore’s Principia Ethica considered the good to be merely the preference of the individual, and aesthetic relativism held beauty to be ‘in the eye of the beholder’. However, recent scientific work on altruism and perception suggests that there are objective correlates of subjective feelings of value, in these cases actions and structural disposition. In the case of truth, the feeling of ‘trueness’ should likewise be matched to an objective correlate, which in common with the philosophical tradition I take to be actual existence.

It could be said that our relationship to truth has changed over time. In a simpler age there were the truths of religion and the truths of the voices of authority, often those who transmitted the sacred words or who represented divinity on earth, such as kings and emperors. With the Reformation and the Enlightenment those truths began to lose their grip on the imagination of greater numbers and to be displaced by the secular truths of science and the provincial voices of communities of experts in fields such as law, politics and economics. It may be that in our time, under the twin influences of postmodern philosophy, with its radical de-centring of subjectivity and its deconstruction of all forms of authority, and the technology of the information society, exemplified by the Internet, we are entering a post-relativist age: one characterised not by the tolerance and compromise fostered by recognising the limitations of knowledge in a relativistic milieu but, paradoxically, by extravagant claims to truth made in a nihilistic one.

It might be surprising that the notion of truth is still taken seriously, many believing it to have been displaced by a thoroughgoing relativism with regard to omniscient claims. But one of the long-recognised problems of relativism is that it logically undercuts its own suppositions: it cannot be a true statement that there is no such thing as truth. Perhaps the purveyors of relativism have something more specific in mind, the non-existence of ‘Truth’ as an absolute, allied to moral absolutism; and though they might not be entirely out of the woods, this is a known category: that of the assertions of theology, sovereignty and metaphysics. We have become inured to the debunking of authority in these fields. What might be less well known is that science has also lost its privileged place as a purveyor of truth; scientific theories are now generally considered to be useful creations rather than discoveries of the iron laws of nature. It is only in logic and in mathematics that the notion of truth remains largely intact, although even here outriggers of postmodernism, such as feminist theory and ‘queer’ theory, have been transvaluing rational thought’s central tenets into the will to dominate and deploying the gambit of victimisation.

It is, though, in the field of politics that the most obvious manifestations of post-relativism are found: the assumption of, and attribution of, bad faith to whatever and whoever takes a different perspective, regardless of the evidence; the concoction of ‘alternative facts’ and the accusation of ‘fake news’ in a zero-sum game in which the rules of civilised discourse and the arduous responsibility of arriving at something like the truth in a complex social world have been laid aside; and the grandstanding assumption of indubitable infallibility based for the most part not on knowledge and experience but on tenuous sources within cyberspace. Today, many people seem content to outsource their thinking and behaviour to the social media corporations. In a more scripturally literate past this was known as building your house on sand.

While not the source of the problem, it does not help that current theories of truth within philosophy are based on very narrow criteria. The two prevailing models are the correspondence theory of truth, in which statements made about reality correspond to the facts as they are known, and the coherence theory of truth, in which statements have logical coherence with other validated propositions. The correspondence theory goes back to Aristotle but has had modern exponents in Russell and Austin. Russell, for example, held that for a statement to be true, every linguistic element in the statement, such as the relationship between a subject and an object, must correspond to a factual reality. While commonsensical for many mundane, concrete descriptions, this would seem inadequate for any state of affairs in which interpretation is called for; how, for example, would one determine that even the simple judgement that a particular road is a long road was objectively true?

A sister theory of correspondence theory is Tarski’s semantic theory of truth, which states that a proposition of the form “snow is white” is true if and only if snow is white, the two occurrences of the phrase belonging to the object language and the metalanguage respectively. This establishes the conditions under which a ‘true’ or ‘false’ truth value can be attributed to a statement cast as a tautology, but not whether the referent of the statement is real. A parallel example would be the statement “kryptonite is green” is true if and only if kryptonite is green. The conditions for attributing a truth value are the same, but the referents have a different ontological status. Since kryptonite does not exist outside the imaginary world of the Superman comics, kryptonite is neither green nor any other colour. So although this would satisfy Tarski’s conditions for attributing a false truth value to the statement, it seems to me that such a statement should not be evaluated on a par with a statement such as “sulphur is blue”, in which an attributive error, rather than a category error, has been committed.
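The distinction can be made concrete in a minimal sketch (my own illustration, not Tarski’s formalism): a toy ‘world’ records which referents exist and what colour they have, so that the evaluation of “X is Y” can separate a false attribution to a real referent from a claim about a referent with no worldly existence at all.

```python
# Toy model of the object-language/world distinction discussed above.
# The WORLD dictionary is an assumed, invented fixture for illustration.
WORLD = {
    "snow": {"exists": True, "colour": "white"},
    "sulphur": {"exists": True, "colour": "yellow"},
    "kryptonite": {"exists": False, "colour": None},  # fictional referent
}

def evaluate(subject: str, colour: str) -> str:
    """Classify the statement '<subject> is <colour>'."""
    entry = WORLD.get(subject)
    if entry is None or not entry["exists"]:
        # No real referent: the statement commits a category error,
        # even though the T-schema would still assign it a truth value.
        return "category error"
    if entry["colour"] == colour:
        return "true"
    # Real referent, wrong attribute: an ordinary attributive error.
    return "attributive error"

print(evaluate("snow", "white"))        # true
print(evaluate("sulphur", "blue"))      # attributive error
print(evaluate("kryptonite", "green"))  # category error
```

The point of the sketch is only that the evaluation procedure needs information beyond the sentence itself, namely the ontological status of the referent, which is exactly what the bare biconditional does not supply.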

Both these versions of correspondence, to my mind, suffer the same limitations. The first is that they limit themselves to so-called real (i.e. physical) objects, whereas many of the things that language speaks of are non-physical, abstract or imaginary. The problem is their positivistic notion of existence, the reduction of reality to basic fundamentals over which they claim there is no dispute. However, there is no existence which is not problematic. Take, for instance, the proposition that the earth is round and orbits the sun. It was once considered heretical to make public such a belief. Today the denial of either of these accepted facts is considered a mark of eccentricity or perversity. But how has the proposition “the earth is round and orbits the sun” been established as true*, since very few have had the opportunity to experience this directly? It is on the basis of an established intellectual tradition that the word has percolated down even to the least intellectual through school textbooks and popular culture. Every piece of so-called evidence could have an alternative explanation. We take it in good faith that the experts who assert that it is so have the means to evaluate the evidence, and that the theory binding the evidence into a coherent explanation is fundamentally sound. For all that, the emergence of the internet has spawned and hosts a multiplicity of flat-earth conspiracy websites and other alternative ways of seeing reality, from committed ufologists to millennialist movements and crackpot therapeutics, which have eroded faith in reason and empirical evidence among much of the public.

“The world is all that is the case”, declares Wittgenstein at the opening of the Tractatus Logico-Philosophicus, on which view whatever is true must be an existing object or an existing state of affairs, such that stating Y of X must be true if X exists and Y is a quality that pertains to X. However, in order to address the divergence between ordinary language and the range of objects or events found in the phenomenal world of human experience, it is necessary to part company with positivism and its insistence on ‘atomic facts’ and take the phenomenological position that whatsoever we speak of has a proper mode of existence. In other words, it is necessary to expand the range of fundamental ontology over which truth values can be asserted to include at least social ontology and the ontology of the psyche. It seems to me that there are six categories of knowledge to which the label ‘truth’ can be attached, though I am not dogmatically committed to this: the truth that nature, great art and great acts reveal to us; the truths contained in sacred texts and institutions; authority, the mystique surrounding it and its pronouncements; matters of fact encountered in the everyday; theories, such as those of science, the humanities and philosophy; and tautologies, as in mathematics and logic. The only thing that binds these together is the requirement that their ‘truth’ be conceived as related to a mode of existence. That is to say, nothing can be said to be true unless it is held to exist in some manner.

This brings me to the second limitation of these theories: that they do not establish the conditions upon which correspondence between a statement and the actual state of affairs described can be said to hold or not to hold, other than to affirm or deny that it does. In fact, the conditions of truth for an object or state of affairs can be said to be met when they are defined in a dialectic of conceptualisation and evaluation; that is, their mode of existence is conceptualised in a way that incorporates, even if only implicitly, a method by which the assertion of existence can be judged. For example, if a unicorn were defined as a horned horse, then any statement that contained a reference to unicorns, such as “I encountered a unicorn in the forest”, would be easily refuted, as no such creatures exist; however, if it were defined as a mythical horned horse, then the same statement would be taken allegorically or dramatically. Less obviously, we do this with everyday objects. How would one know that a particular object was a cup unless we had imbibed a concept of a cup that was continually validated in our everyday experience? Contrast this with the bafflement or indifference with which we encounter unfamiliar objects of whose use we have no conception or understanding.

The conflict between religion and science is largely about conflicting ideas of truth and the misapprehension on both sides of the nature of the truths that they are promoting. A less restrictive ontology could broaden our conception of what we consider part of the real. A case could even be made for the existence of God as an object of faith that can only be apprehended through a life of faith. However, both religion (at least of the more fundamentalist varieties) and science (allied to atheistic fundamentalism) believe that religion is advocating truths that are evidentially demonstrable, as an alternative or equivalent to science, for example about the origin of the universe or the origin of life. But this was not the view of truth that was promulgated by classical religion, such as the theologies of Augustine or Averroes (Ibn Rushd), nor indeed by more open-minded modern commentators. The palaeontologist and evolutionary theorist Stephen Jay Gould spoke of the ‘nonoverlapping magisteria’ of science and religion, each holding legitimate authority over its own domain of inquiry. Simply put, we could say that science addresses the facts of reality through theory and data, while religion addresses the meaning of reality through stories and metaphor. Even when atheists experience the awe-inspiring nature of cosmic reality, they are hampered in expressing this in the reductive language of science and frequently take refuge in the spiritual language of parable and metaphor.

Of course, definitions are not always attached to statements, nor should they necessarily be, as this would be an imposition on the beauty and simplicity of language. Most statements are understood in context anyway. This favours a coherence theory of truth, in which statements are anchored in other statements that are verifiable, though I have argued that we need a broader range of conditions in which verification takes place. I think one of the great dangers of the post-relativist age of information overload and reductive horizons is that we are losing the ability to contextualise the utterances of those with whom we may not share the same outlook within a broader framework of accommodation, and instead are tempted to defend our small islands of privileged truth in bouts of hyperbolic rage.

*Or approximately true, as the earth is flattened at the poles, and it is more accurate to say that the earth and sun revolve around a common centre of mass.