Truth as fiction: the dangers of hubris in the information environment
14 Feb 2020

As Felipe Fernández-Armesto points out in his 2019 book Out of our minds: what we think and how we came to think it, ideas about how humans tell truth from falsity are among the oldest and most important we have ever had. The digital information age presents a range of new and old challenges to this conundrum, and in national security they become particularly acute: we’re drowning in information, and most projections suggest it will only get worse. A tendency to overreact to these new iterations of old problems, however, will quickly forfeit any strategic gains the digital information age once seemed to offer.

Another of the oldest and most important human concepts is narrative. The stories we tell about ourselves, both individually and collectively, are the substrates of identity. Narrative is a different species from truth and falsity; it spans the gap between factual truth and truth as meaning—a delicate, dynamic and complex assemblage of information, knowledge, understanding and illusion. Humans participate in constructing their own narratives, but are not their sole architects. The world outside of human control has the final say.

In Stalin’s Soviet Union, as in Nazi Germany, ‘truth’ was treated as a theoretical construct under the control of human architects. As Eugene Lyons noted of life in the USSR in his 1938 Assignment in Utopia, under certain conditions utter nonsense can seem true—nonsense with a special momentum whereby each accumulated failure only stiffens collective commitment to the lie. When centralised regimes control much of the information infrastructure, falsehoods can seem plausible for a time, but not because logic bends to human will—it doesn’t. By controlling information infrastructure and by fostering a climate not only of fear but of vanguardism, Soviet and Nazi propagandists manufactured ‘truth’ out of thin air. The climates of terror and the infrastructures of control those regimes produced eventually fell—and so did the nonsense they briefly held aloft. The 20th century stands as the greatest warning against the hubris of human authorship of truth.

Despite those lessons, the temptation to treat narrative truth as a fiction of our own making—unburdened by fact—is again on the agenda. Three factors have put it there: our self-inflicted glut of data; the conviction that adversaries are conducting narrative warfare against us and that we’re losing; and highly speculative theories about what technology can do to combat the problem.

The belief, driven primarily by the big data commerce industry (as opposed to the big data science industry), that data must contain a type of magic dust—discernible patterns and regularities in human behaviour that offer significant insights—is losing its lustre. Much of what has passed, and been bought and sold, as behavioural analytics derived from data has little or no legitimacy. Some analytical tools perform no better than human intuition at predicting social outcomes, yet the idea that complex human behaviour can be steered by social engineers is part of the zeitgeist. That the age of big data might yield diminishing returns is information-age heresy.

Two decades of conflict against an adversary unbound by ethical and moral constraints in communication have left the West’s security agencies and their personnel pushing boundaries to win the contest of narratives by countering the content, the flow or, in many cases, the communicators. The overarching concern, characterised as a national security risk, has been amplified in recent years as revisionist states use information as a tool or weapon. Again, the desire to counter (to be seen to be doing something, usually offensive, against the identified threat) has led that same security apparatus to consider forgoing the very values, ethics and morals that make Western democracy worth preserving.

This isn’t mere speculation. Decisions within the US security apparatus to reframe long-held prohibitions against torture in order to obtain a supporting legal opinion justifying ‘enhanced’ interrogation methods are a sobering case study. Arguably, a significant erosion of trust in US decision-making was cemented by the release of reports on previously covert actions related to enhanced interrogation—rendition to black sites, waterboarding and so on. The negative impact was all the greater because those acts were deliberately thought through and authorised in order to ‘win’, not the work of poorly led individuals, as at Abu Ghraib. Strategic consequences flow from serious breaches of trust.

The fight can’t be solely about countering the adversary. We need to protect what we have and ensure that we don’t throw it away through frustration and the desire to report a success. Freedom is not free and, once squandered, can’t be regained without tumultuous change. For democratic states, people’s and partners’ trust in national institutions is a central requirement. While ‘trust’ is an increasingly flexible concept, short-term wins can carry negative long-term strategic consequences. Bellingcat’s work to separate fact from fiction and correctly attribute online influence efforts provides a valuable longitudinal view of the growing number of states and state agencies attempting to ‘win’. It’s also a sobering insight into the fragility of the assumption that clandestine or covert action can be safely outsourced and hidden in the sea of big data. Here, hubris is increasingly obvious.

The third factor—statistical inference software (unhelpfully known as artificial intelligence, or AI)—offers high-speed information sorting. The technology is a crucial part of the response to the data deluge in the national security sphere. But AI might never accomplish the simulated assembly of complex information, knowledge, understanding and illusion that gives rise to narrative identity. We accrue intellectual debt whenever we offload cognitive tasks to machines. It would take historically epic hubris to contend that statistical inference can replace, or even usefully augment, this process. It stretches credulity even further to suggest that states should attempt such interventions as they grapple with the uncertainty of the digital age. The price of that hubris could be a kind of hidden defeat.

Intervening in human complexity is perennially fraught. If we believe an adversary is conducting such activities, why not let them fail? No current technology offers a way out that doesn’t carry significant societal costs. Statistical inference software helps in sorting and analysing information that’s bounded and discrete—the identification of threat signatures in cyberattacks is one example. Expecting this technology to play a meaningful role in the complex assembly of narrative and meaning, however, is to make a scientifically unsupported leap of faith.
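To make the contrast concrete, here is a minimal sketch of the kind of bounded, discrete task such software handles well: checking events against a fixed set of known threat signatures. The signatures, event fields and names below are hypothetical illustrations, not any particular product’s interface.

```python
# Minimal sketch of bounded, discrete signature matching.
# All signatures, event fields and names are hypothetical illustrations.

KNOWN_SIGNATURES = {
    "sig-001": {"port": 4444, "payload_marker": "meterpreter"},
    "sig-002": {"port": 23, "payload_marker": "default-credentials"},
}

def match_signatures(event: dict) -> list[str]:
    """Return the IDs of every known signature the event matches.

    The problem is bounded and discrete: a fixed set of signatures,
    a fixed set of fields, and a yes/no answer for each pair.
    """
    hits = []
    for sig_id, sig in KNOWN_SIGNATURES.items():
        if (event.get("port") == sig["port"]
                and sig["payload_marker"] in event.get("payload", "")):
            hits.append(sig_id)
    return hits

if __name__ == "__main__":
    event = {"port": 4444, "payload": "...meterpreter stage 2..."}
    print(match_signatures(event))  # ['sig-001']
```

Nothing in this sketch assembles meaning; it only tests membership in a predefined, finite set. That is precisely the gap between what the technology does well and what narrative requires.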

Expecting humans and the machines they make to become the authors and architects of narrative meaning, without exacerbating the very problem they’re supposedly attempting to mitigate, is an old fantasy using some new gadgets. It never ends well.

The astute approach to what humans and the machines they make can do to help tell truth from falsity—this oldest of human conundrums—requires a large dose of sceptical conservatism from the national security community. Australia has an open, democratic social fabric to protect and strengthen. Truth may be a partly human fiction, but it’s one of the most important fictions a nation can conceive. Turning it over to algorithmic alchemy and a handful of central controllers in a frenzy of presentism is the fastest way to unravel the whole tapestry.