Post-truth: The future is already here, it’s just not very evenly distributed

Algorithms, discourses and «patternocracy» — everything you were never told about «post-truth».

«Post-truth» was named word of the year by Oxford Dictionaries, beating «Brexiteer» and «chatbot». According to the definition, post-truth means «relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief». The term, however, had been used before: in 1992 the American playwright Steve Tesich, reflecting on the Iran-Contra scandal and the Persian Gulf War, wrote: «We, as a free people, have freely decided that we want to live in some post-truth world». Twenty-five years later we face post-truth in both a global and a private sense.

Everyday life broadly relies on digital technologies and online environments: we get our news from social network feeds and buy groceries through online shops or subscription delivery services. Almost everyone carries a smartphone in their pocket, ready to handle nearly any request.

However, along with the positive aspects of this revolutionary environment, we inevitably face the consequences of modernity. To most users, big data, algorithms and AI seem unbelievable and remote from everyday life: all these sophisticated mechanisms exist somewhere in secret labs or in Silicon Valley and are therefore hardly relevant to «ordinary people». And yet this is not true. Strelka Magazine talked to researchers of digital and online environments — Benjamin Bratton, director of «The New Normal» educational programme at the Strelka Institute; Vladan Joler, director of the Share Foundation; and Jan Krasni, associate professor at the University of Belgrade — to understand what post-truth actually is and how it shapes everyday life.

Benjamin Bratton. Director of «The New Normal» educational programme at the Strelka Institute, Professor of Visual Arts and Director of the Center for Design and Geopolitics at the University of California, San Diego.

What is post-truth? You take some personal sense of resentment and create a narrative out of that resentment, an actual story for people to hear, a narrative with heroes and bad guys; then you arrive at some point where a hidden truth supposedly lies (scandals, for example): now you have the foundation. The idea is that there is some hidden truth; it is secretly revealed to you, and now you know about a banking conspiracy or Hillary’s emails or whatever, and now you are a hero. So, making people heroes by giving them a secret truth — that’s what post-truth is.

The networks work as a public space; they’re a forum for performance and shaming and all kinds of other things in this pseudo-public space. So, part of the post-truth thing, as well, is also an overemphasis on subjectivity: my truth is my truth because this is my experience; this is a narrative that I wish to have; whatever might be outside of this is essentially irrelevant. So it’s also probably connected to designing a public identity: this is my subjective and personal experience of the world.

Graham Harman made an interesting point: he said that post-truth is probably not the right word. In his publications he calls it post-reality, and I think it’s a good argument, because one of the effects of the phenomenon we’re talking about is that the actual real is demeaned and becomes essentially irrelevant to the process of truth-making: the real is de-linked from truth. There is not an absence of truth but rather an abundance of it. Let’s take augmented reality as an example of how post-truth is actually produced as an excess of truth rather than its absence.

Let’s imagine you wear augmented reality glasses and they tell you «this is halal and that is haram», «clean ‒ unclean», «good guy ‒ bad guy», «ours ‒ theirs», and so on through all these oppositions. What you get is not exactly the real; what is produced is a kind of truth regime, one that makes claims about what is in fact true, what is right and wrong, what should be done, which course should actually be taken.

So why do we have this? Part of it may be a response to an excess of information: more information is available, but the context for understanding informational patterns depends on cultural norms. Part of the response to this confusion is to find patterns in the noise, patterns that may or may not actually be there. And so the desire to find those patterns becomes a market for the people who supply them, who tell you what’s going on, who design a narrative.

It’s not that there’s no truth behind them; the reason they have power is that they provide a mechanism of simple truth, excessive truth. And that’s interesting. It’s not that we see the world through these filters; we actually express our ideas about the world through the formation of these filters: sharing, liking and so on are acts of performance as much as they are of content consumption. So this is a mix of the theological public with a reverence for subjective self-expression, and at a time of refugee and racism problems it is essentially becoming a fundamentalist, idiocratic populism.

Vladan Joler. Director of the Share Foundation, director of Share Lab, professor in the New Media department of the Academy of Arts at the University of Novi Sad.

It is trendy these days to claim that Facebook is one of the greatest catalysts of the post-truth society. The news feed and other algorithms within this algorithmic factory cluster users into filter bubbles, where each user is served their own version of reality. We are speaking here about 1.6 billion different nano-discourses: realities shaped by algorithms that estimate which set of news, ads or events will fit your vision of the world. Truth and objective facts play no role in this process; the main parameters are attention and behaviour, which serve as bait for your click on the ad. The bubble is bounded by an event horizon: inside it, we see only what we want to see, ideas that align with our own and opinions that appeal to our emotions. The phenomenon is also known as an «echo chamber»: a metaphorical description of a situation in which information, ideas or beliefs are amplified or reinforced by transmission and repetition inside an «enclosed» system, where different or competing views are censored, disallowed or underrepresented. Each of these «enclosed» systems cultivates its own beliefs and values, forming a unique version of Truth. This is completely in line with the definition of post-truth.
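A minimal sketch of the mechanism Joler describes, under invented assumptions (toy topic labels, a dot-product relevance score, a hard cut-off): this is an illustration of relevance-based filtering in general, not Facebook’s actual news feed algorithm. Items that match a user’s existing interest profile are ranked up; everything else falls outside the «event horizon».

```python
# Toy news feed ranker: each user sees only the items closest to their own
# interest profile, so different users end up inhabiting different «realities».
# All names, topics and the scoring rule are illustrative, not any real platform's code.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    topics: dict[str, float]   # e.g. {"anti-migration": 0.9}

def relevance(user_profile: dict[str, float], item: Item) -> float:
    # Dot product of the user's interest weights and the item's topic weights:
    # the more an item matches what the user already engages with, the higher it ranks.
    return sum(user_profile.get(t, 0.0) * w for t, w in item.topics.items())

def build_feed(user_profile, items, size=2):
    # Keep only the top-scoring items; everything else is simply never shown.
    return sorted(items, key=lambda it: relevance(user_profile, it), reverse=True)[:size]

items = [
    Item("Migration is a threat", {"anti-migration": 0.9}),
    Item("Migration is an opportunity", {"pro-migration": 0.9}),
    Item("Local football results", {"sport": 0.8}),
]

# Two users with opposite profiles get two non-overlapping versions of reality.
print([it.title for it in build_feed({"anti-migration": 1.0, "sport": 0.3}, items)])
print([it.title for it in build_feed({"pro-migration": 1.0, "sport": 0.3}, items)])
```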

But it is unfair to put all the blame on Facebook and its algorithms. Empowered by the Facebook targeting system, politicians and other stakeholders can penetrate deep into those bubbles and perform emotional stimulation, nano-targeting at the level of the smallest cluster, presenting a custom-made utopia and introducing the enemy, the object of hate. As the media theorist Manuel Castells has discussed, the fabrication and diffusion of messages that distort facts and induce misinformation to advance someone’s interests is a basic tactic of propaganda and control, one of the oldest and most direct forms of media politics. Facebook can be blamed as an engine, catalyst or tool, but the owners, managers and workers of the post-truth factory are the same as before: politicians, information warfare experts, spin doctors, marketing experts and armies of low-paid activists or unpaid political trolls. If we really want to know what it is like to live in the post-truth world, we should probably ask a journalist who has to write a new cheerleading article about the supreme leader every day, a marketing expert, or a professional online troll who must produce hundreds of fake comments daily. Or we should re-read Orwell’s «1984» as a manual for understanding the dystopian post-truth society.

We can reformulate William Gibson’s sentence «The future is already here, it’s just not very evenly distributed» into «Truth is still there, it’s just not very evenly distributed». In the context of the big data hype, global surveillance and the promise of «smart cities», we should, as always, first ask who will own the collected data; the answer will tell us who will have access to the statistically generated and algorithmically rendered version of the truth. Access to truth is, and will remain, a privilege of the political, economic and technological elite, while everyone else will be mere data emitters, constantly exposed to waves of emotional stimulation, analysed and watched by algorithms.

Jan Krasni. Associate professor at the University of Belgrade, researcher at Share Lab.

Algorithms not only structure the content we get online according to our interests; they also create content. The first phenomenon has been known for some time, ever since the term «filter bubble» was coined. It «became» a problem in the mainstream media when a Swiss magazine published an article on how algorithms based on psychological and behavioural analytics made it possible for Trump to win the elections. Trump’s «online squad» produced so many different kinds of content that it could cover a very broad range of users distributed along the continuum between his and Clinton’s supporters. This content was then disseminated according to users’ psychological profiles, i.e. psychograms. The mainstream media, in effect, made a problem out of the discipline of psychological analytics based on the personal data collected whenever any of us visit social networks.

The second phenomenon, algorithms creating content, is still not much in the focus of the mainstream media, since it brings them revenue. When you visit Google News you see little units and sections with pictures, titles and leads. These elements come from different sources; the algorithms collect and combine them, producing a kind of automatically generated clickbait. This is basically a new text assembled by means of «montage». These «clickbaits» or «newsbites» are structured differently depending on the information the algorithm has on the user, users or large user groups visiting the aggregator service. This kind of personalisation is adjusted less and less by ourselves and more and more by the algorithm itself. Thus we realise that the algorithm is «aware» and that it recognises «meanings». We should not forget that the Facebook news feed is perhaps even more «excited» about our personal preferences, which are registered even without our knowing it. In other words, news feeds largely reflect our interests, but they can also affect us (and they do) by creating, or at least influencing, something we perceive as our personal discourse, the one we are exposed to every day.
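A rough sketch of the «montage» Krasni describes, under clearly invented assumptions (made-up source names, a single «tone» attribute, a toy selection rule; no real aggregator works exactly like this): the headline, lead and image of one news unit are drawn from different sources, and the combination is chosen to match what the user tends to click.

```python
# Toy «montage» aggregator: one news unit is assembled from pieces of several sources,
# and the choice of pieces depends on the user's inferred preferences.
# Source names, fields and the selection rule are illustrative assumptions only.

sources = {
    "outlet_a": {"title": "Markets plunge amid turmoil", "tone": "alarmist"},
    "outlet_b": {"lead": "Analysts expect a quick recovery.", "tone": "reassuring"},
    "outlet_c": {"image": "trader_hands_on_head.jpg", "tone": "alarmist"},
}

def assemble_unit(user_tone_preference: str) -> dict:
    # For each slot (title, lead, image), pick the element whose tone best matches
    # what this user usually clicks; fall back to the first candidate otherwise.
    unit = {}
    for slot in ("title", "lead", "image"):
        candidates = [s for s in sources.values() if slot in s]
        match = next((s for s in candidates if s["tone"] == user_tone_preference), candidates[0])
        unit[slot] = match[slot]
    return unit

# The same underlying story yields differently «edited» units for different users.
print(assemble_unit("alarmist"))
print(assemble_unit("reassuring"))
```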

What these personalisations are based on is called a digital footprint: all the traces you leave merely by «being online». These processed metadata describe your behaviour and characteristics. The algorithms and neural networks behind the news feed recognise and reinforce patterns: as when sketching in a notebook, the line grows stronger because you repeat the stroke in a similar direction, though not always along exactly the same track. Something similar happens with the feed: if you are just a little bit «traditional», you will get more and more articles that would ideologically be sorted as traditional. If you are labelled «liberal», you will probably get more liberal content to read. Even if you do not become more liberal or more conservative, and even if you do not sink into whatever other label you are given, you will stay as you are, because the bubble will not let anything inside that could make it burst. You are not able to change, not able to develop. That is the potential problem of automated personalised discourses: they recognise you and no longer allow you to change yourself (leaving aside the influence they themselves exert).
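The reinforcement loop behind that «line in the notebook» can be sketched in a few lines, again under invented assumptions (two labels, a fixed nudge per click, a 90 per cent chance of being served the dominant leaning; none of these numbers come from a real platform): a small initial bias is amplified until the label hardens.

```python
# Toy feedback loop: the label the system has assigned is reinforced by the very
# content it causes you to see. Update rule and numbers are illustrative assumptions.
import random

random.seed(0)

profile = {"traditional": 0.55, "liberal": 0.45}   # a user only slightly «traditional»

def serve_and_update(profile, rounds=20, lr=0.05):
    for _ in range(rounds):
        # The feed mostly serves the leaning the profile already favours...
        served = max(profile, key=profile.get) if random.random() < 0.9 else min(profile, key=profile.get)
        # ...and a click on what was served nudges the profile further the same way.
        profile[served] = min(1.0, profile[served] + lr)
        other = "liberal" if served == "traditional" else "traditional"
        profile[other] = max(0.0, profile[other] - lr)
    return profile

print(serve_and_update(dict(profile)))   # the small initial bias hardens into a firm label
```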

When it comes to a proper method for analysing these discourses, one can no longer think only in terms of media studies, computer science, computational and corpus linguistics, or any other discipline that emerged in the 20th century or the early 21st. Technology affects our everyday life, our meaning making and our decision making far more deeply. So instead of speaking about the «data turn» or, in this case, the «algorithm turn», we might as well remember the concept of methodological pluralism and accept the fact that our approach depends on the phenomenon we are confronted with, not on any discipline or school.

Text: Ekaterina Arie
Translation: Ekaterina Arie