The ‘spinternet’, or the use of social media by governments and special interest groups to recruit and galvanise populations to toe the party line, is a term coined by Georgetown University researcher and journalist Evgeny Morozov. It’s a concise shorthand for the topic of last Wednesday night’s discussion at DigiFest, the series of digital technology events I curated this month at London’s Science Museum.
As a social psychologist interested in the interpersonal processes that occur between people on the Web – and the changes in personal attitudes and behaviours that result – this was a discussion I knew would get meaty: a dissection of how we, and governments and special interest groups, use this communication technology to influence. I wasn’t disappointed. The contributors, including Professor Phil Taylor of the University of Leeds, Dr Lian Zhu of Bournemouth University, Terry Pattar from Janes and Chris Vallance from the BBC, gave the sell-out crowd (and me) superb insight into whether this new communication medium is indeed a truly revolutionary means of disseminating propaganda. The resounding conclusion: the Web has the potential to be the ultimate propaganda tool, but it’s not there yet. They argued that other technologies – from the printing press to radio to television – have been better utilised thus far. But is that because the new media hasn’t been figured out yet?
The dissemination of propaganda through mass communication channels is well-documented. The one-party system in a country like China uses its control of the media to protect the sanctity of national identity. Other countries have come under fire for the close ties between their media organisations and government officials (e.g., Italy, where Prime Minister Silvio Berlusconi owns three of the four main media channels, or Venezuela, where Chávez’s interests in media are evidenced by the hubris-fuelled phone-ins that he hosts for eight hours per day). In this country we seek (note, we seek) to keep the two separate, but clearly that’s just propaganda. Dr Zhu, formerly a TV and print journalist in Shanghai, said that the political influence in UK media is simply not as overt, but it’s just as prevalent as in China. The UK has a long history of effectively using public communication to promote the interests of the government (do keep calm and carry on… alternatively, you can get excited and make things), and wartime provides fertile ground for designers to create an us-and-them mentality in whichever country you happen to live in.
So where does the Web come in, and how does it fit into the tapestry of propaganda tools? Professor Taylor, a propaganda historian, suggested that there are three crucial turning points in propaganda’s history: the invention of the printing press, World War II, and the invention of the World Wide Web. He argues the Web is the next step: it offers the publishing facilities of the printing press and the techniques and tricks perfected during WW2, but in addition it is an enormous trove of information – potentially destabilising the sanctity of the nation state and the effectiveness of any propaganda from government sources (how can a country win the hearts and minds of its population when they have access to other, perhaps contradictory, ideas?) – and a new medium of social diffusion that encourages rapid dissemination of information, often to specialised audiences who are already predisposed to being converted.
The Web is the most extensive knowledge resource in history. It has been criticised as being a library dominated by disinformation: the same facility that allows anyone to publish, its critics argue, undermines the objectivity of ‘knowledge’. For that to be preserved, the critics demand that a peer review process is in place to establish that content is unbiased. But that already happens, just in a new way.
For example, the poster child of the openness of the Web is Wikipedia, the user-generated encyclopedia that is a first-stop-shop for the most superficial understanding of a topic. Despite its functioning and effective community-generated peer review system, reliant on the contributions of its members, it is still criticised as a source of non-rigorous disinformation.
At the heart of this debate is what makes information credible. That becomes even more important in computer-mediated communication. Offline, we focus on the output provided to us by gatekeepers: newspaper articles, television programmes, radio programmes, academic institutions. Naomi Klein would put brands centre stage: we seek information from those sources we trust. In practice, this can include special interest groups that reflect our interests, and governments. And in many ways, it is these latter groups which hold the most sway over what we believe to be true. And there is evidence that this ‘cyberbalkanisation’ is particularly pronounced online: people develop strong in-group identities because what they read and share in their online communities repeats what they already think. Wojcieszak (2008) suggests that the online environment encourages extremist attitudes; I propose, based on the results of my own work, that this is because people make assumptions about other people’s beliefs that are more extreme than they actually are. They assume consensus of attitudes and behaviours when, in fact, there is none. Terry Pattar, who analyses extremist websites and blogs, says that the influence of the few is rearing its head online: there are fewer and fewer key websites and forums referenced in extremist propaganda. The universe of extremism has condensed from an innumerable number to around five.
So how have governments tried to come to grips with this new propaganda conduit? Some have chosen to use brute force: the censorship policies of countries like Iran and China are well-documented. Both limit access to information that does not toe the party line, that – in their view – destabilises the scaffolding of their national identity. However, this sledgehammer approach is ineffective because the Web sits on an architecture that is inherently unbreakable, in which information can be re-routed around blockades and gates with a bit of well-placed technological jiggery pokery. Like Haystack, for example, used to great effect during the Green Revolution in Iran in the summer of 2009.
The way the two countries deal with online content that contradicts the national agenda, however, is different. I would argue that the Chinese approach is a more sophisticated version of the Iranian system, and that the events that transpired in Iran in the summer of 2009 – in the immediate aftermath of the disputed Presidential election – had already been experienced by the Chinese authorities: the government sought to cut off access by using its unique position as the first and only port of call for telecommunications in and out of the country. But in both cases, the blockade was breached.
China has had a few years on Iran when it comes to how it uses the Web. Rather than ensure that information cannot be accessed (and therefore pretend that it does not exist), the authorities have begun to use social media tools to attack the most important currency on the web: people’s reputations. The government has implemented a widespread programme of agents who engage in dialogue and disseminate party content on the very sites that are speaking out against the Party. Using social media techniques (in, it should be noted, similar ways as PR and marketing people in the UK use them), they seed the “correct” ideas in the most acceptable way that these issues can be introduced in this medium: through dialogue and discussion. By dropping links. By asking for evidence. By offering “alternative” points of view. They’ve done a great job of implementing the most effective propaganda tool in the book: social pressure.
I have no doubt that the Iranian government will learn from its unsuccessful attempts to shut itself off from the rest of the world. In fact, Morozov has already begun to document how they’re doing it: fake videos and Twitter and Facebook updates. Clever clever. He also offers examples of how this is going even wider, from Nigeria to Russia and beyond.
In the short term, as one audience member commented, this is going to ruin our fun. Other people agree. We don’t want governments in our Web! It’s a rhetoric-free platform! We populate it, for us and our interests! We don’t want it to be used to manipulate us! Sorry folks, it’s too late. It’s a hugely powerful tool. Of course they want to harness it. And they will.
Thanks to the Science Museum for hosting this event and, of course, the participants themselves.
Comments
Fascinating blog post! It makes me wonder: if the Internet and social media had been around in Hitler’s time, would World War II in Europe have had an entirely different outcome, or even occurred at all? Could a World War III emanate from social media manipulation, intentionally or perhaps, unintentionally? Sometimes my pondering goes to dark places. X-/
As an ageing whatever, the first question asked on planning a revolution was “How do we get control of communication?” This seems to be the same thing – the web represents freedom at the moment but TPTB will get control of it or lose. This has already happened in some places…
I find this article interesting because, as a Broadcast Communication student in the Philippines, I am more exposed to the mainstream media than to the new media in general. Your insights are very significant, especially as we are studying the internet and new media culture – which has fewer studies and publications in our university – at the same time as the mainstream media, which is my college’s area of expertise.
Aldrin – glad to be of service!