Metal Gear Solid 2, Fake News and Decentralized Information

Paulo Almeida
Nov 24, 2020 · 6 min read

The rise of the internet and the ever-growing speed of information transmission gave birth to some words that are now common currency. It is not hard to browse social media, or any other venue for social interaction, and find huge discussions about whatever controversial topic is trending. This creates an environment crowded with people eager to show their moral superiority by proposing simple solutions to complex problems, ignoring any logical consequences of their proposals.

Released in 2001, Metal Gear Solid 2 is the sequel to one of the most influential games of all time, Metal Gear Solid. Conceived by Hideo Kojima, it puts the player in control of a character codenamed Raiden, a special agent whose mission is to rescue hostages held by a terrorist group. Thanks to Kojima's superb storytelling, the plot evolves into an elaborate spy-fiction thriller that touches on several current hot topics, which is remarkable given that the game was developed almost 20 years before I wrote this article. Fake news, social bubbles and cancel culture are some of the topics discussed in the game's ending dialogue.

Free will and social bubbles

The dialogue between Raiden and the artificial intelligence at the end of the game is where the discussion of a centralized information system appears. For 15 minutes the characters debate the future of a society whose information system is free and decentralized. The AI argues that this freedom will cause chaos in society, so a central agent is needed to resolve the dilemma, in this case the AI itself.

The main point of the AI's argument is that humans cannot be trusted to think for themselves, because they will believe anything that appeals to their confirmation bias. Our inability to distinguish facts makes us prone to accept only information that confirms our beliefs, ignoring anything that challenges them and seeking refuge in social bubbles. The absence of contradictory opinions inhibits the process of maturing ideas. In terms of Hegel's dialectical method: without a contradiction between thesis and antithesis, there is no synthesis.

The elephant analogy

The word meme is used nowadays to refer to humorous images and viral content, but the term was coined by Richard Dawkins to describe a unit of cultural information. This concept of the meme runs through the dialogue, since the AI recognizes that genes alone cannot pass human history forward. Every bit of knowledge, ethics and opinion is passed on via memes, which are themselves subject to natural selection.

The AI argues that the digital era subverts this evolutionary process. Natural selection is thrown away, since human beings only validate convenient information, applying no logical filter. Jonathan Haidt argues in his book "The Righteous Mind" that human intuition, portrayed as an elephant, will always take priority over reason, portrayed as its rider. The elephant is shaped by a person's internal and external circumstances, such as the moral values acquired since birth. Reason works as a mechanism to justify the elephant's actions and nothing else.

Reason's justification is selective: the elephant decides which way to go, and the rider has to justify the animal's deeds. That would explain why people tend, as the AI says, to choose information that confirms their beliefs. It is painful to be presented with facts that contradict our beliefs, since our internal elephant picks its side automatically. If human beings make decisions based on subjective values and cannot free themselves from confirmation bias, would an AI acting as an information mediator be the perfect solution to that problem?

The mediator between head and hands must be the… machine?

In the first moments of the dialogue, the AI says that its consciousness was formed inside the White House, much like the first life forms arose from the combination of chemical elements long ago. To make it clear: the AI presents itself as the perfect evolution of American morals, and its goal is to impose those values on everybody. The machine is not exempt from human values; it pushes its own political agenda, something it believes is right.

It is undeniable that the digital era increased the supply of information. Before the internet, access to information was sparse; people had to rely on the newspaper, the radio or the TV newscast. One can argue that those media have stricter filters on what counts as good information than the internet does, but even that filter is not immune to value judgments and ideology. Seeking out facts to test one's point of view is a task most people neglect, yet a regulating body is susceptible to imperfections of its own and, if it is the only source of information, it can use its monopoly to ratify biased news.

Replacing the centralizing figure of a human dictator with Metal Gear's AI would only swap one ideology for another. The machine claims that it would present information factually, but it is impossible to escape an individual's political biases. If you want to see this for yourself, just note how different media channels report the same fact. Even if the machine tried to inhibit any kind of biased interpretation, individuals would always fall back on the elephant analogy and formulate whatever reasoning supports their political and moral values.

The sickness and the cure

The problem with a decentralized information system is the creation of social bubbles of individuals with compatible moral values, which inhibits the natural selection of ideas. However, a free market of competing media, combined with open debate and a refusal to dehumanize individuals with different political views, might solve that problem better than a central figure. Neutral information is still valued by society, so a news outlet that tries to present factual information will be recognized, though that does not make it immune to criticism, since everyone makes mistakes. At the same time, an outlet whose ideology is openly declared will also find an audience, and it is always better to know in advance that a source is biased than to rely on a single source that claims to be neutral but can imperceptibly sneak its ideology in at any given time.

Centralization will not solve a thing, even if the mediator is a machine. The AI may claim that censorship is not its goal, only giving context to information for future generations. Even if that is the case, the machine will contextualize what it thinks is right based on its American values. Perhaps it will act in a utilitarian way, omitting true but inconvenient information for the sake of society's development, interpreting facts in whatever way it judges beneficial. And if the AI chooses the purely factual route, it must accept that individuals will interpret the same fact differently; the only way to prevent that is to do what it pledged not to do: censor.

The AI's arguments matter and, even though the game was released in 2001, they touch on important topics for the digital society, but only decentralization can address that problem satisfactorily. Biased news will exist, but the best instrument against misinformation is the freedom to criticize content, where the market judges and rewards the news providers that supply a demand. The sickness is also the cure, and a mediator would only impose its values on everybody. If we cannot control our elephants, we can at least understand how they work, trying to confront our beliefs even if the process is quite painful.

This article was based on a video by Max Derrat.
