The Misinformation Game
How a 2003 Computer Game by DeepMind Technologies' CEO Dealt with Misinformation
A few weeks ago, on March 22nd, against the backdrop of the launch of GPT-4 and other large language model AIs (LLMs), Elon Musk, Steve Wozniak, and a host of other science and tech luminaries signed an open letter warning the world of the risk that an ‘out-of-control' AI arms race could pose to humanity, calling on all AI labs to enact a public, verifiable pause of at least six months on the training of systems more powerful than GPT-4, with a government moratorium to follow if needed. (letter, news)
Included in the letter was a list of possible research areas that could be used to make AIs safer. From the open letter:
In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Finding ways to “distinguish real from synthetic” is a way of admitting that AI systems will be able to create misinformation at large scales. So to find out how these policies might be enacted, it might be informative to dive down a long-forgotten gaming rabbit hole and see how one of the leading minds in AI development dealt with misinformation back in 2003.
Warning! Speculations ahead.
Before DeepMind Technologies
In the aftermath of the public letter being released, the former Google/Alphabet CEO, Eric Schmidt, went on This Week with George Stephanopoulos to talk about the need for appropriate guardrails around Artificial Intelligence research.
Schmidt has been thinking about landmark advances in AI for a while now. Indeed, the fact that ChatGPT came from OpenAI and not from Google/Alphabet was a bit of a shock to many in the AI world. Until recently, most of the greatest public advances in AI came from the London-based DeepMind Technologies, run by Demis Hassabis and acquired by Google in 2014.
If the name Demis Hassabis doesn’t ring a bell, it might be time to learn it.
Demis Hassabis is an example of the kind of genius polymath the UK elevates to the top of one area of extreme performance, who then seamlessly and inexplicably shifts his attention to some other completely different area of extreme performance, with seemingly no concern for the change of domain.
As a teen in the mid-90s, Hassabis had gained a reputation as a prodigious game player. He played varsity chess at Cambridge from 1995 to 1997, then went on to win the Pentamind championship (a kind of pentathlon for the mind) at the London Mind Sports Olympiad five times between 1998 and 2003.
Given all this game-playing expertise, it of course made a lot of sense to fund Hassabis as a video game developer, and that’s exactly what happened in 1998 with the ill-fated Elixir Studios.
Elixir Studios faced significant challenges over a five-year run before producing its first and flagship game, 'Republic: The Revolution', which launched in 2003 to mixed reviews. Delayed multiple times, the game promised staggering graphics, detailed simulated-city mechanics, and an astronomically accurate in-game nighttime sky, all of which Hassabis catalogued in a recurring blog.
Unfortunately for game players, but fortunately for the field of AI, the five-year struggle to create ‘Republic’ imperiled the studio, which closed in 2005 after producing only one more game.
Despite this setback, Hassabis landed on his feet, pursuing cognitive neuroscience studies at University College London and earning a PhD in that field in 2009.
Hassabis’s interest in the human brain and its potential for AI research would eventually lead to the founding of DeepMind Technologies. DeepMind would go on to make groundbreaking strides in the field of AI, first with its AlphaGo program beating the world's best Go players, then with its AlphaZero program learning to play chess, shogi, and Go at a superhuman level from self-play alone, given nothing but the rules, and later with the protein-folding tool AlphaFold.
Eric Schmidt would eventually comment on the acquisition, calling DeepMind “one of the greatest British success stories of the modern age”.
Hassabis, along with the rest of the AI community, is now urging caution. But if we want to speculate about how the polymath might be thinking about implementing guardrails around AI-generated content and AI-generated misinformation, perhaps we can dig into how his old ill-fated game dealt with in-game misinformation.
Welcome to Novistrana

In ‘Republic: The Revolution’ you play as a rising faction leader in Novistrana, a fictional ex-Soviet Eastern European country, with the goal of eventually overthrowing the country’s dictator. (Here’s a playthrough if you’re interested.)
As the player, you need to increase local support for your faction, and to do so you’ll need to recruit a team of operatives whose skills range across Force, Influence, and Commerce, which also function as the game's three core resources.
Gameplay involves your avatar and your operatives moving around the city's districts performing tasks ranging from conducting political events, to fundraising, to skullduggery, to gathering intelligence, to spreading propaganda and misinformation, all against rival factions who are operating similarly against you and each other. Each district awards resources each game day, in proportion to how strongly that district supports your faction.
Each district has different demographics, and it can be harder to get information about what’s going on in an area where your faction is out of favor.
Interestingly, some operatives can spread misinformation in a given district, which raises your faction's level of in-game ‘Secrecy’ there. Districts where you have higher Secrecy are more fertile ground for conducting skullduggery and getting away with it. So in the context of the game, misinformation actions are almost always used to provide cover for other actions.
This cuts both ways. Rival factions might try to provide cover for their own nefarious actions, for example. But if you are able to uncover information about a rival’s actions by maintaining a high level of intelligence gathering in a district, you can use that information as evidence to undermine the rival faction.
The player is thereby rewarded for keeping intelligence-gathering levels high in each district, which undermines rival factions’ ability to conduct operations and, with skillful play, eventually starves them of political support and resources.
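The feedback loop described above can be sketched as a toy model. To be clear, the function names, numbers, and thresholds below are all hypothetical; Republic's actual internals were never published. This just illustrates the dynamic where misinformation raises Secrecy, and a rival's intelligence gathering can pierce it:

```python
# Toy model of Republic's Secrecy vs. intelligence-gathering loop.
# All names and numbers are invented for illustration; the game's
# real mechanics are not public.

def run_misinformation_op(district, strength=20):
    """Spreading misinformation raises your Secrecy in a district (capped at 100)."""
    district["secrecy"] = min(100, district["secrecy"] + strength)

def covert_action_succeeds(district, rival_intel):
    """A covert action goes undetected only when your Secrecy in the
    district exceeds the rival faction's intelligence-gathering level."""
    return district["secrecy"] > rival_intel

district = {"name": "Old Town", "secrecy": 30}

# Against a rival with strong intelligence gathering, the op is exposed.
exposed = not covert_action_succeeds(district, rival_intel=60)

# After a misinformation campaign, Secrecy rises enough to cover the op.
run_misinformation_op(district, strength=40)
covered = covert_action_succeeds(district, rival_intel=60)

print(exposed, covered)  # True True
```

The point of the sketch is the asymmetry the game encodes: misinformation is not an end in itself but a multiplier on other covert actions, and the counter to it is investment in intelligence gathering rather than direct rebuttal.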
Back to reality
So how does all of this track with real life?
Well, the goal of the game is for the player to build a functioning political movement that can displace the current dictator and install the player as the new dictator/leader. Extending this view a bit, the game might be implying that strong governments, and perhaps only strong governments, are able to conduct misinformation operations.
There are big issues with this view, of course. The in-game citizens lack access to the Internet, external media sources, or indeed any media outlets not controlled by the government.
So a more accurate view of the game’s message around misinformation might be that in the absence of strong free speech protections, a totalitarian regime will lean towards domestic intelligence gathering and misinformation operations.
What then should we expect for AI regulations?
Well, last September, before the ongoing Cambrian explosion of AI tools, Eric Schmidt participated in a fireside chat with Sergey Nazarov, co-founder of the Web3 oracle network Chainlink (LINK), and was asked how blockchains and Web3 broadly might integrate into the existing Web2 world.
Specifically, Nazarov speculated that blockchains might serve as a type of guardrail for AIs, and Schmidt wondered aloud in response whether things would develop that way, before emphasizing that Web3's shift from Proof-of-Work to Proof-of-Stake blockchains was a step in the right direction, a sign of the industry “getting its act together”.
Schmidt continued commenting on coming regulations in Web3: “Societies don’t reward libertarian thinking very much because of all the issues around power and control and so forth and so on. It’s highly unlikely that a purely libertarian view of how these things are going to emerge is going to work. What instead is going to happen is governments will assert their authority, partly because that’s what governments do. And you [the Web3 audience] want to do it in such a way that it’s rational.” [starting at 20:35]
The comment was directed at the free-wheeling blockchain types within the Web3 world, but one wonders how close to home those sentiments will hit as governments turn their regulatory and legislative eyes toward the AI systems now coming into existence daily.
Will speculative pieces like this one be considered misinformation? We shall see.
Until next time.
-Jack

