In 1955, Isaac Asimov published his short story "Franchise". In it he describes the first electronic democracy, in which the most advanced computer in the world (Multivac) decides the vote of an entire nation with the input of a single human voter.
While we have not yet reached that ominous future, the role of artificial intelligence and data science is becoming increasingly important in the course of democratic elections. The electoral campaigns of Barack Obama and Donald Trump, the Synthetic Party of Denmark and the massive theft of information in the Macron campaign are good examples.
Opinion monitoring: sentiment analysis
One of the first success stories in the use of big data and social network analysis to steer an electoral campaign was Barack Obama's bid for the United States presidency in 2012. In that campaign (and in many others since), traditional voting-intention polls, based on telephone calls or personal interviews, were complemented with the analysis of social networks.
These analytics offer a cheap and near real-time method of gauging voter sentiment. For this, Natural Language Processing (NLP) techniques are applied, particularly those dedicated to sentiment analysis. These techniques analyze the messages contained in tweets, blog posts and the like, and try to measure whether the opinions expressed in them are positive or negative with respect to a given politician or electoral message.
The main problem is sampling bias: the most active users on social networks tend to be young and technophile, and they do not represent the entire population. These techniques therefore have limitations when it comes to predicting electoral results, although they are very useful for studying voting trends and the state of public opinion.
Intervention in electoral campaigns: the case of Donald Trump
More disturbing than studying emotions on social networks is using them to influence opinion and modulate the vote. A well-known case is that of Donald Trump's campaign in the 2016 US presidential elections: big data and psychographic profiles had a lot to do with a victory that the polls had failed to predict.
It was not mass manipulation: different voters received different messages based on predictions about their susceptibility to different arguments, receiving information that was biased, fragmented and sometimes contradictory with other messages from the candidate. The task was entrusted to the company Cambridge Analytica, which was later embroiled in controversy over the unauthorized collection of data on millions of Facebook users.
Cambridge Analytica’s method was based on Kosinski’s psychometric studies, which found that from a limited number of likes it is possible to build a profile of a user as accurate as one drawn up by their family or friends.
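The core idea behind such profiling can be caricatured as a linear model over a user's likes: each liked page carries a learned weight that nudges a predicted personality trait up or down. Everything below (page names, weights, the trait) is a hypothetical illustration, not the actual model from those studies, which was fit by regression on data from millions of users.

```python
# Hypothetical learned weights: how much each liked page shifts a
# predicted trait score (e.g. "openness"). Invented for illustration.
LIKE_WEIGHTS = {
    "philosophy_page": 0.8,
    "modern_art": 0.6,
    "monster_trucks": -0.5,
    "reality_tv": -0.4,
}

def predict_trait(likes: set) -> float:
    """Linear model: sum the weights of the pages the user has liked."""
    return sum(LIKE_WEIGHTS.get(page, 0.0) for page in likes)

user_likes = {"philosophy_page", "modern_art", "reality_tv"}
trait_score = predict_trait(user_likes)  # 0.8 + 0.6 - 0.4
```

With trait scores like this in hand for each voter, a campaign can then route different messages to different psychological profiles, which is the targeting described above.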
The problem with this approach lies not in the use of technology but in the “covert” nature of the campaign: the psychological manipulation of susceptible voters through direct appeals to their emotions, or the deliberate spreading of fake news through bots. The latter was the case for Emmanuel Macron in the 2017 French presidential elections. His campaign suffered a massive email theft just two days before the vote, and a multitude of bots set about spreading supposed evidence of crimes contained in the stolen information, which later turned out to be false.
Political action and government: the Synthetic Party
No less disturbing than the previous point is the possibility of being governed by an artificial intelligence (AI). Denmark opened the debate in its last legislative elections, contested by the Synthetic Party, led by an AI chatbot called Leader Lars, with the aspiration of entering parliament. Behind the chatbot there are humans, of course; in particular the MindFuture foundation for art and technology.
Leader Lars was trained on the electoral programs of fringe Danish parties since 1970 to shape a platform that would represent the roughly 20% of the Danish population that does not go to the polls.
While the Synthetic Party may seem like an extravagance (with proposals as daring as a universal basic income of more than €13,400 a month, twice the average wage in Denmark), it has served to stimulate debate about the ability of an AI to rule us. Can a contemporary, well-trained and well-resourced AI really rule us?
If we look at the recent past of artificial intelligence, we see that advances follow one another at breakneck speed, particularly in natural language processing since the appearance of architectures based on transformers. Transformers are huge artificial neural networks trained to generate text, but easily adaptable to many other tasks. Somehow, these networks learn the general structure of human language and end up acquiring knowledge of the world through what they have “read”.
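The key operation inside a transformer is "attention": each word looks at every other word and weights it by relevance. A stripped-down sketch of scaled dot-product attention, with tiny invented vectors (real models use thousands of dimensions and many stacked layers), looks like this:

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Weight each value vector by how similar its key is to the query."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# The query matches the first key most closely, so the output leans
# toward the first value vector.
context = attention(query=[1.0, 0.0],
                    keys=[[1.0, 0.0], [0.0, 1.0]],
                    values=[[1.0, 0.0], [0.0, 1.0]])
```

Stacking many such attention layers, trained on vast text corpora, is what lets these models pick up the structure of language described above.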
One of the most advanced and spectacular examples, developed by OpenAI, is ChatGPT: a chatbot able to answer almost any question posed in natural language coherently, to generate text, and even to perform tasks as complicated as writing computer programs from a few prompts.
Free of corruption, but without transparency
The advantages of using an AI for government action would be several. On the one hand, their ability to process data and knowledge for decision making is far superior to that of any human. It would also be free (in principle) from the phenomenon of corruption and would not be influenced by personal interests.
But, today, these chatbots merely react: they consume the information someone provides and produce answers. They are not really free to think “spontaneously” or to take the initiative. It is more appropriate to see these systems as oracles, capable of answering questions such as “what do you think would happen if…” or “what would you propose in case of…”, rather than as active or governing agents.
The possible problems and dangers of this type of intelligence, based on large neural networks, have been analyzed in numerous scientific studies. A fundamental problem is the lack of transparency (“explainability”) of the decisions they make. In general, they act as “black boxes” without us being able to know what reasoning they have carried out to reach a conclusion.
And let’s not forget that behind the machine are humans, who may have introduced certain biases (consciously or unconsciously) into the AI through the texts used to train it. Nor is the AI free from giving wrong data or advice, as many ChatGPT users have been able to experience.
Technological advances allow us to glimpse a future AI capable of “ruling us”, though for now not without essential human oversight. The debate should soon move from the technical level to the ethical and social one.