PROFESSOR TSHILIDZI MARWALA
IT IS election season once again in the United States and, if the latest polling data is anything to go by (and barring anything unusual), Donald Trump will lose this election.
His victory in 2016 was aided by the confluence of Facebook, Cambridge Analytica, Big Data and Artificial Intelligence (AI). We now live in an era where we are always connected to an electronic device, whether we are on Twitter, Facebook, or even WhatsApp.
Being connected means that we are being tracked and consequently influenced to act in a particular way. This influence includes what we buy and how we vote.
One could conclude that we are surrendering our democracy to machines.
Nevertheless, what is wrong with intelligent machines? If intelligent machines can monitor our vital data, determine whether we are at risk of dread diseases, and alert a doctor, why complain about the machine? Machines bring many good attributes to our lives.
Though machines, like humans, can discriminate against ethnic minorities and poor people, they are more easily corrected than humans are. For example, police in the US use Idemia, a system that scans faces using algorithms. Yet results from the National Institute of Standards and Technology indicated that two of Idemia's algorithms were more likely to misidentify black faces than white faces.
Machines can be corrected by retraining them on representative data, while human beings require far more than education to unlearn discrimination.
With the problems we face in the judiciary, where there are accusations of bias, we should perhaps start to think seriously about replacing our lawyers, judges and leaders of Chapter 9 institutions with AI machines. This is a serious proposition because AI machines are proving to be more consistent, and thus arguably more rational, than human beings.
Going back to Trump, the question becomes, what can go wrong with an election? Recently, I read the book Deep Fakes: The Coming Infocalypse by Nina Schick. One of the big fears about elections is deepfake technology.
Deepfakes are called “deep” because they are based on deep neural networks, which map an input (one person’s facial image) to an output (another person’s body). Facebook uses deep neural networks for facial recognition, automatically labelling images as they are uploaded.
Deep neural networks have also been used to create generative adversarial networks (GANs), which can fake a person saying things they did not say and create images of people who have never existed. In effect, this is akin to transposing the face of one person onto another.
GANs are made of two neural networks: a generator, which produces an image or other data, and a discriminator, which classifies whether that output is real or fake. Training continues until the generator becomes so good that the discriminator can no longer tell its output from real data. The competition between the generator and the discriminator is modelled using game theory.
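The adversarial game described above can be sketched with a toy one-dimensional example: a tiny linear “generator” learns to mimic Gaussian “real” data while a logistic-regression “discriminator” tries to tell the two apart. This is a minimal illustration of the principle only, not how production deepfake systems are built (those use deep convolutional networks trained on huge image sets); every parameter and number below is invented for the sketch.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Clamp to avoid overflow in exp() for large |x|.
    return 1.0 / (1.0 + math.exp(-max(min(x, 30.0), -30.0)))

# "Real" data: samples from a Gaussian centred at 4.0.
def real_sample():
    return random.gauss(4.0, 0.5)

# Generator: g(z) = a*z + b on noise z ~ N(0, 1); starts far from the real data.
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), a logistic regression.
w, c = 0.0, 0.0
lr = 0.02

for step in range(3000):
    # Discriminator update: push D(real) towards 1 and D(fake) towards 0.
    x_real = real_sample()
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator update: push D(fake) towards 1, i.e. fool the discriminator.
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    grad = (1 - d_fake) * w  # gradient of log D(g(z)) w.r.t. the fake sample
    a += lr * grad * z
    b += lr * grad

# Average of generated samples drifts toward the real mean of 4.0.
fake_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(500)) / 500
print(round(fake_mean, 2))
```

At equilibrium the discriminator can do no better than guessing, which is exactly the game-theoretic endpoint the paragraph above describes: the generator’s output has become statistically indistinguishable from the real data.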
GANs can be used for essential functions such as estimating missing data, including pictures that are not complete. For example, they have been used to complete images of incomplete statues of Roman emperors from 2,000 years ago.
GANs can also be used to harm people. The first iterations of GANs were used to place female celebrities into fake pornographic videos, showing how fundamentally patriarchal our global culture is. Disturbingly, these videos are remarkably convincing.
According to the visual threat intelligence company Sensity, non-consensual deepfake pornography accounted for 96% of all deepfake videos online, with 99% of those mapping the faces of female celebrities onto porn performers.
The impact of deepfakes in influencing public thought is already apparent. As Quartz journalist Olivia Goldhill put it in November 2019, “Today’s sexist weapon is tomorrow’s political tool”.
For instance, in 2019, a video of Nancy Pelosi, the speaker of the US House of Representatives, was intentionally slowed to about 75% of its original speed, with the pitch altered to make it appear as though she was slurring her words.
The video, which went viral after being posted on a Facebook page called Politics Watchdog, is often cited alongside deepfakes, though strictly it is a “shallowfake”, made with simple editing rather than AI. While many amusing doctored videos like this are available online, their use in politics presents a disturbing reality: how do we discern the fake from the genuine?
The Pelosi video prompted US intelligence officials to issue a warning ahead of the 2020 elections about the use of deepfakes to influence political campaigns.
In another instance, a deepfake was created to make it seem as though Barack Obama had used unfavourable terms to describe current US President Donald Trump. The potential for disinformation to spread, and for public thought to be swayed, is stark, and such videos could be used to swing elections.
In a political campaign created by RepresentUS, a grassroots anti-corruption organisation, manipulated videos of North Korean dictator Kim Jong-un and Russian President Vladimir Putin were used to warn Americans that their democracy is in danger. Using this technology, entirely fictitious people can also be created.
ThisPersonDoesNotExist.com, for example, provides a snapshot of how easy it is to create a new face and transplant your views onto it – effectively creating an AI human that is disturbingly realistic.
As the creator of the website, Phillip Wang, put it, “most people do not understand how good AIs will be at synthesising images in the future”.
Deepfakes fall under the general area of information warfare. What can South Africa do to protect people from deepfakes? Firstly, we need to understand what deepfakes are.
Secondly, we need to understand the underlying technologies driving deepfakes: AI, machine learning, big data and social networks. Thirdly, we need to come up with strategies to control or limit their negative impact.
This should include introducing laws that specifically deal with deepfakes.
Furthermore, it should include capacitating our national intelligence sector to have the capabilities to develop and deploy technologies that limit the proliferation of deepfakes.
- The views expressed in this article are those of the author/s and do not necessarily reflect those of the University of Johannesburg.