The Looming Threat of AI in Elections

    • By Yvonna Tia Steele – Student, Kautilya

In their book “How to Rig an Election,” Nic Cheeseman and Brian Klaas argue that in numerous nations across the globe, the art of holding onto power has evolved into the art of manipulating elections. With such trends prevailing, the public must scrutinize the democratic process and spot the tools that aid electoral tampering. In 2023 alone, Zimbabwe enforced limits on campaign spending, altered registration fees to the detriment of the opposition, and experienced a mysterious ballot paper shortage. These tactics are a few of the subtle methods by which results are swayed while the facade of democracy is maintained.

Combating harmful online content is more crucial now than ever. With over 2 billion people in more than 50 countries heading to the polls in 2024, the worry is the use of emerging technologies like AI to spread misinformation. Recently, doctored videos featuring Amitabh Bachchan’s iconic TV show “Kaun Banega Crorepati” surfaced in Madhya Pradesh; the clips showed the actor posing political questions about the state and fueled anti-incumbency sentiment. The threat to elections increasingly lies in the ingenuity of fabricated information and the inability to hold anyone accountable in the absence of regulation.

The Global Risks Report 2024 identified misinformation and disinformation as a top risk. The concern that AI will make election rigging easier stems from precise message targeting and the democratization of disinformation. The volume and apparent authenticity of fabricated videos showing opposition leaders engaging in heinous acts could rise in 2024. A testament to this is India’s ranking as the sixth most vulnerable nation to deepfakes, according to Deeptrace’s report “State of Deepfakes,” along with the rising number of documented cases. Such disinformation has the power to influence voters, particularly in countries like India, with low literacy levels and growing digital connectivity. During elections, already characterized by intense emotions and partisan divisions, the effects of misinformation escalate. Take, for example, how misinformation regarding the CAA during the Delhi riots was linked to radicalization. Academic case studies have also examined this phenomenon in the Indian electoral landscape.

In 2019, Facebook conducted an instructive experiment to understand how the platform would be experienced by a person living in Kerala who simply followed its algorithmic recommendations. The result was a deluge of hate speech, false information, and celebrations of violence. AI gives the masses easy tools to create content that can be put on these platforms and influence the nation. The uptake of such content is easy to trace. In April 2023, Tamil Nadu Finance Minister Palanivel Thiaga Rajan claimed that a widely circulated audio clip, in which he appeared to assert that the son of the DMK’s president had accumulated significant illicit wealth over the years, was generated using AI. Police in Gujarat also received reports of manipulated videos of politicians at the peak of campaigning for the December 2022 polls. This phenomenon is neither new nor limited to one nation.

Up until now, we have addressed how the increased accessibility of AI fuels misinformation. But what about a scenario in which the positives of AI are contorted to serve selfish agendas? AI-driven voter sentiment research is a powerful tool for political campaigners, pollsters, and policymakers, since it can quickly and accurately analyze enormous data sets from multiple sources. This data and the inferences drawn from it can revolutionize how political mandates are structured. Now imagine this positive (data collated and analyzed by AI) being fed into a political campaign inside a black box: an AI system that arrives at conclusions and decisions without explaining how or why it reached them.

Black-Boxed Politics fuels various concerns: lack of transparency, undue influence, and voter suppression. While full public scrutiny of every AI design isn’t feasible, demanding transparency in key decisions is crucial. This allows us to see and challenge potential biases and political agendas lurking behind the algorithms, especially in the public sector. Without such transparency, informed political participation becomes impossible, leaving controversial decisions shrouded in secrecy.

Voters can be sent varying messages according to estimates of how receptive they would be to different arguments, all thanks to big data and machine learning. The NaMo app collects data through its surveys, which gives the political party an edge in the electoral process because disaggregated data is used to understand voter sentiment and shape strategy. Advertisements that appeal to fear will be shown to the paranoid, while those who lean conservative will be shown arguments based on custom and community. At a time when 2024 is being called the “year of elections” and we are entering the “fifth industrial revolution,” I believe an informed discourse on the impact of AI in politics is imperative. Safeguarding democratic progress against the negative consequences of such technologies will require vigilance in the form of regulation and transnational solidarity among all who cherish freedom. 2024 will test us, but it does not have to defeat us.

*The Kautilya School of Public Policy (KSPP) takes no institutional positions. The views and opinions expressed in this article are solely those of the author(s) and do not reflect the views or positions of KSPP.