REFLECTIONS OF ARTIFICIAL INTELLIGENCE ON DEMOCRATIC GOVERNANCE
The new possibilities opened up by artificial intelligence are not mere technological advances; they stem from deep historical roots and have both direct and indirect effects on democratic governance, an integral part of modern society. At the same time, given the versatility of artificial intelligence, it is difficult to classify these effects definitively as either beneficial or threatening to democratic governance.
One technology that can be evaluated in this context is the deepfake, which can be summarized as the manipulation of images and video through artificial intelligence. Deepfake videos, which became practical by late 2017, are considered a potential threat to individual rights and democratic governance rather than merely a source of humor. Even at this early stage of the technology's development, AI-generated fake videos are already difficult to distinguish from real ones.
The first point of concern regarding deepfake videos is that they can be produced not only by comedians and anonymous internet users but also through official channels.
In May 2018, the Flemish Socialist Party (SPA), a political party in Belgium, created a deepfake video of U.S. President Donald Trump, who was known for opposing policies aimed at combating climate change, in an effort to draw attention to the issue. At the end of the video, shared on the party’s social media accounts, the manipulated Trump says, “We all know that climate change is fake, just like this video.” Interestingly, Trump’s lip movements were not even fully manipulated, yet some social media users failed to recognize the video as fake. If deepfake videos can already shake some users’ sense of reality under such conditions, what might we face once the technology advances further and its creators deliberately set out to make their videos appear real?
According to the U.S. House Intelligence Committee, deepfakes pose significant challenges to national security and democratic governance because they undermine individuals’ and voters’ ability to trust what they see and hear on screen. Accordingly, ahead of the U.S. presidential election in November 2020, American politicians worried that deepfakes could influence the outcome and urged social media companies to act against the spread of misinformation before the vote.
A report published by New York University’s Stern Center for Business and Human Rights in September 2019 suggested that social media, which accelerates the spread of misinformation, should detect and remove deepfake videos before they cause harm. Reflecting the general trends in the U.S., social media companies imposed bans on deepfake videos before the elections.
While deepfakes are by definition “fake,” this does not mean the technology inherently harms democracy; it need not be used solely for malicious purposes.
Before the February 2020 elections to the legislative assembly of the Delhi National Capital Territory, Manoj Kumar Tiwari, then the Delhi representative of the ruling Bharatiya Janata Party (BJP), used deepfake technology in his election campaign.
A campaign video originally filmed by Tiwari in Hindi was transformed, using AI-based deepfake technology, into versions in English and Haryanvi, a regional language. According to a party official, deepfake technology allowed the campaign to scale: it enabled Tiwari, who does not speak Haryanvi, to address voters in their own language while still appearing as himself. Another official noted that some voters, such as housewives, found it “encouraging” to hear Tiwari speak Haryanvi.
Although Tiwari suffered a crushing defeat in the elections, this use of AI-generated videos in a campaign for outreach rather than manipulation marked an important turning point.
The intense concern of the American public (especially Democrats) about deepfakes before the 2020 U.S. presidential election stemmed primarily from allegations that AI had been used to manipulate voters in the 2016 election, and that Donald Trump’s victory over Hillary Clinton was linked to this.
Such strong allegations rested on data obtained without individuals’ consent, whether through malicious intent or system flaws. As some may recall, in 2018 several media outlets claimed that Cambridge Analytica had accessed the data of 270,000 Facebook users who had volunteered for a personality test, along with their friends’ data; the claims suggested that information on 50 million users had been harvested. More dramatically, Facebook later admitted that the data of up to 87 million people, most of them (70.6 million) Americans, may have been improperly shared with Cambridge Analytica.
According to publicly available information, Cambridge Analytica worked for conservative candidates in the 2014 U.S. Senate elections, for the Leave.EU group during the Brexit process, for Republican presidential candidate Ted Cruz in 2015, and later for Donald Trump, the official Republican candidate in the 2016 U.S. presidential election. Of course, the success of most of these actors cannot be attributed entirely to Cambridge Analytica’s AI-based “solutions.” Yet this does not change the fact that improperly obtained data was a powerful factor in democratically taken decisions whose consequences continue to shape the global system in various ways today.
According to Andy Wigmore, communications director of the Leave.EU group, AI can tell you “everything” about individuals and how to persuade them with an advertisement like the one his group used. In this sense, Wigmore said, Facebook “likes” were the campaign’s “most powerful weapon.”
Artificial intelligence is a technology capable of developing different solutions for different societal conditions. In the U.S., for example, AI is also used to analyze the factors that help bills become laws and to estimate the likelihood that a bill will be enacted. One of the main motivations for such predictions is the low rate at which bills pass the U.S. Congress: since the 93rd Congress (January 3, 1973 – December 20, 1974), the highest share of bills enacted into law came in the 100th Congress (January 6, 1987 – October 22, 1988), at only 7%. Once it became possible to draw meaningful conclusions from complex data, a “solution” to this long-standing situation emerged. According to publicly available information from 2018, FiscalNote, known for its activities in this field, collects data on more than 1.5 million bills in Congress, the 50 state legislatures, and 9,000 city councils by scanning official websites; this data is publicly accessible.
FiscalNote, founded by Tim Hwang in 2013, uses self-learning AI to evaluate bills against contextual keywords and phrases that have historically correlated with success, then combines this with information on a bill’s sponsors and lawmakers’ voting records to estimate the probability of passage. There are, however, criticisms that such capabilities further empower an already small segment of society. Given the central role lobbying plays in drafting and passing bills in the U.S., these criticisms are not unfounded.
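The general idea described above, combining textual signals with sponsor information to produce a probability of passage, can be illustrated with a minimal logistic model. This is a hypothetical sketch: the feature names and weights are invented for illustration and do not reflect FiscalNote’s actual model.

```python
import math

# Illustrative feature weights -- NOT FiscalNote's real model.
WEIGHTS = {
    "bipartisan_sponsors": 1.2,  # 1 if the bill has cross-party sponsors
    "sponsor_pass_rate": 2.0,    # sponsor's historical share of enacted bills
    "key_phrase_hits": 0.4,      # count of historically "successful" phrases
}
BIAS = -3.0  # reflects the low base rate of enactment (historically under 10%)

def passage_probability(features: dict) -> float:
    """Combine weighted features into a probability via the logistic function."""
    score = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))

bill = {"bipartisan_sponsors": 1, "sponsor_pass_rate": 0.15, "key_phrase_hits": 3}
print(f"Estimated chance of passage: {passage_probability(bill):.1%}")
```

Because the bias term encodes the low historical base rate, a bill with no favorable features scores close to zero, while each favorable signal pushes the estimate upward.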
There are also assessments that simple correlations in historical data may not yield meaningful predictions of whether a bill will pass, and FiscalNote itself is considered to have fallen short in forecasting which bills become law. Still, FiscalNote is not the only company operating in this field, and the technology’s potential for further development should not be overlooked.
In an environment of political uncertainty, compounded by the uncertainties of the coronavirus pandemic, FiscalNote’s AI-driven solutions, which aggregate government records and offer personalized data to clients, have attracted great interest. In other words, AI can also increase transparency in the face of uncertainty created by politicians and other circumstances.
Conclusion
AI is simultaneously a threat and an opportunity for democratic governance. This outcome, however, has less to do with the characteristics of AI itself than with how data, the basic “nourishment” of AI, is obtained and how other societal processes are managed. Regulation will therefore determine what AI means for democracy. Yet, as is so often the case, practice in this field runs ahead of the law.
On the other hand, as we will discuss in our next article, AI can also become a tool that empowers anti-democratic regimes and erodes human rights. AI will therefore likely remain both a threat and an opportunity for democracies in the future.
