THE FRIGHTENING REALITY BROUGHT BY ARTIFICIAL INTELLIGENCE: SYNTHETIC AUTHENTICITY, OR DEEPFAKE
“Don’t judge anyone by the story others tell about them,” said Plato. But what if the person apparently speaking to us isn’t really that person at all? We already live in a world where trust and belief are severely eroded. Now imagine that, with the spread of synthetic authenticity, or deepfake technology, which we might also call deep technical deception, highly sophisticated fake videos can be produced instantly with simple applications. I recall a saying I often heard from my father and other elders: “Don’t believe everything you hear, and believe only half of what you see. Investigate the other half.”

According to scientific research, 87% of the information relayed to the brain arrives through the eyes, 9% through the ears, and 4% through the other senses. Decision-making is a complex process shaped by many factors, including intuition, yet we now live in an era in which people can be deceived through those very senses.
Deepfake, a technology that can be summarized as the manipulation of images via artificial intelligence, can influence decision-making mechanisms by making the fake seem real.
Deepfake erases the distinction between the fake and the real, and it is not the technology itself that determines the direction of its effect but the purpose for which it is used. Currently, most videos made with it are meant to surprise and amuse us. The trajectory, however, does not seem to be heading toward amusement alone.
Before the February 2020 Delhi Legislative Assembly elections, Manoj Kumar Tiwari, then the Delhi head of India’s ruling Bharatiya Janata Party (BJP) (he no longer holds this position), openly and legally used deepfake technology in his election campaign.
A campaign video Tiwari shot in Hindi was later converted into English and Haryanvi (a local language) versions using AI-based deepfake technology. According to one party official, deepfake allowed the campaign to scale its efforts: since Tiwari could not speak Haryanvi, the technology let him reach voters in their own language. According to another official, some voters (housewives) found it “encouraging” to hear Tiwari speak Haryanvi.
In May 2018, the Flemish Socialist Party (SPA), a political party in Belgium, created a deepfake video of then US President Donald Trump, known for his opposition to climate-change action, to draw attention to the issue. At the end of the video, published on the party’s social media accounts, the manipulated Trump says, “We all know climate change is fake, just like this video.” Notably, Trump’s lip movements were not even fully synchronized. Despite this, some social media users could not tell that the video was fake.
If certain users’ perception of reality can be shaken even under these circumstances, what awaits us when deepfake technology advances further and its creators deliberately set out to make their videos pass as real?
According to the US House Intelligence Committee, deepfakes pose significant challenges to national security and democratic governance because voters can no longer trust their own eyes and ears when evaluating what they see on screen. Similarly, many politicians expressed concern that deepfakes could affect the results of the November 2020 US presidential election.
A report published in September 2019 by the NYU Stern Center for Business and Human Rights identified social media as a factor accelerating the spread of misinformation and recommended that deepfake videos be detected and removed before they cause harm. Reflecting these general trends in the US, social media companies banned deepfake videos ahead of the election.
The intense concern of the American public (especially Democrats) about deepfakes before the 2020 US presidential election stemmed mainly from claims that artificial intelligence had been used to manipulate voters in the 2016 election and that Donald Trump’s victory over Hillary Clinton was connected to this.
As you may recall, in 2018 some media outlets reported that Cambridge Analytica had accessed the data of roughly 270,000 Facebook users who volunteered for a personality test created by the company, along with the data of those users’ friends. It was initially claimed that the data of 50 million users had been accessed. More dramatically, Facebook admitted that the data of 87 million people in total, most of them (70.6 million) Americans, may have been improperly shared with Cambridge Analytica.
According to publicly available information, Cambridge Analytica worked for conservative candidates in the 2014 US Senate elections, the Leave.EU group during the Brexit process, Republican presidential candidate Ted Cruz in 2015, and Donald Trump in the 2016 election.
According to Andy Wigmore, communications director of the Leave.EU campaign, artificial intelligence, used the way they used it, can tell you everything about individuals and how to persuade them with a tailored ad. In this context, Wigmore called Facebook “likes” the “most powerful weapon” of their campaign.
Thus, data harvested through bad intent and system flaws from the simple clicks recording Facebook users’ likes may well have changed the fate of nations and of the world. And this process can continue with new technologies such as deepfake.
In March 2021, the FBI stated that it expects synthetic content produced with AI-based techniques such as deepfakes and generative adversarial networks (GANs) to be used by malicious actors in cyber and foreign influence (infiltration) operations within the next 12 to 18 months.
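To make the mechanism behind such synthetic content concrete, the sketch below illustrates the core adversarial idea of a GAN in PyTorch: a generator learns to produce fake samples while a discriminator learns to distinguish them from real ones, and each improves by competing against the other. This is a minimal toy for illustration only; the network sizes, the stand-in “real” data, and the training settings are all assumptions, not any actual deepfake system.

import torch
import torch.nn as nn

LATENT, DATA = 16, 64  # hypothetical latent and data dimensions

# The generator maps random noise to synthetic samples; the
# discriminator outputs a real-vs-fake logit for each sample.
G = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, DATA))
D = nn.Sequential(nn.Linear(DATA, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, DATA) * 0.5 + 1.0  # stand-in for real data
    fake = G(torch.randn(32, LATENT))          # generator's forgeries

    # Discriminator step: label real samples 1 and fakes 0.
    d_loss = loss_fn(D(real), torch.ones(32, 1)) \
           + loss_fn(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: push the discriminator to call fakes real.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

Production deepfake systems apply this same adversarial competition to faces and voices at a vastly larger scale, which is why the resulting forgeries can become progressively harder to detect.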
According to the FBI, Russian and Chinese actors are already using synthetic content in foreign influence operations, and malicious cyber actors will adopt these AI-based techniques in their cyber operations as an extension of existing spear-phishing and social engineering. The sophistication of synthetic content, however, will make the impact more severe and widespread. Estonia’s intelligence service has likewise claimed that Russian special services are likely trying to develop deepfake technology for cyber warfare. In this context, according to Estonian officials, Russia remains the “primary threat” to the EU in cyberspace.
A US expert on propaganda videos used by terrorists suggested that advanced deepfake technology could disrupt the decision-making ability of enemy elements on the battlefield. In this context, it has been proposed that deepfake technology be closely examined and integrated into military operations in the future.
In April 2021, it became public that politicians from the Netherlands, Latvia, Ukraine, Estonia, Lithuania, and the UK had held video conferences with an individual who used deepfake technology to impersonate Leonid Volkov, chief of staff to Russian opposition leader Alexei Navalny. That officials can be drawn into such discussions with unidentified individuals shows that deepfake poses a security vulnerability for states.
The application areas of deepfake technology are diversifying. Indeed, in April 2021 it was revealed that the University of Washington had developed software capable of both creating deepfake satellite images and detecting fake ones. The availability of such technology creates critical risks, such as concealing the actual situation on the ground or misdirecting military assets toward false targets.
The US military has announced progress in scientific work aimed at detecting deepfakes, which it emphasizes as a threat to US society and national security. Meanwhile, countries such as China and the UK are also working through their official institutions to counter deepfake threats.
Disinformation campaigns, long used to gain advantage in areas such as diplomacy, trade, and military operations, are likely to reach a new level with deepfake technology. A technology that allows the manipulation of both civilian and military decision-making must be approached with caution.
