IMAGINE, if you will, sometime in the distant future: while you are casually walking down a busy high street in downtown Kuala Lumpur, your attention is suddenly drawn to a crowd of people huddled closely together – enveloped by intense and thunderous chatter – before a television screen in a shop window.
A broadcast is streaming: what appears to be the prime minister is publicly declaring his abrupt resignation from office at a press conference in Putrajaya, citing his inability to cope with the pressures of government service and his desire to retire permanently.
Emotions run high and the crowd disperses in rage, leaving the scene immediately and shouting obscenities, while you stand gobsmacked.
You frantically return home to your computer, wanting to uncover the reasons for the prime minister’s decision when you realise something rather odd.
The prime minister made no such statement; he was still very much abroad, attending an international summit.
You come to find that his voice and likeness had been accurately replicated by deep fake technology. The fraudulent live stream was created by malicious parties – as part of a political ploy – to tarnish the prime minister’s reputation, cause mass confusion and instigate social unrest in Malaysia.
You are left in total disbelief, deceived by near-authentic footage of a telecast that simply never took place.
This grim reality is not far from us, given the steady advancement of this technology.
The authorities ought to urgently look into the issue of deep fakes and how their potential weaponisation could threaten national security and the welfare of Malaysians.
Deep fakes have only recently been introduced into the cultural lexicon, gaining notoriety a few years ago.
The term itself derives from the fact that the technology comprises artificial intelligence software that undergoes a process of “deep learning” in order to produce accurate forgeries.
Through deep learning – a rigorous process of exposing artificial intelligence to information – the software is programmed to analyse swathes of data on a particular subject, be it Instagram posts, YouTube videos and the like, gathering information and developing a comprehensive profile.
It is from that very profile that the programme is able to produce images or videos of the subject, which can be directed to say or do virtually anything in that subject’s likeness.
This is because enough information on the subject has been gathered to accurately simulate his or her speech patterns and facial appearance, even without a recording of the subject saying anything in particular.
The programme can nevertheless be directed to depict the subject in a realistic way.
This makes it possible, for example, to produce fake videos depicting Hollywood celebrities performing outrageous acts, American presidents saying the foulest of things and public figures in compromising positions – footage indistinguishable from reality that may well have you fooled.
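The “profile-then-synthesise” process described above can be sketched in miniature. The toy program below is illustrative only: real deep fake systems train deep neural networks on images and audio, whereas this sketch fits a tiny linear autoencoder to noisy numerical “observations” of a subject, then uses the learned model to synthesise a new sample in the subject’s likeness. All names and numbers here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# The subject's underlying "profile": eight numbers standing in for face/voice features.
subject = rng.normal(size=8)
# 200 noisy "observations" of the subject (the Instagram posts, YouTube clips and so on).
samples = subject + 0.05 * rng.normal(size=(200, 8))

# A tiny linear autoencoder: compress 8 features to 3, then reconstruct.
W_enc = 0.1 * rng.normal(size=(8, 3))
W_dec = 0.1 * rng.normal(size=(3, 8))
lr = 0.01

def reconstruction_error(W_enc, W_dec):
    recon = samples @ W_enc @ W_dec
    return float(np.mean((recon - samples) ** 2))

before = reconstruction_error(W_enc, W_dec)

# "Deep learning", crudely: repeated exposure to the data, nudging the
# model's weights to reduce reconstruction error (gradient descent).
for _ in range(500):
    code = samples @ W_enc
    err = code @ W_dec - samples
    grad_dec = code.T @ err / len(samples)
    grad_enc = samples.T @ (err @ W_dec.T) / len(samples)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

after = reconstruction_error(W_enc, W_dec)

# Once trained, the model can synthesise a plausible new "sample" of the
# subject that was never actually observed.
fake = (subject @ W_enc) @ W_dec
print(before, after)  # the error shrinks as the model learns the profile
```

The point of the sketch is the principle, not the scale: given enough observations, a model can internalise a subject’s characteristics well enough to generate new, never-recorded material in that subject’s likeness.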
The potential destructiveness of deep fake technology has been repeatedly emphasised by critics ever since its modern inception.
Throughout the years in which it has been active, users have exploited the technology to digitally manipulate existing footage by superimposing the face of a particular person onto it.
This was, in fact, the purpose it served in the technology’s early days.
The technology gained notoriety on Reddit in 2017, when an anonymous user posted digitally altered pornographic videos that used the faces of prominent celebrities, making it appear as though the celebrities in question were themselves in the videos.
The videos swiftly garnered public interest and went viral.
The very first use of the technology thus already involved weaponisation: innocent people completely disassociated from the pornographic industry were degraded, their identities forcibly implicated in these lewd videos.
Deep fake technology, in the absence of strategic safety parameters, allows for the widespread assault on human dignity to be facilitated without challenge.
There were further instances of deep fake technology being used to create sexually explicit content modelled after high-profile internet personalities.
Female streamers on Twitch, an online live-streaming platform, suffered from the mass circulation of deep fakes that appropriated their likeness, causing a grievous upset in the internet community.
Due to the sinister combination of online media’s capacity to make content viral and the raw capabilities of deep fake technology, virtually no action could be taken as the videos were shared and replicated.
The subsequent democratisation of this technology, which made it accessible to the public, caused a significant shift in online media.
Realising its potential for satire, internet users produced relatively “harmless” videos for the purposes of parody.
The technology was still in its early stages and, in the eyes of the public, there appeared to be little danger in circulating videos that could be immediately identified as fraudulent, so long as they were in the service of internet humour.
It would become apparent over the years, however, that the consequences of deep fake technology were not trivial and that it had the potential to cause damage of near-epic proportions.
In 2022, a fraudulent video of Ukraine President Volodymyr Zelenskyy demanding the surrender and outright acquiescence of Ukrainian soldiers to the Russian military was circulated on social media.
Ukrainian TV stations – in what appeared to be a geopolitical, retaliatory attack – were hijacked to televise the fake broadcast in an attempt to cause mass confusion.
Fortunately, the Ukrainian authorities swiftly took down the video and issued clarifications to the public.
It is important to note that while the deep fake was easily identifiable as fraudulent at the time, owing to particular irregularities and distortions in the video, it nevertheless demonstrated that the technology could be galvanised to jeopardise the integrity of a sovereign state.
The technology has also posed a threat to international organisations and institutions.
A person who digitally altered his video feed to mimic the likeness of the Mayor of Kyiv managed to dupe senior European Union officials into conducting video calls with him, demonstrating that deep fakes could be exploited to carry out espionage against governments.
It may be firmly established, therefore, that the technology in question is indeed a matter of national security concerning the government and its citizens.
Its influence is on an upward trajectory and, if little is done to strategically contain it, it could very well weaken Malaysian security, afflicting the lives of many innocent Malaysians, who are the most vulnerable to it.
Deep fake technology’s potential in the area of criminal malfeasance is limitless.
An advanced variant of this technology could dupe financial institutions into legitimising fraudulent transactions, circulate politically provocative content to incite geopolitical tensions, facilitate identity theft, blackmail individuals through synthetic revenge porn and instigate campaigns of deliberate disinformation and misinformation. The list is not exhaustive.
Despite the negatives of deep fake technology, it would be wrong to exclude discussion of the benefits it could confer on society if strictly regulated.
Deep fake technology could be used in the filmmaking and advertising industries to make realistic footage more accessible from remote locations.
It could also be incorporated into education and research, allowing for simulations of historical re-enactments and experimentation.
What is needed is a middle ground, one that recognises the detrimental effects of deep fake technology while simultaneously accommodating useful advancements in technology.
The government must develop a comprehensive strategy to counteract and combat deep fake technology.
One priority for the Communications and Digital Ministry should be to consider stricter legislation.
In the early months of 2023, the Cyberspace Administration of China instituted new policy measures that outright outlawed the creation of deep fake media without the explicit consent of the individuals depicted.
National policies may also be modelled after those of the European Union and the US, which prohibit the dissemination of deep fakes in areas that raise political concerns and implicate people in pornographic material.
Consideration must also be given to extending present legislation to revise the definition of personal data, making it inclusive of a person’s voice, face and likeness in a way that prevents the digital mimicry of persons.
Since the technology in question is still in its infancy, there must also be national campaigns to raise awareness of its existence and its detrimental effects.
This could aid the public in identifying more sophisticated forms of deep fake fraudulence.
Investments in the development of new technologies would be pivotal in this area.
Deep fake detection technologies would be immensely helpful to both the authorities and the public in immediately identifying and reporting harmful forgeries.
It is of crucial importance that Malaysia strengthen its data borders.
The government’s recent announcement of the creation of a cyber security commission could coincide with new studies in the area of deep fake technology.
As early as last year, Europol, Europe’s policing agency, issued a warning over the dangers of foreign actors deploying deep fake technology to undermine public trust in government institutions.
Such a rupture in the relationship between the public and the government could be further exploited in ways that destabilise countries.
We must consider ourselves fortunate that we still have the capacity to resolve the issues deep fake technology could cause, but there could very well come a time, if it is left unchecked, when it is simply too overwhelming to stop.
This situation must therefore be urgently addressed before it becomes the country’s future affliction.