
Singapore Policy Journal


Deepfakes: The Implications of this Emerging Technology on Society and Governance

On 19 September 2019, Prime Minister Lee Hsien Loong shared an article on his Facebook page about how deepfake technology was used to scam a victim out of US$243,000, highlighting the dangers of deepfakes.[1] While many dismissed this as a one-off incident, Lee’s concerns were not unjustified. Deepfakes, which refer to the use of artificial intelligence (AI) to superimpose someone’s likeness onto another individual, are a new and potentially dangerous tool capable of large-scale damage.[2] Video doctoring technology is not new, but improvements in algorithms and machine learning have made it far more accessible to the general public, such that an individual with the right knowledge could easily produce convincing deepfakes from home. Although software exists to detect deepfakes, many predict that deepfakes will eventually become undetectable, even as they become easier to produce.[3]

Because deepfake technology has thus far not been widely used in or against Singapore and Singaporeans, it is not something that Singapore is currently well positioned to deal with. Although Singapore aims to become a regional and international hub for key capabilities such as cybersecurity and AI as part of the Smart Nation initiative, I would argue that current efforts are insufficient in dealing with deepfakes.[4] Singapore does have strong legal frameworks that could be modified to safeguard against deepfakes; without such updates to current legislative and policy frameworks, however, there is insufficient protection, especially for certain segments of society. Furthermore, the lack of discourse around the subject means that there is insufficient knowledge within civil society about the potential dangers of deepfake technology.

The consequences of deepfakes

When used maliciously, deepfake technology has the capacity to cause harm at different levels—to individuals, to particular communities, and to wider society. On an individual level, deepfakes can be used to humiliate, disgrace, and compromise people. Currently, the most prominent use of deepfake technology is pornographic, with a 2019 report suggesting that approximately 96% of deepfakes superimpose the faces of celebrities and ordinary people alike into compromising scenes and positions.[5] Deepfake technology thus further complicates the regulation and control of leaked pornographic material. So long as the perpetrator has a photograph of the target’s face, the target’s likeness can be used to make pornographic content, regardless of any precautions they take. With the appropriate technology, this makes producing deepfake pornography even easier than other forms of involuntary pornography. An individual could be targeted from another country, without any knowledge of the incident or the perpetrator. This clearly violates principles of consent and privacy. Under current legislation, however, little can be done.[6]

Beyond the grotesque individual impacts, deepfakes can also be used to target particular communities or cause wider societal disruption. In May 2018, a video emerged of Donald Trump appearing to taunt Belgian citizens over their country’s position on climate change. While it was later revealed to be a deepfake, created by a Belgian political party aiming to encourage more people to support urgent climate action in the country, the damage had been done, with many Belgian citizens outraged by what President Trump “said.”[7] Closer to home, a video released in 2020 featured several major political figures, including Prime Minister Lee Hsien Loong, Pritam Singh, Jamus Lim, and Charles Yeo, singing along to the Japanese pop ballad Baka Mitai as part of a wider social media trend.[8] The video was obviously a deepfake and was seen as a humorous, light-hearted prank, but the potential ramifications are worth noting. Although these examples did not have major political consequences, with more refined technology it is not a stretch to see how deepfakes could be used more maliciously, with greater consequences for national security, public health, or social harmony. With deepfake technology enabling people to show anyone “saying” anything, the spread of violence, hatred, and division becomes easier than ever before.[9]

Current policy landscape and its limitations

As AI usage has increased in the past few years, so have frameworks surrounding its ethical and legal usage—as an application of the technology, deepfakes are technically included within such frameworks. In 2019, the first edition of Singapore’s Model AI Governance Framework, which outlines guidance regarding ethical and governance issues in implementing AI, was released to the private sector.[10] This followed the creation of the Advisory Council on the Ethical Use of AI and Data in 2018, designed to advise on and develop advisory guidelines, frameworks, and codes of practice for the implementation of AI, for voluntary adoption.[11] However, these frameworks and guidelines are non-binding, and thus do not compel individuals or groups with malicious intent to act ethically. Without any genuine threat of sanction, there is no disincentive against using deepfake technology irresponsibly; non-binding frameworks, while well-meaning, thus do not suffice in regulating deepfake usage.

While there are no laws in Singapore directly addressing deepfakes, other existing policies can address certain problems arising from them. The Protection from Online Falsehoods and Manipulation Act 2019 (POFMA), which allows for the removal of misinformation deemed to threaten national interests, could be used to mitigate the consequences of deepfakes.[12] Furthermore, Section 499 of the Singapore Penal Code states that “if a person intends to defame another person or should know that what he or she says will harm that person, then that person has committed a crime,” and could be interpreted to cover the use of deepfakes to defame other individuals.[13] Laws pertaining to sexual harassment could also be adapted to protect individuals from the harmful consequences of deepfake technology, while copyright laws could prevent intellectual property from being misused.

However, while current legislation may provide a certain degree of protection, it is insufficient in two major areas, especially in relation to individuals. First, because POFMA focuses on false and misleading statements detrimental to security, public health and safety, national harmony, or the functioning of society, it does not provide meaningful protection to individuals who have been harmed by deepfake technology. Individuals are thus reliant on civil action, which can be difficult for a myriad of reasons. Defamation must meet strict criteria in a court of law, and the way deepfakes are used creates a misalignment between the problem and the existing laws designed to counteract it. For example, the anonymous nature of the internet means there may well be nobody an individual can sue; without a clear perpetrator, defamation would be difficult to prove. Current defamation laws in Singapore are therefore insufficient to provide adequate protection to individuals. Furthermore, even if the content is later removed, the damage, in the form of reputational loss or economic harm, is often already done.

The other major gap is practical enforcement. Given the virality and anonymity of social media, as well as the difficulty of identifying the use of deepfake technology, the Singaporean government may deem that tackling every individual consequence of deepfakes is more trouble than it is worth, especially when national security or society at large is not threatened. Even among individuals, unless there is criminal conduct, the government may be unable to address every instance of deepfake use. Just as the government does not regulate every online prank, if it deems deepfake usage to be largely mischievous rather than malicious, regulation becomes difficult to justify and enforce, even though this does not trivialise the issue for those affected.

Lessons from other countries

In considering Singapore’s situation and next steps, it is worth looking at what other countries have done. Admittedly, because the technology is so new, most countries do not yet have concrete legislation or policies on deepfakes that Singapore can learn from. Commonwealth countries such as the UK may have certain laws which provide some protection, but nothing that comprehensively addresses the issue. The EU, meanwhile, has the Code of Practice on Disinformation, which, like POFMA, provides societal protection against disinformation but fails to protect individuals from the impacts of deepfake technology.

While there may be little concrete policy to adapt, Singapore should consider some of the policy proposals from the EU. For example, Article 52 of the proposed Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) sets out a transparency obligation when AI technology, including deepfakes, is used.[14] This could be adapted to require content producers or social media platforms to disclose when content is AI-generated. Other policies floated include blanket bans on the use of deepfake technology for certain applications, and rewriting sections of current defamation laws to expand protection from deepfakes and other forms of AI.[15]
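
The proposed Act does not prescribe how such a disclosure should be implemented, but a minimal sketch helps make the obligation concrete: a platform could attach a machine-readable label to any media that is declared or detected to be AI-generated. The Python example below is purely illustrative; the schema, field names, and the make_ai_disclosure helper are hypothetical and not drawn from any existing standard or platform.

```python
import json
from datetime import datetime, timezone

# Hypothetical disclosure record a platform might attach to an uploaded media item.
# All field names are illustrative; no existing standard or platform schema is implied.
def make_ai_disclosure(content_id: str, generation_tool: str, declared_by: str) -> str:
    record = {
        "content_id": content_id,            # the platform's identifier for the media item
        "ai_generated": True,                # the transparency flag itself
        "generation_tool": generation_tool,  # e.g. the app or model used, as declared
        "declared_by": declared_by,          # "uploader" (self-declared) or "platform" (detected)
        "declared_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

if __name__ == "__main__":
    # Example: an uploader self-declares a synthetic clip at upload time.
    print(make_ai_disclosure("video-12345", "face-swap app (self-declared)", "uploader"))
```

Such a label is only as reliable as the declaration or detection behind it, which is why a transparency obligation would complement, rather than replace, efforts to detect deepfakes in the first place.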

What Singapore can or should do

In response to improvements in deepfake technology, there are a few areas in which the government could theoretically intervene: the technology enabling deepfakes, the creation and circulation of deepfakes, and the protection of individuals and wider society.[16] Critically, the government should address the practical difficulty of preventing the spread of such technology by improving its technical capacity, while providing greater legal protection, especially for individuals. On the former, much to its credit, the government is trying to encourage further research into deepfake technology. In July 2021, AI Singapore announced a new programme designed to encourage researchers and AI enthusiasts to improve upon existing deepfake identification algorithms.[17] If the government, law enforcement, and members of the public were better able to detect the use of deepfake technology, its societal impact could be reduced.
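
The challenge itself does not prescribe any particular method, but the basic shape of such detection work can be sketched: sample frames from a video, score each frame with a trained classifier, and aggregate the scores. The Python outline below is a minimal sketch under those assumptions; it assumes OpenCV is installed, and score_frame is a placeholder standing in for a real classifier rather than any model used in the challenge.

```python
import cv2  # OpenCV, assumed installed (pip install opencv-python)

def score_frame(frame) -> float:
    """Placeholder for a trained deepfake classifier.

    A real detector would return the estimated probability that the frame
    is synthetic; here we simply return 0.0 so the sketch runs end to end.
    """
    return 0.0

def video_deepfake_score(path: str, sample_every: int = 30) -> float:
    """Sample frames from a video and average the per-frame scores."""
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:  # roughly one frame per second at 30 fps
            scores.append(score_frame(frame))
        index += 1
    capture.release()
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    # Flag a clip for human review if the averaged score crosses a threshold.
    score = video_deepfake_score("suspect_clip.mp4")
    print("flag for review" if score > 0.5 else "no strong signal")
```

Even a sketch like this illustrates the enforcement problem: the verdict is only as good as the underlying classifier, which must be continually retrained as generation techniques improve.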

For individuals and businesses, there has been an emphasis on digital literacy, through practices such as cross-referencing information and not sharing sources unless they are proven to be trustworthy. For example, the 2013 S.U.R.E (Source, Understand, Research, Evaluate) campaign by the National Library Board was designed to promote information and digital literacy, calling upon individuals to verify the validity of information they come across.[18] Similar campaigns could be launched to educate the public about the impact of deepfakes and AI, and potentially how to recognise them.

As for legislative efforts, one possible approach would be to update existing legislation, such as acts pertaining to defamation, sexual harassment, and intellectual property, to accommodate deepfakes and AI as a whole. Singapore has strong existing legislation covering these areas, so an expansion, rather than an overhaul, of current legislation would suffice. A critical concern for the government would be to strike a balance between protecting individual rights and protecting society: changes in policy would need to be careful not to encroach on freedom of speech or expression. Drawing from EU proposals, one possible solution would be to ban the use of deepfakes in certain applications, such as pornography or political messaging. However, this would be difficult to enforce and would require global coordination.

It must be noted that regardless of how much action Singapore takes, it has little control over content produced outside its borders or by non-Singaporeans. The porous and viral nature of content on the internet means that neither Singapore nor any other individual country can fully regulate deepfakes on its own. While this poses a global challenge, it also presents a significant opportunity for Singapore. With a lack of existing policies worldwide, Singapore has the chance to be a world leader in defining the conversation surrounding deepfake regulation. Through its domestic actions, Singapore could even pioneer a framework that forms the basis of future global regulation, thereby strengthening its status as a global hub for AI.

Conclusion

It is clear that deepfake technology, along with other forms of machine learning and AI, will become increasingly prevalent. Without proper regulation, it has the potential to cause significant damage, not just in Singapore but globally. Singapore currently has many policies which, if updated, could provide strong protection against deepfakes. Furthermore, it has an opportunity to lead the global conversation surrounding deepfakes, potentially pioneering an international framework and cementing itself as a significant player in AI development. In the grand scheme of things, this can and should be seen as just another challenge in Singapore’s wider movement towards becoming a Smart Nation. With the development of technology always outpacing its regulation, Singapore should capitalise on this opportunity and implement pre-emptive safeguards against deepfakes.

Featured image: “At a session titled Deepfakes: Do Not Believe What You See at the World Economic Forum Annual Meeting 2020” by World Economic Forum is licensed under CC BY-NC-SA 2.0


[1] Jewel Stolarchuk, “PM Lee Expresses Concern over ‘Deepfake’ Technology That Can Mimic People’s Voices through AI,” The Independent Singapore News (blog), September 20, 2019, https://theindependent.sg/pm-lee-expresses-concern-over-deepfake-technology-that-can-mimic-peoples-voices-through-ai/.

[2] Sally Adee, “What Are Deepfakes and How Are They Created?” IEEE Spectrum, April 29, 2020, https://spectrum.ieee.org/tech-talk/computing/software/what-are-deepfakes-how-are-they-created.

[3] Buzz Blog Box, “How Deepfake Technology Impact the People in Our Society?” Medium, February 3, 2020, https://becominghuman.ai/how-deepfake-technology-impact-the-people-in-our-society-e071df4ffc5c.

[4] “Digital Government Transformation,” GovTech Singapore, accessed July 18, 2021, https://www.tech.gov.sg/digital-government-transformation/.

[5] Yash Raj, “Obscuring the Lines of Truth: The Alarming Implications of Deepfakes,” Jurist, June 17, 2020, https://www.jurist.org/commentary/2020/06/yash-raj-deepfakes/.

[6] Raj, “Obscuring.”

[7] Hui Hang Tang, “Deepfakes, Nudes and the Threat to National Security,” Channel NewsAsia, December 14, 2019, https://www.channelnewsasia.com/news/cnainsider/deepfakes-deepnude-porn-videos-threat-to-national-security-12183798.

[8] Ilyas Sholihyn, “Someone Deepfaked Singapore’s Politicians to Lip-Sync That Japanese Meme Song,” AsiaOne, August 7, 2020, https://www.asiaone.com/digital/someone-deepfaked-singapores-politicians-lip-sync-japanese-meme-song.

[9] Raj, “Obscuring.”

[10] “Artificial Intelligence,” Infocomm Media Development Authority, accessed September 10, 2021, http://www.imda.gov.sg/infocomm-media-landscape/SGDigital/tech-pillars/Artificial-Intelligence.

[11] “Composition of the Advisory Council on the Ethical Use of Artificial Intelligence (“AI”) and Data,” Infocomm Media Development Authority, accessed September 10, 2021, http://www.imda.gov.sg/news-and-events/Media-Room/Media-Releases/2018/composition-of-the-advisory-council-on-the-ethical-use-of-ai-and-data.

[12] “Protection from Online Falsehoods and Manipulation Act 2019 – Singapore Statutes Online,” Singapore Statutes Online, accessed September 10, 2021, https://sso.agc.gov.sg/Act/POFMA2019.

[13] “Defamation Act – Singapore Statutes Online,” Singapore Statutes Online, accessed June 4, 2021, https://sso.agc.gov.sg/Act/DA1957.

[14] “EUR-Lex – 52021PC0206 – EN – EUR-Lex,” EUR-Lex, accessed September 16, 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206.

[15] “EPRS_STU(2021)690039_EN.pdf,” European Parliament, accessed September 10, 2021, https://www.europarl.europa.eu/RegData/etudes/STUD/2021/690039/EPRS_STU(2021)690039_EN.pdf.

[16] “EPRS_STU(2021)690039_EN.pdf.”

[17] “AI Singapore Trusted Media Challenge – Trusted Media Challenge,” AI Singapore, accessed October 26, 2021, https://trustedmedia.aisingapore.org/competition/aisg/.

[18] “About Us,” National Library Board Singapore, accessed September 17, 2021, https://sure.nlb.gov.sg/about-us/sure-campaign/.