
George Orwell’s Dystopian World is Coming to Life and the European AI Act Will Not Stop It: The Collection of Emotional Data by AI

In Nineteen Eighty-Four, George Orwell depicted a dystopian world where authorities establish their power through the control of emotions. Fiction is turning into reality. The emergence of AI-based emotion recognition systems transforms our emotions into simple data that can be collected and used against us, and no regulation exists in US law to prevent this. The picture is not much better in Europe: regulation of emotion recognition systems is indeed emerging, but it is by no means sufficient.

The Problem: Lack of Regulation of AI-based Emotion Recognition Systems in the United States

AI-based emotion recognition systems are not a technology of the future: these AI systems are already on the market.1 Yet in the United States, no regulation of emotion recognition systems currently exists. Furthermore, these AI systems and the immense risks they entail were first mentioned in the US political sphere only in June 2023, in a brief press release issued by Senator Ron Wyden (D-OR).2 As emotion recognition systems are on the verge of entering the political debate in the United States, but remain ignored by the law, a closer look at the only regulation in the world that directly targets them, the European AI Act, might prove instructive on what not to do.3

How Do Emotion Recognition Systems Work?

Before exploring how to regulate emotion recognition systems, we first need to understand how they work. Emotional data is deduced by AI — usually by means of Facial Emotion Recognition (FER) — from biometric data such as gaze, changes in pupil circumference, facial expressions (smiles, grimaces, frowns), voice (intensity, rate of speech, timbre, etc.) or body movements. Through interpretation of this biometric data, AI claims to be able to identify emotions.
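To make this pipeline concrete, the following minimal sketch shows the basic pattern such systems follow: numeric biometric features extracted from a face or voice are fed to a classifier that outputs an emotion label. It is written in Python with the scikit-learn library and entirely synthetic data; the feature names, emotion labels, and numbers are illustrative assumptions, not any vendor's actual system.

    # Minimal, illustrative sketch of an emotion recognition pipeline.
    # Real systems extract features with deep networks from camera or
    # microphone streams; synthetic numbers stand in for those features here.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    EMOTIONS = ["neutral", "happy", "sad", "angry"]        # illustrative label set
    FEATURES = ["smile_intensity", "brow_lowering",
                "pupil_dilation", "speech_rate"]           # illustrative biometric features

    # Fabricated training data: 1,000 people described by 4 numeric features,
    # each paired with an emotion label.
    X = rng.normal(size=(1000, len(FEATURES)))
    y = rng.integers(0, len(EMOTIONS), size=1000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The classifier learns a mapping: biometric features -> inferred emotion.
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # At inference time, one feature vector per person yields one emotion label;
    # that inferred label is the "emotional data" discussed in this post.
    person = X_test[:1]
    print("Inferred emotion:", EMOTIONS[model.predict(person)[0]])
    print("Held-out accuracy:", model.score(X_test, y_test))

The point of the sketch is not the particular model but the shape of the system: whatever is captured about the body is reduced to numbers, and those numbers are mapped to a claimed inner state.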

While various studies report that the accuracy of AI in detecting emotions has become very high, errors remain.4, 5 In addition, emotion recognition systems are biased: a comparative study of the main facial emotion recognition algorithms revealed that these algorithms were more likely to associate negative emotions with the faces of Black individuals than with the faces of white individuals.6
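For readers curious about how such a disparity is detected, here is a minimal sketch of the audit logic: compare, across demographic groups, how often a model assigns a negative emotion. The group names, emotion labels, and predictions below are fabricated for illustration and do not reproduce the cited study.

    # Illustrative bias check: compare the rate of negative-emotion predictions
    # across demographic groups. All data below is fabricated for illustration.
    from collections import defaultdict

    NEGATIVE = {"angry", "sad", "contempt"}

    # (group, predicted_emotion) pairs standing in for a model's outputs
    predictions = [
        ("group_a", "happy"), ("group_a", "angry"), ("group_a", "neutral"),
        ("group_b", "angry"), ("group_b", "sad"),   ("group_b", "happy"),
    ]

    counts = defaultdict(lambda: [0, 0])   # group -> [negative predictions, total]
    for group, emotion in predictions:
        counts[group][0] += emotion in NEGATIVE
        counts[group][1] += 1

    for group, (neg, total) in counts.items():
        print(f"{group}: negative-emotion rate = {neg / total:.0%}")
    # A systematic gap between groups on comparable inputs is the kind of
    # disparity the cited study reports.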

What are AI-based Emotion Recognition Systems Used for?

Tech giants are working on AI-based emotion recognition systems because emotions have a determining influence on individual decision-making.7 Contrary to what micro-economists have long assumed, consumers cannot be treated as homo economicus, i.e. perfectly rational economic agents.8 Emotions play a key role in the decision-making process and are at least as important as reason.9 A purchasing decision may therefore be a response to a particular emotion.10 For example, one study showed that the sadness felt by fans after their national team's defeat at the soccer World Cup altered trading behavior on their country's financial markets.11

Knowing the emotions felt by an individual at a given moment therefore enables firms to determine with a high degree of precision which advertisements will be effective on individual consumers. Whoever has access to a person’s emotional data can influence their purchasing decisions. Consequently, the use of emotional data for “emotional marketing” must be regulated to prevent such manipulation of consumers. Although this use of emotion recognition systems will likely prove to be the most worrisome one, it is unfortunately by no means the only use we should be concerned about. Today, these AI systems are used at borders and during job interviews to determine whether migrants and job applicants are sincere in their answers.12, 13 Some schools have even introduced emotion recognition systems to measure pupils’ attentiveness during lessons.14

Insufficiencies in the European AI Act

The European AI Act does not prohibit the use of emotion recognition systems for marketing purposes. This European regulation only prohibits the use of emotion recognition systems “in the areas of workplace and education, except in cases where the use of the AI system is intended to be put in place or into the market for medical or safety reasons.”15 Conversely, this means that all other uses of emotion recognition systems, which are classified as high-risk AI systems, are lawful, provided they comply with an obligation of transparency: operators of an emotion recognition system are required to inform individuals when the system is used on them.16

Expecting to solve the problem raised by emotion recognition systems simply by means of an obligation of transparency is illusory. Yet as long as this transparency requirement is met, the use of emotion recognition systems for marketing purposes remains lawful.

One provision, however, leaves room for doubt. Article 5 prohibits the use of an AI system that deploys subliminal techniques with the objective of distorting people’s behavior by impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken, in a manner that is likely to cause them significant harm.17 One could certainly argue that targeted advertising based on consumers’ emotions can distort their behavior by impairing their ability to make an informed decision.

Some may argue, however, that the condition of significant harm is not met.18 On that reading, this provision could only be used to prohibit certain emotion recognition systems used for marketing purposes: those that lead individuals to make a decision that causes them direct harm (e.g., advertising that leads a recovering alcoholic who has been sober for several months to buy alcohol). This raises a significant question: how can we determine, before individuals make a decision, that it will cause them significant harm? It amounts to trying to predict the future. Alternatively, one might argue that manipulation is a form of psychological violence that in itself constitutes significant harm.

Ultimately, although one might think that this provision should apply to emotion recognition systems used for marketing purposes, it does not currently do so. The use of such systems for marketing therefore appears to be lawful under the AI Act.

How Could Other EU Regulations be Applied?

The AI Act expressly states that other EU regulations remain applicable to emotion recognition systems.19 Consequently, one cannot understand the scope of the AI Act without analyzing the other EU regulations that may apply to the collection and use of emotional data.

Accordingly, consideration could be given to applying Directive 2005/29/EC, which concerns unfair business-to-consumer commercial practices and prohibits commercial practices that “materially distort” the economic behavior of the average consumer (article 5).20 This remedy would be insufficient, however, because it would only restrict the use of emotional data (and only one particular use of it), not its collection. Moreover, since emotional marketing is a form of targeted advertising, and targeted advertising has never been sanctioned under Directive 2005/29/EC, this provision would likely not be applied to emotional marketing either.

Consequently, the last line of potential defense is the General Data Protection Regulation (GDPR).21 Article 9 of the GDPR establishes an exhaustive list of data that qualifies as sensitive data because of the significant risks it poses to rights and freedoms, and that is therefore subject to a reinforced protection regime. At present, however, this list does not include emotional data, and this will remain the case until emotional data is qualified as “highly personal data” by an independent supervisory authority. Regardless of the qualification chosen, any processing of emotional data will have to be preceded by a data protection impact assessment (DPIA) because of the risk such processing poses to the rights and freedoms of individuals (article 35).

What Could be Done? Possible Paths Forward

It is clear that the AI Act will prove insufficient, and uncertainties remain as to the application of the GDPR to the processing of emotional data. Given this, I believe the time has come to consider the adoption of new norms aimed at effectively protecting emotions through the law. Two categories of regulation are possible: regulation of the collection of emotional data and regulation of the use of emotional data. Choosing between these alternatives requires determining what the central problem posed by emotional data is. Is the problem that third parties can access the most intimate part of our being, our emotions, in which case the collection of emotional data needs to be regulated? Or is the problem the use of our emotional data against us, in which case only the use of emotional data needs to be regulated?

In my opinion, the evil lies in the violation of our mental privacy. In other words, I believe that the brain should be a protected zone, inaccessible to third parties except for medical reasons. Consequently, I recommend directly regulating the collection of emotional data. It may also be worth considering the adoption of international “neurorights,” the fundamental normative rules for the protection of the human brain and mind.22 Such recognition would mark the transition from a right to feel emotions (deriving from the right to freedom of thought) to a right to protect one’s emotions.

This could prove to be the cure-all the EU and US need so that 2024 does not become 1984.


  1. Markets and Markets, “Emotion Detection and Recognition Market Size, Share and Global Market Forecast to 2027,” March 2023, https://www.marketsandmarkets.com/Market-Reports/emotion-detection-recognition-market-23376176.html. ↩︎
  2. Ron Wyden, “EU Restrictions on AI Emotion Detection Products,” Press Releases, June 15, 2023, https://www.wyden.senate.gov/news/press-releases/eu-restrictions-on-ai-emotion-detection-products. ↩︎
  3. Council of the European Union, “Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts,” 8115/21, January 26, 2024, https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf. ↩︎
  4. George E. Raptis et al., “Using Eye Gaze Data and Visual Activities to Infer Human Cognitive Styles: Method and Feasibility Studies,” Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization (July 2017): 164–73. ↩︎
  5. Konstantina Vemou and Anna Horvath, “Facial Emotion Recognition,” TechDispatch, 2021, https://www.edps.europa.eu/system/files/2021-05/21-05-26_techdispatch-facial-emotion-recognition_ref_en.pdf. ↩︎
  6. Lauren Rhue, “Racial Influence on Automated Perceptions of Emotions,” SSRN (2018): 1–11. ↩︎
  7. Thomas Gremsl and Elisabeth Hodl, “Emotional AI: Legal and Ethical Challenges,” Information Polity 27 (2022): 163–74. ↩︎
  8. Jean Tirole, “L’homo Economicus a Vécu,” Toulouse School of Economics, October 5, 2018, https://www.tse-fr.eu/fr/lhomo-economicus-vecu. ↩︎
  9. Antonio Damasio, Descartes’ Error: Emotion, Reason, and the Human Brain (Penguin Books, 2005). ↩︎
  10. Aude Chardenon, “Emotion is Decisive in Online Shopping, Study [L’émotion est déterminante dans les achats en ligne],” LSA Commerce Connecté, June 23, 2016, https://www.lsa-conso.fr/l-emotion-est-determinante-dans-les-achats-en-ligne-etude,241257. ↩︎
  11. Alex Edmans, Diego Garcia, and Oyvind Norli, “Sports Sentiment and Stock Returns,” Journal of Finance 62, no. 4 (August 2007): 1967–98. ↩︎
  12. Daniel Boffey, “EU Border ‘Lie Detector’ System Criticised as Pseudoscience,” The Guardian, November 2, 2018, https://www.theguardian.com/world/2018/nov/02/eu-border-lie-detection-system-criticised-as-pseudoscience. ↩︎
  13. John McQuaid, “Your Boss Wants to Spy on Your Inner Feelings,” Scientific American, December 1, 2018, https://www.scientificamerican.com/article/your-boss-wants-to-spy-on-your-inner-feelings/. ↩︎
  14. Kate Kaye, “Intel Calls Its AI that Detects Student Emotions a Teaching Tool. Others Call It ‘Morally Reprehensible,’” Protocol, April 17, 2022, https://www.protocol.com/enterprise/emotion-ai-school-intel-edutech. ↩︎
  15. AI Act, art. 5 (dc). ↩︎
  16. AI Act, art. 6 (2); Annex III. ↩︎
  17. AI Act, art. 5. ↩︎
  18. Tea Mustac, “RegInt: Decoding AI Regulation #6 | Unmasking AI – Faces Behind the Code,” LinkedIn, September 19, 2023, https://www.linkedin.com/search/results/all/?keywords=tea%20mustac%20emotion&origin=GLOBAL_SEARCH_HEADER&sid=ltW. ↩︎
  19. AI Act, recital 41. ↩︎
  20. Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market and amending Council Directive 84/450/EEC, Directives 97/7/EC, 98/27/EC and 2002/65/EC of the European Parliament and of the Council and Regulation (EC) No 2006/2004 of the European Parliament and of the Council, 2005 (L 149/22). ↩︎
  21. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), 2016 (L119/1). ↩︎
  22. Marcello Ienca, “On Neurorights,” Frontiers in Human Neuroscience 15, no. 701258 (September 2021): 1–11. https://doi.org/10.3389/fnhum.2021.701258. ↩︎