<em>Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions on Artificial Intelligence for Europe</em> (COM/2018/237 final).
Council of Europe. <em>Declaration by the Committee of Ministers on the manipulative capabilities of algorithmic processes</em> (2019). Decl (13/02/2019)1.
<em>Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications)</em>. OJ L 201, 2002.
<em>Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market and amending Council Directive 84/450/EEC, Directives 97/7/EC, 98/27/EC and 2002/65/EC of the European Parliament and of the Council and Regulation (EC) No 2006/2004 of the European Parliament and of the Council (“Unfair Commercial Practices Directive”)</em>. OJ L 149, 2005.
<em>European Commission White Paper on Artificial Intelligence – A European Approach to Excellence and Trust</em>. Brussels, 19.2.2020, COM(2020) 65 final.
<em>Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) And Amending Certain Union Legislative Acts.</em> COM/2021/206 final.
<em>Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)</em>. OJ L 119, 2016.
<em>Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act).</em> OJ L 277, 2022.
<em>Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)</em>. OJ L, 2024/1689.
<em>United Nations International Covenant on Economic, Social and Cultural Rights.</em> (1966) // <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.ohchr.org/sites/default/files/cescr.pdf">https://www.ohchr.org/sites/default/files/cescr.pdf</ext-link>
AccessNow. <em>Human Rights in the Age of Artificial Intelligence</em>. (2018) // <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.accessnow.org/wp-content/uploads/2018/11/AI-and-Human-Rights.pdf">https://www.accessnow.org/wp-content/uploads/2018/11/AI-and-Human-Rights.pdf</ext-link>
Arendt, Hannah. <em>Hannah Arendt: From an Interview</em> (Interview with Roger Errera in 1974). The New York Review of Books, 1978.
Bakiner, Onur. “The promises and challenges of addressing artificial intelligence with human rights.” <em>Big Data & Society</em> (July–December 2023): 1–13 // DOI: 10.1177/20539517231205476
Bontridder, Noémi and Yves Poullet. “The Role of Artificial Intelligence in Disinformation.” <em>Data & Policy</em> 3 (2021): e32 // DOI: 10.1017/dap.2021.20
Botes, Marietjie. “Autonomy and the social dilemma of online manipulative behavior.” <em>AI Ethics</em> 3 (2023): 315–323 // DOI: 10.1007/s43681-022-00157-5
Bublitz, Jan Christoph and Reinhard Merkel. “Crimes against minds: On mental manipulations, harms and a human right to mental self-determination.” <em>Crim Law Philos</em> 8(1) (2014): 51–77.
Cambridge online dictionary. <em>Manipulation</em> // <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://dictionary.cambridge.org/dictionary/english/manipulation">https://dictionary.cambridge.org/dictionary/english/manipulation</ext-link>
Carroll, Micah, Alan Chan, Henry Ashton, and David Krueger. “Characterizing manipulation from AI systems.” In: <em>Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization</em> (2023): 1–13.
Cohen, Tegan. “Regulating Manipulative Artificial Intelligence.” <em>SCRIPTed</em> Vol. 20, No. 1 (2023): 203–242 // DOI: 10.2966/scrip.200123.203
European Commission Independent High Level Expert Group on Artificial Intelligence. <em>Ethics guidelines for trustworthy AI.</em> (2019) // <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419">https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419</ext-link>
European Commission. <em>Public Consultation on the AI White Paper. Final report</em>. (November 2020) // <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=68462">https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=68462</ext-link>
European Commission/Deloitte. <em>Opportunities and Challenges for the Use of Artificial Intelligence in Border Control, Migration and Security</em>. Vol. 2: Addendum (2018) // <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://op.europa.eu/en/publication-detail/-/publication/69f33ff7-a156-11ea-9d2d-01aa75ed71a1/language-en">https://op.europa.eu/en/publication-detail/-/publication/69f33ff7-a156-11ea-9d2d-01aa75ed71a1/language-en</ext-link>
EUROPOL. <em>Facing reality? Law enforcement and the challenge of deepfakes</em>. (2022) // <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.europol.europa.eu/cms/sites/default/files/documents/Europol_Innovation_Lab_Facing_Reality_Law_Enforcement_And_The_Challenge_Of_Deepfakes.pdf">https://www.europol.europa.eu/cms/sites/default/files/documents/Europol_Innovation_Lab_Facing_Reality_Law_Enforcement_And_The_Challenge_Of_Deepfakes.pdf</ext-link>
Farahany, Nita A. “The Costs of Changing Our Minds.” <em>69 Emory L. J. 75</em> (2019) // <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://scholarlycommons.law.emory.edu/elj/vol69/iss1/2">https://scholarlycommons.law.emory.edu/elj/vol69/iss1/2</ext-link>
Faraoni, Stefano. “Persuasive technology and computational manipulation: Hypernudging out of mental self-determination.” <em>Frontiers in Artificial Intelligence</em> 6 (2023) // DOI: 10.3389/frai.2023.1216340
Ferrara, Emilio, Onur Varol, Clayton Davis, Filippo Menczer, and Alessandro Flammini. “The Rise of Social Bots.” <em>Communications of the ACM</em> Vol. 59, No. 7 (July 2016): 96–104 // DOI: 10.1145/2818717
Gregory, Sam and Eric French. “How do we work together to detect AI-manipulated media?” <em>Witness Media Lab</em> (2019) // <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://lab.witness.org/projects/osint-digital-forensics/">https://lab.witness.org/projects/osint-digital-forensics/</ext-link>
Gregory, Sam. “Deepfakes, misinformation and disinformation and authenticity infrastructure responses: Impacts on frontline witnessing, distant witnessing, and civic journalism.” <em>Journalism</em> Vol. 0(0) (2021): 1–22 // DOI: 10.1177/14648849211060644
Gregory, Sam. “Synthetic media forces us to understand how media gets made”. <em>NiemanLab Predictions for Journalism</em> (2023) // <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.niemanlab.org/2022/12/synthetic-media-forces-us-to-understand-how-media-gets-made/">https://www.niemanlab.org/2022/12/synthetic-media-forces-us-to-understand-how-media-gets-made/</ext-link>
Gregory, Sam. “Fortify the Truth: How to Defend Human Rights in an Age of Deepfakes and Generative AI.” <em>Journal of Human Rights Practice</em> 15 (2023): 702–714 // DOI: 10.1093/jhuman/huad035
Guha, Ahona. “Understanding and Managing Psychological Manipulation.” <em>Psychology Today</em> // <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.psychologytoday.com/us/blog/prisons-and-pathos/202104/understanding-and-managing-psychological-manipulation">https://www.psychologytoday.com/us/blog/prisons-and-pathos/202104/understanding-and-managing-psychological-manipulation</ext-link>
IEEE Standards Association. <em>The IEEE Global Initiative 2.0 on Ethics of Autonomous and Intelligent Systems.</em> (2023) // <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://standards.ieee.org/industry-connections/activities/ieee-global-initiative/">https://standards.ieee.org/industry-connections/activities/ieee-global-initiative/</ext-link>
Ienca, Marcello and Effy Vayena. “Digital Nudging: Exploring the Ethical Boundaries.” In: Veliz (ed.), <em>The Oxford Handbook of Digital Ethics.</em> Oxford University Press, Oxford, 2021.
Klenk, Michael. “Ethics of Generative AI and Manipulation: a Design-Oriented Research Agenda.” <em>Ethics and Information Technology</em> 26 (2024): 9 // DOI: 10.1007/s10676-024-09745-x
Langguth, Johannes, Konstantin Pogorelov, Stefan Brenner, Petra Filkuková and Daniel Thilo Schroeder. “Don’t Trust Your Eyes: Image Manipulation in the Age of DeepFakes.” <em>Front. Commun.</em> 6:632317 (2021): 1–12 // DOI: 10.3389/fcomm.2021.632317
Marcellino, William, Nathan Beauchamp-Mustafaga, Amanda Kerrigan, Lev Navarre Chao, and Jackson Smith. <em>The Rise of Generative AI and the Coming Era of Social Media Manipulation 3.0. Next-Generation Chinese Astroturfing and Coping with Ubiquitous AI.</em> (2023) // <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.rand.org/pubs/perspectives/PEA2679-1.html">https://www.rand.org/pubs/perspectives/PEA2679-1.html</ext-link>
Morozovaitė, Viktorija. “Hypernudging in the changing European regulatory landscape for digital markets.” <em>Policy Internet</em> 15 (2022): 78–99 // DOI: 10.1002/poi3.329
Nepryakhin, Nikita. “Classification of vulnerability factors in the process of psychological manipulation.” In: <em>International Conference on Advanced Research in Social Sciences</em> (2019) // DOI: 10.33422/icarss.2019.03.93
NIST. <em>AI Risk Management Framework.</em> (2024) // <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.nist.gov/itl/ai-risk-management-framework">https://www.nist.gov/itl/ai-risk-management-framework</ext-link>
OECD Working Papers on Public Governance. <em>Hello, World. Artificial intelligence and its use in the public sector.</em> (2019): 56 // <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.oecd-ilibrary.org/docserver/726fd39d-en.pdf?expires=1666023983&id=id&accname=guest&checksum=2B0678AFDEB937C7A5B42575C7C96F44">https://www.oecd-ilibrary.org/docserver/726fd39d-en.pdf?expires=1666023983&id=id&accname=guest&checksum=2B0678AFDEB937C7A5B42575C7C96F44</ext-link>
<em>Partnership on AI</em> // <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://partnershiponai.org/">https://partnershiponai.org/</ext-link>
PHD Media. <em>New beauty study reveals days, times and occasions when U.S. women feel least attractive.</em> (October 2012) // <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.prnewswire.com/news-releases/new-beauty-study-reveals-days-times-and-occasions-when-us-women-feel-least-attractive-226131921.html">https://www.prnewswire.com/news-releases/new-beauty-study-reveals-days-times-and-occasions-when-us-women-feel-least-attractive-226131921.html</ext-link>
Poullet, Yves. <em>Ethique et droits de l’Homme dans notre société du numérique</em> (Ethics and human rights in our digital society). Brussels: Académie Royale de Belgique, 2020.
UNESCO. <em>Report of COMEST on robotics ethics</em>. (2017): 17 // <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://unesdoc.unesco.org/ark:/48223/pf0000253952">https://unesdoc.unesco.org/ark:/48223/pf0000253952</ext-link>
UNGA. <em>Promotion and Protection of the Right to Freedom of Opinion and Expression: Note by the Secretary-General.</em> UN Doc A/73/348, New York City, NY: UNGA, 2018.
WIRED. <em>Deepfakes Aren’t Very Good. Nor Are the Tools to Detect Them</em>. (2020) // <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.wired.com/story/deepfakes-not-very-good-nor-tools-detect/">https://www.wired.com/story/deepfakes-not-very-good-nor-tools-detect/</ext-link>
Yadav, Vikrant Sopan. “AI and Human Rights: A Critical Ethico-Legal Overview.” <em>Agathos</em> Vol. 14, Issue 1 (2023): 261–270.
Zuboff, Shoshana. “Big Other: Surveillance Capitalism and the Prospects of an Information Civilization.” <em>Journal of Information Technology</em> 30(1) (2015): 75–89 // DOI: 10.1057/jit.2015.5