Over the years, the ways of accessing information have changed and evolved. For centuries, the paper press was the main source of information, giving way in the 20th century to radio and television. Then the Internet appeared, and mainstream media outlets started publishing information on their websites. It was argued that the Internet would improve democracies, as it would permit greater transparency of information (1). Finally, social media platforms were developed, giving a very large audience direct access to posts and comments from politicians and celebrities and making access to information and ideas even faster and more efficient.
All of these sources of information, from the traditional press to social media platforms, enhance freedom of expression, as they permit different points of view to be presented and shared among millions of daily users. At the same time, these channels of communication may be abused in order to achieve political goals or to spread fake news.
Since the beginning of the media’s existence, there have been attempts to use it as a means of propaganda; the most flagrant examples of such practices can be briefly sketched here. In the Third Reich, almost everything constituted propaganda: public speeches, press, books, art, architecture. The propaganda machine led by Joseph Goebbels created a cult of personality around Adolf Hitler. Many atrocities of the Second World War would not have been possible without efficient propaganda. For these reasons, in the Nuremberg Trials after the war, not only politicians were accused, but also representatives of the German media (Julius Streicher, Hans Fritzsche and Otto Dietrich) (2). Radio propaganda by Radio Télévision Libre des Mille Collines (RTLM) contributed to between 500,000 and 1 million deaths in the Rwandan genocide committed by the Hutu ethnic group against the Tutsis (3). The International Criminal Tribunal for Rwanda found the members of the RTLM’s founding committee, Ferdinand Nahimana, Jean-Bosco Barayagwiza and Hassan Ngeze, guilty of charges of genocide (4). These are just two of the most striking examples of how freedom of expression and the media can be abused with the aim of violating the most fundamental human rights.
The Internet has made it even easier to spread information, but also to spread propaganda and illegal content. The real catalyst in this regard was the emergence of social media, which, while being a place where freedom of expression is widely exercised, also constitutes a field for its abuse. While radio and television allowed only selected people to speak, social media allows anyone who wants to speak out to do so, which creates room for the spread of views that are contrary to democratic ideals.
The aim of this work is to deliberate whether the risks connected with social media platforms could justify a ban on these platforms in the European Union, in the same way that China has blocked Twitter and Facebook (5). In the first part of the paper, these risks will be defined. Next, the paper will indicate the applicable law and then discuss the compatibility of a ban on social media platforms with the freedom of expression. Afterwards, it will be explored whether the EU might have other measures at its disposal to control the activities of social media platforms instead of banning them outright, a step that would prevent millions of European users from sharing their views and ideas on these platforms.
On the one hand, social media platforms foster political dialogue and access to news, permit data collection and enhance audience reach (6). On the other hand, they can be abused; under the guise of freedom of speech, content completely contrary to democratic values can be spread — for example, terrorist content. Therefore, while social media platforms strengthen democratic values like the freedom of speech, they weaken them as well (7). Even the owners of social media platforms can have political ambitions to influence societies all over the world in pursuit of their goals (8), which can pose a threat even to the freedom of expression itself. In the past, there have been accusations of political bias against Mark Zuckerberg’s Facebook (9), as well as YouTube and Twitter (10). Currently, U.S. President Donald Trump and the owner of the X platform, Elon Musk, are allegedly taking steps against critics and media companies, including lawsuits and licence revocations. On X, accounts are allegedly being blocked and algorithms manipulated (11), which affects the freedom of expression of the users concerned (12).
Moderation rules and algorithm manipulation can have a serious impact on European democracy, yet the extent of this impact and the exact nature of these algorithms are unclear (13). Elon Musk has already expressed his willingness to interfere in European politics, communicating his wish to change the government in the UK (14) as well as his endorsement of the Alternative for Germany (AfD) party (15). In his speech in Davos on 22 January 2025, the Spanish Prime Minister Pedro Sánchez even said that “tech billionaires want to use social media to overthrow democracy” (16). At the moment, it is certainly too early to agree with the Spanish Prime Minister, but it can be stated that these tech billionaires possess means, including recommendation algorithms and moderation rules, that can permit them to shape politics on an international scale. Real power would then rest not in the hands of the people or politicians, but in the hands of these billionaires. That would mean that decisions concerning, for example, European citizens might be shaped not in Europe, but in the United States.
The fear of the influence of social media platforms through moderation or recommendation algorithms is not new, as it was already the topic of a U.S. Congressional hearing in 2018 (17). Current developments only prove that this topic remains relevant, and that it has taken on a global scale. The difference is that in 2018, social media platforms still tried to appear as mere neutral conduits (18), whereas now Musk, at least, has abandoned any pretence of neutrality.
There is also another novelty. For years, even centuries, it was state authorities that tried to censor the media in order to suppress expression that could threaten the order of the state, to control public awareness and to silence opposition (19). Through moderation policies and the manipulation of recommendation algorithms, social media platforms might in a way censor their users to please their owners or state authorities — for example, Trump. In democratic states, state censorship is at least subject to review, including judicial review, from the perspective of the freedom of expression (the First Amendment to the U.S. Constitution, Article 10 ECHR). According to some scholars, however, censorship by social media platforms escapes any review and is governed by unclear rules (20).
In this regard, it can also be pointed out that in 2018, the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression had already recommended that global tech companies ensure the standard of protection of freedom of expression required by human rights law, and that, moreover, these companies maintain minimum levels of consistency, transparency and accountability in their commercial content moderation (21). Content moderation and algorithm manipulation, if carried out on a political basis, may run counter to these recommendations. It must be taken into account that social media platform users do not choose the source of the content they view; it is the recommendation algorithms that target information toward users (22). Appropriately set algorithms can thus shape public opinion, views and moods.
Another risk connected with social media platforms is the possibility of interference in democratic elections by foreign states through social media propaganda, as was observed in the U.S. elections in 2016. After the election, Facebook announced that around 470 false profiles connected to Russia had spent about 100,000 USD to publish 3,000 ads during the electoral period. At the same time, X (then Twitter) observed 1.4 million interactions with the Russia-linked Internet Research Agency (23) and deleted more than 50,000 Russia-linked accounts (24).
Russian influence also affected the December 2024 elections in Romania. Two weeks before the elections, Russian-driven social media campaigns, especially on TikTok, began, causing a surge in support for the pro-Russian candidate Călin Georgescu. Altogether, around 130 influencers took part in the Romanian campaign, with a reach of 8 million users. As a result, the Romanian Constitutional Court annulled the results of the elections (25). The Court emphasised that legal and transparent financing of the electoral campaign is an important factor in the regularity of the electoral process, and that social media platforms should be obliged to permanently disclose data on political advertising and electoral sponsors. According to the Court, Georgescu had violated the electoral law, especially the principle of transparency in the financing of electoral campaigns, with respect to the costs of his social media campaign. Irregularities in the electoral campaign thus affected the electoral competitors, creating a clear inequality between the candidate who manipulated digital technologies and the other candidates participating in the electoral process. The significantly greater exposure of Georgescu led to a directly proportional reduction in the online media exposure of the other candidates (26).
Another issue is the use of social media platforms to spread digital disinformation by various actors — for example, governments or political parties (27) — including through Russian disinformation campaigns. Social media platforms have made the spread of information and ideas rapid, and the same applies to disinformation. At the same time, fake news is easier to generate and harder to detect (28), making Internet users susceptible to propaganda and disinformation due to their lack of awareness (29). Social media platforms have distanced themselves from deciding what is true and what is not, and have been reluctant to interfere with the content published on them, as they want to avoid censorship (30). Nevertheless, each platform has some moderation rules, on the basis of which it removes obscene, pornographic or violent content (31). From this perspective, however, disinformation would often appear neutral and nuanced and would not violate the moderation rules in an obvious way, so it would be neither removed nor flagged.
Respect for human rights is among the core values protected by the European Union (Article 2 TEU). These rights are protected by the EU Charter of Fundamental Rights (Article 6(1) TEU), and fundamental rights, as guaranteed by the European Convention for the Protection of Human Rights and Fundamental Freedoms and as they result from the constitutional traditions common to the Member States, also constitute general principles of the Union’s law (Article 6(3) TEU). Within the EU, everyone has the right to freedom of expression, which includes the freedom to hold opinions and to receive and impart information and ideas without interference by public authorities and regardless of frontiers (Article 11(1) of the EU Charter of Fundamental Rights). The freedom and pluralism of the media are also to be respected (Article 11(2) of the EU Charter of Fundamental Rights).
The rights guaranteed in Article 11 of the Charter have the same meaning and scope as those guaranteed in Article 10 ECHR (Article 52(3) of the EU Charter of Fundamental Rights). Freedom of expression constitutes one of the essential foundations of a democratic society. It is applicable not only to “information” or “ideas” that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb. Such are the demands of pluralism, tolerance and broadmindedness, without which there is no “democratic society”. It means, moreover, that every “formality”, “condition”, “restriction” or “penalty” imposed in this sphere must be proportionate to the legitimate aim pursued (32). This confirms that freedom of expression is not absolute and may be subject to limitations, and that it also entails duties and responsibilities. The protection of the media’s right to impart information on issues of general interest is subject to the proviso that the media act in good faith and on an accurate factual basis, and that they provide “reliable and precise” information in accordance with the ethics of journalism; the media not only inform, but can also suggest, through the way they present information, how that information is to be assessed (33). The aforementioned case law should apply equally to social media platforms. It may be disputed whether these platforms are media and are bound by the social and legal obligations connected thereto (34), but the case law on the freedom of expression applies generally to everyone engaged in the exchange of information and ideas.
On the basis of the Digital Services Act (DSA) (35), providers of very large online platforms and search engines shall diligently identify, analyse and assess any systemic risks to the Union stemming from the design, functioning or use of their services and any related systems, including algorithmic systems. To this end, they are obliged to carry out a risk assessment, which should account for risks such as the dissemination of illegal content through their services; any actual or foreseeable negative effects on the exercise of fundamental rights; and any actual or foreseeable negative effects on civic discourse, electoral processes and public security (Article 34(1) DSA).
Once these risks are identified, the providers shall put in place reasonable, proportionate and effective mitigation measures, tailored to the specific systemic risks — for example, adapting content moderation processes or testing and adapting their algorithmic systems, including their recommendation systems (Article 35(1) DSA). If a provider does not comply with the DSA, the Commission shall adopt a non-compliance decision (Article 73(1) DSA), in which it may impose on the provider fines not exceeding 6% of its total worldwide annual turnover in the preceding financial year (Article 74(1) DSA).
Is a ban on social media a feasible option in Europe, then, especially in the European Union? Theoretically, the answer is affirmative, as there have been examples of media bans in Europe, though they concerned traditional media like television and radio — for example, the Danish television company ROJ TV A/S, banned due to its participation in terrorist propaganda. The European Court of Human Rights concluded that the company fell outside the protection of Article 10 ECHR because its activities were an abuse of freedom of expression aimed at the destruction of the rights and freedoms enshrined in the Convention (Article 17 ECHR) (36).
A ban on Russian media outlets was introduced by the European Union in 2022 due to the participation of these media in Russian war propaganda after the Russian Federation commenced its full-scale invasion of Ukraine. Accordingly, in the EU, operators are prohibited from broadcasting — or enabling, facilitating or otherwise contributing to the broadcast of — any content by the legal persons, entities or bodies listed in Annex XV, by any means, including transmission or distribution by cable, satellite, IP-TV, internet service providers, or internet video-sharing platforms or applications, whether new or pre-installed. Broadcasting licences or authorisations, as well as transmission and distribution arrangements, with the legal persons, entities or bodies listed in Annex XV are suspended (Article 2f of Regulation 833/2014 (37)).
The Court of Justice of the European Union (CJEU) was confident that the ban in question did not violate freedom of expression, as enshrined in Article 11 of the EU Charter of Fundamental Rights and forming part of the general principles of EU law. The right to freedom of expression is not absolute, and limitations may be imposed on it. The protection of the media’s right to impart information on issues of general interest is subject to the condition that they act in good faith and on an accurate factual basis, and that they provide ‘reliable and precise’ information in accordance with the ethics of journalism, especially since in today’s world the media not only inform society, but also suggest assessments and opinions. Therefore, expression that promotes or justifies violence, hatred, xenophobia or another form of intolerance cannot claim protection. In this regard, the CJEU referred to Article 20(1) ICCPR, which prohibits any propaganda for war, and determined that the ban was proportionate, as it was appropriate and necessary to the aims pursued (38). This conclusion is in line with the case law of the ECtHR, according to which expression that promotes, justifies or glorifies violence, hatred, xenophobia or other forms of intolerance cannot normally claim protection (39).
Yet it must be noted that the above-mentioned bans, as judged by the ECtHR and the CJEU, concern terrorist or war propaganda — types of expression that go against the very essence of the freedom of speech. The case of social media is somewhat different. The vast majority of content on social media would be protected by the freedom of expression. Moreover, social media platforms have reporting protocols that aim to remove prohibited content. Of course, it is disputable whether these policies are effective and efficient, but isolated examples of prohibited content or hate speech should not constitute a basis for banning an entire social media platform for millions of its daily users.
The problem lies with the use of social media as a tool for propaganda (as Russia did), and with content moderation and recommendation algorithms, which can increase or decrease the reach of certain persons or content on social media, thus causing particular persons or content to gain or lose popularity. Done on a mass scale, both moderation and algorithm manipulation are capable of influencing the electoral process. Would that be acceptable in the European Union?
In my opinion, it would not, as it would run counter to the very values on which the EU is based — namely, democracy, equality and the rule of law (Article 2 TEU). If social media platforms’ algorithms were manipulated to create false popularity for certain politicians or political parties, it would fundamentally bias the entire democratic process and subject it to foreign adversarial influence. It does not matter whether this influence comes from China (TikTok) or from the U.S. (X, Facebook, Instagram).
However, does this empower the European Union to introduce bans on entire social media platforms? It cannot be forgotten that any limitation of the freedom of expression in the European Union needs to meet certain conditions. In the first place, the limitation must be “provided for by law,” meaning that the EU institution introducing the measures restricting the freedom of expression must have a legal basis for its actions. The restrictions must also be intended to achieve an objective of general interest and must not be excessive. They must be necessary and proportionate to the aim sought, and the very essence of the freedom of expression must not be impaired (40).
Banning an entire social media platform would be disproportionate and thus it would impair the very essence of the freedom of expression. It would mean not only banning the content created for propaganda purposes, but all content, including neutral and non-political expressions. It must be remembered that social media platforms have created content moderation policies, so there are some rules on the removal of questionable content. Instead of banning entire social media platforms, the European Union could work together with social media platforms on the rules and practices of content moderation.
To ban specific social media platforms, the EU could use either an existing legal framework or create new regulations. As a pattern, it could use recent legislation adopted in the United States and design mirror rules. In March 2024, the U.S. Congress passed the Protecting Americans from Foreign Adversary Controlled Applications Act. Section 2(g)(3)(A)(ii) of the Act expressly mentions TikTok as an application within the purview of the Act. Although TikTok Inc. brought an action against the Act, the Supreme Court of the United States (SCOTUS) concluded that the Act does not violate the First Amendment, even though TikTok Inc. claimed to be an entity exercising editorial discretion in the selection and presentation of content. On 19 January 2025, TikTok stopped operating in the U.S. for several hours, until U.S. President-elect Donald Trump announced he would adopt an executive order so that TikTok could resume its operations in the U.S. the same day (41). The next day, TikTok’s CEO Shou Chew attended Trump’s inauguration (42). Trump continues to search for a solution to keep TikTok in the United States (43).
In its judgment, the SCOTUS decided that the Act is a content-neutral law, as it does not target particular speech based on its content. Therefore, the Act is subject to intermediate scrutiny, which, according to the SCOTUS, it satisfies. This is because its aim is to prevent China – a designated foreign adversary – from collecting vast amounts of sensitive data on U.S. citizens (44). Data protection could be an issue when analysing the possibility of banning social media in the European Union. In order for personal data to be transferred to a foreign state, the European Commission must find that this state ensures an adequate level of protection (45). Despite the Commission’s findings, the CJEU has on numerous occasions determined that the law and practices in force in the United States do not ensure an adequate level of protection (46). Nevertheless, a new Commission adequacy decision (47) is currently in force permitting data transfers from the European Union to the United States, so there is no ground for banning social media in the EU on this basis.
The U.S. Government also provided a secondary justification for the Protecting Americans from Foreign Adversary Controlled Applications Act by emphasising its interest in preventing foreign adversaries from having control over recommendation algorithms, and from being able to wield that control to alter the content on the platform in an undetectable manner. The SCOTUS did not find it necessary to delve deeper into this argument, determining that the data protection justification was sufficient to uphold the Act. On this point, Justice Gorsuch, in his concurring opinion, observed that it is up to journalists, publishers and speakers of all kinds to choose what they want to speak about and what content they want to present. The argument concerning the functioning of the TikTok algorithm therefore cannot be accepted, as one man’s “covert content manipulation” is another man’s “editorial discretion” (48).
However, we should ask whether in this case we can speak only of editorial discretion. Application owners choose whom to “boost” and whom to “shadow,” and all of this creates, on a mass scale, false impressions of the popularity of persons, topics or views in a way that can influence democratic processes, including elections. This influence cannot be overlooked by states governed by the rule of law, as part of the electoral choice process might be shifted outside state borders into the hands of the foreign entity or even government that controls the algorithms of the social media platform. The question arises whether the Romanian scenario could repeat itself in other EU member states, and whether elections in these states risk being annulled if such foreign interference occurs.
Nevertheless, it should be concluded that all these problems fall within editorial choices, just like those made every day by media outlets. Based on their own right to freedom of expression, media outlets can publish whatever information or ideas they want, publish whomever they want, and decide on their editorial policy (49). Tech billionaires and the governments they support might try to influence elections in other states, but the citizens of these states still make their own choice. They might follow a billionaire’s lead, but they might just as well choose not to. Algorithm manipulation might be a form of interference (theoretically, it could even reduce the audience of a specific speaker to the point where it is practically nonexistent (50)), but it might also have the opposite effect, causing people not only to vote for a candidate who is not “boosted”, but also to stop using a specific social media platform, as happened when users started leaving Musk’s X for platforms like Bluesky (51). However, the scale of this effect is difficult to assess and predict, as no data are currently available on how many users left X for Bluesky. It can only be noted that in the week following Donald Trump’s reelection, Bluesky gained about 2.5 million new users (52).
Banning a social media platform would be a disproportionate and excessive reaction, as other remedies exist that would alleviate the alleged algorithm manipulation. As Justice Ginsburg rightly noticed, banning social media for its users would mean “being cut off from a very large part of the marketplace of ideas” (53). It cannot be overlooked that social media platforms are only one of many means of spreading information and ideas, co-existing with television, radio and the press (printed and electronic). Politicians also have contact with their voters during rallies or through the distribution of fliers. Freedom of expression is inherent to all individuals in the territory of the EU, including social media platforms and their tech billionaire owners, and thus should be adequately protected, including their political and editorial choices. Even if platform owners decide to “boost” the reach of some politician or political party, information about the others can be found elsewhere.
Currently, the only option in the European Union that would allow for introducing a ban on social media platforms would be if they did not comply with the rules on personal data protection, especially if data are transferred outside the EU. Only in that case could the EU adopt legislation to mirror that in the United States. As was already indicated, at present, all the conditions for a lawful transfer are fulfilled, so there is no ground for introducing such a ban. Banning social media platforms on the basis of the content they promote through their algorithms would be disproportionate and thus incompatible with the freedom of expression, as these platforms are also entitled to editorial choices.
The above deliberations do not mean that the European Union is completely defenceless in relation to social media platforms. If these platforms fail to comply with EU law, EU institutions may apply sanctions. The EU could also adopt laws regulating the obligations and responsibilities of these platforms on EU territory. In the same way that traditional media outlets have their obligations and responsibilities, one minimal obligation imposed on social media platforms would be to moderate the content published on them (54).
Indeed, the rules of the DSA already impose very specific responsibilities on social media platforms. Effective mitigation measures against social media’s negative effects on the electoral process might mean that the platforms are already obliged to shape their algorithms toward neutrality and impartiality, so as not to give an advantage to any one political movement or politician. Moreover, platforms are obliged to provide a content moderation process, meaning that they must create and apply rules permitting the removal of illegal content, or introduce fact-checking mechanisms enabling the addition of context or explanations to specific content.
Presently, the European Commission has instituted an investigation into Elon Musk’s X and its compliance with the DSA. One of the allegations is that X’s algorithms “boost” European far-right political parties — for example, the AfD in Germany (55) — and that this can influence the results of national elections by giving these parties an unfair advantage. Proceedings concerning alleged violations of the DSA have also been instituted in the EU against Meta (Facebook). Facebook’s owner Mark Zuckerberg called upon President-elect Donald Trump to intervene and prevent the EU from fining U.S. tech companies for committing EU law violations (56). In December 2025, X incurred a €120 million fine from the European Commission for violating its transparency obligations under the DSA (57). In response, Musk threatened the EU with some undefined ‘response’ (58), then started advocating on X that “the EU should be abolished and sovereignty returned to individual countries” (59).
Nevertheless, these American companies, if they want to operate in the European Union, should obey EU regulations. As the Spanish Prime Minister Pedro Sánchez said, “Social media tycoons should be held accountable if their algorithms poison our societies” (60). If the U.S. could ban TikTok (and other platforms owned by foreign adversaries), why should the EU be prohibited from imposing legal obligations on similar applications owned by American entrepreneurs, and from making them pay fines if their violation of EU law is confirmed? These obligations, or even penalties, would interfere less with the rights of social media platforms than a complete ban. Moreover, even if EU financial sanctions were imposed, companies like Meta have the right to judicial review from the perspective of the freedom of expression and the principle of proportionality.
It can also be argued that social media platforms are private entities with privately owned resources, so that no one is entitled to force property owners to exercise their property rights in a particular manner (61). This assumption is not entirely true in Europe, where the state is entitled to enforce such laws as it deems necessary to control the use of property in accordance with the general interest (Article 1 of Protocol 1 to the ECHR, Article 52(1) of the EU Charter of Fundamental Rights). Protecting civic discourse and electoral processes is a matter of general interest that might entail limitations to the property right.
It must be added that not only does the DSA require moderation from social media platforms; EU law also provides specific rules on combatting the dissemination of terrorist content online (62). On the basis of Article 3(1) of Regulation 2021/784, the competent authorities of each EU member state have the power to issue removal orders requiring hosting service providers to remove terrorist content, or to disable access to it, in all Member States. Both hosting service providers and content providers have the right to an effective remedy (Article 9 of Regulation 2021/784), which safeguards freedom of expression if specific content is erroneously marked as promoting terrorism.
More problematic is the issue of disinformation — for example, Russian propaganda concerning elections or the Russian aggression against Ukraine. Disinformation can be spread intentionally, but it is mostly spread unintentionally by unaware users of social media platforms, as well as by bots (63). At the same time, disinformation weakens democracy, as it undermines trust in elections and democratic institutions (64). Such content is nuanced and highly manipulative, and it cannot easily be distinguished from mere political expression. That is why the European Union still has not elaborated any rules on removing propaganda online (including Russian propaganda), as the practical implementation of such rules seems impossible if freedom of expression is to be safeguarded. Yet, already in 2015, the European Council was stressing the need to challenge Russia’s ongoing disinformation campaigns (65). For this reason, the East StratCom Task Force was created in the EU as a team of experts forming part of the EU’s diplomatic service. This group is responsible for https://euvsdisinfo.eu/, a website available in all EU official languages that documents cases of Russian disinformation on a daily basis. At least some obligations to fact-check and provide context could be imposed on social media platforms as part of their content moderation responsibilities. Reliable, independent and impartial fact-checking can improve the credibility of news (66). Facebook, however, went in the completely opposite direction, laying off its fact-checkers and replacing their work with a system similar to X’s community notes. According to Zuckerberg, this move was a step towards reducing censorship on Facebook (67). This is a complete misunderstanding of the concept of “censorship”: taking care of the reliability and credibility of information is not censorship; it is an ethical responsibility.
The EU could also finance and organise educational campaigns aimed at raising awareness of disinformation and fake news.
Another issue is the protection of freedom of expression from violations of this principle caused by the platforms themselves. At present, the platforms decide on the removal of content, and affected users often have no external recourse against such removal. Content providers should be granted the right to an effective remedy in such cases. Meta, for example, has created an appeals process for Facebook, Instagram and Threads; once this process is exhausted, a user may further appeal to the Meta Oversight Board. It can therefore be said that the platform itself has provided its users with a remedy against content moderation decisions.
There is also one more possible solution to the issue of content moderation and the protection of freedom of speech: the European Union could provide a counterbalance to the existing social media platforms. Meta (Facebook, Instagram) and X are American, and TikTok is Chinese, which raises questions about American and Chinese interference in European politics. The EU could help, including financially, to create a competitive social media platform established in the European Union and fully subject to EU rules, providing neutral algorithms and safeguarding the basic principles of the EU, such as equality, human rights and diversity. This solution would improve the protection of freedom of speech, as the platform would constitute another place where information and ideas could be freely published and exchanged. Moreover, it would not intrude on the freedom of expression granted to other social media platforms and their users.
Social media platforms emphasise that they want to support and reinforce freedom of expression. In its Global Transparency Report (68), X stressed that “X users have the right to express their opinions and ideas without fear of censorship” and presented itself as devoted to human rights, especially freedom of expression. At the same time, X has indicated that it is founded on “Freedom of Speech, not Freedom of Reach”, meaning that, where appropriate, it restricts the reach of posts to make content less discoverable, as an alternative to removal.
The aim of Meta (Facebook, Instagram) is to “create a place for expression and give people a voice” in all forms of expression (comments, photos, music, etc.), and it reserves the right to remove content that violates its community standards or international human rights standards (69).
TikTok’s mission is to “inspire creativity and bring joy”; however, it also moderates and sometimes removes content if it violates the platform’s rules (70).
On all of these platforms, requests for removal may come not only from users but also from governments. Despite their commitment to openness and the free exchange of ideas, social media platforms are thus under pressure to censor content (71). When they remove terrorist content or hate speech, such intervention is generally accepted; in more ambiguous instances, however, platforms face accusations of applying their standards arbitrarily or inconsistently (72).
Moderation is also problematic because it allows the owners of social media platforms to impose on users what they believe users should see, by manipulating recommendation algorithms to favour their preferred political options over others, with the aim of influencing electoral processes. Although such practices can be seen as reprehensible, they are still covered by freedom of expression as the editorial choices of social media platforms. Naturally, this restricts to a certain extent the right to freedom of expression of the content providers whose reach is thus limited, but it still falls within the ambit of editorial choice.
As matters now stand, the manipulation of recommendations should not be a reason for the European Union to ban these platforms on its territory. The platforms compete with, and thus provide alternatives to, each other. Moreover, politicians have other options for sharing their views (television, radio, the press). Banning these platforms in the EU would be feasible only if their functioning were contrary to the rules on personal data protection, and this is currently not an issue. There are, of course, examples of Europe completely banning specific media outlets, but these outlets were engaged in terrorist and war propaganda, activities that strike at the very heart of the European system of human rights protection. The same cannot be said with regard to moderation or algorithm manipulation.
Will social media platforms “boost” some political options to favour them over others? To a certain extent, the answer is affirmative; it has already been indicated in this paper that Elon Musk is eager to influence politics outside the U.S. However, the heart of social media platforms is their users, and if the manipulation of recommendation algorithms reached a point where it became unbearable for supporters of contrary political opinions, they would simply quit or move to other platforms. Such a move might be followed by advertisers, which would mean a loss of profits for the platform at stake.
Nevertheless, it cannot be forgotten that these platforms are private for-profit organisations: they need to generate profit for their shareholders, and so they cannot totally ignore the needs of their users (73). This is yet another reason to conclude that algorithms would not be manipulated beyond a certain point. The European Union could further mitigate the effects of algorithm manipulation by adopting specific rules thereon, or by applying the existing DSA framework to oblige social media platforms to mitigate the risks posed by algorithms to civic discourse, electoral processes and public security.
The EU could also inspire and finance a project leading to the creation of a Europe-based social media platform, one that would reflect European values and human rights and would be free from the political preferences of American tech billionaires who want to influence European and global politics. Boosting specific political options by means of algorithms, discussed in the previous part of this paper, can create an unfair advantage for some European political parties, an advantage coming from outside the EU.
The only issue for which there is currently no immediate solution is the problem of disinformation and propaganda spread through social media platforms. Any attempt to remove such content carries the risk that legitimate political discourse would be limited, as the line between disinformation and political expression can be extremely thin. At the same time, the vast majority of those who spread fake news do so unintentionally, being simply unaware of the information’s falsity (74).
One available option has already been implemented by the EU with regard to Russian disinformation: the daily rectification of disinformation, with appropriate explanations and context, carried out on the website https://euvsdisinfo.eu/. The risk of social media platforms being used for propaganda and disinformation is not a reason to ban them outright in the European Union, as this would disproportionately and unnecessarily restrict the right to freedom of expression of millions of European users. Adequate anti-disinformation campaigns are an acceptable way of dealing with this risk, and some fact-checking obligations could also be imposed on social media platforms.
Patricia L Moravec, Randall K Minas, and Alan R Dennis, ‘Fake News On Social Media: People Believe What They Want To Believe When It Makes No Sense At All’ [2019] 43(4) MIS Quarterly, 1343.
Gregory S Gordon, ‘The Propaganda Prosecutions at Nuremberg: The Origin of Atrocity Speech Law and the Touchstone for Normative Evolution’ [2017] 39(1) Loyola of Los Angeles International and Comparative Law Review 209–245.
Elizabeth Baisley, ‘Genocide and Constructions of Hutu and Tutsi in Radio Propaganda’ [2014] 55(3) Race & Class, 38, 39.
ICTR, case ICTR-99-52-A Ferdinand Nahimana, Jean-Bosco Barayagwiza and Hassan Ngeze, judgment of 28 November 2007.
Benjamin F Jackson, ‘Censorship and Freedom of Expression in the Age of Facebook’ [2014] 44 New Mexico Law Review 121, 128.
Bethany A Conway, Kate Kenski, and Di Wang, ‘The Rise of Twitter in the Political Campaign: Searching for Intermedia Agenda-Setting Effects in the Presidential Primary’ [2015] 20 Journal of Computer-Mediated Communication, 363, 365.
Lance Y Hunter, ‘Social Media, Disinformation, And Democracy: How Different Types of Social Media Usage Affect Democracy Cross-Nationally’ [2023] 30(6) Democratization 1040, 1041.
Pavel Slutskiy, ‘Freedom of Expression, Social Media Censorship, and Property Rights’ [2020] 48 Tripodos 53, 64.
Blair Guild, ‘Sen. Ted Cruz Grills Mark Zuckerberg About Facebook Political Bias,’ CBS News (10 April 2018) <https://www.cbsnews.com/news/sen-ted-cruz-grills-mark-zuckerberg-facebook-political-bias/> accessed 31 January 2025.
Michael Patty, ‘Social Media and Censorship: Rethinking State Action Once Again’ [2019] 40 Mitchell Hamline Law Journal of Public Policy and Practice 99, 108.
Sabine Siebold and Friederike Heine, ‘Exclusive: German Ambassador Warns of Trump Plan to Redefine Constitutional Order, Document Shows’ Reuters (19 January 2025) <https://www.reuters.com/world/us/german-ambassador-warns-trump-plan-redefine-constitutional-order-document-shows-2025-01-18/> accessed 19 January 2025.
It is sometimes argued that users’ freedom of expression is not limited by social media censorship practices, as simply users can go present their opinions and ideas elsewhere, and that social media platforms are not obliged to continue to provide its services to anyone on any conditions. See Slutskiy, ‘Freedom of Expression...’, p. 62.
Matthew P Hooker, ‘Censorship, Free Speech & Facebook: Applying the First Amendment to Social Media Platforms Via the Public Function Exception’ [2019] 15(1) Washington Journal of Law, Technology and Arts, 36, 44.
Louis Staples, ‘Elon Musk Isn’t Trolling Britain. He’s Doing Something Much Worse’ The New York Times (28 January 2025) <https://www.nytimes.com/2025/01/28/opinion/elon-musk-britain-trump.html> accessed 28 January 2025.
Tom Soufi Burridge, ‘EU politicians warn against Elon Musk’s incursions into European politics’ ABC News (9 January 2025) <https://abcnews.go.com/International/eu-politicians-warn-elon-musks-incursions-european-politics/story?id=117460305> accessed 22 January 2025.
Aitor Hernández-Morales, ‘Tech Billionaires Want To ‘Overthrow Democracy’ With Social Media, Spain PM Sánchez Says’ Politico (22 January 2025) <https://www.politico.eu/article/spain-pedro-sanchez-big-tech-billionaires-democracy-social-media/> accessed 22 January 2025.
Evelyn Mary Aswad, ‘The Future of Freedom of Expression Online’ [2018] 17(1) Duke Law & Technology Review, 26, 32.
Tarleton Gillespie, Custodians of the Internet – Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (Yale University Press 2018), 206.
Kris M Obiaje, ‘Nigeria Twitter Ban: An Erosion of Freedom of Information?’ [2021] 4(4) International Journal of Management, Social Sciences, Peace and Conflict Studies 37, 41. Irum Saeed Abbasi, Laila Al-Sharqi, ‘Media censorship: Freedom versus responsibility’ [2015] 7(4) Journal of Law and Conflict Resolution 21.
Marjorie Heins, ‘The Brave New World of Social Media Censorship’ [2014] 127 Harvard Law Review Forum, 325, 326.
UN Human Rights Council, Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, 6 April 2018, A/HRC/38/38.
Patricia L Moravec, Randall K Minas, and Alan R Dennis, ‘Fake News on Social Media...’, 1344.
Aaron Erlich and Calvin Garner, ‘Is Pro-Kremlin Disinformation Effective? Evidence from Ukraine’ [2023] 28(1) The International Journal of Press/Politics 5, 8.
Jon Swaine, ‘Twitter Admits Far More Russian Bots Posted on Election Than It Had Disclosed’ The Guardian (20 Jan 2018), <https://www.theguardian.com/technology/2018/jan/19/twitter-admits-far-more-russian-bots-posted-on-election-than-it-had-disclosed> accessed 31 January 2025
Anna Mierzyńska, ‘Rosja Może Ingerować W Wybory Prezydenckie W Polsce. Jak? Analizujemy Scenariusze [Russia May Interfere In The Presidential Election In Poland. How? We Analyse Scenarios]’ Oko Press (23 January 2025) <https://oko.press/rosja-ingerowac-w-wybory-prezydenckie> accessed 23 January 2025.
Constitutional Court of Romania, judgment no. 32 of 6 December 2024, <https://www.ccr.ro/wp-content/uploads/2024/12/Hotarare_32_2024.pdf> accessed 23 January 2025.
Samantha Bradshaw and Philip N Howard, ‘The Global Organization of Social Media Disinformation Campaigns’ [2018] 71(1.5) Journal of International Affairs 23, 24.
Esma Aïmeur, Sabrine Amri, and Gilles Brassard, ‘Fake News, Disinformation and Misinformation in Social Media: A Review’ [2023] 13(30) Social Network Analysis and Mining 1, 2.
Edson C Tandoc Jr, Darren Lim, and Rich Ling, ‘Diffusion of Disinformation: How Social Media Users Respond to Fake News and Why’ [2020] 21(3) Journalism 381, 385.
Petros Iosifidis and Nicholas Nicholi, Digital Democracy, Social Media and Disinformation (Routledge 2021), 8.
Tarleton Gillespie, ‘Custodians of the Internet...’, 5.
ECtHR, Handyside v. United Kingdom, 5493/72, judgment of 7 December 1976.
ECtHR, NIT S.R.L. v. the Republic of Moldova, 28470/12, judgment of 5 April 2022.
M. Zuckerberg claimed that Facebook is not a media company: Tarleton Gillespie, ‘Custodians of the Internet...’, 7.
Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act) (Text with EEA relevance), OJ L 277, 27.10.2022, p. 1–102.
ECtHR, ROJ TV A/S v. Denmark, 24683/14, decision of 17 April 2018.
Council Regulation (EU) No 833/2014 of 31 July 2014 concerning restrictive measures in view of Russia’s actions destabilising the situation in Ukraine, OJ L 229, 31.7.2014, p. 1–11. The operations of the following media outlets were suspended in the EU: RT - Russia Today English, RT - Russia Today UK, RT - Russia Today Germany, RT - Russia Today France, RT - Russia Today Spanish, Sputnik, Rossiya RTR / RTR Planeta, Rossiya 24 / Russia 24, TV Centre International, NTV/NTV Mir, Rossiya 1, REN TV, Pervyi Kanal, RT Arabic, Sputnik Arabic, RT Balkan, Oriental Review, Tsargrad, New Eastern Outlook, Katehon, Voice of Europe, RIA Novosti, Izvestija, and Rossiiskaja Gazeta.
CJEU, T-125/22 RT France v Council, judgment of 27 July 2022.
ECtHR, Sürek v. Turkey (no. 1), 26682/95, judgment of 8 July 1999.
CJEU, T-262/15 Dmitrii Konstantinovich Kiselev v Council of the European Union, judgment of 15 June 2017.
Clare Duffy and David Goldman, ‘TikTok Is Back Online After Trump Pledged To Restore It’ CNN (20 January 2025) <https://edition.cnn.com/2025/01/19/tech/tiktok-ban/index.html> accessed 21 January 2025.
Kurt Wagner, ‘TikTok CEO Joins Trump’s Inauguration With App’s Future in Doubt’ Bloomberg (20 January 2025) <https://www.bloomberg.com/news/articles/2025-01-20/tiktok-ceo-joins-trump-s-inauguration-with-app-s-future-in-doubt> accessed 21 January 2025.
Nandita Bose, Dawn Chmielewski, Milana Vinn, and Kanishka Singh, ‘Trump Discussing Tiktok Purchase With Multiple People, Decision in 30 Days’ Reuters (26 January 2025) <https://www.reuters.com/markets/deals/white-house-talks-have-oracle-us-investors-take-over-tiktok-npr-reports-2025-01-25/> accessed 26 January 2025
SCOTUS, TikTok Inc., et al. v. Garland, 604 U.S. __ (2025).
Article 45(1) of the Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance), OJ L 119, 4.5.2016, p. 1–88. Previously Article 25(1) of the Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, OJ L 281, 23.11.1995, p. 31–50.
CJEU, C-311/18 Data Protection Commissioner v Facebook Ireland Limited and Maximillian Schrems, judgment of 16 July 2020. CJEU, C-362/14 Maximillian Schrems v Data Protection Commissioner, judgment of 6 October 2015.
Commission Implementing Decision EU 2023/1795 of 10 July 2023 pursuant to Regulation (EU) 2016/679 of the European Parliament and of the Council on the adequate level of protection of personal data under the EU-US Data Privacy Framework (notified under document C(2023)4745) (Text with EEA relevance), OJ L 231, 20.9.2023, p. 118–229.
SCOTUS, TikTok Inc., et al. v. Garland, 604 U.S. __ (2025).
Kris M Obiaje, ‘Nigeria Twitter Ban...’, 38. Pavel Slutskiy, ‘Freedom of Expression...’, 4.
Benjamin F Jackson, ‘Censorship and freedom...’, 132.
Raphael Boyd, ‘From X To Bluesky: Why Are People Fleeing Elon Musk’s ‘Digital Town Square’?’ The Guardian (11 December 2024) <https://www.theguardian.com/media/2024/dec/11/from-x-to-bluesky-why-are-people-abandoning-twitter-digital-town-square> accessed 26 January 2025.
Aditya Soni and Jaspreet Singh, ‘Bluesky Attracts Millions as Users Leave Musk’s X After Trump Win’, Reuters (14 November 2024) <https://www.reuters.com/technology/hate-speech-watchdog-ccdh-quit-musks-x-ahead-terms-change-2024-11-14/> accessed 7 December 2025.
Maria Wood, ‘Access to Social Media Protected by First Amendment’ (New Jersey State Bar Foundation, 12 May 2020) <https://njsbf.org/2020/05/12/access-to-social-media-protected-by-first-amendment/> accessed 31 January 2025.
Tarleton Gillespie, ‘Custodians of the Internet...’, 13, 21.
Pieter Haeck, ‘EU Steps Up Scrutiny Of X as Anger Grows Over Musk’s Political Meddling’ Politico (18 January 2025) <https://www.politico.eu/article/european-commission-steps-up-scrutiny-of-musks-x/>.
Aitor Hernández-Morales, ‘Zuckerberg Urges Trump To Stop The EU From Fining US Tech Companies’ Politico (11 January 2025) <https://www.politico.eu/article/zuckerberg-urges-trump-to-stop-eu-from-screwing-with-fining-us-tech-companies/> accessed 22 January 2025.
Ben Munster, ‘EU Slaps €120M Fine on Elon Musk’s X, Straining Ties With US’ Politico (5 December 2025) <https://www.politico.eu/article/eu-slaps-e120m-fine-on-x-straining-ties-with-us/> accessed 7 December 2025.
Ben Munster, ‘Musk Threatens ‘Response’ Against Individuals Who Imposed €120M X Penalty’, Politico (6 December 2025) <https://www.politico.eu/article/elon-musk-threatens-response-against-individuals-who-imposed-e120m-x-penalty/> accessed 6 December 2025
Elon Musk (X, 6 December 2025) <https://x.com/elonmusk/status/1997279325876367719> accessed 7 December 2025.
Aitor Hernández-Morales, ‘Tech Billionaires Want To ‘Overthrow Democracy’ With Social Media...’.
Pavel Slutskiy, ‘Freedom of Expression...’, 60–61, 65.
Regulation (EU) 2021/784 of the European Parliament and of the Council of 29 April 2021 on addressing the dissemination of terrorist content online (Text with EEA relevance), OJ L 172, 17.5.2021, p. 79–109.
Esma Aïmeur, Sabrine Amri, and Gilles Brassard, ‘Fake News, Disinformation and Misinformation...’, 13.
Lance Y. Hunter, ‘Social media...’, 1043–1044.
European Council, Conclusions on External Relations (19 March 2015) <https://www.consilium.europa.eu/en/press/press-releases/2015/03/19/conclusions-russia-ukraine-european-council-march-2015/> accessed 26 January 2025.
Patricia L Moravec, Randall K Minas, and Alan R Dennis, ‘Fake News on Social Media...’, 1345.
Robert Booth, ‘Meta to Get Rid of Factcheckers and Recommend More Political Content’ The Guardian (7 January 2025) <https://www.theguardian.com/technology/2025/jan/07/meta-facebook-instagram-threads-mark-zuckerberg-remove-fact-checkers-recommend-political-content> accessed 19 January 2025.
X, ‘Global Transparency Report’ <https://transparency.x.com/content/dam/transparency-twitter/2024/x-global-transparency-report-h1.pdf> accessed 29 January 2025.
Meta, ‘Community Standards’ <https://transparency.meta.com/en-gb/policies/community-standards/> accessed 29 January 2025.
TikTok, ‘Community Guidelines’ <https://www.tiktok.com/community-guidelines/en/> accessed 29 January 2025.
Benjamin F Jackson, ‘Censorship and freedom...’, 127.
Matthew P Hooker, ‘Censorship, Free Speech & Facebook...’, 43. Pavel Slutskiy, ‘Freedom of Expression...’, 54, 60.
Matthew P Hooker, ‘Censorship, Free Speech & Facebook...’, 70.
Esma Aïmeur, Sabrine Amri, and Gilles Brassard, ‘Fake News, Disinformation and Misinformation...’, 13.
