The Reverse Brussels Effect: Human Rights, Free Speech, and the Future of Online Spaces

By Hounaz Beheshti, Policy Advisor, Media and Human Rights

The European Union (EU) has long been known for setting global regulatory standards, a phenomenon often referred to as the “Brussels Effect”. It has enacted several key legislative measures aimed at regulating the digital space. The Digital Services Act (DSA), the EU AI Act, and the European Media Freedom Act (EMFA) represent significant attempts to safeguard democracy and ensure accountability in tech governance. However, these regulations face mounting challenges, particularly as some platforms and global actors push back against EU-led governance. Are we, in fact, witnessing a reverse Brussels Effect—where external pressures and tech giants influence the EU rather than the other way around? 

The Dilemma of Content Moderation: Free Speech vs. Disinformation 

In recent developments, major online platforms have made significant changes to their content moderation policies, particularly concerning fact-checking, following pressure from the newly re-elected US President Donald Trump.  

Meta Platforms announced the termination of its third-party fact-checking programme in the United States. The initiative is being replaced by a “Community Notes” system, modelled after a similar feature on X (formerly Twitter). The new approach allows users to add contextual information to posts they consider misleading, shifting responsibility from independent fact-checkers to the broader user community.

Two primary concerns come to mind. The first is that the current digital literacy ecosystem is not robust enough to sustain this shift of fact-checking responsibility to users. While some initiatives exist to promote digital literacy, several challenges persist, among them unequal digital literacy levels and the lack of standardized education. Additionally, many users lack the critical thinking skills necessary to assess the credibility of sources, making them more susceptible to misinformation. The rapid evolution of digital platforms and the increasing sophistication of disinformation campaigns further complicate the landscape, often outpacing efforts to educate users. Moreover, disparities in access to digital tools and reliable internet connections, particularly in lower-income and rural areas, create further obstacles to fostering a well-informed online community.

The second concern is the monetization of content online. Major online platforms often benefit from content sharing, utilizing moderation algorithms that prioritize financial profit over human rights considerations. This exposes users to algorithmic biases and content manipulation, among other threats. In addition, the intricacies of these algorithms are becoming increasingly opaque, raising concerns among EU monitoring bodies about their ability to oversee various AI applications. With substantial power concentrated in the hands of a select few, the feasibility of transferring fact-checking responsibilities to citizens is questionable.

The emotional investment in shared information is another factor that can impede the effectiveness of user-driven fact-checking. Research indicates that users who are deeply emotionally invested in a piece of information are significantly less likely to reconsider or change their beliefs about its accuracy.  

Finally, the effectiveness of this shift assumes that other pillars of a free democratic society, such as independent and public interest media, possess the necessary knowledge, skills, freedom, and financial means to perform their roles as fact-checkers and counteract false narratives. The recent developments have raised significant concerns among civil society organizations (CSOs) and philanthropists dedicated to supporting independent media. CSOs play a crucial role in combating misinformation through various strategies, from fact-checking, verification, and media literacy programmes to coalition building, advocacy, and policy engagement that promotes the transparency and accountability of tech platforms. RNW Media’s recently published Haarlem Declaration is one example: an inclusive framework for the ethical use of AI in public interest and independent journalism, co-created with 88 public interest media outlets, civil society groups, and media experts from 34 countries. The declaration aims to set a precedent for proactively and inclusively addressing AI-driven challenges.

Human Rights in the Digital Age: Redefining the Boundaries 

The intersection of technology, democracy, and media freedom has reached a critical juncture. The power held by a handful of tech giants, operating with minimal civic oversight, raises pressing concerns. Authoritarian governments are making sweeping changes to digital policies overnight, and the resilience of democratic institutions is being tested. Policymakers struggle to keep pace with evolving digital realities and increasingly find themselves at a loss when attempting to enforce regulatory frameworks.

In addition to the above, major human rights concepts such as free speech and the right to information—once well-established principles—are being contested in ways that challenge traditional legal frameworks. The European Parliament’s recent heated debates over the meaning, definition, and implications of free speech in the digital sphere are one example of this.

In a context where human rights have become more politicized than ever, the role of civil society organisations as watchdogs is becoming even more pronounced. However, even as their importance rises, these organisations face dwindling resources, a challenge starkly illustrated by the recent decision of the US government to halt aid from public funds following President Trump’s re-election.

If democratic societies are to reclaim control over digital governance, tech accountability must be prioritized. This means enforcing stricter transparency measures, ensuring independent oversight bodies have a say in decision-making processes, and holding platforms accountable for the societal consequences of their policies. Governments and civil society must work together to create mechanisms that protect public discourse from manipulation. The reverse Brussels Effect highlights the pressing need for proactive policymaking, international cooperation, and a reimagined approach to tech governance.