The past year has seen COVID-19 disinformation inundate online spaces, accelerating the spread of the virus and the loss of life. The pandemic and the ensuing lockdowns have also turbo-charged the digital transformation of the economy, a shift exemplified by workplace patterns and personal routines that increasingly revolve around the digital sphere. Accordingly, 2020 witnessed the EU set in motion its most ambitious plan yet to rein in online platforms and establish clear rules in the largely unregulated virtual space.

The level of toxicity in online spaces has indeed become a pertinent issue for a broad swath of Europeans, with 65% of EU citizens agreeing that the Internet is not safe for its users. The pandemic has only compounded the problem. Wild conspiracy theories denying the very existence of the virus and outlandish allegations that vaccination was being deployed as a tool to nanochip people spread even faster than the virus itself. Digital platforms enacted some measures, including promoting official sources of information on the pandemic and displaying warning labels on misleading content, but these initiatives did little to diminish the spread of such material. In some cases, social media companies removed sources of coronavirus disinformation entirely, a method that proved more effective. Even these moves, however, were made sporadically and without a transparent strategy or communication approach.

Real-life implications of online toxicity: violence in the streets and vaccine rejection

The impact of toxic narratives was not confined to the digital sphere. They prompted people in many countries across the EU – Germany, Italy, Slovakia and the Czech Republic, to name just a few – to take to the streets and clash with police. According to GLOBSEC Trends 2020, one third (34%) of people living in Central and Eastern Europe believe that COVID-19 was fabricated with the aim of manipulating society, and one in four (24%) find credible the claim that the COVID vaccine is a tool intended to nanochip the population.

In 2019, more than 75% of Europeans agreed that vaccines can be effective in preventing infectious diseases. Yet a year later, the population's willingness to get vaccinated against the most pressing health threat of modern history, COVID-19, looked far less assured. Across Central and Eastern Europe, only 37% on average expressed interest in getting the vaccine, with the numbers dropping further in places where preposterous conspiracy theories are flourishing.

From the outset of the pandemic, several countries, Russia and China among them, lost no time in exploiting it as an opportunity to weaken the EU. These attempts were epitomized by the shipment of largely useless medical assistance to Italy for propaganda purposes and the pursuit of mask diplomacy amplified by concerted online campaigns. According to the EEAS Stratcom East disinformation database, more than 700 articles published by pro-Russian sources, many of them financed by Russia, have peddled COVID-19-related conspiracy theories and other forms of toxic content, some as early as January 2020.

European solutions are a must

As the impact of the online world has grown, so too have calls for greater regulation of digital platforms and for the enforcement of rules and laws in this terrain. Big tech companies and online platforms, in particular, wield enormous power, rendering it next to impossible for national authorities, especially in smaller countries, to regulate them effectively.

Just to put things into perspective: Facebook's current user base of 2.7 billion nearly rivals the combined populations of China and India. Facebook, Google and other prominent online platforms indeed exert a level of power and influence comparable not only to that of countries but also to that of supranational authorities. Endeavours by smaller EU member states to enforce their existing national laws and regulations against such behemoths are doomed to fail.

In the past, the EU has successfully sought to wrest back control over its citizens' privacy from big tech. The GDPR, vehemently opposed at the time of its adoption, has become a model throughout the world, with the EU setting global norms for online privacy protection. In the run-up to the 2019 European Parliament elections, the European Commission pressed the largest online platforms, through a voluntary Code of Practice, to adopt comprehensive measures aimed at increasing transparency and limiting the spread and impact of disinformation. According to the Commission's own assessment and several independent evaluations, however, there have been significant deficiencies in its application. A lack of country-specific data, ambiguous procedures and inadequate performance indicators and monitoring considerably hampered its intended impact.

When the European Democracy Action Plan (EDAP) was unveiled by Commission Vice-President Jourová, many applauded the move. And rightly so. The EDAP presents a coherent blueprint for the EU to protect the fundamental elements of democratic societies, including free and fair elections, a free press and the ability of citizens to form their own opinions free of manipulation. In this regard, the EDAP's pledge to revamp the Code of Practice on disinformation by summer 2021 is a much-needed step.

DSA adds new transparency requirements: ads, algorithms, data sharing

The Commission's proposal for the Digital Services Act (DSA), officially unveiled on December 15, aims to rewrite the rules for all digital services and lays out a different focus, one that promises a much wider impact. Most of the DSA's relevant content-related provisions, though, would apply only to illegal content. A great deal of harmful but not illegal content, including medical hoaxes and COVID-19 disinformation, would consequently fall outside their scope. The DSA, however, contains several important provisions that could have broader effects, namely by rewriting the rules on how online platforms operate.

Three specific elements of the DSA are notable in this respect. The DSA would establish explicit rules on the transparency of digital advertising, requiring platforms not only to distinguish ads from other content but also to disclose the entity that paid for the ads and the targeting criteria used. The micro-targeting of issue-based content has been a widely criticised practice, and the new rules should bring greater transparency to it. The largest online platforms (those with at least 45 million monthly users in the EU) would, furthermore, be required to keep detailed data on all online adverts accessible in online libraries.

The functioning of recommender systems used on large online platforms has also drawn significant attention and scrutiny. As research has demonstrated, these systems often lead users into rabbit holes, offering increasingly outlandish, bizarre and even radical content to maintain engagement. The DSA aims to remedy this by requiring major platforms to provide access to the main parameters used in their recommender systems and to let users turn off profiling-based recommendations.

While the Code of Conduct and the Code of Practice were important steps in limiting the spread of illegal and harmful content, independent scrutiny of platforms' efforts has been rather limited. Data provided by the platforms themselves often lacked detail, and automated access to online libraries was restricted. The requirement for greater data sharing with authorities and researchers would therefore be an important move, allowing for impartial oversight and further improvement of existing measures.

What’s next?

Once adopted, the DSA will be directly applicable across the entire EU, harmonising approaches to online content across the bloc without the need for national authorities to transpose the new rules into specific national legislation.

The new Commission initiatives announced this year are bold and ambitious and represent a significant first step in a lengthy process that will involve negotiation and numerous amendments. The process, nevertheless, will hopefully lead to a more transparent, safe and predictable online environment, one where hostile actors – domestic or foreign – will find it more challenging to manipulate, deceive and commit crimes. At a time when our lives are increasingly shifting to the virtual domain, these measures are more important than ever.

Source:
https://www.neweurope.eu/article/is-the-digital-services-act-a-watershed-moment-in-europes-battle-against-toxic-online-content/
