European Union lawmakers have extended a program that requires tech platforms to report on their efforts to combat the spread of vaccine disinformation for a further six months. “The continuation of the monitoring program is necessary as the vaccination campaigns throughout the EU are proceeding at a steady and increasing pace, and the upcoming months will be decisive for reaching a high level of vaccination in the Member States. It is key that vaccine hesitancy is not fuelled by harmful disinformation in this important period,” the Commission writes today.
Facebook, Google, Microsoft, TikTok, and Twitter are signed up to making monthly reports as a result of participating in the bloc’s (non-legally binding) Code of Practice on Disinformation, although going forward they’ll be switching to bi-monthly reporting.
Publishing the latest batch of platform reports, covering April, the Commission said the tech giants have shown they’re unable to police “dangerous lies” by themselves, while continuing to express dissatisfaction at the quality and granularity of the data that platforms are (voluntarily) providing on how they’re combating disinformation generally.
“These reports show how important it is to be able to effectively monitor the measures put in place by platforms to reduce disinformation,” said Věra Jourová, the EU’s VP for values and transparency, in a statement. “We decided to extend this program because the amount of lies continues to flood our information space and because it will inform the creation of the new generation Code against disinformation. We need a robust monitoring program and clearer indicators to measure the impact of platform actions. They simply cannot police themselves alone.”
Last month, the Commission announced a plan to beef up the voluntary Code, saying it wants more players, especially from the adtech ecosystem, to help de-monetize harmful nonsense. The Code of Practice initiative pre-dates the pandemic, kicking off in 2018 when concerns about the impact of ‘fake news’ on democratic processes and public debate ran high in the wake of major political disinformation scandals. But the COVID-19 pandemic accelerated concern over the issue of dangerous nonsense being amplified online, bringing it into sharper focus for lawmakers.
In the EU, lawmakers are still not planning to put regional regulation of online disinformation on a legal footing, preferring to continue with a voluntary approach, which the Commission refers to as ‘co-regulatory’: one that encourages action and engagement from platforms vis-a-vis potentially harmful (but not illegal) content, such as offering tools for users to report problems and appeal takedowns, but without the threat of legal sanctions if they fail to live up to their promises.
EU lawmakers will, however, have a new lever to ratchet up pressure on platforms in the form of the Digital Services Act (DSA). The regulation, proposed at the end of last year, will set rules for how platforms must handle illegal content. And commissioners have suggested that those platforms that engage positively with the EU’s disinformation Code are likely to be looked upon more favorably by the regulators overseeing DSA compliance.
In another statement today, Thierry Breton, the EU’s Internal Market commissioner, suggested that combining the DSA and the beefed-up Code will open up “a new chapter in countering disinformation in the EU”. “At this crucial phase of the campaign, I expect platforms to step up their efforts and deliver the strengthened Code of Practice as soon as possible, in line with our Guidance,” he added.
Disinformation remains a tricky topic for regulators, given that the value of online content can be highly subjective, and any centralized order to remove information — no matter how stupid or ridiculous the content in question might be — risks a charge of censorship.
Removal of COVID-19-related disinformation is certainly less controversial, given the apparent risks to public health (such as from anti-vaccination messaging or the sale of defective PPE). But even here the Commission seems most keen to promote pro-speech measures being taken by platforms, such as encouraging users to get vaccinated. It flags, for example, that Twitter introduced prompts on users’ home timelines during World Immunisation Week in 16 countries and held conversations on vaccines that received 5 million impressions.
There is some detail on actual removals in the reports. Twitter reported challenging 2,779 accounts, suspending 260, and removing 5,091 pieces of content globally on the COVID-19 disinformation topic in April. Facebook, for its part, says it removed 47,000 pieces of content in the EU for violating its misinformation policies, which the Commission notes is a slight decrease from the previous month.
Google, meanwhile, reported taking action against 10,549 URLs on AdSense, which the Commission notes as a “significant increase” vs. March (+1,378). But is that increase good news or bad? Increased removals of dodgy COVID-19 ads might signify better enforcement by Google, or significant growth of the COVID-19 disinformation problem on its ad network.
The ongoing problem for regulators trying to tread a fuzzy line on online disinformation is how to quantify any of these tech giants’ actions, and truly understand their efficacy or impact, without having standardized reporting requirements and full access to platform data. For that, regulation would be needed, not selective self-reporting.