European Regulators, Wielding a New Law Just Coming Into Effect, Begin a Broad Crackdown on Big Tech

The landmark guidelines of the Digital Services Act provoke concerns over content moderation that could reach far beyond Europe.

Google, Facebook, TikTok and other Big Tech companies operating in Europe are facing one of the most far-reaching efforts to clean up what people encounter online. AP/Michael Dwyer

New European Union rules on “harmful” online content underscore how foreign and global bodies have become de facto regulators of what Americans, in their own country, see and do online.

European governments, acting under the new Digital Services Act passed last year, can require companies to remove a wide range of content deemed illegal or in violation of a platform’s terms of service, such as the promotion of genocide or hate speech. 

The landmark new guidelines, which took effect on Friday, seek to ensure that "the online environment remains a safe space," as the president of the European Commission, Ursula von der Leyen, puts it. "The greater the size, the greater the responsibilities of online platforms," she says.

Nineteen online platforms with 45 million or more users, representing a tenth of the EU's population, face the act's highest level of regulation. They must "redesign their systems" to include explicit labels on AI-generated content, bans on advertising targeted using "sensitive data" such as political opinions, safety measures for minors, and rigorous content moderation, the EU's commissioner for the internal market, Thierry Breton, asserted in an online briefing.

The ratcheting up of restrictions on Big Tech firms is fueling concerns about infringements on free speech online. Although the new infrastructure applies legally only to the European Union — whose 27 nations represent a market of 400 million consumers — it is likely to have global consequences.

The platforms targeted by the act will fund a European Commission task force to ensure the companies comply with the EU's voluntary disinformation code of practice. Companies that fail to comply may face a fine of up to 6 percent of their annual global revenue, an investigation by the commission, or an outright ban on operating in the EU, an enforcement regime that one British political commentator calls "Orwellian."

“This legislation, while introduced with the intention of curbing content that is already illegal, brings with it certain provisions that infringe upon the very essence of open dialogue and free speech,” the chief executive of Gab, a social media site that seeks to promote “the free flow of information online,” Andrew Torba, says in a statement on the Gab website.

When asked by a reporter at a press conference in Dublin whether state institutions’ authority to determine the truth is dangerous for democracy, Ireland’s media minister, Catherine Martin, skirted the question, saying instead that the act will protect children from harmful content online. She insisted that “anything that can prevent misinformation is to be welcome,” according to a post on the social media platform X, formerly known as Twitter. The reporter retorted: “You’re literally regulating truth itself.”

“My concern is that any challenge to the prevailing orthodoxy on Covid vaccines, climate change, the war in Ukraine, mass immigration, and trans rights will be classified as ‘harmful,’” Toby Young, the founder of the Free Speech Union, a British organization that advocates for freedom of speech, tells the Sun.

Some companies have already modified their platforms to abide by the new guidelines. Facebook and Instagram made it easier for users to flag content. Amazon added a new tool for reporting suspicious goods. TikTok implemented an extra option for reporting videos for issues such as fraud or harassment.

Some companies are pushing back. In the first challenge by a Silicon Valley tech giant to the new standards, Amazon filed a claim with the European General Court last month, arguing that it has been “unfairly singled out” because none of the largest retailers in the European countries where Amazon operates has been similarly designated as a very large platform.

The German online fashion retailer Zalando filed a similar challenge, arguing that it doesn’t pose a “systemic risk” of spreading harmful or illegal content from third parties and should not be targeted by the act, the company’s head of public affairs for the EU, Aurelie Caulier, said, as reported by the Washington Post.

Regulatory efforts might also hinder innovation and undermine a valuable business model. As an analysis by the Pepperdine Law Review concludes, robust legal structures intended to grant users greater control over their data “may be the product of protectionist impulses rather than concerns for consumer welfare.”

Yet European regulators are solidifying their crackdown on tech giants through additional measures. In June, the European Parliament overwhelmingly approved sweeping protections against potentially nefarious uses of artificial intelligence through the EU A.I. Act. 

Meanwhile, the United Nations is negotiating a Cybercrime Treaty that aims to upend international criminal law and intensify police surveillance of user data across nations, raising a further set of privacy and free speech concerns.

