Social media platforms like Facebook, TikTok and Twitter will no longer be obliged to take down “legal but harmful” content under revisions to the U.K.’s proposed legislation for online safety.
The Online Safety Bill, which aims to regulate online content, will be revised to remove the measure — controversial among free speech advocates but seen as critical by safety campaigners — British lawmakers announced Monday.
The government said the amendment would help preserve free speech and give people greater control over what they see online.
However, critics have described the move as a “major weakening” of the bill, which risks undermining the accountability of tech companies.
The previous proposals would have tasked tech giants with preventing people from seeing legal but harmful material, such as posts promoting self-harm, suicide or abuse.
Under the revisions — which the government dubbed a “consumer-friendly ‘triple shield’” — the onus for content selection will shift to internet users, with tech companies instead required to introduce systems that allow people to filter out harmful content they do not want to see.
Crucially, though, firms will still need to protect children and remove content that is illegal or prohibited in their terms of service.
‘Empowering adults,’ ‘preserving free speech’
U.K. Culture Secretary Michelle Donelan said the new plans would ensure that no “tech firms or future government could use the laws as license to censor legitimate views.”
“Today’s announcement refocuses the Online Safety Bill on its original aims: the pressing need to protect children and tackle criminal activity online while preserving free speech, ensuring tech firms are accountable to their users, and empowering adults to make more informed choices about the platforms they use,” the government said in a statement.
However, the opposition Labour Party said the amendment was a “major weakening” of the bill, with the potential to fuel misinformation and conspiracy theories.
“Replacing the prevention of harm with an emphasis on free speech undermines the very purpose of this bill, and will embolden abusers, COVID deniers, hoaxers, who will feel encouraged to thrive online,” Shadow Culture Secretary Lucy Powell said.
Meanwhile, suicide prevention charity Samaritans said increased user controls should not replace tech company accountability.
“Increasing the controls that people have is no replacement for holding sites to account through the law and this feels very much like the government snatching defeat from the jaws of victory,” Julie Bentley, chief executive of Samaritans, said.
The devil in the detail
Monday’s announcement is the latest iteration of the U.K.’s expansive Online Safety Bill, which also includes guidelines on identity verification tools and new criminal offences to tackle fraud and revenge porn.
It follows months of campaigning by free speech advocates and online protection groups. Meanwhile, Elon Musk’s acquisition of Twitter has thrown online content moderation into renewed focus.
The proposals are now set to return to the British Parliament next week, with the government aiming for the bill to become law before next summer.
However, commentators say further honing of the bill is required to ensure gaps are addressed before then.
“The devil will be in the detail. There is a risk that Ofcom oversight of social media terms and conditions, and requirements around ‘consistency,’ could encourage over-zealous removals,” Matthew Lesh, head of public policy at free-market think tank the Institute of Economic Affairs, said.
Communications and media regulator Ofcom will be responsible for much of the enforcement of the new law and will be able to fine companies up to 10% of their worldwide revenue for non-compliance.
“There are also other issues that the government has not addressed,” Lesh continued. “The requirements to remove content that firms are ‘reasonably likely to infer’ is illegal sets an extremely low threshold and risks preemptive automated censorship.”
Source: CNBC