An AI-powered crackdown could see platforms suppressing voices who have nothing to do with the group targeted by the home secretary
As Shabana Mahmood takes her bid to brand Palestine Action a terrorist organisation to the Court of Appeal, Good Law Project has uncovered how her authoritarian crackdown could have a chilling effect on free speech.
Voices supporting Palestine and arguing against genocide could be silenced by the combination of the Online Safety Act 2023 and the Terrorism Act 2000 – even if they have nothing to do with the group the home secretary wants to ban.
The Online Safety Act obliges platforms to remove “priority illegal content” from the internet in the UK. At the top of this list is “terrorism content”, which includes posts caught by section 12 of the Terrorism Act, such as “inviting support” for a proscribed organisation or “expressing an opinion or belief that is supportive of a proscribed organisation” while being “reckless as to whether a person to whom the expression is directed will be encouraged to support a proscribed organisation”.
These offences are broadly and poorly drafted, and even the police have found them difficult to understand and apply. When the home secretary targets a group with a generic name like Palestine Action, it becomes harder still to work out whether they are in play.
The interaction between these two pieces of legislation places these obligations not on trained police officers but on online platforms – and gives those platforms a powerful incentive to take content down.
A failure to comply doesn’t just result in bad PR – Ofcom now has the power to levy fines of up to £18m or 10% of global turnover, whichever is greater; for a major platform with turnover in the tens of billions, that runs into billions of pounds. This massive financial risk means platforms are likely to decide it’s better to be safe than sorry: there is no financial downside to removing lawful content, but a huge downside to leaving illegal content up.
And it gets worse. For years, platforms generally operated on a reactive “notice and takedown” basis, removing illegal content after users reported it. So-called content moderators – often relatively low-skilled workers in low-wage jurisdictions – would review reported content and decide whether to take it down.
It’s not clear how they would deal with complaints that concern the Terrorism Act and a group with a name like Palestine Action – questions that have proved a struggle even for highly trained police officers. But we know where the very strong financial incentives lie.
And the situation will shortly get still worse. Under the latest phase of the Online Safety Act, which will take effect later this year, digital platforms are moving into the more challenging era of proactive moderation.
Major platforms will be legally obliged to use proactive technologies, likely to be AI systems, to identify and remove this content, ideally before it even reaches a user’s feed. AI is notoriously bad at nuance. It will struggle to distinguish between direct action (such as calling for property damage, which may trigger “terrorism” filters), political activism (for example, general calls for “action” on Palestine) and news reporting (such as articles on protests or conflict zones).
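To see why, consider a deliberately naive sketch of keyword-based filtering, written in Python. It is purely illustrative: the blocklist, the example posts and the matching logic are invented for this sketch, not taken from any platform’s real system.

```python
# A deliberately naive keyword filter of the kind a platform under
# financial pressure might deploy. Purely illustrative: the blocklist
# and the example posts below are invented assumptions.

# Hypothetical blocklist a cautious platform might adopt once the
# name "Palestine Action" is proscribed.
BLOCKED_PHRASES = [
    "palestine action",
]

def is_flagged(post: str) -> bool:
    """Flag any post containing a blocked phrase, with no sense of context."""
    text = post.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

posts = [
    "Join us in supporting Palestine Action",                 # support for the proscribed group
    "We demand urgent Palestine action from the government",  # ordinary political speech
    "Court hears appeal over the Palestine Action ban",       # news reporting
]

for post in posts:
    print(is_flagged(post), "-", post)
# Prints True for all three: the filter cannot tell support for a
# proscribed group apart from a call for diplomatic action or a headline.
```

Only the first post has anything to do with the proscribed group, yet the filter flags all three, because the group’s name doubles as an ordinary political phrase. An AI classifier trained to minimise a platform’s fine exposure is likely to err in the same direction.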
There is a real risk – even a likelihood – that a post simply calling for a peaceful protest in London could be swept up in the same automated net designed to catch the promotion of terrorism. This censorship is likely to silence a wide spectrum of voices supporting Palestine and speaking out against genocide, even those with zero connection to the targeted group.