The U.K. government is weighing stronger regulatory powers over digital platforms after online misinformation helped spark violent unrest across England and Northern Ireland.
Prime Minister Keir Starmer said on Friday that the government plans to assess the effectiveness of the Online Safety Act (OSA).
The OSA, which passed into law in September 2023 after prolonged parliamentary debate, requires user-to-user services (such as social media networks and messaging apps) to remove illegal content and protect users from a range of harms, including hate speech. Non-compliance can result in fines of up to 10% of worldwide annual revenue.
“The internet and social media are not exempt from the law, a fact made evident by ongoing prosecutions and sentencing,” Starmer remarked, pointing to legal action against those inciting hate online, as reports emerged of the first sentencing for hate speech linked to the violent unrest.
However, Starmer acknowledged the need for a broader examination of social media’s role post-disturbance, stressing that the immediate focus should be on restoring order and ensuring public safety.
A review of the OSA was confirmed following critiques, notably by London Mayor Sadiq Khan, who deemed the act inadequate for its intended purpose, as reported by The Guardian.
The murder of three young girls in Southport on July 30 triggered widespread disorder in cities and towns across England and Northern Ireland.
Online misinformation falsely identifying the attacker as a Muslim asylum seeker spread rapidly, amplified by posts from right-wing extremists, and has been closely linked to the recent upheaval.
Separately on Friday, a British woman was reportedly arrested on suspicion of inciting racial hatred through misleading social media posts about the attacker’s identity. Her arrest underscores the government’s intent to tackle the spread of disinformation.
Yet, the broader issue of how to effectively manage tech platforms and digital tools that propagate misinformation remains unresolved.
The OSA has not yet been fully implemented, pending regulatory guidance, and some argue that a premature review could undermine the legislation before it has a chance to work. Critics also point to its failure to address the engagement-driven algorithms of major platforms, which profit from amplifying controversy.
Revisions made in late 2022 by the previous Conservative government, especially the removal of clauses addressing “legal but harmful” speech, have drawn criticism for potentially compromising the fight against disinformation under the guise of protecting free speech.
While most mainstream social media platforms have policies against hateful content, the effectiveness of their enforcement is questionable. In one example, a man was arrested on August 6 and charged with inciting racial hatred over Facebook posts targeting a hotel housing asylum seekers.
Platforms often claim ignorance, acting only when harmful content is reported. A more stringent regulatory framework could necessitate a shift towards a proactive engagement in halting the spread of harmful misinformation.
The European Union is currently examining X (formerly Twitter) under its Digital Services Act, focusing on its moderation practices concerning disinformation. The U.K.’s handling of related content may influence the ongoing EU probe.
Once the OSA comes fully into force, expected by next spring, it should put significant pressure on major platforms to diligently enforce terms of service prohibiting misinformation, according to a spokesperson for the Department for Science, Innovation and Technology.