Ofcom, the United Kingdom’s communications regulator, has issued an open letter to online platforms voicing concern over the use of their services to incite violence. The letter follows several days of violent civil unrest across the UK, sparked by the fatal stabbing of three young girls in Southport on July 29.
Under the UK’s recently enacted Online Safety Act (OSA), Ofcom has the authority to penalise video-sharing platforms that fail to protect users from content inciting violence or hatred. The Act broadens Ofcom’s remit to cover a far wider range of digital services, notably social media networks.
Under the OSA, penalties can reach 10% of a company’s global annual turnover, giving Ofcom substantially stronger tools to address serious failures in content moderation.
However, Ofcom is still transitioning to the new framework. Enforcement action against social media companies is not expected to begin until 2025, as the regulator continues consulting on its compliance guidelines.
Those guidelines must also be approved by Parliament before enforcement can start. For now, Ofcom has no straightforward legal mechanism to compel social media companies to tackle inflammatory activity that fuels violent unrest.
Despite these limitations, the recent unrest has prompted calls for Ofcom to accelerate its enforcement timetable and take a more assertive stance towards the major social media companies.
Speaking on BBC Radio 4’s World at One programme, former minister Damian Collins urged Ofcom to act decisively by formally putting the tech giants on notice.
Collins argued that social media activity that incites violence, puts people in fear of violence, or stirs up racial hatred already constitutes an offence under the Act. He called on Ofcom to use its auditing powers to scrutinise what tech companies are doing to curb the spread of extremist content and misinformation.
The issue, he stressed, is not only these companies’ inaction but their potential role in making matters worse by actively promoting such harmful content.
Concerns about social media’s influence, particularly that of Elon Musk’s X (formerly Twitter), were fuelled by the rapid circulation of false claims about the identity of the Southport attacker.
UK news outlets were initially barred from naming the accused, a minor, under legal reporting restrictions. In that information vacuum, misinformation spread swiftly on platforms such as X, falsely identifying the attacker as a Muslim asylum seeker.
Social media and messaging apps such as Telegram were then used to organise further disorder. The unrest, which began in Southport, spread across England and into Northern Ireland, with looting, arson, and racist attacks leaving a number of police officers injured.
Musk himself waded into the controversy, engaging with posts from far-right influencers on X and using the platform’s reach to amplify polarising narratives. That includes interactions with Tommy Robinson, whose account Musk reinstated after Twitter had suspended it for hateful conduct.
Musk’s own comments, suggesting that civil strife in the UK is inevitable and criticising Prime Minister Keir Starmer’s government over alleged policing bias, underline how challenging the environment for moderating online discourse has become.
Officials have rejected Musk’s assertions, emphasising that the disturbances are not protests but crimes, committed by people the government plainly describes as criminals.
The question of how to manage major tech platforms that are being used to foment violence and coordinate unrest remains pressing, particularly in the case of X, where the platform’s owner personally amplifies contentious content.
In Ofcom’s open letter, Gill Whitehead makes a modest regulatory intervention, advising platforms that they “can act now” without mandating immediate action.
This may represent Ofcom’s interim strategy until its full guidelines and codes of practice are in place. Whitehead expects those measures to require services to assess and mitigate the risk of illegal content, and to act swiftly once such content is identified.
Once the OSA is fully in force, Ofcom notes, widely used digital services will be expected to enforce their own terms of service rigorously, including provisions on hate speech, incitement to violence, and misinformation.
In closing, Ofcom encourages continued dialogue with platforms as the rules are finalised, and suggests that firms need not wait for their formal duties to take effect before working to make their services safer for users.
Absent a binding framework compelling platforms to change, however, Ofcom’s letter, while a step forward, may not be enough to deter bad actors from continuing their disruptive activity online.
Compiled by Techarena.au.