In our previous article, we examined the U.K.’s Online Safety Act (OSA), which has faced significant criticism for its broad categorisation framework that potentially captures platforms beyond its intended scope, mandatory identity verification requirements that risk undermining user privacy, and the imposition of disproportionate compliance burdens on diverse online services.
On 11 August 2025, the first legal challenge to the OSA’s categorisation regulations was determined in Wikimedia Foundation v Secretary of State for Science, Innovation and Technology. Wikimedia, the American non-profit behind Wikipedia, challenged its potential classification as a Category 1 service, arguing this would compromise contributor privacy and fundamentally disrupt its collaborative model through mandatory identity verification requirements.
The U.K. High Court of Justice, with Mr Justice Johnson presiding, largely dismissed the challenge, refusing permission on human rights grounds (Articles 8, 10, 11, and 14 ECHR) and irrationality. However, the court granted permission on one narrow ground: that the Secretary of State failed to properly consider user numbers and platform functionalities when drafting the regulations. The court cautioned that its decision does not endorse any future regulatory regime that would significantly impair platform operations.
Ronin Legal takes a closer look.
Background
Wikimedia announced its challenge to the OSA’s lawfulness on 8 May 2025, targeting the categorisation regulations that outline platform duties.
As detailed in our previous article, the Act creates three categories (Category 1, 2A and 2B) based on perceived risk determined by visitor numbers and specific features such as content resharing. Each category carries varying compliance requirements.
The primary concern with Category 1 classification is the mandatory identity verification requirement, which could exclude users unwilling to verify their identities and restrict access and participation for UK users.
Wikimedia’s Challenge
Wikimedia’s primary contention was that categorising the platform as a Category 1 service is “logically flawed.” The foundation argued the law was intended to regulate “large, profitable social media companies where anonymous content can go viral,” and that bringing platforms such as Wikipedia within this regime would fundamentally disrupt their operations.
The organisation further argued that the regulation is incompatible with Articles 8, 10, and 11 of the European Convention on Human Rights, as well as Article 14, because it fails to differentiate between distinct types of online providers.
According to lead counsel Phil Bradley-Schmieg, the user verification obligation, under which users could choose to see only content from contributors whose identities have been verified, is incompatible with the platform’s model. It would be impossible to isolate content created by verified users from that of anonymous contributors without making articles incoherent or unusable.
The foundation also pointed to Ofcom’s research showing that among users who experienced online harm, 56% reported harm from social media, 9% from sites hosting user-posted videos, and 8% from webmail. Only 2% reported harm from the general “other” category, which includes their platform.
Highlighting its global reach, the organisation noted that it offers content in over 300 languages, with millions of articles viewed an estimated 15 billion times per month worldwide. Compliance with such regulations would divert significant resources away from what it describes as “digital public goods.”
The mandatory identity verification requirement could also jeopardise the privacy and safety of volunteer contributors, exposing them to risks such as data breaches, stalking, lawsuits, penalties, and imprisonment. The foundation stressed that such measures would threaten knowledge-sharing and undermine fundamental rights to privacy and free speech. For these reasons, Wikimedia has received support from several advocacy organisations, as well as a leading data protection law firm that is closely monitoring the case for its potential privacy implications.
The Defendant’s Response
The defendant stated that the research Ofcom conducted to inform the categorisation regulations under the U.K.’s Online Safety Act was structured around four key themes.
The first was consistency, where Ofcom applied the same data sources and research for Category 1 conditions as for other category conditions.
The second was objectivity: research was conducted in a “service-agnostic” manner, meaning data was analysed without being linked to any specific identified service, to maintain impartiality and neutrality in the regulatory assessment.
The third was scope, which aimed to use comprehensive and reliable data sources covering a broad range of relevant factors, and the fourth was transparency, achieved by publishing the data and additional research sources.
The defence argued that the regulations were designed to apply consistently and fairly across services based on objective criteria such as user numbers and functionality, aiming to protect users, especially children and vulnerable groups, while balancing freedom of expression and privacy safeguards.
It argued that the claimants had not shown they were victims of a breach of the Convention and therefore lacked sufficient standing to bring the claim, rendering it hypothetical. It also noted that it is not yet known whether the claimant will fall within Category 1 or be treated in the same way as social media companies.
In any event, services falling within the scope of regulation differ in many ways, and this variation was understood and expected.
The Court’s Ruling
The court refused permission to pursue the grounds alleging incompatibility with Articles 8, 10, and 11 of the European Convention on Human Rights, breach of Article 14, and irrationality, finding that these grounds lacked arguable merit.
However, it granted permission on the ground that the Secretary of State had failed, contrary to paragraph 1(5) of Schedule 11 to the OSA, to “take into account the likely impact of the number of users of the user-to-user part of the service, and its functionalities, on how easily, quickly and widely regulated user-generated content is disseminated by means of the service.”
The court emphasised, however, that its decision does not authorise a regime that would significantly hinder the platform’s operations. Any such regime would need to be justified as proportionate to avoid breaching the right to freedom of expression, and it was common ground that the classification decision itself rests with Ofcom.
What Lies Ahead
The court’s partial dismissal leaves several critical issues unresolved. While it refused permission on human rights grounds, it granted permission to challenge whether the Secretary of State properly considered how user numbers and platform functionalities affect content dissemination when drafting the categorisation regulations.
Crucially, Ofcom has not yet made final determinations about which specific services will be classified as Category 1. The court confirmed that such future classification decisions will constitute public law decisions subject to judicial review on grounds including human rights incompatibility.
Justice Johnson made clear that any public authority decision which substantially hinders Wikipedia’s ability to operate would, unless justified, be unlawful under section 6 of the Human Rights Act 1998 read with Article 10 of the European Convention on Human Rights.
Consequently, if Ofcom classifies Wikipedia as a Category 1 service and imposes burdensome identity-verification or similar requirements, those measures must withstand a proportionality assessment under Article 10. Should they unjustifiably impair Wikipedia’s collaborative model, Wikimedia would have grounds to invoke exemptions under section 220(4) of the Online Safety Act or to seek amendment of the threshold regulations via judicial review.
Authors: Shantanu Mukherjee, Alan Baiju, Akshara Nair