On April 24, 2025, the UK’s Office of Communications (Ofcom) published the ‘Protection of Children Codes of Practice’, implementing Part 3 of the Online Safety Act.
This framework imposed new obligations on digital platforms accessible within the UK, with a compliance deadline of July 24, 2025.
Following implementation, platforms began restricting access to UK users, while VPN services reported increased British subscriptions.
Understanding The Online Safety Act
The Online Safety Act regulates online content with a focus on protecting minors from harmful digital material. The law requires platforms to implement “highly effective age assurance” (HEAA) systems to block children’s access to harmful content, including pornography, bullying, hateful material, and content encouraging self-harm, suicide, dangerous stunts, substance abuse, or eating disorders.
Platforms must also provide reporting mechanisms for parents and children, conduct risk assessments, and modify their algorithms to exclude harmful content from children’s feeds. The legislation applies to any platform accessible from within the UK, regardless of where the company is based.
Platform Classifications and Obligations
Ofcom estimates that over 100,000 online services will be affected, including gaming sites, adult content platforms, dating apps, e-commerce platforms, and social media sites. The Act creates three categories with different compliance requirements:
Category 1 covers major user-to-user platforms like Facebook, YouTube, and TikTok. These services face the most extensive requirements if they use a content recommender system and have more than 34 million UK users, or if they have more than 7 million UK users, use a recommender system, and allow users to forward or re-share user-generated content.
Category 2A applies to large search engines with more than 7 million UK users, such as Google and Bing.
Category 2B covers messaging services like WhatsApp or Messenger that allow direct messaging and serve more than 3 million UK users, with lighter obligations than other categories.
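Read as rules, these categories combine a service’s estimated UK user numbers with its features. The sketch below is a simplified, hypothetical illustration of how the thresholds described above might be combined; the field names and structure are illustrative, not drawn from Ofcom’s guidance.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Service:
    uk_users: int                  # estimated UK user base
    has_recommender: bool          # uses a content recommender system
    allows_resharing: bool         # users can forward/re-share user-generated content
    is_search_engine: bool
    allows_direct_messaging: bool

def categorise(s: Service) -> Optional[str]:
    """Illustrative mapping of the thresholds described above to a category."""
    if s.has_recommender and (
        s.uk_users > 34_000_000
        or (s.uk_users > 7_000_000 and s.allows_resharing)
    ):
        return "Category 1"
    if s.is_search_engine and s.uk_users > 7_000_000:
        return "Category 2A"
    if s.allows_direct_messaging and s.uk_users > 3_000_000:
        return "Category 2B"
    return None  # still in scope of the Act, but not a categorised service
```

A service that falls outside all three categories remains subject to the Act’s baseline duties; categorisation only determines which additional obligations apply.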
The Age Verification Challenge
The Act requires highly effective age assurance systems. Traditional methods such as self-declaration, credit card payments without identity checks, or age restrictions buried in terms and conditions are considered insufficient. Platforms must implement systems that effectively prevent children from accessing harmful content.
This requirement creates tension with privacy principles under UK GDPR, particularly data minimisation and user confidentiality. Acceptable verification methods include facial recognition, facial age estimation through third-party services that require image submission, and verification against official identity documents such as passports or driving licences.
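As a rough illustration of how a facial age estimation check could gate age-restricted content while respecting data minimisation, here is a hypothetical sketch. The provider endpoint, field names, and response format are placeholders, not a real API.

```python
import requests  # pip install requests

# Placeholder endpoint for a hypothetical third-party age-assurance provider.
AGE_ESTIMATION_URL = "https://age-assurance.example.com/v1/estimate"

def passes_age_check(selfie: bytes, minimum_age: int = 18) -> bool:
    """Submit an image to the (hypothetical) estimation service and return pass/fail.

    Only the boolean outcome is kept; the image is neither logged nor stored by
    the platform, in line with data-minimisation expectations under UK GDPR.
    """
    response = requests.post(
        AGE_ESTIMATION_URL,
        files={"image": ("selfie.jpg", selfie, "image/jpeg")},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["estimated_age"] >= minimum_age

def can_view(content_is_age_restricted: bool, selfie: bytes) -> bool:
    """Gate access: unrestricted content is always shown; restricted content
    is shown only after a successful age check."""
    return (not content_is_age_restricted) or passes_age_check(selfie)
```

The design choice that worries critics is visible even in this toy example: the check only works if the user hands an image or identity document to the platform or its verification partner.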
Market Response
Following implementation, platforms began restricting UK access to various content types. Digital Future Daily confirmed through VPN testing that platforms were limiting access to content ranging from political discussions on X (formerly Twitter) to specific posts on Reddit. Gab, a platform known for hosting extremist content, blocked UK access entirely.
VPN providers reported increased demand, with Proton VPN noting a 1,400% hourly increase in sign-ups as the law took effect. NordVPN and Windscribe similarly reported rises in UK subscriptions.
Enforcement and Penalties
The Act provides for financial penalties of up to £18 million or 10% of global turnover, whichever is higher, and personal liability can extend to executives and senior managers. Ofcom, as the independent regulator, has investigative powers and can apply to the courts to block access to non-compliant services. To navigate these obligations, many platforms are turning to technology law firms, which help align compliance strategies with UK regulations while safeguarding user privacy under UK GDPR.
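The penalty cap is simply the greater of the two figures cited above, as a trivial calculation shows (the figures come from the Act as summarised here; the function name is illustrative):

```python
def penalty_cap(global_turnover_gbp: int) -> int:
    """Maximum fine under the Act: the greater of £18 million or 10% of global turnover."""
    return max(18_000_000, global_turnover_gbp // 10)

# A platform with £1 billion in global turnover faces a cap of £100 million;
# one with £50 million in turnover is capped at the £18 million floor.
print(penalty_cap(1_000_000_000))  # 100000000
print(penalty_cap(50_000_000))     # 18000000
```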
Opposition and Concerns
Critics have submitted parliamentary petitions titled ‘Repeal the Online Safety Act’, arguing the legislation exceeds necessary boundaries. The Electronic Frontier Foundation (EFF) has criticised the Act for potentially threatening privacy and free expression through verification procedures that require users to provide biometric and identity data to platforms.
Security incidents, such as the Tea app hack, highlight the risk that data collected for verification could be exposed in breaches. Reform UK, led by Nigel Farage, has pledged to repeal the Act, characterising it as an overreach.
Implementation Challenges
The Act’s requirements often demand platform redesigns. For startups and small-to-medium enterprises, these obligations create financial burdens and operational challenges that may force business closures or UK market exits.
Larger platforms face a choice between implementing compliance measures or restricting UK access to protect their international operations. This may result in UK users having reduced access to some global digital services.
International Context: UK vs. US Approaches
The UK’s approach differs from US legislation. The UK’s Online Safety Act applies to platforms accessible within British borders, while American law takes a more limited approach through the existing Children’s Online Privacy Protection Act (COPPA) and the pending Kids Online Safety Act (KOSA).
COPPA, effective since 2000, focuses on privacy protections for children under 13, requiring parental consent for data collection and mandating data minimisation. KOSA, which has not yet been enacted, would introduce a federal duty of care, but would apply mainly to platforms predominantly used by minors rather than providing universal coverage.
US legislation doesn’t require universal age verification for general online platforms, nor does it authorise service blocking, reflecting a regulatory approach focused on targeted protections rather than comprehensive content control.
Authors: Shantanu Mukherjee, Alan Baiju, Akshara Nair