For years, politicians from across the political spectrum insisted the Online Safety Act would focus solely on illegal content – shielding children from pornography, criminal exploitation, and material encouraging or assisting suicide – without threatening free expression. But from the moment its age-verification duties took effect on 25 July, that reassurance began to unravel.
Social media sites, search engines, and video-sharing services are now legally required to shield under-18s from content deemed harmful to their mental or physical well-being. Failure to comply risks fines of up to £18 million or 10% of global turnover, whichever is greater.
At the heart of the regime is a requirement to implement “highly effective” age checks. If a platform cannot establish with high confidence that a user is over 18, it must restrict that user’s access to a wide range of “sensitive” content, even when that content is entirely lawful. This has major implications for platforms where news footage, protest clips, or political commentary appear in real time.
Ofcom’s guidance makes clear that simple box-ticking exercises – such as self-declaring your age or agreeing to terms of service – will no longer suffice. Instead, platforms are expected to use tools like facial age estimation, ID scans, open banking credentials, or digital identity wallets.