Ofcom’s Gill Whitehead described the communications watchdog’s child safety announcements as a significant step towards making the internet much safer. But can these measures truly bring about a substantial improvement in protecting children online?
Resetting the internet so that children are safer is no small task. And while the biggest social media platforms attract the headlines, the Online Safety Act, which Ofcom enforces, covers more than 150,000 services.
Tech giants such as Meta and Amazon's Twitch have already taken steps to tackle grooming and to keep under-age users away from mature content, but the challenge extends well beyond those platforms.
Recent research by Internet Matters gives a sense of the scale: one in seven teenagers aged 16 and under has experienced some form of image-based sexual abuse, and in a significant proportion of cases the victim said the abuser was someone they knew.
Yet the new rules will not come into force until the second half of 2025, a timetable that child safety campaigners have criticised as too slow.
The announcement marks the start of a consultation in which the regulator will hear from tech companies, experts, parents and advocacy groups.
Among the proposed measures, age verification stands out as the most contentious. The regulator insists platforms will need robust age assurance, but privacy campaigners are concerned about what it would mean to collect personal data from millions of social media users. Third-party age verification firms argue their systems can check a user's age without compromising privacy; critics are less convinced the trade-off between privacy and online safety can be avoided.
Some experts also caution that strict age checks may not deter determined youngsters, who could simply seek out restricted content by other means, potentially pushing them towards less regulated corners of the internet.
The consultation does not address the role app stores might play in age verification. Ofcom plans to consult on that separately, with the possibility of government intervention by 2026.
Another challenge is the growing adoption of end-to-end encryption, which prevents platforms themselves from seeing the content of messages, and so from detecting and removing child abuse material. Ofcom has the power to mandate scanning for such content, but encrypted services such as Signal and WhatsApp have resisted any measure they believe would compromise their users' security and privacy.
WhatsApp’s parent company, Meta, plans to expand end-to-end encryption despite concerns from children’s charities and the government.
The path ahead is complex. Tech firms face hefty fines if they fail to comply, and children's mental health and wellbeing remain at risk if effective measures are not put in place.