Every Child Deserves a Safer Screen. Here's Why the Rules Alone Won't Deliver It.

Apr 22, 2026

Something has shifted in the conversation about children and smartphones.

It's no longer just parents sharing anxieties at the school gate. Clinicians are raising it in consultation rooms. Headteachers are calling it a safeguarding issue. Researchers across developmental psychology, psychiatry, and paediatric medicine are publishing findings that point in the same direction: the digital environment that children have grown up in since the mid-2010s has changed childhood, and not in ways that are easy to reverse.

Jonathan Haidt, social psychologist at NYU and author of The Anxious Generation, has framed it in terms that have resonated far beyond academic circles: "We are overprotecting our children in the real world while underprotecting them online." The evidence for that underprotection is no longer disputed:

  • One in four children and young people are using their smartphones in ways consistent with behavioural addiction. 

  • Depression rates in adolescents rose in step with hours spent on social media, according to the UK Millennium Cohort Study, which followed 19,000 children. 

  • Nearly a third of 8–17-year-olds say they have seen something online that they found "worrying or nasty" in the past year. 

  • 78% of Gen Z say that if they were parents, they would try to delay their child's social media access for as long as possible. This is the generation that has seen the costs of early exposure from the inside.

Grassroots movements, campaigns of concerned parents, and organisations led by health professionals have all reached the same conclusion: something needs to change.

At Voop, we agree. Where we part company with most of the current debate is on the mechanism. The rules, however well designed, only work if the infrastructure exists to deliver them.


The Evidence of Harm Reaches Further Than the Headlines

The headline statistics are striking enough. But the data from health professionals working directly with children paints a fuller picture of what unmanaged screen exposure actually does.

Speech and language challenges in UK children have increased 27% in the past two years, a rise that paediatric clinicians are linking to screen exposure in early childhood displacing the human interaction that language development depends on. ADHD diagnosis rates have risen by roughly 56% in relative terms since smartphones became widespread.

As of mid-2024, 109,000 children under 18 were waiting over a year for community mental health services in England, a sign of a system under pressure that cannot absorb the volume of demand being generated.

The harms are not evenly distributed. Government data shows that looked-after children account for over three quarters of all child sexual exploitation notifications in England. More than 2,400 children in local authority care went missing in 2024, and the overwhelming majority of those episodes involved communication through apps that caring adults cannot access or monitor. The NCA estimates that between 710,000 and 840,000 adults in the UK pose varying degrees of sexual risk to children, and they reach those children through the devices in their pockets.

For the most vulnerable children, the smartphone is not just a distraction. It is frequently the channel through which serious harm arrives.


What Children in Care Actually Said

We wanted to understand what children themselves think about digital safety, not what adults assume. In 2025, we partnered with Kids Industries to run co-creation workshops with children aged 8 to 16 across residential and foster care settings.

The findings were striking. Children are not primarily worried about carers seeing their messages. They're worried about groomers, persistent callers, and strangers who won't stop making contact. The ability to block a number at the network level (not just within an app that can be uninstalled) was universally welcomed across every age group. For the children who had experienced unwanted contact, that kind of protection wasn't a restriction on their freedom. It was their freedom.

Where older teenagers pushed back was on constant surveillance (recording their screens, reading their messages, and so on). They wanted a say, and they wanted to understand what any safety system did and didn't do. That matters. But it's worth being clear about what those same young people were asking for: better protection from strangers, stronger controls over who could reach them, and safety tools that had been explained honestly rather than imposed. One 11-year-old put it simply: "I trust my care provider, so I would definitely take the phone."

That is not reluctant acceptance. That is a child asking to be kept safe.


The Parental Control Problem

When families try to do the right thing, they are largely working with the wrong tools.

Point solutions exist (Google Family Link, Apple's Screen Time controls, a range of third-party apps) and for many families in stable home environments they offer something useful. But each operates at the app or device level, which means a motivated child can bypass them. Family Link can be worked around by switching accounts. Screen Time restrictions are circumvented by hundreds of thousands of teenagers every day. Any child who has grown up in the care system, with the resourcefulness that typically comes with it, will find the gap faster than the adult who set the control.

As James Paul, CEO and Co-founder of Voop, has put it: "Traditional tech treats safeguarding as an optional setting. If the protection isn't at the core of the network itself, it's not real protection."

Baroness Beeban Kidron, founder of 5Rights Foundation and architect of the UK's Age Appropriate Design Code, has identified the deeper structural failure: "Both government and regulator are really governing by press release. They keep on making announcements but the children in their homes, in their schools, in their lives, are no safer."

The challenge is that rules and awareness campaigns operate at the surface. The technology that harms children operates at the network level. It's persistent, optimised, and present regardless of what any individual parent, school, or app setting decides.


The School Ban and What It Doesn't Reach

Schools across England have now been given a statutory basis for phone-free environments. Most of them had already got there on their own. The Children's Commissioner's survey found that 99.8% of primary schools and 90% of secondary schools had introduced policies limiting phone use before the legislation arrived. Pepe Di'Iasio, general secretary of the Association of School and College Leaders, was measured in his response: the statutory change "doesn't really change very much. Most schools already have policies in place."

Where evidence of school-level bans does exist, it is broadly positive. Headteachers who have implemented full-day phone-free environments report improved focus, reduced disruption, and children who are happier in school. The case for clearing phones from classrooms is well made.

But Di'Iasio put his finger on the real gap too: stronger regulation is needed outside school hours. The harms that phone bans are partly aimed at reducing (grooming, exploitation, contact from unknown adults, exposure to violent or sexual content) don't observe school hours. Children who are most at risk are not primarily at risk during maths class. They are at risk at 11pm, on a weekend, from a device that left the school gate hours ago.

A school phone ban is a sensible distraction-reduction measure. It is not a safeguarding strategy. Conflating the two allows progress to be claimed while the real problem goes unaddressed.


Age Limits Set a Floor, Not a Ceiling

Age-based restrictions on social media access are a reasonable response to a real problem. The argument that platforms should be required to meet clear safety standards before they are allowed access to children, the same bar applied to any hazardous offline product, is hard to dispute. Baroness Kidron has made it directly: "Tech products and services must only access children if they're willing to treat them respectfully, safely and age appropriately."

Australia, the most advanced jurisdiction in this space, found within days of its under-16 social media ban taking effect that many children had already bypassed it. Age-assurance tools misclassified users, VPNs worked, and migration to less-regulated platforms began immediately. The law places responsibility on platforms. But platforms are the same entities that have treated that responsibility as a low priority for years.

Age limits define who is theoretically permitted. They say nothing about what happens when a child bypasses the limit, accesses equivalent content elsewhere, or is contacted by someone who has no interest in what the platform's terms of service say.


The Infrastructure Argument

Haidt uses an analogy that is worth taking seriously. We didn't make roads safer by asking drivers to be more careful. We built pavements, installed crossings, and engineered physical structures that change behaviour regardless of individual intent. Road safety improved not because people became better drivers, but because we changed the environment they drive in.

Children's digital safety needs the same structural logic.

Baroness Kidron has made the product version of this argument: "If it was a product like a baby buggy, if it was a product like a radio, if it was a product like a car, there would be rules and regulations that say, 'You cannot sell that product to people unless it is safe for use.'" But product-level safety standards apply at the point of manufacture. What we are missing is the road infrastructure itself: the system that shapes how children move through the digital world regardless of which device they're using, which app they've opened, or which platform has decided to take its responsibilities seriously this quarter.

Network-level safeguarding is that infrastructure. It doesn't depend on a child choosing not to open an app, or on a platform enforcing a rule it has a financial incentive to underenforce. It operates at the connection (the SIM, the network, the layer beneath everything else) and it travels with the device wherever it goes. A known bad actor can be blocked before they make contact. Harmful content doesn't load regardless of which Wi-Fi network a child has connected to. And as our research with children in care demonstrated, when this is designed transparently and with children's input, it isn't experienced as surveillance. It's experienced as safety.
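For readers who want the structural point made concrete, the sketch below shows, in schematic Python, what it means for a safeguarding decision to live in the network rather than on the handset. Every name in it (the event shape, the policy store, the function) is hypothetical and illustrative; it is not Voop's implementation. What it demonstrates is the argument above: because the decision happens before delivery, there is nothing on the device for a child, or anyone targeting them, to uninstall or switch off.

```python
from dataclasses import dataclass, field

@dataclass
class ChildPolicy:
    """Per-child safeguarding policy, held in the network, not on the device."""
    blocked_numbers: set[str] = field(default_factory=set)
    approved_contacts: set[str] = field(default_factory=set)
    allow_unknown_senders: bool = False  # strictest default for high-risk children

@dataclass
class InboundEvent:
    """An inbound call or message as the network sees it, before delivery."""
    child_id: str
    sender: str  # e.g. an E.164 number such as "+447700900123"
    kind: str    # "call" or "message"

def network_decision(event: InboundEvent, policies: dict[str, ChildPolicy]) -> str:
    """Decide in the network path whether to deliver, block, or hold an event.

    Because this runs before delivery, a blocked sender never reaches the
    handset: there is no app to uninstall and no on-device toggle to defeat.
    """
    policy = policies.get(event.child_id)
    if policy is None:
        return "deliver"  # no safeguarding policy attached to this line
    if event.sender in policy.blocked_numbers:
        return "block"    # known bad actor stopped before contact is made
    if event.sender in policy.approved_contacts:
        return "deliver"
    return "deliver" if policy.allow_unknown_senders else "hold_for_review"

# Usage: a persistent unwanted caller is blocked at the network level,
# regardless of which apps are installed on the child's phone.
policies = {
    "child-42": ChildPolicy(
        blocked_numbers={"+447700900123"},
        approved_contacts={"+447700900456"},
    )
}
print(network_decision(InboundEvent("child-42", "+447700900123", "call")))      # block
print(network_decision(InboundEvent("child-42", "+447700900456", "message")))   # deliver
print(network_decision(InboundEvent("child-42", "+447700900999", "call")))      # hold_for_review
```

The sketch covers only contact filtering, but the same placement argument applies to content: filtering applied in the network path, rather than in an app, follows the connection wherever the device goes.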

The NSPCC has been direct about where platform responsibility has fallen short. Chris Sherwood, its Chief Executive, has said: "Children, and their parents, must not solely bear the responsibility of keeping themselves safe online. It's high time for tech companies to step up." Andy Burrows, Chief Executive of the Molly Rose Foundation, has been blunter about what happens when they don't: "The lack of ambition and accountability will have been heard loud and clear in Silicon Valley."

Shifting responsibility to platforms is necessary. But it is not sufficient. Platforms will always have competing interests. Infrastructure does not.


What This Means for the People Who Work with Children

For care providers and corporate parents: The duty of care that comes with corporate parenting extends to the digital world as directly as it extends to any other domain. A safeguarding policy that doesn't address what happens when a looked-after child connects to an unknown Wi-Fi network, opens an app at midnight, or receives contact from someone not on their approved contacts list has a gap in it. The children who told us they want this protection, when it's explained honestly and built with their input, are the same children your policies are meant to protect.

For schools and multi-academy trusts: Statutory phone-free environments are good policy for the school day. But the safeguarding responsibility of a MAT extends to the wellbeing of its most vulnerable pupils, and the school gate is not the end of that obligation. The children who most need protection during school hours are the same children who most need it in the hours that follow.

For parents: The evidence is clear that the harms of unmanaged smartphone use are real and wide-ranging, from sleep disruption to delayed language development to mental health. The answer is not only to delay access: it is to ensure that when children are online, the environment has been built with their safety as a structural requirement, not an optional setting that a determined child can switch off.

For commissioning bodies: The children in your care are at the highest documented risk of harm via their devices. Commissioning digital safeguarding with the same rigour applied to physical safeguarding is not a future consideration. The gap is current and the cost of it, in missing episodes, exploitation, and long-term outcomes, is measurable.


Whatever policy direction emerges from the current debate, there is a question underneath it that any serious child protection strategy has to answer: what happens to the child who falls outside the policy, because their parents and guardians aren't present, because they found a workaround, because the platform didn't comply, because it was 4am and nobody was watching?

Our workshops gave us a clear answer to a different question: what do children actually want? To be safe from people who wish them harm. Adults they can trust. Protection that doesn't feel like surveillance. A say in how it works.

Building the infrastructure that delivers those things, at the network level, proportionate to each child's needs, designed with their input, is what makes every other intervention work. Without it, each policy response carries the same flaw: a chain of compliance that has consistently broken down at the exact moment a child is most at risk.

The rules matter. The infrastructure is what holds them.


Voop provides network-level digital safeguarding for children in foster and residential care. If you work in children's social care, corporate parenting, or multi-academy trust leadership and want to understand what safeguarding infrastructure looks like in practice, get in touch.
