Beyond Bans: Digital Equity, Judgement and Children’s Rights
Why one-size-fits-all solutions risk widening digital divides in schools
Words by
Luke Ramsden
Deputy Headteacher, GEC Circle Member and Inclusion Champion
Dr Nicole Ponsford
Founding CEO, Global Equality Collective (GEC)
When schools are asked to solve digital harm with blanket bans, we should ask why the systems causing that harm remain largely untouched. Schools have always had to respond to new technologies, but few have arrived with the speed, scale, and cultural reach of the modern smartphone and its surrounding ecosystem of social media platforms. Current debates tend to frame this challenge in stark terms: whether phones should be banned outright, or whether access to social media should be delayed until a particular age. These arguments are usually grounded in familiar and legitimate concerns—media literacy, safeguarding, distraction in lessons, sleep deprivation, online bullying, the circulation of inappropriate images, and exposure to misinformation.
The evidence of harm reaching schools is already substantial. Excessive screen use displaces sleep, reading, and sustained attention. Online conflict rarely stays online, instead reshaping relationships in classrooms and corridors. Group messaging can escalate minor disagreements into wider social crises within hours. The sharing of images—sometimes careless, sometimes coercive—has become a routine safeguarding issue rather than an exceptional one. Taken together, these realities help explain why policymakers and school leaders feel pressure to draw firmer boundaries, and to do so in ways that are simple to explain and enforce.
But the picture becomes incomplete when the debate stops there.
Young people’s digital lives are not experienced uniformly, and policies built on aggregate data or generalised assumptions risk obscuring more than they reveal. An intersectional, mixed-methods approach—drawing on Kaleidoscopic Data—shows that risk, benefit, and vulnerability do not sit evenly across the pupil population. Digital experiences are shaped by overlapping factors including gender, neurodivergence, disability, ethnicity, socioeconomic context, and prior experiences of marginalisation. When these dimensions are flattened into single measures such as “screen time” or device access, important differences in need, agency, and impact are lost.
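To make that flattening concrete, consider a minimal sketch in Python. The figures and category labels below are invented for illustration (they are not GEC data, and real Kaleidoscopic Data analysis is far richer): a cohort can share an almost identical average for "hours online" while reporting sharply different experiences once responses are disaggregated by overlapping characteristics.

```python
import pandas as pd

# Hypothetical survey extract, invented for illustration: identical overall
# screen time can mask very different reported experiences across overlapping groups.
responses = pd.DataFrame({
    "hours_online":   [3.0, 3.2, 2.8, 3.1, 2.9, 3.0],
    "neurodivergent": [True, True, False, False, True, False],
    "fsm_eligible":   [True, False, True, False, True, False],  # free-school-meals proxy
    "felt_belonging": [4.5, 3.9, 2.1, 2.0, 4.4, 1.8],           # 1-5 self-report scale
})

# The "flattened" view: one number for the whole cohort.
print(responses["hours_online"].mean())  # 3.0 hours, for everyone

# The disaggregated view: the same screen time, carrying very different meaning.
print(responses.groupby(["neurodivergent", "fsm_eligible"])["felt_belonging"].mean())
```

The aggregate mean answers the question policymakers usually ask; the grouped view answers the question inclusion actually turns on: the same access, experienced differently.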
These differences matter because blanket policies are often felt most sharply by pupils who already experience marginalisation, while offering fewer additional benefits to those who are already well supported offline.
Research by Professor Sonia Livingstone and colleagues has consistently cautioned against framing children’s digital lives solely through the lens of risk. Large-scale longitudinal and cross-national studies show that online opportunities and online risks are often intertwined. The same forms of engagement that enable creativity, learning, and connection can also expose young people to harm. Crucially, attempts to minimise risk by restricting access alone may also reduce opportunities for developing digital skills, resilience, and critical judgement. From this perspective, digital participation is not simply a hazard to be delayed, but a social reality to be navigated, supported, and shaped.
For many young people, digital communication is not only a source of risk but a central part of their social and creative lives. Staying in contact with friends across distance and time is not a marginal convenience. For pupils who feel isolated, anxious, or unsafe in their immediate environment, digital spaces can provide continuity, reassurance, and a sense of belonging that would otherwise be difficult to sustain.
The creative dimension matters too. Blogging, fan fiction, home-produced music, short films, and digital illustration are not fringe pursuits; they are part of the everyday cultural output of many teenagers. To be able to share work, receive feedback, and find even a small audience changes young people’s relationship to culture. They are not only consumers, but participants. This does not eliminate risk, but it complicates any account that treats digital life solely as a problem to be managed.
The same complexity applies to information. Misinformation is real and damaging. At the same time, the capacity to look up sources, compare accounts, and explore ideas across disciplines is now part of ordinary student life. Access does not guarantee understanding—but restricting access does not create understanding either. The educational question is not whether pupils encounter information, but how they learn to judge, weigh, and question what they encounter.
This is where proposals for simple bans need careful handling.
There are strong arguments for restricting phone use during the school day. Classrooms require sustained attention, and social spaces often benefit from the absence of constant digital interruption. Schools are entitled to insist on conditions that make learning and relationships possible. But when prohibition is presented as the primary solution rather than one element within a wider educational strategy, there is a risk of confusing behavioural management with the formation of judgement.
The same issue arises in debates about age thresholds for social media use. The intention is protective, and rightly so. But it is worth asking what outcome is actually being sought. If access is delayed until a particular birthday, are we confident this will produce more thoughtful and resilient users from that point onwards? Or does it simply postpone exposure to the same pressures and designs, without having invested time in teaching how to navigate them? Age alone is not a reliable safeguard. Maturity does not develop independently of experience.
None of this argues for indifference, or for leaving young people to manage alone. Boundaries are necessary. Safeguarding interventions are necessary. The question is not whether limits should exist, but what role those limits play. Limits embedded within an educational process—accompanied by explanation, reflection, and gradual responsibility—serve a different purpose from limits that function primarily as exclusion.
If the aim is to help pupils become capable, ethical users of technology, then education and empowerment must do more of the work. This requires evidence that moves beyond surface indicators towards understanding how different pupils experience digital environments in context. Kaleidoscopic Data offers one way of capturing this complexity, combining quantitative patterns with lived experience to inform proportionate, inclusive responses rather than one-size-fits-all rules.
This also requires scrutiny of the systems themselves. Many of the harms associated with young people’s digital lives are not accidental side effects of technology, but the result of specific design choices: recommendation algorithms optimised for engagement over wellbeing, interfaces that reward comparison and visibility, and feedback loops that amplify emotionally charged content. Public debate often concentrates on children’s behaviour, while paying far less attention to the companies that design and profit from the environments shaping that behaviour.
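The point about design choices can be made concrete with a deliberately simplified sketch. This is not any platform's actual code, and the scores are invented: it only shows how the same ranking machinery produces different feeds depending on what its objective rewards.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_engagement: float  # e.g. a modelled click or dwell probability
    outrage_score: float         # modelled emotional charge, 0-1 (hypothetical)

FEED = [
    Post("calm explainer",      predicted_engagement=0.30, outrage_score=0.05),
    Post("friend's artwork",    predicted_engagement=0.40, outrage_score=0.02),
    Post("pile-on controversy", predicted_engagement=0.90, outrage_score=0.95),
]

def rank_for_engagement(posts):
    # The design choice critics point to: optimise engagement alone.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

def rank_with_wellbeing(posts, penalty=0.8):
    # Same machinery, different objective: engagement minus a wellbeing cost.
    return sorted(posts,
                  key=lambda p: p.predicted_engagement - penalty * p.outrage_score,
                  reverse=True)

print([p.title for p in rank_for_engagement(FEED)])  # controversy ranked first
print([p.title for p in rank_with_wellbeing(FEED)])  # artwork and explainer first
```

Nothing about the second ranker is technically harder than the first; the difference is simply which objective the designer chose to optimise.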
If a physical school environment were knowingly designed in ways that heightened anxiety, excluded disabled pupils, or undermined self-regulation, responsibility would not be placed primarily on pupils to adapt. We would expect redesign. Yet in digital spaces, responsibility is routinely displaced downwards—onto children, families, and schools—rather than upwards to the systems that structure attention and interaction at scale.
Here, principles from Universal Design for Learning (UDL) offer a powerful lens. UDL starts from the premise that variability is normal, not exceptional. Applied to digital environments, this suggests platforms used by children should anticipate diversity in attention, communication, sensory processing, and emotional regulation from the outset. Systems designed with flexibility, transparency, and user control by default are more likely to support learning, wellbeing, and inclusion than those reliant on restriction after harm has occurred.
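As a hedged illustration of that principle, here is what "flexibility, transparency, and user control by default" might look like at the level of account settings. The setting names are hypothetical, invented for this sketch: the structural point is that the protective, low-stimulation state ships as the default, and users opt in to more intensity rather than having to dig their way out of it.

```python
from dataclasses import dataclass

@dataclass
class ChildAccountSettings:
    # UDL-inspired defaults (hypothetical): the calmest state ships first,
    # and each switch is visible, explicit, and reversible.
    autoplay_enabled: bool = False        # no infinite-scroll video by default
    chronological_feed: bool = True       # predictable ordering over engagement ranking
    notifications_quiet_hours: bool = True
    reduced_motion: bool = True           # supports sensory-processing differences
    public_metrics_visible: bool = False  # like/follower counts hidden by default

# A pupil who wants more can change one setting at a time, as an informed opt-in.
settings = ChildAccountSettings()
settings.autoplay_enabled = True
```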
Seen this way, banning devices or delaying access risks missing the structural problem. It treats harm as a consequence of individual misuse rather than as a signal of environments misaligned with children’s developmental needs. A focus on algorithmic accountability shifts attention from limiting children’s access to reshaping the conditions under which access occurs.
Recent regulatory developments reflect this shift. The European Union’s Digital Services Act and the UK Information Commissioner’s Age Appropriate Design Code frame children not simply as users in need of restriction, but as rights-holders entitled to privacy, protection, and meaningful consideration in the design of digital services. This challenges narratives that locate responsibility primarily with schools, families, or young people themselves.
Delaying access or removing devices may reduce immediate exposure, but it does not address the design conditions that shape digital engagement once access is restored.
So what?
If schools and policymakers respond to digital risk with uniform bans and blanket delays, they may achieve short-term order, but at the cost of long-term capability. Prohibition can reduce immediate exposure, yet it does little to build the judgement, resilience, and critical understanding young people will need once restrictions are lifted.
More importantly, such approaches risk deepening existing inequalities. Pupils with strong offline support, cultural capital, and parental guidance are better placed to recover lost opportunities elsewhere. Those who rely on digital spaces for connection, creativity, or belonging are more likely to experience restriction as exclusion. When policy ignores these differences, digital safety becomes unevenly distributed—and digital equity is undermined.
A rights-based approach reframes the question. Children are not simply future adults to be protected from technology, but present citizens with rights to participation, provision, and protection. Supporting those rights requires more than restriction. It requires intentional education about how digital systems work, proportionate boundaries that evolve with experience, and firm expectations that technology companies design digital spaces worthy of children.
The choice is not between bans and laissez-faire access. It is between treating digital life as a behavioural problem to be delayed, or as a social reality to be taught, shaped, and governed. Without that shift, bans may quiet classrooms in the short term—but they will do little to close digital divides or prepare young people for the realities they will inevitably face.
References
CAST (2018). Universal Design for Learning Guidelines version 2.2. Wakefield, MA: CAST.
Available at: https://udlguidelines.cast.org
European Union (2022). Regulation (EU) 2022/2065 on a Single Market for Digital Services (Digital Services Act). Official Journal of the European Union.
Available at: https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package
Information Commissioner’s Office (ICO) (2020). Age Appropriate Design Code: A Code of Practice for Online Services. London: ICO.
Available at: https://ico.org.uk/for-organisations/childrens-information/age-appropriate-design-code/
Livingstone, S., & Helsper, E. (2010). Balancing opportunities and risks in teenagers' use of the internet: The role of online skills and internet self-efficacy. New Media & Society, 12(2), 309–329.
https://doi.org/10.1177/1461444809342697
Livingstone, S., Kirwil, L., Ponte, C., & Staksrud, E. (2014). In their own words: What bothers children online? European Journal of Communication, 29(3), 271–288.
https://doi.org/10.1177/0267323114521045
Livingstone, S., Mascheroni, G., & Staksrud, E. (2018). European research on children's internet use: Assessing the past and anticipating the future. New Media & Society, 20(3), 1103–1122.
https://doi.org/10.1177/1461444816685930
OECD (2021). Children & Young People’s Mental Health in the Digital Age: Shaping the Future. Paris: OECD Publishing.
https://doi.org/10.1787/9b7b1c4a-en
Ponsford, N. (2025). Intentional Inclusion: Investigating Equitable Education and Intersectional EdTech. Doctoral thesis, Bournemouth University.
Available at: https://eprints.bournemouth.ac.uk/41595/
UNICEF (2021). Policy Guidance on AI for Children. New York: UNICEF.
Available at: https://www.unicef.org/globalinsight/reports/policy-guidance-ai-children
United Nations Committee on the Rights of the Child (2021). General Comment No. 25 (2021) on Children's Rights in Relation to the Digital Environment. Geneva: United Nations.
Available at: https://www.ohchr.org/en/documents/general-comments-and-recommendations/general-comment-no-25-2021

