Foreword

Professor Cecilia Wong, Professor of Spatial Planning, The University of Manchester, and Policy@Manchester Co-Director

Whilst the digital world brings life-changing benefits to many, the rapidly evolving landscape of digital access, social media, misinformation and harmful content brings with it a multitude of new societal challenges for today's decision makers to navigate.

As policymakers review the merits of minimum age requirements for social media access and the threats posed by conspiracy theories and misinformation, it is especially important to balance viewpoints and engage with the positive aspects of moderation, online support and critical awareness. Policymakers and regulators face multiple challenges, and it is crucial that rigorous, up-to-date evidence is available, whether they are considering problematic, unwanted content, tackling forms of extremism or assessing the impacts of social media use on children and young people.

It is difficult to predict how the digital landscape will evolve in an increasingly turbulent world. There are also big questions about the impacts of AI and deepfakes, and about a population that feels divided and disillusioned.

Whilst digital access is crucial, developing and deploying the skillsets to deal with information pollution, harmful content and misinformation is also increasingly necessary. Media literacy is important – but it is one aspect of a bigger picture.

‘Digital Truths’, Policy@Manchester’s new online report, addresses these risks and challenges. Counter-extremism, online misogyny and conspiracy theories are prominent issues, and narratives around them move quickly. In such a fast-moving policy space it can be all too easy to fall into reactive mode. The articles in this report provide balanced, evidence-led viewpoints and recommendations.

Digital access is crucial but must go hand in hand with feeling and being safe online. Media literacy is a key part of this, and so is regulation. As strategies and legislation are developed, the thoughtful, well-balanced pieces in this report are useful tools, underpinned by a depth and breadth of academic evidence that draws out the actions needed to create a digital world built on equity, safety and opportunity.

Building Positive Virtual Ecosystems: Digital Mental Health and Media Literacy for Young People

By Professor Terry Hanley

With almost one in five young people in the UK experiencing mental health challenges, pressure on already stretched support services is increasing. Online activity has become integral to many young people’s lives, and as the UK’s Online Safety Act comes into force, government and decision makers face the dual challenge of protecting young people from harm while also empowering them with the skills to navigate the digital world confidently. Professor Terry Hanley advocates for the development of positive virtual ecosystems – safe online platforms for support and learning. 

  • Almost 20% of young people experience mental health difficulties, highlighting the need for reliable, accessible online support.
  • The Online Safety Act provides policymakers with the opportunity to balance safety with empowerment, rather than rely on restriction alone.
  • Policy actions are needed to embed media literacy in schools, develop credible, research-informed positive virtual ecosystems of support and learning, and establish regulatory standards that distinguish trusted services from unregulated or harmful content.
The online safety challenge

The online world is integral to young people’s lives, offering connection, information, and self-expression. But alongside these opportunities lie misinformation, harmful content, and online abuse.

Research from Ofcom shows that while most young people are confident online, many lack the critical skills to judge the reliability of digital information. This highlights the need for education that not only protects but empowers, giving young people the tools to evaluate resources, recognise credible support, and make informed choices.

Amid these debates, there is a danger of overlooking how positive virtual ecosystems - well-designed and flexible research-informed platforms for support and learning - can complement existing mental health and education services. As the UK’s Online Safety Act comes into force, regulators and educators have an opportunity to balance protection with empowerment, ensuring that young people are not only shielded from harm but supported to thrive.

Research insights – ecosystems of support

At the University of Manchester, research into digital counselling platforms such as the online counselling and support service Kooth shows that young people do not experience online mental health services as stand-alone interventions. Instead, they integrate them into broader networks of support - families, peers, schools, and community resources – which are both online and in-person.

This might be thought of as an ecosystem of support and learning in which there is an acknowledgement that wellbeing is most likely to be enhanced by interconnected supports rather than isolated fixes.

Key findings – accessibility, trust and support

In a landscape dominated by concerns about screentime, our evaluations show that not all online engagement is equal - well-designed, moderated platforms can have clear benefits for mental health and wellbeing. Key findings from our evaluations include:

  • Accessibility and anonymity matter. Young people value being able to reach out without fear of stigma. Platforms like Kooth have been shown to engage groups who might avoid in-person therapy.
  • Trusted moderation builds safety. Structured safeguarding, human oversight (including oversight of automated systems), and clear community guidelines reduce risks compared to unregulated online spaces.
  • Online support complements, rather than competes with, other forms of care. Some individuals will use it as an initial contact before accessing in-person therapy, while others find digital interaction - often involving real human conversation - meets their needs fully. Diverse, well-integrated options are essential.

This work underscores the importance of creating interconnected ecosystems of support and learning, where positive virtual ecosystems work together with human relationships and professional practice.

Policy pathways – platforms, education and health

Education and training in media literacy, and the development of clear regulatory guidance should be key priorities to ensure that digital resources complement in-person services, strengthen media literacy, and create a safer, more empowering online environment for young people.

Based on our research evidence, three specific areas stand out for policymakers and regulators:

Support trusted platforms through clear regulatory standards. The Online Safety Act should differentiate between research-informed, moderated platforms (such as Kooth or NHS-backed resources) and unregulated apps or forums that may spread misinformation or cause harm. Ofcom will need to work closely with organisations to ensure that, while the Act works to mitigate and manage risks, it does not restrict or block platforms that can provide meaningful support to young people.

Embed media and digital mental health and wellbeing literacy in education. Rather than blanket bans on smartphones, schools should equip young people with the critical skills to evaluate online content, recognise reliable resources, and understand mental health and wellbeing support options.

Promote integrated digital ecosystems within NHS and community services. Investment should focus on connecting online tools with local mental health provision, ensuring smooth transitions between digital and in-person care.

The conversation about online safety often defaults to fear and restriction, but this misses the potential of digital tools to enhance support and, as a consequence, wellbeing.

Building positive virtual ecosystems - where young people can find trustworthy information, moderated communities, and pathways to further help - offers a more balanced and hopeful path. By combining robust regulation, education, and investment in research-informed platforms, policymakers can help shape an online environment that protects young people and can empower them to flourish.

Tackling polluted information: Media literacy, platform governance, and user goals

By Dr Ashley Mattheis

Digital media - the internet, platforms, social networking systems, apps - are socio-technical systems. Polluted information - mis/disinformation, propaganda, and conspiracy theories - evolves through usage of these systems. In this article, Dr Ashley Mattheis argues that taking either a solely technical or a solely social approach to tackling polluted information is ineffective, as it intervenes in only one aspect of the problem. Her research advocates a multi-vectored approach to tackling polluted information that goes beyond media literacy.

  • Current approaches to tackle polluted information tend to focus on media literacy.
  • This approach is insufficient, as although media literacy is important, it is only one aspect of a bigger picture.
  • Policymakers can combine media literacy initiatives with other interventions including holding platforms to account and developing alternatives to algorithmic recommendation systems to tackle information pollution.
Digital media and current media literacy approaches

Digital media systems have been developed to keep audiences engaged through a variety of mechanisms both social and algorithmic. These systems also manipulate user emotions, shaping users’ realities with the aim of making money for platforms and advertisers.

Along with this, “influencers” (a user class of salespeople) shape media circulation for profit, many by productising and selling ideology, politics, and polluted information. And with the refashioning of Twitter into X, a major platform now prioritises its owner’s preferred content - including political, economic, and polluted information - with concrete impacts on global discourse and politics.

In this context, media literacy approaches work from the perspective that “if we can just educate users to be more aware and understand what they are doing online, that will fix all the problems” (problems including polarisation, radicalisation, online extremism / terrorism). Such approaches simply have not worked.

This is because media literacy is only one vector of polluted information. And whilst media literacy programs are an important aspect of this issue, improving media literacy is not the only necessary intervention.

Polluted information

Content production and circulation are also key vectors of polluted information. Within this arena, platform governance and digital technology developments and practices are a crucial aspect. Much more should be done to address the amplification of polluted information and disallowed content. This can include platforms’ lack of content moderation of “high engagement” content, selective bypass of content moderation for chosen users, and the recent trend in removing fact-checking.

News reports indicate that unwanted content that is deeply problematic has been pushed to users’ feeds. In one case, Meta sent adult men images originally shared by parents of young teen girls in school uniforms as marketing for the Threads app. In other cases, teens are reporting that unwanted violent and sexualised content just pops into their feeds. A stark case comes from X where its embedded GrokAI tool is being used to generate non-consensual, sexualised images of women and children that are publicly shared and adapted by networks of users. These cases highlight the necessity of addressing platform practices as a vector of spreading and amplifying polluted information and harmful content.

A third vector in the spread of polluted information is human aims. This is more difficult to address because it is a function of digital culture and community. We must acknowledge that users knowingly spread polluted information. Often people have sufficient media literacy but choose to share the content anyway, for reasons including trolling, participating in trends (such as the ‘Where is Kate Middleton?’ conspiracy), or monetisation. This vector is often how socio-technical systems link extreme content, user engagement, and monetisation.

Here, practices that motivate user participation with content - so-called edgelord behaviour across a variety of platforms (such as 4chan, 8kun, /pol/ boards, and Telegram) as well as dark web sites - shore up the movement of polluted content across platforms. Examples include users producing memeified clips of mass attackers’ livestream footage, and users modifying the opening sequences of content and re-uploading it to platforms.

In these cases, users create, share, and spread content as well as (re)populating and (re)circulating removed / banned content because doing so aligns with their aims. The GrokAI sexualised images are a case in point as users are building community around sharing how to create prompts and producing the harmful images. This vector is more difficult to address as it requires convincing users to stop doing what they find useful and enjoyable. The key insight for this vector is that such practices are related to developing and maintaining community online, offering potential interventions through offline community programming, engagement initiatives, and outreach. 

A multi-pronged policy approach

Any policy approach that does not consider at least these three aspects will struggle to deliver its intended outcomes successfully. Therefore, the following recommendations should be considered broadly in policy circles with particular attention from and collaboration between the Department for Digital, Culture, Media and Sport (DCMS), Ofcom, the Department for Education (DfE), and the UK Council for Internet Safety:

  • Continue and expand media literacy and pre-bunking programs. Develop programming for multiple user demographics (such as youth, adult, parent, senior, care giver) and the variety of reasons users may share content that assists amenable users in avoiding harmful trends, narratives, and ideologies.
  • Hold platforms accountable - at least to their own terms of service and national policies regarding content moderation. This includes best-practice solutions (such as combined human–technological moderation) and improved labour conditions (proper pay, benefits, and mental health / wellbeing services) for moderators. The Online Safety Act begins to address this but has not been in force long enough to determine its success. Of primary concern here are the relative impacts of non-compliance: the fines for failure to comply may deter smaller platforms, but are relatively small for very large online platforms and so have much less deterrent effect.
  • Prioritise research on alternatives to algorithmic recommendation systems. Consider policies that enforce transparency around algorithm design and use, and content moderation practices. Policy should encourage developers to explore regulation requiring the development of user-friendly “opt out” mechanisms for algorithmic recommendation systems as a standard affordance on social media platforms.
  • Reject the view that social media are in any way a “digital public square”. Platforms are inherently private entities - private businesses, in fact. As private businesses they can and do control speech within their platform environments, as their direction of policy for and against specified political actors and their manipulation of algorithmic controls clearly show.
  • Consider building or supporting alternative platform options that are public. Ofcom, in particular, should look to alternative platform options following prior media (e.g., radio and television), which ultimately developed both public and private options, enabling different frames of regulation, enforcement, and transparency to better serve users as consumers. At a minimum, support tools that enable users to curate their social media feeds and require platforms to allow them.

Government and society need to take a multi-vectored approach to tackle this multi-vectored problem. In addition, support for the development and growth of community engagement, outreach, and programming as alternatives to online community relationships is crucial.

Calling in, calling out: Responding to male supremacist ideologies through informal counter-speech

By Dr Allysa Czerwinsky

As misogynist and male supremacist content is increasingly platformed within digital environments, there is a pressing need to strengthen existing interventions. Existing approaches to addressing online male supremacism often prioritise top-down knowledge from government-led security efforts - but citizen-led initiatives also represent an important avenue for building resilience, resistance, and trust. In this article, Dr Allysa Czerwinsky assesses the role of informal counter-speech in support-focused manosphere subreddits, highlighting:

  • Limitations of existing approaches to counter-speech that centre fact-checking and debunking misogynist narratives.
  • How user interactions are an important – but underexplored – element of effective counter-speech.
  • Policy actions to strengthen counter-speech approaches for male supremacism should include emotion/identity-based appeals, cultural fluency, and critical empathy.  
Digital environments and male supremacism

Online spaces are essential in platforming, mainstreaming, and sharing misogynist and male supremacist content. This is especially prevalent within the manosphere, a loose network of online communities across online platforms connected by a shared belief that feminism has harmed men’s rights and social standing, whilst framing women as responsible for men’s perceived victimisation. Research has linked the visibility of manosphere content and ideologies to a rise in male supremacist rhetoric across both mainstream and alt-tech platforms, as well as in offline educational settings. Because online spaces are central to the spread of this content, they can also be an essential arena for counter-speech interventions.

Importantly, much of the power behind the manosphere’s ideologies stems from its interactive components, allowing users to imbue wider community concepts with personal stories and experiences that strengthen their legitimacy. Participating in manosphere spaces is a social phenomenon, as involvement provides users with a sense of belonging, prompted through sharing of stories and similar experiences with rejection, victimisation, and harm, whether perceived or actual.

Current approaches to countering online misogyny

Existing efforts at counter-messaging have focused on top-down, individual-level approaches to addressing misogyny and male supremacism, often shaped by government initiatives for preventing and countering extremism. For instance, the Home Office’s Prevent programme has incorporated a loosely-defined ‘incel concerns’ category to classify referral cases, despite the broader prevalence of male supremacist ideologies in contemporary social, political, and cultural discourses. Similarly, while recent research funded by the Home Office has called for greater mental health support for self-identified incels, a sole focus on individual-level interventions fails to account for the communicative and interactive elements that shape community involvement.

Current approaches to addressing male supremacism often employ counter-speech strategies like fact-checking and pre-bunking, techniques aimed at countering misinformation and pseudoscientific arguments. These techniques were an important strategy observed within support-focused subreddits (discussion forums), where responses by former and non-manosphere users focused on challenging the (mis)use of scientific evidence and pseudoscience used to support community concepts and theories.

Limitations of current approaches and tactics

While these approaches can be useful in prompting conversations about the evidence used to support male supremacist ideologies, they do little to challenge how subjective experiences of victimhood legitimise misogyny and male supremacism as reasonable and just. My research seeks to address this gap, exploring how everyday conversations on support-focused manosphere subreddits can strengthen our approaches to countering misogyny and male supremacism.

Across the two subreddits analysed, conversations centred on distinct parts of users’ original posts: (1) the belief systems and ideologies shaping and motivating participation in manosphere spaces; (2) the weaponisation of subjective experiences of victimisation, rejection, and harm; and (3) identity-based motivations and the emotional impacts of community involvement. Additionally, approaches to counter-speech often encompassed tactics beyond fact-checking and debunking, including:

  • Emotional appeals that drew on lived experiences of discrimination, harm, and rejection, and provided non-critical validation of subjective life circumstances and negative experiences
  • Identity-based appeals highlighting similar characteristics with community members (race, ethnicity, disability, neurodiversity, and identifying as former users of manosphere spaces) to help adopt cultural fluency in responses and open avenues for bonding
  • Practical advice aiming to address users’ questions or needs through suggesting tangible steps toward self-improvement and limiting harmful behaviours
  • Counternarratives that targeted male supremacist ideologies and community concepts, particularly gender essentialism and ‘scientific’ evidence
  • Alternative narratives that provided users with new ways of making sense of subjective experiences and identities outside of the manosphere

Importantly, this research demonstrates that the approaches used often shaped the direction of conversations with current and exiting members, either prompting repeated engagement in replies or stymying opportunities for additional support.

A key aspect of fuelling further engagement was matching the approach taken to the concerns outlined in posts, opening additional opportunities for discussion and new avenues for counter-speech in further replies. A mismatch between posters’ needs and the tone or content of replies resulted in aggression or limited further engagement. Further, signals of cultural fluency (or a lack thereof) influenced the credibility of responses. Demonstrating an awareness of community language and concepts helped build trust in the message being delivered, whilst demonstrating a willingness to learn and an acceptance of knowledge gaps (particularly for non-manosphere users) created opportunities for productive communication and rapport.

Approaches that challenged community members’ own understandings of cultural concepts (such as assertions around true definitions of theories or ideologies) were met with hostility and a lack of further engagement. Finally, empathy – both non-judgmental and critical – helped continue discussion across posts. Instances of critical empathy (validating the subjective experiences and emotions of current manosphere users while critiquing the weaponisation of these experiences to legitimise misogyny and male supremacism) allowed for additional discussion.

Widening the scope for counter-speech policies

Results from this research reveal a delicate balance: calling out male supremacism as a response to rejection, hurt, and harm through direct counter-speech and counter-evidence, while calling in users by emphasising shared experiences and providing emotional validation through empathy and identity-based appeals.

These forms of informal counter-speech show promise in complementing and strengthening counter-speech campaigns, moving efforts beyond the content of male supremacism and orienting them instead towards its appeal. Specific pathways for policymakers to develop include:

  • Incorporating a focus on the appeal behind male supremacist rhetoric, acknowledging how emotions and subjective experiences can motivate and sustain involvement in the manosphere. This focus can be built into existing approaches to addressing the spread of extreme misogyny (such as Prevent and Ofcom’s regulatory guidance) through a cross-department initiative centring the emotive appeals behind manosphere messaging.
  • Tailoring campaigns directly to the concerns highlighted by users of manosphere forums. Existing campaigns that address misogyny - such as the GMCA’s #IsThisOk? campaign - focus on encouraging boys and men to recognise the signs of coercive or controlling behaviour, and act as a valuable starting point for more focused discussions of the manosphere’s specific brand of male supremacist ideology. Developing new campaigns across local authorities, in partnership with organisations countering extremism and centring men’s mental health, is key to disrupting talking points common in online manosphere spaces.
  • Government should endorse partnerships with support-focused spaces in making help accessible and fostering human connections outside of manosphere spaces. These can help to bridge top-down and community-based approaches to interventions. Forming working relationships with former community members and moderators of support-focused forums through counter-extremism organisations could provide opportunities to create tailored support resources for members of male supremacist communities.
  • These interventions can better shape counter-speech efforts at both local and national levels. Additionally, peer-to-peer mentoring for boys and men at risk could be implemented as an avenue for support within Prevent strategies, as well as part of local authority approaches to addressing violence against women and girls.

Conspiracy theories and counter-disinformation in the UK

By Professor Peter Knight and Professor Clare Birchall

Conspiracy theories in the UK are not fringe anomalies – they are adaptive narratives rooted in historical anxieties, cultural identity and political discontent. The REDACT project is a multi-country study examining how the online environment shapes conspiracy theories and counter-disinformation efforts. In this article, Professor Peter Knight and Professor Clare Birchall spotlight findings which highlight the need for bespoke strategies, deeper understanding of conspiracism’s social functions and reforms to the counter-disinformation ecosystem.

  • The REDACT project analysed how digital media shape the form, content and consequences of conspiracy theories.
  • Conspiracy theories reflect systemic issues – they are symptoms of democratic dysfunction and public disillusionment, not just misinformation.
  • Counter-disinformation efforts must evolve as the UK’s sector faces regulatory gaps, funding pressures and increasing politicisation.
The REDACT project

The REDACT project, co-led by The University of Manchester and King’s College London, considered online conspiracy theories and counter-disinformation organisations in a selection of European countries. The project involved a team of 14 researchers analysing data from Western Europe, Central Europe, the Baltics and the Balkans.

The project used keywords from a range of conspiracy theory topics, and gathered six million posts from Twitter/X, Facebook, Instagram and Telegram between 2019 and 2024. Researchers used a mixture of digital methods and close reading strategies to analyse the datasets. Each regional team also conducted ethnographic interviews with key members of counter-disinformation organisations across Europe. Political, social and economic contexts were brought to bear on all of these methods and findings.
A history of conspiracy

The project explored the history and context of conspiracy theories in the UK. It looked at historical roots and cultural specificity. British conspiracism has evolved from elite-driven fears of subversion in the 18th and 19th centuries to today’s populist narratives.

Events like the 1857 Indian uprising, antisemitic scapegoating in the early 20th century, and Cold War espionage fears laid the groundwork for modern conspiracy thinking. Contemporary figures like David Icke and Russell Brand, and modern events such as Princess Diana’s death and the Covid-19 pandemic, have further shaped the landscape.

A response to systemic failures – and the grey zone of conspiracy talk

Conspiracy theories often express frustration with the gap between political promises and lived realities. Rather than being dismissed as irrational, they should be understood as responses to perceived institutional failures. This reframing allows for more constructive engagement and policy design.

Conspiracist rhetoric frequently operates in a grey zone between legitimate political debate and disinformation. Dog whistles, memes and coded language make it difficult to regulate or counter. Topics like immigration and climate change are particularly vulnerable to this dynamic, where conspiracist framing dominates public discourse.
The role of digital media, culture wars and counter-discourse

Social media platforms have amplified conspiracist narratives, but they are only one part of a broader media-political ecosystem.

The UK’s post-Brexit and post-Covid context has intensified culture war dynamics, with conspiracy theories increasingly shaping debates on sovereignty, identity and freedom. REDACT examined how theories like the “Great Replacement” and opposition to 15-minute cities illustrate how conspiracism migrates across topics and gains traction through populist rhetoric.

Despite the visibility of conspiracist content, many ordinary users actively challenge false claims online. These grassroots efforts are often overlooked in big data studies, leading to an exaggerated sense of conspiracism’s dominance. Recognising and supporting everyday counter-discourse is vital for a healthy information environment.

Counter-disinformation sector: strengths and challenges

The UK’s counter-disinformation sector includes government bodies, regulators such as Ofcom, media organisations and NGOs. It is internationally engaged, reflecting the global nature of digital disinformation.

The UK’s Online Safety Act (2023) focuses on illegal and harmful content but does not explicitly address health misinformation, election-related disinformation or AI-generated content, leaving significant gaps in tackling systemic risks. In contrast, the EU’s Digital Services Act imposes broader obligations, including risk assessments, transparency for political advertising and measures against disinformation and algorithmic manipulation.

Short-term funding models also hinder agility and long-term planning. Organisations struggle to balance nuanced analysis with the need to secure resources, sometimes leading to overstated threats or reactive strategies.

Policy pathways for UK specific strategies

Conspiracism in Britain is shaped by unique historical, cultural and political factors that require tailored responses. Strategies should not be imported wholesale from the US or other contexts; they are most likely to be effective if grounded in Britain’s unique socio-political landscape.

Government and regulators need to focus on the underlying grievances that make conspiracy theories resonate. This requires moving beyond sensational examples and engaging with the “grey zone” where conspiracism intersects with legitimate concerns.

Conspiracy theories are not just false claims – they are compelling stories tied to identity and belonging. Counter-narratives must be equally engaging and rooted in democratic values.

Government should enable long-term, flexible funding for counter-disinformation work. DSIT and DCMS should consult and collaborate with sector organisations to design responsive and sustainable support mechanisms.

Platforms must also collaborate with regulators and researchers to reduce the financial and algorithmic incentives for spreading falsehoods. Access to platform data is essential and should be a legal requirement.

Conspiracy theories in the UK are not merely digital misinformation – they are expressions of deeper social, political and historical dynamics. Effective counter-disinformation policy must move beyond reactive moderation and embrace a holistic, context-sensitive approach. By reforming regulatory frameworks, supporting grassroots counter-discourse and fostering trust in institutions, the UK can build resilience against conspiracist narratives and strengthen democratic communication.

Combating information obesity: policy as problem space

Dr Drew Whitworth

The online sphere is now an essential element of everyday life. To map, navigate and use the resources of this vast information landscape most effectively requires the application of digital and information literacy, whether deployed by individuals, communities, businesses or government. This has been made ever-more salient by the emergence of generative AI tools such as ChatGPT. In this article, Dr Drew Whitworth assesses the current policy around the online world and media literacy.

  • Information pollution is prevalent in the digital world – but UK policy approaches focus on online safety rather than digital literacy.
  • Research from The University of Manchester analysed digital literacy in the UK Overseas Territory of St Helena.
  • The study indicated the benefits of community learning around information technology.
Online pollution and information obesity

Social media, podcasting and the internet generally offer innumerable opportunities to spread whatever information – and misinformation – one likes, due to limited gatekeeping. This is a form of pollution: a concept just as applicable to information as it is to other parts of the environment. Misinformation, junk and other waste that is difficult to process leads to "information obesity".

On the other hand, tools and resources exist for fact-checking and a productive digital literacy approach can deploy technology to identify and address problems at community level and scrutinise claims and information.

UK policy and digital information

UK policy around the online world gathers around the trope of 'online safety', but this has been clumsily implemented thanks to the omission of any accompanying educational angle. Digital and information literacy should mean more than just learning technical IT skills; rather, learners must develop a broader awareness of the impact of these technologies on their work, social relationships, mental and physical health, and knowledge of the world in general.

Yet successive UK governments have never recognised the notion of "information literacy", unlike other countries such as Australia, Finland and Hong Kong, and UNESCO, whose 2012 Moscow Declaration on Media and Information Literacy noted the positive correlations between information literacy and economic growth, sustainability, and the building of egalitarian and inclusive societies – development goals to which the UK should aspire just as much as any other country.

Research from St Helena

This policy problem is illustrated by a research project which, over four years (2021-25), has studied the UK Overseas Territory of St Helena in the South Atlantic, in collaboration with the St Helena Research Institute. The island's connection to Google's undersea cable, fully activated in October 2023, allowed a study of the impact of new information-carrying capacity on a community, both in the present and historically.

Despite lying approximately 1,200 miles from continental land, this remote place has not always been peripheral in information networks. Yet no communications expertise was developed on the island and, until late in the 20th century at least, there were no educational initiatives and no training of teachers as users of new technologies.

€21m of European Development Funding hooked up St Helena to the cable itself, but no support was provided to improve on-island information infrastructure (most Saints still connect through copper cabling). There was also no development of programmes of digital and information literacy education.

Saint schoolchildren were in fact found to display good levels of self-generated critical awareness of IT, but the older generation of Saints experiences more difficulties with the technology.

Where public funding has been found to undertake this work, the focus has been, as in the UK, on 'security' and 'protection'. 

Developing tech solutions and information literacy

On the other hand, bodies like the St Helena Research Institute and St Helena National Trust have used the increased bandwidth as the basis for developing new technological solutions in the local context, including the digitisation of valuable old East India Company records, and the iRecord St Helena app, which engages locals and visitors alike in the collective task of protecting the island's biodiversity and dealing with invasive species.

These are examples of a community learning about the use of information technology on its own account and thereby working to combat information obesity. But these developments in St Helena's collective digital and information literacy have been made despite, rather than because of, policy, whether originating from Westminster or locally from the St Helena Government.

This demonstrates how government approaches to digital and media policy need to understand gaps in user knowledge and understanding, not just focus on safety.

Lessons from St Helena

St Helena is a microcosm of how communities, and the families and businesses that reside in them, have been historically let down by policy around information technology.

Media narratives can be skewed against 'media literacy', with an emphasis on control and censorship (in the name of safety) rather than on developing the skills and knowledge of local people so they can use IT to improve their own knowledge or financial situation.

In the face of AI and its capabilities, this is no longer tenable. School curricula should, for the first time in the UK, acknowledge 'information literacy'. Techniques such as lateral reading and fact-checking can be taught not just as an add-on but integrated into all subjects. Misinformation is nothing new, and a historical perspective could be taken: case studies of past misinformation would be less controversial and can be undertaken now using digital tools (such as the National Archives initiative).

Community responses to media literacy observed in St Helena show that providing education can improve users' fact-checking capabilities and can empower positive programmes like the biodiversity app.

Evidence of digital inequalities, particularly among older people, shows that education initiatives cannot be addressed by schools alone. Media literacy programmes for workplaces could be rolled out alongside online safety programmes.

Immediate actions could also include the compulsory watermarking of AI-generated text and content, alongside regulators reviewing the channels and content that spread misinformation, AI-generated campaigns and deepfakes.

Recasting the 'Online Safety Act' may not be possible, but what is also needed is a parallel 'Online Education Act' – not just for children, but for all – focusing not on 'protection' but on how funding and support can be found to allow people to use these technologies to develop their potential: something which, as UNESCO recognises, benefits entire economies.

The St Helena example shows that educating people on emerging technology in a way that builds on community learning, embraces co-production and promotes media literacy leads to a more nuanced understanding of what is misinformation and what is useful.

Social media literacy: What should be on the new curriculum

Dr Jo Hickman-Dunne, Dr Margarita Panayiotou and Jade Davies


Media literacy is now understood to be a critical life skill, and as part of this, the risks and challenges of social media are now increasingly covered in school lessons. However, research suggests that some of the most common challenges young people experience are not captured by current curriculum guidance. The implication of this is that schools are delivering lessons that do not meet young people’s needs or reflect the complex role social media plays in their everyday lives. In this article, Dr Jo Hickman-Dunne, Jade Davies and Dr Louise Black outline their findings and recommendations for supporting social media literacy through the school curriculum.

  • Research findings from The University of Manchester demonstrate the subtle and everyday issues that young people must navigate on social media.
  • These everyday issues are largely absent from the secondary Relationships and Sex Education (RSE) and Health Education curriculum, which focuses on generally rarer and more acute risks associated with social media.
  • Dedicated lessons and resources supporting open conversations about social media will strengthen social media literacy approaches.
The need for social media literacy

Social media are firmly embedded in young people’s everyday lives, with over 90% of 12-17 year olds reporting using at least one social media platform. They have become an important space for young people to socialise, find entertainment, and access news and information.

Alongside these opportunities, however, we know that 20-40% of young people report encountering harm online, including misinformation, contact from strangers, violent content, and hateful messages. The form and intensity of these harms can vary significantly, but overall, young people are reporting feeling less safe online.

Our research shows that newspapers typically emphasise much more severe risks of social media, such as access to harmful and dangerous content (and associated risk of suicide), cyberbullying and inappropriate contact with adults.

However, when we spoke to young people about their everyday experiences on social media, they highlighted much more subtle issues, including managing their online profile, the importance of fitting in online, and feeling out of control of their social media use.

There is evidence that the breadth of the challenges young people face through social media can undermine their mental health and wellbeing. This points to a need to update existing provisions to keep young people safe online, whilst supporting them to access the benefits social media can bring. Indeed, there is growing consensus that media literacy is no longer simply desirable, but a critical life skill that young people must master.

Our research suggests that media literacy education should be expanded to reflect the diverse experiences young people report, capturing both the more severe risks and the more everyday, subtle challenges.

An inadequate school curriculum

Following our research with young people, and through the #So.Me project, we have developed seven key areas of social media experience that have the potential to impact on young people’s mental health and wellbeing. These are: sense of control; exclusionary experiences; pressure to fit in; appearance worry; fear of judgement; social connection; and self-expression.

Whilst our research findings highlight a complex and varied set of experiences young people must navigate online, this is not fully reflected in the new statutory guidance on Relationships and Sex Education (RSE) and Health Education in secondary schools. The curriculum content in relation to wellbeing online covers topics such as the benefits of limiting time online; the impact of unhealthy or obsessive online comparisons and the curated nature of online content; identifying, reporting and seeking support for harmful behaviours; and the risk of illegal behaviours online.

This updated curriculum neglects the complexity of the evidence-base around social media and mental health. For example, whilst there is no clear agreement on what the ‘right’ amount of time spent on social media should be and this may vary by young person, our research shows that feeling in control of how much time they spend online is important to young people. Thus, a focus on supporting young people to develop skills that can help them control how and when they use social media may be more productive than telling them to simply ‘spend less time’.

More generally, through RSE and Health Education, the curriculum continues to place emphasis on the most acute issues such as illegal activity, gambling and debt risks, with much less consideration of more everyday, and likely more common, experiences. Promisingly, the recent curriculum and assessment review calls for a heightened focus on media and digital literacy. However, it narrowly focuses on teaching pupils how to identify and protect against mis/disinformation, and the functions and limitations of AI. It fails to consider other everyday experiences young people find important, and does not mention social media, despite this playing a central role in young people’s everyday digital lives and forming a key part of adult concerns.

Overall, the curriculum presents a relatively narrow and deficit-based approach to social media and young people’s online lives. For young people, the positives of social media outweigh the negatives, and our research shows clear differences in how adults and young people think about social media, with adults holding more negative views.

If space is not created in the curriculum to acknowledge the complex role social media plays in young people’s lives, it is unlikely to be effective in fully meeting their needs in terms of social media literacy, or in creating a safe space to have supportive, authentic conversations about social media.

Prioritising everyday experiences in the curriculum

Our research demonstrates that young people experience subtle and everyday challenges online that are not adequately addressed in the current curriculum. To better support young people to maximise the benefits of social media and minimise potential risks, we recommend that:

  • The Department for Education (DfE) expands the concepts of digital and media literacy outlined in the curriculum and assessment review, to acknowledge that being digitally literate is not just about having the practical skills to navigate new technologies, but also about having the social-emotional skills to manage individual experiences and support personal wellbeing whilst engaging with digital technology.
  • Given this shortfall in the curriculum and assessment review, the DfE, in collaboration with DCMS, launches a consultation on the content and delivery of media literacy lessons. The review should also be amended to include more in-depth consideration of the relationship between social media and young people.
  • Within this consultation, the DfE should consider the need for dedicated RSE lessons focused on social media. These lessons may involve, for example, exploring why and how young people might use social media, providing space for a balanced discussion and encouraging them to critically reflect on the role social media play in their lives, how they can manage online social interactions, and how to manage their time.
  • To support these lessons, the DfE should provide specific resources for teachers to support them in engaging young people in open and non-judgemental conversations about social media. These resources should align with the new statutory RSE guidance’s guiding principle of positivity, and enable teachers to establish a safe and supportive environment for discussing social media with young people.

Acknowledgements

Jade Davies - Doctor of Philosophy, Manchester Institute of Education, The University of Manchester

Dr Allysa Czerwinsky - Research Fellow in AI Trust and Security, The University of Manchester

Professor Terry Hanley  - Associate Director of Research (impact) for the School of Environment, Education and Development and a Professor of Counselling Psychology within the Manchester Institute of Education, The University of Manchester.

Dr Jo Hickman-Dunne - Senior Research Fellow in Mental Health, University of Cumbria

Professor Peter Knight - Professor of American Studies, The University of Manchester

Dr Ashley Mattheis - Lecturer in Digital Media and Culture, The University of Manchester

Dr Margarita Panayiotou - Lecturer in BSc Educational Psychology, Manchester Institute of Education, The University of Manchester

Dr Drew Whitworth - Director of Teaching and Learning Strategy, Manchester Institute of Education, The University of Manchester

Thought leadership and ideas on digital truths

Curated by Policy@Manchester

Read more and join the debate at blog.policy.manchester.ac.uk

#digitaltruths

The University of Manchester
Oxford Road
Manchester M13 9PL
United Kingdom

www.manchester.ac.uk

The opinions and views expressed in this publication are those of the respective authors and do not necessarily reflect the views of The University of Manchester.

Recommendations are based on authors’ research evidence and experience in their fields. Evidence and further discussion can be obtained by correspondence with the authors; please contact policy@manchester.ac.uk in the first instance.

April 2026