Young voices shaping tomorrow: exploring the role of young people in AI development

Students attending the AI Youth Summit at the National STEM Learning Centre, York.

Artificial Intelligence (AI) has dominated public discourse for the better part of 2023 with attention on its potential future capabilities. But these technologies already interact with our lives in the here and now, from facial recognition technology on phones to algorithms scanning medical images. Since we as members of the public are affected by these technologies, we should also be key stakeholders in the narrative.

Understanding public perspectives on AI

The evidence so far is consistent in showing what the public think about AI.

The Ada Lovelace Institute’s review of 29 studies on public attitudes around AI found that people see benefits of AI in making tasks efficient or supporting public good, but have concerns around it replacing human judgement and decision making.

Attitudes are nuanced: for example, while feelings towards the potential of AI in health settings are generally positive, there is also concern about its use in care settings. Above all, there is a desire for regulation that is independent and has enforcement powers strong enough to hold companies to account.

But there is a gap in evidence on what young people think about AI. The Ada Lovelace Institute’s joint survey with the Alan Turing Institute suggests that younger people have different views from older people on who should be responsible for ensuring AI is used safely.

This gap is a problem.

Young people have experienced AI in distinctive ways, with technology playing a role in their social connections from an early age, and they are likely to have unique perspectives on its impacts on society. Their expectations of private companies may differ from those of older people because of these different interactions with technology, potentially explaining some of the differences in attitudes we have already found, but more research is needed.

Young people will also have to live with the consequences of decisions and design choices being made now. Regulation is already being developed: the EU AI Act and UK Online Safety Act, for example, will affect the ways young people interact with AI-enabled technologies for years to come.

Therefore, young people need to have a say in how these technologies are developed, used and regulated.

Encouraging public engagement in AI decision-making

Public participation around AI can happen at different levels.

Deliberative processes like citizens’ juries and assemblies bring together representative members of the public to learn about a specific issue or theme, hear from a range of experts and deliberate to work towards a set of recommendations for decision makers.

These methods are an empowering way of engaging the public and can generate rich, actionable insights. Meaningful participation isn’t easy, but existing models, such as those in Newham and Ireland, offer inspiration.

At such an inflection point for AI development and policy, the voices of young people cannot remain a gap in evidence. As we argue in our evidence review, it is important to include the public, especially those groups least represented, in decision-making processes.

Is it time for a UK-wide youth assembly on AI?
About the author

Roshni Modhvadia is a Researcher in Public participation and research at the Ada Lovelace Institute.