“While children are the group that may be most impacted by the widespread deployment of generative AI, they are simultaneously the group least represented in decision-making processes relating to the design, development, deployment or governance of AI,” we learn in Understanding the Impacts of Generative AI Use on Children, research conducted by The Alan Turing Institute in partnership with the LEGO Group.
But it is the children who shared their perspectives who can teach adults about GenAI's potential positives and negatives, and help inform us in managing the downside risks.
FYI, the Institute is headquartered at the British Library in London and was created in 2015 by five UK universities (Cambridge, Edinburgh, Oxford, UCL, and Warwick) together with the UK Engineering and Physical Sciences Research Council, as the UK's national institute for data science.
[If you don’t know who Alan Turing was, check out the film The Imitation Game with Benedict Cumberbatch portraying the brilliant applied mathematician who was a pioneer in envisioning AI].
One of the Institute's focuses is on children, in terms of both using and benefiting from AI and safeguarding their privacy and security. Here's a video to give you some context on the Institute's research into GenAI and children.
The research into the impacts of GenAI was conducted as part of the Institute's Children and AI and AI for Public Services programs, undertaken via two work packages:
- The WP1 surveys were conducted online in the United Kingdom: one among 780 children ages 8-12 and their parents and caregivers, and a second among 1,001 teachers working in primary or secondary schools with children from 1 to 16 years of age.
- The WP2 research was conducted through workshops in two state-funded schools in Scotland with 40 children ages 9 through 11.
This diagram synthesizes the findings of the research and reveals many insights useful for considering GenAI's benefits and drawbacks for adults, and for health care (another program of research at the Institute; a great example is its collaboration with Roche to figure out why diseases affect people differently and how treatment responses vary from person to person).
Among the many findings, three converged for me when considering health citizens' potential interactions with AI in their personal health care and well-being:
Relationships, because health embeds social health, mental health, and peer support. In the study, "when children worked with traditional art materials they generally did so while chatting with classmates. Children choosing to use generative AI typically did so individually in a quieter and less social process. However, children enjoyed comparing AI generated images with others. While the task of creating AI generated images was more solitary, it became social and interactive through collective engagement with outputs."
Identities, because patients respond to clinicians and care providers who share their values and cultural touchpoints, which in turn bolsters trust and engagement. "In many instances children of colour experienced a frustrating process where they had to refine their prompts multiple times before they felt satisfied that the image produced represented them. This led to some children becoming frustrated and disappointed. In general, children who felt that generative AI tools did not produce images representing their identity, subsequently chose to use only traditional art materials."
Third, several areas of this study mesh together in ways that can bolster, or diminish, a person's sense of self-agency and confidence: issues of Autonomy, Competence, and Diversity, Equity, and Inclusion.
Health Populi’s Hot Points: The paper offers many recommendations based on what the research revealed about children’s experiences with GenAI.
What's good for the kids is good for adults and health care, as it turns out: take a look at six of the Institute's recommendations, and you'll see that user-centered AI design, bolstering AI literacy, supporting both online and offline activity (think: health research), improving representation in databases, and assuring access to GenAI are all relevant and impactful for health care and people's access and equity challenges.
Long-time readers of the Health Populi blog know of my affection for LEGO as a company and as a perennial product for play and well-being.
I appreciate the company's collaboration with the Turing Institute, noting the long-time commitment embedded in the roots of its name, "LEGO": in Danish, the name derives from the words "Leg Godt," meaning "Play Well."
Onward, together, then, with people-centered AI developments….