Source: KnowBe4

The Artificial Intelligence (AI) revolution: A double-edged sword for children

It is challenging to maximise AI’s benefits for children’s education and growth, while also ensuring their privacy, healthy development, and emotional well-being

As we step into this AI-driven era, we must carefully weigh the incredible potential against the genuine risks

JOHANNESBURG, South Africa, October 8, 2024/APO Group/ --

In just two years, artificial intelligence has undergone a revolution. Generative AI tools like ChatGPT, Google’s Gemini, and Microsoft’s Copilot have rapidly become part of our daily lives. With Meta integrating AI chatbots into popular platforms like WhatsApp, Facebook, and Instagram, the technology is more accessible than ever before. For children growing up in this AI-powered world, the implications are both exciting and concerning, warns Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 AFRICA (www.KnowBe4.com). 

“These AI tools offer unprecedented opportunities for learning, creativity, and problem-solving. Children can use them to create art, compose music, write stories, and even learn new languages through engaging interactive methods,” Collard explains. “The personalised nature of AI chatbots, with their ability to provide quick answers and tailored responses, makes them especially appealing to young minds.” 

However, as with any transformative technology, AI brings with it a host of potential risks that parents, educators, and policymakers must consider carefully. From privacy concerns and the danger of overtrust to the spread of misinformation and possible psychological effects, the challenges are significant. “As we step into this AI-driven era, we must carefully weigh the incredible potential against the genuine risks,” warns Collard. “Our challenge is to harness AI’s power to enrich our children’s lives while simultaneously safeguarding their development, privacy, and overall well-being.” 

Privacy concerns 
“Parents need to know that while chatbots seem harmless, they collect data and may use it without proper consent, leading to potential privacy violations,” warns Collard.

The extent of these privacy risks varies greatly. According to a Canadian Standards Authority report (https://apo-opa.co/3YdfR2E), the threats range from relatively low-stakes issues, such as using a child’s data for targeted advertising, to more serious concerns. Because chatbots can track conversations, preferences, and behaviours, they can create detailed profiles of child users. In malicious hands, this information can enable powerful manipulative tactics to spread misinformation, deepen polarisation, or facilitate grooming.

Collard further points out that large language models were not designed with children in mind. The AI systems that power these chatbots are trained on vast amounts of adult-oriented data, which may not account for the special protections needed for minors’ information.

Overtrust 
Another concern for parents is that children may develop an emotional connection with chatbots and trust them too much, when in reality they are neither human nor their friends. “The overtrust effect is a psychological phenomenon closely linked to media equation theory, which states that people tend to anthropomorphise machines, meaning they assign human attributes to them and develop feelings for them,” comments Collard. “It also means that we overestimate an AI system’s capability and place too much trust in it, thus becoming complacent.”

Overtrust in generative AI can lead children to make poor decisions because they may not verify information. “This can lead to a compromise of accuracy and many other potential negative outcomes,” she explains. “When children rely too much on their generative AI buddy, they may become complacent in their critical thinking, and it also means they may reduce face-to-face interactions with real people.” 

Inaccurate and inappropriate information 
AI chatbots, despite their sophistication, are not infallible. “When they are unsure how to respond, these AI tools may ‘hallucinate’ by making up an answer instead of simply saying they don’t know,” Collard explains. This can lead to minor issues, such as incorrect homework answers, or more serious ones, such as giving minors a wrong diagnosis when they are feeling unwell.

“AI systems are trained on information that includes biases, which means they can reinforce these existing biases and provide misinformation, affecting children’s understanding of the world,” she asserts. 

From a parent’s perspective, the most frightening danger of AI for children is potential exposure to harmful sexual material. “This ranges from AI tools that can create deepfake images of them to those that can manipulate and exploit their vulnerabilities, subliminally influencing them to behave in harmful ways,” Collard says.

Psychological impact and reduction in critical thinking 
As with most new technology, overuse can have poor outcomes. “Excessive use of AI tools by kids and teens can lead to reduced social interactions, as well as a reduction in critical thinking,” states Collard. “We’re already seeing these negative psychological side-effects in children through overuse of other technologies such as social media: a rise in anxiety, depression, social aggression and sleep deprivation, and a loss of meaningful interaction with others.”

Navigating this brave new world is difficult for children, parents and teachers, but Collard believes that policymakers are catching up. “In Europe, while it doesn’t specifically relate to children, the AI Act (https://apo-opa.co/3ZVf1sx) aims to protect human rights by making sure AI systems are safer.”

Until proper safeguards are in place, parents will need to monitor their children’s AI usage and counter its negative effects by introducing some family rules. “By prioritising play and reading that children do off screens, parents will help boost their children’s self-esteem, as well as their critical-thinking skills,” she says.

Distributed by APO Group on behalf of KnowBe4.