A ‘friend’ in your pocket?
Young people increasingly trust AI
Last summer, it was reported that OpenAI, the company behind ChatGPT, was being sued by the parents of a 16-year-old American boy who committed suicide after lengthy conversations with the chatbot. In the Netherlands, too, young people increasingly turn to AI chatbots such as ChatGPT and Snapchat My AI to talk about mental health and other problems, to seek financial and political (voting) advice, or to pursue virtual friendships and relationships. The conversations with the chatbot cover school stress, arguments at home, insecurity, and mental or physical complaints. For many children, AI feels like a safe place: always available, non-judgmental and discreet. Yet its use raises important (privacy) questions about the role of AI in the daily lives of young people, how they can use this technology safely and what role the parties involved can play in this.
The role of AI
Young people told NOS Stories why they use AI in their daily lives.[1] One of them said that he does not share much with other people, but was able to express himself freely with ChatGPT: it felt much more accessible, easier, and less confrontational.
In addition, young people point out that AI is always available, in contrast to healthcare, for example, where there are sometimes (long) waiting lists. Psychologists also emphasise that people with early symptoms, or those on a waiting list for treatment, can benefit from conversations with a chatbot, but they certainly also see the risks.[2]
The risks
AI chatbots pose various risks, especially for vulnerable users such as young people. The output of an AI chatbot is based on a statistical prediction model, which means that answers are not necessarily correct or true. It is therefore inherent in the way these chatbots work that you cannot simply trust the answers and should always check them. Yet because they sound particularly convincing, young people are quick to accept the answers as true.

In addition, AI chatbots often contain addictive design elements intended to keep users chatting for longer, for example by ending each answer with a question back to the user so that the interaction continues. Some AI apps also pretend to be real people by offering virtual characters to chat with, ranging from ‘dream partners’ and film characters to psychologists. The design can be hyper-realistic, such as showing a telephone call screen when the voice option is used, so that it seems as if the user can ‘call’ the virtual conversation partner. This impression is only reinforced if the AI bot also sounds like a real person and the provider is not transparent about the fact that the user is not communicating with one.

Finally, the use of AI chatbots in crisis situations can be dangerous. AI chatbots are not able to draw up a personalised treatment plan, as a professional would, and they often fail to refer users to support services, or do so incorrectly. An AI chatbot cannot recognise emotions and nuances in conversations; it does not understand context in the same way as a support worker, and it has no empathy or sense of responsibility. As a result, users may rely on answers that do not help them and ultimately fail to receive the care they need.
Commercial purposes
The providers of many of these apps are commercial companies. These parties are profit-oriented and gain access to a great deal of personal information through these conversations.[3] This often includes special categories of personal data within the meaning of the GDPR, such as data concerning health, family situation or emotional well-being, to which the GDPR applies strict processing requirements. According to OpenAI's Privacy Policy, personal data is shared with third parties and affiliates of OpenAI.[4] This may create a risk that personal data is used outside its original context or processed for purposes other than those for which it was provided, leading to a loss of control over one's own data. In addition, a lower level of security or a transfer to countries outside the EU may increase the risk of misuse or unauthorised access. Personal data is also used for the further development of AI models, which serves an indirect commercial purpose: better models make the product more valuable and competitive.
Parties involved
Parents
It is important for parents to stay involved when their child uses an AI chatbot or app that is always available. Digiwijzer emphasises the importance of discussing with young people how AI works, what it can and cannot do, and why it is important to use AI technology consciously.[5] This requires parents to actively engage in conversation about their child's use of an AI chatbot and to explain that a chatbot is not a human being, does not treat what a child shares as confidential, and can give incorrect advice. Parents should also check the settings of AI chatbots to minimise the amount of data shared with the provider(s) of such AI technologies.
Educational institutions
Schools are increasingly encountering AI in the classroom, both as a learning tool and in the everyday lives of pupils. Education has a dual role to play here: providing information and offering protection. According to an article on Kennisnet, ‘a good AI strategy is essential for every school’.[6] Without a vision of its own on AI, a school loses control over how AI is used. A prerequisite is that schools understand how AI works, where its limits lie and what it means for young people. This requires digital literacy, critical thinking about technology, and insight into how young people use AI.
Government
The government plays an important role in ensuring the responsible use of AI technology by young people. On the one hand, it has a duty to protect young people from risks such as privacy violations, manipulation, discrimination and exposure to inappropriate content. The government does this by establishing laws and guidelines, monitoring compliance and promoting awareness of the safe use of AI. On the other hand, the government plays a supporting role by helping young people develop the knowledge and skills they need to use AI critically and responsibly. This can be done, for example, by promoting digital literacy in education or providing information through media channels. In this way, the government seeks to strike a balance between protecting young people and encouraging their development in an increasingly digital society.
Providers
Developers and providers of AI systems must realise that children are not ordinary users. OpenAI acknowledges this and states in its Privacy Policy that its services are not targeted at or intended for children under the age of 13, and that users under the age of 18 must have permission from a parent or guardian to use OpenAI's services.[7] In response to a number of worrying experiences, OpenAI has also introduced new features specifically for parents. One of these allows parents to link their own account to their child's, so that they receive a notification if ChatGPT detects that their child is in acute distress.[8] This is a step in the right direction, but an AI chatbot cannot reliably assess whether someone is truly in distress. The new features give parents more control, but they do not solve the core problem.
Design choices must be based on safety and comprehensibility. Important considerations include data minimisation, transparency, privacy-friendly age verification options, and ethics by design. The latter focuses on the privacy, protection and development of the child. The European AI Regulation also introduces new obligations for developers and providers of AI systems, particularly where AI interacts with children or exploits the vulnerabilities of minors.
Privacy First calls on the providers of such AI chatbots to take responsibility.
[1] See NOS Stories, 26 September 2025. For the above examples, see also Volkskrant, Parents sue ChatGPT over their son's suicide, 27 August 2025; NOS News, ChatGPT as an aid for mental health issues: ‘Better AI than therapy’, 26 September 2025; Telegraaf, 10 per cent of young people would rather seek financial advice from AI than from their parents, 14 October 2025; Data Protection Authority, AP warns: chatbots give biased voting advice, 21 October 2025; Data Protection Authority, AI chatbot apps for friendship and mental health are simplistic and harmful, 12 February 2025.
[2] NOS News, ChatGPT refers people too quickly, notes suicide helpline 113, 26 September 2025.
[3] Data Protection Authority, AI chatbot apps for friendship and mental health are simplistic and harmful, 12 February 2025.
[4] OpenAI Privacy Policy, updated on 27 June 2025.
[5] Digiwijzer, AI enables children to converse with fictional characters: innovation or risk?
[6] Kennisnet, ‘A good AI strategy is essential for every school’, 1 October 2025.
[7] OpenAI Privacy Policy, updated on 27 June 2025.
[8] OpenAI, Creating better ChatGPT experiences for everyone, 2 September 2025.