AI in education
Artificial Intelligence (AI) has the potential to radically change the education landscape by providing adaptive learning tools, personalised learning pathways and more efficient learning processes. However, these advances also raise privacy issues that require careful consideration and regulation.
Adaptive learning and personalisation
AI-powered adaptive learning systems can adapt to students' individual needs and learning styles. These systems continuously analyse student behaviour and performance, allowing them to provide customised learning materials. While this can enrich the educational experience, it raises privacy concerns, as students' personal data is collected and analysed. Moreover, if such a system is poorly designed or misused, a student could be offered learning materials at an inappropriate level.
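To make that mechanism concrete, here is a minimal sketch of how such an adaptive loop might select a difficulty level. All names, thresholds and the scoring window are hypothetical illustrations, not any real product's logic; the example also shows how a small design flaw (an overly short window) can misplace a student.

```python
# Hypothetical sketch of an adaptive learning loop; not a real product's API.
from dataclasses import dataclass, field

@dataclass
class Student:
    level: int = 1                      # current difficulty level (1 = easiest)
    recent_scores: list = field(default_factory=list)

def choose_level(student: Student, window: int = 5) -> int:
    """Raise or lower the difficulty based on recent performance."""
    scores = student.recent_scores[-window:]
    if not scores:
        return student.level
    avg = sum(scores) / len(scores)
    if avg > 0.85:                      # consistently strong: step up
        student.level += 1
    elif avg < 0.5:                     # consistently struggling: step down
        student.level = max(1, student.level - 1)
    return student.level

# A design flaw is easy to introduce: with a window of 1, a single bad
# answer immediately demotes the student, overriding their history.
s = Student(level=3, recent_scores=[1.0, 0.9, 0.0])
print(choose_level(s, window=1))        # prints 2 despite two strong scores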
Their use can therefore entail (privacy) risks, especially in the absence of further human oversight. The current text of the European AI Regulation identifies certain AI systems in education as high-risk systems. These include AI systems for admitting and assessing pupils and students, but could also include adaptive learning tools and fraud detection systems.[1] High-risk AI systems must meet strict requirements under the AI Regulation.
Data collection and profiling
One of the privacy challenges is the scope and nature of the data being collected. AI systems often collect not only academic performance data, but also behavioural data, socio-economic information and other personal details. This can lead to comprehensive and potentially inaccurate profiles of individual students, which poses risks for the protection of this sensitive information, such as discrimination, unfair treatment or incorrect assumptions and decisions. Users may also be steered in a particular direction on the basis of their data profile rather than their own preferences, which affects a person's free personal development.
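As an illustration of how broad such profiles can become, here is a hypothetical sketch of a profile record; every field name and value is invented for illustration. The point is that fields that seem harmless in isolation combine into a sensitive picture.

```python
# Hypothetical profile record, sketched to show how far beyond academic
# performance the collected data can reach. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class StudentProfile:
    # academic data
    grades: dict                # e.g. {"maths": 6.5, "reading": 7.8}
    time_on_task_minutes: int
    # behavioural data
    login_times: list           # when and how often the student studies
    clickstream_events: int     # every interaction with the platform
    # contextual / socio-economic data
    postcode: str
    home_language: str
    device_type: str

profile = StudentProfile(
    grades={"maths": 6.5, "reading": 7.8},
    time_on_task_minutes=42,
    login_times=["22:30", "23:10"],     # late-night logins become a "signal"
    clickstream_events=1874,
    postcode="1012",
    home_language="Turkish",
    device_type="shared family tablet",
)
# Combined, these fields support inferences (and mistakes) about a
# student's ability, motivation and background.
```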
Transparency
Another concern is the level of consent and awareness students and their parents have about the use of their data in AI systems. Proper information provision, in which data subjects are fully aware of what data is collected and how it is used, is essential. The AI expert group set up by the European Commission has published ethical guidelines that, among other things, flesh out transparency through three core values: traceability, explainability and communication. Educational institutions and other providers of AI systems in education should therefore invest in transparently informing stakeholders, in particular because transparency about the algorithms and decision-making processes of AI systems is also crucial for their reliability.
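One way to make traceability and explainability tangible is to log every automated decision together with the data and the reasoning behind it. The sketch below is a hypothetical illustration of such an audit record, not a prescribed format or an existing standard.

```python
# Minimal sketch of a decision audit record; the structure is hypothetical.
import json
from datetime import datetime, timezone

def log_decision(student_id: str, decision: str, inputs: dict, rationale: str) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "student_id": student_id,
        "decision": decision,
        "inputs_used": inputs,          # traceability: which data fed the decision
        "rationale": rationale,         # explainability: why this outcome
    }
    return json.dumps(record)           # in practice: append to an audit log

print(log_decision(
    student_id="s-0042",
    decision="offered remedial maths module",
    inputs={"avg_quiz_score": 0.47, "window": 5},
    rationale="average score below 0.5 threshold over last 5 quizzes",
))
```

A record like this supports the third core value, communication: it gives the institution something concrete to show a student or parent who asks why a decision was made.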
Bias
AI algorithms can inadvertently introduce bias and discrimination if they are trained on datasets that contain inherent bias. In an article from Kennisnet, Eric Postma says: "It is a fact that AI discriminates and does not respond in a gender-neutral way because it is a reflection of our society." Every person has biases, and these can unintentionally find their way into an AI algorithm. In education, such biases can result in unequal opportunities and treatment of students. It is important to be aware of these risks and to build in mechanisms to counter discrimination, as sketched below.
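One such mechanism is a routine audit of outcomes per group. The sketch below compares selection rates between two groups using the common "four-fifths" rule of thumb; the data, group labels and threshold are purely illustrative, and a real audit would require far more care.

```python
# Hypothetical audit of recommendation outcomes per group.
from collections import defaultdict

# (group, was_recommended_for_advanced_track) -- invented example outcomes
outcomes = [
    ("boys", True), ("boys", True), ("boys", False), ("boys", True),
    ("girls", True), ("girls", False), ("girls", False), ("girls", False),
]

counts = defaultdict(lambda: [0, 0])    # group -> [selected, total]
for group, selected in outcomes:
    counts[group][0] += int(selected)
    counts[group][1] += 1

rates = {g: sel / total for g, (sel, total) in counts.items()}
ratio = min(rates.values()) / max(rates.values())

print(rates)                            # {'boys': 0.75, 'girls': 0.25}
print(f"disparate impact ratio: {ratio:.2f}")   # 0.33 -- well below the 0.8 rule of thumb
```

A ratio this far below 0.8 would not prove discrimination by itself, but it is exactly the kind of signal that should trigger human review of the system.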
Use
Privacy in education is an important area of focus for Privacy First in the coming years. To safeguard it, it is essential to develop guidelines for the use of AI in education and to strike a balance between innovation and privacy protection. A joint effort by educational institutions, policymakers, advocates and technology developers is needed to ensure responsible use of AI in education.
[1] Kennisnet, "The AI Act: what can schools expect from this new law?" (29 June 2023); recital 35 of the AI Regulation.