Interview | Responsible AI in education: opportunities and challenges

Luuk Terbeek is a leading expert in AI and blended learning. As project leader for AI in education at the UvA and strategic adviser for digital transformation in education at Vrije Universiteit, he focuses on integrating AI responsibly into education. He initiated the AI Maturity in Education Scan (AIMES) to help institutions navigate ethical, regulatory, and practical challenges. Passionate about AI literacy (the ability to understand, use, monitor, and critically reflect on AI applications), he advocates for critical thinking and trustworthy AI tools. We spoke with Luuk about how to use AI responsibly in education.

How would you describe the role of GenAI in transforming teaching and learning?

Currently, its role is somewhat undesirable: not because of the technology itself, which holds great potential to enhance educational quality, but because of several challenges. These include a lack of mature guardrails (e.g., ethical and environmental considerations), insufficiently developed vision and policy (e.g., alignment with the EU AI Act and institutional values), and concerns over transparency from certain providers, including OpenAI. Addressing these issues requires advancing AI literacy at all levels of education, from educators to policymakers. This inspired me to develop AIMES, to support institutions in responsibly integrating AI.

Can you provide an example of a successful implementation of GenAI in education?

Not yet—but this isn’t surprising given the early stage we’re in. While there are promising pilot results (and some cautionary ones), we lack data on large-scale, long-term impact. The key question is how we can responsibly harness GenAI to truly prepare students for their professions, including those that don’t yet exist. The focus now should be on developing trustworthy tools and approaches that align with ethical standards and educational needs. 

What do you think are the most significant opportunities that GenAI offers educators and students?

If GenAI (hopefully soon) meets the key requirements for Trustworthy AI (not coincidentally the basis of AIMES), it has the potential to responsibly support what is perhaps the most important characteristic of higher education: critical thinking. For example, GenAI could participate in a debate. The report on the Experiences and conversations about AI use at the UvA conference, which I co-organised at the end of last year, describes this potential in more detail, as explained by several pilot presenters. GenAI can also be used to provide formative feedback and to help assess assignments, provided a good rubric has been developed. Unlike people, truly Trustworthy AI is not biased, which is a valuable advantage; even so, human agency must always be safeguarded. Hopefully we will soon have good tools that we also know how to use well, but as mentioned, this requires a mature level of AI literacy, which AIMES helps to promote.

How can GenAI help personalise learning experiences and improve engagement?

AI can help personalise learning experiences by matching the most appropriate content to user data. It can also make education more accessible, for example by converting text to speech or images and by summarising or simplifying texts. GenAI can potentially improve engagement by helping students brainstorm and explore possible scenarios. Here too, (further) developing AI literacy, such as prompting skills, is important.

What are the key challenges or risks educators should be aware of when adopting GenAI tools?

In Europe, some AI applications will be restricted under the EU AI Act from February 2025. For instance, systems aimed at manipulating behaviour or at recognising emotions in education will be prohibited unless used for medical or safety reasons. AI used for admissions, evaluation of learning outcomes, or exam monitoring is classified as high-risk.

Other risks include:

  • Lack of accuracy, reliability, or reproducibility in some systems. 
  • Poor data governance, leading to concerns about user data sharing or transparency. 
  • Potential biases in GenAI systems. 
  • Environmental sustainability concerns related to AI usage.

Addressing these challenges is vital for responsible and effective integration of GenAI into education. 

In your experience, what are some practical ways teachers can integrate GenAI into their courses today?

More evidence-informed work is important to ensure educational quality, but it can take a lot of time. GenAI can help by summarising articles and by quickly connecting learning objectives to course content. It can also provide inspiration for redesigning blended learning, suggest improvements to (e.g. instructional) texts, create images, convert text to video or vice versa, and help with programming, translation, podcast creation, and more. See e.g. this list of GenAI tools.

How do you envision the role of GenAI evolving in education over the next decade?

In Europe, we will likely see GenAI develop into more trustworthy systems, meeting ethical, regulatory, and educational standards. AI literacy will also become a key competency for educators and students, enabling the responsible use of AI to improve education quality and outcomes. These two factors—trustworthy AI and AI literacy—will be essential for realising GenAI’s transformative potential. 

What advice do you give to educators who are hesitant about using GenAI in their teaching?

Stay critical, but adopt a constructive mindset. Seek advice on both practical applications and policy implications, and learn from best practices and lessons shared by peers and students. Consider how GenAI might complement your teaching, whether in small or significant ways. Ultimately, it’s up to you to decide how—and if—GenAI aligns with your educational goals. Maintaining human agency is always paramount.

This interview was first published in the TLC-EB magazine 2025.