Recap: Experiences and conversations about AI use at the UvA

On 19 November 2024, TLC Central, together with the DLO Board (Digital Learning Environment), organized the UvA AI event Conversations & Experiences in the Humanities Lab (Bushuis). It was a successful, well-attended afternoon in which faculty shared their experiences with pilots around AI use. There was also room for the audience – which consisted of faculty, ICT staff, software developers and other interested parties – to discuss issues and put questions to an expert panel. An interesting afternoon that calls for a sequel, because there is still much to learn about responsible AI use and how to anticipate it as a university.

Pioneers: five AI pilots at the UvA

This event was organized by Frank Benneker (Central Education Information Manager and DLO Board Member), Ivar Kolvoort (Advisor AI in Education at TLC Central and Co-Chair of the UvA Working Group AI in Education) and Luuk Terbeek (Project Leader AI in Education at TLC Central and Chair of the UvA Working Group AI in Education). They had long been toying with the idea of organizing such an event and offering pioneers a stage to share their experiences with a wide audience.

Co-organizer and host Luuk Terbeek opened the afternoon with a welcoming speech in which he shared relevant updates for a wide audience. In particular, Luuk highlighted the valuable content of the UvA.nl/AI website and the Teams channel AI in Education Lab. In addition, Luuk explained the AI Maturity in Education Scan (AIMES). AIMES is aimed at increasing the AI literacy of teachers, program directors and faculty administrators. It is a joint initiative of the UvA and VU Amsterdam and was developed in collaboration with an international expert panel. Luuk, initiator and project leader of AIMES, indicated that it will be launched in January 2025.

Then the pilot presenters had the floor, each briefly introduced by Luuk.

Pilot 1 – AI used in debate

Spearheading the presentations was Paul Verhagen (Faculty of Science), who ran the FNWI pilot on AI as a debating partner in the Data Futures Lab. Students debated with AI. The goal was to increase the diversity of counterarguments, improve students’ ability to defend their positions, and give them practical experience with LLM prompting. Main conclusion: AI models can be very useful as diversifiers in debate, and Paul urged those who want to use AI for debate to get creative with it.

Pilot 2 – Diversification of AI interfaces

The second pilot, by Morten Strømme (Faculty of Science), was demonstrated through a live demo of working with AI in the UvA Makerspace. Several pilots with AI are running at the IIS. Morten discussed how the Makerspace Bot acts as a virtual assistant that guides Science, Technology & Innovation students through their projects, tailored to their skills and the equipment available in the UvA Makerspace. As a result, the space itself becomes a collaborative partner while students build their prototypes. Key conclusions: your own lesson materials (slides) are the best instructions, one bot can serve multiple purposes, and some work is still needed to prevent the AI from confusing itself in longer conversations. Different AI interfaces fit different student learning types and practices. The team saw that AI use differs by personality type and concluded that people with certain traits, such as perfectionists, struggle with the interactive and unpredictable nature of AI. Highly intuitive creators prefer unstructured, spontaneous creation, and solitary creators see creativity as highly personal and resist collaborating with AI. Changes in the way we interact with AI can help these personality types. Morten therefore sees a future in the diversification of AI interfaces.

Pilot 3 – AI for formative feedback

The third pilot, by Erik Elings (Amsterdam UMC), was about automated formative feedback on academic writing. The question was: is automated formative feedback on an undergraduate thesis possible with generative AI? And how is this perceived by students? In his presentation, Erik shared the development of a feedback tool and preliminary research findings on student experiences. Students responded positively to the use of AI for formative feedback. They find the tool useful and appreciate that it complements teacher feedback. “With such an AI feedback tool, you can ask for feedback at any time if you get stuck.” But there are also more critical voices: “It is very tempting to use the feedback without critically evaluating it. Certainly, we need to learn to use it and use it for good in a changing world, but at the same time we need to embed it in our independent, critical scientific thinking.”

Pilot 4 – Voting for AI

The fourth pilot, “Would you vote for AI?”, was presented by Computational Sciences student Sahir Dhanani Enarth. He was part of a semester-long project with the Azure API within the undergraduate Computational Social Science program, is also doing research on this project, and shared his insights. The project examined the role of AI in predicting vote shares, generating political manifestos and influencing public opinion. Conclusions: AI is already widely used in politics today. AI enables micro-targeting and voter manipulation. It proves difficult for people to recognize what is created by AI and what is not: the public sees AI output as absolute truth, which it is not. This is where awareness needs to be raised. Regulation also lags behind AI developments.

Pilot 5 – Use AI critically in a safe environment

Jasper ter Schegget (Institute for Interdisciplinary Studies) concluded the pilot presentations with his project Digital Literacy. This pilot focuses on increasing the digital literacy of first-year students of the Bachelor Bèta-Gamma, specifically in the area of generative AI. By giving students an understanding of how large language models work and allowing them to experiment with generative AI in a safe environment, the program aims to foster responsible use of this technology. To accomplish this, generative AI has been integrated into the Academic Skills course, with a new learning objective: the student will be able to explain the capabilities and limitations of generative AI when writing academic texts. This learning objective is addressed during an interactive lecture by AI instructor Daniël Kooij and by having students experiment with feedback in Azure. To test whether students have achieved the learning objective, a Canvas test has been introduced.

Panel discussion: AI cannot be ignored

After the break there was a panel discussion with leading experts, who discussed AI developments and their different perspectives on the use of AI in education on the basis of several propositions. The discussion was led by Ivar Kolvoort (Advisor AI in Education, TLC Central); the panel consisted of Dora Achourioti (Chair of the AUC Taskforce on Generative AI in Education), Rik Jager (Product Owner Digital Collaboration) and Jolanda Broex (Education Advisor). They discussed the following propositions:

1. All undergraduate students should learn how to use AI in their first year.
2. The role of teachers will change dramatically because of AI.
3. The UvA should allow students to actively use commercial AI software (e.g. ChatGPT) so that they gain experience with the most advanced AI technology.

The general opinion was that AI cannot be ignored. You cannot tell students not to use AI; it is better to teach them how to use it, since they will (have to) use it in the workplace later on. AI use should therefore be included in bachelor curricula, and in fact this education should start as early as high school. Everyone also agreed that the use of free AI models should not be encouraged, for privacy and ethical reasons. AI models should only be used when they are managed by the educational institution.

Increasing AI literacy

Reference was also made to the gap that can arise between students who do not use AI and those who do. It is therefore good to teach students, in a safe environment, how to use AI and how to look at it critically. The same goes for teachers: not all teachers are experts on AI use, and among some there is considerable resistance to it. A better understanding of the opportunities, and of how to deal with the threats, can help increase support and underlines the urgency of AI literacy.

The role of the teacher is changing, for example through blended learning

Learning is best done together. Encourage students to adopt a critical attitude toward AI. And discuss prompting with them; for example, to (further) develop the skill of asking the right questions of AI. Dora Achourioti argued that the role of the teacher was already changing before AI through Blended Learning, for example. AI, according to her, fits into that development. So AI is actually building on a trend that had already started.

Positive aspects

For example, teachers may have more time to give feedback because of AI. AI may also make a teacher’s profession easier, and fears of the profession dying out are premature: teachers are still needed.

The audience had the opportunity to join the conversation, because the organizers believe it is important to learn from each other and explore where we can make strides in the use of AI within the UvA. This opportunity was actively used: many questions were asked and participants shared their own experiences and opinions. At the closing drinks, the conversations and discussions about AI continued enthusiastically. We are far from finished talking about AI.

Participants’ reactions

“Through the pilots, I got a good overview of how AI can increase the quality of education. Surprising!”

“Very clever to use AI as a debating partner to learn how to strengthen your arguments, I will definitely take that into my own teaching.”

“The event, both the pilots and panel discussions, helped me get an idea of the lines I can take in AI in my own working groups.”

“I was able to expand my network today and now know who is doing what in the AI field and who I could contact.”

“These are surely also ethical dilemmas, and the question is: how do we as humans stay in control?”

“I wish the UvA would have a dialogue with not only the ‘followers’ and ‘believers,’ but also people from outside the bubble who raise objections and concerns.”

“Very nice and inspiring to see what is already happening. I am impressed by what has been accomplished. Impressive and didactically well thought out.”

“We need guidelines on AI and how to set boundaries on this in our education.”

More about the AI pilots

Want to see more about the AI pilots within the UvA?

See videos about the AI pilots
Responsible AI education grant

AI offers many opportunities in education, for both students and teachers. However, its use also raises many ethical issues. Do you have an innovative idea for responsibly incorporating an AI technology into your lesson or learning environment? And would you like to experiment with it? Then apply for the Responsible AI grant and have a chance to be part of a groundbreaking innovation initiative that is shaping the future of education.

This call is open until 31 January, 13:00.

Go to Responsible AI grant