AI and education at the UvA: a lecture and discussion with rector magnificus Peter-Paul Verbeek

It’s a theme that has been causing quite a stir in the educational community this academic year. UvA lecturers and students are also increasingly being confronted with Artificial Intelligence (AI) and the chatbot ChatGPT. The developments related to AI are causing concern, but many lecturers also see the opportunities that AI presents. Reason enough for rector magnificus Peter-Paul Verbeek and the UvA Teaching & Learning Centre (TLC) to organise a joint event on the role of AI and ChatGPT within education at the UvA. The event was designed to gather questions and ideas and to encourage an open discussion. Following a lecture by the rector, the 150 attendees engaged with each other on various topics regarding AI. On this page, you will find a recap of the rector’s lecture and you can read more about the main outcomes and leads from the afternoon.

A lot of questions but not so many answers yet

From 14:30, the beautiful auditorium of the Hermitage Amsterdam slowly filled up. By 15:00, there was not an empty seat to be found. “The popularity of the event is beyond expectations,” says Head of TLC Mariska Min-Leliveld, who kicked off the afternoon with an introduction. “It started out as a plan for a small event on AI and ChatGPT, but the original venue soon grew too small. In fact, we even had to close registration after 150 applications.” The interest in the event and the topic is no surprise, however, explains Min-Leliveld. “We are moving into a very interesting period and there are currently a lot of questions among UvA colleagues, but not many answers yet. And that is exactly why we are here today.”

Lecture by the rector magnificus

After welcoming everyone, the afternoon continued with a lecture by UvA’s rector magnificus Peter-Paul Verbeek. With his background in ethics and technological development within science, he is an expert on AI in science and education. Back in 2020, Verbeek argued that we should neither accept nor reject AI, but rather learn how to deal with it. In his lecture, he builds on this. “ChatGPT and AI are challenging and we cannot provide all the definitive answers today, but it is a great start to come together to further explore AI.” To this, he adds that there are some things we need to understand if we want to learn how to deal with AI. “We did not anticipate that AI could develop so quickly. ChatGPT is just one of many AI technologies out there. It is not some kind of database you browse through and look up information in, but a technology that does little more than constantly predict what the most logical next sentence would be. That means the answer you receive depends very much on the input you provide. The danger is that the output ChatGPT gives seems to be human-written, when it is not. This blurs the distinction between real and unreal.”

Three main fields

“Within the university, we especially need to consider the use of AI within three main fields,” Verbeek continues. “Education is the first field and of course, this immediately raises a lot of questions. How can you still take exams and have students write essays if you are not sure whether the work submitted was written by the students themselves? What should students learn to be prepared for a world in which artificial intelligence plays a central role? How do we ensure that their degrees remain valid? Second, we need to think about academic research, because AI can support us in our research and can also be used to review the work of others. Finally, it is, of course, important to consider the role of AI in society, because this is the society we are preparing our students for: how is artificial intelligence changing healthcare, justice and education?”

Technology is part of what makes us human

Verbeek understands that the questions within these themes lead to different academic concerns. However, he believes we should embrace technology with all the pros and cons it brings: “We need to expand the debate and that is why we are here today. After all, AI is not going to disappear and it will be part of science. We have to accept it without going overboard and becoming fatalistic. Technology is not our opponent but instead forms a connection between humans and the world. Technologies have always been fundamental to our existence and thinking and are part of what makes us human.”

Triangle

Nevertheless, Verbeek acknowledges that the connection with AI is sometimes more difficult to understand and accept. “AI looks like an ‘artificial agent’, an artificial actor taking over human tasks. That’s why we often see AI as a competitor. When writing emerged as one of the first disruptive technologies, people feared we would never be able to store anything in our heads again. Plato even feared there would be no more truth: everyone could form their own image of Plato’s ideas without him being there to explain himself.” Passionately, Verbeek continues, “Technologies are taking us into new eras and AI is now leading us into a new digital revolution. In this, AI is not opposed to humans, but connects people to the world around them: it helps people understand that world and to act in it. But in this, AI does play a very distinct role. We have to understand how AI understands the world, and AI has to learn to understand how we understand the world,” he adds, laughing.

Acting fast with time to reflect

At the end of his lecture, Verbeek returns to the implications AI has for education at the UvA. “Students are already using ChatGPT on a massive scale, so we need to start integrating it into our education. This is quite difficult because of the required licences and legal aspects, but we can certainly anticipate students using it. Therefore, we need to re-evaluate the way we assess and test students. In doing so, we need to act quickly, but paradoxically also take enough time to reflect. In the short term, this means supporting teachers, sharing knowledge and identifying the risks of using AI. In the medium term, we will work on updating our policies concerning educational content and assessment, both for faculties specifically and university-wide. In addition, for the long term, we have formed a task force in collaboration with the VU to anticipate future developments as well.”

The rector magnificus concluded his lecture with a clear message for the rest of the afternoon: “All the knowledge that we need to responsibly deal with ChatGPT is present in the room, so let’s have a good discussion!”

Watch the recording of the introduction and the lecture by the rector magnificus below.

'Open space' discussion with 'market stalls'

The discussion was designed as an ‘open space session’. In this format, the agenda of topics and items to be discussed is not fixed in advance; there is plenty of room for participants to suggest topics and agenda items. This made a completely open discussion possible.

At the back of the room, a temporary market had been created with seven different ‘market stalls’. At each stall were one or more hosts, or ‘market vendors’, who guided the discussion on a particular AI topic within education. Everyone was free to move around the ‘market’: different topics were discussed at the different stalls, and participants could also suggest new discussion points themselves. Whereas the stalls at a normal market become increasingly empty, the stalls at this open space market became increasingly full, or ‘richer’ in knowledge. Below you can read the main outcomes of the discussions per topic.

 

The most important outcomes per topic

Assessment

At the assessment stall, the discussion was focused on the huge impact AI has on the current forms of assessment at the university. These were the most important outcomes:

  • Many of our current forms of assessment are vulnerable to unauthorised use of AI. We must decrease our reliance on unsupervised written assignments, papers and take-home assignments and/or lower the weighting of these assessments in the final grade.
  • Because of the rise of AI, it will likely become less important that students master certain skills themselves. These skills no longer need to be practised and assessed as thoroughly. The flipside is that students must learn to work with AI responsibly and critically. All this means teaching and assessment must change drastically, perhaps even including programme exit qualifications.
  • Assessing lower cognitive levels such as knowledge, understanding and application can best be done on location and/or orally. For oral exams, there must be enough teaching staff to handle larger groups of students. Good calibration and coordination are essential here: this requires structural investment in teaching resources.
  • If we choose to use written assignments to assess higher cognitive levels such as critical analysis and evaluation, then it is best to shift the focus to the process rather than the end product. However, supervising and overseeing the writing process is time-consuming for teaching staff.
  • Project-based courses are also less vulnerable to fraud and can help students develop the skills they need for the job market. Not all courses lend themselves to this kind of teaching, and staff will need support in this.
  • Completely out of the box: the current emphasis on summative assessment may tempt students to resort to cheating with AI. Perhaps we should move away from this grading culture and try out ideas such as ‘ungrading’ to motivate students in a different way.
  • Teaching staff (and programme directors) need time to chart and process the impact of AI on their teaching and assessment.

Education

The term education is, of course, broad. At this stall, there was a broader discussion on how we can ensure education keeps track of the latest developments. These were the most important outcomes:

  • Don’t label the use of AI as fraud, don’t ban it, and make sure it doesn’t become a taboo. It is a new skill that we need to develop and be able to discuss. For this, the UvA should open the dialogue and also facilitate larger meta-discussions.
  • We do need to differentiate between different faculties and disciplines and have different standards for the use of AI.

Information literacy

With the arrival of AI, it is important that students also know how to use it properly. The UvA should therefore help them develop their information literacy skills regarding AI. These were the most important outcomes:

  • The verifiability of sources is an important component, as students remain responsible for their use of sources. However, this can be complex.
  • ChatGPT should not be used as a ‘shortcut’; students should continue to do their own work. ChatGPT can, however, be used as a source of inspiration.

Policy

The purpose of the event was not to share the UvA’s policy regarding AI. In fact, these policies are still largely under development. So this afternoon was a great opportunity to gather ideas on policy. These were the most important outcomes:

  • Policy is rapidly falling behind the developments, so AI policy will have to be flexible in the coming period in order to keep pace with them.
  • These developments demand an enormous amount from the education of the future. Therefore, as the UvA, we need to develop a new vision of education.
  • The examination board needs new guidelines. The input of individual lecturers can be used to establish these. Currently, the thesis is very dominant in the curricula. Other forms of assessment can be considered as graduation projects. It is necessary to review the whole assessment process of programmes to ensure that we can continue to guarantee that students can responsibly achieve the final attainment levels of their studies.
  • We need licences for AI tools that are ethical and ensure privacy. Policies must be transparent at all times.
  • Local policies may differ from one faculty to another. Central UvA policy should be flexible enough to accommodate this.

Responsible AI

At this stall, the discussion was not so much about whether we should use AI, but rather about how to use AI responsibly. These were the most important outcomes:

  • Responsibility when using AI is complex. It is difficult to determine who is liable and issues such as privacy also create difficulties.
  • When using and assessing AI, it is important that people are always involved in the process. If digital systems start assessing other digital systems, it can become dangerous.
  • At the UvA, we need licences to use different technologies related to AI. We as a university also need to start developing new tools and technologies with our own research.
  • A course should be included in the curriculum that teaches students and also teachers to use AI responsibly. An example of this could be including ‘Responsible AI’ as one of the academic skills.

Students

A delegation from the student council was also present at the event. Their input is vital because AI is having such a huge impact on education and because many students are already working with AI and ChatGPT themselves. These were the most important outcomes:

  • Students need support and guidelines regarding the use of AI, especially in relation to ethics.
  • With the emergence of AI, the focus within education needs to shift. Activating forms of learning and critical and analytical thinking are becoming increasingly important. Therefore, the focus should also be more on content rather than on the way students write.
  • However, the students who do not (want to) use AI and ChatGPT should not be forgotten.

Teacher support

For teachers especially, the emergence of AI presents a huge challenge. They need to learn how to deal with students using AI and be taught how to use these new technologies themselves. In this, teachers need support and facilitation. These were the most important outcomes:

  • Every teacher should receive basic training on generative AI (or at least be given the opportunity to take part in such training, for example as an e-learning module), and we should start thinking now about how to design this training. We also need to start developing best practices, templates and guidelines to support teachers.
  • We need to design assessment that can withstand students using AI. AI calls for activating forms of teaching, in which discussions are held about what students need to learn and be able to do, and how they can best do this. The question is not whether students can use ChatGPT; the conversation is about how to use it appropriately.
  • Teachers need time to review their forms of assessment and to consider other forms of assessment.
  • Teachers and staff need support and encouragement to think creatively and optimistically, especially after Covid.

Reflection on the afternoon

The organisation and attendees looked back on a successful afternoon. There was a positive atmosphere and all participants in the discussions were excited to learn more about AI as well as to share their ideas. The open discussion and the stalls with the various topics allowed everyone to provide valuable input and to gain new insights themselves.

In his closing words, the rector magnificus commented, “The energy was excellent today. Everyone sees AI as a challenge, but we have a lot of knowledge and expertise at the UvA. In any case, we are going to ensure that a mandatory course on AI will be introduced and that teachers will have time to implement everything. This afternoon was the beginning of the process and everything discussed will be taken into account.”

Tangible leads

What’s next in the field of AI and where can you turn for questions? We briefly list the most important leads:

AI Contacts per faculty