AI as an Exam Prep Assistant

Chei Billedo

We’ve all stared at a blank page, knowing the huge task of writing an exam lies ahead. It’s a job that can drain precious time and energy. But what if an AI assistant could help you beat that blank page and get the ball rolling?

That’s exactly what Chei Billedo wanted to find out. She took part in the AI pilot program to see if the UvA AI Chat could help generate multiple-choice questions for the final exam of the Psychology of Misinformation course she teaches together with Anna Fenko. Her experience offers an interesting, real-world peek into the pros and cons of welcoming an AI into your course preparation.

A Conversation of Refinement

Chei’s first attempts with the AI yielded “really bad items.” However, through experimentation, she learned that getting quality results wasn’t about a single command, but an iterative conversation. “I did a lot of tweaking,” she explained, refining her prompts to adjust the difficulty level and ensure the answer options weren’t too obvious.

This process of dialogue and refinement was key. Simple prompts gave simplistic results, but by asking the Chat to make questions “more analytical,” for example, and specifying the level of difficulty expected in the prompt, she was able to guide the tool toward producing questions that were much closer to her standards.

Was it worth the effort? Absolutely.

As one of the biggest advantages, Chei emphasized efficiency. “It definitely did save me time,” she confirmed, adding that the tool was perhaps less of a time-saver in terms of total hours and more of an “energy saver.” It eliminated the stress and cognitive load of “thinking about items from scratch.” The Chat proved excellent at providing a baseline of questions that could then be curated and improved. It also generated a large pool of items, enough to select the best ones for the main exam and reserve others for a future resit.

Another advantage Chei appreciated was the new angles the AI output provided. She specifically mentioned that the Chat generated some “good questions that I think we wouldn’t have come up with,” offering fresh perspectives on the course material.

So, can we just sit back and rely on the AI to create our exams from now on? Absolutely not.

The pilot also highlighted critical limitations where the teacher’s role is irreplaceable. The teacher’s expertise is still needed as a quality filter. The Chat, for example, occasionally produced questions focused on trivial details, such as the number of participants in a study, which weren’t among the intended learning outcomes. It was the teacher’s job to “weed out” these irrelevant items. Chei also stressed that the AI tool doesn’t know what you specifically emphasized in class. “It’s really [the] teacher’s responsibility to be familiar with articles and what you also lectured in class,” she noted, and to ensure that the exam questions reflect this.

Another subtle but significant flaw emerged: the AI-generated correct answers were often noticeably longer and more detailed than the incorrect ones. This pattern, spotted by the exam’s language reviewer, is a classic tell that can give the answer away.

The Main Message for Other Teachers

Chei’s core message is clear: the UvA AI Chat is a powerful assistant, not a replacement for the educator. “I really welcome it,” she said, “but at the same time, I think the teacher’s responsibility [is] to ensure that we still get valid and reliable exam questions.”

The human component is essential. The AI can’t replicate a teacher’s deep understanding of the material or their specific pedagogical goals. It can give you a starting point, save you energy, and even inspire new ideas, but the final product must be shaped by your professional expertise. As Dr. Billedo concludes, “it doesn’t work without your brain to modify, to tweak, in order to get to exactly what is needed.”