Once you have decided which assessment method you are going to use, you can start putting your assessment together. For exams that means writing open- and/or closed-ended questions, while for other methods, such as a paper, presentation or creative assignment, you will be making assignment guidelines.
For exams it’s helpful to decide beforehand how many questions you are going to ask, which subjects/learning outcomes and cognitive levels you want to cover, and what the weighting will be. An easy way to do this is by creating an assessment planning chart, also known as a test blueprint or specification model. By considering this beforehand, you can ensure your exam is well-balanced, with a good spread of subjects and levels, providing insight into the extent to which the student has mastered the learning outcomes.
An assessment planning chart creates an overview of the weighting and spread of learning outcomes, subjects and levels in your assessment. It helps you see whether all the material is covered equally, and at the appropriate cognitive level. This ensures that your assessment is in line with the course learning outcomes and covers all the course material (content validity). It can help you avoid including too many questions covering the same material and skills.
Making an assessment planning chart is also helpful if you need to create two exams on the same material, for instance a regular exam and a resit. This helps you ensure that the two exams assess the learning outcomes as similarly as possible. In addition, you can use the chart to explain your assessment choices to colleagues, exam boards, or accreditation committees. You can also share it with students to help them approach the course material correctly.
You can decide the level of detail for your planning chart yourself. You can specify the number of questions per cognitive level for each learning outcome/topic, how many points these questions are worth, or which topics are part of each learning outcome. You can also distinguish between open and closed questions. You can even make a chart for a whole course, detailing which learning outcomes/course materials are covered in each assessment (download a template here). Some faculties require you to do this for all courses and have a specific format you should use.
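For a quick sense of what such a chart can look like, here is a much-simplified, hypothetical illustration (the topics, question counts and weights are made up; the downloadable example and templates are more detailed):

| Learning outcome / topic | Remember / understand | Apply | Analyze / evaluate | Number of questions | Weight |
| --- | --- | --- | --- | --- | --- |
| LO 1: topic A | 4 MC | 2 MC | – | 6 | 30% |
| LO 2: topic B | 2 MC | 2 MC | 1 open | 5 | 40% |
| LO 3: topic C | 2 MC | – | 1 open | 3 | 30% |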
Here is an example of an assessment planning chart for a written exam (download a blank template here):
Which question types you choose depends on your course learning outcomes. Do you want to assess recall, understanding or application? Then closed-ended questions or very short open-ended questions are most suitable, depending on the size of the group. If you want to assess higher cognitive levels, open-ended questions are generally more suitable. It might be possible to answer an application question in a few sentences, but if you want students to analyze or evaluate something, a longer answer works better, potentially in the form of a paper, presentation or creative assignment. Read more about closed- and open-ended questions below.
Closed-ended questions are suitable for assessing lower-order cognitive skills, such as remembering, understanding and applying. It is possible to create closed-ended questions that speak to higher-order cognitive skills, but this is more difficult and time-consuming. Closed-ended questions work well for large groups of students (>100), because they can be graded automatically and objectively. Creating a good multiple-choice exam is time-consuming, however. For a reliable result you need a large number of questions, since you need to take into account that students with no knowledge of the material can guess a proportion of the correct answers (see below). Closed-ended questions work well in digital exams. One of the advantages is that the exam software generates information about your exam after it has been completed, which you can use to gain insight into the quality of your assessment.
There are many different kinds of closed-ended questions, such as yes/no, multiple choice, fill-in-the-blanks, ranking and hotspot questions.
Creating unambiguous multiple choice questions is very time-consuming, especially creating good distractors (= incorrect alternatives). Here are some guidelines:
Make sure…
Do not use…
If you are using closed-ended (multiple choice) questions, you must take into account the probability that students can guess a proportion of the correct answers without any knowledge of the course material. Effectively this means that a number of guessed answers will be left out of the grade calculation, and you must make sure that there are enough ‘unguessed’ answers left to give you a reliable sense of whether the student understands the material. In general, we can say that the more questions you have, the more reliable the result is, but there are minimum requirements to guarantee the reliability of the exam:
A multiple-choice exam with fewer questions than this is not reliable: you can't use it to determine whether students have actually mastered the learning outcomes. There are no guidelines for the number of open-ended questions you should include on an exam. However, the exam should, of course, cover all the learning outcomes it sets out to assess. An assessment planning chart can help you work this out.
Also see this article on the guess factor and calculating grades.
In all multiple-choice exams, students can answer a certain proportion of questions correctly without knowing any of the course material. For questions with 4 alternatives, a student with no knowledge of the material will, on average, answer 25% of them correctly. Those points are meaningless, in other words: they say nothing about how well the student has understood the material.
Online exam software can apply a correction-for-guessing formula automatically; otherwise you will have to do it manually. The formula works as follows. In an exam of 40 questions with 4 alternatives each, there is a 25% chance of guessing the correct answer, so a student who knows nothing will get 10 questions right on average. The grade is calculated using the remaining 30 questions. With a 55% cut-off point, students need to answer 55% of those 30 questions correctly: 16.5, rounded up to 17. That means students need to answer 10 + 17 = 27 out of 40 questions correctly in order to score a 5.5.
Another example: in an exam of 100 questions with 2 alternatives each, there is a 50% chance of guessing the correct answer, so the grade is calculated using the remaining 50 questions. With the same 55% cut-off point, that is 27.5 questions, rounded up to 28: students need to answer 50 + 28 = 78 out of 100 questions correctly to pass.
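If you want to check this arithmetic for your own exam, here is a minimal sketch of the calculation in Python (the function and variable names are our own, purely for illustration; as noted above, online exam software can apply the correction automatically):

```python
import math

def passing_score(num_questions: int, num_alternatives: int, cutoff: float = 0.55) -> int:
    """Minimum number of correct answers needed to pass, after correcting for guessing."""
    expected_guessed = num_questions / num_alternatives  # answers a blind guesser gets right on average
    remaining = num_questions - expected_guessed         # questions that actually count towards the grade
    return math.ceil(expected_guessed + cutoff * remaining)

print(passing_score(40, 4))   # 27 out of 40, as in the first example
print(passing_score(100, 2))  # 78 out of 100, as in the second example
```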
For more information, see this article on the guess factor and calculating grades.
Open-ended questions are very suitable for assessing higher-order cognitive skills. Of course you can also use them to test recall, but that can be very time-consuming and inefficient, especially if you have many students. When writing open-ended questions, keep in mind what sort of answer you’re expecting. Is it a few words, sentences, paragraphs or pages long? Can the student answer in short sentences or bullet points, or do you want them to write a coherent text with an introduction and a conclusion? Open-ended questions also include papers, essays, presentations or more creative assignments (see below). The type of open-ended question you choose depends on the learning outcomes you want to assess.
Whatever question format you choose, make sure that students know what is expected of them beforehand. Give them plenty of opportunities to practice, at home or during class. Provide a mock exam or sample questions and consider hosting a Q&A session. This helps prevent exam stress. The exam shouldn’t contain any big surprises for students who have prepared well.
In the interest of transparency, make your assignment guidelines for a paper, essay, presentation or creative assignment as clear as possible:
The principal aim of assessment is to determine whether or not students have achieved the learning outcomes. In an exam, this means you need a cut-off point, also known as the pass/fail mark. You also need to decide how many points are needed for a certain grade, in other words, grade calculation (a useful website for this is cijfersberekenen.nl, in Dutch). In the Netherlands, the cut-off point is usually at 55% of the total number of points, but in certain cases a different percentage may be used. For instance, it may turn out that an exam was much too easy or difficult: in such cases, you might decide (after consulting an assessment specialist) to adjust the cut-off point. It’s important to be transparent towards students about cut-off points and grade calculation. You don’t have to pin down the cut-off point beforehand, but do make sure students know how the grading procedure works, and that the cut-off point may be adjusted after the fact, based on the results. The rules and regulations on this differ per faculty.
There are three main ways of establishing the cut-off point: absolute methods, relative methods, and forms that lie between these two, so-called ‘compromise’ methods.
With the absolute method, the instructor decides beforehand how many points students must score in order to pass. The grade is then calculated based on this cut-off point. The advantage of this method is that students know beforehand how many points they must score to pass; the disadvantage is that it does not take the quality of the exam and the teaching into account.
When using the relative method, also known as grading on a curve, you use the students’ scores as a starting point to determine the cut-off point. The highest and lowest scoring students form the ends of the scale, in this case. The scores of the other students are calculated in relation to this. This means that the grade is an expression of the student’s position relative to other students, rather than relative to the course material. The cut-off point is determined after the exam has taken place. This method is not suitable for small groups of students (<50) and the proportion of passes and fails is fixed, irrespective of the actual exam results. This approach is not very common in Dutch teaching practice.
Both the relative and absolute methods of choosing a cut-off point have advantages and disadvantages. There are several methods that lie between these two extremes, so-called ‘compromise’ methods. Usually these operate on the basis of an absolute cut-off point that can be adjusted if circumstances require it (for instance, an exam that was too difficult, leading to too many fails). An interesting compromise method was put forward by Cohen-Schotanus. It assumes that the maximum score (i.e. all answers correct) is rarely achieved on an exam. Instead, the average score of (for instance) the best 5% of students (who arguably deserve a 10) is treated as the effective maximum number of points, and the cut-off point is determined based on this. Ask your faculty's assessment specialist for help in using this method.
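Purely to illustrate the idea described above, a minimal sketch (the names and the 5% and 55% figures are illustrative assumptions; the actual Cohen-Schotanus procedure involves more detail, so always consult your faculty's assessment specialist before applying it):

```python
def compromise_cutoff(scores: list[float], top_fraction: float = 0.05, cutoff: float = 0.55) -> float:
    """Illustrative cut-off in the spirit of a compromise method: treat the average
    score of the best-performing students as the effective maximum score."""
    ranked = sorted(scores, reverse=True)
    top_n = max(1, round(len(ranked) * top_fraction))
    effective_max = sum(ranked[:top_n]) / top_n  # average score of e.g. the best 5% of students
    return cutoff * effective_max                # pass mark relative to that effective maximum

# Example: if the best 5% score 90 points on average, the pass mark becomes 0.55 * 90 = 49.5 points.
```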
While constructing your assessment, don't forget to make an answer key (for exams; for assignments, use an assessment form or rubric, see below). Writing an answer key or sample answers helps you think about the answer you expect students to be able to provide, but also about how many points each (aspect of the) answer should be worth. It also helps you spot ambiguities in the phrasing of the question. Make the answer key as clear as possible about which responses count as correct or incorrect, so that different assessors cannot interpret it differently. Your answer key should also indicate how many points students must score to pass the exam.
You should use an assessment form or rubric if you are setting longer open-ended questions or assignments such as a paper, presentation or creative assignment where there is more than one correct answer, and where your grading is based on more criteria than just content (for instance structure and use of language). This can help you convey clearly to students what is expected of them. It’s helpful to make such forms or rubrics available to students beforehand.
For multiple choice questions, the answer key is simple: it lists the correct answer(s) and specifies how many points each correct answer is worth. For digital exams (for instance in TestVision or ANS), you will have to provide this information when constructing the exam.
For open-ended questions, the answer key is more elaborate and serves in part to limit differences between assessors. It’s also useful for when students come in to view their exam, to explain how their grade was calculated. An answer key includes sample answers, but also explains how the points are allotted for each question. Assessors must be informed how to deal with answers that are partially correct, or answers that are not in the key. The answer key can be updated during the grading process if it turns out that other answers are also possible. In this case, work that has already been graded must be checked again using the updated answer key. For students, the answer key should help them understand how their grade was reached.
An assessment form helps you assess oral and written skills (such as papers or presentations), group work and portfolios as objectively, consistently and transparently as possible. There are several different kinds of assessment form, such as a checklist or grading scale/form.
A checklist allows you to compile a list of aspects to take into account while grading. This is particularly useful for pass/fail assignments (AVV/NAV). If you want to award actual grades, a grading scale or form works better. This sets out the criteria you will assess against a number of levels (for instance a 5- or 10-point scale). You can also attach a weighting to each of the criteria, and stipulate whether there are minimum requirements for a passing grade. A more detailed version of this is a rubric, which not only includes a description per criterion, but further specifies this description per level.
A rubric is a table you can use to ensure grading is objective, consistent and transparent. Read more about rubrics in this article.
Below is an example of a rubric used to assess writing within the Law Faculty, which can serve as a model or template for any course that works on writing skills. The rubric can be used as is, or supplemented and adjusted as needed, so that the assessment of the writing assignment matches the content of the course, the type of assignment and its place in the assessment programme.
When you have finished writing the exam or assignment guidelines, ask a colleague to peer review them for you. Make sure you also give them the answer key and/or assessment form/rubric. This helps you edit out any ambiguities, typos or layout issues, and avoid a mismatch between the way the question is formulated and the answer you expect students to give.
Refer to the assessment planning chart for your course: this specifies which topics or learning outcomes are covered in the assessment, and at which cognitive level.
Your assessment should cover all of the course material. If one topic is more important than others, or you covered it in more depth in your teaching, then it goes without saying that this should be reflected in the exam. If all topics are equally important, then there should be more or less the same number of questions per topic.
The format and length of the exam also dictate the number of questions you can ask on each topic. In a two-hour exam, you won't be able to ask more than 3-4 longer open-ended questions – that gives students about 30-40 minutes per question. If this means you can't cover all the material that should be in the exam, you will have to consider using shorter questions (or a longer exam). Another option is spreading the material over several exams.
This depends on the type and number of questions. You can make an estimate using the tables below. Do take into account that students will need more time if the question is very long, or if they need to calculate or look something up (in an open book exam, for instance).
Multiple choice questions

| Number of alternatives | Time per question |
| --- | --- |
| 2 alternatives | 50 seconds |
| 3 alternatives | 60 seconds |
| 4 alternatives | 75 seconds |
Open questions*

| Expected answer length | Time per question |
| --- | --- |
| A few words or one sentence | 1 minute |
| Quarter A4 page | 5 minutes |
| Half A4 page | 10 minutes |
| 1 A4 page | 25 minutes |
| 2 A4 pages | 60 minutes |
*This table assumes handwritten answers, but experience shows that typed answers take a similar amount of time.
(Source: Van Berkel, H. et al. (eds.), Toetsen in het hoger onderwijs, 2014)
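If you want a rough estimate of the total exam time for a given mix of questions, you can multiply the question counts by the indicative times above; a minimal sketch (the per-question times come from the tables above; the names and the example mix are our own illustration):

```python
# Indicative time per question in minutes, taken from the tables above
TIME_PER_QUESTION = {
    "mc_2_alternatives": 50 / 60,
    "mc_3_alternatives": 1.0,
    "mc_4_alternatives": 75 / 60,
    "open_few_words": 1.0,
    "open_quarter_page": 5.0,
    "open_half_page": 10.0,
    "open_one_page": 25.0,
    "open_two_pages": 60.0,
}

def estimated_duration(question_counts: dict[str, int]) -> float:
    """Rough total exam time in minutes for a given mix of question types."""
    return sum(TIME_PER_QUESTION[qtype] * count for qtype, count in question_counts.items())

# Example: 30 four-alternative MC questions plus 2 half-page open questions
print(estimated_duration({"mc_4_alternatives": 30, "open_half_page": 2}))  # 57.5 minutes
```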
An exam cover page often includes the following information:
Some faculties have a sample cover page available that you can fill in.
Instructors often try to make their exam worth a total of 100 points by allocating points based on weighting (30 points for an important question, 10 points for a shorter or easier question). However, it's usually less work for you if you base the number of points on the content of the answer. If you're asking students to provide 3 arguments, make the question worth 3 points: one per valid argument. This makes marking much easier, and you can still apply a different weight per question when calculating the final grade.
Example
If your exam contains 4 questions that all contribute equally to the final grade, you can allocate 25 points per question to reach a total of 100 points. But when is a student answer worth 25 points? How do you allocate points for a partially correct answer?
It’s easier to allocate 1 point per correct aspect of the answer: for instance, if students are expected to give 3 reasons, allocate 3 points for that question, and so on. If a student has given 2 correct reasons, and that question is worth 25% of the exam grade, then their answer is worth 25 x 2/3.
Here is an example of a scoring table for an exam with 4 questions that are each worth 25%:
| | Weight | Max. score | Student 1: score | Student 1: weighted score |
| --- | --- | --- | --- | --- |
| Question 1 | 25% | 3 pt | 2 pt | 25 x 2/3 |
| Question 2 | 25% | 4 pt | 3 pt | 25 x 3/4 |
| Question 3 | 25% | 6 pt | 5 pt | 25 x 5/6 |
| Question 4 | 25% | 6 pt | 4 pt | 25 x 4/6 |
| Final grade | | | | 73 / 10 = 7.3 |
25 x 2/3 + 25 x 3/4 + 25 x 5/6 + 25 x 4/6 ≈ 73, so the final grade is 73 / 10 = 7.3. Of course you can also choose to vary the weighting, for instance 20% for question 1, 30% for question 2, 10% for question 3 and 40% for question 4. If you keep track of student scores per question in Excel, you can use the formula function to calculate the individual grades.
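The same calculation can also be done in a short script if you prefer that to a spreadsheet; a minimal sketch (the weights, maximum scores and student scores are those of the example above, and a 1-10 grade scale with 100 points corresponding to a 10 is assumed):

```python
def final_grade(weights: list[float], max_scores: list[int], student_scores: list[int]) -> float:
    """Weighted final grade: each question contributes its weight multiplied by
    the fraction of its maximum score that the student obtained."""
    total_points = sum(w * score / max_score
                       for w, max_score, score in zip(weights, max_scores, student_scores))
    return round(total_points / 10, 1)  # 100 points corresponds to a grade of 10

# Example from the table above
print(final_grade([25, 25, 25, 25], [3, 4, 6, 6], [2, 3, 5, 4]))  # 7.3
```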
Students must be given the opportunity to practice with the types of questions they will have to answer on the exam. This also gives you a sense of whether there have been any misunderstandings and prevents such issues from coming up during the exam itself. Being familiar with the type of questions that will be asked also takes away a lot of stress, and allows students to prepare for the exam more effectively. If possible, provide time to practice during class, and make a few sample questions available. You can also host a question and answer session about the exam.
| Step | Key question |
| --- | --- |
| The previous step: Designing | How do I choose a form of assessment that accurately measures my learning outcomes? |
| Constructing | How do I construct effective questions and assignments? |
| The next step: Administering | What should I keep in mind while administering an exam? |
| Grading | How can I make sure my grading is efficient and reliable? |
| Analyzing | How do I evaluate and improve assessment quality after the fact? |
| Reporting | What should I keep in mind when returning grades and feedback? |
| Evaluating | How do I improve my assessment next year? |