Grading is the fourth step of the assessment cycle and relies on the answer key or assessment form/rubric you already made while constructing your assessment. Intuitively, we often assume grading is objective and that the final grade accurately reflects a student’s performance, but it’s easy for bias and subjectivity to creep in. It is therefore key to keep the four quality standards for assessment (see step 1, designing) in mind throughout the grading process.
Make sure you only grade students on what they have learned during the course: the learning outcomes, in other words. Subconsciously, we often weigh other factors too, which may unfairly benefit or disadvantage students. For instance, if students make a podcast or a poster presentation and the learning outcomes focus on content and argumentation rather than form, it is not fair to award higher grades to work that displays professional-level design or editing skills. Similarly, check that your expectations of students’ written or spoken language are realistic for the level of your course.
Take steps to ensure your grading is reliable, even when there are multiple assessors. Use answer keys, rubrics or grading forms to keep grading consistent, organize calibration sessions, and be mindful of possible grading biases.
Grading should be feasible within the hours allotted for teaching a course. A very high grading workload may lead to grading fatigue and a lack of consistency. Ideally, this aspect would already have been considered when designing assessment.
Students should know beforehand how they will be graded (for instance by sharing rubrics), and afterwards, they should be able to understand how their grade was determined.
Ideally, you will already have written an answer key (for exams) or an assessment form or rubric (for assignments; see below) while constructing your assessment (see step 2 of the assessment cycle). Writing an answer key or sample answers helps you think about the answer you expect students to provide, and about how many points each (aspect of the) answer should be worth. It also helps you spot ambiguities in the phrasing of the question. Make sure the answer key is as clear as possible about which responses count as correct or incorrect, so that it leaves no room for different interpretations between assessors. Your answer key should also indicate how many points students must score to pass the exam.
You should use an assessment form or rubric if you are setting longer open-ended questions or assignments such as a paper, presentation or creative assignment where there is more than one correct answer, and where your grading is based on more criteria than just content (for instance structure and use of language). This can help you convey clearly to students what is expected of them. It’s helpful to make such forms or rubrics available to students beforehand.
For multiple choice questions, the answer key is simple: it lists the correct answer(s) and specifies how many points each correct answer is worth. For digital exams (for instance in TestVision or ANS), you will have to provide this information when constructing the exam.
For open-ended questions, the answer key is more elaborate and serves in part to limit differences between assessors. It’s also useful for when students come in to view their exam, to explain how their grade was calculated. An answer key includes sample answers, but also explains how the points are allotted for each question. Assessors must be informed how to deal with answers that are partially correct, or answers that are not in the key. The answer key can be updated during the grading process if it turns out that other answers are also possible. In this case, work that has already been graded must be checked again using the updated answer key. For students, the answer key should help them understand how their grade was reached.
An assessment form helps you assess oral and written skills (such as papers or presentations), group work and portfolios as objectively, consistently and transparently as possible. There are several different kinds of assessment form, such as a checklist or grading scale/form.
A checklist allows you to compile a list of aspects to take into account while grading. This is particularly useful for pass/fail assignments (AVV/NAV). If you want to award actual grades, a grading scale or form works better: it sets the criteria you will assess against different performance levels (for instance a 5- or 10-point scale). You can also attach a weighting to each of the criteria, and stipulate whether there are minimum requirements for a passing grade. A more detailed version of this is a rubric, which not only includes a description per criterion, but further specifies this per performance level.
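As a sketch of how weighted criteria combine into a final score, consider the example below. The criteria, weights and 5-point scale are purely illustrative assumptions, not part of any prescribed grading form:

```python
# Hypothetical weighted grading form: three criteria scored on a
# 5-point scale, each with a weight reflecting its importance.
# The final score is the weighted sum of the criterion scores.
criteria = {
    "content":   {"score": 4, "weight": 0.5},
    "structure": {"score": 3, "weight": 0.3},
    "language":  {"score": 5, "weight": 0.2},
}

final_score = round(sum(c["score"] * c["weight"] for c in criteria.values()), 2)
print(final_score)  # 4*0.5 + 3*0.3 + 5*0.2 = 3.9 on the 5-point scale
```

A minimum requirement (for instance, at least a 3 on content) would then be checked separately, before the weighted score is computed.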
There are several different kinds of rubrics. A holistic rubric includes one descriptor for each performance level (for instance: good, sufficient and insufficient). An analytic rubric goes further and splits the assessment into several criteria (such as content, structure and language), with a descriptor for each performance level. In addition, you can include how many points can be earned per level. A single-point rubric only describes the pass mark for each criterion, leaving room to the left and right of the middle column to give students feedback on how they performed below or above expectations.
Go through your rubric with all assessors beforehand, to limit scoring differences. You can also organize a so-called calibration session, where you use the rubric to score the first few assignments together, to make sure you are all filling it in consistently. Visit this page for more information, examples and instructions on how to write a rubric.
The principal aim of assessment is to determine whether or not students have achieved the learning outcomes. In an exam, this means you need a cut-off point, also known as the pass/fail mark. You also need to decide how many points are needed for a certain grade, in other words, grade calculation (a useful website for this is cijfersberekenen.nl, in Dutch). In the Netherlands, the cut-off point is usually at 55% of the total number of points, but in certain cases a different percentage may be used. For instance, it may turn out that an exam was much too easy or difficult: in such cases, you might decide (after consulting an assessment specialist) to adjust the cut-off point. It’s important to be transparent towards students about cut-off points and grade calculation. You don’t have to pin down the cut-off point beforehand but do make sure students know how the grading procedure works, and that the cut-off point may be adjusted after the fact, based on the results. The rules and regulations on this differ per faculty.
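To make the link between cut-off point and grade calculation concrete, the sketch below assumes the common Dutch 1–10 scale with a linear calculation on either side of the cut-off. The function name and exact formula are illustrative assumptions, not a prescribed procedure; faculties may use other formulas:

```python
def calculate_grade(score, max_points, cutoff_fraction=0.55):
    """Map an exam score to a grade on the Dutch 1-10 scale.

    Illustrative sketch only: the cut-off score maps to 5.5 (the pass
    mark), and grades run linearly from 1 (zero points) up to 5.5 at
    the cut-off, then linearly from 5.5 up to 10 at the maximum score.
    """
    cutoff = cutoff_fraction * max_points
    if score >= cutoff:
        return round(5.5 + 4.5 * (score - cutoff) / (max_points - cutoff), 1)
    return round(1.0 + 4.5 * score / cutoff, 1)

# With a 55% cut-off on a 100-point exam:
# calculate_grade(55, 100) -> 5.5, calculate_grade(100, 100) -> 10.0
```

Adjusting the cut-off point after the fact, as described above, then only requires changing `cutoff_fraction` and recalculating the grades.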
There are three main ways of establishing the cut-off point: there are absolute and relative methods, and forms that lie between these two: so-called ‘compromise’ methods.
With the absolute method, the instructor decides beforehand how many points students must score in order to pass. The grade is then calculated based on this cut-off point. The advantage of this method is that students know beforehand how many points they must score to pass, but the disadvantage is that it does not take into account the quality of the exam and the teaching.
When using the relative method, also known as grading on a curve, you use the students’ scores as a starting point to determine the cut-off point. The highest and lowest scoring students form the ends of the scale, in this case. The scores of the other students are calculated in relation to this. This means that the grade is an expression of the student’s position relative to other students, rather than relative to the course material. The cut-off point is determined after the exam has taken place. This method is not suitable for small groups of students (<50) and the proportion of passes and fails is fixed, irrespective of the actual exam results. This approach is not very common in Dutch teaching practice.
Both the relative and absolute methods of choosing a cut-off point have advantages and disadvantages. There are several methods that lie between these two extremes, so-called ‘compromise’ methods. Usually these operate on the basis of an absolute cut-off point that can be adjusted if circumstances require it (for instance, an exam that was too difficult, leading to too many fails). An interesting compromise method was put forward by Cohen-Schotanus. It assumes that the maximum score (in other words, all answers correct) is rarely achieved on an exam. The average score of (for instance) the best 5% of students (who actually deserve a 10) is therefore treated as the effective maximum number of points, and the cut-off point is determined based on this. Ask your faculty’s assessment specialist for help in using this method.
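The Cohen-Schotanus idea can be sketched as follows. This is a deliberately simplified version (it ignores corrections for guessing, among other things), and the function name and parameter values are illustrative assumptions:

```python
def cohen_schotanus_cutoff(scores, top_fraction=0.05, pass_fraction=0.55):
    """Simplified sketch of the Cohen-Schotanus compromise method.

    The mean score of the best-performing students (here, the top 5%)
    is treated as the effective maximum score; the cut-off point is a
    fixed fraction of that, rather than of the theoretical maximum.
    Consult an assessment specialist before applying this in practice.
    """
    ranked = sorted(scores, reverse=True)
    n_top = max(1, round(len(ranked) * top_fraction))
    effective_max = sum(ranked[:n_top]) / n_top
    return pass_fraction * effective_max

# If the top students average 80 out of 100 points, the cut-off becomes
# 0.55 * 80 = 44 points instead of 55.
```

Because the cut-off follows the best students' performance, an exam that turned out too difficult automatically yields a lower pass mark.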
To learn more about how to give effective and efficient feedback, you can follow our e-learning module.
Plagiarism and fraud are a serious breach of academic integrity. Students can be penalized for this in various ways, ranging from suspension from the course to – in extreme cases – suspension from the programme. It’s good to keep in mind that plagiarism and fraud are not always intentional. Students should be clearly informed of the rules concerning plagiarism and fraud for the course, assignment or exam in question, and why it’s so important to follow these.
If you suspect a student of plagiarism, cheating or unauthorized use of GenAI, please report it to your faculty or programme’s Examinations Board (see the A-Z list on your faculty’s staff website). Don’t come up with your own solution or penalty: students are entitled to equal treatment. The Examinations Board is tasked with ensuring that cases of plagiarism and fraud are dealt with fairly and consistently. For more information, please see the Regulations Governing Fraud and Plagiarism for UvA students.
There could be several reasons for this. Some students are stronger in written assignments than in exams, or might suffer from exam stress. Other students are better at exams, or may have put in a lot of effort for the exam despite not having worked hard during the course. It’s important not to let this sort of thing influence your assessment; that’s why you might consider grading anonymously to avoid this kind of problem. If you think a student has plagiarized or cheated, contact your faculty’s Examinations Board.
In that case, the student did answer the question correctly, so you should mark it as correct. Update the answer key accordingly, and if there are other assessors, make sure you inform them. It might be the case that the answer given was not mentioned in the course material, but if the question doesn’t explicitly ask for information from the course material, you must mark the answer as correct. If you’re still uncertain, you can ask the student to explain their answer (How did you reach this answer? Can you explain what you wrote here?).
You can only do this if language is an explicit part of the rubric or assessment form. In other words: you can’t deduct points for language errors unless language is part of the course learning outcomes and assessment criteria. In that case, you can mark the answer as (partially) incorrect. It’s important to discuss this kind of concern in your team to ensure grade consistency.
Designing | How do I choose a form of assessment that accurately measures my learning outcomes?
Constructing | How do I construct effective questions and assignments?
The previous step: Administering | What should I keep in mind while administering an exam?
Grading | How can I make sure my grading is efficient and reliable?
The next step: Analyzing | How do I evaluate and improve assessment quality after the fact?
Reporting | What should I keep in mind when returning grades and feedback?
Evaluating | How do I improve my assessment next year?