4. Grading

1. Ensuring objective and reliable grades

Grading is the fourth step of the assessment cycle and relies on the answer key or assessment form/rubric you already made while constructing your assessment. Intuitively, we often assume grading is objective and that the final grade is an accurate reflection of a student’s performance, but it’s easy for bias and subjectivity to creep in. It is therefore key to keep the four quality standards for assessment (see step 1, designing) in mind during the grading process as well.

Validity

Make sure you are only grading students on things they have learned during the course, the learning outcomes, in other words. Subconsciously, we often weigh other factors too, which may benefit or disadvantage students unfairly. For instance, if students make a podcast or a poster presentation and the learning outcomes focus on content and argumentation, not form, then it is not fair to award higher grades to work that displays professional-level design or editing skills. Similarly, check to make sure that your expectations of students’ written or spoken language are realistic for the level of your course.

Reliability

Take steps to ensure your grading is reliable, even when there are multiple assessors. Use answer keys, rubrics or grading forms to keep grading consistent and organize calibration sessions. Be mindful of possible grading biases:

  • Contamination effect: Using assessment for other ends than simply grading. For instance, giving a low grade to show that your course is difficult or to stimulate students to work harder.
  • Halo/horn effect: Adjusting grades up or down based on a student’s behaviour rather than their performance (e.g. ‘This student always tries so hard…’, ‘This student doesn’t seem to care…’)
  • Shifting standards: Adapting your standards to overall student performance while grading, for instance by becoming less and less strict when many students get a question wrong.
  • Restriction of range: Not using the whole range of a grading scale (very common in the Netherlands): ‘We never give a 10,’ ‘Giving a 3 is so mean’.
  • Sequence effect: Grading is influenced by the order in which you do it. For instance, if you see a correct answer after many incorrect ones, you might have a tendency to give an unduly high grade.
  • Significance effect: Not everyone pays attention to the same things. In grading written assignments, for instance, you may have a personal focus on argumentation, style or language.

Feasibility

Grading should be feasible within the hours allotted for teaching a course. A very high grading workload may lead to grading fatigue and a lack of consistency. Ideally, this aspect would already have been considered when designing assessment.

Transparency

Students should know beforehand how they will be graded (for instance by sharing rubrics), and afterwards, they should be able to understand how their grade was determined.

2. Answer keys, assessment forms and rubrics

Ideally, you will already have written an answer key (for exams) or an assessment form or rubric (for assignments; see below) while constructing your assessment (see step 2 of the assessment cycle). Writing an answer key or sample answers helps you think about the answers you expect students to be able to provide, and about how many points each (aspect of the) answer should be worth. It also helps you spot ambiguities in the phrasing of the question. Make sure the answer key is as clear as possible about which responses count as correct or incorrect, so that it doesn’t leave room for different interpretations between assessors. Your answer key should also indicate how many points students must score to pass the exam.

You should use an assessment form or rubric if you are setting longer open-ended questions or assignments such as a paper, presentation or creative assignment where there is more than one correct answer, and where your grading is based on more criteria than just content (for instance structure and use of language). This can help you convey clearly to students what is expected of them. It’s helpful to make such forms or rubrics available to students beforehand.

Closed-ended questions

For multiple choice questions, the answer key is simple: it lists the correct answer(s) and specifies how many points each correct answer is worth. For digital exams (for instance in TestVision or ANS), you will have to provide this information when constructing the exam.
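
To make this concrete, here is a minimal sketch of what such an answer key could look like and how it would be applied; the question numbers, options and point values are purely illustrative:

```python
# Illustrative answer key for a multiple-choice exam: each question maps to
# its correct option and the number of points a correct answer is worth.
ANSWER_KEY = {
    1: {"correct": "B", "points": 1},
    2: {"correct": "D", "points": 1},
    3: {"correct": "A", "points": 2},  # a heavier-weighted question
}

def score_exam(responses):
    """Sum the points for every question answered correctly."""
    return sum(
        entry["points"]
        for question, entry in ANSWER_KEY.items()
        if responses.get(question) == entry["correct"]
    )

print(score_exam({1: "B", 2: "C", 3: "A"}))  # 3 points (questions 1 and 3)
```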

Open-ended questions

For open-ended questions, the answer key is more elaborate and serves in part to limit differences between assessors. It includes sample answers, but also explains how the points are allotted for each question. Assessors must be informed how to deal with answers that are partially correct, or answers that are not in the key. The answer key can be updated during the grading process if it turns out that other answers are also possible; in that case, work that has already been graded must be checked again using the updated answer key. The answer key is also useful when students come in to view their exam, as it helps them understand how their grade was reached.
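
As an illustration, one entry in such a key might record the sample answer, the point allotment per element of the answer, and the rule for partial credit (all content here is hypothetical):

```python
# Hypothetical key for one open-ended question worth 4 points. Spelling out
# the allotment per element keeps assessors consistent about partial credit.
OPEN_QUESTION_KEY = {
    "question": 5,
    "sample_answer": "A full model answer, written out for the assessors.",
    "point_allotment": [
        ("correct definition of the central concept", 1),
        ("relevant example supporting the definition", 1),
        ("valid argumentation linking example and concept", 2),
    ],
    "partial_credit": "award points per element present; no half points",
}
```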

Assessment forms

An assessment form helps you assess oral and written skills (such as papers or presentations), group work and portfolios as objectively, consistently and transparently as possible. There are several different kinds of assessment form, such as a checklist or grading scale/form. 

A checklist allows you to compile a list of aspects to take into account while grading. This is particularly useful for pass/fail assignments (AVV/NAV). If you want to award actual grades, a grading scale or form works better: it sets the criteria you will look at against different performance levels (for instance a 5- or 10-point scale). You can also attach a weighting to each of the criteria, and stipulate whether there are minimum requirements for a passing grade (see the sketch below). A more detailed version of this is a rubric, which not only includes a description of each criterion, but also specifies what is expected at each level.
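
A minimal sketch of how such a weighted grading form could be computed; the criteria, weights and the cap-at-5.0 rule for unmet minimum requirements are all assumptions for illustration:

```python
# Hypothetical grading form: weights sum to 1, scores are on a 10-point
# scale, and one criterion carries a minimum requirement for passing.
CRITERIA = {
    "content":   {"weight": 0.5, "minimum": 5.5},
    "structure": {"weight": 0.3, "minimum": None},
    "language":  {"weight": 0.2, "minimum": None},
}

def final_grade(scores):
    """Weighted average of the criterion scores, capped at a failing 5.0
    whenever a minimum requirement is not met."""
    grade = sum(spec["weight"] * scores[name] for name, spec in CRITERIA.items())
    for name, spec in CRITERIA.items():
        if spec["minimum"] is not None and scores[name] < spec["minimum"]:
            return min(round(grade, 1), 5.0)
    return round(grade, 1)

print(final_grade({"content": 7.0, "structure": 6.0, "language": 8.0}))  # 6.9
```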

Rubrics

There are several different kinds of rubrics. A holistic rubric includes one descriptor for each performance level (for instance: good, sufficient and insufficient). An analytic rubric goes further and splits the assessment into several criteria (such as content, structure and language), with a descriptor for each performance level; you can also include how many points can be earned per level. A single-point rubric only describes the pass mark for each criterion, leaving room to the left and right of the middle column to give students feedback on where they performed below or above expectations.

Go through your rubric with all assessors beforehand, to limit scoring differences. You can also organize a so-called calibration session, where you use the rubric to score the first few assignments together, to make sure you are all filling it in consistently. Visit this page for more information, examples and instructions on how to write a rubric. 

3. Cut-off points and grade calculation

The principal aim of assessment is to determine whether students have achieved the learning outcomes. In an exam, this means you need a cut-off point, also known as the pass/fail mark. You also need to decide how many points are needed for a certain grade: in other words, grade calculation (a useful website for this is cijfersberekenen.nl, in Dutch). In the Netherlands, the cut-off point is usually at 55% of the total number of points, but in certain cases a different percentage may be used. For instance, it may turn out that an exam was much too easy or difficult: in such cases, you might decide (after consulting an assessment specialist) to adjust the cut-off point. It’s important to be transparent towards students about cut-off points and grade calculation. You don’t have to pin down the cut-off point beforehand, but do make sure students know how the grading procedure works, and that the cut-off point may be adjusted after the fact, based on the results. The rules and regulations on this differ per faculty.
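
To make the arithmetic concrete, here is a minimal sketch of one common grade-calculation approach, assuming a 1–10 scale on which the cut-off maps to a 5.5 (the function and parameters are illustrative; your faculty may prescribe a different formula):

```python
def grade(score, max_score, cutoff_fraction=0.55):
    """Piecewise-linear grade on a 1-10 scale: 0 points maps to a 1,
    the cut-off score maps to a 5.5, and the maximum score to a 10."""
    cutoff = cutoff_fraction * max_score
    if score < cutoff:
        return round(1 + 4.5 * score / cutoff, 1)                         # 1 to 5.5
    return round(5.5 + 4.5 * (score - cutoff) / (max_score - cutoff), 1)  # 5.5 to 10

# A 40-point exam with the cut-off at 55% (22 points):
print(grade(22, 40))  # 5.5 -> just passes
print(grade(31, 40))  # 7.8
print(grade(16, 40))  # 4.3
```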

There are three main ways of establishing the cut-off point: there are absolute and relative methods, and forms that lie between these two: so-called ‘compromise’ methods.

Absolute method

The instructor decides beforehand how many points students must have in order to pass. The grade is then calculated based on this cut-off point (as in the sketch above). The advantage of this method is that students know beforehand how many points they must score to pass; the disadvantage is that it does not take into account the quality of the exam and the teaching.

Relative method

When using the relative method, also known as grading on a curve, you use the students’ scores as a starting point to determine the cut-off point. In this case, the highest- and lowest-scoring students form the ends of the scale, and the scores of the other students are calculated in relation to them. This means that the grade expresses the student’s position relative to other students, rather than their mastery of the course material. The cut-off point is determined after the exam has taken place. This method is not suitable for small groups of students (<50), and the proportion of passes and fails is fixed, irrespective of the actual exam results. This approach is not very common in Dutch teaching practice.
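
Purely as an illustration of the principle (a sketch based on the description above; real curving procedures often work with the mean and standard deviation of the score distribution rather than just the extremes):

```python
def curved_grades(scores):
    """Linear curve: the lowest score maps to a 1, the highest to a 10,
    so each grade expresses position relative to the other students.
    Assumes the scores are not all identical."""
    lo, hi = min(scores), max(scores)
    return [round(1 + 9 * (s - lo) / (hi - lo), 1) for s in scores]

print(curved_grades([12, 25, 31, 38]))  # [1.0, 5.5, 7.6, 10.0]
```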

'Compromise' methods

Both the relative and absolute methods of choosing a cut-off point have advantages and disadvantages. There are several methods that lie between these two extremes, so-called ‘compromise’ methods. Usually these operate on the basis of an absolute cut-off point that can be adjusted if circumstances require it (for instance, an exam that was too difficult, leading to too many fails). An interesting compromise method was put forward by Cohen-Schotanus. It assumes that the maximum score (in other words, all answers correct) is rarely achieved on an exam. The average score of (for instance) the best 5% of students (who arguably deserve a 10) is therefore taken as the effective maximum number of points, and the cut-off point is determined based on this. Ask your faculty’s assessment specialist for help in using this method.
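
A sketch of the core calculation (simplified: the published method also involves corrections such as for guessing, and the 5% and 55% parameters here are merely illustrative):

```python
import math

def cohen_schotanus_cutoff(scores, top_fraction=0.05, cutoff_fraction=0.55):
    """Take the mean score of the best-performing students as the effective
    maximum, then place the cut-off at a fixed fraction of that maximum."""
    n_top = max(1, math.ceil(top_fraction * len(scores)))
    best = sorted(scores, reverse=True)[:n_top]
    effective_max = sum(best) / n_top
    return cutoff_fraction * effective_max

# An 80-point exam on which even the best student scored only 70:
scores = [70.0, 64.0, 58.0, 52.0, 47.0, 41.0, 36.0, 30.0, 25.0, 19.0]
print(cohen_schotanus_cutoff(scores))  # 38.5, not 0.55 * 80 = 44.0
```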

4. Tips for grading and feedback

Tips for effective, efficient and reliable grading
  • Use a rubric or assessment form and discuss how you will use it beforehand with the rest of the grading team. 
  • Grade anonymously (no name; only student number). 
  • Shuffle the order of exam papers from time to time while grading.
  • Grade per question rather than by exam for more consistency. While grading, you will likely come across things that you will score more strictly or leniently as you go along. In these cases, it’s good to go back over the first answers you looked at and adjust your scoring if necessary. 
  • If you’re grading in a team, it can help to divide up the grading per question, so that students aren’t disadvantaged by grading differences. If each question is scored by only one person, the overall grades are more consistent.  
  • Don’t leave (extensive) feedback on a final assessment, as students won’t be able to use it to improve their performance anymore. Grades are also feedback! 
  • Determine how much time you will spend grading each question or exam beforehand and use a timer. 
  • Don’t put off grading and leave yourself enough time, but don’t grade for too long in one sitting. Grading is intensive, high-concentration work. Take regular breaks to ensure fairness and consistency (and sanity). 
Tips for providing efficient feedback
  • Decide how much time you can spend grading each assignment and use a timer.  
  • Use a rubric or checklist to give students general feedback. 
  • Give in-class feedback on common issues to the whole group and then have students apply the feedback to their own work individually or in pairs. 
  • Give oral rather than written feedback. You can audio-record your feedback using Speedgrader in Canvas. 
  • Make a sheet with error codes and use these in your feedback. You can also copy frequent comments to reuse.
  • Focus on a maximum of three main issues in a longer comment at the end of the assignment for each student. If you give lots of detailed feedback in the text itself, students will not take it all in. 
  • Read the whole assignment before you leave feedback: this way you can avoid getting lost in the details and focus on the quality of the assignment as a whole instead. 

To learn more about how to give effective and efficient feedback, you can follow our e-learning module.

5. Plagiarism and fraud  

Plagiarism and fraud are a serious breach of academic integrity. Students can be penalized for this in various ways, ranging from suspension from the course to – in extreme cases – suspension from the programme. It’s good to keep in mind that plagiarism and fraud are not always intentional. Students should be clearly informed of the rules concerning plagiarism and fraud for the course, assignment or exam in question, and why it’s so important to follow these. 

If you suspect a student of plagiarism, cheating or unauthorized use of GenAI, please report it to your faculty or programme’s Examinations Board (see the A-Z list on your faculty’s staff website). Don’t come up with your own solution or penalty: students are entitled to equal treatment. The Examinations Board is tasked with ensuring that cases of plagiarism and fraud are dealt with fairly and consistently. For more information, please see the Regulations Governing Fraud and Plagiarism for UvA students. 

6. Frequently Asked Questions

A student who did very well during the course performed poorly on the exam. Or: a student who did very little during the course performed very well on the exam. What could be behind this?

There could be several reasons for this. Some students are stronger in written assignments than in exams, or may suffer from exam stress. Other students are better at exams, or may have put in a lot of effort for the exam despite not having worked hard during the course. It’s important not to let this sort of thing influence your assessment, which is why you might consider grading anonymously. If you think a student has plagiarized or cheated, contact your faculty’s Examinations Board.

One of my students gave an answer that is not in the answer key, but it is actually a correct answer. What should I do?

In that case, the student did answer the question correctly, so you should mark it as correct. Update the answer key accordingly, and if there are other assessors, make sure you inform them. It might be the case that the answer given was not mentioned in the course material, but if the question doesn’t explicitly ask for information from the course material, you must still mark the answer as correct. If you’re uncertain, you can ask the student to explain their answer (How did you reach this answer? Can you explain what you wrote here?).

Some students make a lot of language errors (in Dutch or English). Can I factor this into the final grade?

You can only do this if language is an explicit part of the rubric or assessment form. In other words: you can’t deduct points for language errors unless language is part of the course learning outcomes and assessment criteria. In that case, you can mark the answer as (partially) incorrect. It’s important to discuss this kind of concern in your team to ensure grade consistency. 

 
