Summative Assessment Marking for Large Cohorts - Implementing a Grading Rubric

Summative assessment marking for large student cohorts with multiple marking teams can be a complex and time-consuming process. One of the main challenges is ensuring consistency, fairness, and equity across different markers. With large volumes of marking and tight deadlines, we often have to rely on multiple markers with varying levels of expertise, knowledge, skills, and experience. Coordinating and moderating these markers can be difficult, especially when they are hourly paid, retired, based in different locations, or do not regularly check their emails. Providing high-quality, individualized feedback to every student adds a further challenge: in large classes with varied marking teams, it is time-consuming and not always practical.

One approach to addressing these challenges is to design shorter, well-designed authentic assignments. These help students practice prioritizing and summarizing information concisely (Dawson et al., 2021). Module leaders can provide detailed, clear, and specific assignment guidance and instructions, along with assessment and marking criteria, to help students understand the purpose of each task and any special requirements. This is particularly useful when scripts are distributed among multiple marking teams, as it supports consistency in the marking process. Additionally, sharing examples of past student work across a range of grades, together with the feedback comments given, with all markers can help ensure consistency, equity, and clarity in the marking process. This helps markers understand what counts as high-quality work and the criteria that determine academic quality (Sadler, 2002).

As a module leader using multiple marking teams, it is important to provide clear guidelines and criteria for grading and to ensure that all members of the marking team are familiar with these criteria. This helps to ensure consistency in the grading process across different marking teams. One useful tool for creating a standard assessment process is a grading rubric, which outlines the criteria used to assess a student's work and the levels of performance that correspond to different grades or scores. Grading rubrics are effective and efficient tools that allow for objective and consistent assessment in team marking situations (Panadero & Jonsson, 2013). They can also clarify expectations for students and show them how to meet these expectations, making them more accountable for their performance. There are various approaches to creating a rubric for assessing large numbers of students, and the best approach will depend on the specific learning outcomes or competencies and the type of assignment. For example, a rubric for evaluating a report or essay might include criteria such as critical thinking, discussion, evidence, content, presentation, and use of sources, and assign different levels of performance (e.g., "excellent," "good," "fair," "poor") to each criterion.

Example of a grading rubric

This can be supplemented with a comment bank: a list of common, standard comments that markers can draw on when providing feedback to students. Markers can combine these with their own comments, using the comment box to show students that their work has been engaged with and valued. It can also be helpful to set a pro forma requiring markers to give at least one or two strengths and one or two suggestions for improvement for each student's work, using direct annotations in scripts or comment boxes. This helps to ensure that students receive personalized, constructive feedback on their work.

The module leader should provide marking guidance that includes instructions for marking, time frames for each step, and protocols for communication between the marking teams. It can also be helpful to have a checklist of detailed criteria that must be met before a mark can be assigned, to ensure that all marking teams follow the same guidelines and that the assessment process is conducted consistently and fairly. Additionally, it may be useful for the module leader to review and compare the grades assigned by different marking teams early in the marking process, to identify any inconsistencies or discrepancies. This provides valuable feedback to both the module leader and the marking teams and helps to improve the overall quality of the grading process. It is also important to maintain good communication between members of the marking team and to have a system in place to monitor the application of the marking criteria and the consistency of the marking process.

Students’ comments on this approach have been very positive:

"The rubric was extremely helpful in understanding exactly where I need to improve my work."

"I really appreciate the detailed feedback I received. It gave me a clear roadmap for improving my work."



Dawson, P., Carless, D., & Lee, P. P. W. (2021). Authentic feedback: Supporting learners to engage in disciplinary feedback practices. Assessment & Evaluation in Higher Education, 46(2), 286–296.

Sadler, D. R. (2002). Ah! ... So that's 'quality'. In P. Schwartz & G. Webb (Eds.), Assessment: Case studies, experience and practice from higher education (pp. 130–136). London: Kogan Page.

Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: A review. Educational Research Review, 9(1), 129–144.

Image credit: Alex Shutin on Unsplash
