Understanding Double Valuation: How Two-Evaluator Systems Prevent Marking Errors
Double valuation is one of the most effective quality control mechanisms in exam evaluation. Here's how it works, when to use it, and how digital platforms make it practical at scale.

What Is Double Valuation?
Double valuation (also called double marking or dual evaluation) is a quality control mechanism where the same answer script is independently evaluated by two different evaluators. Neither evaluator knows the other's marks. After both evaluations are complete, the scores are compared. If the difference exceeds a predefined threshold, the script is flagged for review — typically by a third evaluator or a moderator who makes the final determination.
The principle is simple: if two independent evaluators arrive at similar scores, the evaluation is likely fair and accurate. If their scores diverge significantly, something needs attention — one evaluator may be too strict, too lenient, or may have missed a section of the answer.
Why Double Valuation Matters
Reducing Evaluator Bias
Every evaluator brings unconscious biases to the marking process. Some are naturally generous markers, others are strict. Some may fatigue after marking hundreds of scripts and become less attentive. Double valuation doesn't eliminate these biases, but it catches cases where bias leads to significantly unfair outcomes.
Catching Marking Errors
In high-volume evaluation — where a single evaluator may mark 50-100 answer books per day — errors happen. An evaluator might skip a page, misread a score entry, or accidentally award marks for the wrong question. Double valuation catches these mechanical errors that a single evaluation pass would miss.
Protecting Students
For students, exam marks directly affect academic progression, placements, and career opportunities. A single evaluator's bad day shouldn't determine a student's future. Double valuation provides a safety net that ensures no student is unfairly penalised by a single evaluator's error or bias.
Regulatory and Legal Protection
When marks are challenged — through RTI requests, revaluation applications, or legal proceedings — double valuation demonstrates that the institution took reasonable steps to ensure fair evaluation. It's harder to argue systematic bias when two independent evaluators arrived at the same conclusion.
How Double Valuation Works in Practice
The Basic Workflow
The same script is assigned to two evaluators, who mark it independently without seeing each other's scores. Once both evaluations are submitted, the two totals are compared:
- If the difference is within the acceptable threshold → the average (or higher) score is accepted
- If the difference exceeds the threshold → the script is flagged for adjudication
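In code, the comparison step might look like the minimal sketch below. The function name, the default threshold, and the average-vs-higher choice are all illustrative policy assumptions, not any specific platform's API:

```python
def resolve_double_valuation(score_a, score_b, total_marks,
                             threshold_pct=10, use_higher=False):
    """Compare two independent evaluations of the same script.

    Returns (final_score, flagged). If the difference exceeds the
    threshold (a percentage of total marks), the script is flagged
    for adjudication and no final score is assigned yet.
    """
    diff = abs(score_a - score_b)
    if diff > total_marks * threshold_pct / 100:
        return None, True  # route to third evaluator / moderator
    final = max(score_a, score_b) if use_higher else (score_a + score_b) / 2
    return final, False

# Example: 100-mark descriptive paper, 10% threshold
print(resolve_double_valuation(62, 68, 100))   # within threshold → (65.0, False)
print(resolve_double_valuation(50, 72, 100))   # 22-mark gap → (None, True)
```

Whether the accepted score is the average or the higher of the two is itself a policy decision; some boards prefer the higher score to give students the benefit of the doubt.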
Setting Discrepancy Thresholds
The discrepancy threshold is a policy decision that varies by institution and subject:
| Subject Type | Common Threshold | Rationale |
|---|---|---|
| Objective/short answer | 5-10% of total marks | Less room for interpretation |
| Descriptive/essay | 10-15% of total marks | More subjective marking |
| Practical/project | 15-20% of total marks | Highly subjective assessment |
Some institutions use absolute thresholds (e.g., "flag if difference exceeds 10 marks") while others use percentage-based thresholds. The threshold should balance catching genuine errors against generating too many false positives that overwhelm moderators.
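A policy supporting both percentage-based and absolute thresholds can be expressed as a small lookup. This is a sketch only; the subject types and values are hypothetical, loosely based on the ranges in the table above:

```python
# Hypothetical threshold policy: each subject type uses either a
# percentage-of-total-marks or an absolute mark-difference rule.
POLICIES = {
    "objective":   {"mode": "percent", "value": 7.5},
    "descriptive": {"mode": "percent", "value": 12.5},
    "practical":   {"mode": "absolute", "value": 15},
}

def exceeds_threshold(score_a, score_b, total_marks, subject_type):
    """Return True if the two evaluations diverge beyond the
    configured threshold for this subject type."""
    policy = POLICIES[subject_type]
    diff = abs(score_a - score_b)
    if policy["mode"] == "percent":
        return diff > total_marks * policy["value"] / 100
    return diff > policy["value"]

print(exceeds_threshold(60, 70, 100, "objective"))    # 10 > 7.5  → True
print(exceeds_threshold(60, 70, 100, "descriptive"))  # 10 ≤ 12.5 → False
```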
Adjudication Models
When a discrepancy is detected, institutions typically use one of these resolution approaches:
Third evaluator model: A senior evaluator independently marks the flagged script. The final mark is typically the average of the two closest scores among the three evaluations.
Moderator adjudication: A moderator reviews both evaluations side-by-side, examining the question-wise marks to identify where the evaluators diverged. The moderator then makes the final determination.
Average with cap: The average of both evaluations is accepted, but only if neither evaluator's total is an outlier beyond a secondary threshold.
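The third evaluator model's "average of the two closest scores" rule is straightforward to make precise. A minimal sketch (the function name is illustrative):

```python
from itertools import combinations

def third_evaluator_resolution(scores):
    """Given three independent scores (the two original evaluations
    plus the senior evaluator's), return the average of the two
    closest scores, discarding the outlier."""
    a, b = min(combinations(scores, 2),
               key=lambda pair: abs(pair[0] - pair[1]))
    return (a + b) / 2

# First evaluator 50, second 72, third (senior) 68:
# 72 and 68 are closest, so the 50 is discarded
print(third_evaluator_resolution([50, 72, 68]))  # → 70.0
```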
Double Valuation in Paper vs Digital Systems
Paper-Based Double Valuation
In paper systems, double valuation is logistically complex: each script must be physically routed to two evaluators in sequence, both sets of marks must be recorded and compared manually, and any discrepant script must be retrieved and dispatched again for adjudication, all while tracking the custody of every answer book.
Due to this complexity, many paper-based exam boards use double valuation only for borderline cases or random samples — not universally.
Digital Double Valuation
Digital evaluation platforms make double valuation practical at any scale: scanned scripts can be assigned to both evaluators simultaneously rather than sequentially, scores are compared automatically the moment the second evaluation is submitted, and flagged scripts are routed to a third evaluator or moderator without any physical movement.
The marginal cost of double valuation in a digital system is essentially just the evaluator's time — there's no additional logistics, no physical handling, and no administrative overhead for comparison and routing.
When to Use Double Valuation
Double valuation is most valuable when:
- The stakes are high: final university examinations, entrance tests, or any exam that affects progression, placements, or certification
- Marking is subjective: descriptive, essay, or practical answers where evaluator judgement varies
- The evaluator pool is large or heterogeneous, so individual strictness and leniency vary widely
It may be unnecessary for:
- Objective or machine-scored formats such as MCQs, where there is no evaluator judgement to diverge
- Low-stakes internal assessments, where the cost of a second evaluation outweighs the risk it mitigates
The Quality Control Ecosystem
Double valuation is most effective as part of a broader quality control system, not in isolation:
Evaluator anonymity — Scripts should be stripped of student identity before distribution. If evaluators know whose paper they're marking, double valuation can't catch identity-based bias.
Random assignment — Scripts should be randomly distributed across the evaluator pool. If both evaluators are from the same department or institution, they may share the same biases.
Moderation workflows — Beyond double valuation, moderators should review a broader sample of evaluations to identify evaluators who are systematically over-marking or under-marking.
Score analytics — Digital platforms can analyse scoring patterns across evaluators to detect anomalies: an evaluator who consistently marks 20% higher than the average, or one who never uses the full mark range, or one whose scoring speed suggests inadequate attention.
Audit trails — Every evaluation action should be logged. If a discrepancy is later investigated, the institution needs to show exactly what happened, when, and by whom.
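As an illustration of the score-analytics idea above, here is a hedged sketch that flags evaluators whose average awarded score deviates from the pool by more than a set percentage. The function name, data shape, and 20% default are hypothetical:

```python
from statistics import mean

def flag_outlier_evaluators(mean_score_by_evaluator, deviation_pct=20):
    """Flag evaluators whose average awarded score deviates from the
    pool average by more than deviation_pct percent — e.g. a marker
    who consistently awards well above everyone else."""
    pool_average = mean(mean_score_by_evaluator.values())
    return [evaluator
            for evaluator, avg in mean_score_by_evaluator.items()
            if abs(avg - pool_average) / pool_average * 100 > deviation_pct]

# E4's average of 80 is ~23% above the pool average of 65
print(flag_outlier_evaluators({"E1": 60, "E2": 58, "E3": 62, "E4": 80}))
```

A production system would use more robust statistics (per-question distributions, z-scores across a large pool), but the principle is the same: compare each evaluator's scoring pattern against the pool and surface the outliers for moderation.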
Implementing Double Valuation
For institutions considering double valuation, here are the practical steps:
1. Define the policy: which exams and subject types get double valuation, and whether thresholds are percentage-based or absolute
2. Set discrepancy thresholds per subject type, starting from the ranges above and tuning them against the volume of flagged scripts
3. Choose an adjudication model (third evaluator, moderator review, or average with cap) and identify who will adjudicate
4. Put the supporting controls in place: evaluator anonymity, random assignment, and audit logging
5. On a digital platform, enable double valuation as a workflow configuration and monitor score analytics to refine thresholds over time
Conclusion
Double valuation is one of the most effective quality control mechanisms available to exam boards. It catches marking errors, reduces evaluator bias, protects students from unfair outcomes, and provides institutional protection against challenges. Digital evaluation platforms have made double valuation practical at any scale — what was once logistically prohibitive with paper is now a configuration option that can be enabled for any subject or exam.
For institutions evaluating lakhs of answer scripts, the question isn't whether double valuation is worth the additional evaluator time — it's whether you can afford the reputational and legal risk of not using it.
Ready to digitize your evaluation process?
See how MAPLES OSM can transform exam evaluation at your institution.