Evaluation Moderation & Quality Assurance
Moderation in digital evaluation is the process of verifying evaluator consistency and accuracy through systematic oversight. MAPLES OSM implements multi-layer moderation — double valuation, moderator review, and chief examiner oversight — with automatic discrepancy detection and evaluator performance monitoring to ensure every script is marked fairly.
Multi-Layer Moderation
Each script passes through up to three layers of verification. The system enforces these layers automatically — evaluators cannot skip moderation steps, and moderators cannot approve scripts without reviewing each question.
Double Valuation
The same answer script is assigned to two independent evaluators. Neither evaluator sees the other's marks. When both evaluations are complete, the system automatically compares scores and flags discrepancies beyond a configurable threshold.
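The comparison step can be sketched in a few lines. This is an illustrative sketch, not the MAPLES OSM implementation; the function name and threshold parameter are assumptions.

```python
# Hypothetical sketch of double-valuation discrepancy detection.
# check_discrepancy and threshold are illustrative names, not the product API.

def check_discrepancy(score_a: float, score_b: float, threshold: float) -> bool:
    """Return True when two independent evaluations differ by more than
    the configured threshold and the script should be flagged."""
    return abs(score_a - score_b) > threshold
```

With a threshold of 5 marks, for example, scores of 42 and 49 would be flagged while 42 and 45 would pass.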
Moderator Verification
Moderators review evaluator marks question by question. They can toggle visibility of the evaluator's annotations, adjust scores with comments, and approve or send scripts back for re-evaluation.
Chief Examiner Oversight
Chief examiners have subject-level supervision — they monitor all evaluators and moderators within a subject, review flagged scripts, and make final decisions on disputed marks.
Evaluator Warning System
The system continuously monitors marking patterns and raises warnings when evaluator behaviour deviates from expected norms. Warnings are visible to moderators and chief examiners for immediate action.
Over-Valuation Warning
Triggered when an evaluator consistently awards marks significantly above the subject average. Flags potential leniency bias.
Under-Valuation Warning
Triggered when marks fall consistently below the subject average. Identifies evaluators who may be marking too strictly.
Adherence Warning
Flags evaluators who deviate from the answer key or marking scheme. Ensures the rubric is followed consistently.
Knowledge Warning
Raised when marking patterns suggest inadequate subject knowledge — such as awarding full marks for partially correct answers.
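The over- and under-valuation rules above can be expressed as a single comparison against the subject average. This is a minimal sketch under assumptions: the tolerance value and function names are illustrative, and the real warning logic may use more sophisticated statistics.

```python
# Illustrative sketch of over-/under-valuation warnings, assuming a warning
# fires when an evaluator's mean mark deviates from the subject average by
# more than a configurable tolerance.

from statistics import mean

def valuation_warning(evaluator_marks, subject_average, tolerance=5.0):
    """Return 'over', 'under', or None depending on how the evaluator's
    mean mark compares with the subject-wide average."""
    deviation = mean(evaluator_marks) - subject_average
    if deviation > tolerance:
        return "over"    # possible leniency bias
    if deviation < -tolerance:
        return "under"   # possibly marking too strictly
    return None
```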
Evaluator Performance Monitoring
- Average evaluation speed per script and per question
- Score distribution compared to subject average
- Variance analysis — how much an evaluator deviates from other evaluators on the same scripts
- Completion rate — scripts pending vs. completed
- Warning count by type — over-valuation, under-valuation, adherence, knowledge
- Discrepancy rate on double-valued scripts
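Two of these metrics — variance against other evaluators and the discrepancy rate on double-valued scripts — can be sketched as follows. The data shape is an assumption: `scores` maps each script to the marks its evaluators awarded.

```python
# Sketch of variance analysis and discrepancy-rate computation, under an
# assumed data shape: scores maps script_id -> {evaluator_id: mark}.

from statistics import mean

def discrepancy_rate(scores: dict, threshold: float) -> float:
    """Fraction of double-valued scripts whose marks differ by more
    than the threshold."""
    flagged = sum(
        1 for marks in scores.values()
        if max(marks.values()) - min(marks.values()) > threshold
    )
    return flagged / len(scores)

def mean_deviation(scores: dict, evaluator: str) -> float:
    """Average signed deviation of one evaluator's marks from the mean
    of the other evaluators on the same scripts."""
    deltas = []
    for marks in scores.values():
        if evaluator in marks and len(marks) > 1:
            others = [m for e, m in marks.items() if e != evaluator]
            deltas.append(marks[evaluator] - mean(others))
    return mean(deltas)
```

A positive `mean_deviation` suggests leniency relative to peers; a negative one suggests strictness.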
Workload Management
Workload Reallocation
Reassign scripts between evaluators with mandatory reason tracking. Every reallocation is logged in the audit trail with timestamp and reason.
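The mandatory-reason rule can be enforced at the point of reallocation. This is a hypothetical sketch; the audit-record fields are illustrative, not the actual MAPLES OSM schema.

```python
# Hypothetical sketch of workload reallocation with mandatory reason
# tracking; record fields are assumptions, not the product's audit schema.

from datetime import datetime, timezone

audit_trail = []

def reallocate(script_id, from_evaluator, to_evaluator, reason):
    """Move a script between evaluators; refuse to proceed without a reason."""
    if not reason.strip():
        raise ValueError("A reason is mandatory for every reallocation")
    audit_trail.append({
        "script": script_id,
        "from": from_evaluator,
        "to": to_evaluator,
        "reason": reason,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
```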
Duplicate Detection
Automatically detects when the same script has been assigned to one evaluator twice, or has been evaluated more than the configured number of times.
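Both duplicate conditions reduce to counting assignments. A minimal sketch, assuming assignments arrive as (script, evaluator) pairs and a configured maximum of two evaluations per script:

```python
# Sketch of the two duplicate checks, under assumed inputs:
# assignments is a list of (script_id, evaluator_id) pairs.

from collections import Counter

def find_duplicates(assignments, max_evaluations=2):
    """Return scripts assigned to the same evaluator twice, and scripts
    evaluated more than the configured number of times."""
    pair_counts = Counter(assignments)
    same_evaluator = {s for (s, e), n in pair_counts.items() if n > 1}
    script_counts = Counter(s for s, _ in assignments)
    over_evaluated = {s for s, n in script_counts.items() if n > max_evaluations}
    return same_evaluator, over_evaluated
```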
Third Evaluator Assignment
When double-valuation scores exceed the discrepancy threshold, the system automatically assigns the script to a third independent evaluator.
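Independence is the key constraint when escalating: the third evaluator must not have already marked the script. A sketch under that assumption, with illustrative names:

```python
# Illustrative sketch of third-evaluator assignment: pick an available
# evaluator who has not already marked this script. Names are assumptions.

def assign_third_evaluator(prior_evaluators, available):
    """Return the first available evaluator independent of the first two."""
    for evaluator in available:
        if evaluator not in prior_evaluators:
            return evaluator
    raise RuntimeError("No independent evaluator available for this script")
```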
Real-Time Dashboards
Live dashboards showing evaluation progress, moderator queue depth, pending discrepancies, and subject-wise completion rates.
See moderation in action
Schedule a demo to see how double valuation, warning systems, and performance monitoring work together to guarantee evaluation quality.