Digital Evaluation Glossary

30+ key terms used in digital evaluation, on-screen marking, and exam digitization — defined by the team behind MAPLES OSM.

#

9-Point Validation
A suite of automated checks executed before result publication. These include detection of unevaluated questions, score limit violations, missing marks, and other anomalies — ensuring error-free results without manual totalling.
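The idea behind such a validation suite can be sketched in a few lines. This is an illustrative sketch only, not the actual MAPLES OSM implementation: each check inspects one script's recorded marks and returns anomaly descriptions, and publication is blocked while any check reports a problem. The check names and data shapes are assumptions.

```python
def check_unevaluated(marks, expected_questions):
    """Flag questions with no mark recorded."""
    return [f"Q{q} unevaluated" for q in expected_questions if q not in marks]

def check_score_limits(marks, max_marks):
    """Flag marks that exceed the maximum for their question."""
    return [f"Q{q} mark {m} exceeds max {max_marks[q]}"
            for q, m in marks.items() if m > max_marks.get(q, 0)]

def validate_script(marks, expected_questions, max_marks):
    """Run all checks; an empty result means the script is publishable."""
    anomalies = []
    anomalies += check_unevaluated(marks, expected_questions)
    anomalies += check_score_limits(marks, max_marks)
    return anomalies
```

For example, `validate_script({1: 5, 2: 12}, [1, 2, 3], {1: 10, 2: 10, 3: 10})` reports both an unevaluated Q3 and a score-limit violation on Q2.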

A

Answer Book / Answer Script
The physical booklet in which students write their exam responses. In a digital evaluation workflow, answer books are scanned and converted into page images for on-screen marking.
Audit Trail
A complete, tamper-proof log of every action taken within the evaluation system — including logins, mark entries, modifications, moderations, and result processing steps. Essential for compliance and dispute resolution.
Auto-Capture
A motion-detection scanning feature that automatically captures page images as the scanning operator flips through an answer booklet, eliminating the need to press a button for each page.

B

Barcode Validation
The process of reading Code128 barcodes printed on answer booklet pages to verify correct page order and ensure no pages are missing or duplicated during scanning.
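The sequence check at the heart of barcode validation reduces to spotting gaps and duplicates among the decoded page numbers. A minimal sketch, assuming each barcode decodes to a page number (a real barcode would also encode the booklet ID):

```python
from collections import Counter

def validate_page_sequence(decoded_pages, total_pages):
    """Return (missing, duplicated) page numbers for one booklet,
    given the page numbers decoded from each scanned page's barcode."""
    counts = Counter(decoded_pages)
    missing = [p for p in range(1, total_pages + 1) if p not in counts]
    duplicated = sorted(p for p, c in counts.items() if c > 1)
    return missing, duplicated
```

So a 4-page booklet scanned as pages 1, 2, 2, 4 would report page 3 missing and page 2 duplicated.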
Batch
A group of answer booklets — typically 50 — tracked as a single unit through the evaluation pipeline. Batches simplify logistics from physical receipt through scanning, evaluation, and result processing.

C

Chief Examiner
A senior academic appointed to oversee evaluation quality for a specific subject. The chief examiner reviews evaluator performance, resolves score discrepancies, and ensures consistent marking standards.
Concurrent Evaluations
The number of evaluators actively marking answer scripts at the same time on the platform. MAPLES OSM supports 4,000+ concurrent evaluations at peak load.

D

Double Valuation
A quality assurance method where the same answer script is independently assigned to two different evaluators. If their scores diverge beyond a threshold, the script is flagged for review or sent to a third evaluator.
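The escalation logic can be sketched as follows. The 15% divergence threshold and the averaging rule are illustrative assumptions; actual thresholds and resolution policies vary by institution.

```python
def resolve_double_valuation(score_a, score_b, max_marks, threshold_pct=15):
    """Compare two independent scores for the same script.
    If they diverge by more than threshold_pct of max marks,
    escalate to a third evaluator; otherwise accept the average."""
    divergence = abs(score_a - score_b)
    if divergence > max_marks * threshold_pct / 100:
        return ("third_evaluation", None)
    return ("accepted", (score_a + score_b) / 2)
```

With these assumptions, scores of 60 and 80 out of 100 diverge by 20 marks and trigger a third evaluation, while 60 and 65 are accepted at 62.5.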
Dummy Numbering
A legacy manual process used to anonymize answer scripts before evaluation. Staff would assign temporary numbers to hide student identity — a labor-intensive step eliminated by automated script randomization in digital systems.

E

Evaluator Anonymity
A system-enforced guarantee that evaluators cannot identify whose paper they are marking. Student details are stripped before distribution, preventing bias and ensuring fair evaluation.

F

Face Recognition Proctoring
Continuous identity verification of evaluators during marking sessions using facial recognition. Ensures that the authenticated evaluator is the person actually performing the evaluation.

I

Inwarding
The process of receiving, registering, and logging physical answer booklets when they arrive at an evaluation or scanning facility. Each booklet is tagged (typically via QR code) to begin its digital chain of custody.

M

Moderation
The review of evaluator work by designated moderators to ensure marking quality and consistency. Moderators can flag over-valuation or under-valuation and request re-marking when standards are not met.

O

On-Screen Marking (OSM)
A digital evaluation method where examiners view scanned answer sheet images on a computer screen and assign marks using annotation tools — replacing traditional pen-on-paper marking at physical valuation camps.
OTP Authentication
One-time password verification sent via SMS and email as part of multi-factor authentication. Evaluators must enter the OTP during login to confirm their identity before accessing the marking interface.
Over-valuation
A quality warning issued when an evaluator awards significantly more marks than expected based on moderation benchmarks or double-valuation comparison. Triggers a review by the moderator or chief examiner.

P

Page Splitting
The automatic separation of left and right pages from a scanned open booklet image. Spine detection algorithms identify the center binding, and the image is split into two individual page images for evaluation.
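A toy version of spine detection plus page splitting: the spine shadow of an open booklet is usually the darkest vertical band near the image centre, so a simple heuristic finds the darkest column in a central search band and cuts there. Production systems use more robust line and edge detection; this sketch just illustrates the principle, with the image as a 2-D list of 0-255 grayscale values.

```python
def find_spine_column(gray, search_band=0.2):
    """Return the x-coordinate of the darkest column within a
    central band covering search_band of the image width."""
    width = len(gray[0])
    lo = int(width * (0.5 - search_band / 2))
    hi = int(width * (0.5 + search_band / 2))
    col_sums = {x: sum(row[x] for row in gray) for x in range(lo, hi)}
    return min(col_sums, key=col_sums.get)

def split_pages(gray):
    """Split an open-booklet scan into left and right page images."""
    spine = find_spine_column(gray)
    left = [row[:spine] for row in gray]
    right = [row[spine:] for row in gray]
    return left, right
```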

Q

QR Code Tracking
A unique QR identifier affixed to each answer booklet that enables chain-of-custody tracking from physical receipt through scanning, evaluation, and result publication.
Quality Control (QC)
The inspection of scanned answer sheet images for defects such as blur, missing pages, cut-off content, or incorrect page ordering — performed before scripts are released for evaluation.
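Blur detection, one of the QC checks above, is commonly automated with a sharpness metric: blurred scans have weak pixel-to-pixel gradients. A minimal sketch using the mean absolute horizontal gradient (real pipelines typically use variance-of-Laplacian or similar, with thresholds tuned per scanner):

```python
def sharpness_score(gray):
    """Mean absolute horizontal pixel difference over a grayscale
    image (2-D list of 0-255 values); blurred scans score low."""
    diffs = [abs(row[x + 1] - row[x])
             for row in gray for x in range(len(row) - 1)]
    return sum(diffs) / len(diffs)
```

A crisp high-contrast scan scores far higher than a smoothly varying, blurry one, so pages below a tuned threshold can be flagged for rescanning.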
Question Grid
An interface element showing all question numbers for a given answer script, with color-coded status indicators (unevaluated, in-progress, completed, flagged) to help evaluators track their marking progress.
Question Paper Structure
The hierarchical definition of a question paper including questions, sub-questions, maximum marks per part, and marking scheme rules. This structure drives the evaluation interface and automated score validation.
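Such a hierarchy is naturally recursive: a question either carries its own maximum marks or derives them from its sub-parts. A sketch of the data structure (field names are assumptions, not the MAPLES OSM schema):

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    number: str                 # e.g. "1" or "1a"
    max_marks: float = 0.0
    sub_questions: list = field(default_factory=list)

    def total_max(self):
        """Maximum marks for this question including all sub-parts."""
        if self.sub_questions:
            return sum(q.total_max() for q in self.sub_questions)
        return self.max_marks
```

A question "1" with parts "1a" (5 marks) and "1b" (5 marks) then reports a total of 10, which is what automated score validation checks awarded marks against.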

R

Re-evaluation
The process of having a different evaluator re-mark an answer script, typically after a student dispute or quality concern. In digital systems, re-evaluation can begin immediately, since scripts are already digitized and no physical retrieval is needed.
Result Processing
The automated validation, aggregation, and compilation of marks from individual evaluations into publishable results. Includes multi-point validation checks to eliminate totalling errors and catch anomalies.
Role-Based Access Control (RBAC)
A security model that restricts system access based on the user's assigned role. MAPLES OSM defines 11 distinct roles — from scanning operator and evaluator to chief examiner and controller of examinations — each with specific permissions.
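At its core, RBAC is a lookup from role to permitted actions. The roles and permission names below are examples for illustration, not the actual MAPLES OSM role set:

```python
# Example role-to-permission map; a real system would load this
# from configuration or a database.
ROLE_PERMISSIONS = {
    "scanning_operator": {"scan", "qc_view"},
    "evaluator": {"mark", "annotate"},
    "chief_examiner": {"mark", "annotate", "moderate", "reassign"},
}

def has_permission(role, permission):
    """True if the given role is allowed the given action."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

So an evaluator can mark but cannot moderate, while a chief examiner can do both.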

S

Scanning Station
An overhead camera setup used to digitize physical answer booklets. The station captures high-resolution page images with features like auto-capture, barcode validation, and spine detection for page splitting.
Script Randomization
The automatic anonymization and random distribution of answer scripts to evaluators. Replaces manual dummy numbering by stripping student identity and assigning scripts algorithmically to prevent bias.
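The core idea combines two steps: replace each real script ID with an opaque code, then distribute the shuffled codes across evaluators. A sketch of that idea (the round-robin dealing and the use of UUIDs as anonymous codes are assumptions for illustration):

```python
import random
import uuid

def randomize_and_assign(script_ids, evaluator_ids, seed=None):
    """Anonymize scripts with opaque codes, shuffle, and deal them
    round-robin to evaluators. Returns the assignment plus the
    private code-to-script mapping, held only by the admin system."""
    rng = random.Random(seed)
    codes = {sid: uuid.uuid4().hex for sid in script_ids}
    shuffled = list(codes.values())
    rng.shuffle(shuffled)
    assignment = {ev: shuffled[i::len(evaluator_ids)]
                  for i, ev in enumerate(evaluator_ids)}
    return assignment, codes
```

Evaluators only ever see the opaque codes, so no student identity reaches the marking interface.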
Spine Detection
An image-processing algorithm that identifies the center binding (spine) of an open answer booklet. Used in conjunction with page splitting to accurately separate left and right pages from a single scan.

U

Under-valuation
A quality warning issued when an evaluator awards significantly fewer marks than expected based on moderation benchmarks or double-valuation comparison. Triggers a review to ensure fair marking.

V

Valuation Centre
A physical location where answer booklets are evaluated. In traditional systems, evaluators travel to these centres for paper-based marking. In digital systems, valuation centres serve primarily as scanning hubs.

See MAPLES OSM in action

From scanning to result publication — schedule a demo to see how these concepts come together in a single platform.