CBSE OSM 2026: Why 1.63 Lakh Students Are in Compartment and What It Reveals
CBSE's first full-scale on-screen marking cycle placed 1.63 lakh students in compartment. Here is what the data reveals about calibrating digital evaluation in its debut year.

The Number That Surprised Everyone
When CBSE declared Class 12 results on May 13, 2026, the headline was historic: 98.66 lakh answer books evaluated entirely through on-screen marking for the first time. No calculation errors. No transposition mistakes. No marking-sheet mismatches. The system worked.
But a second number landed quietly and caused immediate controversy: 1,63,000 students placed in the compartment category. Parents took to social media. A viral post showed a student who scored a 99.2 percentile in JEE Main (B.Arch) being placed in compartment in CBSE Class 12 Mathematics, the very subject she excelled at. Calls multiplied for free manual revaluation, and petitions circulated demanding that CBSE reconsider how marks are calibrated under the new digital system.
This is the central tension in India's largest digital evaluation experiment in 2026: the transition from manual to on-screen checking creates short-term calibration challenges that deserve honest examination rather than defensive dismissal.
What Changed Under OSM
Under the previous paper-based system, evaluators held answer scripts physically. Marks were totalled by hand, then checked by a second person. Human error crept in at several stages: wrong totals, missed questions, transposition mistakes during compilation. Each of these errors was invisible to students until they received their marksheet.
On-screen marking eliminates all mechanical errors. Each question's marks are captured digitally, totals auto-calculate, and no compilation error is possible. In that specific, measurable sense, OSM is provably more accurate than the system it replaced.
What OSM does not change is evaluator judgment — how many marks a particular answer deserves. That decision is still made by a human being, now on a screen rather than a physical script.
The Calibration Challenge in Year One
Every large-scale evaluation system goes through an adjustment period when it shifts to a new medium. Evaluators trained on paper develop intuitive expectations about how answers look and what credit they deserve. The same response, viewed on a 24-inch monitor rather than a physical booklet, can read differently. Annotations, margin notes, and cross-referencing that feel natural on paper require deliberate relearning in a digital environment.
In CBSE's first full-scale OSM cycle, several factors appear to have converged: evaluators marking on a digital medium for the first time, decades of implicit marking conventions not yet encoded into evaluator training, and the absence of a statistical moderation layer to catch outlier marking patterns.
CBSE's press release noted an 87.92% overall pass rate, broadly comparable to previous years. But the compartment figure appears to exceed the historical average, suggesting the transition pushed enough marginal cases the wrong way to be statistically visible.
The Revaluation Process in 2026
CBSE's revaluation system operates in three tiers: verification of marks, obtaining a copy of the evaluated answer book, and re-evaluation of specific questions based on what that copy reveals.
The photocopy tier is a meaningful improvement over the manual era. Digitally scanned scripts are clear, complete, and delivered faster than photocopies of handwritten evaluation sheets ever were. A student who wants to contest their Physics marks can now see exactly how each question was marked before deciding whether to proceed to re-evaluation — a transparency that simply did not exist before OSM.
What students and parents find difficult to accept is the fee structure attached to what feels, in some cases, like a systemic calibration issue rather than an individual evaluator error. When a student with a near-perfect JEE Main percentile gets a compartment in Mathematics, the question is not whether the OSM total is wrong — it is almost certainly correct — but whether the marking standard was calibrated appropriately for that paper's difficulty level.
Specific Concerns: Physics and Mathematics
Social media complaints in the days following May 13 cluster around two subjects: Physics and Mathematics. Students report that answers they consider complete received partial or zero credit under OSM, while evaluators in previous years might have awarded more marks for the same response.
The likely explanation is not that OSM is inherently harsher, but that the calibration training for evaluators in the first year was insufficient to replicate exactly the judgment standards that developed organically under manual evaluation over decades. Marking conventions that were implicit and passed on through evaluator experience need to be made explicit and encoded into evaluator training when the medium changes.
This is a solvable problem. It is also an expected one.
What OSM Needs in Year Two
The CBSE data from 2026 offers a clear roadmap for the second cycle:
Statistical monitoring before results: Mark distribution by evaluator and by subject should be reviewed before results are finalised. Evaluators whose distributions are outliers relative to the cohort average should have their scripts reviewed by senior examiners. This is standard practice in mature on-screen marking systems globally and can be implemented without changing OSM's fundamental architecture.
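The outlier review described above can be sketched in a few lines. This is a minimal illustration, not CBSE's actual process: the evaluator IDs, mark data, and z-score threshold are all invented for the example.

```python
# Hypothetical sketch: flag evaluators whose average awarded mark deviates
# sharply from the cohort mean, so their scripts can be routed to senior
# examiners for review. Data and threshold are illustrative.
from statistics import mean, stdev

def flag_outlier_evaluators(marks_by_evaluator, z_threshold=2.0):
    """Return evaluator IDs whose mean awarded mark lies more than
    z_threshold standard deviations from the cohort mean."""
    evaluator_means = {e: mean(ms) for e, ms in marks_by_evaluator.items()}
    cohort_mean = mean(evaluator_means.values())
    cohort_sd = stdev(evaluator_means.values())
    if cohort_sd == 0:
        return []
    return [e for e, m in evaluator_means.items()
            if abs(m - cohort_mean) / cohort_sd > z_threshold]

# Example cohort: evaluator "E5" marks far more harshly than peers.
marks = {
    "E1": [62, 70, 58, 66],
    "E2": [64, 68, 61, 65],
    "E3": [63, 69, 60, 67],
    "E4": [61, 66, 59, 68],
    "E5": [31, 35, 26, 28],
}
print(flag_outlier_evaluators(marks, z_threshold=1.5))
```

In practice the comparison would be done per subject and per question, since legitimate differences in script allocation can also shift an evaluator's mean.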
Difficulty-adjusted moderation: Paper difficulty varies year to year. A moderation layer that accounts for the statistical distribution of a particular year's paper can prevent marginal students from receiving compartment when the paper was objectively harder than the calibration baseline.
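One common form such a moderation layer can take is linear equating: rescale a harder paper's raw marks so their mean and spread match a historical calibration baseline. The sketch below assumes that approach; the baseline figures and raw marks are invented, and a real board would use a more carefully validated equating model.

```python
# Hypothetical sketch of linear moderation: map raw marks from this
# year's (harder) paper onto a baseline distribution, clamped to 0-100.
# Baseline mean/sd and the raw marks are illustrative values.
from statistics import mean, stdev

def moderate(raw_marks, baseline_mean, baseline_sd):
    """Linearly rescale raw marks so the cohort mean and spread
    match the calibration baseline."""
    raw_mean = mean(raw_marks)
    raw_sd = stdev(raw_marks)
    moderated = []
    for m in raw_marks:
        z = (m - raw_mean) / raw_sd           # position within this year's paper
        adjusted = baseline_mean + z * baseline_sd
        moderated.append(round(min(100, max(0, adjusted))))
    return moderated

# A harder paper: raw cohort mean of 55 vs a historical baseline mean of 62.
raw = [28, 41, 55, 55, 62, 69, 75]
print(moderate(raw, baseline_mean=62, baseline_sd=12))
```

Note that moderation preserves rank order: a marginal student near the pass boundary is lifted with the cohort, which is exactly the case the article argues went wrong for compartment-category students.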
Evaluator simulation before the live cycle: First-year OSM evaluators need hands-on practice marking previous years' actual papers on the digital platform before the current year's evaluation begins. Technical orientation is not enough.
Published distribution data: CBSE should consider releasing anonymised mark distribution data by question and subject after each cycle. This gives educators, institutions, and students a factual basis for understanding where calibration compression occurred, rather than speculating on social media.
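The kind of anonymised summary suggested above is straightforward to produce from per-question mark records. The record layout and field names below are assumptions for illustration, not CBSE's actual data schema.

```python
# Hypothetical sketch: aggregate anonymised per-question mark records into
# a publishable distribution summary. Records and field layout are invented.
from collections import defaultdict
from statistics import mean

records = [
    # (subject, question_no, marks_awarded, max_marks)
    ("Mathematics", "Q1", 4, 5),
    ("Mathematics", "Q1", 2, 5),
    ("Mathematics", "Q1", 5, 5),
    ("Physics", "Q3", 1, 3),
    ("Physics", "Q3", 0, 3),
]

def distribution_summary(rows):
    """Per (subject, question): attempt count, mean mark, full-credit rate."""
    grouped = defaultdict(list)
    for subject, qno, awarded, maximum in rows:
        grouped[(subject, qno)].append((awarded, maximum))
    summary = {}
    for key, vals in grouped.items():
        awarded = [a for a, _ in vals]
        full = sum(1 for a, m in vals if a == m)
        summary[key] = {
            "attempts": len(vals),
            "mean_mark": round(mean(awarded), 2),
            "full_credit_rate": round(full / len(vals), 2),
        }
    return summary

print(distribution_summary(records))
```

A question whose full-credit rate collapses relative to earlier years is precisely where "calibration compression" would show up in published data.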
What Boards and Universities Are Watching
State boards and affiliating universities observing CBSE's OSM rollout are drawing practical conclusions. The technology is not the risk — the calibration process is. A digital platform that eliminates mechanical errors still requires structured human processes to ensure evaluator consistency. Those processes need to be designed, not assumed.
The good news is that CBSE now has granular data from 98.66 lakh answer books to inform a far smoother Year Two. The compartment numbers, the subject-wise distribution, the revaluation application rates — all of this feeds back into calibration improvements that manual evaluation could never provide, because manual evaluation never generated data at this resolution.
Institutions considering their own digital evaluation transition should treat calibration management — not technology deployment — as the primary implementation challenge.
Ready to digitize your evaluation process?
See how MAPLES OSM can transform exam evaluation at your institution.