When Wrong Marks Kill: The Human Cost of Evaluation Errors in Indian Exams
Recurring student deaths following exam result errors in Telangana, Madhya Pradesh, and beyond reveal a systemic failure in manual evaluation. The data is stark — and the solution is not abstract.

A Crisis That Repeats Every May
In 2019, the Telangana Board of Intermediate Education (TSBIE) contracted a Hyderabad-based technology firm to process and compute results for approximately 9.7 lakh intermediate students. The firm's software malfunctioned. Students who had attended every paper were marked absent. Toppers who had scored well in the first year suddenly "failed." The marks shown on result sheets bore no relationship to what students had written.
At least 24 students died by suicide in the days following result publication. The Telangana High Court called the situation a "grave disaster" and ordered re-evaluation of all papers for students declared failed. The government eventually passed the failed students. The technology vendor was removed. The students who died could not be brought back.
In 2023, eight students in Telangana died by suicide in the 24 hours following intermediate result declaration. In 2024, seven students died in the 30 hours after the TSBIE results were announced. In May 2025, two students in Madhya Pradesh died by suicide and a third attempted it after the MPBSE Class 12 results were declared.
This is not a string of isolated tragedies. It is a pattern. And the pattern is inseparable from the way India evaluates exams.
The Scale of the Problem
India's National Crime Records Bureau (NCRB) recorded 2,248 suicide deaths attributed to exam failure in 2022 — a figure that does not capture students who died after incorrect results rather than genuine failure. Research published in The Lancet Regional Health — Southeast Asia found that student suicides in India rose 21% between 2019 and 2021. Mental health organisations report a fourfold increase in crisis calls and visits from suicidal students in May, when board results are announced.
Several distinct failure modes drive this crisis:
Computational errors: Manual totalling of marks across multiple question papers and components introduces arithmetic mistakes. A single totalling error on a critical subject can change a pass to a fail. These errors are preventable — addition is not a task that benefits from human judgement — but in manual evaluation they are endemic.
Software processing failures: The 2019 Telangana case demonstrates what happens when poorly implemented or inadequately tested software processes millions of records. When a computation system fails at scale, the damage is distributed across tens of thousands of students before anyone catches the error.
Unmarked questions: In manual evaluation, evaluators miss questions. An answer written on the last page of a booklet that is stapled or bound awkwardly may be skipped entirely. A student who wrote four pages for a 20-mark question and received zero is likely not a student who wrote nothing — they are a student whose page was not reached.
Data entry errors: In centralised mark processing, individual evaluator marks must be transcribed into a master register or digital system. Data entry at volume under time pressure produces errors. A 9 that looks like a 4. A 47 entered as 74.
Moderation miscalculation: Where papers undergo moderation to address difficulty variation across sets, moderation adjustments applied incorrectly can shift marks for entire batches of students.
Each of these failures is individually documented in court judgments, investigation reports, and media coverage across Indian states. Together, they constitute a structural vulnerability in manual evaluation systems.
The Downstream Consequences
When results are wrong, students and families have limited recourse. Most boards offer post-publication scrutiny — a recount of marks, not a re-reading of the answer book — for a fee. Full re-evaluation, where a second evaluator re-reads the answer book, is available from some boards but requires a separate application, takes weeks, and carries its own risk of inconsistency.
The consequences of waiting are not neutral. A student who fails based on a wrong result carries that result, and everything that follows from it, for as long as the correction takes: a lost admission seat, family and social pressure, and the public record of a failure that never happened.
A Lancet study on exam failure suicides in India notes that "one-point evaluation" — the practice of a single result determining major life outcomes — amplifies the psychological stakes of any error. In this environment, the accuracy of evaluation is not merely an administrative matter. It is a welfare obligation.
What Digital Evaluation Changes
The specific failures that drive evaluation errors can be mapped directly to what digital evaluation eliminates:
Computational errors: In digital evaluation systems, marks entered per question are summed automatically. The evaluator assigns marks; the system computes totals. A human cannot add incorrectly because no human is adding. Mark computation becomes an audit trail rather than a source of error.
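A minimal sketch of this principle in Python. The names (QuestionMark, total_marks) and the range check are illustrative assumptions, not any specific board's platform:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QuestionMark:
    question_id: str
    marks_awarded: float  # entered by the evaluator, per question
    max_marks: float      # fixed by the question paper's marking scheme

def total_marks(marks: list[QuestionMark]) -> float:
    """Derive the total; no person ever types it in."""
    for m in marks:
        # Reject impossible entries at the point of input.
        if not 0 <= m.marks_awarded <= m.max_marks:
            raise ValueError(
                f"{m.question_id}: {m.marks_awarded} outside 0..{m.max_marks}")
    return sum(m.marks_awarded for m in marks)
```

Because the total is derived from the same records the evaluator created, a totalling error would require the stored per-question marks themselves to be wrong.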
Unmarked questions: Digital platforms surface unanswered questions to the evaluator as alerts — a question that has received no marks before the evaluator submits is flagged. The system will not allow a complete submission with unreviewed sections, eliminating the missed-page failure mode.
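A sketch of that submission gate, under the same illustrative assumptions. Note that an explicit zero still counts as a mark; only a question with no entry at all blocks submission:

```python
def unmarked_questions(paper_questions: set[str],
                       entered_marks: dict[str, float]) -> set[str]:
    """Questions with no entry at all. An explicit zero counts as marked."""
    return paper_questions - entered_marks.keys()

def can_submit(paper_questions: set[str],
               entered_marks: dict[str, float]) -> bool:
    """Block submission until every question carries an explicit mark."""
    return not unmarked_questions(paper_questions, entered_marks)

# Example: Q3 was never reached, so the booklet cannot be submitted.
assert not can_submit({"Q1", "Q2", "Q3"}, {"Q1": 4.0, "Q2": 0.0})
```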
Data entry transcription: Because marks are entered directly into the evaluation system rather than onto paper that must later be transcribed, the transcription step is eliminated entirely. The evaluator's input is the final record.
Multiple evaluator oversight: Double valuation — a second evaluator independently assessing the same answer book — is operationally difficult in paper-based systems because it requires physically duplicating and routing answer books. In digital systems, the same scanned image is routed to a second evaluator independently, with both sets of marks compared automatically. Discrepancies beyond a configurable threshold trigger moderation.
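The discrepancy check itself reduces to a single comparison. The 5-mark threshold below is illustrative only; actual thresholds and award rules are set by each board:

```python
def needs_moderation(first_total: float, second_total: float,
                     threshold: float = 5.0) -> bool:
    """Flag the answer book for a third, moderating evaluator when the
    two independent totals diverge beyond the configured threshold."""
    return abs(first_total - second_total) > threshold

print(needs_moderation(61.0, 68.0))  # True: 7-mark gap, routed to moderation
print(needs_moderation(61.0, 64.0))  # False: 3-mark gap, within tolerance
```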
Audit trails: Every mark assigned in a digital system is time-stamped and linked to a specific evaluator. If a student disputes their result, the question-level mark trace is available. The system can show exactly what mark was assigned to each question, by whom, and when. Paper-based systems cannot produce equivalent evidence.
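Structurally, a question-level mark trace is an append-only log. This hypothetical sketch shows the minimum such a record needs to answer "what mark, by whom, and when":

```python
import datetime
from dataclasses import dataclass, field

@dataclass(frozen=True)
class MarkEvent:
    answer_book_id: str
    question_id: str
    marks_awarded: float
    evaluator_id: str
    # Stamped by the system at write time, never by the evaluator.
    timestamp: datetime.datetime = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc))

class AuditTrail:
    """Append-only: a correction is a new event, never an overwrite,
    so the full history of a mark survives any later dispute."""

    def __init__(self) -> None:
        self._events: list[MarkEvent] = []

    def record(self, event: MarkEvent) -> None:
        self._events.append(event)

    def trace(self, answer_book_id: str, question_id: str) -> list[MarkEvent]:
        """Every mark ever assigned to this question, in order."""
        return [e for e in self._events
                if e.answer_book_id == answer_book_id
                and e.question_id == question_id]
```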
The 2019 Telangana Inquiry: A Technical Audit of Manual Processing Failure
The Telangana government commissioned a technical inquiry following the 2019 disaster. Its findings were instructive. The firm responsible for result processing had deployed inadequately tested software at scale, run the computation without redundancy or parallel verification, and published results without validating the computed marks against the underlying evaluator inputs.
These are not exotic software engineering failures. They are the result of treating exam result computation as a trivial data processing task rather than a high-stakes operation where errors have irreversible human consequences.
Digital evaluation platforms used by examination boards today build in redundancy, automatic mark validation, and parallel verification as baseline requirements rather than optional features. The change is not merely technological. It reflects a different understanding of what the stakes are.
A Framework for Thinking About Institutional Responsibility
India's examination boards handle millions of students annually. CBSE board exams alone cover more than 40 lakh students across Classes 10 and 12. State boards add tens of millions more. In this context, even a small error rate translates to thousands of affected students.
A 0.1% computational error rate across 50 million answer books — a conservative assumption for fully manual processing — means 50,000 students receiving incorrect marks in a single year. A 0.01% rate means 5,000 students. The question for exam administrators is not whether manual evaluation produces errors. It is how many errors are acceptable and what the institution's liability is when those errors cause harm.
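The arithmetic is one line of code; reproducing it makes the sensitivity to the error rate explicit:

```python
def expected_wrong_results(answer_books: int, error_rate: float) -> int:
    """Expected number of students with incorrect marks in one cycle."""
    return round(answer_books * error_rate)

print(expected_wrong_results(50_000_000, 0.001))   # 0.1%  -> 50000
print(expected_wrong_results(50_000_000, 0.0001))  # 0.01% -> 5000
```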
The High Court of Telangana's 2019 judgment held that the board had a duty of care to students and that the failure of its processing systems constituted a breach of that duty. Subsequent litigation across Indian courts has reinforced this principle. Examination boards are not insulated from liability for evaluation errors by virtue of scale or complexity.
Systemic Change, Not Individual Consolation
The response to student deaths following evaluation errors has historically been reactive. Free re-evaluation orders. Chief Minister consolation payments to families. Changes in vendor contracts. None of these address the underlying failure mode.
The systemic response is to restructure evaluation so that specific classes of error become impossible rather than improbable. Digital evaluation does this for computational errors, unmarked questions, and data transcription failures. It does not eliminate all possible sources of error — evaluator subjectivity, inadequate examiner training, and question quality remain — but it removes the mechanical failures that produce definitively wrong marks.
India evaluates hundreds of millions of answer books each year. The accuracy of that evaluation is a matter of direct consequence for the students whose marks are produced, and — in a small but documented set of cases — a matter of life and death.
Ready to digitize your evaluation process?
See how MAPLES OSM can transform exam evaluation at your institution.