NEP 2020 at Six: What India's Assessment Reform Promised and What Institutions Still Need
Six years into NEP 2020, the gap between its assessment vision and institutional infrastructure has become the defining implementation challenge for Indian higher education.

A Six-Year Ledger
When the National Education Policy 2020 was notified, its assessment chapter was arguably its most ambitious section. The policy proposed replacing rote-memorisation-driven summative exams with continuous, competency-based assessment; replacing binary report cards with 360-degree progress documentation; and establishing PARAKH as a national body to standardise evaluation frameworks across all school boards.
Six years on, the ledger is mixed. Meaningful progress has been made at specific pressure points — SAFAL is operational, PARAKH has issued benchmarking frameworks, CBSE has moved Class 12 to on-screen marking, and more than 200 universities have adopted the Four-Year Undergraduate Programme (FYUGP). But the gap between NEP's assessment vision and the infrastructure available to most Indian universities and affiliated colleges remains the single biggest obstacle to genuine compliance.
For decision-makers at colleges and universities, 2026 is the inflection point: UGC's revised minimum standards regulations, notified in 2025, now set enforceable continuous assessment requirements. NAAC's binary accreditation model demands documented evidence of evaluation quality. The assessment infrastructure question has moved from aspirational to operational.
What NEP 2020 Mandated: The Assessment Architecture
NEP 2020's assessment framework rests on five structural pillars:
- a shift from rote-memorisation-driven summative exams to competency-based testing;
- continuous, formative assessment in place of single high-stakes evaluations;
- 360-degree, holistic progress documentation in place of binary report cards;
- standardised evaluation frameworks across school boards, anchored by PARAKH;
- accuracy and transparency in the conduct of evaluation itself.
For higher education, these principles are operationalised through UGC's Outcome-Based Education (OBE) mandates, the Academic Bank of Credits (ABC), and the FYUGP's credit-hour framework.
What Has Changed: The School Level
At school level, progress is tangible. SAFAL is now operational for Classes 3, 5, and 8, generating diagnostic attainment data that feeds curriculum review at the state level. PARAKH has published competency benchmarks aligned with the National Curriculum Framework 2023 (NCF 2023), giving boards a common reference for assessment design.
CBSE's 2026 question papers show a measurable increase in application and higher-order thinking items. A CBSE analysis of Class 12 papers published in early 2026 found that approximately 40% of marks in core subjects now test analysis, evaluation, or creation — up from roughly 25% in 2021. This reflects the competency-based assessment shift NEP mandated.
CBSE's move to on-screen marking for Class 12 in 2026 is the most visible operational change: 18.5 lakh answer scripts evaluated digitally, with automated totalling and no post-result marks verification. While primarily a technology upgrade, it demonstrates the evaluation infrastructure investment NEP's accuracy and transparency requirements implied.
What Has Changed: The Higher Education Level
FYUGP adoption has crossed 200 central and autonomous universities. The Academic Bank of Credits (ABC) has registered over 1.5 crore students, enabling credit transfer between institutions — a prerequisite for the flexible, multi-exit degree structure NEP envisioned.
UGC's Minimum Standards and Procedures for Award of UG and PG Degrees, revised in 2025, now explicitly require continuous assessment to constitute at least 40% of total marks for all programmes. This is no longer a recommendation — it is a compliance requirement enforceable through UGC inspections and NAAC accreditation audits.
The NAAC binary accreditation model, which applies a pass/fail threshold rather than a graded score, treats evaluation quality as a hard parameter under Criterion 2 (Teaching-Learning and Evaluation). Institutions that cannot produce structured, auditable assessment data risk failing to cross the accreditation floor regardless of their physical infrastructure scores.
Where the Gap Persists
Despite this progress, three infrastructure deficits stand out across the sector.
1. Data Collection at Scale
Competency-based continuous assessment generates far more data than traditional end-term exams. A single FYUGP student may complete 20 to 30 assessed components per semester across four or five courses. Multiplied across thousands of enrolled students, this creates a data volume that paper-based systems cannot manage reliably: a college of 5,000 students generates on the order of 2.5 lakh assessment records a year.
Institutions that collect CCE data on paper have no mechanism for aggregating it, identifying learning gaps systematically, or producing the outcomes evidence that NAAC's Criterion 2 and UGC's OBE mandate require. The data exists in fragmented form across handwritten grade sheets that cannot be searched, audited, or analysed.
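What "structured" means in practice can be sketched in a few lines. The record layout below is a hypothetical illustration — the `AssessmentRecord` class, its field names, and the audit query are assumptions for this sketch, not a schema prescribed by UGC or NAAC:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class AssessmentRecord:
    """One assessed component for one student (hypothetical layout)."""
    student_id: str
    course_code: str
    component: str      # e.g. "quiz-2", "lab-report-1"
    outcome_code: str   # programme outcome this item is mapped to
    score: float
    max_score: float
    evaluator_id: str
    assessed_on: str    # ISO date, so evaluation timelines can be audited

def components_per_student(records: list[AssessmentRecord]) -> dict[str, int]:
    """A basic audit query -- how many assessed components does each
    student have on record? -- that paper grade sheets can only
    answer through manual collation."""
    counts: dict[str, int] = defaultdict(int)
    for record in records:
        counts[record.student_id] += 1
    return dict(counts)
```

Once records exist in roughly this form, aggregation, gap analysis, and outcomes evidence become queries rather than clerical projects.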
2. Evaluation Consistency Across Internal Assessment
Competency-based rubrics are inherently more complex to apply than traditional marking schemes. Inter-rater reliability — the degree to which two evaluators would award the same mark to the same answer — is difficult to maintain without structured digital evaluation tools that enforce rubric compliance, flag statistical outliers, and route borderline scripts for moderation.
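One common statistical approach is sketched below. The two-sigma cutoff and the two-mark moderation band are illustrative assumptions, not values any regulator prescribes:

```python
from statistics import mean, stdev

def flag_outlier_evaluators(marks_by_evaluator: dict[str, list[float]],
                            z_cutoff: float = 2.0) -> list[str]:
    """Flag evaluators whose mean awarded mark sits more than z_cutoff
    standard deviations from the mean of all evaluators' means."""
    evaluator_means = {e: mean(marks) for e, marks in marks_by_evaluator.items()}
    overall = mean(evaluator_means.values())
    spread = stdev(evaluator_means.values())
    if spread == 0:
        return []
    return sorted(e for e, m in evaluator_means.items()
                  if abs(m - overall) / spread > z_cutoff)

def needs_moderation(score: float, boundary: float, band: float = 2.0) -> bool:
    """Route scripts within `band` marks of a pass or grade boundary
    for a second evaluation."""
    return abs(score - boundary) <= band
```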
Without structured oversight of this kind, CCE produces inconsistent grading that leaves the institution legally exposed. A series of RTI-driven challenges at university tribunals in 2025 and early 2026 has targeted exactly this inconsistency in internal assessment marks, with students successfully obtaining court orders requiring institutions to produce the evaluation criteria and methodology behind their continuous assessment scores. Institutions that cannot produce this documentation are vulnerable in ways they may not have anticipated.
3. The Analytics and Feedback Loop
NEP's deeper ambition is a feedback loop: assessment data should inform teaching, curriculum design, and institutional research. For this loop to function, evaluation data must be structured, searchable, and mapped to programme outcomes.
Most affiliated colleges generate assessment data but cannot query it. They cannot answer basic operational questions: Which programme outcomes show consistently low attainment across a batch? Which question types are poorly answered across sections? Which evaluators deviate significantly from cohort means? Without this analytical layer, continuous assessment becomes an administrative exercise rather than a pedagogical instrument.
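A minimal sketch of the first of those queries, assuming scores are already outcome-tagged; the 60% attainment target is an illustrative figure, not a mandated threshold:

```python
from collections import defaultdict

def underattained_outcomes(records: list[tuple[str, float, float]],
                           target: float = 0.60) -> list[str]:
    """records holds (outcome_code, score, max_score) tuples for one batch.
    Return the outcome codes whose mean attainment falls below target."""
    ratios: dict[str, list[float]] = defaultdict(list)
    for outcome_code, score, max_score in records:
        ratios[outcome_code].append(score / max_score)
    return sorted(code for code, values in ratios.items()
                  if sum(values) / len(values) < target)
```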
What Compliant Assessment Infrastructure Looks Like
Institutions that have successfully operationalised NEP's assessment vision share four structural characteristics:
| Characteristic | Description |
|---|---|
| Digital answer-sheet capture | All assessed work digitised at submission or through scanning |
| Rubric-enforced evaluation | Evaluators mark against mandatory structured rubrics, not free-form annotation |
| Outcome-mapped analytics | Every assessment item tagged to a programme outcome, enabling attainment reporting |
| Longitudinal student records | Continuous assessment data linked to individual student profiles across semesters |
This infrastructure is not prohibitively expensive. It is, however, a prerequisite — not a future aspiration — for genuine NEP compliance and defensible NAAC evidence.
The UGC 2025 Regulations: A Hard Deadline
UGC's 2025 minimum standards update introduced enforceable assessment requirements that most affiliated colleges have not yet operationalised — chief among them the 40% continuous assessment floor described above.
These requirements assume digital infrastructure. An institution operating on paper records cannot meet them at any meaningful scale.
The NEP Assessment Gap as an Institutional Risk
Institutions that have not addressed the assessment infrastructure gap face risks across multiple dimensions:
Accreditation risk: NAAC Criterion 2 audits now examine evaluation documentation in depth. Assessors look for evidence of rubric-based marking, outcome attainment data, and structured CCE records. The absence of digital audit trails is increasingly treated as a systemic gap rather than an administrative oversight.
Legal risk: Court-ordered disclosure of evaluation methodology is becoming more common. Institutions that cannot produce structured marking criteria for internal assessments lose these challenges by default.
Institutional ranking risk: NIRF's "Teaching, Learning and Resources" and "Graduation Outcomes" parameters both depend on demonstrable assessment quality. Institutions with higher-quality evaluation data consistently produce stronger NIRF submissions on these parameters.
What Institutions Need to Do Now
The most practical immediate step is mapping the current assessment workflow against the specific data requirements of NAAC Criterion 2 and UGC's 2025 OBE guidelines. The gaps that emerge almost invariably cluster around three areas:
Answer-sheet custody and auditability: can the institution demonstrate that evaluation was conducted by authorised evaluators, on the correct scripts, within the prescribed timeline, against documented criteria?
Rubric documentation: are evaluation criteria formalised, stored, version-controlled, and accessible to students upon request?
Outcome attainment records: can the institution produce attainment data per outcome per batch per programme for the last three academic years?
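As an indication of what producing that record involves once data is structured — the field order and names below are assumptions for this sketch, not a NAAC-prescribed format — the report reduces to a multi-key grouping:

```python
from collections import defaultdict

def attainment_report(records, academic_years: set[str]) -> dict:
    """records holds (programme, batch, academic_year, outcome, score,
    max_score) tuples. Return mean attainment keyed by
    (programme, batch, academic_year, outcome)."""
    buckets: dict[tuple, list[float]] = defaultdict(list)
    for programme, batch, year, outcome, score, max_score in records:
        if year in academic_years:
            buckets[(programme, batch, year, outcome)].append(score / max_score)
    return {key: sum(values) / len(values) for key, values in buckets.items()}
```

An institution holding structured records can generate this report in seconds; one holding paper grade sheets cannot generate it at all.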
Each of these gaps has a digital solution that is deployable within a single semester. The NEP assessment vision is achievable — but only for institutions that treat evaluation infrastructure as a strategic priority rather than a back-office function.
Ready to digitise your evaluation process?
See how MAPLES OSM can transform exam evaluation at your institution.