Industry · 2026-04-21 · 9 min read

NEP 2020 at Six: What India's Assessment Reform Promised and What Institutions Still Need

Six years into NEP 2020, the gap between its assessment vision and institutional infrastructure has become the defining implementation challenge for Indian higher education.


A Six-Year Ledger

When the National Education Policy 2020 was notified, its assessment chapter was arguably its most ambitious section. The policy proposed replacing rote-memorisation-driven summative exams with continuous, competency-based assessment; replacing binary report cards with 360-degree progress documentation; and establishing PARAKH as a national body to standardise evaluation frameworks across all school boards.

Six years on, the ledger is mixed. Meaningful progress has been made at specific pressure points — SAFAL is operational, PARAKH has issued benchmarking frameworks, CBSE has moved Class 12 to on-screen marking, and more than 200 universities have adopted the Four-Year Undergraduate Programme (FYUGP). But the gap between NEP's assessment vision and the infrastructure available to most Indian universities and affiliated colleges remains the single biggest obstacle to genuine compliance.

For decision-makers at colleges and universities, 2026 is the inflection point: UGC's revised minimum standards regulations, notified in 2025, now set enforceable continuous assessment requirements. NAAC's binary accreditation model demands documented evidence of evaluation quality. The assessment infrastructure question has moved from aspirational to operational.

What NEP 2020 Mandated: The Assessment Architecture

NEP 2020's assessment framework rests on five structural pillars:

  • Competency-Based Assessment (CBA) — evaluating what students can do, not merely what they can recall
  • Continuous and Comprehensive Evaluation (CCE) — spreading assessment across the academic year rather than concentrating it in end-term exams
  • PARAKH — a national assessment standards authority to align evaluation frameworks across boards
  • SAFAL — Structured Assessment for Analysing Learning, a diagnostic census for Classes 3, 5, and 8
  • 360-Degree Progress Cards — holistic records capturing cognitive, affective, and psychomotor performance alongside academic scores
For higher education, these principles are operationalised through UGC's Outcome-Based Education (OBE) mandates, the Academic Bank of Credits (ABC), and the FYUGP's credit-hour framework.

What Has Changed: The School Level

At school level, progress is tangible. SAFAL is now operational for Classes 3, 5, and 8, generating diagnostic attainment data that feeds curriculum review at the state level. PARAKH has published competency benchmarks aligned with the National Curriculum Framework 2023 (NCF 2023), giving boards a common reference for assessment design.

CBSE's 2026 question papers show a measurable increase in application and higher-order thinking items. A CBSE analysis of Class 12 papers published in early 2026 found that approximately 40% of marks in core subjects now test analysis, evaluation, or creation — up from roughly 25% in 2021. This reflects the competency-based assessment shift NEP mandated.

CBSE's move to on-screen marking for Class 12 in 2026 is the most visible operational change: 18.5 lakh answer scripts evaluated digitally, with automated totalling and no post-result marks verification required. While primarily a technology upgrade, it demonstrates the evaluation infrastructure investment implied by NEP's accuracy and transparency requirements.

What Has Changed: The Higher Education Level

FYUGP adoption has crossed 200 central and autonomous universities. The Academic Bank of Credits (ABC) has registered over 1.5 crore students, enabling credit transfer between institutions — a prerequisite for the flexible, multi-exit degree structure NEP envisioned.

UGC's Minimum Standards and Procedures for Award of UG and PG Degrees, revised in 2025, now explicitly require continuous assessment to constitute at least 40% of total marks for all programmes. This is no longer a recommendation — it is a compliance requirement enforceable through UGC inspections and NAAC accreditation audits.

The NAAC binary accreditation model, which replaces letter grades with a single accredited-or-not threshold, treats evaluation quality as a hard parameter under Criterion 2 (Teaching-Learning and Evaluation). Institutions that cannot produce structured, auditable assessment data risk falling below the accreditation floor regardless of their physical infrastructure scores.

Where the Gap Persists

Despite this progress, three infrastructure deficits stand out across the sector.

1. Data Collection at Scale

Competency-based continuous assessment generates far more data than traditional end-term exams. A single FYUGP student may complete 20 to 30 assessed components per semester across four or five courses. Multiplied across thousands of enrolled students, this creates a data volume that paper-based systems cannot manage reliably.

Institutions that collect CCE data on paper have no mechanism for aggregating it, identifying learning gaps systematically, or producing the outcomes evidence that NAAC's Criterion 2 and UGC's OBE mandate require. The data exists in fragmented form across handwritten grade sheets that cannot be searched, audited, or analysed.
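To make the contrast concrete, here is a minimal Python sketch of what a structured CCE record looks like once it is digital rather than handwritten. The field names and values are illustrative assumptions, not a prescribed UGC or NAAC schema; the point is that once records are structured, queries that defeat a paper register become trivial.

```python
from dataclasses import dataclass, asdict
import json

# Illustrative record for one assessed CCE component.
# Field names are an assumption for this sketch, not a mandated schema.
@dataclass
class CCEComponent:
    student_id: str
    course_code: str
    component: str        # e.g. "quiz-2", "lab-report-1"
    evaluator_id: str
    max_marks: float
    marks_awarded: float
    evaluated_on: str     # ISO date; part of the audit trail

ledger = [
    CCEComponent("S001", "CS101", "quiz-1", "FAC07", 10, 7, "2026-01-19"),
    CCEComponent("S001", "CS101", "lab-1",  "FAC12", 20, 15, "2026-02-02"),
    CCEComponent("S002", "CS101", "quiz-1", "FAC07", 10, 4, "2026-01-19"),
]

# Queries a handwritten grade sheet cannot answer without manual collation:
s001_total = sum(c.marks_awarded for c in ledger if c.student_id == "S001")
print(s001_total)                     # 22
print(json.dumps(asdict(ledger[0])))  # exportable, auditable record
```

The same records can be filtered by evaluator, course, or date for an audit trail, which is precisely what fragmented paper sheets cannot provide.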

2. Evaluation Consistency Across Internal Assessment

Competency-based rubrics are inherently more complex to apply than traditional marking schemes. Inter-rater reliability — the degree to which two evaluators would award the same mark to the same answer — is difficult to maintain without structured digital evaluation tools that enforce rubric compliance, flag statistical outliers, and route borderline scripts for moderation.
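One piece of that structured oversight can be sketched simply. The following Python example flags evaluators whose mean awarded mark deviates sharply from the cohort mean; the evaluator IDs, marks, and z-score threshold are illustrative assumptions, not a prescribed moderation policy.

```python
from statistics import mean, pstdev

# Hypothetical per-script marks (out of 20) for the same question,
# grouped by evaluator. Data and threshold are illustrative only.
marks_by_evaluator = {
    "FAC07": [14, 15, 13, 16, 14],
    "FAC12": [12, 13, 14, 12, 13],
    "FAC21": [19, 20, 19, 18, 20],   # consistently high relative to cohort
}

def flag_outlier_evaluators(marks: dict[str, list[float]], z_cut: float = 1.2):
    """Flag evaluators whose mean awarded mark deviates from the cohort
    mean by more than z_cut standard deviations; their scripts would
    then be routed for moderation."""
    evaluator_means = {e: mean(ms) for e, ms in marks.items()}
    cohort = list(evaluator_means.values())
    mu, sigma = mean(cohort), pstdev(cohort)
    return [e for e, m in evaluator_means.items()
            if sigma and abs(m - mu) / sigma > z_cut]

print(flag_outlier_evaluators(marks_by_evaluator))  # ['FAC21']
```

A production system would use more robust statistics over larger samples, but even this simple check is impossible to run on paper grade sheets.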

Without structured oversight, CCE produces grading that is inconsistent and legally exposed. A series of RTI-driven challenges at university tribunals in 2025 and early 2026 have targeted exactly this inconsistency in internal assessment marks, with students successfully obtaining court orders requiring institutions to produce the evaluation criteria and methodology behind their continuous assessment scores. Institutions that cannot produce this documentation are vulnerable in ways they may not have anticipated.

3. The Analytics and Feedback Loop

NEP's deeper ambition is a feedback loop: assessment data should inform teaching, curriculum design, and institutional research. For this loop to function, evaluation data must be structured, searchable, and mapped to programme outcomes.

Most affiliated colleges generate assessment data but cannot query it. They cannot answer basic operational questions: which programme outcomes are consistently underperformed across a batch? Which question types are poorly answered across sections? Which evaluators deviate significantly from cohort means? Without this analytical layer, continuous assessment becomes an administrative exercise rather than a pedagogical instrument.
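As a sketch of what that analytical layer answers, the following Python example computes batch-wide outcome attainment and surfaces underperforming programme outcomes. The PO tags, marks, and the 60% attainment target are assumptions for illustration, not a mandated methodology.

```python
from collections import defaultdict

# Illustrative assessment items tagged to programme outcomes:
# (student, outcome_tag, marks_awarded, max_marks)
items = [
    ("S001", "PO1", 8, 10), ("S002", "PO1", 7, 10), ("S003", "PO1", 9, 10),
    ("S001", "PO3", 4, 10), ("S002", "PO3", 5, 10), ("S003", "PO3", 3, 10),
]

def underperformed_outcomes(items, target=0.6):
    """Return outcomes whose batch-wide attainment ratio falls below
    target -- the query that turns CCE data into a curriculum-review
    signal."""
    earned, possible = defaultdict(float), defaultdict(float)
    for _student, po, got, mx in items:
        earned[po] += got
        possible[po] += mx
    attainment = {po: earned[po] / possible[po] for po in possible}
    return {po: a for po, a in attainment.items() if a < target}

print(underperformed_outcomes(items))  # {'PO3': 0.4}
```

Once every item carries an outcome tag, the same aggregation extends to question types, sections, and evaluators, which is the feedback loop the policy describes.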

What Compliant Assessment Infrastructure Looks Like

Institutions that have successfully operationalised NEP's assessment vision share four structural characteristics:

  • Digital answer-sheet capture — all assessed work digitised at submission or through scanning
  • Rubric-enforced evaluation — evaluators mark against mandatory structured rubrics, not free-form annotation
  • Outcome-mapped analytics — every assessment item tagged to a programme outcome, enabling attainment reporting
  • Longitudinal student records — continuous assessment data linked to individual student profiles across semesters

This infrastructure is not prohibitively expensive. It is, however, a prerequisite — not a future aspiration — for genuine NEP compliance and defensible NAAC evidence.

The UGC 2025 Regulations: A Hard Deadline

UGC's 2025 minimum standards update introduced three enforceable assessment requirements that most affiliated colleges have not yet operationalised:

  • CCE documentation requirement: institutions must maintain structured records of all continuous assessment components, including evaluation criteria and marks awarded, for a minimum of five years
  • OBE attainment reporting: annual attainment reports per programme per batch must be submitted to the institution's IQAC and included in the AQAR
  • Grievance redressal trail: every internal assessment mark must be accompanied by a documented evaluation methodology that can be presented to students upon request
These requirements assume digital infrastructure. An institution operating on paper records cannot meet all three simultaneously at any meaningful scale.

The NEP Assessment Gap as an Institutional Risk

Institutions that have not addressed the assessment infrastructure gap face risks across multiple dimensions:

Accreditation risk: NAAC Criterion 2 audits now examine evaluation documentation in depth. Assessors look for evidence of rubric-based marking, outcome attainment data, and structured CCE records. The absence of digital audit trails is increasingly treated as a systemic gap rather than an administrative oversight.

Legal risk: Court-ordered disclosure of evaluation methodology is becoming more common. Institutions that cannot produce structured marking criteria for internal assessments lose these challenges by default.

Institutional ranking risk: NIRF parameters for Teaching, Learning, and Resources and Graduation Outcomes both depend on demonstrable assessment quality. Institutions with higher-quality evaluation data consistently produce stronger NIRF submissions in these parameters.

What Institutions Need to Do Now

The most practical immediate step is mapping the current assessment workflow against the specific data requirements of NAAC Criterion 2 and UGC's 2025 OBE guidelines. The gaps that emerge almost invariably cluster around three areas:

Answer-sheet custody and auditability: can the institution demonstrate that evaluation was conducted by authorised evaluators, on the correct scripts, within the prescribed timeline, against documented criteria?

Rubric documentation: are evaluation criteria formalised, stored, version-controlled, and accessible to students upon request?

Outcome attainment records: can the institution produce attainment data per outcome per batch per programme for the last three academic years?

Each of these gaps has a digital solution that is deployable within a single semester. The NEP assessment vision is achievable — but only for institutions that treat evaluation infrastructure as a strategic priority rather than a back-office function.

Related Reading

  • NAAC Criterion 2: Building an Evaluation Evidence Portfolio
  • NEP 2020 and the FYUGP Examination Challenge
  • IQAC and AQAR: How Digital Evaluation Data Feeds Your Annual Report
Ready to digitize your evaluation process?

See how MAPLES OSM can transform exam evaluation at your institution.