Guide · 2026-04-11 · 8 min read

What NAAC Peer Teams Check in Criterion 2: An Evaluation Evidence Guide

Criterion 2 carries 350 points in NAAC's framework for affiliated colleges — the highest weight among the seven criteria. Here is exactly what peer teams verify during site visits, and how digital evaluation systems generate this evidence automatically.


Why Criterion 2 Decides Accreditation Outcomes

When a NAAC peer team arrives on campus for a site visit, its most consequential days are spent on Criterion 2: Teaching-Learning and Evaluation. Of the 1,000 points that determine the NAAC grade of an affiliated college, Criterion 2 contributes 350 — more than any other single criterion.

The weightages vary by institution type — in the university framework, Criterion 1 (Curricular Aspects) carries 150 points and Criterion 3 (Research) carries 250 — but in every framework it is Criterion 2, which spans everything from the student enrollment profile through teaching methodology to the examination process itself, that defines the academic heartbeat of an institution.

Within Criterion 2, the two metrics most directly affected by how an institution manages its examination and evaluation system are:

  • 2.5: Evaluation Process and Reforms
  • 2.6: Student Performance and Learning Outcomes

Institutions that can present strong, verifiable evidence under these two metrics frequently gain more in their overall NAAC score than from any other single improvement area.

    This guide is for Controllers of Examinations, Registrars, and IQAC coordinators who want to understand precisely what peer teams look for — and how to build the evidence portfolio that answers those questions before the visit begins.

    ---

    What Metric 2.5 Actually Measures

    NAAC breaks Metric 2.5 into two sub-components.

    2.5.1 — Transparency and Robustness of Internal Assessment

    Peer teams evaluating this metric want to know whether the institution's internal assessment mechanism is:

  • Transparent: Students understand how they are being assessed, receive feedback on their performance, and have access to their evaluated scripts or at least their marks breakdown.
  • Robust: The assessment is conducted regularly, by multiple evaluators where the scale warrants it, and with quality checks at each stage.
  • Varied: The institution uses more than one modality — assignments, tests, practicals, presentations — rather than relying solely on end-semester examinations.

What peer teams ask to see:

| Evidence Type | What the Team Looks For |
| --- | --- |
| Internal exam schedule | Were exams conducted on the published schedule? |
| Answer script samples | Random scripts: are marks recorded clearly, with question-wise breakdowns? |
| Student mark disclosure records | Can students see their scripts or marks? |
| Double valuation records | Were second evaluations conducted? Were discrepancies resolved? |
| Evaluation guidelines | Did examiners receive marking schemes before evaluating? |

    The most common peer team observation under 2.5.1 is the absence of double valuation records. Institutions that conduct second evaluations informally — without logging outcomes — cannot demonstrate the practice even when it occurs.
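
To make the logging concrete, here is a minimal sketch of what a double-valuation record can capture. The field names and the 15% moderation threshold are illustrative assumptions, not NAAC prescriptions:

```python
# Minimal sketch of a double-valuation log entry. Field names and the 15%
# moderation threshold are illustrative assumptions, not NAAC requirements.
from dataclasses import dataclass, field
from datetime import datetime

MODERATION_THRESHOLD = 0.15  # escalate if evaluators differ by more than 15% of max marks

@dataclass
class DoubleValuationRecord:
    script_id: str
    evaluator_1: str
    evaluator_2: str
    marks_1: float
    marks_2: float
    max_marks: float
    logged_at: datetime = field(default_factory=datetime.now)

    @property
    def discrepancy(self) -> float:
        return abs(self.marks_1 - self.marks_2)

    @property
    def needs_moderation(self) -> bool:
        return self.discrepancy > MODERATION_THRESHOLD * self.max_marks

record = DoubleValuationRecord("SCR-2025-00113", "EV-41", "EV-87", 54.0, 71.0, 100.0)
print(record.discrepancy, record.needs_moderation)  # 17.0 True: route to head examiner
```

Even a log this small answers the three questions peer teams ask under 2.5.1: was a second evaluation conducted, how large was the discrepancy, and was it resolved.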

    2.5.2 — Grievance Redressal for Examination-Related Issues

    This sub-metric assesses whether the process for handling student complaints about marks, evaluation quality, or examination administration is documented, time-bound, and effective.

    Peer teams ask to see:

  • A register or system record of revaluation/re-totalling requests received
  • Date of receipt and date of resolution for each request
  • Outcomes (marks revised upward, downward, unchanged)
  • Escalation records for unresolved cases

The NAAC framework specifies that this process should be "transparent, time-bound and efficient." In practice, this means peer teams look for evidence that the institution has a published turnaround commitment — and meets it. An average response time of four to six weeks is generally considered acceptable; three months or longer invites adverse comment.
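
A sketch of how those turnaround statistics fall out of a date-stamped request log; the file name, column names, and the 42-day commitment below are assumptions for illustration:

```python
# Grievance redressal turnaround statistics from a revaluation request log.
# File name, column names, and the 42-day commitment are illustrative assumptions.
import pandas as pd

requests = pd.read_csv("revaluation_requests.csv",
                       parse_dates=["received_on", "resolved_on"])
requests["turnaround_days"] = (requests["resolved_on"] - requests["received_on"]).dt.days

print("Average turnaround:", round(requests["turnaround_days"].mean(), 1), "days")
print("Met the published 42-day commitment:",
      f"{(requests['turnaround_days'] <= 42).mean():.0%}")
print("Unresolved (escalation candidates):", requests["resolved_on"].isna().sum())
```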

    ---

    What Metric 2.6 Requires

    Metric 2.6 — Student Performance and Learning Outcomes — is where examination data connects directly to NAAC scores. Peer teams examine:

  • Pass rates by subject, semester, and year over a five-year window
  • Attainment levels: what percentage of students achieved each grade band
  • Trend analysis: is the institution's examination performance improving, static, or declining?
  • Slow learner interventions: does the institution identify at-risk students early (typically after mid-semester evaluations) and provide structured support?
  • Correlation with learning outcomes: do students who complete the course demonstrate the outcomes specified in the Course Outcomes (COs) and Program Outcomes (POs)?

For Outcome-Based Education (OBE)-aligned institutions — which include most engineering and management colleges seeking NBA accreditation, and an increasing number of universities seeking NAAC A++ grades — the requirement extends further. Peer teams want to see how evaluation results are used to measure CO attainment, and how CO attainment data feeds back into curriculum revision decisions.
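
As a sketch of the first step, here is a minimal direct CO attainment computation, assuming the evaluation system can export question-wise marks and a question-to-CO mapping. The 60% target, file names, and column names are all assumptions:

```python
# Direct CO attainment: share of students scoring >= 60% on the questions
# mapped to each Course Outcome. Target, files, and columns are assumptions.
import pandas as pd

TARGET = 0.60

marks = pd.read_csv("questionwise_marks.csv")  # roll_no, question_id, marks, max_marks
co_map = pd.read_csv("question_co_map.csv")    # question_id, co (e.g. "CO1")

merged = marks.merge(co_map, on="question_id")
per_student = merged.groupby(["co", "roll_no"]).agg(
    scored=("marks", "sum"), possible=("max_marks", "sum"))
per_student["attained"] = per_student["scored"] / per_student["possible"] >= TARGET

attainment = per_student.groupby("co")["attained"].mean().mul(100).round(1)
print(attainment)  # % of students attaining each CO; feeds curriculum review
```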

    ---

    The Evidence Gap: Why Most Institutions Struggle

    The majority of institutions understand what NAAC looks for. The gap is not in knowledge of the framework but in the ability to produce clean, complete, verifiable evidence quickly.

    The Paper-Based Evidence Problem

    When evaluation records are maintained in physical registers:

  • Retrieving answer script samples for peer team review requires physically locating scripts that may have been archived (or returned to students, or destroyed after a retention period)
  • Marks ledgers cannot be queried for trend analysis — someone must manually extract and compile multi-year data
  • Grievance response records are scattered across departmental registers and central examination office files
  • Double valuation outcomes, if recorded at all, are handwritten entries that may be incomplete or illegible

The result is that COEs preparing for NAAC visits spend weeks manually compiling evidence that a well-configured digital system could generate in hours.

    The Completeness Problem

    Peer teams increasingly ask follow-up questions that require drilling into the data rather than accepting summary reports. If a team asks "how many revaluation requests resulted in a mark increase of more than 10 marks across the last three examination cycles," a digital system answers this in seconds. A paper-based system may not be able to answer it at all.
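
For illustration, that exact question is a three-line filter against an exported revaluation log. The file name, column names, and cycle labels below are assumptions about the export format:

```python
# Answer: "How many revaluation requests resulted in a mark increase of more
# than 10 marks across the last three examination cycles?"
# File, column names, and cycle labels are illustrative assumptions.
import pandas as pd

log = pd.read_csv("revaluation_outcomes.csv")  # one row per resolved request
recent = log[log["exam_cycle"].isin(["2023-24 Odd", "2023-24 Even", "2024-25 Odd"])]
big_increases = recent[(recent["revised_marks"] - recent["original_marks"]) > 10]

print(len(big_increases), "requests gained more than 10 marks")
print(big_increases.groupby("exam_cycle").size())
```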

    ---

    How Digital Evaluation Generates NAAC Evidence Automatically

    A digital answer script evaluation platform, configured and operated correctly, continuously produces the evidence that NAAC Criterion 2 requires — without any additional effort at inspection time.

| NAAC Requirement | What Digital Evaluation Provides |
| --- | --- |
| Evaluation schedule adherence | Timestamps of evaluation session start and completion for every script |
| Double valuation records | Automatic log with evaluator IDs, marks entered by each, discrepancy computed, moderation decision recorded |
| Grievance redressal turnaround | Date-stamped request intake and resolution records, response time statistics |
| Answer script samples | Permanent digital archive, retrievable by roll number, subject, or examination date |
| Moderation process records | Full trail of head examiner reviews, mark adjustments, reasons recorded |
| Result generation chain of custody | System log from raw marks input through automated totalling to result finalization |
| Student mark disclosure | Date when results were published, student access log if applicable |
| Slow learner identification | Mid-semester performance flags generated automatically against a defined threshold |
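
As an example of the last row, the slow learner flag reduces to a few lines once mid-semester marks are exportable. The 40% cutoff, file names, and column names are illustrative assumptions:

```python
# Flag students below a mid-semester attainment threshold for remedial support.
# The 40% cutoff, file names, and column names are illustrative assumptions.
import pandas as pd

THRESHOLD = 0.40

mid = pd.read_csv("mid_semester_marks.csv")  # roll_no, subject, marks, max_marks
mid["attainment"] = mid["marks"] / mid["max_marks"]
at_risk = mid[mid["attainment"] < THRESHOLD]

# One flag per student, listing the subjects that triggered it
flags = at_risk.groupby("roll_no")["subject"].apply(list)
flags.to_csv("slow_learner_flags.csv")  # date-stamped evidence for Metric 2.6
print(f"{len(flags)} students flagged before the remedial enrollment window")
```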

    The peer team does not need to take the institution's word for any of these. It can request a dashboard walkthrough, a data export, or a live demonstration of the system retrieving a specific script. The evidence exists, it is complete, and it cannot have been curated for the visit.

    ---

    A Pre-NAAC Evidence Checklist for COEs

    For examination departments preparing six to twelve months before a NAAC cycle:

    Evaluation Process (Metric 2.5)

  • [ ] Generate evaluation completion timeline report for last five academic years — confirm exams were evaluated and results declared within published schedules
  • [ ] Extract double valuation implementation statistics — percentage of papers double-valued, average discrepancy, moderation intervention rate
  • [ ] Compile grievance redressal data — request volume, average response time, resolution rate, by semester and year
  • [ ] Prepare sample set of ten to fifteen digital answer scripts across different subjects for peer team review — include scripts that show clear marks, annotated feedback, and question-wise breakdown
  • [ ] Document evaluator training records — mock evaluation participation, certification of platform competency

Student Performance (Metric 2.6)

  • [ ] Run five-year pass rate trend report, disaggregated by subject and semester (a sketch of this report follows this checklist)
  • [ ] Generate grade distribution analysis — percentage of students in each grade band per examination
  • [ ] Compare at-risk student identification dates against support program enrollment dates — demonstrates early intervention
  • [ ] Prepare CO attainment report if institution operates under OBE framework
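
The trend report in the first item of this checklist is equally mechanical once results are exportable. A sketch, with the file name, column names, and year cutoff assumed:

```python
# Five-year pass rate trend, disaggregated by subject and semester.
# File name, column names, and the year cutoff are illustrative assumptions.
import pandas as pd

results = pd.read_csv("exam_results.csv")  # academic_year, semester, subject, passed (0/1)
recent = results[results["academic_year"] >= "2020-21"]  # last five academic years

trend = (recent.groupby(["academic_year", "semester", "subject"])["passed"]
               .mean()
               .mul(100)
               .round(1)
               .rename("pass_rate_pct")
               .reset_index())

trend.to_csv("pass_rate_trend_5yr.csv", index=False)
# Subject-by-year view, averaged across semesters
print(trend.pivot_table(index="subject", columns="academic_year", values="pass_rate_pct"))
```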

IQAC Linkage

  • [ ] Map digital evaluation outputs to current AQAR submission format — Section 2.5 and 2.6 data should flow directly from the evaluation system into the annual report
  • [ ] Confirm data retention policy covers at least the previous two accreditation cycles (typically ten years)

---

    Connecting Examination Evidence to the Broader Accreditation Picture

    Criterion 2 evidence does not exist in isolation. Strong examination data reinforces performance in other criteria:

  • Criterion 6 (Governance, Leadership and Management): Transparent, well-documented examination processes demonstrate institutional governance quality. Peer teams observing that the examination office can produce complete, accurate data on demand draw favorable conclusions about broader institutional management.
  • Criterion 5 (Student Support and Progression): Data showing that the institution identifies slow learners through early evaluation results and offers structured remedial support directly contributes to Criterion 5 scores.
  • NIRF Rankings: NIRF's Teaching, Learning and Resources (TLR) parameter carries 30% of the total ranking weight. Within TLR, faculty teaching quality and student outcome metrics are drawn in part from the same underlying examination performance data that feeds NAAC Criterion 2.

The institution that builds a robust digital evaluation infrastructure is not merely solving an examination management problem. It is systematically building the evidence base for every quality assessment framework it participates in.

    ---

    The Practical Starting Point

    For institutions that are still managing evaluation primarily on paper, the path to NAAC-ready evaluation evidence begins with a specific set of decisions: choosing a digital evaluation platform that retains complete audit logs, training examination staff on data export and reporting workflows, and connecting the IQAC office to the examination data system so that annual reporting is generated from the same source of truth that peer teams will review.

    The institutions that perform strongest in NAAC Criterion 2 are not necessarily those with the most elaborate examination machinery. They are the ones that can demonstrate, cleanly and quickly, that their examination processes are exactly what they claim to be.

    ---

    Related Reading

  • How Digital Evaluation Improves NAAC Accreditation Scores
  • IQAC, AQAR, and Digital Evaluation Data in 2026
  • NAAC Binary Accreditation 2025: What MBGL Means for Institutions

---

Ready to digitize your evaluation process? See how MAPLES OSM can transform exam evaluation at your institution.