What NAAC Peer Teams Check in Criterion 2: An Evaluation Evidence Guide
Criterion 2 carries 350 points in NAAC's framework — the highest of the seven criteria. Here is exactly what peer teams verify during site visits, and how digital evaluation systems generate this evidence automatically.

Why Criterion 2 Decides Accreditation Outcomes
When a NAAC peer team arrives on campus for a site visit, its most consequential days are spent on Criterion 2: Teaching-Learning and Evaluation. Of the 1,000 points that determine a NAAC grade, Criterion 2 contributes 350 — more than any other single criterion.
Criterion 1 (Curricular Aspects) carries 150 points. Criterion 3 (Research) carries 250. But it is Criterion 2 — which spans everything from student enrollment profile through teaching methodology to the examination process itself — that defines the academic heartbeat of an institution.
Within Criterion 2, the two metrics most directly affected by how an institution manages its examination and evaluation system are Metric 2.5 (transparency of internal assessment and examination grievance redressal) and Metric 2.6 (student performance and learning outcomes).
Institutions that can present strong, verifiable evidence under these two metrics frequently lift their overall NAAC score by more than any other single improvement area.
This guide is for Controllers of Examinations, Registrars, and IQAC coordinators who want to understand precisely what peer teams look for — and how to build the evidence portfolio that answers those questions before the visit begins.
---
What Metric 2.5 Actually Measures
NAAC breaks Metric 2.5 into two sub-components.
2.5.1 — Transparency and Robustness of Internal Assessment
Peer teams evaluating this metric want to know whether the institution's internal assessment mechanism is transparent, robust, and consistently documented.
What peer teams ask to see:
| Evidence Type | What the Team Looks For |
|---|---|
| Internal exam schedule | Were exams conducted on the published schedule? |
| Answer script samples | Random scripts: are marks recorded clearly, with question-wise breakdowns? |
| Student mark disclosure records | Can students see their scripts or marks? |
| Double valuation records | Were second evaluations conducted? Were discrepancies resolved? |
| Evaluation guidelines | Did examiners receive marking schemes before evaluating? |
The most common peer team observation under 2.5.1 is the absence of double valuation records. Institutions that conduct second evaluations informally — without logging outcomes — cannot demonstrate the practice even when it occurs.
2.5.2 — Grievance Redressal for Examination-Related Issues
This sub-metric assesses whether the process for handling student complaints about marks, evaluation quality, or examination administration is documented, time-bound, and effective.
Peer teams ask to see the published grievance procedure, date-stamped records of each request from intake through resolution, and turnaround statistics.
The NAAC framework specifies that this process should be "transparent, time-bound and efficient." In practice, this means peer teams look for evidence that the institution has a published turnaround commitment — and meets it. An average response time of four to six weeks is generally considered acceptable; three months or longer invites adverse comment.
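The turnaround benchmark above is straightforward to verify when intake and resolution dates are logged. The following sketch shows one way to compute it; the grievance records and the six-week cutoff are illustrative assumptions, not a NAAC-prescribed format.

```python
from datetime import date
from statistics import mean

# Hypothetical grievance log: (request received, request resolved) date pairs.
grievances = [
    (date(2024, 1, 10), date(2024, 2, 14)),
    (date(2024, 1, 22), date(2024, 3, 4)),
    (date(2024, 2, 3), date(2024, 3, 1)),
]

# Turnaround in days for each request, then the average in weeks.
turnaround_days = [(resolved - received).days for received, resolved in grievances]
avg_weeks = mean(turnaround_days) / 7

print(f"Average turnaround: {avg_weeks:.1f} weeks")
# Flag the portfolio if the average exceeds the six-week benchmark.
print("Within benchmark" if avg_weeks <= 6 else "Exceeds benchmark")
```

A COE preparing for a visit can run the same computation per semester to show the commitment is met cycle after cycle, not just on average.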
---
What Metric 2.6 Requires
Metric 2.6 — Student Performance and Learning Outcomes — is where examination data connects directly to NAAC scores. Peer teams examine pass percentages, result trends across examination cycles, and how the institution measures attainment of its stated learning outcomes.
For Outcome-Based Education (OBE)-aligned institutions — which includes most engineering and management colleges seeking NBA accreditation, and an increasing number of universities seeking NAAC A++ grades — the requirement extends further. Peer teams want to see how evaluation results are used to measure CO attainment, and how CO attainment data feeds back into curriculum revision decisions.
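The CO-attainment link described above can be made concrete with a small sketch. The question-to-CO mapping, the 60% threshold, and the sample marks below are all assumptions for illustration; institutions define their own attainment rules.

```python
# Hypothetical sketch of CO attainment from question-wise evaluation marks.
# Assumption: attainment for a CO = fraction of students scoring at least
# 60% of the marks allotted to that CO's questions.

# question -> (course outcome, maximum marks)
question_map = {"Q1": ("CO1", 10), "Q2": ("CO1", 10), "Q3": ("CO2", 20)}

# student -> question-wise marks awarded
scores = {
    "S1": {"Q1": 8, "Q2": 7, "Q3": 12},
    "S2": {"Q1": 4, "Q2": 5, "Q3": 18},
    "S3": {"Q1": 9, "Q2": 9, "Q3": 10},
}

def co_attainment(co: str, threshold: float = 0.6) -> float:
    """Fraction of students meeting the threshold on questions mapped to co."""
    qs = [q for q, (c, _) in question_map.items() if c == co]
    max_total = sum(question_map[q][1] for q in qs)
    attained = sum(
        1 for marks in scores.values()
        if sum(marks[q] for q in qs) / max_total >= threshold
    )
    return attained / len(scores)

print(co_attainment("CO1"))  # fraction of students attaining CO1
```

Because the marks are recorded question-wise at evaluation time, the same data answers both the 2.5.1 question (question-wise breakdowns on scripts) and the 2.6 question (CO attainment feeding curriculum revision).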
---
The Evidence Gap: Why Most Institutions Struggle
The majority of institutions understand what NAAC looks for. The gap is not in knowledge of the framework but in the ability to produce clean, complete, verifiable evidence quickly.
The Paper-Based Evidence Problem
When evaluation records are maintained in physical registers, every evidence request becomes a manual search: retrieving sample scripts, reconstructing double valuation outcomes, or computing grievance turnaround times means paging through files by hand.
The result is that COEs preparing for NAAC visits spend weeks manually compiling evidence that a well-configured digital system could generate in hours.
The Completeness Problem
Peer teams increasingly ask follow-up questions that require drilling into the data rather than accepting summary reports. If a team asks "how many revaluation requests resulted in a mark increase of more than 10 marks across the last three examination cycles," a digital system answers this in seconds. A paper-based system may not be able to answer it at all.
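The drill-down question quoted above reduces to a simple filter once revaluation outcomes are stored as structured records. The record layout and cycle labels below are illustrative assumptions about what a digital system might export.

```python
# Hypothetical revaluation log export.
# Each record: (examination cycle, original mark, mark after revaluation).
revaluations = [
    ("2023-odd", 52, 55),
    ("2023-even", 41, 58),
    ("2024-odd", 60, 60),
    ("2024-even", 33, 47),
    ("2024-even", 70, 72),
]

last_three_cycles = {"2023-even", "2024-odd", "2024-even"}

# Requests in the last three cycles whose mark rose by more than 10.
big_increases = [
    (cycle, before, after)
    for cycle, before, after in revaluations
    if cycle in last_three_cycles and (after - before) > 10
]
print(len(big_increases))
```

The same filter, rerun with different thresholds or cycles, answers any variant of the peer team's question in seconds.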
---
How Digital Evaluation Generates NAAC Evidence Automatically
A digital answer script evaluation platform, configured and operated correctly, continuously produces the evidence that NAAC Criterion 2 requires — without any additional effort at inspection time.
| NAAC Requirement | What Digital Evaluation Provides |
|---|---|
| Evaluation schedule adherence | Timestamps of evaluation session start and completion for every script |
| Double valuation records | Automatic log with evaluator IDs, marks entered by each, discrepancy computed, moderation decision recorded |
| Grievance redressal turnaround | Date-stamped request intake and resolution records, response time statistics |
| Answer script samples | Permanent digital archive, retrievable by roll number, subject, or examination date |
| Moderation process records | Full trail of head examiner reviews, mark adjustments, reasons recorded |
| Result generation chain of custody | System log from raw marks input through automated totalling to result finalization |
| Student mark disclosure | Date when results were published, student access log if applicable |
| Slow learner identification | Mid-semester performance flags generated automatically against defined threshold |
The peer team does not need to take the institution's word for any of these. It can request a dashboard walkthrough, a data export, or a live demonstration of the system retrieving a specific script. The evidence exists, it is complete, and it cannot have been curated for the visit.
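The double valuation row in the table above implies a concrete mechanism: compare the two evaluators' marks and flag scripts whose discrepancy exceeds a tolerance for moderation. The tolerance value and script identifiers below are assumptions; institutions set their own rules.

```python
# Hypothetical double-valuation log: script id -> (evaluator 1, evaluator 2) marks.
double_valuation = {
    "SCR-001": (62, 64),
    "SCR-002": (48, 61),
    "SCR-003": (75, 70),
}

TOLERANCE = 5  # maximum acceptable difference in marks (assumed value)

# Scripts exceeding the tolerance, with the discrepancy, routed to moderation.
flagged = {
    script: abs(m1 - m2)
    for script, (m1, m2) in double_valuation.items()
    if abs(m1 - m2) > TOLERANCE
}
print(flagged)
```

Logging the flag and the eventual moderation decision alongside each script is exactly the record whose absence is the most common peer team observation under 2.5.1.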
---
A Pre-NAAC Evidence Checklist for COEs
For examination departments preparing six to twelve months before a NAAC cycle:
Evaluation Process (Metric 2.5)
- Internal exam schedules published in advance and archived with conduct dates
- Sample answer scripts retrievable, with question-wise mark breakdowns
- Double valuation logs recording both evaluators' marks, discrepancies, and moderation decisions
- Grievance requests date-stamped from intake through resolution, with turnaround statistics
Student Performance (Metric 2.6)
- Result data organized by programme and examination cycle
- CO attainment computed from evaluation results, with records of how attainment fed curriculum revision decisions
IQAC Linkage
- IQAC annual reporting generated from the same examination data system that peer teams will review
---
Connecting Examination Evidence to the Broader Accreditation Picture
Criterion 2 evidence does not exist in isolation. Strong examination data reinforces performance in other criteria: CO attainment records support curriculum revision under Criterion 1 (Curricular Aspects), and IQAC reporting built on examination data strengthens the quality assurance metrics under Criterion 6.
The institution that builds a robust digital evaluation infrastructure is not merely solving an examination management problem. It is systematically building the evidence base for every quality assessment framework it participates in.
---
The Practical Starting Point
For institutions that are still managing evaluation primarily on paper, the path to NAAC-ready evaluation evidence begins with a specific set of decisions: choosing a digital evaluation platform that retains complete audit logs, training examination staff on data export and reporting workflows, and connecting the IQAC office to the examination data system so that annual reporting is generated from the same source of truth that peer teams will review.
The institutions that perform strongest in NAAC Criterion 2 are not necessarily those with the most elaborate examination machinery. They are the ones that can demonstrate, cleanly and quickly, that their examination processes are exactly what they claim to be.
---
Ready to digitize your evaluation process?
See how MAPLES OSM can transform exam evaluation at your institution.