The Ministry of Human Resource Development, Government of India has come up with the National Institutional Ranking Framework (NIRF) to rank institutions of higher education in India. I tried to dig deeper and do some research of my own to find out whether this will help us build a better education system for the country.

What is NIRF?

NIRF is a framework that ranks our educational institutions on the basis of 5 parameters, each consisting of a few metrics. You can read more about the methodology in this document.

Parameters

  • Teaching, Learning & Resources
    • Student strength including Doctoral students
    • Faculty-student ratio with emphasis on permanent faculty
    • Combined metric for faculty with PhD (or equivalent) and experience
    • Total budget and its utilisation
  • Research and Professional practice
    • Combined metric for publications
    • Combined metric for quality of publications
    • IPR and Patents: filed, published, granted and licensed
    • Footprint of projects and professional practice and Executive Development programs
  • Graduation outcomes
    • Combined % for placement, higher studies, and Entrepreneurship
    • Metric for university examinations
    • Median salary
    • Metric for graduating students admitted into top universities
    • Metric for number of Ph.D. students graduated
  • Outreach and Inclusivity
    • Percent students from other states/countries
    • Percentage of women
    • Economically and socially challenged students
    • Facilities for physically challenged students
  • Perception
    • Peer Perception: employers and research investors
    • Peer Perception: academics
    • Public Perception
    • Competitiveness
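Since NIRF condenses these parameters into a single score per institution, here is a minimal sketch of how such a weighted-sum ranking works in general. The weights, parameter scores, and institution names below are entirely made up for illustration; they are not NIRF's actual values.

```python
# Hypothetical weighted-sum ranking, illustrating how a framework like
# NIRF might condense per-parameter scores into one rank.
# All weights and scores below are invented for illustration only.

WEIGHTS = {
    "teaching": 0.30,    # Teaching, Learning & Resources
    "research": 0.30,    # Research and Professional Practice
    "graduation": 0.20,  # Graduation Outcomes
    "outreach": 0.10,    # Outreach and Inclusivity
    "perception": 0.10,  # Perception
}

def overall_score(scores: dict) -> float:
    """Weighted sum of per-parameter scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[p] * scores[p] for p in WEIGHTS)

institutions = {
    "Institute A": {"teaching": 80, "research": 90, "graduation": 70,
                    "outreach": 60, "perception": 85},
    "Institute B": {"teaching": 85, "research": 70, "graduation": 80,
                    "outreach": 75, "perception": 65},
}

# Rank institutions by descending overall score.
ranked = sorted(institutions, key=lambda i: overall_score(institutions[i]),
                reverse=True)
for rank, name in enumerate(ranked, start=1):
    print(rank, name, round(overall_score(institutions[name]), 1))
```

Note how the final number hides everything interesting: two very different institutions collapse into two scalars, and the choice of weights alone can decide who comes out on top.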

Why we don't need NIRF

It's measuring too many things, and the intent is not very clear

From the NIRF website, its Wikipedia page, and numerous other sources, it is pretty clear that it is just a framework to rank our educational institutions. I had expected this to be more than a ranking exercise: a basis for other goals like fostering research, improving graduation rates, encouraging inclusivity, etc.

However, it is very unclear how we are going to use these metrics to gain any actionable insight towards a goal, of which there seems to be none as of now. This is as good as saying that Virat Kohli has scored more runs than Sachin Tendulkar did in his first 50 Test matches. It is a good piece of trivia, but it does not help either of them become a better player. We all know it is not good for a team when individual players start chasing personal milestones. These stats on their own are not enough to compare Kohli with Tendulkar; they only give us a rough idea, which makes them proxy indicators rather than direct measures.

Proxy metrics are often misleading

It is good that these metrics take into account important factors like graduation rate, placement rate, inclusion, research, etc., but none of them gives you an idea of what I think is the most important measure: what the students have actually learned. Since proxy metrics are only rough indicators, they are also very easy to game, in the same way that more Instagram followers does not always imply a better photographer.

It is not uncommon for us to lose the real picture while chasing proxies.

Our quality of education has suffered because we have been chasing proxy metrics like grades, ranks, and GPAs, while the real value of learning has taken a backseat. The parameters within the framework are good goals to strive for, but not at the expense of quality education.

Even if an institution improves its metrics, its rank might still fall if others are improving at a much faster rate. This happened with Texas Christian University, whose ranking fell even though its metrics improved year on year. The system fails to acknowledge the improvement of an institution in isolation.
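To make that concrete, here is a toy illustration with made-up numbers: an institution's absolute score improves from one year to the next, yet its rank falls because its competitors improve faster.

```python
# Toy illustration: rank is relative, so an institution can improve its
# absolute score and still fall in the rankings.
# All scores below are invented; "TCU" stands in for any institution.

year1 = {"TCU": 70.0, "Rival X": 68.0, "Rival Y": 65.0}
year2 = {"TCU": 74.0, "Rival X": 78.0, "Rival Y": 76.0}  # everyone improved

def rank_of(name: str, scores: dict) -> int:
    """1-based rank of `name` when scores are sorted in descending order."""
    ordering = sorted(scores, key=scores.get, reverse=True)
    return ordering.index(name) + 1

print(rank_of("TCU", year1))  # 1: best score in year 1
print(rank_of("TCU", year2))  # 3: worst rank in year 2, despite +4 points
```

A league table like this rewards only relative position, so an institution that is genuinely getting better can still be publicly branded as declining.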

Dangers of losing out

In the past, there have been multiple accounts of institutes cheating to boost their rankings. For example, King Abdullah University hired top researchers to boost its ranking.

The cobra effect occurs when an attempted solution to a problem makes the problem worse.

Cobra Effect - Wikipedia

The following is a table from a paper by Marc A. Edwards and Siddhartha Roy, Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition, which clearly outlines the unintended negative consequences of certain incentives. NIRF shares some of these metrics as well.

More on universities using inflated data to boost their rankings at US News, The New York Times, and The Washington Post.

One framework does not fit all

Not all institutions are the same; each has its own specialities and proficiencies. This is as good as comparing apples with things that are not apples. Even though the framework now includes separate rankings for different types of institutions, their specialities and strengths still differ within each category. Once these institutions start getting ranked on a fixed set of metrics, they might have to compromise their individuality in favor of better conformance with the metrics and a better ranking.

"The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."

Campbell's Law

The Silver Bullet

Ummm.. I don't really have a silver bullet. The framework is just a ranking system; it is not trying to solve any problem, at least not from the information available so far. Case studies and research have shown that ranking systems are not just useless but can actually make things worse. I don't know what works, but by looking at existing systems I do know what hasn't worked historically and the pitfalls of implementing such a system.

In 2013, President Obama argued that such ranking systems incentivize gaming the system. In 2015, he introduced a new tool, College Scorecard, which is not a ranking system but a tool that provides students with relevant data about colleges so they can make an informed decision.

It’s impossible to capture an institution’s value, the college experience, and its impact on students with a single metric.

The Atlantic

Closing notes

This is a great step towards encouraging transparency in the system, but I am not in favor of the ranking process. I am afraid it will soon turn into a rat race with the potential to worsen the current state of the system.

We as a civilization are not hesitant to game systems to get our way, be it SEO rankings or the more recent attempt to denotify National Highways to work around the ban on selling liquor near major highways.

I am not hopeful that NIRF will help us solve any major discrepancies within the current system; it might even make things worse. I am happy to be proven wrong here, but looking at existing data points, that seems very unlikely. Most of the burden of boosting the rankings will be borne by the students, putting even more pressure on them.

When a measure becomes a target, it ceases to be a good measure.

Goodhart's Law

The framework in its current form is not helpful at all. I hope we progress in the right direction, and I sincerely hope we don't lose our way in the process.

I'd like to know your thoughts about NIRF. Please let me know what you think in the comments section.