A few days ago, the Ministry of Human Resource Development (MHRD) released the NIRF rankings for 2020. We couldn’t help but notice discrepancies between this year’s rankings, institutes’ previous ranks, and the overall public perception of these institutes.
The new list seems incomplete, incompetent and bordering on random. While part of this outcome can be attributed to the Indian education system, the ranking methodology appears to be the major problem. A little research shows that this has been the case with the NIRF rankings ever since they were first published in 2016. In fact, a 2018 article in The Quint called the NIRF rankings “Laughable and Ridiculous”.
The NIRF rankings were started by the MHRD as an answer to the misleading information floating around the web. They were supposed to be authentic and up to date with developments around the country. But in the five years the framework has been active, these rankings have seemed, year after year, to be the farthest thing from reality.
To illustrate this, let’s take the example of IIT Madras. The Indian Institute of Technology (IIT) Madras, ranked the best Indian institute for the second consecutive year in India’s National Institutional Ranking Framework (NIRF), was nowhere in the list of top 100 institutes released by the Times Higher Education (THE) Ranking, and not in the top 250 of the Quacquarelli Symonds (QS) World University Rankings released earlier this month.
When the director of IIT Madras, Dr. Bhaskar Ramamurthy, was interviewed by The Indian Express, he explained how the international ranking methodology differs from NIRF’s.
“The international rankings do not depend upon the data provided by the institutes. They just ask us 2-3 basic parameters like faculty-student ratio, etc, which too is an important parameter. However, the international rankings overlook other key indices like research, industry collaboration, among others. For international rankings, if we increase our faculty members in one year, it gives us more marks at one point but also ranks us low in citation done by faculty for many years as it takes time for new faculty to research and submit citations. NIRF, on the other hand, asks for data which takes months to compile and one cannot fake these data points.”

Dr. Bhaskar Ramamurthy, Director, IIT Madras
At first, these appear to be genuine concerns. But on reflection, the example above makes it evident that NIRF isn’t taking into account parameters like infrastructure, which brings us to the first issue with the NIRF rankings.
NIRF takes into account five parameters while ranking a college. As holistic as these parameters might seem, they are not the best, and certainly not the only, judges of a college.
To quantify the learning at an institute, NIRF uses student strength, the student-to-faculty ratio (with a focus on permanent faculty), the number of faculty members with PhDs, and the institute’s total budget. What about the number of guest lectures, visiting faculty, or the availability of labs, proper classrooms and hostels? It doesn’t even take into account the number of students who participated in international competitions or projects, or the alumni base.
While it is important to track student strength and the student-to-faculty ratio, NIRF might also want to include parameters like the number of guest lectures a college conducts, competitions won by students, the institute’s emphasis on extra-curricular activities, the strength of its alumni base, the infrastructure provided, the condition of hostels, and so on. Learning does not happen only in classrooms, but the NIRF rankings do not reflect this.
Now, experts are beginning to remind the world that the primary purpose of a university is to help students learn; research is only secondary. When excessive impetus is placed on papers and journals, classroom learning is largely neglected. But NIRF does not recognise this, and places about 30% weightage on Research and Projects.
This brings in the second problem with NIRF rankings.
NIRF in its approach to rank colleges across the country is essentially comparing apples with oranges. Many universities in India have an area of focus: engineering (like BITS-Pilani) or social sciences (like the Tata Institute of Social Sciences). Others offer courses across the arts and sciences, like JNU, UoH, Delhi University, etc. It is the latter that fits the general definition of a university. This means that comparing these two kinds of institutions is not very useful – both from the institutions’ and from the students’ points of view. This is a structural issue.
The MHRD seems to understand this and publishes rankings of institutions in subject-wise categories like pharmacy, engineering and management. It also admits that rankings for medicine, architecture and law could not be formulated because of a lack of meaningful participation by the respective institutes. However, this understanding seems only partial: many of the highly ranked institutions are primarily engaged in teaching engineering.
And this brings us to the next problem with NIRF Rankings.
Now, the fundamental issue with weighted rankings is the question of weights. Weights are an indication of priority, but whose priority? Currently, as seen above, NIRF has decided that research performance counts towards 30% of an institute’s rank. Does the average undergraduate student really value research to that extent? No; this is an indication of the government’s priorities.
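To make the weighting concern concrete, here is a minimal sketch of how a weighted composite score behaves. The parameter names follow NIRF’s five broad heads, and the 30/30/20/10/10 split mirrors the roughly 30% research weightage discussed above; the per-parameter scores are entirely made up for illustration.

```python
# Illustrative sketch of a weighted composite rank score.
# Parameter names follow NIRF's five heads; the scores below are hypothetical.
WEIGHTS = {
    "teaching_learning_resources": 0.30,
    "research_professional_practice": 0.30,
    "graduation_outcomes": 0.20,
    "outreach_inclusivity": 0.10,
    "perception": 0.10,
}

def composite_score(scores: dict) -> float:
    """Weighted sum of per-parameter scores (each out of 100)."""
    return sum(WEIGHTS[p] * scores[p] for p in WEIGHTS)

# A hypothetical institute strong in teaching but weak in research:
scores = {
    "teaching_learning_resources": 90,
    "research_professional_practice": 40,
    "graduation_outcomes": 85,
    "outreach_inclusivity": 70,
    "perception": 60,
}
print(composite_score(scores))  # 0.3*90 + 0.3*40 + 0.2*85 + 0.1*70 + 0.1*60 = 69.0
```

Notice that the 30% research weight alone drags this teaching-focused institute’s composite score down by 15 points relative to a hypothetical equal split between teaching and research, which is exactly the “whose priority?” problem: the same data, under different weights, produces a very different ladder.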
With its various segments (medicine, law, etc.), the NIRF rankings need to be scaled according to each program’s needs and curriculum.
The weightage method isn’t wrong; the weightage criteria are.
The list of colleges and universities ranked does not include every college in the country. Since participation and data sharing are left to each institute, many universities and colleges do not share their data, on the grounds that they do not consider the NIRF rankings authentic.
This is a fundamental flaw: the population used for the rankings does not include all colleges. As a result, many colleges climb the ladder without deserving the rank assigned to them.
The MHRD’s initiative to provide one list that encapsulates all these aspects and views is not wrong, but it is somewhat misguided. When students or their parents research a college, they are bound to look beyond the NIRF rankings, and finding their chosen college ranked poorly globally but highly in the country might raise questions about the authenticity of the rankings.
So perhaps, instead of publishing an absolute list, NIRF should publish a number of variables, like the number of male and female students, the student-faculty ratio, the infrastructure available, spending per student, etc., and let students sort the list of institutions based on their own idea of a good institution, in order to genuinely empower them. This is the system followed in Germany, and it allows for a less cut-throat, more diverse higher-education ecosystem, which is what today’s youth essentially needs.
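The sorting idea above can be sketched in a few lines: publish the raw variables and let each student apply their own weights. Every institute name, variable and number below is hypothetical; a real tool would also need to normalise variables to a common scale before weighting them.

```python
# Sketch of "publish the variables, let students sort": each student supplies
# their own priorities instead of accepting one government-chosen weighting.
# All institutes and figures here are invented for illustration.
institutes = [
    {"name": "A", "faculty_student_ratio": 0.10, "spend_per_student": 9.0},
    {"name": "B", "faculty_student_ratio": 0.07, "spend_per_student": 12.0},
    {"name": "C", "faculty_student_ratio": 0.12, "spend_per_student": 6.0},
]

def personal_rank(institutes, weights):
    """Order institutes by a student's own weighted preferences."""
    def score(inst):
        return sum(w * inst[var] for var, w in weights.items())
    return sorted(institutes, key=score, reverse=True)

# A student who cares only about faculty availability gets one ordering...
print([i["name"] for i in personal_rank(institutes, {"faculty_student_ratio": 1.0})])
# ...while a student who cares only about spending gets a different one.
print([i["name"] for i in personal_rank(institutes, {"spend_per_student": 1.0})])
```

The point of the sketch is that there is no single “correct” ladder: the same published data yields a different order for each set of priorities, which is precisely why an absolute list flattens information that students could otherwise use.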