Ranking in the Scientific Research Network

http://blog.impactstory.org/four-great-reasons-to-stop-caring-so-much-about-the-h-index/

Recently, we have discussed the internet network at length, along with specific methods of ranking web pages to establish power or “authority” hierarchies on the internet. As I listened in class, I realized that these discussions bore remarkable similarities to the scientific research publication network and the systems in place for ranking researchers.

On the Internet, each web page usually contains directed links to other web pages, and as linking accumulates, a giant strongly connected component tends to emerge. In scientific research, each publication cites other relevant publications in its field, again creating a directed graph. This likewise tends to produce a giant strongly connected component within each specific subfield of research.

The methods for finding the “best” choice are also similar for both the internet and the scientific research network. To identify the “best” web page for a search query, one can give each page an Authority and Hub score and a ranking based on its endorsements (either direct or indirect). In scientific research, one main method of measuring an author’s rank is the h-index. This measures citation impact by essentially asking: “what is the highest number h such that h of their publications have each been cited in OTHER papers at least h times?” This is similar to the Authority score in that it measures how often other publications direct links to this author. The index essentially assigns a score to each scientist and creates a ranked hierarchy.
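The h-index definition above is simple enough to compute directly. Here is a minimal sketch (the function name and the example citation counts are my own, purely for illustration): sort a researcher’s per-paper citation counts in descending order, then find the largest h where the h-th paper still has at least h citations.

```python
def h_index(citations):
    """Return the largest h such that h papers each have at least h citations.

    `citations` is a list with one citation count per paper.
    """
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# A researcher with papers cited 10, 8, 5, 4, and 3 times:
# four papers have at least 4 citations, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note how a single blockbuster paper barely moves the score: `h_index([1000])` is still just 1, which previews the flaw discussed below.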

However, because these two networks are similar, they also suffer similar flaws. The linked article discusses the shortcomings of using the h-index to rank researchers. In both networks, these ranking systems were created to quantify, in an easily measurable form, something that inherently resists quantification: the quality of someone’s work. In fact, these systems encourage quantity over quality. Just as the h-index undervalues lesser-known scientists who may have a single hit paper (the article cites the example of “Big Food” research), web rankings can undervalue lesser-known websites with good content. Further, both overvalue instances where an author’s (or a webpage’s) role may have been small. For instance, some research papers feature hundreds of co-authors. Even though each authorship should not be weighted equally, the h-index does not take this into consideration.

Use of the h-index in scientific research is a controversial topic, especially among the graduate students I work with, but I hope the article brings to light the intertwined nature (and flaws) of the research and web networks.
