Research assessment 
SBA Research Support

Contacts

If you need more information on this topic, contact your library:
ricercabali@unive.it, ricercabas@unive.it, ricercabaum@unive.it, ricercabec@unive.it, bda@unive.it

Measuring research is an essential activity for:

  • the evaluation of the impact of academic scientific production
  • the allocation of funding to research projects
  • procedures for national scientific qualification and career progression.

ANVUR (Italian National Agency for the Evaluation of Universities and Research Institutes) oversees the national quality evaluation system for universities and research bodies and the processes for the National Scientific Habilitation and the Research Quality Assessment.
ANVUR also oversees the rating of scientific journals, classifying them as 'scientific' and 'class A' journals (a subset of scientific journals).

The analysis must be based on the critical interpretation of data and information, combining qualitative and quantitative tools to reach an objective judgement of merit, with the awareness that no assessment is perfect.

The assessment method differs according to the disciplinary areas:

  • for technical-scientific-medical areas, the evaluation is based on bibliometric indicators (number of publications, citations…)
  • for humanistic-social areas, the evaluation is based on non-bibliometric indicators (peer review)

However, there is no clear division between bibliometric and non-bibliometric sectors because scientific communication evolves very rapidly.

Assessment methods

Quantitative analysis is carried out using bibliometric indicators and citation databases. Quantitative evaluation is based on citations: in a scientific article it is essential (as well as ethical) to cite sources, and this creates a link with previous work.
This evaluation can take several elements into account:

  • number of citations received by an article (citation index), which can be found in citation databases such as Web of Science, Scopus, Google Scholar. This indicator answers the question, "What scientific value does this research product have?"
  • number of citations received by the journal in which an article is published. Depending on the citation database consulted, different bibliometric indicators are available; the main ones include:

    • IF (Impact Factor): a bibliometric indicator of Journal Citation Reports from Clarivate Analytics, which derives its data from the citation indexes linked to Web of Science. The IF of a journal in a given year is the ratio between the citations received that year by the articles the journal published in the previous two years and the total number of articles it published in those two years (a worked sketch follows this list)
    • SJR (Scimago Journal & Country Rank): a bibliometric indicator of the Scopus database, calculated both by counting the number of citations and by assessing the prestige of the journal from which the citation came, thus assigning a different “weight” to the citations depending on their origin
    • Google Scholar Citations: an indicator based on the association between a researcher's Google Scholar profile and the publications that Google Scholar is able to identify through Google's data-mining mechanisms.

    These indicators answer the question: "How scientifically authoritative is this journal?"

  • number of citations received by a single author: an indicator measuring the productivity and impact of an author's publications is the H-index (Hirsch Index), which is based on both the number of publications and the number of citations received (an author has an H-index of h if h of their publications have each received at least h citations). The H-index is automatically calculated in Web of Science and Scopus.
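
To make the indicators above concrete, the following is a minimal sketch in Python; it is not an official tool of any database, and the function names and figures are invented purely for illustration. It computes a two-year Impact Factor as the ratio described above, and an H-index as the largest h for which at least h publications have at least h citations each.

```python
# Illustrative only: real values come from databases such as
# Journal Citation Reports (Web of Science) or Scopus.

def impact_factor(citations_to_prev_two_years: int, articles_prev_two_years: int) -> float:
    """Two-year Impact Factor: citations received in year X by articles
    published in years X-1 and X-2, divided by the number of articles
    published in those two years."""
    return citations_to_prev_two_years / articles_prev_two_years

def h_index(citation_counts: list[int]) -> int:
    """H-index: the largest h such that at least h publications have
    each received at least h citations."""
    h = 0
    for rank, citations in enumerate(sorted(citation_counts, reverse=True), start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical journal: 200 citations in 2023 to the 80 articles it
# published in 2021-2022 -> IF = 2.5
print(impact_factor(200, 80))

# Hypothetical author with publications cited 10, 7, 5, 4, 2 and 1 times
# -> H-index = 4 (four papers with at least four citations each)
print(h_index([10, 7, 5, 4, 2, 1]))
```

In real use, the citation counts come from the citation databases mentioned above (Web of Science, Scopus, Google Scholar).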

Peer review is a qualitative analysis through which the quality and scientific rigour of articles proposed for publication are assessed. The products are evaluated by experts in the field to assess their suitability for publication in specialised journals or, in the case of projects, for funding.
Peer review is a tool that complements bibliometric analysis, especially in “non-bibliometric” research areas (humanities and social sciences).

There are different types of peer review. The traditional ones are single blind (i.e., the author does not know the identity of the reviewer) and double blind (i.e., neither the author nor the reviewer knows the identity of the other). These types of peer review, based on the anonymity of the reviewers and the confidentiality of the reviews, have shown some limitations over time: difficulty in guaranteeing true anonymity, potential conflicts of interest, reviewer selection bias on the part of the journal, the danger of plagiarism of as yet unpublished work, and a lack of incentives and recognition for the review work, which makes the whole process increasingly costly and less sustainable. This is why open peer review is increasingly popular. It is based on maximum transparency of the process: not only are author and reviewer known to each other, but readers can also view the reviewers' comments on the journal's website, with the aim of increasing the quality and reliability of reviews.

Peer review filters information and research, retaining what is truly reliable and worthy of publication while discarding what is unoriginal, dubious, unconvincing, false or even fraudulent. However, the review process can be very lengthy and slow down the publication of an article.

Towards a reform of the research assessment system

A profound revision of the research assessment system is under way, which also involves the European Commission (Towards a reform of the research assessment system, Coalition for Advancing Research Assessment) and the national assessment agencies.
The spread of Open Science calls for a change of perspective that rewards practices of openness and sharing and revises the concept of “excellence”, so that the real impact of research on society is considered rather than the Impact Factor of journals.
All this prompts the use of alternative and complementary metrics (Altmetrics) and adherence to international initiatives (DORA Declaration) in support of new evaluation parameters that take into account the content of the scientific work more than the container (journal).

Alternative metrics: Altmetrics

Altmetrics (article-level metrics) consider the degree of popularity and dissemination of scientific contributions via the web and social media; they do not replace the traditional metrics of commercial bibliographic databases based on citation counts, but are complementary to them.
They also lend themselves well to use in open access platforms thanks to their graphic layout and use of bibliometric badges that can be incorporated into any database (institutional repositories, bibliographic-citational databases, etc.).
An example is PlumX, EBSCO software acquired and now used by Elsevier, which shows in an engaging way the social visibility of research products indexed in Scopus and complements traditional bibliometric indicators.

Declaration on Research Assessment (DORA)

The Declaration on Research Assessment (DORA) is a document drawn up in 2012 by a group of editors and publishers of scientific journals with the aim of improving the way in which scientific research products are evaluated in terms of quality and impact. DORA contains 18 recommendations addressed to the various actors in the world of research (funders, institutions, publishers, organisations producing bibliometric data, researchers), which can be summarised in 3 main principles:

  1. elimination of the use of quantitative metrics related to scientific journals, such as the Impact Factor, in funding, recruitment and promotion decisions, as these journal-level indicators have significant limitations
  2. evaluation of scientific research on its intrinsic merits and not on the basis of the journal in which the articles are published
  3. need to exploit the opportunities offered by online publication (e.g., by reducing the limits placed on the number of words, images and bibliographical references in articles, and exploring new indicators of relevance and impact).

The recommendations of the DORA Declaration, while referring specifically to scientific articles published in peer-reviewed journals, can also be extended to data sets.

Useful tools

Research evaluation support tools:

  • ARCA: among its functions, it features tools dedicated to the monitoring and self-assessment of the scientific production of the university's researchers.
  • SciVal: a tool for the quantitative analysis and evaluation of scientific production, developed by Elsevier and based on the Scopus database.

Last update: 17/04/2024