Working towards more responsible research assessment
We posed questions to Responsible Research Assessment (RRA) ‘champion’ Andrew Hogan about research assessment and why he and others want to see it improve
Developed in 2012, the Declaration on Research Assessment (DORA) aimed to establish solid, trustworthy strategies that fairly and equitably assess research around the globe and across all academic disciplines. In 2018, EMBL became an official signatory of DORA and a working group was then convened to ensure the principles are embedded into EMBL practice and culture. In 2023, EMBL signed the Agreement on Reforming Research Assessment by CoARA, a coalition of organisations committed to developing a shared direction for research assessment practices.
The working group has developed and formalised best practices in research assessment and enlisted the help of a group of champions to spread the word. Our Responsible Research Assessment (RRA) champions share information about the progress EMBL is making and advocate for the importance of assessing research meaningfully, and how everyone can contribute to this much-needed change.
Andrew Hogan is a joint postdoctoral researcher in the Typas and van Gestel groups at EMBL Heidelberg and one of the 20 current RRA champions. This group, which includes champions at all of EMBL’s sites, brings together people who want research assessment to become fairer and to recognise a wider range of scientific outputs beyond publications. Here, he shares his views and explains what motivated him to become an RRA champion.
Currently, what are some of the biggest problems with how scientific research is assessed?
Discussions on how scientists and their work should be evaluated have been going on for many years. Many at EMBL may already be familiar with a few issues. Overall, assessments rely too heavily on inadequate measures of productivity (e.g., the Journal Impact Factor, or JIF, and h-index). JIFs, calculated by the analytics provider Clarivate, are defined as the number of citations a journal receives in a given year to items it published in the previous two years, divided by the number of citable items published in those two years. The h-index, in turn, is a score that represents the impact of an author’s work, based on their number of publications and the citations those publications receive.
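To make the h-index definition concrete: it is the largest number h such that an author has h papers each cited at least h times. A minimal, purely illustrative sketch (not part of any real assessment tool):

```python
def h_index(citations):
    """Return the largest h such that the author has h papers
    with at least h citations each."""
    # Sort citation counts from highest to lowest
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times give an h-index of 4:
# four papers have at least 4 citations, but not five papers with 5.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

The example also illustrates one of the metric’s known limitations: it ignores what the papers contain, who cited them, and every output that is not a cited publication.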
While the concept of using metrics to assess researchers is not inherently wrong, the JIF and h-index have many known flaws. For example, the JIF doesn’t allow accurate comparison of journals across fields – social sciences journals, for instance, tend to receive lower scores than those publishing biomedical research. The JIF also measures only a narrow range of scientific outputs (e.g. research articles and reviews) and holds biases against publications that aren’t in English.
Relying on inadequate metrics can lead to distorted incentives, where researchers may prioritise publishing in larger volume or in ‘high-impact’ journals, over the quality or openness of their research. Inevitably, the pressure to publish increases, which leads to the aphorism ‘publish or perish’.
Additionally, some types of research outputs get overlooked by current productivity metrics. For example, planning conferences/workshops, developing scientific equipment (either for commercial or educational use), writing interviews or blogs, participating in policy boards, engaging in public outreach, and developing software, databases, or protocols for other scientists – these are just a small sampling of valid research outputs. However, they have often been excluded in metrics because it’s been difficult to quantify impact, especially beyond the scientific community.
Why is it important to solve these problems?
Research assessment is an aspect of the profession that shapes science culture, including how we (and others!) perceive our work. Unfortunately, the lack of career or funding-related benefits can lead researchers to devalue non-traditional outputs, which makes this an important issue to resolve. We have deep connections to what we do; science is a whole way of life rather than just a job. If we stay focused on traditional outputs, we risk undervaluing other important contributions. While such contributions may not result in immediate high-impact publications, they can have significant long-term, real-world impacts.
The traditional system also disadvantages researchers from underrepresented groups and ignores their experiences. The diversity in our life paths, and new ways of thinking we gain along the way, cannot be represented by current impact metrics. Diversity is a strength, and accepting this allows us to recognise all forms of excellence and talent. As an international organisation, EMBL is in a unique place to advocate for positive change.
How have your own views on research assessment changed?
In the earliest days of my science career path, I would gaze with starry eyes at the weekly issues of Nature, Cell, and Science. They are still impressive bodies of work, but for me, they’ve lost some of their shine. I like to look past the magazine cover and into the rigour of the work and its implications.
For a long time, I had a feeling that how our work was assessed just wasn’t ‘right’. Near the end of my PhD, when I began producing my own research outputs, and especially over the past year with EMBL’s Responsible Research Assessment (RRA) Champions group, my feelings crystallised into coherent ideas. The biggest shift for me was learning about and accepting qualitative methods of assessment, such as the narrative CV – a CV format where, instead of simply listing degrees and awards, one can elaborate on a variety of contributions and achievements to highlight skills and experiences that would otherwise go unnoticed.
Many young scientists still perceive that journal impact matters most. While some universities and research institutes do still ask applicants to list their publications from Cell, Nature, and Science, the reality is that the research assessment landscape is changing. Perceptions are just as powerful as reality, and it takes work to change those too – as it did for me.
So there’s seemingly not one easy solution. How do we ultimately change perceptions and reality in research assessment?
I see two different approaches: bottom-up and top-down. They complement each other and should be pursued together. The bottom-up approach is all about individual researchers buying into the attitude of change. The simplest way to keep the momentum going is to talk about research assessment with peers – even good-spirited discussion and debate is welcome! I recently hosted a discussion-style seminar on research assessment with Soraya Zwahlen (predoc in the Vincent group) for the DB Unit, where we posed questions to a mixed group of around 30 predocs, postdocs, and group leaders in an informal but thought-provoking way. It was great to see people have their ‘lightbulb’ moments and grasp new ideas.
It took me a while to figure out the importance of internal validation: pursue your ideas in innovative ways with genuine passion, while avoiding the allure of hype. If you feel your research is a valuable contribution, others will likely feel the same way too.
The top-down approach involves policy changes, which can happen at different levels (e.g., government, funder, institute). Many guiding principles can help here, such as DORA (the San Francisco Declaration on Research Assessment) and CoARA (the Coalition for Advancing Research Assessment). Individual researchers can sign DORA, as can organisations and funders.
EMBL has put forth four broad recommendations in this regard:
- To be more explicit to candidates on EMBL selection criteria, all job advertisements include a statement on EMBL’s commitment to DORA.
- To ensure all research outputs are considered during assessment, candidates for recruitment, promotions, and awards are asked to include the full range of research outputs in their CVs and describe the significance of key research outputs in a narrative statement.
- To ensure that EMBL committees engaged in recruitment, research performance assessments, and promotion evaluations are adequately briefed on EMBL’s research assessment practices, committee members and reviewers are provided with guidance directing them to assess the scientific outputs themselves rather than journal-based metrics of the publication venue.
- EMBL is committed to a culture of content-based research assessment. The way EMBL researchers talk about their own, or their colleagues’, research achievements needs to embody the spirit of DORA and CoARA, emphasising content quality over publication venue.
What needs to happen next to change attitudes in the scientific community?
For readers here, the best advice is to start thinking about this topic early in your career, and to keep acting on it throughout. Eventually, we all get to the point of publication or producing other forms of research output. After that, we’re assessed on the quality and value of that work as a representation of our abilities. It’s better to be prepared, with expectations and knowledge in hand, than to be caught off guard when you need to produce an Equity, Diversity, Inclusion, and Accessibility statement or a narrative CV. Many funders, such as the European Research Council, UK Research and Innovation, and the National Institutes of Health, have these options within their applications, and many institutes have helpful resources.
Changes to ingrain responsible research assessment in science culture are happening, but they will take time and generational shifts. All of our research will be assessed in these ways, so we’re all stakeholders and have a say in the outcome. What’s more, our work is assessed by our peers, not by shadowy panels of non-experts. If we all embrace these new attitudes, the quality of science, and of the careers of the people who produce it, will improve.
The key is to keep the momentum of this culture change going. Good ways to do that are to become informed and talk with others who may have different opinions. Check out the links below and join the conversation.