Much Ado About Impact

Impact. The fleeting quantity by which scientists, academic journals and universities are ranked in order to direct research funding. For a 21st-century academic, this elusive entity should simultaneously pervade research motivation, method, output and outreach. In some, it raises eyebrows and suspicion, just like any other buzzword; in others, it raises hope for making a difference in an increasingly complex and deranged world. There’s much ado about impact, but what exactly is it, and how, if at all, should it be measured?

To begin with, a disclaimer: if by impact we mean having a positive, high-quality influence with our research, I’m all for impact. In fact, as someone working in environmental research, I’d be absolutely thrilled to see our work have more impact. Unfortunately, however, “impact” isn’t always what it seems to be. A critical dissection of the notion of impact therefore seems to be in order.

Impact and its assessment seem to come in two varieties: scientific and social.

In the context of science, impact is often taken to mean the effects of one’s research findings within the scientific community. This is most typically measured with bibliometric proxies: citation counts and citation-based indices (such as the h-index), publication counts, or the venues in which scientists publish. Makes sense, right?
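
For the unfamiliar, the h-index is defined as the largest number h such that a researcher has h papers each cited at least h times. A minimal sketch in Python (the citation counts here are invented purely for illustration):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    # Sort citation counts in descending order; the h-index is the number
    # of positions where the count is still >= its 1-based rank.
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Example: five papers with these (made-up) citation counts -> h-index of 3
print(h_index([10, 8, 5, 2, 1]))  # 3
```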

Well, not really. Let’s start with the rather self-evident observation that nothing in a citation indicates real impact or quality. To take an extreme example, a paper might be cited for being rubbish, such as Wakefield’s infamous retracted study on autism and vaccinations. Sure, that case is exceptional, but more commonly you’ll find papers being cited simply because they study a trendy topic in a crowded field of research. Tough luck for all of you working at the frontiers of scientific inquiry, unless you’re consoled by the chance of being cited posthumously (Reverend Bayes is finally getting his attention, 250 years later!).

Moreover, citations often bear no informative content, and are merely appended uncritically to reinforce a self-evident argument. As if that weren’t enough, citations are subject to what’s known as the “Matthew effect”, whereby those lucky enough to be cited at an early stage are more likely to be cited in the future thanks to increased visibility. I won’t even get started on self-citations (although see this hilariously ironic case). When you really give it some thought, the fact that scientists are ranked globally on citation counts feels quite absurd indeed.
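
The Matthew effect is easy to see in a toy simulation. In the preferential-attachment sketch below (all parameters invented for illustration), every paper has identical merit, yet early luck compounds into a heavily skewed citation distribution:

```python
import random

random.seed(1)  # fixed seed so the toy run is reproducible

def simulate_citations(n_papers=100, n_citations=5_000):
    """Toy 'rich-get-richer' model of citation accumulation.

    Each incoming citation picks a paper with probability proportional
    to the citations it already has (plus one, so uncited papers can
    still be picked). Paper quality plays no role whatsoever.
    """
    counts = [0] * n_papers
    for _ in range(n_citations):
        weights = [c + 1 for c in counts]
        winner = random.choices(range(n_papers), weights=weights)[0]
        counts[winner] += 1
    return counts

counts = sorted(simulate_citations(), reverse=True)
# Despite identical 'quality', the top few papers hoard a large share
# of the 5,000 citations while most papers get far fewer.
print(counts[:5], "...", counts[-5:])
```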

Unfortunately, publication-based assessments don’t fare any better. Counting publications encourages researchers to game the system by slicing their output into ever-smaller pieces, producing an ever-expanding mass of research papers. Why publish a single great article when you can split it into two decent ones? If anything, this reduces the potential for social impact, since the public will never wade through the endless publications to find the truly informative ones. Another problem is the so-called “natural selection of bad science”, where incentives for high output select for poorer methods and high false-discovery rates. Finally, research suggests that highly prestigious publication outlets (high “impact factor” journals) might not even be publishing better or more replicable science. This makes intuitive sense: prestigious journals select surprising results for publication, and surprising results are more likely to be random variation or fabrication. Yet the fallacy of scientific impact persists. For instance, most global university rankings are heavily based on citation and publication counts, and media outlets repeat such near-nonsense without critique. If you’re interested in making a change, you can start by signing the DORA declaration.
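
A back-of-the-envelope calculation shows why selecting for surprise is risky. With conventional (and here entirely illustrative) values for statistical power and the significance threshold, the fraction of “significant” findings that are actually true drops sharply as the pool shifts toward surprising, low-prior hypotheses:

```python
def ppv(prior, power=0.8, alpha=0.05):
    """Positive predictive value: the fraction of 'significant' findings
    that are true, given the prior probability that a tested hypothesis
    is true. Power and alpha are conventional illustrative values."""
    true_pos = power * prior
    false_pos = alpha * (1 - prior)
    return true_pos / (true_pos + false_pos)

# Ordinary hypothesis pool vs. a 'surprising results only' pool.
# The priors are illustrative, not estimates for any real field.
print(round(ppv(prior=0.10), 2))  # ~0.64: most significant findings true
print(round(ppv(prior=0.02), 2))  # ~0.25: most significant findings false
```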

So, what about social impact? The notion of social impact generally refers to the social outreach or consequences of research. Socially impactful research should “foremost be easy to communicate and capture public imagination”. Quite benevolent, right? Well, perhaps, but not necessarily. Let’s think this through with a case example. Would Boolean algebra, the abstract branch of mathematical logic George Boole introduced around 1847, have been deemed socially impactful by 19th-century research funders? I suspect not. Yet, nearly a century later, Claude Shannon used Boolean algebra to write the recipe for the information age. Quite some impact, indeed. We are notoriously bad at identifying social impact in advance, and the reasons for this are logical, as philosopher Karl Popper originally sketched out: to know the consequences of a discovery, you first have to make that discovery. If we demand assurances of social impact prior to discovery, we might not make many socially impactful discoveries to begin with.
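
Shannon’s insight, put crudely, was that switching circuits implement Boolean algebra, so arithmetic can be reduced to logic. A minimal illustration of that reduction, a one-bit half-adder built from nothing but Boolean operations:

```python
def half_adder(a: bool, b: bool):
    """Add two one-bit numbers using only Boolean operations.

    XOR gives the sum bit and AND gives the carry bit: the kind of
    reduction of arithmetic to logic that Shannon formalised for
    relay circuits.
    """
    sum_bit = a != b   # XOR
    carry = a and b    # AND
    return sum_bit, carry

# 1 + 1 = 0b10: sum bit 0, carry 1
print(half_adder(True, True))   # (False, True)
print(half_adder(True, False))  # (True, False)
```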

Also concerning is the possibility that many extremely important pieces of research are entirely uninteresting to the public eye. Perhaps we should entertain the idea that scientists sometimes have an inkling of impact that funders and governments can’t yet perceive? Think, for instance, of the (literally) grassroots entomologists who spend their hours weighing dead insects. Socially engaging? Not quite. Increasingly important research providing more evidence on the ongoing sixth mass extinction? Exactly.

Science has easily been one of the most impactful institutions in human history. So where does this suspicion of non-impactful science come from? It seems deeply ironic, and smacks of scapegoating, that while many governments, ministries and industries – the actors with the greatest power to change the direction in which our world is heading – do next to nothing to have a radical positive impact on our social and ecological systems, they give increasingly distressed and poorly funded scientists a bad rap for not being impactful enough.

My message, then: if you want to be impactful and encourage public outreach, that’s great! But if you want to quantify or assess scientific or social impact in order to direct research funding, please proceed with caution. There are valid reasons for maintaining strong academic freedom, and you just might be doing more harm than good.