I’ve been thinking further about altmetrics and impact, and how they relate to scholarly publication.
As we know, scholarly publication is a numbers game. The more a publication of yours gets cited, the better a researcher you are deemed to be. Get cited consistently and you get tenure.
Citation indices provide the numerical evidence underlying this, and as we’ve discussed before, the bibliometrics people try to provide some weighting to account for some journals being seen to be better than others.
This is in a sense simply the formalisation of tacit knowledge. We know, for example, that a paper in Psychophysiology can be said to have a greater impact among psychophysiological researchers than one in a less important journal, in part due to the reputation of the editorial board and its members’ desire to protect that reputation by not letting any gonzo research into the journal.
In fact, in a relatively small group such as psychophysiology researchers there is a group of eminent researchers who function as guardians of integrity due to their positions on editorial boards and learned societies.
In a larger field, such as organic chemistry, this is not quite the case, but there is enough of a network in place to provide reputation checking.
Of course, in trying to quantify the relative worth of journals, the bibliometrics people have created pressure for researchers to publish in more eminent journals to get a better score. This funnels publication towards particular journals, and may inhibit alternative routes such as posting on arXiv.org.
Now, let’s take a look at altmetrics. Altmetrics attempts to provide some sort of numerical basis for gauging impact from non-traditional publication, commonly thought of as blogs, Twitter, and the like.
There’s a little problem here. If a research group blogs about their latest results, that’s not unlike announcing results at a conference. If someone outside the group blogs about the announcement, it’s really a measure of public engagement, not scholarly impact. Ditto for Twitter.
It’s telling us that the public are engaged by the research, but it doesn’t measure the impact of the research within the discipline.
As an example, we know that for various complex reasons (education, culture, direct relevance to personal circumstances, the degree of specialist knowledge required) the general public tend to be more interested in content relating to history and archaeology than to plant ecology, even though the latter field might tell them things about a matter of public interest such as climate change. Any measure of engagement will inevitably reflect this.
This could be considered immediacy of impact.
Let’s also be clear that I am not arguing that altmetrics is flawed, rather that it tells us something different about impact, perhaps reflecting the public perception of worth. It also perhaps suggests that researchers need to consider how better to communicate the relevance of their research, even though it may be very complex …