Usage metrics, statistics from one paper

I am gradually becoming more and more interested in open access, and have followed the evolution of PLoS One for a while. Since I work in materials science, PLoS One is not quite our common avenue for publishing research. Although it is theoretically open to any domain of science, it is still strongly dominated by biology, for historical reasons.
Last year, though, we had cool and intriguing results about a compound exhibiting ice-shaping properties, similar to those of antifreeze proteins. I was very interested in having these results reach biologists instead of ceramists. Hence my first paper in PLoS One.
One of the benefits of publishing there is the availability of usage metrics, updated daily. Curious to see how the paper would be perceived, or at least accessed, I tried to follow the usage over time. I did not manage to do it every day, but at least twice a week or so. So here are the results, with the total views and daily views for the past 5 months or so.

What we see is a very sharp initial spike in views, which then decays very fast. The window to catch readers' attention is very short, less than a week, with readers coming either from the front page while the paper is still in the recently published papers list, or through RSS and other feeds. After that, there is a long tail, with 4 or 5 daily views on average.
A second peak is also visible, shortly after the first one. It corresponds to the publication of the press release by the CNRS, which was tweeted and retweeted a couple of times and caught the attention of a number of new readers.

The other, less visible observation is the absence of a peak in the last month. I went to a conference in Germany to present these results, and apparently people did not rush to PLoS One to download the paper. Oh well.

Although these data are limited to a single paper, I suspect the general behaviour is very similar across journals. I would be curious to see such data averaged over a whole journal. I wish all journals would make such data available; I guess it is just a matter of time before they do.

Google Scholar citations metrics

Google is now tracking the metrics of journals. They chose the h5-index, which is basically the h-index restricted to papers published in the last 5 years. The search function works with keywords, as you can guess. So if you search for materials science journals with the keyword “materials”, you will only get journals whose names include “materials”, and miss journals like Nano Letters, ACS Nano and others.
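To make the definition concrete, here is a minimal sketch of how an h5-index could be computed from a list of (publication year, citation count) pairs. The helper names and the exact five-year windowing are my assumptions for illustration; Google's precise rules may differ.

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least
    h papers have h or more citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank          # this paper still supports a larger h
        else:
            break             # citations now fall below the rank
    return h

def h5_index(papers, current_year):
    """h5-index: the h-index computed only over papers from the
    last five years (hypothetical windowing for illustration)."""
    recent = [cites for year, cites in papers if current_year - year < 5]
    return h_index(recent)

# Five papers cited [10, 8, 5, 4, 3] times: four papers have >= 4
# citations, but not five with >= 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The same `h_index` function gives the plain (all-time) h-index when fed every paper; the h5 variant simply filters the input first.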

If you click on the h5 link, you get a list of the top cited papers for that journal. Neat. The 2007 graphene paper by Geim and Novoselov in Nature Materials is already cited >5600 times. Holy cow.

Google Scholar Citations: vanity page and the social potential

The service is now open to anyone. I was very curious to see how it performed, so I created my profile straight away. To put it back into context, this is a direct competitor to paid (and expensive!) services like Scopus. My take in a nutshell: it’s good. And the potential is really good too.

What is the main idea behind the service, then? It’s basically a follow-up to Google Scholar, focusing not on the papers but on the authors. Typical of Google’s mission of organising the world’s knowledge, and this time we are talking about academic knowledge. There are two ways to consider Google Scholar Citations.

The first one is the vanity page point of view. Researchers like to show off their long list of papers, their h, j, k or z-impact factor and so on. Google Scholar Citations is really good at this. Setting up your profile is fast and frictionless, and Google is also really efficient at finding your papers, as far as I can tell. It then builds a table of your documents with the number of citations, which you can sort and so on. Neat, convenient and efficient, but not ground-breaking, apart from the fact that it is a free service. Besides, it finds more citations than Scopus or MS Academic, which is good for your ego. My score went from 150 (MS Academic) and 1000 (Scopus) to almost 1200 (Google) citations. Part of the reason is that Google also takes proceedings into account. I actually discovered that some of my proceedings were cited. Cool. They are also indexing open access journals, which are not in Scopus, for instance. You can export your article list (BibTeX, EndNote and Reference Manager formats) if you wish.

The second one, more interesting I think, is the social aspect. When setting up your profile, you can provide keywords to describe what you do. Once you’ve done that, try clicking on one of these keywords. You are brought to a page listing researchers sharing the same keyword, sorted by their number of citations. The possibilities here are really interesting. You can imagine all sorts of uses for this service, from finding the most relevant person in a field (without forgetting, of course, that citations are just part of the story) to identifying experts in a domain whom you might want to contact. Similar to what Mendeley is offering, except that everything is automated here and you get the citation counts as a bonus. Social is maybe not the right word, though; it’s more about finding the connection than interacting, since there are no tools other than your profile to exchange information.

I’ve just finished reading Michael Nielsen’s book, a must-read if you are interested in open and networked science. One of the interesting ideas that comes back throughout the book is that you can get stuck on a very specific problem outside your competences, and spend days or weeks solving it (if you can), while someone, somewhere, with this very expertise, could do it much faster and more efficiently. The problem is making the connection. Having a global database of people’s profiles and expertise could considerably speed up this process by making these connections possible. And I think that Google Scholar Citations might have the potential to do exactly that. The only problem so far is that the number of keywords you can provide is limited, which makes it difficult to give a more detailed description of your domain of expertise, but that’s easy to fix.

Overall, I’m really excited by the possibilities offered by this new service. Let’s see how it evolves. Now go and try it yourself. Oh, and it’s already integrated in ScienceCard, by the way, which provides alternative metrics.

Wikipedia article traffic statistics

Just discovered this site, where you can get the traffic statistics for individual Wikipedia articles. A list of the most consulted articles has been computed (not up to date, though), but I am more interested in the traffic of highly specialised articles, like scientific ones. An example of alternative metrics? I long for the day when we replace the impact factor with a usage factor.

(Source: Hacker News)

Computing giants launch free science metrics

Hope for a free alternative to Scopus or WoK? Will this only be a metrics tool? If that’s the case, I don’t really care… What they (Google and MS) need is to improve the accuracy of their databases and search tools; so far, I have always found Scopus to be far more accurate and relevant than Google Scholar. I’m curious to see how it evolves.

I was not aware of Microsoft Academic Search. Materials science is not covered yet, alas.

(Source)