April 29, 2016
That’s the conclusion of a paper published today in Science: the server log data of Sci-Hub were analyzed, and the results laid out on a world map. It’s a surprise to no one that Sci-Hub is used everywhere in the world.
I was curious about my own experience, so I did the maths quickly, using my Mendeley library. After converting the library to a CSV file (thanks, JabRef, for that), I analyzed the listing with pandas (Python). I counted all the papers added to my library since 2013.
I have a total of 850 papers for this period. I took into account the 40 most represented journals, which correspond to 537 papers. My work is getting more and more multidisciplinary, so I read papers from many different domains. Hence a lot of journals.
The split is the following:
- I had access to 318 papers (among which 39 are open access)
- I had no access to 219 papers. That’s 41%.
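The counting step above can be sketched with pandas. This is a minimal toy version, not my actual script: the inline CSV and its column names (`journal`, `year`) are hypothetical stand-ins for a real JabRef export, and I take the top 2 journals here instead of 40 so the toy data makes sense.

```python
import io
import pandas as pd

# Hypothetical miniature of a JabRef CSV export; a real one has many more
# columns (authors, DOI, etc.) and hundreds of rows.
csv_data = io.StringIO(
    "title,journal,year\n"
    "Paper A,Nature,2014\n"
    "Paper B,Science,2013\n"
    "Paper C,Nature,2015\n"
    "Paper D,JACS,2012\n"
)

df = pd.read_csv(csv_data)

# Keep only the papers added since 2013.
recent = df[df["year"] >= 2013]

# Count papers per journal and keep the most represented ones
# (top 40 in the real analysis; top 2 for this toy data).
top = recent["journal"].value_counts().head(2)
print(top.sum())  # number of papers covered by the top journals
```

The access/no-access split then comes from checking each of those journals against my institution's subscriptions, which isn't in the CSV and was done by hand.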
Had I analyzed all the journals and not just the top 40, I suspect the actual figure would be even worse. Anyway.
So there you go. I work for the main research organization (the CNRS) of one of the richest countries in the world, and I have access to only 60% of the papers I need to read. I don’t blame the CNRS, of course. They do a pretty good job with the money they have. I have access to Elsevier, ACS, Nature, and Springer journals. That’s really not bad. What I miss the most are Wiley and RSC. I found the papers I was missing through various techniques, including emailing the authors, preprints (arXiv!), authors’ personal websites, and a few other popular strategies.
Draw your own conclusions.