Help! My co-author is an animal!

I came across this paper yesterday, co-authored by Polly Matzinger and Galadriel Mirkwood. So far, so good. Wait, Galadriel Mirkwood? The elf character in Lord of the Rings? Turns out that this Galadriel is actually a nickname for the author's dog. Yep, the author co-authored the paper with her dog, of which she was very fond. It didn't end very well, as reported in Wikipedia:

Once discovered, papers on which she was a major author were then barred from the journal until the editor died and was replaced by another.

It’s not the only case where an animal is a co-author of a paper:

  • Andre Geim, the Nobel Prize recipient for his co-discovery of graphene, co-authored a paper with H.A.M.S. ter Tisha, his hamster.
  • The American physicist and mathematician Jack H. Hetherington co-authored papers with his cat, F.D.C. Willard. You can read the whole story here.

There might be more; let me know if you're aware of any other similar cases.

Beyond the obvious provocative move by these authors (and Geim seems to be an interesting character), there are some serious underlying considerations here, I believe. When these cases were uncovered, most people were not happy. Academia is a serious endeavour, and pranks like these are not really welcome. To the public, scientists are serious people, doing serious work.

Humor is a terrific way of transmitting ideas. And yet, it is largely underestimated and underused in science. The main argument is that humor undermines credibility. I don't buy it. Two years ago, I tried to incorporate a (very relevant) Calvin & Hobbes strip at the end of a review paper. The reviewer was happy with it, but the editor clearly said no (I was ready to pay the copyright charges for it. I thought it was worth it.). One of the main problems is that humor is a very cultural thing. Experimenting with jokes at an international conference can be a disastrous experience…

Keeping this rigid attitude nevertheless does not make outreach any easier. I suspect that this somehow contributes to young people turning away from scientific careers, among many, many other reasons of course. Similarly, papers are still mostly expected to be written in the passive voice, destroying every single hint that the research was performed by humans, because we're all robots, right?

What does it take to be a co-author? According to Nature, for instance:

All authors have agreed to be so listed & approved the manuscript submission

and, more importantly to me:

Authorship provides credit for a researcher’s contributions to a study and carries accountability

Many journals now require authors to specify their individual contributions, which is a move in the right direction, but still an imperfect solution.

It is well known that academic research can also be a dirty little business. We all know a Prof. Big Name who requires his name on every single paper coming out of his group, in particular if published in a high-profile journal (Nature, Science, Cell, etc.), even if he has hardly read the paper before submission. This is unfortunately still a common practice plaguing academia. When we published our paper in Science, since I was the main contributor, my colleague and friend Tony Tomsia asked me to be the corresponding author. A very unusual move in my experience. The consequence was very direct: the day after, I started receiving emails and phone calls starting with "Dear Professor Deville"… This little move also impacted my career in many more ways than I could have imagined.

So, an animal as a co-author? You're complaining about what's just a funny little prank in a handful of papers (among several million published per year)? Give me a break. Give credit where it's due for every paper published out there, and come back to me. At that point, I may listen to you. In the meantime, I'll get myself a blobfish to help me draft my next paper.

PS: Stuart Cantrill tweeted me that he's written a blog post on a similar topic. I wrote this one before reading his post. Let's see what his take on the issue is.

Making impact – Random thoughts on the various types of papers and their impact

I noticed an interesting stat the other day: I had two papers published at almost the same time (2009) in two different journals: one in a high-profile journal (Nature Materials, IF>30), the other in a highly specialized journal (JACS – where C stands for Ceramics, not Chemical – IF<3). Both papers have almost exactly the same citation count (53 and 54). Albeit a very poor indicator of the performance of individual papers, the impact factor gives some idea of the visibility of a journal, and hence of what one could expect from it.

With high profile comes high visibility. The larger the number of readers, the more likely the paper is to be cited in the future, among other consequences. Choosing where to submit your paper and how to best write it makes for an important part of our work time. After all, a study does not exist until it's been published, right?

If you believe in what you do, that is, think your ideas are worth further attention and exploration, you're hoping for your papers to have the best possible impact. Impact can mean many different things, from citations to contacts with industrial partners, outreach, obtaining a grant, etc. Easier said than done, of course. Predicting the impact of a paper is a very difficult, if not impossible, task. And yet, after >10 years of research, >50 papers published, and reading thousands of papers over the years, I have my own classification of papers, which also gives me some idea of what to expect in terms of impact.

For me, papers fall into several categories:

  • The proof-of-concept papers. These papers are the first steps towards an exhaustive exploration of a topic. If you're in materials science, you show for instance that a process makes it possible to achieve such or such architecture. The properties are not optimized, but the idea is there. The titles are often quite short. These papers are not very common of course, but are usually highly cited in my experience. In my own record, for instance, the 2006 freezing paper is the best example of this, with over 600 citations to date. The mechanical properties of these ice-templated materials were qualitatively good but quantitatively poor. The concept nevertheless attracted a lot of attention. The JACS paper I mentioned previously falls in this category too. The impact is usually high if the results open enough new avenues. If it's a dead end, it's quickly forgotten.
  • The exhaustive exploration papers. Here, a phenomenon, process, or whatever you want is systematically explored. The parameter space is assessed in a systematic way. Their titles tend to be very long, the topic highly specific. They represent the vast majority of papers. These papers are generally much less glamorous but indispensable to the advancement of science. Their impact varies greatly, depending, among other variables, on how rigorous the study is.
  • The “WTF is going on?” papers. Here, observations of a novel phenomenon are reported. A theory is proposed, but quite often the authors are not quite sure of what exactly is going on. They tend to be highly specialized. Predicting the impact of such papers is tough. Our Nature Materials paper discussed above falls within this category. We thought the results were interesting and puzzling, and so did the editors and reviewers. But it was a highly specific issue (instabilities of a freezing front in a colloidal suspension), and I never expected this paper to receive a large number of citations. Its citation count is indeed comparatively quite low with respect to papers of the same age in this journal. Most of the citations come from people who discovered ice-templating through this paper, which is a nice side effect.
  • The review papers. Whether such papers should be counted in your citation records is a matter of debate, but they are definitely useful. There are many types of review papers. Their impact varies, but they usually attract a lot of attention, in particular if there's a small number of such papers in a field or if they come first. If well done, they are terrific time-savers for newcomers to a given topic.
  • The reproducibility papers. Way too rare, although indispensable to the advancement of science. One usually gets little to no recognition for this, and such papers are difficult to publish since journals almost always require novelty. Their impact is minimal from an academic point of view, but possibly very important in real life (think drug testing, for instance).
  • The time-wasters. Self-explanatory. A waste of time for the people who did the work, for the reviewers who spent time on it, for the journals that publish them, and for the readers. Every time you write a time-waster, god kills a kitten. I am not sure about your experience in your own field, but in materials science, there's a fair amount of such papers. They can be useless for many different reasons. Because there are already dozens (hundreds) of papers reporting the same thing. Because the experiments are obviously a bit sloppy. Because the reporting is incomplete. Because the variation in the parameters is infinitesimally small. These papers are never cited, even by their own authors. Their only benefit is to add one more line to the authors' CVs. With several million papers published every year, such papers also go largely unnoticed.

Whenever I write a new paper, I find it useful to know which kind of paper I'm shooting for. This provides some guidance on where to publish it and how to write it, which is another story.

Did I miss any type? Let me know in the comments; I'd be curious to know how your experience differs.

NB: In addition, there are a few more, highly specific types of papers, such as responses/comments or fake papers, but these are very rare.