A while back I covered a study called "From funding agencies to scientific agency" by researchers from Indiana University's Department of Information and Library Science (Bollen, Crandall, Junk, Ding & Börner, 2014), which suggested an alternative to today's method of allocating research funds through peer review.
Last post we talked about traditional peer review, which is at least single-blinded. This time we will focus on Open Peer Review (OPR). The narrowest way to describe OPR is as a process in which the names of the authors and reviewers are known to one another.
My PhD mostly dealt with research blogs from ResearchBlogging.org (RB), an aggregator of blog posts covering peer-reviewed research. In this article, we (Prof.
Peer review was introduced to scholarly publication in 1731 by the Royal Society of Edinburgh, which published a collection of peer-reviewed medical articles.
One of the challenges we face when using alternative metrics is the interpretation of what we measure. This is even more confusing than interpreting traditional citation impact (which is challenging and confusing in itself) because "altmetrics" is an umbrella term for a wide range of activities.
With over a billion views, TED (Technology, Entertainment, and Design) talks are a huge business. There are two main TED conferences a year, the TED conference and TEDGlobal, plus a large number of satellite conferences (TEDx) all over the world.
When in trouble or in doubt, invent new words. We have bibliometrics and scientometrics from the Age of Print. Now they are joined by informetrics, cybermetrics, webometrics and altmetrics, which might not be an accurate term, but it's sticky (more so than "social media-based complementary metrics", that's for sure).
Funding agencies allocate funds for scientific research mainly based on peer review of research proposals. In 2010, more than 15,000 researchers peer-reviewed more than 55,000 proposals.
I haven’t written about altmetrics so far. Not because it isn’t a worthwhile subject, but because there’s so much to cover that I didn’t know where to begin.
Do blog posts correlate with a higher number of future citations? In many cases, yes, at least for ResearchBlogging.org (RB). Judit Bar-Ilan, Mike Thelwall and I had already used RB, a science blogging aggregator for posts citing peer-reviewed research, in our previous article. RB has many advantages (if you read the previous post, you can probably skip this part), the most important being the structured citation(s) at the end of each post.
Once upon a time, journals were made of paper and ink. However, we left the dark ages of dead wood behind us and moved on to an age in which authors don’t need to publish in journals (but still want to).
The new Leiden Ranking (LR) has just been published, and I would like to talk a bit about its indicators: what the ranking represents and, equally important, what it doesn’t.
“Excuse me; the whole tenure system is ridiculous. A guaranteed job for life only encourages the faculty to become complacent. If we really want science to advance, people should have chips implanted in their skulls that explode when they say something stupid.” (Sheldon Cooper, The Big Bang Theory) Between the recent ACUMEN (Academic Careers Understood through Measurement and Norms) workshop and my search for a post-doc, it seemed like an excellent time to look at one of the most important landmarks in an academic’s career: tenure.
Science seems to be full of controversies and conflicts: famous scientists willing to kill and be killed for their pet theories, former students challenging the views of their academic "parents", and so on.
What's wrong with citation analysis? Other than your papers not being cited enough, what's wrong with measuring scientific influence based on citation counts?
This interview is with Mr. Rob Walsh, co-founder of Scholastica and its lead interaction designer. Mr. Walsh holds a BA in International Studies from Texas A&M and an MA in Political Science from the University of Chicago (but, like most entrepreneurs, now does something completely different...).
This time (no, I haven't gone interview-only; one more after this one and we're back to regular posting) I'm interviewing Dr. Victor Henning. Dr. Henning has a PhD in Psychology from the Bauhaus-University of Weimar, Germany, and is co-founder and CEO of Mendeley, a program for managing and sharing research articles.
This post is a bit different from what Bonnie and I usually post on this blog: an interview with Dr. Richard Price, founder and CEO of Academia.edu, a social network for researchers.
Most articles today are the result of teamwork, whether it's just two authors working together or thousands (think CERN). As science keeps getting bigger, authorship no longer equals actual writing, but rather some form of contribution to the team effort.
Every enthusiastic scientist knows that once you reach a certain level of specialization, there are very few people in your immediate surroundings who actually understand what you say.