NPR science journalist Richard Harris talks about his book, Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope and Wastes Billions.
Steve Mirsky: Welcome to Scientific American's Science Talk, posted on August 2, 2018. I'm Steve Mirsky. On this episode –
Richard Harris: What starts to happen is the quality of the science starts to erode and the result – what we're seeing is significant studies that get published aren't true.
Mirsky: That's the familiar voice of Richard Harris. He's been on National Public Radio for three decades covering science, medicine, and the environment, and he's the author of the book Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions. We actually spoke in April 2017 when the book first came out in hardcover. It was released as a paperback this year. I met with Richard Harris in a recording studio at New York University.
Richard, let's talk about the title of the book, 'cause I think that's important to get out of the way. This is not a book about dead bodies. Although, in some ways...
Harris: Well, yes. Rigor Mortis is not about the stiffness that comes with death, but it was, for me, an irresistible pun – "rigor" being what is missing or in short supply in biomedicine right now – and I hasten to say that rigor is not dead in scientific research, but I will say that it is limping along a little bit and it could use a good jolt of energy.
Mirsky: Let's talk about what the issues are.
Harris: Right. Well, there are many issues. Depends where you want to start. But, we can start talking about the deepest issue, which is – I think that there's something amiss with the culture right now in science, in biomedical research in particular. And I think that that culture is being put into trouble by the funding problems that are facing biomedical research. The NIH budget doubled between 1993 and 2003 and then, Congress said, "We've done enough" and they stopped.
They didn't increase it, they didn't decrease it, but inflation has been eating away at that budget. And between 2003 and 2015, in real spending terms, the amount of money has basically gone down by 20 percent. And that increase at the beginning is important, because they built a huge number of labs, they hired a bunch of people. So, there were more mouths to feed and now, there's less and less money to do it. And that has created a hyper competitive environment and that's really the back story to this whole book – which is that that pressure, that search for funding, that brutal fight for funding, it makes people do things that they don't necessarily want to do.
People really honestly often have a choice between doing what's best for their career or what's best for science. And that's a position no one should be in, but that's fundamentally what's going on here.
Mirsky: I just want to say – because we have a wide audience, and we mentioned the NIH and we should probably just say what that is and what they do.
Harris: Sure. The NIH is the National Institutes of Health. They provide about $30 billion a year for biomedical research. They are, by far, the leading federal agency that provides funding for civilian research in this country. Bigger than the National Science Foundation, NASA, all of the rest of those things that you may have heard about.
But they basically put out grants to universities across the country, to the researchers who are working at those universities – they fund some research on their campus in Bethesda, as well, but that is really the backbone of funding for biomedical research in this country. And I'm talking here not necessarily about the end stage of drug development, 'cause there, pharmaceutical companies take the baton, and they actually spend more than the NIH does on drug development. But, the NIH funds the basic research that goes from the biology to saying, "Hey, here's a great idea. This might be something useful as a drug or as a medical advance of some sort." So, they fund the research that no one would fund otherwise, because it's not obviously profitable. It's exploration.
Mirsky: Right. So, if you're gonna scoop up a bunch of stuff from the bottom of the sea and then screen the compounds – the chemical compounds that you find in there for anything that might look like it has some anti-tumor property, that's the basic research that the NIH might fund. But, once you find something that appears to have that kind of property and a pharmaceutical company tries to develop it, that becomes the private stuff.
Harris: That's correct. Yes.
Mirsky: Okay. So, what is going on in the culture where there's so much fighting for the money that then is doing something harmful to the entire enterprise?
Harris: Well, let's start with the fact that when people apply for a grant through the NIH, the NIH will only approve 20 percent. And so, that means if you're running a laboratory, you have to have figured, "Well, I'm gonna have to apply, on average, for five grants just to get one." And it takes a lot of time to do that. So, in the first place, people are spending a huge amount of time writing grants. They may spend more time writing grants than doing research in their laboratory.
If you're a head of a lab, that's probably true. So, the pressure is on. And, if you don't get that money, then your lab is in trouble. Because back in the day, states like California, New York, Texas, and a lot of these states – Virginia and so on – funded these fabulous research universities in their states, and gradually, over the years, they have been spending less and less money to support the research going on in these universities. So, all of a sudden, the federal grants are it.
In many cases, if you can't get the grant, the university probably doesn't even have the money to keep your lab going – maybe limping along for a year or two at most. But, if you're not raising money, you're out of the business. And so, the pressure is absolutely intense. And the result is that people realize that in order to get grants, they need to do flashy research. They need to have exciting results, and they need to publish them in the biggest journals.
And that creates an incentive to say, "Hmm. This result isn't 100 percent flashy – maybe it's only 80 percent flashy. Maybe I can just do a few things – I still think it's really an exciting thing, but maybe if I just, you know, pretty up the picture or leave out some of the data that doesn't support this idea..." that all of a sudden, what starts to happen is the quality of the science starts to erode. And the result is that what we're seeing is a significant number of studies that get published aren't true.
Mirsky: You talk in the book about hype words that have proliferated in the last few decades.
Harris: Yes. There was a study done in Europe looking at some of these hyperbolic words, and these scientists went back and they tracked journals, as I recall, from 1974 up into the 2000s. And they found that – they actually used a hyperbolic number themselves. They said it was a 15,000 percent increase – instead of saying 150 times, which is still a big number – but basically, these words like "extraordinary" – there's a whole list of them –
Mirsky: "Novel". "First of its kind".
Harris: Yeah. And all of a sudden, science journals, which you think of as sort of staid places where people don't exaggerate – all of a sudden, these words start to proliferate as a way of saying, "Hey, look at the really exciting study. Look at the really exciting finding I've had here."
Mirsky: So, you talk about – you mentioned just now the problem of research being published and the findings might not necessarily be true. And that gets to this thing that we've really seen a lot of commentary about in the last couple of years, which is what's being called the reproducibility crisis.
Harris: Right. And I think the key example of that was the drug company Amgen. A guy named Glenn Begley, who was running the cancer research at Amgen, decided when he was sort of ready to move on, that he wanted to go back and sort of open up his file drawers and look back again at a bunch of studies that, when they came across the transom in scientific journals, he thought, "Wow. If these things were real, these things could be drug leads for us." And he was very excited about them.
He picked out 53 studies, and he tried to reproduce them. He walked up to the researchers in his labs at Amgen and said, "See if you can make this work." And if they couldn't, he would go back to the original researchers at the labs that did the work and say, "We're having trouble reproducing this. Can you help?" And very often, they couldn't.
And of those 53, ultimately, he was only able to reproduce 6, which is, what? 11 percent. So, that was a red flag that something is fundamentally not right – that what's coming out of these research labs at universities basically ends up being dead ends. And, when you think about it, this is really where drugs begin. A university makes an exciting discovery, a drug company picks it up – obviously, they know now that the first thing they need to do is try to reproduce it, because if they can't get it to work, then, that's a dead end right there.
But, even if they can get it to work, 90 percent of the time or more, those ideas fizzle for one reason or another. And I talk about those various things in my book.
Mirsky: Yeah. You talk about different reasons why things aren't reproducible, and one terrific example was this study on gene activation in Caucasians versus Asian people, which was done by a Caucasian Asian –
Harris: Married couple.
Mirsky: – married couple. And it turned out – it depended on the day of the week that you did the testing.
Harris: Right. This is a common phenomenon that has been recently recognized, and it's called the "batch effect" where you see a difference between one set of people and another set of people, and you say, "Wow, this is a real difference, and this is..." And in this case, they tested all the Caucasians at one point in one set of test apparatuses and then, actually, I think it was a year or two later they checked the other race. And they said, "Oh, look. These don't match up."
And so, they concluded, "Oh, Asians and whites have this significantly different gene expression" – that is, which genes are turned on and turned off under certain circumstances. And they published this, and they said, "Look. This is big news." 'Cause it was like, a 25 percent difference between these two groups. Well, other scientists had been looking at these sorts of gene expression studies, too, and being careful to test everything on the same chip as opposed to one chip one year, and one chip the next.
And they found that there were actually smaller expression differences between Caucasians and Africans – even though the underlying genetic differences are bigger in that case. So, they looked at the Asian study and said, "How could the Asians show more differences than the Africans did?" And they discovered that there was this batch effect going on. And the interesting thing about this is they published their finding not in the same journal where the original findings were published; they published their observation as a letter in one of the Nature journals, and the scientists who had done the original study said, "Well, blah, blah, blah." They sort of admitted some of their errors, but they were still somewhat defensive.
And they said, "This is still a real thing – that there's genetic differences between whites and Asians" – which there are, but it was not nearly as big, apparently, as they said. And here's the rub, though. Their original paper is still being cited all the time, because people don't go and find this discussion elsewhere in the scientific literature to say, "Oh, you shouldn't really put too much weight on those findings, because we now understand that that was caused by the batch effect." So, this is how ideas – sort of bad ideas – get sort of embedded in the literature, and people don't know how to go find out whether they're still valid or not.
Mirsky: If I remember right, the original paper was in Nature Genetics so, just watch out for it if you're in that line of research. Let's talk about a couple of the other sources of these problems. You discuss the fact that researchers used to primarily do mouse tests with only male mice.
Harris: Right. There's lots of problems with mice, but that was one problem – that the sex of the mouse – which was done for convenience – probably ended up skewing a lot of those studies. But another problem is that the sex of the mouse handler matters enormously as well. If you're a man and you're handling the mice, you're gonna get very different results than if you're a woman handling the mouse. So, basically – so, you ideally would want to know, in your data, "This mouse was handled by a man. This mouse was handled by a woman."
There are so many little subtle differences like that that really creep into this. I think a more fundamental problem with mice is that we have come to assume that they are just basically furry little people, and that could not be farther from the truth. When you think about it, we have a common ancestor from probably a hundred million years ago – maybe not quite that long ago, but a long time ago – and we've been evolving our own unique ways since then. So have the mice. It's not as though, if you just go back in time, that's what our common ancestor looked like.
We've diverged a lot over the years, and so, we're very different. And people, all the time, make assumptions that if it's gonna work on a mouse, it's gonna work in a human. In fact, that's often not the case. Even, quite often, if it works in a mouse, it doesn't even work in a rat. So, unfortunately, we've built up a whole huge infrastructure around this idea.
It's understandable, because you don't want to test things directly in human beings. You want to start in animals and see what you can learn from animals first. But, I think it's actually another reason that there's so much that gets published in the literature that may be reproducible if somebody does it also in mice, but then, if you try to generalize it, it doesn't work out. Of course, we aren't that interested in curing cancer in mice. We basically try to create the cancer in mice before we cure it.
So, what we really care about is what that means for human beings, and often, that's way too big a leap.
Mirsky: Yeah. The difference between us and mice – I mean, I can show it right away. If you put a big dollop of peanut butter on a surface in a jail cell, I probably won't go in there to get it. But a mouse will, almost every time, go into the trap that I have at home and try to get that dollop of peanut butter. I know the people are gonna want to know about the difference in the sex of the handler, so, let's just talk about that for 30 seconds why that made a difference.
Harris: Basically, mice respond to the pheromones that men put out. And, in fact, if you take a T-shirt that a man has been wearing and put it in the cage or put it in the room with the mice, the mice react to it. They react very strongly. It's a flight reflex. It's a sense of danger for them.
And it can affect their biochemistry, essentially. So, that's really what's going on.
Mirsky: Yeah. So, if you want to frighten your mice, just send men into the room. 'Cause they're getting the stuff that we waft off us and they know it's bad. And let's – we have to talk about HeLa cells, because Rebecca Skloot's book – there's a chapter in the book where this stuff that was thought to be a different cell line – no. That was HeLa cells.
This other stuff that was thought to be a different – no. That was also HeLa cells.
Harris: Right. Yeah. So, the story begins back in 1951 when Henrietta Lacks, an African American woman with cervical cancer, went to the Johns Hopkins Hospital, and they tried and failed to treat her cancer, but they did collect some of the cells. And the cells became the very first immortal cell line – cells that would grow indefinitely in a test tube. At first, they were this great boon to biomedical science, 'cause all of a sudden, you could now grow cancer cells and you could study cancer just by brewing up a batch of cells.
And so, over the years, they were a major laboratory tool used all around the world, but there's also a problem with them, which is they grow so well that they outgrow practically any other cell. And if you're not incredibly careful about how you handle your cells, if you're working with three cell lines, probably before you know it, they're all actually HeLa cells. And what has happened over the years is these HeLa cells have become incredible imposters and they've taken over. They fooled so many people that they were passed off as different cell lines. Like, Chang Liver is a cell line, and if you actually go analyze Chang Liver, it's HeLa cells.
In fact, there's a group that has been trying to pull together a list of all of these contaminated cell lines. They found 450 different cell lines that are contaminated – cells that are not what they are advertised to be. And of those, 110 or more are HeLa cells. So, they are imposters run amok everywhere, but there are many, many other cells, as well, that are also not what they pretend to be. And if you look at – for example, one of the stories I talk about is a breast cancer cell line that was isolated in 1976 in Houston.
It's been used incredibly widely. And finally, in the year 2000, somebody developed a genetic fingerprinting test to be able to look at the genetics of these things and say, "What is this, actually?" And it turns out – it was a melanoma. And they published that in the literature. The NIH warns about it, because the cell line is one of the 60 top cell lines that people always use, and still – there have been hundreds and hundreds of publications since the year 2000 of this cell line calling it a breast cancer when, in fact, it's a melanoma.
Mirsky: And when some of the researchers are confronted with that truth, they contend that, "Well, the results are still valid."
Harris: People are – I think many scientists recognize that this is a problem cell line, and they steer clear of it if they're aware of it. But again, they may not look in the literature to realize, "Oh, you mean MDA-MB-435 isn't a breast cancer cell? It came from a vial that said that. That's what my friends thought." Whatever.
So, there are still people who stumble across it accidentally. There are a few people, I think, who still say, "I don't believe it" despite the pretty overwhelming science. Sometimes, bad ideas die hard, and if somebody's devoted a lot of work, a lot of their career, to studying this and calling it a breast cancer, they're very reluctant to say, "Oh, I was wrong. And everything I did for those 10 years was actually a mistake." So, they're holdouts.
Mirsky: In the book, you talk about – at the formal events at conferences, people might talk about some of the problems – like that we're talking about – and the audience may be quiet or push back a little bit. But down at the bar later, everybody – it reminded me of pilots talking about the near disasters that they've had.
Harris: Yeah. One of these stories comes from Tom Cech, who's a Nobel Laureate, at the University of Colorado and the Howard Hughes Medical Institute. And Tom tells the story about – basically, you know, he's got the chops. And he says, "I can stand up in a conference and say, 'You know, I tried to repeat this, and I couldn't get it to work.'" And he'll say, "Later at the bar, six other people say, 'Thank you for saying that, 'cause I've been trying all these years, but I can't – I'm not gonna stand up and criticize somebody else, 'cause it could hurt my career. They might review one of my grants one day' and so on."
So, absolutely. People – there are open secrets in science just the way there are in any other culture.
Mirsky: One of my fears – and I think it's very important that this stuff be brought out into the light – but my fear is that politicians who ultimately control the purse strings at the NIH and other funding agencies will try to use this as an excuse to cut funding, because they see so much of the funding as wasteful if this kind of business is going on.
Harris: That concerns me as well, because the pressure is really due to the fact that there's not enough funding to go around as it is. And, if you cut the funding, you actually make this underlying problem all that much worse. So, you're not actually solving the problem. You're exacerbating it. And a couple of weeks before my book came out, the Trump administration did put out a budget plan that was talking about slashing the NIH budget by 18 percent – by $6 billion or something like that – which would be devastating.
At least they can't blame my book for doing that, 'cause they thought of it before the book came out, but it is a very serious issue for science, because what science needs right now is some help. It doesn't need to be put into a tailspin.
Mirsky: And you quote – I believe it's Senator Richard Shelby from Alabama, who is very careful to say that he would like more funding, but is very concerned that this stuff is gonna make it harder for the rest of his senate colleagues to go for it.
Harris: Right. That was a hearing in 2012, so, it's practically ancient history by now, but yeah, there is clearly very strong support for biomedical research in Congress. In fact, this past year, Congress gave NIH a big budget boost for the first time in many years. So, people care about this because they know this is where advances in medicine come from. And everybody knows somebody who has a medical condition, and everyone wants those conditions to be treated and cured.
And so, there's very strong support for making the NIH work. My book is not intended to tear the NIH down at all. It's to point out, "Hey, there are some issues here that deserve some attention. We, as taxpayers, want to make sure we're getting the most bang for our buck." And the answer's not to give them less money, but to say, "Look. Focus on these issues. Fix them." And everybody wins.
Mirsky: So, give me some good news. As a reporter, I often cover a single study and that's a dangerous thing to do. So, we try to be really careful. So, give me some good news and tell me that not 90 percent of the stuff that I look at in journals is actually wrong.
Harris: Oh, okay. Good news is – it's not 90 percent. But it might be 50 percent, or it could be 60 or 70 percent. And there was a paper that has come out since my book was printed, and basically, some scientists went and they looked at the newspaper database – so, my stuff isn't in there; yours probably isn't in there – you're safe on this one – but they looked at the newspaper databases to follow coverage of major scientific advances – medical advances that were interesting enough that they got attention in the media, in newspapers. And what they did was they followed through and they said, "Well, how many of these studies do we have more data on now?"
And they found 156 studies where enough time had elapsed that there was sort of a scientific verdict about whether or not they were valid studies. And they found that about half of the studies that were reported in the newspaper didn't stand up. Upon further analysis, they failed. And among the studies with sort of surprising results – the first time someone had seen something, which, of course, draws the attention of reporters like you and me – two-thirds of the time those studies didn't stand up to scrutiny. So, it's tough being a daily news reporter.
It's like, how do you deal with this stuff? And I do try to steer away from that. But, this is the reality of science. And sometimes, it's because there are pathological problems in these studies, but a certain amount of failure is inevitable in science, even if you do everything right. Even if you don't fall into any of the traps that I write about in my book, scientists will come up with answers that don't stand up.
It's just part of science. That's why it's called "experiments" right? If we knew the answer, we wouldn't have to do the study to begin with. But, it is very sobering to think about. And I think listeners intuitively know that, well, gee, when they think about – particularly nutrition advice – it's like, next week, coffee's bad for you; last week it was good for you.
People know, on one plane, that a lot of these ideas are really in play. But, I think it's deeper than people recognize, and there certainly have been medical advances that have been used on many, many people that have turned out to be quite harmful. Early studies suggested it was a good idea; careful studies later on showed that it was actually causing harm – for example, hormone replacement therapy in post-menopausal women was killing thousands of women through breast cancer and heart disease when early studies said, "Hey, this is good for you."
Mirsky: Yeah. One of my other concerns – and again, this is in no way trying to say we shouldn't try to fix these problems – is not just that the congress won't fund the agencies, but that the anti-science people out there – the people who want to promote creationism in public schools or want to say that climate change is not understood – will grab this material and say, "See? The scientists don't know what they're doing. Nobody really knows what they're doing. They say this this week, and next week, they'll say the other thing."
Harris: Well, my experience is that people compartmentalize a lot. You may not believe in evolution, but you darn well do believe in antibiotics. And I think that, in the same way, it's true with climate science. People may be dismissive of climate science, but they accept a lot of other scientific ideas. They will vaccinate their children and so on.
Mirsky: Some of them will.
Harris: Well, I think the climate deniers are a different group of people than the anti-vaccine crowd. It's actually, generally speaking, opposite ends of the political spectrum. People who are more conservative are more likely to be climate deniers; people who are ultra-liberal are likely to be the anti-vaxers. So, there's not one big pool of people who say, "I just completely distrust science." People pick and choose what to trust and what not to trust based on their life experience and what they bring to it and how they were brought up and so on.
And that's human nature. But, I don't necessarily think that people are gonna say, "Let's stop trying to understand biology. Let's try to stop the advancement of medicine." Because everyone knows that at some point in their life, they're gonna – or many of them – will have some sort of medical issue. If it's not heart disease or cancer or ALS or something – Alzheimer's disease – that they're gonna say, "Well, what can you do for me?"
And they have to think that, "Hey, this is where it starts." And they should be thinking, "We want to make this process work better so we don't have to wait decades and decades to figure out Alzheimer's" which, so far, we've been spinning our wheels on Alzheimer's, and I think – I mean, it's a really hard problem to solve, but I also think part of that is due to the fact that people relied really heavily on mouse studies for Alzheimer's and I think people have been misled by some of the things I write about in the book.
Mirsky: The book talks about biomedical research. To your knowledge, do these problems plague any other area? I mean, are theoretical physicists bothered by these same things?
Harris: Well, that's interesting. The theoretical physicists –
Mirsky: They've got their own set of problems.
Harris: Yeah. They do. But, if you look at the search for the Higgs boson in Switzerland, right – which was this huge, enormous, multi-billion-dollar project – you know what they did? When they built the machine to look for the Higgs boson, they didn't just put one detector in that machine to look for it; they put in two detectors of completely different designs. So, they were thinking very much about exactly these problems, and they were saying, "We want to make sure we're not being fooled by something that we did with one technology."
So, they are highly confident because they have baked into their experiment the knowledge that there's lots of ways to get misled, and they're working really hard to avoid that. But, that said, there are many other areas of science where these problems exist, and psychology is a classic example of that. There was a fairly well known attempt to reproduce 100 psychology papers not so long ago, and many of those could not be reproduced. So, to a greater or lesser extent, this is part and parcel of science. And the reason that's the case, actually, is when you think about it – I'm gonna quote Richard Feynman, the famous physicist who was at Caltech and he gave a Caltech commencement speech one year and he said, "The whole trick about science is it's a method to make sure you don't fool yourself. And you are the easiest person to fool."
And that's absolutely true. Demosthenes said something like that in the fourth century B.C., as a matter of fact. But this is the fundamental issue of science. It's a method to make sure that you're not saying, "Oh, that's a great idea" and getting deluded – you just fall in love with your own ideas. This is a trait of human nature.
So, any scientist will experience that. Different fields have different ways of trying to reduce the chances of that happening, but it's why science isn't perfect. But let's remember, it's a human endeavor, and it's a really interesting human endeavor.
Mirsky: Absolutely. I think when Demosthenes said it, he had marbles in his mouth, though. But let me ask you –
Harris: So, I could be misquoting him.
Mirsky: Not to mention it was a whole different language. But, let's talk about a prescription from you – other than "Let's double the NIH's funding so that many more grants can be funded." The first thing you said was we have a problem in the culture – and obviously, we're not talking about cell culture, though we have problems there, too – but the culture of science, as it is currently done in this country, at any rate, and in most places that have a state funding setup. So, what do we do about it?
Harris: Well, there are many ways you can attack this problem, and people have been thinking about those. I think one thing you can do is you can ask the deans who make the hiring decisions at the universities to think more carefully about how they make those judgments. 'Cause right now, what happens is they say, "How many papers have you published, and in what journals have you published them?" And this is part of that incentive system that is driven by funding shortage, but it's also driven by the way that science operates right now. And so, if you're a scientist, you say, "I've gotta get as many papers as I can, and I've got to get as many flashy papers as I can." And actually, the ramifications of having a flashy paper that turns out to be wrong – not that big a deal.
What the dean should be asking the scientist is, "Give me your two or three really best ideas and show me how well-developed they are." 'Cause once you get past that one flashy idea, where's it going? Take me there. So, that's one thing that deans of universities could do to help tweak this problem. I think another thing that would help a lot is to increase the transparency in science.
This is a struggle throughout this area of biomedicine as well as it is in medical research with human beings. But, basically, scientists don't necessarily exactly explain how they did the experiment. They don't put their – if there's a computer code that helps people understand how they had analyzed their data, they don't necessarily make that public. They don't necessarily make their raw data public. A lot of this stuff happens behind sort of – behind the glass.
And they put out their paper and they say, "Here's my exciting results. I'm not telling you exactly how I got there, but these are really exciting results, believe you me." If scientists had to be more transparent about that – if they had to put their data out, they had to put their methods out and make their ingredients available to other scientists – if they actually put their computer codes online – which is easy to do – and let other people run the same numbers to the extent that there's an analytical part of this, I think two very important things would happen, one of which is that very quickly, people would be able to take an exciting result and say, "I'm gonna try to verify this." And it won't take years and years and it won't sort of drift around in the literature, something that may or may not be true, and very different people have different ideas. You can, very quickly, troubleshoot stuff and if there's something wrong, you bring that out.
Science wins, everybody wins – except maybe the guy – or woman – who published the paper to begin with and might feel a little bit embarrassed about it, but that's a really important role that could happen. The other important thing is, if you know that that's what's gonna happen, maybe you'll be a little more careful with how you analyze stuff. If you're excluding data, you have to do a little bit better job of saying, "I left out that data because I know that there was something wrong with the machine that day" as opposed to "I left out that data 'cause it undercut my argument." And so, I think that transparency is another really important thing. And, this is not an idea that comes from me.
There are people like Brian Nosek, who's at the Center for Open Science in Charlottesville, Virginia, who's been pushing on this idea very hard, and there are others as well. And I think that that's – I think that of all the sort of easy to describe solutions, I think that one would go a long way.
Mirsky: Anything else?
Harris: Well, I think scientists could also take a lot more care with the ingredients they use. We talked about cell cultures earlier. Many of those contaminated cells – you can send off any cell you want very quickly and get a cheap test that will tell you, "Oh, this is actually what you think it is" or "It's not. It's been contaminated in the process of your experiment or you started out with something that isn't right." So, actually, the NIH is now requiring people to put in their research proposals that "I'm gonna get my cells authenticated."
That became sort of a new regulation as of January of 2016. It's too early to see how that's panning out, what penalty people will get for not doing it, how thoroughly people are being checked, but I think that's a very helpful thing. The NIH, actually, to its credit, has not shied away from these problems. You might think that this agency might try to sweep it under the rug, but to his credit, Francis Collins, who is the director of NIH, went in front of Senator Shelby in 2012 and said, "You're right. This is a problem and we intend to deal with it."
And he's taken other steps to make sure you get the sex of your mice right in experiments and basic things like that. So, there are little things you can do. There are medium things you can do. There are big things you can do. Obviously, the fundamental problem of the funding imbalance is not an easy thing to solve.
Even increasing the budget by five percent per year from now on into the future really won't make up for the fact that the funding is out of whack. So, I think that even people in science who are used to saying, "Oh, we need more money" recognize that, in this case, things are so far out of line that that won't solve this problem. And so, they need to think about other solutions as well. They've tried to have those conversations. There's sort of a group of elder statesmen and stateswomen of science who are working on this, and they're finding it really hard.
They're not really making a lot of progress. But, they realize, this is the nub of the problem, and they've got to take it seriously.
Mirsky: So, if you write a sequel in 10 years, are you optimistic? Will it be called Rigor Vita or –
Harris: Ooh. I should write that down. Yeah. I actually think that recognizing a problem is the most important step in solving it, and I think the scientific community is increasingly realizing that this is a real problem, that they need to address it, and they're thinking about how to do that. So, I am optimistic. Will these problems persist? Some of them, absolutely. Will science fail? As long as the universe is going on, science will sometimes fail. But, we should embrace that. We should not chastise scientists for making a mistake or for putting forward something that turns out to be wrong. It's all part of exploring the edges of knowledge, which is what they're doing, and we should be grateful they're doing that. We should also make sure that if they're doing things that are not helpful, let's also surface that and find ways of dealing with that.
Mirsky: What's the line? If I knew what I was doing, it wouldn't be research.
Harris: That's right. Yes.
Mirsky: The Demosthenes quote translates to "The easiest thing of all is to deceive one's self. For what a man wishes, he generally believes to be true." That's it for this episode. Get your science news at our Web site, www.ScientificAmerican.com, where you can also check out our new publication, Scientific American Space and Physics, in which select articles from Scientific American and from the journal Nature bring you recent developments in everything from particle physics to cosmology and, from time to time, time. To access issues, look for the store, a new item on our Web site, and follow us on Twitter, where you'll get a tweet whenever a new item hits the website.
Our Twitter name is @SciAm. For Scientific American Science Talk, I am Steve Mirsky. Thanks for clicking on us.
[Commercial for another podcast from 0:38:12 to 0:38:42]