Uncertainty in Science – It’s kind of a big deal

I still remember those heady days in high school when I used to do science subjects and learn all these, like, facts. In chemistry I learnt about the way atoms gained, lost and shared electrons in neat shells of eight. In biology I learnt about the taxonomic classification tree and speciation. In maths I learnt about… some other stuff. But everything I learnt came with the sense that it was certain. That these were particular rules that had been decided on. That science was helpfully sorting through all the piles of uncertainties in the world and making some things certain.

Then I got to university. During the first lesson of first-year chemistry we were advised to forget what we had learnt in high school, as it had been dumbed down – and also because they weren’t sure of everything. Then in biology we learnt about all the holes in taxonomic classification systems and how different biologists were fighting over the best ways to fill the gaps. Suddenly science didn’t seem so certain.

And science has only become a whole lot more uncertain since then. I’ve learnt about the half-life of facts: the idea that within 45 years, half of everything in science we currently think of as factual will be clarified, expanded on or shown to be complete rubbish.
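Taking that 45-year figure at face value, the arithmetic is just exponential decay. A minimal sketch, purely for illustration:

```python
# Fraction of today's "facts" still standing after t years,
# assuming a constant 45-year half-life (an illustrative figure).
def surviving_fraction(years, half_life=45.0):
    return 0.5 ** (years / half_life)

print(surviving_fraction(45))   # 0.5  -> half gone after one half-life
print(surviving_fraction(10))   # ~0.86 -> even a decade erodes about 14%
```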

We’re also contending with more scientific analysis than ever before. And as I mentioned in my post about confirmation bias, this allows people an awful lot of leeway to perpetuate ideas that the weight of other evidence has shown to be incorrect.

Finally there’s the manner of doing science itself. While everything seems certain, scientists have in fact made subjective decisions about definitions, confidence intervals, sample sizes and numerous other parameters when they ‘do’ science. Even something that sounds simple, like deciding whether something is a new species, is fraught with difficulty. How do you define a species? Does it have to look different? Or live somewhere different? If two animals can mate and produce fertile offspring, that makes them the same species, right? But what about if their courting behaviours are so different they’d never get it on in the wild?

A cornerstone of accepting anything as scientific “fact” is the extent to which results can be replicated. Repeating the same experiment should produce the same findings. But so much of what goes into an experiment can’t be exactly reproduced, particularly if you’re doing anything relating to people. All you can do is run another study that adds to the accumulating case for a “fact”.

And then there’s the use of statistics. Analyses have found a suspiciously high number of papers that only just meet the generally accepted threshold for statistical significance – a p-value of 0.05. This means that for every 100 tests you run, around five might show a positive result by chance alone, which is generally considered an acceptable level of risk. The glut of papers scraping in just under this threshold is indicative of p-hacking: repeating different statistical tests on the same data until you find one that gives the result you want. It’s shameful, but if you want to have something to say, you’ll try to swing some statistical validity. The problem of having so many statistical tests available that do similar things but produce different results is starting to be recognised by journals. A study recently published in Nature asked several teams of statisticians to analyse THE SAME data. You guessed it – the teams came back with markedly different results.
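To make both of those problems concrete, here’s a minimal simulation sketch (my own illustration, not the Nature study’s data): both groups are drawn from the same distribution, so any ‘significant’ difference is a false positive by construction, and shopping around between tests makes those false positives more common.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 5_000

single_hits = 0   # honest: one test chosen in advance
hacked_hits = 0   # p-hacking: try several tests, keep the best p

for _ in range(n_experiments):
    # Two groups from the SAME distribution: no real effect exists.
    a = rng.normal(size=30)
    b = rng.normal(size=30)
    p_t = stats.ttest_ind(a, b).pvalue
    p_u = stats.mannwhitneyu(a, b).pvalue
    p_ks = stats.ks_2samp(a, b).pvalue
    if p_t < 0.05:
        single_hits += 1
    if min(p_t, p_u, p_ks) < 0.05:
        hacked_hits += 1

print(single_hits / n_experiments)  # ~0.05: five in a hundred by chance alone
print(hacked_hits / n_experiments)  # higher: shopping between tests inflates it
```

The three tests here are correlated (they all see the same data), so the inflation is modest; add more tests, subgroups or outcome variables and it climbs quickly.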

Following on from this, there’s also the problem of recognising the difference between statistically significant differences and practically significant ones. A punishing diet and workout regime might produce weight loss in 97% of cases, with statistical significance, but if the average loss is only one kilogram no one should really want to take part in the program. The citing of research based on statistically significant rather than practically significant findings is rife.
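A quick sketch of that weight-loss example with invented numbers: once the sample is big enough, a one-kilogram average loss becomes overwhelmingly ‘significant’ while remaining practically trivial.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical trial: 5,000 participants, average loss of ~1 kg.
before = rng.normal(loc=90.0, scale=10.0, size=5_000)        # weights in kg
after = before - rng.normal(loc=1.0, scale=2.0, size=5_000)  # lose ~1 kg each

t, p = stats.ttest_rel(before, after)
print(f"p-value: {p:.1e}")                                # vanishingly small
print(f"average loss: {np.mean(before - after):.2f} kg")  # ~1 kg: so what?
```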

And the problem with all of these uncertainties is that most people in the general public aren’t aware of them, which means they’re poorly equipped to analyse the validity of the information in front of them. And even if they could assess the information for robustness, who knows what kinds of assumptions have underpinned the findings?

So what do you do about it as a scientist?

Firstly, you acknowledge where your research might not be statistically robust, and you clarify how your study can legitimately be used.

Secondly, you clearly explain and justify every statistical test you used, especially where the choice of test has a fundamental influence on how your results will be interpreted.

Finally, you try to back up your findings with as many other similar findings as you can. After all, there’s safety in numbers.
