Underlying L’Aquila: The Multiple Balancing Act of Public Science

Last time I wrote to you all, science commentary was spanning a wide range of wonderfully interesting, stimulating, and oft-quirky topics. But now it's mostly centring on one extremely depressing story. Six Italian seismologists and a government official have been convicted of manslaughter and sentenced to six years' imprisonment, in relation to an earthquake in 2009 which devastated the town of L'Aquila and resulted in 309 deaths. It's not the usual whimsical material that this blog is made of. But it can't really be ignored. It's a result with huge potential implications for science – as a very wise man (a lecturer) once said to me, when discussing this very topic a year ago, 'would you be a scientist if you could be punished for miscalculations?'. But 'miscalculations' is too simple – what actually are they being punished for? Well, it depends on which outlet you get the story from. They're being punished by an ignorant court for failing to predict the completely unpredictable. Or they're being justly held accountable for giving reassurance without which innocent people would have left town and survived. Or, on more sensitive mid-way accounts, they've become victims of governmental rhetoric, or are paying an unfortunate price for failing to communicate a risk assessment in a way the public could actually use, or… Well, there's a lot written about this case, much of it really quite under-informed (and some of it really quite inappropriate to the sensitivities of the case, and this from a man normally sympathetic to his fellow blogger).

So whatever else you've already read, I urge you to clear your head of it all and look at this: http://www.nature.com/news/2011/110914/full/477264a.html which sets it all out in a detailed and level-headed way*. In fact, what with all those details, I don't really fancy picking over this specific case any more; instead I'd like (if you'll let me) to outline the rather pervasive general concerns underlying the whole thing.

In any case like this, where the public world and the science world meet with unpleasant consequences, there are always a lot of dilemmas and balancing acts to consider. Everyone ends up between many rocks and multiple hard places. There's a lot of interesting scholarship about how involved non-scientific members of the public should be in any scientific decision-making which is relevant to public concerns, as a sort of science-jury for new developments (I'm reading about it at the moment, so expect a piece on it soon, you lucky lucky people). One of the troublesome points raised is that of the two 'paralyses': over-involve the public and science becomes paralysed by a mass need for educational discussion; but if all responsibility for any ill result of the science is deferred to scientists, then they can become paralysed by trying to constantly second-guess what result would be 'best' for the public**. Closely allied to the latter is the balancing act of the 'precautionary principle' – or, to put it more bluntly, 'how bad would it be if we got it wrong?'.

No science project really ends with a 'wahey, we've 100% definitely got it' moment (see my first ever post, back in the depths of time). There's always a bit of residual uncertainty. And, more significantly, there's no hard-and-fast rule for what counts as an acceptable amount of uncertainty. If the potential result of your being wrong is catastrophic ('I'm pretty sure this button doesn't detonate the nuclear doomsday device'), you want to be more certain, obviously. But what if the result of saying 'no, the bad thing really probably is going to happen, everyone please panic' would be even more problematic? You do the moral thing and shout your scary results from the rooftops, despite that statistical quibble… only to find that the statistical quibble was actually really significant, and you've mistakenly caused a whole mass of people to flee for the hills, at no small inconvenience. Even waiting and gathering more data often isn't a good idea: the event you're worried about won't politely hold off while you firm up your statistics. Add in a host of other factors – like how complicated communicating 'risks' really is, and the fact that we're exposed to far too many risks all the time to be feasibly warned about each one – and 'public science' can be a very delicate balancing act indeed.
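For the quantitatively inclined, the warn-or-stay-quiet dilemma above can be put in toy decision-theoretic terms. This is only a sketch of the trade-off, with every number invented for illustration – real risk assessment and risk communication are, of course, vastly messier:

```python
# A toy expected-cost version of 'how bad would it be if we got it wrong?'.
# All numbers here are invented purely for illustration.

def expected_costs(p_event, cost_warned_event, cost_false_alarm, cost_unwarned_event):
    """Expected cost of warning vs. staying quiet, for a bad event
    that happens with probability p_event."""
    warn = p_event * cost_warned_event + (1 - p_event) * cost_false_alarm
    stay_quiet = p_event * cost_unwarned_event
    return warn, stay_quiet

# Even at a 2% probability, warning 'wins' if the unwarned outcome is
# catastrophic enough...
warn, quiet = expected_costs(0.02, 10, 5, 10000)
print(warn, quiet)  # 5.1 vs 200.0: warn

# ...but with a milder downside, crying wolf becomes the costlier mistake.
warn, quiet = expected_costs(0.02, 10, 5, 100)
print(warn, quiet)  # 5.1 vs 2.0: stay quiet
```

The point of the toy isn't the arithmetic; it's that the 'right' call flips entirely depending on numbers nobody actually knows with confidence.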

But it's a balancing act that really has to be done, and regularly. What I've vaguely referred to as 'public science' above crops up in the law, medicine***, public health, safety procedures, environmental concerns… A further problem is that it's a balancing act a lot of people don't even realise exists. And suddenly we're slap bang in the middle of my favourite topic, the public image of science. Welcome back, old friend. Basically, cases of public science can illustrate how the mistaken idea of science as definitively right / wrong (without any awareness of those old residual uncertainties) is actually very harmful. The use of science in the law is a great example. DNA tests: for a long time they could be considered The Clinching Evidence, Your Honour. DNA doesn't lie, DNA doesn't have a secret agenda, DNA isn't secretly the heir to the murdered aristocrat's massive fortune. But DNA testing is a real art, and I use that phrase in the sense of 'it's a bit subjective, different people have their own ways of doing it, and it's quite fair to disagree a lot of the time'. An example: http://www.washingtonpost.com/wp-dyn/content/article/2005/08/20/AR2005082000998.html. That's not to say it isn't a powerful extra tool in a legal arsenal. But it's interesting to note that in a few of the cases given in that link, the DNA only 'came good' on a second test; if you just assume 'oh, it's a DNA test, it must be right', you wouldn't even bother taking a second test. Even Lance Armstrong's lawyer, who I presume is pretty respected in lawyer world, claimed that Armstrong's accusers would be comprehensively found out by a lie detector test. Yes, those ever-so-trusted devices that have made the jury system a thing of the past****.
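There's a standard bit of arithmetic behind why a single 'match' can be so much weaker than it sounds, and why re-testing matters. A minimal sketch, again with entirely invented numbers (real forensic statistics are far subtler than this):

```python
# Why 'it's a DNA test, it must be right' is dangerous: a toy Bayes
# calculation. All rates below are invented for illustration only.

def posterior(prior, true_positive_rate, false_positive_rate):
    """P(suspect is the source | test reports a match), via Bayes' theorem."""
    p_match = prior * true_positive_rate + (1 - prior) * false_positive_rate
    return prior * true_positive_rate / p_match

# Trawling a database of a million profiles for one true source: even a
# one-in-a-million false-match rate leaves a lone hit at roughly a coin flip.
print(posterior(1e-6, 1.0, 1e-6))  # ~0.5

# A second, independent test on that same hit changes the picture entirely.
print(posterior(0.5, 1.0, 1e-6))  # ~0.999999
```

The moral: the strength of the evidence depends on how many people were tested and on error rates, not just on the test saying 'match' – which is exactly the sort of residual uncertainty the Clinching Evidence picture hides.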

So science has to work alongside other systems of debate and decision-making. But let's not take the blame too completely off scientists. As a group, they haven't exactly rushed to dispel the image of science as almightily correct. And even when they do attempt to communicate their uncertainties, there's another group involved in the 'publifying' process: their mouthpieces, the ones who (as with the convicted government official in the L'Aquila case) have the job of representing the scientists to the public. Or, as often happens, misrepresenting them. Who's to blame there? The scientists, for being insufficiently clear? Or the spokespeople, for distorted translation? As Oreskes and Conway's book Merchants of Doubt (read it, now) shows, scientists are sometimes systematically (even sinisterly) blocked from projecting their original messages to the wider world. But as the very same book also shows, scientists can often be spectacularly bad at PR, with little awareness of effective public outlets and no inclination to step outside their cagey and codified lab-language (although this is improving). Unsurprisingly, this can be a problem.

Finally, one extra point that's forgotten with worrying frequency in cases of public science: scientists are members of the public too. They don't just dispassionately manipulate risk probabilities for other people to do with as they see fit; they might actually, you know, care about the results. But to get those results they have to use the method they know best, i.e. induction from the past results of science to predict the future. And, in a lot of the increasingly complex sciences of the modern world, that sometimes doesn't work. Of course, that's not to say that scientists are entirely blameless when things go wrong. Politicians, diplomats, and the like regularly have incredibly complex tasks to pursue, and they don't escape blame when stuff blows up in everyone's faces. But considering how many factors there are in any form of public science – L'Aquila very much included – laying the blame solely at the feet of one group seems an oversimplification. And, unlike some oversimplifications, it's not one that helps us find the right answer.


* = Or at least, it seems to. The problem with studying media is that you end up trusting no-one and nothing, always watching your back. I'm like a scholarly secret agent. Or maybe that's wishful thinking.

** = Interestingly, a few authors suggest that this is a unique feature of our modern world. The argument goes: a lot of the modern public-relevant sciences – climate change, economic modelling and the like – are so damn complicated and so tied up with public activity that trying to 'find the solution' like a traditional science just isn't feasible, because the situations keep changing. Instead, the answers to questions like 'what is the best way to slow climate change?' can only be given with reference to public behaviour. An extreme (in my view too extreme) formulation of this is the dramatically-named 'post-normal science', which will be the subject of the next post. However, for those who are just too intrigued: http://www.nusap.net/

*** = There are some very interesting parallels to be drawn between the L'Aquila case and ideas of medical malpractice. And, while we're on parallels, anyone familiar with the British BSE story might find some interesting similarities. But I'll leave that to you.

**** = Sarcasm may be the lowest form of wit but boy is it needed sometimes
