[Quite a long post sorry. But it’s got 2 parts so you can get tea and a biscuit in the middle. And if you’re already familiar with sociology of scientific knowledge, well done, you can skip straight to Part 2]
So I’m currently working on a short research paper entitled ‘Imagery and Social Knowledge: Cambridge Psychology and Edinburgh Sociology.’ For anyone who knows about sociology of scientific knowledge, that’s hopefully enough to intrigue and excite you. But I’m aware that not everyone is au fait with sociology of scientific knowledge – a shame, because it’s really rather exciting and extremely eye-opening. So this will actually be a two-part blog, or the ‘Peter Jackson’s Hobbit Method’ as it’s fashionably known1. Part One will introduce you to the wonderful world of SSK. Part Two (‘This Time It’s Personal’) forcefully hurls my own ideas into the proceedings. Hopefully this series has a happy ending, but I can’t promise that.
– PART ONE –
Scientists, it would seem, like to study things. They also like to study things ‘scientifically’ – whatever that might mean, they like to do it. And throughout modernity it’s become ever more popular – there have been sciences of ‘the state’ (population, resources, etc. – the origin of the term ‘statistics’), sciences of social groups and cultures (sociology, anthropology), even sciences of happiness and prosperity (welfare economics). But during the late 60s and early 70s, in Edinburgh, this idea took a turn for the meta. At the newly-founded Science Studies Unit, four recent graduates of experimental psychology, crystallography, genetics, and applied mathematics decided to take their shared interest a step further, and start a science of science. These four individuals were David Bloor, Barry Barnes, Steven Shapin, and Donald MacKenzie, and (with others) they ended up founding The Strong Program in the Sociology of Scientific Knowledge. And, in the field of sociology of scientific knowledge (or SSK, as it prefers to be known), this turned out to be jolly influential indeed.
So what was the Strong Program (also called, less dramatically, the Edinburgh School) all about? Well, its first issue was with more general sociology of knowledge. Very roughly, this studies ‘knowledge’ in the sense of ‘belief’ – how people come to ‘know’ things, how people convince others they ‘know’ things, how this ‘knowledge’ gets passed around, and so on. The key point is that all these processes depend on surrounding social factors, like who is ‘super important’ or what sort of knowledge would be ‘useful’ to particular social groups. But Bloor and co. argued that many of these sociologists (Karl Mannheim gets chucked around a lot here2) were a bit over-cautious about extending their work into scientific knowledge, as if studying the natural world were a clear-cut case of right-and-wrong. ‘Knowledge’ of science got passed around because it matched up with nature, they seemed to say, so there just didn’t seem much point in studying underlying social factors there.
Dead wrong, the Strong Program argued. On this point they weren’t alone, but the Strong Program took issue with other sociology of scientific knowledge for simply being sociology of error. On this account, social factors came in only to explain why scientists had gone wrong; to give a simplified example, they might argue ‘Newton believed gravity varies as an inverse square law because it does, Leibniz believed it didn’t because he was German’.3 This rests on an assumption – which should be dubious to anyone who’s ever (1) done any science or (2) had an argument – that what is right is easy to make people believe. But it isn’t. Even the most basic assumptions of the modern scientific brain, such as the idea that experiments are a really good way of finding out about nature,4 were at one point socially-fraught battlefields. All agreed-on (and disagreed-on) Facts Of Science deserve some sort of explanation.
So how did the Strong Program intend to go about this? Well, they set up a few principles. They get re-formulated in various ways in various places by various people, but hopefully this list can give you some idea:
1) The Symmetry Principle: All scientific beliefs, whether we now believe them to be true or false, should be discussed in the same way. This is basically the big no-no to sociology of error as mentioned above.
2) Meaning Finitism: A rather fancy term, but (as I understand it) it just means that you can’t look at a natural object (an electron, say, or a fish) and say ‘that’s a fish because it’s a fish’. You have to classify things and decide how they should be studied – and those decisions emerge from people, not nature.
3) Causality: As a science-of-science, the Strong Program argues that we should be describing the formation of scientific beliefs in terms of what caused them to be believed, just as a physicist describes motion as caused by particular forces. At no point should scientific belief ‘just happen’.
4) The Principle of Reflexivity: As the Strong Program should be a science-of-science, any suggestions the Strong Program makes about science should also be applicable to the Strong Program.
The combination of these points has led to quite a lot of criticism (mostly from philosophers, who tend to get upset when you say questions of truth aren’t important), so let’s have a think about why these ideas might be controversial. The main intuitive upset is that point 2 implies that scientific knowledge is local and dependent on particular individual decisions or cultural factors. Science is just made up, in other words. There are often two confusions here. The first is equating ‘nature’ with ‘science’. Sociologists of science are often accused (it happened to me today, in fact) of saying that nature varies with human belief; something along the lines of ‘scientists decide there’s gravity, and suddenly gravity pops into existence’. That’s clearly silly. But sociologists aren’t silly (usually) – there’s clearly something holding me into my chair, but what we call ‘gravity’ is described by ideas of potential energy, curvature in space and the like. Those ideas aren’t nature itself – they’re pictures we’ve built to make the whole thing easier to think about. And they’d be nonsensical to scientists from another world, trained to do science in a different way – even though those scientists, presumably, still live with the same gravity-force-thing that we do.
The second confusion (this comes from point 3) is that the Strong Program is often accused of saying our beliefs about science are purely caused by social (even *shudder* political and ideological) factors. NO. The argument is that scientific belief is caused by a COMBINATION of natural and social factors. Take the example of a human brain. It’s clearly built by nature, and what we believe about the human brain will come from looking at this natural creation. But when we convert this brain-in-front-of-us into graphs, tables of data, pictures etc., what we choose to focus on, what we deem as ‘scientifically relevant’, comes from us, not nature. And when our friends and enemies read our papers, assess our theories, ‘virtually witness’5 our experiments, they’ll also inevitably be thinking things like ‘how significant would it be if this was wrong/right?’, ‘do I trust them to do this experiment correctly?’, and the like. So when you’re trying to explain how people come to ‘know’ about some natural object or process, you can’t just ignore the object or process. But you can’t ignore all that other social stuff either. To do either of those things would be silly.
– PART TWO –
So that’s the Strong Program in a rather reductive nutshell. Now to my bit. I’ve been looking at a bizarre book by the psychologist Richard Gregory (a bit of a sci-leb in the later 20th century) called Mind in Science. Amongst the book’s many (rather ambitious) arguments is that experimental psychology could be used to analyse how scientists come to think about ‘truth’. Gregory’s discussion is quite a philosophical one, so he thinks about ‘truth’ in an abstract sense of ‘what-really-is-in-the-world’. But if you replace his ideas of truth with sociological ones of ‘what-people-believe’, then really interesting things start to emerge… Consider that Mind in Science, like the Strong Program, is basically a study-of-science. And when one considers Gregory’s other work – he did a lot of experiments with illusions – it also fits very pleasingly with the Strong Program principles of symmetry, causality, and reflexivity. First, illusions may be perceptions going ‘wrong’, but you study illusions to establish how both ‘normal’ and ‘abnormal’ perceptual systems function in the eyes, brain, etc. Symmetry, tick. Second, Gregory approaches studying illusions by considering the brain as a sort of machine, dealing with input stimuli and output perceptions. Causality, tick. Third, Gregory – as with all psychologists – has to be aware that the mechanisms of perception he studies are happening all the time inside himself when he does psychological experiments. One facet of the Mind in Science argument is that scientists base their work on their own perceptions, even though they aren’t really aware (unless they’re psychologists) of how misleading those perceptions can be. And there we have a very powerful point about studying science. Reflexivity, big tick.
So at first glance Gregory seems to be backing up the Strong Program – saying that their fundamental principles for studying science can also be found in psychology. But as a conclusion I find that quite limiting. David Bloor did something similar when he suggested that the founder of Cambridge Experimental Psychology, F.C. Bartlett, could be seen as a ‘prototype sociologist of scientific knowledge’ – interesting, but it tends to end up simply showing that Bartlett asked questions that later sociologists went on to ask anyway. I’m interested in what new things psychology has to add to SSK. For me, the main thing is Psychological Experiments. As mentioned, Gregory spent his life studying illusions. To massively skip over all his analysis (sorry), he used these experiments to develop this picture of how we perceive things:
<Wordpress isn’t happy with me putting the picture here. Dunno why. So for now go to the top of this post, or to https://sidewayslookatscience.wordpress.com/2013/03/07/picture11-jpg/. We’ll wait here until you get back.>
(Don’t be confused by the fact Gregory calls this a ‘hypothesis generator’ – one of Gregory’s favourite points was that perceptions are like scientific hypotheses, in that they extrapolate beyond available data and draw relationships etc.)
The Strong Program has a lot to say about ‘conceptual knowledge’, i.e. how we know what objects ‘should’ look like and how that affects how we see them. In a really cool example, Gregory showed that we see concave faces as convex faces, simply because we ‘know’ what a face looks like and it ain’t concave. The Strong Program says loads about that, especially when talking about Principle Number 2 (‘Meaning Finitism’) – scientists discover new stuff, they decide how it relates to other stuff and how to classify it, they teach this to their later students, and the students then have new ‘conceptual knowledge’ which makes them see experimental outcomes differently to the uninitiated. It also takes account of the ‘bottom-up’ processes (recall that the Strong Program says natural things, not just social things, act as causes). But it has very little to say about the other bits of that diagram. That’s because the Strong Program has always been interested in social collectives, and they don’t straightforwardly relate to things like ‘perceptual knowledge’ (how to turn sense-data into shapes, distances, etc.) or ‘rules’ (the cognitive programs in the brain). On the other hand, by rejecting the ‘social’ element of his mentor Bartlett, Gregory focussed completely on the individual. They’re two sides of the same coin – to understand scientists we must see them as both individuals and members of social groups – but to get the whole coin we’d probably have to go back to Bartlett, and he’s a bit outdated now.
To be honest, Gregory’s also probably outdated, but that’s not my point. I’m not saying that all Strong Program followers should just read Gregory. I’m saying that the general methods Gregory used are really important gap-fillers for the Strong Program. That’s particularly important when you remember that the Strong Program themselves are relativist about scientific knowledge – i.e. its usefulness is greatly affected by the context it’s being used in. So SSK needs a more detailed, and more appropriate, psychological basis. I’m basically dreaming of a day when the Edinburgh Science Studies Unit has its own laboratory, where scientists study other scientists doing science. If that ever happens, expect this blog to have a Part 3.
(Postscript: Having written this post, I’m very conscious just how much I’ve condensed the work I’ve been doing. If you’d like a more informed chat about what I’ve been doing – preferably before my Monday submission deadline… – I’d be happy to supply more info).
1 = Although the main advantage of the PJH Method doesn’t really work here, as doubling the income from this blog would make precisely zero difference.
2 = Although David Kaiser’s paper ‘A Mannheim for All Seasons’ does make some really interesting points about Mannheim’s relationship with science. It’s an excellent paper, if a bit tangential to this discussion; but with a title like that, I couldn’t really resist mentioning it.
3 = For this example, as indeed for much else in this current work, as indeed in much else in my academic life, I am indebted to Simon Schaffer. He also gave me a great example of how the horse-meat scandal exemplifies SSK at work, but I haven’t the room to draw it out. If you’re intrigued and want to hear more, just shout. I mention it purely because Simon’s the only man I know who finds sociology in a dodgy lasagne.
4 = This example is drawn from Steven Shapin and Simon Schaffer’s Leviathan and the Air-Pump, a classic historical example of SSK arguments (although, note, not 100% Strong Program arguments) in action.
5 = A really funky term from Leviathan and the Air-Pump. It’s basically drawing on the idea that scientific publications represent (in the sense of ‘re-present’) experiments for people who weren’t present when the experiment happened, and will most probably never see the real thing for themselves. It’s the basis of all modern scientific communication, really.