This is an extremely long post – something in the ballpark of 13,000 words – for which I apologise.  I can’t claim that it isn’t rambling and digressive, etc., but for what it’s worth it felt more or less like a single line of thought while I was writing it.  Unfortunately I don’t really have it in me to revise it in any serious way, so here it is.  The post is organised roughly as follows.  First I talk very briefly about individual-level epistemology, in its traditional ‘Enlightenment’ form.  Then I make a shift to social epistemology.  I draw on Neurath, Brandom, the strong programme – all the classics of my personal epistemological canon – to outline what I take to be a reasonably coherent social-epistemological account of science as an anti-foundationalist epistemic system.  This is the bulk of the post.  I then finish up by applying this model to a couple of personal preoccupations – a rather bathetic conclusion given the intellectual resources I’m drawing on, but again, it is what it is.  I guess you can see the post as trying to do two main things.  First, I want to give a social-institutional answer to the traditional demarcation problem: what is science?  Second, I want to reflect a little on what this answer implies for how individuals – both as citizens consuming scientific output, and as researchers contributing to the scientific endeavour – can and should relate to this broader institutional structure.  The ‘emotional’ point, from my perspective, is to try to think about the location of my own research within the broader scientific institutional space.

Start, then, with non-social epistemology – specifically, with the good old Enlightenment project of trying to figure out the way the world is, using the resources of science and reason.  I’ll take as an exemplary expression of this project the Royal Society motto: ‘Nullius in verba’ – ‘on the word of no one’.  This is the commendable (to my mind – of course, opinions on this matter differ) Enlightenment idea that the authority of tradition qua tradition is no authority at all – that we should not simply defer to the tradition, whether it be religious or political or philosophical or whatever.  Rather, we should figure things out for ourselves.

It’s worth pausing for a moment here, perhaps, to mention that there’s a philosophical connection between the scientific project (understood in these terms) and the anti-traditionalist, anti-authoritarian forms of political liberalism and radicalism that emerged during this same historical period.  Both of these projects are, at some level, driven by the same thought: we should not simply defer to authority (whether that be political or epistemic); whatever authority authority has comes from our own judgements and actions.  I think this is a good philosophical approach, and I take myself to be aligned with it, important caveats notwithstanding.  But this post isn’t about the connections between the epistemic and the political dimensions of ‘the Enlightenment’ – that’s all for another day.

What I want to start by talking about, rather, is the different ways in which “on the word of no one” could be understood.  If we reject the authority of tradition, what are we basing our epistemic judgements on?  As usual, I’m going to be maximally crude here and say that there are basically three broad categories of alternative authority-source: experience (empiricism); reason (rationalism); and Mysterious Other (mysticism).  I appreciate that this is all a sort of first-year-undergraduate-level understanding of Enlightenment epistemology.  But at the same time it seems basically fine to me, and it’s what I’m going with.  Typologising in this way, then, and ignoring mysticism (on the grounds that it is a transformation of the Enlightenment rejection of tradition into a basically anti-scientific epistemic approach, and thus Does Not Align With My Values), we have two basic projects: grounding knowledge in the senses, and grounding knowledge in the faculty of reason (plus, of course, combinations of the two).

All well and good.  But then, as is I think at this point abundantly well-established by the subsequent philosophical tradition, once we start trying to elaborate these approaches we run into worlds of trouble.  When we think about our faculty of reason, does it not seem that our processes of reasoning are themselves, at least in part, socially inculcated and influenced – that is to say, influenced by the traditional authorities that our faculty of reason is meant to break with?  In my view the answer to this question is definitely “yes”.  Similarly, when we think about how to form judgements on the basis of experience, it seems plausible that theory is ‘underdetermined’ by experience and that, moreover, the way in which our experience is taken up into judgement is itself partly determined by socially- (and thus authority-) influenced processes of thought.  Obviously I’m not aiming to make the case for either of these positions in this post – I’m gesturing to the long philosophical debates around these issues.  Still, I’m a philosophical pragmatist, and therefore I’m on the “most stuff is socially constituted” side of these debates, and I tend to think that appeals to non-social faculties (whether of experience or reason) often tacitly rely on socially-constituted categories.

Even putting all this aside, though, there are more practical ways in which the “on the word of no one” principle runs into problems.  Obviously we’re not all carrying out every scientific experiment ourselves – we’re relying on other researchers to make empirical observations, and then reading their reports of those observations, or other researchers’ syntheses and summaries of those reports.  So testimonial authority is central to scientific empiricism.  Similarly, even when we are engaged in Cartesian rationalism, are we really thinking things through from first principles ourselves – or are we using others’ accounts of their reasoning as an aid to, and frequently a substitute for, our own?  Here again, for example, the canonical status of Descartes’ ‘Discourse on Method’ is an interesting kind of performative… if not contradiction, then at least tension: a canonical authority for rejecting canonical authority.  There are tensions here, I think, in the constitution of an anti-traditionalist tradition – in the social inculcation of the project of rejecting socially-inculcated judgements.

This kind of line of reasoning is one of the ways you can get to a social- or practice-theoretic critique of Enlightenment rationalism or empiricism.  The crude argument here would go: the Enlightenment project aspired to break with social authority; but we can show that the very categories with which Enlightenment thinkers engaged in this project are socially constituted via unacknowledged relations of authority. From here it is easy to conclude that the Enlightenment project as rejection of authority is basically a contradiction in terms, and we should throw it in the bin.

Obviously this is a very crude summary of the critique, but I think it is recognisable as a summary of quite a lot of critical science studies.  For example (and since I started with the Royal Society), I would argue that Shapin and Schaffer’s ‘Leviathan and the Air-Pump’ clearly falls within this broad genus.  Barnes and Bloor’s ‘strong programme’ argument for relativism can likewise easily be taken to point in this direction.  So do at least some categories of critical theory (in the Frankfurt sense) and Marxism, as well as some forms of more standpoint-epistemology-adjacent contemporary critical theory.

So.  At this point we’ve traversed two moments of what we can see as a kind of ‘dialectic’.  We started with a picture of the Enlightenment epistemological project that understood itself as rejecting social, authority-based sources of knowledge in favour of various kinds of individual epistemic grounds – rationalist or empiricist.  That’s moment one.  Then we argued that this doesn’t work: social relations, and authority-relations, implicitly constitute even the apparently non-socially-constituted categories of Enlightenment epistemology.  This is so in at least two ways.  First, the ‘individual’ psyche is always partly socially constituted, in its faculties of both observation and reason: you can’t find your way to a faculty that is not shaped by the forces of social authority that the faculty superficially appears to transcend or escape.  Second, you can’t in practice engage in any serious project of knowledge construction without relying on testimony, and so we need to bring authority-relations back into our epistemology in order to deal with testimony.

Now, if you are of a critical turn of mind, you can interpret these critiques of ‘individualist’ Enlightenment epistemologies as damning for the entire epistemological project.  The Enlightenment thinkers sought to construct knowledge “on the word of no one”; they are not able to do so; too bad for the project.  This is the second moment of our ‘dialectic’, which takes itself to simply refute the first.

But not so fast!  We don’t have to accept critical science studies’ debunking application of these insights.  Our third ‘moment’, then, is accepting the idea that we can’t get away from either the social constitution of ‘individual’ faculties or testimonial authority structures, and trying to construct an understanding or version of the Enlightenment epistemological project that is grounded in these insights, rather than refuted by them.

This, obviously enough, is the ‘moment’ of this ‘dialectic’ that I endorse.  I take it that this broad approach has been pursued, in different ways, by a lot of thinkers that I’m interested in.  On the one hand, there are the explicit ‘social epistemologists’ who are interested in the social structure of science as an institution.  On the other hand, there are the pragmatist philosophers – especially, for me, Robert Brandom.  I take it that Brandom’s work – especially his recent Hegel book – also presents a highly sophisticated social epistemology, which aims to reground Enlightenment rationalism in social-institutional terms.  In the rest of this post I’m going to dwell on this third ‘moment’, or paradigm.

Start with ‘Neurath’s Boat’.  The other day I finally got round to reading Neurath’s critique of Spengler, in which he articulates his famous boat metaphor.  There Neurath writes:

Even if we wish to free ourselves as far as we can from assumptions and interpretations we cannot start from a tabula rasa as Descartes thought we could.  We have to make do with words and concepts that we find when our reflections begin.  Indeed all changes of concepts and names again require the help of concepts, names, definitions and connections that determine our thinking.

This understanding of our intrinsic enmeshment in inherited concepts and associations is part of Neurath’s conception of rational thought in holistic, rather than atomistic, terms:

When we progress in our thinking, making new concepts and connections of our own, the entire structure of concepts is shifted in its relations and in its centre of gravity, and each concept takes a smaller or greater part in this change.

Neurath goes on:

Not infrequently our experience in this is like that of a miner who at some spot of the mine raises his lamp and spreads light, while all the rest lies in total darkness.  If an adjacent part is illuminated those parts vanish in the dark that were lit only just now.  Just as the miner tried to grasp this manifoldness in a more restricted space by plans, sketches and similar means, so we too endeavour by means of conceptually shaped results to gain some yield from immediate observation and to link it up with other yields.  What we set down as conceptual relations is however, not merely a means for understanding, as Mach holds, but also itself cognition as such.

This last sentence is a noteworthy remark.  Of course Neurath isn’t here proposing an elaborated Brandomian-Hegelian argument that conceptual content should be understood in terms of inferential connections, but I think it is clear that for him “cognition as such” is about tracking connections between concepts.  For this reason, changes in our overall web of concepts can, for Neurath, arguably also be understood as transformations of the concepts themselves.  There is a relatively strong sense, then, in which Neurath’s understanding of cognition is both holistic and cultural, as well as (plausibly) inferentialist-adjacent.

Now comes the famous boat metaphor:

That we always have to do with a whole network of concepts and not with concepts that can be isolated, puts any thinker into the difficult position of having unceasing regard for the whole mass of concepts that he cannot survey all at once, and to let the new grow out of the old.  Duhem has shown with special emphasis that every statement about any happening is saturated with hypotheses of all sorts and that these in the end are derived from our whole world-view.  We are like sailors who on the open sea must reconstruct their ship but are never able to start afresh from the bottom.  Where a beam is taken away a new one must at once be put there, and for this the rest of the ship is used as a support.  In this way, by using the old beams and driftwood, the ship can be shaped entirely anew, but only by gradual reconstruction.

This is, I think, probably the classic statement of anti-foundationalism in philosophy of science.  It’s wonderful stuff, and I fully endorse it.  But this move then opens up a whole set of other questions.  In particular, granted that we are sailors adrift remaking our boat at sea – how much faith do we put in the existing state of the boat?  

The fallibilist and anti-foundationalist approach I’ve been describing, I think, is typically associated with two commitments that stand in apparent tension.  On the one hand, there is the commitment to the idea that existing scientific beliefs and methods are our best starting point.  On the other hand, there is a commitment to the ongoing remaking of those beliefs and methods.  The tension between these stances is not, of course, a matter of outright incompatibility – endorsing both at once is precisely what gives the fallibilist anti-foundationalist position its power.  But this tension is something that needs to be navigated in practice by anyone pursuing this approach to scientific epistemology.

I think another classic expression of this idea is Max Weber’s reflections in ‘Science as a Vocation’. There Weber writes:

In science, each of us knows that what he has accomplished will be antiquated in ten, twenty, fifty years.  That is the fate to which science is subjected; it is the very meaning of scientific work, to which it is devoted in a quite specific sense, as compared with other spheres of culture for which in general the same holds.  Every scientific ‘fulfilment’ raises new ‘questions’; it asks to be ‘surpassed’ and outdated.  Whoever wishes to serve science has to resign himself to this fact.  Scientific works certainly can last as ‘gratifications’ because of their artistic quality, or they may remain important as a means of training.  Yet they will be surpassed scientifically – let that be repeated – for it is our common fate and, more, our common goal.  We cannot work without hoping that others will advance further than we have.  In principle, this progress goes on ad infinitum.

This is the ‘paradox’ of fallibilism: that we put our confidence in judgements precisely because we expect them to ultimately be found inadequate.  I think this is a coherent – and, indeed, a robust and correct – philosophical perspective.  But it does raise questions about exactly what attitude to adopt to any specific judgement, as well as to the institutional structure of science as a whole.

Before I write more on that theme, I want to present one more representative of anti-foundationalist philosophy of science: Michael Polanyi.  In his classic essay ‘The Republic of Science’, Polanyi sketches an account of the institutional structure of science that I think is broadly correct.  Polanyi aims to give an account of how the institution of science can as a whole embody the Enlightenment ideal of “on the word of no one”, even as every specific moment of the institution relies on extensive authority-claims.  In Polanyi’s words:

the authority of scientific opinion enforces the teachings of science in general, for the very purpose of fostering their subversion in particular points.

That is to say: the community of scientists trains new aspiring members of the community in the scientific tradition – some competence in the tradition is a precondition of full membership in the scientific community of mutual recognition.  Yet one of the norms of the scientific community that scientists thereby enter is that any element of this tradition can in principle be challenged.  This institutional structure thus both transmits a tradition and aims to ensure that every element of that tradition is in principle open to rebuttal, and thereby capable of empirical and rational grounding.

As I keep saying, something in this broad space is the vision of science I endorse.  Moreover, this is not a particularly niche or strange opinion on my part but, I take it, an at least in principle widely-held one.  The institutions of science are constructed in the way they are in large part because they are informed by precisely this fallibilist understanding of the rationalist and empiricist endeavour.  Individually we cannot but take most of our opinions on the basis of the authority of others.  But collectively we have constructed those authority-relations, within the institutional structure of science, such that any and every individual claim can be subjected to the tests of experience and reason.  And this fact about the scientific community as a whole is what justifies any individual within that community accepting so many of the community’s conclusions on the basis of (apparently) nothing more than community authority.  This institutional fallibilist structure is the basis of the authority of the beliefs and techniques that the community transmits.

Ok.  So this is the third ‘moment’ of the ‘dialectic’ I’m discussing: this vision of science as a fallibilist institution, and the dual role of authority within this institution.

But our thinking about how science is structured doesn’t, and shouldn’t, stop there.  In the remainder of the post, then, I want to start to build on this core picture by thinking in a very crude way about a few different challenges or problems that can be presented by or to this picture.  I’ll aim to be somewhat brief and (therefore, as usual) crude.

First issue.  How are we to assess the overall reliability of our scientific institutional structures?  Our basic Neurath-Weber-Polanyi picture is of science as an institution which is, over time, self-correcting and self-improving.  It may well be the case that any given commitment turns out to be misguided, but the general mechanics of the institution’s internal checks and balances will tend over time to improve its claims and methods.  Moreover, for this reason, it’s reasonable to treat the institution’s current overall output as a fair approximation of our current best guess as to how things really are.

The basic critique here is: what if that’s not the case?  What if the institution is just fundamentally broken in some way?  The way in which you think science is broken is likely to depend on your ideological location: maybe it’s in the pocket of capital or the ruling class, or ‘globalist elites’.  Maybe the social location of scientists shapes their judgement in a way that is destructive of real insight.  Or maybe science is just, for whatever contingent reason, a self-selecting cadre of people with bad methods and bad views, using their institutional clout to prevent self-correction mechanisms from operating.  There are broader and narrower versions of this kind of critique.  At the limit case, there’s the rejection of science tout court.  But there are also many narrower critiques: such-and-such a discipline or sub-discipline is in the hands of fools and/or scam artists and/or powerful interests, and science’s self-correction mechanisms are not working because of the way those with institutional power have structured the relevant field.

How does one respond to this kind of critique?  Well, to a large extent it depends on context.  The reason it depends on context is that it is probably impossible in the abstract to draw a clear line between bad versions of this critique, which aim to reject what’s best in science, and good versions of this critique, which are themselves part of the ‘self-correction’ mechanism from which science derives its authority.  Put differently: if one simply rejects out of hand, in an undifferentiated way, critiques of current scientific practice and scientific findings, on the grounds that such critiques are ‘anti-science’, then one is, potentially, cutting away the basis for the scientific authority one seeks to appeal to.  Because, of course, the whole point of the scientific enterprise is that any dimension of scientific orthodoxy is in principle up for questioning.

This dynamic, of course, is why basically every crackpot thinks that whatever they are doing is real science, and the existing body of scientific knowledge and practice is an anti-science conspiracy masquerading as real science in order to fool the rubes.  We’re all, I take it, familiar with this kind of argument, and we are (most of us) not keen to take the flat earth people (or whoever) very seriously, still less to give them chairs in theoretical physics at major universities.  And yet the rational core of the crackpot’s vision of themselves as persecuted truth-teller is this: if science is to function according to that Enlightenment vision with which we began – “nullius in verba” (albeit now understood at the collective and institutional level rather than the individual level, as discussed above) – then there must be some sliver of possibility that the crackpot is onto something.

And this creates a further problem.  Presumably we don’t want to place flat earth theory and the best current theoretical physics on a completely level institutional-epistemic playing field.  And yet the kinds of gatekeeping that are established to keep out the cranks always risk doing more than that: blunting the self-correcting dimension of science’s ongoing, self-constitutive self-critique.  The challenge of scientific institution-design is to balance these imperatives: the gatekeeping required to produce high-quality knowledge-claims, balanced with the ability to critique in principle every dimension of those knowledge-claims, and of the mechanisms by which they are derived.  Of course this balance is hard to get right even in the best of circumstances – even before we factor in all the familiar interests and errors that we know to be at work.

Ok.  So this is one set of issues – rather crudely put.  But here’s another, though closely related, set of issues.  As social scientists and philosophers have explored the social dimensions of science as an epistemic system, there has been increasing focus on ‘epistemic diversity’.  Here, again, the picture is one of science as a system with internal epistemic checks and balances – and those checks and balances require epistemic diversity.  If science is an ‘evolutionary’ system (as per Popper), then the way that evolutionary process works is by selection among variation.  And even if you don’t buy the entire science-as-evolutionary-system package – or the closely related science-as-catallaxy ‘marketplace of ideas’ vision of Polanyi – there’s still a basic insight here: if you don’t have some diversity of hypotheses, as well as the ability to adjudicate between different hypotheses using evidence, then you just don’t have science.  Again, then, we have an apparent ‘tension’ which is of the essence of the scientific enterprise: diversity of opinion oriented towards consensus around truth.  The epistemic authority that scientific consensus enjoys derives precisely from its willingness to adjudicate between diverse hypotheses – but that diversity of hypotheses is, intrinsically, a limit to consensus.

This is another version of the general point I made earlier.  But recent work in formal social epistemology has drilled down in this general problem space, and found some interesting, more specific results.  Kevin Zollman’s recent(ish) work on ‘the epistemic benefit of transient diversity’ is one such research strand.  Zollman mathematically models the opinion dynamics of very simple epistemic systems.  Agents exist on a graph (i.e. a network), and they interact with other agents via the edges (links).  Zollman asks: what graph or network structure results in overall better epistemic outcomes?  And he finds that (under plausible assumptions) relatively weakly connected networks result in better overall epistemic outcomes than do strongly connected networks.  Why?  Because in strongly-connected networks agents tend to coalesce quickly around a specific consensus – and that consensus may well be wrong.  It is better for there to be higher ongoing diversity of opinion, so that the collective selection of the ‘correct’ opinion among that diversity takes place over a longer time-frame, with more evidence and more measured judgement in play.
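
To make this a little more concrete, here is a minimal sketch of the kind of model Zollman studies – my own toy reconstruction in Python, not his actual code or parameters.  Agents face a ‘two-armed bandit’: two actions with unknown success rates, one objectively better.  Each agent holds Beta beliefs about each arm, myopically tries whichever arm currently looks better, and updates on its own results and those of its network neighbours.  We then compare a sparse ring against a fully connected network.  All the numbers are illustrative assumptions, and (as Zollman stresses) whether sparsity actually helps depends on how hard the learning problem is:

```python
import random

def run_trial(edges, n, p=(0.5, 0.55), pulls=10, rounds=100):
    # Per-agent, per-arm Beta beliefs stored as [successes, failures];
    # random priors give the community its initial diversity of opinion.
    beliefs = [[[random.uniform(0, 4), random.uniform(0, 4)] for _ in range(2)]
               for _ in range(n)]
    neighbours = {i: {i} for i in range(n)}   # every agent sees its own data
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)

    def mean(i, arm):                         # posterior mean of a Beta belief
        s, f = beliefs[i][arm]
        return (s + 1) / (s + f + 2)

    for _ in range(rounds):
        # Each agent myopically pulls whichever arm currently looks better.
        results = []
        for i in range(n):
            arm = 1 if mean(i, 1) > mean(i, 0) else 0
            succ = sum(random.random() < p[arm] for _ in range(pulls))
            results.append((arm, succ))
        # Everyone then updates on their neighbours' evidence, plus their own.
        for i in range(n):
            for j in neighbours[i]:
                arm, succ = results[j]
                beliefs[i][arm][0] += succ
                beliefs[i][arm][1] += pulls - succ
    # A 'good' outcome: the whole community ends up preferring the better arm.
    return all(mean(i, 1) > mean(i, 0) for i in range(n))

def ring(n):     return [(i, (i + 1) % n) for i in range(n)]
def complete(n): return [(i, j) for i in range(n) for j in range(i + 1, n)]

n, trials = 10, 200
for name, graph in [("sparse ring", ring(n)), ("dense complete graph", complete(n))]:
    wins = sum(run_trial(graph, n) for _ in range(trials))
    print(f"{name}: {wins / trials:.2f} of runs converge on the better arm")
```

The mechanism is visible in the code: in the dense network a few early unlucky draws propagate to everyone at once, and the whole community can lock in on the worse arm before the evidence has had time to accumulate.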

This kind of result, I take it, supports a fallibilist and (in some sense) evolutionary perspective on the scientific research process.  It supports the idea that the strength of science lies in its ability to accommodate high diversity of opinion and (although Zollman isn’t studying this) method.  Of course, Zollman’s analysis is just a toy model, and (as Zollman emphasises) one doesn’t want to draw very strong conclusions on such a basis – but I take it that we have strong philosophical reasons to believe this kind of thing anyway, as discussed above.  Again, in other words, we are led to the idea that lots of scientists being wrong is central to the epistemic authority of science as a whole.  Efforts to establish institutional structures that speed up the consensus-formation process are likely to result in worse collective epistemic outcomes.

Ok.  So this is the basic picture of science as an institution that I endorse.  But here is where I want to go, with all this theoretical apparatus.  If we understand science in these terms, then a set of difficult problems is presented about how individual scientists – or really any individual, scientist or not – should interact with the scientific institutional structure as a whole.  If we adopt the original, individualist Enlightenment epistemological approach then this category of problem doesn’t present itself: the individual is the seat of knowledge, and epistemic authority can be assessed at the level of the individual.  But if we adopt this social and fallibilist understanding of epistemology, then the individual is not the seat of knowledge – knowledge is something that we produce and assess collectively via a mechanism that intrinsically involves much individual-level error.  Moreover, individual epistemic virtues are far from the only things that need to be considered when evaluating epistemic authority: the specific reliability of scientific knowledge is a feature of the system as a whole, rather than of any one of its moments.  In addition, we have derived the apparent ‘paradox’ that (at least for large classes of claim – I’ll introduce some necessary nuance here later) even if we want everyone to be ‘correct’, we also don’t want everyone to believe the same thing – because that would undermine the basis for the authority of the claims we take to be correct.  My question in the remainder of this post is: what does this mean for the way in which any given individual ‘ought’ to relate to the scientific institutional apparatus and tradition?

At this point I want to talk a bit about some different attitudes that can be taken to the scientific enterprise.  In an earlier draft of this post I used the ‘case study’ of the discourse around the science of COVID-19 to illustrate some of these points.  That former draft is probably still more visible than it should be in what follows, but I decided it’s a much too contentious – and concrete – topic to be worth dragging into this basically philosophical argument.  Still, the debates over COVID science are the kind of thing I have in mind in the following discussion.  There is a scientific discourse; how do we choose to relate to it, as ‘consumers’ of the output and implications of scientific research?

Here then are some ways it is possible to relate to ‘science’, or scientific institutions, as a citizen:

  1. Just flat-out rejection of the epistemic legitimacy of science.  Obviously this attitude comes in a range of different forms, some a lot more sinister than others.  Still, there is a problem in how to engage with this perspective, if (like me) you are broadly pro-science.  You can’t really argue with fundamentally anti-scientific claims on the basis of the scientific literature, because this perspective simply rejects the scientific literature.  The real argument is at the level of ‘basic worldview’ – and it is very difficult to know where to begin with that kind of debate.
  1. Moving on, then, another orientation to the scientific discourse is to just accept what specific prominent science communicators say as a summary of ‘the science’.  In my view this approach makes a lot of sense as a time-saving heuristic.  Most of us are extremely busy and time-poor – we simply don’t have the capacity to form judgements about the state of the current scientific literature, and therefore we delegate that job to people who have assumed the public role of assimilating and communicating the current state of the relevant science.  This is reasonable and rational – it’s how epistemic delegation works.  Of course, if you think the relevant scientific and science communication institutions are fundamentally broken, then this is a bad heuristic.  But if you don’t think that, this is a sensible approach, in my view – given paucity of time, etc.  It needs to be borne in mind, however, that this is a shortcut heuristic – which is relevant to approach (3).
  1. The third approach is the same as (2), but more dogmatic.  That is, this approach doesn’t just accord public science communicators authority as a shortcut heuristic, but it insists that there is something very problematic or suspicious about dissenting from their views.  For this perspective, the authority of specific scientists or science communicators is identical with the authority of science in general – to doubt these science summarisers and communicators is to doubt science itself.

    This is a much more dubious stance, in my view.  It’s appropriate to defer to public science communicators as a time- and labour-saving heuristic – but we need to remember that their role is to summarise and synthesise an intrinsically pluralistic and internally diverse field of discourse.  These communicators’ judgements about how to synthesise that internal diversity of scientific opinion are very much not the same as the authority of science in general.  Inevitably, many experts will dissent from the specific synthesis proposed.

    I think this tendency (a dogmatic ‘pro-science’ attitude, where ‘science’ is identified with some specific figure or figures within the extremely diverse scientific ecosystem) is quite common among what I would call the “I bloody love science!” crowd, as well as among some scientists and science communicators who find it convenient to claim the authority of science as a whole for their contributions to an ongoing pluralistic scientific discourse.  It is a way of understanding science as technocratic expertise, rather than in more fallibilist and pluralist terms.  You could do worse than this, but I don’t think it’s a great orientation to science as an institution.
  1. A fourth approach is a different, more nuanced form of denialism or scepticism.  Unlike (1) – the “everything is lies” approach – this perspective takes the scientific literature seriously.  However, it mobilises the intrinsic fallibility of any and every individual study to cast doubt over the literature as a whole.

    I think there are two variants of this approach.  One is bad faith – the kind of Darrell-Huff-working-for-the-tobacco-industry ‘merchants of doubt’ cynical mobilisation of scepticism in the service of a predetermined agenda.  This is denialism proper, the cold-eyed use of scientifically literate hyperbolic scepticism to cast doubt on an agenda the author opposes.

    There’s a more good faith version of this approach, though.  This happens when scientifically literate people, who spend a lot of time engaging with the scientific literature, slowly become horrified at the fact that when you scratch at the methods of scientific publications, or the structures of scientific institutions, you typically find flaws.  It’s really hard to do good research; most research isn’t good; and even research that’s good will have significant intrinsic limitations.  If you have the right (or wrong) sensibility, as you look at this stuff you slowly become convinced that we just don’t know the first thing about anything, that the entire scientific enterprise is a towering stack of cards built on sand.  This is, I think, the good faith road to denialism.

    How should we react to this approach?  Well, again, I think we need to be careful.  Sometimes it is indeed the case that a scientific field or subfield is just fundamentally broken – the studies carried out in it simply aren’t good enough for us to draw any meaningful conclusions; the noise massively outweighs any signal; the biases or interests at work are so overwhelming in their influence that we can’t glean any kind of legitimate signal through the shadow they cast.  We can’t rule out this possibility a priori, and we should pay enough attention to the sceptics to take it seriously.

    At the same time, though, the fact that any individual study is flawed (which basically every study is for some value of ‘flawed’), or even that there are systematic generative flaws in the relevant institutional structures (which again will always be the case for some value of ‘flaws’) doesn’t mean that the scientific subfield (or whatever) in question should be written off.  The reason for this is that – as I was arguing at greater length earlier – scientific conclusions are really aggregate phenomena.  They emerge as signal from noise, and the noise can be very substantial indeed – can even be systematic – while still generating a useful signal.  The idea of the scientific enterprise is that we are all engaged in a highly fallible process, but our collective research endeavour is stronger than any individual study or claim, or the flaws that afflict them.  And if we have the institutions of science functioning halfway properly, this indeed ought to be the case – a signal ought to be detectable, over time, despite everything.

    In this sense, the ‘good faith sceptic’ is (I’m arguing) taking an overly pessimistic and perfectionist approach to the assessment of scientific validity.  The good faith sceptic assumes that errors are magnified by aggregation – that if all the individual studies have some flaws, then the field as a whole must be a true disaster – rather than understanding the way in which institutionalised fallibilism allows fallible studies to produce something greater than the sum of their parts, via the process of collective sifting and checks and balances.  Again, we can’t assume that this aggregate-level effectiveness of the scientific research process holds a priori for any given field, but I’m claiming that it in fact often does hold for many actually-existing scientific institutions.  (A toy numerical illustration of this point follows below.)
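
To put a toy number on the point about aggregation – and this is entirely my own illustrative construction, not anything drawn from the literature discussed above – imagine that every ‘study’ of some true effect is individually flawed, carrying both its own idiosyncratic bias and ordinary sampling noise.  So long as the study-level biases don’t all point the same way, the pooled estimate lands far closer to the truth than a typical individual study does:

```python
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.5   # the 'real' quantity every study is trying to measure

def one_study():
    bias = random.gauss(0, 0.3)    # each study is flawed in its own way...
    noise = random.gauss(0, 0.4)   # ...and noisy on top of that
    return TRUE_EFFECT + bias + noise

studies = [one_study() for _ in range(200)]
typical_error = statistics.mean(abs(s - TRUE_EFFECT) for s in studies)
pooled_error = abs(statistics.mean(studies) - TRUE_EFFECT)

print(f"typical single-study error: {typical_error:.3f}")
print(f"pooled-estimate error:      {pooled_error:.3f}")
# Caveat from the text: if the biases were correlated – all pointing one
# way – pooling would converge on the shared error, not on the truth.
```

The closing comment marks the limit of the illustration, and of the argument: where a whole field’s biases are correlated, aggregation converges on the shared error – which is precisely the scenario in which the good faith sceptic turns out to be right.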

Ok – so far we’ve looked at four different ways to approach scientific findings.  I want to suggest that each of these ways has a sort of partial or lopsided attitude to the complex dynamic system of our scientific institutions.  The full anti-science denialist just rejects the whole thing; the heuristic timesaver and (in a less defensible form) the pro-technocracy expertise lover focus on some specific ‘output’ as bearing the authority of the institution of science as a whole (investing a moment of the system with an attribute that can really only legitimately be attributed to the system overall); the good faith denialist fixates on research flaws without understanding how the checks and balances of aggregation associated with the practice of the community as a whole can permit useful signal to emerge even through very significant noise.  

But we can, in principle, do better than all of this: we can ourselves triangulate between many different studies, and we can assess the likely institutional incentives and strengths and weaknesses in play.  Of course, we need to have the time available to do this – and it relies on our own judgement.  So this isn’t an easier – or even, necessarily, a better – way to approach things than making use of simpler heuristics.  Our judgement may be worse than that of whichever science synthesiser and communicator we would otherwise choose to delegate this task to!  But the approach is at least available.  And there is a certain sense in which this approach is more adequate to what science ‘is’.

Ok.  Let’s say we adopt this kind of approach.  Here we are trying to engage not just with science in the sense of individual outputs – whether individual papers or summary overviews assembled by science communicators – but with the dynamics of the relevant field as a whole.  This is, at least potentially, a good way to go about things – but it is also extremely cognitively taxing.  Moreover, even if we adopt this kind of approach – moving beyond the first-pass heuristic of trusting some specific synthesiser or synthesisers – we are still constantly engaged in acts of epistemic delegation.  Understanding science in the systemic fallibilist way I’m advocating means there is simply no way to get away from epistemic delegation – from trying to make rule-of-thumb judgements about whose word to rely upon.  In taking the approach I’ve described we are attempting to engage in a more sophisticated and triangulated effort at weighting the credibility of testimony – but we cannot but ultimately make judgements about how to weight testimonial credibility.  At the end of the day, this is core to the entire scientific enterprise.

And this basic, unavoidable fact about how science works means that we are never not going to be vulnerable to ‘scepticism’.  I began this post with the early modern Enlightenment approaches to foundationalist epistemology.  Rationalist foundationalism placed the individual faculty of reason centre stage, while empiricist foundationalism placed observations of nature centre stage, but regardless the idea was that the appeal to testimony could ultimately be grounded in something that itself did not need the grounding of the attribution of social credibility.  To use Barnes and Bloor’s phrase, the epistemic grounds of such philosophical approaches were meant to “glow by their own light”.

If we adopt a fallibilist, anti-foundationalist approach to science as a complex system, though, we lose this kind of grounding.  The point at which our chains of reasoning ‘bottom out’ is always contingent.  There is always, in principle, more that one could do; one is always engaged in epistemic delegation, treating something as contingently trustworthy that is, in principle, open to contestation.

This fact opens the door to an infinite application of specific scepticisms.  It is always possible to continue asking “and what’s your basis for believing that?”  And this infinite application of specific scepticisms itself has a double face.  On the one hand, and to reiterate, the goal of our scientific system as a whole is to collectively institutionalise the principle that was fallaciously individualised in the first wave of Enlightenment rationalism and empiricism – “on the word of no one”.  On the other hand, in a social, anti-foundationalist and fallibilist understanding of science, this principle is institutionalised through a structure of contingently authoritative testimony – specifically, taking people’s word as authority enough to believe things.  Sceptical questioning of taken-for-granted authorities can thus be seen both as the essence of rational, empiricist scientific inquiry, and as undercutting the testimonial institutions that we use to pursue rational, empiricist science at all.  Which of these a given act of questioning ‘counts as’ is a matter of social perspective.

Alright.  Here I want to pull back slightly, and start writing at a greater level of generality – talking not about science specifically but about epistemic systems in general.  I think there’s a tendency in quite a lot of philosophy of science to somewhat conflate the specific features of science with human reason and observation in general (indeed, there are lots of people who would argue that that’s justified, because there’s actually nothing that really differentiates science from other kinds of human epistemic practices!).  I don’t want to do that – I do think science can be demarcated, albeit loosely, from non-science.  Even so, I want to pull back now to make some broader remarks, drawing (as usual) on Brandom’s theory of practice and discourse.  Then I’ll circle back round to the problem space of fallibilist understandings of science.

So – start with Brandom’s ‘default, challenge, response’ model of the game of giving and asking for reasons.  And start by thinking about the philosophical problems this model is responding to.  If we are thinking about inferential chains, then we are faced with the problem: where do those inferential chains stop?  It seems like we have three options.  First, there is no terminus – the inferential chain just keeps on going forever down an endless series of new premises.  This seems like it might be an infinite regress, such that we’re never able to ground our reasoning in anything because we never reach a stopping point.  Second, there is a terminus, but it is itself ungrounded.  This seems like it might be a form of dogmatism.  Third, there is a circular chain of inferences, such that our original inference effectively functions as its own ground.  This seems to combine negative features of the first two scenarios – an infinite regress that is somehow also a dogmatism.

But what else are we going to do?  It seems like these options are exhaustive.  Moreover, something like this problem occurs not just at the level of substantive premises, but also at the level of logical inferences themselves – this is the argument Lewis Carroll makes in ‘What the Tortoise Said to Achilles’.  Because an inference can be (in Brandom’s terminology) explicitated as itself a substantive premise – this is what logic does, on an expressivist account: it makes the formal machinery of reasoning available as conceptual content, not just practice – the exact same problem can be made to recur in relation to the logical processes of inference by means of which the inferential chain is itself constructed.

So what to do?  Brandom’s ‘default, challenge, response’ model proposes that we start with “material inferences” – that is, substantive, not just formal, inferential claims (and, of course, on Brandom’s account inferences are the stuff of conceptual content in general) – which are presumed (by default) to be good.  Then those material inferences (or conceptual contents) can be challenged as part of the general discursive practice of asking for and giving reasons.  Once challenged, we are obliged to give a reason for our commitments.  On Brandom’s account, then, we inhabit a space of reasons which is filled with ‘default’ commitments that are not, at that moment, under challenge.  Indeed, there is no other way to enter the space of reasons – to be a sapient creature at all.  This world of ‘default’ commitments is the inherited material of our conceptual space – the boat we are remaking, in Neurath’s metaphor.  Then the process of reasoning – the game of asking for and giving reasons – is the remaking of that boat, by challenging default commitments, and thereby extending our inferential chains outward, turning what had been premises into conclusions, grounded in new premises.

As we reshape our conceptual world, then, on this model, we are (like Neurath’s miners) shifting the location of the light of inquiry.  Commitments that had been default premises become the conclusions of newly established inferential chains.  At the same time, commitments that we arrived at by inferential chains get integrated into the default background of our conceptual habits.  In this latter scenario, inferential chains ‘drop out’ as they achieve local consensus, and what had been a laboriously-arrived-at conclusion becomes a habitual, unexamined premise.  It’s important to recognise that both ‘sides’ of this process – default premises becoming contested conclusions; contested conclusions becoming default premises – are essential dimensions of our rational discursive practices: you can’t have one without the other.
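
For what it’s worth, the bare bookkeeping of this model is simple enough to caricature in code.  The sketch below is my own toy rendering, not anything Brandom offers – it assumes a single scorekeeper and ignores nearly everything that makes the actual account interesting (entitlement inheritance, incompatibility, the social articulation of scorekeeping).  The point it illustrates is just the ‘local foundationalism’ structure: commitments enter with default status, a challenge revokes that status, and a response re-grounds the claim on further commitments that are themselves defaults until challenged:

```python
# A toy 'scoreboard' for the default-challenge-response structure.  This is
# my own caricature, assuming a single scorekeeper; it is not Brandom's
# formalism, and it ignores most of what makes his account distinctive.
DEFAULT, CHALLENGED, GROUNDED = "default", "challenged", "grounded"

class Scoreboard:
    def __init__(self, commitments):
        # Inherited commitments enter the game with default entitlement.
        self.status = {c: DEFAULT for c in commitments}
        self.grounds = {}

    def challenge(self, claim):
        # A challenge revokes default entitlement: reasons are now owed.
        self.status[claim] = CHALLENGED

    def respond(self, claim, reasons):
        # A response grounds the claim on further commitments, each of which
        # enters (or re-enters) the game as a default in its own turn.
        for r in reasons:
            self.status.setdefault(r, DEFAULT)
        self.grounds[claim] = reasons
        self.status[claim] = GROUNDED

board = Scoreboard(["the assay is reliable", "the sample was uncontaminated"])
board.challenge("the assay is reliable")
board.respond("the assay is reliable", ["the assay was calibrated last week"])
print(board.status)   # the old default is now grounded... on a new default
```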

I take it that (as appropriately elaborated by Brandom) this is a more carefully developed (albeit also more boringly articulated) version of the vision of “cognition as such” that Neurath laid out in the passages from ‘Anti-Spengler’ I quoted above.  Ok.  But all this means two things.  First: we have swapped a general foundationalism (as in the original Enlightenment foundationalisms I began by discussing) for a series of ‘local foundationalisms’ – default commitments always vulnerable to challenge.  And, second: what counts as a local foundation, a currently default commitment, is a matter of local social practice.

In order to elaborate on this latter point, I now want to compare and contrast the Brandomian ‘default, challenge, response’ model to the vision of discursive practice articulated by Barnes and Bloor in their defence of relativism (‘Relativism, rationalism and the sociology of knowledge’). I already discussed this paper in a Journal of Sociology article, co-authored with N. Pepperell.  I’m not 100% happy with our treatment of Barnes and Bloor in that article (obviously the fault here lies with me, not with NP), but I don’t want to divert this post into a lengthy relitigation of all those issues.  For now, I just want to focus on one specific area.

In their paper, then, Barnes and Bloor field a range of objections to their relativism from an ideal-typical rationalist.  First, they consider the objection that (contra relativism) our ideas are in fact determined by the way the world really is.  Interestingly (and in contrast to some other prominent figures in the strong programme broadly understood – e.g. Harry Collins, at least in some moods), Barnes and Bloor have no objection to the idea that the way the world really is should play a role in our accounts of why people believe the things they do.  Barnes and Bloor are relativists, but they are not anti-realists, not even ‘methodologically’.  In Brandomian terms, Barnes and Bloor are happy to incorporate ‘reliable differential responsive dispositions’ into their analytic apparatus.  That is to say, they are happy to say that (for example) the fact that the object of an experiment really did behave in such-and-such a way should be part of our account of why a given scientist believes what they believe about the object of the experiment.

But Barnes and Bloor insist that this can’t be where our account stops.  One reason for this is that (as they say) nature has always behaved the way it does.  In Barnes and Bloor’s words:

reality is, after all, a common factor in all the vastly different cognitive responses that men produce to it.  Being a common factor it is not a promising candidate to field as an explanation of that variation.

Moreover, we know from the history of science that working scientists can of course interpret the ‘same’ or similar experimental results in vastly different ways.  Barnes and Bloor give the example of Priestley and Lavoisier having very different interpretations of the same basic experimental data.  The fact that experimental results are interpreted in the way they are by the researchers in question therefore can’t simply be accounted for by the behaviour of the experimental object; it must also be explained by the researchers’ interpretive practices.  And those interpretive practices (Barnes and Bloor argue) are socially determined.

So Barnes and Bloor are not disputing the role of ‘reality’ in determining belief; they are arguing (and here, as often, they are more aligned with the logical empiricists than either group’s reputation would suggest) that reality underdetermines interpretation, and that the other relevant factor – the appropriate object of sociologists’ study – is social norms.  They then argue – and this is the crux of the paper – that there is no non-relativist way to ground that social-normative determination of interpretation.  And here I think we need to be careful to distinguish at least two different elements of Barnes and Bloor’s argument.

The first element of this argument is a fight with anti-relativists who believe a shared universal faculty of reason is a precondition of the intelligibility of communication, science, reason, and so forth.  Here Barnes and Bloor cite, and argue with, Hollis and Lukes.  Barnes and Bloor make the case, in essence, that logical principles like modus ponens are socially instituted rather than features of some essential, invariant core faculty of reason.  As Barnes and Bloor see it, Hollis and Lukes and other rationalists are simply dogmatically insisting on a particular set of conventional practices as necessary features of human reason, without providing any compelling justification for their preferred norms beyond bluster.

I think this argument has a lot to recommend it.  But what I’m interested in here is the second, more general, dimension of Barnes and Bloor’s argument.  Here they argue that the rationalist in general ultimately cannot avoid dogmatically insisting on some category of proposition in which credibility and validity are fused.  In Barnes and Bloor’s words:

[the rationalist] will treat validity and credibility as one thing by finding a certain class of reasons that are alleged to carry their own credibility with them; they will be visible because they glow by their own light.

I love this passage; I think it provides a great, evocative articulation of the core critique of dogmatic rationalist foundationalism.  But I also think that this element of Barnes and Bloor’s argument is insufficiently attentive to the possibility of anti-foundationalist rationalisms of the kind articulated by Neurath.

Here the relevant question is: what does it mean to say that at some point in the rationalist’s argument, credibility and validity must fuse?  I think Barnes and Bloor mean to suggest that a dogmatically unjustified foundational premise must exist somewhere in the rationalist’s reasoning – and that the explanation for the rationalist treating that premise as foundational must be social.  But should we understand this foundation as aligned with actual philosophical foundationalism?  Or should we treat it as provisionally foundational in the way that is central to the ‘default-challenge-response’ model?

My claim here is that the ‘default-challenge-response’ model allows us to have ‘foundational’ premises for reasoning in a way that does not commit us to philosophical foundationalism.  My claim, moreover, is that there is potentially a large conceptual gap between philosophical anti-foundationalism and relativism.  Barnes and Bloor take themselves to be providing a set of arguments for relativism, but by my lights what they are really doing is providing a set of arguments that could lead to relativism, but could also lead to Neurathian anti-foundationalist rationalism.  (I further think that Barnes and Bloor themselves are influenced by Vienna Circle logical empiricism, and see themselves as elaborating its underlying relativist commitments, but I don’t think we need to follow them down this path.)

So – let’s assume we have followed some inferential chain down to its foundational premise.  This premise is treated as foundational by some local community, but there is no transcendental or metaphysical reason why this premise should be treated as foundational – the fact that it is treated as foundational is a matter of contingent sociological fact.  Have we now ‘relativised’ this premise?  My claim is: not necessarily.  The fact that (on the default-challenge-response model) this premise is currently, by default, foundational doesn’t mean that a challenge won’t bring forth reasons.  The sociologically and epistemologically relevant question is: what kind of reasons are they?

Here we switch levels again, back to the specific features of science as an institution (rather than ‘cognition as such’).  I think the claim advanced by anti-foundationalist scientific rationalists needs to be something like the following: what distinguishes science from other epistemic systems is the institutional-epistemic structure within which ‘default’ premises are embedded, such that users of those ‘default’ premises can legitimately assume that at the level of the system as a whole the premises are not ungrounded, but are rather empirically and rationally grounded by other components of the overall epistemic system.  There is a complex cognitive division of labour here, such that countless methodological and substantive claims serve as locally-unexamined premises for some members of or moments of the system, but no premises are ‘foundational’ for the system as a whole.

This is (at least part of) my answer to the ‘demarcation problem’ (that is, to the question of what differentiates science from non-science).  My claim is that what picks out science as a uniquely reliable epistemic system is an institutional structure that makes this category of claim warranted.

Ok.  This is one of the core claims I want to advance in this blog post, so I guess I want to flag that here and take a short imaginary breather.  However, this claim immediately needs to be qualified, in at least two ways.

First up (a critic might query): are we really saying that no premises are ‘foundational’ for the system as a whole?  What about the premise that ‘nullius in verba’ is a desirable project in any sense in the first place?  Isn’t this simply a normative judgement – a question of world-view – that itself can’t possibly be ‘empirically or rationally’ grounded?  To this set of questions I think I basically, like the logical empiricists, just shrug my shoulders and say “sure, I guess that a lot of this kind of thing comes down to values at the end of the day.”  Perhaps this ‘admission’ is enough to place me in the ‘relativist’ camp.  I’m sure it would be in the eyes of many.  But I think it’s important to remember that – at least within a Brandomian framework – to talk of values is not to leave the space of reasons: for Brandom, all normative commitments are part of the game of giving and asking for reasons.  It’s true that one cannot give a scientific basis for the norms that undergird the scientific enterprise as a whole – but this is not the same thing as saying that no reasons can be given.  Anyway, maybe I’ll come back to this issue another time – there’s much more to say here, but I think it mostly falls outside the scope of this post.

A second challenge is focussed not on the ‘fundamental values’ that animate the scientific project, but on the idea that one can indeed legitimately take scientific institutions as warranting the kind of ‘epistemic delegation’ that treats so much of the received wisdom of science as locally taken-for-granted default background commitments.  And this objection in turn, I take it, comes on a spectrum, as I’ve already briefly discussed.  At one end of the spectrum is global contempt for the scientific project tout court.  But as we move along the spectrum, we reach more and more ‘local’ objections to specific features of the particular scientific enterprise in question.  Does such-and-such a scientific claim or practice really merit being taken as an unproblematic default?  Often, clearly, the answer to this question is going to be “no” – indeed, it sometimes has to be “no”, on the anti-foundationalist account I’m endorsing, because the answer sometimes being “no” is how we get scientific progress, discovery, self-correction, and so on.  This possibility of “no” is the driving force of the scientific endeavour.

At this point, though, I want to introduce one more distinction – the Brandomian-Hegelian distinction between the retrospective versus prospective dimensions of reason.  When Weber writes that the “very meaning of scientific work” is that our current best science “will be surpassed”, he is pointing to a prospective dimension of deference to scientific institutions.  In other words, in deferring to the authority of science, we are not just deferring to the current authority of current scientific findings – we are also, and critically, deferring to the broader institutional process by which those findings will, very likely, be overturned or supplanted.  The form of epistemic delegation or authority involved here is quite complex.  We are deferring to the institution of science precisely because we take it to have the resources to supplant the specific claims to which we are at the same time locally deferring.  Here again the tension between the dynamic pluralistic dimension of science and the authority of specific, static, concrete claims or findings is in play.

Ok.  More caveats could be articulated, but we’ve now I think covered the main substantive points I wanted to make about what science ‘is’.  I now want, much more briefly, to apply some of these points to some specific bugbears that have been bothering me recently.  There are two.

The first ‘application’ is around what I see as ‘sceptical’ discourses.  Here I want to use the apparatus I’ve articulated above to draw a couple of distinctions.  As I’ve said ad infinitum now, there is a constitutive tension in scientific institutions between the fact that in principle everything is open to challenge, and the fact that in order for any progress to actually be made on anything, a huge amount needs to be ‘black-boxed’ as locally-unexamined premises.  Science as a whole is a way to manage this tension: it permits justified ‘black-boxing’ at a local level, on the grounds that, at the global level of the institution as a whole, everything remains contestable via the division of epistemic labour.  This division of labour can be diachronic as well as synchronic.  Thus even when a near-total consensus is reached in the present, this is because there is epistemic delegation to earlier generations of researchers, who robustly debated and tested these conclusions so that we don’t have to.  Moreover, this delegation can be prospective – we can take a premise for granted on the assumption that in the future we will get around to assessing its legitimacy more robustly than we yet have.  All of this is how science works.

Now: what I want to object to, using this apparatus, is sceptical discourses that take locally-unexamined premises as evidence of anti-scientific thinking.  It’s easy to see where this idea comes from: science is meant to challenge presuppositions and take nothing for granted, and yet manifestly you have countless working scientists who are simply taking huge amounts of stuff on trust, on the basis of authorities that they haven’t bothered to independently assess, or whose work they haven’t bothered to master.  This is the antithesis of science (the reasoning goes)!  Therefore science is a fraud.

And my strong counter-claim is that this is just a fundamental misunderstanding about how science works.  This kind of ‘scepticism’ is the application of (often a fairly debased version of) Enlightenment Mark One epistemological reasoning to an epistemic institution that simply isn’t justifying its epistemic claims in this way.  That doesn’t mean that the scientific claims in question are right.  It may, in any given case, turn out that the authorities on which scientists are relying are misguided, that the locally-unexamined premises are bad ones, and so forth.  Challenging such premises is (part of) the work of science.  But neither is it intrinsically irrational or unscientific to engage in the kind of epistemic delegation, the kind of deference to authority, that is here being criticised.  Epistemic delegation and deference are simply non-negotiable features of science (indeed, of reason) in general.

The characteristic sceptical move that I’m here objecting to, in other words, is an apparent belief that unless any given individual can trace back the chain of inferences to the comprehensive evidence-base that justifies their claims, those claims lack justification.  This is what’s (at least purportedly) going on (often, not always) when you see ‘sceptics’ of whatever kind demanding “a source for that claim” in online debates over (often relatively consensus) science.  The issue isn’t that it’s illegitimate to want claims to be sourced.  The issue is that it’s unrealistic to expect any given individual to be able to personally replicate, on demand, the inferential chains that ground the entire research enterprise in question.  By challenging individuals in this way, you are not ‘exposing’ the fact that they are making claims without evidence – rather, you are indicating that you don’t understand the epistemic basis for scientific authority.

That’s my first bugbear – which really just derives from observing, and sometimes participating in, too many tedious online debates with people who think they are being particularly rational by making these kinds of discursive moves.  But this is quite a petty bugbear.  The second point I want to make is perhaps slightly more meaningful.

This second point concerns my own research – and here I guess I want to get very slightly more autobiographical.  Whenever you talk about your past self’s errors I think it’s easy to overstate and simplify things, which I’m definitely going to do here – I didn’t in fact hold clean ‘ideal type’ positions of the kind I’m about to attribute to myself.  But still, I think I can discern in retrospect some kind of intellectual trajectory that roughly traverses the same ‘dialectic’ that I outlined towards the start of this post.  That is to say: I think when I was (let’s say) a teenager, one of the appeals of philosophy for me was something in the general ballpark of the ‘crude’ ‘Enlightenment Mark One’ idea of excavating through layers of unwarranted belief that one had inherited from one’s social environment, in order to find bases for belief that were more robust than simply accepting contingent tradition.  Then as I started actually studying philosophy I became pretty convinced – let’s say in my early twenties – that this was a pipe dream, and that you can’t get away from the contingently social determination of your categories.  This in turn I think led me to a more ‘critical-theoretic’ space, which was highly sceptical about philosophical rationalist claims.  And then, from that more critical space, I feel like I’ve slowly assembled the resources required for a more pragmatist and social-theoretic rationalism – thanks in no small part (obviously) to Brandom.  Again, I think I’m sort of warping things a bit to fit into this narrative, but there’s something to it, in terms of my own personal intellectual trajectory.

The point being, I guess, that I definitely feel like I understand the persuasive pull of what I would now characterise as two different categories of scepticism.  One category of scepticism seeks a socially-transcendent basis for the critique of the social determination of belief (and becomes a hyperbolic form of scepticism because it is in fact impossible to find such a basis).  Another category of scepticism relentlessly critiques claims to such a basis for rational judgement, on the grounds that all such bases are contingently socially determined, and therefore unreliable.  I take it that the broad philosophical orientation I’ve outlined above incorporates at least some of the strengths of both orientations, in the service of a more robust and reasonable rationalism and empiricism.

But one still encounters the ‘sceptical’ challenge – and one encounters it in two ways.  First, introspectively: I think most people with this kind of ‘philosophical’ orientation are nagged by worries about the basis for their beliefs; these worries are the motive for a lot of theoretical and scientific inquiry.  Second, though, one encounters these sceptical challenges from others.  And this is what I want to conclude this post by talking about.

One of the ways to think about rationalism (extensively criticised by Barnes and Bloor in their paper on rationalism and relativism) is to assume that there are shared fundamental commitments that are constitutive of sapience as such.  If this is your ultimate grounding for the reasonableness of our commitments, then the process of argument and persuasion can be understood as a practice of following inferences ‘upstream’ until one reaches commitments that nobody could possibly reasonably reject.

A social-pragmatist rationalism rejects this approach.  For the approach I am endorsing – exactly as Barnes and Bloor say – the commitments that we reach when we follow a chain of inference back to its self-evident premises are socially contingent.  They are locally unchallenged, but that does not mean that they are rationally unchallengeable – quite the reverse.  What you take to be self-evident is a function of the social norms that you contingently endorse – in large part due to the relevant recognitive community of individuals of which you are a member.  In other words, the establishment of ‘self-evident’ premises for reasoning is in significant part a process of socialisation: if a commitment is ‘self-evident’ or premiseless, this is a fact about your socialisation, or your social milieu, not a fact about the commitment.

Now, one of the implications of this understanding of how our epistemic world functions, is that we are typically inclined to an asymmetry about commitments.  Of course the commitments that I regard as self-evident really are self-evident.  On the other hand, the commitments that you regard as self-evident are manifestly not self-evident at all; on the contrary, I can see as clear as day that these are unexamined prejudices inculcated by your social environment, from which you have failed to free yourself.  Similarly, my reasoning moves in secure, robust steps, relying on only the most self-evidently legitimate inferences.  By contrast, your reasoning is constantly supported by implicit and yet dubious substantive commitments that you not only have failed to justify, but that you apparently fail even to recognise as commitments requiring defence.  This asymmetry is, of course, a structural feature of the fact that we have different background assumptions – different locally-foundational premises.  It is important not to mistake this difference in local default premises for a difference in rationality as such.

Now, the fact that we all have slightly different – and sometimes substantially different – locally-foundational premises for our reasoning means that when we encounter someone else’s arguments, we are often inclined to challenge some of what they have to say.  We take it that we see things clearly that they are confused about.  And there’s nothing wrong with this!  This is the process of asking for and giving reasons that makes us rational creatures in the first place!

And yet here we again encounter a version of the tension that has animated this entire post.  For, as I argued above, it is not just the challenging of default premises that permits us to inhabit the space of reasons, but also the creation of default premises. Without the rich backdrop of locally-unchallenged default commitments – both explicit and implicit – our reasoning processes wouldn’t be able to get off the ground at all.  In other words, we (or the traditions we inhabit and inherit) have to make decisions (whether deliberately or implicitly) about what premises will not be challenged within any given moment of the game of asking for and giving reasons.  Neurath’s sailors cannot remake the entire boat at once – they cannot remove the planks beneath their feet, even if they can choose where to stand.

In other words, part of the set of decisions we make in engaging in rational thought and discourse is precisely what commitments are not up for debate – at least here and now.  This is true of thought in general – but it is also true of scientific discourse.  This fact is the basis on which Polanyi can construct his two-stage account of science as a structure of authority-relations.  Admittance to the community of practising scientists is accomplished by a process of socialisation in which a set of shared community commitments are established.  Then, this having been accomplished, those commitments serve as the ground upon which individual scientists can stand, as they aim to dismantle and reconstruct some elements of the framework they have been socialised into.  As I discussed above, there are significant epistemic risks from the kind of gatekeeping associated with scientific community socialisation – but some establishment of default premises upon which reasoning can build is an essential precondition of establishing any community of epistemic practice.

From the perspective of those outside the community in question, however, this can easily look like a process of unreason at work.  Here, after all, in the process of socialisation, we are effectively engaged in the construction of an in-group out-group boundary, where those who refuse to accept the principles and commitments of the in-group are banished from its charmed circle.  Moreover, these mechanics of in-group and out-group membership really do frequently involve a process of unreason at work.  It is often a lot easier to respond to the ‘challenge’ moment of the ‘default-challenge-response’ model by simply insisting that those who do not accept a given premise are not welcome here, than it is to provide a substantive rationale.  And it’s easy to see how this dynamic can operate in the service of irrationalism – not irrationalism in the sense of “stepping outside the space of reasons” but in the sense of “providing bad reasons”.

But here’s my claim: the goal of the institution of science is to establish an overall institutional structure that allows a plurality of locally-unexamined commitments to serve as epistemic checks and balances against each other, while simultaneously permitting local sub-communities to fruitfully pursue lines of thought facilitated by the establishment of very substantial – or even very contentious – locally-taken-for-granted premises.  I’m here basically saying that Brandom’s ‘default-challenge-response’ model, plus Polanyi’s account of the republic of science, provides a more elaborated account of the way in which Neurath’s boat operates as a specifically scientific institutional structure and dynamic.  Moreover, I’m arguing that subcommunities of practice are critical to the dynamics of Polanyi’s account of scientific pluralism.

Here it helps, I think, to consider another element of the social character of reason: it really helps to think things through if we have other people to bounce ideas off.  Of course, at some very abstract level, if we accept the Brandom-Hegel account of reason, reason is necessarily and intrinsically social: you can’t have reason at all without community.  But moving down levels of abstraction – once our rational, sapient recognitive practices are off the ground in some sense – it also just practically helps to have people to talk to and think with on any given specific topic.  This is what a scientific discipline is, it’s what a subdiscipline is, it’s what a research programme is, it’s what a research team is, it’s what a collaboration is.  These are all ways in which people work together on the basis of shared default premises.  The point is that when people with similar default premises and commitments come together, they can build on those premises together in a way that is impossible for people with more widely divergent worldviews.  Here the ‘black-boxing’ of much of the debate that animates science in general allows the kind of focus on specific problem-spaces that often leads to scientific advance.  Without that ‘black-boxing’, I’m claiming, you wouldn’t be able to move far enough down the relevant inferential chains to get to novel findings.  If you’re constantly having to re-establish the rational and empirical bases of fundamental commitments, you don’t have the cognitive resources left over to follow the implications of those commitments.  This, I’m claiming, is why the kind of broad scepticism (“where’s your source for that?”) that I mentioned above is, if taken too seriously, destructive of the ability to actually pursue a lot of scientific inquiry.

The pluralism that is constitutive of a well-functioning scientific institutional dynamic is a pluralism of these subcommunities of shared commitments.  In my PhD, and following Jason Potts, I called these communities “scientific innovation commons”.  But in my PhD I didn’t, in my view, adequately draw out the epistemic implications of this institutional structure – I’m trying to do better in this post.

So the model of science that I’m proposing here involves a pluralism of research subcommunities, each with its own local norms and commitments, knit together into a cognitive division of labour.  If everything is working roughly as it should, each subcommunity serves as an epistemic check and balance on the others, producing a (diachronic) large-scale dynamic that can credibly claim, as a whole, to approximate the ‘rejection of tradition’ that early modern Enlightenment thinkers hoped, and failed, to achieve at the level of the individual scientist – precisely via the way in which this community as a whole constructs and transforms its own traditions.

If this pluralism is to function as an internal check and balance system, though, it needs to be genuine pluralism.  As Zollman’s formal opinion dynamics modelling illustrates, a community that too-quickly orients to consensus is an epistemically unreliable community.  And this in turn, I’m claiming, produces another apparently paradoxical result: there is – potentially – an epistemic virtue in local research subcommunities refusing to ‘update’ their presuppositions in the light of criticism from other subcommunities.  Obviously we don’t want everyone to be too fanatical or dogmatic.  But neither do we want everyone to rush too quickly towards consensus.  We want a diversity of research programmes each of which can explore the implications of their approach in some depth.  Only by permitting and facilitating this kind of ongoing pluralism are we (the community as a whole) able to reliably assess the strengths and weaknesses of these different research programmes.  
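Since I’ve invoked Zollman’s modelling, it may be worth sketching, very roughly, the kind of simulation involved – with the caveat that this is my own minimal toy reconstruction, not Zollman’s actual code, and that all the parameter values (a two-armed bandit paying off at 0.5 versus 0.51, rough uniform Beta priors, network sizes, and so on) are illustrative assumptions rather than anything canonical.  Agents choose between an old method of known reliability and a new method of unknown (in fact slightly greater) reliability, and share their experimental results with their network neighbours; the question is how network structure affects whether the community as a whole settles on the better method.

```python
import random

def zollman_trial(adjacency, rounds=1000, p_old=0.5, p_new=0.51, pulls=10):
    """One run of a toy Zollman-style bandit model.

    Each agent holds a Beta(a, b) credence about the new method's success
    rate.  Agents who currently expect the new method to beat the old one
    try it; everyone then updates on their own results and on those of
    their network neighbours."""
    n = len(adjacency)
    a = [random.uniform(0.01, 4) for _ in range(n)]  # illustrative priors
    b = [random.uniform(0.01, 4) for _ in range(n)]
    for _ in range(rounds):
        results = []
        for i in range(n):
            if a[i] / (a[i] + b[i]) > p_old:  # optimists experiment
                wins = sum(random.random() < p_new for _ in range(pulls))
                results.append((i, wins))
        for i, wins in results:
            for j in range(n):
                if j == i or adjacency[j][i]:  # own data, or a neighbour's
                    a[j] += wins
                    b[j] += pulls - wins
    # 'Success' = the whole community ends up favouring the better method.
    return all(a[i] / (a[i] + b[i]) > p_old for i in range(n))

def complete_graph(n):
    return [[i != j for j in range(n)] for i in range(n)]

def cycle_graph(n):
    return [[j in ((i - 1) % n, (i + 1) % n) for j in range(n)] for i in range(n)]

for name, g in [("complete", complete_graph(10)), ("cycle", cycle_graph(10))]:
    wins = sum(zollman_trial(g) for _ in range(200))
    print(f"{name:8s}: {wins}/200 runs converge on the better method")
```

In Zollman’s results – and typically in runs of this toy version – the densely connected community, in which bad early luck propagates instantly to everyone, locks in on the wrong answer more often than the sparse one.  That is the formal analogue of the point about premature consensus above.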

And here the prospective dimension of the scientific process also becomes relevant.  Our research is future-oriented.  It is aimed towards some future collective process of assessment.  This is the point of the ‘conjectures and refutations’ model of science – we don’t need a rationale now for a given research programme, we simply need the possibility of a future evidence-base that will support (or refute) our hypotheses.  We can then set about the task of discovering whether that evidence-base will (or could) in fact exist.  Thus even highly speculative and flimsily-supported-by-traditional-commitments communities of research are a fully legitimate – indeed, essential – dimension of the dynamics of science as a whole.  Of course, if a research programme is unable, over the long term, to find rational or empirical justifications for its existence, then it slowly loses value within the pluralistic chorus of scientific debate.  But we wouldn’t want scientists to abandon a research programme at the first sign of trouble – some measure of perseverance, even on highly unpromising ground, is essential to the overall collective endeavour.

Ok.  But here we reach some of the tensions that I gestured at earlier.  For how is an individual scientist or researcher to behave, within this institutional framework?  I have made a case for strong scientific pluralism.  I have made the case that such pluralism requires perseverance from scientists and scientific subcommunities even in the face of epistemic adversity.  But how does an individual scientist’s decision-making fit within this framework?  Let’s say a team of researchers is pursuing a research programme.  They find unpromising results.  It seems to them, in the light of those results, that it is much more likely that an alternative research programme is the right one.  And yet, were they to abandon their current research programme, the result would be a significant reduction in the pluralism of this section of the scientific ecosystem.  Do the scientists in this research team have an epistemic obligation to pursue what they now see as the most credible lines of reasoning and inquiry, and abandon their current research project?  Is this what following the best evidence – apparently a core scientific norm – demands of them?  Or do they have an obligation to maintain scientific pluralism by doing the best they can by their current research programme, even though they have lost faith in it?  After all, a degree of epistemic pluralism is (I have argued) critical to the credibility of science as a whole.  And if they pursue this second course of action, at what point does it stop being a commendable commitment to epistemic pluralism and to the meticulous elimination of unlikely possibilities (not least on the off-chance that the programme turns out to be correct after all), and start being a dogmatic refusal to accept the best scientific evidence?

I don’t think there are really correct answers to these questions.  Individuals need to make their own decisions.  But I think these kinds of problems present themselves if you accept the understanding of science that I am recommending.

And now, finally, we come to the rather self-involved personal reflections that I was trying to get to with this line of thought.  As I was saying above, I feel like my own personal intellectual trajectory has been driven by some measure of the kind of ‘scepticism’ that I’m here expressing wariness about.  That is to say, the kind of scepticism that asks “how can we be sure that what we’re doing is even in the right ballpark here?”  I think this was the kind of worry that drew me into philosophy, and it’s also the kind of worry that pushed me away from philosophy (because too much philosophy itself seemed dogmatic to me).  I think this kind of worry (“what if our basic approach and categories are just wrong?”) has sent me running around between different fields and subfields – philosophy, sociology, economics – in part because I didn’t want to just accept being socialised into such-and-such a set of handed-down disciplinary norms.  And I don’t think that impulse was exactly misguided, though I certainly would have benefited from applying myself a lot more along the way.

In any case, I feel I’ve been, over time, reasonably responsive to both this kind of introspective scepticism, and to the ‘external’ scepticism of people telling me that I’ve gotten it all wrong and I should be thinking about things in [such-and-such] terms instead.  And I feel I’ve learned a lot from those kinds of interactions.  Recently, however, I’ve found myself increasingly unwilling to take this kind of advice – to listen to people telling me that I’ve gotten things all wrong.  And I feel like this unwillingness comes from two distinct sources.

The first source is that at this point I’ve been reading and thinking about the areas that interest me for (let’s say) about twenty five years.  In that time I’ve done a lot of thinking – and I feel like I’ve already at this point given consideration to a lot of the kinds of objections and criticisms that people throw at me.  Increasingly, my reaction to being told that I haven’t considered [X] is not (as it once was) “oh, yeah, I should really spend some time reading or thinking about [X]”, but rather “yes I have”.

So that’s one consideration: one reason why I’m less inclined to put a lot of energy into responding to objections to my overall intellectual project.  But of course, as I discussed above, the fact is that there is always room for giving more thought to any given issue.  So it’s really not a very reliable or commendable attitude, to simply think to yourself, “no, I’ve already settled that”.

The important consideration, in my view, is a different one: the role of research programmes in the scientific epistemic system.  As I discussed at length above, there are two sides to intellectual progress: on the one hand, challenging taken-for-granted ‘default premises’; on the other hand, adopting premises as taken-for-granted defaults, in order to explore their implications.  And my view is that, in my intellectual life to date, I have spent a large proportion of my time doing the former, and it is now time to focus on the latter. 

In other words, I feel I have a research programme here.  That research programme is still much more inchoate and underdeveloped than I would want it to be, at my stage of life.  And of course the research programme may be flawed; its premises may be faulty; its goals may be misguided.  But I feel – whether rightly or wrongly is not for me to say – that at this point I’ve done enough in trying to establish reasonable default background premises.  Now I want to actually try to do something with the intellectual resources I’ve committed myself to.

All this is by way of saying, that at this point in my life I regard it as basically an appropriate response to those questioning the premises of my intellectual project to simply say: well, this is my research programme; if you think it is flawed, there are many other research programmes out there which may be more to your liking.  After all, as I’ve argued at inordinate length in this post, science is a pluralistic epistemic system.  Part of what makes science epistemically reliable is precisely the fact that it has lots of people running around in it doing misguided things, pursuing misguided research programmes.  I think it’s clear enough that I’m at the more crankish end of the research spectrum: I’m not affiliated with any institution; I publish in academic venues infrequently; I work through my intellectual interests in rambling, loose, overly personal blog posts like this one; and so on and so forth.  But that’s ok.  Science, in the expansive sense, has space for all of this, and much more.  My goal on this blog, and in my work in general, as I see it, is now to pursue the lines of thought I’ve committed myself to.  We’ll see how much I can get out of them, in whatever time on this earth I have left.

I’m still circling here around Hayek and the calculation debate, but I want to take a quick detour into the work of Michael Polanyi on science.  Unlike Hayek, with whom I am still getting to grips, I know Polanyi’s key works well – I drew on Polanyi heavily in my Ph.D. thesis, and I think Polanyi’s 1962 article on ‘The republic of science’ is one of the best works in the social studies of science full stop.  Polanyi is also, in a fairly direct sense, engaged in a closely related intellectual project to Hayek – both Polanyi and Hayek were members of the Mont Pelerin Society and both are interested in the value and functioning of ‘spontaneous order’, in contrast to communist-style planning.  I think Polanyi’s work is of great intrinsic interest and value – but the reason I’m blogging on it now is because I want to start to draw out parallels (and perhaps points of difference) between Polanyi’s analysis of science as spontaneous order and Hayek’s arguments about spontaneous order in market society.

For Polanyi, then, it is critical to the functioning of science as an epistemic system that science be in a relatively strong sense unplanned, decentralised, polycentric, etc.  That is to say: scientific knowledge, for Polanyi, cannot and must not be understood on the model of the knowledge held by a single mind.  It is an intrinsic, not merely a contingent, feature of scientific knowledge that it be dispersed across a complex spontaneous order (‘the republic of science’).  For this reason, Polanyi argues, efforts to regiment and homogenise science are misguided at a fundamental level.

Polanyi does not, however, have one specific argument for this position – rather he has a cluster of arguments that need to be carefully distinguished.  In this post I want to highlight four different elements of Polanyi’s argument for the intrinsic polycentricity of science, and make some provisional and tentative connections to broader (Hayekian) arguments about spontaneous order.

First, and arguably most superficially, science must be ‘decentralised’ for Polanyi because there is simply too much of it for any one human mind to hold.  Perhaps there was a time when the greatest (and most well-resourced) intellectual figures could plausibly achieve mastery over all the major scientific (or pre-scientific, depending on how you periodise the intellectual history) domains, but in recent centuries it is simply impossible for anyone to gain mastery over anything more than a very small slice of our scientific knowledge.  Science is, in this very direct sense, a vast cognitive division of labour.  “We” possess vast scientific knowledge, but this “we” is a vast community – any individual within that community can of course possess only a small fragment of that knowledge.  Given that science hangs together in ways that span individuals’ knowledge, it can very plausibly be claimed that “we” know things that no individual knows.  (Even an individual scientific paper may be co-authored by a range of researchers with different skills and specialisations, such that the conclusions of the paper as a whole are not ‘known’ in the strongest sense of full comprehension by any individual author.  Think, then, how much more true this must be of broader coherent swathes of scientific knowledge.)  Scientific knowledge, then, is intrinsically dispersed simply because there’s so much of it.

Polanyi’s next argument is about tacit knowledge.  Scientific knowledge is often understood as explicit propositional content, paradigmatically published in scientific journals – but Polanyi emphasises that “knowing how” is as important to the scientific enterprise as “knowing that”.  We can, moreover, distinguish two different senses of “knowing how”.  First, there are the kinds of skills that cannot be communicated by means of explicit propositional knowledge: for example, how to operate such-and-such a piece of equipment is the kind of thing that one may have to learn in practice.  Second, there are the kinds of creativity and inspiration that for Polanyi are critical to the scientific enterprise, but that are missed by ‘positivist’ approaches that reductively overemphasise propositional knowledge.  (For what it’s worth, I’m less impressed than Polanyi is by ‘divine spark of creativity’ arguments – I think Polanyi is at times problematically mystical when discussing this sort of thing.  But of course Polanyi is right that tacit knowledge is important.)  For tacit knowledge, as for explicit knowledge, one could, I suppose, in principle imagine some superhumanly skilled and creative individual who would render the associated division of labour redundant; but in any imaginable real-world scenario we are, again, dealing with a division of labour – driven, perhaps, by slightly different factors from those behind the division of explicit propositional knowledge.

There’s a third argument in this broad space that isn’t really central to Polanyi, but that I think I might as well canvass since I’m itemising arguments: this is the problem of the incentives for knowledge sharing.  There’s a whole bunch of work, much of it coming out of Robert K. Merton’s sociology of science, about how the reputational economy of science incentivises publication of research findings.  But there are also many institutional incentives that push in the other direction: difficulties in publishing replications; reasons not to publish the ‘wrong’ findings; reasons to hoard data or other intermediate scientific inputs; incentives to commit fraud.  All of these problems can be (a bit crudely) grouped under the heading of “incentives not to share knowledge” – and if knowledge is not shared then it is intrinsically going to remain ‘decentralised’.

So these are three reasons why we would expect science to be intrinsically polycentric or decentralised.  Polanyi’s really interesting argument, though, from my perspective, is only weakly related to these.

Polanyi’s final argument is that science must remain decentralised because disagreement is central to the entire scientific project.  Science operates via (to use more Popperian/Lakatosian language) conjectures and refutations.  There is therefore a double-movement (apparently but only apparently paradoxical) to the scientific knowledge-production process.  On the one hand, would-be scientists must be taught the best existing knowledge, and their ability to demonstrate expertise in that knowledge is what grants scientists entry to the ‘club’ of recognised scientific research.  On the other hand, scientists must be willing and able to challenge existing knowledge in the service of making new discoveries and overthrowing old errors.  The first of these two elements pushes science towards a certain uniformity of belief – a shared canon is established and disseminated via scientific training, etc.  The second of these two elements, however, pushes science towards an intrinsic diversity of belief.  Scientific progress simply cannot happen if scientists don’t disagree.  More strongly – scientific knowledge itself cannot be scientific knowledge (but will rather become dogmatism) if the scientific community does not have the internal capacity to reject it.  There must therefore be not only cognitive division of labour – whereby knowledge and expertise are divided up across the scientific community by means of specialisation – but also genuine substantive pluralism (a stronger claim).  Although the preponderance of scientific opinion at any given time can reasonably be taken to represent ‘scientific knowledge’, it is intrinsic to the very concept of scientific knowledge (which must be fallibilist or it is no longer science) that any given scientific ‘consensus’ may be wrong, and that (therefore) scientists who are prepared to reject any given element of scientific ‘knowledge’ must also be part of the scientific community.

I think Polanyi is right about all of these points (caveats aside).  Moreover, I think the last point – about the necessary pluralism of the scientific endeavour – is a profound insight, which really does explain why science as a collective project must be in some sense polycentric and decentralised at its core, if it is to function as science at all.

But what about Hayek?  As I say, my goal here at the moment is not really to discuss Polanyi on science, but rather to extend my own field of competence, from the political economy of science to political economy more broadly.  So what are the analogies between Polanyi’s account of science and Hayek’s account of the market as catallaxy?

Well, I want to return to this in future posts.  Very quickly and crudely, though, I think we can see analogies with all four of these points.

  1. The division of epistemic expertise in the Polanyi account of science of course corresponds to the division of labour within market society.  This is a practical ‘decentralisation’ of a sort associated with the need for specialisation.  Of course, division of labour can exist within a planned economy – but here the second analogy is that, just as no one mind can in fact master all of scientific knowledge, so no one planner (or planning bureaucracy, or whatever) can master the information required to organise such a division of labour.  This is, as it were, the ‘weakly practical’ argument against ‘socialist calculation’.
  2. Like Polanyi, Hayek is interested in the problem of tacit knowledge.  One of the other Hayekian arguments against central planning is that tacit knowledge intrinsically cannot be communicated to the central planner, because the knowledge has not been made explicit as propositional content.  This seems like it could potentially be a more ‘intrinsic’ obstacle to central planning than the difficulty of mastering the sheer quantity of ‘explicit’ information involved in economic decision-making.
  3. The problem of incentives not to share knowledge is a challenge for planners just as much as for the scientific community.  Again, one of the arguments against central planning is that there are many reasons why the information informing the plans would be faulty, due to the many incentives economic actors may have to pass on faulty information (or fail to pass on relevant information).
  4. Finally, and again I think arguably most importantly, there is the issue of intrinsic and necessary pluralism in science as a precondition of the creation of scientific knowledge (as opposed to scientistic dogmatism) at all.  Here I think the analogy is to Hayek’s arguments about the market as a discovery process.  Hayek’s argument here is not just that there are insuperable practical difficulties involved with aggregating the knowledge dispersed across the market.  His argument is, rather, that the relevant knowledge simply doesn’t exist without the pluralism of a decentralised process.  Specifically, much of the knowledge about what to produce and how to produce it simply does not exist without the experimental ‘entrepreneurial’ trial-and-error process of decentralised market dynamics.  Just as we need pluralism of belief for scientific knowledge to take on the positive epistemic attributes of scientific knowledge, so we need a decentralised discovery process to generate the kinds of information that could be aggregated in a planning process.  (This, I take it, is one of the key arguments of Don Lavoie’s book on the socialist calculation debate – which I recommend!)

Now, I’m not in this post trying to take a position on the strengths or weaknesses of any of these arguments – at this point I’m still trying to get clear on what the Hayekian (or, more broadly, the Austrian) arguments about planning and spontaneous order even are.  Moreover, I’m completely confident that these four points do not exhaust either Hayek’s arguments about spontaneous order, or the broader Austrian arguments in the socialist calculation debate.  (For example, I don’t think the core of Mises’ original argument about calculability is captured by any of these four points – hopefully I’ll come back to all this.)  But for myself, I find this way of breaking down some of Hayek’s arguments about ‘the knowledge problem’ of socialist planning clarifying.  Hopefully I will return to these issues, expand on these points, and supplement them with different arguments in future posts.

Very brief reading notes on a paper by Benoit Godin, ‘National Innovation System: the System Approach in Historical Perspective’. The basic goal of Godin’s paper is to argue that many of the core concepts of the National Innovation Systems literature – as articulated by Freeman, Lundvall, Nelson and others, from the late 1980s onwards – were already present in publications put out by the OECD in the 1970s. In these OECD publications, Godin argues, the ‘research system’ was composed of four sectors – government, university, industry, and nonprofit – and embedded within a broader economic and international environment. Analysis of the research system focused on five relationships: between economic sectors; between basic and applied research; those determined by policy itself; between the research system and the broader economic environment; and those associated with international cooperation.

This research system framework therefore already incorporated many of the elements of the later National Innovation Systems approach. Godin argues that there are two big differences between the research system and the NIS approaches. First, for the research system approach, government was regarded as having “prime responsibility in the performance of the system”. For the later NIS approach, “it would rather be the role of government as facilitator that was emphasised”. Second, the research system approach focused on the research system as a whole, whereas the NIS approach privileges the firm as the key component of the system. Both of these shifts (I would argue) can be seen as representative of the shift towards neoliberal economic governance and theory.

Some quick notes on Mirowski’s ‘Science-Mart: Privatizing American Science’. This is a wide-ranging semi-popular book about neoliberal governance of US science, with different chapters addressing different elements of the topic. These include:

– a very critical survey of the economics of science;
– periodisation of twentieth and twenty-first century US science governance into three regimes: the ‘captains of erudition’ regime in which the modern research laboratory developed; the ‘cold war’ regime in which the state greatly increased both its funding and its control of scientific production; and the ‘neoliberal’ regime characterised by privatisation of the research process and greater ‘enclosure’ of scientific inputs and outputs in intellectual property law;
– a discussion of material transfer agreements and the constraints they place on researchers;
– a critique of biotech – and, more broadly, commercialised science – as a ‘Ponzi scheme’ in which very few companies are, in fact, commercially viable;
– an argument that the quality of scientific research outputs is declining as a result of the neoliberalisation of science;
– a discussion of a range of different ways in which the neoliberal regime produces ignorance, rather than knowledge (such as the ghostwriting of apparently independent research papers by employees of pharmaceutical companies).

All up the book is a concerted attack on ‘neoliberal science’, and connects to Mirowski’s critiques of other dimensions of neoliberal economic governance, in other works.

My take, fwiw: some of the book provides a good entry into important issues in the political economy of science, and Mirowski’s periodisation seems like a useful way to carve up both the political and the intellectual history of US science governance. Mirowski’s discussion of the deliberate creation of systematic bias in the scientific literature is good as far as it goes – though I’d recommend Ben Goldacre’s ‘Bad Pharma’ as a popular work focussed specifically on this issue. However, I think Mirowski’s book as a whole should be approached with some caution.

It’s possible I have the wrong end of the stick, but it seems to me that Mirowski’s critique of biotech as a ‘Ponzi scheme’ is based on a misunderstanding: in a speculative industry many companies can fail because the investment gambles they take do not pay off in the creation of a marketable product. This fact enables fraudsters to make money off the industry, because a straight-up fraud is from a distance indistinguishable from a bad but rational bet – so significant segments of a speculative industry based on product innovation will, typically, be actual scams. Nevertheless, provided the few success stories are profitable enough, the industry as a whole can be fulfilling the capitalist social function of profit generation just fine – and the rent-seeking associated with intellectual property monopolies over medical goods means that successful medical innovations are indeed often extremely profitable.
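To make the arithmetic of that point concrete, here is a deliberately toy calculation – every number in it invented purely for illustration:

```python
# Entirely hypothetical numbers for a speculative, innovation-driven sector.
n_firms = 100        # ventures funded
stake = 10.0         # capital sunk per venture, $m
successes = 5        # ventures that yield a marketable product
payoff = 400.0       # return per success, $m (IP monopolies make these large)

invested = n_firms * stake
returned = successes * payoff
print(f"invested ${invested:,.0f}m, returned ${returned:,.0f}m, "
      f"aggregate multiple {returned / invested:.1f}x")
# A 95% failure rate is thus compatible with the sector as a whole
# turning a healthy profit - failure-heavy is not the same as Ponzi.
```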

Mirowski also seems to me to sometimes be unreliable as a summariser of the intellectual figures (mostly economists) that he discusses. Mirowski is critical of economists who advocate for neoliberal policies (privatisation, expanded intellectual property rights, etc.); but he is also critical of economists who oppose these policies, on the grounds that – as economists – they are tacitly supporting the same policies regardless, by virtue of their use of ‘neoclassical’ economic theory. So, for example, Paul David (an advocate of open science, who engages heavily with intellectual resources outside economics, and who has also developed models of scientific research dynamics that do not make use of the ‘rational actor’ approaches Mirowski elsewhere criticises) is nevertheless for Mirowski as much a participant in the neoliberalisation of science as those advocating neoliberal policies, by virtue of his use of game-theoretic and other ‘neoclassical’ modelling tools. Social scientists can of course be criticised for tacit implications of their approaches which contradict their stated policy goals. But Mirowski’s broad-brush dismissal of the economics of science as a whole seems excessive, to me.

At some point maybe I’ll write something on Mirowski’s criticisms of neoliberalism more broadly – my thoughts on this issue don’t feel quite nailed down enough, yet – but I wanted to put up these very brief notes on Science-Mart now, before it all goes down the memory hole.

Economics as Science

October 29, 2013

The recent Nobel Prize [1] in economics has prompted a fair bit of commentary/discussion along the lines of ‘is economics a science?’ I thought I’d add to that commentary. The extremes of the commonly articulated positions are roughly:

“Of course it is – and a stronger, more manful, more mathematical science than your [puny / relativistic / fraudulent / etc.] [psychology / sociology / history / etc.]”

“Of course it isn’t – it’s a series of barely coherent apologies for the interests of the powerful, detached from any reference to or understanding of the suffering inflicted upon billions by the policies it advocates and sophistically excuses”

With of course a range of other positions too.

The former of the two positions above is articulated principally by economists; the latter principally by left critics of economics. I’m in many respects on the left [2] – but I’m also in training to become an economist. Where does that place me? [Well – not to build up suspense: I think economics is indeed a science (that’s why I think it’s worth doing economics). But the longer version follows.]

Prior question: what does it mean for something to be a science? As a first pass, I take a disciplinary research-space to be a science if:

1) The object it studies is a real phenomenon that can actually be empirically studied.[3] [4] (So astrology doesn’t count – because the relationship between celestial objects and human personality is not a real phenomenon; but astronomy does count, because celestial objects are real things.) (What’s actually real is of course itself a scientific question – but so it goes; there’s no paradox there – just the usual Neurath’s Boat principle of there being no discursive ‘outside’.)

2) There exists a set of established norms and research practices for testing claims about these objects against empirical evidence – for an endeavour to be scientific, claims must be vulnerable to rejection in the light of empirical findings.

3) There’s a discursive space, for researchers, within which those norms for testing claims against evidence can themselves be debated, contested and transformed.

Science is therefore a communal endeavour – it can’t exist outside of a community of research. Science relies on the collection of evidence; the positing of claims on the basis of and for testing by evidence; and the collective ongoing assessment of the evidence, the claims, the methodological connections between the two, and the norms governing the whole endeavour, within a community of researchers.

This definition of science does not require the following things:

– That practitioners of scientific inquiry be particularly rational. All else being equal it’s better for practitioners to be reasonable and informed than not, but the ‘rationality’ of the system resides principally in the possibilities made available by the overall institutions of the system, rather than in the virtues of individual researchers. (It is necessary, though, that a sufficiently large number of members of the community be committed to reproducing the broad institutional practices enumerated above, so that those practices are indeed reproduced.)

– That scientific claims be correct. The whole point of the scientific endeavour is that claims (including both empirical claims and the methodological claims that inform empirical claims) are open to revision.

– That members of a research community be capable of predicting the future behaviour of the phenomena studied. Some phenomena are amenable to this, in the current state of knowledge; others are not. It is not a requirement of science that predictions of future events be made, only that evidence (including evidence generated by future events) be capable of modifying our claims.

– Relatedly, that ‘general laws’ be discovered. Science can study unique specificity just as scientifically as it can study general principles; one is not more sciencey than the other.

– That there be broad consensus on most major topics within the research community. One hopes that warranted consensus can be established, but an important part of the mechanism from which it might emerge is disagreement.

None of those things just listed need apply for a community to count as a group of scientific researchers.

In these terms, is economics a science?

I think the answer is: clearly yes, economics is a science. There is a real object of study (the economy, however that’s understood). There are established principles for collecting evidence and testing claims against evidence. There are ongoing (quite sophisticated) debates about the methodological principles involved in these tasks. So I think economics is pretty unambiguously a science, and I’m happy to be a member of and participate in that research community of scientific practitioners.[5]

What about the second objection, though? The objection that economics is just serving the interests of the powerful, etc.?

Well – just because a community of research meets all the criteria for science, doesn’t mean that it isn’t full of bullshit – nor does it mean that the kinds of bullshit dominant in a field are in any way accidental. Here, as often, it’s important to distinguish between best practice and actual practice. Best practice is not something that exists independently of actual practice – it is generated in and emerges out of actual practice. But it is also something that actual practice can be judged against – and often judged severely. Like any discursive space, the discursive space of economics is variegated – it contains many voices in dispute. In participating in that discursive space, we add our own voices and evaluate those voices already in contention. The norms of evaluation – and, therefore, the conclusions – that we take away from engagement in that space may be minority views relative to the space overall.

So it’s important to distinguish between the claim that economics is a science, and the claim that economics in general has things right, or is even on the right path. It’s important to have an account of the many things wrong with economics too – which I’ll start to talk about in a future post.

[1] The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel

[2] This post articulates my politics I think reasonably well – although I’m losing patience with left positions and figures sufficiently rapidly that, while I don’t think I’m on the classic ‘Trot to neocon’ ideological trajectory here [not least because I was never a Trot, but you know what I mean], it’s hard not to see why some such view would look reasonable, from the outside.

[3] “What about mathematics?” Well, mathematical objects (whatever their status – as it happens, I have a conventionalist line on the status of mathematical objects, but nothing here relies on that) can’t be empirically studied, so mathematics isn’t a science in this sense. What gives mathematics its objective character (on my account at least) is the degree of consensus that can be (and has been) attained around mathematical norms – math is pretty much unique in this respect. This is what distinguishes mathematics from, say, theology, which also has an object of study of ambiguous status (real? fictional? social? supernatural?) but where the degree of consensus is far lower, even within specific religious communities, let alone between religions.

[4] “What about the SCIENCE OF BEING that myself and three other graduate students in this Heidegger course are developing?” Sorry – that’s not a science.

[5] Note, though, that economics is not a more manful or vigorous science than any other social science, even if it involves a lot of math.

Interacting with some medical professionals, recently, has made me think a bit more about evidence-based belief and practice. I am in favour of evidence-based belief and practice; my ‘theoretical’ perspective is a broadly empiricist one, with an admixture of (to my mind) relatively sophisticated pragmatism. But what does evidence-based belief and practice consist in?

This is a more complicated question than it may appear, and these remarks don’t aim to do more than touch on the relevant issues. But, trivially, for our belief and practice to be evidence-based is for our beliefs and practices to be oriented to, and ‘checkable by’, the way things are in the world. In evidence-based belief and practice, we grant authority to empirical evidence to legitimise or de-legitimise our beliefs and practices. And, further, we evaluate this evidence itself by sets of rationally and empirically justifiable criteria, to determine which evidence counts as good evidence and warrants such authority, and which does not.

This granting of authority to specific types of event or entity – ’empirical evidence’ – is a social practice. Authority – on the Brandomian pragmatist metatheoretical approach I endorse – is created and granted by sapient entities’ social practices. We grant a specific social status to specific kinds of non-human things (pieces of evidence), such that these non-human things can possess social authority within human discourse. Once authority has been granted in this way, it cannot – again, as a matter of social practice – necessarily be easily revoked; this is one reason why, on a Brandomian account, the authority of evidence can go against all human preferences or authority-decisions, even when authority has its source only in human action.

In the ‘analytic’ philosophical tradition – as often elsewhere – there is a tendency to assimilate the evidence-based responsiveness of the typical sapient organism reacting to environmental stimuli (the phenomenon of experience, or perception) to the formalised truth-seeking investigatory practices of the modern sciences. Willard Van Orman Quine puts the point as follows:

The scientist is indistinguishable from the common man in his sense of evidence, except that the scientist is more careful.

I disagree with this assimilation: I think that the belief-forming practices of scientific investigation are quite socially and historically specific, and should not be seen as the extension, or fuller realisation, of more mundane and broadly-engaged-in practices of everyday empirical observation. Science cannot be defended on those grounds; it must be defended in its social and historical idiosyncrasy.

I believe this defence is a worthy one; I am an advocate for scientific practice. But engaging in this metatheoretical defence of science involves steering between two, opposing, flawed accounts.

On the one hand, if we understand science as through-and-through a social practice like any other, there is a temptation to see this perspective as robbing science of its authority (rather than as explicating the nature of its authority); this approach can therefore often lead theorists into a relativism that regards our choice of the scientific approach as arbitrary or unjustified. In classical social theory, this perspective is perhaps best expressed by Max Weber’s movingly pessimistic reflections in ‘Science as a Vocation’, where his own commitment to the social scientific endeavour is presented as an ultimately irrational obedience to a demon “who holds the fibers of his very life.” In more recent social theory, a similar perspective is conveyed well by the Edinburgh strong programme’s conviction that the social-scientific analysis of scientific practice leads, inevitably and correctly, to relativism.

Relativism – whether it understands itself as anti-science, as a consequence of science, or both – is a common object of critique. The opposing flaw is also a serious one, however: this is the perspective that grounds science’s authority in an appeal to the way things are in the world, without seeing how this appeal must itself be understood as a social practice embedded in a complex system of social practices. For this approach, in Hegel’s – and Marx’s – phrase, “the process vanishes in the result”: the mechanism by which truth-claims are arrived at is forgotten, and truth-claims are wielded as if they were the source of science’s social authority, rather than its result (as is in fact the case). These approaches, then, are dogmatic – they understand themselves as (and, in most practical contexts, are) pro-science, but they have an inadequate understanding of what science is, as a historically specific set of social practices. Advocates of this perspective may be able to do science, but they are not able to adequately justify their findings without relying on a tacit set of social norms that their dogmatism overtly denies. Many of the pugnacious contemporary advocates of science, like Richard Dawkins and Daniel Dennett, belong in this category.

If, then, we are to be good – or, more to the point, metatheoretically enlightened – proponents of evidence-based belief and practice, we need to steer a course between these twin dangers of relativism and dogmatism. This can comfortably be done – in the posts here on Robert Brandom I’ve gone some way towards explaining the broad metatheoretical approach that, to my mind, best enables such a position (though, again, Brandom’s work is pitched at the level of everyday empirical experience, rather than scientific practice). But I am interested, now, in beginning to actually do evidence-based work. I’ve still got a lot to do in elaborating the metatheoretical perspective I endorse; but I also want to begin to leave that space behind. Enough with philosophy; enough, especially, with ‘Theory’ that regards itself, in a smug but profoundly confused way, as ‘post-empiricist’. I’ve spent enough of my life in that space already. The task of social science is to describe and analyse the social world, through the collection and interpretation of data; that’s the project I’m committed to; one of these days I’d like to get to work.

The mathematics of inferential statistics is based on the logic of random sampling: the inferences we make in inferential statistics rest on the assumption that the data we are inferring from are randomly sampled from the population we are inferring to – that every member of the population has an equal chance of ending up in our dataset. Obviously this usually isn’t the case; but that is the assumption, and the further our actual sampling practice deviates from that ideal situation, the less likely our inferences are to be valid.

In much inferential statistics, the population we are sampling from is an actual population of cases, which could in principle be observed directly if we only had the money, time, staff, access, etc. etc. Here the ideal procedure is to create a sampling frame that lists all the cases in the population, randomly select a subset of cases from the sampling frame, and then collect data from the cases we’ve selected. In practice, of course, most data collection doesn’t work this way – instead researchers pick a convenience sample of some kind (sometimes lazily, sometimes unavoidably) and then try to make the argument that this sampling method is unlikely to be strongly biased in any relevant way.
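
By way of illustration, here is a minimal simulation sketch of the contrast drawn in the last two paragraphs: a sampling frame from which we draw a genuinely random sample, versus a convenience sample in which only part of the population is reachable. (Everything here is invented for the example – the population, the sample sizes, and the crude ‘only the upper half of the population is reachable’ convenience mechanism.)

```python
import math
import random
import statistics

random.seed(0)

# A hypothetical population of 100,000 cases (all numbers invented).
population = [random.gauss(50, 10) for _ in range(100_000)]
true_mean = statistics.mean(population)

def ci_covers(sample, target, z=1.96):
    """Does this sample's 95% confidence interval for the mean cover `target`?"""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return m - z * se <= target <= m + z * se

n, trials = 100, 1_000

# Ideal case: the sampling frame lists the whole population, and we draw
# a simple random sample from it.
frame = population
random_coverage = sum(
    ci_covers(random.sample(frame, n), true_mean) for _ in range(trials)
) / trials

# Convenience case: only the upper half of the population is 'reachable',
# so every sample is drawn from a systematically unrepresentative subset.
reachable = sorted(population)[len(population) // 2:]
biased_coverage = sum(
    ci_covers(random.sample(reachable, n), true_mean) for _ in range(trials)
) / trials

print(f"CI coverage with random sampling:      {random_coverage:.2f}")  # ~0.95
print(f"CI coverage with convenience sampling: {biased_coverage:.2f}")  # ~0.00
```

The point is not that convenience samples are always this badly biased – they aren’t – but that nothing in the mathematics protects us when the random-sampling assumption fails; the argument for unbiasedness has to be made on other grounds.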

Sometimes, however, the population from which we draw our sample is not an actual population of cases that happen, for contingent practical reasons, to be beyond the reach of observation. Sometimes the population from which we draw our sample is a purely theoretical entity – a population of possible circumstances, from which actuality has drawn, or realised, one specific instance. Thus our actual historical present is a ‘sample’ from a ‘population’ of possible realities, and the generalisation we aim to make from our sample is a generalisation to the space of possibilities, rather than simply to some aspect of crass and meagre fact.
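
This picture – actuality as one draw from a population of possible histories – can be made concrete with a toy simulation. In the sketch below (all assumptions invented for the example), the observed ‘sample’ is a single run of a simple autoregressive process, and we estimate a parameter of the whole population of possible histories from that one realisation:

```python
import random

random.seed(1)
phi_true = 0.8  # parameter of the hypothetical generating process

def one_history(T=500):
    """Draw one possible history from the process x_t = phi * x_{t-1} + noise."""
    x = [0.0]
    for _ in range(T):
        x.append(phi_true * x[-1] + random.gauss(0, 1))
    return x

history = one_history()  # 'actuality': the single realisation we get to observe

# Least-squares estimate of phi from that one realisation -- an inference
# from the observed history to the space of possible histories.
num = sum(x_t * x_prev for x_t, x_prev in zip(history[1:], history[:-1]))
den = sum(x_prev ** 2 for x_prev in history[:-1])
print(f"phi estimated from one realised history: {num / den:.2f}")  # close to 0.8
```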

When we make claims that are predictive of future events, not merely of future observations of present events, we are, tacitly or overtly, engaged in this endeavour. To predict the future is to select one possible reality out of a space of possibilities, and to attribute a likelihood to this prediction is to engage in the statistical practice of assigning probability figures to a range of estimates of underlying population parameters – or, equivalently, of assigning probability figures to a range of estimates of future sample statistics ‘drawn from’ that underlying population. I may try to articulate this point with more precision in a future post – I’d like to spend more time on Bayesian vs. frequentist approaches to probability. And there is, of course, a ‘metaphysical’ question as to whether such a ‘population’ ‘really exists’, or whether the ‘samples’ themselves are the only reality, and the ‘population’ a speculative theoretical entity derived from our experience of those samples. Functionally, however, these stances are identical; and by my pragmatist lights, to note such functional equivalence is to collapse the two possibilities together for most theoretical purposes.
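
The ‘functional equivalence’ point can be illustrated in the simplest textbook case – normal data with a known standard deviation, and a flat prior on the mean – where the frequentist prediction interval and the Bayesian posterior-predictive interval for the next draw coincide exactly. (The data and parameters below are invented, and the exact coincidence is specific to these simple assumptions, not a general theorem.)

```python
import math
import random
import statistics

random.seed(2)
sigma = 5.0                                           # assumed known
data = [random.gauss(100, sigma) for _ in range(40)]  # an invented sample
xbar, n, z = statistics.mean(data), len(data), 1.96

# Frequentist 95% prediction interval: the next draw is N(mu, sigma^2), and
# our estimate xbar carries sampling variance sigma^2 / n.
half_freq = z * sigma * math.sqrt(1 + 1 / n)
print(f"frequentist prediction interval: ({xbar - half_freq:.2f}, {xbar + half_freq:.2f})")

# Bayesian route, flat prior on mu: the posterior is N(xbar, sigma^2 / n), so
# the posterior predictive for the next draw is N(xbar, sigma^2 / n + sigma^2).
half_bayes = z * math.sqrt(sigma ** 2 / n + sigma ** 2)
print(f"Bayesian posterior predictive:   ({xbar - half_bayes:.2f}, {xbar + half_bayes:.2f})")
```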

When we speak of universal natural laws, then, we are stating that a given fact – the law in question – will be true in the entire range of possible worlds that might, in the future, be actualised in reality. (Whether this ‘possibility’ should be understood in ontological or epistemological terms is beside the point.) For some, it is the role of science to make such predictions: on this erroneous stance, science attempts to identify universal features of reality, and any uncertainty that accrues to scientific results is the uncertainty of epistemological weakness, rather than ontological variation. Think, for example, of the well-known video of Richard Feynman making fun of social science for its inability to formulate universal laws of history.

To take this attitude is to misunderstand the nature not just of social science, but of science in general. Science is not characterised by a quest for certainty or for permanence; it is characterised, rather, by an ongoing collective process of hypothesis formation and assessment, based on specific collectively accepted evidentiary standards. The conclusions of science cannot be certain, because they must always be vulnerable to refutation in the light of empirical evidence and the application of community norms of argument. Similarly, the phenomena examined by science need not be necessary, or even ongoing. A scientific endeavour can be entirely descriptive, of the most local and variable phenomena imaginable, so long as the process of description is subject to the appropriate communal evidentiary norms. It can, similarly, be explanatory without being predictive, for we can analyse the causes of the phenomena we observe without being able reliably to predict those causes’ future impacts and interactions. The set of phenomena about which reliably predictive hypotheses – long-term or even short-term – can be formed is smaller than the set of phenomena that can be studied empirically using the relevant community norms of hypothesis formation and assessment.

The social sciences often approach this limit case of the purely descriptive. Social reality is enormously variegated – and often there is little in the way of testable general claims that can be taken from a study of any given social phenomenon. But prediction is nevertheless sometimes the goal of social science. When it is, the ‘laws’ we aspire to uncover are always local and limited in scope – when we form a hypothesis, that hypothesis applies within a certain local limit and no further. Where to draw the line – where to locate this limit – is a qualitative question that the community of social scientists must always bear in mind, but the existence of this limit in no way renders the endeavour ‘unscientific’.

When we make a social-scientific prediction, then, we are making a claim about what future reality will be drawn from the space of possibility. We do not know the scope of this space – nor do we have any reason to regard the principle of selection as random or unbiased; indeed, we have strong reasons to believe the contrary. Further, the nature of social reality is such that we can and do aspire to intervene in this selection – to attempt to influence which possibilities are realised. As social scientists we sometimes aim to predict what outcomes will be drawn from this space of possibilities – and such a prediction can only be made within the framework of a broader, historically informed judgement about the narrower region, within the space of possibilities, that we aspire to model.

But we should also be aware of other, unrealised but potentially realisable social possibilities, beyond the set of possibilities we are modelling at any given moment. Part of the task of the scrupulous social scientist is to describe this space of possibilities itself – to describe not just regularities, but also the possible variety from within which those local regularities are drawn. We cannot know the limits of the space of possibilities – no sampling frame of possible societies exists. But we can explore what the ‘samples’ themselves – existing and historical societies and behaviours – tell us about the scope of that hypothetical space.

This latter task is where social science intersects with political practice. The understanding of the likely behaviour of social reality is important for political practice – but so too is a sense of the larger space of possibilities from which our own past and present societies have been drawn, and from which alternative futures could be drawn, or made, if we only had the political ability to do so.