This is an extremely long post – something in the ballpark of 13,000 words – for which I apologise.  I can’t claim that it isn’t rambling and digressive, etc., but for what it’s worth it felt more or less like a single line of thought while I was writing it.  Unfortunately I don’t really have it in me to revise it in any serious way, so here it is.  The post is organised roughly as follows.  First I talk very briefly about individual-level epistemology, in its traditional ‘Enlightenment’ form.  Then I make a shift to social epistemology.  I draw on Neurath, Brandom, the strong programme – all the classics of my personal epistemological canon – to outline what I take to be a reasonably coherent social-epistemological account of science as an anti-foundationalist epistemic system.  This is the bulk of the post.  I then finish up by applying this model to a couple of personal preoccupations – a rather bathetic conclusion given the intellectual resources I’m drawing on, but again, it is what it is.  I guess you can see the post as trying to do two main things.  First, I want to give a social-institutional answer to the traditional demarcation problem: what is science?  Second, I want to reflect a little on what this answer implies for how individuals – both as citizens consuming scientific output, and as researchers contributing to the scientific endeavour – can and should relate to this broader institutional structure.  The ‘emotional’ point, from my perspective, is to try to think about the location of my own research within the broader scientific institutional space.

Start, then, with non-social epistemology – specifically, with the good old Enlightenment project of trying to figure out the way the world is, using the resources of science and reason.  I’ll take as an exemplary expression of this project the Royal Society motto: ‘Nullius in verba’ – ‘on the word of no one’.  This is the commendable (to my mind – of course, opinions on this matter differ) Enlightenment idea that the authority of tradition qua tradition is no authority at all – that we should not simply defer to the tradition, whether it be religious or political or philosophical or whatever.  Rather, we should figure things out for ourselves.

It’s worth pausing for a moment here, perhaps, to mention that there’s a philosophical connection between the scientific project (understood in these terms) and the anti-traditionalist, anti-authoritarian forms of political liberalism and radicalism that emerged during this same historical period.  Both of these projects are, at some level, driven by the same thought: we should not simply defer to authority (whether that be political or epistemic); whatever authority authority has comes from our own judgements and actions.  I think this is a good philosophical approach, and I take myself to be aligned with it, important caveats notwithstanding.  But this post isn’t about the connections between the epistemic and the political dimensions of ‘the Enlightenment’ – that’s all for another day.

What I want to start by talking about, rather, is the different ways in which “on the word of no one” could be understood.  If we reject the authority of tradition, what are we basing our epistemic judgements on?  As usual, I’m going to be maximally crude here, but I’m going to say there are basically three broad categories of alternative authority-source: experience (empiricism); reason (rationalism); and Mysterious Other (mysticism).  I appreciate that this is all a sort of first-year-undergraduate-level understanding of Enlightenment epistemology.  But at the same time it seems basically fine to me, and it’s what I’m going with.  Typologising in this way, then, and ignoring mysticism (on the grounds that it is a transformation of the Enlightenment rejection of tradition into a basically anti-scientific epistemic approach, and thus Does Not Align With My Values), we have two basic projects: grounding knowledge in the senses, and grounding knowledge in the faculty of reason (plus, of course, combinations of the two).

All well and good.  But then, as is I think at this point abundantly well-established by the subsequent philosophical tradition, once we start trying to elaborate these approaches we run into worlds of trouble.  When we think about our faculty of reason, does it not seem that our processes of reasoning are themselves, at least in part, socially inculcated and influenced – that is to say, influenced by the traditional authorities that our faculty of reason is meant to break with?  In my view the answer to this question is definitely “yes”.  Similarly, when we think about how to form judgements on the basis of experience, it seems plausible that theory is ‘underdetermined’ by experience and that, moreover, the way in which our experience is taken up into judgement is itself partly determined by socially- (and thus authority-) influenced processes of thought.  Obviously I’m not aiming to make the case for either of these positions in this post – I’m gesturing to the long philosophical debates around these issues.  Still, I’m a philosophical pragmatist, and therefore I’m on the “most stuff is socially constituted” side of these debates, and I tend to think that appeals to non-social faculties (whether of experience or reason) often tacitly rely on socially-constituted categories.

Even putting all this aside, though, there are more practical ways in which the “on the word of no one” principle runs into problems.  Obviously we’re not all carrying out every scientific experiment ourselves – we’re relying on other researchers to make empirical observations, and then reading their reports of those observations, or other researchers’ syntheses and summaries of those reports.  So testimonial authority is central to scientific empiricism.  Similarly, even when we are engaged in Cartesian rationalism, are we really thinking things through from first principles ourselves – or are we using others’ accounts of their reasoning as an aid to, and frequently a substitute for, our own?  Here again, for example, the canonical status of Descartes’ ‘Discourse on Method’ is an interesting kind of performative… if not contradiction, then at least tension: a canonical authority for rejecting canonical authority.  There are tensions here, I think – in the constitution of an anti-traditionalist tradition; the social inculcation of the project of rejecting socially-inculcated judgements.

This kind of line of reasoning is one of the ways you can get to a social- or practice-theoretic critique of Enlightenment rationalism or empiricism.  The crude argument here would go: the Enlightenment project aspired to break with social authority; but we can show that the very categories with which Enlightenment thinkers engaged in this project are socially constituted via unacknowledged relations of authority. From here it is easy to conclude that the Enlightenment project as rejection of authority is basically a contradiction in terms, and we should throw it in the bin.

Obviously this is a very crude summary of the critique, but I think this is recognisable as a summary of quite a lot of critical science studies.  For example (and since I started with the Royal Society), I would argue that Shapin and Schaffer’s ‘Leviathan and the Air-Pump’ clearly falls within this broad genus.  Barnes and Bloor’s ‘strong programme’ argument for relativism can likewise easily be taken to point in this direction.  So can at least some categories of critical theory (in the Frankfurt sense) and Marxism, as well as some forms of more standpoint-epistemology-adjacent contemporary critical theory.

So.  At this point we’ve traversed two moments of what we can see as a kind of ‘dialectic’.  We started with a picture of the Enlightenment epistemological project that understood itself as rejecting social, authority-based sources of knowledge in favour of various kinds of individual epistemic grounds – rationalist or empiricist.  That’s moment one.  Then we argued that this doesn’t work: social relations, and authority-relations, implicitly constitute even the apparently non-socially-constituted categories of Enlightenment epistemology.  This is so in at least two ways.  First, the ‘individual’ psyche is always partly socially constituted, in its faculties of both observation and reason: you can’t find your way to a faculty that is not shaped by the forces of social authority that the faculty superficially appears to transcend or escape.  Second, you can’t in practice engage in any serious project of knowledge construction without relying on testimony, and so we need to bring authority-relations back into our epistemology in order to deal with testimony.

Now, if you are of a critical turn of mind, you can interpret these critiques of ‘individualist’ Enlightenment epistemologies as damning for the entire epistemological project.  The Enlightenment thinkers sought to construct knowledge “on the word of no one”; they are not able to do so; too bad for the project.  This is the second moment of our ‘dialectic’, which takes itself to simply refute the first.

But not so fast!  We don’t have to accept critical science studies’ debunking application of these insights.  Our third ‘moment’, then, is accepting the idea that we can’t get away from either the social constitution of ‘individual’ faculties or testimonial authority structures, and trying to construct an understanding or version of the Enlightenment epistemological project that is grounded in these insights, rather than refuted by them.

This, obviously enough, is the ‘moment’ of this ‘dialectic’ that I endorse.  I take it that this broad approach has been pursued, in different ways, by a lot of thinkers that I’m interested in.  On the one hand, there are the explicit ‘social epistemologists’ who are interested in the social structure of science as an institution.  On the other hand, there are the pragmatist philosophers – especially, for me, Robert Brandom.  I take it that Brandom’s work – especially his recent Hegel book – also presents a highly sophisticated social epistemology, which aims to reground Enlightenment rationalism in social-institutional terms.  In the rest of this post I’m going to dwell on this third ‘moment’, or paradigm.

Start with ‘Neurath’s Boat’.  The other day I finally got round to reading Neurath’s critique of Spengler, in which he articulates his famous boat metaphor.  There Neurath writes:

Even if we wish to free ourselves as far as we can from assumptions and interpretations we cannot start from a tabula rasa as Descartes thought we could.  We have to make do with words and concepts that we find when our reflections begin.  Indeed all changes of concepts and names again require the help of concepts, names, definitions and connections that determine our thinking.

This understanding of our intrinsic enmeshment in inherited concepts and associations is part of Neurath’s conception of rational thought in holistic, rather than atomistic, terms:

When we progress in our thinking, making new concepts and connections of our own, the entire structure of concepts is shifted in its relations and in its centre of gravity, and each concept takes a smaller or greater part in this change.

Neurath goes on:

Not infrequently our experience in this is like that of a miner who at some spot of the mine raises his lamp and spreads light, while all the rest lies in total darkness.  If an adjacent part is illuminated those parts vanish in the dark that were lit only just now.  Just as the miner tried to grasp this manifoldness in a more restricted space by plans, sketches and similar means, so we too endeavour by means of conceptually shaped results to gain some yield from immediate observation and to link it up with other yields.  What we set down as conceptual relations is however, not merely a means for understanding, as Mach holds, but also itself cognition as such.

I think this last sentence is a noteworthy remark from Neurath.  Of course Neurath isn’t here proposing an elaborated Brandomian-Hegelian argument that conceptual content should be understood in terms of inferential connections, but I think it is clear that for Neurath here “cognition as such” is about tracking connections between concepts.  For this reason, for Neurath, changes in our overall web of concepts can arguably also be understood as transformations of the concepts themselves.  There is a relatively strong sense, then, in which Neurath’s understanding of cognition is both holistic and cultural, as well as (plausibly) inferentialist-adjacent.

Now comes the famous boat metaphor:

That we always have to do with a whole network of concepts and not with concepts that can be isolated, puts any thinker into the difficult position of having unceasing regard for the whole mass of concepts that he cannot survey all at once, and to let the new grow out of the old.  Duhem has shown with special emphasis that every statement about any happening is saturated with hypotheses of all sorts and that these in the end are derived from our whole world-view.  We are like sailors who on the open sea must reconstruct their ship but are never able to start afresh from the bottom.  Where a beam is taken away a new one must at once be put there, and for this the rest of the ship is used as a support.  In this way, by using the old beams and driftwood, the ship can be shaped entirely anew, but only by gradual reconstruction.

This is, I think, probably the classic statement of anti-foundationalism in philosophy of science.  It’s wonderful stuff, and I fully endorse it.  But this move then opens up a whole set of other questions.  In particular, granted that we are sailors adrift remaking our boat at sea – how much faith do we put in the existing state of the boat?  

The fallibilist and anti-foundationalist approach I’ve been describing, I think, is typically associated with two commitments that stand in apparent tension.  On the one hand, there is the commitment to the idea that existing scientific beliefs and methods are our best starting point.  On the other hand, there is a commitment to the ongoing remaking of those beliefs and methods.  The tension between these stances is not, of course, the tension of incompatibility – the endorsement of both stances is precisely what gives the fallibilist anti-foundationalist position its power.  But this tension is something that needs to be navigated in practice by anyone pursuing this approach to scientific epistemology.

I think another classic expression of this idea is Max Weber’s reflections in ‘Science as a Vocation’. There Weber writes:

In science, each of us knows that what he has accomplished will be antiquated in ten, twenty, fifty years.  That is the fate to which science is subjected; it is the very meaning of scientific work, to which it is devoted in a quite specific sense, as compared with other spheres of culture for which in general the same holds.  Every scientific ‘fulfilment’ raises new ‘questions’; it asks to be ‘surpassed’ and outdated.  Whoever wishes to serve science has to resign himself to this fact.  Scientific works certainly can last as ‘gratifications’ because of their artistic quality, or they may remain important as a means of training.  Yet they will be surpassed scientifically – let that be repeated – for it is our common fate and, more, our common goal.  We cannot work without hoping that others will advance further than we have.  In principle, this progress goes on ad infinitum.

This is the ‘paradox’ of fallibilism: that we put our confidence in judgements precisely because we expect them to ultimately be found inadequate.  I think this is a coherent – and, indeed, a robust and correct – philosophical perspective.  But it does raise questions about exactly what attitude to adopt to any specific judgement, as well as to the institutional structure of science as a whole.

Before I write more on that theme, I want to present one more representative of anti-foundationalist philosophy of science: Michael Polanyi.  In his classic essay ‘The Republic of Science’, Polanyi sketches an account of the institutional structure of science that I think is broadly correct.  Polanyi aims to give an account of how the institution of science can, as a whole, embody the Enlightenment ideal of “on the word of no one”, even as every specific moment of the institution relies on extensive authority-claims.  In Polanyi’s words:

the authority of scientific opinion enforces the teachings of science in general, for the very purpose of fostering their subversion in particular points.

That is to say: the community of scientists trains new aspiring members of the community in the scientific tradition – some competence in the tradition is a precondition of full membership in the scientific community of mutual recognition.  Yet one of the norms of the scientific community that scientists thereby enter is that any element of this tradition can in principle be challenged.  This institutional structure thus both transmits a tradition and aims to ensure that every element of that tradition is in principle open to rebuttal, and thereby capable of empirical and rational grounding.

As I keep saying, something in this broad space is the vision of science I endorse.  Moreover, this is not a particularly niche or strange opinion on my part but, I take it, one that is at least in principle widely held.  The institutions of science are constructed in the way they are in large part because they are informed by precisely this fallibilist understanding of the rationalist and empiricist endeavour.  Individually we cannot but take most of our opinions on the basis of the authority of others.  But collectively we have constructed those authority-relations, within the institutional structure of science, such that any and every individual claim can be subjected to the tests of experience and reason.  And this fact about the scientific community as a whole is what justifies any individual within that community accepting so many of the community’s conclusions on the basis of (apparently) nothing more than community authority.  This institutional fallibilist structure is the basis of the authority of the beliefs and techniques that the community transmits.

Ok.  So this is the third ‘moment’ of the ‘dialectic’ I’m discussing: this vision of science as a fallibilist institution, and the dual role of authority within this institution.

But our thinking about how science is structured doesn’t, and shouldn’t, stop there.  In the remainder of the post, then, I want to start to build on this core picture by thinking in a very crude way about a few different challenges or problems that can be presented by or to this picture.  I’ll aim to be somewhat brief and (therefore, as usual) crude.

First issue.  How are we to assess the overall reliability of our scientific institutional structures?  Our basic Neurath-Weber-Polanyi picture is of science as an institution which is, over time, self-correcting and self-improving.  It may well be the case that any given commitment turns out to be misguided, but the general mechanics of the institution’s internal checks and balances will tend over time to improve its claims and methods.  Moreover, for this reason, it’s reasonable to treat the institution’s current overall output as a reasonable approximation of our current best guess as to how things really are.

The basic critique here is: what if that’s not the case?  What if the institution is just fundamentally broken in some way?  The way in which you think science is broken is likely to depend on your ideological location: maybe it’s in the pocket of capital or the ruling class, or ‘globalist elites’.  Maybe the social location of scientists shapes their judgement in a way that is destructive of real insight.  Or maybe science is just, for whatever contingent reason, a self-selecting cadre of people with bad methods and bad views, using their institutional clout to prevent self-correction mechanisms from operating.  There are broader and narrower versions of this kind of critique.  At the limit, there’s the rejection of science tout court.  But there are also many narrower critiques: such-and-such a discipline or sub-discipline is in the hands of fools and/or scam artists and/or powerful interests, and science’s self-correction mechanisms are not working because of the way those with institutional power have structured the relevant field.

How does one respond to this kind of critique?  Well, to a large extent it depends on context.  The reason it depends on context is that it is probably impossible in the abstract to draw a clear line between bad versions of this critique, which aim to reject what’s best in science, and good versions of this critique, which are themselves part of the ‘self-correction’ mechanism from which science derives its authority.  Put differently: if one simply rejects out of hand, in an undifferentiated way, critiques of current scientific practice and scientific findings, on the grounds that such critiques are ‘anti-science’, then one is, potentially, cutting away the basis for the scientific authority one seeks to appeal to.  Because, of course, the whole point of the scientific enterprise is that any dimension of scientific orthodoxy is in principle up for questioning.

This dynamic, of course, is why basically every crackpot thinks that whatever they are doing is real science, and the existing body of scientific knowledge and practice is an anti-science conspiracy masquerading as real science in order to fool the rubes.  We’re all, I take it, familiar with this kind of argument, and we are (most of us) not keen to take the flat earth people (or whoever) very seriously, still less to give them chairs in theoretical physics at major universities.  And yet the rational core of the crackpot’s vision of themselves as persecuted truth-teller is that if science is to function according to that Enlightenment vision with which we began – “nullius in verba” (albeit now understood at the collective and institutional level rather than the individual level, as discussed above) – then there must be some sliver of possibility that the crackpot is onto something.

And this creates a further problem.  Presumably we don’t want to place flat earth theory and the best current theoretical physics on a completely level institutional-epistemic playing field.  And yet the kinds of gatekeeping that are established to keep out the cranks always risk doing more than that: blunting the self-correcting dimension of science’s ongoing, self-constitutive self-critique.  The challenge of scientific institution-design is to balance these imperatives: the gatekeeping required to produce high-quality knowledge-claims, balanced with the ability to critique in principle every dimension of those knowledge-claims, and of the mechanisms by which they are derived.  Of course this balance is hard to get right even in the best of circumstances – that is, even without all the interests and errors at work that we’re all familiar with.

Ok.  So this is one set of issues – rather crudely put.  But here’s another, though closely related, set of issues.  As social scientists and philosophers have explored the social dimensions of science as an epistemic system, there has been increasing focus on ‘epistemic diversity’.  Here, again, the picture is one of science as a system with internal epistemic checks and balances – and those checks and balances require epistemic diversity.  If science is an ‘evolutionary’ system (as per Popper), then the way that evolutionary process works is by selection among variation.  And even if you don’t buy the entire science-as-evolutionary-system package – or the closely related science-as-catallaxy ‘marketplace of ideas’ vision of Polanyi – there’s still a basic insight here: if you don’t have some diversity of hypotheses, as well as the ability to adjudicate between different hypotheses using evidence, then you just don’t have science.  Again, then, we have an apparent ‘tension’ which is of the essence of the scientific enterprise: diversity of opinion oriented towards consensus around truth.  The epistemic authority that scientific consensus enjoys derives precisely from its willingness to adjudicate between diverse hypotheses – but that diversity of hypotheses is, intrinsically, a limit to consensus.

This is another version of the general point I made earlier.  But recent work in formal social epistemology has drilled down in this general problem space, and found some interesting, more specific results.  Kevin Zollman’s recent(ish) work on ‘the epistemic benefit of transient diversity’ is one such research strand.  Zollman mathematically models the opinion dynamics of very simple epistemic systems.  Agents exist on a graph (i.e. a network), and they interact with other agents via the edges (links).  Zollman asks: what graph or network structure results in overall better epistemic outcomes?  And he finds that (under plausible assumptions) relatively weakly connected networks result in better overall epistemic outcomes than do strongly connected networks.  Why?  Because in strongly-connected networks agents tend to coalesce quickly around a specific consensus – and that consensus may well be wrong.  It is better for there to be higher ongoing diversity of opinion, so the collective selection of the ‘correct’ opinion among that diversity takes place over a longer time-frame, with more evidence and more measured judgement in play.
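
To make this concrete, here is a minimal simulation in the spirit of Zollman-style models.  To be clear, this is my own toy sketch, not Zollman’s actual setup, and every parameter in it is an arbitrary choice of mine.  Agents repeatedly choose between an established method and a genuinely better new one, pool results with their network neighbours, and update their beliefs; we then ask how often the whole community ends up endorsing the better method, on a sparse network versus a fully connected one.

```python
import random

def run_trial(neighbours, n_agents, p_new=0.55, p_old=0.5,
              pulls=10, rounds=300):
    # Each agent holds a Beta(alpha, beta) belief about the success rate
    # of the 'new' method, whose true rate p_new beats the old p_old.
    alpha = [random.uniform(1, 4) for _ in range(n_agents)]
    beta = [random.uniform(1, 4) for _ in range(n_agents)]
    for _ in range(rounds):
        results = {}
        for i in range(n_agents):
            # Myopic choice: only test the new method if you currently
            # expect it to outperform the old one.
            if alpha[i] / (alpha[i] + beta[i]) > p_old:
                wins = sum(random.random() < p_new for _ in range(pulls))
                results[i] = (wins, pulls - wins)
        if not results:
            break  # the whole community has abandoned the better method
        # Everyone updates on their own results and their neighbours'.
        for i in range(n_agents):
            for j in neighbours[i] | {i}:
                if j in results:
                    s, f = results[j]
                    alpha[i] += s
                    beta[i] += f
    return all(a / (a + b) > p_old for a, b in zip(alpha, beta))

def cycle(n):     # weakly connected: each agent hears from two others
    return [{(i - 1) % n, (i + 1) % n} for i in range(n)]

def complete(n):  # strongly connected: everyone hears from everyone
    return [set(range(n)) - {i} for i in range(n)]

n_agents, trials = 10, 200
for name, graph in [("cycle", cycle(n_agents)),
                    ("complete", complete(n_agents))]:
    wins = sum(run_trial(graph, n_agents) for _ in range(trials))
    print(f"{name}: community converged on the better method "
          f"in {wins}/{trials} runs")
```

In runs of this kind, the sparse cycle tends to lock the whole community into the wrong answer less often than the complete graph, because an unlucky early streak of results cannot propagate to everyone at once – though how large the effect is depends heavily on the parameters, which is part of why one shouldn’t lean on toy models too hard.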

This kind of result, I take it, supports a fallibilist and (in some sense) evolutionary perspective on the scientific research process.  It supports the idea that the strength of science lies in its ability to accommodate high diversity of opinion and (although Zollman isn’t studying this) method.  Of course, Zollman’s analysis is just a toy model, and (as Zollman emphasises) one doesn’t want to draw very strong conclusions on such a basis – but I take it that we have strong philosophical reasons to believe this kind of thing anyway, as discussed above.  Again, in other words, we are led to the idea that lots of scientists being wrong is central to the epistemic authority of science as a whole.  Efforts to establish institutional structures that speed up the consensus-formation process are likely to result in worse collective epistemic outcomes.

Ok.  So this is the basic picture of science as an institution that I endorse.  But here is where I want to go, with all this theoretical apparatus.  If we understand science in these terms, then a set of difficult problems presents itself about how individual scientists – or really any individual, scientist or not – should interact with the scientific institutional structure as a whole.  If we adopt the original, individualist Enlightenment epistemological approach then this category of problem doesn’t present itself: the individual is the seat of knowledge, and epistemic authority can be assessed at the level of the individual.  But if we adopt this social and fallibilist understanding of epistemology, then the individual is not the seat of knowledge – knowledge is something that we produce and assess collectively via a mechanism that intrinsically involves much individual-level error.  Moreover, individual epistemic virtues are far from the only things that need to be considered when evaluating epistemic authority: the specific reliability of scientific knowledge is a feature of the system as a whole, rather than of any one of its moments.  In addition, we have derived the apparent ‘paradox’ that (at least for large classes of claim – I’ll introduce more necessary nuance here later) even if we want everyone to be ‘correct’, we also don’t want everyone to believe the same thing – because that would undermine the basis for the authority of the claims we take to be correct.  My question in the remainder of this post is: what does this mean for the way in which any given individual ‘ought’ to relate to the scientific institutional apparatus and tradition?

At this point I want to talk a bit about some different attitudes that can be taken to the scientific enterprise.  In an earlier draft of this post I used the ‘case study’ of the discourse around the science of COVID-19 to illustrate some of these points.  That former draft is probably still more visible than it should be in what follows, but I decided it’s a much too contentious – and concrete – topic to be worth dragging into this basically philosophical argument.  Still, the debates over COVID science are the kind of thing I have in mind in the following discussion.  There is a scientific discourse; how do we choose to relate to it, as ‘consumers’ of the output and implications of scientific research?

Here then are some ways it is possible to relate to ‘science’, or scientific institutions, as a citizen:

  1. Just flat-out rejection of the epistemic legitimacy of science.  Obviously this attitude comes in a range of different forms, some a lot more sinister than others.  Still, there is a problem in how to engage with this perspective, if (like me) you are broadly pro-science.  Obviously you can’t really argue with fundamentally anti-scientific claims on the basis of the scientific literature, because this perspective simply rejects the scientific literature.  The real argument is at the level of ‘basic worldview’ – and it is very difficult to know where to begin with that kind of debate.
  1. Moving on, then, another orientation to the scientific discourse is to just accept what specific prominent science communicators say as a summary of ‘the science’.  In my view this approach makes a lot of sense as a time-saving heuristic.  Most of us are extremely busy and time-poor – we simply don’t have the capacity to form judgements about the state of the current scientific literature, and therefore we delegate that job to people who have assumed the public role of assimilating and communicating the current state of the relevant science.  This is reasonable and rational – it’s how epistemic delegation works.  Of course, if you think the relevant scientific and science communication institutions are fundamentally broken, then this is a bad heuristic.  But if you don’t think that, this is a reasonable approach, in my view – given paucity of time, etc.  It needs to be borne in mind, however, that this is a shortcut heuristic – which is relevant to approach (3).
  1. The third approach is the same as (2), but more dogmatic.  That is, this approach doesn’t just accord public science communicators authority as a shortcut heuristic, but it insists that there is something very problematic or suspicious about dissenting from their views.  For this perspective, the authority of specific scientists or science communicators is identical with the authority of science in general – to doubt these science summarisers and communicators is to doubt science itself.

    This is a much more dubious stance, in my view.  It’s appropriate to defer to public science communicators as a time- and labour-saving heuristic – but we need to remember that their role is to summarise and synthesise an intrinsically pluralistic and internally diverse field of discourse.  These communicators’ judgements about how to synthesise that internal diversity of scientific opinion are very much not the same as the authority of science in general.  Inevitably, many experts will dissent from the specific synthesis proposed.

    I think this tendency (a dogmatic ‘pro-science’ attitude, where ‘science’ is identified with some specific figure or figures within the extremely diverse scientific ecosystem) is quite common among what I would call the “I bloody love science!” crowd, as well as among some scientists and science communicators who find it convenient to claim the authority of science as a whole for their contributions to an ongoing pluralistic scientific discourse.  It is a way of understanding science as technocratic expertise, rather than in more fallibilist and pluralist terms.  You could do worse than this, but I don’t think it’s a great orientation to science as an institution.
  1. A fourth approach is a different, more nuanced form of denialism or scepticism.  Unlike (1) – the “everything is lies” approach – this perspective takes the scientific literature seriously.  However, it mobilises the intrinsic fallibility of any and every individual study to cast doubt over the literature as a whole.

    I think there are two variants of this approach.  One is bad faith – the kind of Darrell-Huff-working-for-the-tobacco-industry ‘merchants of doubt’ cynical mobilisation of scepticism in the service of a predetermined agenda.  This is denialism proper, the cold-eyed use of scientifically literate hyperbolic scepticism to cast doubt on an agenda the author opposes.

    There’s a more good-faith version of this approach, though.  This happens when scientifically literate people, who spend a lot of time engaging with the scientific literature, slowly become horrified at the fact that when you scratch at the methods of scientific publications, or the structures of scientific institutions, you typically find flaws.  It’s really hard to do good research; most research isn’t good; and even research that’s good will have significant intrinsic limitations.  If you have the right (or wrong) sensibility, as you look at this stuff you slowly become convinced that we just don’t know the first thing about anything, that the entire scientific enterprise is a towering house of cards built on sand.  This is, I think, the good faith road to denialism.

    How should we react to this approach?  Well, again, I think we need to be careful.  Sometimes it is indeed the case that a scientific field or subfield is just fundamentally broken – the studies carried out in it simply aren’t good enough for us to draw any meaningful conclusions; the noise massively outweighs any signal; the biases or interests at work are so overwhelming in their influence that we can’t glean any kind of legitimate signal through the shadow they cast.  We can’t rule out this possibility a priori, and should pay enough attention to the sceptics to take it seriously.

    At the same time, though, the fact that any individual study is flawed (which basically every study is for some value of ‘flawed’), or even that there are systematic generative flaws in the relevant institutional structures (which again will always be the case for some value of ‘flaws’) doesn’t mean that the scientific subfield (or whatever) in question should be written off.  The reason for this is that – as I was arguing at greater length earlier – scientific conclusions are really aggregate phenomena.  They emerge as signal from noise, and the noise can be very substantial indeed – can even be systematic – while still generating a useful signal.  The idea of the scientific enterprise is that we are all engaged in a highly fallible process, but our collective research endeavour is stronger than any individual study or claim, or the flaws that afflict them.  And if we have the institutions of science functioning halfway properly, this indeed ought to be the case – a signal ought to be detectable, over time, despite everything.

    In this sense, the ‘good faith sceptic’ is (I’m arguing) taking an overly pessimistic or perfectionist approach to the assessment of scientific validity.  The good faith sceptic assumes that errors are magnified by aggregation – that if all the individual studies have some flaws, then the field as a whole must be a true disaster – rather than understanding the way in which institutionalised fallibilism allows fallible studies to produce something greater than the sum of their parts, via the process of collective sifting and checks and balances.  Again, we can’t assume that this aggregate-level effectiveness of the scientific research process is the case a priori for any given field, but I’m claiming that it in fact often is the case for many actually-existing scientific institutions.  (For a toy illustration of the aggregation point, see the sketch just after this list.)
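
Here, then, is the toy simulation promised above.  Every number in it is invented purely for illustration.  Each simulated ‘study’ is both biased and noisy – a fair fraction of them individually point the wrong way entirely – and yet the pooled estimate still tracks the true effect.  The final lines make the crucial caveat concrete: if the flaws are shared rather than idiosyncratic, aggregation inherits the bias instead of washing it out.

```python
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.3  # the real underlying effect (invented for illustration)
N_STUDIES = 60

estimates = []
for _ in range(N_STUDIES):
    bias = random.gauss(0, 0.3)   # each study's own idiosyncratic flaw
    noise = random.gauss(0, 0.5)  # plus ordinary sampling noise
    estimates.append(TRUE_EFFECT + bias + noise)

wrong_sign = sum(e <= 0 for e in estimates)
print(f"studies individually pointing the wrong way: {wrong_sign}/{N_STUDIES}")
print(f"pooled estimate: {statistics.mean(estimates):.2f} "
      f"(true effect: {TRUE_EFFECT})")

# The good-faith sceptic's live worry, made concrete: a *shared*
# systematic bias shifts the pooled estimate along with every study.
shared = [e + 0.4 for e in estimates]
print(f"pooled estimate under a shared bias of 0.4: "
      f"{statistics.mean(shared):.2f}")
```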

Ok – so far we’ve looked at four different ways to approach scientific findings.  I want to suggest that each of these ways has a sort of partial or lopsided attitude to the complex dynamic system of our scientific institutions.  The full anti-science denialist just rejects the whole thing; the heuristic timesaver and (in a less defensible form) the pro-technocracy expertise lover focus on some specific ‘output’ as bearing the authority of the institution of science as a whole (investing a moment of the system with an attribute that can really only legitimately be attributed to the system overall); the good faith denialist fixates on research flaws without understanding how the checks and balances of aggregation associated with the practice of the community as a whole can permit useful signal to emerge even through very significant noise.  

But we can, in principle, do better than all of this: we can ourselves triangulate between many different studies, and we can assess the likely institutional incentives and strengths and weaknesses in play.  Of course, we need to have the time available to do this – and it relies on our own judgement.  So this isn’t an easier – or even, necessarily, a better – way to approach things than making use of simpler heuristics.  Our judgement may be worse than that of whichever science synthesiser and communicator we would otherwise choose to delegate this task to!  But the approach is at least available.  And there is a certain sense in which this approach is more adequate to what science ‘is’.

Ok.  Let’s say we adopt this kind of approach.  Here we are trying to engage not just with science in the sense of individual outputs – whether individual papers or summary overviews assembled by science communicators – but with the dynamics of the relevant field as a whole.  This is, at least potentially, a good way to go about things – but it is also extremely cognitively taxing.  Moreover, even if we adopt this kind of approach – moving beyond the first-pass heuristic of trusting some specific synthesiser or synthesisers – we are still constantly engaged in acts of epistemic delegation.  Understanding science in the systemic fallibilist way I’m advocating means there is simply no way to get away from epistemic delegation – from trying to make rule-of-thumb judgements about whose word to rely upon.  In taking the approach I’ve described we are attempting to engage in a more sophisticated and triangulated effort at weighting the credibility of testimony – but we cannot but ultimately make judgements about how to weight testimonial credibility.  At the end of the day, this is core to the entire scientific enterprise.

And this basic, unavoidable fact about how science works means that we are never not going to be vulnerable to ‘scepticism’.  I began this post with the early modern Enlightenment approaches to foundationalist epistemology.  Rationalist foundationalism placed the individual faculty of reason centre stage, while empiricist foundationalism placed observations of nature centre stage, but regardless the idea was that the appeal to testimony could ultimately be grounded in something that itself did not need the grounding of the attribution of social credibility.  To use Barnes and Bloor’s phrase, the epistemic grounds of such philosophical approaches were meant to “glow by their own light”.

If we adopt a fallibilist, anti-foundationalist approach to science as a complex system, though, we lose this kind of grounding.  The point at which our chains of reasoning ‘bottom out’ is always contingent.  There is always, in principle, more that one could do; one is always engaged in epistemic delegation, treating something as contingently trustworthy that is, in principle, open to contestation.

This fact opens the door to an infinite application of specific scepticisms.  It is always possible to continue asking “and what’s your basis for believing that?”  And this infinite application of specific scepticisms itself has a double face.  On the one hand, and to reiterate, the goal of our scientific system as a whole is to collectively institutionalise the principle that was fallaciously individualised in the first wave of Enlightenment rationalism and empiricism – “on the word of no one”.  On the other hand, in a social, anti-foundationalist and fallibilist understanding of science, this principle is institutionalised through a structure of contingently authoritative testimony – specifically taking people’s word as authority enough to believe things.  Sceptical questioning of taken-for-granted authorities can thus be seen both as the essence of rational, empiricist scientific inquiry, and as undercutting the testimonial institutions that we use to pursue rational, empiricist science at all.  Which of these a given act of questioning ‘counts as’ is a matter of social perspective.

Alright.  Here I want to pull back slightly, and start writing at a greater level of generality – talking not about science specifically but epistemic systems in general.  I think there’s a tendency in quite a lot of philosophy of science to somewhat conflate the specific features of science with human reason and observation in general (indeed, there are lots of people who would argue that that’s justified because there’s actually nothing that really differentiates science from other kinds of human epistemic practices!).  I don’t want to do that – I do think science can be demarcated, albeit loosely, from non-science.  Even so, I want to pull back now to make some broader remarks, drawing (as usual) on Brandom’s theory of practice and discourse.  Then I’ll circle back round to the problem space of fallibilist understandings of science.

So – start with Brandom’s ‘default, challenge, response’ model of the game of giving and asking reasons.  And start by thinking about the philosophical problems this model is responding to.  If we are thinking about inferential chains, then we are faced with the problem: where do those inferential chains stop?  It seems like we have three options.  First, there is no terminus – the inferential chain just keeps on going forever down an endless series of new premises.  This seems like it might be an infinite regress, such that we’re never able to ground our reasoning in anything because we never reach a stopping point.  Second, there is a terminus, but it is itself ungrounded.  This seems like it might be a form of dogmatism.  Third, there is a circular chain of inferences, such that our original inference effectively functions as its own ground.  This seems to combine negative features of the first two scenarios – an infinite regress that is somehow also a dogmatism.

But what else are we going to do?  It seems like these options are exhaustive.  Moreover, something like this problem occurs not just at the level of substantive premises, but also at the level of logical inferences themselves – this is the argument Lewis Carroll makes in ‘What the Tortoise Said to Achilles’.  Because an inference can be (in Brandom’s terminology) explicitated as itself a substantive premise – this is what logic does, on an expressivist account: it makes the formal machinery of reasoning available as conceptual content, not just practice – the exact same problem can be made to recur in relation to the logical processes of inference by means of which the inferential chain is itself constructed.

So what to do?  Brandom’s ‘default, challenge, response’ model proposes that we start with “material inferences” – that is, substantive, not just formal, inferential claims (and, of course, on Brandom’s account inferences are the stuff of conceptual content in general) – which are presumed (by default) to be good.  Then those material inferences (or conceptual contents) can be challenged as part of the general discursive practice of asking for and giving reasons.  Once challenged, we are obliged to give a reason for our commitments.  On Brandom’s account, then, we inhabit a space of reasons which is filled with ‘default’ commitments that are not, at that moment, vulnerable to challenge.  Indeed, there is no other way to enter the space of reasons – to be a sapient creature at all.  This world of ‘default’ commitments is the inherited materials of our conceptual space – the boat we are remaking, in Neurath’s metaphor.  Then the process of reasoning – the game of asking for and giving reasons – is the remaking of that boat, by challenging default commitments, and thereby extending our inferential chains outward, turning what had been premises into conclusions, grounded in new premises.

As we reshape our conceptual world, then, on this model, we are (like Neurath’s miners) shifting the location of the light of inquiry.  Commitments that had been default premises become the conclusions of newly established inferential chains.  At the same time, commitments that we arrived at by inferential chains get integrated into the default background of our conceptual habits.  In this latter scenario, inferential chains ‘drop out’ as they achieve local consensus, and what had been a laboriously-arrived-at conclusion becomes a habitual, unexamined premise.  It’s important to recognise that both ‘sides’ of this process – default premises becoming contested conclusions; contested conclusions becoming default premises – are essential dimensions of our rational discursive practices: you can’t have one without the other.

I take it that (as appropriately elaborated by Brandom) this is a more carefully developed (albeit also more boringly articulated) version of the vision of “cognition as such” that Neurath laid out in the passages from ‘Anti-Spengler’ I quoted above.  Ok.  But all this means two things.  First: we have swapped a general foundationalism (as in the original Enlightenment foundationalisms I began by discussing) for a series of ‘local foundationalisms’ – default commitments always vulnerable to challenge.  And, second: what counts as a local foundation, a currently default commitment, is a matter of local social practice.
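
At the risk of bathos, the structural point can be made vivid with a deliberately crude toy data structure.  This is my own illustration, nothing from Brandom’s texts, and all the example claims in it are invented.  The point it is meant to display: every entitlement check bottoms out in commitments that are currently defaults, and challenging a default doesn’t eliminate the bottoming-out, it just relocates it.

```python
class SpaceOfReasons:
    """Toy 'default, challenge, response' bookkeeping (illustrative only)."""

    def __init__(self, defaults):
        self.premises = set(defaults)  # commitments good by default
        self.grounds = {}              # conclusion -> reasons offered for it

    def challenge(self, claim, reasons):
        # A challenged default becomes a conclusion resting on new
        # premises; the reasons enter the space as fresh defaults.
        self.premises.discard(claim)
        self.grounds[claim] = set(reasons)
        self.premises |= {r for r in reasons if r not in self.grounds}

    def is_entitled(self, claim):
        # NB: no guard against circular grounding here - a circle of
        # reasons would recurse forever, the trilemma's third horn.
        if claim in self.premises:
            return True  # a current default: good until challenged
        if claim in self.grounds:
            return all(self.is_entitled(r) for r in self.grounds[claim])
        return False

sr = SpaceOfReasons({"the assay is reliable", "the sample was fresh"})
sr.challenge("the assay is reliable",
             {"the assay was calibrated last week",
              "the calibration records are accurate"})
# Still entitled, but the entitlement now bottoms out in new defaults.
print(sr.is_entitled("the assay is reliable"))  # True
```

Which commitments sit in the `premises` set at any given moment is exactly the ‘matter of local social practice’ just mentioned – nothing in the structure itself privileges one stopping point over another.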

In order to elaborate on this latter point, I now want to compare and contrast the Brandomian ‘default, challenge, response’ model to the vision of discursive practice articulated by Barnes and Bloor in their defence of relativism (‘Relativism, rationalism and the sociology of knowledge’). I already discussed this paper in a Journal of Sociology article, co-authored with N. Pepperell.  I’m not 100% happy with our treatment of Barnes and Bloor in that article (obviously the fault here lies with me, not with NP), but I don’t want to divert this post into a lengthy relitigation of all those issues.  For now, I just want to focus on one specific area.

In their paper, then, Barnes and Bloor field a range of objections to their relativism from an ideal-typical rationalist.  First, they take up the objection that (contra relativism) our ideas are in fact determined by the way the world really is.  Interestingly (and in contrast to some other prominent figures in the strong programme broadly understood – e.g. Harry Collins, at least in some moods) Barnes and Bloor have no objection to the idea that the way the world really is should play a role in our accounts of why people believe the things they do.  Barnes and Bloor are relativists, but they are not anti-realists, not even ‘methodologically’.  In Brandomian terms, Barnes and Bloor are happy to incorporate ‘reliable differential responsive dispositions’ into their analytic apparatus.  That is to say, they are happy to say that (for example) the fact that the object of an experiment really did behave in such-and-such a way should be part of our account of why a given scientist believes what they believe about the object of the experiment.

But Barnes and Bloor insist that this can’t be where our account stops.  One reason for this is that (as they say) nature has always behaved the way it does.  In Barnes and Bloor’s words:

reality is, after all, a common factor in all the vastly different cognitive responses that men produce to it.  Being a common factor it is not a promising candidate to field as an explanation of that variation.

Moreover, we know from the history of science that working scientists can of course interpret the ‘same’ or similar experimental results in vastly different ways.  Barnes and Bloor give the example of Priestley and Lavoisier having very different interpretations of the same basic experimental data.  The fact that experimental results are interpreted in the way they are by the researchers in question therefore can’t simply be accounted for by the behaviour of the experimental object; it must also be explained by the researchers’ interpretive practices.  And those interpretive practices (Barnes and Bloor argue) are socially determined.

So Barnes and Bloor are not disputing the role of ‘reality’ in determining belief; they are arguing (and here, as often, they are more aligned with the logical empiricists than either group’s reputation would suggest) that reality underdetermines interpretation, and that the other relevant factor – the appropriate object of sociologists’ study – is social norms.  They then argue – and this is the crux of the paper – that there is no non-relativist way to ground that social-normative determination of interpretation.  And here I think we need to be careful to distinguish at least two different elements of Barnes and Bloor’s argument.

The first element of this argument is a fight with anti-relativists who believe a shared universal faculty of reason is a precondition of the intelligibility of communication, science, reason, and so forth.  Here Barnes and Bloor cite, and argue with, Hollis and Lukes.  Barnes and Bloor make the case, in essence, that logical principles like modus ponens are socially instituted rather than features of some essential core invariant faculty of reason.  As Barnes and Bloor see it, Hollis and Lukes and other rationalists are simply dogmatically insisting on a particular set of conventional practices as necessary features of human reason, without providing any compelling justification for their preferred norms beyond bluster.

I think this argument has a lot to recommend it.  But what I’m interested in here is the second, more general, dimension of Barnes and Bloor’s argument.  Here they argue that the rationalist in general ultimately cannot avoid dogmatically insisting on some category of proposition in which credibility and validity are fused.  In Barnes and Bloor’s words:

[the rationalist] will treat validity and credibility as one thing by finding a certain class of reasons that are alleged to carry their own credibility with them; they will be visible because they glow by their own light.

I love this passage; I think it provides a great, evocative articulation of the core critique of dogmatic rationalist foundationalism.  But I also think that this element of Barnes and Bloor’s argument is insufficiently attentive to the possibility of anti-foundationalist rationalisms of the kind articulated by Neurath.

Here the relevant question is: what does it mean to say that at some point in the rationalist’s argument, credibility and validity must fuse?  I think Barnes and Bloor mean to suggest that a dogmatically unjustified foundational premise must exist somewhere in the rationalist’s reasoning – and that the explanation for the rationalist treating that premise as foundational must be social.  But should we understand this foundation as aligned with actual philosophical foundationalism?  Or should we treat it as provisionally foundational in the way that is central to the ‘default-challenge-response’ model?

My claim here is that the ‘default-challenge-response’ model allows us to have ‘foundational’ premises for reasoning in a way that does not commit us to philosophical foundationalism.  My claim, moreover, is that there is potentially a large conceptual gap between philosophical anti-foundationalism and relativism.  Barnes and Bloor take themselves to be providing a set of arguments for relativism, but by my lights what they are really doing is providing a set of arguments that could lead to relativism, but could also lead to Neurathian anti-foundationalist rationalism.  (I further think that Barnes and Bloor themselves are influenced by Vienna Circle logical empiricism, and take themselves to be elaborating what they take to be its underlying relativist commitments, but I don’t think we need to follow them down this path.)

So – let’s assume we have followed some inferential chain down to its foundational premise.  This premise is treated as foundational by some local community, but there is no transcendental or metaphysical reason why this premise should be treated as foundational – the fact that it is treated as foundational is a matter of contingent sociological fact.  Have we now ‘relativised’ this premise?  My claim is: not necessarily.  The fact that (on the default-challenge-response model) this premise is currently, by default, foundational doesn’t mean that a challenge won’t bring forth reasons.  The sociologically and epistemologically relevant question is: what kind of reasons are they?

Here we switch levels again, back to the specific features of science as an institution (rather than ‘cognition as such’).  I think the claim advanced by anti-foundationalist scientific rationalists needs to be something like the following: what distinguishes science from other epistemic systems is the institutional-epistemic structure within which ‘default’ premises are embedded, such that users of those ‘default’ premises can legitimately assume that at the level of the system as a whole the premises are not ungrounded, but are rather empirically and rationally grounded by other components of the overall epistemic system.  There is a complex cognitive division of labour here, such that countless methodological and substantive claims serve as locally-unexamined premises for some members or moments of the system, but no premises are ‘foundational’ for the system as a whole.

This is (at least part of) my answer to the ‘demarcation problem’ (that is, to the question of what differentiates science from non-science).  My claim is that what picks out science as a uniquely reliable epistemic system is an institutional structure that makes this category of claim warranted.

Ok.  This is one of the core claims I want to advance in this blog post, so I guess I want to flag that here and take a short imaginary breather.  However, this claim immediately needs to be qualified, in at least two ways.

First up, a critic might query: are we really saying that no premises are ‘foundational’ for the system as a whole?  What about the premise that ‘nullius in verba’ is a desirable project in any sense in the first place?  Isn’t this simply a normative judgement – a question of world-view – that itself can’t possibly be ‘empirically or rationally’ grounded?  To this set of questions I think I basically, like the logical empiricists, just shrug my shoulders and say “sure, I guess that a lot of this kind of thing comes down to values at the end of the day.”  Perhaps this ‘admission’ is enough to place me in the ‘relativist’ camp.  I’m sure it would be in the eyes of many.  But I think it’s important to remember that – at least within a Brandomian framework – to talk of values is not to leave the space of reasons: for Brandom, all normative commitments are part of the game of giving and asking for reasons.  It’s true that one cannot give a scientific basis for the norms that undergird the scientific enterprise as a whole – but this is not the same thing as saying that no reasons can be given.  Anyway, maybe I’ll come back to this issue another time – there’s much more to say here, but I think it mostly falls outside the scope of this post.

A second challenge is not focussed on the ‘fundamental values’ that animate the scientific project, but on the idea that one can indeed legitimately take scientific institutions as warranting the kind of ‘epistemic delegation’ that treats so much of the received wisdom of science as locally taken-for-granted default background commitments.  And this objection in turn, I take it, as I’ve already briefly discussed, comes on a spectrum.  At one end of the spectrum is global contempt for the scientific project tout court.  But as we move along the spectrum, we reach more and more ‘local’ objections to specific features of the local scientific enterprise in question.  Does such-and-such a scientific claim or practice really merit being taken as an unproblematic default?  Often, clearly, the answer to this question is going to be “no” – indeed, it sometimes has to be “no”, on the anti-foundationalist account I’m endorsing, because the answer sometimes being “no” is how we get scientific progress, discovery, self-correction, and so on.  That “no” is the driving force of the scientific endeavour.

At this point, though, I want to introduce one more distinction – the Brandomian-Hegelian distinction between the retrospective versus prospective dimensions of reason.  When Weber writes that the “very meaning of scientific work” is that our current best science “will be surpassed”, he is pointing to a prospective dimension of deference to scientific institutions.  In other words, in deferring to the authority of science, we are not just deferring to the current authority of current scientific findings – we are also, and critically, deferring to the broader institutional process by which those findings will, very likely, be overturned or supplanted.  The form of epistemic delegation or authority involved here is quite complex.  We are deferring to the institution of science precisely because we take it to have the resources to supplant the specific claims to which we are at the same time locally deferring.  Here again the tension between the dynamic pluralistic dimension of science and the authority of specific, static, concrete claims or findings is in play.

Ok.  More caveats could be articulated, but we’ve now I think covered the main substantive points I wanted to make about what science ‘is’.  I now want, much more briefly, to apply some of these points to some specific bugbears that have been bothering me recently.  There are two.

The first ‘application’ is around what I see as ‘sceptical’ discourses.  Here I want to use the apparatus I’ve articulated above to draw a couple of distinctions.  As I’ve said ad infinitum now, there is a constitutive tension in scientific institutions between the fact that in principle everything is open to challenge, and the fact that in order for any progress to actually be made on anything, a huge amount needs to be ‘black-boxed’ as locally-unexamined premises.  Science as a whole is a way to manage this tension: it permits justified ‘black-boxing’ at the local level because, at the global level of the institution as a whole, everything remains contestable via the division of epistemic labour.  This division of labour can be diachronic as well as synchronic.  Thus even when a near-total consensus is reached in the present, this is because there is epistemic delegation to earlier generations of researchers who robustly debated and tested these conclusions so that we don’t have to.  Moreover, this delegation can be prospective – we can take a premise for granted on the assumption that in the future we will get around to assessing its legitimacy more robustly than we yet have.  All of this is how science works.

Now: what I want to object to, using this apparatus, is sceptical discourses that take locally-unexamined premises as evidence of anti-scientific thinking.  It’s easy to see where this idea comes from: science is meant to challenge presuppositions and take nothing for granted, and yet manifestly you have countless working scientists who are simply taking huge amounts of stuff on trust from authorities whose claims they haven’t bothered to independently assess, or whose work they haven’t bothered to master.  This is the antithesis of science (the reasoning goes)!  Therefore science is a fraud.

And my strong counter-claim is that this is just a fundamental misunderstanding of how science works.  This kind of ‘scepticism’ is the application of (often a fairly debased version of) Enlightenment Mark One epistemological reasoning to an epistemic institution that simply isn’t justifying its epistemic claims in this way.  That doesn’t mean that the scientific claims in question are right.  It may in fact turn out, in any given case, that the authorities on which scientists are relying are misguided, that the locally-unexamined premises are bad ones, and so forth.  Challenging such premises is (part of) the work of science.  But neither is it intrinsically irrational or unscientific to engage in the kind of epistemic delegation, the kind of deference to authority, that is here being criticised.  Epistemic delegation and deference are simply non-negotiable features of science (indeed, of reason) in general.

The characteristic sceptical move that I’m here objecting to, in other words, is an apparent belief that unless any given individual can trace back the chain of inferences to the comprehensive evidence-base that justifies their claims, those claims lack justification.  This is what’s (at least purportedly) going on (often, not always) when you see ‘sceptics’ of whatever kind demanding “a source for that claim” in online debates over (often relatively consensus) science.  The issue isn’t that it’s illegitimate to want claims to be sourced.  The issue is that it’s unrealistic to expect any given individual to be able to personally replicate, on demand, the inferential chains that ground the entire research enterprise in question.  By challenging individuals in this way, you are not ‘exposing’ the fact that they are making claims without evidence – rather, you are indicating that you don’t understand the epistemic basis for scientific authority.

That’s my first bugbear – which really just derives from observing, and sometimes participating in, too many tedious online debates with people who think they are being particularly rational by making these kinds of discursive moves.  But this is a quite petty bugbear.  The second point I want to make is perhaps slightly more meaningful.

This second point concerns my own research – and here I guess I want to get very slightly more autobiographical.  Whenever you talk about your past self’s errors I think it’s easy to overstate and simplify things, which I’m definitely going to do here – I didn’t in fact hold clean ‘ideal type’ positions of the kind I’m about to attribute to myself.  But still, I think I can discern in retrospect some kind of intellectual trajectory that roughly traverses the same ‘dialectic’ that I outlined towards the start of this post.  That is to say: I think when I was (let’s say) a teenager, one of the appeals of philosophy for me was something in the general ballpark of the ‘crude’ ‘Enlightenment Mark One’ idea of excavating through layers of unwarranted belief that one had inherited from one’s social environment, in order to find bases for belief that were more robust than simply accepting contingent tradition.  Then as I started actually studying philosophy I became pretty convinced – let’s say in my early twenties – that this was a pipe dream, and that you can’t get away from the contingently social determination of your categories.  This in turn I think led me to a more ‘critical-theoretic’ space, which was highly sceptical about philosophical rationalist claims.  And then, from that more critical space, I feel like I’ve slowly assembled the resources required for a more pragmatist and social-theoretic rationalism – thanks in no small part (obviously) to Brandom.  Again, I think I’m sort of warping things a bit to fit into this narrative, but there’s something to it, in terms of my own personal intellectual trajectory.

The point being, I guess, that I definitely feel like I understand the persuasive pull of what I would now characterise as two different categories of scepticism.  One category of scepticism seeks a socially-transcendent basis for the critique of the social determination of belief (and becomes a hyperbolic form of scepticism because it is in fact impossible to find such a basis).  Another category of scepticism relentlessly critiques claims to such a basis for rational judgement, on the grounds that all such bases are contingently socially determined, and therefore unreliable.  I take it that the broad philosophical orientation I’ve outlined above incorporates at least some of the strengths of both orientations, in the service of a more robust and reasonable rationalism and empiricism.

But one still encounters the ‘sceptical’ challenge – and one encounters it in two ways.  First, introspectively: I think most people with this kind of ‘philosophical’ orientation are nagged by worries about the basis for their beliefs; these worries are the motive for a lot of theoretical and scientific inquiry.  Second, though, one encounters these sceptical challenges from others.  And this is what I want to conclude this post by talking about.

One of the ways to think about rationalism (extensively criticised by Barnes and Bloor in their paper on rationalism and relativism) is to assume that there are shared fundamental commitments that are constitutive of sapience as such.  If this is your ultimate grounding for the reasonableness of our commitments, then the process of argument and persuasion can be understood as a practice of following inferences ‘upstream’ until one reaches commitments that nobody could possibly reasonably reject.

A social-pragmatist rationalism rejects this approach.  For the approach I am endorsing – exactly as Barnes and Bloor say – the commitments that we reach when we follow a chain of inference back to its self-evident premises are socially contingent.  They are locally unchallenged, but that does not mean that they are rationally unchallengeable – quite the reverse.  What you take to be self-evident is a function of the social norms that you contingently endorse – in large part due to the relevant recognitive community of individuals of which you are a member.  In other words, the establishment of ‘self-evident’ premises for reasoning is in significant part a process of socialisation: if a commitment is ‘self-evident’ or premiseless, this is a fact about your socialisation, or your social milieu, not a fact about the commitment.

Now, one of the implications of this understanding of how our epistemic world functions, is that we are typically inclined to an asymmetry about commitments.  Of course the commitments that I regard as self-evident really are self-evident.  On the other hand, the commitments that you regard as self-evident are manifestly not self-evident at all; on the contrary, I can see as clear as day that these are unexamined prejudices inculcated by your social environment, from which you have failed to free yourself.  Similarly, my reasoning moves in secure, robust steps, relying on only the most self-evidently legitimate inferences.  By contrast, your reasoning is constantly supported by implicit and yet dubious substantive commitments that you not only have failed to justify, but that you apparently fail even to recognise as commitments requiring defence.  This asymmetry is, of course, a structural feature of the fact that we have different background assumptions – different locally-foundational premises.  It is important not to mistake this difference in local default premises for a difference in rationality as such.

Now, the fact that we all have slightly different – and sometimes substantially different – locally-foundational premises for our reasoning means that when we encounter someone else’s arguments, we are often inclined to challenge some of what they have to say.  We take it that we see things clearly that they are confused about.  And there’s nothing wrong with this!  This is the process of asking for and giving reasons that makes us rational creatures in the first place!

And yet here we again encounter a version of the tension that has animated this entire post.  For, as I argued above, it is not just the challenging of default premises that permits us to inhabit the space of reasons, but also the creation of default premises. Without the rich backdrop of locally-unchallenged default commitments – both explicit and implicit – our reasoning processes wouldn’t be able to get off the ground at all.  In other words, we (or the traditions we inhabit and inherit) have to make decisions (whether deliberately or implicitly) about what premises will not be challenged within any given moment of the game of asking for and giving reasons.  Neurath’s sailors cannot remake the entire boat at once – they cannot remove the planks beneath their feet, even if they can choose where to stand.

In other words, part of the set of decisions we make in engaging in rational thought and discourse is precisely what commitments are not up for debate – at least here and now.  This is true of thought in general – but it is also true of scientific discourse.  This fact is the basis on which Polanyi can construct his two-stage account of science as a structure of authority-relations.  Admittance to the community of practising scientists is accomplished by a process of socialisation in which a set of shared community commitments are established.  Then, this having been accomplished, those commitments serve as the ground upon which individual scientists can stand, as they aim to dismantle and reconstruct some elements of the framework they have been socialised into.  As I discussed above, there are significant epistemic risks from the kind of gatekeeping associated with scientific community socialisation – but some establishment of default premises upon which reasoning can build is an essential precondition of establishing any community of epistemic practice.

From the perspective of those outside the community in question, however, this can easily look like a process of unreason at work.  Here, after all, in the process of socialisation, we are effectively engaged in the construction of an in-group out-group boundary, where those who refuse to accept the principles and commitments of the in-group are banished from its charmed circle.  Moreover, these mechanics of in-group and out-group membership really do frequently involve a process of unreason at work.  It is often a lot easier to respond to the ‘challenge’ moment of the ‘default-challenge-response’ model by simply insisting that those who do not accept a given premise are not welcome here, than it is to provide a substantive rationale.  And it’s easy to see how this dynamic can operate in the service of irrationalism – not irrationalism in the sense of “stepping outside the space of reasons” but in the sense of “providing bad reasons”.

But here’s my claim: the goal of the institution of science is to establish an overall institutional structure that allows a plurality of locally-unexamined commitments to serve as epistemic checks and balances against each other, while simultaneously permitting local sub-communities to fruitfully pursue lines of thought facilitated by the establishment of very substantial – or even very contentious – locally-taken-for-granted premises.  I’m here basically saying that Brandom’s ‘default-challenge-response’ model, plus Polanyi’s account of the republic of science, provides a more elaborated account of the way in which Neurath’s boat operates as a specifically scientific institutional structure and dynamic.  Moreover, I’m arguing that subcommunities of practice are critical to the dynamics of Polanyi’s account of scientific pluralism.

Here it helps, I think, to consider another element of the social character of reason: it really helps to think things through, if we have other people to bounce ideas off.  Of course, at some very abstract level, if we accept the Brandom-Hegel account of reason, reason is necessarily and intrinsically social: you can’t have reason at all without community.  But moving down levels of abstraction, and having gotten our rational, sapient recognitive practices off the ground in some sense, it also just practically helps to have people to talk to and think with on any given specific topic.  This is what a scientific discipline is, it’s what a subdiscipline is, it’s what a research programme is, it’s what a research team is, it’s what a collaboration is.  These are all ways in which people work together on the basis of shared default premises.  The point is that when people with similar default premises and commitments come together, they can build on those premises together in a way that is impossible for people with more widely divergent worldviews.  Here the ‘black-boxing’ of much of the debate that animates science in general allows the kind of focus on specific problem-spaces that often leads to scientific advance.  Without that ‘black-boxing’, I’m claiming, you wouldn’t be able to move far enough down the relevant inferential chains to get to novel findings.  If you’re constantly having to re-establish the rational and empirical bases of fundamental commitments, you don’t have the cognitive resources left over to follow the implications of those commitments.  This, I’m claiming, is why the kind of broad scepticism (“where’s your source for that?”) that I mentioned above is destructive of the ability to actually pursue a lot of scientific inquiry, if taken too seriously.

The pluralism that is constitutive of a well-functioning scientific institutional dynamic is a pluralism of these subcommunities of shared commitments.  In my PhD, and following Jason Potts, I called these communities “scientific innovation commons”.  But in my PhD I didn’t, in my view, adequately draw out the epistemic implications of this institutional structure – I’m trying to do better in this post.

So the model of science that I’m proposing here involves a pluralism of research subcommunities, each with their own local norms and commitments, which knit together into a cognitive division of labour.  If everything is working roughly as it should, each subcommunity serves as an epistemic check and balance on the others, resulting in a (diachronic) large-scale dynamic that can credibly claim, as a whole, to approximate the ‘rejection of tradition’ that early modern Enlightenment thinkers hoped and failed to achieve at the level of the individual scientist – precisely via the way in which this community as a whole constructs and transforms its own traditions.

If this pluralism is to function as an internal check and balance system, though, it needs to be genuine pluralism.  As Zollman’s formal opinion dynamics modelling illustrates, a community that too-quickly orients to consensus is an epistemically unreliable community.  And this in turn, I’m claiming, produces another apparently paradoxical result: there is – potentially – an epistemic virtue in local research subcommunities refusing to ‘update’ their presuppositions in the light of criticism from other subcommunities.  Obviously we don’t want everyone to be too fanatical or dogmatic.  But neither do we want everyone to rush too quickly towards consensus.  We want a diversity of research programmes each of which can explore the implications of their approach in some depth.  Only by permitting and facilitating this kind of ongoing pluralism are we (the community as a whole) able to reliably assess the strengths and weaknesses of these different research programmes.  
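
To make the Zollman point a little more concrete, here is a minimal sketch – in Python – of the kind of network-bandit model at issue.  This is emphatically not Zollman’s actual model: the function names, network sizes, and payoff parameters are all illustrative assumptions of mine.  The setup: agents choose between an ‘old’ method with a known success rate and a ‘new’ method whose success rate is unknown (but in fact slightly better), and they share their results with their network neighbours.

```python
import random

def run_trial(neighbours, p_new=0.6, p_old=0.5, rounds=300, pulls=5):
    """Toy Zollman-style model.  Each agent holds Beta(alpha, beta) beliefs
    about the 'new' method; the 'old' method's payoff (p_old) is known."""
    n = len(neighbours)
    # Heterogeneous priors: some agents start optimistic, some pessimistic.
    alpha = [random.uniform(1, 4) for _ in range(n)]
    beta = [random.uniform(1, 4) for _ in range(n)]
    for _ in range(rounds):
        results = []
        for i in range(n):
            # Myopic choice: try the new method only if you currently
            # believe it beats the old method's known success rate.
            if alpha[i] / (alpha[i] + beta[i]) > p_old:
                successes = sum(random.random() < p_new for _ in range(pulls))
                results.append((i, successes))
        # Each agent updates on their own results and their neighbours'.
        for i, successes in results:
            for j in range(n):
                if j == i or i in neighbours[j]:
                    alpha[j] += successes
                    beta[j] += pulls - successes
    # Fraction of agents who end up (correctly) favouring the new method.
    return sum(a / (a + b) > p_old for a, b in zip(alpha, beta)) / n

def complete(n):  # densely connected: everyone sees everyone's results
    return [set(range(n)) - {i} for i in range(n)]

def cycle(n):     # sparsely connected: everyone sees only two neighbours
    return [{(i - 1) % n, (i + 1) % n} for i in range(n)]

for name, make_net in [("complete", complete), ("cycle", cycle)]:
    score = sum(run_trial(make_net(10)) for _ in range(200)) / 200
    print(f"{name:>8} network: {score:.2f} average convergence on the better method")
```

The structural point the sketch gestures at is the one Zollman argues for formally: in the densely connected community, one unlucky early run of results propagates to everyone at once, and the whole community can prematurely abandon the better method; in the sparse network, diversity of belief persists for longer.  Whether a toy like this reproduces the effect quantitatively depends heavily on the parameter values – the qualitative moral is just that connectivity governs how quickly local bad luck becomes global consensus.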

And here the prospective dimension of the scientific process also becomes relevant.  Our research is future-oriented.  It is aimed towards some future collective process of assessment.  This is the point of the ‘conjectures and refutations’ model of science – we don’t need a rationale now for a given research programme; we simply need the possibility of a future evidence-base that will support (or refute) our hypotheses.  We can then set about the task of discovering whether that evidence-base will (or could) in fact exist.  Thus even highly speculative communities of research, only flimsily supported by established commitments, are a fully legitimate – indeed, essential – dimension of the dynamics of science as a whole.  Of course, if a research programme is unable, over the long term, to find rational or empirical justifications for its existence, then it slowly loses value within the pluralistic chorus of scientific debate.  But we wouldn’t want scientists to abandon a research programme at the first sign of trouble – some measure of perseverance, even on highly unpromising ground, is essential to the overall collective endeavour.

Ok.  But here we reach some of the tensions that I gestured at earlier.  For how is an individual scientist or researcher to behave, within this institutional framework?  I have made a case for strong scientific pluralism.  I have made the case that such pluralism requires perseverance from scientists and scientific subcommunities even in the face of epistemic adversity.  But how does an individual scientist’s decision-making fit within this framework?  Let’s say a team of researchers is pursuing a research programme.  They find unpromising results.  It seems to them, in the light of those results, that it is much more likely that an alternative research programme is the right one.  And yet, were they to abandon this current research programme, the result would be a significant reduction in the pluralism of this section of the scientific ecosystem.  Do the scientists in this research team have an epistemic obligation to pursue what they now see as the most credible lines of reasoning and inquiry, and abandon their current research project?  Is this what following the best evidence – apparently a core scientific norm – demands of them?  Or do they have an obligation to maintain scientific pluralism by doing the best they can by their current research programme, even though they have lost faith in it?  After all, a degree of epistemic pluralism is (I have argued) critical to the credibility of science as a whole.  And if they pursue this second course of action, at what point does it stop being a commendable commitment to scientific rigour – pursuing an unpromising line of inquiry for the sake of epistemic pluralism, meticulously eliminating unlikely possibilities, and on the off-chance that the programme turns out to be correct after all – and start being a dogmatic refusal to accept the best scientific evidence?

I don’t think there are really correct answers to these questions.  Individuals need to make their own decisions.  But I think these kinds of problems present themselves if you accept the understanding of science that I am recommending.

And now, finally, we come to the rather self-involved personal reflections that I was trying to get to with this line of thought.  As I was saying above, I feel like my own personal intellectual trajectory has been driven by some measure of the kind of ‘scepticism’ that I’m here expressing wariness about.  That is to say, the kind of scepticism that asks “how can we be sure that what we’re doing is even in the right ballpark here?”  I think this was the kind of worry that drew me into philosophy, and it’s also the kind of worry that pushed me away from philosophy (because too much philosophy seemed itself to be dogmatic, to me).  I think this kind of worry (“what if our basic approach and categories are just wrong?”) has sent me running around between different fields and subfields – philosophy, sociology, economics – in part because I didn’t want to just accept being socialised into such-and-such a set of handed-down disciplinary norms.  And I don’t think that impulse was exactly misguided, though I certainly would have benefited from applying myself a lot more along the way.

In any case, I feel I’ve been, over time, reasonably responsive to both this kind of introspective scepticism, and to the ‘external’ scepticism of people telling me that I’ve gotten it all wrong and I should be thinking about things in [such-and-such] terms instead.  And I feel I’ve learned a lot from those kinds of interactions.  Recently, however, I’ve found myself increasingly unwilling to take this kind of advice – to listen to people telling me that I’ve gotten things all wrong.  And I feel like this unwillingness comes from two distinct sources.

The first source is that at this point I’ve been reading and thinking about the areas that interest me for (let’s say) about twenty-five years.  In that time I’ve done a lot of thinking – and I feel like I’ve already given consideration to a lot of the kinds of objections and criticisms that people throw at me.  Increasingly, my reaction to being told that I haven’t considered [X] is not (as it once was) “oh, yeah, I should really spend some time reading or thinking about [X]”, but rather “yes I have”.

So that’s one consideration: one reason why I’m less inclined to put a lot of energy into responding to objections to my overall intellectual project.  But of course, as I discussed above, the fact is that there is always room for giving more thought to any given issue.  So it’s really not a very reliable or commendable attitude, to simply think to yourself, “no, I’ve already settled that”.

The important consideration, in my view, is a different one: the role of research programmes in the scientific epistemic system.  As I discussed at length above, there are two sides to intellectual progress: on the one hand, challenging taken-for-granted ‘default premises’; on the other hand, adopting premises as taken-for-granted defaults, in order to explore their implications.  And my view is that, in my intellectual life to date, I have spent a large proportion of my time doing the former, and it is now time to focus on the latter. 

In other words, I feel I have a research programme here.  That research programme is still much more inchoate and underdeveloped than I would want it to be, at my stage of life.  And of course the research programme may be flawed; its premises may be faulty; its goals may be misguided.  But I feel – whether rightly or wrongly is not for me to say – that at this point I’ve done enough in trying to establish reasonable default background premises.  Now I want to actually try to do something with the intellectual resources I’ve committed myself to.

All this is by way of saying that, at this point in my life, I regard it as basically an appropriate response to those questioning the premises of my intellectual project to simply say: well, this is my research programme; if you think it is flawed, there are many other research programmes out there which may be more to your liking.  After all, as I’ve argued at inordinate length in this post, science is a pluralistic epistemic system.  Part of what makes science epistemically reliable is precisely the fact that it has lots of people running around in it doing misguided things, pursuing misguided research programmes.  I think it’s clear enough that I’m at the more crankish end of the research spectrum: I’m not affiliated with any institution; I publish in academic venues infrequently; I work through my intellectual interests in rambling, loose, overly personal blog posts like this one; and so on and so forth.  But that’s ok.  Science, in the expansive sense, has space for all of this, and much more.  My goal on this blog, and in my work in general, as I see it, is now to pursue the lines of thought I’ve committed myself to.  We’ll see how much I can get out of them, in whatever time on this earth I have left.

Conflictual spontaneous order

September 17, 2022

In the last post I ended up back where I was in early August – talking about the strengths and weaknesses of the first volume of Hayek’s ‘Law, Legislation and Liberty’ (‘Rules and order’).  I made the claim that the late Hayek’s metatheoretical position is strongly compatible with the ‘empirical dialectical’ approach that I’ve been advocating on the blog of late.  I also said that I had substantial disagreements with Hayek once we move down a level of abstraction to the social theory proper.  That’s what this post is about.

In my previous post on ‘Rules and order’ I listed four ways in which I disagree with Hayek’s theoretical position in that work.  In this short post I want to meander around in the vicinity of the first two.  These are: 1) conflict within emergent orders; 2) non-homogeneity of emergent orders.  These are obviously very closely related points, so here I really just want to associate around them a little.

Start with a remark of James Buchanan’s.  In his very good essay ‘Law and the Invisible Hand’ (in ‘Freedom in Constitutional Contract’) Buchanan criticises Hayek’s late theory of law from a social-contractarian perspective.  Buchanan – like Hayek – distinguishes between orders that are made versus those that evolve: constructed versus spontaneous orders.  As is common, Buchanan uses Smith’s ‘invisible hand’ of the market as an example of an efficient and normatively desirable spontaneous order.  But Buchanan also notes that the distinction between constructed versus spontaneous orders (taxis versus cosmos, in Hayek’s vocabulary) is itself normatively neutral.  In Buchanan’s words:

“invisible-hand explanation” may be as applicable to “orders” that are clearly recognized to be undesirable as to those that are recognized to be desirable. (28)

Or again:

The principle of spontaneous order, as such, is fully neutral in this respect.  It need not be exclusively or even primarily limited to explanations of unplanned and unintended outcomes that are socially efficient. (30)

Buchanan uses this (very correct) point as his basis for the critique of the late Hayek’s work on law as spontaneous order.  As Buchanan writes:

In his specific attribution of invisible-hand characteristics to the evolution of legal institutions, Hayek seems to have failed to separate properly the positive and the normative implications of the principle. (31)

Again:

The forces of social evolution alone contain within their workings no guarantee that socially efficient results will emerge over time. (31)

I think this is an absolutely correct and frankly dispositive argument against a large amount of the discussion of ‘spontaneous order’ within the Austrian tradition.  It is much too common within that tradition to suggest that because an order is spontaneous, it must be good.  Obviously Buchanan agrees with many of Hayek’s substantive political-economic claims – but from Buchanan’s perspective a straightforward deference to emergent spontaneous order is a wholly inadequate theoretical basis for grounding a normative stance.  I think he’s absolutely right about all this.

So this is the first point I want to make in this post – the order of a spontaneous order may be good or bad, and there is no reason to assume the quality of spontaneous emergence has any specific normative valence.

Next I want to make a slightly different point, as follows: Just because a social order is a spontaneously emergent property of many individual actions none of which had this order as their purpose, doesn’t mean that the actions themselves are good as actions.  This may seem like a trivial point, but it isn’t.  It is common, in my view, especially but by no means exclusively within the Austrian tradition, to treat spontaneous order as an emergent result of voluntary actions.  The core idea motivating this move is that people exercising their free rights to engage in whatever social action they please can produce a spontaneous order even if that order was no part of any individual’s intent.  And this core idea is correct!  But you cannot reverse the order of explanation here – you cannot say that because an order has emerged without being any individual’s intent, the actions that generate it must be free or good.  Of course, stated like that, this point is obvious.  But I believe this kind of slippage is actually quite common within a lot of ‘spontaneous order’ literature.  So it’s important to make this point explicitly.

So – these are two reasons to reject a too-rosy view of spontaneous order.  First: the order that is emergently produced as a spontaneous order may in fact be a bad order.  Second: the actions that produce a spontaneous order may also be bad actions – coercive, cruel, violent, etc.  We need to attend to the potentially highly negative aspects of spontaneous order at both the macro and the micro level.

The third point I want to make is about the homogeneity of a specifically normative order.  When Hayek writes about the emergence of a spontaneous legal order – the norms that govern a society emerging out of social practice in a way that can subsequently be codified via common law institutions – there is a clear tacit premise that we are talking about one unitary emergent legal order.  In, for example, the UK common law system – which is Hayek’s paradigmatic case of desirable emergent law – Hayek is clearly thinking that, to a reasonable first approximation, a single shared set of norms is governing community practice, such that common law judges can aspire to codify the norms already tacitly informing practice in a relatively neutral way.

But what if this isn’t the case?  What if, in fact, there are multiple different emergent norms within the same broad community network?  It seems to me that this is, in fact, typically the case.  There is not one single emergent normative order in most large-scale communities.  Rather there are multiple emergent normative orders that themselves complexly interact as part of a still-larger complex normative system.  If this is the case, then participants in the large-scale normative system have to not only correctly identify the norms that have emerged out of that system – they also and in the first instance have to choose between different emergent normative orders that co-exist within the same complex community structure.  And this (if our social-theoretic, pragmatist understanding of normativity is correct) is itself a choice about how individual social actors will situate themselves within this complex system.

This is the core point I want to make in this post.  Over and above the fact that a) spontaneous orders may be bad, and b) the practices that produce spontaneous orders may also be bad, I want to emphasise that c) we can never assume that there is a single unitary normative order that has emerged from a given complex social system.  It is very possible that the same complex social system can and does support multiple incompatible emergent normative systems or frameworks.  I would claim that this almost always is indeed the case.  And I believe this possibility throws a spanner deep into the works of the theoretical (rather than metatheoretical) dimension of Hayek’s late writings on law and constitutionalism.

So.  The existence of multiple conflictual normative orders as emergent products of the same complex social system represents a fundamental challenge to the late Hayekian idea that law can be legitimately grounded in the (single, unitary) emergent normative order of our society.  But this idea also presents an alternative set of positive theoretical resources for thinking about socially-produced normativity.  I’ll expand on that thought in the next post in this series.

I recently read Don Lavoie’s ‘Rivalry and central planning’ – an account of the ‘socialist calculation debate’ which I can’t recommend highly enough.  Lavoie is a partisan – his goal is to present a ‘revisionist’ account of the debate which makes the case for the Austrian side.  But it’s also simply an excellent piece of ‘internalist’ intellectual history.  Moreover, to my mind Lavoie’s reconstruction of the Austrian arguments is much more clearly articulated than any of the debate’s ‘primary texts’, and the book is well worth engaging with on that basis.

The core of the calculation debate concerns, of course, the feasibility of socialist or communist central planning.  The debate is multi-faceted and it’s not the goal of this post to even begin to attempt to summarise it, but the postage-stamp-sized version of the Austrian argument is that central planning isn’t going to work well because of a set of ‘knowledge problems’: local knowledge, tacit knowledge, and – especially – ineradicable uncertainty mean that central planners simply don’t have the categories of information required to engage in efficient economic planning.  Therefore, the argument goes, the Marxist goal of rational central planning is a pipe dream.  By contrast, the Austrians argue, market-based ‘spontaneous order’ – or ‘catallaxy’ – can be responsive to local knowledge, tacit knowledge, and the forms of unexpected discovery associated with ineradicable uncertainty, in a way that central planning never can be.  Therefore markets are better than planning.

Along the way in making this argument, Lavoie summarises two elements of Marx’s work.  First, Lavoie argues that Marx is committed to a naive concept of planning, which hasn’t reckoned with the very serious obstacles to ‘rational’ central planning.  I think this is a very fair critique – though again I don’t want to get into this side of things in this post.  Second, Lavoie gives an excellent summary of one dimension of Marx’s account of capitalism: the centrality of uncertainty and disequilibrium to market dynamics.  

It is this second element of Lavoie’s summary of Marx’s argument that I want to focus on in this post.  Interestingly – and in my view correctly – Lavoie argues that this dimension of Marx’s argument is a point of commonality between Marx and the Austrians.  Lavoie makes this argument by contrasting this position (shared by Marx and the Austrians) with two rival understandings of market dynamics.  On the one hand, there is the position that sees capitalist markets as simply chaos (a view that Lavoie argues Hayek wrongly attributes to Marx).  This is wrong – markets are not chaos – rather they are a form of ‘spontaneous order’.  On the other hand, there is the position that capitalist markets can in principle attain perfect efficiency (a view expressed by, for example, the first fundamental theorem of welfare economics).  This is also wrong, because of the ‘knowledge problem’ associated with ineradicable uncertainty.  It is impossible for markets to attain perfect efficiency, even in principle, because one of the functions of markets is to discover information that is not and cannot be known to any of the market participants when those market participants take the actions that will, ultimately, lead to the discovery of the information.  In Hayek’s phrase, market competition is a ‘discovery procedure’ – and precisely because it is a discovery procedure market actors can in principle never possess the kind of perfect knowledge required for fully efficient coordination.  For the Austrians, in other words, a large part of the value of markets lies in their failures of coordination, in their constantly renewed moments of disequilibrium, because such moments of disequilibrium are a necessary precondition of the production of the knowledge that can then be disseminated through the price system.  (Indeed, the workings of the price mechanism are themselves one of the mechanisms via which such knowledge is discovered.) I’m being very telegraphic here – Lavoie spells all of this out in much greater detail and one day I would like to too – but that’s the general idea.

One strand of Lavoie’s argument in the second chapter of ‘Rivalry and central planning’, then, is that Marx: a) understands this dimension of capitalism very well; b) regards this dimension of capitalism as a flaw rather than as a virtue of the system; c) believes that this feature of capitalist markets can feasibly be replaced with a form of central planning that would not exhibit disequilibrium dynamics; and d) is wrong about this.

Now, I think Lavoie is right to say that Marx understands this element of capitalism very well.  Ironically, this element of Marx’s argument is frequently missed by both Marx’s critics and his defenders.  Marx’s critics frequently don’t understand how sophisticated and developed Marx’s understanding of market dynamics is.  Of course, Marx doesn’t use the Austrian term of art ‘catallactics’, but Marx in ‘Capital’ is definitely giving an account of a complex spontaneous order that operates via constantly renewed disequilibrium.  Indeed, this element of Marx’s account of capitalism deeply informs Schumpeter’s concept of ‘creative destruction’.  At the same time, many Marxists are also indifferent to this element of Marx’s argument.  Perhaps a bit provocatively, I think you could give a reasonable first-pass typology of a range of strands of recent Marxist theory in terms of what they miss about this element of ‘Capital’.  On the one hand, there are forms of ‘political Marxism’ which see Marx’s contribution as an emphasis on class conflict – either at the micro level of the site of production, or at the national level of ruling capitalist class versus the proletariat, or at the international level of core versus periphery.  Of course all of these forms of class conflict are indeed essential to Marx’s account of capitalism – but it is easy for accounts of Marx’s argument that emphasise these issues to miss the elements of Marx’s analysis that do not focus on any of these forms of conflict, but rather on the ‘spontaneous order’ that emerges from dispersed social action.  On the other hand, there are forms of Marxism that make central use of categories which are presented in Marx’s analysis as ‘emergent’ phenomena – and yet ‘reify’ such categories in a way that severs them from the explanatory apparatus developed in ‘Capital’.  In my view quite a lot of recent Hegelian and ‘value form’ Marxism can be understood in this way – that is, as treating what are for Marx emergent categories in ways that render them analytically opaque by losing track of the mechanisms of their emergence.  In this sense, I think it’s important to see that ‘Capital’ is a ‘microfounded’ account of large-scale emergent phenomena, and looking exclusively at either ‘side’ of that dichotomy (either just the microfoundations or just the large-scale categories) will give an unrepresentative account of what Marx is doing.  (As always, I need to flag here how much of my own understanding of Marx’s argument is informed by N. Pepperell’s work.)

On this side of things, then, I think Lavoie is exactly right in his re-presentation of Marx’s analysis of capitalism.  Moreover, I think Lavoie is right to say that Marx thinks the disequilibrium dynamics he analyses are bad qua disequilibrium dynamics.  For example, one of the many dimensions of capitalism that Marx is interested in is large-scale economic crisis – financial system meltdown, the forms of underemployment of resources associated with recession and depression, boom and bust cycles, etc. – and of course these elements of capitalist dynamics are intimately connected to the constantly renewed disequilibrium and speculative uncertainty that both Marx and the Austrians agree are core to capitalism as a system.  For Lavoie, Marx thinks that this kind of disequilibrium process can be contrasted with rational central planning which would not exhibit these attributes, and rational central planning is to be preferred on this basis.

So, I think Lavoie is right about all of this.  I also think Lavoie is right that Marx is severely underestimating the difficulties associated with ‘rational’ planning.  So what’s the problem?  Isn’t Lavoie – and, by extension, the broader Austrian critique of Marx – just right, full stop, by my lights?

Well, in a sense, yes.  But here’s the issue.  In Lavoie’s discussion of Marx, he foregrounds Marx’s dissatisfaction with the disequilibrium dimensions of capitalism, and their consequences.  As I say, I think this is indeed a central element of Marx’s critique.  But I think it’s important to remember that there are other dimensions of Marx’s critique of capitalism.  In particular, one of the most central dimensions of Marx’s critique of capitalism is that it is oppressive.  And I think it is important to distinguish two different senses in which capitalism is oppressive, for Marx.

Here I am shifting from discussing Lavoie’s book – which is quite narrowly focused on the specifics of the calculation debate – to discussing the broader debate between the Austrians and Marx.  That is, I’m giving myself permission to paint with broader brush strokes.  Broadly speaking, then, I think there are two key elements of the Austrian enthusiasm for market catallactics which Marx’s analysis challenges, quite distinct from the critique of disequilibrium dynamics in general.

First, there is the problem of power and direct oppression.  I think it’s fair to say that one of the reasons that Austrian economists (and many other economists more broadly) often value markets as freedom-enhancing is the idea that market exchanges are voluntary, and in this respect differ from commands laid down by a violence-monopolising state.  One of the things that Marx relentlessly emphasises in ‘Capital’ is the power relations that run fractally through the apparently ‘voluntary’ exchanges of the market – in particular, but by no means limited to, the exchanges associated with labour.  For Marx, the pro-capitalist emphasis on voluntary exchange is frequently simply engaged in denialism about the many forms of coercion that can be and are encountered in ‘market society’.  This is the element of Marx that contemporary ‘political Marxism’ emphasises.

So that’s one problem with the Austrian position: are we sure that things are so free out there, really?  This problem is, as it were, at the micro level – it concerns the kinds of interactions that are going on all the time in capitalist society.

The second problem is at the macro level – that is, at the level of emergent spontaneous order, or ‘catallaxy’.  Here’s the problem: granted that market society is a catallaxy in which a spontaneous order emerges as the product of human action but not human design, what actually is the spontaneous order?  Is it good?  Or is it oppressive?

This, in my view, is where the large-scale elements of Marx’s argument in ‘Capital’ start to bite – because Marx in that work has an extremely involved account of the spontaneous order of capitalist society, built up from a micro-level analysis of a very large number of different social practices and institutions that comprise capitalist society.  Again, my goal here isn’t to summarise ‘Capital’, but Marx’s overarching claim is that, yes, capitalism represents a ‘catallaxy’, but it is a bad catallaxy: the emergent patterns of capitalist society have an oppressive, not a liberatory, impact on a large proportion of capitalism’s inhabitants.  The invisible hand of the market is not beneficent, even metaphorically; it is drenched with blood.

Now, this obviously isn’t to say that we need to accept this element of Marx’s argument (which, again, I am not even attempting to summarise here – I am simply gesturing at Marx’s conclusions).  But it is important to recognise that this element of Marx’s argument is not a rejection of ‘catallactics’ qua catallactics.  The claim is not simply that spontaneous order is bad because it is spontaneous, and because it exhibits the forms of inefficiency that are intrinsically associated with any ongoingly-evolving spontaneous order.  Rather, Marx’s critique is directed at a specific spontaneous order – the spontaneous order of the capitalist society he is examining.  Marx’s claim in ‘Capital’ is that this spontaneous order is bad – and this argument cannot be rebutted simply by making a case for catallactics against planning.  We need to engage with the specifics.  

This, I think, is where the critique I made a few posts ago about Austrian economics’ inconsistent application of the principle of epistemic limits comes into play.  My view is that Austrian economics wants to make two qualitatively different categories of argument.  One is that the actually-existing spontaneous order of capitalism is good not bad, that the invisible hand is more to our benefit than to our detriment, etc.  This is a normative social-scientific claim, which relies on us being able to analyse and understand in quite some detail the structure and dynamics of both capitalism and alternative political-economic systems.  The other category of claim Austrian economics wants to make is that our knowledge, understanding, and ability to act in ways that have the consequences we desire, are all so limited that we had better leave well alone in the face of spontaneous order – essentially ‘Chesterton’s fence’ at the level of large-scale political economy.  But it seems to me that these two positions are in significant tension.  If our understanding is so limited, how are we in a position to adequately assess the virtues of capitalist spontaneous order?  On the other hand, if we claim to understand capitalist catallactics well enough to make a dispositive case for capitalism’s virtues, what has become of our epistemic humility?  It seems to me that Austrian economics moves back and forth between these two epistemological poles of its argument, and that there is a tension – even, perhaps, at times, a convenient double-standard – in this movement.

So, let’s say we grant (as I think we should) that Marx is being naive in his imagining of a form of rational economic planning that could replace the spontaneous order of capitalism.  The claim I’m making is that this doesn’t in itself dispose of Marx’s broader arguments, because there are two further scenarios that need to be considered.  First: the possibility that the specific spontaneous order of capitalist society is sufficiently awful that forms of planning – even with all their inefficiencies and oppressions – are nevertheless an improvement.  Second: the possibility that other spontaneous orders are available – that we can transform our practices in ways that replace bad catallactics with good catallactics, or at least worse with better.  Moreover, there is arguably significant overlap between these scenarios, because one of the surprising claims underlying the Austrian critique of planning is that the institutions of a planned economy must themselves in practice be a complex system which does not lie under any individual’s control, if they are to function in the ways they often do in practice.  Thus there is a startling moment late in Lavoie’s book in which he argues that the USSR under Stalin is a good example of catallaxy:

although the Stalinist economy ‘professes to be planned,’ to use Hayek’s phrase, it in fact relies on the outcome of the clash among rivalrous, decentralised decision-makers – that is, it is anarchically rather than consciously organized. (155)

I think this view has a lot to recommend it – but in my view it also risks wreaking havoc with a lot of other Austrian arguments.  For if Stalin’s USSR and US market society are both examples of catallaxy, then it’s unclear the extent to which the categories ‘catallaxy versus planning’ can get a purchase on the relevant comparative institutional question.

Perhaps this seems like a facile debating point – and perhaps it is.  At the very least, this issue merits a lot more time and care than I’m giving it here. But I think this problem nevertheless captures something, which is that there are countless possible spontaneous orders.  Almost all of those spontaneous orders contain some degree of planning.  It’s unclear to me, then, how a general emphasis on catallaxy can guide us in choosing which actions we wish to take, in order to influence, in whatever ways, the specific nature of the spontaneous order we inhabit.  One response to this problem is, of course, full stoicism, or quietism.  But if we reject that route – as all participants in the calculation debate have, to some extent – then I see no real alternative to wrestling with concrete social-scientific questions of political economy.  The level of abstraction at which the socialist calculation debate is carried out cannot in itself be an adequate guide to political-economic action.  Which, in fairness, I think all the participants, on both sides, already knew – but that’s all I’ve got to say for now.

In his 1954 lecture ‘What does the economist economise?’, Dennis Robertson writes:

There exists in every human breast an inevitable state of tension between the aggressive and acquisitive instincts and the instincts of benevolence and self-sacrifice. It is for the preacher, lay or clerical, to inculcate the ultimate duty of subordinating the former to the latter. It is the humbler, and often the invidious, role of the economist to help, so far as he can, in reducing the preacher’s task to manageable dimensions. It is his function to emit a warning bark if he sees courses of action being advocated or pursued which will increase unnecessarily the inevitable tension between self-interest and public duty; and to wag his tail in approval of courses of action which will tend to keep the tension low and tolerable.

This passage is approvingly quoted in Part One of Buchanan and Tullock’s ‘The calculus of consent’. And this basic idea informs much of public choice theory – a branch of economics and political science that uses tools often associated with microeconomics to analyse political decision-making. Slightly more specifically, public choice theory often focuses on the ways in which political decision-makers’ individual interests and incentive structures influence their policy-making, frequently to the detriment of ‘the public good’. In Buchanan’s words, in his 1986 Nobel lecture:

Economists should cease proffering policy advice as if they were employed by a benevolent despot, and they should look to the structure within which political decisions are made.

As Robertson says, the idea here is not that altruistic acts are in some way incompatible with human nature; it is, rather, that an institutional structure that heavily relies on altruistic acts for its ongoing stability is likely to be more fragile, all else equal, than an institution that accommodates less noble motives as a major component of its day-to-day functioning. Acts of heroism, kindness, self-sacrifice, selflessness – these are, contrary to more pessimistic views of ‘human nature’, extremely widespread. But a political-economic institution that relies upon these facets of human nature for its day-to-day reproduction, and that will quickly fall apart in their absence – such an institution is at constant risk of either collapse, or transformation into an institution that does accommodate less noble elements of human behaviour, perhaps to the detriment of its intended or apparent goals.

This ‘pessimistic’ public choice vision of political-economic institutions has often not found favour on the left. Leftist critics of public choice theory – or of the broader liberal tradition of which it is a part – tend to object both to its methodological individualism, and to the kind of ‘human nature’ that is tacitly or overtly ascribed to the individuals it considers. For many leftists, furthermore, the public choice approach to political economy is less an analysis of the pitfalls of collective action than it is an attempt to undermine or attack successful collective action, in the service of right-wing, anti-statist interests and policies. From this left perspective, public choice theorists attempt to emphasise the ways in which institutions of collective action are liable to fail because public choice theorists want such institutions to fail: by arguing that the successful collective provision of social goods is difficult or impossible, and that apparently successful collective action is really a mask for individual self-interest, public choice theorists serve the interests of those opposed to emancipatory collective action.

There is much to be said for this left critique of public choice theory. Public choice theory has, indeed, typically emerged from and aligned itself with the right of the political spectrum, and sought to provide intellectual resources and arguments for those who wish to greatly reduce the size of the state and the scope of democratic or collective social decision-making. It is, primarily, a conservative school of thought, and much of the public choice tradition cannot usefully be interpreted unless its analysis is seen as informed and shaped by conservative political commitments.

But should the tools of public choice theory be exclusively the property of the right? Does it benefit the left for this to be the case? In my view, the answer to these questions is ‘no’, and a ‘public choice theory of the left’ is a worthwhile project, no matter our views on ‘actually existing public choice theory’.

Why is this so? First of all, analytically speaking, there is a lot of potential common ground between public choice theory and traditional left critical analysis: the capture of powerful institutions by special interest groups, and the use of power to advance the interests of those who hold it as against the broader public good – these are not themes entirely alien to left analysis. Public choice approaches should be perfectly capable of being used for left critique.

Secondly, though, the normative public choice critique of would-be emancipatory collective action also carries weight: the left ought to reckon with this category of critique of its own projects and institutions. The public choice suspicion is that institutions – paradigmatically state institutions – that are intended to serve the common good have a tendency to serve instead the interests of those who wield power within them. If left politics aspires to create institutions that are not disastrously vulnerable to this phenomenon, it needs to reckon with this risk and this critique. Moreover, it needs (I would argue) to reckon with this critique in a way that does not appeal to unrealistically utopian claims about long-term selfless action on the part of key social actors.

Perhaps the paradigmatic case here is Soviet communism. For many critics of the USSR, the Bolshevik project was intrinsically flawed because the institutions it proposed and implemented in the name of emancipation were always likely to result instead in state power serving the interests of a governing elite rather than the broader citizenry. Of course, there are many on the left who reject this analysis. But there are also many on the left – including me – who agree that Soviet-style communism was in practice a novel form of domination and oppression rather than a fundamentally emancipatory project. And this judgement raises the question of how to evaluate leftist transformative proposals, to ensure that would-be emancipatory institutions are likely to genuinely be emancipatory.

In my post on Erik Olin Wright’s ‘Envisioning Real Utopias’, I discussed one leftist response to this problem: Wright’s centring of ‘social power’ (as against state power) as the ‘true north’ that should guide ‘the socialist compass’. I argued, against Wright, that there is in fact no reason to believe that ‘social power’ is intrinsically more emancipatory than ‘state power’ or indeed ‘market power’ – that we need more fine-grained criteria for evaluating political-economic institutional proposals, to assess whether these proposals are likely to move us in a more or less emancipatory direction.

The insight from Robertson with which I started this post, I believe, offers one such useful criterion (albeit at a very high level of abstraction). As Robertson writes, we can distinguish between, on the one hand, institutions that, for their emancipatory functioning, require their members persistently to navigate a sharp tension between their own personal interests and the demands of the ‘public good’, and, on the other hand, institutions that reduce the tension between self-interest and public duty to a “low and tolerable” level. Institutions of the latter sort are, all else equal, more likely to be sustainable. The task for leftists is to construct institutions that are emancipatory in their outcomes and processes, while also exhibiting this feature.

In the jargon of game theory, this kind of institution-design challenge is the problem of ‘incentive compatibility’, studied under the heading of mechanism design. That is to say: when we construct political-economic institutions, we want to construct them in such a way that the incentives of the individuals within them are aligned with the tasks we would want those individuals to fulfil. In the maxim of many introductory economics courses: “incentives matter”.
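To make the idea a little more concrete, here is a minimal sketch – my own toy illustration, not anything drawn from Robertson or the public choice literature, with all names and numbers invented for the purpose – of a two-player public goods game in which contributing is socially optimal but individually dominated, and of how a per-contribution transfer can realign private incentives with the public good:

```python
# A toy public goods game, to illustrate 'incentive compatibility'.
# All parameters are illustrative inventions, not from the post.

N = 2        # number of players
R = 1.5      # total return per unit contributed (1 < R < N)
COST = 1.0   # private cost of contributing one unit

def payoff(me, other, subsidy=0.0):
    """Payoff to a player contributing `me` (0 or 1) when the other
    player contributes `other`. Contributions are scaled by R and
    shared equally; `subsidy` is a per-contribution transfer used
    to realign private incentives with the public good."""
    pot = R * (me + other)
    return pot / N - COST * me + subsidy * me

def contributing_is_dominant(subsidy):
    """Is contributing a (weakly) dominant strategy?"""
    return all(payoff(1, other, subsidy) >= payoff(0, other, subsidy)
               for other in (0, 1))

# Institution relying on 'altruism': no subsidy. Contributing is
# dominated (each unit costs me 1 but returns me only R/N = 0.75),
# even though total payoffs are maximised when everyone contributes.
print(contributing_is_dominant(0.0))   # False

# Incentive-compatible variant: any subsidy above the net private
# loss from contributing (COST - R/N = 0.25) makes contribution a
# dominant strategy, so self-interest and public duty no longer clash.
print(contributing_is_dominant(0.3))   # True
```

The point of the toy model is just the point made in the prose: the first ‘institution’ survives only if its members persistently act against their own payoffs, while the second reduces the tension between self-interest and public duty to (in Robertson’s phrase) a low and tolerable level.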

This is a lesson that should be applicable across a broad range of categories of institutions. It should not be restricted to the political projects of the right, or to the critique of the left. And the left, I think, needs to get better at thinking about institutions in these terms. Paying closer attention to public choice theory is perhaps one route via which that could be accomplished.

I’ve talked on this blog before about three different concepts of liberty: negative liberty, in the sense of action unconstrained by others’ coercion; capabilities liberty, in the sense of possessing the material and social resources and capacities required to make use of one’s negative liberty; and positive liberty, in the sense of active participation in self-governance.

When I was taught political philosophy at an undergraduate level, I remember a lot of focus on liberty versus equality, with the idea that there was some trade-off between the two. Obviously one can value equality for itself – but I tend now to think that equality, at least in the sense of material equality, is mostly a derivative political virtue. The main reason to value material equality, and the kinds of redistributive politics associated with it, is the impact of such policies on capabilities and positive liberty. Material redistribution increases capabilities liberty by directly increasing people’s material and social capabilities – destitution is a form of unfreedom, and redistributive policy therefore increases liberty in at least this sense. Moreover, at the other end of the material wealth spectrum, extremely high levels of wealth can be transformed into political power and influence, so reducing wealth inequality also reduces the inequality in forms of political voice and influence associated with wealth – which is in turn likely to increase the positive liberty of the non-wealthy. So: the major virtues of this kind of egalitarian policy can be derived from principles of liberty – and I think this is often a better way to think about the normative or political or ethical warrant for such policies than simply to value equality itself.

Similarly, I remember a lot of attention in my introductory political philosophy classes being devoted to principles of political legitimacy, which, as I recall, were more often than not understood in democratic terms: a governance system only has legitimacy if it enjoys, in some sense, the endorsement of the governed. Here, again, the principle of ‘positive liberty’ seems very similar indeed – so it seems like a lot of issues in normative political theory can ‘drop out’ of these basic ideas of liberty.

OK. So – if we are thinking about principles of institution-design in these terms, we are thinking in terms of trade-offs. We need to think about trade-offs between individuals: is it worth constraining my negative liberty to engage in some action, if that action would itself constrain the negative liberty of others? We also need to think about trade-offs between categories of liberty: is it worth risking a loss of negative liberty to make a gain in capabilities liberty, or vice versa? These two forms of trade-off seem to capture a lot – obviously by no means all, but a lot – of the normative problems we confront when thinking about political and political-economic institution design.

Continuing the institution-design thread on the blog, which I expect to be the dominant focus here for years to come…

I’m currently working through [Using the phrase “working through” is a trick I’ve picked up to make it sound like I’m doing something fancier than “reading”] Erik Olin Wright’s ‘Envisioning Real Utopias’, since the project I’m pursuing here seems to broadly fit within or alongside Wright’s. Wright characterises his work as an example of ‘emancipatory social science’, which he says in turn comprises three main tasks:

elaborating a systematic diagnosis and critique of the world as it exists; envisioning viable alternatives; and understanding the obstacles, possibilities and dilemmas of transformation.

Moreover, although here Wright categorises ‘diagnosis and critique’ as one task, this can of course be broken down into very different component parts:

To describe a social arrangement as generating ‘harms’ is to infuse analysis with a moral judgement. Behind every emancipatory theory, therefore, there is an implicit theory of justice, some conception of what conditions would have to be met before the institutions of a society could be deemed just.

In this post I just want to focus (pretty superficially) on the relationship between this kind of political ideal – whether understood as a theory of justice or some other kind of normative framework – and an institutional proposal.

We evaluate institutions in terms of whether they realise our political ideals, so debates about which institutions we should adopt always play out in at least two registers: debates about what ideals those institutions should try to realise, and debates about how they can best realise those ideals. These two debates intertwine. It is possible to bring together a coalition of very different political ideals under a shared institutional goal – and, conversely, for those who share an ideal to divide over the institutions meant to realise it. It is also possible for our institutional goals to modify our political ideals.

As any very long-term readers of this blog, if such there be, may remember, I spent considerable time some years ago on the work of the analytic philosopher Robert Brandom, and in particular on Brandom’s normative pragmatics. I don’t want to revisit that fairly involved terrain here, but I want to highlight that the relationship between norms and practice is very relevant, at a metatheoretical level, to the normative study of institutions. Institutions are, after all, enacted by practices, and if we understand norms (as I think we should) as themselves products of practice – albeit in a complicated and non-reductive way – then we see that our norms are not just benchmarks against which institutions can be evaluated, but are also themselves, in part, products of our institutions. The institutional world we make shapes our values, and those values in turn react back on our institutions, and permit us to evaluate – and critique – them.

For example: one of the ways in which capitalism is (potentially) self-undermining, for Marx, is not just that it creates the objective conditions for its abolition (for example, in creating productive forces that can be redirected to other ends), or even that it creates the subjective conditions for its abolition in the sense of creating a ‘collective subject’ of a class-conscious proletariat, but, just as importantly, that it creates the subjective conditions for its abolition in another sense: the institutions of capitalism generate a range of historically novel normative ideals that provide resources for the emancipatory critique and rejection of capitalist institutions.

So the relationship between institutions and norms is complicated. It is a mistake, in an ‘abstract’ sense, to think that we begin with historically-abstracted norms and then move on to devise institutions that can realise those norms’ ideals: our norms are a product of practice too, and may shift as our practices shift. Nevertheless, we do evaluate institutions against our norms, and in a less abstracted or philosophical sense it doesn’t matter much where those norms come from. After all, they are our norms – in our ethical and political debates we accept or reject them for reasons, not simply causes.

So, to repeat, debates over institution design play out in two registers: debates over what ideals we should attempt to realise, and debates over what institutions we should adopt in the attempt to realise those ideals. These debates are intertwined at an abstract metatheoretical level – but they are also intertwined at more ‘applied’ levels. One easy mistake to make, in ‘theoretical’ institution-design, is to think that one can begin with a set of foundational normative principles and, from these principles, ‘derive’ the institutions that best realise them. This direction of political-theoretical reasoning is certainly one of the discursive and political resources at our disposal – but we need to be cautious. In practice our norms are complicated and conflictual, filled with competing preferences and values that must be wrestled with if we are to balance partially incompatible goods and goals. This kind of work cannot be carried out at the level of pure abstraction – it needs to be thought through in relation to concrete problems. Thinking about actual institutions is therefore important not only when we attempt to realise our political ideals, but also in order to understand what those ideals even are. Different people who share ‘the same’ values may find themselves with very different practical intuitions when confronting real-world political problems – and these practical problems therefore serve to illuminate differences of values that might have been invisible, or at least difficult to discern, until they were put to the test.

One of the conclusions we could draw from this line of thought is the position discussed in my last blog post: the idea that politics can only really be carried out ‘in practice’, and that trying to theorise institutions (or anything else) in too much abstraction or too much in advance is hubristic. But, as I said in that post, I think we should reject this idea. The inseparability of theoretical ideals and practical problems should not lead us to reject the former – still less to reject theoretical attempts to provide resources for practical problem-solving. Nevertheless, it is useful to be aware of the ways in which these areas of theory, politics and experience intersect.

In short, in thinking about institutions, we should pursue both tasks: clarifying our political values, and clarifying our sense of what institutions can best realise those values. Moreover, for the reasons I have discussed in this post, it makes sense to ‘tack back and forth’ between these projects. To bastardise Kant, institutions without ideals are empty; ideals without institutions are blind. We will carry out both of these projects better, I think, if we keep them in close contact.