This is an extremely long post – something in the ballpark of 13,000 words – for which I apologise.  I can’t claim that it isn’t rambling and digressive, etc., but for what it’s worth it felt more or less like a single line of thought while I was writing it.  Unfortunately I don’t really have it in me to revise it in any serious way, so here it is.  The post is organised roughly as follows.  First I talk very briefly about individual-level epistemology, in its traditional ‘Enlightenment’ form.  Then I make a shift to social epistemology.  I draw on Neurath, Brandom, the strong programme – all the classics of my personal epistemological canon – to outline what I take to be a reasonably coherent social-epistemological account of science as an anti-foundationalist epistemic system.  This is the bulk of the post.  I then finish up by applying this model to a couple of personal preoccupations – a rather bathetic conclusion given the intellectual resources I’m drawing on, but again, it is what it is.  I guess you can see the post as trying to do two main things.  First, I want to give a social-institutional answer to the traditional demarcation problem: what is science?  Second, I want to reflect a little on what this answer implies for how individuals – both as citizens consuming scientific output, and as researchers contributing to the scientific endeavour – can and should relate to this broader institutional structure.  The ‘emotional’ point, from my perspective, is to try to think about the location of my own research within the broader scientific institutional space.

Start, then, with non-social epistemology – specifically, with the good old Enlightenment project of trying to figure out the way the world is, using the resources of science and reason.  I’ll take as an exemplary expression of this project the Royal Society motto: ‘Nullius in verba’ – ‘on the word of no one’.  This is the commendable (to my mind – of course, opinions on this matter differ) Enlightenment idea that the authority of tradition qua tradition is no authority at all – that we should not simply defer to the tradition, whether it be religious or political or philosophical or whatever.  Rather, we should figure things out for ourselves.

It’s worth pausing for a moment here, perhaps, to mention that there’s a philosophical connection between the scientific project (understood in these terms) and the anti-traditionalist, anti-authoritarian forms of political liberalism and radicalism that emerged during this same historical period.  Both of these projects are, at some level, driven by the same thought: we should not simply defer to authority (whether that be political or epistemic) – whatever authority authority has comes from our own judgements and actions.  I think this is a good philosophical approach, and I take myself to be aligned with it, important caveats notwithstanding.  But this post isn’t about the connections between the epistemic and the political dimensions of ‘the Enlightenment’ – that’s all for another day.

What I want to start by talking about, rather, is the different ways in which “on the word of no one” could be understood.  If we reject the authority of tradition, what are we basing our epistemic judgements on?  As usual, I’m going to be maximally crude here, but I’m going to say there are basically three broad categories of alternative authority-source: experience (empiricism); reason (rationalism); and Mysterious Other (mysticism).  I appreciate that this is all a sort of first-year-undergraduate-level understanding of Enlightenment epistemology.  But at the same time it seems basically fine to me, and it’s what I’m going with.  Typologising in this way, then, and ignoring mysticism (on the grounds that it is a transformation of the Enlightenment rejection of tradition into a basically anti-scientific epistemic approach, and thus Does Not Align With My Values) we have two basic projects: grounding knowledge in the senses, and grounding knowledge in the faculty of reason (plus, of course, combinations of the two).

All well and good.  But then, as is I think at this point abundantly well-established by the subsequent philosophical tradition, once we start trying to elaborate these approaches we run into worlds of trouble.  When we think about our faculty of reason, does it not seem that our processes of reasoning are themselves, at least in part, socially inculcated and influenced – that is to say, influenced by the traditional authorities that our faculty of reason is meant to break with?  In my view the answer to this question is definitely “yes”.  Similarly, when we think about how to form judgements on the basis of experience, it seems plausible that theory is ‘underdetermined’ by experience and that, moreover, the way in which our experience is taken up into judgement is itself partly determined by socially- (and thus authority-) influenced processes of thought.  Obviously I’m not aiming to make the case for either of these positions in this post – I’m gesturing to the long philosophical debates around these issues.  Still, I’m a philosophical pragmatist, and therefore I’m on the “most stuff is socially constituted” side of these debates, and I tend to think that appeals to non-social faculties (whether of experience or reason) often tacitly rely on socially-constituted categories.

Even putting all this aside, though, there are more practical ways in which the “on the word of no one” principle runs into problems.  Obviously we’re not all carrying out every scientific experiment ourselves – we’re relying on other researchers to make empirical observations, and then reading their reports of those observations, or other researchers’ syntheses and summaries of those reports.  So testimonial authority is central to scientific empiricism.  Similarly, even when we are engaged in Cartesian rationalism, are we really thinking things through from first principles ourselves – or are we using others’ accounts of their reasoning as an aid to, and frequently substitute for, our own?  Here again, for example, the canonical status of Descartes’ ‘Discourse on Method’ is an interesting kind of performative… if not contradiction, then at least tension: a canonical authority for rejecting canonical authority.  There are tensions here, I think – in the constitution of an anti-traditionalist tradition; the social inculcation of the project of rejecting socially-inculcated judgements.

This kind of line of reasoning is one of the ways you can get to a social- or practice-theoretic critique of Enlightenment rationalism or empiricism.  The crude argument here would go: the Enlightenment project aspired to break with social authority; but we can show that the very categories with which Enlightenment thinkers engaged in this project are socially constituted via unacknowledged relations of authority. From here it is easy to conclude that the Enlightenment project as rejection of authority is basically a contradiction in terms, and we should throw it in the bin.

Obviously this is a very crude summary of the critique, but I think this is recognisable as a summary of quite a lot of critical science studies.  For example (and since I started with the Royal Society), I would argue that Shapin and Schaffer’s ‘Leviathan and the Air-Pump’ clearly falls within this broad genus.  Barnes and Bloor’s ‘strong programme’ argument for relativism can likewise easily be taken to point in this direction.  So do at least some categories of critical theory (in the Frankfurt sense) and Marxism, as well as some forms of more standpoint-epistemology-adjacent contemporary critical theory.

So.  At this point we’ve traversed two moments of what we can see as a kind of ‘dialectic’.  We started with a picture of the Enlightenment epistemological project that understood itself as rejecting social, authority-based sources of knowledge in favour of various kinds of individual epistemic grounds – rationalist or empiricist.  That’s moment one.  Then we argued that this doesn’t work: social relations, and authority-relations, implicitly constitute even the apparently non-socially-constituted categories of Enlightenment epistemology.  This is so in at least two ways.  First, the ‘individual’ psyche is always partly socially constituted, in its faculties of both observation and reason: you can’t find your way to a faculty that is not shaped by the forces of social authority that the faculty superficially appears to transcend or escape.  Second, you can’t in practice engage in any serious project of knowledge construction without relying on testimony, and so we need to bring authority-relations back into our epistemology in order to deal with testimony.

Now, if you are of a critical turn of mind, you can interpret these critiques of ‘individualist’ Enlightenment epistemologies as damning for the entire epistemological project.  The Enlightenment thinkers sought to construct knowledge “on the word of no one”; they are not able to do so; too bad for the project.  This is the second moment of our ‘dialectic’, which takes itself to simply refute the first.

But not so fast!  We don’t have to accept critical science studies’ debunking application of these insights.  Our third ‘moment’, then, is accepting the idea that we can’t get away from either the social constitution of ‘individual’ faculties or testimonial authority structures, and trying to construct an understanding or version of the Enlightenment epistemological project that is grounded in these insights, rather than refuted by them.

This, obviously enough, is the ‘moment’ of this ‘dialectic’ that I endorse.  I take it that this broad approach has been pursued, in different ways, by a lot of thinkers that I’m interested in.  On the one hand, there are the explicit ‘social epistemologists’ who are interested in the social structure of science as an institution.  On the other hand, there are the pragmatist philosophers – especially, for me, Robert Brandom.  I take it that Brandom’s work – especially his recent Hegel book – also presents a highly sophisticated social epistemology, which aims to reground Enlightenment rationalism in social-institutional terms.  In the rest of this post I’m going to dwell on this third ‘moment’, or paradigm.

Start with ‘Neurath’s Boat’.  The other day I finally got round to reading Neurath’s critique of Spengler, in which Neurath articulates his famous boat metaphor.  There Neurath writes:

Even if we wish to free ourselves as far as we can from assumptions and interpretations we cannot start from a tabula rasa as Descartes thought we could.  We have to make do with words and concepts that we find when our reflections begin.  Indeed all changes of concepts and names again require the help of concepts, names, definitions and connections that determine our thinking.

This understanding of our intrinsic enmeshment in inherited concepts and associations is part of Neurath’s understanding of rational thought in holistic, rather than atomistic terms:

When we progress in our thinking, making new concepts and connections of our own, the entire structure of concepts is shifted in its relations and in its centre of gravity, and each concept takes a smaller or greater part in this change.

Neurath goes on:

Not infrequently our experience in this is like that of a miner who at some spot of the mine raises his lamp and spreads light, while all the rest lies in total darkness.  If an adjacent part is illuminated those parts vanish in the dark that were lit only just now.  Just as the miner tried to grasp this manifoldness in a more restricted space by plans, sketches and similar means, so we too endeavour by means of conceptually shaped results to gain some yield from immediate observation and to link it up with other yields.  What we set down as conceptual relations is however, not merely a means for understanding, as Mach holds, but also itself cognition as such.

I think this last sentence is a noteworthy remark from Neurath.  Of course Neurath isn’t here proposing an elaborated Brandomian-Hegelian argument that conceptual content should be understood in terms of inferential connections, but I think it is clear that for Neurath here “cognition as such” is about tracking connections between concepts.  For this reason, for Neurath, changes in our overall web of concepts can arguably also be understood as transformation of the concepts themselves.  There is a relatively strong sense, then, in which Neurath’s understanding of cognition is both holistic and cultural, as well as (plausibly) inferentialist-adjacent.

Now comes the famous boat metaphor:

That we always have to do with a whole network of concepts and not with concepts that can be isolated, puts any thinker into the difficult position of having unceasing regard for the whole mass of concepts that he cannot survey all at once, and to let the new grow out of the old.  Duhem has shown with special emphasis that every statement about any happening is saturated with hypotheses of all sorts and that these in the end are derived from our whole world-view.  We are like sailors who on the open sea must reconstruct their ship but are never able to start afresh from the bottom.  Where a beam is taken away a new one must at once be put there, and for this the rest of the ship is used as a support.  In this way, by using the old beams and driftwood, the ship can be shaped entirely anew, but only by gradual reconstruction.

This is, I think, probably the classic statement of anti-foundationalism in philosophy of science.  It’s wonderful stuff, and I fully endorse it.  But this move then opens up a whole set of other questions.  In particular, granted that we are sailors adrift remaking our boat at sea – how much faith do we put in the existing state of the boat?  

The fallibilist and anti-foundationalist approach I’ve been describing, I think, is typically associated with two commitments that stand in apparent tension.  On the one hand, there is the commitment to the idea that existing scientific beliefs and methods are our best starting point.  On the other hand, there is a commitment to the ongoing remaking of those beliefs and methods.  The tension between these stances is not, of course, one of outright incompatibility – the endorsement of both stances is precisely what gives the fallibilist anti-foundationalist position its power.  But this tension is something that needs to be navigated in practice by anyone pursuing this approach to scientific epistemology.

I think another classic expression of this idea is Max Weber’s reflections in ‘Science as a Vocation’. There Weber writes:

In science, each of us knows that what he has accomplished will be antiquated in ten, twenty, fifty years.  That is the fate to which science is subjected; it is the very meaning of scientific work, to which it is devoted in a quite specific sense, as compared with other spheres of culture for which in general the same holds.  Every scientific ‘fulfilment’ raises new ‘questions’; it asks to be ‘surpassed’ and outdated.  Whoever wishes to serve science has to resign himself to this fact.  Scientific works certainly can last as ‘gratifications’ because of their artistic quality, or they may remain important as a means of training.  Yet they will be surpassed scientifically – let that be repeated – for it is our common fate and, more, our common goal.  We cannot work without hoping that others will advance further than we have.  In principle, this progress goes on ad infinitum.

This is the ‘paradox’ of fallibilism: that we put our confidence in judgements precisely because we expect them to ultimately be found inadequate.  I think this is a coherent – and, indeed, a robust and correct – philosophical perspective.  But it does raise questions about exactly what attitude to adopt to any specific judgement, as well as to the institutional structure of science as a whole.

Before I write more on that theme, I want to present one more representative of anti-foundationalist philosophy of science: Michael Polanyi.  In his classic essay ‘The Republic of Science’, Polanyi sketches an account of the institutional structure of science that I think is broadly correct.  Polanyi aims to give an account of how the institution of science can as a whole embody the Enlightenment ideal of “on the word of no one”, even as every specific moment of the institution relies on extensive authority-claims.  In Polanyi’s words:

the authority of scientific opinion enforces the teachings of science in general, for the very purpose of fostering their subversion in particular points.

That is to say: the community of scientists trains new aspiring members of the community in the scientific tradition – some competence in the tradition is a precondition of full membership in the scientific community of mutual recognition.  Yet one of the norms of the scientific community that scientists thereby enter is that any element of this tradition can in principle be challenged.  This institutional structure thus both transmits a tradition and aims to ensure that every element of that tradition is in principle open to rebuttal, and thereby capable of empirical and rational grounding.

As I keep saying, something in this broad space is the vision of science I endorse.  Moreover, this is not a particularly niche or strange opinion on my part but, I take it, an at least in principle widely-held one.  The institutions of science are constructed in the way they are in large part because they are informed by precisely this fallibilist understanding of the rationalist and empiricist endeavour.  Individually we cannot but take most of our opinions on the basis of the authority of others.  But collectively we have constructed those authority-relations, within the institutional structure of science, such that any and every individual claim can be subjected to the tests of experience and reason.  And this fact about the scientific community as a whole is what justifies any individual within that community accepting so many of the community’s conclusions on the basis of (apparently) nothing more than community authority.  This institutional fallibilist structure is the basis of the authority of the beliefs and techniques that the community transmits.

Ok.  So this is the third ‘moment’ of the ‘dialectic’ I’m discussing: this vision of science as a fallibilist institution, and the dual role of authority within this institution.

But our thinking about how science is structured doesn’t, and shouldn’t, stop there.  In the remainder of the post, then, I want to start to build on this core picture by thinking in a very crude way about a few different challenges or problems that can be presented by or to this picture.  I’ll aim to be somewhat brief and (therefore, as usual) crude.

First issue.  How are we to assess the overall reliability of our scientific institutional structures?  Our basic Neurath-Weber-Polanyi picture is of science as an institution which is, over time, self-correcting and self-improving.  It may well be the case that any given commitment turns out to be misguided, but the general mechanics of the institution’s internal checks and balances will tend over time to improve its claims and methods.  Moreover, for this reason, it’s reasonable to treat the institution’s current overall output as a reasonable approximation of our current best guess as to how things really are.

The basic critique here is: what if that’s not the case?  What if the institution is just fundamentally broken in some way?  The way in which you think science is broken is likely to depend on your ideological location: maybe it’s in the pocket of capital or the ruling class, or ‘globalist elites’.  Maybe the social location of scientists shapes their judgement in a way that is destructive of real insight.  Or maybe science is just, for whatever contingent reason, a self-selecting cadre of people with bad methods and bad views, using their institutional clout to prevent self-correction mechanisms from operating.  There are broader and narrower versions of this kind of critique.  At the limit case, there’s the rejection of science tout court.  But there are also many narrower critiques: such-and-such a discipline or sub-discipline is in the hands of fools and/or scam artists and/or powerful interests, and science’s self-correction mechanisms are not working because of the way those with institutional power have structured the relevant field.

How does one respond to this kind of critique?  Well, to a large extent it depends on context.  The reason it depends on context is that it is probably impossible in the abstract to draw a clear line between bad versions of this critique, which aim to reject what’s best in science, and good versions of this critique, which are themselves part of the ‘self-correction’ mechanism from which science derives its authority.  Put differently: if one simply rejects out of hand, in an undifferentiated way, critiques of current scientific practice and scientific findings, on the grounds that such critiques are ‘anti-science’, then one is, potentially, cutting away the basis for the scientific authority one seeks to appeal to.  Because, of course, the whole point of the scientific enterprise is that any dimension of scientific orthodoxy is in principle up for questioning.

This dynamic, of course, is why basically every crackpot thinks that whatever they are doing is real science, and the existing body of scientific knowledge and practice is an anti-science conspiracy masquerading as real science in order to fool the rubes.  We’re all, I take it, familiar with this kind of argument, and we are (most of us) not keen to take the flat earth people (or whoever) very seriously, still less to give them chairs in theoretical physics at major universities.  And yet the rational core of the crackpot’s vision of themselves as persecuted truth-teller is that if science is to function according to that Enlightenment vision with which we began – “nullius in verba” (albeit now understood at the collective and institutional level rather than the individual level, as discussed above) – then there must be some sliver of possibility that the crackpot is onto something.

And this creates a further problem.  Presumably we don’t want to place flat earth theory and the best current theoretical physics on a completely level institutional-epistemic playing field.  And yet the kinds of gatekeeping that are established to keep out the cranks always risk doing more than that: blunting the self-correcting dimension of science’s ongoing, self-constitutive self-critique.  The challenge of scientific institution-design is to balance these imperatives: the gatekeeping required to produce high-quality knowledge-claims, balanced with the ability to critique in principle every dimension of those knowledge-claims, and of the mechanisms by which they are derived.  Of course this balance is hard to get right, even in the best of circumstances – without all the interests and errors at work that we’re all familiar with.

Ok.  So this is one set of issues – rather crudely put.  But here’s another, though closely related, set of issues.  As social scientists and philosophers have explored the social dimensions of science as an epistemic system, there has been increasing focus on ‘epistemic diversity’.  Here, again, the picture is one of science as a system with internal epistemic checks and balances – and those checks and balances require epistemic diversity.  If science is an ‘evolutionary’ system (as per Popper), then the way that evolutionary process works is by selection among variation.  And even if you don’t buy the entire science-as-evolutionary-system package – or the closely related science-as-catallaxy ‘marketplace of ideas’ vision of Polanyi – there’s still a basic insight here: if you don’t have some diversity of hypotheses, as well as the ability to adjudicate between different hypotheses using evidence, then you just don’t have science.  Again, then, we have an apparent ‘tension’ which is of the essence of the scientific enterprise: diversity of opinion oriented towards consensus around truth.  The epistemic authority that scientific consensus enjoys derives precisely from its willingness to adjudicate between diverse hypotheses – but that diversity of hypotheses is, intrinsically, a limit to consensus.

This is another version of the general point I made earlier.  But recent work in formal social epistemology has drilled down in this general problem space, and found some interesting, more specific results.  Kevin Zollman’s recent(ish) work on ‘the epistemic benefit of transient diversity’ is one such research strand.  Zollman mathematically models the opinion dynamics of very simple epistemic systems.  Agents exist on a graph (i.e. a network), and they interact with other agents via the edges (links).  Zollman asks: what graph or network structure results in overall better epistemic outcomes?  And he finds that (under plausible assumptions) relatively weakly connected networks result in better overall epistemic outcomes than do strongly connected networks.  Why?  Because in strongly-connected networks agents tend to coalesce quickly around a specific consensus – and that consensus may well be wrong.  It is better for there to be higher ongoing diversity of opinion, so the collective selection of the ‘correct’ opinion among that diversity takes place over a longer time-frame, with more evidence and more measured judgement in play.
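For the curious, the moving parts of this kind of model can be made concrete in a few lines of code.  The following is a minimal sketch in the spirit of Zollman’s setup, not a reproduction of his actual model or parameters – the function names, payoff probabilities, and network sizes here are all illustrative assumptions of mine.  Agents repeatedly choose between two ‘arms’ (think: rival research methods), one of which is objectively slightly better; each agent updates its estimates using its own results and those of its network neighbours; and we can then compare how often a sparse network (a cycle) versus a dense network (a complete graph) converges on the better arm:

```python
import random

# Toy Zollman-style network epistemology model: agents choose between two
# bandit arms, pool results with network neighbours, and we ask how often
# the whole community settles on the objectively better arm.

def run_trial(edges, n_agents, p_bad=0.5, p_good=0.55, steps=300, seed=None):
    rng = random.Random(seed)
    probs = [p_bad, p_good]                      # arm 1 is objectively better
    # counts[i][arm] = [successes, pulls]; everyone starts at a flat 0.5 estimate
    counts = [[[1, 2], [1, 2]] for _ in range(n_agents)]
    neighbours = [{i} for i in range(n_agents)]  # agents see their own results
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    for _ in range(steps):
        pulls = []
        for i in range(n_agents):
            est = [s / t for s, t in counts[i]]
            # pull whichever arm currently looks better (random tie-break)
            arm = rng.randrange(2) if est[0] == est[1] else est.index(max(est))
            pulls.append((i, arm, rng.random() < probs[arm]))
        for i in range(n_agents):                # update on all visible results
            for j, arm, success in pulls:
                if j in neighbours[i]:
                    counts[i][arm][0] += success
                    counts[i][arm][1] += 1
    # True iff every agent now rates the better arm higher
    return all(s1 / t1 > s0 / t0 for (s0, t0), (s1, t1) in counts)

def convergence_rate(edges, n_agents, trials=100):
    return sum(run_trial(edges, n_agents, seed=t) for t in range(trials)) / trials

n = 8
cycle = [(i, (i + 1) % n) for i in range(n)]                    # sparse network
complete = [(i, j) for i in range(n) for j in range(i + 1, n)]  # dense network
```

Whether the sparse network actually outperforms the dense one depends heavily on the parameter regime (how close the arms are, how long the run is, and so on) – which is, as Zollman himself stresses, part of the point: these are toy models whose value lies in making the diversity-versus-consensus trade-off tractable, not in delivering robust quantitative predictions.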

This kind of result, I take it, supports a fallibilist and (in some sense) evolutionary perspective on the scientific research process.  It supports the idea that the strength of science lies in its ability to accommodate high diversity of opinion and (although Zollman isn’t studying this) method.  Of course, Zollman’s analysis is just a toy model, and (as Zollman emphasises) one doesn’t want to draw very strong conclusions on such a basis – but I take it that we have strong philosophical reasons to believe this kind of thing anyway, as discussed above.  Again, in other words, we are led to the idea that lots of scientists being wrong is central to the epistemic authority of science as a whole.  Efforts to establish institutional structures that speed up the consensus-formation process are likely to result in worse collective epistemic outcomes.

Ok.  So this is the basic picture of science as an institution that I endorse.  But here is where I want to go, with all this theoretical apparatus.  If we understand science in these terms, then a set of difficult problems arises about how individual scientists – or really any individual, scientist or not – should interact with the scientific institutional structure as a whole.  If we adopt the original, individualist Enlightenment epistemological approach then this category of problem doesn’t present itself: the individual is the seat of knowledge, and epistemic authority can be assessed at the level of the individual.  But if we adopt this social and fallibilist understanding of epistemology, then the individual is not the seat of knowledge – knowledge is something that we produce and assess collectively via a mechanism that intrinsically involves much individual-level error.  Moreover, individual epistemic virtues are far from the only thing that need to be considered when evaluating epistemic authority: the specific reliability of scientific knowledge is a feature of the system as a whole, rather than of any one of its moments.  In addition, we have derived the apparent ‘paradox’ that (at least for large classes of claim – I’ll introduce more necessary nuance here later) even if we want everyone to be ‘correct’, we also don’t want everyone to believe the same thing – because that would undermine the basis for the authority of the claims we take to be correct.  My question in the remainder of this post is: what does this mean for the way in which any given individual ‘ought’ to relate to the scientific institutional apparatus and tradition?

At this point I want to talk a bit about some different attitudes that can be taken to the scientific enterprise.  In an earlier draft of this post I used the ‘case study’ of the discourse around the science of COVID-19 to illustrate some of these points.  That former draft is probably still more visible than it should be in what follows, but I decided it’s a much too contentious – and concrete – topic to be worth dragging into this basically philosophical argument.  Still, the debates over COVID science are the kind of thing I have in mind in the following discussion.  There is a scientific discourse; how do we choose to relate to it, as ‘consumers’ of the output and implications of scientific research?

Here then are some ways it is possible to relate to ‘science’, or scientific institutions, as a citizen:

  1. Just flat-out rejection of the epistemic legitimacy of science.  Obviously this attitude comes in a range of different forms, some a lot more sinister than others.  Still, there is a problem in how to engage with this perspective, if (like me) you are broadly pro-science.  Obviously you can’t really argue with fundamentally anti-scientific claims on the basis of the scientific literature, because this perspective simply rejects the scientific literature.  The real argument is at the level of ‘basic worldview’ – and it is very difficult to know where to begin with that kind of debate.
  2. Moving on, then, another orientation to the scientific discourse is to just accept what specific prominent science communicators say as a summary of ‘the science’.  In my view this approach makes a lot of sense as a time-saving heuristic.  Most of us are extremely busy and time-poor – we simply don’t have the capacity to form judgements about the state of the current scientific literature, and therefore we delegate that job to people who have assumed the public role of assimilating and communicating the current state of the relevant science.  This is reasonable and rational – it’s how epistemic delegation works.  Of course, if you think the relevant scientific and science communication institutions are fundamentally broken, then this is a bad heuristic.  But if you don’t think that, this is a reasonable approach, in my view – given paucity of time, etc.  It needs to be borne in mind, however, that this is a shortcut heuristic – which is relevant to approach (3).
  3. The third approach is the same as (2), but more dogmatic.  That is, this approach doesn’t just accord public science communicators authority as a shortcut heuristic, but it insists that there is something very problematic or suspicious about dissenting from their views.  For this perspective, the authority of specific scientists or science communicators is identical with the authority of science in general – to doubt these science summarisers and communicators is to doubt science itself.

    This is a much more dubious stance, in my view.  It’s appropriate to defer to public science communicators as a time- and labour-saving heuristic – but we need to remember that their role is to summarise and synthesise an intrinsically pluralistic and internally diverse field of discourse.  These communicators’ judgements about how to synthesise that internal diversity of scientific opinion are very much not the same as the authority of science in general.  Inevitably, many experts will dissent from the specific synthesis proposed.

    I think this tendency (a dogmatic ‘pro-science’ attitude, where ‘science’ is identified with some specific figure or figures within the extremely diverse scientific ecosystem) is quite common among what I would call the “I bloody love science!” tendency, as well as among some scientists and science communicators who find it convenient to claim the authority of science as a whole for their contributions to an ongoing pluralistic scientific discourse.  It is a way of understanding science as technocratic expertise, rather than in more fallibilist and pluralist terms.  You could do worse than this, but I don’t think it’s a great orientation to science as an institution.
  1. A fourth approach is a different, more nuanced form of denialism or scepticism.  Unlike (1) – the “everything is lies” approach – this perspective takes the scientific literature seriously.  However, it mobilises the intrinsic fallibility of any and every individual study to cast doubt over the literature as a whole.

    I think there are two variants of this approach.  One is bad faith – the kind of Darrell-Huff-working-for-the-tobacco-industry ‘merchants of doubt’ cynical mobilisation of scepticism in the service of a predetermined agenda.  This is denialism proper: the cold-eyed use of scientifically literate hyperbolic scepticism to cast doubt on findings whose implications the author opposes.

    There’s a more good-faith version of this approach, though.  This happens when scientifically literate people, who spend a lot of time engaging with the scientific literature, slowly become horrified by the fact that when you scratch at the methods of scientific publications, or the structures of scientific institutions, you typically find flaws.  It’s really hard to do good research; most research isn’t good; and even research that is good will have significant intrinsic limitations.  If you have the right (or wrong) sensibility, as you look at this stuff you slowly become convinced that we just don’t know the first thing about anything – that the entire scientific enterprise is a towering house of cards built on sand.  This is, I think, the good-faith road to denialism.

    How should we react to this approach?  Well, again, I think we need to be careful.  Sometimes it is indeed the case that a scientific field or subfield is just fundamentally broken – the studies carried out in it simply aren’t good enough for us to draw any meaningful conclusions; the noise massively outweighs any signal; the biases or interests at work are so overwhelming in their influence that no legitimate signal can be gleaned through the shadow they cast.  We can’t rule out this possibility a priori, and should pay the sceptics enough attention to take it seriously.

    At the same time, though, the fact that any individual study is flawed (which basically every study is for some value of ‘flawed’), or even that there are systematic generative flaws in the relevant institutional structures (which again will always be the case for some value of ‘flaws’) doesn’t mean that the scientific subfield (or whatever) in question should be written off.  The reason for this is that – as I was arguing at greater length earlier – scientific conclusions are really aggregate phenomena.  They emerge as signal from noise, and the noise can be very substantial indeed – can even be systematic – while still generating a useful signal.  The idea of the scientific enterprise is that we are all engaged in a highly fallible process, but our collective research endeavour is stronger than any individual study or claim, or the flaws that afflict them.  And if we have the institutions of science functioning halfway properly, this indeed ought to be the case – a signal ought to be detectable, over time, despite everything.

    In this sense, the ‘good-faith sceptic’ is (I’m arguing) taking an overly pessimistic or perfectionist approach to the assessment of scientific validity.  The good-faith sceptic assumes that errors are magnified by aggregation – that if all the individual studies have some flaws, then the field as a whole must be a true disaster – rather than understanding the way in which institutionalised fallibilism allows fallible studies to produce something greater than the sum of their parts, via the process of collective sifting and checks and balances.  Again, we can’t assume a priori that this aggregate-level effectiveness of the scientific research process holds for any given field, but I’m claiming that it in fact often does hold for many actually-existing scientific institutions.
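To make the aggregation point a little more concrete, here’s a toy simulation – the numbers and the ‘study’ model are invented purely for illustration, not drawn from any real literature.  Each simulated study is swamped by noise, and half of them share a systematic bias; any single study is close to worthless, but the pooled literature still lands in the right ballpark:

```python
import random

random.seed(0)

TRUE_EFFECT = 1.0

def run_study():
    """One fallible 'study': heavy noise, plus a systematic design flaw
    (a fixed bias) in roughly half of all studies."""
    noise = random.gauss(0, 2.0)                   # noise dwarfs the effect
    bias = 0.5 if random.random() < 0.5 else 0.0   # systematic flaw, half the time
    return TRUE_EFFECT + bias + noise

studies = [run_study() for _ in range(500)]

single = studies[0]                    # any one study: unreliable
pooled = sum(studies) / len(studies)   # the 'literature' in aggregate

print(f"one study: {single:+.2f}")
print(f"pooled:    {pooled:+.2f}")     # near TRUE_EFFECT + average bias (~1.25)
```

The random noise washes out almost entirely under aggregation; the shared bias does not wash out, but it distorts the pooled estimate by a bounded amount rather than destroying the signal – which is roughly the distinction between flaws the collective process absorbs and flaws it merely survives.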

Ok – so far we’ve looked at four different ways to approach scientific findings.  I want to suggest that each of these ways has a sort of partial or lopsided attitude to the complex dynamic system of our scientific institutions.  The full anti-science denialist just rejects the whole thing; the heuristic timesaver and (in a less defensible form) the pro-technocracy expertise lover focus on some specific ‘output’ as bearing the authority of the institution of science as a whole (investing a moment of the system with an attribute that can really only legitimately be attributed to the system overall); the good-faith denialist fixates on research flaws without understanding how the checks and balances of aggregation associated with the practice of the community as a whole can permit useful signal to emerge even through very significant noise.

But we can, in principle, do better than all of this: we can ourselves triangulate between many different studies, and we can assess the likely institutional incentives and strengths and weaknesses in play.  Of course, we need to have the time available to do this – and it relies on our own judgement.  So this isn’t an easier – or even, necessarily, a better – way to approach things than making use of simpler heuristics.  Our judgement may be worse than that of whichever science synthesiser and communicator we would otherwise choose to delegate this task to!  But the approach is at least available.  And there is a certain sense in which this approach is more adequate to what science ‘is’.

Ok.  Let’s say we adopt this kind of approach.  Here we are trying to engage not just with science in the sense of individual outputs – whether individual papers or summary overviews assembled by science communicators – but with the dynamics of the relevant field as a whole.  This is, at least potentially, a good way to go about things – but it is also extremely cognitively taxing.  Moreover, even if we adopt this kind of approach – moving beyond the first-pass heuristic of trusting some specific synthesiser or synthesisers – we are still constantly engaged in acts of epistemic delegation.  Understanding science in the systemic fallibilist way I’m advocating means there is simply no way to get away from epistemic delegation – from trying to make rule-of-thumb judgements about whose word to rely upon.  In taking the approach I’ve described we are attempting to engage in a more sophisticated and triangulated effort at weighting the credibility of testimony – but we cannot but ultimately make judgements about how to weight testimonial credibility.  At the end of the day, this is core to the entire scientific enterprise.

And this basic, unavoidable fact about how science works means that we are never not going to be vulnerable to ‘scepticism’.  I began this post with the early modern Enlightenment approaches to foundationalist epistemology.  Rationalist foundationalism placed the individual faculty of reason centre stage, while empiricist foundationalism placed observations of nature centre stage, but either way the idea was that the appeal to testimony could ultimately be grounded in something that itself did not need the grounding of the attribution of social credibility.  To use Barnes and Bloor’s phrase, the epistemic grounds of such philosophical approaches were meant to “glow by their own light”.

If we adopt a fallibilist, anti-foundationalist approach to science as a complex system, though, we lose this kind of grounding.  The point at which our chains of reasoning ‘bottom out’ is always contingent.  There is always, in principle, more that one could do; one is always engaged in epistemic delegation, treating something as contingently trustworthy that is, in principle, open to contestation.

This fact opens the door to an infinite application of specific scepticisms.  It is always possible to continue asking “and what’s your basis for believing that?”  And this infinite application of specific scepticisms itself has a double face.  On the one hand, and to reiterate, the goal of our scientific system as a whole is to collectively institutionalise the principle that was fallaciously individualised in the first wave of Enlightenment rationalism and empiricism – “on the word of no one”.  On the other hand, in a social, anti-foundationalist and fallibilist understanding of science, this principle is institutionalised through a structure of contingently authoritative testimony – that is, precisely by taking people’s word as authority enough to believe things.  Sceptical questioning of taken-for-granted authorities can thus be seen both as the essence of rational, empiricist scientific inquiry, and as undercutting the testimonial institutions that we use to pursue rational, empiricist science at all.  Which of these a given act of questioning ‘counts as’ is a matter of social perspective.

Alright.  Here I want to pull back slightly, and start writing at a greater level of generality – talking not about science specifically but about epistemic systems in general.  I think there’s a tendency in quite a lot of philosophy of science to somewhat conflate the specific features of science with human reason and observation in general (indeed, there are lots of people who would argue that this conflation is justified, because there’s actually nothing that really differentiates science from other kinds of human epistemic practices!).  I don’t want to do that – I do think science can be demarcated, albeit loosely, from non-science.  Even so, I want to pull back now to make some broader remarks, drawing (as usual) on Brandom’s theory of practice and discourse.  Then I’ll circle back round to the problem space of fallibilist understandings of science.

So – start with Brandom’s ‘default, challenge, response’ model of the game of giving and asking reasons.  And start by thinking about the philosophical problems this model is responding to.  If we are thinking about inferential chains, then we are faced with the problem: where do those inferential chains stop?  It seems like we have three options.  First, there is no terminus – the inferential chain just keeps on going forever down an endless series of new premises.  This seems like it might be an infinite regress, such that we’re never able to ground our reasoning in anything because we never reach a stopping point.  Second, there is a terminus, but it is itself ungrounded.  This seems like it might be a form of dogmatism.  Third, there is a circular chain of inferences, such that our original inference effectively functions as its own ground.  This seems to combine negative features of the first two scenarios – an infinite regress that is somehow also a dogmatism.

But what else are we going to do?  It seems like these options are exhaustive.  Moreover, something like this problem occurs not just at the level of substantive premises, but also at the level of logical inferences themselves – this is the argument Lewis Carroll makes in ‘What the Tortoise Said to Achilles’.  Because an inference can be (in Brandom’s terminology) explicitated as itself a substantive premise – this is what logic does, on an expressivist account: it makes the formal machinery of reasoning available as conceptual content, not just practice – the exact same problem can be made to recur in relation to the logical processes of inference by means of which the inferential chain is itself constructed.

So what to do?  Brandom’s ‘default, challenge, response’ model proposes that we start with “material inferences” – that is, substantive, not just formal, inferential claims (and, of course, on Brandom’s account inferences are the stuff of conceptual content in general) – which are presumed (by default) to be good.  Then those material inferences (or conceptual contents) can be challenged as part of the general discursive practice of asking for and giving reasons.  Once challenged, we are obliged to give a reason for our commitments.  On Brandom’s account, then, we inhabit a space of reasons which is filled with ‘default’ commitments that are not, at that moment, vulnerable to challenge.  Indeed, there is no other way to enter the space of reasons – to be a sapient creature at all.  This world of ‘default’ commitments is the inherited materials of our conceptual space – the boat we are remaking, in Neurath’s metaphor.  Then the process of reasoning – the game of asking for and giving reasons – is the remaking of that boat, by challenging default commitments, and thereby extending our inferential chains outward, turning what had been premises into conclusions, grounded in new premises.

As we reshape our conceptual world, then, on this model, we are (like Neurath’s miners) shifting the location of the light of inquiry.  Commitments that had been default premises become the conclusions of newly established inferential chains.  At the same time, commitments that we arrived at by inferential chains get integrated into the default background of our conceptual habits.  In this latter scenario, inferential chains ‘drop out’ as they achieve local consensus, and what had been a laboriously-arrived-at conclusion becomes a habitual, unexamined premise.  It’s important to recognise that both ‘sides’ of this process – default premises becoming contested conclusions; contested conclusions becoming default premises – are essential dimensions of our rational discursive practices: you can’t have one without the other.

I take it that (as appropriately elaborated by Brandom) this is a more carefully developed (albeit also more boringly articulated) version of the vision of “cognition as such” that Neurath laid out in the passages from ‘Anti-Spengler’ I quoted above.  Ok.  But all this means two things.  First: we have swapped a general foundationalism (as in the original Enlightenment foundationalisms I began by discussing) for a series of ‘local foundationalisms’ – default commitments always vulnerable to challenge.  And, second: what counts as a local foundation, a currently default commitment, is a matter of local social practice.
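For what it’s worth, the two-way traffic described above can be caricatured in a few lines of code – this is my own illustrative toy, with invented names, and emphatically not Brandom’s formalism.  Commitments sit in the default background (no supporting reasons on record) until challenged; a met challenge turns the premise into a conclusion resting on new default premises; an unmet challenge withdraws it; and a settled conclusion can sink back into the default background:

```python
class CommitmentStore:
    """Toy 'default, challenge, response' model.  A ground of None means
    the commitment is held by default, without recorded reasons."""

    def __init__(self, defaults):
        self.grounds = {claim: None for claim in defaults}

    def challenge(self, claim, reasons=None):
        """Challenging a commitment obliges us to give reasons for it;
        failing that, the commitment is withdrawn."""
        if claim not in self.grounds:
            return
        if reasons:
            # the claim becomes a conclusion, resting on new default premises
            self.grounds[claim] = list(reasons)
            for premise in reasons:
                self.grounds.setdefault(premise, None)
        else:
            del self.grounds[claim]   # unmet challenge: commitment withdrawn

    def entrench(self, claim):
        """A locally settled conclusion drops its inferential chain and
        rejoins the unexamined default background."""
        if claim in self.grounds:
            self.grounds[claim] = None

store = CommitmentStore({"water boils at 100C"})
store.challenge("water boils at 100C",
                reasons=["thermometer readings", "standard pressure assumed"])
print(store.grounds["water boils at 100C"])   # now a conclusion with premises
store.entrench("water boils at 100C")
print(store.grounds["water boils at 100C"])   # back to a default premise: None
```

The point of the toy is just that ‘foundational’ is a status a commitment has at a moment in the practice, not an intrinsic property: the same claim moves in and out of the default background as challenges come and go.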

In order to elaborate on this latter point, I now want to compare and contrast the Brandomian ‘default, challenge, response’ model to the vision of discursive practice articulated by Barnes and Bloor in their defence of relativism (‘Relativism, rationalism and the sociology of knowledge’). I already discussed this paper in a Journal of Sociology article, co-authored with N. Pepperell.  I’m not 100% happy with our treatment of Barnes and Bloor in that article (obviously the fault here lies with me, not with NP), but I don’t want to divert this post into a lengthy relitigation of all those issues.  For now, I just want to focus on one specific area.

In their paper, then, Barnes and Bloor field a range of objections to their relativism from an ideal-typical rationalist.  First, they field the objection that (contra relativism) our ideas are in fact determined by the way the world really is.  Interestingly (and in contrast to some other prominent figures in the strong programme broadly understood – e.g. Harry Collins, at least in some moods) Barnes and Bloor have no objection to the idea that the way the world really is should play a role in our accounts of why people believe the things they do.  Barnes and Bloor are relativists, but they are not anti-realists, not even ‘methodologically’.  In Brandomian terms, Barnes and Bloor are happy to incorporate ‘reliable differential responsive dispositions’ into their analytic apparatus.  That is to say, they are happy to say that (for example) the fact that the object of an experiment really did behave in such-and-such a way should be part of our account of why a given scientist believes what they believe about the object of the experiment.

But Barnes and Bloor insist that this can’t be where our account stops.  One reason for this is that, as they say, nature has always behaved the way it does.  In Barnes and Bloor’s words:

> reality is, after all, a common factor in all the vastly different cognitive responses that men produce to it.  Being a common factor it is not a promising candidate to field as an explanation of that variation.

Moreover, we know from the history of science that working scientists can of course interpret the ‘same’ or similar experimental results in vastly different ways.  Barnes and Bloor give the example of Priestley and Lavoisier having very different interpretations of the same basic experimental data.  The fact that experimental results are interpreted in the way they are by the researchers in question therefore can’t simply be accounted for by the behaviour of the experimental object; it must also be explained by the researchers’ interpretive practices.  And those interpretive practices (Barnes and Bloor argue) are socially determined.

So Barnes and Bloor are not disputing the role of ‘reality’ in determining belief; they are arguing (and here, as often, they are more aligned with the logical empiricists than either group’s reputation would suggest) that reality underdetermines interpretation, and that the other relevant factor – the appropriate object of sociologists’ study – is social norms.  They then argue – and this is the crux of the paper – that there is no non-relativist way to ground that social-normative determination of interpretation.  And here I think we need to be careful to distinguish at least two different elements of Barnes and Bloor’s argument.

The first element of this argument is a fight with anti-relativists who believe a shared universal faculty of reason is a precondition of the intelligibility of communication, science, reason, and so forth.  Here Barnes and Bloor cite, and argue with, Hollis and Lukes.  Barnes and Bloor make the case, in essence, that logical principles like modus ponens are socially instituted rather than features of some essential, invariant core faculty of reason.  As Barnes and Bloor see it, Hollis and Lukes and other rationalists are simply dogmatically insisting on a particular set of conventional practices as necessary features of human reason, without providing any compelling justification for their preferred norms beyond bluster.

I think this argument has a lot to recommend it.  But what I’m interested in here is the second, more general, dimension of Barnes and Bloor’s argument.  Here they argue that the rationalist in general ultimately cannot avoid dogmatically insisting on some category of proposition in which credibility and validity are fused.  In Barnes and Bloor’s words:

> [the rationalist] will treat validity and credibility as one thing by finding a certain class of reasons that are alleged to carry their own credibility with them; they will be visible because they glow by their own light.

I love this passage; I think it provides a great, evocative articulation of the core critique of dogmatic rationalist foundationalism.  But I also think that this element of Barnes and Bloor’s argument is insufficiently attentive to the possibility of anti-foundationalist rationalisms of the kind articulated by Neurath.

Here the relevant question is: what does it mean to say that at some point in the rationalist’s argument, credibility and validity must fuse?  I think Barnes and Bloor mean to suggest that a dogmatically unjustified foundational premise must exist somewhere in the rationalist’s reasoning – and that the explanation for the rationalist treating that premise as foundational must be social.  But should we understand this foundation as aligned with actual philosophical foundationalism?  Or should we treat it as provisionally foundational in the way that is central to the ‘default-challenge-response’ model?

My claim here is that the ‘default-challenge-response’ model allows us to have ‘foundational’ premises for reasoning in a way that does not commit us to philosophical foundationalism.  My claim, moreover, is that there is potentially a large conceptual gap between philosophical anti-foundationalism and relativism.  Barnes and Bloor take themselves to be providing a set of arguments for relativism, but by my lights what they are really doing is providing a set of arguments that could lead to relativism, but could also lead to Neurathian anti-foundationalist rationalism.  (I further think that Barnes and Bloor themselves are influenced by Vienna Circle logical empiricism, and take themselves to be elaborating what they take to be its underlying relativist commitments, but I don’t think we need to follow them down this path.)

So – let’s assume we have followed some inferential chain down to its foundational premise.  This premise is treated as foundational by some local community, but there is no transcendental or metaphysical reason why this premise should be treated as foundational – the fact that it is treated as foundational is a matter of contingent sociological fact.  Have we now ‘relativised’ this premise?  My claim is: not necessarily.  The fact that (on the default-challenge-response model) this premise is currently, by default, foundational doesn’t mean that a challenge won’t bring forth reasons.  The sociologically and epistemologically relevant question is: what kind of reasons are they?

Here we switch levels again, back to the specific features of science as an institution (rather than ‘cognition as such’).  I think the claim advanced by anti-foundationalist scientific rationalists needs to be something like the following: what distinguishes science from other epistemic systems is the institutional-epistemic structure within which ‘default’ premises are embedded, such that users of those ‘default’ premises can legitimately assume that at the level of the system as a whole the premises are not ungrounded, but are rather empirically and rationally grounded by other components of the overall epistemic system.  There is a complex cognitive division of labour here, such that countless methodological and substantive claims serve as locally-unexamined premises for some members of or moments of the system, but no premises are ‘foundational’ for the system as a whole.

This is (at least part of) my answer to the ‘demarcation problem’ (that is, to the question of what differentiates science from non-science).  My claim is that what picks out science as a uniquely reliable epistemic system is an institutional structure that makes this category of claim warranted.

Ok.  This is one of the core claims I want to advance in this blog post, so I guess I want to flag that here and take a short imaginary breather.  However, this claim immediately needs to be qualified, in at least two ways.

First up (a critic might query): are we really saying that no premises are ‘foundational’ for the system as a whole?  What about the premise that ‘nullius in verba’ is a desirable project in any sense in the first place?  Isn’t this simply a normative judgement – a question of world-view – that itself can’t possibly be ‘empirically or rationally’ grounded?  To this set of questions I think I basically, like the logical empiricists, just shrug my shoulders and say “sure, I guess that a lot of this kind of thing comes down to values at the end of the day.”  Perhaps this ‘admission’ is enough to place me in the ‘relativist’ camp.  I’m sure it would be in the eyes of many.  But I think it’s important to remember that – at least within a Brandomian framework – to talk of values is not to leave the space of reasons: for Brandom, all normative commitments are part of the game of giving and asking for reasons.  It’s true that one cannot give a scientific basis for the norms that undergird the scientific enterprise as a whole – but this is not the same thing as saying that no reasons can be given.  Anyway, maybe I’ll come back to this issue another time – there’s much more to say here, but I think it mostly falls outside the scope of this post.

A second challenge is focussed not on the ‘fundamental values’ that animate the scientific project, but on the idea that one can indeed legitimately take scientific institutions as warranting the kind of ‘epistemic delegation’ that treats so much of the received wisdom of science as locally taken-for-granted default background commitments.  And this objection, I take it – as I’ve already briefly discussed – comes on a spectrum.  At one end of the spectrum is global contempt for the scientific project tout court.  But as we move along the spectrum, we reach more and more ‘local’ objections to specific features of the local scientific enterprise in question.  Does such-and-such a scientific claim or practice really merit being taken as an unproblematic default?  Often, clearly, the answer is going to be “no” – indeed, on the anti-foundationalist account I’m endorsing, it sometimes has to be “no”, because these “no”s are how we get scientific progress, discovery, self-correction, etc.  This “no” is the driving force of the scientific endeavour.

At this point, though, I want to introduce one more distinction – the Brandomian-Hegelian distinction between the retrospective versus prospective dimensions of reason.  When Weber writes that the “very meaning of scientific work” is that our current best science “will be surpassed” he is pointing to a prospective dimension of deference to scientific institutions.  In other words, in deferring to the authority of science, we are not just deferring to the current authority of current scientific findings – we are also, and critically, deferring to the broader institutional process by which those findings will, very likely, be overturned or supplanted.  The form of epistemic delegation or authority involved here is quite complex.  We are deferring to the institution of science precisely because we take it to have the resources to supplant the specific claims to which we are at the same time locally deferring.  Here again the tension between the dynamic pluralistic dimension of science and the authority of specific, static, concrete claims or findings is in play.

Ok.  More caveats could be articulated, but we’ve now I think covered the main substantive points I wanted to make about what science ‘is’.  I now want, much more briefly, to apply some of these points to some specific bugbears that have been bothering me recently.  There are two.

The first ‘application’ is around what I see as ‘sceptical’ discourses.  Here I want to use the apparatus I’ve articulated above to draw a couple of distinctions.  As I’ve said ad infinitum now, there is a constitutive tension in scientific institutions between the fact that in principle everything is open to challenge, and the fact that in order for any progress to actually be made on anything, a huge amount needs to be ‘black-boxed’ as locally-unexamined premises.  Science as a whole is a way to manage this tension, by permitting justified ‘black-boxing’ at a local level, on the grounds that at the global level of the institution as a whole everything is contestable, via the division of epistemic labour.  This division of labour can be diachronic as well as synchronic.  Thus even when a near-total consensus is reached in the present, this is because there has been epistemic delegation to earlier generations of researchers, who robustly debated and tested these conclusions so that we don’t have to.  Moreover, this delegation can be prospective – we can take a premise for granted on the assumption that in the future we will get around to assessing its legitimacy more robustly than we yet have.  All of this is how science works.

Now: what I want to object to, using this apparatus, is sceptical discourses that take locally-unexamined premises as evidence of anti-scientific thinking.  It’s easy to see where this idea comes from: science is meant to challenge presuppositions and take nothing for granted, and yet manifestly you have countless working scientists who are simply taking huge amounts of stuff on trust, on the authority of sources that they haven’t bothered to independently assess, or whose work they haven’t bothered to master.  This is the antithesis of science (the reasoning goes)!  Therefore science is a fraud.

And my strong counter-claim is that this is just a fundamental misunderstanding about how science works.  This kind of ‘scepticism’ is the application of (often a fairly debased version of) Enlightenment Mark One epistemological reasoning to an epistemic institution that simply isn’t justifying its epistemic claims in this way.  That doesn’t mean that the scientific claims in question are right.  It may, in any given case, in fact be the case that the authorities on which scientists are relying are misguided, that the locally-unexamined premises are bad ones, and so forth.  Challenging such premises is (part of) the work of science.  But neither is it intrinsically irrational or unscientific to engage in the kind of epistemic delegation, the kind of deference to authority, that is here being criticised.  Epistemic delegation and deference are simply non-negotiable features of science (indeed, of reason) in general. 

The characteristic sceptical move that I’m here objecting to, in other words, is an apparent belief that unless any given individual can trace back the chain of inferences to the comprehensive evidence-base that justifies their claims, those claims lack justification.  This is what’s (at least purportedly) going on (often, not always) when you see ‘sceptics’ of whatever kind demanding “a source for that claim” in online debates over (often relatively consensus) science.  The issue isn’t that it’s illegitimate to want claims to be sourced.  The issue is that it’s unrealistic to expect any given individual to be able to personally replicate, on demand, the inferential chains that ground the entire research enterprise in question.  By challenging individuals in this way, you are not ‘exposing’ the fact that they are making claims without evidence – rather, you are indicating that you don’t understand the epistemic basis for scientific authority.

That’s my first bugbear – which really just derives from observing, and sometimes participating in, too many tedious online debates with people who think they are being particularly rational by making these kinds of discursive moves.  But this is a quite petty bugbear.  The second point I want to make is perhaps slightly more meaningful.

This second point concerns my own research – and here I guess I want to get very slightly more autobiographical.  Whenever you talk about your past self’s errors I think it’s easy to overstate and simplify things, which I’m definitely going to do here – I didn’t in fact hold clean ‘ideal type’ positions of the kind I’m about to attribute to myself.  But still, I think I can discern in retrospect some kind of intellectual trajectory that roughly traverses the same ‘dialectic’ that I outlined towards the start of this post.  That is to say: I think when I was (let’s say) a teenager, one of the appeals of philosophy for me was something in the general ballpark of the ‘crude’ ‘Enlightenment Mark One’ idea of excavating through layers of unwarranted belief that one had inherited from one’s social environment, in order to find bases for belief that were more robust than simply accepting contingent tradition.  Then as I started actually studying philosophy I became pretty convinced – let’s say in my early twenties – that this was a pipe dream, and that you can’t get away from the contingently social determination of your categories.  This in turn I think led me to a more ‘critical-theoretic’ space, which was highly sceptical about philosophical rationalist claims.  And then, from that more critical space, I feel like I’ve slowly assembled the resources required for a more pragmatist and social-theoretic rationalism – thanks in no small part (obviously) to Brandom.  Again, I think I’m sort of warping things a bit to fit into this narrative, but there’s something to it, in terms of my own personal intellectual trajectory.

The point being, I guess, that I definitely feel like I understand the persuasive pull of what I would now characterise as two different categories of scepticism.  One category of scepticism seeks a socially-transcendent basis for the critique of the social determination of belief (and becomes a hyperbolic form of scepticism because it is in fact impossible to find such a basis).  Another category of scepticism relentlessly critiques claims to such a basis for rational judgement, on the grounds that all such bases are contingently socially determined, and therefore unreliable.  I take it that the broad philosophical orientation I’ve outlined above incorporates at least some of the strengths of both orientations, in the service of a more robust and reasonable rationalism and empiricism.

But one still encounters the ‘sceptical’ challenge – and one encounters it in two ways.  First, introspectively: I think most people with this kind of ‘philosophical’ orientation are nagged by worries about the basis for their beliefs; these worries are the motive for a lot of theoretical and scientific inquiry.  Second, though, one encounters these sceptical challenges from others.  And this is what I want to conclude this post by talking about.

One of the ways to think about rationalism (extensively criticised by Barnes and Bloor in their paper on rationalism and relativism) is to assume that there are shared fundamental commitments that are constitutive of sapience as such.  If this is your ultimate grounding for the reasonableness of our commitments, then the process of argument and persuasion can be understood as a practice of following inferences ‘upstream’ until one reaches commitments that nobody could possibly reasonably reject.

A social-pragmatist rationalism rejects this approach.  For the approach I am endorsing – exactly as Barnes and Bloor say – the commitments that we reach when we follow a chain of inference back to its self-evident premises are socially contingent.  They are locally unchallenged, but that does not mean that they are rationally unchallengeable – quite the reverse.  What you take to be self-evident is a function of the social norms that you contingently endorse – in large part due to the relevant recognitive community of individuals of which you are a member.  In other words, the establishment of ‘self-evident’ premises for reasoning is in significant part a process of socialisation: if a commitment is ‘self-evident’ or premiseless, this is a fact about your socialisation, or your social milieu, not a fact about the commitment.

Now, one of the implications of this understanding of how our epistemic world functions, is that we are typically inclined to an asymmetry about commitments.  Of course the commitments that I regard as self-evident really are self-evident.  On the other hand, the commitments that you regard as self-evident are manifestly not self-evident at all; on the contrary, I can see as clear as day that these are unexamined prejudices inculcated by your social environment, from which you have failed to free yourself.  Similarly, my reasoning moves in secure, robust steps, relying on only the most self-evidently legitimate inferences.  By contrast, your reasoning is constantly supported by implicit and yet dubious substantive commitments that you not only have failed to justify, but that you apparently fail even to recognise as commitments requiring defence.  This asymmetry is, of course, a structural feature of the fact that we have different background assumptions – different locally-foundational premises.  It is important not to mistake this difference in local default premises for a difference in rationality as such.

Now, the fact that we all have slightly different – and sometimes substantially different – locally-foundational premises for our reasoning means that when we encounter someone else’s arguments, we are often inclined to challenge some of what they have to say.  We take it that we see things clearly that they are confused about.  And there’s nothing wrong with this!  This is the process of asking for and giving reasons that makes us rational creatures in the first place!

And yet here we again encounter a version of the tension that has animated this entire post.  For, as I argued above, it is not just the challenging of default premises that permits us to inhabit the space of reasons, but also the creation of default premises. Without the rich backdrop of locally-unchallenged default commitments – both explicit and implicit – our reasoning processes wouldn’t be able to get off the ground at all.  In other words, we (or the traditions we inhabit and inherit) have to make decisions (whether deliberately or implicitly) about what premises will not be challenged within any given moment of the game of asking for and giving reasons.  Neurath’s sailors cannot remake the entire boat at once – they cannot remove the planks beneath their feet, even if they can choose where to stand.

In other words, part of the set of decisions we make in engaging in rational thought and discourse is precisely what commitments are not up for debate – at least here and now.  This is true of thought in general – but it is also true of scientific discourse.  This fact is the basis on which Polanyi can construct his two-stage account of science as a structure of authority-relations.  Admittance to the community of practising scientists is accomplished by a process of socialisation in which a set of shared community commitments are established.  Then, this having been accomplished, those commitments serve as the ground upon which individual scientists can stand, as they aim to dismantle and reconstruct some elements of the framework they have been socialised into.  As I discussed above, there are significant epistemic risks from the kind of gatekeeping associated with scientific community socialisation – but some establishment of default premises upon which reasoning can build is an essential precondition of establishing any community of epistemic practice.

From the perspective of those outside the community in question, however, this can easily look like a process of unreason at work.  Here, after all, in the process of socialisation, we are effectively engaged in the construction of an in-group out-group boundary, where those who refuse to accept the principles and commitments of the in-group are banished from its charmed circle.  Moreover, these mechanics of in-group and out-group membership really do frequently involve a process of unreason at work.  It is often a lot easier to respond to the ‘challenge’ moment of the ‘default-challenge-response’ model by simply insisting that those who do not accept a given premise are not welcome here, than it is to provide a substantive rationale.  And it’s easy to see how this dynamic can operate in the service of irrationalism – not irrationalism in the sense of “stepping outside the space of reasons” but in the sense of “providing bad reasons”.

But here’s my claim: the goal of the institution of science is to establish an overall institutional structure that allows a plurality of locally-unexamined commitments to serve as epistemic checks and balances against each other, while simultaneously permitting local sub-communities to fruitfully pursue lines of thought facilitated by the establishment of very substantial – or even very contentious – locally-taken-for-granted premises.  I’m here basically saying that Brandom’s ‘default-challenge-response’ model, plus Polanyi’s account of the republic of science, provides a more elaborated account of the way in which Neurath’s boat operates as a specifically scientific institutional structure and dynamic.  Moreover, I’m arguing that subcommunities of practice are critical to the dynamics of Polanyi’s account of scientific pluralism.

Here it helps, I think, to consider another element of the social character of reason: it really helps to think things through, if we have other people to bounce ideas off.  Of course, at some very abstract level, if we accept the Brandom-Hegel account of reason, reason is necessarily and intrinsically social: you can’t have reason at all without community.  But moving down levels of abstraction, and having gotten our rational, sapient recognitive practices off the ground in some sense, it also just practically helps to have people to talk to and think with on any given specific topic.  This is what a scientific discipline is, it’s what a subdiscipline is, it’s what a research programme is, it’s what a research team is, it’s what a collaboration is.  These are all ways in which people work together on the basis of shared default premises.  The point is that when people with similar default premises and commitments come together, they can build on those premises together in a way that is impossible for people with more widely divergent worldviews.  Here the ‘black-boxing’ of much of the debate that animates science in general allows the kind of focus on specific problem-spaces that often leads to scientific advance.  Without that ‘black-boxing’, I’m claiming, you wouldn’t be able to move far enough down the relevant inferential chains to get to novel findings.  If you’re constantly having to re-establish the rational and empirical bases of fundamental commitments, you don’t have the cognitive resources left over to follow the implications of those commitments.  This, I’m claiming, is why the kind of broad scepticism (“where’s your source for that?”) that I mentioned above is destructive of the ability to actually pursue a lot of scientific inquiry, if taken too seriously.

The pluralism that is constitutive of a well-functioning scientific institutional dynamic is a pluralism of these subcommunities of shared commitments.  In my PhD, and following Jason Potts, I called these communities “scientific innovation commons”.  But in my PhD I didn’t, in my view, adequately draw out the epistemic implications of this institutional structure – I’m trying to do better in this post.

So the model of science that I’m proposing here involves a pluralism of research subcommunities, each with their own local norms and commitments, which knit together into a cognitive division of labour that – if everything is working roughly as it should – allows each subcommunity to serve as an epistemic check and balance on others, resulting in a (diachronic) large-scale dynamic that can credibly claim, as a whole, to approximate the ‘rejection of tradition’ that early modern Enlightenment thinkers hoped and failed to achieve at the level of the individual scientist, precisely via the way in which this community as a whole constructs and transforms its own traditions.

If this pluralism is to function as an internal check and balance system, though, it needs to be genuine pluralism.  As Zollman’s formal opinion dynamics modelling illustrates, a community that too-quickly orients to consensus is an epistemically unreliable community.  And this in turn, I’m claiming, produces another apparently paradoxical result: there is – potentially – an epistemic virtue in local research subcommunities refusing to ‘update’ their presuppositions in the light of criticism from other subcommunities.  Obviously we don’t want everyone to be too fanatical or dogmatic.  But neither do we want everyone to rush too quickly towards consensus.  We want a diversity of research programmes each of which can explore the implications of their approach in some depth.  Only by permitting and facilitating this kind of ongoing pluralism are we (the community as a whole) able to reliably assess the strengths and weaknesses of these different research programmes.  
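For readers who haven’t encountered Zollman’s models: they are typically ‘bandit’ problems played out on a communication network – agents choose between two options of uncertain payoff, and share their results with their network neighbours.  The following is a minimal sketch of my own, not Zollman’s actual setup or parameters (all the specific numbers and names here are mine), but it conveys the basic mechanic: densely connected communities pool evidence faster, which can mean the whole community abandons a promising option on the strength of an early unlucky streak, before anyone has gathered enough data to vindicate it.

```python
import random

def simulate(network, n_agents=6, rounds=200, p_good=0.55, p_bad=0.5, seed=0):
    """Minimal two-armed-bandit sketch in the spirit of Zollman's models.

    Arm A's success rate (p_bad) is taken as known; agents hold evolving
    beliefs about arm B, pull whichever arm currently looks better, and
    share each result with their network neighbours.
    """
    rng = random.Random(seed)
    # (successes, trials) for arm B per agent; optimistic prior (1, 1)
    # so every agent initially experiments with arm B
    beliefs = [[1, 1] for _ in range(n_agents)]
    for _ in range(rounds):
        results = []
        for i in range(n_agents):
            s, t = beliefs[i]
            if s / t > p_bad:                 # arm B looks better: try it
                results.append((i, 1 if rng.random() < p_good else 0))
        for i, outcome in results:            # share with self + neighbours
            for j in network[i] | {i}:
                beliefs[j][0] += outcome
                beliefs[j][1] += 1
    # fraction of agents who end up (correctly) preferring arm B
    return sum(s / t > p_bad for s, t in beliefs) / n_agents

# A cycle (sparse) vs a complete graph (dense) on six agents
cycle = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
complete = {i: set(range(6)) - {i} for i in range(6)}
```

Run over many seeds, the sparser `cycle` network tends to preserve experimentation with arm B for longer than the `complete` network, where one shared run of bad luck can switch everyone off the better arm at once – a crude, statistical version of the point about premature consensus.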

And here the prospective dimension of the scientific process also becomes relevant.  Our research is future-oriented.  It is aimed towards some future collective process of assessment.  This is the point of the ‘conjectures and refutations’ model of science – we don’t need a rationale now for a given research programme, we simply need the possibility of a future evidence-base that will support (or refute) our hypotheses.  We can then set about the task of discovering whether that evidence-base will (or could) in fact exist.  Thus even highly speculative and flimsily-supported-by-traditional-commitments communities of research are a fully legitimate – indeed, essential – dimension of the dynamics of science as a whole.  Of course, if a research programme is unable, over the long term, to find rational or empirical justifications for its existence, then it slowly loses value within the pluralistic chorus of scientific debate.  But we wouldn’t want scientists to abandon a research programme at the first sign of trouble – some measure of perseverance, even on highly unpromising ground, is essential to the overall collective endeavour.

Ok.  But here we reach some of the tensions that I gestured at earlier.  For how is an individual scientist or researcher to behave, within this institutional framework?  I have made a case for strong scientific pluralism.  I have made the case that such pluralism requires perseverance from scientists and scientific subcommunities even in the face of epistemic adversity.  But how does an individual scientist’s decision-making fit within this framework?  Let’s say a team of researchers is pursuing a research programme.  They find unpromising results.  It seems to them, in the light of those results, that it is much more likely that an alternative research programme is the right one.  And yet, were they to abandon this current research programme, the result would be a significant reduction in the pluralism of this section of the scientific ecosystem.  Do the scientists in this research team have an epistemic obligation to pursue what they now see as the most credible lines of reasoning and inquiry, and abandon their current research project?  Is this what following the best evidence – apparently a core scientific norm – demands of them? Or do they have an obligation to maintain scientific pluralism by doing the best they can by their current research programme, even though they have lost faith in it?  After all, a degree of epistemic pluralism is (I have argued) critical to the credibility of science as a whole.  And if they pursue this second course of action, at what point does this course of action stop being a commendable commitment to scientific rigour in pursuing an unpromising line of inquiry for the sake of epistemic pluralism and meticulous care in eliminating unlikely possibilities (and on the off-chance that it turns out to be correct after all), and start being a dogmatic refusal to accept the best scientific evidence?

I don’t think there are really correct answers to these questions.  Individuals need to make their own decisions.  But I think these kinds of problems present themselves if you accept the understanding of science that I am recommending.

And now, finally, we come to the rather self-involved personal reflections that I was trying to get to with this line of thought.  As I was saying above, I feel like my own personal intellectual trajectory has been driven by some measure of the kind of ‘scepticism’ that I’m here expressing wariness about.  That is to say, the kind of scepticism that asks “how can we be sure that what we’re doing is even in the right ballpark here?”  I think this was the kind of worry that drew me into philosophy, and it’s also the kind of worry that pushed me away from philosophy (because too much philosophy seemed itself to be dogmatic, to me).  I think this kind of worry (“what if our basic approach and categories are just wrong?”) has sent me running around between different fields and subfields – philosophy, sociology, economics – in part because I didn’t want to just accept being socialised into such-and-such a set of handed-down disciplinary norms.  And I don’t think that impulse was exactly misguided, though I certainly would have benefited from applying myself a lot more along the way.

In any case, I feel I’ve been, over time, reasonably responsive to both this kind of introspective scepticism, and to the ‘external’ scepticism of people telling me that I’ve gotten it all wrong and I should be thinking about things in [such-and-such] terms instead.  And I feel I’ve learned a lot from those kinds of interactions.  Recently, however, I’ve found myself increasingly unwilling to take this kind of advice – to listen to people telling me that I’ve gotten things all wrong.  And I feel like this unwillingness comes from two distinct sources.

The first source is that at this point I’ve been reading and thinking about the areas that interest me for (let’s say) about twenty-five years.  In that time I’ve done a lot of thinking – and I feel like I’ve already at this point given consideration to a lot of the kinds of objections and criticisms that people throw at me.  Increasingly, my reaction to being told that I haven’t considered [X] is not (as it once was) “oh, yeah, I should really spend some time reading or thinking about [X]”, but rather “yes I have”.

So that’s one consideration: one reason why I’m less inclined to put a lot of energy into responding to objections to my overall intellectual project.  But of course, as I discussed above, the fact is that there is always room for giving more thought to any given issue.  So it’s really not a very reliable or commendable attitude, to simply think to yourself, “no, I’ve already settled that”.

The important consideration, in my view, is a different one: the role of research programmes in the scientific epistemic system.  As I discussed at length above, there are two sides to intellectual progress: on the one hand, challenging taken-for-granted ‘default premises’; on the other hand, adopting premises as taken-for-granted defaults, in order to explore their implications.  And my view is that, in my intellectual life to date, I have spent a large proportion of my time doing the former, and it is now time to focus on the latter. 

In other words, I feel I have a research programme here.  That research programme is still much more inchoate and underdeveloped than I would want it to be, at my stage of life.  And of course the research programme may be flawed; its premises may be faulty; its goals may be misguided.  But I feel – whether rightly or wrongly is not for me to say – that at this point I’ve done enough in trying to establish reasonable default background premises.  Now I want to actually try to do something with the intellectual resources I’ve committed myself to.

All this is by way of saying, that at this point in my life I regard it as basically an appropriate response to those questioning the premises of my intellectual project to simply say: well, this is my research programme; if you think it is flawed, there are many other research programmes out there which may be more to your liking.  After all, as I’ve argued at inordinate length in this post, science is a pluralistic epistemic system.  Part of what makes science epistemically reliable is precisely the fact that it has lots of people running around in it doing misguided things, pursuing misguided research programmes.  I think it’s clear enough that I’m at the more crankish end of the research spectrum: I’m not affiliated with any institution; I publish in academic venues infrequently; I work through my intellectual interests in rambling, loose, overly personal blog posts like this one; and so on and so forth.  But that’s ok.  Science, in the expansive sense, has space for all of this, and much more.  My goal on this blog, and in my work in general, as I see it, is now to pursue the lines of thought I’ve committed myself to.  We’ll see how much I can get out of them, in whatever time on this earth I have left.

Another one of those blog posts in which I post about ideas which I should probably be reading about instead.  Yes, it would be better to know more about what I’m talking about before posting, but I find it helpful to try to organise my thoughts as I go, and that’s what the blog is for, so here we are.  I’ll aim to refer to some of the literature that I’m not referring to, so to speak, as we go.

So – in the last post I re-articulated elements of this blog’s case against the labour theory of value.  One of the reasons that Marxists have traditionally adopted the labour theory of value (and thus one of the arguments against its rejection) is that it gives us a way to understand and elaborate the category of ‘exploitation’, which is taken to be central to Marxism’s critical standpoint.  In the simplest version, the argument goes something like: it’s labour that generates value; much of that value is appropriated by the owners of capital rather than those doing the work; the ratio of value appropriated to value produced is the rate of exploitation.  Obviously this can be elaborated in a lot of more complex ways, but that’s the basic idea.  Then the idea is: the central problem with capitalism is exploitation, and our political-economic goal is to eliminate exploitation.  From this perspective, the centrality of the category of exploitation is what distinguishes Marxist from non-Marxist political economy, where non-Marxist political economy has developed a complicated theoretical apparatus to elide the centrality of exploitation to capitalism.

Then there is a subtradition within Marxism that rejects the labour theory of value in one way or another – I think I’m right in saying that all the analytical Marxists end up in this space.  If this is where you are then the question becomes: well, what happens to the category of exploitation?  How can we still make use of the category of exploitation without the LTV?  And there are a range of different answers to this question.  John Roemer’s early work is one effort to answer this question, deriving a concept of exploitation from more ‘mainstream’ economic resources; Vrousalis’ recent, more philosophical, work is another.

My basic thought in this post is: do we really need the category of exploitation?  Or, more properly and carefully, do we really need the category of exploitation to be so central, either to our analytic framework or to our critical categories?  Obviously one response to these questions would be: “if you want to be any kind of Marxist, of course you fucking do! Faux-radical idealist revisionist scum like you are the reason [etc. etc.]”, but this doesn’t seem like an actual argument to me.  On the other hand, there are serious reasons, in my view, why we might not want to accord the category of exploitation – even shorn of its LTV underpinnings – the status that it has been granted by much of the Marxist tradition.

My intent is largely to bracket the question “why might we not want to give exploitation a central categorial status?” until some hypothetical future post.  But I’ll take as my jumping-off point Vrousalis’ recent work on exploitation as domination.  I haven’t read Vrousalis’ recent book yet, so again this is all rather half-baked – but Vrousalis’ basic idea, I take it, is to say something like the following.  Analytical Marxism’s critique of traditional LTV-based theories of exploitation is bang on the money.  But analytical Marxism’s effort to produce an alternative account of exploitation – we’re basically talking about Roemer here – fails to pick out the appropriate set of political-economic phenomena.  Instead, we should understand exploitation as a sub-category of domination: domination for economic gain.  In Vrousalis’s more analytic vocabulary (the quote is from his paper ‘Exploitation, Vulnerability, and Social Domination’):

A exploits B if and only if A and B are embedded in a systematic relationship in which (a) A instrumentalizes (b) B’s vulnerability (c) to extract a net benefit from B.

(Vrousalis 2013: 132)

My view, for what it’s worth, is that this is a more promising way to understand exploitation than those proposed by much of the tradition. But my thought is: what’s doing the critical heavy lifting in this definition is the concept of domination that it’s operationalising.  Why not just build our approach on that concept of domination?  Or, perhaps more properly (and this is all more a critique of the tradition than of Vrousalis), why prioritise this specifically extractive category of domination in our political-economic critiques?

My motivating worry here is that there are categories of domination that we want to be able to talk about in critical political economy that do not in any immediate – or arguably even in any very mediated – sense involve one party extracting net economic benefit from the other.  There are some probably relatively easy cases to deal with – like scenarios in which there is, as it turns out, no net benefit to be had.  (A business fails, so there is, as it turns out, no value extraction, because there turned out to be no value creation over the relevant time period – one can probably deal with this by building in expectations in some way.)  But there are also major forms of domination that don’t have any clear value extractive component.  Much of the carceral state is in this category, in my view.  Prisons are often very expensive to run, and many of them don’t themselves generate economic value.  Clearly they are critical to maintaining the political-economic order, and in that sense they could be said to centrally contribute to macro-level value extraction – but this seems like a very indirect way of analysing what’s going on with the carceral state.  Better, in my view, at least in the first instance, to just talk about this category of domination directly.  Something related could be said for much capitalist military activity.  From a (very broadly) Marxist (or if you prefer just ‘radical’) perspective, we can understand a great deal of military activity as economically motivated in some sense – but even here, it is unclear that this economic dimension is best captured by the category of exploitation.  Are settler-colonial wars of conquest, extermination, subordination and expropriation examples of exploitation?  
One could make the case, if one extends one’s categories in the right way – but I think the traditional Marxist approach has been to see this kind of violence as the ‘primitive accumulation’ that precedes and sets the stage for a properly exploitative capitalist economic relation – i.e. as falling outside the category of exploitation.  And these are far from the only phenomena of interest that the category of exploitation doesn’t intrinsically seem to latch on to.  More on all this, perhaps, some other day.

My worry, in short, is that the category of exploitation risks narrowing the focus of our critique of capitalism in a way that misses many of the most direct and appalling examples of coercion and domination we might want to analyse – even potentially including economic coercion and domination.  Alternatively, the tacit belief that we need to explain those phenomena in a way that retains the centrality of the concept of exploitation can warp our analysis.  And my thought, therefore, is that we should ground the critical categories of our political economy not on the idea of exploitation, but on the concept of domination that it (plausibly, as per Vrousalis) presupposes.

Ok.  None of what I’ve just said is actually attempting to make an argument that I would expect anyone to find convincing – again, my goal here is basically just to start to get some of my thoughts in this space in rough order.  But let’s assume we’re granting for the sake of argument the idea that domination is a better core category for critical political economy than exploitation.  What is the concept of domination?  

Here, again, I’m going to start by flagging my ignorance.  There is a large recent literature in political philosophy around the concept of domination – much of it seemingly centred on the ‘republicanism’ associated with Philip Pettit and other related figures.  I don’t know this literature.  When I look at it (and this is no criticism – it’s always the case when one looks at unfamiliar philosophical terrain) I find people drawing and debating fine-grained distinctions, the function and consequence of which escape me.  Clearly at some point I need to read this literature – but that time has not yet come.  So what follows is basically me just sounding off with my own associations, and I’ll try to come back around and engage with the actual debates in this space at some future time.

I want, then, to distinguish between four different senses in which one might think about the category of domination.  ‘Domination’ is very likely not the word I ought to be using for some – even, conceivably, all – of these ideas, but again my goal is just to crudely organise some thoughts.

  1. Straight-up murdering people.  Clearly this is an interpersonal form of violence.  Arguably it’s a limit case for coercion, and (again) arguably it doesn’t usefully fall within the frame of ‘domination’ at all – but I think it needs to be in play here as one of the major ways in which people exert force against other people.
  2. Direct coercion.  That is to say, using force to bend another person to one’s will.
  3. Differential bargaining power.  Here we’re fully into the terrain of ‘routine’ economic interactions (and standard economic theory) – most economic interactions involve some difference in bargaining power between the relevant parties, and this is the kind of thing that we now have a very extensive and developed formal apparatus to model.
  4. Impersonal domination.  Here I’m thinking of scenarios in which there is no immediate interpersonal relationship of domination between two individuals, but rather a ‘structural’ kind of domination that does not route via anyone’s direct intentions.  An example of this might be the macroeconomic dynamic whereby a community’s income is dependent upon the production of some specific commodity, to cater to a wealthier export market, and for whatever reason the demand in that market greatly reduces, leaving the exporting community impoverished.  This is an example of ‘market forces’, with no specific social actors intending this specific outcome, and yet it is an example of the exertion of (de facto brutal) power by one set of economic actors over another.  Probably there are better examples than this – but basically what I want to capture is that economics is full of dynamics that are the result of “human action but not human design”, and some of these dynamics will effectively be dynamics of domination, even when many, or even all, of the relevant actors are not engaged in direct interpersonal domination.

So we have here four different categories of domination – and I think they can be roughly ordered on a sliding scale from most to least personally violent.  Murdering someone is the limit case of violent action.  Coercion is the threat of violence – including deadly violence – to achieve gain.  Differential bargaining power will very often not include direct coercion at all, but it does involve a power differential.  And finally ‘impersonal’ domination doesn’t involve any direct interpersonal domination at all, but this doesn’t detract from its ability to produce de facto cases of domination.

Anyway, this is all I really wanted to say in this short post.  I’m not suggesting that anything I’m saying here is very noteworthy – and certainly everything here is so telegraphic in its articulation that I can’t imagine it would have the ability to persuade anyone of anything.  But my goal here, as I keep saying, is to get my own thoughts straight about the broad brush strokes of my research programme.  And this is the idea: domination should be a fundamental analytic category in critical political economy – more fundamental than, for example, exploitation.  Moreover – though of course I’ve not explored this at all on the blog to date – it seems to me that formal economic theory has significant resources to explore this problem space.  After all, there’s an entire subfield – bargaining theory – which looks specifically at differential bargaining power, and these analytic tools inform mainstream economic theory in all kinds of sophisticated ways.  So my broader idea, as I continue to work my way through economics, is to keep an eye on these themes, with the broad perspective or approach articulated in this post in mind.
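Since I’ve mentioned bargaining theory: the canonical formal treatment of differential bargaining power is the Nash bargaining solution, in which each party’s ‘outside option’ (disagreement payoff) determines their share of the surplus.  Here is a toy sketch – my own illustration, with made-up numbers, not anything drawn from the literature discussed above – of how a power differential gets encoded formally:

```python
def nash_split(surplus, d1, d2, alpha=0.5):
    """Split `surplus` between two parties by maximizing the (asymmetric)
    Nash product (x - d1)**alpha * ((surplus - x) - d2)**(1 - alpha),
    where d1 and d2 are the parties' disagreement (outside-option) payoffs
    and alpha weights party 1's bargaining power.  The first-order
    condition gives the closed-form maximizer used below."""
    assert d1 + d2 <= surplus, "no gains from trade"
    x = d1 + alpha * (surplus - d1 - d2)      # party 1's share
    return x, surplus - x                     # (party 1, party 2)
```

So, for instance, if a worker with no outside option (`d1 = 0`) bargains with an employer whose outside option is worth 60, over a surplus of 100, the symmetric (`alpha = 0.5`) solution gives the worker 20 and the employer 80: the power differential lives entirely in the disagreement points, with no direct coercion anywhere in the model.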

A quick and not very useful blog post (edit: turns out not to be very quick either!) that aims to scout out some fairly familiar intellectual terrain using a slightly different vocabulary.  Specifically, I want to talk about different understandings of Marx’s value theory using the vocabulary of Huw Price’s distinction between “object naturalism” and “subject naturalism”.

As I’ve written about on the blog before (though probably not for quite some time now), this blog rejects the labour theory of value.  (I will sometimes shorthand ‘labour theory of value’ as ‘LTV’.)  This post isn’t really about the arguments for and against the LTV – hopefully in the longish medium term I will work through the analytical Marxists in a systematic rather than a haphazard way, and maybe I’ll discuss this issue more then, and with more abundant references.  Still, I will begin with a (very) brief recapitulation / summary of the basics of the argument.  I’ll then go on to talk about how Price’s opposition between ‘object naturalism’ and ‘subject naturalism’ can, in my view, shed some light on the stakes and implications of that debate.

So – what we can call the ‘simple’ or even ‘vulgar’ LTV runs something like this.  Question: what determines the value of commodities in the capitalist marketplace?  Answer: the amount of labour that went into producing them.  Understood in this way, the LTV is an explanatory or causal account that explains one phenomenon (the price or value of commodities) in terms of another (labour inputs in the production process).  From this starting point you can then (if you wish) make a number of further claims.  For example, you can explain the tendency of the rate of profit to fall in terms of the tendency of capitalist firms to increase the degree of automation in production: fewer labour inputs per unit produced means less value per unit, so less surplus value available for capitalists as profit.  You can also use this value theory to ground a political-ethical case against capitalism in terms of exploitation: if all value is generated by labour (and none by, say, the risk-taking associated with investment, or entrepreneurial innovation, or capitalist managerial acumen, or just the value of raw materials or of technological inputs into the production process) then workers deserve to receive the full fruits of their labour, which in turn necessitates a massive overhaul of our political-economic system.
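
To make the falling-rate-of-profit arithmetic concrete, here is a toy calculation in Python.  The numbers are invented purely for illustration (nothing empirical is being claimed), and I’m using Marx’s standard notation: constant capital c (machinery, materials), variable capital v (wages), and surplus value s, with the rate of profit given by s / (c + v).

```python
# Toy arithmetic for the 'vulgar' LTV story about the falling rate of profit.
# All numbers are invented for illustration; nothing here is an empirical claim.

def profit_rate(constant_capital, variable_capital, surplus_value):
    """Marx's rate of profit: s / (c + v)."""
    return surplus_value / (constant_capital + variable_capital)

# Before automation: lots of labour (v + s), little machinery (c).
before = profit_rate(constant_capital=20, variable_capital=40, surplus_value=40)

# After automation: machinery substitutes for labour, so c rises while
# v + s (the value added by labour, on the LTV story) shrinks.
after = profit_rate(constant_capital=60, variable_capital=20, surplus_value=20)

print(f"profit rate before automation: {before:.2f}")  # 0.67
print(f"profit rate after automation:  {after:.2f}")   # 0.25
```

On the ‘vulgar’ story, only labour adds value, so automation mechanically squeezes the profit rate – which is exactly the inference the objections below put under pressure.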

The problem with this ‘simple’ or ‘vulgar’ labour theory of value is that it’s manifestly a bad, false theory: labour inputs cannot usefully be taken to determine prices.  This is for a whole range of reasons, including the following:

  • How do you measure labour inputs? Clearly there is no such thing as homogeneous labour: there are countless different kinds of labour, and it’s unclear how to compare them.  One could, of course, compare them by analysing their productivity in terms of the value they produce – but this is very clearly question-begging if labour inputs are to serve as any kind of independent explanatory variable in determining commodities’ market value.
  • Clearly things other than labour inputs contribute to the value of commodities: machinery, raw materials, market conditions, etc.  Labour applied to the production of a desirable and rare commodity clearly results in a more valuable product than the ‘same’ (or similar) labour applied to an undesirable and abundant one.  Etc.
  • One of the features of capitalism is wild swings in the value of commodities – boom and bust cycles for firms, for industries, for national economies, and for the global economy.  Clearly these expansions and contractions of value are not driven by expansions and contractions in labour inputs – quite the reverse: swings in labour inputs (e.g. swings between employment and unemployment) can be explained by these boom and bust cycles.

I’m not suggesting that this list of objections to the labour theory of value is complete – but I think it’s clear that there are some very formidable hurdles for the theory to overcome if it is to be treated as a useful explanatory framework.

Now, theorists in the Marxist tradition (and, before that, in the classical political economic tradition) are not nitwits, and they are aware of these objections to the LTV.  At this point in the argument (or well before) they therefore object that my characterisation of the ‘vulgar’ LTV is an absurd straw man.  They then go on to (if you are sympathetic to the LTV) clarify the rich and sophisticated account of which this summary is a debased shadow, or (if you are unsympathetic to the LTV) add a series of gimcrack epicycles.

What are these clarifications/epicycles?  I think broadly speaking there are two key moves here.  First, LTV advocates insist that the LTV is a theory, not of price, but of value – where value as a concept is clearly not totally disconnected from price but also cannot be reduced to it.  This in turn leads to a vast set of debates about the so-called ‘transformation problem’: how are value and price related, or how can value be ‘transformed’ into price?  Second, LTV advocates insist that the LTV is interested not in some homogeneous empirical labour concept, but in “socially necessary labour time”, or “abstract labour”, or both.  In short, both “actual labour” and “price” acquire counterpart ‘shadow’ categories that are somehow related to them, and yet also non-identical with them.  The theoretical claims that seem to be clearly false when made about the relation between actual empirical labour inputs and actual empirical prices can then be said to be true of these more abstract and opaque parallel categories.

Now, I don’t think these categories (value rather than price; socially necessary labour time and/or abstract labour rather than empirical labouring activities) are empty or useless.  The problem with these categories, rather, is that they do not seem to be able to serve anything like the same explanatory role that “labour inputs” serves in the ‘vulgar’ labour theory of value.  The reason they cannot serve this function is that these categories – at least if understood correctly, by my lights – are intrinsically ‘back-constituted’ by the market processes they apparently aspire to explain.

So, for example: the “socially necessary labour time” it takes to produce a commodity is not just a fact about labour – it is a fact about the entire state of the industry and the broader world economy within which the labour in question is embedded.  If a technological innovation on the other side of the world can transform the socially necessary labour time required to make a widget in my factory (by increasing general global productivity in the widget industry, and thereby reducing socially necessary labour inputs for the industry as a whole), this ‘measure of labour’ is not simply measuring labour: it is measuring general market conditions.  This category therefore cannot serve an explanatory function as an independent variable that aspires to explain price or value within that market – because the behaviour of the broader economic system is already embedded within the category.  What may appear at first glance to be a basic explanatory category is in fact an output of the system.
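
Here is a minimal sketch of this back-constitution point – entirely my own construction, not anything from the Marxological literature.  If we gloss ‘socially necessary labour time’ crudely as the industry-average labour time per unit, then the measure attached to ‘my’ labour moves whenever anyone else’s productivity moves:

```python
# A crude proxy for 'socially necessary labour time' (SNLT): the
# industry-average labour hours per widget.  This is a property of the
# whole industry, not of any one factory's labour.

def snlt(hours_per_widget_by_firm):
    """Industry-average labour hours per widget."""
    return sum(hours_per_widget_by_firm) / len(hours_per_widget_by_firm)

industry = [10.0, 10.0, 10.0]        # three firms; 'my' factory is the first
print(round(snlt(industry), 2))      # 10.0

# A technological innovation on the other side of the world halves the
# labour needed in the *third* firm.  Nothing changes in my factory...
industry = [10.0, 10.0, 5.0]
print(round(snlt(industry), 2))      # ...yet 'my' SNLT falls to 8.33
```

The ‘labour’ measure moved without any change in my labour inputs: it is tracking general industry conditions, not labour as such – which is why it cannot also serve as an independent variable explaining those conditions.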

Now, as I say, this isn’t necessarily a problem with the category.  One of the ways in which Marx uses this, and related, categories in Capital is to explain the impact of changes in general economic conditions on workers within specific industries.  Thus, for example, technological innovation may increase productivity within some segment of an industry, reducing the value at which its products are sold, in turn reducing the demand for a category of labour that had previously counted as ‘socially necessary’, and throwing people out of work.  Here there is a connection between labour inputs and value, but the connection comes close to reversing the explanatory poles of the ‘traditional’ or ‘vulgar’ labour theory of value.  Again, this isn’t a problem – capitalism is a complex system in which every component can in principle influence every other component.  But if this is how we are wielding our categories, we need to reject the idea of a labour theory of value in which labour is taken to be an independent explanatory variable, somehow “determining” value or prices.  In my own view, this rejection is precisely the project that Marx is engaged in in ‘Capital’ – this is one of the ways in which ‘Capital’ is a critique of political economy.  As usual, though, my main focus on the blog isn’t Marxological, and the blog’s official position with respect to Marx interpretation is to simply point at the work of N. Pepperell, which is much more sophisticated on these matters than anything I put up here.

In any case, as I see it the LTV is therefore caught on the horns of a dilemma.  Either empirical labour inputs serve as an independent explanatory variable in a theory that is manifestly false, or empirical labour inputs are replaced by an alternate category or set of categories that always already tacitly track or encode the economic information that the theory apparently aspires to explain.  In my view much of the literature in defence of the labour theory of value is in practice shuttling between variants of these two positions – using back-constituted (but non-explanatory) categories when the theory is challenged empirically, while simultaneously (or alternately) suggesting that these same categories are fundamental, foundational, and explanatory.  This won’t do – we need to choose.  And in my view the correct choice is to reject the labour theory of value as any kind of meaningful explanatory apparatus.  We need an alternative account of the determination of values or prices, and this account can then in turn contribute to the content of categories like “socially necessary labour time”.  This alternative account, at least by my lights, will almost certainly draw heavily on boringly mainstream explanatory principles like supply and demand, bargaining power, and so forth.  In other words, if these categories of the so-called ‘labour theory of value’ are understood correctly, we are drawn into the orbit of something fairly close to mainstream economic theory.

So – the above is a whirlwind summary of why I reject the labour theory of value.  But that’s not what this post is about.  This post is about using Huw Price’s distinction between subject and object naturalism to capture two different orientations to the ‘materialist’ dimension of Marx’s value theory.

Now, I’ve already talked about two reasons why people (specifically, people on the radical left) might choose to adopt the labour theory of value.  The first reason was: the LTV seems like it might be able to ground an account of the falling rate of profit, and therefore of capitalist crisis.  The second reason was: the LTV seems like it might be able to ground a moral critique of capitalism as exploitation.

In the remainder of this post, I want to talk about a third reason why people might find the LTV plausible – and that is a sense that Marx’s theoretical framework is “materialist”.  Here the basic idea, I take it, is that a distinction can be drawn between ‘idealist’ and ‘materialist’ analysis.  ‘Bourgeois economics’ falls on the bad, ‘idealist’ side of this divide, while Marxist economics falls on the good, ‘materialist’ side.  On this approach, when Marx writes about the “metaphysical subtleties and theological niceties” involved in bourgeois political economy’s analysis of the value of commodities, he is suggesting that bourgeois political economy is engaged in mystificatory idealist analysis.  Marx and Marxism, by contrast, bring these idealist fantasies down to earth by looking at the concrete material processes and material phenomena that idealist analysis obfuscates.  And this is part of a broader materialist approach, whereby concrete material phenomena and practices are placed at the centre of analysis, rather than free-floating ideas or obscure metaphysical principles.  Marxism, the argument goes, reveals the material bases for these mystified idealist categories.

I think a lot of Marxists would endorse something like this story – albeit no doubt rephrased in their own preferred terms.  And I think this story provides one of the grounds for endorsing the labour theory of value.  The argument here, I take it, is that this distinction between ‘idealist’ and ‘materialist’ analytic frameworks lines up with the distinction between ‘objective’ and ‘subjective’ value theories.  The dominant mainstream value theory within economics – marginalism, and its epigones – is subjective, therefore idealist, therefore bad, while the labour theory of value is objective, therefore materialist, therefore good.

Here perhaps some very telegraphic intellectual history is useful as context.  When Marx was writing ‘Capital’, the dominant value theory in political economy was the labour theory of value.  The LTV had been endorsed in some sense (though there is much thorny ongoing debate over what sense) by the great ‘classical’ political economists: Smith, Ricardo, Mill.  More or less contemporaneously with the publication of ‘Capital’, however, there was a major transformation in value theory – the shift that came to be known as the ‘marginal revolution’.  The first edition of ‘Capital’ was published in 1867.  Jevons had first presented his ‘General Mathematical Theory of Political Economy’ five years earlier, in 1862.  Menger’s ‘Principles of Economics’ was published in 1871.  Walras’s ‘Elements’ was published in 1874.  Between them these latter three works revolutionised political economy, by proposing a value theory that was both mathematised and ‘subjectivist’.

From one Marxist perspective, this marginal revolution represents a profound wrong turn in political economy.  There are many charges laid against marginalism, and the economic theories that derive from it.  One charge is that the mathematisation of economic theory turned its back on the real world, preferring idealised models to the study of concrete reality.  This is one version of the critique of idealism.  Another charge is that by proposing a ‘subjective’ value theory – a value theory that is built upon the subjective preferences of individual economic actors, rather than objective facts about the material world – modern economics is building a form of idealism into its very foundation stones.  It’s this latter point that I’m interested in, in this post.

Here I think it’s useful to step back a little, and ask: what exactly do we mean when we talk about ‘materialist’ and ‘idealist’ approaches to political economy (or to analysis in general)?  I think the general vibe of these concepts within Marxist discourse is reasonably clear – but pinning them down is much more difficult and contentious.  In large part that’s because – as with all concepts within diverse traditions – different people simply mean different things by them.  For the purposes of this post, I’m going to – quite tendentiously – map them onto the opposition, within the discourse of analytic philosophy, between ‘naturalism’ and ‘non-naturalism’.  I recognise that this is far from the only way that one can parse these terms, and there are many dimensions or connotations of these concepts that aren’t captured by this mapping.  Nevertheless, I hope the remainder of this post will make the case that it can be illuminating to think about this opposition in this way in this context.  So.

Let’s fiat, then, for the purposes of this post, that the Marxist concept of ‘materialism’ has at least some commonality with the analytic philosophical concept of ‘naturalism’.  What do we mean by ‘naturalism’?  

This is where the work of Huw Price that I’ve been trying to get to finally comes into the picture.  In his work on naturalism, Price draws a distinction between two different forms of naturalism, which he argues potentially have very different philosophical implications: object naturalism and subject naturalism.

For Price, object naturalism is the most prominent and popular form of philosophical naturalism.  In Price’s words, this perspective “exists in both ontological and epistemological keys” (the quotes here are from Price’s ‘Naturalism without representationalism’):

As an ontological doctrine, it is the view that in some important sense, all there is is the world studied by science.  As an epistemological doctrine, it is the view that all genuine knowledge is scientific knowledge.

(Price 2004: 5)

These doctrines seem like the kind of things that scientifically- or naturalistically-minded people might want to endorse.  From this perspective, if we are making meaningful (one might want to add “non-analytic”) claims about the world, then those claims should be translatable into the language of the natural sciences.  If my claims to knowledge are legitimate, they should be rephrasable, however clumsily, as scientific claims.

Price argues that there are significant philosophical problems posed by this doctrine.  I really can’t claim to be on top of this philosophical literature, so my summary of these arguments is going to involve some hand-waving.  Nevertheless, it seems like this doctrine prima facie runs into problems when we come to the (large) categories of statement that we intuitively want to say can count as knowledge claims, but which do not refer to anything picked out by natural science.  Norms are one such prominent category.  We do not observe norms – that is, norms themselves, rather than behaviour that takes itself to be normatively driven – anywhere in the scientific study of nature.  And yet many of us want to say that we can have normative knowledge of some kind.  How does object naturalism deal with this?  Another example of the same broad category of problem is mathematics.  How are we to understand mathematical objects?  It seems clear that mathematical objects are not observed by scientific study of nature, and yet of course one doesn’t want to say that mathematics is a non-scientific mysticism.  Price calls these kinds of issues “placement problems”.

There are, of course, many ways of dealing with these kinds of ‘placement problems’ from within an object naturalist philosophical perspective.  But Price proposes an alternative approach: subject naturalism.  For the subject naturalist, in Price’s words:

philosophy needs to begin with what science tells us about ourselves.  Science tells us that we humans are natural creatures, and if the claims and ambitions of philosophy conflict with this view, then philosophy needs to give way.

(Price 2004: 5)

Price’s argument is that subject naturalism in this sense is explanatorily prior to object naturalism, and moreover that one can – and should – be a subject naturalist without being an object naturalist.  From the subject naturalist perspective, phenomena like norms and mathematical objects can be explained in basically pragmatist terms, as the result of naturalistically-analysable human social practices.  These phenomena remain, however, non-natural in the ‘object naturalist’ sense.  There are no objects of the physical sciences that correspond to norms or to mathematical phenomena – and yet these phenomena can be naturalistically analysed, by analysing the human practices that produce these artefacts.  We cannot, for example, pick out any scientifically-describable object that we are denoting when we talk about mathematical objects.  We can, however, describe the human practices associated with producing those mathematical objects, as the topics of our (naturalistically analysable) mathematical discourse.

Subject naturalism therefore aligns with (at least a common interpretation of) Wittgenstein’s approach to dissolving philosophical perplexities.  It may seem that there are spooky objects (like “meanings”) that are difficult to explain in either scientific or philosophical terms.  But we can dissolve this spookiness by turning our attention away from the ‘objects’ and towards the social practices that produce those ‘objects’.

Now – these are potentially fairly deep philosophical waters, but my goal in this post is not a deep philosophical one.  My goal is just to pick up this distinction between subject and object naturalism and apply it in a fairly obvious and banal way to political-economic value theory.

If we pick up and use this distinction in a fairly rough-and-ready way, then, I think the resonance with the LTV debate I very briefly summarised above is fairly clear.  We have this phenomenon that we’ll call “value”.  How are we going to explain this phenomenon in a ‘materialist’ way?  I think the LTV debate I’ve discussed above has two broad categories of answer to this question.

First, we can explain ‘value’ in a materialist way by finding a concrete material phenomenon to which ‘value talk’ really refers.  The candidate concrete phenomenon proposed by the labour theory of value is: labour.  From this perspective, we have two choices: we can either talk in a mystificatory way about this abstract phenomenon ‘value’, or we can figure out what real concrete phenomenon is in fact picked out by this talk, and thereby bring the value-talk down to materialist earth.  This is the ‘object naturalism’ approach to value theory.  It proposes translating political-economic value-talk into an alternative idiom with clear, non-mystified, empirical, scientifically-analysable referents.

Alternatively, we can take a ‘subject naturalism’ approach to value theory.  On this approach, we can explain ‘value’ not in terms of a real phenomenon to which ‘value’ refers, but rather in terms of the social practices that we need to understand in order to understand how value is produced, as a non-natural but social phenomenon.  From this perspective, asking “what is value?” is a little like asking “what is addition?”  There is no object that ‘addition’ denotes – but we can explain what addition is by explaining what one needs to do in order to add.

This latter is the pragmatist approach to value theory.  My view is that it is clearly the right approach.  A few caveats or additional points need to be made, though, before we too incautiously apply this kind of analytic pragmatist framework.

First: contemporary analytic pragmatists are extremely preoccupied by the issue of specifically linguistic practice.  But this is not our concern, as political economists.  We are not particularly interested in what people say – or even necessarily in what people think: we are in the first place interested in what they do as economic actors.  Second, and relatedly, we are interested in the complex interactions between many different individual economic actors’ actions.  It may very well be that we cannot understand what economic actors do by looking only at individual actions – we need to look at aggregate behaviour which may be qualitatively different from individual-level behaviour: the study of emergent effects is a core part of political economy.  The phenomena we are interested in therefore may be doubly removed from economic actors’ deliberate intentions.
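
As a toy illustration of this kind of emergence – again entirely my own construction, and nothing to do with any specific economic model – consider a market-clearing price.  Each actor has only a private disposition (the most they’d pay, or the least they’d accept); the price that clears the market is a fact about the joint distribution of all these dispositions, chosen by no one:

```python
# A deliberately simplistic sketch of an aggregate property that belongs
# to no individual actor: a market-clearing price emerging from many
# actors' separate willingness to trade.

def clearing_price(buyer_max_prices, seller_min_prices, candidates):
    """Pick the candidate price that maximises the number of feasible trades."""
    def trades_at(p):
        willing_buyers = sum(1 for b in buyer_max_prices if b >= p)
        willing_sellers = sum(1 for s in seller_min_prices if s <= p)
        return min(willing_buyers, willing_sellers)
    return max(candidates, key=trades_at)

buyers = [12, 10, 9, 7, 5]    # the most each buyer would pay
sellers = [4, 6, 8, 11, 13]   # the least each seller would accept
price = clearing_price(buyers, sellers, candidates=range(1, 15))
print(price)  # 8
```

No actor intended this price; it is ‘human action but not human design’ – the sort of doubly-removed aggregate phenomenon that, I’m suggesting, a value theory should be in the business of analysing.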

With all this said – what does applying this perspective suggest about the debates over the labour theory of value with which I began?  I think there are two main lessons here.

First: a ‘subject naturalist’ value theory is greatly to be preferred to an ‘object naturalist’ value theory.  My own view (which I haven’t done much to make the case for in this post) is that Marx’s own value theory can best be understood as a ‘subject naturalist’ theory in something like this sense.  That is to say, Marx is not aiming to provide an account of the “substance of value” in the sense of “what value-talk really denotes”, but rather a complex account of the social practices that, in combination, produce the aggregate social phenomenon of “value”. Whatever one thinks of this interpretive claim, I think that ‘subject naturalism’ is the correct way to apply a ‘materialist’ perspective to political-economic value theory.

Second: if we think of Marx’s value theory (or just value theory in general) in the way I’m recommending, then the opposition I discussed above between ‘materialist’ objective value theory and ‘idealist’ subjective value theory no longer makes much sense.  From this perspective ‘materialist’ no longer has to line up with ‘objective’, and ‘idealist’ no longer has to line up with ‘subjective’.  We can adopt a quote-unquote ‘subjective’ value theory, in the sense of subject naturalism.  That is to say: we can ‘deflate’ idealist categories not by finding the natural phenomena to which they really refer, but rather by analysing the social practices by which they are really produced.  From this perspective, a value theory can be both ‘subjective’ – in the sense that it denies that its categories denote natural phenomena, and looks instead at how these categories are produced by distributed social practice – and ‘materialist’ – in the sense that it studies those social practices as themselves natural phenomena. Moreover, and perhaps ironically, if we understand Marx’s value theory in something like these terms, then along at least one key dimension Marx can be seen as aligning with the ‘subjectivist’ revolution that ultimately came to be represented, in the economic mainstream, by marginalism, and against ‘objective’ value theorists, rather than the other way around. From this perspective, ‘Capital’ is not so much the last gasp of classical value theory, as an early (and non-mathematised) critique of classical value theory. Of course this is too simple, and misleading in its own way. Nevertheless, I think it at least stands up enough to serve as a reasonable first-pass challenge to the idea that Marx(ism) and ‘subjective’ value theories are obviously and profoundly incompatible.

Now, the fact that we take our value theory to be ‘subjective’ in the sense of ‘subject naturalism’ doesn’t mean that it has to be ‘subjective’ in other senses.  We don’t, for example, necessarily need to tether our value theory to beliefs – the internal (or subjective) state of mind of social actors could in principle be completely beside the point, when studying the emergent properties of their aggregate actions.  And it doesn’t mean we need to adopt the specific value-theoretic approach of mathematised marginalist economics.  It does, however, mean that one can’t (or shouldn’t) rebut ‘subject naturalist’ approaches in general on the grounds of their ‘subjectivism’, in opposition to materialism.

Now this is a pretty trivial point to have reached by a rather arduous and circuitous route.  Arguably it is worse than trivial, in the sense that it makes the case for a common-sensical and already very widely accepted position (“value in the political-economic sense is an emergent property of distributed economic action which can be scientifically studied”) in terms of a much more obscure and tendentious position (analytic neo-pragmatist anti-representationalism).  It would probably make more sense to try to justify the latter position by analogy with the former, in most contexts.  But so it goes.  My hope is that if I can just keep on assembling such arguments and associations, on the blog, eventually they’ll add up to something more useful.  We’ll see if that turns out to be the case.