Every year or two I do a post on ‘the broader intellectual project’ that’s been motivating my blogging in some sense since… huh… 2007.  In this post I want to repeat myself, again, on anticapitalism, in a very schematic and crude way, and then make a few remarks prompted by Jason Hickel’s popular degrowth book ‘Less is More’.

I take it that the objections to capitalism as a political-economic system are familiar to leftists and don’t need to be recapitulated.  For me, the main intellectual problem for anticapitalists isn’t “is capitalism bad?” (clearly, in many critical ways, yes), but “what’s your alternative, then?”  This isn’t an easy question to answer, and I think it’s desirable for anticapitalist advocates to have some credible responses available, and for there to be a good, broad debate about possible responses.

Obviously anticapitalism comes in many varieties, and no typology is going to be close to exhaustive.  Still, very crudely, we can say that the most influential forms of left anticapitalism are anarchism and communism.  Broadly speaking, anarchists want to do away with the state and/or broader relations of hierarchical domination; communists want to seize control of the state, and expand it in the service of emancipatory political-economic outcomes.

(I’m aware, of course, that anything you could say at this level of abstraction and with this degree of brevity isn’t going to be really right, and that there are plenty of legitimate objections to this typology. (“What about libertarian communists?”; “I’m a Christian socialist who believes in the importance of hierarchical structures but rejects capitalist social relations”; etc.) Fundamentally you just can’t address all this stuff in a shortish blog post, so I’m just going to plough on.)

I need to spend a lot more time with the anarchist literature, but my baseline, under-informed opinion is that while the anarchist tradition is right about a huge amount, at the end of the day you’re not going to be able to get a form of complex political-economic organisation that doesn’t feature hierarchical and coercive power structures, and that our institution-design task is to mitigate the harm associated with those structures rather than to abolish them altogether.  I’m aware that this is a very crass and under-informed response to anarchism, and I want to leave a note to self here to read more anarchists, but that’s where I stand as of now.

As for the communist side of things: communism was, of course, the major real-world anti-capitalist institution-building project of the twentieth century, and it’s a fundamental premise of my own intellectual project here that the communist experiments did not go well.  It seems to me that answering the question “what’s your alternative, then?” needs to centrally include a sober assessment of actually-existing communism.

So: what was wrong with ‘actually existing’ communism?  I think, again at a very crass and high level of abstraction, the answer to that question can be broken down into three categories.

  1. Political domination and repression.

None of the major ‘actually existing’ communist states were or are democratic in any very meaningful sense.  Moreover, not only were/are they not democratic, they also frequently engage(d) in authoritarian repression of political dissent or perceived political dissent – sometimes in incredibly extreme forms. All of this is bad.

  2. Economic disaster.

Both the USSR and the PRC were responsible for world-historically appalling policy-driven famines.  This doesn’t, of course, differentiate communist from capitalist regimes – see Mike Davis’s ‘Late Victorian Holocausts’ – but it is still a large mark against the emancipatory claims of communism.  More broadly, there is a lot of debate about the strengths and weaknesses of different political-economic approaches, between and within capitalist and communist systems, of a kind that clearly I’m not going to be able to litigate here.  But in general it is, to say the least, far from obvious that communism has the economic policy answers – this is very much an objection that needs to be addressed.

  3. Is it even non-capitalist?

This third point is arguably the most controversial and pedantic – but were the major communist regimes of the twentieth century even breaks with capitalism? This depends on your definition of capitalism. If you understand capitalism as a market economy, then the command economies instituted by communist states broke with capitalism. If you understand capitalism in terms of the centrality of private property, then the forms of state ownership instituted by communist states broke with capitalism. On this blog, though, I’ve advocated an understanding of capitalism in terms of 1) an institutionalised structural imperative to economic growth, and 2) the structural displacement and reconstitution of labour as a central political-economic institution in the service of that growth. From the perspective of this analytic framework, the twentieth century communist states didn’t break with capitalist dynamics, so much as reconstitute them in an unusually statist form.

Ok. So these are, as I see it, the three big challenges that anti-capitalist institutional proposals must meet. First: is your alternative likely to be politically repressive, or politically emancipatory? Second: what kind of economic outcomes are we likely to be looking at here? And third: is it actually a break with capitalism?

It’s this third question that I want to focus on in the rest of this post, specifically in relation to ‘degrowth’ politics. I recently read Jason Hickel’s ‘Less is More: How Degrowth Will Save the World’ – a popular book that aims to make a broad-brush anticapitalist case on environmentalist grounds.  The argument of that book revolves around the idea that to end capitalist environmental destruction we need to abolish the political-economic institutional structures that generate ‘the growth imperative’.  In the remainder of this post, then, I want to put aside all the other considerations I’ve raised, and focus simply on the question of growth.

As I said above, I don’t think capitalism can be defined exclusively in terms of the growth imperative.  But I do think that if you were to abolish the growth imperative, then whatever political-economic system you ended up with wouldn’t be capitalist.  The political goals of the degrowth movement, then, are inimical to capitalism.  (Here it’s probably worth highlighting that for Hickel ‘degrowth’ doesn’t necessarily mean what it sounds like it would mean – i.e. the reversal of growth – but rather the abolition of the ‘blind drive’ to growth associated with capitalism, such that decisions around economic growth can be made in a more judicious way, taking environmental issues into adequate consideration.  That’s what we’re talking about here.)

One of the ways in which the 20th and then 21st century communist regimes failed to break with capitalism was their reproduction of the capitalist growth imperative within a communist system.  This reproduction of the growth imperative was in part driven by a commendable desire to raise living standards – the development of the productive forces was seen as a precondition for creating the material conditions required for a full communist society of equality and abundance.  It was also, in significant part, a geopolitical imperative – for a communist regime, major economic growth is a geopolitical necessity if you are to have any hope of resisting military and economic attacks from the major capitalist powers.  The point, here, is that while you are part of a capitalist world-system the growth imperative operative in the overtly capitalist bloc places institutional pressure even on the nominally non-capitalist bloc.  And this is one of the big reasons why a global abolition of the capitalist system would seem to be a precondition for any sub-component of the world-system becoming meaningfully non-capitalist (in the sense I’m advocating).

What about the growth imperative within the overtly capitalist system?  Here the institutional mechanisms are somewhat different.  Again, geopolitical competition plays a role, as do market competition and ideology, but there is also the important function of debt within the monetary system.  The monetary system is the major institutional mechanism via which we allocate social resources, and when banks make investments they do so on the understanding that the debt these investments create will be repaid with interest.  This in itself creates a massive growth imperative: if I have started or developed my business by taking out a loan, I will need my business to grow in order to repay the loan with interest.  Capital as a component of our political-economic institutions therefore has the growth imperative built into it at a fundamental level.  If we are to abolish the growth imperative, we need to abolish capital.
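The arithmetic behind this mechanism can be made concrete with a toy illustration (my own sketch, with entirely hypothetical numbers, not anything drawn from Hickel’s book): an interest-bearing debt stock compounds exponentially, so output must grow at roughly the rate of interest just for repayment to remain possible.

```python
# A minimal sketch of the debt-driven growth imperative described
# above. All figures are hypothetical: firms collectively owe 100
# units, compounding at 5% a year.

def debt_stock(principal: float, rate: float, years: int) -> float:
    """Debt outstanding after `years` of compounding at `rate`."""
    return principal * (1 + rate) ** years

principal, rate = 100.0, 0.05
for years in (1, 10, 20):
    print(years, round(debt_stock(principal, rate, years), 1))
```

After twenty years the debt stock (roughly 265 units) far exceeds an economy whose output stayed flat at 100: repayment with interest is only generally possible if output itself grows, which is the institutional pressure described above.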

To be blunt, this is an amazingly difficult thing to do.  Hickel’s degrowth advocacy book lists a range of ways in which the incentives to economic growth could be reduced (regulate advertising better; reduce the length of the working week; decommodify goods and place them in the commons; etc.) – but as he notes, without a transformation of our monetary system, all of this is tinkering around the edges of a political-economic mode of production the fundamental nature of which remains unchanged.

Hickel’s degrowth argument therefore rests, in the end, on a handful of paragraphs in the penultimate chapter of his book (in my ebook, roughly locations 333-337), in which he points towards some recent work in ecological monetary economics.  I haven’t read (most of) that work in ecological monetary economics.  So I suppose the core function of this post, for me, is just to serve as a note to self that I really ought to read some of this literature, that it seems to me that this is a critical area for anticapitalist institutional thinking, and that I hope I’ll be able to continue to inch along through these ideas, returning at some future date to these same issues with a slightly more informed perspective and analysis, fates willing.

Cops

March 26, 2021

For a number of grim reasons there has been a lot of discussion of policing and cops in the UK recently.  In this post I want to briefly write up one set of arguments for why violent, abusive policing is more usefully seen as a systemic product of policing as an institution than as a difficult-to-understand anomaly.

I’m going to make a specific case at a pretty high level of abstraction, and there are therefore obviously lots of important issues that this post won’t touch on.  Moreover, in keeping with the general current project of this blog, I’m going to be making a case that I take to be grounded in basic liberal principles.  In this respect, the argument of the post departs from what I take to be one prominent and important strand in contemporary (and historical) radicalism: namely, the idea that the political solution to the horrors of the criminal justice system is the abolition of policing and prisons as such and as a whole.

From this radical perspective, a world without coercive institutions that bear even a ‘family resemblance’ to cops, courts, and prisons is both possible and desirable.  Perhaps controversially (at least among radicals) I don’t agree with this perspective.  In large part that’s because I have a relatively high degree of pessimism about ‘human nature’.  (‘Human nature’, as I am using it here, expresses a stochastic rather than an essentialising category.  That is to say: I’m not making a claim about an invariant individual nature, but about behaviours that will predictably be manifested somewhere across a large enough human population.)  Specifically: it doesn’t seem remotely plausible to me that we are ever going to be able to achieve a society with such an absence of monstrous behaviour that we don’t need some kind of institutionalised process of coercive constraint of some individuals’ freedoms in the interest of broader public safety.  That institutional structure may – and should – look radically different from the institutions of our current criminal justice system, and so one can make a case that the current system can be ‘abolished’, while an alternative, vastly more humane, system is instituted – but I think some system of coercive enforcement of some community norms is going to be a feature of any complex social-political organisation one can realistically aspire to establish.

So – one of my premises here is the ‘pessimistic’ one that there is enough ‘darkness in human nature’ that we are never going to be able to achieve the wholesale abolition of coercive governance institutions.  This very same pessimism about ‘human nature’, however, should also lead us, I believe – more or less a priori, though also backed by a superabundance of empirical evidence – to cast an extremely sceptical eye on any such coercive governance institutions that exist.  Fundamentally, the sceptical logic here is the same: if people have the power to act badly, some will do so.  Moreover, the more power people have, and the fewer constraints people have on the ability to exercise that power, the more abusive that exercise of power is likely to be.

This is an argument that should be extremely familiar from – perhaps ironically – conservative discourse.  It is a mainstay of the conservative critique of state power that power corrupts, that abuses of power are inevitable, and that once institutions are established that wield such power, those institutions will not only be loath to relinquish their power, but will seek to expand it.

This critique of state power is correct.  (Though it also, of course, applies to private power.)  So it is a striking feature of contemporary conservatism that the most important instantiation of the state’s coercive power over its domestic populace – the criminal justice system – is frequently exempted from the sceptical conservative perspective on the state.  If domestic state power – which consists, in the classic Weberian formulation, in a “monopoly on the legitimate use of force” – is likely to be abused anywhere, it is in those institutions where that force is directly exercised: in policing and in prisons.  And yet many conservatives who regard themselves as opponents of the state abuse of power are admirers and defenders of these institutions, determined to protect them against attacks from bleeding heart liberals and leftists.

I say this not so as to accuse conservatives of hypocrisy – this dual attitude towards state power is less a hypocrisy than it is an indication that ‘scepticism of the state’ is an inadequate summary of the relevant conservative ideology.  My point, rather, is that one does not need particularly radical political commitments in order to have the basis for a far-reaching critique of the criminal justice system.  Indeed, the idea that police forces tend towards abuse of power – towards inexcusable violence and oppression – is a conclusion that ought to follow naturally from widely held – and correct – beliefs about how institutional power operates in general.  The very many well-documented cases of police actually being abusive, violent, corrupt, etc. etc. should serve simply to confirm an analysis that follows naturally from our understanding of how the social and political world typically works.

Now, as I’ve said many times before on this blog, ‘liberalism’ is a complicated political phenomenon that involves many sometimes contradictory elements.  Nevertheless, one of the key elements of political liberalism is this idea that possession of power will tend to lead to the abuse of power.  And liberalism’s ‘answer’ to this problem – an answer that I think we should take seriously – is checks and balances on power.  The liberal idea, here, is that we cannot in practice construct a political order that does not involve coercive power, and yet, at the same time, such coercive power will tend to be abused.  We should, therefore, construct our governance institutions in such a way that there is no one single concentration of power, but rather multiple centres of power, each of which is constructed in such a way that it can serve as a ‘check’ on others.  The problem liberalism confronts is that “the guards cannot guard themselves”: that, fundamentally, nobody can be fully trusted to consistently act responsibly and humanely if given the power to do otherwise.  The liberal solution to this problem is that different subcomponents of our political institutions can keep other subcomponents in check – that oversight, accountability, and sanctions for abuse of power must be instituted at every level of our political institutions.  Democracy is one such form of oversight and accountability, but there are others.

Of course, as I’ve also said many times on this blog, these (to my mind) positive elements of political liberalism are mingled with many negative elements.  The egalitarian commitments of classical and contemporary liberalism are more often than not accompanied by additional, tempering or overriding commitments – such as those of “patriarchal liberalism” or “racial liberalism” – that apply liberal egalitarian ideals to only one small component of the body politic, while enacting oppression and domination elsewhere – also in the name of liberalism.  Moreover, oversight and accountability are resented by those with power, at all levels of every institution.  Checks and balances are understood and presented by those subject to them not as essential constraints on abuse of power, but as absurd, uncomprehending, inefficient, carping obstacles to getting anything done at all.  In our current political discourse, the enemies of ‘effective policing’ are such villains as “lefty defence lawyers”, “human rights law”, “government bureaucrats”, “bleeding heart liberals”, “woke scolds”, and so on and so forth.  Dispiritingly, this discourse – the illiberal or antiliberal discourse that seeks to remove all constraints on the abuse of power by agents of the state – is very effective in bullying politicians into compliance; assuming, of course, that politicians do not already wholeheartedly endorse a pro-cop line.  The essential project of instituting significant and effective constraints on abusive policing is regarded as anathema by most of the UK ruling class, of whichever political party or factional alignment.  To align oneself seriously against police power is to align oneself against a vast propaganda apparatus and the bulk of public opinion.

Nevertheless, we should, of course, do so.  And there is some reason to believe that doing so is not a completely hopeless proposition.  One of the more encouraging developments of recent years has been the remarkable effectiveness of the Black Lives Matter protests in shifting large segments of public opinion on policing.  That shift is not stable – but it has been (to me) surprising and cheering to see even many conservatives react with horror to the police abuses and injustices brought to the centre of public attention by the BLM movement.  We shouldn’t see public opinion – or our political environment – as immovable, or beyond the reach of persuasion on the pressing need to curtail police powers and police impunity.

Nevertheless, awareness of the horrors of policing is typically only politically consequential if those horrors are understood as systemic features of policing as an institution.  To the extent that we see instances of cop violence as anomalies, we will not see the need for systemic change.  What I’ve tried to do in this post is simply make the case that – on basic, widely held liberal or even conservative premises – we would expect to see widespread cop abuse of power.  Cop abuse of power should be a default assumption, not a surprising fact that needs to be demonstrated with superabundant evidence time and time again, for each fresh instance.  And the reason for this is straightforward:  people with power will often abuse it, unless there are very strong and robust institutional constraints that prevent such abuse.

Of course, the systemic nature of cop violence, corruption, etc. has many elements – and the abstract discussion in this post does not begin to touch on many of the most important ones.  Nevertheless, more or less on ‘first principles’, there is every reason to believe that cops will continue to act horrifically as long as ‘we’ (in the sense of our political institutions) let them.  If there are no severe, widespread, institutionalised sanctions for cop abuse of power – if, among other things, cops are not personally and institutionally afraid of the consequences of their violent actions – our ‘criminal justice system’ will – always and inevitably – be a system for the production of criminality: that is, a system for facilitating and defending criminal actions perpetrated by those supposedly tasked with enforcing ‘justice’.

The US right is currently engaged in a rolling hysteria over the purported ‘cancellation’ of beloved children’s author Dr. Seuss.  This is, of course, nonsense.  But maybe it can illustrate what ‘cancel culture’ isn’t, as a way of saying something about what it is.

The background: Dr. Seuss Enterprises (the business that oversees the Theodor Geisel estate) announced on March 2nd that they would no longer be publishing six titles which “portray people in ways that are hurtful and wrong”.  This is good and appropriate.  The books in question are racist.  They’re not subtly or complexly racist.  They are, moreover, books written for very young children, and their target audience clearly cannot be expected to adopt a critical or reflective attitude toward their contents. They are also (luckily for the Geisel estate) far from his best books. They’re books that, when you buy your big Dr. Seuss box set, you quietly remove because obviously you don’t want to be teaching your children to read with these books.  So: dropping these books from the Seuss catalogue is an easy call.

Is this an example of ‘cancel culture’?  Here I think it helps to understand that the phrase ‘cancel culture’ denotes, to a first approximation, two completely different things.  

First, it denotes any effort to reduce the amount of racism, sexism, homophobia, transphobia, and sundry other forms of bigotry in our public sphere and social lives.  For the US conservatives furious about these Seuss books, the idea that we shouldn’t be teaching children to read with racist caricatures is “cancel culture”, because the reduction of racist imagery in society in general is “cancel culture”.  That’s all there is to it.

The second sense of “cancel culture” is more useful and interesting.  This second sense is something in the space of “excessively unforgiving and expansive attitudes to the level and scope of sanctions that should follow from transgressing community norms”.  This sense of “cancel culture” captures, for example, the idea that people should lose their jobs for making offensive social media posts.  The characteristic feature of “cancel culture” in this sense is the expansion of critique from one dimension of self to all dimensions of self – this is why it is sometimes called a ‘purity’ discourse.

Unlike many on the left, I think that “cancel culture” in this second sense picks out a real phenomenon, and a problematic one.  The reason this phenomenon is problematic, in my view, is that we are all, in one way or another, deeply flawed.  We all – individually and collectively – contain multitudes, and some elements of our individual and collective identities are, inevitably, deplorable.  For this reason, if we are too expansive and unforgiving in our social sanctions, we risk de facto committing ourselves to a global sanctioning – a scenario in which there is nothing that can legitimately or consistently escape sanction.  And in this scenario – since we cannot in fact sanction everything and everyone – we will end up resorting to ad hoc alternative criteria of social acceptance and exclusion, such that the principles that we aspire to realise are no longer really guiding our application of sanctions anyway.

In fact, if we aspire to build a better present and future out of the deeply flawed materials we have to hand, we need to be skilled at using what is good in our history and present, and rejecting what is bad, while acknowledging that the good and the bad are often also bound together in complex ways.  We all do this in practice anyway – we look back at the figures we admire, and we select what is most admirable for admiration, while letting the rest fall by the wayside.  Moreover, even grim elements of our societies and histories may have potentials that can be put to use.  (This, by the way, is one of the core commitments of Marxism – the idea that the social and technological potentials of capitalist society can be put to work to emancipatory ends, if only the political-economic structure within which they are embedded is transformed – a correct and worthwhile commitment still, in my view.)  It is necessary to reject what is bad without throwing out everything connected to it – because the latter risks leaving us with nothing to work with at all.

So, returning to the Dr. Seuss nonsense – is the withdrawal of these Dr. Seuss books an example of “cancel culture” in this second sense?  No, it is not.  More strongly, it is something close to the opposite of cancel culture in this sense.  Dr. Seuss has not been ‘cancelled’.  The withdrawal of these six works has not been extended to his corpus as a whole.  The Geisel estate has been presented with a situation in which the widely beloved figure whose literary legacy they are charged with preserving and transmitting (for a healthy profit) has strongly transgressed our community’s ethical-political norms in several works.  They have responded to this as we all should when faced with the profoundly mixed legacy of our history and traditions: they have rejected what is bad and preserved what is good.  Of course, this decision is a particularly easy one in this case, in part for the reasons I discussed above. Much more would have to be said about ‘strategies of inheritance’ in more complex cases. Nevertheless, the Dr. Seuss situation is certainly not an example of ‘cancel culture’ in any useful sense.

A very short post on an issue that comes up quite a bit in left debate (by which I mean not centre left, but ‘radical’ left): can capitalism tackle climate change? Meaning: is it even possible to adequately address the climate crisis within the capitalist system?  A popular answer to this question on the left is ‘no’.  My answer is ‘in principle, yes’.  I want to very quickly make the case for that position.

First the case against.  The first reason to think that capitalism is incompatible with tackling climate change is that capitalism is intrinsically growth-oriented.  Capitalist growth, the argument goes, is incompatible with the kind of reduction in consumption of resources that is required to adequately reduce greenhouse gas emissions.  Therefore, if we want to tackle the climate crisis, the abolition of capitalism is a necessary precondition.

Moreover, capitalism is characterised by a massive imbalance of socio-political power, such that the ruling capitalist class has a lot of power and the great bulk of humanity has very little power.  Since the ruling capitalist class has little interest in changing its destructive behaviour, the dire consequences of which will be visited upon the bulk of poor and powerless humanity rather than the ruling class itself, one cannot expect the capitalist system to reform itself, the argument goes.

This position definitely has a lot of plausibility to it!  But I think it’s wrong, for the following reasons.

First, on growth.  Capitalism as a system, and capital as a component of that system, is completely indifferent to the form that economic growth takes.  The valorisation of capital can in principle take place via any available mechanism.  This is one of the things that makes capitalism so internally varied, and so adaptable, as an economic system.  Capital does need to be tied to actual use-values at some point in the system – you can’t have a pure bubble economy.  But capital itself could not care less what the specific use-values are.  So: fossil fuels can be used to drive economic growth, but that’s just because fossil fuels are convenient for capital – there’s no intrinsic physical or social law that ties growth to this form of production and consumption.

Indeed, the incredible flexibility of capitalism is a cause for hope with respect to the possibility of shifting away from fossil fuels.  If we choose to make fossil fuels extremely inconvenient and costly for capital to use, capital will shift away from fossil fuels to easier forms of valorisation – that’s just what capital does.

There are two big ways to shift capital away from fossil fuels.  The first is regulation – in this case, something close to de facto global regulation.  This is, of course, a major political challenge – but we already know that capital can accommodate lots of different kinds of regulation, because capital already accommodates itself to all kinds of regulatory regimes.  Economic regulation is not intrinsically anti-capitalist – it’s just a form of capitalist governance.

The second way to shift capital away from fossil fuels is to make other energy sources more viable and appealing.  This is, fundamentally, a technological challenge.  And capitalism is good at technological innovation!  It’s common on the left to talk about how ‘tech won’t save us’, and of course in a narrow sense that’s true, but tech can potentially do a lot to save us.  Green energy alternatives are becoming increasingly widespread and viable, and this is extremely good news!  

So – it seems to me that it should be clearly within the capacity of the capitalist system to make a very large-scale shift to green energy and to massively reduce greenhouse emissions.  Capitalism’s drive to growth is not, in itself, an insurmountable obstacle to achieving this.

What about political power, though?  Even if we grant that capitalism’s orientation to growth is not an insurmountable obstacle to tackling climate change, the political problem remains: look at who’s running things.  In fact there are (at least) two political problems here.  First, there’s the general collective action problem of coordinated policy action across a large number of rivalrous socio-political actors.  Given that capitalist states are in many important respects in competition with each other, the challenge of tackling climate change is (in the game-theoretic jargon) a kind of n-player prisoner’s dilemma game, where it is often more advantageous to let other states take action than to take action oneself.  And, second, there’s the problem of who the ruling class decision-makers are in the first place, and what their interests are.
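The structure of that first problem can be sketched in stylised game-theoretic terms (the payoff numbers below are my own invention, purely for illustration): each state’s abatement cost falls on itself alone, while the climate benefit is shared by everyone, so defection dominates individually even though universal abatement beats universal defection.

```python
# A toy n-player prisoner's dilemma of the kind gestured at above.
# Payoffs are stylised: each abating state pays `cost`, and the
# `benefit` of each state's abatement is shared equally by all n.

def payoff(abates: bool, n_abating: int, n: int,
           benefit: float = 3.0, cost: float = 2.0) -> float:
    """One state's payoff: its share of all abatement benefits,
    minus its own cost if it abates."""
    return (benefit * n_abating) / n - (cost if abates else 0.0)

n = 10
# Abating returns only benefit/n = 0.3 to yourself but costs 2.0,
# so whatever the other states do, defecting is individually better...
print(payoff(True, 1, n))    # lone abater: 0.3 - 2.0 = -1.7
print(payoff(False, 9, n))   # free-rider among nine abaters: 2.7
# ...yet everyone abating beats everyone defecting:
print(payoff(True, n, n))    # universal abatement: 3.0 - 2.0 = 1.0
print(payoff(False, 0, n))   # universal defection: 0.0
```

This is why the paragraph above frames the challenge as one of binding coordination (something close to de facto global regulation) rather than voluntary restraint: it is precisely the dilemma structure that unilateral goodwill cannot escape.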

So there are several layers of collective action problem that must be overcome to get the kind of global governance structures that we need to tackle the climate crisis.  First, we need ruling class figures and groups who actually want to tackle the problem. Then we need them to be able to successfully coordinate international action in a way that overcomes the n-player prisoner’s dilemma problem of the interstate regulatory system.

Moreover, we need this coordination to persist in the face of capitalism’s ongoing tendency to ‘melt all that is solid into air’ – i.e. to revolutionise and destabilise its own structures in the pursuit of valorisation and growth.  Here capitalism’s internal diversity of institutional structure and dynamism is a cause for pessimism, because it means there are constant opportunities to overturn any regulatory regimes that have successfully pushed capitalist valorisation away from forms of production that contribute to the climate crisis.

So – can it be done?  Is it realistic to stably implement the necessary forms of economic governance within a capitalist system?

Well – maybe not!  Clearly it’s a difficult challenge.  But here’s the thing: it is also a difficult challenge to overthrow the capitalist system and institute an alternative mode of global economic organisation.  If we’re talking about things that are in principle possible, but that are also very hard, then we need to also be realistic on the comparative question of how hard the different options here are.  It seems to me that, at least on its face, overthrowing capitalism altogether is quite a bit more difficult than instituting an adequate global regulatory regime for greenhouse gas emissions.  

Of course, that doesn’t mean that abolishing capitalism can’t be done!  This isn’t a counsel of despair from an anti-capitalist viewpoint.  Rather, it’s an argument that we should be realistic about what the available political futures are.  If the climate crisis is a true crisis – which it is – then we quite likely can’t afford to hold out for our first preference solution.  If the problem is potentially fixable within capitalism, then it strikes me as a bad idea to pretend otherwise, in the hope of achieving what looks like, in present global circumstances, an unlikely prospect of a non-capitalist alternative solution. Indeed, it may be that our hopes for achieving an emancipatory post-capitalist political-economic system rest on our ability to forestall the climate crisis within capitalism.

Ok – I’ve now listened to the entirety of Brandom’s lectures on Hegel, which cover, in significantly briefer form, the content of ‘A Spirit of Trust’. I need to make time, somehow, to carefully read the book itself, and naturally anything I say about Brandom’s Hegel is provisional until I’ve done so. Still, I take it that the core of Brandom’s interpretation is clear from his lectures, and I am too impatient to want to wait until I’ve worked through the full text before commenting! (“Perhaps there is only one cardinal sin: impatience”, writes Kafka – but what does he know.)

Let me dive straight in and say that while obviously I think Brandom’s Hegel project is extraordinarily impressive, I think it jumps off a sort of ethical-political cliff in these closing sections, and it’s probably going to take some pretty heavy lifting to save it. The problem is the use to which Brandom puts the (retrospective) category of forgiveness, and the corresponding (prospective) category of Trust. (For the sake of easy expression, I’m going to stop talking about ‘Brandom’s Hegel’, and just talk about Brandom, but I hope it’s clear that all appropriate caveats about authorial identity apply.)

So – I’m not going to have my language or the details right here, but the closing sections of Brandom’s argument pivot around the distinction between the ‘noble’ and ‘base’ or ‘great-souled’ and ‘small-souled’ or ‘edelmütig’ and ‘niederträchtig’ perspectives. Roughly speaking, the ‘noble’ perspective sees actions as guided by universal norms, and the ‘base’ perspective sees actions as guided by immediate and contingent material motives. Brandom argues that every action that has ever taken place or could ever take place can, in principle, be viewed from either perspective. What this means is that it is always possible to interpret any action in a ‘debasing’ way that strips away the pretension that the action is motivated by a norm and ascribes to it, rather, base or ignoble motives – or it is possible to interpret the same action as carried out in accordance with some norm. These are ‘bad faith’ and ‘good faith’ perspectives on action.

The kind of community we want to institute, Brandom argues, is one characterised by mutual recognition – and to recognise another as a normative agent is to recognise their actions as carried out in a normative space, that is, in response to norms. At the same time, we are all aware that our actions may not necessarily be understood in those terms – we all fall short of the ideal selves we may aspire to, and take actions that may deviate from the norms we hope to realise.

At the same time, it is a core element of Brandom’s theory of action that we can never fully know what our actions are when we take them. Actions have unanticipated consequences, and future events may always and at any time retroactively transform what the content of any given action was. It is for this reason impossible to definitively say what the normative content of an action even is at any given time – that normative content is always open for future ‘transformation’ (or greater specification), by the consequences of future history and by future acts of interpretation.

This means that what we currently may interpret as failures to realise a norm could, in principle, be interpreted by future actors as instantiations of a norm. And this possibility opens up a dynamic that Brandom characterises as ‘confession and forgiveness’.

In this dynamic, a social actor confesses the base motives that drive their actions, and the ways in which their actions have deviated from norms, while the forgiver nevertheless recognises those actions in terms of an unfolding norm that they – the confessor – were unable to articulate or recognise. In this way, a ‘tradition’ is constituted that can, in principle, ‘recuperate’ even actions that seem to have no normative justification, retroactively understanding them as justified by the events and interpretations that followed. In Brandom’s words:

Something I have done should not be treated as an error or a crime… because it is not yet settled what I have done.

(‘A Spirit of Trust’, p. 625)

Moreover, for Brandom in these sections, the kind of political-social-discursive community we aspire to create should be one characterised by expanding the scope of such confession and forgiveness. For Brandom, the community we aspire to create should be one characterised by a ‘spirit of trust’ in which we confess our normative failures in the hope that future actors can ‘recuperate’ those failures within a larger, more ‘magnanimous’ interpretive framework, which brings more and more actions under the auspices of normative reason.

I’m racing past a huge amount of content here, and I will want to circle back round and give all of this a much more nuanced treatment once I’ve done my due diligence properly. Nevertheless, in a preliminary way, I think I have enough grasp of what Brandom is getting at here to say: are we sure about this? More specifically: granted that the full content of any action cannot be fully specified at any moment, are we sure that ‘forgiveness’ should be the attitude we ultimately aspire to achieve in relation to social actions?

I think there are two broad issues here. First, some elements of Brandom’s discussion seem to me to move too easily between two categories of non-conformity with norms. On the one hand, we may see actions as failing to adhere to any norm at all – as purely appetitive, or accidental, or whatever. This is the perspective of ‘particularity’. On the other hand, we may see an action as taking place in conformity with the wrong norm – indeed, in conformity with a bad or evil norm. “Falling short of norms” is not the only form that evil may take – (what are taken to be) norms themselves may be evil.

In general I think it is a long-term problem with Brandom’s work that he is insufficiently attentive to worries about ‘bad norms’. Brandom is preoccupied by the threat of ‘nihilism’ – by the problem of adopting theoretical perspectives that, if taken seriously, are unable to make space for norms at all. But Brandom does not seem terribly preoccupied by the problem of communities or social spaces that have established values that we nevertheless wish to reject. This leads him, in my view, to misunderstand the place that a number of critiques of ‘Making It Explicit’, and of other pragmatist thinkers, are coming from – and it also means that he does not give nearly enough attention to this problem space in his interpretation of Hegel.

What if an entire society, in its dominant norms, practices, values, etc., is evil? This is not something that should be too much of a challenge to imagine. And yet it presents a challenge for pragmatist, practice-theoretic, accounts of normativity. If we have a transcendent account of norms, it is easy to understand how we can resist the evil around us. But if we have a practice-theoretic account of norms – if our own norms in some sense emerge out of our own social practices and those of the society we inhabit – then it is harder to see where the ‘critical distance’ that would allow us to reject the bad norms by which we are surrounded, in favour of good norms, might come from. I don’t, to be clear, think that this is an insoluble problem – in practice, all societies, even ‘totalitarian’ ones, are highly internally diverse, and there are always social practices and locations that provide a critical standpoint from which alternative value systems to the dominant ones may be assembled. Nevertheless, this category of worry is one that pragmatists need to address – and that Brandom, as I say, seems inattentive to.

In terms of Brandom’s discussion of recollection and forgiveness, a similar problem manifests in relation to history. How much, in history, in fact, can and should be forgiven? Do we want to provide a Whiggish rational reconstruction of history that ‘justifies’ apparent crimes because of their later consequences? Should we? Isn’t that kind of monstrous? Should the slave trade, the Holocaust, the great policy-driven famines of the nineteenth and twentieth centuries, the many exterminations, oppressions, and violences of the charnel house of history, all be grist for the mill of Absolute Spirit’s magnanimous forgiveness?

I think it’s clear – on political-ethical grounds – that the answer to these questions is “no”. If we want to build a better community characterised by a more expansive practice of mutual recognition and respect, part of this collective project needs to be a sombre recognition that much human action is not and can never be justified. The question of how we mine our history to construct the traditions that shape the present needs to be, in part, a question of which history we ‘fold in’ to our self-understanding precisely by rejecting the idea that any norm we value or respect can be found there.

I don’t think Brandom’s metatheoretical apparatus is incompatible with that more sombre approach to the thinking of history or tradition. But I do suspect that some significant modifications should probably be made to these elements of Brandom’s Hegel, if we are to assemble a ‘recollective reconstruction’ of our history that is fit for the purpose of emancipatory politics.

It’s common in discussions of the post-Hegelian German Idealist tradition to draw a distinction between ‘left’ and ‘right’ Hegelians.  I don’t know nearly enough about the history of German Idealism to make use of this distinction in any scholarly way – but my understanding is that the distinction turns on the extent to which the Hegelian apparatus is taken to be ‘critical’.  One can interpret the Hegelian apparatus as providing a quasi-metaphysical justification for the political status quo, or one can interpret the Hegelian apparatus as providing the resources for a far-reaching critique of actually-existing institutions.  This, crudely put, I take it, is the distinction between ‘left’ and ‘right’ Hegelianism.

Brandom sometimes, following Rorty, plays with this phrase to draw a distinction between ‘left’ and ‘right’ Sellarsians.  And in this post I want to likewise draw a potential distinction between ‘left’ and ‘right’ Brandomians.  In doing so, I want to flag straight up that this is extremely loose usage, and I don’t claim that this distinction necessarily usefully maps onto actual political left and right categories (themselves of course often extremely fuzzy).  Hopefully the post itself will make clear what I’m getting at.

So.  As I’ve said before, I’m slowly working through Brandom’s Hegel lectures (available at his YouTube channel here), as a precursor to slowly working through Brandom’s Hegel book.  I’m currently on the penultimate lecture – Genealogy and Magnanimity: The Allegory of the Valet – which covers similar ground to Brandom’s lecture of some years ago – Reason, Genealogy, and the Hermeneutics of Magnanimity.  This post is essentially a reflection on these lectures.  Brandom clearly regards the content discussed in these lectures as central to his Hegel’s philosophical project.

The lectures are focused on Hegel’s distinction between ‘noble’ and ‘base’ or ‘great-souled’ and ‘narrow-souled’ or ‘magnanimous’ or ‘suspicious’ meta-conceptual attitudes.  Brandom connects Hegel’s understanding of these categories to one of Brandom’s own master distinctions, between normative statuses and normative attitudes.  Normative statuses, recall, are things like “an obligation” and “an entitlement”.  Normative attitudes are things like “taking somebody to have an obligation or entitlement”.  One of the major overarching goals of Brandom’s entire philosophical project is to explain normative statuses in terms of normative attitudes in a non-reductive way – in a way, that is, that does not reduce normative statuses like obligations to purely ‘subjective’ categories like “believing somebody to have an obligation”, and yet at the same time does not attribute a “spooky” ontological substance to normative statuses or the norms associated with them.

I’ve been over all this in painful detail in my earlier series of posts on ‘Making It Explicit’, and I’m going to let a lot of important nuance fall by the wayside for that reason.  In his Hegel lectures, Brandom adds a historical-political dimension to these issues, by connecting these categories to Hegel’s ‘epochs of spirit’.  Broadly speaking, for Brandom’s Hegel, pre-Enlightenment understanding of norms saw normative attitudes as derivative of normative statuses.  On this understanding, norms are real things out there in the world somehow, and our normative attitudes can be explained as attempts to attend to their ‘ontological’ authority.  The historical shift to ‘Enlightenment’ facilitated a theoretical perspective that turned this account on its head: from this perspective, normative attitudes are the fundamental explanatory category, and normative statuses derive from them.  But, for Brandom’s Hegel, there is a tendency within this Enlightenment tradition to take this ‘subjectivist’ orientation ‘too far’ – to see normative statuses as fundamentally unreal, an otiose concept, and to understand norms, morality, etc., purely and reductively in terms of normative attitudes.  (Utilitarianism is, for Brandom’s Hegel, an example of this approach.)  This ‘Enlightenment’ attitude can then in turn be taken to provide warrant for a nihilism about norms – it can lead to the conclusion that there are no norms, really, only people believing or acting as if there were norms – a rejection of the normative as such.  (In more recent philosophy, Brandom characterises Gilbert Harman as an exemplar of this approach in the area of moral philosophy.)

Brandom connects all this in turn to what he sees as two different interpretive orientations to any given action.  One can interpret an action as taken in response to a normative obligation – as taking place within the space of reasons – or one can interpret an action as taking place for purely ‘causal’ reasons, as driven by factual contingencies that cannot themselves be understood in terms of reasons.  Brandom discusses Hegel’s ‘allegory of the valet’: “no man is a hero to his valet”, for Hegel, because a valet sees exclusively the ‘contingent’, ‘debased’, ‘appetitive’ motives associated with a public figure’s actions.  More broadly, because any action can be interpreted as (for example) motivated by psychological gratifications, it is possible to give a ‘debased’ account of any action which understands it not as driven by norms, but as driven by purely personal, appetitive, debased, contingent, etc. motives.  To analyse actions in terms of norms is to give a rational account of the sphere of action.  To analyse actions in terms of causes is to give a genealogical account of the sphere of action.

Brandom’s Hegel sees his philosophical, and our practical, task as reconciling these perspectives in a more capacious philosophical orientation and set of socio-political practices that can accommodate both the ‘subjective’ and the ‘objective’ – both the ‘rational’ and the ‘genealogical’ – dimensions of our understanding of action.  And this project of reconciliation, as recounted by Brandom, I would say, has two broad elements.

First, Brandom’s Hegel is keen to rebut what Brandom calls ‘global’ genealogy – the attempt to replace the analysis of norms and reasons in general with the analysis of causes alone.  Brandom (whether rightly or wrongly) takes Nietzsche to be an exemplar of this approach.  For Brandom’s Hegel, this orientation is in the end self-refutingly nihilistic – it ultimately cannot give an account of semantic content at all.

I regard this element of Brandom’s approach as largely unproblematic and correct.  Global reductivism about norms (at least in the sense in which Brandom means the term ‘reductivism’) is indeed an undesirable position for all the reasons that Brandom elaborates, and Brandom’s (and Brandom’s Hegel’s) alternative is, to my mind, both carefully elaborated and largely satisfactory.  I appreciate that not everyone will agree with my take on this, but this isn’t the focus of this blog post!

Let’s say for the sake of argument that we agree, then, that global reductionism about norms is an undesirable position, and that Brandom’s Hegel’s approach outlines a broadly acceptable alternative.  Brandom’s Hegel also has a second, more ambitious philosophical-political goal – to participate in the development of a third ‘age of Geist’ in which the ‘objective’ and ‘subjective’ approaches can be reconciled via the institutionalisation of community practices characterised by ‘Trust’.

Now, I’m not going to tackle what this actually means at this point in working through Brandom’s lectures.  All I want to say, here, is that this project has a stronger objection to the ‘genealogical’ perspective than simply a narrow objection to ‘global’ genealogy.  This project (the institution of a community of trust) is characterised by a desire to expand the space of social actions that can be, are, and should be treated as ‘rational’ rather than as merely causal – it intervenes, as it were, not just in the question of whether our perspectives should be exclusively genealogical, but also in the question of the extent to which our perspectives should be genealogical. Or, perhaps better, the degree of emphasis that should be placed on the genealogical moment or perspective within our larger framework.

Now, this is where the distinction between ‘left’ and ‘right’ Hegelians reappears, it seems to me.  Brandom discusses the ‘great unmaskers’ or the ‘great genealogists’ of the nineteenth century – Nietzsche, Marx, and Freud.  Nietzsche, for Brandom, as I have already mentioned, is a ‘global genealogist’ – but Marx and Freud are, at least on some interpretations, more ‘local’ genealogists.  Marx’s account of class location (or, I would argue, more broadly, political-economic social practice) does not rule out the possibility of rationality or normativity – it merely ‘explains’ large categories of claims of reason in social practice terms.  Likewise, Freud’s psychoanalytic apparatus need not be seen as a global enemy of reason; it simply offers a category of causal explanation of our psychological dynamics.

What should our attitude be to such ‘debasing’ discourses – discourses that ‘explain’ rational discourse and belief in terms of specific categories of social or psychological causes?  How should such discourses be folded in to the Brandomian-Hegelian apparatus, assuming we broadly accept that apparatus?

Here it seems to me that there are (at least!) two broad orientations one might take.  On the one hand, one might react with relief to the Hegelian rebuttal of the ‘perspective of the valet’, and hope that the Brandomian-Hegelian apparatus can ultimately point the way to the ‘recuperation’ of the apparently irrational social-psychological dynamics analysed by our ‘genealogists’, building such apparent irrationalities into a larger account of reason unfolding through contingent history.  Such a perspective sees the genealogical moment as an analytic waypoint en route to a larger socio-political rationalism.  I’m going to call this the ‘right Hegelian / Brandomian perspective’.

But one might also take a different attitude.  One might react with relief that the Brandomian-Hegelian apparatus shows, yes, that ‘locally genealogical’ perspectives are not inimical to reason – and for that reason, be all the more happy to embrace large elements of the genealogical perspective!  This Brandomian-Hegelian synthesis might be taken, not as a reason to see genealogical perspectives as ‘surpassed’, but as a warrant for their use.  Of course, embracing the genealogical perspective through the prism of this framework means seeing the genealogical perspective as not necessarily exclusively genealogical – if genealogical analysis may always potentially have the double aspect of reason, when viewed ‘magnanimously’, this may change what we take ourselves to be doing in genealogical critique.  But, at the same time, we should not be too hasty to dismiss such critique as inimical to reason.

In a 2017 paper I co-authored with N Pepperell [preprint link here], we applied something like this theoretical approach to the debates over the ‘strong programme’ in science studies.  The strong programme is often taken to be a paradigmatically genealogical enterprise.  Critics of the strong programme, such as Sokal and Bricmont, or Laudan, see it as a fundamentally anti-rationalist enterprise, exchanging the analysis of scientific content in rational and evidentiary terms for a debasing or debunking analysis focussed purely on contingent sociological factors.  Surely, both critics and defenders of the strong programme argue, such an approach can only lead to relativism.  The debate over the strong programme therefore amounts to a debate over whether relativism is an acceptable price for the strong programme’s methodological approach.

We argued, by contrast, that the core elements of the strong programme can be retained without a commitment to relativism, because normative categories of objectivity and reason can still be preserved even alongside a sociological – or, in the vocabulary of Brandom’s Hegel lectures, genealogical – analysis.  From my own perspective, this claim provides not a rebuttal of the strong programme (except in some of its metatheoretical conclusions), but a (perhaps counter-intuitive) justification for many of its methodological and empirical decisions.

A similar argument can be made, I think, about genealogical approaches in general.  The fact that the Brandomian apparatus is in principle capable of folding even ‘crassly debasing’ genealogical accounts into a larger rationalism should free us from worrying too much about whether any given genealogical account can in fact be folded into such a rationalism.  It gives free rein to ‘critical theory’, in a genealogical sense, because it shows that genealogical critical theory is not intrinsically anti-rationalist.

So, I think there’s a loose distinction that can be drawn here between two different ‘lessons’ that different theoretical dispositions might draw from the Brandomian-Hegelian treatment of genealogy.  Incredibly crudely put, those lessons are “Ha! That showed those genealogists the inadequacy of their perspective!” versus “See! Nothing wrong with genealogy at all, it’s perfectly compatible with rationalism!”  And one can roughly align these perspectives with a ‘right’ and ‘left’ Brandomianism – Brandomians more inclined to focus on non-reductive accounts of the rational legitimacy of norms, versus Brandomians more inclined to focus on the practice-theoretic explanation of norms in terms of normative attitudes.

My theoretical orientation, I think, pretty clearly falls on the quote-unquote ‘left’, genealogical side of this dispositional divide.  Hopefully in future posts I’ll both do more to unpack this preference, and also perhaps add some much-needed nuance to this schema.

I’ve been doing some more reading recently in the theoretical space of formal institutional economics – meaning scholars thinking about what kind of formal (often though not always game-theoretic) resources we can best use to model and analyse institutions.

Within this literature, it’s fairly common to typologise approaches to thinking about institutions into two broad traditions.  On the one hand, there are scholars who define institutions as the ‘rules of the game’ that structure political-economic life.  In a modelling context, an institution would here be specified as the parameters and incentives within which game-theoretic agents make their strategic decisions.  On the other hand, there are scholars who define institutions as strategic equilibria within a formal ‘game’.  Here the paradigmatic examples are coordination games, in which agents achieve a stable equilibrium – understood as a normative convention – which is stable because, once established, it is in the interests of all agents to retain this ‘cultural consensus’.  There are of course other ways to typologise the literature, but let’s go with this for now.

The next question is: what is the relationship between these two ways of understanding institutions?  And one way to understand that relationship is to see ‘the rules of the game’ as themselves emergent properties of strategic equilibria.  From this perspective, specifying the rules of any given game ‘exogenously’ is just reifying for analytic convenience a phenomenon that can itself be modelled as an emergent property of agents’ strategic play within a different, more ‘expansive’ game.

OK.  Let’s say we accept this broad outline (which I broadly do).  But if we take it that rules emerge from social practice (and are not merely a guide or constraint for social practice), this raises a set of questions about how to understand instances of social practice (or of strategic play within a game) that appear to depart from a rule.

Such practices can be understood as simply deviant – perhaps as self-interested and opportunistic departures from cooperative play, perhaps as mistakes, perhaps as ‘characterological’ in some way, but in any case as clear deviations from the accepted consensus norm.

But given that rules are themselves shaped by the reality of practice, it may be that an action that some social actors interpret as a deviation from a rule is interpreted by others as in conformity with the rule.  In this scenario, the disagreement over whether an action is in conformity with the rule is a disagreement over the substance of the rule.

The working out of such disagreements is how rules are specified.  Here a Wittgensteinian or Brandomian perspective sheds some light: because no rule can ever be fully specified, the way in which rules become further specified is via the community reaction to new actions that could in principle be interpreted as being either in accordance with or in contravention of a rule.  Brandom uses the analogy of common law legal judgements, in which new judgements aim to be grounded in precedent, but also themselves form new precedent for future judges.

At the same time, the working out of such disagreements may do more than further ‘specify’ or ‘clarify’ a norm (or rule) – it may specify the norm in a way that can reasonably be seen as transforming the substance of the norm itself.  A new specification of a rule, in other words, may be a new equilibrium which shifts the consensus of the game, and in turn shifts the rules of ‘subsidiary’ games.  This may happen in a ‘revolutionary’ way, in which an entirely new equilibrium is established after a period of normative upheaval.  But it may also happen in an ‘evolutionary’ way, wherein the ‘rules of the game’ gradually shift over time, by way of incremental ‘deviations’ that are then transformed into part of the subtly different new norm.  Linguistic drift is an example of this transformative collective practice. 
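The 'evolutionary' case can be illustrated with a toy simulation; the model and its parameters are my own illustrative assumptions, not anything from the institutional economics literature. Agents conform to the prevailing convention plus small idiosyncratic deviations, and the convention is then redefined by what agents actually did, so the rule drifts without any single 'rule-breaking' act.

```python
import random

random.seed(0)

# Toy sketch of 'evolutionary' norm drift. Assumptions: a norm is just a
# shared real-valued convention; each period, agents conform to the
# current consensus plus a small idiosyncratic 'deviation'; the new
# consensus is simply the average of what agents actually did.

N_AGENTS = 20
norm = 0.0
history = [norm]

for period in range(200):
    actions = [norm + random.gauss(0, 0.2) for _ in range(N_AGENTS)]
    norm = sum(actions) / N_AGENTS   # practice redefines the rule
    history.append(norm)

# At every step agents cluster tightly around the prevailing norm, yet
# over time the norm itself wanders: no single action 'broke' the rule,
# but the rule at the end need not be the rule at the start.
print(f"norm moved from {history[0]:.2f} to {history[-1]:.2f}")
```

The consensus here performs a random walk: each period's 'deviations' are absorbed into the next period's rule, which is a crude analogue of the linguistic-drift case mentioned above.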

And of course in practice some combination of these things often happens.  Consider, for example, the role of subcultures in the transformation of larger cultural spaces, with a subculture as a location of normative innovation from which new norms can then (potentially) disseminate (or not).  Is a subcultural space of this kind a deviation from the larger normative space, a simple alternative to the larger normative space, or the bleeding edge of the larger normative space’s ongoing self-transformation?  Which of these attitudes you adopt, of course, depends on your political and social perspective – but there is no ‘right’ answer – this question can only be settled in political-cultural practice, by the process of normative contestation, rejection, and consensus-formation that is a significant part of our political, economic, and cultural life.

Anyway, I’m not suggesting that I’m saying anything particularly innovative here – these are pretty familiar remarks about the ways in which norms emerge and transform in social practice.  But I want to foreground these kinds of considerations as I think about how to formally model institutional equilibria and dynamics.  I think that at least some institutional economics would benefit from more emphasis on this category of phenomena, when thinking about how the ‘rules of the game’ are made and changed.

On Thursday 12th November Owen Jones wrote an article for the Guardian in which he suggested that governments should not permit pharmaceutical companies to claim patents on COVID-19 vaccines.  Jones argues that such monopolistic patents, their enforcement, and the deals for vaccine provision made by some governments to the detriment of others, will result in predictable, avoidable, and highly undesirable global inequities in access to COVID-19 vaccines.

In response, Tom Chivers at Unherd wrote a piece that lays out some of the problems with removing intellectual property rights from pharmaceutical innovations.  Crudely put, the problem is one of incentivising scientific innovation.  As Chivers writes, there is a “real, and to some extent irresolvable, tension” between the two major functions of pharmaceutical companies.  On the one hand, these companies manufacture and distribute drugs – in most cases this can be done very cheaply.  On the other hand, these companies engage in scientific research to invent new drugs, and this process can be eye-wateringly expensive.

One of the functions of the patent system is to allow companies to recoup the costs of innovation via the sale of the final product.  By granting a state-enforced monopoly on the production and distribution of a product, patents ensure that the product can be sold at a very substantial mark-up, without the threat of market rivals selling the same product at closer to marginal cost.  Without this mechanism, Chivers worries, the major financial incentive to pharmaceutical innovation would be removed.

This is a real worry: much scientific innovation costs a huge amount of money, and without the ability to recoup that money, the major economic incentive to expensive scientific innovation is removed.  At the same time, Jones’ concerns about global equity and access to drugs are also real and very serious.  Fortunately, there are institutional solutions that might walk a path between these problems.  As Chivers writes, one possibility is for governments to award financial ‘prizes’ to the creators of effective COVID-19 vaccines:

we (governments, philanthropic agencies, etc) will give whoever comes up with the first vaccine some large amount of money: perhaps $1 billion, on the condition that they then agree to make it in large quantities and sell it at close to marginal cost to the developing world. 

Chivers argues that this approach is flawed, on the grounds that more than one company might produce an effective vaccine, and we don’t want our prize money to arbitrarily reward the first vaccine to be created, which might not be the most effective.

Luckily, this problem can be resolved by simply giving money to more than one company.  Moreover, we can use this institutional mechanism to take a step closer towards Jones’ preferred solution: in exchange for receiving the prize, a drug company forfeits the patent, and the drug enters the public domain.  (This mutually beneficial exchange between state actors and pharmaceutical companies could, if necessary, be backed by the threat of IP expropriation if the companies are unwilling to relinquish the relevant IP rights.)

As with any institutional structure designed to incentivise innovation, this idea has strengths and weaknesses.  The ‘collective action problem’ of establishing and administering the ‘COVID-19 Innovation Fund’ would not be trivial.  Ideally, to my mind, one would want such a fund to be administered at an international level, with national governments making contributions to the fund determined by their own level of national wealth – but of course this is easier said than organised.  Moreover, abolishing the IP associated with a drug does not in itself immediately facilitate production and supply of the drug – that is another challenge, requiring its own incentive system. There would be many other institutional challenges and obstacles.  

Nevertheless – we shouldn’t feel trapped in a false dichotomy created by the faulty idea that patents are the only effective mediating mechanism via which scientific innovation can be rewarded.  At the end of the day, patents are just a way that drug companies can make a lot of money.  It is perfectly possible to sever the link between the reward for innovation and the cost of drugs, by simply rewarding innovation directly.  In the case of COVID-19, this is relatively easily done, not least because governments are already spending colossal sums in their COVID responses, and because the costs of not rolling out a COVID vaccine are so high.

In conclusion: there is no good reason not to take seriously the approach of just giving the drug companies a load of money, and making the various COVID-19 vaccines part of humanity’s common treasury of knowledge. 

[Edited to add: I want to make clear that there’s a very extensive literature on these issues, and if I were doing this properly I’d actually discuss some of it – but I don’t have time, so I’m afraid this is the blog post.]

Back in the day (more than a decade ago, my god!) I sort of ‘live blogged’ my reading of Robert Brandom’s ‘Making It Explicit’.  That generated a few blog posts that in retrospect were badly wrong in key points (as well as a lot of blog posts that I still stand by and value!) – but I nevertheless found the process very helpful in working through Brandom’s system.  So, recognising that I risk again polluting the blogosphere with incorrect takes on Brandom, but selfishly going ahead anyway for purposes of self-clarification, I’m going to put up some remarks on Brandom’s interpretation of Hegel as I start to engage with it.

These are pre-preliminary remarks because I haven’t yet found time to even begin reading ‘A Spirit of Trust’ (Brandom’s Hegel book).  Instead, I’ve been listening to the Leipzig lectures on Hegel’s Phenomenology that Brandom has very helpfully put up on his YouTube channel.  I take it that these lectures basically cover the same terrain as the book, but of course ~18 hours of lectures can’t go into nearly as much detail as a ~800 page book, so I’m not imagining that these lectures are an adequate substitute for the text.  Nevertheless, until I can find time in my reading schedule for the book itself, this is what I’ve got.

It’s probably worth saying upfront that I’m not interested at all in the question of whether Brandom gets Hegel right.  Brandom’s is a reconstructive project, and while it’s obviously going to greatly irritate Hegel scholars if Brandom’s reconstruction departs in major ways from their interpretation of Hegel’s own position, I don’t care.  Moreover, although it is common and reasonable to assume that Brandom’s Hegel is simply Brandom himself dressed up in a slightly different technical vocabulary, I think it’s probably worth exercising a bit of caution here too.  Clearly Brandom’s Hegel’s system bears a striking – even an uncanny – resemblance to Brandom’s own system, but Brandom is still following the text of Hegel’s Phenomenology in his interpretation, so I don’t think it’s reasonable to assume that, if Brandom were sitting down to write a ‘phenomenology of spirit’ himself, it would look like this.  Rather, I think we can usefully operate as if what we have here is a third figure, analogous perhaps to ‘Kripkenstein’ – Saul Kripke’s influential and controversial interpretation of Wittgenstein – which exists somewhere in the space between, or is produced in the interaction between, Brandom’s and Hegel’s commitments.

So, with that said, some very preliminary, pre-preliminary remarks on starting to listen to the lectures.  First up: it probably doesn’t need saying, but as with ‘Making It Explicit’, my overwhelming impression is just how clever it all is.  Brandom has so many balls up in the air, and he juggles them with such deftness, interlocking different elements of the system in ways that are both intricate in detail and yet also load-bearing within an overall architectonic structure… it’s all just deeply impressive to watch.  I am, clearly, a Brandom fan, and that isn’t going to go away on the basis of this Hegel project.

With that said, I nevertheless have more unease about the Hegel project in some important areas than I did about ‘Making It Explicit’ (MIE).  As ever, there’s much more to be said than can be covered in a single blog post, even if I had actually read the book.  For now, though, I think the best way to begin discussing some of that unease is to highlight two key elements of MIE, and my takes on them, before contrasting those elements of the MIE project with similar elements of the Hegel project.

So.  Extremely long-term readers of the blog may remember that my main discomfort with ‘Making It Explicit’ focused on the role that Brandom grants to specifically linguistic practice within his system.  Clearly that’s a big disagreement to have, given that Brandom is first and foremost a linguistic philosopher, and given that he pretty clearly thinks that participation in a linguistic community is in some sense a precondition of sapience (a view I disagree with!).  Nevertheless, my disagreement with MIE on the role of the linguistic was tempered by the way in which Brandom embeds his ‘inferentialist’ semantics within his ‘normative pragmatics’.  MIE is interested in the way that language is, first and foremost, something that we do, as a social activity.  Moreover, one of the key elements of Brandom’s account of how linguistic practice generates the forms of normativity characteristic of sapience was his metaphor of ‘scorekeeping’.  In MIE, ‘scorekeeping’ plays a fundamental explanatory role – a role analytically more fundamental (I would argue) than the specific linguistic practices that Brandom uses to give an account of how scorekeeping functions within a discursive community.

It seemed to me then (and still does!) that the role of scorekeeping in MIE leaves open the door to a parallel philosophical apparatus (formally very similar to Brandom’s, but departing from it in key respects), that gives a non-linguistic account of social scorekeeping.  So (perhaps eccentrically), it seems to me that despite Brandom’s own heavy emphasis on specifically linguistic practice, the apparatus of MIE has much to teach us, even if we do not share Brandom’s own commitments in linguistic philosophy, or concerning the centrality of language to thought.

That’s one key element of MIE, and my reaction to it.  Another key element of MIE is Brandom’s account of objectivity.  For me, this is really the key ‘output’ of Brandom’s apparatus.  Again, it’s necessary to be extremely crude and simplistic, if one wants to give a subsection-of-a-reasonable-blogpost-length summary of what Brandom is doing.  But as I see it, one key goal of Brandom’s system is to address a problem that has plagued the pragmatist philosophical project from the beginning.

That problem is, to be crude about it, “what about objectivity, then?”  The pragmatist project, crudely put, is to ground our understanding of traditional philosophical categories – categories like knowledge, truth, value – in social practice theory.  The idea is that what we do as social beings is in some sense generative of these categories, and the categories can only be explained in terms of social practice.  The core objection to the pragmatist project is, basically, that this can’t be done.  Moreover, not only can it not be done, but the effort to do it opens the door to moral, political, and epistemic nihilism (at worst) or moral, political, and epistemic incoherence (at best).  This is what Bertrand Russell is saying when he suggests that US-style pragmatism is a gateway drug to fascism.  This is what Sokal and Bricmont were doing when they suggested that the strong programme in science studies was somehow destroying left politics.  And this is (part of) what many contemporary critics of ‘critical theory’ are doing when they suggest that ‘social justice’ accounts of politics or truth are destroying civilisation.  The idea is that truth, morality, etc. have some reality that exists beyond the social practice of contingent social groups, and that critical-theoretic efforts to ground these categories in social practice are undermining the categories themselves.

Obviously there is a lot mixed up in these debates besides the philosophical issue of the coherence of the pragmatist project, so I want to be clear that I’m not at all suggesting that these debates can be reduced to the kind of abstruse meta-theoretical problems that preoccupy Brandom.  Nevertheless, for me, one of the most important contributions of MIE was that it provided a detailed and (in my humble opinion) satisfactory account of how norms and objectivity can be explained in practice-theoretic terms without succumbing to the theoretical vulnerabilities that have bedevilled earlier pragmatist thinkers (such as Brandom’s doctoral supervisor Richard Rorty, but extending back to the ‘classical’ pragmatists like Dewey, James, etc.).

OK.  So for me Brandom’s account of the concept of ‘objectivity’ was probably the key contribution of MIE.  It’s this account of objectivity (of reference and of norms) that explains why pragmatism isn’t simply a way of explaining truth and value in terms of (say) the practices or beliefs of a dominant social group, and why pragmatism doesn’t simply evacuate these categories altogether. And that concept of ‘objectivity’ more or less emerges from Brandom’s account of scorekeeping.  In particular, Brandom’s account rests on a set of distinctions between different attitudes to normative commitments, established via his scorekeeping apparatus. 

On this account, I as a sapient creature have certain normative commitments about the way things are.  I also track other people’s commitments.  But this tracking of commitments operates via what Brandom calls a form of ‘double bookkeeping’.  I can have an opinion about what somebody takes themselves to be committed to; I can also have an opinion about what they actually are committed to, given my own views about what their commitments entail.  And this ‘double bookkeeping’ can reflexively be applied to my own commitments.  I know what I take my own commitments to entail, but I am also aware that others may take my commitments to entail something different – and this gap between my current perception of my own commitments, and the commitments I may eventually take myself to have really possessed all along, opens up a ‘formal’ concept of objectivity that can be understood independent of any specific account of what objectivity substantively consists in.
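To make the shape of this ‘double bookkeeping’ idea a little more tangible, here is a deliberately crude toy model in Python. Everything here (the class names, the stipulated inference relation, the example claims) is my own invention for illustration, and nothing about it pretends to capture Brandom’s actual apparatus; the point is just the two-level structure: what a speaker avows, versus what a scorekeeper, applying the scorekeeper’s own view of what follows from what, attributes to them.

```python
# Toy sketch of 'double bookkeeping' in a Brandomian spirit.
# Purely illustrative: names and the inference relation are invented.

# A stipulated inference relation: what each claim entails.
ENTAILS = {
    "this is copper": {"this conducts electricity"},
    "this conducts electricity": set(),
}

def closure(claims):
    """All commitments entailed (transitively) by a set of claims."""
    result = set(claims)
    frontier = set(claims)
    while frontier:
        new = set()
        for c in frontier:
            new |= ENTAILS.get(c, set()) - result
        result |= new
        frontier = new
    return result

class Scorekeeper:
    """Keeps two 'books' on each interlocutor: the claims they
    explicitly avow, and the commitments the scorekeeper attributes
    to them by the scorekeeper's own inferential lights."""
    def __init__(self):
        self.acknowledged = {}  # speaker -> claims they avow

    def assert_claim(self, speaker, claim):
        self.acknowledged.setdefault(speaker, set()).add(claim)

    def attributed_commitments(self, speaker):
        # The scorekeeper applies their own view of what follows.
        return closure(self.acknowledged.get(speaker, set()))

keeper = Scorekeeper()
keeper.assert_claim("A", "this is copper")
# A may not acknowledge the consequence, but the scorekeeper
# attributes it to A anyway:
print("this conducts electricity" in keeper.attributed_commitments("A"))
```

The gap this toy makes visible (between what A takes themselves to be committed to, and what the scorekeeper takes A to really be committed to) is, very roughly, the gap out of which Brandom’s ‘formal’ concept of objectivity emerges.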

I’m being much too telegraphic here to capture how Brandom’s argument functions with any adequacy, I’m really just trying to gesture to the broad space of Brandom’s argument.  For the purposes of this blog post, what I mostly want to capture is that this account of objectivity is extremely ‘slimline’ – this key element of MIE’s argument does not make any ontological claims about what the substance of objective knowledge consists in.  It gets you out of the problem that has historically plagued pragmatism – how can we give an account of objectivity that cannot be reduced to, say, the consensus of a given sub-community? – and that’s ‘all’ it does.

Now, there are other elements of MIE – indeed, some of the most involved sections, such as Brandom’s lengthy discussion of anaphora – that I haven’t discussed here.  And indeed, I wouldn’t want to go anywhere near even trying to summarise those sections without reading the book again.  So I don’t want to make any strong claims about what the book doesn’t do.  My points here are more that: First, the elements of the book that I’ve highlighted are, to me, a big part of its core argument; Second, this core argument is quite ‘slimline’ in terms of its commitments: Brandom builds a great deal on the foundations of a quite minimal theory of practice.

Ok.  So, with that overly laborious background (given the brevity of the rest of what I have to say in this post), let me articulate some pre-preliminary thoughts on Brandom’s Hegel project.  And here I want to contrast two elements of Brandom’s Hegel with those elements of MIE I’ve just highlighted.

First, although Brandom’s Hegel is a pragmatist, and there is no inconsistency that I can see between the apparatus of MIE and the apparatus of A Spirit of Trust, the latter seems to me (again, at a very first pass) to devote less energy to grounding its account in a ‘deflationary’ pragmatics.  So far in Brandom’s Hegel lectures we have had no discussion of scorekeeping, that key explanatory component of MIE’s account of objectivity.  Rather, Brandom’s Hegel (so far) has a tendency to leap straight into the more directly semantic elements of the argument.

Clearly there’s nothing wrong with this – and indeed for all I know these matters will be addressed in full later.  But for people like me for whom the normative pragmatics dimension of MIE was in some respects more interesting than its inferentialist semantics, this is a bit disappointing.

That’s my first, very brief and fairly trivial, observation.  My second observation is that it seems to me that Brandom’s Hegel may be making stronger ‘ontological’ claims than the core elements of MIE that I’ve highlighted need commit us to.

In particular, Brandom has an extremely intricate and carefully developed account of Hegel’s idealism.  I’ll want to circle back round and give a much fuller account of this once I’m more confident in my grasp of this material.  But at (again) a very preliminary and crude first pass, Brandom argues that for Hegel the world is already ‘conceptually structured’.  What this means is not that the world is ontologically dependent on thought – Brandom’s Hegel is not a ‘subjective’, Berkeleyan idealist.  For Brandom’s Hegel (much of) the world would be the way it is even if nobody had ever existed to perceive it.  Rather, the argument is that the structure of the world is such that we are capable of having ‘adequate knowledge’ of the world, and this seemingly requires a homology between the normative structure of thought and the ontological structure of the world.  Specifically, Brandom believes that for Hegel the normative component of semantics maps onto the modal structure of reality.  That is, if I am committed to a claim, what this means is that I am committed to some other claims also being the case, and some other claims also not being the case.  And this normative network of obligations and entitlements (legitimate and illegitimate inferences) is homologous with modal relations of possibility and impossibility between and within states of affairs in reality.  If such-and-such a commitment about the world is incompatible with such-and-such another commitment about the world, this normative obligation to not hold those two beliefs simultaneously is saying that such-and-such a state of affairs is in reality incompatible with such-and-such another state of affairs.  Modal claims about compatibility and incompatibility of real states of affairs map onto normative claims about our inferential obligations given our commitments, and vice versa.
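Brandom has given formal treatments of incompatibility elsewhere in his work; the following is only my own toy simplification of the general idea, with invented example claims. A claim is modelled by the set of claims it is incompatible with, and one claim ‘entails’ another when everything incompatible with the second is already incompatible with the first:

```python
# Minimal toy sketch of an incompatibility semantics, in the spirit
# of (but much cruder than) Brandom's formal work. The example claims
# and incompatibility assignments are invented for illustration.

INCOMPATIBLE = {
    "x is red":     {"x is green", "x is colourless"},
    "x is scarlet": {"x is green", "x is colourless", "x is crimson"},
    "x is green":   {"x is red", "x is scarlet", "x is colourless"},
}

def entails(p, q):
    """p incompatibility-entails q iff everything incompatible
    with q is already incompatible with p."""
    return INCOMPATIBLE.get(q, set()) <= INCOMPATIBLE.get(p, set())

# 'x is scarlet' rules out everything 'x is red' rules out, and more:
print(entails("x is scarlet", "x is red"))   # True
print(entails("x is red", "x is scarlet"))   # False
```

The homology claim, as I understand it, is that a single relation of this shape can be read in two registers at once: deontically, as what I am obliged or entitled to infer given my commitments, and alethically, as which states of affairs are really possible or impossible together.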

My account of this argument here is desperately crude relative to Brandom’s – my goal is again just to gesture in the direction of the Brandomian Hegelian apparatus.  The point is that this account of Hegel’s idealism explains how we have objective knowledge of the world.  For Brandom’s Hegel, this argument meets the sceptical challenge thrown up by his predecessors in the modern philosophical tradition.  And this goal of meeting the sceptical challenge of Descartes, Kant, and others is a key motivator of this apparatus, on Brandom’s account.  For Brandom’s Hegel, one of the problems of the pre-Hegelian modern philosophical tradition was that it baked scepticism into its semantics, by postulating a relationship of representation that intrinsically rendered reality ungraspable in key elements.  ‘Objective idealism’ aims to address this problem, by showing how reality can be ‘conceptually structured’ and thus knowable in itself without committing us to the idea that reality is ontologically dependent on knowing subjects.

Which is all fair enough.  My initial worry about this dimension of Brandom’s Hegel’s argument, though, is that it might ‘prove too much’.  Like Brandom’s Hegel, I am suspicious of any epistemology that seems to intrinsically condemn us to scepticism.  Maybe we’re completely misguided about reality, but it doesn’t seem right to have this deep epistemological failure be an intrinsic feature of our philosophical apparatus.  (I’m aware that ‘doesn’t seem right’ isn’t actually an argument, but I’m not going to shoulder the burden of grounding my philosophical intuitions in this blog post…)

At the same time, though, and in the other direction, I worry about arguments that seem to imply that reality must be knowable to us, at least in principle, or at least in general.  What if there are elements of reality that we simply cannot comprehend, and never could?  What if the reason for our inability to comprehend those elements of reality is that reality is not ‘conceptually structured’ in Brandom’s Hegel’s sense, or is so only in some of its aspects, or ‘from a certain point of view’?  I’m inclined to a ‘satisficing’ approach to knowledge – a ‘good enough’ account of what it is to know something – and it feels that Brandom’s Hegel’s account of epistemology might be after a stronger sense of epistemological adequacy.  What if this criterion for adequacy of knowledge is just too strong to actually capture the reality of how we know things?

Now, as I keep saying, these are only pre-preliminary thoughts.  I’m writing them up here not because I’m presenting them as an argument against Brandom’s Hegel’s project, certainly not as they stand, but because I find it useful to get my reactions down in writing as I go.  Still, these are some of the things I’m going to be thinking about as I continue to work through Brandom’s remarkable project.

Aotearoa New Zealand has been consumed today by discussion of the varied fiascos around the government’s COVID-19 border quarantine and self-isolation policies and practices. Obviously there have been a wide range of failures here, which cannot be reduced to a single source. But I want to draw attention to a specific, very consistent element of the Ministry of Health’s attitude to COVID-19 policy that has, in my view, caused problems for the government’s COVID-19 response from the beginning, and which now risks wrecking the enormous progress the country has made in the goal of eliminating COVID-19.

Put bluntly, the Ministry of Health is absolutely convinced that there is very little point in testing non-symptomatic people. This has been clear in their testing guidelines from the beginning, and in the great difficulty people have had getting tested. It is now confirmed in testimony from people who have recently arrived in the country, and who were unable to persuade MoH staff to test them even as part of the government-overseen isolation process. Per this 1 News report:

She and her family members also didn’t get tested before leaving yesterday, because she was told it was optional. “She then said to us that it was pointless us having the test done unless we were showing any symptoms.” Another woman who stayed at the same hotel as the two new positive cases says she was told the same thing. She got a test anyway, but now, home, still hasn’t received her result.

Why is the MoH so determined not to test non-symptomatic people? Obviously the following is somewhat conjectural, but in my view some core elements of the NZ government’s view on COVID-19 are as follows:

First, the government is convinced that asymptomatic transmission of the virus is extremely rare. Asymptomatic here means not just “non-symptomatic”, but “will never become symptomatic.”

Second, the government is convinced that symptomatic cases can be identified when symptoms emerge, and contact tracing can then take care of any transmission that took place in the few days before symptoms emerged.

Third, these opinions are supplemented by the recognised statistical fact that the probability that a positive test result is a false positive depends heavily on the underlying probability – the base rate – that the individual being tested actually has the condition.

Fourth, the government is convinced that for these reasons, those calling for widespread testing of non-symptomatic people are being irrational. To the extent that the government are willing to test non-symptomatic people they will do so reluctantly and tardily, in order to (in their eyes) placate a hysterical public and media, or at best out of a hyper-abundance of caution, rather than for good public health policy reasons.

Fifth, and particularly irrationally, these attitudes have been extended even to the testing of individuals in mandatory isolation to attempt to prevent new COVID-19 cases from circulating within the country.

All of these positions are problematic. It is plausible but not established that asymptomatic transmission is very rare; it is easy to miss the symptoms even of symptomatic cases – as in fact happened with the two new cases of COVID-19, one of whose symptoms was apparently (mis)attributed to a pre-existing condition; the risk of statistical artefacts can be greatly reduced by repeated testing; it is in fact extremely rational to adopt a policy of broader testing, as I argued in an earlier post; and none of this makes any sense when you’re talking about a process that has specifically been designed to prevent new COVID-19 cases from entering an as-far-as-we-know currently otherwise virus-free country.
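The base-rate point, and the effect of repeated testing on it, can be made concrete with a rough Bayes’ theorem calculation. The sensitivity, specificity, and prevalence numbers below are invented for illustration only, not actual test characteristics:

```python
# Back-of-envelope illustration of the base-rate problem and of how
# repeat testing shrinks it. All numbers are assumptions for
# illustration, not real PCR performance figures.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(infected | positive test), via Bayes' theorem."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.85, 0.995   # assumed test characteristics
low_prev = 0.0001          # near-virus-free community
quarantine_prev = 0.02     # recent arrivals from high-incidence countries

print(round(positive_predictive_value(low_prev, sens, spec), 3))
print(round(positive_predictive_value(quarantine_prev, sens, spec), 3))

# A second, independent positive test: treat the first result's
# posterior as the new prior.
first = positive_predictive_value(quarantine_prev, sens, spec)
second = positive_predictive_value(first, sens, spec)
print(round(second, 3))
```

Even with made-up numbers, the pattern is instructive: in a near-virus-free community a positive result is very likely a false positive, among recent arrivals in quarantine it is most likely genuine, and a second independent positive pushes the probability close to certainty – which is exactly why the base-rate worry is weakest in the border-isolation setting, where prevalence is relatively high and repeat testing is easy to administer.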

And yet the Ministry of Health and the government more broadly, in my view, remain in the grip of a baseless conviction that calls for testing of non-symptomatic cases are fundamentally silly, a demand to be placated rather than acted upon. Until the MoH abandons this idée fixe, the country will not be able to adopt a fully rational COVID-19 pandemic policy.