A very short post on an issue that comes up quite a bit in left debate (by which I mean not centre left, but ‘radical’ left): can capitalism tackle climate change? Meaning: is it even possible to adequately address the climate crisis within the capitalist system?  A popular answer to this question on the left is ‘no’.  My answer is ‘in principle, yes’.  I want to very quickly make the case for that position.

First the case against.  The first reason to think that capitalism is incompatible with tackling climate change is that capitalism is intrinsically growth-oriented.  Capitalist growth, the argument goes, is incompatible with the kind of reduction in consumption of resources that is required to adequately reduce greenhouse gas emissions.  Therefore, if we want to tackle the climate crisis, the abolition of capitalism is a necessary precondition.

Moreover, capitalism is characterised by a massive imbalance of socio-political power, such that the ruling capitalist class has a lot of power and the great bulk of humanity has very little power.  Since the ruling capitalist class has little interest in changing its destructive behaviour, the dire consequences of which will be visited upon the bulk of poor and powerless humanity rather than the ruling class itself, one cannot expect the capitalist system to reform itself, the argument goes.

This position definitely has a lot of plausibility to it!  But I think it’s wrong, for the following reasons.

First, on growth.  Capitalism as a system, and capital as a component of that system, are completely indifferent to the form that economic growth takes.  The valorisation of capital can in principle take place via any available mechanism.  This is one of the things that makes capitalism so internally varied, and so adaptable, as an economic system.  Capital does need to be tied to actual use-values at some point in the system – you can’t have a pure bubble economy.  But capital itself could not care less what the specific use-values are.  So: fossil fuels can be used to drive economic growth, but that’s just because fossil fuels are convenient for capital – there’s no intrinsic physical or social law that ties growth to this form of production and consumption.

Indeed, the incredible flexibility of capitalism is a cause for hope with respect to the possibility of shifting away from fossil fuels.  If we choose to make fossil fuels extremely inconvenient and costly for capital to use, capital will shift away from fossil fuels to easier forms of valorisation – that’s just what capital does.

There are two big ways to shift capital away from fossil fuels.  The first is regulation – in this case, something close to de facto global regulation.  This is, of course, a major political challenge – but we already know that capital can accommodate regulation, because it already operates profitably under all kinds of existing regulatory regimes.  Economic regulation is not intrinsically anti-capitalist – it’s just a form of capitalist governance.

The second way to shift capital away from fossil fuels is to make other energy sources more viable and appealing.  This is, fundamentally, a technological challenge.  And capitalism is good at technological innovation!  It’s common on the left to talk about how ‘tech won’t save us’, and of course in a narrow sense that’s true, but tech can potentially do a lot to save us.  Green energy alternatives are becoming increasingly widespread and viable, and this is extremely good news!  

So – it seems to me that it should be clearly within the capacity of the capitalist system to make a very large-scale shift to green energy and to massively reduce greenhouse gas emissions.  Capitalism’s drive to growth is not, in itself, an insurmountable obstacle to achieving this.

What about political power, though?  Even if we grant that capitalism’s orientation to growth is not an insurmountable obstacle to tackling climate change, the political problem remains: look at who’s running things.  In fact there are (at least) two political problems here.  First, there’s the general collective action problem of coordinated policy action across a large number of rivalrous socio-political actors.  Given that capitalist states are in many important respects in competition with each other, the challenge of tackling climate change is (in the game-theoretic jargon) a kind of n-player prisoner’s dilemma game, where it is often more advantageous to let other states take action than to take action oneself.  And, second, there’s the problem of who the ruling class decision-makers are in the first place, and what their interests are.
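
To make that game-theoretic structure concrete, here is a minimal sketch of an n-player prisoner’s dilemma – all the payoff numbers are invented for illustration, not an empirical model of interstate politics:

```python
# A minimal n-player prisoner's dilemma for climate action. All payoff
# numbers are illustrative assumptions, not an empirical model.

N = 10          # number of states
BENEFIT = 3.0   # shared public benefit created by each state that acts
COST = 5.0      # private cost to a state of taking action

def payoff(i_act: bool, n_others_acting: int) -> float:
    """Payoff to one state, given its own choice and others' choices."""
    total_acting = n_others_acting + (1 if i_act else 0)
    return BENEFIT * total_acting - (COST if i_act else 0.0)

# Whatever the other states do, each state does better by free-riding...
for k in range(N):
    assert payoff(False, k) > payoff(True, k)

# ...even though universal action beats universal inaction.
print(payoff(True, N - 1), ">", payoff(False, 0))  # 25.0 > 0.0
```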

So there are several layers of collective action problem that must be overcome to get the kind of global governance structures that we need to tackle the climate crisis.  First, we need ruling class figures and groups who actually want to tackle the problem. Then we need them to be able to successfully coordinate international action in a way that overcomes the n-player prisoner’s dilemma problem of the interstate regulatory system.

Moreover, we need this coordination to persist in the face of capitalism’s ongoing tendency to ‘melt all that is solid into air’ – i.e. to revolutionise and destabilise its own structures in the pursuit of valorisation and growth.  Here capitalism’s internal diversity of institutional structure and dynamism is a cause for pessimism, because it means there are constant opportunities to overturn any regulatory regimes that have successfully pushed capitalist valorisation away from forms of production that contribute to the climate crisis.

So – can it be done?  Is it realistic to stably implement the necessary forms of economic governance within a capitalist system?

Well – maybe not!  Clearly it’s a difficult challenge.  But here’s the thing: it is also a difficult challenge to overthrow the capitalist system and institute an alternative mode of global economic organisation.  If we’re talking about things that are in principle possible, but that are also very hard, then we need to also be realistic on the comparative question of how hard the different options here are.  It seems to me that, at least on its face, overthrowing capitalism altogether is quite a bit more difficult than instituting an adequate global regulatory regime for greenhouse gas emissions.  

Of course, that doesn’t mean that abolishing capitalism can’t be done!  This isn’t a counsel of despair from an anti-capitalist viewpoint.  Rather, it’s an argument that we should be realistic about what the available political futures are.  If the climate crisis is a true crisis – which it is – then we quite likely can’t afford to hold out for our first preference solution.  If the problem is potentially fixable within capitalism, then it strikes me as a bad idea to pretend otherwise, in the hope of achieving what looks like, in present global circumstances, an unlikely prospect of a non-capitalist alternative solution. Indeed, it may be that our hopes for achieving an emancipatory post-capitalist political-economic system rest on our ability to forestall the climate crisis within capitalism.

Ok – I’ve now listened to the entirety of Brandom’s lectures on Hegel, which cover, in significantly briefer form, the content of ‘A Spirit of Trust’. I need to make time, somehow, to carefully read the book itself, and naturally anything I say about Brandom’s Hegel is provisional until I’ve done so. Still, I take it that the core of Brandom’s interpretation is clear from his lectures, and I am too impatient to want to wait until I’ve worked through the full text before commenting! (“Perhaps there is only one cardinal sin: impatience”, writes Kafka – but what does he know.)

Let me dive straight in and say that while obviously I think Brandom’s Hegel project is extraordinarily impressive, I think it jumps off a sort of ethical-political cliff in these closing sections, and it’s probably going to take some pretty heavy lifting to save it. The problem is the use to which Brandom puts the (retrospective) category of forgiveness, and the corresponding (prospective) category of Trust. (For the sake of easy expression, I’m going to stop talking about ‘Brandom’s Hegel’, and just talk about Brandom, but I hope it’s clear that all appropriate caveats about authorial identity apply.)

So – I’m not going to have my language or the details right here, but the closing sections of Brandom’s argument pivot around the distinction between the ‘noble’ and ‘base’ or ‘great-souled’ and ‘small-souled’ or ‘edelmütig’ and ‘niederträchtig’ perspectives. Roughly speaking, the ‘noble’ perspective sees actions as guided by universal norms, and the ‘base’ perspective sees actions as guided by immediate and contingent material motives. Brandom argues that every action that has ever or could ever take place can, in principle, be viewed from either perspective. What this means is that it is always possible to interpret any action in a ‘debasing’ way, stripping away the pretension that the action is motivated by a norm and ascribing to it, rather, base or ignoble motives – or it is possible to interpret the same action as carried out in accordance with some norm. These are ‘bad faith’ and ‘good faith’ perspectives on action.

The kind of community we want to institute, Brandom argues, is one characterised by mutual recognition – and to recognise another as a normative agent is to recognise their actions as carried out in a normative space, that is, in response to norms. At the same time, we are all aware that our actions may not necessarily be understood in those terms – we all fall short of the ideal selves we may aspire to, and take actions that may deviate from the norms we hope to realise.

At the same time, it is a core element of Brandom’s theory of action that we can never fully know what our actions are when we take them. Actions have unanticipated consequences, and future events may always and at any time retroactively transform what the content of any given action was. It is for this reason impossible to definitively say what the normative content of an action even is at any given time – that normative content is always open for future ‘transformation’ (or greater specification), by the consequences of future history and by future acts of interpretation.

This means that what we currently may interpret as failures to realise a norm could, in principle, be interpreted by future actors as instantiations of a norm. And this possibility opens up a dynamic that Brandom characterises as ‘confession and forgiveness’.

In this dynamic, a social actor confesses the base motives that drive their actions, and the ways in which their actions have deviated from norms, while the forgiver nevertheless recognises those actions in terms of an unfolding norm that the confessor was unable to articulate or recognise. In this way, a ‘tradition’ is constituted that can, in principle, ‘recuperate’ even actions that seem to have no normative justification, retroactively understanding them as justified by the events and interpretations that followed. In Brandom’s words:

Something I have done should not be treated as an error or a crime… because it is not yet settled what I have done.

(‘A Spirit of Trust’, p. 625)

Moreover, for Brandom in these sections, the kind of political-social-discursive community we aspire to create should be one characterised by an ever-expanding scope of such confession and forgiveness – a community animated by a ‘spirit of trust’, in which we confess our normative failures in the hope that future actors can ‘recuperate’ those failures within a larger, more ‘magnanimous’ interpretive framework, bringing more and more actions under the auspices of normative reason.

I’m racing past a huge amount of content here, and I will want to circle back round and give all of this a much more nuanced treatment once I’ve done my due diligence properly. Nevertheless, in a preliminary way, I think I have enough grasp of what Brandom is getting at here to say: are we sure about this? More specifically: granted that the full content of any action cannot be fully specified at any moment, are we sure that ‘forgiveness’ should be the attitude we ultimately aspire to achieve in relation to social actions?

I think there are two broad issues here. The first is that some elements of Brandom’s discussion seem to me to move too easily between two categories of non-conformity with norms. On the one hand, we may see actions as failing to adhere to any norm at all – as purely appetitive, or accidental, or whatever. This is the perspective of ‘particularity’. On the other hand, we may see an action as taking place in conformity with the wrong norm – indeed, as taking place in conformity with a bad or evil norm. “Falling short of norms” is not the only form that evil may take – (what are taken to be) norms themselves may be evil.

In general I think it is a long-term problem with Brandom’s work that he is insufficiently attentive to worries about ‘bad norms’. Brandom is preoccupied by the threat of ‘nihilism’ – by the problem of adopting theoretical perspectives that, if taken seriously, are unable to make space for norms at all. But Brandom does not seem terribly preoccupied by the problem of communities or social spaces that have established values that we nevertheless wish to reject. This leads him, in my view, to misunderstand the place that a number of critiques of ‘Making It Explicit’, and of other pragmatist thinkers, are coming from – and it also means that he does not give nearly enough attention to this problem space in his interpretation of Hegel.

What if an entire society, in its dominant norms, practices, values, etc., is evil? This is not something that should be too much of a challenge to imagine. And yet it presents a challenge for pragmatist, practice-theoretic, accounts of normativity. If we have a transcendent account of norms, it is easy to understand how we can resist the evil around us. But if we have a practice-theoretic account of norms – if our own norms in some sense emerge out of our own social practices and those of the society we inhabit – then it is harder to see where the ‘critical distance’ that would allow us to reject the bad norms by which we are surrounded, in favour of good norms, might come from. I don’t, to be clear, think that this is an insoluble problem – in practice, all societies, even ‘totalitarian’ ones, are highly internally diverse, and there are always social practices and locations that provide a critical standpoint from which alternative value systems to the dominant ones may be assembled. Nevertheless, this category of worry is one that pragmatists need to address – and that Brandom, as I say, seems inattentive to.

In terms of Brandom’s discussion of recollection and forgiveness, a similar problem manifests in relation to history. How much, in history, in fact, can and should be forgiven? Do we want to provide a Whiggish rational reconstruction of history that ‘justifies’ apparent crimes because of their later consequences? Should we? Isn’t that kind of monstrous? Should the slave trade, the Holocaust, the great policy-driven famines of the nineteenth and twentieth centuries, the many exterminations, oppressions, and violences of the charnel house of history, all be grist for the mill of Absolute Spirit’s magnanimous forgiveness?

I think it’s clear – on political-ethical grounds – that the answer to these questions is “no”. If we want to build a better community characterised by a more expansive practice of mutual recognition and respect, part of this collective project needs to be a sombre recognition that much human action is not and can never be justified. The question of how we mine our history to construct the traditions that shape the present needs to be, in part, a question of which history we ‘fold in’ to our self-understanding precisely by rejecting the idea that any norm we value or respect can be found there.

I don’t think Brandom’s metatheoretical apparatus is incompatible with that more sombre approach to the thinking of history or tradition. But I do suspect that some significant modifications should probably be made to these elements of Brandom’s Hegel, if we are to assemble a ‘recollective reconstruction’ of our history that is fit for the purpose of emancipatory politics.

It’s common in discussions of the post-Hegelian German Idealist tradition to draw a distinction between ‘left’ and ‘right’ Hegelians.  I don’t know nearly enough about the history of German Idealism to make use of this distinction in any scholarly way – but my understanding is that the distinction turns on the extent to which the Hegelian apparatus is taken to be ‘critical’.  One can interpret the Hegelian apparatus as providing a quasi-metaphysical justification for the political status quo, or one can interpret the Hegelian apparatus as providing the resources for a far-reaching critique of actually-existing institutions.  This, crudely put, I take it, is the distinction between ‘left’ and ‘right’ Hegelianism.

Brandom sometimes, following Rorty, plays with this phrase to draw a distinction between ‘left’ and ‘right’ Sellarsians.  And in this post I want to likewise draw a potential distinction between ‘left’ and ‘right’ Brandomians.  In doing so, I want to flag straight up that this is extremely loose usage, and I don’t claim that this distinction necessarily usefully maps onto actual political left and right categories (themselves of course often extremely fuzzy).  Hopefully the post itself will make clear what I’m getting at.

So.  As I’ve said before, I’m slowly working through Brandom’s Hegel lectures (available at his YouTube channel here), as a precursor to slowly working through Brandom’s Hegel book.  I’m currently on the penultimate lecture – Genealogy and Magnanimity: The Allegory of the Valet – which covers similar ground to Brandom’s lecture of some years ago – Reason, Genealogy, and the Hermeneutics of Magnanimity.  This post is essentially a reflection on these lectures.  Brandom clearly regards the content discussed in these lectures as central to his Hegel’s philosophical project.

The lectures are focused on Hegel’s distinction between ‘noble’ and ‘base’ or ‘great-souled’ and ‘narrow-souled’ or ‘magnanimous’ or ‘suspicious’ meta-conceptual attitudes.  Brandom connects Hegel’s understanding of these categories to one of Brandom’s own master distinctions, between normative statuses and normative attitudes.  Normative statuses, recall, are things like “an obligation” and “an entitlement”.  Normative attitudes are things like “taking somebody to have an obligation or entitlement”.  One of the major overarching goals of Brandom’s entire philosophical project is to explain normative statuses in terms of normative attitudes in a non-reductive way – in a way, that is, that does not reduce normative statuses like obligations to purely ‘subjective’ categories like “believing somebody to have an obligation”, and yet at the same time does not attribute a “spooky” ontological substance to normative statuses or the norms associated with them.

I’ve been over all this in painful detail in my earlier series of posts on ‘Making It Explicit’, and I’m going to let a lot of important nuance fall by the wayside for that reason.  In his Hegel lectures, Brandom adds a historical-political dimension to these issues, by connecting these categories to Hegel’s ‘epochs of spirit’.  Broadly speaking, for Brandom’s Hegel, pre-Enlightenment understanding of norms saw normative attitudes as derivative of normative statuses.  On this understanding, norms are real things out there in the world somehow, and our normative attitudes can be explained as attempts to attend to their ‘ontological’ authority.  The historical shift to ‘Enlightenment’ facilitated a theoretical perspective that turned this account on its head: from this perspective, normative attitudes are the fundamental explanatory category, and normative statuses derive from them.  But, for Brandom’s Hegel, there is a tendency within this Enlightenment tradition to take this ‘subjectivist’ orientation ‘too far’ – to see normative statuses as fundamentally unreal, an otiose concept, and to understand norms, morality, etc., purely and reductively in terms of normative attitudes.  (Utilitarianism is, for Brandom’s Hegel, an example of this approach.)  This ‘Enlightenment’ attitude can then in turn be taken to provide warrant for a nihilism about norms – it can lead to the conclusion that there are no norms, really, only people believing or acting as if there were norms – a rejection of the normative as such.  (In more recent philosophy, Brandom characterises Gilbert Harman as an exemplar of this approach in the area of moral philosophy.)

Brandom connects all this in turn to what he sees as two different interpretive orientations to any given action.  One can interpret an action as taken in response to a normative obligation – as taking place within the space of reasons – or one can interpret an action as taking place for purely ‘causal’ reasons, as driven by factual contingencies that cannot themselves be understood in terms of reasons.  Brandom discusses Hegel’s ‘allegory of the valet’: “no man is a hero to his valet”, for Hegel, because a valet sees exclusively the ‘contingent’, ‘debased’, ‘appetitive’ motives associated with a public figure’s actions.  More broadly, because any action can be interpreted as (for example) motivated by psychological gratifications, it is possible to give a ‘debased’ account of any action which understands it not as driven by norms, but as driven by purely personal, appetitive, debased, contingent, etc. motives.  To analyse actions in terms of norms is to give a rational account of the sphere of action.  To analyse actions in terms of causes is to give a genealogical account of the sphere of action.

Brandom’s Hegel sees his philosophical, and our practical, task as reconciling these perspectives in a more capacious philosophical orientation and set of socio-political practices that can accommodate both the ‘subjective’ and the ‘objective’ – both the ‘rational’ and the ‘genealogical’ – dimensions of our understanding of action.  And this project of reconciliation, as recounted by Brandom, I would say, has two broad elements.

First, Brandom’s Hegel is keen to rebut what Brandom calls ‘global’ genealogy – the attempt to replace the analysis of norms and reasons in general with the analysis of causes alone.  Brandom (whether rightly or wrongly) takes Nietzsche to be an exemplar of this approach.  For Brandom’s Hegel, this orientation is in the end self-refutingly nihilistic – it ultimately cannot give an account of semantic content at all.

I regard this element of Brandom’s approach as largely unproblematic and correct.  Global reductivism about norms (at least in the sense in which Brandom means the term ‘reductivism’) is indeed an undesirable position for all the reasons that Brandom elaborates, and Brandom’s (and Brandom’s Hegel’s) alternative is, to my mind, both carefully elaborated and largely satisfactory.  I appreciate that not everyone will agree with my take on this, but this isn’t the focus of this blog post!

Let’s say for the sake of argument that we agree, then, that global reductionism about norms is an undesirable position, and that Brandom’s Hegel’s approach outlines a broadly acceptable alternative.  Brandom’s Hegel also has a second, more ambitious philosophical-political goal – to participate in the development of a third ‘age of Geist’ in which the ‘objective’ and ‘subjective’ approaches can be reconciled via the institutionalisation of community practices characterised by ‘Trust’.

Now, I’m not going to tackle what this actually means at this point in working through Brandom’s lectures.  All I want to say, here, is that this project has a stronger objection to the ‘genealogical’ perspective than simply a narrow objection to ‘global’ genealogy.  This project (the institution of a community of trust) is characterised by a desire to expand the space of social actions that can be, are, and should be treated as ‘rational’ rather than as merely causal – it intervenes, as it were, not just in the question of whether our perspectives should be exclusively genealogical, but also in the question of the extent to which our perspectives should be genealogical. Or, perhaps better, the degree of emphasis that should be placed on the genealogical moment or perspective within our larger framework.

Now, this is where the distinction between ‘left’ and ‘right’ Hegelians reappears, it seems to me.  Brandom discusses the ‘great unmaskers’ or the ‘great genealogists’ of the nineteenth century – Nietzsche, Marx, and Freud.  Nietzsche, for Brandom, as I have already mentioned, is a ‘global genealogist’ – but Marx and Freud are, at least on some interpretations, more ‘local’ genealogists.  Marx’s account of class location (or, I would argue, more broadly, political-economic social practice) does not rule out the possibility of rationality or normativity – it merely ‘explains’ large categories of claims of reason in social practice terms.  Likewise, Freud’s psychoanalytic apparatus need not be seen as a global enemy of reason – it simply offers a category of causal explanation of our psychological dynamics.

What should our attitude be to such ‘debasing’ discourses – discourses that ‘explain’ rational discourse and belief in terms of specific categories of social or psychological causes?  How should such discourses be folded in to the Brandomian-Hegelian apparatus, assuming we broadly accept that apparatus?

Here it seems to me that there are (at least!) two broad orientations one might take.  On the one hand, one might react with relief to the Hegelian rebuttal of the ‘perspective of the valet’, and hope that the Brandomian-Hegelian apparatus can ultimately point the way to the ‘recuperation’ of the apparently irrational social-psychological dynamics analysed by our ‘genealogists’, building such apparent irrationalities into a larger account of reason unfolding through contingent history.  Such a perspective sees the genealogical moment as an analytic waypoint en route to a larger socio-political rationalism.  I’m going to call this the ‘right Hegelian / Brandomian perspective’.

But one might also take a different attitude.  One might react with relief that the Brandomian-Hegelian apparatus shows, yes, that ‘locally genealogical’ perspectives are not inimical to reason – and for that reason, be all the more happy to embrace large elements of the genealogical perspective!  This Brandomian-Hegelian synthesis might be taken, not as a reason to see genealogical perspectives as ‘surpassed’, but as a warrant for their use.  Of course, embracing the genealogical perspective through the prism of this framework means seeing the genealogical perspective as not necessarily exclusively genealogical – if genealogical analysis may always potentially have the double aspect of reason, when viewed ‘magnanimously’, this may change what we take ourselves to be doing in genealogical critique.  But, at the same time, we should not be too hasty to dismiss such critique as inimical to reason.

In a 2017 paper I co-authored with N Pepperell [preprint link here], we applied something like this theoretical approach to the debates over the ‘strong programme’ in science studies.  The strong programme is often taken to be a paradigmatically genealogical enterprise.  Critics of the strong programme, such as Sokal and Bricmont, or Laudan, see it as a fundamentally anti-rationalist enterprise, exchanging the analysis of scientific content in rational and evidentiary terms for a debasing or debunking analysis focussed purely on contingent sociological factors.  Surely, both critics and defenders of the strong programme argue, such an approach can only lead to relativism.  The debate over the strong programme therefore amounts to a debate over whether relativism is an acceptable price for the strong programme’s methodological approach.

We argued, by contrast, that the core elements of the strong programme can be retained without a commitment to relativism, because normative categories of objectivity and reason can still be preserved even alongside a sociological – or, in the vocabulary of Brandom’s Hegel lectures, genealogical – analysis.  From my own perspective, this claim provides not a rebuttal of the strong programme (except in some of its metatheoretical conclusions), but a (perhaps counter-intuitive) justification for many of its methodological and empirical decisions.

A similar argument can be made, I think, about genealogical approaches in general.  The fact that the Brandomian apparatus is in principle capable of folding even ‘crassly debasing’ genealogical accounts into a larger rationalism should free us from worrying too much about whether any given genealogical account can in fact be folded in to such a rationalism.  It gives free rein to ‘critical theory’, in a genealogical sense, because it shows that genealogical critical theory is not intrinsically anti-rationalist.

So, I think there’s a loose distinction that can be drawn here between two different ‘lessons’ that different theoretical dispositions might draw from the Brandomian-Hegelian treatment of genealogy.  Incredibly crudely put, those lessons are “Ha! That showed those genealogists the inadequacy of their perspective!” versus “See! Nothing wrong with genealogy at all, it’s perfectly compatible with rationalism!”  And one can roughly align these perspectives with a ‘right’ and ‘left’ Brandomianism – Brandomians more inclined to focus on non-reductive accounts of the rational legitimacy of norms, versus Brandomians more inclined to focus on the practice-theoretic explanation of norms in terms of normative attitudes.

My theoretical orientation, I think, pretty clearly falls on the quote-unquote ‘left’, genealogical side of this dispositional divide.  Hopefully in future posts I’ll both do more to unpack this preference, and also perhaps add some much-needed nuance to this schema.

I’ve been doing some more reading recently in the theoretical space of formal institutional economics – meaning scholars thinking about what kinds of formal (often though not always game-theoretic) resources we can best use to model and analyse institutions.

Within this literature, it’s fairly common to typologise approaches to thinking about institutions into two broad traditions.  On the one hand, there are scholars who define institutions as the ‘rules of the game’ that structure political-economic life.  In a modelling context, an institution would here be specified as the parameters and incentives within which game-theoretic agents make their strategic decisions.  On the other hand, there are scholars who define institutions as strategic equilibria within a formal ‘game’.  Here the paradigmatic examples are coordination games, in which agents achieve a stable equilibrium – understood as a normative convention – which is stable because, once established, it is in the interests of all agents to retain this ‘cultural consensus’.  There are of course other ways to typologise the literature, but let’s go with this for now.
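
As a toy illustration of the ‘institutions as equilibria’ idea (with invented payoffs, and no pretence of modelling anything real), consider a simple two-player coordination game, in which both conventions are stable once established because no agent gains by unilaterally deviating:

```python
# A toy coordination game (invented payoffs): two stable conventions,
# each an 'institution' in the equilibrium sense.

PAYOFFS = {  # (row_action, col_action) -> (row_payoff, col_payoff)
    ("left", "left"): (1, 1),
    ("left", "right"): (-1, -1),
    ("right", "left"): (-1, -1),
    ("right", "right"): (1, 1),
}
ACTIONS = ("left", "right")

def is_nash(row, col):
    """No player gains by unilaterally deviating from (row, col)."""
    r, c = PAYOFFS[(row, col)]
    return (all(PAYOFFS[(a, col)][0] <= r for a in ACTIONS)
            and all(PAYOFFS[(row, a)][1] <= c for a in ACTIONS))

print([profile for profile in PAYOFFS if is_nash(*profile)])
# [('left', 'left'), ('right', 'right')] -- once established, it is in
# everyone's interest to stick with the convention
```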

The next question is: what is the relationship between these two ways of understanding institutions?  And one way to understand that relationship is to see ‘the rules of the game’ as themselves emergent properties of strategic equilibria.  From this perspective, specifying the rules of any given game ‘exogenously’ is just reifying for analytic convenience a phenomenon that can itself be modelled as an emergent property of agents’ strategic play within a different, more ‘expansive’ game.

OK.  Let’s say we accept this broad outline (which I broadly do).  But if we take it that rules emerge from social practice (and are not merely a guide or constraint for social practice), this raises a set of questions about how to understand instances of social practice (or of strategic play within a game) that appear to depart from a rule.

Such practices can be understood as simply deviant – perhaps as self-interested and opportunistic departures from cooperative play, perhaps as mistakes, perhaps as ‘characterological’ in some way, but in any case as clear deviations from the accepted consensus norm.

But given that rules are themselves shaped by the reality of practice, it may be that an action that some social actors interpret as a deviation from a rule is interpreted by others as in conformity with the rule.  In this scenario, the disagreement over whether an action is in conformity with the rule is a disagreement over the substance of the rule.

The working out of such disagreements is how rules are specified.  Here a Wittgensteinian or Brandomian perspective sheds some light: because no rule can ever be fully specified, the way in which rules become further specified is via the community reaction to new actions that could in principle be interpreted as being either in accordance with or in contravention of a rule.  Brandom uses the analogy of common law legal judgements, in which new judgements aim to be grounded in precedent, but also themselves form new precedent for future judges.

At the same time, the working out of such disagreements may do more than further ‘specify’ or ‘clarify’ a norm (or rule) – it may specify the norm in a way that can reasonably be seen as transforming the substance of the norm itself.  A new specification of a rule, in other words, may be a new equilibrium which shifts the consensus of the game, and in turn shifts the rules of ‘subsidiary’ games.  This may happen in a ‘revolutionary’ way, in which an entirely new equilibrium is established after a period of normative upheaval.  But it may also happen in an ‘evolutionary’ way, wherein the ‘rules of the game’ gradually shift over time, by way of incremental ‘deviations’ that are then transformed into part of the subtly different new norm.  Linguistic drift is an example of this transformative collective practice. 
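
Here is a crude toy simulation of the ‘evolutionary’ case – the dynamics and parameters are entirely invented for illustration – in which occasional incremental deviations, plus imitation, can let a population drift away from its initial convention:

```python
import random

# A toy model of norm drift: in each round one agent either 'deviates'
# (with small probability) or imitates a randomly chosen member of the
# population. Over time the population can wander away from the initial
# convention. All parameters are invented for illustration.

random.seed(1)
N, DEVIATE, ROUNDS = 50, 0.01, 5000
population = ["old norm"] * N

for _ in range(ROUNDS):
    i = random.randrange(N)
    if random.random() < DEVIATE:
        population[i] = "new norm"                 # an incremental deviation
    else:
        population[i] = random.choice(population)  # conformist imitation

print(population.count("old norm"), population.count("new norm"))
```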

And of course in practice some combination of these things often happens.  For example – the role of subcultures in the transformation of larger cultural spaces, with a subculture as a location of normative innovation from which new norms can then (potentially) disseminate (or not).  Is a subcultural space of this kind a deviation from the larger normative space, a simple alternative to the larger normative space, or the bleeding edge of the larger normative space’s ongoing self-transformation?  Which of these attitudes you adopt, of course, depends on your political and social perspective – but there is no ‘right’ answer – this question can only be settled in political-cultural practice, by the process of normative contestation, rejection, and consensus-formation that is a significant part of our political, economic, and cultural life.

Anyway, I’m not suggesting that I’m saying anything particularly innovative here – these are pretty familiar remarks about the ways in which norms emerge and transform in social practice.  But I want to foreground these kinds of considerations as I think about how to formally model institutional equilibria and dynamics.  I think that at least some institutional economics would benefit from more emphasis on this category of phenomena, when thinking about how the ‘rules of the game’ are made and changed.

On Thursday 12th November Owen Jones wrote an article for the Guardian in which he suggested that governments should not permit pharmaceutical companies to claim patents on COVID-19 vaccines.  Jones argues that such monopolistic patents, their enforcement, and the deals for vaccine provision made by some governments to the detriment of others, will result in predictable, avoidable, and highly undesirable global inequities in access to COVID-19 vaccines.

In response, Tom Chivers at Unherd wrote a piece that lays out some of the problems with removing intellectual property rights from pharmaceutical innovations.  Crudely put, the problem is one of incentivising scientific innovation.  As Chivers writes, there is a “real, and to some extent irresolvable, tension” between the two major functions of pharmaceutical companies.  On the one hand, these companies manufacture and distribute drugs – in most cases this can be done very cheaply.  On the other hand, these companies engage in scientific research to invent new drugs, and this process can be eye-wateringly expensive.

One of the functions of the patent system is to allow companies to recoup the costs of innovation via the sale of the final product.  By granting a state-enforced monopoly on the production and distribution of a product, patents ensure that the product can be sold at a very substantial mark-up, without the threat of market rivals selling the same product at closer to marginal cost.  Without this mechanism, Chivers worries, the major financial incentive to pharmaceutical innovation would be removed.

This is a real worry: much scientific innovation costs a huge amount of money, and without the ability to recoup that money, the major economic incentive to expensive scientific innovation is removed.  At the same time, Jones’ concerns about global equity and access to drugs are also real and very serious.  Fortunately, there are institutional solutions that might walk a path between these problems.  As Chivers writes, one possibility is for governments to award financial ‘prizes’ to the creators of effective COVID-19 vaccines:

we (governments, philanthropic agencies, etc) will give whoever comes up with the first vaccine some large amount of money: perhaps $1 billion, on the condition that they then agree to make it in large quantities and sell it at close to marginal cost to the developing world. 

Chivers argues that this approach is flawed, on the grounds that more than one company might produce an effective vaccine, and we don’t want our prize money to arbitrarily reward the first vaccine to be created, which might not be the most effective.

Luckily, this problem can be resolved by simply giving money to more than one company.  Moreover, we can use this institutional mechanism to take a step closer towards Jones’ preferred solution: in exchange for receiving the prize, a drug company forfeits the patent, and the drug enters the public domain.  (This mutually beneficial exchange between state actors and pharmaceutical companies could perhaps if necessary be backed by the threat of IP expropriation if the companies are unwilling to relinquish the relevant IP rights.)

As with any institutional structure designed to incentivise innovation, this idea has strengths and weaknesses.  The ‘collective action problem’ of establishing and administering the ‘COVID-19 Innovation Fund’ would not be trivial.  Ideally, to my mind, one would want such a fund to be administered at an international level, with national governments making contributions to the fund determined by their own level of national wealth – but of course this is easier said than organised.  Moreover, abolishing the IP associated with a drug does not in itself immediately facilitate production and supply of the drug – that is another challenge, requiring its own incentive system. There would be many other institutional challenges and obstacles.  

Nevertheless – we shouldn’t feel trapped in a false dichotomy created by the faulty idea that patents are the only effective mediating mechanism via which scientific innovation can be rewarded.  At the end of the day, patents are just a way that drug companies can make a lot of money.  It is perfectly possible to sever the link between the reward for innovation and the cost of drugs, by simply rewarding innovation directly.  In the case of COVID-19, this is relatively easily done, not least because governments are already spending colossal sums in their COVID responses, and because the costs of not rolling out a COVID vaccine are so high.
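
Here is a back-of-envelope sketch of that severing, with entirely invented numbers: the innovator can be made whole either through a per-dose markup or through a direct prize, but only the latter leaves doses priced at marginal cost:

```python
# Back-of-envelope comparison, with entirely invented numbers.

r_and_d_cost = 1.0e9      # cost of developing the vaccine
marginal_cost = 2.0       # cost of producing one dose
doses = 2.0e9             # doses demanded worldwide

# Patent world: the innovator recoups R&D through a per-dose markup,
# so the price of access is inflated for every buyer.
markup = 1.5
patent_price = marginal_cost + markup
patent_net = doses * markup - r_and_d_cost   # 2.0e9 net to the innovator

# Prize world: governments reward innovation directly; doses then sell
# at marginal cost, so access no longer depends on paying the markup.
prize = 3.0e9
prize_price = marginal_cost
prize_net = prize - r_and_d_cost             # the same 2.0e9 net

print(f"patent: {patent_price}/dose, innovator nets {patent_net:.1e}")
print(f"prize:  {prize_price}/dose, innovator nets {prize_net:.1e}")
```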

In conclusion: there is no good reason not to take seriously the approach of just giving the drug companies a load of money, and making the various COVID-19 vaccines part of humanity’s common treasury of knowledge. 

[Edited to add: I want to make clear that there’s a very extensive literature on these issues, and if I were doing this properly I’d actually discuss some of it – but I don’t have time, so I’m afraid this is the blog post.]

Back in the day (more than a decade ago, my god!) I sort of ‘live blogged’ my reading of Robert Brandom’s ‘Making It Explicit’.  That generated a few blog posts that in retrospect were badly wrong in key points (as well as a lot of blog posts that I still stand by and value!) – but I nevertheless found the process very helpful in working through Brandom’s system.  So, recognising that I risk again polluting the blogosphere with incorrect takes on Brandom, but selfishly going ahead anyway for purposes of self-clarification, I’m going to put up some remarks on Brandom’s interpretation of Hegel as I start to engage with it.

These are pre-preliminary remarks because I haven’t yet found time to even begin reading ‘A Spirit of Trust’ (Brandom’s Hegel book).  Instead, I’ve been listening to the Leipzig lectures on Hegel’s Phenomenology that Brandom has very helpfully put up on his YouTube channel.  I take it that these lectures basically cover the same terrain as the book, but of course ~18 hours of lectures can’t go into nearly as much detail as an ~800-page book, so I’m not imagining that these lectures are an adequate substitute for the text.  Nevertheless, until I can find time in my reading schedule for the book itself, this is what I’ve got.

It’s probably worth saying upfront that I’m not interested at all in the question of whether Brandom gets Hegel right.  Brandom’s is a reconstructive project, and while it’s obviously going to greatly irritate Hegel scholars if Brandom’s reconstruction departs in major ways from their interpretation of Hegel’s own position, I don’t care.  Moreover, although it is common and reasonable to assume that Brandom’s Hegel is simply Brandom himself dressed up in a slightly different technical vocabulary, I think it’s probably worth exercising a bit of caution here too.  Clearly Brandom’s Hegel’s system bears a striking – even an uncanny – resemblance to Brandom’s own system, but Brandom is still following the text of Hegel’s Phenomenology in his interpretation, so I don’t think it’s reasonable to assume that, if Brandom were sitting down to write a ‘phenomenology of spirit’ himself, it would look like this.  Rather, I think we can usefully operate as if what we have here is a third figure, analogous perhaps to ‘Kripkenstein’ – Saul Kripke’s influential and controversial interpretation of Wittgenstein – which exists somewhere in the space between or is produced in the interaction between Brandom’s and Hegel’s commitments.

So, with that said, some very preliminary, pre-preliminary remarks on starting to listen to the lectures.  First up: it probably doesn’t need saying, but as with ‘Making It Explicit’, my overwhelming impression is just how clever it all is.  Brandom has so many balls up in the air, and he juggles them with such deftness, interlocking different elements of the system in ways that are both intricate in detail and yet also load-bearing within an overall architectonic structure… it’s all just deeply impressive to watch.  I am, clearly, a Brandom fan, and that isn’t going to go away on the basis of this Hegel project.

With that said, I nevertheless have more unease about the Hegel project in some important areas than I did about ‘Making It Explicit’ (MIE).  As ever, there’s much more to be said than can be covered in a single blog post, even if I had actually read the book.  For now, though, I think the best way to begin discussing some of that unease is to highlight two key elements of MIE, and my takes on them, before contrasting those elements of the MIE project with similar elements of the Hegel project.

So.  Extremely long-term readers of the blog may remember that my main discomfort with ‘Making It Explicit’ focused on the role that Brandom grants to specifically linguistic practice within his system.  Clearly that’s a big disagreement to have, given that Brandom is first and foremost a linguistic philosopher, and given that he pretty clearly thinks that participation in a linguistic community is in some sense a precondition of sapience (a view I disagree with!).  Nevertheless, my disagreement with MIE on the role of the linguistic was tempered by the way in which Brandom embeds his ‘inferentialist’ semantics within his ‘normative pragmatics’.  MIE is interested in the way that language is, first and foremost, something that we do, as a social activity.  Moreover, one of the key elements of Brandom’s account of how linguistic practice generates the forms of normativity characteristic of sapience was his metaphor of ‘scorekeeping’.  In MIE, ‘scorekeeping’ plays a fundamental explanatory role – a role analytically more fundamental (I would argue) than the specific linguistic practices that Brandom uses to give an account of how scorekeeping functions within a discursive community.

It seemed to me then (and still does!) that the role of scorekeeping in MIE leaves open the door to a parallel philosophical apparatus (formally very similar to Brandom’s, but departing from it in key respects), that gives a non-linguistic account of social scorekeeping.  So (perhaps eccentrically), it seems to me that despite Brandom’s own heavy emphasis on specifically linguistic practice, the apparatus of MIE has much to teach us, even if we do not share Brandom’s own commitments in linguistic philosophy, or concerning the centrality of language to thought.

That’s one key element of MIE, and my reaction to it.  Another key element of MIE is Brandom’s account of objectivity.  For me, this is really the key ‘output’ of Brandom’s apparatus.  Again, it’s necessary to be extremely crude and simplistic, if one wants to give a subsection-of-a-reasonable-blogpost-length summary of what Brandom is doing.  But as I see it, one key goal of Brandom’s system is to address a problem that has plagued the pragmatist philosophical project from the beginning.

That problem is, to be crude about it, “what about objectivity, then?”  The pragmatist project, crudely put, is to ground our understanding of traditional philosophical categories – categories like knowledge, truth, value – in social practice theory.  The idea is that what we do as social beings is in some sense generative of these categories, and the categories can only be explained in terms of social practice.  The core objection to the pragmatist project is, basically, that this can’t be done.  Moreover, not only can it not be done, but the effort to do it opens the door to moral, political, and epistemic nihilism (at worst) or moral, political, and epistemic incoherence (at best).  This is what Bertrand Russell is saying when he suggests that US-style pragmatism is a gateway drug to fascism.  This is what Sokal and Bricmont were doing when they suggested that the strong programme in science studies was somehow destroying left politics.  And this is (part of) what many contemporary critics of ‘critical theory’ are doing when they suggest that ‘social justice’ accounts of politics or truth are destroying civilisation.  The idea is that truth, morality, etc. have some reality that exists beyond the social practice of contingent social groups, and that critical-theoretic efforts to ground these categories in social practice are undermining the categories themselves.

Obviously there is a lot mixed up in these debates besides the philosophical issue of the coherence of the pragmatist project, so I want to be clear that I’m not at all suggesting that these debates can be reduced to the kind of abstruse meta-theoretical problems that preoccupy Brandom.  Nevertheless, for me, one of the most important contributions of MIE was that it provided a detailed and (in my humble opinion) satisfactory account of how norms and objectivity can be explained in practice-theoretic terms without succumbing to the theoretical vulnerabilities that have bedevilled earlier pragmatist thinkers (such as Brandom’s doctoral supervisor Richard Rorty, but extending back to the ‘classical’ pragmatists like Dewey, James, etc.).

OK.  So for me Brandom’s account of the concept of ‘objectivity’ was probably the key contribution of MIE.  It’s this account of objectivity (of reference and of norms) that explains why pragmatism isn’t simply a way of explaining truth and value in terms of (say) the practices or beliefs of a dominant social group, and why pragmatism doesn’t simply evacuate these categories altogether. And that concept of ‘objectivity’ more or less emerges from Brandom’s account of scorekeeping.  In particular, Brandom’s account rests on a set of distinctions between different attitudes to normative commitments, established via his scorekeeping apparatus. 

On this account, I as a sapient creature have certain normative commitments about the way things are.  I also track other people’s commitments.  But this tracking of commitments operates via what Brandom calls a form of ‘double bookkeeping’.  I can have an opinion about what somebody takes themselves to be committed to; I can also have an opinion about what they actually are committed to, given my own views about what their commitments entail.  And this ‘double bookkeeping’ can reflexively be applied to my own commitments.  I know what I take my own commitments to entail, but I am also aware that others may take my commitments to entail something different – and this gap between my current perception of my own commitments, and the commitments I may eventually take myself to have really possessed all along, opens up a ‘formal’ concept of objectivity that can be understood independent of any specific account of what objectivity substantively consists in.
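
As a toy schematic only – this is my own gloss, not Brandom’s formalism – the ‘double bookkeeping’ structure can be pictured as two sets of commitments, one acknowledged and one attributed, with the gap between them doing the philosophical work:

```python
# A schematic toy, not Brandom's formalism: commitments as sets, with a
# made-up entailment relation standing in for 'what those commitments
# really involve'.

ENTAILS = {
    "this is scarlet": {"this is red"},
    "this is red": {"this is coloured"},
}

def consequential(acknowledged):
    """Close a set of acknowledged commitments under (toy) entailment."""
    closure, frontier = set(acknowledged), set(acknowledged)
    while frontier:
        claim = frontier.pop()
        for consequence in ENTAILS.get(claim, set()) - closure:
            closure.add(consequence)
            frontier.add(consequence)
    return closure

# 'Double bookkeeping': what A takes A's commitments to be, versus what
# a scorekeeper B takes A to *really* be committed to.
a_acknowledged = {"this is scarlet"}
b_attributed = consequential(a_acknowledged)

# The gap between the two perspectives is the formal space in which a
# notion of objectivity ('what A is committed to whether or not A
# realises it') gets a grip.
print(b_attributed - a_acknowledged)  # {'this is red', 'this is coloured'}
```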

I’m being much too telegraphic here to capture how Brandom’s argument functions with any adequacy, I’m really just trying to gesture to the broad space of Brandom’s argument.  For the purposes of this blog post, what I mostly want to capture is that this account of objectivity is extremely ‘slimline’ – this key element of MIE’s argument does not make any ontological claims about what the substance of objective knowledge consists in.  It gets you out of the problem that has historically plagued pragmatism – how can we give an account of objectivity that cannot be reduced to, say, the consensus of a given sub-community? – and that’s ‘all’ it does.

Now, there are other elements of MIE – indeed, some of the most involved sections, such as Brandom’s lengthy discussion of anaphora – that I haven’t discussed here.  And indeed, I wouldn’t want to go anywhere near even trying to summarise those sections without reading the book again.  So I don’t want to make any strong claims about what the book doesn’t do.  My points here are more that: First, the elements of the book that I’ve highlighted are, to me, a big part of its core argument; Second, this core argument is quite ‘slimline’ in terms of its commitments: Brandom builds a great deal on the foundations of a quite minimal theory of practice.

Ok.  So, with that overly laborious background (given the brevity of the rest of what I have to say in this post), let me articulate some pre-preliminary thoughts on Brandom’s Hegel project.  And here I want to contrast two elements of Brandom’s Hegel with those elements of MIE I’ve just highlighted.

First, although Brandom’s Hegel is a pragmatist, and there is no inconsistency that I can see between the apparatus of MIE and the apparatus of A Spirit of Trust, the latter seems to me (again, at a very first pass) to devote less energy to grounding its account in a ‘deflationary’ pragmatics.  So far in Brandom’s Hegel lectures we have had no discussion of scorekeeping, that key explanatory component of MIE’s account of objectivity.  Rather, Brandom’s Hegel (so far) has a tendency to leap straight in to the more directly semantic elements of the argument.

Clearly there’s nothing wrong with this – and indeed for all I know these matters will be addressed in full later.  But for people like me for whom the normative pragmatics dimension of MIE was in some respects more interesting than its inferentialist semantics, this is a bit disappointing.

That’s my first, very brief and fairly trivial, observation.  My second observation is that it seems to me that Brandom’s Hegel may be making stronger ‘ontological’ claims than the core elements of MIE that I’ve highlighted need commit us to.

In particular, Brandom has an extremely intricate and carefully developed account of Hegel’s idealism.  I’ll want to circle back round and give a much fuller account of this once I’m more confident in my grasp of this material.  But at (again) a very preliminary and crude first pass, Brandom argues that for Hegel the world is already ‘conceptually structured’.  What this means is not that the world is ontologically dependent on thought – Brandom’s Hegel is not a ‘subjective’, Berkeleyan idealist.  For Brandom’s Hegel (much of) the world would be the way it is even if nobody had ever existed to perceive it.  Rather, the argument is that the structure of the world is such that we are capable of having ‘adequate knowledge’ of the world, and this seemingly requires a homology between the normative structure of thought and the ontological structure of the world.  Specifically, Brandom believes that for Hegel the normative component of semantics maps onto the modal structure of reality.  That is, if I am committed to a claim, what this means is that I am committed to some other claims also being the case, and some other claims also not being the case.  And this normative network of obligations and entitlements (legitimate and illegitimate inferences) is homologous with modal relations of possibility and impossibility between and within states of affairs in reality.  If such-and-such a commitment about the world is incompatible with such-and-such another commitment about the world, this normative obligation to not hold those two beliefs simultaneously is saying that such-and-such a state of affairs is in reality incompatible with such-and-such another state of affairs.  Modal claims about compatibility and incompatibility of real states of affairs map onto normative claims about our inferential obligations given our commitments, and vice versa.
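
As a toy gloss (my own illustration, with an invented vocabulary – though Brandom has developed incompatibility semantics formally elsewhere, e.g. in ‘Between Saying and Doing’), one can picture entailment as defined directly from an incompatibility relation, which can then be read either normatively or modally:

```python
# A toy gloss on incompatibility semantics; the claims and their
# incompatibilities are invented for illustration.

INCOMPATIBLE = {
    "x is a dog": {"x is a cat", "x is an invertebrate"},
    "x is a mammal": {"x is an invertebrate"},
}

def inc(claim):
    """The set of claims (states of affairs) incompatible with `claim`."""
    return INCOMPATIBLE.get(claim, set())

def entails(p, q):
    # Incompatibility entailment: p entails q iff everything incompatible
    # with q is already incompatible with p. Read normatively, this is a
    # constraint on commitments; read modally, a relation between states
    # of affairs.
    return inc(q) <= inc(p)

print(entails("x is a dog", "x is a mammal"))  # True
print(entails("x is a mammal", "x is a dog"))  # False
```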

My account of this argument here is desperately crude relative to Brandom’s – my goal is again just to gesture in the direction of the Brandomian Hegelian apparatus.  The point is that this account of Hegel’s idealism explains how we have objective knowledge of the world.  For Brandom’s Hegel, this argument meets the sceptical challenge thrown up by his predecessors in the modern philosophical tradition.  And this goal of meeting the sceptical challenge of Descartes, Kant, and others is a key motivator of this apparatus, on Brandom’s account.  For Brandom’s Hegel, one of the problems of the pre-Hegelian modern philosophical tradition was that it baked scepticism into its semantics, by postulating a relationship of representation that intrinsically rendered reality ungraspable in key elements.  ‘Objective idealism’ aims to address this problem, by showing how reality can be ‘conceptually structured’ and thus knowable in itself without committing us to the idea that reality is ontologically dependent on knowing subjects.

Which is all fair enough.  My initial worry about this dimension of Brandom’s Hegel’s argument, though, is that it might ‘prove too much’.  Like Brandom’s Hegel, I am suspicious of any epistemology that seems to intrinsically condemn us to scepticism.  Maybe we’re completely misguided about reality, but it doesn’t seem right to have this deep epistemological failure be an intrinsic feature of our philosophical apparatus.  (I’m aware that ‘doesn’t seem right’ isn’t actually an argument, but I’m not going to shoulder the burden of grounding my philosophical intuitions in this blog post…)

At the same time, though, and in the other direction, I worry about arguments that seem to imply that reality must be knowable to us, at least in principle, or at least in general.  What if there are elements of reality that we simply cannot comprehend, and never could?  What if the reason for our inability to comprehend those elements of reality is that reality is not ‘conceptually structured’ in Brandom’s Hegel’s sense, or is so only in some of its aspects, or ‘from a certain point of view’?  I’m inclined to a ‘satisficing’ approach to knowledge – a ‘good enough’ account of what it is to know something – and it feels that Brandom’s Hegel’s account of epistemology might be after a stronger sense of epistemological adequacy.  What if this criterion for adequacy of knowledge is just too strong to actually capture the reality of how we know things?

Now, as I keep saying, these are only pre-preliminary thoughts.  I’m writing them up here not because I’m presenting them as an argument against Brandom’s Hegel’s project, certainly not as it stands, but because I find it useful to get my reactions down in writing as I go.  Still, these are some of the things I’m going to be thinking about as I continue to work through Brandom’s remarkable project.

Aotearoa New Zealand has been consumed today by discussion of the varied fiascos around the government’s COVID-19 border quarantine and self-isolation policies and practices. Obviously there have been a wide range of failures here, which cannot be reduced to a single source. But I want to draw attention to a specific, very consistent element of the Ministry of Health’s attitude to COVID-19 policy that has, in my view, caused problems for the government’s COVID-19 response from the beginning, and which now risks wrecking the enormous progress the country has made in the goal of eliminating COVID-19.

Put bluntly, the Ministry of Health is absolutely convinced that there is very little point in testing non-symptomatic people. This has been clear in their testing guidelines from the beginning, and in the great difficulty people had getting tested. It is now confirmed in testimony from people who have recently arrived in the country, and who were unable to persuade MoH staff to test them even as part of the government-overseen isolation process. Per this 1 News report:

She and her family members also didn’t get tested before leaving yesterday, because she was told it was optional. “She then said to us that it was pointless us having the test done unless we were showing any symptoms.” Another woman who stayed at the same hotel as the two new positive cases says she was told the same thing. She got a test anyway, but now, home, still hasn’t received her result.

Why is the MoH so determined not to test non-symptomatic people? Obviously the following is somewhat conjectural, but in my view some core elements of the NZ government’s view on COVID-19 are as follows:

First, the government is convinced that asymptomatic transmission of the virus is extremely rare. Asymptomatic here means not just “non-symptomatic”, but “will never become symptomatic.”

Second, the government is convinced that symptomatic cases can be identified when symptoms emerge, and contact tracing can then take care of any transmission that took place in the few days before symptoms emerged.

Third, these opinions are supplemented by the recognised statistical fact that the probability that a positive test result is a false positive depends heavily on the underlying probability that the individual being tested has the condition (there’s a worked example of this below).

Fourth, the government is convinced that for these reasons, those calling for widespread testing of non-symptomatic people are being irrational. To the extent that the government are willing to test non-symptomatic people they will do so reluctantly and tardily, in order to (in their eyes) placate a hysterical public and media, or at best out of a hyper-abundance of caution, rather than for good public health policy reasons.

Fifth, and particularly irrationally, these attitudes have been extended even to the testing of individuals in mandatory isolation to attempt to prevent new COVID-19 cases from circulating within the country.

All of these positions are problematic. It is plausible but not established that asymptomatic transmission is very rare; it is easy to miss the symptoms even of symptomatic cases – as in fact happened with the two new cases of COVID-19, one of whose symptoms was apparently (mis)attributed to a pre-existing condition; the risk of statistical artefacts can be greatly reduced by repeated testing; it is in fact extremely rational to adopt a policy of broader testing, as I argued in an earlier post; and none of this makes any sense when you’re talking about a process that has specifically been designed to prevent new COVID-19 cases from entering an as-far-as-we-know currently otherwise virus-free country.
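
To make the base-rate point, and its limits, concrete, here is a toy calculation. (The sensitivity and specificity figures are invented for illustration; they are not actual test characteristics.)

    # Why a positive test means different things at different base rates,
    # and why repeat testing blunts the false-positive worry. Sensitivity
    # and specificity figures are invented for illustration.

    def ppv(prevalence, sensitivity=0.9, specificity=0.99):
        """P(infected | positive test), by Bayes' theorem."""
        true_pos = prevalence * sensitivity
        false_pos = (1 - prevalence) * (1 - specificity)
        return true_pos / (true_pos + false_pos)

    # A random member of a near-virus-free community (prevalence 1 in 10,000):
    print(round(ppv(0.0001), 3))   # ~0.009 -- most positives are false positives

    # A recent arrival in managed isolation (prevalence, say, 1 in 50):
    print(round(ppv(0.02), 2))     # ~0.65 -- a positive is highly informative

    # Requiring a second positive before acting (treating repeat tests as
    # independent, which is optimistic, but it shows the direction of travel):
    def ppv_two_tests(prevalence, sensitivity=0.9, specificity=0.99):
        true_pos = prevalence * sensitivity ** 2
        false_pos = (1 - prevalence) * (1 - specificity) ** 2
        return true_pos / (true_pos + false_pos)

    print(round(ppv_two_tests(0.0001), 2))   # ~0.45 -- repeat testing helps

In other words, the statistical point cuts against indiscriminate testing of the general community, but not against testing arrivals in managed isolation – who are precisely the high-prior group for whom a positive result is informative.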

And yet the Ministry of Health and the government more broadly, in my view, remain in the grip of a baseless conviction that calls for testing of non-symptomatic cases are fundamentally silly, a demand to be placated rather than acted upon. Until the MoH abandons this idée fixe, the country will not be able to adopt a fully rational COVID-19 pandemic policy.

I’ve had enough separate conversations in which I’ve had to demonstrate what John Edmunds actually said on the Channel 4 coronavirus special of March 13 that I’ve decided to transcribe the entire segment for ease of future reference. I may well come back to this and add fuller comments later, but for now I’m just going to publish the transcript, along with one brief note on the numbers at the end.

The segment itself can be watched here, beginning at 9:50 and ending at 23:55.

TRANSCRIPT

Presenter: Well joining me now is John Edmunds, professor of epidemiology at the London School of Hygiene and Tropical Medicine, he works on the mapping of infectious diseases and is currently advising the government on the coronavirus, and from California a Silicon Valley executive and writer Tomas Pueyo, he’s not a scientist but his detailed modelling of the virus’s spread has set the internet alight with its stark warnings about the rate of infection. Welcome to you both.

Let me start with you Tomas, in California. President Trump we just heard a few minutes ago has declared a state of national emergency. What will actually change now, because of that?

Pueyo: The country has moved from trying to contain the illness from outside to making sure that inside it doesn’t transmit, and that’s the key here, they’re realising “oh my god, it’s [not] that it’s coming from outside, it’s here, it’s spreading, it’s everywhere, and we need to stop this, we need to stop the transmission between different people”, so that’s what they’re trying to do – they’re trying to create social distancing, keeping people spread, not everybody together, so that the transmission goes down.

Presenter: So can we now expect, as a result of this state of national emergency, the kind of measures in America that we’ve seen in places like Italy?

Pueyo: I think we will. Italy’s different from the US, obviously, individual freedoms are substantially more important here, but it is the only thing that is going to stop this thing.

Presenter: Ok. So you welcome this move, of the President today?

Pueyo: Absolutely, not only do I but the markets are also responding, they were up four percent in the US as Trump was speaking.

Presenter: Ok, I want to get back to you in a minute because I want to talk to you about your particular modelling of this virus, but John Edmunds, should we be declaring a state of national emergency here – something as dramatic as that?

Edmunds: No.

Presenter: No?

Edmunds: For what gain? What gain would we get from that? So we’re going to get people up into a panic and stuff? We need people to come with us in a stepwise way. This epidemic is not going to be over in a week or a month, this epidemic is going to last for most of this year, and so if we’re going to ask people to change their behaviour quite radically, it’s going to be very difficult for them to do, it’s going to have major economic and social impacts, on them, then we’re going to have to limit the amount that we’re going to ask them to do, yeah?

Presenter: Limit the amount that we’re going to ask people to do.

Edmunds: So we stop the epidemic, or we slow the epidemic right down, so that the NHS doesn’t become overwhelmed, hospitals don’t become overwhelmed, that’s the idea. The only way to stop this epidemic is indeed to achieve herd immunity.

Presenter: Ok. Tomas Pueyo, you’re shaking your head and now you’ve got your head buried in your hands, what’s your response to what John Edmunds just said?

Pueyo: This is like deciding, you know what, this forest might burn so let’s cut a third of it. This is crazy. We want to have ten, twenty, thirty percent of the population catch this, the UK has what sixty six million people, that’s how many people, that’s around twenty million people, one percent of these people are going to die, so we’re saying that we want to kill 200,000 people in the UK, so that’s –

Presenter: I don’t think anyone is saying that, I don’t think anyone is saying that, but I think there is a real debate in the scientific community going on about the value of herd immunity, so just briefly, what do you think about the value of herd immunity, and can it be created through the measures that the government is introducing right now?

Pueyo: We need to understand what it means, this herd immunity, they’re saying everybody’s going to catch it, so once they catch it they can’t catch it any more. That’s crazy, we don’t want people to catch it, we want people not to catch it, ‘cause otherwise they’re going to die, and right now the cases are going exponentially, in a week the NHS is starting to be collapsed, in two weeks it’s going to be completely collapsed, if we don’t take measures now then people are going to panic, maybe not today, if we don’t take the measures, but they’re going to definitely panic next week or in two weeks, we have thirteen days of advance compared to Italy, right, they thought the exact same thing two weeks ago, and then one week later they realised “oh my god this is exploding”, they have now what, 17,000 cases? It’s exploding there, they realised too late that they were not containing this, UK now has an opportunity to catch this before the weekend, and we need to catch this before the weekend, because everybody’s going to spread this, with their friends, with their families, that they haven’t been seeing during the week, so it needs to be declared now, that’s why

Presenter: OK, Tomas, let’s bring in John, I mean, has he got a point here, we’ve got to catch this right now?

Edmunds: There’s two things, there’s two strategies, with a new virus, a new epidemic, there’s two strategies: one, you can stamp out every single case in the world, every single case _in the world_, and then the virus, then you’re free. You’ve stopped that epidemic without achieving herd immunity, but you must get every single case in the world. When [it’s a] mild disease, that’s incredibly difficult. That’s the phase that we were in when we were trying to do containment and everybody else was trying to do containment, yeah? Trying to stamp out every single case in the world. It hasn’t worked, yeah? We haven’t managed to do that. The next phase, when the virus – the genie is out of the bottle, the virus is all around the world and spreading, the next phase, the only other way that the epidemic is going to come to a stop is ‘achieving herd immunity’, this is – and let me explain, there are different ways that you can… The natural way that this will happen is, the epidemic will run very fast, and the epidemic will come up and come down very fast, and the herd immunity threshold is reached not at the end of the epidemic, that’s what people sort of think, it’s not at the end of the epidemic, it’s at the peak of the epidemic. At that point there’s not enough susceptibles in the population to spread, and it’s very important to understand this one further point, because at the peak there’s so many infectious individuals that they all infect so many other individuals, and so if you can bring the number of infectious people down at the peak, then the epidemic doesn’t overshoot, you can manage the epidemic and reduce the total number of

Presenter: [Right.]

Edmunds: So you can achieve herd immunity and not have an epidemic overshooting.

Presenter: But the trouble is, you know, this

Edmunds: You do that by aggressive measures.

Presenter: This is a very important debate, and it’s happening right now in the scientific community, as we discover on air, but getting away from the abstracts, in practice what this means is there will be many many people, vulnerable people in this community, who may die as a result of what is essentially an experiment

Edmunds: But there’s no way out of it now.

Presenter: There’s no way out of it?

Edmunds: No. There’s no way out of that.

Presenter: Ok.

Edmunds: So we’ve given up on the containment phase, that hasn’t worked

Presenter: But, but

Edmunds: I mean Tomas can throw his arms up as much as he likes but that hasn’t worked, yeah?

Presenter: Ok, but the point is that we’re the only country as far as I know that is espousing this model, I mean the Italians are telling us that they wish they had done it earlier, they wish they had told their population two weeks ago, you know, a lockdown means you don’t go to the cafe, you don’t go to the pizzeria, you stay home.

Edmunds: And what happens when they release the lockdown?

Presenter: What does happen then?

Edmunds: It comes back.

Presenter: But is that inevitable?

Edmunds: Because you haven’t got rid of – yes it’s inevitable, if there’s virus around in the population, there’s infectious people, so unless you’ve stamped every case out, not just in – _every case_, not just in Italy but around the world, as soon as you release them out of lockdown it comes back.

Presenter: This is a really crucial question, Tomas, what’s to say that once China, you know, people go back to the factories and back to the offices and traffic’s back on the streets, that this thing won’t come back?

Edmunds: It will come back.

Presenter: And possibly worse the second time round?

Pueyo: It will, but the key there is not to have it big, because when you have it big you have hundreds of thousands or millions of people collapsing the NHS, and when you do that all the people cannot share the ventilators that you need, all the people who are having heart attacks right now cannot get to the E.R., and so what we need is not to have this huge peak, collapsing everything, killing everybody, what we need is to what we call ‘flatten the curve’, slowly grow these cases, contain it so they can be spread over time, we can achieve herd immunity not in two weeks going crazy, but in six months, in a year, meanwhile

Presenter: But that’s what the government – sorry to interrupt Tomas, but that’s exactly what the prime minister was saying yesterday in his rather colourful language, they’re saying we’ve got to flatten the sombrero, that’s what they’re trying to do here by delaying the virus.

Pueyo: That’s right, and how do you do it, you need to take measures where people don’t talk to each other, they don’t interact with each other, so they don’t transmit the virus, so you need to take measures _now_ to avoid people interacting. Now.

Presenter: Doesn’t he have a point, John?

Edmunds: Well it’s exactly the point I was making. So the only way to do this is to achieve herd immunity, to stop this epidemic, and you impose social distance measures to slow it down, to bring the peak down, to spread it out over time, but if we’re going to bring the peak down from a very high point and spread it over time then we’re going to spread it over a very long period of time, six months to a year is what we’re going to – we’re going to be living with this epidemic for that kind of length of time, and so if we’re going to ask people to take these really extreme measures then we don’t really want to ask them to do it before they have to, because they’re going to have to do it for a very long time. Now the epidemic is moving fast, and we will be asking them to do these measures very soon.

Presenter: Let me ask cynically whether there is an element of this that the government doesn’t want to shut down the economy, sacrifice the economy completely as we’ve seen in other countries while it is trying to find the right solution.

Edmunds: Look there’s no easy way out of this. All of these are going to be hugely damaging to people, to people’s lives and the economy, and there is of course a balance, so you have to try and get it, you know, get the epidemic, manage the epidemic as best we can, _and_ manage the other aspects of – we have to manage the economy, of course we do, we can’t ignore that completely, so –

Presenter: But our priority is to save lives

Edmunds: Of course it is.

Presenter: Richard Horton, the editor of the Lancet – prestigious medical journal – has said that our policy at the moment, the government policy, is “playing roulette with the public”.

Edmunds: That’s a kind of easy thing to say isn’t it, you know, that’s a very easy thing to say. But I don’t think they are, I think what they’re doing is trying to take it sensibly, stage it – look, you’ll see measures coming in very fast now, the epidemic is moving, you’ll see measures coming in, so we will be asked to do – and all of us will have to take care to do those things

Presenter: When? When will those measures come in do you think?

Edmunds: Very soon now.

Presenter: In the next few days?

Edmunds: Within – certainly within a week or so

Presenter: And these will be lockdowns of cities,

Edmunds: No, I don’t think we’re going to

Presenter: We’re not going to go that far?

Edmunds: Not, not initially, but we may get there, yeah? We are going to be asking people to take extra measures, they’ve already been flagged up, you had them on your VT that you showed just before, the prime minister talking about the next measures that might be along the line.

Presenter: So you don’t think we’re dragging our feet on this? With possibly dangerous consequences?

Edmunds: I think we’re trying to stage it as best we can.

Presenter: Tomas? If we impose restrictions tomorrow or the day after, you know this weekend, as you just said, is that going to be early enough in order to stop us from becoming Italy in thirteen days time?

Pueyo: With the current number of cases that we have in the UK, around 800, and the growth rate day over day that we have, in a week we have 6,000 official cases, and in two weeks we have 45,000. We have today more cases than Wuhan had when it shut down. What they did when they shut it down was completely cut it, you can see the growth in [new] cases going dramatically down overnight, and now China has only a handful, a few dozen cases every day. They decided “we’re going to go aggressive _now_, so that later on we don’t need them to be suffering all these consequences”, and that was the right move because this is going exponentially, so what you need to do is, when it’s going exponentially, you catch it early, and you really really go aggressive against it. You don’t let it fester to collapse the NHS. If you do that then you can relax little by little over the weeks, over the months, the measures, so that now we have more capacity on the NHS, and the cases are spread out over time.

Presenter: And just, Tomas, explain to us, you wrote this article, it went viral on the internet, your modelling for why these, you know, for explaining why the numbers of infections will double every two days, how do you explain that? Based on the China model?

Pueyo: It really depends on what’s happening at every moment, right now this thing is going really really fast in the UK, right? We have 800 cases, it’s growing at 33 percent every day, that means in three days you get 1,900, so it’s more than doubling in the next three days, and so what’s happening is early on when this thing is really catching up cases explode and they grow at say 2x, like they double every two days, and this is the situation in which the UK is today, it is the situation in which Italy was a week ago, it is the situation in which Spain is right now

Presenter: OK, all right, OK. John, why are you shaking your head at those numbers?

Edmunds: It’s true if you just look crudely at the numbers that the number of cases are doubling about every two and a half days, but that’s because they’re doing more contact tracing, the actual underlying rate of doubling is more like about every five days.

Presenter: Ok, we’ve got to leave it there. John Edmunds, Tomas Pueyo, thank you very much indeed.
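
One brief note on the numbers, since the disagreement over doubling times at the end of the segment can be checked with simple arithmetic (the 800-case and 33%-per-day figures are Pueyo’s; the five-day underlying doubling time is Edmunds’):

    from math import log

    # Pueyo's 33% daily growth implies a doubling time of:
    daily_growth = 1.33
    print(round(log(2) / log(daily_growth), 1))   # ~2.4 days

    # Projecting forward from ~800 cases at that rate:
    cases = 800
    print(round(cases * daily_growth ** 7))    # ~5,900 in a week (Pueyo: 6,000)
    print(round(cases * daily_growth ** 14))   # ~43,000 in two weeks (Pueyo: 45,000)

    # Edmunds' underlying doubling time of ~5 days corresponds to:
    print(round(2 ** (1 / 5) - 1, 2))          # ~15% daily growth in true cases

Both men’s projections follow from their own premises, in other words; the substantive dispute is over whether the observed case counts reflect the true growth rate, or an artefact of expanded contact tracing.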

In his 1954 lecture ‘What does the economist economise?’, Dennis Robertson writes:

There exists in every human breast an inevitable state of tension between the aggressive and acquisitive instincts and the instincts of benevolence and self-sacrifice. It is for the preacher, lay or clerical, to inculcate the ultimate duty of subordinating the former to the latter. It is the humbler, and often the invidious, role of the economist to help, so far as he can, in reducing the preacher’s task to manageable dimensions. It is his function to emit a warning bark if he sees courses of action being advocated or pursued which will increase unnecessarily the inevitable tension between self-interest and public duty; and to wag his tail in approval of courses of action which will tend to keep the tension low and tolerable.

This passage is approvingly quoted in Part One of Buchanan and Tullock’s ‘The calculus of consent’. And this basic idea informs much of public choice theory – a branch of economics and political science that uses tools often associated with microeconomics to analyse political decision-making. Slightly more specifically, public choice theory often focuses on the ways in which political decision-makers’ individual interests and incentive structures influence their policy-making, frequently to the detriment of ‘the public good’. In Buchanan’s words, in his 1986 Nobel lecture:

Economists should cease proffering policy advice as if they were employed by a benevolent despot, and they should look to the structure within which political decisions are made.

As Robertson says, the idea here is not that altruistic acts are in some way incompatible with human nature; it is, rather, that an institutional structure that heavily relies on altruistic acts for its ongoing stability is likely to be more fragile, all else equal, than an institution that accommodates less noble motives as a major component of its day-to-day functioning. Acts of heroism, kindness, self-sacrifice, selflessness – these are, contrary to more pessimistic views of ‘human nature’, extremely widespread. But a political-economic institution that relies upon these facets of human nature for its day-to-day reproduction, and that will quickly fall apart in their absence – such an institution is at constant risk of either collapse, or transformation into an institution that does accommodate less noble elements of human behaviour, perhaps to the detriment of its intended or apparent goals.

This ‘pessimistic’ public choice vision of political-economic institutions has often not found favour on the left. Leftist critics of public choice theory – or of the broader liberal tradition of which it is a part – tend to object both to its methodological individualism, and to the kind of ‘human nature’ that is tacitly or overtly ascribed to the individuals it considers. For many leftists, furthermore, the public choice approach to political economy is less an analysis of the pitfalls of collective action than an attempt to undermine or attack successful collective action, in the service of right-wing, anti-statist interests and policies. From this left perspective, public choice theorists attempt to emphasise the ways in which institutions of collective action are liable to fail, because public choice theorists want such institutions to fail: by arguing that the successful collective provision of social goods is difficult or impossible, and that apparently successful collective action is really a mask for individual self-interest, public choice theorists serve the interests of those opposed to emancipatory collective action.

There is much to be said for this left critique of public choice theory. Public choice theory has, indeed, typically emerged from and aligned itself with the right of the political spectrum, and sought to provide intellectual resources and arguments for those who wish to greatly reduce the size of the state and the scope of democratic or collective social decision-making. It is, primarily, a conservative school of thought, and much of the public choice tradition cannot usefully be interpreted unless its analysis is seen as informed and shaped by conservative political commitments.

But should the tools of public choice theory be exclusively the property of the right? Does it benefit the left for this to be the case? In my view, the answer to these questions is ‘no’, and a ‘public choice theory of the left’ is a worthwhile project, no matter our views on ‘actually existing public choice theory’.

Why is this so? First of all, analytically speaking, there is a lot of potential common ground between public choice theory and traditional left critical analysis: the capture of powerful institutions by special interest groups, and the use of power to advance the interests of those who hold it as against the broader public good – these are hardly themes alien to left analysis. Public choice approaches should, then, be perfectly capable of being put to work for left critique.

Secondly, though, the normative public choice critique of would-be emancipatory collective action also carries weight: the left ought to reckon with this category of critique of its own projects and institutions. Public choice theory is suspicious that institutions – paradigmatically state institutions – that are intended to serve the common good have a tendency to serve instead the interests of those who wield power within those institutions. If left politics aspires to create institutions that are not disastrously vulnerable to this phenomenon, it needs to reckon with this risk and this critique. Moreover, it needs (I would argue) to reckon with this critique in a way that does not appeal to unrealistically utopian claims about long-term selfless action on the part of key social actors.

Perhaps the paradigmatic case here is Soviet communism. For many critics of the USSR, the Bolshevik project was intrinsically flawed because the institutions it proposed and implemented in the name of emancipation were always likely to result instead in state power serving the interests of a governing elite rather than the broader citizenry. Of course, there are many on the left who reject this analysis. But there are also many on the left – including me – who agree that Soviet-style communism was in practice a novel form of domination and oppression rather than a fundamentally emancipatory project. And this judgement raises the question of how to evaluate leftist transformative proposals, to ensure that would-be emancipatory institutions are likely to genuinely be emancipatory.

In my post on Erik Olin Wright’s ‘Envisioning Real Utopias’, I discussed one leftist response to this problem: Wright’s centring of ‘social power’ (as against state power) as the ‘true north’ that should guide ‘the socialist compass’. I argued, against Wright, that there is in fact no reason to believe that ‘social power’ is intrinsically more emancipatory than ‘state power’ or indeed ‘market power’ – that we need more fine-grained criteria for evaluating political-economic institutional proposals, to assess whether these proposals are likely to move us in a more or less emancipatory direction.

The insight from Robertson with which I started this post, I believe, offers one such useful criterion (of course at a very high level of abstraction). As Robertson writes, we can distinguish between on the one hand institutions that, for their emancipatory functioning, require members of the institutions to persistently navigate a high tension between their own personal interests and those of the ‘public good’, and, on the other hand, institutions that reduce the tension between self-interest and public duty to a “low and tolerable” level. Institutions of the latter sort are, all else equal, more likely to be sustainable. The task for leftists is to construct institutions that are emancipatory in their outcomes and processes, while also exhibiting this feature.

In the jargon of game theory, this kind of institution design challenge is known as the problem of “incentive compatibility”, and designing institutions to meet it is the business of the field known as “mechanism design”. That is to say: when we are constructing political-economic institutions, we want to construct those institutions in such a way that the incentives of individuals within the institutions are aligned with the tasks we would want those individuals to fulfil. In the maxim of many introductory economics courses: “incentives matter”.
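
To illustrate with the simplest toy model I can think of (mine, not Buchanan and Tullock’s): consider a two-person public goods game in which contributing is socially efficient but privately irrational – until the institution is redesigned so that private incentive and public duty point the same way.

    # Toy public goods game. Each player chooses whether to contribute 10
    # to a common pot; each contribution yields a benefit of 8 to *every*
    # player. Contributing is socially efficient (16 of total benefit per
    # 10 of cost) but privately irrational (8 of private benefit per 10 of
    # cost) -- an institution that relies on it is relying on self-sacrifice.

    def payoff(i_contribute, other_contributes, rebate=0):
        benefit = 8 * (i_contribute + other_contributes)  # the shared pot
        cost = 10 * i_contribute
        return benefit - cost + rebate * i_contribute

    for rebate in (0, 4):
        # Is contributing a best response whatever the other player does?
        dominant = all(payoff(1, other, rebate) > payoff(0, other, rebate)
                       for other in (0, 1))
        print(f"rebate={rebate}: contributing is dominant? {dominant}")

    # rebate=0: False -- the tension between self-interest and duty is high
    # rebate=4: True  -- a small institutional rebate makes them coincide

The rebate here is just a stand-in for whatever institutional redesign – a subsidy, a matching scheme, a reputational reward – reduces Robertson’s tension to a “low and tolerable” level.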

This is a lesson that should be applicable across a broad range of categories of institutions. It should not be restricted to the political projects of the right, or to the critique of the left. And the left, I think, needs to get better at thinking about institutions in these terms. Paying closer attention to public choice theory is perhaps one route via which that could be accomplished.

There are strong indications that New Zealand will soon be moving down from Alert Level 4 – ‘full lockdown’ – to lower alert levels. In my view it would be a serious mistake to do this without instituting a substantial SARS-CoV-2 testing program for health- and aged-care workers. I’ll briefly explain why.

For background: we know that New Zealand’s ‘lockdown’ period has greatly reduced the number of new cases of COVID-19 – here’s a simple chart I made using Ministry of Health data showing daily new cases.

[Chart SARSCoV01: daily new COVID-19 cases in New Zealand, from Ministry of Health data]

This represents an apparent success of the New Zealand policy of not just ‘flattening the curve’ but attempting to reduce the incidence of COVID-19 such that only a small number of cases remain in the country. The government hopes that if this can be achieved, in combination with strict border controls, then the few remaining new cases can be contained using self-isolation and contact tracing, preventing the kind of broader community outbreaks we have seen elsewhere in the world.
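
(For anyone who wants to reproduce this kind of chart: a minimal sketch, assuming you have first saved the Ministry’s daily case counts to a CSV with ‘date’ and ‘new_cases’ columns – ‘moh_cases.csv’ is a hypothetical filename, not an official dataset.)

    # Minimal sketch of a daily-new-cases chart. 'moh_cases.csv' is a
    # hypothetical file of tidied Ministry of Health figures.
    import pandas as pd
    import matplotlib.pyplot as plt

    cases = pd.read_csv("moh_cases.csv", parse_dates=["date"])
    plt.bar(cases["date"], cases["new_cases"])
    plt.ylabel("Daily new cases")
    plt.title("COVID-19 in New Zealand (Ministry of Health data)")
    plt.show()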

Across this period, the government has also significantly increased its testing capacity, and the number of tests typically carried out each day. Here’s a chart I’ve taken from Newsroom, showing daily tests carried out.

[Chart SARSCoV02: daily COVID-19 tests carried out, via Newsroom]

The government has nevertheless been reluctant to test people not showing respiratory symptoms of COVID-19. In the early days of the pandemic, NZ government guidelines suggested that people should only be tested if they showed symptoms of COVID-19 and were either contacts of a known case of COVID-19 or had recently travelled overseas. At the end of March, the government broadened its guidelines such that anyone showing respiratory symptoms could be tested, regardless of travel history or contacts. Only in the last few days has the government begun testing people without symptoms, using testing centres in specific, targeted locations, to collect more data on the possibility of community spread in potential COVID-19 ‘hotspots’.

This expansion of testing is good, and long overdue. However, in my view the government is making a serious mistake by not also implementing a large-scale randomised testing program of health- and aged care workers. This is for three broad reasons.

First: Health- and aged-care workers are essential workers who, by the nature of their work, are much more likely than most Kiwis to be exposed to the virus. Cases of COVID-19 among health- and aged-care workers can therefore potentially function as useful ‘sentinels’ for detecting broader community transmission, without the waste of resources (and the elevated false-positive risk) associated with mass community testing. (I put some rough numbers on this at the end of the post.)

Second: To state the obvious, health- and aged-care workers are much more likely than most Kiwis to transmit the virus to vulnerable individuals, because their job is to care for people who are much more likely than most Kiwis to be vulnerable to COVID-19. Identifying infection among health- and aged-care workers early is therefore likely to have a high potential payoff in terms of lives saved.

Third, and relatedly: The modelling of virus spread across New Zealand (such as that carried out by Te Pūnaha Matatini or commissioned by the Ministry of Health) has focussed on the population of the country as a whole. However, we of course know that there can be localised outbreaks or ‘clusters’, where the virus is very widespread within a specific subcommunity. The Rosewood rest home is one such cluster; as of writing, seven of New Zealand’s eleven COVID-19-related deaths have occurred within this cluster.

It is potentially disastrous when a hospital or aged care institution – or even worse the hospital or aged care system as a whole – becomes such a ‘cluster’, within which the virus is widespread. We know from other countries’ experiences that the impact of COVID-19 is most disastrous when the healthcare system is pushed beyond capacity, resulting in an immediate increase in the death rate from COVID-19, as well as many other negative health impacts associated with care not being provided for other conditions. In other countries, this scenario has typically occurred because the number of COVID-19 cases in the broader community has exceeded hospitals’ capacity. However, the spread of SARS-CoV-2 within the hospital system itself will of course also reduce hospitals’ capacity, as well as increase the risks to all patients within the system. The same applies to aged care facilities, which by their nature concentrate people in the demographics most vulnerable to the virus.

It is possible to imagine a scenario, for example, in which broad community incidence of COVID-19 remains very low, but COVID-19 is widespread within a hospital or hospitals, and this scenario would have a disproportionate negative impact on both New Zealand’s COVID-19 death rate, and on our ability to manage the virus, regardless of the success of the rest of the government’s COVID-19 strategy. Moreover, it would be difficult to suppress the virus, in this scenario, because hospitals are by their nature essential services which cannot be ‘locked down’ without very severe health consequences for the communities they serve.

For these reasons, it is worth adopting a highly precautionary approach to ensuring that SARS-CoV-2 does not spread within the health- and aged-care systems. These institutions are a ‘weak point’ in our ability to deal with COVID-19. Given the extreme measures we have taken to combat COVID-19 (including near-complete closure of the country’s borders, and level 4 alert measures nationwide) it seems reckless and irrational for the government to fail to implement a much less extreme measure – widespread randomised testing of health- and aged-care workers – that could have a disproportionate impact on our ability to successfully manage COVID-19 and its consequences. The government should implement such a testing policy as a matter of urgency.
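
To put rough numbers on the ‘sentinel’ point above (a back-of-envelope sketch with illustrative figures only – the real parameters would need to come from the modellers):

    # Back-of-envelope: the chance that randomised testing of health- and
    # aged-care workers catches at least one case of an incipient cluster.
    # All figures illustrative; assumes random sampling, independent tests,
    # and (crudely) a test sensitivity of 90%.

    def detection_probability(prevalence, tests_per_week, sensitivity=0.9):
        p_catch_one = prevalence * sensitivity
        return 1 - (1 - p_catch_one) ** tests_per_week

    # Suppose 1 in 500 workers across some network of facilities is infected:
    for n in (100, 500, 1000):
        print(n, round(detection_probability(1 / 500, n), 2))

    # 100 tests/week:  ~0.16 chance of catching the cluster that week
    # 500 tests/week:  ~0.59
    # 1000 tests/week: ~0.83 -- and the odds compound week on week

Even a modest randomised program, in other words, gives a reasonable chance of catching an incipient cluster within a week or two – long before it could silently overwhelm a facility.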