NHS Cuts by Region

May 23, 2012

Éoin Clarke’s blog has a post on the uneven geographical distribution of NHS cuts. He writes:

The wealthiest, and dare I say it Toriest, parts of England have actually experienced no job losses. The South East of England has actually grown its NHS workforce since the May General Election, while the North West of England alone has experienced more than 6,500 job losses.

His post includes a chart. Clarke’s chart shows absolute figures – I thought I’d make my own version of it, showing percentage change. This doesn’t make any real difference to the story, but here it is anyway. Note that these figures cover Hospital and Community Health Service staff only, excluding primary care staff – a lot of NHS employment isn’t captured.

Click the chart to enlarge it. Data from here.
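The percentage-change calculation behind my version of the chart is just (after − before) / before. Here is a minimal sketch of it – the regional headcounts below are hypothetical illustrative numbers, not the actual figures from the linked dataset:

```python
# Percentage change in NHS hospital & community health workforce by
# region. Headcounts here are hypothetical illustrative numbers, NOT
# the actual figures behind the chart.
headcounts = {
    # region: (staff at May 2010 election, staff now)
    "South East": (100_000, 101_500),
    "North West": (130_000, 123_500),
}

for region, (before, after) in headcounts.items():
    pct_change = 100 * (after - before) / before
    print(f"{region}: {pct_change:+.1f}%")
```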


May 13, 2012

A helpful way to approach Brandom’s inferentialism is to look at some of the positions he takes it to oppose. In this post I will begin to discuss one such position, which Brandom labels ‘reliabilism’.

Recall that the opposition here is between positions that take representation or reference to be explanatorily primitive within semantics, and Brandom’s own position, which argues that representation or reference can be fully explained in terms of inference. For Brandom, inference is a social thing – the language-behaviours with which we attribute and endorse inferences are social behaviours. Reference, by contrast, very much appears to involve something other than social practice – indeed, it is hard to imagine a credible account of reference that does not involve entities and events that would not ordinarily be categorised as social. If I state that the surface temperature of the planet Mercury can reach 700 kelvin, and that, by our best estimates, around 8 billion years from now that planet, along perhaps with Earth, will have been destroyed by the expansion of the sun to around 256 times its present radius, I am making claims about entities that do not, in any very narrow sense, participate in our social practices. Nor would the causal mechanisms by which these entities (Mercury; the sun) impact upon our senses and equipment ordinarily be called ‘social’.

Nevertheless, Brandom believes that representation should be understood in social terms. What does this mean?

We can start to unpack this by contrasting Brandom’s position with a common alternative account of reference, which Brandom (following standard philosophical usage) labels ‘reliabilism’. The issue here, of course, is how we gain empirical knowledge of the world – we are interested for now in language-entry moves (perception), rather than language-exit moves (action).

Recall, then, that Brandom’s account is built up out of reliable differential responsive dispositions (RDRDs). RDRDs are the basic (if you like ‘ontological’) building blocks of Brandom’s account. [Though this is a ‘weak’ rather than a ‘strong’ ontology, in that it does not make any claims about the ‘fundamental’ entities that inhabit our world; merely the kinds of behaviour some entities in our world must be capable of exhibiting if our account is to have any explanatory function.] Given this, it would seem to make sense for Brandom’s system to take reliability of responsiveness to stimuli as the core of its account of representation.

Brandom frames his discussion of reliabilism in relation to the classic definition of knowledge as justified true belief. In his seminal “Is Justified True Belief Knowledge?” Edmund Gettier attributes this position to A. J. Ayer, Roderick M. Chisholm, and, more tentatively, Plato. I need, obviously, to do a more thorough review of the literature here…

A central question for justified true belief accounts of knowledge is what constitutes justification. Brandom’s overall strategy, in keeping with his normative phenomenalism, is to explain justification in terms of the social practices of taking-as-justified. (Similarly, Brandom will explain truth in terms of taking-true; and he will explain the content of belief – that which makes a belief a belief about something, and thus a belief at all – in terms of the inferential practices of attributing and acknowledging commitments. This is all, for now, a separate set of issues.)

However, the justification of our beliefs does not prima facie have to be accounted for in terms of the activity of justifying them. As Brandom puts it:

It is generally agreed that some sort of entitlement to a claim is required for it to be a candidate for expressing knowledge. But it is not obvious that inferring in the sense of justifying is at all fundamental to that sort of entitlement.

The core point, it appears, is that our beliefs cannot be accidental if they are to be capable of counting as justified. A belief formed by flipping a coin will not (unless, perhaps, we attribute the coin-flip’s outcome to the intervention of a supernatural power) provide justified belief even if (by chance) it provides true belief. An account is needed of the non-fortuitous formation of a belief if it is to be a candidate for knowledge.

‘Reliabilism’ does exactly this, without invoking inferential justification. In Brandom’s words:

[T]he correctness of the belief is not merely fortuitous if it is the outcome of a generally reliable belief-forming mechanism. Epistemological reliabilists claim that this is the sort of entitlement status that must be attributed (besides the status of being a true belief) for attributions of knowledge.

Brandom makes use of Alvin Goldman’s ‘Barn Facade County’ thought experiment (from Goldman’s “Discrimination and Perceptual Knowledge”, Journal of Philosophy 73, no. 20 (1976)) to argue against the reliabilist position. I’ll talk about this next.

Brandom’s system is an ‘inferentialist’ one. Brandom frames much of his work by contrasting this ‘inferentialism’ with what he calls ‘representationalism’. These are two different approaches to understanding conceptual content. ‘Representationalism’ is the view that representation should be taken as explanatorily fundamental for semantics. On this picture, certain linguistic units have meanings by virtue of their powers to denote, or refer. These units can then be connected and combined in propositional structures or belief-webs, the subsections of which can be connected by chains of inference. This is the picture of language proposed by Bertrand Russell, and by the early analytic philosophers working within the logistic paradigm Russell popularised within the English-language discipline. There are simple units of reference, which can be combined and manipulated by using fundamental logical tools of inference. On this account inference, too, is explanatorily basic: inference cannot be explained in terms of reference, and reference likewise cannot be explained in terms of other semantic concepts.

‘Inferentialism’, by contrast, takes inference as explanatorily fundamental. Furthermore – and seemingly implausibly – it suggests that representation can be explained in terms of inference. On this inferentialist picture, inferences do not connect independently comprehensible representational content-units. Representations can only be understood – and can be fully explained – as products of inferences.

In Making It Explicit (and in other works) Brandom distinguishes between three kinds of inferentialism: weak inferentialism, strong inferentialism, and hyperinferentialism. Brandom himself endorses ‘strong inferentialism’. Here are his characterisations of the positions (from Articulating Reasons, pp. 219–220):

Weak inferentialism is the claim that inferential articulation is a necessary aspect of conceptual content. Strong inferentialism is the claim that broadly inferential articulation is sufficient to determine conceptual content (including its referential dimension). Hyperinferentialism is the claim that narrowly inferential content is sufficient to determine conceptual content.

Obviously the key here is the difference between “broadly” and “narrowly” inferential content. Brandom characterises the difference as follows:

Broadly inferential articulation is sufficient to determine conceptual content. Broadly inferential articulation includes as inferential the relation even between circumstances and consequences of application, even when one or the other is noninferential (as with observable and immediately practical concepts), since in applying any concept one implicitly endorses the propriety of the inference from its circumstances to its consequences of application. Narrowly inferential articulation is restricted to what Sellars calls “language-language” moves, that is, to the relation between propositional contents.

Brandom presents here a ‘web of belief’ picture, in which propositional contents are related inferentially. Proposition A implies proposition B, and both are incompatible with proposition C, etc. If we understand propositional contents in linguistic terms – propositions being things that can be expressed in sentences – then we can think of the inferential relationships between propositions as relationships between specific linguistic contents. An inference is a “language-language” move, in that it connects one linguistic content to another linguistic content.

Hyperinferentialism, as Brandom characterises it, suggests that linguistic content can be fully understood in terms of these language-language moves. This has some similarities to the class of positions discussed by John McDowell (in his Mind and World) under the heading of ‘coherentism’ (McDowell’s particular target in these discussions is Donald Davidson). The objection to this position is that it seems to sever conceptual content from any connection to the outside world (or, more properly, any rational connection – any connection that can rightly be taken as placing a warranted constraint or having a justificatory bearing on the content of our propositions). In McDowell’s words, this picture:

depicts our empirical thinking as engaged in with no rational constraint, but only causal influence, from outside… Coherentist rhetoric suggests images of confinement within the sphere of thinking, as opposed to being in touch with something outside it.

Brandom too regards this as the likely penalty of a hyperinferentialist understanding of conceptual content. Such an understanding, Brandom claims, may be plausible “at most for some abstract mathematical concepts” (AR, p. 220). It is, however, an inadequate explanatory apparatus if we aspire to treat the empirical richness of most conceptual content.

Weak inferentialism, by contrast, suggests that while inferential connections between propositional contents are a necessary component of our explanation of conceptual content (a concept cannot have content if nothing follows from that content), an account of inference cannot be sufficient to fully explain conceptual content: some other category – i.e. reference – must be brought in to account for the (rational) connection between words (or propositional contents) and things.

What is the nature of the ‘strong inferentialism’ Brandom advocates, which aims to chart a course between these two alternatives? Another way of putting this: what is the category of ‘broad inference’ that encompasses more than simply language-language moves under the heading of inference, for Brandom?

The important Chapter 4 of Making It Explicit addresses these issues. There Brandom discusses ‘Perception and Action’ – or, as he also terms them, ‘language-entry’ and ‘language-exit’ moves. Language-entry moves (perceptions) allow things outside of linguistic practice (the regular furniture of our world) to impinge upon, influence, generate and destroy the conceptual contents we manipulate in our thoughts and statements – to have a bearing upon which conceptual contents are warranted, and which are not. Language-exit moves, by contrast, allow our concepts to impact upon the world in more thoroughgoing ways than via the usual articulation of sentences or interaction of brain-behaviours – we act and transform the world in ways that are connected to our beliefs, and the justification or otherwise of these actions is connected to the content of those beliefs.

How can these perceptions and actions be folded within an ‘inferentialist’ account of conceptual content? In what sense should the perception of a moving rock, or the action of kicking one, be understood ‘inferentially’?

The mathematics of inferential statistics is based on the logic of random sampling: the inferences we make in inferential statistics work on the assumption that the data we are inferring from is randomly sampled from the population we are inferring to – that every member of the population has an equal chance of ending up in our dataset. Obviously this usually isn’t the case; but that’s the assumption, and the further our actual sampling practice deviates from that ideal situation, the less likely our inferences are to have any validity.
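The point about deviation from the random-sampling ideal can be made concrete with a small simulation (all numbers hypothetical): estimating a population mean from an equal-chance random sample, versus from a convenience sample that can only reach part of the population.

```python
import random

random.seed(0)

# Hypothetical skewed population of 10,000 values (e.g. incomes).
population = [random.lognormvariate(10, 0.5) for _ in range(10_000)]
true_mean = sum(population) / len(population)

def mean(xs):
    return sum(xs) / len(xs)

# Ideal case: every member of the population has an equal chance
# of ending up in the sample.
random_sample = random.sample(population, 500)

# Convenience case: only the lower half of the population is
# reachable, violating the equal-chance assumption.
reachable = sorted(population)[: len(population) // 2]
convenience_sample = random.sample(reachable, 500)

print(f"true mean:          {true_mean:,.0f}")
print(f"random sample:      {mean(random_sample):,.0f}")
print(f"convenience sample: {mean(convenience_sample):,.0f}")
```

The random sample’s estimate lands close to the true mean; the convenience sample’s estimate is systematically too low, however large we make it.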

In much inferential statistics, the population we are sampling from is an actual population of cases, which could in principle be observed directly if we only had the money, time, staff, access, etc. etc. Here the ideal situation is to create a sampling frame that lists all the cases in the population, randomly select a subset of cases from the sampling frame, and then collect data from those cases we’ve selected. In practice, of course, most data collection doesn’t work this way – instead researchers pick a convenience sample of some kind (sometimes lazily, sometimes unavoidably) and then try to make the argument that this sampling method is unlikely to be strongly biased in any relevant way.
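That ideal procedure – enumerate the frame, randomly select, then collect – is simple to sketch (the frame here is a hypothetical list of cases, purely for illustration):

```python
import random

random.seed(42)

# Hypothetical sampling frame: an exhaustive list of the cases in the
# population (say, every GP practice in a region).
sampling_frame = [f"practice-{i:03d}" for i in range(250)]

# Simple random sampling without replacement: each case in the frame
# has an equal chance of selection.
selected = random.sample(sampling_frame, k=30)

# Data collection would then target exactly these selected cases.
print(selected[:5])
```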

Sometimes, however, the population from which we draw our sample is not an actual population of cases that happen for contingent practical reasons to be beyond the reach of observation. Sometimes the population from which we draw our sample is a purely theoretical entity – a population of possible circumstances, from which actuality has drawn, or realised, one specific instance. Thus our actual historical present is a ‘sample’ from a ‘population’ of possible realities, and the generalisation we aim to make from our sample is a generalisation to the space of possibilities, rather than simply to some aspect of crass and meagre fact.

When we make claims that are predictive of future events, not merely of future observations of present events, we are, tacitly or overtly, engaged in this endeavour. To predict the future is to select one possible reality out of a space of possibilities, and to attribute a likelihood to this prediction is to engage in the statistical practice of assigning probability figures to a range of estimates of underlying population parameters – or, equivalently, to give probability figures to a range of estimates of future sample statistics ‘drawn from’ that underlying population. I may try to articulate this point with more precision in a future post – I’d like to spend more time on Bayesian vs. frequentist approaches to probability. And there is, of course, a ‘metaphysical’ question as to whether such a ‘population’ ‘really exists’, or whether the ‘samples’ themselves are the only reality, and the ‘population’ a speculative theoretical entity derived from our experience of those samples. Functionally, however, these stances are identical: and by my pragmatist lights, to note such functional equivalence is to collapse the two possibilities together for most theoretical purposes.
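One frequentist way of making this concrete (a sketch with simulated data, not a resolution of the Bayesian/frequentist question): from an observed sample we can attach a probability-qualified interval to our estimate of the underlying population parameter, and a wider interval to a single future draw ‘from’ that population.

```python
import math
import random

random.seed(1)

# Hypothetical 'sample' of observed outcomes, treated as drawn from an
# unknown population of possibilities.
sample = [random.gauss(100, 15) for _ in range(200)]

n = len(sample)
mean = sum(sample) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))

# 95% confidence interval for the underlying population mean
# (normal approximation)...
ci = (mean - 1.96 * sd / math.sqrt(n),
      mean + 1.96 * sd / math.sqrt(n))

# ...and a much wider 95% prediction interval for a single future draw
# from that population: uncertainty about the parameter plus the
# variation of the population itself.
pi = (mean - 1.96 * sd * math.sqrt(1 + 1 / n),
      mean + 1.96 * sd * math.sqrt(1 + 1 / n))

print(f"mean estimate: {mean:.1f}")
print(f"95% CI for the population mean: ({ci[0]:.1f}, {ci[1]:.1f})")
print(f"95% PI for a future draw:       ({pi[0]:.1f}, {pi[1]:.1f})")
```

The gap between the two intervals is the formal shadow of the distinction in the text: uncertainty about a parameter, versus the spread of the space of possibilities from which the next actuality will be drawn.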

When we speak of universal natural laws, then, we are stating that a given fact – the law in question – will be true in the entire range of possible worlds that might, in the future, be actualised in reality. (Whether this ‘possibility’ should be understood in ontological or epistemological terms is beside the point). For some, it is the role of science to make such predictions: on this erroneous stance, science attempts to identify universal features of reality, and any uncertainty that accrues to scientific results is the uncertainty of epistemological weakness, rather than ontological variation. Here, for example, is a video of Richard Feynman making fun of social science for its inability to formulate universal laws of history:

To take this attitude is to misunderstand the nature not just of social science, but of science in general. Science is not characterised by a quest for certainty or for permanence, but is rather characterised by an ongoing collective process of hypothesis formation and assessment, based on specific collectively accepted evidentiary standards. The conclusions of science cannot be certain, because they must always be vulnerable to refutation in the light of empirical evidence and the application of community norms of argument. Similarly, the phenomena examined by science need not be necessary, or even ongoing. A scientific endeavour can be entirely descriptive, of the most local and variable phenomena imaginable, so long as the process of description is subject to the appropriate communal evidentiary norms. It can, similarly, be explanatory without being predictive, for we can analyse the causes of the phenomena we observe without being able reliably to predict those causes’ future impacts and interactions. The set of phenomena regarding which long-term or even short-term reliably predictive hypotheses can be formed is smaller than the set of phenomena that can be studied empirically using the relevant community norms of hypothesis formation and assessment.

The social sciences often approach this limit case of the purely descriptive. Social reality is enormously variegated – and often there is little in the way of testable general claims that can be taken from a study of any given social phenomenon. But prediction is nevertheless sometimes the goal of social science. When the social sciences aim to study social phenomena, the ‘laws’ they aspire to uncover are always local and limited in scope – and when we form a hypothesis, this hypothesis applies within a certain local limit and no further. Where to draw the line – where to locate this limit – is a qualitative question that the community of social scientists must always bear in mind, but the existence of this limit in no way renders the endeavour ‘unscientific’.

When we make a social-scientific prediction, then, we are making a claim about what future reality will be drawn from the space of possibility. We do not know the scope of this space – nor do we have any reason to regard the principle of selection as random or unbiased – indeed, we have strong reasons to believe the contrary. Further, the nature of social reality is such that we can and do aspire to intervene in this selection – to attempt to influence what possibilities are realised. As social scientists we sometimes aim to predict what outcomes will be drawn from this space of possibilities – and such a prediction can only be made within the framework of a broader, historically informed judgement of the narrower space, within the space of possibilities, that we aspire to model.

But we should also be aware of other, unrealised but potentially realisable social possibilities, beyond the set of possibilities we are modelling at any given moment. Part of the function of the scrupulous social scientist is to describe this space of possibilities itself – to describe not just regularities, but also the possible variety from within which those local regularities are drawn. We cannot know the limits to the space of possibilities – no sampling frame of possible societies exists. But we can explore what the ‘samples’ themselves – existing and historical societies and behaviours – tell us about the scope of that hypothetical space.

This latter task is where social science intersects with political practice. The understanding of the likely behaviour of social reality is important for political practice – but so too is a sense of the larger space of possibilities from which our own past and present societies have been drawn, and from which alternative futures could be drawn, or made, if we only had the political ability to do so.