Among the usual rigours of the start of term, UK academics have been looking with dismay into the less immediate future because of the publication of the latest “consultation document” for the REF. What’s the REF? REF is what used to be RAE. RAE stood for “Research Assessment Exercise,” while REF is “Research Excellence Framework.” Under either name, it’s a huge attempt, consuming a lot of time and money, to work out how good different departments in different universities are at research, and the result is supposed to be that the best places are rewarded with lovely research money and (since some of the extrapolations from the data are published) with status. The consultation document explains some of what’s going to change as RAE becomes REF (and although it’s called a “consultation document,” actually the areas where responses are invited mostly concern points of detail: the general direction appears to be fixed).
Something of the sort has been going on since the 1980s, and (since measuring behaviour always changes behaviour, especially where cash prizes are on offer) it has made a big difference to British universities. Most departments used to have people who didn’t really do any research; in institutions with high aspirations, this is no longer the case. So (depending on which way you look at it) either the universities have got rid of plenty of good scholars and excellent teachers deeply committed to their students who happened not to publish much, or there has been a clear-out of place-holders and passengers. (Needless to say, which of these is a better description varies from case to case. “Getting rid of” people usually meant pushing them into early retirement. Not all of the change occurred through getting rid of people: since the pressures of RAE mean that only those with good publication records can get jobs, regular turnover of personnel has had the same effect.) And those who do publish have had to make sure that they do so at regular intervals, so as always to have enough out in the period under assessment.
Festschriften often have lists of the publications of the person being honoured. One may sometimes look at a Festschrift of somebody past retirement age now and realise that this person, whom one always considered (and rightly so) as a really important researcher with lasting contributions to their field, had a cv which would now be “no good for RAE”. For example, it might be that there was a period of several years with only a few reviews, because (s)he was working on a big book. That’s no good if you don’t get enough out during the RAE cycle…
And the techniques of evaluation were (probably inevitably) fairly alarming. For each assessment, there was a panel of scholars for each subject who had to read and evaluate all the research which had been submitted. Anecdote tells of scholars on these panels being given more than 200 “outputs” (a mix of books and articles) which had to be scored from 1 to 5 (or “unclassified”), each of these being judged by one person only. No scholar would be pleased for his/her work to be reviewed in this kind of production line. And if PhD students’ work was evaluated this way, it would be considered virtually criminal, and rightly so.
Anyway, RAE is dead: long live REF. And whatever you think of RAE, REF is worse, at least for the humanities.
Starting with the good news:
At least for humanities subjects, there will not be “citation indexing” (i.e., assessment of research by the absurd means of counting up how many times other researchers have cited it). This had been threatened.
It appears that there will be an attempt to stop football-style transfers, by which universities tried to get better RAE scores by hiring top researchers at the last minute.
The bad news is “Impact”.
The first thing to understand about Impact is that it is quite different from Significance. Significance is about how important your work is within the relevant field of research. Impact is about having an effect on the World Outside. Institutions will be rewarded for somehow having an effect on the world outside. From the way in which it’s suggested that this be measured, it’s fairly apparent that this measure was devised by people who didn’t really have the humanities in mind. (Alternatively, the covert intention is to find a way of demonstrating that humanities research has little Impact, and is therefore worthy of less funding). Thus most of the ways in which the document suggests that Impact might be measured are clearly irrelevant to work like mine. Here are some of the ways in which Impact could be indicated:
“Staff movement between academia and industry” (This is held to show that academic work is impacting on industry, which is a Good Thing, rather than that a department is full of people who are on the lookout for a chance to make better money elsewhere, which might be thought of as a Bad Thing).
“Research contracts and income from industry” (The other way of thinking of this would be to say that those researchers are doing the work which industry is willing to pay for anyway, whereas government should concentrate on funding the things which industry won’t pay for).
“Research income from the NHS and medical charities” (and various other indicators which mean “this person’s research is recognised as significant in terms of real-world healthcare”).
“Application of solutions to sustainable development” (i.e., it’s good for impact if your work helps somebody to design a better wind-turbine or similar).
The heading “social welfare, social cohesion or national security” covers a particularly curious bunch, so I quote it in full:
“Applications of new ideas to improve social equity, inclusion or cohesion.
Measures of improved social equity, inclusion or cohesion (for example, improved educational attainment among disadvantaged groups, or increased voting rates in lower participation communities).
Application of new security technologies or practices.”
The only ones relevant to humanities are the following, under the heading “Cultural enrichment, including improved engagement with science and research”:
“Increased levels of public engagement with science and research (for example, as measured through surveys).
Changes to public attitudes to science (for example, as measured through surveys).
Enriched appreciation of heritage or culture (for example, as measured through surveys).
Audience/participation levels at public dissemination or engagement activities (exhibitions, broadcasts etc.).
Positive reviews or participant feedback on public dissemination or engagement activities.”
Quite a lot of things here strike me as curious. One is the way in which some of the definitions flirt with and dance around the question of political direction of research. Clearly it couldn’t say “research ought to be encouraging members of the public to embrace the government’s social and economic policies”; socio-economic goals must be (or must look) neutral in terms of party politics. Improved social equity or cohesion is something nobody is assumed to oppose. Similarly, although increasing voting rates from under-represented communities would probably in most cases work to Labour’s advantage, the Tories will presumably not be willing to be seen to oppose it (that’s not fair… I expect they really wouldn’t want to oppose it). The definition of “social equity” is another interesting question: what kind of equity are we talking about here? The vagueness of the term leaves it as something everybody is expected to be willing to agree with (compare the possibility that a government had said that academia should be rewarded for promoting equitable distribution of wealth: in this case, it’s obvious that it would be “real” politicisation, since not all people or parties believe in equitable distribution of wealth as a goal).
With “security” we are on really alarming ground. Presumably the people who wrote this document felt that everybody was agreed: security “technologies” and “practices” are a Good Thing. Are we? I can’t be the only person who thinks that a bit less attention paid to obsessing about security might be a good thing. It isn’t obvious to me that government funding structures ought to be encouraging academics to direct their researches towards areas which will have applications in this direction, finding more forms of biometric recognition or ways of recording people’s movements.
The same problem, that it is hard to describe what are “good” real-world Impacts of research without undesirable politicisation, is visible in a broader phenomenon. As far as I can see, there can be no negative impact. All impact is good. So you might get brownie points for developing new energy-efficient technologies or windfarms (under the heading “sustainable development”). But this won’t stop you from getting marks for Impact if you find a clever new way for horrible old Exxon or BP to get more oil out of the ground in ways which release more greenhouse gases and which are particularly destructive of the unfortunate people and habitats who happen to be on top of the oil: that’s going to come under one of the “engagement with industry” headings.
The same applies to other areas too. “Social cohesion” is officially a good thing, and the present government seems to take the view that organised religion should be thought of as socially beneficial. Thus if you give money to a church or mosque or synagogue and tick the “I am a taxpayer” box, that religious organisation can get some of your tax back from Mr Darling, just as if they were Oxfam or a medical charity (as far as I know, no party plans to change this). But if somebody writes a scholarly book advancing the thesis that most religions in the modern world are a Bad Thing because they are socially divisive or because they usually promote ideologies which are discriminatory against women or whatever it might be, and that same person does a lot of TV or radio or newspaper stuff explaining their ideas to the world outside, that’s still going to count as Impact. Richard Dawkins, with his not-very-covert anti-religious agenda, is going to score just fine on impact (quite right too: but these contradictions come to the fore when you start expecting that scholarship do social work).
Anyway, it’s a problem for people like me, since our work is largely useless: or rather, whatever utilitarian justification it might have (I don’t actually think it’s useless) is very hard to quantify or assess. (Nor is this only a humanities problem: consider the purer reaches of maths and physics. New research in these fields may or may not end up having quite big impacts, but many of these will be unforeseeable at the time when the research gets done, and at the time when it gets assessed for REF). We have to go for the “cultural enrichment” section. Doubtless some good will come of this. School talks and engagement with museums and the like are a Good Thing (they do happen quite a bit anyway). But this will favour the kinds of scholarship which are easier to make into something with a bit of showbiz and razzmatazz over the hard, slow legwork without which the other things cannot be done properly. It makes spending on the museum a priority (not acquisitions: facilities for school trips), while spending on the library (which is for almost any university the most important thing to do for good-quality humanities research) will drop down the list. People who do good work teaching their classes and then disappear off to the library and do good work there will be asked, by job interviewers or promotion panels, “what about Impact?” and being a good teacher and scholar won’t always be enough.
I’m very pleased to see that Mary Beard, doyenne of media-friendly classics, has come out fighting against it in the press, on radio and of course on her blog, not least since this is a form of altruism: she personally and Cambridge classics in general are going to do very well here, and the rise of Impact will add to the gradual rise of Cambridge over Oxford in classics as elsewhere.
(There is an oddity in the “cultural enrichment” section: public “engagement” is with “science and research,” while “public attitudes” are to be “changed” with respect only to “science”. I don’t understand why. Is the underlying thought that the public basically understands what historians are on about, whereas we don’t understand what theoretical physicists are up to, and therefore we need to have our attitudes to science changed, whereas our attitudes to history can stay as they were? This isn’t necessarily true: some of the more theoretically informed kinds of literary discussion are pretty impenetrable to the uninitiated – in which number I would sometimes include myself… Presumably “change” means “ameliorate”: but how are we to define what is a better attitude to science through a survey or in any other way?)
Anyway, some members of the public will probably spot a few changes. Schoolchildren will increasingly find that their trip will be to the university museum (even if the city or county museum is better), and they or their teacher will have to fill in a survey afterwards, or will be expected to write a glowing letter about how educational and enriching it all was. Programme notes in the theatre or in concerts will have essays which start “One of the most interesting aspects of my research in nineteenth century Russian theatrical life has been…”. If you have had lecturers from the local university coming to talk to the A-level or Higher class about something, expect them to arrive with feedback forms next time.
Anyway: look again at the “cultural enrichment” indicators. Participation, changing attitudes, enriching appreciation, dissemination, engagement. We have a word for this: it’s called “teaching”. From the very beginning, RAE was about research. (Not that teaching and examining are just left for us to do as we please, but the mechanisms for keeping an eye on that are quite separate). And so it is now, in the sense that no REF return is going to say anything about what we do with and for our students. But the effect is that we are to be assessed on how we communicate our research to the readers of the papers, visitors to museums and so on, but not to our own students. I think measuring impact is just a bad idea. But if we have to measure impact instead of concentrating solely on the quality of research, why are we missing out the biggest impact we have, the one we’d like to think we do best, and the one we spend hours of our lives trying to do better?
A final note: consultation documents and government policies come and go; but classical scholarship goes back at least to the third century BC, and the university where I work now has been around since the sixteenth century. Aren’t we just going to ignore this, or make a few half-hearted gestures and wait for it to blow over? Well, it may be that much of what changes is presentational or box-ticking stuff, done with an eye to the design of the eventual submission form (because that’s what targets are about, right? Remember Tony Blair and the GPs’ “every appointment in 48 hours” rule…). But even presentational box-ticking can take up a lot of time and effort and can really end up changing the direction of what people are doing and how they end up organizing their working lives. Academics and universities like to be thought of as independent and bloody-minded, but actually when the funding bodies have a big bag of money (or even a medium-sized bag of money) to distribute, we’ll sit up and beg for it. This money means library books, research funding, the increased likelihood that when somebody retires or leaves they will get replaced.
This consultation document was published about a month ago. Since then I’ve already seen a job advert instructing the applicant that “you… will have a clear understanding of your work’s potential for Impact.”