If the death of one Margaret Thatcher has served to remind us that the present crisis has its roots back in the 80s on her watch, one of the things that Thatcherism was responsible for was the corruption of the value of words. It may seem trivial compared to her other feats, but replacing terms like ‘passenger’ with ‘customer’, or calling Vice-Chancellors ‘Chief Executives’, was an important part of the conquest of the public sphere by neoliberal ideology. I’m reminded of this by having to grapple again with something I’ve written about before: the REF—on which the next round of university research funding depends and for which submissions are being busily prepared—and especially the new ‘impact’ agenda. The very word ‘impact’ has now become an odious one, emptied of its former richness of meaning, reduced to a code word with arcane referents.
There are dangers in the REF that will affect people who probably don’t know much about it, like ‘early career researchers’. It also has particularly disturbing features for areas like creative practice research. As a recent academic blog on the subject puts it, ‘The Funding Council’s overly restrictive ‘physical science’ view of how research influences policy has created an artificial minefield of pointless obstacles.’ Indeed I’m sorely afraid (judging from my experience in producing a case study of my own work as a documentarist) that it’s almost impossible in these circumstances for the individual to write a true and honest account of their work, because then they’ll be saying things that don’t fit the tick-boxes and which the funding councils don’t want to hear.
There is a general danger identified by another academic blogger at exquisite life:
The introduction of the ‘impact agenda’ into Research Council funding priorities…increases the incorporation of Government objectives into research funding in two ways. First, by making them an increasingly important part of Research Council decision-making (‘excellence with impact’) and, second, by ‘nudging’ academic behaviour into adopting those objectives into their own research proposals.
The blog goes on to discuss a whole lot of technical reasons why all this will disadvantage early career researchers and their employment prospects. If that’s you, read it at your own risk.
One of the problems for arts and humanities is that a great deal of research—scholarly or creative—produces effects that are not easily measurable, for a variety of reasons: the book that is ignored on first publication but turns out ten years later to have been pioneering; the artful video that circulates on the web without leaving an academic footprint (so it can’t be ‘objectively’ evaluated). But anyone producing work in a field like the digital arts, which engages with non-traditional forms of dissemination and reception, will have to deal with the particular problems inherent in web usage metrics, which are at best both fuzzy and evanescent—which means their evidence isn’t ‘robust’. Maybe that’s because you didn’t do what it now seems you should have done, and do research into your research. But maybe you did. Maybe you asked people to fill in questionnaires about your film, exhibition or performance. Only it turns out that doesn’t count, because it’s merely anecdotal.
The general rubric for ‘impact’ is ‘an effect/change/benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia’. This sounds alright until you try to decode what counts as evidence. Many questions raised by experimental creative practices, for example, cannot be answered by going through a list of quantifiable criteria, since they’re not that kind of question, because they depend on individual taste and sociological dimensions like Bourdieu’s habitus, where quantitative evaluation is largely irrelevant and liable to be reductive. The type of impact they have—what old-fashioned philosophy called the aesthetic experience—is of an experiential and existential kind that necessarily falls beneath the bureaucratic radar.
For policy-makers with an instrumental approach to higher education, this is a problem to be summarily dismissed, and they end up throwing out the proverbial baby because even in their own terms they don’t understand what they’re dealing with. You could give them statistics about the value of what they call the ‘creative industries’; it won’t do any good because they’re not actually interested in all the evidence we’re asked to provide. The ruling paradigm is ideological, not ‘evidence based’. They don’t care, for example, that it was a liberal approach to art and design not so very long ago which fed the fashion, design, advertising and publicity industries in their heyday. And as mathematical logic is to computing, so too critical theory to the cultural industries. Advertising even learned a great deal from semiology (as Armand Mattelart argued in a book of his that I once translated, Advertising International). But that’s not the kind of argument that concerns me here.
The crucial point is made in a recent newspaper report by a post-doctoral researcher, in the simple observation that ‘the value of research is not always something you can predict from the outset – that’s the point of research’. ‘Impact’, says Shahidha Bari, ‘reads like a policy designed to help universities appease governmental demands for justification of expenditure’, but ‘if you’re in the business of producing ideas and culture as you do in arts and humanities research, then you’re not producing tangible, measurable effects’. There may nonetheless be ‘non-tangible effects that are no less important’.
Actually not all scientists are happy with all this either. According to the same report, some academics are ‘hopping mad’ at the weighting given to the impact criterion—an arbitrary 20%—not because they want to sit in ivory towers isolated from the world that pays for what they do, but because, says Prof Andreas Fring, assistant dean for research at City University, the government has fundamentally misunderstood the nature of how high-quality research in certain disciplines takes shape. Mathematics, he says, may bring no economic benefit at all, but it provides the tools. Furthermore, a lot of scientific research is speculation: ‘for instance, string theory is at a state where it is not yet confirmed. Verification might come after 50 years, but that is still not a practical application.’
Science, of course, gets the lion’s share of the funding. But if you want to speculate for very little money about something that doesn’t tick the right boxes, you have to jump through hoops. (This is not a mixed metaphor; only the last phrase is metaphorical.) Even worse if your work is aimed at exploring the potential for digital internet arts. Says an early career researcher quoted in the same report, Alasdair Pinkerton, ‘It’s yet to be seen how grant-awarding bodies will measure the value of social networking’.
Maybe the findings of the social sciences can sometimes be tracked in the required way, for example when they’re taken up by non-academic bodies in the context of policy debates. It is very rare for works of creative practice to produce a direct and concrete impact on, say, policy making (television documentaries do this very occasionally; feature movies almost never). But they can and do contribute to debate at the level of ‘interest groups’ and communities, both local and virtual. In some cases they link to alternative social initiatives and campaigning, which of course is another problem for the funding bodies, because the government they need to appease is thoroughly inimical towards criticism of virtually any kind. As I write, a piece appears on Guardian Education: ‘10 tips for how academics can better communicate their research to policymakers.’ But what if you’re not trying to communicate with policymakers, but with ordinary people who speak ordinary English?
There is a fundamental problem in reporting the ‘impact’ of works of creative practice, like documentary films: real impact is diffuse. Especially when the work is disseminated by digital communication, and thus situated by definition at the forefront of testing out new possibilities for reaching what the lingo calls the ‘beneficiaries’. Take the case of Secret City, the film I recently made with Lee Salter, which is a bit like a David and Goliath story. The subject of the film, the Corporation of the City of London, is at the heart of a lobbying network that spends £93m a year on behalf of finance capital, while Secret City started out with a zero budget and was completed with university funding of £7K—and no budget for marketing and publicity at all. It was made by a team of four people, and then launched, unusually, with a screening at the House of Commons. By the time the DVD is being released six months later—you can get it here—it will have been screened some thirty times up and down the country at community, cultural and university venues, always to full houses. This was only possible because of an integrated approach to the use of the social media, and the potential of the web to discover an ‘audience-in-waiting’ that is not served by broadcast media or conventional film distribution. Isn’t this already an indication of impact? Apparently not: according to the formula, it’s only a pathway to impact.
The dissemination of Secret City is pretty small scale, precisely because it’s happened outside the marketplace, an example of a new kind of artisanal cultural production in the age of global cultural monopolies. It’s had a modest success that’s corroborated by the figures, on condition that you know how to interpret them. Building on the experience of my previous documentary, Chronicle of Protest, the website home page for Secret City received more than three times the number of individual visitors in less than half the time. But the actual figures (around 8000 in 5 months, while viewing figures for the trailer exceed 10,000) are pretty meaningless without this narrative context (all of it crammed into about 750 words).
Of course, you can hardly use a phrase like ‘modest success’ in a case study, where you have to talk it up, but here I’m simply being honest. The point is (sorry, one of the findings of the research is) that the parallel outlets provided by the web are vital to creating the presence that produces dissemination through the dynamics of social networking. With non-commercial production sans a marketing & publicity budget, the web becomes the crucial means for making links with cultural, community and campaign groups and thereby organising the public screenings through which the film finds a widening audience and enters into dialogue with them around the issues. There are many negative things to be said about the forms of sociality found on the web; this is one of the positives. The box-tickers remain oblivious. But the documentarist who observes the practice of taking the film on the road—and one or both of us have attended a Q&A at every screening—knows full well when the film has a real impact on the audience.
A last observation: there is a stipulation in the guidelines for completing an impact case study to be ‘sufficiently explicit, transparent and self-contained that the panel can assess the impact without having to make inferences, gather additional material, rely on members’ knowledge, or follow up numerous references.’ Read that again carefully. It implies that you’re required to address the assessors as if they’re rather stupid. The reason is obvious: the quantity of stuff they’ll have to get through means the time they’re given to spend on each item is extremely limited. Thank your lucky stars we’re not talking about ATOS, who have a habit of classifying people as fit for work who then drop down dead. But then again, the paymasters would probably prefer that awkward folk who challenge the criteria would do the same, or at least would just go away. Well, I’m off — to Argentina for some European-funded research-as-practice. But I’ll be back…