In October 2009 I ran (with the help of local researchers) a dissemination workshop in the basement of a community library in the informal settlement of Cato Crest, Durban, South Africa. I was feeding back to project participants, other residents and local politicians recent findings from research, focused on that particular settlement, on men and violence and on the institutional structures residents used to manage such violence. The workshop generated a mix of awkward tensions and hostility but also appreciation and acknowledgement. Violence was indeed a key issue, and it was highly gendered. Some men had found the research liberating and cathartic, describing their involvement as “…taking half of the big heavy bag [from] me…”
Clearly, men’s experiences needed understanding, but participants at the workshop informed me that subsequent action was essential too. The action they suggested was not in relation to violence counselling, planning and safety, or gender-sensitive policing (or further academic research on any of these): rather, they wanted housing, jobs, schools and health care, and demanded to know how I would be able to assist in achieving this. Surely, as an academic, I must wield power; I must be able to have an impact. A politician then informed me that I was making trouble for him, ‘rousing the rabble’ over issues which he would then be expected to resolve, and over which he had no power; neither did I… Some time later, back at my university in the UK, in a meeting about performance and targets with my departmental leadership, I was questioned about the impact of my research work: “Has it had an impact on policy?” they asked, and “What actual evidence can you present us with to prove that you have had an impact through your work?” I sat, hands folded.
The scene is a not-so-crowded seminar room in south India, and the research team is trying to fulfil its promise of ‘user engagement’ by presenting a summary of our ethnographic research on poverty and participatory governance initiatives. We’ve been constructively critical — highlighting some of the shortcomings and unintended consequences of Kerala’s innovative plans for enticing poor women to be active in local council activities — and our audience is a mix of local academics and ‘policy makers’, the latter including senior politicians and civil servants. The sessions go well: there’s lively debate around our methods, our findings, and the current state of Kerala’s local government institutions. A few days after the meeting, one of our team is approached by a senior opposition politician, who is likely to be back in government after next year’s State Assembly elections: could she write something for him on the difficulties faced by the female-centred programme? Being unsure of his motives, and recognising that we would have no control over the use made of her writing, she refuses.
Back in the UK a few weeks later, I’m emailed by the project’s UK funding agency with a request to fill out a spreadsheet recording the project’s impact: How many project publications have been targeted for use by the Department for International Development (DFID)? How many project-related media appearances and newspaper reports have there been? What evidence is there of development policies, programmes or practices being changed by our research? The disconnect between the spreadsheet and the experience of dissemination is dramatic — there’s no space to record the workshop as an event, still less the feedback undertaken with our research participants in our rural study areas.
The increased emphasis on ‘impact’ within the evaluation of social science research in UK higher education is justifiably raising significant concerns amongst the academic community. Impact has been turned from a vague and rather intangible notion into a concrete ‘performance criterion’, a transformation which can only be enacted through a number of highly questionable assumptions. The first is that it is possible to ‘capture’ impact through the sorts of indicators present in the DFID spreadsheet. This extends the use of metrics, such as citation indices, from the measurement of research ‘quality’ (which has already been the subject of intense debate in UK academia) to the even more difficult task of assessing impact.
The selection of outlets and timeframes here clearly shapes the outcome of this measurement, privileging some indicators whilst discounting others. A second set of assumptions relates to ideas of ‘research-led policy’, with a linear progression from research, through dissemination of evidence, to policy change, which appear to underpin the impact agenda despite having been widely discredited in policy studies research. Finally, and more fundamentally, there is a normative assumption that ‘more’ impact is better; this in turn sets in train incentives to ‘maximise’ our impact, regardless of the politics of doing so. However conceptually flawed, the impact agenda will have important disciplining effects on decisions over research funding (where projects must demonstrate ‘pathways to impact’), on the way we report our activities (to peers and to research sponsors), and on the difficult and situated judgements we make when engaging with research participants and ‘users’.
These are concerns for all social scientists, of course, but for those working in the ‘Global South’ the layers of complexity are potentially deeper. Demonstrating impact risks enlisting us in performances of our ‘expertise’ to grateful ‘recipients’ that have uncomfortable echoes of colonial patterns of association and action. The criteria through which impact is measured are set in the UK but transposed uncritically into research contexts which are politically and culturally different: incentives to act in accordance with these criteria can thus be inappropriate or even damaging.
Working with a member of the opposition front bench in Kerala would have allowed Glyn’s project to demonstrate impact, but the politics of this would have been highly questionable. The project’s ‘international’ status would have been deployed to discredit current policy, but our analysis would have been largely irrelevant to the politician’s alternative agenda, which was certain to have been pre-formed around other interests. This engagement would also have labelled Indian members of our team as open critics of the current government — likely to be career-damaging rather than life-threatening in this particular context, but nevertheless a ‘cost’ of impact which responsible research must take into account. Feedback meetings held later in our case study areas are invisible in the DFID spreadsheet — we can claim no policy impact from these, but sharing our analysis directly with our rural research participants seems far more valuable than promoting it through ‘media appearances’.
In Paula’s research, encouraging men to discuss their painful experiences of violence was personally very significant for the men themselves, but inevitably ambivalent in its outcomes. Some male participants found the material traumatising and indicated their unwillingness to engage further with the project, a decision which had to be respected. Other participants’ remarks were highly misogynistic, and it is possible that the project’s focus validated their beliefs that women’s recently extended rights were detrimental. The complex politics at play here, and their unintended negative outcomes, would remain simply ‘unmeasurable’ in UK impact terms, although Paula has begun to address them within her published work.
An opportunity to put her research to wider use locally came in the form of pressure from an unelected local ANC leader at the dissemination event, who asked how Paula could work with him to ‘help the community’. Doing so would have drawn her findings into policy areas she had not researched, in partnership with a figure who was a key power-broker in the area and had a reputation for corruption.
These concrete examples are indicative of wider concerns. Incentives to claim that we will deliver change in highly charged political situations from which we as individuals are often distanced can encourage risk-taking — but any negative consequences of our actions are likely to fall on people more vulnerable than ourselves. Privileging demonstrable effects on ‘policy makers’ over other forms of engagement severely limits the ways in which academics can critique public policy. What happens, finally, to ethnographic research which aims to understand the diversity and complexity of everyday life in the South, rather than seeking to change policy — is this trivialised, or does it simply disappear?
For all these reasons and more, we believe it is important that the impact agenda is questioned and challenged. If it is not, it could collapse the critical distance between academia and a performance-driven ‘development industry’, closing down the space for independent scholarship on the Global South.
Paula Meth lectures in the Town Planning Department, University of Sheffield, UK. Her work focuses on gender, violence and informality in South Africa. She is co-author, with Glyn Williams and Katie Willis, of Geographies of Developing Areas (Routledge, 2009).
Glyn Williams also lectures in the Town Planning Department at the University of Sheffield, UK.