A recent Women and Equalities select committee hearing on gender-based violence in universities included a discussion of what evidence is available on this issue and what role surveys should play in providing it. Serendipitously, this came just after Louise Livesey, Marian Duggan and I had published an article on this very topic, ‘Researching Students’ Experiences of Sexual and Gender-Based Violence and Harassment: Reflections and Recommendations from Surveys of Three UK HEIs’, in the journal Social Sciences, for a special issue edited by Vanita Sundaram and Pam Alldred on methods for researching gender-based violence. The article reflects on our methods as well as the politics and governance of surveying gender-based violence in UK universities, drawing on our experiences of carrying out such surveys within three different institutions.
As a backdrop, in the article we noted that ‘according to Chantler et al.’s (2019) survey of university staff involved in addressing SGBVH, by 2019 at least 31 universities in the UK had carried out such surveys to establish baseline prevalence data.’ However, little of this work has led to accessibly published reports or academic articles; the exceptions include published institution-level surveys from the Bristol and Imperial SUs and, among academic work, Steele et al.’s study at Oxford and a forthcoming survey at QUB from Susan Lagdon. There is thus a large amount of unpublished data on this topic in the UK, and we were curious about why and how this data is going unpublished.
In the article, we reflected on the resource and governance issues that mean none of the findings from our three surveys have yet been made public. Overall, we found that the main impediment to publishing our findings was a lack of institutional resources for carrying out the work, which in turn meant a lack of expertise and time. Institutional reputation and partnership working also acted as impediments to publishing the data. This means that, as we outlined in the article, ‘thousands of students spent time completing surveys, but none of the institutions have (at the time of writing) published findings. We find this hugely problematic.’ Our experiences also raise a question about the 31 surveys carried out by 2019, most of which have not been published: is it ethical to ask students for this data if we are not going to share the findings with them?
In the article, we also discuss methodological issues that arose in devising, administering and analysing the surveys. Two of the three authors used the ARC3 survey from Bill Flack and team alongside the Illinois Rape Myth Acceptance scale, while the third used a derivative version of the ARC3 survey. We found that relatively long surveys (22–28 minutes) were not an impediment to students filling them out. We also argue for the use of incentivisation to ensure a broader sample of participants. In contrast with other researchers working in this area, we did not have any issues getting ethical approval and found the ethics process within our institutions to be supportive and helpful.
We also discuss how we dealt with issues that came up in using Rape Myth Acceptance scales: there was resistance from students to the binary gendered nature of the survey, as well as to its assumptions that sexual violence is gendered. We hope more findings will be published soon, including work on adapting ARC3 for a UK context, some comparative analysis of our findings, and reflections on what a standardised survey tool for the UK could look like.
Overall, however, our findings show that students are interested in and willing to participate in such surveys, but we have a long way to go in creating a climate where it’s possible to publish findings and to share and discuss them with students.