In contrast to the huge scale of the previous conference in Kraków, Autumn has offered an opportunity to attend something a little more manageable. The Digital Humanities Congress is hosted biennially by the Humanities Research Institute in Sheffield, and is a national conference that attracts international audiences.
In a very varied programme, the speakers covered topics such as musicology, text mining and analysis, semantic encoding and infrastructural issues. An early highlight was a series of papers, introduced by Marilyn Deegan, on the ‘Academic Book of the Future’, which discussed the potential shape of academic outputs, and specifically monographs, as the move to digital and open access opens up new possibilities. High on the wish-list was the creation of works with greater interactivity and engagement: works that can link directly to open access source material, and that provide insight through well-designed interactive visualisations and access to raw data. Some intriguing experiments in this direction were shown.
We clearly need to be looking at how we work with publishers and commercial providers, and how we can reconcile the Open Access agenda mandated through the Stern and Crossick reports, and initiatives such as OAPEN. This was highlighted in papers presenting the Digital Panopticon, a large data project with multiple interactive outputs. Bob Shoemaker described the potential of experimenting with the form of the monograph, and also the problems of reconciling that potential with the (often rigid) models that publishers demand. Sharon Howard showed, eloquently and modestly, some significant insight into research questions on prisons, prison hulks, transportation and prisoner re-offending, dispelling many preconceptions around the fate of those condemned to be transported. The inclusion within monographs of interactive visualisations such as these, which make the conclusions clear, is surely the future of academic outputs.
A number of papers tackled the good and bad of the MOOC, and particularly the scale of MOOCs, which often seems in conflict with the focused and specific nature of digital methods as applied to individuals’ research. The MOOC model works well for foundational material, but is more challenging as students become more aware of their specific needs and want to concentrate on methods that will give them the tools they require. Much experience was shared by Simon Mahony, Francesca Benatti and many others regarding teaching methodology to diverse groups of researchers, across geographical and institutional boundaries, and with varying models of immersion and blended learning.
Perhaps the most unusual topic was presented by Daniele Quercia and Deborah Leem, who examined ‘London Smells’ using text mining methods on local Medical Officer reports from across London. This analysis of around a million words of descriptive accounts of local conditions, symptoms and diagnoses produced a dataset of geocoded and categorised smells for boroughs across London. These were mapped interactively, giving an impression of the dominant odours of each place and, perhaps more importantly, some evidence of the trends of poverty and disease relating to these conditions. Of course, there was much discussion of how smells can be categorised and displayed (alas, or perhaps thankfully, there was no attempt to reproduce them), of euphemisms and archaic terminology of smells, and of the methods used to extract meaningful data.
Also worthy of mention were papers on infrastructural issues. Approaches to providing the expertise required for effective Digital Humanities engagement, and the technical infrastructures and staffing issues around maintaining an active Digital Humanities community, were effectively contrasted in a session with Simon Tanner and James Smithies from King’s Digital Lab, and Brian Rosenblum from the University of Kansas Libraries. In particular, this raised issues, and solutions, around sustainability and funding, around building and maintaining expertise, and around growing effective innovation in research and infrastructure.
The final keynote presentation, given by Matthew Gold, provided some insights into an emerging topic of discussion in Digital Humanities: that the technical infrastructure we use is not neutral, but has its own biases and interests. In particular, when we engage with a network such as Facebook or Twitter, we must be aware that these are commercial platforms with a clear business model that commodifies our interactions and communications, and this can introduce pressures of which we may not be aware. This is also true of publishers, whether traditional or engaged with the digital, and there will be nudges from all providers to move us into disseminating or engaging in ways that make commercial sense for them. Matt’s paper could be read as a call to arms to reinvent platforms that serve the higher aims of education and research, or at least to recognise the influences they exert on us.
The conference closed with a big thank you to Mike Pidd and the Sheffield HRI Digital team, which I echo wholeheartedly – a diverse and stimulating conference, with emphasis on collaboration and community development, that was also one of the friendliest I’ve been to.
Gary Stringer