Archival Uncertainties: International Conference on Literary Archives at the British Library – 4 April 2016
This one-day conference focused on digital humanities, with papers from a spectrum of interested parties including academics working on digitisation projects, authors, translators, archivists and curators. I attended three panels on the day, and the unifying theme was a seemingly contradictory pair of messages: dispersal and amalgamation (and butterflies).
The first thing that has been dispersed or discarded is any idea of a literary canon. As plenary speaker and archivist Catherine Hobbs pointed out, scholarship now focuses less on established set texts and more on themes like “environmental literature”. Over the past few decades, in response to this, archives have collected more non-traditionally canonical literary papers but, Catherine reminded us, as archivists we can’t stop paying attention to the ways that literature continues to change. We need to keep tabs on what is going on in the literary world in order to document it, and this will include tackling new forms of experimental, avant-garde and self-published writing.
As Catherine noted, it used to be easy to find the avant-garde – pretty much whoever was hanging out on the Left Bank – but now it’s up to archivists to not only collect this material, but to track it down in the first place, and not to default to the temptingly easy path of collecting only the papers of that tiny sliver of authors considered publishable by mainstream publishers.
We also need to be prepared to upgrade our traditional archival tools of preservation and description, or at least to be flexible in how we use them. This can be simple – like acknowledging a collective like Michael Field as the creator, rather than the individuals involved; or reconsidering the very label of ‘author’ when applied to a crowd-sourced work. Or it can be technically complex, like figuring out how to archive a (wonderful, and library-themed!) generative poem like House of Trust, which was designed to be in flux; or what it means for a local collecting policy when so many innovative literary projects aren’t working within any kind of geographical boundary.
Most archives are still learning how to work with static websites and digital archives, and most digital humanities projects are still dealing only with digitised paper documents, but Catherine listed a couple of examples of creative attempts at collecting or preserving online literary works, like the Electronic Literature Collection (Volume 3) and the Agrippa Files (while also raising the issue that many of these projects are dependent on US and UK funding, possibly to the detriment of diversity). Catherine was also keen to emphasise the value of bringing an archival focus to these collecting efforts, particularly the special archival emphasis on preserving all the developmental iterations of a work, which isn’t the natural concern of libraries or even galleries and museums.
On the theme of dispersal and amalgamation, the Electronic Literature Collection works both ways in relation to avant-garde digital literature. More of a portal or anthology than a web archive, it doesn’t try to preserve the original websites but instead offers written descriptions and video records, while continuing to point users to the very far-flung originals.
The team representing the Victorian Lives and Letters Consortium, meanwhile, strongly emphasised the international collaboration and joining-up of resources that digitisation projects now require – not only because the archival source material is dispersed across the globe, but because the tools, knowledge and skills needed to digitise it effectively are too.
Sandra Tuppen of the British Library mentioned another example of international collaboration, the (tongue-twisting) International Image Interoperability Framework, which I blogged about last June. The IIIF will allow institutions (including the Bodleian!) to share their digital images very simply, making life much easier for projects as ambitious as Victorian Lives and Letters and the Modernist Archives Publishing Project (MAPP), which will bring together scans of archives that are physically scattered across the globe.
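To give a flavour of how simple that sharing could be, here is a rough sketch of the IIIF Image API URL pattern in Python – the server address and image identifier are invented placeholders, so this is purely illustrative:

```python
# Sketch of the IIIF Image API URL pattern:
#   {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
# The base URL and identifier below are hypothetical placeholders.
BASE = "https://iiif.example.org/iiif/image"    # hypothetical IIIF image server
IDENTIFIER = "ms-letters-1893-fol-42r"          # hypothetical image identifier

def iiif_url(identifier, region="full", size="max", rotation="0",
             quality="default", fmt="jpg"):
    """Compose a IIIF Image API request for a given region, size and rotation."""
    return f"{BASE}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# The whole image, as served by the holding institution:
print(iiif_url(IDENTIFIER))

# A half-scale crop of the top-left of the page, ready to drop into someone
# else's viewer or digital edition:
print(iiif_url(IDENTIFIER, region="0,0,1200,900", size="pct:50"))

# Technical metadata (dimensions, tile sizes and so on) is published alongside:
print(f"{BASE}/{IDENTIFIER}/info.json")
```

Because every IIIF-compliant server answers requests in this same shape, a project can in principle pull images straight from each holding institution rather than copying and re-hosting them.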
There are several reasons to digitise archives: from the archivist’s point of view, two stand out – we want to make our collections available to as many people as possible, and we very much appreciate having digital preservation copies of the originals. But I think everyone will agree that one of the greatest strengths of digitisation is that, done right, it gives you the ability to manipulate information in shiny new ways.
Bridgit Moynihan (University of Edinburgh) and Wim Van Mierlo (Loughborough University) made clear that metadata is key to this task, and also that putting a flat page scan online is no longer enough when so many researchers are interested in things like the physical materiality of an object.
This is particularly true when that object is one of Edwin Morgan’s scrapbooks, Bridgit’s area of interest, which are so cunningly put together that it can be impossible to tell whether an image is whole or a collage without being able to physically feel the edges of the cut-outs.
Beyond the purely physical, as Wim pointed out, is the temporal, which is especially relevant to a literary manuscript that has been heavily edited and amended. In that case, it’s important for the digitised version to demonstrate the way the manuscript developed over time.
How can this be done? With excellent metadata! The Text Encoding Initiative (TEI) was mentioned several times throughout the conference as the most widely used standard.
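To make that slightly more concrete, here is a small, entirely invented example of the kind of encoding TEI allows: a single revised sentence in which a deletion and an addition are each tagged with the hand that made them, so that the earlier and later states of the text can be pulled apart again (sketched in Python using only the standard library):

```python
# A toy TEI fragment recording one revision: <del> and <add> inside <subst>,
# each tagged with the hand responsible. The text and hand ids are invented.
import xml.etree.ElementTree as ET

TEI_NS = "{http://www.tei-c.org/ns/1.0}"
fragment = """
<p xmlns="http://www.tei-c.org/ns/1.0">
  The <subst>
    <del hand="#first_draft">grey</del>
    <add hand="#later_revision">silver</add>
  </subst> river ran on.
</p>
"""

root = ET.fromstring(fragment)

def render(elem, keep):
    """Flatten the text, keeping either deletions ('del') or additions ('add')."""
    text = elem.text or ""
    for child in elem:
        tag = child.tag.replace(TEI_NS, "")
        if tag in ("del", "add") and tag != keep:
            text += child.tail or ""   # skip this layer, keep what follows it
            continue
        text += render(child, keep) + (child.tail or "")
    return text

print("early state:", " ".join(render(root, keep="del").split()))
print("later state:", " ".join(render(root, keep="add").split()))
```

Multiply that by every crossing-out in a heavily worked manuscript and you can see how an encoded version could let a researcher step through a document’s development over time.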
Bridgit listed some other creative possibilities for metadata, like using sound to represent the different aural qualities of news clippings versus magazine clippings, or using page topographies to help visualise the joins between clippings – or even tools which will allow people to explore a digital image haptically. She mentioned Alice Thudt’s Bohemian Bookshelf as an inspiration for both artistry and discoverability in a digital collection, a fun project that allows you to sort books by the colour of their covers and the number of pages as well as the slightly more traditional date of publication.
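As a very rough illustration of that kind of playful discoverability – with invented records, and skipping the real work of extracting a dominant colour from each cover image – reordering a collection by cover colour or page count only takes a few lines:

```python
# Toy records in the spirit of the Bohemian Bookshelf: each book carries a page
# count and a (pre-extracted, invented) dominant cover colour as RGB in 0-1.
import colorsys

books = [
    {"title": "Book A", "pages": 212, "cover_rgb": (0.80, 0.10, 0.10)},  # reddish
    {"title": "Book B", "pages": 540, "cover_rgb": (0.10, 0.30, 0.80)},  # blueish
    {"title": "Book C", "pages": 96,  "cover_rgb": (0.95, 0.85, 0.20)},  # yellowish
]

def hue(rgb):
    """Hue of an RGB colour (0-1), a handy sort key for a colour-wheel view."""
    return colorsys.rgb_to_hsv(*rgb)[0]

print("By cover colour:", [b["title"] for b in sorted(books, key=lambda b: hue(b["cover_rgb"]))])
print("By page count:  ", [b["title"] for b in sorted(books, key=lambda b: b["pages"])])
```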
Marion Thain (New York University) had more ideas, this time about the diaries of Michael Field. The diaries are clearly written in the different hands of the two women who published poetry under the name ‘Michael Field’. Metadata could be used to tag each hand separately, which would not only allow researchers to pull the diary entries apart and read the two women’s stories individually, but would also make it possible to run a stylistic analysis on each author and figure out who contributed what to Michael Field’s poems.
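Here is a toy sketch of what that separation and comparison might look like once each entry is tagged with the hand that wrote it – the snippets of text are invented stand-ins for real transcriptions, and the stylistic features are deliberately crude:

```python
# Group transcribed entries by hand and compute a rough stylistic profile for
# each writer. Texts are invented; the hands stand for Katharine Bradley and
# Edith Cooper, the two women behind 'Michael Field'.
from collections import Counter
import re

entries = [
    ("bradley", "We walked by the river and spoke of the new poems."),
    ("cooper",  "A long, grey afternoon; I copied out the sonnet twice."),
    ("bradley", "The proofs arrived and we quarrelled happily over commas."),
]

FUNCTION_WORDS = ["the", "and", "of", "we", "i", "a"]

def profile(text):
    """Average word length plus relative frequency of a few function words."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    stats = {"avg_word_len": round(sum(map(len, words)) / len(words), 2)}
    for fw in FUNCTION_WORDS:
        stats[fw] = round(counts[fw] / len(words), 3)
    return stats

by_hand = {}
for hand, text in entries:
    by_hand.setdefault(hand, []).append(text)

for hand, texts in sorted(by_hand.items()):
    print(hand, profile(" ".join(texts)))
```

A real stylometric study would use far more robust features and far more text, but the principle – tag the hands once, then compare them computationally – is the same.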
Francis O’Gorman of Leeds University suggested another metadata trick for John Ruskin’s diaries, this time tagging the entries against a map so that you could follow Ruskin in his travels day by day.
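Again, a tiny sketch of the idea, with placeholder dates, places and coordinates rather than anything from Ruskin’s actual diaries:

```python
# Diary entries tagged with a date, a place name and coordinates; sorting by
# date recovers the itinerary, and the points could be exported (e.g. as
# GeoJSON) for plotting on a web map. All values below are invented.
from datetime import date

entries = [
    {"date": date(1849, 5, 6), "place": "Chamonix", "lat": 45.924, "lon": 6.869},
    {"date": date(1849, 5, 1), "place": "Dijon",    "lat": 47.322, "lon": 5.041},
    {"date": date(1849, 5, 2), "place": "Geneva",   "lat": 46.204, "lon": 6.143},
]

for entry in sorted(entries, key=lambda e: e["date"]):
    print(entry["date"].isoformat(), entry["place"], (entry["lat"], entry["lon"]))
```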
Guy Baxter, University Archivist at Reading, also discussed metadata, pointing out that current archival description very practically sacrifices diversity for clarity, and doesn’t (can’t) list every possible aspect or potential area of interest of a manuscript or archive. As archivists, we can’t always know what’s important about an archive when it comes to classifying and describing it – especially as what’s considered important about an archive can change radically over time! Asking our users to keyword-tag our catalogues could help with this, and so could revisiting our old descriptions when time and funding permit.
But we also need to be conscious of the “intangibles”, the gaps – butterflies, as Guy called them – and to do our best to think about how to capture what an archive isn’t documenting. He gave the example of a theatre performance archive which might include costumes and theatre programmes, or even video of shows, but which can’t ever give you a real sense of the essentially ephemeral performance itself. He made it clear that all archives have this intangible quality.
And another intangible – electronic archives, particularly those of authors who may be tempted to delete early drafts (or just lose them when they buy a new computer!), whereas authors of old might have found it just as easy to chuck a draft in a box and keep it. Guy warned that we may increasingly have to rely on publishers to provide early drafts of an author’s works.
Another note of caution came from Bridgit Moynihan and Sandra Tuppen, who brought up a major practical constraint on digitisation: copyright law. Specifically, the UK law that restricts the copying of unpublished works until 2039. Until that magical date, all the digitisation technology in the world can’t make the vast majority of our paper archives as accessible as researchers would like. A bit of a downer to end on, perhaps, but it’s clear that although creative efforts to amalgamate, arrange and preserve digital archives are ongoing, the expense of digitisation and the exigency of copyright law mean that the physical dispersal of so many paper archives is going to be a problem for a long time to come.