All posts by Marjolein Platjee

Web Archiving & Preservation Working Group: Social Media & Complex Content

On January 16, 2020, I had the pleasure of attending the first public meeting of the Digital Preservation Coalition’s Web Archiving and Preservation Working Group. The meeting was held in the beautiful New Records House in Edinburgh.

We were welcomed by Sara Day Thomson, who in her opening talk gave us a very clear overview of the issues and questions we increasingly run into when archiving complex or dynamic web and social media content. For example, how do we preserve apps like Pokémon Go that use a user’s location data or even personal information to individualize the experience? Or where do we draw the line in interactive social media conversations? After all, we cannot capture everything. But how do we even capture this information without infringing the rights of the original creators? These and other musings set the stage perfectly for the rest of the talks during the day.

Although I would love to include every talk held this day, as they were all very interesting, I will only highlight a couple of the presentations to give this blog some pretence of “brevity”.

The first talk I want to highlight was given by Giulia Rossi, Curator of Digital Publications at the British Library, on “Overview of Collecting Approach to Complex Publications”. Rossi introduced us to the Emerging Formats project, a two-year project by the British Library. The project focusses on three types of content:

  1. Web-based interactive narratives, where the user’s interaction with a browser-based environment determines how the narrative evolves;
  2. Books as mobile apps (a.k.a. literary apps);
  3. Structured data.

Personally, I found Rossi’s discussion of the collection methods particularly interesting. The team working on the Emerging Formats project does not just use Heritrix and other web-harvesting tools, but also file transfers and direct downloads via access code and password. Most strikingly, in the event that only a partial capture can be made, they try to capture as much contextual information about the digital object as possible, including blog posts, screenshots and videos of walkthroughs, so researchers will have a good idea of what the original content would have looked like.

The capture of contextual content and the inclusion of additional contextual metadata about web content is currently not standard practice; many tools do not even allow for their inclusion. However, considering that many web-harvesting tools struggle to capture dynamic and complex content, this could offer an interesting workaround for most web archives. It is definitely an option that I would like to explore going forward.

The second talk that I would like to zoom in on is “Collecting internet art” by Karin de Wild, digital fellow at the University of Leicester. Taking Agent Ruby, a chatbot created by Lynn Hershman Leeson, as her example, de Wild explored how we determine which aspects of internet art need to be preserved and what challenges this poses. In the case of Agent Ruby, the San Francisco Museum of Modern Art initially exhibited the chatbot as a software installation within the museum, thereby taking the artwork out of its original context. They then proceeded to add it to their online Expedition e-space, which has since been taken offline. Only a screenshot of the online artwork is currently accessible through the SFMOMA website, as the museum prioritizes the preservation of the interface over the chat functionality.

This decision raises questions about the right ways to preserve online art. Does the interface indeed suffice, or should we attempt to maintain the integrity of the artwork by saving the code as well? And if we do that, should we employ code restitution, which aims to preserve the original artwork’s code, or a significant part of it, whilst adding restoration code to bring defunct code back to full functionality? Or do we emulate the software, as the University of Freiburg is currently exploring? And how do we keep track of the provenance of the artwork whilst taking into account the different iterations that digital artworks go through?

De Wild proposed turning to linked data as a way to keep track of the provenance of an artwork in particular. Together with two colleagues, she has been working on a project called Rhizome, in which they are creating a data model that will allow people to track the provenance of internet art.
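
To make this a little more concrete: provenance of this kind is often expressed with the W3C PROV ontology, in which successive iterations of a work are entities linked by derivation and generation relations. The sketch below, written in Python with the rdflib library, is purely illustrative; the URIs, labels and modelling choices are my own assumptions, not the actual data model from de Wild’s project.

```python
# Illustrative sketch: modelling two iterations of a net artwork with
# PROV-O via rdflib. All URIs and labels are hypothetical examples.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import PROV, RDF, RDFS

EX = Namespace("http://example.org/artworks/")  # hypothetical namespace

g = Graph()
g.bind("prov", PROV)
g.bind("ex", EX)

# Two iterations of the same artwork, modelled as PROV entities
original = EX["agent-ruby/v1"]
migrated = EX["agent-ruby/v2"]
for entity in (original, migrated):
    g.add((entity, RDF.type, PROV.Entity))
g.add((original, RDFS.label, Literal("Agent Ruby, original web version")))
g.add((migrated, RDFS.label, Literal("Agent Ruby, museum web presentation")))

# The later iteration derives from the earlier one...
g.add((migrated, PROV.wasDerivedFrom, original))

# ...via a migration activity attributed to the holding institution
migration = EX["agent-ruby/migration"]
museum = EX["museum"]
g.add((migration, RDF.type, PROV.Activity))
g.add((museum, RDF.type, PROV.Agent))
g.add((migrated, PROV.wasGeneratedBy, migration))
g.add((migration, PROV.wasAssociatedWith, museum))

print(g.serialize(format="turtle"))
```

Even a toy graph like this shows the appeal of the approach: each iteration of the work remains addressable, and the chain of derivations can be queried rather than reconstructed from prose.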

Although this is not within the scope of the Rhizome project, it would be interesting to see how the finished data model would lend itself to tracking changes in the look and feel of regular websites as well. The layouts of websites have changed radically over the years, yet these changes are usually not documented in metadata or data models, even though they can be as much a reflection of social and cultural change as the content of the websites themselves. Going forward, it will be interesting to see how changes in the archiving of online artworks influence the preservation of online content in general.

The final presentation I would like to draw attention to is “Twitter Data for Social Science Research” by Luke Sloan, deputy director of the Social Data Science Lab at Cardiff University. He gave us a demo of COSMOS, an alternative to the Twitter API, which is freely available to academic institutions and not-for-profit organisations.

COSMOS allows you to either target a particular Twitter feed or enter a search term to obtain a 1% sample of the total worldwide Twitter feed. The gathered data can be analysed within the system and is stored in JSON format. The information can subsequently be exported to CSV or Excel format.
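
That last step, flattening stored JSON into a tabular export, is easy to picture. The snippet below is a minimal sketch of the idea, not COSMOS’s actual code, and the field names are assumptions based on the classic Twitter JSON layout rather than COSMOS’s internal schema.

```python
# Minimal sketch: flatten line-delimited tweet JSON into a CSV export.
# Field names follow the classic Twitter JSON layout (an assumption).
import csv
import json

with open("tweets.json", encoding="utf-8") as src:
    tweets = [json.loads(line) for line in src]  # one JSON object per line

with open("tweets.csv", "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst)
    writer.writerow(["id", "created_at", "user", "text"])
    for t in tweets:
        writer.writerow([t["id_str"], t["created_at"],
                         t["user"]["screen_name"], t["text"]])
```

Anything beyond such flat fields (retweet chains, nested user objects) is where the JSON original earns its keep, which is presumably why COSMOS stores JSON internally and treats CSV and Excel as export formats.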

Although the system is only able to capture new (or live) Twitter data, it is possible to upload historical Twitter data into the system if an archive has access to it.

Having given us an explanation of how COSMOS works, Sloan asked us to consider the potential risks that archiving and sharing Twitter data could pose to the original creator. Should we not protect these creators by anonymizing their tweets to a certain extent? If so, what data should we keep? Do we only record the tweet ID and the location? Or would even this make it too easy to identify the creator?
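
One possible answer, sketched below purely as an illustration, is to reduce each stored tweet to a whitelist of fields before sharing. Which fields belong on that list is exactly the open question Sloan raised; the choice here is an assumption for the example’s sake.

```python
# Hypothetical illustration: keep only a whitelist of tweet fields.
# Whether even ID plus location is safe to share is the open question.
KEEP = {"id_str", "created_at", "geo"}  # assumed "low-risk" fields

def reduce_tweet(tweet: dict) -> dict:
    """Return a copy of the tweet containing only whitelisted fields."""
    return {key: value for key, value in tweet.items() if key in KEEP}
```

Even a reduced record like this may not be safe: a timestamp plus a location can narrow a tweet down to a single person, which is exactly the concern raised in the discussion.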

The last part of Sloan’s presentation tied in really well with the discussion about ethical approaches to archiving social media. During this discussion we were prompted to consider ways in which archives could archive Twitter data whilst being conscious of the potential risks to the original creators of the tweets. This definitely got me thinking about the way we currently archive some of the Twitter accounts related to the Bodleian Libraries in our very own Bodleian Libraries Web Archive.

All in all, the DPC event definitely gave me more than enough food for thought about the ways in which the Bodleian Libraries and the wider community can improve how we capture (meta)data related to the online content that we archive, and about the ethical responsibilities that we have towards the creators of said content.

Because Digital Objects can Decay too: Conducting a Proof of Concept for Archivematica

Like other archives, the Bodleian Libraries has been searching for ways to optimize the preservation of our digital collections. The need to find a solution has become increasingly pressing as the Bodleian Electronic Archives and Manuscripts (BEAM), our digital repository service for the management of born-digital archives and manuscripts acquired by the Special Collections, now contains roughly 13TB of digital objects, with much more waiting in the wings.

In order to help us manage the ingest of digital objects within our collections, the Bodleian Libraries undertook an options review as part of its DPOC project. This led to a decision to conduct a proof of concept of Archivematica. The proof of concept included the installation of QA and DEV environments with the help of Artefactual, followed by an extensive testing period and a gap analysis.

In November 2018 we started testing the system to establish whether or not Archivematica met our acceptance criteria. We mainly focussed on three areas:

  1. Overall performance/functionality: Is the system user-friendly? Can it successfully process all the different file types and sizes that we have in our collection?
  2. Metadata: Can Archivematica extract the metadata from the Excel sheets that we have created over time? What technical metadata does Archivematica automatically extract from ingested files?
  3. File extraction and normalization: Are disk images extracted properly? Is the content of a transfer normalized to the right file type?

Whilst testing, we also reached out to and visited other organisations that had already implemented Archivematica, including the International Institute of Social History in Amsterdam, the University of Edinburgh, the National Library of Wales and the Wellcome Trust.

Based on the outcomes of the tests we conducted, and the conversations we had with other institutions, we identified five gap areas:

  1. Performance: The Archivematica instance we configured for the proof of concept struggled with transfers over 200GB or transfers containing more than 5,000 files.
  2. Error reporting: It was often unclear what a particular error code or message meant. The error logs used by system administrators are also verbose, making it hard to pinpoint the source of an error.
  3. Metadata: Here we identified two gaps. Firstly, there is the verbosity of the metadata: because Archivematica records individual PREMIS events for each digital file, the resulting METS file becomes unwieldy, compromising the system’s performance. Secondly, we require a workflow to migrate our legacy pre-ingest capture metadata and file-level metadata, currently held in spreadsheets, into Archivematica, and to keep including this pre-ingest metadata, which will continue to be recorded in spreadsheet form for the foreseeable future, in future ingests (see the sketch after this list).
  4. User/access management: Archivematica does not offer a way to manage access to collections or Archival Information Packages, and it allows all users to alter the system workflow. We are a multi-user organisation and wish to have tighter controls on access to collections and workflow configurations.
  5. General reporting: Archivematica currently does not offer many reports to monitor progress, content and growth of collections.
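
For the third gap, one way to picture the missing workflow is a small script that maps a legacy spreadsheet onto the metadata.csv file that Archivematica can pick up from a transfer’s metadata folder. The sketch below is only that, a sketch: the legacy column names are invented stand-ins for our actual spreadsheets, and the Dublin Core columns are just one example of the mapping.

```python
# Rough sketch: map a legacy pre-ingest spreadsheet onto the
# metadata.csv file read from an Archivematica transfer's metadata
# folder. The legacy column names are hypothetical stand-ins.
import csv

import openpyxl  # third-party library for reading .xlsx workbooks

wb = openpyxl.load_workbook("legacy_capture_metadata.xlsx")
rows = wb.active.iter_rows(values_only=True)
header = {name: idx for idx, name in enumerate(next(rows))}

with open("metadata.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["filename", "dc.title", "dc.date"])
    for row in rows:
        writer.writerow([
            "objects/" + row[header["File path"]],  # path within the transfer
            row[header["Title"]],
            row[header["Date"]],
        ])
```

In practice the mapping would need per-collection configuration, which is precisely why we consider this a gap rather than a quick fix.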

Once we had identified these gaps, we held an intensive two-day workshop with Artefactual to pinpoint possible solutions, which we subsequently presented to the wider Archivematica community during the Archivematica Camp in London in July 2019.

We will use all the input gathered from the proof of concept to inform our initial implementation of Archivematica, which will begin in January 2020. The project will focus on the performance and metadata gaps identified during the proof of concept, allowing us to bring Archivematica into production use in 2021. We are keen to work with the Archivematica community, so do get in touch at beam@bodleian.ox.ac.uk if you’re interested in finding out more about our work.

Sixth British Library Labs Symposium

On Monday, November 12, 2018, I was fortunate enough to attend the annual British Library Labs Symposium. At the symposium, the British Library showcases the projects it has been working on for its digital collections and presents awards to those who have either contributed to those projects or used the digital collections to create their own.

According to Adam Farquhar, Head of Digital Scholarship at the British Library, this year’s symposium was their biggest and best attended yet: a testament to the growing importance of digitization, as well as digital preservation and curation, within both archives and libraries.

This year’s theme of 3D models and scanning was wonderfully introduced by Daniel Pett, Head of Digital and IT at the Fitzwilliam Museum in Cambridge, in his keynote lecture on ‘The Value, Impact and Importance of experimenting with Cultural Heritage Digital Collections’. He explained how, during his time at the British Museum, he and his colleagues began to experiment with the creation of digital 3D models. This eventually led to the purchase of a rig with multiple cameras, allowing them to take better-quality photos in less time. At the Fitzwilliam, Pett has continued to advocate the development of 3D imaging. The museum now even offers free 3D imaging workshops, open to anyone in possession of a laptop and any device with a camera (including a smartphone).

Although Pett shared many of his other successful projects with us, he also emphasized that much of digitization is about trial and error, and stressed the importance of recording those errors. Unfortunately, libraries and archives alike are prone to celebrate their successes but cover up their errors, even though we may learn just as much from the latter. Pett called upon all attendees to share their errors more frequently, so we may learn from each other.

During the break I wandered into a separate room where individuals and companies showcased the projects they had developed around the British Library’s digital collections. A lucky few managed to lay their hands on a VR headset to experience Project Lume (a virtual data simulation program) and part of the exhibition by Nomad. The British Library itself showcased its own digitization services, including 360° spin photography and 3D imaging. The latter led to some interesting discussions about the de- and re-contextualization of artworks when using 3D imaging technology.

In the midst of all this there was one stand that did not lure spectators with fancy technology or gadgets. Instead, Jonah Coman, winner of the BL Labs Teaching & Learning Award, showcased the small zines that he created. The format of these Pocket Miscellany zines, as they are called, is inspired by small medieval manuscripts, and they are intended to inform their readers about marginalized bodies, disability and queerness in medieval literature. Due to copyright issues these zines are not available for purchase, but they can be found on Coman’s Patreon website.

The BL Labs Symposium also showed how the digital collections of the British Library can inspire both art and fashion. Fashion designer Nabil Nayal, for example, who unfortunately could not accept his BL Labs Commercial Award in person, used the Elizabethan digital collections as inspiration for the collection he presented at the British Library during London Fashion Week.

Artist Richard Wright, on the other hand, looked to the library’s infrastructure for inspiration. This resulted in The Elastic System, a virtual mosaic of hundreds of British Library books that together make up a sketch of Thomas Watts. When you zoom in on the mosaic you can browse the books in detail and even order them through a link to the BL’s catalogue that is integrated into the picture. Once a book is checked out, it reveals pictures of BL employees working in the stacks to collect the books, thereby slowly revealing a part of the library that is usually hidden from view.

Another fascinating talk was given by artist Michael Takeo Magruder about his exhibition Imaginary Cities, which will be staged in the British Library’s entrance hall from 5 April to 14 July 2019. Magruder is using the library’s 19th- and early 20th-century map collections to create new and ever-changing maps and simulations of virtual, fantastical cities. Try as I might, I fear I cannot do justice to Magruder’s unique and intriguing artwork with words alone, and can therefore only urge you to visit the exhibition this coming year.

These are only a few of the wonderful talks given during the symposium. The BL Labs Symposium was a real eye-opener for me. I did not realize just how quickly the field of 3D imaging had developed within the museum and library world. Nor did I realize how digital collections could be used not simply to inspire artworks, but to create them.

Yet one of the things that struck me most is how much the development of, and advocacy for, the use of digital collections within archives and libraries is spurred on by passionate individuals; be they artists who use digital collections to inspire their work, digital and IT specialists willing to sacrifice a lunch break or two for the sake of progress, or individual scholars who create little zines to spread awareness about a topic they feel passionate about. Imagine what they can do if initiatives like BL Labs continue to bring such people together. I, for one, cannot wait to see what the future of digital collections and scholarship holds. On to next year’s symposium.