Category Archives: Event

Archives Unleashed – Vancouver Datathon

On the 1st and 2nd of November 2018 I was lucky enough to attend the Archives Unleashed Datathon Vancouver, co-hosted by the Archives Unleashed Team and Simon Fraser University Library along with KEY (SFU Big Data Initiative). I am very grateful for the generous travel grant from the Andrew W. Mellon Foundation that made this possible.

The SFU campus at the Harbour Centre was an amazing venue for the Datathon, and it was nice to be able to take in some views of the surrounding mountains.

https://twitter.com/ianmilligan1/status/1058403942554464256

About the Archives Unleashed Project

The Archives Unleashed Project is a three-year project focused on making historical internet content easily accessible to scholars and researchers whose interests lie in exploring and researching both the recent past and contemporary history.

After a series of datathons held at a number of international institutions, including the British Library, the University of Toronto, the Library of Congress and the Internet Archive, the Archives Unleashed Team identified some key areas of development that would help deliver their aim of making petabytes of valuable web content accessible.

Key Areas of Development
  • Better analytics tools
  • Community infrastructure
  • Accessible web archival interfaces

By engaging and building a community, alongside developing web archive search and data analysis tools, the project is successfully enabling a wide range of people – including scholars, programmers, archivists and librarians – to “access, share and investigate recent history since the early days of the World Wide Web.”

The project has a three-pronged approach:
  1. Build a software toolkit (Archives Unleashed Toolkit)
  2. Deploy the toolkit in a cloud-based environment (Archives Unleashed Cloud)
  3. Build a cohesive user community that is sustainable and inclusive by bringing together the project team members with archivists, librarians and researchers (Datathons)

Archives Unleashed Toolkit

The Archives Unleashed Toolkit (AUT) is an open-source platform for analysing web archives with Apache Spark. I was really impressed by AUT because of its scalability, relative ease of use and the huge number of analytical options it provides. It can run on a laptop (macOS, Linux or Windows), on a powerful cluster or on a single-node server – and, if you wanted to, you could even run AUT on a Raspberry Pi. The Toolkit supports a number of search functions across the entirety of a web archive collection: you can filter collections by domain, URL pattern, date, language and more, or list the top domains and URLs in a collection. You can extract plain text from the HTML in ARC or WARC files and clean the data by removing ‘boilerplate’ content such as advertisements. It’s also possible to use the Stanford Named Entity Recognizer (NER) to extract named entities such as locations, organisations and persons.

I’m looking forward to seeing how this functionality is adapted to localised instances and controlled vocabularies – would it be possible to run a similar programme for automated tagging of web archive collections in the future? Maybe ingest a collection into AUT, run NER and automatically tag up the data, providing richer metadata for web archives and subsequent research.
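
AUT itself is typically driven from the Spark shell, but to give a flavour of the kind of analysis described above, here is a small, hypothetical Python sketch that tallies the top ten domains in a plain-text derivative. The file name and the assumed line layout of (crawl date, domain, URL, text) are illustrative assumptions rather than AUT’s documented output, so adjust the parsing to whatever your derivative actually looks like.

    # Hypothetical sketch: count the top ten domains in a plain-text derivative.
    # Assumes each line looks like "(crawl_date,domain,url,text...)" - adjust the
    # parsing to match the derivative you actually have.
    from collections import Counter

    def top_domains(derivative_path, n=10):
        counts = Counter()
        with open(derivative_path, encoding="utf-8", errors="ignore") as f:
            for line in f:
                parts = line.lstrip("(").split(",", 3)  # date, domain, url, text
                if len(parts) == 4:
                    counts[parts[1].strip()] += 1
        return counts.most_common(n)

    if __name__ == "__main__":
        for domain, count in top_domains("fulltext-derivative.txt"):
            print(f"{domain}\t{count}")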

Archives Unleashed Cloud

The Archives Unleashed Cloud (AUK) is a GUI-based front end for working with AUT; it essentially provides an accessible interface for generating research derivatives from web archive files (WARCs). With a few clicks users can ingest and sync Archive-It collections, analyse them, create network graphs and visualise connections and nodes. It is currently free to use and runs on AUK’s central servers.

My experience at the Vancouver Datathon

The datathons bring together a small group of 15-20 people of varied professional backgrounds and experience to work and experiment with the Archives Unleashed Toolkit and the Archives Unleashed Cloud. I really liked that the team chose to keep attendance small, because it created a close-knit working group full of collaboration, knowledge sharing and idea exchange. It was a relaxed, fun and friendly environment to work in.

Day One

After a quick coffee and light breakfast, the Datathon opened with introductory talks from project team members Ian Milligan (Principal Investigator), Nick Ruest (Co-Principal Investigator) and Samantha Fritz (Project Manager), relating to the project – its goals and outcomes, the toolkit, available datasets and event logistics.

After another quick coffee break it was back to work – participants were asked to think about the datasets that interested them, the techniques they might want to use and the questions or themes they would like to explore, and to write these on sticky notes.

Once the sticky notes were placed on the whiteboard, teams naturally formed around datasets, themes and questions. The team I was in – Kathleen Reed, Ben O’Brien and myself – came together around a common interest in exploring the First Nations and Indigenous communities dataset.

Virtual machines were kindly provided by Compute Canada and were available throughout the Datathon for running AUT; the datasets were preloaded onto these VMs and a number of derivative files had already been created. We spent some time brainstorming, sharing ideas and exploring datasets using a number of different tools. The day finished with some informative lightning talks about the work participants had been doing with web archives at their home institutions.

Day Two

On day two we continued to explore the datasets, using the full-text derivatives, running some NER and performing keyword searches with the command-line tool grep. We also analysed the text using sentiment analysis with the Natural Language Toolkit (NLTK). To help visualise the data, we took the new text files produced from the keyword searches and uploaded them into Voyant Tools, which visualised links between words, created a list of top terms and provided quantitative data such as how many times each word appears. It was here we found that the word ‘letter’ appeared quite frequently, and we settled on the dataset we would be using: University of British Columbia – bc-hydro-site-c.
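
The exact commands we ran aren’t reproduced in this post, but the shape of that workflow is simple enough to sketch. Below is a hypothetical Python example – the file name, the keyword and the choice of NLTK’s VADER analyser are all illustrative assumptions – that filters a full-text derivative for a keyword, grep-style, and scores each matching line for sentiment:

    # Hypothetical sketch of the day-two workflow: grep-style keyword filtering of
    # a full-text derivative, followed by sentiment scoring with NLTK's VADER
    # analyser. File name and keyword are illustrative only.
    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)

    KEYWORD = "letter"

    def keyword_lines(path, keyword):
        """Yield lines containing the keyword (case-insensitive), like `grep -i`."""
        with open(path, encoding="utf-8", errors="ignore") as f:
            for line in f:
                if keyword in line.lower():
                    yield line.strip()

    analyzer = SentimentIntensityAnalyzer()
    for line in keyword_lines("fulltext-derivative.txt", KEYWORD):
        scores = analyzer.polarity_scores(line)
        print(f"{scores['compound']:+.3f}\t{line[:80]}")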

We hunted down the site and found it contained a number of letters from people about the BC Hydro Site C dam project. The problem was that the letters were in a table, and when extracted the data was not clean enough. Ben O’Brien came up with a clever extraction solution using the raw HTML files and some script magic (a rough sketch of the general approach is below). The data was then prepped for geocoding by Kathleen Reed to show the geographical spread of the letter writers, hot-spots and a timeline – a useful way of looking at the issue from the perspective of engagement and community.
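
Ben’s actual script isn’t included here, so the following is only a minimal Python sketch of the general idea: parse the raw HTML, walk the tables and write the cell text out to a CSV ready for geocoding. The file names, and the assumption that each table row holds a location cell followed by the letter text, are purely illustrative:

    # Hypothetical sketch of pulling letter text out of raw HTML tables.
    # The column layout (location in one cell, letter text in another) is an
    # assumption for illustration; the real pages may differ.
    import csv
    from bs4 import BeautifulSoup

    def extract_letters(html_path, out_csv):
        with open(html_path, encoding="utf-8", errors="ignore") as f:
            soup = BeautifulSoup(f, "html.parser")

        rows = []
        for table in soup.find_all("table"):
            for tr in table.find_all("tr"):
                cells = [td.get_text(" ", strip=True) for td in tr.find_all("td")]
                if len(cells) >= 2:          # e.g. [location, letter text, ...]
                    rows.append(cells[:2])

        with open(out_csv, "w", newline="", encoding="utf-8") as f:
            writer = csv.writer(f)
            writer.writerow(["location", "letter_text"])
            writer.writerows(rows)

    extract_letters("bc-hydro-site-c.html", "letters.csv")

The resulting CSV of locations is the kind of file that can then be fed to a geocoder to plot the spread of letter writers, as in the map and time lapse below.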

Map of letter writers.

Time Lapse of locations of letter writers. 

At the end of day 2 each team had a chance to present their project to the other teams. You can view the presentation (Exploring Letters of protest for the BC Hydro Dam Site C) we prepared here, as well as the other team projects.

Why Web Archives Matter

How we preserve, collect, share and exchange cultural information has changed dramatically. The act of remembering at national institutions and libraries has altered greatly in scope, speed and scale because of the web. The way in which we provide access to, use and engage with archival material has been disrupted. All current and future historians who want to study the period after the 1990s will have to use web archives as a resource. At the moment, accessibility and usability have lagged behind, and many students and historians are not ready. Projects like Archives Unleashed will help to equip researchers, historians, students and the wider community with the tools needed to tackle these problems. I look forward to seeing the next steps the project takes.

Archives Unleashed is currently accepting submissions for the next Datathon in March 2019 – I highly recommend it.

Attending the ARA Annual Conference 2018

ARA Annual Conference 2018, Grand Central Hotel, Glasgow

Having been awarded the Diversity Bursary for BME individuals, sponsored by Kevin J Bolton Ltd., I was able to attend the ARA Annual Conference 2018 held in Glasgow in August.

Capitalising on the host city’s existing ubiquitous branding of People Make Glasgow,  the Conference Committee set People Make Records as this year’s conference theme. This was then divided into three individual themes, one for each day of the conference:

  • People in Records
  • People Using Records
  • People Looking After Records

Examined through the lens of these themes over the course of three days, this year’s conference addressed three key areas within the sector: representation, diversity and engagement.

Following an introduction from Kevin Bolton (@kevjbolton), the conference kicked off with Professor Gus John (@Gus_John) delivering the opening keynote address, entitled “Choices of the Living and the Dead”. With People Make Records the theme for the day, Professor John gave a powerful talk discussing how people are shaping the records and recordkeeping of the African (and other) diasporas in the UK, enabling the airbrushing of the history of oppressed communities. Professor John noted that, yes, people make records, but we also determine what to record and what to do with it once it has been recorded.

Noting the ignorance surrounding racial prejudice and violence – citing the Notting Hill race riots, the Windrush generation and Stephen Lawrence as examples – Professor John illustrated how the commemoration of historical events is selective: while in 2018 the 50th anniversary of the Race Relations Act received much attention, the 500th anniversary of the start of the Transatlantic Slave Trade was largely ignored by the sector and the media alike. This culture of oppression, and of omission, he said, is leading to ignorance amongst young people about major defining events and stripping historically oppressed groups of their context.

In response to questions from the audience, Professor John noted that one of the problems facing the sector is the failure to interrogate the ‘business as usual’ climate, and that it may be ‘too difficult to consider what an alternative route might be’. Professor John challenged us to question the status quo: ‘Why is my curriculum white? Why isn’t my lecturer black? What does “de-colonising” the curriculum mean? This is what we must ask ourselves’.

Following Professor John’s keynote and his ultimate call to action, there was a palpable atmosphere of engagement amongst the delegates, with myself and those around me eager to spend the next three days learning from the experiences of others, listening to new perspectives and extracting guidance on the actions we may take to develop and improve our sector, in terms of representation, diversity and engagement.

Various issues relating to these areas were threaded throughout many of the presentations, and as a person of colour at the start of my career in this sector, and recipient of the Diversity Bursary, I was excited to hear more about the challenges facing marginalised communities in archives and records, including some I could relate to on a personal and professional level, and, hopefully, also take away some proposed solutions and recommendations.

I attended an excellent talk by Adele Patrick (@AdelePatrickGWL) of Glasgow Women’s Library, who discussed the place for feminism within the archive, noting GWL’s history of resistance and its insistence on plural representation when women’s work, past and present, is eclipsed. Dr Alan Butler (@AButlerArchive), Coordinator of the Plymouth LGBT Community Archive, discussed his experiences of trying to create a sense of community within a group that is inherently quite nebulous. Nevertheless, Butler illustrated the importance of capturing LGBTQIA+ history, as people today are increasingly removed from the struggles that previous generations had to overcome, echoing a similar point Professor Gus John made earlier.

A presentation which particularly resonated with me came from Kirsty Fife (@DIYarchivist) and Hannah Henthorn (@hanarchovist), on the issue of diversity in the workforce. Fife and Henthorn presented the findings from their research, including their survey of experiences of marginalisation in the UK archive sector, highlighting the structural barriers to diversifying the sector’s workforce. They identified several key themes experienced by marginalised people in the sector, including: feelings of isolation and otherness in both workplaces and universities; difficulties in gaining qualifications, perhaps due to ill health, disability, financial barriers or other commitments; feeling unsafe and underconfident in professional spaces; and frustration at the lack of diversity in leadership roles.

As a Graduate Trainee Digital Archivist, I couldn’t abandon my own focus on digital preservation and digital archiving, and so attended various digital-related talks. These included “Machines make records: the future of archival processing” by Jenny Bunn (@JnyBn1), which discussed the impact of taking a computational approach to archival processing; “Using digital preservation and access to build a sustainable future for your archive”, led by Ann Keen of Preservica, with presentations from various Preservica users; and a mini-workshop led by Sarah Higgins and William Kilbride on ethics in digital preservation, which asked us to consider whether we need our own code of conduct in digital preservation and what it could look like.

William Kilbride and Sarah Higgins running their workshop “Encoding ethics: professional practice digital preservation”, ARA Annual Conference 2018, Glasgow

I have only been able to touch on a very small amount of what I heard and learnt from the many and varied talks, presentations and workshops at the ARA conference. However, one thing I took away was the realisation that archivists and recordkeepers have the power to challenge structural inequalities, and must act now in order to become truly inclusive. As Michelle Caswell (@professorcaz), the second keynote speaker, said, we must act with sensitivity, acknowledge our privileges and, above all, empower rather than marginalise. This conference felt like a call to action to the archive and recordkeeping community to include the ‘hard to reach’ communities – or, as Adele Patrick put it, the ‘easy to ignore’. As William Kilbride (@WilliamKilbride) said, this is an exciting time to be in archives.

I want to thank Kevin Bolton for sponsoring the Diversity Bursary, which enabled me to attend an enriching, engaging and informative event, which otherwise would have been inaccessible for me.

________________________________________
Because every day is a school day, as homework for us all, I made a note of some of the recommendations made by speakers throughout the conference, compiled into this very brief list which I thought I would share:

Reading list

Celebrating the Life of Clement Attlee

Photograph of Clement Attlee, n.d. [MS. CRA. 99].

Join the Attlee Foundation and Bodleian Libraries on the 25th of October in the Weston Lecture Theatre to celebrate the life and legacy of Clement Attlee.

The event will commence with a lecture given by John Bew on the political thought of Clement Attlee. A Professor of History and Foreign Policy in the War Studies Department at King’s College London, John Bew is also the author of five books, including the award-winning biography Citizen Clem: A Life of Attlee (2016), which received the Orwell Prize for Political Writing and the Elizabeth Longford Prize for Historical Biography, and was named Best Book in the U.K.

A list by Clement Attlee of his “best appointments”, n.d. [post 1951] [MS. CRA. 10].

The lecture will be accompanied by a display of items from Clement Attlee’s personal archive. Covering the years 1945-1951, the display offers viewers a unique insight into the life and work of Attlee, celebrating his achievements in the personal, political and public arenas.

Booking Information:

This event is free but places are limited so please complete the booking form via our website  to reserve tickets in advance. All bookings are subject to a £1 booking fee.

Doors open at 6.15pm. The lecture begins at 6.30pm, and will be followed by a drinks reception.

Sir Oliver Wardrop’s desk diaries donated to the library

Audience members who attended the launch of Nikoloz Aleksidze’s book Georgia: a Cultural Journey through the Wardrop Collection  at the Weston Library on June 1st also had the novel experience of witnessing the arrival of a further addition to the Bodleian’s Wardrop  holdings. A family descendant of Sir Oliver, who was attending the launch, brought his desk diaries to donate to the collection. The Wardrop collection forms the nucleus of the Bodleian’s rich holdings of Georgian books and the donation of the desk diaries enriches this significant collection still further.

Dating from 1882-1948, the diaries provide details of Sir Oliver’s daily meetings and activities. They  will offer scholars an important glimpse into his day-to-day life, particularly during the critical period leading up to and immediately after the formation of the Democratic Republic of Georgia when he served as the British High Commissioner for Transcaucasia.

 

A life in letters: a tribute to Jenny Joseph

Miriam Margolyes

On Sunday 13th May the actress Miriam Margolyes will be in Oxford to perform a public reading of poems by Oxford alumna Jenny Joseph, the author of Warning:

‘When I am an old woman I shall wear purple
With a red hat which doesn’t go, and doesn’t suit me’

The event, hosted by the Bodleian and St Hilda’s College, celebrates the life and work of Jenny Joseph, who died this January, and will include a selection of poetry from across her more than 50-year writing career. She donated her literary archive to the Bodleian in 2017.

The reading will be at the beautiful, seventeenth-century Convocation House in the Old Bodleian Library from 11.30am to 1.00pm. Tickets cost £12 (£10 concessions), including tea/coffee and a pastry. You can book tickets online at What’s On, or phone the box office on 01865 278112 (there is a £2 booking fee for phone bookings).

Please note that tickets will not be available on the door.

DPC Email Preservation: How Hard Can It Be? Part 2

Source: https://lu2cspjiis-flywheel.netdna-ssl.com/wp-content/uploads/2015/09/email-marketing.jpg

In July last year my colleague Miten and I attended a DPC briefing day titled Email Preservation: How Hard Can It Be?, which introduced me to the work of the Task Force on Technical Approaches to Email Archives, and we were lucky enough to attend the second session last week.

Arranging a second session gave Chris Prom (@chrisprom), University of Illinois at Urbana-Champaign, and Kate Murray (@fileformatology), Library of Congress, co-chairs of the Task Force, the opportunity to reflect on the issues raised at the first session and incorporate them into the Task Force report, and to give attendees an update on their overall progress, in anticipation of the final report scheduled for publication in April.

“Using Email Archives in Research”

The first guest presentation was given by Dr. James Baker (@j_w_baker), University of Sussex, who was inspired to write about the use of email archives within research by two key texts: Born-digital archives at the Wellcome Library: appraisal and sensitivity review of two hard drives (2016), an article by Victoria Sloyan, and Dust (2001), a book by Carolyn Steedman.

These texts led Dr. Baker to think of what he called the “imagination of the archive” – the mystique of archival research, stemming from the imagery of 19th-century research processes. He expanded on this idea, stating that the “physically and ontologically unique” manuscript “is no longer what we imagine to be an archive”.

However, despite this new platform for research, Dr. Baker stated that very few people outside the archives profession know that born-digital archives exist, let alone use them. This is an issue: archives require evidence of use, so we need to encourage use.

To address this, Dr. Baker set up a Born-Digital Access Workshop at the Wellcome Library, in collaboration with their Collections Information Team, where he gathered people who use born-digital archives and the archivists who make them, and provided them with a set of four varying case studies. These case studies were designed to explore the following:

A) the “original” environment; hard drive files in a Windows OS
B) the viewing experience; using the Wellcome’s Viewer
C) levels of curation; comparing reformatted and renamed collections with unaltered ones
D) the physical media; asking does the media hold value?

Several interesting observations came out of this workshop, which Dr. Baker organised into three areas:

  1. Levels of description; filenames are important, and are valuable data in themselves to researchers. Users need a balance between curation and an authentic representation of the original order.
  2. “Bog-standard” laptop as access point; using modern technology that is already used by many researchers as the mode of access to email and digital archives creates a sense of familiarity when engaging with the content.
  3. Getting the researcher from desk to archive; there is a substantial amount of work needed to make researchers aware of the resources available to them and how to use them – can they access them remotely? How much collection-level description is necessary?

Dr. Baker concluded that even with outreach and awareness events such as the one we were attending, born-digital archives are not yet truly accessible to researchers. This made me realise that the digital preservation community must push for access solutions and get them out to users, so that researchers can gain the insights our digital collections have to offer.

“Email as a Corporate Record”

The third presentation of the day was given by James Lappin (@JamesLappin), Loughborough University, who discussed the issues involved in applying archival policies to emails in a governmental context.

His main point concerned the routine deletion of email that happens in governments around the world. He said that no civil servants’ email accounts are scheduled to be saved beyond the next 3-4 years – but they may be available via a different structure, a kind of records management system. However, Lappin pointed out the crux of this scenario: government departments have no budget to move and save many individuals’ email accounts, and no real idea of the numbers: how much to save, and how much can be saved?

“email is the record of our age” – James Lappin

Lappin suggested an alternative: keep the emails of senior staff only. However, this raises the question: how do we filter out sensitive and personal content?

Lappin posits that auto-deletion is the solution, aiming to spare institutions from unmanageable volumes of email and the consequent breaches of data protection.
Auto-deletion encourages:

  •  governments to kickstart email preservation action,
  • the integration of tech for records management solutions,
  • actively considering the value of emails for long-term preservation

But how do we transfer emails to an EDRMS, what structures do we use, how do we separate individuals, and how do we enforce the transfer of emails? These issues are still to be worked out, and can be, Lappin argues, if we implement auto-deletion as a tool to make email preservation less daunting. At the end of the day, the current goal is to retain the “important” emails, which will make both government departments and historians happy – and that, in turn, makes archivists happy. This does indeed seem like a positive scenario for us all!

However, it was particularly interesting when Lappin made his next point: what if the very nature of email, as intimate and immediate, makes governments uncomfortable with the idea of saving and preserving governmental correspondence? Governments must therefore be more active in their selection processes and save something rather than nothing – which is where the implementation of auto-deletion could, again, prove useful!

To conclude, Lappin presented a list of characteristics which could justify the preservation of an individual’s government email account, including:

  • The role they play is of historic interest
  • They expect their account to be permanently preserved
  • They are given the chance to flag or remove personal correspondence
  • Access to personal correspondence is prevented except in case of overriding legal need

Personally, I feel this is fair and thorough, but only time will tell what route various governments take.

On a side note: Lappin runs an excellent comic-based blog on Records Management which you can see here.

Conclusions

One of the key issues that stood out for me today was, perhaps surprisingly, not the technology used in email preservation, but how to address the myriad issues email preservation brings to light – namely the feasibility of data protection, sensitivity review and appraisal, which are particularly challenging when dealing with such vast quantities of material.

Email can only be preserved once we have defined what constitutes ’email’ and how to proceed ethically, morally and legally. Then we can move forward with implementing the technical frameworks, designed to meet those pre-defined requirements, that will enable access to historically valuable, information-rich email archives with much to yield for research.

In the tweet below, Evil Archivist succinctly reminds us of the importance of maintaining and managing our digital records…

https://twitter.com/EvilArchivist/status/918098173423423488

Email Preservation: How Hard Can It Be? 2 – DPC Briefing Day

On Wednesday 23rd of January I attended the Digital Preservation Coalition briefing day titled ‘Email Preservation: How Hard Can It Be? 2’ with my colleague Iram. As I had attended the first briefing day back in July 2017, it was a great opportunity to see what advances and changes had been achieved. This blog post will briefly highlight what I found particularly thought-provoking and focus on two of the talks about e-discovery from a lawyer’s viewpoint.

The day began with an introduction by the co-chair of the report, Chris Prom (@chrisprom), informing us of the work the task force had been doing. This was followed by a variety of talks about the use of email archives and some of the technologies used for large-scale processing, from the perspectives of researchers and lawyers. The day concluded with a panel discussion (for a twist, we the audience were the panel) about the pending report and the next steps.

Update on Task Force on Technical Approaches to Email Archives Report

Chris Prom told us how the report had taken on board the comments from the previous briefing day and from consultation with many other people and organisations, leading to clearer and more concise messages. The report itself does not aim to provide hard rules but to give an overview of the current situation, with recommendations for people and organisations that are involved with, interested in, or considering email preservation.

Reconstruction of Narrative in e-Discovery Investigations and The Future of Email Archiving: Four Propositions

Simon Attfield (Middlesex University) and Larry Chapin (attorney) spoke about narrative and e-discovery. It was a fascinating insight into a lawyer’s requirements for the use of email archives. Larry used the LIBOR scandal as an example of a project he worked on and of the power of emails in bringing people to justice. For him, the importance of e-discovery lies in helping to create a narrative and tell a story – something a computer cannot currently do. Emails ‘capture the stuff of story making’, as they have the ability to reach into the crevasses of things and detail the small. He noted how emails contain slang and, interestingly, the language of intention and desire. These subtleties show the true meaning of what people are saying, and that is important in the quest for the truth. Simon Attfield presented his research on the coding side, which aims to help lawyers assess and sort through these vast data sets. The work he described was too technical for me to truly understand; however, it was clear that collaboration between archivists, users and programmers/researchers will be vital for better preservation and use strategies.

Jason Baron (@JasonRBaron1) (attorney) gave a talk on the future of email archiving detailing four propositions.

Slide detailing the four propositions for the future of email archives. By Jason R Baron 2018

The general conclusion from this talk was that automation and technology will play an even bigger part in the future, helping with acquisition, review (filtering out sensitive material) and searching (aiding access to larger collections). As one of the leads of the Capstone project, he told us how that particular project saves all emails for a short time and some forever, removing the misconception that all emails are going to be saved forever. Analysis of how successful Capstone has been in improving the signal-to-noise ratio (so that only email records of permanent value are captured) will be important going forward.

The problem of scale, which permeates most aspects of digital preservation, arose here again. Lawyers must review any and all information, which in the case of email accounts can be colossal. The analogy given was finding a needle in a haystack – except that lawyers need to find ALL the needles (100% recall).

Current predictive coding for discovery requires human assistance. Users have to tell the program whether the recommendations it produced were correct; the program learns from this process and hopefully becomes more accurate. Whilst a program can efficiently and effectively sort personal information such as telephone numbers, dates of birth and so on, it cannot currently sort textual content that requires prior knowledge, or non-textual content such as images.
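
To make that feedback loop concrete, here is a toy, hypothetical Python sketch (using scikit-learn as my own illustration – it is not the software any of the speakers described) of how predictive coding learns from a reviewer’s corrections: the model ranks unreviewed documents, a human labels the top candidate, and the model is retrained on the growing set of reviewed examples.

    # Toy illustration (not any real e-discovery product) of the feedback loop
    # behind predictive coding: a model proposes the most likely relevant document,
    # a human reviews it, and the model is retrained on the reviewed examples.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # A tiny, made-up corpus standing in for an email account under review.
    documents = [
        "please find attached the final contract for the hydro project",
        "lunch on friday? the usual place works for me",
        "re: contract terms, we need to discuss the penalty clause",
        "minutes from yesterday's all-staff meeting",
        "the signed contract and invoice are in the shared folder",
        "holiday photos from the weekend, enjoy!",
    ]

    def human_review(index):
        # Stand-in for a lawyer's judgement: here we pretend anything that
        # mentions "contract" is relevant to the matter.
        return int("contract" in documents[index])

    reviewed = {0: 1, 1: 0}  # seed labels: doc index -> relevant? (1/0)

    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(documents)

    while len(reviewed) < len(documents):
        model = LogisticRegression(max_iter=1000)
        model.fit(X[list(reviewed)], [reviewed[i] for i in reviewed])

        scores = model.predict_proba(X)[:, 1]          # predicted relevance
        unreviewed = [i for i in range(len(documents)) if i not in reviewed]
        candidate = max(unreviewed, key=lambda i: scores[i])

        reviewed[candidate] = human_review(candidate)  # feedback for next round
        print(f"reviewed doc {candidate}: label={reviewed[candidate]}")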

Panel Discussion and Future Direction

The final report is due to be published around May 2018. Email is a complex digital object and the solution to its preservation and archiving will be complex also.

The technical aspects of physically preserving emails are available but we still need to address the effective review and selection of the emails to be made available to the researcher. The tools currently available are not accurate enough for large scale processing, however, as artificial intelligence becomes better and more advanced, it appears this technology will be part of the solution.

Tim Gollins (@timgollins) gave a great overview of the current use of technology within this context, and stressed the point that the current technology is here to ASSIST humans. The tools for selection, appraisal and review need to be tailored for each process and quality test data is needed to train the programs effectively.

The non-technical aspects add further complexity and might be more difficult to address. As a community we need to find answers to:

  • Whose email to capture (particularly interesting when an email account is linked to a position rather than a person)
  • How much to capture (entire accounts such as in the case of Capstone or allowing the user to choose what is worthy of preservation)
  • How to get persons of interest engaged (effectiveness of tools that aid the process e.g. drag and drop into record management systems or integrated preservation tools)
  • Legal implications
  • How to best present the emails for scholarly research (bespoke software such as ePADD or emulation tools that recreate the original environment or a system that a user is familiar with) 

Like most things in the digital sector, this is a fast-moving area with ever-changing technologies and trends. It might be frustrating that there is no hard guidance on email preservation, but when the Task Force on Technical Approaches to Email Archives report is published it will be an invaluable resource and a must-read for anyone with an interest in, or actively involved in, email preservation. The takeaway message was, and still is, that emails matter!

What I Wish I Knew Before I Started – DPC Student Conference 2018

On January 24th, four Archives Assistants from Archives and Modern Manuscripts visited Senate House, London for the DPC Student Conference. With the 2018 theme being ‘What I Wish I Knew Before I Started’, it was an opportunity for digital archivists to pass on their wealth of knowledge in the field.

Getting started with digital preservation

The day started with a brief introduction to digital preservation by Sharon McMeekin from the Digital Preservation Coalition. This included an outline of the three basic models of digital preservation: OAIS, DCC lifecycle and the three-legged stool. (More information about these models can be found in the DPC handbook.) Aimed at beginners, this introduction was made accessible and easy to understand, whilst also giving us plenty to think about.

Next to take the stage was Steph Taylor, an Information Manager from CoSector, University of London. Steph is a huge advocate for the use of Twitter to find out the latest information and opinion in the world of digital preservation. As someone who has never had a Twitter account, it made me realise the importance of social media for staying up to date in such a fast-moving profession. Needless to say, I signed myself up to Twitter that evening to find out what I had been missing out on. (You can follow what was happening at the conference with the hashtag #dpc_wiwik.)

The final speaker before lunch was Matthew Addis, giving a technologist’s perspective. Matthew broke down the steps you would need to take if faced with the potentially overwhelming job of starting from the beginning with a depository of digital material. He referenced a two-step approach – conceived by Tim Gollins – named ‘Parsimonious Preservation’, which involves firstly understanding what you have, and secondly keeping the bits safe (a minimal sketch of what those two steps might look like in practice follows below). In the world of digital preservation, the worst thing you can do is nothing, so by dealing with the simple and usually low-cost files first, you can protect the vast majority of the collection rather than going straight to the technical, time-consuming and costly minority of material. In the long run, the simple material that could have been dealt with initially may itself become technical and costly to handle – due to software obsolescence, for instance.
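
As an illustration of those two steps (my own minimal sketch, not the speakers’ tooling, with hypothetical file and folder names), the following Python script surveys what file formats a deposit contains and records a checksum for every file, giving you both an inventory and a baseline for keeping the bits safe:

    # Minimal sketch of the two steps described above (an illustration, not the
    # speakers' own tooling): inventory what you have, then record checksums so
    # you can tell later whether the bits have changed.
    import csv
    import hashlib
    from collections import Counter
    from pathlib import Path

    def sha256(path, chunk_size=1 << 20):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def survey(root, manifest_csv):
        formats = Counter()
        with open(manifest_csv, "w", newline="", encoding="utf-8") as out:
            writer = csv.writer(out)
            writer.writerow(["path", "size_bytes", "sha256"])
            for path in Path(root).rglob("*"):
                if path.is_file():
                    formats[path.suffix.lower() or "(none)"] += 1
                    writer.writerow([path, path.stat().st_size, sha256(path)])
        return formats

    if __name__ == "__main__":
        for ext, count in survey("digital-deposit", "manifest.csv").most_common():
            print(f"{ext}\t{count}")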

That morning, the thought of tackling a simple digital preservation project would have seemed somewhat daunting. But Matthew illustrated the steps very clearly and as we broke for lunch I was left thinking that actually, with a little guidance, it probably wouldn’t be quite so bad.

Speakers on their experiences in the digital preservation field

During the afternoon, speakers gave presentations on their experiences in the digital preservation field. The speakers were Adrian Brown from the Parliamentary Archives, Glenn Cumiskey from the British Museum and Edith Halvarsson from the Bodleian Libraries. It was fascinating to learn how diverse the day-to-day working lives of digital archivists can be, and how often, as Glenn Cumiskey remarked, you may be the first digital archivist there has ever been within a given organisation, providing a unique opportunity for you to pave the way for its digital future.

Adrian Brown on his digital preservation experience at the Parliamentary Archive

The final speaker of the day, Dave Thomson, explained why it is up to students and new professionals to be ‘disruptive change agents’, and further illustrated the point that digital preservation is a relatively new field. We now have a chance to be the change and make digital preservation something that is at the forefront of businesses’ minds, helping them avoid the loss of important information through complacency.

The conference closed with the speakers taking questions from attendees. There was lively discussion over whether postgraduate university courses in archiving and records management are teaching the skills needed for careers in digital preservation. It was decided that although some universities do teach this subject better than others, digital archivists have to make a commitment to life-long learning – not just one postgraduate course. This is a field where the technology and methods are constantly changing, so we need to be continuously developing our skills in accordance with these changes. The discussion certainly left me with lots to think about when considering postgraduate courses this year.

If you are new to the archiving field and want to gain an insight into digital preservation, I would highly recommend the annual conference. I left London with plenty of information, ideas and resources to further my knowledge of the subject, starting my commitment to life-long learning in the area of digital preservation!

 

 

Significance & Authenticity: a Briefing

As an Ancient History graduate, significance and authenticity of source information characterised my university education. Transferring these principles to digital objects in an archival situation is a challenge I look forward to learning more about and embracing. Therefore I set off to Tate Britain on a cold Friday morning excited to explore the Digital Preservation Coalition’s briefing: Significance & Authenticity. Here are some of my reflections.

A dictionary definition is not enough

The morning started with a stimulating discussion led by Sharon McMeekin (DPC), on the definitions of these two concepts within the field of Digital Archives and the context of the varying institutions the delegates were from. Several key points were made, and further questions generated:

Authenticity

  • Authenticity clearly carries with it evidential value; if something is not what it purports to be then how can it (claim to) be authentic?
  • Chains of custody and tracking accidental/intended changes are extremely relevant to maintaining authenticity
  • Further measures such as increasing metadata fields – does this ensure authenticity?

For an archival record to retain authenticity there must be a record of the original creation or experience of the digital object; otherwise we are looking at data without context. This also has a bearing on how significant an archival record is. A suggestion was also made that perhaps, as a sector, too much emphasis is placed on integrity-checking procedures. Questions surfaced such as: is the digital preservation community too reliant on it? And, in turn, is this practical, process-driven approach to ensuring authenticity too simplistic?
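
In practice, integrity checking usually means recomputing checksums and comparing them against values recorded earlier. The minimal, hypothetical Python sketch below (the manifest format and file names are assumptions) shows the whole of what such a check can tell you – that the bits are unchanged – which is exactly why the discussion asked whether it can carry the full weight of authenticity:

    # Minimal sketch of a routine fixity check: recompute checksums and compare
    # them with values recorded earlier in a manifest (a CSV with "path" and
    # "sha256" columns, a hypothetical format). A match shows the bits are
    # unchanged; it says nothing by itself about provenance or context.
    import csv
    import hashlib

    def sha256(path, chunk_size=1 << 20):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify(manifest_csv):
        failures = []
        with open(manifest_csv, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                if sha256(row["path"]) != row["sha256"]:
                    failures.append(row["path"])
        return failures

    if __name__ == "__main__":
        changed = verify("manifest.csv")
        print("all files intact" if not changed else f"changed: {changed}")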

Significance

  • Records are not just static evidence; they are also there for appreciation, education and use
  • Should the users and re-users (the designated community) be considered more extensively when deciding the significance of a digital object?
  • Emulation as a digital preservation action prioritises the experience of using the data: is this the way to go regarding maintaining both the significant properties together with the authenticity?

There was no doubt left in my mind that the two principles are inextricably linked. However, not only are they increasingly subjective for both the record keeper and the end user, they must also be distinguished from one another. For example, if a digital object can be interpreted as both a game and a book, yet the object was created and marketed as a book, does this make it any less significant or authentic? Or is the dispute part of what makes the object significant – the creation, characterisation and presentation of data in digital form being reflective of society today and of what researchers may (or may not) be interested in in the future? We do not know and, as a fellow delegate reminded us, cannot prejudice future research needs.

Building on the open-mindedness that the discussion encouraged, we were then fortunate enough to hear from practitioners of differing backgrounds about how they ensure the significance and authenticity of their collections. One particular example kept me contemplating all weekend.

Significance & Authenticity of Digital Art by Patricia Falcao & Tom Ensom (Tate)

Patricia and Tom explained that they work with time-based media art and its creators. Working (mostly) with living artists ensures a short chain of provenance; however, the nature of digital art means that applying authenticity and significance is in no way straightforward. One factor that immediately shapes the criteria for significance is that it is very important the Tate can exhibit the works, illustrating that differences between organisations will of course have a bearing on how significant a record is.

One example Tom analysed was the software-based Brutalism: Stereo Reality Environment 3 by Peruvian artist Jose Carlos Martinat Mendoza:

Brutalism: Stereo Reality Environment 3 2007 Jose Carlos Martinat Mendoza born 1974 Presented by Eduardo Leme 2007, accessioned 2011 http://www.tate.org.uk/art/work/T13251

The artwork comprises a range of components: high-speed printers, paper rolls, a web-search program and accompanying hardware, movement sensors, and a model replica of the Peruvian government building ‘El Pentagonito’, a symbol of brutalist architecture. The computer is programmed to search the web for references to ‘Brutalism’, and the different extracts of information it gathers are printed by printers mounted on the sculpture and left to fall to the floor around the replica.

Tom explained that retaining the authenticity of the digital art is very much a case of committing to represent the artist’s work together with its arrangement and intention. One method of ensuring this is the transfer of a document from the creator called ‘Installation Parameters’. For this particular example, it contained details such as paper type and cabling needs, as well as display specifications such as the hardware being a very visible element of the artwork.

Further documentation is created and stored to preserve the original authenticity, and thus the unique significance, of the artwork and the integrity of its ‘performance’. Provenance information such as diagrams, process metadata and the original source code is stored separately from the work itself. However, Tom acknowledged that there is no doubt the work will need to change and in turn will be reinterpreted. Interestingly, the point was made that the text on the paper is itself time-sensitive: live search results relating to Brutalism will evolve and change.

Looking ahead, what will happen when the hardware fails? And what will happen when nobody uses printers any more? Stockpiling is only a short-term plan for maintaining authenticity and significance. Furthermore, even if the hardware can be guaranteed, the software itself generates different issues. Software emulation, code-change tracking systems and binary analysis are all to be explored as means of enabling authenticity, but there will always be risk and a need for alternative solutions.

Would these changes reduce the authenticity or significance? I believe authenticity is associated with intention and so perhaps if changes are communicated to the user with justifications this could be one way of maintaining this principle. Significance, on the other hand, is more tricky. Without the significant and notable properties of the work, is significance automatically lost?

This case study reinforced that there is much to explore and consider when approaching the principles of authenticity and significance of digital objects. To conclude, Tom and Patricia reinforced that within the artistic context, decisions around authenticity and significance are made through collaborative dialogues with the artist/creator which does indeed provide direction.

Workshop

After 3 more talks and a panel session the briefing ended with a workshop requiring us to evaluate the significance and authenticity of a digital object provided. As a trainee digital archivist I can be guilty of shying away from group discussions/exercises within the community of practice, so I was really pleased to jump in and contribute during the group workshop exercise.

Thank you to the DPC and all involved for a brilliant day.

Collecting Space: The Inaugural Science and Technology Archives Group Conference

On Friday 17th of November I attended the inaugural Science and Technology Archives Group (STAG) conference, held at the fantastic Dana Library and Research Centre. The theme was ‘Collecting Space’, and it brought together a variety of people working in or with science and technology archives relating to the topic of space. The day consisted of a variety of talks (with topics as varied as the Cassini probe and UFOs), a tour of the Skylark exhibition and a final discussion on the future direction of STAG.

What is STAG?

The Science and Technology Archives Group is a recently formed group (September 2016) that aims to celebrate and promote scientific archives and to engage anyone with an interest in the creation, use and preservation of such archives.

The keynote presentation was by Professor Michele Dougherty, who gave us a fascinating insight into the Cassini project, aided by some amazing photos. 

Colour-coded version of an ISS NAC clear-filter image of Enceladus’ near surface plumes at the south pole of the moon. Image credit: NASA/JPL-Caltech/SSI

Her concern with regard to archiving data was context. We were told how her raw data could be given to an archive, but it would be almost meaningless without the relevant contextual information, for example calibration parameters. Without it, the data could be misinterpreted.

Dr James Peters from the University of Manchester told us of the unique challenges of the Jodrell Bank Observatory Archive, also called the ‘sleeping giant’. They have a vast amount of material that has yet to be accessioned but requires highly specialised scientific knowledge to understand, highlighting the importance of the relationship between the creator of an archive and the repository. Promoting use of the archive was a particular concern, one also shared by Dr Sian Prosser of the Royal Astronomical Society archives, who spoke of the challenges of current collection development. I’m looking forward to finding out about the events and activities planned for their bicentenary in 2020.

We also heard from Dr Tom Lean of the Oral History of British Science project at the British Library. This was a great example of the vast amount of knowledge and history that is effectively hidden. The success of a project is typically well documented; however, the stories of the things that went wrong, or of the relationships between groups, have the potential to be lost. Whilst these may be lacking in scientific research value, they reveal the personal side of the projects and are a reminder of the people and personalities behind world-changing projects and discoveries.

Dr David Clarke spoke about the Ministry of Defence UFO files release programme. I was surprised to hear that as recently as 2009 there was a government-funded UFO desk. In 2009 the surviving records were transferred to the National Archives, and all files were digitised and made available online. The demand for and reach of this content was huge, with millions of views and downloads from over 160 countries. Such an archive, whilst people may dismiss its scientific relevance and use, provides an amazing window into the psyche of society at that time.

Dr Amy Chambers spoke about how much scientific research and knowledge can go into producing a film, using Stanley Kubrick’s 2001: A Space Odyssey as an example. This was described as a science-fiction dream plus space documentary. Directors like Kubrick would delve deeply into the subject matter and speak to a whole host of professionals in both academia and industry to get the most up-to-date scientific thinking of the time, even researching concepts that would potentially never make it on screen. This was highlighted as a way of capturing scientific knowledge and contemporary thinking about the future of science at that point in history. Today it is no different: for Interstellar, Christopher Nolan consulted Professor Kip Thorne, and the collaboration produced a publication on gravitational lensing in the journal Classical and Quantum Gravity.

It was great to see the Dana research library and a small exhibition of some of the space-related material that the Science Museum holds. There was the Apollo 11 flight plan, signed by all the astronauts who took part and accompanied by a letter from Independent Television News, who had used the book to help with their televised broadcast. We also got to see the recently opened Skylark exhibition, celebrating British achievements in space research.

Launch of a British Skylark sounding rocket from Woomera in South Australia. Image credit: NASA

The final part of the conference was an open discussion focusing on the challenges and future of science and technology archives and how these could be addressed.

Awareness and exposure

From my experience as a chemistry graduate, I can speak first-hand of the lack of awareness of science archives. I feel I was not alone: during a science degree, especially for research projects, archives are never really needed compared with other disciplines, as most of the material we needed was found in online journals. Although I completed my degree some time ago, I feel this is still the case today when I speak to friends who study and work in the science sector. Promoting science and technology archives to scientists (at any stage of their career, but especially at the start) will make them aware of the rich source of material out there that can benefit them, and subsequently they will become more involved and interested in creating and maintaining such archives.

Content

For an archivist with little to no knowledge of a particular area of science, understanding the vastly complex data and material in science and technology archives is a potentially impossible job. The nomenclature used in scientific disciplines can be highly specialised and specific, making the material extremely difficult to decipher.

This problem could be addressed in one of two ways. Firstly, the creator of the material, or a scientist working in that area, can be consulted. Whilst this can be time-consuming, it is a necessity, as the highly specialised nature of certain topics can mean there are only a handful of people who can understand the work. Secondly, when the material is created, the creator should be encouraged to explain and store data in a way that will allow future users to understand and contextualise it better.

As science and technology companies can be highly secretive entities, problems arise with exploiting sensitive material. It was suggested that the group might seek the advice of other specialist archive groups that have dealt with highly sensitive archives.

It appears that there is still a great deal of work to do to promote access to, exploitation of and awareness of science and technology archives (for both creators and users). STAG is a fantastic way to get like-minded people together to discuss and implement solutions. I’m really looking forward to seeing how this develops, and hopefully I will be able to contribute to this exciting, worthwhile and necessary future for science and technology archives.