Tag Archives: #SkillsForTheFuture

Email Preservation: How Hard Can it Be? DPC Briefing Day

Miten and I outside the National Archives, looking forward to a day of learning and networking

Last week I had the pleasure of attending a Digital Preservation Coalition (DPC) Briefing Day titled Email Preservation: How Hard Can it Be? 

In 2016 the DPC, in partnership with the Andrew W. Mellon Foundation, announced the formation of the Task Force on Technical Approaches to Email Archives to address the challenges presented by email as a critical historical source. The Task Force delineated three core aims:

  1. Articulating the technical framework of email
  2. Suggesting how tools fit within this framework
  3. Beginning to identify missing elements.

The aim of the briefing day was two-fold: to introduce and review the work of the task force so far in identifying emerging technical frameworks for email management, preservation and access; and to discuss more broadly the technical underpinnings of email preservation and the associated challenges, using a series of case studies to illustrate good practice frameworks.

The day started with an introductory talk from Kate Murray (Library of Congress) and Chris Prom (University of Illinois Urbana-Champaign), who explained the goals of the task force in the context of email as a cultural record worthy of preservation. They noted that email is a habitat where we live a large portion of our lives, encompassing both work and personal life. Looking at the terminology, they acknowledged that email is an object, several objects and a verb, and its multi-faceted nature all adds to the complexity of preserving it. Ultimately, email is a transactional process whereby a sender transmits a message to a recipient and, from a technical perspective, a protocol that defines a series of commands and responses, operating rather like a programming language, which permits those email processes to occur.
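To make that layered picture concrete, here is a minimal sketch using Python's standard email library (the addresses and subject are invented): it builds a message object with its header fields, and the comments note the separate SMTP envelope commands that would carry it between servers.

```python
from email.message import EmailMessage

# Build an RFC 5322 message: header fields plus a body (addresses invented).
msg = EmailMessage()
msg["From"] = "sender@example.org"
msg["To"] = "recipient@example.org"
msg["Subject"] = "Notes from the briefing day"
msg.set_content("Draft notes attached in a follow-up message.")

print(msg)  # prints the header block, a blank line, then the body

# The SMTP 'envelope' is separate from the message itself: commands such as
#   EHLO client.example.org
#   MAIL FROM:<sender@example.org>
#   RCPT TO:<recipient@example.org>
#   DATA   (followed by the message above, terminated by a lone ".")
# are exchanged between client and server to deliver it.
```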

From this standpoint, several challenges of email preservation were highlighted:

  • Capture: building trust with donors, aggregating data, creating workflows and using tools
  • Ensuring authenticity: ensuring no part of the email (envelope, header, message data etc.) has been tampered with
  • Working at scale
  • Addressing security concerns: malicious content leading to vulnerability, confidentiality issues
  • Messages and formats
  • Preserving attachments and linked/networked documents: can these be saved and do we have the resources?
  • Tool interoperability

 

The first case study of the day was presented by Jonathan Pledge from the British Library on “Collecting Email Archives”. He explained that born-digital research began at the British Library in 2000, and many of their born-digital archives contain email. The presentation was particularly interesting as it included their workflow for forensic capture, processing and delivery of email for preservation, providing a current and real-life insight into how email archives are being handled. The British Library uses Aid4Mail Forensic for processing and delivery, but is looking into ePADD as a more holistic approach. ePADD is a software package developed by Stanford University which supports archival processes around the appraisal, ingest, processing, discovery and delivery of email archives. Some of the challenges they experienced surrounded the fact that email often contains personal information. A possible solution would be the redaction of offending material; however, they noted this could lead to the loss of meaning, as well as being an extremely time-consuming process.

Next we heard from Anthea Seles (The National Archives) and Greg Falconer (UK Government Cabinet Office), who spoke about email and the record of government. Their presentation focused on the question of where the challenge truly lies for email – suggesting that, as opposed to issues of preservation, the challenge lies in capture and presentation. They noted that when coming from a government or institutional perspective, the amount of email created increases hugely, leaving large collections of unstructured records. In terms of capture, this leads to the challenge of identifying what is of value and what is sensitive. Following this, the major challenge is how best to present emails to users – discoverability and accessibility. This includes issues of remapping existing relationships between unstructured records and, again, the issue of how to deal with linked and networked content.

The third and final case study was given by Michael Hope from Preservica, an “Active Preservation” technology providing a suite of OAIS (Open Archival Information System) compliant workflows for ingest, data management, storage, access, administration and preservation for digital archives.

Following the case studies, there was a second talk from Kate Murray and Chris Prom on emerging Email Task Force themes and their Technology Roadmap. In June 2017 the task force released a Consultation Report Draft of their findings so far, to enable review, discussion and feedback, and the remainder of their presentation focused on the contents and gaps of the draft report. They talked about three possible preservation approaches:

  • Format Migration: copying data from one format to another to ensure continued access (a minimal sketch follows this list)
  • Emulation: recreating user experience for both message and attachments in the original context
  • Bit Level Preservation: preservation of the file, as it was submitted (may be appropriate for closed collections)
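As an illustration of the format migration approach, a minimal sketch using Python's standard mailbox library (the file names are hypothetical) could copy messages out of an aggregate mbox file into individual EML files:

```python
import mailbox
from pathlib import Path

# Hypothetical paths, for illustration only.
source = "donor_account.mbox"
target = Path("eml_out")
target.mkdir(exist_ok=True)

# Read each message from the mbox aggregate and write it out
# as an individual RFC 5322 .eml file.
for number, message in enumerate(mailbox.mbox(source)):
    (target / f"{number:06d}.eml").write_bytes(message.as_bytes())
```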

They noted that there are many tools within the cultural heritage domain designed with interoperability, scalability, preservation and access in mind, yet these are still developing and improving. Finally, we discussed the possible gaps in the draft report; issues such as the authenticity of email collections were raised, as well as a general interest in the differing workflows between institutions. Ultimately, I had a great time at The National Archives for the Email Preservation: How Hard Can it Be? Briefing Day – I learnt a lot about the various challenges of email preservation, and am looking forward to seeing further developments and solutions in the near future.

Email Preservation: How Hard Can it Be? DPC Briefing Day

On Thursday 6th July 2017 I attended the Digital Preservation Coalition briefing day on email preservation, run in partnership with the Andrew W. Mellon Foundation and titled ‘Email preservation: how hard can it be?’. It was hosted at The National Archives (TNA); this was my first visit to TNA and it was fantastic. I didn’t know a great deal about email preservation prior to this, so I was really looking forward to learning about the topic.

The National Archives, photograph by Mike Peel (www.mikepeel.net), CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=9786613

The aim of the day was to engage in discussion about some of the current tools, technologies and thoughts on email preservation. It was oriented around the ‘Task Force on Technical Approaches to Email Preservation’ report, which is currently in its draft phase. We also got to hear interesting case studies from the British Library, TNA and Preservica, each presenting their own unique experiences in relation to this topic. It was a great opportunity to learn about this area and hear from the co-chairs (Kate Murray and Christopher Prom) and the audience about their thoughts on the current situation and possible future directions.

We heard from Jonathan Pledge of the British Library (BL). He told us about the forensic capture expertise gained by the BL and using EnCase to capture email data from hard drives, CDs and USB drives. We also got an insight into how they are deciding which email archive tool to use: Aid4Mail fits better with their workflow, but ePADD, with its holistic approach, was something they were considering. During ingest they separate the emails from the attachments. They found that after the time-consuming process of removing emails that would violate data protection laws, there was very little usable content left, as often entire threads would have to be redacted because of one message. This is not the most effective use of an archivist’s time and is something they are working to address.

We also heard from Anthea Seles, who works with government collections at TNA. We learnt that, from their research, approximately 1TB of data in an organisation’s own electronic document and records management system is linked to 10TB of related data in shared drives. Her focus was on discovery and data analytics. For example, one way to increase efficiency and focus the curator’s attention was to batch emails. If an email was sent from TNA to a vast number of people, then there is a high chance that the content does not contain sensitive information. However, if it was sent to a high-profile individual, then there is a higher chance that it will contain sensitive information, so the curator can focus their attention on those messages.

Hearing from Preservica was interesting as it gave an insight into the commercial side of email archiving. In their view, preservation itself was not the issue; their attention was focused on problems such as efficiently identifying duplicate or unwanted emails, developing tools for whole-collection email analysis and, interestingly, solving the problem of acquiring emails via continuous transfer.

Emails are not going to be the main form of communication forever (the rise in the popularity of instant messaging is clear to see); however, we learnt that growth in their use is still expected for the near future.

One of the main issues that was brought up was the potential size of future email archives and the issues that come with effective and efficient appraisal. What is large in academic terms, e.g. 100,000 emails, is not large in government. The figure of over 200 million emails at the George W. Bush presidential library is a phenomenal amount, and the Obama administration’s is estimated at 300 million. This requires smart solutions, and we learnt how the use of artificial intelligence and machine learning could help.

Continuous active learning was highlighted as a way to improve searches. An example of searching for ‘Miami dolphins’ was given: the Miami Dolphins are an American football team, but someone might instead be looking for information about dolphins in Miami. Initially the computer would present a range of search results and the user would choose which is the more relevant; over time it learns what the user is looking for in cases where searches are ambiguous.
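A toy sketch of that relevance-feedback loop, assuming scikit-learn and an invented four-document collection, could look like this:

```python
# Continuous active learning in miniature: retrain on the user's
# judgements, rank, then ask the user to judge the top candidate.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

documents = [
    "Miami Dolphins win the season opener",           # football
    "Dolphin pod spotted off the Miami coast",        # wildlife
    "Quarterback injury worries Miami Dolphins fans",
    "Marine biologists study dolphins near Miami",
]

# Seed judgements from the user: 1 = relevant, 0 = not relevant.
labelled = {0: 0, 1: 1}

vectoriser = TfidfVectorizer()
X = vectoriser.fit_transform(documents)

for _ in range(2):  # each round: retrain, rank, ask the user again
    ids = sorted(labelled)
    model = LogisticRegression().fit(X[ids], [labelled[i] for i in ids])
    scores = model.predict_proba(X)[:, 1]
    # Present the highest-scoring unjudged document for review.
    candidate = max(
        (i for i in range(len(documents)) if i not in labelled),
        key=lambda i: scores[i],
    )
    print("Please judge:", documents[candidate])
    labelled[candidate] = 1  # pretend the user marked it relevant
```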

Another issue that was highlighted was: how do you make sure that you have matched the correct person, and how do you avoid false positives? At TNA the ‘Traces Through Time’ project aimed to do exactly that, initially with World War One records, and this technology, using big data analytics, can be applied to email archives. There is also work on mining email signatures as a way to better determine ownership of a message.

User experience was also discussed. Emulation is an area of particular interest. The positive of this is that it recreates how the original user would have experienced the emails. However this technology is still being developed. Bit level preservation is a solution to make sure we capture and preserve the data now. This prevents loss of the archive and allows the information and value to be extracted in the future once the tools have been developed.

It was interesting to hear how policy could affect how easy it would be to acquire email archives. The new General Data Protection Regulation that comes into effect in May 2018 will mean anyone in breach of it faces stiffer penalties, up to 4% of annual worldwide turnover. This means that companies may err on the side of caution with regards to keeping personal data such as emails.

Whilst email protocols are well standardised, allowing emails to be sent from one client to another (e.g. from an AOL account of the early 1990s to Gmail today), the acquisition of them is not. When archivists get hold of email archives, they are left with the remnants of whatever the email client or user has done to them. This means metadata may have been added or removed and formats can vary, which adds a further level of complexity to the whole process.

The day was thoroughly enjoyable. It was a fantastic way to learn about archiving emails. As email is now one of the main methods of communication for government, large organisations and personal use, it is important that we develop the tools, techniques and policies for email preservation. To answer the question ‘how hard can it be?’, I’d say very. Emails are not simple objects of text; they are highly complex entities comprising attachments, links and embedded content. The solution will be complex, but there is a great community of researchers, individuals, libraries and commercial entities working on solving this problem. I look forward to hearing the update in January 2018 when the task force is due to meet again.

Researchers, practitioners and their use of the archived web. IIPC Web Archiving Conference, 15th June 2017

From the 14th to 16th of June, researchers and practitioners from a global community came together for a series of talks, presentations and workshops on the subject of web archiving at the IIPC Web Archiving Conference. The event coincided with Web Archiving Week 2017, a week-long event running from 12th to 16th June hosted by the British Library and the School of Advanced Study.

I was lucky enough to attend the conference on the 15th June with a fellow trainee digital archivist and listen to some thoughtful, engaging and challenging talks.

The day started with a plenary in which John Sheridan, Digital Director of the National Archives, spoke about the work of the National Archives and the challenges and approaches to web archiving they have taken. The National Archives is principally the archive of the government; it allows us to see what the state saw, through the state’s eyes. Archiving government websites is a crucial part of this record keeping as we move further into the digital age, where records are increasingly born-digital. A number of points were made which highlighted the motivations behind web archiving at the National Archives.

  • They care about the records that government are publishing and their primary function is to preserve the records
  • Accountability for government services online or information they publish
  • Capturing both the context and content

By preserving what the government publishes online it can be held accountable; accountability is one aspect that demonstrates the inherent value of archiving the web. You can find a great blog post on accountability and digital services by Richard Pope at this link: http://blog.memespring.co.uk/2016/11/23/oscon-2016/

The published records and content on the internet provides valuable and crucial context for the records that are unpublished, it links the backstory and the published records. This allows for a greater understanding and analysis of the information and will be vital for researchers and historians now and into the future.

Quality assurance is a high priority at the National Archives. Having a narrow crawling focus has both allowed and prompted a lot of effort to be directed into the quality of the archived material, so that it has high fidelity in playback. To keep these high standards, a really good in-depth crawl can take weeks. Having a small, curated collection is an incentive to work harder on capture.

The users and their needs were also discussed as this often shapes the way the data is collected, packaged and delivered.

  • Users want to substantiate a point. They use the archived sites for citation on Facebook or Twitter for example
  • The need to cite for a writer or researcher
  • Legal – What was the government stance or law at the time of my client’s case?
  • Researchers’ needs – This was highlighted as an area where improvements can be made
  • Government itself is using the archives for information purposes
  • Government websites requesting crawls before their website closes – An example of this is the NHS website transferring to a GOV.UK site

The last part of the talk focused on the future of web archiving and how this might take shape at the National Archives. Web archiving is complex and at times chaotic. Traditional archiving standards have been placed upon it in an attempt to order the records; it was a natural evolution for information managers and archivists to use existing knowledge, skills and standards to bring this information under control. This has, however, resulted in difficulties in searching across web archives, describing the content and structuring the information. The nature of the internet and the way in which the information is created mean that uncertainty inevitably has to be embraced. Digital archiving could take a turn into 2.0, a second generation, moving away from traditional standards and embracing new standards and concepts. One proposed method is the ICA Records in Contexts conceptual model. It proposes a multidimensional description, with each ‘thing’ having a unique description, as opposed to the traditional one-size-fits-all unit of description. Instead of a single hierarchical, fonds-down approach, the Records in Contexts model uses a description that can be formed as a network or graph. The context of the fonds is broader, linking between other collections and records to give different perspectives and views. The records can be enriched this way and provide a fuller picture of the record or archive. The web produces content that is in a constant state of flux, and a system of description that can grow and morph over time, creating new links and context, would be a fruitful addition.

Visual Diagram of How the Records in Context Conceptual Model works

“This example shows some information about P.G.F. Leveau a French public notary in the 19th century including:
• data from the Archives nationales de France (ANF) (in blue); and
• data from a local archival institution, the Archives départementales du Cher (in yellow).” (International Council on Archives, Records in Contexts: A Conceptual Model for Archival Description, p. 93)
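To make the contrast with a single hierarchy concrete, here is a toy sketch in plain Python (not the RiC ontology itself; the record names are invented) of description as a graph of labelled relations, using the Leveau example above:

```python
# A toy illustration: entities are nodes, and labelled relations link
# records, people and institutions across collections.
# 'Record A' and 'Record B' are invented placeholders.
relations = [
    ("Record A", "created by", "P.G.F. Leveau"),
    ("Record B", "mentions", "P.G.F. Leveau"),
    ("Record A", "held by", "Archives nationales de France"),
    ("Record B", "held by", "Archives départementales du Cher"),
]

def describe(entity):
    """Return every relation in which an entity participates."""
    return [r for r in relations if entity in (r[0], r[2])]

# One 'thing' is described once, yet reachable from many contexts,
# rather than sitting at a single place in one hierarchy.
for subject, predicate, obj in describe("P.G.F. Leveau"):
    print(f"{subject} --{predicate}--> {obj}")
```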

 

Traditional Fonds Level Description

 

I really enjoyed the conference as a whole, and the talk by John Sheridan. I learnt a lot about the National Archives’ approach to web archiving, its challenges and where the future of web archiving might go. I’m looking forward to taking this new knowledge and applying it to the web archiving work I do here at the Bodleian.

Changes are currently being made to the National Archives’ web archiving site and it will relaunch on the 1st July this year. Why not go and check it out?

Web Archiving Week 2017 – “Pages for kids, by kids”

Yesterday I was lucky enough to attend a day of the Web Archiving Week 2017 conferences in Senate House, London along with another graduate trainee digital archivist.

A beautiful staircase in Senate House

Every session I attended throughout the day was fascinating, but Ian Milligan’s ‘Pages by kids, for kids’: unlocking childhood and youth history through the GeoCities web archive stood out for me as truly capturing part of what makes a web archive so important to society today.

Pages by kids, for kids

GeoCities, for those unfamiliar with the name, was a web hosting service founded in 1994 on which anyone could build their own free website, which would become part of a ‘neighbourhood’. Each neighbourhood was themed around a particular topic, allowing topic clusters to form from the created websites. GeoCities was shut down in Europe and the US in 2009, but evidence of it still exists in the Internet Archive.

Milligan’s talk focused particularly on the Enchanted Forest neighbourhood between 1996 and 1999. The Enchanted Forest was dedicated to child-friendliness and was the only age-based neighbourhood, and as such had extra rules and community moderation to ensure nothing age-inappropriate was present.

“The web was not just made by dot.com companies”

The above image shows what I think was one of the key points from the talk, a quote from the New York Times, March 17th 1997
“The web was not just made by dot.com companies, but that eleven-year-old boys and grandmothers are also busy putting up Web sites. Of course, the quality of these sites varies greatly, but low-cost and even free home page services are a growing part of the on-line world.”

The internet is a democracy, and a true record of how and why it has been used necessarily involves people – not just businesses. By having GeoCities websites within the Internet Archive, it’s possible to access direct evidence of how people were using the internet in the late part of the 20th century but, as Ian Milligan’s talk explained, it also allows access to direct evidence of childhood and youth culture forming on the internet.

Milligan pointed out that access to evidence of childhood and youth culture is rare: normally historical evidence comes in the form of adults remembering their time as children or from researchers studying children, but something produced by a child for other children would rarely make it into a traditional archive. Within the trove of archived GeoCities websites, however, children producing web content for children is clearly visible. From this, it is possible to examine what constituted popular activities for children on GeoCities in the late 20th century.

Milligan noted one major activity within the Enchanted Forest centred around an awards culture, wherein a popular site would award users based on several web page qualities such as no personal identifiable information, working links and loading times of less than one minute. Some users would create their own awards to present to people, for example an award for finding all the Winnie the Pooh words in a word search. His findings showed that 15% of Enchanted Forest websites had a dedicated awards page.

A darker side of a child-centric portion of the web was also revealed in the Geokidz Club. On the surface, the Geokidz Club appeared to be an unofficial online clubhouse where children could share poetry and book reviews, chat and take HTML lessons – but these activities came at the price of a survey which contained questions about the lifestyles of the children’s parents (the type of information that would appeal to advertisers). This formed part of one of the first internet privacy court cases, because the data was obtained from children and sold on without proper informed consent.

It was among my favourite talks of the day, and showed how much richer our understanding of the recent past can be when using web archives, as well as their benefit to researchers of the history of youth and childhood.
As someone who spent her teen years on the internet watching, and being involved in, youth culture happening online in the 2000s, it felt particularly relevant to know that online youth culture, which can feel very ephemeral, can be saved in web archives for future research.

A wall hanging in Senate House (made of sisal)

In truth, any talk I attended would have made an interesting topic for this blog – the entire day was filled with informative speakers, interesting presentations and monumental, hair-like wall hangings. But I felt Ian Milligan’s talk gave such a positive example of how the internet, and particularly web archives, can give a voice to those whose experiences might be lost otherwise.

Initiating conversation: let’s talk about web content (part 2)

Colin Harris, Superintendent of Special Collections reading rooms. Chosen site: cyndislist.com

‘I am a founding member of the Oxfordshire Family History Society and I’ve long been interested in family history. As a phenomenon it surged in popularity in the 1970s. In about 1973 there was great curiosity (in OFHS) in Bicester, as everyone was interested in the popular group, The Osmonds (who originated from Bicester!). Every county has a family history society and I would say it’s they who have done the lion’s share of the work. All of their work and indexing…it’s all grist to the mill in terms of recording names and events.

So the website I would like to have access to in 10 years’ time is cyndislist.com, which is one of the world’s largest databases for genealogy. In fact it’s been going for over 21 years already; it was launched on the 4th March 1996. The family history people have been right there from the very beginning, and it’s been growing solidly since then; it’s fantastic. It covers 200 categories of subjects, it has links to 332,000 other websites, and it’s the starting point for any genealogical research. The ‘Cyndi’ is Cyndi Howells, an author in genealogy.

Almost every day the site is launching content that might be interesting in some particular subject. So just going back within the last couple of weeks: an article on Telling the Orphan’s story; Archive lab on how to preserve old negatives; The key to family reunion success and DNA: testing at a family reunion! Projects even go beyond individuals…they explore a Yellowstone wolf family. There is virtually nothing that is untouched. Anything with a name to it has potential for exploration.

To be honest, I haven’t been able to do any family history research since 1980, but I am hoping to do some later on this year (when I retire). All the years that have passed have meant that so much is now available to be accessed over the internet.

Actually I’d love to see genealogy and family history workers and volunteers getting more recognition for the fantastic amount of industrious and tech savvy work they do. Family history is something for people from all walks of life. Our history, your history, my history is something very personal. As I say, 21 years and going strong; I’d love to see the site going stronger still in 10 years’ time.’


 

Pip Willcox, Head of the Centre for Digital Scholarship and Senior Researcher at Oxford e-Research. Chosen site: twitter.com

‘Twitter is an amazing tool that society has used to show the best of what humanity is at the moment…we share ideas, we share friendship, fun and joy, we communicate with others around the world, people help each other. But it also shows the worst of what humans can do. The news we see is just the tip of the iceberg – the levels of abuse that users, particularly minority groups, receive is appalling. Twitter is a fantastic place to meet people who think very differently from us, people who come from different backgrounds, have had different experiences, who live far from us, or close by but whom we might not otherwise have met. It is so rich, so full of potential, and some of what we do with it is amazing, yet some of what we do with it is appalling.

The question for the archive is “which Twitter?” There is the general feed, what you see if you don’t sign in. Then there are our individual feeds, where we curate our own filter bubbles, customizing what we see through our accounts. You can create a feed around a hashtag, an event, or slice it by time or location. All of these approaches will affect the version of Twitter we archive and leave for the future to discover.

These filter bubbles are not new: we have always lived in them, even if we haven’t called them that before. Last year there was an experiment where a series of couples who held diametrically opposing views switched Twitter accounts, and I found that, and their thoughtful responses to it, fascinating.

Projects like Cultures of Knowledge, for example, which is based at the History Faculty here at the University of Oxford, traces early modern correspondence. This resource lets you search for who was writing to whom, when, where, and the subjects they were discussing. It’s an enormously rich, people-centred view of the history of ideas and relationships across time and space, and of course it points readers on in interesting directions, to engage closely with the texts themselves. This is possible because the letters were archived and catalogued over the years, over the centuries by experts.

How are we going to trace the conversations of the late 20th and the early 21st centuries? The speed at which ideas flow is faster than ever and their breadth is global. What will future historians make of our age?

I’m interested from a future history as well as a community point of view. The way we are using Twitter has already changed and tracking its use, reach, and power seems to me well worth recording to help us understand it now, and to help explain an aspect of our lives to future societies. For me, Twitter makes the world more familiar, and anything that draws us together as a global community, that reinforces our understanding that we share one planet, that what we have in common vastly outweighs what divides us, and that helps us find ways to communicate is a good and a necessary thing.’

 


 

Will Shire, Library Assistant, Philosophy and Theology Faculty Library. Chosen site: wikipedia.org

‘It’s one of the sites I use the most…it has all of human knowledge. I think it’s a cool idea that anyone can edit it – unlike a normal book it’s updated constantly. I feel it’s derided almost too much by people who automatically think it’s not trustworthy…but I like the fact that it is a range of people coming together to edit and amend this resource. As a kid I bothered my mum all the time with constant questioning of ‘Why is this like this? Why does it do that?’ Nowadays if you have a question about anything you can visit wikipedia.org. It would be really interesting to take a snapshot of one article every month or week in order to see how much it changes through user editing.

 Also, I studied languages and it is extremely useful for learning new vocabulary as the links at the side of the article can take you to the content in other available languages. You can quite easily look at different words or use it as a starter to take you to different articles in other languages that aren’t English.’

Why archive the web?

Here at the Bodleian Libraries’ Web Archive (BLWA), the archiving process starts with a nomination – either by our web curators or by you, the public. The nominated URLs the BLWA team then select for archiving are those specifically identified as being of lasting value and significance for preservation.

Not only are the sites chosen from a preservation standpoint – we are also continually seeking to build up the scope and content of our seven collections within the BLWA: University of Oxford; University of Oxford colleges; University of Oxford museums, libraries and archives; social sciences; arts and humanities; international; and science, medicine and technology. Exactly like a physical collection, the sites belonging to the web collection will be used for research, fact checking, discovery and collaboration. There can be no denying that the web is the platform on which so much of contemporary society occurs. In the future, then, and indeed now, web archives are providing an insight into our history.

Anti-Apartheid Movement Archives – http://www.aamarchives.org/

The AAMA site is part of our international collection in the BLWA. Within this collection we have captured aamarchives.org seven times since 24th November 2015. This online platform is vital for digital access to further research, cross-cultural relationships and efforts towards understanding the history of the British Anti-Apartheid Movement, 1959–1994. The capture has preserved the navigation and functionality of the site, and links still resolve; for example, the user community can still browse the archive, learn about campaigns and download resources. The date and time are clearly displayed in the banner at the top.

BLWA’s first capture of the online AAMA

This website can also be used and explored in conjunction with our related physical holdings. Here at the Bodleian Special Collections we have an amazing depth and range of physical material in the Anti-Apartheid Movement archive and our Commonwealth and African studies collections. You can browse the catalogue for this here.

This archived capture is fully functional, like a live site.

This is a tangible example of how digital preservation enhances and complements physical material and ensures records can reach a wider audience. How exciting it is that a researcher can consult manuscript or archived material, alongside captures of websites from the past in order to gain more of an insight and have a wider scope of substance to survey!

Web content like aamarchives.org is not as stable as you might presume. A repository of web-based collections enables future discovery of internet sites that are perhaps taken for granted due to the nature of our technological society; everything is just a tap or a click away. In fact, much of the material we interact with today is only available online. The truth is that web content is ephemeral: there is a very real threat that it can rapidly change or disappear altogether. Web archiving initiatives are therefore vital to preserve these valuable resources for good. Through these captures, provenance, arrangement and content have been preserved – and, arguably most importantly of all, access.

Both individual collections and the web archive as a whole can be searched for a specific site, or browsed at leisure.

Growth in open access and web-based initiatives means that there is an ever-increasing network of digital libraries on a global scale. There is no doubt that the practice of web archiving is a significant contribution towards ensuring knowledge for all. Access to the internet, enabling access to an ever-growing knowledge repository, is central to the integrity of educational and professional research, to web archiving and, on a larger scale, to digital preservation.

Browse our collections in Bodleian Libraries’ Web Archive

Get involved and help preserve our history! Nominate a site to archive

Initiating conversation: let’s talk about web content (part 1)

To initiate conversation about preserving web content and to encourage people to think about why archiving the web is so important, I asked staff at the Bodleian Libraries to imagine the following: If you could choose just one website to have guaranteed access to in 10 years’ time what would it be – and why? Keep reading to discover staff answers and perspectives…

Richard Ovenden, Bodley’s Librarian, Bodleian Libraries. Chosen site: bodleian.ox.ac.uk

‘Obviously as somebody who is leading this institution, seeing its history reflected in the institutional website is so significant. If you go back to the archived captures of bodleian.ox.ac.uk that are accessible now through the Internet Archive, it’s incredible not only to see the evolution of the HTML site itself and its look and feel, but also to see how it reflects the changes in the organisation since the 1990s, when the first Bodleian website was set up…the Bodleian was actually the first library in the UK to have a website.

We can see the changes to the way the Bodleian Libraries reflect their public persona through the web but also the website is a useful proxy for how the organisation itself has changed: the organisational structure, the administrative arrangements, the policies and strategies, how the web is a reflection of those changes over the past 20 years is really interesting. And in 10 years’ time it would be over 30 years and there will be another decade of evolution, growth, change…the web is a very convenient place to see that at a glance. We obviously archive a large number of institutional and administrative records in paper and digital form but it’s a huge amount to wade through, whereas the web provides a very convenient lens to view our organisational past through. I can’t think of another way, so conveniently, to chart our history, our progress, our challenges and even some of the mistakes that we’ve made as an organisation over that time.

Our organisation as a whole changed dramatically in the year 2000 when we stopped being just the historic Bodleian Library and were integrated with the departmental faculty libraries. We then changed our name to University of Oxford Library Services, then back to the Bodleian. Through the website you can actually see that extraordinary change. It’s such a convenient way of getting a grip on our history.’


Lukasz Kowalski, Bodleian Library Reader Services, Weston Library. Chosen site: stackexchange.com

‘I was thinking “what’s the website with the most information in it?”. My initial thought was Wikipedia.org. But I could easily live without it if I had to, as probably most knowledge contained in it is available in print. My next thought was stackexchange.com. It facilitates an exchange of knowledge and collective problem-solving on a large scale, otherwise unattainable via printed media. It’s supported by a large community of users, including experts in their fields. Together with its sister sites, it covers virtually any discipline and questions that can be asked and answered. Stackexchange is a web of knowledge, but different from Wikipedia. Rather than being organised knowledge it is more organised thinking.

My background is in Physics and I have used this site to further my understanding of concepts which did not have clear explanations in textbooks, or when I wanted to check that my thinking about a solution to a given problem was on the same page as others.

I think it goes back to what, I guess, the internet was about in the first place: the exchange of knowledge and ideas, and such is the character of this site. It’s great to rely on good teachers if one has access to them – but it is wonderful that people from across the world can gain a deeper understanding of concepts and exchange ideas by connecting more readily with those who have the expertise.’


 

Sophie Quantrell, Library Assistant, Philosophy and Theology Faculty Library. Chosen site: youtube.com

‘I was thinking about youtube.com as a resource mainly because it’s so versatile. It can be used to display images, sound…I’ve seen some people use it for musical scores – putting musical scores alongside the sound and that sort of thing. I think it is a site that can be used for almost any purpose – so you’ve got the social aspect of it, with the comments and the interaction, as well as the instructional aspect. I learn sign language when I am not busy with other things [gestures around her at the library], so being able to see and learn it through videos is great…it’s much more difficult to tell what the signs are if all you’ve got are drawings on a piece of paper!

It can link to videos on so many different topics, like instructional TED talks. There are so many good quality resources online that get overlooked with all the cat videos. It also crosses cultural boundaries…you can upload and view videos in whatever language you want. You could post a video from Australia and someone could be watching it in Kazakhstan!’


Iram Safdar, Graduate Trainee Digital Archivist, Weston Library. Chosen site: wikipedia.org

‘Wikipedia has been the main source of my knowledge since I was a kid. It’s also provided me with countless hours of entertainment by following the breadcrumb trail of links and seeing where you end up! All sorts of hilarity ensues when you find a rogue edit by someone…I like that it is an open source resource.

Similarly, it shows you what society thinks about things and reveals how we view stuff…which I think in a broader sense is quite interesting.’


Keep an eye out for part 2 and more staff insights coming up on the Archives and Modern Manuscripts blog imminently…

 

‘Getting Started with Digital Preservation’ Workshop

On the 17th of May I attended the Digital Preservation Coalition’s (DPC) ‘Getting Started with Digital Preservation’ workshop in London.

The one-day event was a great opportunity to gain clear insights into starting in the digital preservation sector, and provided a useful platform for networking with other archivists. The event consisted of lectures from DPC members on various topics related to starting digital preservation. It also included group exercises that were aimed at putting these ideas into practice.

The day started with a brief overview of digital preservation. The DPC team began by making us focus on identifying the main aspects of traditional archival preservation for physical documents: for example, a document’s physical, robust and tangible nature; its ability to be understood independently, without relying on technology; the existence of well-established approaches to its preservation; and a well-established understanding of how to assess the value of such documents.

This was used as a springboard to introduce us to the many issues that we would face in the transition to digital: the ephemeral and intangible nature of digital material (1s and 0s can’t be held in your hands); the need for technology and software for documents to be understood (e.g. a PDF file requires software to open it); issues of obsolescence (e.g. new hardware and software making older files redundant); and the lack of value-assessment experience in the field (how do we assess the value of a set of data?).

These areas helped us to understand that digital preservation presents its own set of unique challenges that have to be understood within their own context. The question ‘Why digitise?’ was then put to the attendees at the workshop. The responses focused on legal, research, cultural heritage, funding, efficiency, contingency and access reasons for digitising. This shows us that digital preservation cannot be seen as a simple solution to a single problem, but as a complex solution to many.

Bit-level preservation was covered in detail at the workshop. This section focused on the potential dangers that could affect data and how to prevent them from occurring. The three main areas were: media obsolescence, where a media type is no longer used or the hardware no longer exists to support it; media failure or decay, when the media itself reaches the end of its life cycle or breaks; and natural or human-made disaster, such as fire or earthquakes. Mitigating these dangers is achieved by keeping multiple copies of the data, typically 2-3 or more (the exact number needed is a subject of debate), storing these copies in different geographical locations, and periodically migrating the data to new storage media.

The workshop also looked at integrity checks and the role they play in bit-level preservation. Integrity checking is the process of creating a ‘checksum’ or ‘hash value’: a number produced by running an integrity checking program such as Fixity or ACE on a file (COPTR maintains a registry of such tools). This number acts like a fingerprint for the data and can be used to check whether the data has changed or become corrupted in any way due to bit-rot or other data corruption.

Fixity: https://www.avpreserve.com/tools/fixity/
ACE: https://wiki.umiacs.umd.edu/adapt/index.php/Ace
COPTR: http://coptr.digipres.org/Category.Fixity
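As a minimal sketch of what such tools do under the hood, a checksum can be computed and compared using Python's standard hashlib (the file path and previously recorded value below are purely illustrative):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 checksum of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file and previously recorded value, for illustration only.
recorded = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
current = sha256_of(Path("accession_0001/report.pdf"))
print("fixity intact" if current == recorded else "CHANGED - investigate")
```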

Later in the workshop, characterisation tools were demonstrated. The tool showcased was DROID (Digital Record Object Identification). DROID is an open-source tool that analyses the file types and formats on a system, identifying them against PRONOM, The National Archives’ registry of file formats. The presentations stressed that the signature databases the tools use are important and need regular updating to stay accurate. Other characterisation tools mentioned included C3PO, JHOVE, Apache Tika and FITS.

PRONOM: http://www.nationalarchives.gov.uk/PRONOM/Default.aspx
DROID: https://sourceforge.net/projects/droid/
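A toy sketch of the signature-based identification that DROID performs at scale against PRONOM might look like this (the byte patterns are standard magic numbers; the file path is hypothetical):

```python
# Match a file's leading bytes against a tiny, hand-made signature table.
SIGNATURES = {
    b"%PDF-": "Portable Document Format (PDF)",
    b"\x89PNG\r\n\x1a\n": "Portable Network Graphics (PNG)",
    b"PK\x03\x04": "ZIP container (also DOCX, XLSX, EPUB...)",
}

def identify(path: str) -> str:
    """Return a format name based on the file's magic number, if known."""
    with open(path, "rb") as handle:
        header = handle.read(16)
    for magic, name in SIGNATURES.items():
        if header.startswith(magic):
            return name
    return "unidentified - candidate for closer inspection"

print(identify("accession_0001/report.pdf"))
```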

The presentation on departmental readiness provided useful insights into preparing for digital preservation projects. It focused on the way that maturity models can be used to benchmark your department’s readiness for digital preservation. The two main models discussed were the Digital Preservation Capability Maturity Model and the NDSA Levels of Digital Preservation. These models aim to identify gaps in an institution’s readiness for digital preservation, whilst also highlighting aspects of best practice that it could aim to achieve.

DPCMM: http://www.securelyrooted.com/dpcmm
NDSA: http://www.digitalpreservation.gov/documents/NDSA_Levels_Archiving_2013.pdf

A risk assessment exercise also formed part of the workshop. Those attending were asked to consider how various risks would affect the digital archival process. The risks would then be ranked on their likelihood of occurring, and the potential damage that they might cause. We would then propose potential solutions to help mitigate these risks, and prevent further ‘explosive’ risks from occurring. This was followed by assessing whether the scores for both criteria had improved.

The last presentation was on digital asset registers. It focused on the importance of creating and managing a detailed spreadsheet to hold an institution’s digital assets, with the aim of having one organised and accessible source of information on a digital collection. The presentation focused on how this register could be shared with all members of staff to promote a better understanding of a digital collection. This would remove the issue of having one staff member as the sole specialist on a collection, and promote further transparency throughout the digital preservation process. Another idea mentioned was that the register could be used to promote further funding for digital collections, by providing a visual representation of the digital preservation process.

I thoroughly enjoyed the DPC workshop and look forward to attending similar workshops.

 

Digital Preservation Workshop

It was a real privilege to attend the Digital Preservation Coalition’s workshop, ‘Getting Started with Digital Preservation’ in London on 17th May 2017. As a newcomer to this topic I was eager to learn more, and the workshop definitely didn’t disappoint, providing me with a fantastic insight into the tools recommended for digital preservation, the challenges it presents, and the solutions that can be used to overcome these.

The workshop began with an introduction to digital preservation, defined neatly by Sharon McMeekin (Head of Training and Skills) as the active management of digital content over a period of time to ensure continued access. We learnt about the sorts of features systems should incorporate to allow for continued access to digital content. These included:

• Resilience, standards, and open to testing
• Error checking, compatibility to multi-media, and back-up
• Authenticity checking

As the morning progressed it was interesting to learn more about some of the difficulties that digital preservation presents including:

• Media obsolescence
• Media failure or decay (otherwise known as ‘bit-rot’).
• Natural disaster
• Man-made error
• Malicious damage
• Viruses
• Network failure
• Disassociation

Methods of dealing with these challenges included storing more than one copy in different geographic locations, refreshing storage media, and integrity checking, also known as ‘fixity checking’, which is the process of checking whether a digital file has remained unchanged.

As part of this final solution we also learnt about ‘checksums’, which are like ‘digital fingerprints’ used to check whether the contents of a file have been altered.

The DPC also recommended generating a risk register as a further preventative measure to protect digital material against potential hazards. We even had a go at creating our own risk register based on a fictional scenario (a minimal sketch of such a register follows the list below). This involved recording the:

•  Type of risk
• Consequence of risk
• Likelihood  of occurrence
• Impact on institution
• Frequency
• Owner
• Response/solution
• New Likelihood of occurrence
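A minimal sketch of a register entry using these fields (the risk, scores and owner are invented purely for illustration) might look like this:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    risk_type: str
    consequence: str
    likelihood: int        # 1 (rare) to 5 (almost certain)
    impact: int            # 1 (negligible) to 5 (severe)
    frequency: str
    owner: str
    response: str
    new_likelihood: int    # re-scored after the response is in place

    def score(self, after: bool = False) -> int:
        """Simple likelihood x impact score, before or after mitigation."""
        return (self.new_likelihood if after else self.likelihood) * self.impact

register = [
    Risk("Media failure", "Loss of unique born-digital accessions",
         likelihood=4, impact=5, frequency="Ongoing", owner="Digital archivist",
         response="Keep three copies in two locations; run fixity checks",
         new_likelihood=1),
]

# Rank the register by current score and show the effect of mitigation.
for risk in sorted(register, key=lambda r: r.score(), reverse=True):
    print(f"{risk.risk_type}: {risk.score()} -> {risk.score(after=True)}")
```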

As well as safeguarding digital material, we learnt that a risk register has the added benefit of introducing clearer planning within an institution, serving as an advocacy tool, highlighting clearer responsibilities, and benefitting the Digital Asset Register.  DPC recommended that institutions use DRAMBORA, a digital repository audit method based on risk assessment which encourages organisations to generate an awareness of their objectives and activities before identifying and managing the risks to their digital collections.

Finally, Digital Asset Registers were recommended as useful tools for coordinating digital preservation, since they gather all of the digital information into one place and log preservation risks to collections. They also provide institutions with a finding aid in the absence of other discovery methods, and support best practice and advocacy.

The characterisation tool DROID was also mentioned as a useful software application for identifying file formats. Developed by the National Archives, this tool records the number, size, and format of each file in addition to creating a checksum for each.

The workshop was a wonderful opportunity to learn more about digital preservation and meet with other professionals from the same field. I am now really looking forward to undertaking some of my own digital preservation and archiving projects at the Bodleian.

ARA Film Archives Training Day

Yesterday I attended the ARA Film Archives Training Day at the Wessex Film and Sound Archive in Winchester. The four talks over the course of the day were an excellent introduction to some of the uses of film archives, as well as the issues associated with them.

The Wessex Film and Sound Archive is based in the Hampshire Record Office

Moving Collections: the impact of archive films in museum displays

Sarah Wyatt of the National Motor Museum gave a fascinating talk on the use of archive film and video footage in museum displays. She discussed a number of benefits in the use of videos, including acting as a restorative from “museum fatigue” (that familiar sensation of being mentally and physically exhausted after wandering around a museum for too long), helping to bring displays to life, and showing the motion of moving objects too delicate to be regularly operated.
One unexpectedly interesting takeaway from her talk was the revelation that videos in museums are not at all a recent idea. The Imperial War Museum used to enhance its displays with mutoscopes in the 1920s and 1930s!

Bringing Our History To Life: promoting the use of archive film in cross curricular learning

Zoe Viney of the Wessex Film and Sound Archive followed, with a talk on the use of archive film in teaching and the resource packs for schools they are currently trialling (and how archive film can be relevant beyond just history lessons). The positive effects she discussed included giving a greater insight into the past, supporting investigation and enquiry skills, and creating a greater sense of empathy when children view the footage and realise it is showing actual people, rather than an abstract idea of “the past.” Its usefulness became especially clear when she set an exercise to link a very short film clip, showing the return of a stolen ship, to possible teaching opportunities. Each group managed to provide a wealth of possibilities, from geography lessons based around ship routes and learning ocean names, to English lessons based around children writing applications to join the new ship crew. Any school children who get the opportunity to use the Wessex Film and Sound Archive resource packs will be very lucky.

Providing A Regional Screen Archive Service: preservation, digitisation, and access.

After a short break (including tea and biscuits, of course) Dr Frank Gray gave a talk centred mostly on how the Screen Archive South East functions, as well as showing some amazing examples of archive film from their collections. A personal highlight was noticing that their workflow for digitising film follows a very similar structure to ours for digitising cassette tapes – it’s exciting to see the similarities in practice between different media.
But the true highlight of his talk came in the examples of digitised film from their collections, and especially the Kinemacolor film shown in its original colours. Kinemacolor was a film format developed in Brighton during the early 20th century which used alternating red and green filters in projectors to produce colour when viewed. Unfortunately those projectors are now lost, so there was no way to view Kinemacolor film as it was intended to be seen until a way to digitally reconstruct the colour was established recently. Information about the Screen Archive South East’s past exhibitions of Kinemacolor can be found here.

‘The Two Clowns’, a 1906 Kinemacolor film by George Albert Smith, from http://screenarchive.brighton.ac.uk/portfolio/capturing-colour/

Vinegar Syndrome in Film Collections

Sarah Wyatt delivered the final session: a short, informative talk on vinegar syndrome, a condition that affects acetate film and, if left untreated and in the wrong conditions, will entirely degrade it. The titular smell is the most familiar symptom, caused by a release of acetic acid that causes irreparable damage at just 3–5 parts per million! Even more worryingly, the familiar smell is generally an advanced-stage symptom, and the syndrome cannot be reversed – only halted if proper precautions are taken. Earlier symptoms can include cracking, shrinking, warping, buckling, flaking and white powder deposits. It was very enlightening, and showed just how important proper storage is.

The back of Hampshire Record Office

By the end of the training day I had a new appreciation for film archives. I hadn’t realised before just how versatile they are, or how many uses there are beyond traditional documentary footage or news clips.