
Collecting Space: The Inaugural Science and Technology Archives Group Conference

On Friday 17th November I attended the inaugural Science and Technology Archives Group (STAG) conference, held at the fantastic Dana Library and Research Centre. The theme was ‘Collecting Space’, and it brought together a variety of people working in or with science and technology archives relating to the topic of ‘Space’. The day consisted of a variety of talks (with topics ranging from the Cassini probe to UFOs), a tour of the Skylark exhibition and a final discussion on the future direction of STAG.

What is STAG?

The Science and Technology Archives Group is a recently formed group (September 2016) that celebrates and promotes scientific archives and engages anyone with an interest in the creation, use and preservation of such archives.

The keynote presentation was delivered by Professor Michele Dougherty, who gave us a fascinating insight into the Cassini project, aided by some amazing photos.

Colour-coded version of an ISS NAC clear-filter image of Enceladus’ near surface plumes at the south pole of the moon. From Porco et al. 2006, doi: 10.1126/science.1123013

Her main concern with regard to archiving data was context. We were told how her raw data could be given to an archive, but without the relevant contextual information, such as calibration parameters, it would be almost meaningless and could easily be misinterpreted.

Dr James Peters from the University of Manchester told us of the unique challenges of the Jodrell Bank Observatory Archive, also called the ‘sleeping giant’. They have a vast amount of material that has yet to be accessioned but that requires highly specialised scientific knowledge to understand, highlighting the importance of the relationship between the creator of an archive and the repository. Promoting use of the archive was a particular concern, one shared by Dr Sian Prosser of the Royal Astronomical Society archives, who spoke of the challenges of current collection development. I’m looking forward to finding out about the events and activities planned for their bicentenary in 2020.

We also heard from Dr Tom Lean of the Oral History of British Science project at the British Library. This was a great example of the vast amount of knowledge and history that is effectively hidden. The success of a project is typically well documented, but the stories of the things that went wrong, or of the relationships between groups, have the potential to be lost. Whilst they may be lacking in scientific research value, they reveal the personal side of the projects and are a reminder of the people and personalities behind world-changing projects and discoveries.

Dr David Clarke spoke about the Ministry of Defence UFO files release programme. I was surprised to hear that as recently as 2009 there was a government-funded UFO desk. In 2009 the surviving records were transferred to The National Archives, where all the files were digitised and made available online. The demand for and reach of this content was huge, with millions of views and downloads from over 160 countries. Whilst people may dismiss its scientific relevance and use, such an archive provides an amazing window into the psyche of society at that time.

Dr Amy Chambers spoke about how much scientific research and knowledge can go into producing a film, using Stanley Kubrick’s 2001: A Space Odyssey as an example. The film was described as a ‘science fiction dream + space documentary’. Directors like Kubrick would delve deeply into the subject matter and speak to a whole host of professionals in both academia and industry to get the most up-to-date scientific thinking of the time, even researching concepts that would never make it on screen. This was highlighted as a way of capturing scientific knowledge and contemporary thinking about the future of science at that point in history. Today it is no different: for Interstellar, Christopher Nolan consulted Professor Kip Thorne, and the collaboration produced a publication on gravitational lensing in the journal Classical and Quantum Gravity.

It was great to see the Dana research library and a small exhibition of some of the space-related material that the Science Museum holds. This included the Apollo 11 flight plan, signed by all the astronauts who took part, along with a letter from Independent Television News, who had used the flight plan to help with the televised broadcast.

Apollo 11 flight plan

We also got to see the recently opened Skylark exhibition, celebrating British achievements in space research.

Scale model of the Skylark rocket at the exhibition entrance at the Science Museum, London

The final part of the conference was an open discussion focusing on the challenges and future of science and technology archives and how these could be addressed.

Awareness and exposure

From my experience as a chemistry graduate, I can speak first hand of the lack of awareness of science archives. I feel that I was not alone: during a science degree, especially for research projects, archives are rarely needed compared to other disciplines, as most of the material we needed was found in online journals. Although I completed my degree some time ago, I feel this is still the case today when I speak to friends who study and work in the science sector. Promoting science and technology archives to scientists (at any stage of their career, but especially at the start) would make them aware of the rich source of material out there that can benefit them, and in turn they may become more involved and interested in creating and maintaining such archives.

Content

For an archivist with little to no knowledge of the particular area of science, understanding the vastly complex data and material in a science and technology archive is a potentially impossible job. The nomenclature used in scientific disciplines can be highly specialised and specific, which can make deciphering the material extremely difficult.

This problem could be tackled in one of two ways. Firstly, the creator of the material, or a scientist working in that area, can be consulted. Whilst this can be time consuming, it is a necessity, as the highly specialised nature of certain topics can mean there are only a handful of people who can understand the work. Secondly, when the material is created, the creator should be encouraged to explain and store the data in a way that will allow future users to understand and contextualise it better.

As science and technology companies can be highly secretive entities, problems arise with exploiting sensitive material. It was suggested that STAG might seek the advice of other specialist archive groups that have dealt with highly sensitive archives.

It appears that there is still a great deal of work to do to promote access, exploitation and awareness of current science and technology archives (for both creators and users). STAG is a fantastic way to get like-minded people together to discuss and implement solutions. I’m really looking forward to seeing how this develops, and hopefully I will be able to contribute to this exciting, worthwhile and necessary future for science and technology archives.

Why and how do we Quality Assure (QA) websites at the BLWA?

At the Bodleian Libraries Web Archive (BLWA), we Quality Assure (QA) every site in the web archive. This blog post aims to give a brief introduction to why and how we QA. The first step of our web archiving involves crawling a site using the tools developed by Archive-It. These tools allow entire websites to be captured and browsed via the Wayback Machine as if they were live, letting you download files, view videos and photos and interact with dynamic content, exactly as the website owner would want you to. However, due to the huge variety and technical complexity of websites, there is no guarantee that every capture will be successful (that is to say, that all the content is captured and works as it should). Currently there is no accurate automatic process to check this, and so this is where we step in.

We want to ensure that the sites on our web archive are an accurate representation in every way. We owe this to the owners and the future users. Capturing the content is hugely important, but so too is how it looks, feels and how you interact with it, as this is a major part of the experience of using a website.

Quality assurance of a crawl involves manually checking the capture. Using the live site as a reference, we explore the archived capture, clicking on links, trying to download content or view videos, and noting any major discrepancies from the live site or any other issues. Sometimes a picture or two will be missing, or a certain link is not resolving correctly, which can be relatively easy to fix; other times there can be massive differences compared to the live site, and so the (often long and sometimes confusing) process of solving the problem begins. Some common issues we encounter are listed below (a toy automated check is sketched after the list):

  • Incorrect formatting
  • Images/video missing
  • Large file sizes
  • Crawler traps
  • Social media feeds
  • Dynamic content playback issues
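
To give a flavour of what one of these checks can look like, here is a minimal sketch of a script that compares the images referenced by a live page against its archived capture. It is purely illustrative, not the tooling we actually use at the BLWA; the URLs are placeholders, and because the Wayback Machine rewrites resource URLs, the comparison is done crudely on filenames alone.

```python
# Illustrative only: compare image filenames on a live page and its capture.
# Not BLWA tooling; the URLs below are placeholders.
import posixpath
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

def image_filenames(page_url: str) -> set[str]:
    """Return the set of image filenames referenced by a page."""
    html = requests.get(page_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    # Compare filenames only, since archived captures rewrite full URLs.
    return {
        posixpath.basename(urlparse(img["src"]).path)
        for img in soup.find_all("img", src=True)
    }

live = image_filenames("https://example.org/")
archived = image_filenames(
    "https://web.archive.org/web/2/https://example.org/"  # placeholder capture URL
)

# Filenames on the live page but absent from the capture are candidates
# for manual review; a human still has to confirm each one.
for name in sorted(live - archived):
    print("check capture for:", name)
```

Even a crude check like this only flags candidates; as the list above suggests, issues such as crawler traps or dynamic playback problems still need a person looking at the capture.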

There are many techniques available to help us solve these problems, but there is no ‘one fix for all’; the same issue on two different sites may require two different solutions. There is a lot of trial and error involved, and over the years we have gained a lot of knowledge about how to solve a variety of issues. Archive-It also has a fantastic FAQ section on their site. However, if we have gone through the usual avenues and still cannot solve a problem, then our final port of call is to ask the geniuses at Archive-It, who are always happy and willing to help.

An example of how important and effective QA can be: the initial test capture did not have the correct formatting and was missing images. This was resolved after the QA process.

QA’ing is a continual process. Websites add new content, or companies change to different website designers, meaning captures of websites that have previously been successful might suddenly have an issue. It is for this reason that every crawl is given special attention and is QA’d. QA’ing the captures before they are made available is a time-consuming but incredibly important part of the web archiving process at the Bodleian Libraries Web Archive. It allows us to maintain a high standard of capture and provide an accurate representation of the website for future generations.


Email Preservation: How Hard Can it Be? DPC Briefing Day

On Thursday 6th July 2017 I attended the Digital Preservation Coalition briefing day on email preservation, held in partnership with the Andrew W. Mellon Foundation and titled ‘Email preservation: how hard can it be?’. It was hosted at The National Archives (TNA); this was my first visit to TNA and it was fantastic. I didn’t know a great deal about email preservation prior to this, so I was really looking forward to learning about the topic.

The National Archives. Photograph by Mike Peel (www.mikepeel.net), CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=9786613

The aim of the day was to engage in discussion about some of the current tools, technologies and thoughts on email preservation. It was orientated around the ‘Task Force on Technical Approaches to Email Preservation’ report, which is currently in its draft phase. We also heard interesting case studies from the British Library, TNA and Preservica, each presenting their own unique experiences of this topic. It was a great opportunity to learn about this area and to hear from the co-chairs (Kate Murray and Christopher Prom) and the audience about their thoughts on the current situation and possible future directions.

We heard from Jonathan Pledge of the British Library (BL). He told us about the forensic capture expertise the BL has gained using EnCase to capture email data from hard drives, CDs and USB drives. We also got an insight into how they are deciding which email archive tool to use: Aid4Mail fits better with their workflow, but they were also considering ePADD for its holistic approach. During ingest they separate the emails from the attachments. They found that after the time-consuming process of removing emails that would violate data protection law, there was very little usable content left, as often an entire thread would have to be redacted because of one message. This is not the most effective use of an archivist’s time and is something they are working to address.
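
As a concrete illustration of what ‘separating emails from attachments’ can involve, here is a minimal sketch using Python’s standard mailbox and email modules. It assumes the acquired account has already been exported to an mbox file; the filename and output directory are hypothetical, and this is my own sketch rather than the BL’s actual workflow.

```python
# A toy sketch, not the BL's workflow: split an mbox into message bodies
# and attachment files. "acquired_account.mbox" is a placeholder name.
import mailbox
from pathlib import Path

out_dir = Path("attachments")  # hypothetical output directory
out_dir.mkdir(exist_ok=True)

for i, msg in enumerate(mailbox.mbox("acquired_account.mbox")):
    for part in msg.walk():
        filename = part.get_filename()
        if filename:  # parts carrying a filename are usually attachments
            payload = part.get_payload(decode=True) or b""
            # Prefix with the message index so identically named attachments
            # do not collide; a real tool would also sanitise filenames.
            (out_dir / f"{i:05d}_{filename}").write_bytes(payload)
# The remaining message bodies could then be screened separately,
# e.g. for data protection issues, before any release.
```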

We also heard from Anthea Seles, who works with government collections at TNA. We learnt that, from their research, approximately 1TB of data in an organisation’s own electronic document and records management system is linked to around 10TB of related data in shared drives. Her focus was on discovery and data analytics. For example, one way to increase efficiency and focus the attention of the curator was to batch emails. If an email was sent from TNA to a vast number of people, then there is a high chance that the content does not contain sensitive information; however, if it was sent to a high-profile individual, then there is a higher chance that it will, so the curator can focus their attention on those messages.
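
The batching heuristic is simple enough to sketch in a few lines. The following is my own toy illustration of the idea (not TNA’s system): rank messages so that narrowly addressed mail, which is more likely to be sensitive, reaches a human reviewer first.

```python
# A toy illustration of recipient-based triage, not TNA's system.
import mailbox
from email.utils import getaddresses

def recipient_count(msg) -> int:
    """Count the addresses across the To and Cc headers."""
    return len(getaddresses(msg.get_all("To", []) + msg.get_all("Cc", [])))

msgs = list(mailbox.mbox("acquired_account.mbox"))  # placeholder filename

# Fewest recipients first: narrowly addressed mail is more likely to be
# sensitive, so it gets curatorial attention early.
for msg in sorted(msgs, key=recipient_count):
    print(recipient_count(msg), msg.get("Subject", "(no subject)"))
```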

Hearing from Preservica was interesting, as it gave an insight into the commercial side of email archiving. In their view, preservation itself was not the issue. Instead, their attention was focused on problems such as efficiently identifying duplicate or unwanted emails, developing tools for whole-collection email analysis and, interestingly, how to acquire emails via continuous transfer.

Emails will not be the main form of communication forever (the rise in the popularity of instant messaging is clear to see), but we learnt that growth in their use is still expected for the near future.

One of the main issues brought up was the potential size of future email archives, and the problems that come with effective and efficient appraisal. What is large in academic terms, e.g. 100,000 emails, is not in government: the figure of over 200 million emails at the George W. Bush Presidential Library is phenomenal, and the Obama administration’s is estimated at 300 million. This requires smart solutions, and we learnt how artificial intelligence and machine learning could help.

Continuous active learning was highlighted as a way to improve searches. The example given was searching for ‘Miami dolphins’: the Miami Dolphins are an American football team, but someone might instead be looking for information about dolphins in Miami. Initially the computer presents varied search results and the user chooses which are the more relevant; over time it learns what the user is looking for in cases where searches are ambiguous.
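
To make the idea concrete, here is a minimal relevance-feedback sketch using scikit-learn. The corpus, labels and model choice are all my own illustrative assumptions, not the system discussed on the day; each round of user feedback would grow the labelled set, and the model would be refit.

```python
# A minimal relevance-feedback sketch (illustrative, not the tool discussed).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus: 'Miami dolphins' is ambiguous between the team and the animal.
docs = [
    "Miami Dolphins win season opener at Hard Rock Stadium",
    "dolphin pod spotted off the coast of Miami",
    "Dolphins quarterback injured in training",
    "marine biologists study dolphins near Miami Beach",
]
labels = [0, 1, 0, 1]  # user feedback so far: this user wants the animals

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(docs), labels)

# Rank unseen results by predicted relevance; new feedback would extend
# docs/labels and the model would be refit, improving over time.
new = ["Dolphins sign new coach", "rescued dolphin released into Miami bay"]
scores = model.predict_proba(vectorizer.transform(new))[:, 1]
for doc, score in sorted(zip(new, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```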

Another issue highlighted was: how do you make sure that you have found the correct person? How do you avoid false positives? At TNA the ‘Traces Through Time’ project aimed to do exactly that, initially with World War One records, and this big-data analytics technology can also be used with email archives. There is also work on mining email signatures as a way to better determine the ownership of a message.
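
At its simplest, signature mining can be pattern matching on the closing lines of a message body. The sketch below is my own toy illustration of the idea (the actual research is far more sophisticated): it looks for a common sign-off phrase followed by a capitalised name.

```python
# A toy sketch of signature mining: guess a sender's name from the sign-off
# lines of a body. Illustrative only; one regex is not a real solution.
import re

SIGNOFF = re.compile(
    r"(?i:kind regards|regards|best wishes|thanks)[,.]?\s*\n"
    r"(?P<name>[A-Z][a-z]+(?: [A-Z][a-z]+)+)"
)

def guess_sender(body: str) -> str | None:
    """Return a name found after a sign-off phrase, if any."""
    match = SIGNOFF.search(body)
    return match.group("name") if match else None

print(guess_sender("See attached.\n\nBest wishes,\nJane Smith\nSenior Archivist"))
# prints: Jane Smith
```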

User experience was also discussed, with emulation an area of particular interest. Its great advantage is that it recreates how the original user would have experienced the emails; however, the technology is still being developed. Bit-level preservation is a solution for making sure we capture and preserve the data now: it prevents loss of the archive and allows the information and value to be extracted in the future, once the tools have been developed.

It was interesting to hear how policy could affect how easy it would be to acquire email archives. Under the new General Data Protection Regulation, which comes into effect in May 2018, anyone in breach will face heavier penalties, up to 4% of annual worldwide turnover. This means that companies may err on the side of caution with regard to keeping personal data such as emails.

Whilst email protocols are well standardised, allowing emails to be sent from one client to another (e.g. from an AOL account of the early 1990s to the Gmail of today), the acquisition of emails is not. When archivists get hold of email archives, they are left with the remnants of whatever the email client and user have done to them: metadata may have been added or removed, and formats can vary. This adds a further level of complexity to the whole process.

The day was thoroughly enjoyable and a fantastic way to learn about archiving emails. As email is now one of the main methods of communication for government, large organisations and personal use, it is important that we develop the tools, techniques and policies for its preservation. To answer the question ‘how hard can it be?’: I’d say very. Emails are not simple objects of text; they are highly complex entities comprising attachments, links and embedded content. The solution will be complex too, but there is a great community of researchers, individuals, libraries and commercial entities working on the problem. I look forward to hearing the update in January 2018, when the task force is due to meet again.