Category Archives: Activity

Children’s Papers: Series 1 catalogue of Opie Archive now available

The cataloguing of the first series of the Opie Archive, which comprises children’s papers, as well as related correspondence from school teachers, has now been completed. The catalogue is available to search online here.

The material in the first 13 boxes spans most of the 1950s, during which time Iona and Peter Opie were working on their book, The Lore and Language of Schoolchildren, published towards the end of 1959. They began by placing an advert in the Times Educational Supplement, seeking teachers willing to assist in their research. Those who responded soon put the Opies in touch with further colleagues in other schools, until they had recruited a wide network of enthusiastic teachers across the country. To keep track of their dizzying number of correspondents, the Opies kept meticulous notes in a series of small address books, in which each contact was assigned a reference code. The material in the first 13 boxes is therefore arranged by the reference codes of the contacts who sent in each batch of papers. The subsequent 20 boxes, following the publication of The Lore and Language, date mostly from 1960 onwards. From this point, the material is instead arranged alphabetically by the area it came from – from Aberdeen to York.

The Opie address books, which hold the key to all their many correspondents

The papers, often accompanied by colourful illustrations, list the children’s favourite counting out and skipping rhymes, describe games such as ball games, chasing games and marbles, explain slang terms and expressions currently in use, recount the latest playground fads and crazes, and outline various traditions, superstitions and other playground lore that have been passed down to them. Some of the games described would make modern-day readers flinch, such as the popular game “Knifey”, which involves throwing a pocket knife to stick in the ground near the opponent’s leg. The children’s papers are usually prefaced by a note from their teacher, often apologising for spelling mistakes in their pupils’ work, and sometimes recalling their own childhood songs and games. The teachers’ insights are often particularly interesting, such as when one teacher observes that the few English-language songs and rhymes known to the children in their predominantly Welsh-speaking school in Ruthin, north Wales, appear to be the legacy left by children from Liverpool, who had been evacuated there during the war.

The series also includes a sub-section of material received from sources other than schools, such as from fellow researchers working in the same field as the Opies, or a collection of local rhymes and songs from across Scotland gathered by the editors of the Aberdeen Press and Journal newspaper. This section also includes ten boxes of children’s essays submitted to the Camberwell Public Libraries Essay Competition, passed on to the Opies by Camberwell’s Chief Librarian. These competition entries provide a fascinating glimpse into the children’s thoughts and lives. The essays are very clearly rooted in their time, apparent not only in the 1950s and ’60s hairstyles and fashions discernible in some of the charming, childish illustrations, but also in the children’s responses to essay topics such as “What I want to be when I leave school”, in which all the girls aspire to be nurses, dressmakers and typists, while their male counterparts seek to become firemen, policemen and train drivers. Other interesting responses were elicited by the 1955 essay title “A visit to the moon” – some children set their stories firmly in the realm of fantasy, imagining being transported to the moon by fairies or goblins, while others wrote of rocket ships but set their stories in the far-distant year 3000, little imagining that a moon landing would become reality just over a decade later.

Shiny new archive boxes, all labelled up and barcoded!

To begin with, the bundles of papers were mostly still packaged in the same old brown envelopes in which they had been stored by the Opies. Part of our task, in order to preserve the material long-term, was to remove all the harmful fasteners that could damage the papers over time, such as rusty paperclips, pins and staples, as well as brittle, dried-up elastic bands. The papers could then be repackaged into standard, acid-free archive folders and boxes. Where whole batches of papers had been folded or rolled up within their envelopes, the process of unfurling and flattening them to lie safely and neatly in their archive folders was rather time-consuming.

Some of the rusty fasteners, removed from the Opie schools material

Our final task was foliation – physically numbering all the individual leaves (or “folios”) in each box, in pencil, so that the original order of the pages can never become muddled. The foliation process demanded sustained concentration, as it was all too easy to miscount or accidentally skip a page, especially given that the leaves in each bundle were all different sizes. Once such an error is discovered, all the subsequent numbers in the sequence are, of course, likewise out of sync – a highly frustrating occurrence which we sought to avoid! In total, we numbered over 24,500 leaves across 46 boxes.

The Opie cataloguing project is generously funded by the Wellcome Trust. While the catalogue of this first series has now been completed, please note that work on the remaining Opie Archive is still ongoing, and sequences of the Opie Archive will continue to become temporarily unavailable whilst preservation, cataloguing and digitisation work is carried out. We will try to accommodate urgent researchers’ requests for access wherever possible; however, if you need to consult material from the Opie Archive before June 2018, please do contact us with as much advance notice as possible, so that we can advise on the availability of the material in question and make any necessary arrangements.

Supported by the Wellcome Trust

Oxfam archive inspires potential University of Oxford students

Nineteen year-12 students recently attended a seminar in the Weston Library’s impressive Bahari Room as part of a summer school organised by Wadham College.

The programme allows students from schools with low application/entry rates into higher education to experience university life through a four-day residential. During the visit, students attended lectures, seminars and tutorials, giving them a taste of what it is like to be an undergraduate at the University of Oxford.

The theme for this year was ‘The Politics of Immigration’ and in the seminar, students had the chance to handle a selection of material taken from the Oxfam archive. They were then asked to discuss the representation of Palestinian refugees in the archival documents dating from the 1960s. The material used was taken from the Communications section of the archive – i.e. records of Oxfam’s external communication with the public – and is just a very small example of the material available to the public in the extensive Oxfam archive (the Communications catalogue is online here).

An example of some of the material that the students were using from the Communications section of the Oxfam archive.

Though the students were initially hesitant, we were pleased when two eager volunteers stepped forward to open up the archival boxes and find the files that were needed. After being carefully handled by our volunteers, all the files were laid out for the students to analyse in groups.

Dr. Tom Sinclair and a student unpacking an archival box.

The students then took it in turns to give examples of how Palestinian refugees were represented in the Oxfam material. One of the excellent examples that students spotted was how Oxfam was able to remain politically neutral (a constitutional necessity for charities) by not specifying why the refugees were displaced. Students also remarked that Oxfam preferred to focus on individual stories in their communications – for instance, that of a displaced teenager with aspirations to be an engineer – which the students suggested helped humanise a crisis that could be difficult for the public to comprehend.

The students studied selected material from the Oxfam archive and gave examples of how Palestinian refugees were represented.

Overall, the ‘Politics of Immigration’ seminar was a great success that gave the students a good feel for what it would be like to use the archives to complete research for a dissertation or other academic project.

Dr Tom Sinclair, who organised the summer school, said: “It was such a privilege to be in that lovely room and have such free access to the archives… I really think that a couple of the students were inspired, and I hope they’ll be future Oxford undergraduates visiting the archives again in a few years’ time.”

Bountiful Harvest: Curation, Collection and Use of Web Archives

The theme for the ARA Annual Conference 2017 is ‘Challenge the Past, Set the Agenda’. I was fortunate enough to attend a pre-conference workshop in Manchester, run by Lori Donovan and Maria Praetzellis from the Internet Archive, about the bountiful harvest that is web content, and the technology, tools and features that enable web archivists to overcome the challenges it presents.

Part I – Collections, Community and Challenges

Lori gave us an insight into the use cases of Archive-It partner organisations to show us the breadth of reasons why institutions archive the web. The creation of a web collection can be for one (or indeed all) of the following reasons:

  • To maintain institutional history
  • To document social commentary and the perspectives of users
  • To capture spontaneous events
  • To augment physical holdings
  • Responsibility: Some documents are ONLY digital. For example, if a repository upholds a role to maintain all published records, a website can be moved into the realm of publication material.

When asked about duplication amongst web archives, and whether it is a problem if two different organisations archive the same web content, Lori argued that duplication is not worrisome: multiple captures of a website are good for long-term preservation in general, and in some cases organisations can work together on collaborative collecting if the collection scope is appropriate.

Ultimately, the priority of crawling and capturing a site is to recreate the same experience a user would have had if they had visited the live site on the day it was archived. Combining this with an appropriate archiving frequency means that change over time can also be preserved. This is hugely important: the ephemeral nature of internet content is widely attested to. Thankfully, the misconception that ‘online content will be around forever’ is being confronted. Lori put forward some examples to illustrate why the archiving of websites is crucial.

In general, a typical website lasts 90-100 days before one of the following happens:

  1. The content changes
  2. The site URL moves
  3. The content disappears completely

A study was carried out on the Occupy Movement sites archived in 2012. Of 582 archived sites, only 41% were still live on the web as of April 2014. (Lori Donovan)

Furthermore, we were told about a 2014 study which concluded that 70% of scholarly articles online with text citations suffered from reference rot over time. This speaks volumes about the need to preserve copies for both authentication and academic integrity.

The challenge continues…

Lori also pointed us to the NDSA 2016/2017 survey, which outlines the principal concerns within web archiving currently: social media (70%), video (69%), and interactive media and databases (both 62%). Any dynamic content can be difficult to capture and curate, so sharing advice and guidelines amongst leaders in the web archiving community is a key factor in determining successful practice, both for current web archivists and for those of future generations.

Part II – Current and Future Agenda

Maria then talked us through some key tools and features which enable more capable crawling, higher-quality captures and the preservation of web archives for access and use:

  • Brozzler. Definitely my new favourite portmanteau (browser + crawler = brozzler!), Brozzler is the newly developed crawler from the Internet Archive which is replacing the combination of the Heritrix and umbra crawlers. Brozzler captures HTTP traffic as it is loaded, works with YouTube to improve media capture, and writes the data immediately to a WARC file. It also uses a real browser to fetch pages, which enables it to capture embedded URLs and extract links.
  • WARC. The Web ARChive (WARC) file format is the ISO standard for web archives. It is a concatenated file written by a crawler, with long-term storage and preservation specifically in mind. However, Maria pointed out that WARC files are not constructed to make research easy (more on this below; a short reading sketch also follows this list).
  • Elasticsearch. The full-text search system does not just index the HTML content displayed on web pages; it also searches PDF, Word and other text-based documents.
  • Solr. A metadata-only search tool. Metadata can be added in Archive-It at collection, seed and document level.
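To give a flavour of what a WARC actually contains, here is a minimal sketch using the Python warcio library (my own choice of tool for illustration, not one named in the workshop, and assuming a local file called “capture.warc.gz”). It iterates over the records in a capture and prints the capture date and URL of each HTML response:

```python
# Minimal sketch: iterate over the records in a WARC file with warcio.
# Assumes a local file "capture.warc.gz"; install with `pip install warcio`.
from warcio.archiveiterator import ArchiveIterator

with open("capture.warc.gz", "rb") as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type != "response":
            continue  # skip request, metadata and other record types
        url = record.rec_headers.get_header("WARC-Target-URI")
        date = record.rec_headers.get_header("WARC-Date")
        content_type = record.http_headers.get_header("Content-Type", "")
        if "text/html" in content_type:
            print(date, url)
```

Even this simple loop shows why replay tools and derivative datasets exist: the WARC interleaves requests, responses and metadata, so any research use starts with filtering and restructuring.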

Supporting researchers now and in the future

Being able to navigate an archived site as if it were live can shed so much light on the political and social climate at its time of capture. Yet Maria explained that the raw captured data, rather than just the replay, is also a rich area for potential research and, if handled correctly, an invaluable research tool.

As well as the use of Brozzler as a new crawling technology, Archive-It research services offer a set of derivative dataset files which are less complex than WARC and allow for data analysis and research. One of these derivative datasets is the Longitudinal Graph Analysis (LGA) dataset, which allows a researcher to analyse trends in the links between URLs over time across an entire web collection.
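To make the idea concrete, the sketch below is a toy illustration of longitudinal link analysis (my own simplification, not the actual LGA file format or schema). Given simple (source URL, target URL, capture date) triples, it counts how often one site links to another in each capture year:

```python
# Toy sketch of longitudinal link analysis: count source->target links per year.
# The triples below are invented placeholders, not real Archive-It data.
from collections import Counter, defaultdict

links = [
    ("http://example.org/", "http://example.net/page", "2012-03-01"),
    ("http://example.org/", "http://example.net/page", "2014-07-15"),
    ("http://example.org/", "http://example.com/", "2014-08-02"),
]

per_year = defaultdict(Counter)
for source, target, date in links:
    year = date[:4]
    per_year[year][(source, target)] += 1

for year in sorted(per_year):
    for (source, target), count in per_year[year].most_common():
        print(f"{year}: {source} -> {target} ({count} capture(s))")
```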

Maria acknowledged that there are lessons to be learnt when supporting researchers who use web archives, including the need for technical proficiency training and reference resources. The typology of researchers using web archives is ever-growing: social and political scientists, digital humanities disciplines, computer science, and documentary and evidence-based research, including legal discovery.

What Lori and Maria both made clear throughout the workshop was that the development and growth of web archiving is integral to challenging the past and preserving access over the long term. I really appreciated the insight into how the life cycle of web archiving is a continual process, from creating a collection through to research services, whilst simultaneously managing the workflow of curation.

When in Manchester…

Virtual Archive, Central Library, Manchester

I couldn’t leave Manchester without exploring the John Rylands Library and Manchester’s Central Library. In the latter, an interactive digital representation of a physical archive let visitors choose a box, much as they might from a physical archive’s arrangement, and then projected the digitised content onto the screen once selected. A few streets away in Deansgate, I had just enough time in the John Rylands to learn that the fear of beards is called pogonophobia. Go and visit yourself to learn more!

Special collections reading room, John Rylands Library, Manchester

The Archive of Emily Hobhouse is now available

“to call a woman ‘hysterical’ because you have not the knowledge necessary to deny her facts is the last refuge of the unmanly and the coward…I always felt when termed hysterical that I had triumphed because it meant my arguments cannot be met nor my statements denied…” [MS. Hobhouse 25].

A strong-willed, compassionate and at times controversial figure, Emily Hobhouse is best known for her work publicising the conditions in the concentration camps which were set up by the British government to detain predominantly women and children during the Anglo-Boer War (1899-1902).

Report on the conditions in the camps for the Committee of the Distress Fund for South African Women and Children, MS. Hobhouse 4

Travelling to South Africa in December 1900, Hobhouse reported on the widespread hunger, death and disease that she encountered there, distributing aid gathered by her Distress Fund for South African Women and Children and putting pressure on the British government to improve conditions. This led the government to send out a Ladies’ Commission led by Millicent Fawcett, a contemporary but by no means a friend of Emily Hobhouse.

Although Hobhouse was not permitted to join the commission, it would go on to confirm her initial reports and make similar recommendations. In 1901 Hobhouse attempted another visit to the camps, only to be refused permission to disembark and deported back to England. In 1905 she returned to South Africa to establish a Home Industries scheme to support rehabilitation, opening schools of spinning, weaving and lace-making for local girls.

“a war is not only wrong in itself, but a crude mistake” [MS. Hobhouse 10]

A committed pacifist, Hobhouse travelled to Germany and Belgium during World War One to investigate conditions and meet with the German foreign minister, an act which to some put her on the wrong side of public opinion. Following the armistice, Hobhouse continued her commitment to relief work, and in 1919 set up a local relief fund in Leipzig, where she was honoured and awarded the German Red Cross decoration of second class.

The fascinating collection includes letters, diaries, and her own extensive writings, which reveal her unyielding dedication to her work. The collection also contains papers of her brother, Leonard Trelawny Hobhouse (1864-1929), a social philosopher and journalist.

While she is an often forgotten figure in British history, Emily Hobhouse is still remembered as a heroine in South Africa, where her ashes are buried in the Women’s Monument at Bloemfontein. On her death, Mahatma Gandhi wrote the following memorial:

Gandhi’s tribute to Emily Hobhouse, MS. Hobhouse 23.

The Archive of Emily Hobhouse is now available to readers in the Weston Library. The catalogue can be accessed here.

A selection of Emily Hobhouse’s own writings are now available to view online.


PDF/A: Challenges Meeting the ISO 19005 Standard

Anna Oates (MSLIS Candidate, University of Illinois at Urbana-Champaign and NDNP Coordinator Graduate Assistant, Preservation Services) explaining the differences between PDF and PDF/A

We were excited to attend the recent project presentation entitled: ‘A Case Study on Theses in Oxford’s Institutional Repository: Challenges Meeting the ISO 19005 Standard’ given by Anna Oates, a student involved in the Oxford-Illinois Digital Libraries Placement Programme.

The presentation focused initially on the PDF/A format. PDF/A differs from standard PDF in that it avoids common long-term access issues associated with PDF. For example, a PDF created today may look and behave differently in 50 years’ time, because many visual aspects of the PDF are not saved into the file itself (PDFs use font linking instead of font embedding). The standardised PDF/A format attempts to remedy this by embedding metadata within the file and restricting certain features commonly found in PDF which could inhibit long-term preservation.

Aspects excluded from PDF/A include:

  • Audio and video content
  • JavaScript executable files
  • All forms of PDF encryption

PDF/A is therefore better suited to the long-term preservation of digital material, as it maintains the integrity of the information in the source files, be it textual or visual. Oates described PDF/A as having multiple ‘flavours’: PDF/A-1, published in 2005, includes conformance level A (Accessible – maintains the structure of the file) and B (Basic – maintains the visual appearance only). Versions 2 and 3, published in 2011 and 2012 respectively, added conformance level U (Unicode – enabling the embedding of Unicode information) alongside other features such as JPEG 2000 compression and the embedding of arbitrary file formats within PDF/A documents.
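A file declares which PDF/A flavour it claims to conform to in its XMP metadata. The sketch below is a minimal illustration using the pikepdf library and a hypothetical file name, neither of which featured in the talk; note that it only reports what the file claims, and is no substitute for full validation with a tool such as veraPDF:

```python
# Minimal sketch: read the claimed PDF/A flavour from a file's XMP metadata.
# Assumes a local file "thesis.pdf" and a recent pikepdf (`pip install pikepdf`).
import pikepdf

with pikepdf.open("thesis.pdf") as pdf:
    meta = pdf.open_metadata()
    part = meta.get("pdfaid:part")                # e.g. "1", "2" or "3"
    conformance = meta.get("pdfaid:conformance")  # e.g. "A", "B" or "U"

if part:
    print(f"Claims conformance with PDF/A-{part}{(conformance or '').lower()}")
else:
    print("No PDF/A identification found - probably an ordinary PDF")
```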

Oates specified that different types of documents benefited from different ‘flavours’ of PDF/A, for example, digitised documents were better suited to conformance level B whereas born digital documents were better suited to level A.

Whilst setting out the benefits of PDF/A, Oates also highlighted the myriad issues associated with the format. Firstly, while experimenting with creating and conforming PDF/A documents, she noted that the conformed documents showed slight differences from their sources, such as changes to the colour of pixels in embedded image files (the differences were smaller with programs like PDF Studio). This amounts to a clear alteration of the authenticity of the original source file.

Oates compared source images to PDF/A converted images and found obvious visual differences.

Secondly, Oates noted that when converting files from PDF to PDF/A-1b, smart software would change the decode filter of embedded images (for example, from JPXDecode, used for JPEG 2000, to DCTDecode, which is accepted by ISO 19005) in order to ensure conformance with ISO 19005. However, despite the benefit of avoiding non-conformance, the software had increased the file size of the PDF by 65%. Such an increase poses obvious issues in terms of storage and cost for organisations using PDF/A.
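Such filter substitutions can be spotted by listing the decode filters of each embedded image before and after conversion. A minimal sketch (again using pikepdf purely as an illustration, with a hypothetical file name) might look like this:

```python
# Minimal sketch: list the decode filter of each image XObject in a PDF,
# e.g. to compare a source PDF with its PDF/A-1b conversion.
# Assumes a local file "thesis.pdf" and a recent pikepdf.
import pikepdf

with pikepdf.open("thesis.pdf") as pdf:
    for page_number, page in enumerate(pdf.pages, start=1):
        for name, image in page.images.items():
            filters = image.get("/Filter")  # e.g. /DCTDecode or /JPXDecode
            print(f"page {page_number} image {name}: {filters}")
```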

Oates’ workflow for creation and conformance checking of PDF/A files using different PDF/A software

Format uptake was also discussed. Oates found that PDF/A has not been widely utilised by UK universities for the long-term preservation of dissertations and theses. However, she provided examples of Electronic Theses and Dissertations repositories that do use PDF/A, including Concordia University, Johns Hopkins University, McGill University, Rutgers University, University of Alberta, University of Oulu and Virginia Tech. Alongside this, it was mentioned that uptake amongst research and cultural heritage institutions includes the Archaeology Data Service (ADS), the British Library, the California Digital Library, Data Archiving and Networked Services (DANS), the Library of Congress and the U.S. National Archives and Records Administration (NARA).

“Adobe Preflight has failed to recognize most of the glyph errors. As such, veraPDF will remain our final tool for validation.” (Anna Oates)

Oates therefore concluded that PDF/A is not a complete solution to PDF preservation, and mentioned that the new ISO standard would bring new issues and considerations for PDF/A users.

Following the presentation, the audience debated whether PDF/A should still be used. Some considered whether other solutions to PDF preservation existed; one proposal was to keep both the PDF/A and the original PDF. However, many still felt that PDF/A provided the best solution available despite its various drawbacks.

Hopefully Oates’ findings will highlight the areas for improvement in both PDF/A conversion and validation software and in the conformance requirements of the ISO 19005 standard, to ensure PDF/A is up to the task of digital preservation.

To learn more about PDF/A, have a look at Adobe’s own e-book PDF/A In a Nutshell.

Alice, Ben and Iram (Trainee Digital Archivists)

Email Preservation: How Hard Can it Be? DPC Briefing Day

Miten and I outside the National Archives, looking forward to a day of learning and networking

Last week I had the pleasure of attending a Digital Preservation Coalition (DPC) Briefing Day titled Email Preservation: How Hard Can it Be? 

In 2016 the DPC, in partnership with the Andrew W. Mellon Foundation, announced the formation of the Task Force on Technical Approaches to Email Archives to address the challenges presented by email as a critical historical source. The Task Force delineated three core aims:

  1. Articulating the technical framework of email
  2. Suggesting how tools fit within this framework
  3. Beginning to identify missing elements.

The aim of the briefing day was two-fold: to introduce and review the work of the task force so far in identifying emerging technical frameworks for email management, preservation and access; and to discuss more broadly the technical underpinnings of email preservation and the associated challenges, using a series of case studies to illustrate good-practice frameworks.

The day started with an introductory talk from Kate Murray (Library of Congress) and Chris Prom (University of Illinois Urbana-Champaign), who explained the goals of the task force in the context of emails as cultural documents worthy of preservation. They noted that email is a habitat where we live a large portion of our lives, both work and personal. Looking at the terminology, they acknowledged that email is an object, several objects and a verb, and that this multi-faceted nature adds to the complexity of preserving it. Ultimately, email is a transactional process whereby a sender transmits a message to a recipient and, from a technical perspective, a protocol that defines a series of commands and responses, operating rather like a computer programming language, which permits email processes to occur.
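To make that dual nature concrete, here is a minimal sketch using only the Python standard library (my own illustration, not something shown on the day). The EmailMessage object captures email as a structured bundle of headers, body and attachments, while smtplib drives the command-and-response protocol that transmits it; all addresses and hostnames below are placeholders:

```python
# Minimal sketch: email as an object (headers, body, attachment) and as a
# protocol exchange. Addresses and the SMTP host are hypothetical placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.org"
msg["To"] = "recipient@example.org"
msg["Subject"] = "Minutes of the last meeting"
msg.set_content("Please find the minutes attached.")
msg.add_attachment(b"%PDF-1.7 ...", maintype="application",
                   subtype="pdf", filename="minutes.pdf")

# Transmission: smtplib issues the underlying SMTP commands (EHLO, MAIL FROM,
# RCPT TO, DATA) and checks the numbered response the server sends to each one.
with smtplib.SMTP("mail.example.org") as smtp:
    smtp.send_message(msg)
```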

From this standpoint, several challenges of email preservation were highlighted:

  • Capture: building trust with donors, aggregating data, creating workflows and using tools
  • Ensuring authenticity: ensuring that no part of the email (envelope, header, message data etc.) has been tampered with
  • Working at scale: the sheer volume of email to be appraised and processed
  • Addressing security concerns: malicious content leading to vulnerability, confidentiality issues
  • Messages and formats
  • Preserving attachments and linked/networked documents: can these be saved and do we have the resources?
  • Tool interoperability


The first case study of the day, “Collecting Email Archives”, was presented by Jonathan Pledge from the British Library, who explained that born-digital research began at the British Library in 2000 and that many of their born-digital archives contain email. The presentation was particularly interesting as it included their workflow for forensic capture, processing and delivery of email for preservation, providing a current, real-life insight into how email archives are being handled. The British Library use Aid4Mail Forensic for their processing and delivery, but are looking into ePADD as a more holistic approach. ePADD is a software package developed by Stanford University which supports archival processes around the appraisal, ingest, processing, discovery and delivery of email archives. Some of the challenges they experienced surrounded email often containing personal information. A possible solution would be the redaction of offending material, but they noted this could lead to the loss of meaning, as well as being an extremely time-consuming process.

Next we heard from Anthea Seles (The National Archives) and Greg Falconer (UK Government Cabinet Office), who spoke about email and the record of government. Their presentation focused on the question of where the challenge truly lies for email – suggesting that, as opposed to issues of preservation, the challenge lies in capture and presentation. They noted that from a government or institutional perspective the amount of email created increases hugely, leaving large collections of unstructured records. For capture, this leads to the challenge of identifying what is of value and what is sensitive. Beyond this, the major challenge is how best to present emails to users – discoverability and accessibility. This includes remapping existing relationships between unstructured records and, again, the question of how to deal with linked and networked content.

The third and final case study was given by Michael Hope from Preservica, an “Active Preservation” technology providing a suite of OAIS (Open Archival Information System) compliant workflows for ingest, data management, storage, access, administration and preservation of digital archives.

Following the case studies, there was a second talk from Kate Murray and Chris Prom on emerging Email Task Force themes and their Technology Roadmap. In June 2017 the task force released a Consultation Report Draft of their findings so far, to enable review, discussion and feedback, and the remainder of their presentation focused on the contents and gaps of the draft report. They talked about three possible preservation approaches:

  • Format Migration: copying data from one type of format to another to ensure continued access
  • Emulation: recreating user experience for both message and attachments in the original context
  • Bit Level Preservation: preservation of the file, as it was submitted (may be appropriate for closed collections)

They noted that there are many tools within the cultural heritage domain designed with interoperability, scalability, preservation and access in mind, yet these are still developing and improving. Finally, we discussed the possible gaps in the draft report; issues such as the authenticity of email collections were raised, along with a general interest in the differing workflows between institutions. Ultimately, I had a great time at The National Archives for the Email Preservation: How Hard Can it Be? briefing day – I learnt a lot about the various challenges of email preservation, and am looking forward to seeing further developments and solutions in the near future.

Email Preservation: How Hard Can it Be? DPC Briefing Day

On Thursday 6 July 2017 I attended the Digital Preservation Coalition briefing day on email preservation, held in partnership with the Andrew W. Mellon Foundation and titled ‘Email preservation: how hard can it be?’. It was hosted at The National Archives (TNA); this was my first visit to TNA and it was fantastic. I didn’t know a great deal about email preservation beforehand, so I was really looking forward to learning about the topic.

The National Archives, Photograph by Mike Peel (www.mikepeel.net)., CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=9786613

The aim of the day was to engage in discussion about some of the current tools, technologies and thoughts on email preservation. It was orientated around the ‘Task Force on Technical Approaches to Email Preservation’ report, which is currently in its draft phase. We also got to hear interesting case studies from the British Library, TNA and Preservica, each presenting their own unique experiences in relation to this topic. It was a great opportunity to learn about this area and hear from the co-chairs (Kate Murray and Christopher Prom) and the audience about their thoughts on the current situation and possible future directions.

We heard from Jonathan Pledge from the British Library (BL). He told us about the forensic capture expertise gained by the BL, using EnCase to capture email data from hard drives, CDs and USB drives. We also got an insight into how they are deciding which email archive tool to use: Aid4Mail fits better with their workflow, but ePADD, with its holistic approach, was something they were considering. During ingest they separate the emails from the attachments. They found that after the time-consuming process of removing emails that would violate data protection laws, there was very little usable content left, as often entire threads would have to be redacted because of one message. This is not the most effective use of an archivist’s time and is something they are working to address.
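As a rough illustration of the “separate the emails from the attachments” step, here is a sketch using only the Python standard library and a hypothetical mbox export; it is my own simplification, not the British Library’s Aid4Mail or ePADD workflow:

```python
# Minimal sketch: walk an mbox export and write out attachments separately.
# "accession.mbox" is a hypothetical file name.
import mailbox
from pathlib import Path

out_dir = Path("attachments")
out_dir.mkdir(exist_ok=True)

for index, message in enumerate(mailbox.mbox("accession.mbox")):
    for part in message.walk():
        filename = part.get_filename()
        if not filename:
            continue  # parts without a filename are treated as message body
        payload = part.get_payload(decode=True) or b""
        safe_name = Path(filename).name  # drop any path components
        (out_dir / f"{index:05d}_{safe_name}").write_bytes(payload)
```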

We also heard from Anthea Seles, who works with government collections at TNA. Their research found that approximately 1TB of data in an organisation’s own electronic document and records management system is linked to some 10TB of related data in shared drives. Her focus was on discovery and data analytics – for example, batching email as a way to increase efficiency and focus the curator’s attention. If an email was sent from TNA to a vast number of people, there is a high chance that the content is not sensitive; if it was sent to a high-profile individual, there is a higher chance that it is, so the curator can focus their attention on those messages.
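The sketch below is a toy version of that batching idea (my own simplification, not TNA’s actual analytics; the watch-list address, threshold and mbox file are all hypothetical): messages broadcast to many recipients are deprioritised, while anything addressed to a flagged individual is pushed to the front of the review queue.

```python
# Toy sketch of sensitivity triage by recipient pattern (illustrative only).
import mailbox
from email.utils import getaddresses

HIGH_PROFILE = {"minister@example.gov.uk"}  # hypothetical watch-list
BROADCAST_THRESHOLD = 50                    # arbitrary cut-off

def review_priority(message):
    fields = message.get_all("To", []) + message.get_all("Cc", [])
    addresses = {addr.lower() for _, addr in getaddresses(fields)}
    if addresses & HIGH_PROFILE:
        return "review first"
    if len(addresses) >= BROADCAST_THRESHOLD:
        return "likely non-sensitive broadcast"
    return "normal queue"

for message in mailbox.mbox("accession.mbox"):  # hypothetical export
    print(review_priority(message), "-", message.get("Subject", "(no subject)"))
```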

Hearing from Preservica was interesting, as it gave an insight into the commercial side of email archiving. In their view, preservation itself was not the issue; their attention was focused on identifying duplicate and unwanted emails efficiently, developing tools for whole-collection email analysis and, interestingly, solving the problem of acquiring emails via continuous transfer.

Email is not going to be the main form of communication forever (the rise in popularity of instant messaging is clear to see), but we learnt that growth in its use is still expected for the near future.

One of the main issues brought up was the potential size of future email archives and the issues that come with effective and efficient appraisal. What is large in academic terms, e.g. 100,000 emails, is not large in government. The figure of over 200 million emails at the George W. Bush presidential library is a phenomenal amount, and the Obama administration’s is estimated at 300 million. This requires smart solutions, and we learnt how artificial intelligence and machine learning could help.

Continuous active learning was highlighted as a way to improve searches. An example given was a search for ‘Miami dolphins’: the Miami Dolphins are an American football team, but someone might instead be looking for information about dolphins in Miami. Initially the computer presents a range of search results and the user chooses which are more relevant; over time it learns what the user is looking for in cases where a search is ambiguous.
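As a toy illustration of that feedback loop (my own sketch using scikit-learn, not the system discussed on the day), a classifier can be retrained each time the user marks a few results as relevant or not, and the remaining documents re-ranked accordingly:

```python
# Toy sketch of continuous active learning for an ambiguous search
# (illustrative only; `pip install scikit-learn`).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

documents = [
    "Miami Dolphins win season opener at Hard Rock Stadium",
    "Dolphin pod spotted off the Miami coastline by tour boat",
    "Miami Dolphins announce new quarterback signing",
    "Marine biologists study bottlenose dolphins near Miami beaches",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(documents)

# Round 1: the user marks one result relevant (wildlife) and one not (football).
labelled_idx, labels = [1, 0], [1, 0]

model = LogisticRegression()
model.fit(X[labelled_idx], labels)

# Re-rank all documents by predicted relevance; in later rounds the user labels
# more results, the model is refitted, and the ranking keeps improving.
scores = model.predict_proba(X)[:, 1]
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```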

Another issue highlighted was how to make sure that you have found the correct person – how do you avoid false positives? At TNA the ‘Traces Through Time’ project aimed to address this, initially with World War One records. This technology, which uses big data analytics, can also be applied to email archives. There is also work on mining email signatures as a way to better determine the ownership of a message.

User experience was also discussed. Emulation is an area of particular interest: its strength is that it recreates how the original user would have experienced the emails, although the technology is still being developed. Bit-level preservation is a way to make sure we capture and preserve the data now; it prevents loss of the archive and allows the information and value to be extracted in the future, once the tools have been developed.

It was interesting to hear how policy could affect how easy it will be to acquire email archives. The new General Data Protection Regulation, which comes into effect in May 2018, will mean that anyone in breach faces far heavier penalties, up to 4% of annual worldwide turnover. This means that companies may err on the side of caution with regard to keeping personal data such as emails.

Whilst the email protocols are well standardised, allowing emails to be sent from one client to another (for example, from an AOL account of the early 1990s to Gmail today), their acquisition is not. When archivists get hold of email archives, they are left with the remnants of whatever the email client and user have done to them: metadata may have been added or removed, and formats can vary. This adds a further level of complexity to the whole process.

The day was thoroughly enjoyable and a fantastic way to learn about archiving emails. As email is now one of the main methods of communication for government, large organisations and personal use, it is important that we develop the tools, techniques and policies for email preservation. To answer the question ‘how hard can it be?’ I’d say: very. Emails are not simple objects of text; they are highly complex entities comprising attachments, links and embedded content. The solution will be complex, but there is a great community of researchers, individuals, libraries and commercial entities working on solving this problem. I look forward to hearing the update in January 2018 when the task force is due to meet again.

Study day of Ge’ez manuscripts of Ethiopia and Eritrea

Recent months have brought an unprecedented interest in the Ge’ez manuscripts of Ethiopia and Eritrea – a development that we welcome at the Bodleian. Study of this material has reached a new level, with advances in palaeographical and codicological knowledge as well as a growing appreciation of its art history. Studying, displaying and digitising a variety of our little-known codices and scrolls with modern means helps us better understand them and disseminate our findings to new audiences.

With this in mind, on Saturday 17 June we welcomed a small group of Ethiopians and Eritreans to the Bodleian to view a selection of Ge’ez manuscripts of Ethiopia and Eritrea. The material, which was studied and discussed with great excitement, included a magic scroll with miniatures of angels and demons, an illuminated seventeenth-century prayer book, fragments of a medieval gospel with evangelists’ portraits, a hagiographic work with copious illustrations to the text, an important textual variant of the Book of Enoch and the epic work Kebra Nagast (Glory of the Kings).

The day was one of beautiful exchange of ideas, as well as of building bridges within and between communities. We look forward to future developments!

Engaged in discussion from left to right: Dereje Debella, Judith McKenzie, Girma Getahun, Yemane Asfedai, Gillian Evison, Madeline Slaven and Rahel Fronda. Photo credit: Mai Musié.

Studying a magic scroll, from left to right: Yemane Asfedai, Girma Getahun, Dereje Debella, Madeline Slaven and Rahel Fronda. Photo credit: Gillian Evison.

Studying a textual variant of the Ethiopian Book of Enoch, from left to right: Rahel Fronda, Dereje Debella, Girma Getahun, Yemane Asfedai, Gillian Evison and Madeline Slaven. Photo credit: Miranda Williams.

The first rule of Pig Club…

Rulebook of Nailsworth Pig Club, April 1918, from the Sir Stafford Cripps archive [sc22/1c]

There’s just something about this delightful little three-page rulebook that tickles me. Perhaps it’s the use of phrases like ‘eligible pigs’. Otherwise, it’s a perfectly serious document which details the ins and outs of the provision, insurance and inspection of pigs and potatoes (pigs and potatoes!) raised by members of the club.

It appears to have been produced by Gloucestershire County Council and was presumably part of a county- or country-wide effort to encourage people to raise their own food during the war (something also done during the Second World War). Despite the central concern of food production, though, it’s a surprisingly cheering document for people concerned with animal welfare, as it is very specific that the animals must be healthy and well cared for, and that insurance compensation would not be paid if ‘the death or sickness of a pig is attributed to bad food, insufficient attention, or other carelessness or ill-treatment’.

One of my favourite things about the rulebook is who it belonged to: Stafford Cripps, probably best known as Britain’s ambassador to Moscow in 1940-1942 and as the austerity chancellor from 1947 to 1950. When this document was produced, however, Cripps was not yet a political high-flier. A chemistry graduate and practising lawyer, he was both married and in ill health in 1914, which meant that he was not called up. He kept himself busy with recruitment efforts and then volunteered for a year in France as a Red Cross ambulance driver. In late 1915 he offered his chemistry expertise to the Ministry of Munitions and was posted to one of the country’s biggest munitions factories at Queensferry, near Chester. From early 1916 Cripps was running it, and the work took its toll on his health. In early 1918, when this rulebook was drawn up, he was convalescing from a physical breakdown. Somehow, though, he still managed to find the time and energy to serve as honorary secretary of the Nailsworth Pig Club. The now nearly 100-year-old rulebook survives in his archive at the Bodleian Library.

Incidentally, the first rule of Pig Club?

  1. NAME.–The Society shall be called the “Nailsworth Pig Club.”

Can’t argue with that.

#WAWeek2017 – Researchers, practitioners and their use of the archived web

This year, the world of web archiving saw a premiere: not only were the biennial RESAW conference and the IIPC conference, established in 2016, held jointly for the first time, but they also formed part of a whole week of workshops, talks and public events around web archives – Web Archiving Week 2017 (or #WAWeek2017 for the social-media-inclined).

After previous conferences in Reykjavik (2016) and Aarhus (RESAW, 2015), the big 2017 event was held in London on 14-16 June 2017, organised jointly by the School of Advanced Study of the University of London, the IIPC and the British Library.

The programme was packed full of an eclectic variety of presentations and discussions, with topics ranging from the theory and practice of curating web archive collections and capturing whole national web domains, via technical topics such as preservation strategies, software architecture and data management, to the development of methodologies and tools for web archive-based research and case studies of their application.

Even in digital times, who doesn’t like a conference pack? Of course, the full programme is also available online. (…but which version will be easier to archive?)
