Category Archives: Uncategorized

Archives Unleashed – Vancouver Datathon

On the 1st and 2nd of November 2018 I was lucky enough to attend the Archives Unleashed Datathon Vancouver, co-hosted by the Archives Unleashed Team and Simon Fraser University Library, along with KEY (SFU Big Data Initiative). I am very thankful for the generous travel grant from the Andrew W. Mellon Foundation that made this possible.

The SFU campus at the Harbour Centre was an amazing venue for the Datathon, and it was nice to be able to take in some views of the surrounding mountains.

About the Archives Unleashed Project

The Archives Unleashed Project is a three-year project focused on making historical internet content easily accessible to scholars and researchers whose interests lie in exploring both the recent past and contemporary history.

After a series of datathons held at a number of international institutions, such as the British Library, the University of Toronto, the Library of Congress and the Internet Archive, the Archives Unleashed Team identified some key areas of development that would help them deliver their aim of making petabytes of valuable web content accessible.

Key Areas of Development
  • Better analytics tools
  • Community infrastructure
  • Accessible web archival interfaces

By engaging and building a community, alongside developing web archive search and data analysis tools, the project is successfully enabling a wide range of people, including scholars, programmers, archivists and librarians, to “access, share and investigate recent history since the early days of the World Wide Web.”

The project has a three-pronged approach:
  1. Build a software toolkit (Archives Unleashed Toolkit)
  2. Deploy the toolkit in a cloud-based environment (Archives Unleashed Cloud)
  3. Build a cohesive user community that is sustainable and inclusive by bringing together the project team members with archivists, librarians and researchers (Datathons)

Archives Unleashed Toolkit

The Archives Unleashed Toolkit (AUT) is an open-source platform for analysing web archives with Apache Spark. I was really impressed by AUT: it is scalable, relatively easy to use, and offers a huge range of analytical options. It can run on a laptop (macOS, Linux or Windows), a single-node server or a powerful cluster – if you wanted to, you could even run AUT on a Raspberry Pi.

The Toolkit allows for a number of search functions across the entirety of a web archive collection. You can filter collections by domain, URL pattern, date, language and more; generate lists of the top ten URLs in a collection; and extract plain text from the HTML files within an ARC or WARC file, cleaning the data by removing ‘boilerplate’ content such as advertisements. It’s also possible to use the Stanford Named Entity Recognizer (NER) to extract named entities: locations, organisations and persons. I’m looking forward to seeing how this functionality is adapted to localised instances and controlled vocabularies – would it be possible to run a similar programme for the automated tagging of web archive collections in the future? Perhaps one could ingest a collection into AUT, run NER over it and automatically tag the data, providing richer metadata for web archives and subsequent research.
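
To make the plain-text extraction step concrete, here is a minimal stand-alone sketch in Python. To keep this post’s examples in one language it uses the warcio library rather than AUT itself (which runs this kind of pipeline at scale on Spark); the WARC filename is a placeholder, and the tag-stripping is far cruder than AUT’s boilerplate removal.

```python
# A simplified, stand-alone illustration of "extract plain text from the
# HTML responses in a WARC file". This is NOT the Archives Unleashed
# Toolkit; it just mirrors one of the steps AUT performs.
import re
from warcio.archiveiterator import ArchiveIterator

def extract_plain_text(warc_path):
    """Yield (url, text) pairs for HTML responses in a WARC file."""
    with open(warc_path, 'rb') as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != 'response':
                continue
            content_type = record.http_headers.get_header('Content-Type') or ''
            if 'text/html' not in content_type:
                continue
            url = record.rec_headers.get_header('WARC-Target-URI')
            html = record.content_stream().read().decode('utf-8', errors='replace')
            text = re.sub(r'<[^>]+>', ' ', html)  # crude HTML tag removal
            yield url, ' '.join(text.split())

for url, text in extract_plain_text('collection.warc.gz'):  # placeholder path
    print(url, '->', text[:80])
```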

Archives Unleashed Cloud

The Archives Unleashed Cloud (AUK) is a GUI-based front end for working with AUT; it essentially provides an accessible interface for generating research derivatives from web archive files (WARCs). With a few clicks, users can ingest and sync Archive-It collections, analyse them, create network graphs and visualise connections and nodes. It is currently free to use and runs on AUK central servers.

My experience at the Vancouver Datathon

The datathons bring together a small group of 15–20 people of varied professional backgrounds and experience to work and experiment with the Archives Unleashed Toolkit and the Archives Unleashed Cloud. I really like that the team have chosen to keep attendance small: it created a close-knit working group full of collaboration, knowledge sharing and the exchange of ideas. It was a relaxed, fun and friendly environment to work in.

Day One

After a quick coffee and light breakfast, the Datathon opened with introductory talks from project team members Ian Milligan (Principal Investigator), Nick Ruest (Co-Principal Investigator) and Samantha Fritz (Project Manager), relating to the project – its goals and outcomes, the toolkit, available datasets and event logistics.

Another quick coffee break and it was back to work – participants were asked to think about the datasets that interested them, techniques they might want to use and questions or themes they would like to explore and write these on sticky notes.

Once placed on the whiteboard, teams naturally formed around datasets, themes and questions. My team – Kathleen Reed, Ben O’Brien and myself – formed around a common interest in exploring the First Nations and Indigenous communities dataset.

Virtual machines, kindly provided by Compute Canada, were available throughout the Datathon to run AUT; datasets had been preloaded onto these VMs and a number of derivative files had already been created. We spent some time brainstorming, sharing ideas and exploring datasets using a number of different tools. The day finished with some informative lightning talks about the work participants had been doing with web archives at their home institutions.

Day Two

On day two we continued to explore the datasets, using the full-text derivatives, running some NER and performing keyword searches with the command-line tool grep. We also analysed the text using sentiment analysis with the Natural Language Toolkit (NLTK). To help visualise the data, we took the new text files produced by the keyword searches and uploaded them into Voyant Tools, which visualises links between words, creates a list of top terms and provides quantitative data such as how many times each word appears. It was here we found that the word ‘letter’ appeared quite frequently, and we finalised the dataset we would be using: the University of British Columbia’s bc-hydro-site-c collection.
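
For a flavour of what those two steps look like in practice, here is a minimal Python sketch combining a grep-style keyword filter over a full-text derivative with NLTK sentiment scoring. The derivative filename is a placeholder, and VADER is just one of several sentiment tools NLTK offers – the post doesn’t record which one we actually used.

```python
# Keyword-filter a full-text derivative (grep-style), then score the
# matching lines with NLTK's VADER sentiment analyser.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download('vader_lexicon')  # one-off download of the VADER lexicon
sia = SentimentIntensityAnalyzer()

keyword = 'letter'
with open('full-text-derivative.txt', encoding='utf-8') as f:  # placeholder
    matches = [line.strip() for line in f if keyword in line.lower()]

print(f'{len(matches)} lines mention "{keyword}"')
for line in matches[:5]:
    # polarity_scores returns neg/neu/pos plus a combined 'compound' score
    scores = sia.polarity_scores(line)
    print(f"{scores['compound']:+.3f}  {line[:70]}")
```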

We hunted down the site and found it contained a number of letters from people about the BC Hydro Dam Project. The problem was that the letters sat in a table, and when extracted the data was not clean enough. Ben O’Brien came up with a clever extraction solution utilising the raw HTML files and some script magic (see the hypothetical sketch below). Kathleen Reed then prepped the data for geocoding, to show the geographical spread of the letter writers, hot-spots and a timeline – a useful way of looking at the issue from the perspective of engagement and the community.
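
The post above doesn’t preserve the details of that extraction script, so the following is a purely hypothetical illustration of the general approach: parse the raw archived HTML, pull the rows out of the letters table, and write a clean CSV ready for geocoding. The filename and column layout are invented for the example.

```python
# Hypothetical sketch: extract table rows from raw archived HTML and
# write them out as CSV for geocoding. Not the actual Datathon script.
import csv
from bs4 import BeautifulSoup

with open('bc-hydro-letters-page.html', encoding='utf-8') as f:  # placeholder
    soup = BeautifulSoup(f, 'html.parser')

rows = []
for table in soup.find_all('table'):
    for tr in table.find_all('tr'):
        cells = [td.get_text(strip=True) for td in tr.find_all('td')]
        if len(cells) >= 3:            # assume e.g. date, author, location
            rows.append(cells[:3])

with open('letters.csv', 'w', newline='', encoding='utf-8') as out:
    writer = csv.writer(out)
    writer.writerow(['date', 'author', 'location'])
    writer.writerows(rows)
```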

Map of letter writers.

Time-lapse of locations of letter writers.

At the end of day two, each team had a chance to present their project to the other teams. You can view the presentation we prepared (Exploring Letters of protest for the BC Hydro Dam Site C) here, as well as the other team projects.

Why Web Archives Matter

How we preserve, collect, share and exchange cultural information has changed dramatically. The act of remembering at national institutions and libraries has altered greatly in scope, speed and scale because of the web, and the way in which we provide access to, use and engage with archival material has been disrupted. Current and future historians who want to study the period from the 1990s onwards will have to use web archives as a resource, yet accessibility and usability have lagged behind, and many students and historians are not ready. Projects like Archives Unleashed will help equip researchers, historians, students and the wider community with the tools needed to tackle these problems. I look forward to seeing the next steps the project takes.

Archives Unleashed are currently accepting submissions for the next Datathon, in March 2019 – I highly recommend it.

Attending the ARA Annual Conference 2018

ARA Annual Conference 2018, Grand Central Hotel, Glasgow

Having been awarded the Diversity Bursary for BME individuals, sponsored by Kevin J Bolton Ltd., I was able to attend the ARA Annual Conference 2018 held in Glasgow in August.

Capitalising on the host city’s ubiquitous existing branding of People Make Glasgow, the Conference Committee set People Make Records as this year’s conference theme. This was then divided into three individual themes, one for each day of the conference:

  • People in Records
  • People Using Records
  • People Looking After Records

Examined through the lens of the above themes over the course of three days, this year’s conference addressed three key areas within the sector: representation, diversity and engagement.

Following an introduction from Kevin Bolton (@kevjbolton), the conference kicked off with Professor Gus John (@Gus_John) delivering the opening keynote address, entitled “Choices of the Living and the Dead”. With People in Records the theme for the day, Professor John gave a powerful talk discussing how people are shaping the records and recordkeeping of the African (and other) diaspora in the UK, enabling the airbrushing of the history of oppressed communities. Professor John noted that yes, people make records, but we also determine what to record, and what to do with it once it has been recorded.

Noting the ignorance surrounding racial prejudice and violence, and citing the Notting Hill race riots, the Windrush generation and Stephen Lawrence as examples, Professor John illustrated how the commemoration of historical events is selective: while in 2018 the 50th anniversary of the Race Relations Act received much attention, the 500th anniversary of the start of the Transatlantic Slave Trade was largely ignored, by the sector and the media alike. This culture of oppression and omission, he said, is leading to ignorance amongst young people about major defining events, contributing to a removal of context for historically oppressed groups.

In response to questions from the audience, Professor John noted that one of the problems facing the sector is the failure to interrogate the ‘business as usual’ climate, and that it may be ‘too difficult to consider what an alternative route might be’. Professor John challenged us to question the status quo: ‘Why is my curriculum white? Why isn’t my lecturer black? What does “de-colonising” the curriculum mean? This is what we must ask ourselves’.

Following Professor John’s keynote and his ultimate call to action, there was a palpable atmosphere of engagement amongst the delegates, with myself and those around me eager to spend the next three days learning from the experiences of others, listening to new perspectives and extracting guidance on the actions we may take to develop and improve our sector, in terms of representation, diversity and engagement.

Various issues relating to these areas were threaded throughout many of the presentations, and as a person of colour at the start of my career in this sector, and recipient of the Diversity Bursary, I was excited to hear more about the challenges facing marginalised communities in archives and records, including some I could relate to on a personal and professional level, and, hopefully, also take away some proposed solutions and recommendations.

I attended an excellent talk by Adele Patrick (@AdelePatrickGWL) of Glasgow Women’s Library, who discussed the place for feminism within the archive, noting GWL’s history of resistance and its insistence on plural representation when women’s work, past and present, is eclipsed. Dr Alan Butler (@AButlerArchive), Coordinator at Plymouth LGBT Community Archive, discussed his experiences of trying to create a sense of community within a group that is inherently quite nebulous. Nevertheless, Butler illustrated the importance of capturing LGBTQIA+ history, as people today are increasingly removed from the struggles that previous generations had to overcome, echoing a similar point Professor Gus John made earlier.

A presentation which particularly resonated with me came from Kirsty Fife (@DIYarchivist) and Hannah Henthorn (@hanarchovist), on the issue of diversity in the workforce. Fife and Henthorn presented the findings of their research, including a survey of experiences of marginalisation in the UK archive sector, highlighting the structural barriers to diversifying the sector’s workforce. They identified several key themes experienced by marginalised people in the sector, including: feelings of isolation and otherness in both the workplace and universities; difficulties in gaining qualifications, perhaps due to ill health, disability, financial barriers or other commitments; feeling unsafe and lacking confidence in professional spaces; and frustration at the lack of diversity in leadership roles.

As a Graduate Trainee Digital Archivist, I couldn’t abandon my own focus on digital preservation and digital archiving, and so attended various digital-related talks. These included “Machines make records: the future of archival processing” by Jenny Bunn (@JnyBn1), discussing the impact of taking a computational approach to archival processing, and “Using digital preservation and access to build a sustainable future for your archive”, led by Ann Keen of Preservica, with presentations given by various Preservica users. There was also a mini-workshop led by Sarah Higgins and William Kilbride on ethics in digital preservation, asking us to consider whether we need our own code of conduct in digital preservation, and what this could look like.

William Kilbride and Sarah Higgins running their workshop “Encoding ethics: professional practice digital preservation”, ARA Annual Conference 2018, Glasgow

I have only been able to touch on a very small amount of what I heard and learnt at the many and varied talks, presentations and workshops at the ARA conference. However, one thing I took away was the realisation that archivists and recordkeepers have the power to challenge structural inequalities, and must act now in order to become truly inclusive. As Michelle Caswell (@professorcaz), the second keynote speaker, said, we must act with sensitivity, acknowledge our privileges and, above all, empower, not marginalise. This conference felt like a call to action to the archive and recordkeeping community to include the ‘hard to reach’ communities – or, as Adele Patrick put it, the ‘easy to ignore’. As William Kilbride (@WilliamKilbride) said, this is an exciting time to be in archives.

I want to thank Kevin Bolton for sponsoring the Diversity Bursary, which enabled me to attend an enriching, engaging and informative event, which otherwise would have been inaccessible for me.

________________________________________
Because every day is a school day, as homework for us all, I made a note of some of the recommendations made by speakers throughout the conference, compiled into this very brief list which I thought I would share:

Reading list

New catalogue: Archive of Dame Louise Johnson

The online catalogue of the Archive of Dame Louise Johnson is now available.

‘[T]he moment comes when you actually solve a problem—it may be quite a small problem—but for a few moments you stand there and think “Nobody else knows this but me”.’ – Dame Louise Johnson

RESEARCH

Never one to shy away from a challenge, Dame Louise Napier Johnson (1940–2012), biophysicist and structural biologist, spent her long career solving many problems, mostly in the field of structural enzymology, which she helped create.

One of the first problems she solved came in 1965, when she, David Phillips and Charles Vernon described the enzyme lysozyme, and how it binds its substrates, at a special meeting of the Royal Institution. Structural biology was then in its infancy: lysozyme was only the second protein, and the first enzyme, to have its structure elucidated. It was also the first time that the mechanism through which an enzyme works had been described at a structural level.

Notes on the structure of lysozyme.

This was more than just an interesting theoretical exercise; it had far-reaching implications and showed that understanding structure could help in understanding biological processes. Its practical application to drug discovery changed the face of pharmaceutical research. By the 1970s pharmacological researchers were using rational drug design, looking at protein receptors and their properties and binding potential. Knowing the structure of molecules, they could look for potential binding sites and postulate possible interactions. They could then look for analogues or ‘build’ new molecules specific to targeted binding sites.

She went on to tackle larger and more complex proteins like ribonuclease S and glycogen phosphorylase, but the tools of the time meant that it was a long, hard slog. To be examined at high resolution, these proteins had to be crystallised, and the crystals did not last long; in the early days of structural biology, protein structures were studied using diffractometers that took days to record data sets, often going through several crystals at a time.

DIAMOND LIGHT SOURCE

Needing faster, higher-resolution data acquisition, she turned to synchrotron radiation, which at the time was used primarily for studying purely physical phenomena. She championed the use of synchrotron radiation in the life sciences and was heavily involved in plans for the construction of a third-generation synchrotron, Diamond Light Source, in the UK.

Breaking ground for the 250m long x-ray imaging and coherence beamline.

She came on board Diamond as the Director of Life Sciences because she believed that good research needed good infrastructure and support. She was heavily involved in planning and testing the beamlines, spearheaded a collaboration with Imperial College London to build the Membrane Protein Laboratory and secured funding for the Harwell Campus for visiting researchers.

Robotic sample changers on the MX beamlines.

Her tireless work saw the proportion of researchers at Diamond working in structural biology rise to its current 40%.

But she didn’t just support researchers with funding and facilities. While Director, she continued as head of the Laboratory of Molecular Biophysics, where her sympathetic management and light hand brought out the best in her research group.

LABORATORY OF MOLECULAR BIOPHYSICS

In 1967 Johnson joined David Phillips at the newly founded Laboratory of Molecular Biophysics at Oxford University, succeeding him as department head in 1990. Johnson was a constant in a department that saw considerable turnover of staff: her field was very dynamic, and it was common for researchers to move between institutions regularly throughout their careers. One of twenty senior staff in 1975, by 2000 she was the only one remaining.

As head, she supervised around 80 people, with an outside grant income in 2000 of almost £7 million, overseeing a successful graduate programme while maintaining a nurturing environment for students and staff alike.

She routinely organised over 50 in-house and general research seminars annually, many of them focused on the laboratory’s very productive output: in 1995–96 alone, more than 30 protein and virus structures were solved. She trained a generation of Oxford crystallographers, as evidenced by the plethora of Protein Data Bank entries (including many forms of glycogen phosphorylase and cell-cycle CDK/cyclin complexes) deposited by her lab, and kept the department running smoothly as they inevitably moved on.

Johnson endeavoured to be a role model for other women. It was a source of pride for her that, of the six senior faculty members in her laboratory, three were women. She was also a trustee of the Daphne Jackson Fund for scientists returning to research after career breaks. In September 2007 a symposium was held in her honour to recognise her continuing achievements and contribution to the University.

– Emily Chen and Sean Macmillan

CREDITS

The archive of Louise Johnson came to us through the Saving Oxford Medicine project, which sought to discover and catalogue Oxford-related collections that have had an impact on the medical sciences. These papers were kindly donated by Professor Elspeth Garman and Johnson’s son, Umar Salam.

The UK Web Archive: Online Enthusiast Communities in the UK

The beginnings of the Online Enthusiast collection of the UK Web Archive can be traced back to November 2016 and a task to scope out the viability of, and write a proposal for, two potential special collections with a focus on current web use: Mental Health and Online Enthusiasts.

The Online Enthusiasts special collection was intended to show how people within the UK are using the internet to aid them in practising their hobbies, for example discussing their collections of objects or coordinating their bus spotting. If it was something a person could enthuse about, and it was on the internet within the UK, then it was in scope. Where many UK Web Archive special collections are centred on a specific event and the online reactions to it, this was more an attempt to represent the way in which people are using the internet on an everyday basis.

The first step toward a proposal was to assess the viability of the collection, and this meant searching out any potential online enthusiast sites to judge whether this collection would have enough content hosted within the UK to validate its existence. As it turns out, UK hobbyists are very active in their online communities and finding enough content was, if anything, the opposite of an issue. Difficulty came with trying to accurately represent the sheer scope of content available – it’s difficult to google something that you weren’t aware existed 5 minutes ago. After an afternoon among the forums and blogs of ferry spotters, stamp collectors, homebrewers, yarn-bombers, coffee enthusiasts and postbox seekers, there was enough proof of content to complete the initial proposal stating that a collection displaying the myriad uses hobbyists in the UK have for the internet is not only viable but also worthwhile. Eventually that proposal was accepted and the Online Enthusiast collection was born.

The UKWA Online Enthusiast Communities in the UK collection provides a unique cultural insight into how communities interact in digital spheres. It shows that with the power of the internet people with similar unique hobbies and interests can connect and share and enthuse about their favourite hobbies. Many of these communities grow and shrink at rapid paces and therefore many years of content can be lost if a website is no longer hosted.

With the amount of content on the internet, finding websites had a domino effect: one site would link to another site for a similar enthusiast community, or we would find lists including hobbies we’d never even considered before. This meant that before long we had a wealth of content that we realised would need categorising. Our main approach to categorising the content was along thematic lines. After identifying what we were dealing with, we created a number of sub-collections, examples of which include animal-related hobbies, collecting-focused hobbies, observation hobbies and sports.

The approach to selecting content for the collection mainly focused on identifying UK-centric hobbies and using various search terms to identify active communities. The majority of these communities were forums, which provided enthusiasts with a platform to discuss various topics related to their hobbies whilst also giving them the opportunity to share other forms of media, such as video, audio and photographic content. Other platforms, such as blogs and other websites, were also collected; the blogs often revolved around enthusiasts submitting content to the blog owner, who would then filter and post related content for the community.

As of May 2018 the collection has over 300 archived websites. We found that the best-populated categories were sports, collecting and animal-related hobbies.

A few examples of websites related to hobbies that were new to us include:

  • UK Pigeon Racing Forum: An online enthusiast forum concerned with pigeon racing.
  • Fighting Robots Association Forum: An online enthusiast forum for those involved with the creation of fighting robots.
  • Wetherspoon’s Carpets (Tumblr): A Tumblr blog concerned with taking photographs of the unique carpets inside the Wetherspoon’s chain of pubs across the UK.
  • Mine Exploration and History Forum: An online enthusiast community concerned with mine exploration in the UK.
  • Chinese Scooter Club Forum: An online enthusiast community concerned with all things related to Chinese scooters.
  • Knit The City (now Whodunnknit): A website belonging to a graffiti-knitter/yarnbomber from the UK.

The Online Enthusiast Communities in the UK collection is accessible via the UK Web Archive’s new beta interface.

The Shāhnāmah of Ibrāhīm Sulṭān – Available Online from Digital.Bodleian

VIEW IBRĀHĪM SULṬĀN’S SHĀHNĀMAH ONLINE
The Shāhnāmah – Book of Kings (or King of Books) – is an epic poem written in Persian by Abū l-Qāsim Firdawsī of Ṭūs. Completed in about 1010 CE, the book is composed of some 60,000 verses which narrate the history of Greater Persia from mythical beginnings until the Arab conquests of the 7th century.

Said to be the longest poem ever to have been written by a single person, the significance of Firdawsī’s Shāhnāmah to the Persian-speaking world can be compared to that of the works of Homer to Greece.

No manuscript copies of the Shāhnāmah survive from the 11th or 12th centuries, and only two from the 13th century are still extant, but many copies from the Timurid and Safavid periods are preserved in library collections today.

Three of the grandsons of Tīmūr (Tamerlane) are known to have had lavish copies of Firdawsī’s Shāhnāmah or Persian Book of Kings made for them. The Shāhnāmahs of Bāysunghur, Muḥammad Jūkī, and Ibrāhīm Sulṭān are preserved in the Golestan Palace, Tehran, the Royal Asiatic Society, London, and the Bodleian Libraries, Oxford, respectively.

Left: Shamsah showing inscription dedicated to Ibrāhīm Sulṭān. (MS. Ouseley Add. 176, fol. 12a). Right: Ibrāhīm Sulṭān holding court outdoors. (MS. Ouseley Add. 176, fol. 1b).

Thought to have been made in Shiraz sometime between 1430 and Ibrāhīm Sulṭān’s death in 1435, this copy of the Shāhnāmah is known for its exceptional miniature paintings and exquisite illuminated panels.

The manuscript was acquired by Sir Gore Ouseley, a diplomat and linguist, during travels in the East in the early 19th century, and came into the Bodleian in the 1850s along with many other items from Sir Gore’s collections. It is now preserved as MS. Ouseley Add. 176.

Ibrāhīm Sulṭān’s Shāhnāmah is now available online via Digital.Bodleian. Recently, its sibling, Muḥammad Jūkī’s Shāhnāmah, was published online by the Royal Asiatic Society; both in good time for Nawruz, the Persian New Year, on 20th March!

REFERENCES

Abdullaeva, F. and Melville, C., The Persian Book of Kings: Ibrahim Sultan’s Shahnama (Treasures from the Bodleian Library). Oxford: Bodleian Library, 2008.

Beeston, A. F. L., Ethé, H. and Sachau, E., Catalogue of the Persian, Turkish, Hindûstânî, and Pushtû Manuscripts in the Bodleian Library. Oxford: Clarendon Press, 1889.

Robinson, B. W., A Descriptive Catalogue of the Persian Paintings in the Bodleian Library. Oxford: Clarendon Press, 1958.

The Bodleian Libraries would like to thank the Bahari Fund for helping to make this digitization project possible.

DPC Email Preservation: How Hard Can It Be? Part 2


In July last year my colleague Miten and I attended a DPC briefing day titled Email Preservation: How Hard Can It Be?, which introduced me to the work of the Task Force on Technical Approaches to Email Archives, and we were lucky enough to attend the second session last week.

Arranging a second session gave Chris Prom (@chrisprom), University of Illinois at Urbana-Champaign, and Kate Murray (@fileformatology), Library of Congress, co-chairs of the Task Force, the opportunity to reflect on the issues raised in the first session and add them to the Task Force report, and to provide attendees with an update on overall progress, in anticipation of the final report scheduled for publication in April.

“Using Email Archives in Research”

The first guest presentation was given by Dr. James Baker (@j_w_baker), University of Sussex, who was inspired to write about the use of email archives within research by two key texts: Born-digital archives at the Wellcome Library: appraisal and sensitivity review of two hard drives (2016), an article by Victoria Sloyan, and Dust (2001), a book by Carolyn Steedman.

These texts led Dr. Baker to think of the “imagination of the archive”, as he put it: the mystique of archival research, stemming from the imagery of 19th-century research processes. He expanded on this idea, stating that the “physically and ontologically unique” manuscript “is no longer what we imagine to be an archive”.

However, despite this new platform for research, Dr. Baker stated that very few people outside of archive professionals know that born-digital archives exist, let alone use them. This is an issue: archives require evidence of use, so we need to encourage use.

To address this, Dr. Baker set up a Born-Digital Access Workshop at the Wellcome Library, in collaboration with their Collections Information Team, where he gathered people who use born-digital archives and the archivists who make them, and provided them with a set of four varying case studies, designed to explore the following:

A) the “original” environment; hard drive files in a Windows OS
B) the view experience; using the Wellcome’s Viewer
C) levels of curation; comparing reformatted and renamed collections with unaltered ones
D) the physical media; asking does the media hold value?

Several interesting observations came out of this workshop, which Dr. Baker organised into three areas:

  1. Levels of description: filenames are important, and are valuable data in themselves to researchers. Users need a balance between curation and an authentic representation of the original order.
  2. “Bog-standard” laptop as access point: using modern technology that is already familiar to many researchers as the mode of access to email and digital archives creates a sense of familiarity when engaging with the content.
  3. Getting the researcher from desk to archive: substantial work is needed to make researchers aware of the resources available to them and how to use them – can they access them remotely? How much collection-level description is necessary?

Dr. Baker concluded that even with outreach and awareness events such as the one we were all attending, born-digital archives are not yet accessible to researchers. This made me realise that the digital preservation community must push for access solutions and get them out to users, to enable researchers to gain the insights they might from our digital collections.

“Email as a Corporate Record”

The third presentation of the day was given by James Lappin (@JamesLappin), Loughborough University, who discussed the issues involved in applying archival policies to emails in a governmental context.

His main point concerned the routine deletion of email that happens in governments around the world. He said no civil servants’ email accounts are scheduled to be saved beyond the next three to four years – although they may be available via a different structure, a kind of records management system. However, Lappin pointed out the crux of this scenario: government departments have no budget to move and save many individuals’ email accounts, and no real sense of the numbers: how much should be saved, and how much can be saved?

“email is the record of our age” – James Lappin

Lappin suggested an alternative: keep the emails of senior staff only. However, this begs the question: how do we filter out sensitive and personal content?

Lappin posits that auto-deletion is the solution, aiming to spare institutions from unmanageable volumes of email and the consequent breaches of data protection.

Auto-deletion encourages:

  • governments to kickstart email preservation action,
  • the integration of technology for records management solutions,
  • active consideration of the value of emails for long-term preservation.

But how do we transfer emails to an EDRMS? What structures do we use? How do we separate individuals, and how do we enforce the transfer of emails? These issues can be worked out, Lappin argues, if we implement auto-deletion as a tool to make email preservation less daunting. At the end of the day, the current goal is to retain the “important” emails, which will make both government departments and historians happy – and that, in turn, makes archivists happy. This does indeed seem like a positive scenario for us all!

Lappin’s next point was particularly interesting: what if the very nature of email – intimate and immediate – makes governments uncomfortable with the idea of saving and preserving governmental correspondence? If so, governments must be more active in their selection processes and save something rather than nothing – which is where auto-deletion could, again, prove useful!

To conclude, Lappin presented a list of characteristics which could justify the preservation of an individual’s government email account, which included:

  • The role they play is of historic interest
  • They expect their account to be permanently preserved
  • They are given the chance to flag or remove personal correspondence
  • Access to personal correspondence is prevented except in case of overriding legal need

Personally, I feel this is fair and thorough, but only time will tell what route various governments take.

On a side note: Lappin runs an excellent comic-based blog on Records Management which you can see here.

Conclusions

One of the key issues that stood out for me was, perhaps surprisingly, not the technology used in email preservation, but how to address the myriad issues email preservation brings to light – namely the feasibility of data protection, sensitivity review and appraisal, particularly when dealing with such vast quantities of material.

Email can only be preserved once we have defined what constitutes ‘email’ and how to proceed ethically, morally and legally. Then we can move forward with implementing technical frameworks designed to meet those pre-defined requirements, enabling access to historically valuable, information-rich email archives that will yield much in the name of research.

In the tweet below, Evil Archivist succinctly reminds us of the importance of maintaining and managing our digital records…

Significance & Authenticity: a Briefing

As an Ancient History graduate, I spent my university education weighing the significance and authenticity of source information. Transferring these principles to digital objects in an archival setting is a challenge I look forward to learning more about and embracing. So I set off to Tate Britain on a cold Friday morning, excited to explore the Digital Preservation Coalition’s briefing: Significance & Authenticity. Here are some of my reflections.

A dictionary definition is not enough

The morning started with a stimulating discussion, led by Sharon McMeekin (DPC), on the definitions of these two concepts within the field of digital archives and in the context of the varying institutions the delegates came from. Several key points were made, and further questions generated:

Authenticity

  • Authenticity clearly carries with it evidential value; if something is not what it purports to be then how can it (claim to) be authentic?
  • Chains of custody and tracking accidental/intended changes are extremely relevant to maintaining authenticity
  • Further measures such as increasing metadata fields – does this ensure authenticity?

For an archival record to retain authenticity there must be a record of the original creation or experience of the digital object; otherwise we are looking at data without context. This also has a bearing on how significant an archival record is. A suggestion was also made that perhaps, as a sector, too much emphasis is placed on integrity-checking procedures. Questions surfaced such as: is the digital preservation community too reliant on them? And in turn, is this practical, process-driven approach to ensuring authenticity too simplistic?
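
For readers newer to this area, the integrity checking under discussion usually means fixity checks: comparing a freshly computed checksum against one recorded earlier. A minimal sketch of the idea, with placeholder file paths:

```python
# Record a checksum for each file at ingest, then periodically recompute
# and compare to detect silent change or corruption ("fixity checking").
import hashlib
import json

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

# At ingest: record checksums in a manifest (paths are placeholders).
files = ['masters/object-001.tif', 'masters/object-002.tif']
with open('manifest.json', 'w') as out:
    json.dump({path: sha256_of(path) for path in files}, out, indent=2)

# At audit time: recompute and compare.
with open('manifest.json') as f:
    for path, recorded in json.load(f).items():
        status = 'ok' if sha256_of(path) == recorded else 'FIXITY FAILURE'
        print(status, path)
```

As the delegates’ questions suggest, a matching checksum only shows that the bits are unchanged – it says nothing about context, provenance or meaning, which is why fixity alone cannot guarantee authenticity.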

Significance

  • Records are not just static evidence; they are also there for appreciation, education and use
  • Should the users and re-users (the designated community) be considered more extensively when deciding the significance of a digital object?
  • Emulation as a digital preservation action prioritises the experience of using the data: is this the way to maintain the significant properties together with the authenticity?

There was no doubt left in my mind that the two principles are inextricably linked. However, not only are they increasingly subjective for both the record keeper and the end user, they must also be distinguished from one another. For example, if a digital object can be interpreted as both a game and a book, yet was created and marketed as a book, does this make it any less significant or authentic? Or is the dispute part of what makes the object significant? The creation, characterisation and presentation of data in digital form reflect society today and what researchers may (or may not) be interested in in the future. We do not know and, as a fellow delegate reminded us, we cannot prejudge future research needs.

Building on the open-mindedness that the discussion encouraged, we were then fortunate enough to hear and learn from practitioners of differing backgrounds about how they ensure the significance and authenticity of their collections. One particular example had me contemplating it all weekend.

Significance & Authenticity of Digital Art by Patricia Falcao & Tom Ensom (Tate)

Patricia and Tom explained that they work with time-based media art and its creators. Working (mostly) with living artists ensures a short chain of provenance; however, the nature of digital art means that applying authenticity and significance is in no way straightforward. One principle which immediately affects the criteria for significance is that it is very important that Tate can exhibit the works, illustrating that differences between organisations will of course have a bearing on how significant a record is.

One example Tom analysed was the software-based Brutalism: Stereo Reality Environment 3 by Peruvian artist Jose Carlos Martinat Mendoza:

Brutalism: Stereo Reality Environment 3 2007 Jose Carlos Martinat Mendoza born 1974 Presented by Eduardo Leme 2007, accessioned 2011 http://www.tate.org.uk/art/work/T13251

The artwork comprises a range of components: high-speed printers, paper rolls, a web-search program and accompanying hardware, movement sensors, and a model replica of the Peruvian government building ‘El Pentagonito’, a symbol of brutalist architecture. The computer is programmed to search the web for references to ‘Brutalism’, and the different extracts of information it gathers are printed from printers mounted on the sculpture and left to fall to the floor around the replica.

Tom explained that retaining the authenticity of the digital artwork was very much a case of commitment to representing the artist’s work together with its arrangement and intention. One method of ensuring this is the transfer of a document from the creator called ‘Installation Parameters’. For this particular example, it contained details such as paper type and cabling needs, as well as display specifications, such as the requirement that the hardware be a very visible element of the artwork.

Further documentation is created and stored to preserve the original authenticity, and thus unique significance, of the artwork and the integrity of its ‘performance’. Provenance information such as diagrams, process metadata and the original source code is stored separately from the work itself. However, Tom acknowledged there is no doubt the work will need to change and in turn will be reinterpreted. Interestingly, the point was made that the text on the paper is itself time-sensitive: live search results relating to Brutalism will evolve and change.

Looking ahead, what will happen when the hardware fails? And what will happen when nobody uses printers any more? Stockpiling is only a short-term plan for maintaining authenticity and significance. Furthermore, even if the hardware can be guaranteed, the program software itself generates different issues. Software emulation, code-change tracking systems and binary analysis are all to be explored as means of enabling authenticity, but there will always be risk, and a need for alternative solutions.

Would these changes reduce the authenticity or significance? I believe authenticity is associated with intention, so perhaps communicating changes to the user, with justifications, could be one way of maintaining this principle. Significance, on the other hand, is trickier: without the significant and notable properties of the work, is significance automatically lost?

This case study reinforced that there is much to explore and consider when approaching the principles of authenticity and significance of digital objects. To conclude, Tom and Patricia reinforced that within the artistic context, decisions around authenticity and significance are made through collaborative dialogues with the artist/creator which does indeed provide direction.

Workshop

After three more talks and a panel session, the briefing ended with a workshop requiring us to evaluate the significance and authenticity of a digital object we were given. As a trainee digital archivist I can be guilty of shying away from group discussions and exercises within the community of practice, so I was really pleased to jump in and contribute during the workshop.

Thank you to the DPC and all involved for a brilliant day.

PASIG 2017: Ageing of Digital – Towards Managed Services for Digital Continuity

PASIG 2017 (Preservation and Archiving Special Interest Group) was hosted in Oxford this year, at the Natural History Museum, by Bodleian Libraries and Digital Preservation at Oxford and Cambridge (DPOC). I attended all three days (11th–13th September), and when I wasn’t working I had the opportunity to listen to some thought-provoking talks centred on the issue of digital preservation.

One of the highlights of the conference for me was a talk given by Natasa Milic-Frayling, the founder of Intact Digital. Her presentation, entitled ‘Ageing of Digital: Towards Managed Services for Digital Continuity’, demonstrated the innovative ways in which digital preservation issues are being approached.

Digital technology has a short lifespan: hardware and software become redundant and obsolete in a very short time. Software that no longer receives vendor support or updates is known as ‘legacy software’.

This poses the problem: how can we manage the life-cycle of digital in the face of a dynamic and changing computing ecosystem?

Technologies are routinely changed, updated (sometimes at a cost), made redundant and retired. The value of digital assets needs to be protected. In the current climate there is an imbalance of power between the technology producers and providers and the content producers, owners and curators. The providers and producers can move on without the opinion or input of those who use the software.

How do we enable the prolonged use of software in order to protect the value of digital assets?

A case study was presented that contextualised the problem and the solution. The vendor Tamal Vista Insights provided Cut&Search, software for the automated and semi-automated indexing of digitised manuscripts and digital artefacts that standard OCR cannot handle.

The software was supplied to Fo Guang Shan, an international Chinese Buddhist monastic order with over 200 branch temples worldwide, for use with their digitised manuscript collection. The project involves thousands of volunteers and spans many years – beyond the provider’s expected life-cycle for the product, its primary market lifetime.

Intact Digital provide a managed service that allows for digital continuity. There are several steps in the process, which then provide a number of options to software providers and content producers:
  • Deposit
  • Hosting
  • Remote Access
  • Digital Continuity Assurance Plans

The software can be hosted in a virtual machine and accessed remotely via a browser. The implications of this are far-reaching for projects like the one undertaken by Fo Guang Shan: they don’t need to worry about the Cut&Search software becoming redundant, and their digital assets remain protected. For smaller organisations operating on ever-decreasing budgets this is an important step, both for asset protection and for digital preservation.

Key areas to develop

Although this is an important step, there is still much work to do, and some key areas that need development were highlighted. Addressing these will result in the sustained use of digital:

  • Economy around “retired” software
  • Legal frameworks and sustainable business models
  • New practices to create demand
  • New services to make it efficient, economical and sustainable

Changes to the Ecosystem

Taking these steps and creating a dialogue between the technology producers/providers and the content producers changes the dynamic of the ecosystem, redressing the imbalance in control.

 

The talk ended with two very pertinent statements:

“Together we can create new practices and new models of extending the life of digital.”

“Without digital continuity our digital content, information and knowledge has no future.”

As a trainee I still have lots to learn, but a major theme running throughout digital archiving and digital preservation is the need for communication, collaboration and dialogue. Working together, sharing ideas and sharing challenges are key to securing the future of digital content.

 

A complete collection of the slides relating to this talk can be found here: https://doi.org/10.6084/m9.figshare.5415040.v1 – Milic-Frayling, Natasa (2017): Aging of digital: Towards managed services for digital continuity. figshare.

PASIG 2017: “Sharing my loss to protect your data” University of the Balearic Islands

 

Last week I was lucky enough to attend the PASIG 2017 (Preservation and Archiving Special Interest Group) conference, held at the Oxford University Museum of Natural History, where over the course of three days the digital preservation community connected to share experiences, tools, successes and mishaps.

The story of one such mishap came from Eduardo del Valle, Head of the Digitization and Open Access Unit at the University of the Balearic Islands (UIB), in his presentation titled “Sharing my loss to protect your data: A story of unexpected data loss and how to do real preservation”. In 2013, the digitisation and digital preservation workflow pictured below was set up by the IT team at UIB.

2013 Digitisation and Digital Preservation Workflow (Eduardo del Valle, 2017)

Del Valle was told this was a reliable system with fast retrieval. He found this was not the case: retrieval was slow, and the only means of organisation was an Excel spreadsheet recording the storage locations of the data.

In order to assess their situation they used the NDSA Levels of Digital Preservation, a tiered set of recommendations on how organisations should build their digital preservation activities, developed by the National Digital Stewardship Alliance (NDSA) in 2012. The guidelines are organised into five functional areas that lie at the centre of digital preservation:

  1. Storage and geographic location
  2. File fixity and data integrity
  3. Information security
  4. Metadata
  5. File formats

Each of these five areas has four columns (Levels 1–4) setting tiered recommendations for action, from Level 1, the least an organisation should do, to Level 4, the most an organisation can do. You can read the original paper on the NDSA Levels here.

The slide below shows the extent to which the University met the NDSA Levels. They found there was an urgent need for improvement.

NDSA Levels of Preservation UIB compliance (Eduardo del Valle, 2017)

“Anything that can go wrong, will go wrong” – Eduardo del Valle

In 2014 the IT team decided to implement a new backup system. While the installation and configuration of the new backup system (B) was completed, the old system (A) remained operative.

On the 14th and 15th of November 2014, a backup of the digital material generated during the digitisation of nine rare books from the 14th century was created in the tape backup system (A), and, notably, two confirmation emails were received verifying the success of the backup. By October 2015, all digital data had been migrated from system (A) to the new system (B), spanning UIB projects from 2008 to 2014.

However, on 4th November 2015, a loss of data was detected…

The files corresponding to the nine digitised rare books were lost. The loss was detected a year after the initial backup of the nine books in system (A), by which time the contract for technical assistance had finished; this meant there was no possibility of obtaining financial compensation if the loss was due to a hardware or software problem. The loss of these files, unofficially dubbed “the X-files”, meant the loss of three months of work and the corresponding economic loss. Furthermore, the rare books were in poor condition, and digitising them again could cause serious damage. Despite a number of theories, the University has yet to receive an explanation for the loss of data.

The digitised 14th century rare book from UIB collection (Eduardo del Valle, 2017)

To combat issues like this, and to enforce best practice in their digital preservation efforts, the University acquired Libsafe, a digital preservation solution offered by Libnova. Libsafe is OAIS (ISO 14721:2012) compliant, and encompasses advanced metadata management with a built-in ISAD(G) filter and the possibility of importing any custom metadata schema. Furthermore, Libsafe offers fast delivery, format control, storage of two copies in disparate locations, and a built-in catalogue. With the implementation of a standards-compliant workflow, UIB proceeded to meet all four levels across the five areas of the NDSA Levels of Digital Preservation.

ISO 14721:2012, Space Data and Information Transfer Systems – Open Archival Information System (OAIS) – Reference Model, provides a framework for implementing the archival concepts needed for long-term digital preservation and access; for describing and comparing the architectures and operations of existing and future archives; and for describing the roles, processes and methods for long-term preservation.

The use of these standards facilitates easy access to, and discovery and sharing of, digital material, as well as its long-term preservation. Del Valle’s story of data loss reminds us of the importance of implementing standards-based practices in our own institutions, to minimise risk and maximise interoperability and access – in other words, to undertake true digital preservation.

 

With thanks to Eduardo del Valle, University of the Balearic Islands.

PASIG2017: Preserving Memory

 

The Oxford University Natural History Museum (photo by Roxana Popistasu, twitter)

This year’s PASIG (Preservation and Archiving Special Interest Group) conference brought together an eclectic mix of individuals from around the world to discuss the very exciting and constantly evolving topic of digital preservation. Held at the Oxford University Museum of Natural History, the conference aimed to connect practitioners from a variety of industries, with a view to promoting conversation around digital preservation experiences, designs and best practices. The presentations comprised a series of lightning talks, speeches and demos on a variety of themes, including the importance of standards, sustainability and copyright within digital preservation.

UNHCR: Archiving on the Edge

UNHCR Fieldworkers digitally preserving refugee records (photo by Natalie Harrower, twitter)

I was particularly moved by a talk given on the third day by Patricia Sleeman, an Archivist working for the UNHCR, a global organisation dedicated to saving lives, protecting rights and building a better future for refugees, forcibly displaced communities and stateless people.

Entitled “Keep your Eyes on the Information”, Sleeman’s poignant and thought-provoking presentation discussed the challenges and difficulties of undertaking digital preservation in countries devastated by the violence and conflicts of war. Whilst recognising that digital preservation doesn’t immediately save lives in the way that food, water and aid can, Sleeman identified it as having significant importance in the effort to retain, record and preserve the memory, identity and voice of a people – which would otherwise be lost through the destruction and devastation of displacement, war and violence.

About the Archive

Sleeman and her team seek to capture a wide range of digital media, including YouTube, websites and social media, each forming a precious snapshot of history and an antidote to the violent acts of mnemocide – the destruction of memory.

The digital preservation being undertaken is still in its early stages, with the focus on creating good-quality captures and metadata. It is hoped, however, that in time detailed policies and formats will be developed to aid Sleeman in her digital preservation work.

One of the core challenges of this project has been handling highly sensitive material, including refugee case files. The preservation of such delicate material has required Sleeman and her team to act slowly and with integrity, respecting the content of the information at each stage.

For more information on the UNHCR, please click here.