Category Archives: Web Archives

The UK Web Archive: Mental Health, Social Media and the Internet Collection

The UK Web Archive hosts several Special Collections, curating material related to a particular theme or subject. One such collection is on Mental Health, Social Media and the Internet.

Since the advent of Web 2.0, people have used the Internet as a platform to engage and connect, resulting in new forms of communication and, consequently, new environments to adapt to, such as social media networks. This collection aims to illustrate how these changes have affected mental health in the UK. It reflects current attitudes towards mental health displayed online within the UK, and how the Internet and social media are being used in contemporary society.

We began curating material in June 2017, archiving various types of web content, including research, news pieces, UK-based social media initiatives and campaigns, charities’ and organisations’ websites, blogs and forums.

Material is being collected around several themes, including:

Body Image
Over the past few years, there has been a move towards using social media to discuss body image and mental health. This part of the collection curates material relating to how the Internet and social media affect mental health issues around body image. It includes research developing theory in this area, news articles on individuals’ experiences, and material posted on social media accounts discussing this theme.

Cyber-bullying
This theme curates material, such as charities’ and organisations’ websites and social media accounts, that discusses, raises awareness of and tackles cyber-bullying. Material examining the impact of social media and Internet use on bullying, such as news articles, social media campaigns and blog posts, is also collected, along with online resources created to help with the issue, such as guides and advice.

Addiction

This theme collects material around gaming and other Internet-based activities that may become addictive, such as social media, pornography and gambling. It includes recent UK-based research, studies and online polls, social media campaigns, online resources, blogs and news articles from individuals and organisations. Discourse, discussion, opinion and action regarding different aspects of Internet addiction, including social media addiction, are all captured under this overarching theme.

The Mental Health, Social Media and the Internet Special Collection is available via the new UK Web Archive Beta Interface!

Co-authored with Carl Cooper

The UK Web Archive Ebola Outbreak collection

By CDC Global (Ebola virus) [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons


Next month marks the fourth anniversary of the WHO’s public announcement of “a rapidly evolving outbreak of Ebola virus disease (EVD)” that went on to become the deadliest outbreak of EVD in history.

With more than 28,000 cases and 11,000 deaths, it moved with such speed and virulence that–though concentrated in Guinea, Liberia and Sierra Leone–it was feared at the time that the Ebola virus disease outbreak of 2014-2016 would soon spread to become a global pandemic.

No cure or vaccine has yet been discovered and cases continue to flare up in West Africa; the most recent flare-up was declared over on 2 July 2017. Yet today most people in the UK, unless directly affected, don’t give it a second thought.

Searching online now, you can find fact sheets detailing everything you might want to know about patient zero and the subsequent rapid spread of infection. You can find discussions detailing the international response (or failure to respond) and lessons learned. You might even find the reminiscences of aid workers and survivors. But these sites all examine the outbreak in retrospect, and their pages and stories have been updated so often that posts from the time can no longer be found.

Posts that reflected the fear and uncertainty that permeated the UK during the epidemic. The urgent status updates and travel warnings.  The misinformation that people were telling each other. The speculation that ran riot. The groundswell of giving. The mobilisation of aid.

Understandably when we talk about epidemics the focus is on the scale of physical suffering: numbers stricken and dead; money spent and supplies sent; the speed and extent of its spread.

Whilst UKWA regularly collects the websites of major news channels and governmental agencies, what we wanted to capture was the public dialogue on, and interpretation of, events as they unfolded – to see how local interests and communities viewed the crisis through the lens of their own experience.

To this end, the special collection Ebola Outbreak, West Africa 2014 features a broad selection of websites concerning the UK response to the Ebola virus crisis. Here you can find:

  • The Anglican community’s view on the role of faith during the crisis;
  • Alternative medicine touting the virtues of liposomal vitamin C as a cure for Ebola;
  • Local football clubs fundraising to send aid;
  • Parents in the UK withdrawing children from school because of fear of the virus’ spread;
  • Think tanks’ and academics’ views on the national and international response;
  • Universities issuing guidance and reports on dealing with international students; and more.

Active collection for Ebola began in November 2014, at the height of the outbreak, whilst related websites dating back to the infection of patient zero in December 2013 have been added retrospectively. Collection continued through to January 2016, a few months before the outbreak began tailing off in April 2016.

The Ebola collection is available via the UK Web Archive’s new beta interface.

Web-Archiving: A Short Guide to Proxy Mode

Defining Proxy Mode:

Proxy Mode is an ‘offline browsing’ mode which provides an intuitive way of checking the quality and comprehensiveness of archived web content. It enables you to view documents within an Archive-It collection and ascertain which page elements have been captured effectively and which are still being ‘pulled’ from the live site.

Why Use Proxy Mode?

Carrying out QA (Quality Assurance) without Proxy Mode could lead to a false sense of reassurance about the data that has been captured, since some of the page elements displayed may actually be taken from the live site rather than from the archival capture. Proxy Mode should therefore be employed as part of the standard QA process: it prevents these live-site redirects from occurring and provides a true account of the data captured.
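To make the idea concrete, here is a rough sketch (my own illustration, not an Archive-It tool) of the distinction Proxy Mode surfaces: separating page resources served from the archive from those still leaking in from the live site. The wayback prefix is a made-up example.

```python
# Illustrative sketch only: Proxy Mode does this at the browser level,
# but the underlying check is separating archived resources from
# live-site leakage. The prefix below is a made-up example collection.

ARCHIVE_PREFIX = "http://wayback.archive-it.org/2502/"

def classify_resources(urls):
    """Split a capture's resource URLs into (archived, live_leakage)."""
    archived = [u for u in urls if u.startswith(ARCHIVE_PREFIX)]
    leaked = [u for u in urls if not u.startswith(ARCHIVE_PREFIX)]
    return archived, leaked

page_resources = [
    "http://wayback.archive-it.org/2502/20170124104518/http://example.org/style.css",
    "http://example.org/banner.jpg",  # still fetched from the live site
]
archived, leaked = classify_resources(page_resources)
# anything in `leaked` is what a non-proxy QA pass would wrongly
# count as captured
```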

Using Proxy Mode:

Proxy Mode is easy to set up and simply involves downloading an add-on that can be accessed here. There is also an option to set up Proxy Mode manually in Firefox or Chrome.

Potential Issues and Solutions:

Whilst using Proxy Mode, a couple of members of the BLWA team (myself included) had trouble viewing certain URLs, often receiving a ‘server not found’ error message. After corresponding with Archive-It, I discovered that Proxy Mode often has trouble loading https URLs. With this in mind, I loaded the same URL with the ‘s’ removed from ‘https’ and reloaded the page. Once Proxy Mode had been enabled, this seemed to rectify the issue.
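The workaround amounts to a one-line URL rewrite; a minimal sketch (the function name is mine, not Archive-It’s):

```python
def downgrade_to_http(url):
    """Rewrite an https URL to http (a workaround for Proxy Mode's
    trouble loading https URLs); other URLs pass through unchanged."""
    if url.startswith("https://"):
        return "http://" + url[len("https://"):]
    return url

downgrade_to_http("https://example.ac.uk/page")  # "http://example.ac.uk/page"
```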

There was one particular instance, however, where this fix didn’t work and the same ‘server not found’ error returned, much to my dismay! Browsers can sometimes save a specific version of the URL as the preferred version and redirect to it automatically. It was just a case of clearing the browser’s cache, cookies, offline website data and site preferences. Once this had been done, I was able to load the site in Proxy Mode once again #bigachievements.

Subcultures as Integrative Forces in East-Central Europe 1900 – present: a Bodleian Libraries’ Web Archive record

A problem, and a solution in action:

The ephemeral nature of internet content (the average life of a web page is 100 days – illustrating that websites do not need to be purposefully deleted to vanish) is only one contributing factor to data loss. Web preservation is a high priority; action is required. This is a driver not only for Bodleian Libraries’ Web Archive, but for digital preservation initiatives on a global scale.

However, today I would like to share the solution in action, an example from BLWA’s University of Oxford Collection: Subcultures as Integrative Forces in East-Central Europe 1900 – present.

On the live web, attempts to access the site are met with an automatic redirect to BLWA’s most recent archived capture (24 Jan. 2017). The yellow banner indicates it is part of our archive. Image from http://wayback.archive-it.org/2502/20170124104518/http://subcultures.mml.ox.ac.uk/home.html

Subcultures is a University of Oxford project, backed by the Arts & Humanities Research Council, which through its explorative redefinition of ‘sub-cultures’ aims to challenge current ways of understanding simultaneous forms of identification in East-Central Europe, using a multi-disciplinary methodology drawing on social anthropology, discourse analysis, historical studies and linguistics. The project ran from 2012 to 2016.

The Subcultures website is an incredibly rich record of the project and its numerous works. It documented cross-continent collaborative initiatives including lectures, international workshops and seminars, as well as an outreach programme including academic publications. Furthermore, comparative micro-studies were conducted in parallel with the main collaborative project: Linguistic Identities: L’viv/Lodz, c.1900; Myth and Memory: Jews and Germans, Interwar Romania; Historical Discourses: Communist Silesia; and Discursive Constructions: L’viv and Wroclaw to present. The scope and content of the project, including key questions, materials, past and present events and network information, is* all hosted on http://subcultures.mml.ox.ac.uk/home.html.

Was*. The site is no longer live on the internet.

However, as well as an automatic re-direction to our most recent archival copy, a search on Bodleian Libraries’ Web Archive generates 6 captures in total:

Search results for Subcultures within BLWA. Image from https://archive-it.org/home/bodleian?q=Subcultures

The materials tab of the site is fully functional in the archived capture: you can listen to the podcasts and download the papers on theory and case studies as PDFs.

The use of Subcultures

To explore the importance of web-archiving in this context, let us think about the potential use(rs) of this record and the implications if the website were no longer available:

As the project comprised a wider outreach programme alongside its research, content such as PDF publications and podcasts was available for download, consultation and further research. The archived website means that these innovative collaborations, and the data produced by the project’s methodology, remain accessible. The public can engage with them on a global scale, for education and for interaction with important issues – quite apart from how academics, researchers, historians and the wider user community will benefit from the continued availability of the materials. Outreach, by its very nature, is addressed to an unspecified and open-ended audience; preserving the site keeps that offer open.

Listening to the podcast of the project event hosted in Krakow: ‘Hybrid Identity’ in 2014. Rationale, abstracts and biographies from the workshop can also be opened. Image from http://wayback.archive-it.org/2502/20170124104618/http://subcultures.mml.ox.ac.uk/materials/workshop-krakow-hybrid-identity-september-2014.html

Furthermore, the site provides an irreplaceable record of institutional history for the University of Oxford as a whole, as well as of its research and collaborations; this is a central purpose of our University of Oxford collection. The role of preserving for posterity cannot be overstated. Subcultures provides data that will be used and re-used for decades to come, and also documents decisions and projects of the University of Oxford. For example, the outline and rationale of the project are available in full through the Background Paper – Theory, which can be consulted through the archived capture just as it could on the live web. Biographical details of contributors are also preserved in the captures, recording the people involved and their roles for posterity and accountability.

Building on the importance of access to research: an internet presence increases scholarly interaction. The scope of the project is of great relevance, and not only is data for research available from the capture of the site, but the use of internet archives as datasets is also expected to become more prominent.

Participate!

Here at BLWA the archiving process begins with a nomination: if you have a site that you believe is worth preserving as part of one of our collections, then please nominate it here. The nomination form will go to the curators and web archivists on the BLWA team for selection checks and further processing. We would love to hear your nominations.

Why and how do we Quality Assure (QA) websites at the BLWA?

At the Bodleian Libraries Web Archive (BLWA), we Quality Assure (QA) every site in the web archive. This blog post aims to give a brief introduction to why and how we QA. The first steps of our web archiving involve crawling a site using the tools developed by Archive-It. These tools allow entire websites to be captured and browsed via the Wayback Machine as if they were live, letting you download files, view videos and photos and interact with dynamic content exactly as the website owner would want you to. However, due to the huge variety and technical complexity of websites, there is no guarantee that every capture will be successful (that is to say, that all the content is captured and working as it should be). Currently there is no accurate automatic process to check this, and so this is where we step in.

We want to ensure that the sites on our web archive are an accurate representation in every way. We owe this to the owners and the future users. Capturing the content is hugely important, but so too is how it looks, feels and how you interact with it, as this is a major part of the experience of using a website.

Quality assurance of a crawl involves manually checking the capture. Using the live site as a reference, we explore the archived capture, clicking on links, trying to download content or view videos, and noting any major discrepancies from the live site or any other issues. Sometimes a picture or two will be missing, or a certain link may not resolve correctly, which can be relatively easy to fix; other times there can be massive differences compared to the live site, and so the (often long and sometimes confusing) process of solving the problem begins. Some common issues we encounter are:

  • Incorrect formatting
  • Images/video missing
  • Large file sizes
  • Crawler traps
  • Social media feeds
  • Dynamic content playback issues
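A first pass at spotting missing content can be sketched as a simple set comparison between the resources a live site serves and those in the capture. This is an illustration only – real QA at BLWA is a manual, visual process – and the file names are invented:

```python
# Toy illustration: compare resource lists from the live site and the
# archived capture to flag obviously missing files. Real QA also checks
# look, feel and interaction, which no set difference can capture.

live_resources = {
    "index.html", "style.css", "logo.png", "report.pdf", "intro.mp4",
}
archived_resources = {
    "index.html", "style.css", "logo.png",
}

missing = sorted(live_resources - archived_resources)
# files present on the live site but absent from the capture
```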

There are many techniques available to help solve these problems, but there is no ‘one fix for all’: the same issue on two different sites may require two different solutions. There is a lot of trial and error involved, and over the years we have gained a lot of knowledge on how to solve a variety of issues. Archive-It also has a fantastic FAQ section on their site; however, if we have gone through the usual avenues and still cannot solve a problem, our final port of call is to ask the geniuses at Archive-It, who are always happy and willing to help.

An example of how important and effective QA can be: the initial test capture did not have the correct formatting and was missing images. This was resolved through the QA process.

QA’ing is a continual process. Websites add new content, or organisations switch to different website designers, meaning captures of websites that have previously been successful might suddenly have an issue. It is for this reason that every crawl is given special attention and is QA’d. QA’ing the captures before they are made available is a time-consuming but incredibly important part of the web archiving process at the Bodleian Libraries Web Archive. It allows us to maintain a high standard of capture and provide an accurate representation of the website for future generations.


Bountiful Harvest: Curation, Collection and Use of Web Archives

The theme for the ARA Annual Conference 2017 is ‘Challenge the Past, Set the Agenda’. I was fortunate enough to attend a pre-conference workshop in Manchester, run by Lori Donovan and Maria Praetzellis from the Internet Archive, about the bountiful harvest that is web content, and the technology, tools and features that enable web archivists to overcome the challenges it presents.

Part I – Collections, Community and Challenges

Lori gave us an insight into the use cases of Archive-It partner organisations to show the breadth of reasons why institutions archive the web. A web collection can be created for one (or indeed all) of the following reasons:

  • To maintain institutional history
  • To document social commentary and the perspectives of users
  • To capture spontaneous events
  • To augment physical holdings
  • Responsibility: Some documents are ONLY digital. For example, if a repository upholds a role to maintain all published records, a website can be moved into the realm of publication material.

When asked about duplication amongst web archives, and whether it matters if two different organisations archive the same web content, Lori argued that duplication is not worrisome: more captures of a website are good for long-term preservation in general, and in some cases organisations can work together on collaborative collecting if the collection scopes are appropriate.

Ultimately, the priority of crawling and capturing a site is to recreate the experience a user would have had visiting the live site on the day it was archived. Combining this with an appropriate archiving frequency means that change over time can also be preserved. This is hugely important: the ephemeral nature of internet content is widely attested to. Thankfully, the misconception that ‘online content will be around forever’ is being confronted. Lori put forward some examples to illustrate why the archiving of websites is crucial.

In general, a typical website lasts 90-100 days before one of the following happens:

  1. The content changes
  2. The site URL moves
  3. The content disappears completely

A study was carried out on the Occupy Movement sites archived in 2012. Of 582 archived sites, only 41% were still live on the web as of April 2014. (Lori Donovan)

Furthermore, we were told about a 2014 study which concluded that 70% of scholarly articles online with text citations suffered from reference rot over time. This speaks volumes about the need to preserve copies for both authentication and academic integrity.

The challenge continues…

Lori also pointed us to the NDSA 2016/2017 survey, which outlines the principal concerns within web archiving currently: social media (70%), video (69%), and interactive media and databases (both 62%). Any dynamic content can be difficult to capture and curate; sharing advice and guidelines amongst leaders in the web archiving community is therefore a key factor in determining successful practice for both current and future web archivists.

Part II – Current and Future Agenda

Maria then talked us through some key tools and features which enable greater crawling technology, higher quality captures and the preservation of web archives for access and use:

  • Brozzler. Definitely my new favourite portmanteau (browser + crawler = Brozzler!), Brozzler is the newly developed crawler from the Internet Archive, replacing the combination of the Heritrix and Umbra crawlers. Brozzler captures HTTP traffic as it is loaded, works with YouTube to improve media capture, and writes the data immediately to a WARC file. Because Brozzler uses a real browser to fetch pages, it can capture embedded URLs and extract links.
  • WARC. The Web ARChive (WARC) file format is the ISO standard for web archives. It is a concatenated file written by a crawler, with long-term storage and preservation specifically in mind. However, Maria pointed out to us that WARC files are not constructed to easily enable research (more on this below).
  • Elasticsearch. This full-text search system searches not just the HTML content displayed on web pages but also PDF, Word and other text-based documents.
  • Solr. A metadata-only search tool. Metadata can be added in Archive-It at collection, seed and document level.
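To illustrate the ‘concatenated file’ structure of WARC, here is a much-simplified sketch of reading WARC-style records. Real WARC files are usually gzip-compressed per record and best handled with a dedicated library such as warcio; this toy parser and sample only show the header-block/payload layout:

```python
# Toy WARC-like data: each record is a header block, a blank line,
# a payload of Content-Length bytes, then a blank line before the
# next record. Simplified for illustration; not a full WARC parser.
SAMPLE = (
    "WARC/1.0\r\n"
    "WARC-Type: response\r\n"
    "WARC-Target-URI: http://example.com/\r\n"
    "Content-Length: 13\r\n"
    "\r\n"
    "Hello, world!"
    "\r\n\r\n"
    "WARC/1.0\r\n"
    "WARC-Type: metadata\r\n"
    "WARC-Target-URI: http://example.com/\r\n"
    "Content-Length: 2\r\n"
    "\r\n"
    "ok"
    "\r\n\r\n"
)

def parse_records(text):
    """Return a list of (headers, payload) from concatenated records."""
    records, pos = [], 0
    while pos < len(text):
        head_end = text.index("\r\n\r\n", pos)
        lines = text[pos:head_end].split("\r\n")  # lines[0] is "WARC/1.0"
        headers = dict(line.split(": ", 1) for line in lines[1:])
        length = int(headers["Content-Length"])
        payload = text[head_end + 4:head_end + 4 + length]
        records.append((headers, payload))
        pos = head_end + 4 + length + 4  # skip payload and record gap
    return records

records = parse_records(SAMPLE)
```

The concatenated layout is what makes WARC simple to write during a crawl but awkward to query for research, which is where the derivative datasets below come in.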

Supporting researchers now and in the future

The tangible experience of using a web archive, where a site can be navigated as if it were live, can shed much light on the political and social climate at its time of capture. Yet, Maria explained, the raw captured data, rather than just the replay, is obviously a rich area for potential research and, if handled correctly, an invaluable research tool.

As well as the use of Brozzler as a new crawling technology, Archive-It research services offer a set of derivative dataset files that are less complex than WARC and allow for data analysis and research. One of these is the Longitudinal Graph Analysis (LGA) dataset, which allows a researcher to analyse trends in links between URLs over time across an entire web collection.
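As a rough sketch of the kind of question longitudinal link analysis answers, one can count inbound links per target URL for each capture period and watch the trend. The triple format and the URLs below are my own invented illustration, not the actual LGA file layout:

```python
from collections import defaultdict

# Assumed input: (capture_period, source_url, target_url) link triples
# extracted from a web collection. Illustrative data only.
links = [
    ("2014-11", "http://a.example/", "http://who.example/ebola"),
    ("2014-11", "http://b.example/", "http://who.example/ebola"),
    ("2015-06", "http://a.example/", "http://who.example/ebola"),
    ("2015-06", "http://c.example/", "http://d.example/"),
]

def inlinks_over_time(triples):
    """Count inbound links per target URL for each capture period."""
    counts = defaultdict(lambda: defaultdict(int))
    for period, _source, target in triples:
        counts[period][target] += 1
    return {period: dict(targets) for period, targets in counts.items()}

trend = inlinks_over_time(links)
# e.g. interest in the Ebola page: 2 inbound links in 2014-11,
# falling to 1 by 2015-06
```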

Maria acknowledged that there are lessons to be learnt in supporting researchers using web archives, including technical proficiency training and reference resources. The typology of researchers who use web archives is ever-growing: social and political scientists, digital humanities disciplines, computer science, and documentary and evidence-based research including legal discovery.

What Lori and Maria both made clear throughout the workshop was that the development and growth of web archiving is integral to challenging the past and preserving access on a long-term scale. I really appreciated the insight into how the life cycle of web archiving is a continual process, from creating a collection through to research services, whilst simultaneously managing the workflow of curation.

When in Manchester…

Virtual Archive, Central Library, Manchester

I couldn’t leave Manchester without exploring the John Rylands Library and Manchester’s Central Library. In the latter, an interactive digital representation of a physical archive let you choose a box, arranged as a physical archive might be, and then projected the digitised content onto the screen once selected. A few streets away in Deansgate, I had just enough time in the John Rylands to learn that the fear of beards is called pogonophobia. Go and visit yourself to learn more!

Special collections reading room, John Rylands Library, Manchester

#WAWeek2017 – Researchers, practitioners and their use of the archived web

This year, the world of web archiving saw a premiere: not only were the biennial RESAW conference and the IIPC conference, established in 2016, held jointly for the first time, but they also formed part of a whole week of workshops, talks and public events around web archives – Web Archiving Week 2017 (or #WAWeek2017 for the social medially inclined).

After previous conferences in Reykjavik (2016) and Aarhus (RESAW 2015), the big 2017 event was held in London, 14-16 June 2017, organised jointly by the School of Advanced Study of the University of London, the IIPC and the British Library.
The programme was packed with an eclectic variety of presentations and discussions, with topics ranging from the theory and practice of curating web archive collections and capturing whole national web domains, via technical topics such as preservation strategies, software architecture and data management, to the development of methodologies and tools for research based on web archives, and case studies of their application.

Even in digital times, who doesn’t like a conference pack? Of course, the full programme is also available online. (…but which version will be easier to archive?)

Continue reading

Researchers, practitioners and their use of the archived web. IIPC Web Archiving Conference, 15th June 2017

From the 14th to the 16th of June, researchers and practitioners from a global community came together for a series of talks, presentations and workshops on the subject of web archiving at the IIPC Web Archiving Conference. The event formed part of Web Archiving Week 2017, a week-long event running from the 12th to the 16th of June, hosted by the British Library and the School of Advanced Study.

I was lucky enough to attend the conference on the 15th June with a fellow trainee digital archivist and listen to some thoughtful, engaging and challenging talks.

The day started with a plenary in which John Sheridan, Digital Director of the National Archives, spoke about the work of the National Archives and the challenges of, and approaches to, web archiving they have taken. The National Archives is principally the archive of the government; it allows us to see what the state saw, through the state’s eyes. Archiving government websites is a crucial part of this record keeping as we move further into the digital age, where records are increasingly born-digital. A number of points were made highlighting the motivations behind web archiving at the National Archives.

  • They care about the records that government is publishing, and their primary function is to preserve those records
  • Accountability for government services online and the information they publish
  • Capturing both the context and content

By preserving what the government publishes online, it can be held accountable; accountability is one aspect that demonstrates the inherent value of archiving the web. You can find a great blog post on accountability and digital services by Richard Pope at this link: http://blog.memespring.co.uk/2016/11/23/oscon-2016/

The published records and content on the internet provide valuable and crucial context for the records that are unpublished, linking the backstory to the published records. This allows for greater understanding and analysis of the information, and will be vital for researchers and historians now and in the future.

Quality assurance is a high priority at the National Archives. A narrow crawling focus has both allowed and prompted a lot of effort to be directed into the quality of the archived material, so that it has high fidelity in playback. To keep these high standards, a really good in-depth crawl can take weeks. Having a small curated collection is an incentive to work harder on capture.

The users and their needs were also discussed, as these often shape the way the data is collected, packaged and delivered.

  • Users want to substantiate a point, using archived sites for citation on Facebook or Twitter, for example
  • Writers and researchers need to cite
  • Legal – what was the government’s stance, or the law, at the time of my client’s case?
  • Researchers’ needs – highlighted as an area where improvements can be made
  • Government itself uses the archives for information purposes
  • Government websites requesting crawls before a site closes – for example, the NHS website transferring to GOV.UK

The last part of the talk focused on the future of web archiving and how it might take shape at the National Archives. Web archiving is complex and at times chaotic. Traditional archiving standards have been placed upon it in an attempt to order the records: a natural evolution, as information managers and archivists used their existing knowledge, skills and standards to bring this information under control. This has, however, resulted in difficulties in searching across web archives, describing the content and structuring the information. The nature of the internet and the way in which its information is created mean that uncertainty inevitably has to be embraced.

Digital archiving could take a turn into a second generation, moving away from traditional standards and embracing new standards and concepts. One proposed method is the ICA Records in Contexts conceptual model. It proposes a multidimensional description, with each ‘thing’ having its own description, as opposed to the traditional single unit of description (one size fits all). Instead of a single hierarchical fonds-down approach, Records in Contexts uses a description that can be formed as a network or graph. The context of the fonds is broader, linking to other collections and records to give different perspectives and views. The records can be enriched this way, providing a fuller picture of the record or archive. The web produces content in a constant state of flux, and a system of description that can grow and morph over time, creating new links and context, would be a fruitful addition.

Visual diagram of how the Records in Contexts conceptual model works

“This example shows some information about P.G.F. Leveau a French public notary in the 19th century including:
• data from the Archives nationales de France (ANF) (in blue); and
• data from a local archival institution, the Archives départementales du Cher (in yellow).” International Council on Archives, Records in Contexts: A Conceptual Model for Archival Description, p. 93.


Traditional Fonds Level Description


I really enjoyed the conference as a whole, and the talk by John Sheridan in particular. I learnt a lot about the National Archives’ approach to web archiving, its challenges, and where the future of web archiving might go. I’m looking forward to applying this new knowledge to the web archiving work I do here at the Bodleian.

Changes are currently being made to the National Archives web archiving site, which will relaunch on the 1st of July this year. Why don’t you go and check it out?


Web Archiving Week 2017 – “Pages for kids, by kids”

Yesterday I was lucky enough to attend a day of the Web Archiving Week 2017 conferences in Senate House, London along with another graduate trainee digital archivist.

A beautiful staircase in Senate House

Every session I attended throughout the day was fascinating, but Ian Milligan’s ‘Pages by kids, for kids’: unlocking childhood and youth history through the GeoCities web archive stood out for me as truly capturing part of what makes a web archive so important to society today.

Pages by kids, for kids

GeoCities, for those unfamiliar with the name, was a web hosting service founded in 1994 on which anyone could build their own free website, which would then become part of a ‘neighbourhood’. Each neighbourhood was themed around a particular topic, allowing topic clusters to form from the created websites. GeoCities was shut down in Europe and the US in 2009, but evidence of it still exists in the Internet Archive.

Milligan’s talk focused particularly on the Enchanted Forest neighbourhood between 1996 and 1999. The Enchanted Forest was dedicated to child-friendliness and was the only age-based neighbourhood; as such, it had extra rules and community moderation to ensure nothing age-inappropriate was present.

“The web was not just made by dot.com companies”

The above image shows what I think was one of the key points from the talk, a quote from the New York Times, March 17th 1997:
“The web was not just made by dot.com companies, but that eleven-year-old boys and grandmothers are also busy putting up Web sites. Of course, the quality of these sites varies greatly, but low-cost and even free home page services are a growing part of the on-line world.”

The internet is a democracy, and a true record of how and why it has been used must necessarily involve people – not just businesses. Because GeoCities websites are held within the Internet Archive, it is possible to access direct evidence of how people were using the internet in the late part of the 20th century, and, as Ian Milligan’s talk explained, it also gives access to direct evidence of childhood and youth culture forming on the internet.

Milligan pointed out that access to evidence of childhood and youth culture is rare: normally, historical evidence comes in the form of adults remembering their time as children or from researchers studying children, but something produced by a child for other children would rarely make it into a traditional archive. Within the trove of archived GeoCities websites, however, children producing web content for children is clearly visible. From this, it is possible to examine what constituted popular activities for children on GeoCities in the late 20th century.

Milligan noted that one major activity within the Enchanted Forest centred around an awards culture, wherein a popular site would give awards to users based on several web page qualities, such as no personally identifiable information, working links, and loading times of less than one minute. Some users would create their own awards to present to people – for example, an award for finding all the Winnie the Pooh words in a word search. His findings showed that 15% of Enchanted Forest websites had a dedicated awards page.

A darker side of a child-centric portion of the web was also revealed in the Geokidz Club. On the surface, the Geokidz Club appeared to be an unofficial online clubhouse where children could share poetry and book reviews, chat, and take HTML lessons – but these activities came at the price of a survey which contained questions about the lifestyles of the children’s parents (the type of information that would appeal to advertisers). This formed part of one of the first internet privacy court cases, because the data was obtained from children and sold on without proper informed consent.

It was among my favourite talks of the day, and showed how much richer our understanding of the recent past can be when we use web archives, as well as their benefit to researchers of the history of youth and childhood.
It felt particularly relevant to me, as someone who spent her teen years on the internet watching, and being involved in, youth culture happening online in the 2000s, to know that online youth culture, which can feel very ephemeral, can be saved for future research in web archives.

A wall hanging in Senate House (made of sisal)

In truth, any talk I attended would have made an interesting topic for this blog – the entire day was filled with informative speakers, interesting presentations and monumental, hair-like wall hangings. But I felt Ian Milligan’s talk gave such a positive example of how the internet, and particularly web archives, can give a voice to those whose experiences might be lost otherwise.

Initiating conversation: let’s talk about web content (part 2)

Colin Harris, Superintendent of Special Collections reading rooms. Chosen site: cyndislist.com

‘I am a founding member of the Oxfordshire Family History Society and I’ve long been interested in family history. As a phenomenon it surged in popularity in the 1970s. In about 1973 there was great curiosity (in OFHS) in Bicester, as everyone was interested in the popular group, The Osmonds (who originated from Bicester!). Every county has a family history society, and I would say it’s they who have done the lion’s share of the work. All of their work and indexing…it’s all grist to the mill in terms of recording names and events.

So the website I would like to have access to in 10 years’ time is cyndislist.com, which is one of the world’s largest databases for genealogy. In fact it’s been going for over 21 years already – it was launched on 4 March 1996. The family history people have been right there from the very beginning, and it’s been growing solidly since then; it’s fantastic. It covers 200 categories of subjects, it has links to 332,000 other websites, and it’s the starting point for any genealogical research. The ‘Cyndi’ is Cyndi Howells, an author in genealogy.

Almost every day the site is publishing content that might be interesting on some particular subject. Just going back over the last couple of weeks: an article on Telling the Orphan’s story; Archive lab on how to preserve old negatives; The key to family reunion success; and DNA testing at a family reunion! Projects even go beyond individuals…they explore a Yellowstone wolf family. There is virtually nothing that is untouched. Anything with a name to it has potential for exploration.

To be honest, I haven’t been able to do any family history research since 1980, but I am hoping to do some later on this year (when I retire). All these years that have passed have meant that so much is now available to be accessed over the internet.

Actually I’d love to see genealogy and family history workers and volunteers getting more recognition for the fantastic amount of industrious and tech-savvy work they do. Family history is something for people from all walks of life. Our history, your history, my history is something very personal. As I say, 21 years and going strong; I’d love to see the site going stronger still in 10 years’ time.’

Pip Willcox, Head of the Centre for Digital Scholarship and Senior Researcher at the Oxford e-Research Centre. Chosen site: twitter.com

‘Twitter is an amazing tool that society has used to show the best of what humanity is at the moment…we share ideas, we share friendship, fun and joy, we communicate with others around the world, people help each other. But it also shows the worst of what humans can do. The news we see is just the tip of the iceberg – the levels of abuse that users, particularly minority groups, receive are appalling. Twitter is a fantastic place to meet people who think very differently from us, people who come from different backgrounds, have had different experiences, who live far from us, or close by but whom we might not otherwise have met. It is so rich, so full of potential, and some of what we do with it is amazing, yet some of what we do with it is appalling.

The question for the archive is ‘which Twitter?’ There is the general feed, what you see if you don’t sign in. Then there are our individual feeds, where we curate our own filter bubbles, customising what we see through our accounts. You can create a feed around a hashtag or an event, or slice it by time or location. All of these approaches will affect the version of Twitter we archive and leave for the future to discover.

These filter bubbles are not new: we have always lived in them, even if we haven’t called them that before. Last year there was an experiment in which a series of couples who held diametrically opposing views switched Twitter accounts, and I found that, and their thoughtful responses to it, fascinating.

Cultures of Knowledge, for example, a project based at the History Faculty here at the University of Oxford, traces early modern correspondence. This resource lets you search for who was writing to whom, when, where, and the subjects they were discussing. It’s an enormously rich, people-centred view of the history of ideas and relationships across time and space, and of course it points readers on in interesting directions, to engage closely with the texts themselves. This is possible because the letters were archived and catalogued over the years, over the centuries, by experts.

How are we going to trace the conversations of the late 20th and the early 21st centuries? The speed at which ideas flow is faster than ever and their breadth is global. What will future historians make of our age?

I’m interested from a future-history as well as a community point of view. The way we are using Twitter has already changed, and tracking its use, reach, and power seems to me well worth recording, to help us understand it now and to help explain an aspect of our lives to future societies. For me, Twitter makes the world more familiar, and anything that draws us together as a global community, that reinforces our understanding that we share one planet, that what we have in common vastly outweighs what divides us, and that helps us find ways to communicate, is a good and a necessary thing.’
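Pip’s ‘which Twitter?’ question can be made concrete with a rough sketch. Assuming Python, and a deliberately invented record structure (real archived Twitter data is far richer than this), each ‘slice’ of the same material yields a different archive:

```python
from datetime import datetime

# Hypothetical archived tweets: just a text and an ISO-8601 timestamp each.
tweets = [
    {"text": "Conference day! #webarchiving", "time": "2017-06-14T09:30:00"},
    {"text": "Lunch in the sunshine", "time": "2017-06-14T12:05:00"},
    {"text": "Great talk on GeoCities #webarchiving", "time": "2017-06-15T11:00:00"},
]

def slice_by_hashtag(records, tag):
    """One 'version' of the archive: only tweets carrying a given hashtag."""
    return [r for r in records if tag.lower() in r["text"].lower()]

def slice_by_day(records, day):
    """Another version: only tweets posted on a given date (YYYY-MM-DD)."""
    return [r for r in records
            if datetime.fromisoformat(r["time"]).date().isoformat() == day]

# Each slice preserves a different Twitter for the future:
event_view = slice_by_hashtag(tweets, "#webarchiving")   # 2 tweets
day_view = slice_by_day(tweets, "2017-06-14")            # 2 tweets
```

An archive built from event_view and one built from day_view preserve overlapping but different versions of Twitter – exactly the curatorial choice described above.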


Will Shire, Library Assistant, Philosophy and Theology Faculty Library. Chosen site: wikipedia.org

‘It’s one of the sites I use the most…it has all of human knowledge. I think it’s a cool idea that anyone can edit it – unlike a normal book, it’s updated constantly. I feel it’s derided almost too much by people who automatically assume it’s not trustworthy…but I like the fact that it is a range of people coming together to edit and amend this resource. As a kid I bothered my mum all the time with constant questioning of ‘Why is this like this? Why does it do that?’ Nowadays, if you have a question about anything, you can visit wikipedia.org. It would be really interesting to take a snapshot of one article every month or week in order to see how much it changes through user editing.

Also, I studied languages, and it is extremely useful for learning new vocabulary, as the links at the side of an article can take you to the same content in other available languages. You can quite easily look up different words, or use it as a starting point to take you to articles in languages other than English.’
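Will’s monthly-snapshot idea can be prototyped against the public MediaWiki API, which can return the most recent revision of an article as of a given date. A sketch, assuming Python (the article title is just an example):

```python
from urllib.parse import urlencode

MEDIAWIKI_API = "https://en.wikipedia.org/w/api.php"

def revision_query(title, as_of):
    """Build a query for the latest revision of `title` made on or
    before the ISO-8601 timestamp `as_of`."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvlimit": 1,
        "rvdir": "older",     # walk backwards in time from `as_of`
        "rvstart": as_of,
        "rvprop": "ids|timestamp",
        "format": "json",
    }
    return MEDIAWIKI_API + "?" + urlencode(params)

def monthly_snapshot_queries(title, year):
    """One query per month: the article as it stood on the 1st of each month."""
    return [revision_query(title, f"{year}-{month:02d}-01T00:00:00Z")
            for month in range(1, 13)]

# queries = monthly_snapshot_queries("Web_archiving", 2017)
```

Fetching each query URL returns JSON identifying the revision that was current at the start of that month; comparing revision ids from month to month shows how much an article changes through user editing.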