Category Archives: Activity

New Conservative Party Archive releases for 2026

Each January, the Archive of the Conservative Party releases files which were previously closed under the 30-year rule. This year, files from 1995 are newly available to access. This blog post outlines the strands of information contained in these files and explores a number of interesting highlights. These demonstrate not only the files’ intrinsic value in helping to develop a more complete understanding of the era, but also their worth as source material for researchers and historians. The archive material is open both to those who wish to investigate the mechanics of the Conservative Party and to those in search of a greater appreciation of British political history.

1995 saw the continuation of Conservative government, with John Major serving his fifth year as Prime Minister. Despite a steady economic situation, however, Major’s position was becoming overshadowed by significant internal divisions over the issue of European integration. To assert his authority and silence his critics, Major resigned as Party Leader in June 1995 and immediately put himself forward for re-election, challenging his opponents to stand against him. He won the ballot with over 66% of the vote. This did not, however, stem the series of defeats the Conservative Party experienced in successive local elections. These issues are amongst those covered within the newly-released files, alongside monitoring of opposition parties and an insight into Conservative initiatives abroad, including the European Democrat Union.

As in previous years, a large proportion of our new releases are from our collections of CRD (Conservative Research Department) files. Material is drawn from various sources, including subject briefings, directors’ papers and the letter books of desk officers. It also includes CRD files covering topics such as agriculture, environmental issues and food standards. Alongside these CRD files we will also be releasing papers relating to the international arm of the Conservative Party.

International relations – 1990-1995

This year we have released a number of files from the International Office (previously the Conservative Overseas Bureau), which was responsible for improving links with overseas political parties, as well as providing briefing material on international issues. After the collapse of the Soviet Union, the Conservative Party, through the International Office’s links with groups such as the European Democrat Union (EDU) and the International Democrat Union (IDU), gave support and encouragement to the formation of democratic political parties in former Eastern bloc countries, and sent observers to monitor elections. The Conservative Party was a founding member of both the EDU and the IDU, and from 1983 to 1999 the secretariat of the IDU was based at the International Office.

Being released this year is COB 7/4/1/8, which contains papers regarding the EDU’s Committee on the Promotion of Stability and Security in Europe. The briefing summary admits that “the main problem is that of strengthening the will of European governments to engage in joint action.” It also notes that “the continued presence of Russian troops in the Baltic countries is in sharp contrast with international law and must come to an end.”

CPA COB 7/4/1/8 – “The Promotion of Stability and Security in Europe”, 1995

Much of the discussion in 1995 focused on the European Union, and it is clear that there were differing opinions over just how Britain should be part of that community. Within the Conservative Party, there were hardliners who did not want to be part of the single currency as well as those who embraced the single market. The Conservative Political Centre set up several local policy discussion groups to encourage consensus through discussion and debate on the way forward in Europe. The letter books of Richard Normington (Head of the International Office and Head of the Overseas and Defence Section of CRD, 1994-1999) contain letters of thanks from the Foreign Secretary, Malcolm Rifkind, to these groups.

CPA COB 3/3/2 – Letter book, February 1995-December 1995

National Issues, 1993-1995

In addition to international relations, the International Office was also responsible for working with parties in Northern Ireland. 1995 proved a pivotal year in the political landscape of Northern Ireland, with a landmark visit by US President Bill Clinton. Against the backdrop of the Troubles, work was ongoing to ensure the continued progress of the peace process. Amidst these trials and tribulations, Richard Normington received a letter from Sinn Fein reaffirming a mutual desire for a stable peace. They welcomed “the opportunity to exchange views of the important task and responsibility which we all have to create the conditions to bring about a just and lasting peace settlement.”

CPA COB 3/3/1 – Letter book, Richard Normington, 1994-1995

As well as the ongoing questions surrounding Northern Ireland, in 1995 Labour began to float ideas regarding devolution. The Labour Party, under the leadership of Tony Blair, expressed an intention to set up Regional Development Agencies to help develop regions at a local level. There were also discussions about referendums on Welsh and Scottish devolution, which raised questions about how a devolved system would work. This reignited the West Lothian Question: whether MPs for Scottish and Welsh seats should be stopped from voting on issues that involved England only.

CPA CRD/D/11/18 – Director’s Files: Strategy and Elections, 1989-1995

Furthermore, being released this year is a large collection of CRD briefings on various domestic issues. As well as briefings on agriculture and food standards, there are also briefings regarding transport, including the upgrading of the Channel Tunnel rail link: although the Channel Tunnel had officially opened in 1994, a Bill was put through Parliament to extend the rail link.

CPA CRD/B/31/18 – Transport briefs, 1995

The CRD material also includes a number of papers regarding the ongoing privatisation of England and Wales’ water supply, an issue which is still very much alive. The papers show that even after privatisation, the Conservative government worked closely with the water companies to ensure fair charging and good water quality.

CPA CRD 5/6/42 (left) and CPA CRD 5/6/7 (right)

Strategy papers, 1994-1995

Among the CRD papers are also records which afford insight into internal Conservative strategies for the upcoming General Election in 1997. The Campaigning Department began its preparations in 1992, right after the previous General Election. However, it is apparent that 1995 was a crucial year: a number of local elections had taken place, the Conservatives had lost a number of local seats, and it was clear that new strategies were needed. A decision was therefore made to focus on the national position rather than the local level. Throughout this period, the Conservative Party remained optimistic that the 1997 election would provide them with another victory.

CPA CRD/D/11/16 – Director’s Files: Strategy and Elections, 1989-1995

All the material featured in this blog will be made available in January 2026. The full list of de-restricted items can be accessed here: Files de-restricted on 2026-01-02

Introducing Wacksy: a library for writing WACZ collections

Blog published on behalf of Pierre Marshall, Technical Research Officer, Algorithmic Archive project.

As part of the Algorithmic Archive project, we have been building tooling to support a prospective social media archive. This post introduces Wacksy, a Rust crate we wrote for packaging Web ARChive (WARC) files in WACZ collections.

Web Archive Collection Zipped (WACZ) is a format for packaging web archives. A WACZ collects WARC files along with any other related resources into a single hermetic zip archive. Each WACZ collection is self-describing and should contain within it all the resources necessary to replay a WARC file.
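As a rough orientation, the layout of a WACZ collection inside the zip looks something like this (file names follow the examples in the WACZ specification; real collections may vary):

```
example.wacz
├── archive/
│   └── data.warc.gz          # the WARC file(s) being packaged
├── indexes/
│   └── index.cdx.gz          # index of the records in the WARC
├── pages/
│   └── pages.jsonl           # list of pages available for replay
├── datapackage.json          # manifest describing every resource above
└── datapackage-digest.json   # digest of the manifest
```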

Motivation

The main advantage of independently replayable WACZ files is that you don’t need to maintain an external indexing server. Eliminating that dependency on an external indexing system is beneficial for digital preservation: in the long run, the most reliable database is just the file system.

Of course, this also means that packaging a WARC file in a WACZ involves indexing the WARC, and so the scope of this little project grew from ‘wrap some things in a zip archive’ to ‘build a WARC indexer’.

Thankfully, WARC files are easy enough to parse. One of the useful side-effects of this project was that we got to learn the WARC and WACZ specs very well. The process of implementing the WACZ format also brought up a few issues (#161, #163, #164, #166, #167) which could be used to tighten the spec in a future revision.

Archiving the datafied web

There’s another reason you might want to package resources together in a WACZ. We’ve spent a lot of time in this project trying to understand how to represent a social media post.

While sidestepping arguments about mass literary culture, we can think of social media as a kind of electronic literature, and preserving it is an extension of ongoing web archiving work. This approach includes preserving the ‘look and feel’ of a post in context, surrounded by comments and advertising and blobby Frutiger Aero buttons.

Social media posts are also data, in the sense that they exist in the form of structured JSON. Each post is an object with properties: text, username, datetime, maybe links to associated media. All raw content, easily searchable, indexable, and ready for researchers to throw into a data processing workflow.
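As a minimal sketch of the kind of object described above (the field names here are illustrative assumptions, not a schema from any real platform or from this project), such a post might be modelled as:

```rust
// Illustrative model of a social media post as structured data.
// Field names are hypothetical, not taken from any platform's API.
struct Post {
    text: String,       // the body of the post
    username: String,   // who posted it
    datetime: String,   // e.g. an ISO 8601 timestamp
    media: Vec<String>, // links to associated media, possibly empty
}

fn main() {
    let post = Post {
        text: "hello world".to_string(),
        username: "example_user".to_string(),
        datetime: "2025-06-04T12:00:00Z".to_string(),
        media: Vec::new(),
    };
    // Each property is directly addressable, ready for indexing or search.
    println!(
        "@{} at {}: {} ({} media items)",
        post.username,
        post.datetime,
        post.text,
        post.media.len()
    );
}
```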

Ideally, you want to capture and preserve both: a web archive snapshot and the structured data.

You could include the JSON inline in a WARC header field, or add it to the WARC file as a resource record. Or, you could package the WARC and JSON files together in a WACZ collection; this is the use case we had in mind when writing Wacksy.

How to use

With a stable Rust toolchain, run cargo add wacksy to add the crate to your cargo manifest.

The API provides a WACZ type with two functions: from_file and as_zip_archive.

from_file() builds the WACZ object: it takes a path to a WARC file, indexes it, and returns a result with either a WACZ struct or an error. The indexer was recently rewritten and contains almost no error handling, so use it with caution! Also, the format requires all resources to be defined in the datapackage, so when you construct a datapackage you’re already building a structured representation of the WACZ collection. This is a neat feature; Ed Summers and Ilya Kreymer really did a good job on the WACZ spec here.
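For illustration, a minimal datapackage.json might look roughly like this (the values shown are placeholders, and the exact fields should be checked against the WACZ specification rather than taken from this sketch):

```json
{
  "profile": "data-package",
  "wacz_version": "1.1.1",
  "resources": [
    {
      "name": "data.warc.gz",
      "path": "archive/data.warc.gz",
      "hash": "sha256:…",
      "bytes": 12345
    }
  ]
}
```

Because every resource must appear here with its path, hash and size, the manifest doubles as a complete, verifiable description of the collection.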

as_zip_archive() takes all the resources in the WACZ object and passes them through into a zip file, making use of Nick Babcock’s rawzip library.

Here is an example from the documentation (current/latest):

use std::error::Error;
use std::fs;
use std::path::Path;
use wacksy::WACZ;

fn main() -> Result<(), Box<dyn Error>> {
    let warc_file_path = Path::new("example.warc.gz"); // set path to your WARC file
    let wacz_object = WACZ::from_file(warc_file_path)?; // index the WARC and create a WACZ object
    let zipped_wacz: Vec<u8> = wacz_object.as_zip_archive()?; // zip up the WACZ
    fs::write("example.wacz", zipped_wacz)?; // write out to file
    Ok(())
}

This API is still missing a way of adding arbitrary extra resources into the WACZ, although the code is flexible enough to accommodate that in future.

Shipping binaries

There are two other WACZ libraries out there, one in Python (py-wacz) and one in Node.js.

Besides the library API, another goal is to provide a simple command line interface, and wrap that up into a standalone binary. This would be the main distinguishing feature of Wacksy: you don’t need to set up Python or Node.js runtimes, so there would be fewer steps involved if, for example, you were packaging up WARC files in an automated workflow.

That said, py-wacz is better tested and more feature-complete, so take that into consideration. We have used py-wacz as a reference implementation to test against.

We’re also working on packaging Wacksy for Debian, and other systems after that. When it’s all packaged, it’ll be much easier for users to try out.

Performance

The WARC indexer was written with an eye on performance and memory use. When reading plain uncompressed WARCs, the indexer only reads the headers of each record. With a known header length and the WARC Content-Length value for each record, we can calculate the next record offset and skip through the file without passing record contents into memory. Where possible we’re also using a buffered reader rather than reading byte-by-byte. For gzip-compressed WARCs it’s more complicated, and we’ve avoided doing anything fancier like streaming decompression.
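The skipping arithmetic can be sketched like this (a hypothetical illustration, not Wacksy’s actual code; it assumes header_len counts the header bytes up to, but not including, the blank line that terminates the header block, and that records are separated by two CRLF pairs as the WARC format prescribes):

```rust
// Given where a record starts, how long its headers are, and its
// Content-Length, compute the byte offset of the next record without
// ever reading the record body into memory.
fn next_record_offset(record_start: u64, header_len: u64, content_length: u64) -> u64 {
    const CRLF_CRLF: u64 = 4; // "\r\n\r\n"
    // headers + blank line + body + trailing record separator
    record_start + header_len + CRLF_CRLF + content_length + CRLF_CRLF
}

fn main() {
    // A record starting at byte 0 with 250 bytes of headers and a
    // 1024-byte body: the next record begins at 0 + 250 + 4 + 1024 + 4.
    let next = next_record_offset(0, 250, 1024);
    println!("next record at byte {next}");
}
```

With this, indexing an uncompressed WARC is a sequence of small header reads and seeks, which is why memory use stays flat regardless of record size.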

We’ve also tried to limit the dependencies used. At the moment, a binary compiled from the example above comes out at ~600 kilobytes — not super small, but more lightweight than most web pages. As a bonus, pruning the dependency tree will make Wacksy easier to package and distribute.

Use it!

At time of writing, the library is still experimental, not yet integrated into any other software, and it has only been tested against a few example WARC files. It would benefit from wider testing on real world use cases.

The code is all open source and available under an MIT license; contributions welcome!

Reporting from the RESAW2025 Conference Workshop: Towards an “Algorithmic Archive”

Welcome sign at the University of Siegen

At this year’s REsearch infrastructure for the Study of Archived Web materials (RESAW) conference, my colleague Pierre Marshall and I organised a workshop titled “Towards an ‘Algorithmic Archive’: Developing Collaborative Approaches to Persistent Social and Algorithmic Data Services for Researchers”. The workshop was accepted as one of the RESAW2025 pre-conference workshops and took place on 4 June 2025. We had around twelve participants, including researchers at various career stages and web archivists, who contributed to lively discussions and to the Algorithmic Archive project thanks to their experience with social media data.

The workshop was organised in two sessions: the first focussed on gathering researchers’ perspectives and information about the use of social media data in research, while the second invited participants to imagine a long-term archive of social media data, asking them to think about the features they would like to see in a social media data service. Both sessions offered a valuable opportunity to gather insights for the Algorithmic Archive project, particularly regarding issues and expectations related to short- and long-term access to social media data.

Key themes and takeaways are summarised below.

Social media data (re)use and data management practices

Researchers appeared to work mostly with small datasets, especially after free access to data for research purposes came to an end with the deprecation of the Twitter Academic API in 2023. Among the researchers who shared their experience with social media data, one noted how they currently work with information about the number of followers, which is often supplemented with screenshots taken at different points in time. They explained that screenshots are essential for their research, as they enable them to capture the “look and feel” of the social platforms, which is an essential part of the research they are conducting. In this regard, one of the web archivists participating in the workshop noted how at their institution they use Webrecorder[1] at least once a year.

In addition, a researcher whose work focussed on algorithms noted that social media data collected via APIs is only one of the sources they use for their study. Other sources include existing policies, new regulations (e.g. the EU Digital Services Act) and other archival sources such as information on GitHub.[2]

As for long-term preservation, researchers participating in the workshop appeared not to have specific plans, with some indicating that they usually delete social media data some time after the end of a project. Despite some concerns related to potential ethical issues, researchers expressed a general interest in reusing datasets that include social media data. Nevertheless, they emphasised that effective reuse would require detailed documentation from the dataset creator to understand how the data was developed.

Access and user requirements

For the second session, we organised a post-it note exercise in which we asked researchers to reflect on the types of metadata they would find useful for their research and would like memory institutions to collect and provide. Researchers suggested several types of metadata and information they would like to see associated with an archived resource, including: date of capture; date of publication; technical and curatorial metadata; hardware (e.g., mobile, tablet, laptop); sensitivity assessment; and the type of tool used to collect the information.

Post-it note session

There was general agreement among participants about the need for the collecting institution to preserve at least some instances of the context in which the data was embedded. For example, walkthroughs of social media platforms recorded using tools such as Webrecorder would be crucial for researchers and future users of the collection to get a sense of platforms’ “look and feel” at certain points in time. Some of the participants noted the importance of understanding potential functionality loss when replaying archived social media material.

Nevertheless, access, and particularly free access to platform data, is still one of the major blocks for researchers who need such information for their studies. This has become even more pressing since the Twitter Academic API was deprecated in 2023 and replaced with a paid tier system: the high fees required to access the necessary amount of data have often led researchers to redirect their research goals, either significantly reducing the amount of data needed or focusing on other platforms.

Overall, the workshop brought together diverse perspectives from practitioners and researchers working with social media data, fostering discussions regarding the development of sustainable strategies for collecting material from social media platforms. This was a unique opportunity to discuss some of the Algorithmic Archive findings, clarify researchers’ perspectives on concerns related to the use of social media data, and raise further questions that the Algorithmic Archive project should take into consideration in the development of a social media data service.


[1] Webrecorder homepage: https://webrecorder.net/

[2] More information about the GitHub Archiving Programme can be found here: https://archiveprogram.github.com/

The catalogue of the archive of Robert Craft and Igor Stravinsky is now available

Robert Craft (1923-2015) was a conductor and composer known for his professional and personal association with Stravinsky. He conducted orchestras in America, Canada, Europe and elsewhere, and several of Stravinsky’s later works premiered with Craft conducting. His recordings include numerous works by Stravinsky, Schoenberg and Webern. His writings include collaborations with Stravinsky, such as Conversations with Igor Stravinsky (1959) and Memories and Commentaries (1960), and he also wrote extensively on Stravinsky, as well as other notable figures in music and literature.

Igor Stravinsky (1882-1971) was a Russian composer and conductor. He is considered an important and influential composer whose works inspired many. During his life he composed more than one hundred pieces that spanned various genres and styles. His early works include the ballets commissioned by Sergei Diaghilev for the Ballets Russes, such as ‘The Firebird’ (1910), ‘Petrushka’ (1911), and ‘The Rite of Spring’ (1913). His neoclassical period included pieces influenced by Greek mythological themes, including his 1933 work ‘Perséphone’. His later music was influenced by the twelve-tone technique used by Schoenberg and the music of the Second Viennese School, a change that came about after meeting Robert Craft.

Craft and Stravinsky remained close until Stravinsky’s death in 1971.

The archive comprises a variety of material belonging to both Craft and Stravinsky. In this first edition of the catalogue, researchers will find correspondence, photographs and music papers spanning both the lifetimes of Craft and Stravinsky. As work progresses further material will be added.

Reflections on Curating in the Crossfire: Collecting in the Time of War, Conflict and Crises

On 3-4 November, I attended a two-day event at the British Library that highlighted the challenges and approaches of collecting materials created during times of war, conflict and crises. Through a series of panels and discussions, museum and library professionals, researchers and private collectors shared examples of incredible historical and contemporary initiatives to preserve diverse materials and heritage sites at risk of loss, decay or destruction.

Having recently worked on the joint Bodleian Libraries and History of Science Museum Collecting COVID project, I was particularly interested in contemporary programmes of collecting. Our project, which ran from 2021-2023, aimed to acquire and preserve the University of Oxford’s research response to the COVID-19 pandemic. It enabled us to capture, catalogue and publish over ninety oral history interviews.

Modern collections/initiatives showcased included:

  • Web Archiving the COVID-19 pandemic, Nicola Bingham, British Library
  • Coastal Connections (heritage sites at threat from coastal erosion), Dr Alex Kent, World Monuments Fund
  • Crowdsourcing photographs for the Picturing Lockdown Collection, Dr Tamsin Silvey, Historic England
  • Endangered Archives Programme (recent case studies include Ukraine, Gaza and Sudan), Dr Sam van Schaik, British Library
  • Collecting Human Stories during the war in Ukraine, Natalia Yemchenko, Rinat Akhmetov Foundation/Museum of Civilian Voices

Rapid collecting is a means to gather documentary evidence, preserve cultural memories and commemorate events. By providing access to these collections, institutions are then able to build a body of evidence and facilitate research. I was struck by the similarities between modern initiatives and those that had taken place a century before. Some of the contemporary examples of crowdsourced collecting harked back to the collecting of ephemera during the First World War. In her presentation with Alison Bailey, Dr Ann-Marie Foster highlighted the Bond of Sacrifice Collection and Women’s Work Collection (Imperial War Museums), to which families sent items memorialising loved ones, as examples of early collecting initiatives. Modern rapid collecting work has meant that contemporary archivists and curators have taken up this tradition, working actively to save materials at risk of loss through intentional selection.

As well as crowdsourcing and outreach, other strategies institutions draw upon in an increasingly online world are web archiving, digitisation and digital preservation. With social media now a main mode of communication for millions, web archiving is a useful tool to preserve and present online responses to global events. Work to capture websites relating to recent events is ongoing at both the Bodleian Libraries and the British Library. I found Archive-It to be an incredibly useful tool to capture and publish a range of web pages for our project (including, with permission, the social media pages of COVID-19 researchers), which, without reactive selection and preservation, would otherwise have been at risk of loss.

Overall, the event highlighted that institutions must use active strategies towards preserving at-risk materials created during ongoing crises and conflicts, including:

  • Involving communities to assist in selection of materials;
  • Providing as representative a view of the event as possible (capturing diverse perspectives);
  • Providing access to collections and making them available as widely as possible (ethical considerations and sensitivities permitting);
  • Democratising collections and preserving them for future generations.

Takeaways from the Cambridge Future Nostalgia “Copy that Floppy” Workshop

The Digital Archivist Trainees had the opportunity to attend the “Copy that Floppy” workshop organised by the Cambridge Future Nostalgia team on October 9, which provided an introduction to floppy disk imaging for digital archivists and digital preservation practitioners. This blog post outlines some of the key takeaways from our experience, and a full guide to floppy disk imaging produced by Future Nostalgia can be found here.

A floppy disk is a type of media which stores data on a magnetic-coated soft plastic disk in a hard plastic case. Popular in the 1970s–1990s, floppy disks come in several sizes: 8-inch, 5.25-inch, 3.5-inch, and sometimes 3-inch. While the number of 8-inch and 5.25-inch floppy disks sold in this period remained relatively stable, the number of 3.5-inch floppy disks sold rose dramatically in the 1990s. The Future Nostalgia team predicts that there will be a significant rise in the number of 3.5-inch disks in future accessions, and therefore creating the capacity to image 3.5-inch disks in particular before this influx should be a priority.

A USB-C cable, a floppy disk drive, a ribbon cable, a controller, a power cable, and 3.5-inch high density disk.
Workstation equipment including a floppy disk drive, ribbon cable, controller, 3.5-inch high density disk, and a power cable. Photo by Leontien Talboom.

Early floppy disks came in single-sided and double-sided formats, meaning that data could reliably be written on one side only, or on both sides of the disk. It is also important to try to identify the “density”, or the way the disk was encoded and magnetised, as this affects how the disk can be read. 3.5-inch double density disks have a hole in only one corner, whereas 3.5-inch high density disks often have two. 5.25-inch disks are more difficult to identify as double or high density, and 8-inch disks are also sometimes single density. The manufacturer and the type of computer used to write the data can also affect the way a disk can be read (e.g., Mac data can be difficult to read on a non-Mac system and vice versa); common manufacturers included Apple, Amstrad, and IBM.

Floppy disk drives that are compatible with the various sizes of floppy disks can be used with a “controller” to read disks on a modern computer. A controller is a piece of hardware that manages the connection between the disk drive and the modern machine, and crucially, it can read “flux-level data” from the disk. (Some 3.5-inch disks can also be read with a USB floppy drive, but these drives cannot read flux-level data, which can help recover some information when a disk is damaged or degraded.) In the workshop, we used a “Greaseweazle”, the most commonly used floppy disk controller, which runs with a Python package of the same name.

In teams, we each assembled a workstation to read various sizes of floppy disks. The Future Nostalgia team provided drives, controllers, and cables, as well as some test disks and workshop participants also brought in their own disks that they had been hoping to read. Excitingly, one member of my team brought in a stack of 3-inch Amstrad floppy disks which tend to be rarer than their 3.5-inch counterparts. We used a 26- to 34-pin ribbon cable to connect the 3-inch drive to our controller and a USB-C cable to connect the controller to a PC. The Amstrad drive also required us to use a flipped power cable compatible with an Amstrad drive to connect to an external 12V power source. Luckily, the expert at our table warned us this was necessary―a regular power cable or a power connection directly to the 5V-compatible Greaseweazle would’ve fried the drive or the board!

A floppy disk imaging workstation including a floppy disk drive, power cable, ribbon cable, controller, and laptop.
Setting up a workstation to image 3-inch Amstrad disks.

Despite everything being connected in a way that should have worked, the Greaseweazle software returned unexpected errors when trying to read the disk. Floppy disk drives and cables are fickle and will sometimes work or not work in the same set-up―it’s worth taking things apart, putting them back together, and trying again. Eventually, we discovered that the controller was unhappy with its connection to the ribbon cable and we had to instead connect it to a different port on the same cable. When that was done, the Greaseweazle was satisfied and we were able to image some Amstrad floppy disks! The first step was to take a flux image of the disk and view it using an emulator. From this flux image we were able to tell whether the disk was damaged (fortunately it was in good shape!) and how many tracks were stored on it. We then were able to convert the raw flux image data into a disk image, and extract some of the text files saved on the disk. It turned out that the stack of 3-inch disks contained research notes and bibliographies compiled by an historian of Anglo-Saxon history from whose archive they came.

My colleague Evie’s team ran into one of the most interesting cases of the day, which amassed a small crowd of practitioners looking over her shoulder while she was imaging a disk. Curiously, the flux image kept returning data for only one side of the double-sided disk. The suspicion we were left with was that the user had first written the disk using both sides on a double-sided drive, but had later overwritten data on only one side using a single-sided drive. Unfortunately, that meant that the oldest data was lost―but it generated a lot of speculation as to how to go about recovering as much as possible. Floppy disks are complicated, and both the disks and the machines needed to read and write them were expensive. Users found creative ways to reuse and reformat disks, which means that manufacturers’ labels can sometimes be misleading when imaging disks today. The Future Nostalgia team estimated that they succeed in imaging disks only about 50% of the time, due to degradation or damage, so it was an authentic experience not to get complete data off all of the disks we saw.

Evie using a laptop to image a floppy disk. Several colleagues are looking at her laptop screen.
Evie copying some floppies! Photo by Mark Box.

This workshop was a fantastic crash course into floppy disk imaging, and many thanks to the Future Nostalgia team for inviting us along!

Additions to the archive of Raymond Chandler

Chandler’s private detective, Philip Marlowe, features on a set of stamps to mark the 50th anniversary of Interpol, 1973. MS. Chandler 107. © Raymond Chandler Limited.

Raymond Chandler is best known for hard-boiled crime novels including The Big Sleep (1939) and The Long Goodbye (1953) and as a screenwriter for some of the biggest motion pictures of the 1940s, including The Blue Dahlia. Since the start of the year, work has been underway to enhance and expand the original catalogue of the Chandler archive and to integrate and make accessible later accessions. These new additions cover papers and correspondence created by Chandler in his lifetime, as well as a vast afterlife of papers showcasing the legacy of the great mystery writer.

New additions to the archive largely focus on correspondence and papers concerning the Chandler estate, stewarded by his literary agent and heir, Helga Greene. These demonstrate a wealth of interest in Chandler’s work from filmmakers and biographers, largely covering the period 1960-1990. By the 1970s, small and big screen adaptations of his novels and short stories were becoming a major focus of interest for the estate. Greene’s defence of Chandler’s work and legacy is evident in the papers, through her diligent renewal of copyright and selective choices over permissions for adaptations, anthologies and new publications. The papers also go into detail over a will contest, and include material concerning Greene’s legal fight to be recognised as heir in a suit brought by Chandler’s former secretary (MS. Chandler 112-113).

Extract of a letter touting Chandler’s special recipe ‘Swordfish Mascagni’ and apples baked in cider, MS. Chandler 107. © Raymond Chandler Limited.

Amongst the papers generated by the estate following Chandler’s death are snippets of original writings that demonstrate his natural humour and wit, as well as leisurely pursuits and interests, including cookery and darts. In the final year of his life, whilst working on ‘The Poodle Springs Story’ (the last and unfinished novel, in which Marlowe marries heiress Linda Loring), Chandler and Greene were also collaborating on an idea for a cookery book with a provisional title of ‘Cooking For Idiots.’ Although the book never came to fruition, the collection does hold remnants of the early development of this work. As well as the above letter teasing recipes such as apples baked in cider (‘vociferously admired by anyone who owes me money’), the collection includes an assortment of handwritten recipe cards featuring the culinary creations of Chandler’s late wife Cissy, including ‘Cissy’s Ham Goodbye’ and ‘Pancakes for Raymio,’ MS. Chandler 102 & 106.

© Raymond Chandler Limited.

Alongside the cookery ‘specials’ of the Chandler household, a glimpse of Chandler’s humour is found in this unsent letter marked ‘For Posterity’ to Los Angeles department store Bullock’s Wilshire (in its heyday a glitzy haunt of famous clientele), where he conjures an elaborate narrative in an effort to return an unwanted sports jacket.

Additionally, original prose such as drafts and typescripts for short stories including ‘The Pencil’ (published as ‘Marlowe Takes on the Syndicate’ in the Daily Mail, 1959) and the fantastic story ‘Professor Bingo’s Snuff’ are now available, along with a selection of assorted notes, prose and unpublished writings in MS. Chandler 7. The material featured in this blog post, along with all other newly catalogued additions to the archive, can now be consulted in the Weston Library. The new and enhanced catalogue for the Archive of Raymond Chandler is searchable here.

Algorithmic Archive Project: Use Cases (3/3)

The Algorithmic Archive project is a one-year project funded by the Mellon Foundation. As part of the first Work Package, we explored how researchers from different disciplines use social media data to answer various research questions.

This post is the third in a three-part series presenting use cases drawn from research conducted as part of the Algorithmic Archive project.

We would like to thank the researchers who generously shared insights from their work.


Use Case – Study on the trustworthiness of social media visual content among young adults (TRAVIS project)[1]

Research questions and aim(s):

Trust And Visuality: Everyday digital practices (TRAVIS) is an ESRC project which has received funding from the European Union’s Horizon 2020 Research and Innovation Programme. The project looks at how young adults experience, build and express trust in news and social media images related to wellbeing and health. It explores how and why people trust some visuals over others, and how content creators establish trustworthiness through visual content. The TRAVIS project involves cross-national collaboration between multiple research teams located at different universities in the UK and Europe. This includes the University of Oxford, where the team is based at the School of Geography and the Environment.

Social media data used:

The project included data collected indirectly from platforms including Facebook, Instagram, TikTok and YouTube (see below).

Tools and methods adopted:

Data collection from social media consisted of screenshots taken from the devices of interviewed young adults, as the TRAVIS project investigates the meaning of social media posts (visual content) via interviews with young adult users. The dataset generated from this method of collection comprises around 400 screenshots, stored on an institutional cloud drive accessible to the whole team.


[1] Further information about the TRAVIS project is available here: https://www.tlu.ee/en/bfm/researchmedit/trust-and-visuality-everyday-digital-practices-travis

The archive of Maria Becket is now available

Maria Hary Becket (1931-2012) was a Greek political and environmental activist who worked on a global scale.

The catalogue of the archive of Greek political activist Maria Becket is now online (see catalogue: Collection: Archive of Maria Becket | Bodleian Archives & Manuscripts). The archive spans the turbulent international politics of the mid-20th century and also documents the growing environmental movement of the late-20th and early-21st centuries, all told through the personal story of a life fiercely lived in the passionate service of human rights and causes. I found working on the Maria Becket Archive to be revelatory in its documentation of the horrors of repressive regimes and violent conflicts, and of the superhuman and sometimes unorthodox efforts that Becket and her family, friends and associates went to in trying to remedy these problems.

Maria Hary was born in Athens in 1931 into a prominent Greek family – her father, Nikolaos, was active in the resistance against the Nazi occupation in Greece, but his troubled character had a formative effect on her early life. Her mother’s family were from old Constantinople (now Istanbul), which led to Maria’s lifelong interest in Byzantine history. She credited her political awakening to an experience during the Greek Civil War of 1946-1949, when she discovered a girl who had died of starvation on her doorstep and wondered why she had lived and the girl had died.

MS. 23105 photogr. 27. The Hary family in 1949, including Maria Becket aged 18, seated on left, and her father Nikolaos, standing on the right.

After her first marriage to a Greek shipowner and a period studying Byzantine history in London, Maria met American lawyer James Becket on a cruise ship crossing the Atlantic in 1958, and they married, had two daughters, and moved to Geneva in Switzerland. When the Greek government was overthrown by a junta of “colonels” in April 1967, the Beckets immediately became involved in the resistance movement. They were instrumental in the case brought against Greece in the Council of Europe by four Scandinavian countries, where they presented witnesses to testify about the use of torture by the regime. The archive contains testimonies and details of dozens of political prisoners who were tortured under the junta, and information on the horrific conditions in the notorious police building on Bouboulinas Street in Athens and in other Greek prisons. Maria and James Becket were also closely involved with networks of clandestine resistance to the regime of the “colonels”, and organised the escape of political prisoners.

MS. 23105/77. Stamps discouraging tourism in Greece during the military regime period, 1968.

When the junta regime fell and Cyprus was invaded by Turkey in August 1974, Maria Becket organised Radio Free Cyprus to broadcast messages from Cyprus’s deposed leader Archbishop Makarios. Maria was also involved in the placement of Greek-Cypriot refugee children in foster care, and organised a programme for displaced Greek-Cypriot women to produce embroidery items for sale. She was offered the position of Greek Ambassador to the USA in 1974 but turned this down.

MS. 23105/130. Doll wearing Cypriot national costume made by Greek-Cypriot refugees from a group organised by Maria Becket, c.1974.

Maria Becket had a lifelong involvement with Palestine, and had connections to the Palestine Liberation Organisation (PLO). She attended PLO training camps in the Greek junta period, and her involvement is documented in the archive. The Beckets also had much wider interests in resistance movements and human rights issues all over the world.

Maria worked as an advisor for the Greek centre-right New Democracy party under Constantine Karamanlis from 1976-1981, and during later election campaigns. She also worked for the UN High Commissioner for Refugees, Prince Sadruddin Aga Khan, during the late 1980s and early 1990s, and became involved in his environmental work.

This work inspired her to begin Religion, Science and the Environment (RSE) in conjunction with Ecumenical Patriarch Bartholomew I, the leader of the Greek Orthodox Church, also known as the Green Patriarch. RSE organised eight floating symposia between 1995 and 2009, in which religious leaders and prominent scientists travelled on epic voyages across seas and rivers through multiple countries, giving a programme of talks addressing the environmental crises in the visited regions. The symposia included journeys on the Aegean Sea, the Arctic Ocean, the River Amazon and the Mississippi.

Maria died in 2012, but in her last years recorded autobiographical interviews which describe her extraordinary life.

The archive includes testimonies and collected information on political prisoners and refugees; planning material on resistance activities; political correspondence; papers on human rights, politics and the environment; photographs relating to political and environmental work; political pamphlets, magazines and ephemera; papers on the organisation of international meetings and symposia; personal correspondence and autobiographical material; and audio-visual and digital material.

Algorithmic Archive Project: Use Cases (2/3)

The Algorithmic Archive project is a one-year project funded by the Mellon Foundation. As part of the first Work Package, we explored how researchers from different disciplines use social media data to answer various research questions.

This post is the second in a three-part series presenting use cases drawn from research conducted as part of the Algorithmic Archive project.

We would like to thank the researchers who generously shared insights from their work.


Use Case – Exploring Algorithmic Mediation and Recommendation Systems on YouTube [1]

Research questions and aim(s):

The study sought to investigate how the YouTube platform operates, focusing on algorithmic activity and the strategies employed by both human and automated (bot) actors during federal and regional elections. The aim was to understand the impact that this system of mediation has on society and to demystify preconceptions of ideologically neutral technologies in highly disputed political events. The research focuses on two case studies: 1) the 2018 Ontario (Canada) election and 2) the 2018 Brazilian Federal Election. Data collection was carried out during the campaigning periods, between May and June 2018 in Ontario, and between August and October 2018 in Brazil.

Social media data used:

The research focused solely on the YouTube platform. Specifically, the researchers collected information about recommended videos, starting from specific keywords related to the election campaigns.

Tools and methods adopted:

The data collection was carried out using a Python script developed by the Algo Transparency project. The script automates YouTube search operations for specified keywords (e.g., the names of the candidates), allowing the researchers to gather video-related data together with the ranking position displayed to the user. Once the keywords were defined, the tool retrieved links for the top four results for each keyword and then examined the recommendation section of each. This process was repeated four times, each pass collecting the recommended videos, thereby simulating a user following the algorithm’s suggestions.
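The crawl described above (top search results per keyword, then four rounds of following recommendations while recording display rank) can be sketched roughly as follows. This is an illustrative sketch only, not the Algo Transparency project’s actual script: the function names are hypothetical, and the two fetchers are stubbed with canned data so that only the crawl logic is shown.

```python
def search_top_videos(keyword, n=4):
    """Stand-in for a YouTube search: return the top-n video IDs for a keyword (stubbed)."""
    return [f"{keyword}-result-{i}" for i in range(n)]

def get_recommendations(video_id, n=4):
    """Stand-in for scraping the recommendation section of a video page (stubbed)."""
    return [f"{video_id}-rec-{i}" for i in range(n)]

def crawl(keywords, depth=4, top_n=4):
    """Seed with the top search results per keyword, then follow
    recommendations for `depth` rounds, recording for each video the
    source it was recommended from, its display rank, and the round."""
    records = []   # tuples of (video_id, source, rank, round)
    frontier = []
    for kw in keywords:
        for rank, vid in enumerate(search_top_videos(kw, top_n), start=1):
            records.append((vid, f"search:{kw}", rank, 0))
            frontier.append(vid)
    for round_no in range(1, depth + 1):
        next_frontier = []
        for vid in frontier:
            for rank, rec in enumerate(get_recommendations(vid, top_n), start=1):
                records.append((rec, f"rec:{vid}", rank, round_no))
                next_frontier.append(rec)
        frontier = next_frontier
    return records

# Small example: one keyword, two rounds, top two results/recommendations.
records = crawl(["candidate-a"], depth=2, top_n=2)
```

With real fetchers the frontier grows geometrically (top_n^depth per seed), which is why the study limited both the result count and the number of rounds to four.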

Data collected was stored on personal devices and the institutional cloud, and can be visualized at the following links:


[1] Reis, R., Zanetti, D., & Frizzera, L. (2020). A conveniência dos algoritmos: o papel do YouTube nas eleições brasileiras de 2018. Compolítica, 10(1), 35–58. https://doi.org/10.21878/compolitica.2020.10.1.333