Tag Archives: planets

music, planets and secret messages

My head is all a-buzz this morning with thoughts and ideas. There are several reasons for this, the first being the news that “Google has deleted at least six popular music blogs“. I don’t want to argue the rights or wrongs of illegal file sharing; that is not why my head is buzzing. Rather, it is that the struggle between the music industry and the pirates, all riding the waves of the Web, seems to me a struggle that will be of great importance to future scholars trying to work out what happened in the 21st Century to produce the business models of the 25th. Google’s actions here could amount to someone burning Luca Pacioli’s Summa de Arithmetica.

The second reason is that we’re just back from “Digital Preservation – The Planets Way“, a three-day workshop in London put together by the lovely and talented people on the Planets Project. Just looking at the programme you’ll see what a treat it was, and there are some great things that have come out of the project – including the Plato preservation planning tool, which provides a guided (albeit manual) workflow to build preservation plans for digital objects; and the Testbed, which is a Web-based “Lab” in which you can test preservation actions and record and evaluate the results prior to running those actions locally. Both provide a useful audit trail and hopefully protect a beleaguered digital preservationist from trouble should they find, some time in the future, that they’ve been using the equivalent of acidic paper – because they can show the actions they took were the best available to them at the time.

Most interesting for me (as a developer) was the Interoperability Framework, which seems to promise simple integrated access to preservation tools and so my head is buzzing with the thoughts of how we might use it. Mind you, I’ve not delved deeper yet, but I’ll let you know what we find out!

Finally, on the train home, I bought a copy of Linux Magazine, mostly because the cover announced an article on steganography. Earlier that day we’d been trying out image migrations in the Planets Testbed and I didn’t recall seeing any of the characterization tools pointing out (or having a space for) detection of secret messages, although such tools exist (the article suggested, however, that they might not work or might produce false positives). Wikipedia lists some further paths to follow and this morning I found a report from a conference in 2004 that mentions steganalysis and archives. I guess our forensics machine might have tools to find this sort of thing too.

All of which left the final buzz in my head – I didn’t know what migration might do to such messages – say, embedding a poem inside a BMP file, migrating that BMP to TIFF (and then back to a BMP), and seeing if the message is still there. If ever there was a time to try out the Testbed, this would be it! So I’m off to see if the group logins still work! 🙂


The login still worked, so I conducted four experiments.

Firstly, I converted this BMP (blogspot has migrated it to a JPG on upload, though it still has its original filename!) to a TIFF and noted that none of the “compare” tools noticed anything odd about the BMP, in spite of the secret poem contained therein; steghide would no longer work, as it doesn’t support TIFFs.

Secondly, I converted the TIFF back to a BMP (using the same migration tool – GIMP in this instance). Perhaps if you know more about the innards of image formats it will come as no surprise, but I was surprised to discover that migrating back to BMP restored the original hidden message. (Interestingly, doing the same thing on the command line with ImageMagick – BMP->JPG JPG->BMP returned a BMP of identical size to the source, but the hidden message was lost).

Finally, I converted the BMP to a JPG (twice – the first migration had some dubious default settings for the quality of the resulting JPG) and tried steghide on the results. Unsurprisingly the message was again lost. This is nothing new – you have to decide which characteristics of an object you want to keep and which are OK to lose on migration, and a steganographic message is probably just one of those characteristics – albeit a rare one!
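The pattern in these experiments – the message surviving lossless round-trips but not lossy ones – is easy to reproduce in miniature. Below is a minimal sketch of least-significant-bit embedding (this is illustrative only; steghide’s real scheme is considerably more sophisticated), showing that a hidden message survives any transformation that preserves pixel values exactly, while even a tiny lossy shift destroys it:

```python
# Minimal LSB steganography sketch (illustrative only -- steghide's real
# scheme is more sophisticated). A length-prefixed message hidden in the
# least significant bits of pixel bytes survives lossless round-trips but
# not lossy ones.

def embed(pixels: bytes, message: bytes) -> bytes:
    """Hide a length-prefixed message in the LSBs of the pixel bytes."""
    payload = len(message).to_bytes(4, "big") + message
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover too small for message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

def extract(pixels: bytes) -> bytes:
    """Recover a message embedded by embed()."""
    def read_bytes(start_bit, count):
        value = bytearray()
        for b in range(count):
            byte = 0
            for i in range(8):
                byte = (byte << 1) | (pixels[start_bit + b * 8 + i] & 1)
            value.append(byte)
        return bytes(value)
    length = int.from_bytes(read_bytes(0, 4), "big")
    # Clamp to available bits so a corrupted header doesn't crash us.
    length = min(length, max(0, (len(pixels) - 32) // 8))
    return read_bytes(32, length)

cover = bytes(range(256)) * 8              # stand-in for raw BMP pixel data
stego = embed(cover, b"secret poem")
assert extract(stego) == b"secret poem"    # lossless round-trip: survives

# Simulate a lossy migration (e.g. to JPG): pixel values shift slightly.
lossy = bytes(min(255, b + 1) for b in stego)
assert extract(lossy) != b"secret poem"    # message destroyed
```

The fragility is the point: the message lives entirely in bits that lossy codecs treat as noise, which is also why a lossless BMP→TIFF→BMP round-trip can bring it back intact.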

This was just a brief moment spent with the Testbed, and I rushed through the tests really, but it was pretty interesting. True, I could probably have conducted these experiments faster on the command line using any number of image manipulation tools, but then I would not have a record of my work. Using the Testbed forced me to document the experiment rather than just bash out a few commands and leap to a conclusion, and any results I have should, in theory, make their way to the “community”.

Finally, if you want to know what the poem is, the passphrase is “boat” and the original image is available here. (You might want steghide too).

-Peter Cliff

DPC’s preservation planning workshop

Earlier in the week I attended a DPC workshop on preservation planning, which was largely constructed from material coming out of the European project called Planets, now half-way through its four-year programme. There were also interesting contributions from Natalie Walters of the Wellcome Library and Matthew Woollard of the UK Data Archive.

A preservation system for the Wellcome Library?
Much of what Natalie had to say about the curation of born-digital archives chimed with our experiences here. Unlike us though, Wellcome are in the process of evaluating ‘off the shelf’ systems to manage digital preservation. They put out a tender earlier this year and received five responses that seem, in the main, to demonstrate a misunderstanding of archival requirements and the immaturity of the digital curation/preservation marketplace. One criticism was that the responses offered systems for ‘access’ or ‘institutional repositories’ (of the kind associated with open access HE content – academic papers and e-theses). This is something we also felt when we evaluated the Fedora and DSpace repositories on the Paradigm project (admittedly, this evaluation becomes a bit more obsolete day by day). Balancing access and preservation requirements has long been an issue for archivists, since we often have to preserve material that is embargoed for a period of time. I still believe that systems providing preservation services and systems providing researcher access are doing different things, but we do of course need some form of access to embargoed material for management and processing purposes. I also find the adoption of new meanings for words, like ‘repository’ and ‘archive’, tricky to negotiate at times. These issues aside, one of the systems offered seems to have held Wellcome’s interest and I’ll be keen to find out which one when this information can be revealed.

Preservation policy at UKDA
Matthew spoke about the evolution of preservation policy at the UKDA, which had no preservation policy until 2003 despite celebrating its 40th anniversary last year. The first two editions of the policy were more or less exclusively concerned with the technical aspects of preserving digital material, specifying such things as acceptable storage conditions and the frequency with which tape should be re-tensioned. The latest (third) edition embraces wider requirements including organisational/business need, user requirements (designated community and others), standards, legislation, technology and security. The new policy increases emphasis on data integrity and archival standards, it defines archival packages more closely to provide for their verification, and it pays attention to the curation of metadata describing the resources to be preserved.

If I understood correctly, the UKDA preserves datasets in their original form (SIP), migrates them to a neutral format (AIP1) and creates usable versions from the neutral format (AIP2). All these versions are preserved and dissemination versions of the dataset are created from AIP2. The degree of processing applied to a dataset is determined by applying a matrix which assigns a value on the basis of likely use and value. These processes feel similar to those evolving here, though we need to do more work to formalise them.
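A sketch of the kind of processing matrix described above might look like the following. The levels, scores and thresholds here are entirely invented for illustration – I don’t know the actual values the UKDA uses:

```python
# Hypothetical sketch of a processing matrix: a dataset's likely use and
# value determine how much processing effort it receives. The scales and
# thresholds below are invented examples, not the UKDA's actual matrix.

def processing_level(likely_use: int, value: int) -> str:
    """Map two 1-5 scores onto a processing level."""
    score = likely_use + value          # crude combination, range 2..10
    if score >= 8:
        return "enhanced"
    if score >= 5:
        return "standard"
    return "basic"

assert processing_level(5, 5) == "enhanced"
assert processing_level(1, 1) == "basic"
```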

Matthew also showed us a nice little diagram from 1976, which was created to document UKDA workflow from initial acquisition of a dataset to its presentation to the final user. The fundamentals of professional archival, or OAIS-like, practice are evident. The UKDA’s analysis of its own conformance with the OAIS model undertaken under the JISC 04/04 Programme is worth a look for those who haven’t seen it.

Towards the end of the talk Matthew reminded us that having written a policy, one must implement it. It’s not normally possible to implement every new thing in a policy at once, but the policy is valueless without mechanisms in place to audit it. Steps must be taken to progress those aspects of the policy that are new and to audit compliance more generally. The policy must also be available to relevant audiences who can evaluate the degree to which the archive complies with its own policy for themselves. I found this a very useful overview of the key issues involved in developing a preservation policy and the resulting policy itself is very clear and concise.

Planets tools for preservation planning
It’s great to see the promise of Planets starting to be realised, especially since we plan to build on the project’s work in relation to characterising material and planning and executing preservation strategies. Andreas Rauber kicked things off with an overview of the Planets project, which helped to demonstrate how the various components fit together. What is uncertain at the moment is how the software and services being developed by Planets will be sustained beyond the project’s life. Neither is it clear what licensing model(s) will be adopted for different components of the project, since there are the needs of commercial partners to consider as well as those of national archives, libraries and universities.

Christoph Becker gave us an overview of Plato, a tool which allows the user to develop preservation strategies for specific kinds of objects. In Plato, users can design experiments to determine the best available preservation strategy for a particular type of material. This involves a formal definition of constraints and objectives, which includes an assessment of the relative importance of each of these factors. Factors might include:

* object migration time – max. 1 second
* object migration cost – max £0.05 per object
* preserve footnotes – 5
* preserve images – 5
* preserve headings – 4
* open format required – 5
* preserve font – 3
* and so on…

These are expressed in an ‘objective tree’, which can be created directly in Plato or in the Freemind mind mapping tool and uploaded to Plato. Objective trees can be very simple, but the process of creating a good and detailed objective tree is quite demanding (we had a go at doing this ourselves in the afternoon). In future we should be able to build on previous objective trees as these are developed, and that will ease the process. For the moment the templates provided are minimal because the Plato team don’t want to preempt user requirements!

The user must also supply a sample of material which can be used to assess the effectiveness of different strategies. This should be the bare minimum of objects required to represent the range of factors expressed in the objective tree. The user then selects different strategies to apply to the sample material, sets the experiment in motion, and compares the results against the objective tree. The process of evaluating results is manual at present, but there are plans to begin automating aspects of this too. Once the evaluation is complete, Plato can produce a report of the experiment which should demonstrate why one preservation strategy was chosen over another in respect of a particular class of material.
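To make the comparison step concrete, here is a rough sketch of how weighted factors like those listed above might be aggregated across candidate strategies. The weights, scores and aggregation method are invented for illustration – Plato’s actual utility analysis is richer than a simple weighted sum:

```python
# Illustrative weighted-sum evaluation of candidate preservation strategies
# against an objective tree. Weights (importance 1-5) and measured scores
# (0.0-1.0, how well a strategy met each objective) are invented examples;
# Plato's real utility analysis is more involved than this.

objectives = {                 # factor -> importance weight
    "preserve footnotes": 5,
    "preserve images": 5,
    "preserve headings": 4,
    "open format": 5,
    "preserve font": 3,
}

strategies = {                 # strategy -> measured score per factor
    "migrate to ODT": {"preserve footnotes": 0.0, "preserve images": 1.0,
                       "preserve headings": 1.0, "open format": 1.0,
                       "preserve font": 0.8},
    "migrate to PDF": {"preserve footnotes": 1.0, "preserve images": 1.0,
                       "preserve headings": 0.5, "open format": 1.0,
                       "preserve font": 1.0},
}

def utility(scores: dict) -> float:
    """Weighted average of scores over all objectives."""
    total_weight = sum(objectives.values())
    return sum(objectives[f] * scores[f] for f in objectives) / total_weight

ranked = sorted(strategies, key=lambda s: utility(strategies[s]), reverse=True)
print(ranked[0])   # the better-scoring strategy under these example numbers
```

The value of the formal report Plato produces is precisely that these numbers, and the reasoning behind them, are written down rather than left implicit.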

Plato is available for offline use, which will be necessary for us when processing embargoed material, but it is also offered as an online service where users can perform experiments in one place and benefit from working with the results of experiments performed by others.

The Planets work on characterisation was introduced by Manfred Thaller. This work develops two formal characterisation languages – the extensible characterisation extraction language (XCEL) and the extensible characterisation description language (XCDL). The work should make it possible to determine more automatically whether a preservation action, such as migration, has preserved an object’s essential characteristics (or significant properties). It is expected that the Microsoft family of formats, PDF formats and common image formats will be treated before the end of the project.
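The underlying idea – extract formal properties from the source and migrated objects, then compare the two sets – can be sketched very simply. This is not XCDL syntax or the actual Planets comparator; the property names and values are invented examples:

```python
# Rough illustration of characterisation-based comparison (not real XCDL
# or the Planets comparator): extract formal properties before and after
# a migration, then report what was preserved, changed, or lost.
# Property names and values are invented examples.

before = {"imageWidth": 800, "imageHeight": 600,
          "bitDepth": 24, "compression": "none"}
after  = {"imageWidth": 800, "imageHeight": 600,
          "bitDepth": 24, "compression": "lzw"}

def compare(a: dict, b: dict):
    preserved = {k for k in a if b.get(k) == a[k]}
    changed = {k for k in a if k in b and b[k] != a[k]}
    lost = set(a) - set(b)
    return preserved, changed, lost

preserved, changed, lost = compare(before, after)
assert changed == {"compression"}
assert lost == set()
```

The hard part, of course, is not the comparison but defining and extracting the properties in the first place – which is what XCEL and XCDL are for.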

One of the interesting aspects of the characterisation work is developing an understanding of what is or isn’t preserved in a particular process and how a file format affects this. Thaller demonstrated this (using a little tool for *shooting* files) by deliberately causing a small amount of damage to a PNG file and a TIFF file. A small amount of damage to the PNG file had severe consequences for its rendering, while the TIFF file could be damaged much more extensively and still retain some of its informational value. Thaller also used the example of migrating a MS Word 2003 document to the Open Document Text format. The migration to ODT seemed to lose a footnote in the document. Thaller then showed the same MS Word 2003 document migrated to PDF, where the footnote appears to be retained. In actual fact the footnote isn’t lost in the migration to ODT; it’s just not rendered. In the PDF file, on the other hand, the footnote is structurally lost but visually present. Thaller is proposing a solution which allows both structure and appearance to be preserved.
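The damage-tolerance point is easy to reproduce in miniature: PNG pixel data is DEFLATE-compressed, so a single corrupted byte can take down the rest of the stream, whereas the same damage to uncompressed data (as in an uncompressed TIFF) costs you only that byte. A sketch using zlib as a stand-in for the PNG case:

```python
import zlib

# One flipped byte in compressed data (as in PNG's DEFLATE-packed pixels)
# can render the whole stream unreadable; the same flip in uncompressed
# data (as in an uncompressed TIFF) damages only that one byte.

pixels = bytes(range(256)) * 64        # stand-in for raw image data

# Uncompressed: flip one byte, lose exactly one byte.
damaged_raw = bytearray(pixels)
damaged_raw[100] ^= 0xFF
diff = sum(a != b for a, b in zip(pixels, damaged_raw))
assert diff == 1

# Compressed: flip one byte mid-stream and decompression fails outright
# or the recovered data no longer matches (the checksum catches it).
packed = bytearray(zlib.compress(pixels))
packed[len(packed) // 2] ^= 0xFF
try:
    recovered = zlib.decompress(bytes(packed))
    survived = recovered == pixels
except zlib.error:
    survived = False
assert not survived
```

This is the same asymmetry Thaller’s *shooting* tool demonstrates: compression concentrates information, so damage to it is catastrophic rather than local.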

The final element of Planets on show was the testbed developed at HATII, demonstrated by Matthew Barr. The testbed looks very useful and, like Plato, will be available for use online and offline. There did seem to be some overlap in aims and functionality with Plato, but there are differences too. Its essential objectives seem similar – users should be able to perform experiments with selected data and tools, evaluate those experiments and draw conclusions to inform their preservation strategy; the testbed will also allow tools and services to be benchmarked. It struck me that the process of conducting an experiment was simpler than with Plato, since a granular expression of objectives is not necessary. It’s more quick and dirty, which may suit some scenarios better, but will the result be as good? Aspects I found particularly interesting were the development of corpora and the ability to add new services (tools are deployed and accessed using web services) for testing.

-Susan Thomas