
AV Digitization Workflow Considerations

I had the great opportunity to attend the Association of Moving Image Archivists (AMIA) annual meeting and came away with a few discoveries. First, Cornell University Library is at a good place in developing an audiovisual preservation workflow. With an established AV digitization service point and our first ingestion into CULAR on the horizon, DCAPS continues to expand and pursue innovative yet practical goals. Building this kind of workflow is one of the biggest steps facing the AV archiving community, and I was happy to share some of the progress we’ve made with the larger field at AMIA. It was very encouraging to see so many sympathetic nods.

With the help of Danielle Mericle, Mickey Casad, and Manolo Bevia, I put together a timeline that charted the lifespan of the Experimental Television Center Collection from inception to magnetic carriers to digitization and, finally, to the archival repository. I used the life of the collection as an example of how CUL, like many institutions, built a programmatic approach to AV preservation out of related projects over the course of several years. I thought I would share the poster, since I found it to be an inspiring long view of the AV work that has been happening here at CUL.

(poster design by Manolo Bevia)

(poster design by Manolo Bevia w/Tre Berney)

I would also like to report on the most recent developments in the Advanced Media Workflow Association’s work in progress, AS-07, which is exploring the MXF format as a preservation-standard video (and/or audio) wrapper. Although there is a lively debate going on about it, MXF would offer several advantages, including support for a range of lossless and lossy compression types, areas for embedded metadata, closed captions, associated images, and more. Chris Lacinak (AVPS), George Blood (George Blood LP), and James Snyder (LC, NAVCC) discussed positive considerations of using MXF that could be useful to us here at CUL. Being able to embed basic technical and descriptive metadata in a video file would be tremendously advantageous. Plus, several tools for working with this format are coming from the emerging open-source, command-line utility community (as seen at AMIA/DLF Hack Day 2014). Feedback is welcome regarding MXF for preservation.
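As a rough illustration of what rewrapping into MXF with embedded metadata might look like, here is a small Python sketch that builds an ffmpeg command line. The file names and metadata values are hypothetical placeholders, not actual CUL material, and this is only one way such a step could be scripted:

```python
# Hypothetical sketch: build an ffmpeg command that rewraps existing
# video and audio streams into an MXF container, without re-encoding,
# and embeds basic descriptive metadata in the wrapper.
# File names and metadata values below are placeholders.

def build_mxf_rewrap_command(source, destination, metadata):
    """Return an ffmpeg argument list that stream-copies the source
    into an MXF wrapper, attaching key/value container metadata."""
    cmd = ["ffmpeg", "-i", source, "-c:v", "copy", "-c:a", "copy"]
    for key, value in metadata.items():
        cmd += ["-metadata", f"{key}={value}"]
    cmd.append(destination)
    return cmd

cmd = build_mxf_rewrap_command(
    "tape_0001.mov",
    "tape_0001.mxf",
    {"title": "Experimental Television Center, tape 1"},
)
print(" ".join(cmd))
```

Building the argument list separately from running it makes the command easy to log and review before any files are touched, which suits a preservation workflow where every transformation should be auditable.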

We have also expanded the DCAPS website to include detailed information about our AV digitization lab, including capabilities and pricing. Don’t hesitate to contact us if you have questions or need technical assistance:

Finally, there is much to be done to ensure the preservation of our rapidly-aging, unique media items and we are now gathering data for our AV Preservation Initiative. In partnership with CIT, this will help establish a bird’s-eye view of the AV landscape here at Cornell. Many of the unit libraries have spoken up regarding their unique AV materials and collections, but I continue to encourage anyone on campus with time-based media (whether it’s analog or digital-born) used for teaching and/or research to please complete our brief survey:

Have a great week-

Tre Berney

GSU Ereserves and Cornell

As most everyone knows by now, the decision in the appeal of the Georgia State ereserves case is out. The district court opinion found that Georgia State’s program was a fair use and ordered the plaintiffs (Oxford University Press, Cambridge University Press, and Sage Publishing, acting as “beards” for the Copyright Clearance Center and the Association of American Publishers, which are funding the litigation) to pay the university’s legal bill of almost $3 million. The plaintiffs appealed, and the Circuit Court has now thrown out the earlier decision (but without finding that Georgia State had infringed).

If you want to know more about the nuances of the decision, I heartily recommend Nancy Sims’s excellent analysis on her Copyright Librarian site. Look, too, for a promised upcoming analysis from Kevin Smith that will temper his initial assessment that the decision “looks like a considerable victory for the publishers” and instead conclude that the publishers lost everything that really mattered to them (oops – just published at

As far as Cornell is concerned, the new decision is entirely in line with Cornell practices, especially the Cornell Electronic Course Content Copyright Guidelines and Fair Use Checklist. All recognize that some uses of copyrighted material in teaching can be fair uses. And all eschew rigid amounts or percentages in favor of a more flexible fair use analysis. In some cases, using less than a chapter might be fair (especially if that is all that is needed for the pedagogical purpose), whereas in other cases more than one chapter might be acceptable. One needs, for example, to take into account the nature of the work (fiction or factual and, if factual, the amount of creative analysis in the work) as well as the availability of any digital license to use an excerpt. We often hear calls for a “bright line” fair use test, but the Circuit Court’s decision mirrors Cornell’s approach that the faculty member’s assessment of whether a particular use is fair will vary according to specific circumstances.

A decision in the 11th Circuit is not binding on courts in New York. But it is still reassuring to note that at least one Appellate Court views educational fair use in a similar manner to Cornell.

What is next for this case? It now goes back to Judge Evans, who has to reconsider the alleged infringement in light of the Circuit Court’s findings. Many commentators believe that Judge Evans may again find that Georgia State’s use of the material was fair, especially since the Circuit Court approved of much of her analysis. It could then be appealed again to the Circuit Court. Some commentators are suggesting that in light of the huge amounts of money already spent on this case and the publishers’ inability to get the appeals court to endorse their arguments, a settlement is likely. I am not so confident, given the fact that libraries are indirectly funding much of the litigation through their payments to the CCC. I hope I am wrong.

While the litigation may be back near square one, the whole saga should be one more reminder of how badly educational publishing is broken. The three plaintiffs – Oxford, Cambridge, and Sage – are proving themselves to be no friends of higher education. If authors wish to have their writings used in courses, they had best either negotiate better terms or look for a different publisher. And librarians should consider whether they want to support litigation against faculty members and universities by purchasing titles from these publishers and by paying permission fees to the CCC.



Expanded Alexander Kluge Website

We are pleased to announce the release of the newly expanded digital collection revolving around the works – and intellectual networks – of Alexander Kluge: Alexander Kluge: Cultural History in Dialogue. Alexander Kluge is a leading public intellectual, filmmaker, and cultural theorist in Germany, whose vast corpus of work engages with a broad range of social, historical, political, and aesthetic questions. The work represented on the site includes streaming videos of interviews with prominent German writers, as well as some of Kluge’s own experimental films. The majority of the work has been transcribed, translated, described, and time-coded for broad access and ease of discoverability. The entire website is presented in English and German.

Kluge home page

Development of the site has been an ongoing collaboration between Cornell University and the University of Bremen in Germany, with additional input by Princeton University during the most recent phase of the project. The original project, Müller-Kluge: Conversations between Heiner Müller and Alexander Kluge, was funded as part of Sarah Thomas’s faculty grants initiative in 2004, and focused on conversations between Kluge and Heiner Müller, one of the most significant European dramatists of the second half of the 20th century. The project received two years of additional funding from the College of Arts & Sciences’ faculty grants program in 2010 and 2011, which allowed for a significant expansion of the number of interviewees available on the site, including acclaimed poet and novelist Hans Magnus Enzensberger, among others, and a full responsive redesign of the website.

The project team includes David Bathrick, emeritus professor in German Studies (PI); Rainer Stollman, professor in German Studies at the University of Bremen (PI); Kizer Walker, Director of Collection Development at CUL (PI); and Michael Jennings, Director of the Alexander Kluge Research Center at Princeton (collaborator). On the development side, Melissa Wallace, James Reidy, and Jenn Colt were responsible for the website design and back-end programming, and Tre Berney oversaw the AV digitization and upload process. In addition, a large number of graduate students are credited with generating the metadata and translations for the site, most notably Hannah Mueller (Cornell); Erica Doerhoff (Cornell); Bret Leraul (Cornell); Felix Hampel (Bremen); and Fabian Roelen (Bremen). The overall project has been managed by Danielle Mericle.

The website is one of the most highly utilized digital collections hosted by the Library, with audiences spanning the globe and from a broad range of disciplines. It has been used in courses in German Studies both at Cornell and Bremen, and offers a rich user experience to anyone interested in European literary and intellectual history, the wars and revolutions of the 20th century, or any of the extraordinarily broad range of topics on the table in the conversations between Kluge and his colleagues. Working on such rich content and developing an accompanying site has been incredibly rewarding, and we hope to continue to add content in the coming years.

Danielle Mericle, Kizer Walker and Melissa Wallace

RepoExec, or The CUL Repository Executive Group

The Cornell University Library’s Repository Executive Group, colloquially known as RepoExec, has been meeting since the beginning of the year to explore and address the issues surrounding digital repositories at CUL. A digital repository is a system for managing and storing digital items, potentially including a wide range of content (digital images, research data files, AV, electronic records, code) for a variety of purposes and users. These repositories often include research outputs such as journal articles or research data, e-theses, e-learning objects and teaching materials, and administrative data. CUL supports several digital repositories to archive and/or provide access to a wide range of digital information, including eCommons, DigitalCommons@ILR, DigitalCommons@Law, CULAR, Luna Insight, Kaltura, Greenstone, and Shared Shelf.

As mentioned in the recent CUL IR white paper, the key impetus behind the formation of this group was to bring together the key players in this program area to strengthen communication and collaboration, and to foster the creation of an integrated and sustainable service framework. Therefore RepoExec draws its members from across the repository spectrum, including AULs, unit repository managers, and representatives from CUL IT and functional units such as collections and metadata.

The group’s full preliminary charge can be found here, but a good place to start would be the three goals that were set for us:

  • Reach out to stakeholders at CUL and Cornell at large to determine what is needed from repositories in terms of content, service needs, and sustainability.
  • Develop recommendations and/or scenarios by which CUL can meet those needs, addressing questions of software and architecture, workflow and staffing, and collection development.
  • Work with LibExec to develop an actionable plan to implement the recommendation(s) that best fit the needs of CUL.

Underlying all of those goals was the idea that the repository landscape at Cornell needed to be streamlined, both so that our services would be more useful to our users and also so we could maintain those services in a sustainable fashion.

Right from the start, I’ll say that our first several months didn’t see us addressing the first goal as well as we could have, though we did take strides towards the second and the third. This blog post will talk a bit about what we did, why we did it, and what we’re going to do next.

We determined early on that, in order to pursue the second and third goals above, we needed to have a more complete sense of the current digital repository landscape. So a small sub-committee was formed to put together an inventory of our existing repositories, the (somewhat surprising) results of which can be downloaded here as an Excel file or as a PDF.

We found more than twenty systems currently receiving at least some support within CUL that fit our definition of a digital repository. For each of those systems, we associated descriptive metadata in six categories:

  • General: Administrative unit, current CUL contact, current URL, etc.
  • Infrastructure: Software version in use, storage location, homegrown vs. off-the-shelf, etc.
  • Ingest: Submission policy, intellectual description of content, average frequency of deposit, etc.
  • Access/Discovery: content discoverability, availability of access descriptions, availability of embargoes, etc.
  • Content: Number of objects, optimized content types, etc.
  • Preservation: Redundancy, parity/bit checks, file versioning, etc.
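To make the shape of that inventory concrete, here is a minimal sketch of how one repository’s entry might be modeled. The field names simply mirror the six categories above; they are illustrative, not RepoExec’s actual schema, and the eCommons values shown are hypothetical examples:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: fields mirror the six inventory categories
# described above, not RepoExec's actual schema.

@dataclass
class RepositoryRecord:
    name: str
    general: dict = field(default_factory=dict)          # unit, contact, URL
    infrastructure: dict = field(default_factory=dict)   # software, storage
    ingest: dict = field(default_factory=dict)           # submission policy
    access_discovery: dict = field(default_factory=dict) # discoverability
    content: dict = field(default_factory=dict)          # object counts, types
    preservation: dict = field(default_factory=dict)     # redundancy, checks

# Hypothetical entry for one of the twenty-plus systems in the inventory.
ecommons = RepositoryRecord(
    name="eCommons",
    general={"unit": "CUL"},
    infrastructure={"homegrown": False},
)
```

Structuring each entry the same way is what makes the later steps possible: entries can be compared field by field to spot the gaps and redundancies the group was looking for.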

If the breadth of the repository landscape is surprising to you, take heart: it was surprising to us, too. That’s a lot of systems, and a lot of apparent redundancy. But that’s a good thing to confirm, especially since the strategic goals of CUL indicate a strong need to streamline such systems. We needed to take what we have in the inventory and move it into the realm of actionable knowledge; our first attempt to do this involved expanding the metadata inventory into a broader schema of tags and categories that could be applied to repositories to a) identify those groups of repositories that were similar enough to be grouped together for policy purposes, b) identify the gaps and redundancies within those groups that could be addressed by new policy recommendations, and c) help connect people considering new repository projects with the existing options that would best suit their needs.

Unfortunately, after a few weeks of work, we discovered that in order for such a system to apply to the full range of repository options, it would either need to be so limited and general as to not be any more useful than the existing inventory, or so large and specific as to be completely unwieldy. However, the benefit of being a new committee is our flexibility: when we see something that’s not working, we can pull resources back from that, and apply them elsewhere.

Going forward, our goals for the rest of the year are to reassess our work and our charge, to see if we need to modify them to better meet the needs of CUL and its constituents. We have a number of ideas in the works for better engaging with our stakeholders, and using the work we’ve done to date to evaluate aspects of the repository landscape in more depth, and start making recommendations for where CUL’s repositories need to go next.

It’s been an exciting few months, and we’ve got plenty of work ahead! If you have any questions or feedback, please feel free to contact me, or stop by one of our open meetings. I’ll also be talking about RepoExec at the November R&O Forum, and hopefully more forums in the future.

Jim DelRosso
Chair, RepoExec
Digital Scholarship Fellow (September ’13 – August ’14)
Digital Projects Coordinator, HLM Library

DCAPS/Kheel Center Collaboration Begins

It is with much enthusiasm that we begin digitization of the Kheel Center’s collection of Collective Bargaining Agreements. Over the next two years, Digital Consulting & Production Services (DCAPS) will digitize upwards of 2000 agreements, representing contracts from the American educational services and retail industries. The series selected range in length from two pages to two hundred, and span from the 1930s to the 1980s.

Barb Morley, digital archivist within Kheel, delivered the first shipment of Collective Bargaining Agreements to DCAPS for digitization in late July. This consisted of 2 linear feet of agreements from the educational sector: a mix of bound and unbound items, in both color and black and white. In order to digitize the material most efficiently, we established a workflow that identifies agreements that can be easily disbound for scanning with a Fujitsu sheetfed scanner. Items that cannot withstand disbinding will be scanned with a Zeutschel 10000TT overhead camera. So far we have scanned the first shipment and are almost through the second. All scanned agreements will undergo rigorous quality control inspection as well as optical character recognition (OCR) to make their content searchable and discoverable. Currently we are right on schedule, and expect things to progress smoothly.
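The routing decision in that workflow can be sketched as a simple rule. This is a hypothetical illustration (the function and item fields are invented, not actual DCAPS tooling): items that can safely be disbound go to the sheetfed scanner, and everything else goes to the overhead camera:

```python
# Hypothetical sketch of the routing rule described above; the function
# and the item fields are illustrative, not actual DCAPS tooling.

def choose_scanner(item):
    """Route an agreement to the Fujitsu sheetfed scanner if it is
    unbound or can be disbound without damage; otherwise use the
    Zeutschel 10000TT overhead camera."""
    if item.get("bound") and not item.get("can_disbind"):
        return "Zeutschel 10000TT overhead camera"
    return "Fujitsu sheetfed scanner"

# Unbound items, and bound items sturdy enough to disbind,
# take the faster sheetfed path.
print(choose_scanner({"bound": True, "can_disbind": False}))
```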

Once digitized and online, these CBAs will provide unprecedented access and opportunity for historians, social scientists, and the general public to analyze the role of organized labor in America. The project is being funded by the National Historical Publications and Records Commission, an independent federal agency that preserves and shares records with the public. We are thrilled to be a part of this exciting and important initiative.

The DCAPS project team includes Bronwyn Mohlke, project management and quality control; Shakhya Bodhiwamsa, lead digitization technician; and Mira Basara, OCR specialist.

Danielle Mericle
Coordinator, Digital Consulting & Production Services

Summer Graduate Fellowship Program in Digital Scholarship

In July-August 2014, Cornell University Library (CUL) and the Society for the Humanities co-sponsored the second year of their five-week summer fellowship program for graduate students in the humanities.

Piloted as an internship in Summer 2013, this program was inspired by the recognition that humanities graduate students at Cornell need additional opportunities to develop digital skills and knowledge that will be increasingly necessary in academic job markets.  The Fellowship’s primary aim is to provide graduate students with the time and technical support to explore digital scholarship tools and platforms in ways that complement their own scholarly and pedagogical goals.


Liz Blake, Mia Tootill, and Kaylin Myers at the final program meeting


Professor David Mimno meets with grad fellows to discuss text analysis and topic modeling. Mia Tootill, David Mimno, Jake Nabel, Jason Blaesig.


The program brings together a small cohort of graduate fellows for an intense 5-week fellowship period.  Fellows spend approximately half their fellowship time in workshops and discussions; the other half they spend creating a small-scale digital project of their own, with inspiration, guidance, and technical support from Cornell faculty and CUL staff.

We were thrilled to receive nearly three times as many applications in 2014 as we received in 2013, and we are already planning to expand the program in 2015.  Stay tuned for news of a public showcase of fellows’ work this spring!

2014 Summer Fellows:

Jason Blaesig, Anthropology

Project: multimedia site in Scalar combining anthropological field recordings and translations of Peruvian folklore

Liz Blake, English

Project: topic modeling the text of James Joyce’s Ulysses

Kaylin Myers, Medieval Studies

Project: online compilation and interactive translation of Old English Body and Soul homilies

Jake Nabel, Classics

Project: compilation and translation of ancient Parthian inscriptions, including an online Parthian grammar

Mia Tootill, Musicology

Project: interactive map and collection in Omeka using Neatline to visualize the location of opera venues in 19th century Paris


Program Sponsors

Professor Timothy Murray, Director of the Society for the Humanities

Oya Y. Rieger, Associate University Librarian for Digital Scholarship and Preservation Services

Bonna Boettcher, Interim Director of Olin and Uris Libraries

CU Faculty

Prof. Edward Baptist, History

Prof. David Mimno, Information Science

CUL Program Staff

Mickey Casad, Coordinator,  CUL – DSPS

Virginia Cole, CUL – Olin/Uris

John Handel, CUL – DSPS

Michelle Paolillo, CUL – DSPS

…with many thanks to:

Jenn Colt, CUL – DSPS

Jason Kovari, CUL – LTS

Danielle Mericle, CUL – DSPS

Susette Newberry, CUL – Olin/Uris

Jaron Porciello, CUL – DSPS

Anne Sauer, CUL – RMC

Melissa Wallace, CUL – DSPS

Florio Arguillas, CISER

Patrick Graham, Academic Technologies

Patrice Prusko, Academic Technologies

Interactive Digital Media Art Survey: Key Findings and Observations

In February of 2013, Cornell University Library, in collaboration with the Society for the Humanities, began a two-year project funded by the National Endowment for the Humanities (NEH) to preserve access to complex born-digital new media art objects. The project aims to develop a technical framework and associated tools to facilitate enduring access to interactive digital media art, with a focus on artworks stored on hard drive, CD-ROM, and DVD-ROM. The ultimate goal is to create a preservation and access practice for complex digital assets that is based on a thorough and practical understanding of the characteristics of digital objects and of the requirements of collection curators and users alike. Digital content that is not used is prone to neglect and oversight. Reliable access mechanisms are essential to the ongoing usability of digital assets. However, no archival best practices yet exist for accessing and preserving complex born-digital materials. Given our emphasis on use and usability, and our recognition that we must develop a framework that addresses the needs of future as well as current media art researchers, we developed a survey targeting researchers, artists, and curators to expand our understanding of user profiles and use cases. The purpose of this article is to summarize the key findings of the survey.

About the Project

Despite its “new” label, new media art has a rich 40-year history, making loss of cultural history an imminent risk. Experiencing a media artwork requires machines that are themselves vulnerable to technological obsolescence. This is especially true of digital art, which requires hardware and software support and is often stored in fragile formats. Although the NEH-funded project uses the Library’s Rose Goldsen Archive of New Media Art as a testbed, our ultimate goal is to create generalizable new media preservation and access practices that are applicable for different media environments and institutional types.

Many of the artworks in our test collection were created for computer operating systems that are now obsolete.  This vintage iMac is an important component of the project’s digital workstation.

Named after the late Professor Rose Goldsen of Cornell University, a pioneering critic of the commercialization of mass media, the Goldsen Archive was founded in 2002 by Professor Timothy Murray (Director, Society for the Humanities, Cornell University) to house international art work produced on portable or web-based digital media. The archive has grown to achieve global recognition as a prominent collection of multimedia artworks that reflect aesthetic developments in cinema, video, installation, photography, and sound. We estimate that about 70 percent of CD-ROM artworks in the Goldsen collection already cannot be accessed without a specialized computer terminal that runs obsolete software and operating systems. Because of the fragility of storage media like optical discs, physical damage is also a serious danger for the Goldsen’s artworks on CD-ROM and DVD-ROM, many of which are irreplaceable. Even migrating the information files to another storage medium is not enough to preserve their most important cultural content. Interactive digital assets are far more complex to preserve and manage than single, uniform digital media files. A single interactive work can comprise an entire range of digital objects, including files in different types and formats, applications to coordinate the files, and operating systems to run the applications. If any part of this complex system fails, the entire asset can become unreadable.
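One way to picture the dependency chain described above is as a stack in which every layer must survive for the work to remain readable. The sketch below is purely illustrative (the artwork, file names, application, and operating system are invented examples, not items from the Goldsen Archive):

```python
# Illustrative sketch of the dependency chain described above: an
# interactive artwork depends on its files, on the application that
# coordinates them, and on the operating system that runs the
# application. All names below are hypothetical.

artwork = {
    "title": "Untitled CD-ROM artwork",
    "files": ["menu.dir", "video01.mov", "audio01.aif"],
    "application": "Macromedia Director 6",
    "operating_system": "Mac OS 9",
}

def is_renderable(work, available_apps, available_os):
    """The work stays accessible only if every layer of its stack
    is still available; losing any one layer breaks the whole asset."""
    return (work["application"] in available_apps
            and work["operating_system"] in available_os)

# With the application gone, the files alone are unreadable.
print(is_renderable(artwork, set(), {"Mac OS 9"}))
```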

Project team at work, from left to right, Dianne Dietrich, Mickey Casad, and Desiree Alexander

Project team at work, from left to right, Dianne Dietrich, Mickey Casad, and Desiree Alexander

Survey Results

In January 2014, we announced the questionnaire on several preservation, art, and digital humanities mailing lists. We received a total of 170 responses, 122 from individual researchers or practitioners and 48 on behalf of an archive, museum, or other cultural heritage institution. Of the 170 respondents, 80 completed the survey fully, 32 completed it partially, and 58 only took a quick look without responding. We are not sure whether the incomplete-survey rate reflects respondents’ time limitations or unfamiliarity with the program area. We did not observe any significant differences between the responses of the two groups (personal and institutional), probably because even at an institutional level, new media projects and collections are led by small teams or sometimes individuals. Respondents held multiple roles and characterized themselves as artists (48%), researchers (47%), educators (25%), and curators (20%). Almost 24% identified themselves as archivists, conservators, project managers, digitization specialists, or technical developers. The scope of digital media art collections they worked with was also broad, including digital installations, digital video and image, interactive multimedia, raw audio files, born-digital artwork, 3-D, video art, and websites. Genres emphasized in their media art research included installation/performance/media sculpture, video/cinema, and interactive artists’ portfolios. Respondents were interested in several platforms, the most common being personal computers/devices, locative media installation/sculpture performance, web-based art works, and hardware peripherals. Among the countries represented were the US, Germany, France, the UK, Australia, and Argentina.

We posed an open-ended question to inquire about the research questions that guided respondents’ interactions with media works. It is difficult to characterize or summarize their broad range of involvements, as the research frameworks referenced were almost equally distributed among the contextual categories of artistic, social, historical, cultural, aesthetic, and technical. However, some of the noteworthy research angles mentioned in the responses included:

  • Social change – how technologies are assisting exploration of political stories, strategies to mitigate problems of born-digital to work towards a system of advocacy and lobbying, implications of social identity (for example, gender) in digital media artworks
  • Digital divide – accessibility of digital art for individuals with lower socioeconomic backgrounds and artists’ role in reaching out to a diverse population
  • Role of technologies in supporting and stimulating community and researcher engagement, presentation of news and actual events through art
  • Interpretation of artists’ intentions – what is being communicated through the artwork, interactive power of technologies, imagining future use – e.g., how will the art object be used/viewed in 20 years?
  • Historical perspectives – how certain technologies have been used in art, evidence of art-science collaboration – synergy
  • Affordances of digital media and digital spaces – if and how digital works explore something further than the analog approaches, embodied and social user interactions.
  • User-response oriented analysis – the role of viewers’ background in interpreting digital art work and interactive narratives, effects of image and sound on audiences, social and political effects of technology.
  • Characteristics of influential artworks  & relation of historic art work to present-day questions, searching for works of art for classroom teaching
  • Long-term preservation challenges and requirements for retrieval and documentation of digital art works for research and learning from users’ perspective. Sustainability of digital content and role of crowdsourcing
  • Device requirements for accessing and experiencing the artwork – role of viewing environments (e.g., if an artwork is meant to be seen on an old TV set)
  • Authenticity and documentation: How can documentation capture the essence of highly interactive works, for instance live performances?

Respondents cited a number of serious impediments encountered in conducting research involving new media art. These impediments were technical, institutional, and cultural in nature.  For example, respondents mentioned lack of documentation, technological challenges such as migration and emulation, costs and lack of understanding of costs, legal issues and access limitations, missing connections between similar archives (lack of unified discovery & access), digital divide, insufficient metadata, and hardware and software dependencies. Several of the respondents expressed their unease about the disappearing web-based art and ubiquitous broken links. One respondent noted, “In a society that is rushing headlong into the future, it is vital that we preserve the efforts of those who have early works in this new culture.” One of the respondents pointed out that due to a general “disinterest in preserving the cultural artifacts of the digital age,” there was a lack of understanding of the importance of these objects for cultural history. Another comment was about the infrequent access requests and therefore difficulties in justifying investment in preservation efforts for future use.

The respondents who use new media collections in support of teaching and learning listed several impediments, such as vanishing webpages, link rot, poor indexing, a gap for works from the ’80s and ’90s, and the lack of quality documentation. One of the respondents wrote, “Some work becomes very easy to make when the technology evolves and the students don’t understand how it was important, or how it was a challenge to produce at the time.” This statement underscores the importance of documenting cultural context to situate the work from artistic, historic, and technical perspectives.

We inquired about respondents’ documentation needs and preferred strategies in cases where full interactive access is not possible. Again, there were several suggestions:

  • Providing textual description of content
  • Capturing video documentation of use such as walkthrough video with voiceover
  • Recording audience perspectives, interpretations, and reactions
  • Maintaining artists’ notes
  • Offering blogs such as the British Library’s Endangered Archives Programme to build awareness about the threats to digital archives
  • Describing the technology in context to its time to understand and appreciate the available technological and artistic tools
  • Collecting contextual materials – exhibition announcements, brochures, resumes, etc.
  • Capturing metadata including MANS (Media Art Notation System), OAIS, PREMIS, TOTEM (the Trustworthy Online Technical Environment Metadata Registry)

When we asked the respondents about the preservation measures undertaken for their own art work, we again received a combination of different strategies. Some were common ones, such as archiving hard drives, keeping backups of software, and maintaining redundant storage. They also mentioned maintaining a blog with information about the art work, web publishing for open and broad access, videotaping user interactions, taking screen shots, and creating short videos about the work. Several respondents noted that some of their early works no longer existed or worked.

For practicing artists, there were several concerns about the longevity of their creative expressions. One individual worried about being unable to sell works because they may become obsolete within a year. Respondents noted that it was difficult to archive immersive installations, interactive Flash pieces, and works that depend on external files. They also mentioned copyright issues as a significant impediment. There were also several comments, such as the following, articulating anxiety over future use:

[My work] will stay forever in storage and will never be re-activated.

I am worried about context and artistic intent – how do we retain authenticity in the long term?

The question about which archiving and access practices affected respondents the most in their creative and professional work also generated thoughtful responses. Here are some examples:

Access to past works are incredibly valuable to me -  understanding works not just for their message but also for their technical [aspects] help new media artist evolve the area of practice.

I think museums tend to see my books as a treasure when they were created to be used.

Knowing where artworks and their documentation are kept. Individual sites that do not often appear very high up in search engine results.

What is complex media object?  If it is performed or presented, it can be power point or a photo essay.

For curators, the following comments illustrate the biggest concerns:

Probably the biggest impact is in teaching. One is continually trying to explain a work that one has seen in the past without the ability to actually show it.

I know [the art works] will become obsolete as running objects so the best thing I can do is push as much data about them out onto the Internet as possible.

Allowing original context of the artwork in the audience experience

Only twenty-four of the respondents indicated that their institutions include born-digital interactive media artworks and artifacts in their holdings. Several respondents indicated that they don’t include born-digital interactive media because such materials fall outside of collection scope. In some cases, they noted that procedures for providing access are too complex or unsustainable, or cited technological challenges and a lack of local support.

Twenty respondents answered the access- and preservation-related questions on behalf of an archive, museum, or other cultural institution. Only one organization mentioned having a sophisticated, integrated web-based discovery, access, and preservation framework. The others indicated that access needed to be arranged specially, such as by scheduling an appointment. They indicated that a full range of users is supported: students, faculty, researchers, artists, hobbyists, and the general public, such as museum visitors. They mentioned a range of preservation strategies, including migration, creation of search and discovery metadata, maintaining a media preservation lab, providing climate-controlled storage, and collecting documentation from the artists. They named several challenges to preservation, many stemming from a lack of resources or the difficulties associated with conducting artist interviews. Conservation measures were sometimes triggered by exhibition plans, and some indicated that they were working on clarifying policies. They also noted that the measures taken to secure access, preservation, and migration rights varied from case to case.

Key Conclusions

The data we have gathered further strengthened our opinion that identifying the most significant properties of individual media artworks will require direct input from artists. This confirms our belief that we need to push the integration of archival protocols as far upstream as possible, to the point of content creation and initial curation. We plan to adapt pre-existing conservation-oriented questionnaires to our emerging data model and our growing sense of media art “classes” with distinct preservation and access needs.  We plan to solicit the contributions of artists in the test collection for this specific NEH-supported project. We will simultaneously revisit our rights agreements with the artists, which never anticipated access strategies based on emulation.

A recurring theme in our findings involved the difficulty of capturing sufficient information about a digital art object to enable an authentic user experience. This challenge cannot and should not be reduced to the goal of providing a technically accurate rendering of an artwork’s content. So much of new media works’ cultural meaning derives from users’ spontaneous and contextual interactions with the art objects. Reproducing an artwork’s digital files does not always ensure preservation of its most important cultural content. It is essential that we anticipate the needs of future researchers and identify the core experiences that need to be captured to preserve these artifacts. For a work to be understood and appreciated, it is essential to relay a cultural and technological framework for interpretation. Some works that come across as mundane now may have been highly innovative trailblazers in their day. Given the speed of technological advances, it will be essential to capture these historical moments to help future users understand and appreciate such creative works.

The preservation model to be developed will apply not only to new media artworks but to other digital media environments. Therefore we are hoping that this project will inform digital preservation services at libraries, archives, and museums to support future uses in learning, teaching, research and creative expression by scholars and students. We will further elaborate our findings in a future article. Stay tuned!

Oya & Mickey

On behalf of the project team:

Timothy Murray & Oya Rieger (co-PIs), Mickey Casad (Project Manager), Dianne Dietrich, Desiree Alexander, Jason Kovari, Danielle Mericle, Liz Muller, Michelle Paolillo, & AudioVisual Preservation Solutions

Ongoing Considerations in AV Preservation at CUL

In an audiovisual preservation workflow, each decision you make has a bit of a wormhole effect. For instance, when choosing whether to accommodate the digitization of a new format in-house, one must consider long-term support for the equipment involved, including cleaning, maintenance, tools and supplies, as well as technical expertise. All of these considerations come down to two critical resources: time and money.

A Studer A810 we recently inherited from our fantastic colleagues at the Lab of Ornithology. This deck originally came from NPR Studios and Radio Expeditions’ Bill McQuay.

In a preservation workflow, it’s not always as simple as hooking a VCR (assuming you still have one) up to a computer. You might think: if you need a DVD copy of something or a CD dub of an old recording, that’s fairly easy given today’s technology, right? But as with the conservation of a manuscript or painting, there are general requirements and standards widely accepted and used by the preservation community for digitally preserving unique AV content. Instead of scanning an image at high resolution, rebinding a brittle book, or making a squeeze more interpretable, you’re trying to capture an electrical signal. The quality of that signal can make all the difference, and consumer-grade electronics weren’t designed to produce a broadcast-grade signal.

Digitization first requires professional, broadcast-grade equipment that is routinely tested and cleaned. This equipment is not cheap, and the cost of the machines, parts, and repair expertise is rising rapidly, due primarily to format obsolescence. Legacy formats like magnetic tape are machine-readable and completely reliant on the proper technology to be interpreted, and most of the major manufacturers of magnetic media and the devices needed to play it have disappeared. Next, one must consider the quality of the analog-to-digital converter. Organizations like the International Association of Sound and Audiovisual Archives (IASA) and the Audio Engineering Society (AES) have set guidelines for the quality of analog-to-digital converters for audio preservation; standards for video (RF) conversion and digitization are still being developed by various organizations and institutions. Finally, you need a computer capable of 10-bit (or higher) capture into a non-linear editing software platform. As you can see, the decision to digitize a format in-house is a big one.

We have been carefully considering the formats we want to support in the digitization lab here at CUL. We want to address the major formats contained in our vast collections, balancing what we can achieve on-site with the services available through our vendor partners.

In the CUL AV preservation lab, we can now handle the following formats:

- U-Matic (3/4”)
- Vinyl LP (33 rpm)
- 1/4” open reel audio (stereo and 4-track)
- all non-tape-based, born-digital audio and video formats

Another huge piece of the puzzle is maintaining the integrity of digital content. After something has been digitized, the work is not finished, as many institutions have learned. Audiovisual data can be lost or compromised through bit rot, bit-level corruption, hardware failure, and other problems. We maintain integrity by running weekly fixity checks, in the form of data comparisons, on both of our digitization stations. This helps us track additions and changes to files and recognize digital problems and errors within our current projects. I hope that we can soon be pushing these preservation master files into CULAR, as we’re running out of space on our 24TB machines. That brings me to the biggest hurdle at this point: video master files are huge. Sixty minutes of SD video footage captured at 10-bit resolution produces a roughly 100GB file.
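A fixity check of this kind boils down to recomputing checksums and comparing them against a stored manifest. Here is a minimal sketch in Python; the manifest format and function names are illustrative, not a description of our actual tooling:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 in 1 MB chunks so huge video masters don't fill RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def check_fixity(root: Path, manifest: dict[str, str]) -> dict[str, list[str]]:
    """Compare files under `root` against a stored {relative_path: digest} manifest.

    Returns files that were added, have gone missing, or whose contents changed.
    """
    current = {str(p.relative_to(root)): sha256_of(p)
               for p in root.rglob("*") if p.is_file()}
    return {
        "added":    sorted(set(current) - set(manifest)),
        "missing":  sorted(set(manifest) - set(current)),
        "modified": sorted(k for k in current.keys() & manifest.keys()
                           if current[k] != manifest[k]),
    }
```

A scheduler (cron, for instance) can run this weekly and flag any non-empty report for follow-up. For a sense of scale on the storage side: uncompressed 10-bit 4:2:2 SD video (720×486 at ~30 fps, 2.5 bytes per pixel) runs at roughly 26 MB per second, which over an hour lands in the neighborhood of the ~100GB figure above.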

I knew this coming in, but it’s become clear that deciding which formats to handle in-house, and knowing when to outsource, is crucial. I have developed close working relationships with our vendors in order to minimize cost and effort while meeting our format and metadata needs. IT requirements are growing across the library, and that is part of the cost burden we have to be mindful of. Danielle Mericle and I are working closely on how to adequately meet CUL’s audiovisual preservation needs without over-extending our budget, our scope, or the library’s general requirements. This is a big charge, but I’m happy to report that we’ve made huge strides toward meeting it.

-Tre Berney


AV Preservation Census

We are pleased to announce that the Cornell AudioVisual Preservation team is launching a campus-wide census that will gather important data regarding our ‘at-risk’ AV formats. We will assess condition, format stability, uniqueness, and scholarly value. This is an important first step in developing a more comprehensive preservation strategy. This pilot initiative is being jointly funded by Cornell University Library and CIT.

The challenges associated with audiovisual (AV) media preservation are significant: important scholarly material is at risk due to physical media degradation, metadata loss, player or format obsolescence, and rights issues. In addition to these issues with legacy materials, there is mounting pressure from newly generated AV content, as scholars are now creating large-scale AV collections associated with their research projects. With new government regulations requiring data management plans for all grant-funded initiatives, it is imperative that we begin to address the short- and long-term needs of preserving and providing access to important audiovisual collections.

In many cases, we may not even be aware of high-value content on campus, as it lives outside the normal avenues for collection development and maintenance (such as the Library). Instead, such material is embedded in departments, in shoeboxes under desks, or in digital files living on isolated desktops. Even within the Library system there are collections that are not sufficiently described or preserved. Hence we are undertaking the census as a first step toward getting a handle on the scope of the problem.

Our initial effort will be in the form of a web survey, with scheduled follow-up site visits from representatives of our team. We hope to have widespread input from key stakeholders across campus, and encourage you to share this with your colleagues. If you have any questions, do not hesitate to contact us at For more information about our group and charge, go to:

Why I joined the Authors Alliance

(by Peter Hirtle)

Of all the absurdities associated with the Authors Guild suit against Google over the Google Books Project, perhaps the greatest was the Guild’s efforts to make it a class action, with the 8,000 members of the Guild speaking for all authors everywhere in the world.  Most academic authors realize that providing a keyword index to all published literature can only aid scholarship.  At the same time, by making it easier to identify works that might be of interest, Google Books can only increase readership and sales of the original works.  Yet at the time of the lawsuit, there was no organization that could speak for authors motivated by concerns that were not solely commercial.

Now there is.  On 21 May, the Authors Alliance was formally launched in San Francisco.  The Alliance is the brainchild of Pamela Samuelson, one of the foremost copyright experts in the country and an active voice in the Google Books cases.  The Alliance recognizes that the primary motivation for most authors, including many academic authors, is to be read. Digital network technologies present unprecedented opportunities for the creation and distribution of creative works for the public good.  Alliance members are not opposed to authors making money from their works; most of the members of its Advisory Board publish with trade publishers and have works that can only be purchased.  But they recognize that there are some educational uses (including indexing) that do not need to be monetized.  The Alliance will be a voice for moderation.

This is an especially auspicious time for the formation of the organization.  Discussions have started in Congress about reforming copyright law.  What has been a trade regulation for print media no longer works in a digital environment that depends on copying.  A different ethos is needed if copyright is to meet its constitutional mandate “to promote the progress of science and useful arts.”  The Authors Alliance has therefore developed a set of “Principles and Proposals for Copyright Reform” that reflect the interests of authors who write to be read and that will broaden the discussion in Washington.

This is why I was happy to become a Founding Member of the Authors Alliance and make a donation to its work.  I would encourage anyone who is an author (of books or articles or any creative work) to look at the Alliance’s mission statement and goals and to consider joining as well. And if you don’t believe me, see Kevin Smith’s excellent post, “Why I joined the Authors Alliance.”
