"What does it take to archive a linear foot of the Web?" Anna Perricci posed rhetorically to our web archiving metrics breakout discussion group two weeks ago. I don't yet have a good answer to what the question's getting at, but I was gratified by the level of interest and engagement in web archiving as archiving at the just-concluded Society of American Archivists (SAA) Annual Meeting and the inaugural co-scheduled Archive-It Partner Meeting.
We've written before on our restoration of the oldest U.S. website, covering in detail how we did it and some interesting discoveries we made along the way. More recently, Web Archiving Engineer Ahmed AlSum prepared a visual diagram (see below) of the steps involved in packaging, indexing, and making accessible the legacy web content in a poster for the Joint Conference on Digital Libraries (JCDL), an annual meeting sponsored by the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE) focused on research and development for digital libraries. Notably, the display won the Best Poster Award! We celebrate the continued community interest in Ahmed's innovative work.
A couple of weeks have passed since the successful conclusion of the annual IIPC General Assembly, hosted this year by Stanford University Libraries and Internet Archive. The meeting has been pretty well summarized already in posts by Sawood Alam, Jefferson Bailey, Emmanuelle Bermes, Tom Cramer, Carlos Eduardo Entini, and Ian Milligan. Rather than contributing another retrospective, I'd like to instead look ahead to 2016 and consider what the web archiving community might accomplish together in the coming year, highlighting some of the opportunities discussed and presented two weeks ago.
This past week saw the 2015 General Assembly of the IIPC, the International Internet Preservation Consortium--probably the biggest week and biggest event of the year in the web archiving world. The IIPC has 50 members from 30 countries, and comprises the leading web archiving institutions in the world, including many national libraries, the Internet Archive, and a growing number of research institutions.
Here are the five key lessons that I am taking away from this year’s IIPC.
“More is More. Less is Less. Avoid Monoculture.” Abby Smith Rumsey articulated the soul and mission of web archiving in eight words during her Wednesday talk on Memory in the Digital Age. Speaking from her perspective as a historian, a member of the NSF’s Blue Ribbon Task Force on Sustainable Digital Preservation, and a longtime leader and practitioner in digital libraries, Abby outlined both the needs and opportunities for archiving the modern web. The essence of her talk (from my perspective): capture as much as possible, don’t over-invest in curation today (let future users make their own calls), demonstrate value, and get as many players as possible active in the effort.
APIs, APIs, APIs. The fundamentals of web archiving are now (pretty well) understood, and it’s a (somewhat) mature space. We now know enough to create standard definitions for interactions across the major functions of web archiving--selection, capture, preservation, indexing, playback, mining. Let’s define these interactions formally via APIs; once we do, we’ll see all sorts of associated benefits. Institutional software stacks will become componentized, modular, and swappable, allowing us to assemble best-of-breed systems. Archives and their associated functions will become interoperable--allowing for the reuse and exchange of content, software, and services across institutions, time, and place. Developers will be able to swarm an individual component (say, playback with Open Wayback; crawling with Heritrix; indexing with Solr) with the confidence that it will plug in to their local stack.
If there is a single opportunity for IIPC to advance the technology-scape supporting Web Archiving in the next year, it’s defining these community-standard APIs among layers of the end-to-end web archiving stack.
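To make the idea concrete, here is a minimal sketch of what such component contracts might look like--all names here are hypothetical, not an actual IIPC specification. The point is structural: any indexer or playback engine satisfying the agreed interface could be swapped into a local stack.

```python
from typing import Protocol


class PlaybackEngine(Protocol):
    """Hypothetical contract for the playback layer of a web archiving stack."""

    def replay(self, url: str, timestamp: str) -> bytes:
        """Return the archived response for `url` nearest the 14-digit `timestamp`."""
        ...


class Indexer(Protocol):
    """Hypothetical contract for the indexing layer."""

    def lookup(self, url: str) -> list:
        """Return the capture timestamps known for `url`, oldest first."""
        ...


class MemoryIndexer:
    """Toy in-memory indexer that satisfies the Indexer protocol."""

    def __init__(self):
        self._captures = {}

    def add(self, url, timestamp):
        # Record a capture of `url` at a 14-digit timestamp (YYYYMMDDhhmmss).
        self._captures.setdefault(url, []).append(timestamp)

    def lookup(self, url):
        # Sorted timestamps let a playback component binary-search captures.
        return sorted(self._captures.get(url, []))
```

Because the contract is structural, an institution could replace `MemoryIndexer` with a Solr-backed implementation without touching the playback code that consumes it.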
It’s time to intentionally cultivate a community of Web Archiving developers. If each of the 50 institutions in the IIPC has .5 of a developer allocated to web archiving, that makes 25 FTE developers. If each site allocates 1 software engineer, that makes 50 developers. I don’t know what the average size of an IIPC member’s dev team is, but it’s a BIG POOL of talent, and one that collectively could get a LOT done if they work in concert. The seeds of a robust development community have been sown. The development of Open Wayback, while slow to pick up community traction at first, now looks healthy and even thriving, with 10 committers from as many institutions.
If there are two opportunities for IIPC to advance the technology-scape supporting Web Archiving in the coming year, the second is to foster a thriving developer community engaged in building tools for web archiving--one that is intensely collaborative and subscribes to best practices in open source software development.
Reaffirming the mission and focus of IIPC. This is the third and final year of the IIPC’s three-year consortial agreement. Going into 2016, the organization must renew the consortial agreement, which provides a rare chance to reaffirm its mission, specify current goals, and update its operating practices. This is a fantastic opportunity for IIPC members to double down on the areas of greatest value for advancing web archiving internationally. The IIPC Steering Committee is primed to take this on; I am very much looking forward to the next few months of discussions among the Steering Committee and members as we re-express the core mission and objectives of IIPC for the next five years. Personally, I hope and expect that tool development, training, advocacy, and collaborative collecting/preservation/access remain the fundamental objectives of the organization.
Networked archives are the future of archiving the ‘net. Brewster Kahle gave a provocative speech on the heels of Abby Smith Rumsey’s talk on Wednesday. In essence, he called for “collective collection building; distributed preservation; local/cloud-based access” for web archives. While there are a lot of details still to be sorted, I think Brewster has hit a key theme and need for web archives of the future--given the scale and nature of the Web, it’s no longer sufficient to think about global Web Archiving as a series of independent archives each pursuing its own selection, capture, preservation, access and use objectives. Rather, web archiving in the future needs to reflect the distributed nature of the very thing we’re seeking to preserve--a collective, distributed, networked effort to capture, archive and give access to the Web, with many, many players invested and specializing in the functions they do best. Scale and specialization are both watchwords here. APIs and Abby Smith Rumsey’s counsel (more is more; avoid monoculture) are also both critical here. This is going to be very interesting to observe in the next three to five years.
This was my first IIPC General Assembly, and overall I am impressed. The IIPC is a serious, substantial and dedicated group of practitioners. There is a broad front of international concern and engagement in preserving the Internet and its contents for future generations; at the same time, there are deep pockets of innovation and progress in advancing the state of the art in practice. For all the substance of IIPC in its current form, though, there needs to be MORE--the Internet is the largest, most significant, most far-reaching, and most interconnected vehicle for human communication and history, EVER. Capturing its contents over time is critical to capturing the history, substance and technology of humanity.
IIPC is the vanguard for memory and research institutions in the world for capturing this critical content, defining the technology and policies and practices that will capture this critical period of humanity’s development. If there were ever a group and cause that merited national and international investment to capture a unique period of history, this is it. After this week, I’m happy to report that--as large and critical as the need is--IIPC seems to be in a unique position to help meet it, and to define the history of the present and future.
Once each year, the international web archiving community represented by the International Internet Preservation Consortium meets for a week-long "General Assembly". As alluded to in my recap of the 2014 meeting, I'm pleased to belatedly announce that Stanford University is the confirmed host for the 2015 IIPC General Assembly as well as more promptly announce that registration is now open!
We are pleased to announce the acceptance of our bid to join the IIPC Steering Committee, based on a vote by the IIPC membership. SUL joins the 15-member group as one of two currently-serving university library members (the other being the University of North Texas Libraries) and as the third university library ever to serve on the body (the first being the California Digital Library).
Yesterday the U.S. Senate Select Committee on Intelligence released its "Study of the CIA's Detention and Interrogation Program - Foreword, Findings, and Conclusions, and Executive Summary." (BIG PDF!) The report is 525 pages, heavily redacted, and includes graphic details about the torture techniques used by the CIA. The study found that American torture was not confined to a handful of aberrational cases or techniques, nor was it the work of rogue CIA agents. It was an officially sanctioned, worldwide (over 1/4 of the world's countries participated in some way!) regime of torture that had the acquiescence, if not explicit approval, of the top members of both political parties in Congress.
"We had no idea that we were making history and were just trying to get the job done in our 'spare' time,” Louise Addis, one of the WWWizards team who developed the SLAC website from 1991, said during our conversation about the restoration of SLAC's earliest website. Last May, Nicholas Taylor, web archiving service manager, told me, "SLAC has a historical collection of webpages that may be the first website in the US. Can we help them to find a home for this archive?” As Web archivist, I felt that I had found a treasure. I replied, "Of course, Stanford Web Archive Portal should be the home."
One of the major use cases for the Web Archiving Service is preserving Stanford University web content. The earliest SLAC website represents the oldest such content we could find; dating to 1991, it is the first website in the US, so we started there. The Stanford Web Archiving Service launched its portal this week, featuring SLAC's earliest website, which was kept on SLAC servers for many years. This Halloween, it comes back to life. Our task was to convert the original list of scattered files into an accessible, browsable website with temporal navigation. In this post, I will discuss the technical challenges of and lessons learned from the restoration process.
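To give a rough sense of the first step--turning a pile of scattered files back into something with temporal navigation--here is a simplified sketch in Python. This is an illustration under stated assumptions, not the actual SLAC restoration pipeline: it walks a directory of legacy files, reconstructs a plausible original URL for each, and uses the file's modification time as a stand-in capture date to build a sortable capture index (the function and parameter names are hypothetical).

```python
import os
from datetime import datetime, timezone


def build_capture_index(root_dir, base_url):
    """Map each legacy file under `root_dir` to a (url, timestamp, path) entry.

    Hypothetical illustration only: uses the file's mtime as a proxy for its
    capture date, formatted as the 14-digit timestamps playback tools expect.
    """
    index = []
    for dirpath, _dirnames, filenames in os.walk(root_dir):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # Reconstruct the file's original URL from its relative path.
            rel = os.path.relpath(path, root_dir).replace(os.sep, "/")
            url = base_url.rstrip("/") + "/" + rel
            # Use the modification time as a stand-in capture timestamp.
            mtime = os.path.getmtime(path)
            ts = datetime.fromtimestamp(mtime, tz=timezone.utc).strftime("%Y%m%d%H%M%S")
            index.append((url, ts, path))
    # Sort by URL, then timestamp, so temporal navigation can scan captures in order.
    index.sort()
    return index
```

An index like this is only the starting point; the real work lies in verifying the dates, repairing links among the files, and packaging the results so an archival replay system can serve them.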