

Fantastic Futures 2019, 2nd International Conference on AI for Libraries, Archives, and Museums

The full schedule is available here.
A map of the venues is available here.


Thursday 5 December


Develop New Skills from LAM Experts

Fantastic Futures will host day-long workshop sessions to provide instruction in key topics and applications of AI in libraries, archives, and museums. Many of the outstanding presenters from the Plenary Sessions will be contributing to the workshops.

The sessions will open with a plenary meeting covering the high-level concepts behind machine learning, deep learning, and AI, and how they might be relevant for LAMs. Each track has its own, more specific learning goals. Tracks 2-5 encourage participation by institutional teams that combine engineering experience and subject expertise.

Note that pre-conference skill-building sessions will also be offered on December 3 for those who register for the December 5 workshops. Space is limited. Indicate your interest when registering for the December 5 workshops.


Track 1 • Designing an AI Program / Designing an AI Project (FILLED)

Leads: Emmanuelle Bermès, Nicole Coleman, Mary Elings, Abigail Potter, and Meghan Ferriter

This session is aimed at administrators and project managers. Over the day, we will move from the project level to the implications of programmatic applications of AI, with insights from Teemu Roos, Karen Cariani, Katie McDonough, Jan Willem van Wessel, Elena Nieddu, and, via teleconference, Sandy Hervieux and Amanda Wheatley. The workshop will emphasize implementing technology in a way that reflects the ethos of the institution. Specifically, we will review the landscape of LAM AI projects; address how to anticipate future developments; outline the resources and skills necessary to take an AI project from start to finish; compare commercial services to an in-house lab; and learn how to prepare staff for a future where AI is integral to library infrastructure.

Track 2 • Text (FILLED)

Instructors: Scott Bailey, Quinn Dombrowski, and James Pustejovsky

Scott Bailey, from Stanford, will introduce computational text analysis with the scikit-learn Python library. Quinn Dombrowski, also from Stanford, will teach handwritten text recognition using Transkribus and neural networks. James Pustejovsky, from Brandeis, will share the language-only toolkit pipeline LAPPS (Language Application Grid) for text-based metadata extraction.
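To give a flavor of what computational text analysis with scikit-learn looks like (this sketch is illustrative only and is not drawn from the workshop materials; the sample documents are invented), here is a minimal pipeline that turns documents into TF-IDF vectors and compares them:

```python
# Illustrative only: a minimal text-analysis pipeline with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented sample documents for the sketch.
docs = [
    "the archive holds nineteenth century maps",
    "maps of the nineteenth century fill the archive",
    "neural networks recognize handwritten text",
]

# Turn each document into a TF-IDF weighted term vector.
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)  # shape: (n_docs, n_terms)

# Pairwise cosine similarity: the two map-related documents score
# higher against each other than against the third.
sim = cosine_similarity(tfidf)
```

From here, the same vectors feed clustering or classification, which is the usual next step in a scikit-learn text workflow.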

Track 3 • Images

Instructors: Elena Nieddu, Peter Leonard, Doug Duhaime, Freddy Wetjen, André Walsøe, and Zuzana Bukovčiková

For those who have some exposure to TensorFlow, Elena Nieddu, from the In Codice Ratio project, with the help of Freddy Wetjen and André Walsøe from the National Library of Norway, will cover TensorFlow 2.0 + Keras for model prototyping (plus TensorBoard for visualization), including scalable dataset collection via crowdsourcing, the preparation of training data, and how to build a model incrementally, then tune and evaluate it. Peter Leonard and Doug Duhaime from the Yale DH Lab will teach clustering of large image collections, based on PixPlot. And Zuzana Bukovčiková, from the Slovak University of Technology, will cover the unique challenges of face detection in newspapers.
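PixPlot itself embeds each image with a pretrained convolutional network and projects the embeddings into 2D; those libraries are heavyweight, so as a hedged stand-in (the feature vectors below are invented, not real image embeddings), here is the core idea of clustering an image collection: images whose feature vectors point in similar directions are treated as neighbors.

```python
import numpy as np

# Toy stand-in for CNN image embeddings: each row is one "image".
# Rows 0 and 1 are deliberately similar; row 2 points elsewhere.
features = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.0, 0.1, 0.9],
])

# Normalize rows so a dot product equals cosine similarity.
unit = features / np.linalg.norm(features, axis=1, keepdims=True)
sim = unit @ unit.T

# A crude grouping: images count as neighbors above a similarity
# threshold (0.9 is an arbitrary choice for this sketch).
neighbors = sim > 0.9
```

In a real PixPlot-style workflow, the rows of `features` would come from a pretrained network, and a dimensionality-reduction step would lay the collection out for visual browsing.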

Track 4 • Audio/Video

Instructors: James Pustejovsky, Kelley Lynch, Kyeongmin (Keigh) Rim, and Peter Broadwell

James Pustejovsky from Brandeis and his students Kelley Lynch and Keigh Rim will provide an introduction to the toolkit they have used on the American Archive of Public Broadcasting. They will get into the technical details of the tool suite and help you overcome common obstacles in automating the indexing of audio and video. Peter Broadwell, research developer at Stanford Libraries, will teach a machine learning approach to audio deduplication.
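Broadwell's own method is not reproduced here; as one hedged illustration of the general idea behind audio deduplication, a recording can be reduced to a compact spectral fingerprint, and near-duplicates found by comparing fingerprints. The signals, frame size, and fingerprint scheme below are all invented for the sketch:

```python
import numpy as np

def fingerprint(signal, frame=256):
    """Crude spectral fingerprint: for each frame, record whether
    low-frequency energy exceeds high-frequency energy."""
    bits = []
    for start in range(0, len(signal) - frame + 1, frame):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame]))
        half = len(spectrum) // 2
        bits.append(spectrum[:half].sum() > spectrum[half:].sum())
    return np.array(bits)

def distance(a, b):
    """Fraction of fingerprint bits that disagree."""
    return float(np.mean(a != b))

# Synthetic "recordings": a tone, a noisy copy of it, and an
# unrelated higher-pitched tone.
t = np.linspace(0, 1, 4096)
tone = np.sin(2 * np.pi * 20 * t)
duplicate = tone + 0.01 * np.random.default_rng(0).standard_normal(4096)
other = np.sin(2 * np.pi * 1500 * t)

dup_dist = distance(fingerprint(tone), fingerprint(duplicate))
other_dist = distance(fingerprint(tone), fingerprint(other))
```

The noisy copy lands much closer to the original than the unrelated recording does, which is the property a deduplication pass exploits; production systems use far richer fingerprints or learned embeddings.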

Track 5 • Maps and More Images

Instructors: Thomas van Dijk, Katie McDonough, and Thomas Smits

Thomas van Dijk (University of Würzburg) will take workshop participants through algorithmic object and feature extraction from historical maps, based on his work with Benedikt Budig. Katie McDonough will provide instruction on image classification and segmentation for historical map processing, specifically for modern (19th/20th c.) serial maps, based on her work on the nineteenth-century British Ordnance Survey (OS) maps in the Space and Time Lab of Living with Machines at the Turing Institute. And Thomas Smits, Universiteit Utrecht, will teach the basics of convolutional neural networks for images, based on the image classification pipeline used for his Chronic project.
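As background for the CNN basics covered in this track, the core operation of a convolutional layer, ignoring training, padding, and nonlinearities, can be sketched in a few lines of NumPy. The tiny image and vertical-edge filter below are hypothetical examples, not workshop materials:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation, as in
    most deep learning libraries): slide the kernel over the image and
    take an elementwise product-sum at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny 4x4 "image": dark left half, bright right half.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# Hypothetical vertical-edge filter: responds where brightness
# increases from left to right.
kernel = np.array([
    [-1.0, 1.0],
    [-1.0, 1.0],
])

edges = conv2d(image, kernel)
```

The output is large only along the dark-to-bright boundary; a CNN learns many such filters from data rather than having them specified by hand.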