Artificial intelligence and the library of the future, revisited
There are two breakthrough technologies catching fire on campus these days. One of them, CRISPR-Cas9, is changing our relationship to the physical world through gene editing. The other, Artificial Intelligence (AI), is changing how we generate, process and analyze information.
AI already touches many of our daily computing activities, from searching the web to managing spam in email applications. It underlies the speech recognition that lets Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa, and the Google Assistant process and respond, with some success, to our queries. It provides the computer vision that helps self-driving cars and food delivery robots navigate our streets and sidewalks. The fundamental activity driving these varied applications of AI is search within a large space of possibilities. It is not deep cognition but perceptual recognition. The power lies in the fact that machines can recognize patterns efficiently and routinely, at a scale and speed that humans cannot approach.
Though the underlying AI technologies that make all these applications possible have existed since the 1970s and 80s, AI has really taken off in the last decade, applied to search within images, sound, and text. Natural Language Processing (NLP), a field at the intersection of linguistics and computer science, has opened entirely new avenues of research across the university, making it possible to mine large corpora, identify topics, recognize named entities (people, places, and things), and perform sentiment analysis. Computer Vision is an interdisciplinary branch of AI that draws on several domains, including physics, signal processing, and neurobiology, to understand images and video. Machine learning dramatically accelerates statistical pattern recognition by learning from examples. Research to predict crop yield from remote sensing data, diagnose heart arrhythmias, and read 2,000-year-old papyri carbonized by the eruption of Vesuvius uses machine learning in combination with other AI technologies.
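To make "learning from examples" concrete, here is a minimal sketch of one of the NLP tasks mentioned above, sentiment analysis, using scikit-learn (an assumption; the article names no specific toolkit). The tiny training set is invented for illustration; real systems learn from vastly larger corpora.

```python
# A toy supervised sentiment classifier. The machine learns statistical
# word-label associations from a handful of labeled example reviews.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented, hand-labeled training examples (the "careful curation" part).
train_texts = [
    "a wonderful, insightful book",
    "brilliant and moving prose",
    "dull, tedious, and badly argued",
    "a disappointing and shallow treatment",
]
train_labels = ["positive", "positive", "negative", "negative"]

# Bag-of-words features feed a Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

# Classify a sentence the model has never seen.
print(model.predict(["an insightful and moving book"])[0])
```

The same pattern, features extracted from text plus a statistical classifier trained on labeled examples, underlies topic identification and named entity recognition as well, just with richer features and far more data.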
The term machine learning suggests that the machine is teaching itself. But the most common learning techniques are supervised, requiring a tremendous amount of human work and the careful curation of training data. The combination of massive amounts of data, accelerated computing power, and a deeper understanding of how we learn has created the conditions for successful applications of multi-layered machine learning, known as deep learning, that use Artificial Neural Network architectures. Inspired by the way neurons function within the brain, neural networks can learn features over time and begin to characterize them. This approach has made unsupervised learning possible. A great deal of data is still needed, but it need not be human-curated training sets. In the cat example popularized by Andrew Ng’s team, instead of feeding the machine millions of images of cats so that it can recognize the features common to a cat, unsupervised learning works by letting the machine create its own classifications of an essentially infinite number of images. Then, when presented with a picture that we have labeled ‘cat,’ it matches that image to its own classification system and returns images that match our labeled image.
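The matching step in the cat example can be sketched in miniature with k-means clustering, used here as a simple stand-in for the large-scale neural approach in Ng's experiment. The feature vectors below are invented toy data; imagine each point as a compact description an algorithm has extracted from an image.

```python
# Unsupervised "cat matching" sketch: cluster unlabeled data, then match
# one human-labeled example to the machine's own categories.
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled "images": two natural groups the machine has never seen labels for.
unlabeled = np.array([
    [0.9, 1.1], [1.0, 0.9], [1.1, 1.0],   # one visual pattern
    [5.0, 5.1], [4.9, 5.0], [5.1, 4.9],   # a different visual pattern
])

# Step 1: the machine builds its own classification, with no human labels.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(unlabeled)

# Step 2: we present a single image that *we* have labeled 'cat'.
cat_example = np.array([[1.0, 1.0]])
cat_cluster = model.predict(cat_example)[0]

# Step 3: return every unlabeled image the machine filed in the same cluster.
matches = unlabeled[model.labels_ == cat_cluster]
print(len(matches))  # the three points near (1, 1)
```

The human label is applied only at the end, to name a category the machine formed on its own, which is what distinguishes this from the supervised approach above.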
Earlier this year at the Stanford Graduate School of Business, Andrew Ng declared AI ‘the new electricity,’ with the potential to transform nearly every industry and deeply influence how we live and work. At Stanford Libraries we are considering the profound change AI could bring as a power tool for scholarship, making our vast library collections discoverable, searchable, and analyzable in new ways. Thinking about the transformation AI could bring to libraries is by no means new, but the conditions are ripe for us to revisit and rethink Ed Feigenbaum’s vision of the Library of the Future. How best do we bring the skills and knowledge of library staff, scholars, and students together to design an intelligent information system that respects the sources, engages critical inquiry, fosters imagination, and supports human learning and knowledge creation?
(Webcomic from Abstruse Goose via Peter Norvig)
 Recent research at Stanford found speech interfaces three times faster than typing on mobile devices, and work in this area has increased rapidly.
 In a great TEDx Boston talk from October 2016 titled “When Machines Have Ideas,” Ben Vigoda recounts how, 29 years earlier, he was at Stanford writing machine learning algorithms using Artificial Neural Networks trained with the backpropagation algorithm.
 The study on crop yields was led by Stefano Ermon at Stanford. The work on detecting heart arrhythmias was done by Andrew Ng’s students at Stanford, and the work on reading papyri was led by Brent Seales at the University of Kentucky.
 Success can require millions of images, and this only became possible because of projects like ImageNet (2009), which collected 15 million images in 22,000 categories.
 Find “Toward the Library of the Future” in the Edward A. Feigenbaum Papers exhibit from Stanford Libraries: https://exhibits.stanford.edu/feigenbaum, and more in the History of Artificial Intelligence exhibit.