Users, Bias, and Sustainability in AI

By Audrey Altman, May 20, 2021.

Thoughts from the NARA – Virginia Tech Workshops

How can artificial intelligence and machine learning help organize, describe, and provide access to the growing volume of materials in digital libraries and archives?  This was the central question of a workshop series hosted by the National Archives and Records Administration (NARA) and the University Libraries at Virginia Tech (VT).

Over the past few years, NARA has had such an enormous influx of digital materials that archivists are looking for new forms of computational assistance to process records and make them discoverable in their public catalog.  Sylvester Johnson, Assistant Vice Provost for the Humanities at VT, observed that democracy relies on citizens being able to understand what the government is doing, and for that, they need the archives.  NARA’s current tools and workflows were not designed to handle the sheer volume of digital information, which hampers the archive’s ability to provide public access.  NARA is looking to AI and machine learning for new ways to process digital information, and to assist both their staff and their diverse user communities.

Workshop participants included experts and practitioners in libraries, computer science, education, and the humanities.  While NARA’s use cases served as a focal point, the topics of discussion, including user needs, bias, and sustainability in AI, are relevant across libraries and archives.  These are some of my reflections on the many interesting presentations and conversations held over the duration of the conference.

Users and Use Cases

NARA staff gave many examples of specific use cases in which AI might be helpful for different user groups.  Examining such a wide diversity of use cases drove home the importance of considering AI projects in context, and assessing the risk that AI could produce imperfect or biased results within these contexts.  Depending on who is using an AI and for what purpose, the consequences of receiving inaccurate information are vastly different.

For example, NARA could use an AI to do a first pass through a very large collection of documents and extract information relevant to cataloging, such as broad topical categories found through clustering, or names of people and places found through named-entity recognition.  Such computations would help archivists describe materials more efficiently and make materials accessible to the public sooner.  In this case, a small group of expert users would interact with an AI over a relatively long period of time, learning its strengths and weaknesses through repeated experience.  Trained archivists could tolerate a fair bit of imperfection from the AI because they could rely on their professional expertise to distinguish between good and poor results.  They could even work with data scientists to improve the AI’s performance.

By contrast, military veterans who rely on NARA for documentation about their own service history would have virtually no tolerance for imperfect systems, since they need accurate information to secure benefits, employment, and other vital services.  In this use case, the necessity for accurate, complete information is so great that it may be too risky to use AI at all, requiring instead the expertise and total oversight of a human archivist.

Bias and Oversight

Detecting, understanding, and ameliorating biases are significant challenges in AI.  Tanu Mitra, Assistant Professor at the University of Washington, spoke about her experience conducting audits of platforms like YouTube and Amazon to uncover biases related to misinformation and conspiracy theories (see Mitra’s publications to learn more).

Cultural heritage institutions would also benefit from bias audits, even before they start incorporating AI into their systems.  Conducting a good audit will require that we learn to ask the right questions of our systems.  As Mitra cautioned, one cannot test everything, so it is important to craft experiments that will reveal precise, actionable information.

In the context of library and archival collections, the raw data for AI is often drawn from historical documents.  This presents a real challenge, since the historical record is rife with prejudices of all sorts.  Reframing the question from eradicating bias to promoting inclusiveness is helpful because it challenges us to confront our history with grace and empathy, while making space for a diversity of voices and perspectives.  Incorporating inclusiveness into complex technical systems is a challenge that the DPLA network is actively exploring.  

Mitra’s talk also raised important questions about governance and oversight.  Who should be tasked with monitoring library AIs for bias, and how can they be empowered to do their work and bring about necessary change?  Should governance come from within our organizations, from third-party auditors, or from public user communities?  These questions do not have ready answers, but I look forward to hearing about the different approaches our communities adopt as they establish institutional AI practices.


Sustainability
Patricia Hswe, Program Officer for Public Knowledge at the Andrew W. Mellon Foundation, presented on the importance of project maintenance.  Hswe observed that it is much easier to generate excitement around innovation, but it is equally important to have a plan in place to transition projects into maintenance mode after the initial development stage.  This transition can be particularly challenging when the people tasked with maintenance are not the ones who were involved in the initial development.

AI projects may require sustainability models that combine long-term maintenance with cycles of re-innovation.  To ensure that bias does not creep into AI models, they will need to be monitored regularly, and may need to be adjusted as new data is introduced into the system or as users interact with the models in new ways.  AI technologies also advance quickly, creating opportunities for continued innovation.
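One lightweight way to perform the kind of regular monitoring described above, assuming an institution keeps periodic snapshots of a model’s output labels, is to compare the label distribution of a new batch against a baseline.  The sketch below uses total variation distance for this; the function names and the alert threshold are my own illustrative choices, not a prescribed method.

```python
from collections import Counter

def label_distribution(labels):
    """Relative frequency of each label in a snapshot of model outputs."""
    total = len(labels)
    return {label: n / total for label, n in Counter(labels).items()}

def total_variation(p, q):
    """Total variation distance between two label distributions (0 to 1)."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def drift_alert(baseline, current, threshold=0.2):
    """Flag when the current output distribution has moved far from baseline.
    The 0.2 threshold is arbitrary; a real deployment would tune it."""
    return total_variation(label_distribution(baseline),
                           label_distribution(current)) > threshold

baseline = ["map"] * 80 + ["photo"] * 20   # historical mix of predicted labels
current = ["map"] * 50 + ["photo"] * 50    # a new batch that skews differently
print(drift_alert(baseline, current))      # True
```

A drift alert of this kind does not say the model is biased, only that its behavior has shifted enough to merit the human review discussed throughout the workshop.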

To oversee ongoing maintenance and innovation, library professionals need interdisciplinary cross-training in areas including AI technologies, metadata practices, and research methods.  This will enable them to identify potential problems and opportunities for improvement, and to facilitate necessary conversations among library staff, vendors, and user communities.  Projects could also benefit from active user and developer communities, who can work in a decentralized fashion to design for local use cases, solve specific problems, and create new features.

I am excited to see how the library community addresses questions of user context, bias detection and mitigation, and long-term sustainability as it adopts machine learning into its systems and workflows.  I am particularly interested to see which innovations will help NARA in its critical work of making records of our nation’s government available to the public.

The DPLA community is holding a conversation about Algorithms and Justice on May 27th at 1pm ET. If your institution is a DPLA member, you can register here.
