CARLI Instruction Committee article discussion: Algorithmic Literacy

Friday, February 20, 2026 - 10:00am to 11:00am

As part of this year’s theme, “Trust Us: The Role of Library Instruction in Transforming Landscapes,” the Instruction Committee is exploring how trust shows up in our instruction work.

The CARLI Instruction Committee invites you to join our discussion on Algorithmic Literacy.

To spark discussion, we have selected a short article, Michael Ridley’s “Explainable AI: An Agenda for Explainability Activism.” If you can read the article (linked below) in advance, wonderful; if not, please still come! We will begin the discussion with a summary of the article for those who may not have had time to read it, and we invite all to participate.

We hope you will come with your own perspectives and questions. Instruction Committee members will moderate the discussion and offer questions to guide our conversation.

Please register using the link above.

When: Friday, February 20, 2026, 10:00am-11:00am
Article Title: Explainable AI: An Agenda for Explainability Activism
Author: Michael Ridley
Link: https://crln.acrl.org/index.php/crlnews/article/view/26733/34650

Summary:
Ridley argues that the opacity of generative AI makes explanation crucial work for librarians, who must serve as "explainability activists," creating actionable and contestable explanations of how AI functions. To enable this relationship, Ridley argues, interactions with AI should be "seamful": the limitations and boundaries of a system should be clearly visible to users, encouraging their "active self-explanation" of systems as they use them. Focusing on this agenda of human-centered explainable AI (HCXAI) creates an "action-agenda" for libraries, Ridley argues, including critical information and AI literacy initiatives for staff and patrons, support for explainable AI research, demands for explainability (and, perhaps, seamfulness) from vendors, and advocacy for federal and international regulation requiring explainability from AI and information providers. Doing so would address a shifting power dynamic in which, currently, technology designers shape the ways in which we understand the "authority, credibility, and accuracy" of the information we receive.