Michael Hanegan and Chris Rosser – Generative AI and Libraries: Claiming Our Place in the Center of a Shared Future

Steve Thomas: Michael and Chris, welcome to the podcast.

Michael Hanegan: Thanks for having us.

Chris Rosser: Yeah, thank you very much. It’s very, very good to be here with you, Steve.

Steve Thomas: As you identify in the book, AI is what’s called an arrival technology. Can you tell listeners what arrival technologies are and then maybe even specify, what is generative AI in the larger Venn diagram of AI?

Michael Hanegan: The primary categories that we could think about are what we call arrival technologies and adoption technologies. Adoption technologies are technologies that we choose to participate in. Think about cell phones. We all kinda remember the first time we got a cell phone, and it wasn’t necessarily something that we had to do. It didn’t necessarily shape or reshape our lives in those early days whether or not we participated. An arrival technology reshapes the world in which you live whether or not you participate. So think about electrification or the steam engine. There are these rare technological breakthroughs that shape and reshape the entire world you live in whether or not you choose personally to engage. And generative AI is one of those technologies.

Part of what this means is that, while we still have the choice to opt in or opt out, we don’t have a choice about the degree to which the world we live in is shaped by these things. And so part of what we’re trying to say in this book is that, while libraries certainly have enough agency and autonomy to just say “no,” the world itself will change, and we hope that libraries will assert themselves in shaping that change instead of simply experiencing the transformation that’s gonna come from this technology.

Chris Rosser: And then I’ll just say that generative artificial intelligence is a kind of arrival technology that is capable of producing new text, images, video, and other content in various modalities. And when we say that it’s capable of creating new content, what it’s doing is prediction: it’s a predictive technology, and it’s predicting what language patterns are going to follow or be most appropriate, depending on the input.

And what’s really interesting and very important for us to understand is that when we say it’s creating new content, that’s because it’s doing something similar to what people do. If you and I have a conversation about the same topic twice, maybe even within the span of 30 minutes, the way that we talk about it is probably not going to be verbatim from how we talked about it before. What generative AI is doing is actually creating new content, new patterns of language, which is different from plagiarism.

I think that sometimes there’s a misunderstanding that all AI is doing is going out, stealing language from others, and then bringing it back as if it’s its own. That’s not what’s going on. It’s actually more similar to the way that humans create novel text or video. We do that by taking all that has come before, all that is available to us, and then repackaging it in different ways for the context. Generative AI is an arrival technology that we are going to have to deal with and interact with and understand, especially in the world of libraries, where our service is to the communities and the institutions where we live.
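
To make that prediction idea concrete, here’s a minimal sketch of the next-token loop Chris describes, with a toy vocabulary and made-up probabilities standing in for a real model:

```python
import random

# Toy sketch of next-token prediction: the model assigns a probability to each
# candidate continuation and samples one, so the same prompt can yield
# different but plausible text on each run: new content, not retrieval.
# The vocabulary and probabilities below are invented for the example.
next_token_probs = {
    ("libraries", "are"): {"trusted": 0.5, "everywhere": 0.3, "changing": 0.2},
    ("are", "trusted"): {"guides": 0.6, "stewards": 0.4},
}

def generate(context, steps):
    tokens = list(context)
    for _ in range(steps):
        dist = next_token_probs.get(tuple(tokens[-2:]))
        if dist is None:  # no learned continuation for this context
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate(["libraries", "are"], 2))  # e.g. "libraries are trusted guides"
```

Run it a few times and the output varies: the text comes fresh out of the probability distribution rather than being copied from any single source, which is the distinction Chris is drawing from plagiarism.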

Steve Thomas: When people talk about plagiarism and stealing and things like that, I’m thinking, well, that maybe happened further back in the process, when people were feeding big, huge pirated books into the language model. But it’s not the technology’s fault that it was fed work that was not paid for appropriately and all that kind of stuff. So what it’s generating isn’t necessarily stolen; the ethical problem sits with the larger companies there.

Chris Rosser: I think you’ve touched on something really, really important, because a lot of the stories that are coming out of Big Tech, for example, are stories that make it seem that this technology is a mystical or mysterious or unknown and possibly even unknowable kind of force. The way it’s talked about, it almost has a kind of semi-divine presence. And that kind of storytelling about what generative AI is might actually be useful for Big Tech, but what we have to understand is this: if bad things are gonna be done with generative AI, that is not the AI. That is not generative AI doing that. It’s people. People are the ones who have the agency. We are the ones who have the capacity and the ability to develop these tools, and to use these tools, in ways that are gonna be good for human flourishing, and for environmental and non-human flourishing, or in ways that are not.

The reason that in our book we decided not to do a deep dive on a lot of the ethical issues is, number one, because these are perennial ethical issues that humans have struggled with, wrestled with, for a very long time. They weren’t created because of or by generative AI, but they may be exacerbated by generative AI. And these ethical issues are such a moving target that it’s difficult for us to say anything about them beyond providing some orientations for how we think about these kinds of ethical issues. Frankly, we already have those kinds of orientations available to us. The library core values, the ethics of librarianship, all of these are going to inform how we engage with generative artificial intelligence.

Michael Hanegan: One example of this: I teach an undergraduate course on the social theory, impact, and ethics of AI, and one of the articles that I give to my students is an “ethics of autonomous vehicles” piece from 2020 that asks, is it moral to allow cars on the road when we don’t know how they’ll choose between passenger and pedestrian? At the time it was a genuinely open question: we didn’t know if autonomous vehicles were safe, and therefore whether they were ethical. In 2025, we know. Sometimes ethical questions take hundreds of years for us to sort out. Here the answer is unambiguous. Autonomous vehicles are infinitely safer.

Now the ethical question is not, “Should we allow autonomous vehicles on the road?” The ethical question is, “Should I allow my mom to drive?” So even the way in which we’ve had to navigate the pace of change, these ethical questions fundamentally kind of move along that same pace and trajectory.

So I think the other reason that it’s important not to try to settle these questions in book form (there are lots of conversations to have about ethics, and those are always valuable) is that we’re used to technology that advances incrementally, so it’s pretty easy to keep your pulse on the way things move. These tools don’t advance incrementally.

As a practical example, if we had written about the power consumption of a large language model when we were drafting our text last spring, the models now, six months later, are a hundred times more efficient. And so part of the challenge is that, in writing a book, the text is stuck. That text is never gonna move, and the problem is that the questions and the underlying realities move constantly.

This is why we hope that libraries can begin to think meaningfully about how they engage in these questions, the importance of putting themselves right in the thick of these questions, but without trying to provide answers in a place that’s always kind of running along.

Steve Thomas: And you even stipulate basically that librarians are the ones that should be doing this because we have those ethics and because we are trusted, we need to be part of the conversation so that we can steer it in that direction. So it’s kind of like, if we’re not in that conversation, then all this expertise that we already have is not gonna be part of the AI conversation. We’re not gonna be able to shape it. We’re gonna just keep complaining about it and then it’s just gonna get worse and worse from our point of view.

Chris Rosser: The truth is that we are using it, and many times in ways that we don’t even recognize. AI has been here for a long time. Generative AI, of course, came onto the scene and was popularized by ChatGPT, but here’s the thing. We really do understand the reaction to generative AI that says, let’s hesitate, let’s wait and see, and I think there can be some wisdom in that, though there’s also danger. But there’s also the reaction that is really just a refusal: we’re not gonna touch this. And I think a lot of librarians believe that it is actually virtuous for us to say, “No, we’re not gonna use this!”

Here’s the problem, though. One of the problems is that the stories coming out of Big Tech and elsewhere do something called fetishizing. They fetishize the technology in a way that imbues it with this supernatural or semi-divine quality: it’s its own entity that we can’t control. Part of the problem is that as those stories come out from Big Tech, when librarians and libraries say, “Nope, we’re not gonna use it,” what that does is provide the necessary nemesis for that particular story to become normalized in society. We become the villain. So then the story becomes, “Oh look, libraries are progressive. Progressives aren’t willing to use this kind of technology. Oh, but look at what the moderate choice is. We’re gonna use these tools and we’re gonna transform learning and work.”

And so there’s a real danger in libraries not working, as part of our agency, to claim the narrative as well. I mean, we can talk about AI in ways that promote the ethical use of these tools, the critically optimistic, frankly, but nevertheless critical use of these tools, in ways that push against the narrative that your privacy is just part of the collateral damage of the convenience that these tools are offering you. We can tell a different story, and we do believe that libraries have agency to help shape the development, use, and integration of these tools.

Steve Thomas: That’s even in the subtitle of your book, that libraries should claim the center.

Chris Rosser: I’ll be honest with you, I don’t trust anybody right now to do a better job of helping society at large and the communities that we work with at that small scale. I don’t trust anybody more than libraries right now to provide a very, very important and meaningful voice in this. And here’s part of the reason we think libraries should be at the very center of this: we are trusted guides. We have been ethical stewards. I know that there is a lot of static right now about the relationships of libraries to society, et cetera. There’s a lot of static, but underneath all of that, it still remains true that libraries are an incredibly trusted part of the social fabric.

We’re also everywhere. I mean, we are in most communities, and I know that we’re not everywhere, everywhere. But we’re in institutions, we are in communities, and so we have presence. The difficulty for us is going to be leveraging our one voice at that ecosystem level. There’s a real disconnect between school libraries and public libraries and academic and special libraries, et cetera. And if we can unite at that ecosystem level, I really believe that we can be at the center of these conversations. And we can also be at the center of helping to guide the development and integration of these tools.

When we’ve gone and talked about this, especially when we’ve gone and talked about the book at different spaces, conferences, et cetera, we are met with a lot of skepticism that libraries and librarians actually have the agency that Michael and I claim we have in this book. And my response to that is: libraries historically have proven that we are able to get things done. We are able to change and shape society in particular ways. But more importantly than that, if you don’t think we have the agency and you’re unwilling to try, you are correct. We do not have the agency. We might as well just walk away right now. But we gotta try. We’ve gotta unite, we’ve gotta talk to each other, and we certainly can’t devour each other. We can’t attack each other ’cause that truly is the path to obsolescence.

Steve Thomas: Well, how did the two of you first become interested in this intersection of libraries and AI, and how did you start working together?

Michael Hanegan: We’ve been friends for a long time and have done a lot of work together. I am a future-of-learning-and-work scholar, so I’m interested in the way in which technology forces us to renegotiate the relationship between learning and work, which have historically been very distinct and very separate since the Industrial Revolution. You go to school and then you go to work, and this has had a couple of consequences for the worlds of learning and work. The world of learning has become increasingly disconnected from the real world of work. So you have questions about the relevancy and applicability of formal education for the world of work.

You see this in higher ed, you know: is a degree worth anything anymore? You see this if you have teenagers; you hear it all the time from high school students, who are saying, why are we even doing this? Why does this matter? And in the world of work, the separation of learning and work has led to the almost complete atrophy of the capacity for learning.

So, as an example, HR departments in the ’70s, ’80s, and ’90s were filled with people who were in instructional design and learning and development. And now it’s compliance: hire, fire, and cover the company. So now what we have is this disconnection on both sides. On the brink of the largest upskilling and reskilling in human history, neither one is ready. And so, even before generative AI, this tension was already a problem. It has just now expanded and accelerated exponentially.

A few years ago I came to Chris and said, “Hey, my work is very clear that renegotiating the world of learning and work is gonna be a mess. And you and I both know that libraries are gonna be an essential piece of this.” I’m an outsider, and Chris helps me navigate the world of libraries in a meaningful and wonderful way. But I think we’ve seen very clearly that there’s not another organization or another kind of role that is able to renegotiate that bridge between learning and work as well as libraries are.

Steve Thomas: You have a whole section of the book about implementing AI. Can you walk us through what a thoughtful implementation of AI would actually look like in a real world library?

Chris Rosser: The second part of our book is really talking about, okay, so how do we do this? How do we begin to integrate? What does it look like to get everybody on board? Collectively, libraries sometimes do have that resist-and-refuse posture when it comes to emerging technologies. Not always, for sure, and it’s not everybody in libraries. In other words, libraries aren’t a monolith.

One of the ways that we’ve been talking about this, and forgive this religiously charged language, you can think of it in other ways if you like, is that in the book we talk about two different reactions to AI. We talk about the prophets and the priests. By the so-called priestly role, we mean those who want to help maintain the important parts of the status quo, those who want to maintain traditions, those who want to help keep things stable. That is an incredibly important role, and we’ve been calling it the priestly role. There’s also the prophetic role, and these are the ones who are over here on the side, pointing us to a new horizon, a different horizon. They’re saying, “Come on, guys, you gotta come and see what’s over here, because this is the direction that we need to be moving!”

And in order to integrate AI in our libraries, we have to understand two things. One is that both of those voices are incredibly important. They are essential. So you can have the techno-optimists and you can have the techno-skeptics, and they gotta be in the same room together. That tension between the prophets and the priests has to be sustained.

But here’s the other thing we have to remember. Everyone, both prophets and priests, is gonna have to level up. Whether you’re a techno-optimist or a techno-skeptic, right now is a time where, to have a meaningful voice, even a meaningful voice of critique, you gotta understand these tools. You’ve really gotta be up to date on where things are, for example, in terms of energy consumption, environmental sustainability, all of these questions. We really need to understand where things are now, not where they were six months ago, et cetera. And we’ve got to be able to use these tools well enough that we can help our patrons, guests, students, customers use these tools as well, because that really is an essential part of our role among the communities that we serve. Michael, do you wanna talk more about integration, really answering the question that Steve asked?

Michael Hanegan: Yeah. We do have this kind of fourfold framework that we think is really important.

First and foremost, it’s gotta be human-centered. We’ve gotta think about what this does to and for human beings. These are the fundamental questions. It’s not just about how this technology is gonna work and where we bolt it onto other things. It really is about the outcomes: what will this do to and for the people that we serve?

We also suggest that we need to think about meta literacy, which is obviously a part of what librarians do. This is a wonderful framework to understand, not only the technology itself, but its implications for the way in which we will navigate learning and work in the future.

I think one of the important transformations for libraries, to help them understand why they have to claim the center, is that the real transformation of generative AI is not technological. It is the transformation of knowledge consumption and production, and that is precisely the library’s wheelhouse. What we are on the brink of is the ability to engage, at scale, the full learning of the human family. And the other places that do this work, both learning and work, are not equipped to reimagine how we do those things. Libraries are in a prime position to be a part of that, and we think meta literacy is a huge piece of that work.

The third piece of this framework is right tool, right job. If you’re using the wrong AI tools for a job that is meaningful and valuable, the results are just gonna be awful. I think we see this a lot when we travel around. We’ll see people say, “Well, I used this tool and it did a terrible job. Clearly generative AI is not any good.” And that’s their experience, but it comes from a mismatch; that’s why right tool, right job is important. I always try to remind people, tongue in cheek, that generative AI is clearly not very good: breakthroughs in this technology only won two of the Nobel Prizes in 2024. So part of this gap is about how we’re using and deploying these technologies.

And then the fourth piece is what’s called the jagged frontier. This comes from a wonderful research study out of Harvard Business School, which says the challenge here is that when you take tasks of equal difficulty, sometimes AI blows it away: A+, absolutely wonderful job. Other times it’ll just barely clear the bar, get a C, and make its way through. And then sometimes it’ll fail spectacularly. That frontier is uneven. But the other challenge is that the frontier that is uneven also advances unevenly. So part of the challenge here is that what is not possible right now may very well be possible very soon. I practically keep a “not yet” list of things where either I haven’t figured out how to do it or the technology hasn’t figured out how to do it. And I’ll tell you that in the last 12 months, there are a whole lot more strikethroughs on that list than there are things remaining.

So I think part of this is to say that the way we approach these questions is really, really important. We provide a framework for libraries to think about how they center themselves, and we try to walk them away from two different kinds of approaches. One is the top-down approach, where we say, we’re in charge, and this is how it’s gonna go. We find that those top-down approaches don’t ever work very well in technology adoption, much less with something as disruptive as generative AI. The other is just kind of a “Who wants to go? Who’s interested?” approach, where we take the people who are of high interest, maybe not even high readiness, just whoever’s excited, and that’s who we run with.

What we offer instead is an approach that we call the convening approach, which is to say, there’s room for everyone at the table to determine their proximity to the center or to the front, but the goal is to bring everyone along as we go. We think this provides a framework that enables us neither to leave people behind nor to keep in place the same systems that got us into this mess, but to really navigate how we’re gonna do this moving forward.

Chris Rosser: And Steve, I’ll just say that if your listeners are looking for something really practical, what is the practical answer to the question? We think that there are really two things that we ought to be doing right now, maybe three. One of those is that within our libraries, we need to start identifying people who could serve as a kind of AI leader within our space. What we need to do is to begin building small teams that can begin to help us dream about what’s to come. We believe that imagination is going to be the dynamo that powers this integration. In other words, it’s gonna take a lot of imagination for us to begin to understand how we can start to partner with and use these tools in ways that enhance our work and enhance librarianship. So we’ve gotta start identifying people in our communities that can serve as those leaders.

And then the second thing is that we have to start building out an infrastructure where we can help people level up. So it’s going to be education in using these tools, what we might call AI literacies. What Michael and I have said is that this is really in the library wheelhouse: meta literacy. That is AI literacy. And we can talk more about meta literacy if you want to, but we’ve gotta start building out that infrastructure where our librarians, our library workers, can begin leveling up, capacity building with AI.

And then a third really practical thing that you can do in libraries, and Michael, you might wanna say more about this, is become informed about AI beyond the sphere of librarianship. In other words, we’ve gotta start paying attention to the world of work. We’ve gotta start understanding what skills are being demanded of employees in various industries, ’cause what we’re seeing is that the kinds of skills that are needed are precisely the kinds of skills that librarians have been developing for such a long time. They’re those transferable skills: being able to think critically and evaluate information, being able to communicate effectively, being able to ask good critical questions, being able to engage with these kinds of technologies because of our digital literacies, et cetera. So librarians really have all that we need to begin this work.

In other words, we don’t have to spool up some kind of an entirely new skillset. But what we need to do is to start building out that infrastructure so that we can leverage all the skills that we’ve already been developing in order to understand, use, and integrate these tools.

Steve Thomas: You both mentioned meta literacy, and you talk about that in the book: how AI can complement it, along with the framework that is developed in the book and also the ACRL framework. Can you talk about how all of those work together and complement each other?

Chris Rosser: Sure, sure, sure. I’ll say a little bit, and then, Michael, throw in whatever you would like here. Meta literacy is something that emerges out of library world. Tom Mackey and Trudi Jacobson developed the idea of meta literacy and have, over the past decade or more, been expanding and expounding upon it.

Meta literacy is a framework that empowers us to critically evaluate, create, share, and reflect on information, and here’s the key, in collaborative digital spaces. So when meta literacy came out, this was at the emergence of Web 2.0-type engagement with the internet, when people were not only consuming but also creating information that got posted.

So the question was, how do we navigate spaces like that, where we’re collaborating in digital spaces? We’re not just consuming, but we’re also creating. This is precisely the kind of work that we’re doing with generative AI. We are creating new language, text, and content knowledge in digital spaces, but the difference here is that we’re collaborating with a machine mind, if it’s okay for me to talk like that. I’m not trying to anthropomorphize; I’m just trying to acknowledge that the experience of using generative AI feels like you are partnering with someone who is creating knowledge and content back with you, in a way very similar to what we might do with other human beings.

So it’s that collaborative piece in a digital environment that really gives meta literacy an edge over other AI literacies that have emerged. There are lots of frameworks out there. They’re all very helpful, but they primarily focus on whether or not we can understand AI, whether or not we can use AI, whether or not we can evaluate AI, and sometimes whether we can create with AI. We believe that meta literacy actually provides a way for librarians to emerge onto the scene already fully equipped to begin to provide that capacity building for AI literacy.

We were on a call several months ago, a virtual meeting like this, with ALA, and I said something like, meta literacy is AI literacy. And it was really interesting, because after that meeting, Tom Mackey, one of the theorists, emailed and said, yeah, that’s it. Let’s keep going in this direction, because meta literacy is AI literacy. So librarians are already very well equipped, with this kind of understanding and these capacities, to begin to help others level up as well.

Michael Hanegan: Yeah, I think one of the key unlocks of meta literacy is that it gives us a concrete framework to help us think about what it means to learn how to learn, which is gonna be the trademark skill of the AI era, or what we call the Age of Intelligence. The ability to rapidly acquire new learning, to adapt, to respond to changes in technology and in the world of learning, in the world of work. This skill, we know from the world of work, is the most in-demand skill in the market, and people are struggling to say, how do we build this capacity in people? How do we begin to think about how to learn how to learn? And libraries say, “Hey, we’ve been thinking about this for a couple hundred years. We have some wisdom to share here.”

Steve Thomas: You guys also introduce the STACKS model, and, first off, you know, congratulations on coming up with a good library themed acronym there, but what is that all about, and how can a library start using that?

Chris Rosser: Yeah, that’s really good. We give a little explanation of STACKS in the book. Yes, it’s a little bit of a cheesy, but library-related, acronym: STACKS stands for Strategy, Tactics, Assembly, Curation, Knowledge, and Solutions. What this essentially involves is a process that helps us do two things. One, it helps us to evaluate tools; I’m gonna talk about that in just a second, or Michael might, because the STACKS model allows us to use our values to evaluate tools. The other thing the STACKS model allows us to do, with regard to meta literacy and instruction, is to think about the task at hand by breaking it down into the skills that are required, or other kinds of needs. Once we can identify those skills and those needs, then we can begin to strategically and tactically map those to particular generative AI tools, AI-adjacent tools, or traditional tools that don’t have anything to do with AI, so that we can develop what we’re calling a stack, which is really just a constellation of tools, both AI and AI-adjacent, that we’re gonna be able to use to get the job done. But the first bit of that work is the hard work of thinking it through. What we really wanna do is map skills to tools, create a stack of tools, and then that helps us understand a workflow, a process, and then we can get the job done that’s at hand.
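
As a rough illustration of that mapping step (the task, skills, and tool names here are invented for this example, not taken from the book), a stack might be assembled like this:

```python
# A sketch of the STACKS mapping step: break a task into required skills,
# then map each skill to a tool (AI, AI-adjacent, or traditional) to
# assemble a "stack." All names below are hypothetical.
task = "Build a research guide on local history"

required_skills = ["source discovery", "summarization", "citation management"]

# Hypothetical skill-to-tool catalog; a real one would come out of your
# library's own evaluation of the tools available to it.
tool_catalog = {
    "source discovery": "library discovery layer (traditional)",
    "summarization": "general-purpose LLM chatbot (AI)",
    "citation management": "reference manager (AI-adjacent)",
}

stack = {skill: tool_catalog[skill] for skill in required_skills}

print(f"Stack for: {task}")
for skill, tool in stack.items():
    print(f"  {skill} -> {tool}")
```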

Michael Hanegan: I think part of the value of the STACKS framework is that it doesn’t approach learning as simply a technological problem. It’s not just about tools. It is about this interweaving of technology and the human. We live in an era in which technology is moving from a purely assistive or instrumental use towards a more collaborative use. This is the first time that we’ve encountered a technology that is able not only to retrieve information but actually to create it, to bring together information and insight in novel ways that wouldn’t have occurred by simply retrieving information that already exists.

Part of the power of generative AI is that, because it is a predictive tool, it’s not going and pulling the answer to your query off the shelf. It is helping to bring together a new answer, at least new in some form. All learning has resonance with all previous learning; this is one of the things we learned quickly from library world. But it’s coming together in new ways.

So part of what we recognized is that we needed a framework that holds together both the way in which humans learn, which will not fundamentally change in the near future (the way our cognitive processes work doesn’t change every six months; humans are much slower creatures to change in this way), and the reality that the technology, the tools we have, and the capabilities they open up are changing rapidly.

I think the real value of this framework is that it gives us, not always a step-by-step, linear progression, but an understanding that there are different parts of our work that we need to approach in different ways. And so we hope that this framework allows us not only to think about tools, but to think about what we’re trying to accomplish, and, more importantly, what we can now try to accomplish that would’ve previously been impossible without the power of generative AI.

Chris Rosser: Really from early on, one of the first things Michael and I did as we were trying to think through how we were gonna approach AI and AI integration was to articulate five core values that have been very, very useful to us. Those values are transparency, curiosity, rigor, inclusion, and play, and we talk about them in the book. One of the other things the STACKS framework helps us do is evaluate tools according to those values. In other words, whenever we’re trying to map the right tool to the right job, et cetera, we’re thinking about these tools in light of the transparency each one offers, not only in terms of how it’s being developed and how information is being shared, but also whether it provides the possibility for a user to share conversation threads with others and make their work transparent. That’s a really important thing for students working with instructors, for example. But we can also evaluate a tool in terms of the curiosity it fosters, the rigor it facilitates, and the inclusion, especially in terms of accessibility, that it can provide. And play: we feel that play is a really important part of what these tools should produce within us. Our experience using these tools can be playful, and I think librarians can help the communities that we serve begin to understand that and infuse our spaces with that kind of playful exploration of these tools.
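
One way to picture that evaluation step is a simple rubric over the five values; the tool name, scores, and threshold below are hypothetical, a sketch rather than the book’s method:

```python
# Minimal sketch of scoring a candidate tool against the five core values
# named in the book: transparency, curiosity, rigor, inclusion, and play.
# Tool name, reviewer scores, and the cutoff are all invented.
VALUES = ["transparency", "curiosity", "rigor", "inclusion", "play"]

candidate = {
    "name": "ExampleChat",  # hypothetical tool
    "scores": {             # 1 (poor) to 5 (strong), assigned by reviewers
        "transparency": 4,  # e.g., can users share conversation threads?
        "curiosity": 5,
        "rigor": 3,
        "inclusion": 4,     # e.g., accessibility features
        "play": 5,
    },
}

average = sum(candidate["scores"][v] for v in VALUES) / len(VALUES)
verdict = "worth piloting" if average >= 3.5 else "needs a closer look"
print(f"{candidate['name']}: average {average:.1f} -> {verdict}")
```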

Steve Thomas: Chris, you can jump in on this too, but this is mostly for Michael, ’cause you work across libraries, but also K-12 and higher ed; you’re covering education at a higher level. Are you seeing any sector in particular embracing AI better than the others?

Michael Hanegan: We’re all drowning, I think, is the key. I think part of the reason that’s true is that these things continue to move and evolve. School’s about to start or has just started in all spaces, and the tools that we were worried about last year are all being deprecated this week, because they’re no longer on the same level as the tools that are being released right now. So I think part of the challenge is that each time we look up to see where the field is at, the field has moved.

I think the other piece that’s a challenge here is that the first response in most education sectors was, how do we manage or constrain this technology, instead of, how do we think about what the future of learning is gonna be? We certainly have pockets in institutions, and we certainly have some institutions that are further along than others, who are saying, “This is an arrival technology. It’s not going anywhere. Let’s try and push towards the frontier instead of trying to push back to a time long gone.”

But I think the challenge here is not primarily a technological one. It is a human problem. It is a cultural problem, an institutional culture problem. I think the question that will separate the institutions that continue to fall behind, the institutions that reticently take a wait-and-see approach, and the institutions that push forward is fundamentally about culture. For institutions that have an academic culture primarily of suspicion and surveillance and this faculty-versus-students dynamic, it’s gonna be a disaster this year. For the institutions that are just kind of “we’re here and we’re doing our thing,” or that have a lot of diversity within the institution and let everybody choose their own adventure as it relates to AI, you’re gonna have pockets of disaster and pockets of really deep change. And then I think for institutions that are really trying to tackle this head-on, you’re gonna see transformative applications.

We already see some research on this. We see institutions that, in small spaces last academic year, provided these tools in thoughtful and carefully designed ways and had tremendous results. I think the key here is that, again, this is not a technological problem in education. This is a pedagogy and design question. So the key is not, where do we put AI? The key is, how do we design learning in a world in which AI is a part of it? And how we answer those questions will ultimately determine the fate of our institutions in this coming year.

When I work with educational institutions, we talk about this framework where we say, first it is human centered. We wanna say, what kind of people are we trying to cultivate and sustain? What kind of humans do we want to form and prepare for the world? Then we say, how does pedagogy and design help us to get to that place? And then, and only then do we say, can generative AI help?

The tech companies are telling us that generative AI is a glaze that should be poured over everything. Everything should be AI-powered. Everything that exists should have some AI component to it. And that’s just fundamentally not true. Some tech companies want us to treat it like ranch dressing: the more you put on, the better it will be, with the presumption that quantity is always an improvement. But the reality is that good pedagogy and design has always been important. And now, with these technological advances removing limits that have been on human learning forever, pedagogy and design is more important than ever.

This is one place where libraries are precisely prepared to engage in this question. What does it mean to learn? What does it mean to center human beings? Clearly we’ve learned from the last 25 years that access to information does not inevitably make better culture and better people. And so in the same way, better tools do not necessarily lead to better outcomes. This is where we’re gonna have to think about how we build, for the real people that we serve, and that we serve alongside.

Chris Rosser: Even outside of education, public libraries have been on the frontline for so many things, but public libraries, as Michael said earlier, are about to be on the frontline of that largest upskilling and reskilling in human history. That is not insignificant. And so the question then becomes, how are public libraries gonna do this? What are they gonna do to begin to equip patrons? This is another reason why it’s really important for us to be paying attention to things that are beyond librarianship, specifically related to the world of work.

We don’t know yet what’s going to happen. There’s been a lot of reporting about how industry is beginning to adopt AI, and about how that is already beginning to affect the workforce with regard to job loss or redeployment into other capacities. Some of the estimates about how traumatic and significant job loss is going to be haven’t really played out yet, so we still don’t know what’s going to happen. What we do know is this: public libraries in particular are gonna be on the front line of this. School libraries are gonna begin thinking about how to prepare our kids to enter that workforce. Academic libraries are going to be right in the center of helping to prepare people for the kinds of jobs that are going to be here in the future.

So I really want to encourage anyone who’s listening to this podcast to get a copy of Allison Pugh’s The Last Human Job, because I think that book underscores that something libraries can help empower people for is connective labor: the way that humans connect with other humans, the way that we make each other feel seen. That labor is actually gonna be so important for us in the future.

Michael Hanegan: I think one of the things that we need to keep in mind, as we think about how we begin to change our institutions or the work that we do, especially around pedagogy and design, is that the lift is not as time consuming as it feels. There is a sense in which this transformation is substantive. It may even feel existentially overwhelming in some ways. But good pedagogy and good design already get us 90% of the way there. I really do think that the primary shift for most people, especially in teaching roles or in supportive roles, which I think includes most librarians in this case, isn’t really more than 20 or 30 hours’ worth of work right now, because we’re so early in the development of these technologies. In two to three years, yeah, the curve is gonna be much higher. The amount of work it’s gonna take to get caught up is going to be significantly more. But for now, for the next six months or so, before the end of the year, that upskill, to being not necessarily an expert, because expertise is not what we need, is a manageable lift. But the consequence of that wait-and-see approach, or even that antagonistic approach, is that the lift becomes exponential really, really quickly.

The goalpost for what students think is ethical and unethical conduct around generative AI is evolving. And here’s one of the trends that we’re seeing: students essentially feel that if they’re given work that has no value, they are happy to submit work that has no value, and they find that to be morally right. So even if our institutions haven’t changed the way that we think about these red lines, these lines in the sand, our students have, they’ve said, if you give me a garbage assignment, I give you a garbage submission. And not only is that not wrong, but it’s right. I’m gonna kind of match your energy.

I really do think that one of the primary ways we navigate questions about academic integrity and academic misconduct is not better surveillance. It’s not AI detection tools, which don’t work and are an utter disaster. It is better pedagogy and design. It is building things that cultivate connection and engagement and curiosity in ways that engage learners, not in a competitive, antagonistic approach of who can catch whom, who can beat the other, but in a genuinely nurturing and human-centered place that says, how do we learn alongside and learn for the sake of, and really build a community of learning in this space?

And I think students are responsive to those things. I find in my own coursework that what I call a pedagogy of transparency, saying, here’s what I’m asking you to do and here’s why, has been profoundly transformative for my students. But it also holds me accountable, because the answer “because I said so” is no longer on the table for me. And so there are these ways in which we center human beings, their care and nurture and their curiosity and their capabilities, that I think navigate most of these problems. Cheating is not new. You could mail it in at any point in the history of human learning, just in different ways. And I think the challenge here is to build learning that people actually want to engage in. And when we do that, I think we find that most of these problems resolve themselves.

Chris Rosser: Steve, if you don’t mind, I’m gonna talk to my fellow instructional librarians, whether you’re in an academic, school, or any other kind of setting. Students are hungry for us to help them understand how to use these tools, especially as they think about the need that they’re going to have for these tools in their future careers. When I go into a class and we’re gonna do some form of basic AI literacy, the first question I ask is, how many of you are using these tools? Most students are using ChatGPT. However, when I ask that question, only a few hands go up, and it’s really interesting. So I started saying, “Guys, I’m not a priest. This is not a confessional. How many of you are using these tools?” And when I say that, a hundred percent of the hands go up.

And the instructor is often very surprised by that, ’cause they don’t realize their students are already using the tools. They’re learning how to use these tools that the instructors, in a lot of cases, have said, do not touch. It’s that kind of refusal. So we instructional librarians really have to understand that we are the ones who can help students and instructors alike begin to understand how to use these tools in ways that supercharge them as learners, not in ways that outsource their learning or their thinking. We can be right there, not just to help the students, but also to help our fellow instructors, professors, and teachers begin to rethink life with generative artificial intelligence in the classroom.

Steve Thomas: Well, I wanted to wrap up with a little experiment. I asked four different chat bots the same question to see how they compare with each other. I used ChatGPT, Gemini, Claude, and Copilot. And I asked them basically, “What are the pros and cons of librarians using AI in their work?” They all talked about efficiency. They all talked about enhanced user services. They all talked about data-driven decision making. One thing that was interesting was all of them except ChatGPT mentioned accessibility. Gemini is the only one that mentioned any kind of digital preservation.

Claude is the only one that mentioned becoming more dependent on vendors as a con for using AI, because vendors will start using it more and you’ll get more lock-in. I knew misinformation, accuracy, and reliability were gonna be a con for all of them, but Copilot doesn’t mention it once. It’s just like, “Yeah, I’m cool. I don’t ever make mistakes.” And then the other cons were ethical and privacy concerns, the digital divide, high implementation costs, job concerns, things like that. They were mostly all on the same page there and hit a lot of the same things that we’ve talked about here today. So I guess they know what we’re talking about.
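
For listeners who want to repeat Steve’s experiment, here’s a rough sketch of the tallying step. The responses are placeholders to paste over by hand, and the theme keywords are a simplification of the ones he mentions:

```python
# Sketch of the comparison: paste each chatbot's answer in, then tally
# which themes each one mentions. Keyword lists are illustrative only.
responses = {
    "ChatGPT": "...paste the full response here...",
    "Gemini": "...paste the full response here...",
    "Claude": "...paste the full response here...",
    "Copilot": "...paste the full response here...",
}

themes = {
    "efficiency": ["efficiency", "time-saving"],
    "accessibility": ["accessibility", "accessible"],
    "vendor lock-in": ["vendor", "lock-in"],
    "misinformation": ["misinformation", "accuracy", "hallucination"],
}

for model, text in responses.items():
    lowered = text.lower()
    hits = [t for t, kws in themes.items() if any(k in lowered for k in kws)]
    print(f"{model}: mentions {', '.join(hits) or 'none of the tracked themes'}")
```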

Chris Rosser: Nice. And I’ll say, Steve, well done. That was a good idea, comparing the outputs of those various tools. I will say, about Claude, and Claude is my favorite, I would say my buddy: you know what Claude said about how perhaps we’re going to see more dependency on vendors? I’d like to say that actually, I think the opposite is gonna be true. If the library ecosystem can come together and with one voice begin to leverage the influence that we have with vendors, I think we can actually shape the development of those tools. And we’re already starting to see that in various areas. We don’t have to be reliant upon the vendors themselves to just turn on a tool and say, “Here, libraries, go to town, and now it’s gonna cost you this much more because we turned on the tool.” Libraries really need to put ourselves in the middle of working with vendors to develop those very kinds of tools, to help them understand what it is that we want, what it is that we need, and what it is that we are not going to be willing to put up with.

Michael Hanegan: Well, I just think the open-source realities are gonna make this less worrisome. We have state-of-the-art open-source models that have come out this week. And so, as we continue to move and evolve, how we engage will change and move and evolve as well. This is why we’re really asking libraries to take seriously their position both as information scientists and as people whose profession is deeply grounded in this human-centered ethics.

These are the longstanding traditions and sources of wisdom and expertise that are gonna be important to navigating that ever-changing dynamic of how the technology itself moves. And so there are parts of library world that are infinitely durable in a world where change is gonna be perpetual, very, very shortly.

Steve Thomas: For the people who are feeling like, “Oh, the people who are into AI just don’t care about that stuff”: if people truly don’t care about the ethics, then yes, don’t listen to them. But I think the whole point of this book is to have an ethical implementation, and the whole point is to get involved to make it ethical. So it’s the people who are abandoning ship who are letting it drift off and letting the tech bros basically decide what it does in the future, and again, all they wanna do is juice their stock price, not actually make it ethical.

Michael Hanegan: This is one place where libraries are gonna have to pivot a little bit. Libraries are used to having very static sources of information to navigate these questions. What’s in the peer-reviewed databases? What’s in the books? This world moves so quickly that most of the best research about ethics and tech and implications and social factors is all preprints. It’s not even gonna go through peer review; it’s not useful nine months later when it comes out in a peer-reviewed journal. So I think even the ways that librarians engage with information about how we navigate all of this stuff are gonna have to change a little bit, but I’m optimistic that librarians are both capable and essential.

Steve Thomas: Well, and I will say we’re recording this on August 8th, and the situation is probably completely different from the time you listen to this episode!

Chris Rosser: Yeah, that’s the speed of progress right now. And I understand the resistance. I understand the refusal, but we cannot cede this space. This is why librarians, library workers, libraries really have to move to the center here because we cannot cede this conversation. We cannot throw up our hands and say, “Let it burn!” That is not a generative approach. We gotta find the way forward, and we think, libraries are very well suited to help be the guides that we have always been.

Steve Thomas: Yeah, and I imagine you all got to explore that a little bit, ’cause a lot of your work is based on, I know you wrote a white paper about using it in theological education, so I’m sure that kind of ethical consideration probably comes up a lot in that.

Chris Rosser: Yeah, for sure. We were asked at a conference recently, and I think the question was asked with some surprise, “You have theological training, so shouldn’t you be up in arms about this?” And of course we are up in arms about the things that are being done that are bad for flourishing, both human and non-human. I mean, read Empire of AI, which was recently published; that will be one that lights those fires. But this is not a moment where we can just rest on our laurels. We have to be engaged, we have to be in the thick of this.

And again, I just wanna say I trust librarians and library workers. I trust library world to be the trusted guides at this moment, the ethical stewards at this moment, those who already have everything we need to help inform this moment. That’s who we are, and I think that we can move to the center of these conversations.