Artificial intelligence (AI) can serve libraries and archives as a powerful tool for enhancing metadata, improving search and discovery, recommending resources, powering library chatbots, and more. However, AI systems may incorporate surveillance technologies that threaten user privacy, and AI often reflects the biases of our society because it is trained on biased data. This 60-minute short demonstration will feature a tool for the responsible implementation of AI in libraries and archives, developed as an outcome of the IMLS-funded Responsible AI project, a grant that examines the tension between innovating library services and protecting library communities. The Responsible AI team will lead participants through a demonstration of the tool and solicit feedback from the data science community on its features and limitations, with the resulting discussion potentially informing AI software development and technology implementation.