Responsible & Safe AI
Abstract: In a world increasingly shaped by AI technologies, including influential models like ChatGPT and DALL·E, applications of AI in natural language understanding, image generation, and more have become prominent. AI models are increasingly deployed across applications, from enhancing customer service interactions to enabling creative image generation and autonomous vehicles. One can now use specialized GPTs for a variety of use cases through an App Store-like AI marketplace, straight out of the box. However, with great power comes great responsibility, and it is imperative to acknowledge the issues of ethics, responsibility, and fairness that accompany these powerful systems. Zooming out to the bigger picture, this talk will address the criticality of identifying requirements for ethics, responsibility, and auditing in AI systems. The discussion will encompass a comprehensive exploration of AI, spanning computational, cultural, and legal dimensions. This includes the potential harms arising from modern AI capabilities in experimental and real-world contexts; ongoing research projects and ideas for enhancing AI system safety; specific projects on addressing bias in Large Language Models (LLMs) and their effective application in legal contexts; and methods to improve interpretability and consistency and to remove harmful labels and knowledge from these models.