Red Teaming LLM applications null Hyderabad Meet 20 July 2024 Monthly Meet

Abstract
Outcome of Red Teaming LLM applications Session:
During the Red Teaming LLM applications session, participants will acquire valuable insights into LLM (Language Model) applications, particularly those based on the RAG (Retrieval-Augmented Generation) approach.
Key highlights include:
Understanding LLM Applications: Participants will gain a comprehensive understanding of LLM applications, emphasizing the RAG approach. This includes exploring how retrieval mechanisms enhance the generation process.
Security Evaluation: The session will underscore the importance of evaluating the security aspects of LLM applications.
Challenges with Foundational Models: We’ll discuss major challenges associated with foundational models and shedding light on their limitations.
Giskard Open Source Library: Use an open source library from Giskard to help automate LLM red-teaming methods.
Hands-On Learning: To enhance practical skills, a vulnerable LLM application code will be made available on my GitHub repository. Participants can actively engage in simulated attacks, gaining firsthand experience in identifying and exploiting weaknesses.
Additional Resources:
6.1. Skeleton Key Attack insight
6.2. Participants will receive recommendations and resources for building production-level LLM applications using the RAG approach. These resources aim to empower developers and researchers in creating robust LLM applications.
Speaker
Timing
Starts at Saturday July 20 2024, 10:00 AM. The sessions runs for about 1 hour.