null Global Committee elections are coming up! See the election repository for more information.

1709028628722

Abstract

Outcome of Red Teaming LLM applications Session:

During the Red Teaming LLM applications session, participants will acquire valuable insights into LLM (Language Model) applications, particularly those based on the RAG (Retrieval-Augmented Generation) approach.

Key highlights include:

  1. Understanding LLM Applications: Participants will gain a comprehensive understanding of LLM applications, emphasizing the RAG approach. This includes exploring how retrieval mechanisms enhance the generation process.

  2. Security Evaluation: The session will underscore the importance of evaluating the security aspects of LLM applications.

  3. Challenges with Foundational Models: We’ll discuss major challenges associated with foundational models and shedding light on their limitations.

  4. Giskard Open Source Library: Use an open source library from Giskard to help automate LLM red-teaming methods.

  5. Hands-On Learning: To enhance practical skills, a vulnerable LLM application code will be made available on my GitHub repository. Participants can actively engage in simulated attacks, gaining firsthand experience in identifying and exploiting weaknesses.

  6. Additional Resources:
    6.1. Skeleton Key Attack insight
    6.2. Participants will receive recommendations and resources for building production-level LLM applications using the RAG approach. These resources aim to empower developers and researchers in creating robust LLM applications.

Speaker

Ankit Patel

Timing

Starts at Saturday July 20 2024, 10:00 AM. The sessions runs for about 1 hour.

Resources