Abstract

"LLM Security: Can Large Language Models be Hacked?" explores the vulnerabilities and potential attack vectors targeting large language models. This discussion will cover the security challenges, possible exploits, and mitigation strategies to safeguard these advanced AI systems.

Speaker

Sneharghya

Timing

Starts on Saturday, July 06, 2024, at 01:30 PM. The session runs for 30 minutes.

Resources