Delve into the inner workings of Large Language Models (LLMs) and the vulnerabilities they face.
Beginning with an introduction to LLM fundamentals, we explore the meaning behind the acronym GPT and key concepts such as Bayes' Theorem, Supervised Fine-Tuning, and Reinforcement Learning from Human Feedback (RLHF), illustrating how these techniques train models to talk, to be helpful, and to be ethical.
We then examine a range of common attacks on LLMs, and discuss strategies for mitigating, detecting, and responding to these threats. This presentation aims to equip attendees with an understanding of the risks associated with LLMs and practical guidance on enhancing their security and reliability.
This session will be held by Jim Simpson. Jim is a seasoned cybersecurity professional with extensive experience leading cyber threat intelligence initiatives across the industry. As the Principal Threat Intelligence Analyst at HiddenLayer, Jim focuses on strengthening AI and machine learning security through comprehensive threat analysis and tailored intelligence creation.
He is responsible for curating high-quality intelligence collections, automating data exploitation processes, and delivering strategic insights that enhance clients' security posture. Previously, Jim held senior leadership roles at SearchLight Cyber and BlackBerry Cylance, where he developed advanced threat detection frameworks and led intelligence teams in addressing complex, real-world cyber threats. His industry expertise and hands-on approach have made him a respected figure in the fields of cyber threat intelligence and AI security.
If you want to understand AI, its security, and its pitfalls, you should not miss this session!