Prompt Injection

A cyberattack that supplies crafted input into an LLM’s context or external content to manipulate model behavior, causing data exfiltration, unsafe outputs or bypassing configured safety instructions.

Featured

Upcoming Training Events

0 AM