← Back
EN
|
SK
Question 1 / 20
0%
What characterizes a "Prompt Injection" attack in an LLM (Large Language Models) environment?
Writing prompts exclusively in all caps
Disconnecting the model from the internet
Changing the color of the AI user interface
Manipulating input so the model bypasses its safety filters and executes undesired commands
Physical damage to graphics cards
Next