⁍ ⁍ ⁍ ⁍

VulnMag

Vulnerability Quick Guides

Quick Guides

Top vulnerabilities by class:

  • 01 Prompt Injection

    This is when a user manipulates an LLM through crafted prompts, causing it to deviate from its intended behavior or reveal sensitive information. It's akin to SQL injection, but for language models.

    R/M Implement robust input validation and context awareness to prevent prompt manipulation.

  • 02 Sensitive Information Disclosure

    LLMs can inadvertently expose confidential data they were trained on or have access to, through poorly designed prompts or outputs. This can include personal information, proprietary data, or API keys.

    R/M Sanitize training data and implement strict access controls on model outputs.

  • 03 Supply Chain

    LLMs rely on vast datasets and pre-trained models. Vulnerabilities in these components, or the tools used to build them, can propagate into the LLM itself, creating security risks.

    R/M Audit and secure all components of the LLM development pipeline.

  • 04 Data & Model Poisoning

    Attackers can corrupt the training data or the model itself, leading to biased or inaccurate outputs. This can be done through malicious contributions to training datasets or by manipulating the model during fine-tuning.

    R/M Employ rigorous data validation and model monitoring techniques.

  • 05 Improper Output Handling

    Failing to properly validate or sanitize LLM outputs can lead to downstream vulnerabilities, such as code injection or cross-site scripting, if the output is used in other systems.

    R/M Validate and sanitize LLM outputs before using them in downstream systems.

  • 06 Excessive Agency

    Giving LLMs too much autonomy or access to external systems without proper safeguards can lead to unintended or detrimental actions. For example, allowing an LLM to execute code or make API calls without human oversight.

    R/M Limit LLM autonomy and implement strict authorization protocols.

  • 07 System Prompt Leakage

    Attackers can trick the LLM into revealing its system prompts or internal instructions, potentially exposing sensitive information or enabling further attacks.

    R/M Encrypt and protect system prompts and internal instructions.

  • 08 Vector & Embedding Weakness

    LLMs use vector embeddings to represent text. Vulnerabilities in how these embeddings are generated or used can allow attackers to manipulate the model's behavior or retrieve sensitive information.

    R/M Implement secure vector embedding techniques and robust input validation.

  • 09 Misinformation

    LLMs can generate convincing but false information, leading to the spread of misinformation or propaganda. This can be exploited for malicious purposes, such as spreading fake news or manipulating public opinion.

    R/M Implement fact-checking mechanisms and provide context for LLM outputs.

  • 10 Unbounded Consumption

    LLMs can consume excessive computational resources, leading to denial-of-service attacks or unexpected costs. This can be exploited by attackers to drain resources or disrupt services.

    R/M Implement resource limits and monitoring to prevent excessive consumption.

Vuln. "Magazines"

Vuln. Language Tools