Updated Weekly

AI Red Teaming Index

Tracking adversarial testing tools, jailbreak research, prompt injection defenses, and red team methodologies for large language models and AI systems.

View Dataset on GitHub →

What This Index Covers

🔩

Adversarial Testing Tools

Open-source and commercial tools for automated red teaming, adversarial prompt generation, and model robustness evaluation.

🔓

Jailbreak Research

Published jailbreak techniques, bypass methods, and defense evaluations across frontier and open-source language models.

🛡

Prompt Injection Defenses

Detection and mitigation strategies for direct and indirect prompt injection attacks in production AI systems.

🚀

Red Team Methodologies

Structured frameworks, evaluation rubrics, and best practices for conducting AI red team assessments at scale.

Methodology

Data is collected weekly via automated pipelines from academic publications, security advisories, open-source repositories, and vendor disclosures. All collection scripts are transparent and auditable.

100+
Tools Tracked
Weekly
Update Frequency
100%
Open Source