Adversarial Testing Tools
Open-source and commercial tools for automated red teaming, adversarial prompt generation, and model robustness evaluation.
Tracking adversarial testing tools, jailbreak research, prompt injection defenses, and red team methodologies for large language models and AI systems.
View Dataset on GitHub →Open-source and commercial tools for automated red teaming, adversarial prompt generation, and model robustness evaluation.
Published jailbreak techniques, bypass methods, and defense evaluations across frontier and open-source language models.
Detection and mitigation strategies for direct and indirect prompt injection attacks in production AI systems.
Structured frameworks, evaluation rubrics, and best practices for conducting AI red team assessments at scale.
Data is collected weekly via automated pipelines from academic publications, security advisories, open-source repositories, and vendor disclosures. All collection scripts are transparent and auditable.