# garak-skill

A Claude Code skill for NVIDIA garak, the LLM vulnerability scanner.

## What is garak?

Garak is an open-source security testing framework for large language models. Think of it as nmap or Metasploit, but for AI systems. It probes whether LLMs can be manipulated into generating harmful content, leaking data, accepting prompt injections, or failing in other undesirable ways.

## Installation

Add this skill to your Claude Code environment:

```shell
# Clone the skill
git clone https://github.com/evalops/garak-skill.git ~/.claude/skills/garak

# Or add via the Claude Code plugin marketplace (if available)
```

## Usage

Once installed, the skill activates when you ask Claude to:

- "Test this LLM for security vulnerabilities"
- "Red team my model with garak"
- "Run jailbreak tests on GPT-4"
- "Check for prompt injection vulnerabilities"
- "Scan my model for data leakage"
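Requests like these ultimately translate into garak CLI invocations. A representative scan might look like the following; the model name and probe are illustrative choices, and an OpenAI API key is assumed to be available in the environment:

```shell
# Probe an OpenAI-hosted model for prompt-injection weaknesses.
# Assumes garak is installed and OPENAI_API_KEY is set; the model
# name (gpt-4o-mini) and probe (promptinject) are example choices.
garak --model_type openai --model_name gpt-4o-mini --probes promptinject
```

Results land in garak's report files, which summarize which probes elicited failing responses.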

## What the Skill Provides

- Complete CLI reference for garak
- 40+ probe categories organized by attack type
- Support for 23+ LLM platforms (OpenAI, Bedrock, HuggingFace, Ollama, etc.)
- Configuration file examples
- Best practices for security assessments
- Troubleshooting guides
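For repeatable scans, garak can read settings from a YAML file passed via `--config` instead of long flag lists. A minimal sketch of such a file follows; treat the exact keys as illustrative of garak's config schema rather than authoritative:

```yaml
# Illustrative garak config file (keys assume garak's YAML schema)
plugins:
  model_type: openai
  model_name: gpt-4o-mini
  probe_spec: promptinject
run:
  generations: 5
```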

## Requirements

- Python 3.10-3.12
- pip
- API keys for the target LLM platforms
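Given those requirements, putting garak itself on the PATH is a single pip step. A quick setup and smoke test (the `--version` flag is assumed from garak's CLI):

```shell
# Install garak into the current Python (3.10-3.12) environment
python -m pip install garak

# Smoke test: confirm the CLI is reachable and print its version
garak --version
```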

## License

Apache-2.0

## Credits
