TL;DR. The hook-development skill from anthropics/claude-plugins-official is the official reference for building event-driven hooks in Claude Code. It documents nine lifecycle events, a prompt-based hook API that uses an LLM to make context-aware decisions, and the ${CLAUDE_PLUGIN_ROOT} resolution rules that keep your scripts portable. It scored 89/100 in our auto-review pass — the highest of any skill in the catalog so far.
On this page
- What is hook-development?
- When should you use it?
- How do you install it?
- What hook events does Claude Code emit?
- What is the prompt-based hook pattern?
- How do ${CLAUDE_PLUGIN_ROOT} paths resolve?
- What are the common pitfalls?
- Related skills
- FAQ
What is hook-development?
hook-development is a Claude skill that teaches the agent how to author event-driven hooks for Claude Code plugins. Hooks are short scripts (or LLM prompts) that run in response to specific lifecycle events — when a tool is about to fire, when a session starts, when the agent stops talking — and can validate, mutate, or block the event. The skill ships nine event types with worked JSON examples, the two configuration formats (plugin hooks.json vs user .claude/settings.json), and the ${CLAUDE_PLUGIN_ROOT} path resolution rules that let the same hook script work in development and after install.
Authored by Anthropic and shipped under anthropics/claude-plugins-official, it sits alongside two dozen other official skills (PDF authoring, slack integration, codeql analysis) — but hook-development is the one that turns Claude Code from a chatbot into a programmable runtime.
When should you use it?
Reach for hook-development whenever you’re about to write a hook by hand. The skill loads in under a thousand tokens and saves you from rediscovering the format every six weeks. Concrete triggers it covers, paraphrased from the description:
- “Block dangerous commands” — gate `rm -rf`, `git push --force`, and other destructive operations before they fire.
- “Validate tool use” — check arguments against a policy before letting the tool run.
- “Add context on session start” — load project conventions, secrets paths, or runtime info into every new conversation.
- “Enforce completion standards” — make the agent re-run linters before declaring done.
- “React to tool results” — auto-format files after writes, post Slack notifications after deploys.
If you’re using Claude Code without writing plugins, you don’t need this skill. If you’re authoring a plugin and have ever pasted a JSON snippet from someone’s GitHub README and hoped, you do.
How do you install it?
The SkillHub CLI installs any catalog skill into ~/.claude/skills/ in one command. For hook-development:
```sh
npx skillhub install anthropics/claude-plugins-official/hook-development
npx skillhub list | grep hook-development
```
If you’d rather pin to a specific commit, the skill source lives at anthropics/claude-plugins-official on GitHub. The SkillHub getting-started guide covers the full install loop, including how to remove a skill cleanly.
What hook events does Claude Code emit?
Claude Code emits nine event types, each with a different signature and use case. The skill catalogs them with examples; here’s the comparison table you’ll wish you had on day one:
| Event | Fires when | Can block? | Typical use |
|---|---|---|---|
| PreToolUse | Before a tool runs | Yes | Validate args, block destructive commands |
| PostToolUse | After a tool completes | No | Format files, send notifications, log telemetry |
| UserPromptSubmit | User sends a message | Yes | Sanitize input, inject policy context |
| SessionStart | New conversation begins | No | Load project conventions, set env vars |
| SessionEnd | Conversation closes | No | Clean up temp files, archive logs |
| Stop | Agent indicates done | Yes | Force re-run if a quality gate failed |
| SubagentStop | Spawned subagent done | Yes | Validate subagent output before merging |
| PreCompact | About to compress context | No | Pin specific files into the summary |
| Notification | Generic event channel | No | Custom integrations |
The skill’s biggest practical insight is which events accept the prompt-based hook API (next section) and which only accept commands. PreToolUse, Stop, SubagentStop, and UserPromptSubmit support both; the rest are command-only.
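The prompt-based API is the headline feature, but most hooks stay deterministic. As a point of reference, here is a minimal sketch of a command-type hook in a plugin's hooks.json — the matcher and script path are illustrative assumptions, not taken from the skill:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/scripts/format.sh",
            "timeout": 10
          }
        ]
      }
    ]
  }
}
```

The `"type": "command"` body is a shell command; it receives the event payload via environment variables and runs after every matching `Write` or `Edit`.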
What is the prompt-based hook pattern?
A prompt-based hook is a hook whose body is a natural-language instruction, evaluated by an LLM at runtime instead of running shell code. It’s the part of the hook system most teams haven’t seen yet, and it changes what hooks are good for.
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "prompt",
            "prompt": "Inspect $TOOL_INPUT. If the command would touch files outside the current repo, exfiltrate secrets, or run as root, return blocked with a one-sentence reason. Otherwise return ok.",
            "timeout": 30
          }
        ]
      }
    ]
  }
}
```
The same hook written as a command would be a fragile bash script of greps and case statements. As a prompt, it generalises to commands you didn’t anticipate when you wrote the rule. The skill includes four worked examples — including one that uses prompt hooks to enforce that every Stop happens after the test suite passed.
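For contrast, here is roughly what that fragile deterministic version looks like — a sketch of a command-type validator, with an illustrative (and deliberately incomplete) denylist:

```shell
#!/usr/bin/env bash
# Illustrative PreToolUse validator: checks a command string against a
# fixed denylist and prints "blocked" or "ok". The patterns are examples,
# not an exhaustive safety policy.
check_cmd() {
  local cmd="$1"
  case "$cmd" in
    *"rm -rf"*|*"git push --force"*|*"sudo "*)
      echo "blocked" ;;
    *)
      echo "ok" ;;
  esac
}

# As a hook, the tool payload would arrive via the environment.
if [ -n "${TOOL_INPUT:-}" ]; then
  check_cmd "$TOOL_INPUT"
fi
```

The denylist catches exactly the strings you enumerated and nothing else — `rm -fr`, `:(){ :|:& };:`, or a novel exfiltration one-liner all sail through, which is precisely the gap the prompt form closes.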
One caution: prompt hooks invoke an LLM, so a prompt hook that fires on every tool call can quietly multiply your usage. Reserve them for the events that actually need judgment (PreToolUse on Bash; Stop for completion checks). Use command hooks for everything deterministic.
How do ${CLAUDE_PLUGIN_ROOT} paths resolve?
${CLAUDE_PLUGIN_ROOT} expands to the absolute path of the installed plugin at runtime. The skill stresses why this matters: write "command": "${CLAUDE_PLUGIN_ROOT}/scripts/lint.sh" instead of a hardcoded path, and the same hook works whether you cloned the plugin into ~/.claude/plugins/foo/, /opt/claude/plugins/foo/, or a containerized install. Without it, every install needs path-rewriting; with it, plugins are portable.
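A minimal sketch of the portable form, assuming a plugin laid out with a `scripts/` directory (the script name is illustrative):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/scripts/lint.sh"
          }
        ]
      }
    ]
  }
}
```

In development, `${CLAUDE_PLUGIN_ROOT}` resolves to your working checkout; after install, to wherever that plugin landed on that machine.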
What are the common pitfalls?
Three traps the skill flags explicitly:
- Format confusion. Plugin `hooks.json` needs the `{"hooks": {...}}` wrapper; user `.claude/settings.json` uses events directly at top level. Mixing them silently produces a “hook never fires” bug that is frustrating to debug.
- Missing the matcher. Without `"matcher": "Bash"` on a `PreToolUse` hook, it fires on every tool — including `Read` and `Edit`, which usually isn’t what you wanted.
- Treating prompt hooks like assertions. They’re judgments. They can be wrong. Pair high-stakes prompt hooks (`PreToolUse` on `Bash`) with a deterministic `command` safety net for the cases you can express as code.
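To make the format-confusion pitfall concrete: per the skill's description of the two formats, the user-level `.claude/settings.json` version of a PreToolUse hook drops the outer wrapper and starts at the event name (a sketch; the script path is illustrative):

```json
{
  "PreToolUse": [
    {
      "matcher": "Bash",
      "hooks": [
        { "type": "command", "command": "~/.claude/hooks/check.sh" }
      ]
    }
  ]
}
```

Paste this shape into a plugin's hooks.json (or the wrapped shape into settings.json) and nothing errors — the hook simply never fires.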
Related skills
If hook-development fits your workflow, three other Pass-rated skills compound with it:
- writing-skills — meta-skill from `sickn33/antigravity-awesome-skills` on how to author SKILL.md files. Pair with hook-development when shipping a plugin that bundles both.
- codeql from `trailofbits/skills` — drops in as a `PostToolUse` hook on `Write` for security scanning.
- design-review from `garrytan/gstack` — a natural fit as a `Stop` hook for any UI-touching project.
Browse the rest of the catalog at skills.palebluedot.live, or read the getting-started guide if you’re new to SkillHub. The full Skill Spotlight archive tracks the highest-aiScore skills as they ship.
FAQ
Do hooks work in other agent runtimes?
No. Hooks are a Claude Code plugin feature. The events listed here are Claude Code lifecycle events; other agent runtimes (Cursor, Cline, etc.) have their own extension models.
Can one hook entry bind to multiple events?
No — each hook entry binds to exactly one event. Repeat the entry under each event name if you want shared logic, or factor the body into a script that hooks call.
What happens if a prompt hook times out?
The hook is treated as if it returned no decision; the underlying tool fires. Set `"timeout"` conservatively (the default is 30 seconds) and use deterministic command hooks for safety-critical checks.
Where is `$TOOL_INPUT` documented?
The skill documents the full set of substitution variables (`$TOOL_INPUT`, `$TOOL_OUTPUT`, `$SESSION_ID`, `$CWD`) inline. Run `npx skillhub install` and read the SKILL.md — Anthropic keeps the canonical reference there.
How do you test a hook locally?
The skill recommends invoking the script directly with mock environment variables (`TOOL_INPUT='{"command":"rm -rf /"}' bash hooks/validate.sh`) and adding unit tests as a `Stop` hook on your plugin’s own repo.