tamuras-claude-code
Custom AI agents and hooks for Claude Code
Situation
Working daily with Claude Code across multiple projects, I kept running into the same friction points: no standardized way to audit code quality, no persistent context between sessions, and no guardrails to prevent the AI from making the same category of mistakes repeatedly.
The default Claude Code setup is powerful but generic. Every project got the same treatment regardless of stack, conventions, or past learnings. Security audits, performance checks, and responsive design reviews all required manual prompting and produced inconsistent results.
Task
Build a plugin system with specialized agents, automated hooks, and session persistence for Claude Code — specifically tailored for the Next.js + Supabase + TypeScript + Tailwind stack. The agents needed to be consultative (analyze and recommend, not auto-fix), produce versioned audit reports, and integrate with the existing Claude Code workflow without friction.
Action
I designed the plugin around three pillars: specialized agents, lifecycle hooks, and session continuity.
Specialized agents each focus on one dimension of code quality:
- Performance agent — Audits Core Web Vitals, bundle size, and database queries. Produces structured reports in `audits/performance/YYYY-MM-DD.md` with specific findings and recommendations.
- Security agent — Checks Supabase RLS policies, OWASP Top 10 vulnerabilities, Server Actions security, and AI guardrails (prompt injection, output validation). Reports go to `audits/security/`.
- Responsive agent — Validates mobile-first design, breakpoint coverage, and touch target sizes. Catches layout issues that desktop testing misses.
- Visual research agent — Generates moodboards and UI references from Dribbble, Behance, and design systems. Useful for establishing visual direction before building.
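To make the agent structure concrete, here is a sketch of how one such agent could be defined as a Claude Code subagent file with YAML frontmatter. The name, tool list, and prompt wording are illustrative assumptions, not the plugin's actual definitions:

```markdown
---
name: security-auditor
description: Audits Supabase RLS policies, OWASP Top 10 risks, and AI guardrails
tools: Read, Grep, Glob
---

You are a consultative security auditor. Analyze the codebase and write
findings to a dated report under audits/security/. Recommend fixes with
file references and severity; never apply changes yourself.
```

Keeping the prompt read-only (no edit tools in the frontmatter) is what enforces the analyze-and-recommend contract described below.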
Lifecycle hooks run shell commands at specific points in the Claude Code session:
- Session ledger — A pre/post hook that maintains a continuity ledger per project (`CONTINUITY_[project].md`). It tracks modified files, decisions, and next steps across sessions, so a new conversation can pick up where the last one left off.
- QA command (`/qa`) — An adaptive checklist that searches for a project's QA.md, runs build/lint/type checks, and produces a pass/fail report.
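A minimal sketch of what the ledger hook might run at session end. The filename `CONTINUITY_demo.md` and the placeholder entry fields are hypothetical; the real hook writes `CONTINUITY_[project].md` and fills entries from the session itself:

```shell
#!/usr/bin/env sh
# Hypothetical post-session hook: append a dated entry to the
# project's continuity ledger so the next session can resume context.
LEDGER="CONTINUITY_demo.md"

# Create the ledger with a header on first run.
[ -f "$LEDGER" ] || printf '# Continuity ledger: demo\n' > "$LEDGER"

# Append a dated session entry; a real hook would derive these fields
# from git status and the session transcript.
{
  printf '\n## Session %s\n' "$(date +%F)"
  printf '%s\n' '- Modified files: (fill in)' \
                '- Decisions: (fill in)' \
                '- Next steps: (fill in)'
} >> "$LEDGER"
```

Because the ledger is plain markdown in the repo, the next session can simply read it before doing anything else.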
Versioned audits solve the "was this already checked?" problem. Each agent writes its findings to a dated file in the `audits/` directory. Before exploring the codebase, Claude Code can check if a recent audit exists and skip redundant analysis — saving context window and time.
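The freshness check can be a one-line shell test before an agent is launched. A sketch, assuming a seven-day window (the actual cutoff is not specified by the plugin):

```shell
#!/usr/bin/env sh
# Hypothetical pre-agent check: skip a new security audit if a dated
# report was written within the last 7 days (assumed policy).
AUDIT_DIR="audits/security"
mkdir -p "$AUDIT_DIR"

if find "$AUDIT_DIR" -name '*.md' -mtime -7 | grep -q .; then
  echo "Recent audit found; skipping redundant analysis."
else
  echo "No recent audit; run the security agent."
fi
```

The same pattern applies to `audits/performance/` and the other audit types, since all agents share the dated-file convention.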
All agents follow the same philosophy: analyze and document, never auto-fix. The user reviews findings and decides what to implement. This keeps the human in the loop for architectural decisions while automating the tedious inspection work.
Result
- 4 specialized agents covering performance, security, responsive design, and visual research
- Session continuity via ledger hooks — conversations build on previous context instead of starting from zero
- Versioned audit trail in `audits/[type]/YYYY-MM-DD.md` — searchable history of code quality over time
- Adaptive QA that respects project-specific checklists and conventions
- Stack-specific configuration targeting Next.js 15+, Supabase, TypeScript strict mode, Tailwind CSS, and shadcn/ui