Introduction

AI Kit — Knowledge Hub

AI Kit is an open-source CLI that makes AI coding assistants actually useful. It auto-detects your project’s tech stack and generates tailored rules, skills, and workflows for Claude Code and Cursor — so every AI interaction follows your standards, from the first conversation.

```shell
npx @mikulgohil/ai-kit init
```

One command. 30 seconds. Your AI assistant goes from generic to project-aware.


20 Problems AI Kit Solves

Every team using AI coding assistants hits these problems. AI Kit solves each one.

| # | Problem | How AI Kit Solves It |
|---|---------|----------------------|
| 1 | **AI forgets everything each session** — Every new chat starts from zero. No memory of project rules, patterns, or past decisions. | Generates a persistent CLAUDE.md with project rules, conventions, and stack details. The AI knows your project from the first prompt, every time. |
| 2 | **AI generates wrong framework patterns** — Writes Pages Router code when you use App Router. Uses CSS when you use Tailwind. Creates default exports when your project uses named exports. | Auto-detects your exact stack (framework, router, CMS, styling, TypeScript config) and generates rules specific to your setup. The AI can’t use the wrong patterns. |
| 3 | **Developers write bad prompts** — Vague or incorrect prompts lead to wrong code, wasted time, and rework. Junior developers waste the most time. | Ships 48 pre-built skills so developers don’t write prompts from scratch — just run /review, /security-check, /new-component, /refactor, etc. |
| 4 | **Same mistakes happen repeatedly** — No system to track what went wrong, so the team keeps hitting the same build failures and lint errors. | Generates a mistakes log (docs/mistakes-log.md) with an auto-capture hook that logs every build/lint failure automatically. The AI references it to avoid repeating them. |
| 5 | **Every developer gets different AI behavior** — No consistency in how the team uses AI tools, leading to inconsistent code quality and style. | One ai-kit init command generates the same rules for the entire team — everyone’s AI follows identical project standards. Commit the generated files to the repo. |
| 6 | **No quality checks on AI-generated code** — AI output goes straight to PR without type checking, linting, or security review. | Automated hooks run formatting, type-checking, linting, and git safety checks in real-time as the AI writes code. Quality gate runs everything before merge. |
| 7 | **AI generates insecure code** — No guardrails for secrets exposure, XSS, SQL injection, or other vulnerabilities. AI doesn’t scan its own output. | Built-in security audit scans for exposed secrets, OWASP risks, and misconfigurations. Security review agent catches issues at development time, not production. |
| 8 | **AI can’t handle multi-file reasoning** — Changes to one component break related files. AI loses context across linked models and shared types. | 16 specialized agents with focused expertise — planner, code-reviewer, build-resolver, doc-updater, refactor-cleaner, ci-debugger, data-scientist, performance-profiler, migration-specialist, dependency-auditor, api-designer — each maintains context for their domain. |
| 9 | **No decision trail** — Nobody remembers why a technical decision was made 3 months ago. Knowledge walks out the door when developers leave. | Auto-scaffolds a decisions log (docs/decisions-log.md) to capture what was decided, why, and by whom — fully searchable and traceable. |
| 10 | **Onboarding takes too long** — New developers spend days understanding the project and its AI setup before they can contribute. | AI Kit generates developer guides and project-aware configurations — new team members get productive AI assistance from day one with zero manual setup. |
| 11 | **Context gets repeated every conversation** — You explain the same conventions in every session: import order, naming, component structure, testing patterns. | All conventions are encoded in the generated rules file. The AI reads them automatically at session start. You explain once, it remembers forever. |
| 12 | **AI doesn’t improve over time** — The AI makes the same wrong suggestions regardless of past feedback, team patterns, or previous failures. | The system learns as you use it — mistakes log, decisions log, and updated rules mean the AI gets smarter with every session. Mistakes auto-capture builds the log organically. |
| 13 | **Complex tasks need multiple manual AI passes** — Developers manually coordinate review + test + docs updates across separate conversations. | Multi-agent orchestration runs multiple specialized agents in parallel — review, test, document, and refactor in one command with /orchestrate. |
| 14 | **Switching AI tools means starting over** — Moving from Cursor to Claude Code (or vice versa) loses all configuration and project context. | Generates configs for 5+ AI tools (Claude Code, Cursor, Windsurf, Aider, Cline) from a single source — switch tools without losing project knowledge. |
| 15 | **AI creates components without tests, docs, or types** — Every AI-generated file needs manual follow-up to add what was missed. | Skills like /new-component enforce a structured workflow: asks 10 questions, reads existing patterns, generates component + types + tests + docs together. |
| 16 | **No visibility into AI usage costs** — Management has no idea how many tokens the team is consuming or which projects cost the most. | Built-in token tracking provides daily/weekly/monthly usage summaries, per-project cost breakdown, budget alerts, and ROI estimates. |
| 17 | **Cursor copies entire modules instead of targeted edits** — AI bloats the repo with unnecessary file duplication, especially in CMS and monorepo setups. | Generated rules include explicit instructions for editing patterns — update in place, respect package boundaries, follow existing structure. |
| 18 | **No component-level AI awareness** — AI doesn’t know which components have tests, stories, Sitecore integration, or documentation gaps. | Component scanner discovers all React components and generates .ai.md docs with health scores, props tables, Sitecore field mappings, and dependency trees. |
| 19 | **Setup is manual and error-prone** — Configuring AI assistants requires deep knowledge of each tool’s config format. Most teams skip it entirely. | Zero manual configuration — one command auto-detects your stack and generates everything. Update with one command when the project evolves. |
| 20 | **AI hallucinates framework-specific APIs** — Generates incorrect hook usage, wrong data fetching patterns, or non-existent component APIs for your framework version. | Stack-specific template fragments include exact API patterns for your detected framework version (e.g., Next.js 15 App Router, Sitecore Content SDK v2). |

The result without AI Kit: Teams spend more time fixing AI output than they save generating it.


What AI Kit Does

AI Kit scans your project once and generates everything the AI needs to be a productive team member:

1. Project-Aware Rules

AI Kit reads your package.json, config files, and directory structure to detect your exact stack — then generates rules tailored to it.

| What It Detects | What the AI Learns |
|-----------------|--------------------|
| Next.js 15 with App Router | Server Components, Server Actions, `app/` routing patterns |
| Sitecore XM Cloud | `<Text>`, `<RichText>`, `<Image>` field helpers, placeholder patterns |
| Tailwind CSS v4 | `@theme` tokens, utility class patterns, responsive prefixes |
| TypeScript strict mode | No `any`, proper null checks, discriminated unions |
| Turborepo monorepo | Workspace conventions, cross-package imports |
| Figma + design tokens | Token mapping, design-to-code workflow |

Rules are generated for both Claude Code (CLAUDE.md) and Cursor (.cursorrules + .cursor/rules/*.mdc).
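Under the hood, this kind of detection boils down to inspecting `package.json` (plus config files and directory structure). The sketch below is illustrative only: `detectStack` and its return shape are hypothetical names invented for this example, not AI Kit's actual API.

```javascript
// Hypothetical sketch of dependency-based stack detection.
// AI Kit also reads config files and directory structure; this sketch
// covers only the package.json part of the idea.
function detectStack(pkg) {
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  const has = (name) => name in deps;
  return {
    framework: has("next") ? "Next.js" : has("react") ? "React" : "unknown",
    styling: has("tailwindcss") ? "Tailwind CSS" : has("sass") ? "SCSS" : "CSS",
    typescript: has("typescript"),
    monorepo: has("turbo") ? "Turborepo" : has("nx") ? "Nx" : null,
  };
}

// Example: a Next.js 15 + Tailwind + TypeScript project
const stack = detectStack({
  dependencies: { next: "15.0.0", react: "19.0.0" },
  devDependencies: { typescript: "5.6.0", tailwindcss: "4.0.0" },
});
// stack.framework === "Next.js", stack.styling === "Tailwind CSS"
```

Once the stack is known, each detected technology maps to a rules fragment, which is why the generated rules can be version-specific rather than generic.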

2. 48 Auto-Discovered Skills

Skills are structured AI workflows that get applied automatically. You don’t type a command — the AI recognizes what you’re doing and loads the right skill.

Example: You say “create a ProductCard component.” The AI auto-loads the new-component skill, which:

  • Asks 10 structured questions (props, Server/Client, data fetching, responsive needs…)
  • Reads an existing component to match your project’s patterns
  • Generates: component + types + tests + docs (if complex)
  • Follows your exact conventions — because it read them
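One way to picture auto-discovery is keyword matching between your request and each skill's trigger words. This is a guess at the mechanism, for illustration only; `matchSkill` and the trigger lists are invented for this sketch and are not AI Kit's implementation.

```javascript
// Illustrative sketch: map a natural-language request to the best-matching
// skill by counting trigger-keyword hits. Skill names come from the docs;
// the triggers and scoring are hypothetical.
const skills = {
  "new-component": ["create", "component"],
  "fix-bug": ["fix", "bug", "broken"],
  "security-check": ["security", "vulnerability", "audit"],
};

function matchSkill(request) {
  const text = request.toLowerCase();
  let best = null;
  let bestHits = 0;
  for (const [name, triggers] of Object.entries(skills)) {
    const hits = triggers.filter((t) => text.includes(t)).length;
    if (hits > bestHits) {
      best = name;
      bestHits = hits;
    }
  }
  return best; // null when nothing matches
}

matchSkill("create a ProductCard component"); // → "new-component"
```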

48 skills across 8 categories:

| Category | Skills | What They Do |
|----------|--------|--------------|
| Getting Started | prompt-help, understand | Help you write effective prompts, explain unfamiliar code |
| Building | new-component, new-page, api-route, error-boundary, extract-hook, figma-to-code, design-tokens, schema-gen, storybook-gen, scaffold-spec | Scaffold production-ready code that matches your patterns |
| Quality | review, pre-pr, test, accessibility-audit, security-check, responsive-check, type-fix, perf-audit, bundle-check, i18n-check, test-gaps | Catch issues before code review — a11y, security, types, responsive, perf |
| Maintenance | fix-bug, refactor, optimize, migrate, dep-check, sitecore-debug, upgrade | Debug, refactor, upgrade, and audit with structured workflows |
| Workflow | document, commit-msg, env-setup, changelog, release, pr-description, standup, learn-from-pr, release-notes | Automate docs, git messages, releases, and PR workflows |
| Session | save-session, resume-session, checkpoint | Persist context across sessions and track quality |
| Orchestration | orchestrate, quality-gate, harness-audit | Coordinate agents, run comprehensive checks, audit AI config |
| Requirements | deep-interview, clarify-requirements | Gather requirements through structured interviews before coding |

Skills work in both Claude Code and Cursor — same 48 workflows, both tools.

3. Hooks, Agents & Context Modes

New in v1.2.0 — three systems that make the AI work more like a real teammate:

Hooks — Automated quality checks that run as you code. Auto-format on save, TypeScript checks after edits, console.log warnings, git push safety, and mistakes auto-capture that logs build/lint failures to docs/mistakes-log.md. Three profiles: minimal, standard, strict.

Agents — 16 specialized AI assistants for delegation. Need a security audit? Delegate to @security-reviewer. Build failing? Send it to @build-resolver. CI pipeline red? Use @ci-debugger. Each agent has focused expertise and tools.

Context Modes — Switch between dev (build fast), review (check quality), and research (understand code) modes to change how the AI approaches your work.

4. AI That Enforces Standards Automatically

This is what makes AI Kit different from a static config file. The generated rules include active enforcement:

Prompt Quality Guard — When you write a vague request:

```
You: "fix the auth"
AI:  "I want to help. Which file has the auth issue? What's happening — login failing, session expiring, or an error?"
```

One clarification question prevents 10 minutes of wrong output.

Self-Enforcement — The AI does these automatically, even when you don’t ask:

| What the AI Auto-Does | Why It Matters |
|-----------------------|----------------|
| Creates `.docs.md` for complex components | Complex components without docs become tribal knowledge |
| Adds regression test before fixing bugs | Without a test, the same bug comes back |
| Adds JSDoc to all exported functions | Exported functions without JSDoc are invisible in IDE tooltips |
| Updates Change Log after modifying documented components | Prevents “who changed this and why?” questions |
| Flags `// TODO` without ticket numbers | Untracked TODOs never get done |

Structured Output — Every skill produces output in the same format, every time:

  • Role-specific expertise (security engineer for /security-check, a11y engineer for /accessibility-audit)
  • Mandatory numbered steps the AI cannot skip
  • Consistent output tables with file paths and line numbers
  • Self-check before responding (“Did I cover every section?”)

5. Safe Updates

Your team adds custom rules above or below the generated content. When the stack changes, run ai-kit update — only the generated section refreshes, your custom rules stay untouched.

```
# My Team's Custom Rules        ← preserved on update
...
<!-- AI-KIT:START -->
[generated content]             ← refreshed on update
<!-- AI-KIT:END -->
# More Custom Rules             ← preserved on update
```
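The refresh step can be pictured as replacing only the text between the markers. `refreshGeneratedSection` below is a hypothetical illustration of that idea, not AI Kit's code:

```javascript
// Replace only the generated section between the AI-KIT markers,
// leaving any custom rules outside the markers untouched.
function refreshGeneratedSection(fileText, newContent) {
  const start = "<!-- AI-KIT:START -->";
  const end = "<!-- AI-KIT:END -->";
  // Lazy match so only the first marker pair is replaced.
  return fileText.replace(
    new RegExp(`${start}[\\s\\S]*?${end}`),
    `${start}\n${newContent}\n${end}`
  );
}
```

Everything outside the marker pair passes through untouched, which is what makes committing custom rules alongside generated ones safe.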

The Impact

Before AI Kit

| What Happens | Time Cost |
|--------------|-----------|
| Re-explain project context every AI conversation | 5-10 min × 10 conversations/day |
| Fix AI-generated code to match conventions | 15-30 min per component |
| Code review catches convention violations | 2-4 review cycles per PR |
| New developer onboarding | 1-2 weeks to learn conventions |
| Bug fix without regression test | Bug returns in 2 weeks |
| No documentation created | Technical debt compounds daily |

After AI Kit

| What Happens | Time Cost |
|--------------|-----------|
| AI already knows your project | 0 min — context auto-loaded |
| AI generates code matching your exact patterns | Ready for review, no fixing |
| /pre-pr catches issues before review | 1-2 review cycles per PR |
| New developer runs ai-kit init, AI knows everything | 2-3 days to productivity |
| AI auto-adds regression test with every bug fix | Bug can’t return |
| AI auto-creates docs for complex components | Documentation stays current |

For a team of 5 developers: ~50-75% reduction in code review cycles, ~60-70% faster component creation, documentation that actually exists.

See the full side-by-side comparison


Quick Start

```shell
# Run in any project directory
npx @mikulgohil/ai-kit init

# Follow the interactive prompts (30 seconds)
# Done — your AI assistants now understand your project
```

What you get immediately:

  • CLAUDE.md + .cursorrules — AI rules tailored to your stack
  • 48 skills in .claude/skills/ + .cursor/skills/ — auto-discovered workflows
  • 16 specialized agents + 3 context modes — delegation and focus control
  • Hooks with mistakes auto-capture — quality checks that build your mistakes log
  • 6 developer guides in ai-kit/guides/ — effective AI usage playbooks
  • Documentation scaffolds in docs/ — structured logging templates
  • ai-kit health — one-glance project health dashboard

Full setup walkthrough | What gets generated


Who Is This For?

Individual developers — Stop re-explaining context. Let AI Kit teach the AI your project once. Every conversation starts informed.

Tech leads — Enforce coding standards through AI tools instead of code review comments. Standards are followed automatically, not policed manually.

Teams — Same AI experience across every developer and every project. New hires get the same AI context as senior engineers.

Open source maintainers — Contributors get project-aware AI assistance from their first PR. Standards are in the repo, not in your head.


Works With Every AI Plan

AI Kit is free and open source. It works with every pricing tier of every supported tool — but the value scales differently depending on what you’re paying for.

AI Coding Tool Pricing (March 2026)

| Tool | Free | Pro / Individual | Power Tier | Team / Business |
|------|------|------------------|------------|-----------------|
| Claude Code | No CLI access | $20/mo — Opus + Sonnet models, ~100+ msgs per 5hr window | $100/mo (Max 5x) or $200/mo (Max 20x) — priority access, highest limits | $25-150/user/mo |
| Cursor | 50 premium reqs/mo | $20/mo — unlimited auto mode + $20 credit pool | $60/mo (Pro+, 3x) or $200/mo (Ultra, 20x) | $40/user/mo |
| Windsurf | Light daily quota | $20/mo — premium models, standard quota | $200/mo (Max) — heavy quota | $40-60/user/mo |
| GitHub Copilot | 50 premium reqs + 2K completions | $10/mo — 300 premium reqs | $39/mo (Pro+) — 1,500 premium reqs | $19-39/user/mo |
| Aider | Free + open source | BYOK — API costs only (~$3-30/mo) | BYOK — heavy API (~$15-40/day) | — |
| Cline | Free + open source | BYOK — API costs only | BYOK — heavy API | $20/user/mo (Q2 2026+) |

Where AI Kit Adds the Most Value

| Your plan | Without AI Kit | With AI Kit | ROI |
|-----------|----------------|-------------|-----|
| $20/mo (Claude Pro or Cursor Pro) | Limited tokens wasted on re-explaining context and fixing wrong output | Every token goes to productive work — context is pre-loaded, patterns are enforced | Highest ROI — turns a budget plan into a power tool |
| $100-200/mo (Max / Ultra tiers) | More tokens, but same quality problems at scale | AI Kit ensures your extra capacity produces higher-quality output, not more of the same mistakes | High ROI — quality scales with quantity |
| $0 (Free tiers or BYOK) | Extremely limited interactions, can’t afford wasted prompts | Each precious interaction is maximally productive — no wasted tokens on context | Critical — every token matters |
| Team plans ($25-150/user/mo) | Every developer configures AI differently, inconsistent quality | Entire team shares identical AI rules. One ai-kit init, everyone’s productive | Team multiplier — consistency across headcount |

Recommendations by Budget

On the $20/mo plan (Claude Pro or Cursor Pro)? This is AI Kit’s sweet spot. You have enough tokens to be productive but not enough to waste. AI Kit eliminates the #1 token sink — re-explaining your project every session. Use the /token-tips skill and enable the standard hook profile to maximize every interaction.

On a free plan or BYOK (Aider, Cline)? AI Kit is even more important here. With API costs per token, every wasted request is money lost. The generated rules, skills, and context modes ensure your AI gets it right the first time instead of needing 3 rounds of correction.

On the $100-200/mo plan (Max/Ultra)? You have the capacity — AI Kit ensures the quality matches. Multi-agent orchestration, 16 specialized agents, and quality gates become especially valuable when you’re running complex, multi-file tasks that cheaper plans can’t handle.

On a team plan? AI Kit pays for itself immediately. Without it, every developer on the team configures AI differently. With it, one ai-kit init and a git commit gives the entire team identical, project-aware AI assistance. The ROI scales linearly with team size.


Supported Tech Stacks

AI Kit detects and generates tailored rules for:

| Category | Technologies |
|----------|--------------|
| Frameworks | Next.js (App Router, Pages Router, Hybrid), React |
| CMS | Sitecore XM Cloud, Sitecore JSS |
| Styling | Tailwind CSS (v3 + v4), SCSS, CSS Modules, styled-components |
| Language | TypeScript (with strict mode detection) |
| Monorepos | Turborepo, Nx, Lerna, pnpm workspaces |
| Design | Figma MCP, design tokens, visual tests |
| Package Managers | npm, pnpm, yarn, bun |

Full detection details


Explore the Docs

| Page | What You’ll Learn |
|------|-------------------|
| Why AI Kit | Side-by-side comparison: development with and without AI Kit |
| AI That Improves Itself | Deep-dive into Prompt Quality Guard, Self-Enforcement, and Structured Skills |
| Getting Started | Step-by-step setup walkthrough |
| Skills & Commands | All 48 skills with usage guides |
| Recommended Tools | Free tools and MCPs that supercharge AI Kit |
| Token Tips | Optimize token usage on the $20 plan |