Why AI Kit
A side-by-side comparison of AI-assisted development with and without AI Kit — across real workflows that happen every day.
The 60-Second Case
Without AI Kit, your AI assistant starts every conversation from scratch. It doesn’t know your framework, conventions, team standards, or project structure. You re-explain context every time. Output is inconsistent. Code reviews catch AI-generated violations.
With AI Kit, the AI already knows everything. One command. Zero ongoing effort.
Time to set up: npx @mikulgohil/ai-kit init (30 seconds)
Time to maintain: npx @mikulgohil/ai-kit update (when stack changes)
Runtime cost: Zero — no background processes, no API calls
Side-by-Side: Daily Workflows
1. Creating a New Component
| | Without AI Kit | With AI Kit |
|---|---|---|
| What you type | “Create a ProductCard component” | “Create a ProductCard component” or /new-component ProductCard |
| What AI knows | Nothing about your project | Your framework, styling, CMS, conventions — and which skill applies |
| What you get | Generic React component. Wrong export style, wrong file structure, no Sitecore helpers, no TypeScript patterns | Component matching your exact project patterns. Correct exports, Tailwind classes, Sitecore field helpers, typed props |
| Tests | None unless you ask | Auto-generated: happy path, error states, edge cases |
| Documentation | None | Auto-generated .docs.md if component is complex |
| Follow-up work | 15-30 min fixing patterns, adding types, creating tests | Ready for review |
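To make the "correct exports, typed props" contrast concrete, here is a minimal TypeScript sketch of the kind of conventions at stake. The ProductCardProps shape and formatPrice helper are hypothetical examples, not part of AI Kit:

```typescript
// Hypothetical team conventions an AI assistant might be told to follow:
// explicitly typed props and named exports (no default export).

export interface ProductCardProps {
  title: string;
  price: number;
  imageUrl?: string; // optional; the real component would render it with alt text
}

// Named export, per convention. Formats a price for display.
export function formatPrice(price: number, currency: string = "USD"): string {
  return new Intl.NumberFormat("en-US", { style: "currency", currency }).format(price);
}
```

Without project context, an assistant has no way to know whether your team uses default or named exports, or how props are typed; rules files make that choice explicit once.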
2. Fixing a Bug
| | Without AI Kit | With AI Kit |
|---|---|---|
| What you type | “fix the checkout bug” | “fix the checkout bug” |
| What happens first | AI guesses which file, reads 10+ files, tries random fixes | AI asks: “Which file? What’s expected vs actual?” |
| The fix | May fix the wrong thing. No test. No docs update. | Targeted fix + regression test + change log update |
| Review cycles | 2-3 rounds catching missing tests, wrong patterns | 1 round focused on business logic |
| Same bug returns? | Likely — no regression test | No — test prevents it |
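The "targeted fix + regression test" pattern can be sketched in plain TypeScript. The checkoutTotal function and its bug (discount applied after tax instead of before) are hypothetical, chosen only to show a regression test pinning the fix:

```typescript
// Hypothetical bug: the discount was applied after tax; the fix applies it before.
export function checkoutTotal(subtotal: number, taxRate: number, discount: number): number {
  const discounted = subtotal - discount; // fix: discount comes off the pre-tax amount
  return Math.round(discounted * (1 + taxRate) * 100) / 100; // round to cents
}

// Regression test pinning the corrected behavior so the bug cannot silently return.
// $100 order, 10% tax, $10 discount => (100 - 10) * 1.10 = $99.00
console.assert(checkoutTotal(100, 0.1, 10) === 99, "discount must apply before tax");
```

The point is the second half: the fix ships together with a test that encodes the expected behavior, which is what keeps "same bug returns?" at "no".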
3. Pre-PR Review
| | Without AI Kit | With AI Kit |
|---|---|---|
| What you do | Push code, open PR, wait for review | Run /pre-pr before pushing |
| What reviewers catch | Console.logs, missing types, any usage, no alt text, wrong imports, missing tests | Already fixed before they see it |
| Review time | 30-60 min per review cycle | 10-15 min focused on logic |
| Review cycles | 2-4 rounds | 1-2 rounds |
| Developer frustration | “I keep getting the same feedback” | Feedback is about architecture, not lint |
4. Implementing a Figma Design
| | Without AI Kit | With AI Kit |
|---|---|---|
| Approach | Eyeball the design, hardcode values | /figma-to-code — structured token mapping |
| Spacing/colors | Hardcoded p-6, #1a2b3c | Maps to design tokens: p-spacing-lg, text-primary |
| Design review cycles | 3-4 rounds (“wrong spacing”, “wrong color”, “not responsive”) | 1-2 rounds |
| Maintainability | Breaks when design system updates | Token changes propagate automatically |
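A structured token mapping of the kind /figma-to-code performs can be sketched as a lookup from hardcoded values to design tokens. The tokenMap entries below are hypothetical; a real mapping would be generated from your design system:

```typescript
// Hypothetical hardcoded-value -> design-token map (illustrative values only).
const tokenMap: Record<string, string> = {
  "p-6": "p-spacing-lg",     // spacing
  "#1a2b3c": "text-primary", // color
};

// Map a hardcoded value to its token. Unmapped values fall back to the raw
// input so they can be flagged for review rather than silently dropped.
export function mapToToken(hardcoded: string): string {
  return tokenMap[hardcoded] ?? hardcoded;
}
```

This is why token changes propagate: components reference p-spacing-lg, so updating the design system updates every consumer, whereas a hardcoded p-6 has to be hunted down by hand.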
5. Onboarding a New Developer
| | Without AI Kit | With AI Kit |
|---|---|---|
| Day 1 | Read the wiki (if it exists). Ask teammates about conventions. | Run ai-kit init. AI already knows everything. |
| First PR | 5+ review comments about naming, structure, missing docs | Conventions already enforced by AI. Review focuses on logic. |
| Time to productivity | 1-2 weeks | 2-3 days |
| Questions asked | “What’s the naming convention? Where do tests go? Which import style?” | AI answers these through /understand and /prompt-help |
6. Sitecore Component Development
| | Without AI Kit | With AI Kit |
|---|---|---|
| Field helpers | Developer forgets <Text>, uses {fields.title.value} — breaks Experience Editor | Rules enforce field helpers. AI uses <Text>, <RichText>, <Image> automatically |
| Debugging | Hours googling “Sitecore component not rendering” | /sitecore-debug — structured checklist finds the issue in minutes |
| Component registration | Forgot to register in componentFactory — blank page | AI checks registration as part of /new-component |
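The "forgot to register → blank page" failure mode comes down to a name-to-component lookup returning nothing. A minimal sketch with simplified, hypothetical types (the real Sitecore JSS componentFactory signature differs):

```typescript
// Simplified stand-in for a component; in Sitecore JSS this is a React component.
type ComponentEntry = { name: string };

// Hypothetical registry playing the role of the componentFactory.
const registry = new Map<string, ComponentEntry>();
registry.set("ProductCard", { name: "ProductCard" });

// Sitecore resolves components by the rendering name in the layout response.
// An unregistered name resolves to undefined and renders nothing: the "blank page".
export function resolveComponent(renderingName: string): ComponentEntry | undefined {
  return registry.get(renderingName);
}
```

Because the failure is silent (no error, just an empty slot), a checklist step that verifies registration catches it far faster than debugging by inspection.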
Impact by Role
For Individual Developers
| Metric | Without AI Kit | With AI Kit | Improvement |
|---|---|---|---|
| Time explaining context to AI | 5-10 min per conversation | 0 min (auto-loaded) | 100% eliminated |
| PR review cycles | 2-4 rounds | 1-2 rounds | 50-75% fewer |
| Bug recurrence | Common (no regression tests) | Rare (auto-tested) | Significant reduction |
| Component creation time | 30-45 min (fix patterns after) | 10-15 min (right first time) | 60-70% faster |
| Documentation debt | Grows constantly | Stays current (auto-enforced) | Eliminated |
For Tech Leads
| Metric | Without AI Kit | With AI Kit | Improvement |
|---|---|---|---|
| Code review time | 30-60 min per PR | 10-15 min per PR | 50-75% reduction |
| Review comments about conventions | 60% of all comments | Near zero | Standards auto-enforced |
| Onboarding time | 1-2 weeks | 2-3 days | 70-80% faster |
| Cross-project consistency | Each project different | Same standards everywhere | Fully standardized |
| Standards documentation | Wiki nobody reads | CLAUDE.md AI actually follows | Finally enforced |
For the Organization
| Metric | Without AI Kit | With AI Kit | Improvement |
|---|---|---|---|
| AI tool ROI | Low — inconsistent output | High — reliable, standards-compliant | Dramatically higher |
| Knowledge sharing | Tribal knowledge | Codified in rules + guides | Preserved |
| Client code quality | Varies by developer | Consistent across team | Uniform quality |
| Technical debt from AI | Grows (AI generates non-standard code) | Minimal (AI follows standards) | Prevented |
| Security mistakes | Caught in review (if at all) | Caught by /security-check before commit | Shifted left |
Real Scenario: A Week Without vs With AI Kit
Monday — Without AI Kit
09:00 Start new feature. Ask AI to create 3 components.
09:30 AI generates components with wrong patterns. Start fixing.
10:30 Components match conventions. Start writing tests.
11:30 Tests done. Push PR.
13:00 Review feedback: "Missing alt text on images. Use named exports,
not default. Add docs for the data table component.
The ProductCard should use Sitecore field helpers."
14:00 Fix all review feedback. Re-push.
15:00 Second review: "The regression test for the bug fix is missing.
Also, there's a console.log on line 45."
15:30 Fix, re-push.
16:00 Approved on third cycle.

Total: 7 hours. 3 review cycles. Multiple convention violations.
Monday — With AI Kit
09:00 Start new feature. Run /new-component for each.
09:45 AI generates components with correct patterns, tests, and docs.
Sitecore field helpers included automatically.
10:00 Run /pre-pr. Catches a missing alt text and a console.log.
10:15 Fix both. Push PR.
11:00 Review feedback: "Could we use a discriminated union for the
loading state?" (Actual architecture feedback, not conventions.)
11:30 Update the type. Re-push.
12:00 Approved on second cycle.

Total: 3 hours. 2 review cycles. Zero convention violations.
What AI Kit Does NOT Do
Being honest about limitations:
| Claim | Reality |
|---|---|
| “AI Kit makes AI perfect” | No. AI still makes mistakes. But it makes fewer, more specific ones. |
| “You never need code review” | No. AI Kit catches conventions and patterns. Humans review logic and architecture. |
| “It works without developer effort” | Partially. The rules auto-enforce standards, but developers still need to write clear prompts for complex tasks. |
| “It replaces documentation” | No. It generates and maintains docs automatically, but developers should still write architectural decision records and design docs. |
| “100% consistent output” | ~90%. Prompt engineering gets close, but AI models have natural variance. Edge cases still require human judgment. |
Cost-Benefit Summary
| | Cost | Benefit |
|---|---|---|
| Setup | 30 seconds (npx @mikulgohil/ai-kit init) | AI understands your project forever |
| Maintenance | Occasional ai-kit update | Rules stay current with stack changes |
| Learning curve | Read getting-started.md (5 min) | 48 skills auto-discovered — no command names to learn |
| Runtime overhead | Zero — static files only | No performance impact |
| Team adoption | Commit generated files to git | Every developer gets the same AI context |
The question isn’t “is AI Kit worth it?” — it’s “why would you use AI tools without it?”
Get Started
npx @mikulgohil/ai-kit init

See Getting Started for the full walkthrough, or read AI That Improves Itself to understand the technology behind these improvements.