
Why AI Kit

A side-by-side comparison of AI-assisted development with and without AI Kit — across real workflows that happen every day.


The 60-Second Case

Without AI Kit, your AI assistant starts every conversation from scratch. It doesn’t know your framework, conventions, team standards, or project structure. You re-explain context every time. Output is inconsistent. Code reviews catch AI-generated violations.

With AI Kit, the AI already knows everything. One command. Zero ongoing effort.

Time to set up: `npx @mikulgohil/ai-kit init` (30 seconds)
Time to maintain: `npx @mikulgohil/ai-kit update` (when your stack changes)
Runtime cost: zero; no background processes, no API calls

Side-by-Side: Daily Workflows

1. Creating a New Component

|  | Without AI Kit | With AI Kit |
| --- | --- | --- |
| What you type | "Create a ProductCard component" | "Create a ProductCard component" or `/new-component ProductCard` |
| What AI knows | Nothing about your project | Your framework, styling, CMS, conventions, and which skill applies |
| What you get | Generic React component: wrong export style, wrong file structure, no Sitecore helpers, no TypeScript patterns | Component matching your exact project patterns: correct exports, Tailwind classes, Sitecore field helpers, typed props |
| Tests | None unless you ask | Auto-generated: happy path, error states, edge cases |
| Documentation | None | Auto-generated `.docs.md` if the component is complex |
| Follow-up work | 15-30 min fixing patterns, adding types, creating tests | Ready for review |

2. Fixing a Bug

|  | Without AI Kit | With AI Kit |
| --- | --- | --- |
| What you type | "fix the checkout bug" | "fix the checkout bug" |
| What happens first | AI guesses which file, reads 10+ files, tries random fixes | AI asks: "Which file? What's expected vs actual?" |
| The fix | May fix the wrong thing. No test. No docs update. | Targeted fix + regression test + changelog update |
| Review cycles | 2-3 rounds catching missing tests, wrong patterns | 1 round focused on business logic |
| Same bug returns? | Likely, since there is no regression test | No; the test prevents it |

3. Pre-PR Review

|  | Without AI Kit | With AI Kit |
| --- | --- | --- |
| What you do | Push code, open PR, wait for review | Run `/pre-pr` before pushing |
| What reviewers catch | console.logs, missing types, `any` usage, no alt text, wrong imports, missing tests | Already fixed before they see it |
| Review time | 30-60 min per review cycle | 10-15 min focused on logic |
| Review cycles | 2-4 rounds | 1-2 rounds |
| Developer frustration | "I keep getting the same feedback" | Feedback is about architecture, not lint |

4. Implementing a Figma Design

|  | Without AI Kit | With AI Kit |
| --- | --- | --- |
| Approach | Eyeball the design, hardcode values | `/figma-to-code`: structured token mapping |
| Spacing/colors | Hardcoded `p-6`, `#1a2b3c` | Maps to design tokens: `p-spacing-lg`, `text-primary` |
| Design review cycles | 3-4 rounds ("wrong spacing", "wrong color", "not responsive") | 1-2 rounds |
| Maintainability | Breaks when the design system updates | Token changes propagate automatically |
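The token mapping that `/figma-to-code` performs can be pictured as a lookup from raw design values to token classes. This is a simplified illustration under assumed token names; `toToken` and the table below are hypothetical, not AI Kit's actual implementation:

```typescript
// Hypothetical design-token table. A real project would generate this
// from its design system rather than hard-coding the entries.
const tokens: Record<string, string> = {
  "#1a2b3c": "text-primary",
  "24px": "p-spacing-lg",
};

// Map a raw Figma value to its token class. Falling back to the raw value
// makes un-tokenized values easy to spot in review.
function toToken(raw: string): string {
  return tokens[raw] ?? raw;
}
```

Because components end up referencing `text-primary` rather than `#1a2b3c`, a design-system color change only touches the token definition, which is why token changes "propagate automatically".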

5. Onboarding a New Developer

|  | Without AI Kit | With AI Kit |
| --- | --- | --- |
| Day 1 | Read the wiki (if it exists). Ask teammates about conventions. | Run `ai-kit init`. AI already knows everything. |
| First PR | 5+ review comments about naming, structure, missing docs | AI-enforced conventions. Review focuses on logic. |
| Time to productivity | 1-2 weeks | 2-3 days |
| Questions asked | "What's the naming convention? Where do tests go? Which import style?" | AI answers these through `/understand` and `/prompt-help` |

6. Sitecore Component Development

|  | Without AI Kit | With AI Kit |
| --- | --- | --- |
| Field helpers | Developer forgets `<Text>`, uses `{fields.title.value}`, which breaks Experience Editor | Rules enforce field helpers; AI uses `<Text>`, `<RichText>`, `<Image>` automatically |
| Debugging | Hours googling "Sitecore component not rendering" | `/sitecore-debug`: a structured checklist finds the issue in minutes |
| Component registration | Forgot to register in `componentFactory`, blank page | AI checks registration as part of `/new-component` |
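To see why raw field access breaks Experience Editor, it helps to know that JSS text fields can carry an `editable` payload alongside the plain `value`. The sketch below is conceptual, not the real `@sitecore-jss` implementation; `renderText` stands in for what a helper like `<Text>` does:

```typescript
// Minimal sketch of a JSS-style text field. In Experience Editor, Sitecore
// serves editable markup alongside the raw value.
interface TextField {
  value?: string;
  editable?: string;
}

// Conceptually what a field helper does: prefer the editable markup so the
// page stays editable in Experience Editor, fall back to the plain value.
// Rendering `field.value` directly skips this branch entirely, which is why
// the page "breaks" for content editors even though it looks fine locally.
function renderText(field?: TextField): string {
  return field?.editable ?? field?.value ?? "";
}
```

The rules shipped with AI Kit exist so that developers never have to remember this branch; the AI reaches for the helper component by default.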

Impact by Role

For Individual Developers

| Metric | Without AI Kit | With AI Kit | Improvement |
| --- | --- | --- | --- |
| Time explaining context to AI | 5-10 min per conversation | 0 min (auto-loaded) | 100% eliminated |
| PR review cycles | 2-4 rounds | 1-2 rounds | 50-75% fewer |
| Bug recurrence | Common (no regression tests) | Rare (auto-tested) | Significant reduction |
| Component creation time | 30-45 min (fix patterns after) | 10-15 min (right first time) | 60-70% faster |
| Documentation debt | Grows constantly | Stays current (auto-enforced) | Eliminated |

For Tech Leads

| Metric | Without AI Kit | With AI Kit | Improvement |
| --- | --- | --- | --- |
| Code review time | 30-60 min per PR | 10-15 min per PR | 50-75% reduction |
| Review comments about conventions | 60% of all comments | Near zero | Standards auto-enforced |
| Onboarding time | 1-2 weeks | 2-3 days | 70-80% faster |
| Cross-project consistency | Each project different | Same standards everywhere | Fully standardized |
| Standards documentation | A wiki nobody reads | A CLAUDE.md the AI actually follows | Finally enforced |

For the Organization

| Metric | Without AI Kit | With AI Kit | Improvement |
| --- | --- | --- | --- |
| AI tool ROI | Low: inconsistent output | High: reliable, standards-compliant | Dramatically higher |
| Knowledge sharing | Tribal knowledge | Codified in rules + guides | Preserved |
| Client code quality | Varies by developer | Consistent across the team | Uniform quality |
| Technical debt from AI | Grows (AI generates non-standard code) | Minimal (AI follows standards) | Prevented |
| Security mistakes | Caught in review (if at all) | Caught by `/security-check` before commit | Shifted left |

Real Scenario: A Week Without vs With AI Kit

Monday — Without AI Kit

09:00 Start new feature. Ask AI to create 3 components.
09:30 AI generates components with wrong patterns. Start fixing.
10:30 Components match conventions. Start writing tests.
11:30 Tests done. Push PR.
13:00 Review feedback: "Missing alt text on images. Use named exports, not default. Add docs for the data table component. The ProductCard should use Sitecore field helpers."
14:00 Fix all review feedback. Re-push.
15:00 Second review: "The regression test for the bug fix is missing. Also, there's a console.log on line 45."
15:30 Fix, re-push.
16:00 Approved on third cycle.

Total: 7 hours. 3 review cycles. Multiple convention violations.

Monday — With AI Kit

09:00 Start new feature. Run /new-component for each.
09:45 AI generates components with correct patterns, tests, and docs. Sitecore field helpers included automatically.
10:00 Run /pre-pr. Catches a missing alt text and a console.log.
10:15 Fix both. Push PR.
11:00 Review feedback: "Could we use a discriminated union for the loading state?" (Actual architecture feedback, not conventions.)
11:30 Update the type. Re-push.
12:00 Approved on second cycle.

Total: 3 hours. 2 review cycles. Zero convention violations.


What AI Kit Does NOT Do

Being honest about limitations:

| Claim | Reality |
| --- | --- |
| "AI Kit makes AI perfect" | No. AI still makes mistakes, but it makes fewer, more specific ones. |
| "You never need code review" | No. AI Kit catches conventions and patterns; humans review logic and architecture. |
| "It works without developer effort" | Partially. The rules auto-enforce standards, but developers still need to write clear prompts for complex tasks. |
| "It replaces documentation" | No. It generates and maintains docs automatically, but developers should still write architectural decision records and design docs. |
| "100% consistent output" | ~90%. Prompt engineering gets close, but AI models have natural variance. Edge cases still require human judgment. |

Cost-Benefit Summary

|  | Cost | Benefit |
| --- | --- | --- |
| Setup | 30 seconds (`npx @mikulgohil/ai-kit init`) | AI understands your project forever |
| Maintenance | Occasional `ai-kit update` | Rules stay current with stack changes |
| Learning curve | Read getting-started.md (5 min) | 48 skills auto-discovered, no command names to learn |
| Runtime overhead | Zero (static files only) | No performance impact |
| Team adoption | Commit generated files to git | Every developer gets the same AI context |

The question isn’t “is AI Kit worth it?” — it’s “why would you use AI tools without it?”


Get Started

npx @mikulgohil/ai-kit init

See Getting Started for the full walkthrough, or read AI That Improves Itself to understand the technology behind these improvements.
