Developing from Scratch with AI - The Vibe Coding Reality
Have you noticed how everyone's talking about AI coding assistants these days? "Just describe what you want and the AI writes the code!" Sounds amazing, right? But here's the thing—does it actually work when you're building something real, something that needs to go into production?
I recently decided to find out. I built a complete Drupal theme from scratch using AI assistance throughout the entire project. We're talking about a custom theme (kbc) for keboca.com—premium dark minimalist design, SCSS architecture, Twig templates, responsive design, error pages, the whole nine yards.
What if I told you it was both faster than traditional development AND more frustrating than I expected? Let me tell you what actually happened, the real challenges I faced, and what I learned along the way. All the examples here come straight from my debugging sessions—no sugar coating.
What is Vibe Coding Anyway?
So "vibe coding" is this new approach where you use natural language to guide AI models (LLMs) to generate code. The idea is simple:
- You write a prompt in plain English
- AI generates code
- You refine through conversation
- AI handles the boilerplate and common patterns
- You guide, review, and integrate
For this project, I used Claude via Cursor throughout the entire development. I'm not gonna lie—I was skeptical at first. But I set up a proper structure: everything documented in .ai/requests/kbc-theme/ with task.md, plan.md, and history.md files. We followed a phased approach: Design first, then Implementation, then Debugging. Oh boy, lots of debugging. More than I expected, honestly.
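Concretely, the request folder was laid out like this (the one-line descriptions are mine, summarizing how each file was used):

```
.ai/requests/kbc-theme/
├── task.md      # requirements
├── plan.md      # implementation details and design specs
└── history.md   # session-by-session log of decisions and debugging
```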
The Pros: What Actually Worked
1. Rapid Scaffolding (This Was Legit Amazing)
Okay, so creating a theme structure normally takes me hours. Setting up all the YAML files, SCSS architecture, build tooling... it's tedious. But the AI generated all of this in minutes:
- Complete kbc.info.yml and kbc.libraries.yml
- Component-based SCSS structure (_variables.scss, _reset.scss, _layout.scss, _components.scss)
- Template suggestions and Twig structure
- Build tooling config (npm, sass compilation)
I'm not exaggerating—this saved me hours of boilerplate setup. I could focus on actual implementation from day one instead of fighting with configuration files. That part was genuinely great.
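To give a sense of what that scaffolding looks like, here's a minimal sketch of the two YAML files for a Drupal 10 theme. This is illustrative, not the exact content from the project; keys like the region names and CSS path are assumptions:

```yaml
# kbc.info.yml - minimal theme definition (illustrative sketch)
name: KBC
type: theme
description: 'Premium dark minimalist theme for keboca.com'
core_version_requirement: ^10
libraries:
  - kbc/global-styling
regions:
  header: Header
  content: Content
  footer: Footer

# kbc.libraries.yml - illustrative sketch
# global-styling:
#   css:
#     theme:
#       css/style.css: {}
```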
2. Design Specification Documentation (Unexpectedly Valuable)
Here's something I didn't expect: the AI generated comprehensive design specs. Phase 5 of the project added 2,287 lines to plan.md. Detailed SCSS code examples for every single page, responsive breakpoints fully documented, accessibility guidelines (WCAG AA compliance), complete design mockups in ASCII format (yeah, ASCII art mockups!). The design-first approach meant I had a clear roadmap before writing any code. Every component had specs with actual code examples ready to implement. This was way more valuable than I thought it would be.
3. Pattern Recognition (AI's Actually Good at This)
Once I established the design system, the AI started recognizing patterns and applying them consistently. Card components all followed the same structure, form elements maintained consistent dark theme styling, responsive media queries followed my breakpoints (768px, 992px, 1200px), and design system variables were used consistently throughout. This reduced the inconsistency errors that usually plague large projects. The AI remembered the patterns better than I do sometimes (let's be honest).
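To illustrate the kind of consistency I mean, the breakpoints can be centralized in a variables file with a small mixin, so every component queries the same values. The variable and mixin names here are mine for the sketch, not necessarily the project's:

```scss
// _variables.scss - design-system breakpoints (illustrative names)
$bp-tablet: 768px;
$bp-desktop: 992px;
$bp-wide: 1200px;

// One mixin keeps every component's media queries consistent.
@mixin respond($min) {
  @media (min-width: $min) { @content; }
}

.card {
  padding: 1rem;
  @include respond($bp-tablet) { padding: 1.5rem; }
  @include respond($bp-desktop) { padding: 2rem; }
}
```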
4. Documentation as Memory (This Saved the Project)
So here's the thing—AI doesn't remember past conversations. At all. But I created a history.md file that grew to 2,314 lines, documenting every session, every decision, every debugging step. This became absolutely critical. Future sessions could reference the complete project history, no context loss between development sessions, debugging processes were documented (not just solutions), and project continuity was maintained despite AI's complete lack of memory. This file-based approach to context management? Essential. Without it, the project would have been a disaster.
The Cons: Real Challenges I Encountered
1. Drupal-Specific Knowledge Gaps (The AI Doesn't Know Drupal Like We Do)
Problem: Preprocess Function Naming
Take a deep breath... I asked the AI to create a preprocess function, and it generated this:
```php
// AI generated this (WRONG):
function kbc_theme_preprocess_html(array &$variables) { }

// Correct Drupal pattern:
function kbc_preprocess_html(array &$variables) { }
```

See the problem? That extra _theme in the function name? Drupal never discovered or called the function. No error message, no warning—just a function that silently never executed. I spent way too long debugging this before I realized what happened.
The AI didn't understand Drupal's strict hook naming convention: THEMENAME_preprocess_HOOK(). It looked correct, but Drupal's hook system is picky. Real picky. Always double-check the naming patterns—AI doesn't always know framework-specific conventions, even for well-documented frameworks like Drupal.
Problem: Selector Specificity in Drupal
When styling Drupal system messages, the AI generated this CSS:
```scss
// AI generated (too low specificity):
.messages--status { }
// Result: core Drupal styles override the theme styles
```

I had to tell the AI: "it isn't visible... do NOT add !important be sure the selector are correct". The AI had generated CSS without understanding Drupal's core style hierarchy, so core styles were silently overriding my theme styles.
I had to manually increase selector specificity: .layout-content .kbc-messages .messages. The AI didn't understand that Drupal's core styles have higher specificity in certain contexts. Framework context matters—the AI needs explicit guidance about framework-specific styling hierarchies. It can't just guess.
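The fix was to scope the rule under theme-owned wrappers so it out-ranks core without !important. The property values below are placeholders for illustration; only the selector comes from the project:

```scss
// Three class selectors out-rank core's .messages--status rule,
// no !important needed. (Property values are placeholders.)
.layout-content .kbc-messages .messages {
  background: #1a1a1a;
  border-left: 4px solid #4caf50;
  color: #e0e0e0;
}
```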
2. Complex Debugging Cycles (This Was Exhausting)
Problem: 500 Error Page Implementation
I'll tell you a brief story about implementing a custom 500 error page. It required an ExceptionSubscriber to intercept server errors. The initial implementation looked correct... but it took three debugging iterations to actually work.
Issue 1: Custom Template Not Rendering
Generic Drupal error message kept showing instead of my custom template. I added logging, verified subscriber priority, checked event propagation... nothing. Finally realized: HtmlResponse expects a rendered HTML string, not a render array. The AI generated code that looked logical but didn't understand Drupal's rendering pipeline. Classic.
Issue 2: Empty <head> Tag
CSS/JS assets weren't loading. Placeholders weren't being replaced. The attachments weren't being processed correctly in the exception handler. I had to simplify to #type => 'html' with #attached library. Drupal's default rendering pipeline handles attachments when you use the HTML element correctly—but the AI didn't know this. Getting back on track...
Issue 3: "Skip to Main Content" Link Visible
This accessibility link kept showing up no matter what I tried. Multiple attempts to hide it. Finally used a combined approach: body class + CSS rule + template comment. Sometimes you just need to throw everything at a problem and see what sticks.
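The combined approach looked roughly like this; the body class name is illustrative, and the template comment simply documents why the class exists:

```scss
// Hide the skip link only on the custom error page, where there is
// no main navigation to skip past. (Class name is illustrative.)
body.kbc-error-page .skip-link {
  display: none;
}
```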
Impact: Three-plus iterations, extensive debugging, multiple commits. The complexity of Drupal's rendering pipeline required deep understanding that the AI just doesn't have. Complex framework internals require human expertise—AI can generate the structure, but debugging requires actual framework knowledge. You can't fake this.
3. Context Loss Between Sessions (This Was Annoying)
Problem: AI Memory Limitations
Here's the reality: AI doesn't retain memory of past conversations. At all. Each new session starts completely fresh. I had to work around this by maintaining comprehensive documentation: history.md with 2,314 lines of session history, plan.md with 3,669 lines of implementation details, task.md with 102 lines of requirements.
The impact? I had to re-explain context in every new session, manually attach history files to each conversation, documentation maintenance became a significant time investment, and project continuity depended entirely on file-based context. It's annoying, but necessary.
Here's an actual exchange from my history:
Me: "do you have memory of the way we complete this request?"
AI: "I don't have direct memory of past chat sessions"
Me: *sigh* *updates history.md*

File-based context is essential for multi-session projects. Documentation overhead is real and requires discipline. But without it, you're lost. Trust me on this one.
4. Iterative Refinement Required (More Back-and-Forth Than Expected)
Problem: Responsive Design Implementation
The responsive design followed a pattern that repeated for every single section:
- AI generates initial responsive styles
- I test on actual device
- Find issues (overflow, touch targets, spacing)
- Go back to AI with specific feedback
- Refinement cycle repeats
Example: Mobile Hamburger Menu
Let me tell you about the hamburger menu. The AI generated basic toggle functionality. Great! But then issue 1: menu links not visible (CSS conflicts with visibility: hidden, opacity: 0, display: none all fighting each other). Issue 2: desktop menu visible on mobile alongside hamburger (what?). Issue 3: menu positioned incorrectly (missing position: relative on parent). Issue 4: links not clickable (pointer-events: none interfering).
Four-plus iterations to get it working. Four. Plus. Way more back-and-forth than traditional development. Each feature required multiple refinement cycles. The AI generates "good enough" code, but production quality? That requires iteration. Plan for refinement cycles in your timeline—the first generation won't be perfect. It rarely is.
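For reference, the shape the menu CSS eventually settled into was roughly this. The selectors are illustrative, not the project's exact markup; the comments map each rule to the four issues above:

```scss
.site-header { position: relative; }  // Issue 3: anchor for the dropdown

.menu-toggle { display: none; }

@media (max-width: 767px) {
  .menu-toggle { display: block; }

  .main-menu {
    position: absolute;               // positioned against .site-header
    top: 100%;
    left: 0;
    right: 0;
    display: none;                    // Issue 1: one hiding mechanism,
  }                                   // not display + visibility + opacity

  .main-menu.is-open {
    display: block;
    pointer-events: auto;             // Issue 4: links must stay clickable
  }
}

@media (min-width: 768px) {
  .main-menu { display: flex; }       // Issue 2: desktop menu only >= 768px
}
```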
5. Over-Engineering Tendencies
For the exception handler, the AI initially suggested using complex RenderContext and HtmlResponseAttachmentsProcessor classes. Sounds fancy, right? The reality? A simple #type => 'html' with #attached library worked way better. The AI sometimes suggests "enterprise" solutions when simpler approaches exist. Don't just accept the AI's first suggestion—question it. Your expertise is still needed.
Real Numbers from My Project
Let me give you the actual numbers from history.md. Total sessions: 15+ documented sessions. Phases completed: 6 major phases (3.x, 4, 5.x, 6.x, 6.8, 6.9). Files created/modified: 50+ files. Documentation was massive—plan.md at 3,669 lines, history.md at 2,314 lines, task.md at 102 lines. Git commits: 20+ commits with iterative fixes. Debugging cycles varied: preprocess function took 1 cycle, messages styling took 2 cycles, 500 error page took 3+ cycles, hamburger menu took 4+ cycles, and responsive design had multiple cycles per section.
Time investment? Significant time maintaining history.md and plan.md (but worth it). More debugging time than expected due to framework-specific issues. Multiple refinement cycles for each feature. Net result: faster than from-scratch, but not as fast as "AI magic" suggests. The hype is real, but so are the challenges.
When It Works, When It Doesn't
So when does vibe coding actually shine? Well-documented frameworks where the AI has good training data—think React, Vue, standard patterns. It's great for standard patterns too: CRUD operations, forms, basic styling. Boilerplate generation? Absolutely. Scaffolding, file structure, initial setup—this is where AI really saves time. Documentation generation is solid too: specs, comments, README files. And refactoring existing code with clear patterns? Yeah, it handles that well.
But here's where it struggles. Framework-specific conventions trip it up—Drupal hooks, Symfony patterns, those framework quirks that only make sense if you've been in the trenches. Complex debugging of deep framework internals? Forget it. The rendering pipeline, event system—you need human expertise here. Production edge cases are another weak spot: browser quirks, performance optimization, accessibility details. Multi-session projects require extensive documentation because of context loss. And architectural decisions? The AI suggests solutions, but you're the one who has to evaluate them.
What I'd Do Differently (And What Worked)
Documentation is critical—seriously, do this. Maintain detailed history of decisions and changes. Use structured files like task.md, plan.md, history.md. Document debugging processes, not just solutions. This saved my project multiple times.
Always verify framework conventions. Don't assume AI knows your framework's quirks. Test early, catch naming/convention issues fast. I learned this the hard way with that preprocess function.
Accept the iterative approach. The first generation won't be perfect—plan for refinement cycles. Use AI for initial implementation, manual for polish. This is just how it works.
Context management? Your project depends on this. Attach relevant files to each session, reference previous sessions explicitly. Maintain "project memory" in files, not in AI. Without this, you're basically starting over every session.
Know when to take over. Trust your instincts. Sometimes the old way is still the best way.
Code Example: The Reality Check
Let me show you what actually happened. Here's what the AI generated initially:
```php
// ExceptionSubscriber.php - First version
public function onException(GetResponseForExceptionEvent $event) {
  $exception = $event->getException();
  if ($exception->getCode() >= 500) {
    $build = [
      '#type' => 'page',
      '#theme' => 'page__500',
    ];
    $response = new HtmlResponse($build); // WRONG: expects string
    $event->setResponse($response);
  }
}
```

Looks reasonable, right? Wrong. Here's what I had to debug it to:
```php
// ExceptionSubscriber.php - Working version
public function onException(GetResponseForExceptionEvent $event) {
  $exception = $event->getException();
  if ($exception->getCode() >= 500) {
    $build = [
      '#type' => 'html',
      'page' => [
        '#type' => 'page',
        '#theme' => 'page__500',
      ],
      '#attached' => [
        'library' => ['kbc/global-styling'],
      ],
    ];
    $html = $this->renderer->renderRoot($build);
    $response = new HtmlResponse($html, $exception->getCode());
    $event->setResponse($response);
    $event->stopPropagation();
  }
}
```

The Gap: The AI generated "logical" code, but didn't understand that HtmlResponse expects a rendered HTML string, not a render array. This required understanding Drupal's rendering system to fix. The AI can't fake framework knowledge.
Conclusion
So here's my verdict after building a complete theme with AI assistance. Vibe coding is powerful for scaffolding, documentation, and standard patterns. But it's not magic—it requires significant developer oversight and iteration. Framework expertise still matters. AI doesn't replace deep framework knowledge, and documentation overhead is real. Maintaining context requires discipline. Net benefit? Faster development, but with different challenges.
For Drupal developers specifically: use AI for initial implementation and boilerplate. Always verify Drupal-specific conventions—hooks, services, rendering. Plan for debugging cycles, especially for complex features. Maintain comprehensive documentation for multi-session projects. And know when to take manual control. Your expertise is still valuable.
AI will improve on framework-specific knowledge. Better context management tools will emerge. But developer expertise will always be needed for production-quality code. Vibe coding is a valuable tool, but it's not a replacement for expertise. The most successful projects combine AI's speed with human judgment, framework knowledge, and the discipline to maintain proper documentation. That's the reality, whether we like it or not.