AI in web development: how I work faster without sacrificing quality
AI in web development — the reality behind the hype
Every week there's a new headline: "AI will replace developers." "Build a full app with one prompt." "No-code AI makes developers obsolete." I've been using AI in web development daily for over a year now. Here's what actually happens when you use AI to build real products for real clients — not demos, not prototypes, not Twitter threads.
The short version: AI has made me significantly faster. It has not made me replaceable. And the gap between AI-assisted development and AI-replacing-development is enormous.
My actual AI workflow
I use two primary tools:
Claude Code — Anthropic's CLI tool for Claude. This is my main development partner. It runs in the terminal, has full context of my codebase, can read files, write files, run commands, and execute multi-step tasks. Think of it as a senior developer sitting next to me who types really fast but needs clear direction.
Cursor — an AI-native code editor. Good for smaller edits, quick refactors, and when I want AI suggestions inline while I'm already in a file.
Here's what a typical day looks like:
Morning: Architecture and planning (no AI)
I spend the first 30-60 minutes thinking. What needs to be built? What's the data model? How do the pieces connect? Where are the edge cases? This happens on paper or in my head. AI is not useful here — it can generate plausible-sounding architectures, but it doesn't understand the business constraints, the client's budget, the maintenance burden, or the long-term implications.
Midday: Building features (heavy AI)
This is where AI earns its keep. I give Claude Code specific, scoped tasks:
- "Create a new component that displays order history with pagination, using our existing card component style"
- "Write Vitest tests for the route optimization function, covering edge cases for empty routes, single-stop, and 25+ stops"
- "Refactor this 400-line component into smaller components following our project structure"
Each task is clear, bounded, and verifiable. I can check the output, test it, and iterate. This is 2-4x faster than writing everything from scratch.
Afternoon: Review, debug, deploy (light AI)
I review what was built, test edge cases, check performance, and deploy. AI helps occasionally — explaining an error message, suggesting a fix for a failing test — but most of this work requires human judgment.
What AI does well in web development
Boilerplate and repetitive code
Every web project has tons of repetitive patterns: CRUD operations, form validation, API routes, type definitions, component scaffolding. Writing these by hand is tedious and error-prone. AI generates them in seconds, correctly, and consistently.
For colet.app, AI generated the initial CRUD operations for all entity types — orders, routes, clients, drivers, depots. That's probably 40+ files of repetitive code that would have taken days to write manually.
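To make the pattern concrete, here is a minimal sketch of the kind of repetitive CRUD code AI scaffolds per entity type. The `Order` shape and the in-memory store are hypothetical stand-ins for illustration; the real app backs these operations with database queries.

```typescript
// Illustrative only: "Order" and the in-memory Map stand in for a real
// database-backed entity. The point is the repetitive shape AI generates
// once per entity type (orders, routes, clients, drivers, depots...).
interface Order {
  id: string;
  clientId: string;
  status: "pending" | "delivered" | "cancelled";
}

function createCrudStore<T extends { id: string }>() {
  const items = new Map<string, T>();
  return {
    create(item: T): T {
      items.set(item.id, item);
      return item;
    },
    read(id: string): T | undefined {
      return items.get(id);
    },
    update(id: string, patch: Partial<T>): T | undefined {
      const existing = items.get(id);
      if (!existing) return undefined;
      const next = { ...existing, ...patch };
      items.set(id, next);
      return next;
    },
    remove(id: string): boolean {
      return items.delete(id);
    },
    list(): T[] {
      return [...items.values()];
    },
  };
}

const orders = createCrudStore<Order>();
orders.create({ id: "o1", clientId: "c1", status: "pending" });
```

Multiply this by every entity and every layer (API route, validation, types) and the time savings add up fast, even though no single piece is hard.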
Testing
This is AI's superpower in development. Writing tests is important but boring. Most developers skip tests because they'd rather spend time on features. AI removes this excuse.
I can say "write comprehensive tests for this function" and get 20+ test cases in minutes, including edge cases I might not have thought of. colet.app has 274+ automated tests, and AI wrote the majority of them. I reviewed and adjusted, but the heavy lifting was done by Claude.
Refactoring
"Take this 500-line component and split it into smaller, reusable components" is a perfect AI task. The logic stays the same. The structure changes. AI handles the mechanical work of extracting components, updating imports, and passing props. I handle the decisions about where to split and what abstractions make sense.
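The mechanical part of that split looks like this at function level (a simplified sketch with hypothetical names; for React components, the extracted units become child components and the parameters become props):

```typescript
// Before: imagine one large function mixing totals and formatting.
// After: the same logic split into small, reusable units. Behavior is
// unchanged; only the structure is. All names here are illustrative.
interface LineItem { name: string; qty: number; price: number }

// Extracted unit 1: pure calculation.
function orderTotal(items: LineItem[]): number {
  return items.reduce((sum, i) => sum + i.qty * i.price, 0);
}

// Extracted unit 2: pure formatting. Data comes in via parameters —
// the function-level analogue of passing props to a child component.
function formatLine(item: LineItem): string {
  return `${item.name} x${item.qty} = ${(item.qty * item.price).toFixed(2)}`;
}

// The slimmed-down "parent" just composes the extracted pieces.
function renderOrderSummary(items: LineItem[]): string {
  const lines = items.map(formatLine);
  return [...lines, `total = ${orderTotal(items).toFixed(2)}`].join("\n");
}
```

AI does the extraction, import updates, and prop threading; the human decision is which seams to cut along.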
CSS and styling
Writing CSS is where AI saves me the most frustration. Centering things, responsive layouts, animations — AI generates working CSS faster than I can look up the flexbox syntax for the hundredth time. For CSS Modules (which we use at tiny.studio), AI generates scoped styles that match our design system variables.
Documentation and types
TypeScript type definitions from API responses. JSDoc comments for complex functions. README updates. These are all perfect AI tasks — mechanical, important, and easy to verify.
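A small illustrative example of that mechanical typing work, with hypothetical field names (not the real colet.app API):

```typescript
/** A delivery stop as returned by a (hypothetical) /api/stops endpoint. */
interface StopResponse {
  id: string;
  address: string;
  /** ISO 8601 timestamp; null until the stop is completed. */
  completedAt: string | null;
}

/**
 * Counts completed stops in a response payload.
 * @param stops - the parsed API response
 * @returns the number of stops with a non-null completedAt
 */
function countCompleted(stops: StopResponse[]): number {
  return stops.filter((s) => s.completedAt !== null).length;
}
```

Easy to verify at a glance, tedious to type by hand across dozens of endpoints.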
What AI does poorly
Architecture decisions
"Should I use a monorepo or separate repos?" "How should I structure the database for multi-tenancy?" "Should real-time updates use WebSockets or polling?"
AI will give you an answer. It'll sound confident and well-reasoned. It might even be right. But it doesn't know your team size (in my case: 1), your budget, your client's technical sophistication, your deployment environment, or your maintenance capacity.
When I was building colet.app, the decision to use Supabase with Row Level Security for multi-tenancy came from experience — knowing the tradeoffs, understanding what Supabase handles well and where it struggles. AI couldn't have made that call with the same confidence.
UX and user flows
"Design the optimal flow for a dispatcher planning a route with 25+ stops." This requires understanding the user — watching them work, knowing their mental model, feeling the frustration points. AI can generate wireframes and suggest flows, but it's working from generic patterns, not from sitting in a dispatch office watching someone juggle phone calls and spreadsheets.
Client communication
Understanding what a client actually needs versus what they say they need is a deeply human skill. A client says "I want a modern website." What they mean is "I want more customers from Google." AI can't read between those lines.
Debugging production issues
When something breaks in production at 11 PM and you've got partial logs, a frustrated client, and three possible causes — AI can help analyze error logs, but it can't reproduce the issue, check the database state, or understand the sequence of events that led to the failure. Debugging requires context that doesn't fit in a prompt.
Real numbers: how much faster?
I track my time carefully. Here's the honest comparison for common tasks:
| Task | Without AI | With AI | Speedup |
|---|---|---|---|
| New CRUD feature (full stack) | 4-6 hours | 1-2 hours | 3x |
| Writing tests for a module | 2-3 hours | 30-45 min | 4x |
| Refactoring a large component | 1-2 hours | 20-30 min | 3-4x |
| Building a new page from design | 3-4 hours | 1.5-2 hours | 2x |
| Debugging a complex bug | 1-3 hours | 1-2.5 hours | 1.2x |
| Architecture planning | 2-4 hours | 2-4 hours | 1x |
The overall speedup across a full project is roughly 2-3x. Not 10x. Not "build an app in a weekend." Two to three times faster than before, with better test coverage and more consistent code quality.
For a client, this means a project that would have taken 8 weeks takes 3-4 weeks. Real savings, real timeline improvement — but not magic.
Quality control: the non-negotiable part
Here's where most "AI developer" content gets dangerous. They show the generation step and skip the verification step. Using AI without rigorous quality control produces garbage that merely looks like working code.
My process for every AI-generated piece of code:
- Read it line by line. If I don't understand what it does, I don't ship it.
- Test it. Not just happy path — edge cases, error states, empty states.
- Check performance. AI loves over-engineering. Does this component re-render when it shouldn't? Is this database query missing an index?
- Verify against the design system. AI doesn't know our brand guidelines by heart (though Claude Code gets close with a good CLAUDE.md file).
- Check accessibility. AI often forgets aria labels, keyboard navigation, and screen reader compatibility.
This review process takes time. But it's the difference between a professional product and a demo that falls apart in production.
Why AI doesn't replace experience
A junior developer using AI produces code faster. A senior developer using AI produces better code faster. The difference is knowing what to ask for, recognizing when the output is wrong, and understanding the implications of every decision.
AI doesn't know:
- That the client's hosting plan can't handle server-side rendering
- That the target audience is 55+ and needs larger text and simpler navigation
- That this feature will need to be maintained by someone who's never seen the codebase
- That this database design will become a bottleneck at 10,000 users
- That the "simple" feature request actually requires rethinking the entire data model
These are the things that make software work in the real world, and they come from experience, not from prompts.
The tools I recommend
If you're a developer wanting to integrate AI into your workflow:
- Start with Claude Code. It understands full project context and can handle multi-file changes. The CLAUDE.md file (project-level instructions) makes it remarkably effective for repeat work on the same codebase.
- Use AI for tests first. This is the lowest-risk, highest-value starting point. Even if the generated tests aren't perfect, they're better than no tests.
- Don't use AI for one-off scripts you'll never read again. If it breaks and you don't understand why, you've created a liability.
- Keep AI in the loop, not in the driver's seat. Give it specific tasks. Review everything. Make the architectural decisions yourself.
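To show what project-level instructions look like, here is a minimal, illustrative CLAUDE.md fragment. The specific rules are hypothetical examples, not our actual file; the point is that short, concrete conventions dramatically improve repeat work on the same codebase.

```markdown
# CLAUDE.md (illustrative example)

## Conventions
- Use CSS Modules; pull colors and spacing from the design token variables.
- All new components get a Vitest test file alongside them.
- Prefer small components; split anything over ~200 lines.

## Before finishing a task
- Run the test suite and the type checker; fix failures before reporting done.
- Do not add new dependencies without asking first.
```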
The future (without the hype)
AI tools will get better. They'll understand larger codebases, make fewer mistakes, and handle more complex tasks. But the fundamental dynamic won't change: AI accelerates execution. It doesn't replace decision-making.
The developers who thrive will be the ones who use AI as a force multiplier — shipping faster, testing more thoroughly, and spending their freed-up time on the problems that actually matter: understanding users, making architectural tradeoffs, and building software that works in the real world.
The developers who struggle will be the ones who either refuse to adopt AI (working 2-3x slower than competitors) or blindly trust AI output (shipping fragile, poorly-understood code).
The sweet spot is in the middle. Use AI aggressively for the mechanical work. Think carefully about everything else.
If you're interested in how this AI-augmented approach translates to client work — faster timelines, lower costs, better test coverage — that's exactly how we work at tiny.studio. Not AI replacing human judgment. AI amplifying it.