How I Use Claude + Cursor to Cut Website Build Time by 60%
Jack Amin
Digital Marketing & AI Specialist

Quick Answer
By combining Claude for planning, architecture, and content logic with Cursor for in-editor code generation and debugging, I've cut the time to deliver a production-ready website from roughly 6–8 weeks down to 2–3 weeks on comparable projects. This post documents the exact workflow — phase by phase, prompt by prompt — including where the AI genuinely helps, where it still needs supervision, and the specific patterns I've found most reliable for Next.js, Tailwind, and Sanity CMS projects.
Why I'm sharing this
Every few months someone publishes a post claiming AI "10x'd their development speed" with very little evidence beyond vibes and vague gestures at ChatGPT. I wanted to write the version of this that's actually useful — specific workflows, specific prompts, real numbers, and an honest account of where the AI falls short.
I run Codeble, a digital marketing and web development agency in Sydney. Since mid-2024 I've built most client projects on a Next.js + Tailwind + Sanity CMS stack, deployed to Vercel. The 60% figure in the headline is derived from comparing delivery timelines on equivalent-scope projects before and after integrating Claude and Cursor into my core workflow. It's not a controlled study. But it's a real number, and this post explains exactly where that time went.
The Stack This Workflow Is Built Around
Before getting into the workflow, here's the technical context:
- Framework: Next.js (App Router)
- Styling: Tailwind CSS v4
- CMS: Sanity
- Database: Neon (Postgres, serverless)
- Deployment: Vercel
- AI tools: Claude Pro (claude.ai) + Cursor Pro
If you're working with a different stack, most of this still applies — the principles transfer. But the specific prompts and patterns I'll share are calibrated for this environment.
Phase 1: Discovery and Architecture (Before Writing a Single Line of Code)
This is where the time saving starts, and it's the phase most developers underestimate.
Before I touch Cursor, I spend an intensive session with Claude working through the project architecture. This used to take days of back-and-forth with the client, reference documentation, and rough sketching. Now it takes a few hours — and produces a cleaner output.
How I use Claude for project architecture
I open a new Claude conversation and paste in everything I have: the client brief, any reference sites they've provided, their content requirements, and any technical constraints. Then I ask Claude to produce a structured site architecture document.
The prompt I use:
You are a senior Next.js architect. Based on the brief below, produce a complete site architecture document including: page inventory with routes, component hierarchy for shared components, Sanity schema structure for each content type, and a data flow diagram in plain text. Flag any technical decisions that need client input before we proceed.
[paste brief]
[Screenshot: Claude producing a structured architecture output with page routes, component hierarchy, and Sanity schema draft — shown in a Claude conversation window]
What comes back is a working document I can share with the client for approval before a line of code is written. It includes:
- Full route structure (`/`, `/about`, `/services`, `/services/[slug]`, `/blog`, `/blog/[slug]`, `/contact`) with notes on which are static, which use ISR, and which need dynamic rendering
- Sanity schema drafts for each content type — services, blog posts, team members, etc. — in plain English before I convert them to actual schema code
- Shared component list — navbar, footer, hero section, CTA block, card grid, etc. — with notes on which props each needs
- Third-party integration notes — forms, analytics, CRM connections, anything that will require API work
This document becomes my build spec. When I'm inside Cursor two days later, I'm not making architectural decisions — I already made them. That elimination of mid-build decision paralysis is where a significant chunk of the time saving comes from.
The follow-up prompt I always run:
Based on the architecture above, what are the three most likely causes of scope creep or technical debt in this project? What would you recommend doing at the architecture stage to prevent them?
Claude is surprisingly good at this. It flags things like: "if the client wants the blog to eventually support multiple authors, the current Sanity schema won't support that without a migration — consider adding an author reference field now." That kind of forward-looking review used to come from experience and hindsight. Now it's part of the planning phase.
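In Sanity v3 terms, that multi-author recommendation translates to a field like the one below. This is a sketch of the kind of change Claude suggests, not its verbatim output — the `author` document type and field names are illustrative:

```typescript
// Hypothetical forward-compatible authors field for a blog post schema.
// Assumes an `author` document type exists (or will); names are illustrative.
import { defineField } from 'sanity'

export const authorsField = defineField({
  name: 'authors',
  title: 'Authors',
  // An array of references supports multiple authors from day one,
  // avoiding a content migration if the requirement lands later.
  type: 'array',
  of: [{ type: 'reference', to: [{ type: 'author' }] }],
  validation: (Rule) => Rule.min(1),
})
```

A single-author site simply stores a one-element array; nothing about the front end has to change when a second author appears.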
Phase 2: Sanity Schema Generation
Sanity CMS schemas are the most repetitive part of any content-driven build. Every document type needs a schema file. Every schema file follows a predictable structure. It's exactly the kind of work AI excels at.
My Sanity schema prompt pattern
Once I have the architecture document, I ask Claude to generate the actual Sanity schema files. Here's the prompt pattern:
Generate a Sanity schema file for a `service` document type with the following fields:
- title (string, required)
- slug (slug, generated from title)
- shortDescription (text, max 160 characters)
- heroImage (image with alt text)
- body (array of blocks using Portable Text)
- category (reference to serviceCategory document)
- seoTitle (string, optional)
- seoDescription (text, optional, max 160 characters)
- featured (boolean, default false)
Use TypeScript. Follow Sanity v3 conventions. Include the export.
Here's a representative output:
// schemas/service.ts
import { defineField, defineType } from 'sanity'
export default defineType({
name: 'service',
title: 'Service',
type: 'document',
fields: [
defineField({
name: 'title',
title: 'Title',
type: 'string',
validation: (Rule) => Rule.required().min(2).max(80),
}),
defineField({
name: 'slug',
title: 'Slug',
type: 'slug',
options: {
source: 'title',
maxLength: 96,
},
validation: (Rule) => Rule.required(),
}),
defineField({
name: 'shortDescription',
title: 'Short Description',
type: 'text',
rows: 3,
validation: (Rule) => Rule.max(160),
}),
defineField({
name: 'heroImage',
title: 'Hero Image',
type: 'image',
options: { hotspot: true },
fields: [
defineField({
name: 'alt',
title: 'Alt Text',
type: 'string',
validation: (Rule) => Rule.required(),
}),
],
}),
defineField({
name: 'body',
title: 'Body',
type: 'array',
of: [{ type: 'block' }],
}),
defineField({
name: 'category',
title: 'Category',
type: 'reference',
to: [{ type: 'serviceCategory' }],
}),
defineField({
name: 'seoTitle',
title: 'SEO Title',
type: 'string',
}),
defineField({
name: 'seoDescription',
title: 'SEO Description',
type: 'text',
rows: 3,
validation: (Rule) => Rule.max(160),
}),
defineField({
name: 'featured',
title: 'Featured',
type: 'boolean',
initialValue: false,
}),
],
preview: {
select: {
title: 'title',
media: 'heroImage',
},
},
})
[Screenshot: The above schema file open in Cursor, with the Sanity Studio running alongside in a browser showing the document type rendered correctly]
For a typical project with 6–8 content types, generating all schemas manually used to take half a day. With Claude, it takes 20–30 minutes — mostly review and adjustment time, not generation time.
Where I still intervene: Claude occasionally generates validation rules that are too restrictive, or misses a nuanced relationship between document types. I always review schema outputs against the architecture document before using them.
Phase 3: Inside Cursor — The Build Workflow
This is where Cursor does its heaviest lifting. I don't use it as fancy autocomplete; I treat it as a junior developer who knows my codebase and can be given complete, specific tasks.
Setting Cursor up for a new project
Before writing anything, I do two things:
1. Add a .cursorrules file to the project root. This is a plain text file that tells Cursor's AI about the project conventions. Mine looks something like this:
This is a Next.js 14+ App Router project using TypeScript, Tailwind CSS v4, and Sanity CMS.
Conventions:
- Use the App Router (not Pages Router). All routes live in /app.
- Components go in /components, organised by feature (e.g. /components/blog, /components/ui).
- Fetch Sanity data using GROQ queries in /lib/sanity/queries.ts — never inline GROQ in components.
- Use server components by default. Add "use client" only when strictly necessary (event handlers, hooks).
- Tailwind classes only — no CSS modules, no inline styles.
- All images go through next/image with explicit width/height or fill.
- TypeScript strict mode. No `any` types.
- Australian English in all user-facing copy.
This single file eliminates most of the convention-drift that happens when AI writes code across a multi-day project. Cursor respects it consistently.
2. Open the architecture document in a Cursor tab. Having the architecture visible means I can reference it when writing prompts — and Cursor's context window will pick up on the structure when I use @file references.
[Screenshot: Cursor workspace showing .cursorrules file open in one tab, architecture document in another, and the main component file being built in the centre]
The component-building prompt pattern
For each new component, I use this pattern:
Step 1 — Describe it completely before asking for code:
Build a `ServiceCard` component. It receives a `service` prop typed as: `{ title: string; shortDescription: string; slug: string; heroImage: { url: string; alt: string }; category: { title: string } }`. It should render as a card with the hero image at top, category label as a small uppercase tag, title as an h3, short description truncated to 2 lines, and a "Learn more" link to `/services/[slug]`. Use Tailwind only. Make it hover-interactive with a subtle shadow lift. Export as default.
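For reference, here's a sketch of the kind of component that prompt produces. This is not Cursor's verbatim output — the exact Tailwind classes and image dimensions are illustrative:

```typescript
// components/ServiceCard.tsx — illustrative sketch, not verbatim Cursor output
import Image from 'next/image'
import Link from 'next/link'

type ServiceCardProps = {
  service: {
    title: string
    shortDescription: string
    slug: string
    heroImage: { url: string; alt: string }
    category: { title: string }
  }
}

export default function ServiceCard({ service }: ServiceCardProps) {
  return (
    // Wrapping the whole card in the link makes the full surface clickable;
    // the hover shadow gives the "subtle lift" asked for in the prompt.
    <Link
      href={`/services/${service.slug}`}
      className="block overflow-hidden rounded-lg shadow transition-shadow hover:shadow-lg"
    >
      <Image
        src={service.heroImage.url}
        alt={service.heroImage.alt}
        width={640}
        height={360}
      />
      <div className="p-4">
        <span className="text-xs uppercase tracking-wide text-gray-500">
          {service.category.title}
        </span>
        <h3 className="mt-1 text-lg font-semibold">{service.title}</h3>
        <p className="mt-2 line-clamp-2 text-sm text-gray-600">
          {service.shortDescription}
        </p>
        <span className="mt-3 inline-block text-sm font-medium">Learn more</span>
      </div>
    </Link>
  )
}
```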
Step 2 — Review the output, then immediately ask for the TypeScript type:
Now create a `Service` TypeScript type in `/types/sanity.ts` that matches the prop shape above, extended with all fields from the Sanity schema we created earlier.
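The resulting type might look like the following — a sketch extended from the schema in Phase 2, where `_id` and the optional SEO fields are my assumptions about what "all fields from the Sanity schema" resolves to:

```typescript
// types/sanity.ts — illustrative sketch of the Service type
export type Service = {
  _id: string
  title: string
  shortDescription: string
  slug: string
  heroImage: { url: string; alt: string }
  category: { title: string }
  seoTitle?: string
  seoDescription?: string
  featured: boolean
}

// Example document shaped to the type (sample data, not real content)
export const sampleService: Service = {
  _id: 'service-001',
  title: 'Technical SEO',
  shortDescription: 'Audits and fixes for crawlability and performance.',
  slug: 'technical-seo',
  heroImage: { url: '/images/technical-seo.jpg', alt: 'Technical SEO audit' },
  category: { title: 'SEO' },
  featured: false,
}
```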
Step 3 — Ask for the GROQ query:
Write the GROQ query that fetches a list of services for the homepage, selecting only the fields needed for `ServiceCard`. Include the slug, image URL via `asset->url`, and category title via `category->title`. Export it as `servicesListQuery` from `/lib/sanity/queries.ts`.
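The query file that comes back looks something like this — a sketch, where the ordering clause is my assumption; in the real file you'd typically wrap the string in the `groq` template tag, omitted here to keep the snippet self-contained:

```typescript
// lib/sanity/queries.ts — illustrative GROQ query for the services list.
// Projects only the fields ServiceCard needs, dereferencing the image
// asset and the category document inline.
export const servicesListQuery = `
  *[_type == "service"] | order(featured desc, title asc) {
    title,
    shortDescription,
    "slug": slug.current,
    "heroImage": { "url": heroImage.asset->url, "alt": heroImage.alt },
    "category": { "title": category->title }
  }
`
```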
This three-step pattern — component, type, query — means by the time I move on, the entire data pipeline for that UI element is complete and consistent. The alternative is writing the component, realising the type doesn't match, fixing the query, and discovering the image URL isn't being resolved. Cursor helps avoid that cascade.
[Screenshot: Cursor showing the GROQ query file alongside the component file, with TypeScript types in a third panel — all generated from the above prompts]
Debugging with Cursor
When something breaks — and it will — I use Cursor's chat panel rather than Stack Overflow as my first stop. The critical difference is that Cursor's AI has read my codebase. When I paste an error, it knows the component structure, the types, and the query shape. Stack Overflow doesn't.
My debugging prompt pattern:
I'm getting this error: [paste error]. Here's the component where it's occurring: [Cmd+K to attach file]. Here's the relevant query: [attach file]. Walk me through what's causing this and fix it.
The resolution rate on first attempt is roughly 70–80% for TypeScript errors, logic bugs, and data-fetching issues. For CSS/layout bugs it's lower — around 50% — because visual debugging still often requires a human eye.
Where Cursor still falls short: Complex, stateful bugs — where the issue isn't in one file but in an interaction between several components across the render lifecycle — often require me to think through the problem myself first, then have Cursor implement the fix. The AI is a much better implementer than it is a debugger on genuinely complex issues.
Phase 4: The Content Layer — Claude Handles the Words
Once the technical build is in progress, there's a parallel workstream that used to be bottlenecked: page copy.
In the old workflow, development would be largely complete before copy was finalised. There'd be a gap where I was waiting on client content before I could build out the real pages. That gap is now much smaller because I generate structured placeholder copy with Claude that's close enough to final to build against — and the client can refine it rather than write from scratch.
How I generate build-ready placeholder copy
The prompt:
Write homepage copy for [client name], a [description of business] based in [location]. The homepage has the following sections: Hero (headline + subheadline + CTA), Services overview (intro paragraph + 3 service cards), About snippet (2 sentences), Social proof (3 testimonial placeholders), and CTA section (headline + body + button label). Write in a professional but approachable tone. Australian English. Do not use generic phrases like "industry-leading" or "world-class." Keep headlines under 8 words.
What comes back is structured, section-by-section copy I can drop directly into Sanity as seed content. Clients respond to it — they edit, correct, and personalise — rather than facing a blank Sanity Studio and having to write from nothing.
This single change has compressed the content sign-off phase from 2–4 weeks to 3–7 days on most projects.
Phase 5: SEO and Metadata — Claude + Code
Generating consistent, well-structured metadata across 15–30 pages used to be one of the more tedious parts of any build. Now it's a 20-minute task.
The workflow:
- Export the page inventory as a simple table (page name, route, primary keyword, target audience intent)
- Paste it into Claude with this prompt:
For each page in the table below, write: a `<title>` tag (max 60 characters), a `<meta name="description">` (max 155 characters), and an `og:title` (max 60 characters). Optimise for the primary keyword. Use Australian English. Never truncate with ellipsis — write complete sentences.
- Claude returns a structured table of metadata I can implement directly into Next.js's Metadata API.
In Next.js App Router, this looks like:
// app/services/page.tsx
import type { Metadata } from 'next'
export const metadata: Metadata = {
title: 'Digital Marketing Services | Codeble Sydney',
description:
'SEO, AEO, marketing automation, and web development services for Australian businesses. Based in Sydney. Book a free consultation.',
openGraph: {
title: 'Digital Marketing Services | Codeble',
description:
'SEO, AEO, marketing automation, and web development services for Australian businesses.',
url: 'https://codeble.com.au/services',
siteName: 'Codeble',
locale: 'en_AU',
type: 'website',
},
}
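The static export above covers fixed routes. For dynamic routes like `/services/[slug]`, the same metadata table feeds Next's `generateMetadata` instead — a sketch, where `getService` is a hypothetical data helper (in this stack it would run a GROQ query against Sanity):

```typescript
// app/services/[slug]/page.tsx — sketch; `getService` is a hypothetical helper
import type { Metadata } from 'next'

// Declared rather than implemented to keep the sketch self-contained;
// the real helper would fetch the document from Sanity.
declare function getService(slug: string): Promise<{
  title: string
  seoTitle?: string
  seoDescription?: string
}>

export async function generateMetadata({
  params,
}: {
  params: { slug: string }
}): Promise<Metadata> {
  const service = await getService(params.slug)
  return {
    // Fall back to the document title when no SEO override is set
    title: service.seoTitle ?? service.title,
    description: service.seoDescription,
  }
}
```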
Generating this for every page used to mean twenty individual tasks, each requiring separate thought. Now it's one Claude session and twenty copy-paste operations.
The Honest Accounting: Where the 60% Actually Comes From
Here's how I'd break down the time saving across a typical 6-week project that now takes roughly 2.5 weeks:
| Phase | Before AI | After AI | Time saved |
|---|---|---|---|
| Architecture and spec | 3–4 days | 0.5–1 day | ~3 days |
| Sanity schema creation | 1–1.5 days | 0.5 days | ~1 day |
| Component development | 2–3 weeks | 1–1.5 weeks | ~1 week |
| Copy and content seed | 2–4 weeks (client-dependent) | 3–7 days | ~1–2 weeks |
| SEO and metadata | 1–2 days | 2–4 hours | ~1 day |
| Debugging and QA | 1–1.5 weeks | 5–8 days | ~4 days |
| Total | ~6–8 weeks | ~2–3 weeks | ~60% |
A few caveats on that table:
The copy and content phase is the most variable — it depends heavily on how responsive the client is and how willing they are to refine AI-generated placeholder copy versus writing everything from scratch. The time saving is real but client-dependent.
The debugging and QA line hasn't been reduced as dramatically as the others. QA still requires human attention. What's improved is the resolution speed when bugs are found — not the reduction in bugs overall (AI-generated code has its own failure modes, as I'll cover below).
What AI-Generated Code Gets Wrong (And How I Catch It)
This section is important. If you're integrating AI into your development workflow, you need a systematic approach to catching the failure modes — not just knowing they exist.
The four most common failure patterns I see from Cursor
1. Correct-looking but stale API usage. Cursor sometimes generates code using an older API pattern — a Next.js 13 App Router approach applied to a Next.js 15 project, a deprecated Sanity method, or a Tailwind v3 class in a v4 project. This is particularly tricky because the code is syntactically valid and passes TypeScript — it just doesn't work at runtime.
How I catch it: The .cursorrules file helps significantly. Beyond that, I always check any unfamiliar function or pattern against the official docs before considering it done.
2. Prop drilling where a context would be better. AI tends to solve the immediate problem, not the structural problem. If I ask for a component that needs data from a parent, it will add a prop. Then another. Then another. The code works but becomes unmaintainable.
How I catch it: I review component prop counts before completing a phase. Anything with more than 5–6 props gets refactored, often using Cursor to generate the context.
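When a prop count crosses that threshold, the refactor Cursor generates usually looks something like this minimal sketch — the `Service` shape and names are illustrative:

```typescript
// components/service/ServiceContext.tsx — minimal sketch of replacing prop
// drilling with a context; type shape and names are illustrative.
'use client'
import { createContext, useContext, type ReactNode } from 'react'

type Service = { title: string; slug: string }

const ServiceContext = createContext<Service | null>(null)

export function ServiceProvider({
  service,
  children,
}: {
  service: Service
  children: ReactNode
}) {
  return (
    <ServiceContext.Provider value={service}>{children}</ServiceContext.Provider>
  )
}

// Deeply nested components read the service directly instead of receiving
// it through every intermediate component's props.
export function useService(): Service {
  const service = useContext(ServiceContext)
  if (!service) throw new Error('useService must be used inside ServiceProvider')
  return service
}
```

Note the context forces a `"use client"` boundary, so I only reach for it when the consuming components are already client components; for server components, passing data down as props (or refetching) stays the default.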
3. Accessibility gaps. AI-generated UI code often misses ARIA labels, focus management, keyboard navigation, and screen reader considerations. A card component that works perfectly visually might be completely inaccessible.
How I catch it: I run Axe or Lighthouse accessibility audits on all components before sign-off. Then I paste the report into Cursor and ask it to fix the identified issues.
4. Overly literal interpretation of prompts. If I ask Cursor to "make the hero section responsive," it might add breakpoint classes to the hero section — but not adjust the font size, the image aspect ratio, or the CTA button size that are also affected. It does exactly what I asked and nothing more.
How I catch it: Test on real devices (not just browser resize) before calling anything done. The mistakes are often visible immediately on mobile.
The Workflow in Summary
If you're starting to integrate Claude and Cursor into your web development process, here's the sequence that works for me:
- Architecture session in Claude — produce a build spec before touching code
- Schema generation in Claude — all Sanity schemas from the architecture doc
- Set up `.cursorrules` — establish project conventions before Cursor writes anything
- Component-type-query pattern in Cursor — build each UI element as a complete data pipeline
- Parallel copy generation in Claude — seed content for client refinement
- Metadata generation in Claude — one session, all pages
- Systematic QA with Axe + Cursor — catch accessibility gaps and fix them in the editor
The tools don't make decisions for you. They execute instructions faster. The quality of the output depends entirely on the quality of the instructions — which means clear thinking and clear prompting are more valuable skills now than they were two years ago.
Want This Workflow Built Into Your Next Project?
If you're a business commissioning a website and you've ever experienced the frustration of a project dragging from initial brief to launch, the workflow above is part of why Codeble can deliver faster without cutting corners.
If you're a developer looking to implement these patterns in your own practice and want a starting point — the .cursorrules template, the architecture prompt, and the schema generation pattern — get in touch and I'll share the template pack.
Let's discuss your project
Ready to implement these patterns in your own projects? Talk to Codeble.


