Codeble · by Jack Amin
AI Tools · 18 April 2026

My Real Weekly AI Workflow as a One-Person Agency


Jack Amin

Digital Marketing & AI Specialist

12 MIN READ

Quick Answer

Running a one-person agency in 2026 without AI assistance is like running a restaurant without a kitchen. By integrating reliable tools like Claude Pro, Cursor, Perplexity, and Make, you can compress the gap between having an idea and executing it. This workflow uses AI for weekly structural planning, section-by-section first drafts of documents, and expedited development work, for an estimated 40–50% time saving across an average week. Crucially, AI supports rather than replaces the human work: client interaction, strategic judgement, and deep thinking.

Why I'm writing this

Most "AI workflow" content online is aspirational. It describes a hypothetical ideal day where every task flows smoothly into the next, every AI output is perfect, and the whole operation hums along effortlessly.

That's not how it works.

I run Codeble — a digital marketing and web development agency — alongside a full-time marketing role. My weeks are real weeks: deadlines that pile up, clients who need things urgently, briefs that arrive incomplete, and days where the AI generates something confidently wrong and I catch it at 10pm.

This is an attempt to document the actual workflow — the messy, practical, real version — in a way that's useful to other people running small operations and trying to figure out how AI fits into their lives without the hype.

The Setup: What's Always Open

Before getting into the week, here's what's running in the background as permanent fixtures:

  • Claude Pro: always open in a browser tab. This is my primary thinking partner, writing partner, and strategic sounding board. I have a persistent conversation I return to for ongoing client work, and I start fresh conversations for discrete tasks.
  • Cursor: open whenever there's any development work happening. Even on non-development days, it's often open because I'll tweak something on a client site between other tasks.
  • Perplexity: my default when I need to know something current. Google for research, Perplexity for answers with citations.
  • Make: runs in the background managing automations. I check in when something breaks or a new workflow needs building. It's infrastructure.
  • Ahrefs: open on client SEO days. Not daily.
  • Notion: the operational backbone. Every client has a project page. My weekly task list lives here. Meeting notes go here. It's the one tool I'd be genuinely lost without.

That's the stack. Nothing exotic. The leverage comes from how they're connected and when I reach for each one.

Monday: The Week Setup

Monday morning is the most AI-intensive part of my week — not because I'm doing client work, but because I'm planning it.

How do I plan my week with AI?

I have a weekly planning ritual that takes about 45 minutes and starts before I open any client files. It goes like this:

Step 1: Brain dump in Notion. Everything that needs to happen this week, in no particular order, pulled from client conversations, my project pages, and whatever's sitting in the back of my mind. No filtering. Just capture.

Step 2: Paste the list into Claude and ask for structure. The prompt I use most Mondays:

Here's my unfiltered task list for this week. I'm one person. I have roughly 25 hours of working time available across client work, and I need to protect time for a Tuesday client call and Thursday delivery. Help me sort these into: (1) must happen this week with a hard deadline, (2) high value but flexible, (3) can defer. Then suggest a rough daily structure that front-loads the hardest cognitive work.

What comes back is a prioritised structure that's about 80% right. I adjust it, move a few things, then copy the daily plan into Notion. The value isn't that Claude knows my week better than I do — it's that having to articulate the list clearly enough to paste it into Claude forces me to think about it more rigorously than I would if I just stared at the list myself.
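The triage Claude does maps loosely onto a simple rule: hard deadlines first, then value, then defer. A sketch of that sorting logic in plain code — the task fields and examples are hypothetical, and in practice Claude applies context this rule can't capture:

```python
def triage(tasks):
    """Sort a week's task dump into the three Monday buckets."""
    buckets = {"must": [], "flexible": [], "defer": []}
    for t in tasks:
        if t["hard_deadline"]:
            buckets["must"].append(t["name"])
        elif t["high_value"]:
            buckets["flexible"].append(t["name"])
        else:
            buckets["defer"].append(t["name"])
    return buckets

# Hypothetical Monday brain dump
tasks = [
    {"name": "Thursday client delivery", "hard_deadline": True,  "high_value": True},
    {"name": "Refresh case studies",     "hard_deadline": False, "high_value": True},
    {"name": "Tidy old project files",   "hard_deadline": False, "high_value": False},
]

print(triage(tasks)["must"])  # → ['Thursday client delivery']
```

The 20% I adjust by hand is exactly what this rule misses: dependencies between tasks, energy levels, and which client is quietly getting anxious.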

Step 3: Review any active automations in Make. Monday morning I glance at the Make dashboard to confirm that scheduled workflows ran correctly over the weekend. Error logs get dealt with before the week starts, not mid-week.
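The Monday check itself happens in Make's dashboard, but the triage logic is simple enough to sketch. This is a hypothetical illustration — the log format, field names, and statuses are assumptions for the example, not Make's actual export:

```python
from datetime import datetime, timedelta

def weekend_failures(runs, now):
    """Return runs that failed in the last 72 hours (covers Friday night to Monday)."""
    cutoff = now - timedelta(hours=72)
    return [
        r for r in runs
        if r["status"] == "failed"
        and datetime.fromisoformat(r["finished_at"]) >= cutoff
    ]

# Hypothetical run log, shaped the way a scenario dashboard might export it
runs = [
    {"scenario": "lead-routing",  "status": "success", "finished_at": "2026-04-18T06:00:00"},
    {"scenario": "crm-sync",      "status": "failed",  "finished_at": "2026-04-19T02:30:00"},
    {"scenario": "weekly-report", "status": "failed",  "finished_at": "2026-04-10T02:30:00"},
]

flagged = weekend_failures(runs, now=datetime(2026, 4, 20, 8, 0))
for r in flagged:
    print(f"Fix before the week starts: {r['scenario']}")
```

The point of the 72-hour window is that a Friday-night failure is still "this weekend" on Monday morning; older failures were already dealt with last week.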

Step 4: Prepare for client calls. If I have a client call this week, Monday is when I prepare for it. This means reviewing the project status in Notion, identifying what's been delivered and what's outstanding, and drafting any questions or discussion points. I'll often paste the client's brief or previous notes into Claude and ask: "What questions should I be asking in this call that I haven't thought of yet?"

Mondays are slow to start and fast to finish. By noon I usually know exactly what the week looks like.

Tuesday: Client Delivery Day

Tuesday is my heaviest execution day. The planning is done. Now the work happens.

The work varies by week — some Tuesdays it's a website component, some it's a content strategy document, some it's a GA4 audit. But the AI pattern is fairly consistent regardless of the deliverable type.

What does a client content day actually look like?

Say I'm delivering a content strategy document for a client — a small professional services firm that needs a content plan for the next quarter.

Morning: Research with Perplexity. I start by understanding the current landscape for their industry. What are competitors publishing? What questions are coming up in their category? What's showing in AI Overviews for their key queries? Perplexity handles this faster than any manual research process I've found — and the citations mean I can verify sources rather than trust AI-confabulated statistics.

Rough time: 45 minutes

Late morning: Structure and draft in Claude. Once I have enough research context, I switch to Claude and start building the actual document. My approach is always outline first, then section by section — never "write me a 2,000-word content strategy document" as a single prompt. That approach produces generic output. Section by section with specific context produces something worth delivering to a client.

A typical exchange might go:

Here's what I know about this client: [paste brief]. Here's the competitive landscape I've researched: [paste notes]. Write the executive summary section of a content strategy document. It should cover: what we're trying to achieve, the core content thesis, and the three content pillars we'll focus on. Tone: professional but direct. Australian English. No bullet points in this section — write in paragraphs.

Then for the next section:

Now write the keyword and topic cluster section. Here are the primary keywords we're targeting: [list]. For each cluster, explain the content rationale — why this cluster, what intent it serves, and what type of content will perform best. Include a note on AI Overview opportunity where relevant.

Each section is its own conversation turn with its own specific brief. The document assembles itself section by section, and I review and refine as I go.

Rough time: 2–3 hours
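The section-by-section pattern is essentially a loop: one brief per section, each carrying the shared client context. A minimal sketch of that structure, with a stubbed `draft` function standing in for the actual model call — the section titles and briefs here are hypothetical examples:

```python
def assemble_document(context, sections, draft):
    """Assemble a client document one section at a time.

    `draft` stands in for the model call: draft(prompt) -> str.
    Each section gets its own specific brief plus the shared context.
    """
    parts = []
    for title, brief in sections:
        prompt = (
            f"Client context:\n{context}\n\n"
            f"Write the '{title}' section. {brief}\n"
            "Tone: professional but direct. Australian English."
        )
        parts.append(f"{title}\n\n{draft(prompt)}")
    return "\n\n".join(parts)

# Hypothetical briefs for a quarterly content strategy document
sections = [
    ("Executive summary",
     "Cover the goal, the content thesis, and three pillars. Paragraphs only."),
    ("Topic clusters",
     "For each cluster, explain the intent it serves and the best format."),
]

# A stub in place of the real model call
doc = assemble_document("Small professional services firm.", sections,
                        draft=lambda prompt: "[drafted copy]")
print(doc)
```

The design choice that matters is that the brief lives per section, not per document — which is exactly why a single "write me 2,000 words" prompt underperforms this loop.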

Afternoon: Polish and format. Claude drafts. I edit. The editing is the work that can't be delegated — checking that the recommendations actually match the client's situation, that the tone sounds like Codeble rather than generic AI, that nothing has been confidently stated that I can't stand behind. This is where I earn the fee.

Rough time: 1–1.5 hours

End of day: Deliver and log. Document goes to the client. Status updated in Notion. If there's a follow-up action (a call scheduled, a revision round expected), it goes into next week's planning list.

Wednesday: Development or Automation Day

Wednesday alternates between two modes depending on active projects: website development (Cursor-heavy) or marketing automation (Make-heavy). Some weeks it's both.

What does a development Wednesday look like?

If I'm building or updating a client site, Wednesday is when the deep technical work happens. The workflow here follows the Claude + Cursor pattern I've written about elsewhere, but in practice it looks like this:

Morning: Architecture and task breakdown. Before opening Cursor, I spend 30–45 minutes with Claude reviewing what needs to be built. Even on an established project, I describe the day's tasks specifically enough that I have a clear sequence before writing a line of code. Ambiguous tasks produce ambiguous code.

The rest of the day: Cursor, task by task. I work in discrete chunks — one component, one query, one page — rather than trying to hold the whole build in my head at once. Each task gets its own Cursor prompt, its own test, and its own commit before moving to the next.

The rhythm is: describe the task to Cursor → review the output → test → adjust → commit → next task. Repeat for 4–5 hours.

The thing I've had to learn is that working with Cursor effectively requires more upfront clarity than I was used to. Vague prompts produce code that looks right but behaves unexpectedly. Specific prompts produce code I can trust. The discipline of writing a clear prompt is actually the discipline of thinking clearly about what you want — which is good engineering practice regardless of whether AI is involved.

What does an automation Wednesday look like?

If it's a Make day, I'm usually building or debugging a client workflow — a lead routing automation, a CRM data sync, a scheduled reporting pipeline.

The AI involvement on automation days is different. I use Claude heavily at the design stage — describing a workflow in plain language and asking Claude to help me think through the logic, edge cases, and failure modes before I build anything in Make. Once the logic is clear, the Make build itself is mostly manual — Make's visual interface is intuitive enough that I don't need AI to assemble scenarios, just to design them.

Where Claude helps on automation days is in writing the content that automations use. Email copy for a triggered nurture sequence. Notification message templates. Error log summaries. Anything where words are needed inside the workflow.
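A trivial example of "words inside the workflow": the notification template an automation fills in at runtime. The scenario name, fields, and wording here are hypothetical — in Make the substitution happens via the scenario's own variable mapping, not Python:

```python
from string import Template

# Hypothetical error-notification template a scenario might fill in
alert = Template(
    "$scenario failed at $time\n"
    "Error: $error\n"
    "Runs affected: $count"
)

message = alert.substitute(
    scenario="crm-sync",
    time="02:30",
    error="invalid API token",
    count=3,
)
print(message)
```

Claude's job is the wording of the template; the automation's job is filling the slots.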

Thursday: Client Calls and Reactive Work

Thursday is the least structured day of the week, which is deliberate. Client calls land here most often. Reactive work — a client feedback round, an urgent content update, a bug that's surfaced on a live site — gets absorbed here without disrupting the execution days.

How do I handle client calls with AI?

I don't bring AI into client calls directly. The calls themselves are human — listening, asking questions, building the relationship. Where AI comes in is before and after.

Before: The preparation I described earlier. Claude helps me identify questions I should be asking and flag anything in the project that deserves discussion.

After: Immediately after a call, while my notes are fresh, I write a bullet-point summary of everything that was discussed, decided, and agreed. Then I paste that into Claude with this prompt:

Here are my notes from a client call. Convert these into: (1) a brief call summary suitable for sending to the client as a follow-up email, (2) a list of action items with responsible party (me or client) and expected timeline.

The email draft goes out, lightly edited, within an hour of the call ending. The action items go into Notion. The client gets a fast, clear follow-up. I get a logged record of what was agreed without spending time formatting notes.

Reactive work: Client feedback rounds are where I use AI most liberally because they're inherently repetitive. "Can you make this section shorter?" "Can we try a different headline?" "The client wants three versions of this." These are tasks where AI can generate options quickly and I can curate rather than create from scratch.

The principle I follow: use AI to generate volume, use human judgement to select. Never send the first AI output without review, but don't feel obligated to write every variant from scratch.

Friday: Content, SEO, and the Week's Loose Ends

Friday is Codeble's content day. Blog posts (like this one), social content, newsletter drafts, and any SEO-specific work for clients.

How do I approach content creation with AI?

Honestly: with more restraint than you might expect.

The posts I'm proudest of — the ones that get shared, that generate enquiries, that feel genuinely useful — are the ones where the thinking is mine and the AI helped me articulate it more efficiently. The posts where I outsourced the thinking to the AI and just polished the output are flatter. They read correctly but they don't land the same way.

My content workflow:

Step 1: The idea and the angle are always mine. I won't start a content session without knowing what I actually think about the topic. If I don't have a clear point of view, I'm not ready to write — and asking AI to generate a point of view for me produces content that has no particular reason to exist.

Step 2: I write the opening. The first few paragraphs of every piece I write myself, before AI touches it. This forces me to commit to the tone and angle early, and it means the piece has a genuine voice from the start.

Step 3: Claude helps me develop the structure. Once I know where I'm going, I'll often paste my opening into Claude and ask: "Given this angle and tone, suggest a structure for the rest of this piece. The audience is [X]. I want to avoid [common clichés in this space]."

Step 4: Sections get drafted with Claude, edited by me. Same pattern as the client content work. Section by section, specific brief, review and adjust.

Step 5: I read the whole thing aloud before publishing. This is the single best editing technique I've found, AI-assisted or otherwise. If I stumble over a sentence, it comes out. If a section feels hollow, it gets cut or rewritten. The voice check is the last filter before anything goes live.

Friday afternoon: The week's loose ends. Whatever didn't get done Monday through Thursday lands here. Sometimes that's a quick SEO check on a client's Search Console. Sometimes it's a Perplexity research session for next week's content. Sometimes it's just catching up on Notion and making sure everything's documented for the week ahead.

What AI Doesn't Do in My Week

This is the part I think matters most, so I'm being deliberate about it.

  • AI doesn't talk to clients. Every client interaction — calls, emails, Slack messages — is me. AI helps me prepare and follow up, but the relationship is human. Clients are paying for my judgement and my accountability, not for a well-prompted language model.
  • AI doesn't make strategic decisions. When a client asks whether they should invest in SEO or Google Ads, or whether their website needs a rebuild, or how to position against a new competitor — that answer comes from me. AI can inform the analysis, but the recommendation has to be something I'll stand behind personally.
  • AI doesn't handle anything confidential without explicit permission. Client briefs, commercial information, personal data — I'm careful about what goes into any AI system. My default is to work at a level of abstraction that doesn't expose specific client details unless there's a clear reason to and I'm confident in the platform's privacy handling.
  • AI doesn't produce the first draft of anything that requires credibility. Audit reports, strategy documents, technical recommendations — I write those. AI refines them. The sequence matters. If I let AI write the first draft of something that requires genuine expertise, I'm at risk of producing something that sounds expert without being expert. That's the version of AI use that erodes trust.
  • AI doesn't replace the slow thinking. The 20-minute walk where I work out why a client's positioning isn't landing. The staring at a GA4 report until a pattern becomes visible. The reading that keeps me current on the industry. These can't be prompted. They're what the AI output is built on.

The Honest Numbers: How Much Time Does AI Actually Save?

I've resisted putting a specific percentage on this because it varies significantly by task type. But here's my genuine estimate across different categories of work:

  • Research and competitive analysis: 50–60% (Perplexity dramatically compresses this)
  • First drafts of client documents: 40–50% (Depends heavily on how clear my brief is)
  • Code generation: 50–60% (Higher for repetitive patterns, lower for novel problems)
  • Email and call follow-ups: 60–70% (High template leverage once the pattern is set)
  • SEO content production: 35–45% (The thinking time is irreducible)
  • Automation design and build: 30–40% (Design phase benefits; build phase less so)
  • Client strategy and recommendations: 10–15% (AI informs; judgement is the product)

The aggregate is probably 40–50% across a typical week. That's not a number I can verify precisely, but it's what it feels like — and it roughly matches the extra capacity I've been able to take on since integrating AI seriously.
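That aggregate can be sanity-checked with a rough weighted average. The hours per category below are hypothetical weights, not tracked figures — the point is only that plausible weights over the roughly 25 working hours land the blend inside the 40–50% band:

```python
# (category, hours in a typical week [hypothetical], midpoint of the saving range)
work = [
    ("Research and analysis",  4, 0.55),
    ("Client document drafts", 5, 0.45),
    ("Code generation",        6, 0.55),
    ("Email and follow-ups",   2, 0.65),
    ("SEO content",            4, 0.40),
    ("Automation work",        2, 0.35),
    ("Client strategy",        2, 0.125),
]

hours = sum(h for _, h, _ in work)
blended = sum(h * s for _, h, s in work) / hours
print(f"{blended:.0%}")  # → 46%
```

Shift the hours around and the blend moves, but it takes an implausible mix to push it outside that band.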

What I do with that recovered time: more client work, more content, more strategic thinking. Not less working — more output from the same hours.

What I'd Tell Someone Starting This

If you're running a small operation and trying to figure out where AI fits, here's what I'd say based on the past 18 months of actually doing it:

Start with the tasks you hate, not the tasks you're good at. The fastest wins come from automating the work that drains you. For me that was research, formatting documents, and writing follow-up emails. Those tasks are now 20% of the effort they used to be.

Develop your prompting like a skill, not a feature. The quality of what AI produces is almost entirely determined by the quality of the instruction you give it. Vague prompts produce mediocre output. Specific, contextual prompts produce things worth using. Getting good at prompting is the actual leverage.

Protect the thinking time. The danger with AI in a solo operation isn't that it replaces you — it's that it makes it very easy to be busy without being thoughtful. You can generate enormous volumes of content, code, and documentation without actually solving the client's problem. The discipline of sitting with a problem before reaching for the AI is what separates useful AI use from noise production.

Don't hide it. My clients know I use AI. It's not a secret. What they're buying is my expertise, my judgement, and my accountability for the outcome — not my guarantee that every word was typed by hand. Being transparent about the workflow builds trust; being cagey about it erodes it.

Frequently Asked Questions

What tools make up a one-person agency AI stack?

A typical stack includes Claude Pro for strategy and drafting, Cursor for development work, Perplexity for research with citations, Make for automations, and Notion as the operational backbone.

Let's discuss your project

Ready to integrate AI into your own solo agency workflow? Let's chat and find what works for you.