Codeble · by Jack Amin
AI & Automation · 6 March 2026

Why AI Gives You Bad Results (And How to Fix It in 5 Minutes)


Jack Amin

Digital Marketing & AI Automation Specialist

10 MIN READ
Digital workspace showing a chaotic cloud of vague data being transformed through a glowing funnel into sharp, structured, precise output.

Quick Answer

AI gives bad results because most people give it vague instructions. "Write me a blog post" produces generic content. "Write a 600-word blog post for Australian small business owners about Google Ads budgeting, using a conversational tone, with a comparison table and 3 FAQs" produces something genuinely useful. The fix isn't a better AI tool — it's a better prompt.

You've tried ChatGPT. You asked it to write an email, a blog post, or a social media caption. The result was... fine. Generic. Bland. It sounded like it was written by a helpful robot who'd read a thousand articles but never actually run a business.

So you concluded that AI is overhyped. Maybe it works for tech people, but not for your business.

Here's what actually happened: the AI gave you exactly what you asked for. The problem is that most people ask for very little — and that's exactly what they get back.

This guide shows you why AI produces mediocre output by default, and how a small change in how you ask transforms the results from forgettable to genuinely useful. You can test this in the next five minutes with any AI tool you already have.

Why does AI default to generic output?

AI tools like ChatGPT, Claude, and Gemini are trained on billions of pages of text. When you give them a vague prompt, they do the only thing they can — they give you the statistical average of everything they've learned. The most common, safest, most middle-of-the-road response possible.

That's why AI output often sounds like it was written by a committee. It was — a committee of every article, email, and blog post the model has ever read, blended into an inoffensive, technically correct, entirely unremarkable answer.

The AI isn't broken. It's doing exactly what you asked. The problem is what you asked.

What you asked: "Write me a blog post about marketing"
What AI heard: "Write something generic about a massive topic for an unknown audience"
What you got: A bland 500-word overview that sounds like every other marketing article

What you asked: "Help me with an email"
What AI heard: "Write a polite email about something, to someone, for some reason"
What you got: A generic template with placeholder enthusiasm

What you asked: "Create social media content"
What AI heard: "Write vague social posts for an unspecified platform and audience"
What you got: Forgettable captions full of emojis and hashtags

The pattern is clear: vague input produces vague output. Specific input produces specific, useful output. Every time.

The 5-minute fix: the Context + Task + Format method

You don't need to learn "prompt engineering." You need to answer three questions before you type anything into an AI tool:

1. Context — Who are you, who is this for, and what's the situation?
2. Task — What specifically do you want the AI to do?
3. Format — How should the output be structured?

That's it. Those three elements transform every AI interaction. Let me show you the difference.
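If you ever build prompts programmatically (for example, through an AI tool's API), the same three-part structure can be captured in a tiny template. This is an illustrative sketch only — the function and field names are my own, not part of any particular tool:

```python
# A minimal sketch of the Context + Task + Format method as a
# reusable prompt builder. Names here are illustrative only.

def build_prompt(context: str, task: str, output_format: str) -> str:
    """Combine the three elements into one specific prompt string."""
    return (
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Format: {output_format}"
    )

prompt = build_prompt(
    context="I run a web design agency in Sydney targeting small business owners.",
    task="Write a friendly follow-up email to a client who hasn't replied to my quote.",
    output_format="Under 150 words, professional but warm tone, end with a call to action.",
)
print(prompt)
```

The point isn't the code — it's that forcing yourself to fill in all three fields makes a vague prompt impossible.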

Example 1: Writing a customer email

Bad prompt: "Write a follow-up email to a customer."

Result: A generic, overly polite email that could be from any business in any industry. Useless.

Good prompt: "I run a web design agency in Sydney. A potential client enquired about a new website last week but hasn't responded to my quote. Write a friendly follow-up email that acknowledges they're probably busy, briefly restates the key benefit (a site that generates leads, not just looks good), and ends with a specific call to action to book a 15-minute call. Keep it under 150 words. Tone: professional but warm, not salesy."

Result: A specific, natural email you could send with minor edits. The difference is night and day.

Example 2: Creating social media content

Bad prompt: "Write a LinkedIn post about AI."

Result: A generic thought-leadership post about how AI is "transforming the future" with vague excitement and zero substance.

Good prompt: "I'm a digital marketing specialist based in Sydney. Write a LinkedIn post (under 200 words) sharing a specific lesson I learned this week: that ChatGPT's data analysis feature saved me 3 hours of manual spreadsheet work on a Google Ads campaign. Tone: conversational and honest, like I'm telling a colleague. Include a practical takeaway the reader can try themselves. End with a question to encourage comments. No hashtags in the body text — put 3 relevant hashtags at the very end."

Result: A post that sounds like a real person sharing a real experience — which is exactly what performs on LinkedIn.

Example 3: Analysing data

Bad prompt: "Look at this spreadsheet and tell me what you think."

Result: A surface-level summary that restates the column headers. Not helpful.

Good prompt: "This is a Google Ads performance export for the last 90 days. I'm looking for three things: (1) which campaigns have the highest cost per conversion, (2) any keywords spending more than $200 with zero conversions, and (3) month-over-month trends in overall ROAS. Present the findings in a summary table, then give me 3 specific recommendations I can action this week."

Result: Focused, actionable analysis that saves you an hour of manual work.

The one-line trick that makes every prompt better

If you remember nothing else from this post, remember this technique. Add this line to the end of any prompt:

"Before you start, ask me any questions you need to give the best possible answer."

That's it. One sentence. It changes everything.

Instead of guessing what you meant (and guessing wrong), the AI will ask you 3–5 clarifying questions: Who's the audience? What tone do you want? How long should it be? What's the goal? What should it avoid?

You answer those questions, and the AI now has the context it needs to produce something genuinely useful. It's the difference between telling a designer "make me a logo" and having a 10-minute conversation about your brand first.

This works with ChatGPT, Claude, Gemini — any AI tool. Try it right now. You'll immediately see better results.

The before/after comparison

Here's a real-world test. Same task. Same AI tool. Only the prompt changes.

Task: Write a homepage headline and subheadline for a web design agency.

Before (vague prompt)

Prompt: "Write a homepage headline for a web design agency."

Output:
Headline: "Crafting Digital Experiences That Inspire"
Subheadline: "We build beautiful, responsive websites that help businesses grow in the digital age."

This could be any agency, anywhere. It says nothing specific. No one would remember it.

After (specific prompt)

Prompt: "Write a homepage headline and subheadline for Codeble, a one-person digital marketing and web development agency in Sydney, Australia. The target audience is small business owners who need a website that generates leads, not just looks good. The tone should be confident and direct — no jargon, no buzzwords. The headline should communicate the core benefit in under 10 words. The subheadline should explain what makes this agency different (one senior specialist, not a team of juniors; focused on measurable results, not just design)."

Output:
Headline: "Websites That Win Customers, Not Just Compliments"
Subheadline: "One senior specialist. Strategy, design, and development — built to generate leads and grow your business. Sydney-based, available Australia-wide."

Same AI. Same task. Completely different result. The only variable was the quality of the prompt.

What to do when AI still gets it wrong

Even with a great prompt, AI won't nail it on the first try every time. That's normal. The key is knowing how to iterate — not just accepting the first output or giving up.

If the tone is wrong: "Rewrite this in a more conversational tone — like you're explaining it to a friend over coffee, not presenting at a conference."

If it's too long: "Cut this to 150 words. Keep the core message and the call to action. Remove everything else."

If it's too generic: "This is too vague. Add specific details: mention Sydney, reference Australian small businesses, and include a concrete example."

If it missed the point: "You focused on [X] but the main message should be about [Y]. Rewrite with [Y] as the central theme."

If you like parts but not all: "Keep the first paragraph and the closing line. Rewrite the middle section to focus on [specific angle]."

Think of it like working with a fast, capable assistant who needs clear feedback. You wouldn't accept the first draft of anything from a human either — you'd give notes and ask for a revision. AI works the same way.

The 5 most common prompt mistakes

1. Being too vague

"Write something about marketing" tells the AI nothing about your audience, your industry, your goal, or your voice. Be specific about every element that matters.

2. Asking for too much at once

"Write a complete marketing strategy with a content calendar, email sequences, social media plan, and budget" in a single prompt overwhelms the model. Break complex tasks into steps. Strategy first, then calendar, then emails. Each prompt builds on the last.

3. Not providing examples

If you want AI to match your writing style, you need to show it what your style looks like. Paste in 2–3 examples of your previous writing and say "match this tone and style." Without examples, AI guesses — and guesses wrong.

4. Accepting the first draft

The first output is a starting point, not a finished product. The best results come from 2–3 rounds of refinement. First draft → feedback → revision → final edit. This takes an extra 3–5 minutes and dramatically improves quality.

5. Not telling AI what to avoid

Sometimes telling AI what not to do is as important as telling it what to do. "Don't use buzzwords like 'leverage', 'synergy', or 'cutting-edge'. Don't open with a question. Don't use more than one exclamation mark in the entire piece." Constraints produce better writing.

A simple framework you can use today

Next time you open ChatGPT, Claude, or Gemini, fill in this template before you type your prompt:

Who am I? [Your role and industry]
Who is this for? [Your audience]
What do I need? [The specific output]
What format? [Length, structure, style]
What should it avoid? [Anything you don't want]
What's an example of what good looks like? [Reference or sample]

Then write your prompt combining those answers. Or, even easier — paste this line and answer the questions the AI asks you:

"I need help with [task]. Before you start, ask me the questions you need to give the best possible answer."

Start there. You'll get better results in your very next conversation.
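If you find yourself answering those six questions repeatedly, you can keep them as a fill-in-the-blanks template. A quick sketch — the field names are mine, chosen to mirror the checklist above:

```python
# Sketch: the six-question checklist as a reusable fill-in template.
# Field names are illustrative, matching the checklist in this post.

TEMPLATE = """\
Who am I: {who}
Audience: {audience}
What I need: {need}
Format: {fmt}
Avoid: {avoid}
Example of good output: {example}
"""

prompt = TEMPLATE.format(
    who="Digital marketing specialist in Sydney",
    audience="Australian small business owners",
    need="A LinkedIn post about a time-saving AI workflow",
    fmt="Under 200 words, conversational, end with a question",
    avoid="Buzzwords; hashtags in the body text",
    example="(paste a previous post here)",
)
```

Paste the filled-in result into any AI tool — the structure does the work, not the tool.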

Key takeaways

  • AI gives bad results because vague prompts produce vague output — not because the tool is broken
  • The fix is the Context + Task + Format method: tell AI who you are, what you need, and how to structure it
  • The single most effective technique: "Before you start, ask me any questions you need" — this one line transforms every interaction
  • Always provide examples of your writing style, tone, or preferred format — AI can't match what it hasn't seen
  • Iterate, don't accept — treat the first output as a draft, give feedback, and refine in 2–3 rounds
  • The quality gap between a vague prompt and a specific prompt is enormous — same AI, completely different results
  • You can test all of this in 5 minutes with any free AI tool — try it on your next email, social post, or content brief

Frequently Asked Questions

Do I need a paid AI tool to get good results?

No. The techniques in this guide work identically on free and paid tiers. The difference between good and bad AI output is almost always the quality of your prompt, not the quality of the model. Free tiers of ChatGPT, Claude, and Gemini are all capable of producing excellent results when prompted well. Paid tiers give you higher usage limits and access to the most powerful models, but the prompting principles are the same.

Let's discuss your project

Want help with this? Get in touch.