The Report Looked Great. It Missed Everything That Mattered.

What Alex's SEO experience reveals about where AI actually breaks — and what to do before you open a chat window.


There's a conversation that keeps coming up at conferences, in Slack threads, in the comments on LinkedIn. Someone tries an AI tool, gets a mediocre result, and concludes: AI isn't that good yet. Sometimes they're right. Often, they're not giving the tool what it needs to do the job.

On Episode 74 of Enterprising Minds, Alex walked us through two situations where Claude and ChatGPT failed him in ways that genuinely shook his confidence. He fed both tools full SEO crawl data and paid search spreadsheets, asked them to produce reports, and got back polished, lengthy documents that completely missed the most critical issues — a robots.txt file blocking large portions of a site from being indexed, broken XML sitemaps, and fundamental gaps in the paid search data. Things a reasonably experienced analyst would catch in the first pass.

His takeaway was blunt: "AI tools right now can create SEO and paid search reports. They cannot create good SEO and paid search reports."

He's not wrong about what happened. But his experience points to something worth sitting with, because I've been on the other side of it.

Why Two People Can Get Completely Different Results From the Same Tool

When Alex described the reports he got back, my first instinct was to ask about the prompt. Not to blame him, but because, genuinely, my experience with the same tools has been different lately. The same week Alex was wrestling with those reports, I was using Claude for poetry editing and getting sharp, useful feedback. The difference wasn't the tool. It was how I set up the conversation.

I've started opening most of my sessions with something like: take your time, don't assume anything, I don't mind working iteratively, ask me whatever you need. It sounds simple, but it changes everything. The model stops trying to produce a complete answer immediately and starts working more like a thoughtful collaborator. That's not a workaround; it's the setup the model needs.

What Good Prompting Actually Looks Like

Anthropic's Claude 101 course frames it cleanly. Before your next conversation, consider three things:

1. Setting the stage: What is your role and what are your objectives? Is there context about your work that Claude should know about?

2. Defining the task: What action do you want Claude to take? Do you want Claude to write, analyze, build, or something else?

3. Specifying rules: What's the style or tone you want Claude to use? Are there examples that you can attach to show Claude what you're looking for?

This matters more than most people realize. The model doesn't know you're an SEO analyst who cares deeply about crawl prioritization. It doesn't know that in your organization, a broken XML sitemap is a five-alarm fire and a thin content recommendation is noise. It doesn't know what good looks like for your specific situation unless you tell it.

The more context you build in upfront — your role, the stakes, what "useful" means in this context — the less the model has to guess. When models guess, they default to what looks complete, not what is actionable.

You can go further than a single prompt, too. Connectors, file uploads, and custom preferences in Claude let you bring your actual context into the conversation: your writing style, your brand voice, your organizational constraints, your definitions of what matters. That's not a power-user trick; it's the difference between asking a consultant who's never met you and one who has your full brief.

The Bigger Frame: AI Fluency

There's a reason prompting feels inconsistent even for people who are in these tools every day. We're not just learning a skill; we're learning to think about collaboration differently.

Anthropic's 4D Framework for AI Fluency, developed through research collaboration between Professor Rick Dakan (Ringling College of Art and Design) and Professor Joseph Feller (University College Cork), names four core competencies that shape how well any of us can work with AI:

Delegation — Deciding what work should be done by humans, what should be done by AI, and how to distribute tasks strategically between them.

Description — Effectively communicating with AI systems: clearly defining outputs, guiding processes, and specifying the behaviors and interactions you want.

Discernment — Thoughtfully and critically evaluating AI outputs. Assessing quality, accuracy, appropriateness, and identifying where improvement is needed.

Diligence — Using AI responsibly. Making thoughtful choices about which systems to use, maintaining transparency, and taking accountability for AI-assisted work.

Alex's SEO report failure is instructive through this lens. The Description piece — how clearly the task, context, and success criteria were communicated — was incomplete. The Discernment piece kicked in when he reviewed the output and caught what the tools missed. But the problem started before the model ever saw the data.

Ruthi put it well in the episode: the prompts that consistently produce good results are usually the ones that start with "help me think through this" rather than "here is the output I want." That's Description and Delegation working together — being clear about the process, not just the destination.

The Harder Problem: Most People Haven't Started

Here's what I keep coming back to. Alex's frustration is the frustration of someone who is genuinely pushing these tools and paying attention to where they break. That's valuable. That's how you develop real judgment about AI.

But a surprising number of professionals aren't there yet. They're using AI the way they'd use a search engine — single-line queries, answer-engine style, no context, no iteration — and either getting mediocre results or not using the tools at all. And then writing off the whole category based on that experience.

That's a different problem. Not a technology problem. A fluency problem. The platforms themselves don't help. Claude, ChatGPT, Gemini, and Perplexity have meaningfully different training approaches, system philosophies, and strengths. What works in one often doesn't transfer cleanly to another. Even experienced practitioners — and yes, this includes me — regularly get caught off guard when they switch contexts or a model updates.

So if you've tried AI and found it underwhelming, the question worth asking isn't "Is this tool good enough?" It's: "Did I give it what it needed to do the job?"

What to Try This Week

If you want to test this without a major time investment, here's a starting point. At the top of your next AI session, before you make a single request, spend 60 seconds telling the model three things:

  1. Who you are and what you're working on

  2. What you want the tool to do, specifically

  3. What good looks like — tone, format, level of detail, what to avoid

Then, and this is the part most people skip, invite it to ask you clarifying questions before it starts. The output will be different. Not always dramatically, but often enough to matter.

Have you had a prompting experience — good or bad — that changed how you think about working with AI? We'd love to hear it. Reply to this email or find us on LinkedIn.
