It's Not An AI Problem. It's A You Problem.
As a professional nerd, I’ve never really “gone viral” before. But after last week’s EO interview clocked nearly half a million views in just a few days, I’ve been eager to dig into exactly what resonated. In short, the “Teammate, Not Technology” reframe seems to have landed because it captures a simple but fundamental shift: we need to stop thinking about AI as a technology rollout and start treating it like a new teammate.
What I didn't fully explain is that this isn't just a feel-good metaphor or semantic distinction. It produces measurably better results.
The most surprising discovery I've made over two years of working intensively with AI is this: large language models respond to human-like treatment in remarkably human-like ways. The quirks and patterns that make someone a good collaborator with humans turn out to be eerily similar to what makes them effective when working with AI.
The Quirky Parallels
Let's start with something cognitive scientists have long observed about human reasoning: when people explicitly walk through their thought process step by step before making a decision, they tend to arrive at better outcomes. Verbalizing your chain of thought is a well-documented aid to human judgment.
Here's what's weird: large language models exhibit a strikingly similar pattern.
When you ask ChatGPT or Claude to "walk through your reasoning step-by-step before answering," you get dramatically better results—often going from flat-out wrong to surprisingly accurate. The models, like humans, benefit from being encouraged to think explicitly before concluding.
A similar pattern emerges with mental breaks. There's a reason why shower thoughts and walk-around-the-block moments lead to breakthroughs—our brains benefit from conceptual pauses.
Kit Kat's marketers stumbled onto the same parallel in AI: in a recent campaign, they reported that inserting "Have a break…" in front of prompts measurably improved the quality of responses. Simple examples like these suggest that the parallels to human cognition aren't just metaphorical; they're functional.
When I Was the Doofus
Let me share a recent personal example. My dad asked if AI could help him convert a multi-hundred-page PDF into a CSV file. "Of course," I replied confidently, and somewhat absentmindedly popped the file into ChatGPT with a quick conversion request.
The model immediately obliged, giving me a CSV file. I almost forwarded it to my dad with a self-satisfied "AI victory!" message, but fortunately opened it first. It was a single-line CSV—completely useless.
To the AI-uninitiated, this would confirm their suspicion: "See? AI isn't capable." But the problem wasn't the AI—it was my approach. I was treating it like a conversion tool instead of a thinking partner.
I added one sentence to my query: "Before you respond to my request, please walk me through your thought process step-by-step."
This time, ChatGPT outlined how it would parse the document, create categories, establish headings, and process the information. It then produced a perfect multi-hundred-line CSV with all the necessary categories.
Same model. Same task. Completely different result—all because I shifted from commanding a tool to collaborating with a teammate.
The Anthropomorphization Question
I can hear the objections already: "Isn't it dangerous to anthropomorphize technology?"
Perhaps. But I'm more concerned about the opposite risk: dehumanizing our approach to AI.
Recently, someone mentioned that Sam Altman tweeted about how users saying "please" and "thank you" costs OpenAI millions in token processing. My response? Cry me a river, Sam.
I'm not worried about costing a tech giant a few million with my politeness. I'm concerned about preserving my humanity while working with these systems.
The simple truth is this: my input to the model directly correlates to the output I receive. To get better outputs, I need to provide better inputs. And I've found that thinking of AI as a teammate rather than a tool consistently leads to better inputs.
Real-World Evidence
I'm not alone in this discovery. On a recent episode of Beyond the Prompt, Russ Somers (CMO who tripled his marketing department's productivity with AI) shared something fascinating: he gives his custom GPTs human names like "Betty Budget," "Aidan Adman," and "Roger RevOps."
This isn't cute branding—it's strategic. Russ discovered that naming his GPTs influenced how he interacted with them, which significantly improved the outputs he received. The change wasn't in the model; it was in his approach as a collaborator.
"When I'm interacting with 'Betty' versus 'Budget Analysis Tool,' I naturally provide more context, explain my needs more clearly, and engage in more of a dialogue," he explained. "The results speak for themselves."
Teammate Communication Tactics
So how do we actually implement this shift from tool to teammate? Here are the approaches I've found most effective:
Give Your Teammate a Role to Play
Tell your AI what kind of teammate you need. If you’re working through a parenting challenge, tell the model, “You’re a developmental psychologist with a specialty in early childhood development.” If you’re working through a strategic management question, tell the model, “You’re a senior partner at McKinsey with decades of experience in ____ industry…”
This helps the model zero in on which region of its unfathomably large “cloud” of training data it should be drawing from.
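If you ever work with a model through the API instead of a chat window, the role maps naturally onto the system message. Here's a minimal sketch using the OpenAI Python SDK; the model name and example question are just illustrations, and you'd supply your own API key:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# The "role to play" lives in the system message; the actual
# request goes in the user message.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You're a developmental psychologist with a specialty "
                "in early childhood development."
            ),
        },
        {
            "role": "user",
            "content": "My four-year-old refuses to sleep alone. Where do I start?",
        },
    ],
)

print(response.choices[0].message.content)
```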
Always Request “Chain of Thought”
Unless you’re using a reasoning model like o3 (and even then, my own experimentation has taught me to be a little skeptical of skipping it), every prompt to a large language model should include some variant of “Before you respond to my request, please walk me through your thought process step by step.” This is so important that I recommend creating a text replacement on your device for “CoT” so that you don’t have to type that sentence every time.
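And if you're scripting your prompts rather than typing them, the text replacement becomes a one-line helper. A quick sketch, again with the OpenAI Python SDK and an illustrative model name, that prepends the chain-of-thought request to everything you send:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

COT = (
    "Before you respond to my request, please walk me through "
    "your thought process step by step.\n\n"
)

def ask_with_cot(prompt: str) -> str:
    """Send a prompt with the chain-of-thought request prepended."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whatever model you prefer
        messages=[{"role": "user", "content": COT + prompt}],
    )
    return response.choices[0].message.content

print(ask_with_cot(
    "Summarize the trade-offs between hiring a contractor and a full-time employee."
))
```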
Let Your Teammate Ask Questions
One of the biggest mistakes we make is assuming we've provided all the context the AI needs. We wouldn't do this with a human colleague—we'd expect them to ask for clarification if our request was unclear.
Try adding this to your prompts: "Before providing an answer, please ask me any questions you need to better understand my request and deliver spectacular results." (Use AI to use AI…)
This simple addition transforms the interaction from a one-way command to a collaborative dialogue. The questions the AI asks will often reveal assumptions you didn't realize you were making, leading to significantly better outcomes.
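In code, this becomes a two-turn exchange: the model's first reply is its questions, and your answers go back into the conversation before it produces the real output. A rough sketch under the same assumptions as above (OpenAI Python SDK, illustrative model and prompts):

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # illustrative

ASK_FIRST = (
    "Before providing an answer, please ask me any questions you need "
    "to better understand my request and deliver spectacular results."
)

messages = [
    {"role": "user", "content": "Help me plan a launch email sequence. " + ASK_FIRST}
]

# Turn 1: the model replies with clarifying questions, not an answer.
questions = client.chat.completions.create(
    model=MODEL, messages=messages
).choices[0].message.content
print(questions)

# Turn 2: add the questions and your answers to the history, then let it proceed.
messages.append({"role": "assistant", "content": questions})
messages.append({
    "role": "user",
    "content": "Audience: existing customers. Tone: warm. Three emails over one week.",
})

answer = client.chat.completions.create(
    model=MODEL, messages=messages
).choices[0].message.content
print(answer)
```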
The Feedback Loop
Another mistake: taking AI output, refining it ourselves, and never telling the model what we changed. This is like silently editing a colleague's work without telling them how to improve next time.
Instead, try this approach: "Here's how I modified your output. What can you learn from these changes for next time?"
When you complete this feedback loop, something remarkable happens: within the conversation (and, in tools with memory features, across sessions), the model incorporates your preferences and adjusts its approach accordingly. Your next interaction builds on the previous one, just like a human working relationship.
This isn't about making the AI "feel" better—it's about creating a more effective collaboration pattern that produces superior results.
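For the API-inclined, the key detail is that the lesson lives in the conversation history: you append your edited version and ask the model what to take from it. A minimal sketch under the same assumptions as the earlier snippets; note that across separate sessions you'd need to persist the takeaway yourself, or rely on a tool's memory feature:

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # illustrative

messages = [
    {"role": "user", "content": "Draft a two-sentence welcome email for new subscribers."}
]
draft = client.chat.completions.create(
    model=MODEL, messages=messages
).choices[0].message.content

# Your hand-edited version of the model's draft.
my_edit = (
    "Welcome aboard! Each week I'll send you one idea you can put to work that same day."
)

# Close the loop: show the model your edit and ask what it should learn.
messages.append({"role": "assistant", "content": draft})
messages.append({
    "role": "user",
    "content": (
        f"Here's how I modified your output:\n\n{my_edit}\n\n"
        "What can you learn from these changes for next time?"
    ),
})

lesson = client.chat.completions.create(
    model=MODEL, messages=messages
).choices[0].message.content
print(lesson)
```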
Try This For a Week
Here's my challenge to you: for the next seven days, consciously shift how you interact with AI. Stop treating it like a calculator and start treating it like a collaborator.
Try this simple test: Take a complex question or task and approach it two different ways:
Tool approach: "Convert this document to a summary."
Teammate approach: "You’re a gifted document analyst. I need your help summarizing this document. Before you begin, please let me know what additional information would help you understand the context and my goals better."
Compare the results. The difference won't be subtle.
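If you'd rather run the comparison programmatically, here's a rough sketch that fires both prompts at the same model and prints the replies side by side (OpenAI Python SDK again; "report.txt" is a stand-in for whatever document you're testing with). Expect the teammate version to come back with questions first; that's the point:

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # illustrative

with open("report.txt") as f:  # stand-in for your own document
    document = f.read()

prompts = {
    "TOOL": f"Convert this document to a summary.\n\n{document}",
    "TEAMMATE": (
        "You're a gifted document analyst. I need your help summarizing this "
        "document. Before you begin, please let me know what additional "
        "information would help you understand the context and my goals "
        f"better.\n\n{document}"
    ),
}

for label, prompt in prompts.items():
    reply = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    print(f"--- {label} ---\n{reply}\n")
```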
Then try applying the other teammate principles:
Ask it to walk through its thinking
Create a feedback loop on outputs
Provide context like you would to a new team member
Share what changes when you start treating AI less like a calculator and more like a collaborator. I suspect you'll never go back to the command-line approach.
Because here's the truth: whether you believe AI is "thinking" like a human or not is irrelevant. What matters is that treating it like a thinking partner consistently produces better outcomes.
The most effective human-AI collaboration looks less like operating a machine and more like developing a working relationship. The sooner we embrace this reality, the sooner we'll unlock AI's full potential.
Related: Teammate, Not Technology
Related: Use AI to Use AI: The Meta-Skill Nobody's Talking About
Related: Beyond the Prompt: Russ Somers on Constructing a Marketing GPTeam
Related: Don’t Keep The AI Expert Waiting