Punish Inaction: Why Leaders Must Make AI Adoption Non-Optional

"Good companies reward success, punish failure, and ignore inaction. Great companies reward success and failure, and punish inaction."

I've carried this insight from Stanford creativity guru Jim Adams for years. Nowhere — no-when? — is it more true than in the age of AI.

Last week, Shopify CEO Tobi Lütke released an internal memo that's been making waves. In it, he declared: "Using AI effectively is now a fundamental expectation of everyone at Shopify... Frankly, I don't think it's feasible to opt out of learning the skill of applying AI in your craft."

This isn't just another tech CEO jumping on the AI bandwagon. It's the clearest articulation I've seen of a principle I've been advocating privately to leaders for the past 18 months: the greatest risk with AI isn't failure—it's inaction.

Permission Without Accountability: The Inaction Problem

About a year ago, the CEO of a prominent public organization remarked, "We've given folks all of this access and permission, but why aren't we seeing more usage? Why aren't we seeing more action?"

I asked a simple question in return: "What is the cost of inaction?"

"What do you mean?" he replied.

"The reason people aren't taking action," I explained, "is because there are no consequences for inaction. What if you changed the paradigm so that it was not acceptable to remain inactive?"

His expression told me this was a perspective shift he hadn't considered.

This is the fundamental problem in most organizations today. They've provided access to AI tools. They've granted permission to experiment. They've encouraged exploration. But they haven't made inaction unacceptable.

Brad Anderson, President of Product, Engineering and UX at Qualtrics, put it bluntly: to be an AI leader, "every person in the organization, in every function—from HR to finance to legal to operations—must be using AI every single day."

Yet despite universal agreement on AI's importance, most organizations still treat it as optional—an "if you're interested" postscript rather than a fundamental expectation.

From Invitation to Expectation

The Shopify memo reveals a critical shift in thinking. Lütke writes: "The call to tinker with [AI] was the right one, but it was too much of a suggestion. This is what I want to change here today."

This evolution—from invitation to expectation—is the exact transition organizations need to make.

John Waldmann, CEO of Homebase, shared a perfect example on Beyond the Prompt (interview coming soon!). A year ago, at the company hackathon, they asked participants to build ideas and "encouraged" them to use AI if they wanted. This year, they required AI to be a foundational element of each idea.

The result? Far more AI-driven innovations.

The difference between "recommended" and "required" might seem subtle, but it's transformative. Encouragement acknowledges the old paradigm while suggesting a new one. Requirement establishes the new paradigm as the baseline.

This isn't just about semantics—it's about creating the conditions for organizational evolution.

Measuring Shots on Goal: Implementing Accountability

A management truism comes to mind: "People manage what you measure." If AI activity isn't being measured in some way, people will neglect it. When it's measured, people will prioritize it.

This is why I suggested to the CEO: "In performance reviews six months from now, set the expectation for your leadership team that you want to hear about ten AI-driven initiatives they're running. All ten can fail—that's fine. You're not asking for ten successes to publicize. But if they don't have experiments to show, that should be a problem."

If you've read Ideaflow, you know I'm a big believer in "shots on goal" in innovation—not just successes. This principle applies perfectly to AI adoption. Don't measure only successes; measure attempts. It's through taking shots that we ultimately score.

Here's a three-part framework for implementing this accountability:

  1. Individual Usage Demonstration: Any employee should be able to show, on the spot, how they're using AI in their regular workflow. They should be able to share their screen and walk everyone through one meaningful use case from which they're routinely deriving value. This doesn't mean showcasing major breakthroughs—just evidence of integration.

  2. Calendar Evidence: Where on people's calendars is experimentation happening? Make AI experimentation a scheduled activity, not something to fit in "when there's time" (which there never is). Mindstone CEO Joshua Wöhle has his team dedicate two hours each week to identifying automation opportunities, and another two hours to building the automations they identify.

  3. Normalizing the Question: When engaging with your team, don't be afraid to ask the seemingly obvious question, "Have you tried AI?" Asking it repeatedly is a great way to normalize AI use. The truth is, it takes repetition to work AI into our workflows and muscle memory. Asking the simple question regularly reassures folks that you're not only open to AI use but actively looking for people who want to level up.

Shopify is implementing accountability through performance reviews: "We will add AI usage questions to our performance and peer review questionnaire." They're also creating structural incentives: "Before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI."

These aren't soft suggestions. They're legitimate accountability mechanisms.

From Philosophy to Practice

So how do you actually implement this in your organization? Here are concrete steps:

Start with the executive team. As Lütke notes, "Everyone means everyone. This applies to all of us—including me and the executive team." If executives can't demonstrate AI usage, the rest of the organization won't take the expectation seriously. Brad Anderson routinely shares his screen to show how he's leveraging AI. Diarra Bousso regularly sends her team Loom videos of AI power-ups.

Create specific metrics. "Be prepared to report out on 10 AI experiments in the next six months" is clear and measurable. It creates a North Star for action without dictating how to get there.

Address the "too busy" argument head-on. Make it clear that "we've been too busy" is not an acceptable reason for inaction. In fact, being "too busy" is precisely why AI adoption should be prioritized.

Redefine failure. Ensure everyone understands that failed AI experiments aren't punished—they're celebrated as shots on goal. What's punished is having no shots to show.

Create public sharing mechanisms. Shopify has dedicated Slack channels like #revenue-ai-use-cases and #ai-centaurs where people share their prompts and approaches. This creates both accountability and learning.

When you encounter resistance, remember that you're not asking for perfection or breakthrough innovation. You're simply asking for action—for shots on goal.

The Moral Case

There's a moral dimension to this approach that shouldn't be overlooked. As Lütke put it: "Stagnation is slow-motion failure. If you're not climbing, you're sliding."

This isn't just about performance but responsibility—to your organization, your employees, and your customers.

When AI can dramatically improve outcomes, continuing without it isn't just inefficient—it's delivering suboptimal results to the people you serve. Making AI adoption non-optional isn't an act of control; it's an act of service.

This is especially true for those in leadership positions. You're not just responsible for your own work but for creating the conditions where others can do their best work. If you're not mandating AI exploration, you're mandating underperformance.

Your Next Shot on Goal

If you lead a team or organization, here's your immediate action plan:

  1. Schedule an organization-wide AI usage review. Make it clear that the purpose isn't to criticize but to establish a baseline and set expectations.

  2. Implement the 10-experiment standard. Tell your direct reports that in six months, you want to see evidence of 10 AI experiments or initiatives. They can all “fail,” but they must exist. Inaction is the only punishable failure.

  3. Add AI usage questions to your next performance review cycle. Make them specific: "Show me how you've used AI to address challenges in your role."

  4. Create calendar space. Block time for AI experimentation on your own calendar, and encourage (or require) your team to do the same.

  5. Ask the cost-of-inaction question. In your next leadership meeting, ask: "What is the cost of our current approach to AI adoption?" Then follow with: "What if we made inaction unacceptable?"

The existential question for your organization isn't whether AI will transform your industry—it will. The question is whether you'll be the one doing the transforming or the one being disrupted.

As Jim Adams might put it today: Good companies tolerate AI inaction. Great companies punish it.

Now, what's your next shot on goal?

P.S. From "Usage" to "Action"

If you read my recent post "Stop Measuring AI Usage (Start Measuring AI Impact)," you know I'm skeptical of superficial metrics like login counts and DAUs. These tell you nothing about impact.

But that doesn't mean we shouldn't measure AI adoption. We just need to measure the right things: meaningful experiments, workflow transformations, and shots on goal.

Inaction is still the enemy. We're just being more sophisticated about how we define "action." Don't track logins—track attempts at transformation. Don't celebrate accounts created—celebrate workflows reimagined.

The point isn't contradictory—it's complementary. Stop measuring meaningless activity and start measuring meaningful experimentation.

Related: Beyond the Prompt: Brad Anderson
Related: Stop Measuring AI Usage (Start Measuring AI Impact)
Related: Beyond the Prompt: Diarra Bousso
Related: Take Your Own Job Before Someone Else Does

Join over 24,147 creators & leaders who read Methods of the Masters each week
