AI Is Just a Consultant: What Steve Jobs Got Right in 1992

Knowing about something vs. knowing something. Steve Jobs used a banana to explain the difference.

Jobs was between NeXT and Apple when he gave this talk. He had nothing to sell, so he was honest.

In 1992, Steve Jobs gave a talk at MIT Sloan's Distinguished Speaker Series. He was between companies. NeXT was struggling. Apple hadn't asked him back yet. He was candid in a way he rarely was during product launches.

Someone asked about management consulting. Jobs didn't hold back.

"Without owning something over an extended period of time, where one has to see one's recommendations through all action stages and accumulate scar tissue for the mistakes, one learns a fraction of what one can."

He compared it to looking at pictures of a banana versus actually tasting one. Consultants get a broad view across many companies, but it's thin. They never sit with the consequences of their own advice long enough to learn from it.

I've been thinking about this quote a lot lately. Because it describes AI perfectly.

The three layers of the consultant critique

Jobs was making three distinct points, and all three map directly to how AI works today.

1. No ownership of outcomes

A consultant writes the recommendation, presents the deck, and moves on to the next client. They never find out if the strategy actually worked. They never see the reorg fail six months later or watch the product launch land flat.

AI does exactly the same thing. It writes your strategy doc, drafts your marketing plan, generates your code. Then it closes the tab. It has no idea whether any of it worked. When it gets something wrong, it's not AI that stays up until 2am fixing the production issue. That's you.

2. Breadth without depth

Jobs' banana metaphor is perfect here. Consultants see a lot of companies but never go deep enough to really understand any one of them. They pattern-match across industries, which is valuable, but they miss the texture.

AI is the extreme version of this. It has seen more text than any human could read in a thousand lifetimes. It brings patterns from every industry, every framework, every methodology. That breadth is genuinely useful. But it has never actually run a company, shipped a product, or lost a client. The patterns are real. The understanding is a picture of a banana.

I've been surveying people on how much they trust different technologies, and the results are striking. People trust GPS, online banking, and email without a second thought. But AI? That trust drops off a cliff. Part of the reason is exactly this: breadth without depth feels impressive until you need it to be right about your specific situation.

3. No scar tissue from mistakes

This is the core of Jobs' argument. Scar tissue comes from living with your decisions. From watching something you recommended fall apart and having to pick up the pieces. That's how you actually learn.

AI accumulates no scar tissue. Every conversation starts fresh. It doesn't remember the last time its code suggestion introduced a bug. It doesn't carry the weight of a strategy that failed. It gives you the same confident answer whether it's right or wrong, because confidence isn't earned from experience. It's just how large language models work.

I've been on both sides

I'm a 4x founder who spent the last few years consulting. I've walked into companies, mapped their operations in a week, handed over a plan, and moved on. Sometimes those plans worked. Sometimes I have no idea, because I was already on to the next engagement.

That's the thing. I was good at it. Clients got value. But I never had to sit in the all-hands when the reorg I recommended went sideways. I never had to debug the automation I designed when it broke at scale six months later. I got the breadth. I missed the texture.

That gap between advice and ownership is real, and it's exactly where things tend to fall apart.

That's why I'm going back to building. Not because consulting lacks value, but because I want the scar tissue. I want to be the one who lives with the decisions. (I wrote about a related tension in the agentic AI landscape recently.)

How to manage AI like a consultant

If you've ever hired a consultant, you already know how to work with AI. The principles are identical.

Scope the work. You wouldn't hand a consultant your entire business and say "fix it." You give them a specific problem. Do the same with AI. I built a skill that lets Claude Code talk directly to Airtable. It works because the scope is narrow: here's the schema, here's the task, go. When I tried the same tool on open-ended "help me redesign my database" prompts, the output was useless.

Verify the output. No competent manager accepts a consultant's recommendation without checking the assumptions. AI is the same. It sounds confident regardless of accuracy. I've had AI generate code that compiled, passed lint, and still did the wrong thing because I stopped paying attention. Check the work.
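Here's a hypothetical illustration of that failure mode (not the actual code from my incident): the helper below is valid, lint-clean Python, yet a two-line spot-check against known answers exposes the logic error.

```python
# Hypothetical AI-generated helper: it parses, passes lint,
# and type-checks, but the logic is wrong for even-length input.
def median(values: list[float]) -> float:
    ordered = sorted(values)
    # Bug: always returns the single middle element; even-length
    # lists need the average of the two middle elements.
    return ordered[len(ordered) // 2]

# Two spot-checks catch what lint cannot:
assert median([1, 2, 3]) == 2      # odd length: looks correct
assert median([1, 2, 3, 4]) == 3   # even length: should be 2.5
```

The point isn't this specific bug. It's the habit: confident-looking output still gets checked against an answer you already know.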

Own the decision. A consultant advises. You decide. AI advises. You decide. The moment you stop owning the decision is the moment you've abdicated the one thing that actually matters.

The reframe

The AI conversation has gotten stuck in a binary. Either AI replaces you or it doesn't. Either it's revolutionary or it's hype.

Here's a better frame: AI is the most scalable, affordable consultant you've ever hired. It works around the clock. It brings patterns from everywhere. It never gets tired and it never has an ego.

But it doesn't own the outcome. You do.

I've been calling this balance the Ebbe Method: finding the right mix of AI and traditional approaches for your specific situation. Not all-in on AI. Not ignoring it. Finding where it actually helps and where you still need the scar tissue.

Stop fearing AI. Start managing it. You already know how.

If this reframe changed how you think about AI, send it to someone on your team who's either terrified of it or treating it like magic. They probably need to hear this more than you did.