The Agentic AI Landscape Is Splitting Into Three Lanes

TL;DR — I spent a month running three AI agents, not reading about them. Here's what I learned:

  • OpenClaw: 234K GitHub stars, 512 security vulnerabilities. Hobbyist power tool. I bought a Mac Mini for it, then returned it.
  • Manus AI: Meta acquired it for $2B. Finishes 50-page research reports while you sleep. Now inside Ads Manager.
  • Airtable Hyperagent: Every agent session gets its own cloud computer. Structured data + enterprise guardrails. The underdog I'm watching closest.
  • The real takeaway: The right agent depends on what you're actually trying to do — not which one has the most GitHub stars.
  • Bonus: Mac Mini vs VPS for running OpenClaw — which one I'd actually recommend to a founder.

I've spent the last month deep in the AI agent rabbit hole. Not reading about them — actually running them. Setting them up, breaking them, paying the bills, and watching them either do something useful or light money on fire.

Here's what I've noticed: the agentic AI space isn't converging. It's splitting. And the three lanes it's settling into tell you a lot about where this is all heading.

Lane 1: The open-source power tool

OpenClaw went from zero to 234,000 GitHub stars in under a month. Peter Steinberger built something people wanted badly enough to set up a Mac Mini, configure a dozen API keys, and let an AI agent loose on their Telegram.

I did the same thing. Bought the hardware. Got it running. Watched it browse the web, try to manage my calendar, and send messages on my behalf. It was genuinely impressive for about 72 hours.

Then I returned the Mac Mini.

(I'm still running an instance called Eddy on a DigitalOcean Droplet. So take my skepticism with a grain of salt.)

The honest assessment: OpenClaw is powerful, flexible, and moving fast. But a security audit in January found 512 vulnerabilities — 8 critical — including one-click remote code execution (CVE-2026-25253, CVSS 8.8). Researchers found 341 malicious skills on ClawHub and over 42,000 publicly exposed instances.

On February 14th, Steinberger announced he's joining OpenAI. Sam Altman called him "a genius with a lot of amazing ideas about the future of very smart agents interacting with each other." The project moves to an open-source foundation — probably the right call for something this consequential.

OpenClaw is a hobbyist's dream and an enterprise security team's nightmare. If you have the technical chops and the risk tolerance, it's remarkable. If you're running a business, it's a liability.

Lane 2: The autonomous executor

Manus AI took a completely different approach. Instead of giving you a messaging bot to tinker with, they built something that finishes tasks while you're not looking.

Give it a research brief. Come back an hour later. There's a 50-page report with citations, competitive analysis, and data tables. It scored higher than OpenAI's Deep Research on the GAIA benchmark across every difficulty level — and that was before Meta acquired them for over $2 billion in December.

From Manus's own blog post announcing the deal:

Manus joins Meta for the next era of innovation.

Understated, considering the deal reportedly closed in under three weeks.

Manus is now inside Meta Ads Manager as of February 17th. Give it a campaign to analyze and it'll pull reports, research audiences, and surface patterns across your ad spend.

What makes Manus interesting is the model: async execution. You don't sit there watching it work. You delegate and it delivers. Technical people I respect are genuinely impressed by it. The Meta acquisition gives it distribution that no other agent startup can match.

The catch? You're handing your work to a black box. Manus decides how to break down your task, which tools to use, and what to include. If you care about knowing exactly what's happening and why, that autonomy is both a feature and a bug.

Lane 3: The structured enterprise play

This is the one I didn't see coming.

On February 19th, Airtable's CEO Howie Liu posted this:

I've been personally burning through billions of tokens a week for the past few months as a builder. Today I'm excited to announce Hyperagent, by Airtable. An agents platform where every session gets its own isolated, full computing environment in the cloud — no Mac Mini.

Hyperagent gives each agent session a real machine — filesystem, shell, browser, code execution, hundreds of integrations, access to your data warehouse. Skills improve with every run. And it's all sitting on top of Airtable's structured data, which 80% of the Fortune 100 already uses.

This is separate from Airtable's Superagent (multi-agent research coordination, launched January 27th) and their existing Field Agents (per-record AI automation). Three different products, three different layers of the stack.

Different bet entirely. OpenClaw says "let the agent roam free." Manus says "let the agent work autonomously." Hyperagent says "give the agent a structured foundation and enterprise guardrails."

Here's why I think the structured approach matters more than most people realize: AI agents are only as good as the context they operate in. Give an agent access to a messy filesystem and it'll produce messy output. Give it clean, well-structured data with clear field semantics and relationships — the kind of thing Airtable already provides — and you get dramatically better results.

From Liu's Substack:

I've been burning through billions of tokens a week. Not theorizing about where agents are going, but building with them, and building them, in the most hands-on way possible.

Data preparation matters more than model selection. That's his thesis, and honestly, it tracks with everything I've seen building on Airtable.

What I actually learned

After running all three, my takeaway isn't "pick a winner." It's that the right tool depends entirely on what you're trying to do.

I don't want OpenClaw recreating my task management system from scratch. I want it interfacing with the tools that already work — Airtable, Linear, Obsidian, Google Workspace. The tools that are industry standard didn't get there by accident. They got there by being good.

The real skill isn't picking the most powerful agent. It's knowing when to use AI and when to use the proven tool that already does the job well. I've been thinking about this as finding the balance — knowing where AI adds genuine value versus where it's automation theater.

More on that soon.

Bonus: OpenClaw on Mac Mini vs VPS — a founder's guide

I get this question a lot, so here's the honest breakdown. I've done both.

Mac Mini (what I tried first)

  • Cost: ~$600-800 one-time (M4, 16GB RAM)
  • Setup time: 2-4 hours if you know what you're doing
  • Monthly cost: your electricity bill. Maybe $5-10/mo.
  • Uptime: only when it's plugged in and your internet is up
  • Access: local network only, unless you set up port forwarding or a tunnel
  • Security: on your home network, next to your other devices
  • When you're done: you sell it on Facebook Marketplace for 60 cents on the dollar
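If you do go the Mac Mini route and want access from outside your house, the simplest option I know is a reverse SSH tunnel to a small server you control. A minimal sketch, with everything hypothetical: the port the agent listens on, the VPS hostname, and the username are all placeholders you'd swap for your own setup.

```shell
# Reverse SSH tunnel from the Mac Mini to a VPS you control.
# Assumes the agent's web UI listens on localhost:3000 on the Mini
# (check your own install for the real port) and that you can SSH
# into vps.example.com as "deploy".
#   -N  don't open a remote shell, just forward traffic
#   -R  expose the Mini's port 3000 as port 8080 on the VPS
ssh -N -R 8080:localhost:3000 deploy@vps.example.com
```

With the tunnel up, hitting `localhost:8080` on the VPS reaches the agent on the Mini. If you expose that port publicly, put authentication in front of it (a reverse proxy with basic auth, at minimum) given the vulnerability count above.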

I ran this setup for about a week. It worked. It also meant I had a $700 computer sitting in my closet running an AI agent with 512 known vulnerabilities on the same network as everything else in my house.

I returned it.

VPS / DigitalOcean Droplet (what I use now)

  • Cost: $0 upfront
  • Setup time: 30-60 minutes with a solid guide
  • Monthly cost: $24/mo (Basic tier, 4GB RAM, 2 vCPUs)
  • Uptime: 99.99%. It doesn't care if your power goes out.
  • Access: SSH from anywhere. Phone, laptop, coffee shop.
  • Security: isolated from your home network. Firewall rules. Snapshots.
  • When you're done: you click "Destroy Droplet" and stop paying

This is what I run Eddy on. A $24/mo DigitalOcean Droplet. Basic tier. It's been running for weeks without me touching it.
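For what it's worth, most of that 30-60 minute setup is basic hardening before you install anything. Here's a rough sketch of a first-login checklist on a fresh Ubuntu droplet. The username is a placeholder, and you should adapt the commands to your distro rather than paste them blindly.

```shell
# First-login hardening on a fresh Ubuntu droplet (run as root).
# "deploy" is a placeholder username -- use your own.

# 1. Create a non-root user for day-to-day work
adduser deploy
usermod -aG sudo deploy
rsync -a ~/.ssh /home/deploy/ && chown -R deploy:deploy /home/deploy/.ssh

# 2. Firewall: allow SSH, deny everything else inbound
ufw allow OpenSSH
ufw default deny incoming
ufw enable

# 3. Keys only, no password logins
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart ssh

# 4. Ban repeated login attempts
apt update && apt install -y fail2ban
```

None of this is exotic, but it's the difference between an agent in an isolated, locked-down box and an agent sitting on your home network next to everything you own.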

My take for founders: The real argument isn't cost. It's optionality.

Everything in this space is moving absurdly fast. New models every month. New agent frameworks every week. Hardware announcements around every corner. Buying a Mac Mini right now is betting that you know what machine you'll need six months from now. You don't. I didn't.

With a VPS:

  • If something better than OpenClaw comes out tomorrow, I turn it off. No hardware to sell.
  • If I need more power, I click a few buttons: more CPUs, more RAM, additional agents.
  • If I decide the whole thing isn't worth it, I stop paying $24 and walk away.
  • When I've done enough experimenting to know exactly what machine I need, I'll buy it with confidence instead of guessing.

The Mac Mini will still be there when you're ready for it. Until then — stay lean, buy the tokens, and have fun figuring out what actually works.

The question

Every week, someone asks me which AI agent to use. I've started answering with a question: What are you actually trying to accomplish?

If you want to tinker and experiment — OpenClaw. If you want tasks done while you sleep — Manus. If you need enterprise trust and structured execution — keep watching Hyperagent.

But here's the real question: Are you picking your tools based on what they can do, or based on what you actually need done?

Those are very different lists.


Know someone navigating the AI agent hype? Forward this to them. I write these so people can skip the marketing and get the honest take from someone actually running these tools. If this saved you time, it'll save them time too.


This is Automate With Rob, a newsletter about building with AI tools without losing your mind. If someone forwarded this to you, subscribe here.