I'm Running an Experiment on Trust
Something has been bothering me for months.
Every client call I get on, every coffee meeting, every Zoom, I hear the same thing: "We need to adopt AI." The urgency is real. The budgets are moving. The FOMO is palpable.
But when I talk to the people who actually have to use these tools — the operators, the managers, the individual contributors — I hear something completely different:
"I don't trust it."
Not "I don't understand it." Not "I'm scared of it." Specifically: "I don't trust it."
The question nobody is asking the right way
The Edelman Trust Barometer found that only 32% of Americans trust AI. Pew Research found that 73% of AI experts see AI having a positive impact, versus just 23% of the public. KPMG surveyed 48,000 people and found that 66% use AI regularly but only 46% actually trust it. And Harvard Business Review reported that only 6% of companies fully trust AI agents to handle core business processes. They all ask the same question: "Do you trust AI?" And they get predictable answers that fill a press release.
But that question is too blunt. It's like asking, "Do you trust technology?" The answer depends entirely on which technology.
You trust your email to arrive. You trust Google Maps to route you efficiently. You trust your bank app to show the right balance.
Do you trust ChatGPT to summarize a document accurately?
Do you trust an AI agent to send emails on your behalf?
Somewhere between "I trust my email" and "I trust an AI agent," there's a cliff. The trust just falls off. And where that cliff sits, and why, is the most useful thing anyone working with AI needs to understand right now.
What I built
I created a 3-minute survey that walks you through this journey. It maps what I'm calling The Trust Spectrum: from technologies you trust without thinking to technologies you're being asked to trust right now.
The survey covers:
- How much you trust 6 specific technologies (from email to autonomous AI agents)
- What breaks your trust in AI
- What would increase it
- Who you think is responsible when AI gets it wrong
- How far you'd actually let an AI go on your behalf
I'm not looking for AI experts. I'm looking for everyone. The person who's never used ChatGPT and the person who uses it 50 times a day. The gap between those perspectives is where the real story is.
What happens next
When I hit 100 responses, I'm publishing the full results as a series:
- The Trust Cliff: the exact point where trust in technology breaks down
- The Trust Gap: how technical and non-technical people see AI completely differently
- Who's Responsible When AI Screws Up?: nobody agrees, and that's the real finding
- The AI Permission Ladder: what people will and won't delegate to AI
- The 5-Year Prediction: will AI be as trusted as email by 2031?
Subscribers get every one of these first, before they hit LinkedIn.
Why I'm finally shipping this
I was at a client's office in Plano this week. Fully remote company, two days in the same room. We solved problems in five minutes that had been circling on Teams calls for weeks. Nobody had a laptop open. There was actual eye contact. The whole dynamic was different.
And it hit me: trust works differently when you're in the same room. You read body language. You catch the face someone makes before they say "that's fine." You have a conversation while walking to get coffee that turns out to be the most important meeting of the trip.
That's what made me stop sitting on this survey and actually send it. Because the trust question isn't abstract for me. I'm a fractional advisor and automation specialist. Every client engagement comes down to: "How much latitude should we give this thing?"
That answer has to be built on real data. Not hype. Not fear. Not what the vendors want you to believe.
I'd genuinely appreciate your 3 minutes.
If someone in your network has an interesting perspective on this, forward this email. The more diverse the sample, the better the findings.
— Rob
Follow me on LinkedIn for the results as they roll in.