
My vision for 2026: Human First, Supported by AI

Why this is the year leaders design AI, or AI designs them

Picture this scenario...

Monday, June 1st.

A team uses three AI agents. One in Slack summarizes discussions. One in their CRM qualifies leads. One in their project tool creates tasks based on emails. They talk to each other via MCP. Fast. Efficient. Everyone's happy. Until that Monday. The Slack agent summarizes an internal discussion about a customer. Sensitive information: "We need to scale down. They don't pay on time."

That summary gets forwarded to the CRM agent. It recognizes the customer name. Links it to an open ticket. Automatically triggers a check-in email. The email goes out. To the customer. With the internal summary in it.

Within an hour, the customer calls. Angry. Shocked.

No hacker. No attack. No data breach.

...Just three agents doing exactly what they were allowed to do... but nobody had designed what they weren't allowed to merge.
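The missing rule is small. Here is a minimal sketch in Python of what "designing what agents may not merge" can look like; every label, name, and function in it is hypothetical, not an existing API:

```python
from dataclasses import dataclass

# Hypothetical labels; a real system would map these onto its own
# data classification scheme.
INTERNAL = "internal"
CUSTOMER_SAFE = "customer_safe"

@dataclass
class Content:
    text: str
    origin: str   # which agent or tool produced this
    label: str    # sensitivity classification, set at the source

def may_leave_the_org(content: Content) -> bool:
    """The rule nobody designed: internal content never goes out."""
    return content.label == CUSTOMER_SAFE

# The Slack summary from the scenario, labeled where it was created.
summary = Content(
    text="We need to scale down. They don't pay on time.",
    origin="slack_summarizer",
    label=INTERNAL,
)

# The CRM agent checks the label before any outbound email is triggered.
if not may_leave_the_org(summary):
    print("Blocked: internal content may not enter a customer email.")
```

One label at the source, one check at the exit. That is all the design the scenario was missing.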

This scenario isn't fiction. This class of mistakes is happening right now. At large companies. At teams of ten people. At scale-ups. At everyone deploying AI without design.

Because that's where we are. AI has become so accessible and integrates so fast that we forget to determine how it should participate. We trust tools before we understand what they're allowed to do. We turn on an agent before we've set boundaries.

We're in 2026 now. And this is visible everywhere.

What we're seeing right now

AI isn't slowing down. We're living the acceleration.

Agentic AI is becoming mainstream. Not chatbots that generate text, but agents that execute actions themselves. Send emails. Deploy code. Move data. Control processes. Create tasks. Trigger workflows.

Agents work autonomously. They communicate with each other. They build on context. And they do that within boundaries you must determine... boundaries most teams haven't thought about yet.

MCP (Model Context Protocol) is making enormous leaps. Tools with connectors are growing fastest. AI isn't a feature anymore. AI is a system layer.

Slack. Notion. JIRA. Gmail. Everything has an AI layer now. Tools are pushing AI into your organization whether or not you understand what that layer actually sees, reads, and can activate.

And the EU AI Act becomes fully effective later this year. Mandatory Human Oversight. Logging. Transparency. Demonstrable accountability.

My observation is simple.

AI has shifted into our infrastructure. And infrastructure must be designed.

The question isn't whether AI gets a role. The question is: who's determining how?

The biggest pitfall we're seeing: Inexperienced trust

The biggest risks right now don't arise from naivety.

They arise from inexperience.

AI feels reliable. Speaks convincingly. Acts independently. Responds quickly. Carries the image of its makers... large companies, known models, familiar interfaces.

And that makes us forget something important.

We still don't have a frame of reference for distrusting AI.

With software, we know where the boundaries are. With colleagues, we know what can go wrong. With processes, we know where the risks sit.

But with agents?

No instinct. No experience. No pattern recognition. No red flags that go off automatically.

So people trust it faster than is healthy.

Not because they're stupid. But because everything is new. And when something is new, it often feels better than it is.

That leads to predictable mistakes we're seeing every week. We give agents access because the interface looks reliable. We adopt model answers because they sound smart. We click "OK" because the tool looks like it knows what it's doing. We forget boundary settings because nothing has gone wrong yet.

This isn't blind trust.

This is first-generation trust.

And that makes it dangerous right now.

Inexperienced trust creates situations where machines get more authority than intended, simply because nobody can assess the risks yet. And that gap between capability and understanding grows wider every time a new tool launches with another "just works" promise.

That's why we're seeing it everywhere. Shared login credentials. Customer data in public models. Self-built scripts without logging. Agents running without IT knowing.

Not because people are reckless.

But because AI feels like a helpful colleague, not like a system that must be designed.

As long as there's no design, uncontrolled growth is the norm. And uncontrolled growth scales faster than mature behavior.

Innovation is happening. But innovation without design is improvisation. And improvisation never scales.

Why right now is the tipping point

Last year, AI was mainly a tool.

Writing. Summarizing. Brainstorming. Tools we controlled.

This year, it's Agentic AI. Systems that don't wait for our commands but execute actions themselves. Systems that make decisions based on context. Systems that communicate with each other.

The question has changed.

From "How does AI help me?" to "What is AI allowed to do without me?"

That makes the impact bigger. And the risk too.

At the same time, culture has changed. Teams expect AI now. Without AI, work feels slower. Workflows are being built around agents, not beside agents.

But dependence without design is dangerous.

And later this year there's compliance. The EU AI Act mandates Human Oversight for high-risk systems. Companies must demonstrate that people are ultimately responsible. That AI decisions are explainable. That logging is complete. That boundaries are set.

That's not ethics. That's compliance.

The only workable model: Human First, Supported by AI

AI supports. AI accelerates. AI structures. But AI doesn't replace responsibility. AI is the assistant. The human leads.

You must determine what AI may see. What AI may do. Where the mandate stops. Not because it's a moral discussion, but because it must happen now. AI is structural in organizations. That makes it risky.

The EU calls that Human Oversight. In plain language: you remain ultimately responsible.

AI may never decide about people. Never make ethical choices. Never cross boundaries without permission. Never autonomously escalate without a human check. Never execute critical tasks together with other agents without oversight. These aren't philosophical principles. This is practical safety for today.
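These red lines can be made operational as hard deny rules that run before any agent action. A minimal sketch, with hypothetical category names:

```python
# Hypothetical categories for actions an agent might request.
FORBIDDEN = {
    "decision_about_person",   # hiring, assessment, firing
    "ethical_judgment",
    "cross_boundary_access",   # data outside the agent's mandate
    "autonomous_escalation",   # escalating without a human check
    "multi_agent_critical",    # critical tasks chained across agents
}

def is_allowed(category: str) -> bool:
    """Hard red lines: these never pass, whatever the context."""
    return category not in FORBIDDEN

assert not is_allowed("autonomous_escalation")
assert is_allowed("draft_email")
```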

Five design domains leaders are working on now

This isn't a step-by-step plan. These are perspectives teams are using to integrate AI maturely.

1. Access and boundaries

What may AI see? Which data may AI never touch?

Without boundaries, AI gets read access in places where no single human would ever be granted it. CRM. HR. Strategy. Finances. Everything often sits in the same tools. Data access isn't a side issue. It's the foundation. Everything must be logged. Every action must be traceable. No exceptions.
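What that foundation can look like in practice: a minimal sketch, assuming hypothetical agent names and data sources, where every read attempt is checked against an allowlist and logged either way:

```python
import json
from datetime import datetime, timezone

# Hypothetical per-agent allowlists: which sources each agent may read.
DATA_ACCESS = {
    "slack_summarizer": {"slack"},
    "crm_qualifier": {"crm"},
    "task_creator": {"email", "project_tool"},
}

def read(agent: str, source: str) -> None:
    allowed = source in DATA_ACCESS.get(agent, set())
    # Every attempt is logged, allowed or not. No exceptions.
    print(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "source": source,
        "allowed": allowed,
    }))
    if not allowed:
        raise PermissionError(f"{agent} may not read {source}")

read("slack_summarizer", "slack")    # logged and allowed
try:
    read("crm_qualifier", "hr")      # logged, then refused
except PermissionError as exc:
    print(exc)
```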

2. Decision-making

What may AI do autonomously? What may AI only suggest? A draft email is okay. A sent email without a check is not. Generating tasks is okay. Sending escalations is not. The difference between suggestion and action determines safety.
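That difference fits in a handful of lines. A sketch with hypothetical action names: everything defaults to suggestion, and only an explicit list executes on its own:

```python
# Hypothetical action names. The one design decision that matters:
# what may run without a human, and what may only be suggested.
AUTONOMOUS = {"draft_email", "generate_task", "summarize_thread"}

def handle(action: str, human_approved: bool = False) -> str:
    if action in AUTONOMOUS:
        return f"executed: {action}"
    if human_approved:
        return f"executed after approval: {action}"
    return f"suggested, waiting for a human: {action}"

print(handle("draft_email"))                      # runs on its own
print(handle("send_email"))                       # becomes a suggestion
print(handle("send_email", human_approved=True))  # runs once approved
```

Note the default: anything not explicitly marked autonomous falls back to a suggestion. Deny by default, allow by design.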

3. Team rhythm and protocols

How does AI participate in Slack, standups, and reviews? May AI post summaries? May AI tag people? May AI escalate? Small rules determine whether AI supports or disturbs the team. AI must strengthen rhythm. Not disrupt it.
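Those small rules belong in writing, not in people's heads. A sketch of such a protocol as configuration; the keys, values, and channel name are hypothetical:

```python
# A hypothetical team protocol for an agent in chat tools,
# written down and versioned like any other team agreement.
TEAM_PROTOCOL = {
    "may_post_summaries": True,    # in one dedicated channel only
    "may_tag_people": False,       # a human decides who gets pinged
    "may_escalate": False,         # escalation always goes via a human
    "summary_channel": "#ai-summaries",
}

def agent_may(action: str) -> bool:
    return bool(TEAM_PROTOCOL.get(f"may_{action}", False))

print(agent_may("post_summaries"))  # True
print(agent_may("tag_people"))      # False
```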

4. Attention protection

AI may not increase noise. An agent that fires a notification at 11 PM because a task comes in isn't helping. It's disrupting. AI must respect work hours. Focus. Energy. AI must serve, not demand.
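One way to enforce that, sketched with assumed working hours: notifications outside the window queue instead of ping.

```python
from datetime import datetime, time

# Assumed working hours; in practice per person and per timezone.
WORK_START = time(9, 0)
WORK_END = time(18, 0)

def deliver_now(moment: datetime) -> bool:
    """Inside working hours: ping. Outside: queue until morning."""
    return WORK_START <= moment.time() <= WORK_END

task_arrived = datetime(2026, 6, 1, 23, 0)  # a task comes in at 11 PM
if deliver_now(task_arrived):
    print("notify immediately")
else:
    print("queued until the next working morning")
```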

5. Transparency and control

AI must remain explainable. Stoppable. Traceable.

No black boxes. No "the system did it automatically."

If an agent does something, you must be able to see exactly what happened. Why. Based on what. Who approved it.

And everything must have an emergency stop. If it goes wrong, you must be able to intervene.
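A sketch of both requirements together, with hypothetical field and agent names: an append-only audit record for every action, and a kill switch checked before anything runs.

```python
import json
from datetime import datetime, timezone

KILL_SWITCH = False  # the emergency stop: flip to True and agents halt

def audit(agent: str, action: str, reason: str, approved_by: str) -> None:
    """Append-only record: what happened, why, and who approved it."""
    print(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "reason": reason,
        "approved_by": approved_by,
    }))

def run(agent: str, action: str, reason: str, approved_by: str) -> None:
    if KILL_SWITCH:
        raise RuntimeError("emergency stop active: all agents halted")
    audit(agent, action, reason, approved_by)
    # ... the action itself would execute here ...

run("crm_qualifier", "create_followup_task",
    reason="open ticket older than 14 days",
    approved_by="j.doe")
```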


Why this matters right now

The risks have shifted from technology to culture.

Tools are pushing AI inward before organizations understand what they're getting. Teams are building workflows around agents without thinking about possible consequences. Leaders are waiting for clarity that isn't coming.

The mistakes happening right now arise from too little ownership. Unclear agreements. No design. Wrong assumptions.

The risk doesn't lie with AI. The risk lies with leaders who haven't set frameworks yet.

This is the foundation

This isn't the year AI becomes smarter. This is the year leaders determine how AI participates.

The future isn't AI-first. The future is Human First, Supported by AI. And that future is now.
