What Do You Actually Want?

Architectural blueprint cross-section: the official transformation facade above, several conflicting drive shafts below. A single amber steering cable marks the actual operational intent beneath the official narrative.

An agent is supposed to win a boat race. It gets rewarded for collecting green blocks. So it drives in circles, racks up points, and never finishes the race. The score looks excellent. The purpose is missed.

That sounds like a neat lab anecdote from AI research. In practice it is also a fairly precise description of many corporate AI initiatives. The dashboards look good. Activity goes up. Cost per task goes down. The number of generated artifacts rises. And yet it becomes less clear, not more, whether the system is optimizing for the right thing.

At that point the problem is often not the model. The problem is that the organization itself cannot clearly say what it actually wants.

Right now we spend a lot of time talking about prompts, specs, context windows, agent setups, and evaluation. All of that matters. But much of it assumes something that, in a surprising number of organizations, exists mainly as a polite fiction: an articulable intent.

Not the mission-statement version of intent. Not the website version. Not what sits in the town hall deck. I mean the answer to a few simple questions that can survive contact with reality:

  • What are we actually trying to achieve here?
  • Which trade-offs really apply?
  • Which sacrifices are quietly acceptable?
  • How would we notice that the initiative is becoming more efficient while moving in the wrong direction?

If the honest answer to those questions turns evasive, contradictory, or politically unsayable, you do not have a prompt problem. You have an intent problem.


The problem sits one layer higher

Many companies still treat AI as if it were mainly a specification problem. People need to formulate their requests more precisely. Briefs need to improve. Data needs to be cleaner. Use cases need to be sharper. Governance needs to be stronger. Often that is true.

It is just not the first question.

Before you can specify well, you have to know what is actually worth specifying. Before you can build serious evaluation, you have to know what the result is supposed to be evaluated against. Before you can delegate meaningfully, you have to know which intent should survive the handoff once the original plan breaks against operational reality.

That is where a surprising number of organizations go soft very quickly.

Then you hear sentences like:

  • We want to use AI to become more productive.
  • We want to become more innovative.
  • We want to relieve our employees.
  • We want to improve our cost base.
  • We want to stay ahead.

None of those statements is false. Almost all of them are too vague to function as intent, especially when they are supposed to be true at the same time.

Because in practice they collide. Anyone who wants to cut costs does not automatically reduce pressure on employees. Anyone who wants to accelerate innovation does not automatically reduce risk. Anyone who wants higher productivity does not automatically get better quality. And anyone who wants to stay ahead still has not said what they would actually give up in order to mean it.

An organization without articulable intent does not write a good spec. It writes a tidy confusion.


Three old answers to the same problem

The interesting thing about the intent question is that it is not new. What is new is that AI makes it brutally visible.

Three very different traditions land on the same structure.

Mission command

Prussian mission command did not emerge because someone wanted leadership to sound more poetic. It emerged because people understood that plans collapse under complex conditions. Anyone who knows only the order fails the moment reality departs from the script. Anyone who understands the purpose can adapt locally without losing direction.

The key point is not decentralization as ideology. The key point is this: the person acting has to understand the purpose, not just the instruction.

That is very close to the problem that now appears in AI delegation as well. If an agent, a team, or a business unit is supposed to act with some autonomy, it is not enough to prescribe isolated steps. The intent has to be clear enough to hold once the first plan stops fitting.

Hoshin Kanri

In the West, Hoshin Kanri often gets reduced to a method for cleaner cascades of goals. That is the harmless version. The real demand is tougher.

The system only works if a company can translate strategic priorities into something that lower levels experience not merely as a reporting duty, but as usable direction. It fails very reliably when that translation does not happen.

Then the whole architecture turns ritualistic. Goals get handed down, but not owned. Metrics get maintained, but not believed. The organization looks aligned but is mostly busy.

That is not far from what many firms are doing with AI right now. They distribute target pictures without clarifying the intent underneath. Then they wonder why activity appears everywhere while consistent direction appears nowhere.

Constitutional AI

Even AI systems need principles above individual instructions. That is why constitutions, model specs, guardrails, and prioritization logics exist in the first place.

This may be the most embarrassing observation in the whole debate: the companies building AI invest enormous effort in giving their systems an explicit value structure. The companies adopting AI often behave as if their own intent were somehow already clear.

Usually the opposite is true. Many organizations have spent more time polishing their brand values than describing their operational intent in a way that can survive pressure.

Three domains. A century and a half between them. Same conclusion: rules scale worse than shared intent.


Missing intent and hidden intent

Not every weakness around intent is the same. At minimum, two cases need to be separated.

1. Missing intent

In this case, the organization itself does not really know what it wants.

This is the more charitable case. The problem is not primarily manipulation, but ambiguity. Different leaders tell different stories about the same initiative. One side wants a cost lever. Another wants an innovation story. A third wants a culture project. A fourth wants risk reduction. Everything at once. Nothing with priority.

Then the intent is not deliberately concealed. It simply does not exist in a form that can survive conflict.

You can often see that on the metric side. Success gets measured by whatever is easiest to report: number of users, number of use cases, hours saved, automation rate. Not because those metrics are convincing, but because they are available.

2. Hidden intent

In this case, the organization does have a real priority. It just does not say so out loud.

Officially the initiative is about quality and customer value. Operationally it is about margin pressure. Officially it is about empowerment. Operationally it is about tighter control. Officially it is about relief. Operationally the same work is simply redistributed under higher output pressure.

That is not a misunderstanding. It is politics.

And politics does not disappear once AI enters the picture. It simply gets operationalized more harshly. The agent does not optimize for the polished intention that sounds good in public. It optimizes for the signals the system actually emits.

The first case is ambiguity. The second is concealment. For the machine, both are simply inputs. For the organization, the difference matters a great deal.


Before AI, people could soften a lot of this

That is exactly why the intent gap stayed hidden for so long.

Good people balanced contradictory goals in day-to-day work. Middle layers of management smoothed over conflicts. Teams often made reasonably sensible work out of half-baked signals. Meetings turned ambiguity into motion. Not into clarity, but often into enough social coordination to keep the place running.

That was expensive. But it was a working buffer.

AI makes part of that buffer thinner. Not because humans suddenly disappear, but because execution, variation, and scaling become cheaper. That raises the price of unclear direction.

An experienced person often notices when a task is phrased cleanly but internally skewed. An agent efficiently does what it has been pointed at. And if the organization itself does not know how it prioritizes its conflicts, then AI simply becomes a faster optimization machine for its own ambiguity.

That is why AI is not the cause here. AI is a disclosure machine.


The real cost of unclear intent

Unclear intent does not just produce bad individual results. It produces bad learning.

Then the company gradually optimizes for:

  • metrics instead of purpose
  • activity instead of value
  • speed instead of direction
  • local wins instead of system effects
  • easily reportable side effects instead of harder, more strategic progress

That is more dangerous than a bad spec.

A bad spec eventually becomes obvious. The output is visibly off. A skewed intent is more deceptive. It still produces results, just in the wrong direction. And the better execution gets, the more convincing those results often look at first.

That is the logic of the boat driving in circles. The score is not fake. It is simply tied to the wrong purpose.


The German version of the problem

The DACH angle is not decorative here. It changes the shape of the issue.

German organizations often have more structure than articulated direction. Processes, committees, sign-off paths, and governance tend to be plentiful. What is often missing is the hard confrontation over which goal really takes priority once productivity, quality, control, speed, and cultural stability begin to conflict.

That gap becomes visible very quickly in AI debates because things now have to be made explicit that previously ran on social intuition.

Co-determination does not just make this harder. Handled well, it can act as a reality test. The moment workload, role design, evaluation, and restructuring become subject to formal challenge, a foggy intent is no longer enough. Management has to say what is supposed to change and why. Not in strategic vapor. In terms that can still hold under resistance.

That is tiring. But it is also an advantage.

The advantage is not slowness. The advantage is that bad intent tends to surface earlier when it has to be made articulable against operational reality, audit, or co-determination.

US-centric writing about AI rollouts can afford to treat intent as a cultural haze for longer. In the German-speaking context, it turns more quickly into a management question that is either robust or not.


Who could write down your real intent?

That is the uncomfortable question at the end of this piece.

  • Not: Do you have an AI strategy?
  • Not: Do you have use cases?
  • Not: Do you have governance?

But: Who in your company could write down the actual operational intent of this transformation? Not the PR version. Not the board slide. The version from which an agent, a new team, or an entire unit could act without everything dissolving into follow-up questions, trade-offs, and polite confusion three days later.

If the honest answer is that nobody can currently do that, that is not unusual. From what I see, it is more or less the norm. But it is also, in plain terms, where things actually stand.

And if that is true, then maybe the most honest sentence about your AI transformation is not: we need better prompts.

It is: we first need to decide what we actually want.

Put it into practice

This prompt kit translates the essay's concepts into concrete prompts you can use right away.
