Margin Notes

Short observations, finds, ideas still taking shape. No finished argument, no prompt kit. Thinking in progress.

The Dispatcher Beats the Model Fan

When a weaker model delivers useful preparation work for one fifth of the price, model choice becomes an operating decision.

Open Standard, Open Entry Point?

MCP is being sold as a common standard for AI tooling. That is exactly why it matters who is actually left carrying the security burden.

The Fault Is One Layer Up

When AI tools suddenly feel worse, the problem often sits not in the model but in the work system around it.

Not Siri-ready. System-ready.

Preparing an app only for Siri is too narrow. The real task is systemic addressability.

The App with the Factory Badge

Many companies treat AI tools like harmless productivity software. Organizationally, they often behave more like external firms with factory badges.

Are Managed Agents Becoming a New AI Infrastructure?

Agentic workflows turn model buying into an infrastructure problem surprisingly fast. A managed layer may be forming exactly there, above inference as the underlying commodity.

Ethical Until the Website Edge

Cloudflare's new metric pushes the ethics question in AI a step further. Not only: what does the model do. But also: what does the product give back to the sources?

The Model Company Is Not Yet a Model Organization

Big AI numbers sound like deep transformation. Often they only show that one part of the company already runs on a new rhythm while the rest still operates on the old one.

The Trust Tax

AI platforms will not just sell model capability. They will sell predictability. The real surcharge sits in billing, limits, appeals, and policy enforcement.

The Quiet Unbundling of Teamwork

AI often makes solo work faster. But what disappears when teamwork gets translated into one person plus an agent stack shows up on no productivity slide.

From Risk to Product

The same capability was a reason for restraint yesterday and is suddenly a product today. Not because the model changed, but because the narrative and access channel did.

The Brand in the Prompt

The agentic commerce literature says: when agents do the buying, only data matters. Wrong. Brands are compressed value profiles – exactly what an agent needs.

From Lock to Meter

Anthropic burned through three enforcement strategies in four months before landing on the only one that works: the meter, not the lock.

The Bottleneck

Apple opens MCP but controls the gateway. Opening the protocol while controlling the gateway gets you cooperation instead of resistance. Plus: The Gemini asymmetry nobody is working through.

The Market Will Sort It Out

AI will write good code because it's cheaper? The reasoning has 50 years of counter-evidence. The market doesn't evaluate quality, it evaluates delivery speed.

The Two Versions

Anthropic accidentally published the source code of Claude Code. More interesting than the blunder: internal users get a different system prompt than paying customers.

Design Gets a Text Field

Google rebuilt Stitch into a full design tool. Design is converging on text. But how much structure does that text need before the agent stops getting "creative"?

Merchants of Complexity

Agents make us faster, not more productive. When the human bottleneck disappears, so does the protection against uncontrolled complexity.

The 99-Dollar Wall

Apple is blocking vibe-coded apps. But if the highest barriers to entry in the software ecosystem no longer filter, what does?

The Invisible Craftsman

A top-3% engineer at Uber: zero GitHub commits, no social media, no LinkedIn. When AI makes visible artifact production cheaper, how do you recognize the people whose value lies in what you can't see?

The Machine Clicks

Anthropic gives Claude control of mouse and keyboard. What changes when AI no longer produces text but performs actions, and why the permission dialog does not answer the governance question.

Plausible-Sounding Nonsense

Three jagged surfaces stacked on top of one another: model capabilities, evaluation methods, and user intuition are all unreliable. Why the effort required for evaluation rises as models get better.