BAD CALL

Decision autopsies for people who know better.

Most organizations don’t fail because of one catastrophic mistake made by a fool.

They fail because of a sequence of reasonable-seeming decisions made by smart, experienced, well-resourced people — each one individually defensible, each one making the next harder to question, until the outcome that seemed unthinkable becomes the one that was nearly inevitable.

That pattern has a name. It has a structure. And once you can see it clearly, you see it everywhere.

What Bad Call Is

Bad Call is a decision intelligence brief that reconstructs those sequences. Not to assign blame. Not to celebrate hindsight. To show precisely where the thinking went wrong, what was driving it beneath the stated rationale, and what a cleaner version of the same decision would have looked like.

Each issue takes one organization — a company, a board, a leadership team, an investment group — and runs it through a structured analytical framework built around the forces that distort consequential decisions. The framework isn’t theoretical. It’s been built and tested against some of the most expensive decision failures of the last thirty years, and refined through decades of sitting inside the kinds of decisions this publication dissects.

The structure of every issue is fixed and deliberate: the situation as it appeared at the time, what was actually driving it, where the distortion entered and why, what a clean version would have looked like, and one idea the reader can carry into the room on Monday.

No long introductions. No storytelling for its own sake. No frameworks you already know dressed up as new insight. The structure is the product.

Who It’s For

Bad Call is written for people who make consequential calls for a living. Founders. Board members. Senior operators. Advisors. People who can’t afford to let bias, noise, accumulation, and misaligned incentives quietly run their judgment off the road.

Bad Call doesn’t work for everyone. It’s not supposed to.

If you’re the right reader, you’ll know within two issues.

Who Writes It

Bad Call is written by RC — an operator and advisor with decades of experience inside the kinds of organizations and decisions this publication dissects. The framework came from watching smart people make predictable mistakes in real time, trying to name what was happening before it compounded, and occasionally failing to make the naming stick.

The analysis comes from someone who has been in the room. The point of view is direct. The writing is attributed.

What the Framework Looks Like in Practice

In banking circles, there’s a term for a particular kind of technology solution. RFOP. Rooms Full Of People. The product works — it just works because of the humans operating it, not because of the technology being sold. The pitch is automation. The reality is sneakers.

Olive AI raised $850 million, achieved a $4 billion valuation, and deployed its platform across 900 hospitals before shutting down entirely in October 2023. The stated rationale at every stage was growth — more hospitals, more use cases, an AI platform that would become the operating system for healthcare administration.

What was actually happening was a gap. Despite the AI branding, Olive’s platform was primarily built on brittle automation scripts. When they failed — and they failed regularly — human workers fixed the errors behind the scenes. The automation wasn’t autonomous. It was a room full of people in sneakers, running between screens, manually completing what the pitch deck called AI-driven workflow automation.

Nobody decided to build a fraudulent company. The incentive structure made selling the gap the path of least resistance at every stage — and organizations, like water, find that path without being directed toward it.

Every contract signed on the basis of that gap created an obligation the product couldn’t honor. Every acquisition — Verata Health, Empiric Health, an in-house venture studio — added complexity and distance from the core delivery problem without ever solving it. The same renewal data that should have triggered an honest organizational reckoning got interpreted differently by every function: an account management problem, an engineering problem, a pricing problem, a timing problem. Nobody had a framework to force a common answer from the same signal.

Three cognitive biases compounded in sequence. Overconfidence bias built an expectation of capability the product couldn’t match. The planning fallacy reset that expectation with every new commitment. And confirmation bias ensured the neutral signals — the renewal rates, the KLAS scores, the quiet engineer complaints — got resolved in the direction the expectation was already pointing.

As Dan Ariely observes: you don’t see what you want to see. You see what you expect to see. By the time Olive’s data became unambiguously negative, the accumulation of misdirected commitments had made honest correction nearly impossible.

That’s what Bad Call does. Not the headline. The mechanism. Not what happened. Why it was nearly inevitable — and what a cleaner version would have looked like.

On Price

Bad Call is a paid publication, priced to reflect the value of what it delivers rather than the volume of what it produces. One issue every other week. No filler, no padding, no content for content’s sake.

A 30-day satisfaction guarantee applies to all new subscriptions. If it isn’t what you expected, contact LFB Holdings directly within 30 days.

Founding memberships — In The Room — are available in limited quantity for those who want permanent access and a direct line to the thinking behind the analysis. There are 25. That number won’t change.

Bad Call publishes every other Tuesday.

Because it always seemed fine at the time.

— RC
