The Server Room Chill and the CEO’s Golf Swing
The air conditioning in the meeting room, always set to a temperature that felt suspiciously close to the inside of a server rack, did little to mitigate the rising heat in the back of my neck. I was presenting the results of the Q3 segmentation analysis, and the numbers were definitive: Customer Segment A, the one everyone loved, the one the marketing team built their entire Q4 narrative around, was unprofitable by a margin of 10.5%.
I clicked to the next slide, waiting for the inevitable. The CEO leaned back, steepled his fingers, and squinted at the projected pie chart like it had personally insulted his lineage. “Interesting,” he said, the word hanging there, heavy and cold. “But you know, I was golfing with a guy from Competitor X yesterday, and he said their Segment B is driving all their growth. Can we just… can you pull the numbers for our Segment B again? Maybe we haven’t been applying the acquisition cost accurately there. Run that modeling for the last 45 days, specifically.”
Data-Supported vs. Data-Driven
I knew what was happening. We weren’t looking for the truth; we were looking for the ammunition to confirm the pre-existing decision. The CEO wasn’t asking for deeper insight; he was demanding a data-driven justification for a bias formed entirely on a manicured patch of grass between the 5th and 6th holes.
This is the silent, intellectual dishonesty that poisons organizations claiming to be “data-driven.” We are not data-driven. We are, at best, data-supported. We only use data when it reinforces the direction of the highest-paid person in the room (HiPPO).
The moment the data suggests an uncomfortable pivot, we don the forensic gloves, not to excavate reality, but to find the single, acceptable shard of evidence that allows us to discard the rest of the findings. This isn’t just frustrating for analysts; it’s corrosive to the core mission of discovery. We spend millions on infrastructure, warehousing, and visualization tools, and yet we teach highly talented analytical minds that their highest value lies not in objective reporting, but in finding the most eloquent way to confirm the boss’s gut feeling.
Analyst’s Admission: I Was Guilty
I’ve been guilty of it, too. Not as the demanding executive, but as the tired researcher. Earlier this year, I was working on a minor policy change… Did I spend 235 hours reconciling the two methods? No. I spent 5 minutes pointing out a flaw in the intern’s sample size, effectively dismissing the contradictory evidence, because I wanted the 3.5% uplift to be real. We are all searching for data that justifies the energy already expended.
The Crucial Choice: Process Over Prediction
Valuing the prediction leads to confirmation bias. Valuing the process accepts uncertainty.
This is the crucial pivot: Do you value the prediction, or do you value the process? Because if you value the prediction, the outcome you want, you will always manipulate the process until the numbers align. If you value the process, you have to accept that sometimes the answer is ‘no,’ or ‘we were wrong,’ or, terrifyingly, ‘we have no idea, and need to spend another $575,000 to find out.’
The Archaeologist’s Dilemma
I think about Taylor B.K., an archaeological illustrator I met years ago. Her job was incredible: she didn’t just draw the pottery fragments found at a dig site; she reconstructed what the complete vessel must have looked like. If she found 45 shards of plain clay, and one tiny piece that suggested a Roman inscription, the temptation was enormous to draw a majestic, inscribed vase, shifting the entire narrative of the settlement.
But she wouldn’t. She would draw the plain vessel, and in a tiny, separate box, she would illustrate the single inscribed fragment, noting its ambiguous context. She prioritized intellectual integrity over a stunning discovery narrative. Our data teams are tasked with drawing the whole pot, but leadership often demands the Roman inscription, even if all we have is plain clay.
From Advocacy to Accountability
We must understand that seeking only supporting evidence is not analysis; it is advocacy. And when leadership requires analysts to be advocates for a specific outcome, we lose the critical distance necessary for true discovery. The cost of this intellectual erosion is staggering. It leads to the funding of zombie projects: initiatives that the data clearly shows are dead, but which are propped up by selective metrics because too many careers are tied to their perceived success.
[Chart: cumulative opportunity loss due to confirmation bias]
How do we break this cycle? It requires a cultural shift that treats data not as a source of certainty, but as a source of productive disagreement. We need organizations committed to providing clear, objective, and meticulously vetted information, ensuring that even if the HiPPO suggests a direction, the underlying foundational knowledge is sound.
The real failure happens when the analyst stops saying, “The data suggests X,” and starts saying, “How can I make the data show Y?” This subtle shift turns your entire expensive data science team into political operatives, draining the company of its most valuable resource: curiosity.
If you want to be truly data-driven, you must introduce metrics of failure at the start. When we launched that complicated, high-risk project two years ago, we should have budgeted $1295 specifically for the failure narrative: the mechanism by which we would officially, publicly, and financially acknowledge the project’s termination if specific, predetermined failure metrics were hit. Instead, we kept drawing the imaginary Roman inscription.
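The idea of predetermined failure metrics can be made concrete. Here is a minimal, hypothetical sketch of what "kill criteria" agreed upon at launch might look like; every metric name and threshold below is invented for illustration, not taken from any real project:

```python
# Hypothetical kill criteria, written down at project launch so that
# termination is a mechanical check rather than a political negotiation.
# Metric names and thresholds are illustrative assumptions only.
KILL_CRITERIA = {
    "monthly_active_users": lambda v: v < 1_000,   # below floor after ramp-up
    "cost_per_acquisition": lambda v: v > 250.0,   # dollars; above ceiling
    "retention_rate_30d":   lambda v: v < 0.10,    # under 10% retention
}

def failed_criteria(metrics: dict) -> list:
    """Return the name of every predetermined failure metric that was hit."""
    return [name for name, hit in KILL_CRITERIA.items()
            if name in metrics and hit(metrics[name])]

# A quarterly review then becomes a simple, pre-committed check:
quarterly = {"monthly_active_users": 800,
             "cost_per_acquisition": 310.0,
             "retention_rate_30d": 0.22}

tripped = failed_criteria(quarterly)
if tripped:
    print("Terminate project; predetermined criteria hit:", tripped)
```

The point is not the code itself but the commitment device: the thresholds are fixed before anyone has a career stake in the answer, so the data can only be argued with before the fact, not after.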
Rewarding the Truth-Bringer
There must be an organizational reward for being wrong. We must reward the messenger who brings data that contradicts the strategic direction, not punish them. If the truth makes the CEO uncomfortable, the solution isn’t to massage the truth until it’s comfortable; the solution is to change the strategy, or, more often, to acknowledge the discomfort as the cost of knowledge.
The Receipt Rule
This penetrates personal life, too. I recently tried to return a small item, costing perhaps $15, without a receipt. The store knew I had bought it, but the rule was absolute: no receipt, no return. The rule had replaced the underlying objective reality. My frustration wasn’t about the money; it was about the fundamental lack of trust in the evidence.
When we prioritize arbitrary rules (or political agendas) over evidence, we are running an expensive confirmation bias service.
We need to stop asking if the data supports our decision, and start asking: Does the truth of the data change who we fundamentally are?
If the data shows us a plain clay pot when we expected gold, will we finally stop illustrating the gold one?