Viral Fiction vs. Thermodynamic Fact: The Truth About AI Data Center Cooling

By Kenneth Henseler, 20-FEB-2026

If you spend enough time scrolling through Instagram or TikTok, you are bound to encounter highly alarming statistics about the environmental impact of artificial intelligence. Recently, a reel posted by the user ‘bizbrat’ went viral, featuring a dark, ominous video of an industrial grate accompanied by a startling text overlay: “800 BILLION litres of fresh water is being used in a single DAY to cool down systems across the world, concerning or not?”

The caption went further, claiming that 11 trillion liters of water are used for this purpose overall, and alleging that companies refuse to use “Air/dry cooling” or “Closed-loop systems” because of “Higher upfront cost” and “Water is cheap & under-regulated.” Most alarmingly, the post claimed that hot water is routinely dumped into water bodies, killing organisms and causing severe “thermal pollution.”

To understand why this video exists, we have to look at the digital economy. In 2025, Oxford University Press named “rage bait” as its Word of the Year.[1] The term, defined as online content deliberately engineered to provoke anger, frustration, or moral outrage in order to artificially inflate engagement, tripled in usage as the digital landscape became increasingly charged.[1] The claims in this specific video are a textbook example of the phenomenon: taking fragmented, outdated concepts and presenting them as modern crises to harvest outrage for algorithmic profit.[2]

The most egregious claim in the reel’s caption is the idea of “thermal pollution”—the assertion that “hot water is sometimes put into water bodies which kills many organisms.” While thermal pollution is a legitimate historical and regulatory concern for mid-century nuclear or coal power plants that utilize open-loop river cooling, modern enterprise data centers operate under entirely different engineering paradigms.

Furthermore, the irony of the video is that the exact solutions it demands—air/dry cooling and closed-loop systems—are already the standard for high-tier enterprise infrastructure.

To ground this in reality, we can look at the NTT Global Data Centers TX1 facility in Garland, Texas. This 230,000-square-foot fortress supports 16 megawatts of critical IT load.[3] Does it evaporate billions of liters of water daily? No. The official specifications of the TX1 facility explicitly state that it utilizes “waterless cooling using indirect air exchange cooling technology” driven by 74 total rooftop cooling units.[4]

As artificial intelligence pushes server rack power densities from standard 10kW loads up to 100kW or even 200kW, the industry is shifting toward liquid cooling.[5] However, these are fundamentally closed-loop systems. Whether utilizing Direct-to-Chip cold plates or full immersion cooling, the liquid is sealed within the system.[6] These liquid systems are highly sustainable, capable of reducing data center energy consumption by over 60% and up to 95% in optimized setups.[7]

The technology to run massive computational loads sustainably doesn’t just “exist” as a hypothetical—it is currently powering the global digital economy. The next time a viral video tries to tell you the internet is boiling the oceans, remember that outrage is free, but good engineering is a closed loop.

🍎 Podcasts: https://podcasts.apple.com/us/podcast/the-chronos-archive/id1831231439?i=1000750756195

Sources Cited:

  • Oxford Word of the Year 2025: Rage Bait [1]
  • NTT Global Data Centers TX1 Specifications [5, 4]
  • The Mechanics of Kyoto Cooling [6, 7]
  • Liquid vs. Air Cooling in High-Density AI Data Centers [8, 9]
  • Understanding Data Center Water Consumption [2, 3]

From Ticket-Taking to Platform-Building: Why We Are Pivoting to Product Mode

A Platform Engineering Manifesto

By: Kenneth Henseler, 15-FEB-2026

I’ve spent a lot of time in the trenches of IT Infrastructure. If you’ve been there, you know the drill: The “Ticket Factory.”

Developers need a server? Ticket.

Need a firewall rule? Ticket.

Need a database? Ticket.

For decades, this was the industry standard. It was safe. It was controlled. But in 2026, it’s also a bottleneck that kills velocity. When your smartest engineers spend 60% of their week manually executing repetitive tasks from a queue, you aren’t managing infrastructure—you’re managing a bureaucracy.

That’s why I’m leading a strategic shift in my organization: Moving from IT Service Management (ITSM) to Platform Engineering. We call it Project Polaris.

Here is the philosophy behind the shift, and why “Good IT” isn’t about closing tickets anymore—it’s about building products.

1. The “Ticket Factory” Doesn’t Scale

Traditional IT operations scale linearly. If you add 10x more developers, you generate 10x more requests, which means you need 10x more sysadmins to handle the load. That math doesn’t work.

We are moving away from being “Gatekeepers” (who approve and implement) to becoming “Gardeners” (who cultivate the ecosystem).

The goal of our new Platform Engineering model is simple: Self-Service with Guardrails.

We are building an Internal Developer Platform (IDP) that treats our infrastructure as a product. If a developer needs a resource, they shouldn’t have to ask me for permission; they should be able to consume it via API or portal, knowing that the security and compliance checks are already baked in.
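To make the "self-service with guardrails" flow concrete, here is a minimal sketch of what a platform-side provisioning call can look like. Every name here is hypothetical, it is not a real Polaris or vendor API, and the catalog and tags are placeholder values chosen only to illustrate the request-to-resource flow with compliance baked in:

```python
# Hypothetical sketch of a self-service provisioning call in an Internal
# Developer Platform (IDP). None of these names are a real Polaris API;
# they illustrate the flow: request -> baked-in checks -> provisioned resource.

from dataclasses import dataclass, field

@dataclass
class ResourceRequest:
    team: str
    kind: str                     # e.g. "postgres", "vm", "bucket"
    params: dict = field(default_factory=dict)

APPROVED_KINDS = {"postgres", "vm", "bucket"}  # the platform's catalog

def provision(request: ResourceRequest) -> dict:
    """Grant the request immediately if it fits the catalog; no human in the loop."""
    if request.kind not in APPROVED_KINDS:
        raise ValueError(f"{request.kind!r} is not in the platform catalog")
    # Compliance metadata is attached automatically, not requested by the dev.
    return {
        "owner": request.team,
        "kind": request.kind,
        "params": request.params,
        "tags": {"cost-center": request.team, "compliance": "baseline-v1"},
    }

if __name__ == "__main__":
    print(provision(ResourceRequest(team="payments", kind="postgres")))
```

The design point is that the developer never negotiates the tags or the checks; anything in the catalog is a fast "yes," and anything outside it fails loudly instead of landing in a queue.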

2. The “Golden Ratio” of Capacity Planning

One of the hardest lessons in engineering leadership is protecting your team’s time. If you don’t defend it, “keeping the lights on” (KTLO) will eat 100% of your bandwidth.

As part of this restructure, we are implementing a strict capacity model that I call the “Golden Ratio” for our sprints:

• 50% Strategic Enablers: Work that moves the business forward (Building the IDP, new architecture, automation).

• 30% Operational/Support: The inevitable day-to-day reality of running systems.

• 20% Tech Debt Repayment: Mandatory. Non-negotiable.

If you don’t explicitly budget for Tech Debt, you are essentially taking out a high-interest loan on your future stability. Eventually, the interest payments (outages, slow deployments, manual patches) will bankrupt your time.
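As a concrete illustration, the 50/30/20 split above reduces to a trivial capacity calculation. The function name and the example team size are mine, not part of any real tooling; the point is only that the ratio can be applied mechanically at sprint planning:

```python
# Hypothetical sketch: splitting a sprint's engineering hours using the
# 50/30/20 "Golden Ratio" described above. Numbers and names are illustrative.

def golden_ratio_split(total_hours: float) -> dict:
    """Allocate sprint capacity: 50% strategic, 30% operational, 20% tech debt."""
    ratios = {
        "strategic_enablers": 0.50,   # IDP build-out, new architecture, automation
        "operational_support": 0.30,  # day-to-day run/support reality
        "tech_debt": 0.20,            # mandatory, non-negotiable repayment
    }
    return {bucket: round(total_hours * r, 1) for bucket, r in ratios.items()}

if __name__ == "__main__":
    # e.g. a 6-person team, ~60 productive hours each per 2-week sprint
    print(golden_ratio_split(6 * 60))
```

The value isn't the arithmetic; it's that the tech-debt bucket is computed up front, before the support queue gets a chance to consume it.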

3. Governance as Code (Safety Without Speed Bumps)

The biggest fear with self-service is usually security. “If we let devs provision their own DBs, won’t they leave them open to the internet?”

In the old world, we stopped this by having a human review every change. In the Platform world, we stop this with Governance as Code.

Instead of a manual approval board, we define our policies in the platform itself.

• You want an S3 bucket? Fine, but the platform automatically enforces encryption and private access policies before it’s even created.

• You need a VM? The image is pre-hardened and automatically patched.

We aren’t removing the rules; we are automating the enforcement. This allows us to say “Yes” faster, without lowering our security posture.
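The S3 bucket example above can be sketched as an auto-remediating policy check. This is a toy illustration, not a real cloud API or OPA policy; the field names are placeholders. What it shows is the shape of the idea: the platform rewrites the requested config so the guardrails hold regardless of what was asked for:

```python
# Hypothetical sketch of "Governance as Code": the platform rewrites a
# requested S3-style bucket config so encryption and private access are
# enforced before creation, replacing a human review board.
# Field names are illustrative, not a real cloud provider schema.

def enforce_bucket_policy(requested: dict) -> dict:
    """Return the config that will actually be created, guardrails applied."""
    hardened = dict(requested)
    hardened["encryption"] = "AES256"       # always on, regardless of request
    hardened["public_access"] = False       # private by policy, no exceptions
    hardened.setdefault("versioning", True) # safe default, overridable
    return hardened

if __name__ == "__main__":
    # A developer asks for a public, unencrypted bucket...
    risky = {"name": "team-data", "public_access": True}
    # ...and gets a compliant one back, with no ticket and no approval board.
    print(enforce_bucket_policy(risky))
```

Note the asymmetry: encryption and private access are overwritten unconditionally, while versioning is merely a default the requester may change. That distinction between hard policy and soft default is where most governance-as-code design effort goes.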

The North Star, Polaris

This transition isn’t easy. It requires a culture shift from “I own this server” to “I own the code that builds this server.”

But the destination is worth it. By treating our platform as a product, we stop being the “Department of No” and start being the accelerator that the business actually needs.

See you in the server room (or the repo).

– Ken

Software-mageddon: Why Wall Street Just Deleted $1 Trillion from SaaS (And How to Survive)

By Kenneth Henseler

Manager, Platform Engineering (Systems Infrastructure) at Brinks Home

Everyone is talking about the AI bubble. But while you were watching NVIDIA, the real story happened in the software layer.

In the second week of February 2026, the market ruthlessly repriced the technology sector. In just five trading days, over $1.2 trillion in value was wiped from traditional software stocks like ServiceNow (-50%) and Salesforce (-40%). At the exact same time, the “Hyperscalers” (Amazon, Google, Microsoft) committed $660 billion to building AI infrastructure.

Why the split?

The market has realized that the “per-seat” business model is dying. If an AI agent can do the work of 50 humans, companies don’t need 50 software licenses—they need one API connection.

https://podcasts.apple.com/us/podcast/the-chronos-archive/id1831231439?i=1000749434456

In our latest podcast episode, “The Great Bifurcation,” we dive deep into this market shift. We cover:

• The “Remove the AI” Test: The simple question that reveals if your product is a future-proof “Infrastructure” play or a doomed “Feature.”

• The Brinks Home Case Study: How Brinks Home deployed “Veronica” (powered by Cresta) to achieve 92% First Call Resolution, effectively moving from a “tool-based” to an “agent-based” model.

• The New Unit Economics: Why “Outcome-Based Pricing” is the only way forward for B2B tech.

Stop building tools that wait for input. Start building platforms that deliver outcomes.

The shift from “SaaS” to “Service-as-Software” isn’t just a market trend; it is an architectural mandate. Platform Engineering is no longer about just managing infrastructure—it’s about engineering the autonomous enterprise.

We are live-prototyping this transition at Brinks Home. We call it Project Polaris.
[Subscribe to follow the build]

P.S. Agree? Disagree? I’m debating the “Great Bifurcation” right now on [Threads].