AI is driving deep structural change everywhere from enterprise data centers to edge endpoints, but this growth comes with a heavy price tag: water and energy consumption. As chip designers, system architects, and hyperscale operators push the limits of compute, the underlying physical infrastructure is running into hard constraints, especially around sustainability.

Recent Morgan Stanley analysis predicts AI data centers could require upwards of 1,068 billion liters of water per year by 2028, roughly an 11x rise from 2024 estimates and about equal to Switzerland's annual drinking-water consumption. This includes both direct cooling water and indirect water consumed through electricity generation. Add the massive footprint of semiconductor manufacturing, and it's clear: the water-energy link in AI infrastructure is raising new barriers in sustainability, regulation, and even permitting for new facilities.

Power Consumption and Water Stress

The physics is simple: every watt burned by a processor, switch, or retimer becomes heat, and every bit of heat puts strain on cooling resources. For air- and liquid-cooled racks, water withdrawal for evaporative or tower cooling can reach thousands of liters per year per rack.

This strain is exacerbated by the growing need for higher-bandwidth interconnects (PCIe Gen6, CXL, Ethernet, UALink), whose power draw adds directly to the heat load.

Over half of global datacenter hubs are in regions that already face water stress, leading to mounting regulatory pressure. Locations like California, Arizona, Singapore, and even parts of Europe are now enforcing stricter requirements around water usage effectiveness (WUE), cooling system types, and environmental reporting. For chipmakers, drought-driven fab shutdowns and water rationing are no longer hypothetical events; they shape capacity planning and supply chain reliability.

The Lever of Signaling and Interconnect Efficiency

While much of the industry's attention has gone to cooling innovation and renewable power adoption, the signaling layer (how bits transit between chips) offers a high-leverage target for sustainability.

This is a case where Kandou AI's copper MIMO signaling stands out: for example, typical PCIe Gen6 retimers might operate at ~4.5 pJ/bit, but copper MIMO architectures can deliver twice the throughput at roughly half the power, directly reducing a rack's cooling load by lowering total heat output.
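To make the comparison concrete, here is a back-of-the-envelope sketch of what those figures imply for energy per bit. The 4.5 pJ/bit retimer figure and the "twice the throughput at roughly half the power" claim come from the text above; the 4x reduction follows arithmetically and is illustrative, not a measured benchmark.

```python
# Energy-per-bit comparison using the figures quoted above.
LEGACY_PJ_PER_BIT = 4.5  # typical PCIe Gen6 retimer (pJ/bit), from the text

# Copper MIMO: ~2x throughput at ~0.5x power, so energy per bit
# scales by (0.5 power) / (2.0 throughput) = 0.25
mimo_pj_per_bit = LEGACY_PJ_PER_BIT * 0.5 / 2.0

print(f"Legacy retimer: {LEGACY_PJ_PER_BIT} pJ/bit")
print(f"Copper MIMO:   ~{mimo_pj_per_bit} pJ/bit "
      f"({LEGACY_PJ_PER_BIT / mimo_pj_per_bit:.0f}x less energy per bit)")
```

Since every picojoule moved per bit ultimately leaves the rack as heat, a ~4x reduction in signaling energy per bit translates directly into less cooling work.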

Less heat, less cooling machinery, less water per bit of AI inference.

Consider a representative rack with eight GPUs, each using 400 Gbps links. Switching from legacy retimers to Kandou AI's signaling could save more than 60,000 kWh per year.

Using a current benchmark of 1.89 liters of water per kWh consumed (accounting for both grid generation and cooling), that translates to an annual saving of more than 100,000 liters of water per rack, a tangible impact when scaled across AI clusters.
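The arithmetic behind that per-rack figure can be checked in a few lines. The 60,000 kWh/year energy saving and the 1.89 L/kWh water-intensity benchmark are the numbers quoted above; the liters-per-rack result follows directly.

```python
# Rack-level water saving, reproducing the example above.
energy_saved_kwh_per_year = 60_000  # per rack, from the text
water_l_per_kwh = 1.89              # grid generation + cooling, from the text

water_saved_l = energy_saved_kwh_per_year * water_l_per_kwh
print(f"Water saved: ~{water_saved_l:,.0f} liters/year per rack")
# ~113,400 liters/year, i.e. >100,000 liters per rack
```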

These indirect effects ripple upstream. Lower-power devices mean less strain on electricity grids, and therefore less water consumed by regional coal, gas, or nuclear plants for steam and cooling. In parallel, more efficient chips reduce the demand for extreme-scale manufacturing, a water-intensive process that can require millions of liters per day for ultrapure water.

As AI density and compute demand grow, industry standards (from the EU's Energy Efficiency Directive to Singapore's Green Data Centre Roadmap) are moving beyond voluntary guidelines. Performance thresholds on WUE, energy intensity, and even supply chain water recycling are now part of site selection and capital planning. Interconnect efficiency is becoming central to environmental permitting, sustainability disclosures, and long-term cost-of-ownership models.

The Path Forward: Scalable and Sustainable AI

The next generation of datacenter innovation will require a shift from incremental efficiency to holistic sustainability design. Copper MIMO signaling and similar advances show it’s possible to break linear resource scaling and enable responsible growth even as AI workloads surge.

Lower power isn’t just better for performance or energy bills; it is now a keystone for water conservation, regulatory compliance, and global resource resilience.

AI’s promise depends on a datacenter infrastructure that is both scalable and sustainable.

Incremental improvements are needed, as is a step change in how interconnects, chips, and systems are designed and deployed.

At Kandou AI, silicon innovation unlocks smarter growth and lower water consumption, enabling an ecosystem that can scale without compromise.