This panel drew over 100 attendees and ran for more than an hour, pairing a conversation-style discussion with open networking and breakout rooms afterward. Founders, operators, and product leaders walked through what is working now to align AI compute with power, water, and community needs.
The throughline was simple: build for speed, measure water, and keep architectures flexible so you can adapt.
Design for production first, then make sustainability a built-in constraint rather than an afterthought.
Watch the complete recording or read the summary below to learn how teams are siting, powering, and cooling AI data centers while staying grid-friendly and community-ready.
First, meet our panelists:
Anna Jacobi, Fractional CPO and AI Architect Advisor @ Gathid, formerly Microsoft, Meta, and AMD
Guy Marom, Vice President of Engineering @ EdgeCloudLink
Michael Eusterman, Operations & Growth @ Giga Energy
Before you get into the summary…
Share GridInnovationHub.com with your peers, colleagues, and friends.
Co-locate and design for reuse
Pair compute with energy, water, and heat loops
Start where energy, water, and heat can balance each other. Pair hyperscale facilities with district heating loops or geothermal exchange to capture and reuse up to ~80% of waste heat.
Bottlenecks are shifting from training compute to inference latency and water. Move beyond PUE alone and start reporting WUE.
Plan for more edge sites, colos, and owned compute near load and renewables.
Expect SMRs and other local generation options to co-locate with compute.
Treat “A-to-A” (AI producing data for AI) as a future state that demands local, auditable systems.
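To make the reuse math concrete, here is a back-of-envelope sketch in Python. The IT load, PUE, and per-home heating figures are illustrative assumptions; only the ~80% capture rate comes from the discussion above.

```python
# Back-of-envelope heat-reuse balance for a co-located campus.
# All inputs are illustrative assumptions, not panel data,
# except the ~80% waste-heat capture figure.

it_load_mw = 50.0          # assumed IT load of the facility
pue = 1.2                  # assumed power usage effectiveness
reuse_fraction = 0.80      # the ~80% capture figure from above

total_power_mw = it_load_mw * pue
# Essentially all electrical input ends up as low-grade heat.
recovered_mw = total_power_mw * reuse_fraction

# Rough district-heating equivalence: ~5 kW peak demand per home
# (illustrative; varies widely by climate and insulation).
homes_heated = recovered_mw * 1000 / 5

print(f"Recoverable heat: {recovered_mw:.0f} MWth "
      f"(~{homes_heated:,.0f} homes at 5 kW each)")
```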
Production first, sustainability built-in
Bridge speed-to-power with flexible sources
Demand is outpacing supply.
Customers ask one question first: when can I be in production? The practical answer combines grid power, renewables, and onsite options so you can deliver capacity now and improve carbon over time.
Mix grid, renewable, and onsite generation to hit timelines
Keep a clear path to greener supply as transmission and markets catch up
Build controls that let you dial performance, cost, or sustainability as needed
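Here is a rough sketch of what that staging logic looks like in code. The source names, capacities, and online dates are all hypothetical; the point is checking firm capacity against the production target at each milestone.

```python
# Sketch: staging a power bridge to hit a production date.
# Sources, capacities, and lead times below are hypothetical.

from datetime import date

sources = [
    {"name": "onsite gas turbines", "mw": 30, "online": date(2025, 6, 1)},
    {"name": "grid interconnect",   "mw": 60, "online": date(2026, 3, 1)},
    {"name": "solar + storage PPA", "mw": 40, "online": date(2027, 1, 1)},
]

target_mw = 80  # assumed capacity needed for production

def capacity_on(day: date) -> float:
    """Total firm capacity available on a given date."""
    return sum(s["mw"] for s in sources if s["online"] <= day)

for milestone in [date(2025, 7, 1), date(2026, 4, 1), date(2027, 2, 1)]:
    mw = capacity_on(milestone)
    status = "in production" if mw >= target_mw else f"short {target_mw - mw} MW"
    print(f"{milestone}: {mw} MW available ({status})")
```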
Measure water, not just PUE
Move from PUE alone to WUE and real local impact
Water is becoming the limiting factor in many regions. You need to know, report, and improve it.
Shift from PUE alone to include WUE, and track water sources, treatment, reuse, and evaporation.
Track WUE alongside PUE in real time
Engage communities early on water sourcing and reuse
Co-site where you can return heat and avoid net draws on local supply
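The definitions themselves are simple: PUE is total facility energy over IT energy, and WUE is liters of water consumed per IT kWh. A minimal sketch with hypothetical meter readings:

```python
# Minimal sketch of real-time PUE and WUE from facility telemetry.
# The meter values and 5-minute window are assumptions for illustration.

def pue_wue(it_kwh: float, facility_kwh: float, water_liters: float):
    """PUE = facility energy / IT energy; WUE = liters per IT kWh."""
    pue = facility_kwh / it_kwh
    wue = water_liters / it_kwh
    return pue, wue

# Example 5-minute interval readings (hypothetical values).
pue, wue = pue_wue(it_kwh=2500.0, facility_kwh=3100.0, water_liters=4200.0)
print(f"PUE {pue:.2f}  WUE {wue:.2f} L/kWh")  # -> PUE 1.24  WUE 1.68 L/kWh
```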
Cooling tradeoffs you cannot ignore
Closed loop saves water, evaporative saves energy
There is no silver bullet. Direct-to-chip liquid in closed loops can cut water use, but it adds weight, complexity, and continuous chiller load.
Evaporative towers save energy but consume make-up water. You need a system that adapts.
Design for multiple cooling modes and switch by season or tariff
Model rack densities from 10 kW to 250 kW today, with a path to 600 kW
Keep rack and manifold choices as vendor-agnostic as possible
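To make “switch by season or tariff” concrete, here is a toy mode selector. The wet-bulb threshold, make-up-water rate, and chiller overhead are illustrative assumptions, not vendor numbers.

```python
# Sketch of a cooling-mode selector that trades water against energy.
# All thresholds and rates are illustrative assumptions.

def pick_cooling_mode(wet_bulb_c: float, power_price: float,
                      water_price: float) -> str:
    """Choose between free cooling, evaporative towers, and
    closed-loop chillers from weather and current tariffs."""
    if wet_bulb_c < 10:
        return "free-cooling"          # economizer hours: cheapest of all
    # Compare marginal cost per IT kWh: evaporative burns make-up
    # water, closed loop burns chiller energy.
    evap_cost = water_price * 1.8      # assumed 1.8 L make-up water per kWh
    chiller_cost = power_price * 0.25  # assumed 0.25 kWh chiller overhead per kWh
    return "evaporative" if evap_cost < chiller_cost else "closed-loop"

print(pick_cooling_mode(wet_bulb_c=22, power_price=0.09, water_price=0.004))
```

In practice, this comparison would run continuously against live weather and tariff feeds rather than single inputs.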
Front-of-meter, behind-the-meter, or both
Find power where it actually exists
Two paths are thriving. Developers with immediate grid access at favorable substations can move fast.
Others secure behind-the-meter supply to escape long queues. The market will use both, often at the same campus.
Hunt for rural substations with spare headroom and good tariffs
Use modular blocks to stage capacity while long-lead gear arrives
Expect new transmission to lag demand, so plan interim bridges
Batteries change the operating math
Flex load without violating uptime
Short-duration batteries let you ride through peaks, shift draw, and provide grid services without touching uptime targets. Think two to four hours for most sites.
Shift peak energy to batteries during price spikes
Combine with demand response in markets like ERCOT and SPP
Treat batteries as a buffer, not backup, and model lifecycle cost
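A minimal sketch of that buffer-not-backup logic, with hypothetical prices and a hard reserve floor that protects uptime:

```python
# Sketch of a peak-shift dispatcher: discharge during price spikes,
# recharge off-peak, never touch the uptime reserve.
# Prices, capacities, and the reserve floor are assumptions.

battery_mwh = 40.0   # 4 h at 10 MW, per the 2-4 h sizing above
reserve_mwh = 8.0    # untouchable floor: buffer, not backup
site_load_mw = 10.0

def dispatch(price_per_mwh: float, soc_mwh: float) -> tuple[str, float]:
    """Return (action, grid draw in MW) for one hour."""
    if price_per_mwh > 150 and soc_mwh - site_load_mw >= reserve_mwh:
        return "discharge", 0.0                # serve load from storage
    if price_per_mwh < 40 and soc_mwh < battery_mwh:
        return "charge", site_load_mw + 10.0   # load plus 10 MW charge rate
    return "idle", site_load_mw

for price in [35, 90, 180]:
    print(price, dispatch(price_per_mwh=price, soc_mwh=30.0))
```

Keeping the reserve untouched is the design choice that separates a peak-shifting buffer from a backup system you cannot afford to drain.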
Prepare for the edge AI wave
Training in hubs, inference near users
Training will stay centralized. Inference wants low latency, data sovereignty, and cost control, which pulls it to the edge. That creates a different hardware, power, and cooling profile than training.
Plan 5–20 MW edge sites with strict latency targets
Expect inference-specific silicon to reduce power per token
Separate training and inference footprints whenever possible
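One way to sanity-check a latency target: light in fiber covers roughly 200 km per millisecond, so the round-trip SLO minus processing time caps how far a pod can sit from its users. A quick sketch (the SLO and processing figures are assumptions):

```python
# Quick latency-budget check for edge siting: how far can an
# inference pod sit from users and still meet the SLO?
# Fiber propagation at ~2/3 c is standard; the SLO is assumed.

SPEED_IN_FIBER_KM_PER_MS = 200   # light in fiber ~ 200 km per ms

def max_radius_km(slo_ms: float, processing_ms: float) -> float:
    """Max one-way fiber distance given a round-trip SLO and
    fixed model/serving time at the pod."""
    network_budget_ms = slo_ms - processing_ms
    return network_budget_ms / 2 * SPEED_IN_FIBER_KM_PER_MS

# e.g. a 50 ms round-trip SLO with 30 ms of serving time
print(f"{max_radius_km(slo_ms=50, processing_ms=30):.0f} km")  # -> 2000 km
```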
Autonomy is not optional
Operate with software, audit with data
Modern sites collect tens of thousands of signals per second.
Autonomy coordinates generation, distribution, cooling, and workload placement against a live objective: performance, cost, or sustainability.
Use a single pane to set targets and watch KPIs
Generate real-time PUE and WUE plus audit reports by default
Let control software orchestrate modes and prove every decision
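Conceptually, the live objective is just a ranking function over candidate operating modes. A toy sketch with made-up mode data:

```python
# Sketch of a live-objective scorer: the control plane ranks candidate
# operating modes against whichever target the operator has dialed in.
# Mode names and numbers are hypothetical.

candidate_modes = [
    # (name, perf score, $/h, liters/h) - illustrative values
    ("max-performance", 1.00, 950, 5200),
    ("balanced",        0.93, 780, 3900),
    ("water-saver",     0.88, 820, 1400),
]

def best_mode(objective: str):
    """Pick the mode that optimizes the selected objective."""
    if objective == "performance":
        return max(candidate_modes, key=lambda m: m[1])
    if objective == "cost":
        return min(candidate_modes, key=lambda m: m[2])
    if objective == "sustainability":
        return min(candidate_modes, key=lambda m: m[3])
    raise ValueError(objective)

for obj in ["performance", "cost", "sustainability"]:
    print(obj, "->", best_mode(obj)[0])
```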
What to pilot in the next quarter
Small scope, measurable ROI, operator trust built in
WUE tracking pilot: instrument and publish WUE next to PUE for one block
Peak-shift with batteries: switch to storage during two scheduled peaks and report savings
Autonomous controls slice: let software manage cooling mode on one row with safety guardrails
Heat reuse demo: capture and deliver waste heat to a nearby loop or process
Edge inference pod: deploy a 5 MW modular pod close to users with strict latency SLOs
Bottom line: deliver power fast, make water a first-class metric, and keep your architectures flexible.
That is how you scale AI compute while staying reliable, efficient, and welcome in the community.
Thanks for reading! Here are some more grid technology resources to check out:
👉 Watch the whole 60-minute panel event here.
👉 Find more grid technology panels, meet-ups, and events here.
👉 Register for our networking coffee chats here.