Hyperscale, AI/HPC, colocation, edge, and telecom operators each bring different cooling architectures, procurement models, and measurement priorities. This page maps our instrumentation to the specific requirements of each segment.
Multi-hundred-megawatt campuses operated by cloud providers and large platform companies. Standardized server SKUs, long deployment pipelines, and procurement processes that treat instrumentation as a spec-driven commodity — where accuracy, protocol conformance, and lifetime reliability all get verified before a purchase order is cut.
| Typical campus size | 50–500 MW IT load |
| Cooling profile | CDU + D2C in new builds; hybrid in older halls |
| Procurement model | Multi-year contracts, qualification cycles |
| Integration target | Custom DCIM, Modbus/BACnet at scale |
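At this scale, delivered coolant instrumentation is usually polled over Modbus TCP (or bridged into BACnet) by the site's DCIM. Below is a minimal sketch of that read path, assuming nothing beyond the Python standard library; it builds the Modbus TCP frame by hand, and the register map (supply temperature, return temperature, 32-bit flow rate) is a hypothetical example rather than any real device's map.

```python
# Minimal sketch: polling a coolant meter's holding registers over Modbus TCP.
# The register addresses and scaling factors below are hypothetical examples.
import socket
import struct

def read_holding_registers(host: str, start_addr: int, count: int,
                           unit_id: int = 1, port: int = 502) -> list[int]:
    """Issue a single Modbus TCP 'Read Holding Registers' (function 0x03) request."""
    # PDU: function code, start address, register count (all big-endian)
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (0), remaining length, unit id
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(mbap + pdu)
        resp = sock.recv(256)
    byte_count = resp[8]
    # Each register is a big-endian unsigned 16-bit value
    return list(struct.unpack(f">{byte_count // 2}H", resp[9:9 + byte_count]))

# Hypothetical map: reg 0 = supply temp (0.01 °C), reg 1 = return temp (0.01 °C),
# regs 2-3 = flow rate as a 32-bit value (0.001 l/min), high word first
regs = read_holding_registers("10.0.0.50", start_addr=0, count=4)
supply_c = regs[0] / 100.0
return_c = regs[1] / 100.0
flow_lpm = ((regs[2] << 16) | regs[3]) / 1000.0
print(f"supply={supply_c:.2f} °C  return={return_c:.2f} °C  flow={flow_lpm:.3f} l/min")
```

In a fleet this read would run from a poller service rather than inline, but the framing and register scaling are the same regardless of scale.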
GPU and accelerator-dense deployments running large model training or scientific workloads. Per-rack power densities routinely exceed 80 kW; thermal instability directly translates to training throughput loss. Instrumentation here is not a compliance line item — it's a performance lever.
| Typical rack density | 60–150 kW (D2C), up to 250 kW (immersion) |
| Dominant architecture | Direct-to-chip for GPU racks |
| Monitoring cadence | Sub-second for thermal telemetry |
| Coupling with IT stack | Telemetry often joined with GPU metrics |
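Because thermal excursions surface first as GPU clock throttling, operators typically want coolant telemetry and GPU metrics in one record. The sketch below shows one way to do a nearest-timestamp join under a sub-second skew budget; the Sample lists stand in for a CDU poll and an NVML/DCGM query, and the 0.5 s skew bound is an assumption, not a recommendation.

```python
# Minimal sketch: joining sub-second coolant telemetry with GPU power/temperature
# samples on nearest timestamp. Sample sources and the skew budget are illustrative.
import bisect
from dataclasses import dataclass

@dataclass
class Sample:
    ts: float       # UNIX timestamp, seconds
    values: dict    # e.g. {"supply_c": 28.3, "flow_lpm": 41.1} or {"gpu_power_w": 645}

def join_nearest(coolant: list[Sample], gpu: list[Sample],
                 max_skew_s: float = 0.5) -> list[dict]:
    """For each GPU sample, attach the closest coolant sample within max_skew_s."""
    coolant_ts = [s.ts for s in coolant]   # assumed sorted by timestamp
    joined = []
    for g in gpu:
        i = bisect.bisect_left(coolant_ts, g.ts)
        candidates = coolant[max(0, i - 1):i + 1]
        if not candidates:
            continue
        nearest = min(candidates, key=lambda c: abs(c.ts - g.ts))
        if abs(nearest.ts - g.ts) <= max_skew_s:
            joined.append({"ts": g.ts, **g.values, **nearest.values})
    return joined

coolant = [Sample(0.0, {"supply_c": 27.9, "flow_lpm": 40.8}),
           Sample(0.5, {"supply_c": 28.3, "flow_lpm": 41.1})]
gpu = [Sample(0.48, {"gpu_power_w": 645, "gpu_temp_c": 71})]
print(join_nearest(coolant, gpu))
# → [{'ts': 0.48, 'gpu_power_w': 645, 'gpu_temp_c': 71, 'supply_c': 28.3, 'flow_lpm': 41.1}]
```

Joining at ingest keeps downstream queries (per-GPU thermal margin versus delivered flow) cheap, instead of forcing every dashboard to re-align two high-rate streams.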
Multi-tenant facilities leasing rack space, cages, or suites. Instrumentation serves a dual purpose: operational monitoring plus per-tenant billing and SLA evidence. Every kWh of cooling delivered may need to be attributable to a specific contract.
| Typical density mix | 5–20 kW (legacy), up to 60 kW (modern) |
| Cooling mix | Often heterogeneous: air + RDHx + some D2C |
| Commercial driver | Metering, chargeback, SLA proof |
| Tenant interface | Customer portals, standardized reports |
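Chargeback ultimately reduces to attributing delivered cooling energy per tenant: heat removed per interval is mass flow times specific heat times the supply/return temperature difference, integrated over time. A minimal sketch, assuming water-like coolant properties and illustrative sample values:

```python
# Minimal sketch: attributing delivered cooling energy to a tenant from flow and
# delta-T samples. Constants and interval lengths are illustrative; real chargeback
# would use calibrated meters and the coolant's actual properties.
WATER_DENSITY_KG_PER_L = 0.998      # at ~25 °C
WATER_CP_J_PER_KG_K = 4182.0

def cooling_kwh(samples: list[tuple[float, float, float]]) -> float:
    """samples: (interval_s, flow_lpm, delta_t_k) per metering interval."""
    joules = 0.0
    for interval_s, flow_lpm, delta_t_k in samples:
        mass_flow_kg_s = flow_lpm / 60.0 * WATER_DENSITY_KG_PER_L
        power_w = mass_flow_kg_s * WATER_CP_J_PER_KG_K * delta_t_k   # Q = m_dot * cp * dT
        joules += power_w * interval_s
    return joules / 3.6e6   # J → kWh

# One hour of 60-second intervals at 30 l/min and a 6 K rise ≈ 12.5 kWh of heat removed
print(cooling_kwh([(60.0, 30.0, 6.0)] * 60))
```

The per-interval integration matters for SLA evidence: a monthly average hides the short excursions that tenants dispute, while interval-level records make both the bill and the SLA claim auditable.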
Compact, distributed facilities — from city-block micro data centers to outdoor cabinets — supporting latency-sensitive workloads. Often unstaffed, remote, and subject to a wider ambient envelope than enterprise facilities.
| Typical size | 2–200 kW per site |
| Staffing | Unmanned or periodic visits only |
| Environment | Often harsher than controlled data halls (outdoor, curbside) |
| Cooling | Mixed: sealed air + RDHx + some D2C |
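Unstaffed sites make alert hygiene matter as much as the measurement itself: a single flapping threshold can trigger an unnecessary truck roll. The sketch below shows one common pattern, a raise/clear hysteresis band around a monitored value; the 45 °C / 42 °C thresholds are illustrative, not recommended setpoints.

```python
# Minimal sketch: hysteresis-based alarm evaluation for an unattended edge cabinet.
# Separate raise and clear thresholds debounce the alert so a value hovering near
# the limit does not generate a stream of notifications. Thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class HysteresisAlarm:
    raise_above: float      # e.g. coolant return temperature that triggers an alert
    clear_below: float      # lower threshold that must be crossed before clearing
    active: bool = False

    def update(self, value: float) -> str | None:
        """Returns 'RAISE' or 'CLEAR' on a state change, None otherwise."""
        if not self.active and value >= self.raise_above:
            self.active = True
            return "RAISE"
        if self.active and value <= self.clear_below:
            self.active = False
            return "CLEAR"
        return None

alarm = HysteresisAlarm(raise_above=45.0, clear_below=42.0)
for reading in (41.0, 44.9, 45.2, 44.0, 43.0, 41.8):
    event = alarm.update(reading)
    if event:
        print(f"{event} at {reading} °C")   # prints RAISE at 45.2 °C, CLEAR at 41.8 °C
```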
Central offices, 5G aggregation sites, and telecom-owned compute facilities. Cooling sits alongside battery plants, fiber distribution frames, and active network gear. Regulatory continuity and site-level reliability dominate design choices.
| Typical density | 3–20 kW per rack |
| Architecture | Traditional air + selective RDHx retrofit |
| Regulatory | Telecom-grade uptime (NEBS where applicable) |
| Mix of loads | Compute + network + battery plant |
Recommended instrumentation packages for D2C, immersion, RDHx, and CDU deployments.
View solutions →
Five product families, with detailed specs, accuracy classes, and protocol options.
View products →
Technical articles on liquid cooling trends, thermal density challenges, and vendor selection.
Read articles →
Whether you're designing a new liquid-cooled data center or retrofitting existing air-cooled facilities, our engineers can help you select the right instrumentation package.