Driven by regulatory action, the data center industry is undergoing a significant transition to low-GWP refrigerants. Under the AIM Act, the Environmental Protection Agency (EPA) is mandating that new cooling equipment for data centers use refrigerants with a GWP of less than 700 by January 1, 2027. Some states, such as California, Washington, and New York, have established even earlier compliance deadlines, with California leading the way by requiring a GWP of less than 750 in new equipment by January 1, 2025.
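This patchwork of deadlines amounts to a date-and-threshold lookup. The sketch below restates the figures from this article as a simple compliance check; the function name, data structure, and the R-410A GWP value of 2,088 (AR4) are illustrative assumptions, not part of any regulation.

```python
from datetime import date

# GWP limits for new data center cooling equipment, as described above.
# Maps jurisdiction -> (effective date, maximum allowed GWP).
GWP_LIMITS = {
    "federal": (date(2027, 1, 1), 700),
    "california": (date(2025, 1, 1), 750),
}

def is_compliant(refrigerant_gwp: int, jurisdiction: str, install_date: date) -> bool:
    """True if equipment with this refrigerant GWP may be newly installed
    in the given jurisdiction on the given date."""
    effective, limit = GWP_LIMITS[jurisdiction]
    if install_date < effective:
        return True  # rule not yet in force
    return refrigerant_gwp < limit

# R-410A (GWP ~2,088) vs. R-454B (GWP 466) in California after Jan 1, 2025:
print(is_compliant(2088, "california", date(2025, 6, 1)))  # False
print(is_compliant(466, "california", date(2025, 6, 1)))   # True
```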

In light of these regulatory shifts, new low-GWP refrigerants are emerging that not only reduce the environmental impact associated with traditional refrigerants, but also maintain the crucial performance and reliability necessary for data center operations.


Cooling Equipment Choices

There are four main types of data centers, and each has specific requirements for the type of cooling equipment it employs, said Stephen Hueckel, market manager of HVACR technologies Americas at Copeland.

“Edge data centers, which operate closest to end users, are much smaller in size and would likely use a computer room air conditioning (CRAC) unit, or even something small like a wall-mount unit,” he said. “More traditional data centers, known as enterprise data centers, are also primarily driven by CRAC units. Colocation data centers — where the space is occupied by multiple tenants — come in assorted sizes, so they might use a variety of technologies. The largest data center type — known as a hyperscale data center — is typically built by companies like Amazon, Meta, and Microsoft, primarily using air- or water-cooled chillers.”

In addition to facility size, the choice of cooling equipment is influenced by power density and geographical location, as well as labor resources, elevation, and seismic factors, said Sean Crain, consulting engineer account executive at Johnson Controls. For these reasons, he said it is very difficult to standardize the design.

“In the mission-critical data center space, standard heat rejection equipment includes primarily air- and water-cooled chillers,” said Crain. “There are also water-side free cooling, direct evaporator, and DX system designs, especially on legacy sites. For cooling the white space, the solutions we're seeing most right now are utilizing air-cooled equipment (including CRACs, CRAHs, and fan coil walls), then DX systems, and emerging direct-to-chip and immersion technologies.”

Ron Spangler, senior manager at Vertiv, added that equipment found in data centers can range from floor-mounted split DX systems with high sensible heat ratios, to self-contained DX systems installed outdoors on grade, to floor-mounted chilled water air handlers paired with chillers.

“Smaller sites tend to use split-system DX equipment, rather than installing large, chilled water plants with air handlers,” said Spangler. “However, split-system and self-contained DX systems using sophisticated economization technologies are popular even at large sites to allow for rapid deployment of additional equipment, so cooling equipment can be pay-as-you-go, instead of installing large cooling plants up front.”


Liquid Cooling

Michael Strouboulis, business development director for data centers at Danfoss Climate Solutions, predicts that precision air CRAC or chilled water CRAH (computer room air handling) cooling systems will continue to be the chosen cooling methods for low rack power density data centers for some time. However, he noted that as high-performance computing, artificial intelligence, machine learning, IoT, and other technologies advance, rack power densities will increase, and liquid cooling will become essential to ensuring reliability and performance.

“For example, CRAC/CRAH and row-based containment cooling solutions are ideal up to 15 to 20 kW, but there is a point, maybe around 20 kW, past which they are no longer cost-effective or efficient,” said Strouboulis. “At that point, there are other techniques that cool closer to the source of heat and are located very close to the rack doors or even closer, cooling microchips directly by direct-to-chip or immersion liquid cooling.”
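Strouboulis's rule of thumb above can be expressed as a simple threshold check. The function below is only an illustration of that decision point; the 20 kW cutoff is the approximate figure he cites, not a design guideline.

```python
def suggest_cooling(rack_kw: float) -> str:
    """Rough rule of thumb from the discussion above: room- or row-based
    air cooling up to ~20 kW per rack, liquid cooling beyond that.
    Illustrative only; real selections also weigh cost, site, and density trends."""
    if rack_kw <= 20:
        return "CRAC/CRAH or row-based containment cooling"
    return "direct-to-chip or immersion liquid cooling"

print(suggest_cooling(12))  # air-based cooling is still cost-effective
print(suggest_cooling(40))  # past the threshold, cool closer to the chip
```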

Indeed, liquid cooling is an emerging technology that can help improve the energy efficiency and performance of data center cooling systems. This type of new technology is necessary, because air-cooled systems are not anticipated to be able to meet the cooling needs of next-generation chips, said Brandon Marshall, global marketing manager for automotive and immersion cooling at The Chemours Company. Based on publicly available roadmaps, he noted that air-cooled equipment may be unable to cool these components as early as 2026.

Marshall explained that liquid cooling uses a fluid — such as water or a dielectric fluid — in direct contact with or through a heat exchanger to cool the heat-generating components of the servers and remove heat more effectively than air. This type of technology can be divided into two categories: single-phase and two-phase.

“Single-phase liquid cooling uses a pump to circulate the liquid through a closed loop system, while two-phase liquid cooling uses a phase change material, such as a refrigerant, that evaporates and condenses as it absorbs and releases heat,” said Marshall. “Right now, a good deal of industry attention has been turned to two-phase immersion cooling (2-PIC), the frontrunner for being the most energy-efficient technology for data center cooling. 2-PIC is also a solution that brings many other benefits for today’s high-performance computing and environmental demands.”


Low-GWP Alternatives

When it comes to low-GWP refrigerants, the cooling systems used in data centers not only need to be designed for compatibility and optimized performance, but must also be reliable, said Strouboulis. While the cost of the cooling system is a major consideration in new builds and retrofits, uptime surpasses cost and drives all decisions, he said, adding that there is no room for disruptions caused by malfunctioning equipment in a data center.

“The best opportunity for energy efficiency for cooling or heat removal from data centers is in raising the cooling temperature, such that less work is required to reject or reuse that rejected heat,” said Strouboulis. “The selection of the right low-GWP refrigerant should be such that it will allow this rise in cooling temperature and allow heat recovery and reuse, while ensuring reliable and uninterrupted operation.”
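The thermodynamics behind Strouboulis's point can be sketched with the ideal (Carnot) cooling COP, which rises as the cooling temperature approaches the heat rejection temperature. The temperatures below are assumed example values, and real equipment achieves only a fraction of this ideal bound.

```python
def carnot_cop(t_supply_c: float, t_reject_c: float) -> float:
    """Ideal (Carnot) cooling COP for a given supply (cold-side) and heat
    rejection (hot-side) temperature, in degrees Celsius. Illustrates why
    raising the cooling temperature reduces the work of heat rejection."""
    t_cold = t_supply_c + 273.15  # convert to kelvin
    t_hot = t_reject_c + 273.15
    return t_cold / (t_hot - t_cold)

# Raising chilled water supply from 7°C to 20°C (35°C rejection assumed)
# roughly doubles the ideal COP:
print(round(carnot_cop(7, 35), 1))   # ~10.0
print(round(carnot_cop(20, 35), 1))  # ~19.5
```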

According to Strouboulis, Danfoss is ready for the refrigerant transition, and all of its compressors, heat exchangers, sensors, and flow controls are available and have been qualified for data center cooling systems using low-GWP refrigerants.

Johnson Controls is also ready for the transition to low-GWP refrigerants, said Crain, with low-GWP options for all of its applied and commercial equipment for data centers. This includes the York YVAM air-cooled magnetic bearing centrifugal chiller, which is specifically designed for hyperscale and colocation data center applications and uses R-1234ze, a refrigerant with a GWP of less than 10.

While it’s fairly easy for owners of new data centers to select cooling equipment that utilizes low-GWP refrigerants, transitioning to these alternatives in existing facilities presents a unique challenge.

“A lot of the colocation market takes a phased approach to building out. So, we're seeing phase one and two utilizing equipment with conventional refrigerants, and the expectation to build out phases three and four with a very similar piece of equipment,” said Crain. “However, when data center equipment uses a different refrigerant, there may be modest changes in product footprint, capacity, and efficiency. It's possible that phase three of a data center will use a different chiller from phase two, which can ultimately affect the cooling system design and operating conditions.”

Vertiv is in the process of redesigning all of its product lines to be available with low-GWP refrigerants by early 2025. This will allow the company to provide equipment for states such as California, as well as meet the sustainability requirements of customers who want to transition to low-GWP refrigerants before it is required by law, said Spangler.

In the U.S., he expects lower-GWP refrigerants such as R-454B and R-32 refrigerants will be used for air-cooled equipment, while chillers may use R-513A and R-1234ze. Spangler doesn’t believe it will be practical to retrofit existing equipment with any of these new refrigerants, as it would require changes in components like compressors, shut-off valves, and refrigerant leak detection sensors. In addition, the equipment may need to be recertified in the field for safety agency listing.

“We also expect that current refrigerants will be available for several years, since there will be an emerging refrigerant reclaim market for them, given the ramp down in new production of these refrigerants,” said Spangler. “For existing and new data centers, customers and contractors will need to install equipment using low-GWP refrigerants by January 1, 2027. Contractors will need to procure equipment with manufacturing and installation lead time in mind to be sure the equipment is installed by that date. California currently requires installation by January 1, 2025, so new cooling equipment will need to be ordered well before then.”

In general, Chemours expects the move to next-generation, low-GWP refrigerants to have very little impact on the operation and performance of equipment used in data centers. The most important thing to note about this transition, said Marshall, is the change in safety designation from an ASHRAE-classified A1 refrigerant to an A2L (mildly flammable) refrigerant.

One of the A2L refrigerants that Chemours offers as a replacement for air-cooled systems is Opteon XL41 (R-454B). Its GWP of 466 (AR4) represents a reduction of more than 77% versus R-410A, and its properties are comparable to R-410A, so Chemours expects this solution to become very popular upon deployment in 2025, said Marshall.
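The stated reduction is easy to verify. The check below assumes the commonly cited AR4 GWP of 2,088 for R-410A, which is not given in the article itself.

```python
# Checking the stated reduction: R-454B (GWP 466, AR4) vs. R-410A
# (GWP 2,088, AR4 -- a commonly cited value, assumed here).
r410a_gwp = 2088
r454b_gwp = 466
reduction = 1 - r454b_gwp / r410a_gwp
print(f"{reduction:.1%}")  # 77.7%, consistent with "more than 77%"
```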

Another low-GWP refrigerant in the pipeline at Chemours is Opteon 2P50, a developmental dielectric fluid that has a GWP of 10. According to Marshall, “It was specifically created to optimize the performance of electronic components in a 2-PIC system. Opteon 2P50 is currently pre-commercial pending regulatory approval.”

To help data center end users adopt the next generation of safe, reliable, and highly efficient compressor technologies, Copeland has optimized all its major product platforms for use with A2L refrigerants, said Hueckel. This includes the recently launched oil-free centrifugal compressor with Aero-lift bearing technology. This compressor platform is optimized for use with lower-GWP A2L refrigerants including R-515B, R-1234ze, and R-513A, and is being developed for the 50- to 200-ton capacity range.

“The compressor platform delivers an efficient and reliable oil-free solution for the air- and water-cooled chiller market,” said Hueckel. “For OEMs, it provides a flexible platform that they can easily customize and adapt to the needs of specific applications, from data centers to heat recovery to health care facilities and high-ambient conditions. The product will be available in the summer of this year.”

In addition, Copeland’s scroll compressor lineup is optimized for use with R-454B and R-32 and includes not only fixed speed, but also two-stage, digital, and variable speed.

“We see a wide variety of scroll options being applied in a multitude of data center types,” said Hueckel. “We’re also seeing some Copeland scroll compressors for R-134a transitioning to R-513A being deployed in data centers.”

HVAC contractors will play a key part in helping their data center customers transition to low-GWP refrigerants, which is why Hueckel suggests that they obtain additional training on lower-GWP and A2L refrigerants.

“This will help them select, install, commission, maintain, and service the new equipment installed in a new data center or retrofit application,” he said. “Contractors should also educate themselves about the latest refrigerant regulations and understand the state requirements for various HVAC applications.”

Obtaining this education is crucial, as time is of the essence. While OEMs have until 2027 to meet the <700 GWP requirement in data center cooling systems, various states are rolling out their own transition schedules, beginning with California next year. In addition, some owners may be looking to move to lower-GWP refrigerants before the mandated deadline, so contractors should be ready to help them find the best solution that meets their particular needs.