
Edge Computing and Thermal Management

By Rebecca O’Day and Norman Quesnel
Senior Members of Marketing Staff
Advanced Thermal Solutions, Inc. (ATS)

Expanding the Internet of Things (IoT) into time-critical applications, such as autonomous vehicles, means finding ways to reduce data transfer latency. One such way, edge computing, places some computing as close to connected devices as possible. Edge computing pushes intelligence, processing power and communication capabilities from a network core to the network edge, and from an edge gateway or appliance directly into devices. The benefits include improved response times and better user experiences.

While cloud computing relies on data centers and communication bandwidth to process and analyze data, edge computing provides a means to offload some work from centralized cloud computing by moving less compute-intensive tasks to other components of the architecture, near where data is first collected. Edge computing works with IoT data collected from remote sensors, smartphones, tablets, and machines. This data must be analyzed and reported on in real time to be immediately actionable. [1]

Figure 1: Edge Computing Architecture Scheme with Both the Computing Power and Latency Decreasing Downwards [2]

In the above edge computing scheme, developed by Inovex, the layers are described as follows:

Cloud: On this layer compute power and storage are virtually limitless. But, latencies and the cost of data transport to this layer can be very high. In an edge computing application, the cloud can provide long-term storage and manage the immediate lower levels.

Edge Node: These nodes are located before the last mile of the network, also known as downstream. Edge nodes are devices capable of routing network traffic and usually possess high compute power. The devices range from base stations, routers and switches to small-scale data centers.

Edge Gateway: Edge gateways are like edge nodes but are less powerful. They can speak most common protocols and manage computations that do not require specialized hardware, such as GPUs. Devices on this layer are often used to translate for devices on lower layers. Or, they can provide a platform for lower-level devices such as mobile phones, cars, and various sensing systems, including cameras and motion detectors.

Edge Devices: This layer is home to small devices with very limited resources. Examples include single sensors and embedded systems. These devices are usually purpose-built for a single type of computation and often limited in their communication capabilities. Devices on this layer can include smart watches, traffic lights and environmental sensors. [2]

Today, edge computing is becoming essential where time-to-result must be minimized, such as in smart cars. Bandwidth costs and latency make crunching data near its source more efficient, especially in complex systems like smart and autonomous vehicles that generate terabytes of telemetry data. [3]

Figure 2: A Small Scale Edge Computing Device from LeapMind [4]

Besides vehicles, edge computing examples serving the IoT include smart factories and homes, smartphones, tablets, sensor-generated input, robotics, automated machines on manufacturing floors, and distributed analytics servers used for localized computing and analytics.

Major technologies served by edge computing include wireless sensor networks, cooperative distributed peer-to-peer ad-hoc networking and processing, also classifiable as local cloud/fog computing, distributed data storage and retrieval, autonomic self-healing networks, remote cloud services, augmented reality and virtual reality. [5]

Autonomous Vehicles and Smart Cars

New so-called autonomous vehicles have enough computing hardware that they could be considered mobile data centers. They generate terabytes of data every day. A single vehicle running 14 to 16 hours a day creates 1-5 TB of raw data an hour and can produce up to 50 TB a day. [6]

A moving self-driving car that continuously sends a live stream to remote servers could meet disaster while waiting for central cloud servers to process the data and respond. Edge computing allows basic processing, such as deciding when to slow down or stop, to be done in the car itself, eliminating that dangerous latency.

Figure 3: Edge Computing Reduces Data Latency to Optimize Systems in Smart and Autonomous Vehicles [7]

Once an autonomous car is parked, nearby edge computing systems can provide added data for future trips. Processing this close to the source reduces the costs and delays associated with uploading to the cloud. Here, the processing does not occur in the vehicle itself.

Other Edge Computing Applications

Edge computing enables industrial and healthcare providers to bring visibility, control, and analytic insights to many parts of an infrastructure and its operations—from factory shop floors to hospital operating rooms, from offshore oil platforms to electricity production.

Machine learning (ML) benefits greatly from edge computing. All the heavy-duty training of ML algorithms can be done on the cloud and the trained model can be deployed on the edge for near real-time or true real-time predictions.
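As a rough illustration of this split, the sketch below assumes a model has already been trained in the cloud and exported to the ONNX format; the edge device only loads the file and runs inference locally. The model file name, input layout, and preprocessing are hypothetical.

    # Minimal sketch: run a cloud-trained model locally on an edge device.
    # Assumes the model was exported to ONNX as "edge_model.onnx"; the file
    # name, input layout, and preprocessing are illustrative assumptions.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("edge_model.onnx")  # load once at startup
    input_name = session.get_inputs()[0].name

    def predict(sensor_frame: np.ndarray) -> np.ndarray:
        """Run one near real-time inference on locally collected data."""
        batch = sensor_frame.astype(np.float32)[np.newaxis, ...]
        outputs = session.run(None, {input_name: batch})
        return outputs[0]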

For manufacturing uses, edge computing devices can translate data from proprietary systems to the cloud. The capability of edge technology to perform analytics and optimization locally provides faster responses for more dynamic applications, such as adjusting line speeds and product accumulation to balance the line. [8]

Figure 4: EdgeBoard by Baidu is a Computing Solution for Edge-Specific Applications [9]

Edge Computing Hardware

Processing power at the edge needs to be matched to the application and the available power to drive an edge system operation. If machine vision, machine learning and other AI technologies are deployed, significant processing power is necessary. If an application is more modest, such as with digital signage, the processing power may be somewhat less.

Intel’s Xeon D-2100 processor is made to support edge computing. It is a lower-power, system-on-chip version of a Xeon cloud/data center processor. The D-2100 has a thermal design power (TDP) of 60-110 W. It can run the same instruction set as traditional Intel server chips, but takes that instruction set to the edge of the network. Typical edge applications for the Xeon D-2100 include multi-access edge computing (MEC), virtual reality/augmented reality, autonomous driving and wireless base stations. [10]

Figure 5: The D-2100 Processor Dissipates Between 60-110 W. Thermal Management Depends on the Type of Device and Where it is Used [11]

Thermal management of the D-2100 edge-focused processor is largely determined by the overall mechanical package the edge application takes. For example, if the application is a traditional 1U server with sufficient airflow into the package, a commercial off-the-shelf copper or aluminum heat sink should provide sufficient cooling. [11]
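As a back-of-the-envelope check, the required heat sink thermal resistance can be estimated from the TDP and the available temperature budget. The case temperature limit and local ambient in the sketch below are assumed values for illustration, not Intel specifications.

    # Rough heat sink budget for an edge processor. The temperature limits
    # below are assumed for illustration, not vendor specifications.
    def required_sink_resistance(tdp_w, t_case_max_c, t_local_ambient_c):
        """Maximum allowable sink-to-air thermal resistance, degC/W."""
        return (t_case_max_c - t_local_ambient_c) / tdp_w

    # Example: 110 W worst-case TDP, assumed 85 degC case limit, 45 degC local air
    print(required_sink_resistance(110, 85, 45))  # ~0.36 degC/W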

Figure 6: An Edge Computing Server from ATOS Featuring the Xeon D-2187 from Intel’s D-2100 Family of Processors [12]

An example of a more traditional package for edge computing is the ATOS system shown in Figure 6. But, for less common packages, where airflow may be less, more elaborate approaches may be needed. For example, heat pipes may be needed to transport excess processor heat to another part of the system for dissipation.

One design uses a vapor chamber integrated with a heat sink. Vapor chambers are effectively flat heat pipes with very high thermal conductance and are especially useful for heat spreading. In edge hardware applications where there is a small hot spot on a processor, a vapor chamber attached to a heat sink can be an effective solution to conduct the heat off the chip.

Figure 7: Coca-Cola’s Freestyle Fountain, a Non-Traditional Edge Computing System, Features an Intel I7 CPU, DRAM, Touchscreen, WiFi and HiDef Display [13]

The Nvidia Jetson AGX Xavier is designed for edge computing applications such as logistics robots, factory systems, large industrial UAVs, and other autonomous machines that need high performance processing in an efficient package.

Figure 8: Nvidia’s Jetson AGX Xavier Produces Little Heat But Could Have Thermal Issues in Edge Computing Applications [14]

Nvidia has modularized the package, providing the needed supporting semiconductors and input/output ports. While it looks like it could generate a lot of heat, the module only produces 30 W and has an embedded thermal transfer plate. However, any edge computing deployment of this module, where it is embedded into an application, can face excess heat issues. A lack of system airflow, solar loading, or heat from nearby devices can negatively impact a module in an edge computing application.

Figure 9: Nvidia’s Development Kit for the Jetson AGX Xavier Includes Heat Sink and Heat Pipes [15]

Nvidia considers this in the development kit for this module. It has an integrated thermal management solution featuring a heat sink and heat pipes. Heat is transferred from the module’s embedded thermal transfer plate to the heat pipes and then to the heat sink that is part of the solution.

For a given edge computing application, a thermal solution might use heat pipes attached to a metal chassis to dissipate heat. Or it could combine a heat sink with an integrated vapor chamber. Studies by Glover et al. of Cisco have noted that for vapor chamber heat sinks, the thermal resistance value varies from 0.19°C/W to 0.23°C/W for 30 W of power. [16]

A prominent use case for edge computing is in the smart factory empowered by the Industrial Internet of Things (IIoT). As discussed, cloud computing has drawbacks due to latency, the reliability of communication connections, and the time needed for data to travel to the cloud, get processed, and return. Putting intelligence at the edge can solve many, if not all, of these potential issues. The Texas Instruments (TI) Sitara family of processors was purpose-built for these edge computing machine learning applications.

Figure 10: TI’s Sitara Processors are Designed for Edge Computing Machine Learning Applications [17]

Smart factories apply machine learning in different ways. One of these is training, where machine learning algorithms use computation methods to learn information directly from a set of data. Another is deployment. Once the algorithm learns, it applies that knowledge to finding patterns or inferring results from other data sets. The results can be better decisions about how a process in a factory is running.  TI’s Sitara family can execute a trained algorithm and make inferences from data sets at the network edge.

The TI Sitara AM57x devices were built to perform machine learning in edge computing applications including industrial robots, computer vision and optical inspection, predictive maintenance (PdM), sound classification and recognition of sound patterns, and tracking, identifying, and counting people and objects. [18,19]

This level of machine learning processing may seem like it would require sophisticated thermal management, but the level of thermal management required is really dictated by the use case. In development of its hardware, TI provides guidance with the implementation of a straight fin heat sink with thermal adhesive tape on its TMDSIDK574 AM574x Industrial Development Kit board.

Figure 11: TI TMDSIDK574 AM574x Industrial Development Kit [20]

While not likely an economical production product, it provides a solid platform for the development of many of the edge computing applications that are found in smart factories powered by IIoT. The straight fin heat sink with thermal tape is a reasonable recommendation for this kind of application.

Most edge computing applications will not include a lab bench or controlled prototype environment. They might involve hardware for machine vision (an application of computer vision).  An example of a core board that might be used for this kind of application is the Phytec phyCORE-AM57x. [21]

Figure 12: The Phytec phyCORE-AM57x Can Be used in Edge Computing Machine Vision Applications [22]

Machine vision being used in a harsh, extreme temperature industrial environment might require not just solid thermal management but physical protection as well.  Such a use case could call for thermal management with a chassis. An example is the Arrow SAM Car chassis developed to both cool and protect electronics used for controlling a car.

Figure 13: Chassis for Automotive Application that Protects Components and Provides Thermal Management [23]

Another packaging example from the SAM Car is the chassis shown below, which is used in a harsh IoT environment. This aluminum enclosure has cut outs and pockets connecting to the chips on the internal PCB.  The chassis acts as the heat sink and provides significant protection in harsh industrial environments.

Figure 14: Aluminum Chassis with Cut Outs and Pockets to the Enclosed PCB with Semiconductors [23]

Edge computing cabinetry is small in scale (e.g., fewer than 10 racks) but powerful in capability. It can be placed in nearly any environment and location to provide power, efficiency and reliability without the need for the support structure of a larger white space data center.

Figure 15: The Jetson TX2 Edge Computing Platform from Nvidia [24]

Still, racks used in edge cabinets can house high levels of processing power. The enclosure and/or certain components need a built-in, high-performance cooling system.

Hardware OEMs like Rittal build redundancy into edge systems. This lets other IT assets remain fully functional and operational, even if one device fails. Eliminating downtime of the line, preserving key data and rapid response all contribute to a healthier bottom line.

Although edge computing involves fewer racks, the equipment still needs reliable cooling. For edge computers located in remote locations, the availability of cooling resources may vary. Rittal provides both water- and refrigerant-based options. Refrigerant cooling provides flexible installation; water-based cooling brings the advantage of ambient-air assist for free cooling. [25]

Figure 16: LiquidCool Immersion Cooling Technology Eliminates the Need for Air Cooling

LiquidCool’s technology collects server waste heat inside a fluid system and transports it to an inexpensive remote outside heat exchanger. Or, the waste heat can be re-purposed. In one IT closet-based edge system, fluid-transported waste heat is used for heating an adjacent room. [26]

Green Revolution Cooling provides ICEtank turnkey data centers built inside ISO shipping containers for edge installations nearly anywhere. The ICEtank containers feature immersion cooling systems. Their ElectroSafe coolant protects against corrosion, and the system removes any need for chillers, CRACs (computer room ACs) and other powered cooling systems. [27]

A Summary Chart of Suggested Cooling for Edge Computing

The following chart summarizes air cooling options for Edge Computing applications:

Figure 17: Edge Computing Air Cooling Options Summary Chart

The Leading Edge

The edge computing marketplace is currently experiencing a period of unprecedented growth. Edge market revenues are predicted to expand to $6.72 billion by 2022 as they support a global IoT market expected to top $724 billion by 2023. The accumulation of IoT data, and the need to process it at local collection points, will continue to drive the deployment of edge computing. [28,29]

As more businesses and industries shift from enterprise to edge computing, they are bringing the IT network closer to speed up data communications. There are several benefits, including reduced data latency, increased real-time analysis, and resulting efficiencies in operations and data management. Much critical data also stays local, reducing security risks.

References

  1. https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
  2. https://www.inovex.de/blog/edge-computing-introduction/
  3. https://www.bloomberg.com/news/articles/2019-06-17/ai-needs-edge-computing-to-make-everyday-devices-smarter
  4. https://www.networkcomputing.com/networking/how-edge-computing-compares-cloud-computing
  5. https://medium.com/velotio-perspectives/a-beginners-guide-to-edge-computing-6cfea853aa11
  6. https://www.datacenterknowledge.com/edge-computing/searching-car-network-s-natural-edge
  7. https://www.wespeakiot.com/will-edge-computing-devour-cloud/
  8. https://www.designnews.com/automation-motion-control/edge-computing-emerges-megatrend-automation/27888481159634
  9. https://www.design-reuse.com/news/45423/xilinx-baidu-brain-edge-ai-edgeboard.html
  10. https://www.intel.com/content/www/us/en/products/docs/processors/xeon/d-2100-brief.html
  11. https://software.intel.com/en-us/articles/intel-xeon-processor-d-2100-product-family-technical-overview
  12. https://atos.net/en/2019/press-release/general-press-releases_2019_05_16/atos-launches-the-worlds-highest-performing-edge-computing-server
  13. https://venturebeat.com/2012/09/11/this-coke-machine-has-an-intel-core-i7-processor-and-it-can-take-your-picture/
  14. https://www.custompcreview.com/news/nvidia-announces-jetson-x2-edge-computing-platform/
  15. https://developer.nvidia.com/embedded/jetson-agx-xavier-developer-kit#resources
  16. Glover, G., Chen, Y., Luo, A., and Chu, H., “Thin Vapor Chamber Heat Sink and Embedded Heat Pipe Heat Sink Performance Evaluations,” 25th IEEE SEMI-THERM Symposium, San Jose, CA, USA, 2009.
  17. http://www.ti.com/tool/SITARA-MACHINE-LEARNING#descriptionArea
  18. https://www.mathworks.com/discovery/machine-learning.html
  19. http://www.ti.com/tool/SITARA-MACHINE-LEARNING#descriptionArea
  20. http://www.ti.com/tool/TMDSIDK574
  21. https://www.phytec.com/phytec-announces-a-new-system-on-module-som-based-on-the-new-sitara-am57x-processor-family-from-texas-instruments/
  22. http://processors.wiki.ti.com/index.php/File:PhyCORE-AM57x_SOM.jpg
  23. https://www.qats.com/cms/2017/10/09/ats-collaborates-sam-car-featured-cnbc-program-jay-lenos-garage/
  24. https://www.custompcreview.com/news/nvidia-announces-jetson-x2-edge-computing-platform/
  25. https://www.rittal.us/contents/edge-computing-and-uncontrolled-environments/
  26. https://www.liquidcoolsolutions.com/edge-server/#single/null
  27. https://www.grcooling.com/edge-computing/
  28. https://blog.apc.com/2019/05/15/four-reasons-configure-to-order-rack-pdus-edge-computing-environments/
  29. https://www.techrepublic.com/article/edge-computing-the-smart-persons-guide/

Thermal Performance of Heat Sinks with Heat Pipes or Vapor Chambers for Servers

Most blade servers for data and telecommunication systems use air to cool the high-power chips inside. As the power level of these chips keeps increasing, the pressure is on thermal engineers to design ever higher-performance air-cooled heat sinks. In recent years, advancements in the manufacturing of thinner heat pipes and vapor chambers have enabled engineers to integrate heat pipes and vapor chambers into blade server heat sinks.

A heat pipe is a device with high thermal conductance that can transport large amounts of heat with a small temperature difference between its hot and cold ends. The idea of a heat pipe was first proposed by Gaugler [1]. However, only after its invention by Grover [2, 3] in the early 1960s were the remarkable properties of heat pipes realized by scientists and engineers. Heat pipes are now widely used to transport heat from one location to another or to smooth the temperature distribution on a solid surface.

A heat pipe is a self-driven two-phase heat transfer device. A schematic view of a heat pipe is shown in Figure 1. At the hot section (evaporator), the liquid evaporates and turns to vapor. The vapor flows to the cold section (condenser) and liquefies there. The liquid is driven back from the cold section to the hot section by a capillary force induced by the wick structure. By using boiling and condensation, the heat pipes can transfer and spread the heat from one side to another side with a minimum temperature gradient.

Figure 1. Typical heat pipe. [4]

Vapor chambers are flat heat pipes with very high thermal conductance. They have flat surfaces on the top and bottom sides (see Figure 2) that can be easily attached to a heat source and a heat sink.

Figure 2. Vapor chamber. [5]

Just like heat pipes, vapor chambers use both boiling and condensation to maximize their heat transfer ability. A vapor chamber generally has a solid metal enclosure with a wick structure lining the inside walls. The inside of the enclosure is saturated with a working fluid at reduced pressure. As heat is applied at one side of the vapor chamber, the fluid at locations close to the heat source reaches its boiling temperature and vaporizes. The vapor then travels to the other side of the vapor chamber and condenses into liquid. The condensed fluid returns to the hot side via gravity or capillary action, ready to vaporize again and repeat the cycle.
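The reason such a thin device can carry so much heat is the latent heat of vaporization of the working fluid. The sketch below, which assumes water as the working fluid, shows how little fluid must circulate to move a typical chip-level heat load.

    # How much working fluid must evaporate to carry a given heat load?
    # Assumes water as the working fluid (h_fg ~ 2257 kJ/kg near 100 degC).
    H_FG_WATER = 2.257e6  # latent heat of vaporization, J/kg

    def evaporation_rate(heat_load_w, h_fg=H_FG_WATER):
        """Mass of fluid that must evaporate per second, in kg/s."""
        return heat_load_w / h_fg

    # Example: a 100 W load needs only ~0.044 g/s of circulating water
    print(evaporation_rate(100) * 1000, "g/s")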

In electronics cooling, heat pipes are generally used to move heat from electronics to heat dissipation devices. For example, in a desktop computer, multiple heat pipes are used to transfer heat from a CPU to an array of cooling fins, which dissipate the heat to the ambient environment through convection. Vapor chambers are generally used to spread heat from a small device to a larger heat sink, as shown in Figure 2. When used in server heat sinks, heat pipes and vapor chambers are both used to spread heat, because of the low profile and large footprint of those heat sinks.

Compared to copper heat spreaders, heat pipes and vapor chambers have the following merits.

First, they have a much higher effective thermal conductivity. Pure copper has a thermal conductivity of 401 W/m°C, and diamond, the best conductive material, has a thermal conductivity of 1000-2000 W/m°C. The effective thermal conductivity of a well-designed heat pipe or vapor chamber can exceed 5000 W/m°C, an order of magnitude higher than that of pure copper. Second, the density of a heat pipe or vapor chamber is much lower than that of copper. Due to their hollow structure, heat spreaders made with vapor chambers are much lighter than those made of solid copper. These properties make them ideal candidates for high-heat-flux and weight-sensitive heat spreading applications.
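To see what the higher effective conductivity buys, the sketch below compares the one-dimensional conduction temperature drop across a spreader of the same geometry for solid copper and for a vapor chamber. The geometry and heat load are assumed for illustration only.

    # 1-D conduction temperature drop across a heat spreader: dT = Q*L / (k*A).
    # The geometry and heat load below are assumed for illustration only.
    def conduction_dt(q_w, length_m, k_w_mk, area_m2):
        return q_w * length_m / (k_w_mk * area_m2)

    Q = 100.0            # W, heat spread laterally across the base
    L = 0.05             # m, spreading distance (assumed)
    A = 0.003 * 0.05     # m^2, cross-section of a 3 mm thick, 50 mm wide base

    print("copper:        %.1f degC" % conduction_dt(Q, L, 401, A))   # ~83 degC
    print("vapor chamber: %.1f degC" % conduction_dt(Q, L, 5000, A))  # ~7 degC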

Dynatron Corporation is an electronics cooling provider specializing in heat sinks for servers. This article compares the thermal performance of its server heat sinks, some of which have integrated vapor chambers. Figure 3 shows photos of two Dynatron 1U passive server heat sinks for Intel’s Sandy Bridge EP/EX processors. The R12 is made of pure copper with skived fins. The R19 has a vapor chamber base and stacked copper fins. The heat sink specifications are listed in Table 1. The R19 is 150 g lighter than the R12.

Figure 3. Dynatron passive heat sinks R12 (left) and R19 (right). [6]
Table 1. Dynatron passive heat sink specification.

Figures 4 and 5 show the thermal performance of R12 and R19 at different flow rates. At 10CFM, both heat sinks have a thermal resistance of 0.298ºC/W. When the flow rate increases to 20CFM, the R19’s thermal resistance is 0.218ºC/W, which is 0.020ºC/W lower than that of R12.

Figure 4. Dynatron R12 heat sink performance. [6]
Figure 5. Dynatron R19 heat sink performance. [6]
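In practical terms, a quick calculation shows what that 0.020ºC/W difference means in case temperature. The 130 W chip power below is assumed for illustration.

    # Temperature rise of the heat sink base above inlet air: dT = Q * R.
    # The 130 W chip power is assumed for illustration.
    def case_rise(power_w, r_sink_c_per_w):
        return power_w * r_sink_c_per_w

    print(case_rise(130, 0.238))  # R12 at 20 CFM -> ~30.9 degC
    print(case_rise(130, 0.218))  # R19 at 20 CFM -> ~28.3 degC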

Figure 6 shows photos of two Dynatron 1U active server heat sinks for Intel’s Sandy Bridge EP/EX processors. The R18 is made of copper with skived fins. The R16 has a vapor chamber base and stacked copper fins. Both heat sinks use the same blower. The heat sink specifications are listed in Table 2. The R16 is 90 g lighter than the R18.

Figure 6. Dynatron active heat sinks R18 (left) and R16 (right). [6]
Table 2. Dynatron active heat sink specification. [6]

Figures 7 and 8 show the thermal performance of the R18 and R16 at different blower speeds. At 3000 RPM, the R18 and R16 heat sinks have thermal resistances of 0.437ºC/W and 0.393ºC/W, respectively. When the blower speed increases to 6000 RPM, the R18’s thermal resistance is 0.268ºC/W and the R16’s is 0.241ºC/W. The R16 consistently outperforms the R18 across blower speeds, with a thermal resistance roughly 10% lower than that of the R18.

Figure 7. Dynatron R18 heat sink performance. [6]
Figure 8. Dynatron R16 heat sink performance. [6]

The comparison of the Dynatron heat sinks shows that heat sinks with vapor chambers have a slight thermal edge over their all-copper counterparts, even though they are lighter. This is true for both passive and active heat sinks.

Glover et al., for Cisco, have tested different heat sinks either with embedded heat pipes or vapor chambers for their servers and published their findings [7]. They tested five different heat sinks from different vendors, who utilized different manufacturing technologies to fabricate the heat sinks. The five heat sinks are similar in size: 152.4 x 101.6 x 12.7mm. Table 3 summarizes the physical attributes of these five heat sinks.

Table 3. Cisco tested heat sink specification. [7]

Figures 9-11 show the three vapor chamber heat sinks, which have different vapor chamber structures and fin designs. Heat sink A-1 is an extruded aluminum heat sink with a vapor chamber strip. The 40 mm wide vapor chamber strip is embedded in the center of the base. It is the lightest of the five tested heat sinks. Heat sinks B-1 and C-1 have full-base-size vapor chambers and aluminum zipper fins.

Figure 9. Heat sink with vapor chamber A-1. [7]
Figure 10. Heat sink with vapor chamber B-1. [7]
Figure 11. Heat sink with vapor chamber C-1. [7]

Figures 12 and 13 show the two heat sinks with embedded heat pipes. Heat sink C-2 has heat pipes embedded inside its aluminum base. It uses zipper fins and has a copper slug in the middle of the base. Heat sink D-1 has three flat heat pipes embedded in its base, and it has a copper plate as its base.

Figure 12. Heat sink with heat pipes C-2. [7]
Figure 13. Heat sink with heat pipes D-1. [7]

Glover et al. tested the five heat sinks at different mounting orientations and air velocities. Table 4 presents the summary results for the heat sinks at a 3 m/s approach air velocity. The tested heat sinks were mounted horizontally with heat sources underneath the heat sink bases.

Table 4. Heat Sink Performance at 3 m/s with horizontal mounting position and bottom heating. [7]

The C-1 heat sink has the lowest thermal resistance; thus, its values are used as the benchmark for the other heat sinks. The performance of the heat sinks is purely design dependent. For the vapor chamber heat sinks, the thermal resistance value varies from 0.19 to 0.23°C/W for 30 W of power. For the heat sinks with heat pipes, the C-2 heat sink has a thermal resistance of 0.23°C/W, which matches that of A-1 and B-1.

The D-1 heat sink has the highest thermal resistance, which is the result of inferior design and manufacture. However, the D-1 heat sink still has relatively low thermal resistance when it is compared to a regular heat sink without a heat pipe and vapor chamber.

Figure 14 shows the thermal resistance of the five heat sinks for 60W of input power at different air velocities. The C-1 heat sink performs best for all velocities and the D-1 heat sink’s performance is the worst.

Figure 14. Heat sink thermal resistance at 60 W. [7]

The pressure drop across the heat sink at different air velocities was also measured and the results were plotted in Figure 15. The B-1, C-1 and C-2 heat sinks have similar fin structures. Therefore, their pressure drop is similar, too. The pressure drop of the A-1 and D-1 heat sinks are similar and higher than the other heat sinks. This is because the A-1 heat sink has thicker fins and the D-1 heat sink has a thicker base.

Figure 15. Heat sink pressure drop. [7]

Because heat pipes and vapor chambers use capillary force to drive liquid back from the condensation section to the evaporation section, their thermal performance is sensitive to orientation. Glover et al. also investigated the effects of the mounting orientation on the performance of the five heat sinks. They found the effect of orientation is design dependent and is the result of both the wick structure and the construction of the entire heat sink assembly.

The heat sink specifications from Dynatron Corporation and the test results from Cisco show that server heat sinks with embedded heat pipes or vapor chambers have better thermal performance than their copper counterparts. The heat sinks with embedded heat pipes or vapor chambers are also lighter than pure copper heat sinks, which makes them more suitable for weight-sensitive applications. If the cost of such heat sinks is justified, they are definitely good candidates for server cooling applications.

References

  1. Gaugler, R. S., US Patent 2350348, Appl. 21 Dec, 1942. Published 6 Jun. 1944.
  2. Grover, G. M., US Patent 3229759. Filed 1963.
  3. Grover, G. M., Cotter, T. P., and Erickson, G. F., “Structure of Very High Thermal Conductance.” J. App. Phys., Vol. 35, P. 1990, 1964.
  4. http://www.lightstreamphotonics.com/technology.htm
  5. http://www.thermacore.co.uk/vapour-chamber
  6. http://www.dynatron-corp.com
  7. Glover, G., Chen, Y., Luo, A., and Chu, H., “Thin Vapor Chamber Heat Sink and Embedded Heat Pipe Heat Sink Performance Evaluations,” 25th IEEE SEMI-THERM Symposium, San Jose, CA, USA, 2009.

For more information about Advanced Thermal Solutions, Inc. (ATS) thermal management consulting and design services, visit https://www.qats.com/consulting or contact ATS at 781.769.2800 or ats-hq@qats.com.

Application of Air Jet Impingement for Cooling a 1U System

As their speeds increase, high-performance processors require more innovative cooling techniques for heat removal. One such technique, jet impingement, provides one of the highest heat transfer coefficients among cooling methods. This property of jet impingement has been used by Advanced Thermal Solutions, Inc. (ATS) in a 1-U server application. Jet impingement has also been applied to ATCA chassis.

Air Jet Impingement
Cooling high-powered 1U servers requires new thermal management techniques, including advanced air jet impingement. (Wikimedia Commons)

A Ujet-1000™ 1-U chassis made by ATS was assessed in the company’s thermal test lab. Four identical heat sinks were tested under conventional parallel flow and under ATS’ proprietary Therm-Jett™ impingement flow. The results showed a 20-40 percent improvement in the thermal performance of the heat sinks.

The Ujet-1000™ is a 1 to 2 kW 1-U chassis system (depending on component case and ambient temperature) designed for the most demanding telecom and server applications. Lab tests demonstrated that four heat sinks on four simulated chips located on a PCB achieved 0.16 to 0.18°C/W thermal resistance. The power dissipation of each simulated component was maintained at 200 W. In contrast, the thermal resistance of the same heat sinks with parallel flow, using the same fans, was almost 20 to 40% worse.

The ATS Therm-Jett™ technology uses a specially made duct with an impingement plate beneath it to create jet impingement on top of the components and heat sinks. The large increase in heat transfer coefficient leads to a significant reduction of thermal resistance compared to conventional 1-U systems. A Therm-Jett system can be built for any specific configuration. The impingement duct is less than 5 mm thick and is located in the chassis on top of the motherboard. In addition to the high heat transfer coefficient, fresh air is distributed to all heat sinks at inlet temperature.

In contrast, in conventional cooling systems, the upstream heat sinks and components receive cold air at inlet temperature, which gradually warms as it moves downstream. This rise in air temperature reduces the cooling effect of the air on downstream components.

Another advantage of Therm-Jett™ is that there is no need to make a special duct for each heat sink, freeing the motherboard for other components. Even with added ducts, other components such as memory cards, resistors and capacitors located upstream of the heat sinks on the PWB would deprive the heat sinks of flow at the most critical point, close to the base.

Figure 1 shows a CAD drawing of the system under test. The cooling is provided by eight 40-mm, high-capacity double fans located in the midsection of the 1-U chassis. The power to the heat sinks was provided by four heaters attached below them, dissipating 200 W each. The heat dissipation of the power supply was simulated by attaching a rectangular heater strip under the power supply, dissipating about 100 W. A U-shaped aluminum frame was located under the hard drives. The power of four hard drives was simulated by placing a rectangular heater under the U-shaped frame, dissipating about 80 W.

Four thermocouples were placed in holes at the center of the base of each heat sink, toward the downstream side. The holes were filled with thermal grease to minimize the interfacial resistance. Three thermocouples were attached to the aluminum U-shaped frame, and their average temperature was recorded as an approximate temperature of a real hard drive. One thermocouple was also attached to the base of the power supply to measure its approximate temperature. All temperature measurements were taken using J-type thermocouples.

Figure 1. Schematic of the Ujet-1000™ and the Therm-JETT™ cooling duct. (Advanced Thermal Solutions, Inc.)

Figure 2 shows the exploded view of the Ujet-1000™ chassis.

Figure 2. Exploded view of the Ujet-1000™ and the Therm-JETT™ cooling duct. (ATS)

Figure 3 shows a conventional cooling system for a 1-U system. In this system, the air flow from the fans is parallel to the heat sinks.

Figure 3. Conventional Cooling System in a 1-U Application. (ATS)

Results

Figure 4 shows the schematic set up of the conventional cooling system. In this configuration, eight (8) blowers move the air in parallel to the heat sink fins.

Figure 4. Configuration of conventional layout in a 1U system for temperature measurement. (ATS)

Figure 5 shows the implementation of an ATS Therm-Jet, which provides jet impingement on the same four heat sinks, and the location of impingement holes with respect to the heat sinks.

Figure 5. Configuration of an ATS Therm-JETT™ application in a 1U system for temperature measurement. (ATS)

Tables 1 and 2 show the experimental values obtained within a 1-U chassis made by ATS. The two sets of tests were done for both 12 and 6 volts to the fans. The thermal resistance data of all four heat sinks, hard drives and the power supply were obtained for both conventional cooling and jet impingement cases. The acoustic noise for each case was also recorded for comparison.
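The thermal resistance values in the tables follow directly from the measured temperatures and applied power. A minimal sketch of that data reduction, with illustrative numbers, is shown below.

    # Reduce a heat sink measurement to a thermal resistance: R = (T_base - T_inlet) / Q.
    # The temperatures below are illustrative, not values from the tables.
    def thermal_resistance(t_base_c, t_inlet_c, power_w):
        return (t_base_c - t_inlet_c) / power_w

    # Example: 200 W heater, 25 degC inlet air, 59 degC measured base temperature
    print(thermal_resistance(59.0, 25.0, 200.0))  # 0.17 degC/W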

The data shows an improvement of thermal resistance of 22% to 42% for the heat sinks from jet impingement as compared to conventional cooling. The power supply shows a 10% improvement in the thermal resistance.

The hard drive, though, shows a 20% degradation. This is because, with jet impingement, the pressure drop on the fans increases, consequently decreasing the flow through the system. However, in an actual system the percentage will be smaller, because the heat generated will be more volumetric than in the current setup, where heat is generated on the surface of the U-shaped aluminum piece located at the bottom of the hard drives.

In that case, the decrease of flow through the system will have less impact. Additionally, the increase in hard drive temperature is less than 2°C in this experiment, which is generally not large enough to be a concern.

Table 1. Experimental test results comparing conventional and Therm-JETT™ results with 12 volts to the fans.
Table 2. Experimental test results comparing conventional and Therm-JETT™ results with six volts to the fans.

The question might be raised as to whether the performance of the heat sinks could be improved if we removed the impingement duct, increased the heat sink height by the height of impingement duct and ducted the flow.

We analyzed this situation and found that the improvement would be at most 5%, if we assume that the heat sink is ducted and the pressure drop is the same in both short and tall versions. To study this problem in detail one must consider the fan curves instead of using a fixed volumetric flow rate. Interested readers will find an article in a previous issue of the ATS Qpedia Thermal eMagazine [1] with more information about this topic.

Table 3 shows the temperatures of the four heat sinks, the hard drive and the power supply. As mentioned earlier, the heat sinks were mounted on 200 W devices, the power supply was dissipating 100 W and the hard drives were dissipating 80 W. The results are shown for jet impingement and conventional parallel air flow over 23.5 mm tall heat sinks, and for ducted flow over 28.5 mm tall heat sinks. It can be seen that heat sink temperatures are significantly lower for jet impingement, even compared with a taller heat sink with ducted flow.

Table 3. Comparison of temperatures for jet impingement and parallel flow with 23.5 mm heat sinks and ducted flow with 28.5 mm tall heat sinks.

Table 4 shows the improvement in temperature of the four processors between the jet impingement and the two cases of 23.5 mm heat sink and the ducted 28.5 mm heat sinks. By comparing the results for 6 and 12 volts to the fans, it can be seen that at lower voltage the jet impingement temperature difference is even more than with higher voltage to the fans. This implies low pressure drop fans can significantly benefit from the application of jet impingement.

Table 4. Comparison of temperature improvement for jet impingement and parallel flow with 23.5 mm heat sinks and ducted flow for 28.5 mm tall heat sinks.

Figure 6 is a graphical representation of Table 4. The figure shows the significant temperature increases in the case of parallel flow and ducted flow for the heat sinks compared to jet impingement technology. Components (heat sinks) 2 and 3 are hotter than the upstream components because they are downstream and the approach air temperature is higher for a ducted flow. In the jet impingement mode, the impingement flow is at upstream temperature and therefore much cooler than the air received in ducted flow.

In impingement mode, there is another flow coming axially toward the components, called cross flow. It is the interaction of cross flow and impingement that causes the cooling of the component (heat sink).

Figure 6. Heat sink temperature increase of parallel and ducted flow compared to jet impingement cooling. (ATS)

It should be noted that the above experiment was done for a heat source that is the same size as the heat sink base; hence, the spreading resistance is zero because it is almost independent of the heat transfer coefficient. The spreading resistance can be added to the above numbers for other sizes of heat sources.

The same concept of jet impingement has been applied to simulated components in an ATCA chassis. The results will be published in subsequent Qpedia articles, and the improvement in the data is promising. Even though conventional air-cooling technology is fast approaching its thermodynamic limit, there is still considerable potential for air cooling, which will enable this technology to be used in the years to come.

Reference

1.  “Heat Sink Thermal Resistance as a Function of Height-Ducted Flow with Fan Curve,” Qpedia Thermal eMagazine, Advanced Thermal Solutions, Inc., January 2009.


For more information about Advanced Thermal Solutions, Inc. (ATS) thermal management consulting and design services, visit https://www.qats.com/consulting or contact ATS at 781.769.2800 or ats-hq@qats.com.

Meeting the thermal management requirements of high-performance servers

High-performance servers are devices specially designed to handle large computational loads, a huge amount of communication signals, fast data processing, etc. Due to their task-oriented nature, high-performance servers must have high reliability, interchangeability, compact size and good serviceability.

High-Performance Servers

To achieve high computational speed, high-performance servers generally have dozens of CPUs and memory modules. They also have dedicated data processing modules and control units to ensure seamless communication between CPUs and parallel data processing ability. To reach higher speeds, the power dissipation of the high-performance CPUs used in these servers has increased continuously over the past decade.

Cooling servers that dissipate dozens of kilowatts brings a unique challenge for thermal engineers. Dealing with the ever-growing heat flux in high-performance servers requires the cooperation of electrical, mechanical and system engineers. The job of removing the high heat flux from the CPUs to the ambient requires chip-level, board-level and cabinet-level solutions.

Wei [1] described Fujitsu’s thermal management advancements in its high-end UNIX server, the PRIMEPOWER 2500. The server cabinet is shown in Figure 1. Its dimensions are 180 cm × 107 cm × 179 cm (H × W × D), and it has a maximum power dissipation of 40 kW. The system configuration of the PRIMEPOWER 2500 is shown in Figures 2 and 3. It has 16 system boards and 2 input/output (I/O) boards installed vertically on two back-panel boards. The two back-panel boards are interconnected by six crossbars installed horizontally.

Figure 1. PRIMEPOWER 2500 Cabinet [1]
Figure 2. PRIMEPOWER 2500 System Configuration [1]
Figure 3. PRIMEPOWER 2500 System Board Unit [1]

To cool the electrical components inside PRIMEPOWER 2500, 48 200-mm-diameter fans are installed between the system board unit and the power supply unit. They provide forced air cooling for system boards and power supplies. In addition, six 140-mm-diameter fans are installed on one side of crossbar to cool the crossbar boards with a horizontal flow. The flow direction is shown in Figure 3. Each system board is 58 cm wide and 47 cm long.

There are eight CPU processors, 32 Dual In-Line Memory Modules, 15 system controller processors, and associated DC-DC converters on each system board. The combined power dissipation per system board is 1.6 kW at most.

Figure 4. PRIMEPOWER 2500 System Board [1]

Forced air-cooling technology is commonly used in computers, communication cabinets, and embedded systems, due to its simplicity, low cost and easy implementation. For high-performance servers, the increasing power density and constraints of air-cooling capability and air delivery capacity have pushed forced air cooling to its performance limit.

For high-power systems like the PRIMEPOWER 2500, a combination of good CPU design, optimized board layout, advanced thermal interface materials (TIMs), high-performance heat sinks and strong fans is needed to achieve the desired cooling.

The general approach to cooling a multi-board system is first to identify the hottest power component with the lowest temperature margin. For a high-performance server, that means the CPUs. When there are multiple CPUs on a system board, the CPU located downstream of the other CPUs generally has the highest temperature.

So, the thermal resistance requirement for this CPU is:

Rja,required = (Tj,max – Ta – ∆Ta) / qmax

where Tj,max is the allowed maximum junction temperature, Ta is the ambient temperature, ∆Ta is the air temperature rise due to preheating before the CPU, and qmax is the maximum CPU power.

The junction-to-air thermal resistance of the CPU is:

Rja = Rjc + RTIM + Rhs

where Rjc is the CPU junction-to-case thermal resistance, RTIM is the thermal resistance of the thermal interface material, and Rhs is the heat sink thermal resistance. To reduce the CPU junction temperature, it is critical to find effective ways to minimize Rjc, RTIM and Rhs, because any reduction in thermal resistance helps lower the junction temperature.
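Putting numbers into these relations shows how quickly the heat sink budget tightens. The values in the sketch below are assumed for illustration and are not PRIMEPOWER 2500 specifications.

    # Heat sink resistance budget for the hottest CPU on a board.
    # All numbers below are assumed for illustration only.
    T_J_MAX = 85.0     # degC, allowed maximum junction temperature
    T_AMBIENT = 35.0   # degC, cabinet inlet air
    DT_PREHEAT = 10.0  # degC, air temperature rise before reaching the CPU
    Q_MAX = 120.0      # W, maximum CPU power
    R_JC = 0.10        # degC/W, junction-to-case resistance
    R_TIM = 0.05       # degC/W, thermal interface material resistance

    r_ja_required = (T_J_MAX - T_AMBIENT - DT_PREHEAT) / Q_MAX
    r_hs_allowed = r_ja_required - R_JC - R_TIM
    print(r_ja_required)  # ~0.33 degC/W total junction-to-air budget
    print(r_hs_allowed)   # ~0.18 degC/W left for the heat sink itself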

The CPU package and heat sink module of PRIMEPOWER 2500 are shown in Figure 5. The CPU package has an integrated heat spreader (IHS) attached to the CPU chip. A high-performance TIM is used to bond the CPU chip and IHS together, see Figure 6. The heat sink module is mounted on the IHS with another TIM in between.

Figure 5. PRIMEPOWER 2500 CPU Package and Heat Sink Module [1]
Figure 6. CPU Package [1]

The TIM used between the CPU chip and the IHS is crucial to the CPU’s operation. It has two key functions: to conduct heat from the chip to the IHS and to reduce the CPU chip stress caused by the mismatch of the coefficients of thermal expansion (CTE) between the CPU chip and the IHS. Fujitsu developed a TIM made of an In-Ag composite solder for this application. The In-Ag composite has a low melting point and a high thermal conductivity. It is relatively soft, which helps absorb thermal stress between the chip and the IHS.

Wei [2] also investigated the impact of thermal conductivity on heat spreading performance. He found a diamond composite IHS (k=600 W/(mK)) would result in a lower temperature gradient across the chip and cooler hot spots, compared with aluminum nitride (k=200 W/(mK)) and copper (k=400 W/(mK)). The simulation results are shown in Figure 7.

Figure 7. Heat Spreader Material Comparison [2]

In high-performance servers like the PRIMEPOWER 2500, the thermal performance gains from optimizing the TIM and the IHS are small, because they make up only a small portion of the total thermal resistance. Heat sinks dissipate heat from the CPU to the air and have an important role in the thermal management of the server. In a server application, the heat sink needs to meet not only mechanical and thermal requirements, but also weight and volume restraints. Hence, heat pipes, vapor chambers and composite materials are widely used in high-performance server heat sinks.

Koide et al. [1] compared the thermal performance and weight of different heat sinks for server applications. The results are shown in Figure 8. They used the Cu-base/Al-fin heat sink as the benchmark. Compared with the Cu-base/Al-fin heat sink, the Cu-base/Cu-fin heat sink is 50% heavier and gains only 8% in performance.

If a heat pipe is used in the base, the heat sink weight can be reduced by 15% and the thermal performance increases by 10%. If a vapor chamber is embedded in the heat sink base, it reduces the heat sink weight by 20% and increases the heat sink performance by 20%.

Figure 8. Thermal Performance and Weight Comparison of Different Heat Sinks [1]
Figure 9. (a) USIII Heat Sink for Sun Fire 15K Server, (b) USIV Heat Sink for Sun Fire 25K [3]

Sun Microsystems’ high-performance Sun Fire 15K server uses the USIII heat sink to cool its 72 UltraSparc III (USIII) processors. In the Sun Fire 25K server, the CPUs are upgraded to the UltraSparc IV (USIV), which has a maximum power of 108 W. To cool the USIV processor, Xu and Follmer [3] designed a new USIV heat sink with a copper base and copper fins; see Figure 9. The old USIII heat sink has 17 forged aluminum fins, while the USIV heat sink has 33 copper fins. Both heat sinks have the same base dimensions and height.

Figure 10. Thermal Resistance Comparison between USIII Heat Sink and USIV Heat Sink [3]

Figure 10 shows the thermal resistance comparison between the USIII heat sink and the USIV heat sink. The thermal resistance of the USIV heat sink is almost 0.1°C/W lower than that of the USIII heat sink at medium and high flow rates, which is a huge gain in thermal performance. The thermal performance improvement of the USIV heat sink is not without penalty.

Figure 11. Pressure Drop Comparison between USIII Heat Sink and USIV Heat Sink [3]

Figure 11 shows the pressure drop comparison between the USIII heat sink and the USIV heat sink. For the same air flow rate, the pressure drop of the USIV heat sink is higher than that of the USIII heat sink. That means the Sun Fire 25K Server needs stronger fans and better flow arrangements to ensure the USIV heat sinks have adequate cooling flow.

The design of the cooling method in high-performance servers follows the same methodology used in designing cooling solutions for other electronic devices, but at an elevated scale. The main focus is to identify the hottest components, which in most cases are the CPUs. Due to the extremely high power of the CPUs and memory modules, the heat spreader, TIM and heat sinks must all be optimized to achieve the desired cooling in the server. The goal of thermal management is to find cost-effective ways to keep the junction temperature of the CPU below its specification and ensure the continuous operation of the server. Wei [1] has shown that a 40 kW server can be cooled by forced air cooling.

However, it requires a highly integrated design and the huge amount of airflow that the 54 fans inside the PRIMEPOWER 2500 can generate. In the near future, it will be very difficult for forced air cooling to handle cabinets with more than 60 kW of power. Doing so would require bigger fan trays to deliver enormous amounts of airflow and larger heat sinks to transfer heat from the CPUs to the air, which makes it nearly impossible to design a reliable, compact and cost-effective cooling system for the server.

We have to find alternative ways to deal with this problem. Other cooling methods, such as impinging air jets, liquid cooling and refrigeration systems, have the potential to dissipate more heat, but they will require innovative packaging to integrate them into the server system.

References:

  1. Wei, J., Thermal Management of Fujitsu’s High-performance Servers, source: http://www.fujitsu.com/downloads/MAG/vol43-1/paper14.pdf.
  2. Koide, M.; Fukuzono, K.; Yoshimura, H.; Sato, T.; Abe, K.; Fujisaki, H.; High-Performance Flip-Chip BGA Technology Based on Thin-Core and Coreless Package Substrates, Proceedings of 56th ECTC, San Diego, CA, USA, 2006, pp.1869-1873.
  3. Xu, G; Follmer, L.; Thermal Solution Development for High-end System, Proceedings of 21st IEEE SEMI-THERM Symposium, San Jose, CA, USA, 2005, pp. 109-115.

For more information about Advanced Thermal Solutions, Inc. (ATS) thermal management consulting and design services, visit https://www.qats.com/consulting or contact ATS at 781.769.2800 or ats-hq@qats.com.

Immersion Liquid Cooling of Servers in Data Centers

A data center is a large infrastructure used to house large quantities of electronic equipment, such as computer servers, telecommunications equipment and data storage systems. A data center requires uninterrupted power, communication and internet access for all the equipment inside. It also has a dedicated environmental control system that provides appropriate working conditions for the electronic devices hosted inside.

Immersion Cooling

Traditional data centers use cold air generated by computer room air conditioner (CRAC) units to cool the servers installed in the racks. Cooling electrical devices with cold air generated by an air conditioner is an easy method to implement. However, it is not a very efficient method in terms of power consumption.

The inefficiency of the method can be attributed to several causes. Generating and delivering cold air from a chiller to the servers involves multiple heat transfer steps, and losses such as the mixing of warm and cool air in the room reduce efficiency and increase the power consumed by cooling hardware such as chillers, computer room air conditioners (CRACs), fans, blowers and pumps.

Data center designers and operators have invented many ways to improve a data center’s thermal efficiency, such as optimizing the rack layout and air conditioner locations, separating cold aisles and hot aisles, optimizing the configuration of pipes and cables in the under-floor plenum, and introducing liquid cooling to high-power servers.

While the above methods can improve the data center heat load management, they cannot dramatically reduce the Power Usage Effectiveness (PUE), which is a measure of how efficiently a datacenter uses its power and is defined as the ratio of total datacenter power consumption to the IT equipment power consumption.

An ideal PUE is 1.0. A better way, proposed and used by some new data centers, is to bring outside cold air directly to the servers. This method can eliminate the computer room air conditioners (CRACs). To achieve this, the data center has to be located in an area where cold outside air is available year-round, and the servers have to tolerate a higher operating environmental temperature.
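As a quick illustration of the metric, the sketch below computes PUE from the component power draws; the power figures are made up for the example.

    # Power Usage Effectiveness: total facility power / IT equipment power.
    # The power figures below are made up for the example.
    def pue(it_power_kw, cooling_power_kw, other_overhead_kw):
        total = it_power_kw + cooling_power_kw + other_overhead_kw
        return total / it_power_kw

    print(pue(1000, 600, 100))  # 1.7, representative of a conventional air-cooled room
    print(pue(1000, 20, 10))    # 1.03, approaching the ideal of 1.0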

Another dramatic solution, proposed and used by some companies, is liquid immersion cooling of entire servers. Compared with traditional liquid cooling techniques, liquid immersion cooling uses a dielectric fluid as the working agent and an open bath design. This eliminates the need for hermetic connectors, pressure vessels, seals and clamshells. There are several different liquid immersion cooling methods.

This article will review the active single-phase immersion cooling technology proposed by Green Revolution Cooling (GRC) [1] and a passive two-phase immersion cooling technology proposed by the 3M Company [2].

Green Revolution Cooling has designed a liquid-filled rack to accommodate the traditional servers and developed dielectric mineral oil as the coolant. Figure 1 shows the liquid cooling racks with chiller and an inside view of a CarnotJet cooling rack from GRC. The racks are filled with 250 gallons of dielectric fluid, called GreenDEF™, which is a non-toxic, clear mineral oil with light viscosity.

Figure 1. Server racks and chiller (left) and inside view of the server rack. [1]

The servers are installed vertically into slots inside the rack and fully submerged in the liquid coolant. Pumps are used to circulate the cold coolant from the chiller to the rack. The coolant returns to the chiller, after removing heat from the servers. Because of its high heat capacity and thermal conductivity, the GreenDEF™ can cool the servers more efficiently than air.

The server racks are semi-open to the environment and the coolant level is constantly monitored by the system. Figure 2 shows a server motherboard being submerged in the liquid coolant inside a server rack from GRC.

Figure 2. A Server Motherboard Being Immersed in Liquid Coolant in A Server Rack. [1]

Intel has conducted a year-long test with immersion cooling equipment from Green Revolution Cooling in New Mexico [3]. They have found that the technology is highly efficient and safe for servers. In their tests, Intel tested two racks of identical servers – one using traditional air cooling and the other immersed in a Green Revolution enclosure. Over the course of a year, the submerged servers had a partial Power Usage Effectiveness (PUE) of 1.02 to 1.03, equaling some of the lowest efficiency ratings reported using that metric.

The 3M Company is also actively engaged in immersion cooling technology and has developed a passive two-phase immersion cooling system for servers. Figure 3 illustrates the concept of the immersion cooling system developed by 3M. In a specially designed server rack, servers are inserted vertically in the rack. The servers are immersed in 3M’s Novec engineered fluid, a non-conductive chemical with a low boiling point.

The elevated temperature of the electronic components on the server boards causes the Novec engineered fluid to boil. The evaporation of the fluid removes a large amount of heat from the heated components with a small temperature difference. The evaporated fluid travels to the upper portion of the server rack, where it condenses to liquid on the surface of a heat exchanger cooled by cold water. The condensed liquid flows back to the rack bath, driven by gravity. In 3M’s server rack, the liquid bath is also semi-open to the outside environment.

Because the cooling method is passive, there is no pump needed in the system.

Figure 3. Passive Two-phase Immersion Cooling System from 3M. [2]

By utilizing the large latent heat of the Novec engineered fluid during evaporation and condensation, the coolant can remove heat from the servers and dissipate it to the water heat exchanger with a small temperature gradient. To enhance boiling on the component surfaces, 3M invented a special coating for electronic chips inside the liquid bath. The boiling enhancement coating (BEC) is a 100 µm thick porous metallic material.

The application of the BEC is illustrated in Figure 4. The coating is directly applied to the integrated heat spreader (IHS) of the chip. Tuma [2] claimed that the coating can produce boiling heat transfer coefficients in excess of 100,000 W/m2-K, at heat fluxes exceeding 300,000 W/m2.
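Those figures imply a very small surface superheat. A quick check using the numbers cited from Tuma [2]:

    # Surface superheat implied by the reported boiling performance: dT = q'' / h.
    heat_flux = 300_000.0   # W/m^2, heat flux cited above
    h_boiling = 100_000.0   # W/m^2-K, boiling heat transfer coefficient cited above

    print(heat_flux / h_boiling)  # ~3 K between the chip surface and the fluid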

Figure 4. Application of boiling enhanced coating (BEC). [2]

In his paper, Tuma [2] discussed the economic and environmental merits of the passive two-phase immersion cooling technology for cooling data center equipment. He concluded that liquid immersion cooling can dramatically decrease the power consumption for cooling relative to traditional air-cooling methods. It can also simplify facility construction by reducing floor space requirements, eliminating the need for air cooling infrastructure such as plenum, air economizers, elevated ceilings etc.

Green Revolution Cooling and 3M have demonstrated the feasibility and applicability of using immersion cooling technology to cool servers in data centers. The main advantages of immersion liquid cooling are saving overall cooling energy and keeping component temperatures low and uniform. However, both immersion liquid cooling technologies require specially designed server racks. Specially formulated coolants are needed for both cooling technologies, too, and they are not cheap. For a traditional air-cooled data center, the air is free, abundant and easy to deliver.

In both immersion cooling technologies, the servers have to be installed vertically inside the server rack, which reduces the data center’s footprint usage efficiency. Because the liquid baths used in immersion cooling are open to the environment, coolant is gradually and inevitably lost to the ambient during long-term service.

The environmental impact of the discharge of a large amount of coolant by data centers has to be evaluated, too. The effect of the coolant on the connectors and materials used on the PCB is also not very clear.

Immersion liquid cooling is a very promising technology for cooling high-power servers. But there are still obstacles that need to be overcome before its large-scale application is assured.

References

  1. http://www.grcooling.com
  2. Tuma, E. P., “The Merits of Open Bath Immersion Cooling of Datacom Equipment,” 26th IEEE SEMI-THERM Symposium, Santa Clara, California, USA  2010.
  3. http://www.datacenterknowledge.com

For more information about Advanced Thermal Solutions, Inc. (ATS) thermal management consulting and design services, visit https://www.qats.com/consulting or contact ATS at 781.769.2800 or ats-hq@qats.com.