Category Archives: Thermal Design

Cooling Embedded AI Electronics

Embedded AI enables dedicated functions within larger systems. These AI chips power countless devices—robotic arms, smart thermostats, security cameras, medical instruments, drones, and vehicles—enhancing functionality and decision-making at the edge.

ChatGPT is one of the most visited websites in the world. Along with Gemini, Perplexity AI, Grok, and many others, online AI tools are increasingly popular and specialized. This is leading to more power-hungry AI data centers, where hundreds of thousands of GPU chips run at upwards of 1,000 watts each. [1]
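The scale of that heat load is easy to estimate from the figures above. A minimal sketch in Python; the 100,000-GPU cluster size is an illustrative assumption:

```python
# Rough estimate of aggregate GPU power in a large AI data center.
# Cluster size is illustrative; the ~1,000 W per-GPU figure is cited above.
num_gpus = 100_000
watts_per_gpu = 1_000

total_megawatts = num_gpus * watts_per_gpu / 1e6
print(f"GPU compute power alone: {total_megawatts:.0f} MW")
```

Every one of those megawatts ultimately becomes heat that the facility's cooling system must remove.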

But millions of lower power AI chips are running quietly in edge applications all around us.

In smart homes, embedded AI powers thermostats, voice/image recognition, and security. In factories, it drives automated quality control, predictive maintenance, and robotic assembly.

Figure 1 – Embedded AI Systems in Industry Provide Fast, Local Processing to Enhance Production and Safety. [2]

Using local AI inference, these systems make independent decisions, predict outcomes, and automate operations in real time. Connected via the Internet of Things (IoT), they share data and improve interoperability, making homes and factories smarter and more efficient.

AI Technologies in Embedded Systems

  • AI vs. ML: Artificial Intelligence (AI) includes deep learning that uses artificial neural networks to process unstructured data. Machine learning (ML), a subset of AI, focuses on training algorithms to learn from data and adapt over time.
  • Discriminative AI: Embedded systems typically use discriminative AI—optimized for data analysis and evaluation—requiring lower compute power than generative models.

Embedded AI Chips and Cooling Needs

AI processors and modules in embedded applications are not the high-powered versions found in data centers. For the latter, liquid cooling with constant monitoring is essential.

Figure 2 – Intel FPGAs Support Real-Time Deep Learning Inference for Embedded Systems and Data Centers. [4, 5]

Embedded AI processors often come in compact system-on-module (SOM) formats that include CPUs, memory, and specialized chips like GPUs or DSPs. These modules prioritize space efficiency and typically rely on air cooling—either passive or fan-assisted—rather than the liquid cooling found in high-wattage data centers.

Below are some popular AI processors and their approved heat sinks.

AMD Kria™ SOMs

The AMD Kria K24 SOM runs on as little as 2.5 watts and typically uses a passive (fan-less) heat sink. Its low power and compact size allow it to be installed close to the processes it manages, such as intelligent motor control. The more capable Kria K26 SOM supports higher-end tasks like machine vision and robotic planning and may require active cooling. [6]

Figure 3 – The AMD Kria K24 and K26 SOMs Can Be Used for Sophisticated Robotic Applications. The K24 Provides Intelligent Motor Control. The K26 Manages Complex Machine Vision. [6]

In the above robotics application, different heat sinks are available to cool the K24 and K26 SOMs. These come in varieties that provide different levels of air cooling and fit different available spaces. The K24 SOM can be cooled with a passive (fan-less) sink. Depending on its application, the K26 SOM may need an active heat sink. Examples of heat sinks for cooling the K26 SOM are shown below. [7]

Figure 4 – Fan-Assisted Heat Sinks, Like the Above ATS Model, May Be Needed for Cooling AMD Kria K26 System-on-Modules. In Some Applications, Passive (Fan-less) Heat Sinks Are Sufficient.

Figure 5 – Three Passive Heat Sinks Developed to Cool AMD Kria K24 SOMs. The Taller Finned Versions Provide More Cooling Performance but Need More Headroom and are Heavier. [8]

NVIDIA Jetson Modules

NVIDIA Jetson modules power a wide range of AI applications in embedded systems. These compact, powerful modules enable AI solutions in manufacturing, logistics, and healthcare, leveraging NVIDIA's GPU technology for accelerated AI computations.

In the Jetson module family, Orin systems are specifically engineered to provide high-speed support for a wide range of sensors, enabling seamless integration with various edge AI applications.

One of these, the Jetson AGX Orin series, uses just 15 to 75 watts depending on the specific module, workload, and external factors such as local temperature. These modules are designed for passive cooling in applications with sustained high operating temperatures, where fans could be affected by dust and debris. [9]
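Whether passive cooling suffices can be estimated from the required sink-to-ambient thermal resistance, R = (T_case,max − T_ambient) / P. A minimal sketch; the 85°C case limit and 45°C ambient are illustrative assumptions, not NVIDIA specifications:

```python
# Required heat-sink thermal resistance across the AGX Orin power range.
# R_sa = (T_case_max - T_ambient) / P; the temperatures are assumptions.
t_case_max_c = 85.0   # assumed maximum allowable case temperature, °C
t_ambient_c = 45.0    # assumed worst-case ambient temperature, °C

for power_w in (15.0, 75.0):  # AGX Orin power range cited above
    r_sa = (t_case_max_c - t_ambient_c) / power_w
    print(f"{power_w:>4.0f} W -> sink must achieve <= {r_sa:.2f} °C/W")
```

At 15 W a passive sink easily meets the roughly 2.7 °C/W target; at 75 W the roughly 0.5 °C/W requirement explains why higher-power modules need substantially larger fin areas or assisted airflow.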

Figure 6 – Top: NVIDIA's Jetson AGX Orin Module Features an AI Accelerator Chip and an Ampere-Architecture GPU in One Package. It Can Be Passively Cooled with a Specially-Designed, NVIDIA-Approved ATS Heat Sink. [9,10]

Bottom: The Many Uses of Orin Modules Include Embedding in Zipline Delivery Drones [11]

The Orin Nano, another Jetson module, is a small, powerful computer for embedded AI applications connected to the IoT. Its capabilities include deep learning, computer vision, graphics, and multimedia.

Figure 7 – Top: An NVIDIA Jetson Orin Nano Module and a Specially-Designed ATS Active Heat Sink. [12, 10] Bottom: Multiple Security Cameras and Sensors Feed Visual Data to an Orin Nano Module Whose AI Detects Unusual Activities. [13]

One application for Orin Nano modules is in security surveillance systems. Cameras and sensors are placed in strategic locations. The Orin Nano module processes their visual data, detecting unusual activities and triggering alerts when identified by the AI.

When Air Cooling Isn’t Enough

One exception to air cooling for embedded processors is in some smartphones. Tasked with performing ever more functions, including AI, their increasingly powerful chips require higher-performance cooling.

For example, Qualcomm Snapdragon 8-series chips, used in phones like the OnePlus 13, generate significant heat under heavy loads. Vapor chambers help dissipate that heat across a broader surface for effective cooling without active fans.

Figure 8 – Top: The Top-Rated OnePlus 13 Phone Features a Qualcomm Snapdragon 8 Elite Chip. Bottom: A Teardown Video Reveals the Vapor Chamber for Cooling the Snapdragon Chip. [14,15]

Embedded AI Efficiency

Embedded AI continues to gain ground due to its compact design, low latency, and localized processing. Its benefits include:

  • Reduced network load by transmitting processed insights rather than raw data
  • Lower system cost vs. cloud-based AI
  • Lower power consumption, enabling simpler and cheaper cooling solutions

With AI now embedded across sectors—from smart homes to drones to industrial robotics—thermal management solutions are evolving alongside to ensure performance and longevity.

References

  1. MIT Technology Review, https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
  2. GIGAIPC, https://www.gigaipc.com/en/solution-detail/Machine-Vision/
  3. Embedded, https://www.embedded.com/ai-efficiency-will-depend-on-model-size/
  4. Intel, https://www.intel.com/content/www/us/en/software/programmable/fpga-ai-suite/overview.html
  5. Mirabilis Design, https://www.mirabilisdesign.com/intel-fpga-neural-processor-ai/
  6. Electronic Design, https://www.electronicdesign.com/technologies/industrial/boards/video/21273991/a-look-inside-amds-kria-k24-system-on-module
  7. AMD, https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
  8. Advanced Thermal Solutions, Inc., https://www.qats.com/Heat-Sinks/Device-Specific-AMD-Kria-K26
  9. NVIDIA, https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/
  10. Advanced Thermal Solutions, Inc., https://www.qats.com/Heat-Sinks/Device-Specific-NVIDIA
  11. Things Embedded, https://things-embedded.com/us/nvidia-jetson/orin/agx/
  12. NVIDIA, https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/product-development/
  13. Prox PC, https://www.proxpc.com/blogs/case-studies-real-world-applications-of-nvidia-jetson-orin-nano
  14. Tom’s Guide, https://www.tomsguide.com/phones/oneplus-phones/oneplus-13-is-official-and-one-of-the-first-snapdragon-8-elite-powered-phones
  15. PBKreviews, https://www.youtube.com/watch?v=WqJq3-ngL2Q

Cooling AI Data Centers

How important are AI data centers? In just months, Elon Musk’s xAI team converted a factory outside Memphis into a cutting-edge, 100,000-GPU center for training the Colossus supercomputer—home to the Grok chatbot.

Initially powered by temporary gas turbines (later replaced by grid power), Colossus installed its first 100,000 chips in only 19 days, drawing praise from NVIDIA CEO Jensen Huang. Today, it operates 200,000 GPUs, with plans to reach 1 million GPUs by the end of 2025. [1]

Figure 1 – Elon Musk’s 1 Million Sq Ft xAI Colossus Supercomputer Facility near Memphis, TN. [1]

There are about 12,000 data centers throughout the world, nearly half of them in the United States. Now, more and more of these are being built or retrofitted for AI-specific workloads. Leaders include Musk’s xAI, Microsoft, Meta, Google, Amazon, OpenAI, and others.

High power is essential for such operations, and, as with computational electronics of all sizes, the resulting heat must be managed.

GenAI

A key driver of data center growth is generative AI (GenAI), which creates text, images, audio, video, and code using deep learning. Chatbots built on large language models, such as ChatGPT, are examples of GenAI, as are text-to-image models that generate images from written descriptions.

This workload is made possible by new generations of processors, mainly GPUs, which draw more power and generate more heat than their predecessors.

Figure 2 – Advanced AI Processor, the NVIDIA GH200 Grace Hopper Superchip with Integrated CPU to Increase Speed and Performance. [2,3]

AI data centers prioritize HPC hardware: GPUs, FPGAs, ASICs, and ultra-fast networking. Compared to CPUs (150–200 W), today's AI GPUs often run at more than 1,000 W. To handle massive datasets and complex computations in real time, they need significant power and cooling infrastructure.

Data Center Cooling Basics

Traditional HVAC was sufficient for older CPU-driven data centers. Today’s AI GPUs demand far more cooling, both at the chip level and facility-wide. This has propelled a need for more efficient thermal management systems at both the micro (server board and chip) and macro (server rack and facility) levels. [4]

Figure 3 – The Colossus AI Supercomputer Now Runs 200,000 GPUs. It Operates at 150 MW of Power, Equivalent to 80,000 Households. [5]

At Colossus, Supermicro 4U servers house NVIDIA Hopper GPUs cooled by:

  • Cold plates
  • Coolant distribution manifolds (1U between each server)
  • Coolant distribution units (CDUs) with redundant pumps at each rack base [6]

Each 4U server is equipped with eight NVIDIA H100 Tensor Core GPUs. Each rack contains eight 4U servers, totaling 64 GPUs per rack.

Between every server is a 1U manifold for liquid cooling. These manifolds connect to the CDUs, heat-exchanging coolant distribution units at the bottom of each rack that include a redundant pumping system. The choice of coolant is determined by a range of hardware and environmental factors.
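The required coolant flow per rack follows from the heat balance Q = ṁ·cp·ΔT. A minimal sketch; the 700 W per-GPU heat load and 10°C coolant temperature rise are illustrative assumptions, not Supermicro or xAI specifications:

```python
# Coolant flow needed per rack from Q = m_dot * cp * dT.
# 64 GPUs per rack as described above; the per-GPU load and coolant
# temperature rise below are assumptions for illustration.
gpus_per_rack = 64
watts_per_gpu = 700.0          # assumed H100-class heat load, W
q_watts = gpus_per_rack * watts_per_gpu

cp_water = 4186.0              # J/(kg·K), specific heat of water
delta_t = 10.0                 # assumed coolant temperature rise, K

m_dot = q_watts / (cp_water * delta_t)   # kg/s
liters_per_min = m_dot * 60.0            # water is ~1 kg per liter

print(f"Rack heat load: {q_watts / 1000:.1f} kW")
print(f"Required coolant flow: {liters_per_min:.0f} L/min")
```

Roughly 45 kW per rack and on the order of 60 L/min of coolant, which is why each rack carries its own CDU with redundant pumps.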

Figure 4 – Each Colossus Rack Contains Eight 4U Servers, Totaling 64 GPUs Per Rack. Between Each Server is a 1U Manifold for Liquid Cooling. [7]
Figure 5 – The Base of Each Rack Has a 4U CDU Pumping System with Redundant Liquid Cooling. [7]

Role of Cooling Fans

Fans remain essential for DIMMs, power supplies, controllers, and NICs.

Figure 6 – Rear Door Liquid-Cooled Heat Exchangers. [7]

At Colossus, fans in the servers pull cooler air from the front of the rack and exhaust it at the rear of the server. From there, the air is drawn through rear-door heat exchangers: liquid-cooled, finned radiators that lower the air temperature before it exits the rack.
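The air side can be sized the same way, using volumetric flow = Q / (ρ·cp·ΔT) for air. A minimal sketch; the 5 kW residual air-cooled load per rack and 15°C air temperature rise are illustrative assumptions:

```python
# Airflow needed to carry residual (non-liquid-cooled) heat out of a rack.
# The residual load and air temperature rise are assumed for illustration.
q_watts = 5000.0      # assumed air-cooled load (DIMMs, PSUs, NICs, etc.)
rho_air = 1.2         # kg/m^3, air density near sea level
cp_air = 1005.0       # J/(kg·K), specific heat of air
delta_t = 15.0        # assumed front-to-rear air temperature rise, K

vol_flow_m3s = q_watts / (rho_air * cp_air * delta_t)
cfm = vol_flow_m3s * 2118.88   # convert m^3/s to cubic feet per minute

print(f"Required airflow: {cfm:.0f} CFM")
```

A few hundred CFM per rack is well within reach of server fans, which is why air can still handle the components that liquid cooling does not touch.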

Direct-to-Chip Cooling

NVIDIA’s DGX H100 and H200 server systems feature eight GPUs and two CPUs that must run between 5°C and 30°C. An AI data center with a high rack density houses thousands of these systems performing HPC tasks at maximum load. Direct liquid cooling solutions are required.

Figure 7 – An NVIDIA DGX H100/H200 System Featuring Eight GPUs [8]
Figure 8 – The NVIDIA H100 SmartPlate Connects to a Liquid Cooling System to Bring Microconvective Chip-Level Cooling That Outperforms Air Cooling by 82%. [9]

Direct liquid cooling (cold plates contacting the GPU die) is the most effective method—outperforming air cooling by 82%. It is preferred for high-density deployments of the H100 or GH200.
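The case temperature achievable with a cold plate can be estimated as T_case ≈ T_coolant,in + P·R_cp. A minimal sketch; the 0.02 °C/W plate resistance, 30°C coolant supply, and 700 W load are illustrative assumptions, not measured values for any specific product:

```python
# Case temperature estimate for direct-to-chip liquid cooling:
# T_case ≈ T_coolant_in + P * R_coldplate
# All three inputs below are assumptions for illustration.
t_coolant_in = 30.0   # °C, assumed coolant supply temperature
p_gpu = 700.0         # W, assumed H100-class heat load
r_coldplate = 0.02    # °C/W, assumed cold-plate thermal resistance

t_case = t_coolant_in + p_gpu * r_coldplate
print(f"Estimated case temperature: {t_case:.0f} °C")
```

Holding the case in the mid-40s °C leaves generous margin below typical GPU thermal limits, a margin air cooling cannot match at these power densities.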

Scalable Cooling Modules

Colossus represents the world’s largest liquid-cooled AI cluster, using NVIDIA + Supermicro technology. For smaller AI data centers, Cooling Distribution Modules (CDMs) provide a compact, self-contained solution.

Figure 9 – The iCDM-X Cooling Distribution Module from ATS Includes Pumps, Heat Exchanger and Liquid Coolant for Managing Heat from AI GPUs and Other Components. [10]

Most AI data centers are smaller, with lower but still essential power and cooling needs. Many of their heat issues can be resolved using self-contained cooling distribution modules.

The compact iCDM-X cooling distribution module provides up to 1.6 MW of cooling for a wide range of AI GPUs and other chips. The module measures and logs all important liquid cooling parameters. It uses just 3 kW of power, and no external coolant is required.
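A useful figure of merit for such a module is the heat moved per watt of electrical input, a COP-like ratio that ignores facility-side heat rejection. Using the figures above:

```python
# Rough figure of merit for a coolant distribution module: heat moved
# per watt of electrical input. Figures from the module description above.
cooling_capacity_w = 1.6e6   # 1.6 MW of cooling
electrical_power_w = 3e3     # 3 kW to run pumps and monitoring

ratio = cooling_capacity_w / electrical_power_w
print(f"Heat moved per watt of input power: {ratio:.0f} W/W")
```

Moving more than 500 W of heat per watt consumed is possible because the module circulates and meters coolant rather than actively refrigerating it.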

These modules include:

  • Pumps
  • Heat exchangers
  • Cold plates
  • Digital monitoring (temp, pressure, flow)

Their only external components are one or more cold plates removing heat from the AI chips. ATS provides an industry-leading selection of custom and standard cold plates, including the high-performing ICEcrystal series.

Figure 10 – The ICEcrystal Cold Plate Series from ATS Provides 1.5 kW of Jet Impingement Liquid Cooling Directly onto AI Chip Hotspots.

Cooling Edge AI and Embedded Applications

AI isn’t just for big data centers—edge AI, robotics, and embedded systems (e.g., NVIDIA Jetson Orin, AMD Kria K26) use processors running under 100 W. These are effectively cooled with heat sinks and fan sinks from suppliers like Advanced Thermal Solutions. [11]

Figure 11 – High Performance Heat Sinks for NVIDIA and AMD AI Processors in Embedded and Edge Applications. [11]

NVIDIA also partners with Lenovo, whose 6th-gen Neptune cooling system enables full liquid cooling (fanless) across its ThinkSystem SC777 V4 servers—targeting enterprise deployments with NVIDIA Blackwell + GB200 GPUs. [12]

Figure 12 – Lenovo’s Neptune Direct Water Cooling Removes Heat from Power Supplies, for Completely Fanless Operation. [12]

Benefits gained from the Neptune system include:

  • Full system cooling (GPUs, CPUs, memory, I/O, storage, regulators)
  • Efficient for 10-trillion-parameter models
  • Improved performance, energy efficiency, and reliability

Conclusion

With surging demand, AI data centers are now a major construction focus. Historically, cooling problems are the #2 cause of data center downtime (behind power issues). Given the high power needed for AI computing, these builds must be planned carefully with their local communities regarding electricity demand, power sources, and water consumption. [13]

AI workloads are projected to increase U.S. data center power demand by 165% by 2030 (Goldman Sachs), nearly doubling 2022 levels (Newmark). Sustainable design and resource-conscious cooling are essential for the next wave of AI infrastructure. [14,15]

References

  1. The Guardian, https://www.theguardian.com/technology/2025/apr/24/elon-musk-xai-memphis
  2. Fibermall, https://www.fibermall.com/blog/gh200-nvidia.htm
  3. NVIDIA, https://resources.nvidia.com/en-us-grace-cpu/grace-hopper-superchip?ncid=no-ncid
  4. IDTechEx, https://www.idtechex.com/en/research-report/thermal-management-for-data-centers-2025-2035-technologies-markets-and-opportunities/1036
  5. Data Center Frontier, https://www.datacenterfrontier.com/machine-learning/article/55244139/the-colossus-ai-supercomputer-elon-musks-drive-toward-data-center-ai-technology-domination
  6. Supermicro, https://learn-more.supermicro.com/data-center-stories/how-supermicro-built-the-xai-colossus-supercomputer
  7. Serve The Home, https://www.servethehome.com/inside-100000-nvidia-gpu-xai-colossus-cluster-supermicro-helped-build-for-elon-musk/2/
  8. Naddod, https://www.naddod.com/blog/introduction-to-nvidia-dgx-h100-h200-system
  9. Flex, https://flex.com/resources/flex-and-jetcool-partner-to-develop-liquid-cooling-ready-servers-for-ai-and-high-density-workloads
  10. Advanced Thermal Solutions, Inc., https://www.qats.com/Products/Liquid-Cooling/iCDM
  11. Advanced Thermal Solutions, Inc., https://www.qats.com/Heat-Sinks/Device-Specific-Freescale
  12. Lenovo, https://www.lenovo.com/us/en/servers-storage/neptune/?orgRef=https%253A%252F%252Fwww.google.com%252F
  13. Deloitte, https://www2.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2025/genai-power-consumption-creates-need-for-more-sustainable-data-centers.html
  14. Goldman Sachs, https://www.goldmansachs.com/insights/articles/ai-to-drive-165-increase-in-data-center-power-demand-by-2030
  15. Newmark, https://www.nmrk.com/insights/market-report/2023-u-s-data-center-market-overview-market-clusters

Nanoparticles to Enhance the Thermal Management of Electronics

The addition of nanoparticles to a coolant is an alternative approach for improving the performance of a liquid-cooled system, or perhaps for further reducing its size. But nanoparticles are not necessarily well known to engineers engaged in thermal management. This list of materials may help.

First, a new paper by Moita, Moreira, and Pereira does an excellent job of reviewing nanofluids for the next generation of thermal management. It contributes to the body of knowledge in this space by examining typical nanoparticle/base-fluid mixtures and how they are combined in technical and functional solutions, covering both the science of nanofluids and their practical application. You can download this open-access paper (PDF) from the Multidisciplinary Digital Publishing Institute at this link: Nanofluids for the Next Generation Thermal Management of Electronics: A Review

Second, ATS was fortunate enough to have had on our research staff, Dr. Reza Azizian. He and others authored a white paper titled “Nanofluids in Electronics Cooling Applications”. This piece discusses the theory and use of nanofluids for thermal management. We’ve posted that paper on the ATS blog here: Nanofluids in Electronics Cooling Applications.

We hope you find these resources helpful. Like always, if you have trouble accessing them, drop us a comment and we’ll get you a copy.

Nanoparticle Shapes & Forms. Image used by permission from the artist normaals.

Integrate More Electronics in Less Space with ATS Integration, Chassis Design and Cooling Solutions

ATS has designed custom housings and chassis for a variety of products, including:

  • ATCA chassis with 4.5 kW cooling capability
  • Small enclosures such as set-top boxes, network interface units, and industrial and autonomous vehicle systems
  • High-capacity 1U and 2U chassis with integrated air jet impingement that pushes the air-cooling capacity of a 1U chassis to over 1.8 kW

ATS develops chassis and performs systems integration for a wide variety of electronics in datacomm, telecomm, autonomous vehicles, industrial IoT, and more.

Integration of the cooling system, whether liquid or air, has enabled ATS clients to get their products to market right the first time, with superb thermal performance at the right cost.

Where Can Thermal Solutions for Electronics Equipment Be Tested and Characterized? At ATS!

One of ATS' core principles is basing our solutions on data: from analytical modeling to CFD to manufacturing the thermal solution and then testing it in our labs. Our vertical integration allows for excellent quality control and reliable solutions. With our investment in six characterization labs, we take this seriously for ourselves and for our customers.

ATS runs six thermal characterization labs, featuring a unique selection of air velocity, air temperature, and air pressure measurement instruments, along with wind tunnels for thermal management research, testing, and analysis. Virtually any electronic system can be characterized.

==> Learn more on our web site here: https://www.qats.com/Consulting/Lab-Capabilities
==> Got questions about our labs and how we might help your next project? Email us at ats-hq@qats.com