By Norman Quesnel, Senior Member of Marketing Staff, Advanced Thermal Solutions, Inc. (ATS)
Liquid cooling systems transfer heat up to four times better than an equal mass of air, allowing higher-performance cooling from a smaller system. A liquid-cooled cold plate can replace space-consuming heat sinks and fans. While a liquid cold plate requires a pump, heat exchanger, tubing and plates, there are more placement choices for cold plates because they can sit outside the airflow.
One-time concerns over costs and leaking cold plates have greatly subsided with improved manufacturing capabilities. Today’s question isn’t “Should we use liquid cooling?” The question is “What kind of liquid should we use to help optimize performance?”
For liquid cold plates, the choice of working fluid is as important as the choice of hardware. The wrong liquid can lead to poor heat transfer, clogging, and even system failure. A proper heat transfer fluid should provide compatibility with the system’s metals, high thermal conductivity and specific heat, low viscosity, low freezing point, high flash point, low corrosivity, low toxicity, and thermal stability.
Today, despite many refinements in liquid cold plate designs, coolant options have stayed relatively limited. In many cases, regular water will do, but water-with-additives and other types of fluids are available and more appropriate for certain applications. Here is a look at these coolant choices and where they are best suited.
Basic Cooling Choices
While water provides superior cooling performance in a cold plate, it is not always practical to use because of its low freezing temperature. Additives such as glycol are often needed to change a coolant’s characteristics to better suit a cold plate’s operating environment.
In fact, temperature range requirements are the main consideration for a cold plate fluid. Some fluids freeze at lower temperatures than water, but have lower heat transfer capability. The selected fluid also must be compatible with the cold plate’s internal metals to limit any potential for corrosion.
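The heat transfer trade-off between these fluids can be sketched with the basic sizing relation Q = ṁ·cp·ΔT: a lower specific heat means more coolant flow for the same heat load. The specific heat values below are nominal room-temperature figures used only for illustration, not vendor data.

```python
# Rough sizing sketch: coolant mass flow needed to absorb a heat load Q with an
# allowable coolant temperature rise dT, from Q = m_dot * cp * dT.
# Specific heats are nominal approximations, not vendor specifications.
COOLANT_CP = {
    "water":     4186.0,  # J/(kg*K)
    "EGW 50/50": 3300.0,  # approximate
    "PGW 50/50": 3400.0,  # approximate
}

def mass_flow(q_watts, cp_j_per_kg_k, delta_t_k):
    """Mass flow in kg/s needed to carry q_watts at a coolant rise of delta_t_k."""
    return q_watts / (cp_j_per_kg_k * delta_t_k)

for name, cp in COOLANT_CP.items():
    print(f"{name}: {1000 * mass_flow(500.0, cp, 10.0):.1f} g/s")
```

For a 500 W load and a 10 K allowable rise, water needs the least flow, which is one reason it remains the default choice when freezing is not a concern.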
Table 1 below shows how the most common cold plate fluids match up to the metals in different cold plate designs.
Cold plate coolant choices obviously vary in their properties. Some of the differences between fluids are less relevant to optimizing cold plate performance, but many properties should be compared. Tables 2 and 3 show the properties of some common coolants.
An excellent review of common cold plate fluids is provided by Lytron, an OEM of cold plates and other cooling devices. The following condenses fluid descriptions taken from Lytron’s literature. 
The most commonly used coolants for liquid cooling applications today are:
Inhibited Glycol and Water Solutions
Water – Water has high heat capacity and thermal conductivity. It is compatible with copper, which is one of the best heat transfer materials to use for your fluid path. Facility water or tap water is likely to contain impurities that can cause corrosion in the liquid cooling loop and/or clog fluid channels. Therefore, using good quality water is recommended in order to minimize corrosion and optimize thermal performance. If you determine that your facility water or tap water contains a high percentage of minerals, salts, or other impurities, you can either filter the water or purchase filtered or deionized water. [5,6]
Deionized Water – The deionization process removes harmful minerals, salts, and other impurities that can cause corrosion or scale formation. Compared to tap water and most fluids, deionized water has a high resistivity. Deionized water is an excellent insulator, and is used in the manufacturing of electrical components where parts must be electrically isolated. However, as water’s resistivity increases, its corrosivity increases as well. When using deionized water in cold plates or heat exchangers, stainless steel tubing is recommended. [5, 7]
Inhibited Glycol and Water Solutions – The two types of glycol most commonly used for liquid cooling applications are ethylene glycol and water (EGW) and propylene glycol and water (PGW) solutions. Ethylene glycol has desirable thermal properties, including a high boiling point, low freezing point, stability over a wide range of temperatures, and high specific heat and thermal conductivity. It also has a low viscosity and, therefore, reduced pumping requirements. Although EGW has more desirable physical properties than PGW, PGW is used in applications where toxicity might be a concern. PGW is generally recognized as safe for use in food or food processing applications, and can also be used in enclosed spaces. [5, 8]
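The glycol fraction is normally picked from the lowest ambient temperature the loop must survive. The sketch below does this from a small lookup table; the freeze points are approximate handbook figures for EGW and should be treated as assumptions, not a substitute for a supplier's mixing chart.

```python
# Sketch: choosing an ethylene glycol fraction from the lowest expected ambient
# temperature. Freeze points are approximate handbook values (assumptions).
EG_FREEZE_C = [(0, 0), (10, -4), (20, -8), (30, -15), (40, -23), (50, -37), (60, -52)]

def eg_percent_for(min_ambient_c):
    """Smallest tabulated EG volume fraction whose freeze point covers min_ambient_c."""
    for percent, freeze_c in EG_FREEZE_C:
        if freeze_c <= min_ambient_c:
            return percent
    return None  # ambient is colder than the table covers

print(eg_percent_for(-20))
```

Because adding glycol lowers both specific heat and thermal conductivity, the usual practice is to use the smallest fraction that still protects against freezing.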
Dielectric Fluid – A dielectric fluid is non-conductive and therefore preferred over water when working with sensitive electronics. Perfluorinated carbons, such as 3M’s dielectric fluid Fluorinert™, are non-flammable, non-explosive, and thermally stable over a wide range of operating temperatures. Although deionized water is also non-conductive, Fluorinert™ is less corrosive than deionized water. However, it has a much lower thermal conductivity and much higher price. Polyalphaolefin (PAO) is a synthetic hydrocarbon used for its dielectric properties and wide range of operating temperatures. For example, the fire control radars on today’s jet fighters are liquid-cooled using PAO. For testing cold plates and heat exchangers that will use PAO as the heat transfer fluid, PAO-compatible recirculating chillers are available. Like perfluorinated carbons, PAO has much lower thermal conductivity than water. [5, 9]
Water, deionized water, glycol/water solutions, and dielectric fluids such as fluorocarbons and PAO are the heat transfer fluids most commonly used in high performance liquid cooling applications.
It is important to select a heat transfer fluid that is compatible with your fluid path, offers corrosion protection or minimal risk of corrosion, and meets your application’s specific requirements. With the right chemistry, your heat transfer fluid can provide very effective cooling for your liquid cooling loop.
Expanding the Internet of Things (IoT) into time-critical applications, such as autonomous vehicles, means finding ways to reduce data transfer latency. One such way, edge computing, places some computing as close to connected devices as possible. Edge computing pushes intelligence, processing power and communication capabilities from a network core to the network edge, and from an edge gateway or appliance directly into devices. The benefits include improved response times and better user experiences.
While cloud computing relies on data centers and communication bandwidth to process and analyze data, edge computing provides a means to lay some work off from centralized cloud computing by taking less compute intensive tasks to other components of the architecture, near where data is first collected. Edge computing works with IoT data collected from remote sensors, smartphones, tablets, and machines. This data must be analyzed and reported on in real time to be immediately actionable. 
In the above edge computing scheme, developed by Inovex, the layers are described as follows:

Cloud: On this layer, compute power and storage are virtually limitless, but latencies and the cost of data transport to this layer can be very high. In an edge computing application, the cloud can provide long-term storage and overall management.

Node: Edge nodes are located before the last mile of the network, also known as downstream. They are devices capable of routing network traffic and usually possess high compute power. The devices range from base stations, routers and switches to small-scale data centers.

Gateway: Edge gateways are like edge nodes but less powerful. They can speak most common protocols and manage computations that do not require specialized hardware, such as GPUs. Devices on this layer are often used to translate for devices on lower layers, or to provide a platform for lower-level devices such as mobile phones, cars, and various sensing systems, including cameras and motion sensors.

Device: This layer is home to small devices with very limited resources. Examples include single sensors and embedded systems. These devices are usually purpose-built for a single type of computation and often limited in their communication capabilities. Devices on this layer can include smart watches, traffic lights and environmental sensors.
Today, edge computing is becoming essential where time-to-result must be minimized, such as in smart cars. Bandwidth costs and latency make crunching data near its source more efficient, especially in complex systems like smart and autonomous vehicles that generate terabytes of telemetry data. 
Besides vehicles, edge computing examples serving the IoT include smart factories and homes, smartphones, tablets, sensor-generated input, robotics, automated machines on manufacturing floors, and distributed analytics servers used for localized computing and analytics.

Major technologies served by edge computing include wireless sensor networks, cooperative distributed peer-to-peer ad hoc networking and processing (also classifiable as local cloud/fog computing), distributed data storage and retrieval, autonomic self-healing networks, remote cloud services, and augmented and virtual reality.
Autonomous Vehicles and Smart Cars
New so-called autonomous vehicles carry enough computing hardware that they could be considered mobile data centers. They generate terabytes of data every day. A single vehicle running 14 to 16 hours a day creates 1-5 TB of raw data an hour and can produce up to 50 TB a day.
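A quick back-of-envelope calculation shows why that volume cannot simply be streamed to the cloud. The link speed below is an assumed figure chosen only to illustrate the scale of the problem.

```python
# Back-of-envelope sketch: time to upload one day's raw vehicle telemetry.
# The uplink speed is an assumed illustrative value.
def upload_days(data_tb, link_mbps):
    """Days needed to push data_tb terabytes through a link_mbps uplink."""
    bits = data_tb * 1e12 * 8
    return bits / (link_mbps * 1e6) / 86400

print(f"{upload_days(50, 100):.1f} days")  # 50 TB over a sustained 100 Mbit/s link
```

Even at a sustained 100 Mbit/s, one day of data would take over a month to upload, which is why most of the processing has to stay near the vehicle.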
A moving self-driving car, sending a live stream continuously to servers, could meet disaster while waiting for central cloud servers to process the data and respond back to it. Edge computing allows basic processing, like when to slow down or stop, to be done in the car itself. Edge computing eliminates the dangerous data latency.
Once an autonomous car is parked, nearby edge computing systems can provide added data for future trips. Processing this close to the source reduces the costs and delays associated with uploading to the cloud. Here, the processing does not occur in the vehicle itself.
Other Edge Computing Applications
Edge computing enables industrial and healthcare providers to bring visibility, control, and analytic insights to many parts of an infrastructure and its operations, from factory shop floors to hospital operating rooms, and from offshore oil platforms to electricity production.
Machine learning (ML) benefits greatly from edge computing. All the heavy-duty training of ML algorithms can be done in the cloud, and the trained model can be deployed at the edge for near real-time or true real-time inference.
For manufacturing uses, edge computing devices can translate data from proprietary systems to the cloud. The capability of edge technology to perform analytics and optimization locally provides faster responses for more dynamic applications, such as adjusting line speeds and product accumulation to balance the line.
Edge Computing Hardware
Processing power at the edge needs to be matched to the application and the power available to drive the edge system. If machine vision, machine learning and other AI technologies are deployed, significant processing power is necessary. If an application is more modest, such as digital signage, the processing power may be somewhat less.
Intel’s Xeon D-2100 processor is made to support edge computing. It is a lower-power, system-on-chip version of a Xeon cloud/data server processor, with a thermal design power (TDP) of 60-110W. It can run the same instruction set as traditional Intel server chips, but takes that instruction set to the edge of the network. Typical edge applications for the Xeon D-2100 include multi-access edge computing (MEC), virtual reality/augmented reality, autonomous driving and wireless base stations.
Thermal management of the D-2100 edge-focused processor is largely determined by the overall mechanical package the edge application takes. For example, if the application is a traditional 1U server with sufficient airflow into the package, a commercial off-the-shelf copper or aluminum heat sink should provide sufficient cooling.
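The TDP translates directly into a heat sink budget: the allowable case-to-ambient thermal resistance is the available temperature headroom divided by the dissipated power. The case limit and ambient temperature below are assumed example values, not Intel specifications.

```python
# Sketch: the heat sink budget implied by a TDP. The case temperature limit and
# local ambient used here are assumed example values, not Intel specifications.
def max_sink_resistance(t_case_max_c, t_ambient_c, power_w):
    """Largest allowable case-to-ambient thermal resistance in C/W."""
    return (t_case_max_c - t_ambient_c) / power_w

for tdp in (60, 110):
    print(f"{tdp} W: {max_sink_resistance(85.0, 45.0, tdp):.2f} C/W")
```

The higher the TDP, the tighter the resistance budget, which is what pushes the larger D-2100 SKUs toward copper sinks, ducted airflow or heat pipes.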
An example of a more traditional package for edge computing is the ATOS system shown in Figure 6. But for less common packages, where airflow may be limited, more elaborate approaches may be needed. For example, heat pipes may be needed to transport excess processor heat to another part of the system for dissipation.
One design uses a vapor chamber integrated with a heat sink. Vapor chambers are effectively flat heat pipes with very high thermal conductance and are especially useful for heat spreading. In edge hardware applications where there is a small hot spot on a processor, a vapor chamber attached to a heat sink can be an effective solution to conduct the heat off the chip.
The Nvidia Jetson AGX Xavier is designed for edge computing applications such as logistics robots, factory systems, large industrial UAVs, and other autonomous machines that need high performance processing in an efficient package.
Nvidia has modularized the package, providing the needed supporting semiconductors and input/output ports. While it looks as if it could generate a lot of heat, the module only produces 30W and has an embedded thermal transfer plate. However, any edge computing deployment of this module, where it is embedded into an application, can face excess heat issues. A lack of system air, solar loading, or heat from nearby devices can negatively impact a module in an edge computing application.
Nvidia considers this in its development kit for the module, which has an integrated thermal management solution featuring a heat sink and heat pipes. Heat is transferred from the module’s embedded thermal transfer plate to the heat pipes and then to the heat sink.
For a given edge computing application, a thermal solution might use heat pipes attached to a metal chassis to dissipate heat, or it could combine a heat sink with an integrated vapor chamber. Studies by Glover et al. from Cisco have noted that for vapor chamber heat sinks, the thermal resistance value varies from 0.19°C/W to 0.23°C/W at 30W of power.
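Those resistance figures are easy to sanity-check: the processor-to-air temperature rise is simply the thermal resistance times the power.

```python
# Quick check on the Cisco figures above: the temperature rise implied by a
# thermal resistance R at power P is dT = R * P.
def temp_rise(r_c_per_w, power_w):
    """Temperature rise in C for a given thermal resistance and power."""
    return r_c_per_w * power_w

print(f"{temp_rise(0.19, 30):.1f} to {temp_rise(0.23, 30):.1f} C at 30 W")
```

So at 30 W, a vapor chamber heat sink in that resistance range holds the processor only about 6 to 7 degrees above the local air temperature.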
A prominent use case for edge computing is in the smart factory empowered by the Industrial Internet of Things (IIoT). As discussed, cloud computing has drawbacks: latency, reliability of the communication connections, and the time for data to travel to the cloud, be processed and return. Putting intelligence at the edge can solve many if not all of these potential issues. The Texas Instruments (TI) Sitara family of processors was purpose-built for these edge computing machine learning applications.
Smart factories apply machine learning in different ways. One of these is training, where machine learning algorithms use computational methods to learn information directly from a set of data. Another is deployment. Once the algorithm learns, it applies that knowledge to finding patterns or inferring results from other data sets. The results can be better decisions about how a process in a factory is running. TI’s Sitara family can execute a trained algorithm and make inferences from data sets at the network edge.
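The train-in-the-cloud, infer-at-the-edge split described above can be sketched in a few lines. Here a trivial threshold "model" on vibration readings stands in for a real trained network; all names and values are hypothetical, chosen only to show the division of labor.

```python
# Minimal sketch of the split described above: heavy training runs in the cloud,
# only the trained artifact is deployed at the edge. The threshold "model" and
# its data are hypothetical stand-ins for a real network.
def train(samples):
    """Cloud side: learn an anomaly threshold from labeled (reading, label) pairs."""
    normal = [reading for reading, label in samples if label == "ok"]
    return max(normal) * 1.1  # 10% margin above the largest normal reading

def infer(threshold, reading):
    """Edge side: near real-time decision using only the deployed threshold."""
    return "fault" if reading > threshold else "ok"

model = train([(0.8, "ok"), (1.0, "ok"), (3.5, "fault")])
print(infer(model, 0.9), infer(model, 2.4))
```

Only `infer` has to run on the edge device, which is why modest processors like the Sitara family can serve applications whose training required a data center.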
The TI Sitara AM57x devices were built to perform machine
learning in edge computing applications including industrial robots, computer
vision and optical inspection, predictive maintenance (PdM), sound
classification and recognition of sound patterns, and tracking, identifying,
and counting people and objects. [18,19]
This level of machine learning processing may seem like it would require sophisticated thermal management, but the level of thermal management required is really dictated by the use case. In development of its hardware, TI provides guidance with the implementation of a straight fin heat sink with thermal adhesive tape on its TMDSIDK574 AM574x Industrial Development Kit board.
While not likely an economical production product, it provides a solid platform for developing many of the edge computing applications found in smart factories powered by IIoT. The straight fin heat sink with thermal tape is a reasonable recommendation for this kind of development work.
Most edge computing applications will not include a lab bench or controlled prototype environment. They might involve hardware for machine vision (an application of computer vision). An example of a core board that might be used for this kind of application is the Phytec phyCORE-AM57x. 
Machine vision being used in a harsh, extreme temperature industrial environment might require not just solid thermal management but physical protection as well. Such a use case could call for thermal management with a chassis. An example is the Arrow SAM Car chassis developed to both cool and protect electronics used for controlling a car.
Another packaging example from the SAM Car is the chassis shown below, which is used in a harsh IoT environment. This aluminum enclosure has cut outs and pockets connecting to the chips on the internal PCB. The chassis acts as the heat sink and provides significant protection in harsh industrial environments.
Edge computing cabinetry is small in scale (e.g. less than 10 racks), but powerful in information. It can be placed in nearly any environment and location to provide power, efficiency and reliability without the need for the support structure of a larger white space data center.
Still, racks used in edge cabinets can use high levels of processing power. The enclosure and/or certain components need a built-in, high-performance cooling system.
Hardware OEMs like Rittal build redundancy into edge systems. This lets other IT assets remain fully functional and operational, even if one device fails. Eliminating downtime of the line, preserving key data and rapid response all contribute to a healthier bottom line.
Although edge computing involves fewer racks, the data needs vital cooling protection. For edge computers located in remote locations, the availability of cooling resources may vary. Rittal provides both water- and refrigerant-based options. Refrigerant cooling provides flexible installation, while water-based cooling brings the advantage of ambient-air assist for free cooling.
LiquidCool’s technology collects server waste heat inside a fluid system and transports it to an inexpensive remote outside heat exchanger. Or, the waste heat can be re-purposed. In one IT closet-based edge system, fluid-transported waste heat is used for heating an adjacent room.
Green Revolution Cooling provides ICEtank turnkey data centers built inside ISO shipping containers for edge installations nearly anywhere. The ICEtank containers feature immersion cooling systems. Their ElectroSafe coolant protects against corrosion, and the system removes any need for chillers, CRACs (computer room ACs) and other powered cooling systems. 
A Summary Chart of Suggested Cooling for Edge Computing
The following chart summarizes air cooling options for Edge Computing applications:
The Leading Edge
The edge computing marketplace is currently experiencing a period of unprecedented growth. Edge market revenues are predicted to expand to $6.72 billion by 2022 as they support a global IoT market expected to top $724 billion by 2023. The accumulation of IoT data, and the need to process it at local collection points, will continue to drive the deployment of edge computing. [28,29]
As more businesses and industries shift from enterprise to edge computing, they are bringing the IT network closer to speed up data communications. There are several benefits, including reduced data latency, increased real-time analysis, and resulting efficiencies in operations and data management. Much critical data also stays local, reducing security risks.
Heat pipes are commonly used for cooling electronics by transporting heat from one location to another. They may be part of a system that cools a certain very hot component, but they are typically used, in multiples, to bring cooling to electronic assemblies. Here are some common attachment methods used when assembling heat pipe-based cooling applications.
First, we look at a cooling system where several heat pipes are integrated with a series of cooling metal fins. As shown, the fins may be mechanically press fit over the heat pipes resulting in a structure like that in Figure 1.
At this finned end of the assembly the heat transfers from pipe to fins where it dissipates to the air. These fins are typically stamped from sheet metal and the holes stamped through as well. When they’re properly sized, the fins press fit tightly on the raised heat pipes. The heat transfer is normally very good. To optimize thermal transfer, the fins can be soldered to the pipes, but press fitting into tight holes should provide more than sufficient performance.
The other ends of these heat pipes are soldered into grooves in an aluminum plate (Figure 2). Because the plate is aluminum and the heat pipes are copper, the aluminum must be nickel-plated before soldering. Solder paste is then added into the grooves and the heat pipes are inserted.
The solder paste is usually a low-temperature paste, typically based on tin-bismuth alloys with a melt temperature of about 138°C. That’s important because you really can’t bring the heat pipes to more than 250°C, or else the water in the heat pipes will boil and the heat pipes will burst. So, during the assembly process you put the solder paste into the grooves, insert the heat pipes, and then clamp the assembly with some sort of fixture to maintain the contact.
Then the whole assembly will go through an oven to reflow the solder paste. The reflow oven will precisely control the temperature of the air inside and will also have some kind of circulating fan so that the part heats evenly and quickly. Temperature control in the oven is critical to avoid exceeding the max temperature of the heat pipes. Other reflow methods for heating up an assembly might include a soldering iron, torch or hot air gun. But these methods can be risky and difficult. It is difficult to heat the part evenly and to control the temperature that the heat pipe is being exposed to.
In a prototype environment you might turn to an epoxy for attaching heat pipes to assemblies. There are a number of thermally conductive epoxies available, with thermal conductivities ranging from 1 to 6 W/mK. When a heat pipe is epoxied into an assembly, the bond line is so thin that it doesn’t make much of a temperature difference, even compared to solder. There might be a few degrees’ difference, which is usually acceptable in a prototype when you’re in testing mode and aware of it. That’s easily calculated from the specs on the epoxy.
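That calculation is one-dimensional conduction across the bond line: ΔT = q·t/(k·A). The dimensions and conductivity below are illustrative assumptions, not measured values from any specific epoxy.

```python
# Sketch of the calculation mentioned above: the conduction drop across an epoxy
# bond line is dT = q * t / (k * A). All values here are illustrative assumptions.
def bond_line_dt(power_w, thickness_m, k_w_per_mk, area_m2):
    """Temperature drop across a bond line of the given thickness and area."""
    return power_w * thickness_m / (k_w_per_mk * area_m2)

# 25 W through a 0.1 mm layer of 3 W/mK epoxy under a 6 mm x 50 mm contact patch
print(f"{bond_line_dt(25.0, 0.1e-3, 3.0, 0.006 * 0.050):.1f} C")
```

A drop of a few degrees, consistent with the text: acceptable for a prototype, and easy to re-run with the thickness and conductivity from the epoxy datasheet.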
To begin the epoxying process, first either mix your epoxy or use a mixing tube. Apply a thin layer into the groove and then insert the heat pipe. The grooves shown here are for heat pipes that are pre-bent and fit very precisely. Once the pipes are in place, a flat plate goes on top and is clamped down during the epoxy curing period.
In the example here, the epoxy cures at room temperature. Once the heat pipes are in and clamped down, the assembly can be conveniently left at room temperature for the epoxy to cure. To shorten the time, the assembly can go into an oven at a higher temperature, not a soldering temperature, but still hot enough to accelerate the cure.
When embedding heat pipes into a surface, a good practice is to machine the grooves slightly deeper than the heat pipes. Then you can create a fixture that is like a negative of the plate, with raised areas where the heat pipes sit. Such a fixture presses the heat pipes down into the grooves. After they’re epoxied or soldered in, the heat pipes and base will be at the same height for optimum thermal contact.
In this kind of application, flat heat pipes should be used, as they maximize the surface contact area with hot components. In applications where the components do not come in direct contact with the pipe, it’s often easier to use round heat pipes, because round heat pipes are easier to bend and have slightly better thermal performance than flat ones. So whenever possible we use round heat pipes, but when they are embedded into a surface and contact the components, we use flat heat pipes.
ATS heat sinks include our extrusions, maxiFLOW heat sinks, and the maxiGRIP and superGRIP heat sink clip attachments. ATS will also discuss its thermal management design services and how our team in India can work with companies to create specific solutions for even the hottest or coldest thermal management challenge.
Applications needing much higher processing power than from today’s most powerful supercomputers will very likely have to run on quantum computers. These devices have the potential to solve complex problems in seconds that would take a conventional computer millions of years to complete. While relatively few quantum computers are in regular use, their applications include simulating the behavior of matter down to the molecular level.
Quantum computers are now powering advances in materials science, cryptography, transportation and other areas. Auto manufacturers use quantum computing to simulate the chemical composition of electric vehicle batteries to help find new ways to improve performance. And pharmaceutical companies are leveraging them to analyze and compare compounds that could lead to the creation of new drugs.
Airbus uses quantum computing devices to help calculate the most fuel-efficient ascent and descent paths for aircraft. Volkswagen has unveiled a service that calculates the optimal routes for buses and taxis to minimize congestion. Some researchers expect quantum machines to accelerate artificial intelligence.
But, along with the unusual appearance of quantum computer hardware, the technology faces several strong challenges. These include cost, physical and environmental requirements, and reliability. And, of course, just understanding how quantum computing works can be hard to grasp.
Qubits and Superposition
The secret to a quantum computer’s power lies in its ability to generate and manipulate quantum bits, or qubits. Qubits are typically subatomic particles such as electrons or photons working together to act as computer memory and a processor.
Unlike a classical bit, which can only be in the state corresponding to 0 or the state corresponding to 1, a qubit may have multiple values at the same time. Until they are read out (i.e. measured), qubits can exist in an indeterminate state where it is not known whether they’ll be measured as a 0 or a 1. This is a fundamental principle of quantum mechanics referred to as superposition. It means a quantum computer can theoretically perform two tests simultaneously. Add more qubits and this computational power increases.
This quantum superposition quality in qubits can be explained by flipping a coin. The coin will land in one of two states: heads or tails. This is the way bits are observed in binary computers. But when the coin is still spinning in the air, and a side – or state – can’t be observed, the coin can be considered to be in both states at the same time. Essentially, until the coin lands it must be considered both heads and tails. Because a quantum computer can contain these multiple states simultaneously, it has the potential to be millions of times more powerful than today’s most powerful supercomputers. 
Or, per the Association for Computing Machinery: A quantum degree of freedom, such as the spin of an electron or the polarization of a photon, can exist concurrently in a weighted mixture of two states. For example, a set of 50 qubits can represent all 2^50 (∼10^15) combinations of the individual states. Manipulations of this assembly can be viewed as simultaneously performing a quadrillion calculations.
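The arithmetic behind that quote is just exponential growth: describing n qubits classically requires 2^n amplitudes.

```python
# Arithmetic behind the ACM quote above: n qubits require 2**n amplitudes,
# one per combination of individual qubit states.
n_qubits = 50
amplitudes = 2 ** n_qubits
print(amplitudes)  # 1125899906842624, about 1.1e15
```

Each added qubit doubles that count, which is also why simulating even modest quantum machines on classical hardware quickly becomes infeasible.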
Entanglement and Decoherence
Engineers can generate pairs of qubits that are entangled. This means both members of the pair are in a single quantum state. And these entangled, common-state qubits affect each other instantly when measured, no matter how far apart they are. Any change made to one particle instantly influences the state of the other in a predictable way.
While entanglement remains mysterious, it is the key to the power of quantum computers. In a conventional computer, doubling the number of bits doubles its processing power. But with entanglement, adding extra qubits to a quantum machine exponentially increases its number-crunching abilities. The quantum computer process harnesses entangled qubits in a kind of quantum daisy chain. As a result, they can speed up calculations using specially designed quantum algorithms.
Unfortunately, entangled qubits don’t maintain their entangled state, known as coherence, for very long. This makes using them quite tricky. Quantum computers are programmed using sequences of logic gates of various kinds, but their programs need to run quickly enough that the qubits don’t lose coherence before they’re measured. When that occurs, it is called decoherence. 
What leads to decoherence? Qubits need to operate under extremely specific conditions that prevent them from interacting with outside influences, such as stray heat. Their quantum state is extremely fragile. The slightest vibration or change in temperature, disturbances known as “noise” in quantum-speak, will cause them to decohere.
In the process of quantum computing, decoherence technically happens when something outside the computer performs an unidentified measurement on a qubit. This introduces an unwanted element of uncertainty or randomness into a quantum computer. Basically, there is no way to predict the result of another measurement. 
Among the causes of decoherence are vibrations, temperature fluctuations, electromagnetic waves and other interactions with the outside environment. The effects can include the loss of the computer’s exotic quantum properties. In fact, the issue of decoherence, how it negatively affects correct calculations, and its prevention are among the biggest obstacles to wider use of quantum computing.
While competing technologies and competing architectures are attacking these problems, no existing hardware platform can maintain coherence and provide the robust error correction required for large-scale computation. A breakthrough is probably several years away. 
Noise and Cooling
Qubits must be shielded from all external noise, since the slightest interference will destroy their two state superposition, resulting in calculation errors. Well-isolated qubits heat up easily, so keeping them cool is a challenge. Also, unlike in a classical computer, qubits must start in their low-temperature ground states to run an algorithm. Qubits heat up during calculations, so running several quantum algorithms one after the other means the cooling method must be able to do its job very quickly.
At extremely cold temperatures, atoms and molecules simply move around less. Generally speaking, the lower a temperature is the more stable a molecule becomes. Less movement means less energy being expelled. At a molecular level, that means that less energy is flying around, and consequently (since voltage and energy are directly related) less volatility in the voltage. This in turn means there is less of a chance that something outside of a human’s control will cause a qubit’s voltage to spike, causing the qubit to flip from one quantum state to another. Thus, keeping the computer cold introduces less energy into the system. This minimizes the chances of qubits incorrectly flipping in between quantum states. 
Researchers are trying to protect qubits from the outside world using supercooling and vacuums. But despite their efforts, noise still causes lots of errors to creep into calculations. Smart quantum algorithms can compensate for some of these, and adding more qubits also helps. However, it will likely take thousands of standard qubits to create a single, highly reliable one, known as a “logical” qubit. This will sap a lot of a quantum computer’s computational capacity.
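The claim that adding more qubits helps can be illustrated with the classical intuition behind redundancy: a simple repetition code with majority voting. Real quantum error correction is far more involved (it cannot simply copy quantum states), so the sketch below is only an analogy, with an assumed independent per-qubit flip probability.

```python
# Toy repetition-code illustration: the probability that a majority of
# n independent copies flip falls rapidly as n grows.
from math import comb

def logical_error(p, n):
    """P(majority of n independent copies flip), for odd n."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n // 2) + 1, n + 1))

p = 0.01                           # assumed per-qubit flip probability
for n in (1, 3, 7, 15):
    print(f"n = {n:>2}: logical error = {logical_error(p, n):.2e}")
```

Each added layer of redundancy buys a steep drop in the effective error rate, which is the same trade-off that makes a reliable "logical" qubit so expensive in physical qubits.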
Cooling the quantum computer chip to near absolute zero helps suppress thermal noise and stabilize the motion of qubits, making them more controllable and reducing their interaction with each other.
As described above, in quantum computing, sub-atomic particles must be as close as possible to a stationary state to be measured. Quantum computer maker D-Wave keeps core temperatures at -460°F (-273°C), about 0.02 degrees above absolute zero. D-Wave uses liquid helium as the coolant for its refrigeration. The D-Wave refrigerators are dry dilution refrigerators, meaning the liquid helium circulates in a closed-cycle system. Pulse tube technology recycles and re-condenses the liquid helium.
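The temperatures quoted for quantum hardware mix Fahrenheit, Celsius and kelvin, so a couple of small conversion helpers make the figures easy to check (absolute zero is 0 K = -273.15°C = -459.67°F):

```python
# Temperature-scale conversions for checking the quoted figures.

def c_to_k(c):
    return c + 273.15

def f_to_c(f):
    return (f - 32.0) * 5.0 / 9.0

def f_to_k(f):
    return c_to_k(f_to_c(f))

def k_to_f(k):
    return (k - 273.15) * 9.0 / 5.0 + 32.0

print(f"absolute zero: {f_to_k(-459.67):.2f} K")
print(f"0.020 K above absolute zero = {k_to_f(0.020):.3f} degF")
```

Running the conversions shows how tiny the margin is: 0.02 K sits only a few hundredths of a degree Fahrenheit above absolute zero.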
In IBM’s 50-qubit computer (see Figure 5), the system gradually cools from 4 kelvin (liquid-helium temperature) to 800 millikelvin, 100 millikelvin and, finally, 10 millikelvin. Inside the canister, that is 10 thousandths of a degree above absolute zero. The wires, meanwhile, carry RF-frequency signals down to the chip. These are then mapped onto the qubits, executing whatever program the research team wishes to run. The wiring is also designed to ensure that no extraneous noise, including heat, is transported to the quantum computer chip at the bottom.
Researchers at Aalto University in Finland have built a tiny nanoscale refrigerator to keep qubits cold enough to function. It is the first standalone cooling device for a quantum circuit. Basically, it tunnels single electrons through a 2 nm insulator. The tunneling electrons take energy from the quantum hardware, cooling it down. The circuit has an energy gap dividing two channels: a superconducting fast lane, where electrons move along with zero resistance, and a slow resistive (non-superconducting) lane.
Only electrons with enough energy to jump across the gap can get to the superconductor lane; the rest stay in the slow lane. An electron falling just short of having enough energy to make the jump can get a boost by capturing a photon from a nearby resonator – a device that can function as a qubit.
As a result of the photon losses, the resonator gradually cools down. Over time this has a selective chilling effect on the electrons as well: the hotter electrons jump the gap, while the cooler ones are left behind. The process removes heat from the system, much like how a refrigerator functions. 
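The selective effect, in which hotter electrons cross the gap while cooler ones stay behind, works like evaporative cooling. The toy Monte Carlo below sketches that idea under loose assumptions: electron energies drawn from a thermal (exponential) distribution and an illustrative gap value, neither taken from the Aalto device's actual parameters.

```python
# Toy evaporative-cooling sketch: electrons above an energy gap escape,
# lowering the mean energy of those left behind. All units are arbitrary.
import random

random.seed(42)
KT = 1.0       # thermal energy scale (arbitrary units, assumed)
GAP = 1.5      # energy gap (arbitrary units, assumed)

energies = [random.expovariate(1.0 / KT) for _ in range(100_000)]
remaining = [e for e in energies if e < GAP]   # hot electrons escape

mean_before = sum(energies) / len(energies)
mean_after = sum(remaining) / len(remaining)
print(f"mean energy before: {mean_before:.3f}")
print(f"mean energy after:  {mean_after:.3f}")
```

Removing only the high-energy tail always lowers the average energy of the remaining population, which is the essence of the chilling effect described above.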
One day, such aggressive and exotic cooling methods may not be needed. Researchers at the University of Pennsylvania demonstrated a new hardware platform based on isolated electron spins in a two-dimensional material. The electrons are trapped by defects in sheets of hexagonal boron nitride, a one-atom-thick semiconductor material, and the researchers were able to optically detect the system’s quantum states.
Quantum technology like this may someday be built from other materials. One promising system involves electron spins in diamonds: these spins are also trapped at defects in diamond’s regular crystalline pattern where carbon atoms are missing or replaced by other elements. The defects act like isolated atoms or molecules, and they interact with light in a way that enables their spin to be measured and used as a qubit.
These systems are attractive for quantum technology because they can operate at room temperature, unlike other prototypes based on ultra-cold superconductors or ions trapped in vacuum. [15,16]
The point at which a quantum computer can complete a mathematical calculation that is demonstrably beyond the reach of even the most powerful supercomputer is referred to as quantum supremacy. Reaching this is likely to take many more qubits than have been put into use so far. And it will very likely require some sophisticated cooling technology to keep these qubits functioning properly. 