Category Archives: Thermal Science

Thermal Management: They Hope The Problem Will Go Away On Its Own

“Oftentimes the thermal management is a second thought or an afterthought, and a lot of managers hope it goes away by itself. Then, at the end of the program development, they do the testing, find that it fails, panic, and need to call someone right away.” That’s a quote from Dr. Kaveh Azar of Advanced Thermal Solutions during his live interview on the importance of thermal management.

That interview, conducted by Rich Nass, one of the electronics industry’s leading journalists, can be heard on their web site here or by clicking the YouTube link below.

Why Use Research Quality Instruments?

The life expectancy of most products is estimated at some point prior to their introduction. Reliability analyses are an integral part of the design cycle of a product. In all reliability calculations, temperature is the key driver. The predicted life span from these calculations is often the deciding factor for introducing the product or investing more resources in redesign.

The questions that linger are: to what level of accuracy can we determine the temperature magnitude, and what is the impact of temperature uncertainty on the predicted reliability (i.e., the expected life of the product)?

When a system is operating, it incessantly experiences temperature and power cycling. Such fluctuations, resulting from system design and operation or from complex thermal transport in electronic systems, create large bandwidths in temperature response. Whether it happens in the course of an analysis or of compliance/stress testing, we often overlook the accuracy with which temperature is measured or calculated. Yet to truly obtain an adequate measure of a system’s reliability in the field, such temperature data is essential.

The CLWT-115 wind tunnel produces warm air flows for thermal studies

To demonstrate the impact of temperature on reliability, consider the two models commonly used in practice. The Arrhenius model [1], often jokingly referred to as “Erroneous,” is perhaps the most broadly used model in the field. Equation 1 shows the reaction rate (failure rate) k and the acceleration factor AT, where A is a pre-exponential constant, kB is the Boltzmann constant (8.617 × 10⁻⁵ eV/K), and Ea is the activation energy. All temperatures are in Kelvin. Activation energy depends on the failure mechanism and the materials (for example, 0.3–0.5 eV for oxide defects, and 1.0 eV for contamination).

k = A·exp(−Ea/(kB·T)),   AT = exp[(Ea/kB)·(1/T1 − 1/T2)]   [1]

The second model, Eyring, often jokingly referred to as “More Erroneous,” is shown by Equation 2.

k = A·T·exp(−Ea/(kB·T)),   AT = (T2/T1)·exp[(Ea/kB)·(1/T1 − 1/T2)]   [2]

The data show that the uncertainty band is between 7% and 51%. These numbers are alarming by themselves, yet they are commonly encountered in the field. In either case, stand-alone or device-in-system, accurately determining the temperature or air velocity in a highly three-dimensional thermal transport environment is not a task to be treated casually.

To measure the impact of such uncertainty on the reliability prediction, it is best to calculate its effect on the acceleration factor AT.

Let us consider the case when:

T1 = 40°C

T2 = 150°C

Ea = 0.4 eV

kB = 8.6 × 10⁻⁵ eV/K

This results in AT = 48. Now, let us impose a 10% and 35% uncertainty on the temperature measurement of T2. Table 1 shows the result of this error on the acceleration factor.

Measured T2 (°C)     AT      Error in AT
150 (correct)        48      0%
165 (10% error)      ~69     ~45%
202 (35% error)      ~158    ~230%

Table 1. Impact of temperature measurement error on the acceleration factor AT (T1 = 40°C, Ea = 0.4 eV).

Table 1 clearly demonstrates how a small degree of uncertainty in temperature measurement can distort the acceleration factor and, thus, the reliability predictions in which AT is used. The first row shows the correct temperature. The second row shows the result of a 10% error in temperature measurement (i.e., 165°C instead of 150°C). The last row shows the impact of a 35% error (i.e., 202°C vs. the 150°C the device is actually experiencing). The end result of this error in measurement is a roughly 230% error in the acceleration factor.
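For readers who want to reproduce these numbers, here is a minimal Python sketch that evaluates the Arrhenius acceleration factor of Equation 1 for the values above and for the two error cases; the function name and script structure are illustrative, not part of the original article.

```python
import math

KB = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(t1_c, t2_c, ea_ev):
    """Acceleration factor AT = exp[(Ea/kB)(1/T1 - 1/T2)], inputs in deg C."""
    t1, t2 = t1_c + 273.15, t2_c + 273.15  # convert to Kelvin
    return math.exp((ea_ev / KB) * (1.0 / t1 - 1.0 / t2))

baseline = arrhenius_af(40.0, 150.0, 0.4)  # correct T2: AT is roughly 48
print(f"AT (correct T2) = {baseline:.0f}")

for err in (0.10, 0.35):  # 10% and 35% errors in the measured T2
    measured_t2 = 150.0 * (1.0 + err)  # 165 C and about 202 C
    af = arrhenius_af(40.0, measured_t2, 0.4)
    print(f"T2 = {measured_t2:.0f} C -> AT = {af:.0f} "
          f"({100.0 * (af - baseline) / baseline:+.0f}% error in AT)")
```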

One may think such an error is rare, but the contrary is true! In a simple device-case-temperature measurement, the temperature gradient can exceed 20°C from the die to the edge of the device, and the air temperature variation in a channel formed by two PCBs can exceed 30°C. Of course, there are variations due to geometry, material, and power dissipation in any electronics system. If we add to these the effects of improperly designed instruments, the combination of physical variation and instrument error could certainly be detrimental to a product’s launch.

Longevity and life cycle in the market are keys to a product’s success. Therefore, to determine system performance, a reliability analysis must be performed. Since time is of the essence, and first-to-market is advantageous, the quickest reliability prediction models (and analyses in general) will continue to be popular. To make such models meaningful, the use of Equations 1 and 2, or others, must be grounded in accurate component and fluid temperature data. Measurement is heavily relied upon to determine temperature and air velocity. It is imperative to employ instruments designed for use in electronics systems with the highest level of accuracy and repeatability. High-grade instruments with quality output will enhance the reliability of the product you are working on.

SUMMARY

Small errors in temperature and air flow measurements can have a significant effect on reliability predictions. The origin of these errors lies in the measurement process or in the use of inaccurate instruments. The former depends on the knowledge base of the experimenter; that is why a good experimentalist must be an even better analyst. You must know where to measure and what variations exist in the field of measurement. Electronics system environments are notorious for such variations: in one square centimeter of air flow passage between two PCBs, temperature variations can exceed 30°C. Therefore, measurement practices and instrument selection must address these changes and not introduce further errors through inferior design. Likewise, an instrument’s construction and calibration should not introduce additional errors. Accurate, high-quality instruments are not only essential for sound engineering practice; their absence will adversely impact the reliability predictions of the product at hand. No company wants to have its products returned, especially because of thermally induced failures.

References:

1. Klinger, D., Nakada, Y., and Menendez, M., AT&T Reliability Manual, Van Nostrand Reinhold, 1990.

2. Azar, K., The Effect of Uncertainty Analysis on Temperature Prediction, Therminic Conference, 2002.

Testing Thermal Interface Materials

Illustration: Parker Chomerics

Thermal interface materials (TIMs) provide the thermal pathway for transferring heat from components to heat sinks. At one time, most TIMs were simple, homogeneous pads filled with thermally conductive fillers. But the increasing power levels of processors and other components create a continuous need for improved thermal material performance. Today, a much wider range of TIMs is available, including phase change materials, compounds, and gap fillers.

When choosing a TIM, it’s essential to understand the testing methods used to accurately determine the material’s bulk thermal properties and its performance.

The most common test is ASTM D5470, the linear rod method, which is the standard for measuring the thermal impedance of a TIM. Heat flow is carefully controlled through a test sample of the TIM. Typically, a heater is attached to an aluminum cylinder that has thermocouples arranged in series along its length.

The thermocouples not only report temperature but also indicate the heat transfer through the aluminum cylinder, whose properties are known. Next, the interface material is compressed between the raised cylinder and an identical lower unit. Finally, a cold plate is attached to the bottom of the assembly to set the direction of heat transfer. The assembly can accommodate various material thicknesses and apply a range of pressures to the sample.
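As a rough illustration of the D5470-style data reduction just described, the Python sketch below applies Fourier’s law to hypothetical thermocouple readings. The rod conductivity, fixture geometry, and the simplification that the thermocouples nearest the sample sit at its faces are all illustrative assumptions, not values from the standard.

```python
# Hypothetical ASTM D5470-style data reduction (illustrative values only).
K_AL = 167.0    # aluminum rod thermal conductivity, W/(m*K) -- assumed alloy
AREA = 6.45e-4  # rod cross-sectional area, m^2 (1 in^2) -- assumed fixture
DX = 0.010      # spacing between in-rod thermocouples, m -- assumed

# Thermocouple readings in deg C along the upper (hot) and lower (cold) rods.
upper = [65.0, 61.0, 57.0]
lower = [41.0, 37.0, 33.0]

# Fourier's law in the metered rod gives the heat flow through the stack.
gradient = (upper[0] - upper[-1]) / (2 * DX)  # temperature gradient, K/m
q = K_AL * AREA * gradient                    # heat flow, W

# Simplification: assume the thermocouples nearest the sample sit at its faces.
delta_t = upper[-1] - lower[0]    # temperature drop across the TIM, K
impedance = delta_t * AREA / q    # thermal impedance, K*m^2/W
print(f"Q = {q:.1f} W, impedance = {impedance * 1e4:.2f} C*cm^2/W")
```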

Another TIM test is laser flash diffusivity. Here, a small sample of interface material is subjected to a short pulse of laser energy. The temperature rise of the material is then recorded at a very high sample rate. Diffusivity is calculated using Equation 1 below, and thermal conductivity then follows from Equation 2.

D = 0.1388·L²/t½   [1]

k = D·ρ·Cp   [2]

Where:

k = thermal conductivity;

D = thermal diffusivity;

L = thickness of the sample;

t½ = halftime;

ρ = density of the sample;

and Cp = specific heat.

The halftime t½ is defined as the time from the start of the laser pulse to when the temperature of the back side of the sample has risen to half of its maximum value. The thickness L may be directly measured. Once diffusivity is known from Equation 1, it can be used in Equation 2 to calculate the thermal conductivity.
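A minimal Python sketch of this two-step reduction might look as follows; the thickness, halftime, density, and specific heat values are placeholders you would replace with measured data.

```python
# Laser flash reduction: Equation 1 (Parker relation), then Equation 2.
L = 0.5e-3      # sample thickness, m (0.5 mm) -- measured directly
T_HALF = 0.020  # halftime of the rear-face temperature rise, s -- measured
RHO = 2200.0    # sample density, kg/m^3 -- assumed placeholder
CP = 900.0      # specific heat, J/(kg*K) -- assumed placeholder

D = 0.1388 * L**2 / T_HALF  # thermal diffusivity, m^2/s (Equation 1)
k = D * RHO * CP            # thermal conductivity, W/(m*K) (Equation 2)
print(f"D = {D:.3e} m^2/s, k = {k:.2f} W/(m*K)")
```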

This laser flash method is very accurate as long as the density and specific heat are well known. However, it measures only bulk thermal conductivity, whereas the ASTM standard also measures thermal impedance. Thus, a key drawback of laser flash testing is that it does not capture contact resistance.

Comparisons of interface materials must be carried out by the user to provide meaningful results. Interface material testing procedures differ from heat sink testing methods. When testing several heat sinks, it is possible to affix a thermocouple to the component’s case surface or to the heat sink itself and draw direct comparisons of performance. However, this approach will not work if the interface material is changed. To accurately compare interface materials, die-level temperature measurements must be taken while the same heat sink is used under identical PCB and flow conditions.
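The comparison logic can be sketched in a few lines of Python; the power, ambient temperature, die temperatures, and TIM names below are all hypothetical and serve only to show how die-level data isolates the interface material.

```python
# Die-level TIM comparison: with the same heat sink, PCB, and airflow,
# the only variable is the TIM, so differences in junction-to-ambient
# resistance reflect the interface materials themselves.
P = 25.0          # component power dissipation, W -- assumed
T_AMBIENT = 35.0  # inlet air temperature, deg C -- assumed

# Die (junction) temperatures measured with each candidate TIM, deg C.
die_temps = {"phase-change pad": 78.2, "gap filler": 83.5, "grease": 75.9}

for tim, t_die in die_temps.items():
    r_ja = (t_die - T_AMBIENT) / P  # junction-to-ambient resistance, deg C/W
    print(f"{tim}: Rja = {r_ja:.2f} C/W")
```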

An Instrument for Measuring Air Velocity, Pressure and Temperature in Electronics Enclosures

For engineer-level thermal management studies, the iQ-200 instrument from Advanced Thermal Solutions, Inc. (ATS) can simultaneously measure air velocity, air pressure, and the temperature of components and the surrounding air at multiple locations inside electronic systems. This enables users to obtain full and accurate thermal profiles of components, heat sinks, PCBs, and other electronics hardware for more effective thermal management.

The iQ-200 system simultaneously captures data from up to 12 J-type thermocouples, 16 air temperature/velocity sensors, and four pressure sensors.

The thermocouples provide surface temperature measurements on heat spreaders, component packages, housing hardware, and elsewhere to track heat flow or detect hot spots. Temperature data is tracked from -40 to 750°C. The sensors (available separately) measure both air temperature and velocity at multiple points, allowing a detailed analysis of airflow.

Candlestick Sensor from ATS

Thin, low-profile ATS candlestick sensors can be easily positioned throughout a system under test and measure air temperatures from -10 to +6°C. Air velocity is measured from natural convection up to 6 m/s (1,200 ft/min). The iQ-200 can be factory modified to measure airflow to 50 m/s (10,000 ft/min) and air temperature up to 85°C. Four differential transducers capture pressure drop data along circuit cards, assemblies, and orifice plates. Standard pressure measurement ranges from 0 to 1,034 Pa (0 to 0.15 psi).

The ATS iQ-200 system comes preloaded with user-friendly iSTAGE application software, which manages incoming data from the various sensor devices and presents it in rich graphics on monitors for capture in videos or documents. The iQ-200 connects via USB to any conventional PC for convenient data management, storage, and sharing.

More information on the iQ-200 system from ATS can be found on Qats.com (http://www.qats.com/products/Temperature-and-Velocity-Measurement/Instruments/iQ-200/2632.aspx), or by calling 781-769-2800.

The Principal Methods for Measuring Thermal Conductivity in Electronics Cooling Studies

A paper by Advanced Thermal Solutions, Inc. (ATS) compiles the major methods engineers use to measure thermal conductivity. In all, the paper describes and compares 17 proven methods for measuring thermal conductivity in electronics.

In one section of the paper, these methods are grouped according to the time dependence of the heat applied to the sample: each method is classified as steady-state, periodic, or pulsed. Another section compares the performance of each thermal conductivity measurement method and gives an idea of the sample size, preparation, and operator skill required. There is also a list of the equipment typically needed to conduct each of these thermal tests.

According to the ATS article, the wide choice of methods may at first appear to be a disadvantage. However, once the methods are understood for their application-specific benefits, the advantages become evident. The materials to be tested, part geometry, and test temperatures will usually be the primary selection criteria.

As always, relative cost and the expected level of accuracy will also be important factors. Avoiding complicated boundary conditions, irregular part geometry, and difficult heater placement or construction, while promoting the difficult-to-achieve condition of one-dimensional heat flow, will greatly simplify the measurement process. Reducing the cost and assembly difficulty of the experimental set-up also helps avoid the errors often introduced when constructing complicated analytical/mathematical models.