Thermal hydraulics (also called thermohydraulics) is the study of hydraulic flow in thermal fluids. The area can be mainly divided into three parts: thermodynamics, fluid mechanics, and heat transfer, but the three are often closely linked to each other. A common example is steam generation in power plants and the associated energy transfer to mechanical motion and the change of states of the water while undergoing this process. Thermal-hydraulic analysis can determine important parameters for reactor design such as plant efficiency and coolability of the system. [1] The common adjectives are "thermohydraulic", "thermal-hydraulic" and "thermalhydraulic".

In thermodynamic analysis, all states defined in the system are assumed to be in thermodynamic equilibrium; each state has mechanical, thermal, and phase equilibrium, and there is no macroscopic change with respect to time. The first and second laws of thermodynamics can be applied to the analysis of the system. [2] In power plant analysis, a series of states can comprise a cycle. In this case, each state represents the condition at the inlet/outlet of an individual component. Examples of components are pumps, compressors, turbines, reactors, and heat exchangers. By considering the constitutive equation for the given type of fluid, the thermodynamic state at each point can be analyzed. As a result, the thermal efficiency of the cycle can be defined. Examples of cycles include the Carnot cycle, Brayton cycle, and Rankine cycle. Based on these simple cycles, modified or combined cycles also exist.

Sahu et al. observed that the thermo-hydraulic parameter (THP) is relatively insensitive to the friction factor enhancement ratio (FFER). [3] The deviation between the terms $(f_R/f_S)$ and $(f_R/f_S)^{0.33}$ was found to be 48% to 64% over the range of roughness and other parameters studied, with Reynolds numbers of 2,900-14,000. Therefore, to weigh the enhancement in heat transfer (Nu) and in friction factor (f) in equal proportion, a new parameter called the thermo-hydraulic improvement parameter (THIP) has been proposed; it is evaluated as the ratio of the Nusselt number improvement factor (NNIF) to the friction factor improvement factor (FFIF). [3]

Temperature is an important quantity to know for the understanding of the system. Material properties such as density, thermal conductivity, viscosity, and specific heat depend on temperature, and very high or low temperatures can bring unexpected changes in the system. In a solid, the heat equation can be used to obtain the temperature distribution inside the material for a given geometry. For the steady-state, static case with Fourier's law of conduction applied, the heat equation reduces to $\nabla \cdot (k \nabla T) + \dot{q} = 0$, where $k$ is the thermal conductivity and $\dot{q}$ is the volumetric heat generation rate. Applying boundary conditions gives a solution for the temperature distribution.

In single-phase heat transfer, convection is often the dominant mechanism of heat transfer. For diabatic flow, where the flow receives heat, the temperature of the coolant changes as it flows. Examples of single-phase heat transfer systems are gas-cooled reactors and molten-salt reactors. The most convenient way of characterizing single-phase heat transfer is an empirical approach, where the temperature difference between the wall and the bulk flow is obtained from the heat transfer coefficient.
The heat transfer coefficient depends on several factors: mode of heat transfer (e.g., internal or external flow), type of fluid, geometry of the system, flow regime (e.g., laminar or turbulent flow), boundary conditions, etc. Examples of heat transfer correlations are the Dittus-Boelter correlation (turbulent forced convection) and the Churchill & Chu correlation (natural convection).

Compared with single-phase heat transfer, heat transfer with a phase change is an effective mode of heat transfer. It generally has a high heat transfer coefficient due to the large latent heat of the phase change, followed by the induced mixing of the flow. Boiling and condensation heat transfer encompass a wide range of phenomena.

Pool boiling is boiling in a stagnant fluid. Its behavior is well characterized by the Nukiyama boiling curve, [4] which shows the relation between the degree of surface superheat and the applied heat flux on the surface. With varying degrees of superheat, the curve passes through natural convection, onset of nucleate boiling, nucleate boiling, critical heat flux, transition boiling, and film boiling. Each regime has a different mechanism of heat transfer and a different correlation for the heat transfer coefficient.

Flow boiling is boiling in a flowing fluid. Compared with pool boiling, flow boiling heat transfer depends on many factors, including flow pressure, mass flow rate, fluid type, upstream conditions, wall materials, system geometry, and applied heat flux. Characterizing flow boiling requires comprehensive consideration of the operating conditions. [5] In 2021, a prototype electric vehicle charging cable using flow boiling was able to remove 24.22 kW of heat, allowing the charging current to reach 2,400 amps, far higher than state-of-the-art charging cables that top out at 520 amps. [6]

The heat transfer coefficient due to nucleate boiling increases with wall superheat until it reaches a certain point. When the applied heat flux exceeds this limit, the heat transfer capability of the flow decreases or drops significantly. Normally, the critical heat flux (CHF) corresponds to departure from nucleate boiling (DNB) in pressurized water reactors (PWRs) and to dryout in boiling water reactors (BWRs). The reduced heat transfer coefficient seen post-DNB or post-dryout is likely to result in damage to the boiling surface. Understanding the exact point and triggering mechanism of critical heat flux is a topic of interest. In a DNB-type boiling crisis, the flow is characterized by a vapor layer creeping between the liquid and the wall; on top of convective heat transfer, radiative heat transfer contributes. After dryout, the flow regime shifts from inverted annular to mist flow.

Other thermal-hydraulic phenomena are also subjects of interest.
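As a minimal sketch of the empirical single-phase approach described above, the snippet below estimates a convective heat transfer coefficient for turbulent pipe flow with the Dittus-Boelter correlation, Nu = 0.023 Re^0.8 Pr^n (n = 0.4 when the fluid is heated). The fluid property values are illustrative round numbers for water near room temperature, not design data.

```python
# Sketch: single-phase heat transfer coefficient from the Dittus-Boelter
# correlation, Nu = 0.023 * Re**0.8 * Pr**n (n = 0.4 when the fluid is heated).
# Property values below are illustrative round numbers for water near 300 K.

def dittus_boelter_h(velocity, diameter, rho, mu, cp, k, heating=True):
    """Return the convective heat transfer coefficient h in W/(m^2 K)."""
    re = rho * velocity * diameter / mu        # Reynolds number
    pr = cp * mu / k                           # Prandtl number
    if re < 1e4:
        raise ValueError("Dittus-Boelter assumes fully turbulent flow (Re > ~10^4)")
    n = 0.4 if heating else 0.3
    nu = 0.023 * re**0.8 * pr**n               # Nusselt number
    return nu * k / diameter                   # h = Nu * k / D

# Example: water at ~300 K flowing at 2 m/s in a 20 mm pipe
h = dittus_boelter_h(velocity=2.0, diameter=0.02,
                     rho=997.0, mu=8.9e-4, cp=4180.0, k=0.6)
print(f"h ~ {h:.0f} W/(m^2 K)")   # on the order of several kW/(m^2 K)
```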
https://en.wikipedia.org/wiki/Thermal_hydraulics
Thermal hydrolysis is a process used for treating industrial waste, municipal solid waste and sewage sludge. It is a two-stage process combining high-pressure boiling of waste or sludge followed by a rapid decompression. This combined action sterilizes the sludge and makes it more biodegradable, which improves digestion performance. Sterilization destroys pathogens in the sludge, allowing it to exceed the stringent requirements for land application (agriculture). [1] In addition, the treatment adjusts the rheology to such an extent that loading rates to sludge anaerobic digesters can be doubled, and the dewaterability of the sludge is also significantly improved. [2] [3] The first full-scale application of this process for sewage sludge was installed in Hamar, Norway, in 1996. Since then, there have been over 30 additional installations globally. [1] Sewage treatment plants, such as Blue Plains in Washington, D.C., USA, have adopted thermal hydrolysis of sewage sludge in order to produce commercially valuable products (such as electricity and high-quality biosolid fertilizers) out of the wastewater. [4] The full-scale commercial application of thermal hydrolysis enables the plant to utilize the solids portion of the wastewater to make power and fine fertilizer directly from sewage waste. [5] The city of Oslo, Norway, installed a system for converting domestic food waste to fuel in 2012. A thermal hydrolysis system produces biogas from the food waste, which provides fuel for the city bus system and is also used for agricultural fertilizer. [6]
https://en.wikipedia.org/wiki/Thermal_hydrolysis
Thermal inductance refers to the phenomenon wherein a thermal change of an object surrounded by a fluid will induce a change in convection currents within that fluid, thus inducing a change in the kinetic energy of the fluid. [1] It is considered the thermal analogue to electrical inductance in system equivalence modeling; its unit is the thermal henry. [1] Thus far, few studies have reported on the inductive phenomenon in the heat-transfer behaviour of a system. In 1946, Bosworth demonstrated that heat flow can have an inductive nature through experiments with a fluidic system. [2] [3] He claimed that the measured transient behaviour of the temperature change cannot be explained by merely the combination of thermal resistance and thermal capacitance. Bosworth later extended the experiments to study thermal mutual inductance; however, he did not report thermal inductance in any heat-transfer system other than fluid flow.

In 2013, Ye et al. published "Thermal Transient Effect and Improved Junction Temperature Measurement Method in High Voltage Light-Emitting Diodes". [4] In their experiments, a high-voltage LED chip was directly attached to a silicon substrate with a thin thermal interface material (TIM). The temperature sensors were fabricated using standard silicon processing technologies and calibrated over the range 30 °C to 150 °C. The thicknesses of the chip and the TIM were 153 μm and 59 μm, respectively; thus the sensors were very close to the p-n junction. The silicon substrate was positioned and held by vacuum on a massive thermal plate with an accurate temperature controller in an enclosure. The experimenters applied step-up/step-down currents and measured the relation between temperature and forward voltage after 100 ms. In this work, the "recovery time" is defined as the interval from the start of the power change to the time at which the temperature became equal to the initial temperature value again. The results show that the junction temperature of the LED decreases significantly and immediately (by more than 10 °C) when the current is stepped up from 100 μA to 15 mA. Then, the junction temperature gradually increases; after a recovery time of ~100 ms, it reaches the initial value. Once steady state is reached at 15 mA, the applied current is instantly stepped back down to 100 μA. The measured junction temperature increases by 4 °C within 0.1 ms, while the sensor simultaneously shows a temperature increase of 2 °C. Subsequently, the junction temperature gradually decreases; after a recovery time of ~100 ms, it returns to its initial value and then continues to decrease until the system reaches steady state at room temperature. Notably, the junction temperature changes in opposition to the current change in the chips.

In 2016, the phenomenon was investigated further. [5] This time, a GaN-based low-voltage LED chip was examined instead of a high-voltage chip. This chip can withstand a wider range of applied currents and facilitates more precise power changes for observing the thermal inductive responses. The chip was mounted on a lead frame and encapsulated with silicone. The chip package was soldered to a metal-core printed circuit board and mounted on a thermal plate with controllable temperatures.
The transient junction temperature of the LED chip was measured as a function of time with the applied current stepped down by different amounts. They calculated that the junction temperature equals 36.2 °C at 350 mA in this situation. The results are consistent with the previous thermal-inductive measurements on the GaN-based high-voltage LED chip. As expected, the junction temperatures immediately rise and then gradually decrease as the currents are stepped down, and the recovery time is longer for larger step decreases. They showed that a rapidly changing power through the GaN-based LEDs induces a proportional temperature change opposed to the temperature change expected from the power input. This phenomenon is referred to as thermal inductance in the report. The thermal inductive properties could be related to the thermoelectric effect, especially the transient thermoelectric effect; however, rather than arising in the specialized structure of thermoelectric devices, the thermal inductance here is considered to occur in GaN devices with a p-n junction. With the combination of thermal resistance, thermal capacitance, and thermal inductance, the authors expect their model to aid the thermal analysis of high-frequency GaN devices. In addition, thermal inductance phenomena are expected to exist more widely in nonhomogeneous materials and in thermal analysis under energy changes of very short duration.

In 2019, an experiment was carried out in which a thermal oscillation was achieved without any external work being done. [6] The device was composed of a Peltier element and an electric inductance switched in series. It was shown that the time derivative of the heat current is proportional to the negative temperature difference across the device, in analogy to an electric inductor, where the time derivative of the electric current is proportional to the negative voltage difference. The resulting "thermal self-inductance" allowing this oscillatory behavior, with the considered objects always in thermal quasi-equilibrium, was expressed as a function of the electric inductance, the Seebeck coefficient of the thermoelectric material used, and the operating temperature, and a differential equation for the oscillating thermal "LCR" current was given in the supplementary information of the publication. Although the reported thermal oscillation was highly damped and the resulting temperature oscillation around the thermal-bath temperature was comparably small, the experiment appears to be a valid proof of concept for a working thermal inductance.
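To make the "thermal LCR" analogy above concrete, the sketch below integrates a generic damped thermal oscillator in which the heat current obeys an inductive law (its time derivative is proportional to the negative temperature difference) while the temperature difference relaxes through a thermal resistance and capacitance. The circuit topology and all parameter values are illustrative assumptions, not the cited experiment's numbers.

```python
# Sketch of a thermal "LCR" analogue: heat current q obeys the inductive law
# L_th * dq/dt = -dT, while the temperature difference dT is charged by q and
# drained through a thermal resistance R_th acting on a capacitance C_th.
# Eliminating q gives a damped oscillator for dT. All values are illustrative.
L_th, R_th, C_th = 2.0, 5.0, 1.0   # thermal inductance, resistance, capacitance (arbitrary units)
dt, steps = 0.001, 20000
q, dT = 0.0, 1.0                   # initial heat current and temperature difference

for _ in range(steps):
    dq_dt = -dT / L_th                     # inductive law: L_th * dq/dt = -dT
    ddT_dt = (q - dT / R_th) / C_th        # energy balance on the capacitance
    q += dq_dt * dt
    dT += ddT_dt * dt

# With these parameters the system is underdamped: dT oscillates around zero
# and decays, i.e. a damped thermal oscillation as in the 2019 experiment.
print(f"dT after {steps * dt:.0f} time units: {dT:+.4f}")
```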
https://en.wikipedia.org/wiki/Thermal_inductance
Thermal inertia is a term commonly used to describe the observed delays in a body's temperature response during heat transfers. The phenomenon exists because of a body's ability to both store and transport heat relative to its environment. Since the configuration of system components and modes of transport (e.g. conduction, convection, radiation, phase change) and energy storage (e.g. internal energy, enthalpy, latent heat) vary substantially between instances, there is no generally applicable mathematical definition of closed form for thermal inertia. [1] Bodies with relatively large mass and heat capacity typically exhibit slower temperature responses. However, heat capacity alone cannot accurately quantify thermal inertia. Measurements of it further depend on how heat flows are distributed inside and outside a body.

Whether thermal inertia is an intensive or extensive quantity depends upon context. Some authors have identified it as an intensive material property, for example in association with thermal effusivity. It has also been evaluated as an extensive quantity based upon the measured or simulated spatial-temporal behavior of a system during transient heat transfers. A time constant is then sometimes appropriately used as a simple parametrization for the thermal inertia of a selected component or subsystem.

A thermodynamic system containing one or more components with large heat capacity indicates that dynamic, or transient, effects must be considered when measuring or modelling system behavior. Steady-state calculations, many of which produce valid estimates of heat flows and temperatures when reaching an equilibrium, nevertheless yield no information on the transition path towards such stable or metastable conditions. Nowadays the spatial-temporal behavior of complex systems can be precisely evaluated with detailed numerical simulation. In some cases a lumped system analysis can estimate a thermal time constant. [2] [3]: 627 A larger heat capacity $C$ for a component generally means a longer time to reach equilibrium. The transition rate also occurs in conjunction with the component's internal ($U_i$) and environmental ($U_e$) heat transfer coefficients, as referenced over an interface area $A$. The time constant $\tau$ for an estimated exponential transition of the component's temperature will adjust as $\tau = C/(A \cdot U_e)$ under conditions which obey Newton's law of cooling, and when characterized by a ratio $U_e/U_i$, or Biot number, much less than one. [4]: 19-26

Analogies of thermal inertia to the temporal behaviors observed in other disciplines of engineering and physics can sometimes be used with caution. [5] In building performance simulation, thermal inertia is also known as the thermal flywheel effect, and the heat capacity of a structure's mass (sometimes called the thermal mass) can produce a delay between diurnal heat flow and temperature which is similar to the delay between current and voltage in an AC-driven RC circuit. Thermal inertia is less directly comparable to the mass-and-velocity term used in mechanics, where inertia restricts the acceleration of an object. In a similar way, thermal inertia can be a measure of the heat capacity of a mass, and of the velocity of the thermal wave which controls the surface temperature of a body. [1]
For a semi-infinite rigid body where heat transfer is dominated by the diffusive process of conduction only, the thermal inertia response at a surface can be approximated from the material's thermal effusivity, also called thermal responsivity, $r$. It is defined as the square root of the product of the material's bulk thermal conductivity and volumetric heat capacity, where the latter is the product of density and specific heat capacity: [6] [7]

$r = \sqrt{k \rho c_p}$

Thermal effusivity has the units of a heat transfer coefficient multiplied by the square root of time, $\mathrm{W\,s^{1/2}\,m^{-2}\,K^{-1}}$. When a constant flow of heat is abruptly imposed upon a surface, $r$ performs nearly the same role in limiting the surface's initial dynamic "thermal inertia" response as the rigid body's static heat transfer coefficient $U$ plays in determining the surface's steady-state temperature. [8] [9]
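As a minimal sketch of the two quantities just described, the snippet below computes a thermal effusivity $r = \sqrt{k \rho c_p}$ and a lumped-capacitance time constant $\tau = C/(A \cdot U_e)$. The property values are illustrative assumptions (roughly concrete-like), not data from any cited source.

```python
import math

# Thermal effusivity r = sqrt(k * rho * cp), units W s^(1/2) m^-2 K^-1.
# Property values are illustrative assumptions, roughly concrete-like.
k = 1.4        # thermal conductivity, W/(m K)
rho = 2300.0   # density, kg/m^3
cp = 880.0     # specific heat capacity, J/(kg K)
r = math.sqrt(k * rho * cp)
print(f"effusivity r ~ {r:.0f} W s^0.5 / (m^2 K)")

# Lumped-capacitance time constant tau = C / (A * U_e), valid when the
# ratio U_e / U_i (the Biot number mentioned above) is much less than one.
C = rho * cp * 0.001      # heat capacity of a 1-litre body, J/K
A = 0.06                  # interface area, m^2
U_e = 10.0                # environmental heat transfer coefficient, W/(m^2 K)
tau = C / (A * U_e)
print(f"time constant tau ~ {tau / 60:.1f} minutes")
```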
https://en.wikipedia.org/wiki/Thermal_inertia
Thermal Integrity Profiling (TIP) is a non-destructive testing method used to evaluate the integrity of concrete foundations. It is standardized by ASTM D7949 - Standard Test Methods for Thermal Integrity Profiling of Concrete Deep Foundations. The testing method was first developed in the mid-1990s at the University of South Florida. [1] [2] It relates the heat generated by the curing of cement to the integrity and quality of drilled shafts, augered cast-in-place (ACIP) piles and other concrete foundations. In general, a shortage of competent concrete (necks or inclusions) is registered by relatively cool regions; the presence of extra concrete (over-pour bulging into soft soil strata) is registered by relatively warm regions. Concrete temperatures along the length of the foundation element are sampled throughout the concrete hydration process. [3] TIP analysis is performed at the point of peak temperature, generally 18 to 24 hours after concreting. [4] Measurements are available relatively soon after pouring (6 to 72 hours), generally before other integrity testing methods such as cross-hole sonic logging and low strain integrity testing can be performed. TIP can be performed using a probe lowered down standard access tubes or by installing embedded thermal wires along the length of the reinforcement cage. [4] Four thermal wires are commonly installed along the steel cage, each 90 degrees from one another, forming a north-east-south-west configuration. If records at a certain depth show regions with cooler temperatures (when compared to the average temperature at that depth), a concrete deficiency or defect may be present. An average temperature at a certain depth that is significantly lower than the average temperatures at other depths may also be an indication of a potential problem. It is also possible to estimate the effective area of the foundation, and to assess whether the reinforcing cage is properly aligned and centered. [5]
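The defect screen described above (comparing each wire's reading to the average at its depth) can be sketched as follows. The threshold, data layout, and readings are illustrative assumptions for demonstration only, not part of ASTM D7949.

```python
# Sketch of the TIP screening logic: flag depths where a wire reads much
# cooler than the average of all wires at that depth. Data and threshold
# below are hypothetical, for illustration only.
readings = {            # depth (m) -> temperatures (deg C) from 4 wires (N, E, S, W)
    1.0: [48.1, 48.3, 47.9, 48.2],
    2.0: [48.5, 48.4, 41.7, 48.6],   # hypothetical cool anomaly on the south wire
    3.0: [48.0, 48.2, 48.1, 47.8],
}
THRESHOLD = 3.0          # deg C below the depth average treated as anomalous

for depth, temps in readings.items():
    avg = sum(temps) / len(temps)
    for wire, t in zip("NESW", temps):
        if avg - t > THRESHOLD:
            print(f"possible deficiency at {depth} m near wire {wire}: "
                  f"{t:.1f} C vs depth average {avg:.1f} C")
```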
https://en.wikipedia.org/wiki/Thermal_integrity_profiling
A thermal interface material (shortened to TIM) is any material that is inserted between two components in order to enhance the thermal coupling between them. [1] A common use is heat dissipation, in which the TIM is inserted between a heat-producing device (e.g. an integrated circuit) and a heat-dissipating device (e.g. a heat sink). Several kinds of TIM are being intensively developed for different target applications. At each interface, a thermal resistance exists and impedes heat dissipation. In addition, electronic performance and device lifetime can degrade dramatically under continuous overheating and large thermal stress at the interfaces. Many recent efforts have therefore been dedicated to developing and improving TIMs. [1] These efforts include minimizing the thermal boundary resistance between layers and enhancing thermal-management performance, while addressing application requirements such as low thermal stress between materials of different thermal expansion coefficients, low elastic modulus or viscosity, flexibility, and reusability.
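A hedged sketch of why the interface resistance matters: treating the heat path as thermal resistances in series, the junction temperature rise is $\Delta T = Q (R_{jc} + R_{TIM} + R_{sink})$. All numbers below are illustrative assumptions, not vendor data.

```python
# Series thermal-resistance sketch for a chip + TIM + heat sink stack.
# T_junction = T_ambient + Q * (R_jc + R_tim + R_sink). All values are
# illustrative assumptions, not measured data.
Q = 50.0          # dissipated power, W
T_ambient = 25.0  # deg C

R_jc = 0.30       # junction-to-case resistance, K/W (assumed)
R_sink = 0.50     # heat-sink-to-ambient resistance, K/W (assumed)

# TIM resistance from bond-line thickness t, contact area A, conductivity k:
t, A, k = 100e-6, 4e-4, 3.0       # 100 um bond line, 20x20 mm area, 3 W/(m K)
R_tim = t / (k * A)               # ~0.083 K/W for these assumptions

T_junction = T_ambient + Q * (R_jc + R_tim + R_sink)
print(f"R_tim = {R_tim:.3f} K/W, T_junction = {T_junction:.1f} C")
```

A thinner bond line or a higher-conductivity TIM directly lowers R_tim and hence the junction temperature, which is the design lever the development efforts above target.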
https://en.wikipedia.org/wiki/Thermal_interface_material
Thermal ionization, also known as surface ionization or contact ionization, is a physical process whereby atoms are desorbed from a hot surface and ionized in the process. Thermal ionization is used to make simple ion sources, for mass spectrometry and for generating ion beams. [1] Thermal ionization has seen extensive use in determining atomic weights, in addition to being used in many geological and nuclear applications. [2] The likelihood of ionization is a function of the filament temperature, the work function of the filament substrate, and the ionization energy of the element. This is summarised in the Saha-Langmuir equation: [3]

$\frac{n_+}{n_0} = \frac{g_+}{g_0} \exp\!\left(\frac{W - E_i}{k_B T}\right)$

where $n_+/n_0$ is the ratio of ions to neutral atoms leaving the surface, $g_+/g_0$ is the ratio of statistical weights of the ionic and neutral states, $W$ is the work function of the surface, $E_i$ is the ionization energy of the desorbed element, $k_B$ is the Boltzmann constant, and $T$ is the surface temperature. Negative ionization can also occur for elements with a large electron affinity $E_A$ against a surface of low work function.

One application of thermal ionization is thermal ionization mass spectrometry (TIMS). In TIMS, a chemically purified material is placed onto a filament which is then heated to high temperatures to cause some of the material to be ionized as it is thermally desorbed ("boiled off") from the hot filament. Filaments are generally flat pieces of metal around 1-2 mm (0.039-0.079 in) wide and 0.1 mm (0.0039 in) thick, bent into an upside-down U shape and attached to two contacts that supply a current. This method is widely used in radiometric dating, where the sample is ionized under vacuum. The ions produced at the filament are focused into an ion beam and then passed through a magnetic field to separate them by mass. The relative abundances of different isotopes can then be measured, yielding isotope ratios. When these isotope ratios are measured by TIMS, mass-dependent fractionation occurs as species are emitted by the hot filament. Fractionation occurs due to the excitation of the sample and must therefore be corrected for accurate measurement of the isotope ratio. [4]

The TIMS method has several advantages: it has a simple design, is less expensive than other mass spectrometers, and produces stable ion emission. It requires a stable power supply, and is suitable for species with a low ionization energy, such as strontium and lead. The disadvantages of this method stem from the maximum temperature achievable in thermal ionization. The hot filament reaches a temperature of less than 2,500 °C (2,770 K; 4,530 °F), leading to the inability to create atomic ions of species with a high ionization energy, such as osmium and tungsten. Although the TIMS method can create molecular ions instead in this case, species with high ionization energy can be analyzed more effectively with MC-ICP-MS. [citation needed]
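A minimal sketch of the Saha-Langmuir estimate above. The work function and ionization energy are illustrative round numbers (roughly a rhenium filament and strontium; both should be checked against tabulated data), and the statistical-weight ratio is set to 1 for simplicity.

```python
import math

K_B_EV = 8.617e-5          # Boltzmann constant, eV/K

def saha_langmuir_ratio(work_function_ev, ionization_energy_ev, temp_k, g_ratio=1.0):
    """Ratio n+/n0 of ions to neutral atoms leaving a hot surface."""
    return g_ratio * math.exp((work_function_ev - ionization_energy_ev)
                              / (K_B_EV * temp_k))

# Illustrative values: W ~ 4.96 eV (roughly rhenium), E_i ~ 5.69 eV (strontium)
ratio = saha_langmuir_ratio(4.96, 5.69, temp_k=2000.0)
efficiency = ratio / (1.0 + ratio)   # fraction of desorbed atoms that leave as ions
print(f"n+/n0 ~ {ratio:.2e}, ionization efficiency ~ {efficiency:.2e}")
```

Because W < E_i here, the exponent is negative and only on the order of a percent of desorbed atoms ionize, which is why TIMS favors low-ionization-energy elements such as strontium and lead.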
https://en.wikipedia.org/wiki/Thermal_ionization
Thermal laser epitaxy (TLE) is a physical vapor deposition technique that utilizes irradiation from continuous-wave lasers to heat sources locally for growing films on a substrate. [1] [2] This technique can be performed under ultra-high vacuum or in the presence of a background atmosphere, such as ozone, to deposit oxide films. [3] TLE operates at power densities between 10^4 and 10^6 W/cm^2, which results in evaporation or sublimation of the source material, with no plasma or high-energy particle species being produced. Despite operating at comparatively low power densities, TLE is capable of depositing many materials with low vapor pressures, including refractory metals, a process that is challenging to perform with molecular beam epitaxy. [4]

TLE uses continuous-wave lasers (typically with a wavelength of around 1000 nm) located outside the vacuum chamber to heat sources of material in order to generate a flux of vapor via evaporation or sublimation. [1] Owing to the localized nature of the heat induced by the laser, a portion of the source may be transformed into a liquid state while the rest remains solid, such that the source acts as its own crucible. The strong absorption of light causes the laser-induced heat to be highly localized via the small diameter of the laser beam, which can also have the effect of confining the heat to the axis of the source. The resulting absorption corresponds to a typical photon penetration depth on the order of 2 nm due to the high absorption coefficients of α ~ 10^5 cm^-1 of many materials. Heat loss via conduction and radiation further localizes the high-temperature region close to the irradiated surface of the source. The localized character of the heating enables many materials to be grown by TLE from freestanding sources without a crucible. Owing to the direct transfer of energy from the laser to the source, TLE is more efficient than other evaporation techniques, such as thermal evaporation and molecular beam epitaxy, which typically rely on wire-based Joule heaters to reach high temperatures.

By heating the source, a flux of vapor is produced, the pressure of which frequently has an approximately exponential relation to temperature. The vapor is then deposited onto a laser-heated substrate. The very high substrate temperatures achievable by laser heating allow the use of adsorption-controlled growth modes, similar to molecular beam epitaxy, ensuring precise control of the stoichiometry and temperature of the deposited film. This precise control is valuable for growing thin-film heterostructures of complex materials, such as high-T_c superconductors. [5] [6] By positioning all lasers outside of the evaporation chamber, contamination can be reduced compared to using in situ heaters, resulting in highly pure deposited films. The deposition rate of the vapor impinging upon the substrate is controlled by adjusting the power of the incident source laser. The deposition rate frequently increases exponentially with source temperature, which in turn increases linearly with incident laser power. [4] Stability in the deposition rate may be achieved by continuously moving the laser beam around the source, while compensating for any coating of the laser optics inside the TLE chamber. [7] The gas in the chamber can be incorporated into the deposited film. With the addition of an oxygen or ozone atmosphere, oxide films can readily be grown with TLE at pressures up to 10^-2 hPa. [3] [8]
Similarly, with the addition of an ammonia gas source, a wide variety of nitride films can be grown via TLE, including various superconducting nitride compounds such as TiN and NbN. [9]

Shortly after the invention of the laser by Theodore Maiman in 1960, [10] it was quickly recognized that a laser could act as a point source to evaporate source material in a vacuum chamber for fabricating thin films. [11] [12] In 1965, Smith and Turner [12] succeeded in depositing thin films using a ruby laser, after which Groh deposited thin films using a continuous-wave CO2 laser in 1968. [13] Further work demonstrated that laser-induced evaporation is an effective way to deposit dielectric and semiconductor films. However, issues arose with the stoichiometry and uniformity of the deposited films, diminishing their quality compared to films deposited by other techniques. [14] [15] Experiments investigating the deposition of thin films using a pulsed laser at high power densities laid the foundation for pulsed laser deposition, an extremely successful growth technique that is widely used today. Experiments utilizing continuous-wave lasers continued to be performed throughout the latter half of the twentieth century, highlighting the many advantages of continuous-wave laser evaporation, including low power densities, which can reduce surface damage to sensitive films. It proved challenging to achieve congruent evaporation from compound sources using continuous-wave lasers, and film deposition was typically limited to sources with high vapor pressures due to the low continuous-wave power densities available. [16] [17] [18] In 2019, the evaporation of sources using continuous-wave lasers was rediscovered at the Max Planck Institute for Solid State Research and dubbed "thermal laser epitaxy". This new technique uses elemental sources illuminated by high-power continuous-wave lasers (typically with peak powers around 1 kW at a wavelength of 1000 nm), thus allowing the deposition of low-vapor-pressure materials such as carbon and tungsten while avoiding issues with congruent evaporation from compound sources. [1] [2]
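The exponential rate-temperature relation noted above can be sketched with an Arrhenius-style vapor-flux model. The activation energy, prefactor, and the assumed linear mapping from laser power to source temperature are all illustrative assumptions, not TLE calibration data.

```python
import math

K_B_EV = 8.617e-5    # Boltzmann constant, eV/K

def relative_flux(temp_k, e_act_ev=3.5, prefactor=1.0):
    """Relative evaporation flux ~ exp(-E_a / k_B T); Arrhenius-like sketch."""
    return prefactor * math.exp(-e_act_ev / (K_B_EV * temp_k))

def source_temp(laser_power_w, t0=300.0, slope=1.5):
    """Assumed linear source temperature vs incident laser power (illustrative)."""
    return t0 + slope * laser_power_w

for power in (500, 750, 1000):          # continuous-wave laser power, W
    t = source_temp(power)
    print(f"P = {power:4d} W -> T ~ {t:.0f} K, relative flux ~ {relative_flux(t):.2e}")
```

The output illustrates the qualitative point in the text: a modest, roughly linear rise in source temperature with laser power produces orders-of-magnitude changes in vapor flux, which is why the deposition rate is controlled via laser power.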
https://en.wikipedia.org/wiki/Thermal_laser_epitaxy
A thermal loop is a movement of air driven by warm air rising at one end of the loop and cool air descending at the other end, creating a constantly moving loop of air. Thermal loops can be used to precisely control the temperature of a specific area. [1] Thermal loops also occur in liquids. Thermal loops are size-independent; that is to say, they may occur in a space as small as a room or as large as a global hemisphere. The Hadley cell is an example of a global-scale thermal loop.
https://en.wikipedia.org/wiki/Thermal_loop
In building design, thermal mass is a property of the matter of a building that requires a flow of heat in order for it to change temperature. Not all writers agree on what physical property of matter "thermal mass" describes. Most writers use it as a synonym for heat capacity, the ability of a body to store thermal energy, typically referred to by the symbol $C_{th}$ with SI unit J/K or J/°C (which are equivalent). However, not all do, and the lack of a consistent definition of what property of matter thermal mass describes has led some writers to dismiss its use in building design as pseudoscience. [4] [5] [6]

The equation relating thermal energy to thermal mass is

$Q = C_{th} \, \Delta T$

where $Q$ is the thermal energy transferred, $C_{th}$ is the thermal mass of the body, and $\Delta T$ is the change in temperature. For example, if 250 J of heat energy is added to a copper gear with a thermal mass of 38.46 J/°C, its temperature will rise by 6.50 °C.

If the body consists of a homogeneous material with sufficiently known physical properties, the thermal mass is simply the mass of material present times the specific heat capacity of that material. For bodies made of many materials, the sum of the heat capacities of their pure components may be used in the calculation, or in some cases (as for a whole animal, for example) the number may simply be measured directly for the entire body in question.

As an extensive property, heat capacity is characteristic of an object; its corresponding intensive property is specific heat capacity, expressed in terms of a measure of the amount of material such as mass or number of moles, which must be multiplied by similar units to give the heat capacity of the entire body of material. Thus the heat capacity can be equivalently calculated as the product of the mass $m$ of the body and the specific heat capacity $c$ of the material, or the product of the number of moles of molecules present $n$ and the molar specific heat capacity $\bar{c}$. For discussion of why the thermal energy storage abilities of pure substances vary, see factors that affect specific heat capacity.

For a body of uniform composition, $C_{th}$ can be approximated by $C_{th} = m c_p$, where $m$ is the mass of the body and $c_p$ is the isobaric specific heat capacity of the material averaged over the temperature range in question. For bodies composed of numerous different materials, the thermal masses of the different components can simply be added together.

Christoph Reinhard has described the impact of heat capacity on building performance. [7] Heat capacity is not normally calculated in the engineering of buildings. In the United States and Canada, national building codes and most state and local jurisdictions require that heating and cooling equipment be sized in accordance with Manual J [8] of the Air Conditioning Contractors of America. The Manual J process uses detailed measurements of a building's dimensions, construction, insulation, air-tightness, features and occupant loads, but it does not take heat capacity into account. Some heat capacity is nevertheless presumed: equipment sized according to Manual J is sized to maintain comfort at the first percentile of temperature for heating and the 99th percentile for cooling, and the process presumes that the building has sufficient heat capacity to maintain comfort during brief excursions outside those extremes.
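The worked example above can be checked directly with $Q = C_{th}\,\Delta T$; the 100 g mass and ~385 J/(kg·K) specific heat in the last lines are illustrative numbers chosen to reproduce the gear's stated thermal mass.

```python
# Q = C_th * dT, using the copper-gear numbers from the text.
Q = 250.0          # thermal energy added, J
C_th = 38.46       # thermal mass (heat capacity) of the gear, J/degC

dT = Q / C_th
print(f"temperature rise = {dT:.2f} degC")   # ~6.50 degC, as stated

# Equivalently, C_th = m * c for a homogeneous body; e.g. roughly 100 g of
# copper at ~385 J/(kg K) gives the same thermal mass (illustrative values).
m, c = 0.1, 384.6
assert abs(m * c - C_th) < 1e-9
```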
https://en.wikipedia.org/wiki/Thermal_mass
A thermal oscillator is a system where conduction along thermal gradients overshoots thermal equilibrium, resulting in thermal oscillations where parts of the system oscillate between being colder and hotter than average. [1]
https://en.wikipedia.org/wiki/Thermal_oscillator
In microfabrication, thermal oxidation is a way to produce a thin layer of oxide (usually silicon dioxide) on the surface of a wafer. The technique forces an oxidizing agent to diffuse into the wafer at high temperature and react with it. The rate of oxide growth is often predicted by the Deal-Grove model. [1] Thermal oxidation may be applied to different materials, but most commonly involves the oxidation of silicon substrates to produce silicon dioxide.

Thermal oxidation of silicon is usually performed at a temperature between 800 and 1200 °C, resulting in a so-called high temperature oxide (HTO) layer. It may use either water vapor (usually UHP steam) or molecular oxygen as the oxidant; it is consequently called either wet or dry oxidation. The reaction is one of the following:

Si + 2 H2O → SiO2 + 2 H2 (wet oxidation)
Si + O2 → SiO2 (dry oxidation)

The oxidizing ambient may also contain several percent of hydrochloric acid (HCl). The chlorine neutralizes metal ions that may occur in the oxide. Thermal oxide incorporates silicon consumed from the substrate and oxygen supplied from the ambient. Thus, it grows both down into the wafer and up out of it. For every unit thickness of silicon consumed, 2.17 unit thicknesses of oxide will appear. [2] If a bare silicon surface is oxidized, 46% of the oxide thickness will lie below the original surface, and 54% above it.

According to the commonly used Deal-Grove model, the time $t$ required to grow an oxide of thickness $X_o$, at a constant temperature, on a bare silicon surface is

$t = \frac{X_o^2}{B} + \frac{X_o}{B/A}$

where the constants $A$ and $B$ relate to properties of the reaction and the oxide layer, respectively. This model has further been adapted to account for self-limiting oxidation processes, as used for the fabrication and morphological design of Si nanowires and other nanostructures. [1] If a wafer that already contains oxide is placed in an oxidizing ambient, this equation must be modified by adding a corrective term $\tau$, the time that would have been required to grow the pre-existing oxide under current conditions. This term may be found using the equation for $t$ above. Solving the resulting quadratic equation for $X_o$ yields

$X_o = \frac{A}{2}\left[\sqrt{1 + \frac{4B(t + \tau)}{A^2}} - 1\right]$

Most thermal oxidation is performed in furnaces, at temperatures between 800 and 1200 °C. A single furnace accepts many wafers at the same time, in a specially designed quartz rack (called a "boat"). Historically, the boat entered the oxidation chamber from the side (this design is called "horizontal") and held the wafers vertically, beside each other. However, many modern designs hold the wafers horizontally, above and below each other, and load them into the oxidation chamber from below. Because vertical furnaces stand higher than horizontal furnaces, they may not fit into some microfabrication facilities. They do, however, help to prevent dust contamination: unlike horizontal furnaces, in which falling dust can contaminate any wafer, vertical furnaces use enclosed cabinets with air filtration systems to prevent dust from reaching the wafers. Vertical furnaces also eliminate an issue that plagued horizontal furnaces: non-uniformity of the grown oxide across the wafer. [3] Horizontal furnaces typically have convection currents inside the tube, which cause the bottom of the tube to be slightly colder than the top. As the wafers lie vertically in the tube, the convection, and the temperature gradient with it, cause the top of each wafer to grow a thicker oxide layer than the bottom.
Vertical furnaces solve this problem by holding the wafers horizontally and flowing the gas from the top of the furnace to the bottom, significantly damping any thermal convection. Vertical furnaces also allow the use of load locks to purge the wafers with nitrogen before oxidation, to limit the growth of native oxide on the Si surface.

Wet oxidation is preferred to dry oxidation for growing thick oxides because of its higher growth rate. However, fast oxidation leaves more dangling bonds at the silicon interface, which produce quantum states for electrons and allow current to leak along the interface. (This is called a "dirty" interface.) Wet oxidation also yields a lower-density oxide, with lower dielectric strength. The long time required to grow a thick oxide in dry oxidation makes that process impractical, so thick oxides are usually grown with a long wet oxidation bracketed by short dry ones (a dry-wet-dry cycle). The beginning and ending dry oxidations produce films of high-quality oxide at the outer and inner surfaces of the oxide layer, respectively.

Mobile metal ions can degrade the performance of MOSFETs (sodium is of particular concern). However, chlorine can immobilize sodium by forming sodium chloride. Chlorine is often introduced by adding hydrogen chloride or trichloroethylene to the oxidizing medium; its presence also increases the rate of oxidation.

Thermal oxidation can be performed on selected areas of a wafer and blocked on others. This process, first developed at Philips, [4] is commonly referred to as the local oxidation of silicon (LOCOS) process. Areas which are not to be oxidized are covered with a film of silicon nitride, which blocks diffusion of oxygen and water vapor because it oxidizes at a much slower rate. [5] The nitride is removed after oxidation is complete. This process cannot produce sharp features, because lateral (parallel to the surface) diffusion of oxidant molecules under the nitride mask causes the oxide to protrude into the masked area.

Because impurities dissolve differently in silicon and oxide, a growing oxide will selectively take up or reject dopants. This redistribution is governed by the segregation coefficient, which determines how strongly the oxide absorbs or rejects the dopant, and by the diffusivity. The orientation of the silicon crystal also affects oxidation: a <100> wafer (see Miller indices) oxidizes more slowly than a <111> wafer, but produces an electrically cleaner oxide interface.

Thermal oxidation of any variety produces a higher-quality oxide, with a much cleaner interface, than chemical vapor deposition of oxide, which yields a low temperature oxide layer (reaction of TEOS at about 600 °C). However, the high temperatures required to produce a high temperature oxide (HTO) restrict its usability. For instance, in MOSFET processes, thermal oxidation is never performed after the doping for the source and drain terminals, because it would disturb the placement of the dopants.
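Returning to the Deal-Grove relations given earlier, the sketch below computes oxide thickness as a function of time and the inverse (time to grow a target thickness). The rate constants A and B are illustrative textbook-style values for wet oxidation near 1000 °C; real values must come from calibrated process data.

```python
import math

# Deal-Grove model: X^2 + A*X = B*(t + tau). Solving for thickness X(t):
#   X = (A/2) * (sqrt(1 + 4*B*(t + tau)/A**2) - 1)
# A and B below are illustrative wet-oxidation values near 1000 degC.
A = 0.226    # um
B = 0.287    # um^2/hour

def oxide_thickness(t_hours, tau_hours=0.0):
    """Oxide thickness in um after t_hours (tau accounts for pre-existing oxide)."""
    return (A / 2.0) * (math.sqrt(1.0 + 4.0 * B * (t_hours + tau_hours) / A**2) - 1.0)

def growth_time(x_um):
    """Inverse relation: time in hours to grow thickness x on bare silicon."""
    return x_um**2 / B + x_um / (B / A)

for t in (0.5, 1.0, 2.0):
    print(f"t = {t:.1f} h -> X ~ {oxide_thickness(t):.3f} um")
print(f"time for 0.5 um: {growth_time(0.5):.2f} h")
```

The two regimes of the model are visible in the formulas: for thin oxides the linear term X/(B/A) dominates (reaction-limited), while for thick oxides the quadratic term X^2/B dominates (diffusion-limited), which is why growth slows as the oxide thickens.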
https://en.wikipedia.org/wiki/Thermal_oxidation
A thermal oxidizer (also known as a thermal oxidiser or thermal incinerator) is a process unit for air pollution control in many chemical plants that decomposes hazardous gases at high temperature before releasing the treated stream into the atmosphere. Thermal oxidizers are typically used to destroy hazardous air pollutants (HAPs) and volatile organic compounds (VOCs) from industrial air streams. These pollutants are generally hydrocarbon-based; when destroyed via thermal combustion, they are chemically oxidized to form CO2 and H2O.

Three main factors in designing effective thermal oxidizers are temperature, residence time, and turbulence. The temperature needs to be high enough to ignite the waste gas; most organic compounds ignite at temperatures between 590 °C (1,094 °F) and 650 °C (1,202 °F). To ensure near-complete destruction of hazardous gases, most basic oxidizers are operated at much higher temperatures. When a catalyst is used, the operating temperature range may be lower. Sufficient residence time is needed for the combustion reaction to occur. The turbulence factor refers to the mixing of combustion air with the hazardous gases. [1] [2]

The simplest thermal oxidation technology is the direct-fired thermal oxidizer. A process stream with hazardous gases is introduced into a firing box through or near the burner, and enough residence time is provided to reach the desired destruction removal efficiency (DRE) of the VOCs. Most direct-fired thermal oxidizers operate at temperatures between 980 °C (1,800 °F) and 1,200 °C (2,190 °F), with air flow rates of 0.24 to 24 standard cubic meters per second. [1] Also called afterburners in cases where the input gases come from a process where combustion is incomplete, [1] these systems are the least capital intensive and can be integrated with downstream boilers and heat exchangers to optimize fuel efficiency. Thermal oxidizers are best applied where there is a very high concentration of VOCs to act as the fuel source (instead of natural gas or oil) for complete combustion at the targeted operating temperature. [citation needed]

One of the most widely accepted air pollution control technologies across industry today is the regenerative thermal oxidizer, commonly referred to as an RTO. RTOs use a ceramic bed, heated from a previous oxidation cycle, to preheat the input gases and partially oxidize them. The preheated gases enter a combustion chamber that is heated by an external fuel source to reach the target oxidation temperature, in the range between 760 °C (1,400 °F) and 820 °C (1,510 °F). The final temperature may be as high as 1,100 °C (2,010 °F) for applications that require maximum destruction. Air flow rates are 2.4 to 240 standard cubic meters per second. [4] RTOs are very versatile and extremely efficient: thermal efficiency can reach 95%. They are regularly used for abating solvent fumes, odours, etc. from a wide range of industries. Regenerative thermal oxidizers are ideal for low to high VOC concentrations up to 10 g/m3 solvent. Many regenerative thermal oxidizers on the market are capable of 99.5+% VOC oxidation or destruction efficiency, and the ceramic heat exchangers in the towers can be designed for thermal efficiencies as high as 97+%.

Ventilation air methane thermal oxidizers (VAMTOX) are used to destroy methane in the exhaust air of underground coal mine shafts.
Methane is a greenhouse gas; when oxidized via thermal combustion, it is chemically altered to form CO2 and H2O, and CO2 is 25 times less potent than methane with regard to global warming when emitted into the atmosphere. Concentrations of methane in the ventilation exhaust air of coal and trona mines are very dilute, typically below 1% and often below 0.5%. VAMTOX units have a system of valves and dampers that direct the air flow across one or more ceramic-filled beds. On start-up, the system preheats by raising the temperature of the heat-exchanging ceramic material in the beds to or above the auto-oxidation temperature of methane, 1,000 °C (1,830 °F), at which point the preheating system is turned off and mine exhaust air is introduced. The methane-laden air then reaches the preheated beds and combusts, and the heat of combustion is transferred back to the beds, thereby maintaining the temperature at or above what is necessary to support auto-thermal operation. [citation needed]

A less commonly used technology is the thermal recuperative oxidizer. Thermal recuperative oxidizers have a primary and/or secondary heat exchanger within the system. A primary heat exchanger preheats the incoming dirty air by recuperating heat from the exiting clean air. This is done with a shell-and-tube heat exchanger or a plate heat exchanger: as the incoming air passes on one side of the metal tube or plate, hot clean air from the combustion chamber passes on the other side, and heat is transferred to the incoming air by conduction through the metal. In a secondary heat exchanger the same concept applies, but the air being heated by the outgoing clean process stream is returned to another part of the plant, perhaps back to the process.

Biomass, such as wood chips, can be used as the fuel for a thermal oxidizer. The biomass is gasified, and the stream with hazardous gases is mixed with the biomass gas in a firing box. Sufficient turbulence, retention time, oxygen content and temperature ensure destruction of the VOCs. Such a biomass-fired thermal oxidizer has been installed at Warwick Mills, New Hampshire. The inlet concentrations are between 3,000 and 10,000 ppm VOC, and the outlet concentration is below 3 ppm, giving a VOC destruction efficiency of 99.8-99.9%. [5]

In a flameless thermal oxidizer system, waste gas, ambient air, and auxiliary fuel are premixed before the combined gaseous mixture passes through a preheated inert ceramic media bed. Through the transfer of heat from the ceramic media to the gaseous mixture, the organic compounds in the gas are oxidized to innocuous byproducts, i.e. carbon dioxide (CO2) and water vapor (H2O), while also releasing heat into the ceramic media bed. [6] The gas mixture temperature is kept below the lower flammability limit (LFL) based on the percentages of each organic species present. Flameless thermal oxidizers are designed to operate safely and reliably below the composite LFL while maintaining a constant operating temperature. Waste gas streams experience multiple seconds of residence time at high temperatures, leading to measured destruction removal efficiencies that exceed 99.9999%. [citation needed] Premixing all of the gases prior to treatment eliminates localized high temperatures, which keeps thermal NOx typically below 2 ppmV. Flameless thermal oxidizer technology was originally developed at the U.S.
Department of Energy to convert energy more efficiently in burners, process heaters, and other thermal systems.

A fluidized bed concentrator (FBC) uses a bed of activated carbon beads to adsorb volatile organic compounds (VOCs) from the exhaust gas. Evolving from earlier fixed-bed and carbon-rotor concentrators, the FBC system forces the VOC-laden air through several perforated steel trays, increasing the velocity of the air and allowing the sub-millimeter carbon beads to fluidize, or behave as if suspended in a liquid. This increases the surface area of the carbon-gas interaction, making it more effective at capturing VOCs.

The catalytic oxidizer (also known as a catalytic incinerator) is another category of oxidation system, similar to typical thermal oxidizers except that it uses a catalyst to promote the oxidation. Catalytic oxidation occurs through a chemical reaction between the VOC hydrocarbon molecules and a precious-metal catalyst bed internal to the oxidizer system. A catalyst is a substance used to accelerate the rate of a chemical reaction, allowing the reaction to occur in a temperature range between 340 °C (644 °F) and 540 °C (1,004 °F). [7] A catalyst can be used in a regenerative thermal oxidizer (RTO) to allow lower operating temperatures; this configuration is also called a regenerative catalytic oxidizer, or RCO. [4] For example, the thermal ignition temperature of carbon monoxide is normally 609 °C (1,128 °F); by utilizing a suitable oxidation catalyst, the ignition temperature can be reduced to around 200 °C (392 °F). [8] This can result in lower operating costs than an RTO. Most systems operate within the 260 °C (500 °F) to 1,000 °C (1,830 °F) range. Some systems are designed to operate both as RCOs and RTOs. When these systems are used, special design considerations (dilution of the inlet gas or recycling) are applied to reduce the probability of overheating, as high temperatures would deactivate the catalyst, e.g. by sintering of the active material. [citation needed] Catalytic oxidizers can also incorporate recuperative heat recovery to reduce the fuel requirement: the hot exhaust gases from the oxidizer pass through a heat exchanger to heat the new incoming air. [7]
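Destruction removal efficiency (DRE), cited repeatedly above, is simply the fraction of inlet VOC destroyed. A minimal sketch using the inlet/outlet figures quoted in the text for the biomass-fired unit:

```python
# Destruction removal efficiency: DRE = (C_in - C_out) / C_in.
# Concentrations below are the figures quoted in the text for the
# biomass-fired unit (3,000-10,000 ppm in, below ~3 ppm out).
def dre(c_in_ppm, c_out_ppm):
    return (c_in_ppm - c_out_ppm) / c_in_ppm

for c_in in (3000.0, 10000.0):
    print(f"inlet {c_in:6.0f} ppm -> DRE = {dre(c_in, 3.0) * 100:.2f}%")
```

Both cases land in the ~99.9% ballpark cited in the text; the flameless systems' quoted 99.9999% figures correspond to outlet concentrations several orders of magnitude lower still.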
https://en.wikipedia.org/wiki/Thermal_oxidizer
Thermal physics is the combined study of thermodynamics, statistical mechanics, and the kinetic theory of gases. This umbrella subject is typically designed for physics students and functions to provide a general introduction to each of the three core heat-related subjects. Other authors, however, define thermal physics more loosely as the combination of only thermodynamics and statistical mechanics. [1] Thermal physics can be seen as the study of systems with a large number of atoms; it unites thermodynamics and statistical mechanics. Generally speaking, thermal physics studies the statistical nature of physical systems from an energetic perspective. Starting with the basics of heat and temperature, thermal physics analyzes the first and second laws of thermodynamics from the statistical perspective, in terms of the number of microstates corresponding to a given macrostate. In addition, the concept of entropy is studied via quantum theory. A central topic in thermal physics is the canonical probability distribution. Photons and phonons are also studied, showing that the oscillations of electromagnetic fields and of crystal lattices have much in common: waves form a basis for both, provided one incorporates quantum theory. Other topics studied in thermal physics include: chemical potential, the quantum nature of an ideal gas (i.e. in terms of fermions and bosons), Bose-Einstein condensation, Gibbs free energy, Helmholtz free energy, chemical equilibrium, phase equilibrium, the equipartition theorem, entropy at absolute zero, and transport processes such as mean free path, viscosity, and conduction. [2]
https://en.wikipedia.org/wiki/Thermal_physics
Thermal pollution, sometimes called "thermal enrichment", is the degradation of water quality by any process that changes ambient water temperature. It is the rise or drop in the temperature of a natural body of water caused by human influence. Thermal pollution, unlike chemical pollution, results in a change in the physical properties of water. A common cause of thermal pollution is the use of water as a coolant by power plants and industrial manufacturers. [1] Urban runoff (stormwater discharged to surface waters from rooftops, roads, and parking lots) and reservoirs can also be sources of thermal pollution. [4] Thermal pollution can also be caused by the release of very cold water from the base of reservoirs into warmer rivers.

When water used as a coolant is returned to the natural environment at a higher temperature, the sudden change in temperature decreases the oxygen supply and affects ecosystem composition. Fish and other organisms adapted to a particular temperature range can be killed by an abrupt change in water temperature (either a rapid increase or decrease), known as "thermal shock". Warm coolant water can also have long-term effects on water temperature, increasing the overall temperature of water bodies, including deep water. Seasonality affects how these temperature increases are distributed throughout the water column. Elevated water temperatures decrease oxygen levels, which can kill fish and alter food chain composition, reduce species biodiversity, and foster invasion by new thermophilic species. [5] [6]: 375

In the United States, about 75 to 80 percent of thermal pollution is generated by power plants. [6]: 376 The remainder is from industrial sources such as petroleum refineries, pulp and paper mills, chemical plants, steel mills and smelters. [7]: 4-2 [8] Heated water from these sources may be controlled with various cooling systems, such as cooling ponds and cooling towers. One of the largest contributors to thermal pollution is once-through cooling (OTC) systems, which do not reduce temperature as effectively as such systems. A large power plant may withdraw and discharge as much as 500 million gallons per day. [10] These systems produce water 10 °C warmer on average. [11] For example, the Potrero Generating Station in San Francisco (closed in 2011) used OTC and discharged water to San Francisco Bay approximately 10 °C (20 °F) above the ambient bay temperature. [12] Over 1,200 facilities in the United States used OTC systems as of 2014. [7]: 4-4

Temperatures can be taken through remote sensing techniques to continuously monitor plants' pollution. [13] This aids in quantifying each plant's specific effects and allows for tighter regulation of thermal pollution. Converting facilities from once-through cooling to closed-loop systems can significantly decrease the thermal pollution emitted. [10] These systems release water at a temperature more comparable to the natural environment.

As water stratifies within man-made dams, the temperature at the bottom drops dramatically. Many dams are constructed to release this cold water from the bottom into natural systems. [14] This may be mitigated by designing the dam to release warmer surface waters instead of the colder water at the bottom of the reservoir. [15]

During warm weather, urban runoff can have significant thermal impacts on small streams. As storm water passes over hot rooftops, parking lots, roads and sidewalks, it absorbs some of their heat, an effect of the urban heat island.
Storm water management facilities that absorb runoff or direct it into groundwater, such as bioretention systems and infiltration basins, reduce these thermal effects by allowing the water more time to release excess heat before entering the aquatic environment. These related systems for managing runoff are components of an expanding urban design approach commonly called green infrastructure. [16] Retention basins (stormwater ponds) tend to be less effective at reducing runoff temperature, as the water may be heated by the sun before being discharged to a receiving stream. [17]

Elevated temperature typically decreases the level of dissolved oxygen in water, as gases are less soluble in hotter liquids. This can harm aquatic animals such as fish, amphibians and other aquatic organisms. Thermal pollution may also increase the metabolic rate of aquatic animals by increasing enzyme activity, resulting in these organisms consuming more food in a shorter time than if their environment were not changed. [5]: 179 An increased metabolic rate may deplete resources, and better-adapted organisms moving in may have an advantage over organisms that are not used to the warmer temperature. As a result, the food chains of the old and new environments may be compromised. Some fish species will avoid stream segments or coastal areas adjacent to a thermal discharge, and biodiversity can be decreased as a result. [20]: 415-17 [6]: 380 High temperature limits oxygen dispersion into deeper waters, contributing to anaerobic conditions. This can lead to increased bacteria levels when there is an ample food supply. Many aquatic species will fail to reproduce at elevated temperatures. [5]: 179-80

Primary producers (e.g. plants, cyanobacteria) are affected by warm water because higher water temperature increases plant growth rates, resulting in a shorter lifespan and species overpopulation. The increased temperature can also change the balance of microbial growth, including the rate of algae blooms, which reduce dissolved oxygen concentrations. [21]

Temperature changes of even one to two degrees Celsius can cause significant changes in organism metabolism and other adverse cellular effects. Principal adverse changes can include rendering cell walls less permeable to necessary osmosis, coagulation of cell proteins, and alteration of enzyme metabolism. These cellular-level effects can adversely affect mortality and reproduction. A large increase in temperature can lead to the denaturing of life-supporting enzymes by breaking down hydrogen and disulphide bonds within the quaternary structure of the enzymes. Decreased enzyme activity in aquatic organisms can cause problems such as the inability to break down lipids, which leads to malnutrition.

Increased water temperature can also increase the solubility and kinetics of metals, which can increase the uptake of heavy metals by aquatic organisms. This can lead to toxic outcomes for these species, as well as the build-up of heavy metals at higher trophic levels in the food chain, increasing human exposure via dietary ingestion. [21]

In limited cases, warm water has little deleterious effect and may even lead to improved function of the receiving aquatic ecosystem. This phenomenon is seen especially in seasonal waters. An extreme case derives from the aggregational habits of the manatee, which often uses power plant discharge sites during winter. Projections suggest that manatee populations would decline upon the removal of these discharges. [22]
Releases of unnaturally cold water from reservoirs can dramatically change the fish and macroinvertebrate fauna of rivers, and reduce river productivity. [ 23 ] In Australia, where many rivers have warmer temperature regimes, native fish species have been eliminated, and macroinvertebrate fauna have been drastically altered. Survival rates of fish have dropped by up to 75% due to cold water releases. [ 14 ] When a power plant first opens or shuts down for repair or other causes, fish and other organisms adapted to a particular temperature range can be killed by the abrupt change in water temperature, either an increase or decrease, known as "thermal shock". [ 6 ] : 380 [ 24 ] : 478 Water warming effects, as opposed to water cooling effects, have been the most studied with regard to biogeochemical effects. Much of this research concerns the long-term effects on lakes after a nuclear power plant has been decommissioned. Overall, there is support for thermal pollution leading to an increase in water temperatures. [ 25 ] When power plants are active, short-term water temperature increases are correlated with electrical needs, with more coolant released during the winter months. Water warming has also been seen to persist in systems for long periods of time, even after plants have been removed. [ 3 ] When warm water from power plant coolant enters systems, it often mixes, leading to general increases in water temperature throughout the water body, including deep cooler water. Specifically in lakes and similar water bodies, stratification leads to different effects on a seasonal basis. In the summer, thermal pollution has been seen to increase deeper water temperature more dramatically than surface water, though stratification still exists, while in the winter surface water temperatures see a larger increase. Stratification is reduced in winter months due to thermal pollution, often eliminating the thermocline. [ 3 ] A study looking at the effect of a removed nuclear power plant in Lake Stechlin, Germany, found a 2.33 °C increase persisted in surface water during the winter and a 2.04 °C increase persisted in deep water during the summer, with marginal increases throughout the water column in both winter and summer. [ 3 ] Stratification and water temperature differences due to thermal pollution seem to correlate with nutrient cycling of phosphorus and nitrogen, as oftentimes water bodies that receive coolant will shift toward eutrophication. No clear data has been obtained on this, though, as it is difficult to differentiate influences from other industry and agriculture. [ 26 ] [ 27 ] Similar to effects seen in aquatic systems due to climatic warming of water, thermal pollution has also been seen to increase surface temperatures in the summer. This can create surface water temperatures that lead to releases of warm air into the atmosphere, increasing air temperature. [ 3 ] It therefore can be seen as a contributor to global warming. [ 28 ] Many ecological effects will be compounded by climate change as well, as ambient temperature rises in water bodies. [ 11 ] Spatial and climatic factors can impact the severity of water warming due to thermal pollution. High wind speeds tend to increase the impact of thermal pollution. Rivers and large bodies of water also tend to lose the effects of thermal pollution with increasing distance from the source. [ 25 ] [ 29 ] Rivers present a unique problem with thermal pollution.
As water temperatures are elevated upstream, power plants downstream receive warmer waters. Evidence of this effect has been seen along the Mississippi River, as power plants are forced to use warmer waters as their coolants. [ 30 ] This reduces the efficiency of the plants, forcing them to withdraw more water and produce more thermal pollution.
https://en.wikipedia.org/wiki/Thermal_pollution
A thermal power station, also known as a thermal power plant, is a type of power station in which the heat energy generated from various fuel sources (e.g., coal, natural gas, nuclear fuel, etc.) is converted to electrical energy. [ 1 ] The heat from the source is converted into mechanical energy using a thermodynamic power cycle (such as a Diesel cycle, Rankine cycle, Brayton cycle, etc.). The most common cycle involves a working fluid (often water) heated and boiled under high pressure in a pressure vessel to produce high-pressure steam. This high-pressure steam is then directed to a turbine, where it rotates the turbine's blades. The rotating turbine is mechanically connected to an electric generator which converts rotary motion into electricity. Fuels such as natural gas or oil can also be burnt directly in gas turbines (internal combustion), skipping the steam generation step. These plants can be of the open cycle or the more efficient combined cycle type. The majority of the world's thermal power stations are driven by steam turbines, gas turbines, or a combination of the two. The efficiency of a thermal power station is determined by how effectively it converts heat energy into electrical energy, specifically the ratio of saleable electricity to the heating value of the fuel used. Different thermodynamic cycles have varying efficiencies, with the Rankine cycle generally being more efficient than the Otto or Diesel cycles. [ 1 ] In the Rankine cycle, the low-pressure exhaust from the turbine enters a steam condenser where it is cooled to produce hot condensate, which is recycled to the heating process to generate even more high-pressure steam. The design of thermal power stations depends on the intended energy source. In addition to fossil and nuclear fuel, some stations use geothermal power, solar energy, biofuels, and waste incineration. Certain thermal power stations are also designed to produce heat for industrial purposes, provide district heating, or desalinate water, in addition to generating electrical power. Emerging technologies such as supercritical and ultra-supercritical thermal power stations operate at higher temperatures and pressures for increased efficiency and reduced emissions. Cogeneration or CHP (combined heat and power) technology, the simultaneous production of electricity and useful heat from the same fuel source, improves the overall efficiency by using waste heat for heating purposes. Older, less efficient thermal power stations are being decommissioned or adapted to use cleaner and renewable energy sources. Thermal power stations produce 70% of the world's electricity. [ 2 ] They often provide reliable, stable, and continuous baseload power supply essential for economic growth. They ensure energy security by maintaining grid stability, especially in regions where they complement intermittent renewable energy sources dependent on weather conditions. The operation of thermal power stations contributes to the local economy by creating jobs in construction, maintenance, and fuel extraction industries. On the other hand, the burning of fossil fuels releases greenhouse gases (contributing to climate change) and air pollutants such as sulfur oxides and nitrogen oxides (leading to acid rain and respiratory diseases). Carbon capture and storage (CCS) technology can reduce the greenhouse gas emissions of fossil-fuel-based thermal power stations; however, it is expensive and has seldom been implemented.
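To make the efficiency definition above concrete, the sketch below computes the ratio of saleable electricity to fuel heating value for a hypothetical coal unit; the fuel rate and heating value are invented round numbers, not data for any particular plant:

```python
# Minimal sketch: thermal efficiency as electricity out / fuel heating value in.
# All numbers are illustrative assumptions, not measurements.

electric_output_mw = 500.0     # net saleable electric output, MW
coal_rate_kg_s = 50.0          # fuel consumption, kg/s (assumed)
heating_value_mj_kg = 25.0     # heating value of the coal, MJ/kg (assumed)

heat_input_mw = coal_rate_kg_s * heating_value_mj_kg   # MJ/s == MW
efficiency = electric_output_mw / heat_input_mw

print(f"heat input : {heat_input_mw:.0f} MW_th")
print(f"efficiency : {efficiency:.0%}")   # 500 / 1250 = 40%
```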
Government regulations and international agreements are being enforced to reduce harmful emissions and promote cleaner power generation. Almost all coal-fired power stations, petroleum, nuclear, geothermal, solar thermal electric, and waste incineration plants, as well as all natural gas power stations, are thermal. Natural gas is frequently burned in gas turbines as well as boilers. The waste heat from a gas turbine, in the form of hot exhaust gas, can be used to raise steam by passing this gas through a heat recovery steam generator (HRSG). The steam is then used to drive a steam turbine in a combined cycle plant that improves overall efficiency. Power stations burning coal, fuel oil, or natural gas are often called fossil fuel power stations. Some biomass-fueled thermal power stations have also appeared. Non-nuclear thermal power stations, particularly fossil-fueled plants, which do not use cogeneration are sometimes referred to as conventional power stations. Commercial electric utility power stations are usually constructed on a large scale and designed for continuous operation. Virtually all electric power stations use three-phase electrical generators to produce alternating current (AC) electric power at a frequency of 50 Hz or 60 Hz. Large companies or institutions may have their own power stations to supply heating or electricity to their facilities, especially if steam is created anyway for other purposes. Steam-driven power plants were used to propel most ships for much of the 20th century [ citation needed ] . Shipboard power plants usually directly couple the turbine to the ship's propellers through gearboxes. Power stations in such ships also provide steam to smaller turbines driving electric generators to supply electricity. Nuclear marine propulsion is, with few exceptions, used only in naval vessels. There have been many turbo-electric ships in which a steam-driven turbine drives an electric generator which powers an electric motor for propulsion. Cogeneration plants, often called combined heat and power (CHP) facilities, produce both electric power and heat for process heat or space heating, such as steam and hot water. The reciprocating steam engine has been used to produce mechanical power since the 18th century, with notable improvements being made by James Watt. When the first commercially developed central electrical power stations were established in 1882 at Pearl Street Station in New York and Holborn Viaduct power station in London, reciprocating steam engines were used. The development of the steam turbine in 1884 provided larger and more efficient machine designs for central generating stations. [ 3 ] By 1892 the turbine was considered a better alternative to reciprocating engines. [ 4 ] Turbines offered higher speeds, more compact machinery, and stable speed regulation allowing for parallel synchronous operation of generators on a common bus. After about 1905, turbines entirely replaced reciprocating engines in almost all large central power stations. The largest reciprocating engine-generator sets ever built were completed in 1901 for the Manhattan Elevated Railway. Each of seventeen units weighed about 500 tons and was rated 6000 kilowatts; a contemporary turbine set of similar rating would have weighed about 20% as much. [ 5 ] The energy efficiency of a conventional thermal power station is defined as saleable energy produced as a percent of the heating value of the fuel consumed.
A simple cycle gas turbine achieves energy conversion efficiencies from 20 to 35%. [ 6 ] Typical coal-based power plants operating at steam pressures of 170 bar and 570 °C run at efficiencies of 35 to 38%, [ 7 ] with state-of-the-art fossil fuel plants at 46% efficiency. [ 8 ] Combined-cycle systems can reach higher values. As with all heat engines, their efficiency is limited and governed by the laws of thermodynamics. The Carnot efficiency dictates that higher efficiencies can be attained by increasing the temperature of the steam. Sub-critical pressure fossil fuel power stations can achieve 36–40% efficiency. Supercritical designs have efficiencies in the low to mid 40% range, with new ultra-supercritical designs using pressures above 4,400 psi (30 MPa) and multiple-stage reheat reaching 45–48% efficiency. [ 7 ] Above the critical point for water of 705 °F (374 °C) and 3,212 psi (22.15 MPa), there is no phase transition from water to steam, but only a gradual decrease in density. Currently most nuclear power stations must operate below the temperatures and pressures that coal-fired plants do, in order to provide more conservative safety margins within the systems that remove heat from the nuclear fuel. This limits their thermodynamic efficiency to 30–32%. Some advanced reactor designs being studied, such as the very-high-temperature reactor, Advanced Gas-cooled Reactor, and supercritical water reactor, would operate at temperatures and pressures similar to current coal plants, producing comparable thermodynamic efficiency. The energy of a thermal power station not utilized in power production must leave the plant in the form of heat to the environment. This waste heat can go through a condenser and be disposed of with cooling water or in cooling towers. If the waste heat is instead used for district heating, it is called cogeneration. An important class of thermal power station is that associated with desalination facilities. These are typically found in desert countries with large supplies of natural gas. In these plants, freshwater production and electricity are equally important co-products. Other types of power stations are subject to different efficiency limitations. Most hydropower stations in the United States are about 90 percent efficient in converting the energy of falling water into electricity, [ 9 ] while the efficiency of a wind turbine is limited by Betz's law to about 59.3%, and actual wind turbines show lower efficiency. The direct cost of electric energy produced by a thermal power station is the result of the cost of fuel, capital cost for the plant, operator labour, maintenance, and such factors as ash handling and disposal. Indirect social or environmental costs, such as the economic value of environmental impacts, or environmental and health effects of the complete fuel cycle and plant decommissioning, are not usually assigned to generation costs for thermal stations in utility practice, but may form part of an environmental impact assessment. Those indirect costs belong to the broader concept of externalities. In the nuclear plant field, steam generator refers to a specific type of large heat exchanger used in a pressurized water reactor (PWR) to thermally connect the primary (reactor plant) and secondary (steam plant) systems, which generates steam. In a boiling water reactor (BWR), no separate steam generator is used and water boils in the reactor core.
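Returning to the efficiency figures above: the Carnot bound set by the steam and heat-rejection temperatures is easy to compute, and the gap between it and the real 35–38% figures reflects the irreversibilities of an actual plant. A minimal sketch (the 25 °C heat-rejection temperature anticipates the condenser conditions described later; the 320 °C case is an illustrative stand-in for nuclear steam conditions):

```python
# Carnot efficiency bound: eta = 1 - T_cold / T_hot (absolute temperatures).
def carnot(t_hot_c: float, t_cold_c: float) -> float:
    return 1.0 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

print(f"{carnot(570.0, 25.0):.1%}")   # ~64.6% ideal bound for 570 degC steam
print(f"{carnot(320.0, 25.0):.1%}")   # ~49.7% for ~320 degC steam (illustrative nuclear case)
```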
In some industrial settings, there can also be steam-producing heat exchangers called heat recovery steam generators (HRSG) which utilize heat from some industrial process, most commonly hot exhaust from a gas turbine. The steam generating boiler has to produce steam at the high purity, pressure and temperature required for the steam turbine that drives the electrical generator. Geothermal plants do not need boilers because they use naturally occurring steam sources. Heat exchangers may be used where the geothermal steam is very corrosive or contains excessive suspended solids. A fossil fuel steam generator includes an economizer, a steam drum, and the furnace with its steam generating tubes and superheater coils. Necessary safety valves are located at suitable points to protect against excessive boiler pressure. The air and flue gas path equipment includes the forced draft (FD) fan, air preheater (AP), boiler furnace, induced draft (ID) fan, fly ash collectors (electrostatic precipitator or baghouse), and the flue-gas stack. [ 10 ] [ 11 ] [ 12 ] The boiler feed water used in the steam boiler is a means of transferring heat energy from the burning fuel to the mechanical energy of the spinning steam turbine. The total feed water consists of recirculated condensate water and purified makeup water. Because the metallic materials it contacts are subject to corrosion at high temperatures and pressures, the makeup water is highly purified before use. A system of water softeners and ion exchange demineralizers produces water so pure that it coincidentally becomes an electrical insulator, with conductivity in the range of 0.3–1.0 microsiemens per centimeter. The makeup water in a 500 MWe plant amounts to perhaps 120 US gallons per minute (7.6 L/s) to replace water drawn off from the boiler drums for water purity management, and to also offset the small losses from steam leaks in the system. The feed water cycle begins with condensate water being pumped out of the condenser after traveling through the steam turbines. The condensate flow rate at full load in a 500 MW plant is about 6,000 US gallons per minute (400 L/s). The water is usually pressurized in two stages, and typically flows through a series of six or seven intermediate feed water heaters, heated up at each point with steam extracted from an appropriate extraction connection on the turbines and gaining temperature at each stage. Typically, in the middle of this series of feedwater heaters, and before the second stage of pressurization, the condensate plus the makeup water flows through a deaerator [ 13 ] [ 14 ] that removes dissolved air from the water, further purifying and reducing its corrosiveness. The water may be dosed following this point with hydrazine, a chemical that removes the remaining oxygen in the water to below 5 parts per billion (ppb). It is also dosed with pH control agents such as ammonia or morpholine to keep the residual acidity low and thus non-corrosive.
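A quick check of the feedwater figures quoted above (120 US gal/min of makeup against 6,000 US gal/min of condensate) shows how small, but continuous, the replacement rate is:

```python
# Makeup water as a fraction of condensate flow in a 500 MW unit,
# using only the flow figures quoted in the text.
makeup_gpm = 120.0
condensate_gpm = 6000.0

print(f"makeup fraction: {makeup_gpm / condensate_gpm:.1%}")    # 2.0%
print(f"makeup flow    : {makeup_gpm * 3.785 / 60:.1f} L/s")    # ~7.6 L/s, as quoted
```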
The boiler is a rectangular furnace about 50 feet (15 m) on a side and 130 feet (40 m) tall. Its walls are made of a web of high-pressure steel tubes about 2.3 inches (58 mm) in diameter. [ citation needed ] Fuel such as pulverized coal is air-blown into the furnace through burners located at the four corners, or along one wall, or along two opposite walls, and it is ignited to rapidly burn, forming a large fireball at the center. The thermal radiation of the fireball heats the water that circulates through the boiler tubes near the boiler perimeter. The water circulation rate in the boiler is three to four times the throughput. As the water in the boiler circulates it absorbs heat and changes into steam. It is separated from the water inside a drum at the top of the furnace. The saturated steam is introduced into superheat pendant tubes that hang in the hottest part of the combustion gases as they exit the furnace. Here the steam is superheated to 1,000 °F (540 °C) to prepare it for the turbine. Plants that use gas turbines to heat the water for conversion into steam use boilers known as heat recovery steam generators (HRSG). The exhaust heat from the gas turbines is used to make superheated steam that is then used in a conventional water-steam generation cycle, as described in the gas turbine combined-cycle plants section. The water enters the boiler through a section in the convection pass called the economizer. From the economizer it passes to the steam drum and from there it goes through downcomers to inlet headers at the bottom of the water walls. From these headers the water rises through the water walls of the furnace, where some of it is turned into steam, and the mixture of water and steam then re-enters the steam drum. This process may be driven purely by natural circulation (because the water in the downcomers is denser than the water/steam mixture in the water walls) or assisted by pumps; a rough numeric sketch of the natural-circulation driving head follows this passage. In the steam drum, the water is returned to the downcomers and the steam is passed through a series of steam separators and dryers that remove water droplets from the steam. The dry steam then flows into the superheater coils. The boiler furnace auxiliary equipment includes coal feed nozzles and igniter guns, soot blowers, water lancing, and observation ports (in the furnace walls) for observation of the furnace interior. Furnace explosions due to any accumulation of combustible gases after a trip-out are avoided by flushing out such gases from the combustion zone before igniting the coal. The steam drum (as well as the superheater coils and headers) has air vents and drains needed for initial start up. Fossil fuel power stations often have a superheater section in the steam generating furnace. [ citation needed ] The steam passes through drying equipment inside the steam drum on to the superheater, a set of tubes in the furnace. Here the steam picks up more energy from hot flue gases outside the tubing, and its temperature is now superheated above the saturation temperature. The superheated steam is then piped through the main steam lines to the valves before the high-pressure turbine. Nuclear-powered steam plants do not have such sections but produce steam at essentially saturated conditions. Experimental nuclear plants were equipped with fossil-fired superheaters in an attempt to improve overall plant operating cost. [ citation needed ]
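As promised above, here is a rough estimate of the driving head behind natural circulation: the pressure difference comes from the density mismatch between the water-filled downcomers and the steam–water mixture in the water walls. The densities and height below are illustrative round numbers for a high-pressure drum boiler, not values from the text:

```python
# Rough driving head for natural circulation in a drum boiler:
#   delta_p = (rho_downcomer - rho_riser) * g * H
# All property values are illustrative assumptions.

g = 9.81            # m/s^2
height_m = 40.0     # effective height of the water walls (the text cites ~130 ft furnaces)
rho_down = 600.0    # saturated water in the downcomers, kg/m^3 (assumed, high pressure)
rho_riser = 300.0   # average steam-water mixture in the water walls, kg/m^3 (assumed)

delta_p = (rho_down - rho_riser) * g * height_m
print(f"driving head ~ {delta_p/1000:.0f} kPa")   # ~118 kPa available to drive circulation
```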
The condenser condenses the steam from the exhaust of the turbine into liquid to allow it to be pumped. If the condenser can be made cooler, the pressure of the exhaust steam is reduced and the efficiency of the cycle increases. The surface condenser is a shell and tube heat exchanger in which cooling water is circulated through the tubes. [ 11 ] [ 15 ] [ 16 ] [ 17 ] The exhaust steam from the low-pressure turbine enters the shell, where it is cooled and converted to condensate (water) by flowing over the tubes as shown in the adjacent diagram. Such condensers use steam ejectors or rotary motor-driven exhausters for continuous removal of air and gases from the steam side to maintain vacuum. For best efficiency, the temperature in the condenser must be kept as low as practical in order to achieve the lowest possible pressure in the condensing steam. Since the condenser temperature can almost always be kept significantly below 100 °C, where the vapor pressure of water is much less than atmospheric pressure, the condenser generally works under vacuum. Thus leaks of non-condensible air into the closed loop must be prevented. Typically the cooling water causes the steam to condense at a temperature of about 25 °C (77 °F), which creates an absolute pressure in the condenser of about 2–7 kPa (0.59–2.07 inHg), i.e. a vacuum of about −95 kPa (−28 inHg) relative to atmospheric pressure. The large decrease in volume that occurs when water vapor condenses to liquid creates the vacuum that generally increases the efficiency of the turbines. The limiting factor is the temperature of the cooling water, which is in turn limited by the prevailing average climatic conditions at the power station's location. It may be possible to lower the temperature beyond the turbine limits during winter, causing excessive condensation in the turbine. Plants operating in hot climates may have to reduce output if their source of condenser cooling water becomes warmer. Unfortunately this usually coincides with periods of high electrical demand for air conditioning. The condenser generally uses either circulating cooling water from a cooling tower to reject waste heat to the atmosphere, or once-through cooling (OTC) water from a river, lake or ocean. In the United States, about two-thirds of power plants use OTC systems, which often have significant adverse environmental impacts. The impacts include thermal pollution and killing large numbers of fish and other aquatic species at cooling water intakes. [ 18 ] [ 19 ] The heat absorbed by the circulating cooling water in the condenser tubes must also be removed to maintain the ability of the water to cool as it circulates. This is done by pumping the warm water from the condenser through either natural draft, forced draft or induced draft cooling towers (as seen in the adjacent image) that reduce the temperature of the water by evaporation, by about 11 to 17 °C (20 to 31 °F), expelling waste heat to the atmosphere. The circulation flow rate of the cooling water in a 500 MW unit is about 14.2 m³/s (500 ft³/s or 225,000 US gal/min) at full load. [ 20 ] The condenser tubes are typically made of stainless steel or other alloys to resist corrosion from either side. Nevertheless, they may become internally fouled during operation by bacteria or algae in the cooling water or by mineral scaling, all of which inhibit heat transfer and reduce thermodynamic efficiency. Many plants include an automatic cleaning system that circulates sponge rubber balls through the tubes to scrub them clean without the need to take the system off-line. [ citation needed ] The cooling water used to condense the steam in the condenser returns to its source without having been changed other than having been warmed. If the water returns to a local water body (rather than a circulating cooling tower), it is often tempered with cool 'raw' water to prevent thermal shock when discharged into that body of water.
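The condenser conditions quoted above (condensation near 25 °C giving an absolute pressure of a few kPa) follow directly from the vapor-pressure curve of water. A minimal sketch using the Antoine equation, with commonly tabulated constants for water in the 1–100 °C range (an approximation, not a steam-table lookup):

```python
# Antoine equation for water, roughly valid 1-100 degC (these constants give P in mmHg):
#   log10(P_mmHg) = A - B / (C + T_degC)
A, B, C = 8.07131, 1730.63, 233.426

def saturation_pressure_kpa(temp_c: float) -> float:
    p_mmhg = 10 ** (A - B / (C + temp_c))
    return p_mmhg * 0.133322  # mmHg -> kPa

for t in (25, 35, 45):
    print(f"{t} degC -> {saturation_pressure_kpa(t):.1f} kPa absolute")
# ~3.2 kPa at 25 degC, consistent with the 2-7 kPa condenser range quoted above.
```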
Another form of condensing system is the air-cooled condenser. The process is similar to that of a radiator and fan. Exhaust heat from the low-pressure section of a steam turbine runs through the condensing tubes; the tubes are usually finned, and ambient air is pushed through the fins with the help of a large fan. The steam condenses to water to be reused in the water-steam cycle. Air-cooled condensers typically operate at a higher temperature than water-cooled versions. While saving water, the efficiency of the cycle is reduced (resulting in more carbon dioxide per megawatt-hour of electricity). From the bottom of the condenser, powerful condensate pumps recycle the condensed steam (water) back to the water/steam cycle. Power station furnaces may have a reheater section containing tubes heated by hot flue gases outside the tubes. Exhaust steam from the high-pressure turbine is passed through these heated tubes to collect more energy before driving the intermediate and then low-pressure turbines. External fans are provided to give sufficient air for combustion. The primary air fan takes air from the atmosphere and first warms it in the air preheater for better economy. Primary air then passes through the coal pulverizers and carries the coal dust to the burners for injection into the furnace. The secondary air fan likewise takes air from the atmosphere and first warms it in the air preheater. Secondary air is mixed with the coal/primary air flow in the burners. The induced draft fan assists the FD fan by drawing out combustible gases from the furnace, maintaining a slightly below-atmospheric pressure in the furnace to avoid leakage of combustion products from the boiler casing. A steam turbine generator consists of a series of steam turbines interconnected to each other and a generator on a common shaft. There is usually a high-pressure turbine at one end, followed by an intermediate-pressure turbine, and finally one, two, or three low-pressure turbines, and the shaft that connects to the generator. As steam moves through the system and loses pressure and thermal energy, it expands in volume, requiring increasing diameter and longer blades at each succeeding stage to extract the remaining energy. The entire rotating mass may be over 200 metric tons and 100 feet (30 m) long. It is so heavy that it must be kept turning slowly even when shut down (at 3 rpm) so that the shaft will not bow even slightly and become unbalanced. This is so important that it is one of only six functions of blackout emergency power batteries on site; the other five are emergency lighting, communication, station alarms, the generator hydrogen seal system, and turbogenerator lube oil. For a typical late 20th-century power station, superheated steam from the boiler is delivered through 14–16-inch-diameter (360–410 mm) piping at 2,400 psi (17 MPa; 160 atm) and 1,000 °F (540 °C) to the high-pressure turbine, where it falls in pressure to 600 psi (4.1 MPa; 41 atm) and to 600 °F (320 °C) in temperature through the stage. It exits via 24–26-inch-diameter (610–660 mm) cold reheat lines and passes back into the boiler, where the steam is reheated in special reheat pendant tubes back to 1,000 °F (540 °C). The hot reheat steam is conducted to the intermediate-pressure turbine, where it falls in both temperature and pressure and exits directly to the long-bladed low-pressure turbines and finally exits to the condenser. [ citation needed ] The generator, typically about 30 feet (9 m) long and 12 feet (3.7 m) in diameter, contains a stationary stator and a spinning rotor, each containing miles of heavy copper conductor.
There is generally no permanent magnet, thus preventing black starts. In operation it generates up to 21,000 amperes at 24,000 volts AC (504 MWe) as it spins at either 3,000 or 3,600 rpm, synchronized to the power grid. The rotor spins in a sealed chamber cooled with hydrogen gas, selected because it has the highest known heat transfer coefficient of any gas and for its low viscosity, which reduces windage losses. This system requires special handling during startup, with air in the chamber first displaced by carbon dioxide before filling with hydrogen. This ensures that a highly explosive hydrogen–oxygen environment is not created. The power grid frequency is 60 Hz across North America and 50 Hz in Europe, Oceania, Asia (Korea and parts of Japan are notable exceptions), and parts of Africa. The desired frequency affects the design of large turbines, since they are highly optimized for one particular speed; the speed–frequency relationship is sketched at the end of this passage. The electricity flows to a distribution yard where transformers increase the voltage for transmission to its destination. The steam turbine-driven generators have auxiliary systems enabling them to work satisfactorily and safely. The steam turbine generator, being rotating equipment, generally has a heavy, large-diameter shaft. The shaft therefore requires not only supports but also has to be kept in position while running. To minimize the frictional resistance to the rotation, the shaft has a number of bearings. The bearing shells, in which the shaft rotates, are lined with a low-friction material like Babbitt metal. Oil lubrication is provided to further reduce the friction between shaft and bearing surface and to limit the heat generated. As the combustion flue gas exits the boiler it is routed through a rotating flat basket of metal mesh, which picks up heat and returns it to incoming fresh air as the basket rotates. This is called the air preheater. The gas exiting the boiler is laden with fly ash, which consists of tiny spherical ash particles. The flue gas contains nitrogen along with the combustion products carbon dioxide, sulfur dioxide, and nitrogen oxides. The fly ash is removed by fabric bag filters in baghouses or by electrostatic precipitators. Once removed, the fly ash byproduct can sometimes be used in the manufacturing of concrete. This cleaning up of flue gases only occurs in plants fitted with the appropriate technology. Still, the majority of coal-fired power stations in the world do not have these facilities. [ citation needed ] Legislation in Europe has been effective in reducing flue gas pollution. Japan has been using flue gas cleaning technology for over 30 years and the US has been doing the same for over 25 years. China is now beginning to grapple with the pollution caused by coal-fired power stations. Where required by law, the sulfur and nitrogen oxide pollutants are removed by stack gas scrubbers, which use a pulverized limestone or other alkaline wet slurry to remove those pollutants from the exit stack gas. Other devices use catalysts to remove nitrogen oxide compounds from the flue-gas stream. The gas travelling up the flue-gas stack may by this time have dropped to about 50 °C (120 °F). A typical flue-gas stack may be 150–180 metres (490–590 ft) tall to disperse the remaining flue gas components in the atmosphere. The tallest flue-gas stack in the world, at the Ekibastuz GRES-2 Power Station in Kazakhstan, is 419.7 metres (1,377 ft) tall.
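As referenced above, the 3,000/3,600 rpm turbine speeds follow from the synchronous-speed relation N = 120·f/p for an AC machine with p poles; large turbogenerators are typically 2-pole machines. A minimal check (4-pole rows included for comparison):

```python
# Synchronous speed of an AC generator: N (rpm) = 120 * f / p
def sync_rpm(freq_hz: float, poles: int) -> float:
    return 120.0 * freq_hz / poles

for f in (50, 60):
    for p in (2, 4):
        print(f"{f} Hz, {p}-pole -> {sync_rpm(f, p):.0f} rpm")
# 2-pole machines give the 3,000 rpm (50 Hz) and 3,600 rpm (60 Hz) quoted above.
```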
In the United States and a number of other countries, atmospheric dispersion modeling [ 21 ] studies are required to determine the flue-gas stack height needed to comply with the local air pollution regulations. The United States also requires the height of a flue-gas stack to comply with what is known as the "good engineering practice" (GEP) stack height. [ 22 ] [ 23 ] In the case of existing flue gas stacks that exceed the GEP stack height, any air pollution dispersion modeling studies for such stacks must use the GEP stack height rather than the actual stack height (a short sketch of the GEP formula appears at the end of this passage). Carbon capture and storage (CCS) captures carbon dioxide from the flue gas of power plants or other industry, transporting it to an appropriate location where it can be buried securely in an underground reservoir. Between 1972 and 2017, plans were made to add CCS to enough coal and gas power plants to sequester 171 million tonnes of CO 2 per year, but by 2021 over 98% of these plans had failed. [ 24 ] Cost, the absence of measures to address long-term liability for stored CO 2 , and limited social acceptability have all contributed to project cancellations. [ 25 ] : 133 As of 2024, CCS is in operation at only five power plants worldwide. [ 26 ] Since there is continuous withdrawal of steam and continuous return of condensate to the boiler, losses due to blowdown and leakages have to be made up to maintain a desired water level in the boiler steam drum. For this, continuous make-up water is added to the boiler water system. Impurities in the raw water input to the plant generally consist of calcium and magnesium salts, which impart hardness to the water. Hardness in the make-up water to the boiler will form deposits on the tube water surfaces, which will lead to overheating and failure of the tubes. The salts have to be removed from the water, and that is done by a water demineralising treatment plant (DM). A DM plant generally consists of cation, anion, and mixed bed exchangers. Any ions in the final water from this process consist essentially of hydrogen ions and hydroxide ions, which recombine to form pure water. Very pure DM water becomes highly corrosive once it absorbs oxygen from the atmosphere because of its very high affinity for oxygen. The capacity of the DM plant is dictated by the type and quantity of salts in the raw water input. Some storage is essential, as the DM plant may be down for maintenance. For this purpose, a storage tank is installed from which DM water is continuously withdrawn for boiler make-up. The storage tank for DM water is made from materials not affected by corrosive water, such as PVC. The piping and valves are generally of stainless steel. Sometimes, a steam blanketing arrangement or a stainless steel doughnut float is provided on top of the water in the tank to avoid contact with air. DM water make-up is generally added at the steam space of the surface condenser (i.e., the vacuum side). This arrangement not only sprays the water but also deaerates it, with the dissolved gases removed by a de-aerator through an ejector attached to the condenser. In coal-fired power stations, the raw feed coal from the coal storage area is first crushed into small pieces and then conveyed to the coal feed hoppers at the boilers. The coal is next pulverized into a very fine powder. The pulverizers may be ball mills, rotating drum grinders, or other types of grinders.
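For the "good engineering practice" stack height mentioned above, the EPA's published formula is GEP = H + 1.5L, where H is the height of the nearby structure and L is the lesser of its height or projected width. The sketch below applies that formula to invented building dimensions:

```python
# Good engineering practice (GEP) stack height, per the EPA formula
#   GEP = H + 1.5 * L
# where H is the height of the nearby structure and L is the lesser of
# that structure's height or projected width.
def gep_stack_height(building_height_m: float, projected_width_m: float) -> float:
    L = min(building_height_m, projected_width_m)
    return building_height_m + 1.5 * L

# Illustrative building dimensions (invented for the example):
print(f"GEP height: {gep_stack_height(60.0, 40.0):.0f} m")  # 60 + 1.5*40 = 120 m
```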
Some power stations burn fuel oil rather than coal. The oil must be kept warm (above its pour point) in the fuel oil storage tanks to prevent the oil from congealing and becoming unpumpable. The oil is usually heated to about 100 °C before being pumped through the furnace fuel oil spray nozzles. Boilers in some power stations use processed natural gas as their main fuel. Other power stations may use processed natural gas as auxiliary fuel in the event that their main fuel supply (coal or oil) is interrupted. In such cases, separate gas burners are provided on the boiler furnaces. Barring gear (or "turning gear") is the mechanism provided to rotate the turbine generator shaft at a very low speed after unit stoppages. Once the unit is "tripped" (i.e., the steam inlet valve is closed), the turbine coasts down towards standstill. When it stops completely, there is a tendency for the turbine shaft to deflect or bend if allowed to remain in one position too long. This is because the heat inside the turbine casing tends to concentrate in the top half of the casing, making the top half portion of the shaft hotter than the bottom half. The shaft therefore could warp or bend by millionths of inches. This small shaft deflection, only detectable by eccentricity meters, would be enough to cause damaging vibrations to the entire steam turbine generator unit when it is restarted. The shaft is therefore automatically turned at low speed (about one percent of rated speed) by the barring gear until it has cooled sufficiently to permit a complete stop. An auxiliary oil system pump is used to supply oil [ clarification needed ] at the start-up of the steam turbine generator. It supplies the hydraulic oil system required for the steam turbine's main inlet steam stop valve, the governing control valves, the bearing and seal oil systems, the relevant hydraulic relays and other mechanisms. At a preset speed of the turbine during start-ups, a pump driven by the turbine main shaft takes over the functions of the auxiliary system. [ citation needed ] While small generators may be cooled by air drawn through filters at the inlet, larger units generally require special cooling arrangements. Hydrogen gas cooling, in an oil-sealed casing, is used because hydrogen has the highest known heat transfer coefficient of any gas and a low viscosity, which reduces windage losses. This system requires special handling during start-up, with air in the generator enclosure first displaced by carbon dioxide before filling with hydrogen. This ensures that the highly flammable hydrogen does not mix with oxygen in the air. The hydrogen pressure inside the casing is maintained slightly higher than atmospheric pressure to avoid outside air ingress, and up to about two atmospheres pressure to improve heat transfer capacity. The hydrogen must be sealed against outward leakage where the shaft emerges from the casing. Mechanical seals around the shaft are installed with a very small annular gap to avoid rubbing between the shaft and the seals on smaller turbines, with labyrinth-type seals on larger machines. Seal oil is used to prevent hydrogen gas leakage to the atmosphere. The generator also uses water cooling. Since the generator coils are at a potential of about 22 kV, an insulating barrier such as Teflon is used to interconnect the water line and the generator high-voltage windings. Demineralized water of low conductivity is used. The generator voltage for modern utility-connected generators ranges from 11 kV in smaller units to 30 kV in larger units.
The generator high-voltage leads are normally large aluminium channels because of their high current as compared to the cables used in smaller machines. They are enclosed in well-grounded aluminium bus ducts and are supported on suitable insulators. The generator high-voltage leads are connected to step-up transformers for connecting to a high-voltage electrical substation (usually in the range of 115 kV to 765 kV) for further transmission by the local power grid. The necessary protection and metering devices are included for the high-voltage leads. Thus, the steam turbine generator and the transformer form one unit. Smaller units may share a common generator step-up transformer with individual circuit breakers to connect the generators to a common bus. Most of the power station operational controls are automatic. At times, manual intervention may be required. Thus, the plant is provided with monitors and alarm systems that alert the plant operators when certain operating parameters are seriously deviating from their normal range. A central battery system consisting of lead–acid cell units is provided to supply emergency electric power, when needed, to essential items such as the power station's control systems, communication systems, generator hydrogen seal system, turbine lube oil pumps, and emergency lighting. This is essential for a safe, damage-free shutdown of the units in an emergency situation. The cooling water system dissipates the thermal load of the main turbine exhaust steam, the condensate from the gland steam condenser, and the condensate from the low-pressure heaters by providing a continuous supply of cooling water to the main condenser, thereby enabling condensation. The consumption of cooling water by inland power stations is estimated to reduce power availability for the majority of thermal power stations by 2040–2069. [ 27 ]
https://en.wikipedia.org/wiki/Thermal_power_station
In thermodynamics, thermal pressure (also known as the thermal pressure coefficient) is a measure of the relative pressure change of a fluid or a solid as a response to a temperature change at constant volume. The concept is related to the Pressure-Temperature Law, also known as Amontons's law or Gay-Lussac's law. [ 1 ] In general, pressure $P$ can be written as the following sum: $P_{\text{total}}(V,T) = P_{\text{ref}}(V,T_{0}) + \Delta P_{\text{thermal}}(V,T)$. Here $P_{\text{ref}}$ is the pressure required to compress the material from its volume $V_{0}$ to volume $V$ at a constant temperature $T_{0}$. The second term expresses the change in thermal pressure $\Delta P_{\text{thermal}}$: the pressure change at constant volume due to the temperature difference between $T_{0}$ and $T$. Thus, it is the pressure change along an isochore of the material. The thermal pressure $\gamma_{v}$ is customarily expressed in its simple form as $\gamma_{v} = \left(\frac{\partial P}{\partial T}\right)_{V}$. Because of the equivalences between many properties and derivatives within thermodynamics (e.g., see Maxwell relations), there are many formulations of the thermal pressure coefficient, which are equally valid, leading to distinct yet correct interpretations of its meaning. Some formulations for the thermal pressure coefficient include: $\left(\frac{\partial P}{\partial T}\right)_{V} = \alpha \kappa_{T} = \frac{\gamma}{V} C_{V} = \frac{\alpha}{\beta_{T}}$, where $\alpha$ is the volume thermal expansion coefficient, $\kappa_{T}$ the isothermal bulk modulus, $\gamma$ the Grüneisen parameter, $\beta_{T}$ the compressibility, and $C_{V}$ the constant-volume heat capacity. [ 2 ] Details of the calculation: $\left(\frac{\partial P}{\partial T}\right)_{V} = -\left(\frac{\partial V}{\partial T}\right)_{P}\left(\frac{\partial P}{\partial V}\right)_{T} = -(V\alpha)\left(-\frac{\kappa_{T}}{V}\right) = \alpha\kappa_{T}$ and $\left(\frac{\partial P}{\partial T}\right)_{V} = \frac{\frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_{P}}{-\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_{T}} = \frac{\alpha}{\beta_{T}}$. The thermal pressure coefficient can be considered a fundamental property; it is closely related to various properties such as internal pressure, sonic velocity, the entropy of melting, isothermal compressibility, isobaric expansibility, phase transitions, etc. Thus, the study of the thermal pressure coefficient provides a useful basis for understanding the nature of liquids and solids. Since it is normally difficult to obtain these properties by thermodynamic and statistical mechanics methods due to complex interactions among molecules, experimental methods attract much attention. The thermal pressure coefficient is used to calculate results that are applied widely in industry, and such results further accelerate the development of thermodynamic theory.
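As a numeric illustration of the relation $\gamma_{v} = \alpha\kappa_{T}$ above, the sketch below uses round, MgO-like property values (illustrative assumptions, not reference data) to estimate the thermal pressure built up by heating at constant volume:

```python
# Thermal pressure coefficient gamma_v = alpha * K_T, and the thermal pressure
# accumulated along an isochore, dP = gamma_v * dT (treating both as roughly
# constant, which the text notes is reasonable for solids above the Debye
# temperature). Property values are illustrative, MgO-like round numbers.

alpha = 3.0e-5      # volume thermal expansion coefficient, 1/K (assumed)
K_T = 160e9         # isothermal bulk modulus, Pa (assumed)

gamma_v = alpha * K_T              # Pa/K
delta_T = 500.0                    # heating at constant volume, K
delta_P = gamma_v * delta_T        # Pa

print(f"gamma_v ~ {gamma_v/1e6:.1f} MPa/K")                       # ~4.8 MPa/K
print(f"thermal pressure for dT=500 K ~ {delta_P/1e9:.1f} GPa")   # ~2.4 GPa
```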
Commonly the thermal pressure coefficient may be expressed as a function of temperature and volume. There are two main types of calculation of the thermal pressure coefficient: one is the virial theorem and its derivatives; the other is the van der Waals type and its derivatives. [ 4 ] As mentioned above, $\alpha\kappa_{T}$ is one of the most common formulations for the thermal pressure coefficient. Both $\alpha$ and $\kappa_{T}$ are affected by temperature changes, but the values of $\alpha$ and $\kappa_{T}$ for a solid are much less sensitive to temperature change above its Debye temperature. Thus, the thermal pressure of a solid due to a moderate temperature change above the Debye temperature can be approximated by assuming constant values of $\alpha$ and $\kappa_{T}$. [ 5 ] On the contrary, one study [ 6 ] demonstrated that, at ambient pressure, the pressure of Au and MgO predicted from a constant value of $\alpha\kappa_{T}$ deviates from the experimental data, and the higher the temperature, the larger the deviation. In addition, the authors suggested a thermal expansion model to replace the thermal pressure model. The thermal pressure of a crystal defines how the unit-cell parameters change as a function of pressure and temperature. Therefore, it also controls how the cell parameters change along an isochore, namely as a function of $\left(\frac{\partial P}{\partial T}\right)_{V}$. Usually, Mie–Grüneisen–Debye and other quasi-harmonic approximation (QHA) based state functions are used to estimate volumes and densities of mineral phases in diverse applications such as thermodynamic models, deep-Earth geophysical models, and models of other planetary bodies. In the case of isotropic (or approximately isotropic) thermal pressure, the unit-cell parameters remain constant along the isochore and the QHA is valid. But when the thermal pressure is anisotropic, the unit-cell parameters change, so the frequencies of vibrational modes also change even at constant volume, and the QHA is no longer valid. The combined effect of a change in pressure and temperature is described by the strain tensor $\varepsilon_{ij}$: $\varepsilon_{ij} = \alpha_{ij}\,dT - \beta_{ij}\,dP$, where $\alpha_{ij}$ is the thermal expansion tensor and $\beta_{ij}$ is the compressibility tensor. The line in P–T space along which the strain $\varepsilon_{ij}$ is constant in a particular direction within the crystal is defined by $\left(\frac{\partial P}{\partial T}\right)_{V} = \frac{\alpha_{ij}}{\beta_{ij}}$, which is an equivalent definition of the isotropic degree of thermal pressure. [ 7 ]
https://en.wikipedia.org/wiki/Thermal_pressure
In theoretical physics, thermal quantum field theory (thermal field theory for short) or finite temperature field theory is a set of methods to calculate expectation values of physical observables of a quantum field theory at finite temperature. In the Matsubara formalism, the basic idea (due to Felix Bloch [ 1 ] ) is that the expectation values of operators in a canonical ensemble may be written as expectation values in ordinary quantum field theory [ 2 ] where the configuration is evolved by an imaginary time $\tau = it$ ($0 \leq \tau \leq \beta$). One can therefore switch to a spacetime with Euclidean signature, where the above trace (Tr) leads to the requirement that all bosonic and fermionic fields be periodic and antiperiodic, respectively, with respect to the Euclidean time direction with periodicity $\beta = 1/(kT)$ (we are assuming natural units $\hbar = 1$). This allows one to perform calculations with the same tools as in ordinary quantum field theory, such as functional integrals and Feynman diagrams, but with compact Euclidean time. Note that the definition of normal ordering has to be altered. [ 3 ] In momentum space, this leads to the replacement of continuous frequencies by discrete imaginary (Matsubara) frequencies $\nu_{n} = n/\beta$ and, through the de Broglie relation, to a discretized thermal energy spectrum $E_{n} = 2n\pi kT$. This has been shown to be a useful tool in studying the behavior of quantum field theories at finite temperature. [ 4 ] [ 5 ] [ 6 ] [ 7 ] It has been generalized to theories with gauge invariance and was a central tool in the study of a conjectured deconfining phase transition of Yang–Mills theory. [ 8 ] [ 9 ] In this Euclidean field theory, real-time observables can be retrieved by analytic continuation. [ 10 ] The Feynman rules for gauge theories in the Euclidean time formalism were derived by C. W. Bernard. [ 8 ] The Matsubara formalism, also referred to as the imaginary time formalism, can be extended to systems with thermal variations. [ 11 ] [ 12 ] In this approach, the variation in the temperature is recast as a variation in the Euclidean metric. Analysis of the partition function leads to an equivalence between thermal variations and the curvature of the Euclidean space. [ 11 ] [ 12 ] The alternative to the use of fictitious imaginary times is to use a real-time formalism, which comes in two forms. [ 13 ] A path-ordered approach to real-time formalisms includes the Schwinger–Keldysh formalism and more modern variants. [ 14 ] The latter involves replacing a straight time contour from (large negative) real initial time $t_{i}$ to $t_{i} - i\beta$ by one that first runs to (large positive) real time $t_{f}$ and then suitably back to $t_{i} - i\beta$. [ 15 ] In fact all that is needed is one section running along the real time axis, as the route to the end point, $t_{i} - i\beta$, is less important. [ 16 ] The piecewise composition of the resulting complex time contour leads to a doubling of fields and more complicated Feynman rules, but obviates the need for analytic continuations of the imaginary-time formalism. The alternative approach to real-time formalisms is an operator-based approach using Bogoliubov transformations, known as thermo field dynamics. [ 13 ] [ 17 ]
As well as Feynman diagrams and perturbation theory, other techniques such as dispersion relations and the finite temperature analog of Cutkosky rules can also be used in the real-time formulation. [ 18 ] [ 19 ] An alternative approach which is of interest to mathematical physics is to work with KMS states.
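To make the Matsubara discretization described above concrete: in the standard convention (natural units, $k = \hbar = 1$), bosonic fields carry frequencies $\omega_{n} = 2n\pi T$ and fermionic fields $\omega_{n} = (2n+1)\pi T$, reflecting their periodic and antiperiodic boundary conditions in Euclidean time. A minimal sketch:

```python
import math

# Matsubara frequencies at temperature T (natural units, k = hbar = 1):
#   bosons   : omega_n = 2*n*pi*T        (periodic in Euclidean time)
#   fermions : omega_n = (2*n + 1)*pi*T  (antiperiodic in Euclidean time)
def matsubara(n: int, temperature: float, fermionic: bool) -> float:
    if fermionic:
        return (2 * n + 1) * math.pi * temperature
    return 2 * n * math.pi * temperature

T = 1.0
print([round(matsubara(n, T, fermionic=False), 3) for n in range(3)])  # 0, 2*pi*T, 4*pi*T
print([round(matsubara(n, T, fermionic=True), 3) for n in range(3)])   # pi*T, 3*pi*T, 5*pi*T
```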
https://en.wikipedia.org/wiki/Thermal_quantum_field_theory
Thermal radiation is electromagnetic radiation emitted by the thermal motion of particles in matter. All matter with a temperature greater than absolute zero emits thermal radiation. The emission of energy arises from a combination of electronic, molecular, and lattice oscillations in a material. [ 1 ] Kinetic energy is converted to electromagnetic radiation through charge acceleration or dipole oscillation. At room temperature, most of the emission is in the infrared (IR) spectrum, [ 2 ] : 73–86 though above around 525 °C (977 °F) enough of it becomes visible for the matter to visibly glow. This visible glow is called incandescence. Thermal radiation is one of the fundamental mechanisms of heat transfer, along with conduction and convection. The primary method by which the Sun transfers heat to the Earth is thermal radiation. This energy is partially absorbed and scattered in the atmosphere, the latter process being the reason why the sky is visibly blue. [ 3 ] Much of the Sun's radiation transmits through the atmosphere to the surface, where it is either absorbed or reflected. Thermal radiation can be used to detect objects or phenomena normally invisible to the human eye. Thermographic cameras create an image by sensing infrared radiation. These images can represent the temperature gradient of a scene and are commonly used to locate objects at a higher temperature than their surroundings. In a dark environment where visible light is at low levels, infrared images can be used to locate animals or people due to their body temperature. Cosmic microwave background radiation is another example of thermal radiation. Blackbody radiation is a concept used to analyze thermal radiation in idealized systems. This model applies if a radiating object meets the physical characteristics of a black body in thermodynamic equilibrium. [ 4 ] : 278 Planck's law describes the spectrum of blackbody radiation, and relates the radiative heat flux from a body to its temperature. Wien's displacement law determines the most likely frequency of the emitted radiation, and the Stefan–Boltzmann law gives the radiant intensity. [ 4 ] : 280 Where blackbody radiation is not an accurate approximation, emission and absorption can be modeled using quantum electrodynamics (QED). [ 1 ] Thermal radiation is the emission of electromagnetic waves from all matter that has a temperature greater than absolute zero. [ 5 ] [ 2 ] Thermal radiation reflects the conversion of thermal energy into electromagnetic energy. Thermal energy is the kinetic energy of random movements of atoms and molecules in matter. It is present in all matter of nonzero temperature. These atoms and molecules are composed of charged particles, i.e., protons and electrons. The kinetic interactions among matter particles result in charge acceleration and dipole oscillation. This results in the electrodynamic generation of coupled electric and magnetic fields, resulting in the emission of photons, radiating energy away from the body. Electromagnetic radiation, including visible light, will propagate indefinitely in vacuum. [ citation needed ] The characteristics of thermal radiation depend on various properties of the surface from which it is emanating, including its temperature and its spectral emissivity, as expressed by Kirchhoff's law. [ 5 ] The radiation is not monochromatic, i.e., it does not consist of only a single frequency, but comprises a continuous spectrum of photon energies, its characteristic spectrum.
If the radiating body and its surface are in thermodynamic equilibrium and the surface has perfect absorptivity at all wavelengths, it is characterized as a black body. A black body is also a perfect emitter. The radiation of such perfect emitters is called black-body radiation. The ratio of any body's emission relative to that of a black body is the body's emissivity, so a black body has an emissivity of one. Absorptivity, reflectivity, and emissivity of all bodies are dependent on the wavelength of the radiation. Due to reciprocity, absorptivity and emissivity for any particular wavelength are equal at equilibrium – a good absorber is necessarily a good emitter, and a poor absorber is a poor emitter. The temperature determines the wavelength distribution of the electromagnetic radiation. The distribution of power that a black body emits with varying frequency is described by Planck's law. At any given temperature, there is a frequency f max at which the power emitted is a maximum. Wien's displacement law, and the fact that the frequency is inversely proportional to the wavelength, indicates that the peak frequency f max is proportional to the absolute temperature T of the black body. The photosphere of the sun, at a temperature of approximately 6000 K, emits radiation principally in the (human-)visible portion of the electromagnetic spectrum. Earth's atmosphere is partly transparent to visible light, and the light reaching the surface is absorbed or reflected. Earth's surface emits the absorbed radiation, approximating the behavior of a black body at 300 K with spectral peak at f max. At these lower frequencies, the atmosphere is largely opaque and radiation from Earth's surface is absorbed or scattered by the atmosphere. Though about 10% of this radiation escapes into space, most is absorbed and then re-emitted by atmospheric gases. It is this spectral selectivity of the atmosphere that is responsible for the planetary greenhouse effect, contributing to global warming and climate change in general (but also critically contributing to climate stability when the composition and properties of the atmosphere are not changing).
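The statements above about the Sun's visible-peaked spectrum and Earth's infrared emission follow from Wien's displacement law and the Stefan–Boltzmann law; a minimal sketch with standard constants:

```python
# Wien's displacement law (peak wavelength) and Stefan-Boltzmann law (emitted flux)
# for an ideal black body.
WIEN_B = 2.8978e-3     # m*K, Wien displacement constant
SIGMA = 5.670e-8       # W/(m^2*K^4), Stefan-Boltzmann constant

def peak_wavelength_um(T: float) -> float:
    return WIEN_B / T * 1e6

def blackbody_flux(T: float) -> float:
    return SIGMA * T**4

for T in (6000.0, 300.0):   # ~solar photosphere vs. Earth's surface
    print(f"T={T:6.0f} K: peak {peak_wavelength_um(T):5.2f} um, "
          f"flux {blackbody_flux(T):,.0f} W/m^2")
# 6000 K peaks near 0.48 um (visible); 300 K peaks near 9.7 um (infrared).
```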
In 1660, della Porta's experiment was updated by the Accademia del Cimento using a thermometer invented by Ferdinand II, Grand Duke of Tuscany . [ 6 ] In 1761, Benjamin Franklin wrote a letter describing his experiments on the relationship between color and heat absorption. [ 7 ] He found that darker-colored clothes got hotter when exposed to sunlight than lighter-colored clothes. One experiment he performed consisted of placing square pieces of cloth of various colors out in the snow on a sunny day. He waited some time and then found that the black pieces had sunk furthest into the snow of all the colors, indicating that they got the hottest and melted the most snow. Antoine Lavoisier considered that radiation of heat was concerned with the condition of the surface of a physical body rather than the material of which it was composed. [ 8 ] Lavoisier described a poor radiator as a substance with a polished or smooth surface, whose molecules lay closely bound together in a plane, creating a surface layer of caloric fluid which insulated the release of the rest of the caloric within. [ 8 ] He described a good radiator as a substance with a rough surface, in which only a small proportion of molecules held caloric within a given plane, allowing for greater escape from within. [ 8 ] Count Rumford would later cite this explanation of caloric movement as insufficient to explain the radiation of cold, which became a point of contention for the theory as a whole. [ 8 ] In his first memoir, Augustin-Jean Fresnel responded to a view he extracted from a French translation of Isaac Newton 's Optics . He says that Newton imagined particles of light traversing space uninhibited by the caloric medium filling it, and refutes this view (never actually held by Newton) by saying that a body under illumination would increase indefinitely in heat. [ 9 ] In Marc-Auguste Pictet 's famous experiment of 1790 , it was reported that a thermometer detected a lower temperature when a set of mirrors was used to focus "frigorific rays" from a cold object. [ 10 ] In 1791, Pierre Prevost , a colleague of Pictet, introduced the concept of radiative equilibrium , wherein all objects both radiate and absorb heat. [ 11 ] When an object is cooler than its surroundings, it absorbs more heat than it emits, causing its temperature to increase until it reaches equilibrium. Even at equilibrium, it continues to radiate heat, balancing absorption and emission. [ 11 ] The discovery of infrared radiation is ascribed to astronomer William Herschel . Herschel published his results in 1800 before the Royal Society of London . Herschel used a prism to refract light from the sun and detected the calorific rays, beyond the red part of the spectrum, by an increase in the temperature recorded on a thermometer in that region. [ 12 ] [ 13 ] At the end of the 19th century it was shown that the transmission of light or of radiant heat was allowed by the propagation of electromagnetic waves . [ 14 ] Television and radio broadcasting waves are types of electromagnetic waves with specific wavelengths . [ 15 ] All electromagnetic waves travel at the same speed; therefore, shorter wavelengths are associated with higher frequencies. All bodies generate and receive electromagnetic waves at the expense of heat exchange. [ 15 ] In 1860, Gustav Kirchhoff published a mathematical description of thermal equilibrium (i.e. Kirchhoff's law of thermal radiation ).
[ 16 ] : 275–301 By 1884 the emissive power of a perfect blackbody was inferred by Josef Stefan using John Tyndall 's experimental measurements, and derived by Ludwig Boltzmann from fundamental statistical principles. [ 17 ] This relation is known as the Stefan–Boltzmann law . The microscopic theory of radiation is best known as the quantum theory and was first offered by Max Planck in 1900. [ 14 ] According to this theory, energy emitted by a radiator is not continuous but is in the form of quanta. Planck noted that energy was emitted in quanta whose size was tied to the frequency of vibration. [ 18 ] The energy E of an electromagnetic wave in vacuum is found by the expression E = hf , where h is the Planck constant and f is its frequency. Bodies at higher temperatures emit radiation at higher frequencies with an increasing energy per quantum. While the propagation of electromagnetic waves of all wavelengths is often referred to as "radiation", thermal radiation is often constrained to the visible and infrared regions. For engineering purposes, it may be stated that thermal radiation is a form of electromagnetic radiation which varies with the nature of a surface and its temperature. [ 14 ] Radiation waves may travel in unusual patterns compared to conduction heat flow . Radiation allows waves to travel from a heated body through a cold non-absorbing or partially absorbing medium and reach a warmer body again. [ 14 ] An example is the case of the radiation waves that travel from the Sun to the Earth. Thermal radiation emitted by a body at any temperature consists of a wide range of frequencies. The frequency distribution is given by Planck's law of black-body radiation for an idealized emitter as shown in the diagram at top. The dominant frequency (or color) range of the emitted radiation shifts to higher frequencies as the temperature of the emitter increases. For example, a red hot object radiates mainly in the long wavelengths (red and orange) of the visible band. If it is heated further, it also begins to emit discernible amounts of green and blue light, and the spread of frequencies in the entire visible range causes it to appear white to the human eye; it is white hot . Even at a white-hot temperature of 2000 K, 99% of the energy of the radiation is still in the infrared. This is determined by Wien's displacement law . In the diagram the peak value for each curve moves to the left as the temperature increases. The total radiation intensity of a black body rises as the fourth power of the absolute temperature, as expressed by the Stefan–Boltzmann law . A kitchen oven, at a temperature about double room temperature on the absolute temperature scale (600 K vs. 300 K), radiates 16 times as much power per unit area. An object at the temperature of the filament in an incandescent light bulb —roughly 3000 K, or 10 times room temperature—radiates 10,000 times as much energy per unit area. As for photon statistics , thermal light obeys Super-Poissonian statistics . When the temperature of a body is high enough, its thermal radiation spectrum becomes strong enough in the visible range to visibly glow. The visible component of thermal radiation is sometimes called incandescence , [ 20 ] though this term can also refer to thermal radiation in general. The term derives from the Latin verb incandescere , 'to glow white'.
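The fourth-power scaling behind the oven and filament comparisons above is easy to verify numerically. The following Python sketch reproduces the 16× and 10,000× ratios; the helper name and the 300/600/3000 K values are illustrative choices, not anything prescribed by the text:

```python
# Numeric check of the fourth-power (Stefan-Boltzmann) scaling quoted above.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_flux(T):
    """Total emissive power of a black body at temperature T (kelvin), W/m^2."""
    return SIGMA * T**4

room, oven, filament = 300.0, 600.0, 3000.0  # K
print(blackbody_flux(oven) / blackbody_flux(room))      # 16.0
print(blackbody_flux(filament) / blackbody_flux(room))  # 10000.0
```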
[ 21 ] In practice, virtually all solid or liquid substances start to glow around 798 K (525 °C; 977 °F), with a mildly dull red color, whether or not a chemical reaction takes place that produces light as a result of an exothermic process. This limit is called the Draper point . The incandescence does not vanish below that temperature, but it is too weak in the visible spectrum to be perceptible. The rate of electromagnetic radiation emitted by a body at a given frequency is proportional to the rate that the body absorbs radiation at that frequency, a property known as reciprocity . Thus, a surface that absorbs more red light thermally radiates more red light. This principle applies to all properties of the wave, including wavelength (color), direction, polarization , and even coherence . It is therefore possible to have thermal radiation which is polarized, coherent, and directional; though polarized and coherent sources are fairly rare in nature. Thermal radiation is one of the three principal mechanisms of heat transfer . It entails the emission of a spectrum of electromagnetic radiation due to an object's temperature. Other mechanisms are convection and conduction . Thermal radiation is characteristically different from conduction and convection in that it does not require a medium and, in fact, it reaches maximum efficiency in a vacuum . Thermal radiation is a type of electromagnetic radiation which is often modeled by the propagation of waves. These waves have the standard wave properties of frequency, ν, and wavelength, λ, which are related by the equation λ = c/ν, where c is the speed of light in the medium. [ 22 ] : 769 Thermal irradiation is the rate at which radiation is incident upon a surface per unit area. [ 22 ] : 771 It is measured in watts per square meter. Irradiation can either be reflected , absorbed , or transmitted . The components of irradiation can then be characterized by the equation α + ρ + τ = 1, where α represents the absorptivity , ρ represents reflectivity and τ represents transmissivity . [ 22 ] : 772 These components are a function of the wavelength of the electromagnetic wave as well as the material properties of the medium. The spectral absorption is equal to the emissivity ε; this relation is known as Kirchhoff's law of thermal radiation . An object is called a black body if this holds for all frequencies, and the following formula applies: α = ε = 1. If objects appear white (reflective in the visual spectrum ), they are not necessarily equally reflective (and thus non-emissive) in the thermal infrared – see the diagram at the left. Most household radiators are painted white, which is sensible given that they are not hot enough to radiate any significant amount of heat, and are not designed as thermal radiators at all – instead, they are actually convectors , and painting them matt black would make little difference to their efficacy. Acrylic and urethane based white paints have 93% blackbody radiation efficiency at room temperature [ 23 ] (meaning the term "black body" does not always correspond to the visually perceived color of an object).
Materials that do not follow the "black color = high emissivity/absorptivity" rule of thumb will most likely have a wavelength-dependent (spectral) emissivity/absorptivity. Only truly gray systems (relatively equivalent emissivity/absorptivity and no directional transmissivity dependence in all control volume bodies considered) can achieve reasonable steady-state heat flux estimates through the Stefan–Boltzmann law. Encountering this "ideally calculable" situation is almost impossible (although common engineering procedures ignore the dependency of these unknown variables and "assume" this to be the case). Optimistically, these "gray" approximations will get close to real solutions, as most divergence from Stefan–Boltzmann solutions is very small (especially in most standard temperature and pressure lab controlled environments). Reflectivity deviates from the other properties in that it is bidirectional in nature. In other words, this property depends on the direction of the incident radiation as well as the direction of the reflection. Therefore, the reflected rays of a radiation spectrum incident on a real surface in a specified direction form an irregular shape that is not easily predictable. In practice, surfaces are often assumed to reflect either in a perfectly specular or a diffuse manner. In a specular reflection , the angles of reflection and incidence are equal. In diffuse reflection , radiation is reflected equally in all directions. Reflection from smooth and polished surfaces can be assumed to be specular reflection, whereas reflection from rough surfaces approximates diffuse reflection. [ 14 ] In radiation analysis a surface is defined as smooth if the height of the surface roughness is much smaller relative to the wavelength of the incident radiation. A medium that experiences no transmission (τ = 0) is opaque, in which case absorptivity and reflectivity sum to unity: ρ + α = 1. Radiation emitted from a surface can propagate in any direction from the surface. [ 22 ] : 773 Irradiation can also be incident upon a surface from any direction. The amount of irradiation on a surface is therefore dependent on the relative orientation of both the emitter and the receiver. The parameter radiation intensity, I , is used to quantify how much radiation makes it from one surface to another. Radiation intensity is often modeled using a spherical coordinate system . [ 22 ] : 773 Emissive power is the rate at which radiation is emitted per unit area. [ 22 ] : 776 It is a measure of heat flux . The total emissive power from a surface is denoted as E and can be determined by E = πI, where π is in units of steradians and I is the total intensity. The total emissive power can also be found by integrating the spectral emissive power over all possible wavelengths. [ 22 ] : 776 This is calculated as E = ∫₀^∞ E_λ(λ) dλ, where λ represents wavelength. The spectral emissive power can also be determined from the spectral intensity, I_λ, as follows: E_λ(λ) = π I_λ(λ), where both spectral emissive power and emissive intensity are functions of wavelength.
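As a concrete illustration of the relations E = ∫₀^∞ E_λ(λ) dλ and E = πI, the Python sketch below numerically integrates a made-up triangular spectral emissive power and recovers the corresponding total intensity. The spectrum is a toy assumption, not data for any real material:

```python
# Integrate a toy spectral emissive power over wavelength to get the total
# emissive power E, then recover the total intensity I from E = pi * I.
# The triangular spectrum below is an illustrative assumption only.
import numpy as np

lam = np.linspace(1e-6, 20e-6, 2001)                          # wavelength grid, m
E_lam = np.clip(1e13 * (6e-6 - np.abs(lam - 8e-6)), 0, None)  # W/m^2 per m, toy

# Trapezoidal integration: E = integral of E_lambda d(lambda)
E = float(np.sum(0.5 * (E_lam[1:] + E_lam[:-1]) * np.diff(lam)))
I = E / np.pi                                                 # diffuse-emitter intensity
print(f"E = {E:.1f} W/m^2, I = {I:.1f} W/(m^2 sr)")
```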
[ 22 ] : 776 A "black body" is a body which has the property of allowing all incident rays to enter without surface reflection and not allowing them to leave again. [ 16 ] Blackbodies are idealized surfaces that act as the perfect absorber and emitter. [ 22 ] : 782–783 They serve as the standard against which real surfaces are compared when characterizing thermal radiation. A blackbody is defined by three characteristics: it absorbs all incident radiation regardless of wavelength and direction; for a given temperature and wavelength, no surface can emit more energy than a blackbody; and it emits diffusely, with intensity independent of direction. The spectral intensity of a blackbody, I_λ,b, was first determined by Max Planck. [ 3 ] It is given by Planck's law per unit wavelength as: I_λ,b(λ, T) = (2hc²/λ⁵) · 1/(exp(hc/(λk_B T)) − 1). This formula mathematically follows from calculation of the spectral distribution of energy in a quantized electromagnetic field which is in complete thermal equilibrium with the radiating object. Planck's law shows that radiative energy increases with temperature, and explains why the peak of an emission spectrum shifts to shorter wavelengths at higher temperatures. It can also be found that energy emitted at shorter wavelengths increases more rapidly with temperature relative to longer wavelengths. [ 24 ] The equation is derived as an infinite sum over all possible frequencies in a semi-sphere region. The energy, E = hν, of each photon is multiplied by the number of states available at that frequency, and the probability that each of those states will be occupied. The Planck distribution can be used to find the spectral emissive power of a blackbody, E_λ,b, as follows: [ 22 ] : 784–785 E_λ,b = π I_λ,b. The total emissive power of a blackbody is then calculated as E_b = ∫₀^∞ π I_λ,b dλ. The solution of the above integral yields a remarkably elegant equation for the total emissive power of a blackbody, the Stefan–Boltzmann law , which is given as E_b = σT⁴, where σ is the Stefan–Boltzmann constant . The wavelength λ_max for which the emission intensity is highest is given by Wien's displacement law as λ_max = b/T, where b is Wien's displacement constant. For surfaces which are not black bodies, one has to consider the (generally frequency-dependent) emissivity factor ε(ν). This factor has to be multiplied with the radiation spectrum formula before integration. If it is taken as a constant, the resulting formula for the power output can be written in a way that contains ε as a factor: P = εσAT⁴. This type of theoretical model, with frequency-independent emissivity lower than that of a perfect black body, is often known as a grey body . For frequency-dependent emissivity, the solution for the integrated power depends on the functional form of the dependence, though in general there is no simple expression for it. Practically speaking, if the emissivity of the body is roughly constant around the peak emission wavelength, the gray body model tends to work fairly well since the weight of the curve around the peak emission tends to dominate the integral.
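A minimal numeric sketch of these blackbody relations, in Python, is given below; the photosphere-like temperature is an illustrative choice, and the constants are standard SI values:

```python
# Evaluate the blackbody relations above: Planck's spectral intensity,
# Wien's displacement law, and the Stefan-Boltzmann law (SI units throughout).
import math

h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light in vacuum, m/s
kB = 1.380649e-23       # Boltzmann constant, J/K
sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
b = 2.897771955e-3      # Wien displacement constant, m K

def planck_intensity(lam, T):
    """Spectral intensity I_lambda,b(lambda, T) in W m^-2 sr^-1 per metre."""
    return (2.0 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

T = 5800.0                 # K, roughly the solar photosphere
lam_max = b / T            # Wien: peak emission wavelength
E_b = sigma * T**4         # Stefan-Boltzmann: total emissive power
print(f"lambda_max = {lam_max * 1e9:.0f} nm")          # ~500 nm, in the visible
print(f"E_b = {E_b:.3e} W/m^2")                        # ~6.4e7 W/m^2
print(f"I at peak = {planck_intensity(lam_max, T):.3e} W/(m^2 sr m)")
```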
Calculation of radiative heat transfer between groups of objects, including a 'cavity' or 'surroundings', requires solution of a set of simultaneous equations using the radiosity method. In these calculations, the geometrical configuration of the problem is distilled to a set of numbers called view factors , which give the proportion of radiation leaving any given surface that hits another specific surface. These calculations are important in the fields of solar thermal energy , boiler and furnace design and raytraced computer graphics . The net radiative heat transfer from one surface to another is the radiation leaving the first surface for the other minus that arriving from the second surface: Q̇₁→₂ = A₁ E_b1 F₁→₂ − A₂ E_b2 F₂→₁, where A is surface area, E_b is energy flux (the rate of emission per unit surface area) and F₁→₂ is the view factor from surface 1 to surface 2. Applying both the reciprocity rule for view factors, A₁ F₁→₂ = A₂ F₂→₁, and the Stefan–Boltzmann law , E_b = σT⁴, yields: Q̇₁→₂ = σ A₁ F₁→₂ (T₁⁴ − T₂⁴). Formulas for radiative heat transfer can be derived for more particular or more elaborate physical arrangements, such as between parallel plates, concentric spheres and the internal surfaces of a cylinder. [ 18 ] Thermal radiation is an important factor in many engineering applications, especially for those dealing with high temperatures. Sunlight is the incandescence of the "white hot" surface of the Sun. Electromagnetic radiation from the sun has a peak wavelength of about 550 nm, [ 1 ] and can be harvested to generate heat or electricity. Thermal radiation can be concentrated on a tiny spot via reflecting mirrors, which concentrating solar power takes advantage of. Instead of mirrors, Fresnel lenses can also be used to concentrate radiant energy . Either method can be used to quickly vaporize water into steam using sunlight. For example, the sunlight reflected from mirrors heats the PS10 Solar Power Plant , and during the day it can heat water to 285 °C (558 K; 545 °F). A selective surface can be used when energy is being extracted from the sun. Selective surfaces are surfaces tuned to maximize the amount of energy they absorb from the sun's radiation while minimizing the amount of energy they lose to their own thermal radiation. Selective surfaces can also be used on solar collectors. The incandescent light bulb creates light by heating a filament to a temperature at which it emits significant visible thermal radiation. For a tungsten filament at a typical temperature of 3000 K, only a small fraction of the emitted radiation is visible, and the majority is infrared light. This infrared light does not help a person see, but still transfers heat to the environment, making incandescent lights relatively inefficient as a light source. [ 25 ] If the filament could be made hotter, efficiency would increase; however, there are currently no materials able to withstand such temperatures which would be appropriate for use in lamps. More efficient light sources, such as fluorescent lamps and LEDs , do not function by incandescence.
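The enclosure form of this result lends itself to a quick sketch. Below, a hypothetical small convex body (view factor 1 to its large surroundings) exchanges radiation with its enclosure; the area and temperatures are illustrative values only:

```python
# Net radiative exchange between two black surfaces using view factors:
# Qdot_1->2 = sigma * A1 * F12 * (T1^4 - T2^4)
sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_exchange(A1, F12, T1, T2):
    """Net radiative heat flow from surface 1 to surface 2, in watts."""
    return sigma * A1 * F12 * (T1**4 - T2**4)

# A small convex body inside a large enclosure sees only the enclosure: F12 = 1.
# Illustrative numbers: a ~2 m^2 surface at 307 K inside 296 K surroundings.
q = net_exchange(A1=2.0, F12=1.0, T1=307.0, T2=296.0)
print(f"net radiative loss: {q:.0f} W")  # on the order of 100 W
```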
[ 26 ] Thermal radiation plays a crucial role in human comfort, influencing perceived temperature sensation . Various technologies have been developed to enhance thermal comfort, including personal heating and cooling devices. The mean radiant temperature is a metric used to quantify the exchange of radiant heat between a human and their surrounding environment. Radiant personal heaters are devices that convert energy into infrared radiation, designed to increase a user's perceived temperature. They typically are either gas-powered or electric. In domestic and commercial applications, gas-powered radiant heaters can produce a higher heat flux than electric heaters, which are limited by the amount of current that can be drawn through a circuit breaker. Personalized cooling technology is an example of an application where optical spectral selectivity can be beneficial. Conventional personal cooling is typically achieved through heat conduction and convection. However, the human body is a very efficient emitter of infrared radiation, which provides an additional cooling mechanism. Most conventional fabrics are opaque to infrared radiation and block thermal emission from the body to the environment. Fabrics for personalized cooling applications have been proposed that allow infrared radiation to pass directly through the clothing, while being opaque at visible wavelengths, allowing the wearer to remain cooler. Low-emissivity windows in houses are a more complicated technology, since they must have low emissivity at thermal wavelengths while remaining transparent to visible light. To reduce the heat transfer from a surface, such as a glass window, a clear reflective film with a low emissivity coating can be placed on the interior of the surface. "Low-emittance (low-E) coatings are microscopically thin, virtually invisible, metal or metallic oxide layers deposited on a window or skylight glazing surface primarily to reduce the U-factor by suppressing radiative heat flow". [ 27 ] Adding this coating limits the amount of radiation that leaves the window, thus increasing the amount of heat that is retained inside. Shiny metal surfaces have low emissivities both in the visible wavelengths and in the far infrared. Such surfaces can be used to reduce heat transfer in both directions; an example of this is the multi-layer insulation used to insulate spacecraft. Since any electromagnetic radiation, including thermal radiation, conveys momentum as well as energy, thermal radiation also induces very small forces on the radiating or absorbing objects. Normally these forces are negligible, but they must be taken into account when considering spacecraft navigation. The Pioneer anomaly , where the motion of the craft slightly deviated from that expected from gravity alone, was eventually tracked down to asymmetric thermal radiation from the spacecraft. Similarly, the orbits of asteroids are perturbed since the asteroid absorbs solar radiation on the side facing the Sun, but then re-emits the energy at a different angle as the rotation of the asteroid carries the warm surface out of the Sun's view (the YORP effect ). Nanostructures with spectrally selective thermal emittance properties offer numerous technological applications for energy generation and efficiency, [ 28 ] e.g., for daytime radiative cooling of photovoltaic cells and buildings.
These applications require high emittance in the frequency range corresponding to the atmospheric transparency window in the 8 to 13 micron wavelength range. A selective emitter radiating strongly in this range is thus exposed to the clear sky, enabling the use of outer space as a very low temperature heat sink. [ 29 ] In a practical, room-temperature setting, humans lose considerable energy due to infrared thermal radiation in addition to that lost by conduction to air (aided by concurrent convection, or other air movement like drafts). The heat energy lost is partially regained by absorbing heat radiation from walls or other surroundings. Human skin has an emissivity of very close to 1.0. [ 30 ] A human, having roughly 2 m² in surface area, and a temperature of about 307 K , continuously radiates approximately 1000 W. If people are indoors, surrounded by surfaces at 296 K, they receive back about 900 W from the walls, ceiling, and other surroundings, resulting in a net loss of 100 W. These estimates are highly dependent on extrinsic variables, such as wearing clothes. Lighter colors and also whites and metallic substances absorb less of the illuminating light, and as a result heat up less. However, color makes little difference in the heat transfer between an object at everyday temperatures and its surroundings. This is because the dominant emitted wavelengths are not in the visible spectrum, but rather infrared. Emissivities at those wavelengths are largely unrelated to visual emissivities (visible colors); in the far infra-red, most objects have high emissivities. Thus, except in sunlight, the color of clothing makes little difference as regards warmth; likewise, paint color of houses makes little difference to warmth except when the painted part is sunlit. Thermal radiation is a phenomenon that can burn skin and ignite flammable materials. The time to damage from exposure to thermal radiation is a function of the rate at which heat is delivered. Thresholds of radiative heat flux and their corresponding effects have been tabulated. [ 31 ] At distances on the scale of the wavelength of a radiated electromagnetic wave or smaller, Planck's law is not accurate. For objects this small and close together, the quantum tunneling of EM waves has a significant impact on the rate of radiation. [ 1 ] A more sophisticated framework involving electromagnetic theory must be used for smaller distances from the thermal source or surface. For example, although far-field thermal radiation at distances from surfaces of more than one wavelength is generally not coherent to any extent, near-field thermal radiation (i.e., radiation at distances of a fraction of various radiation wavelengths) may exhibit a degree of both temporal and spatial coherence. [ 32 ] Planck's law of thermal radiation has been challenged in recent decades by predictions and successful demonstrations of radiative heat transfer between objects separated by nanoscale gaps that deviate significantly from the law's predictions. This deviation is especially strong (up to several orders of magnitude) when the emitter and absorber support surface polariton modes that can couple through the gap separating cold and hot objects. However, to take advantage of the surface-polariton-mediated near-field radiative heat transfer, the two objects need to be separated by ultra-narrow gaps on the order of microns or even nanometers. This limitation significantly complicates practical device designs.
Another way to modify the object thermal emission spectrum is by reducing the dimensionality of the emitter itself. [ 28 ] This approach builds upon the concept of confining electrons in quantum wells, wires and dots, and tailors thermal emission by engineering confined photon states in two- and three-dimensional potential traps: thermal wells, wires, and dots. Such spatial confinement concentrates photon states and enhances thermal emission at select frequencies. [ 33 ] To achieve the required level of photon confinement, the dimensions of the radiating objects should be on the order of or below the thermal wavelength predicted by Planck's law. Most importantly, the emission spectrum of thermal wells, wires and dots deviates from Planck's law predictions not only in the near field, but also in the far field, which significantly expands the range of their applications.
https://en.wikipedia.org/wiki/Thermal_radiation
Thermal rearrangements of aromatic hydrocarbons are considered to be unimolecular reactions that directly involve the atoms of an aromatic ring structure and require no other reagent than heat. These reactions can be categorized in two major types: one that involves a complete and permanent skeletal reorganization ( isomerization ), and one in which the atoms are scrambled but no net change in the aromatic ring occurs ( automerization ). [ 1 ] The general reaction schemes of the two types are illustrated in Figure 1. This class of reactions was uncovered through studies on the automerization of naphthalene as well as the isomerization of unsubstituted azulene to naphthalene. Research on thermal rearrangements of aromatic hydrocarbons has since been expanded to isomerizations and automerizations of benzene and polycyclic aromatic hydrocarbons . The first proposed mechanism for a thermal rearrangement of an aromatic compound was for the automerization of naphthalene . It was suggested that the rearrangement of naphthalene occurred due to reversibility of the isomerization of azulene to naphthalene. [ 2 ] [ 3 ] This mechanism would therefore involve an azulene intermediate. Subsequent work showed that the isomerization of azulene to naphthalene is not readily reversible (the free energy of a naphthalene-to-azulene isomerization is too high, approximately 90 kcal/mol). [ 1 ] A new reaction mechanism was suggested that involved a carbene intermediate and consecutive 1,2-hydrogen and 1,2-carbon shifts across the same C-C bond but in opposite directions. This is currently the preferred mechanism. [ 4 ] The isomerization of unsubstituted azulene to naphthalene was the first reported thermal transformation of an aromatic hydrocarbon, and has consequently been the most widely studied rearrangement. However, the following mechanisms are generalized to all thermal isomerizations of aromatic hydrocarbons. Many mechanisms have been suggested for this isomerization, yet none have been unequivocally determined as the only correct mechanism. Five mechanisms were originally considered: [ 1 ] a reversible ring-closure mechanism, described above, a norcaradiene- vinylidene mechanism, a diradical mechanism, a methylene walk mechanism, and a spiran mechanism. It was quickly determined that the reversible ring-closure mechanism was inaccurate, and it was later decided that there must be multiple reaction pathways occurring simultaneously. This was widely accepted, as at such high temperatures, one mechanism would have to be substantially energetically favored over the others to be occurring alone. Energetic studies revealed similar activation energies for all possible mechanisms. [ 1 ] Four mechanisms for thermal isomerizations have been proposed: a dyotropic mechanism, a diradical mechanism, and two benzene ring contraction mechanisms, namely a 1,2-carbon shift to a carbene preceding a 1,2-hydrogen shift , and a 1,2-hydrogen shift to a carbene followed by a 1,2-carbon shift. [ 5 ] [ 6 ] The dyotropic mechanism involves concerted 1,2-shifts. Electronic studies show this mechanism to be unlikely, but it must still be considered a viable mechanism as it has not yet been disproven. The diradical mechanism has been supported by kinetic studies performed on the reaction, which have revealed that the reaction is not truly unimolecular, as it is most likely initiated by hydrogen addition from another gas-phase species.
However, the reaction still obeys first-order kinetics , which is a classical characteristic of radical chain reactions . [ 7 ] A mechanistic rationale for the thermal rearrangement of azulene to naphthalene is as follows: homolysis of the weakest bond in azulene occurs, followed by a hydrogen shift and ring closure so as to retain the aromaticity of the molecule. Benzene ring contractions are the last two mechanisms that have been suggested, and they are currently the preferred mechanisms. These reaction mechanisms proceed through lower free energy transition states than the diradical and dyotropic mechanisms. The difference between the two ring contractions is minute, however, so it has not been determined which is favored over the other. Both mechanisms are described as follows for the ring contraction of biphenylene . The first involves a 1,2-hydrogen shift to a carbene followed by a 1,2-carbon shift on the same C-C bond but in opposite directions. The second differs from the first only by the order of the 1,2-shifts, with the 1,2-carbon shift preceding the 1,2-hydrogen shift. The four described mechanisms would all result in the isomerization from azulene to naphthalene. Kinetic data and 13C-labeling have been used to elucidate the correct mechanism, and have led organic chemists to believe that one of the benzene ring contractions is the most likely mechanism through which these isomerizations of aromatic hydrocarbons occur. [ 5 ] [ 8 ] Indications of thermal rearrangements of aromatic hydrocarbons were first noted in the early 20th century by natural products chemists who were working with sesquiterpenes . At the time, they noticed the automerization of a substituted azulene, but no further structural or mechanistic investigations were made. The oldest characterized thermal rearrangement of an aromatic compound was the isomerization of azulene to naphthalene by Heilbronner et al. in 1947. [ 9 ] Since then, many other isomerizations have been recorded; however, the rearrangement of azulene to naphthalene has received the most attention. Likewise, since the characterization of the automerization of naphthalene by Scott in 1977, [ 2 ] similar atom scramblings of other aromatic hydrocarbons such as pyrene , [ 10 ] azulene , [ 3 ] [ 11 ] benz[ a ]anthracene [ 12 ] and even benzene have been described. [ 13 ] While the existence of these reactions has been confirmed, the isomerization and automerization mechanisms remain unknown. Thermal rearrangements of aromatic hydrocarbons are generally carried out through flash vacuum pyrolysis (FVP). [ 14 ] In a typical FVP apparatus, a sample is sublimed under high vacuum (0.1-1.0 mmHg ), heated in the range of 500-1100 °C by an electric furnace as it passes through a horizontal quartz tube, and collected in a cold trap. The sample is carried through the apparatus by a nitrogen carrier gas. FVP has numerous limitations. Thermal rearrangements of aromatic hydrocarbons have been shown to be important in areas of chemical research and industry including fullerene synthesis, materials applications, and the formation of soot in combustion . [ 5 ] Thermal rearrangements of aceanthrylene and acephenanthrylene can yield fluoranthene , an important species in syntheses of corannulene and fullerenes that proceed through additional internal rearrangements.
[ 8 ] [ 16 ] Many of the polycyclic aromatic hydrocarbons known to be tumorigenic or mutagenic are found in atmospheric aerosols , a presence connected to the thermal rearrangement of polycyclic aromatic hydrocarbons during fast soot formation in combustion. [ 16 ]
https://en.wikipedia.org/wiki/Thermal_rearrangement_of_aromatic_hydrocarbons
A thermal reservoir , also thermal energy reservoir or thermal bath , is a thermodynamic system with a heat capacity so large that the temperature of the reservoir changes relatively little when a significant amount of heat is added or extracted. [ 1 ] As a conceptual simplification, it effectively functions as an infinite pool of thermal energy at a given, constant temperature. Since it can act as an inertial source and sink of heat, it is often also referred to as a heat reservoir or heat bath . Lakes, oceans and rivers often serve as thermal reservoirs in geophysical processes, such as the weather. In atmospheric science , large air masses in the atmosphere often function as thermal reservoirs. Since the temperature of a thermal reservoir T does not change during the heat transfer , the change of entropy in the reservoir is dS_Res = δQ/T. The microcanonical partition sum Z(E) of a heat bath of temperature T has the property Z(E + ΔE) = Z(E) e^(ΔE/k_B T), where k_B is the Boltzmann constant . It thus changes by the same factor when a given amount of energy is added. The exponential factor in this expression can be identified with the reciprocal of the Boltzmann factor . For an engineering application, see geothermal heat pump .
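The two statements above are linked: the logarithm of the factor by which Z grows equals the reservoir's entropy change in units of k_B. A minimal sketch, assuming an arbitrary 1 J of heat added to a 300 K bath, makes the magnitudes explicit (the growth factor itself is far too large to exponentiate directly):

```python
# Reservoir entropy change dS = dQ/T, and the log of the factor by which the
# microcanonical partition sum grows, ln[Z(E+dE)/Z(E)] = dE/(kB*T).
# dQ and T are illustrative values; note that dS/kB equals the log growth factor.
kB = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0          # reservoir temperature, K
dQ = 1.0           # heat added to the reservoir, J

dS = dQ / T                 # entropy change of the reservoir, J/K
log_growth = dQ / (kB * T)  # ln of the growth factor of Z(E)
print(f"dS = {dS:.4e} J/K")
print(f"dS/kB = {dS / kB:.4e}  (equals ln growth factor = {log_growth:.4e})")
```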
https://en.wikipedia.org/wiki/Thermal_reservoir
Thermal runaway describes a process that is accelerated by increased temperature , in turn releasing energy that further increases temperature. Thermal runaway occurs in situations where an increase in temperature changes the conditions in a way that causes a further increase in temperature, often leading to a destructive result. It is a kind of uncontrolled positive feedback . In chemistry (and chemical engineering ), thermal runaway is associated with strongly exothermic reactions that are accelerated by temperature rise. In electrical engineering , thermal runaway is typically associated with increased current flow and power dissipation . Thermal runaway can occur in civil engineering , notably when the heat released by large amounts of curing concrete is not controlled. [ citation needed ] In astrophysics , runaway nuclear fusion reactions in stars can lead to nova and several types of supernova explosions, and also occur as a less dramatic event in the normal evolution of solar-mass stars, the " helium flash ". Chemical reactions involving thermal runaway are also called thermal explosions in chemical engineering , or runaway reactions in organic chemistry . It is a process by which an exothermic reaction goes out of control: the reaction rate increases due to an increase in temperature, causing a further increase in temperature and hence a further rapid increase in the reaction rate. This has contributed to industrial chemical accidents , most notably the 1947 Texas City disaster from overheated ammonium nitrate in a ship's hold, and the 1976 explosion of zoalene , in a drier, at King's Lynn . [ 1 ] Frank-Kamenetskii theory provides a simplified analytical model for thermal explosion. Chain branching is an additional positive feedback mechanism which may also cause temperature to skyrocket because of a rapidly increasing reaction rate. Chemical reactions are either endothermic or exothermic, as expressed by their change in enthalpy . Many reactions are highly exothermic, so many industrial-scale and oil refinery processes have some level of risk of thermal runaway. These include hydrocracking , hydrogenation , alkylation (SN2), oxidation , metalation and nucleophilic aromatic substitution . For example, oxidation of cyclohexane into cyclohexanol and cyclohexanone , and of ortho-xylene into phthalic anhydride , has led to catastrophic explosions when reaction control failed. Thermal runaway may result from unwanted exothermic side reaction(s) that begin at higher temperatures, following an initial accidental overheating of the reaction mixture. This scenario was behind the Seveso disaster , where thermal runaway heated a reaction to temperatures such that in addition to the intended 2,4,5-trichlorophenol , poisonous 2,3,7,8-tetrachlorodibenzo- p -dioxin was also produced, and was vented into the environment after the reactor's rupture disk burst. [ 2 ] Thermal runaway is most often caused by failure of the reactor vessel's cooling system. Failure of the mixer can result in localized heating, which initiates thermal runaway. Similarly, in flow reactors , localized insufficient mixing causes hotspots to form, wherein thermal runaway conditions occur, which causes violent blowouts of reactor contents and catalysts. Incorrect equipment component installation is also a common cause. Many chemical production facilities are designed with high-volume emergency venting, a measure to limit the extent of injury and property damage when such accidents occur.
At large scale, it is unsafe to "charge all reagents and mix", as is done at laboratory scale. This is because the amount of reaction scales with the cube of the size of the vessel (V ∝ r³), but the heat transfer area scales with the square of the size (A ∝ r²), so that the heat production-to-area ratio scales with the size (V/A ∝ r). Consequently, reactions that easily cool fast enough in the laboratory can dangerously self-heat at ton scale. In 2007, this kind of erroneous procedure caused an explosion of a 2,400 U.S. gallons (9,100 L) reactor used to metalate methylcyclopentadiene with metallic sodium , killing four people and flinging parts of the reactor 400 feet (120 m) away. [ 3 ] [ 4 ] Thus, industrial-scale reactions prone to thermal runaway are preferably controlled by the addition of one reagent at a rate corresponding to the available cooling capacity. Some laboratory reactions must be run under extreme cooling, because they are very prone to hazardous thermal runaway. For example, in Swern oxidation , the formation of sulfonium chloride must be performed in a cooled system (−30 °C), because at room temperature the reaction undergoes explosive thermal runaway. [ 4 ] Microwaves are used for heating of various materials in cooking and various industrial processes. The rate of heating of the material depends on the energy absorption, which depends on the dielectric constant of the material. The dependence of the dielectric constant on temperature varies for different materials; some materials display a significant increase with increasing temperature. This behavior, when the material gets exposed to microwaves, leads to selective local overheating, as the warmer areas are better able to accept further energy than the colder areas—potentially dangerous especially for thermal insulators, where the heat exchange between the hot spots and the rest of the material is slow. These materials are called thermal runaway materials . This phenomenon occurs in some ceramics . Some electronic components develop lower resistances or lower triggering voltages (for nonlinear resistances) as their internal temperature increases. If circuit conditions cause markedly increased current flow in these situations, increased power dissipation may raise the temperature further by Joule heating . A vicious circle or positive feedback effect of thermal runaway can cause failure, sometimes in a spectacular fashion (e.g. electrical explosion or fire). To prevent these hazards, well-designed electronic systems typically incorporate current limiting protection, such as thermal fuses, circuit breakers, or PTC current limiters. To handle larger currents, circuit designers may connect multiple lower-capacity devices (e.g. transistors, diodes, or MOVs ) in parallel . This technique can work well, but is susceptible to a phenomenon called current hogging , in which the current is not shared equally across all devices. Typically, one device may have a slightly lower resistance, and thus draws more current, heating it more than its sibling devices, causing its resistance to drop further. The electrical load ends up funneling into a single device, which then rapidly fails. Thus, an array of devices may end up no more robust than its weakest component. The current-hogging effect can be reduced by carefully matching the characteristics of each paralleled device, or by using other design techniques to balance the electrical load. However, maintaining load balance under extreme conditions may not be straightforward.
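The current-hogging feedback just described can be sketched with a toy electrothermal model. In the following Python sketch, two nominally identical devices with a falling resistance-versus-temperature characteristic share a fixed total current; every parameter value is an illustrative assumption, not a datasheet figure:

```python
# Toy model of current hogging: two paralleled devices whose resistance falls
# with temperature (NTC-like behavior) share a fixed total current. A 5%
# initial resistance mismatch is amplified by electrothermal feedback until
# one device carries most of the load. All values are illustrative only.
import math

I_TOT = 2.0         # total current forced through the pair, A
R0 = (1.00, 0.95)   # cold resistances, ohms (device 2 starts slightly lower)
T0 = 10.0           # kelvins of self-heating per e-fold drop in resistance
R_TH = 50.0         # thermal resistance to ambient, K/W

T = [0.0, 0.0]      # temperature rise above ambient, K
for _ in range(5000):
    R = [r0 * math.exp(-t / T0) for r0, t in zip(R0, T)]
    G = [1.0 / r for r in R]
    I = [I_TOT * g / sum(G) for g in G]  # current-divider split
    # Relax each temperature toward its steady-state value (damped update).
    T = [t + 0.05 * (R_TH * i * i * r - t) for t, i, r in zip(T, I, R)]

print(f"device currents: {I[0]:.2f} A and {I[1]:.2f} A")  # device 2 hogs
```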
Devices with an intrinsic positive temperature coefficient (PTC) of electrical resistance are less prone to current hogging, but thermal runaway can still occur because of poor heat sinking or other problems. Many electronic circuits contain special provisions to prevent thermal runaway. This is most often seen in transistor biasing arrangements for high-power output stages. However, when equipment is used above its designed ambient temperature, thermal runaway can still occur in some cases. This occasionally causes equipment failures in hot environments, or when air cooling vents are blocked. Silicon shows a peculiar profile, in that its electrical resistance increases with temperature up to about 160 °C, then starts decreasing , and drops further when the melting point is reached. This can lead to thermal runaway phenomena within internal regions of the semiconductor junction ; the resistance decreases in the regions which become heated above this threshold, allowing more current to flow through the overheated regions, in turn causing yet more heating in comparison with the surrounding regions, which leads to further temperature increase and resistance decrease. This leads to the phenomenon of current crowding and formation of current filaments (similar to current hogging, but within a single device), and is one of the underlying causes of many semiconductor junction failures . Leakage current increases significantly in bipolar transistors (especially germanium -based bipolar transistors) as their temperature increases. Depending on the design of the circuit, this increase in leakage current can increase the current flowing through a transistor and thus the power dissipation , causing a further increase in collector-to-emitter leakage current. This is frequently seen in a push–pull stage of a class AB amplifier . If the pull-up and pull-down transistors are biased to have minimal crossover distortion at room temperature , and the biasing is not temperature-compensated, then as the temperature rises both transistors will be increasingly biased on, causing current and power to further increase, and eventually destroying one or both devices. One rule of thumb to avoid thermal runaway is to keep the operating point of a BJT so that V_ce ≤ 1/2 V_cc . Another practice is to mount a thermal feedback sensing transistor or other device on the heat sink, to control the crossover bias voltage. As the output transistors heat up, so does the thermal feedback transistor. This in turn causes the thermal feedback transistor to turn on at a slightly lower voltage, reducing the crossover bias voltage, and so reducing the heat dissipated by the output transistors. If multiple BJT transistors are connected in parallel (which is typical in high current applications), a current hogging problem can occur. Special measures must be taken to control this characteristic vulnerability of BJTs. In power transistors (which effectively consist of many small transistors in parallel), current hogging can occur between different parts of the transistor itself, with one part of the transistor becoming hotter than the others. This is called second breakdown , and can result in destruction of the transistor even when the average junction temperature seems to be at a safe level. Power MOSFETs typically increase their on-resistance with temperature.
Under some circumstances, power dissipated in this resistance causes more heating of the junction, which further increases the junction temperature , in a positive feedback loop. As a consequence, power MOSFETs have stable and unstable regions of operation. [ 5 ] However, the increase of on-resistance with temperature helps balance current across multiple MOSFETs connected in parallel, so current hogging does not occur. If a MOSFET transistor produces more heat than the heatsink can dissipate, then thermal runaway can still destroy the transistors. This problem can be alleviated to a degree by lowering the thermal resistance between the transistor die and the heatsink. See also Thermal Design Power . Metal oxide varistors typically develop lower resistance as they heat up. If connected directly across an AC or DC power bus (a common usage for protection against voltage spikes ), a MOV which has developed a lowered trigger voltage can slide into catastrophic thermal runaway, possibly culminating in a small explosion or fire. [ 6 ] To prevent this possibility, fault current is typically limited by a thermal fuse, circuit breaker, or other current limiting device. Tantalum capacitors are, under some conditions, prone to self-destruction by thermal runaway. The capacitor typically consists of a sintered tantalum sponge acting as the anode , a manganese dioxide cathode , and a dielectric layer of tantalum pentoxide created on the tantalum sponge surface by anodizing . It may happen that the tantalum oxide layer has weak spots that undergo dielectric breakdown during a voltage spike . The tantalum sponge then comes into direct contact with the manganese dioxide, and increased leakage current causes localized heating; usually, this drives an endothermic chemical reaction that produces manganese(III) oxide and regenerates ( self-heals ) the tantalum oxide dielectric layer. However, if the energy dissipated at the failure point is high enough, a self-sustaining exothermic reaction can start, similar to the thermite reaction, with metallic tantalum as fuel and manganese dioxide as oxidizer. This undesirable reaction will destroy the capacitor, producing smoke and possibly flame . [ 7 ] Therefore, tantalum capacitors can be freely deployed in small-signal circuits, but application in high-power circuits must be carefully designed to avoid thermal runaway failures. The leakage current of logic switching transistors increases with temperature. In rare instances, this may lead to thermal runaway in digital circuits. This is not a common problem, since leakage currents usually make up a small portion of overall power consumption, so the increase in power is fairly modest — for an Athlon 64 , the power dissipation increases by about 10% for every 30 degrees Celsius. [ 8 ] For a device with a TDP of 100 W, for thermal runaway to occur, the heat sink would have to have a thermal resistance of over 3 K/W (kelvins per watt), which is about 6 times worse than a stock Athlon 64 heat sink. (A stock Athlon 64 heat sink is rated at 0.34 K/W, although the actual thermal resistance to the environment is somewhat higher, due to the thermal boundary between processor and heatsink, rising temperatures in the case, and other thermal resistances. [ citation needed ] ) Regardless, an inadequate heat sink with a thermal resistance of over 0.5 to 1 K/W would result in the destruction of a 100 W device even without thermal runaway effects.
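These numbers can be explored with a small fixed-point iteration. The sketch below assumes a toy model in which total power grows by 10% per 30 °C (the Athlon 64 figure quoted above, anchored at an arbitrary 60 °C reference) and asks whether the die temperature settles or diverges for a given heat-sink thermal resistance; all parameters are illustrative:

```python
# Toy leakage-feedback model: does the die temperature settle or run away?
# Steady state requires T = T_amb + R_th * P(T); iterate and watch for divergence.

def power(T, P0=100.0, T_ref=60.0):
    """Dissipated power in watts at die temperature T (degrees C); assumed
    to grow 10% per 30 C, with 100 W at an arbitrary 60 C reference."""
    return P0 * 1.10 ** ((T - T_ref) / 30.0)

def settle(R_th, T_amb=25.0, T_max=300.0):
    """Fixed-point iteration; return the steady temperature, or None on runaway."""
    T = T_amb
    for _ in range(10000):
        T_new = T_amb + R_th * power(T)
        if T_new > T_max:
            return None          # diverged: thermal runaway
        if abs(T_new - T) < 1e-6:
            return T_new         # converged to a stable operating point
        T = T_new
    return T

print(settle(0.34))  # stock-like heat sink: settles around 60 C
print(settle(3.0))   # very poor heat sink: None (runaway)
```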
When handled improperly, manufactured defectively, or damaged, some rechargeable batteries can experience thermal runaway resulting in overheating. Sealed cells will sometimes explode violently if safety vents are overwhelmed or nonfunctional. [ 9 ] Especially prone to thermal runaway are lithium-ion batteries , most markedly in the form of the lithium polymer battery . [ citation needed ] Lithium-ion batteries are often found in everyday consumer electronics and vehicles. Reports of exploding cellphones occasionally appear in newspapers. In 2006, batteries from Apple, HP, Toshiba, Lenovo, Dell and other notebook manufacturers were recalled because of fire and explosions. [ 10 ] [ 11 ] [ 12 ] [ 13 ] The Pipeline and Hazardous Materials Safety Administration (PHMSA) of the U.S. Department of Transportation has established regulations regarding the carrying of certain types of batteries on airplanes because of their instability in certain situations. This action was partially inspired by a cargo bay fire on a FedEx airplane. [ 14 ] One of the possible solutions is using safer and less reactive anode (lithium titanates) and cathode ( lithium iron phosphate ) materials — thereby avoiding the cobalt electrodes in many lithium rechargeable cells — together with non-flammable electrolytes based on ionic liquids. Runaway thermonuclear reactions can occur in stars when nuclear fusion is ignited in conditions under which the gravitational pressure exerted by overlying layers of the star greatly exceeds thermal pressure , a situation that makes possible rapid increases in temperature through gravitational compression . Such a scenario may arise in stars containing degenerate matter , in which electron degeneracy pressure rather than normal thermal pressure does most of the work of supporting the star against gravity, and in stars undergoing implosion. In all cases, the imbalance arises prior to fusion ignition; otherwise, the fusion reactions would be naturally regulated to counteract temperature changes and stabilize the star. When thermal pressure is in equilibrium with overlying pressure, a star will respond to the increase in temperature and thermal pressure due to initiation of a new exothermic reaction by expanding and cooling. A runaway reaction is only possible when this response is inhibited. When stars in the 0.8–2.0 solar mass range exhaust the hydrogen in their cores and become red giants , the helium accumulating in their cores reaches degeneracy before it ignites. When the degenerate core reaches a critical mass of about 0.45 solar masses, helium fusion is ignited and takes off in a runaway fashion, called the helium flash , briefly increasing the star's energy production to a rate 100 billion times normal. About 6% of the core is quickly converted into carbon. [ 15 ] While the release is sufficient to convert the core back into normal plasma after a few seconds, it does not disrupt the star, [ 16 ] [ 17 ] nor immediately change its luminosity. The star then contracts, leaving the red giant phase and continuing its evolution into a stable helium-burning phase . A nova results from runaway hydrogen fusion (via the CNO cycle ) in the outer layer of a carbon-oxygen white dwarf star. If a white dwarf has a companion star from which it can accrete gas , the material will accumulate in a surface layer made degenerate by the dwarf's intense gravity.
Under the right conditions, a sufficiently thick layer of hydrogen is eventually heated to a temperature of 20 million K, igniting runaway fusion. The surface layer is blasted off the white dwarf, increasing luminosity by a factor on the order of 50,000. The white dwarf and companion remain intact, however, so the process can repeat. [ 18 ] A much rarer type of nova may occur when the outer layer that ignites is composed of helium. [ 19 ] Analogous to the process leading to novae, degenerate matter can also accumulate on the surface of a neutron star that is accreting gas from a close companion. If a sufficiently thick layer of hydrogen accumulates, ignition of runaway hydrogen fusion can then lead to an X-ray burst . As with novae, such bursts tend to repeat and may also be triggered by helium or even carbon fusion. [ 20 ] [ 21 ] It has been proposed that in the case of "superbursts", runaway breakup of accumulated heavy nuclei into iron group nuclei via photodissociation rather than nuclear fusion could contribute the majority of the energy of the burst. [ 21 ] A type Ia supernova results from runaway carbon fusion in the core of a carbon-oxygen white dwarf star. If a white dwarf, which is composed almost entirely of degenerate matter, can gain mass from a companion, the increasing temperature and density of material in its core will ignite carbon fusion if the star's mass approaches the Chandrasekhar limit . This leads to an explosion that completely disrupts the star. Luminosity increases by a factor of greater than 5 billion. One way to gain the additional mass would be by accreting gas from a giant star (or even main sequence ) companion. [ 22 ] A second and apparently more common mechanism to generate the same type of explosion is the merger of two white dwarfs . [ 22 ] [ 23 ] A pair-instability supernova is believed to result from runaway oxygen fusion in the core of a massive , 130–250 solar mass, low to moderate metallicity star. [ 24 ] According to theory, in such a star, a large but relatively low density core of nonfusing oxygen builds up, with its weight supported by the pressure of gamma rays produced by the extreme temperature. As the core heats further, the gamma rays eventually begin to pass the energy threshold needed for collision-induced decay into electron - positron pairs, a process called pair production . This causes a drop in the pressure within the core, leading it to contract and heat further, causing more pair production, a further pressure drop, and so on. The core starts to undergo gravitational collapse . At some point this ignites runaway oxygen fusion, releasing enough energy to obliterate the star. These explosions are rare, perhaps about one per 100,000 supernovae. Not all supernovae are triggered by runaway nuclear fusion. Type Ib, Ic and type II supernovae also undergo core collapse, but because they have exhausted their supply of atomic nuclei capable of undergoing exothermic fusion reactions, they collapse all the way into neutron stars , or in the higher-mass cases, stellar black holes , powering explosions by the release of gravitational potential energy (largely via release of neutrinos ). It is the absence of runaway fusion reactions that allows such supernovae to leave behind compact stellar remnants .
https://en.wikipedia.org/wiki/Thermal_runaway
Thermal scanning probe lithography ( t-SPL ) is a form of scanning probe lithography [ 1 ] (SPL) whereby material is structured on the nanoscale using scanning probes, primarily through the application of thermal energy. Related fields are thermo-mechanical SPL (see also Millipede memory ), thermochemical SPL [ 2 ] [ 3 ] (or thermochemical nanolithography), where the goal is to influence the local chemistry, and thermal dip-pen lithography [ 4 ] as an additive technique. A team led by Daniel Rugar and John Mamin at the IBM research laboratories in Almaden pioneered the use of heated AFM (atomic force microscope) probes for the modification of surfaces. In 1992, they used microsecond laser pulses to heat AFM tips and write indents as small as 150 nm into the polymer PMMA at rates of 100 kHz. [ 5 ] In the following years, they developed cantilevers with resonance frequencies above 4 MHz and integrated resistive heaters and piezoresistive sensors for writing and reading of data. [ 6 ] [ 7 ] This thermo-mechanical data storage concept formed the basis of the Millipede project, initiated by Peter Vettiger and Gerd Binnig at the IBM Research laboratories Zurich in 1995. It was an example of a memory storage device with a large array of parallel probes; it was, however, never commercialized due to growing competition from non-volatile memory such as flash memory. The storage medium of the Millipede memory consisted of polymers with shape-memory functionality, such as cross-linked polystyrene, [ 8 ] allowing data to be written as indents by plastic deformation and erased again by heating. However, evaporation instead of plastic deformation was necessary for nanolithography applications, to be able to create arbitrary patterns in the resist. Such local evaporation of resist induced by a heated tip could be achieved for several materials, like pentaerythritol tetranitrate, [ 9 ] cross-linked polycarbonates, [ 10 ] and Diels-Alder polymers. [ 11 ] Significant progress in the choice of resist material was made in 2010 at IBM Research in Zurich, leading to high resolution and precise 3D-relief patterning [ 12 ] with the use of the self-amplified depolymerization polymer polyphthalaldehyde (PPA) [ 12 ] [ 13 ] and molecular glasses [ 14 ] as resists, where the polymer decomposes into volatile monomers upon heating with the tip, without the application of mechanical force and without pile-up or residues of the resist. The thermal cantilevers are fabricated from silicon wafers using bulk- and surface-micromachining processes. Probes have radii of curvature below 5 nm, enabling sub-10 nm resolution in the resist. [ 15 ] The resistive heating is carried out by integrated micro-heaters in the cantilever legs, which are created by different levels of doping. The time constant of the heaters lies between 5 μs and 100 μs. [ 16 ] [ 17 ] Electromigration limits the long-term sustainable heater temperature to 700–800 °C. [ 17 ] The integrated heaters enable in-situ metrology of the written patterns, allowing feedback control, [ 18 ] field stitching without the use of alignment markers [ 19 ] and the use of pre-patterned structures as reference for sub-5 nm overlay. [ 20 ] Pattern transfer for semiconductor device fabrication, including reactive ion etching and metal lift-off, has been demonstrated with sub-20 nm resolution.
[ 21 ] Due to the ablative nature of the patterning process, no development step (that is, selective removal of either the exposed or non-exposed regions of the resist, as in e-beam and optical lithography ) is needed, nor are optical proximity corrections. Maximum linear writing speeds of up to 20 mm/s have been shown, [ 22 ] with throughputs in the 10⁴–10⁵ μm² h⁻¹ range, [ 1 ] which is comparable to single-column, Gaussian-shaped e-beam lithography using HSQ as resist. [ 23 ] The resolution of t-SPL is determined by the probe tip shape and is not limited by the diffraction limit or by the focal spot size of beam approaches; however, tip-sample interactions during the in-situ metrology process cause tip wear, [ 24 ] limiting the lifetime of the probes. In order to extend the lifetime of the probe tips, ultrananocrystalline diamond (UNCD) [ 25 ] and silicon carbide (SiC)-coated [ 24 ] tips, as well as wear-less floating contact imaging methods, [ 26 ] have been demonstrated. No electron damage or charging is caused to the patterned surfaces due to the absence of electron or ion beams. [ 21 ]
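To put the quoted throughput figures in perspective, the patterning time for a given area follows from a simple division. A minimal back-of-the-envelope sketch in Python; the 1 mm² pattern area is a hypothetical example, not a figure from the source:

```python
# Rough patterning-time estimate from the throughput range quoted above.
# The 1 mm^2 pattern area is a hypothetical example for illustration.

AREA_UM2 = 1_000_000                 # 1 mm^2 expressed in square micrometres
THROUGHPUTS_UM2_PER_H = (1e4, 1e5)   # t-SPL throughput range cited in the text

for tp in THROUGHPUTS_UM2_PER_H:
    hours = AREA_UM2 / tp
    print(f"At {tp:.0e} um^2/h, patterning 1 mm^2 takes about {hours:.0f} h")
# -> roughly 10-100 hours, which is why t-SPL targets small, high-value patterns.
```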
https://en.wikipedia.org/wiki/Thermal_scanning_probe_lithography
A thermal shift assay ( TSA ) measures changes in the thermal denaturation temperature, and hence stability, of a protein under varying conditions such as variations in drug concentration, buffer formulation ( pH or ionic strength ), redox potential, or sequence mutation. The most common method for measuring protein thermal shifts is differential scanning fluorimetry (DSF). DSF methodology includes techniques such as nanoDSF, [ 1 ] [ 2 ] which relies on the intrinsic fluorescence from native tryptophan or tyrosine residues, and Thermofluor, which utilizes extrinsic fluorogenic dyes. [ 3 ] The binding of low molecular weight ligands can increase the thermal stability of a protein, as described by Daniel Koshland (1958) [ 4 ] and Kaj Ulrik Linderstrøm-Lang and Schellman (1959). [ 5 ] Almost half of enzymes require a metal ion co-factor. [ 6 ] Thermostable proteins are often more useful than their non-thermostable counterparts, e.g., DNA polymerase in the polymerase chain reaction, [ 7 ] so protein engineering often includes adding mutations to increase thermal stability. Protein crystallization is more successful for proteins with a higher melting point, [ 8 ] and adding buffer components that stabilize proteins improves the likelihood of protein crystals forming. [ 9 ] If examining pH, then the possible effects of the buffer molecule on thermal stability should be taken into account, along with the fact that the pKa of each buffer molecule changes uniquely with temperature. [ 10 ] Additionally, any time a charged species is examined, the effects of the counterion should be accounted for. Thermal stability of proteins has traditionally been investigated using biochemical assays , circular dichroism , or differential scanning calorimetry . Biochemical assays require a catalytic activity of the protein in question as well as a specific assay. Circular dichroism and differential scanning calorimetry both consume large amounts of protein and are low-throughput methods. The Thermofluor assay was the first high-throughput thermal shift assay, and its utility and limitations have spurred the invention of a plethora of alternative methods. Each method has its strengths and weaknesses, but they all struggle with intrinsically disordered proteins without any clearly defined tertiary structure, as the essence of a thermal shift assay is measuring the temperature at which a protein goes from well-defined structure to disorder. nano-Differential scanning fluorimetry, or nanoDSF , is a biophysical characterization technique used for assessing the conformational stability of a biological sample, typically a protein. [ 2 ] Samples are subjected to either temperature ramps or gradients of chemical denaturant, and the intrinsic fluorescence is measured and fit to determine the melting point ( T m ). Applications include formulation ranking, protein engineering (comparing mutants to wild type), and ligand binding (quantification of affinity constants). A prerequisite of the technique is that the protein must contain an intrinsically fluorescent residue, typically tryptophan or tyrosine. Benefits include tag-free analysis, avoidance of extrinsic fluorophores, low sample consumption, ease of use, amenability to automation, and high screening throughput. Drawbacks include a propensity for false positives and negatives, usually necessitating follow-up screening with a potentially lower-throughput orthogonal technique for confirmation.
Current commercial instruments employ either proprietary capillaries [ 11 ] [ 12 ] or generic high-throughput 384-well plates [ 13 ] for sample analysis. The technique was first described by Semisotnov et al. (1991) [ 14 ] using 1,8-ANS and quartz cuvettes. 3-Dimensional Pharmaceuticals was the first to describe a high-throughput version using a plate reader, [ 15 ] and Wyeth Research published a variation of the method with SYPRO Orange instead of 1,8-ANS. [ 16 ] SYPRO Orange has an excitation/emission wavelength profile compatible with qPCR machines, which are almost ubiquitous in institutions that perform molecular biology research. The name differential scanning fluorimetry (DSF) was introduced later, [ 17 ] but Thermofluor is preferable, as Thermofluor is no longer trademarked and differential scanning fluorimetry is easily confused with differential scanning calorimetry. SYPRO Orange binds nonspecifically to hydrophobic surfaces, and water strongly quenches its fluorescence. When the protein unfolds, the exposed hydrophobic surfaces bind the dye, resulting in an increase in fluorescence by excluding water. Detergent micelles will also bind the dye and increase background noise dramatically. This effect is lessened by switching to the dye ANS; [ 18 ] however, this reagent requires UV excitation. The stability curve and its midpoint value (melting temperature, T m , also known as the temperature of hydrophobic exposure, T h ) are obtained by gradually increasing the temperature to unfold the protein and measuring the fluorescence at each point. Curves are measured for protein only and protein + ligand, and Δ T m is calculated. The method may not work very well for protein-protein interactions if one of the interaction partners contains large hydrophobic patches, as it is difficult to dissect prevention of aggregation, stabilization of a native fold, and steric hindrance of dye access to hydrophobic sites. In addition, partly aggregated protein can also limit the relative fluorescence increase upon heating; in extreme cases there will be no fluorescence increase at all, because all the protein is already in aggregates before heating. Knowing this effect can be very useful, as a high relative fluorescence increase suggests a significant fraction of folded protein in the starting material. This assay allows high-throughput screening of ligands to the target protein, and it is widely used in the early stages of drug discovery in the pharmaceutical industry, structural genomics efforts, and high-throughput protein engineering. Alexandrov et al. (2008) [ 20 ] published a variation on the Thermofluor assay in which SYPRO Orange was replaced by N -[4-(7-diethylamino-4-methyl-3-coumarinyl)phenyl]maleimide (CPM), a compound that only fluoresces after reacting with a nucleophile. CPM has a high preference for thiols over other typical biological nucleophiles and therefore will react with cysteine side chains before others. Cysteines are typically buried in the interior of a folded protein, as they are hydrophobic. When a protein denatures, cysteine thiols become available and a fluorescent signal can be read from reacted CPM. The excitation and emission wavelengths for reacted CPM are 387 nm/463 nm, so a fluorescence plate reader or a qPCR machine with specialized filters is required. Alexandrov et al. used the technique successfully on the membrane proteins Apelin GPCR and FAAH, as well as β-lactoglobulin, which fibrillates on heating rather than going to a molten globule.
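The midpoint extraction described above is straightforward to script. The following is a minimal sketch in Python using NumPy and SciPy; the two-state Boltzmann sigmoid model, the synthetic data and all parameter values are illustrative assumptions rather than a prescribed analysis from the source:

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(T, F_min, F_max, Tm, slope):
    """Two-state sigmoid commonly used to fit thermal melt curves."""
    return F_min + (F_max - F_min) / (1.0 + np.exp((Tm - T) / slope))

def fit_tm(T, F):
    """Fit a melt curve and return the apparent melting temperature Tm."""
    p0 = [F.min(), F.max(), T[np.argmax(np.gradient(F))], 2.0]  # crude initial guess
    popt, _ = curve_fit(boltzmann, T, F, p0=p0)
    return popt[2]

# Synthetic example: apo protein (Tm = 52 C) vs. protein + ligand (Tm = 57 C)
T = np.linspace(25, 95, 141)
rng = np.random.default_rng(0)
apo = boltzmann(T, 100, 1000, 52.0, 1.8) + rng.normal(0, 8, T.size)
holo = boltzmann(T, 100, 1000, 57.0, 1.8) + rng.normal(0, 8, T.size)

tm_apo, tm_holo = fit_tm(T, apo), fit_tm(T, holo)
print(f"Tm(apo)  = {tm_apo:.1f} C")
print(f"Tm(holo) = {tm_holo:.1f} C")
print(f"Delta Tm = {tm_holo - tm_apo:.1f} C")  # positive shift suggests binding
```

In practice, dye-based melt curves often decline again after the transition as the dye-aggregate complexes dissociate, so the fit is usually restricted to the transition region.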
The DSF-GTP technique was developed by a team led by Patrick Schaeffer at James Cook University and published in Moreau et al. 2012. [ 21 ] The development of differential scanning fluorimetry and the high-throughput capability of Thermofluor have vastly facilitated the screening of crystallization conditions of proteins and of large mutant libraries in structural genomics programs, as well as of ligands in drug discovery and functional genomics programs. These techniques are limited by their requirement for both highly purified proteins and solvatochromic dyes, prompting the need for more robust high-throughput technologies that can be used with crude protein samples. This need was met with the development of a new high-throughput technology for the quantitative determination of protein stability and ligand binding by differential scanning fluorimetry of proteins tagged with green fluorescent protein (GFP). This technology is based on the principle that a change in the proximal environment of GFP, such as unfolding and aggregation of the protein of interest, is measurable through its effect on the fluorescence of the fluorophore. The technology is simple, fast and insensitive to variations in sample volumes, and the useful temperature and pH ranges are 30–80 °C and pH 5–11, respectively. The system does not require solvatochromic dyes, reducing the risk of interferences. The protein samples are simply mixed with the test conditions in a 96-well plate and subjected to a melt-curve protocol using a real-time thermal cycler. The data are obtained within 1–2 h and include unique quality control measures through the GFP signal. DSF-GTP has been applied for the characterization of proteins and the screening of small compounds. [ 22 ] [ 23 ] [ 24 ] [ 25 ] [ 26 ] 4-(Dicyanovinyl)julolidine (DCVJ) is a molecular rotor probe whose fluorescence is strongly dependent on the rigidity of its environment. When protein denatures, DCVJ increases in fluorescence. It has been reported to work with 40 mg/ml of antibody. [ 27 ] The lifetime of tryptophan fluorescence differs between folded and unfolded protein. Quantification of UV-excited fluorescence lifetimes at various temperature intervals yields a measurement of T m . A prominent advantage of this technique is that no reporter dyes need be added, as tryptophan is an intrinsic part of the protein. This can also be a disadvantage, as not all proteins contain tryptophan. Intrinsic fluorescence lifetime works with membrane proteins and detergent micelles, but a powerful UV fluorophore (e.g., an auto-fluorescent small molecule) in the buffer could drown out the signal. Utilization of the intrinsic fluorescence properties of tryptophan residues in many proteins forms the basis of nanoDSF. The emission wavelengths of tryptophan residues are dependent on the surrounding chemical environment, notably solvation (see solvatochromism ), and therefore differ between folded and unfolded protein, just as with the fluorescence lifetime. Typically, interior tryptophan residues in a more hydrophobic environment exhibit a notable emission red shift, from approximately 330 nm to 350 nm, upon protein unfolding and exposure to water. Quantification of fluorescence wavelength shifts at various temperature intervals yields a measurement of T m . Currently there are at least three instruments on the market that can read this shift in wavelength in a high-throughput manner while heating the samples.
[ 13 ] [ 11 ] [ 12 ] The advantages and disadvantages are the same as for fluorescence lifetime, except that there are more published examples of its use. [ 2 ] Static light scattering allows monitoring of the sizes of the species in solution. Since proteins typically aggregate upon denaturation (or form fibrils), the detected species size will go up. This is label-free and independent of specific residues in the protein or of buffer composition. The only requirements are that the protein actually aggregates/fibrillates after denaturation and that the protein of interest has been purified. In fast parallel proteolysis (FastPP), the researcher adds a thermostable protease ( thermolysin ) and takes out samples in parallel upon heating in a thermal gradient cycler. [ 28 ] Optionally, for instance for proteins expressed at low levels, a western blot is then run to determine at what temperature a protein becomes degraded. For pure or highly enriched proteins, direct SDS-PAGE detection is possible, facilitating Coomassie-fluorescence-based direct quantification. FastPP exploits the fact that proteins become increasingly susceptible to proteolysis when unfolded, and that thermolysin cleaves at hydrophobic residues, which are typically found in the core of proteins. To reduce the workload, western blots can be replaced by SDS-PAGE gel polyhistidine-tag staining, provided that the protein has such a tag and is expressed in adequate amounts. FastPP can be used on unpurified, complex mixtures of proteins and on proteins fused with other proteins, such as GST or GFP , as long as the sequence that is the target of the western blot, e.g., a His-tag , is directly linked to the protein of interest. However, commercially available thermolysin is dependent on calcium ions for activity and denatures itself just above 85 degrees Celsius. So calcium must be present, and calcium chelators absent, in the buffer; other compounds that interfere with the function of the protease (such as high concentrations of detergents) could also be problematic. FastPP has also been used to monitor binding-coupled folding of intrinsically disordered proteins (IDPs). Cellular thermal shift assay (CETSA ® ) [ 29 ] is a biophysical technique applicable to living cells as well as tissue biopsies. CETSA ® is based on the discovery that protein melting curves can also be generated in intact cells and that drug binding leads to very significant thermal stabilization of proteins. Upon denaturation, proteins aggregate and can thus be removed by centrifugation after lysis of the cells. The stable proteins found in the supernatant can then be detected, e.g., by western blot, alpha-LISA, or mass spectrometry. The CETSA ® technique is highly stringent, reproducible, and not prone to false positives. [ citation needed ] However, it is possible for a sample, or small molecule compound, to bind a protein in a given target's pathway. If that protein induces further stabilization of the original target protein through a cascade event, it could manifest as direct target engagement. An advantage of this method is that it is label-free and thus applicable for studies of drug binding in a wide range of cells and tissues. CETSA ® can also be conducted on cell lysates versus intact cells, helping to determine sample penetration of the cell membrane. [ citation needed ] A Thermofluor variant specific for flavin-binding proteins also exists.
Analogous to Thermofluor binding assays, a small volume of protein solution is heated up and the fluorescence increase is followed as a function of temperature. In contrast to Thermofluor, no external fluorescent dye is needed, because the flavin cofactor is already present in the flavin-binding protein and its fluorescence properties change upon unfolding. [ 30 ] Size exclusion chromatography can be used directly to assess protein stability in the presence or absence of ligands. [ 31 ] Samples of purified protein are heated in a water bath or thermocycler , cooled, centrifuged to remove aggregated proteins, and run on an analytical HPLC . As the melting temperature is reached and the protein precipitates or aggregates, peak height decreases and void peak height increases. This can be used to identify ligands and inhibitors, and to optimize purification conditions. [ 32 ] [ 33 ] While of lower throughput than FSEC-TS, requiring large amounts of purified protein, SEC-TS avoids any influence of the fluorescent tag on apparent protein stability. In fluorescence-detection size exclusion chromatography (FSEC), the protein of interest is fluorescently tagged (e.g., with GFP ) and run through a gel filtration column on an FPLC system equipped with a fluorescence detector. The resulting chromatogram allows the researcher to estimate the dispersity and expression level of the tagged protein in the current buffer. [ 34 ] Since only fluorescence is measured, only the tagged protein is seen in the chromatogram. FSEC is typically used to compare membrane protein orthologs or to screen detergents for solubilizing specific membrane proteins. For the fluorescence-detection size-exclusion chromatography-based thermostability assay (FSEC-TS), the samples are heated in the same manner as in FastPP and CETSA, and following centrifugation to clear away precipitate, the supernatant is treated in the same manner as in FSEC. [ 35 ] Larger aggregates are seen in the void volume, while the peak height for the protein of interest decreases when the unfolding temperature is reached. GFP has a T m of ~76 °C, so the technique is limited to temperatures below ~70 °C. [ 35 ] GPCRs are pharmacologically important transmembrane proteins . Their X-ray crystal structures were solved long after those of other transmembrane proteins of lesser interest. The difficulty in obtaining protein crystals of GPCRs was likely due to their high flexibility. Less flexible versions were obtained by truncating, mutating, and inserting T4 lysozyme into the recombinant sequence. One of the methods researchers used to guide these alterations was the radioligand binding thermostability assay. [ 36 ] The assay is performed by incubating the protein with a radiolabelled ligand of the protein for 30 minutes at a given temperature, then quenching on ice, running the sample through a gel filtration mini column, and quantifying the radiation levels of the protein that comes off the column. The radioligand concentration is high enough to saturate the protein. Denatured protein is unable to bind the radioligand, and the protein and radioligand will be separated in the gel filtration mini column. When screening mutants, selection will be for thermal stability in the specific conformation; i.e., if the radioligand is an agonist, selection will be for the agonist-binding conformation, and if it is an antagonist, then the screening is for stability in the antagonist-binding conformation. Radioassays have the advantage of working with minute amounts of protein.
However, they involve working with radioactive substances, and a large amount of manual labour is involved. A high-affinity ligand has to be known for the protein of interest, and the buffer must not interfere with the binding of the radioligand. Other thermal shift assays can also select for specific conformations if a ligand of the appropriate type is added to the experiment. Thermofluor has been extensively used in drug screening campaigns. [ 15 ] [ 37 ] [ 19 ] [ 20 ] [ 16 ] [ 38 ] [ 39 ] [ 40 ] [ 10 ] Because Thermofluor detects high-affinity binding sites for small molecules on proteins, it can find hits that bind to active site subsites, cofactor sites, or allosteric binding sites with equal efficacy. The method typically requires the use of screening compound concentrations at >10x the desired binding threshold. Setting 5 μM as a reasonable hit threshold consequently requires a test ligand concentration of 50 to 100 μM in the sample well. Since many compounds in most drug compound libraries are not soluble beyond ~100 μM, screening multiple pooled compounds per well is consequently not feasible owing to solubility issues. Thermofluor screens do not require the development of custom screening reagents (e.g., cleavable substrate analogs), do not require any radioactive reagents, and are generally less sensitive to the effects of compounds that are chemically reactive with protein active site residues and that consequently show up as undesirable hits in enzyme activity screens. Thermofluor measurements of T m can be quantitatively related to drug K d values, [ 41 ] although this requires additional calorimetric measurements of the target protein's enthalpy of unfolding, determined using DSC. The dynamic range of the Thermofluor assay is very large, so the same assay can be used to find micromolar hits and to optimize sub-nanomolar leads, making the method particularly useful in the development of QSAR relationships for lead optimization. Many proteins require the simultaneous or sequential binding of multiple substrates, cofactors, and/or allosteric effectors. Thermofluor studies of molecules that bind to active site subsites, cofactor sites, or allosteric binding sites can help elucidate specific features of enzyme mechanism that can be important in the design of effective drug screening campaigns [ 42 ] and in characterizing novel inhibitory mechanisms. [ 43 ] Thermofluor pre-screens can be performed that sample a wide range of pH, ionic strength, and additives such as added metal ions and cofactors. The generation of a protein response surface is useful for establishing optimal assay conditions and can frequently lead to the improved purification schemes required to support HTS campaigns and biophysical studies. [ 17 ] [ 44 ] Many applications of protein engineering for drug discovery or biophysics involve modification of the protein amino acid sequence through truncation, domain fusions, site-specific modifications or random mutagenesis. Thermofluor provides a high-throughput method for evaluating the effects of such sequence variations on protein stability, as well as a means for developing stabilizing conditions if required. [ 45 ] [ 46 ] Although proteins are dynamic structures in solution, formation of protein crystals is expected to be favored when all molecules lie in their lowest energy conformation.
Thermofluor evaluation of conditions that stabilize proteins is consequently a useful strategy for finding optimal crystallization conditions. [ 9 ] [ 47 ] [ 8 ] Since Thermofluor is a label-free assay that detects small molecule binding to high-affinity binding sites on a target protein, it is well suited to finding small molecule inhibitors of protein-protein interactions or allosteric modulation sites. [ 48 ] [ 49 ] Of course, whether or not a protein-protein interaction is ultimately "druggable" with a small molecule depends on the presence of a suitable binding site on the target protein that provides enough local energetic interactions to allow specific drug binding. Membrane proteins are often isolated in the presence of hydrophobic solubilizing agents that can partition hydrophobic-binding dyes like 1,8-ANS and SYPRO Orange and generate a fluorescence background that obscures observation of a Thermofluor protein melting signal. Nevertheless, careful optimization of conditions (e.g., to avoid micelle formation of the solubilizing agent) can often produce satisfactory assay conditions. [ 18 ] [ 20 ] The biochemical functions of protein targets identified through gene knockout or proteomics approaches are often obscure if they have low amino acid sequence homology with proteins of known function. In many cases, useful information for classifying protein function can be gained through the identification of binding cofactors or substrate analogs, so Thermofluor can assist in "decrypting" the function of proteins whose biochemical function might otherwise be unknown. [ 50 ] [ 51 ] Recent developments have extended thermal shift approaches to the analysis of ligand interactions in complex mixtures, including intact cells. Initial observations of individual proteins using fast parallel proteolysis (FastPP) showed that stabilization by ligand binding could impart resistance to proteolytic digestion with thermolysin. Protection relative to a reference was quantified through either protein staining on gels or western blotting with a labeling antibody directed to a tag fused to the target protein. [ 28 ] CETSA, for cellular thermal shift assay, is a method that monitors the stabilization effect of drug binding through the prevention of irreversible protein precipitation, which is usually initiated when a protein becomes thermally denatured. In CETSA, aliquots of cell lysate are transiently heated to different temperatures, after which samples are centrifuged to separate soluble fractions from precipitated proteins. The presence of the target protein in each soluble fraction is determined by western blotting and used to construct a CETSA melting curve that can inform regarding in vivo targeting, drug distribution, and bioavailability. [ 29 ] Both FastPP and CETSA generally require antibodies to facilitate target detection, and consequently are generally used in contexts where the target identity is known a priori. Newer developments seek to merge aspects of the FastPP and CETSA approaches by assessing the ligand-dependent proteolytic protection of targets in cells, using mass spectrometry (MS) to detect shifts in proteolysis patterns associated with protein stabilization.
[ 52 ] Present implementations still require a priori knowledge of expected targets to facilitate data analysis, but improvements in MS data collection strategies, together with the use of improved computational tools and database structures, could potentially allow the approach to be used for de novo target decryption on the total cell proteome scale. This would be a major advance for drug discovery, since it would allow the identification of discrete molecular targets (as well as off-target interactions) for drugs identified through high-content cellular or phenotypic drug screens.
https://en.wikipedia.org/wiki/Thermal_shift_assay
Thermal shock is a phenomenon characterized by a rapid change in temperature that results in a transient mechanical load on an object. The load is caused by the differential expansion of different parts of the object due to the temperature change. This differential expansion can be understood in terms of strain , rather than stress . When the associated stress exceeds the tensile strength of the material, cracks can form and eventually lead to structural failure. Several methods are used to prevent thermal shock. [ 1 ] Borosilicate glass is made to withstand thermal shock better than most other glass through a combination of reduced expansion coefficient and greater strength, though fused quartz outperforms it in both these respects. Some glass-ceramic materials (mostly in the lithium aluminosilicate (LAS) system [ 2 ] ) include a controlled proportion of material with a negative expansion coefficient, so that the overall coefficient can be reduced to almost exactly zero over a reasonably wide range of temperatures. Among the best thermomechanical materials are alumina , zirconia , tungsten alloys, silicon nitride , silicon carbide , boron carbide , and some stainless steels . Reinforced carbon-carbon is extremely resistant to thermal shock, due to graphite 's extremely high thermal conductivity and low expansion coefficient, the high strength of carbon fiber , and a reasonable ability to deflect cracks within the structure. To measure thermal shock, the impulse excitation technique has proved to be a useful tool. It can be used to measure Young's modulus , shear modulus , Poisson's ratio , and damping coefficient in a non-destructive way. The same test-piece can be measured after different thermal shock cycles, and in this way the deterioration in physical properties can be mapped out. Thermal shock resistance measures can be used for material selection in applications subject to rapid temperature changes. A common measure of thermal shock resistance is the maximum temperature differential, $\Delta T$, which can be sustained by the material for a given thickness. [ 3 ] The maximum temperature jump, $\Delta T$, sustainable by a material can be defined for strength-controlled models by: [ 4 ] [ 3 ]

$$B\,\Delta T = \frac{\sigma_f}{\alpha E}$$

where $\sigma_f$ is the failure stress (which can be the yield or fracture stress ), $\alpha$ is the coefficient of thermal expansion, $E$ is the Young's modulus, and $B$ is a constant depending upon the part constraint, material properties, and thickness.

$$B = \frac{A}{C}$$

where $C$ is a system constraint constant dependent upon the Poisson's ratio, $\nu$, and $A$ is a non-dimensional parameter dependent upon the Biot number , $\mathrm{Bi}$.
$$C = \begin{cases} 1 & \text{axial stress} \\ (1-\nu) & \text{biaxial constraint} \\ (1-2\nu) & \text{triaxial constraint} \end{cases}$$

$A$ may be approximated by:

$$A = \frac{Hh/k}{1 + Hh/k} = \frac{\mathrm{Bi}}{1 + \mathrm{Bi}}$$

where $H$ is the thickness, $h$ is the heat transfer coefficient , and $k$ is the thermal conductivity . If perfect heat transfer ($\mathrm{Bi} = \infty$) is assumed, the maximum temperature jump supported by the material is: [ 4 ] [ 5 ]

$$\Delta T = A_1 \frac{\sigma_f}{E \alpha}$$

A material index for material selection according to thermal shock resistance in the fracture-stress-derived perfect heat transfer case is therefore:

$$\frac{\sigma_f}{E \alpha}$$

For cases with poor heat transfer ($\mathrm{Bi} < 1$), the maximum temperature differential supported by the material is: [ 4 ] [ 5 ]

$$\Delta T = A_2 \frac{\sigma_f}{E \alpha} \frac{1}{\mathrm{Bi}} = A_2 \frac{\sigma_f}{E \alpha} \frac{k}{hH}$$

In the poor heat transfer case, a higher thermal conductivity is beneficial for thermal shock resistance. The material index for the poor heat transfer case is often taken as:

$$\frac{k \sigma_f}{E \alpha}$$

According to both the perfect and poor heat transfer models, larger temperature differentials can be tolerated for hot shock than for cold shock. In addition to thermal shock resistance defined by material fracture strength, models have also been defined within the fracture mechanics framework. Lu and Fleck produced criteria for thermal shock cracking based on fracture-toughness-controlled cracking. The models were based on thermal shock in ceramics (generally brittle materials). Assuming an infinite plate and mode I cracking, the crack was predicted to start from the edge for cold shock, but from the center of the plate for hot shock. [ 4 ] Cases were divided into perfect and poor heat transfer to further simplify the models. The sustainable temperature jump decreases with increasing convective heat transfer (and therefore larger Biot number). This is represented in the model shown below for perfect heat transfer ($\mathrm{Bi} = \infty$): [ 4 ] [ 5 ]

$$\Delta T = A_3 \frac{K_{Ic}}{E \alpha \sqrt{\pi H}}$$

where $K_{Ic}$ is the mode I fracture toughness , $E$ is the Young's modulus, $\alpha$ is the thermal expansion coefficient, and $H$ is half the thickness of the plate. A material index for material selection in the fracture-mechanics-derived perfect heat transfer case is therefore:

$$\frac{K_{Ic}}{E \alpha}$$

For cases with poor heat transfer, the Biot number is an important factor in the sustainable temperature jump: [ 4 ] [ 5 ]

$$\Delta T = A_4 \frac{K_{Ic}}{E \alpha \sqrt{\pi H}} \frac{k}{hH}$$

Critically, for poor heat transfer cases, materials with higher thermal conductivity, $k$, have higher thermal shock resistance.
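The strength-controlled relations above can be evaluated numerically. A minimal sketch in Python; the material values are rough, assumed figures for a generic alumina-like ceramic, and the constants $A_1$ and $A_2$ are set to 1 purely for illustration:

```python
# Strength-controlled maximum temperature jump, following the relations above.
# Material values are assumed, alumina-like figures for illustration only.
sigma_f = 300e6   # failure stress, Pa
E       = 370e9   # Young's modulus, Pa
alpha   = 8e-6    # thermal expansion coefficient, 1/K
k       = 30.0    # thermal conductivity, W/(m K)
H       = 0.01    # thickness, m
h       = 100.0   # heat transfer coefficient, W/(m^2 K)

Bi = h * H / k                      # Biot number
A  = Bi / (1.0 + Bi)                # non-dimensional parameter A

dT_perfect = sigma_f / (E * alpha)          # Bi -> infinity, A1 = 1
dT_poor    = (sigma_f / (E * alpha)) / Bi   # Bi < 1, A2 = 1

print(f"Bi = {Bi:.3f}, A = {A:.3f}")
print(f"Perfect heat transfer: dT_max ~ {dT_perfect:.0f} K")
print(f"Poor heat transfer:    dT_max ~ {dT_poor:.0f} K")
# A larger k (or a smaller h or H) raises the poor-transfer limit, which is
# why k*sigma_f/(E*alpha) is used as the material index in that case.
```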
Accordingly, a commonly chosen material index for thermal shock resistance in the poor heat transfer case is:

$$\frac{k K_{Ic}}{E \alpha}$$

The temperature difference to initiate fracture has been described by William David Kingery as: [ 6 ] [ 7 ]

$$\Delta T_c = S \, \frac{k \sigma^* (1-\nu)}{E \alpha} \, \frac{1}{h} = \frac{S R'}{h}$$

where $S$ is a shape factor, $\sigma^*$ is the fracture stress, $k$ is the thermal conductivity, $E$ is the Young's modulus, $\alpha$ is the coefficient of thermal expansion, $h$ is the heat transfer coefficient, and $R'$ is a fracture resistance parameter. The fracture resistance parameter is a common metric used to define the thermal shock tolerance of materials: [ 1 ]

$$R' = \frac{k \sigma^* (1-\nu)}{E \alpha}$$

The formulas were derived for ceramic materials and make the assumption of a homogeneous body with material properties independent of temperature, but they can be applied to other brittle materials as well. [ 7 ] Thermal shock testing exposes products to alternating low and high temperatures to accelerate failures caused by temperature cycles or thermal shocks during normal use. The transition between temperature extremes occurs very rapidly, at a rate greater than 15 °C per minute. Equipment with single or multiple chambers is typically used to perform thermal shock testing. When using single-chamber thermal shock equipment, the products remain in one chamber and the chamber air temperature is rapidly cooled and heated. Some equipment uses separate hot and cold chambers with an elevator mechanism that transports the products between two or more chambers. Glass containers can be sensitive to sudden changes in temperature. One method of testing involves rapid movement from cold to hot water baths, and back. [ 8 ]
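Returning to the formulas above, the Kingery fracture resistance parameter $R'$ can be computed directly. Again a short sketch in Python with assumed, alumina-like property values, used only to illustrate the arithmetic:

```python
# Kingery-style fracture resistance parameter R' = k*sigma*(1-nu)/(E*alpha),
# evaluated with assumed, alumina-like property values for illustration.
k     = 30.0     # thermal conductivity, W/(m K)
sigma = 300e6    # fracture stress, Pa
nu    = 0.22     # Poisson's ratio
E     = 370e9    # Young's modulus, Pa
alpha = 8e-6     # thermal expansion coefficient, 1/K

R_prime = k * sigma * (1.0 - nu) / (E * alpha)
print(f"R' = {R_prime:.0f} W/m")
# Ranking candidate materials by R' (higher is better) is a common way to
# shortlist brittle materials for rapid-quench service before detailed testing.
```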
https://en.wikipedia.org/wiki/Thermal_shock
Thermal spraying techniques are coating processes in which melted (or heated) materials are sprayed onto a surface. The "feedstock" (coating precursor) is heated by electrical (plasma or arc) or chemical means (combustion flame). Thermal spraying can provide thick coatings (approximate thickness range 20 microns to several mm, depending on the process and feedstock) over a large area at a high deposition rate, as compared to other coating processes such as electroplating and physical and chemical vapor deposition . Coating materials available for thermal spraying include metals, alloys, ceramics, plastics and composites. They are fed in powder or wire form, heated to a molten or semimolten state and accelerated towards substrates in the form of micrometer-size particles. Combustion or electrical arc discharge is usually used as the source of energy for thermal spraying. Resulting coatings are made by the accumulation of numerous sprayed particles. The surface may not heat up significantly, allowing the coating of flammable substances. Coating quality is usually assessed by measuring its porosity , oxide content, macro- and micro- hardness , bond strength and surface roughness . Generally, the coating quality increases with increasing particle velocities. Several variations of thermal spraying are distinguished. In classical (developed between 1910 and 1920) but still widely used processes such as flame spraying and wire arc spraying, the particle velocities are generally low (< 150 m/s), and raw materials must be molten to be deposited. Plasma spraying, developed in the 1970s, uses a high-temperature plasma jet generated by arc discharge with typical temperatures >15,000 K, which makes it possible to spray refractory materials such as oxides, molybdenum , etc. [ 1 ] A typical thermal spray system comprises the spray gun together with its feedstock and energy supplies and the associated controls. The detonation gun consists of a long water-cooled barrel with inlet valves for gases and powder. Oxygen and fuel (most commonly acetylene) are fed into the barrel along with a charge of powder. A spark is used to ignite the gas mixture, and the resulting detonation heats and accelerates the powder to supersonic velocity through the barrel. A pulse of nitrogen is used to purge the barrel after each detonation. This process is repeated many times a second. The high kinetic energy of the hot powder particles on impact with the substrate results in the buildup of a very dense and strong coating. The coating adheres through a mechanical bond resulting from the deformation of the base substrate wrapping around the sprayed particles after the high-speed impact. In the plasma spraying process, the material to be deposited (feedstock), typically as a powder , sometimes as a liquid , [ 2 ] suspension [ 3 ] or wire, is introduced into the plasma jet, emanating from a plasma torch . In the jet, where the temperature is on the order of 10,000 K, the material is melted and propelled towards a substrate. There, the molten droplets flatten, rapidly solidify and form a deposit. Commonly, the deposits remain adherent to the substrate as coatings; free-standing parts can also be produced by removing the substrate. There are a large number of technological parameters that influence the interaction of the particles with the plasma jet and the substrate and therefore the deposit properties. These parameters include feedstock type, plasma gas composition and flow rate, energy input, torch offset distance, substrate cooling, etc.
The deposits consist of a multitude of pancake-like 'splats' called lamellae , formed by flattening of the liquid droplets. As the feedstock powders typically have sizes from micrometers to above 100 micrometers, the lamellae have thicknesses in the micrometer range and lateral dimensions from several to hundreds of micrometers. Between these lamellae, there are small voids, such as pores, cracks and regions of incomplete bonding. As a result of this unique structure, the deposits can have properties significantly different from bulk materials. These differences include mechanical properties, such as lower strength and modulus and higher strain tolerance, as well as lower thermal and electrical conductivity . Also, due to the rapid solidification , metastable phases can be present in the deposits. This technique is mostly used to produce coatings on structural materials. Such coatings provide protection against high temperatures (for example, thermal barrier coatings for exhaust heat management ), corrosion , erosion and wear ; they can also change the appearance, electrical or tribological properties of the surface, replace worn material, etc. When sprayed on substrates of various shapes and removed, free-standing parts in the form of plates, tubes, shells, etc. can be produced. It can also be used for powder processing (spheroidization, homogenization, modification of chemistry, etc.). In this case, the substrate for deposition is absent and the particles solidify during flight or in a controlled environment (e.g., water). With some variation, this technique may also be used to create porous structures, suitable for bone ingrowth, as a coating for medical implants. A polymer dispersion aerosol can be injected into the plasma discharge in order to create a grafting of this polymer onto a substrate surface. [ 3 ] This application is mainly used to modify the surface chemistry of polymers. Plasma spraying systems can be categorized by several criteria, including the method of plasma jet generation, the plasma-forming medium, and the spraying environment. Another variation consists of using a liquid feedstock instead of a solid powder for melting; this technique is known as solution precursor plasma spray . Vacuum plasma spraying (VPS) is a technology for etching and surface modification to create porous layers with high reproducibility, and for cleaning and surface engineering of plastics, rubbers and natural fibers, as well as for replacing CFCs for cleaning metal components. This surface engineering can improve properties such as frictional behavior, heat resistance , surface electrical conductivity , lubricity , cohesive strength of films, or dielectric constant , or it can make materials hydrophilic or hydrophobic . The process typically operates at 39–120 °C to avoid thermal damage. It can induce non-thermally activated surface reactions, causing surface changes which cannot occur with molecular chemistries at atmospheric pressure. Plasma processing is done in a controlled environment inside a sealed chamber at a medium vacuum, around 13–65 Pa . The gas or mixture of gases is energized by an electrical field from DC to microwave frequencies, typically 1–500 W at 50 V. The treated components are usually electrically isolated. The volatile plasma by-products are evacuated from the chamber by the vacuum pump , and if necessary can be neutralized in an exhaust scrubber .
In contrast to molecular chemistry, plasmas act through energetic species such as free radicals, ions, electrons and UV photons. Plasma also generates electromagnetic radiation in the form of vacuum UV photons, which penetrate bulk polymers to a depth of about 10 μm. This can cause chain scissions and cross-linking. Plasmas affect materials at an atomic level. Techniques like X-ray photoelectron spectroscopy and scanning electron microscopy are used for surface analysis to identify the processes required and to judge their effects. As a simple indication of surface energy , and hence adhesion or wettability, a water droplet contact angle test is often used. The lower the contact angle, the higher the surface energy and the more hydrophilic the material is. At higher energies, ionization tends to occur more than chemical dissociation . In a typical reactive gas, 1 in 100 molecules forms free radicals whereas only 1 in 10⁶ ionizes. The predominant effect here is the forming of free radicals. Ionic effects can predominate with selection of process parameters and, if necessary, the use of noble gases. Wire arc spray is a form of thermal spraying where two consumable metal wires are fed independently into the spray gun. These wires are then charged and an arc is generated between them. The heat from this arc melts the incoming wire, which is then entrained in an air jet from the gun. This entrained molten feedstock is then deposited onto a substrate with the help of compressed air. This process is commonly used for heavy metallic coatings. [ 1 ] Plasma transferred wire arc (PTWA) is another form of wire arc spray which deposits a coating on the internal surface of a cylinder, or on the external surface of a part of any geometry. It is predominantly known for its use in coating the cylinder bores of an engine, enabling the use of aluminum engine blocks without the need for heavy cast iron sleeves. A single conductive wire is used as "feedstock" for the system. A supersonic plasma jet melts the wire, atomizes it and propels it onto the substrate. The plasma jet is formed by a transferred arc between a non-consumable cathode and the tip of the wire. After atomization, forced air transports the stream of molten droplets onto the bore wall. The particles flatten when they impinge on the surface of the substrate, due to their high kinetic energy. The particles rapidly solidify upon contact. The stacked particles make up a highly wear-resistant coating. The PTWA thermal spray process utilizes a single wire as the feedstock material. All conductive wires up to and including 0.0625 in (1.59 mm) can be used as feedstock material, including "cored" wires. PTWA can be used to apply a coating to the wear surface of engine or transmission components to replace a bushing or bearing. For example, using PTWA to coat the bearing surface of a connecting rod offers a number of benefits, including reductions in weight, cost, friction potential, and stress in the connecting rod. During the 1980s, a class of thermal spray processes called high velocity oxy-fuel spraying (HVOF) was developed. A mixture of gaseous or liquid fuel and oxygen is fed into a combustion chamber , where they are ignited and combusted continuously. The resultant hot gas at a pressure close to 1 MPa emanates through a converging–diverging nozzle and travels through a straight section. The fuels can be gases ( hydrogen , methane , propane , propylene , acetylene , natural gas , etc.) or liquids ( kerosene , etc.). The jet velocity at the exit of the barrel (>1000 m/s) exceeds the speed of sound .
A powder feedstock is injected into the gas stream, which accelerates the powder up to 800 m/s. The stream of hot gas and powder is directed towards the surface to be coated. The powder partially melts in the stream and deposits upon the substrate. The resulting coating has low porosity and high bond strength . [ 1 ] HVOF coatings may be as thick as 12 mm (1⁄2 in). It is typically used to deposit wear- and corrosion-resistant coatings on materials, such as ceramic and metallic layers. Common powders include WC -Co, chromium carbide , MCrAlY, and alumina . The process has been most successful for depositing cermet materials (WC–Co, etc.) and other corrosion-resistant alloys ( stainless steels , nickel-based alloys, aluminium, hydroxyapatite for medical implants , etc.). [ 1 ] HVAF coating technology uses the combustion of propane in a compressed air stream. Like HVOF, this produces a uniform high-velocity jet. HVAF differs by including a heat baffle to further stabilize the thermal spray mechanisms. Material is injected into the air-fuel stream and coating particles are propelled toward the part. [ 4 ] HVAF has a maximum flame temperature of 3,560 to 3,650 °F (about 1,960 to 2,010 °C) and an average particle velocity of 3,300 ft/s (about 1,000 m/s). Since the maximum flame temperature is relatively close to the melting point of most spray materials, HVAF results in a more uniform, ductile coating. This also allows for a typical coating thickness of 0.002–0.050 in (0.05–1.27 mm). HVAF coatings also have a mechanical bond strength of greater than 12,000 psi (about 83 MPa). Common HVAF coating materials include, but are not limited to, tungsten carbide , chromium carbide, stainless steel , Hastelloy , and Inconel . Due to their ductile nature, HVAF coatings can help resist cavitation damage. [ 5 ] Spray and fuse uses high heat to increase the bond between the thermal spray coating and the substrate of the part. Unlike other types of thermal spray, spray and fuse creates a metallurgical bond between the coating and the surface. This means that instead of relying on friction for coating adhesion, it melds the surface and coating material into one material. Spray and fuse comes down to the difference between adhesion and cohesion. The process usually involves spraying a powdered material onto the component and then following with an acetylene torch. The torch melts the coating material and the top layer of the component material, fusing them together. Due to the high heat of spray and fuse, some heat distortion may occur, and care must be taken to determine whether a component is a good candidate. These high temperatures are akin to those used in welding. This metallurgical bond creates an extremely wear- and abrasion-resistant coating. Spray and fuse delivers the benefits of hardface welding with the ease of thermal spray. [ 6 ] Cold spraying (or gas dynamic cold spraying) was introduced to the market in the 1990s. The method was originally developed in the Soviet Union: while experimenting with the erosion of a target substrate exposed to a two-phase high-velocity flow of fine powder in a wind tunnel, scientists observed the accidental rapid formation of coatings. [ 1 ] In cold spraying, particles are accelerated to very high speeds by the carrier gas forced through a converging–diverging de Laval type nozzle . Upon impact, solid particles with sufficient kinetic energy deform plastically and bond mechanically to the substrate to form a coating. The critical velocity needed to form bonding depends on the material's properties, powder size and temperature.
Metals , polymers , ceramics , composite materials and nanocrystalline powders can be deposited using cold spraying. [ 7 ] Soft metals such as Cu and Al are best suited for cold spraying, but coating of other materials (W, Ta, Ti, MCrAlY, WC–Co, etc.) by cold spraying has been reported. [ 1 ] The deposition efficiency is typically low for alloy powders, and the window of process parameters and suitable powder sizes is narrow. To accelerate powders to higher velocity, finer powders (<20 micrometers) are used. It is possible to accelerate powder particles to much higher velocity using a processing gas having a high speed of sound (helium instead of nitrogen). However, helium is costly, and its flow rate, and thus consumption, is higher. To improve acceleration capability, nitrogen gas is heated up to about 900 °C. As a result, deposition efficiency and the tensile strength of deposits increase. [ 1 ] Warm spraying is a novel modification of high velocity oxy-fuel spraying, in which the temperature of the combustion gas is lowered by mixing nitrogen with the combustion gas, thus bringing the process closer to cold spraying. The resulting gas contains much water vapor, unreacted hydrocarbons and oxygen, and thus is dirtier than in cold spraying. However, the coating efficiency is higher. On the other hand, the lower temperatures of warm spraying reduce melting and chemical reactions of the feed powder, as compared to HVOF. These advantages are especially important for coating materials such as Ti, plastics, and metallic glasses, which rapidly oxidize or deteriorate at high temperatures. [ 1 ] Thermal spraying is a line-of-sight process, and the bond mechanism is primarily mechanical. Thermal spray application is not compatible with the substrate if the area to which it is applied is complex or blocked by other bodies. [ 9 ] Thermal spraying need not be a dangerous process if the equipment is treated with care and correct spraying practices are followed. As with any industrial process, there are a number of hazards of which the operator should be aware and against which specific precautions should be taken. Ideally, equipment should be operated automatically in enclosures specially designed to extract fumes, reduce noise levels, and prevent direct viewing of the spraying head. Such techniques will also produce coatings that are more consistent. There are occasions when the type of components being treated, or their low production levels, require manual equipment operation. Under these conditions, a number of hazards peculiar to thermal spraying are experienced, in addition to those commonly encountered in production or processing industries. [ 10 ] [ 11 ] Metal spraying equipment uses compressed gases which create noise. Sound levels vary with the type of spraying equipment, the material being sprayed, and the operating parameters. Typical sound pressure levels are measured at 1 meter behind the arc. [ 12 ] Combustion spraying equipment produces an intense flame, which may have a peak temperature of more than 3,100 °C and is very bright. Electric arc spraying produces ultra-violet light which may damage delicate body tissues. Plasma also generates considerable UV radiation, easily burning exposed skin, and it can also cause "flash burn" to the eyes. Spray booths and enclosures should be fitted with ultra-violet absorbent dark glass. Where this is not possible, operators and others in the vicinity should wear protective goggles containing BS grade 6 green glass.
Opaque screens should be placed around spraying areas. The nozzle of an arc pistol should never be viewed directly unless it is certain that no power is available to the equipment. [ 10 ] The atomization of molten materials produces a large amount of dust and fumes made up of very fine particles (ca. 80–95% of the particles by number are <100 nm). [ 13 ] Proper extraction facilities are vital not only for personal safety, but also to minimize entrapment of re-frozen particles in the sprayed coatings. The use of respirators fitted with suitable filters is strongly recommended where equipment cannot be isolated. [ 13 ] Certain materials present specific known hazards. [ 10 ] Combustion spraying guns use oxygen and fuel gases. The fuel gases are potentially explosive. In particular, acetylene may only be used under approved conditions. Oxygen, while not explosive, will sustain combustion, and many materials will spontaneously ignite if excessive oxygen levels are present. Care must be taken to avoid leakage and to isolate oxygen and fuel gas supplies when not in use. [ 10 ] Electric arc guns operate at low voltages (below 45 V DC), but at relatively high currents. They may be safely hand-held. The power supply units are connected to 440 V AC sources and must be treated with caution. [ 10 ]
https://en.wikipedia.org/wiki/Thermal_spraying
In thermodynamics , thermal stability describes the stability of a water body and its resistance to mixing . [ 1 ] It is the amount of work needed to transform the water to a uniform water density . The Schmidt stability "S" is commonly measured in joules per square meter (J/m²).
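As a concrete illustration, the Schmidt stability can be computed numerically from a measured density profile. The sketch below (Python with NumPy) follows one common formulation of the Schmidt stability integral (after Idso, 1973), which is an assumption here since the text above does not spell out the formula; the density profile and the constant lake area are also simplifying assumptions:

```python
import numpy as np

# Schmidt stability S = (g / A_s) * integral( (z - z_v) * rho(z) * A(z) dz ),
# where z_v is the depth of the lake's centre of volume. One common
# formulation (after Idso, 1973); the profile and areas below are assumed.
g = 9.81                                 # m/s^2
z = np.linspace(0.0, 20.0, 201)          # depth grid, m (0 = surface)
A = np.full_like(z, 1.0e6)               # hypsographic area, m^2 (constant here)
rho = np.where(z < 8.0, 998.0, 1000.5)   # idealized two-layer density, kg/m^3

z_v = np.trapz(z * A, z) / np.trapz(A, z)        # centre-of-volume depth
S = g / A[0] * np.trapz((z - z_v) * rho * A, z)  # J/m^2

print(f"Centre of volume at {z_v:.1f} m; Schmidt stability S = {S:.0f} J/m^2")
# S = 0 for a fully mixed (uniform-density) water column; larger S means
# more work is required to mix the water body to uniform density.
```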
https://en.wikipedia.org/wiki/Thermal_stability
Thermal transmittance is the rate of transfer of heat through matter. The thermal transmittance of a material (such as insulation or concrete) or an assembly (such as a wall or window) is expressed as a U-value. The thermal insulance of a structure is the reciprocal of its thermal transmittance. Although the concept of U-value (or U-factor) is universal, U-values can be expressed in different units. In most countries, U-value is expressed in SI units, as watts per square metre-kelvin, W/(m 2 ⋅K). In the United States, U-value is expressed as British thermal units (Btu) per hour-square foot-degree Fahrenheit, Btu/(h⋅ft 2 ⋅°F). Within this article, U-values are expressed in SI unless otherwise noted. To convert from SI to US customary values, divide by 5.678. [ 1 ] Well-insulated parts of a building have a low thermal transmittance, whereas poorly insulated parts of a building have a high thermal transmittance. Losses due to thermal radiation, thermal convection and thermal conduction are taken into account in the U-value. Although it has the same units as the heat transfer coefficient, thermal transmittance is different in that the heat transfer coefficient is used solely to describe heat transfer in fluids, while thermal transmittance is used to simplify an equation that has several different forms of thermal resistances. It is described by the equation Φ = U × A × (T 1 − T 2 ), where Φ is the heat transfer in watts, U is the thermal transmittance, T 1 is the temperature on one side of the structure, T 2 is the temperature on the other side of the structure and A is the area in square metres. Thermal transmittances of most walls and roofs can be calculated using ISO 6946, unless there is metal bridging the insulation, in which case it can be calculated using ISO 10211. For most ground floors it can be calculated using ISO 13370. For most windows the thermal transmittance can be calculated using ISO 10077 or ISO 15099. ISO 9869 describes how to measure the thermal transmittance of a structure experimentally. Choice of materials and quality of installation have a critical impact on window insulation results; the frame and the double sealing of the window system are the actual weak points of window insulation. In practice the thermal transmittance is strongly affected by the quality of workmanship, and if insulation is fitted poorly, the thermal transmittance can be considerably higher than if insulation is fitted well. [ 3 ] When calculating a thermal transmittance it is helpful to consider the building's construction in terms of its different layers. For instance, a cavity wall might be described layer by layer, listing the thermal insulance of each layer. In this example the total insulance is 1.64 K⋅m 2 /W. The thermal transmittance of the structure is the reciprocal of the total thermal insulance. The thermal transmittance of this structure is therefore 0.61 W/(m 2 ⋅K). (Note that this example is simplified, as it does not take into account any metal connectors, air gaps interrupting the insulation or mortar joints between the bricks and concrete blocks.) It is possible to allow for mortar joints in calculating the thermal transmittance of a wall. Since the mortar joints allow heat to pass more easily than the light concrete blocks, the mortar is said to "bridge" the light concrete blocks.
The average thermal insulance of the "bridged" layer depends upon the fraction of the area taken up by the mortar in comparison with the fraction of the area taken up by the light concrete blocks. To calculate thermal transmittance when there are "bridging" mortar joints it is necessary to calculate two quantities, known as R max and R min . R max can be thought of as the total thermal insulance obtained if it is assumed that there is no lateral flow of heat, and R min can be thought of as the total thermal insulance obtained if it is assumed that there is no resistance to the lateral flow of heat. The U-value of the above construction is approximately equal to 2 / (R max + R min ). Further information about how to deal with "bridging" is given in ISO 6946. Whilst calculation of thermal transmittance can readily be carried out with the help of software which is compliant with ISO 6946, a thermal transmittance calculation does not fully take workmanship into account, and it does not allow for adventitious circulation of air between, through and around sections of insulation. To take the effects of workmanship-related factors fully into account it is necessary to carry out a thermal transmittance measurement. [ 4 ] ISO 9869 describes how to measure the thermal transmittance of a roof or a wall by using a heat flux sensor. These heat flux meters usually consist of thermopiles, which provide an electrical signal in direct proportion to the heat flux. Typically they might be about 100 mm (3.9 in) in diameter and perhaps about 5 mm (0.20 in) thick, and they need to be fixed firmly to the roof or wall under test in order to ensure good thermal contact. When the heat flux is monitored over a sufficiently long time, the thermal transmittance can be calculated by dividing the average heat flux by the average difference in temperature between the inside and outside of the building. For most wall and roof constructions the heat flux meter needs to monitor heat flows (and internal and external temperatures) continuously for a period of 72 hours to conform to the ISO 9869 standard. Generally, thermal transmittance measurements are most accurate when the temperature difference between inside and outside is large and stable. When convection currents play a part in transmitting heat across a building component, thermal transmittance increases as the temperature difference increases. For example, for an internal temperature of 20 °C (68 °F) and an external temperature of −20 °C (−4 °F), the optimum gap between panes in a double glazed window will be smaller than the optimum gap for an external temperature of 0 °C (32 °F). The inherent thermal transmittance of materials can also vary with temperature: the mechanisms involved are complex, and the transmittance may increase or decrease as the temperature increases. [ 5 ]
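As a numerical companion to the cavity-wall example and the R max /R min estimate above, here is a minimal sketch. The layer insulances, the mortar fraction, and the helper names are assumed for illustration (the article's own layer table did not survive extraction), while the relations U = 1/ΣR and U ≈ 2/(R max + R min ) come from the text.

```python
def u_value(insulances):
    # U-value is the reciprocal of the summed layer insulances, W/(m^2*K).
    return 1.0 / sum(insulances)

def parallel_insulance(fractions, insulances):
    # Area-weighted parallel combination of heat-flow paths: 1/R = sum(f/R).
    return 1.0 / sum(f / r for f, r in zip(fractions, insulances))

# Assumed simple build-up whose insulances sum to 1.64 K*m^2/W,
# matching the total quoted in the text.
layers = [0.04, 0.13, 0.18, 1.16, 0.13]
print(f"U = {u_value(layers):.2f} W/(m^2*K)")   # 0.61, as in the example

# Assumed bridged layer: 93% lightweight block (R = 1.16), 7% mortar
# (R = 0.12); the remaining layers sum to R_other = 0.48.
f, r_bridged, r_other = [0.93, 0.07], [1.16, 0.12], 0.48

# R_min: free lateral heat flow -> combine the bridged layer in parallel,
# then add the other layers in series.
r_min = parallel_insulance(f, r_bridged) + r_other
# R_max: no lateral heat flow -> each full path is a series sum,
# and the paths combine in parallel.
r_max = parallel_insulance(f, [r + r_other for r in r_bridged])
print(f"U ~ {2.0 / (r_max + r_min):.2f} W/(m^2*K)")
```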
https://en.wikipedia.org/wiki/Thermal_transmittance
The transport of heat in solids involves both electrons and vibrations of the atoms (phonons). When the solid is perfectly ordered over hundreds of thousands of atoms, this transport obeys established physics. However, when the size of the ordered regions decreases, new physics can arise; this is the subject of thermal transport in nanostructures. In general, two carrier types can contribute to thermal conductivity: electrons and phonons. In nanostructures phonons usually dominate, and the phonon properties of the structure become of particular importance for thermal conductivity. [ 1 ] [ 2 ] [ 3 ] These phonon properties include the phonon group velocity, phonon scattering mechanisms, heat capacity, and the Grüneisen parameter. Unlike bulk materials, nanoscale devices have thermal properties which are complicated by boundary effects due to their small size. It has been shown that in some cases phonon-boundary scattering effects dominate the thermal conduction processes, reducing thermal conductivity. [ 1 ] [ 4 ] Depending on the nanostructure size, the phonon mean free path (Λ) may be comparable to or larger than the object size L. When L is larger than the phonon mean free path, the Umklapp scattering process limits thermal conductivity (the regime of diffusive thermal conductivity). When L is comparable to or smaller than the mean free path (which is of the order of 1 μm for carbon nanostructures [ 5 ] ), the continuous energy model used for bulk materials no longer applies, and nonlocal and nonequilibrium aspects of heat transfer also need to be considered. [ 1 ] In this case phonons in a defectless structure can propagate without scattering, and the thermal conductivity becomes ballistic (similar to ballistic conductivity). More severe changes in thermal behavior are observed when the feature size L shrinks further, down to the wavelength of phonons. [ 6 ] The first measurement of thermal conductivity in silicon nanowires was published in 2003. [ 4 ] Two important features were pointed out: 1) The measured thermal conductivities are significantly lower than that of bulk Si and, as the wire diameter is decreased, the corresponding thermal conductivity is reduced. 2) As the wire diameter is reduced, phonon-boundary scattering dominates over phonon–phonon Umklapp scattering, which decreases the thermal conductivity with an increase in temperature. For the 56 nm and 115 nm wires a k ~ T³ dependence was observed, while for the 37 nm wire a k ~ T² dependence and for the 22 nm wire a k ~ T dependence were observed. Chen et al. [ 7 ] have shown that the one-dimensional cross-over for a 20 nm Si nanowire occurs around 8 K, while the anomalous behavior was observed at temperatures greater than 20 K. Therefore, the reason for this behaviour cannot be confinement of the phonons such that three-dimensional structures display two-dimensional or one-dimensional behavior. Assuming that the Boltzmann transport equation is valid, thermal conductivity can be written as k = (1/3) C v g ² τ, where C is the heat capacity, v g is the group velocity and τ is the relaxation time. Note that this assumption breaks down when the dimensions of the system are comparable to or smaller than the wavelength of the phonons responsible for thermal transport. In our case, phonon wavelengths are generally in the 1 nm range [ 8 ] and the nanowires under consideration are within the tens-of-nanometers range, so the assumption is valid.
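To make the kinetic formula concrete, the back-of-the-envelope sketch below estimates an averaged phonon mean free path for bulk silicon from Λ = 3k/(C·v g ). The input values are assumed, textbook-level room-temperature numbers, and this single-valued ("gray") treatment is an order-of-magnitude guide only.

```python
# Rough estimate of the bulk phonon mean free path from kinetic theory,
# k = (1/3) * C * v_g * MFP  =>  MFP = 3k / (C * v_g).
# Assumed room-temperature silicon values (order of magnitude only).
k_bulk = 150.0     # W/(m*K), thermal conductivity
C_vol = 1.66e6     # J/(m^3*K), volumetric heat capacity
v_g = 6.4e3        # m/s, averaged acoustic phonon group velocity

mfp = 3.0 * k_bulk / (C_vol * v_g)
print(f"Estimated phonon mean free path: {mfp * 1e9:.0f} nm")
# ~40 nm: comparable to nanowire diameters, so boundary scattering matters.
```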
Different phonon mode contributions to heat conduction can be extracted from analysis of the experimental data for silicon nanowires of different diameters, [ 1 ] using the product C·v g for the analysis. It was shown that all phonon modes contributing to thermal transport are excited well below the Si Debye temperature (645 K). From the thermal conductivity equation, one can write the product C·v g for each isotropic phonon branch i in terms of the reduced phonon energy x = hω/k B T and the phonon phase velocity v p,i , which is less sensitive to phonon dispersions than the group velocity v g . Many models of phonon thermal transport ignore the effects of transverse acoustic phonons (TA) at high frequency due to their small group velocity. (Optical phonon contributions are also ignored for the same reason.) However, the upper branch of TA phonons has a non-zero group velocity at the Brillouin zone boundary along the Γ-Κ direction and, in fact, behaves similarly to the longitudinal acoustic phonons (LA) and can contribute to the heat transport. Then, the possible phonon modes contributing to heat conduction are both LA and TA phonons at low and high frequencies. Using the corresponding dispersion curves, the C·v g product can then be calculated and fitted to the experimental data. The best fit was found when the contribution of high-frequency TA phonons was accounted as 70% of the product at room temperature. The remaining 30% is contributed by the LA and TA phonons at low frequency. Thermal conductivity in nanowires can be computed based on complete phonon dispersions instead of the linearized dispersion relations commonly used to calculate thermal conductivity in bulk materials. [ 9 ] Assuming that phonon transport is diffusive and the Boltzmann transport equation (BTE) is valid, nanowire thermal conductance G(T) can be defined as a sum over the sub-bands of the one-dimensional phonon dispersion relations, labelled by discrete quantum numbers α, of contributions involving the Bose–Einstein distribution f B , the phonon velocity v z in the z direction, and the phonon relaxation length λ along the direction of the wire length. Thermal conductivity is then obtained by normalizing the conductance to the wire geometry through the cross-sectional area S of the wire and the lattice constant a z . It was shown [ 9 ] that, using this formula and atomistically computed phonon dispersions (with interatomic potentials developed in [ 10 ] ), it is possible to predictively calculate lattice thermal conductivity curves for nanowires, in good agreement with experiments. On the other hand, it was not possible to obtain correct results with the approximated Callaway formula. [ 11 ] These results are expected to apply to "nanowhiskers" for which phonon confinement effects are unimportant. Si nanowires wider than ~35 nm are within this category. [ 9 ] For large-diameter nanowires, theoretical models assuming that the nanowire diameter is comparable to the mean free path and that the mean free path is independent of phonon frequency have been able to closely match the experimental results. But for very thin nanowires, whose dimensions are comparable to the dominant phonon wavelength, a new model is required. The study in [ 7 ] has shown that in such cases the phonon-boundary scattering is dependent on frequency, and a frequency-dependent mean free path should then be used in place of the constant mean free path l (the same quantity as Λ above).
The parameter h is a length scale associated with the disordered region, d is the diameter, N(ω) is the number of modes at frequency ω, and B is a constant related to the disordered region. [ 7 ] Thermal conductance is then calculated using the Landauer formula, in which the transmission of each phonon mode, weighted by ħω·∂f B /∂T, is integrated over frequency. As nanoscale graphitic structures, carbon nanotubes are of great interest for their thermal properties. The low-temperature specific heat and thermal conductivity show direct evidence of 1-D quantization of the phonon band structure. Modeling of the low-temperature specific heat allows determination of the on-tube phonon velocity, the splitting of phonon subbands on a single tube, and the interaction between neighboring tubes in a bundle. Measurements show a room-temperature thermal conductivity of about 3500 W/(m·K) for single-wall carbon nanotubes (SWNTs), [ 12 ] and over 3000 W/(m·K) for individual multiwalled carbon nanotubes (MWNTs). [ 13 ] It is difficult to replicate these properties on the macroscale due to imperfect contact between individual CNTs, and so tangible objects made from CNTs, such as films or fibres, have reached only up to 1500 W/(m·K) [ 14 ] so far. Addition of nanotubes to epoxy resin can double the thermal conductivity at a loading of only 1%, showing that nanotube composite materials may be useful for thermal management applications. Thermal conductivity in CNTs is mainly due to phonons rather than electrons, [ 2 ] so the Wiedemann–Franz law is not applicable. In general, the thermal conductivity is a tensor quantity, but for this discussion it is only important to consider the diagonal elements, k zz = Σ C v z ² τ, where C is the specific heat, and v z and τ are the group velocity and relaxation time of a given phonon state. At temperatures far below the Debye temperature, the relaxation time is determined by scattering off fixed impurities, defects, sample boundaries, etc., and is roughly constant. [ citation needed ] Therefore, in ordinary materials, the low-temperature thermal conductivity has the same temperature dependence as the specific heat. However, in anisotropic materials, this relationship does not strictly hold. Because the contribution of each state is weighted by the scattering time and the square of the velocity, the thermal conductivity preferentially samples states with large velocity and scattering time. For instance, in graphite, the thermal conductivity parallel to the basal planes is only weakly dependent on the interlayer phonons. In SWNT bundles, it is likely that k(T) depends only on the on-tube phonons, rather than the intertube modes. Thermal conductivity is of particular interest in low-dimensional systems. For a CNT, represented as a 1-D ballistic electronic channel, the electronic conductance is quantized, with the universal value G 0 = 2e²/h. Similarly, for a single ballistic 1-D channel, the thermal conductance is independent of materials parameters, and there exists a quantum of thermal conductance, which is linear in temperature: g 0 = π²k B ²T/(3h). [ 15 ] Possible conditions for observation of this quantum were examined by Rego and Kirczenow. [ 16 ] In 1999, Keith Schwab, Erik Henriksen, John Worlock, and Michael Roukes carried out a series of experimental measurements that enabled the first observation of the thermal conductance quantum. [ 17 ] The measurements employed suspended nanostructures coupled to sensitive DC SQUID measurement devices. In 2008, a colorized electron micrograph of one of the Caltech devices was acquired for the permanent collection of the Museum of Modern Art in New York.
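For a sense of scale, the quantum of thermal conductance quoted above can be evaluated directly. The short sketch below (the evaluation temperature of 1 K is chosen arbitrarily) reproduces the commonly cited value of roughly one picowatt per kelvin.

```python
import math

kB = 1.380649e-23    # J/K, Boltzmann constant
h = 6.62607015e-34   # J*s, Planck constant
T = 1.0              # K, chosen evaluation temperature

# Quantum of thermal conductance for one ballistic phonon channel.
g0 = math.pi**2 * kB**2 * T / (3.0 * h)
print(f"g0(1 K) = {g0:.2e} W/K")   # ~9.5e-13 W/K
```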
At high temperatures, three-phonon Umklapp scattering begins to limit the phonon relaxation time. Therefore, the phonon thermal conductivity displays a peak and decreases with increasing temperature. Umklapp scattering requires production of a phonon beyond the Brillouin zone boundary; because of the high Debye temperature of diamond and graphite, the peak in the thermal conductivity of these materials is near 100 K, significantly higher than for most other materials. In less crystalline forms of graphite, such as carbon fibers, the peak in k(T) occurs at higher temperatures, because defect scattering remains dominant over Umklapp scattering up to higher temperature. [ 18 ] In low-dimensional systems, it is difficult to conserve both energy and momentum for Umklapp processes, [ 19 ] and so it may be possible that Umklapp scattering is suppressed in nanotubes relative to 2-D or 3-D forms of carbon. Berber et al. [ 20 ] have calculated the phonon thermal conductivity of isolated nanotubes. The value of k(T) peaks near 100 K and then decreases with increasing temperature. The value of k(T) at the peak (37,000 W/(m·K)) is comparable to the highest thermal conductivity ever measured (41,000 W/(m·K) for an isotopically pure diamond sample at 104 K). Even at room temperature, the thermal conductivity is quite high (6600 W/(m·K)), exceeding the reported room-temperature thermal conductivity of isotopically pure diamond by almost a factor of 2. In graphite, the interlayer interactions quench the thermal conductivity by nearly one order of magnitude [ citation needed ] . It is likely that the same process occurs in nanotube bundles [ citation needed ] . Thus it is significant that the coupling between tubes in bundles is weaker than expected [ citation needed ] . It may be that this weak coupling, which is problematic for mechanical applications of nanotubes, is an advantage for thermal applications. The phonon density of states is calculated from the band structure of isolated nanotubes, which is studied in Saito et al. [ 21 ] [ 22 ] and Sanchez-Portal et al. [ 23 ] When a graphene sheet is "rolled" into a nanotube, the 2-D band structure folds into a large number of 1-D subbands. In a (10,10) tube, for instance, the six phonon bands (three acoustic and three optical) of the graphene sheet become 66 separate 1-D subbands. A direct result of this folding is that the nanotube density of states has a number of sharp peaks due to 1-D van Hove singularities, which are absent in graphene and graphite. Despite the presence of these singularities, the overall density of states is similar at high energies, so the high-temperature specific heat should be roughly equal as well. This is to be expected: the high-energy phonons are more reflective of carbon–carbon bonding than of the geometry of the graphene sheet. Thin films are prevalent in the micro- and nanoelectronics industry for the fabrication of sensors, actuators and transistors; thus, thermal transport properties affect the performance and reliability of many structures such as transistors, solid-state lasers, sensors, and actuators. Although these devices are traditionally made from bulk crystalline material (silicon), they often contain thin films of oxides, polysilicon and metal, as well as superlattices such as thin-film stacks of GaAs/AlGaAs for lasers.
Silicon-on-insulator (SOI) films with silicon thicknesses of 0.05 μm to 10 μm above a buried silicon dioxide layer are increasingly popular for semiconductor devices due to the increased dielectric isolation associated with SOI. [ 24 ] SOI wafers contain a thin film of single-crystal silicon on a buried oxide layer; this layered structure reduces the effective thermal conductivity of the silicon film by up to 50% compared to bulk silicon, due to phonon-interface scattering and to defects and dislocations in the crystalline structure. Previous studies by Asheghi et al. show a similar trend. [ 24 ] Other studies of thin films show similar thermal effects [ citation needed ] . Thermal properties associated with superlattices are critical in the development of semiconductor lasers. Heat conduction in superlattices is less well understood than in homogeneous thin films. It is theorized that superlattices have a lower thermal conductivity due to impurities from lattice mismatches and to the heterojunctions. Phonon-interface scattering at heterojunctions needs to be considered in this case; assuming fully elastic scattering underestimates the heat conduction, while assuming fully inelastic scattering overestimates it. [ 25 ] [ 26 ] For example, a Si/Ge thin-film superlattice shows a greater decrease in thermal conductivity than an AlAs/GaAs film stack [ 27 ] due to its larger lattice mismatch. A simple estimate of the heat conduction of a superlattice can be written in terms of C 1 and C 2 , the heat capacities of film 1 and film 2 respectively, v 1 and v 2 , the acoustic propagation velocities in film 1 and film 2, and d 1 and d 2 , the thicknesses of film 1 and film 2. This model neglects scattering within the layers and assumes fully diffuse, inelastic scattering. [ 28 ] Polycrystalline films are common in semiconductor devices, as the gate electrode of a field-effect transistor is often made of polycrystalline silicon. If the polysilicon grain sizes are small, internal scattering from grain boundaries can overwhelm the effects of film-boundary scattering. Also, grain boundaries contain more impurities, which results in impurity scattering. Likewise, disordered or amorphous films will experience a severe reduction of thermal conductivity, since the small grain size results in numerous grain-boundary scattering events. [ 29 ] Different deposition methods for amorphous films result in differences in impurities and grain sizes. [ 28 ] The simplest approach to modeling phonon scattering at grain boundaries is to increase the scattering rate by a term of the form 1/τ GB = B·v/d G , where B is a dimensionless parameter that correlates with the phonon reflection coefficient at the grain boundaries, d G is the characteristic grain size, and v is the phonon velocity through the material. A more formal approach estimates the scattering rate using a dimensionless grain-boundary scattering strength ν G , defined in terms of σ j , the cross-section of a grain-boundary area, and ν j , the density of grain-boundary areas. [ 28 ] There are two approaches to experimentally determining the thermal conductivity of thin films. The goal of experimental metrology of the thermal conductivity of thin films is to attain an accurate thermal measurement without disturbing the properties of the thin film. Electrical heating is used for thin films which have a lower thermal conductivity than the substrate; it is fairly accurate in measuring out-of-plane conductivity.
Often, a resistive heater and a thermistor are fabricated on the sample film using a highly conductive metal, such as aluminium. The most straightforward approach is to apply a steady-state current and measure the change in temperature of adjacent thermistors. A more versatile approach uses an AC signal applied to the electrodes. The third harmonic of the AC signal reveals the heating and temperature fluctuations of the material. [ 28 ] Laser heating is a non-contact metrology method which uses picosecond and nanosecond laser pulses to deliver thermal energy to the substrate. Laser heating uses a pump-probe mechanism; the pump beam introduces energy to the thin film, while the probe beam picks up the characteristics of how the energy propagates through the film. Laser heating is advantageous because the energy delivered to the film can be precisely controlled; furthermore, the short heating duration decouples the thermal conductivity of the thin film from that of the substrate [ citation needed ] . In the early 2000s, several studies reported anomalously high thermal conductivity enhancement in nanofluids, i.e. suspensions of nanoparticles in liquids. Subsequent detailed studies (such as the multinational "Benchmark Study on the Thermal Conductivity of Nanofluids") [ 30 ] failed to reproduce the reported anomalies, and their experimental findings were consistent with shape-adjusted mean field theory. [ 31 ] [ 32 ]
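Returning to the steady-state electrical-heating approach described above, the following minimal sketch shows the one-dimensional reduction usually behind such measurements, k = (P/A)·d/ΔT. All numerical values (heater power and area, film thickness, measured temperature rise) are assumed for illustration.

```python
# One-dimensional steady-state estimate of out-of-plane film conductivity:
# heat flux through the film  q'' = P / A,
# conductivity                k  = q'' * d / dT.
P = 28e-3    # W, assumed steady heater power
A = 1e-8     # m^2, assumed heated area (100 um x 100 um)
d = 1e-6     # m, assumed film thickness
dT = 2.0     # K, assumed temperature drop across the film

k = (P / A) * d / dT
print(f"k ~ {k:.2f} W/(m*K)")   # ~1.4 W/(m*K), oxide-like for these inputs
```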
https://en.wikipedia.org/wiki/Thermal_transport_in_nanostructures
A thermal vacuum chamber (TVAC) is a vacuum chamber in which the radiative thermal environment is controlled. Typically the thermal environment is achieved by passing liquids or fluids through thermal shrouds for cold temperatures, or through the application of thermal lamps for high temperatures. Thermal vacuum chambers are frequently used for testing spacecraft, or parts thereof, under a simulated space environment. Thermal vacuum chambers can be found at many space-agency and aerospace test facilities.
https://en.wikipedia.org/wiki/Thermal_vacuum_chamber
Thermal velocity or thermal speed is a typical velocity of the thermal motion of particles that make up a gas, liquid, etc. Thus, indirectly, thermal velocity is a measure of temperature. Technically speaking, it is a measure of the width of the peak in the Maxwell–Boltzmann particle velocity distribution. Note that, in the strictest sense, thermal velocity is not a velocity, since velocity usually describes a vector rather than simply a scalar speed. Since the thermal velocity is only a "typical" velocity, a number of different definitions can be and are used. Taking k B to be the Boltzmann constant, T the absolute temperature, and m the mass of a particle, the different thermal velocities can be written as follows. If v th is defined as the root mean square of the velocity in any one dimension (i.e. any single direction), then [ 1 ] [ 2 ] v th = √(k B T/m). If v th is defined as the mean of the magnitude of the velocity in any one dimension (i.e. any single direction), then v th = √(2k B T/(πm)). If v th is defined as the most probable speed, then [ 2 ] v th = √(2k B T/m). If v th is defined as the root mean square of the total velocity, then v th = √(3k B T/m). If v th is defined as the mean of the magnitude of the velocity of the atoms or molecules, then v th = √(8k B T/(πm)). All of these definitions are in the range v th = (1.6 ± 0.2)·√(k B T/m). At 20 °C (293.15 kelvins), the mean thermal velocity of common gases in three dimensions is of the order of several hundred metres per second. [ 3 ]
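A short sketch evaluating the definitions above for a concrete case; molecular nitrogen at 20 °C is an assumed example (m ≈ 4.65×10⁻²⁶ kg), and the dictionary keys are just labels.

```python
# The different "thermal velocity" definitions, evaluated for N2 at 20 C.
import math

kB = 1.380649e-23   # J/K
T = 293.15          # K
m = 4.65e-26        # kg, assumed mass of an N2 molecule

base = math.sqrt(kB * T / m)   # sqrt(kB*T/m), the common factor
speeds = {
    "rms, one dimension":      base,
    "mean |v|, one dimension": math.sqrt(2 / math.pi) * base,
    "most probable speed":     math.sqrt(2) * base,
    "rms, total velocity":     math.sqrt(3) * base,
    "mean speed (3-D)":        math.sqrt(8 / math.pi) * base,
}
for name, v in speeds.items():
    print(f"{name:>24}: {v:6.0f} m/s")
# The three-dimensional measures have prefactors from sqrt(2) ~ 1.41 to
# sqrt(3) ~ 1.73, consistent with the quoted (1.6 +/- 0.2) envelope.
```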
https://en.wikipedia.org/wiki/Thermal_velocity
A thermal wheel, also known as a rotary heat exchanger, rotary air-to-air enthalpy wheel, energy recovery wheel, or heat recovery wheel, is a type of energy recovery heat exchanger positioned within the supply and exhaust air streams of air-handling units or rooftop units, or in the exhaust gases of an industrial process, in order to recover the heat energy. Other variants include enthalpy wheels and desiccant wheels. A cooling-specific thermal wheel is sometimes referred to as a Kyoto wheel. Rotary thermal wheels are a mechanical means of heat recovery. A rotating porous metallic wheel transfers thermal energy from one air stream to another by passing through each fluid alternately. The system operates as a thermal storage mass whereby the heat from the air is temporarily stored within the wheel matrix until it is transferred to the cooler air stream. [ 1 ] Two types of rotary thermal wheels exist: heat wheels and enthalpy (desiccant) wheels. Though there is a geometrical similarity between heat and enthalpy wheels, there are differences that affect the operation of each design. In a system using a desiccant wheel, the moisture in the air stream with the highest relative humidity is transferred to the opposite air stream after flowing through the wheel. This can work in both directions, from incoming air to exhaust air and from exhaust air to incoming air. The supply air can then be used directly or employed to further cool the air. This is an energy-intensive process. [ 2 ] [ need quotation to verify ] [ why? ] The rotary air-to-air enthalpy wheel heat exchanger is a rotating cylinder filled with an air-permeable material, typically polymer, aluminum, or synthetic fiber, providing the large surface area required for the sensible enthalpy transfer (enthalpy being a measure of heat content). As the wheel rotates between the supply and exhaust air streams, it picks up heat energy and releases it into the colder air stream. The driving force behind the exchange is the difference in temperature between the opposing air streams (the thermal gradient). The enthalpy exchange is accomplished through the use of desiccants. Desiccants transfer moisture through the process of adsorption, which is predominantly driven by the difference in the partial pressure of vapor within the opposing air streams. Typical desiccants are silica gel and molecular sieves. Enthalpy wheels are the most effective devices for transferring both latent and sensible heat energy. The choice of construction material for the rotor, most commonly polymer, aluminum, or fiberglass, determines durability. When using rotary energy recovery devices, the two air streams must be adjacent to one another to allow for the local transfer of energy. Also, special consideration should be paid in colder climates to avoiding wheel frosting. Systems can avoid frosting by modulating the wheel speed, preheating the air, or stopping/jogging the system. [ citation needed ] O'Connor et al. [ 3 ] studied the effect that a rotary thermal wheel has on the supply air flow rates into a building. A computational model was created to simulate the effects of a rotary thermal wheel on air flow rates when incorporated into a commercial wind tower system. The simulation was validated with a scale-model experiment in a closed-loop subsonic wind tunnel. The data obtained from both tests were compared in order to analyze the flow rates.
Although the flow rates were reduced compared to a wind tower which did not include a rotary thermal wheel, the guideline ventilation rates for occupants in a school or office building were met above an external wind speed of 3 m/s, which is lower than the average wind speed of the UK (4–5 m/s). No full-scale experimental or field-test data were collected in this study; therefore it cannot be conclusively proved that rotary thermal wheels are feasible for integration into a commercial wind tower system. However, despite the decrease in air flow rate within the building after the introduction of the rotary thermal wheel, the reduction was not large enough to prevent the guideline ventilation rates from being met. Sufficient research has not yet been conducted to determine the suitability of rotary thermal wheels in natural ventilation: ventilation supply rates can be met, but the thermal capabilities of the rotary thermal wheel have not yet been investigated. Further work would be beneficial to increase understanding of the system. [ 4 ] A thermal wheel consists of a circular honeycomb matrix of heat-absorbing material, which is slowly rotated within the supply and exhaust air streams of an air-handling system. As the thermal wheel rotates, heat is captured from the exhaust air stream in one half of the rotation and released to the fresh air stream in the other half of the rotation. Thus waste heat energy from the exhaust air stream is transferred to the matrix material and then from the matrix material to the fresh air stream. This increases the temperature of the supply air stream by an amount proportional to the temperature differential between the air streams (the "thermal gradient"), depending upon the efficiency of the device. Heat exchange is most efficient when the streams flow in opposite directions, since this causes a favourable temperature gradient across the thickness of the wheel. The principle works in reverse, and "cooling" energy can be recovered to the supply air stream if desired and if the temperature differential allows. The heat exchange matrix may be aluminium, plastic, or synthetic fiber. The heat exchanger is rotated by a small electric motor and belt drive system. The motors are often inverter speed-controlled for improved control of the exiting air temperature. If no heat exchange is required, the motor can be stopped altogether. Because heat is transferred from the exhaust air stream to the supply air stream without passing directly through an exchange medium, the gross efficiencies are usually higher than for any other air-side heat recovery system. The shallower depth of the heat exchange matrix, as compared to a plate heat exchanger, means that the pressure drop through the device is normally lower in comparison. Generally, a thermal wheel will be selected for face velocities between 1.5 and 3.0 metres per second (4.9 and 9.8 ft/s), and with equal air volume flow rates, gross "sensible" efficiencies of 85% can be expected. Although there is a small energy requirement to rotate the wheel, the motor energy consumption is usually low and has little effect upon the seasonal efficiency of the device. The ability to recover "latent" heat can improve gross efficiencies by 10–15%. Normally the heat transfer between airstreams provided by the device is termed "sensible", which is the exchange of energy, or enthalpy, resulting in a change in temperature of the medium (air in this case), but with no change in moisture content.
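A quick numerical illustration of the sensible recovery just described; the temperatures are assumed, and the 85% figure is the gross sensible efficiency quoted above for equal air volume flow rates.

```python
def supply_temperature(t_fresh, t_exhaust, effectiveness):
    # Sensible-effectiveness relation: the supply stream is warmed by a
    # fraction of the temperature differential between the two air streams.
    return t_fresh + effectiveness * (t_exhaust - t_fresh)

# Assumed winter case: -5 C fresh air, 22 C exhaust air, 85% efficiency.
print(supply_temperature(-5.0, 22.0, 0.85))   # 17.95 C supply temperature
```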
However, if moisture or relative humidity levels in the return air stream are high enough to allow condensation to take place in the device, then "latent" heat will be released, and the heat transfer material will be covered with a film of water. Despite a corresponding absorption of latent heat as some of the water film evaporates in the opposite air stream, the water will reduce the thermal resistance of the boundary layer of the heat exchanger material and thus improve the heat transfer coefficient of the device, and hence increase efficiency. The energy exchange of such devices then comprises both sensible and latent heat transfer; in addition to a change in temperature, there is also a change in the moisture content of the air streams. However, the film of condensation will also slightly increase the pressure drop through the device, and depending upon the spacing of the matrix material, this can increase resistance by up to 30%. This will increase fan energy consumption and reduce the seasonal efficiency of the device. Aluminium matrices are also available with an applied hygroscopic coating, and the use of this, or of porous synthetic fiber matrices, allows for the adsorption and release of water vapour at moisture levels much lower than those normally required for condensation and latent heat transfer to occur. The benefit of this is an even higher heat transfer efficiency, but it also results in the drying or humidification of the air streams, which may also be desired for the particular process being served by the supply air. For this reason these devices are also commonly known as enthalpy wheels. During the automotive industry's interest in gas turbines for vehicle propulsion (around 1965), Chrysler invented a unique type of rotary heat exchanger [ 5 ] that consisted of a rotary drum constructed from corrugated metal (similar in appearance to corrugated cardboard). This drum was continuously rotated by reduction gears driven by the turbine. The hot exhaust gases were directed through a portion of the device, which would then rotate to a section that conducted the induction air, where this intake air was heated. This recovery of the heat of combustion significantly increased the efficiency of the turbine engine. The engine proved impractical for an automotive application due to its poor low-speed torque, and even such an efficient engine, if large enough to deliver the proper performance, would have a low average fuel efficiency. Such an engine may at some future time be attractive when combined with an electric motor in a hybrid vehicle, owing to its robust longevity and its ability to burn a wide variety of liquid fuels. [ original research? ] A desiccant wheel is very similar to a thermal wheel, but with a coating applied for the sole purpose of dehumidifying, or "drying", the air stream. The desiccant is normally silica gel. As the wheel turns, the desiccant passes alternately through the incoming air, where the moisture is adsorbed, and through a "regenerating" zone, where the desiccant is dried and the moisture expelled. The wheel continues to rotate, and the adsorption process is repeated. Regeneration is normally carried out by the use of a heating coil, such as a water or steam coil, or a direct-fired gas burner. Thermal wheels and desiccant wheels are often used in series configuration to provide the required dehumidification as well as recovering the heat from the regeneration cycle.
Thermal wheels are not suitable for use where total separation of the supply and exhaust air streams is required, since air will bypass at the interface between the air streams at the heat exchanger boundary, and at the point where the wheel passes from one air stream to the other during its normal rotation. The former is reduced by brush seals, and the latter is reduced by a small purge section, formed by plating off a small segment of the wheel, normally in the exhaust air stream. Matrices made from fibrous materials, or with hygroscopic coatings for the transfer of latent heat, are far more susceptible to damage and degradation by "fouling" than plain metal or plastic materials, and are difficult or impossible to clean effectively once dirty. Care must be taken to properly filter the air streams on both the exhaust and fresh air sides of the wheel. Any dirt deposited on either air side will invariably be transported into the air stream of the other side.
https://en.wikipedia.org/wiki/Thermal_wheel
In physics, thermalisation (or thermalization) is the process of physical bodies reaching thermal equilibrium through mutual interaction. In general, the natural tendency of a system is towards a state of equipartition of energy and uniform temperature that maximizes the system's entropy. Thermalisation, thermal equilibrium, and temperature are therefore important fundamental concepts within statistical physics, statistical mechanics, and thermodynamics, all of which are a basis for many other specific fields of scientific understanding and engineering application. Familiar examples of thermalisation include the cooling of a hot object to the temperature of its surroundings and the mixing of two gases brought into contact. The hypothesis, foundational to most introductory textbooks treating quantum statistical mechanics, [ 4 ] assumes that systems go to thermal equilibrium (thermalisation). The process of thermalisation erases local memory of the initial conditions. The eigenstate thermalisation hypothesis is a hypothesis about when quantum states will undergo thermalisation and why. Not all quantum states undergo thermalisation. Some states have been discovered which do not (see below), and their reasons for not reaching thermal equilibrium were still unclear as of March 2019. The process of equilibration can be described using the H-theorem or the relaxation theorem; [ 5 ] see also entropy production. Broadly speaking, classical systems with non-chaotic behavior will not thermalise. Systems with many interacting constituents are generally expected to be chaotic, but this assumption sometimes fails. A notable counterexample is the Fermi–Pasta–Ulam–Tsingou problem, which displays unexpected recurrence and will only thermalise over very long time scales. [ 6 ] Non-chaotic systems which are perturbed by weak non-linearities will not thermalise for a set of initial conditions with non-zero volume in the phase space, as stated by the KAM theorem, although the size of this set decreases exponentially with the number of degrees of freedom. [ 7 ] Many-body integrable systems, which have an extensive number of conserved quantities, will not thermalise in the usual sense, but will equilibrate according to a generalized Gibbs ensemble. [ 8 ] [ 9 ] Some phenomena resisting the tendency to thermalise include quantum many-body scars (see quantum scar ). [ 10 ] Other systems that resist thermalisation and are better understood are quantum integrable systems [ 23 ] and systems with dynamical symmetries. [ 24 ]
https://en.wikipedia.org/wiki/Thermalisation
The thermally induced unidirectional shape-memory effect is an effect classified within the new so-called smart materials. Polymers with a thermally induced shape-memory effect are new materials whose applications are recently being studied in different fields of science (e.g., medicine), communications and entertainment. There are currently reported and commercially used systems. However, the possibility of programming other polymers remains open: given the number of copolymers that can be designed, the possibilities are almost endless. Polymers with a thermally induced shape-memory effect are those polymers that respond to external stimuli and because of this have the ability to change their shape. The thermally induced shape-memory effect results from a combination of proper processing and programming of the system. This effect can be observed in polymers with very different chemical compositions, which opens up a great range of possible applications. In the first step, the polymers are processed by means of common techniques, such as injection molding, extrusion or thermoforming, at a temperature (T High ) at which the polymer melts, obtaining a final shape which is called the "permanent" shape. The next step is called programming of the system and involves heating the sample to a transition temperature (T Trans ). At that temperature the polymer is deformed, reaching a shape called the "temporary" shape. Immediately afterwards, the temperature of the sample is lowered. The final step of the effect involves the recovery of the permanent shape: the sample is heated to the transition temperature (T Trans ) and within a short time the recovery of the permanent shape is observed. This effect is not a natural property of the polymer but results from proper programming of the system with the appropriate chemistry. For a polymer to exhibit this effect, it must have two components at the molecular level: bonds (chemical or physical) to determine the permanent shape, and "trigger" segments with a T Trans to fix the temporary shape. It should first be noted that the primary inelastic mechanism of these polymers is the mobility of the chains and the conformational rearrangement of the groups. Then the effect in semi-crystalline and amorphous polymers must be distinguished. In both cases, anchor points must be created that act as "triggers" for the effect. In the case of amorphous polymers, these will be the knots or "tangles" of the chains, and in the case of semi-crystalline polymers, the crystals themselves will form these anchor points. When the shape of the material is modified above a minimal critical stress, the chains slide and a metastable structure is created, which increases the organization and order of the chains (lower entropy). When the deformation load is removed, the anchor points provide a storage mechanism for macroscopic stresses in the form of small localized stresses and decreased entropy. In the glassy state the rotational motions of the molecules are frozen and impeded; as the temperature increases and the glass transition is passed, these motions thaw, and rotations and relaxations occur. The molecules then take the form that is entropically most favorable to them, the one with the lowest energy. These movements are called the relaxation process, and the formation of "random coils" to eliminate stresses is called shape-memory loss.
A polymer will exhibit the shape-memory effect if it is susceptible to being stabilized in a given state of deformation, preventing the molecules from slipping and regaining their higher-entropy (lower-energy) form. This can be achieved almost entirely by creating crosslinking or by vulcanization; these new bonds act as anchors and prevent the relaxation of the chains, and the anchor points can be physical or chemical. The unidirectional shape-memory effect was first observed by Chang and Read in 1951 in a gold-cadmium alloy, and in 1963 Buehler described this effect for nitinol, which is an equiatomic nickel-titanium alloy. This effect in metals and ceramics is based on a change in the crystal structure, called the martensitic phase transition. The disadvantage of these materials is that the alloy is equiatomic, and deviations of 1% in the composition modify the transition temperature by approximately 100 K. Some metals and ceramics present the effect bidirectionally: the material has one shape at a given temperature, changes shape when the temperature is changed, and recovers the first shape when the first temperature is restored. This is achieved by training the material for each shape at each temperature. Metals and ceramics with a thermally induced bidirectional shape-memory effect have found great application in medical implants, sensors, transducers, etc. Many present a risk, however, due to their high toxicity. To obtain the effect, it is necessary to achieve a phase separation. One of these phases works as the trigger for the temporary form, using a transition temperature that can be Tm or Tg and that in this effect is called T Trans . A second phase has the higher transition temperature; above this temperature the polymer melts and is processed by conventional methods. The ratio of the elements forming the phase separation largely regulates the transition temperature T Trans ; this is much easier to control than in metallic alloys. An example of this is the poly( ethylene oxide - ethylene terephthalate ) or EOET copolymer. The poly(ethylene terephthalate) (PET) segment has a relatively high Tg and Tm and is commonly referred to as the "hard" segment, whereas poly(ethylene oxide) (PEO) has a relatively low Tm and Tg and is referred to as the "soft" segment. In the final polymer these segments separate into two phases in the solid state. PET has a high degree of crystallinity, and the formation of these crystals provides anchor points that restrict the flow and rearrangement of the PEO chains as they are stretched at temperatures higher than the PEO Tm. If crosslinking with slight vulcanization is desired, standardized methods for each polymer must be taken into account. PCO, for example, is a polymer without a shape-memory effect because it does not present a clear "plateau", but the addition of a minimal amount of peroxide (~1%) provides PCO with all the requirements to present this effect. Some polymers fatigue first, so each system can be evaluated with a simple experiment that consists of programming the system 10 or 20 times in a row and measuring the recovery (in % and in time), as quantified in the sketch below. Polymers that can crystallize (with the exception of PP ) are well suited to obtaining this effect, mainly due to their ordering capacity, which is reflected in the crystallinity; the crystals have affinity for their constituent elements and form new bonds, achieving anchoring forces that give stability to the temporary form.
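The cyclic evaluation just mentioned is commonly summarized by shape-fixity and shape-recovery ratios. In this minimal sketch the strain histories are invented for illustration, and the definitions used (R_f and R_r as strain ratios) are one common convention, not taken from the article.

```python
# Shape-fixity and shape-recovery ratios over repeated programming
# cycles, as used to detect fatigue of the effect. Assumed strain data.
def fixity(strain_fixed, strain_loaded):
    """R_f: fraction of the programmed strain retained after unloading."""
    return strain_fixed / strain_loaded

def recovery(strain_fixed, strain_residual):
    """R_r: fraction of the fixed temporary strain recovered on heating."""
    return (strain_fixed - strain_residual) / strain_fixed

cycles = [  # (loaded %, fixed %, residual % after recovery) per cycle
    (100.0, 96.0, 1.0),
    (100.0, 95.0, 2.5),
    (100.0, 93.0, 4.0),
]
for n, (e_load, e_fix, e_res) in enumerate(cycles, start=1):
    print(f"cycle {n}: R_f={fixity(e_fix, e_load):.2f}, "
          f"R_r={recovery(e_fix, e_res):.2f}")
```

A slow drift of R_r downward over cycles, as in these invented numbers, is the fatigue signature the simple 10- to 20-cycle test is meant to reveal.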
To analyze the behavior of the crystals in this type of polymer, the WAXS and DSC techniques are used; these techniques help to determine what percentage of the polymer is crystalline and how the crystals are organized. Such analysis matters because crystallinity decreases as crosslinking increases, since the chains lose the ability to arrange themselves, and order is essential to achieve crystallinity. A second problem present when crosslinking molecules is melting, since an excess of crosslinking modifies the molecule in such a way that it stops melting (similar to a thermoset ) and therefore the temporary shape cannot be obtained. The control of curing, either by electromagnetic waves or with peroxides, is very important, since curing increases the T Trans and decreases the crystallinity, both determining factors in the shape-memory effect. In the case of biocompatible semicrystalline systems such as poly(ε-caprolactone) and poly(n-butyl acrylate ) crosslinked by photopolymerization, it has been reported that the crystallization behavior is affected by the cooling rate, as in any other semicrystalline polymer, but the heat of crystallization remains independent of the cooling rate. The influence of the crosslinking of the molecules, the cooling rate and the crystallization behavior are specific to each system and impossible to enumerate, since the synthesis possibilities are almost infinite. Crystallizable polymers such as oligo( ε-caprolactone ) can have amorphous segments such as poly(n-butyl acrylate), and the molecular mass ratio of the two determines the behavior of the system in programming the temporary form and in recovery of the permanent form. If the polymeric system is amorphous, then the anchor points of the crystalline structure are not available, and the only way to ensure the stability of the temporary shape is through chain entanglements (physical entanglements rather than chemical crosslinking), in addition to the possibility of crosslinking. In the glassy state, the movements of the long chain segments are frozen; these movements depend on an activation temperature that brings the polymer to a softened and elastic state, in which rotation about the carbon bonds and the movements of the chains no longer face strong impediments. The chains can then accommodate and acquire the conformation that requires less energy, "unraveling" into random coils, without order and therefore with higher entropy. If a polymer sample is stretched for a short time in the elastic range, the sample will recover its original shape when the load is removed; but if the load remains for a sufficiently long period, the chains rearrange and the original shape is not recovered. The result is an irreversible deformation, also called a relaxation process (in this case: creep). In order for a polymer to exhibit the thermally induced shape-memory effect, it is necessary to fix the chains with anchor points to avoid these relaxation processes that inelastically modify the system. Amorphous polymers do not have a crystallization temperature (Tm) like semi-crystalline polymers and have only a glass transition temperature (Tg). This has a decisive influence on the behavior of shape-memory polymer systems. A copolymer system that is crystalline on its own can lose its crystallinity after crosslinker treatment and become practically amorphous. An amorphous polymer depends on the level of crosslinking or the degree of polymerization to exhibit this effect.
Poly(norbornene) is a linear, amorphous polymer with a content of 70 to 80% trans bonds in commercial products, a molecular mass of approximately 3×10 6 g mol −1 , and a Tg of approximately 35 to 45 °C. Because it achieves an unusually high degree of polymerization, chain entanglements can be relied upon as anchor points to achieve the thermally induced shape-memory effect. Therefore, this polymer relies solely on physical anchor points. When heated up to Tg, the material abruptly changes from a rigid state to a softened, rubbery state. To achieve the effect, the shape must be changed rapidly, to avoid rearrangement of the segments of the polymer chains, and the material must then immediately be cooled, also very rapidly, below Tg. Reheating the material back up to Tg will show the recovery of the original shape. In designing copolymers for the thermally induced shape-memory effect, it is very important to keep in mind that a slight change in chemical structure (cis/trans ratios, tacticity, molecular mass, etc.) produces a significant change in the shape-memory polymer. An example is the copolymer of poly( methylmethacrylate -co- methacrylic acid ), or poly(MAA-co-MMA), compared with poly(MAA-co-MMA)-PEG, where PEG is short for poly( ethylene glycol ), which forms complexes in the copolymer. Changes in the morphology of the material on including PEG provide the shape-memory effect to the copolymer, showing two phases: the three-dimensional network providing a stable phase, and the reversible phase formed by the amorphous part of the PEG-PMAA complexes. The complexes show a high storage modulus, so when a PEG of higher molecular mass is introduced into the copolymer, an increase in the elastic modulus, a higher modulus in the glassy state, and faster recovery are observed. Its properties can be studied with differential scanning calorimetry (DSC), wide-angle X-ray diffraction (WAXD) and dynamic mechanical analysis (DMA) techniques to determine its physicochemical arrangement. Most of the applications of polymers with this effect are only suggestions for now: many possibilities have been proposed, but so far only a few have been used, the most important being medical devices and automotive elements. The greatest success has been achieved with heat-shrinkable polyethylene, which is also an exception in the programming step, since it is processed in a different way.
https://en.wikipedia.org/wiki/Thermally_induced_shape-memory_effect_(polymers)
In thermodynamics, a thermally isolated system can exchange no mass or heat energy with its environment. The internal energy of a thermally isolated system may therefore change due to the exchange of work energy. The entropy of a thermally isolated system will increase over time if it is not at equilibrium, but as long as it is at equilibrium, its entropy will be at a maximum and constant value and will not change, no matter how much work energy the system exchanges with its environment. To maintain this constant entropy, any exchange of work energy with the environment must therefore be quasi-static in nature in order to ensure that the system remains essentially at equilibrium during the process. [ 1 ] The opposite of a thermally isolated system is a thermally open system, which allows the transfer of heat energy and entropy. Thermally open systems may vary, however, in the rate at which they equilibrate, depending on the nature of the boundary of the open system. At equilibrium, the temperatures on both sides of a thermally open boundary are equal. At equilibrium, only a thermally isolating boundary can support a temperature difference.
https://en.wikipedia.org/wiki/Thermally_isolated_system
Thermally stimulated current (TSC) spectroscopy (not to be confused with thermally stimulated depolarization current ) is an experimental technique used to study energy levels in semiconductors or insulators (organic or inorganic). Energy levels are first filled either by optical or electrical injection, usually at a relatively low temperature; subsequently, electrons or holes are emitted by heating to a higher temperature. A curve of emitted current is recorded and plotted against temperature, resulting in a TSC spectrum. By analyzing TSC spectra, information can be obtained regarding energy levels in semiconductors or insulators. A driving force is required for emitted carriers to flow while the sample temperature is being increased. This driving force can be an electric field or a temperature gradient. Usually, the driving force adopted is an electric field; however, electron traps and hole traps then cannot be distinguished. If the driving force adopted is a temperature gradient, electron traps and hole traps can be distinguished by the sign of the current. TSC based on a temperature gradient is also known as "Thermoelectric Effect Spectroscopy" (TEES), a name coined by two scientists, Santic and Desnica, from the former Yugoslavia, who demonstrated the technique on semi-insulating gallium arsenide (GaAs). (Note: TSC based on a temperature gradient was invented before Santic and Desnica and was applied to the study of organic plastic materials. However, Santic and Desnica applied TSC based on a temperature gradient to the study of a technologically important semiconductor material and coined the new name, TEES, for it.) Historically, Frei and Groetzinger published a paper in German in 1936 whose title translates as "Liberation of electrical energy during the fusion of electrets". This may be the first paper on TSC. Before the invention of deep-level transient spectroscopy (DLTS), thermally stimulated current (TSC) spectroscopy was a popular technique for studying traps in semiconductors. Nowadays, for traps in Schottky diodes or p-n junctions, DLTS is the standard method of study. However, there is an important shortcoming of DLTS: it cannot be used for an insulating material, while TSC can be applied in such a situation. (Note: an insulator can be considered as a very large bandgap semiconductor.) In addition, the standard transient-capacitance based DLTS method may not be very good for the study of traps in the i-region of a p-i-n diode, while transient-current based DLTS (I-DLTS) may be more useful. TSC has been used to study traps in semi-insulating gallium arsenide (GaAs) substrates. It has also been applied to materials used for particle detectors or semiconductor detectors in nuclear research, for example high-resistivity silicon, cadmium telluride (CdTe), etc. TSC has also been applied to various organic insulators and is useful for electret research. More advanced modifications of TSC have been applied to study traps in ultrathin high-k dielectric thin films. W. S. Lau ( Lau Wai Shing , Republic of Singapore) applied zero-bias thermally stimulated current, or zero-temperature-gradient zero-bias thermally stimulated current, to ultrathin tantalum pentoxide samples. For samples with some shallow traps which can be filled at low temperature and some deep traps which can be filled only at high temperature, a two-scan TSC may be useful, as suggested by Lau in 2007. TSC has also been applied to hafnium oxide.
The TSC technique is used to study dielectric materials and polymers. Different theories have been developed to describe the response curve of this technique in order to extract the peak parameters: the activation energy and the relaxation time.
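One widely used way to extract the activation energy is the initial-rise method: on the low-temperature side of a TSC peak the current grows as I(T) ∝ exp(−E_a/k_B·T), so a straight-line fit of ln I against 1/T yields E_a. The sketch below illustrates this on synthetic data; the function name and all numbers are purely illustrative and not taken from any specific TSC setup.

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant, eV/K

def initial_rise_activation_energy(T, I):
    """Estimate the trap activation energy (eV) from the initial-rise
    portion of a TSC peak, where ln(I) is linear in 1/T."""
    slope, _ = np.polyfit(1.0 / T, np.log(I), 1)
    return -slope * k_B  # slope of ln(I) vs 1/T equals -Ea / k_B

# Hypothetical samples from the rising edge of a peak (K, pA); real data
# would be taken well below the peak maximum, where retrapping is weak.
T = np.array([110.0, 115.0, 120.0, 125.0, 130.0])
Ea_true = 0.25  # eV, used only to synthesize the example data
I = 3.0e3 * np.exp(-Ea_true / (k_B * T))
print(f"Estimated activation energy: {initial_rise_activation_energy(T, I):.3f} eV")
```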
https://en.wikipedia.org/wiki/Thermally_stimulated_current_spectroscopy
Thermally stimulated depolarization current (TSDC) is a scientific technique used to measure the dielectric properties of materials. It can be used to measure the thermally stimulated depolarization of molecules within a material. One method of doing so is to place the material between two electrodes, cool the material in the presence of an external electric field, remove the field once a desired temperature has been reached, and measure the current between the electrodes as the material warms. [ 1 ] The external electric field must be applied at a sufficiently high temperature to allow the molecular dipoles time to align with the field. Because the dielectric relaxation time increases exponentially on cooling, the polarization caused by their alignment with the field gets "frozen in". When the field is removed and the material begins to warm, the dipoles begin to "thaw", thereby losing their net alignment, and the material becomes depolarized. This depolarization can be measured if the material is sandwiched between two ohmic electrodes and the current is measured on warming. As the material depolarizes, charges are pulled toward (or pushed away from) the electrodes, which causes a current through the measuring device.
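The recorded current can also be post-processed numerically. In a Bucci-style analysis, the polarization still frozen in at temperature T is the remaining area under the current curve divided by the heating rate, and the ratio P(T)/J(T) gives a temperature-dependent relaxation time. A minimal sketch on synthetic data (all values hypothetical):

```python
import numpy as np

def bfg_relaxation_time(T, J, heating_rate):
    """Bucci-style TSDC analysis: P(T) is the integral of the remaining
    depolarization current divided by the heating rate, and
    tau(T) = P(T) / J(T)."""
    P = np.array([np.trapz(J[i:], T[i:]) for i in range(len(T))]) / heating_rate
    return P / J  # values near the end of the run are unreliable (P -> 0)

# Synthetic depolarization current density (A/m^2) on warming at 0.05 K/s
T = np.linspace(140.0, 220.0, 161)
J = 1e-9 * np.exp(-((T - 185.0) / 12.0) ** 2)  # hypothetical single peak
tau = bfg_relaxation_time(T, J, heating_rate=0.05)
print(f"tau at {T[80]:.0f} K: {tau[80]:.1f} s")
```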
https://en.wikipedia.org/wiki/Thermally_stimulated_depolarization_current
Thermate is a variation of thermite : an incendiary pyrotechnic composition that can generate short bursts of very high temperature focused on a small area for a short period of time. It is used primarily in incendiary grenades . The main chemical reaction in thermate is the same as in thermite: an aluminothermic reaction between powdered aluminium and a metal oxide . Thermate can also utilize magnesium or other similar elements in place of aluminium. In addition to thermite, thermate sometimes contains sulfur and sometimes barium nitrate , both of which increase its thermal effect, produce flame as it burns, and significantly reduce the ignition temperature. [ 1 ] Various mixtures of these compounds can be called thermate, but to avoid confusion with thermate-TH3, one can refer to them as thermite variants or analogs. The composition by weight of Thermate-TH3 (in military use) is 68.7% thermite, 29.0% barium nitrate, 2.0% sulfur and 0.3% binder (such as polybutadiene acrylonitrile (PBAN)). As both thermite and thermate are notoriously difficult to ignite, initiating the reaction normally requires a dedicated high-temperature ignition source and sometimes persistent effort. Because thermate burns at higher temperatures than ordinary thermite, [ 1 ] it has military applications in cutting through tank armor or other hardened military vehicles or bunkers. As with thermite, thermate's ability to burn without an external supply of oxygen renders it useful for underwater incendiary devices.
https://en.wikipedia.org/wiki/Thermate
Therminol is a synthetic heat transfer fluid [ 1 ] produced by Eastman Chemical Company . Therminol fluids are used in a variety of applications. [ 2 ] Prior to 1997, Therminol fluids were sold in Europe under the trade names SantoTherm and GiloTherm. Since 1997, all forms of Therminol fluid have been sold under the Therminol name with an extension to define their uses. [ 3 ] Therminol heat transfer fluids were developed in 1963 by Monsanto . In 1997, the chemical businesses of Monsanto were spun off to form a new company called Solutia Inc. In 2012, Solutia was acquired by Eastman Chemical Company . [ 4 ] Prior to 1971, Monsanto marketed a series of polychlorinated biphenyl (PCB)-containing heat transfer fluids designated as the Therminol FR series in the United States and the Santotherm FR series in Europe. FR-series Therminol heat transfer fluids contained PCBs, which imparted fire resistance. Monsanto voluntarily ceased sales of these fluids in 1971. No form of Therminol heat transfer fluid has contained PCBs since that time. [ 5 ] Polychlorinated biphenyls were banned by the United States Congress in 1979 and by the Stockholm Convention on Persistent Organic Pollutants in 2001.
https://en.wikipedia.org/wiki/Therminol
Thermite ( / ˈ θ ɜːr m aɪ t / ) [ 1 ] is a pyrotechnic composition of metal powder and metal oxide . When ignited by heat or chemical reaction, thermite undergoes an exothermic reduction-oxidation (redox) reaction. Most varieties are not explosive, but can create brief bursts of heat and high temperature in a small area. Its form of action is similar to that of other fuel-oxidizer mixtures, such as black powder . Thermites have diverse compositions. Fuels include aluminum , magnesium , titanium , zinc , silicon , and boron . Aluminum is common because of its high boiling point and low cost. Oxidizers include bismuth(III) oxide , boron(III) oxide , silicon(IV) oxide , chromium(III) oxide , manganese(IV) oxide , iron(III) oxide , iron(II,III) oxide , copper(II) oxide , and lead(II,IV) oxide . [ 2 ] In a thermochemical survey comprising twenty-five metals and thirty-two metal oxides, 288 out of 800 binary combinations were characterized by adiabatic temperatures greater than 2000 K. [ 3 ] Combinations like these, which possess the thermodynamic potential to produce very high temperatures, are either already known to be reactive or are plausible thermitic systems. The first thermite reaction was discovered in 1893 by the German chemist Hans Goldschmidt , who obtained a patent for his process. Today, thermite is used mainly for thermite welding , particularly for welding together railway tracks . Thermites have also been used in metal refining, disabling munitions, and in incendiary weapons . Some thermite-like mixtures are used as pyrotechnic initiators in fireworks . In the following example, elemental aluminum reduces the oxide of another metal , in this common example iron oxide , because aluminum forms stronger and more stable bonds with oxygen than iron:

Fe2O3 + 2 Al → 2 Fe + Al2O3 + heat

The products are aluminum oxide , elemental iron , [ 4 ] and a large amount of heat . The reactants are commonly powdered and mixed with a binder to keep the material solid and prevent separation. Other metal oxides can be used, such as chromium oxide, to generate the given metal in its elemental form. For example, a copper thermite reaction using copper oxide and elemental aluminum can be used for creating electrical joints in a process called cadwelding , which produces elemental copper (it may react violently):

3 CuO + 2 Al → 3 Cu + Al2O3

Thermites with nanosized particles are described by a variety of terms, such as metastable intermolecular composites, super-thermite, [ 5 ] nano-thermite , [ 6 ] and nanocomposite energetic materials. [ 7 ] [ 8 ] [ 9 ] The thermite ( German : Thermit ) reaction was discovered in 1893 and patented in 1895 by the German chemist Hans Goldschmidt . [ 10 ] [ 11 ] Consequently, the reaction is sometimes called the "Goldschmidt reaction" or "Goldschmidt process". Goldschmidt was originally interested in producing very pure metals by avoiding the use of carbon in smelting , but he soon discovered the value of thermite in welding . [ 12 ] The first commercial application of thermite was the welding of tram tracks in Essen in 1899. [ 13 ] Red iron(III) oxide (Fe 2 O 3 , commonly known as rust ) is the most common iron oxide used in thermite. [ 14 ] [ 15 ] [ 16 ] Black iron(II,III) oxide (Fe 3 O 4 , magnetite ) also works. [ 17 ] Other oxides are occasionally used, such as MnO 2 in manganese thermite, Cr 2 O 3 in chromium thermite, SiO 2 (quartz) in silicon thermite, or copper(II) oxide in copper thermite, but only for specialized purposes. [ 17 ] All of these examples use aluminum as the reactive metal.
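For the iron-oxide reaction above, the stoichiometric mixing ratio and the heat released per gram follow directly from molar masses and standard enthalpies of formation. A minimal sketch using standard tabulated values (the printed figures are rough round-offs):

```python
# Stoichiometry and energy of Fe2O3 + 2 Al -> Al2O3 + 2 Fe,
# using standard molar masses (g/mol) and enthalpies of formation (kJ/mol).
M_Fe2O3, M_Al = 159.69, 26.98
dHf_Fe2O3, dHf_Al2O3 = -824.2, -1675.7   # elements are zero by convention

mass_ratio = M_Fe2O3 / (2 * M_Al)                 # g of oxide per g of aluminum
dH_rxn = dHf_Al2O3 - dHf_Fe2O3                    # kJ per mole of Fe2O3 reacted
energy_per_gram = -dH_rxn / (M_Fe2O3 + 2 * M_Al)  # kJ/g of mixture

print(f"oxide:aluminum mass ratio ~ {mass_ratio:.2f} : 1")  # ~2.96 : 1
print(f"reaction enthalpy ~ {dH_rxn:.0f} kJ/mol Fe2O3")     # ~-852 kJ/mol
print(f"heat release ~ {energy_per_gram:.2f} kJ/g")         # ~4.0 kJ/g
```

The ~4.0 kJ/g result is consistent with the 945.4 cal/g (3,956 J/g) figure quoted later in this article for oxygen-balanced iron thermite.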
Fluoropolymers can be used in special formulations, Teflon with magnesium or aluminum being a relatively common example. Magnesium/Teflon/Viton is another pyrolant of this type. [ 18 ] Combinations of dry ice (frozen carbon dioxide) and reducing agents such as magnesium, aluminum and boron follow the same chemical reaction as traditional thermite mixtures, producing metal oxides and carbon. Despite the very low temperature of a dry ice thermite mixture, such a system is capable of being ignited with a flame. [ 19 ] When the ingredients are finely divided, confined in a pipe and armed like a traditional explosive, this cryo-thermite is detonatable, and a portion of the carbon liberated in the reaction emerges in the form of diamond . [ 20 ] In principle, any reactive metal could be used instead of aluminum. This is rarely done, because the properties of aluminum are nearly ideal for this reaction. Although the reactants are stable at room temperature, they burn with an extremely intense exothermic reaction when they are heated to their ignition temperature. The products emerge as liquids due to the high temperatures reached (up to 2500 °C (4,532 °F) with iron(III) oxide), although the actual temperature reached depends on how quickly heat can escape to the surrounding environment. Thermite contains its own supply of oxygen and does not require any external source of air. Consequently, it cannot be smothered, and may ignite in any environment given sufficient initial heat. It burns well while wet, and cannot be easily extinguished with water, though enough water to remove sufficient heat may stop the reaction. [ 22 ] Small amounts of water boil before reaching the reaction. Even so, thermite is used for welding under water . [ 23 ] Thermites are characterized by an almost complete absence of gas production during burning, a high reaction temperature, and the production of molten slag . The fuel should have a high heat of combustion and produce oxides with a low melting point and a high boiling point. The oxidizer should contain at least 25% oxygen, have high density, a low heat of formation, and produce a metal with low melting and high boiling points (so that the energy released is not consumed in evaporating the reaction products). Organic binders can be added to the composition to improve its mechanical properties, but they tend to produce endothermic decomposition products, causing some loss of reaction heat and production of gases. [ 24 ] The temperature achieved during the reaction determines the outcome. In the ideal case, the reaction produces a well-separated melt of metal and slag. For this, the temperature must be high enough to melt both reaction products, the resulting metal and the fuel oxide. Too low a temperature produces a mixture of sintered metal and slag; too high a temperature (above the boiling point of any reactant or product) leads to rapid production of gas, dispersing the burning reaction mixture, sometimes with effects similar to a low-yield explosion. In compositions intended for the production of metal by an aluminothermic reaction , these effects can be counteracted. Too low a reaction temperature (e.g., when producing silicon from sand) can be boosted with the addition of a suitable oxidizer (e.g., sulfur in aluminum-sulfur-sand compositions); too high a temperature can be reduced by using a suitable coolant or slag flux .
The flux often used in amateur compositions is calcium fluoride , as it reacts only minimally, has a relatively low melting point and a low melt viscosity at high temperatures (therefore increasing the fluidity of the slag), and forms a eutectic with alumina. Too much flux, however, dilutes the reactants to the point of not being able to sustain combustion. The type of metal oxide also has a dramatic influence on the amount of energy produced: the higher the oxide (i.e., the higher the oxidation state of the metal), the more energy is produced. A good example is the difference between manganese(IV) oxide and manganese(II) oxide , where the former produces too high a temperature and the latter is barely able to sustain combustion; to achieve good results, a mixture with the proper ratio of both oxides can be used. [ 25 ] The reaction rate can also be tuned with particle size; coarser particles burn more slowly than finer particles. The effect is more pronounced with particles requiring heating to a higher temperature to start reacting. This effect is pushed to the extreme with nano-thermites . The temperature achieved by the reaction in adiabatic conditions , when no heat is lost to the environment, can be estimated using Hess's law – by calculating the energy produced by the reaction itself (subtracting the enthalpy of the reactants from the enthalpy of the products) and subtracting the energy consumed by heating the products (from their specific heat , when the materials only change their temperature, and from their enthalpy of fusion and eventually enthalpy of vaporization , when the materials melt or boil). In real conditions, the reaction loses heat to the environment, so the achieved temperature is somewhat lower. The heat transfer rate is finite, so the faster the reaction is, the closer to adiabatic conditions it runs and the higher the achieved temperature. [ 26 ] A numerical sketch of this estimate is given below. The most common composition is iron thermite. The oxidizer used is usually either iron(III) oxide or iron(II,III) oxide . The former produces more heat; the latter is easier to ignite, likely due to the crystal structure of the oxide. The addition of copper or manganese oxides can significantly improve the ease of ignition. The density of prepared thermite is often as low as 0.7 g/cm 3 . This, in turn, results in relatively poor energy density (about 3 kJ/cm 3 ), rapid burn times, and a spray of molten iron due to the expansion of trapped air. Thermite can be pressed to densities as high as 4.9 g/cm 3 (almost 16 kJ/cm 3 ) with slow burning speeds (about 1 cm/s). Pressed thermite has higher melting power, i.e. it can melt a steel cup where a low-density thermite would fail. [ 27 ] Iron thermite with or without additives can be pressed into cutting devices that have a heat-resistant casing and a nozzle. [ 28 ] Oxygen-balanced iron thermite 2Al + Fe 2 O 3 has a theoretical maximum density of 4.175 g/cm 3 and an adiabatic burn temperature of 3135 K or 2862 °C or 5183 °F (with phase transitions included, limited by iron, which boils at 3135 K); the aluminum oxide is (briefly) molten and the produced iron is mostly liquid, with part of it in gaseous form – 78.4 g of iron vapor per kg of thermite are produced. The energy content is 945.4 cal/g (3,956 J/g). The energy density is 16,516 J/cm 3 . [ 29 ] The original mixture, as invented, used iron oxide in the form of mill scale , and that composition was very difficult to ignite. [ 24 ]
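Here is the numerical sketch of the Hess's-law estimate mentioned above: release the reaction enthalpy into the products and "spend" it on sensible heat and phase changes until the budget is exhausted. The heat capacities and fusion enthalpies below are rough illustrative averages, not tabulated temperature-dependent polynomials, so the result only approximates the ~3135 K figure quoted for iron thermite.

```python
# Rough adiabatic-temperature estimate for Fe2O3 + 2 Al -> Al2O3 + 2 Fe.
E = 851.5e3                      # J released per mole of Fe2O3 (Hess's law)
T = 298.0                        # starting temperature, K
products = [                     # (moles, avg Cp J/mol/K, T_melt K, dH_fus J/mol)
    (1, 110.0, 2327.0, 111e3),   # Al2O3 (illustrative averages)
    (2, 35.0, 1811.0, 13.8e3),   # Fe    (illustrative averages)
]
Cp_total = sum(n * cp for n, cp, _, _ in products)

for n, cp, T_melt, dH_fus in sorted(products, key=lambda p: p[2]):
    dE = Cp_total * (T_melt - T)   # heat all products up to this melting point
    if E < dE:
        break                      # budget runs out below this melting point
    E -= dE
    T = T_melt
    E -= n * dH_fus                # melt this product
T += max(E, 0.0) / Cp_total        # spend any remainder as sensible heat
T = min(T, 3135.0)                 # capped by the boiling point of iron
print(f"estimated adiabatic temperature ~ {T:.0f} K")
```

With these crude constants the energy budget overshoots the cap, reproducing the point made in the text: the attainable temperature is limited by the boiling of iron at 3135 K rather than by the available reaction energy.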
Copper thermite can be prepared using either copper(I) oxide (Cu 2 O, red) or copper(II) oxide (CuO, black). The burn rate tends to be very fast and the melting point of copper is relatively low, so the reaction produces a significant amount of molten copper in a very short time. Copper(II) thermite reactions can be so fast that copper(II) thermite can be considered a type of flash powder . An explosion can occur, sending a spray of copper drops to considerable distances. [ 30 ] The oxygen-balanced mixture has a theoretical maximum density of 5.109 g/cm 3 and an adiabatic flame temperature of 2843 K (phase transitions included), with the aluminum oxide molten and the copper in both liquid and gaseous form; 343 g of copper vapor per kg of this thermite are produced. The energy content is 974 cal/g. [ 29 ] Copper(I) thermite has industrial uses, e.g., in the welding of thick copper conductors ( cadwelding ). This kind of welding is also being evaluated for cable splicing on US Navy ships, for use in high-current systems, e.g., electric propulsion. [ 31 ] The oxygen-balanced mixture has a theoretical maximum density of 5.280 g/cm 3 and an adiabatic flame temperature of 2843 K (phase transitions included), with the aluminum oxide molten and the copper in both liquid and gaseous form; 77.6 g of copper vapor per kg of this thermite are produced. The energy content is 575.5 cal/g. [ 29 ] A thermate composition is a thermite enriched with a salt-based oxidizer (usually nitrates, e.g., barium nitrate , or peroxides). In contrast with thermites, thermates burn with the evolution of flame and gases. The presence of the oxidizer makes the mixture easier to ignite and improves the penetration of the target by the burning composition, as the evolved gas projects the molten slag and provides mechanical agitation. [ 24 ] This mechanism makes thermate more suitable than thermite for incendiary purposes and for the emergency destruction of sensitive equipment (e.g., cryptographic devices), as thermite's effect is more localized. [ citation needed ] Metals , under the right conditions, burn in a process similar to the combustion of wood or gasoline; indeed, rust is the result of the oxidation of steel or iron at very slow rates. A thermite reaction results when the correct mixture of metallic fuels is combined and ignited. Ignition itself requires extremely high temperatures. [ 32 ] Ignition of a thermite reaction normally requires a sparkler or easily obtainable magnesium ribbon, but may require persistent effort, as ignition can be unreliable and unpredictable. These temperatures cannot be reached with conventional black powder fuses , nitrocellulose rods, detonators , pyrotechnic initiators , or other common igniting substances. [ 17 ] Even when thermite is hot enough to glow bright red, it does not ignite, as it has a very high ignition temperature. [ 33 ] Starting the reaction is possible using a propane torch if done correctly. [ 34 ] Often, strips of magnesium metal are used as fuses . Because metals burn without releasing cooling gases, they can potentially burn at extremely high temperatures. Reactive metals such as magnesium can easily reach temperatures sufficiently high for thermite ignition. Magnesium ignition remains popular among amateur thermite users, mainly because magnesium ribbon is easily obtained, [ 17 ] but a piece of the burning strip can fall off into the mixture, resulting in premature ignition. [ citation needed ] The reaction between potassium permanganate and glycerol or ethylene glycol is used as an alternative to the magnesium method.
When these two substances mix, a spontaneous reaction begins, slowly increasing the temperature of the mixture until it produces flames. The heat released by the oxidation of the glycerol is sufficient to initiate a thermite reaction. [ 17 ] Apart from magnesium ignition, some amateurs also choose to use sparklers to ignite the thermite mixture. [ 35 ] These reach the necessary temperatures and provide enough time before the burning point reaches the sample. [ 36 ] This can be a dangerous method, as the iron sparks , like the magnesium strips, burn at thousands of degrees and can ignite the thermite even though the sparkler itself is not in contact with it. This is especially dangerous with finely powdered thermite. [ citation needed ] Match heads burn hot enough to ignite thermite; match heads enveloped in aluminum foil, with a sufficiently long viscofuse or electric match leading to them, can be used. [ citation needed ] Similarly, finely powdered thermite can be ignited by a flint spark lighter , as the sparks are burning metal (in this case, the highly reactive rare-earth metals lanthanum and cerium ). [ 37 ] It is therefore unsafe to strike a lighter close to thermite. [ citation needed ] Thermite reactions have many uses. Thermite is not an explosive; instead, it operates by exposing a very small area to extremely high temperatures. Intense heat focused on a small spot can be used to cut through metal or weld metal components together, both by melting metal from the components and by injecting molten metal from the thermite reaction itself. [ citation needed ] Thermite may be used for repair by welding in place thick steel sections such as locomotive axle -frames, where the repair can take place without removing the part from its installed location. [ 38 ] Thermite can be used for quickly cutting or welding steel such as rail tracks , without requiring complex or heavy equipment. [ 39 ] [ 40 ] However, defects such as slag inclusions and voids (holes) are often present in such welded junctions, so great care is needed to operate the process successfully. The numerical analysis of thermite welding of rails has been approached similarly to casting cooling analysis. Both finite element analysis and experimental analysis of thermite rail welds have shown that the weld gap is the most influential parameter affecting defect formation. [ 41 ] Increasing the weld gap has been shown to reduce shrinkage cavity formation and cold lap welding defects , and increasing the preheat and thermite temperature further reduces these defects. However, reducing these defects promotes a second form of defect: microporosity. [ 42 ] Care must also be taken to ensure that the rails remain straight, without dipped joints, which can cause wear on high-speed and heavy axle load lines. [ 43 ] Studies of the hardness of thermite welds used to repair tracks have improved the hardness so that it compares more closely to that of the original track, while keeping the process portable. [ 44 ] Because the thermite reaction is a self-contained redox process that is comparatively environmentally friendly, it has started to be adapted for sealing oil wells in place of concrete. Although thermite is usually in a powder state, a diluted mixture can reduce damage to the surroundings during the process, though too much alumina can risk hurting the integrity of the seal. [ 45 ] [ 46 ] A higher concentration of the mixture was needed to melt the plastic of a model tube, making it a favorable mixture. [ 47 ]
Other experiments have been done to simulate the heat flux of well sealing in order to predict the temperature on the surface of the seal over time. [ 48 ] A thermite reaction, when used to purify the ores of some metals, is called the thermite process , or aluminothermic reaction. An adaptation of the reaction, used to obtain pure uranium , was developed as part of the Manhattan Project at Ames Laboratory under the direction of Frank Spedding ; it is sometimes called the Ames process . [ 49 ] Copper thermite is used for welding together thick copper wires for the purpose of electrical connections. It is used extensively by the electrical utility and telecommunications industries ( exothermic welded connections ). Thermite hand grenades and charges are typically used by armed forces both in an anti-materiel role and in the partial destruction of equipment, the latter being common when time is not available for safer or more thorough methods. [ 50 ] [ 51 ] For example, thermite can be used for the emergency destruction of cryptographic equipment when there is a danger that it might be captured by enemy troops. Because standard iron thermite is difficult to ignite, burns with practically no flame and has a small radius of action, standard thermite is rarely used on its own as an incendiary composition. In general, an increase in the volume of gaseous reaction products of a thermite blend increases the heat transfer rate (and therefore the damage) of that particular thermite blend. [ 52 ] Thermite is usually used with other ingredients that increase its incendiary effects. Thermate-TH3 is a mixture of thermite and pyrotechnic additives that has been found superior to standard thermite for incendiary purposes. [ 53 ] Its composition by weight is generally about 68.7% thermite, 29.0% barium nitrate , 2.0% sulfur , and 0.3% of a binder (such as PBAN ). [ 53 ] The addition of barium nitrate to thermite increases its thermal effect, produces a larger flame, and significantly reduces the ignition temperature. [ 53 ] Although the primary purpose of Thermate-TH3 in the armed forces is as an incendiary anti-materiel weapon, it also has uses in welding together metal components. A classic military use for thermite is disabling artillery pieces, and it has been used for this purpose since World War II, such as at Pointe du Hoc , Normandy . [ 54 ] Because it permanently disables artillery pieces without the use of explosive charges, thermite can be used when silence is necessary to an operation. This can be accomplished by inserting one or more armed thermite grenades into the breech and then quickly closing it; this welds the breech shut and makes loading the weapon impossible. [ 55 ] During World War II, both German and Allied incendiary bombs used thermite mixtures. [ 56 ] [ 57 ] Incendiary bombs usually consisted of dozens of thin, thermite-filled canisters ( bomblets ) ignited by a magnesium fuse. Incendiary bombs created massive damage in numerous cities due to the fires started by the thermite. Cities that primarily consisted of wooden buildings were especially susceptible. These incendiary bombs were used primarily during nighttime air raids ; bombsights could not be used at night, creating the need for munitions that could destroy targets without requiring precision placement. So-called "Dragon" drones equipped with thermite munitions were used by the Ukrainian army during the Russian invasion of Ukraine against Russian positions. [ 58 ]
Thermite usage is hazardous due to the extremely high temperatures produced and the extreme difficulty of smothering a reaction once initiated. Small streams of molten iron released in the reaction can travel considerable distances and may melt through metal containers, igniting their contents. Additionally, flammable metals with relatively low boiling points, such as zinc (with a boiling point of 907 °C, about 1,370 °C below the temperature at which thermite burns), could potentially spray superheated boiling metal violently into the air if near a thermite reaction. [ citation needed ] If thermite is contaminated with organics, hydrated oxides or other compounds able to produce gases upon heating or upon reaction with thermite components, the reaction products may be sprayed. Moreover, if the thermite mixture contains enough empty spaces with air and burns fast enough, the superheated air may also cause the mixture to spray. For this reason it is preferable to use relatively crude powders, so that the reaction rate is moderate and hot gases can escape the reaction zone. Preheating of thermite before ignition can easily happen accidentally, for example by pouring a new pile of thermite over hot, recently ignited thermite slag . When ignited, preheated thermite can burn almost instantaneously, releasing light and heat energy at a much higher rate than normal and causing burns and eye damage at what would normally be a reasonably safe distance. [ citation needed ] The thermite reaction can take place accidentally in industrial locations where workers use abrasive grinding and cutting wheels with ferrous metals . Using aluminum in this situation produces a mixture of oxides that can explode violently. [ 59 ] Mixing water with thermite or pouring water onto burning thermite can cause a steam explosion , spraying hot fragments in all directions. [ 60 ] Thermite's main ingredients were also utilized for their individual qualities, specifically reflectivity and heat insulation, in a paint coating or dope for the German zeppelin Hindenburg , possibly contributing to its fiery destruction. This theory was put forward by the former NASA scientist Addison Bain and later tested in small scale by the scientific reality-TV show MythBusters with semi-inconclusive results (it was proven not to be the fault of the thermite reaction alone, but instead conjectured to be a combination of that and the burning of the hydrogen gas that filled the body of the Hindenburg ). [ 61 ] The MythBusters program also tested the veracity of a video found on the Internet in which a quantity of thermite in a metal bucket was ignited while sitting on top of several blocks of ice, causing a sudden explosion. They were able to confirm the results, finding huge chunks of ice as far as 50 m from the point of explosion. Co-host Jamie Hyneman conjectured that this was due to the thermite mixture aerosolizing , perhaps in a cloud of steam, causing it to burn even faster. Hyneman also voiced skepticism about another theory explaining the phenomenon: that the reaction somehow separated the hydrogen and oxygen in the ice and then ignited them. That explanation attributes the explosion to the reaction of high-temperature molten aluminum with water: aluminum reacts violently with water or steam at high temperatures, releasing hydrogen and oxidizing in the process. The speed of that reaction and the ignition of the resulting hydrogen can easily account for the observed explosion. [ 62 ] This process is akin to the explosive reaction caused by dropping metallic potassium into water.
https://en.wikipedia.org/wiki/Thermite
Thermo-acoustic instability refers to an instability arising from the coupling between an acoustic field and an unsteady heat-release process. This instability is very relevant to combustion instabilities in systems such as rocket engines . [ 1 ] [ 2 ] [ 3 ] A very simple mechanism of acoustic amplification was first identified by Lord Rayleigh in 1878. [ 4 ] [ 5 ] In simple terms, the Rayleigh criterion states that amplification results if, on average, heat addition occurs in phase with the pressure increases during the oscillation . [ 1 ] That is, if $p'$ is the pressure perturbation (with respect to its mean value $\langle p \rangle$) and $\dot{q}'$ is the rate of heat release per unit volume (with respect to its mean value $\langle \dot{q} \rangle$), then the Rayleigh criterion says that acoustic amplification occurs if

$$\langle p' \dot{q}' \rangle > 0.$$

The Rayleigh criterion is used to explain many phenomena, such as singing flames in tubes and sound amplification in a Rijke tube . In complex systems the Rayleigh criterion may not be strictly valid, as there exist many damping factors (viscous, wall, nozzle, relaxation, homogeneous and particle damping, mean-flow effects, etc.) that are not accounted for in Rayleigh's analysis. [ 1 ]
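In practice, the criterion is often checked by computing the time-averaged product of the measured pressure and heat-release fluctuations (a "Rayleigh index"); a positive value indicates acoustic driving. A minimal sketch with synthetic signals (the frequency, amplitudes, and 30-degree phase lag are arbitrary illustrations, not data from any real combustor):

```python
import numpy as np

def rayleigh_index(p, q):
    """Time average of p' * q' with the means removed; a positive value
    satisfies the Rayleigh criterion (acoustic amplification)."""
    return np.mean((p - p.mean()) * (q - q.mean()))

# Synthetic 200 Hz oscillation: heat release lags pressure by 30 degrees,
# so the average of the product is proportional to cos(30 deg) > 0.
t = np.linspace(0.0, 0.1, 4000)
p = 100.0 * np.sin(2 * np.pi * 200 * t)              # pressure fluctuation, Pa
q = 5.0e4 * np.sin(2 * np.pi * 200 * t - np.pi / 6)  # heat-release fluctuation, W/m^3
print(f"Rayleigh index: {rayleigh_index(p, q):.3g}  (>0 -> amplification)")
```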
https://en.wikipedia.org/wiki/Thermo-acoustic_instability
The thermo-dielectric effect is the production of electric currents and charge separation during a phase transition . The effect was discovered by Joaquim da Costa Ribeiro in 1944. The Brazilian physicist observed that the solidification and melting of many dielectrics are accompanied by charge separation. The thermo-dielectric effect was demonstrated with carnauba wax , naphthalene and paraffin . Charge separation in ice was also expected, and the effect was indeed observed during the freezing of water; it has been suggested as a cause of electrical storm effects. The effect was measured by many researchers, including Bernhard Gross , Armando Dias Tavares and Sergio Mascarenhas . César Lattes (co-discoverer of the pion ) supposed that this was the only effect ever to be discovered entirely in Brazil .
https://en.wikipedia.org/wiki/Thermo-dielectric_effect
Thermo-mechanical fatigue (TMF for short) is the overlay of a cyclic mechanical loading, which leads to fatigue of a material, with a cyclic thermal loading. Thermo-mechanical fatigue is an important point that needs to be considered when constructing turbine engines or gas turbines. There are three mechanisms acting in thermo-mechanical fatigue: creep, fatigue, and oxidation. Each factor has more or less of an effect depending on the parameters of the loading. In-phase (IP) thermo-mechanical loading (when the temperature and load increase at the same time) is dominated by creep. The combination of high temperature and high stress is the ideal condition for creep. The heated material flows more easily in tension, but cools and stiffens under compression. Out-of-phase (OP) thermo-mechanical loading is dominated by the effects of oxidation and fatigue. Oxidation weakens the surface of the material, creating flaws and seeds for crack propagation. As the crack propagates, the newly exposed crack surface then oxidizes, weakening the material further and enabling the crack to extend. A third case occurs in OP TMF loading when the stress difference is much greater than the temperature difference. Fatigue alone is the driving cause of failure in this case, causing the material to fail before oxidation can have much of an effect. [ 1 ] TMF is still not fully understood. There are many different models that attempt to predict the behavior and life of materials undergoing TMF loading. The two models presented below take different approaches. Many different models have been developed in an attempt to understand and explain TMF; this article will address the two broadest approaches, constitutive and phenomenological models. Constitutive models utilize the current understanding of the microstructure of materials and failure mechanisms. These models tend to be more complex, as they try to incorporate everything known about how the materials fail. These types of models are becoming more popular recently, as improved imaging technology has allowed for a better understanding of failure mechanisms. Phenomenological models are based purely on the observed behavior of materials. They treat the exact mechanism of failure as a sort of "black box": temperature and loading conditions are input, and the result is the fatigue life. These models try to fit some equation to match the trends found between different inputs and outputs. The damage accumulation model is a constitutive model of TMF. It adds together the damage from the three failure mechanisms of fatigue, creep, and oxidation:

$$\frac{1}{N_f} = \frac{1}{N_f^{fatigue}} + \frac{1}{N_f^{oxidation}} + \frac{1}{N_f^{creep}}$$

where $N_f$ is the fatigue life of the material, that is, the number of loading cycles until failure. The fatigue life for each failure mechanism is calculated individually and combined to find the total fatigue life of the specimen. [ 2 ] [ 3 ] The life from fatigue is calculated for isothermal loading conditions. It is dominated by the strain applied to the specimen:

$$\frac{\Delta \epsilon_m}{2} = C \,(2 N_f^{fatigue})^d$$

where $C$ and $d$ are material constants found through isothermal testing. Note that this term does not account for temperature effects; the effects of temperature are treated in the oxidation and creep terms.
The life from oxidation is affected by temperature and cycle time:

$$\frac{1}{N_f^{oxidation}} = \left( \frac{h_{cr}\,\delta_0}{B\,\Phi^{oxidation} K_p^{eff}} \right)^{-1/\beta} \frac{2\,(\Delta \dot{\epsilon}_m)^{(2/\beta)+1}}{\epsilon^{1-\alpha/\beta}}$$

where

$$K_p^{eff} = \frac{1}{t_c} \int_0^t D_0 \exp\!\left( \frac{-Q}{R\,T(t)} \right) dt$$

and

$$\Phi^{oxidation} = \frac{1}{t_c} \int_0^t \exp\!\left[ -\frac{1}{2} \left( \frac{(\dot{\epsilon}_{th}/\dot{\epsilon}_m) + 1}{\dot{\zeta}^{oxidation}} \right)^{2} \right] dt$$

Parameters are found by comparing fatigue tests done in air and in an environment with no oxygen (vacuum or argon). Under these testing conditions, it has been found that the effects of oxidation can reduce the fatigue life of a specimen by a whole order of magnitude. Higher temperatures greatly increase the amount of damage from environmental factors. [ 4 ] The damage from creep is given by

$$D^{creep} = \Phi^{creep} \int_0^t A\, e^{-\Delta H / R\,T(t)} \left( \frac{\alpha_1 \bar{\sigma} + \alpha_2 \sigma_H}{K} \right)^{m} dt$$

where

$$\Phi^{creep} = \frac{1}{t_c} \int_0^t \exp\!\left[ -\frac{1}{2} \left( \frac{(\dot{\epsilon}_{th}/\dot{\epsilon}_m) - 1}{\dot{\zeta}^{creep}} \right)^{2} \right] dt$$

The damage accumulation model is one of the most in-depth and accurate models for TMF, accounting for the effects of each failure mechanism. It is also one of the most complex models for TMF: there are several material parameters that must be found through extensive testing. [ 5 ] Strain-rate partitioning is a phenomenological model of thermo-mechanical fatigue. It is based on observed phenomena instead of the failure mechanisms. This model deals only with inelastic strain and ignores elastic strain completely. It accounts for different types of deformation and breaks the inelastic strain into four possible scenarios: plastic strain reversed by plastic strain (pp), creep reversed by creep (cc), plastic strain reversed by creep (pc), and creep reversed by plastic strain (cp). [ 6 ] The damage and life for each partition are calculated and combined in the model

$$\frac{1}{N_f} = \frac{F_{pp}}{N'_{pp}} + \frac{F_{cc}}{N'_{cc}} + \frac{F_{pc}}{N'_{pc}} + \frac{F_{cp}}{N'_{cp}}$$

where

$$F_{pp} = \frac{\Delta \epsilon_{pp}}{\Delta \epsilon_{inelastic}}, \quad F_{cc} = \frac{\Delta \epsilon_{cc}}{\Delta \epsilon_{inelastic}}, \quad F_{pc} = \frac{\Delta \epsilon_{pc}}{\Delta \epsilon_{inelastic}}, \quad F_{cp} = \frac{\Delta \epsilon_{cp}}{\Delta \epsilon_{inelastic}}$$

and $N'_{pp}$ etc. are found from variations of the equation

$$\Delta \epsilon_{inelastic} = A_{pp} (N'_{pp})^{C_{pp}}$$

where $A$ and $C$ are material constants for the individual loading type. Strain-rate partitioning is a much simpler model than the damage accumulation model.
Because it breaks down the loading into specific scenarios, it can account for different phases in loading. The model is based on inelastic strain. This means that it does not work well in scenarios of low inelastic strain, such as brittle materials or loading with very low strain. The model can also be an oversimplification: because it fails to account for oxidation damage, it may overpredict specimen life under certain loading conditions. The next area of research is the TMF of composites, where the interaction between the different materials adds another layer of complexity. Zhang and Wang are currently investigating the TMF of a unidirectional fiber-reinforced matrix. They are using a finite element method that accounts for the known microstructure. They have found that the large difference in the thermal expansion coefficients of the matrix and the fiber is the driving cause of failure, producing high internal stress. [ 7 ]
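To make the damage accumulation model above concrete: the reciprocal lives (damage fractions per cycle) simply add, so the shortest-lived mechanism dominates the combined life. A minimal sketch with hypothetical per-mechanism lives:

```python
def combined_tmf_life(n_fatigue, n_oxidation, n_creep):
    """Damage accumulation model: damage fractions per cycle (inverse
    lives) from fatigue, oxidation and creep simply add."""
    return 1.0 / (1.0 / n_fatigue + 1.0 / n_oxidation + 1.0 / n_creep)

# Hypothetical per-mechanism lives (cycles) for one loading condition
life = combined_tmf_life(n_fatigue=5.0e4, n_oxidation=2.0e4, n_creep=1.0e5)
print(f"combined TMF life: {life:.0f} cycles")  # ~12500, oxidation-dominated
```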
https://en.wikipedia.org/wiki/Thermo-mechanical_fatigue
In chemistry , thermochemical cycles combine solely heat sources ( thermo ) with chemical reactions to split water into its hydrogen and oxygen components. [ 1 ] The term cycle is used because, aside from water, hydrogen and oxygen, the chemical compounds used in these processes are continuously recycled. If work is partially used as an input, the resulting thermochemical cycle is defined as a hybrid one. This concept was first postulated by Funk and Reinstrom (1966) as a maximally efficient way to produce fuels (e.g. hydrogen , ammonia ) from stable and abundant species (e.g. water , nitrogen ) and heat sources. [ 2 ] Although fuel availability was scarcely considered before the oil crisis, efficient fuel generation was already an issue in important niche markets . As an example, in the military logistics field, providing fuels for vehicles in remote battlefields is a key task. Hence, a mobile production system based on a portable heat source (a nuclear reactor was considered) was being investigated with utmost interest. Following the oil crisis, multiple programs (Europe, Japan, United States) were created to design, test and qualify such processes for purposes such as energy independence. High-temperature (around 1,000 K (730 °C; 1,340 °F) operating temperature) nuclear reactors were still considered the likely heat sources. However, optimistic expectations based on initial thermodynamics studies were quickly moderated by pragmatic analyses comparing standard technologies ( thermodynamic cycles for electricity generation, coupled with the electrolysis of water ) and by numerous practical issues (insufficient temperatures from even nuclear reactors, slow reactivities, reactor corrosion, significant losses of intermediate compounds with time...). [ 3 ] Hence, interest in this technology faded during the following decades, [ 4 ] or at least some tradeoffs (hybrid versions) were considered, with the use of electricity as a fractional energy input instead of only heat for the reactions (e.g. the Hybrid sulfur cycle ). A rebirth in the year 2000 can be explained by both the new energy crisis, demand for electricity, and the rapid pace of development of concentrated solar power technologies, whose potentially very high temperatures are ideal for thermochemical processes, [ 5 ] while the environmentally friendly side of thermochemical cycles attracted funding in a period concerned with a potential peak oil outcome. Consider a system composed of chemical species (e.g. water splitting ) in thermodynamic equilibrium at constant pressure and thermodynamic temperature T. Equilibrium is displaced to the right only if energy ( enthalpy change ΔH for water-splitting) is provided to the system under strict conditions imposed by thermodynamics : the work input must be at least the Gibbs free energy change ΔG, while the balance ΔH − ΔG = TΔS can be supplied as heat. Hence, for an ambient temperature T° of 298 K ( kelvin ) and a pressure of 1 atm ( atmosphere (unit) ) (ΔG° and ΔS° are respectively equal to 237 kJ/mol and 163 J/mol/K, relative to the initial amount of water), more than 80% of the required energy ΔH must be provided as work in order for water-splitting to proceed. If phase transitions are neglected for simplicity's sake (e.g. water electrolysis under pressure to keep water in its liquid state), one can assume that ΔH and ΔS do not vary significantly for a given temperature change. These parameters are thus taken equal to their standard values ΔH° and ΔS° at temperature T°. Consequently, the work required at temperature T is

$$\Delta G(T) = \Delta H^\circ - T\,\Delta S^\circ \qquad (3)$$

As ΔS° is positive, a temperature increase leads to a reduction of the required work.
This is the basis of high-temperature electrolysis . This can also be explained intuitively. Chemical species can have various excitation levels depending on the absolute temperature T, which is a measure of the thermal agitation. The latter causes shocks between atoms or molecules inside the closed system, such that energy spreading among the excitation levels increases with time and stops (equilibrium) only when most of the species have similar excitation levels (a molecule in a highly excited level will quickly return to a lower energy state by collisions) ( Entropy (statistical thermodynamics) ). Relative to the absolute temperature scale, the excitation levels of the species are gathered based on standard enthalpy change of formation considerations, i.e. their stabilities. As this value is null for water but strictly positive for oxygen and hydrogen, most of the excitation levels of these latter species lie above those of water. The density of excitation levels in a given temperature range then increases monotonically with the species' entropy, and a positive entropy change for water-splitting means far more excitation levels in the products. Consequently, one can imagine that if T were high enough in Eq. (3), ΔG could be nullified, meaning that water-splitting would occur even without work ( thermolysis of water). Though possible, this would require tremendously high temperatures: considering the same system naturally with steam instead of liquid water (ΔH° = 242 kJ/mol; ΔS° = 44 J/mol/K) would give required temperatures above 3000 K, which makes reactor design and operation extremely challenging. [ 6 ] Hence, a single reaction offers only one degree of freedom (T) to produce hydrogen and oxygen from heat alone (though using Le Chatelier's principle would also allow a slight decrease of the thermolysis temperature, work must be provided in this case for extracting the gas products from the system). On the contrary, as shown by Funk and Reinstrom, multiple reactions (e.g. k steps) provide additional means to allow spontaneous water-splitting without work, thanks to different entropy changes ΔS°_i for each reaction i. An extra benefit compared with water thermolysis is that oxygen and hydrogen are separately produced, avoiding complex separations at high temperatures. [ 7 ] The first prerequisites (Eqs. (4) and (5)) for multiple reactions i to be equivalent to water-splitting are trivial (cf. Hess's law ): the enthalpy and entropy changes of the steps must sum to those of water-splitting,

$$\sum_i \Delta H^\circ_i = \Delta H^\circ \qquad (4)$$

$$\sum_i \Delta S^\circ_i = \Delta S^\circ \qquad (5)$$

Similarly, the work ΔG required by the process is the sum of the work of each reaction ΔG_i:

$$\Delta G = \sum_i \Delta G_i \qquad (6)$$

As Eq. (3) is a general law, it can be used anew to develop each ΔG_i term, with the reactions with positive entropy changes (index p) operated at the hot-source temperature T_H and those with negative entropy changes (index n) operated at T°. Expressing the two groups as separate summations gives

$$\Delta G = \sum_p \left( \Delta H^\circ_p - T_H\, \Delta S^\circ_p \right) + \sum_n \left( \Delta H^\circ_n - T^\circ \Delta S^\circ_n \right) \qquad (7)$$

Using Eq. (6) for standard conditions allows the ΔG°_i terms to be factorized, yielding

$$\Delta G = \Delta G^\circ - (T_H - T^\circ) \sum_p \Delta S^\circ_p \qquad (8)$$

Now consider the contribution of each term in Eq. (8): in order to minimize ΔG, the summation of the positive entropy changes, weighted by (T_H − T°), must be as large as possible, and the spontaneity condition is ΔG ≤ 0 (9). Finally, one can deduce from this last equation the relationship required for a null work requirement (ΔG ≤ 0):

$$\sum_p \Delta S^\circ_p \;\geq\; \frac{\Delta G^\circ}{T_H - T^\circ} \qquad (10)$$

Consequently, a thermochemical cycle with i steps can be defined as a sequence of i reactions equivalent to water-splitting and satisfying equations (4), (5) and (10) . The key point to remember in that case is that the process temperature T_H can theoretically be chosen arbitrarily (1000 K as a reference in most of the past studies, for high-temperature nuclear reactors), far below the water thermolysis temperature.
This equation can alternatively (and naturally) be derived via Carnot's theorem , which must be respected by the system composed of a thermochemical process coupled with a work-producing unit (the chemical species are thus in a closed loop), and which bounds the work W extracted from the heat Q by W ≤ Q(1 − T°/T_H). Consequently, replacing W (ΔG°) and Q (Eq. (14)) in Eq. (11) gives, after reorganization, Eq. (10) (assuming that the ΔS_i do not change significantly with the temperature, i.e. are equal to ΔS°_i). Equation (10) has practical implications regarding the minimum number of reactions for such a process according to the maximum process temperature T_H. [ 8 ] Indeed, a numerical application (ΔG° equal to 229 kJ/mol for water considered as steam) in the case of the originally chosen conditions (a high-temperature nuclear reactor with T_H and T° respectively equal to 1000 K and 298 K) gives a minimum value of around 330 J/mol/K for the summation of the positive entropy changes ΔS°_i of the process reactions. This value is very high: most reactions have entropy change values below 50 J/mol/K, and even an elevated one (e.g. water-splitting from liquid water: 163 J/mol/K) is only half of it. Consequently, thermochemical cycles composed of fewer than three steps are practically impossible with the originally planned heat sources (below 1000 K), or require "hybrid" versions. In this case, an extra degree of freedom is added via a relatively small work input W_add (maximum work consumption, Eq. (9) with ΔG ≤ W_add), and Eq. (10) becomes

$$\sum_p \Delta S^\circ_p \;\geq\; \frac{\Delta G^\circ - W_{add}}{T_H - T^\circ} \qquad (15)$$

If W_add is expressed as a fraction f of the process heat Q (Eq. (14)), Eq. (15) can be reorganized accordingly (Eq. (16)). Using a work input equal to a fraction f of the heat input is, with respect to the choice of reactions, equivalent to operating a purely thermal cycle with a hot-source temperature increased in the same proportion f. Naturally, this decreases the heat-to-work efficiency in the same proportion f. Consequently, if one wants a process similar to a thermochemical cycle operating with a 2000 K heat source (instead of 1000 K), the maximum heat-to-work efficiency is halved. As real efficiencies are often significantly lower than ideal ones, such a process is thus strongly limited. In practice, the use of work is restricted to key steps such as product separations, where techniques relying on work (e.g. electrolysis) can sometimes have fewer issues than those using only heat (e.g. distillation ). According to equation (10), the minimum required entropy change (the right-hand term) for the summation of the positive entropy changes decreases when T_H increases. As an example, performing the same numerical application but with T_H equal to 2000 K would give a value half as large (around 140 J/mol/K), which allows thermochemical cycles with only two reactions. Such processes can realistically be coupled with concentrated solar power technologies such as solar power towers . As an example in Europe, this is the goal of the Hydrosol-2 project (Greece, Germany ( German Aerospace Center ), Spain, Denmark, England) [ 9 ] and of the research of the solar departments of ETH Zurich and the Paul Scherrer Institute (Switzerland). [ 10 ] Examples of reactions satisfying high entropy changes are metal oxide dissociations , as the products have more excitation levels, due to their gaseous state (metal vapors and oxygen), than the reactant (a solid with a crystalline structure, whose symmetry dramatically reduces the number of different excitation levels).
Consequently, these entropy changes can often be larger than that of water-splitting, and thus a reaction with a negative entropy change is required in the thermochemical process so that Eq. (5) is satisfied. Furthermore, assuming similar stabilities of the reactant (ΔH°) for both thermolysis and oxide dissociation, the larger entropy change in the second case again explains a lower reaction temperature (Eq. (3)). Let us assume two reactions, with positive (subscript 1, at T_H) and negative (subscript 2, at T°) entropy changes. An extra property can be derived in order to have T_H strictly lower than the thermolysis temperature: the standard thermodynamic values must be unevenly distributed among the reactions . [ 11 ] Indeed, according to the general equations (2) (spontaneous reaction), (4) and (5), one must satisfy inequality (17). Hence, if ΔH°_1 is proportional to ΔH°_2 by a given factor, and if ΔS°_1 and ΔS°_2 follow a similar law (same proportionality factor), inequality (17) is broken (equality holds instead, so T_H equals the water thermolysis temperature). Hundreds of such cycles have been proposed and investigated. This task has been eased by the availability of computers, allowing a systematic screening of chemical reaction sequences based on thermodynamic databases. [ 12 ] Only the main "families" will be described in this article. [ 13 ] Two-step thermochemical cycles, often involving metal oxides, [ 14 ] can be divided into two categories depending on the nature of the reaction: volatile and non-volatile. Volatile cycles utilize metal species that sublime during the reduction of the metal oxides, and non-volatile cycles can be further categorized into stoichiometric cycles and non-stoichiometric cycles. During the reduction half-cycle of a stoichiometric cycle, the metal oxide is reduced and forms a new metal oxide with a different oxidation state (Fe 3 O 4 → 3FeO + 1/2 O 2 ); the reduction step of a non-stoichiometric cycle produces vacancies, often oxygen vacancies, but the crystal structure remains stable and only a portion of the metal atoms change their oxidation state (CeO 2 → CeO 2−δ + δ/2 O 2 ). The non-stoichiometric cycle with CeO 2 can be described by the following reactions:

CeO 2 → CeO 2−δ + δ/2 O 2 (thermal reduction)
CeO 2−δ + δ H 2 O → CeO 2 + δ H 2 (hydrolysis)

The reduction occurs when CeO 2 , or ceria, is exposed to an inert atmosphere at around 1500 °C to 1600 °C, [ 15 ] and hydrogen release occurs at 800 °C during hydrolysis, when it is subjected to an atmosphere containing water vapor. One advantage of ceria over iron oxide lies in its higher melting point, which allows it to sustain higher temperatures during the reduction cycle. In addition, ceria's ionic conductivity allows oxygen atoms to diffuse through its structure several orders of magnitude faster than Fe ions can diffuse through iron oxide. Consequently, the redox reactions of ceria can occur at a larger length scale, making it an ideal candidate for thermochemical reactor testing. Ceria-based thermochemical reactors have been created and tested as early as 2010, and the viability of cycling was corroborated under realistic solar concentrating conditions. One disadvantage that limits ceria's application is its relatively low oxygen storage capability.
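The practical meaning of ceria's limited oxygen storage can be illustrated with a quick estimate: each oxygen vacancy created on reduction yields one H2 molecule on hydrolysis, so the hydrogen yield per cycle scales directly with the non-stoichiometry δ. The δ used below is an assumed, illustrative order of magnitude, not a measured value:

```python
# Hydrogen yield per cycle for the ceria cycle CeO2 -> CeO2-d -> CeO2.
M_CeO2 = 172.11   # g/mol
delta = 0.05      # assumed non-stoichiometry; depends on T and O2 partial pressure

mol_H2_per_kg = delta * 1000.0 / M_CeO2   # one H2 per re-oxidized vacancy
litres_stp = mol_H2_per_kg * 22.4         # ideal-gas volume at STP
print(f"~{mol_H2_per_kg:.2f} mol (~{litres_stp:.1f} L at STP) of H2 per kg of ceria per cycle")
```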
The non-stoichiometric cycle with a perovskite ABO 3 can be described by the following reactions:

ABO 3 → ABO 3−δ + δ/2 O 2 (thermal reduction)
ABO 3−δ + δ H 2 O → ABO 3 + δ H 2 (hydrolysis)

The reduction thermodynamics of perovskites make them more favorable during the reduction half-cycle, during which more oxygen is produced; however, the oxidation thermodynamics prove less suitable, and sometimes the perovskite is not completely oxidized. The two atomic sites, A and B, offer more doping possibilities and a much larger potential for different configurations. [ 16 ] Due to sulfur's high covalence , it can form up to six chemical bonds with other elements such as oxygen, resulting in a large number of oxidation states . Thus, there exist several redox reactions involving sulfur compounds. This freedom allows numerous chemical steps with different entropy changes, increasing the odds of meeting the criteria for a thermochemical cycle. Much of the initial research was conducted in the United States, with sulfate- and sulfide-based cycles studied at the University of Kentucky, [ 17 ] [ 18 ] the Los Alamos National Laboratory [ 19 ] and General Atomics . Significant research based on sulfates (e.g., FeSO 4 and CuSO 4 ) was conducted in Germany [ 20 ] and Japan. [ 21 ] [ 22 ] The sulfur-iodine cycle , discovered by General Atomics, has been proposed as a way of supplying a hydrogen economy without the need for hydrocarbons . [ 23 ] Above 973 K, the Deacon reaction is reversed, yielding hydrogen chloride and oxygen from water and chlorine :

2 Cl 2 + 2 H 2 O → 4 HCl + O 2
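The numerical applications quoted in this article are straightforward to reproduce from Eqs. (3) and (10), using the steam values given above (ΔH° = 242 kJ/mol, ΔS° = 44 J/mol/K). A short sketch:

```python
# Reproduce the numerical applications above for water considered as steam.
dH0, dS0, T0 = 242e3, 44.0, 298.0   # J/mol, J/(mol K), K (values from the text)
dG0 = dH0 - T0 * dS0                 # Eq. (3) at T = T0: ~229 kJ/mol

def min_positive_entropy_sum(T_hot):
    """Eq. (10): minimum summed entropy change of the high-temperature
    (positive-dS) steps for a cycle running between T_hot and T0."""
    return dG0 / (T_hot - T0)

print(f"dG0 ~ {dG0 / 1e3:.0f} kJ/mol")
print(f"min sum of positive dS at 1000 K: {min_positive_entropy_sum(1000.0):.0f} J/(mol K)")  # ~330
print(f"min sum of positive dS at 2000 K: {min_positive_entropy_sum(2000.0):.0f} J/(mol K)")  # ~140
```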
https://en.wikipedia.org/wiki/Thermochemical_cycle
In thermochemistry , a thermochemical equation is a balanced chemical equation that represents the energy changes from a system to its surroundings . One such equation involves the enthalpy change, which is denoted with $\Delta H$. In variable form, a thermochemical equation would appear similar to the following:

$$A + B \rightarrow C, \qquad \Delta H = e$$

where $A$, $B$, and $C$ are the usual agents of a chemical equation, with coefficients, and $e$ is a positive or negative numerical value, which generally has units of kJ/mol. Another equation may include the symbol $E$ to denote energy; the position of $E$ determines whether the reaction is considered endothermic (energy-absorbing) or exothermic (energy-releasing). Enthalpy ( $H$ ) is the transfer of energy in a reaction (for chemical reactions, it is in the form of heat) and $\Delta H$ is the change in enthalpy. $\Delta H$ is a state function, meaning that it is independent of the processes occurring between the initial and final states. In other words, it does not matter which steps are taken to get from the initial reactants to the final products: $\Delta H$ will always be the same. $\Delta H_{\text{rxn}}$, the change in enthalpy of a reaction, has the same value as the $\Delta H$ in a thermochemical equation; however, $\Delta H_{\text{rxn}}$ is measured in units of kJ/mol, meaning that it is the enthalpy change per mole of any particular substance in the equation. Values of $\Delta H$ are determined experimentally under standard conditions of 1 atm [ clarification needed ] and 25 °C (298.15 K). As discussed earlier, $\Delta H$ can have a positive or negative sign. If $\Delta H$ has a positive sign, the system absorbs heat and is endothermic ; if $\Delta H$ is negative, then heat is produced and the system is exothermic .

Endothermic: $A + B + \text{Heat} \rightarrow C, \quad \Delta H > 0$
Exothermic: $A + B \rightarrow C + \text{Heat}, \quad \Delta H < 0$

Since enthalpy is a state function, the $\Delta H$ given for a particular reaction is only true for that exact reaction. The physical states of reactants and products matter, as do molar concentrations. Since $\Delta H$ is dependent on the physical states and molar concentrations in reactions, thermochemical equations must be stoichiometrically correct. If one agent of an equation is changed through multiplication, then all agents must be proportionally changed, including $\Delta H$. The multiplicative property of thermochemical equations is mainly due to the first law of thermodynamics , which says that energy can neither be created nor destroyed, a concept commonly known as the conservation of energy. It holds true on a physical or molecular scale. Thermochemical equations can be changed, as mentioned above, by multiplying by any numerical coefficient. All agents must be multiplied, including $\Delta H$. Using the thermochemical equation in variables as above, one gets the following example, assuming that $A$ needs to be multiplied by two in order for the thermochemical equation to be used.
All the agents in the reaction must then be multiplied by the same coefficient, like so:

2A + 2B → 2C, ΔH = 2e

This is again considered to be logical when the first law of thermodynamics is considered: twice as much product is produced, so twice as much heat is removed or given off. The division of coefficients functions in the same way. Hess's law states that the sum of the energy changes of all thermochemical equations included in an overall reaction is equal to the overall energy change. Since ΔH is a state function and is not dependent on how reactants become products, steps (in the form of several thermochemical equations) can be used to find the ΔH of the overall reaction. For instance:

Reaction 1: C(graphite)(s) + O2(g) → CO2(g)

This reaction is the result of two steps (a reaction sequence):

C(graphite)(s) + 1/2 O2(g) → CO(g), ΔH = −110.5 kJ/mol
CO(g) + 1/2 O2(g) → CO2(g), ΔH = −283.0 kJ/mol

Adding these two steps together reproduces Reaction 1, provided the intermediate species cancel: CO(g) appears as a product of the first step and a reactant of the second, and since it does not appear in Reaction 1, it cancels when the steps are added:

C(graphite)(s) + 1/2 O2(g) + 1/2 O2(g) → CO2(g), i.e. C(graphite)(s) + O2(g) → CO2(g) (Reaction 1)

To solve for ΔH, the ΔH values of the two equations in the reaction sequence are added together:

ΔH = −110.5 + (−283.0) = −393.5 kJ/mol

Another example involving thermochemical equations is the combustion of methane gas, which releases heat and makes the reaction exothermic. In the process, 890.4 kJ of heat is released per mole of reactants, so the heat is written as a product of the reaction. Values of ΔH have been experimentally determined and are available in table form. Most general chemistry textbooks have appendixes including common ΔH values, and there are several online tables available. The Active Thermochemical Tables (ATcT) software provides more information online.
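Because ΔH scales and sums with the rest of the equation, the bookkeeping behind the multiplication rule and Hess's law can be scripted directly. The short Python sketch below is illustrative only: the helper names are ours, the step enthalpies are the values quoted above, and the ΔH of −50.0 kJ/mol in the scaling example is hypothetical.

```python
# Hess's law bookkeeping: a thermochemical equation is stored as
# (coefficients, delta_H). Scaling an equation scales delta_H too,
# and the overall delta_H of a sequence is the sum of the steps.

def scale(coeffs, delta_h, factor):
    """Multiply every agent's coefficient and delta_H by the same factor."""
    return {species: c * factor for species, c in coeffs.items()}, delta_h * factor

def hess_sum(steps):
    """Overall delta_H (kJ/mol) of a reaction sequence (Hess's law)."""
    return sum(dh for _, dh in steps)

# Steps from the text: reactants negative, products positive.
step1 = ({"C(graphite)": -1, "O2": -0.5, "CO": +1}, -110.5)
step2 = ({"CO": -1, "O2": -0.5, "CO2": +1}, -283.0)

print(hess_sum([step1, step2]))  # -393.5 kJ/mol, matching the text
# Doubling a hypothetical equation A + B -> C (delta_H = -50.0 kJ/mol)
# doubles its delta_H as well:
print(scale({"A": -1, "B": -1, "C": +1}, -50.0, 2))
```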
https://en.wikipedia.org/wiki/Thermochemical_equation
Thermochemical nanolithography (TCNL) or thermochemical scanning probe lithography (tc-SPL) is a scanning probe microscopy-based nanolithography technique which triggers thermally activated chemical reactions to change the chemical functionality or the phase of surfaces. Chemical changes can be written very quickly through rapid probe scanning, since no mass is transferred from the tip to the surface, and writing speed is limited only by the heat transfer rate [ citation needed ]. TCNL was invented in 2007 by a group at the Georgia Institute of Technology. [ 1 ] Riedo and collaborators demonstrated that TCNL can produce local chemical changes with feature sizes down to 12 nm at scan speeds up to 1 mm/s. [ 1 ] TCNL was used in 2013 to create a nano-scale replica of the Mona Lisa "painted" with different probe tip temperatures. Called the Mini Lisa, the portrait measured 30 micrometres (0.0012 in), about 1/25,000th the size of the original. [ 2 ] [ 3 ] The AFM thermal cantilevers are generally made from silicon wafers using traditional bulk and surface micro-machining processes. When an electric current is applied through the cantilever's highly doped silicon wings, resistive heating occurs at the lightly doped zone around the probe tip, where the largest fraction of the heat is dissipated. The tip is able to change its temperature very quickly due to its small volume; an average tip in contact with polycarbonate has a time constant of 0.35 ms. [ citation needed ] The tips can be cycled between ambient temperature and 1100 °C at up to 10 MHz [ citation needed ], while the distance of the tip from the surface and the tip temperature can be controlled independently. Thermally activated reactions have been triggered in proteins, [ 4 ] organic semiconductors, [ 5 ] electroluminescent conjugated polymers, and nanoribbon resistors. [ 6 ] Deprotection of functional groups [ 7 ] (sometimes involving a temperature gradient [ 8 ]) and the reduction of graphene oxide [ 9 ] have been demonstrated. The wettability of a polymer surface at the nanoscale [ 1 ] [ 10 ] has been modified, and nanostructures of poly(p-phenylene vinylene) (an electroluminescent conjugated polymer) have been created. [ 11 ] Nanoscale templates on polymer films for the assembly of nano-objects such as proteins and DNA have also been created, [ 12 ] and crystallized ferroelectric ceramics with storage densities up to 213 Gb/in² have been produced. [ 13 ] The use of a material that can undergo multiple chemical reactions at significantly different temperatures could lead to a multi-state system, wherein different functionalities can be addressed at different temperatures. [ citation needed ] Synthetic polymers, such as PMCC, have been used as functional layers on substrates, which allow for high-resolution patterning. [ 14 ] Thermo-mechanical scanning probe lithography relies on the application of heat and force to create indentations for patterning purposes (see also: Millipede memory). Thermal scanning probe lithography (t-SPL) specializes in removing material from a substrate without the intent of chemically altering the created topography. Local oxidation nanolithography relies on oxidation reactions in a water meniscus around the probe tip.
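As a rough illustration of why so small a tip can be modulated so quickly, a first-order thermal model (an assumption on our part, not a statement of the actual tip physics) predicts exponential settling toward the target temperature; the 0.35 ms time constant is the tip-on-polycarbonate value quoted above.

```python
import math

# First-order thermal model of a heated AFM tip (illustrative sketch):
#   T(t) = T_target + (T0 - T_target) * exp(-t / tau)
# tau = 0.35 ms is the tip-on-polycarbonate time constant quoted above.

TAU = 0.35e-3  # seconds

def tip_temperature(t, t0=25.0, t_target=1100.0):
    """Tip temperature (deg C) t seconds after a step command to t_target."""
    return t_target + (t0 - t_target) * math.exp(-t / TAU)

# Settling to within 1 % of the step takes about 4.6 time constants (~1.6 ms).
settle = -TAU * math.log(0.01)
print(f"99 % settling time: {settle * 1e3:.2f} ms")
print(f"T after one time constant: {tip_temperature(TAU):.0f} C")
```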
https://en.wikipedia.org/wiki/Thermochemical_nanolithography
Thermochemistry is the study of the heat energy which is associated with chemical reactions and/or phase changes such as melting and boiling. A reaction may release or absorb energy, and a phase change may do the same. Thermochemistry focuses on the energy exchange between a system and its surroundings in the form of heat. Thermochemistry is useful in predicting reactant and product quantities throughout the course of a given reaction. In combination with entropy determinations, it is also used to predict whether a reaction is spontaneous or non-spontaneous, favorable or unfavorable. Endothermic reactions absorb heat, while exothermic reactions release heat. Thermochemistry coalesces the concepts of thermodynamics with the concept of energy in the form of chemical bonds. The subject commonly includes calculations of such quantities as heat capacity, heat of combustion, heat of formation, enthalpy, entropy, and free energy. Thermochemistry is one part of the broader field of chemical thermodynamics, which deals with the exchange of all forms of energy between system and surroundings, including not only heat but also various forms of work, as well as the exchange of matter. When all forms of energy are considered, the concepts of exothermic and endothermic reactions are generalized to exergonic reactions and endergonic reactions. Thermochemistry rests on two generalizations. Stated in modern terms, they are as follows: [ 1 ] first, Lavoisier and Laplace's law (1780), which states that the energy change accompanying any transformation is equal and opposite to the energy change accompanying the reverse process; and second, Hess's law of constant heat summation (1840), which states that the energy change accompanying any transformation is the same whether the process occurs in one step or many. These statements preceded the first law of thermodynamics (1845) and helped in its formulation. Thermochemistry also involves the measurement of the latent heat of phase transitions. Joseph Black had already introduced the concept of latent heat in 1761, based on the observation that heating ice at its melting point did not raise the temperature but instead caused some ice to melt. [ 4 ] Gustav Kirchhoff showed in 1858 that the variation of the heat of reaction is given by the difference in heat capacity between products and reactants: dΔH/dT = ΔCp. Integration of this equation permits the evaluation of the heat of reaction at one temperature from measurements at another temperature. [ 5 ] [ 6 ] The measurement of heat changes is performed using calorimetry, usually an enclosed chamber within which the change to be examined occurs. The temperature of the chamber is monitored either using a thermometer or a thermocouple, and the temperature is plotted against time to give a graph from which fundamental quantities can be calculated. Modern calorimeters are frequently supplied with automatic devices to provide a quick read-out of information, one example being the differential scanning calorimeter. Several thermodynamic definitions are very useful in thermochemistry. A system is the specific portion of the universe that is being studied. Everything outside the system is considered the surroundings or environment. A system may be open (exchanging both matter and energy with its surroundings), closed (exchanging energy but not matter), or isolated (exchanging neither). A system undergoes a process when one or more of its properties changes. A process relates to the change of state. An isothermal (same-temperature) process occurs when the temperature of the system remains constant. An isobaric (same-pressure) process occurs when the pressure of the system remains constant. A process is adiabatic when no heat exchange occurs.
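As a worked illustration of Kirchhoff's relation dΔH/dT = ΔCp, the Python sketch below adjusts a heat of reaction from one temperature to another under the simplifying assumption (ours, for illustration) that ΔCp is constant over the interval; the numerical inputs are invented examples, not values from this article.

```python
# Kirchhoff's law: d(delta_H)/dT = delta_Cp.
# For constant delta_Cp the integral gives
#   delta_H(T2) = delta_H(T1) + delta_Cp * (T2 - T1).

def kirchhoff(delta_h_t1, delta_cp, t1, t2):
    """Heat of reaction at T2 from its value at T1 (constant delta_Cp).

    delta_h_t1 : kJ/mol at temperature t1 (K)
    delta_cp   : Cp(products) - Cp(reactants), kJ/(mol K)
    """
    return delta_h_t1 + delta_cp * (t2 - t1)

# Hypothetical reaction: delta_H = -92.0 kJ/mol at 298.15 K,
# delta_Cp = -0.030 kJ/(mol K); value at 400 K is about -95.1 kJ/mol.
print(kirchhoff(-92.0, -0.030, 298.15, 400.0))
```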
https://en.wikipedia.org/wiki/Thermochemistry
Thermochromic ink (also called thermochromatic ink) is a type of dye that changes color in response to a change in temperature. [ 1 ] [ 2 ] [ 3 ] It was first used in the 1970s in novelty toys like mood rings, but has found some practical uses in things such as thermometers, product packaging, and pens. [ 4 ] The ink has also found applications in the medical field, particularly in simulations used for medical training. Thermochromic ink can also turn transparent when heat is applied; an example of this type of ink can be found on the corners of an examination mark sheet to prove that the sheet has not been edited or photocopied. There are two main variants of thermochromic ink, one composed of leuco dyes and one composed of liquid crystals. For both types of ink, the chemicals need to be contained within capsules around 3 to 5 microns long. This protects the dyes and crystals from mixing with other chemicals that might affect the functionality of the ink. The leuco dye variant is typically composed of leuco dyes with additional chemicals to add different desired effects. It is the most commonly used type because it is easier to manufacture. It can be designed to react to changes in temperature ranging from −15 °C to 60 °C. Most common applications of the ink have activation temperatures at −10 °C (cold), 31 °C (body temperature), or 43 °C (warm). At lower temperatures, the ink appears to be a certain color, and once the temperature increases, the ink becomes either translucent or lightly colored, allowing hidden patterns to be seen. This gives the effect of a change in color, and the process can also be reversed by lowering the temperature again. [ 5 ] [ 6 ] [ 7 ] Liquid crystals can change from liquid to solid in response to a change in temperature. At lower temperatures, the crystals are mostly solid and hardly reflect any light, causing the ink to appear black. As the temperature gradually increases, the crystals become more spaced out, causing light to reflect differently and changing the color of the crystals. The temperatures at which these crystals change their properties can range from −30 °C to 90 °C. [ 5 ] On June 20, 2017, [ 8 ] the United States Postal Service released the first application of thermochromic ink to postage stamps in its Total Eclipse of the Sun Forever stamp [ 9 ] to commemorate the solar eclipse of August 21, 2017. When pressed with a finger, body heat turns the black circle in the center of the stamp into an image of the full moon. The stamp image is a photo of a total solar eclipse seen in Jalu, Libya, on March 29, 2006. The photo was taken by retired NASA astrophysicist Fred Espenak, aka "Mr. Eclipse". In medical training, thermochromic ink can be used to imitate human blood because it shares the same color-changing property. It is currently being tested in medical simulations involving extracorporeal membrane oxygenation (ECMO). In these procedures, a change in the color of blood between dark and light red indicates blood oxygenation or deoxygenation, i.e. the oxygen concentration within a person's blood sample. It is important to accurately identify this change in order to safely and correctly operate the ECMO machines. This has led to simulation-based training (SBT), which allows medical students to run simulations that mimic real ECMO machines before using them in serious situations.
By using thermochromic ink in these simulations, the color-changing effect can be realistically reproduced and observed without using real human blood or other costly methods. [ 10 ] [ 11 ] Artificial blood or animal blood is typically used in these simulations; however, there are some advantages to using thermochromic ink as an alternative: it can be reused for multiple simulations with minimal variance in the outcomes, and it is more cost-effective. There are limitations, as the ink does not share any other properties with blood, so its only practical use is to reproduce the change in color of blood. [ 10 ] Product packaging is an important aspect of maintaining the quality of consumer goods. Modern-day packaging is split into two categories: active packaging and smart packaging. Thermochromic ink has found use in smart packaging, the aspect of packaging that deals with monitoring the condition of the products. Since most consumer goods are affected by changes in temperature, using thermochromic ink as an indicator of those temperature changes allows consumers to recognize when the quality of a product has changed. It can also be used to tell consumers the right temperature at which to consume the product. [ 12 ] In 2006, the Pilot Corporation of Japan developed a pen with erasable ink based on thermochromic ink. It was composed of a solvent, a colorant, and a resin film-forming agent. At temperatures below 65 °C, the ink stayed in a colored state. Once the temperature rose above 65 °C, the ink began to melt and became colorless, creating the effect of erasable ink. The ink could be returned to its colored state by cooling it to below −10 °C. [ 13 ]
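The erase/restore behaviour just described is a wide hysteresis loop: the ink decolorizes above one threshold (65 °C) but only recolors below a much lower one (−10 °C). A minimal Python sketch of that logic follows; the class and method names are ours, and only the two thresholds come from the text above.

```python
# Hysteresis model of the erasable-ink color state described above:
# decolorize above 65 C, recolor only after cooling below -10 C.

class ThermochromicInk:
    ERASE_T = 65.0     # deg C: ink melts and turns colorless
    RESTORE_T = -10.0  # deg C: ink returns to its colored state

    def __init__(self):
        self.colored = True

    def update(self, temp_c):
        """Update the color state for a new temperature reading."""
        if temp_c > self.ERASE_T:
            self.colored = False
        elif temp_c < self.RESTORE_T:
            self.colored = True
        # between the two thresholds the previous state persists
        return self.colored

ink = ThermochromicInk()
for t in [20, 70, 20, -15, 20]:
    print(t, "colored" if ink.update(t) else "colorless")
# 20 colored / 70 colorless / 20 colorless / -15 colored / 20 colored
```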
https://en.wikipedia.org/wiki/Thermochromic_ink
Thermochromism is the property of substances to change color due to a change in temperature. A mood ring is an example of this property used in a consumer product, although thermochromism also has more practical uses, such as baby bottles, which change to a different color when cool enough to drink, or kettles, which change color when water is at or near boiling point. Thermochromism is one of several types of chromism. The two common approaches are based on liquid crystals and leuco dyes. Liquid crystals are used in precision applications, as their responses can be engineered to accurate temperatures, but their color range is limited by their principle of operation. Leuco dyes allow a wider range of colors to be used, but their response temperatures are more difficult to set with accuracy. Some liquid crystals are capable of displaying different colors at different temperatures. This change is dependent on selective reflection of certain wavelengths by the crystalline structure of the material, as it changes between the low-temperature crystalline phase, through the anisotropic chiral or twisted nematic phase, to the high-temperature isotropic liquid phase. Only the nematic mesophase has thermochromic properties; this restricts the effective temperature range of the material. The twisted nematic phase has the molecules oriented in layers with regularly changing orientation, which gives them periodic spacing. The light passing through the crystal undergoes Bragg diffraction on these layers, and the wavelength with the greatest constructive interference is reflected back, which is perceived as a spectral color. A change in the crystal temperature can result in a change of spacing between the layers and therefore in the reflected wavelength. The color of the thermochromic liquid crystal can therefore continuously range from non-reflective (black) through the spectral colors to black again, depending on the temperature. Typically, the high-temperature state will reflect blue-violet, while the low-temperature state will reflect red-orange. Since blue is a shorter wavelength than red, this indicates that the layer spacing is reduced by heating through the liquid-crystal state. Some such materials are cholesteryl nonanoate or cyanobiphenyls. Mixtures with a 3–5 °C temperature span, covering ranges from about 17–23 °C to about 37–40 °C, can be composed of varying proportions of cholesteryl oleyl carbonate, cholesteryl nonanoate, and cholesteryl benzoate. For example, the mass ratio of 65:25:10 yields a range of 17–23 °C, and 30:60:10 yields a range of 37–40 °C. [ 1 ] Liquid crystals used in dyes and inks often come microencapsulated, in the form of a suspension. Liquid crystals are used in applications where the color change has to be accurately defined. They find applications in thermometers for room, refrigerator, aquarium, and medical use, and in indicators of the level of propane in tanks. A popular application for thermochromic liquid crystals is mood rings. Liquid crystals are difficult to work with and require specialized printing equipment. The material itself is also typically more expensive than alternative technologies. High temperatures, ultraviolet radiation, and some chemicals and/or solvents have a negative impact on their lifespan. Thermochromic dyes are based on mixtures of leuco dyes with other suitable chemicals, displaying a color change (usually between the colorless leuco form and the colored form) that depends upon temperature.
The dyes are rarely applied on materials directly; they are usually in the form of microcapsules with the mixture sealed inside. An illustrative example is the Hypercolor fashion, where microcapsules with crystal violet lactone, a weak acid, and a dissociable salt dissolved in dodecanol are applied to the fabric. When the solvent is solid, the dye exists in its lactone leuco form, while when the solvent melts, the salt dissociates, the pH inside the microcapsule lowers, the dye becomes protonated, its lactone ring opens, and its absorption spectrum shifts drastically, so it becomes deeply violet. In this case the apparent thermochromism is in fact halochromism. The dyes most commonly used are spirolactones, fluorans, spiropyrans, and fulgides. The acids include bisphenol A, parabens, 1,2,3-triazole derivatives, and 4-hydroxycoumarin and act as proton donors, changing the dye molecule between its leuco form and its protonated colored form; stronger acids would make the change irreversible. Leuco dyes have a less accurate temperature response than liquid crystals. They are suitable for general indicators of approximate temperature ("too cool", "too hot", "about OK"), or for various novelty items. They are usually used in combination with some other pigment, producing a color change between the color of the base pigment and the color of the pigment combined with the color of the non-leuco form of the leuco dye. Organic leuco dyes are available for temperature ranges between about −5 °C (23 °F) and 60 °C (140 °F), in a wide range of colors. The color change usually happens in a 3 °C (5.4 °F) interval. Leuco dyes are used in applications where temperature response accuracy is not critical: e.g. novelties, bath toys, flying discs, and approximate temperature indicators for microwave-heated foods. Microencapsulation allows their use in a wide range of materials and products. The size of the microcapsules typically ranges between 3–5 μm (over 10 times larger than regular pigment particles), which requires some adjustments to printing and manufacturing processes. An application of leuco dyes is in the Duracell battery state indicators. A layer of a leuco dye is applied on a resistive strip to indicate its heating, thus gauging the amount of current the battery is able to supply. The strip is triangular, changing its resistance along its length, so the length of the segment heated above the threshold temperature is proportional to the amount of current flowing through it; the segment above the threshold temperature for the leuco dye then becomes colored. Exposure to ultraviolet radiation, solvents and high temperatures reduces the lifespan of leuco dyes. Temperatures above about 200–230 °C (392–446 °F) typically cause irreversible damage to leuco dyes; a time-limited exposure of some types to about 250 °C (482 °F) is allowed during manufacturing. Thermochromic paints use liquid crystal or leuco dye technology. After absorbing a certain amount of light or heat, the crystalline or molecular structure of the pigment reversibly changes in such a way that it absorbs and emits light at a different wavelength than at lower temperatures. Thermochromic paints are seen quite often as a coating on coffee mugs, whereby once hot coffee is poured into the mugs, the thermochromic paint absorbs the heat and becomes colored or transparent, therefore changing the appearance of the mug. These are known as magic mugs or heat-changing mugs.
Another common example is the use of leuco dye in spoons used in ice cream parlors and frozen yogurt shops. Once dipped into the cold desserts, part of the spoon appears to change color. Thermochromic papers are used for thermal printers. One example is paper impregnated with a solid mixture of a fluoran dye and octadecylphosphonic acid. This mixture is stable in the solid phase; however, when the octadecylphosphonic acid melts, the dye undergoes a chemical reaction in the liquid phase and assumes its protonated colored form. This state is then conserved when the matrix solidifies again, if the cooling process is fast enough. As the leuco form is more stable at lower temperatures and in the solid phase, records on thermochromic papers slowly fade out over years. Thermochromism can appear in thermoplastics, duroplastics, gels, or any kind of coating. The polymer itself, an embedded thermochromic additive, or a highly ordered structure built by the interaction of the polymer with an incorporated non-thermochromic additive can be the origin of the thermochromic effect. Furthermore, from the physical point of view, the origin of the thermochromic effect can be multifarious: it can arise from changes in light reflection, absorption, and/or scattering properties with temperature. [ 2 ] The application of thermochromic polymers for adaptive solar protection is of great interest. [ 3 ] For instance, polymer films with tunable thermochromic nanoparticles, reflective or transparent to sunlight depending on the temperature, have been used to create windows that adapt to the weather. [ 4 ] A function-by-design strategy, [ 5 ] applied for example to the development of non-toxic thermochromic polymers, has come into focus in the last decade. [ 6 ] Thermochromic inks or dyes are temperature-sensitive compounds, developed in the 1970s, that temporarily change color with exposure to heat. They come in two forms, liquid crystals and leuco dyes. Leuco dyes are easier to work with and allow for a greater range of applications. These applications include flat thermometers, battery testers, clothing, and the indicators on bottles of maple syrup that change color when the syrup is warm. The thermometers are often used on the exterior of aquariums, or to obtain a body temperature via the forehead. Coors Light uses thermochromic ink on its cans, changing from white to blue to indicate the can is cold. Virtually all inorganic compounds are thermochromic to some extent. Most examples, however, involve only subtle changes in color. For example, titanium dioxide, zinc sulfide and zinc oxide are white at room temperature but change to yellow when heated. Similarly, indium(III) oxide is yellow and darkens to yellow-brown when heated. Lead(II) oxide exhibits a similar color change on heating. The color change is linked to changes in the electronic properties (energy levels, populations) of these materials. More dramatic examples of thermochromism are found in materials that undergo phase transitions or exhibit charge-transfer bands near the visible region, including a number of thermochromic solid semiconductor materials. Many tetraorgano-diarsine, -distibine, and -dibismuthine compounds are strongly thermochromic. The color changes arise because they form van der Waals chains when cold, and the intermolecular spacing is sufficiently short for orbital overlap. The energy levels of the resulting bands then depend on the intermolecular distance, which varies with temperature. [ 14 ]
Some minerals are thermochromic as well; for example, some chromium-rich pyropes, normally reddish-purplish, become green when heated to about 80 °C. [ 15 ] Some materials change color irreversibly. These can be used, for example, for laser marking of materials. [ 16 ] Thermochromic materials, in the form of coatings, can be applied in buildings as a technique of passive energy retrofit. [ 18 ] Thermochromic coatings are characterized as active, dynamic and adaptive materials that can adjust their optical properties according to external stimuli, usually temperature. Thermochromic coatings modulate their reflectance as a function of their temperature, making them an appropriate solution for combating cooling loads without diminishing the building's thermal performance during the winter period. [ 18 ] Thermochromic materials are categorized into two subgroups, dye-based and non-dye-based thermochromic materials. [ 19 ] However, the only class of dye-based thermochromic materials that is widely commercially available [ 20 ] and has been applied and tested in buildings is the leuco dyes. [ 21 ] [ 22 ]
https://en.wikipedia.org/wiki/Thermochromism
A thermocouple, also known as a "thermoelectrical thermometer", is an electrical device consisting of two dissimilar electrical conductors forming an electrical junction. A thermocouple produces a temperature-dependent voltage as a result of the Seebeck effect, and this voltage can be interpreted to measure temperature. Thermocouples are widely used as temperature sensors. [ 1 ] Commercial thermocouples are inexpensive, [ 2 ] interchangeable, supplied with standard connectors, and able to measure a wide range of temperatures. In contrast to most other methods of temperature measurement, thermocouples are self-powered and require no external form of excitation. The main limitation with thermocouples is accuracy; system errors of less than one degree Celsius (°C) can be difficult to achieve. [ 3 ] Thermocouples are widely used in science and industry. Applications include temperature measurement for kilns, gas turbine exhaust, diesel engines, and other industrial processes. Thermocouples are also used in homes, offices and businesses as the temperature sensors in thermostats, and also as flame sensors in safety devices for gas-powered appliances. In 1821, the German physicist Thomas Johann Seebeck discovered that a magnetic needle held near a circuit made up of two dissimilar metals was deflected when one of the dissimilar-metal junctions was heated. At the time, Seebeck referred to this phenomenon as thermo-magnetism. The magnetic field he observed was later shown to be due to thermo-electric current. In practical use, the voltage generated at a single junction of two different types of wire is what is of interest, as it can be used to measure temperatures that are very high or very low. The magnitude of the voltage depends on the types of wire being used. Generally, the voltage is in the microvolt range and care must be taken to obtain a usable measurement. Although very little current flows, power can be generated by a single thermocouple junction. Power generation using multiple thermocouples, as in a thermopile, is common. The standard configuration of a thermocouple is shown in the figure. The dissimilar conductors contact at the measuring (aka hot) junction and at the reference (aka cold) junction. The thermocouple is connected to the electrical system at its reference junction. The figure shows the measuring junction on the left, the reference junction in the middle, and represents the rest of the electrical system as a voltage meter on the right. The temperature T_sense is obtained via a characteristic function E(T) for the type of thermocouple, which requires two inputs: the measured voltage V and the reference junction temperature T_ref. The solution of the equation E(T_sense) = V + E(T_ref) yields T_sense. Sometimes these details are hidden inside a device that packages the reference junction block (with its T_ref thermometer), voltmeter, and equation solver. The Seebeck effect refers to the development of an electromotive force across two points of an electrically conducting material when there is a temperature difference between those two points. Under open-circuit conditions, where there is no internal current flow, the gradient of voltage (∇V) is directly proportional to the gradient in temperature (∇T):

−∇V = S(T) ∇T,

where S(T) is a temperature-dependent material property known as the Seebeck coefficient.
The standard measurement configuration shown in the figure involves four temperature regions and thus four voltage contributions. The first and fourth contributions cancel out exactly, because these regions involve the same temperature change and an identical material. As a result, T_meter does not influence the measured voltage. The second and third contributions do not cancel, as they involve different materials. The measured voltage turns out to be

V = ∫ from T_ref to T_sense of (S+(T) − S−(T)) dT,

where S+ and S− are the Seebeck coefficients of the conductors attached to the positive and negative terminals of the voltmeter, respectively (chromel and alumel in the figure). The thermocouple's behaviour is captured by a characteristic function E(T), which needs only to be consulted at two arguments, the sensing junction temperature T_sense and the reference junction temperature T_ref. In terms of the Seebeck coefficients, the characteristic function is defined by

E(T) = ∫ (S+(T′) − S−(T′)) dT′,

so that V = E(T_sense) − E(T_ref). The constant of integration in this indefinite integral has no significance, but is conventionally chosen such that E(0 °C) = 0. Thermocouple manufacturers and metrology standards organizations such as NIST provide tables of the function E(T) that have been measured and interpolated over a range of temperatures, for particular thermocouple types (see External links section for access to these tables). To obtain the desired measurement of T_sense, it is not sufficient to just measure V. The temperature at the reference junction, T_ref, must also be known. Two strategies are often used here: either the reference junction is held at a known, fixed temperature (classically an ice bath at 0 °C), or its temperature is measured with a separate thermometer (cold junction compensation). In both cases the value V + E(T_ref) is calculated, then the function E(T) is searched for a matching value. The argument where this match occurs is the value of T_sense:

T_sense = E⁻¹(V + E(T_ref)).

Thermocouples ideally should be very simple measurement devices, with each type being characterized by a precise E(T) curve, independent of any other details. In reality, thermocouples are affected by issues such as alloy manufacturing uncertainties, aging effects, and circuit design mistakes/misunderstandings. A common error in thermocouple construction is related to cold junction compensation. If an error is made in the estimation of T_ref, an error will appear in the temperature measurement. For the simplest measurements, thermocouple wires are connected to copper far away from the hot or cold point whose temperature is measured; this reference junction is then assumed to be at room temperature, but that temperature can vary. [ 4 ] Because of the nonlinearity in the thermocouple voltage curve, the errors in T_ref and T_sense are generally unequal. Some thermocouples, such as Type B, have a relatively flat voltage curve near room temperature, meaning that a large uncertainty in a room-temperature T_ref translates to only a small error in T_sense.
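A minimal numerical sketch of this lookup procedure is given below in Python. The coarse table is made up of rough type K values near room temperature, quoted only to illustrate the method; real work should use the full NIST ITS-90 tables, and linear interpolation here stands in for a proper table interpolation.

```python
import numpy as np

# Cold-junction compensation sketch:
# solve E(T_sense) = V + E(T_ref) using a tabulated E(T).

T_TAB = np.array([-50.0, 0.0, 50.0, 100.0, 200.0, 300.0])     # deg C
E_TAB = np.array([-1.889, 0.0, 2.023, 4.096, 8.138, 12.209])  # mV (approx. type K)

def E(t_celsius):
    """Characteristic function E(T), linearly interpolated from the table."""
    return np.interp(t_celsius, T_TAB, E_TAB)

def t_sense(v_measured_mv, t_ref_celsius):
    """Invert E: find T_sense such that E(T_sense) = V + E(T_ref)."""
    target = v_measured_mv + E(t_ref_celsius)
    return np.interp(target, E_TAB, T_TAB)  # inverse lookup (E is monotonic)

# Example: 4.1 mV measured while the reference block sits at 22 deg C.
print(f"T_sense = {t_sense(4.1, 22.0):.1f} deg C")
```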
Junctions should be made in a reliable manner, but there are many possible approaches to accomplish this. For low temperatures, junctions can be brazed or soldered; however, it may be difficult to find a suitable flux, and this may not be suitable at the sensing junction due to the solder's low melting point. Reference and extension junctions are therefore usually made with screw terminal blocks. For high temperatures, the most common approach is the spot weld or crimp using a durable material. [ 5 ] One common myth regarding thermocouples is that junctions must be made cleanly without involving a third metal, to avoid unwanted added EMFs. [ 6 ] This may result from another common misunderstanding that the voltage is generated at the junction. [ 7 ] In fact, the junctions should in principle have uniform internal temperature; therefore, no voltage is generated at the junction. The voltage is generated in the thermal gradient, along the wire. A thermocouple produces small signals, often microvolts in magnitude. Precise measurements of this signal require an amplifier with low input offset voltage and with care taken to avoid thermal EMFs from self-heating within the voltmeter itself. If the thermocouple wire has a high resistance for some reason (poor contact at junctions, or very thin wires used for fast thermal response), the measuring instrument should have high input impedance to prevent an offset in the measured voltage. A useful feature in thermocouple instrumentation simultaneously measures resistance and detects faulty connections in the wiring or at thermocouple junctions. While a thermocouple wire type is often described by its chemical composition, the actual aim is to produce a pair of wires that follow a standardized E(T) curve. Impurities affect each batch of metal differently, producing variable Seebeck coefficients. To match the standard behaviour, thermocouple wire manufacturers will deliberately mix in additional impurities to "dope" the alloy, compensating for uncontrolled variations in source material. [ 5 ] As a result, there are standard and specialized grades of thermocouple wire, depending on the level of precision demanded in the thermocouple behaviour. Precision grades may only be available in matched pairs, where one wire is modified to compensate for deficiencies in the other wire. A special case of thermocouple wire is known as "extension grade", designed to carry the thermoelectric circuit over a longer distance. Extension wires follow the stated E(T) curve but, for various reasons, they are not designed to be used in extreme environments, and so they cannot be used at the sensing junction in some applications. For example, an extension wire may be in a different form, such as highly flexible with stranded construction and plastic insulation, or be part of a multi-wire cable for carrying many thermocouple circuits. With expensive noble metal thermocouples, the extension wires may even be made of a completely different, cheaper material that mimics the standard type over a reduced temperature range. [ 5 ] Thermocouples are often used at high temperatures and in reactive furnace atmospheres. In this case, the practical lifetime is limited by thermocouple aging. The thermoelectric coefficients of the wires in a thermocouple that is used to measure very high temperatures may change with time, and the measurement voltage accordingly drops.
The simple relationship between the temperature difference of the junctions and the measurement voltage is only correct if each wire is homogeneous (uniform in composition). As thermocouples age in a process, their conductors can lose homogeneity due to chemical and metallurgical changes caused by extreme or prolonged exposure to high temperatures. If the aged section of the thermocouple circuit is exposed to a temperature gradient, the measured voltage will differ, resulting in error. Aged thermocouples are only partly modified; for example, being unaffected in the parts outside the furnace. For this reason, aged thermocouples cannot be taken out of their installed location and recalibrated in a bath or test furnace to determine error. This also explains why error can sometimes be observed when an aged thermocouple is pulled partly out of a furnace: as the sensor is pulled back, aged sections may see exposure to increased temperature gradients from hot to cold as the aged section now passes through the cooler refractory area, contributing significant error to the measurement. Likewise, an aged thermocouple that is pushed deeper into the furnace might sometimes provide a more accurate reading if being pushed further into the furnace causes the temperature gradient to occur only in a fresh section. [ 8 ] Certain combinations of alloys have become popular as industry standards. Selection of the combination is driven by cost, availability, convenience, melting point, chemical properties, stability, and output. Different types are best suited for different applications. They are usually selected on the basis of the temperature range and sensitivity needed. Thermocouples with low sensitivities (B, R, and S types) have correspondingly lower resolutions. Other selection criteria include the chemical inertness of the thermocouple material and whether it is magnetic or not. Standard thermocouple types are listed below with the positive electrode (assuming T_sense > T_ref) first, followed by the negative electrode. Type E (chromel–constantan) has a high output (68 μV/°C), which makes it well suited to cryogenic use. Additionally, it is non-magnetic. Its wide range is −270 °C to +740 °C and its narrow range is −110 °C to +140 °C. Type J (iron–constantan) has a more restricted range (−40 °C to +750 °C) than type K but a higher sensitivity of about 50 μV/°C. [ 2 ] The Curie point of the iron (770 °C) [ 9 ] causes a smooth change in the characteristic, which determines the upper temperature limit. Note that the European/German Type L is a variant of type J, with a different specification for the EMF output (reference DIN 43712:1985-01 [ 10 ]). The positive wire is made of hard iron, while the negative wire consists of softer copper-nickel. [ 11 ] Due to its iron content, the J-type is slightly heavier and the positive wire is magnetic. [ 12 ] It is highly vulnerable to corrosion in reducing atmospheres, which can lead to significant degradation of the thermocouple's performance. [ 13 ] Type K (chromel–alumel) is the most common general-purpose thermocouple, with a sensitivity of approximately 41 μV/°C. [ 14 ] It is inexpensive, and a wide variety of probes are available in its −200 °C to +1350 °C (−330 °F to +2460 °F) range. Type K was specified at a time when metallurgy was less advanced than it is today, and consequently characteristics may vary considerably between samples.
One of the constituent metals, nickel, is magnetic; a characteristic of thermocouples made with magnetic material is that they undergo a deviation in output when the material reaches its Curie point, which occurs for type K thermocouples at around 185 °C. [ citation needed ] They operate very well in oxidizing atmospheres. If, however, a mostly reducing atmosphere (such as hydrogen with a small amount of oxygen) comes into contact with the wires, the chromium in the chromel alloy oxidizes. This reduces the emf output, and the thermocouple reads low. This phenomenon is known as green rot, due to the color of the affected alloy. Although not always distinctively green, the chromel wire will develop a mottled silvery skin and become magnetic. An easy way to check for this problem is to see whether the two wires are magnetic (normally, chromel is non-magnetic). Hydrogen in the atmosphere is the usual cause of green rot. At high temperatures, it can diffuse through solid metals or an intact metal thermowell. Even a sheath of magnesium oxide insulating the thermocouple will not keep the hydrogen out. [ 15 ] Green rot does not occur in atmospheres sufficiently rich in oxygen, or oxygen-free. A sealed thermowell can be filled with inert gas, or an oxygen scavenger (e.g. a sacrificial titanium wire) can be added. Alternatively, additional oxygen can be introduced into the thermowell. Another option is using a different thermocouple type for the low-oxygen atmospheres where green rot can occur; a type N thermocouple is a suitable alternative. [ 16 ] [ unreliable source? ] Type M (82%Ni/18%Mo–99.2%Ni/0.8%Co, by weight) is used in vacuum furnaces for the same reasons as type C (described below). Its upper temperature is limited to 1400 °C. It is less commonly used than other types. Type N (Nicrosil–Nisil) thermocouples are suitable for use between −270 °C and +1300 °C, owing to their stability and oxidation resistance. Sensitivity is about 39 μV/°C at 900 °C, slightly lower than type K. Designed at the Defence Science and Technology Organisation (DSTO) of Australia by Noel A. Burley, type-N thermocouples overcome the three principal characteristic types and causes of thermoelectric instability in the standard base-metal thermoelement materials. [ 17 ] The Nicrosil and Nisil thermocouple alloys show greatly enhanced thermoelectric stability relative to the other standard base-metal thermocouple alloys because their compositions substantially reduce the thermoelectric instabilities described above. This is achieved primarily by increasing component solute concentrations (chromium and silicon) in a base of nickel above those required to cause a transition from internal to external modes of oxidation, and by selecting solutes (silicon and magnesium) that preferentially oxidize to form a diffusion barrier, and hence oxidation-inhibiting films. [ 18 ] Type N thermocouples are a suitable alternative to type K for low-oxygen conditions where type K is prone to green rot. They are suitable for use in vacuum, inert atmospheres, oxidizing atmospheres, or dry reducing atmospheres. They do not tolerate the presence of sulfur. [ 19 ] Type T (copper–constantan) thermocouples are suited for measurements in the −200 to 350 °C range. They are often used for differential measurements, since only copper wire touches the probes. Since both conductors are non-magnetic, there is no Curie point and thus no abrupt change in characteristics. Type-T thermocouples have a sensitivity of about 43 μV/°C.
Note that copper has a much higher thermal conductivity than the alloys generally used in thermocouple constructions, so it is necessary to exercise extra care when thermally anchoring type-T thermocouples. A similar composition is found in the obsolete Type U in the German specification DIN 43712:1985-01. [ 10 ] Types B, R, and S thermocouples use platinum or a platinum/rhodium alloy for each conductor. These are among the most stable thermocouples, but have lower sensitivity than other types, approximately 10 μV/°C. Type B, R, and S thermocouples are usually used only for high-temperature measurements due to their high cost and low sensitivity. For type R and S thermocouples, HTX platinum wire can be used in place of the pure platinum leg to strengthen the thermocouple and prevent failures from grain growth that can occur in high-temperature and harsh conditions. Type B (70%Pt/30%Rh–94%Pt/6%Rh, by weight) thermocouples are suited for use at up to 1800 °C. Type-B thermocouples produce the same output at 0 °C and 42 °C, limiting their use below about 50 °C. The emf function has a minimum around 21 °C (emf = −2.584972 μV at 21.020262 °C), meaning that cold-junction compensation is easily performed, since the compensation voltage is essentially constant for a reference at typical room temperatures. [ 20 ] Type R (87%Pt/13%Rh–Pt, by weight) thermocouples are used from 0 to 1600 °C. Type R thermocouples are quite stable and capable of a long operating life when used in clean, favorable conditions. When used above 1100 °C (2000 °F), these thermocouples must be protected from exposure to metallic and non-metallic vapors. Type R is not suitable for direct insertion into metallic protecting tubes. Long-term high-temperature exposure causes grain growth, which can lead to mechanical failure, and a negative calibration drift caused by rhodium diffusion into the pure platinum leg as well as by rhodium volatilization. This type has the same uses as type S, but is not interchangeable with it. Type S (90%Pt/10%Rh–Pt, by weight) thermocouples, similar to type R, are used up to 1600 °C. Before the introduction of the International Temperature Scale of 1990 (ITS-90), precision type-S thermocouples were used as the practical standard thermometers for the range of 630 °C to 1064 °C, based on an interpolation between the freezing points of antimony, silver, and gold. Starting with ITS-90, platinum resistance thermometers have taken over this range as standard thermometers. [ 21 ] Tungsten/rhenium-alloy thermocouples (types C, D, and G, described below) are well suited for measuring extremely high temperatures. Typical uses are hydrogen and inert atmospheres, as well as vacuum furnaces. They are not used in oxidizing environments at high temperatures because of embrittlement. [ 22 ] A typical range is 0 to 2315 °C, which can be extended to 2760 °C in inert atmosphere and to 3000 °C for brief measurements. [ 23 ] Pure tungsten at high temperatures undergoes recrystallization and becomes brittle. Therefore, types C and D are preferred over type G in some applications. In the presence of water vapor at high temperature, tungsten reacts to form tungsten(VI) oxide, which volatilizes away, and hydrogen. Hydrogen then reacts with tungsten oxide, after which water is formed again. Such a "water cycle" can lead to erosion of the thermocouple and eventual failure. In high-temperature vacuum applications, it is therefore desirable to avoid the presence of traces of water. [ 24 ]
An alternative to tungsten/rhenium is tungsten/molybdenum, but the voltage–temperature response is weaker and has a minimum at around 1000 K. The thermocouple temperature is also limited by the other materials used. For example, beryllium oxide, a popular material for high-temperature applications, tends to gain conductivity with temperature; one particular sensor configuration had its insulation resistance drop from a megaohm at 1000 K to 200 ohms at 2200 K. At high temperatures, the materials undergo chemical reactions. At 2700 K beryllium oxide reacts slightly with tungsten, tungsten-rhenium alloy, and tantalum; at 2600 K molybdenum reacts with BeO, while tungsten does not. BeO begins melting at about 2820 K, magnesium oxide at about 3020 K. [ 25 ] Type C (95%W/5%Re–74%W/26%Re, by weight): [ 22 ] the maximum temperature measured by a type-C thermocouple is 2329 °C. Type D (97%W/3%Re–75%W/25%Re, by weight). [ 22 ] Type G (W–74%W/26%Re, by weight). [ 22 ] In chromel–gold/iron-alloy thermocouples, the negative wire is gold with a small fraction (0.03–0.15 atom percent) of iron. The impure gold wire gives the thermocouple a high sensitivity at low temperatures (compared to other thermocouples at that temperature), whereas the chromel wire maintains the sensitivity near room temperature. It can be used for cryogenic applications (1.2–300 K and even up to 600 K). Both the sensitivity and the temperature range depend on the iron concentration. The sensitivity is typically around 15 μV/K at low temperatures, and the lowest usable temperature varies between 1.2 and 4.2 K. Type P (55%Pd/31%Pt/14%Au–65%Au/35%Pd, by weight) thermocouples give a thermoelectric voltage that mimics type K over the range 500 °C to 1400 °C; however, they are constructed purely of noble metals and so show enhanced corrosion resistance. This combination is also known as Platinel II. [ 26 ] Thermocouples of platinum/molybdenum alloy (95%Pt/5%Mo–99.9%Pt/0.1%Mo, by weight) are sometimes used in nuclear reactors, since they show low drift from nuclear transmutation induced by neutron irradiation, compared to the platinum/rhodium-alloy types. [ 27 ] The use of two wires of iridium/rhodium alloys can provide a thermocouple that can be used up to about 2000 °C in inert atmospheres. [ 27 ] Thermocouples made from two different, high-purity noble metals can show high accuracy even when uncalibrated, as well as low levels of drift. Two combinations in use are gold–platinum and platinum–palladium. [ 28 ] Their main limitations are the low melting points of the metals involved (1064 °C for gold and 1555 °C for palladium). These thermocouples tend to be more accurate than type S, and due to their economy and simplicity are even regarded as competitive alternatives to the platinum resistance thermometers that are normally used as standard thermometers. [ 29 ] HTIR-TC offers a breakthrough in measuring high-temperature processes. It is durable and reliable at high temperatures, up to at least 1700 °C; resistant to irradiation; moderately priced; available in a variety of configurations, adaptable to each application; and easily installed. Originally developed for use in nuclear test reactors, HTIR-TC may enhance the safety of operations in future reactors. This thermocouple was developed by researchers at the Idaho National Laboratory (INL). [ 30 ] [ 31 ] The table below describes properties of several different thermocouple types.
Within the tolerance columns, T represents the temperature of the hot junction, in degrees Celsius. For example, a thermocouple with a tolerance of ±0.0025×T would have a tolerance of ±2.5 °C at 1000 °C. Each cell in the Color Code columns depicts the end of a thermocouple cable, showing the jacket color and the color of the individual leads. The background color represents the color of the connector body. The wires that make up the thermocouple must be insulated from each other everywhere except at the sensing junction. Any additional electrical contact between the wires, or contact of a wire with other conductive objects, can modify the voltage and give a false reading of temperature. Plastics are suitable insulators for low-temperature parts of a thermocouple, whereas ceramic insulation can be used up to around 1000 °C. Other concerns (abrasion and chemical resistance) also affect the suitability of materials. When wire insulation disintegrates, it can result in an unintended electrical contact at a different location from the desired sensing point. If such a damaged thermocouple is used in the closed-loop control of a thermostat or other temperature controller, this can lead to a runaway overheating event and possibly severe damage, as the false temperature reading will typically be lower than the sensing junction temperature. Failed insulation will also typically outgas, which can lead to process contamination. For parts of thermocouples used at very high temperatures or in contamination-sensitive applications, the only suitable insulation may be vacuum or inert gas; the mechanical rigidity of the thermocouple wires is used to keep them separated. Temperature ratings for insulation may vary based on the overall construction of the thermocouple cable. Note: T300 is a new high-temperature material that was recently approved by UL for 300 °C operating temperatures. Thermocouples are suitable for measuring over a large temperature range, from −270 up to 3000 °C (for a short time, in inert atmosphere). [ 23 ] Applications include temperature measurement for kilns, gas turbine exhaust, diesel engines, other industrial processes and fog machines. They are less suitable for applications where smaller temperature differences need to be measured with high accuracy, for example the range 0–100 °C with 0.1 °C accuracy. For such applications, thermistors, silicon bandgap temperature sensors and resistance thermometers are more suitable. Type B, S, R and K thermocouples are used extensively in the steel and iron industries to monitor temperatures and chemistry throughout the steel-making process. Disposable, immersible type S thermocouples are regularly used in the electric arc furnace process to accurately measure the temperature of steel before tapping. The cooling curve of a small steel sample can be analyzed and used to estimate the carbon content of molten steel. Many gas-fed heating appliances such as ovens and water heaters make use of a pilot flame to ignite the main gas burner when required. If the pilot flame goes out, unburned gas may be released, which is an explosion risk and a health hazard. To prevent this, some appliances use a thermocouple in a fail-safe circuit to sense when the pilot light is burning. The tip of the thermocouple is placed in the pilot flame, generating a voltage which operates the supply valve that feeds gas to the pilot. So long as the pilot flame remains lit, the thermocouple remains hot, and the pilot gas valve is held open.
If the pilot light goes out, the thermocouple temperature falls, causing the voltage across the thermocouple to drop and the valve to close. Where the probe can be easily placed above the flame, a rectifying sensor is often used instead. With part-ceramic construction, these may also be known as flame rods, flame sensors or flame detection electrodes. Some combined main burner and pilot gas valves (mainly by Honeywell) reduce the power demand to within the range of a single universal thermocouple heated by a pilot (25 mV open circuit, falling by half with the coil connected to a 10–12 mV, 0.2–0.25 A source, typically) by sizing the coil to be able to hold the valve open against a light spring, but only after the initial turning-on force is provided by the user pressing and holding a knob to compress the spring during lighting of the pilot. These systems are identifiable by the "press and hold for x minutes" in the pilot lighting instructions. (The holding current requirement of such a valve is much less than a bigger solenoid designed for pulling the valve in from a closed position would require.) Special test sets are made to confirm the valve let-go and holding currents, because an ordinary milliammeter cannot be used, as it introduces more resistance than the gas valve coil. Apart from testing the open-circuit voltage of the thermocouple and the near-short-circuit DC continuity through the thermocouple gas valve coil, the easiest non-specialist test is substitution of a known good gas valve. Some systems, known as millivolt control systems, extend the thermocouple concept to both open and close the main gas valve as well. Not only does the voltage created by the pilot thermocouple activate the pilot gas valve, it is also routed through a thermostat to power the main gas valve as well. Here, a larger voltage is needed than in the pilot flame safety system described above, and a thermopile is used rather than a single thermocouple. Such a system requires no external source of electricity for its operation and thus can operate during a power failure, provided that all the other related system components allow for this. This excludes common forced-air furnaces, because external electrical power is required to operate the blower motor, but this feature is especially useful for un-powered convection heaters. A similar gas shut-off safety mechanism using a thermocouple is sometimes employed to ensure that the main burner ignites within a certain time period, shutting off the main burner gas supply valve should that not happen. Out of concern about energy wasted by the standing pilot flame, designers of many newer appliances have switched to electronically controlled pilot-less ignition, also called intermittent ignition. With no standing pilot flame, there is no risk of gas buildup should the flame go out, so these appliances do not need thermocouple-based pilot safety switches. As these designs lose the benefit of operation without a continuous source of electricity, standing pilots are still used in some appliances. The exception is later-model instantaneous (aka "tankless") water heaters that use the flow of water to generate the current required to ignite the gas burner; these designs also use a thermocouple as a safety cut-off device in the event the gas fails to ignite, or if the flame is extinguished. Thermopiles are used for measuring the intensity of incident radiation, typically visible or infrared light, which heats the hot junctions, while the cold junctions are on a heat sink.
It is possible to measure radiative intensities of only a few μW/cm 2 with commercially available thermopile sensors. For example, some laser power meters are based on such sensors; these are specifically known as thermopile laser sensors . The principle of operation of a thermopile sensor is distinct from that of a bolometer , as the latter relies on a change in resistance. Thermocouples can generally be used in the testing of prototype electrical and mechanical apparatus. For example, switchgear under test for its current carrying capacity may have thermocouples installed and monitored during a heat run test, to confirm that the temperature rise at rated current does not exceed designed limits. A thermocouple can produce current to drive some processes directly, without the need for extra circuitry and power sources. For example, the power from a thermocouple can activate a valve when a temperature difference arises. The electrical energy generated by a thermocouple is converted from the heat which must be supplied to the hot side to maintain the electric potential. A continuous transfer of heat is necessary because the current flowing through the thermocouple tends to cause the hot side to cool down and the cold side to heat up (the Peltier effect ). Thermocouples can be connected in series to form a thermopile , where all the hot junctions are exposed to a higher temperature and all the cold junctions to a lower temperature. The output is the sum of the voltages across the individual junctions, giving larger voltage and power output. In a radioisotope thermoelectric generator , the heat from the radioactive decay of transuranic elements is used to power spacecraft on missions too far from the Sun to use solar power. Thermopiles heated by kerosene lamps were used to run batteryless radio receivers in isolated areas. [ 34 ] There are commercially produced lanterns that use the heat from a candle to run several light-emitting diodes, and thermoelectrically powered fans to improve air circulation and heat distribution in wood stoves . Chemical production and petroleum refineries will usually employ computers for logging and for limit testing the many temperatures associated with a process, typically numbering in the hundreds. For such cases, a number of thermocouple leads will be brought to a common reference block (a large block of copper) containing the second thermocouple of each circuit. The temperature of the block is in turn measured by a thermistor . Simple computations are used to determine the temperature at each measured location. A thermocouple can be used as a vacuum gauge over the range of approximately 0.001 to 1 torr absolute pressure. In this pressure range, the mean free path of the gas is comparable to the dimensions of the vacuum chamber , and the flow regime is neither purely viscous nor purely molecular . [ 35 ] In this configuration, the thermocouple junction is attached to the centre of a short heating wire, which is usually energised by a constant current of about 5 mA, and the heat is removed at a rate related to the thermal conductivity of the gas. The temperature detected at the thermocouple junction depends on the thermal conductivity of the surrounding gas, which depends on the pressure of the gas. The potential difference measured by a thermocouple is proportional to the square of pressure over the low- to medium-vacuum range. 
At higher (viscous flow) and lower (molecular flow) pressures, the thermal conductivity of air or any other gas is essentially independent of pressure. The thermocouple was first used as a vacuum gauge by Voege in 1906. [ 36 ] The mathematical model for the thermocouple as a vacuum gauge is quite complicated, as explained in detail by Van Atta, [ 37 ] but can be simplified to: where P is the gas pressure, B is a constant that depends on the thermocouple temperature, the gas composition and the vacuum-chamber geometry, V 0 is the thermocouple voltage at zero pressure (absolute), and V is the voltage indicated by the thermocouple. The alternative is the Pirani gauge , which operates in a similar way, over approximately the same pressure range, but is only a 2-terminal device, sensing the change in resistance with temperature of a thin electrically heated wire, rather than using a thermocouple.
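The reference-block arrangement described above amounts to a small calculation per channel: the thermistor reports the block (cold-junction) temperature, and the channel voltage is converted back to a hot-junction temperature. The Python sketch below illustrates that arithmetic under a deliberately crude assumption, a type-K thermocouple linearized at about 41 µV/°C; real loggers use the NIST ITS-90 polynomial tables instead, and all names and values here are hypothetical.

```python
# Hedged sketch of cold-junction compensation for one logging channel.
# Assumption: type-K behaviour linearized as ~41 uV/K near room temperature;
# production systems use the NIST ITS-90 polynomial tables instead.

SEEBECK_UV_PER_K = 41.0  # approximate type-K sensitivity (assumed constant)

def emf_uV(t_hot_C, t_ref_C):
    """EMF of an idealized linear thermocouple between hot and reference junctions."""
    return SEEBECK_UV_PER_K * (t_hot_C - t_ref_C)

def measured_temperature_C(v_measured_uV, t_block_C):
    """Recover the hot-junction temperature from the channel EMF plus the
    reference-block temperature reported by the thermistor."""
    return t_block_C + v_measured_uV / SEEBECK_UV_PER_K

# Example: block thermistor reads 24.0 C, channel voltage reads 1025 uV.
print(f"hot junction ~ {measured_temperature_C(1025.0, 24.0):.1f} C")  # ~49.0 C
```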
https://en.wikipedia.org/wiki/Thermocouple
The Thermodesulfobacteriota , or Desulfobacterota , [ 4 ] are a phylum of anaerobic Gram-negative bacteria . Many representatives are sulfate-reducing bacteria , [ 5 ] others can grow by disproportionation of various sulfur species, [ 6 ] reduction of iron, [ 7 ] or even use external surfaces as electron acceptors ( exoelectrogens ). [ 8 ] They have highly variable morphology : vibrios, rods, cocci, [ 4 ] as well as filamentous cable bacteria . [ 9 ] Individual members of Desulfobacterota are also studied for their bacterial nanowires [ 10 ] or syntrophic relationships . [ 11 ] The bacterial phylum Desulfobacterota was created by merging: 1) the well-established class Thermodesulfobacteria , 2) the proposed phylum Dadabacteria, and 3) various taxa separated from the abandoned non-monophyletic class "Deltaproteobacteria" alongside three other phyla: Myxococcota , Bdellovibrionota , and SAR324. [ 4 ] In contrast to their close relatives, the aerobic phyla Myxococcota and Bdellovibrionota , Desulfobacterota are predominantly anaerobic . [ 4 ] They likely retained their anaerobic lifestyle since before the Great Oxidation Event . [ 13 ] Three closely related classes within Desulfobacterota: Thermodesulfobacteria , Dissulfuribacteria , and Desulfofervidia , [ 11 ] as well as the more distant Deferrisomatia , are exclusively thermophilic , while most members of other classes are mesophiles [ 4 ] or even psychrophiles . [ 14 ] [ 15 ] Sulfate-reducing bacteria (SRB) utilize sulfate as a terminal electron acceptor in a respiratory-type metabolism, coupled to the oxidation of organic compounds or hydrogen . By reducing sulfate, many Desulfobacterota species substantially contribute to the sulfur cycle . [ 4 ] Microbial sulfur disproportionation (MSD) is a poorly known type of energy metabolism analogous to organic fermentation, where a single inorganic sulfur species of intermediate oxidation state is simultaneously oxidized and reduced, resulting in production of sulfide and sulfate . In Desulfobacterota, MSD is often present in species that also perform sulfate reduction . [ 6 ] Fe(III) minerals can be microbially reduced by Fe-reducing bacteria (FeRB) using a wide range of organic compounds or H 2 as electron donors. FeRB are widespread across Bacteria . Among Desulfobacterota, they are represented e.g. by the genus Geobacter ( Desulfuromonadia ). [ 16 ] Certain species of the families Geobacteraceae and Desulfuromonadaceae ( Desulfuromonadia) are able to use external surfaces as electron acceptors to complete respiration. [ 8 ] [ 17 ] [ 18 ] Species of the genus Geobacter use bacterial nanowires to transfer electrons to extracellular electron acceptors such as Fe(III) oxides. [ 10 ] Certain species of the class Syntrophia use simple organic molecules as electron donors and grow only in the presence of H 2 / formate -utilizing partners ( methanogens or Desulfovibrio ) in syntrophic associations. [ 19 ] The family Desulfobulbaceae contains two genera of cable bacteria : Ca. Electronema and Ca. Electrothrix . These filamentous bacteria conduct electricity across distances over 1 cm, which allows them to connect distant sources of electron donors and electron acceptors . [ 9 ] The phylogeny is based on the phylogenomic analysis of Waite et al. 
2020. [ 3 ] [The cladograms from this section do not survive conversion to plain text; only the taxon labels are recoverable. The flattened text interleaves the phylogenomic tree of Waite et al. 2020 [ 3 ] with the 16S rRNA-based tree from LTP_10_2024, [ 31 ] [ 32 ] [ 33 ] covering the orders Deferrisomatales, "Dadaibacteria", Geobacterales, Desulfuromonadales, Desulfomonilales, Syntrophales, Syntrophorhabdales, Dissulfuribacterales, Thermodesulfobacteriales, Desulfobulbales, "Desulfofervidales", Desulfovibrionales, Syntrophobacterales, Desulfobaccales, "Adiutricales", Desulfarculales, Desulfatiglandales, and Desulfobacterales, together with the candidate taxa "Binatales", "Deferrimicrobiales", "Zymogenales", "Anaeroferrophilales", "Nemesobacterales", and "Acidulidesulfobacterales" (SZUA-79).]
https://en.wikipedia.org/wiki/Thermodesulfobacteriota
In thermodynamics , activity (symbol a ) is a measure of the "effective concentration" of a species in a mixture, in the sense that the species' chemical potential depends on the activity of a real solution in the same way that it would depend on concentration for an ideal solution . The term "activity" in this sense was coined by the American chemist Gilbert N. Lewis in 1907. [ 1 ] By convention, activity is treated as a dimensionless quantity , although its value depends on customary choices of standard state for the species. The activity of pure substances in condensed phases (solids and liquids) is taken as a = 1. [ 2 ] Activity depends on temperature, pressure and composition of the mixture, among other things. For gases, the activity is the effective partial pressure, and is usually referred to as fugacity . The difference between activity and other measures of concentration arises because the interactions between different types of molecules in non-ideal gases or solutions are different from interactions between the same types of molecules. The activity of an ion is particularly influenced by its surroundings. Equilibrium constants should be defined by activities but, in practice, are often defined by concentrations instead. The same is often true of equations for reaction rates . However, there are circumstances where the activity and the concentration are significantly different and, as such, it is not valid to approximate with concentrations where activities are required. Two examples serve to illustrate this point: The relative activity of a species i , denoted a i , is defined [ 4 ] [ 5 ] as: {\displaystyle a_{i}=\exp \left({\frac {\mu _{i}-\mu _{i}^{o}}{RT}}\right)} where μ i is the (molar) chemical potential of the species i under the conditions of interest, μ o i is the (molar) chemical potential of that species under some defined set of standard conditions, R is the gas constant , T is the thermodynamic temperature and e is the exponential constant . Alternatively, this equation can be written as: {\displaystyle \mu _{i}=\mu _{i}^{o}+RT\ln a_{i}} In general, the activity depends on any factor that alters the chemical potential. Such factors may include: concentration, temperature, pressure, interactions between chemical species, electric fields, etc. Depending on the circumstances, some of these factors, in particular concentration and interactions, may be more important than others. The activity depends on the choice of standard state such that changing the standard state will also change the activity. This means that activity is a relative term that describes how "active" a compound is compared to when it is under the standard state conditions. In principle, the choice of standard state is arbitrary; however, it is often chosen out of mathematical or experimental convenience. Alternatively, it is also possible to define an "absolute activity", λ , which is written as: {\displaystyle \lambda _{i}=\exp \left({\frac {\mu _{i}}{RT}}\right)} Note that this definition corresponds to setting as standard state the solution of μ i = 0 {\displaystyle \mu _{i}=0} , if the latter exists. The activity coefficient γ , which is also a dimensionless quantity, relates the activity to a measured mole fraction x i (or y i in the gas phase), molality b i , mass fraction w i , molar concentration (molarity) c i or mass concentration ρ i : [ 6 ] {\displaystyle a_{i}=\gamma _{x,i}\,x_{i},\qquad a_{i}=\gamma _{b,i}\,{\frac {b_{i}}{b^{o}}},\qquad a_{i}=\gamma _{c,i}\,{\frac {c_{i}}{c^{o}}}} The division by the standard molality b o (usually 1 mol/kg) or the standard molar concentration c o (usually 1 mol/L) is necessary to ensure that both the activity and the activity coefficient are dimensionless, as is conventional. 
[ 5 ] The activity depends on the chosen standard state and composition scale; [ 6 ] for instance, in the dilute limit it approaches the mole fraction, mass fraction, or numerical value of molarity, all of which are different. However, the activity coefficients are similar. [ citation needed ] When the activity coefficient is close to 1, the substance shows almost ideal behaviour according to Henry's law (but not necessarily in the sense of an ideal solution ). In these cases, the activity can be substituted with the appropriate dimensionless measure of composition x i , ⁠ b i / b o ⁠ or ⁠ c i / c o ⁠ . It is also possible to define an activity coefficient in terms of Raoult's law : the International Union of Pure and Applied Chemistry (IUPAC) recommends the symbol f for this activity coefficient, [ 5 ] although this should not be confused with fugacity . In most laboratory situations, the difference in behaviour between a real gas and an ideal gas is dependent only on the pressure and the temperature, not on the presence of any other gases. At a given temperature, the "effective" pressure of a gas i is given by its fugacity f i : this may be higher or lower than its mechanical pressure. By historical convention, fugacities have the dimension of pressure, so the dimensionless activity is given by: {\displaystyle a_{i}={\frac {f_{i}}{p^{o}}}={\frac {\varphi _{i}y_{i}p}{p^{o}}}} where φ i is the dimensionless fugacity coefficient of the species, y i is its mole fraction in the gaseous mixture ( y = 1 for a pure gas) and p is the total pressure. The value p o is the standard pressure: it may be equal to 1 atm (101.325 kPa) or 1 bar (100 kPa) depending on the source of data, and should always be quoted. The most convenient way of expressing the composition of a generic mixture is by using the mole fractions x i (written y i in the gas phase) of the different components (or chemical species: atoms or molecules) present in the system, where {\displaystyle \sum _{i}x_{i}=1} The standard state of each component in the mixture is taken to be the pure substance, i.e. the pure substance has an activity of one. When activity coefficients are used, they are usually defined in terms of Raoult's law , {\displaystyle a_{i}=f_{i}x_{i}} where f i is the Raoult's law activity coefficient: an activity coefficient of one indicates ideal behaviour according to Raoult's law. A solute in dilute solution usually follows Henry's law rather than Raoult's law, and it is more usual to express the composition of the solution in terms of the molar concentration c (in mol/L) or the molality b (in mol/kg) of the solute rather than in mole fractions. The standard state of a dilute solution is a hypothetical solution of concentration c o = 1 mol/L (or molality b o = 1 mol/kg) which shows ideal behaviour (also referred to as "infinite-dilution" behaviour). The standard state, and hence the activity, depends on which measure of composition is used. Molalities are often preferred as the volumes of non-ideal mixtures are not strictly additive and are also temperature-dependent: molalities do not depend on volume, whereas molar concentrations do. [ 7 ] The activity of the solute is given by: {\displaystyle a_{b}=\gamma _{b}\,{\frac {b}{b^{o}}}} When the solute undergoes ionic dissociation in solution (for example a salt), the system becomes decidedly non-ideal and we need to take the dissociation process into consideration. One can define activities for the cations and anions separately ( a + and a – ). In a liquid solution the activity coefficient of a given ion (e.g. Ca 2+ ) isn't measurable because it is experimentally impossible to independently measure the electrochemical potential of an ion in solution. 
(One cannot add cations without putting in anions at the same time). Therefore, one introduces the notions of mean ionic activity {\displaystyle a_{\pm }^{\nu }=a_{+}^{\nu _{+}}a_{-}^{\nu _{-}}} and mean ionic activity coefficient {\displaystyle \gamma _{\pm }^{\nu }=\gamma _{+}^{\nu _{+}}\gamma _{-}^{\nu _{-}}} where ν = ν + + ν – represent the stoichiometric coefficients involved in the ionic dissociation process Even though γ + and γ – cannot be determined separately, γ ± is a measurable quantity that can also be predicted for sufficiently dilute systems using Debye–Hückel theory . For electrolyte solutions at higher concentrations, Debye–Hückel theory needs to be extended and replaced, e.g., by a Pitzer electrolyte solution model (see external links below for examples). For the activity of a strong ionic solute (complete dissociation) we can write: {\displaystyle a=a_{\pm }^{\nu }} The most direct way of measuring the activity of a volatile species is to measure its equilibrium partial vapor pressure . For water as solvent, the water activity a w is the equilibrated relative humidity . For non-volatile components, such as sucrose or sodium chloride , this approach will not work since they do not have measurable vapor pressures at most temperatures. However, in such cases it is possible to measure the vapor pressure of the solvent instead. Using the Gibbs–Duhem relation it is possible to translate the change in solvent vapor pressures with concentration into activities for the solute. The simplest way of determining how the activity of a component depends on pressure is by measurement of densities of solution, knowing that real solutions have deviations from the additivity of (molar) volumes of pure components compared to the (molar) volume of the solution. This involves the use of partial molar volumes , which measure the change in chemical potential with respect to pressure. Another way to determine the activity of a species is through the manipulation of colligative properties , specifically freezing point depression . Using freezing point depression techniques, it is possible to calculate the activity of a weak acid from the relation, where b′ is the total equilibrium molality of solute determined by any colligative property measurement (in this case Δ T fus ), b is the nominal molality obtained from titration and a is the activity of the species. There are also electrochemical methods that allow the determination of activity and its coefficient. The value of the mean ionic activity coefficient γ ± of ions in solution can also be estimated with the Debye–Hückel equation , the Davies equation or the Pitzer equations . The prevailing view that single ion activities are unmeasurable, or perhaps even physically meaningless, has its roots in the work of Edward A. Guggenheim in the late 1920s. [ 8 ] However, chemists have not given up the idea of single ion activities. For example, pH is defined as the negative logarithm of the hydrogen ion activity. By implication, if the prevailing view on the physical meaning and measurability of single ion activities is correct it relegates pH to the category of thermodynamically unmeasurable quantities. For this reason the International Union of Pure and Applied Chemistry (IUPAC) states that the activity-based definition of pH is a notional definition only and further states that the establishment of primary pH standards requires the application of the concept of 'primary method of measurement' tied to the Harned cell. [ 9 ] Nevertheless, the concept of single ion activities continues to be discussed in the literature, and at least one author purports to define single ion activities in terms of purely thermodynamic quantities. 
The same author also proposes a method of measuring single ion activity coefficients based on purely thermodynamic processes. [ 10 ] A different approach [ 11 ] has a similar objective. Chemical activities should be used to define chemical potentials , where the chemical potential depends on the temperature T , pressure p and the activity a i according to the formula : {\displaystyle \mu _{i}=\mu _{i}^{o}+RT\ln a_{i}} where R is the gas constant and μ o i is the value of μ i under standard conditions. Note that the choice of concentration scale affects both the activity and the standard state chemical potential, which is especially important when the reference state is the infinite dilution of a solute in a solvent. Chemical potential has units of joules per mole (J/mol), or energy per amount of matter. Chemical potential can be used to characterize the specific Gibbs free energy changes occurring in chemical reactions or other transformations. Formulae involving activities can be simplified by considering that at low concentrations the activity coefficient of a solute approaches unity, so its activity is approximately equal to its concentration, and that the activity of a nearly pure solvent approaches unity. The latter follows from any definition based on Raoult's law, because if we let the solute concentration x 1 go to zero, the vapor pressure of the solvent p will go to p* . Thus its activity a = ⁠ p / p * ⁠ will go to unity. This means that if during a reaction in dilute solution more solvent is generated (the reaction produces water for example) we can typically set its activity to unity. Solid and liquid activities do not depend very strongly on pressure because their molar volumes are typically small. Graphite at 100 bars has an activity of only 1.01 if we choose p o = 1 bar as standard state. Only at very high pressures do we need to worry about such changes. Activity expressed in terms of pressure is called fugacity . Example values of activity coefficients of sodium chloride in aqueous solution are given in the table. [ 12 ] In an ideal solution, these values would all be unity. The deviations tend to become larger with increasing molality and temperature, but with some exceptions.
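Because the Davies equation mentioned above is an explicit closed-form extension of Debye–Hückel theory, the mean ionic activity coefficient can be estimated in a few lines of code. The following Python sketch assumes an aqueous solution at 25 °C with the conventional constant A ≈ 0.509 (kg/mol)^1/2; it is an illustration of the formula, not a vetted electrolyte model.

```python
import math

def davies_log10_gamma(z, I, A=0.509):
    """Davies equation for a single ion of charge z at ionic strength I
    (mol/kg), assuming water at 25 C where A ~ 0.509 (kg/mol)^0.5."""
    sqrt_I = math.sqrt(I)
    return -A * z**2 * (sqrt_I / (1.0 + sqrt_I) - 0.3 * I)

def mean_ionic_gamma(z_plus, z_minus, nu_plus, nu_minus, I):
    """Mean ionic activity coefficient from the single-ion estimates,
    weighted by the stoichiometric coefficients nu+ and nu-."""
    lg = (nu_plus * davies_log10_gamma(z_plus, I)
          + nu_minus * davies_log10_gamma(z_minus, I)) / (nu_plus + nu_minus)
    return 10.0 ** lg

# NaCl (1:1 electrolyte) at ionic strength 0.1 mol/kg:
print(round(mean_ionic_gamma(1, -1, 1, 1, 0.1), 3))  # ~0.782
```

For comparison, the tabulated mean ionic activity coefficient of NaCl near this molality is about 0.78, so the Davies estimate is reasonable in this dilute regime.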
https://en.wikipedia.org/wiki/Thermodynamic_activity
In statistical thermodynamics , thermodynamic beta , also known as coldness , [ 1 ] is the reciprocal of the thermodynamic temperature of a system: β = 1 k B T {\displaystyle \beta ={\frac {1}{k_{\rm {B}}T}}} (where T is the temperature and k B is the Boltzmann constant ). [ 2 ] Thermodynamic beta has units reciprocal to that of energy (in SI units , reciprocal joules , [ β ] = J − 1 {\displaystyle [\beta ]={\textrm {J}}^{-1}} ). In non-thermal units, it can also be measured in byte per joule, or more conveniently, gigabyte per nanojoule; [ 3 ] 1 K −1 is equivalent to about 13,062 gigabytes per nanojoule; at room temperature: T = 300K, β ≈ 44 GB/nJ ≈ 39 eV −1 ≈ 2.4 × 10 20 J −1 . The conversion factor is 1 GB/nJ = 8 ln ⁡ 2 × 10 18 {\displaystyle 8\ln 2\times 10^{18}} J −1 . Thermodynamic beta is essentially the connection between the information theory and statistical mechanics interpretation of a physical system through its entropy and the thermodynamics associated with its energy . It expresses the response of entropy to an increase in energy. If a small amount of energy is added to the system, then β describes the amount by which the system will randomize. Via the statistical definition of temperature as a function of entropy, the coldness function can be calculated in the microcanonical ensemble from the formula {\displaystyle \beta ={\frac {1}{k_{\rm {B}}}}\left({\frac {\partial S}{\partial E}}\right)_{V,N}} (i.e., the partial derivative of the entropy S with respect to the energy E at constant volume V and particle number N ). Though completely equivalent in conceptual content to temperature, β is generally considered a more fundamental quantity than temperature owing to the phenomenon of negative temperature , in which β is continuous as it crosses zero whereas T has a singularity. [ 4 ] In addition, β has the advantage of being easier to understand causally: If a small amount of heat is added to a system, β is the increase in entropy divided by the increase in heat. Temperature is difficult to interpret in the same sense, as it is not possible to "add entropy" to a system except indirectly, by modifying other quantities such as temperature, volume, or number of particles. From the statistical point of view, β is a numerical quantity relating two macroscopic systems in equilibrium. The exact formulation is as follows. Consider two systems, 1 and 2, in thermal contact, with respective energies E 1 and E 2 . We assume E 1 + E 2 = some constant E . The number of microstates of each system will be denoted by Ω 1 and Ω 2 . Under our assumptions Ω i depends only on E i . We also assume that any microstate of system 1 consistent with E 1 can coexist with any microstate of system 2 consistent with E 2 . Thus, the number of microstates for the combined system is {\displaystyle \Omega =\Omega _{1}(E_{1})\,\Omega _{2}(E_{2})} We will derive β from the fundamental assumption of statistical mechanics : the equilibrium macrostate is the one that maximizes the number of microstates of the combined system. (In other words, the system naturally seeks the maximum number of microstates.) Therefore, at equilibrium, {\displaystyle {\frac {d}{dE_{1}}}{\big (}\Omega _{1}(E_{1})\,\Omega _{2}(E_{2}){\big )}=0} But E 1 + E 2 = E implies {\displaystyle {\frac {dE_{2}}{dE_{1}}}=-1} So {\displaystyle \Omega _{2}{\frac {d\Omega _{1}}{dE_{1}}}-\Omega _{1}{\frac {d\Omega _{2}}{dE_{2}}}=0} i.e. {\displaystyle {\frac {d\ln \Omega _{1}}{dE_{1}}}={\frac {d\ln \Omega _{2}}{dE_{2}}}} The above relation motivates a definition of β : {\displaystyle \beta ={\frac {d\ln \Omega }{dE}}} When two systems are in equilibrium, they have the same thermodynamic temperature T . Thus intuitively, one would expect β (as defined via microstates) to be related to T in some way. This link is provided by Boltzmann's fundamental assumption written as {\displaystyle S=k_{\rm {B}}\ln \Omega } where k B is the Boltzmann constant , S is the classical thermodynamic entropy, and Ω is the number of microstates. 
So {\displaystyle {\frac {dS}{dE}}=k_{\rm {B}}{\frac {d\ln \Omega }{dE}}} Substituting into the definition of β from the statistical definition above gives {\displaystyle \beta ={\frac {1}{k_{\rm {B}}}}{\frac {dS}{dE}}} Comparing with the thermodynamic formula {\displaystyle {\frac {dS}{dE}}={\frac {1}{T}}} we have {\displaystyle \beta ={\frac {1}{k_{\rm {B}}T}}={\frac {1}{\tau }}} where τ {\displaystyle \tau } is called the fundamental temperature of the system, and has units of energy. The thermodynamic beta was originally introduced in 1971 (as Kältefunktion "coldness function") by Ingo Müller [ de ] , one of the proponents of the rational thermodynamics school of thought, [ 5 ] [ 6 ] based on earlier proposals for a "reciprocal temperature" function. [ 1 ] [ 7 ] [ non-primary source needed ]
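The definition β = d ln Ω/dE can be made concrete with a toy microcanonical model. The Python sketch below assumes a hypothetical system of N two-level units (each excited unit contributing one unit of energy, with k B set to 1), for which Ω(n) = C(N, n) counts the microstates with n excitations; the computed β falls smoothly through zero at half filling, which is exactly the negative-temperature regime described above where T itself is singular.

```python
import math

# Toy microcanonical model (assumed for illustration): N two-level units,
# energy E = n in reduced units, Omega(n) = binomial(N, n) microstates.
N = 100

def log_omega(n):
    # ln of the binomial coefficient via log-gamma, to avoid huge integers
    return math.lgamma(N + 1) - math.lgamma(n + 1) - math.lgamma(N - n + 1)

for n in range(10, N, 20):
    # central-difference estimate of beta = d ln(Omega) / dE at E = n
    beta = (log_omega(n + 1) - log_omega(n - 1)) / 2.0
    print(f"E = {n:3d}   beta = {beta:+.4f}")

# Output: beta > 0 below half filling, beta < 0 above it, passing
# continuously through zero near n = N/2.
```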
https://en.wikipedia.org/wiki/Thermodynamic_beta
A thermodynamic cycle consists of linked sequences of thermodynamic processes that involve transfer of heat and work into and out of the system, while varying pressure, temperature, and other state variables within the system, and that eventually returns the system to its initial state. [ 1 ] In the process of passing through a cycle, the working fluid (system) may convert heat from a warm source into useful work, and dispose of the remaining heat to a cold sink, thereby acting as a heat engine . Conversely, the cycle may be reversed and use work to move heat from a cold source and transfer it to a warm sink thereby acting as a heat pump . If at every point in the cycle the system is in thermodynamic equilibrium , the cycle is reversible. Whether carried out reversibly or irreversibly, the net entropy change of the system is zero, as entropy is a state function . During a closed cycle, the system returns to its original thermodynamic state of temperature and pressure. Process quantities (or path quantities), such as heat and work are process dependent. For a cycle for which the system returns to its initial state the first law of thermodynamics applies: {\displaystyle \Delta U=E_{in}-E_{out}=0} The above states that there is no change of the internal energy ( U {\displaystyle U} ) of the system over the cycle. E i n {\displaystyle E_{in}} represents the total work and heat input during the cycle and E o u t {\displaystyle E_{out}} would be the total work and heat output during the cycle. The repeating nature of the process path allows for continuous operation, making the cycle an important concept in thermodynamics . Thermodynamic cycles are often represented mathematically as quasistatic processes in the modeling of the workings of an actual device. Two primary classes of thermodynamic cycles are power cycles and heat pump cycles . Power cycles are cycles which convert some heat input into a mechanical work output, while heat pump cycles transfer heat from low to high temperatures by using mechanical work as the input. Cycles composed entirely of quasistatic processes can operate as power or heat pump cycles by controlling the process direction. On a pressure–volume (PV) diagram or temperature–entropy diagram , the clockwise and counterclockwise directions indicate power and heat pump cycles, respectively. Because the net variation in state properties during a thermodynamic cycle is zero, it forms a closed loop on a P-V diagram . A P-V diagram's ordinate , Y axis, shows pressure ( P ) and its abscissa , X axis, shows volume ( V ). The area enclosed by the loop is the net work ( W n e t {\displaystyle W_{net}} ) done by the processes, i.e. the cycle: ( 1 ) {\displaystyle W_{net}=\oint P\,dV} This work is equal to the net heat (Q) transferred into and out of the system: ( 2 ) {\displaystyle W_{net}=Q=Q_{in}-Q_{out}} Equation (2) is consistent with the First Law; even though the internal energy changes during the course of the cyclic process, when the cyclic process finishes the system's internal energy is the same as the energy it had when the process began. If the cyclic process moves clockwise around the loop, then W n e t {\displaystyle W_{net}} will be positive, the cyclic machine will transform part of the heat exchanged into work and it represents a heat engine . If it moves counterclockwise, then W n e t {\displaystyle W_{net}} will be negative, the cyclic machine will require work to absorb heat at a low temperature and reject it at a higher temperature and it represents a heat pump . 
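The loop-area statement above is easy to check numerically. The sketch below (Python, with an assumed rectangular cycle) evaluates W_net = ∮ P dV edge by edge; traversing the same loop clockwise gives positive work (a heat engine), and counterclockwise gives negative work (a heat pump).

```python
# Sketch: net work of a cycle as the signed area of its loop on the P-V
# plane. The rectangular cycle and its values are assumed for illustration.

def net_work(points):
    """Signed area enclosed by a polygonal P-V cycle, i.e. the integral of
    P dV along straight edges (exact for straight segments).
    points: list of (V, P) vertices in traversal order."""
    W = 0.0
    for (v1, p1), (v2, p2) in zip(points, points[1:] + points[:1]):
        W += 0.5 * (p1 + p2) * (v2 - v1)
    return W

# Rectangle between 2e5 Pa / 1e5 Pa and 1 L / 2 L, traversed clockwise:
cycle = [(0.001, 2e5), (0.002, 2e5), (0.002, 1e5), (0.001, 1e5)]
print(net_work(cycle))        # +100.0 J: clockwise, heat engine
print(net_work(cycle[::-1]))  # -100.0 J: counterclockwise, heat pump
```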
The following processes are often used to describe different stages of a thermodynamic cycle: The Otto cycle is an example of a reversible thermodynamic cycle. Thermodynamic power cycles are the basis for the operation of heat engines, which supply most of the world's electric power and run the vast majority of motor vehicles . Power cycles can be organized into two categories: real cycles and ideal cycles. Cycles encountered in real world devices (real cycles) are difficult to analyze because of the presence of complicating effects (friction), and the absence of sufficient time for the establishment of equilibrium conditions. For the purpose of analysis and design, idealized models (ideal cycles) are created; these ideal models allow engineers to study the effects of major parameters that dominate the cycle without having to spend significant time working out intricate details present in the real cycle model. Power cycles can also be divided according to the type of heat engine they seek to model. The most common cycles used to model internal combustion engines are the Otto cycle , which models gasoline engines , and the Diesel cycle , which models diesel engines . Cycles that model external combustion engines include the Brayton cycle , which models gas turbines , the Rankine cycle , which models steam turbines , the Stirling cycle , which models hot air engines , and the Ericsson cycle , which also models hot air engines. For example, the pressure-volume mechanical work output from the ideal Stirling cycle (net work out), consisting of 4 thermodynamic processes, is [ citation needed ] [ dubious – discuss ] : ( 3 ) {\displaystyle W_{net}=W_{1\to 2}+W_{2\to 3}+W_{3\to 4}+W_{4\to 1}} For the ideal Stirling cycle, no volume change happens in process 4-1 and 2-3, thus equation (3) simplifies to: ( 4 ) {\displaystyle W_{net}=W_{1\to 2}+W_{3\to 4}} Thermodynamic heat pump cycles are the models for household heat pumps and refrigerators . There is no difference between the two except the purpose of the refrigerator is to cool a very small space while the household heat pump is intended to warm or cool a house. Both work by moving heat from a cold space to a warm space. The most common refrigeration cycle is the vapor compression cycle , which models systems using refrigerants that change phase. The absorption refrigeration cycle is an alternative that absorbs the refrigerant in a liquid solution rather than evaporating it. Gas refrigeration cycles include the reversed Brayton cycle and the Hampson–Linde cycle . Multiple compression and expansion cycles allow gas refrigeration systems to liquify gases . Thermodynamic cycles may be used to model real devices and systems, typically by making a series of assumptions to reduce the problem to a more manageable form. [ 2 ] For example, as shown in the figure, devices such as a gas turbine or jet engine can be modeled as a Brayton cycle . The actual device is made up of a series of stages, each of which is itself modeled as an idealized thermodynamic process. Although each stage which acts on the working fluid is a complex real device, they may be modelled as idealized processes which approximate their real behavior. If energy is added by means other than combustion, then a further assumption is that the exhaust gases would be passed from the exhaust to a heat exchanger that would sink the waste heat to the environment and the working gas would be reused at the inlet stage. The difference between an idealized cycle and actual performance may be significant. 
[ 2 ] For example, the following images illustrate the differences in work output predicted by an ideal Stirling cycle and the actual performance of a Stirling engine: As the net work output for a cycle is represented by the interior of the cycle, there is a significant difference between the predicted work output of the ideal cycle and the actual work output shown by a real engine. It may also be observed that the real individual processes diverge from their idealized counterparts; e.g., isochoric expansion (process 1-2) occurs with some actual volume change. In practice, simple idealized thermodynamic cycles are usually made out of four thermodynamic processes . Any thermodynamic processes may be used. However, when idealized cycles are modeled, processes where one state variable is kept constant are often used, such as an isothermal process (constant temperature), an isobaric process (constant pressure), an isochoric process (constant volume), or an isentropic process (constant entropy). Some example thermodynamic cycles and their constituent processes are as follows: An ideal cycle is simple to analyze and consists of four legs on a P-V diagram: an isobaric expansion at the higher pressure (process A), an isochoric cooling (process B), an isobaric compression at the lower pressure (process C), and an isochoric heating back to the initial state (process D). If the working substance is a perfect gas , U {\displaystyle U} is only a function of T {\displaystyle T} for a closed system since its internal pressure vanishes. Therefore, the internal energy changes of a perfect gas undergoing various processes connecting initial state a {\displaystyle a} to final state b {\displaystyle b} are always given by the formula {\displaystyle \Delta U=\int _{a}^{b}C_{v}\,dT} Assuming that C v {\displaystyle C_{v}} is constant, Δ U = C v Δ T {\displaystyle \Delta U=C_{v}\Delta T} for any process undergone by a perfect gas. Under this set of assumptions, for processes A and C we have W = p Δ v {\displaystyle W=p\Delta v} and Q = C p Δ T {\displaystyle Q=C_{p}\Delta T} , whereas for processes B and D we have W = 0 {\displaystyle W=0} and Q = Δ U = C v Δ T {\displaystyle Q=\Delta U=C_{v}\Delta T} . The total work done per cycle is W c y c l e = p A ( v 2 − v 1 ) + p C ( v 4 − v 3 ) = p A ( v 2 − v 1 ) + p C ( v 1 − v 2 ) = ( p A − p C ) ( v 2 − v 1 ) {\displaystyle W_{cycle}=p_{A}(v_{2}-v_{1})+p_{C}(v_{4}-v_{3})=p_{A}(v_{2}-v_{1})+p_{C}(v_{1}-v_{2})=(p_{A}-p_{C})(v_{2}-v_{1})} , which is just the area of the rectangle. If the total heat flow per cycle is required, this is easily obtained. Since Δ U c y c l e = Q c y c l e − W c y c l e = 0 {\displaystyle \Delta U_{cycle}=Q_{cycle}-W_{cycle}=0} , we have Q c y c l e = W c y c l e {\displaystyle Q_{cycle}=W_{cycle}} . Thus, the total heat flow per cycle is calculated without knowing the heat capacities and temperature changes for each step (although this information would be needed to assess the thermodynamic efficiency of the cycle). The Carnot cycle is a cycle composed of the totally reversible processes of isentropic compression and expansion and isothermal heat addition and rejection. The thermal efficiency of a Carnot cycle depends only on the absolute temperatures of the two reservoirs in which heat transfer takes place, and for a power cycle is: {\displaystyle \eta =1-{\frac {T_{L}}{T_{H}}}} where T L {\displaystyle {T_{L}}} is the lowest cycle temperature and T H {\displaystyle {T_{H}}} the highest. For Carnot power cycles the coefficient of performance for a heat pump is: {\displaystyle COP={\frac {T_{H}}{T_{H}-T_{L}}}} and for a refrigerator the coefficient of performance is: {\displaystyle COP={\frac {T_{L}}{T_{H}-T_{L}}}} The second law of thermodynamics limits the efficiency and COP for all cyclic devices to levels at or below the Carnot efficiency. The Stirling cycle and Ericsson cycle are two other reversible cycles that use regeneration to obtain isothermal heat transfer. A Stirling cycle is like an Otto cycle, except that the adiabats are replaced by isotherms. 
It is also the same as an Ericsson cycle with constant-volume processes substituted for the isobaric processes. Heat flows into the loop through the top isotherm and the left isochore, and some of this heat flows back out through the bottom isotherm and the right isochore, but most of the heat flow is through the pair of isotherms. This makes sense since all the work done by the cycle is done by the pair of isothermal processes, which are described by Q=W . This suggests that all the net heat comes in through the top isotherm. In fact, all of the heat which comes in through the left isochore comes out through the right isochore: since the top isotherm is all at the same warmer temperature T H {\displaystyle T_{H}} and the bottom isotherm is all at the same cooler temperature T C {\displaystyle T_{C}} , and since change in energy for an isochore is proportional to change in temperature, then all of the heat coming in through the left isochore is cancelled out exactly by the heat going out the right isochore. If Z is a state function then the balance of Z remains unchanged during a cyclic process: {\displaystyle \oint dZ=0} Entropy is a state function and is defined in an absolute sense through the Third Law of Thermodynamics as {\displaystyle S=\int _{0}^{T}{\frac {\delta Q_{rev}}{T}}} where a reversible path is chosen from absolute zero to the final state, so that for an isothermal reversible process {\displaystyle \Delta S={\frac {Q_{rev}}{T}}} In general, for any cyclic process the state points can be connected by reversible paths, so that {\displaystyle \oint dS=\oint {\frac {\delta Q_{rev}}{T}}=0} meaning that the net entropy change of the working fluid over a cycle is zero.
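The Carnot expressions quoted above translate directly into code. A minimal Python sketch, with assumed reservoir temperatures, computes the power-cycle efficiency and both coefficients of performance; the second law bounds any real device by these numbers.

```python
# Carnot limits for reservoirs at T_hot and T_cold (kelvin); the
# temperatures below are assumed values for illustration.

def carnot_efficiency(T_hot, T_cold):
    return 1.0 - T_cold / T_hot          # power cycle

def cop_heat_pump(T_hot, T_cold):
    return T_hot / (T_hot - T_cold)      # heating coefficient of performance

def cop_refrigerator(T_hot, T_cold):
    return T_cold / (T_hot - T_cold)     # cooling coefficient of performance

T_H, T_L = 500.0, 300.0
print(carnot_efficiency(T_H, T_L))   # 0.4
print(cop_heat_pump(T_H, T_L))       # 2.5
print(cop_refrigerator(T_H, T_L))    # 1.5
```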
https://en.wikipedia.org/wiki/Thermodynamic_cycle
Thermodynamic diagrams are diagrams used to represent the thermodynamic states of a material (typically fluid ) and the consequences of manipulating this material. For instance, a temperature– entropy diagram ( T–s diagram ) may be used to demonstrate the behavior of a fluid as it is changed by a compressor. Especially in meteorology , they are used to analyze the actual state of the atmosphere derived from the measurements of radiosondes , usually obtained with weather balloons . In such diagrams, temperature and humidity values (represented by the dew point ) are displayed with respect to pressure . Thus the diagram gives at a first glance the actual atmospheric stratification and vertical water vapor distribution. Further analysis gives the actual base and top height of convective clouds or possible instabilities in the stratification. By assuming the energy amount due to solar radiation it is possible to predict the 2 m (6.6 ft ) temperature, humidity, and wind during the day, the development of the boundary layer of the atmosphere, the occurrence and development of clouds and the conditions for soaring flight during the day. The main feature of thermodynamic diagrams is the equivalence between the area in the diagram and energy. When air changes pressure and temperature during a process and prescribes a closed curve within the diagram the area enclosed by this curve is proportional to the energy which has been gained or released by the air. General purpose diagrams include: Specific to weather services, there are mainly three different types of thermodynamic diagrams used: All four diagrams are derived from the physical P–alpha diagram which combines pressure ( P ) and specific volume ( alpha ) as its basic coordinates. The P–alpha diagram shows a strong deformation of the grid for atmospheric conditions and is therefore not useful in atmospheric sciences . The three diagrams are constructed from the P–alpha diagram by using appropriate coordinate transformations. Not a thermodynamic diagram in a strict sense, since it does not display the energy–area equivalence, is the Stüve diagram , but due to its simpler construction it is preferred in education. [ citation needed ] Another widely-used diagram that does not display the energy–area equivalence is the θ-z diagram (Theta-height diagram), extensively used in boundary layer meteorology . Thermodynamic diagrams usually show a net of five different lines: isobars, isotherms, dry adiabats, saturated (moist) adiabats, and lines of constant saturation mixing ratio. From these, the lapse rate , dry adiabatic lapse rate (DALR) and moist adiabatic lapse rate (MALR), are obtained. With the help of these lines, parameters such as cloud condensation level , level of free convection , onset of cloud formation, etc. can be derived from the soundings. The path or series of states through which a system passes from an initial equilibrium state to a final equilibrium state [ 1 ] can be viewed graphically on pressure-volume (P-V), pressure-temperature (P-T), and temperature-entropy (T-s) diagrams. [ 2 ] There are an infinite number of possible paths from an initial point to an end point in a process . In many cases the path matters; however, changes in the thermodynamic properties depend only on the initial and final states and not upon the path. [ 3 ] Consider a gas in a cylinder with a free floating piston resting on top of a volume of gas V 1 at a temperature T 1 . 
If the gas is heated so that the temperature of the gas goes up to T 2 while the piston is allowed to rise to V 2 as in Figure 1, then the pressure is kept the same in this process due to the free floating piston being allowed to rise, making the process an isobaric process or constant pressure process. This process path is a straight horizontal line from state one to state two on a P-V diagram. It is often valuable to calculate the work done in a process. The work done in a process is the area beneath the process path on a P-V diagram (Figure 2). If the process is isobaric, then the work done on the piston is easily calculated. For example, if the gas expands slowly against the piston, the work done by the gas to raise the piston is the force F times the distance d . But the force is just the pressure P of the gas times the area A of the piston, F = PA . [ 4 ] Thus {\displaystyle W=Fd=PAd=P\,\Delta V} Now suppose that the piston was not able to move smoothly within the cylinder due to static friction with the walls of the cylinder. Assuming that the temperature was increased slowly, you would find that the process path is not straight and is no longer isobaric, but would instead undergo an isometric process till the force exceeded the frictional force, and then would undergo an isothermal process back to an equilibrium state. This process would be repeated till the end state is reached (see Figure 3). The work done on the piston in this case would be different due to the additional work required for the resistance of the friction. The work done due to friction would be the difference between the work done on these two process paths. Many engineers neglect friction at first in order to generate a simplified model. [ 1 ] For more accurate information, the height of the highest point (the maximum pressure required to surpass the static friction) would be proportional to the frictional coefficient, and the slope going back down to the normal pressure would be the same as that of an isothermal process if the temperature was increased at a slow enough rate. [ 4 ] Another path in this process is an isometric process . This is a process where volume is held constant, which shows as a vertical line on a P-V diagram (Figure 3). Since the piston is not moving during this process, there is not any work being done. [ 1 ]
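The path dependence of work described in this section can be demonstrated numerically. The following Python sketch, using assumed values, compares the single isobaric path with a two-step isochoric-then-isothermal path between the same ideal-gas end states; the state change is identical but the work differs.

```python
import math

# Path dependence of work for an ideal gas between the same end states.
# All numbers below are assumed values for illustration.
P1, V1 = 100_000.0, 0.001   # Pa, m^3  (initial state)
V2 = 0.002                  # m^3      (final volume; final pressure is P1)

# Path A: single isobaric expansion at P1 from V1 to V2.
W_a = P1 * (V2 - V1)

# Path B: isochoric pressure rise to P2 = P1*V2/V1 (no work, piston fixed),
# then an isothermal expansion ending at the same state (P1, V2);
# along the isotherm W = n*R*T*ln(V2/V1) = P2*V1*ln(V2/V1).
P2 = P1 * V2 / V1
W_b = 0.0 + P2 * V1 * math.log(V2 / V1)

print(f"isobaric path:            W = {W_a:.1f} J")   # 100.0 J
print(f"isochoric + isothermal:   W = {W_b:.1f} J")   # ~138.6 J
```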
https://en.wikipedia.org/wiki/Thermodynamic_diagrams
Thermodynamics is expressed by a mathematical framework of thermodynamic equations which relate various thermodynamic quantities and physical properties measured in a laboratory or production process. Thermodynamics is based on a fundamental set of postulates that became the laws of thermodynamics . One of the fundamental thermodynamic equations is the description of thermodynamic work in analogy to mechanical work , or weight lifted through an elevation against gravity, as defined in 1824 by French physicist Sadi Carnot . Carnot used the phrase motive power for work. In the footnotes to his famous On the Motive Power of Fire , he states: “We use here the expression motive power to express the useful effect that a motor is capable of producing. This effect can always be likened to the elevation of a weight to a certain height. It has, as we know, as a measure, the product of the weight multiplied by the height to which it is raised.” With the inclusion of a unit of time in Carnot's definition, one arrives at the modern definition for power : {\displaystyle P={\frac {W}{t}}={\frac {(mg)h}{t}}} During the latter half of the 19th century, physicists such as Rudolf Clausius , Peter Guthrie Tait , and Willard Gibbs worked to develop the concept of a thermodynamic system and the correlative energetic laws which govern its associated processes. The equilibrium state of a thermodynamic system is described by specifying its "state". The state of a thermodynamic system is specified by a number of extensive quantities , the most familiar of which are volume , internal energy , and the amount of each constituent particle ( particle numbers ). Extensive parameters are properties of the entire system, as contrasted with intensive parameters which can be defined at a single point, such as temperature and pressure. The extensive parameters (except entropy ) are generally conserved in some way as long as the system is "insulated" to changes to that parameter from the outside. The truth of this statement for volume is trivial; for particles one might say that the total particle number of each atomic element is conserved. In the case of energy, the statement of the conservation of energy is known as the first law of thermodynamics . A thermodynamic system is in equilibrium when it is no longer changing in time. This may happen in a very short time, or it may happen with glacial slowness. A thermodynamic system may be composed of many subsystems which may or may not be "insulated" from each other with respect to the various extensive quantities. If we have a thermodynamic system in equilibrium in which we relax some of its constraints, it will move to a new equilibrium state. The thermodynamic parameters may now be thought of as variables and the state may be thought of as a particular point in a space of thermodynamic parameters. The change in the state of the system can be seen as a path in this state space. This change is called a thermodynamic process . Thermodynamic equations are now used to express the relationships between the state parameters at these different equilibrium states. The concept which governs the path that a thermodynamic system traces in state space as it goes from one equilibrium state to another is that of entropy. The entropy is first viewed as an extensive function of all of the extensive thermodynamic parameters. 
If we have a thermodynamic system in equilibrium, and we release some of the extensive constraints on the system, there are many equilibrium states that it could move to consistent with the conservation of energy, volume, etc. The second law of thermodynamics specifies that the equilibrium state that it moves to is in fact the one with the greatest entropy. Once we know the entropy as a function of the extensive variables of the system, we will be able to predict the final equilibrium state. ( Callen 1985 ) Some of the most common thermodynamic quantities are: The conjugate variable pairs are the fundamental state variables used to formulate the thermodynamic functions. The most important thermodynamic potentials are the following functions: Thermodynamic systems are typically affected by the following types of system interactions. The types under consideration are used to classify systems as open systems , closed systems , and isolated systems . Common material properties determined from the thermodynamic functions are the following: The following constants are constants that occur in many relationships due to the application of a standard system of units. The behavior of a thermodynamic system is summarized in the laws of Thermodynamics , which concisely are: The first and second law of thermodynamics are the most fundamental equations of thermodynamics. They may be combined into what is known as fundamental thermodynamic relation which describes all of the changes of thermodynamic state functions of a system of uniform temperature and pressure. As a simple example, consider a system composed of k different types of particles, which has volume as its only external variable. The fundamental thermodynamic relation may then be expressed in terms of the internal energy as: {\displaystyle dU=T\,dS-p\,dV+\sum _{i=1}^{k}\mu _{i}\,dN_{i}} Some important aspects of this equation should be noted: ( Alberty 2001 ), ( Balian 2003 ), ( Callen 1985 ) By the principle of minimum energy , the second law can be restated by saying that for a fixed entropy, when the constraints on the system are relaxed, the internal energy assumes a minimum value. This will require that the system be connected to its surroundings, since otherwise the energy would remain constant. By the principle of minimum energy, there are a number of other state functions which may be defined which have the dimensions of energy and which are minimized according to the second law under certain conditions other than constant entropy. These are called thermodynamic potentials . For each such potential, the relevant fundamental equation results from the same Second-Law principle that gives rise to energy minimization under restricted conditions: that the total entropy of the system and its environment is maximized in equilibrium. The intensive parameters give the derivatives of the environment entropy with respect to the extensive properties of the system. The four most common thermodynamic potentials are: the internal energy U ( S , V , { N i } ), the Helmholtz free energy F = U − T S ( T , V , { N i } ), the enthalpy H = U + p V ( S , p , { N i } ), and the Gibbs free energy G = U + p V − T S ( T , p , { N i } ). After each potential is shown its "natural variables". These variables are important because if the thermodynamic potential is expressed in terms of its natural variables, then it will contain all of the thermodynamic relationships necessary to derive any other relationship. In other words, it too will be a fundamental equation. For the above four potentials, the fundamental equations are expressed as: {\displaystyle dU=T\,dS-p\,dV+\sum _{i}\mu _{i}\,dN_{i}} {\displaystyle dF=-S\,dT-p\,dV+\sum _{i}\mu _{i}\,dN_{i}} {\displaystyle dH=T\,dS+V\,dp+\sum _{i}\mu _{i}\,dN_{i}} {\displaystyle dG=-S\,dT+V\,dp+\sum _{i}\mu _{i}\,dN_{i}} The thermodynamic square can be used as a tool to recall and derive these potentials. 
Just as with the internal energy version of the fundamental equation, the chain rule can be used on the above equations to find k +2 equations of state with respect to the particular potential. If Φ is a thermodynamic potential, then the fundamental equation may be expressed as: {\displaystyle d\Phi =\sum _{i}{\frac {\partial \Phi }{\partial X_{i}}}\,dX_{i}} where the X i {\displaystyle X_{i}} are the natural variables of the potential. If γ i {\displaystyle \gamma _{i}} is conjugate to X i {\displaystyle X_{i}} then we have the equations of state for that potential, one for each set of conjugate variables: {\displaystyle \gamma _{i}={\frac {\partial \Phi }{\partial X_{i}}}} Only one equation of state will not be sufficient to reconstitute the fundamental equation. All equations of state will be needed to fully characterize the thermodynamic system. Note that what is commonly called "the equation of state" is just the "mechanical" equation of state involving the Helmholtz potential and the volume: {\displaystyle p=-\left({\frac {\partial F}{\partial V}}\right)_{T,\{N_{i}\}}} For an ideal gas, this becomes the familiar PV = Nk B T . Because all of the natural variables of the internal energy U are extensive quantities , it follows from Euler's homogeneous function theorem that {\displaystyle U=TS-pV+\sum _{i}\mu _{i}N_{i}} Substituting into the expressions for the other main potentials we have the following expressions for the thermodynamic potentials: {\displaystyle F=-pV+\sum _{i}\mu _{i}N_{i}} {\displaystyle H=TS+\sum _{i}\mu _{i}N_{i}} {\displaystyle G=\sum _{i}\mu _{i}N_{i}} Note that the Euler integrals are sometimes also referred to as fundamental equations. Differentiating the Euler equation for the internal energy and combining with the fundamental equation for internal energy, it follows that: {\displaystyle 0=S\,dT-V\,dp+\sum _{i}N_{i}\,d\mu _{i}} which is known as the Gibbs-Duhem relationship. The Gibbs-Duhem is a relationship among the intensive parameters of the system. It follows that for a simple system with r components, there will be r+1 independent parameters, or degrees of freedom. For example, a simple system with a single component will have two degrees of freedom, and may be specified by only two parameters, such as pressure and volume for example. The law is named after Willard Gibbs and Pierre Duhem . There are many relationships that follow mathematically from the above basic equations. See Exact differential for a list of mathematical relationships. Many equations are expressed as second derivatives of the thermodynamic potentials (see Bridgman equations ). Maxwell relations are equalities involving the second derivatives of thermodynamic potentials with respect to their natural variables. They follow directly from the fact that the order of differentiation does not matter when taking the second derivative. The four most common Maxwell relations are: {\displaystyle \left({\frac {\partial T}{\partial V}}\right)_{S}=-\left({\frac {\partial p}{\partial S}}\right)_{V}} {\displaystyle \left({\frac {\partial T}{\partial p}}\right)_{S}=\left({\frac {\partial V}{\partial S}}\right)_{p}} {\displaystyle \left({\frac {\partial S}{\partial V}}\right)_{T}=\left({\frac {\partial p}{\partial T}}\right)_{V}} {\displaystyle \left({\frac {\partial S}{\partial p}}\right)_{T}=-\left({\frac {\partial V}{\partial T}}\right)_{p}} The thermodynamic square can be used as a tool to recall and derive these relations. Second derivatives of thermodynamic potentials generally describe the response of the system to small changes. The number of second derivatives which are independent of each other is relatively small, which means that most material properties can be described in terms of just a few "standard" properties. For the case of a single component system, there are three properties generally considered "standard" from which all others may be derived: the heat capacity at constant pressure, the coefficient of thermal expansion, and the isothermal compressibility. These properties are seen to be the three possible second derivatives of the Gibbs free energy with respect to temperature and pressure. Properties such as pressure, volume, temperature, unit cell volume, bulk modulus and mass are easily measured. Other properties are measured through simple relations, such as density, specific volume, specific weight. Properties such as internal energy, entropy, enthalpy, and heat transfer are not so easily measured or determined through simple relations. 
Thus, we use more complex relations such as Maxwell relations , the Clapeyron equation , and the Mayer relation. Maxwell relations in thermodynamics are critical because they provide a means of determining a change in entropy from simple measurements of changes in pressure, temperature, and specific volume. Entropy cannot be measured directly. The change in entropy with respect to pressure at a constant temperature is the same as the negative change in specific volume with respect to temperature at a constant pressure, for a simple compressible system. Maxwell relations in thermodynamics are often used to derive thermodynamic relations. [ 2 ] The Clapeyron equation allows us to use pressure, temperature, and specific volume to determine an enthalpy change that is connected to a phase change. It is significant to any phase change process that happens at a constant pressure and temperature. One of the results it yields is the enthalpy of vaporization at a given temperature, obtained by measuring the slope of the saturation curve on a pressure vs. temperature graph. It also allows us to determine the specific volume of a saturated vapor and liquid at that temperature. In the equation below, L {\displaystyle L} represents the specific latent heat, T {\displaystyle T} represents temperature, and Δ v {\displaystyle \Delta v} represents the change in specific volume. [ 3 ] {\displaystyle {\frac {dP}{dT}}={\frac {L}{T\,\Delta v}}} The Mayer relation states that the specific heat capacity of a gas at constant volume is slightly less than at constant pressure. This relation was built on the reasoning that energy must be supplied to raise the temperature of the gas and for the gas to do work in a volume changing case. According to this relation, the difference between the specific heat capacities is the same as the universal gas constant. This relation is represented by the difference between Cp and Cv: Cp – Cv = R [ 4 ]
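Because a Maxwell relation is an equality of mixed second derivatives, it can be verified symbolically. The sketch below uses the third-party sympy library (an assumption about the available tooling) to check (∂S/∂P)_T = −(∂V/∂T)_P for an ideal gas, the same relation the paragraph above states in words.

```python
# Symbolic check of one Maxwell relation, (dS/dP)_T = -(dV/dT)_P, for an
# ideal gas. Requires sympy (assumed available: pip install sympy).
import sympy as sp

T, P, n, R, S0, T0, P0, Cp = sp.symbols('T P n R S0 T0 P0 C_p', positive=True)

# Ideal-gas entropy as a function of T and P (reference state S0 at T0, P0)
S = S0 + Cp * sp.log(T / T0) - n * R * sp.log(P / P0)
V = n * R * T / P  # ideal-gas equation of state

lhs = sp.diff(S, P)     # (dS/dP) at constant T
rhs = -sp.diff(V, T)    # -(dV/dT) at constant P
print(sp.simplify(lhs - rhs))  # prints 0, confirming the relation
```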
https://en.wikipedia.org/wiki/Thermodynamic_equations
A thermodynamic instrument is any device for the measurement of thermodynamic systems . In order for a thermodynamic parameter or physical quantity to be truly defined, a technique for its measurement must be specified. For example, the ultimate definition of temperature is "what a thermometer reads". The question follows – what is a thermometer? There are two types of thermodynamic instruments: the meter and the reservoir. [ 1 ] A thermodynamic meter is any device which measures any parameter of a thermodynamic system. A thermodynamic reservoir is a system which is so large that it does not appreciably alter its state parameters when brought into contact with the test system. [ 1 ] Two general complementary tools are the meter and the reservoir. It is important that these two types of instruments are distinct. A meter does not perform its task accurately if it behaves like a reservoir of the state variable it is trying to measure. If, for example, a thermometer, were to act as a temperature reservoir it would alter the temperature of the system being measured, and the reading would be incorrect. Ideal meters have no effect on the state variables of the system they are measuring. A meter is a thermodynamic system which displays some aspect of its thermodynamic state to the observer. The nature of its contact with the system it is measuring can be controlled, and it is sufficiently small that it does not appreciably affect the state of the system being measured. The theoretical thermometer described below is just such a meter. In some cases, the thermodynamic parameter is actually defined in terms of an idealized measuring instrument. For example, the zeroth law of thermodynamics states that if two bodies are in thermal equilibrium with a third body, they are also in thermal equilibrium with each other. [ 2 ] This principle, as noted by James Maxwell in 1872, asserts that it is possible to measure temperature. An idealized thermometer is a sample of an ideal gas at constant pressure. From the ideal gas law , the volume of such a sample can be used as an indicator of temperature; in this manner it defines temperature. Although pressure is defined mechanically, a pressure-measuring device called a barometer may also be constructed from a sample of an ideal gas held at a constant temperature. A calorimeter is a device which is used to measure and define the internal energy of a system. Some common thermodynamic meters are: A reservoir is a thermodynamic system which controls the state of a system, usually by "imposing" itself upon the system being controlled. This means that the nature of its contact with the system can be controlled. A reservoir is so large that its thermodynamic state is not appreciably affected by the state of the system being controlled. The term " atmospheric pressure " in the below description of a theoretical thermometer is essentially a "pressure reservoir" which imposes atmospheric pressure upon the thermometer. Some common reservoirs are: Let's assume that we understand mechanics well enough to understand and measure volume , area , mass , and force . These may be combined to understand the concept of pressure, which is force per unit area and density, which is mass per unit volume. It has been experimentally determined that, at low enough pressures and densities, all gases behave as ideal gases . 
The behavior of an ideal gas is given by the ideal gas law: PV = NkT, where P is pressure, V is volume, N is the number of particles (total mass divided by mass per particle), k is the Boltzmann constant, and T is temperature. In fact, this equation is more than a phenomenological equation: it gives an operational, or experimental, definition of temperature. A thermometer is a tool that measures temperature; a primitive thermometer would simply be a small container of an ideal gas that was allowed to expand against atmospheric pressure. If we bring it into thermal contact with the system whose temperature we wish to measure, wait until it equilibrates, and then measure the volume of the thermometer, we will be able to calculate the temperature of the system in question via T = PV / Nk. Hopefully, the thermometer will be small enough that it does not appreciably alter the temperature of the system it is measuring, and also the atmospheric pressure will not be affected by the expansion of the thermometer. The ideal gas thermometer can be defined more precisely by saying it is a system containing an ideal gas, which is thermally connected to the system it is measuring, while being dynamically and materially insulated from it. It is simultaneously dynamically connected to an external pressure reservoir, from which it is materially and thermally insulated. Other thermometers (e.g. mercury thermometers, which display the volume of mercury to the observer) may now be constructed, and calibrated against the ideal gas thermometer.
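As a minimal numerical illustration of this operational definition, the sketch below computes T = PV/Nk for one mole of an ideal gas held at atmospheric pressure; the measured volume is an assumed reading:

    # Ideal-gas thermometer: infer temperature from a pressure and a volume reading.
    k = 1.380649e-23      # J/K, Boltzmann constant
    N_A = 6.02214076e23   # 1/mol, Avogadro constant

    P = 101325.0          # Pa, imposed by the atmospheric "pressure reservoir"
    V = 22.4e-3           # m^3, assumed measured volume of one mole of gas
    N = 1.0 * N_A         # number of particles in the thermometer

    T = P * V / (N * k)
    print(f"T = {T:.1f} K")   # ~273 K for these readings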
https://en.wikipedia.org/wiki/Thermodynamic_instruments
Thermodynamic integration is a method used to compare the difference in free energy between two given states (e.g., A and B) whose potential energies U_A and U_B have different dependences on the spatial coordinates. Because the free energy of a system is not simply a function of the phase space coordinates of the system, but is instead a function of the Boltzmann-weighted integral over phase space (i.e. the partition function), the free energy difference between two states cannot be calculated directly from the potential energy of just two coordinate sets (for state A and B respectively). In thermodynamic integration, the free energy difference is calculated by defining a thermodynamic path between the states and integrating over ensemble-averaged enthalpy changes along the path. Such paths can either be real chemical processes or alchemical processes. An example of an alchemical process is Kirkwood's coupling parameter method. [ 1 ] Consider two systems, A and B, with potential energies U_A and U_B. The potential energy in either system can be calculated as an ensemble average over configurations sampled from a molecular dynamics or Monte Carlo simulation with proper Boltzmann weighting. Now consider a new potential energy function defined as:

U(λ) = U_A + λ(U_B − U_A)

Here, λ is defined as a coupling parameter with a value between 0 and 1, and thus the potential energy as a function of λ varies from the energy of system A for λ = 0 to that of system B for λ = 1. In the canonical ensemble, the partition function of the system can be written as:

Q(λ) = Σ_s exp(−U_s(λ) / k_B T)

In this notation, U_s(λ) is the potential energy of state s in the ensemble with potential energy function U(λ) as defined above. The free energy of this system is defined as:

F(λ) = −k_B T ln Q(λ)

Taking the derivative of F with respect to λ shows that it equals the ensemble average of the derivative of the potential energy with respect to λ:

dF(λ)/dλ = ⟨∂U(λ)/∂λ⟩_λ

The change in free energy between states A and B can thus be computed from the integral of the ensemble-averaged derivatives of potential energy over the coupling parameter λ: [ 2 ]

ΔF(A→B) = ∫₀¹ ⟨∂U(λ)/∂λ⟩_λ dλ

In practice, this is performed by defining a potential energy function U(λ), sampling the ensemble of equilibrium configurations at a series of λ values, calculating the ensemble-averaged derivative of U(λ) with respect to λ at each λ value, and finally computing the integral over the ensemble-averaged derivatives. Umbrella sampling is a related free energy method. It adds a bias to the potential energy. In the limit of an infinitely strong bias it is equivalent to thermodynamic integration. [ 3 ]
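The following sketch illustrates the recipe on a case with a known answer: transforming a one-dimensional harmonic well of spring constant kA into one of kB, for which the exact result is ΔF = (kT/2) ln(kB/kA). Because each intermediate ensemble is Gaussian, configurations can be sampled exactly here; the names and parameters are illustrative only, and a real application would sample with molecular dynamics or Monte Carlo:

    import numpy as np

    rng = np.random.default_rng(0)
    kT = 1.0
    kA, kB = 1.0, 4.0   # spring constants of end states A and B

    def mean_dU_dlam(lam, n_samples=200_000):
        # U(lam, x) = 0.5*((1 - lam)*kA + lam*kB)*x**2, so dU/dlam = 0.5*(kB - kA)*x**2.
        k_eff = (1 - lam) * kA + lam * kB
        x = rng.normal(0.0, np.sqrt(kT / k_eff), n_samples)  # exact Boltzmann sample
        return 0.5 * (kB - kA) * np.mean(x**2)

    lams = np.linspace(0.0, 1.0, 11)
    means = np.array([mean_dU_dlam(lam) for lam in lams])
    dF = np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(lams))  # trapezoidal rule

    print(dF, 0.5 * kT * np.log(kB / kA))   # TI estimate vs exact (kT/2) ln(kB/kA)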
https://en.wikipedia.org/wiki/Thermodynamic_integration
In statistical mechanics, the thermodynamic limit or macroscopic limit, [ 1 ] of a system is the limit for a large number N of particles (e.g., atoms or molecules) where the volume V is taken to grow in proportion with the number of particles. [ 2 ] The thermodynamic limit is defined as the limit of a system with a large volume, with the particle density held fixed: [ 3 ]

N → ∞, V → ∞, N/V = constant

In this limit, macroscopic thermodynamics is valid. There, thermal fluctuations in global quantities are negligible, and all thermodynamic quantities, such as pressure and energy, are simply functions of the thermodynamic variables, such as temperature and density. For example, for a large volume of gas, the fluctuations of the total internal energy are negligible and can be ignored, and the average internal energy can be predicted from knowledge of the pressure and temperature of the gas. Note that not all types of thermal fluctuations disappear in the thermodynamic limit—only the fluctuations in system variables cease to be important; there will still be detectable fluctuations, typically at microscopic scales, in some physically observable quantities. Mathematically, an asymptotic analysis is performed when considering the thermodynamic limit. The thermodynamic limit is essentially a consequence of the central limit theorem of probability theory. The internal energy of a gas of N molecules is the sum of order N contributions, each of which is approximately independent, and so the central limit theorem predicts that the ratio of the size of the fluctuations to the mean is of order 1/√N. Thus for a macroscopic volume with perhaps the Avogadro number of molecules, fluctuations are negligible, and so thermodynamics works. In general, almost all macroscopic volumes of gases, liquids and solids can be treated as being in the thermodynamic limit. For small microscopic systems, different statistical ensembles (microcanonical, canonical, grand canonical) permit different behaviours. For example, in the canonical ensemble the number of particles inside the system is held fixed, whereas particle number can fluctuate in the grand canonical ensemble. In the thermodynamic limit, these global fluctuations cease to be important. [ 3 ] It is at the thermodynamic limit that the additivity property of macroscopic extensive variables is obeyed. That is, the entropy of two systems or objects taken together (in addition to their energy and volume) is the sum of the two separate values. In some models of statistical mechanics, the thermodynamic limit exists, but depends on boundary conditions. For example, this happens in the six-vertex model: the bulk free energy is different for periodic boundary conditions and for domain wall boundary conditions. A thermodynamic limit does not exist in all cases. Usually, a model is taken to the thermodynamic limit by increasing the volume together with the particle number while keeping the particle number density constant. Two common regularizations are the box regularization, where matter is confined to a geometrical box, and the periodic regularization, where matter is placed on the surface of a flat torus (i.e. a box with periodic boundary conditions). However, there are cases where these approaches do not lead to a thermodynamic limit:
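The 1/√N scaling is easy to see numerically. The sketch below models a total "internal energy" as the sum of N approximately independent per-particle contributions and reports the relative fluctuation for increasing N; the exponential distribution is an arbitrary illustrative choice:

    import numpy as np

    rng = np.random.default_rng(1)
    n_realizations = 300

    for N in (100, 10_000, 1_000_000):
        # Total energy of each realization: sum of N independent contributions.
        E = np.array([rng.exponential(1.0, N).sum() for _ in range(n_realizations)])
        rel_fluct = E.std() / E.mean()
        print(f"N = {N:>9}: sigma/mean ~ {rel_fluct:.1e}, 1/sqrt(N) = {N**-0.5:.1e}")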
https://en.wikipedia.org/wiki/Thermodynamic_limit
A thermodynamic operation is an externally imposed manipulation that affects a thermodynamic system. The change can be either in the connection or wall between a thermodynamic system and its surroundings, or in the value of some variable in the surroundings that is in contact with a wall of the system that allows transfer of the extensive quantity belonging to that variable. [ 1 ] [ 2 ] [ 3 ] [ 4 ] It is assumed in thermodynamics that the operation is conducted in ignorance of any pertinent microscopic information. A thermodynamic operation requires a contribution from an independent external agency, that does not come from the passive properties of the systems. Perhaps the first expression of the distinction between a thermodynamic operation and a thermodynamic process is in Kelvin's statement of the second law of thermodynamics: "It is impossible, by means of inanimate material agency, to derive mechanical effect from any portion of matter by cooling it below the temperature of the surrounding objects." A sequence of events that occurred other than "by means of inanimate material agency" would entail an action by an animate agency, or at least an independent external agency. Such an agency could impose some thermodynamic operations. For example, those operations might create a heat pump, which of course would comply with the second law. A Maxwell's demon conducts an extremely idealized and naturally unrealizable kind of thermodynamic operation. [ 5 ] Another commonly used term that indicates a thermodynamic operation is 'change of constraint', for example referring to the removal of a wall between two otherwise isolated compartments. An ordinary language expression for a thermodynamic operation is used by Edward A. Guggenheim: "tampering" with the bodies. [ 6 ] A typical thermodynamic operation is externally imposed change of position of a piston, so as to alter the volume of the system of interest. Another thermodynamic operation is a removal of an initially separating wall, a manipulation that unites two systems into one undivided system. A typical thermodynamic process consists of a redistribution that spreads a conserved quantity between a system and its surroundings across a previously impermeable but newly semi-permeable wall between them. [ 7 ] More generally, a process can be considered as a transfer of some quantity that is defined by a change of an extensive state variable of the system, corresponding to a conserved quantity, so that a transfer balance equation can be written. [ 8 ] According to Uffink, "... thermodynamic processes only take place after an external intervention on the system (such as: removing a partition, establishing thermal contact with a heat bath, pushing a piston, etc.). They do not correspond to the autonomous behaviour of a free system." [ 9 ] For example, for a closed system of interest, a change of internal energy (an extensive state variable of the system) can be occasioned by transfer of energy as heat. In thermodynamics, heat is not an extensive state variable of the system. The quantity of heat transferred, however, is defined by the amount of adiabatic work that would produce the same change of the internal energy as the heat transfer; energy transferred as heat is the conserved quantity. As a matter of history, the distinction, between a thermodynamic operation and a thermodynamic process, is not found in these terms in nineteenth century accounts.
For example, Kelvin spoke of a "thermodynamic operation" when he meant what present-day terminology calls a thermodynamic operation followed by a thermodynamic process. [ 10 ] Again, Planck usually spoke of a "process" when our present-day terminology would speak of a thermodynamic operation followed by a thermodynamic process. [ 11 ] [ 12 ] Planck held that all "natural processes" (meaning, in present-day terminology, a thermodynamic operation followed by a thermodynamic process) are irreversible and proceed in the sense of increase of entropy sum. [ 13 ] In these terms, it would be by thermodynamic operations that, if he could exist, Maxwell's demon would conduct unnatural affairs, which include transitions in the sense away from thermodynamic equilibrium. They are physically theoretically conceivable up to a point, but are not natural processes in Planck's sense. The reason is that ordinary thermodynamic operations are conducted in total ignorance of the very kinds of microscopic information that is essential to the efforts of Maxwell's demon. A thermodynamic cycle is constructed as a sequence of stages or steps. Each stage consists of a thermodynamic operation followed by a thermodynamic process. For example, an initial thermodynamic operation of a cycle of a Carnot heat engine could be taken as the setting of the working body, at a known high temperature, into contact with a thermal reservoir at the same temperature (the hot reservoir), through a wall permeable only to heat, while it remains in mechanical contact with the work reservoir. This thermodynamic operation is followed by a thermodynamic process, in which the expansion of the working body is so slow as to be effectively reversible, while internal energy is transferred as heat from the hot reservoir to the working body and as work from the working body to the work reservoir. Theoretically, the process terminates eventually, and this ends the stage. The engine is then subject to another thermodynamic operation, and the cycle proceeds into another stage. The cycle completes when the thermodynamic variables (the thermodynamic state) of the working body return to their initial values. A refrigeration device passes a working substance through successive stages, overall constituting a cycle. This may be brought about not by moving or changing separating walls around an unmoving body of working substance, but rather by moving a body of working substance to bring about exposure to a cyclic succession of unmoving unchanging walls. The effect is virtually a cycle of thermodynamic operations. The kinetic energy of bulk motion of the working substance is not a significant feature of the device, and the working substance may be practically considered as nearly at rest. For many chains of reasoning in thermodynamics, it is convenient to think of the combination of two systems into one. It is imagined that the two systems, separated from their surroundings, are juxtaposed and (by a shift of viewpoint) regarded as constituting a new, composite system. The composite system is imagined amid its new overall surroundings. This sets up the possibility of interaction between the two subsystems and between the composite system and its overall surroundings, for example by allowing contact through a wall with a particular kind of permeability. This conceptual device was introduced into thermodynamics mainly in the work of Carathéodory, and has been widely used since then. 
[ 2 ] [ 3 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] If the thermodynamic operation is entire removal of walls, then extensive state variables of the composed system are the respective sums of those of the component systems. This is called the additivity of extensive variables. A thermodynamic system consisting of a single phase, in the absence of external forces, in its own state of internal thermodynamic equilibrium, is homogeneous. [ 18 ] This means that the material in any region of the system can be interchanged with the material of any congruent and parallel region of the system, and the effect is to leave the system thermodynamically unchanged. The thermodynamic operation of scaling is the creation of a new homogeneous system whose size is a multiple of the old size, and whose intensive variables have the same values. Traditionally the size is stated by the mass of the system, but sometimes it is stated by the entropy, or by the volume. [ 19 ] [ 20 ] [ 21 ] [ 22 ] For a given such system Φ, scaled by the real number λ to yield a new one λΦ, a state function X(·) such that X(λΦ) = λX(Φ) is said to be extensive. Such a function X is called a homogeneous function of degree 1. There are two different concepts mentioned here, sharing the same name: (a) the mathematical concept of degree-1 homogeneity in the scaling function; and (b) the physical concept of the spatial homogeneity of the system. It happens that the two agree here, but that is not because they are tautologous. It is a contingent fact of thermodynamics. If two systems, S_a and S_b, have identical intensive variables, a thermodynamic operation of wall removal can compose them into a single system, S, with the same intensive variables. If, for example, their internal energies are in the ratio λ : (1 − λ), then the composed system, S, has internal energy in the ratio of 1 : λ to that of the system S_a. By the inverse thermodynamic operation, the system S can be split into two subsystems in the obvious way. As usual, these thermodynamic operations are conducted in total ignorance of the microscopic states of the systems. More particularly, it is characteristic of macroscopic thermodynamics that the probability vanishes that the splitting operation occurs at an instant when system S is in the kind of extreme transient microscopic state envisaged by the Poincaré recurrence argument. Such splitting and recomposition is in accord with the above-defined additivity of extensive variables. Thermodynamic operations appear in the statements of the laws of thermodynamics. For the zeroth law, one considers operations of thermally connecting and disconnecting systems. For the second law, some statements contemplate an operation of connecting two initially unconnected systems. For the third law, one statement is that no finite sequence of thermodynamic operations can bring a system to absolute zero temperature.
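Degree-1 homogeneity can be checked directly on a concrete state function. The sketch below evaluates the Sackur–Tetrode entropy of a monatomic ideal gas (argon-like parameters, chosen only for illustration) and confirms that scaling U, V, and N by λ scales S by λ:

    import numpy as np

    k = 1.380649e-23     # J/K, Boltzmann constant
    h = 6.62607015e-34   # J s, Planck constant
    m = 6.63e-26         # kg, roughly the mass of an argon atom

    def sackur_tetrode(U, V, N):
        """Entropy S(U, V, N) of a monatomic ideal gas."""
        return N * k * (np.log((V / N) * (4 * np.pi * m * U / (3 * N * h**2))**1.5) + 2.5)

    N = 1e23
    U = 1.5 * N * k * 300.0   # internal energy of the gas at ~300 K
    V = 4e-3                  # m^3

    lam = 3.0
    S1 = sackur_tetrode(U, V, N)
    S2 = sackur_tetrode(lam * U, lam * V, lam * N)
    print(S2 / S1)   # -> 3.0, i.e. S(lam*U, lam*V, lam*N) = lam * S(U, V, N)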
https://en.wikipedia.org/wiki/Thermodynamic_operation
A thermodynamic potential (or more accurately, a thermodynamic potential energy) [ 1 ] [ 2 ] is a scalar quantity used to represent the thermodynamic state of a system. Just as in mechanics, where potential energy is defined as capacity to do work, different potentials have different meanings. The concept of thermodynamic potentials was introduced by Pierre Duhem in 1886. Josiah Willard Gibbs in his papers used the term fundamental functions. Effects of changes in thermodynamic potentials can sometimes be measured directly, while their absolute magnitudes can only be assessed using computational chemistry or similar methods. [ 3 ] One main thermodynamic potential that has a physical interpretation is the internal energy U. It is the energy of configuration of a given system of conservative forces (that is why it is called potential) and only has meaning with respect to a defined set of references (or data). Expressions for all other thermodynamic energy potentials are derivable via Legendre transforms from an expression for U. In other words, each thermodynamic potential is equivalent to the other thermodynamic potentials; each potential is a different expression of the others. In thermodynamics, external forces, such as gravity, are counted as contributing to total energy rather than to thermodynamic potentials. For example, the working fluid in a steam engine sitting on top of Mount Everest has higher total energy due to gravity than it has at the bottom of the Mariana Trench, but the same thermodynamic potentials. This is because the gravitational potential energy belongs to the total energy rather than to thermodynamic potentials such as internal energy. Five common thermodynamic potentials are: [ 4 ]

Internal energy: U
Helmholtz free energy: F = U − TS
Enthalpy: H = U + pV
Gibbs free energy: G = U + pV − TS
Landau potential (grand potential): Ω = U − TS − Σ_i μ_i N_i

where T = temperature, S = entropy, p = pressure, V = volume. N_i is the number of particles of type i in the system and μ_i is the chemical potential for an i-type particle. The set of all N_i are also included as natural variables but may be ignored when no chemical reactions are occurring which cause them to change. The Helmholtz free energy is in the ISO/IEC standard called Helmholtz energy [ 1 ] or Helmholtz function. It is often denoted by the symbol F, but the use of A is preferred by IUPAC, [ 5 ] ISO and IEC. [ 6 ] These five common potentials are all potential energies, but there are also entropy potentials. The thermodynamic square can be used as a tool to recall and derive some of the potentials. Just as in mechanics, where potential energy is defined as capacity to do work, different potentials have different meanings, as described below. From these meanings (which actually apply in specific conditions, e.g. constant pressure, temperature, etc.), for positive changes (e.g., ΔU > 0), we can say that ΔU is the energy added to the system, ΔF is the total work done on it, ΔG is the non-mechanical work done on it, and ΔH is the sum of non-mechanical work done on the system and the heat given to it. Note that the internal energy is conserved, but the Gibbs energy and the Helmholtz energy are not conserved, despite being named "energy". They can be better interpreted as the potential to perform "useful work", and the potential can be wasted. [ 7 ] Thermodynamic potentials are very useful when calculating the equilibrium results of a chemical reaction, or when measuring the properties of materials in a chemical reaction.
The chemical reactions usually take place under some constraints such as constant pressure and temperature, or constant entropy and volume, and when this is true, there is a corresponding thermodynamic potential that comes into play. Just as in mechanics, the system will tend towards a lower value of a potential and at equilibrium, under these constraints, the potential will take the unchanging minimum value. The thermodynamic potentials can also be used to estimate the total amount of energy available from a thermodynamic system under the appropriate constraint. In particular: (see principle of minimum energy for a derivation) [ 8 ] For each thermodynamic potential, there are thermodynamic variables that need to be held constant to specify the potential value at a thermodynamic equilibrium state, much like the independent variables of a mathematical function. These variables are termed the natural variables of that potential. [ 9 ] The natural variables are important not only to specify the potential value at equilibrium, but also because if a thermodynamic potential can be determined as a function of its natural variables, all of the thermodynamic properties of the system can be found by taking partial derivatives of that potential with respect to its natural variables, and this is true for no other combination of variables. If a thermodynamic potential is not given as a function of its natural variables, it will not, in general, yield all of the thermodynamic properties of the system. The set of natural variables for each of the above four thermodynamic potentials is formed from a combination of the T, S, p, V variables, excluding any pairs of conjugate variables; there is no natural variable set for a potential including the T–S or p–V variables together as conjugate variables for energy. An exception to this rule is the N_i–μ_i conjugate pairs, as there is no reason to ignore these in the thermodynamic potentials, and in fact we may additionally define the four potentials for each species. [ 10 ] Using IUPAC notation in which the brackets contain the natural variables (other than the main four), we have: If there is only one species, then we are done. But, if there are, say, two species, then there will be additional potentials such as U[μ₁, μ₂] = U − μ₁N₁ − μ₂N₂ and so on. If there are D dimensions to the thermodynamic space, then there are 2^D unique thermodynamic potentials. For the simplest case, a single-phase ideal gas, there will be three dimensions, yielding eight thermodynamic potentials. The definitions of the thermodynamic potentials may be differentiated and, along with the first and second laws of thermodynamics, a set of differential equations known as the fundamental equations follow. [ 11 ] (Actually they are all expressions of the same fundamental thermodynamic relation, but are expressed in different variables.) By the first law of thermodynamics, any differential change in the internal energy U of a system can be written as the heat flowing into the system minus the work done by the system on the environment, along with any change due to the addition of new particles to the system:

dU = δQ − δW + Σ_i μ_i dN_i

where δQ is the infinitesimal heat flow into the system, δW is the infinitesimal work done by the system, μ_i is the chemical potential of particle type i and N_i is the number of particles of type i.
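To illustrate how a potential expressed in its natural variables yields everything else, here is a sketch using sympy; the monatomic ideal-gas Helmholtz energy F(T, V, N) is the assumed starting point:

    import sympy as sp

    T, V, N = sp.symbols('T V N', positive=True)
    k, h, m = sp.symbols('k h m', positive=True)

    # Thermal de Broglie wavelength and the monatomic ideal-gas Helmholtz energy,
    # given as a function of its natural variables (T, V, N).
    lam = h / sp.sqrt(2 * sp.pi * m * k * T)
    F = -N * k * T * (sp.log(V / (N * lam**3)) + 1)

    p = -sp.diff(F, V)    # pressure:            N k T / V  (the ideal gas law)
    S = -sp.diff(F, T)    # entropy:             the Sackur-Tetrode expression
    mu = sp.diff(F, N)    # chemical potential:  k T ln(N lam^3 / V)

    print(sp.simplify(p))
    print(sp.simplify(mu))

Every property follows from partial derivatives with respect to the natural variables, exactly as the text states.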
(Neither δQ nor δW are exact differentials, i.e., they are thermodynamic-process path-dependent. Small changes in these variables are, therefore, represented with δ rather than d.) By the second law of thermodynamics, we can express the internal energy change in terms of state functions and their differentials. In the case of reversible changes we have:

δQ = T dS
δW = p dV

where the equality holds for reversible processes. This leads to the standard differential form of the internal energy in the case of a quasistatic reversible change:

dU = T dS − p dV + Σ_i μ_i dN_i

Since U, S and V are thermodynamic functions of state (also called state functions), the above relation also holds for arbitrary non-reversible changes. If the system has more external variables than just the volume that can change, the fundamental thermodynamic relation generalizes to:

dU = T dS − p dV + Σ_j μ_j dN_j + Σ_i X_i dx_i

Here the X_i are the generalized forces corresponding to the external variables x_i. [ 12 ] Applying Legendre transforms repeatedly, the following differential relations hold for the four potentials (fundamental thermodynamic equations, or the fundamental thermodynamic relation):

dU = T dS − p dV + Σ_i μ_i dN_i
dF = −S dT − p dV + Σ_i μ_i dN_i
dH = T dS + V dp + Σ_i μ_i dN_i
dG = −S dT + V dp + Σ_i μ_i dN_i

The infinitesimals on the right-hand side of each of the above equations are of the natural variables of the potential on the left-hand side. Similar equations can be developed for all of the other thermodynamic potentials of the system. There will be one fundamental equation for each thermodynamic potential, resulting in a total of 2^D fundamental equations. The differences between the four thermodynamic potentials can be summarized as follows:

d(pV) = dH − dU = dG − dF
d(TS) = dU − dF = dH − dG

We can use the above equations to derive some differential definitions of some thermodynamic parameters. If we define Φ to stand for any of the thermodynamic potentials, then the above equations are of the form:

dΦ = Σ_i x_i dy_i

where x_i and y_i are conjugate pairs, and the y_i are the natural variables of the potential Φ. From the chain rule it follows that:

x_j = (∂Φ/∂y_j)_{y_{i≠j}}

where {y_{i≠j}} is the set of all natural variables of Φ except y_j, which are held constant. This yields expressions for various thermodynamic parameters in terms of the derivatives of the potentials with respect to their natural variables. These equations are known as equations of state since they specify parameters of the thermodynamic state.
[ 13 ] If we restrict ourselves to the potentials U (internal energy), F (Helmholtz energy), H (enthalpy) and G (Gibbs energy), then we have the following equations of state (subscripts showing natural variables that are held constant):

+T = (∂U/∂S)_{V,{N_i}} = (∂H/∂S)_{p,{N_i}}
−p = (∂U/∂V)_{S,{N_i}} = (∂F/∂V)_{T,{N_i}}
+V = (∂H/∂p)_{S,{N_i}} = (∂G/∂p)_{T,{N_i}}
−S = (∂G/∂T)_{p,{N_i}} = (∂F/∂T)_{V,{N_i}}
μ_j = (∂φ/∂N_j)_{X,Y,{N_{i≠j}}}

where, in the last equation, φ is any of the thermodynamic potentials (U, F, H, or G), and X, Y, {N_{i≠j}} are the set of natural variables for that potential, excluding N_j. If we use all thermodynamic potentials, then we will have more equations of state such as

−N_j = (∂U[μ_j]/∂μ_j)_{S,V,{N_{i≠j}}}

and so on. In all, if the thermodynamic space is D dimensions, then there will be D equations for each potential, resulting in a total of D·2^D equations of state because 2^D thermodynamic potentials exist. If the D equations of state for a particular potential are known, then the fundamental equation for that potential (i.e., the exact differential of the thermodynamic potential) can be determined. This means that all thermodynamic information about the system will be known, because the fundamental equations for any other potential can be found via the Legendre transforms and the corresponding equations of state for each potential as partial derivatives of the potential can also be found. The above equations of state suggest methods to experimentally measure changes in the thermodynamic potentials using physically measurable parameters. For example, the free energy expressions

+V = (∂G/∂p)_{T,{N_i}} and −p = (∂F/∂V)_{T,{N_i}}

can be integrated at constant temperature and quantities to obtain:

ΔG = ∫ V dp (at constant T, {N_i})
ΔF = −∫ p dV (at constant T, {N_i})

which can be measured by monitoring the measurable variables of pressure, temperature and volume. Changes in the enthalpy and internal energy can be measured by calorimetry (which measures the amount of heat ΔQ released or absorbed by a system). The expressions

+T = (∂U/∂S)_{V,{N_i}} = (∂H/∂S)_{p,{N_i}}

can be integrated:

ΔU = ∫ T dS (at constant V, {N_i})
ΔH = ∫ T dS (at constant p, {N_i})

Note that these measurements are made at constant {N_j} and are therefore not applicable to situations in which chemical reactions take place. Again, define x_i and y_i to be conjugate pairs, and the y_i to be the natural variables of some potential Φ.
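As a numerical sketch of the first of these measurement recipes, the code below integrates V dp along an isotherm for one mole of an ideal gas, a case where the closed form ΔG = nRT ln(p₂/p₁) is available for comparison; the state values are illustrative:

    import numpy as np

    R = 8.314   # J/(mol K)
    n, T = 1.0, 298.15
    p1, p2 = 1.0e5, 5.0e5   # Pa

    p = np.linspace(p1, p2, 20001)
    V = n * R * T / p       # "measured" volume along the isotherm (ideal gas)
    # Delta G = integral of V dp at constant T (trapezoidal rule):
    dG = np.sum(0.5 * (V[1:] + V[:-1]) * np.diff(p))

    print(dG, n * R * T * np.log(p2 / p1))   # both ~ 3.99 kJ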
We may take the "cross differentials" of the state equations, which obey the following relationship: ( ∂ ∂ y j ( ∂ Φ ∂ y k ) { y i ≠ k } ) { y i ≠ j } = ( ∂ ∂ y k ( ∂ Φ ∂ y j ) { y i ≠ j } ) { y i ≠ k } {\displaystyle \left({\frac {\partial }{\partial y_{j}}}\left({\frac {\partial \Phi }{\partial y_{k}}}\right)_{\{y_{i\neq k}\}}\right)_{\{y_{i\neq j}\}}=\left({\frac {\partial }{\partial y_{k}}}\left({\frac {\partial \Phi }{\partial y_{j}}}\right)_{\{y_{i\neq j}\}}\right)_{\{y_{i\neq k}\}}} From these we get the Maxwell relations . [ 4 ] [ 14 ] There will be ⁠ ( D − 1) / 2 ⁠ of them for each potential giving a total of ⁠ D ( D − 1) / 2 ⁠ equations in all. If we restrict ourselves the U , F , H , G ( ∂ T ∂ V ) S , { N i } = − ( ∂ p ∂ S ) V , { N i } {\displaystyle \left({\frac {\partial T}{\partial V}}\right)_{S,\{N_{i}\}}=-\left({\frac {\partial p}{\partial S}}\right)_{V,\{N_{i}\}}} ( ∂ T ∂ p ) S , { N i } = + ( ∂ V ∂ S ) p , { N i } {\displaystyle \left({\frac {\partial T}{\partial p}}\right)_{S,\{N_{i}\}}=+\left({\frac {\partial V}{\partial S}}\right)_{p,\{N_{i}\}}} ( ∂ S ∂ V ) T , { N i } = + ( ∂ p ∂ T ) V , { N i } {\displaystyle \left({\frac {\partial S}{\partial V}}\right)_{T,\{N_{i}\}}=+\left({\frac {\partial p}{\partial T}}\right)_{V,\{N_{i}\}}} ( ∂ S ∂ p ) T , { N i } = − ( ∂ V ∂ T ) p , { N i } {\displaystyle \left({\frac {\partial S}{\partial p}}\right)_{T,\{N_{i}\}}=-\left({\frac {\partial V}{\partial T}}\right)_{p,\{N_{i}\}}} Using the equations of state involving the chemical potential we get equations such as: ( ∂ T ∂ N j ) V , S , { N i ≠ j } = ( ∂ μ j ∂ S ) V , { N i } {\displaystyle \left({\frac {\partial T}{\partial N_{j}}}\right)_{V,S,\{N_{i\neq j}\}}=\left({\frac {\partial \mu _{j}}{\partial S}}\right)_{V,\{N_{i}\}}} and using the other potentials we can get equations such as: ( ∂ N j ∂ V ) S , μ j , { N i ≠ j } = − ( ∂ p ∂ μ j ) S , V { N i ≠ j } {\displaystyle \left({\frac {\partial N_{j}}{\partial V}}\right)_{S,\mu _{j},\{N_{i\neq j}\}}=-\left({\frac {\partial p}{\partial \mu _{j}}}\right)_{S,V\{N_{i\neq j}\}}} ( ∂ N j ∂ N k ) S , V , μ j , { N i ≠ j , k } = − ( ∂ μ k ∂ μ j ) S , V { N i ≠ j } {\displaystyle \left({\frac {\partial N_{j}}{\partial N_{k}}}\right)_{S,V,\mu _{j},\{N_{i\neq j,k}\}}=-\left({\frac {\partial \mu _{k}}{\partial \mu _{j}}}\right)_{S,V\{N_{i\neq j}\}}} Again, define x i and y i to be conjugate pairs, and the y i to be the natural variables of the internal energy. Since all of the natural variables of the internal energy U are extensive quantities U ( { α y i } ) = α U ( { y i } ) {\displaystyle U(\{\alpha y_{i}\})=\alpha U(\{y_{i}\})} it follows from Euler's homogeneous function theorem that the internal energy can be written as: U ( { y i } ) = ∑ j y j ( ∂ U ∂ y j ) { y i ≠ j } {\displaystyle U(\{y_{i}\})=\sum _{j}y_{j}\left({\frac {\partial U}{\partial y_{j}}}\right)_{\{y_{i\neq j}\}}} From the equations of state, we then have: U = T S − p V + ∑ i μ i N i {\displaystyle U=TS-pV+\sum _{i}\mu _{i}N_{i}} This formula is known as an Euler relation , because Euler's theorem on homogeneous functions leads to it. [ 15 ] [ 16 ] (It was not discovered by Euler in an investigation of thermodynamics, which did not exist in his day.). 
Substituting into the expressions for the other main potentials, we have:

F = −pV + Σ_i μ_i N_i
H = TS + Σ_i μ_i N_i
G = Σ_i μ_i N_i

As in the above sections, this process can be carried out on all of the other thermodynamic potentials. Thus, there is another Euler relation, based on the expression of entropy as a function of internal energy and other extensive variables. Yet other Euler relations hold for other fundamental equations for energy or entropy, as respective functions of other state variables including some intensive state variables. [ 17 ] Deriving the Gibbs–Duhem equation from basic thermodynamic state equations is straightforward. [ 11 ] [ 18 ] [ 19 ] Equating any thermodynamic potential definition with its Euler relation expression yields:

U = TS − pV + Σ_i μ_i N_i

Differentiating, and using the second law:

dU = T dS − p dV + Σ_i μ_i dN_i

yields:

0 = S dT − V dp + Σ_i N_i dμ_i

which is the Gibbs–Duhem relation. The Gibbs–Duhem equation is a relationship among the intensive parameters of the system. It follows that for a simple system with I components, there will be I + 1 independent parameters, or degrees of freedom. For example, a simple system with a single component will have two degrees of freedom, and may be specified by only two parameters, such as pressure and volume. The law is named after Josiah Willard Gibbs and Pierre Duhem. As the internal energy is a convex function of entropy and volume, the stability condition requires that the second derivative of internal energy with respect to entropy or volume be positive. It is commonly expressed as d²U > 0. Since the maximum principle of entropy is equivalent to the minimum principle of internal energy, the combined criterion for stability or thermodynamic equilibrium is expressed as d²U > 0 and dU = 0 for the parameters entropy and volume. This is analogous to the d²S < 0 and dS = 0 condition for entropy at equilibrium. [ 20 ] The same concept can be applied to the various thermodynamic potentials by identifying whether they are convex or concave functions of their respective variables:

(∂²F/∂T²)_{V,N} ≤ 0 and (∂²F/∂V²)_{T,N} ≥ 0

where the Helmholtz energy is a concave function of temperature and a convex function of volume;

(∂²H/∂p²)_{S,N} ≤ 0 and (∂²H/∂S²)_{p,N} ≥ 0

where the enthalpy is a concave function of pressure and a convex function of entropy;

(∂²G/∂T²)_{p,N} ≤ 0 and (∂²G/∂p²)_{T,N} ≤ 0

where the Gibbs potential is a concave function of both pressure and temperature.
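The Euler relation can be verified symbolically for a concrete fundamental equation. In the sketch below, the monatomic ideal-gas U(S, V, N) (the Sackur–Tetrode entropy solved for U) is the assumed input:

    import sympy as sp

    S, V, N = sp.symbols('S V N', positive=True)
    h, m, k = sp.symbols('h m k', positive=True)

    # Fundamental equation U(S, V, N) of a monatomic ideal gas.
    U = (3*N*h**2 / (4*sp.pi*m)) * (N/V)**sp.Rational(2, 3) \
        * sp.exp(2*S/(3*N*k) - sp.Rational(5, 3))

    T = sp.diff(U, S)     # temperature
    p = -sp.diff(U, V)    # pressure
    mu = sp.diff(U, N)    # chemical potential

    # Euler relation U = T S - p V + mu N, a consequence of degree-1 homogeneity:
    print(sp.simplify(T*S - p*V + mu*N - U))   # -> 0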
In general the thermodynamic potentials (the internal energy and its Legendre transforms) are convex functions of their extensive variables and concave functions of their intensive variables. The stability conditions impose that the isothermal compressibility is positive and that, for non-negative temperature, C_P > C_V. [ 21 ] Changes in these quantities are useful for assessing the degree to which a chemical reaction will proceed. The relevant quantity depends on the reaction conditions, as shown in the following table. Δ denotes the change in the potential, and at equilibrium the change will be zero. Most commonly one considers reactions at constant p and T, so the Gibbs free energy is the most useful potential in studies of chemical reactions.
https://en.wikipedia.org/wiki/Thermodynamic_potential
Classical thermodynamics considers three main kinds of thermodynamic processes: (1) changes in a system, (2) cycles in a system, and (3) flow processes. (1) A thermodynamic process is a process in which the thermodynamic state of a system is changed. A change in a system is defined by a passage from an initial to a final state of thermodynamic equilibrium. In classical thermodynamics, the actual course of the process is not the primary concern, and often is ignored. A state of thermodynamic equilibrium endures unchangingly unless it is interrupted by a thermodynamic operation that initiates a thermodynamic process. The equilibrium states are each respectively fully specified by a suitable set of thermodynamic state variables, that depend only on the current state of the system, not on the path taken by the processes that produce the state. In general, during the actual course of a thermodynamic process, the system may pass through physical states which are not describable as thermodynamic states, because they are far from internal thermodynamic equilibrium. Non-equilibrium thermodynamics, however, considers processes in which the states of the system are close to thermodynamic equilibrium, and aims to describe the continuous passage along the path, at definite rates of progress. As a useful theoretical but not actually physically realizable limiting case, a process may be imagined to take place practically infinitely slowly or smoothly enough to allow it to be described by a continuous path of equilibrium thermodynamic states, when it is called a "quasi-static" process. This is a theoretical exercise in differential geometry, as opposed to a description of an actually possible physical process; in this idealized case, the calculation may be exact. A really possible or actual thermodynamic process, considered closely, involves friction. This contrasts with theoretically idealized, imagined, or limiting, but not actually possible, quasi-static processes which may occur with a theoretical slowness that avoids friction. It also contrasts with idealized frictionless processes in the surroundings, which may be thought of as including 'purely mechanical systems'; this difference comes close to defining a thermodynamic process. [ 1 ] (2) A cyclic process carries the system through a cycle of stages, starting and being completed in some particular state. The descriptions of the staged states of the system are not the primary concern. The primary concern is the sums of matter and energy inputs and outputs to the cycle. Cyclic processes were important conceptual devices in the early days of thermodynamical investigation, while the concept of the thermodynamic state variable was being developed. (3) Defined by flows through a system, a flow process is a steady state of flows into and out of a vessel with definite wall properties. The internal state of the vessel contents is not the primary concern. The quantities of primary concern describe the states of the inflow and the outflow materials, and, on the side, the transfers of heat, work, and kinetic and potential energies for the vessel. Flow processes are of interest in engineering. Defined by a cycle of transfers into and out of a system, a cyclic process is described by the quantities transferred in the several stages of the cycle. The descriptions of the staged states of the system may be of little or even no interest.
A cycle is a sequence of a small number of thermodynamic processes that can be repeated indefinitely often, each time returning the system to its original state. For this, the staged states themselves are not necessarily described, because it is the transfers that are of interest. It is reasoned that if the cycle can be repeated indefinitely often, then it can be assumed that the states are recurrently unchanged. The condition of the system during the several staged processes may be of even less interest than is the precise nature of the recurrent states. If, however, the several staged processes are idealized and quasi-static, then the cycle is described by a path through a continuous progression of equilibrium states. Defined by flows through a system, a flow process is a steady state of flow into and out of a vessel with definite wall properties. The internal state of the vessel contents is not the primary concern. The quantities of primary concern describe the states of the inflow and the outflow materials, and, on the side, the transfers of heat, work, and kinetic and potential energies for the vessel. The states of the inflow and outflow materials consist of their internal states, and of their kinetic and potential energies as whole bodies. Very often, the quantities that describe the internal states of the input and output materials are estimated on the assumption that they are bodies in their own states of internal thermodynamic equilibrium. Because rapid reactions are permitted, the thermodynamic treatment may be approximate, not exact. A quasi-static thermodynamic process can be visualized by graphically plotting the path of idealized changes to the system's state variables. In the example, a cycle consisting of four quasi-static processes is shown. Each process has a well-defined start and end point in the pressure–volume state space. In this particular example, processes 1 and 3 are isothermal, whereas processes 2 and 4 are isochoric. The PV diagram is a particularly useful visualization of a quasi-static process, because the area under the curve of a process is the amount of work done by the system during that process. Thus work is considered to be a process variable, as its exact value depends on the particular path taken between the start and end points of the process. Similarly, heat may be transferred during a process, and it too is a process variable. It is often useful to group processes into pairs, in which each variable held constant is one member of a conjugate pair. The pressure–volume conjugate pair is concerned with the transfer of mechanical energy as the result of work. The temperature–entropy conjugate pair is concerned with the transfer of energy, especially for a closed system. The processes just above have assumed that the boundaries are also impermeable to particles. Otherwise, we may assume boundaries that are rigid, but are permeable to one or more types of particle. Similar considerations then hold for the chemical potential–particle number conjugate pair, which is concerned with the transfer of energy via this transfer of particles. Any of the thermodynamic potentials may be held constant during a process. A polytropic process is a thermodynamic process that obeys the relation

P V^n = C

where P is the pressure, V is volume, n is any real number (the "polytropic index"), and C is a constant. This equation can be used to accurately characterize processes of certain systems, notably the compression or expansion of a gas, but in some cases, also liquids and solids.
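Since work is the area under the process path, a polytropic path gives a closed-form work integral, W = (P₁V₁ − P₂V₂)/(n − 1) for n ≠ 1. The sketch below checks that closed form against direct numerical quadrature; the state values are arbitrary illustrative numbers:

    import numpy as np

    def polytropic_work(p1, V1, V2, n):
        """Work done BY the gas, W = integral of p dV, along p V^n = C."""
        C = p1 * V1**n
        if np.isclose(n, 1.0):
            return C * np.log(V2 / V1)   # special case n = 1
        p2 = C / V2**n
        return (p1 * V1 - p2 * V2) / (n - 1)

    p1, V1, V2, n = 1.0e5, 1.0e-3, 2.0e-3, 1.4
    V = np.linspace(V1, V2, 100001)
    p = p1 * (V1 / V)**n
    W_numeric = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(V))   # trapezoidal rule

    print(polytropic_work(p1, V1, V2, n), W_numeric)   # both ~ 60.5 J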
According to Planck, one may think of three main classes of thermodynamic process: natural, fictively reversible, and impossible or unnatural. [ 2 ] [ 3 ] Only natural processes occur in nature. For thermodynamics, a natural process is a transfer between systems that increases the sum of their entropies, and is irreversible. [ 2 ] Natural processes may occur spontaneously upon the removal of a constraint, or upon some other thermodynamic operation, or may be triggered in a metastable or unstable system, as for example in the condensation of a supersaturated vapour. [ 4 ] Planck emphasised the occurrence of friction as an important characteristic of natural thermodynamic processes that involve transfer of matter or energy between system and surroundings. To describe the geometry of graphical surfaces that illustrate equilibrium relations between thermodynamic functions of state, one can fictively think of so-called "reversible processes". They are convenient theoretical objects that trace paths across graphical surfaces. They are called "processes" but do not describe naturally occurring processes, which are always irreversible. Because the points on the paths are points of thermodynamic equilibrium, it is customary to think of the "processes" described by the paths as fictively "reversible". [ 2 ] Reversible processes are always quasistatic processes, but the converse is not always true. Unnatural processes are logically conceivable but do not occur in nature. They would decrease the sum of the entropies if they occurred. [ 2 ] A quasistatic process is an idealized or fictive model of a thermodynamic "process" considered in theoretical studies. It does not occur in physical reality. It may be imagined as happening infinitely slowly so that the system passes through a continuum of states that are infinitesimally close to equilibrium.
https://en.wikipedia.org/wiki/Thermodynamic_process
A thermodynamic solar panel is a type of air source heat pump. Instead of a large fan to take energy from the air, it has a flat plate collector. This means the system gains energy from the sun as well as the ambient air. [ 1 ] Thermodynamic water heaters use a compressor to transfer the collected heat from the panel to the hot water system using refrigerant fluid that circulates in a closed cycle. [ citation needed ] In the UK, thermodynamic solar panels cannot be used to claim the Renewable Heat Incentive. This is due to the lack of technical standards for the testing and installation. The UK Microgeneration Certification Scheme is working to develop a testing standard, either based on MIS 3001 or MIS 3005, or a brand new scheme document if appropriate. [ 2 ] Lab testing has been carried out by Das Wärmepumpen-Testzentrum Buchs (WPZ) in Buchs, Switzerland, on an Energi Eco 200esm/i thermodynamic solar panel system. [ citation needed ] This showed a coefficient of performance of 2.8 or 2.9 (depending on tank volume). [ 3 ] In the UK, the first independent test is underway at Narec Distributed Energy. So far, data is available for January to April 2014. [ 4 ] As with the Carnot cycle, the achievable efficiency is strongly dependent on the temperatures on both sides of the system. [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Thermodynamic_solar_panel
The thermodynamic square (also known as the thermodynamic wheel, Guggenheim scheme or Born square) is a mnemonic diagram attributed to Max Born and used to help determine thermodynamic relations. Born presented the thermodynamic square in a 1929 lecture. [ 1 ] The symmetry of thermodynamics appears in a paper by F.O. Koenig. [ 2 ] The corners represent common conjugate variables while the sides represent thermodynamic potentials. The placement and relation among the variables serves as a key to recall the relations they constitute. A mnemonic used by students to remember the Maxwell relations (in thermodynamics) is "Good Physicists Have Studied Under Very Fine Teachers", which helps them remember the order of the variables in the square, in clockwise direction. Another mnemonic is "Valid Facts and Theoretical Understanding Generate Solutions to Hard Problems", which gives the letters in the normal left-to-right writing direction. Both times A has to be identified with F, another common symbol for the Helmholtz free energy. To prevent the need for this switch, the following mnemonic is also widely used: "Good Physicists Have Studied Under Very Ambitious Teachers"; another one is "Good Physicists Have SUVAT", in reference to the equations of motion. One other useful variation of the mnemonic, when the symbol E is used for internal energy instead of U, is the following: "Some Hard Problems Go To Finish Very Easy". [ 3 ] The thermodynamic square is mostly used to compute the derivative of any thermodynamic potential of interest. Suppose for example one desires to compute the derivative of the internal energy U. The following procedure should be considered: The Gibbs–Duhem equation can be derived by using this technique. Notice though that the final addition of the differential of the chemical potential has to be generalized. The thermodynamic square can also be used to find the first-order derivatives in the common Maxwell relations. The following procedure should be considered: By rotating the ⊔ shape (randomly, for example by 90 degrees counterclockwise into a ⊐ shape), other relations such as:

(∂p/∂T)_V = (∂S/∂V)_T

can be found. Finally, the potential at the center of each side is a natural function of the variables at the corners of that side. So, G is a natural function of p and T, and U is a natural function of S and V.
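The sign rules of the square can be encoded in a few lines of code. The sketch below stores the corner layout and the two diagonal arrows (pointing from S to T and from p to V) and prints the fundamental differential for each of the four potentials; the data-structure choices are just one possible encoding:

    # Born square layout:   V   F   T
    #                       U       G
    #                       S   H   p
    # Each potential's natural variables are its adjacent corners; the coefficient
    # of each differential is the diagonally opposite corner. The diagonal arrows
    # run S -> T and p -> V; a coefficient takes "+" if an arrow points toward it
    # and "-" if an arrow points away from it.
    corners = {'U': ('S', 'V'), 'F': ('V', 'T'), 'G': ('T', 'p'), 'H': ('p', 'S')}
    opposite = {'S': 'T', 'T': 'S', 'V': 'p', 'p': 'V'}
    arrow_heads = {'T', 'V'}   # the arrows end on T and on V

    for pot, nat_vars in corners.items():
        terms = []
        for y in nat_vars:
            coeff = opposite[y]
            sign = '+' if coeff in arrow_heads else '-'
            terms.append(f'{sign} {coeff} d{y}')
        print(f'd{pot} =', ' '.join(terms))

Running it reproduces dU = +T dS − p dV, dF = −p dV − S dT, dG = −S dT + V dp, and dH = +V dp + T dS.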
https://en.wikipedia.org/wiki/Thermodynamic_square
In thermodynamics , a thermodynamic state of a system is its condition at a specific time; that is, fully identified by values of a suitable set of parameters known as state variables , state parameters or thermodynamic variables. Once such a set of values of thermodynamic variables has been specified for a system, the values of all thermodynamic properties of the system are uniquely determined. Usually, by default, a thermodynamic state is taken to be one of thermodynamic equilibrium . This means that the state is not merely the condition of the system at a specific time, but that the condition is the same, unchanging, over an indefinitely long duration of time. When a system undergoes a change from one state to another, it is said to traverse a path. The path can be described by how the properties change, like isothermal (constant temperature) or isobaric (constant pressure) paths. Thermodynamics sets up an idealized conceptual structure that can be summarized by a formal scheme of definitions and postulates. Thermodynamic states are amongst the fundamental or primitive objects or notions of the scheme, for which their existence is primary and definitive, rather than being derived or constructed from other concepts. [ 1 ] [ 2 ] [ 3 ] A thermodynamic system is not simply a physical system . [ 4 ] Rather, in general, infinitely many different alternative physical systems comprise a given thermodynamic system, because in general a physical system has vastly many more microscopic characteristics than are mentioned in a thermodynamic description. A thermodynamic system is a macroscopic object , the microscopic details of which are not explicitly considered in its thermodynamic description. The number of state variables required to specify the thermodynamic state depends on the system, and is not always known in advance of experiment; it is usually found from experimental evidence. The number is always two or more; usually it is not more than some dozen. Though the number of state variables is fixed by experiment, there remains choice of which of them to use for a particular convenient description; a given thermodynamic system may be alternatively identified by several different choices of the set of state variables. The choice is usually made on the basis of the walls and surroundings that are relevant for the thermodynamic processes that are to be considered for the system. For example, if it is intended to consider heat transfer for the system, then a wall of the system should be permeable to heat, and that wall should connect the system to a body, in the surroundings, that has a definite time-invariant temperature. [ 5 ] [ 6 ] For equilibrium thermodynamics, in a thermodynamic state of a system, its contents are in internal thermodynamic equilibrium, with zero flows of all quantities, both internal and between system and surroundings. For Planck, the primary characteristic of a thermodynamic state of a system that consists of a single phase , in the absence of an externally imposed force field, is spatial homogeneity. [ 7 ] For non-equilibrium thermodynamics , a suitable set of identifying state variables includes some macroscopic variables, for example a non-zero spatial gradient of temperature, that indicate departure from thermodynamic equilibrium. Such non-equilibrium identifying state variables indicate that some non-zero flow may be occurring within the system or between system and surroundings. [ 8 ] A thermodynamic system can be identified or described in various ways. 
Most directly, it can be identified by a suitable set of state variables. Less directly, it can be described by a suitable set of quantities that includes state variables and state functions. The primary or original identification of the thermodynamic state of a body of matter is by directly measurable ordinary physical quantities. For some simple purposes, for a given body of given chemical constitution, a sufficient set of such quantities is 'volume and pressure'. Besides the directly measurable ordinary physical variables that originally identify a thermodynamic state of a system, the system is characterized by further quantities called state functions, which are also called state variables, thermodynamic variables, state quantities, or functions of state. They are uniquely determined by the thermodynamic state as it has been identified by the original state variables. There are many such state functions. Examples are internal energy, enthalpy, Helmholtz free energy, Gibbs free energy, thermodynamic temperature, and entropy. For a given body, of a given chemical constitution, when its thermodynamic state has been fully defined by its pressure and volume, then its temperature is uniquely determined. Thermodynamic temperature is a specifically thermodynamic concept, while the original directly measurable state variables are defined by ordinary physical measurements, without reference to thermodynamic concepts; for this reason, it is helpful to regard thermodynamic temperature as a state function. A passage from a given initial thermodynamic state to a given final thermodynamic state of a thermodynamic system is known as a thermodynamic process; usually this is transfer of matter or energy between system and surroundings. In any thermodynamic process, whatever may be the intermediate conditions during the passage, the total respective change in the value of each thermodynamic state variable depends only on the initial and final states. For an idealized continuous or quasi-static process, this means that infinitesimal incremental changes in such variables are exact differentials. Together, the incremental changes throughout the process, and the initial and final states, fully determine the idealized process. In the most commonly cited simple example, an ideal gas, the thermodynamic variables would be any three variables out of the following four: amount of substance, pressure, temperature, and volume. Thus, the thermodynamic state would range over a three-dimensional state space. The remaining variable, as well as other quantities such as the internal energy and the entropy, would be expressed as state functions of these three variables. The state functions satisfy certain universal constraints, expressed in the laws of thermodynamics, and they depend on the peculiarities of the materials that compose the concrete system. Various thermodynamic diagrams have been developed to model the transitions between thermodynamic states. Physical systems found in nature are practically always dynamic and complex, but in many cases, macroscopic physical systems are amenable to description based on proximity to ideal conditions. One such ideal condition is that of a stable equilibrium state. Such a state is a primitive object of classical or equilibrium thermodynamics, in which it is called a thermodynamic state.
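For the ideal-gas example, the constraint pV = nRT means any three of the four variables fix the fourth, so the state ranges over a three-dimensional space. A small sketch of that bookkeeping:

    R = 8.314   # J/(mol K)

    def ideal_gas_state(n=None, p=None, T=None, V=None):
        """Given any three of (n, p, T, V) for an ideal gas, return all four."""
        if p is None:
            p = n * R * T / V
        elif V is None:
            V = n * R * T / p
        elif T is None:
            T = p * V / (n * R)
        elif n is None:
            n = p * V / (R * T)
        return n, p, T, V

    print(ideal_gas_state(n=1.0, T=273.15, p=101325.0))   # V ~ 0.0224 m^3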
Based on many observations, thermodynamics postulates that all systems that are isolated from the external environment will evolve so as to approach unique stable equilibrium states. There are a number of different types of equilibrium, corresponding to different physical variables (thermal, mechanical, chemical, and phase equilibrium, for example), and a system reaches thermodynamic equilibrium when the conditions of all the relevant types of equilibrium are simultaneously satisfied.
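As an illustration of the idea that fixing a sufficient set of state variables determines all other state quantities, the following minimal Python sketch treats an ideal gas whose state is fixed by three of the four variables (n, P, T, V); the function names and the monatomic-gas assumption are illustrative, not taken from the text:

```python
# Minimal sketch: for an ideal gas, fixing three of (n, P, T, V) determines
# the fourth via PV = nRT, and state functions then follow from the state
# alone. Names and the monatomic-gas assumption are illustrative.

R = 8.314  # J/(mol K), molar gas constant

def pressure(n, T, V):
    """State variable P determined by the other three: P = nRT/V."""
    return n * R * T / V

def internal_energy(n, T):
    """State function U for a monatomic ideal gas: U = (3/2) n R T."""
    return 1.5 * n * R * T

# One mole at 300 K in 0.025 m^3: the state (n, T, V) fixes P and U uniquely.
n, T, V = 1.0, 300.0, 0.025
print(f"P = {pressure(n, T, V):.1f} Pa, U = {internal_energy(n, T):.1f} J")
```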
https://en.wikipedia.org/wiki/Thermodynamic_state
A thermodynamic system is a body of matter and/or radiation separate from its surroundings that can be studied using the laws of thermodynamics . According to their internal processes, thermodynamic systems are divided into passive systems, in which there is a redistribution of available energy, and active systems, in which one type of energy is converted into another. Depending on its interaction with the environment, a thermodynamic system may be an isolated system , a closed system , or an open system . An isolated system does not exchange matter or energy with its surroundings. A closed system may exchange heat, experience forces, and exert forces, but does not exchange matter. An open system can interact with its surroundings by exchanging both matter and energy. The physical condition of a thermodynamic system at a given time is described by its state , which can be specified by the values of a set of thermodynamic state variables. A thermodynamic system is in thermodynamic equilibrium when there are no macroscopically apparent flows of matter or energy within it or between it and other systems. [ 1 ] Thermodynamic equilibrium is characterized not only by the absence of any flow of mass or energy , but by “the absence of any tendency toward change on a macroscopic scale.” [ 2 ] Equilibrium thermodynamics, as a subject in physics, considers macroscopic bodies of matter and energy in states of internal thermodynamic equilibrium. It uses the concept of thermodynamic processes , by which bodies pass from one equilibrium state to another by transfer of matter and energy between them. The term 'thermodynamic system' is used to refer to bodies of matter and energy in the special context of thermodynamics. The possible equilibria between bodies are determined by the physical properties of the walls that separate the bodies. Equilibrium thermodynamics in general does not measure time. Equilibrium thermodynamics is a relatively simple and well settled subject. One reason for this is the existence of a well defined physical quantity called 'the entropy of a body'. Non-equilibrium thermodynamics , as a subject in physics, considers bodies of matter and energy that are not in states of internal thermodynamic equilibrium, but are usually participating in processes of transfer that are slow enough to allow description in terms of quantities that are closely related to thermodynamic state variables . It is characterized by the presence of flows of matter and energy. For this topic, very often the bodies considered have smooth spatial inhomogeneities, so that spatial gradients, for example a temperature gradient, are well enough defined. Thus the description of non-equilibrium thermodynamic systems is a field theory, more complicated than the theory of equilibrium thermodynamics. Non-equilibrium thermodynamics is a growing subject, not an established edifice. Example theories and modeling approaches include the GENERIC formalism for complex fluids, viscoelasticity, and soft materials. In general, it is not possible to find an exactly defined entropy for non-equilibrium problems. For many non-equilibrium thermodynamical problems, an approximately defined quantity called 'time rate of entropy production' is very useful. Non-equilibrium thermodynamics is mostly beyond the scope of the present article. Another kind of thermodynamic system is considered in most engineering. It takes part in a flow process.
The account is in terms that approximate, well enough in practice in many cases, equilibrium thermodynamical concepts. This is mostly beyond the scope of the present article, and is set out in other articles, for example the article Flow process . The classification of thermodynamic systems arose with the development of thermodynamics as a science. Theoretical studies of thermodynamic processes in the period from the first theory of heat engines (Sadi Carnot, France, 1824) to the theory of dissipative structures (Ilya Prigogine, Belgium, 1971) mainly concerned the patterns of interaction of thermodynamic systems with the environment. At the same time, thermodynamic systems were mainly classified as isolated, closed and open, with corresponding properties in various thermodynamic states, for example, in states close to equilibrium, nonequilibrium and strongly nonequilibrium. In 2010, Boris Dobroborsky (Israel, Russia) proposed a classification of thermodynamic systems according to internal processes, consisting of energy redistribution (passive systems) and energy conversion (active systems). If there is a temperature difference inside the thermodynamic system, for example in a rod, one end of which is warmer than the other, then thermal energy transfer processes occur in it, in which the temperature of the colder part rises and that of the warmer part falls. As a result, after some time, the temperature in the rod will equalize – the rod will come to a state of thermodynamic equilibrium. If the process of converting one type of energy into another takes place inside a thermodynamic system, for example in chemical reactions, in electric or pneumatic motors, or when one solid body rubs against another, then processes of energy release or absorption will occur, and the thermodynamic system will always tend to a non-equilibrium state with respect to the environment. In isolated systems it is consistently observed that as time goes on internal rearrangements diminish and stable conditions are approached. Pressures and temperatures tend to equalize, and matter arranges itself into one or a few relatively homogeneous phases . A system in which all processes of change have gone practically to completion is considered in a state of thermodynamic equilibrium . [ 3 ] The thermodynamic properties of a system in equilibrium are unchanging in time. Equilibrium system states are much easier to describe in a deterministic manner than non-equilibrium states. In some cases, when analyzing a thermodynamic process , one can assume that each intermediate state in the process is at equilibrium. Such a process is called quasistatic. [ 4 ] For a process to be reversible , each step in the process must be reversible. For a step in a process to be reversible, the system must be in equilibrium throughout the step. That ideal cannot be accomplished in practice because no step can be taken without perturbing the system from equilibrium, but the ideal can be approached by making changes slowly. The very existence of thermodynamic equilibrium, defining states of thermodynamic systems, is the essential, characteristic, and most fundamental postulate of thermodynamics, though it is only rarely cited as a numbered law. [ 5 ] [ 6 ] [ 7 ] According to Bailyn, the commonly rehearsed statement of the zeroth law of thermodynamics is a consequence of this fundamental postulate.
[ 8 ] In reality, practically nothing in nature is in strict thermodynamic equilibrium, but the postulate of thermodynamic equilibrium often provides very useful idealizations or approximations, both theoretically and experimentally; experiments can provide scenarios of practical thermodynamic equilibrium. In equilibrium thermodynamics the state variables do not include fluxes because in a state of thermodynamic equilibrium all fluxes have zero values by definition. Equilibrium thermodynamic processes may involve fluxes but these must have ceased by the time a thermodynamic process or operation is complete bringing a system to its eventual thermodynamic state. Non-equilibrium thermodynamics allows its state variables to include non-zero fluxes, which describe transfers of mass or energy or entropy between a system and its surroundings. [ 9 ] A system is enclosed by walls that bound it and connect it to its surroundings. [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] Often a wall restricts passage across it by some form of matter or energy, making the connection indirect. Sometimes a wall is no more than an imaginary two-dimensional closed surface through which the connection to the surroundings is direct. A wall can be fixed (e.g. a constant volume reactor) or moveable (e.g. a piston). For example, in a reciprocating engine, a fixed wall means the piston is locked at its position; then, a constant volume process may occur. In that same engine, a piston may be unlocked and allowed to move in and out. Ideally, a wall may be declared adiabatic , diathermal , impermeable, permeable, or semi-permeable . Actual physical materials that provide walls with such idealized properties are not always readily available. The system is delimited by walls or boundaries, either actual or notional, across which conserved (such as matter and energy) or unconserved (such as entropy) quantities can pass into and out of the system. The space outside the thermodynamic system is known as the surroundings , a reservoir , or the environment . The properties of the walls determine what transfers can occur. A wall that allows transfer of a quantity is said to be permeable to it, and a thermodynamic system is classified by the permeabilities of its several walls. A transfer between system and surroundings can arise by contact, such as conduction of heat, or by long-range forces such as an electric field in the surroundings. A system with walls that prevent all transfers is said to be isolated . This is an idealized conception, because in practice some transfer is always possible, for example by gravitational forces. It is an axiom of thermodynamics that an isolated system eventually reaches internal thermodynamic equilibrium , when its state no longer changes with time. The walls of a closed system allow transfer of energy as heat and as work, but not of matter, between it and its surroundings. The walls of an open system allow transfer both of matter and of energy. [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] This scheme of definition of terms is not uniformly used, though it is convenient for some purposes. In particular, some writers use 'closed system' where 'isolated system' is here used. [ 22 ] [ 23 ] Anything that passes across the boundary and effects a change in the contents of the system must be accounted for in an appropriate balance equation.
The volume can be the region surrounding a single atom resonating energy, such as Max Planck defined in 1900; it can be a body of steam or air in a steam engine , such as Sadi Carnot defined in 1824. It could also be just one nuclide (i.e. a system of quarks ) as hypothesized in quantum thermodynamics . The system is the part of the universe being studied, while the surroundings is the remainder of the universe that lies outside the boundaries of the system. It is also known as the environment or the reservoir . Depending on the type of system, it may interact with the system by exchanging mass, energy (including heat and work), momentum , electric charge , or other conserved properties . The environment is ignored in the analysis of the system, except in regards to these interactions. In a closed system, no mass may be transferred in or out of the system boundaries. The system always contains the same amount of matter, but (sensible) heat and (boundary) work can be exchanged across the boundary of the system. Whether a system can exchange heat, work, or both is dependent on the property of its boundary. One example is fluid being compressed by a piston in a cylinder. Another example of a closed system is a bomb calorimeter , a type of constant-volume calorimeter used in measuring the heat of combustion of a particular reaction. Electrical energy travels across the boundary to produce a spark between the electrodes and initiates combustion. Heat transfer occurs across the boundary after combustion but no mass transfer takes place either way. The first law of thermodynamics for energy transfers in a closed system may be stated as $\Delta U = Q - W$, where $U$ denotes the internal energy of the system, $Q$ the heat added to the system, and $W$ the work done by the system. For infinitesimal changes the first law for closed systems may be stated as $\mathrm{d}U = \delta Q - \delta W$. If the work is due to a volume expansion by $\mathrm{d}V$ at a pressure $P$, then $\delta W = P\,\mathrm{d}V$. For a quasi-reversible heat transfer, the second law of thermodynamics reads $\delta Q = T\,\mathrm{d}S$, where $T$ denotes the thermodynamic temperature and $S$ the entropy of the system. With these relations the fundamental thermodynamic relation , used to compute changes in internal energy, is expressed as $\mathrm{d}U = T\,\mathrm{d}S - P\,\mathrm{d}V$. For a simple system, with only one type of particle (atom or molecule), a closed system amounts to a constant number of particles. For systems undergoing a chemical reaction , there may be all sorts of molecules being generated and destroyed by the reaction process. In this case, the fact that the system is closed is expressed by stating that the total number of each elemental atom is conserved, no matter what kind of molecule it may be a part of. Mathematically, $\sum_{j} a_{ij} N_{j} = b_{i}^{0}$, where $N_{j}$ denotes the number of $j$-type molecules, $a_{ij}$ the number of atoms of element $i$ in molecule $j$, and $b_{i}^{0}$ the total number of atoms of element $i$ in the system, which remains constant, since the system is closed. There is one such equation for each element in the system. An isolated system is more restrictive than a closed system as it does not interact with its surroundings in any way. Mass and energy remain constant within the system, and no energy or mass transfer takes place across the boundary.
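As a worked illustration of the first-law bookkeeping just described, the following minimal Python sketch applies $\mathrm{d}U = \delta Q - \delta W$ to the quasi-static, isothermal compression of an ideal gas in a piston-cylinder (a closed system); the function names and numerical values are illustrative assumptions:

```python
import math

# Minimal sketch: first law (dU = dQ - dW) for the quasi-static, isothermal
# compression of an ideal gas in a piston-cylinder, a closed system.

R = 8.314  # J/(mol K)

def isothermal_work(n, T, V1, V2):
    """Work done BY the gas: W = integral of P dV = nRT ln(V2/V1)."""
    return n * R * T * math.log(V2 / V1)

n, T = 1.0, 300.0
V1, V2 = 0.025, 0.0125             # compression halves the volume
W = isothermal_work(n, T, V1, V2)  # negative: work is done ON the gas
dU = 0.0                           # U of an ideal gas depends on T only
Q = dU + W                         # first law: heat must leave the system
print(f"W = {W:.1f} J, Q = {Q:.1f} J, dU = {dU:.1f} J")
```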
As time passes in an isolated system, internal differences in the system tend to even out and pressures and temperatures tend to equalize, as do density differences. A system in which all equalizing processes have gone practically to completion is in a state of thermodynamic equilibrium . Truly isolated physical systems do not exist in reality (except perhaps for the universe as a whole), because, for example, there is always gravity between a system with mass and masses elsewhere. [ 24 ] [ 25 ] [ 26 ] [ 27 ] [ 28 ] However, real systems may behave nearly as an isolated system for finite (possibly very long) times. The concept of an isolated system can serve as a useful model approximating many real-world situations. It is an acceptable idealization used in constructing mathematical models of certain natural phenomena . In the attempt to justify the postulate of entropy increase in the second law of thermodynamics , Boltzmann's H-theorem used equations , which assumed that a system (for example, a gas ) was isolated; that is, all the mechanical degrees of freedom could be specified, treating the walls simply as mirror boundary conditions . This inevitably led to Loschmidt's paradox . However, if the stochastic behavior of the molecules in actual walls is considered, along with the randomizing effect of the ambient, background thermal radiation , Boltzmann's assumption of molecular chaos can be justified. The second law of thermodynamics for isolated systems states that the entropy of an isolated system not in equilibrium tends to increase over time, approaching a maximum value at equilibrium. Overall, in an isolated system, the internal energy is constant and the entropy can never decrease. A closed system's entropy can decrease, e.g. when heat is extracted from the system. Isolated systems are not equivalent to closed systems. Closed systems cannot exchange matter with the surroundings, but can exchange energy. Isolated systems can exchange neither matter nor energy with their surroundings, and as such are only theoretical and do not exist in reality (except, possibly, the entire universe). 'Closed system' is often used in thermodynamics discussions when 'isolated system' would be correct, i.e. there is an assumption that energy does not enter or leave the system. For a thermodynamic process, the precise physical properties of the walls and surroundings of the system are important, because they determine the possible processes. An open system has one or several walls that allow transfer of matter. To account for the internal energy of the open system, this requires energy transfer terms in addition to those for heat and work. It also leads to the idea of the chemical potential . A wall selectively permeable only to a pure substance can put the system in diffusive contact with a reservoir of that pure substance in the surroundings. Then a process is possible in which that pure substance is transferred between system and surroundings. Also, across that wall a contact equilibrium with respect to that substance is possible. By suitable thermodynamic operations , the pure substance reservoir can be dealt with as a closed system. Its internal energy and its entropy can be determined as functions of its temperature, pressure, and mole number. A thermodynamic operation can render impermeable to matter all system walls other than the contact equilibrium wall for that substance.
This allows the definition of an intensive state variable, with respect to a reference state of the surroundings, for that substance. The intensive variable is called the chemical potential; for component substance $i$ it is usually denoted $\mu_i$. The corresponding extensive variable can be the number of moles $N_i$ of the component substance in the system. For a contact equilibrium across a wall permeable to a substance, the chemical potentials of the substance must be the same on either side of the wall. This is part of the nature of thermodynamic equilibrium, and may be regarded as related to the zeroth law of thermodynamics. [ 29 ] In an open system, there is an exchange of energy and matter between the system and the surroundings. The presence of reactants in an open beaker is an example of an open system. Here the boundary is an imaginary surface enclosing the beaker and reactants. A system is named closed if its borders are impenetrable for substance but allow transit of energy in the form of heat, and isolated if there is no exchange of heat and substances. The open system cannot exist in the equilibrium state. To describe deviation of the thermodynamic system from equilibrium, in addition to the constitutive variables described above, a set of internal variables $\xi_1, \xi_2, \ldots$ has been introduced. The equilibrium state is considered to be stable, and the main property of the internal variables, as measures of non-equilibrium of the system, is their tendency to disappear; the local law of disappearing can be written as a relaxation equation for each internal variable, $\frac{\mathrm{d}\xi_i}{\mathrm{d}t} = -\frac{\xi_i - \xi_i^{(0)}}{\tau_i}$, where $\tau_i = \tau_i(T, x_1, x_2, \ldots, x_n)$ is a relaxation time of the corresponding variable. It is convenient to consider the initial value $\xi_i^{0}$ equal to zero. The specific contribution to the thermodynamics of open non-equilibrium systems was made by Ilya Prigogine , who investigated a system of chemically reacting substances. [ 30 ] In this case the internal variables appear to be measures of incompleteness of chemical reactions, that is, measures of how far the considered system with chemical reactions is out of equilibrium. The theory can be generalized, [ 31 ] [ 32 ] [ 33 ] to consider any deviations from the equilibrium state, such as the structure of the system, gradients of temperature, and differences of concentrations of substances, as well as the degrees of completeness of all chemical reactions, to be internal variables. The increments of Gibbs free energy $G$ and entropy $S$ at $T = \text{const}$ and $p = \text{const}$ are determined by corresponding balance equations, referred to here as equations (2) and (3). The stationary states of the system exist due to exchange of both thermal energy ($\Delta Q_\alpha$) and a stream of particles . The sum of the last terms in the equations presents the total energy coming into the system with the stream of particles of substances $\Delta N_\alpha$, which can be positive or negative; the quantity $\mu_\alpha$ is the chemical potential of substance $\alpha$. The middle terms in equations (2) and (3) depict energy dissipation ( entropy production ) due to the relaxation of internal variables $\xi_j$, while $\Xi_j$ are thermodynamic forces.
This approach to the open system allows describing the growth and development of living objects in thermodynamic terms. [ 34 ]
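To make the relaxation law above concrete, the following minimal Python sketch integrates $\mathrm{d}\xi_i/\mathrm{d}t = -\xi_i/\tau_i$ (equilibrium values taken as zero, as in the text) for a small set of internal variables; the variable count and time constants are illustrative assumptions:

```python
import numpy as np

# Minimal sketch: relaxation of internal variables toward equilibrium,
# d(xi_i)/dt = -xi_i / tau_i. Each variable decays on its own time scale.

tau = np.array([0.5, 2.0, 10.0])   # relaxation times of three variables
xi = np.array([1.0, 1.0, 1.0])     # initial departures from equilibrium
dt, steps = 0.01, 1000             # explicit Euler integration to t = 10

for _ in range(steps):
    xi += dt * (-xi / tau)

# The fast variable has essentially vanished, the slow one persists:
print(np.round(xi, 4))  # approx [exp(-20), exp(-5), exp(-1)]
```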
https://en.wikipedia.org/wiki/Thermodynamic_system
Thermodynamic reaction control or kinetic reaction control in a chemical reaction can decide the composition of a reaction product mixture when competing pathways lead to different products and the reaction conditions influence the selectivity or stereoselectivity . The distinction is relevant when product A forms faster than product B because the activation energy for product A is lower than that for product B , yet product B is more stable. In such a case A is the kinetic product and is favoured under kinetic control and B is the thermodynamic product and is favoured under thermodynamic control. [ 1 ] [ 2 ] [ 3 ] The conditions of the reaction, such as temperature, pressure, or solvent, affect which reaction pathway may be favored: either the kinetically controlled or the thermodynamically controlled one. Note this is only true if the activation energies of the two pathways differ, with one pathway having a lower E a ( energy of activation ) than the other. Prevalence of thermodynamic or kinetic control determines the final composition of the product when these competing reaction pathways lead to different products. The reaction conditions as mentioned above influence the selectivity of the reaction, i.e., which pathway is taken. Asymmetric synthesis is a field in which the distinction between kinetic and thermodynamic control is especially important. Because pairs of enantiomers have, for all intents and purposes, the same Gibbs free energy, thermodynamic control will produce a racemic mixture by necessity. Thus, any catalytic reaction that provides product with nonzero enantiomeric excess is under at least partial kinetic control. (In many stoichiometric asymmetric transformations, the enantiomeric products are actually formed as a complex with the chirality source before the workup stage of the reaction, technically making the reaction a diastereoselective one. Although such reactions are still usually kinetically controlled, thermodynamic control is at least possible, in principle.) The Diels–Alder reaction of cyclopentadiene with furan can produce two isomeric products. At room temperature , kinetic reaction control prevails and the less stable endo isomer 2 is the main reaction product. At 81 °C and after long reaction times, the chemical equilibrium can assert itself and the thermodynamically more stable exo isomer 1 is formed. [ 4 ] The exo product is more stable by virtue of a lower degree of steric congestion , while the endo product is favoured by orbital overlap in the transition state . An outstanding and very rare example of full kinetic and thermodynamic reaction control in the process of the tandem inter-/intramolecular Diels–Alder reaction of bis-furyl dienes 3 with hexafluoro-2-butyne or dimethyl acetylenedicarboxylate (DMAD) was discovered and described in 2018. [ 5 ] [ 6 ] At low temperature, the reactions occur chemoselectively, leading exclusively to adducts of pincer-[4+2] cycloaddition ( 5 ). The exclusive formation of domino -adducts ( 6 ) is observed at elevated temperatures. Theoretical DFT calculations of the reaction between hexafluoro-2-butyne and dienes 3a–c were performed. The reaction starting with [4+2] cycloaddition of CF3C≡CCF3 at one of the furan moieties occurs in a concerted fashion via TS1 and represents the rate-limiting step of the whole process, with an activation barrier ΔG‡ ≈ 23.1–26.8 kcal/mol. Further, the reaction could proceed via two competing channels, i.e.
either leading to the pincer type products 5 via TS2k or resulting in the formation of the domino product 6 via TS2t . The calculations showed that the first channel is more kinetically favourable (ΔG‡ ≈ 5.7–5.9 kcal/mol). Meanwhile, the domino products 6 are more thermodynamically stable than 5 (ΔG ≈ 4.2–4.7 kcal/mol), and this fact may cause isomerization of 5 into 6 at elevated temperature. Indeed, the calculated activation barriers for the 5 → 6 isomerization via the retro-Diels–Alder reaction of 5 followed by the intramolecular [4+2]-cycloaddition in the chain intermediate 4 to give 6 are 34.0–34.4 kcal/mol. In the protonation of an enolate ion , the kinetic product is the enol and the thermodynamic product is a ketone or aldehyde . Carbonyl compounds and their enols interchange rapidly by proton transfers catalyzed by acids or bases , even in trace amounts, in this case mediated by the enolate or the proton source. In the deprotonation of an unsymmetrical ketone , the kinetic product is the enolate resulting from removal of the most accessible α-H, while the thermodynamic product has the more highly substituted enolate moiety. [ 7 ] [ 8 ] [ 9 ] [ 10 ] Use of low temperatures and sterically demanding bases increases the kinetic selectivity. Here, the difference in pK b between the base and the enolate is so large that the reaction is essentially irreversible, so the equilibration leading to the thermodynamic product is likely a proton exchange occurring during the addition between the kinetic enolate and as-yet-unreacted ketone. An inverse addition (adding ketone to the base) with rapid mixing would minimize this. The position of the equilibrium will depend on the countercation and solvent. If a much weaker base is used, the deprotonation will be incomplete, and there will be an equilibrium between reactants and products. Thermodynamic control is obtained; however, the reaction remains incomplete unless the product enolate is trapped, as in the example below. Since H transfers are very fast and the trapping reaction is slower, the ratio of trapped products largely mirrors the deprotonation equilibrium. The electrophilic addition reaction of hydrogen bromide to 1,3-butadiene above room temperature leads predominantly to the thermodynamically more stable 1,4 adduct, 1-bromo-2-butene, but decreasing the reaction temperature to below room temperature favours the kinetic 1,2 adduct, 3-bromo-1-butene. [ 3 ] The first to report on the relationship between kinetic and thermodynamic control were R.B. Woodward and Harold Baer in 1944. [ 18 ] They were re-investigating a reaction between maleic anhydride and a fulvene first reported in 1929 by Otto Diels and Kurt Alder . [ 19 ] They observed that while the endo isomer is formed more rapidly, longer reaction times, as well as relatively elevated temperatures, result in higher exo/endo ratios, which had to be considered in the light of the remarkable stability of the exo compound on the one hand and the very facile dissociation of the endo isomer on the other. C. K. Ingold with E. D. Hughes and G. Catchpole independently described a thermodynamic and kinetic reaction control model in 1948. [ 20 ] They were reinvestigating a certain allylic rearrangement reported in 1930 by Jakob Meisenheimer . [ 21 ] Solvolysis of gamma-phenylallyl chloride with AcOK in acetic acid was found to give a mixture of the gamma and the alpha acetate, with the latter converting to the former by equilibration.
This was interpreted as the appearance, in the field of anionotropy, of the phenomenon, familiar in prototropy, of the distinction between kinetic and thermodynamic control in ion recombination .
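The quantitative logic behind kinetic versus thermodynamic product ratios can be sketched with Boltzmann factors: under kinetic control the ratio is set by the difference in activation free energies, under thermodynamic control by the difference in product stabilities. The following minimal Python sketch illustrates this; the numerical free-energy gaps are invented for illustration and are not taken from the reactions discussed above:

```python
import math

# Minimal sketch: product ratios under kinetic vs. thermodynamic control.
# Kinetic control: ratio set by the gap in activation free energies
# (transition-state theory); thermodynamic control: by the gap in product
# free energies (equilibrium). All numbers below are illustrative.

R = 1.987e-3  # kcal/(mol K)

def ratio(ddG_kcal, T):
    """[major]/[minor] = exp(ddG / RT) for a free-energy gap ddG > 0."""
    return math.exp(ddG_kcal / (R * T))

ddG_act = 1.0    # kcal/mol: pathway to A has the lower barrier (kinetic)
ddG_prod = 2.0   # kcal/mol: product B lies lower (thermodynamic)

print(f"A:B under kinetic control at 298 K      ~ {ratio(ddG_act, 298):.1f}:1")
print(f"B:A under thermodynamic control at 298 K ~ {ratio(ddG_prod, 298):.1f}:1")
# Raising the temperature shrinks both ratios; long reaction times allow
# equilibration, converting kinetic product A into the more stable B.
```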
https://en.wikipedia.org/wiki/Thermodynamic_versus_kinetic_reaction_control
In thermodynamics , a thermodynamicist is someone who studies thermodynamic processes and phenomena, i.e. the physics that deals with mechanical action and relations of heat . Well-known thermodynamicists include Sadi Carnot , Rudolf Clausius , Willard Gibbs , Hermann von Helmholtz , and Max Planck . Although most consider the French physicist Nicolas Sadi Carnot to be the first true thermodynamicist, the term thermodynamics itself was not coined until 1849 by Lord Kelvin in his publication An Account of Carnot's Theory of the Motive Power of Heat . [ 1 ] The first thermodynamic textbook was written in 1859 by William Rankine , a civil and mechanical engineering professor at the University of Glasgow . [ 2 ]
https://en.wikipedia.org/wiki/Thermodynamicist
In the history of thermodynamics , Thermodynamik chemischer Vorgänge ("Thermodynamics of Chemical Processes") is a sequence of three papers (1882–1883) written by German physicist Hermann von Helmholtz . It is one of the founding papers in thermodynamics , along with Josiah Willard Gibbs 's 1876 paper " On the Equilibrium of Heterogeneous Substances ". Together they form the foundation of chemical thermodynamics as well as a large part of physical chemistry . [ 1 ] [ 2 ] It was published in three parts in the Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin ["Proceedings of the Royal Prussian Academy of Sciences"], and is available on HathiTrust [ 3 ] and in the online archive of the Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin . [ 4 ]
https://en.wikipedia.org/wiki/Thermodynamik_chemischer_Vorgänge
The thermoelectric effect is the direct conversion of temperature differences to electric voltage and vice versa via a thermocouple . [ 1 ] A thermoelectric device creates a voltage when there is a different temperature on each side. Conversely, when a voltage is applied to it, heat is transferred from one side to the other, creating a temperature difference. [ 2 ] This effect can be used to generate electricity , measure temperature or change the temperature of objects. Because the direction of heating and cooling is affected by the applied voltage, thermoelectric devices can be used as temperature controllers. The term "thermoelectric effect" encompasses three separately identified effects: the Seebeck effect (temperature differences cause electromotive forces), the Peltier effect (thermocouples create temperature differences), and the Thomson effect (the Seebeck coefficient varies with temperature). The Seebeck and Peltier effects are different manifestations of the same physical process; textbooks may refer to this process as the Peltier–Seebeck effect (the separation derives from the independent discoveries by French physicist Jean Charles Athanase Peltier and Baltic German physicist Thomas Johann Seebeck ). The Thomson effect is an extension of the Peltier–Seebeck model and is credited to Lord Kelvin . Joule heating , the heat that is generated whenever a current is passed through a conductive material, is not generally termed a thermoelectric effect. The Peltier–Seebeck and Thomson effects are thermodynamically reversible , [ 3 ] whereas Joule heating is not. At the atomic scale, a temperature gradient causes charge carriers in the material to diffuse from the hot side to the cold side. This is due to charge carrier particles having higher mean velocities (and thus kinetic energy ) at higher temperatures, leading them to migrate on average towards the colder side, in the process carrying heat across the material. [ 4 ] Depending on the material properties and nature of the charge carriers (whether they are positive holes in a bulk material or electrons of negative charge), heat can be carried in either direction with respect to voltage. Semiconductors of n-type and p-type are often combined in series as they have opposite directions for heat transport, as specified by the sign of their Seebeck coefficients . [ 5 ] The Seebeck effect is the emergence of electromotive force (emf) that develops across two points of an electrically conducting material when there is a temperature difference between them. The emf is called the Seebeck emf (or thermo/thermal/thermoelectric emf). The ratio between the emf and temperature difference is the Seebeck coefficient. A thermocouple measures the difference in potential across a hot and cold end for two dissimilar materials. This potential difference is proportional to the temperature difference between the hot and cold ends. First discovered in 1794 by Italian scientist Alessandro Volta , [ 6 ] [ note 1 ] it is named after the Russian-born Baltic German physicist Thomas Johann Seebeck , who rediscovered it in 1821. Seebeck observed what he called the "thermomagnetic effect", wherein a magnetic compass needle would be deflected by a closed loop formed by two different metals joined in two places, with an applied temperature difference between the joints.
Danish physicist Hans Christian Ørsted noted that the temperature difference was in fact driving an electric current, with the generation of magnetic field being an indirect consequence, and so coined the more accurate term "thermoelectricity". [ 7 ] The Seebeck effect is a classic example of an electromotive force (EMF) and leads to measurable currents or voltages in the same way as any other EMF. The local current density is given by $\mathbf{J} = \sigma(-\nabla V + \mathbf{E}_{\text{emf}})$, where $V$ is the local voltage , [ 8 ] and $\sigma$ is the local conductivity . In general, the Seebeck effect is described locally by the creation of an electromotive field $\mathbf{E}_{\text{emf}} = -S\,\nabla T$, where $S$ is the Seebeck coefficient (also known as thermopower), a property of the local material, and $\nabla T$ is the temperature gradient. The Seebeck coefficients generally vary as a function of temperature and depend strongly on the composition of the conductor. For ordinary materials at room temperature, the Seebeck coefficient may range in value from −100 μV/K to +1,000 μV/K (see the Seebeck coefficient article for more information). In practice, thermoelectric effects are essentially unobservable for a localized hot or cold spot in a single homogeneous conducting material, since the overall EMFs from the increasing and decreasing temperature gradients will perfectly cancel out. Attaching an electrode to the hotspot in an attempt to measure the locally shifted voltage will only partly succeed: it means another temperature gradient will appear inside the electrode, so the overall EMF will depend on the difference in Seebeck coefficients between the electrode and the conductor it is attached to. Thermocouples involve two wires, each of a different material, that are electrically joined in a region of unknown temperature. The loose ends are measured in an open-circuit state (without any current, $\mathbf{J} = 0$). Although the materials' Seebeck coefficients $S$ are nonlinearly temperature dependent and different for the two materials, the open-circuit condition means that $\nabla V = -S\,\nabla T$ everywhere. Therefore (see the thermocouple article for more details) the voltage measured at the loose ends of the wires is directly dependent on the unknown temperature, and yet totally independent of other details such as the exact geometry of the wires. This direct relationship allows the thermocouple arrangement to be used as a straightforward uncalibrated thermometer, provided knowledge of the difference in $S$-vs-$T$ curves of the two materials, and of the reference temperature at the measured loose wire ends. Thermoelectric sorting functions similarly to a thermocouple but involves an unknown material instead of an unknown temperature: a metallic probe of known composition is kept at a constant known temperature and held in contact with the unknown sample that is locally heated to the probe temperature, thereby providing an approximate measurement of the unknown Seebeck coefficient $S$. This can help distinguish between different metals and alloys. Thermopiles are formed from many thermocouples in series, zig-zagging back and forth between hot and cold. This multiplies the voltage output.
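Integrating $\nabla V = -S\,\nabla T$ around the two-wire loop shows that the open-circuit thermocouple voltage is $V = \int (S_B - S_A)\,\mathrm{d}T$ between the reference and measurement junction temperatures, independent of wire geometry. The following minimal Python sketch evaluates that integral numerically; the linear $S(T)$ models are invented for illustration, not real material data:

```python
import numpy as np

# Minimal sketch: open-circuit thermocouple voltage from the loop integral
# V = integral over T of (S_B(T) - S_A(T)) dT between the junctions.
# The linear S(T) models below are illustrative, not real material data.

def S_A(T):  # Seebeck coefficient of wire A, V/K
    return 5e-6 + 1e-8 * T

def S_B(T):  # Seebeck coefficient of wire B, V/K
    return 20e-6 + 2e-8 * T

T_ref, T_meas = 273.15, 373.15
T = np.linspace(T_ref, T_meas, 1001)
f = S_B(T) - S_A(T)
# trapezoid rule; the result is independent of the wires' geometry
V = float(np.sum((f[1:] + f[:-1]) / 2 * np.diff(T)))
print(f"thermocouple output: {V*1e3:.3f} mV")
```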
Thermoelectric generators are like a thermocouple/thermopile but instead draw some current from the generated voltage in order to extract power from heat differentials. They are optimized differently from thermocouples, using high quality thermoelectric materials in a thermopile arrangement, to maximize the extracted power. Though not particularly efficient, these generators have the advantage of not having any moving parts. When an electric current is passed through a circuit of a thermocouple , heat is generated (dumped, pumped) at one junction and absorbed at the other junction. This is known as the Peltier effect : the presence of heating or cooling at an electrified junction of two different conductors. The effect is named after French physicist Jean Charles Athanase Peltier , who discovered it in 1834. [ 9 ] When a current is made to flow through a junction between two conductors, A and B, heat may be generated or removed at the junction. The Peltier heat generated at the junction per unit time is $\dot{Q} = (\Pi_{\text{A}} - \Pi_{\text{B}})\,I$, where $\Pi_{\text{A}}$ and $\Pi_{\text{B}}$ are the Peltier coefficients of conductors A and B, and $I$ is the electric current (from A to B). The total heat generated is not determined by the Peltier effect alone, as it may also be influenced by Joule heating and thermal-gradient effects (see below). The Peltier coefficients represent how much heat is carried per unit charge. Since charge current must be continuous across a junction, the associated heat flow will develop a discontinuity if $\Pi_{\text{A}}$ and $\Pi_{\text{B}}$ are different. The Peltier effect can be considered as the back-action counterpart to the Seebeck effect (analogous to the back-EMF in magnetic induction): if a simple thermoelectric circuit is closed, then the Seebeck effect will drive a current, which in turn (by the Peltier effect) will always transfer heat from the hot to the cold junction. The close relationship between the Peltier and Seebeck effects can be seen in the direct connection between their coefficients: $\Pi = TS$ (see below ). A typical Peltier heat pump involves multiple junctions in series, through which a current is driven. Some of the junctions lose heat due to the Peltier effect, while others gain heat. Thermoelectric heat pumps exploit this phenomenon, as do thermoelectric cooling devices found in refrigerators. The Peltier effect can be used to create a heat pump . Notably, the Peltier thermoelectric cooler is a refrigerator that is compact and has no circulating fluid or moving parts. Such refrigerators are useful in applications where their advantages outweigh the disadvantage of their very low efficiency. Other heat pump applications such as dehumidifiers may also use Peltier heat pumps. Thermoelectric coolers are trivially reversible, in that they can be used as heaters by simply reversing the current. Unlike ordinary resistive electrical heating ( Joule heating ) that varies with the square of current, the thermoelectric heating effect is linear in current (at least for small currents) but requires a cold sink to replenish with heat energy. This rapid reversing heating and cooling effect is used by many modern thermal cyclers , laboratory devices used to amplify DNA by the polymerase chain reaction (PCR). PCR requires the cyclic heating and cooling of samples to specified temperatures.
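A standard lumped model of a Peltier cooler combines the effects named above: Peltier pumping at the cold junction (using $\Pi = TS$), minus half the Joule heat and the back-conduction through the device. The following minimal Python sketch uses that textbook balance; the parameter values are illustrative assumptions, not a real device datasheet:

```python
# Minimal sketch: lumped model of a Peltier cooler. Net heat absorbed at
# the cold junction = Peltier pumping (S*Tc*I, from Pi = T*S) minus half
# the Joule heat (I^2*R/2 returns to each side) minus back-conduction
# (K*dT). All parameter values are illustrative.

S = 0.05   # V/K,  effective Seebeck coefficient of the couple stack
R = 2.0    # ohm,  electrical resistance of the stack
K = 0.5    # W/K,  thermal conductance between the junctions

def cold_side_heat(I, Tc, Th):
    """Net heat absorbed at the cold side, W (negative = no net cooling)."""
    return S * Tc * I - 0.5 * I**2 * R - K * (Th - Tc)

Tc, Th = 280.0, 300.0
for I in (1.0, 4.0, 7.0, 10.0):
    print(f"I = {I:4.1f} A -> Qc = {cold_side_heat(I, Tc, Th):+.1f} W")
# Cooling grows roughly linearly with I at first, then quadratic Joule
# heating overwhelms it: maximum pumping here occurs near I = S*Tc/R = 7 A.
```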
The inclusion of many thermocouples in a small space enables many samples to be amplified in parallel. For certain materials, the Seebeck coefficient is not constant in temperature, and so a spatial gradient in temperature can result in a gradient in the Seebeck coefficient. If a current is driven through this gradient, then a continuous version of the Peltier effect will occur. This Thomson effect was predicted and later observed in 1851 by Lord Kelvin (William Thomson). [ 10 ] It describes the heating or cooling of a current-carrying conductor with a temperature gradient. If a current density $\mathbf{J}$ is passed through a homogeneous conductor, the Thomson effect predicts a heat production rate per unit volume $\dot{q} = -\mathcal{K}\,\mathbf{J} \cdot \nabla T$, where $\nabla T$ is the temperature gradient, and $\mathcal{K}$ is the Thomson coefficient. The Thomson effect is a manifestation of the direction of flow of electrical carriers with respect to a temperature gradient within a conductor. The carriers absorb energy (heat) when flowing in a direction opposite to a thermal gradient, increasing their potential energy, and, when flowing in the same direction as the thermal gradient, they liberate heat, decreasing their potential energy. [ 11 ] The Thomson coefficient is related to the Seebeck coefficient as $\mathcal{K} = T\,\frac{dS}{dT}$ (see below ). This equation, however, neglects Joule heating and ordinary thermal conductivity (see full equations below). Often, more than one of the above effects is involved in the operation of a real thermoelectric device. The Seebeck effect, Peltier effect, and Thomson effect can be gathered together in a consistent and rigorous way, described here; this also includes the effects of Joule heating and ordinary heat conduction. As stated above, the Seebeck effect generates an electromotive force, leading to the current equation [ 12 ] $\mathbf{J} = \sigma(-\nabla V - S\,\nabla T)$. To describe the Peltier and Thomson effects, we must consider the flow of energy. If temperature and charge change with time, the full thermoelectric equation for the energy accumulation $\dot{e}$ is [ 12 ] $\dot{e} = \nabla \cdot (\kappa\,\nabla T) - \nabla \cdot \left((V + \Pi)\,\mathbf{J}\right) + \dot{q}_{\text{ext}}$, where $\kappa$ is the thermal conductivity . The first term is Fourier's heat conduction law , and the second term shows the energy carried by currents. The third term, $\dot{q}_{\text{ext}}$, is the heat added from an external source (if applicable). If the material has reached a steady state, the charge and temperature distributions are stable, so $\dot{e} = 0$ and $\nabla \cdot \mathbf{J} = 0$. Using these facts and the second Thomson relation (see below), the heat equation can be simplified to $-\dot{q}_{\text{ext}} = \nabla \cdot (\kappa\,\nabla T) + \mathbf{J} \cdot \left(\sigma^{-1}\mathbf{J}\right) - T\,\mathbf{J} \cdot \nabla S$. The middle term is the Joule heating, and the last term includes both Peltier ($\nabla S$ at a junction) and Thomson ($\nabla S$ in a thermal gradient) effects.
Combined with the Seebeck equation for $\mathbf{J}$, this can be used to solve for the steady-state voltage and temperature profiles in a complicated system. If the material is not in a steady state, a complete description needs to include dynamic effects such as those relating to electrical capacitance , inductance and heat capacity . The thermoelectric effects lie beyond the scope of equilibrium thermodynamics. They necessarily involve continuing flows of energy. At least, they involve three bodies or thermodynamic subsystems, arranged in a particular way, along with a special arrangement of the surroundings. The three bodies are the two different metals and their junction region. The junction region is an inhomogeneous body, assumed to be stable, not suffering amalgamation by diffusion of matter. The surroundings are arranged to maintain two temperature reservoirs and two electric reservoirs. For an imagined, but not actually possible, thermodynamic equilibrium, heat transfer from the hot reservoir to the cold reservoir would need to be prevented by a specifically matching voltage difference maintained by the electric reservoirs, and the electric current would need to be zero. For a steady state, there must be at least some heat transfer or some non-zero electric current. The two modes of energy transfer, as heat and by electric current, can be distinguished when there are three distinct bodies and a distinct arrangement of surroundings. But in the case of continuous variation in the media, heat transfer and thermodynamic work cannot be uniquely distinguished. This is more complicated than the often considered thermodynamic processes, in which just two respectively homogeneous subsystems are connected. In 1854, Lord Kelvin found relationships between the three coefficients, implying that the Thomson, Peltier, and Seebeck effects are different manifestations of one effect (uniquely characterized by the Seebeck coefficient). [ 13 ] The first Thomson relation is [ 12 ] $\mathcal{K} \equiv \frac{d\Pi}{dT} - S$, where $T$ is the absolute temperature, $\mathcal{K}$ is the Thomson coefficient, $\Pi$ is the Peltier coefficient, and $S$ is the Seebeck coefficient. This relationship is easily shown given that the Thomson effect is a continuous version of the Peltier effect. The second Thomson relation is $\Pi = TS$. This relation expresses a subtle and fundamental connection between the Peltier and Seebeck effects. It was not satisfactorily proven until the advent of the Onsager relations , and it is worth noting that this second Thomson relation is only guaranteed for a time-reversal symmetric material; if the material is placed in a magnetic field or is itself magnetically ordered ( ferromagnetic , antiferromagnetic , etc.), then the second Thomson relation does not take the simple form shown here. [ 14 ] Now, using the second relation, the first Thomson relation becomes $\mathcal{K} = T\,\frac{dS}{dT}$. The Thomson coefficient is unique among the three main thermoelectric coefficients because it is the only one directly measurable for individual materials. The Peltier and Seebeck coefficients can only be easily determined for pairs of materials; hence, it is difficult to find values of absolute Seebeck or Peltier coefficients for an individual material.
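The two Thomson relations can be checked numerically for any assumed $S(T)$: the second relation gives $\Pi = TS$, and the first, $\mathcal{K} = d\Pi/dT - S$, must then equal $T\,dS/dT$. The following minimal Python sketch verifies this identity for an invented model material:

```python
import numpy as np

# Minimal sketch: numerical check of the Thomson relations for a model
# material. The quadratic S(T) below is invented for illustration.

T = np.linspace(200.0, 400.0, 2001)
S = 1e-6 * (5.0 + 0.02 * T + 1e-5 * T**2)  # model Seebeck coefficient, V/K

Pi = T * S                                  # second Thomson relation
K_first = np.gradient(Pi, T) - S            # first relation: dPi/dT - S
K_second = T * np.gradient(S, T)            # equivalent form: T * dS/dT

# The two expressions agree up to finite-difference discretization error:
print(np.max(np.abs(K_first - K_second)))   # very small (approx 1e-12 V/K)
```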
If the Thomson coefficient of a material is measured over a wide temperature range, it can be integrated using the Thomson relations to determine the absolute values for the Peltier and Seebeck coefficients. This needs to be done only for one material, since the other values can be determined by measuring pairwise Seebeck coefficients in thermocouples containing the reference material and then adding back the absolute Seebeck coefficient of the reference material. For more details on absolute Seebeck coefficient determination, see Seebeck coefficient .
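Since $\mathcal{K} = T\,dS/dT$, the absolute Seebeck coefficient follows from a measured Thomson coefficient by integration: $S(T) = S(T_0) + \int_{T_0}^{T} \mathcal{K}(T')/T'\,\mathrm{d}T'$. The following minimal Python sketch performs that integration; the model $\mathcal{K}(T)$ and the anchor value $S(T_0) = 0$ are illustrative assumptions:

```python
import numpy as np

# Minimal sketch: recovering the absolute Seebeck coefficient from a
# (model) Thomson coefficient via S(T) = S(T0) + integral of K(T')/T' dT'.

def K(T):
    """Model Thomson coefficient, V/K (invented for illustration)."""
    return 2e-8 * T

T = np.linspace(100.0, 400.0, 3001)
integrand = K(T) / T                     # equals dS/dT
dT = np.diff(T)
# cumulative trapezoid rule, anchored at S(T[0]) = 0
S = np.concatenate(([0.0],
    np.cumsum((integrand[1:] + integrand[:-1]) / 2 * dT)))

print(f"S(400 K) - S(100 K) = {S[-1]*1e6:.2f} uV/K")  # = 2e-8 * 300 = 6 uV/K
```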
https://en.wikipedia.org/wiki/Thermoelectric_effect
Thermoelectric materials [ 1 ] [ 2 ] show the thermoelectric effect in a strong or convenient form. The thermoelectric effect refers to phenomena by which either a temperature difference creates an electric potential or an electric current creates a temperature difference. These phenomena are known more specifically as the Seebeck effect (creating a voltage from temperature difference), Peltier effect (driving heat flow with an electric current), and Thomson effect (reversible heating or cooling within a conductor when there is both an electric current and a temperature gradient). While all materials have a nonzero thermoelectric effect, in most materials it is too small to be useful. However, low-cost materials that have a sufficiently strong thermoelectric effect (and other required properties) are also considered for applications including power generation and refrigeration . The most commonly used thermoelectric material is based on bismuth telluride (Bi2Te3). Thermoelectric materials are used in thermoelectric systems for cooling or heating in niche applications , and are being studied as a way to regenerate electricity from waste heat . [ 3 ] Research in the field is still driven by materials development, primarily in optimizing transport and thermoelectric properties. [ 4 ] The usefulness of a material in thermoelectric systems is determined by the device efficiency . This is determined by the material's electrical conductivity ($\sigma$), thermal conductivity ($\kappa$), and Seebeck coefficient ($S$), which change with temperature ($T$). The maximum efficiency of the energy conversion process (for both power generation and cooling) at a given temperature point in the material is determined by the thermoelectric material's figure of merit $zT$, given by [ 1 ] [ 5 ] [ 6 ] $zT = \frac{\sigma S^2 T}{\kappa}$. The efficiency of a thermoelectric device for electricity generation is given by $\eta$, defined as $\eta = \frac{\text{energy provided to the load}}{\text{heat energy absorbed at hot junction}}$. The maximum efficiency of a thermoelectric device is typically described in terms of its device figure of merit $ZT$, where the maximum device efficiency is approximately given by [ 7 ] $\eta_{\max} = \frac{T_{\rm H} - T_{\rm C}}{T_{\rm H}} \cdot \frac{\sqrt{1 + Z\bar{T}} - 1}{\sqrt{1 + Z\bar{T}} + T_{\rm C}/T_{\rm H}}$, where $T_{\rm H}$ is the fixed temperature at the hot junction, $T_{\rm C}$ is the fixed temperature at the surface being cooled, and $\bar{T}$ is the mean of $T_{\rm H}$ and $T_{\rm C}$. This maximum efficiency equation is exact when thermoelectric properties are temperature-independent. For a single thermoelectric leg the device efficiency can be calculated from the temperature-dependent properties $S$, $\kappa$ and $\sigma$ and the heat and electric current flow through the material. [ 8 ] [ 9 ] [ 10 ] In an actual thermoelectric device, two materials are used (typically one n-type and one p-type) with metal interconnects. The maximum efficiency $\eta_{\max}$ is then calculated from the efficiency of both legs and the electrical and thermal losses from the interconnects and surroundings.
Ignoring these losses and the temperature dependencies in $S$, $\kappa$ and $\sigma$, an inexact estimate for $ZT$ is given by [ 1 ] [ 5 ] $Z\bar{T} = \frac{(S_p - S_n)^2\,\bar{T}}{\left[(\rho_n \kappa_n)^{1/2} + (\rho_p \kappa_p)^{1/2}\right]^2}$, where $\rho$ is the electrical resistivity, and the properties are averaged over the temperature range; the subscripts n and p denote properties related to the n- and p-type semiconducting thermoelectric materials, respectively. Only when the n and p elements have the same, temperature-independent properties ($S_p = -S_n$) does $Z\bar{T} = z\bar{T}$. Since thermoelectric devices are heat engines, their efficiency is limited by the Carnot efficiency $\frac{T_{\rm H} - T_{\rm C}}{T_{\rm H}}$, the first factor in $\eta_{\max}$, while $ZT$ and $zT$ determine the maximum reversibility of the thermodynamic process globally and locally, respectively. Regardless, the coefficient of performance of current commercial thermoelectric refrigerators ranges from 0.3 to 0.6, about one-sixth the value of traditional vapor-compression refrigerators. [ 11 ] Often the thermoelectric power factor is reported for a thermoelectric material, given by $\mathrm{Power\ factor} = \sigma S^2$, in units of W/(m·K²), where $S$ is the Seebeck coefficient and $\sigma$ is the electrical conductivity . Although it is often claimed that TE devices with materials with a higher power factor are able to 'generate' more energy (move more heat or extract more energy from that temperature difference), this is only true for a thermoelectric device with fixed geometry and an unlimited heat source and cooling. If the geometry of the device is optimally designed for the specific application, the thermoelectric materials will operate at their peak efficiency, which is determined by their $zT$, not $\sigma S^2$. [ 12 ] For good efficiency, materials with high electrical conductivity, low thermal conductivity and a high Seebeck coefficient are needed. The band structure of semiconductors offers better thermoelectric effects than the band structure of metals. In a semiconductor the Fermi energy is below the conduction band, causing the state density to be asymmetric around the Fermi energy. Therefore, the average electron energy of the conduction band is higher than the Fermi energy, making the system conducive to charge motion into a lower energy state. By contrast, the Fermi energy lies in the conduction band in metals. This makes the state density symmetric about the Fermi energy, so that the average conduction electron energy is close to the Fermi energy, reducing the forces pushing for charge transport. Therefore, semiconductors are ideal thermoelectric materials. [ 13 ] In the efficiency equations above, thermal conductivity and electrical conductivity compete. The thermal conductivity κ in crystalline solids has mainly two components: an electronic contribution (κ electron ) and a lattice (phonon) contribution (κ phonon ). According to the Wiedemann–Franz law , the higher the electrical conductivity, the higher κ electron becomes. [ 13 ] Thus in metals the ratio of thermal to electrical conductivity is about fixed, as the electron part dominates. In semiconductors, the phonon part is important and cannot be neglected. It reduces the efficiency.
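The maximum-efficiency formula above is easy to evaluate directly. The following minimal Python sketch computes $\eta_{\max}$ for a generator operating between two fixed temperatures, with $Z\bar{T}$ evaluated at the mean temperature; the input values are illustrative:

```python
import math

# Minimal sketch: maximum generator efficiency from the device figure of
# merit: eta_max = (dT/Th) * (sqrt(1+ZT) - 1) / (sqrt(1+ZT) + Tc/Th).
# Temperatures and ZT values below are illustrative.

def eta_max(Th, Tc, ZT):
    carnot = (Th - Tc) / Th
    root = math.sqrt(1.0 + ZT)
    return carnot * (root - 1.0) / (root + Tc / Th)

Th, Tc = 500.0, 300.0
for ZT in (0.5, 1.0, 2.0, 4.0):
    print(f"ZT = {ZT:.1f}: eta_max = {100*eta_max(Th, Tc, ZT):4.1f}% "
          f"(Carnot limit {100*(Th - Tc)/Th:.0f}%)")
# Efficiency approaches the Carnot limit only in the limit ZT -> infinity.
```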
For good efficiency, a low ratio of κ phonon / κ electron is desired. Therefore, it is necessary to minimize κ phonon and keep the electrical conductivity high. Thus semiconductors should be highly doped. G. A. Slack [ 14 ] proposed that in order to optimize the figure of merit, phonons , which are responsible for thermal conductivity, must experience the material as a glass (experiencing a high degree of phonon scattering, lowering the thermal conductivity ) while electrons must experience it as a crystal (experiencing very little scattering, maintaining the electrical conductivity ): this concept is called phonon glass electron crystal. The figure of merit can be improved through the independent adjustment of these properties. The maximum $Z\bar{T}$ of a material is given by the material's quality factor $B$, which depends on the Boltzmann constant $k_{\rm B}$, the reduced Planck constant $\hbar$, the number of degenerate valleys for the band $N_{\rm v}$, the average longitudinal elastic modulus $C_{\rm l}$, the inertial effective mass $m_{\rm l}^{*}$, the deformation potential coefficient $\Xi$, the lattice thermal conductivity $\kappa_{\rm L}$, and the temperature $T$. The figure of merit, $Z\bar{T}$, depends on the doping concentration and temperature of the material of interest. [ 15 ] The material quality factor $B$ is useful because it allows for an intrinsic comparison of possible efficiency between different materials. [ 16 ] This relation shows that improving the electronic component $\frac{N_{\rm v}}{m_{\rm l}^{*}\,\Xi^{2}}$, which primarily affects the Seebeck coefficient, will increase the quality factor of a material. A large density of states can be created by a large number of conducting bands ($N_{\rm v}$) or by flat bands giving a high band effective mass ($m_{\rm b}^{*}$). For isotropic materials $m_{\rm b}^{*} = m_{\rm l}^{*}$. Therefore, it is desirable for thermoelectric materials to have high valley degeneracy in a very sharp band structure. [ 17 ] Other complex features of the electronic structure are important. These can be partially quantified using an electronic fitness function. [ 18 ] Strategies to improve thermoelectric performance include both advanced bulk materials and the use of low-dimensional systems. Such approaches to reduce lattice thermal conductivity fall under three general material types: (1) Alloys : create point defects, vacancies, or rattling structures ( heavy-ion species with large vibrational amplitudes contained within partially filled structural sites) to scatter phonons within the unit cell crystal; [ 19 ] (2) Complex crystals : separate the phonon glass from the electron crystal using approaches similar to those for superconductors (the region responsible for electron transport should be an electron crystal of a high-mobility semiconductor, while the phonon glass should ideally house disordered structures and dopants without disrupting the electron crystal, analogous to the charge reservoir in high-T c superconductors [ 20 ] ); (3) Multiphase nanocomposites : scatter phonons at the interfaces of nanostructured materials, [ 21 ] be they mixed composites or thin film superlattices .
Materials under consideration for thermoelectric device applications include the following. Materials such as Bi₂Te₃ and Bi₂Se₃ are among the best-performing room-temperature thermoelectrics, with a temperature-independent figure of merit, ZT, between 0.8 and 1.0. [22] Nanostructuring these materials to produce a layered superlattice of alternating Bi₂Te₃ and Sb₂Te₃ layers produces a device with good electrical conductivity along the layers but poor thermal conductivity perpendicular to them. The result is an enhanced ZT (approximately 2.4 at room temperature for p-type material). [23] This high value of ZT has not been independently confirmed, owing to the complicated demands on the growth of such superlattices and on device fabrication; however, the material ZT values are consistent with the performance of hot-spot coolers made from these materials and validated at Intel Labs. Bismuth telluride and its solid solutions are good thermoelectric materials at room temperature and are therefore suitable for refrigeration applications around 300 K. The Czochralski method has been used to grow single-crystalline bismuth telluride compounds, but such compounds are more usually obtained by directional solidification from the melt or by powder metallurgy. Materials produced by these routes have lower efficiency than single-crystalline ones because of the random orientation of crystal grains, but their mechanical properties are superior, and their sensitivity to structural defects and impurities is lower owing to the high optimal carrier concentration. The required carrier concentration is obtained by choosing a nonstoichiometric composition, achieved by introducing excess bismuth or tellurium atoms into the primary melt, or by dopant impurities; possible dopants are halogens and group IV and V atoms. Because of its small bandgap (0.16 eV), Bi₂Te₃ is partially degenerate, and the corresponding Fermi level should lie close to the conduction-band minimum at room temperature. The small band gap also gives Bi₂Te₃ a high intrinsic carrier concentration (see the sketch at the end of this section), so minority-carrier conduction cannot be neglected for small stoichiometric deviations. Use of telluride compounds is limited by the toxicity and rarity of tellurium. [24] Heremans et al. (2008) demonstrated that thallium-doped lead telluride (PbTe) achieves a ZT of 1.5 at 773 K. [25] Later, Snyder et al. (2011) reported ZT ≈ 1.4 at 750 K in sodium-doped PbTe [26] and ZT ≈ 1.8 at 850 K in a sodium-doped PbTe₁₋ₓSeₓ alloy. [27] Snyder's group determined that both thallium and sodium alter the electronic structure of the crystal, increasing electrical conductivity, and they also claim that selenium increases electrical conductivity while reducing thermal conductivity. In 2012 another team used lead telluride to convert waste heat to electricity, reaching a ZT of 2.2, which they claimed was the highest yet reported. [28][29] Inorganic clathrates have the general formulas AxByC46−y (type I) and AxByC136−y (type II), where B and C are group III and IV elements, respectively, which form the framework in which "guest" A atoms (alkali or alkaline-earth metals) are encapsulated in two different polyhedra facing each other. The differences between types I and II come from the number and size of the voids in their unit cells. Transport properties depend on the framework's properties, but tuning is possible by changing the "guest" atoms. [30][31][32]
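The claim above that the 0.16 eV gap gives Bi₂Te₃ a high intrinsic carrier concentration follows from the textbook scaling $n_i \propto T^{3/2} \exp(-E_g / 2k_B T)$. In the sketch below the unknown prefactor cancels in a ratio against a wider-gap reference (a Si-like 1.12 eV gap), so the output is a relative magnitude only, not a measured carrier density.

```python
import math

# Sketch: why a small band gap implies a high intrinsic carrier density.
# n_i ~ T^1.5 * exp(-Eg / (2 kB T)); prefactors cancel in the ratio.

kB = 8.617e-5  # Boltzmann constant, eV/K

def ni_relative(Eg_eV, T=300.0):
    return T ** 1.5 * math.exp(-Eg_eV / (2 * kB * T))

ratio = ni_relative(0.16) / ni_relative(1.12)  # Bi2Te3-like vs Si-like gap
print(f"n_i(0.16 eV) / n_i(1.12 eV) ~ {ratio:.1e} at 300 K")  # ~1e8
```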
Returning to clathrates: the most direct approach to synthesizing and optimizing the thermoelectric properties of semiconducting type I clathrates is substitutional doping, where some framework atoms are replaced with dopant atoms. In addition, powder-metallurgical and crystal-growth techniques have been used in clathrate synthesis. The structural and chemical properties of clathrates enable the optimization of their transport properties as a function of stoichiometry. [33][34] The structure of type II materials allows partial filling of the polyhedra, enabling better tuning of the electrical properties and therefore better control of the doping level; [35][36] partially filled variants can be synthesized as semiconducting or even insulating. [37] Blake et al. have predicted ZT ~ 0.5 at room temperature and ZT ~ 1.7 at 800 K for optimized compositions. Kuznetsov et al. measured the electrical resistance and Seebeck coefficient of three different type I clathrates above room temperature and, by estimating the high-temperature thermal conductivity from published low-temperature data, obtained ZT ~ 0.7 at 700 K for Ba₈Ga₁₆Ge₃₀ and ZT ~ 0.87 at 870 K for Ba₈Ga₁₆Si₃₀. [38] Mg₂Bᴵⱽ compounds (Bᴵⱽ = Si, Ge, Sn) and their solid solutions are good thermoelectric materials, with ZT values comparable to those of established materials. The appropriate production methods are based on direct co-melting, but mechanical alloying has also been used. During synthesis, magnesium losses due to evaporation and segregation of components (especially for Mg₂Sn) need to be taken into account. Directed crystallization methods can produce single crystals of Mg₂Si, but these intrinsically have n-type conductivity, and doping, e.g. with Sn, Ga, Ag or Li, is required to produce the p-type material needed for an efficient thermoelectric device. [39] Solid solutions and doped compounds have to be annealed in order to produce homogeneous samples with the same properties throughout. At 800 K, $\mathrm{Mg_2Si_{0.55-x}Sn_{0.4}Ge_{0.05}Bi_x}$ has been reported to have a figure of merit of about 1.4, the highest ever reported for these compounds. [40] Skutterudites have the chemical composition LM₄X₁₂, where L is a rare-earth metal (an optional component), M is a transition metal, and X is a metalloid, a group V element or pnictogen such as phosphorus, antimony, or arsenic. These materials exhibit ZT > 1.0 and can potentially be used in multistage thermoelectric devices. [41] Unfilled, these materials contain voids that can be filled with low-coordination ions (usually rare-earth elements) to reduce the thermal conductivity by providing sources of lattice phonon scattering, without reducing the electrical conductivity. [42] It is also possible to reduce the thermal conductivity of a skutterudite without filling these voids, by using a special architecture containing nano- and micro-pores. [43] NASA has been developing a Multi-Mission Radioisotope Thermoelectric Generator in which the thermocouples would be made of skutterudite, which can function with a smaller temperature difference than the current tellurium designs. An otherwise similar RTG would then generate 25% more power at the beginning of a mission and at least 50% more after seventeen years. NASA hopes to use the design on the next New Frontiers mission. [44]
Homologous oxide compounds (such as those of the form (SrTiO₃)ₙ(SrO)ₘ, the Ruddlesden-Popper phases) have layered superlattice structures that make them promising candidates for use in high-temperature thermoelectric devices. [45] These materials exhibit low thermal conductivity perpendicular to the layers while maintaining good electronic conductivity within the layers. Their ZT values can reach 2.4 for epitaxial SrTiO₃ films, and the enhanced thermal stability of such oxides, compared with conventional high-ZT bismuth compounds, makes them superior high-temperature thermoelectrics. [46] Interest in oxides as thermoelectric materials was reawakened in 1997, when a relatively high thermoelectric power was reported for NaCo₂O₄. [47][46] In addition to their thermal stability, other advantages of oxides are their low toxicity and high oxidation resistance. Simultaneously controlling both the electronic and phonon systems may require nanostructured materials. Layered Ca₃Co₄O₉ exhibited ZT values of 1.4–2.7 at 900 K. [46] If the layers in a given material have the same stoichiometry, they will be stacked so that the same atoms are not positioned on top of each other, impeding phonon conduction perpendicular to the layers. [45] Oxide thermoelectrics have recently gained considerable attention, and the range of promising phases has increased drastically; novel members of this family include ZnO, [46] MnO₂, [48] and NbO₂. [49][50] All of the variables mentioned are included in the equation for the dimensionless figure of merit, zT, defined earlier. The goal of any thermoelectric experiment is to increase the power factor, S²σ, while maintaining a small thermal conductivity: because electricity is produced through a temperature gradient, materials that equilibrate heat very quickly are not useful. [51] The two compounds detailed below exhibit high-performing thermoelectric properties, as evidenced by the figures of merit reported in their respective manuscripts. Cuprokalininite (CuCr₂S₄) is a copper-dominant analogue of the mineral joegoldsteinite. It was recently found within metamorphic rocks in Slyudyanka, part of the South Baikal region of Russia, and researchers have determined that Sb-doped cuprokalininite (Cu₁₋ₓSbₓCr₂S₄) shows promise for renewable technology. [52] Doping is the act of intentionally adding an impurity, usually to modify the electrochemical characteristics of the seed material. The introduction of antimony enhances the power factor by bringing in extra electrons, which increases the Seebeck coefficient, S, and reduces the magnetic moment (how readily the particles align with a magnetic field); it also distorts the crystal structure, which lowers the thermal conductivity, κ. Khan et al. (2017) identified the optimal Sb content (x = 0.3) in cuprokalininite, yielding a device with a ZT value of 0.43. [52] Bornite (Cu₅FeS₄) is a sulfide mineral named after an Austrian mineralogist, though it is much more common than the aforementioned cuprokalininite. This metal ore was found to demonstrate improved thermoelectric performance after undergoing cation exchange with iron. [53]
Cation exchange is the process of surrounding a parent crystal with an electrolyte complex so that the cations (positively charged ions) within the structure can be swapped for those in solution without affecting the anion sublattice (the negatively charged crystal network). [54] The result is a crystal with a different composition but an identical framework, granting scientists extreme morphological control and uniformity when generating complicated heterostructures. [55] As to why cation exchange was expected to improve the ZT value: its mechanics often introduce crystallographic defects, which scatter phonons (the quasiparticles that carry heat). According to the Debye-Callaway formalism, a model used to determine the lattice thermal conductivity, κ_L, the highly anharmonic behavior caused by phonon scattering results in a large thermal resistance. [56] A greater defect density therefore decreases the lattice thermal conductivity, yielding a larger figure of merit. Long et al. reported that greater Cu deficiencies produced increases of up to 88% in the ZT value, to a maximum of 0.79. [53] The composition of thermoelectric devices can vary dramatically depending on the temperature of the heat they must harvest; given that more than eighty percent of industrial waste heat falls within the range 373–575 K, chalcogenides and antimonides are well suited to thermoelectric conversion because they can utilize heat at lower temperatures. [53] Sulfur is the cheapest and lightest chalcogenide, and because it is a byproduct of oil refining, current surpluses may pose an environmental threat, so increased sulfur consumption could help mitigate future damage. [52] As for the metal, copper is an ideal seed particle for any kind of substitution method because of its high mobility and variable oxidation state, which lets it balance or complement the charge of less flexible cations. Either cuprokalininite or bornite could therefore prove to be an ideal thermoelectric component. Half-Heusler (HH) alloys have great potential for high-temperature power-generation applications. Examples of these alloys include NbFeSb, NbCoSn and VFeSb. They have a cubic MgAgAs-type structure formed by three interpenetrating face-centered cubic (fcc) sublattices, and the ability to substitute on any of these three sublattices opens the door to a wide variety of compounds. Various atomic substitutions are employed to reduce the thermal conductivity and enhance the electrical conductivity. [57] Previously, ZT did not exceed 0.5 for p-type or 0.8 for n-type HH compounds; in the past few years, however, researchers have achieved ZT ≈ 1 for both n-type and p-type. [57] Nano-sized grains are one approach to lowering the thermal conductivity via grain-boundary-assisted phonon scattering. [58] Another approach utilizes the principles of nanocomposites, in which certain combinations of metals are favored over others because of their atomic size difference; for instance, Hf combined with Ti is more effective than Hf combined with Zr when reduction of thermal conductivity is the concern, since the atomic size difference is larger in the former pair. [59] Conducting polymers are of significant interest for flexible thermoelectric development. They are flexible, lightweight, geometrically versatile, and can be processed at scale, an important consideration for commercialization.
However, the structural disorder of these materials often degrades the electrical conductivity much more than the thermal conductivity, which has limited their use so far. Some of the most common conducting polymers investigated for flexible thermoelectrics include poly(3,4-ethylenedioxythiophene) (PEDOT), polyanilines (PANIs), polythiophenes, polyacetylenes, polypyrrole, and polycarbazole. P-type PEDOT:PSS (polystyrene sulfonate) and PEDOT-Tos (tosylate) have been among the most encouraging materials investigated. Organic, air-stable n-type thermoelectrics are often harder to synthesize because of their low electron affinity and their tendency to react with oxygen and water in the air. [60] These materials often have a figure of merit that is still too low for commercial applications (~0.42 in PEDOT:PSS) owing to poor electrical conductivity. [61] Hybrid composite thermoelectrics blend the electrically conducting organic materials discussed above, or other composite materials, with other conductive materials in an effort to improve transport properties. The most commonly added conductive materials are carbon nanotubes and graphene, owing to their conductivities and mechanical properties. Carbon nanotubes have been shown to increase the tensile strength of the polymer composite they are blended with, although they can also reduce its flexibility. [62] Furthermore, future study of the orientation and alignment of these added materials should allow improved performance. [63] The percolation threshold of CNTs is often especially low, well below 10%, because of their high aspect ratio (see the sketch at the end of this section). [64] A low percolation threshold is desirable for both cost and flexibility. Reduced graphene oxide (rGO), a graphene-related material, has also been used to enhance the figure of merit of thermoelectric materials. [65] The addition of a rather low amount of graphene or rGO, around 1 wt%, mainly strengthens phonon scattering at the grain boundaries of these materials and increases the charge-carrier concentration and mobility in chalcogenide-, skutterudite- and, particularly, metal-oxide-based composites. However, significant growth of ZT after the addition of graphene or rGO has been observed mainly for composites based on thermoelectric materials with low initial ZT; when the thermoelectric material is already nanostructured and possesses high electrical conductivity, such an addition does not enhance ZT significantly. Thus a graphene or rGO additive works mainly as an optimizer of the intrinsic performance of thermoelectric materials. Hybrid thermoelectric composites also include polymer-inorganic thermoelectric composites, generally realized with an inert polymer matrix that hosts a thermoelectric filler material. The matrix is generally nonconductive, so as not to short the current and to let the thermoelectric filler dominate the electrical transport properties. One major benefit of this approach is that the polymer matrix is generally highly disordered and random on many different length scales, so the composite material can have a much lower thermal conductivity. The general procedure for synthesizing these materials involves a solvent to dissolve the polymer and dispersion of the thermoelectric material throughout the mixture. [66]
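The aspect-ratio argument for the low CNT percolation threshold mentioned above can be made semi-quantitative with an excluded-volume estimate. In the sketch below, the 1/(2·AR) prefactor is a rough rule-of-thumb assumption; only the inverse scaling with aspect ratio is robust, and the CNT dimensions are illustrative.

```python
# Sketch: excluded-volume scaling of the percolation threshold for randomly
# oriented conducting rods (e.g. CNTs). The 1/2 prefactor is a rough
# assumption; only the 1/aspect-ratio scaling should be trusted.

def percolation_threshold_vol_frac(length_nm, diameter_nm):
    aspect_ratio = length_nm / diameter_nm
    return 1.0 / (2.0 * aspect_ratio)  # approximate critical volume fraction

# Illustrative CNT dimensions: 2 nm diameter, 2 um length -> AR = 1000
phi_c = percolation_threshold_vol_frac(length_nm=2000, diameter_nm=2)
print(f"estimated percolation threshold ~ {phi_c * 100:.3f} vol%")  # ~0.05 vol%
```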
Bulk Si exhibits a low ZT of ~0.01 because of its high thermal conductivity. However, ZT can be as high as 0.6 in silicon nanowires, which retain the high electrical conductivity of doped Si but have a reduced thermal conductivity owing to elevated phonon scattering at their extensive surfaces and their low cross-section (a rough estimate of this size effect is sketched at the end of this section). [67] Combining Si and Ge also retains the high electrical conductivity of both components while reducing the thermal conductivity; the reduction originates from additional scattering due to the very different lattice (phonon) properties of Si and Ge. [68] As a result, silicon-germanium alloys are currently the best thermoelectric materials around 1000 °C and are therefore used in some radioisotope thermoelectric generators (RTGs), notably the MHW-RTG and GPHS-RTG, and in some other high-temperature applications, such as waste-heat recovery. The usability of silicon-germanium alloys is limited by their high price and moderate ZT values (p-SiGe ~0.7 and n-SiGe ~1.0); [69] however, ZT can be increased to 1–2 in SiGe nanostructures owing to the reduction in thermal conductivity. [70] Experiments on crystals of sodium cobaltate, using X-ray and neutron scattering carried out at the European Synchrotron Radiation Facility (ESRF) and the Institut Laue-Langevin (ILL) in Grenoble, were able to suppress the thermal conductivity by a factor of six compared with vacancy-free sodium cobaltate. The experiments agreed with corresponding density functional calculations. The effect relies on large anharmonic displacements of $\mathrm{Na_{0.8}CoO_2}$ within the crystals. [71][72] In 2002, Nolas and Goldsmid suggested that systems in which the phonon mean free path is larger than the charge-carrier mean free path can exhibit enhanced thermoelectric efficiency. [73] This can be realized in amorphous thermoelectrics, which soon became the focus of many studies. The idea has been realized in Cu-Ge-Te, [74] NbO₂, [75] In-Ga-Zn-O, [76] Zr-Ni-Sn, [77] Si-Au, [78] and Ti-Pb-V-O [79] amorphous systems. Modelling of transport properties is challenging enough even with long-range order intact, so the design of amorphous thermoelectrics is in its infancy. Naturally, amorphous thermoelectrics give rise to extensive phonon scattering, which remains a challenge for crystalline thermoelectrics, and a bright future is expected for these materials. Functionally graded materials make it possible to improve the conversion efficiency of existing thermoelectrics. These materials have a non-uniform carrier-concentration distribution and, in some cases, a solid-solution composition gradient. In power-generation applications the temperature difference can be several hundred degrees, so devices made from homogeneous materials have some part operating at a temperature where ZT is substantially lower than its maximum value. This problem can be solved by using materials whose transport properties vary along their length, enabling substantial improvements to the operating efficiency over large temperature differences. This is possible with functionally graded materials, which have a variable carrier concentration along the length of the material, optimized for operation over a specific temperature range. [80] In addition to nanostructured Bi₂Te₃/Sb₂Te₃ superlattice thin films, other nanostructured materials, including silicon nanowires, [67] nanotubes and quantum dots, show potential for improving thermoelectric properties.
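As a rough illustration of the nanowire size effect described above, the following gray-model sketch combines kinetic theory (κ proportional to the phonon mean free path) with Matthiessen's rule for boundary scattering. The bulk mean-free-path figure is a commonly used rough value for Si, and the results are order-of-magnitude estimates only, not a substitute for a full phonon-spectrum calculation.

```python
# Sketch: boundary-limited phonon transport in a nanowire via kinetic theory
# (kappa ~ C*v*mfp/3) and Matthiessen's rule. Numbers are gray-model
# estimates for Si, illustrative only.

kappa_bulk = 150.0   # bulk Si thermal conductivity, W/(m K)
mfp_bulk = 300e-9    # effective bulk phonon mean free path, m (rough value)

def kappa_nanowire(diameter_m):
    # 1/mfp_eff = 1/mfp_bulk + 1/d : the wire surface caps the mean free path
    mfp_eff = 1.0 / (1.0 / mfp_bulk + 1.0 / diameter_m)
    return kappa_bulk * mfp_eff / mfp_bulk  # kappa scales with the mfp

for d in (200e-9, 50e-9, 20e-9):
    print(f"d = {d * 1e9:4.0f} nm -> kappa ~ {kappa_nanowire(d):6.1f} W/(m K)")
```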
Another nanostructured example is a PbTe/PbSeTe quantum-dot superlattice, which provides an enhanced ZT (approximately 1.5 at room temperature), higher than the bulk ZT value of either PbTe or PbSeTe (approximately 0.5). [81] Not all nanocrystalline materials are stable, because the crystal size can grow at high temperatures, ruining the materials' desired characteristics. Nanocrystalline materials have many interfaces between crystals, which scatter phonons, so the thermal conductivity is reduced. Phonons are confined to a grain if their mean free path is larger than the material's grain size. [67] Nanocrystalline transition-metal silicides are a promising material group for thermoelectric applications because they fulfill several criteria demanded by commercial applications. In some nanocrystalline transition-metal silicides the power factor is higher than in the corresponding polycrystalline material, but the lack of reliable data on thermal conductivity prevents evaluation of their thermoelectric efficiency. [82] Skutterudites, cobalt arsenide minerals with variable amounts of nickel and iron, can be produced artificially and are candidates for better thermoelectric materials. One advantage of nanostructured skutterudites over normal skutterudites is their reduced thermal conductivity, caused by grain-boundary scattering. ZT values of ~0.65 and >0.4 have been achieved with CoSb₃-based samples, the former for material doped with 2.0% Ni and 0.75% Te at 680 K, and the latter for an Au composite at T > 700 K. [83] Even greater performance improvements can be achieved by using composites and by controlling the grain size, the compaction conditions of polycrystalline samples, and the carrier concentration. Graphene is known for its high electrical conductivity and Seebeck coefficient at room temperature. [84][85] From a thermoelectric perspective, however, its thermal conductivity is notably high, which limits its ZT. [86] Several approaches have been suggested to reduce the thermal conductivity of graphene without much altering its electrical conductivity. Superlattices, i.e. nanostructured thermocouples, are considered a good candidate for better thermoelectric device manufacturing. Their production is expensive for general use because of fabrication processes based on expensive thin-film growth methods. However, since the amount of thin-film material required for device fabrication with superlattices is so much less than that of bulk thermoelectric materials (almost by a factor of 1/10,000), the long-term cost advantage is favorable. This is particularly true given the limited availability of tellurium and the competition from solar applications for it. Superlattice structures also allow independent manipulation of transport parameters by adjusting the structure itself, enabling research toward a better understanding of thermoelectric phenomena at the nanoscale and the study of phonon-blocking, electron-transmitting structures, which explain the changes in electric field and conductivity due to the material's nanostructure. [23] Many strategies exist to decrease the superlattice thermal conductivity, based on the engineering of phonon transport.
The thermal conductivity along the film plane and wire axis can be reduced by creating diffuse interface scattering and by reducing the interface separation distance, both of which result from interface roughness. Interface roughness can occur naturally or may be artificially induced. In nature, roughness is caused by the mixing of atoms of foreign elements. Artificial roughness can be created using various structure types, such as quantum-dot interfaces and thin films on step-covered substrates. [70][68] Reduced electrical conductivity: reduced phonon-scattering interface structures often also exhibit a decrease in electrical conductivity. The thermal conductivity in the cross-plane direction of the lattice is usually very low, but depending on the type of superlattice, the thermoelectric coefficient may increase because of changes to the band structure. Low thermal conductivity in superlattices is usually due to strong interface scattering of phonons. Minibands are caused by the lack of quantum confinement within a well. The miniband structure depends on the superlattice period: with a very short period (~1 nm) the band structure approaches the alloy limit, and with a long period (≥ ~60 nm) the minibands become so close to each other that they can be approximated by a continuum. [89] Superlattice structure countermeasures: countermeasures can be taken that practically eliminate the problem of decreased electrical conductivity at a reduced phonon-scattering interface. These measures include the proper choice of superlattice structure, taking advantage of miniband conduction across superlattices, and avoiding quantum confinement. It has been shown that because electrons and phonons have different wavelengths, it is possible to engineer the structure so that phonons are scattered more diffusely at the interface than electrons. [23] Phonon confinement countermeasures: another approach to overcoming the decrease in electrical conductivity in reduced phonon-scattering structures is to increase the phonon reflectivity and thereby decrease the thermal conductivity perpendicular to the interfaces. This can be achieved by increasing the mismatch between the materials in adjacent layers, including in density, group velocity, specific heat, and phonon spectrum. Interface roughness causes diffuse phonon scattering, which either increases or decreases the phonon reflectivity at the interfaces. A mismatch between bulk dispersion relations confines phonons, and the confinement becomes more favorable as the difference in dispersion increases. The amount of confinement is currently unknown, as only some models and experimental data exist. As with the previous method, the effects on electrical conductivity have to be considered. [70][68] Attempts have been made to localize long-wavelength phonons using aperiodic superlattices or composite superlattices with different periodicities. In addition, defects, especially dislocations, can be used to reduce the thermal conductivity in low-dimensional systems. [70][68] Parasitic heat: parasitic heat conduction in the barrier layers could cause significant performance loss. It has been proposed, but not tested, that this can be overcome by choosing a suitable distance between the quantum wells. The Seebeck coefficient can change its sign in superlattice nanowires owing to the existence of minigaps as the Fermi energy varies.
This indicates that superlattices can be tailored to exhibit n- or p-type behavior by using the same dopants as are used for the corresponding bulk materials, by carefully controlling the Fermi energy or the dopant concentration. With nanowire arrays, it is possible to exploit the semimetal-semiconductor transition due to quantum confinement and to use materials that would not normally be good thermoelectric materials in bulk form, for example bismuth. The Seebeck effect can also be used to determine the carrier concentration and Fermi energy in nanowires. [90] In quantum-dot thermoelectrics, unconventional or non-band transport (e.g. tunneling or hopping) is necessary to exploit their special electronic band structure in the transport direction. It is possible to achieve ZT > 2 at elevated temperatures with quantum-dot superlattices, but they are almost always unsuitable for mass production. However, in superlattices where quantum effects are not involved, with film thicknesses of only a few micrometers (μm) to about 15 μm, Bi₂Te₃/Sb₂Te₃ superlattice material has been made into high-performance microcoolers and other devices. The performance of hot-spot coolers [23] is consistent with the reported ZT ~ 2.4 of superlattice materials at 300 K. [91] Nanocomposites are a promising material class for bulk thermoelectric devices, but several challenges have to be overcome to make them suitable for practical applications. It is not well understood why improved thermoelectric properties appear only in certain materials with specific fabrication processes. [92] SrTe nanocrystals can be embedded in a bulk PbTe matrix so that the rocksalt lattices of both materials are completely aligned (endotaxy), with an optimal molar concentration for SrTe of only 2%. This causes strong phonon scattering but does not affect charge transport; ZT ~ 1.7 can then be achieved at 815 K for p-type material. [93] In 2014, researchers at Northwestern University discovered that tin selenide (SnSe) has a ZT of 2.6 along the b axis of the unit cell, [94][95] the highest value reported to date. This was attributed to an extremely low thermal conductivity in the SnSe lattice: specifically, SnSe demonstrated a lattice thermal conductivity of 0.23 W·m⁻¹·K⁻¹, much lower than previously reported values of 0.5 W·m⁻¹·K⁻¹ and greater. [96] The material also exhibited a ZT of 2.3 ± 0.3 along the c axis and 0.8 ± 0.2 along the a axis, with these results obtained at 923 K (650 °C). SnSe performance metrics improve significantly at higher temperatures owing to a structural change: the power factor, conductivity, and thermal conductivity all reach their optimal values at or above 750 K and appear to plateau at higher temperatures. However, other groups have not been able to reproduce the reported bulk thermal conductivity data. [97] Although SnSe exists at room temperature in an orthorhombic structure with space group Pnma, it transitions to a higher-symmetry structure, space group Cmcm, at higher temperatures. [98] This structure consists of Sn-Se planes stacked in the a-direction, which accounts for the poor out-of-plane performance (along the a axis). Upon transitioning to the Cmcm structure, SnSe maintains its low thermal conductivity but exhibits higher carrier mobilities. [96]
One impediment to further development of SnSe is its relatively low carrier concentration, approximately 10¹⁷ cm⁻³; compounding this issue, SnSe has been reported to have low doping efficiency. [99] Such single-crystalline materials also suffer from brittleness and from a narrow temperature range over which ZT is reported to be high, which hampers the fabrication of useful devices. In 2021, researchers announced a polycrystalline form of SnSe that was at once less brittle and featured a ZT of 3.1. [100] Anderson localization is a quantum-mechanical phenomenon in which charge carriers in a random potential are trapped in place (i.e. they occupy localized states, as opposed to the scattering states in which they could move freely). [101] This localization prevents the charge carriers from moving, which inhibits their contribution to the thermal conductivity of a material; but because it also lowers the electrical conductivity, it was thought to reduce ZT and be detrimental to thermoelectric materials. [102][103] In 2019, it was proposed that by localizing only the minority charge carriers in a doped semiconductor (i.e. holes in an n-doped semiconductor or electrons in a p-doped semiconductor), Anderson localization could increase ZT: the heat conduction associated with the movement of the minority carriers would be reduced, while the electrical conductivity of the majority carriers would be unaffected. [104] In 2020, researchers at Kyung Hee University demonstrated the use of Anderson localization in an n-type semiconductor to improve the thermoelectric properties of a material. They embedded nanoparticles of silver telluride (Ag₂Te) in a lead telluride (PbTe) matrix. Ag₂Te undergoes a phase transition around 407 K; below this temperature, both holes and electrons are localized at the Ag₂Te nanoparticles, while above it, holes are still localized but electrons can move freely through the material. The researchers were able to increase ZT from 1.5 to above 2.0 using this method. [105] Production methods for these materials can be divided into powder-based and crystal-growth-based techniques. Powder-based techniques offer excellent ability to control and maintain the desired carrier distribution, particle size, and composition. [106] In crystal-growth techniques, dopants are often mixed with the melt, but diffusion from the gaseous phase can also be used. [107] In zone-melting techniques, disks of different materials are stacked on top of one another and the materials mix when a traveling heater causes melting. In powder techniques, different powders are either mixed in a varying ratio before melting or layered in a stack before pressing and melting. There are applications, such as the cooling of electronic circuits, where thin films are required; thermoelectric materials can therefore also be synthesized using physical vapor deposition techniques. A further reason to utilize these methods is to design these phases and provide guidance for bulk applications. Significant improvement in 3D-printing capability has made it possible to prepare thermoelectric components via 3D printing. Thermoelectric products are made from special materials that absorb heat and create electricity, and the requirement of fitting complex geometries into tightly constrained spaces makes 3D printing an ideal manufacturing technique. [108]
There are several benefits to the use of additive manufacturing in thermoelectric material production. Additive manufacturing allows for innovation in the design of these materials, facilitating intricate geometries that would not otherwise be possible with conventional manufacturing processes. It reduces the amount of wasted material during production and allows faster production turnaround by eliminating the need for tooling and prototype fabrication, which can be time-consuming and expensive. [109] Several major additive manufacturing technologies have emerged as feasible methods for the production of thermoelectric materials, including continuous inkjet printing, dispenser printing, screen printing, stereolithography, and selective laser sintering. Each method has its own challenges and limitations, especially related to the material class and form that can be used. For example, selective laser sintering (SLS) can be used with metal and ceramic powders, stereolithography (SLA) must be used with curable resins containing solid-particle dispersions of the chosen thermoelectric material, and inkjet printing must use inks, usually synthesized by dispersing inorganic powders in organic solvents or by making a suspension. [110][111] The motivation for producing thermoelectrics by additive manufacturing is the desire to improve the properties of these materials, namely increasing their thermoelectric figure of merit ZT and thereby their energy-conversion efficiency. [112] Research has demonstrated the efficacy of additive manufacturing and investigated the properties of the resulting thermoelectric materials. An extrusion-based additive manufacturing method was used to successfully print bismuth telluride (Bi₂Te₃) in various geometries, using an all-inorganic viscoelastic ink synthesized with Sb₂Te₂ chalcogenidometallate ions as binders for Bi₂Te₃-based particles. This method produced homogeneous thermoelectric properties throughout the material and a thermoelectric figure of merit ZT of 0.9 for p-type samples and 0.6 for n-type samples; the Seebeck coefficient of this material was also found to increase with temperature up to around 200 °C. [113] Groundbreaking work has also been done on the use of selective laser sintering (SLS) for the production of thermoelectric materials. Loose Bi₂Te₃ powders have been printed via SLS without pre- or post-processing of the material, pre-forming of a substrate, or the use of binder materials. The printed samples achieved 88% relative density (compared with 92% for conventionally manufactured Bi₂Te₃). Scanning electron microscopy (SEM) imaging showed adequate fusion between the layers of deposited material. Though pores existed within the melted region, this is a general issue with parts made by SLS, arising from gas bubbles trapped in the melted material during its rapid solidification. X-ray diffraction showed that the crystal structure of the material was intact after laser melting. The Seebeck coefficient, figure of merit ZT, electrical and thermal conductivity, specific heat, and thermal diffusivity of the samples were also investigated at temperatures up to 500 °C.
Of particular interest is the ZT of these Bi₂Te₃ samples, which was found to decrease with increasing temperature up to around 300 °C, increase slightly between 300 and 400 °C, and then increase sharply with further increases in temperature. The highest achieved ZT value (for an n-type sample) was about 0.11. The bulk thermoelectric and electrical properties of samples produced using SLS were comparable to those of thermoelectric materials produced using conventional manufacturing methods. This was the first time the SLS method of thermoelectric material production had been used successfully. [112] Thermoelectric materials are commonly used in thermoelectric generators to convert thermal energy into electricity. Thermoelectric generators have the advantages of no moving parts and of requiring no chemical reaction for energy conversion, which makes them stand out from other sustainable energy sources such as wind turbines and solar cells. Nevertheless, the mechanical performance of thermoelectric generators may decay over time due to plastic, fatigue and creep deformation, as a result of being subjected to complex and time-varying thermomechanical stresses. In their research, Al-Merbati et al. [115] found that stress levels around the leg corners of thermoelectric devices were high and generally increased closer to the hot side, but that switching to a trapezoidal leg geometry reduced the thermal stresses. Erturun et al. [116] compared various leg geometries and found that rectangular-prism and cylindrical legs experienced the highest stresses. Studies have also shown that using thinner and longer legs can significantly relieve stress. [117][118][119][120] Tachibana and Fang [121] estimated the relationship between thermal stress, temperature difference, coefficient of thermal expansion, and module dimensions. They found that the thermal stress is proportional to $$\left(L \cdot \alpha \cdot \frac{\Delta T}{h}\right)^2$$ where L, α, ΔT and h are the module thickness, coefficient of thermal expansion (CTE), temperature difference and leg height, respectively (see the sketch at the end of this section). Clin et al. [122] conducted finite-element analysis to replicate thermal stresses in a thermoelectric module and concluded that the thermal stresses depended on the mechanical boundary conditions of the module and on the CTE mismatch between its components; the corners of the legs exhibited the maximum stresses. In a separate investigation, Turenne et al. [123] examined the stress distribution in large freestanding thermoelectric modules and in modules rigidly fixed between two heat-exchange surfaces. Although the boundary conditions significantly altered the stress distribution, the authors deduced that external compressive loading on the TE module created global compressive stresses. Thermoelectric materials commonly contain different types of defects, such as dislocations, vacancies, secondary phases and antisite defects, which can affect thermoelectric performance by evolving under service conditions. In 2019, Yun Zheng et al. [124] studied the thermal fatigue of Bi₂Te₃-based materials and proposed that their fatigue behavior can be mitigated by boosting the fracture toughness, by introducing pores, microcracks or inclusions, with an inextricable trade-off against fracture strength.
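To see what the Tachibana-Fang scaling implies in practice, the sketch below evaluates the relative stress measure (L·α·ΔT/h)² for two leg heights. The geometry and CTE values are illustrative assumptions, so only the ratio between the two cases is meaningful.

```python
# Sketch: relative thermal stress from the Tachibana-Fang scaling,
#   stress ∝ (L * alpha * dT / h)^2.
# Geometry and CTE values are illustrative placeholders; only ratios matter.

def relative_thermal_stress(L_mm, alpha_per_K, dT_K, h_mm):
    return (L_mm * alpha_per_K * dT_K / h_mm) ** 2

base   = relative_thermal_stress(L_mm=3.0, alpha_per_K=15e-6, dT_K=200.0, h_mm=1.5)
taller = relative_thermal_stress(L_mm=3.0, alpha_per_K=15e-6, dT_K=200.0, h_mm=3.0)

# Doubling the leg height h cuts this stress measure by 4x, consistent
# with reports that thinner, longer legs relieve stress.
print(f"stress(taller legs) / stress(base) = {taller / base:.2f}")
```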
Thermoelectric materials can undergo thermal shock loading through service-temperature spikes and through soldering and metallizing processes: the thermoelectric leg can be coated with metals to form the required diffusion barrier (metallizing) and the metallized leg dipped in a molten alloy bath (soldering) to connect the leg to the interconnect. In a study by Pelletier et al., [125] thermoelectric disks were quenched for thermal shock experiments. They found that quenching in a hot medium produced compressive stresses at the disks' surface, in contrast to the core, which developed tensile stress. Anisotropic materials and thin disks were reported to develop greater maximum stresses. They also observed specimens fracturing during quenching in a soldering bath from room temperature. Thermal stresses in thermoelectric modules have been quantified and extensively studied over the years, but von Mises stresses are most commonly reported. The von Mises stress defines a criterion for plastic yielding without carrying any information about the nature of the stress. For instance, in a study by Sakamoto et al. [126] the mechanical stability of a Mg₂Si-based structure was investigated that could utilize thermoelectric legs set at an angle to the electrical interconnects and substrates. Maximum tensile stresses were calculated and compared with the ultimate tensile strengths of different materials. This approach can be misleading for brittle materials (such as ceramics), as they do not possess a well-defined tensile strength. In 2018, Chen et al. [127] investigated the cracking failure of Cu pillar bumps caused by electromigration under thermoelectric coupling load. They showed that under such load the bumps experience severe Joule heating and current densities that can accumulate thermomechanical stress and drive microstructure evolution. They also pointed out that the difference in CTE between materials in a flip-chip package causes thermal mismatch stress, which can later cause cavities to expand along the cathode into cracks. It is also worth noting that thermal-electrical coupling can cause electromigration, microcracks and delamination, owing to temperature and stress concentrations that can fail Cu pillar bumps. Phase transformation can occur in thermoelectric materials, as in many other energy materials. As pointed out by Al Malki et al., [128] phase transformation can lead to a net plastic strain when internal mismatch stresses are biased by a shear stress. The alpha phase of Ag₂S transforms to a body-centered cubic phase; Liang et al. [129] observed a crack form on heating through this phase transformation at 407 K. Creep deformation is a time-dependent mechanism in which strain accumulates as a material is subjected to external or internal stresses at a high homologous temperature, in excess of T/Tₘ = 0.5 (where Tₘ is the melting point in K). [128] This phenomenon can emerge in thermoelectric devices after long operation (months to years). Coarse-grained or monocrystalline structures have been shown to be desirable as creep-resistant materials. [130] Thermoelectric materials can be used as refrigerators, called "thermoelectric coolers" or "Peltier coolers" after the Peltier effect that governs their operation. As a refrigeration technology, Peltier cooling is far less common than vapor-compression refrigeration.
The main advantages of a Peltier cooler (compared with a vapor-compression refrigerator) are its lack of moving parts or refrigerant, and its small size and flexible shape (form factor). [131] Its main disadvantage is low efficiency: it is estimated that materials with ZT > 3 (about 20–30% of Carnot efficiency) would be required to replace traditional coolers in most applications. [81] Today, Peltier coolers are used only in niche applications, especially at small scale, where efficiency is not important. [131] Thermoelectric efficiency depends on the figure of merit, ZT. There is no theoretical upper limit to ZT, and as ZT approaches infinity the thermoelectric efficiency approaches the Carnot limit (see the sketch at the end of this section). However, until recently no known thermoelectric had a ZT > 3. [132] In 2019, researchers reported a material with an estimated ZT between 5 and 6. [133][134] As of 2010, thermoelectric generators serve application niches where efficiency and cost are less important than reliability, light weight, and small size. [135][136] Internal combustion engines capture only 20–25% of the energy released during fuel combustion. [135][137] Increasing the conversion rate can increase mileage and provide more electricity for on-board controls and creature comforts (stability controls, telematics, navigation systems, electronic braking, etc.). [138] It may be possible in some cases to shift energy draw from the engine to the electrical load in the car, e.g. electrical power steering or electrical coolant-pump operation. [135][137] Cogeneration power plants use the heat produced during electricity generation for alternative purposes, which is more profitable in industries with large amounts of waste energy. [135] Thermoelectrics may find applications in such systems or in solar thermal energy generation. [135][139]
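The link between ZT and efficiency quoted above can be made explicit with the standard constant-property expression for the maximum efficiency of a thermoelectric generator. The sketch below evaluates it for an illustrative 300 K / 500 K temperature pair; note that the "fraction of Carnot" achieved at a given ZT depends on the chosen temperatures, so these percentages are examples rather than universal values.

```python
import math

# Sketch: maximum generator efficiency vs. device figure of merit, from the
# standard constant-property expression
#   eta_max = (dT/T_h) * (sqrt(1+ZT) - 1) / (sqrt(1+ZT) + T_c/T_h).
# The 300 K / 500 K temperature pair is illustrative.

def eta_max(ZT, T_c=300.0, T_h=500.0):
    carnot = (T_h - T_c) / T_h
    s = math.sqrt(1.0 + ZT)
    return carnot * (s - 1.0) / (s + T_c / T_h)

carnot = (500.0 - 300.0) / 500.0
for ZT in (1, 3, 6):
    e = eta_max(ZT)
    print(f"ZT = {ZT}: eta_max ~ {e * 100:4.1f}% "
          f"({e / carnot * 100:4.1f}% of Carnot)")
```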
https://en.wikipedia.org/wiki/Thermoelectric_materials
Thermoelectric temperature control is the use of the thermoelectric effect, specifically the Peltier effect, to heat or cool materials by applying an electric current across them. [1] A typical Peltier cell absorbs heat on one side and releases heat on the other, [1] so Peltier cells can be used for temperature control. [1] However, current use of this effect for air conditioning on a large scale (for homes or commercial buildings) is rare because of its low efficiency and high cost relative to other options. [1] A typical Peltier-cell-based heat pump can be realized by coupling the thermoelectric elements with photovoltaic air-cooled panels, as described in the PhD thesis of Alexandra Thedeby. [2] Such a system, combined with an air-handling plant, provides heating on one side and cooling on the other, [3] and by changing the configuration it allows both winter and summer acclimatization. [4] These elements are expected to be effective for zero-energy buildings if coupled with solar thermal energy and photovoltaics, [5] with particular reference to creating radiant heat pumps on the walls of a building. [6] This acclimatization method reaches its best efficiency during summer cooling when coupled with a photovoltaic generator, and the air circulation can also be used to cool the PV modules. The most important engineering requirement is the accurate design of heat sinks [7] to optimize the heat exchange and minimize fluid-dynamic losses. The efficiency can be characterized by the relation $$\eta = \frac{T_C - T_H}{T_C}$$ where $T_C$ is the temperature of the cooling surface and $T_H$ is the temperature of the heating surface. The key energy phenomena, and the reason for using thermoelectric elements as heat pumps, reside in the energy fluxes that those elements realize, [8][9] expressed in terms of $\Delta T = T_H - T_C$, the electric current $I$, the Seebeck coefficient $\alpha$, the electrical resistance $R$, the surface area $S$, the cell thickness $d$, and the thermal conductivity $k$. The efficiencies of the system, including the COP, can be calculated according to Cannistraro [10] (a generic lumped-parameter estimate is sketched at the end of this section). Thermoelectric heat pumps can readily be used for local acclimatization, removing local discomfort. [11] For example, thermoelectric ceilings are today at an advanced research stage, [12] with the aim of increasing indoor comfort according to Fanger's criteria, [13] such as in the presence of large glazed surfaces, and of acclimatizing small buildings when coupled with solar systems. [14][15] Such systems are of key importance for new zero-emission passive buildings because of their very high COP [16] and the high performance achievable through careful exergy optimization of the system. [17] At the industrial level, thermoelectric acclimatization appliances are currently under development. [18]
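A lumped-parameter model commonly used for Peltier modules illustrates why the COP is modest: Seebeck heat pumping competes with Joule heating and conduction leakage through the cell. The module parameters below are illustrative assumptions, not the datasheet values of any specific device, and the model itself is a textbook simplification rather than the specific formulation cited above.

```python
# Sketch: cooling power and COP of an idealized Peltier module from the
# standard lumped model (Seebeck pumping - Joule heating - conduction leak):
#   Q_c = alpha*I*T_c - 0.5*I^2*R - K*dT,  COP = Q_c / (alpha*I*dT + I^2*R).
# Module parameters are illustrative assumptions.

alpha = 0.05          # module Seebeck coefficient, V/K
R = 2.0               # module electrical resistance, ohm
K = 0.5               # module thermal conductance, W/K
T_c, T_h = 285.0, 300.0
dT = T_h - T_c

def cooling_power(I):
    return alpha * I * T_c - 0.5 * I ** 2 * R - K * dT

def cop(I):
    electrical_power = alpha * I * dT + I ** 2 * R
    return cooling_power(I) / electrical_power

for I in (1.0, 2.0, 4.0):
    print(f"I = {I:.0f} A: Q_c ~ {cooling_power(I):5.2f} W, COP ~ {cop(I):4.2f}")
```

Running the sketch shows the characteristic trade-off: raising the current pumps more heat but the I²R term grows faster, so the COP falls from roughly 2 toward 1 and below as the drive current increases.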
https://en.wikipedia.org/wiki/Thermoelectric_temperature_control
In electrochemistry, a thermogalvanic cell is a kind of galvanic cell in which heat is employed to provide electrical power directly. [1][2] These cells are electrochemical cells in which the two electrodes are deliberately maintained at different temperatures; this temperature difference generates a potential difference between the electrodes. [3][4] The electrodes can be of identical composition and the electrolyte solution homogeneous, and this is usually the case, [5] in contrast to galvanic cells, in which electrodes and/or solutions of different composition provide the electromotive potential. As long as there is a temperature difference between the electrodes, a current will flow through the circuit. A thermogalvanic cell can be seen as analogous to a concentration cell, but instead of running on differences in the concentration or pressure of the reactants, it makes use of differences in the "concentration" of thermal energy. [6][7][8] The principal application of thermogalvanic cells is the production of electricity from low-temperature heat sources (waste heat and solar heat). Their energy efficiency is low, in the range of 0.1% to 1% for the conversion of heat into electricity. [7] The use of heat to power galvanic cells was first studied around 1880, [9] but it was not until the 1950s that more serious research was undertaken in this field. [3] Thermogalvanic cells are a kind of heat engine: ultimately, the driving force behind them is the transport of entropy from the high-temperature source to the low-temperature sink. [10] These cells therefore work thanks to a thermal gradient established between different parts of the cell. Because the rate and enthalpy of chemical reactions depend directly on temperature, different temperatures at the electrodes imply different chemical equilibrium constants, which translates into unequal chemical equilibrium conditions on the hot and cold sides. The thermocell tries to approach a homogeneous equilibrium and, in doing so, produces a flow of chemical species and electrons. The electrons flow through the path of least resistance (the outer circuit), making it possible to extract power from the cell. Different thermogalvanic cells have been constructed according to their intended uses and properties; they are usually classified by the electrolyte employed in each specific type of cell. In aqueous cells, the electrolyte between the electrodes is a water solution of a salt or hydrophilic compound. [5] An essential property of these compounds is that they must be able to undergo redox reactions in order to shuttle electrons from one electrode to the other during cell operation. In non-aqueous cells, the electrolyte is a solution in some solvent other than water; [5] solvents such as methanol, acetone, dimethyl sulfoxide and dimethyl formamide have been successfully employed in thermogalvanic cells running on copper sulfate. [11] In molten-salt thermocells, the electrolyte is a salt with a relatively low melting point. Their use solves two problems: on the one hand, the temperature range of the cell is much larger, an advantage because these cells produce more power the larger the difference between the hot and cold sides; on the other hand, the liquid salt directly provides the anions and cations necessary to sustain a current through the cell, so no additional current-carrying compounds are needed, the melted salt being the electrolyte itself. [12]
Typical hot-source temperatures are between 600 and 900 K but can reach 1730 K; cold-sink temperatures are in the 400–500 K range. Thermocells in which the electrolyte connecting the electrodes is a solid ionic material have also been considered and constructed. [5] The temperature range is likewise elevated compared with liquid electrolytes, with studied systems falling in the 400–900 K range. Solid ionic materials that have been employed to construct thermogalvanic cells include AgI, PbCl₂ and PbBr₂. The main application of thermogalvanic cells is electricity production under conditions where excess heat is available, and in particular they are being used in the following areas. In solar thermal electricity generation, the collected heat generates steam, which can be used in a conventional steam-turbine system to make electricity. In contrast to the low-temperature solar thermal systems used for air or water heating in domestic or commercial buildings, these solar thermal electricity plants operate at high temperatures, requiring both concentrated sunlight and a large collection area, making the Moroccan desert an ideal location. This is an alternative approach to the more widely used photovoltaic technology for producing electricity from sunlight; in a photovoltaic system, the sunlight is absorbed in the photovoltaic device (commonly called a solar cell) and energy is passed to electrons in the material, converting the solar energy directly into electricity. Solar thermal electricity and photovoltaics are sometimes portrayed as competing technologies and, while this may be true when deciding the way forward for a specific site, in general they are complementary, using solar energy as extensively as possible. Thermogalvanic cells can extract a useful quantity of energy from waste-heat sources even when the temperature gradient is less than 100 °C (sometimes only a few tens of degrees), as is often the case in many industrial settings. [13] Research has suggested that a thermogalvanic hydrogel could be used to generate electricity from the heat produced by a mobile-phone battery while the phone is in use, while also cooling the battery. [14] Thermogalvanic cooling uses chemical energy to move heat from a colder to a hotter body. It was claimed in 2025 that thermogalvanic cooling technology could be a sustainable alternative to the vapour-compression technology used in refrigerators, and that the cooling power of thermogalvanic cells could be improved by 70% by optimizing the electrolytes, making such cooling viable. [14][15][16]
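A back-of-the-envelope sketch of a thermogalvanic cell's electrical output follows. The temperature coefficient used is typical of the aqueous ferri-/ferrocyanide redox couple often cited in this field; the internal resistance and temperature difference are illustrative assumptions, so the output figures are order-of-magnitude estimates only.

```python
# Sketch: open-circuit voltage and matched-load power of a thermogalvanic
# cell, V_oc = S_cell * dT. The ~1.4 mV/K coefficient is typical of aqueous
# ferri-/ferrocyanide cells; R_int and dT are illustrative assumptions.

S_cell = 1.4e-3   # V/K, redox-couple temperature coefficient
dT = 40.0         # K, hot-cold electrode temperature difference
R_int = 5.0       # ohm, assumed internal cell resistance

V_oc = S_cell * dT                  # open-circuit voltage
P_max = V_oc ** 2 / (4.0 * R_int)   # maximum power into a matched load

print(f"V_oc ~ {V_oc * 1e3:.0f} mV, P_max ~ {P_max * 1e6:.0f} uW")
```

The tens-of-millivolts, sub-milliwatt scale of this estimate is consistent with the low (0.1–1%) heat-to-electricity efficiencies quoted earlier, and explains why these cells target abundant low-grade waste heat rather than primary power generation.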
https://en.wikipedia.org/wiki/Thermogalvanic_cell
Thermogenesis is the process of heat production in organisms. It occurs in all warm-blooded animals, and also in a few species of thermogenic plants such as the Eastern skunk cabbage, the voodoo lily (Sauromatum venosum), and the giant water lilies of the genus Victoria. The lodgepole pine dwarf mistletoe, Arceuthobium americanum, disperses its seeds explosively through thermogenesis. [1] Depending on whether or not they are initiated through locomotion and intentional movement of the muscles, thermogenic processes can be classified as shivering or non-shivering. One method of raising temperature is shivering, which produces heat because the conversion of the chemical energy of ATP into kinetic energy causes almost all of the energy to appear as heat. Shivering is the process by which the body temperature of hibernating mammals (such as some bats and ground squirrels) is raised as these animals emerge from hibernation. Non-shivering thermogenesis occurs in brown adipose tissue (brown fat), [4] which is present in almost all eutherians (swine being the only currently known exception [5][6]). [7] Brown adipose tissue has a unique uncoupling protein (thermogenin, also known as uncoupling protein 1) that allows proton transport across the inner mitochondrial membrane to be uncoupled from ATP synthesis, enabling the mitochondria to burn fatty acids and oxygen to generate heat. [8] The atomic structure of human uncoupling protein 1 (UCP1) has been solved by cryogenic electron microscopy; the structure has the typical fold of a member of the SLC25 family. [9][10] UCP1 is locked in a cytoplasmic-open state by guanosine triphosphate in a pH-dependent manner, preventing proton leak. [11] In this process, substances such as free fatty acids (derived from triacylglycerols) remove the purine-nucleotide (ADP, GDP and others) inhibition of thermogenin, which causes an influx of H⁺ into the mitochondrial matrix that bypasses the ATP synthase channel. This uncouples oxidative phosphorylation, and the energy of the proton-motive force is dissipated as heat rather than being used to produce ATP from ADP, which would store chemical energy for the body's use. Thermogenesis can also be produced by leakage of the sodium-potassium pump and the Ca²⁺ pump. [12] Thermogenesis is contributed to by futile cycles, such as the simultaneous occurrence of lipogenesis and lipolysis [13] or of glycolysis and gluconeogenesis. In a broader context, futile cycles can be influenced by activity/rest cycles such as the Summermatter cycle. [14] Acetylcholine stimulates muscle to raise the metabolic rate. [15] The low demands of thermogenesis mean that free fatty acids draw, for the most part, on lipolysis as the method of energy production. A comprehensive list of human and mouse genes regulating cold-induced thermogenesis (CIT) in living animals (in vivo) or tissue samples (ex vivo) has been assembled and is available in CITGeneDB. [16] The biological processes that allow for thermogenesis in animals did not evolve from a single common ancestor. [17] Rather, the avian (bird) and eutherian (placental mammal) lineages developed the ability to perform thermogenesis independently, through separate evolutionary processes. [17] Because the same character evolved independently in two different lineages after their last known common ancestor, thermogenic processes are classified as an example of convergent evolution.
However, while both clades are capable of performing thermogenesis, the biological processes involved are different. The reason that avians and eutherians both developed the capacity to perform thermogenesis is a subject of ongoing study by evolutionary biologists , and two competing explanations have been proposed to explain why this character appears in both lineages. [ 17 ] One explanation for the convergence is the "aerobic capacity" model. This theory suggests that natural selection favored individuals with higher resting metabolic rates , and that as the metabolic capacity of birds and eutherians increased, they developed the capacity for endothermic thermogenesis. [ 18 ] Researchers have linked high levels of oxygen consumption with high resting metabolic rates, suggesting that the two are directly correlated. Rather than animals developing the capacity to maintain high and stable body temperatures only to be able to thermoregulate without the aid of the environment, this theory suggests that thermogenesis is actually a by-product of natural selection for higher aerobic and metabolic capacities. [ 19 ] These higher metabolic capacities may initially have evolved for the simple reason that animals capable of metabolizing more oxygen for longer periods of time would have been better suited to, for example, run from predators or gather food. [ 19 ] This model explaining the development of thermogenesis is older and more widely accepted among evolutionary biologists who study thermogenesis. The second explanation is the "parental care" model. This theory proposes that the convergent evolution of thermogenesis in birds and eutherians is based on shared behavioral traits . Specifically, birds and eutherians both provide high levels of parental care to young offspring. This high level of care is theorized to give newborn or newly hatched animals the opportunity to mature more rapidly because they have to expend less energy to satisfy their food, shelter, and temperature needs. [ 17 ] The "parental care" model thus proposes that higher aerobic capacity was selected for in parents as a means of meeting the needs of their offspring . [ 18 ] While the "parental care" model does differ from the "aerobic capacity" model, it shares some similarities in that both explanations for the rise of thermogenesis rest on natural selection favoring individuals with higher aerobic capacities for one reason or another. The primary difference between the two theories is that the "parental care" model proposes that a specific biological function (childcare) resulted in selective pressure for higher metabolic rates. Despite both relying on similar explanations for the process by which organisms gained the capacity to perform non-shivering thermogenesis, neither of these explanations has secured a large enough consensus to be considered completely authoritative on the convergent evolution of NST in birds and mammals, and scientists continue to conduct studies which support both positions. [ 19 ] [ 17 ] [ 18 ] Brown Adipose Tissue (BAT) thermogenesis is one of the two known forms of non-shivering thermogenesis (NST). This type of heat generation occurs only in eutherians, not in birds or other thermogenic organisms. BAT NST occurs when Uncoupling Protein 1 (UCP1) uncouples oxidative phosphorylation in eutherians' bodies, so that the energy of substrate oxidation is released as heat rather than captured as ATP (Berg et al., 2006, p. 1178).
[ 20 ] This process generally only begins in eutherians after they have been subjected to low temperatures for an extended period of time, after which the process allows an organism's body to maintain a high and stable temperature without a reliance on environmental thermoregulation mechanisms (such as sunlight/shade). Because eutherians are the only clade which store brown adipose tissue, scientists previously thought that UCP1 evolved in conjunction with brown adipose tissue. However, recent studies have shown that UCP1 can also be found in non-eutherians like fish, birds, and reptiles. [ 21 ] This discovery means that UCP1 probably existed in a common ancestor before the radiation of the eutherian lineage. Since this evolutionary split, though, UCP1 has evolved independently in eutherians, through a process which scientists believe was not driven by natural selection, but rather by neutral processes like genetic drift . [ 21 ] The second form of NST occurs in skeletal muscle. While eutherians use both BAT and skeletal muscle NST for thermogenesis, birds only use the latter form. This process has also been shown to occur in rare instances in fish . [ 17 ] In skeletal muscle NST, calcium ions are cycled across muscle cell membranes in a futile cycle, consuming ATP and generating heat. [ 17 ] Even though BAT NST was originally thought to be the only process by which animals could maintain endothermy, scientists now suspect that skeletal muscle NST was the original form of the process and that BAT NST developed later. [ 17 ] Though scientists once also believed that only birds maintained their body temperatures using skeletal muscle NST, research in the late 2010s showed that mammals and other eutherians also use this process when they do not have adequate stores of brown adipose tissue in their bodies. [ 22 ] Skeletal muscle NST might also be used to maintain body temperature in heterothermic mammals during states of torpor or hibernation . [ 17 ] Given that early eutherians and the reptiles which later evolved into avian lineages were either heterothermic or ectothermic, both forms of NST are thought not to have developed fully until after the K–Pg extinction roughly 66 million years ago. [ 23 ] However, some estimates place the evolution of these characters earlier, at roughly 100 mya. [ 24 ] It is most likely that the evolution of thermogenesis as it currently exists began prior to the K–Pg extinction and ended well after it. The fact that skeletal muscle NST is common among eutherians during periods of torpor and hibernation further supports the theory that this form of thermogenesis is older than BAT NST. This is because early eutherians would not have had the capacity for non-shivering thermogenesis as it currently exists, so they more frequently used torpor and hibernation as means of thermal regulation, relying on systems which, in theory, predate BAT NST. However, there remains no consensus among evolutionary biologists on the order in which the two processes evolved, nor an exact timeframe for their evolution. Non-shivering thermogenesis is regulated mainly by the synergistic effect of thyroid hormone (TH) and the sympathetic nervous system (SNS) on brown adipose tissue. When BAT is stimulated by norepinephrine released by the SNS, this triggers an intracellular cascade which increases the conversion of the less active thyroxine (T4) to the more active triiodothyronine (T3) within the tissue. T3 then increases the expression of UCP1 in BAT, enhancing heat production.
[ 25 ] TH also increases obligatory thermogenesis through stimulating metabolism, energy production and utilization. Other sources of heat production stimulated by TH include the sodium-potassium pump and calcium ion cycling in muscle. [ 26 ] Rising insulin levels after eating may be responsible for diet-induced thermogenesis ( thermic effect of food ) through increased glucose uptake. [ 27 ] Intranasal insulin has been shown to increase metabolic rate by inhibiting warm-sensitive hypothalamic neurons, whose role is to lower body temperature in response to perceived warmth. [ 28 ] Inhibiting these neurons stimulates BAT thermogenesis. [ 29 ] Progesterone also increases body temperature. While commonly thought to directly stimulate BAT thermogenesis, the mechanism by which leptin increases thermogenesis is through inhibiting torpor, which raises the body temperature threshold at which heat-conserving mechanisms such as vasoconstriction start to occur. Leptin-deficient mice perceive a deficit in energy, triggering the body to conserve energy by reducing metabolic rate (torpor), which also lowers the body temperature threshold. [ 30 ] There are several pharmaceuticals that can stimulate different types of thermogenesis, with varying levels of safety. Caffeine, for example, has been shown to increase both resting metabolic rate and energy expenditure from exercise, thus enhancing obligatory thermogenesis as well as exercise-induced thermogenesis. [ 31 ] [ 32 ] Caffeine has also been used in combination with ephedrine, a sympathomimetic, and aspirin, a mitochondrial uncoupler, to promote weight loss, and this combination has shown some clinical efficacy. [ 33 ] [ 34 ] [ 35 ] Ephedrine was banned by the FDA in 2004 due to an increased risk of side effects such as hypertension, tachycardia, and stroke, which contribute to an increased risk of death or permanent disability. [ 36 ] [ 37 ] Caffeine is generally considered to be safe at doses up to 400 mg/day, with increasing risk of cardiac events and seizures at higher doses. [ 38 ] 2,4-Dinitrophenol (DNP) is another uncoupler, which is much more potent than aspirin and also more toxic, with a risk of triggering hyperthermia, tachycardia, and tachypnea that can ultimately be fatal. The chemical uncoupling of oxidative phosphorylation by DNP causes low ATP in cells by allowing protons to leak through the mitochondrial membrane instead of being used by ATP synthase, which leads to loss of energy as heat, triggering rapid catabolism of fats and carbohydrates (and thus weight loss) to replenish ATP levels, which further amplifies heat production. [ 39 ] It has historically been used as a weight-loss agent but, despite being banned for human consumption in 1938 due to its toxicity, is still widely available, mainly through online pharmacies. [ 40 ] A method named the thermogenin-like system (TLS) has recently been proposed to produce thermogenesis from white adipose tissue or from other substantial tissues (such as endothelial or muscle cells). Ultimately, this could lead to new therapeutic methods for treating morbid obesity or severe diabetes. The proposed model is purely theoretical and relies on the use of light-activated PoXeR pumps integrated into the inner membrane of mitochondria. These pumps allow the passage of protons in such a way that the proton motive force is reduced. This would enable greater consumption of blood glucose by white adipose, endothelial, or muscle cells, thereby potentially lowering blood glucose levels.
The explanation is that glycolysis is accelerated when glucose enters the cells, and its products then feed the Krebs cycle in the mitochondria. Since muscle cells contain many mitochondria, expressing PoXeR pumps in this tissue is also of interest. [ 41 ] However, the method is invasive, relies on gene therapy, and would require several clinical trials as well as hospitalization to integrate the system into white adipose or muscle tissue in the abdominal region. It is also a light-responsive system. Since light does not penetrate the skin from the outside, the system must include an under-skin component that alternates activation of green light for a certain duration with deactivation for another period. This cycle repeats over several weeks, in particular to recharge the light system. To ensure that ATP levels do not drop too low (otherwise the cell dies), the system self-regulates: a mechanism is needed that continuously provides light without significantly lowering ATP levels. As luciferase can emit light in exchange for ATP, if ATP levels decrease too drastically the light stops, ATP levels rise again, and the light is reactivated to induce thermogenesis.
https://en.wikipedia.org/wiki/Thermogenesis
[Gene infobox identifiers: human UCP1 – Entrez 7350, Ensembl ENSG00000109424, UniProt P25874, RefSeq NM_021833 / NP_068605; mouse Ucp1 – Entrez 22227, Ensembl ENSMUSG00000031710, UniProt P12242, RefSeq NM_009463 / NP_033489.] Thermogenin (called uncoupling protein by its discoverers and now known as uncoupling protein 1, or UCP1 ) [ 5 ] is a mitochondrial carrier protein found in brown adipose tissue (BAT). It is used to generate heat by non-shivering thermogenesis , and makes a quantitatively important contribution to countering heat loss in babies, which would otherwise occur due to their high surface area-to-volume ratio. UCP1 belongs to the UCP family , which are transmembrane proteins that decrease the proton gradient generated in oxidative phosphorylation. They do this by increasing the permeability of the inner mitochondrial membrane, allowing protons that have been pumped into the intermembrane space to return to the mitochondrial matrix , and hence dissipating the proton gradient. UCP1-mediated heat generation in brown fat uncouples the respiratory chain, allowing for fast substrate oxidation with a low rate of ATP production. UCP1 is related to other mitochondrial metabolite transporters such as the adenine nucleotide translocator; it is a proton channel in the mitochondrial inner membrane that permits the translocation of protons from the mitochondrial intermembrane space to the mitochondrial matrix . UCP1 is restricted to brown adipose tissue , where it provides a mechanism for the enormous heat-generating capacity of the tissue. UCP1 is activated in the brown fat cell by fatty acids and inhibited by nucleotides. [ 6 ] Fatty acids are released by the following signaling cascade: sympathetic nervous system terminals release norepinephrine onto a beta-3 adrenergic receptor on the plasma membrane . This activates adenylyl cyclase , which catalyses the conversion of ATP to cyclic AMP (cAMP). cAMP activates protein kinase A , causing its active C subunits to be freed from its regulatory R subunits. Active protein kinase A, in turn, phosphorylates triacylglycerol lipase , thereby activating it. The lipase converts triacylglycerols into free fatty acids, which activate UCP1, overriding the inhibition caused by purine nucleotides ( GDP and ADP ). During the termination of thermogenesis, thermogenin is inactivated and residual fatty acids are disposed of through oxidation, allowing the cell to resume its normal energy-conserving state. UCP1 is very similar to the ATP/ADP carrier protein, or adenine nucleotide translocator ( ANT ). [ 7 ] [ 8 ] The proposed alternating access model for UCP1 is based on the similar ANT mechanism. [ 9 ] The substrate comes into the half-open UCP1 protein from the cytoplasmic side of the membrane, the protein closes the cytoplasmic side so the substrate is enclosed in the protein, and then the matrix side of the protein opens, allowing the substrate to be released into the mitochondrial matrix . The opening and closing of the protein is accomplished by the tightening and loosening of salt bridges at the membrane surface of the protein. Substantiation for this modelling of UCP1 on ANT is found in the many conserved residues between the two proteins that are actively involved in the transportation of substrate across the membrane. Both proteins are integral membrane proteins , localized to the inner mitochondrial membrane, and they have a similar pattern of salt bridges, proline residues, and hydrophobic or aromatic amino acids that can close or open when in the cytoplasmic or matrix state.
[ 7 ] The atomic structure of human uncoupling protein 1 UCP1 has been solved by cryogenic-electron microscopy. [ 10 ] The structure has the typical fold of a member of the SLC25 family. [ 11 ] [ 12 ] UCP1 is locked in a cytoplasmic-open state by guanosine triphosphate in a pH-dependent manner, preventing proton leak. [ 10 ] UCP1 is expressed in brown adipose tissue, which is functionally found only in eutherians . The UCP1, or thermogenin, gene likely arose in an ancestor of modern vertebrates , but did not initially allow our vertebrate ancestor to use non-shivering thermogenesis for warmth. It wasn't until heat generation was adaptively selected for in placental mammal descendants of this common ancestor that UCP1 evolved its current function in brown adipose tissue to provide additional warmth. [ 13 ] While UCP1 plays a key thermogenic role in a wide range of placental mammals, particularly those with small body size and those that hibernate, the UCP1 gene has lost functionality in several large-bodied lineages (e.g. horses , elephants , sea cows , whales and hyraxes ) and lineages with low metabolic rates (e.g. pangolins , armadillos , sloths and anteaters ). [ 14 ] Recent discoveries of non-heat-generating orthologues of UCP1 in fish and marsupials , other descendants of the ancestor of modern vertebrates, show that this gene was passed on to all modern vertebrates, but aside from placental mammals, none have heat-producing capability. [ 15 ] This further suggests that UCP1 had a different original purpose, and in fact phylogenetic and sequence analyses indicate that UCP1 is likely a mutated form of a dicarboxylate carrier protein that adapted for thermogenesis in placental mammals. [ 16 ] Researchers in the 1960s investigating brown adipose tissue found that, in addition to producing more heat than typical of other tissues, brown adipose tissue seemed to short-circuit, or uncouple, respiration. [ 17 ] Uncoupling protein 1 was discovered in 1976 by David G. Nicholls , Vibeke Bernson and Gillian Heaton ; the discovery, published in 1978, showed it to be the protein responsible for this uncoupling effect. [ 18 ] UCP1 was first purified in 1980 and first cloned in 1988. [ 19 ] [ 20 ] Uncoupling protein 2 (UCP2), a homolog of UCP1, was identified in 1997. UCP2 localizes to a wide variety of tissues, and is thought to be involved in regulating reactive oxygen species (ROS). In the past decade, three additional homologs of UCP1 have been identified: UCP3 , UCP4 , and UCP5 (also known as BMCP1 or SLC25A14). Methods of delivering UCP1 to cells by gene transfer therapy, or methods of its upregulation, have been an important line of enquiry in research into the treatment of obesity, due to their ability to dissipate excess metabolic stores. [ 21 ]
https://en.wikipedia.org/wiki/Thermogenin
Thermogravimetric analysis or thermal gravimetric analysis ( TGA ) is a method of thermal analysis in which the mass of a sample is measured over time as the temperature changes. This measurement provides information about physical phenomena, such as phase transitions , absorption , adsorption and desorption ; as well as chemical phenomena including chemisorption , thermal decomposition , and solid-gas reactions (e.g., oxidation or reduction ). [ 1 ] Thermogravimetric analysis (TGA) is conducted on an instrument referred to as a thermogravimetric analyzer. A thermogravimetric analyzer continuously measures mass while the temperature of a sample is changed over time. Mass, temperature, and time are considered base measurements in thermogravimetric analysis, while many additional measures may be derived from these three base measurements. A typical thermogravimetric analyzer consists of a precision balance with a sample pan located inside a furnace with a programmable control temperature. The temperature is generally increased at a constant rate (or for some applications the temperature is controlled for a constant mass loss) to incur a thermal reaction. The thermal reaction may occur under a variety of atmospheres including: ambient air , vacuum , inert gas, oxidizing/reducing gases, corrosive gases, carburizing gases, vapors of liquids or "self-generated atmosphere"; as well as a variety of pressures including: a high vacuum, high pressure, constant pressure, or a controlled pressure. The thermogravimetric data collected from a thermal reaction is compiled into a plot of mass or percentage of initial mass on the y axis versus either temperature or time on the x axis. This plot, which is often smoothed , is referred to as a TGA curve . The first derivative of the TGA curve (the DTG curve) may be plotted to determine inflection points useful for in-depth interpretations as well as differential thermal analysis . A TGA can be used for materials characterization through analysis of characteristic decomposition patterns. It is an especially useful technique for the study of polymeric materials, including thermoplastics , thermosets , elastomers , composites , plastic films , fibers , coatings , paints , and fuels . There are three types of thermogravimetry: isothermal (constant temperature), quasistatic (the sample is heated to constant mass at each of a series of increasing temperatures), and dynamic (the temperature is changed in a predetermined manner, usually at a linear rate). TGA can be used to evaluate the thermal stability of a material. In a desired temperature range, if a species is thermally stable, there will be no observed mass change. Negligible mass loss corresponds to little or no slope in the TGA trace. TGA also gives the upper use temperature of a material. Beyond this temperature the material will begin to degrade. TGA is used in the analysis of polymers. Polymers usually melt before they decompose, thus TGA is mainly used to investigate the thermal stability of polymers. Most polymers melt or degrade before 200 °C. However, there is a class of thermally stable polymers that are able to withstand temperatures of at least 300 °C in air and 500 °C in inert gases without structural changes or strength loss, which can be analyzed by TGA. [ 2 ] [ 3 ] [ 4 ] The simplest materials characterization is the residue remaining after a reaction. For example, a combustion reaction could be tested by loading a sample into a thermogravimetric analyzer at normal conditions . The thermogravimetric analyzer would initiate combustion in the sample by heating it beyond its ignition temperature .
The resultant TGA curve, plotted with the y-axis as a percentage of initial mass, would show the residue at the final point of the curve. Oxidative mass losses are the most common observable losses in TGA. [ 5 ] Studying the resistance to oxidation in copper alloys is very important. For example, NASA (National Aeronautics and Space Administration) is conducting research on advanced copper alloys for their possible use in combustion engines . However, oxidative degradation can occur in these alloys as copper oxides form in atmospheres that are rich in oxygen. Resistance to oxidation is significant because NASA wants to be able to reuse shuttle materials. TGA can be used to study the static oxidation of materials such as these for practical use. Combustion during TG analysis is identifiable by the distinct traces made in the TGA thermograms produced. One interesting example occurs with samples of as-produced, unpurified carbon nanotubes that have a large amount of metal catalyst present. Due to combustion, a TGA trace can deviate from the normal form of a well-behaved function. This phenomenon arises from a rapid temperature change. When the weight and temperature are plotted versus time, a dramatic slope change in the first derivative plot is concurrent with the mass loss of the sample and the sudden increase in temperature seen by the thermocouple. The mass loss could result from particles of smoke released from burning caused by inconsistencies in the material itself, beyond the oxidation of carbon, due to poorly controlled weight loss. Different weight losses on the same sample at different points can also be used as a diagnostic of the sample's anisotropy. For instance, sampling the top side and the bottom side of a sample with dispersed particles inside can be useful to detect sedimentation, as the thermograms will not overlap but will show a gap between them if the particle distribution differs from side to side. [ 6 ] [ 7 ] Thermogravimetric kinetics may be explored for insight into the reaction mechanisms of thermal (catalytic or non-catalytic) decomposition involved in the pyrolysis and combustion processes of different materials. [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] Activation energies of the decomposition process can be calculated using the Kissinger method. [ 15 ] Though a constant heating rate is more common, a constant mass loss rate can illuminate specific reaction kinetics. For example, the kinetic parameters of the carbonization of polyvinyl butyral were found using a constant mass loss rate of 0.2 wt%/min. [ 16 ] Thermogravimetric analysis is often combined with other processes or used in conjunction with other analytical methods. For example, the TGA instrument continuously weighs a sample as it is heated to temperatures of up to 2000 °C for coupling with Fourier-transform infrared spectroscopy (FTIR) and mass spectrometry gas analysis. As the temperature increases, various components of the sample are decomposed and the weight percentage of each resulting mass change can be measured. DTA can be used to study any process in which heat is absorbed or liberated.
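The Kissinger method mentioned above admits a compact worked example: decomposition runs at several heating rates β give DTG peak temperatures Tp, and a plot of ln(β/Tp²) against 1/Tp is a straight line of slope −Ea/R. The Python sketch below uses invented heating rates and peak temperatures purely for illustration; they are not data from the studies cited here.

```python
# Kissinger analysis: activation energy from the shift of the DTG peak
# temperature Tp with heating rate beta. Slope of ln(beta/Tp^2) vs 1/Tp = -Ea/R.
import numpy as np

R = 8.314  # gas constant, J/(mol K)

beta = np.array([5.0, 10.0, 20.0, 40.0])     # heating rates, K/min (assumed)
tp = np.array([610.0, 622.0, 635.0, 649.0])  # DTG peak temperatures, K (assumed)

x = 1.0 / tp
y = np.log(beta / tp**2)
slope, intercept = np.polyfit(x, y, 1)       # least-squares straight line

ea = -slope * R                              # activation energy, J/mol
print(f"E_a ≈ {ea / 1e3:.0f} kJ/mol")        # ~165 kJ/mol for these numbers
```

Note that the slope, and hence Ea, is unaffected by the units chosen for β; only the intercept (which carries the pre-exponential factor) depends on them.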
https://en.wikipedia.org/wiki/Thermogravimetric_analysis
A thermogravitational cycle is a reversible thermodynamic cycle using the gravitational works of weight and buoyancy to respectively compress and expand a working fluid . Consider a column filled with a transporting medium and a balloon filled with a working fluid . Due to the hydrostatic pressure of the transporting medium, the pressure inside the column increases along the z axis (see figure). Initially, the balloon is inflated by the working fluid at temperature T C and pressure P 0 and located on top of the column. A thermogravitational cycle is decomposed into four ideal steps: [ 1 ] descent of the balloon down the column (1→2), during which the working fluid is compressed by the increasing hydrostatic pressure; heat gain from the hot source at the bottom (2→3); ascent of the balloon (3→4), during which the working fluid expands as the hydrostatic pressure decreases; and heat rejection to the cold sink at the top (4→1). For a thermogravitational cycle to occur, the balloon has to be denser than the transporting medium during the 1→2 step and less dense during the 3→4 step. If these conditions are not naturally satisfied by the working fluid, a weight can be attached to the balloon to increase its effective mass density. An experimental device working according to the thermogravitational cycle principle was developed in a laboratory of the University of Bordeaux and patented in France. [ 2 ] This thermogravitational electric generator is based on inflation and deflation cycles of an elastic bag made of nitrile elastomer cut from a glove finger. [ 1 ] The bag is filled with a volatile working fluid that has low chemical affinity for the elastomer, such as perfluorohexane (C 6 F 14 ). It is attached to a strong NdFeB spherical magnet that acts both as a weight and as a transducer of mechanical energy into voltage. The glass cylinder is filled with water acting as the transporting fluid. It is heated at the bottom by a hot circulating water-jacket, and cooled down at the top by a cold water bath. Due to its low boiling point (56 °C), the perfluorohexane drop contained in the bag vaporizes and inflates the balloon. Once its density is lower than the water density, the balloon rises according to Archimedes' principle . Cooled down at the column top, the balloon deflates partially until it becomes effectively denser than water and starts to fall. The cyclic motion has a period of several seconds. These oscillations can last for several hours, and their duration is limited only by leaks of the working fluid through the rubbery membrane. Each pass of the magnet through the coil produces a variation in the magnetic flux . An electromotive force is created and detected with an oscilloscope. It has been estimated that the average power of this machine is 7 μW and its efficiency is 4.8 × 10 −6 . [ 1 ] Although these values are very small, this experiment provides a proof of principle for a renewable-energy device harvesting electricity from a weak waste-heat source without need of any other external energy supply, e.g. for a compressor in a regular heat engine . The experiment was successfully reproduced by undergraduate students in preparatory classes of the Lycée Hoche in Versailles. Several other applications based on thermogravitational cycles can be found in the literature. The efficiency η of a thermogravitational cycle depends on the thermodynamic processes the working fluid goes through during each step of the cycle.
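For orientation, the reported efficiency can be compared with the Carnot limit between the two baths. In the sketch below, the bath temperatures are assumed values chosen around the 56 °C boiling point quoted above; only the 4.8 × 10 −6 overall efficiency comes from the text.

```python
# Compare the prototype's reported efficiency with the ideal Carnot limit
# between assumed hot-jacket and cold-bath temperatures.

T_HOT = 273.15 + 60.0    # assumed bottom (hot) temperature, K
T_COLD = 273.15 + 20.0   # assumed top (cold) temperature, K

carnot = 1.0 - T_COLD / T_HOT   # ideal efficiency between the two baths
reported = 4.8e-6               # overall efficiency reported for the device

print(f"Carnot limit      : {carnot:.1%}")              # ~12%
print(f"Reported          : {reported:.1e}")
print(f"Fraction of ideal : {reported / carnot:.1e}")   # ~4e-5
```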
https://en.wikipedia.org/wiki/Thermogravitational_cycle
Thermohaline circulation ( THC ) is a part of the large-scale ocean circulation driven by global density gradients formed by surface heat and freshwater fluxes . [ 1 ] [ 2 ] The name thermohaline is derived from thermo- , referring to temperature, and haline , referring to salt content - factors which together determine the density of sea water . Wind-driven surface currents (such as the Gulf Stream ) travel polewards from the equatorial Atlantic Ocean, cooling and sinking en route to higher latitudes - eventually becoming part of the North Atlantic Deep Water - before flowing into the ocean basins . [ 3 ] While the bulk of thermohaline water upwells in the Southern Ocean , the oldest waters (with a transit time of approximately 1000 years) upwell in the North Pacific; [ 4 ] extensive mixing takes place between the ocean basins, reducing the difference in their densities and making the Earth's oceans a global system . [ 3 ] The water in these circuits transports energy - as heat - and mass - as dissolved solids and gases - around the globe. Consequently, the state of the circulation greatly impacts the climate of Earth. The thermohaline circulation is often referred to as the ocean conveyor belt , great ocean conveyor , or " global conveyor belt " - a term coined by climate scientist Wallace Smith Broecker . [ 5 ] [ 6 ] It is also known as the meridional overturning circulation, or MOC , a name used to signify that circulation patterns caused by temperature and salinity gradients are not necessarily part of a single global circulation. This is due, in part, to the difficulty of separating parts of the circulation driven by temperature and salinity from those affected by factors such as wind and tidal force . [ 7 ] This global circulation comprises two major "limbs": the Atlantic meridional overturning circulation ( AMOC ) centered in the north Atlantic Ocean, and the Southern Ocean overturning circulation , or Southern Ocean meridional circulation ( SMOC ), located near Antarctica . Since 90% of the human population occupies the Northern Hemisphere , [ 8 ] more extensive research has been undertaken on the AMOC; however, the SMOC is of equal importance to the global climate. Evidence suggests both circulations are slowing due to climate change , in line with increasing rates of dilution from melting ice sheets - critically affecting the salinity of Antarctic bottom water . [ 9 ] [ 10 ] In addition, the potential for outright collapse of either circulation to a much weaker state exemplifies tipping points in the climate system . If either hemisphere experiences collapse of its circulation, the likelihood of prolonged dry spells and droughts would increase as precipitation decreases, while the other hemisphere would become wetter. Marine ecosystems would then be more likely to receive fewer nutrients and experience greater ocean deoxygenation . In the Northern Hemisphere, the collapse of the AMOC would lead to substantially lower temperatures in many European countries, while the east coast of North America is predicted to see accelerated sea level rise . The collapse of these circulations is generally accepted to be more than a century away, and may only occur in the event of rapid and high sea-temperature increases. However, these projections are marked by significant uncertainty. [ 10 ] [ 11 ] It has long been known that wind can drive ocean currents, but only at the surface. [ 12 ] In the 19th century, some oceanographers suggested that the convection of heat could drive deeper currents.
In 1908, Johan Sandström performed a series of experiments at the Bornö Marine Research Station which proved that currents driven by thermal energy transfer can exist, but require that "heating occurs at a greater depth than cooling". [ 1 ] [ 13 ] Normally, the opposite occurs, because ocean water is heated from above by the Sun and becomes less dense, so the surface layer floats above the cooler, denser layers, resulting in ocean stratification . However, wind and tides cause mixing between these water layers, with diapycnal mixing caused by tidal currents being one example. [ 14 ] This mixing is what enables the convection between ocean layers, and thus, deep water currents. [ 1 ] In the 1920s, Sandström's framework was expanded by accounting for the role of salinity in ocean layer formation. [ 1 ] Salinity is important because, like temperature, it affects water density . Water becomes less dense as its temperature increases and the distance between its molecules expands, but more dense as the salinity increases, since there is a larger mass of salts dissolved within that water. [ 15 ] Further, while fresh water is at its most dense at 4 °C, seawater only gets denser as it cools, up until it reaches the freezing point. That freezing point is also lower than for fresh water due to salinity, and can be below −2 °C, depending on salinity and pressure. [ 16 ] These density differences caused by temperature and salinity ultimately separate ocean water into distinct water masses , such as the North Atlantic Deep Water (NADW) and Antarctic Bottom Water (AABW). These two waters are the main drivers of the circulation, as was established in 1960 by Henry Stommel and Arnold B. Arons. [ 17 ] They have chemical, temperature and isotopic ratio signatures (such as 231 Pa / 230 Th ratios) which can be traced, their flow rate calculated, and their age determined. NADW is formed because the North Atlantic is one of the rare places in the ocean where precipitation , which adds fresh water to the ocean and so reduces its salinity, is outweighed by evaporation , in part due to high windiness. When water evaporates, it leaves salt behind, and so the surface waters of the North Atlantic are particularly salty. The North Atlantic is also an already cool region, and evaporative cooling reduces water temperature even further. Thus, this water sinks downward in the Norwegian Sea , fills the Arctic Ocean Basin and spills southwards through the Greenland-Scotland Ridge - crevasses in the submarine sills that connect Greenland , Iceland and Great Britain. It cannot flow towards the Pacific Ocean due to the narrow shallows of the Bering Strait , but it does slowly flow into the deep abyssal plains of the south Atlantic. [ 18 ] In the Southern Ocean , strong katabatic winds blowing from the Antarctic continent onto the ice shelves blow newly formed sea ice away, opening polynyas in locations such as the Weddell and Ross Seas , off the Adélie Coast and by Cape Darnley . The ocean, no longer protected by sea ice, undergoes strong cooling (see polynya ). Meanwhile, sea ice starts reforming, so the surface waters also get saltier, hence very dense. In fact, the formation of sea ice contributes to an increase in surface seawater salinity; saltier brine is left behind as the sea ice forms around it (pure water preferentially being frozen). Increasing salinity lowers the freezing point of seawater, so cold liquid brine is formed in inclusions within a honeycomb of ice.
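The interplay of temperature and salinity described above - colder and saltier water is denser, and salt depresses the freezing point - can be sketched with a linearized equation of state. The coefficients below are rough textbook-scale assumptions for illustration, not values taken from this article.

```python
# Linearized seawater equation of state plus a linear freezing-point law,
# illustrating why brine rejected during sea-ice formation sinks.

RHO0 = 1027.0    # reference density at T=10 C, S=35 g/kg (assumed), kg/m^3
ALPHA = 2.0e-4   # thermal expansion coefficient, 1/K (assumed)
BETA = 7.6e-4    # haline contraction coefficient, kg/g (assumed)

def density(t_c: float, s: float) -> float:
    """Density rises as water cools and as salinity increases."""
    return RHO0 * (1.0 - ALPHA * (t_c - 10.0) + BETA * (s - 35.0))

def freezing_point(s: float) -> float:
    """Approximate freezing point: about -0.054 C per g/kg of salinity."""
    return -0.054 * s

print(f"Surface water, T=-1 C, S=34 : {density(-1.0, 34.0):7.2f} kg/m^3")
print(f"Rejected brine, T=-2 C, S=50: {density(-2.0, 50.0):7.2f} kg/m^3")
print(f"Freezing point at S=35      : {freezing_point(35.0):.2f} C")  # ~ -1.9 C
```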
The brine progressively melts the ice just beneath it, eventually dripping out of the ice matrix and sinking. This process is known as brine rejection . The resulting Antarctic bottom water sinks and flows north and east. It is denser than the NADW, and so flows beneath it. AABW formed in the Weddell Sea will mainly fill the Atlantic and Indian Basins, whereas the AABW formed in the Ross Sea will flow towards the Pacific Ocean. In the Indian Ocean, a vertical exchange between a lower layer of cold and salty water from the Atlantic and warmer and fresher upper-ocean water from the tropical Pacific occurs, in what is known as overturning . In the Pacific Ocean, the rest of the cold and salty water from the Atlantic undergoes haline forcing, and becomes warmer and fresher more quickly. [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] The outflow of cold and salty water at depth makes the sea level of the Atlantic slightly lower than that of the Pacific, and the salinity (or halinity) of Atlantic water higher than that of the Pacific. This generates a large but slow flow of warmer and fresher upper-ocean water from the tropical Pacific to the Indian Ocean through the Indonesian Archipelago to replace the cold and salty Antarctic Bottom Water . This is also known as 'haline forcing' (net high-latitude freshwater gain and low-latitude evaporation). This warmer, fresher water from the Pacific flows up through the South Atlantic to Greenland , where it undergoes evaporative cooling and sinks to the ocean floor, sustaining a continuous thermohaline circulation. [ 25 ] [ 26 ] As the deep waters sink into the ocean basins, they displace the older deep-water masses, which gradually become less dense due to continued ocean mixing. Thus, some water is rising, in what is known as upwelling . Upwelling speeds are very slow even compared to the movement of the bottom water masses. It is therefore difficult to measure where upwelling occurs using current speeds, given all the other wind-driven processes going on in the surface ocean. Deep waters have their own chemical signature, formed from the breakdown of particulate matter falling into them over the course of their long journey at depth. A number of scientists have tried to use these tracers to infer where the upwelling occurs. Wallace Broecker , using box models, has asserted that the bulk of deep upwelling occurs in the North Pacific, using as evidence the high values of silicon found in these waters. Other investigators have not found such clear evidence. [ 27 ] Computer models of ocean circulation increasingly place most of the deep upwelling in the Southern Ocean, associated with the strong winds in the open latitudes between South America and Antarctica. [ 28 ] Direct estimates of the strength of the thermohaline circulation have also been made at 26.5°N in the North Atlantic by the UK-US RAPID programme. It combines direct estimates of ocean transport using current meters and subsea cable measurements with estimates of the geostrophic current from temperature and salinity measurements to provide continuous, full-depth, basin-wide estimates of the meridional overturning circulation. However, it has only been operating since 2004, which is a short record when the timescale of the circulation is measured in centuries.
[ 29 ] The thermohaline circulation plays an important role in supplying heat to the polar regions, and thus in regulating the amount of sea ice in these regions, although poleward heat transport outside the tropics is considerably larger in the atmosphere than in the ocean. [ 30 ] Changes in the thermohaline circulation are thought to have significant impacts on the Earth's radiation budget . Large influxes of low-density meltwater from Lake Agassiz and deglaciation in North America are thought to have led to a shifting of deep water formation and subsidence in the extreme North Atlantic, and to have caused the climate period in Europe known as the Younger Dryas . [ 31 ] In 2021, the IPCC Sixth Assessment Report again said the AMOC is "very likely" to decline within the 21st century, and expressed "high confidence" that changes to it would be reversible within centuries if warming were reversed. [ 32 ] : 19 Unlike the Fifth Assessment Report, it had only "medium confidence" rather than "high confidence" in the AMOC avoiding a collapse before the end of the 21st century. This reduction in confidence was likely influenced by several review studies that drew attention to the circulation stability bias within general circulation models , [ 33 ] [ 34 ] and by simplified ocean-modelling studies suggesting the AMOC may be more vulnerable to abrupt change than larger-scale models suggest. [ 35 ] As of 2024, there is no consensus on whether a consistent slowing of the AMOC circulation has occurred, but there is little doubt it will occur in the event of continued climate change. [ 37 ] According to the IPCC, the most likely effects of future AMOC decline are reduced precipitation in mid-latitudes, changing patterns of strong precipitation in the tropics and Europe, and strengthening storms that follow the North Atlantic track. [ 37 ] In 2020, research found a weakened AMOC would slow the decline in Arctic sea ice [ 38 ] and result in atmospheric trends similar to those that likely occurred during the Younger Dryas , [ 39 ] such as a southward displacement of the Intertropical Convergence Zone . Changes in precipitation under high-emissions scenarios would be far larger. [ 38 ] Additionally, the main controlling pattern of the extratropical Southern Hemisphere's climate is the Southern Annular Mode (SAM), which has been spending more and more years in its positive phase due to climate change (as well as the aftermath of ozone depletion ), bringing more warming and more precipitation over the ocean due to stronger westerlies and freshening the Southern Ocean further. [ 44 ] [ 45 ] : 1240 Climate models currently disagree on whether the Southern Ocean circulation would continue to respond to changes in SAM the way it does now, or whether it will eventually adjust to them. As of the early 2020s, their best, limited-confidence estimate is that the lower cell will continue to weaken, while the upper cell may strengthen by around 20% over the 21st century. [ 45 ] A key reason for the uncertainty is the poor and inconsistent representation of ocean stratification even in the CMIP6 models - the most advanced generation available as of the early 2020s. [ 46 ] Furthermore, the largest long-term role in the state of the circulation is played by Antarctic meltwater, [ 47 ] and Antarctic ice loss has long been the least certain aspect of future sea level rise projections. [ 48 ]
https://en.wikipedia.org/wiki/Thermohaline_circulation
Thermokinetics deals with the study of thermal decomposition kinetics .
https://en.wikipedia.org/wiki/Thermokinetics
Thermolabile refers to a substance which is subject to decomposition or change in response to heat . The term is often used to describe biochemical substances. [ 1 ] For example, many bacterial exotoxins are thermolabile and can be easily inactivated by the application of moderate heat. Enzymes are also thermolabile and lose their activity when the temperature rises. Loss of activity in such toxins and enzymes is likely due to change in the three-dimensional structure of the protein during exposure to heat. In pharmaceutical compounds, heat generated during grinding may lead to degradation of thermolabile compounds. Thermolability is of particular use in testing gene function. [ 2 ] This is done by intentionally creating mutants which are thermolabile. Growth at the permissive temperature allows normal protein function, while raising the temperature above the permissive temperature ablates activity, likely by denaturing the protein. Thermolabile enzymes are also studied for their applications in DNA replication techniques, such as PCR , where thermostable enzymes are necessary for proper DNA replication. Enzyme function at higher temperatures may be enhanced with trehalose , which opens up the possibility of using normally thermolabile enzymes in DNA replication. [ 3 ]
https://en.wikipedia.org/wiki/Thermolabile
Thermolabile protecting groups (TPGs) are applied in chemical synthesis when mild deprotection conditions are required. Their removal consists merely of increasing the temperature, which leads to deprotection of the protected, sensitive part of a molecule. The deprotection mechanism has been proven for only a few TPGs. Most of these groups are removed on the basis of intramolecular cyclization, depending either on nucleophilicity or on configuration. TPGs are characterized by their differing half-lives after the temperature is increased by 70 °C. The shortest deprotection time combined with high stability at lower temperatures has been found for 2-pyridyl TPGs, [ 1 ] which are applied to protect a hydroxyl group , [ 2 ] a carboxylic acid [ 3 ] or phosphate esters. For these groups, stabilization systems have been developed depending on the protected part of the molecule: for a phosphate centre it is the "click-clack" approach, [ 4 ] and for a hydroxyl group, the "chemical switch" concept. [ 5 ] TPGs are applied as an element that increases the specificity of primers in PCR ; they may also be used in microarray construction. [ 6 ]
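If a TPG's removal follows first-order kinetics, the temperature dependence of its half-life can be sketched with the Arrhenius law, which makes the combination of storage stability and rapid thermal deprotection quantitative. The activation energy and pre-exponential factor below are invented illustration values, not measured parameters for any of the groups named above.

```python
# First-order deprotection: t1/2 = ln(2)/k with k = A * exp(-Ea / (R*T)).
# A modest rise in temperature shortens the half-life by orders of magnitude.
import math

R = 8.314         # gas constant, J/(mol K)
EA = 110_000.0    # assumed activation energy, J/mol
A = 1.0e12        # assumed pre-exponential factor, 1/s

def half_life_s(t_c: float) -> float:
    k = A * math.exp(-EA / (R * (t_c + 273.15)))
    return math.log(2.0) / k

print(f"t1/2 at 25 C: {half_life_s(25.0) / 3600:9.0f} h")   # stable in storage
print(f"t1/2 at 95 C: {half_life_s(95.0) / 60:9.1f} min")   # rapid deprotection
```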
https://en.wikipedia.org/wiki/Thermolabile_protecting_groups
Thermoluminescence is a form of luminescence that is exhibited by certain crystalline materials, such as some minerals, when previously absorbed energy from electromagnetic radiation or other ionizing radiation is re-emitted as light upon heating of the material. The phenomenon is distinct from that of black-body radiation . High energy radiation creates electronic excited states in crystalline materials. In some materials, these states are trapped , or arrested , for extended periods of time by localized defects, or imperfections, in the lattice interrupting the normal intermolecular or inter-atomic interactions in the crystal lattice. Quantum-mechanically, these states are stationary states which have no formal time dependence; however, they are not stable energetically, as vacuum fluctuations are always "prodding" these states. Heating the material enables the trapped states to interact with phonons , i.e. lattice vibrations, to rapidly decay into lower-energy states, causing the emission of photons in the process. The amount of luminescence is proportional to the original dose of radiation received. In thermoluminescence dating, this can be used to date buried objects that have been heated in the past, since the ionizing dose received from radioactive elements in the soil or from cosmic rays is proportional to age. This phenomenon has been applied in the thermoluminescent dosimeter, a device to measure the radiation dose received by a chip of suitable material that is carried by a person or placed with an object. Thermoluminescence is a common geochronology tool for dating pottery or other fired archeological materials, as heat empties or resets the thermoluminescent signature of the material (Figure 1). Subsequent recharging of this material from ambient radiation can then be empirically dated by the equation: Age = (subsequently accumulated dose of ambient radiation) / (dose accumulated per year) This technique was modified for use as a passive sand migration analysis tool (Figure 2). [ 1 ] The research shows direct consequences resulting from the improper replenishment of starving beaches using fine sands. Beach nourishment is a problem worldwide and receives large amounts of attention due to the millions of dollars spent yearly in order to keep beaches beautified for tourists, [ 2 ] e.g. in Waikiki , Hawaii. Sands with sizes 90–150 μm (very fine sand) were found to migrate from the swash zone 67% faster than sand grains of 150-212 μm (fine sand; Figure 3). Furthermore, the technique was shown to provide a passive method of policing sand replenishment and a passive method of observing riverine or other sand inputs along shorelines (Figure 4). [ 1 ]
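The trapping-and-release picture above has a standard quantitative form: in a simple first-order description (often called the Randall–Wilkins model, though it is not named in the text above), a trap of depth E empties with mean lifetime τ = (1/s)·exp(E/kT), where s is a frequency factor. The trap depth and frequency factor below are assumed, illustration-scale values, chosen only to show the contrast between storage at ambient temperature and laboratory heating.

```python
# First-order trap lifetime tau = (1/s) * exp(E / (k_B * T)): essentially
# infinite at burial temperatures, but fractions of a second during readout.
import math

KB = 8.617e-5    # Boltzmann constant, eV/K
E_TRAP = 1.5     # assumed trap depth, eV
S = 1.0e12       # assumed frequency factor, 1/s

def lifetime_s(t_k: float) -> float:
    return math.exp(E_TRAP / (KB * t_k)) / S

YEAR = 3.156e7   # seconds per year
print(f"Burial at 20 C  : {lifetime_s(293.15) / YEAR:.1e} years")  # ~2e6 years
print(f"Readout at 400 C: {lifetime_s(673.15):.2f} s")             # empties fast
```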
https://en.wikipedia.org/wiki/Thermoluminescence
Thermoluminescence dating ( TL ) is the determination, by means of measuring the accumulated radiation dose, of the time elapsed since material containing crystalline minerals was either heated ( lava , ceramics ) or exposed to sunlight ( sediments ). As a crystalline material is heated during measurements, the process of thermoluminescence starts. Thermoluminescence emits a weak light signal that is proportional to the radiation dose absorbed by the material. It is a type of luminescence dating . The technique has wide application, and is relatively cheap at some US$300–700 per object; ideally a number of samples are tested. Sediments are more expensive to date. [ 1 ] The destruction of a relatively significant amount of sample material is necessary, which can be a limitation in the case of artworks. The heating must have taken the object above 500 °C, which covers most ceramics, although very high-fired porcelain creates other difficulties. It will often work well with stones that have been heated by fire. The clay core of bronze sculptures made by lost wax casting is also able to be tested. [ 2 ] Different materials vary considerably in their suitability for the technique, depending on several factors. Subsequent irradiation, for example if an x-ray is taken, can affect accuracy, as will the "annual dose" of radiation a buried object has received from the surrounding soil. Ideally this is assessed by measurements made at the precise findspot over a long period. For artworks, it may be sufficient to confirm whether a piece is broadly ancient or modern (that is, authentic or a fake), and this may be possible even if a precise date cannot be estimated. [ 2 ] Natural crystalline materials contain imperfections: impurity ions , stress dislocations, and other phenomena that disturb the regularity of the electric field that holds the atoms in the crystalline lattice together. These imperfections lead to local humps and dips in the crystalline material's electric potential . Where there is a dip (a so-called " electron trap"), a free electron may be attracted and trapped. The flux of ionizing radiation—both from cosmic radiation and from natural radioactivity —excites electrons from atoms in the crystal lattice into the conduction band where they can move freely. Most excited electrons will soon recombine with lattice ions, but some will be trapped, storing part of the energy of the radiation in the form of trapped electric charge ( Figure 1 ). Depending on the depth of the traps (the energy required to free an electron from them) the storage time of trapped electrons will vary as some traps are sufficiently deep to store charge for hundreds of thousands of years. Another important technique in testing samples from a historic or archaeological site is a process known as thermoluminescence testing, which involves the principle that all objects absorb radiation from the environment. This process frees electrons within elements or minerals that remain caught within the item. Thermoluminescence testing involves heating a sample until it releases a type of light, which is then measured to determine the last time the item was heated. In thermoluminescence dating, these long-term traps are used to determine the age of materials: When irradiated crystalline material is again heated or exposed to strong light, the trapped electrons are given sufficient energy to escape. In the process of recombining with a lattice ion, they lose energy and emit photons (light quanta ), detectable in the laboratory . 
The amount of light produced is proportional to the number of trapped electrons that have been freed, which is in turn proportional to the radiation dose accumulated. In order to relate the signal (the thermoluminescence - light produced when the material is heated) to the radiation dose that caused it, it is necessary to calibrate the material with known doses of radiation, since the density of traps is highly variable. Thermoluminescence dating presupposes a "zeroing" event in the history of the material, either heating (in the case of pottery or lava) or exposure to sunlight (in the case of sediments), that removes the pre-existing trapped electrons. Therefore, at that point the thermoluminescence signal is zero. As time goes on, the ionizing radiation field around the material causes the trapped electrons to accumulate ( Figure 2 ). In the laboratory, the accumulated radiation dose can be measured, but this by itself is insufficient to determine the time since the zeroing event. The radiation dose rate - the dose accumulated per year - must be determined first. This is commonly done by measurement of the alpha radioactivity (the uranium and thorium content) and the potassium content (K-40 is a beta and gamma emitter) of the sample material. Often the gamma radiation field at the position of the sample material is measured, or it may be calculated from the alpha radioactivity and potassium content of the sample environment, and the cosmic ray dose is added in. Once all components of the radiation field are determined, the accumulated dose from the thermoluminescence measurements is divided by the dose accumulating each year, to obtain the years since the zeroing event. Thermoluminescence dating is used for material where radiocarbon dating is not available, like sediments . Its use is now common in the authentication of old ceramic wares, for which it gives the approximate date of the last firing. An example of this can be seen in Rink and Bartoll, 2005. Thermoluminescence dating was modified for use as a passive sand migration analysis tool by Keizars et al., 2008 ( Figure 3 ), demonstrating the direct consequences resulting from the improper replenishment of starving beaches using fine sands, as well as providing a passive method of policing sand replenishment and observing riverine or other sand inputs along shorelines ( Figure 4 ). Optically stimulated luminescence dating is a related measurement method which replaces heating with exposure to intense light. The sample material is illuminated with a very bright source of green or blue light (for quartz ) or infrared light (for potassium feldspar ). Ultraviolet light emitted by the sample is detected for measurement.
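The division described above - laboratory-measured accumulated dose over the summed annual dose rate - is simple enough to show directly. All numbers in the sketch below are invented illustration values, not measurements from any cited study.

```python
# Thermoluminescence age = accumulated (paleo)dose / annual dose rate.

paleodose_gy = 4.2    # dose since the zeroing event, from TL measurement, Gy

# Annual dose-rate components, Gy/year (assumed): alpha from U/Th content,
# beta from K-40, gamma from the burial environment, plus cosmic rays.
alpha_gy = 1.1e-3
beta_gy = 0.9e-3
gamma_gy = 0.7e-3
cosmic_gy = 0.2e-3

annual_dose_gy = alpha_gy + beta_gy + gamma_gy + cosmic_gy
age_years = paleodose_gy / annual_dose_gy
print(f"Years since zeroing event: {age_years:.0f}")   # ~1450 years
```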
https://en.wikipedia.org/wiki/Thermoluminescence_dating
Thermomechanical analysis ( TMA ) is a technique used in thermal analysis , a branch of materials science which studies the properties of materials as they change with temperature. Thermomechanical analysis is a subdiscipline of the thermomechanometry (TM) technique. [ 1 ] Thermomechanometry is the measurement of a change of a dimension or a mechanical property of the sample while it is subjected to a temperature regime. An associated thermoanalytical method is thermomechanical analysis. A special related technique is thermodilatometry (TD), the measurement of a change of a dimension of the sample with a negligible force acting on the sample while it is subjected to a temperature regime. The associated thermoanalytical method is thermodilatometric analysis (TDA). TDA is often referred to as zero-force TMA. The temperature regime may be heating or cooling at a rate of temperature change that can include stepwise temperature changes, a linear rate of change, temperature modulation with a set frequency and amplitude, free (uncontrolled) heating or cooling, or maintaining a constant rate of temperature increase. The sequence of temperatures with respect to time may be predetermined (temperature programmed) or sample controlled (controlled by a feedback signal from the sample response). Thermomechanometry includes several variations according to the force and the way the force is applied. Static force TM (sf-TM) is when the applied force is constant; this was previously called TMA, with TD as the special case of zero force. Dynamic force TM (df-TM) is when the force is changed, as in a typical stress–strain analysis; this was previously also called TMA, with the term dynamic meaning any alteration of the variable with time, and not to be confused with dynamic mechanical analysis (DMA). Modulated force TM (mf-TM) is when the force is changed with a set frequency and amplitude; this was previously called DMA. The term modulated is a special variant of dynamic, used to be consistent with modulated-temperature differential scanning calorimetry (mt-DSC) and other situations when a variable is imposed in a cyclic manner. [ 2 ] Mechanical testing seeks to measure the mechanical properties of materials using various test specimen and fixture geometries and a range of probe types. Ideally, measurement takes place with minimal disturbance of the material being measured. Some characteristics of a material can be measured without disturbance, such as dimensions, mass , volume , and density . However, measurement of mechanical properties normally involves disturbance of the system being measured, and the measurement often reflects the combined material and measuring device as the system. Knowledge of a structure can be gained by imposing an external stimulus and measuring the response of the material with a suitable probe. The external stimulus can be a stress or strain ; in thermal analysis, however, the stimulus is often temperature. Thermomechanometry is where a stress is applied to a material and the resulting strain is measured while the material is subjected to a controlled temperature program. The simplest mode of TM is where the imposed stress is zero. No mechanical stimulus is imposed upon the material; the material response is generated by a thermal stress, either heating or cooling. Zero-force TM (a variant of sf-TM or TD) measures the response of the material to changes in temperature, and the basic change is due to activation of atomic or molecular phonons .
Increased thermal vibrations produce thermal expansion, characterized by the coefficient of thermal expansion (CTE), which is the gradient of the graph of dimensional change versus temperature. The CTE depends upon thermal transitions such as the glass transition. The CTE of the glassy state is low, while at the glass transition temperature (Tg) increased degrees of molecular segmental motion are released, so the CTE of the rubbery state is high. Changes in an amorphous polymer may involve other sub-Tg thermal transitions associated with short molecular segments, side-chains and branches. The linearity of the sf-TM curve will be changed by such transitions. Other relaxations may be due to release of internal stress arising from the non-equilibrium state of the glassy amorphous polymer; such stress is referred to as thermal aging. Other stresses may be the result of moulding pressures, extrusion orientation, thermal gradients during solidification and externally imparted stresses. Semi-crystalline polymers are more complex than amorphous polymers, since the crystalline regions are interspersed with amorphous regions. Amorphous regions in close association with the crystals, or containing molecules shared with the crystals as tie molecules, have fewer degrees of freedom than the bulk amorphous phase. These immobilised amorphous regions are called the rigid amorphous phase. The CTE of the rigid amorphous phase is expected to be lower than that of the bulk amorphous phase. The crystallites are typically not at equilibrium, and they may contain different polymorphs. The crystals re-organize during heating so that they approach the equilibrium crystalline state. Crystal re-organization is a thermally activated process. Further crystallization of the amorphous phase may take place. Each of these processes will interfere with thermal expansion of the material. The material may be a blend or a two-phase block or graft copolymer. If both phases are amorphous, then two Tg will be observed if the material exists as two phases. If one Tg is exhibited, then it will lie between the Tg of the components, and the resultant Tg will likely be described by a relationship such as the Flory–Fox or Kwei equations. If one of the components is semi-crystalline, then the complexity of a pure crystalline phase and either one or two amorphous phases will result. If both components are semi-crystalline, then the morphology will be complex, since both crystal phases will likely form separately, though with influence on each other. Cross-linking will restrict the molecular response to temperature change, since degrees of freedom for segmental motion are reduced as molecules become irreversibly linked. Cross-linking chemically links molecules, while crystallinity and fillers introduce physical constraints to motion. Mechanical properties such as those derived from stress–strain testing are used to calculate crosslink density, usually expressed as the molar mass between crosslinks (Mc). The sensitivity of zero-stress TMA to crosslinking is low, since the structure receives minimum disturbance; sensitivity to crosslinks requires high strain such that the segments between crosslinks become fully extended. Zero force TM will only be sensitive to changes in the bulk that are expressed as a change in a linear dimension of the material. The measured change will be the resultant of all processes occurring as the temperature is changed. Some of the processes will be reversible, others irreversible, and others time-dependent.
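Since the CTE is defined here as the gradient of dimensional change versus temperature, it can be estimated from recorded sf-TM data by a straight-line fit over a chosen temperature window. A minimal NumPy sketch, where the window limits and the initial length are assumed inputs chosen by the analyst:

```python
import numpy as np

def linear_cte(temps_c, lengths_mm, t_lo, t_hi, l0_mm):
    """Estimate the linear CTE (1/K) as the fitted slope of relative
    length change versus temperature over the window [t_lo, t_hi]."""
    temps = np.asarray(temps_c, dtype=float)
    lengths = np.asarray(lengths_mm, dtype=float)
    mask = (temps >= t_lo) & (temps <= t_hi)
    # Slope of dL/L0 against T gives the CTE for this window.
    slope, _intercept = np.polyfit(temps[mask], (lengths[mask] - l0_mm) / l0_mm, 1)
    return slope

# Hypothetical glassy-region window below Tg:
# alpha_glass = linear_cte(T, L, t_lo=40.0, t_hi=70.0, l0_mm=10.0)
```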
The methodology must be chosen to best detect, distinguish and resolve the thermal expansions or contractions observed. The TM instrument need only apply sufficient stress to keep the probe in contact with the specimen surface, but it must have high sensitivity to dimensional change. The experiment must be conducted at a temperature change rate slow enough for the material to approach thermal equilibrium throughout. While the temperature should be the same throughout the material, it will not necessarily be at thermal equilibrium in the context of molecular relaxations. The temperature of the molecules relative to equilibrium is expressed as the fictive temperature, which is the temperature at which the unrelaxed molecules would be at equilibrium. TM is sufficient for zero-stress experiments: superimposition of a frequency to create a dynamic mechanical experiment would have no effect, since there is no stress other than a nominal contact stress. The material can be best characterized by an experiment in which the original material is first heated to the upper temperature required, then cooled at the same rate, followed by a second heating scan. The first heating scan provides a measure of the material with all of its structural complexities. The cooling scan measures the material as the molecules lose mobility: it starts from an equilibrium state and gradually moves away from equilibrium as the cooling rate exceeds the relaxation rate. The second heating scan will differ from the first because of thermal relaxation during the first scan and the equilibration achieved during the cooling scan. A second cooling scan followed by a third heating scan can be performed to check the reliability of the prior scans. Different heating and cooling rates can be used to produce different equilibrations. Annealing at specific temperatures can be used to provide different isothermal relaxations that can be measured by a subsequent heating scan. The sf-TM experiments duplicate experiments that can be performed using differential scanning calorimetry (DSC). A limitation of DSC is that the heat exchange during a process, or due to the heat capacity of the material, cannot be measured over long times or at slow heating or cooling rates, since the finite quantity of heat exchanged will be dispersed over too long a time to be detected. The limitation does not apply to sf-TM, since the dimensional change of the material can be measured over any time; the constraint is the practical time for the experiment. The application of multiple scans, as shown above, serves to distinguish reversible from irreversible changes. Thermal cycling and annealing steps can be added to provide complex thermal programs that test various attributes of a material as more becomes known about it. Modulated temperature TM (mt-TM) has been used as an analogous experiment to modulated-temperature DSC (mt-DSC). The principle of mt-TM is similar to the DSC analogy: the temperature is modulated as the TM experiment proceeds. Some thermal processes are reversible, such as the true CTE, while others, such as stress relief, orientation randomization and crystallization, are irreversible within the conditions of the experiment. The modulation conditions should be different from those of mt-DSC, since the sample, test fixture and enclosure are larger, thus requiring a longer equilibration time.
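The modulated temperature program itself is straightforward to express: a sinusoidal perturbation superimposed on a linear underlying ramp. A minimal sketch follows, with defaults of the order quoted in the next paragraph for mt-TM (period around 1000 s, amplitude under 1 °C, underlying rate 2 °C/min); all of these are method parameters, not prescriptions:

```python
import numpy as np

def mt_temperature(t_s, t0_c=25.0, rate_c_min=2.0, amp_c=0.5, period_s=1000.0):
    """Temperature (deg C) at time t_s (s): linear underlying ramp
    plus a sinusoidal modulation of the given amplitude and period."""
    t_s = np.asarray(t_s, dtype=float)
    return t0_c + rate_c_min * t_s / 60.0 + amp_c * np.sin(2.0 * np.pi * t_s / period_s)

# First hour of a scan, sampled once per second:
temps = mt_temperature(np.arange(0, 3600))
```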
mt-DSC typically uses a period of 60 s, an amplitude of 0.5–1.0 °C and an average heating or cooling rate of 2 °C/min. mt-TM may have a period of 1000 s, with the other parameters similar to those of mt-DSC. These conditions will require long scan times. Another experiment is an isothermal equilibration, where the material is heated rapidly to a temperature where relaxations can proceed more rapidly. Thermal aging can take several hours or more under ideal conditions; internal stresses may relax rapidly. TM can be used to measure the relaxation rates, and hence characteristic times, for these events, provided they are within the practical measurement times available for the instrument. Temperature is the variable that can be changed to bring relaxations into measurable time ranges. (Table 1 lists typical zero-stress thermomechanometry parameters.) Creep and stress relaxation measure the elasticity, viscoelasticity and viscous behaviour of materials under a selected stress and temperature. Tensile geometry is the most common for creep measurements. A small force is initially imparted to keep the specimen aligned and straight. The selected stress is applied rapidly and held constant for the required time; this may be 1 h or more. During application of force the elastic property is observed as an immediate elongation or strain. During the constant force period the time-dependent elastic response, or viscoelasticity, together with the viscous response, results in a further increase in strain. [ 3 ] [ 4 ] The force is removed rapidly, though the small alignment force is maintained. The recovery measurement time should be four times the creep time, so in this example the recovery time should be 4 h. Upon removal of the force the elastic component results in an immediate contraction. The viscoelastic recovery is exponential as the material slowly recovers some of the previously imparted creep strain. After recovery there is a permanent unrecovered strain due to the viscous component of the properties. [ 5 ] Analysis of the data is performed using the four-component viscoelastic model, where the elements are represented by combinations of springs and dashpots. The experiment can be repeated using different creep forces. The results for varying forces after the same creep time can be used to construct isochronal stress–strain curves. The creep and recovery experiment can be repeated at different temperatures. The creep–time curves measured at various temperatures can be extended using the time–temperature superposition principle to construct a creep and recovery mastercurve that extends the data to very long and very short times, which would be impractical to measure directly. Creep at very long timeframes is important for prediction of long-term properties and product lifetimes. A complementary property is stress relaxation, where a strain is applied and the corresponding stress change is measured. This mode of measurement is not directly available with most thermomechanical instruments; stress relaxation is available using any standard universal test instrument, since their mode of operation is application of strain while the stress is measured. Experiments where the force is changed with time are called dynamic force thermomechanometry (df-TM). This use of the term dynamic is distinct from the situation where the force is periodically changed with time, typically following a sine relationship, where the term modulated is recommended.
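The four-component viscoelastic model referred to above (a Maxwell element in series with a Kelvin–Voigt element, often called the Burgers model) predicts the creep strain under constant stress as the sum of an immediate elastic term, a linearly growing viscous term and a delayed viscoelastic term. A minimal sketch of that prediction, with all parameter values hypothetical:

```python
import numpy as np

def burgers_creep_strain(t_s, stress_pa, e1_pa, eta1_pa_s, e2_pa, eta2_pa_s):
    """Creep strain of the four-element (Burgers) model under constant stress:
    immediate elastic + permanent viscous flow + delayed (Kelvin-Voigt) term."""
    t = np.asarray(t_s, dtype=float)
    elastic = stress_pa / e1_pa                       # recovered instantly on unload
    viscous = stress_pa * t / eta1_pa_s               # never recovered
    delayed = stress_pa / e2_pa * (1.0 - np.exp(-e2_pa * t / eta2_pa_s))  # recovers slowly
    return elastic + viscous + delayed

# Hypothetical 1 h creep at 1 MPa:
strain = burgers_creep_strain(np.arange(0.0, 3600.0), 1e6, 2e9, 1e13, 5e8, 1e11)
```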
Most thermomechanical instruments are force controlled; that is, they apply a force and then measure the resulting change in a dimension of the test specimen. Usually a constant strain rate is used for stress–strain measurements, but in the case of df-TM the stress will be applied at a chosen rate. The result of a stress–strain analysis is a curve that will reveal the modulus (hardness) or compliance (softness, the reciprocal of modulus). The modulus is the slope of the initial linear region of the stress–strain curve. Various ways of selecting the region over which to calculate the gradient are used: one is the initial part of the curve; another is to select a region defined by the secant to the curve. If the test material is a thermoplastic, a yield zone may be observed and a yield stress (strength) calculated. A brittle material will break before it yields; a ductile material will deform further after yielding. When the material breaks, a break stress (ultimate stress) and break strain are calculated. The area under the stress–strain curve is the energy required to break (toughness). Thermomechanical instruments are distinct in that they can measure only small changes in linear dimension (typically 1 to 10 mm), so it is possible to measure yield and break properties only for small specimens and those that do not change dimensions very much before exhibiting these properties. A purpose of measuring a stress–strain curve is to establish the linear viscoelastic region (LVR). The LVR is the initial linear part of a stress–strain curve where an increase in stress is accompanied by a proportional increase in strain; that is, the modulus is constant and the change in dimension is reversible. A knowledge of the LVR is a prerequisite for any modulated force thermomechanometry experiment. Conduct of complex experiments should be preceded by preliminary experiments with a limited range of variables to establish the behaviour of the test material for selection of further instrument configuration and operating parameters. Modulated temperature conditions are where the temperature is changed in a cyclic manner, such as a sine, isothermal–heating, isothermal–cooling or heat–cool profile. The underlying temperature can increase, decrease or be constant. Modulated temperature conditions enable separation of the data into reversing data that is in phase with the temperature changes, and non-reversing data that is out of phase with the temperature changes. sf-TM is required, since the force should be constant while the temperature is modulated, or at least constant for each modulation period. A reversing property is the coefficient of thermal expansion. Non-reversing properties are thermal relaxations, stress relief and morphological changes that occur during heating, causing the material to approach thermal equilibrium. [ 6 ]
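Extracting the modulus described above from df-TM data amounts to a straight-line fit over the initial linear region; in the sketch below the extent of that region is an assumed, analyst-chosen strain limit:

```python
import numpy as np

def initial_modulus(strain, stress_pa, strain_limit=0.005):
    """Modulus (Pa) as the fitted slope of stress versus strain over the
    assumed initial linear (LVR) region, strain <= strain_limit."""
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress_pa, dtype=float)
    mask = strain <= strain_limit
    slope, _intercept = np.polyfit(strain[mask], stress[mask], 1)
    return slope
```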
https://en.wikipedia.org/wiki/Thermomechanical_analysis
The thermomechanical cuttings cleaner (TCC) is a patented technology mainly used by service providers in the oil and gas industry to separate and recover the components of oil-contaminated drilling waste. [ 1 ] A TCC converts kinetic energy to thermal energy in a thermal desorption process which efficiently transforms drilling waste into re-usable products. [ 2 ] Using kinetic energy instead of indirect heating allows for very short retention times, and as a consequence the quality of the separated components is not affected by the treatment. Thus the recovered water, base oil and solids can be re-used after the treatment process. [ 3 ]
https://en.wikipedia.org/wiki/Thermomechanical_cuttings_cleaner
The Harwell TMG Stirling engine, an abbreviation for "Thermo-Mechanical Generator", was invented in 1967 by E. H. Cooke-Yarborough at the Harwell Labs of the United Kingdom Atomic Energy Authority. [ 1 ] [ 2 ] [ 3 ] [ 4 ] It was intended to be a remote electrical power source with low cost and very long life, albeit by sacrificing some efficiency. The TMG (model TMG120) was at one time the only Stirling engine sold by a manufacturer, namely HoMach Systems Ltd., England. [ 5 ] The engine has near-isothermal cylinders because 1) the heater area covers the entire cylinder end, 2) it is a short-stroke device, with wide shallow cylinders, yielding a high surface-area-to-volume ratio, 3) the average thickness of the gas space is about 0.1 cm, and 4) the working fluid is helium, a gas having good thermal properties for Stirling engines. The engine's displacer also has very low losses. These low-loss operating characteristics simplify the engine analysis, compared to more conventional Stirling engines. [ 5 ] : 113 The design has many advantages over conventional Stirling engines. The simplicity of the heater greatly reduces the cost by allowing the TMG to avoid the need for a brazed tubular or finned heater, which can account for 40% of the cost of a conventional Stirling engine. [ 6 ] The heat exchangers for the heater and cooler are mechanically trivial. The regenerator is a simple annulus, referred to as a "flat plate". Along with the cylinder wall and the displacer, there are a total of four regenerating surfaces. The TMG is a free-piston engine. There are no rolling bearings or sliding seals, thus there is very little friction or wear. The working space is hermetically sealed, allowing it to contain pressurized helium gas for many thousands of hours. The displacer is a stainless steel can, 27 cm in diameter. It is suspended by a low-loss planar metal spring, centered in a 27.4 cm diameter cylinder. The 2 mm radial clearance is divided into two concentric annular gaps by a thin, open-ended cylinder, which is fixed to the engine's cylinder. This annulus acts as the regenerator, which is much less costly than a wire-mesh type. The engine is a "free-cylinder" design, in which the entire engine is mounted on springs and allowed to vibrate slightly. This allows the displacer to be driven by positive feedback from the motion of the power piston and the linear-alternator magnets, which have a combined weight of 10 kg. [ 5 ] : 109 The unique power piston was invented by Cooke-Yarborough, and is called an "articulated diaphragm". It consists of a stainless steel annulus, with an outer diameter of 35 cm and an inner diameter of 26 cm. This annulus is clamped to the engine on the outer edge by two flexible rubber o-rings, and on the inner edge it is similarly clamped, in this case to a rigid center hub that makes up the piston's center. The o-rings flex but do not slide, thus no lubricant is needed and there is negligible wear in the entire machine. The compression space is located between the power-piston hub and the displacer, and this space is cooled by direct conduction through the power piston. A developmental model of the TMG contained a double articulated diaphragm containing cooling water, which was pumped by a thermosyphon. The depth of the compression space varies from 0.2 to 2.7 mm, as governed by the 2 mm displacer stroke and the 1.5 mm power piston stroke moving 90 degrees out of phase.
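The quoted 0.2–2.7 mm range of compression-space depth is consistent with the two strokes combining in quadrature: half-strokes of 1.0 mm and 0.75 mm at 90 degrees sum to an amplitude of sqrt(1.0^2 + 0.75^2) = 1.25 mm about a 1.45 mm mean (the mean being inferred here from the quoted range). A small sketch verifying the arithmetic:

```python
import numpy as np

# Displacer stroke 2.0 mm and power-piston stroke 1.5 mm, 90 degrees out of
# phase; the sign of the phase is illustrative and does not affect the range.
theta = np.linspace(0.0, 2.0 * np.pi, 10001)
depth_mm = 1.45 + 1.0 * np.sin(theta) - 0.75 * np.cos(theta)

print(round(depth_mm.min(), 2), round(depth_mm.max(), 2))  # 0.2 2.7
```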
The TMG engine successfully overcomes many of the economic and mechanical difficulties common in conventional Stirling engines. However, there are some limitations to this design. The simple, low-cost annular regenerator is inefficient compared to other types, and this contributes to the engine's somewhat low thermal efficiency of only 10%. The mechanical limitations of the articulated diaphragm only allow a maximum stroke of an estimated 3 mm. These properties limit the maximum obtainable power to about 500–1000 watts from an engine of this design. [ 5 ] : 195 Nevertheless, it is rare for a low-cost Stirling engine to obtain this high level of reliability and operating life, which can only be attributed to the ingenuity of the design.
https://en.wikipedia.org/wiki/Thermomechanical_generator
Thermomechanical processing is a metallurgical process that combines mechanical or plastic deformation processes like compression or forging, rolling, etc. with thermal processes like heat treatment, water quenching, and heating and cooling at various rates into a single process. [ 1 ] The quenching process produces a high-strength bar from inexpensive low-carbon steel. The process quenches the surface layer of the bar, which pressurizes and deforms the crystal structure of intermediate layers, and simultaneously begins to temper the quenched layers using the heat from the bar's core. Steel billets of 130 mm square cross-section ("pencil ingots") are heated to approximately 1200 °C to 1250 °C in a reheat furnace. Then they are progressively rolled to reduce the billets to the final size and shape of reinforcing bar. After the last rolling stand, the billet moves through a quench box. The quenching converts the billet's surface layer to martensite, and causes it to shrink. The shrinkage pressurizes the core, helping to form the correct crystal structures. The core remains hot, and austenitic. A microprocessor controls the water flow to the quench box, to manage the temperature difference through the cross-section of the bars. The correct temperature difference assures that all processes occur, and that the bars have the necessary mechanical properties. [ 2 ] The bar leaves the quench box with a temperature gradient through its cross-section. As the bar cools, heat flows from the bar's centre to its surface, so that the bar's heat and pressure correctly temper an intermediate ring of martensite and bainite. Finally, the slow cooling after quenching automatically tempers the austenitic core to ferrite and pearlite on the cooling bed. These bars therefore exhibit a variation in microstructure across their cross-section, having strong, tough, tempered martensite in the surface layer of the bar, an intermediate layer of martensite and bainite, and a refined, tough and ductile ferrite and pearlite core. When the cut ends of TMT bars are etched in Nital (a mixture of nitric acid and methanol), three distinct rings appear: 1. a tempered outer ring of martensite, 2. a semi-tempered middle ring of martensite and bainite, and 3. a mild circular core of bainite, ferrite and pearlite. This is the desired microstructure for quality construction rebar. In contrast, lower grades of rebar are twisted when cold, work hardening them to increase their strength. After thermomechanical treatment (TMT), however, bars do not need further work hardening. As there is no twisting during TMT, no torsional stress occurs, and so torsional stress cannot form surface defects in TMT bars. Therefore, TMT bars resist corrosion better than cold twisted and deformed (CTD) bars. Grades covered by TMT bars after thermomechanical processing include Fe 415, Fe 500, Fe 550 and Fe 600. These are much stronger than conventional CTD bars and give up to 20% more strength to a concrete structure with the same quantity of steel.
https://en.wikipedia.org/wiki/Thermomechanical_processing
A thermometric titration is one of a number of instrumental titration techniques where endpoints can be located accurately and precisely without a subjective interpretation on the part of the analyst as to their location. Enthalpy change is arguably the most fundamental and universal property of chemical reactions, so the observation of temperature change is a natural choice in monitoring their progress. It is not a new technique, with possibly the first recognizable thermometric titration method reported early in the 20th century (Bell and Cowell, 1913). In spite of its attractive features, and in spite of the considerable research that has been conducted in the field and the large body of applications that have been developed, it has been until now an under-utilized technique in the critical area of industrial process and quality control. Automated potentiometric titration systems have predominated in this area since the 1970s. With the advent of cheap computers able to handle powerful thermometric titration software, development has now reached the stage where easy-to-use automated thermometric titration systems can in many cases offer a superior alternative to potentiometric titrimetry. Potentiometric titrimetry has been the predominant automated titrimetric technique since the 1970s, so it is worthwhile considering the basic differences between it and thermometric titrimetry. Potentiometrically sensed titrations rely on a free energy change in the reaction system, so measurement of a free-energy-dependent term is necessary: the sensed cell potential is related to the free energy change by E = −ΔG/(nF), where E is the cell potential, ΔG the free energy change of the reaction, n the number of electrons transferred and F the Faraday constant. In order for a reaction to be amenable to potentiometric titrimetry, the free energy change must be sufficient for an appropriate sensor to respond with a significant inflection (or "kink") in the titration curve where sensor response is plotted against the amount of titrant delivered. However, free energy is just one of three related parameters in describing any chemical reaction: ΔG = ΔH − TΔS, where ΔH is the enthalpy change, T the absolute temperature and ΔS the entropy change. For any reaction where the free energy is not opposed by the entropy change, the enthalpy change will be significantly greater than the free energy. Thus a titration based on a change in temperature (which permits observation of the enthalpy change) will show a greater inflection than will curves obtained from sensors reacting to free energy changes alone. In a thermometric titration, titrant is added at a known constant rate to a titrand until the completion of the reaction is indicated by a change in temperature. The endpoint is determined by an inflection in the curve generated by the output of a temperature measuring device. Consider a titration reaction of the form aA + bB → products, where A represents the titrant and B the titrand. At completion, the reaction produces a molar heat of reaction ΔHr, which is shown as a measurable temperature change ΔT. In an ideal system, where no losses or gains of heat due to environmental influences are involved, the progress of the reaction is observed as a constant increase or decrease of temperature depending respectively on whether ΔHr is negative (indicating an exothermic reaction) or positive (indicating an endothermic reaction). In this context, environmental influences may include factors such as heat exchange with the surroundings, heats of dilution of the titrant and the energy input from stirring. If the equilibrium for the reaction lies far to the right (i.e. a stoichiometric equilibrium has been achieved), then when all analyte has been reacted by the titrant, continuing addition of titrant will be revealed by a sharp break in the temperature/volume curve. Figures 1a and 1b illustrate idealized examples.
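In the ideal adiabatic case just described, the overall temperature change expected for complete reaction follows directly from the amount of analyte, the molar reaction enthalpy and the total heat capacity of the vessel contents (Q = n(−ΔHr), ΔT = Q/C). A minimal sketch, with all numeric values hypothetical:

```python
def ideal_temperature_rise(n_analyte_mol, dh_r_kj_mol, heat_capacity_j_k):
    """Ideal adiabatic temperature change (K) for complete reaction:
    Q = n * (-dH_r); dT = Q / C_total. Exothermic (dH_r < 0) gives dT > 0."""
    q_joules = n_analyte_mol * (-dh_r_kj_mol) * 1000.0
    return q_joules / heat_capacity_j_k

# e.g. 1 mmol of strong acid neutralized (dH_r ~ -56 kJ/mol) in ~50 mL of
# water (heat capacity ~209 J/K) -> about 0.27 K of temperature rise:
print(ideal_temperature_rise(0.001, -56.0, 209.0))
```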
The shape of experimentally obtained thermometric titration plots will vary from such idealized examples, and some of the environmental influences mentioned above may have an impact. Curvature at the endpoint might be observed. This can be due to insensitivity of the sensor, or to thermal equilibrium at the endpoint being slow to occur. It can also occur where the reaction between titrant and titrand does not proceed to stoichiometric completion. The determinant of the degree to which a reaction will proceed to completion is the free energy change. If this is favourable, then the reaction will proceed to completion and be essentially stoichiometric; in this case, the sharpness of the endpoint depends on the magnitude of the enthalpy change. If it is unfavourable, the endpoint will be rounded regardless of the magnitude of the enthalpy change. Reactions where non-stoichiometric equilibria are evident can still be used to obtain satisfactory results with a thermometric titration approach. If the portions of the titration curve both prior to and after the endpoint are reasonably linear, then the intersection of tangents to these lines will accurately locate the endpoint. This is illustrated in Figure 2. Consider the reaction aA + bB = pP, which is non-stoichiometric at equilibrium. Let A represent the titrant, and B the titrand. At the beginning of the titration, the titrand B is strongly in excess, and the reaction is pushed towards completion. Under these conditions, for a constant rate of titrant addition, the temperature increase is constant and the curve is essentially linear until the endpoint is approached. In a similar manner, when the titrant is in excess past the endpoint, a linear temperature response can also be anticipated. Thus the intersection of tangents will reveal the true endpoint. An actual thermometric titration plot for the determination of a strong base with a strong acid is illustrated in Figure 3. The most practical sensor for measuring temperature change in titrating solutions has been found to be the thermistor. Thermistors are small solid-state devices which exhibit relatively large changes in electrical resistance for small changes in temperature. They are manufactured from sintered mixed metal oxides, with lead wires enabling connection to electrical circuitry. The thermistor is encapsulated in a suitable electrically insulating medium with satisfactory heat transfer characteristics and acceptable chemical resistance. Typically, for thermistors used for chemical analysis the encapsulating medium is glass, although thermistors encapsulated in epoxy resin may be used in circumstances where either chemical attack (e.g., by acidic fluoride-containing solutions) or severe mechanical stress is anticipated. The thermistor is supported by suitable electronic circuitry to maximize sensitivity to minute changes in solution temperature. The circuitry in the Metrohm 859 Titrotherm thermometric titration interface module is capable of resolving temperature changes as low as 10⁻⁵ K. A critical element in modern automated thermometric titrimetry is the ability to locate the endpoint with a high degree of reproducibility. It is clearly impractical, and insufficient for modern demands of accuracy and precision, to estimate the inflection by manual intersection of tangents; instead, the endpoint is conveniently located by derivatization (numerical differentiation) of the temperature curve.
The second derivative essentially locates the intersection of tangents to the temperature curve immediately before and after the breakpoint. Thermistors respond quickly to small changes in temperature, such as temperature gradients in the mixed titration solution, and thus the signal can exhibit a small amount of noise. Prior to derivatization it is therefore necessary to digitally smooth (or "filter") the temperature curve in order to obtain sharp, symmetrical second derivative "peaks" which will accurately locate the correct inflection point. This is illustrated in Figure 5. The degree of digital smoothing is optimized for each determination, and is stored as a method parameter for application every time a titration for that particular analysis is run. Because enthalpy change is a universal characteristic of chemical reactions, thermometric endpoint sensing can be applied to a wide range of titration types, e.g. acid–base, redox, complexometric (EDTA) and precipitation titrations. Further, since the sensor is not required to interact with the titration solution electrochemically, titrations in non-conducting media can be performed, as can titrations using reactions for which no convenient or cost-effective potentiometric sensor is available. Thermometric titrations generally demand rapid reaction kinetics in order to obtain sharp, reproducible endpoints. Where reaction kinetics are slow, and direct titrations between titrant and titrand are not possible, indirect or back-titrations can often be devised to solve the problem. Catalytically enhanced endpoints can be used in some instances where the temperature change at the endpoint is very small and endpoints would not be detected satisfactorily by the titration software. The suitability of a particular chemical reaction as a candidate for a thermometric titration procedure can generally be predicted on the basis of the estimated amount of analyte present in the sample and the enthalpy of the reaction. However, other parameters such as the kinetics of the reaction, the sample matrix itself, heats of dilution and losses of heat to the environment can affect the outcome. A properly designed experimental program is the most reliable way of determining the viability of a thermometric titration approach. Successful applications for thermometric titrations are generally those where titrant–titrand reaction kinetics are fast, and chemical equilibria are stoichiometric or nearly so. A suitable setup for automated thermometric titrimetry comprises a precision liquid dispensing device, a thermometric sensor, a stirring device, a thermometric titration interface module and a computer running the operational software. Figure 6 illustrates a modern automated thermometric titration system based on the Metrohm 859 Titrotherm interface module with Thermoprobe sensor, Metrohm 800 Dosino dispensing devices and a computer running the operational software. Figure 7 is a schematic of the relationship between components in an automated thermometric titration system: A = dosing device, B = thermometric sensor, C = stirring device, D = thermometric titration interface module, E = computer. Applications for thermometric titrimetry are drawn from the major titration groupings, namely acid–base, redox, complexometric and precipitation titrations. Because the sensor does not interact electrically or electrochemically with the solution, electrical conductance of the titrating medium is not a prerequisite for a determination. Titrations may be carried out in completely non-conducting, non-polar media if required. Further, titrations may be carried out in turbid solutions or even suspensions of solids, and titrations where precipitates are reaction products can be contemplated.
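The smoothing-plus-second-derivative logic described here can be sketched in a few lines of NumPy: filter the temperature data, differentiate twice with respect to volume, and take the extremum of the second derivative as the endpoint. The moving-average window below stands in for the method-specific smoothing parameter mentioned above, and boundary effects are ignored in this sketch:

```python
import numpy as np

def endpoint_volume(volumes_ml, temps_c, window=15):
    """Endpoint as the titrant volume at the extremum of the second
    derivative of the digitally smoothed temperature curve."""
    volumes = np.asarray(volumes_ml, dtype=float)
    kernel = np.ones(window) / window
    smoothed = np.convolve(temps_c, kernel, mode="same")  # moving-average filter
    d1 = np.gradient(smoothed, volumes)                   # dT/dV
    d2 = np.gradient(d1, volumes)                         # d2T/dV2
    return volumes[np.argmax(np.abs(d2))]
```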
The range of possible thermometric titration applications far exceeds the actual experience of this writer, and the reader will be referred to the appropriate literature in some instances. The heat of neutralization of a fully dissociated acid with a fully dissociated base is approximately −56 kJ/mol. The reaction is thus strongly exothermic, and is an excellent basis for a wide range of analyses in industry. An advantage for the industrial analyst is that the use of stronger titrants (1 to 2 mol/L) permits a reduction in the amount of sample preparation, and samples can often be directly and accurately dispensed into the titration vessel prior to titration. Weakly dissociated acids yield sharp thermometric endpoints when titrated with a strong base. For instance, bicarbonate can be unequivocally determined in the company of carbonate by titrating with hydroxide (ΔH°r = −40.9 kJ/mol). Mixtures of complex acids can be resolved by thermometric titration with standard NaOH in aqueous solution. In a mixture of nitric, acetic and phosphoric acids used in the fabrication of semiconductors, three endpoints could be predicted on the basis of the dissociation constants of the acids: nitric acid (pKa = −1.3), acetic acid (pKa = 4.75) and phosphoric acid (pKa1 = 2.12, pKa2 = 7.21, pKa3 = 12.36). The key to determining the amount of each acid present in the mixture is the ability to obtain an accurate value for the amount of phosphoric acid present, as revealed by titration of the third proton of H3PO4. Figure 10 illustrates a titration plot of this mixture, showing three sharp endpoints. The thermometric titrimetric analysis of sodium aluminate liquor ("Bayer liquor") in the production of alumina from bauxite is accomplished in an automated two-titration sequence. This is an adaptation of a classic thermometric titration application (VanDalen and Ward, 1973). In the first titration, tartrate solution is added to an aliquot of liquor to complex aluminate, releasing one mole of hydroxide for each mole of aluminate present. This is titrated acidimetrically along with the "free" hydroxide present and the carbonate content (as a second endpoint). The second titration is preceded by the automatic addition of fluoride solution. The alumina–tartrate complex is broken in favour of the formation of an aluminium fluoride complex and the concomitant release of three moles of hydroxide for each mole of aluminium present, which are then titrated acidimetrically. The whole determination can be completed in less than 5 minutes. Non-aqueous acid–base titrations can be carried out advantageously by thermometric means. Acid leach solutions from some copper mines can contain large quantities of Fe(III) as well as Cu(II). The "free acid" (sulfuric acid) content of these leach solutions is a critical process parameter. While thermometric titrimetry can determine the free acid content in the presence of modest amounts of Fe(III), in some solutions the Fe(III) content is so high as to cause serious interference. Complexation with the necessarily large amounts of oxalate is undesirable due to the toxicity of the reagent. A thermometric titration was devised by diluting the aliquot with propan-2-ol and titrating with standard KOH in propan-2-ol. Most of the metal content precipitated prior to the commencement of the titration, and a clear, sharp endpoint for the sulfuric acid content was obtained. The determination of trace acids in organic matrices is a common analytical task assigned to titrimetry.
Examples are Total Acid Number (TAN) in mineral and lubricating oils and Free Fatty Acids (FFA) in edible fats and oils. Automated potentiometric titration procedures have been granted standard method status, for example by ASTM for TAN and AOAC for FFA. The methodology is similar in both instances. The sample is dissolved in a suitable solvent mixture, say a hydrocarbon and an alcohol, which also must contain a small amount of water. The water is intended to enhance the electrical conductivity of the solution. The trace acids are titrated with standard base in an alcohol. The sample environment is essentially hostile to the pH electrode used to sense the titration. The electrode must be taken out of service on a regular basis to rehydrate the glass sensing membrane, which is also in danger of fouling by the oily sample solution. A recent thermometric titrimetric procedure for the determination of FFA, developed by Carneiro et al. (2002), has been shown to be particularly amenable to automation. It is fast, highly precise, and results agree very well with those obtained by the official AOAC method. The temperature change for the titration of very weak acids such as oleic acid by 0.1 mol/L KOH in propan-2-ol is too small to yield an accurate endpoint. In this procedure, a small amount of paraformaldehyde as a fine powder is added to the titrand before the titration. At the endpoint, the first excess of hydroxide ions catalyzes the depolymerization of paraformaldehyde. The reaction is strongly endothermic and yields a sharp inflection. The titration plot is illustrated in Figure 13. The speed of this titration coupled with its precision and accuracy makes it ideal for the analysis of FFA in biodiesel feedstocks and product. Redox reactions are normally strongly exothermic, and can make excellent candidates for thermometric titrations. In the classical determination of ferrous ion with permanganate, the reaction enthalpy is more than double that of a strong acid/strong base titration: ΔH°r = −123.9 kJ/mol of Fe. The determination of hydrogen peroxide by permanganate titration is even more strongly exothermic, at ΔH°r = −149.6 kJ/mol of H2O2. In the determination of hypochlorite (for example in commercial bleach formulations), a direct titration with thiosulfate can be employed without recourse to an iodometric finish. Thermometric iodometric titrations employing thiosulfate as a titrant are also practical, for example in the determination of Cu(II). In this instance, it has been found advantageous to incorporate the potassium iodide reagent with the thiosulfate titrant in such proportions that iodine is released into solution just prior to its reduction by thiosulfate. This minimizes iodine losses during the course of the titration. While relatively unstable and requiring frequent standardization, sodium hypochlorite has been used in a very rapid thermometric titration method for the determination of ammonium ion. This is an alternative to the classical approach of ammonia distillation from basic solution and consequent acid–base titration. The thermometric titration is carried out in bicarbonate solution containing bromide ion (Brown et al., 1969). Thermometric titrations employing sodium salts of ethylenediaminetetraacetic acid (EDTA) have been demonstrated for the determination of a range of metal ions. Reaction enthalpies are modest, so titrations are normally carried out with titrant concentrations of 1 mol/L.
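For the FFA determination described above, the result is conventionally expressed as a percentage by mass of oleic acid, so converting a titre to an FFA content is simple arithmetic. A minimal sketch, taking the molar mass of oleic acid as 282.47 g/mol and with all other values hypothetical:

```python
def ffa_percent_as_oleic(titre_ml, koh_mol_l, sample_mass_g, m_oleic_g_mol=282.47):
    """FFA (% m/m, expressed as oleic acid) from a KOH titre:
    moles of KOH = titre x concentration; FFA mass = moles x M(oleic)."""
    moles_koh = titre_ml / 1000.0 * koh_mol_l
    return 100.0 * moles_koh * m_oleic_g_mol / sample_mass_g

# e.g. 2.50 mL of 0.1 mol/L KOH on a 10.0 g oil sample -> ~0.71 % FFA:
print(ffa_percent_as_oleic(2.50, 0.1, 10.0))
```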
The use of such concentrated EDTA titrants necessitates the tetrasodium salt of EDTA rather than the more common disodium salt, which is saturated at a concentration of only approximately 0.25 mol/L. An excellent application is the sequential determination of calcium and magnesium. Although calcium reacts exothermically with EDTA (heat of chelation approximately −23.4 kJ/mol), magnesium reacts endothermically, with a heat of chelation of approximately +20.1 kJ/mol. This is illustrated in the titration plot of EDTA with calcium and magnesium in sea water (Figure 14). Following the solution temperature curve, the breakpoint for the calcium content (red-tagged endpoint) is followed by a region of modest temperature rise due to competition between the heats of dilution of the titrant with the solution and the endothermic reaction of Mg2+ with EDTA. The breakpoint for the consumption of Mg2+ (blue-tagged endpoint) by EDTA is revealed by an upswing in temperature caused purely by the heat of dilution (the conversion of these endpoint volumes to concentrations is sketched at the end of this passage). Direct EDTA titrations of metal ions are possible when reaction kinetics are fast, for example with zinc, copper, calcium and magnesium. However, with slower reaction kinetics, such as those exhibited by cobalt and nickel, back-titrations are used. Titrations for cobalt and nickel are carried out in an ammoniacal environment, buffered with an ammonia/ammonium chloride solution. An excess of EDTA is added and back-titrated with Cu(II) solution. It is postulated that the breakpoint is revealed by the difference in reaction enthalpies between the formation of the Cu–EDTA complex and that for the formation of the Cu–amine complex. A catalyzed endpoint procedure to determine trace amounts of metal ions in solution (down to approximately 10 mg/L) employs 0.01 mol/L EDTA. This has been applied to the determination of low-level Cu(II) in specialized plating baths, and to the determination of total hardness in water. The reaction enthalpies of EDTA with most metal ions are often quite low, and typically titrant concentrations around 1 mol/L are employed with commensurately high amounts of titrand in order to obtain sharp, reproducible endpoints. Using a catalytically indicated endpoint, very low EDTA titrant concentrations can be used. A back-titration is employed: an excess of EDTA solution is added, and the excess is back-titrated with a suitable metal ion such as Mn2+ or Cu2+. At the endpoint, the first excess of metal ion catalyzes a strongly exothermic reaction between a polyhydric phenol (such as resorcinol) and hydrogen peroxide. Thermometric titrimetry is particularly suited to the determination of a range of analytes where a precipitate is formed by reaction with the titrant. In some cases, an alternative to traditional potentiometric titration practice can be offered. In other cases, reaction chemistries may be employed for which there is no satisfactory equivalent in potentiometric titrimetry. Thermometric titrations of silver nitrate with halides and cyanide are all possible. The reaction of silver nitrate with chloride is strongly exothermic; the reaction enthalpy of Ag+ with Cl− is a high −61.2 kJ/mol. This permits the convenient determination of chloride with the commonly available standard 0.1 mol/L AgNO3. Endpoints are very sharp, and with care, chloride concentrations down to 15 mg/L can be analyzed. Bromide and chloride may be determined in admixture. Sulfate may be rapidly and easily titrated thermometrically using standard solutions of Ba2+ as titrant.
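For the sequential calcium/magnesium titration, the two tagged endpoints convert to concentrations by 1:1 EDTA:metal stoichiometry: the titre to the first endpoint corresponds to calcium, and the increment between the first and second endpoints to magnesium. A minimal sketch with hypothetical volumes:

```python
def ca_mg_mmol_l(v_ep1_ml, v_ep2_ml, edta_mol_l, sample_ml):
    """Sequential Ca/Mg from two endpoint volumes (1:1 EDTA:metal).
    Returns (Ca, Mg) concentrations in mmol/L of sample."""
    ca = v_ep1_ml * edta_mol_l / sample_ml * 1000.0
    mg = (v_ep2_ml - v_ep1_ml) * edta_mol_l / sample_ml * 1000.0
    return ca, mg

# e.g. endpoints at 1.05 mL and 6.40 mL of 1 mol/L EDTA on a 100 mL
# seawater aliquot -> ~10.5 mmol/L Ca and ~53.5 mmol/L Mg (hypothetical):
print(ca_mg_mmol_l(1.05, 6.40, 1.0, 100.0))
```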
Industrially, this sulfate titration has been applied to determinations in brine (including electrolysis brines), in nickel refining solutions and particularly for sulfate in wet-process phosphoric acid, where it has proven to be quite popular. The procedure can also be used to assist in the analysis of complex acid mixtures containing sulfuric acid where resorting to titration in non-aqueous media is not feasible. The reaction enthalpy for the formation of barium sulfate is a modest −18.8 kJ/mol. This can place a restriction on the lower limit of sulfate in a sample which can be analyzed. Thermometric titrimetry offers a rapid, highly precise method for the determination of aluminium in solution. A solution of aluminium is conditioned with acetate buffer and an excess of sodium and potassium ions. Titration with sodium or potassium fluoride yields the exothermic precipitation of an insoluble alumino-fluoride salt. Because six moles of fluoride react with one mole of aluminium, the titration is particularly precise, and a coefficient of variation (CV) of 0.03 has been achieved in the analysis of alum. When aluminium ion (say as aluminium nitrate) is employed as the titrant, fluoride can be determined using the same chemistry. This titration is useful in the determination of fluoride in complex acid mixtures used as etchants in the semiconductor industry. Orthophosphate ion can be conveniently thermometrically titrated with magnesium ions in the presence of ammonium ion. An aliquot of sample is buffered to approximately pH 10 with an NH3/NH4Cl solution. The reaction, Mg2+ + NH4+ + PO4(3−) → MgNH4PO4, is exothermic. CVs of under 0.1 have been achieved in test applications. The procedure is suitable for the determination of orthophosphate in fertilizers and other products. Nickel can be titrated thermometrically using disodium dimethylglyoximate as titrant. The chemistry is analogous to the classic gravimetric procedure, but the time taken for a determination can be reduced from many hours to a few minutes. Potential interferences need to be considered. Anionic and cationic surfactants can be determined thermometrically by titrating one type against the other. For instance, benzalkonium chloride (a quaternary-type cationic surfactant) may be determined in cleaners and algaecides for swimming pools and spas by titrating with a standard solution of sodium dodecyl sulfate. Alternatively, anionic surfactants such as sodium lauryl sulfate can be titrated with cetyl pyridinium chloride. When an excess of Ba2+ is added to a non-ionic surfactant of the alkyl propylene oxide derivative type, a pseudo-cationic complex is formed. This may be titrated with standard sodium tetraphenylborate; two moles of tetraphenylborate react with one mole of the Ba2+/non-ionic surfactant complex. Acidic solutions of fluoride (including hydrofluoric acid) can be determined by a simple thermometric titration with boric acid. The titration plot illustrated in Figure 19 shows that the endpoint is quite rounded, suggesting that the reaction might not proceed to stoichiometric equilibrium. However, since the regions of the temperature curve immediately before and after the endpoint are quite linear, the second derivative of this curve (representing the intersection of tangents) will accurately locate the endpoint. Indeed, excellent precision can be obtained with this titration, with a CV of less than 0.1.
Formaldehyde can be determined in electroless copper plating solutions by the addition of an excess of sodium sulfite solution and titration of the liberated hydroxide ion with standard acid. Reference: VanDalen and Ward, Anal. Chem. 45 (13), 2248–2251 (1973).
https://en.wikipedia.org/wiki/Thermometric_titration
The Thermomicrobia is a group of thermophilic green non-sulfur bacteria. Based on the species Thermomicrobium roseum (type species) and Sphaerobacter thermophilus, this bacterial class has the following description: [ 3 ] [ 4 ] Gram-negative; pleomorphic, non-motile, non-spore-forming rods; non-sporulating; no diamino acid present; no peptidoglycan in significant amount; atypical proteinaceous cell walls; hyper-thermophilic, with optimum growth temperature at 70–75 °C; obligately aerobic and chemoorganotrophic. [ note 1 ] The class Thermomicrobia subdivides into two orders with validly published names: Thermomicrobiales Garrity and Holt 2001 and Sphaerobacterales Stackebrandt, Rainey and Ward-Rainey 1997. As thermophilic bacteria, members of this class are usually found in environments which are distant from human activity. [ 5 ] However, they have features such as improved growth in antibiotics and CO-oxidizing activity, making them interesting topics of research (e.g. for biotechnology applications). In 1973, a strain of rose-pink thermophilic bacteria was isolated from Toadstool Spring in Yellowstone National Park; it was later named Thermomicrobium roseum and proposed as a novel species of the novel genus Thermomicrobium. [ 6 ] At that time the genus was categorized under the family Achromobacteraceae, but it became a distinct phylum by 2001. [ 3 ] In 2004, it was proposed, on the basis of an analysis of genetic affiliations, that the Thermomicrobia should more properly be reclassified as a class belonging to the phylum Chloroflexota (formerly Chloroflexi). The bacterium Sphaerobacter thermophilus, originally described as an actinobacterium, is now considered a member of the Thermomicrobia. [ 4 ] [ 7 ] In the same year, another strain of rose-pink thermophilic bacteria was isolated from Yellowstone National Park and named Thermobaculum terrenum. [ 8 ] Later genome-based analysis placed this species in the class Thermomicrobia. [ 9 ] However, the current standing of Thermobaculum terrenum is disputed. [ 10 ] In 2012, a thermotolerant nitrite-oxidizing bacterium was isolated from a bioreactor; it was named Nitrolancetus hollandica and proposed as a novel species later, in 2014. [ 11 ] Although its nitrite-oxidizing activity is unique in the class, it is placed in the Thermomicrobia on the basis of 16S rRNA phylogeny. [ 12 ] In 2014, two thermophilic, Gram-positive, rod-shaped, non-spore-forming bacteria (strains KI3ᵀ and KI4ᵀ), isolated from geothermally heated biofilms growing on a tumulus in the Kilauea Iki pit crater on the flank of Kilauea Volcano (Hawai'i), were proposed as representatives of new species based on 16S rRNA phylogeny. The KI3ᵀ strain, later named Thermomicrobium carboxidum, is closely related to Thermomicrobium roseum. The KI4ᵀ strain, later named Thermorudis peleae, was proposed as the type strain of the new genus Thermorudis. [ 13 ] In 2015, a thermophilic bacterial strain, WKT50.2, isolated from geothermal soil at Waitike (New Zealand), was proposed to be a novel species, later named Thermorudis pharmacophila. Phylogenetic analysis based on 16S rRNA places it within the class Thermomicrobia as a close relative of Thermorudis peleae. [ 5 ] Members of the class Thermomicrobia are broadly distributed across a wide range of both aquatic and terrestrial habitats.
Thermomicrobium roseum was found in geothermally heated hot springs, Thermorudis pharmacophila and Thermobaculum terrenum in heated soils, and Thermomicrobium carboxidum and Thermorudis peleae in heated sediments. [ 13 ] [ 5 ] [ 14 ] In addition, Sphaerobacter thermophilus was found in sewage sludge that had undergone thermophilic treatment. [ 15 ] The common features of their habitats include temperatures ranging from around 65–75 °C and a pH of around 6.0–8.0 (except for Nitrolancea hollandica, which grows at around 40 °C [ 11 ] ). Members of the class Thermomicrobia vary in their basic metabolism. Nitrolancetus hollandica has nitrifying activity that utilizes NO2− as an energy source, which is unique in the whole phylum Chloroflexota. [ 12 ] Thermomicrobium spp. and Sphaerobacter thermophilus show constitutive CO-oxidizing activity not found in other species of this class. [ 13 ] [ 16 ] However, species of this class do share some features, as listed below. Members of the class Thermomicrobia exhibit a certain level of resistance against metronidazole and/or trimethoprim, which are clinically relevant for humans. [ 17 ] [ 18 ] Thermomicrobium carboxidum and Thermorudis peleae show resistance against both of those antibiotics, while Sphaerobacter thermophilus shows resistance against only metronidazole. [ 5 ] Interestingly, Thermomicrobium roseum and Thermorudis pharmacophila show increased growth in both metronidazole and trimethoprim, a rare trait even among antibiotic-resistant bacteria. [ 5 ] The mechanisms behind this are currently undocumented, and further study is required on this topic. Members of the class Thermomicrobia give varying Gram-staining results. Thermomicrobium roseum, Sphaerobacter thermophilus and Thermorudis pharmacophila are reported to be Gram-negative and to have a typical layered diderm cell envelope structure. [ 3 ] [ 4 ] [ 5 ] However, their cell envelope composition is atypical compared to typical Gram-negative bacteria. The cell envelope of Thermomicrobium roseum lacks a significant amount of peptidoglycan, which is fundamental for typical Gram-negative bacteria, while being rich in protein. [ 3 ] Membrane lipids of Thermomicrobium roseum are mostly long-chain diols instead of the glycerol-based lipids commonly found in bacteria. [ 19 ] The same feature was found in Sphaerobacter thermophilus and Thermorudis pharmacophila. [ 5 ] It has been suggested that the high-protein and diol-based lipid composition is responsible for the heat resistance of these bacteria. [ 4 ] [ 20 ] Meanwhile, other members of the class Thermomicrobia are reported to be Gram-positive and to have a typical monoderm cell envelope. [ 8 ] [ 12 ] [ 13 ] There are some possible explanations for the inconsistency of Gram-staining results within the class. For Thermorudis pharmacophila, a possible explanation suggested by Houghton et al. is that it is actually an atypical monoderm bacterium, because its cell envelope contains amino acids usually associated with Gram-positive bacteria, reacts to KOH, vancomycin and ampicillin, and lacks genes responsible for diderm formation. [ 5 ] It is also suggested that further study is required to resolve this problem, since inconsistent reports of cell envelope structure are found across the whole phylum Chloroflexota.
Two phylogenies (cladograms) of the class are shown, each comprising the genera Nitrolancea, Sphaerobacter, Thermalbibacter, Thermomicrobium and Thermorudis. The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) [ 27 ] and National Center for Biotechnology Information (NCBI). [ 28 ]
https://en.wikipedia.org/wiki/Thermomicrobia
The Thermomicrobia is a group of thermophilic green non-sulfur bacteria . Based on species Thermomicrobium roseum (type species) and Sphaerobacter thermophilus , this bacteria class has the following description: [ 3 ] [ 4 ] The class Thermomicrobia subdivides into two orders with validly published names: Thermomicrobiales Garrity and Holt 2001 and Sphaerobacterales Stackebrandt, Rainey and Ward-Rainey 1997 . Gram negative. Pleomorphic, non-motile, non-spore-forming rods. Non-sporulating. No diamino acid present. No peptidoglycan in significant amount. Atypical proteinaceous cell walls. Hyper-thermophilic, optimum growth temperature at 70-75 °C. Obligatory aerobic and chemoorganotrophic. [ note 1 ] As thermophilic bacteria, members of this class are usually found in environments which are distant from human activity. [ 5 ] However, they have features like improved growth in antibiotics and CO oxidizing activity, making them interesting topics of research (e.g. for biotechnology application). In 1973, a strain of rose-pink thermophilic bacteria was isolated from Toadstool Spring in Yellowstone National Park, which was later named Thermomicrobium roseum and proposed as a novel species of the novel genus Thermomicrobium . [ 6 ] At that time the genus was categorized under family Achromobacteraceae, but it became a distinct phylum by 2001. [ 3 ] In 2004, it was proposed, on the basis of an analysis of genetic affiliations, that the Thermomicrobia should more properly be reclassified as a class belonging to the phylum Chloroflexota (formerly Chloroflexi). The bacteria Sphaerobacter thermophilus originally described as an Actinobacteria is now considered a Thermomicrobia. [ 4 ] [ 7 ] In the same year, another strain of rose-pink thermophilic bacteria was isolated from Yellowstone National Park, which was named Thermobaculum terrenum . [ 8 ] Later analysis based on genome put this species under Thermomicrobia class. [ 9 ] However, the current standing of Thermobaculum terrenum is disputed. [ 10 ] In 2012, a thermo-tolerant nitrite-oxidizing bacterium was isolated from a bioreactor, which was named Nitrolancetus hollandica and proposed as a novel species later in 2014. [ 11 ] While it has nitrite-oxidizing activity, which is unique in the Thermomicrobia class, it is placed under the Thermomicrobia class based on 16s rRNA phylogeny. [ 12 ] In 2014, two thermophilic, Gram-positive, rod-shaped, non-spore-forming bacteria (strains KI3 T and KI4 T ) isolated from geothermally heated biofilms growing on a tumulus in the Kilauea Iki pit crater on the flank of Kilauea Volcano (Hawai'i) were proposed as representatives of new species based on 16s rRNA phylogeny. The KI3 T strain, later named as Thermomicrobium carboxidum , is closely related to Thermomicrobium roseum . The KI4 T strain, later named as Thermorudis peleae , was proposed as a type strain of new genus Thermorudis . [ 13 ] In 2015, a thermophilic bacteria strain WKT50.2 isolated from geothermal soil in Waitike (New Zealand) was proposed to be a novel species, later named Thermorudis pharmacophila . Phylogenic analysis based on 16s rRNA place it within Thermomicrobia class, as close relative to Thermorudis peleae . [ 5 ] Members of the class Thermomicrobia are broadly distributed across a wide range of both aquatic and terrestrial habitats. 
Thermomicrobium roseum was found in geothermally heated hot springs, Thermorudis pharmacophila and Thermobaculum terrenum from heated soils, and Thermomicrobium carboxidum and Thermorudis peleae from heated sediments [ 13 ] [ 5 ] [ 14 ] In addition, Sphaerobacter thermophilus was found in sewage sludge that went through thermophilic treatment. [ 15 ] The common features of their habitats include temperature ranging from around 65~75 °C and a pH around 6.0~8.0 (except for Nitrolancea hollandica which grow around 40 °C [ 11 ] ). Members of Thermomicrobia class have variation in their basic metabolism. Nitrolancetus hollandica has nitrifying activity that utilize NO 2 − as energy source, which is unique in the whole Chloroflexota phylum. [ 12 ] Thermomicrobium spp. and Sphaerobacter thermophilus have constitutive CO oxidizing not found in other species in this class. [ 13 ] [ 16 ] However, species of this class do share some features, as listed below: Members of Thermomicrobia class exhibit certain level of resistance against metronidazole and/or trimethoprim , which are clinically relevant for humans. [ 17 ] [ 18 ] Thermomicrobium carboxidum and Thermorudis peleae show resistance against both of those antibiotics, while Sphaerobacter thermophilus shows resistance against only metronidazole . [ 5 ] Interestingly, Thermomicrobium roseum and Thermorudis pharmacophila have an increased growth in both metronidazole and trimethoprim , a rare trait even within antibiotic resistant bacteria. [ 5 ] The mechanisms behind are currently undocumented, and further study is required on this topic. Members of Thermomicrobia class have various Gram-staining results. Thermomicrobium roseum , Sphaerobacter thermophilus and Thermorudis pharmacophila are reported to be Gram-negative and have a typical layered diderm cell envelope structure. [ 3 ] [ 4 ] [ 5 ] However, their cell envelope composition are atypical compared to typical Gram-negative bacteria. Cell envelope of Thermomicrobium roseum lacks significant amount of peptidoglycan, which is fundamental for typical Gram-negative bacteria, while being rich in protein. [ 3 ] Membrane lipids of Thermomicrobium roseum are mostly long chain diols instead of glycerol-based lipids commonly found in bacteria. [ 19 ] The same feature was found in Sphaerobacter thermophilus and Thermorudis pharmacophila. [ 5 ] It was suggested that the high-protein and diol-based lipid composition are responsible for heat resistance of these bacteria. [ 4 ] [ 20 ] Meanwhile, other members of Thermomicrobia class are reported to be Gram-positive and have typical monoderm cell envelope. [ 8 ] [ 12 ] [ 13 ] There are some possible explanations of the inconsistency of Gram-staining result within the class. For Thermorudis pharmacophila , a possible explanation suggested by Houghton et al. is that it is actually an atypical monoderm bacterium, because its cell envelope contains amino acids usually associated with Gram-positive bacteria, have reaction to KOH, vancomycin and ampicillin , and lacks genes responsible for diderm formation. [ 5 ] It is also suggested that further study is required to resolve this problem, since the inconsistent reports of cell envelope structure are found for the whole Chloroflexota phylum. 
The class comprises the genera Nitrolancea , Sphaerobacter , Thermalbibacter , Thermomicrobium , and Thermorudis . The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) [ 27 ] and National Center for Biotechnology Information (NCBI). [ 28 ]
https://en.wikipedia.org/wiki/Thermomicrobiota
Thermomicroscopy , developed by the Austrian pharmacognosist Ludwig Kofler (1891-1951) and his wife Adelheid Kofler and continued by Maria Kuhnert-Brandstätter and Walter C. McCrone , is a method for observing the phases of solid drug substances . [ 1 ]
https://en.wikipedia.org/wiki/Thermomicroscopy
Thermomonosporaceae is a family of bacteria that share similar genotypic and phenotypic characteristics. The family Thermomonosporaceae includes aerobic, Gram-positive, non-acid-fast, chemo-organotrophic Actinomycetota . They produce a branched substrate mycelium bearing aerial hyphae that undergo differentiation into single or short chains of arthrospores . All species of Thermomonosporaceae share the same cell wall type (type III; meso-diaminopimelic acid), a similar menaquinone profile in which MK-9(H6) is predominant, and fatty acid profile type 3a. The presence of the diagnostic sugar madurose is variable, but it can be found in most species of this family. The polar lipid profiles are characterized as phospholipid type PI for most species of Thermomonospora , Actinomadura and Spirillospora . The members of Actinocorallia are characterized by phospholipid type PII. [ 3 ] The G+C content of the DNA lies within the range 66–72 mol%. [ 3 ] The pattern of 16S rRNA signatures consists of nucleotides at positions 440 : 497 (C–G), 501 : 544 (C–G), 502 : 543 (G–C), 831 : 855 (G–G), 843 (U), 844 (A) and 1355 : 1367 (A–U). [ 5 ] The type genus is Thermomonospora . [ 6 ] The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) [ 2 ] and National Center for Biotechnology Information (NCBI). [ 4 ] The family comprises the genera Actinoallomurus Tamura et al. 2009, Actinocorallia Iinuma et al. 1994, Actinomadura Lechevalier and Lechevalier 1970 [incl. Excellospora Agre & Guzeva 1975], Spirillospora Couch 1963, and Thermomonospora Henssen 1957.
https://en.wikipedia.org/wiki/Thermomonosporaceae
In electrochemistry , a thermoneutral voltage is a voltage drop across an electrochemical cell which is sufficient not only to drive the cell reaction, but also to provide the heat necessary to maintain a constant temperature. For a cell reaction transferring n electrons, the thermoneutral voltage is given by {\displaystyle E_{\text{tn}}={\frac {\Delta H}{nF}}} where Δ H {\displaystyle \Delta H} is the change in enthalpy of the cell reaction and F is the Faraday constant . For a cell reaction characterized by a chemical equation at constant temperature and pressure, the thermodynamic voltage (minimum voltage required to drive the reaction) is given by the Nernst equation : {\displaystyle E={\frac {\Delta G}{nF}}} where Δ G {\displaystyle \Delta G} is the Gibbs energy and F is the Faraday constant . The standard thermodynamic voltage (i.e. at standard temperature and pressure) is given by {\displaystyle E^{0}={\frac {\Delta G^{0}}{nF}}} and the Nernst equation can be used to calculate the potential at other conditions. The cell reaction is generally endothermic : i.e. it will extract heat from its environment. [ citation needed ] The Gibbs energy calculation generally assumes an infinite thermal reservoir to maintain a constant temperature, but in a practical case, the reaction will cool the electrode interface and slow the reaction occurring there. If the cell voltage is increased above the thermodynamic voltage, the product of that voltage and the current will generate heat, and if the voltage is such that the heat generated matches the heat required by the reaction to maintain a constant temperature, that voltage is called the "thermoneutral voltage". The rate of delivery of heat is equal to T d S d t {\displaystyle T{\frac {dS}{dt}}} where T is the temperature (the standard temperature, in this case) and dS/dt is the rate of entropy production in the cell. At the thermoneutral voltage, this rate will be zero, which indicates that the thermoneutral voltage may be calculated from the enthalpy . [ 1 ] For water at standard temperature (25 °C) the net cell reaction may be written: {\displaystyle {\ce {H2O(l) -> H2(g) + 1/2 O2(g)}}} Using Gibbs potentials ( Δ G H 2 O o = − 237.18 {\displaystyle \Delta G_{H2O}^{o}=-237.18} kJ/mol), [ 2 ] [ 3 ] the thermodynamic voltage at standard conditions is {\displaystyle E^{0}={\frac {237.18{\text{ kJ/mol}}}{2\times 96485{\text{ C/mol}}}}\approx 1.23{\text{ V}}.} Just as the combustion of hydrogen and oxygen generates heat, the reverse reaction generating hydrogen and oxygen will absorb heat. The thermoneutral voltage is (using Δ H H 2 O o = − 285.83 {\displaystyle \Delta H_{H2O}^{o}=-285.83} kJ/mol): [ 2 ] [ 3 ] {\displaystyle E_{\text{tn}}={\frac {285.83{\text{ kJ/mol}}}{2\times 96485{\text{ C/mol}}}}\approx 1.48{\text{ V}}.}
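A minimal numerical sketch of the water-electrolysis figures above (the ΔG° and ΔH° values and the two-electron transfer are from the text; the variable names are illustrative):

```python
# Thermodynamic vs. thermoneutral voltage for water electrolysis,
# H2O(l) -> H2(g) + 1/2 O2(g), which transfers n = 2 electrons.
F = 96485.0          # Faraday constant, C/mol
n = 2                # electrons transferred per H2O
dG = 237.18e3        # magnitude of ΔG°f(H2O), J/mol: Gibbs energy needed to split water
dH = 285.83e3        # magnitude of ΔH°f(H2O), J/mol: enthalpy needed to split water

E_thermo = dG / (n * F)   # minimum (thermodynamic) voltage
E_tn     = dH / (n * F)   # thermoneutral voltage

print(f"thermodynamic voltage: {E_thermo:.3f} V")  # ~1.229 V
print(f"thermoneutral voltage: {E_tn:.3f} V")      # ~1.481 V
```

Between these two voltages an ideal cell runs endothermically, drawing the balance of its enthalpy demand as heat from the surroundings; above the thermoneutral voltage the excess electrical work appears as heat.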
https://en.wikipedia.org/wiki/Thermoneutral_voltage
Nuclear fusion is a reaction in which two or more atomic nuclei combine to form a larger nucleus, along with smaller nuclei or neutrons as by-products. The difference in mass between the reactants and products is manifested as either the release or absorption of energy . This difference in mass arises as a result of the difference in nuclear binding energy between the atomic nuclei before and after the fusion reaction. Nuclear fusion is the process that powers all active stars , via many reaction pathways . Fusion processes require an extremely large triple product of temperature, density, and confinement time. These conditions occur naturally only in stellar cores and advanced nuclear weapons , and are approached in fusion power experiments . A nuclear fusion process that produces atomic nuclei lighter than nickel-62 is generally exothermic , due to the positive gradient of the nuclear binding energy curve . The most fusible nuclei are among the lightest, especially deuterium , tritium , and helium-3 . The opposite process, nuclear fission , is most energetic for very heavy nuclei, especially the actinides . Applications of fusion include fusion power , thermonuclear weapons , boosted fission weapons , neutron sources , and superheavy element production. American chemist William Draper Harkins was the first to propose the concept of nuclear fusion in 1915. [ 1 ] Francis William Aston 's 1919 invention of the mass spectrometer allowed the discovery that four hydrogen atoms are heavier than one helium atom. Thus in 1920, Arthur Eddington correctly predicted fusion of hydrogen into helium could be the primary source of stellar energy. [ 2 ] Quantum tunneling was discovered by Friedrich Hund in 1927, with relation to electron levels. [ 3 ] [ 4 ] In 1928, George Gamow was the first to apply tunneling to the nucleus, first to alpha decay , then to fusion as an inverse process. From this, in 1929, Robert Atkinson and Fritz Houtermans made the first estimates for stellar fusion rates. [ 5 ] [ 6 ] In 1938, Hans Bethe worked with Charles Critchfield to enumerate the proton–proton chain that dominates Sun-type stars. In 1939, Bethe published the discovery of the CNO cycle common to higher-mass stars. During the 1920s, Patrick Blackett made the first conclusive experiments in artificial nuclear transmutation at the Cavendish Laboratory . There, John Cockcroft and Ernest Walton built their generator on the inspiration of Gamow's paper. In April 1932, they published experiments on the reaction: H 1 1 + Li 3 7 ⟶ 2 He 2 4 {\displaystyle {\ce {{^{1}_{1}H}+{^{7}_{3}Li}->2{^{4}_{2}He}}}} where the intermediary nuclide was later confirmed to be the extremely short-lived beryllium-8 . [ 7 ] This has a claim to be the first artificial fusion reaction. [ citation needed ] In papers from July and November 1933, Ernest Lawrence et al. at the University of California Radiation Laboratory , in some of the earliest cyclotron experiments, accidentally produced the first deuterium-deuterium fusion reactions: H 1 2 + H 1 2 ⟶ H 1 3 + H 1 1 {\displaystyle {\ce {{^{2}_{1}H}+{^{2}_{1}H}->{^{3}_{1}H}+{^{1}_{1}H}}}} and H 1 2 + H 1 2 ⟶ He 2 3 + n 0 1 {\displaystyle {\ce {{^{2}_{1}H}+{^{2}_{1}H}->{^{3}_{2}He}+{^{1}_{0}n}}}} The Radiation Lab, only detecting the resulting energized protons and neutrons, [ 8 ] [ 9 ] misinterpreted the source as an exothermic disintegration of the deuterons, now known to be impossible. [ 10 ] In May 1934, Mark Oliphant , Paul Harteck , and Ernest Rutherford at the Cavendish Laboratory, [ 11 ] published an intentional deuterium fusion experiment, and made the discovery of both tritium and helium-3 . This is widely considered the first experimental demonstration of fusion.
[ 10 ] In 1938, Arthur Ruhlig at the University of Michigan made the first observation of deuterium–tritium (DT) fusion and its characteristic 14 MeV neutrons, now known as the most favourable reaction: H 1 2 + H 1 3 ⟶ He 2 4 + n 0 1 {\displaystyle {\ce {{^{2}_{1}H}+{^{3}_{1}H}->{^{4}_{2}He}+{^{1}_{0}n}}}} Research into fusion for military purposes began in the early 1940s as part of the Manhattan Project . In 1941, Enrico Fermi and Edward Teller had a conversation about the possibility of a fission bomb creating conditions for thermonuclear fusion. In 1942, Emil Konopinski brought Ruhlig's work on the deuterium-tritium reaction to the project's attention. J. Robert Oppenheimer initially commissioned physicists at Chicago and Cornell to use the Harvard University cyclotron to secretly investigate its cross-section, and that of the lithium reaction (see below). Measurements were obtained at Purdue, Chicago, and Los Alamos from 1942 to 1946. Theoretical assumptions about DT fusion gave it a similar cross-section to DD. However, in 1946 Egon Bretscher discovered a resonance enhancement giving the DT reaction a cross-section ~100 times larger. [ 12 ] From 1945, John von Neumann, Teller, and other Los Alamos scientists used ENIAC , one of the first electronic computers, to simulate thermonuclear weapon detonations. [ 13 ] The first artificial thermonuclear fusion reaction occurred during the 1951 US Greenhouse George nuclear test, using a small amount of deuterium–tritium gas. This produced the largest yield to date, at 225 kt, 15 times that of Little Boy . The first "true" thermonuclear weapon detonation, i.e. a two-stage device, was the 1952 Ivy Mike test of a liquid deuterium-fusing device, yielding over 10 Mt. The key to this jump was the full utilization of the fission blast by the Teller-Ulam design. The Soviet Union had begun their focus on a hydrogen bomb program earlier, and in 1953 carried out the RDS-6s test. This had international impact as the first air-deliverable bomb using fusion, but it yielded 400 kt and was limited by its single-stage design. The first Soviet two-stage test was RDS-37 in 1955, yielding 1.5 Mt and using an independently reached version of the Teller-Ulam design. Modern devices benefit from the use of solid lithium deuteride with an enrichment of lithium-6. This is due to the Jetter cycle involving the exothermic reaction: Li 3 6 + n 0 1 ⟶ H 1 3 + He 2 4 {\displaystyle {\ce {{^{6}_{3}Li}+{^{1}_{0}n}->{^{3}_{1}H}+{^{4}_{2}He}}}} During thermonuclear detonations, this provides tritium for the highly energetic DT reaction, and benefits from its neutron production, creating a closed neutron cycle. [ 14 ] While fusion bomb detonations were loosely considered for energy production , the possibility of controlled and sustained reactions remained the scientific focus for peaceful fusion power. Research into developing controlled fusion inside fusion reactors has been ongoing since the 1930s, with Los Alamos National Laboratory 's Scylla I device producing the first laboratory thermonuclear fusion in 1958, but the technology is still in its developmental phase. [ 15 ] The first experiments producing large amounts of controlled fusion power were the experiments with mixes of deuterium and tritium in tokamaks . Experiments in the TFTR at PPPL ( Princeton University , Princeton, NJ, USA) during 1993–1996 produced 1.6 GJ of fusion energy. The peak fusion power was 10.3 MW, from 3.7 × 10 18 reactions per second, and the peak fusion energy created in one discharge was 7.6 MJ. Subsequent experiments in the JET in 1997 achieved a peak fusion power of 16 MW (5.8 × 10 18 /s). The central Q, defined as the ratio of the local fusion power produced to the local applied heating power, was computed to be 1.3.
[ 16 ] A JET experiment in 2024 produced 69 MJ of fusion energy, consuming 0.2 mg of D and T. The US National Ignition Facility , which uses laser-driven inertial confinement fusion , was designed with the goal of achieving a fusion energy gain factor (Q) greater than one; the first large-scale laser target experiments were performed in June 2009 and ignition experiments began in early 2011. [ 17 ] [ 18 ] On 13 December 2022, the United States Department of Energy announced that on 5 December 2022, they had successfully accomplished break-even fusion, "delivering 2.05 megajoules (MJ) of energy to the target, resulting in 3.15 MJ of fusion energy output." [ 19 ] The power supplied to the experimental test cell is, however, hundreds of times larger than the power delivered to the target. Prior to this breakthrough, controlled fusion reactions had been unable to produce break-even (self-sustaining) controlled fusion. [ 20 ] The two most advanced approaches for it are magnetic confinement (toroid designs) and inertial confinement (laser designs). Workable designs for a toroidal reactor that theoretically will deliver ten times more fusion energy than the amount needed to heat the plasma to the required temperatures are in development (see ITER ). The ITER facility is expected to finish its construction phase in 2025, when it will start commissioning the reactor and initiate plasma experiments, but it is not expected to begin full deuterium–tritium fusion until 2035. [ 21 ] Private companies pursuing the commercialization of nuclear fusion received $2.6 billion in private funding in 2021 alone, going to many notable startups including but not limited to Commonwealth Fusion Systems , Helion Energy Inc ., General Fusion , TAE Technologies Inc. and Zap Energy Inc. [ 22 ] One of the most recent milestones in maintaining a sustained fusion reaction occurred in France's WEST fusion reactor, which maintained a 90-million-degree plasma for a record time of six minutes. WEST is a tokamak-style reactor, the same style as the upcoming ITER reactor. [ 23 ] The release of energy with the fusion of light elements is due to the interplay of two opposing forces: the nuclear force , a manifestation of the strong interaction , which holds protons and neutrons tightly together in the atomic nucleus ; and the Coulomb force , which causes positively charged protons in the nucleus to repel each other. [ 25 ] Lighter nuclei (nuclei smaller than iron and nickel) are sufficiently small and proton-poor to allow the nuclear force to overcome the Coulomb force. This is because the nucleus is sufficiently small that all nucleons feel the short-range attractive force at least as strongly as they feel the infinite-range Coulomb repulsion. Building up nuclei from lighter nuclei by fusion releases the extra energy from the net attraction of particles. For larger nuclei , however, no energy is released, because the nuclear force is short-range and cannot act across larger nuclei. Fusion powers stars and produces most elements lighter than cobalt in a process called nucleosynthesis . The Sun is a main-sequence star, and, as such, generates its energy by nuclear fusion of hydrogen nuclei into helium. In its core, the Sun fuses 620 million metric tons of hydrogen and makes 616 million metric tons of helium each second. The fusion of lighter elements in stars releases energy and the mass that always accompanies it.
For example, in the fusion of two hydrogen nuclei to form helium, 0.645% of the mass is carried away in the form of kinetic energy of an alpha particle or other forms of energy, such as electromagnetic radiation. [ 26 ] It takes considerable energy to force nuclei to fuse, even those of the lightest element, hydrogen . When accelerated to high enough speeds, nuclei can overcome this electrostatic repulsion and be brought close enough such that the attractive nuclear force is greater than the repulsive Coulomb force. The strong force grows rapidly once the nuclei are close enough, and the fusing nucleons can essentially "fall" into each other and the result is fusion; this is an exothermic process . [ 27 ] Energy released in most nuclear reactions is much larger than in chemical reactions , because the binding energy that holds a nucleus together is greater than the energy that holds electrons to a nucleus. For example, the ionization energy gained by adding an electron to a hydrogen nucleus is 13.6 eV —less than one-millionth of the 17.6 MeV released in the deuterium – tritium (D–T) reaction shown in the adjacent diagram. Fusion reactions have an energy density many times greater than nuclear fission ; the reactions produce far greater energy per unit of mass even though individual fission reactions are generally much more energetic than individual fusion ones, which are themselves millions of times more energetic than chemical reactions. Via the mass–energy equivalence , fusion converts about 0.7% of the reactant mass into energy. This can only be exceeded by the extreme cases of the accretion process involving neutron stars or black holes, approaching 40% efficiency, and antimatter annihilation at 100% efficiency. (The complete conversion of one gram of matter would expel 9 × 10 13 joules of energy.) Fusion is responsible for the astrophysical production of the majority of elements lighter than iron. This includes most types of Big Bang nucleosynthesis and stellar nucleosynthesis . Non-fusion processes that contribute include the s-process and r-process in neutron star merger and supernova nucleosynthesis , responsible for elements heavier than iron. An important fusion process is the stellar nucleosynthesis that powers stars , including the Sun. In the 20th century, it was recognized that the energy released from nuclear fusion reactions accounts for the longevity of stellar heat and light. The fusion of nuclei in a star, starting from its initial hydrogen and helium abundance, provides that energy and synthesizes new nuclei. Different reaction chains are involved, depending on the mass of the star (and therefore the pressure and temperature in its core). Around 1920, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper The Internal Constitution of the Stars . [ 28 ] [ 29 ] At that time, the source of stellar energy was unknown; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc 2 . This was a particularly remarkable development since at that time fusion and thermonuclear energy had not yet been discovered, nor even that stars are largely composed of hydrogen (see metallicity ). Eddington's paper set out a chain of such speculations, all of which were proven correct in the following decades.
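The roughly 0.7% figure and the one-gram example above can be checked directly from standard atomic masses (a back-of-envelope sketch; atomic rather than nuclear masses are used, which is adequate at this precision):

```python
# Fraction of rest mass released when four hydrogen nuclei fuse to helium-4,
# and the total energy equivalent of one gram of matter (E = m c^2).
c = 2.998e8            # speed of light, m/s
m_H = 1.007825         # atomic mass of hydrogen-1, u
m_He4 = 4.002602       # atomic mass of helium-4, u

mass_fraction = (4 * m_H - m_He4) / (4 * m_H)
print(f"mass released as energy: {mass_fraction:.3%}")   # ~0.71%

E_per_gram = 1e-3 * c**2                                 # joules from 1 g
print(f"1 g of matter -> {E_per_gram:.2e} J")            # ~9.0e13 J
```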
The primary source of solar energy, and that of similar-sized stars, is the fusion of hydrogen to form helium (the proton–proton chain reaction), which occurs at a solar-core temperature of 14 million kelvin. The net result is the fusion of four protons into one alpha particle , with the release of two positrons and two neutrinos (which changes two of the protons into neutrons), and energy. In heavier stars, the CNO cycle and other processes are more important. As a star uses up a substantial fraction of its hydrogen, it begins to fuse heavier elements. In the most massive cores, silicon-burning is the final fusion cycle, leading to a build-up of iron and nickel nuclei. Nuclear binding energy makes the production of elements heavier than nickel via fusion energetically unfavorable. These elements are produced in non-fusion processes: the s-process , r-process , and the variety of processes that can produce p-nuclei . Such processes occur in giant star shells, or supernovae , or neutron star mergers . Brown dwarfs fuse deuterium and, in very high mass cases, also fuse lithium. Carbon-oxygen white dwarfs , which accrete matter either from an active stellar companion or a white dwarf merger, approach the Chandrasekhar limit of 1.44 solar masses. Immediately prior to reaching this limit, carbon-burning fusion ignites, destroying the Earth-sized dwarf within one second in a Type Ia supernova . Much more rarely, helium white dwarfs may merge, which does not cause an explosion but begins helium burning in an extreme type of helium star . Some neutron stars accrete hydrogen and helium from an active stellar companion. Periodically, the helium accretion reaches a critical level, and a thermonuclear burn wave propagates across the surface, on the timescale of one second. [ 30 ] Similar to stellar fusion, extreme conditions within black hole accretion disks can allow fusion reactions. Calculations show the most energetic reactions occur around lower stellar-mass black holes , below 10 solar masses, compared to those above 100. Beyond five Schwarzschild radii , carbon-burning and the fusion of helium-3 dominate the reactions. Within this distance, around lower-mass black holes, fusion of nitrogen, oxygen , neon , and magnesium can occur. In the extreme limit, the silicon-burning process can begin with the fusion of silicon and selenium nuclei. [ 31 ] From the period approximately 10 seconds to 20 minutes after the Big Bang , the universe cooled from over 100 keV to 1 keV. This allowed protons and neutrons to combine into deuterium nuclei, beginning a rapid fusion chain into tritium and helium-3 and ending in predominantly helium-4, with a minimal fraction of lithium, beryllium, and boron nuclei. A substantial energy barrier of electrostatic forces must be overcome before fusion can occur. At large distances, two naked nuclei repel one another because of the repulsive electrostatic force between their positively charged protons. If two nuclei can be brought close enough together, however, the electrostatic repulsion can be overcome by the quantum effect in which nuclei can tunnel through the Coulomb barrier . When a nucleon such as a proton or neutron is added to a nucleus, the nuclear force attracts it to all the other nucleons of the nucleus (if the atom is small enough), but primarily to its immediate neighbors due to the short range of the force. The nucleons in the interior of a nucleus have more neighboring nucleons than those on the surface.
Since smaller nuclei have a larger surface-area-to-volume ratio, the binding energy per nucleon due to the nuclear force generally increases with the size of the nucleus but approaches a limiting value corresponding to that of a nucleus with a diameter of about four nucleons. It is important to keep in mind that nucleons are quantum objects . So, for example, since two neutrons in a nucleus are identical to each other, the goal of distinguishing one from the other, such as which one is in the interior and which is on the surface, is in fact meaningless, and the inclusion of quantum mechanics is therefore necessary for proper calculations. The electrostatic force, on the other hand, is an inverse-square force , so a proton added to a nucleus will feel an electrostatic repulsion from all the other protons in the nucleus. The electrostatic energy per nucleon due to the electrostatic force thus increases without limit as the atomic number of the nucleus grows. The net result of the opposing electrostatic and strong nuclear forces is that the binding energy per nucleon generally increases with increasing size, up to the elements iron and nickel , and then decreases for heavier nuclei. Eventually, the binding energy becomes negative and very heavy nuclei (all with more than 208 nucleons, corresponding to a diameter of about 6 nucleons) are not stable. The four most tightly bound nuclei, in decreasing order of binding energy per nucleon, are 62 Ni , 58 Fe , 56 Fe , and 60 Ni . [ 32 ] Even though the nickel isotope , 62 Ni , is more stable, the iron isotope 56 Fe is an order of magnitude more common. This is due to the fact that there is no easy way for stars to create 62 Ni through the alpha process . An exception to this general trend is the helium-4 nucleus, whose binding energy is higher than that of lithium , the next heavier element. This is because protons and neutrons are fermions , which according to the Pauli exclusion principle cannot exist in the same nucleus in exactly the same state. Each proton or neutron's energy state in a nucleus can accommodate both a spin up particle and a spin down particle. Helium-4 has an anomalously large binding energy because its nucleus consists of two protons and two neutrons (it is a doubly magic nucleus), so all four of its nucleons can be in the ground state. Any additional nucleons would have to go into higher energy states. Indeed, the helium-4 nucleus is so tightly bound that it is commonly treated as a single quantum mechanical particle in nuclear physics, namely, the alpha particle . The situation is similar if two nuclei are brought together. As they approach each other, all the protons in one nucleus repel all the protons in the other. Only when the two nuclei actually come close enough for long enough can the strong attractive nuclear force take over and overcome the repulsive electrostatic force. This can also be described as the nuclei overcoming the so-called Coulomb barrier . The kinetic energy needed to achieve this can be lower than the barrier itself because of quantum tunneling. The Coulomb barrier is smallest for isotopes of hydrogen, as their nuclei contain only a single positive charge. A diproton is not stable, so neutrons must also be involved, ideally in such a way that a helium nucleus, with its extremely tight binding, is one of the products. Using deuterium–tritium fuel, the resulting energy barrier is about 0.1 MeV. In comparison, the energy needed to remove an electron from hydrogen is 13.6 eV.
The (intermediate) result of the fusion is an unstable 5 He nucleus, which immediately ejects a neutron with 14.1 MeV. The recoil energy of the remaining 4 He nucleus is 3.5 MeV, so the total energy liberated is 17.6 MeV. This is many times more than what was needed to overcome the energy barrier. The reaction cross section (σ) is a measure of the probability of a fusion reaction as a function of the relative velocity of the two reactant nuclei. If the reactants have a distribution of velocities, e.g. a thermal distribution, then it is useful to perform an average over the distributions of the product of cross-section and velocity. This average is called the 'reactivity', denoted ⟨ σv ⟩ . The reaction rate (fusions per volume per time) is ⟨ σv ⟩ times the product of the reactant number densities: f = n 1 n 2 ⟨ σ v ⟩ {\displaystyle f=n_{1}n_{2}\langle \sigma v\rangle } If a species of nuclei is reacting with a nucleus like itself, such as the DD reaction, then the product n 1 n 2 {\displaystyle n_{1}n_{2}} must be replaced by n 2 / 2 {\displaystyle n^{2}/2} . ⟨ σ v ⟩ {\displaystyle \langle \sigma v\rangle } increases from virtually zero at room temperature up to meaningful magnitudes at temperatures of 10 – 100 keV. At these temperatures, well above typical ionization energies (13.6 eV in the hydrogen case), the fusion reactants exist in a plasma state. The significance of ⟨ σ v ⟩ {\displaystyle \langle \sigma v\rangle } as a function of temperature in a device with a particular energy confinement time is found by considering the Lawson criterion . This is an extremely challenging barrier to overcome on Earth, which explains why fusion research has taken many years to reach the current advanced technical state. [ 33 ] [ 34 ] Thermonuclear fusion is the process of atomic nuclei combining or "fusing" using high temperatures to drive them close enough together for this to become possible. Such temperatures cause the matter to become a plasma and, if confined, fusion reactions may occur due to collisions with extreme thermal kinetic energies of the particles. There are two forms of thermonuclear fusion: uncontrolled , in which the resulting energy is released in an uncontrolled manner, as it is in thermonuclear weapons ("hydrogen bombs") and in most stars ; and controlled , where the fusion reactions take place in an environment allowing some or all of the energy released to be harnessed. Temperature is a measure of the average kinetic energy of particles, so heating the material increases the energy of its particles. After reaching sufficient temperature, given by the Lawson criterion , the energy of accidental collisions within the plasma is high enough to overcome the Coulomb barrier and the particles may fuse together. In a deuterium–tritium fusion reaction , for example, the energy necessary to overcome the Coulomb barrier is 0.1 MeV . Converting between energy and temperature shows that the 0.1 MeV barrier would be overcome at a temperature in excess of 1.2 billion kelvin . Two effects lower the actual temperature needed. One is the fact that temperature is the average kinetic energy, implying that some nuclei at this temperature would actually have much higher energy than 0.1 MeV, while others would be much lower. It is the nuclei in the high-energy tail of the velocity distribution that account for most of the fusion reactions. The other effect is quantum tunnelling . The nuclei do not actually have to have enough energy to overcome the Coulomb barrier completely. If they have nearly enough energy, they can tunnel through the remaining barrier.
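As a rough illustration of the rate formula above, the sketch below evaluates f = n 1 n 2 ⟨σv⟩ and the resulting fusion power density for a 50:50 D–T plasma; the ion density is an order-of-magnitude assumption typical of magnetic confinement, and the ⟨σv⟩ value near 10 keV is the commonly quoted figure, not a parameter from the text:

```python
# Volumetric fusion rate f = n1 * n2 * <sigma v> for a 50:50 D-T plasma,
# and the corresponding fusion power density.
sigma_v = 1.1e-22      # D-T reactivity near 10 keV, m^3/s (commonly quoted value)
n_ion = 1.0e20         # total ion density, m^-3 (assumed, magnetic-confinement scale)
n_D = n_T = n_ion / 2  # 50:50 fuel mix

E_fus = 17.6e6 * 1.602e-19   # energy per D-T reaction, J (17.6 MeV)

rate = n_D * n_T * sigma_v   # reactions per m^3 per second
power_density = rate * E_fus # W/m^3

print(f"rate: {rate:.2e} reactions/m^3/s")
print(f"power density: {power_density / 1e3:.0f} kW/m^3")

# For a single-species fuel such as D-D, each pair is counted once, so the
# product n1*n2 is replaced by n**2 / 2 as stated in the text.
```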
For these reasons fuel at lower temperatures will still undergo fusion events, at a lower rate. Thermonuclear fusion is one of the methods being researched in the attempts to produce fusion power . If thermonuclear fusion becomes favorable to use, it would significantly reduce the world's carbon footprint . Accelerator-based light-ion fusion is a technique using particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fusion reactions. [ 35 ] Accelerating light ions is relatively easy, and can be done in an efficient manner—requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer; fusion can be observed with as little as 10 kV between the electrodes. [ citation needed ] The system can be arranged to accelerate ions into a static fuel-infused target, known as beam–target fusion, or by accelerating two streams of ions towards each other, beam–beam fusion. [ citation needed ] The key problem with accelerator-based fusion (and with cold targets in general) is that fusion cross sections are many orders of magnitude lower than Coulomb interaction cross-sections. Therefore, the vast majority of ions expend their energy on bremsstrahlung radiation and the ionization of target atoms. Devices referred to as sealed-tube neutron generators are particularly relevant to this discussion. These small devices are miniature particle accelerators filled with deuterium and tritium gas in an arrangement that allows ions of those nuclei to be accelerated against hydride targets, also containing deuterium and tritium, where fusion takes place, releasing a flux of neutrons. Hundreds of neutron generators are produced annually for the petroleum industry, where they are used in measurement equipment for locating and mapping oil reserves. [ citation needed ] A number of attempts to recirculate the ions that "miss" collisions have been made over the years. One of the better-known attempts in the 1970s was Migma , which used a unique particle storage ring to capture ions into circular orbits and return them to the reaction area. Theoretical calculations made during funding reviews pointed out that the system would have significant difficulty scaling up to contain enough fusion fuel to be relevant as a power source. In the 1990s, a new arrangement using a field-reversed configuration (FRC) as the storage system was proposed by Norman Rostoker and continues to be studied by TAE Technologies as of 2021 [update] . A closely related approach is to merge two FRCs rotating in opposite directions, [ 36 ] which is being actively studied by Helion Energy . Because these approaches all have ion energies well beyond the Coulomb barrier , they often suggest the use of alternative fuel cycles like p- 11 B that are too difficult to attempt using conventional approaches. [ 37 ] Fusion of very heavy target nuclei with accelerated ion beams is the primary method of element synthesis.
In early nuclear experiments, beginning in the 1930s, deuteron beams were used to discover the first synthetic elements, such as technetium , neptunium , and plutonium : U 92 238 + H 1 2 ⟶ Np 93 238 + 2 0 1 n {\displaystyle {\begin{aligned}{\ce {{^{238}_{92}U}+{^{2}_{1}H}->}}&{\ce {{^{238}_{93}Np}+2_{0}^{1}n}}\end{aligned}}} Fusion of very heavy target nuclei with heavy ion beams has been used to discover superheavy elements : Pb 82 208 + Ni 28 62 ⟶ Ds 110 269 + 0 1 n {\displaystyle {\begin{aligned}{\ce {{^{208}_{82}Pb}+{^{62}_{28}Ni}->}}&{\ce {{^{269}_{110}Ds}+_{0}^{1}n}}\end{aligned}}} Cf 98 249 + Ca 20 48 ⟶ Og 118 294 + 3 0 1 n {\displaystyle {\begin{aligned}{\ce {{^{249}_{98}Cf}+{^{48}_{20}Ca}->}}&{\ce {{^{294}_{118}Og}+3_{0}^{1}n}}\end{aligned}}} Muon-catalyzed fusion is a fusion process that occurs at ordinary temperatures. It was studied in detail by Steven Jones in the early 1980s. Net energy production from this reaction has been unsuccessful because of the high energy required to create muons , their short 2.2 μs lifetime , and the high chance that a muon will bind to the new alpha particle and thus stop catalyzing fusion. [ 38 ] Some other confinement principles have been investigated. The key problem in achieving thermonuclear fusion is how to confine the hot plasma. Due to the high temperature, the plasma cannot be in direct contact with any solid material, so it has to be located in a vacuum . Also, high temperatures imply high pressures. The plasma tends to expand immediately and some force is necessary to act against it. This force can take one of three forms: gravitation in stars; magnetic forces in magnetic confinement fusion reactors; or inertia, as the fusion reaction may occur before the plasma starts to expand, so that the plasma's inertia keeps the material together. One force capable of confining the fuel well enough to satisfy the Lawson criterion is gravity . The mass needed, however, is so great that gravitational confinement is only found in stars —the least massive stars capable of sustained fusion are red dwarfs , while brown dwarfs are able to fuse deuterium and lithium if they are of sufficient mass. In stars heavy enough , after the supply of hydrogen is exhausted in their cores, their cores (or a shell around the core) start fusing helium to carbon . In the most massive stars (at least 8–11 solar masses ), the process is continued until some of their energy is produced by fusing lighter elements to iron . As iron has one of the highest binding energies , reactions producing heavier elements are generally endothermic . Therefore, significant amounts of heavier elements are not formed during stable periods of massive star evolution, but are formed in supernova explosions . Some lighter stars also form these elements in their outer parts over long periods of time, by absorbing neutrons that are emitted from the fusion process in the star's interior. All of the elements heavier than iron have some potential energy to release, in theory. At the extremely heavy end of element production, these heavier elements can produce energy in the process of being split back toward the size of iron, through nuclear fission . Nuclear fission thus releases energy that was stored, sometimes billions of years before, during stellar nucleosynthesis . Electrically charged particles (such as fuel ions) will follow magnetic field lines (see Guiding centre ). The fusion fuel can therefore be trapped using a strong magnetic field.
A variety of magnetic configurations exist, including the toroidal geometries of tokamaks and stellarators and open-ended mirror confinement systems. A third confinement principle is to apply a rapid pulse of energy to a large part of the surface of a pellet of fusion fuel, causing it to simultaneously "implode" and heat to very high pressure and temperature. If the fuel is dense enough and hot enough, the fusion reaction rate will be high enough to burn a significant fraction of the fuel before it has dissipated. To achieve these extreme conditions, the initially cold fuel must be explosively compressed. Inertial confinement is used in the hydrogen bomb , where the driver is x-rays created by a fission bomb. Inertial confinement is also attempted in "controlled" nuclear fusion, where the driver is a laser , ion , or electron beam, or a Z-pinch . Another method is to use conventional high explosive material to compress a fuel to fusion conditions. [ 47 ] [ 48 ] The UTIAS explosive-driven-implosion facility was used to produce stable, centred and focused hemispherical implosions [ 49 ] to generate neutrons from D-D reactions. The simplest and most direct method proved to be a predetonated stoichiometric mixture of deuterium and oxygen . The other successful method used a miniature Voitenko compressor , [ 50 ] where a plane diaphragm was driven by the implosion wave into a secondary small spherical cavity that contained pure deuterium gas at one atmosphere. [ 51 ] There are also electrostatic confinement fusion devices. These devices confine ions using electrostatic fields. The best known is the fusor . This device has a cathode inside an anode wire cage. Positive ions fly towards the negative inner cage, and are heated by the electric field in the process. If they miss the inner cage they can collide and fuse. Ions typically hit the cathode, however, creating prohibitively high conduction losses. Also, fusion rates in fusors are very low due to competing physical effects, such as energy loss in the form of light radiation. [ 52 ] Designs have been proposed to avoid the problems associated with the cage, by generating the field using a non-neutral cloud. These include a plasma oscillating device, [ 53 ] a Penning trap and the polywell . [ 54 ] The technology is relatively immature, however, and many scientific and engineering questions remain. The best-known inertial electrostatic confinement approach is the fusor . Starting in 1999, a number of amateurs have been able to achieve fusion using these homemade devices. [ 55 ] [ 56 ] [ 57 ] [ 58 ] Other IEC devices include: the Polywell , MIX POPS [ 59 ] and Marble concepts. [ 60 ] At the temperatures and densities in stellar cores, the rates of fusion reactions are notoriously slow. For example, at solar core temperature ( T ≈ 15 MK) and density (160 g/cm 3 ), the energy release rate is only 276 μW/cm 3 —about a quarter of the volumetric rate at which a resting human body generates heat. [ 61 ] Thus, reproduction of stellar core conditions in a lab for nuclear fusion power production is completely impractical. Because nuclear reaction rates depend on density as well as temperature, and most fusion schemes operate at relatively low densities, those methods are strongly dependent on higher temperatures. The fusion rate as a function of temperature (exp(− E / kT )) leads to the need to achieve temperatures in terrestrial reactors 10–100 times higher than in stellar interiors: T ≈ (0.1–1.0) × 10 9 K .
In artificial fusion, the primary fuel is not constrained to be protons and higher temperatures can be used, so reactions with larger cross-sections are chosen. Another concern is the production of neutrons, which activate the reactor structure radiologically, but which also have the advantages of allowing volumetric extraction of the fusion energy and tritium breeding. Reactions that release no neutrons are referred to as aneutronic . To be a useful energy source, a fusion reaction must satisfy several criteria. Few reactions meet them all. The following are those with the largest cross sections: [ 62 ] [ 63 ] For reactions with two products, the energy is divided between them in inverse proportion to their masses, as shown. In most reactions with three products, the distribution of energy varies. For reactions that can result in more than one set of products, the branching ratios are given. Some reaction candidates can be eliminated at once. The D– 6 Li reaction has no advantage compared to p + – 11 5 B because it is roughly as difficult to burn but produces substantially more neutrons through 2 1 D – 2 1 D side reactions. There is also a p + – 7 3 Li reaction, but the cross section is far too low, except possibly when T i > 1 MeV, but at such high temperatures an endothermic, direct neutron-producing reaction also becomes very significant. Finally there is also a p + – 9 4 Be reaction, which is not only difficult to burn, but 9 4 Be can be easily induced to split into two alpha particles and a neutron. In addition to the fusion reactions, the following reactions with neutrons are important in order to "breed" tritium in "dry" fusion bombs and some proposed fusion reactors: Li 3 6 + n 0 1 ⟶ H 1 3 + He 2 4 {\displaystyle {\ce {{^{6}_{3}Li}+{^{1}_{0}n}->{^{3}_{1}H}+{^{4}_{2}He}}}} and Li 3 7 + n 0 1 ⟶ H 1 3 + He 2 4 + n 0 1 {\displaystyle {\ce {{^{7}_{3}Li}+{^{1}_{0}n}->{^{3}_{1}H}+{^{4}_{2}He}+{^{1}_{0}n}}}} The latter of the two equations was unknown when the U.S. conducted the Castle Bravo fusion bomb test in 1954. Being just the second fusion bomb ever tested (and the first to use lithium), the designers of the Castle Bravo "Shrimp" had understood the usefulness of 6 Li in tritium production, but had failed to recognize that 7 Li fission would greatly increase the yield of the bomb. While 7 Li has a small neutron cross-section for low neutron energies, it has a higher cross section above 5 MeV. [ 64 ] The 15 Mt yield was 150% greater than the predicted 6 Mt and caused unexpected exposure to fallout. To evaluate the usefulness of these reactions, in addition to the reactants, the products, and the energy released, one needs to know something about the nuclear cross section . Any given fusion device has a maximum plasma pressure it can sustain, and an economical device would always operate near this maximum. Given this pressure, the largest fusion output is obtained when the temperature is chosen so that ⟨ σv ⟩ / T 2 is a maximum. This is also the temperature at which the value of the triple product nTτ required for ignition is a minimum, since that required value is inversely proportional to ⟨ σv ⟩ / T 2 (see Lawson criterion ). (A plasma is "ignited" if the fusion reactions produce enough power to maintain the temperature without external heating.) This optimum temperature and the value of ⟨ σv ⟩ / T 2 at that temperature are given for a few of these reactions in the following table. Note that many of the reactions form chains. For instance, a reactor fueled with 3 1 T and 3 2 He creates some 2 1 D , which is then possible to use in the 2 1 D – 3 2 He reaction if the energies are "right". An elegant idea is to combine the reactions (8) and (9).
The 3 2 He from reaction (8) can react with 6 3 Li in reaction (9) before completely thermalizing. This produces an energetic proton, which in turn undergoes reaction (8) before thermalizing. Detailed analysis shows that this idea would not work well, [ citation needed ] but it is a good example of a case where the usual assumption of a Maxwellian plasma is not appropriate. Any of the reactions above can in principle be the basis of fusion power production. In addition to the temperature and cross section discussed above, we must consider the total energy of the fusion products E fus , the energy of the charged fusion products E ch , and the atomic number Z of the non-hydrogenic reactant. Specification of the 2 1 D – 2 1 D reaction entails some difficulties, though. To begin with, one must average over the two branches (2i) and (2ii). More difficult is to decide how to treat the 3 1 T and 3 2 He products. 3 1 T burns so well in a deuterium plasma that it is almost impossible to extract from the plasma. The 2 1 D – 3 2 He reaction is optimized at a much higher temperature, so the burnup at the optimum 2 1 D – 2 1 D temperature may be low. Therefore, it seems reasonable to assume the 3 1 T but not the 3 2 He gets burned up and adds its energy to the net reaction, which means the total reaction would be the sum of (2i), (2ii), and (1): 5 H 1 2 ⟶ He 2 3 + He 2 4 + H 1 1 + 2 n 0 1 + 24.9 MeV {\displaystyle {\ce {5{^{2}_{1}H}->{^{3}_{2}He}+{^{4}_{2}He}+{^{1}_{1}H}+2_{0}^{1}n}}+24.9{\text{ MeV}}} For calculating the power of a reactor (in which the reaction rate is determined by the D–D step), we count the 2 1 D – 2 1 D fusion energy per D–D reaction as E fus = (4.03 MeV + 17.6 MeV) × 50% + (3.27 MeV) × 50% = 12.5 MeV and the energy in charged particles as E ch = (4.03 MeV + 3.5 MeV) × 50% + (0.82 MeV) × 50% = 4.2 MeV. (Note: if the tritium ion reacts with a deuteron while it still has a large kinetic energy, then the kinetic energy of the helium-4 produced may be quite different from 3.5 MeV, [ 78 ] so this calculation of energy in charged particles is only an approximation of the average.) The amount of energy per deuteron consumed is 2/5 of this, or 5.0 MeV (a specific energy of about 225 million MJ per kilogram of deuterium). Another unique aspect of the 2 1 D – 2 1 D reaction is that there is only one reactant, which must be taken into account when calculating the reaction rate. With this choice, we tabulate parameters for four of the most important reactions. The last column is the neutronicity of the reaction, the fraction of the fusion energy released as neutrons. This is an important indicator of the magnitude of the problems associated with neutrons, like radiation damage, biological shielding, remote handling, and safety. For the first two reactions it is calculated as ( E fus − E ch )/ E fus . For the last two reactions, where this calculation would give zero, the values quoted are rough estimates based on side reactions that produce neutrons in a plasma in thermal equilibrium. Of course, the reactants should also be mixed in the optimal proportions. This is the case when each reactant ion plus its associated electrons accounts for half the pressure. Assuming that the total pressure is fixed, this means that the particle density of the non-hydrogenic ion is smaller than that of the hydrogenic ion by a factor 2/( Z + 1) . Therefore, the rate for these reactions is reduced by the same factor, on top of any differences in the values of ⟨ σv ⟩ / T 2 .
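The branch-averaged bookkeeping above reduces to a few lines of arithmetic (all energies and the 50% branching are taken directly from the text):

```python
# Branch-averaged energy accounting for a D-D reactor in which the bred
# tritium (but not the helium-3) is burned in situ, as described above.
# (2i)  D+D -> T + p    : 4.03 MeV total, all in charged particles
# (2ii) D+D -> 3He + n  : 3.27 MeV total, 0.82 MeV in the charged 3He
# (1)   D+T -> 4He + n  : 17.6 MeV total, 3.5 MeV in the charged alpha

E_fus = 0.5 * (4.03 + 17.6) + 0.5 * 3.27   # MeV per D-D reaction; ~12.5 MeV
E_ch  = 0.5 * (4.03 + 3.5)  + 0.5 * 0.82   # MeV in charged products; ~4.2 MeV

print(f"E_fus = {E_fus:.2f} MeV")                  # rounds to the text's 12.5 MeV
print(f"E_ch  = {E_ch:.2f} MeV")                   # rounds to the text's 4.2 MeV
print(f"per deuteron: {E_fus * 2 / 5:.1f} MeV")    # 5 deuterons per summed reaction
```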
On the other hand, because the 2 1 D – 2 1 D reaction has only one reactant, its rate is twice as high as when the fuel is divided between two different hydrogenic species, thus creating a more efficient reaction. Thus there is a "penalty" of 2/( Z + 1) for non-hydrogenic fuels arising from the fact that they require more electrons, which take up pressure without participating in the fusion reaction. (It is usually a good assumption that the electron temperature will be nearly equal to the ion temperature. Some authors, however, discuss the possibility that the electrons could be maintained substantially colder than the ions. In such a case, known as a "hot ion mode", the "penalty" would not apply.) There is at the same time a "bonus" of a factor 2 for 2 1 D – 2 1 D because each ion can react with any of the other ions, not just a fraction of them. We can now compare these reactions in the following table. The maximum value of ⟨ σv ⟩ / T 2 is taken from a previous table. The "penalty/bonus" factor is that related to a non-hydrogenic reactant or a single-species reaction. The values in the column "inverse reactivity" are found by dividing 1.24 × 10 −24 by the product of the second and third columns. It indicates the factor by which the other reactions occur more slowly than the 2 1 D – 3 1 T reaction under comparable conditions. The column " Lawson criterion " weights these results with E ch and gives an indication of how much more difficult it is to achieve ignition with these reactions, relative to the difficulty for the 2 1 D – 3 1 T reaction. The next-to-last column is labeled "power density" and weights the practical reactivity by E fus . The final column indicates how much lower the fusion power density of the other reactions is compared to the 2 1 D – 3 1 T reaction and can be considered a measure of the economic potential. The ions undergoing fusion in many systems will essentially never occur alone but will be mixed with electrons that in aggregate neutralize the ions' bulk electrical charge and form a plasma . The electrons will generally have a temperature comparable to or greater than that of the ions, so they will collide with the ions and emit x-ray radiation of 10–30 keV energy, a process known as Bremsstrahlung . The huge size of the Sun and stars means that the x-rays produced in this process will not escape and will deposit their energy back into the plasma. They are said to be opaque to x-rays. But any terrestrial fusion reactor will be optically thin for x-rays of this energy range. X-rays are difficult to reflect, but they are effectively absorbed (and converted into heat) in less than a millimetre of stainless steel (which is part of a reactor's shield). This means the bremsstrahlung process is carrying energy out of the plasma, cooling it. The ratio of fusion power produced to x-ray radiation lost to walls is an important figure of merit. This ratio is generally maximized at a much higher temperature than that which maximizes the power density (see the previous subsection). The following table shows estimates of the optimum temperature and the power ratio at that temperature for several reactions: The actual ratios of fusion to Bremsstrahlung power will likely be significantly lower for several reasons. For one, the calculation assumes that the energy of the fusion products is transmitted completely to the fuel ions, which then lose energy to the electrons by collisions, which in turn lose energy by Bremsstrahlung.
However, because the fusion products move much faster than the fuel ions, they will give up a significant fraction of their energy directly to the electrons. Secondly, the ions in the plasma are assumed to be purely fuel ions. In practice, there will be a significant proportion of impurity ions, which will then lower the ratio. In particular, the fusion products themselves must remain in the plasma until they have given up their energy, and will remain for some time after that in any proposed confinement scheme. Finally, all channels of energy loss other than Bremsstrahlung have been neglected. The last two factors are related. On theoretical and experimental grounds, particle and energy confinement seem to be closely related. In a confinement scheme that does a good job of retaining energy, fusion products will build up. If the fusion products are efficiently ejected, then energy confinement will be poor, too. The temperatures maximizing the fusion power compared to the Bremsstrahlung are in every case higher than the temperature that maximizes the power density and minimizes the required value of the fusion triple product . This will not change the optimum operating point for 2 1 D – 3 1 T very much because the Bremsstrahlung fraction is low, but it will push the other fuels into regimes where the power density relative to 2 1 D – 3 1 T is even lower and the required confinement even more difficult to achieve. For 2 1 D – 2 1 D and 2 1 D – 3 2 He , Bremsstrahlung losses will be a serious, possibly prohibitive problem. For 3 2 He – 3 2 He , p + – 6 3 Li and p + – 11 5 B the Bremsstrahlung losses appear to make a fusion reactor using these fuels with a quasineutral, isotropic plasma impossible. Some ways out of this dilemma have been considered but rejected. [ 79 ] [ 80 ] This limitation does not apply to non-neutral and anisotropic plasmas ; however, these have their own challenges to contend with. In a classical picture, nuclei can be understood as hard spheres that repel each other through the Coulomb force but fuse once the two spheres come close enough for contact. Estimating the radius of an atomic nucleus as about one femtometer, the energy needed for fusion of two hydrogen nuclei is the Coulomb energy at contact, E thresh = 1 4 π ϵ 0 q 1 q 2 r {\displaystyle E_{\text{thresh}}={\frac {1}{4\pi \epsilon _{0}}}{\frac {q_{1}q_{2}}{r}}} , which is of order 1 MeV. This would imply that for the core of the sun, which has a Boltzmann distribution with a temperature of around 1.4 keV, the probability that hydrogen would reach the threshold is 10 − 290 {\displaystyle 10^{-290}} , that is, fusion would never occur. However, fusion in the sun does occur due to quantum mechanics. The probability that fusion occurs is greatly increased compared to the classical picture, thanks to the smearing of the effective radius as the de Broglie wavelength as well as quantum tunneling through the potential barrier. To determine the rate of fusion reactions, the value of most interest is the cross section , which describes the probability that particles will fuse by giving a characteristic area of interaction. An estimation of the fusion cross-sectional area is often broken into three pieces: σ = σ geometry × T × R {\displaystyle \sigma =\sigma _{\text{geometry}}\times T\times R} where σ geometry {\displaystyle \sigma _{\text{geometry}}} is the geometric cross section, T is the barrier transparency and R is the reaction characteristics of the reaction.
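A short sketch of the classical estimate just described: the Coulomb energy of two protons at an assumed contact separation of about 2 fm, and the corresponding Boltzmann factor at the quoted solar-core temperature. The separation is a round-number assumption, so the exponent differs in detail from the 10^−290 quoted above, but the conclusion is the same:

```python
import math

# Classical Coulomb barrier for two protons modeled as hard spheres
# touching at ~2 fm, and the Boltzmann factor exp(-U/kT) at the
# solar-core temperature quoted in the text (~1.4 keV).
k_e = 8.988e9          # Coulomb constant, N m^2 / C^2
e = 1.602e-19          # elementary charge, C
r = 2.0e-15            # assumed contact separation, m (two ~1 fm radii)

U = k_e * e**2 / r     # barrier height, J
U_keV = U / e / 1e3
print(f"barrier ~ {U_keV:.0f} keV")                 # ~720 keV

T_keV = 1.4                                          # solar-core temperature
log10_prob = -U_keV / T_keV / math.log(10)           # log10 of exp(-U/kT)
print(f"classical Boltzmann factor ~ 1e{log10_prob:.0f}")  # vanishingly small
```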
σ geometry {\displaystyle \sigma _{\text{geometry}}} is of the order of the square of the de Broglie wavelength σ geometry ≈ λ 2 = ( ℏ m r v ) 2 ∝ 1 ϵ {\displaystyle \sigma _{\text{geometry}}\approx \lambda ^{2}={\bigg (}{\frac {\hbar }{m_{r}v}}{\bigg )}^{2}\propto {\frac {1}{\epsilon }}} where m r {\displaystyle m_{r}} is the reduced mass of the system and ϵ {\displaystyle \epsilon } is the center of mass energy of the system. T can be approximated by the Gamow transparency, which has the form: T ≈ e − ϵ G / ϵ {\displaystyle T\approx e^{-{\sqrt {\epsilon _{G}/\epsilon }}}} where ϵ G = ( π α Z 1 Z 2 ) 2 × 2 m r c 2 {\displaystyle \epsilon _{G}=(\pi \alpha Z_{1}Z_{2})^{2}\times 2m_{r}c^{2}} is the Gamow factor and comes from estimating the quantum tunneling probability through the potential barrier. R contains all the nuclear physics of the specific reaction and takes very different values depending on the nature of the interaction. However, for most reactions, the variation of R ( ϵ ) {\displaystyle R(\epsilon )} is small compared to the variation from the Gamow factor and so is approximated by a function called the astrophysical S-factor , S ( ϵ ) {\displaystyle S(\epsilon )} , which is weakly varying in energy. Putting these dependencies together, one approximation for the fusion cross section as a function of energy takes the form: σ ( ϵ ) ≈ S ( ϵ ) ϵ e − ϵ G / ϵ {\displaystyle \sigma (\epsilon )\approx {\frac {S(\epsilon )}{\epsilon }}e^{-{\sqrt {\epsilon _{G}/\epsilon }}}} More detailed forms of the cross-section can be derived through nuclear physics-based models and R-matrix theory. The Naval Research Lab's plasma physics formulary [ 81 ] gives the total cross section in barns as a function of the energy (in keV) of the incident particle towards a target ion at rest, fit by the formula: σ T ( ϵ ) = A 5 + A 2 / ( ( A 4 − A 3 ϵ ) 2 + 1 ) ϵ ( e A 1 / ϵ − 1 ) {\displaystyle \sigma ^{T}(\epsilon )={\frac {A_{5}+A_{2}/((A_{4}-A_{3}\epsilon )^{2}+1)}{\epsilon (e^{A_{1}/{\sqrt {\epsilon }}}-1)}}} with fitting coefficients A 1 through A 5 tabulated for each reaction. Bosch and Hale [ 82 ] also report R-matrix calculated cross sections fitting observation data with Padé rational approximant coefficients . With energy in units of keV and cross sections in units of millibarn, their S-factor has the form: S Bosch-Hale ( ϵ ) = A 1 + ϵ ( A 2 + ϵ ( A 3 + ϵ ( A 4 + ϵ A 5 ) ) ) 1 + ϵ ( B 1 + ϵ ( B 2 + ϵ ( B 3 + ϵ B 4 ) ) ) {\displaystyle S^{\text{Bosch-Hale}}(\epsilon )={\frac {A_{1}+\epsilon (A_{2}+\epsilon (A_{3}+\epsilon (A_{4}+\epsilon A_{5})))}{1+\epsilon (B_{1}+\epsilon (B_{2}+\epsilon (B_{3}+\epsilon B_{4})))}}} where σ Bosch-Hale ( ϵ ) = S Bosch-Hale ( ϵ ) ϵ exp ⁡ ( ϵ G / ϵ ) {\displaystyle \sigma ^{\text{Bosch-Hale}}(\epsilon )={\frac {S^{\text{Bosch-Hale}}(\epsilon )}{\epsilon \exp(\epsilon _{G}/{\sqrt {\epsilon }})}}} In fusion systems that are in thermal equilibrium, the particles are in a Maxwell–Boltzmann distribution , meaning the particles have a range of energies centered around the plasma temperature. The sun, magnetically confined plasmas and inertial confinement fusion systems are well modeled to be in thermal equilibrium. In these cases, the value of interest is the fusion cross-section averaged across the Maxwell–Boltzmann distribution. The Naval Research Lab's plasma physics formulary tabulates Maxwell-averaged fusion reactivities in c m 3 / s {\displaystyle \mathrm {cm^{3}/s} } . For energies T ≤ 25 keV {\displaystyle T\leq 25{\text{ keV}}} the data can be represented by: ⟨ σ v ⟩ D D = 2.33 × 10 − 14 T − 2 / 3 e − 18.76 T − 1 / 3 {\displaystyle \langle \sigma v\rangle _{DD}=2.33\times 10^{-14}\,T^{-2/3}e^{-18.76\,T^{-1/3}}} c m 3 / s {\displaystyle \mathrm {cm^{3}/s} } and ⟨ σ v ⟩ D T = 3.68 × 10 − 12 T − 2 / 3 e − 19.94 T − 1 / 3 {\displaystyle \langle \sigma v\rangle _{DT}=3.68\times 10^{-12}\,T^{-2/3}e^{-19.94\,T^{-1/3}}} c m 3 / s {\displaystyle \mathrm {cm^{3}/s} } with T in units of keV.
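A minimal implementation of the low-temperature reactivity fits above (the coefficients follow the formulary expressions as reconstructed here; the simple fit form is only approximate, so values can differ by tens of percent from tabulated reactivities):

```python
import math

# Low-temperature (T <= 25 keV) fits for Maxwell-averaged reactivity,
# <sigma v> = C1 * T**(-2/3) * exp(-C2 * T**(-1/3)) in cm^3/s, T in keV.
def reactivity(T_keV: float, C1: float, C2: float) -> float:
    return C1 * T_keV ** (-2.0 / 3.0) * math.exp(-C2 * T_keV ** (-1.0 / 3.0))

DT = (3.68e-12, 19.94)   # D-T fit coefficients
DD = (2.33e-14, 18.76)   # D-D fit coefficients (both branches combined)

for T in (2.0, 5.0, 10.0, 20.0):
    print(f"T = {T:5.1f} keV"
          f"  <sv>_DT = {reactivity(T, *DT):.2e} cm^3/s"
          f"  <sv>_DD = {reactivity(T, *DD):.2e} cm^3/s")
```

The steep T dependence in the exponential is the practical content of the Gamow factor: a few-fold rise in temperature buys orders of magnitude in reactivity at these energies.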
https://en.wikipedia.org/wiki/Thermonuclear_fusion
A thermophile is a type of extremophile that thrives at relatively high temperatures, between 41 and 122 °C (106 and 252 °F). [ 1 ] [ 2 ] Many thermophiles are archaea , though some of them are bacteria and fungi . Thermophilic eubacteria are suggested to have been among the earliest bacteria. [ 3 ] Thermophiles are found in geothermally heated regions of the Earth , such as hot springs like those in Yellowstone National Park and deep sea hydrothermal vents , as well as decaying plant matter, such as peat bogs and compost . They can survive at high temperatures, whereas other bacteria or archaea would be damaged and sometimes killed if exposed to the same temperatures. The enzymes in thermophiles function at high temperatures. Some of these enzymes are used in molecular biology , for example the Taq polymerase used in PCR . [ 4 ] "Thermophile" is derived from the Greek : θερμότητα ( thermotita ), meaning heat , and Greek : φίλια ( philia ), love . Comparative surveys suggest that thermophile diversity is principally driven by pH, not temperature. [ 5 ] Thermophiles can be classified in various ways. One classification sorts these organisms according to their optimal growth temperatures: [ 6 ] In a related classification, thermophiles are sorted as follows: Many hyperthermophilic Archaea require elemental sulfur for growth. Some are anaerobes that use the sulfur instead of oxygen as an electron acceptor during anaerobic cellular respiration . Some are lithotrophs that oxidize sulfur to create sulfuric acid as an energy source, thus requiring the microorganism to be adapted to very low pH (i.e., it is an acidophile as well as a thermophile). These organisms are inhabitants of hot, sulfur-rich environments usually associated with volcanism , such as hot springs , geysers , and fumaroles . In these places, especially in Yellowstone National Park, zonation of microorganisms according to their temperature optima occurs. These organisms are often colored, due to the presence of photosynthetic pigments. [ citation needed ] Thermophiles can be discriminated from mesophiles on the basis of genomic features. For example, the GC-content levels in the coding regions of some signature genes were consistently identified as correlated with the growth temperature range when association analysis was applied to mesophilic and thermophilic organisms, regardless of their phylogeny, oxygen requirement, salinity, or habitat conditions. [ 7 ] Fungi are the only group of organisms in the Eukaryota domain that can survive at temperature ranges of 50–60 °C. [ 8 ] Thermophilic fungi have been reported from a number of habitats, with most of them belonging to the fungal order Sordariales . [ 9 ] Thermophilic fungi have great biotechnological potential due to their ability to produce industrially relevant thermostable enzymes, in particular for the degradation of plant biomass. [ 10 ] Sulfolobus solfataricus and Sulfolobus acidocaldarius are hyperthermophilic Archaea . When these organisms are exposed to the DNA damaging agents UV irradiation , bleomycin or mitomycin C , species-specific cellular aggregation is induced. [ 11 ] [ 12 ] In S. acidocaldarius , UV-induced cellular aggregation mediates chromosomal marker exchange with high frequency. [ 12 ] Recombination rates exceed those of uninduced cultures by up to three orders of magnitude. Frols et al. [ 11 ] [ 13 ] and Ajon et al.
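As a toy illustration of the GC-content signature mentioned above: GC content is simply the fraction of G and C bases in a sequence, so it can be computed in a few lines (the sequence below is made up purely for the example):

```python
# GC content of a DNA sequence: the fraction of G and C bases.
# Elevated GC in certain signature genes correlates with thermophily,
# as described in the text. The example sequence is invented.
def gc_content(seq: str) -> float:
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

example = "ATGGCGCGTAAGGCCGGCTAA"
print(f"GC content: {gc_content(example):.1%}")
```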
[ 12 ] (2011) hypothesized that cellular aggregation enhances species-specific DNA transfer between Sulfolobus cells, providing increased repair of damaged DNA by means of homologous recombination. Van Wolferen et al., discussing DNA exchange in hyperthermophiles under extreme conditions, noted that DNA exchange likely plays a role in the repair of DNA via homologous recombination, and suggested that this process is crucial under DNA-damaging conditions such as high temperature. It has also been suggested that DNA transfer in Sulfolobus may be a primitive form of sexual interaction, similar to the better-studied bacterial transformation systems that are associated with species-specific DNA transfer between cells leading to homologous recombinational repair of DNA damage. [ 14 ]
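The genomic signature mentioned above, GC content of certain coding regions tracking growth temperature, rests on a metric that is simple to compute. A minimal sketch on made-up fragments (the published association used specific signature genes, not raw whole-genome GC content):

    def gc_content(seq: str) -> float:
        """Fraction of G and C bases in a DNA sequence."""
        seq = seq.upper()
        return (seq.count("G") + seq.count("C")) / len(seq)

    # Hypothetical fragments, not real signature genes:
    print(gc_content("ATGGCCGCGT"))   # 0.7
    print(gc_content("ATTAAGCTAT"))   # 0.2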
https://en.wikipedia.org/wiki/Thermophile
Thermophobia (adjective: thermophobic) is intolerance for high temperatures by either inorganic materials or organisms. [ 1 ] The term has a number of specific usages. In pharmacy, a thermophobic foam consisting of 0.1% betamethasone valerate was found to be at least as effective as conventional remedies for treating dandruff. In addition, the foam is non-greasy and does not irritate the scalp. [ 2 ] [ 3 ] Another use of thermophobic material is in treating hyperhidrosis of the axilla and the palm: a thermophobic foam named Bettamousse, developed by Mipharm, an Italian company, was found to treat hyperhidrosis effectively. [ 4 ] [ 5 ] In biology, some bacteria are thermophobic, such as Mycobacterium leprae, which causes leprosy. [ 6 ] In living organisms, a thermophobic response is a negative response to higher temperatures. In physics, thermophobia is the motion of particles in mixtures (solutions, suspensions, etc.) toward areas of lower temperature, a particular case of thermophoresis. [ 7 ] In medicine, thermophobia refers to a sensory dysfunction, a sensation of abnormal heat, which may be associated with, e.g., hyperthyroidism. [ 7 ] [ 8 ]
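For the physics sense above, a minimal steady-state sketch may help; it assumes the standard closed-cell relation grad(c)/c = -S_T grad(T), and the Soret coefficient value is illustrative only (measured values vary widely by particle and solvent):

    import math

    # Steady-state thermophoresis in a closed cell: grad(c)/c = -S_T * grad(T),
    # so c(hot) / c(cold) = exp(-S_T * (T_hot - T_cold)). S_T > 0 marks a
    # "thermophobic" solute that drifts toward the cold side.
    S_T = 0.05                      # Soret coefficient, 1/K (illustrative value)
    t_cold, t_hot = 293.0, 313.0    # wall temperatures, K
    print(math.exp(-S_T * (t_hot - t_cold)))   # ~0.37: depleted at the hot wall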
https://en.wikipedia.org/wiki/Thermophobia
A thermophone is a type of transducer that converts an electrical signal into heat, which then becomes sound. It can be thought of as a type of loudspeaker that uses heat fluctuations to produce sound instead of mechanical vibration. The basic principle of the thermophone has been known since the 19th century. [ 1 ] Thermophones have been used to calibrate acoustical apparatus (such as microphones) since the 20th century. [ 2 ] In recent years, the name thermoacoustic speaker has also been used. [ 3 ] Thermoacoustics, the study of the interaction between heat and sound, is the basis of the thermophone. Bryan Higgins in 1802 reported "singing flames", which occurred when the necks of jars were put over a hydrogen gas flame. Sondhauss (1850) and Rijke (1859) performed further experiments. A theory of thermoacoustics was produced by Lord Rayleigh in 1878. [ 4 ] The theory and practice of creating sound with electric heat emerged in the late 19th century. [ 5 ] In 1880, William Henry Preece observed that, upon connecting a microphone transmitter to a platinum wire, sounds were produced: [ 5 ] ...the sonorous effects were most marked and encouraging, when the microphone transmitter M was spoken into. The articulation, though muffled, was clear, and words could easily be heard. [ 5 ] In 1917, Harold D. Arnold and Irving B. Crandall [ de ] of Bell Labs developed a quantitative theory for the thermophone. [ 4 ] [ 6 ] Since then, thermophones have been used as a precision device for microphone calibration. However, they did not see widespread use elsewhere due to their poor efficiency. [ 4 ] When alternating current is passed through a thin conductor, the conductor periodically heats up and cools down following the variations in current strength. This periodic heating and cooling creates temperature waves which the conductor propagates into the surroundings. As the temperature waves propagate away from the conductor, the thermal expansion and contraction of the transmission medium (e.g., air) produces corresponding sound waves (see the sketch at the end of this entry). [ 6 ] An ideal thermophone is made of a conductor which is very thin and has a small heat capacity. [ 6 ] In 1999, Shinoda and others presented a porous doped-silicon thermophone for ultrasonic emission. [ 4 ] In 2008, Xiao et al. reported a thermophone made of carbon nanotubes. Since then, there has been a resurgence of research into thermophones and thermoacoustics. [ 4 ] New materials for thermophones are being explored, and thermophones have been created using VLSI processes (as used for integrated circuits). [ 4 ]
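One consequence of the Joule-heating mechanism described above is worth making concrete: with a pure AC drive and no DC bias, the heating rate I²R oscillates at twice the drive frequency, so an unbiased thermophone radiates sound at 2f. A minimal numerical check (the drive amplitude and resistance values are arbitrary):

    import numpy as np

    f = 1000.0                      # drive frequency, Hz
    R = 10.0                        # conductor resistance, ohm (arbitrary)
    t = np.linspace(0.0, 4.0 / f, 4000, endpoint=False)
    i = 0.1 * np.sin(2.0 * np.pi * f * t)    # pure AC drive, no DC bias
    p = i**2 * R                              # instantaneous Joule heating, W

    spec = np.abs(np.fft.rfft(p))
    freqs = np.fft.rfftfreq(len(t), t[1] - t[0])
    print(freqs[np.argmax(spec[1:]) + 1])     # 2000.0: heating, hence sound, at 2f

Superimposing a DC bias on the drive restores a heating component at the drive frequency itself, which is why practical thermophones are often operated with a bias current.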
https://en.wikipedia.org/wiki/Thermophone
Thermophotonics (often abbreviated TPX) is a concept for generating usable power from heat which shares some features of thermophotovoltaic (TPV) power generation. Thermophotonics was first publicly proposed by solar photovoltaic researcher Martin Green in 2000. However, no TPX device is known to have been demonstrated to date, apparently because of the stringent requirement on emitter efficiency. A TPX system consists of a light-emitting diode (LED) (though other types of emitters are conceivable), a photovoltaic (PV) cell, an optical coupling between the two, and an electronic control circuit. The LED is heated to a temperature higher than the PV temperature by an external heat source. If no power is applied to the LED, the system functions much like a very inefficient TPV system, but if a forward bias is applied at some fraction of the bandgap potential, an increased number of electron-hole pairs (EHPs) will be thermally excited to the bandgap energy. These EHPs can then recombine radiatively, so that the LED emits light at a rate higher than the thermal radiation rate ("superthermal" emission). This light is delivered to the cooler PV cell over the optical coupling and converted to electricity. The control circuit presents a load to the PV cell (presumably at the maximum power point) and converts this voltage to a level that can be used to sustain the bias of the emitter. Provided that the conversion efficiencies of electricity to light and light to electricity are sufficiently high, the power harnessed from the PV cell can exceed the power going into the bias circuit, and this small fraction of excess power (originating from the heat difference) can be utilized (a toy numerical illustration appears below). It is thus in some sense a photonic heat engine. Possible applications of thermophotonic generators include solar thermal electricity generation and utilization of waste heat. TPX systems may have the potential to generate power with useful levels of output at temperatures where only thermoelectric systems are now practical, but with higher efficiency. A patent application for a thermophotonic generator using a vacuum gap with thickness on the order of a micrometer or less was published by the US Patent Office in 2009 and assigned to MTPV Corporation of Austin, Texas, USA. This proposed variant of the technology allows better thermal insulation because of the gap between the hot emitter and cold receiver, while maintaining relatively good optical coupling between them because the gap is small relative to the optical wavelength.
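The power-balance argument above can be made concrete with a toy model. The function name and the efficiency values below are purely illustrative assumptions (no working TPX device exists, as noted above):

    # Toy power balance for a thermophotonic link; all numbers are illustrative.
    # eta_led: optical watts out per electrical watt of bias. Values above 1 are
    # possible in principle because a forward-biased LED can carry away heat
    # drawn from its hot lattice (electroluminescent cooling).
    # eta_pv: electrical watts recovered per optical watt reaching the PV cell.

    def net_power(p_bias_w: float, eta_led: float, eta_pv: float) -> float:
        p_optical = eta_led * p_bias_w       # light sent across the optical coupling
        p_recovered = eta_pv * p_optical     # electricity back out of the PV
        return p_recovered - p_bias_w        # any surplus originates from the heat source

    print(net_power(1.0, 1.30, 0.85))   # ~ +0.10 W surplus: eta_led * eta_pv > 1
    print(net_power(1.0, 1.05, 0.85))   # ~ -0.11 W: the link consumes power instead

The sign of the result captures the viability condition: the product of the two conversion efficiencies must exceed unity for any excess power to be harvested.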
https://en.wikipedia.org/wiki/Thermophotonics
Thermophotovoltaic (TPV) energy conversion is a direct conversion process from heat to electricity via photons. A basic thermophotovoltaic system consists of a hot object emitting thermal radiation and a photovoltaic cell similar to a solar cell but tuned to the spectrum being emitted from the hot object. [ 1 ] As TPV systems generally work with much cooler radiation sources than the sun, their efficiencies tend to be low. Offsetting this through the use of multi-junction cells based on non-silicon materials is common, but generally very expensive. This currently limits TPV to niche roles like spacecraft power and waste-heat collection from larger systems like steam turbines. Typical photovoltaics work by creating a p–n junction near the front surface of a thin semiconductor material. When photons above the bandgap energy of the material hit atoms within the bulk lower layer, below the junction, an electron is photoexcited and becomes free of its atom. The junction creates an electric field that accelerates the electron forward within the cell until it passes the junction and is free to move to the thin electrodes patterned on the surface. Connecting a wire from the front to the rear allows the electrons to flow back into the bulk and complete the circuit. [ 2 ] Photons with less energy than the bandgap do not eject electrons. Photons with energy above the bandgap eject higher-energy electrons, which tend to thermalize within the material and lose their extra energy as heat. If the cell's bandgap is raised, the emitted electrons will have higher energy when they reach the junction and thus produce a higher voltage, but this reduces the number of electrons emitted, as more photons fall below the bandgap energy, and thus generates a lower current. As electrical power is the product of voltage and current, there is a sweet spot where the total output is maximized (a toy calculation of this trade-off appears below). [ 3 ] Terrestrial solar radiation is typically characterized by a standard known as Air Mass 1.5, or AM1.5. This corresponds to about 1,000 W per square meter from a source with an apparent temperature of 5780 K. At this temperature, about half of all the energy reaching the surface is in the infrared. Based on this temperature, energy production is maximized when the bandgap is about 1.4 eV, in the near infrared. This happens to be close to the bandgap of doped silicon, at 1.1 eV, which makes solar PV inexpensive to produce. [ 3 ] This means that all of the energy in the infrared and below, about half of AM1.5, goes to waste. There has been continuing research into cells made of several different layers, each with a different bandgap and thus tuned to a different part of the solar spectrum. As of 2022, cells with overall efficiencies in the range of 40% are commercially available, although they are extremely expensive and have not seen widespread use outside of specific roles like powering spacecraft, where cost is not a significant consideration. [ 4 ] The same process of photoemission can be used to produce electricity from any spectrum, although the number of semiconductor materials with just the right bandgap for an arbitrary hot object is limited. Instead, semiconductors with tunable bandgaps are needed. It is also difficult to produce solar-like thermal output; an oxyacetylene torch is about 3400 K (~3126 °C), and more common commercial heat sources like coal and natural gas burn at much lower temperatures, around 900–1300 °C.
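A toy detailed-balance sweep (not from the source) illustrates both the sweet-spot argument above and why cooler sources favor narrower gaps. It scores a candidate gap by (gap energy) times (black-body photon flux above the gap) and ignores every real loss mechanism, so the optima it finds sit below the spectral-peak energies quoted later and should be read qualitatively:

    import numpy as np

    # Score a bandgap by (gap energy) x (photon flux above the gap) for a
    # black-body source: a crude stand-in for voltage x current.
    k_B, q = 1.381e-23, 1.602e-19   # Boltzmann constant, elementary charge

    def relative_power(e_gap_ev: float, t_emit_k: float) -> float:
        e = np.linspace(e_gap_ev, 10.0, 5000) * q            # photon energies, J
        flux = e**2 / (np.exp(e / (k_B * t_emit_k)) - 1.0)   # Planck photon flux, up to constants
        return e_gap_ev * np.trapz(flux, e)

    for t in (5780.0, 1573.0):   # the sun vs a ~1300 C combustion source
        gaps = np.linspace(0.2, 3.0, 281)
        best = gaps[np.argmax([relative_power(g, t) for g in gaps])]
        print(f"T = {t:.0f} K: best gap ~ {best:.2f} eV")

Even this crude score reproduces the key trend: the optimum gap scales roughly with source temperature, dropping from about 1.1 eV for a solar-temperature source to a few tenths of an eV for combustion-temperature sources.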
This further limits the suitable materials. In the case of TPV, most research has focused on gallium antimonide (GaSb), although germanium (Ge) is also suitable. [ 5 ] Another problem with lower-temperature sources is that their energy is more spread out, according to Wien's displacement law. While one can make a practical solar cell with a single bandgap tuned to the peak of the spectrum and simply ignore the losses in the IR region, doing the same with a lower-temperature source loses much more of the potential energy and results in very low overall efficiency. This means TPV systems almost always use multi-junction cells in order to reach reasonable double-digit efficiencies. Current research in the area aims at increasing system efficiency while keeping system cost low, [ 6 ] but even then their roles tend to be niches similar to those of multi-junction solar cells. TPV systems generally consist of a heat source, an emitter, and a waste-heat rejection system. The TPV cells are placed between the emitter, often a block of metal or similar, and the cooling system, often a passive radiator. PV systems in general operate at lower efficiency as the temperature increases, and in TPV systems, keeping the photovoltaic cool is a significant challenge. [ 7 ] This contrasts with a somewhat related concept, the "thermoradiative" or "negative emission" cells, in which the photodiode is on the hot side of the heat engine. [ 8 ] [ 9 ] Systems have also been proposed that use a thermoradiative device as an emitter in a TPV system, theoretically allowing power to be extracted from both a hot photodiode and a cold photodiode. [ 10 ] Conventional radioisotope thermoelectric generators (RTGs) used to power spacecraft use a radioactive material whose radiation heats a block of material and is then converted to electricity using a thermocouple. Thermocouples are very inefficient, and their replacement with TPV could offer significant improvements in efficiency and thus require a smaller and lighter RTG for a given mission. Experimental systems developed by Emcore (a multi-junction solar cell provider), Creare, Oak Ridge and NASA's Glenn Research Center demonstrated 15 to 20% efficiency. A similar concept developed by the University of Houston reached 30% efficiency, a 3- to 4-fold improvement over existing systems. [ 11 ] [ 5 ] Another area of active research is using TPV as the basis of a thermal storage system. In this concept, electricity generated in off-peak times is used to heat a large block of material, typically carbon or a phase-change material. The material is surrounded by TPV cells, which are in turn backed by a reflector and insulation. During storage, the TPV cells are turned off and the photons pass through them and reflect back into the high-temperature source. When power is needed, the TPV is connected to a load. [ citation needed ] TPV cells have been proposed as auxiliary power-conversion devices for capturing otherwise lost heat in other power generation systems, such as steam turbine systems or solar cells. [ citation needed ] Henry Kolm constructed an elementary TPV system at MIT in 1956. However, Pierre Aigrain is widely cited as the inventor based on lectures he gave at MIT between 1960 and 1961 which, unlike Kolm's system, led to research and development. [ 12 ] In the 1980s, efficiency reached close to 30%.
[ 13 ] [ 14 ] In 1997, the Vehicle Research Institute (VRI) at Western Washington University designed and built a prototype TPV hybrid car, the "Viking 29". [ 15 ] [ 16 ] In 2022, MIT/NREL announced a device with 41% efficiency. The absorber employed multiple III-V semiconductor layers tuned to absorb, variously, ultraviolet, visible, and infrared photons. A gold reflector recycled unabsorbed photons. The device operated at 2400 °C, the temperature at which the tungsten emitter reaches maximum brightness. [ 14 ] [ 17 ] In May 2024, researchers announced a device that achieved 44% efficiency using silicon carbide (SiC) as the heat-storage material and emitter. [ 18 ] At 1,435 °C (2,615 °F) the device radiates thermal photons at various energy levels, of which the semiconductor captures 20 to 30%. [ 19 ] Additional layers include air and a gold reflector layer. [ 19 ] The upper limit for efficiency in TPVs (and all systems that convert heat energy to work) is the Carnot efficiency, that of an ideal heat engine. This efficiency is given by $\eta_{\text{max}} = 1 - \frac{T_{\text{cell}}}{T_{\text{emit}}}$, where $T_{\text{cell}}$ is the temperature of the PV converter and $T_{\text{emit}}$ that of the emitter. Practical systems can achieve $T_{\text{cell}} \approx 300\text{ K}$ and $T_{\text{emit}} \approx 1800\text{ K}$, giving a maximum possible efficiency of ~83%. This assumes the PV converts the radiation into electrical energy without losses such as thermalization or Joule heating, though in reality photovoltaic inefficiency is quite significant. In real devices, as of 2021, the maximum demonstrated efficiency in the laboratory was 35% with an emitter temperature of 1,773 K. [ 20 ] This is the efficiency in terms of heat input being converted to electrical power. In complete TPV systems, a necessarily lower total system efficiency may be cited including the source of heat; for example, fuel-based TPV systems may report efficiencies in terms of fuel energy to electrical energy, in which case 5% is considered a "world record" level of efficiency. [ 21 ] Real-world efficiencies are reduced by such effects as heat-transfer losses, electrical conversion efficiency (TPV voltage outputs are often quite low), and losses due to active cooling of the PV cell. Deviations from perfect absorption and perfect black-body behavior lead to light losses. For selective emitters, any light emitted at wavelengths not matched to the bandgap energy of the photovoltaic may not be efficiently converted, reducing efficiency. In particular, emissions associated with phonon resonances are difficult to avoid for wavelengths in the deep infrared, which cannot practically be converted. An ideal emitter would emit no light at wavelengths other than at the bandgap energy, and much TPV research is devoted to developing emitters that better approximate this narrow emission spectrum. [ citation needed ] For black-body emitters or imperfect selective emitters, filters reflect non-ideal wavelengths back to the emitter. These filters are imperfect: any light that is absorbed or scattered and not redirected to the emitter or the converter is lost, generally as heat. Conversely, practical filters often reflect a small percentage of light in desired wavelength ranges. Both are inefficiencies. The absorption of suboptimal wavelengths by the photovoltaic device also contributes inefficiency and has the added effect of heating it, which also decreases efficiency. [ citation needed ]
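A one-line check of the Carnot bound quoted above (a sketch; real TPV losses dominate long before this limit matters):

    def carnot_limit(t_cell_k: float, t_emit_k: float) -> float:
        """Upper bound on heat-to-electricity conversion: 1 - T_cell / T_emit."""
        return 1.0 - t_cell_k / t_emit_k

    print(carnot_limit(300.0, 1800.0))   # 0.833..., the ~83% figure quoted above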
Even for systems where only light of optimal wavelengths is passed to the photovoltaic converter, inefficiencies associated with non-radiative recombination and Ohmic losses exist. There are also losses from Fresnel reflections at the PV surface, optimal-wavelength light that passes through the cell unabsorbed, and the energy difference between higher-energy photons and the bandgap energy (though this tends to be less significant than with solar PVs). Non-radiative recombination losses tend to become less significant as the light intensity increases, while they increase with increasing temperature, so real systems must consider the intensity produced by a given design and operating temperature. [ citation needed ] In an ideal system, the emitter is surrounded by converters so no light is lost. Realistically, geometries must accommodate the input energy (fuel injection or input light) used to heat the emitter. Additionally, costs have prohibited surrounding the filter with converters. When the emitter re-emits light, anything that does not travel to the converters is lost. Mirrors can be used to redirect some of this light back to the emitter; however, the mirrors may have their own losses. [ citation needed ] For black-body emitters where photon recirculation is achieved via filters, Planck's law states that a black body emits light with a spectrum given by $I'(\lambda, T_{\text{emit}}) = \frac{2hc^2}{\lambda^5} \frac{1}{e^{hc/\lambda k T_{\text{emit}}} - 1}$, where $I'$ is the light flux at a specific wavelength $\lambda$, given in units of $\mathrm{m^{-3}\,s^{-1}}$; $h$ is the Planck constant, $k$ is the Boltzmann constant, $c$ is the speed of light, and $T_{\text{emit}}$ is the emitter temperature. The light flux with wavelengths in a specific range can thus be found by integrating over the range. The peak wavelength is determined by the temperature $T_{\text{emit}}$ through Wien's displacement law, $\lambda_{\text{peak}} = \frac{b}{T_{\text{emit}}}$, where $b$ is Wien's displacement constant. For most materials, the maximum temperature at which an emitter can stably operate is about 1800 °C. This corresponds to an intensity that peaks at $\lambda \approx 1600\text{ nm}$, or an energy of ~0.75 eV. For more reasonable operating temperatures of 1200 °C, this drops to ~0.5 eV. These energies dictate the range of bandgaps needed for practical TPV converters (though the peak spectral power is slightly higher). Traditional PV materials such as Si (1.1 eV) and GaAs (1.4 eV) are substantially less practical for TPV systems, as the intensity of the black-body spectrum is low at these energies for emitters at realistic temperatures. [ citation needed ] Efficiency, temperature resistance, and cost are the three major factors for choosing a TPV emitter. Efficiency is determined by energy absorbed relative to incoming radiation. High-temperature operation is crucial because efficiency increases with operating temperature: as emitter temperature increases, black-body radiation shifts to shorter wavelengths, allowing more efficient absorption by photovoltaic cells. [ citation needed ] Polycrystalline silicon carbide (SiC) is the most commonly used emitter for burner TPVs. SiC is thermally stable to ~1700 °C. However, SiC radiates much of its energy in the long-wavelength regime, far lower in energy than even the narrowest-bandgap photovoltaic. Such radiation is not converted into electrical energy.
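The peak figures quoted above can be checked numerically, with the caveat that "peak" depends on convention: Wien's law locates the peak of the spectral energy density, while the peak of the photon flux (what a PV cell counts) sits at a longer wavelength, and the ~1600 nm / ~0.75 eV figure above falls between the two. A short sketch using the standard constants:

    # Spectral peaks for the emitter temperatures discussed above.
    B_ENERGY = 2.898e-3   # Wien constant (energy-density peak), m*K
    B_PHOTON = 3.670e-3   # analogous constant for the photon-flux peak, m*K

    for t_c in (1800.0, 1200.0):
        t_k = t_c + 273.15
        for label, b in (("energy peak", B_ENERGY), ("photon peak", B_PHOTON)):
            lam = b / t_k                       # peak wavelength, m
            e_ev = 1.23984e-6 / lam             # photon energy at that wavelength, eV
            print(f"{t_c:.0f} C {label}: ~{lam * 1e9:.0f} nm, ~{e_ev:.2f} eV")

At 1200 °C the photon-flux peak lands near 0.5 eV, matching the figure in the text.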
Much of this long-wavelength energy can nevertheless be recovered: non-absorbing selective filters in front of the PV [ 22 ] or mirrors deposited on the back side of the PV [ 23 ] can be used to reflect the long wavelengths back to the emitter, thereby recycling the unconverted energy. In addition, polycrystalline SiC is inexpensive. Tungsten is the most common refractory metal that can be used as a selective emitter. [ 24 ] It has higher emissivity in the visible and near-IR range, 0.45 to 0.47, and a low emissivity of 0.1 to 0.2 in the IR region. [ 25 ] The emitter is usually in the shape of a cylinder with a sealed bottom, which can be considered a cavity. The emitter is attached to the back of a thermal absorber such as SiC and maintains the same temperature. Emission occurs in the visible and near-IR range, which can be readily converted by the PV to electrical energy. However, compared to other metals, tungsten oxidizes more easily. Rare-earth oxides such as ytterbium oxide (Yb₂O₃) and erbium oxide (Er₂O₃) are the most commonly used selective emitters. These oxides emit a narrow band of wavelengths in the near-infrared region, allowing the emission spectrum to be tailored to better fit the absorbance characteristics of a particular PV material. The peak of the emission spectrum occurs at 1.29 eV for Yb₂O₃ and 0.827 eV for Er₂O₃. As a result, Yb₂O₃ can be used as a selective emitter for silicon cells and Er₂O₃ for GaSb or InGaAs. However, the slight mismatch between the emission peaks and the band gap of the absorber costs significant efficiency. Selective emission only becomes significant at 1100 °C and increases with temperature. Below 1700 °C, selective emission of rare-earth oxides is fairly low, further decreasing efficiency. Currently, 13% efficiency has been achieved with Yb₂O₃ and silicon PV cells. In general, selective emitters have had limited success. More often, filters are used with black-body emitters to pass wavelengths matched to the bandgap of the PV and reflect mismatched wavelengths back to the emitter. [ citation needed ] Photonic crystals allow precise control of electromagnetic wave properties. These materials give rise to the photonic bandgap (PBG). In the spectral range of the PBG, electromagnetic waves cannot propagate. Engineering these materials allows some ability to tailor their emission and absorption properties, allowing for more effective emitter design. Selective emitters with peaks at higher energy than the black-body peak (for practical TPV temperatures) allow for wider-bandgap converters. These converters are traditionally cheaper to manufacture and less temperature-sensitive. Researchers at Sandia Labs predicted a high-efficiency TPV system (34% of emitted light converted to electricity) based on a tungsten photonic-crystal emitter. [ 26 ] However, manufacturing of these devices is difficult and not commercially feasible. Early TPV work focused on the use of silicon. Silicon's commercial availability, low cost, scalability, and ease of manufacture make this material an appealing candidate. However, the relatively wide bandgap of Si (1.1 eV) is not ideal for use with a black-body emitter at lower operating temperatures. Calculations indicate that Si PVs are only feasible at temperatures much higher than 2000 K. No emitter has been demonstrated that can operate at these temperatures. These engineering difficulties led to the pursuit of lower-bandgap semiconductor PVs. [ citation needed ] Using selective radiators with Si PVs is still a possibility.
Selective radiators would eliminate high- and low-energy photons, reducing the heat generated. Ideally, selective radiators would emit no radiation beyond the band edge of the PV converter, increasing conversion efficiency significantly. No efficient TPVs have been realized using Si PVs. [ citation needed ] Early investigations into low-bandgap semiconductors focused on germanium (Ge). Ge has a bandgap of 0.66 eV, allowing for conversion of a much higher fraction of incoming radiation. However, poor performance was observed due to the high effective electron mass of Ge. Compared to III-V semiconductors, Ge's high electron effective mass leads to a high density of states in the conduction band and therefore a high intrinsic carrier concentration. As a result, Ge diodes have a high dark (saturation) current and therefore a low open-circuit voltage. In addition, surface passivation of germanium has proven difficult. [ citation needed ] The gallium antimonide (GaSb) PV cell, invented in 1989, [ 27 ] is the basis of most PV cells in modern TPV systems. GaSb is a III-V semiconductor with the zinc-blende crystal structure. The GaSb cell is a key development owing to its narrow bandgap of 0.72 eV. This allows GaSb to respond to light at longer wavelengths than a silicon solar cell, enabling higher power densities in conjunction with man-made emission sources. A solar cell with 35% efficiency was demonstrated using a bilayer PV with GaAs and GaSb, [ 27 ] setting the solar cell efficiency record. Manufacturing a GaSb PV cell is quite simple. Czochralski tellurium-doped n-type GaSb wafers are commercially available. Vapor-based zinc diffusion is carried out at elevated temperatures (~450 °C) to achieve p-type doping. Front and back electrical contacts are patterned using traditional photolithography techniques and an anti-reflective coating is deposited. Efficiencies are estimated at ~20% using a 1000 °C black-body spectrum. [ 28 ] The radiative limit for the efficiency of the GaSb cell in this setup is 52%. Indium gallium arsenide antimonide (InGaAsSb, $\mathrm{In}_x\mathrm{Ga}_{1-x}\mathrm{As}_y\mathrm{Sb}_{1-y}$) is a compound III-V semiconductor. The addition of GaAs allows for a narrower bandgap (0.5 to 0.6 eV), and therefore better absorption of long wavelengths. Specifically, the bandgap was engineered to 0.55 eV. With this bandgap, the compound achieved a photon-weighted internal quantum efficiency of 79% with a fill factor of 65% for a black body at 1100 °C. [ 29 ] This was for a device grown on a GaSb substrate by organometallic vapour-phase epitaxy (OMVPE). Devices have also been grown by molecular beam epitaxy (MBE) and liquid phase epitaxy (LPE). The internal quantum efficiencies (IQE) of these devices approach 90%, while devices grown by the other two techniques exceed 95%. [ 30 ] The largest problem with InGaAsSb cells is phase separation. Compositional inconsistencies throughout the device degrade its performance. When phase separation can be avoided, the IQE and fill factor of InGaAsSb approach theoretical limits in wavelength ranges near the bandgap energy. However, the V_oc/E_g ratio is far from ideal. [ 30 ] Current methods to manufacture InGaAsSb PVs are expensive and not commercially viable. Indium gallium arsenide (InGaAs) is a compound III-V semiconductor. It can be applied in two ways for use in TPVs. When lattice-matched to an InP substrate, InGaAs has a bandgap of 0.74 eV, no better than GaSb. Devices of this configuration have been produced with a fill factor of 69% and an efficiency of 15%.
[ 31 ] However, to absorb longer-wavelength photons, the bandgap may be engineered by changing the ratio of In to Ga. The range of bandgaps for this system is from about 0.4 to 1.4 eV. However, these different structures cause strain with the InP substrate. This can be controlled with graded layers of InGaAs of different compositions. This approach was used to develop a device with a quantum efficiency of 68% and a fill factor of 68%, grown by MBE. [ 29 ] This device had a bandgap of 0.55 eV, achieved in the compound $\mathrm{In}_{0.68}\mathrm{Ga}_{0.32}\mathrm{As}$. InGaAs is a well-developed material that can be lattice-matched perfectly to Ge, resulting in low defect densities. Ge as a substrate is a significant advantage over more expensive or harder-to-produce substrates. The InPAsSb quaternary alloy has been grown by both OMVPE and LPE. When lattice-matched to InAs, it has a bandgap in the range 0.3–0.55 eV. The benefits of such a low band gap have not been studied in depth, so cells incorporating InPAsSb have not been optimized and do not yet have competitive performance. The longest spectral response from an InPAsSb cell studied was 4.3 μm, with a maximum response at 3 μm. [ 30 ] For this and other low-bandgap materials, high IQE at long wavelengths is hard to achieve due to an increase in Auger recombination. PbSnSe/PbSrSe quantum-well materials, which can be grown by MBE on silicon substrates, have been proposed for low-cost TPV device fabrication. [ 32 ] These IV-VI semiconductor materials can have bandgaps between 0.3 and 0.6 eV. Their symmetric band structure and lack of valence-band degeneracy result in low Auger recombination rates, typically more than an order of magnitude smaller than those of III-V semiconductor materials of comparable bandgap. TPVs promise efficient and economically viable power systems for both military and commercial applications. Compared to traditional nonrenewable energy sources, burner TPVs have low NOx emissions and are virtually silent. Solar TPVs are a source of emission-free renewable energy. TPVs can be more efficient than PV systems owing to recycling of unabsorbed photons. However, losses at each energy-conversion step lower efficiency. When TPVs are used with a burner source, they provide on-demand energy. As a result, energy storage may not be needed. In addition, owing to the PV's proximity to the radiative source, TPVs can generate current densities 300 times that of conventional PVs. [ citation needed ] Battlefield dynamics require portable power. Conventional diesel generators are too heavy for use in the field. Scalability allows TPVs to be smaller and lighter than conventional generators. Also, TPVs have few emissions and are silent. Multifuel operation is another potential benefit. Investigations in the 1970s failed due to PV limitations. However, the GaSb photocell led to a renewed effort in the 1990s with improved results. In early 2001, JX Crystals delivered a TPV-based battery charger to the US Army that produced 230 W fueled by propane. This prototype utilized a SiC emitter operating at 1250 °C and GaSb photocells and was approximately 0.5 m tall. [ 33 ] The power source had an efficiency of 2.5%, calculated as the ratio of the power generated to the thermal energy of the fuel burned. This is too low for practical battlefield use. No portable TPV power sources have reached troop testing.
Converting spare electricity into heat for high-volume, long-term storage is under research at various companies, which claim that costs could be much lower than those of lithium-ion batteries. [ 14 ] Graphite may be used as the storage medium, with molten tin as the heat-transfer fluid, at temperatures around 2,000 °C. See LaPotin, A., Schulte, K. L., Steiner, M. A., et al., "Thermophotovoltaic efficiency of 40%", Nature 604, 287–291 (2022). [ 34 ] Space power-generation systems must provide consistent and reliable power without large amounts of fuel. As a result, solar and radioisotope fuels (extremely high power density and long lifetime) are ideal, and TPVs have been proposed for both. In the case of solar energy, orbital spacecraft may be better locations for the large and potentially cumbersome concentrators required for practical TPVs. However, because of weight considerations and the inefficiencies associated with TPVs' more complicated design, conventional PVs continue to dominate. [ citation needed ] The output of isotopes is thermal energy. In the past, thermoelectricity (direct thermal-to-electrical conversion with no moving parts) has been used because TPV efficiency was below the ~10% of thermoelectric converters. [ 35 ] Stirling engines have been deemed too unreliable, despite conversion efficiencies >20%. [ 36 ] However, with recent advances in small-bandgap PVs, TPVs are becoming more promising. A TPV radioisotope converter with 20% efficiency was demonstrated that uses a tungsten emitter heated to 1350 K, with tandem filters and a 0.6 eV bandgap InGaAs PV converter (cooled to room temperature). About 30% of the lost energy was due to the optical cavity and filters; the remainder was due to the efficiency of the PV converter. [ 36 ] Low-temperature operation of the converter is critical to the efficiency of TPV. Heating PV converters increases their dark current, thereby reducing efficiency. The converter is heated by the radiation from the emitter. In terrestrial systems it is reasonable to dissipate this heat without using additional energy via a heat sink. However, space is an isolated system in which heat sinks are impractical, so it is critical to develop innovative solutions to remove that heat efficiently. Both represent substantial challenges. [ 35 ] TPVs can provide continuous power to off-grid homes. Traditional PVs do not provide power during winter months and nighttime, while TPVs can utilize alternative fuels to augment solar-only production. The greatest advantage of TPV generators is cogeneration of heat and power. In cold climates, they can function as both a heater/stove and a power generator. JX Crystals developed a prototype TPV heating stove/generator that burns natural gas and uses a SiC source emitter operating at 1250 °C and GaSb photocells to output 25,000 BTU/hr (7.3 kW of heat) while simultaneously generating 100 W (1.4% efficiency). However, costs render it impractical. Combining a heater and a generator is called combined heat and power (CHP). Many TPV CHP scenarios have been theorized, but a study found that a generator using boiling coolant was the most cost-efficient. [ 37 ] The proposed CHP would utilize a SiC IR emitter operating at 1425 °C and GaSb photocells cooled by boiling coolant. The TPV CHP would output 85,000 BTU/hr (25 kW of heat) while generating 1.5 kW of electricity. The study quoted an efficiency of 12.3%, although the stated outputs imply an electrical fraction of only about 6% (1.5 kW / 25 kW), at an estimated cost of 0.08 €/kWh assuming a 20-year lifetime.
The estimated costs of other, non-TPV CHPs are 0.12 €/kWh for gas-engine CHP and 0.16 €/kWh for fuel-cell CHP. This furnace was not commercialized because the market was not thought to be large enough. TPVs have been proposed for use in recreational vehicles. Their ability to use multiple fuel sources makes them interesting as more sustainable fuels emerge. TPVs' silent operation would allow them to replace noisy conventional generators (e.g., during "quiet hours" in national park campgrounds). However, the emitter temperatures required for practical efficiencies make TPVs on this scale unlikely. [ 38 ]
https://en.wikipedia.org/wiki/Thermophotovoltaic_energy_conversion
A thermophyte (Greek thérmos = warmth, heat + phyton = plant) is an organism that tolerates or thrives at high temperatures. These organisms are categorized according to their ecological valence at high temperatures, including biological extremes, and include hot-spring taxa. [ 1 ] [ 2 ] Many thermophytes are algae, more specifically blue-green algae, also referred to as cyanobacteria. This type of algae thrives in hot conditions, from roughly 50 to 70 °C, [ 3 ] [ 4 ] in which other plants and organisms cannot survive. Thermophytes are able to survive extreme temperatures because their cells contain an "unorganized nucleus". As the name suggests, thermophytes are found in high-temperature environments. They can be found in abundance in and around freshwater hot springs, such as those in Yellowstone National Park and Lassen Volcanic National Park. There are instances in which a fungus and a plant become thermophytes by forming a symbiotic relationship with one another. [ 5 ] Some thermophytes live as symbioses of plants, fungi, and viruses. Mutualists like panic grass and its fungal partner cannot survive individually, but thrive in the symbiotic relationship: the fungus, plant, and virus function together to survive in such extreme conditions, each benefiting from the others. The fungus typically dwells in the spaces between the plant's cells. [ 6 ] In a study performed at Washington State, it was discovered that panic grass living near the hot springs in Yellowstone National Park thrives due to its relationship with the fungus Curvularia protuberata. [ 7 ] [ 8 ] Neither organism can survive on its own at such high temperatures. Mycoviruses infect the fungi that live within these plants and algae; these mycoviruses prevent the fungi from having a pathogenic effect on the plants, thus keeping the fungus from harming its host. [ 9 ] [ 10 ] The panic grass benefits because the fungus potentially helps disperse heat and alerts the plant to environmental stress so that it can activate its stress response. The study at Washington State has led to the discovery of a way to use these relationships between fungi and plants to make crops more thermo-tolerant, allowing them to resist damage by heat. The most famous of these ecological groups of organisms are:
https://en.wikipedia.org/wiki/Thermophyte