Agricultural wastewater treatment is a farm management practice for controlling pollution from confined animal operations and from surface runoff that may be contaminated by chemicals in fertilizer, pesticides, animal slurry, crop residues or irrigation water. Agricultural wastewater treatment is required for continuous confined animal operations like milk and egg production. It may be performed in plants using mechanized treatment units similar to those used for industrial wastewater. Where land is available for ponds, settling basins and facultative lagoons may have lower operational costs for seasonal use conditions from breeding or harvest cycles. Animal slurries are usually treated by containment in anaerobic lagoons before disposal by spray or trickle application to grassland. Constructed wetlands are sometimes used to facilitate treatment of animal wastes.
Nonpoint source pollution includes sediment runoff, nutrient runoff and pesticides. Point source pollution includes animal wastes, silage liquor, milking parlour (dairy farming) wastes, slaughtering waste, vegetable washing water and firewater. Many farms generate nonpoint source pollution from surface runoff which is not controlled through a treatment plant.
Farmers can install erosion controls to reduce runoff flows and retain soil on their fields. Common techniques include contour plowing, crop mulching, crop rotation, planting perennial crops and installing riparian buffers. Farmers can also develop and implement nutrient management plans to reduce excess application of nutrients and reduce the potential for nutrient pollution. To minimize pesticide impacts, farmers may use Integrated Pest Management (IPM) techniques (which can include biological pest control) to maintain control over pests, reduce reliance on chemical pesticides, and protect water quality.
== Nonpoint source pollution ==
Nonpoint source pollution from farms is caused by surface runoff from fields during rain storms. In many watersheds, agricultural runoff is a major source of pollution, and in some cases the only source.
=== Sediment runoff ===
Soil washed off fields is the largest source of agricultural pollution in the United States. Excess sediment causes high levels of turbidity in water bodies, which can inhibit growth of aquatic plants, clog fish gills and smother animal larvae.
Farmers may utilize erosion controls to reduce runoff flows and retain soil on their fields. Common techniques include:
contour plowing
crop mulching
crop rotation
planting perennial crops
installing riparian buffers.
=== Nutrient runoff ===
Nitrogen and phosphorus are key pollutants found in runoff, and they are applied to farmland in several ways, such as in the form of commercial fertilizer, animal manure, or municipal or industrial wastewater (effluent) or sludge. These chemicals may also enter runoff from crop residues, irrigation water, wildlife, and atmospheric deposition.
Farmers can develop and implement nutrient management plans to mitigate impacts on water quality by:
mapping and documenting fields, crop types, soil types, water bodies
developing realistic crop yield projections
conducting soil tests and nutrient analyses of manures and/or sludges applied
identifying other significant nutrient sources (e.g., irrigation water)
evaluating significant field features such as highly erodible soils, subsurface drains, and shallow aquifers
applying fertilizers, manures, and/or sludges based on realistic yield goals and using precision agriculture techniques.
=== Pesticides ===
Pesticides are widely used by farmers to control plant pests and enhance production, but chemical pesticides can also cause water quality problems. Pesticides may appear in surface water due to:
direct application (e.g. aerial spraying or broadcasting over water bodies)
runoff during rain storms
aerial drift (from adjacent fields).
Some pesticides have also been detected in groundwater.
Farmers may use Integrated Pest Management (IPM) techniques (which can include biological pest control) to maintain control over pests, reduce reliance on chemical pesticides, and protect water quality.
There are few safe ways of disposing of pesticide surpluses other than through containment in well managed landfills or by incineration. In some parts of the world, spraying on land is a permitted method of disposal.
== Point source pollution and treatment steps ==
Farms with large livestock and poultry operations, such as factory farms, can be a major source of point source wastewater. In the United States, these facilities are called concentrated animal feeding operations or confined animal feeding operations and are subject to increasing government regulation.
Antibiotic-resistant bacteria have been found to infiltrate the water cycle from farms. Raising animals accounts for 73% of antibiotics use globally, and wastewater treatment facilities can transfer antibiotic-resistant bacteria to humans.
=== Animal wastes ===
The constituents of animal wastewater typically include:
Strong organic content — much stronger than human sewage
High solids concentration
High nitrate and phosphorus content
Antibiotics
Synthetic hormones
Often high concentrations of parasites and their eggs
Oocysts of Cryptosporidium (a protozoan), which resist drinking water treatment processes
Cysts of Giardia
Human pathogenic bacteria such as Brucella and Salmonella
Animal wastes from cattle can be produced as solid or semisolid manure or as a liquid slurry. The production of slurry is especially common in housed dairy cattle.
Treatment
Whilst solid manure heaps outdoors can give rise to polluting wastewaters from runoff, this type of waste is usually relatively easy to treat by containment and/or covering of the heap.
Animal slurries require special handling and are usually treated by containment in lagoons before disposal by spray or trickle application to grassland. Constructed wetlands are sometimes used to facilitate treatment of animal wastes, as are anaerobic lagoons. Excessive application or application to sodden land or insufficient land area can result in direct runoff to watercourses, with the potential for causing severe pollution. Application of slurries to land overlying aquifers can result in direct contamination or, more commonly, elevation of nitrogen levels as nitrite or nitrate.
The disposal of any wastewater containing animal waste upstream of a drinking water intake can pose serious health problems to those drinking the water, because of the highly resistant cysts and oocysts of pathogens carried by many animals that are capable of causing disabling disease in humans. This risk exists even for very low-level seepage via shallow surface drains or from rainfall run-off.
Some animal slurries are treated by mixing with straw and composting at high temperature to produce a bacteriologically sterile and friable manure for soil improvement.
==== Piggery waste ====
Piggery waste is comparable to other animal wastes and is processed as for general animal waste, except that many piggery wastes contain elevated levels of copper that can be toxic in the natural environment. The liquid fraction of the waste is frequently separated off and re-used in the piggery to avoid the prohibitively expensive costs of disposing of copper-rich liquid. Ascarid worms and their eggs are also common in piggery waste and can infect humans if wastewater treatment is ineffective.
=== Silage liquor ===
Fresh or wilted grass or other green crops can be made into a semi-fermented product called silage which can be stored and used as winter forage for cattle and sheep. The production of silage often involves the use of an acid conditioner such as sulfuric acid or formic acid. The process of silage making frequently produces a yellow-brown strongly smelling liquid which is very rich in simple sugars, alcohol, short-chain organic acids and silage conditioner. This liquor is one of the most polluting organic substances known. The volume of silage liquor produced is generally in proportion to the moisture content of the ensiled material.
Treatment
Silage liquor is best treated through prevention by wilting crops well before silage making. Any silage liquor that is produced can be used as part of the food for pigs. The most effective treatment is by containment in a slurry lagoon and by subsequent spreading on land following substantial dilution with slurry. Containment of silage liquor on its own can cause structural problems in concrete pits because of the acidic nature of silage liquor.
=== Milking parlour (dairy farming) wastes ===
Although milk is an important food product, its presence in wastewaters is highly polluting because of its organic strength, which can lead to very rapid de-oxygenation of receiving waters. Milking parlour wastes also contain large volumes of wash-down water and some animal waste, together with cleaning and disinfection chemicals.
Treatment
Milking parlour wastes are often treated in admixture with human sewage in a local sewage treatment plant. This ensures that disinfectants and cleaning agents are sufficiently diluted and amenable to treatment. Running milking wastewaters into a farm slurry lagoon is a possible option although this tends to consume lagoon capacity very quickly. Land spreading is also a treatment option.
=== Slaughtering waste ===
Wastewater from slaughtering activities is similar to milking parlour waste (see above) although considerably stronger in its organic composition and therefore potentially much more polluting.
Treatment
As for milking parlour waste (see above).
=== Vegetable washing water ===
Washing of vegetables produces large volumes of water contaminated by soil and vegetable pieces. Low levels of pesticides used to treat the vegetables may also be present together with moderate levels of disinfectants such as chlorine.
Treatment
Most vegetable washing waters are extensively recycled with the solids removed by settlement and filtration. The recovered soil can be returned to the land.
=== Firewater ===
Although few farms plan for fires, fires are nevertheless more common on farms than on many other industrial premises. Stores of pesticides, herbicides, fuel oil for farm machinery and fertilizers can all help promote fire and can all be present in environmentally lethal quantities in firewater from fire fighting at farms.
Treatment
All farm environmental management plans should allow for containment of substantial quantities of firewater and for its subsequent recovery and disposal by specialist disposal companies. The concentration and mixture of contaminants in firewater make it unsuited to any treatment method available on the farm. Even land spreading has produced severe taste and odour problems for downstream water supply companies in the past.
== See also ==
Agricultural waste
Agricultural surface runoff
Dark fermentation
Sustainable agriculture
== References ==
== External links ==
Electronic Field Office Technical Guide Archived 2011-07-03 at the Wayback Machine - U.S. NRCS - Detailed soil conservation guides tailored to individual states/counties.
Modular construction is a construction technique which involves the prefabrication of 2D panels or 3D volumetric structures in off-site factories and transportation to construction sites for assembly. This process has the potential to be superior to traditional building in terms of both time and cost, with claimed completion times 20 to 50 percent faster than traditional building techniques.
It is estimated that by 2030, modular construction could deliver US$22 billion in annual cost savings for the US and European construction industry, helping fill the US$1.6 trillion productivity gap. The current need for standardized, repeatable 3D volumetric housing pre-fabricated units and designs for student accommodations, affordable housing and hotels is driving demand for modular construction.
== Advantages ==
In a 2018 Practice Note, the NEC states that the benefits obtained from offsite construction mainly relate to the creation of components in a factory setting, protected from the weather and using manufacturing techniques such as assembly lines with dedicated and specialist equipment. Through the use of appropriate technology, modular construction can:
increase the speed of construction by increasing the speed of manufacture of the component parts,
reduce waste,
increase economies of scale,
improve quality leading to reduction in the whole life costs of assets
reduce environmental impact such as dust and noise and
reduce accidents and ill health by reducing the amount of construction work taking place at site
== Disadvantages ==
In contrast to the benefits mentioned earlier, modular construction presents two significant obstacles:
Logistical challenges: The transportation of completed modules to the construction site demands meticulous organization and synchronization, often incurring substantial costs.
Constraints on size: The manufacturing and transportation procedures may place limitations on the dimensions of individual modular components. This can impact the room sizes in the building, potentially influencing the overall architectural design.
== Time ==
Modular construction has consistently been at least 20 percent faster than traditional on-site builds. Currently, the design process of modular construction projects tends to take longer than that of traditional building. This is because modular construction is a fairly new technology and few architects and engineers have experience working with it; in short, the industry has not yet learned how to work this way. Design firms are expected to develop module libraries which would assist in the automation of this process. These module libraries would hold various pre-designed 2D panels and 3D structures which would be digitally assembled to create standardized structures.
The foundations of a structure are a crucial part of its rigidity, and their magnitude and complexity vary with the size and overall weight of the structure. Because a prefabricated structure weighs less than a traditionally built house, its foundations can be smaller and faster to build.
Off-site manufacturing is the pinnacle of modular construction. The ability to coordinate and repeat activities in a factory, along with the increased help of automation, results in far faster manufacturing times than those of on-site building. A large time saver is the ability to work in parallel on the foundation of a structure and the manufacturing of the structure itself; this would be impossible with traditional construction. The on-site construction is radically simplified: the assembly of pre-fabricated components amounts to assembling the 3D modules and connecting the services to main site connections. A team of five workers can assemble up to six 3D modules, or the equivalent of 270 square meters of finished floor area, in a single work day.
=== Production algorithms ===
Because specialized technology is required to manufacture the components of modular construction, the prefabricated parts of modular buildings are produced by modular factories. To optimize time, modular factories consider the specifications and resources of the project and adapt a scheduling algorithm to fulfill the needs of this unique project. However, current scheduling methods assume the quantity of resources will never reach zero, therefore representing an unrealistic work cycle.
A modular factory handling a single project at any given point is rare, and would produce low returns. Hyun and Lee's research proposes a Genetic Algorithm (GA) scheduling model which takes into consideration various projects' characteristics and shares resources among them. The production sequence of this algorithm is largely determined by which modules need to be transported to which site and the dates they should arrive. After considering the variables of production, transportation and on-site assembly, the objective function is:
{\displaystyle \min \Sigma (S),\quad S=S_{i}+P_{i}-E_{i}}
where S_i is the number of stocked units on day i, P_i is the number of units produced on day i, and E_i is the number of units installed on day i. Production algorithms are continuously being developed to further accelerate the production of modular construction buildings, enlarging the time-saving gap with traditional construction methods.
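As a rough illustration, the objective can be evaluated for a candidate schedule as follows; the daily production and installation figures are invented, the stock balance S is interpreted as a running inventory, and the GA search over alternative schedules is omitted:

```python
import numpy as np

# Hypothetical daily figures for a five-day production window.
P = np.array([10, 10, 8, 12, 10])  # units produced per day
E = np.array([6, 9, 11, 10, 9])    # units shipped to site / installed per day

# Daily stock: previous day's stock plus production minus installations.
S = np.zeros_like(P)
S[0] = P[0] - E[0]
for i in range(1, len(P)):
    S[i] = S[i - 1] + P[i] - E[i]

# The GA would reorder production to minimize the total stocked units.
print(S, S.sum())
```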
== Cost ==
Modular construction can yield savings of up to 20 percent of the total project cost. However, there is also a risk of it increasing the cost by 10 percent; this occurs when the savings in labor are outweighed by increased costs in logistics and materials. The pre-fabrication of components used in modular construction has a higher logistics cost than traditional building: since the panels or 3D structures have to be manufactured in a factory and transported to the construction site, new variables which alter the flow of construction are introduced.
=== Transportation ===
Transportation of fabricated components is naturally more expensive than that of raw materials. For one, even a number of 2D panels stacked together are much harder to transport than the raw cement, wood or other materials used to build them. Panels run a high risk of suffering minor or major damage when being transported by land. If a panel were to be damaged, it would likely have to be replaced entirely: the factory would need to temporarily stop production of other panels to replace it, increasing the overall manufacturing hours, and therefore cost, as well as the transportation hours. Regardless, the transportation of 2D panels is still a good alternative to on-site construction.
Transportation reaches its peak cost when shipping 3D volumetric structures. While one square meter of 2D floor space costs approximately US$8 to transport 250 km, its equivalent in 3D floor space costs US$45. Adding the replacement cost if the structure is damaged during transport creates a large cost increase.
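As a worked example of these figures (assuming the quoted rates scale linearly with floor area), transporting 1,000 m² of finished floor space over 250 km costs roughly:

```python
RATE_2D = 8    # US$ per m² of floor space per 250 km, 2D panels (quoted above)
RATE_3D = 45   # US$ per m² of floor space per 250 km, 3D modules (quoted above)

area = 1000               # m² of finished floor space (illustrative)
print(area * RATE_2D)     # 8,000 US$ as flat-packed 2D panels
print(area * RATE_3D)     # 45,000 US$ as 3D volumetric modules
```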
=== Construction ===
Assembling components in a factory off-site means that workers can use the repeatability of the structures as well as automation to facilitate the manufacturing process. By standardizing the overall design of structures, work which would usually require expensive workers with specific skills (e.g. mechanical, electrical and plumbing) can be completed by low-cost manufacturers, decreasing total salary costs. As very little manufacturing occurs on-site, up to 80% of traditional labor activity can be moved off-site to the module factory. This leads to fewer sub-contractors being needed, further decreasing total salary costs. Overall, the larger the labor-intensive portion of a project, the larger the savings will be if modular construction is used.
Projects such as student accommodations, hotels and affordable housing are great candidates for modular construction. The repeatability of their structures leads to faster manufacturing times and therefore less overall cost. Meanwhile, if the project is (for example) a modern beach house with highly irregular wall spaces and ceilings, traditional construction methods may be preferable. As the industry continues to adapt and grow, these repeatable designs could one day be modified and adapted to fit all kinds of structures at decreased costs.
== Safety ==
Construction is considered to be one of the most dangerous industries: workers fall from heights, objects are dropped, muscles are strained and environmental hazards abound. Modular construction confines all manufacturing activities to a clean, ground-level space with fewer workers needed. It is estimated that reportable accidents are reduced by over 80% relative to site-intensive construction. When asked in a survey about safety management in the construction industry conducted by McGraw Hill Construction in 2013, 50% of the construction industry believed that pre-fabrication was safer than traditional on-site building, while only 4% said that prefabrication or modular construction had a negative impact on safety performance. Of the general and specialty contractors surveyed, 78% and 59% respectively said that the largest safety impact was the performance of complex tasks at ground level. According to the CDC, falling is the leading cause of work-related fatalities in construction, making up more than one in every three deaths in the industry. Reducing the heights at which workers need to perform tasks subsequently reduces the fatality risk they experience, greatly increasing the overall safety of the industry. Also, 69% of the general contractors as well as 69% of the specialty contractors mentioned that the reduced number of workers performing different tasks at the off-site factory also improved construction site safety. Overall, modular construction is safer for the following reasons:
Stable work location
Tasks are performed in ample spaces
Ground level assembly
Cover from harsh weather
Better monitoring of unsafe activities
30 to 50 percent reduction in time spent at construction site
Fewer personnel on-site
Modular construction is still not considered an entirely safe alternative. However, it does reduce accidents and fatalities by a significant amount, especially in the manufacturing process of a project: 48.1% of all accidents during on-site construction were fall-related, while only 9.1% of the accidents at manufacturing plants were from falls. Manufacturing plant workers were more likely to be struck by an object or equipment (37.1%), and fracture and amputation had the same injury type frequency at 27.3%. Nevertheless, as the construction industry continues to adapt and moves over to more sustainable construction methods like pre-fabricated modular construction, it is expected that the overall number of accidents at construction sites will decrease.
The use of modular construction methods is encouraged by proponents of Prevention through Design techniques in construction. It is included as a recommended hazard control for construction projects in the "PtD - Architectural Design and Construction Education Module" published by the National Institute for Occupational Safety and Health.
== Sustainability ==
Modular construction is a strong alternative to traditional construction in terms of the amount of waste each method produces. When a high-rise building was constructed in Wolverhampton using 824 modules, about 5% of the total weight of the construction was produced as waste, compared with traditional methods' 10–13% average. This difference may not seem like much for small structures; however, for a 100,000-square-foot building it is a significant percentage. Also, the number of on-site deliveries decreased by up to 70%; the deliveries are instead moved to the modular factory, where more material can be received. On-site noise pollution is greatly reduced as well: by moving the manufacturing process to an off-site factory, usually located outside of the city, neighboring buildings are not impacted as they would be with the traditional building process.
== Modular construction systems ==
Open-source and commercial hardware components used in modular construction include: open beams, bit beams, maker beams, grid beams, contraptors, OpenStructures components, etc. Space frame systems (such as Mero, Unistrut, Delta Structures, etc.) also tend to be modular in design. Other materials used in construction which are interlocking and thus reusable/modular in nature include interlocking bricks.
== See also ==
Open-source architecture – Emerging design paradigm emphasizing collaboration and ease of use
Commercial modular construction – Non-residential structures that are mostly built offsite
Modular building – Prefabricated building or house that consists of repeated sections
Mass production – High volume production of standardized products
Prefabricated building – Building constructed using prefabrication
== References ==
A complex system is a system composed of many components which may interact with each other. Examples of complex systems are Earth's global climate, organisms, the human brain, infrastructure such as power grids, transportation or communication systems, complex software and electronic systems, social and economic organizations (like cities), an ecosystem, a living cell, and, ultimately, for some authors, the entire universe.
The behavior of a complex system is intrinsically difficult to model due to the dependencies, competitions, relationships, and other types of interactions between its parts or between a given system and its environment. Systems that are "complex" have distinct properties that arise from these relationships, such as nonlinearity, emergence, spontaneous order, adaptation, and feedback loops, among others. Because such systems appear in a wide variety of fields, the commonalities among them have become the topic of an independent area of research. In many cases, it is useful to represent such a system as a network where the nodes represent the components and links represent their interactions.
The term complex systems often refers to the study of complex systems, which is an approach to science that investigates how relationships between a system's parts give rise to its collective behaviors and how the system interacts and forms relationships with its environment. The study of complex systems regards collective, or system-wide, behaviors as the fundamental object of study; for this reason, complex systems can be understood as an alternative paradigm to reductionism, which attempts to explain systems in terms of their constituent parts and the individual interactions between them.
As an interdisciplinary domain, complex systems draw contributions from many different fields, such as the study of self-organization and critical phenomena from physics, of spontaneous order from the social sciences, chaos from mathematics, adaptation from biology, and many others. Complex systems is therefore often used as a broad term encompassing a research approach to problems in many diverse disciplines, including statistical physics, information theory, nonlinear dynamics, anthropology, computer science, meteorology, sociology, economics, psychology, and biology.
== Types of systems ==
Complex systems can be:
Complex adaptive systems which have the capacity to change.
Polycentric systems: “where many elements are capable of making mutual adjustments for ordering their relationships with one another within a general system of rules where each element acts with independence of other elements”.
Disorganised systems involving localized interactions of multiple entities that do not form a coherent whole. Disorganised systems are linked to self-organisation processes.
Hierarchic systems which are analyzable into successive sets of subsystems. They can also be called nested or embedded systems.
Cybernetic systems involve information feedback loops.
== Key concepts ==
=== Adaptation ===
Complex adaptive systems are special cases of complex systems that are adaptive in that they have the capacity to change and learn from experience. Examples of complex adaptive systems include the international trade markets, social insect and ant colonies, the biosphere and the ecosystem, the brain and the immune system, the cell and the developing embryo, cities, manufacturing businesses and any human social group-based endeavor in a cultural and social system such as political parties or communities.
=== Decomposability ===
A system is decomposable if the parts of the system (subsystems) are independent from each other; for example, the model of a perfect gas considers the relations among molecules negligible.
In a nearly decomposable system, the interactions between subsystems are weak but not negligible, as is often the case in social systems. Conceptually, a system is nearly decomposable if the variables composing it can be separated into classes and subclasses, if these variables are independent for many functions but affect each other, and if the whole system is greater than the parts.
== Features ==
Complex systems may have the following features:
Complex systems may be open
Complex systems are usually open systems – that is, they exist in a thermodynamic gradient and dissipate energy. In other words, complex systems are frequently far from energetic equilibrium: but despite this flux, there may be pattern stability, see synergetics.
Complex systems may exhibit critical transitions
Critical transitions are abrupt shifts in the state of ecosystems, the climate, financial and economic systems or other complex systems that may occur when changing conditions pass a critical or bifurcation point. The 'direction of critical slowing down' in a system's state space may be indicative of a system's future state after such transitions when delayed negative feedbacks leading to oscillatory or other complex dynamics are weak.
Complex systems may be nested
The components of a complex system may themselves be complex systems. For example, an economy is made up of organisations, which are made up of people, which are made up of cells – all of which are complex systems. The arrangement of interactions within complex bipartite networks may be nested as well. More specifically, bipartite ecological and organisational networks of mutually beneficial interactions were found to have a nested structure. This structure promotes indirect facilitation and a system's capacity to persist under increasingly harsh circumstances as well as the potential for large-scale systemic regime shifts.
Dynamic network of multiplicity
As well as coupling rules, the dynamic network of a complex system is important. Small-world or scale-free networks which have many local interactions and a smaller number of inter-area connections are often employed. Natural complex systems often exhibit such topologies. In the human cortex for example, we see dense local connectivity and a few very long axon projections between regions inside the cortex and to other brain regions.
May produce emergent phenomena
Complex systems may exhibit behaviors that are emergent, which is to say that while the results may be sufficiently determined by the activity of the systems' basic constituents, they may have properties that can only be studied at a higher level. For example, empirical food webs display regular, scale-invariant features across aquatic and terrestrial ecosystems when studied at the level of clustered 'trophic' species. Another example is offered by the termites in a mound which have physiology, biochemistry and biological development at one level of analysis, whereas their social behavior and mound building is a property that emerges from the collection of termites and needs to be analyzed at a different level.
Relationships are non-linear
In practical terms, this means a small perturbation may cause a large effect (see butterfly effect), a proportional effect, or even no effect at all. In linear systems, the effect is always directly proportional to cause. See nonlinearity.
Relationships contain feedback loops
Both negative (damping) and positive (amplifying) feedback are always found in complex systems. The effects of an element's behavior are fed back in such a way that the element itself is altered.
== History ==
In 1948, Dr. Warren Weaver published an essay on "Science and Complexity", exploring the diversity of problem types by contrasting problems of simplicity, disorganized complexity, and organized complexity. Weaver described these as "problems which involve dealing simultaneously with a sizable number of factors which are interrelated into an organic whole."
While the explicit study of complex systems dates at least to the 1970s, the first research institute focused on complex systems, the Santa Fe Institute, was founded in 1984. Early Santa Fe Institute participants included physics Nobel laureates Murray Gell-Mann and Philip Anderson, economics Nobel laureate Kenneth Arrow, and Manhattan Project scientists George Cowan and Herb Anderson. Today, there are over 50 institutes and research centers focusing on complex systems.
Since the late 1990s, the interest of mathematical physicists in researching economic phenomena has been on the rise. The proliferation of cross-disciplinary research with the application of solutions originated from the physics epistemology has entailed a gradual paradigm shift in the theoretical articulations and methodological approaches in economics, primarily in financial economics. The development has resulted in the emergence of a new branch of discipline, namely "econophysics", which is broadly defined as a cross-discipline that applies statistical physics methodologies which are mostly based on the complex systems theory and the chaos theory for economics analysis.
The 2021 Nobel Prize in Physics was awarded to Syukuro Manabe, Klaus Hasselmann, and Giorgio Parisi for their work to understand complex systems. Their work was used to create more accurate computer models of the effect of global warming on the Earth's climate.
== Applications ==
=== Complexity in practice ===
The traditional approach to dealing with complexity is to reduce or constrain it. Typically, this involves compartmentalization: dividing a large system into separate parts. Organizations, for instance, divide their work into departments that each deal with separate issues. Engineering systems are often designed using modular components. However, modular designs become susceptible to failure when issues arise that bridge the divisions.
=== Complexity of cities ===
Jane Jacobs described cities as being a problem in organized complexity in 1961, citing Dr. Weaver's 1948 essay. As an example, she explains how an abundance of factors interplay into how various urban spaces lead to a diversity of interactions, and how changing those factors can change how the space is used, and how well the space supports the functions of the city. She further illustrates how cities have been severely damaged when approached as a problem in simplicity by replacing organized complexity with simple and predictable spaces, such as Le Corbusier's "Radiant City" and Ebenezer Howard's "Garden City". Since then, others have written at length on the complexity of cities.
=== Complexity economics ===
Over the last decades, within the emerging field of complexity economics, new predictive tools have been developed to explain economic growth. Such is the case with the models built by the Santa Fe Institute in 1989 and the more recent economic complexity index (ECI), introduced by the MIT physicist Cesar A. Hidalgo and the Harvard economist Ricardo Hausmann.
Recurrence quantification analysis has been employed to detect the characteristics of business cycles and economic development. To this end, Orlando et al. developed the so-called recurrence quantification correlation index (RQCI) to test correlations of RQA on a sample signal and then investigated the application to business time series. The said index has been proven to detect hidden changes in time series. Further, Orlando et al., over an extensive dataset, showed that recurrence quantification analysis may help in anticipating transitions from laminar (i.e. regular) to turbulent (i.e. chaotic) phases such as USA GDP in 1949, 1953, etc. Last but not least, it has been demonstrated that recurrence quantification analysis can detect differences between macroeconomic variables and highlight hidden features of economic dynamics.
=== Complexity and education ===
Focusing on issues of student persistence with their studies, Forsman, Moll and Linder explore the "viability of using complexity science as a frame to extend methodological applications for physics education research", finding that "framing a social network analysis within a complexity science perspective offers a new and powerful applicability across a broad range of PER topics".
=== Complexity in healthcare research and practice ===
Healthcare systems are prime examples of complex systems, characterized by interactions among diverse stakeholders, such as patients, providers, policymakers, and researchers, across various sectors like health, government, community, and education. These systems demonstrate properties like non-linearity, emergence, adaptation, and feedback loops. Complexity science in healthcare frames knowledge translation as a dynamic and interconnected network of processes—problem identification, knowledge creation, synthesis, implementation, and evaluation—rather than a linear or cyclical sequence. Such approaches emphasize the importance of understanding and leveraging the interactions within and between these processes and stakeholders to optimize the creation and movement of knowledge. By acknowledging the complex, adaptive nature of healthcare systems, complexity science advocates for continuous stakeholder engagement, transdisciplinary collaboration, and flexible strategies to effectively translate research into practice.
=== Complexity and biology ===
Complexity science has been applied to living organisms, and in particular to biological systems. Within the emerging field of fractal physiology, bodily signals, such as heart rate or brain activity, are characterized using entropy or fractal indices. The goal is often to assess the state and the health of the underlying system, and diagnose potential disorders and illnesses.
=== Complexity and chaos theory ===
Complex systems theory is related to chaos theory, which in turn has its origins more than a century ago in the work of the French mathematician Henri Poincaré. Chaos is sometimes viewed as extremely complicated information, rather than as an absence of order. Chaotic systems remain deterministic, though their long-term behavior can be difficult to predict with any accuracy. With perfect knowledge of the initial conditions and the relevant equations describing the chaotic system's behavior, one can theoretically make perfectly accurate predictions of the system, though in practice this is impossible to do with arbitrary accuracy.
The emergence of complex systems theory shows a domain between deterministic order and randomness which is complex. This is referred to as the "edge of chaos".
When one analyzes complex systems, sensitivity to initial conditions, for example, is not an issue as important as it is within chaos theory, in which it prevails. As stated by Colander, the study of complexity is the opposite of the study of chaos. Complexity is about how a huge number of extremely complicated and dynamic sets of relationships can generate some simple behavioral patterns, whereas chaotic behavior, in the sense of deterministic chaos, is the result of a relatively small number of non-linear interactions. For recent examples in economics and business see Stoop et al. who discussed Android's market position, Orlando who explained the corporate dynamics in terms of mutual synchronization and chaos regularization of bursts in a group of chaotically bursting cells and Orlando et al. who modelled financial data (Financial Stress Index, swap and equity, emerging and developed, corporate and government, short and long maturity) with a low-dimensional deterministic model.
Therefore, the main difference between chaotic systems and complex systems is their history. Chaotic systems do not rely on their history as complex ones do. Chaotic behavior pushes a system in equilibrium into chaotic order, which means, in other words, out of what we traditionally define as 'order'. On the other hand, complex systems evolve far from equilibrium at the edge of chaos. They evolve at a critical state built up by a history of irreversible and unexpected events, which physicist Murray Gell-Mann called "an accumulation of frozen accidents". In a sense chaotic systems can be regarded as a subset of complex systems distinguished precisely by this absence of historical dependence. Many real complex systems are, in practice and over long but finite periods, robust. However, they do possess the potential for radical qualitative change of kind whilst retaining systemic integrity. Metamorphosis serves as perhaps more than a metaphor for such transformations.
=== Complexity and network science ===
A complex system is usually composed of many components and their interactions. Such a system can be represented by a network where nodes represent the components and links represent their interactions. For example, the Internet can be represented as a network composed of nodes (computers) and links (direct connections between computers). Other examples of complex networks include social networks, financial institution interdependencies, airline networks, and biological networks.
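A toy sketch of this representation, assuming the Python `networkx` library; the nodes and links below are invented:

```python
import networkx as nx

# Components as nodes, interactions as links (an invented miniature internet).
g = nx.Graph()
g.add_edges_from([
    ("server A", "router 1"),
    ("server B", "router 1"),
    ("router 1", "router 2"),
    ("router 2", "laptop"),
])

print(g.degree("router 1"))                       # interactions of one component
print(nx.shortest_path(g, "laptop", "server A"))  # a chain of interactions
```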
== Notable scholars ==
== See also ==
== References ==
== Further reading ==
Complexity Explained.
L.A.N. Amaral and J.M. Ottino, Complex networks – augmenting the framework for the study of complex systems, 2004.
Chu, D.; Strand, R.; Fjelland, R. (2003). "Theories of complexity". Complexity. 8 (3): 19–30. Bibcode:2003Cmplx...8c..19C. doi:10.1002/cplx.10059.
Walter Clemens, Jr., Complexity Science and World Affairs, SUNY Press, 2013.
Gell-Mann, Murray (1995). "Let's Call It Plectics". Complexity. 1 (5): 3–5. Bibcode:1996Cmplx...1e...3G. doi:10.1002/cplx.6130010502.
A. Gogolin, A. Nersesyan and A. Tsvelik, Theory of strongly correlated systems, Cambridge University Press, 1999.
Nigel Goldenfeld and Leo P. Kadanoff, Simple Lessons from Complexity Archived 2017-09-28 at the Wayback Machine, 1999
Kelly, K. (1995). Out of Control, Perseus Books Group.
Orlando, Giuseppe; Pisarchik, Alexander; Stoop, Ruedi (2021). Nonlinearities in Economics. Dynamic Modeling and Econometrics in Economics and Finance. Vol. 29. doi:10.1007/978-3-030-70982-2. ISBN 978-3-030-70981-5. S2CID 239756912.
Syed M. Mehmud (2011), A Healthcare Exchange Complexity Model
Preiser-Kapeller, Johannes, "Calculating Byzantium. Social Network Analysis and Complexity Sciences as tools for the exploration of medieval social dynamics". August 2010
Donald Snooks, Graeme (2008). "A general theory of complex living systems: Exploring the demand side of dynamics". Complexity. 13 (6): 12–20. Bibcode:2008Cmplx..13f..12S. doi:10.1002/cplx.20225.
Stefan Thurner, Peter Klimek, Rudolf Hanel: Introduction to the Theory of Complex Systems, Oxford University Press, 2018, ISBN 978-0198821939
SFI @30, Foundations & Frontiers (2014).
== External links ==
"The Open Agent-Based Modeling Consortium".
"Complexity Science Focus". Archived from the original on 2017-12-05. Retrieved 2017-09-22.
"Santa Fe Institute".
"The Center for the Study of Complex Systems, Univ. of Michigan Ann Arbor". Archived from the original on 2017-12-13. Retrieved 2017-09-22.
"INDECS". (Interdisciplinary Description of Complex Systems)
"Introduction to Complexity – Free online course by Melanie Mitchell". Archived from the original on 2018-08-30. Retrieved 2018-08-29.
Jessie Henshaw (October 24, 2013). "Complex Systems". Encyclopedia of Earth.
Complex systems in scholarpedia.
Complex Systems Society
(Australian) Complex systems research network.
Complex Systems Modeling based on Luis M. Rocha, 1999.
CRM Complex systems research group
The Center for Complex Systems Research, Univ. of Illinois at Urbana-Champaign
Institute for Cross-Disciplinary Physics and Complex Systems (IFISC)
The Waste Input-Output (WIO) model is an extension of the environmentally extended input-output (EEIO) model. It enhances the traditional Input-Output (IO) model by incorporating physical flows of waste generation and treatment alongside the monetary flows of products and services.
In a WIO model, each waste flow is traced from its generation to its treatment, facilitated by an allocation matrix.
Additionally, the model accounts for the transformation of waste during treatment into secondary waste and residues, as well as recycling and final disposal processes.
By including the end-of-life (EoL) stage of products, the WIO model enables a comprehensive consideration of the entire product life cycle, encompassing production, use, and disposal stages within the IO analysis framework. As such, it serves as a valuable tool for life cycle assessment (LCA).
== Background ==
With growing concerns about environmental issues, the EEIO model evolved from the conventional IO model by appending environmental factors such as resources, emissions, and waste. The standard EEIO model, which includes the economic input-output life-cycle assessment (EIO-LCA) model, can be formally expressed as follows:

{\displaystyle E=F(I-A)^{-1}y\qquad (0)}

Here, A represents the square matrix of input coefficients, F denotes releases (such as emissions or waste) per unit of output, also called the intervention matrix, y stands for the vector of final demand (or functional unit), I is the identity matrix, and E represents the resulting releases (for further details, refer to the input-output model). A model in which F represents the generation of waste per unit of output is known as a Waste Extended IO (WEIO) model; in this model, waste generation is included as a satellite account.
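A minimal numerical sketch of equation (0) for a hypothetical two-sector economy (all coefficient values are invented for illustration):

```python
import numpy as np

A = np.array([[0.2, 0.3],
              [0.1, 0.4]])   # input coefficients: inputs per unit of output
F = np.array([[0.5, 1.2]])   # releases (e.g. kg CO2) per unit of output
y = np.array([100.0, 50.0])  # final demand

x = np.linalg.solve(np.eye(2) - A, y)  # total output, x = (I - A)^{-1} y
E = F @ x                              # resulting releases, E = F (I - A)^{-1} y
print(x, E)
```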
However, this formulation, while well-suited for handling emissions or resource use, encounters challenges when dealing with waste. It overlooks the crucial point that waste typically undergoes treatment before recycling or final disposal, leading to a form less harmful to the environment. Additionally, the treatment of emissions results in residues that require proper handling for recycling or final disposal (for instance, the pollution abatement process of sulfur dioxide involves its conversion into gypsum or sulfuric acid). Leontief's pioneering pollution abatement IO model did not address this aspect, whereas Duchin later incorporated it in a simplified illustrative case of wastewater treatment.
In waste management, it is common for various treatment methods to be applicable to a single type of waste. For instance, organic waste might undergo landfilling, incineration, gasification, or composting. Conversely, a single treatment process may be suitable for various types of waste; for example, solid waste of any type can typically be disposed of in a landfill. Formally, this implies that there is no one-to-one correspondence between treatment methods and types of waste.
A theoretical drawback of the Leontief-Duchin EEIO model is that it considers only cases where this one-to-one correspondence between treatment methods and types of waste applies, which makes the model difficult to apply to real waste management issues. The WIO model addresses this weakness by introducing a general mapping between treatment methods and types of waste, establishing a highly adaptable link between waste and treatment. This results in a model that is applicable to a wide range of real waste management issues.
== The Methodology ==
We describe below the major features of the WIO model in its relationship to the Leontief-Duchin EEIO model, starting with notations.
Let there be n_P producing sectors (each producing a single primary product), n_T waste treatment sectors, and n_W waste categories. Now, let's define the matrices and variables:
Z_P: an n_P × n_P matrix representing the flow of products among producing sectors.
W_P^+: an n_W × n_P matrix representing the generation of wastes from producing sectors. Typical examples include animal waste from livestock, slag from steel mills, sludge from paper mills and the chemical industry, and metal scrap from manufacturing processes.
W_P^-: an n_W × n_P matrix representing the use (recycling) of wastes by producing sectors. Typical examples include the use of animal waste in fertilizer production and iron scrap in steel production based on an electric arc furnace.
W_P: an n_W × n_P matrix representing the net flow of wastes: W_P = W_P^+ − W_P^-.
Z_T: an n_P × n_T matrix representing the flow of products into waste treatment sectors.
W_T: an n_W × n_T matrix representing the net generation of (secondary) waste in waste treatment sectors: W_T = W_T^+ − W_T^-, where W_T^+ and W_T^- are defined similarly to W_P^+ and W_P^-. Typical examples of W_T^+ include ashes generated from incineration processes, sludge produced during wastewater treatment, and residues derived from automobile shredding facilities.
y_P: an n_P × 1 vector representing the final demand for products.
w_Y: an n_W × 1 vector representing the generation of waste from final demand sectors, such as the generation of kitchen waste and end-of-life consumer appliances.
x_P: an n_P × 1 vector representing the quantity of products produced.
w: an n_W × 1 vector representing the quantity of waste for treatment.
It is important to note that variables with Z or x pertain to conventional components found in an IO table and are measured in monetary units. Conversely, variables with W or w typically do not appear explicitly in an IO table and are measured in physical units.
=== The balance of goods and waste ===
Using the notations introduced above, we can represent the supply and demand balance between products and waste for treatment by the following system of equations:

{\displaystyle {\begin{aligned}Z_{P}\iota _{P}+Z_{T}\iota _{T}+y_{P}&=x_{P}\\W_{P}\iota _{P}+W_{T}\iota _{T}+w_{Y}&=w\end{aligned}}\qquad (1)}

Here, ι_P denotes a vector of ones (n_P × 1) used for summing the rows of Z_P, and similar definitions apply to the other ι terms. The first line pertains to the standard balance of goods and services, with the left-hand side referring to demand and the right-hand side to supply. Similarly, the second line refers to the balance of waste, where the left-hand side signifies the generation of waste for treatment, and the right-hand side denotes the waste designated for treatment. It is important to note that increased recycling reduces the amount of waste for treatment w.
=== The IO model with waste and waste treatment ===
We now define the input coefficient matrices A and the waste generation coefficient matrices G as follows:
{\displaystyle {\begin{aligned}A_{P}=Z_{P}{\hat {x}}_{P}^{-1},\quad A_{T}=Z_{T}{\hat {x}}_{T}^{-1},\quad G_{P}=W_{P}{\hat {x}}_{P}^{-1},\quad G_{T}=W_{T}{\hat {x}}_{T}^{-1}.\end{aligned}}}
Here, v̂ refers to a diagonal matrix where the (i, i) element is the i-th element of a vector v.
Using A and G as derived above, the balance (1) can be represented as:

{\displaystyle {\begin{aligned}A_{P}x_{P}+A_{T}x_{T}+y_{P}&=x_{P}\\G_{P}x_{P}+G_{T}x_{T}+w_{Y}&=w\end{aligned}}\qquad (2)}
This equation (2) represents the Duchin-Leontief environmental IO model, an extension of the original Leontief model of pollution abatement to account for the generation of secondary waste. It is important to note that this system of equations is generally unsolvable due to the presence of x_T on the left-hand side and w on the right-hand side, resulting in an asymmetry. This asymmetry poses a challenge for solving the equation. However, the Duchin-Leontief environmental IO model addresses this issue by introducing a simplifying assumption:

{\displaystyle x_{T}=w\qquad (3)}
This assumption (3) implies that a single treatment sector exclusively treats each waste. For instance, waste plastics are either landfilled or incinerated but not both simultaneously. While this assumption simplifies the model and enhances computational feasibility, it may not fully capture the complexities of real-world waste management scenarios. In reality, various treatment methods can be applied to a given waste; for example, organic waste might be landfilled, incinerated, or composted. Therefore, while the assumption facilitates computational tractability, it might oversimplify the actual waste management processes.
=== The WIO model ===
Nakamura and Kondo addressed the above problem by introducing the allocation matrix S of order n_T × n_W that assigns waste to treatment processes:

{\displaystyle x_{T}=Sw\qquad (4)}
Here, the element S_kl of S represents the proportion of waste l treated by treatment k. Since waste must be treated in some manner (even if illegally dumped, which can be considered a form of treatment), we have:
{\displaystyle {\iota _{T}}^{'}S={\iota _{w}}^{'}.}
Here, ′ stands for the transpose operator.
Note that the allocation matrix S is essential for deriving x_T from w.
The simplifying condition (3) corresponds to the special case where n_T = n_W and S is a unit matrix.
As an illustration, consider an allocation matrix S for seven waste types and three treatment processes, as sketched below. Note that S represents the allocation of waste for treatment, that is, the portion of waste that is not recycled.
The application of the allocation matrix S transforms equation (2) into the following form:

{\displaystyle {\begin{aligned}A_{P}x_{P}+A_{T}x_{T}+y_{P}&=x_{P}\\SG_{P}x_{P}+SG_{T}x_{T}+Sw_{Y}&=x_{T}\end{aligned}}\qquad (5)}

Note that, different from (2), the variable x_T occurs on both sides of the equation. This system of equations is thus solvable (provided the inverse exists), with the solution given by:
{\displaystyle {\begin{aligned}{\begin{pmatrix}x_{P}\\x_{T}\end{pmatrix}}={\begin{pmatrix}I-A_{P}&-A_{T}\\-SG_{P}&I-SG_{T}\end{pmatrix}}^{-1}{\begin{pmatrix}y_{P}\\Sw_{Y}\end{pmatrix}}.\end{aligned}}}
The WIO counterpart of the standard EEIO model of emissions, represented by equation (0), can be formulated as follows:

{\displaystyle E={\begin{pmatrix}F_{P}&F_{T}\end{pmatrix}}{\begin{pmatrix}I-A_{P}&-A_{T}\\-SG_{P}&I-SG_{T}\end{pmatrix}}^{-1}{\begin{pmatrix}y_{P}\\Sw_{Y}\end{pmatrix}}\qquad (6)}
Here, F_P represents emissions per output from production sectors, and F_T denotes emissions from waste treatment sectors.
Upon comparison of equation (6) with equation (0), it becomes clear that the former expands upon the latter by incorporating factors related to waste and waste treatment.
Finally, the amount of waste for treatment induced by the final demand sector can be given by:

{\displaystyle w={\begin{pmatrix}G_{P}&G_{T}\end{pmatrix}}{\begin{pmatrix}x_{P}\\x_{T}\end{pmatrix}}+w_{Y}\qquad (7)}
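A compact numerical sketch of equations (5) to (7), with two producing sectors, two treatments and two waste types; every coefficient below is invented for illustration:

```python
import numpy as np

A_P = np.array([[0.2, 0.1], [0.1, 0.3]])   # product inputs per product output
A_T = np.array([[0.1, 0.0], [0.0, 0.2]])   # product inputs per treatment activity
G_P = np.array([[0.3, 0.1], [0.0, 0.2]])   # net waste per unit of product output
G_T = np.array([[0.0, 0.1], [0.05, 0.0]])  # net secondary waste per treatment
S   = np.array([[0.6, 0.0], [0.4, 1.0]])   # allocation of wastes to treatments

y_P = np.array([100.0, 50.0])  # final demand for products
w_Y = np.array([10.0, 5.0])    # waste generated by final demand

# Equation (5): solve the WIO quantity model for x_P and x_T.
I = np.eye(2)
M = np.block([[I - A_P, -A_T],
              [-S @ G_P, I - S @ G_T]])
x = np.linalg.solve(M, np.concatenate([y_P, S @ w_Y]))
x_P, x_T = x[:2], x[2:]

# Equation (7): waste for treatment induced by final demand.
w = G_P @ x_P + G_T @ x_T + w_Y

# Equation (6): emissions, with invented emission intensities.
F_P = np.array([0.5, 0.2])  # e.g. t CO2 per unit of product output
F_T = np.array([0.1, 0.8])  # e.g. t CO2 per unit of treatment activity
E = F_P @ x_P + F_T @ x_T
print(x_P, x_T, w, E)
```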
=== The Supply and Use Extension (WIO-SUT) ===
In the WIO model (5), waste flows are categorized based solely on treatment method, without considering the waste type. Manfred Lenzen addressed this limitation by allowing both waste by type and waste by treatment method to be presented together in a single representation within a supply-and-use framework.
This extension of the WIO framework, given below, results in a symmetric WIO model that does not require the conversion of waste flows into treatment flows.
{\displaystyle {\begin{pmatrix}A_{P}&A_{T}&0\\0&0&S\\G_{P}&G_{T}&0\end{pmatrix}}{\begin{pmatrix}x_{P}\\x_{T}\\w\end{pmatrix}}+{\begin{pmatrix}y_{P}\\0\\w_{Y}\end{pmatrix}}={\begin{pmatrix}x_{P}\\x_{T}\\w\end{pmatrix}}}
It is worth noting that despite the seemingly different forms of the two models, the Leontief inverse matrices of WIO and WIO-SUT are equivalent.
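Continuing the numerical sketch above (reusing its matrices), this equivalence can be checked by solving the enlarged WIO-SUT system directly:

```python
# Reuses A_P, A_T, G_P, G_T, S, y_P, w_Y, x_P, x_T, w from the sketch above.
Z = np.zeros((2, 2))
B = np.block([[A_P, A_T, Z],
              [Z,   Z,   S],
              [G_P, G_T, Z]])
x_sut = np.linalg.solve(np.eye(6) - B, np.concatenate([y_P, np.zeros(2), w_Y]))

# Product outputs, treatment outputs and waste match the WIO solution.
assert np.allclose(x_sut[:2], x_P)
assert np.allclose(x_sut[2:4], x_T)
assert np.allclose(x_sut[4:], w)
```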
=== The WIO Cost and Price Model ===
Let's denote by p_P, p_T, v_P, and v_T the vector of product prices, waste treatment prices, value-added ratios of products, and value-added ratios of waste treatments, respectively.
==== The case without waste recycling ====
In the absence of recycling, the cost counterpart of equation (5) becomes:
$$\begin{pmatrix}p_{P}&p_{T}\end{pmatrix}=\begin{pmatrix}p_{P}&p_{T}\end{pmatrix}\begin{pmatrix}A_{P}&A_{T}\\SG_{P}&SG_{T}\end{pmatrix}+\begin{pmatrix}v_{P}&v_{T}\end{pmatrix},\qquad (8)$$
which can be solved for $p_{P}$ and $p_{T}$ as:

$$\begin{pmatrix}p_{P}&p_{T}\end{pmatrix}=\begin{pmatrix}v_{P}&v_{T}\end{pmatrix}\begin{pmatrix}I-A_{P}&-A_{T}\\-SG_{P}&I-SG_{T}\end{pmatrix}^{-1}.$$
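A numerical sketch of the cost model, reusing the hypothetical coefficient matrices from the quantity-model example together with made-up value-added ratios:

```python
import numpy as np

# Reuses A_P, A_T, G_P, G_T, S from the quantity-model sketch above.
v = np.array([0.4, 0.3, 0.6, 0.5])  # value-added ratios (v_P, v_T), hypothetical

M = np.block([[A_P,     A_T    ],
              [S @ G_P, S @ G_T]])
# Row-vector system p = p M + v  is equivalent to  p' = (I - M')^{-1} v'.
p = np.linalg.solve(np.eye(4) - M.T, v)
p_P, p_T = p[:2], p[2:]
print(p_P, p_T)
```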
==== The case with waste recycling ====
When there is recycling of waste, the simple representation given by equation (8) must be extended to include the rate of recycling $r$ and the price of waste $p^{W}$, yielding equation (9). Here, $p^{W}$ is the $n_{w}\times 1$ vector of waste prices, $\hat{r}$ is the diagonal matrix of the $n_{w}\times 1$ vector of average waste recycling rates, $G_{P}^{+}=W_{P}^{+}\,\hat{x}_{P}^{-1}$, and $G_{P}^{-}=W_{P}^{-}\,\hat{x}_{P}^{-1}$ ($G_{T}^{+}$ and $G_{T}^{-}$ are defined in a similar fashion).
Rebitzer and Nakamura used (9) to assess the life-cycle cost of washing machines under alternative End-of-Life scenarios.
More recently, Liao et al. applied (9) to assess the economic effects of recycling copper waste domestically in Taiwan, amid the country's consideration of establishing a copper refinery to meet increasing demand.
=== A caution about possible changes in the input-output coefficients of treatment processes when the composition of waste changes ===
The input-output relationships of waste treatment processes are often closely linked to the chemical properties of the treated waste, particularly in incineration processes.
The amount of recoverable heat, and thus the potential heat supply for external uses, including power generation, depends on the heat value of the waste.
This heat value is strongly influenced by the waste's composition.
Therefore, any change in the composition of waste can significantly impact $A_{T}$ and $G_{T}$.
To address this aspect of waste treatment, especially in incineration, Nakamura and Kondo recommended using engineering information about the relevant treatment processes.
They suggest solving the entire model iteratively, which consists of the WIO model and a systems engineering model that incorporates the engineering information.
Alternatively, Tisserant et al. proposed addressing this issue by distinguishing each waste by its treatment process. They suggest transforming the rectangular waste flow matrix (of order $n_{w}\times n_{T}$) not into an $n_{T}\times n_{T}$ matrix, as done by Nakamura and Kondo, but into an $n_{T}n_{w}\times n_{T}n_{w}$ matrix. The details of each column element were obtained from the literature.
== WIO tables and applications ==
=== Waste footprint studies ===
==== The MOE-WIO table for Japan ====
The WIO table compiled by the Japanese Ministry of the Environment (MOE) for the year 2011 stands as the only publicly accessible WIO table developed by a governmental body thus far. This MOE-WIO table distinguishes 80 production sectors, 10 waste treatment sectors, and 99 waste categories, and encompasses 7 greenhouse gases (GHGs) (see External links for the table itself).
Equation (7) can be used to assess the waste footprint of products or the amount of waste embodied in a product in its supply chain. Applied to the MOE-WIO, it was found that public construction significantly contributes to reducing construction waste, which mainly originates from building construction and civil engineering sectors. Additionally, public construction is the primary user (recycler) of slag and glass scrap. Regarding waste plastics, findings indicate that the majority of plastic waste originates not from direct household discharge but from various production sectors such as medical services, commerce, construction, personal services, food production, passenger motor vehicles, and real estate.
=== Other studies ===
Many researchers have independently created their own WIO datasets and utilized them for various applications, encompassing different geographical scales and process complexities. Here, we provide a brief overview of a selection of them.
==== End-of-Life electrical and electronic appliances ====
Kondo and Nakamura assessed the environmental and economic impacts of various life-cycle strategies for electrical appliances using the WIO-table they developed for Japan for the year 1995.
This dataset encompassed 80 industrial sectors, 5 treatment processes, and 36 types of waste.
The assessment was based on Equation (6).
The strategies examined included disposal to a landfill, conventional recycling, intensive recycling employing advanced sorting technology, extension of product life, and extension of product life with functional upgrading.
Their analysis revealed that intensive recycling outperformed landfilling and simple shredding in reducing final waste disposal and other impacts, including carbon emissions.
Furthermore, they found that extending the product life significantly decreased environmental impact without negatively affecting economic activity and employment, provided that the reduction in spending on new purchases was balanced by increased expenditure on repair and maintenance.
==== General and hazardous industrial waste ====
Using detailed data on industrial waste, including 196 types of general industrial waste and 157 types of hazardous industrial waste, Liao et al. analyzed the final demand footprint of industrial waste in Taiwan across various final demand categories. Their analysis revealed significant variations in waste footprints among different final demand categories. For example, over 90% of the generation of "Waste acidic etchants" and "Copper and copper compounds" was attributed to exports. Conversely, items like "Waste lees, wine meal, and alcohol mash" and "Pulp and paper sludge" were predominantly associated with household activities.
==== Global waste flows ====
Tisserant et al. developed a WIO model of the global economy by constructing a harmonized multiregional solid waste account that covered 48 world regions, 163 production sectors, 11 types of solid waste, and 12 waste treatment processes for the year 2007. Russia was found to be the largest generator of waste, followed by China, the US, the larger Western European economies, and Japan.
==== Decision Analytic Extension Based on Linear Programming (LP) ====
Kondo and Nakamura applied linear programming (LP) methodology to extend the WIO model, resulting in the development of a decision analytic extension known as the WIO-LP model. The application of LP to the IO model has a well-established history. This model was applied to explore alternative treatment processes for end-of-life home electric and electronic appliances, aiming to identify the optimal combination of treatment processes to achieve specific objectives, such as minimization of carbon emissions or landfill waste. Lin applied this methodology to the regional Input-Output (IO) table for Tokyo, augmented to incorporate wastewater flows and treatment processes, and identified trade-off relationships between water quality and carbon emissions. A similar method was also employed to assess the environmental impacts of alternative treatment processes for waste plastics in China.
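The decision-analytic idea can be sketched as a generic linear program over an input-output system: choose nonnegative activity levels that meet final demand while minimizing an emission objective. The snippet below is a minimal illustration with made-up numbers, not the actual WIO-LP formulation of Kondo and Nakamura:

```python
import numpy as np
from scipy.optimize import linprog

# Minimal IO-based LP sketch: minimize emissions f'x subject to
# (I - A) x >= y (final demand is met) and x >= 0.
A = np.array([[0.2, 0.1, 0.1],
              [0.1, 0.3, 0.0],
              [0.0, 0.1, 0.2]])   # hypothetical coefficient matrix
y = np.array([50.0, 30.0, 20.0])  # final demand
f = np.array([1.0, 2.5, 0.8])     # emission intensities (objective)

I = np.eye(3)
# linprog expects "A_ub @ x <= b_ub", so flip the sign of (I - A) x >= y.
res = linprog(c=f, A_ub=-(I - A), b_ub=-y, bounds=[(0, None)] * 3)
print(res.x, res.fun)  # optimal activity levels and total emissions
```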
== See also ==
== References ==
== External links ==
WIO table compiled by the Japanese Ministry of the Environment | Wikipedia/Waste_input-output_model |
Industrial metabolism is a concept to describe the material and energy turnover of industrial systems. It was proposed by Robert Ayres in analogy to the biological metabolism as "the whole integrated collection of physical processes that convert raw materials and energy, plus labour, into finished products and wastes..." In analogy to the biological concept of metabolism, which is used to describe the whole of chemical reactions in, for example, a cell to maintain its functions and reproduce itself, the concept of industrial metabolism describes the chemical reactions, transport processes, and manufacturing activities in industry.
Industrial metabolism presupposes a connection between different industrial activities by seeing them as part of a larger system, such as a material cycle or the supply chain of a commodity. System scientists, for example in industrial ecology, use the concept as paradigm to study the flow of materials or energy through the industrial system in order to better understand supply chains, the sources and causes of emissions, and the linkages between the industrial and the wider socio-technological system.
Industrial metabolism is a subsystem of the anthropogenic or socioeconomic metabolism, which also comprises non-industrial human activities in households or the public sector.
== See also ==
Anthropogenic metabolism
Autopoiesis
Dematerialization (economics)
Energy accounting
Industrial ecology
Material flow accounting
Material flow analysis
Information metabolism
Social metabolism
Urban metabolism
== References ==
== Further reading ==
Industrial Metabolism: Restructuring for Sustainable Development Archived 2008-03-13 at the Wayback Machine | Wikipedia/Industrial_metabolism |
Waste treatment refers to the activities required to ensure that waste has the least practicable impact on the environment. In many countries various forms of waste treatment are required by law.
== Solid waste treatment ==
The treatment of solid wastes is a key component of waste management. Different forms of solid waste treatment are graded in the waste hierarchy.
== Waste water treatment ==
=== Agricultural waste water treatment ===
Agricultural wastewater treatment is treatment and disposal of liquid animal waste, pesticide residues etc. from agriculture.
=== Industrial wastewater treatment ===
Industrial wastewater treatment is the treatment of wet wastes from factories, mines, power plants and other commercial facilities.
=== Sewage treatment ===
Sewage treatment is the treatment and disposal of human waste. Sewage is produced by all human communities. Treatment in urbanized areas is typically handled by centralized treatment systems. Alternative systems may use composting processes or processes that separate solid materials by settlement and then convert soluble contaminants into biological sludge and into gases such as carbon dioxide or methane.
== Radioactive waste treatment ==
Radioactive waste treatment is the treatment and containment of radioactive waste.
== References == | Wikipedia/Waste_treatment |
Eco-industrial development (EID) is a framework for industry to develop while reducing its impact on the environment. It uses a closed loop production cycle to tackle a broad set of environmental challenges such as soil and water pollution, desertification, species preservation, energy management, by-product synergy, resource efficiency, air quality, etc.
Mutually beneficial connections among industry, natural systems, energy, material and local communities become central factors in designing industrial production processes.
The approach itself is largely voluntary and market-driven but often pressed ahead by favorable government treatment or efforts of development co-operation.
== History ==
Since the early 1990s, the idea of EID has evolved from biological symbiosis. This concept was adopted by industrial ecologists in the search for innovative approaches to solving problems of waste, energy shortage and degradation of the environment. A continuous approach towards improving both environmental and economic outcomes is used.
In 1992, the international community officially connected development co-operation to sustainable environmental protection for the first time. At the United Nations Conference on Environment and Development (UNCED) in Rio de Janeiro, Brazil, nearly 180 states signed the conference's Rio Declaration. Although non-binding, the Rio Declaration on Environment and Development laid out 27 principles to guide the increasing inter-connectedness of development cooperation and sustainability. Moreover, the document's drafting was accompanied by a presentation describing the idea of eco-industrial development for the first time.
In the following years, EID became popular throughout the United States. The newly elected Clinton administration convened a summit of business, labor, government and environmental protection representatives to further develop the approach. This summit established the idea of eco-industrial parks, but it soon became clear that more efficient management of raw materials, energy and waste had to be achieved first.
Since then, the broad goals and application principles of EID have hardly changed; they have only been adapted to rapid technological progress.
In 2012 the IGEP Foundation, for the promotion of trade, published a report called Pathway to Eco Industrial Development in India – Concepts and Cases.
The field is researched by the National Center for Eco-Industrial Development, a joint project by the University of Southern California and Cornell University.
== Goals and concepts ==
The primary goal of eco-industrial development is a significant and continuous improvement in both business and environmental performance. Herein, the notion of industry not only relates to private-sector manufacturing but also covers state-owned enterprises, the service sector and transportation. EID's twin guideline is reflected in the "eco" of eco-industrial, which stands for both ecology (decrease in pollution and waste) and economy (increase in commercial success). To build a framework for defining an enterprise's sustainable performance at the micro level, resource use optimization, minimization of waste, cleaner technologies and pollution limits are used in pursuing a broad range of EID goals:
Resource efficiency minimizes the use of energy, materials, water and transportation. This, in turn, lowers production costs due to savings in virtually all areas of business.
Cleaner production is a predominantly environmental measure, which aims at the reduction or even substitution of toxics, emissions-control or the re-use of residual material.
Renewables in both energy and material use shall eliminate all pollution through fossil fuels.
Greening of buildings or production sites installs high energy and environmental standards by relying on innovation in green architecture or engineering. Moreover, new facility and infrastructure design may also enhance the quality of life in neighboring communities significantly.
Environmental management systems such as the ISO 14000 ensure a continuous improvement through regular audits and the progressing establishment of environmental targets.
Ecological site planning can then combine each of these aspects by developing a clear understanding of air, water and ground system capacities throughout the surrounding eco-system.
Eco-industrial development hence explores possibilities for improvement at the local level. In case-by-case analyses, the particular geography, human potential or business climate is investigated. In contrast to the widespread race for top-down governmental support such as tax cuts, EID emphasizes locally achievable success and room for improvement. As a result, purposeful enforcement of action plans can make a large difference by optimizing the interaction of business, community and ecological systems.
== Instruments ==
Eco-industrial development includes and employs four major conceptual instruments. Each of the approaches intends to combine the seemingly antithetic processes of industrial development and bolstering sustainability.
Industrial ecology focuses on both industrial and consumer behavior. By assessing flows of energy and material, the approach determines the flows' influences on the environment. In turn, it explores ways and means of optimizing the whole production chain, from the flow and use of resources to their final transformation. In these analyses, the influences of economic, political, regulatory and social factors are key.
The concept of industrial symbiosis is based on mainly voluntary cooperation of different industries. By conglomerating complementary enterprises and by then adapting their respective production chains, the presence of each may increase viability and profitability of the others. Therefore, symbioses consider resource scarcity and environmental protection as crucial factors in developing sustainable industries and profits. Industrial Symbiosis often becomes manifest in Eco-industrial parks.
Environmental management systems include a wide range of different environmental management approaches in order to ensure continual improvement in sustainability. In early stages, monitoring companies facilitates the identification of hazardous environmental aspects. Further on, objectives and targets are set under consideration of legal requirements. Finally, the establishment of regular audits and other reporting systems combined with continuous follow-up targets shall ensure a constant improvement towards greener industrial production.
The Design for the Environment concept originated in engineering disciplines as well as from the product life-cycle analysis. It is a simple but all-encompassing assessment of a product's potential environmental impact—ranging from energy and materials used for packaging, transportation, consumer use and disposal.
== See also ==
Economic growth
Green economy
== References ==
== External links ==
Eco Industrial Development in India Archived 2018-03-13 at the Wayback Machine
Eco-Industrial Development Network
Eco-Industrial Development Institute
National Center for Eco-Industrial Development (NCEID)
Principles of Eco-Industrial Development: Strategic Approaches and Best Practice for Sustainable Industrial Development - presentation by Andreas Koenig of ecoindustry.org
Green Economy Coalition
Industrial ecology: eco-industrial development and regional economic development (University of Hull) | Wikipedia/Eco-industrial_development |
Atmospheric dispersion modeling is the mathematical simulation of how air pollutants disperse in the ambient atmosphere. It is performed with computer programs that include algorithms to solve the mathematical equations that govern the pollutant dispersion. The dispersion models are used to estimate the downwind ambient concentration of air pollutants or toxins emitted from sources such as industrial plants, vehicular traffic or accidental chemical releases. They can also be used to predict future concentrations under specific scenarios (i.e. changes in emission sources). Therefore, they are the dominant type of model used in air quality policy making. They are most useful for pollutants that are dispersed over large distances and that may react in the atmosphere. For pollutants that have a very high spatio-temporal variability (i.e. have very steep distance to source decay such as black carbon) and for epidemiological studies statistical land-use regression models are also used.
Dispersion models are important to governmental agencies tasked with protecting and managing the ambient air quality. The models are typically employed to determine whether existing or proposed new industrial facilities are or will be in compliance with the National Ambient Air Quality Standards (NAAQS) in the United States and other nations. The models also serve to assist in the design of effective control strategies to reduce emissions of harmful air pollutants. During the late 1960s, the Air Pollution Control Office of the U.S. EPA initiated research projects that would lead to the development of models for the use by urban and transportation planners. A major and significant application of a roadway dispersion model that resulted from such research was applied to the Spadina Expressway of Canada in 1971.
Air dispersion models are also used by public safety responders and emergency management personnel for emergency planning of accidental chemical releases. Models are used to determine the consequences of accidental releases of hazardous or toxic materials. Accidental releases may result in fires, spills or explosions that involve hazardous materials, such as chemicals or radionuclides. The results of dispersion modeling, using worst case accidental release source terms and meteorological conditions, can provide estimates of the locations of impacted areas and of ambient concentrations, and can be used to determine appropriate protective actions in the event a release occurs. Appropriate protective actions may include evacuation or shelter in place for persons in the downwind direction. At industrial facilities, this type of consequence assessment or emergency planning is required under the U.S. Clean Air Act (CAA) codified in Part 68 of Title 40 of the Code of Federal Regulations.
The dispersion models vary depending on the mathematics used to develop the model, but all require the input of data that may include:
Meteorological conditions such as wind speed and direction, the amount of atmospheric turbulence (as characterized by what is called the "stability class"), the ambient air temperature, the height to the bottom of any inversion aloft that may be present, cloud cover and solar radiation.
Source term (the concentration or quantity of toxins in emission or accidental release source terms) and temperature of the material
Emissions or release parameters such as source location and height, type of source (i.e., fire, pool or vent stack) and exit velocity, exit temperature and mass flow rate or release rate.
Terrain elevations at the source location and at the receptor location(s), such as nearby homes, schools, businesses and hospitals.
The location, height and width of any obstructions (such as buildings or other structures) in the path of the emitted gaseous plume, surface roughness or the use of a more generic parameter "rural" or "city" terrain.
Many of the modern, advanced dispersion modeling programs include a pre-processor module for the input of meteorological and other data, and many also include a post-processor module for graphing the output data and/or plotting the area impacted by the air pollutants on maps. The plots of impacted areas may also include isopleths showing areas of minimal to high concentrations that define areas of the highest health risk. The isopleth plots are useful in determining protective actions for the public and responders.
The atmospheric dispersion models are also known as atmospheric diffusion models, air dispersion models, air quality models, and air pollution dispersion models.
== Atmospheric layers ==
Discussion of the layers in the Earth's atmosphere is needed to understand where airborne pollutants disperse in the atmosphere. The layer closest to the Earth's surface is known as the troposphere. It extends from sea-level to a height of about 18 km (11 mi) and contains about 80 percent of the mass of the overall atmosphere. The stratosphere is the next layer and extends from 18 km (11 mi) to about 50 km (31 mi). The third layer is the mesosphere which extends from 50 km (31 mi) to about 80 km (50 mi). There are other layers above 80 km, but they are insignificant with respect to atmospheric dispersion modeling.
The lowest part of the troposphere is called the planetary boundary layer (PBL), or sometimes the atmospheric boundary layer. The air temperature of the PBL decreases with increasing altitude until it reaches a capping inversion, which is a type of inversion layer where warmer air sits higher in the atmosphere than cooler air. We call the region of the PBL below its capping inversion the convective planetary boundary layer; it is typically 1.5 to 2 km (0.93 to 1.24 mi) in height. The upper part of the troposphere (i.e., above the inversion layer) is called the free troposphere and it extends up to the tropopause (the boundary in the Earth's atmosphere between the troposphere and the stratosphere). In tropical and mid-latitudes during daytime, the free convective layer can comprise the entire troposphere, which is up to 10 to 18 km (6.2 to 11.2 mi) in the Intertropical Convergence Zone.
The PBL is important with respect to the transport and dispersion of airborne pollutants because the turbulent dynamics of wind are strongest at Earth's surface. The part of the PBL between the Earth's surface and the bottom of the inversion layer is known as the mixing layer. Almost all of the airborne pollutants emitted into the ambient atmosphere are transported and dispersed within the mixing layer. Some of the emissions penetrate the inversion layer and enter the free troposphere above the PBL.
In summary, the layers of the Earth's atmosphere from the surface of the ground upwards are: the PBL, made up of the mixing layer capped by the inversion layer; the free troposphere; the stratosphere; the mesosphere; and others. Many atmospheric dispersion models are referred to as boundary layer models because they mainly model air pollutant dispersion within the PBL. To avoid confusion, models referred to as mesoscale models have dispersion modeling capabilities that extend horizontally up to a few hundred kilometres; this does not mean that they model dispersion in the mesosphere.
== Gaussian air pollutant dispersion equation ==
The technical literature on air pollution dispersion is quite extensive and dates back to the 1930s and earlier. One of the early air pollutant plume dispersion equations was derived by Bosanquet and Pearson. Their equation did not assume Gaussian distribution nor did it include the effect of ground reflection of the pollutant plume.
Sir Graham Sutton derived an air pollutant plume dispersion equation in 1947 which did include the assumption of Gaussian distribution for the vertical and crosswind dispersion of the plume and also included the effect of ground reflection of the plume.
Under the stimulus provided by the advent of stringent environmental control regulations, there was an immense growth in the use of air pollutant plume dispersion calculations between the late 1960s and today. A great many computer programs for calculating the dispersion of air pollutant emissions were developed during that period of time and they were called "air dispersion models". The basis for most of those models was the Complete Equation For Gaussian Dispersion Modeling Of Continuous, Buoyant Air Pollution Plumes shown below:
$$C={\frac {Q}{u}}\cdot {\frac {f}{\sigma _{y}{\sqrt {2\pi }}}}\cdot {\frac {g_{1}+g_{2}+g_{3}}{\sigma _{z}{\sqrt {2\pi }}}}$$
The above equation not only includes upward reflection from the ground, it also includes downward reflection from the bottom of any inversion lid present in the atmosphere.
The sum of the four exponential terms in $g_{3}$ converges to a final value quite rapidly. For most cases, the summation of the series with m = 1, m = 2 and m = 3 will provide an adequate solution.
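A minimal sketch of the equation follows, with the term definitions taken from Beychok's formulation (an assumption, since they are not spelled out above): $f$ is the crosswind term, $g_{1}$ the direct vertical term, $g_{2}$ the ground reflection, and $g_{3}$ the inversion-lid reflection series, truncated here at m = 3 as the text suggests is usually adequate:

```python
import numpy as np

def plume_concentration(Q, u, y, z, H, L, sigma_y, sigma_z, m_max=3):
    """Gaussian plume with ground and inversion-lid reflection.

    Q: emission rate (g/s), u: wind speed (m/s), y: crosswind distance (m),
    z: receptor height (m), H: effective plume height (m),
    L: height of the inversion lid (m), sigma_y/sigma_z: dispersion
    coefficients (m) at the downwind distance of interest.
    Term definitions follow Beychok's formulation (an assumption here).
    """
    f = np.exp(-y**2 / (2 * sigma_y**2))         # crosswind term
    g1 = np.exp(-(z - H)**2 / (2 * sigma_z**2))  # direct vertical term
    g2 = np.exp(-(z + H)**2 / (2 * sigma_z**2))  # ground reflection
    g3 = 0.0                                     # inversion-lid reflections
    for m in range(1, m_max + 1):
        g3 += (np.exp(-(z - H - 2 * m * L)**2 / (2 * sigma_z**2))
               + np.exp(-(z + H + 2 * m * L)**2 / (2 * sigma_z**2))
               + np.exp(-(z + H - 2 * m * L)**2 / (2 * sigma_z**2))
               + np.exp(-(z - H + 2 * m * L)**2 / (2 * sigma_z**2)))
    return (Q / u) * (f / (sigma_y * np.sqrt(2 * np.pi))) \
        * ((g1 + g2 + g3) / (sigma_z * np.sqrt(2 * np.pi)))

# Example: 100 g/s source, 5 m/s wind, receptor at ground level.
print(plume_concentration(Q=100, u=5, y=0, z=0, H=50, L=800,
                          sigma_y=60, sigma_z=30))
```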
$\sigma_{z}$ and $\sigma_{y}$ are functions of the atmospheric stability class (i.e., a measure of the turbulence in the ambient atmosphere) and of the downwind distance to the receptor. The two most important variables affecting the degree of pollutant emission dispersion obtained are the height of the emission source point and the degree of atmospheric turbulence. The more turbulence, the better the degree of dispersion.
Equations for $\sigma_{y}$ and $\sigma_{z}$ are:

$$\sigma_{y}(x)=\exp\left(I_{y}+J_{y}\ln(x)+K_{y}[\ln(x)]^{2}\right)$$

$$\sigma_{z}(x)=\exp\left(I_{z}+J_{z}\ln(x)+K_{z}[\ln(x)]^{2}\right)$$

(units of $\sigma_{y}$, $\sigma_{z}$, and $x$ are in meters)
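These fits can be evaluated directly once the coefficients are known for a given stability class; the values below are placeholders rather than entries from any published table:

```python
import numpy as np

def sigma(x, I, J, K):
    """Dispersion coefficient sigma(x) = exp(I + J*ln(x) + K*ln(x)^2),
    with x in metres; I, J, K depend on the stability class (placeholder
    values are used below, not taken from any published table)."""
    lnx = np.log(x)
    return np.exp(I + J * lnx + K * lnx**2)

x = 1000.0  # downwind distance in metres
sigma_y = sigma(x, I=-1.1, J=0.99, K=-0.008)  # illustrative coefficients only
sigma_z = sigma(x, I=-1.6, J=1.03, K=-0.010)  # illustrative coefficients only
print(sigma_y, sigma_z)
```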
The stability classification scheme was proposed by F. Pasquill. The six stability classes are:
A-extremely unstable
B-moderately unstable
C-slightly unstable
D-neutral
E-slightly stable
F-moderately stable
The resulting calculations for air pollutant concentrations are often expressed as an air pollutant concentration contour map in order to show the spatial variation in contaminant levels over a wide area under study. In this way the contour lines can overlay sensitive receptor locations and reveal the spatial relationship of air pollutants to areas of interest.
Whereas older models rely on stability classes (see air pollution dispersion terminology) for the determination of $\sigma_{y}$ and $\sigma_{z}$, more recent models increasingly rely on the Monin-Obukhov similarity theory to derive these parameters.
== Briggs plume rise equations ==
The Gaussian air pollutant dispersion equation (discussed above) requires the input of H which is the pollutant plume's centerline height above ground level—and H is the sum of Hs (the actual physical height of the pollutant plume's emission source point) plus ΔH (the plume rise due to the plume's buoyancy).
To determine ΔH, many if not most of the air dispersion models developed between the late 1960s and the early 2000s used what are known as the Briggs equations. G.A. Briggs first published his plume rise observations and comparisons in 1965. In 1968, at a symposium sponsored by CONCAWE (a Dutch organization), he compared many of the plume rise models then available in the literature. In that same year, Briggs also wrote the section of the publication edited by Slade dealing with the comparative analyses of plume rise models. That was followed in 1969 by his classical critical review of the entire plume rise literature, in which he proposed a set of plume rise equations which have become widely known as "the Briggs equations". Subsequently, Briggs modified his 1969 plume rise equations in 1971 and in 1972.
Briggs divided air pollution plumes into these four general categories:
Cold jet plumes in calm ambient air conditions
Cold jet plumes in windy ambient air conditions
Hot, buoyant plumes in calm ambient air conditions
Hot, buoyant plumes in windy ambient air conditions
Briggs considered the trajectory of cold jet plumes to be dominated by their initial velocity momentum, and the trajectory of hot, buoyant plumes to be dominated by their buoyant momentum to the extent that their initial velocity momentum was relatively unimportant. Although Briggs proposed plume rise equations for each of the above plume categories, it is important to emphasize that "the Briggs equations" which became widely used are those that he proposed for bent-over, hot buoyant plumes.
In general, Briggs's equations for bent-over, hot buoyant plumes are based on observations and data involving plumes from typical combustion sources such as the flue gas stacks from steam-generating boilers burning fossil fuels in large power plants. Therefore, the stack exit velocities were probably in the range of 20 to 100 ft/s (6 to 30 m/s) with exit temperatures ranging from 250 to 500 °F (120 to 260 °C).
A logic diagram for using the Briggs equations to obtain the plume rise trajectory of bent-over buoyant plumes, together with the parameters used in the equations, is given in Beychok's book.
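As an illustration, the sketch below uses the commonly quoted Briggs forms for bent-over, hot buoyant plumes in unstable/neutral conditions (buoyancy flux $F=g\,v_{s}r^{2}(T_{s}-T_{a})/T_{s}$, transitional rise $\Delta H=1.6F^{1/3}x^{2/3}/u$, and the usual distance-to-final-rise rule); since the article does not reproduce the equations, these specific forms should be treated as assumptions:

```python
import numpy as np

def briggs_rise(vs, r, Ts, Ta, u, x, g=9.807):
    """Plume rise of a bent-over, hot buoyant plume (commonly quoted
    Briggs forms for unstable/neutral conditions; treated here as an
    assumption, not a definitive implementation).

    vs: stack exit velocity (m/s), r: stack radius (m),
    Ts/Ta: stack gas / ambient temperature (K),
    u: wind speed (m/s), x: downwind distance (m).
    """
    F = g * vs * r**2 * (Ts - Ta) / Ts  # buoyancy flux (m^4/s^3)
    # Downwind distance at which the plume reaches its final rise (m):
    xf = 49.0 * F**(5.0 / 8.0) if F < 55.0 else 119.0 * F**(2.0 / 5.0)
    x_eff = min(x, xf)  # rise stops growing past xf
    return 1.6 * F**(1.0 / 3.0) * x_eff**(2.0 / 3.0) / u

# Example: 20 m/s exit velocity, 1 m radius, 400 K stack gas in 288 K air.
print(briggs_rise(vs=20.0, r=1.0, Ts=400.0, Ta=288.0, u=5.0, x=2000.0))
```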
== See also ==
=== Atmospheric dispersion models ===
List of atmospheric dispersion models provides a more comprehensive list of models than listed below. It includes a very brief description of each model.
=== Organizations ===
Air Quality Modeling Group
Air Resources Laboratory
Finnish Meteorological Institute
KNMI, Royal Dutch Meteorological Institute
National Environmental Research Institute of Denmark
Swedish Meteorological and Hydrological Institute
TA Luft
UK Atmospheric Dispersion Modelling Liaison Committee
UK Dispersion Modelling Bureau
Desert Research Institute
VITO (institute) Belgium; https://vito.be/en
Swedish Defence Research Agency, FOI
=== Others ===
Air pollution dispersion terminology
List of atmospheric dispersion models
Portable Emissions Measurement System (PEMS)
Roadway air dispersion modeling
Useful conversions and formulas for air dispersion modeling
Air pollution forecasting
== References ==
== Further reading ==
=== Books ===
=== Proceedings ===
== External links ==
EPA's Support Center for Regulatory Atmospheric Modeling
EPA's Air Quality Modeling Group (AQMG)
NOAA's Air Resources Laboratory (ARL)
UK Atmospheric Dispersion Modelling Liaison Committee web site
UK Dispersion Modelling Bureau web site
Atmospheric Chemistry transport model LOTOS-EUROS
The Operational Priority Substances model OPS (in Dutch)
HAMS-GPS Dispersion modelling
Wiki on Atmospheric Dispersion Modelling. Addresses the international community of atmospheric dispersion modellers - primarily researchers, but also users of models. Its purpose is to pool experiences gained by dispersion modellers during their work. | Wikipedia/Atmospheric_dispersion_modeling |
A sustainable food system is a type of food system that provides healthy food to people and creates sustainable environmental, economic, and social systems that surround food. Sustainable food systems start with the development of sustainable agricultural practices, development of more sustainable food distribution systems, creation of sustainable diets, and reduction of food waste throughout the system. Sustainable food systems have been argued to be central to many or all 17 Sustainable Development Goals.
Moving to sustainable food systems, including via shifting consumption to sustainable diets, is an important component of addressing the causes of climate change and adapting to it. A 2020 review conducted for the European Union found that up to 37% of global greenhouse gas emissions could be attributed to the food system, including crop and livestock production, transportation, changing land use (including deforestation), and food loss and waste. Reduction of meat production, which accounts for ~60% of greenhouse gas emissions and ~75% of agriculturally used land, is one major component of this change.
The global food system is facing major interconnected challenges, including mitigating food insecurity, effects from climate change, biodiversity loss, malnutrition, inequity, soil degradation, pest outbreaks, water and energy scarcity, economic and political crises, natural resource depletion, and preventable ill-health.
The concept of sustainable food systems is frequently at the center of sustainability-focused policy programs, such as proposed Green New Deal programs.
== Definition ==
There are many different definitions of a sustainable food system.
From a global perspective, the Food and Agriculture Organization of the United Nations describes a sustainable food system as follows:
A sustainable food system (SFS) is a food system that delivers food security and nutrition for all in such a way that the economic, social and environmental bases to generate food security and nutrition for future generations are not compromised. This means that:
It is profitable throughout (economic sustainability);
It has broad-based benefits for society (social sustainability); and
It has a positive or neutral impact on the natural environment (environmental sustainability)
The American Public Health Association (APHA) defines a sustainable food system as:
one that provides healthy food to meet current food needs while maintaining healthy ecosystems that can also provide food for generations to come with minimal negative impact to the environment. A sustainable food system also encourages local production and distribution infrastructures and makes nutritious food available, accessible, and affordable to all. Further, it is humane and just, protecting farmers and other workers, consumers, and communities
The European Union's Scientific Advice Mechanism defines a sustainable food system as a system that:
provides and promotes safe, nutritious and healthy food of low environmental impact for all current and future EU citizens in a manner that itself also protects and restores the natural environment and its ecosystem services, is robust and resilient, economically dynamic, just and fair, and socially acceptable and inclusive. It does so without compromising the availability of nutritious and healthy food for people living outside the EU, nor impairing their natural environment
== Problems with conventional food systems ==
Industrial agriculture causes environmental impacts, as well as health problems associated with both obesity and hunger. This has generated a strong interest in healthy, sustainable eating as a major component of the overall movement toward sustainability and climate change mitigation.
Conventional food systems are largely based on the availability of inexpensive fossil fuels, which are necessary for mechanized agriculture, the manufacturing or collection of chemical fertilizers, the processing of food products, and the packaging of foods. Food processing began when the number of consumers started growing rapidly. The demand for cheap and efficient calories climbed, which resulted in nutrition decline. More than two billion people suffer from malnutrition, and in 2017 more than 11 million deaths were attributed to dietary risk factors. In fact, during the last 50 to 70 years, environmental, genetic, and field soil dilution factors have decreased the nutritional density of fruits like apples, oranges, bananas, and mangos, and of vegetables like tomatoes and potatoes, by 25-50%. Industrialized agriculture, due to its reliance on economies of scale to reduce production costs, often leads to the compromising of local, regional, or even global ecosystems through fertilizer runoff, nonpoint source pollution, deforestation, suboptimal mechanisms affecting consumer product choice, and greenhouse gas emissions.
=== Food and power ===
In the contemporary world, transnational corporations execute high levels of control over the food system. In this system, both farmers and consumers are disadvantaged and have little control; power is concentrated in the center of the supply chain, where corporations control how food moves from producers to consumers.
==== Disempowerment of consumers ====
People living in different areas face substantial inequality in their access to healthy food. Areas where affordable, healthy food, particularly fresh fruits and vegetables, is difficult to access are sometimes called food deserts. This term has been particularly applied in the USA. In addition, conventional channels do not distribute food through emergency assistance or charity. Urban residents generally receive food from healthier, safer, and more sustainable sources than low-income communities. Nonetheless, conventional channels are more sustainable than charitable or welfare food sources. Even though the conventional food system provides easier access and lower prices, its food may not be the best for the environment or for consumer health.
People who live in food deserts are currently being overfed with fast food and ultra-processed foods, yet remain undernourished because they are consuming nutrient-poor diets. Both obesity and undernutrition are associated with poverty and marginalization. This has been referred to as the "double burden of malnutrition." In low-income areas, there may be abundant access to fast-food or small convenience stores and "corner" stores, but no supermarkets that sell a variety of healthy foods.
==== Disempowerment of producers ====
Small farms tend to be more sustainable than large farming operations, because of differences in their management and methods. Industrial agriculture replaces human labor using increased usage of fossil fuels, fertilizers, pesticides, and machinery and is heavily reliant on monoculture. However, if current trends continue, the number of operating farms in existence is expected to halve by 2100, as smallholders' farms are consolidated into larger operations. The percentage of people who work as farmers worldwide dropped from 44% to 26% between 1991 and 2020.
Small farmers worldwide are often trapped in poverty and have little agency in the global food system. Smallholder farms produce a greater diversity of crops as well as harboring more non-crop biodiversity, but in wealthy, industrialized countries, small farms have declined severely. For example, in the USA, 4% of the total number of farms operate 26% of all agricultural land.
=== Complications from globalization ===
The need to reduce production costs in an increasingly global market can cause production of foods to be moved to areas where economic costs (labor, taxes, etc.) are lower or environmental regulations are more lax, which are usually further from consumer markets. For example, the majority of salmon sold in the United States is raised off the coast of Chile, due in large part to less stringent Chilean standards regarding fish feed and regardless of the fact that salmon are not indigenous in Chilean coastal waters. The globalization of food production can result in the loss of traditional food systems in less developed countries and have negative impacts on the population health, ecosystems, and cultures in those countries.
Globalization of sustainable food systems has coincided with the proliferation of private standards in the agri-food sector, where big food retailers have formed multi-stakeholder initiatives (MSIs) with governance over standard-setting organizations (SSOs) that maintain the standards. One such MSI is the Consumer Goods Forum (CGF), whose members openly use lobbying dollars to influence trade agreements for food systems, which creates barriers to competition. Concerns around corporate governance within food systems as a substitute for regulation were raised by the Institute for Multi-Stakeholder Initiative Integrity. The proliferation of private standards resulted in standard harmonization by organizations that include the Global Food Safety Initiative and the ISEAL Alliance. An unintended consequence of standard harmonization was a perverse incentive: companies owning private standards generate revenue from the fees that other companies have to pay to implement those standards. This has enticed more and more private standards to enter the marketplace.
=== Systemic structures ===
Moreover, the existing conventional food system lacks the inherent framework necessary to foster sustainable models of food production and consumption. Within the decision-making processes associated with this system, the burden of responsibility primarily falls on consumers and private enterprises. This expectation places the onus on individuals to voluntarily and often without external incentives, expend effort to educate themselves about sustainable behaviours and specific product choices. This educational endeavour is reliant on the availability of public information. Subsequently, consumers are urged to alter their decision-making patterns concerning production and consumption, driven by prioritised ethical values and sometimes health benefits, even when significant drawbacks are prevalent. These drawbacks faced by consumers include elevated costs of organic foods, imbalanced monetary price differentials between animal-intensive diets and plant-based alternatives, and an absence of comprehensive consumer guidance aligned with contemporary valuations. In 2020, an analysis of external climate costs of foods indicated that external greenhouse gas costs are typically highest for animal-based products – conventional and organic to about the same extent within that ecosystem subdomain – followed by conventional dairy products and lowest for organic plant-based foods. It finds contemporary monetary evaluations to be "inadequate" and policy-making that lead to reductions of these costs to be possible, appropriate and urgent.
=== Agricultural pollution ===
== Sourcing sustainable food ==
At the global level the environmental impact of agribusiness is being addressed through sustainable agriculture, cellular agriculture and organic farming.
Various alternatives to meat and novel classes of foods can substantially increase sustainability. There are large potential benefits of marine algae-based aquaculture for the development of a future healthy and sustainable food system. Fungiculture, another sector of a growing bioeconomy besides algaculture, may also become a larger component of a sustainable food system. Consumption shares of various other ingredients for meat analogues, such as protein from pulses, may also rise substantially in a sustainable food system, as may the integration of single-cell protein, which can be produced from captured CO2. Optimized dietary scenarios would also see changes in various other types of foods, such as nuts, as well as pulses such as beans, which have favorable environmental and health profiles.
Complementary approaches under development include vertical farming of various types of foods and various agricultural technologies, often using digital agriculture.
=== Sustainable seafood ===
Sustainable seafood is seafood from either fished or farmed sources that can maintain or increase production in the future without jeopardizing the ecosystems from which it was acquired. The sustainable seafood movement has gained momentum as more people become aware about both overfishing and environmentally destructive fishing methods. The goal of sustainable seafood practices is to ensure that fish populations are able to continue to thrive, that marine habitats are protected, and that fishing and aquaculture practices do not have negative impacts on local communities or economies.
There are several factors that go into determining whether a seafood product is sustainable or not. These include the method of fishing or farming, the health of the fish population, the impact on the surrounding environment, and the social and economic implications of the seafood production. Some sustainable seafood practices include using methods that minimize bycatch, implementing seasonal or area closures to allow fish populations to recover, and using aquaculture methods that minimize the use of antibiotics or other chemicals. Organizations such as the Marine Stewardship Council (MSC) and the Aquaculture Stewardship Council (ASC) work to promote sustainable seafood practices and provide certification for products that meet their sustainability standards. In addition, many retailers and restaurants are now offering sustainable seafood options to their customers, often labeled with a sustainability certification logo to make it easier for consumers to make informed choices. Consumers can also play a role in promoting sustainable seafood by making conscious choices about the seafood they purchase and consume. This can include choosing seafood that is labeled as sustainably harvested or farmed, asking questions about the source and production methods of the seafood they purchase, and supporting restaurants and retailers that prioritize sustainability in their seafood offerings. By working together to promote sustainable seafood practices, we can help to ensure the health and sustainability of our oceans and the communities that depend on them.
=== Sustainable animal feed ===
A study suggests there would be large environmental benefits to using insects for animal feed. When substituted for mixed grain, which is currently the main animal feed, insect feed lowers water and land requirements and emits less greenhouse gas and ammonia.
==== Sustainable pet food ====
Recent studies show that vegan diets, which are more sustainable, would not have a negative impact on the health of pet dogs and cats if implemented appropriately. Sustainable pet food aims to minimize the ecological footprint of pet food production while still providing the necessary nutrition for pets.
One example is the growing body of research indicating that properly formulated and balanced vegan diets can meet the nutritional needs of dogs and cats without compromising their health. These studies suggest that with appropriate planning and supplementation, pets can thrive on plant-based diets. This is significant from a sustainability perspective as traditional pet food production heavily relies on animal-based ingredients, which contribute to deforestation, greenhouse gas emissions, and overfishing.
By opting for sustainable pet food options, such as plant-based or eco-friendly alternatives, pet owners can reduce their pets' carbon footprint and support more ethical and sustainable practices in the pet food industry. Additionally, sustainable pet food may also prioritize the use of responsibly sourced ingredients, organic farming practices, and minimal packaging waste. It is important to note that when considering a vegan or alternative diet for pets, consultation with a veterinarian is crucial. Each pet has unique nutritional requirements, and a professional can help determine the most suitable diet plan to ensure all necessary nutrients are provided.
=== Substitution of meat and sustainable meat and dairy ===
==== Meat reduction strategies ====
==== Effects and combination of measures ====
"Policy sequencing" to gradually extend regulations once established to other forest risk commodities (e.g. other than beef) and regions while coordinating with other importing countries could prevent ineffectiveness.
==== Meat and dairy ====
Despite meat from livestock such as beef and lamb being considered unsustainable, some regenerative agriculture proponents suggest rearing livestock with a mixed farming system to restore organic matter in grasslands. Organizations such as the Canadian Roundtable for Sustainable Beef (CRSB) are looking for solutions to reduce the impact of meat production on the environment. In October 2021, 17% of beef sold in Canada was certified as sustainable beef by the CRSB. However, sustainable meat has led to criticism, as environmentalists point out that the meat industry excludes most of its emissions.
Important mitigation options for reducing the greenhouse gas emissions from livestock include genetic selection, introduction of methanotrophic bacteria into the rumen, vaccines, feeds, toilet-training, diet modification and grazing management. Other options include shifting to ruminant-free alternatives, such as milk substitutes and meat analogues or poultry, which generates far fewer emissions.
Plant-based meat is proposed as a sustainable alternative to meat consumption. Plant-based meat emits 30%–90% less greenhouse gas than conventional meat (kg-CO2-eq/kg-meat) and uses 72%–99% less water than conventional meat. The public company Beyond Meat and the privately held company Impossible Foods are examples of plant-based food production. However, the consulting firm Sustainalytics asserted that these companies are not more sustainable than meat-processing competitors such as the food processor JBS, and that they do not disclose all the CO2 emissions of their supply chain.
Beyond reducing negative impacts of meat production, facilitating shifts towards more sustainable meat, and facilitating reduced meat consumption (including via plant-based meat substitutes), cultured meat may offer a potentially sustainable way to produce real meat without the associated negative environmental impacts.
=== Phase-outs, co-optimization and environmental standards ===
In regards to deforestation, a study proposed kinds of "climate clubs" of "as many other states as possible taking similar measures and establishing uniform environmental standards". It suggested that "otherwise, global problems remain unsolvable, and shifting effects will occur" and that "border adjustments [...] have to be introduced to target those states that do not participate—again, to avoid shifting effects with ecologically and economically detrimental consequences", with such "border adjustments or eco-tariffs" incentivizing other countries to adjust their standards and domestic production to join the climate club. Identified potential barriers to sustainability initiatives may include contemporary trade-policy goals and competition law. Greenhouse gas emissions for countries are often measured according to production, for imported goods that are produced in other countries than where they are consumed "embedded emissions" refers to the emissions of the product. In cases where such products are and remain imported, eco-tariffs could over time adjust prices for specific categories of products – or for specific non-collaborative polluting origin countries – such as deforestation-associated meat, foods with intransparent supply-chain origin or foods with high embedded emissions.
=== Agricultural productivity and environmental efficiency ===
Agricultural productivity (including e.g. reliability of yields) is an important component of food security and increasing it sustainably (e.g. with high efficiency in terms of environmental impacts) could be a major way to decrease negative environmental impacts, such as by decreasing the amount of land needed for farming or reducing environmental degradation like deforestation.
==== Genetically engineered crops ====
There is research and development to engineer genetically modified crops with increased heat/drought/stress resistance, increased yields, lower water requirements, and overall lower environmental impacts, among other things.
==== Novel agricultural technologies ====
=== Organic food ===
=== Local food systems ===
In local and regional food systems, food is produced, distributed, and consumed locally. This type of system can be beneficial both to the consumer (by providing fresher and more sustainably grown product) and to the farmer (by fetching higher prices and giving more direct access to consumer feedback). Local and regional food systems can face challenges arising from inadequate institutions or programs, geographic limitations of producing certain crops, and seasonal fluctuations which can affect product demand within regions. In addition, direct marketing also faces challenges of accessibility, coordination, and awareness.
Farmers' markets, which have increased in number over the past two decades, are designed for supporting local farmers in selling their fresh products to consumers who are willing to buy. Food hubs are also similar locations where farmers deliver products and consumers come to pick them up. Consumers who wish to have weekly produce delivered can buy shares through a system called Community-Supported Agriculture (CSA). However, these farmers' markets also face challenges with marketing needs such as starting up, advertisement, payments, processing, and regulations.
There are various movements working towards local food production, more productive use of urban wastelands and domestic gardens including permaculture, guerilla gardening, urban horticulture, local food, slow food, sustainable gardening, and organic gardening.
Debates over local food system efficiency and sustainability have arisen because these systems decrease transportation, which is a strategy for combating environmental footprints and climate change. A popular argument is that food products from local markets have a smaller footprint on communities and the environment. Main factors behind climate change include land use practices and greenhouse gas emissions, as global food systems produce approximately 33% of these emissions. Compared to transportation in a local food system, a conventional system uses more fuel and emits more pollution, such as carbon dioxide. This transportation also includes miles for agricultural inputs and depends on factors such as transportation sizes, modes, and fuel types. Some airplane importations have been shown to be more efficient than local food systems in some cases. Overall, local food systems can often support better environmental practices.
==== Environmental impact of food miles ====
Studies have found that food miles are a relatively minor factor in carbon emissions, although increased food localization may also enable additional, more significant environmental benefits such as recycling of energy, water, and nutrients. For specific foods, regional differences in harvest seasons may make it more environmentally friendly to import from distant regions than to produce and store locally or to produce locally in greenhouses. This may vary depending on the environmental standards in the respective country, the distance between the respective countries, and on a case-by-case basis for different foods.
However, a 2022 study suggests global food miles' CO2 emissions are 3.5–7.5 times higher than previously estimated, with transport accounting for about 19% of total food-system emissions, though shifting towards plant-based diets remains substantially more important. The study concludes that "a shift towards plant-based foods must be coupled with more locally produced items, mainly in affluent countries".
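As a rough illustration of the magnitudes involved, the following back-of-envelope sketch combines the study's 19% transport share with assumed round numbers (not figures from the study itself): roughly 50 GtCO2e of global annual greenhouse gas emissions and the approximately one-third food-system share mentioned above.

```python
# Back-of-envelope illustration only; the totals below are assumed round
# numbers, not values reported by the 2022 study.
global_ghg_gt = 50.0      # assumed global GHG emissions, GtCO2e per year
food_share = 1 / 3        # food systems' approximate share of global emissions
transport_share = 0.19    # food-mile share of food-system emissions (study)

food_system_gt = global_ghg_gt * food_share
food_miles_gt = food_system_gt * transport_share
print(f"Food-system emissions: ~{food_system_gt:.1f} GtCO2e/yr")
print(f"Food-mile emissions:   ~{food_miles_gt:.1f} GtCO2e/yr")
```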
== Food distribution ==
In food distribution, increasing the food supply is partly a logistics problem: it takes time for products to reach market, and food spoils while waiting to be distributed. An estimated 20–30% of food is wasted across all stages of production, despite efforts to combat the issue such as campaigns to limit food waste. Food is wasted at each phase of distribution owing to insufficient facilities and practices, and because large amounts of food go unmarketed or unharvested due to price or quality. Transportation combined with inadequate food handling during packing, poor or prolonged storage conditions, and consumer waste add to the inefficiency of food distribution. In 2019, although global calorie production kept pace with population growth, more than 820 million people still had insufficient food, and many more consumed low-quality diets leading to micronutrient deficiencies.
Several modern tendencies in food distribution create both problems and the need for solutions:
the growth of large-scale producers selling in bulk to chain stores, reflecting the merchandising power of large market organizations and their mergers with manufacturers;
large-scale distribution and buying units among manufacturers, which affect producers, distributors, and consumers;
protection of the public interest, meaning better adaptation of products and services, which drives rapid development of food distribution;
price maintenance, which creates pressure for lower prices and a stronger drive for lower costs throughout the whole distribution process;
newly invented technical processes, such as food freezing, that improve distribution efficiency;
new technical developments in distribution machinery, responding to consumer demand and economic factors;
and government relations with business, shaped by anti-trust laws, large-scale business organizations, fears of monopoly, and changing public attitudes.
== Food security, nutrition and diet ==
The environmental effects of different dietary patterns depend on many factors, including the proportion of animal and plant foods consumed and the method of food production. At the same time, current and future food systems need to provide sufficient nutrition not only for the current population but also for future population growth, in a world affected by a changing climate.
Nearly one in four households in the United States experienced food insecurity in 2020–21. Even before the pandemic, some 13.7 million households, or 10.5% of all U.S. households, experienced food insecurity at some point during 2019, according to data from the U.S. Department of Agriculture. That works out to more than 35 million Americans who were either unable to acquire enough food to meet their needs or uncertain of where their next meal might come from during that year.
The "global land squeeze" for agricultural land also has impacts on food security. Likewise, effects of climate change on agriculture can result in lower crop yields and nutritional quality due to for example drought, heat waves and flooding as well as increases in water scarcity, pests and plant diseases. Soil conservation may be important for food security as well. For sustainability and food security, the food system would need to adapt to such current and future problems.
According to one estimate, "just four corporations control 90% of the global grain trade", and researchers have argued that the food system is too fragile due to various issues, such as "massive food producers" (i.e. market mechanisms) having too much power and nations "polarising into super-importers and super-exporters". However, the impact of market power on the food system is contested, with others claiming more complex, context-dependent outcomes.
== Production decision-making ==
In the food industry, especially in agriculture, problems in producing some food products have grown. For instance, growing vegetables and fruits has become more expensive, and some crops are difficult to grow outside their preferred climate conditions. Food shortages have also increased as production has decreased. Although the world still produces enough food for its population, not everyone receives good-quality food, since access depends on location and/or income. In addition, the number of overweight people has increased, while about 2 billion people worldwide lack secure access to sufficient food. This shows how the global food system falls short in both quantity and quality relative to food consumption patterns.
A study estimated that "relocating current croplands to [environmentally] optimal locations, whilst allowing ecosystems in then-abandoned areas to regenerate, could simultaneously decrease the current carbon, biodiversity, and irrigation water footprint of global crop production by 71%, 87%, and 100%", with relocation only within national borders also having substantial potential.
Policies, including those that affect consumption, may influence production decisions, such as which foods are produced, to varying degrees and in direct and indirect ways. Individual studies have proposed several such options, and Project Drawdown has aggregated and preliminarily evaluated some of these measures.
=== Climate change adaptation ===
== Food waste ==
According to the Food and Agriculture Organization (FAO), food waste is responsible for 8 percent of global human-made greenhouse gas emissions. The FAO concludes that nearly 30 percent of all available agricultural land in the world – 1.4 billion hectares – is used to grow food that is produced but never eaten. The global blue water footprint of food waste is 250 km³, roughly the amount of water that flows through the Volga in a year, or three times the volume of Lake Geneva.
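The river and lake comparisons can be sanity-checked with rough reference values; the Volga discharge and Lake Geneva volume used below are assumed approximations, not figures from the FAO.

```python
# Rough sanity check of the comparisons above; reference values are assumed.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
volga_discharge_m3s = 8_000     # assumed mean discharge of the Volga, m3/s
lake_geneva_km3 = 89            # assumed volume of Lake Geneva, km3
waste_footprint_km3 = 250       # blue water footprint of food waste (FAO)

volga_annual_km3 = volga_discharge_m3s * SECONDS_PER_YEAR / 1e9
print(f"Volga annual flow: ~{volga_annual_km3:.0f} km3")   # ~252 km3
print(f"Lake Geneva multiples: ~{waste_footprint_km3 / lake_geneva_km3:.1f}x")
```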
Several factors explain the global growth of food waste in food systems. A main factor is population: as population increases, more food is produced, and much of it goes to waste. According to a 2022 study, food waste grew sharply during COVID-19 due to the boom in food delivery services. In addition, not all countries have the same resources to provide high-quality food. According to a study done in 2010, private households produce the largest amounts of food waste across the globe. Another major factor is overproduction: the rate of food production is significantly higher than the rate of consumption, leading to surplus food that is wasted.
Food is processed in different ways throughout the world, and different priorities lead to different choices about which needs to meet first. Money is another major factor, determining how long processing takes and who does the work, and it plays a different role in the food systems of low-income countries.
However, food systems in high-income countries may still face issues such as food insecurity, showing that all food systems have weaknesses and strengths. Climate change increases food waste because warmer temperatures dry crops faster and raise the risk of fires. Food waste can occur at any point in production. According to the World Wildlife Fund, most wasted food ends up in landfills, where it rots and produces methane. The disposal of food thus has a significant impact on the environment and health.
== Academic opportunities ==
The study of sustainable food systems applies systems theory and methods of sustainable design to food systems. As an interdisciplinary field, it has been growing over the last several decades. University programs focused on sustainable food systems include:
University of Colorado Boulder
Harvard Extension
University of Delaware
Mesa Community College
University of California, Davis
University of Vermont
Sterling College (Vermont)
University of Michigan
Portland State University
University of Sheffield's Institute for Sustainable Food
University of Georgia's Sustainable Food Systems Initiative
The Culinary Institute of America's Master's in Sustainable Food Systems
University of Edinburgh's Global Academy of Agriculture and Food Systems
There is a debate about "establishing a body akin to the Intergovernmental Panel on Climate Change (IPCC) for food systems" which "would respond to questions from policymakers and produce advice based on a synthesis of the available evidence" while identifying "gaps in the science that need addressing".
== Public policy ==
=== European Union ===
=== Global ===
=== Asia ===
== See also ==
Hemp juice
Standardization#Environmental protection
== References ==
== Further reading == | Wikipedia/Sustainable_food_systems |
Building information modeling (BIM) in green building aims to enable sustainable design, allowing architects and engineers to integrate and analyze building performance. It quantifies the environmental impacts of systems and materials to support the decisions needed to produce sustainable buildings, using information about sustainable materials stored in a database and interoperability between design and analysis tools. Such data can be useful for building life cycle assessments.
== Services ==
BIM services, including conceptual modeling and topographic modeling, offer an approach to green building.
=== Conceptual energy analysis ===
Conceptual energy analysis allows designers and BIM service providers to turn conceptual (massing) models into analytical energy models by exporting massing geometry to gbXML. Information that can be transferred includes climate data, graphical energy analysis results, and design comparison options.
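As a minimal sketch of what consuming such an export might look like, the following reads location and space data from a gbXML file using only the Python standard library. The file name and the presence of these particular elements are assumptions, though the element names follow the public gbXML schema.

```python
import xml.etree.ElementTree as ET

NS = {"gb": "http://www.gbxml.org/schema"}

tree = ET.parse("concept_model.xml")  # hypothetical gbXML export
root = tree.getroot()

# Read the site location, if the export included one.
location = root.find(".//gb:Location", NS)
if location is not None:
    lat = location.findtext("gb:Latitude", default="?", namespaces=NS)
    lon = location.findtext("gb:Longitude", default="?", namespaces=NS)
    print(f"Site location: {lat}, {lon}")

# Sum the floor areas of all spaces in the conceptual model.
total_area = sum(
    float(space.findtext("gb:Area", default="0", namespaces=NS))
    for space in root.findall(".//gb:Space", NS)
)
print(f"Total modeled space area: {total_area:.1f}")
```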
=== Solar and shadow analysis ===
Software tools can aid designers and BIM service providers in envisaging or quantifying solar and shadow effects.
=== Sustainability analysis ===
BIM tools and workflows fall into two categories: inherent BIM features and BIM-based analysis tools.
Inherent BIM features include functions such as 3D modeling, visualization and clash detection, which help integrated project delivery and design optimization.
BIM-based analysis tools are used to analyze energy, solar, thermal and other performance aspects. These tools enable better communication and cooperation, as well as higher accuracy and efficiency.
Various BIM-based software packages are used for such green analyses.
=== Industry Foundation Classes data model ===
Industry Foundation Classes (IFC) and COBie are standard exchange protocols used for data exchange between BIM software and rating systems.
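As an illustration of vendor-neutral exchange, the following sketch reads an IFC file with the open-source IfcOpenShell library; the file name is hypothetical, while IfcWall and its GlobalId and Name attributes are standard IFC.

```python
import ifcopenshell  # open-source IFC toolkit: pip install ifcopenshell

model = ifcopenshell.open("project.ifc")  # hypothetical IFC file

# List every wall with its globally unique identifier, as an analysis or
# rating tool consuming the exchange might do.
for wall in model.by_type("IfcWall"):
    print(wall.GlobalId, wall.Name)
```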
== Construction ==
BIM aids in four main areas— land, water, energy and materials.
=== Land ===
BIM and GIS are integrated for site planning. BIM simulations can estimate the progress of construction.
=== Water ===
BIM is utilized in large scale schemes as well as across the industry. It helps decrease unnecessary loss and effectively saves water. BIM improves the design process of building water supply and drainage.
=== Energy ===
BIM can be used to simulate energy consumption. It integrates and analyzes information at the construction stage to calculate the thermal environment, which can help shorten the construction period.
=== Material ===
BIM tracks material consumption, calculates material requirements, and manages material information uniformly.
== Rating systems ==
Sustainable rating systems are used to evaluate the environmental performance of buildings. These systems have common criteria and are similar in their evaluation of energy consumption, indoor environmental quality, water efficiency, and material. Three rating systems that can integrate with BIM are LEED, BREEAM, and Green Star.
The framework for integrating BIM with sustainable rating systems includes "design assistance" and "certification management" modules. The design assistance module supports designers with efficient, sustainability-oriented knowledge built into the BIM tool, accessed through the tool's application programming interface (API), to keep the design oriented toward certification. The certification management module is a web-based application used to manage project information, sustainability documentation and submissions for certification purposes.
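A hedged sketch of the kind of check a design assistance module might run through a BIM tool's API follows; the material data, the cost-weighted formula, the 20% threshold and the credit itself are illustrative assumptions, not the scoring rules of any actual rating system.

```python
from dataclasses import dataclass

@dataclass
class Material:
    name: str
    cost: float              # material cost used for weighting (assumed)
    recycled_fraction: float

def recycled_content_pct(materials: list[Material]) -> float:
    """Cost-weighted recycled content of the modeled materials."""
    total = sum(m.cost for m in materials)
    recycled = sum(m.cost * m.recycled_fraction for m in materials)
    return 100 * recycled / total if total else 0.0

# Invented example materials, as if extracted from a building model.
materials = [
    Material("structural steel", 120_000, 0.90),
    Material("concrete", 200_000, 0.15),
    Material("gypsum board", 30_000, 0.25),
]

pct = recycled_content_pct(materials)
print(f"Recycled content: {pct:.1f}% -> credit "
      f"{'met' if pct >= 20 else 'not met'}")  # 20% is an assumed threshold
```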
== See also ==
List of BIM software
== References == | Wikipedia/Building_information_modeling_in_green_building |
An integrated workplace management system (IWMS) is a software platform for managing an organization's workplace resources, including its real estate portfolio and its infrastructure and facilities assets. IWMS solutions are commonly packaged as an integrated suite or as individual modules that can be scaled over time. They are used by corporate occupiers, real estate services firms, facilities services providers, landlords and managing agents. Traditionally focused on supporting real estate and facilities professionals, IWMS solutions are becoming more employee-oriented, expanding their focus to include all building occupants and visitors.
== Core functional areas of IWMS ==
IWMS tends to integrate five core functional areas (or a subset of the five) within an enterprise.
=== Real estate management ===
This area involves activities associated with the acquisition (including purchase and lease), financial management and disposition of real property assets. Common IWMS features that support real estate management include strategic planning, transaction management, request for proposal (RFP) analysis, lease analysis, portfolio management, tax management, lease management, and lease accounting.
=== Capital project management ===
This area involves activities associated with the design and development of new facilities and the re-modeling or enhancement of existing facilities, including their reconfiguration and expansion. Common IWMS features that support capital project management include capital planning, design, funding, bidding, procurement, cost and resource management, project documentation and drawing management, scheduling, and critical path analysis.
=== Facilities management ===
This area covers activities related to the operation and optimized utilization of facilities. Common IWMS features that support facility management include strategic facilities planning (including scenario modeling and analysis), CAD and BIM integration, space management, site and employee service management, resource scheduling, and move management.
=== Maintenance management ===
This area covers activities related to the corrective and preventive maintenance and operation of facilities and assets. Common IWMS features that support maintenance management include asset management, work requests, preventive maintenance, work order administration, warranty tracking, inventory management, vendor management and facility condition assessment.
=== Sustainability and energy management ===
This area covers activities related to the measurement and reduction of resource consumption (including energy and water) and waste production (including greenhouse gas emissions) within facilities. Common IWMS features that support sustainability and energy management include integration with building management systems (BMS), sustainability performance metrics, energy bench-marking, carbon emissions tracking, and energy efficiency project analysis.
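As a minimal sketch of the energy bench-marking such a module might perform; the building names, consumption figures and benchmark value below are invented for illustration.

```python
# Invented portfolio data; an IWMS would pull these from meters or a BMS.
buildings = {
    "HQ Tower": {"annual_kwh": 2_400_000, "floor_area_m2": 18_000},
    "Depot A":  {"annual_kwh": 310_000,   "floor_area_m2": 4_500},
}
BENCHMARK_KWH_PER_M2 = 120   # assumed portfolio target

for name, b in buildings.items():
    eui = b["annual_kwh"] / b["floor_area_m2"]   # energy use intensity
    status = "above" if eui > BENCHMARK_KWH_PER_M2 else "within"
    print(f"{name}: {eui:.0f} kWh/m2 ({status} benchmark)")
```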
== Implementation planning ==
IWMS components can be implemented in any order, or all together as a single, comprehensive implementation, according to the organization's needs. As an implementation best practice, a phased approach implementing IWMS components sequentially is often advised, though a multi-function approach can still be followed. Each IWMS functional area requires the same implementation steps, though more complex implementations demand extra care, coordination and project management to ensure smooth functioning.
Adopting the as-shipped business processes included in the IWMS software, rather than retaining an organization's existing business processes, constitutes a "core success prerequisite and best practice" in the selection and implementation of IWMS software. As a result, organizations should limit configuration to only the most compelling cases.
== Analyst coverage ==
Since 2004, the IWMS market has been reported on by independent analyst firms Gartner, Verdantix, IWMSconnect and IWMSNews.
=== Gartner Market Guide for Integrated Workplace Management Systems ===
Until 2014, Gartner published the IWMS Magic Quadrant, evaluating IWMS vendors on two criteria: 'completeness of vision' and 'ability to execute'. As the market further matured, the Magic Quadrant reports were replaced by an annual Market Guide, focusing more on the development of the market itself than on comparative positioning.
The original author, Michael Bell, first described IWMS software as "integrated enterprise solutions that span the life cycle of facilities asset management, from acquisition and operations to disposition." In this first market definition, Gartner identified critical requirements of an IWMS, including a common database, advanced web services technologies and a system architecture that enabled user-defined workflow processes and customized portal interfaces.
Gartner subsequently released updated IWMS Market Guide reports. The latest, published on January 28, 2020, was authored by Carol Rozwell, former Distinguished VP Analyst, and Rashmi Choudhary, Principal Analyst.
=== Verdantix Green Quadrant Integrated Workplace Management Systems ===
Since 2017, Verdantix has published Integrated Workplace Management Systems Green Quadrant reports and Buyer's Guides. The research firm's proprietary Green Quadrant methodology uses weighted criteria for vendor evaluation, grouped under two categories: Capabilities (breadth and depth of software functionality) and Momentum (strategic success factors).
== Evolution of IWMS ==
While the core functions of IWMS remain critical, it is evolving into a cloud-based software platform that is built with the workplace experience at the center. Modern IWMS, providing an interactive user interface across multiple devices, enables employees to access a variety of workplace services from a mobile app, kiosk, or desktop. According to the latest Verdantix research, 80% of executives who consider IWMS software identified the quality of the user interface as the most important factor influencing their decision.
With the growth of the Internet of Things, a trend that is gaining ground is the integration of IWMS software and Smart Building solutions on a single platform, also termed IWMS+. This allows for IWMS to draw on real-time data from sensors to manage the modern work environment.
In 2022, Verdantix introduced the term "connected portfolio intelligence platforms" (CPIP), describing the next evolution of IWMS with a more open architecture and enhanced interconnectivity with smart buildings and their ecosystems.
== See also ==
Real estate
Facility management
Computer-aided facility management
Enterprise asset management
== References == | Wikipedia/Integrated_workplace_management_system |
Integrated project delivery (IPD) is a construction project delivery method that seeks the efficiency and involvement of all participants (people, systems, business structures and practices) through all phases of design, fabrication, and construction. IPD combines ideas from integrated practice and lean construction. The objectives of IPD are to increase productivity, reduce waste (waste being described as resources spent on activities that do not add value to the end product), avoid time overruns, enhance final product quality, and reduce conflicts between owners, architects and contractors during construction. IPD emphasizes the use of technology to facilitate communication between the parties involved in the construction process.
== Background ==
The construction industry has suffered a productivity decline since the 1960s, while all other non-farm industries have seen large productivity gains. Proponents of integrated project delivery argue that problems in contemporary construction, such as buildings that run behind schedule and over budget, are due to adversarial relations between the owner, general contractor and architect.
Using ideas developed by Toyota in their Toyota Production System and computer technology advances, the new focus in IPD is the final value created for the owner. In essence, IPD sees all allocation of resources for any activity that does not add value to the end product (the finished building) as wasteful.
== IPD in practice ==
In practice, IPD is a process in which all disciplines in a construction project work as one firm. The primary team members include the architect, key technical consultants, a general contractor and subcontractors. The growing use of building information modeling in the construction industry is allowing easier sharing of information between project participants using IPD and is considered a tool for increasing productivity throughout the construction process.
Unlike the design–build project delivery method which typically places the contractor in the leading role on a building project, IPD represents a return to the "master builder" concept where the entire building team including the owner, architect, general contractor, building engineers, fabricators, and subcontractors work collaboratively throughout the construction process.
== Multi-Party Agreements ==
One common way to further the goals of IPD is through a multi-party agreement among key participants. In a multi-party agreement (MPA), the primary project participants execute a single contract specifying their respective roles, rights, obligations, and liabilities. In effect, the multi-party agreement creates a temporary virtual, and in some instances formal, organization to realize a specific project. Because a single agreement is used, each party understands its role in relationship to the other participants. Compensation structures are often open-book, so each party's interests and contributions are similarly transparent. Multi-party agreements require trust, as compensation is tied to overall project success and individual success depends on the contributions of all team members.
Common forms of multi-party agreements include:
project alliances, which create a project structure where the owner guarantees the direct costs of non-owner parties, while payment of profit, overhead and bonus depends on project outcome;
single-purpose entities, which are temporary but formal legal structures created to realize a specific project;
and relational contracts, which are similar to project alliances in that a virtual organization is created from individual entities, but differ in their approach to compensation, risk sharing and decision making.
== The role of technology in IPD ==
The adoption of IPD as a standard for collaborative good practice on construction projects presents its own problems. As most construction projects involve disparate stakeholders, traditional IT solutions are not conducive to collaborative working. Sharing files behind IT firewalls, large email attachment sizes, and the inability to view all manner of file types without the native software all make IPD difficult.
The need to overcome these collaborative IT challenges has been one of the drivers behind the growth of online construction collaboration technology. Since 2000, a new generation of technology companies has evolved, using software as a service (SaaS) to facilitate IPD.
This collaboration software streamlines the flow of documentation, communications and workflows, ensuring everyone works from 'one version of the truth'. Collaboration software allows users in disparate locations to keep all communications, documents and drawings, forms and data, among other types of electronic file, in one place. Version control is assured, and users can view and mark up files online without needing the native software. The technology also builds project confidence and mitigates risk thanks to inbuilt audit trails.
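As a hedged illustration of the revision-control and audit-trail behaviour described above: in the sketch below, each upload becomes a new immutable revision and every action is logged. The class design, field names and storage model are invented, not drawn from any particular product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Revision:
    number: int
    author: str
    uploaded: datetime

@dataclass
class Document:
    name: str
    revisions: list[Revision] = field(default_factory=list)
    audit_trail: list[str] = field(default_factory=list)

    def upload(self, author: str) -> Revision:
        rev = Revision(len(self.revisions) + 1, author,
                       datetime.now(timezone.utc))
        self.revisions.append(rev)
        self.audit_trail.append(f"rev {rev.number} uploaded by {author}")
        return rev

    def current(self) -> Revision:
        return self.revisions[-1]   # everyone sees the same latest revision

doc = Document("GA-Plan-Level-01.pdf")  # hypothetical drawing
doc.upload("architect")
doc.upload("structural engineer")
print(doc.current().number, doc.audit_trail)
```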
== Criticism ==
A significant criticism of IPD is that a single-minded focus on efficiency can be associated with a lack of concern for employee safety and well-being. This can lead to poor safety performance and increased stress levels among construction workers as they strive to reach higher goals with fewer resources.
== Job Order Contracting ==
Job Order Contracting (JOC) is a form of integrated project delivery that specifically targets repair, renovation, and minor new construction. It has been reported to deliver over 90% of projects on time, on budget, and to the satisfaction of owners, contractors, and customers alike.
== See also ==
Building information modeling
Lean construction
Patrick MacLeamy, inventor of the MacLeamy Curve
== References ==
=== Selected articles on integrated project delivery ===
2017 NIBS Delivering Better Facilities through Lean Construction and Owner Leadership
Integrated Project Delivery: A Platform for Efficient Construction BuildingGreen, November 1, 2008
Another look: Is IPD the solution? Daily Journal of Commerce, Oct. 20, 2008
Just a month old, the BIM Addendum has won national endorsements Philadelphia Business Journal, Aug. 21, 2008
Integrated Project Delivery Improves Efficiency, Streamlines Construction – Lean Management Approach Eliminates Waste and Enhances Project Outcome – Tradeline, July 16, 2008
Red Business, Blue Business: If architects do not take the leadership role on integrated practice, they will cede this turf DesignIntelligence, May 30, 2008
AIA: American Institute of Architects delivers new contract documents to encourage Integrated Project Delivery architosh, May 21, 2008
AIA Introduces Integrated Project Delivery Agreements Contract Magazine, April 28, 2008
Integrated Project Delivery pulls together people, systems, business structures and practices Daily Commercial News and Construction Record, Mar. 12, 2008
New Colorado Law Permits IPD For State and Local Governments – Colorado Real Estate Journal, February 5, 2008
Changing Project Delivery Strategy – An Implementation Framework Journal of Public Works Management & Policy, Jan. 2008; vol. 12: pp. 483–502
AIA and AIA California Council Partner Introduce Integrated Project Delivery: A Guide Cadalyst, Nov. 6, 2007
The Integrated Agreement for Lean Project Delivery Construction Lawyer, American Bar Association, McDonough Holland & Allen PC, Number 3, Volume 26, Summer 2006
Managing Integrated Project Delivery CMAA College of Fellows, November 2009
== External links ==
Governor Ritter Signs Integrated Project Delivery Bill into Colorado Law
Integrated Project Delivery: First Principles for Owners and Teams – 3xPT Strategy Group, July 7, 2008: Construction Users Roundtable (CURT), Associated General Contractors of America (AGC), American Institute of Architects
National Institute of Building Sciences – many related articles on Integrated Project Delivery, Building Information Modeling
Design Responsibility in Integrated Project Delivery: Looking Back and Moving Forward – Donovan/Hatem LLP, Jan. 2008
HOUSE BILL 07-1342 Colorado State Government, June 1, 2007 | Wikipedia/Integrated_project_delivery |
Building information modeling (BIM) is an approach involving the generation and management of digital representations of the physical and functional characteristics of buildings or other physical assets and facilities. BIM is supported by various tools, processes, technologies and contracts. Building information models (BIMs) are computer files (often but not always in proprietary formats and containing proprietary data) which can be extracted, exchanged or networked to support decision-making regarding a built asset. BIM software is used by individuals, businesses and government agencies who plan, design, construct, operate and maintain buildings and diverse physical infrastructures, such as water, refuse, electricity, gas, communication utilities, roads, railways, bridges, ports and tunnels.
The concept of BIM has been in development since the 1970s, but it only became an agreed term in the early 2000s. The development of standards and the adoption of BIM has progressed at different speeds in different countries. Developed by buildingSMART, Industry Foundation Classes (IFCs) – data structures for representing information – became an international standard, ISO 16739, in 2013, and BIM process standards developed in the United Kingdom from 2007 onwards formed the basis of an international standard, ISO 19650, launched in January 2019.
== History ==
The concept of BIM has existed since the 1970s. The first software tools developed for modeling buildings emerged in the late 1970s and early 1980s, and included workstation products such as Chuck Eastman's Building Description System and GLIDE, RUCAPS, Sonata, Reflex and Gable 4D Series. The early applications, and the hardware needed to run them, were expensive, which limited widespread adoption.
The pioneering role of applications such as RUCAPS, Sonata and Reflex has been recognized by Laiserin as well as the UK's Royal Academy of Engineering; former GMW employee Jonathan Ingram worked on all three products. What became known as BIM products differed from architectural drafting tools such as AutoCAD by allowing the addition of further information (time, cost, manufacturers' details, sustainability, and maintenance information, etc.) to the building model.
As Graphisoft had been developing such solutions for longer than its competitors, Laiserin regarded its ArchiCAD application as then "one of the most mature BIM solutions on the market." Following its launch in 1987, ArchiCAD became regarded by some as the first implementation of BIM, as it was the first CAD product on a personal computer able to create both 2D and 3D geometry, as well as the first commercial BIM product for personal computers. However, Graphisoft founder Gábor Bojár has acknowledged to Jonathan Ingram in an open letter, that Sonata "was more advanced in 1986 than ArchiCAD at that time", adding that it "surpassed already the matured definition of 'BIM' specified only about one and a half decade later".
The term 'building model' (in the sense of BIM as used today) was first used in papers in the mid-1980s: in a 1985 paper by Simon Ruffle eventually published in 1986, and later in a 1986 paper by Robert Aish – then at GMW Computers Ltd, developer of RUCAPS software – referring to the software's use at London's Heathrow Airport. The term 'Building Information Model' first appeared in a 1992 paper by G.A. van Nederveen and F. P. Tolman.
However, the terms 'Building Information Model' and 'Building Information Modeling' (including the acronym "BIM") did not become popularly used until some 10 years later. Facilitating exchange and interoperability of information in digital format was promoted under differing terminology: by Graphisoft as "Virtual Building" or "Single Building Model", by Bentley Systems as "Integrated Project Models", and by Autodesk and Vectorworks as "Building Information Modeling". In 2002, Autodesk released a white paper entitled "Building Information Modeling," and other software vendors also started to assert their involvement in the field. By hosting contributions from Autodesk, Bentley Systems and Graphisoft, plus other industry observers, in 2003 Jerry Laiserin helped popularize and standardize the term as a common name for the digital representation of the building process.
=== Interoperability and BIM standards ===
As some BIM software developers have created proprietary data structures in their software, data and files created by one vendor's applications may not work in other vendor solutions. To achieve interoperability between applications, neutral, non-proprietary or open standards for sharing BIM data among different software applications have been developed.
Poor software interoperability has long been regarded as an obstacle to industry efficiency in general and to BIM adoption in particular. In August 2004 a US National Institute of Standards and Technology (NIST) report conservatively estimated that $15.8 billion was lost annually by the U.S. capital facilities industry due to inadequate interoperability arising from "the highly fragmented nature of the industry, the industry’s continued paper-based business practices, a lack of standardization, and inconsistent technology adoption among stakeholders".
An early BIM standard was the CIMSteel Integration Standard, CIS/2, a product model and data exchange file format for structural steel project information (CIMsteel: Computer Integrated Manufacturing of Constructional Steelwork). CIS/2 enables seamless and integrated information exchange during the design and construction of steel framed structures. It was developed by the University of Leeds and the UK's Steel Construction Institute in the late 1990s, with inputs from Georgia Tech, and was approved by the American Institute of Steel Construction as its data exchange format for structural steel in 2000.
BIM is often associated with Industry Foundation Classes (IFCs) and aecXML – data structures for representing information – developed by buildingSMART. IFC is recognised by the ISO and has been an international standard, ISO 16739, since 2013. OpenBIM is an initiative by buildingSMART that promotes open standards and interoperability. Based on the IFC standard, it allows vendor-neutral BIM data exchange. OpenBIM standards also include BIM Collaboration Format (BCF) for issue tracking and Information Delivery Specification (IDS) for defining model requirements.
Construction Operations Building information exchange (COBie) is also associated with BIM. COBie was devised by Bill East of the United States Army Corps of Engineers in 2007, and helps capture and record equipment lists, product data sheets, warranties, spare parts lists, and preventive maintenance schedules. This information is used to support operations, maintenance and asset management once a built asset is in service. In December 2011, it was approved by the US-based National Institute of Building Sciences as part of its National Building Information Model (NBIMS-US) standard. COBie has been incorporated into software, and may take several forms including spreadsheet, IFC, and ifcXML. In early 2013 BuildingSMART was working on a lightweight XML format, COBieLite, which became available for review in April 2013. In September 2014, a code of practice regarding COBie was issued as a British Standard: BS 1192-4.
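Since COBie can take spreadsheet form, the following simplified sketch writes COBie-style component records to a file (CSV here for brevity). Real COBie worksheets carry many more columns; this subset and the sample rows are illustrative assumptions.

```python
import csv

# Simplified Component-sheet records; real COBie carries many more columns.
components = [
    {"Name": "AHU-01", "TypeName": "Air Handling Unit", "Space": "Plant Room",
     "SerialNumber": "SN-48151", "InstallationDate": "2023-05-02"},
    {"Name": "VLV-12", "TypeName": "Isolation Valve", "Space": "Riser 2",
     "SerialNumber": "SN-62342", "InstallationDate": "2023-06-17"},
]

with open("cobie_component.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(components[0]))
    writer.writeheader()
    writer.writerows(components)
```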
In January 2019, ISO published the first two parts of ISO 19650, providing a framework for building information modelling, based on process standards developed in the United Kingdom. UK BS and PAS 1192 specifications form the basis of further parts of the ISO 19650 series, with parts on asset management (Part 3) and security management (Part 5) published in 2020.
The IEC/ISO 81346 series for reference designation has published 81346-12:2018, also known as RDS-CW (Reference Designation System for Construction Works). The use of RDS-CW offers the prospect of integrating BIM with complementary international standards based classification systems being developed for the Power Plant sector.
== Definition ==
ISO 19650-1:2018 defines BIM as:
Use of a shared digital representation of a built asset to facilitate design, construction and operation processes to form a reliable basis for decisions.
The US National Building Information Model Standard Project Committee has the following definition:
Building Information Modeling (BIM) is a digital representation of physical and functional characteristics of a facility. A BIM is a shared knowledge resource for information about a facility forming a reliable basis for decisions during its life-cycle; defined as existing from earliest conception to demolition.
Traditional building design was largely reliant upon two-dimensional technical drawings (plans, elevations, sections, etc.). Building information modeling extends the three primary spatial dimensions (width, height and depth), incorporating information about time (so-called 4D BIM), cost (5D BIM), asset management, sustainability, etc. BIM therefore covers more than just geometry. It also covers spatial relationships, geospatial information, quantities and properties of building components (for example, manufacturers' details), and enables a wide range of collaborative processes relating to the built asset from initial planning through to construction and then throughout its operational life.
BIM authoring tools present a design as combinations of "objects" – vague and undefined, generic or product-specific, solid shapes or void-space oriented (like the shape of a room), that carry their geometry, relations, and attributes. BIM applications allow extraction of different views from a building model for drawing production and other uses. These different views are automatically consistent, being based on a single definition of each object instance. BIM software also defines objects parametrically; that is, the objects are defined as parameters and relations to other objects so that if a related object is amended, dependent ones will automatically also change. Each model element can carry attributes for selecting and ordering them automatically, providing cost estimates as well as material tracking and ordering.
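A toy sketch of this parametric behaviour follows: the window's position is defined relative to its host wall, so amending the wall updates the window automatically. The class and attribute names are invented for illustration, not taken from any BIM product.

```python
class Wall:
    def __init__(self, length_m: float, height_m: float):
        self.length_m = length_m
        self.height_m = height_m

class Window:
    def __init__(self, host: Wall, offset_ratio: float, width_m: float):
        self.host = host
        self.offset_ratio = offset_ratio  # position as fraction of wall length
        self.width_m = width_m

    @property
    def centre_m(self) -> float:
        # Derived, not stored: always consistent with the current wall length.
        return self.host.length_m * self.offset_ratio

wall = Wall(length_m=8.0, height_m=3.0)
window = Window(wall, offset_ratio=0.5, width_m=1.2)
print(window.centre_m)   # 4.0

wall.length_m = 10.0     # amend the related object...
print(window.centre_m)   # 5.0 ...and the dependent object follows
```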
For the professionals involved in a project, BIM enables a virtual information model to be shared by the design team (architects, landscape architects, surveyors, civil, structural and building services engineers, etc.), the main contractor and subcontractors, and the owner/operator. Each professional adds discipline-specific data to the shared model – commonly, a 'federated' model which combines several different disciplines' models into one. Combining models enables visualisation of all models in a single environment, better coordination and development of designs, enhanced clash avoidance and detection, and improved time and cost decision-making.
=== BIM wash ===
"BIM wash" or "BIM washing" is a term sometimes used to describe inflated, and/or deceptive, claims of using or delivering BIM services or products.
== Usage throughout the asset life cycle ==
Use of BIM goes beyond the planning and design phase of a project, extending throughout the life cycle of the asset. The supporting processes of building lifecycle management include cost management, construction management, project management, facility operation and application in green building.
=== Common Data Environment ===
A 'Common Data Environment' (CDE) is defined in ISO 19650 as an:
Agreed source of information for any given project or asset, for collecting, managing and disseminating each information container through a managed process.
A CDE workflow describes the processes to be used while a CDE solution can provide the underlying technologies. A CDE is used to share data across a project or asset lifecycle, supporting collaboration across a whole project team. The concept of a CDE overlaps with enterprise content management, ECM, but with a greater focus on BIM issues.
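A minimal sketch of the "managed process" idea is shown below, moving an information container through the agreed states commonly used in CDE workflows (work in progress, shared, published, archived); the class design itself is an illustrative assumption.

```python
from enum import Enum

class State(Enum):
    WORK_IN_PROGRESS = "WIP"
    SHARED = "shared"
    PUBLISHED = "published"
    ARCHIVED = "archived"

# Only transitions permitted by the managed process are allowed.
ALLOWED = {
    State.WORK_IN_PROGRESS: {State.SHARED},
    State.SHARED: {State.PUBLISHED, State.WORK_IN_PROGRESS},
    State.PUBLISHED: {State.ARCHIVED},
    State.ARCHIVED: set(),
}

class InformationContainer:
    def __init__(self, name: str):
        self.name = name
        self.state = State.WORK_IN_PROGRESS

    def transition(self, new_state: State) -> None:
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"{self.state} -> {new_state} not allowed")
        self.state = new_state

doc = InformationContainer("STR-model-003")  # hypothetical container name
doc.transition(State.SHARED)      # issued for coordination
doc.transition(State.PUBLISHED)   # authorized for use
```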
=== Management of building information models ===
Building information models span the whole concept-to-occupation time-span. To ensure efficient management of information processes throughout this span, a BIM manager might be appointed. The BIM manager is retained by a design build team on the client's behalf from the pre-design phase onwards to develop and to track the object-oriented BIM against predicted and measured performance objectives, supporting multi-disciplinary building information models that drive analysis, schedules, take-off and logistics. Companies are also now considering developing BIMs in various levels of detail, since depending on the application of BIM, more or less detail is needed, and there is varying modeling effort associated with generating building information models at different levels of detail.
=== BIM in construction management ===
Participants in the building process are constantly challenged to deliver successful projects despite tight budgets, limited staffing, accelerated schedules, and limited or conflicting information. The major disciplines, such as architectural, structural and MEP design, should be well coordinated, as two components cannot occupy the same place at the same time. BIM can additionally aid in collision detection, identifying the exact location of discrepancies.
The BIM concept envisages virtual construction of a facility prior to its actual physical construction, in order to reduce uncertainty, improve safety, work out problems, and simulate and analyze potential impacts. Sub-contractors from every trade can input critical information into the model before beginning construction, with opportunities to pre-fabricate or pre-assemble some systems off-site. Waste can be minimised on-site and products delivered on a just-in-time basis rather than being stock-piled on-site.
Quantities and shared properties of materials can be extracted easily. Scopes of work can be isolated and defined. Systems, assemblies and sequences can be shown in a relative scale with the entire facility or group of facilities. BIM also prevents errors by enabling conflict or 'clash detection' whereby the computer model visually highlights to the team where parts of the building (e.g.:structural frame and building services pipes or ducts) may wrongly intersect.
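The core of clash detection can be sketched with axis-aligned bounding boxes, as below; real BIM tools use exact solid geometry and tolerance rules, so this is only the underlying idea, with invented element names and coordinates.

```python
from dataclasses import dataclass

@dataclass
class Box:
    name: str
    min_pt: tuple[float, float, float]
    max_pt: tuple[float, float, float]

def clashes(a: Box, b: Box) -> bool:
    # Boxes intersect only if they overlap on every axis.
    return all(a.min_pt[i] <= b.max_pt[i] and b.min_pt[i] <= a.max_pt[i]
               for i in range(3))

beam = Box("steel beam", (0, 0, 3.0), (6, 0.3, 3.4))
duct = Box("supply duct", (2, 0.1, 3.2), (2.5, 0.25, 3.6))
if clashes(beam, duct):
    print(f"Clash: {beam.name} vs {duct.name}")
```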
=== BIM in facility operation and asset management ===
BIM can bridge the information loss associated with handing a project from design team, to construction team and to building owner/operator, by allowing each group to add to and reference back to all information they acquire during their period of contribution to the BIM model. Enabling an effective handover of information from design and construction (including via IFC or COBie) can thus yield benefits to the facility owner or operator. BIM-related processes relating to longer-term asset management are also covered in ISO-19650 Part 3.
For example, a building owner may find evidence of a water leak in a building. Rather than exploring the physical building, the owner may turn to the model and see that a water valve is located in the suspect location. The owner could also have in the model the specific valve size, manufacturer, part number, and any other information ever researched in the past, pending adequate computing power. Such problems were initially addressed by Leite and Akinci when developing a vulnerability representation of facility contents and threats for supporting the identification of vulnerabilities in building emergencies.
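The valve look-up described above can be sketched as a query against asset records keyed by location, holding whatever attributes were captured at handover; the data, structure and manufacturer name are invented for illustration.

```python
# Hypothetical handover data, keyed by (level, zone).
assets = {
    ("Level 2", "Riser B"): {
        "type": "isolation valve",
        "size_mm": 50,
        "manufacturer": "ExampleCo",   # invented manufacturer
        "part_number": "EV-50-316",    # invented part number
    },
}

suspect_location = ("Level 2", "Riser B")
valve = assets.get(suspect_location)
if valve:
    print(f"Found {valve['type']} ({valve['size_mm']} mm), "
          f"part {valve['part_number']} by {valve['manufacturer']}")
```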
Dynamic information about the building, such as sensor measurements and control signals from the building systems, can also be incorporated within software to support analysis of building operation and maintenance. As such, BIM in facility operation can be related to internet of things approaches; rapid access to data may also be aided by use of mobile devices (smartphones, tablets) and machine-readable RFID tags or barcodes, while integration and interoperability with other business systems (CAFM, ERP, BMS, IWMS, etc.) can aid operational reuse of data.
There have been attempts at creating information models for older, pre-existing facilities. Approaches include referencing key metrics such as the Facility Condition Index (FCI), using 3D laser-scanning surveys and photogrammetry techniques (separately or in combination), or digitizing traditional building surveying methodologies by using mobile technology to capture accurate measurements and operation-related information about the asset that can be used as the basis for a model. Trying to retrospectively model a building constructed in, say, 1927 requires numerous assumptions about design standards, building codes, construction methods, materials, etc., and is therefore more complex than building a model during design.
One of the challenges to the proper maintenance and management of existing facilities is understanding how BIM can be utilized to support a holistic understanding and implementation of building management practices and "cost of ownership" principles that support the full product lifecycle of a building. An American National Standard entitled APPA 1000 – Total Cost of Ownership for Facilities Asset Management incorporates BIM to factor in a variety of critical requirements and costs over the life-cycle of the building, including but not limited to: replacement of energy, utility, and safety systems; continual maintenance of the building exterior and interior and replacement of materials; updates to design and functionality; and recapitalization costs.
=== BIM in green building ===
BIM in green building, or "green BIM", is a process that can help architecture, engineering and construction firms to improve sustainability in the built environment. It can allow architects and engineers to integrate and analyze environmental issues in their design over the life cycle of the asset.
The ERANet projects EPC4SES and FinSESCo worked on digital representation of buildings' energy demand. The nucleus is the XML produced when issuing Energy Performance Certificates, supplemented with roof data so that the position and size of PV or PV/T installations can be retrieved.
== International developments ==
=== Asia ===
==== China ====
China began exploring informatisation in 2001. The Ministry of Construction announced BIM as the key application technology of informatisation in "Ten new technologies of construction industry" (by 2010). The Ministry of Science and Technology (MOST) explicitly announced BIM technology as a national key research and application project in the "12th Five-Year" Science and Technology Development Planning. The year 2011 has therefore been described as "The First Year of China's BIM".
==== Hong Kong ====
In 2006 the Hong Kong Housing Authority introduced BIM, and then set a target of full BIM implementation in 2014/2015. BuildingSmart Hong Kong was inaugurated in Hong Kong SAR in late April 2012. The Government of Hong Kong mandates the use of BIM for all government projects over HK$30M since 1 January 2018.
==== India ====
India Building Information Modelling Association (IBIMA) is a national-level society that represents the entire Indian BIM community. In India BIM is also known as VDC: Virtual Design and Construction. Due to its population and economic growth, India has an expanding construction market. In spite of this, BIM usage was reported by only 22% of respondents in a 2014 survey. In 2019, government officials said BIM could help save up to 20% by shortening construction time, and urged wider adoption by infrastructure ministries.
==== Iran ====
The Iran Building Information Modeling Association (IBIMA) was founded in 2012 by professional engineers from five universities in Iran, including the Civil and Environmental Engineering Department at Amirkabir University of Technology. While it is not currently active, IBIMA aims to share knowledge resources to support construction engineering management decision-making.
==== Malaysia ====
BIM implementation is targeted towards BIM Stage 2 by the year 2020 led by the Construction Industry Development Board (CIDB Malaysia). Under the Construction Industry Transformation Plan (CITP 2016–2020), it is hoped more emphasis on technology adoption across the project life-cycle will induce higher productivity.
==== Singapore ====
The Building and Construction Authority (BCA) has announced that BIM would be introduced for architectural submission (by 2013), structural and M&E submissions (by 2014) and eventually for plan submissions of all projects with gross floor area of more than 5,000 square meters by 2015. The BCA Academy is training students in BIM.
==== Japan ====
The Ministry of Land, Infrastructure and Transport (MLIT) announced the start of a BIM pilot project in government buildings and repairs in 2010. The Japan Institute of Architects (JIA) released BIM guidelines in 2012, setting out the agenda and expected effects of BIM for architects. MLIT has announced that BIM will be mandated for all of its public works from the 2023 fiscal year, except those having particular reasons. Works subject to the WTO Government Procurement Agreement shall comply with published ISO standards related to BIM, such as the ISO 19650 series, as determined by Article 10 (Technical Specification) of the Agreement.
==== South Korea ====
Small BIM-related seminars and independent BIM effort existed in South Korea even in the 1990s. However, it was not until the late 2000s that the Korean industry paid attention to BIM. The first industry-level BIM conference was held in April 2008, after which, BIM has been spread very rapidly. Since 2010, the Korean government has been gradually increasing the scope of BIM-mandated projects. McGraw Hill published a detailed report in 2012 on the status of BIM adoption and implementation in South Korea.
==== United Arab Emirates ====
Dubai Municipality issued a circular (196) in 2014 mandating BIM use for buildings of a certain size, height or type. The one page circular initiated strong interest in BIM and the market responded in preparation for more guidelines and direction. In 2015 the Municipality issued another circular (207) titled 'Regarding the expansion of applying the (BIM) on buildings and facilities in the emirate of Dubai' which made BIM mandatory on more projects by reducing the minimum size and height requirement for projects requiring BIM. This second circular drove BIM adoption further with several projects and organizations adopting UK BIM standards as best practice. In 2016, the UAE's Quality and Conformity Commission set up a BIM steering group to investigate statewide adoption of BIM.
=== Europe ===
==== Austria ====
Austrian standards for digital modeling are summarized in ÖNORM A 6241, published on 15 March 2015. ÖNORM A 6241-1 (BIM Level 2), which replaced ÖNORM A 6240-4, extended coverage to the detailed and executive design stages and corrected earlier gaps in definitions. ÖNORM A 6241-2 (BIM Level 3) includes all the requirements for BIM Level 3 (iBIM).
==== Czech Republic ====
The Czech BIM Council, established in May 2011, aims to implement BIM methodologies into the Czech building and designing processes, education, standards and legislation.
==== Estonia ====
In Estonia, a digital construction cluster (Digitaalehituse Klaster) was formed in 2015 to develop BIM solutions for the whole construction life-cycle. Its strategic objective is to develop an innovative digital construction environment, as well as VDC new product development, a grid and an e-construction portal, to increase the international competitiveness and sales of Estonian businesses in the construction field. The cluster is co-funded equally by European Structural and Investment Funds through Enterprise Estonia and by the members of the cluster, with a total budget of 600,000 euros for the period 2016–2018.
==== France ====
The French arm of buildingSMART, called Mediaconstruct (existing since 1989), supports digital transformation in France. A digital plan for building transition (French acronym PTNB) was created in 2013 and ran under several ministries from 2015 to 2017. A 2013 survey of European BIM practice showed France in last place, but, with government support, by 2017 it had risen to third place, with more than 30% of real estate projects carried out using BIM. PTNB was superseded in 2018 by Plan BIM 2022, administered by an industry body, the Association for the Development of Digital in Construction (AND Construction), founded in 2017, and supported by a digital platform, KROQI, developed and launched in 2017 by CSTB (France's Scientific and Technical Centre for Building).
==== Germany ====
In December 2015, the German minister for transport Alexander Dobrindt announced a timetable for the introduction of mandatory BIM for German road and rail projects from the end of 2020. Speaking in April 2016, he said digital design and construction must become standard for construction projects in Germany, with Germany two to three years behind The Netherlands and the UK in aspects of implementing BIM. BIM was piloted in many areas of German infrastructure delivery and in July 2022 Volker Wissing, Federal Minister for Digital and Transport, announced that, from 2025, BIM will be used as standard in the construction of federal trunk roads in addition to the rail sector.
==== Ireland ====
In November 2017, Ireland's Department for Public Expenditure and Reform launched a strategy to increase use of digital technology in delivery of key public works projects, requiring the use of BIM to be phased in over the next four years.
==== Italy ====
Through the new D.l. 50 of April 2016, Italy incorporated several European directives into its own legislation, including 2014/24/EU on public procurement. The decree states among the main goals of public procurement the "rationalization of designing activities and of all connected verification processes, through the progressive adoption of digital methods and electronic instruments such as Building and Infrastructure Information Modelling". A norm in eight parts is also being written to support the transition: UNI 11337-1, UNI 11337-4 and UNI 11337-5 were published in January 2017, with five further parts to follow within a year.
In early 2018 the Italian Ministry of Infrastructure and Transport issued a decree (DM 01/12/17) creating a governmental BIM Mandate compelling public client organisations to adopt a digital approach by 2025, with an incremental obligation which will start on 1 January 2019.
==== Lithuania ====
Lithuania is moving towards adoption of BIM infrastructure by founding a public body "Skaitmeninė statyba" (Digital Construction), which is managed by 13 associations. Also, there is a BIM work group established by Lietuvos Architektų Sąjunga (a Lithuanian architects body). The initiative intends Lithuania to adopt BIM, Industry Foundation Classes (IFC) and National Construction Classification as standard. An international conference "Skaitmeninė statyba Lietuvoje" (Digital Construction in Lithuania) has been held annually since 2012.
==== The Netherlands ====
On 1 November 2011, the Rijksgebouwendienst, the agency within the Dutch Ministry of Housing, Spatial Planning and the Environment that manages government buildings, introduced the Rgd BIM Standard, which it updated on 1 July 2012.
==== Norway ====
In Norway BIM has been used increasingly since 2008. Several large public clients require use of BIM in open formats (IFC) in most or all of their projects. The Government Building Authority bases its processes on BIM in open formats to increase process speed and quality, and all large and several small and medium-sized contractors use BIM. National BIM development is centred around the local organisation, buildingSMART Norway which represents 25% of the Norwegian construction industry.
==== Poland ====
BIMKlaster (BIM Cluster) is a non-governmental, non-profit organisation established in 2012 with the aim of promoting BIM development in Poland. In September 2016, the Ministry of Infrastructure and Construction began a series of expert meetings concerning the application of BIM methodologies in the construction industry.
==== Portugal ====
Created in 2015 to promote the adoption of BIM in Portugal and its normalisation, the Technical Committee for BIM Standardisation, CT197-BIM, has created the first strategic document for construction 4.0 in Portugal, aiming to align the country's industry around a common vision, integrated and more ambitious than a simple technology change.
==== Russia ====
The Russian government has approved a list of the regulations that provide the creation of a legal framework for the use of information modeling of buildings in construction and encourages the use of BIM in government projects.
==== Slovakia ====
The BIM Association of Slovakia, "BIMaS", was established in January 2013 as the first Slovak professional organisation focused on BIM. Although there are neither standards nor legislative requirements to deliver projects in BIM, many architects, structural engineers and contractors, plus a few investors are already applying BIM. A Slovak implementation strategy created by BIMaS and supported by the Chamber of Civil Engineers and Chamber of Architects has yet to be approved by Slovak authorities due to their low interest in such innovation.
==== Spain ====
A July 2015 meeting at Spain's Ministry of Infrastructure [Ministerio de Fomento] launched the country's national BIM strategy, making BIM a mandatory requirement on public sector projects with a possible starting date of 2018. Following a February 2015 BIM summit in Barcelona, professionals in Spain established a BIM commission (ITeC) to drive the adoption of BIM in Catalonia.
==== Switzerland ====
BIM awareness in Switzerland began in 2009 through the initiative of buildingSMART Switzerland, and from 2013 it spread among a broader community of engineers and architects, prompted by the open competition for Basel's Felix Platter Hospital, for which a BIM coordinator was sought. BIM has also been the subject of events held by the Swiss Society of Engineers and Architects, SIA.
==== United Kingdom ====
In May 2011 UK Government Chief Construction Adviser Paul Morrell called for BIM adoption on UK government construction projects. Morrell also told construction professionals to adopt BIM or be "Betamaxed out". In June 2011 the UK government published its BIM strategy, announcing its intention to require collaborative 3D BIM (with all project and asset information, documentation and data being electronic) on its projects by 2016. Initially, compliance would require building data to be delivered in a vendor-neutral 'COBie' format, thus overcoming the limited interoperability of BIM software suites available on the market. The UK Government BIM Task Group led the government's BIM programme and requirements, including a free-to-use set of UK standards and tools that defined 'level 2 BIM'. In April 2016, the UK Government published a new central web portal as a point of reference for the industry for 'level 2 BIM'. The work of the BIM Task Group then continued under the stewardship of the Cambridge-based Centre for Digital Built Britain (CDBB), announced in December 2017 and formally launched in early 2018.
Outside of government, industry adoption of BIM since 2016 has been led by the UK BIM Alliance, an independent, not-for-profit, collaboratively-based organisation formed to champion and enable the implementation of BIM, and to connect and represent organisations, groups and individuals working towards digital transformation of the UK's built environment industry. In November 2017, the UK BIM Alliance merged with the UK and Ireland chapter of BuildingSMART. In October 2019, CDBB, the UK BIM Alliance and the BSI Group launched the UK BIM Framework. Superseding the BIM levels approach, the framework describes an overarching approach to implementing BIM in the UK, giving free guidance on integrating the international ISO 19650 series of standards into UK processes and practice.
National Building Specification (NBS) has published research into BIM adoption in the UK since 2011, and in 2020 published its 10th annual BIM report. In 2011, 43% of respondents had not heard of BIM; in 2020 73% said they were using BIM.
=== North America ===
==== Canada ====
BIM is not mandatory in Canada. Several organizations support BIM adoption and implementation in Canada: the Canada BIM Council (CANBIM, founded in 2008), the Institute for BIM in Canada, and buildingSMART Canada (the Canadian chapter of buildingSMART International). Public Services and Procurement Canada (formerly Public Works and Government Services Canada) is committed to using non-proprietary or "OpenBIM" BIM standards and avoids specifying any specific proprietary BIM format. Designers are required to use the international standards of interoperability for BIM (IFC).
==== United States ====
The Associated General Contractors of America and US contracting firms have developed various working definitions of BIM that describe it generally as:
an object-oriented building development tool that utilizes 5-D modeling concepts, information technology and software interoperability to design, construct and operate a building project, as well as communicate its details.
Although the concept of BIM and relevant processes are being explored by contractors, architects and developers alike, the term itself has been questioned and debated with alternatives including Virtual Building Environment (VBE) also considered. Unlike some countries such as the UK, the US has not adopted a set of national BIM guidelines, allowing different systems to remain in competition. In 2021, the National Institute of Building Sciences (NIBS) looked at applying UK BIM experiences to developing shared US BIM standards and processes. The US National BIM Standard had largely been developed through volunteer efforts; NIBS aimed to create a national BIM programme to drive effective adoption at a national scale.
BIM is seen to be closely related to Integrated Project Delivery (IPD) where the primary motive is to bring the teams together early on in the project. A full implementation of BIM also requires the project teams to collaborate from the inception stage and formulate model sharing and ownership contract documents.
The American Institute of Architects has defined BIM as "a model-based technology linked with a database of project information",[3] and this reflects the general reliance on database technology as the foundation. In the future, structured text documents such as specifications may be able to be searched and linked to regional, national, and international standards.
=== Africa ===
==== Nigeria ====
BIM has the potential to play a vital role in the Nigerian AEC sector. In addition to its potential clarity and transparency, it may help promote standardization across the industry. For instance, Utiome suggests that, in conceptualizing a BIM-based knowledge transfer framework from industrialized economies to urban construction projects in developing nations, generic BIM objects can benefit from rich building information within specification parameters in product libraries, and be used for efficient, streamlined design and construction. Similarly, an assessment of the current state of the art by Kori found that medium and large firms were leading the adoption of BIM in the industry, while smaller firms were less advanced with respect to process and policy adherence. Adoption across the built environment has been limited by the construction industry's resistance to change and new ways of working. The industry still works with conventional 2D CAD systems for services and structural designs, although production may be in 3D systems; there is virtually no utilisation of 4D and 5D systems.
BIM Africa Initiative, primarily based in Nigeria, is a non-profit institute advocating the adoption of BIM across Africa. Since 2018, it has been engaging with professionals and the government towards the digital transformation of the built industry. Produced annually by its research and development committee, the African BIM Report gives an overview of BIM adoption across the African continent.
==== South Africa ====
The South African BIM Institute, established in May 2015, aims to enable technical experts to discuss digital construction solutions that can be adopted by professionals working within the construction sector. Its initial task was to promote the SA BIM Protocol.
There are no mandated or national best practice BIM standards or protocols in South Africa. Organisations implement company-specific BIM standards and protocols at best (there are isolated examples of cross-industry alliances).
=== Oceania ===
==== Australia ====
In February 2016, Infrastructure Australia recommended: "Governments should make the use of Building Information Modelling (BIM) mandatory for the design of large-scale complex infrastructure projects. In support of a mandatory rollout, the Australian Government should commission the Australasian Procurement and Construction Council, working with industry, to develop appropriate guidance around the adoption and use of BIM; and common standards and protocols to be applied when using BIM".
==== New Zealand ====
In 2015, many projects in the rebuilding of Christchurch were being assembled in detail on a computer using BIM well before workers set foot on the site. The New Zealand government started a BIM acceleration committee as part of a productivity partnership with the goal of 20 per cent more efficiency in the construction industry by 2020. BIM use is still not mandated in New Zealand, and several challenges to its implementation have been identified. However, members of the AEC industry and academia have developed a national BIM handbook providing definitions, case studies and templates.
== Purposes or dimensionality ==
Some purposes or uses of BIM may be described as 'dimensions'. However, there is little consensus on definitions beyond 5D. Some organisations dismiss the term; for example, the UK Institution of Structural Engineers does not recommend using nD modelling terms beyond 4D, adding "cost (5D) is not really a 'dimension'."
=== 3D ===
3D BIM, an acronym for three-dimensional building information modeling, refers to the graphical representation of an asset's geometric design, augmented by information describing attributes of individual components. 3D BIM work may be undertaken by professional disciplines such as architectural, structural, and MEP (mechanical, electrical and plumbing), and the use of 3D models enhances coordination and collaboration between disciplines. A 3D virtual model can also be created by generating a point cloud of the building or facility using laser scanning technology.
=== 4D ===
4D BIM, an acronym for 4-dimensional building information modeling, refers to the intelligent linking of individual 3D CAD components or assemblies with time- or scheduling-related information. The term 4D refers to the fourth dimension: time, i.e. 3D plus time.
4D modelling enables project participants (architects, designers, contractors, clients) to plan, sequence the physical activities, visualise the critical path of a series of events, mitigate the risks, report and monitor progress of activities through the lifetime of the project. 4D BIM enables a sequence of events to be depicted visually on a time line that has been populated by a 3D model, augmenting traditional Gantt charts and critical path (CPM) schedules often used in project management. Construction sequences can be reviewed as a series of problems using 4D BIM, enabling users to explore options, manage solutions and optimize results.
As an advanced construction management technique, it has been used by project delivery teams working on larger projects. 4D BIM has traditionally been used for higher end projects due to the associated costs, but technologies are now emerging that allow the process to be used by laymen or to drive processes such as manufacture.
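As a minimal illustration of the linking described above (the element IDs, activity names and dates are hypothetical, not from any real project), a 4D dataset can be reduced to a mapping from 3D model elements to schedule activities, which is enough to "play back" construction over time:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical, simplified 4D link: each 3D model element is tied to a
# schedule activity, so the model can be replayed along the timeline.

@dataclass
class Activity:
    name: str
    start: date
    finish: date

# Mapping from 3D element IDs (e.g. IFC GlobalIds) to schedule activities.
links = {
    "wall-001": Activity("Erect ground-floor walls", date(2024, 3, 1), date(2024, 3, 14)),
    "slab-002": Activity("Pour first-floor slab", date(2024, 3, 15), date(2024, 3, 22)),
}

def elements_built_by(d: date) -> list[str]:
    """Return the element IDs whose activities are finished by date d."""
    return [eid for eid, act in links.items() if act.finish <= d]

print(elements_built_by(date(2024, 3, 20)))  # ['wall-001']
```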
=== 5D ===
5D BIM, an acronym for 5-dimensional building information modeling, refers to the intelligent linking of individual 3D components or assemblies with time schedule (4D BIM) constraints and then with cost-related information. 5D models enable participants to visualise construction progress and related costs over time. This BIM-centric project management technique has the potential to improve the management and delivery of projects of any size or complexity.
In June 2016, McKinsey & Company identified 5D BIM technology as one of five big ideas poised to disrupt construction. It defined 5D BIM as "a five-dimensional representation of the physical and functional characteristics of any project. It considers a project’s time schedule and cost in addition to the standard spatial design parameters in 3-D."
=== 6D ===
6D BIM, an acronym for 6-dimensional building information modeling, is sometimes used to refer to the intelligent linking of individual 3D components or assemblies with all aspects of project life-cycle management information. However, there is less consensus about the definition of 6D BIM; it is also sometimes used to cover use of BIM for sustainability purposes.
In the project life cycle context, a 6D model is usually delivered to the owner when a construction project is finished. The "As-Built" BIM model is populated with relevant building component information such as product data and details, maintenance/operation manuals, cut sheet specifications, photos, warranty data, web links to product online sources, manufacturer information and contacts, etc. This database is made accessible to the users/owners through a customized proprietary web-based environment. This is intended to aid facilities managers in the operation and maintenance of the facility.
The term is less commonly used in the UK and has been replaced with reference to the Asset Information Requirements (AIR) and an Asset Information Model (AIM) as specified in BS EN ISO 19650-3:2020.
== See also ==
Data model
Design computing
Digital twin (the physical manifestation instrumented and connected to the model)
BCF
GIS
Digital Building Logbook
Landscape design software
Lean construction
List of BIM software
Macro BIM
Open-source 3D file formats
OpenStreetMap
Pre-fire planning
System information modelling
Whole Building Design Guide
Facility management (or Building management)
Building automation (and Building management systems)
== Notes ==
== References ==
== Further reading ==
Kensek, Karen (2014). Building Information Modeling, Routledge. ISBN 978-0-415-71774-8
Kensek, Karen and Noble, Douglas (2014). Building Information Modeling: BIM in Current and Future Practice, Wiley. ISBN 978-1-118-76630-9
Eastman, Chuck; Teicholz, Paul; Sacks, Rafael; Liston, Kathleen (2011). BIM Handbook: A Guide to Building Information Modeling for Owners, Managers, Designers, Engineers, and Contractors (2 ed.). John Wiley. ISBN 978-0-470-54137-1.
Lévy, François (2011). BIM in Small-Scale Sustainable Design, Wiley. ISBN 978-0470590898
Weygant, Robert S. (2011) BIM Content Development: Standards, Strategies, and Best Practices, Wiley. ISBN 978-0-470-58357-9
Hardin, Brad (2009). Martin Viveros (ed.). BIM and Construction Management: Proven Tools, Methods and Workflows. Sybex. ISBN 978-0-470-40235-1.
Smith, Dana K. and Tardif, Michael (2009). Building Information Modeling: A Strategic Implementation Guide for Architects, Engineers, Constructors, and Real Estate Asset Managers, Wiley. ISBN 978-0-470-25003-7
Underwood, Jason, and Isikdag, Umit (2009). Handbook of Research on Building Information Modeling and Construction Informatics: Concepts and Technologies, Information Science Publishing. ISBN 978-1-60566-928-1
Krygiel, Eddy and Nies, Brad (2008). Green BIM: Successful Sustainable Design with Building Information Modeling, Sybex. ISBN 978-0-470-23960-5
Kymmell, Willem (2008). Building Information Modeling: Planning and Managing Construction Projects with 4D CAD and Simulations, McGraw-Hill Professional. ISBN 978-0-07-149453-3
Jernigan, Finith (2007). BIG BIM little bim. 4Site Press. ISBN 978-0-9795699-0-6.
The Whole Building Design Guide (WBDG) is a United States guidance resource, described by the Federal Energy Management Program as "a complete internet resource to a wide range of building-related design guidance, criteria and technology", and it meets the requirements in guidance documents for Executive Order 13123. The WBDG is based on the premise that creating a successful high-performance building requires an integrated design and team approach in all phases of a project, including planning, design, construction, operations and maintenance. The WBDG is managed by the National Institute of Building Sciences.
== History ==
The WBDG was initially designed to serve U.S. Department of Defense (DOD) construction programs. A 2003 DOD memorandum named WBDG the “sole portal to design and construction criteria produced by the U.S. Army Corps of Engineers (USACE), Naval Facilities Engineering Command (NAVFAC), and U.S. Air Force.” Since then, WBDG has expanded to serve all building industry professionals. The majority of its 500,000 monthly users are from the private sector.
The WBDG draws information from the Construction Criteria Base and a privately owned database run by Information Handling Services.
A significant amount of the Whole Building Design Guide content is organized into three categories: Design Guidance, Project Management, and Operations and Maintenance. It is structured to give visitors first a broad understanding and then increasingly specific information targeted towards building industry professionals. The WBDG is the resource that federal agencies look to for policy and technical guidance on Federal High Performance and Sustainable Buildings. In addition, the WBDG contains online tools, the original Construction Criteria Base, Building Information Modeling guides and libraries, a database of select case studies, federal mandates and other resources. The WBDG also provides over 70 online continuing education courses for architects and other building professionals, free of charge.
== Development ==
Development of the WBDG is a collaborative effort among federal agencies, private sector companies, non-profit organizations and educational institutions.
The WBDG web site is maintained by the National Institute of Building Sciences through funding support from the DOD, the NAVFAC Engineering Innovation and Criteria Office, the U.S. Army Corps of Engineers, the U.S. Air Force, the U.S. General Services Administration (GSA), the U.S. Department of Veterans Affairs, the National Aeronautics and Space Administration (NASA) and the U.S. Department of Energy (DOE), with the assistance of the Sustainable Buildings Industry Council (SBIC). A Board of Direction and an Advisory Committee, consisting of representatives from over 25 participating federal agencies, guide the development of the WBDG.
== References ==
== External links ==
Whole Building Design Guide
National Institute of Building Sciences
The critical path method (CPM), or critical path analysis (CPA), is an algorithm for scheduling a set of project activities. A critical path is determined by identifying the longest stretch of dependent activities and measuring the time required to complete them from start to finish. It is commonly used in conjunction with the program evaluation and review technique (PERT).
== History ==
The CPM is a project-modeling technique developed in the late 1950s by Morgan R. Walker of DuPont and James E. Kelley Jr. of Remington Rand. Kelley and Walker related their memories of the development of CPM in 1989. Kelley attributed the term "critical path" to the developers of the PERT, which was developed at about the same time by Booz Allen Hamilton and the U.S. Navy. The precursors of what came to be known as critical path were developed and put into practice by DuPont between 1940 and 1943 and contributed to the success of the Manhattan Project.
Critical path analysis is commonly used with all forms of projects, including construction, aerospace and defense, software development, research projects, product development, engineering, and plant maintenance, among others. Any project with interdependent activities can apply this method of mathematical analysis. CPM was used for the first time in 1966 for the major skyscraper development of constructing the former World Trade Center Twin Towers in New York City. Although the original CPM program and approach is no longer used, the term is generally applied to any approach used to analyze a project network logic diagram.
== Basic techniques ==
=== Components ===
The essential technique for using CPM is to construct a model of the project that includes:
A list of all activities required to complete the project (typically categorized within a work breakdown structure)
The time (duration) that each activity will take to complete
The dependencies between the activities
Logical end points such as milestones or deliverable items
Using these values, CPM calculates the longest path of planned activities to logical end points or to the end of the project, and the earliest and latest that each activity can start and finish without making the project longer. This process determines which activities are "critical" (i.e., on the longest path, with zero float or slack, meaning they cannot be delayed without making the project longer). In project management, a critical path is the sequence of project network activities that adds up to the longest overall duration, regardless of whether that longest duration has float or not; this determines the shortest time possible to complete the project. "Total float" (unused time) can occur within the critical path. For example, if a project is testing a solar panel and task 'B' requires 'sunrise', a scheduling constraint on the testing activity could be that it would not start until the scheduled time for sunrise. This might insert dead time (total float) into the schedule of the activities on that path prior to sunrise, while waiting for this event. This path, with its constraint-generated total float, would actually make the path longer, with the total float being part of the shortest possible duration for the overall project. In other words, individual tasks on the critical path prior to the constraint might be able to be delayed without elongating the critical path; this is the total float of that task. However, the time added to the project duration by the constraint is actually critical path drag, the amount by which the project's duration is extended by each critical path activity and constraint.
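A minimal sketch of this calculation (the activity names, durations and dependencies are illustrative): a forward pass computes the earliest start and finish of each activity, a backward pass computes the latest ones, and zero total float marks the critical path:

```python
# Minimal CPM forward/backward pass on an activity-on-node network.
activities = {           # name: (duration, [predecessors]), listed topologically
    "A": (3, []),
    "B": (4, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF).
ES, EF = {}, {}
for name, (dur, preds) in activities.items():
    ES[name] = max((EF[p] for p in preds), default=0)
    EF[name] = ES[name] + dur

project_end = max(EF.values())

# Backward pass: latest finish (LF) and latest start (LS).
LF, LS = {}, {}
for name in reversed(list(activities)):  # reverse topological order
    dur, _ = activities[name]
    succs = [s for s, (_, ps) in activities.items() if name in ps]
    LF[name] = min((LS[s] for s in succs), default=project_end)
    LS[name] = LF[name] - dur

for name in activities:
    total_float = LS[name] - ES[name]
    print(name, "float =", total_float, "(critical)" if total_float == 0 else "")
# Critical path here: A -> B -> D, project duration 12.
```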
A project can have several parallel, near-critical paths, and some or all of the tasks could have free float and/or total float. An additional parallel path through the network with a total duration shorter than the critical path is called a subcritical or noncritical path. Activities on subcritical paths have no drag, as they do not extend the project's duration.
CPM analysis tools allow a user to select a logical end point in a project and quickly identify its longest series of dependent activities (its longest path). These tools can display the critical path (and near-critical path activities if desired) as a cascading waterfall that flows from the project's start (or current status date) to the selected logical end point.
=== Visualizing critical path schedule ===
Although the activity-on-arrow diagram (PERT chart) is still used in a few places, it has generally been superseded by the activity-on-node diagram, where each activity is shown as a box or node and the arrows represent the logical relationships going from predecessor to successor as shown here in the "Activity-on-node diagram".
In this diagram, Activities A, B, C, D, and E comprise the critical or longest path, while Activities F, G, and H are off the critical path with floats of 15 days, 5 days, and 20 days respectively. Whereas activities that are off the critical path have float and are therefore not delaying completion of the project, those on the critical path will usually have critical path drag, i.e., they delay project completion. The drag of a critical path activity can be computed as follows:
If a critical path activity has nothing in parallel, its drag is equal to its duration. Thus A and E have drags of 10 days and 20 days respectively.
If a critical path activity has another activity in parallel, its drag is equal to whichever is less: its duration or the total float of the parallel activity with the least total float. Thus since B and C are both parallel to F (float of 15) and H (float of 20), B has a duration of 20 and drag of 15 (equal to F's float), while C has a duration of only 5 days and thus drag of only 5. Activity D, with a duration of 10 days, is parallel to G (float of 5) and H (float of 20) and therefore its drag is equal to 5, the float of G.
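These two rules can be sketched in code, reproducing the drag values just described:

```python
# Drag of a critical path activity, per the two rules above.
# parallel_floats: total floats of the activities running in parallel with it.
def drag(duration, parallel_floats):
    if not parallel_floats:          # nothing in parallel: drag = duration
        return duration
    return min(duration, min(parallel_floats))

print(drag(10, []))        # A: 10 (nothing in parallel)
print(drag(20, [15, 20]))  # B: 15 (limited by F's float)
print(drag(5, [15, 20]))   # C: 5  (its own duration is less)
print(drag(10, [5, 20]))   # D: 5  (limited by G's float)
print(drag(20, []))        # E: 20 (nothing in parallel)
```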
These results, including the drag computations, allow managers to prioritize activities for the effective management of project, and to shorten the planned critical path of a project by pruning critical path activities, by "fast tracking" (i.e., performing more activities in parallel), and/or by "crashing the critical path" (i.e., shortening the durations of critical path activities by adding resources).
Critical path drag analysis has also been used to optimize schedules in processes outside of strict project-oriented contexts, such as to increase manufacturing throughput by using the technique and metrics to identify and alleviate delaying factors and thus reduce assembly lead time.
=== Crash duration ===
"Crash duration" is a term referring to the shortest possible time for which an activity can be scheduled. It can be achieved by shifting more resources towards the completion of that activity, resulting in decreased time spent and often a reduced quality of work, as the premium is set on speed.
Crash duration is typically modeled as a linear relationship between cost and activity duration, but in many cases, a convex function or a step function is more applicable.
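In the linear model, the trade-off is usually summarized by a constant cost slope; the notation below is generic rather than taken from any particular standard:

```latex
\text{cost slope} \;=\; \frac{C_{\text{crash}} - C_{\text{normal}}}{D_{\text{normal}} - D_{\text{crash}}},
\qquad
C(D) \;=\; C_{\text{normal}} + \text{cost slope}\,\bigl(D_{\text{normal}} - D\bigr),
\quad D_{\text{crash}} \le D \le D_{\text{normal}}.
```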
=== Expansion ===
Originally, the critical path method considered only logical dependencies between terminal elements. Since then, it has been expanded to allow for the inclusion of resources related to each activity, through processes called activity-based resource assignments and resource optimization techniques such as resource leveling and resource smoothing. A resource-leveled schedule may include delays due to resource bottlenecks (i.e., unavailability of a resource at the required time), and may cause a previously shorter path to become the longest or most "resource critical" path, while a resource-smoothed schedule avoids impacting the critical path by using only free and total float. A related concept is called the critical chain, which attempts to protect activity and project durations from unforeseen delays due to resource constraints.
Since project schedules change on a regular basis, CPM allows continuous monitoring of the schedule, which allows the project manager to track the critical activities, and alerts the project manager to the possibility that non-critical activities may be delayed beyond their total float, thus creating a new critical path and delaying project completion. In addition, the method can easily incorporate the concepts of stochastic predictions, using the PERT and event chain methodology.
=== Flexibility ===
A schedule generated using the critical path techniques often is not realized precisely, as estimations are used to calculate times: if one mistake is made, the results of the analysis may change. This could cause an upset in the implementation of a project if the estimates are blindly believed, and if changes are not addressed promptly. However, the structure of critical path analysis is such that the variance from the original schedule caused by any change can be measured, and its impact either ameliorated or adjusted for. Indeed, an important element of project postmortem analysis is the 'as built critical path' (ABCP), which analyzes the specific causes and impacts of changes between the planned schedule and eventual schedule as actually implemented.
== Usage ==
Critical path techniques are widely used in planning, managing and controlling the delivery of construction projects. A technique known as "as-built critical path analysis" can also be used to assess the causes of a delay in completing a project, especially where there may have been more than one delaying factor and liability needs to be established for compensation and damages purposes. However, the use of as-built CPA in a legal context has been criticised, for example in the Scottish court case of City Inn Ltd. v Shepherd Construction (2007).
Currently, there are several software solutions available in industry which use the CPM method of scheduling; see list of project management software. The method currently used by most project management software is based on a manual calculation approach developed by Fondahl of Stanford University.
== In popular culture ==
In Odds On, the first novel by Michael Crichton, robbers use a critical path computer program to help plan a heist.
The Nome Trilogy (in the first book, Truckers) by Terry Pratchett mentions "something called critical path analysis" and says that it means "there's always something you should have done first."
== See also ==
Gantt chart
Graphical Evaluation and Review Technique
Program evaluation and review technique
Critical chain project management
Liebig's law of the minimum
List of project management software
List of project management topics
Main path analysis
Project management
Project planning
Work breakdown structure
== References ==
== Further reading ==
== External links ==
Media related to CPM diagrams at Wikimedia Commons
System information modelling (SIM) is the process of modelling complex connected systems. System information models are digital representations of connected systems, such as electrical instrumentation and control, power, and communication systems. The objects modelled in a SIM have a 1:1 relationship with the objects in the physical system. Components, connections and functions are defined and linked as they would be in the real world.
== Origins ==
The concept of SIM has existed since the mid-1990s. It was first proposed in 1994 by an Australian instrument, electrical and control system engineering company, I&E Systems Pty Ltd. Like many technological innovations, the idea for SIM was born out of necessity. Since the mid-1990s, the complexity of power, control and Information and Communication Technology (ICT) systems has been growing exponentially due to rapid advances in technology; this has rendered the traditional paper-based methodologies and applications used for system design obsolete.
The cost of design-related activities can be up to 70% of the total project expenditure in an electrical instrumentation and control system (EICS) engineering project. Analyses revealed that the limitations of paper-based methods and workflows contributed significantly to this high design cost: information had to be duplicated across multiple documents, resulting in design errors and omissions and increasing the cost of labour. With this in mind, the company recognized the need to shift from traditional paper-based methods to a more efficient, systematic digital modelling approach.
The term 'System Information Modelling' was first published in a technical report in 2012 by Peter E.D. Love and Jingyang Zhou. The report presented empirical evidence to demonstrate that the use of a SIM could potentially improve productivity and reduce the cost of producing EICS documentation. The research examined a set of electrical engineering drawings of an iron ore stacker conveyor system; errors and omissions identified in the drawings were classified and quantified. The report concluded that the use of traditional computer-aided design (CAD) methods to produce electrical engineering designs is ineffective, inefficient and costly.
Since 2013, a number of scholarly research papers have been published that have demonstrated the effectiveness and efficiency of using a SIM instead of CAD to design and document EICS in a variety of projects (e.g., iron ore processing plant, FPSO safety control system, copper smelter plant, oil refinery, and a geothermal power plant).
== Definition ==
System Information Modelling can be defined as the process of digitally modelling a complex connected system. A System Information Model is a shared information resource of a system forming a reliable basis of knowledge during its life-cycle.
== Throughout the life-cycle ==
A SIM containing all the project information can be applied throughout the entire life-cycle of the project.
=== Design ===
Engineering design and documentation can be undertaken simultaneously when using a SIM. A SIM can be created as the design of an EICS progresses; draftsmen and modellers are no longer required. When a SIM is applied to the design of a connected system, all physical equipment and the associated connections to be constructed can be modelled in a relational database. Components are classified according to 'Type' and 'Location' attributes: the 'Type' attribute defines equipment functionality, while the 'Location' attribute describes the physical position of equipment. Connections between equipment are modelled as 'connectors'. To facilitate the design, attributes such as a device module, specifications and vendor manuals can be assigned and attached to each individual object.
When the design process is complete, a read-only copy of the model is created, exported and made available to other project team members. Users can access all or part of the design information within the SIM according to their respective authorization levels. Private user data can be established and attached to the model.
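A minimal sketch of such a relational model (the class and attribute names are hypothetical, not those of any SIM product):

```python
from dataclasses import dataclass, field

# Hypothetical 1:1 model of a connected system: components classified by
# Type (function) and Location (physical position), joined by connectors.

@dataclass
class Component:
    tag: str            # unique plant tag, e.g. "PT-1001"
    type_: str          # functional classification, e.g. "Pressure transmitter"
    location: str       # physical position, e.g. "Field, area 10"
    documents: list[str] = field(default_factory=list)  # specs, vendor manuals

@dataclass
class Connector:
    tag: str            # e.g. a cable number
    from_tag: str       # source component tag
    to_tag: str         # destination component tag

pt = Component("PT-1001", "Pressure transmitter", "Field, area 10", ["datasheet.pdf"])
plc = Component("PLC-01", "Controller input card", "Control room")
cable = Connector("C-1001", pt.tag, plc.tag)
```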
=== Procurement and construction ===
When the design is approved for construction, a SIM, which is a digital realization of the design, can be issued to different parties such as the procurement team and construction contractors. Information management can be achieved digitally, and the role of paper drawings is eliminated. A procurement plan and construction schedule can be created for each individual object in the SIM. Construction activities can be assigned to objects or work-packs, with weighting factors defined. This enables managers to track procurement and construction progress down to the individual-object level and make informed decisions.
=== Asset management ===
A SIM is especially useful for asset managers, as it enables information to be stored in a single digital model. In a traditional CAD-based environment, paper drawings are typically handed over to the asset owner in the form of 'As Built' drawings, which reflect, in theory, the actual construction of every system, component and connection of a project. If an asset manager wants to maintain, repair or upgrade any portion of the asset, the 'As Built' drawings need to be consulted. However, recovering information spread across an array of drawings is a tedious and time-consuming task, and any error or omission contained within the drawings will potentially hinder interpretation of the design.
When engineering is undertaken using a SIM, the design is stored in a digital format that maintains a 1:1 mapping with the physical system. Operations such as testing, calibration, inspection, repair, minor change and isolation can be defined and scheduled within the SIM. SIM data can also be conveniently exported and input into third-party asset management applications to comply with the owner's asset management strategy. In addition, a SIM can act as a training tool, used regularly to help operators become familiar with the design.
== Software ==
A commercial proprietary software package, Digital Asset Delivery (DAD), has been developed based on the concept of System Information Modelling (SIM) by I&E Systems Pty Ltd.
The initial version of DAD, released in 1997, was primarily a modelling tool used to design and document electrical engineering systems. Since its initial release, DAD has been tested and applied to many projects, including greenfield and brownfield power, control and ICT systems, and has been continuously maintained and upgraded to cater for complex and rapidly changing EICS projects. The latest release of DAD is version 13.
DAD provides several facilities to capture the complexities of today's systems, including:
layers – e.g. assembly (physical: how is it built?) and control (functional: how does it work?)
relationships – links between components on different layers
groups – components and connectors with common features
DAD works closely with its partner application ActivityExchange which builds upon the power of the digital model to allow users to define, organise, track and exchange work to be done on any project. Once completed, each distinct record of work can be appended in to the digital model for future reference and historical continuity. ActivityExchange manages real-time workflows of all human interaction with system components including design review, procurement, construction, commissioning and finally maintenance.
== International development ==
The concept of SIM has been applied and verified in a number of international projects.
=== Australia ===
There are a number of Australian-based organisations in various industry sectors benefitting from SIM technology. A few examples:
Fortescue Metals Group (FMG) based in Western Australia has adopted SIM for all their projects built since 2010. These projects include the large scale Solomon Iron Ore project, the expansion of their export port facility and the North Star magnetite project. FMG acknowledges that using SIM on these projects resulted in large savings and more efficient project execution and that it continues to provide benefits for the operation of these facilities.
Opticomm builds, owns and operates a large fibre optic communications network which connects tens of thousands of residential and commercial properties. Their network is totally modelled using SIM and all their construction and operations activities are based on the information in their SIM based information model.
In 2016, Perth International Airport adopted the SIM and they had their power distribution network modelled using this technology. The electrical components and cable objects in their SIM are linked to the objects in their geographic information system (GIS). This seamlessly provides full system technical and geographical information about all their electrical system components and cables. Perth Airport has plans to expand the use of SIM to their other connected systems like runway lighting systems, and communication networks.
=== China ===
SIM has been applied to model and manage the electrical and communication systems of the Wuhan Metro Stations in China in 2014. In 2016, a SIM model was created to digitize the distributed control system (DCS) of the Wuhan International Expo Centre. Since 2014, a number of research projects have been undertaken by the BIM Centre of Huazhong University of Science and Technology including SIM application, linking SIM to BIM and linking SIM to Engineering Information Modelling (EIM).
=== Saudi Arabia ===
In 2015, SIM was applied by a large Japanese Engineering and Construction company to model the electrical and instrumentation systems on a very large new oil refinery project in Saudi Arabia. The SIM was used as the basis for management of all procurement and construction activities through Procurement and Construction Portals.
=== Europe ===
In 2018, SIM was applied by a large logistics company in Ireland to model their entire ICT Infrastructure in advance of a significant hardware and software refresh. SIM was used to map the high level business processes of the organisation down to the specific and individual records held in each system by the organisation, assuring migration success to a new ERP as well as providing compliance and assurance on GDPR requirements. The SIM was used as the Configuration Management Database (CMDB) to facilitate the ongoing project activities required to upgrade the organisations technologies and will become an inherent part of their IT operations.
== SIM and BIM ==
System information modelling is different from building information modeling, though both focus on sharing knowledge and information. The process of BIM has been defined as:
Building information modeling (BIM) is a digital representation of physical and functional characteristics of a facility. A BIM is a shared knowledge resource for information about a facility forming a reliable basis for decisions during its life-cycle; defined as existing from earliest conception to demolition.
A SIM is akin to BIM; 'Building' is replaced with 'System' to represent the process of modeling complex connected systems, such as electrical control, power and communications, which do not possess geometry. Essentially, a SIM takes a discipline specific perspective to model complex connected systems, but can be integrated with a building information model when a single point of truth is formed.
The traditional way of documenting the design of a connected system is to use 2D drawings created by draftsmen, consisting of various views that must be used jointly to form an integrated design. As the drawings are created manually, and the information for a component may be represented on several different drawings, the propensity for errors, omissions, conflicts and duplications to materialize increases significantly. Since the mid-1970s, there has been a trend to replace traditional manually drafted drawings with computer-aided digital drawings. Though efficiency in creating drawings has improved since the introduction of CAD, there remains an over-reliance on the production of paper-based documentation despite the emergence of 'digital' engineering. With the introduction of SIM, productivity benefits can be achieved, particularly during the operation and maintenance of EICS assets.
SIM is not restricted to the EICS, power and communication systems. It can be used to model a variety of connected systems such as network topology, causal loop and interactions between people and organizations. The application scope of SIM is beyond the 'physical facility' that has been defined for BIM, which enables the SIM to be applicable to model both the physical and virtual networks of the connected systems.
== Extended applications ==
A SIM can be linked to Geographical Information Systems to support the management of spatial information. For example, a SIM model with components assigned by coordinates can be linked to Google Earth to show the real physical locations of the components. A SIM can also be linked to third party 3D models, using applications such as Autodesk Navisworks, to gain spatial support and also provide detailed system data to the third parties. Interoperability can be achieved between SIM and a variety of technologies such as image modelling, Google Maps, virtual reality, augmented reality, Quick Response code, and radio-frequency identification.
== See also ==
Information model
Building information modeling
System engineering
System design
== References ==
An energy performance certificate (EPC) is a rating scheme to summarise the energy efficiency of buildings or devices.
== European Union ==
In the European Union, EPCs are regulated by the Energy Performance of Buildings Directive 2010.
== Turkey ==
EPCs are mandatory when buying or selling property.
== United Kingdom ==
Energy performance certificates (EPCs) are a rating scheme to summarise the energy efficiency of buildings. A building is given a rating between A (very efficient) and G (inefficient). The EPC also includes tips about the most cost-effective ways to improve the home's energy rating. Energy performance certificates are used in many countries.
== United States ==
Energy Star (trademarked ENERGY STAR) is a program run by the U.S. Environmental Protection Agency (EPA) and U.S. Department of Energy (DOE) that promotes energy efficiency. The program provides information on the energy consumption of products and devices using different standardized methods. The Energy Star label is found on more than 75 different certified product categories, homes, commercial buildings, and industrial plants. In the United States, the Energy Star label is also shown on the Energy Guide appliance label of qualifying products.
== See also ==
Domestic energy assessor
House energy rating
Home Energy Rating
== References ==
Landscape design software is used by landscape architects, landscape designers and garden designers to create two- and three-dimensional planting, softworks, groundworks and hardworks plans before constructing a landscape.
There are two levels of software available: amateur and professional. The former is usually aimed at simple visualization of a garden design, whilst the latter provides tools that allow stylistic representations of a design to be accurately labelled and dimensioned for contractors to interpret, and for land authorities or local government to review and approve (or otherwise). Since the advent of the personal computer, several software packages have come into existence. The main professional packages are:
Idea Spectrum's Realtime Landscaping Architect
CS Design Software's CS Artisan
LANDWorksCAD
Keysoft Solutions' KeySCAPE LandCADD
Landmark, PRO Landscape
Structure Studio's VizTerra
VisionScape's VirtualProperty Architect
Visual Impact's Earthscapes
Asuni's Lands Design
Dynascape
Vectorworks
Sketch-Up
AutoCAD
Professional landscape design software requires detailed information to be output for contract documentation, which usually constitutes drawings, specifications and reports (schedules/bills of quantity). The more sophisticated landscape design software solutions automate the process of generating reports (schedules/bills of quantity) from intelligent data in the drawing; such intelligence is usually contained within labels (annotations), which, in the case of planting, include automatic calculation routines to determine the number of individual plants based on plant spacings (centres) per area or length. When labelled areas or lengths are modified (stretched or shrunk), associated labels are recalculated, along with the reports (schedules/bills of quantity) contained in or associated with the same drawing.
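As an illustration of such a calculation routine (assuming a square planting grid for areas and end-to-end planting for lengths; real packages support several spacing patterns):

```python
import math

def plants_per_area(area_m2: float, spacing_m: float) -> int:
    """Plants needed to fill an area on a square grid at the given centres."""
    return math.ceil(area_m2 / spacing_m ** 2)

def plants_per_length(length_m: float, spacing_m: float) -> int:
    """Plants needed along a row, with one plant at each end."""
    return math.floor(length_m / spacing_m) + 1

print(plants_per_area(25.0, 0.5))    # 100 plants at 500 mm centres
print(plants_per_length(10.0, 0.5))  # 21 plants along a 10 m row
```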
== Features ==
Below is a list of some features provided by such software:
Video Tutorials
Digital Photo Import
3-D View creation
Plant Encyclopaedia
Plant Selector
Growth Zones & Hardiness Maps
Images of Plants and Objects
Print Shopping Lists
Print Design
Visualize Plant Growth
Outdoor Lighting
Irrigation Design
Outdoor Furniture
Reports/Schedules/Bills of Quantity
Labels/Annotation
Photorealistic Design Presentations
Generating Quotes, Invoices, Reports, Plant Information
== Notes ==
== References ==
The Murnaghan equation of state is a relationship between the volume of a body and the pressure to which it is subjected. It is one of many equations of state that have been used in earth sciences and shock physics to model the behavior of matter under conditions of high pressure. It owes its name to Francis D. Murnaghan, who proposed it in 1944 to model material behavior over as wide a pressure range as possible and to reflect an experimentally established fact: the more a solid is compressed, the more difficult it is to compress further.
The Murnaghan equation is derived, under certain assumptions, from the equations of continuum mechanics. It involves two adjustable parameters: the modulus of incompressibility K0 and its first derivative with respect to the pressure, K'0, both measured at ambient pressure. In general, these coefficients are determined by a regression on experimentally obtained values of volume V as a function of the pressure P. These experimental data can be obtained by X-ray diffraction or by shock tests. Regression can also be performed on the values of the energy as a function of the volume obtained from ab-initio and molecular dynamics calculations.
The Murnaghan equation of state is typically expressed as:
P(V) = \frac{K_0}{K_0'}\left[\left(\frac{V}{V_0}\right)^{-K_0'} - 1\right]
If the reduction in volume under compression is low, i.e., for V/V0 greater than about 90%, the Murnaghan equation can model experimental data with satisfactory accuracy. Moreover, unlike many proposed equations of state, it gives an explicit expression for the volume as a function of pressure, V(P). But its range of validity is limited and its physical interpretation inadequate. This equation of state nevertheless continues to be widely used in models of solid explosives. Among more elaborate equations of state, the most used in earth physics is the Birch–Murnaghan equation of state. In the shock physics of metals and alloys, another widely used equation of state is the Mie–Grüneisen equation of state.
== Background ==
The study of the internal structure of the earth through knowledge of the mechanical properties of the constituents of the planet's inner layers involves extreme conditions: the pressure can be counted in hundreds of gigapascals and temperatures in thousands of degrees. The properties of matter under these conditions can be studied experimentally through devices such as the diamond anvil cell for static pressures, or by subjecting the material to shock waves. This work also gave rise to theoretical efforts to determine the equation of state, that is to say, the relations among the different parameters that define the state of matter: volume (or density), temperature and pressure.
There are two approaches:
the state equations derived from interatomic potentials, or possibly ab initio calculations;
the equations of state derived from the general relations of mechanics and thermodynamics. The Murnaghan equation belongs to this second category.
Dozens of equations have been proposed by various authors. These are empirical relationships, the quality and relevance depend on the use made of it and can be judged by different criteria: the number of independent parameters that are involved, the physical meaning that can be assigned to these parameters, the quality of the experimental data, and the consistency of theoretical assumptions that underlie their ability to extrapolate the behavior of solids at high compression.
== Expressions for the equation of state ==
Generally, at constant temperature, the bulk modulus is defined by:
K = -V\left(\frac{\partial P}{\partial V}\right)_T
The easiest way to get an equation of state linking P and V is to assume that K is constant, that is to say, independent of pressure and deformation of the solid, then we simply find Hooke's law. In this case, the volume decreases exponentially with pressure. This is not a satisfactory result because it is experimentally established that as a solid is compressed, it becomes more difficult to compress. To go further, we must take into account the variations of the elastic properties of the solid with compression.
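Explicitly, taking K constant in the definition above and integrating gives the exponential law just mentioned:

```latex
-V\,\frac{dP}{dV} = K_0
\;\Longrightarrow\;
V(P) = V_0\,e^{-P/K_0}.
```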
Murnaghan's assumption is that the bulk modulus is a linear function of pressure:
K = K_0 + K_0'\,P
The Murnaghan equation results from integrating this differential equation:

P(V) = \frac{K_0}{K_0'}\left[\left(\frac{V}{V_0}\right)^{-K_0'} - 1\right]
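For completeness, the integration can be spelled out by separating variables in the definition of K together with the linear assumption:

```latex
-V\,\frac{dP}{dV} = K_0 + K_0' P
\;\Longrightarrow\;
\int_0^P \frac{dP'}{K_0 + K_0' P'} = -\int_{V_0}^{V} \frac{dV'}{V'}
\;\Longrightarrow\;
\frac{1}{K_0'}\,\ln\!\left(1 + \frac{K_0' P}{K_0}\right) = \ln\frac{V_0}{V}
```

Exponentiating and solving for P recovers the expression above.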
The volume can also be expressed as a function of pressure:

V(P) = V_0\left[1 + K_0'\,\frac{P}{K_0}\right]^{-1/K_0'}
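A small numerical sketch of these two relations (the values of K0 and K0' are illustrative, not measured constants for any particular material):

```python
def murnaghan_P(V, V0, K0, K0p):
    """Pressure as a function of volume, Murnaghan equation of state."""
    return (K0 / K0p) * ((V / V0) ** (-K0p) - 1.0)

def murnaghan_V(P, V0, K0, K0p):
    """Volume as a function of pressure (explicit inverse)."""
    return V0 * (1.0 + K0p * P / K0) ** (-1.0 / K0p)

V0, K0, K0p = 1.0, 160.0, 4.0          # illustrative values, K0 in GPa
P = murnaghan_P(0.95 * V0, V0, K0, K0p)
print(P)                               # ~9.1 GPa for a 5% volume reduction
print(murnaghan_V(P, V0, K0, K0p))     # recovers 0.95
```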
This simplified presentation is, however, criticized by Poirier as lacking rigor. The same relationship can be derived differently from the fact that the product of the incompressibility modulus and the thermal expansion coefficient does not depend on pressure for a given material. This equation of state is also a general case of the older polytrope relation, which also has a constant power relation.
In some circumstances, particularly in connection with ab initio calculations, the expression of the energy as a function of the volume is preferred; it can be obtained by integrating the above equation according to the relationship P = −dE/dV. For K'0 different from 1, it can be written as

E(V) = E_0 + K_0 V_0 \left[\frac{1}{K_0'(K_0'-1)}\left(\frac{V}{V_0}\right)^{1-K_0'} + \frac{1}{K_0'}\,\frac{V}{V_0} - \frac{1}{K_0'-1}\right].
== Advantages and limitations ==
Despite its simplicity, the Murnaghan equation is able to reproduce the experimental data for a range of pressures that can be quite large, on the order of K0/2. It also remains satisfactory as the ratio V/V0 remains above about 90%. In this range, the Murnaghan equation has an advantage compared to other equations of state if one wants to express the volume as a function of pressure.
Nevertheless, other equations may provide better results, and several theoretical and experimental studies show that the Murnaghan equation is unsatisfactory for many problems. For instance, as the ratio V/V0 becomes very low, theory predicts that K' goes to 5/3, which is the Thomas–Fermi limit. In the Murnaghan equation, however, K' is constant, set to its initial value; in particular, the constant value K'0 becomes inconsistent with theory in some situations. In fact, when extrapolated, the behavior predicted by the Murnaghan equation quickly becomes implausible.
Regardless of this theoretical argument, experience clearly shows that K' decreases with pressure, or in other words that the second derivative of the incompressibility modulus K" is strictly negative. A second order theory based on the same principle (see next section) can account for this observation, but this approach is still unsatisfactory. Indeed, it leads to a negative bulk modulus in the limit where the pressure tends to infinity. In fact, this is an inevitable contradiction whatever polynomial expansion is chosen because there will always be a dominant term that diverges to infinity.
These important limitations led to the abandonment of the Murnaghan equation, which W. Holzapfel calls "a useful mathematical form without any physical justification". In practice, the analysis of compression data is done using more sophisticated equations of state, the most common within the science community being the Birch–Murnaghan equation, to second or third order depending on the quality of the data collected.
Finally, a very general limitation of this type of equation of state is its inability to take into account phase transitions induced by pressure and temperature: melting, but also the multiple solid-solid transitions that can cause abrupt changes in density and bulk modulus as a function of pressure.
== Examples ==
In practice, the Murnaghan equation is used to perform a regression on a data set, yielding the values of the coefficients K0 and K'0. With these coefficients obtained, and knowing the volume under ambient conditions, we are in principle able to calculate the volume, density and bulk modulus at any pressure.
The data set is mostly a series of volume measurements for different values of applied pressure, obtained mostly by X-ray diffraction. It is also possible to work on theoretical data, calculating the energy for different values of volume by ab initio methods, and then regressing these results. This gives a theoretical value of the modulus of elasticity which can be compared to experimental results.
The following table lists some of the results of different materials, with the sole purpose of illustrating some numerical analyses that have been made using the Murnaghan equation, without prejudice to the quality of the models obtained. Given the criticisms that have been made in the previous section on the physical meaning of the Murnaghan equation, these results should be considered with caution.
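Such a regression can be sketched with SciPy's curve_fit; the data points below are synthetic, generated for illustration only:

```python
import numpy as np
from scipy.optimize import curve_fit

def murnaghan_P(V, K0, K0p, V0=1.0):
    """Murnaghan P(V); V0 fixed at the ambient volume (normalized to 1)."""
    return (K0 / K0p) * ((V / V0) ** (-K0p) - 1.0)

# Synthetic "measured" volumes and pressures (illustration only).
V = np.linspace(0.90, 1.00, 11)
P = murnaghan_P(V, 160.0, 4.0) + np.random.default_rng(0).normal(0, 0.05, V.size)

(K0_fit, K0p_fit), _ = curve_fit(murnaghan_P, V, P, p0=(100.0, 4.0))
print(K0_fit, K0p_fit)   # close to the generating values 160 and 4
```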
== Extensions and generalizations ==
To improve the models or avoid criticism outlined above, several generalizations of the Murnaghan equation have been proposed. They usually consist in dropping a simplifying assumption and adding another adjustable parameter. This can improve the qualities of refinement, but also lead to complicated expressions. The question of the physical meaning of these additional parameters is also raised.
A possible strategy is to include an additional term in P² in the previous development, requiring that

{\displaystyle K=K_{0}+PK_{0}'+P^{2}K_{0}''.}

Solving this differential equation gives the second-order Murnaghan equation:
{\displaystyle P(V)=2{\frac {K_{0}}{K_{0}'}}\left[{\frac {\Gamma }{K_{0}'}}\,{\frac {\left({\frac {V_{0}}{V}}\right)^{\Gamma }+1}{\left({\frac {V_{0}}{V}}\right)^{\Gamma }-1}}-1\right]^{-1}}
where
{\displaystyle \Gamma ^{2}=K_{0}'^{2}-2K_{0}K_{0}''>0.}

The first-order equation is recovered naturally by taking {\displaystyle K_{0}''=0}. Developments to an order greater than 2 are possible in principle, but at the cost of adding an adjustable parameter for each term.
Other generalizations can be cited:
Kumari and Dass proposed a generalization abandoning the condition K″ = 0 but assuming the ratio K/K′ to be independent of pressure;
Kumar proposed a generalization taking into account the dependence of the Anderson parameter on volume. It was subsequently shown that this generalized equation was not new but was in fact reducible to the Tait equation.
== Notes and references ==
== Bibliography ==
Poirier, J.P. (2002), Introduction to the physics of the Earth's interior, Cambridge University Press, ISBN 9780521663922
Silvi, B.; d'Arco, P. (1997), Modelling of Minerals and Silicated Materials, Kluwer Academic Publishers, ISBN 9780792343332
MacDonald, J.R. (1969), "Review of Some Experimental and Analytical Equations of State", Reviews of Modern Physics, 41 (2): 316–349, doi:10.1103/revmodphys.41.316
== See also ==
Equation of state
Birch–Murnaghan equation of state
Rose–Vinet equation of state
Polytrope
== External links ==
EosFit, a program for the refinement of experimental data and the calculation of P(V) relations for different equations of state, including the Murnaghan equation.
In thermodynamics, a departure function is defined for any thermodynamic property as the difference between the property as computed for an ideal gas and the property of the species as it exists in the real world, for a specified temperature T and pressure P. Common departure functions include those for enthalpy, entropy, and internal energy.
Departure functions are used to calculate real fluid extensive properties (i.e. properties which are computed as a difference between two states). A departure function gives the difference between the real state, at a finite volume or non-zero pressure and temperature, and the ideal state, usually at zero pressure or infinite volume and temperature.
For example, to evaluate enthalpy change between two points h(v1,T1) and h(v2,T2) we first compute the enthalpy departure function between volume v1 and infinite volume at T = T1, then add to that the ideal gas enthalpy change due to the temperature change from T1 to T2, then subtract the departure function value between v2 and infinite volume.
Departure functions are computed by integrating a function which depends on an equation of state and its derivative.
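The three-step path described above can be written as a short numerical sketch. Everything here is hypothetical scaffolding: `departure(v, T)` stands for whatever enthalpy-departure evaluation an equation of state provides (with the ideal-minus-real sign convention used above), and `cp_ig` is an assumed constant ideal-gas heat capacity.

```python
def enthalpy_change(v1, T1, v2, T2, departure, cp_ig):
    """Enthalpy change h(v2,T2) - h(v1,T1) via departure functions.

    departure(v, T): D(v,T) = h_ig(T) - h(v,T), from some equation of state.
    cp_ig:           ideal-gas heat capacity, assumed constant over [T1, T2].
    """
    dh_ideal = cp_ig * (T2 - T1)     # ideal-gas contribution
    # add departure at state 1, then ideal change, then subtract at state 2
    return departure(v1, T1) + dh_ideal - departure(v2, T2)

# Degenerate check: with zero departure this reduces to the ideal-gas result.
print(enthalpy_change(1.0, 300.0, 1.0, 350.0, lambda v, T: 0.0, 29.1))
```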
== General expressions ==
General expressions for the enthalpy H, entropy S and Gibbs free energy G are given by
{\displaystyle {\begin{aligned}{\frac {H^{\mathrm {ig} }-H}{RT}}&=\int _{V}^{\infty }\left[T\left({\frac {\partial Z}{\partial T}}\right)_{V}\right]{\frac {dV}{V}}+1-Z\\[2ex]{\frac {S^{\mathrm {ig} }-S}{R}}&=\int _{V}^{\infty }\left[T\left({\frac {\partial Z}{\partial T}}\right)_{V}-1+Z\right]{\frac {dV}{V}}-\ln Z\\[2ex]{\frac {G^{\mathrm {ig} }-G}{RT}}&=\int _{V}^{\infty }(1-Z){\frac {dV}{V}}+\ln Z+1-Z\end{aligned}}}
== Departure functions for Peng–Robinson equation of state ==
The Peng–Robinson equation of state relates the three interdependent state properties pressure P, temperature T, and molar volume Vm. From the state properties (P, Vm, T), one may compute the departure function for enthalpy per mole (denoted h) and entropy per mole (s):
{\displaystyle {\begin{aligned}h_{T,P}-h_{T,P}^{\mathrm {ideal} }&=RT_{C}\left[T_{r}(Z-1)-2.078(1+\kappa ){\sqrt {\alpha }}\ln \left({\frac {Z+2.414B}{Z-0.414B}}\right)\right]\\[1.5ex]s_{T,P}-s_{T,P}^{\mathrm {ideal} }&=R\left[\ln(Z-B)-2.078\kappa \left({\frac {1+\kappa }{\sqrt {T_{r}}}}-\kappa \right)\ln \left({\frac {Z+2.414B}{Z-0.414B}}\right)\right]\end{aligned}}}
where α is defined in the Peng–Robinson equation of state, Tr is the reduced temperature, Pr is the reduced pressure, Z is the compressibility factor, and

{\displaystyle \kappa =0.37464+1.54226\;\omega -0.26992\;\omega ^{2}}

{\displaystyle B=0.07780{\frac {P_{r}}{T_{r}}}}
Typically, one knows two of the three state properties (P, Vm, T), and must compute the third directly from the equation of state under consideration. To calculate the third state property, it is necessary to know three constants for the species at hand: the critical temperature Tc, critical pressure Pc, and the acentric factor ω. But once these constants are known, it is possible to evaluate all of the above expressions and hence determine the enthalpy and entropy departures.
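A sketch of how these expressions might be evaluated is given below. It relies on the standard Peng–Robinson cubic in Z, Z³ − (1−B)Z² + (A−3B²−2B)Z − (AB−B²−B³) = 0 with A = 0.45724 α Pr/Tr², which is textbook background not restated in this article; treat the code as an illustrative outline rather than a validated property package, and the methane-like constants as rounded placeholders.

```python
import numpy as np

R = 8.314462618  # J/(mol K)

def pr_departures(T, P, Tc, Pc, omega):
    """Enthalpy and entropy departures (real - ideal) per mole from the
    Peng-Robinson EOS, following the expressions above."""
    Tr, Pr = T / Tc, P / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(Tr))) ** 2
    A = 0.45724 * alpha * Pr / Tr**2     # standard PR dimensionless 'a'
    B = 0.07780 * Pr / Tr                # as defined above
    # Largest real root of the PR cubic = vapor-phase Z.
    roots = np.roots([1.0, -(1.0 - B), A - 3*B**2 - 2*B, -(A*B - B**2 - B**3)])
    Z = max(r.real for r in roots if abs(r.imag) < 1e-10)
    log_term = np.log((Z + 2.414 * B) / (Z - 0.414 * B))
    h_dep = R * Tc * (Tr * (Z - 1.0)
                      - 2.078 * (1.0 + kappa) * np.sqrt(alpha) * log_term)
    s_dep = R * (np.log(Z - B)
                 - 2.078 * kappa * ((1.0 + kappa) / np.sqrt(Tr) - kappa) * log_term)
    return h_dep, s_dep

print(pr_departures(T=300.0, P=5e6, Tc=190.6, Pc=4.599e6, omega=0.011))
```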
== References ==
== Correlated terms ==
Residual property (physics)
The Benedict–Webb–Rubin equation (BWR), named after Manson Benedict, G. B. Webb, and L. C. Rubin, is an equation of state used in fluid dynamics. Working at the research laboratory of the M. W. Kellogg Company, the three researchers rearranged the Beattie–Bridgeman equation of state and increased the number of experimentally determined constants to eight.
== The original BWR equation ==
{\displaystyle P=\rho RT+\left(B_{0}RT-A_{0}-{\frac {C_{0}}{T^{2}}}\right)\rho ^{2}+\left(bRT-a\right)\rho ^{3}+\alpha a\rho ^{6}+{\frac {c\rho ^{3}}{T^{2}}}\left(1+\gamma \rho ^{2}\right)\exp \left(-\gamma \rho ^{2}\right),}

where ρ is the molar density.
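As a minimal sketch, the original BWR relation can be evaluated directly once the eight constants are known; the function below is a straight transcription of the equation above, and any constants passed in would have to come from fitted tabulations, which are not reproduced here.

```python
import math

def bwr_pressure(rho, T, R, A0, B0, C0, a, b, c, alpha, gamma):
    """Pressure from the original Benedict-Webb-Rubin equation,
    with rho the molar density."""
    return (rho * R * T
            + (B0 * R * T - A0 - C0 / T**2) * rho**2
            + (b * R * T - a) * rho**3
            + alpha * a * rho**6
            + (c * rho**3 / T**2) * (1 + gamma * rho**2)
            * math.exp(-gamma * rho**2))
```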
== The BWRS equation of state ==
A modification of the Benedict–Webb–Rubin equation of state by Professor Kenneth E. Starling of the University of Oklahoma:
{\displaystyle P=\rho RT+\left(B_{0}RT-A_{0}-{\frac {C_{0}}{T^{2}}}+{\frac {D_{0}}{T^{3}}}-{\frac {E_{0}}{T^{4}}}\right)\rho ^{2}+\left(bRT-a-{\frac {d}{T}}\right)\rho ^{3}+\alpha \left(a+{\frac {d}{T}}\right)\rho ^{6}+{\frac {c\rho ^{3}}{T^{2}}}\left(1+\gamma \rho ^{2}\right)\exp \left(-\gamma \rho ^{2}\right),}

where ρ is the molar density. The 11 mixture parameters ({\displaystyle B_{0}}, {\displaystyle A_{0}}, etc.) are calculated using the following relations:
{\displaystyle {\begin{aligned}&A_{0}=\sum _{i}\sum _{j}x_{i}x_{j}A_{0i}^{1/2}A_{0j}^{1/2}(1-k_{ij})\\&B_{0}=\sum _{i}x_{i}B_{0i}\\&C_{0}=\sum _{i}\sum _{j}x_{i}x_{j}C_{0i}^{1/2}C_{0j}^{1/2}(1-k_{ij})^{3}\\&D_{0}=\sum _{i}\sum _{j}x_{i}x_{j}D_{0i}^{1/2}D_{0j}^{1/2}(1-k_{ij})^{4}\\&E_{0}=\sum _{i}\sum _{j}x_{i}x_{j}E_{0i}^{1/2}E_{0j}^{1/2}(1-k_{ij})^{5}\\&\alpha =\left[\sum _{i}x_{i}\alpha _{i}^{1/3}\right]^{3}\\&\gamma =\left[\sum _{i}x_{i}\gamma _{i}^{1/2}\right]^{2}\\&a=\left[\sum _{i}x_{i}a_{i}^{1/3}\right]^{3}\\&b=\left[\sum _{i}x_{i}b_{i}^{1/3}\right]^{3}\\&c=\left[\sum _{i}x_{i}c_{i}^{1/3}\right]^{3}\\&d=\left[\sum _{i}x_{i}d_{i}^{1/3}\right]^{3}\end{aligned}}}
where i and j are indices for the components, and the summations go over all components. B0i, A0i, etc. are the parameters of the pure components for the ith component, xi is the mole fraction of the ith component, and kij is an interaction parameter.
Values of the various parameters for 15 substances can be found in Starling's Fluid Properties for Light Petroleum Systems.
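The mixing rules above translate almost line-for-line into code. The sketch below is a schematic rendering with NumPy; the container names (`pure`, `k`) and array shapes are illustrative assumptions, not an established API.

```python
import numpy as np

def bwrs_mixture_params(x, pure, k):
    """Mixture parameters from the BWRS combining rules above.

    x:    mole fractions, shape (n,)
    pure: dict of per-component arrays: 'A0','B0','C0','D0','E0',
          'a','b','c','d','alpha','gamma', each shape (n,)
    k:    symmetric interaction parameters k[i,j], shape (n, n)
    """
    mix = {}
    # Quadratic rules with exponents 1, 3, 4, 5 on (1 - k_ij).
    for name, m in [('A0', 1), ('C0', 3), ('D0', 4), ('E0', 5)]:
        p = pure[name]
        mix[name] = np.sum(np.outer(x, x)
                           * np.outer(np.sqrt(p), np.sqrt(p))
                           * (1.0 - k) ** m)
    mix['B0'] = np.dot(x, pure['B0'])                        # linear rule
    for name, r in [('a', 3), ('b', 3), ('c', 3), ('d', 3),
                    ('alpha', 3), ('gamma', 2)]:
        mix[name] = np.dot(x, pure[name] ** (1.0 / r)) ** r  # power-mean rules
    return mix
```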
== The modified BWR equation (mBWR) ==
A further modification of the Benedict–Webb–Rubin equation of state by Jacobsen and Stewart:
{\displaystyle P=\sum _{n=1}^{9}a_{n}\rho ^{n}+\exp \left(-\gamma \rho ^{2}\right)\sum _{n=10}^{15}a_{n}\rho ^{2n-17}}
where {\displaystyle \gamma =1/\rho _{c}^{2}}.
The mBWR equation subsequently evolved into a 32 term version (Younglove and Ely, 1987) with numerical parameters determined by fitting the equation to empirical data for a reference fluid. Other fluids then are described by using reduced variables for temperature and density.
== See also ==
Real gas
== References ==
== Further reading ==
Benedict, Manson; Webb, George B.; Rubin, Louis C. (1942), "Mixtures of Methane, Ethane, Propane and n-Butane", Journal of Chemical Physics, 10 (12): 747–758, Bibcode:1942JChPh..10..747B, doi:10.1063/1.1723658, ISSN 0021-9606
Benedict, Manson; Webb, George B.; Rubin, Louis C. (1951), "An Empirical Equation for Thermodynamic Properties of Light Hydrocarbons and Their Mixtures. Constants for Twelve Hydrocarbons", Chemical Engineering Progress, 47 (8): 419–422
Benedict, Manson; Webb, George B.; Rubin, Louis C. (1951), "An Empirical Equation for Thermodynamic Properties of Light Hydrocarbons and Their Mixtures. Fugacities and Liquid-Vapor Equilibria", Chemical Engineering Progress, 47 (9): 449–454.
The state-transition equation is defined as the solution of the linear homogeneous state equation. The linear time-invariant state equation given by
{\displaystyle {\frac {d\mathbf {x} (t)}{dt}}=\mathbf {Ax} (t)+\mathbf {Bu} (t)+\mathbf {Ew} (t),}
with state vector x, control vector u, vector w of additive disturbances, and fixed matrices A, B, E can be solved by using either the classical method of solving linear differential equations or the Laplace transform method. The Laplace transform solution is presented in the following equations.
The Laplace transform of the above equation yields
{\displaystyle s\mathbf {X} (s)-\mathbf {x} (0)=\mathbf {AX} (s)+\mathbf {BU} (s)+\mathbf {EW} (s)}
where x(0) denotes the initial-state vector evaluated at t = 0. Solving for X(s) gives
{\displaystyle \mathbf {X} (s)=(s\mathbf {I} -\mathbf {A} )^{-1}\mathbf {x} (0)+(s\mathbf {I} -\mathbf {A} )^{-1}[\mathbf {BU} (s)+\mathbf {EW} (s)].}
So the state-transition equation can be obtained by taking the inverse Laplace transform:
{\displaystyle {\begin{aligned}\mathbf {x} (t)&={\mathcal {L}}^{-1}{\Bigl \{}(s\mathbf {I} -\mathbf {A} )^{-1}{\Bigr \}}\mathbf {x} (0)+{\mathcal {L}}^{-1}{\Bigl \{}(s\mathbf {I} -\mathbf {A} )^{-1}[\mathbf {BU} (s)+\mathbf {EW} (s)]{\Bigr \}}\\&=\mathbf {\Phi } (t)\mathbf {x} (0)+\int _{0}^{t}\mathbf {\Phi } (t-\tau )[\mathbf {Bu} (\tau )+\mathbf {Ew} (\tau )]\,d\tau \end{aligned}}}
where Φ(t) is the state transition matrix.
The state-transition equation as derived above is useful only when the initial time is defined to be at t = 0. In the study of control systems, especially discrete-data control systems, it is often desirable to break up a state-transition process into a sequence of transitions, so a more flexible initial time must be chosen. Let the initial time be represented by t0 and the corresponding initial state by x(t0), and assume that the input u(t) and the disturbance w(t) are applied at t ≥ 0.
Starting with the above equation by setting t = t0, and solving for x(0), we get
{\displaystyle \mathbf {x} (0)=\mathbf {\Phi } (-t_{0})\mathbf {x} (t_{0})-\mathbf {\Phi } (-t_{0})\int _{0}^{t_{0}}\mathbf {\Phi } (t_{0}-\tau )[\mathbf {Bu} (\tau )+\mathbf {Ew} (\tau )]d\tau .}
Once the state-transition equation is determined, the output vector can be expressed as a function of the initial state.
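For a concrete sense of the state-transition computation, the sketch below evaluates Φ(t) = e^{At} with SciPy and steps the state forward; the system matrices are hypothetical, the disturbance term is omitted, and the closed-form integral for a piecewise-constant input (the standard zero-order-hold result, valid when A is invertible) is background not derived above.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-state system x' = A x + B u (the E w term is omitted).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x = np.array([1.0, 0.0])           # initial state x(0)

dt, N = 0.01, 500
Phi = expm(A * dt)                  # state-transition matrix over one step
# For u held constant over a step, the convolution integral reduces to
# Gamma = A^{-1} (Phi - I) B   (A is invertible here).
Gamma = np.linalg.solve(A, Phi - np.eye(2)) @ B

for _ in range(N):
    u = np.array([1.0])             # unit-step input
    x = Phi @ x + Gamma @ u         # x(t+dt) = Phi x(t) + integral term

print(x)  # state after N*dt = 5 s; approaches the steady state -A^{-1} B u
```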
== See also ==
Control theory
Control engineering
Automatic control
Feedback
Process control
PID loop
== External links ==
Control System Toolbox for design and analysis of control systems.
http://web.mit.edu/2.14/www/Handouts/StateSpaceResponse.pdf
Wikibooks:Control Systems/State-Space Equations
http://planning.cs.uiuc.edu/node411.html
The Birch–Murnaghan isothermal equation of state, published in 1947 by Albert Francis Birch of Harvard, is a relationship between the volume of a body and the pressure to which it is subjected. Birch proposed this equation based on the work of Francis Dominic Murnaghan of Johns Hopkins University published in 1944, so that the equation is named in honor of both scientists.
== Expressions for the equation of state ==
The third-order Birch–Murnaghan isothermal equation of state is given by
{\displaystyle P(V)={\frac {3B_{0}}{2}}\left[\left({\frac {V_{0}}{V}}\right)^{7/3}-\left({\frac {V_{0}}{V}}\right)^{5/3}\right]\left\{1+{\frac {3}{4}}\left(B_{0}^{\prime }-4\right)\left[\left({\frac {V_{0}}{V}}\right)^{2/3}-1\right]\right\}.}
where P is the pressure, V0 is the reference volume, V is the deformed volume, B0 is the bulk modulus, and B0' is the derivative of the bulk modulus with respect to pressure. The bulk modulus and its derivative are usually obtained from fits to experimental data and are defined as
{\displaystyle B_{0}=-V\left({\frac {\partial P}{\partial V}}\right)_{P=0}}

and

{\displaystyle B_{0}'=\left({\frac {\partial B}{\partial P}}\right)_{P=0}}
The expression for the equation of state is obtained by expanding the Helmholtz free energy in powers of the finite strain parameter f, defined as
{\displaystyle f={\frac {1}{2}}\left[\left({\frac {V_{0}}{V}}\right)^{2/3}-1\right]\,,}
in the form of a series.: 68–69
This is more evident by writing the equation in terms of f. Expanded to third order in finite strain, the equation reads,: 72
{\displaystyle P(f)=3B_{0}f(1+2f)^{5/2}(1+af+{\mathit {higher~order~terms}})\,,}
with {\displaystyle a={\frac {3}{2}}(B_{0}'-4)}.
The internal energy, E(V), is found by integration of the pressure:
{\displaystyle E(V)=E_{0}+{\frac {9V_{0}B_{0}}{16}}\left\{\left[\left({\frac {V_{0}}{V}}\right)^{2/3}-1\right]^{3}B_{0}^{\prime }+\left[\left({\frac {V_{0}}{V}}\right)^{2/3}-1\right]^{2}\left[6-4\left({\frac {V_{0}}{V}}\right)^{2/3}\right]\right\}.}
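A short numerical sketch of the third-order form: the function below evaluates P(V) from B0, B0', and V0. The parameter values are placeholders with magnitudes typical of a stiff oxide, not data from any cited source.

```python
def birch_murnaghan_p(V, V0, B0, B0p):
    """Third-order Birch-Murnaghan pressure P(V), as written above."""
    r = (V0 / V) ** (1.0 / 3.0)     # so r**2 = (V0/V)^(2/3), etc.
    return (1.5 * B0 * (r**7 - r**5)
            * (1.0 + 0.75 * (B0p - 4.0) * (r**2 - 1.0)))

# Placeholder parameters: B0 = 160 GPa, B0' = 4.2, V0 = 75 A^3.
print(birch_murnaghan_p(V=70.0, V0=75.0, B0=160.0, B0p=4.2))  # pressure in GPa
```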
== See also ==
Albert Francis Birch
Francis Dominic Murnaghan
Murnaghan equation of state
== References ==
In physics and thermodynamics, the Redlich–Kwong equation of state is an empirical, algebraic equation that relates temperature, pressure, and volume of gases. It is generally more accurate than the van der Waals equation and the ideal gas equation at temperatures above the critical temperature. It was formulated by Otto Redlich and Joseph Neng Shun Kwong in 1949. It showed that a two-parameter, cubic equation of state could reflect reality well in many situations, standing alongside the much more complicated Beattie–Bridgeman model and Benedict–Webb–Rubin equation that were in use at the time. Although it was initially developed for gases, the Redlich–Kwong equation is considered the most-modified equation of state, as those modifications have aimed to generalize its predictive results. While the original equation is no longer employed in practical applications, modifications derived from it, such as the Soave–Redlich–Kwong (SRK) and Peng–Robinson equations, have been refined and are currently used in the simulation and study of vapor–liquid equilibria.
== Equation ==
The Redlich–Kwong equation is formulated as:
{\displaystyle p={\frac {R\,T}{V_{m}-b}}-{\frac {a}{{\sqrt {T}}\;V_{m}\,(V_{m}+b)}},}
where:
p is the gas pressure
R is the gas constant,
T is temperature,
Vm is the molar volume (V/n),
a is a constant that corrects for attractive potential of molecules, and
b is a constant that corrects for volume.
The constants are different depending on which gas is being analyzed. The constants can be calculated from the critical point data of the gas:
{\displaystyle {\begin{aligned}a&={\frac {1}{9({\sqrt[{3}]{2}}-1)}}\,{\frac {R^{2}\,{T_{c}}^{2.5}}{P_{c}}}=0.42748\,{\frac {R^{2}\,{T_{c}}^{2.5}}{P_{c}}},\\[1ex]b&={\frac {{\sqrt[{3}]{2}}-1}{3}}\,{\frac {R\,T_{c}}{P_{c}}}=0.08664\,{\frac {R\,T_{c}}{P_{c}}},\end{aligned}}}
where:
Tc is the temperature at the critical point, and
Pc is the pressure at the critical point.
The Redlich–Kwong equation can also be represented as an equation for the compressibility factor of gas, as a function of temperature and pressure:
{\displaystyle Z={\frac {p\,V_{m}}{R\,T}}={\frac {1}{1-h}}\ -{\frac {A^{2}}{B}}{\frac {h}{1+h}}}
where:
{\displaystyle A^{2}={\frac {a}{R^{2}\,T^{5/2}}}={\frac {0.42748\,{T_{c}}^{5/2}}{P_{c}\,T^{5/2}}}}

{\displaystyle B={\frac {b}{R\,T}}={\frac {0.08664\,T_{c}}{P_{c}\,T}}}

{\displaystyle h={\frac {B\,p}{Z}}={\frac {b}{V_{m}}}}
Or more simply:
{\displaystyle Z={\frac {pV_{m}}{RT}}={\frac {V_{m}}{V_{m}-b}}-{\frac {a}{RT^{3/2}\left(V_{m}+b\right)}}}
This equation only implicitly gives Z as a function of pressure and temperature, but is easily solved numerically, originally by graphical interpolation, and now more easily by computer. Moreover, analytic solutions to cubic functions have been known for centuries and are even faster for computers. The Redlich-Kwong equation of state may also be expressed as a cubic function of the molar volume.
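As an illustration of the numerical route, the sketch below forms the standard cubic in Z for the Redlich–Kwong equation, Z³ − Z² + (A′ − B′ − B′²)Z − A′B′ = 0 with A′ = aP/(R²T^{5/2}) and B′ = bP/(RT), and takes the largest real root for the vapor phase. The cubic form itself is standard background rather than something derived in this article, and the CO2-like constants are rounded placeholders.

```python
import numpy as np

R = 8.314462618  # J/(mol K)

def rk_Z(T, P, Tc, Pc):
    """Vapor-phase compressibility factor from the Redlich-Kwong EOS."""
    a = 0.42748 * R**2 * Tc**2.5 / Pc
    b = 0.08664 * R * Tc / Pc
    A = a * P / (R**2 * T**2.5)    # dimensionless attraction term
    B = b * P / (R * T)            # dimensionless covolume term
    roots = np.roots([1.0, -1.0, A - B - B**2, -A * B])
    return max(r.real for r in roots if abs(r.imag) < 1e-10)

print(rk_Z(T=350.0, P=2e6, Tc=304.1, Pc=7.38e6))
```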
For all Redlich–Kwong gases:
{\displaystyle Z_{c}={\frac {1}{3}}}
where:
Zc is the compressibility factor at the critical point
Using {\displaystyle p_{r}=p/P_{\text{c}}}, {\displaystyle V_{r}=V_{\text{m}}/V_{\text{m,c}}}, and {\displaystyle T_{r}=T/T_{\text{c}}},
the equation of state can be written in the reduced form:
{\displaystyle p_{r}={\frac {Z_{c}^{-1}T_{r}}{V_{r}-0.08664Z_{c}^{-1}}}-{\frac {0.42748Z_{c}^{-2}}{{\sqrt {T_{r}}}V_{r}\left(V_{r}+{0.08664}Z_{c}^{-1}\right)}}}
And since {\displaystyle Z_{c}^{-1}=3}, it follows:
{\displaystyle p_{r}={\frac {3T_{r}}{V_{r}-b'}}-{\frac {1}{b'{\sqrt {T_{r}}}V_{r}\left(V_{r}+b'\right)}}}

with {\displaystyle b'={\sqrt[{3}]{2}}-1\approx 0.26}.
From the Redlich–Kwong equation, the fugacity coefficient of a gas can be estimated:
{\displaystyle \ln \phi =\int _{0}^{P}{{\frac {Z-1}{p}}dP}=Z-1-\ln \left(Z-B\,P\right)-{\frac {A^{2}}{B}}\,\ln \left(1+{\frac {B\,P}{Z}}\right)}
=== Critical constants ===
It is possible to express the critical constants Tc and Pc as functions of a and b by inverting the following system of two equations, a(Tc, Pc) and b(Tc, Pc), in the two variables Tc and Pc:
{\displaystyle {\begin{aligned}a&={\frac {1}{9({\sqrt[{3}]{2}}-1)}}\,{\frac {R^{2}\,{T_{c}}^{5/2}}{P_{c}}}={\frac {1}{9({\sqrt[{3}]{2}}-1)}}\,{\frac {R^{2}\,{T_{c}}^{5/2}}{{\frac {{\sqrt[{3}]{2}}-1}{3}}\,{\frac {R\,T_{c}}{b}}}}\\[1ex]\implies a&={\frac {bR\,{T_{c}}^{3/2}}{3{\left({\sqrt[{3}]{2}}-1\right)}^{2}}}\\[1ex]\implies T_{c}&=3^{2/3}{\left({\sqrt[{3}]{2}}-1\right)}^{4/3}{\left({\frac {a}{bR}}\right)}^{2/3}\end{aligned}}}
{\displaystyle {\begin{aligned}b&={\frac {{\sqrt[{3}]{2}}-1}{3}}\,{\frac {R\,T_{c}}{P_{c}}}\\[1ex]\implies P_{c}&={\frac {{\sqrt[{3}]{2}}-1}{3}}\,{\frac {R\,T_{c}}{b}}\\[1ex]\implies P_{c}&={\frac {({\sqrt[{3}]{2}}-1)^{7/3}}{3^{1/3}}}R^{1/3}{\frac {a^{2/3}}{b^{5/3}}}\end{aligned}}}
From the definition of the compressibility factor at the critical point, it is possible to find the critical molar volume Vm,c using the previously found Pc and Tc together with Zc = 1/3:
{\displaystyle Z={\frac {PV_{m}}{RT}}\implies Z_{c}={\frac {P_{c}V_{m,c}}{RT_{c}}}\implies V_{m,c}=Z_{c}{\frac {RT_{c}}{P_{c}}}}
{\displaystyle V_{m,c}={\frac {R}{3}}{\frac {3^{2/3}({\sqrt[{3}]{2}}-1)^{4/3}({\frac {a}{bR}})^{2/3}}{{\frac {({\sqrt[{3}]{2}}-1)^{7/3}}{3^{1/3}}}R^{1/3}{\frac {a^{2/3}}{b^{5/3}}}}}={\frac {R}{3}}{\frac {3b}{R({\sqrt[{3}]{2}}-1)}}={\frac {b}{{\sqrt[{3}]{2}}-1}}}
=== Multiple components ===
The Redlich–Kwong equation was developed with an intent to also be applicable to mixtures of gases. In a mixture, the b term, representing the volume of the molecules, is an average of the b values of the components, weighted by the mole fractions:
{\displaystyle b=\sum _{i}\sum _{j}x_{i}\,x_{j}b_{ij},}
or
{\displaystyle B=\sum _{i}x_{i}\,B_{i}}
where:
xi is the mole fraction of the ith component of the mixture,
bij is the covolume parameter of the i-j pair in the mixture, and
Bi is the B value of the ith component of the mixture
The cross-terms bij (i.e. terms for which {\displaystyle i\neq j}) are commonly computed as

{\displaystyle b_{ij}={\frac {b_{i}+b_{j}}{2}}(1-l_{ij}),}
where {\displaystyle l_{ij}} is an often empirically fitted interaction parameter accounting for asymmetry in the cross interactions.
The constant representing the attractive forces, a, is not linear with respect to mole fraction, but rather depends on the square of the mole fractions. That is:
{\displaystyle a=\sum _{i}\sum _{j}x_{i}\,x_{j}\,a_{i\,j}}
where:
ai j is the attractive term between a molecule of species i and species j,
xi is the mole fraction of the ith component of the mixture, and
xj is the mole fraction of the jth component of the mixture.
It is generally assumed that the attractive cross terms represent the geometric average of the individual a terms, adjusted using an interaction parameter {\displaystyle k_{ij}}, that is:

{\displaystyle a_{i\,j}=(a_{i}\,a_{j})^{1/2}(1-k_{ij}),}
where the interaction parameter {\displaystyle k_{ij}} is an often empirically fitted parameter accounting for asymmetry in the molecular cross-interactions. In this case, the following equation for the attractive term is obtained:
{\displaystyle A=\sum _{i}x_{i}\,A_{i}}
where Ai is the A term for the ith component of the mixture.
These manners of creating a and b parameters for a mixture from the parameters of the pure fluids are commonly known as the van der Waals one-fluid mixing and combining rules.
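A compact sketch of these one-fluid rules follows; the function name, argument layout, and optional interaction matrices are illustrative assumptions.

```python
import numpy as np

def mixture_ab(x, a_pure, b_pure, k=None, l=None):
    """Mixture a and b from van der Waals one-fluid mixing rules.

    x: mole fractions (n,); a_pure, b_pure: pure-component parameters (n,);
    k, l: optional symmetric interaction-parameter matrices (n, n).
    """
    n = len(x)
    k = np.zeros((n, n)) if k is None else k
    l = np.zeros((n, n)) if l is None else l
    a_ij = np.sqrt(np.outer(a_pure, a_pure)) * (1.0 - k)            # geometric mean
    b_ij = 0.5 * (b_pure[:, None] + b_pure[None, :]) * (1.0 - l)    # arithmetic mean
    xx = np.outer(x, x)
    return np.sum(xx * a_ij), np.sum(xx * b_ij)
```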
== History ==
The Van der Waals equation, formulated in 1873 by Johannes Diderik van der Waals, is generally regarded as the first somewhat realistic equation of state (beyond the ideal gas law):
{\displaystyle p={\frac {RT}{V_{\mathrm {m} }-b}}-{\frac {a}{V_{\mathrm {m} }^{2}}}}
However, its modeling of real behavior is not sufficient for many applications, and by 1949 it had fallen out of favor, with the Beattie–Bridgeman and Benedict–Webb–Rubin equations of state being used preferentially, both of which contain more parameters than the Van der Waals equation. The Redlich–Kwong equation was developed by Redlich and Kwong while they were both working for the Shell Development Company at Emeryville, California. Kwong had begun working at Shell in 1944, where he met Otto Redlich when he joined the group in 1945. The equation arose out of their work at Shell: they wanted an easy, algebraic way to relate the pressures, volumes, and temperatures of the gases they were working with, mostly non-polar and slightly polar hydrocarbons (the Redlich–Kwong equation is less accurate for hydrogen-bonding gases). It was presented jointly in Portland, Oregon at the Symposium on Thermodynamics and Molecular Structure of Solutions in 1948, as part of the 14th Meeting of the American Chemical Society. The success of the Redlich–Kwong equation in modeling many real gases accurately demonstrates that a cubic, two-parameter equation of state can give adequate results, if it is properly constructed. After they demonstrated the viability of such equations, many others created equations of similar form to try to improve on the results of Redlich and Kwong.
== Derivation ==
The equation is essentially empirical – the derivation is neither direct nor rigorous. The Redlich–Kwong equation is very similar to the Van der Waals equation, with only a slight modification being made to the attractive term, giving that term a temperature dependence. At high pressures, the volume of all gases approaches some finite volume, largely independent of temperature, that is related to the size of the gas molecules. This volume is reflected in the b in the equation. It is empirically true that this volume is about 0.26Vc (where Vc is the volume at the critical point). This approximation is quite good for many small, non-polar compounds – the value ranges between about 0.24Vc and 0.28Vc. In order for the equation to provide a good approximation of volume at high pressures, it had to be constructed such that
{\displaystyle b=0.26\ V_{c}.}
The first term in the equation represents this high-pressure behavior.
The second term corrects for the attractive force of the molecules to each other. The functional form of a with respect to the critical temperature and pressure is empirically chosen to give the best fit at moderate pressures for most relatively non-polar gases.
=== In reality ===
The values of a and b are completely determined by the equation's shape and cannot be empirically chosen. Requiring it to hold at its critical point
{\displaystyle P=P_{c}}, {\displaystyle V=V_{c}},

{\displaystyle P_{c}={\frac {R\,T_{c}}{V_{c}-b}}-{\frac {a}{{\sqrt {T_{c}}}\;V_{c}\,(V_{c}+b)}},}
enforcing the thermodynamic criteria for a critical point,
{\displaystyle \left({\frac {\partial P}{\partial V}}\right)_{T}=0,\left({\frac {\partial ^{2}P}{\partial V^{2}}}\right)_{T}=0,}
and without loss of generality defining {\displaystyle b=b'V_{c}} and {\displaystyle V_{c}=Z_{c}RT_{c}/P_{c}} yields 3 constraints,
{\displaystyle {\begin{aligned}a&={\frac {(1+b')^{2}}{(b'-1)^{2}(2+b')}}{\frac {R^{2}T_{c}^{5/2}Z_{c}}{P_{c}}}\\[2pt]a&={\frac {(1+b')^{3}}{(1-b')^{3}(3+3b'+b'^{2})}}{\frac {R^{2}T_{c}^{5/2}Z_{c}}{P_{c}}}\\[2pt]a&={\frac {(1+b')(1-Z_{c}+b'Z_{c})}{b'-1}}{\frac {R^{2}T_{c}^{5/2}Z_{c}}{P_{c}}}.\end{aligned}}}
Simultaneously solving these while requiring b' and Zc to be positive yields only one solution:
{\displaystyle {\begin{aligned}Z_{c}&={\frac {1}{3}},\qquad \qquad b'={\sqrt[{3}]{2}}-1,\\[1ex]a&={\frac {P_{c}V_{c}^{2}{\sqrt {T_{c}}}}{b'}}={\frac {1}{{\sqrt[{3}]{2}}-1}}\,{\frac {R^{2}\,{T_{c}}^{5/2}}{9P_{c}}}.\end{aligned}}}
== Modification ==
The Redlich–Kwong equation was designed largely to predict the properties of small, non-polar molecules in the vapor phase, which it generally does well. However, it has been subject to various attempts to refine and improve it. In 1975, Redlich himself published an equation of state adding a third parameter, in order to better model the behavior of both long-chained molecules, as well as more polar molecules. His 1975 equation was not so much a modification to the original equation as a re-inventing of a new equation of state, and was also formulated so as to take advantage of computer calculation, which was not available at the time the original equation was published. Many others have offered competing equations of state, either modifications to the original equation, or equations quite different in form. It was recognized by the mid 1960s that to significantly improve the equation, the parameters, especially a, would need to become temperature dependent. As early as 1966, Barner noted that the Redlich–Kwong equation worked best for molecules with an acentric factor (ω) close to zero. He therefore proposed a modification to the attractive term:
{\displaystyle a=\alpha +\gamma \,T^{-1.5}}
where
α is the attractive term in the original Redlich–Kwong equation
γ is a parameter related to ω, with γ = 0 for ω = 0
It soon became desirable to obtain an equation that would also model well the vapor–liquid equilibrium (VLE) properties of fluids, in addition to the vapor-phase properties. Perhaps the best-known application of the Redlich–Kwong equation was in calculating gas fugacities of hydrocarbon mixtures, which it does well; these fugacities were then used in the VLE model developed by Chao and Seader in 1961. However, in order for the Redlich–Kwong equation to stand on its own in modeling vapor–liquid equilibria, more substantial modifications needed to be made. The most successful of these modifications is the Soave modification, proposed in 1972. Soave's modification involved replacing the T^{1/2} power found in the denominator of the attractive term of the original equation with a more complicated temperature-dependent expression. He presented the equation as follows:
{\displaystyle P={\frac {R\,T}{V_{m}-b}}-{\frac {a\,\alpha }{V_{m}(V_{m}+b)}}}
where
{\displaystyle \alpha =\left(1+(0.480+1.574\,\omega -0.176\,\omega ^{2})(1-{\sqrt {T_{r}}})\right)^{2},}

{\displaystyle a={\frac {1}{9({\sqrt[{3}]{2}}-1)}}\,{\frac {R^{2}\,{T_{c}}^{2}}{P_{c}}}=0.42748\,{\frac {R^{2}\,{T_{c}}^{2}}{P_{c}}},}

{\displaystyle b={\frac {{\sqrt[{3}]{2}}-1}{3}}\,{\frac {R\,T_{c}}{P_{c}}}=0.08664\,{\frac {R\,T_{c}}{P_{c}}},}
Tr is the reduced temperature of the compound, and
ω is the acentric factor
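A tiny sketch of Soave's temperature dependence, written directly from the α expression above (the sample inputs are arbitrary):

```python
import math

def soave_alpha(Tr, omega):
    """Soave's alpha(Tr, omega) for the SRK equation of state."""
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    return (1.0 + m * (1.0 - math.sqrt(Tr))) ** 2

print(soave_alpha(Tr=0.85, omega=0.2))  # > 1 below the critical temperature
```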
The Peng–Robinson equation of state further modified the Redlich–Kwong equation by modifying the attractive term, giving
{\displaystyle p={\frac {R\,T}{V_{m}-b}}-{\frac {a\,\alpha }{V_{m}\,(V_{m}+b)+b\,(V_{m}-b)}}}
the parameters a, b, and α are slightly modified, with
{\displaystyle {\begin{aligned}a&={\frac {0.457235\,R^{2}\,T_{c}^{2}}{p_{c}}}\\b&={\frac {0.077796\,R\,T_{c}}{p_{c}}}\\\alpha &=\left(1+(0.37464+1.54226\omega -0.26992\omega ^{2})(1-{\sqrt {T_{r}}})\right)^{2}\end{aligned}}}
The Peng–Robinson equation typically gives similar VLE equilibria properties as the Soave modification, but often gives better estimations of the liquid phase density.
Several modifications have been made that attempt to more accurately represent the first term, related to the molecular size. The first significant modification of the repulsive term beyond the Van der Waals equation's
{\displaystyle P_{\text{hs}}={\frac {R\,T}{V_{m}-b}}={\frac {R\,T}{V_{m}}}\,{\frac {1}{1-{\frac {b}{V_{m}}}}}}
(where Phs represents a hard-sphere equation-of-state term) was developed in 1963 by Thiele:
{\displaystyle P_{\text{hs}}={\frac {R\,T}{V_{m}}}\,{\frac {1-\eta ^{3}}{(1-\eta )^{4}}}}
where {\displaystyle \eta ={\frac {b}{4\,V_{m}}}.}
This expression was improved by Carnahan and Starling to give
{\displaystyle P_{\text{hs}}={\frac {R\,T}{V_{m}}}\,{\frac {1+\eta +\eta ^{2}-\eta ^{3}}{{\left(1-\eta \right)}^{3}}}}
The Carnahan–Starling hard-sphere equation of state has been used extensively in developing other equations of state, and tends to give very good approximations for the repulsive term.
Beyond improved two-parameter equations of state, a number of three parameter equations have been developed, often with the third parameter depending on either Zc, the compressibility factor at the critical point, or ω, the acentric factor. Schmidt and Wenzel proposed an equation of state with an attractive term that incorporates the acentric factor:
{\displaystyle P={\frac {R\,T}{V_{m}-b}}-{\frac {a}{V_{m}^{2}+(1+3\,\omega )bV_{m}-3\omega b^{2}}}}
This equation reduces to the original Redlich–Kwong equation in the case when ω = 0, and to the Peng–Robinson equation when ω = 1/3.
== See also ==
== References ==
Statistical associating fluid theory (SAFT) is a chemical theory, based on perturbation theory, that uses statistical thermodynamics to explain how complex fluids and fluid mixtures form associations through hydrogen bonds. Widely used in industry and academia, it has become a standard approach for describing complex mixtures. Since it was first proposed in 1990, SAFT has been used in a large number of molecular-based equation of state models for describing the Helmholtz energy contribution due to association.
== Overview ==
SAFT is a Helmholtz energy term that can be used in equations of state that describe the thermodynamic and phase-equilibrium properties of pure fluids and fluid mixtures. SAFT was developed using statistical mechanics. It models the Helmholtz free energy contribution due to association, i.e. hydrogen bonding, and can be used in combination with other Helmholtz free energy terms, which account for contributions such as Lennard-Jones interactions and covalent chain-forming bonds, as well as association between segments. SAFT has been applied to a wide range of fluids, including supercritical fluids, polymers, liquid crystals, electrolytes, surfactant solutions, and refrigerants.
== Development ==
SAFT evolved from thermodynamic theories, including perturbation theories developed in the 1960s, 1970s, and 1980s by John Barker and Douglas Henderson, Keith Gubbins and Chris Gray, and, in particular, Michael Wertheim's first-order, thermodynamic perturbation theory (TPT1) outlined in a series of papers in the 1980s.
The SAFT equation of state was developed using statistical mechanical methods (in particular the perturbation theory of Wertheim) to describe the interactions between molecules in a system. The idea of a SAFT equation of state was first proposed by Walter G. Chapman and by Chapman et al. in 1988 and 1989. Many different versions of the SAFT models have been proposed, but all use the same chain and association terms derived by Chapman et al. One of the first SAFT papers (1990) titled "New reference equation of state for associating liquids" by Walter G. Chapman, Keith Gubbins, George Jackson, and Maciej Radosz, was recognized in 2007 by Industrial and Engineering Chemistry Research as one of the most highly cited papers of the previous three decades. SAFT is one of the first theories to accurately describe (in comparison with molecular simulation) the effects on fluid properties of molecular size and shape in addition to association between molecules.
== Variations ==
Many variations of SAFT have been developed since the 1990s, including HR-SAFT (Huang–Radosz SAFT), PC-SAFT (perturbed-chain SAFT), Polar SAFT, PCP-SAFT (PC-polar-SAFT), soft-SAFT, polar soft-SAFT, SAFT-VR (variable range), and SAFT-VR Mie. The SAFT association term has also been used in combination with cubic equations of state describing the dispersive-repulsive interactions, for example in the Cubic-Plus-Association (CPA) equation of state and the SAFT + cubic model, and in non-random-lattice (NLF) models based on lattice field theory.
== References ==
In fluid mechanics, the Tait equation is an equation of state used to relate liquid density to hydrostatic pressure. The equation was originally published by Peter Guthrie Tait in 1888 in the form
{\displaystyle {\frac {V_{0}-V}{PV_{0}}}={\frac {A}{\Pi +P}}}
where {\displaystyle P} is the hydrostatic pressure in addition to the atmospheric one, {\displaystyle V_{0}} is the volume at atmospheric pressure, {\displaystyle V} is the volume under additional pressure {\displaystyle P}, and {\displaystyle A,\Pi } are experimentally determined parameters.
A very detailed historical study of the Tait equation, with the physical interpretation of the two parameters {\displaystyle A} and {\displaystyle \Pi }, is given in the reference.
== Tait–Tammann equation of state ==
In 1895, the original isothermal Tait equation was replaced by Tammann with an equation of the form
{\displaystyle {K}=-{V_{0}}\left({\frac {\partial P}{\partial V}}\right)_{T}={V_{0}}{\frac {(B+P)}{C}}}
where {\displaystyle K} is the isothermal mixed bulk modulus.
This above equation is popularly known as the Tait equation.
The integrated form is commonly written
{\displaystyle V=V_{0}-C\log \left({\frac {B+P}{B+P_{0}}}\right)}
where {\displaystyle V} is the specific volume of the substance (in units of ml/g or m³/kg), {\displaystyle V_{0}} is the specific volume at {\displaystyle P=P_{0}}, and {\displaystyle B} (same units as {\displaystyle P_{0}}) and {\displaystyle C} (same units as {\displaystyle V_{0}}) are functions of temperature.
=== Pressure formula ===
The expression for the pressure in terms of the specific volume is
{\displaystyle P=(B+P_{0})\exp \left(-{\frac {V-V_{0}}{C}}\right)-B\,.}
A highly detailed study of the Tait–Tammann equation of state, with the physical interpretation of the two empirical parameters {\displaystyle C} and {\displaystyle B}, is given in chapter 3 of the reference. Expressions as a function of temperature for the two empirical parameters {\displaystyle C} and {\displaystyle B} are given for water, seawater, helium-4, and helium-3 in the entire liquid phase up to the critical temperature {\displaystyle T_{c}}. The special case of the supercooled phase of water is discussed in Appendix D of the reference. The case of liquid argon between the triple-point temperature and 148 K is dealt with in detail in section 6 of the reference.
== Tait–Murnaghan equation of state ==
Another popular isothermal equation of state that goes by the name "Tait equation" is the Murnaghan model which is sometimes expressed as
{\displaystyle {\frac {V}{V_{0}}}=\left[1+{\frac {n}{K_{0}}}\,(P-P_{0})\right]^{-1/n}}
where {\displaystyle V} is the specific volume at pressure {\displaystyle P}, {\displaystyle V_{0}} is the specific volume at pressure {\displaystyle P_{0}}, {\displaystyle K_{0}} is the bulk modulus at {\displaystyle P_{0}}, and {\displaystyle n} is a material parameter.
=== Pressure formula ===
This equation, in pressure form, can be written as
{\displaystyle P={\frac {K_{0}}{n}}\left[\left({\frac {V_{0}}{V}}\right)^{n}-1\right]+P_{0}={\frac {K_{0}}{n}}\left[\left({\frac {\rho }{\rho _{0}}}\right)^{n}-1\right]+P_{0}.}
where {\displaystyle \rho ,\rho _{0}} are the mass densities at {\displaystyle P,P_{0}}, respectively.
For pure water, typical parameters are {\displaystyle P_{0}} = 101,325 Pa, {\displaystyle \rho _{0}} = 1000 kg/m³, {\displaystyle K_{0}} = 2.15 GPa, and {\displaystyle n} = 7.15.
Note that this form of the Tait equation of state is identical to that of the Murnaghan equation of state.
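Using the pure-water parameters just quoted, a minimal sketch of the pressure–density relation:

```python
def tait_murnaghan_pressure(rho, rho0=1000.0, K0=2.15e9, n=7.15, P0=101325.0):
    """Pressure (Pa) from the Tait-Murnaghan form above, water by default."""
    return K0 / n * ((rho / rho0) ** n - 1.0) + P0

# Density raised 1% above the reference value:
print(tait_murnaghan_pressure(1010.0))  # roughly 2e7 Pa
```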
=== Bulk modulus formula ===
The tangent bulk modulus predicted by the MacDonald–Tait model is
{\displaystyle K=K_{0}\left({\frac {V_{0}}{V}}\right)^{n}\,.}
== Tumlirz–Tammann–Tait equation of state ==
A related equation of state that can be used to model liquids is the Tumlirz equation (sometimes called the Tammann equation and originally proposed by Tumlirz in 1909 and Tammann in 1911 for pure water). This relation has the form
{\displaystyle V(P,S,T)=V_{\infty }-K_{1}S+{\frac {\lambda }{P_{0}+K_{2}S+P}}}
where {\displaystyle V(P,S,T)} is the specific volume, {\displaystyle P} is the pressure, {\displaystyle S} is the salinity, {\displaystyle T} is the temperature, {\displaystyle V_{\infty }} is the specific volume when {\displaystyle P=\infty }, and {\displaystyle K_{1},K_{2},P_{0}} are parameters that can be fit to experimental data.
The Tumlirz–Tammann version of the Tait equation for fresh water, i.e., when {\displaystyle S=0}, is
{\displaystyle V=V_{\infty }+{\frac {\lambda }{P_{0}+P}}\,.}
For pure water, the temperature dependence of {\displaystyle V_{\infty },\lambda ,P_{0}} is:
{\displaystyle {\begin{aligned}\lambda &=1788.316+21.55053\,T-0.4695911\,T^{2}+3.096363\times 10^{-3}\,T^{3}-0.7341182\times 10^{-5}\,T^{4}\\P_{0}&=5918.499+58.05267\,T-1.1253317\,T^{2}+6.6123869\times 10^{-3}\,T^{3}-1.4661625\times 10^{-5}\,T^{4}\\V_{\infty }&=0.6980547-0.7435626\times 10^{-3}\,T+0.3704258\times 10^{-4}\,T^{2}-0.6315724\times 10^{-6}\,T^{3}\\&+0.9829576\times 10^{-8}\,T^{4}-0.1197269\times 10^{-9}\,T^{5}+0.1005461\times 10^{-11}\,T^{6}\\&-0.5437898\times 10^{-14}\,T^{7}+0.169946\times 10^{-16}\,T^{8}-0.2295063\times 10^{-19}\,T^{9}\end{aligned}}}
In the above fits, the temperature {\displaystyle T} is in degrees Celsius, {\displaystyle P_{0}} is in bars, {\displaystyle V_{\infty }} is in cc/gm, and {\displaystyle \lambda } is in bars·cc/gm.
=== Pressure formula ===
The inverse Tumlirz–Tammann–Tait relation for the pressure as a function of specific volume is
{\displaystyle P={\frac {\lambda }{V-V_{\infty }}}-P_{0}\,.}
=== Bulk modulus formula ===
The Tumlirz–Tammann–Tait formula for the instantaneous tangent bulk modulus of pure water is a quadratic function of {\displaystyle P} (for an alternative, see the reference):
{\displaystyle K=-V\,{\frac {\partial P}{\partial V}}={\frac {V\,\lambda }{(V-V_{\infty })^{2}}}=(P_{0}+P)+{\frac {V_{\infty }}{\lambda }}(P_{0}+P)^{2}\,.}
== Modified Tait equation of state ==
Following in particular studies of underwater explosions, and more precisely of the shock waves they emit, J. G. Kirkwood proposed in 1965 a form of the equation of state better suited to high pressures (>1 kbar), expressing the isentropic compressibility coefficient as
{\displaystyle -{\frac {1}{V}}\left({\frac {\partial V}{\partial P}}\right)_{S}={\frac {1}{n(B+P)}}}
where {\displaystyle S} here represents the entropy.
The two empirical parameters {\displaystyle n} and {\displaystyle B} are now functions of entropy, such that {\displaystyle n} is dimensionless and {\displaystyle B} has the same units as {\displaystyle P}.
The integration leads to the following expression for the volume {\displaystyle V(P,S)} along the isentrope {\displaystyle S}:

{\displaystyle {\frac {V}{V_{0}}}=\left(1+{\frac {P_{0}}{B}}\right)^{1/n}\left(1+{\frac {P}{B}}\right)^{-1/n}}
where {\displaystyle V_{0}=V(P_{0},S)}.
=== Pressure formula ===
The expression for the pressure {\displaystyle P(V,S)} in terms of the specific volume along the isentrope {\displaystyle S} is
{\displaystyle P=(B+P_{0})\,\left({\cfrac {V_{0}}{V}}\right)^{n}-B\,.}
A highly detailed study of the Modified Tait equation of state, with the physical interpretation of the two empirical parameters {\displaystyle n} and {\displaystyle B}, is given in chapter 4 of the reference. Expressions as a function of entropy for the two empirical parameters {\displaystyle n} and {\displaystyle B} are given for water, helium-3, and helium-4.
== See also ==
Equation of state
== References ==
The Mie–Grüneisen equation of state is an equation of state that relates the pressure and volume of a solid at a given temperature. It is used to determine the pressure in a shock-compressed solid. The Mie–Grüneisen relation is a special form of the Grüneisen model which describes the effect that changing the volume of a crystal lattice has on its vibrational properties. Several variations of the Mie–Grüneisen equation of state are in use.
The Grüneisen model can be expressed in the form
{\displaystyle \Gamma =V\left({\frac {dp}{de}}\right)_{V}}
where V is the volume, p is the pressure, e is the internal energy, and Γ is the Grüneisen parameter which represents the thermal pressure from a set of vibrating atoms. If we assume that Γ is independent of p and e, we can integrate Grüneisen's model to get
{\displaystyle p-p_{0}={\frac {\Gamma }{V}}(e-e_{0})}
where {\displaystyle p_{0}} and {\displaystyle e_{0}} are the pressure and internal energy at a reference state, usually assumed to be the state at which the temperature is 0 K. In that case p0 and e0 are independent of temperature and the values of these quantities can be estimated from the Hugoniot equations. The Mie–Grüneisen equation of state is a special form of the above equation.
== History ==
Gustav Mie, in 1903, developed an intermolecular potential for deriving high-temperature equations of state of solids. In 1912, Eduard Grüneisen extended Mie's model to temperatures below the Debye temperature at which quantum effects become important. Grüneisen's form of the equations is more convenient and has become the usual starting point for deriving Mie–Grüneisen equations of state.
== Expressions for the Mie–Grüneisen equation of state ==
A temperature-corrected version that is used in computational mechanics has the form: 61
{\displaystyle p={\frac {\rho _{0}C_{0}^{2}\chi \left[1-{\frac {\Gamma _{0}}{2}}\,\chi \right]}{\left(1-s\chi \right)^{2}}}+\Gamma _{0}E;\quad \chi :=1-{\cfrac {\rho _{0}}{\rho }}}
where {\displaystyle C_{0}} is the bulk speed of sound, {\displaystyle \rho _{0}} is the initial density, {\displaystyle \rho } is the current density, {\displaystyle \Gamma _{0}} is Grüneisen's gamma at the reference state, {\displaystyle s=dU_{s}/dU_{p}} is a linear Hugoniot slope coefficient, {\displaystyle U_{s}} is the shock wave velocity, {\displaystyle U_{p}} is the particle velocity, and {\displaystyle E} is the internal energy per unit reference volume. An alternative form is
{\displaystyle p={\frac {\rho _{0}C_{0}^{2}(\eta -1)\left[\eta -{\frac {\Gamma _{0}}{2}}(\eta -1)\right]}{\left[\eta -s(\eta -1)\right]^{2}}}+\Gamma _{0}E;\quad \eta :={\cfrac {\rho }{\rho _{0}}}\,.}
A rough estimate of the internal energy can be computed using
{\displaystyle E={\frac {1}{V_{0}}}\int C_{v}dT\approx {\frac {C_{v}(T-T_{0})}{V_{0}}}=\rho _{0}c_{v}(T-T_{0})}
where {\displaystyle V_{0}} is the reference volume at temperature {\displaystyle T=T_{0}}, {\displaystyle C_{v}} is the heat capacity, and {\displaystyle c_{v}} is the specific heat capacity at constant volume. In many simulations, it is assumed that {\displaystyle C_{p}} and {\displaystyle C_{v}} are equal.
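A minimal sketch of the temperature-corrected form above; the copper-like inputs are illustrative placeholders, not the tabulated parameters referred to in the next subsection.

```python
def mie_gruneisen_pressure(rho, E, rho0, C0, s, gamma0):
    """Pressure from the Mie-Gruneisen form given above.

    rho: current density; E: internal energy per unit reference volume.
    """
    chi = 1.0 - rho0 / rho
    return (rho0 * C0**2 * chi * (1.0 - 0.5 * gamma0 * chi)
            / (1.0 - s * chi) ** 2 + gamma0 * E)

# Illustrative inputs (SI units): 5% compression, thermal term set to zero.
print(mie_gruneisen_pressure(rho=8960.0 / 0.95, E=0.0,
                             rho0=8960.0, C0=3933.0, s=1.5, gamma0=1.99))
```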
=== Parameters for various materials ===
== Derivation of the equation of state ==
From Grüneisen's model we have

{\displaystyle p-p_{0}={\frac {\Gamma }{V}}(e-e_{0})\qquad (1)}

where {\displaystyle p_{0}} and {\displaystyle e_{0}} are the pressure and internal energy at a reference state. The Hugoniot equations for the conservation of mass, momentum, and energy are
{\displaystyle {\begin{aligned}\rho _{0}U_{s}&=\rho (U_{s}-U_{p})\,,\\[1ex]p_{H}-p_{H0}&=\rho _{0}U_{s}U_{p}\,,\\[1ex]p_{H}U_{p}&=\rho _{0}U_{s}\left({\frac {U_{p}^{2}}{2}}+E_{H}-E_{H0}\right)\end{aligned}}}
where ρ0 is the reference density, ρ is the density due to shock compression, pH is the pressure on the Hugoniot, EH is the internal energy per unit mass on the Hugoniot, Us is the shock velocity, and Up is the particle velocity. From the conservation of mass, we have
{\displaystyle {\frac {U_{p}}{U_{s}}}=1-{\frac {\rho _{0}}{\rho }}=1-{\frac {V}{V_{0}}}=:\chi \,.}
where we defined {\displaystyle V=1/\rho }, the specific volume (volume per unit mass).
For many materials Us and Up are linearly related, i.e., Us = C0 + s Up where C0 and s depend on the material. In that case, we have
{\displaystyle U_{s}=C_{0}+s\chi U_{s}\quad {\text{or}}\quad U_{s}={\frac {C_{0}}{1-s\chi }}\,.}
The momentum equation can then be written (for the principal Hugoniot where pH0 is zero) as
{\displaystyle p_{H}=\rho _{0}\chi U_{s}^{2}={\frac {\rho _{0}C_{0}^{2}\chi }{(1-s\chi )^{2}}}\,.}
Similarly, from the energy equation we have
{\displaystyle p_{H}\chi U_{s}={\tfrac {1}{2}}\rho \chi ^{2}U_{s}^{3}+\rho _{0}U_{s}E_{H}={\tfrac {1}{2}}p_{H}\chi U_{s}+\rho _{0}U_{s}E_{H}\,.}
Solving for eH, we have
{\displaystyle E_{H}={\tfrac {1}{2}}{\frac {p_{H}\chi }{\rho _{0}}}={\tfrac {1}{2}}p_{H}(V_{0}-V)}
With these expressions for pH and EH, the Grüneisen model on the Hugoniot becomes
{\displaystyle p_{H}-p_{0}={\frac {\Gamma }{V}}\left({\frac {p_{H}\chi V_{0}}{2}}-e_{0}\right)\quad {\text{or}}\quad {\frac {\rho _{0}C_{0}^{2}\chi }{(1-s\chi )^{2}}}\left(1-{\frac {\chi }{2}}\,{\frac {\Gamma }{V}}\,V_{0}\right)-p_{0}=-{\frac {\Gamma }{V}}e_{0}\,.}
If we assume that Γ/V = Γ0/V0 and note that {\displaystyle p_{0}=-de_{0}/dV}, we get

{\displaystyle {\frac {\rho _{0}C_{0}^{2}\chi }{(1-s\chi )^{2}}}\left(1-{\frac {\Gamma _{0}}{2}}\,\chi \right)+{\frac {de_{0}}{dV}}=-{\frac {\Gamma _{0}}{V_{0}}}\,e_{0}\,.\qquad (2)}

The above ordinary differential equation can be solved for e0 with the initial condition e0 = 0 when V = V0 (χ = 0). The exact solution is
{\displaystyle {\begin{aligned}e_{0}={\frac {\rho C_{0}^{2}V_{0}}{2s^{4}}}{\Biggl [}&\exp(\Gamma _{0}\chi )\left({\tfrac {\Gamma _{0}}{s}}-3\right)s^{2}-{\frac {\left[{\tfrac {\Gamma _{0}}{s}}-(3-s\chi )\right]s^{2}}{1-s\chi }}+\\&\exp \left[-{\tfrac {\Gamma _{0}}{s}}(1-s\chi )\right]\left(\Gamma _{0}^{2}-4\Gamma _{0}s+2s^{2}\right)\left({\text{Ei}}\left[{\tfrac {\Gamma _{0}}{s}}(1-s\chi )\right]-{\text{Ei}}\left[{\tfrac {\Gamma _{0}}{s}}\right]\right){\Biggr ]}\end{aligned}}}
where Ei[z] is the exponential integral. The expression for p0 is
{\displaystyle {\begin{aligned}p_{0}=-{\frac {de_{0}}{dV}}={\frac {\rho C_{0}^{2}}{2s^{4}(1-\chi )}}{\Biggl [}&{\frac {s}{(1-s\chi )^{2}}}{\Bigl (}-\Gamma _{0}^{2}(1-\chi )(1-s\chi )+\Gamma _{0}[s\{4(\chi -1)\chi s-2\chi +3\}-1]\\&\qquad \qquad \quad -\exp(\Gamma _{0}\chi )[\Gamma _{0}(\chi -1)-1](1-s\chi )^{2}(\Gamma _{0}-3s)+s[3-\chi s\{(\chi -2)s+4\}]{\Bigr )}\\&-\exp \left[-{\tfrac {\Gamma _{0}}{s}}(1-s\chi )\right]\left[\Gamma _{0}(\chi -1)-1\right]\left(\Gamma _{0}^{2}-4\Gamma _{0}s+2s^{2}\right)\left({\text{Ei}}\left[{\tfrac {\Gamma _{0}}{s}}(1-s\chi )\right]-{\text{Ei}}\left[{\tfrac {\Gamma _{0}}{s}}\right]\right){\Biggr ]}\,.\end{aligned}}}
For commonly encountered compression problems, an approximation to the exact solution is a power series solution of the form
{\displaystyle e_{0}(V)=A+B\chi (V)+C\chi ^{2}(V)+D\chi ^{3}(V)+\cdots }
and
{\displaystyle p_{0}(V)=-{\frac {de_{0}}{dV}}=-{\frac {de_{0}}{d\chi }}\,{\frac {d\chi }{dV}}={\frac {1}{V_{0}}}\,(B+2C\chi +3D\chi ^{2}+\cdots )\,.}
Substitution into the Grüneisen model gives us the Mie–Grüneisen equation of state
{\displaystyle p={\frac {1}{V_{0}}}\,(B+2C\chi +3D\chi ^{2}+\cdots )+{\frac {\Gamma _{0}}{V_{0}}}\left[e-(A+B\chi +C\chi ^{2}+D\chi ^{3}+\cdots )\right]\,.}
If we assume that the internal energy e0 = 0 when V = V0 (χ = 0) we have A = 0. Similarly, if we assume p0 = 0 when V = V0 we have B = 0. The Mie–Grüneisen equation of state can then be written as
{\displaystyle p={\frac {1}{V_{0}}}\left[2C\chi \left(1-{\tfrac {\Gamma _{0}}{2}}\chi \right)+3D\chi ^{2}\left(1-{\tfrac {\Gamma _{0}}{3}}\chi \right)+\cdots \right]+\Gamma _{0}E}
where E is the internal energy per unit reference volume. Several forms of this equation of state are possible.
If we take the first-order term and substitute it into equation (2), we can solve for C to get
{\displaystyle C={\frac {\rho _{0}C_{0}^{2}V_{0}}{2(1-s\chi )^{2}}}\,.}
Then we get the following expression for p:
{\displaystyle p={\frac {\rho _{0}C_{0}^{2}\chi }{(1-s\chi )^{2}}}\left(1-{\tfrac {\Gamma _{0}}{2}}\chi \right)+\Gamma _{0}E\,.}
This is the commonly used first-order Mie–Grüneisen equation of state.
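To make the first-order form concrete, here is a minimal Python sketch; the material parameters are illustrative copper-like values assumed for this example, not taken from the derivation above.

def mie_gruneisen_pressure(V, V0, rho0, C0, s, Gamma0, E):
    # chi = 1 - V/V0 measures the compression.
    chi = 1.0 - V / V0
    # Hugoniot-based cold term plus thermal term Gamma0 * E,
    # with E the internal energy per unit reference volume.
    cold = rho0 * C0**2 * chi * (1.0 - 0.5 * Gamma0 * chi) / (1.0 - s * chi)**2
    return cold + Gamma0 * E

# Example: ~10% compression with assumed copper-like parameters (SI units).
p = mie_gruneisen_pressure(V=0.9, V0=1.0, rho0=8960.0, C0=3933.0,
                           s=1.5, Gamma0=2.0, E=0.0)
print(f"p = {p / 1e9:.1f} GPa")  # roughly 17 GPa for these inputs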
== See also ==
Impact (mechanics)
Shock wave
Shock (mechanics)
Shock tube
Hydrostatic shock
Viscoplasticity
== References ==
The Anton–Schmidt equation is an empirical equation of state for crystalline solids, e.g. for pure metals or intermetallic compounds.
Quantum mechanical investigations of intermetallic compounds show that the dependency of the pressure under isotropic deformation can be described empirically by
{\displaystyle p(V)=-\beta \left({\frac {V}{V_{0}}}\right)^{n}\ln \left({\frac {V}{V_{0}}}\right)\,.}
Integration of {\displaystyle p(V)} leads to the equation of state for the total energy. The energy {\displaystyle E} required to compress a solid to volume {\displaystyle V} is
{\displaystyle E(V)=-\int _{V}^{\infty }p(V^{\prime })\,dV^{\prime }}
which gives
{\displaystyle E(V)={\frac {\beta V_{0}}{n+1}}\left({\frac {V}{V_{0}}}\right)^{n+1}\left[\ln \left({\frac {V}{V_{0}}}\right)-{\frac {1}{n+1}}\right]-E_{\infty }\,.}
The fitting parameters {\displaystyle \beta }, {\displaystyle n} and {\displaystyle V_{0}} are related to material properties, where {\displaystyle \beta } is the bulk modulus {\displaystyle K_{0}} at the equilibrium volume {\displaystyle V_{0}}, and {\displaystyle n} correlates with the Grüneisen parameter as
{\displaystyle n=-{\frac {1}{6}}-\gamma _{G}\,.}
However, the fitting parameter {\displaystyle E_{\infty }} does not reproduce the total energy of the free atoms.
The total energy equation is used to determine elastic and thermal material constants in quantum chemical simulation packages.
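As a rough illustration of these formulas, the following minimal Python sketch evaluates the Anton–Schmidt pressure and energy; beta, n, V0 and the example numbers are assumed fitting parameters chosen for demonstration, not values for any specific material.

import math

def anton_schmidt_pressure(V, V0, beta, n):
    # p(V) = -beta * (V/V0)^n * ln(V/V0)
    x = V / V0
    return -beta * x**n * math.log(x)

def anton_schmidt_energy(V, V0, beta, n, E_inf=0.0):
    # E(V) obtained by integrating -p(V) from V to infinity (see above).
    x = V / V0
    return beta * V0 / (n + 1) * x**(n + 1) * (math.log(x) - 1.0 / (n + 1)) - E_inf

# The pressure vanishes at the equilibrium volume V = V0, and
# compression (V < V0) yields positive pressure for beta > 0.
print(anton_schmidt_pressure(1.0, 1.0, beta=100.0, n=-2.0))  # 0.0
print(anton_schmidt_pressure(0.9, 1.0, beta=100.0, n=-2.0))  # positive
print(anton_schmidt_energy(0.9, 1.0, beta=100.0, n=-2.0))    # energy relative to E_inf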
The equation of state has also been used in cosmological contexts to describe the dynamics of dark energy. However, its use there has recently been criticized, since it appears to be disfavored relative to simpler equations of state adopted for the same purpose.
== See also ==
Murnaghan equation of state
Rose–Vinet equation of state
Birch–Murnaghan equation of state
== References ==
Entropy is one of the few quantities in the physical sciences that require a particular direction for time, sometimes called an arrow of time. As one goes "forward" in time, the second law of thermodynamics says, the entropy of an isolated system can increase, but not decrease. Thus, entropy measurement is a way of distinguishing the past from the future. In thermodynamic systems that are not isolated, local entropy can decrease over time, accompanied by a compensating entropy increase in the surroundings; examples include objects undergoing cooling, living systems, and the formation of typical crystals.
Much like temperature, despite being an abstract concept, everyone has an intuitive sense of the effects of entropy. For example, it is often very easy to tell the difference between a video being played forwards or backwards. A video may depict a wood fire that melts a nearby ice block; played in reverse, it would show a puddle of water turning a cloud of smoke into unburnt wood and freezing itself in the process. Surprisingly, in either case, the vast majority of the laws of physics are not broken by these processes, with the second law of thermodynamics being one of the only exceptions. When a law of physics applies equally when time is reversed, it is said to show T-symmetry; in this case, entropy is what allows one to decide if the video described above is playing forwards or in reverse as intuitively we identify that only when played forwards the entropy of the scene is increasing. Because of the second law of thermodynamics, entropy prevents macroscopic processes showing T-symmetry.
When studying at a microscopic scale, the above judgements cannot be made. Watching a single smoke particle buffeted by air, it would not be clear if a video was playing forwards or in reverse, and, in fact, it would not be possible as the laws which apply show T-symmetry. As it drifts left or right, qualitatively it looks no different; it is only when the gas is studied at a macroscopic scale that the effects of entropy become noticeable (see Loschmidt's paradox). On average it would be expected that the smoke particles around a struck match would drift away from each other, diffusing throughout the available space. It would be an astronomically improbable event for all the particles to cluster together, yet the movement of any one smoke particle cannot be predicted.
By contrast, certain subatomic interactions involving the weak nuclear force violate the conservation of parity, but only very rarely. According to the CPT theorem, this means they should also be time irreversible, and so establish an arrow of time. This, however, is neither linked to the thermodynamic arrow of time, nor has anything to do with the daily experience of time irreversibility.
== Overview ==
The second law of thermodynamics allows for the entropy to remain the same regardless of the direction of time. If the entropy is constant in either direction of time, there would be no preferred direction. However, the entropy can only be a constant if the system is in the highest possible state of disorder, such as a gas that always was, and always will be, uniformly spread out in its container. The existence of a thermodynamic arrow of time implies that the system is highly ordered in one time direction only, which would by definition be the "past". Thus this law is about the boundary conditions rather than the equations of motion.
The second law of thermodynamics is statistical in nature, and therefore its reliability arises from the huge number of particles present in macroscopic systems. It is not impossible, in principle, for all 6 × 10^23 atoms in a mole of a gas to spontaneously migrate to one half of a container; it is only fantastically unlikely—so unlikely that no macroscopic violation of the Second Law has ever been observed.
The thermodynamic arrow is often linked to the cosmological arrow of time, because it is ultimately about the boundary conditions of the early universe. According to the Big Bang theory, the Universe was initially very hot with energy distributed uniformly. For a system in which gravity is important, such as the universe, this is a low-entropy state (compared to a high-entropy state of having all matter collapsed into black holes, a state to which the system may eventually evolve). As the Universe grows, its temperature drops, which leaves less energy [per unit volume of space] available to perform work in the future than was available in the past. Additionally, perturbations in the energy density grow (eventually forming galaxies and stars). Thus the Universe itself has a well-defined thermodynamic arrow of time. But this does not address the question of why the initial state of the universe was that of low entropy. If cosmic expansion were to halt and reverse due to gravity, the temperature of the Universe would once again grow hotter, but its entropy would also continue to increase due to the continued growth of perturbations and the eventual black hole formation, until the latter stages of the Big Crunch when entropy would be higher than now.
== An example of apparent irreversibility ==
Consider the situation in which a large container is filled with two separated liquids, for example a dye on one side and water on the other. With no barrier between the two liquids, the random jostling of their molecules will result in them becoming more mixed as time passes. However, if the dye and water are mixed then one does not expect them to separate out again when left to themselves. A movie of the mixing would seem realistic when played forwards, but unrealistic when played backwards.
If the large container is observed early on in the mixing process, it might be found only partially mixed. It would be reasonable to conclude that, without outside intervention, the liquid reached this state because it was more ordered in the past, when there was greater separation, and will be more disordered, or mixed, in the future.
Now imagine that the experiment is repeated, this time with only a few molecules, perhaps ten, in a very small container. One can easily imagine that by watching the random jostling of the molecules it might occur—by chance alone—that the molecules became neatly segregated, with all dye molecules on one side and all water molecules on the other. That this can be expected to occur from time to time can be concluded from the fluctuation theorem; thus it is not impossible for the molecules to segregate themselves. However, for a large number of molecules it is so unlikely that one would have to wait, on average, many times longer than the current age of the universe for it to occur. Thus a movie that showed a large number of molecules segregating themselves as described above would appear unrealistic and one would be inclined to say that the movie was being played in reverse. See Boltzmann's second law as a law of disorder.
== Mathematics of the arrow ==
The mathematics behind the arrow of time, entropy, and basis of the second law of thermodynamics derive from the following set-up, as detailed by Carnot (1824), Clapeyron (1832), and Clausius (1854):
Here, as common experience demonstrates, when a hot body T1, such as a furnace, is put into physical contact, such as being connected via a body of fluid (working body), with a cold body T2, such as a stream of cold water, energy will invariably flow from hot to cold in the form of heat Q, and given time the system will reach equilibrium. Entropy, defined as Q/T, was conceived by Rudolf Clausius as a function to measure the molecular irreversibility of this process, i.e. the dissipative work the atoms and molecules do on each other during the transformation.
In this diagram, one can calculate the entropy change ΔS for the passage of the quantity of heat Q from the temperature T1, through the "working body" of fluid (see heat engine), which was typically a body of steam, to the temperature T2. Moreover, one could assume, for the sake of argument, that the working body contains only two molecules of water.
Next, if we make the assignment, as originally done by Clausius:
{\displaystyle S={\frac {Q}{T}}}
Then the entropy change or "equivalence-value" for this transformation is:
{\displaystyle \Delta S=S_{\mathit {final}}-S_{\mathit {initial}}\,}
which equals:
{\displaystyle \Delta S=\left({\frac {Q}{T_{2}}}-{\frac {Q}{T_{1}}}\right)}
and by factoring out Q, we have the following form, as was derived by Clausius:
{\displaystyle \Delta S=Q\left({\frac {1}{T_{2}}}-{\frac {1}{T_{1}}}\right)}
Thus, for example, if Q was 50 units, T1 was initially 100 degrees, and T2 was 1 degree, then the entropy change for this process would be 49.5. Hence, entropy increased for this process, the process took a certain amount of "time", and one can correlate entropy increase with the passage of time. For this system configuration, subsequently, it is an "absolute rule". This rule is based on the fact that all natural processes are irreversible by virtue of the fact that molecules of a system, for example two molecules in a tank, not only do external work (such as to push a piston), but also do internal work on each other, in proportion to the heat used to do work (see: Mechanical equivalent of heat) during the process. Entropy accounts for the fact that internal inter-molecular friction exists.
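The arithmetic of this example can be checked directly, as in the following minimal Python sketch.

# Clausius entropy change for Q = 50 units of heat flowing
# from T1 = 100 degrees to T2 = 1 degree.
Q, T1, T2 = 50.0, 100.0, 1.0
delta_S = Q * (1.0 / T2 - 1.0 / T1)
print(delta_S)  # 49.5, as stated above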
== Correlations ==
An important difference between the past and the future is that in any system (such as a gas of particles) its initial conditions are usually such that its different parts are uncorrelated, but as the system evolves and its different parts interact with each other, they become correlated. For example, whenever dealing with a gas of particles, it is always assumed that its initial conditions are such that there is no correlation between the states of different particles (i.e. the speeds and locations of the different particles are completely random, up to the need to conform with the macrostate of the system). This is closely related to the second law of thermodynamics: For example, in a finite system interacting with finite heat reservoirs, entropy is equivalent to system-reservoir correlations, and thus both increase together.
Take for example (experiment A) a closed box that is, at the beginning, half-filled with ideal gas. As time passes, the gas obviously expands to fill the whole box, so that the final state is a box full of gas. This is an irreversible process, since if the box is full at the beginning (experiment B), it does not become only half-full later, except for the very unlikely situation where the gas particles have very special locations and speeds. But this is precisely because we always assume that the initial conditions in experiment B are such that the particles have random locations and speeds. This is not correct for the final conditions of the system in experiment A, because the particles have interacted between themselves, so that their locations and speeds have become dependent on each other, i.e. correlated. This can be understood if we look at experiment A backwards in time, which we'll call experiment C: now we begin with a box full of gas, but the particles do not have random locations and speeds; rather, their locations and speeds are so particular, that after some time they all move to one half of the box, which is the final state of the system (this is the initial state of experiment A, because now we're looking at the same experiment backwards!). The interactions between particles now do not create correlations between the particles, but in fact turn them into (at least seemingly) random, "canceling" the pre-existing correlations. The only difference between experiment C (which defies the Second Law of Thermodynamics) and experiment B (which obeys the Second Law of Thermodynamics) is that in the former the particles are uncorrelated at the end, while in the latter the particles are uncorrelated at the beginning.
In fact, if all the microscopic physical processes are reversible (see discussion below), then the Second Law of Thermodynamics can be proven for any isolated system of particles with initial conditions in which the particles states are uncorrelated. To do this, one must acknowledge the difference between the measured entropy of a system—which depends only on its macrostate (its volume, temperature etc.)—and its information entropy, which is the amount of information (number of computer bits) needed to describe the exact microstate of the system. The measured entropy is independent of correlations between particles in the system, because they do not affect its macrostate, but the information entropy does depend on them, because correlations lower the randomness of the system and thus lowers the amount of information needed to describe it. Therefore, in the absence of such correlations the two entropies are identical, but otherwise the information entropy is smaller than the measured entropy, and the difference can be used as a measure of the amount of correlations.
Now, by Liouville's theorem, time-reversal of all microscopic processes implies that the amount of information needed to describe the exact microstate of an isolated system (its information-theoretic joint entropy) is constant in time. This joint entropy is equal to the marginal entropy (entropy assuming no correlations) plus the entropy of correlation (mutual entropy, or its negative mutual information). If we assume no correlations between the particles initially, then this joint entropy is just the marginal entropy, which is just the initial thermodynamic entropy of the system, divided by the Boltzmann constant. However, if these are indeed the initial conditions (and this is a crucial assumption), then such correlations form with time. In other words, there is a decreasing mutual entropy (or increasing mutual information), and for a time that is not too long—the correlations (mutual information) between particles only increase with time. Therefore, the thermodynamic entropy, which is proportional to the marginal entropy, must also increase with time (note that "not too long" in this context is relative to the time needed, in a classical version of the system, for it to pass through all its possible microstates—a time that can be roughly estimated as
{\displaystyle \tau e^{S}}, where {\displaystyle \tau }
is the time between particle collisions and S is the system's entropy. In any practical case this time is huge compared to everything else). Note that the correlation between particles is not a fully objective quantity. One cannot measure the mutual entropy, one can only measure its change, assuming one can measure a microstate. Thermodynamics is restricted to the case where microstates cannot be distinguished, which means that only the marginal entropy, proportional to the thermodynamic entropy, can be measured, and, in a practical sense, always increases.
== Arrow of time in various phenomena ==
Phenomena that occur differently according to their time direction can ultimately be linked to the second law of thermodynamics, for example ice cubes melt in hot coffee rather than assembling themselves out of the coffee and a block sliding on a rough surface slows down rather than speeds up. The idea that we can remember the past and not the future is called the "psychological arrow of time" and it has deep connections with Maxwell's demon and the physics of information; memory is linked to the second law of thermodynamics if one views it as correlation between brain cells (or computer bits) and the outer world: Since such correlations increase with time, memory is linked to past events, rather than to future events.
== Current research ==
Current research focuses mainly on describing the thermodynamic arrow of time mathematically, either in classical or quantum systems, and on understanding its origin from the point of view of cosmological boundary conditions.
=== Dynamical systems ===
Some current research in dynamical systems indicates a possible "explanation" for the arrow of time. There are several ways to describe the time evolution of a dynamical system. In the classical framework, one considers an ordinary differential equation, where the parameter is explicitly time. By the very nature of differential equations, the solutions to such systems are inherently time-reversible. However, many of the interesting cases are either ergodic or mixing, and it is strongly suspected that mixing and ergodicity somehow underlie the fundamental mechanism of the arrow of time. When there are multiple parameters, the field of partial differential equations comes into play, where the Feynman–Kac formula assures, for specific cases, a one-to-one correspondence between a specific linear stochastic differential equation and a partial differential equation. Therefore, any such partial differential equation system is tantamount to a random system with a single parameter, which is not reversible, owing to the aforementioned correspondence.
Mixing and ergodic systems do not have exact solutions, and thus proving time irreversibility in a mathematical sense is (as of 2006) impossible. The concept of "exact" solutions is an anthropic one: does "exact" mean closed form in terms of already known expressions, or simply a single finite sequence of strokes of a writing utensil? Myriad systems known to humanity are abstract and recursively defined, yet no non-self-referential notation for them currently exists. As a result of this complexity, it is natural to look elsewhere for different examples and perspectives. Some progress can be made by studying discrete-time models or difference equations. Many discrete-time models, such as the iterated functions considered in popular fractal-drawing programs, are explicitly not time-reversible, as any given point "in the present" may have several different "pasts" associated with it: indeed, the set of all pasts is known as the Julia set. Since such systems have a built-in irreversibility, it is inappropriate to use them to explain why time is not reversible.
There are other systems that are chaotic, and are also explicitly time-reversible: among these is the baker's map, which is also exactly solvable. An interesting avenue of study is to examine solutions to such systems not by iterating the dynamical system over time, but instead, to study the corresponding Frobenius-Perron operator or transfer operator for the system. For some of these systems, it can be explicitly, mathematically shown that the transfer operators are not trace-class. This means that these operators do not have a unique eigenvalue spectrum that is independent of the choice of basis. In the case of the baker's map, it can be shown that several unique and inequivalent diagonalizations or bases exist, each with a different set of eigenvalues. It is this phenomenon that can be offered as an "explanation" for the arrow of time. That is, although the iterated, discrete-time system is explicitly time-symmetric, the transfer operator is not. Furthermore, the transfer operator can be diagonalized in one of two inequivalent ways: one that describes the forward-time evolution of the system, and one that describes the backwards-time evolution.
As of 2006, this type of time-symmetry breaking has been demonstrated for only a very small number of exactly-solvable, discrete-time systems. The transfer operator for more complex systems has not been consistently formulated, and its precise definition is mired in a variety of subtle difficulties. In particular, it has not been shown that it has a broken symmetry for the simplest exactly-solvable continuous-time ergodic systems, such as Hadamard's billiards, or the Anosov flow on the tangent space of PSL(2,R).
=== Quantum mechanics ===
Research on irreversibility in quantum mechanics takes several different directions. One avenue is the study of rigged Hilbert spaces, and in particular, how discrete and continuous eigenvalue spectra intermingle. For example, the rational numbers are completely intermingled with the real numbers, and yet have a unique, distinct set of properties. It is hoped that the study of Hilbert spaces with a similar inter-mingling will provide insight into the arrow of time.
Another distinct approach is through the study of quantum chaos by which attempts are made to quantize systems as classically chaotic, ergodic or mixing. The results obtained are not dissimilar from those that come from the transfer operator method. For example, the quantization of the Boltzmann gas, that is, a gas of hard (elastic) point particles in a rectangular box reveals that the eigenfunctions are space-filling fractals that occupy the entire box, and that the energy eigenvalues are very closely spaced and have an "almost continuous" spectrum (for a finite number of particles in a box, the spectrum must be, of necessity, discrete). If the initial conditions are such that all of the particles are confined to one side of the box, the system very quickly evolves into one where the particles fill the entire box. Even when all of the particles are initially on one side of the box, their wave functions do, in fact, permeate the entire box: they constructively interfere on one side, and destructively interfere on the other. Irreversibility is then argued by noting that it is "nearly impossible" for the wave functions to be "accidentally" arranged in some unlikely state: such arrangements are a set of zero measure. Because the eigenfunctions are fractals, much of the language and machinery of entropy and statistical mechanics can be imported to discuss and argue the quantum case.
=== Cosmology ===
Some processes that involve high energy particles and are governed by the weak force (such as K-meson decay) defy the symmetry between time directions. However, all known physical processes do preserve a more complicated symmetry (CPT symmetry), and are therefore unrelated to the second law of thermodynamics, or to the day-to-day experience of the arrow of time. A notable exception is the wave function collapse in quantum mechanics, an irreversible process which is considered either real (by the Copenhagen interpretation) or apparent only (by the many-worlds interpretation of quantum mechanics). In either case, the wave function collapse always follows quantum decoherence, a process which is understood to be a result of the second law of thermodynamics.
The universe was in a uniform, high-density state at its very early stages, shortly after the Big Bang. The hot gas in the early universe was near thermodynamic equilibrium (see Horizon problem); in systems where gravitation plays a major role, this is a state of low entropy, due to the negative heat capacity of such systems (in contrast to non-gravitational systems, where thermodynamic equilibrium is a state of maximum entropy). Moreover, owing to its small volume compared to future epochs, the entropy was even lower, since gas expansion increases entropy. Thus the early universe can be considered to be highly ordered. Note that the uniformity of this early near-equilibrium state has been explained by the theory of cosmic inflation.
According to this theory the universe (or, rather, its accessible part, a radius of 46 billion light years around Earth) evolved from a tiny, totally uniform volume (a portion of a much bigger universe), which expanded greatly; hence it was highly ordered. Fluctuations were then created by quantum processes related to its expansion, in a manner supposed to be such that these fluctuations went through quantum decoherence, so that they became uncorrelated for any practical use. This is supposed to give the desired initial conditions needed for the Second Law of Thermodynamics; different decoherent states ultimately evolved to different specific arrangements of galaxies and stars.
The universe is apparently an open universe, so that its expansion will never terminate, but it is an interesting thought experiment to imagine what would have happened had the universe been closed. In such a case, its expansion would stop at a certain time in the distant future, and then begin to shrink. Moreover, a closed universe is finite.
It is unclear what would happen to the second law of thermodynamics in such a case. One could imagine at least two different scenarios, though in fact only the first one is plausible, as the other requires a highly smooth cosmic evolution, contrary to what is observed:
The broad consensus among the scientific community today is that smooth initial conditions lead to a highly non-smooth final state, and that this is in fact the source of the thermodynamic arrow of time. Gravitational systems tend to gravitationally collapse to compact bodies such as black holes (a phenomenon unrelated to wavefunction collapse), so the universe would end in a Big Crunch that is very different than a Big Bang run in reverse, since the distribution of the matter would be highly non-smooth; as the universe shrinks, such compact bodies merge to larger and larger black holes. It may even be that it is impossible for the universe to have both a smooth beginning and a smooth ending. Note that in this scenario the energy density of the universe in the final stages of its shrinkage is much larger than in the corresponding initial stages of its expansion (there is no destructive interference, unlike in the second scenario described below), and consists of mostly black holes rather than free particles.
A highly controversial view is that instead, the arrow of time will reverse. The quantum fluctuations—which in the meantime have evolved into galaxies and stars—will be in superposition in such a way that the whole process described above is reversed—i.e., the fluctuations are erased by destructive interference and total uniformity is achieved once again. Thus the universe ends in a Big Crunch, which is similar to its beginning in the Big Bang. Because the two are totally symmetric, and the final state is very highly ordered, entropy must decrease close to the end of the universe, so that the second law of thermodynamics reverses when the universe shrinks. This can be understood as follows: in the very early universe, interactions between fluctuations created entanglement (quantum correlations) between particles spread all over the universe; during the expansion, these particles became so distant that these correlations became negligible (see quantum decoherence). At the time the expansion halts and the universe starts to shrink, such correlated particles arrive once again at contact (after circling around the universe), and the entropy starts to decrease—because highly correlated initial conditions may lead to a decrease in entropy. Another way of putting it, is that as distant particles arrive, more and more order is revealed because these particles are highly correlated with particles that arrived earlier. In this scenario, the cosmological arrow of time is the reason for both the thermodynamic arrow of time and the quantum arrow of time. Both will slowly disappear as the universe will come to a halt, and will later be reversed.
In the first and more consensual scenario, it is the difference between the initial state and the final state of the universe that is responsible for the thermodynamic arrow of time. This is independent of the cosmological arrow of time.
== See also ==
Arrow of time
Cosmic inflation
Entropy
H-theorem
History of entropy
Loschmidt's paradox
== References ==
== Further reading ==
Kardar, Mehran (2007). Statistical Physics of Particles. Cambridge University Press. ISBN 978-0-521-87342-0. OCLC 860391091.
Halliwell, J.J.; et al. (1994). Halliwell, Jonathan J. (ed.). Physical origins of time asymmetry (1st paperback ed.). Cambridge: Cambridge Univ. Press. ISBN 978-0-521-56837-1. (technical).
Mackey, Michael C. (1992). Time's arrow: The origins of thermodynamic behavior. Springer study edition (1st ed.). New York Berlin: Springer. ISBN 978-3-540-94093-7. OCLC 28585247. ... it is shown that for there to be a global evolution of the entropy to its maximal value ... it is necessary and sufficient that the system have a property known as exactness. ... these criteria suggest that all currently formulated physical laws may not be at the foundation of the thermodynamic behavior we observe every day of our lives. (page xi) Dover has reprinted the monograph in 2003 (ISBN 0486432432). For a short paper listing "the essential points of that argument, correcting presentation points that were confusing ... and emphasizing conclusions more forcefully than previously" see Mackey, Michael C. (2001). "Microscopic Dynamics and the Second Law of Thermodynamics" (PDF). In Mugnai, D.; Ranfagni, A.; Schulman, L. S.; Istituto italiano per gli studi filosofici; Istituto di ricerca sulle onde elettromagnetiche (Italy) (eds.). Time's arrows, quantum measurement, and superluminal behavior: International Conference TAQMSB: Palazzo Serra di Cassano, Via Monte di Dio 14, 80132 Napoli, October 3-5, 2000. Monografie scientifiche. Roma: Consiglio nazionale delle ricerche. pp. 49–65. ISBN 978-88-8080-024-8. Archived from the original (PDF) on 2011-07-25.
Carroll, Sean M. (2010). From eternity to here: the quest for the ultimate theory of time (1st ed.). New York, NY: Dutton. ISBN 978-0-525-95133-9.
== External links ==
Thermodynamic Asymmetry in Time at the online Stanford Encyclopedia of Philosophy
An open system is a system that has external interactions. Such interactions can take the form of information, energy, or material transfers into or out of the system boundary, depending on the discipline which defines the concept. An open system is contrasted with the concept of an isolated system which exchanges neither energy, matter, nor information with its environment. An open system is also known as a flow system.
The concept of an open system was formalized within a framework that enabled one to interrelate the theory of the organism, thermodynamics, and evolutionary theory. This concept was expanded upon with the advent of information theory and subsequently systems theory. Today the concept has its applications in the natural and social sciences.
In the natural sciences an open system is one whose border is permeable to both energy and mass. By contrast, a closed system is permeable to energy but not to matter.
The definition of an open system assumes that there are supplies of energy that cannot be depleted; in practice, this energy is supplied from some source in the surrounding environment, which can be treated as infinite for the purposes of study. One type of open system is the radiant energy system, which receives its energy from solar radiation – an energy source that can be regarded as inexhaustible for all practical purposes.
== Social sciences ==
In the social sciences an open system is a process that exchanges material, energy, people, capital and information with its environment. French/Greek philosopher Kostas Axelos argued that seeing the "world system" as inherently open (though unified) would solve many of the problems in the social sciences, including that of praxis (the relation of knowledge to practice), so that various social scientific disciplines would work together rather than create monopolies whereby the world appears only sociological, political, historical, or psychological. Axelos argues that theorizing a closed system contributes to making it closed, and is thus a conservative approach. The Althusserian concept of overdetermination (drawing on Sigmund Freud) posits that there are always multiple causes in every event.
David Harvey uses this to argue that when systems such as capitalism enter a phase of crisis, it can happen through one of a number of elements, such as gender roles, the relation to nature/the environment, or crises in accumulation. Looking at the crisis in accumulation, Harvey argues that phenomena such as foreign direct investment, privatization of state-owned resources, and accumulation by dispossession act as necessary outlets when capital has overaccumulated in private hands and cannot circulate effectively in the marketplace. He cites the forcible displacement of Mexican and Indian peasants since the 1970s and the 1997 Asian financial crisis, involving the "hedge fund raiding" of national currencies, as examples of this.
Structural functionalists such as Talcott Parsons and neofunctionalists such as Niklas Luhmann have incorporated system theory to describe society and its components.
The sociology of religion finds both open and closed systems within the field of religion.
== Thermodynamics ==
== Systems engineering ==
== See also ==
Business process
Complex system
Dynamical system
Glossary of systems theory
Ludwig von Bertalanffy
Maximum power principle
Non-equilibrium thermodynamics
Open system (computing)
Open System Environment Reference Model
Openness
Phantom loop
Thermodynamic system
== References ==
== Further reading ==
Khalil, E.L. (1995). Nonlinear thermodynamics and social science modeling: fad cycles, cultural development and identificational slips. The American Journal of Economics and Sociology, Vol. 54, Issue 4, pp. 423–438.
Weber, B.H. (1989). Ethical Implications Of The Interface Of Natural And Artificial Systems. Delicate Balance: Technics, Culture and Consequences: Conference Proceedings for the Institute of Electrical and Electronics Engineers.
== External links ==
OPEN SYSTEM, Principia Cybernetica Web, 2007.
Information theory is the mathematical study of the quantification, storage, and communication of information. The field was established and formalized by Claude Shannon in the 1940s, though early contributions were made in the 1920s through the works of Harry Nyquist and Ralph Hartley. It is at the intersection of electronic engineering, mathematics, statistics, computer science, neurobiology, physics, and electrical engineering.
A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (which has two equally likely outcomes) provides less information (lower entropy, less uncertainty) than identifying the outcome from a roll of a die (which has six equally likely outcomes). Some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy. Important sub-fields of information theory include source coding, algorithmic complexity theory, algorithmic information theory and information-theoretic security.
Applications of fundamental topics of information theory include source coding/data compression (e.g. for ZIP files), and channel coding/error detection and correction (e.g. for DSL). Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones and the development of the Internet and artificial intelligence. The theory has also found applications in other areas, including statistical inference, cryptography, neurobiology, perception, signal processing, linguistics, the evolution and function of molecular codes (bioinformatics), thermal physics, molecular dynamics, black holes, quantum computing, information retrieval, intelligence gathering, plagiarism detection, pattern recognition, anomaly detection, the analysis of music, art creation, imaging system design, study of outer space, the dimensionality of space, and epistemology.
== Overview ==
Information theory studies the transmission, processing, extraction, and utilization of information. Abstractly, information can be thought of as the resolution of uncertainty. In the case of communication of information over a noisy channel, this abstract concept was formalized in 1948 by Claude Shannon in a paper entitled A Mathematical Theory of Communication, in which information is thought of as a set of possible messages, and the goal is to send these messages over a noisy channel, and to have the receiver reconstruct the message with low probability of error, in spite of the channel noise. Shannon's main result, the noisy-channel coding theorem, showed that, in the limit of many channel uses, the rate of information that is asymptotically achievable is equal to the channel capacity, a quantity dependent merely on the statistics of the channel over which the messages are sent.
Coding theory is concerned with finding explicit methods, called codes, for increasing the efficiency and reducing the error rate of data communication over noisy channels to near the channel capacity. These codes can be roughly subdivided into data compression (source coding) and error-correction (channel coding) techniques. In the latter case, it took many years to find the methods Shannon's work proved were possible.
A third class of information theory codes are cryptographic algorithms (both codes and ciphers). Concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis, such as the unit ban.
== Historical background ==
The landmark event establishing the discipline of information theory and bringing it to immediate worldwide attention was the publication of Claude E. Shannon's classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October 1948. Historian James Gleick rated the paper as the most important development of 1948, noting that the paper was "even more profound and more fundamental" than the transistor. He came to be known as the "father of information theory". Shannon outlined some of his initial ideas of information theory as early as 1939 in a letter to Vannevar Bush.
Prior to this paper, limited information-theoretic ideas had been developed at Bell Labs, all implicitly assuming events of equal probability. Harry Nyquist's 1924 paper, Certain Factors Affecting Telegraph Speed, contains a theoretical section quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation W = K log m (recalling the Boltzmann constant), where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant. Ralph Hartley's 1928 paper, Transmission of Information, uses the word information as a measurable quantity, reflecting the receiver's ability to distinguish one sequence of symbols from any other, thus quantifying information as H = log S^n = n log S, where S was the number of possible symbols, and n the number of symbols in a transmission. The unit of information was therefore the decimal digit, which has since sometimes been called the hartley in his honor as a unit or scale or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers.
Much of the mathematics behind information theory with events of different probabilities were developed for the field of thermodynamics by Ludwig Boltzmann and J. Willard Gibbs. Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by Rolf Landauer in the 1960s, are explored in Entropy in thermodynamics and information theory.
In Shannon's revolutionary and groundbreaking paper, the work for which had been substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion:
"The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point."
With it came the ideas of:
the information entropy and redundancy of a source, and its relevance through the source coding theorem;
the mutual information, and the channel capacity of a noisy channel, including the promise of perfect loss-free communication given by the noisy-channel coding theorem;
the practical result of the Shannon–Hartley law for the channel capacity of a Gaussian channel; as well as
the bit—a new way of seeing the most fundamental unit of information.
== Quantities of information ==
Information theory is based on probability theory and statistics, where quantified information is usually described in terms of bits. Information theory often concerns itself with measures of information of the distributions associated with random variables. One of the most important measures is called entropy, which forms the building block of many other measures. Entropy allows quantification of measure of information in a single random variable. Another useful concept is mutual information defined on two random variables, which describes the measure of information in common between those variables, which can be used to describe their correlation. The former quantity is a property of the probability distribution of a random variable and gives a limit on the rate at which data generated by independent samples with the given distribution can be reliably compressed. The latter is a property of the joint distribution of two random variables, and is the maximum rate of reliable communication across a noisy channel in the limit of long block lengths, when the channel statistics are determined by the joint distribution.
The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. A common unit of information is the bit or shannon, based on the binary logarithm. Other units include the nat, which is based on the natural logarithm, and the decimal digit, which is based on the common logarithm.
In what follows, an expression of the form p log p is considered by convention to be equal to zero whenever p = 0. This is justified because
{\displaystyle \lim _{p\rightarrow 0+}p\log p=0}
for any logarithmic base.
=== Entropy of an information source ===
Based on the probability mass function of each source symbol to be communicated, the Shannon entropy H, in units of bits (per symbol), is given by
{\displaystyle H=-\sum _{i}p_{i}\log _{2}(p_{i})}
where pi is the probability of occurrence of the i-th possible value of the source symbol. This equation gives the entropy in the units of "bits" (per symbol) because it uses a logarithm of base 2, and this base-2 measure of entropy has sometimes been called the shannon in his honor. Entropy is also commonly computed using the natural logarithm (base e, where e is Euler's number), which produces a measurement of entropy in nats per symbol and sometimes simplifies the analysis by avoiding the need to include extra constants in the formulas. Other bases are also possible, but less commonly used. For example, a logarithm of base 2^8 = 256 will produce a measurement in bytes per symbol, and a logarithm of base 10 will produce a measurement in decimal digits (or hartleys) per symbol.
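A minimal Python sketch of this definition, with the logarithm base as a parameter, follows; the coin and die distributions echo the examples given earlier in the article.

import math

def entropy(probs, base=2.0):
    # By the convention above, terms with p = 0 contribute nothing.
    return -sum(p * math.log(p, base) for p in probs if p > 0)

print(entropy([0.5, 0.5]))            # fair coin: 1.0 bit (shannon)
print(entropy([1.0 / 6] * 6))         # fair die: about 2.585 bits
print(entropy([1.0 / 6] * 6, math.e)) # same source in nats: about 1.792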
Intuitively, the entropy HX of a discrete random variable X is a measure of the amount of uncertainty associated with the value of X when only its distribution is known.
The entropy of a source that emits a sequence of N symbols that are independent and identically distributed (iid) is N ⋅ H bits (per message of N symbols). If the source data symbols are identically distributed but not independent, the entropy of a message of length N will be less than N ⋅ H.
If one transmits 1000 bits (0s and 1s), and the value of each of these bits is known to the receiver (has a specific value with certainty) ahead of transmission, it is clear that no information is transmitted. If, however, each bit is independently equally likely to be 0 or 1, 1000 shannons of information (more often called bits) have been transmitted. Between these two extremes, information can be quantified as follows. If
{\displaystyle \mathbb {X} } is the set of all messages {x1, ..., xn} that X could be, and p(x) is the probability of some {\displaystyle x\in \mathbb {X} }, then the entropy, H, of X is defined:
{\displaystyle H(X)=\mathbb {E} _{X}[I(x)]=-\sum _{x\in \mathbb {X} }p(x)\log p(x).}
(Here, I(x) is the self-information, which is the entropy contribution of an individual message, and
{\displaystyle \mathbb {E} _{X}}
is the expected value.) A property of entropy is that it is maximized when all the messages in the message space are equiprobable p(x) = 1/n; i.e., most unpredictable, in which case H(X) = log n.
The special case of information entropy for a random variable with two outcomes is the binary entropy function, usually taken to the logarithmic base 2, thus having the shannon (Sh) as unit:
{\displaystyle H_{\mathrm {b} }(p)=-p\log _{2}p-(1-p)\log _{2}(1-p).}
=== Joint entropy ===
The joint entropy of two discrete random variables X and Y is merely the entropy of their pairing: (X, Y). This implies that if X and Y are independent, then their joint entropy is the sum of their individual entropies.
For example, if (X, Y) represents the position of a chess piece—X the row and Y the column, then the joint entropy of the row of the piece and the column of the piece will be the entropy of the position of the piece.
{\displaystyle H(X,Y)=\mathbb {E} _{X,Y}[-\log p(x,y)]=-\sum _{x,y}p(x,y)\log p(x,y)\,}
Despite similar notation, joint entropy should not be confused with cross-entropy.
=== Conditional entropy (equivocation) ===
The conditional entropy or conditional uncertainty of X given random variable Y (also called the equivocation of X about Y) is the average conditional entropy over Y:
{\displaystyle H(X|Y)=\mathbb {E} _{Y}[H(X|y)]=-\sum _{y\in Y}p(y)\sum _{x\in X}p(x|y)\log p(x|y)=-\sum _{x,y}p(x,y)\log p(x|y).}
Because entropy can be conditioned on a random variable or on that random variable being a certain value, care should be taken not to confuse these two definitions of conditional entropy, the former of which is in more common use. A basic property of this form of conditional entropy is that:
{\displaystyle H(X|Y)=H(X,Y)-H(Y).\,}
=== Mutual information (transinformation) ===
Mutual information measures the amount of information that can be obtained about one random variable by observing another. It is important in communication where it can be used to maximize the amount of information shared between sent and received signals. The mutual information of X relative to Y is given by:
{\displaystyle I(X;Y)=\mathbb {E} _{X,Y}[SI(x,y)]=\sum _{x,y}p(x,y)\log {\frac {p(x,y)}{p(x)\,p(y)}}}
where SI (Specific mutual Information) is the pointwise mutual information.
A basic property of the mutual information is that
{\displaystyle I(X;Y)=H(X)-H(X|Y).\,}
That is, knowing Y, we can save an average of I(X; Y) bits in encoding X compared to not knowing Y.
Mutual information is symmetric:
{\displaystyle I(X;Y)=I(Y;X)=H(X)+H(Y)-H(X,Y).\,}
Mutual information can be expressed as the average Kullback–Leibler divergence (information gain) between the posterior probability distribution of X given the value of Y and the prior distribution on X:
{\displaystyle I(X;Y)=\mathbb {E} _{p(y)}[D_{\mathrm {KL} }(p(X|Y=y)\|p(X))].}
In other words, this is a measure of how much, on the average, the probability distribution on X will change if we are given the value of Y. This is often recalculated as the divergence from the product of the marginal distributions to the actual joint distribution:
{\displaystyle I(X;Y)=D_{\mathrm {KL} }(p(X,Y)\|p(X)p(Y)).}
Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution and to Pearson's χ2 test: mutual information can be considered a statistic for assessing independence between a pair of variables, and has a well-specified asymptotic distribution.
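The symmetric decomposition above can likewise be checked numerically; the following minimal Python sketch computes I(X; Y) both from its definition and as H(X) + H(Y) − H(X, Y), on a small assumed joint distribution.

import math

joint = {('a', 0): 0.25, ('a', 1): 0.25, ('b', 0): 0.4, ('b', 1): 0.1}

# Marginal distributions p(x) and p(y).
p_x, p_y = {}, {}
for (x, y), p in joint.items():
    p_x[x] = p_x.get(x, 0.0) + p
    p_y[y] = p_y.get(y, 0.0) + p

I_direct = sum(p * math.log2(p / (p_x[x] * p_y[y])) for (x, y), p in joint.items())

def H(values):
    return -sum(p * math.log2(p) for p in values if p > 0)

I_entropies = H(p_x.values()) + H(p_y.values()) - H(joint.values())
print(abs(I_direct - I_entropies) < 1e-12)  # True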
=== Kullback–Leibler divergence (information gain) ===
The Kullback–Leibler divergence (or information divergence, information gain, or relative entropy) is a way of comparing two distributions: a "true" probability distribution
{\displaystyle p(X)}, and an arbitrary probability distribution {\displaystyle q(X)}. If we compress data in a manner that assumes {\displaystyle q(X)} is the distribution underlying some data, when, in reality, {\displaystyle p(X)} is the correct distribution, the Kullback–Leibler divergence is the average number of additional bits per datum necessary for compression. It is thus defined
{\displaystyle D_{\mathrm {KL} }(p(X)\|q(X))=\sum _{x\in X}-p(x)\log {q(x)}\,-\,\sum _{x\in X}-p(x)\log {p(x)}=\sum _{x\in X}p(x)\log {\frac {p(x)}{q(x)}}.}
Although it is sometimes used as a 'distance metric', KL divergence is not a true metric since it is not symmetric and does not satisfy the triangle inequality (making it a semi-quasimetric).
Another interpretation of the KL divergence is the "unnecessary surprise" introduced by a prior from the truth: suppose a number X is about to be drawn randomly from a discrete set with probability distribution
{\displaystyle p(x)}. If Alice knows the true distribution {\displaystyle p(x)}, while Bob believes (has a prior) that the distribution is {\displaystyle q(x)}
, then Bob will be more surprised than Alice, on average, upon seeing the value of X. The KL divergence is the (objective) expected value of Bob's (subjective) surprisal minus Alice's surprisal, measured in bits if the log is in base 2. In this way, the extent to which Bob's prior is "wrong" can be quantified in terms of how "unnecessarily surprised" it is expected to make him.
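A minimal Python sketch of the divergence in bits follows, using two small assumed distributions; the asymmetry noted above is visible by swapping the arguments.

import math

def kl_divergence(p, q):
    # Terms with p_i = 0 contribute nothing, by the usual convention.
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]      # Alice's true distribution (assumed)
q = [1 / 3, 1 / 3, 1 / 3]  # Bob's prior (assumed)
print(kl_divergence(p, q))  # about 0.085 bits of "unnecessary surprise"
print(kl_divergence(q, p))  # a different value: D_KL is not symmetric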
=== Directed Information ===
Directed information,
{\displaystyle I(X^{n}\to Y^{n})}, is an information theory measure that quantifies the information flow from the random process {\displaystyle X^{n}=\{X_{1},X_{2},\dots ,X_{n}\}} to the random process {\displaystyle Y^{n}=\{Y_{1},Y_{2},\dots ,Y_{n}\}}
. The term directed information was coined by James Massey and is defined as
{\displaystyle I(X^{n}\to Y^{n})\triangleq \sum _{i=1}^{n}I(X^{i};Y_{i}|Y^{i-1})},
where
{\displaystyle I(X^{i};Y_{i}|Y^{i-1})}
is the conditional mutual information
{\displaystyle I(X_{1},X_{2},\dots ,X_{i};Y_{i}|Y_{1},Y_{2},\dots ,Y_{i-1})}.
In contrast to mutual information, directed information is not symmetric. The quantity {\displaystyle I(X^{n}\to Y^{n})} measures the information bits that are transmitted causally from {\displaystyle X^{n}} to {\displaystyle Y^{n}}. Directed information has many applications in problems where causality plays an important role, such as the capacity of channels with feedback, the capacity of discrete memoryless networks with feedback, gambling with causal side information, compression with causal side information,
real-time control communication settings, and in statistical physics.
=== Other quantities ===
Other important information theoretic quantities include the Rényi entropy and the Tsallis entropy (generalizations of the concept of entropy), differential entropy (a generalization of quantities of information to continuous distributions), and the conditional mutual information. Also, pragmatic information has been proposed as a measure of how much information has been used in making a decision.
== Coding theory ==
Coding theory is one of the most important and direct applications of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source.
Data compression (source coding): There are two formulations for the compression problem:
lossless data compression: the data must be reconstructed exactly;
lossy data compression: allocates bits needed to reconstruct the data, within a specified fidelity level measured by a distortion function. This subset of information theory is called rate–distortion theory.
Error-correcting codes (channel coding): While data compression removes as much redundancy as possible, an error-correcting code adds just the right kind of redundancy (i.e., error correction) needed to transmit the data efficiently and faithfully across a noisy channel.
This division of coding theory into compression and transmission is justified by the information transmission theorems, or source–channel separation theorems that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal.
=== Source theory ===
Any process that generates successive messages can be considered a source of information. A memoryless source is one in which each message is an independent identically distributed random variable, whereas the properties of ergodicity and stationarity impose less restrictive constraints. All such sources are stochastic. These terms are well studied in their own right outside information theory.
==== Rate ====
Information rate is the average entropy per symbol. For memoryless sources, this is merely the entropy of each symbol, while, in the case of a stationary stochastic process, it is:
$$r = \lim_{n \to \infty} H(X_n \mid X_{n-1}, X_{n-2}, X_{n-3}, \ldots);$$
that is, the conditional entropy of a symbol given all the previous symbols generated. For the more general case of a process that is not necessarily stationary, the average rate is:
$$r = \lim_{n \to \infty} \frac{1}{n} H(X_1, X_2, \dots, X_n);$$
that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result.
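For a stationary Markov source the conditional-entropy limit has a closed form: r is the stationary-weighted average of the per-state transition entropies. A minimal sketch (the transition matrix is arbitrary illustrative data):

import numpy as np

def markov_entropy_rate(P):
    # r = sum_i pi_i * H(P[i, :]) in bits/symbol, where pi is the
    # stationary distribution (left eigenvector of P for eigenvalue 1)
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi = pi / pi.sum()
    row_entropy = [-sum(p * np.log2(p) for p in row if p > 0) for row in P]
    return float(np.dot(pi, row_entropy))

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(markov_entropy_rate(P))   # ~0.56 bits/symbol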
The information rate is defined as:
$$r = \lim_{n \to \infty} \frac{1}{n} I(X_1, X_2, \dots, X_n; Y_1, Y_2, \dots, Y_n);$$
It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a source of information is related to its redundancy and how well it can be compressed, the subject of source coding.
=== Channel capacity ===
Communications over a channel is the primary motivation of information theory. However, channels often fail to produce exact reconstruction of a signal; noise, periods of silence, and other forms of signal corruption often degrade quality.
Consider the communications process over a discrete channel. A simple model of the process is shown below:
Message W → [Encoder f_n] → Encoded sequence X^n → [Channel p(y|x)] → Received sequence Y^n → [Decoder g_n] → Estimated message Ŵ
Here X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel. Let p(y|x) be the conditional probability distribution function of Y given X. We will consider p(y|x) to be an inherent fixed property of our communications channel (representing the nature of the noise of our channel). Then the joint distribution of X and Y is completely determined by our channel and by our choice of f(x), the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the rate of information, or the signal, we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the channel capacity and is given by:
$$C = \max_{f} I(X; Y).$$
This capacity has the following property related to communicating at information rate R (where R is usually bits per symbol). For any information rate R < C and coding error ε > 0, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate R > C, it is impossible to transmit with arbitrarily small block error.
Channel coding is concerned with finding such nearly optimal codes that can be used to transmit data over a noisy channel with a small coding error at a rate near the channel capacity.
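For a discrete memoryless channel, the maximization over input distributions f can be carried out numerically with the classic Blahut–Arimoto iteration. A compact sketch (fixed iteration count instead of a convergence test; the channel matrix below is just an example):

import numpy as np

def kl_bits(p, q):
    # D(p || q) in bits, with the 0 log 0 = 0 convention
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def blahut_arimoto(W, iters=500):
    # approximate C = max_f I(X;Y) for a channel W[x, y] = p(y|x)
    nx, _ = W.shape
    f = np.full(nx, 1.0 / nx)            # start from the uniform input
    for _ in range(iters):
        q = f @ W                        # output marginal induced by f
        d = np.array([kl_bits(W[x], q) for x in range(nx)])
        f = f * 2.0 ** d                 # exponential reweighting step
        f = f / f.sum()
    q = f @ W
    return sum(f[x] * kl_bits(W[x], q) for x in range(nx))

p = 0.11
W = np.array([[1 - p, p], [p, 1 - p]])   # binary symmetric channel
print(blahut_arimoto(W))                 # ~0.50 bits/use, i.e. 1 - Hb(0.11)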
==== Capacity of particular channel models ====
A continuous-time analog communications channel subject to Gaussian noise—see Shannon–Hartley theorem.
A binary symmetric channel (BSC) with crossover probability p is a binary input, binary output channel that flips the input bit with probability p. The BSC has a capacity of 1 − Hb(p) bits per channel use, where Hb is the binary entropy function to the base-2 logarithm:
A binary erasure channel (BEC) with erasure probability p is a binary input, ternary output channel. The possible channel outputs are 0, 1, and a third symbol 'e' called an erasure. The erasure represents complete loss of information about an input bit. The capacity of the BEC is 1 − p bits per channel use.
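Both closed forms are easy to evaluate; a short sketch using the binary entropy function:

import math

def hb(p):
    # binary entropy function Hb(p), in bits
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(1 - hb(0.11))   # BSC capacity at crossover probability 0.11, ~0.50
print(1 - 0.25)       # BEC capacity at erasure probability 0.25, 0.75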
==== Channels with memory and directed information ====
In practice many channels have memory. Namely, at time $i$ the channel is given by the conditional probability $P(y_i \mid x_i, x_{i-1}, x_{i-2}, \ldots, x_1, y_{i-1}, y_{i-2}, \ldots, y_1)$.
It is often more convenient to use the notation $x^i = (x_i, x_{i-1}, x_{i-2}, \ldots, x_1)$, so that the channel becomes $P(y_i \mid x^i, y^{i-1})$.
In such a case the capacity is given by the mutual information rate when no feedback is available, and by the directed information rate whether or not feedback is present (if there is no feedback, the directed information rate equals the mutual information rate).
=== Fungible information ===
Fungible information is information for which the means of encoding is not important. Classical information theorists and computer scientists are mainly concerned with information of this sort. It is sometimes referred to as speakable information.
== Applications to other fields ==
=== Intelligence uses and secrecy applications ===
Information theoretic concepts apply to cryptography and cryptanalysis. Turing's information unit, the ban, was used in the Ultra project, breaking the German Enigma machine code and hastening the end of World War II in Europe. Shannon himself defined an important concept now called the unicity distance. Based on the redundancy of the plaintext, it attempts to give a minimum amount of ciphertext necessary to ensure unique decipherability.
Information theory leads us to believe it is much more difficult to keep secrets than it might first appear. A brute force attack can break systems based on asymmetric key algorithms or on most commonly used methods of symmetric key algorithms (sometimes called secret key algorithms), such as block ciphers. The security of all such methods comes from the assumption that no known attack can break them in a practical amount of time.
Information theoretic security refers to methods such as the one-time pad that are not vulnerable to such brute force attacks. In such cases, the positive conditional mutual information between the plaintext and ciphertext (conditioned on the key) can ensure proper transmission, while the unconditional mutual information between the plaintext and ciphertext remains zero, resulting in absolutely secure communications. In other words, an eavesdropper would not be able to improve his or her guess of the plaintext by gaining knowledge of the ciphertext but not of the key. However, as in any other cryptographic system, care must be used to correctly apply even information-theoretically secure methods; the Venona project was able to crack the one-time pads of the Soviet Union due to their improper reuse of key material.
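The mechanics of the one-time pad are simple to demonstrate (an illustrative sketch only, not a production cipher): a uniformly random key as long as the message, used once, makes the ciphertext statistically independent of the plaintext.

import secrets

plaintext = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(plaintext))   # truly random, never reused
ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))

# without the key the ciphertext carries zero mutual information about
# the plaintext; with the key, decryption is exact
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
assert recovered == plaintext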
=== Pseudorandom number generation ===
Pseudorandom number generators are widely available in computer language libraries and application programs. They are, almost universally, unsuited to cryptographic use as they do not evade the deterministic nature of modern computer equipment and software. A class of improved random number generators is termed cryptographically secure pseudorandom number generators, but even they require random seeds external to the software to work as intended. These can be obtained via extractors, if done carefully. The measure of sufficient randomness in extractors is min-entropy, a value related to Shannon entropy through Rényi entropy; Rényi entropy is also used in evaluating randomness in cryptographic systems. Although related, the distinctions among these measures mean that a random variable with high Shannon entropy is not necessarily satisfactory for use in an extractor and so for cryptography uses.
=== Seismic exploration ===
One early commercial application of information theory was in the field of seismic oil exploration. Work in this field made it possible to strip off and separate the unwanted noise from the desired seismic signal. Information theory and digital signal processing offer a major improvement of resolution and image clarity over previous analog methods.
=== Semiotics ===
Semioticians Doede Nauta and Winfried Nöth both considered Charles Sanders Peirce as having created a theory of information in his works on semiotics.: 171 : 137 Nauta defined semiotic information theory as the study of "the internal processes of coding, filtering, and information processing.": 91
Concepts from information theory such as redundancy and code control have been used by semioticians such as Umberto Eco and Ferruccio Rossi-Landi to explain ideology as a form of message transmission whereby a dominant social class emits its message by using signs that exhibit a high degree of redundancy such that only one message is decoded among a selection of competing ones.
=== Integrated process organization of neural information ===
Quantitative information-theoretic methods have been applied in cognitive science to analyze the integrated process organization of neural information in the context of the binding problem in cognitive neuroscience. In this context, either an information-theoretical measure is defined on the basis of a reentrant process organization, i.e. the synchronization of neurophysiological activity between groups of neuronal populations, with examples including functional clusters (Gerald Edelman and Giulio Tononi's functional clustering model and dynamic core hypothesis (DCH)) and effective information (Tononi's integrated information theory (IIT) of consciousness), or the measure is the minimization of free energy on the basis of statistical methods (Karl J. Friston's free energy principle (FEP), an information-theoretical measure which states that every adaptive change in a self-organized system leads to a minimization of free energy, and the Bayesian brain hypothesis).
=== Miscellaneous applications ===
Information theory also has applications in the search for extraterrestrial intelligence, black holes, bioinformatics, and gambling.
== See also ==
=== Applications ===
=== History ===
Hartley, R.V.L.
History of information theory
Shannon, C.E.
Timeline of information theory
Yockey, H.P.
Andrey Kolmogorov
=== Theory ===
=== Concepts ===
== References ==
== Further reading ==
=== The classic work ===
=== Other journal articles ===
=== Textbooks on information theory ===
=== Other books ===
== External links ==
"Information", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Lambert F. L. (1999), "Shuffled Cards, Messy Desks, and Disorderly Dorm Rooms - Examples of Entropy Increase? Nonsense!", Journal of Chemical Education
IEEE Information Theory Society and ITSOC Monographs, Surveys, and Reviews Archived 2018-06-12 at the Wayback Machine
In quantum information theory, quantum relative entropy is a measure of distinguishability between two quantum states. It is the quantum mechanical analog of relative entropy.
== Motivation ==
For simplicity, it will be assumed that all objects in the article are finite-dimensional.
We first discuss the classical case. Suppose the probabilities of a finite sequence of events is given by the probability distribution P = {p1...pn}, but somehow we mistakenly assumed it to be Q = {q1...qn}. For instance, we can mistake an unfair coin for a fair one. According to this erroneous assumption, our uncertainty about the j-th event, or equivalently, the amount of information provided after observing the j-th event, is
$$-\log q_{j}.$$
The (assumed) average uncertainty of all possible events is then
$$-\sum_{j} p_{j} \log q_{j}.$$
On the other hand, the Shannon entropy of the probability distribution p, defined by
$$-\sum_{j} p_{j} \log p_{j},$$
is the real amount of uncertainty before observation. Therefore, the difference between these two quantities
$$-\sum_{j} p_{j} \log q_{j} - \left(-\sum_{j} p_{j} \log p_{j}\right) = \sum_{j} p_{j} \log p_{j} - \sum_{j} p_{j} \log q_{j}$$
is a measure of the distinguishability of the two probability distributions p and q. This is precisely the classical relative entropy, or Kullback–Leibler divergence:
$$D_{\mathrm{KL}}(P \| Q) = \sum_{j} p_{j} \log \frac{p_{j}}{q_{j}}.$$
Note
In the definitions above, the convention that 0·log 0 = 0 is assumed, since $\lim_{x \searrow 0} x \log x = 0$. Intuitively, one would expect an event of zero probability to contribute nothing towards entropy.
The relative entropy is not a metric. For example, it is not symmetric. The uncertainty discrepancy in mistaking a fair coin to be unfair is not the same as the opposite situation.
== Definition ==
As with many other objects in quantum information theory, quantum relative entropy is defined by extending the classical definition from probability distributions to density matrices. Let ρ be a density matrix. The von Neumann entropy of ρ, which is the quantum mechanical analog of the Shannon entropy, is given by
$$S(\rho) = -\operatorname{Tr} \rho \log \rho.$$
For two density matrices ρ and σ, the quantum relative entropy of ρ with respect to σ is defined by
$$S(\rho \| \sigma) = -\operatorname{Tr} \rho \log \sigma - S(\rho) = \operatorname{Tr} \rho \log \rho - \operatorname{Tr} \rho \log \sigma = \operatorname{Tr} \rho (\log \rho - \log \sigma).$$
We see that, when the states are classically related, i.e. ρσ = σρ, the definition coincides with the classical case, in the sense that if
$\rho = S D_1 S^{\mathsf{T}}$ and $\sigma = S D_2 S^{\mathsf{T}}$ with $D_1 = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$ and $D_2 = \operatorname{diag}(\mu_1, \ldots, \mu_n)$ (because $\rho$ and $\sigma$ commute, they are simultaneously diagonalizable), then
$$S(\rho \| \sigma) = \sum_{j=1}^{n} \lambda_j \ln\left(\frac{\lambda_j}{\mu_j}\right)$$
is just the ordinary Kullback–Leibler divergence of the probability vector $(\lambda_1, \ldots, \lambda_n)$ with respect to the probability vector $(\mu_1, \ldots, \mu_n)$.
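A numerical sketch of the definition (in nats; the function name is ours, and supp(ρ) ⊆ supp(σ) is assumed, since otherwise the value is infinite):

import numpy as np

def quantum_relative_entropy(rho, sigma, tol=1e-12):
    # S(rho || sigma) = Tr rho (log rho - log sigma), computed via
    # eigendecompositions with the 0 log 0 = 0 convention
    def mat_log(m):
        w, v = np.linalg.eigh(m)
        lw = np.where(w > tol, np.log(np.where(w > tol, w, 1.0)), 0.0)
        return (v * lw) @ v.conj().T
    return float(np.real(np.trace(rho @ (mat_log(rho) - mat_log(sigma)))))

# commuting (here diagonal) states reproduce the classical KL divergence
rho = np.diag([0.7, 0.3])
sigma = np.diag([0.5, 0.5])
print(quantum_relative_entropy(rho, sigma))   # ~0.0823 nats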
=== Non-finite (divergent) relative entropy ===
In general, the support of a matrix M is the orthogonal complement of its kernel, i.e. $\operatorname{supp}(M) = \ker(M)^{\perp}$. When considering the quantum relative entropy, we assume the convention that −s · log 0 = ∞ for any s > 0. This leads to the definition that
$$S(\rho \| \sigma) = \infty \quad \text{when} \quad \operatorname{supp}(\rho) \cap \ker(\sigma) \neq \{0\}.$$
This can be interpreted in the following way. Informally, the quantum relative entropy is a measure of our ability to distinguish two quantum states, where larger values indicate states that are more different. Being orthogonal represents the most different that quantum states can be. This is reflected by non-finite quantum relative entropy for orthogonal quantum states. Following the argument given in the Motivation section, if we erroneously assume the state $\rho$ has support in $\ker(\sigma)$, this is an error impossible to recover from.
However, one should be careful not to conclude that the divergence of the quantum relative entropy $S(\rho \| \sigma)$ implies that the states $\rho$ and $\sigma$ are orthogonal, or even very different by other measures. Specifically, $S(\rho \| \sigma)$ can diverge when $\rho$ and $\sigma$ differ by a vanishingly small amount as measured by some norm. For example, let $\sigma$ have the diagonal representation
$$\sigma = \sum_{n} \lambda_n |f_n\rangle\langle f_n|$$
with $\lambda_n > 0$ for $n = 0, 1, 2, \ldots$ and $\lambda_n = 0$ for $n = -1, -2, \ldots$, where $\{|f_n\rangle, n \in \mathbb{Z}\}$ is an orthonormal set. The kernel of $\sigma$ is the space spanned by the set $\{|f_n\rangle, n = -1, -2, \ldots\}$. Next let
$$\rho = \sigma + \epsilon |f_{-1}\rangle\langle f_{-1}| - \epsilon |f_{1}\rangle\langle f_{1}|$$
for a small positive number $\epsilon$. As $\rho$ has support (namely the state $|f_{-1}\rangle$) in the kernel of $\sigma$, $S(\rho \| \sigma)$ is divergent even though the trace norm of the difference $(\rho - \sigma)$ is $2\epsilon$. This means that the difference between $\rho$ and $\sigma$ as measured by the trace norm is vanishingly small as $\epsilon \to 0$ even though $S(\rho \| \sigma)$ is divergent (i.e., infinite). This property of the quantum relative entropy represents a serious shortcoming if not treated with care.
== Non-negativity of relative entropy ==
=== Corresponding classical statement ===
For the classical Kullback–Leibler divergence, it can be shown that
$$D_{\mathrm{KL}}(P \| Q) = \sum_{j} p_{j} \log \frac{p_{j}}{q_{j}} \geq 0,$$
and the equality holds if and only if P = Q. Colloquially, this means that the uncertainty calculated using erroneous assumptions is always greater than the real amount of uncertainty.
To show the inequality, we rewrite
$$D_{\mathrm{KL}}(P \| Q) = \sum_{j} p_{j} \log \frac{p_{j}}{q_{j}} = \sum_{j} \left(-\log \frac{q_{j}}{p_{j}}\right) p_{j}.$$
Notice that log is a concave function. Therefore -log is convex. Applying Jensen's inequality, we obtain
$$D_{\mathrm{KL}}(P \| Q) = \sum_{j} \left(-\log \frac{q_{j}}{p_{j}}\right) p_{j} \geq -\log\left(\sum_{j} \frac{q_{j}}{p_{j}}\, p_{j}\right) = 0.$$
Jensen's inequality also states that equality holds if and only if, for all $i$, $q_i = \left(\sum_j q_j\right) p_i$, i.e. $p = q$.
=== The result ===
Klein's inequality states that the quantum relative entropy
$$S(\rho \| \sigma) = \operatorname{Tr} \rho (\log \rho - \log \sigma)$$
is non-negative in general. It is zero if and only if ρ = σ.
Proof
Let ρ and σ have spectral decompositions
$$\rho = \sum_{i} p_{i} v_{i} v_{i}^{*}, \qquad \sigma = \sum_{i} q_{i} w_{i} w_{i}^{*}.$$
So
$$\log \rho = \sum_{i} (\log p_{i})\, v_{i} v_{i}^{*}, \qquad \log \sigma = \sum_{i} (\log q_{i})\, w_{i} w_{i}^{*}.$$
Direct calculation gives
$$S(\rho \| \sigma) = \sum_{k} p_{k} \log p_{k} - \sum_{i,j} (p_{i} \log q_{j}) |v_{i}^{*} w_{j}|^{2} = \sum_{i} p_{i} \left(\log p_{i} - \sum_{j} (\log q_{j}) |v_{i}^{*} w_{j}|^{2}\right) = \sum_{i} p_{i} \left(\log p_{i} - \sum_{j} (\log q_{j}) P_{ij}\right),$$
where $P_{ij} = |v_{i}^{*} w_{j}|^{2}$.
Since the matrix $(P_{ij})_{ij}$ is a doubly stochastic matrix and $-\log$ is a convex function, the above expression is
$$\geq \sum_{i} p_{i} \left(\log p_{i} - \log\left(\sum_{j} q_{j} P_{ij}\right)\right).$$
Define $r_{i} = \sum_{j} q_{j} P_{ij}$. Then $\{r_{i}\}$ is a probability distribution. From the non-negativity of classical relative entropy, we have
$$S(\rho \| \sigma) \geq \sum_{i} p_{i} \log \frac{p_{i}}{r_{i}} \geq 0.$$
The second part of the claim follows from the fact that, since -log is strictly convex, equality is achieved in
$$\sum_{i} p_{i} \left(\log p_{i} - \sum_{j} (\log q_{j}) P_{ij}\right) \geq \sum_{i} p_{i} \left(\log p_{i} - \log\left(\sum_{j} q_{j} P_{ij}\right)\right)$$
if and only if $(P_{ij})$ is a permutation matrix, which implies ρ = σ, after a suitable labeling of the eigenvectors $\{v_{i}\}$ and $\{w_{i}\}$.: 513
== Joint convexity of relative entropy ==
The relative entropy is jointly convex. For $0 \leq \lambda \leq 1$ and states $\rho_{1(2)}, \sigma_{1(2)}$ we have
$$D(\lambda \rho_{1} + (1 - \lambda)\rho_{2} \,\|\, \lambda \sigma_{1} + (1 - \lambda)\sigma_{2}) \leq \lambda D(\rho_{1} \| \sigma_{1}) + (1 - \lambda) D(\rho_{2} \| \sigma_{2}).$$
== Monotonicity of relative entropy ==
The relative entropy decreases monotonically under completely positive trace preserving (CPTP) operations $\mathcal{N}$ on density matrices:
$$S(\mathcal{N}(\rho) \| \mathcal{N}(\sigma)) \leq S(\rho \| \sigma).$$
This inequality is called monotonicity of quantum relative entropy and was first proved by Göran Lindblad.
== An entanglement measure ==
Let a composite quantum system have state space $H = \bigotimes_{k} H_{k}$ and let ρ be a density matrix acting on H.
The relative entropy of entanglement of ρ is defined by
$$D_{\mathrm{REE}}(\rho) = \min_{\sigma} S(\rho \| \sigma)$$
where the minimum is taken over the family of separable states. A physical interpretation of the quantity is the optimal distinguishability of the state ρ from separable states.
Clearly, when ρ is not entangled, $D_{\mathrm{REE}}(\rho) = 0$ by Klein's inequality.
== Relation to other quantum information quantities ==
One reason the quantum relative entropy is useful is that several other important quantum information quantities are special cases of it. Often, theorems are stated in terms of the quantum relative entropy, which lead to immediate corollaries concerning the other quantities. Below, we list some of these relations.
Let ρAB be the joint state of a bipartite system with subsystem A of dimension nA and B of dimension nB. Let ρA, ρB be the respective reduced states, and IA, IB the respective identities. The maximally mixed states are IA/nA and IB/nB. Then it is possible to show with direct computation that
$$S(\rho_{A} \| I_{A}/n_{A}) = \log(n_{A}) - S(\rho_{A}),$$
$$S(\rho_{AB} \| \rho_{A} \otimes \rho_{B}) = S(\rho_{A}) + S(\rho_{B}) - S(\rho_{AB}) = I(A:B),$$
$$S(\rho_{AB} \| \rho_{A} \otimes I_{B}/n_{B}) = \log(n_{B}) + S(\rho_{A}) - S(\rho_{AB}) = \log(n_{B}) - S(B|A),$$
where I(A:B) is the quantum mutual information and S(B|A) is the quantum conditional entropy.
== References ==
Vedral, V. (8 March 2002). "The role of relative entropy in quantum information theory". Reviews of Modern Physics. 74 (1). American Physical Society (APS): 197–234. arXiv:quant-ph/0102094. Bibcode:2002RvMP...74..197V. doi:10.1103/revmodphys.74.197. ISSN 0034-6861. S2CID 6370982.
Marco Tomamichel, "Quantum Information Processing with Finite Resources -- Mathematical Foundations". arXiv:1504.00233
In statistical mechanics, the Gibbs algorithm, introduced by J. Willard Gibbs in 1902, is a criterion for choosing a probability distribution for the statistical ensemble of microstates of a thermodynamic system by minimizing the average log probability
$$\langle \ln p_{i} \rangle = \sum_{i} p_{i} \ln p_{i}$$
subject to the probability distribution $p_{i}$ satisfying a set of constraints (usually expectation values) corresponding to the known macroscopic quantities. In 1948, Claude Shannon interpreted the negative of this quantity, which he called information entropy, as a measure of the uncertainty in a probability distribution. In 1957, E. T. Jaynes realized that this quantity could be interpreted as missing information about anything, and generalized the Gibbs algorithm to non-equilibrium systems with the principle of maximum entropy and maximum entropy thermodynamics.
Physicists call the result of applying the Gibbs algorithm the Gibbs distribution for the given constraints, most notably Gibbs's grand canonical ensemble for open systems when the average energy and the average number of particles are given. (See also partition function).
This general result of the Gibbs algorithm is then a maximum entropy probability distribution. Statisticians identify such distributions as belonging to exponential families.
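As a concrete sketch (the energy levels and the constraint value are arbitrary illustrative numbers), the constrained minimization can be solved by noting that the optimum has the Boltzmann form p_i ∝ exp(−βE_i), with β fixed by the mean-energy constraint:

import numpy as np
from scipy.optimize import brentq

def gibbs_distribution(E, U):
    # distribution minimizing <ln p> subject to sum(p) = 1 and <E> = U;
    # the optimum is p_i ∝ exp(-beta * E_i), beta fixed by the constraint
    E = np.asarray(E, dtype=float)
    def mean_energy(beta):
        w = np.exp(-beta * (E - E.min()))   # shift for numerical stability
        p = w / w.sum()
        return float(p @ E)
    beta = brentq(lambda b: mean_energy(b) - U, -50.0, 50.0)
    w = np.exp(-beta * (E - E.min()))
    return w / w.sum(), beta

p, beta = gibbs_distribution([0.0, 1.0, 2.0], U=0.6)
print(p, beta)   # Boltzmann weights and the corresponding inverse temperature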
== References ==
The standard Gibbs free energy of formation (Gf°) of a compound is the change of Gibbs free energy that accompanies the formation of 1 mole of a substance in its standard state from its constituent elements in their standard states (the most stable form of the element at 1 bar of pressure and the specified temperature, usually 298.15 K or 25 °C).
The table below lists the standard Gibbs function of formation for several elements and chemical compounds and is taken from Lange's Handbook of Chemistry. Note that all values are in kJ/mol. Far more extensive tables can be found in the CRC Handbook of Chemistry and Physics and the NIST JANAF tables. The NIST Chemistry WebBook (see link below) is an online resource that contains standard enthalpy of formation for various compounds along with the standard molar entropy for these compounds from which the standard Gibbs free energy of formation can be calculated.
== See also ==
Thermochemistry
Calorimetry
== References ==
Dean, John A., ed. (1999). Lange's Handbook of Chemistry (15th ed.). New York: McGraw-Hill. ISBN 978-0-07-016384-3.
== External links ==
NIST Chemistry WebBook
In chemistry, the standard molar entropy is the entropy content of one mole of pure substance at a standard state of pressure and any temperature of interest. These are often (but not necessarily) chosen to be the standard temperature and pressure.
The standard molar entropy at pressure $P = P^{0}$ is usually given the symbol S°, and has units of joules per mole per kelvin (J⋅mol−1⋅K−1). Unlike standard enthalpies of formation, the value of S° is absolute. That is, an element in its standard state has a definite, nonzero value of S at room temperature. The entropy of a pure crystalline structure can be 0 J⋅mol−1⋅K−1 only at 0 K, according to the third law of thermodynamics. However, this assumes that the material forms a 'perfect crystal' without any residual entropy. Residual entropy can be due to crystallographic defects, dislocations, and/or incomplete rotational quenching within the solid, as originally pointed out by Linus Pauling. These contributions to the entropy are always present, because crystals always grow at a finite rate and at finite temperature. However, the residual entropy is often quite negligible and can be accounted for, when it occurs, using statistical mechanics.
== Thermodynamics ==
If a mole of a solid substance is a perfectly ordered solid at 0 K, then if the solid is warmed by its surroundings to 298.15 K without melting, its absolute molar entropy would be the sum of a series of N stepwise and reversible entropy changes. The limit of this sum as
$N \rightarrow \infty$ becomes an integral:
$$S^{\circ} = \sum_{k=1}^{N} \Delta S_{k} = \sum_{k=1}^{N} \frac{dQ_{k}}{T} \rightarrow \int_{0}^{T_{2}} \frac{dS}{dT}\, dT = \int_{0}^{T_{2}} \frac{C_{p_{k}}}{T}\, dT$$
In this example, $T_{2} = 298.15\ \mathrm{K}$ and $C_{p_{k}}$ is the molar heat capacity at constant pressure of the substance in the reversible process k. The molar heat capacity is not constant during the experiment because it changes depending on the (increasing) temperature of the substance. Therefore, a table of values for $C_{p_{k}}/T$ is required to find the total molar entropy. The quantity $dQ_{k}/T$ represents the ratio of a very small exchange of heat energy to the temperature T. The total molar entropy is the sum of many small changes in molar entropy, where each small change can be considered a reversible process.
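In practice the integral is evaluated numerically from tabulated heat-capacity data, e.g. with the trapezoidal rule; the numbers below are invented for illustration, and the small 0–10 K contribution would customarily be added via a Debye-law (Cp ∝ T³) extrapolation:

import numpy as np

# hypothetical tabulated data: T in K, Cp in J/(mol*K)
T = np.array([10.0, 50.0, 100.0, 150.0, 200.0, 250.0, 298.15])
Cp = np.array([0.5, 10.0, 20.0, 24.0, 26.0, 27.5, 28.5])

# S° ≈ integral of (Cp / T) dT over the tabulated range
S = np.trapz(Cp / T, T)
print(round(float(S), 1), "J/(mol*K)")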
== Chemistry ==
The standard molar entropy of a gas at STP includes contributions from:
The heat capacity of one mole of the solid from 0 K to the melting point (including heat absorbed in any changes between different crystal structures).
The latent heat of fusion of the solid.
The heat capacity of the liquid from the melting point to the boiling point.
The latent heat of vaporization of the liquid.
The heat capacity of the gas from the boiling point to room temperature.
Changes in entropy are associated with phase transitions and chemical reactions. Chemical equations make use of the standard molar entropy of reactants and products to find the standard entropy of reaction:
$$\Delta S^{\circ}_{\mathrm{rxn}} = S^{\circ}_{\mathrm{products}} - S^{\circ}_{\mathrm{reactants}}$$
The standard entropy of reaction helps determine whether the reaction will take place spontaneously. According to the second law of thermodynamics, a spontaneous reaction always results in an increase in total entropy of the system and its surroundings:
$$\Delta S_{\mathrm{total}} = \Delta S_{\mathrm{system}} + \Delta S_{\mathrm{surroundings}} > 0$$
Molar entropy is not the same for all gases. Under identical conditions, it is greater for a heavier gas.
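For example, using approximate standard molar entropies (values rounded from common reference tables) for the ammonia synthesis N2 + 3 H2 → 2 NH3:

# approximate standard molar entropies at 298.15 K, J/(mol*K)
S = {"N2": 191.6, "H2": 130.7, "NH3": 192.8}

dS_rxn = 2 * S["NH3"] - (S["N2"] + 3 * S["H2"])
print(round(dS_rxn, 1), "J/(mol*K)")   # ~ -198: 4 mol of gas become 2 mol

The negative sign reflects the loss of translational freedom as four moles of gas combine into two.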
== See also ==
Entropy
Heat
Gibbs free energy
Helmholtz free energy
Standard state
Third law of thermodynamics
== References ==
== External links ==
Table of Standard Thermodynamic Properties for Selected Substances
In thermodynamics, dissipation is the result of an irreversible process that affects a thermodynamic system. In a dissipative process, energy (internal, bulk flow kinetic, or system potential) transforms from an initial form to a final form, where the capacity of the final form to do thermodynamic work is less than that of the initial form. For example, transfer of energy as heat is dissipative because it is a transfer of energy other than by thermodynamic work or by transfer of matter, and spreads previously concentrated energy. Following the second law of thermodynamics, in conduction and radiation from one body to another, the entropy varies with temperature (reduces the capacity of the combination of the two bodies to do work), but never decreases in an isolated system.
In mechanical engineering, dissipation is the irreversible conversion of mechanical energy into thermal energy with an associated increase in entropy.
Processes with defined local temperature produce entropy at a certain rate. The entropy production rate times local temperature gives the dissipated power. Important examples of irreversible processes are: heat flow through a thermal resistance, fluid flow through a flow resistance, diffusion (mixing), chemical reactions, and electric current flow through an electrical resistance (Joule heating).
== Definition ==
Dissipative thermodynamic processes are essentially irreversible because they produce entropy. Planck regarded friction as the prime example of an irreversible thermodynamic process. In a process in which the temperature is locally continuously defined, the local density of rate of entropy production times local temperature gives the local density of dissipated power.
A particular occurrence of a dissipative process cannot be described by a single individual Hamiltonian formalism. A dissipative process requires a collection of admissible individual Hamiltonian descriptions, exactly which one describes the actual particular occurrence of the process of interest being unknown. This includes friction and hammering, and all similar forces that result in decoherency of energy—that is, conversion of coherent or directed energy flow into an indirected or more isotropic distribution of energy.
=== Energy ===
"The conversion of mechanical energy into heat is called energy dissipation." – François Roddier The term is also applied to the loss of energy due to generation of unwanted heat in electric and electronic circuits.
=== Computational physics ===
In computational physics, numerical dissipation (also known as "Numerical diffusion") refers to certain side-effects that may occur as a result of a numerical solution to a differential equation. When the pure advection equation, which is free of dissipation, is solved by a numerical approximation method, the energy of the initial wave may be reduced in a way analogous to a diffusional process. Such a method is said to contain 'dissipation'. In some cases, "artificial dissipation" is intentionally added to improve the numerical stability characteristics of the solution.
=== Mathematics ===
A formal, mathematical definition of dissipation, as commonly used in the mathematical study of measure-preserving dynamical systems, is given in the article wandering set.
== Examples ==
=== In hydraulic engineering ===
Dissipation is the process of converting mechanical energy of downward-flowing water into thermal and acoustical energy. Various devices are designed in stream beds to reduce the kinetic energy of flowing waters to reduce their erosive potential on banks and river bottoms. Very often, these devices look like small waterfalls or cascades, where water flows vertically or over riprap to lose some of its kinetic energy.
=== Irreversible processes ===
Important examples of irreversible processes are:
Heat flow through a thermal resistance
Fluid flow through a flow resistance
Diffusion (mixing)
Chemical reactions
Electrical current flow through an electrical resistance (Joule heating).
=== Waves or oscillations ===
Waves or oscillations lose energy over time, typically from friction or turbulence. In many cases, the "lost" energy raises the temperature of the system. For example, a wave that loses amplitude is said to dissipate. The precise nature of the effects depends on the nature of the wave: an atmospheric wave, for instance, may dissipate close to the surface due to friction with the land mass, and at higher levels due to radiative cooling.
== History ==
The concept of dissipation was introduced in the field of thermodynamics by William Thomson (Lord Kelvin) in 1852. Lord Kelvin deduced that a subset of the above-mentioned irreversible dissipative processes will occur unless a process is governed by a "perfect thermodynamic engine". The processes that Lord Kelvin identified were friction, diffusion, conduction of heat and the absorption of light.
== See also ==
Entropy production
General equation of heat transfer
Flood control
Principle of maximum entropy
Two-dimensional gas
== References ==
=== General References ===
"Dissipative system, a system that loses energy in the course of its time evolution." Benenson, W.; Harris, J. W.; Stocker, H.; Lutz, H. (2002). "6.1.3". Handbook of Physics. New York, NY: Springer-Verlag. p. 219. ISBN 978-0-387-21632-4. | Wikipedia/Dissipated_energy |
The non-random two-liquid model (abbreviated NRTL model) is an activity coefficient model introduced by Renon and Prausnitz in 1968 that correlates the activity coefficients $\gamma_{i}$ of a compound with its mole fractions $x_{i}$ in the liquid phase concerned. It is frequently applied in the field of chemical engineering to calculate phase equilibria. The concept of NRTL is based on the hypothesis of Wilson, who stated that the local concentration around a molecule in most mixtures is different from the bulk concentration. This difference is due to a difference between the interaction energy of the central molecule with the molecules of its own kind $U_{ii}$ and that with the molecules of the other kind $U_{ij}$. The energy difference also introduces a non-randomness at the local molecular level. The NRTL model belongs to the so-called local-composition models. Other models of this type are the Wilson model, the UNIQUAC model, and the group contribution model UNIFAC. These local-composition models are not thermodynamically consistent for a one-fluid model for a real mixture, due to the assumption that the local composition around molecule i is independent of the local composition around molecule j. This assumption is not true, as was shown by Flemr in 1976. However, they are consistent if a hypothetical two-liquid model is used. Models which have consistency between bulk and local molecular concentrations around different types of molecules are COSMO-RS and COSMOSPACE.
== Derivation ==
Like Wilson (1964), Renon & Prausnitz (1968) began with local composition theory, but instead of using the Flory–Huggins volumetric expression as Wilson did, they assumed local compositions followed
$$\frac{x_{21}}{x_{11}} = \frac{x_{2}}{x_{1}} \frac{\exp(-\alpha_{21} g_{21}/RT)}{\exp(-\alpha_{11} g_{11}/RT)}$$
with a new "non-randomness" parameter α. The excess Gibbs free energy was then determined to be
$$\frac{G^{ex}}{RT} = \sum_{i}^{N} x_{i} \frac{\sum_{j}^{N} \tau_{ji} G_{ji} x_{j}}{\sum_{k}^{N} G_{ki} x_{k}}.$$
Unlike Wilson's equation, this can predict partially miscible mixtures. However, the cross term, like Wohl's expansion, is more suitable for $H^{\text{ex}}$ than for $G^{\text{ex}}$, and experimental data are not always sufficiently plentiful to yield three meaningful values, so later attempts to extend Wilson's equation to partial miscibility (or to extend Guggenheim's quasichemical theory for nonrandom mixtures to Wilson's different-sized molecules) eventually yielded variants like UNIQUAC.
== Equations for a binary mixture ==
For a binary mixture the following functions are used:
$$\ln \gamma_{1} = x_{2}^{2} \left[ \tau_{21} \left( \frac{G_{21}}{x_{1} + x_{2} G_{21}} \right)^{2} + \frac{\tau_{12} G_{12}}{(x_{2} + x_{1} G_{12})^{2}} \right]$$
$$\ln \gamma_{2} = x_{1}^{2} \left[ \tau_{12} \left( \frac{G_{12}}{x_{2} + x_{1} G_{12}} \right)^{2} + \frac{\tau_{21} G_{21}}{(x_{1} + x_{2} G_{21})^{2}} \right]$$
with
$$\ln G_{12} = -\alpha_{12}\, \tau_{12}, \qquad \ln G_{21} = -\alpha_{21}\, \tau_{21}$$
Here, $\tau_{12}$ and $\tau_{21}$ are the dimensionless interaction parameters, which are related to the interaction energy parameters $\Delta g_{12}$ and $\Delta g_{21}$ by:
$$\tau_{12} = \frac{\Delta g_{12}}{RT} = \frac{U_{12} - U_{22}}{RT}, \qquad \tau_{21} = \frac{\Delta g_{21}}{RT} = \frac{U_{21} - U_{11}}{RT}$$
Here R is the gas constant, T the absolute temperature, and $U_{ij}$ the interaction energy between molecular surfaces i and j; $U_{ii}$ is the energy of evaporation. $U_{ij}$ has to be equal to $U_{ji}$, but $\Delta g_{ij}$ is not necessarily equal to $\Delta g_{ji}$.
The parameters $\alpha_{12}$ and $\alpha_{21}$ are the so-called non-randomness parameters, for which usually $\alpha_{12}$ is set equal to $\alpha_{21}$. For a liquid in which the local distribution is random around the center molecule, the parameter $\alpha_{12} = 0$. In that case, the equations reduce to the one-parameter Margules activity model:
$$\ln \gamma_{1} = x_{2}^{2} \left[ \tau_{21} + \tau_{12} \right] = A x_{2}^{2}, \qquad \ln \gamma_{2} = x_{1}^{2} \left[ \tau_{12} + \tau_{21} \right] = A x_{1}^{2}$$
In practice, $\alpha_{12}$ is set to 0.2, 0.3 or 0.48. The latter value is frequently used for aqueous systems; the high value reflects the ordered structure caused by hydrogen bonds. However, in the description of liquid-liquid equilibria, the non-randomness parameter is set to 0.2 to avoid a wrong liquid-liquid description. In some cases, a better phase equilibria description is obtained by setting $\alpha_{12} = -1$; however, this mathematical solution is impossible from a physical point of view, since no system can be more random than random ($\alpha_{12} = 0$). In general, NRTL offers more flexibility in the description of phase equilibria than other activity models due to the extra non-randomness parameters. However, in practice this flexibility is reduced in order to avoid a wrong equilibrium description outside the range of regressed data.
The limiting activity coefficients, also known as the activity coefficients at infinite dilution, are calculated by:
$$\ln \gamma_{1}^{\infty} = \tau_{21} + \tau_{12} \exp(-\alpha_{12}\, \tau_{12}), \qquad \ln \gamma_{2}^{\infty} = \tau_{12} + \tau_{21} \exp(-\alpha_{12}\, \tau_{21})$$
The expressions show that at $\alpha_{12} = 0$ the limiting activity coefficients are equal. This situation occurs for molecules of equal size but of different polarities. It also shows, since three parameters are available, that multiple sets of solutions are possible.
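A direct transcription of the binary equations (the function name and parameter values are ours, for illustration) also makes the infinite-dilution limit easy to verify numerically:

import math

def nrtl_binary(x1, tau12, tau21, alpha=0.3):
    # binary NRTL activity coefficients (gamma1, gamma2)
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)

# as x1 -> 0, ln gamma1 -> tau21 + tau12*exp(-alpha*tau12) = 1.630...
g1, _ = nrtl_binary(1e-12, tau12=0.5, tau21=1.2, alpha=0.3)
print(math.log(g1))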
== General equations ==
The general equation for $\ln \gamma_{i}$ for species $i$ in a mixture of $n$ components is:
$$\ln \gamma_{i} = \frac{\sum_{j=1}^{n} x_{j} \tau_{ji} G_{ji}}{\sum_{k=1}^{n} x_{k} G_{ki}} + \sum_{j=1}^{n} \frac{x_{j} G_{ij}}{\sum_{k=1}^{n} x_{k} G_{kj}} \left( \tau_{ij} - \frac{\sum_{m=1}^{n} x_{m} \tau_{mj} G_{mj}}{\sum_{k=1}^{n} x_{k} G_{kj}} \right)$$
with
$$G_{ij} = \exp(-\alpha_{ij} \tau_{ij})$$
$$\alpha_{ij} = \alpha_{ij_{0}} + \alpha_{ij_{1}} T$$
$$\tau_{ij} = A_{ij} + \frac{B_{ij}}{T} + \frac{C_{ij}}{T^{2}} + D_{ij} \ln T + E_{ij} T^{F_{ij}}$$
There are several different equation forms for $\alpha_{ij}$ and $\tau_{ij}$, the most general of which are shown above.
== Temperature dependent parameters ==
To describe phase equilibria over a large temperature regime, i.e. larger than 50 K, the interaction parameter has to be made temperature dependent.
Two formats are frequently used. The extended Antoine equation format:
$$\tau_{ij} = f(T) = a_{ij} + \frac{b_{ij}}{T} + c_{ij} \ln T + d_{ij} T$$
Here the logarithmic and linear terms are mainly used in the description of liquid-liquid equilibria (miscibility gap).
The other format is a second-order polynomial format:
$$\Delta g_{ij} = f(T) = a_{ij} + b_{ij} T + c_{ij} T^{2}$$
== Parameter determination ==
The NRTL parameters are fitted to activity coefficients that have been derived from experimentally determined phase equilibrium data (vapor–liquid, liquid–liquid, solid–liquid) as well as from heats of mixing. The source of the experimental data are often factual data banks like the Dortmund Data Bank. Other options are direct experimental work and predicted activity coefficients with UNIFAC and similar models.
It is noteworthy that for the same mixture several NRTL parameter sets might exist, and the choice of NRTL parameter set depends on the kind of phase equilibrium (i.e. solid–liquid (SL), liquid–liquid (LL), vapor–liquid (VL)). In the case of vapor–liquid equilibria, the fitted result importantly depends on which saturated vapor pressure of the pure components was used, and whether the gas phase was treated as an ideal or a real gas. Accurate saturated vapor pressure values are important in the determination or the description of an azeotrope. The gas fugacity coefficients are mostly set to unity (ideal gas assumption), but for vapor-liquid equilibria at high pressures (i.e. > 10 bar) an equation of state is needed to calculate the gas fugacity coefficient for a real gas description.
Determination of NRTL parameters from regression of LLE and VLE experimental data is a challenging problem because it involves solving isoactivity or isofugacity equations, which are highly non-linear. In addition, parameters obtained from LLE or VLE may not always represent the experimental behaviour expected. For this reason it is necessary to confirm the thermodynamic consistency of the obtained parameters in the whole range of compositions (including binary subsystems, experimental and calculated tie-lines, calculated plait point locations from the Hessian matrix, etc.) by using a phase stability test such as the Gibbs free energy minor tangent criterion.
== Parameters for NRTL model ==
NRTL binary interaction parameters have been published in the Dechema data series and are provided by NIST and DDBST. There also exist machine-learning approaches that are able to predict NRTL parameters by using the SMILES notation for molecules as input.
== Literature ==
In thermodynamics, enthalpy–entropy compensation is a specific example of the compensation effect. The compensation effect refers to the behavior of a series of closely related chemical reactions (e.g., reactants in different solvents or reactants differing only in a single substituent), which exhibit a linear relationship between one of the following kinetic or thermodynamic parameters for describing the reactions:
Between the logarithm of the pre-exponential factors (or prefactors) and the activation energies
$$\ln A_{i} = \alpha + \frac{E_{\mathrm{a},i}}{R\beta}$$
where the series of closely related reactions are indicated by the index i, Ai are the preexponential factors, Ea,i are the activation energies, R is the gas constant, and α, β are constants.
Between enthalpies and entropies of activation (enthalpy–entropy compensation)
$$\Delta H_{i}^{\ddagger} = \alpha + \beta \Delta S_{i}^{\ddagger}$$
where $\Delta H_{i}^{\ddagger}$ are the enthalpies of activation and $\Delta S_{i}^{\ddagger}$ are the entropies of activation.
Between the enthalpy and entropy changes of a series of similar reactions (enthalpy–entropy compensation)
$$\Delta H_{i} = \alpha + \beta \Delta S_{i}$$
where $\Delta H_{i}$ are the enthalpy changes and $\Delta S_{i}$ are the entropy changes.
When the activation energy is varied in the first instance, we may observe a related change in pre-exponential factors. An increase in A tends to compensate for an increase in Ea,i, which is why we call this phenomenon a compensation effect. Similarly, for the second and third instances, in accordance with the Gibbs free energy equation, with which we derive the listed equations, ΔH scales proportionately with ΔS. The enthalpy and entropy compensate for each other because of their opposite algebraic signs in the Gibbs equation.
A correlation between enthalpy and entropy has been observed for a wide variety of reactions. The correlation is significant because, for linear free-energy relationships (LFERs) to hold, one of three conditions for the relationship between enthalpy and entropy for a series of reactions must be met, with the most common encountered scenario being that which describes enthalpy–entropy compensation. The empirical relations above were noticed by several investigators beginning in the 1920s, since which the compensatory effects they govern have been identified under different aliases.
== Related terms ==
Many of the more popular terms used in discussing the compensation effect are specific to their field or phenomena. In these contexts, the unambiguous terms are preferred. The misapplication of and frequent crosstalk between fields on this matter has, however, often led to the use of inappropriate terms and a confusing picture. For the purposes of this entry different terms may refer to what may seem to be the same effect, but that either a term is being used as a shorthand (isokinetic and isoequilibrium relationships are different, yet are often grouped together synecdochically as isokinetic relationships for the sake of brevity) or is the correct term in context. This section should aid in resolving any uncertainties. (see Criticism section for more on the variety of terms)
compensation effect/rule : umbrella term for the observed linear relationship between: (i) the logarithm of the preexponential factors and the activation energies, (ii) enthalpies and entropies of activation, or (iii) between the enthalpy and entropy changes of a series of similar reactions.
enthalpy-entropy compensation : the linear relationship between either the enthalpies and entropies of activation or the enthalpy and entropy changes of a series of similar reactions.
isoequilibrium relation (IER), isoequilibrium effect : On a Van 't Hoff plot, there exists a common intersection point describing the thermodynamics of the reactions. At the isoequilibrium temperature β, all the reactions in the series should have the same equilibrium constant (Ki)
$$\Delta G_{i}(\beta) = \alpha$$
isokinetic relation (IKR), isokinetic effect : On an Arrhenius plot, there exists a common intersection point describing the kinetics of the reactions. At the isokinetic temperature β, all the reactions in the series should have the same rate constant (ki)
$$k_{i}(\beta) = e^{\alpha}$$
isoequilibrium temperature : used for thermodynamic LFERs; refers to β in the equations where it possesses dimensions of temperature
isokinetic temperature : used for kinetic LFERs; refers to β in the equations where it possesses dimensions of temperature
kinetic compensation : an increase in the preexponential factors tends to compensate for the increase in activation energy:
$$\ln A = \ln A_{0} + \alpha \Delta E_{0}$$
Meyer–Neldel rule (MNR) : primarily used in materials science and condensed matter physics; the MNR states that a plot of the logarithm of the preexponential factor against the activation energy is linear:
$$\sigma(T) = \sigma_{0} \exp\left(-\frac{E_{\mathrm{a}}}{k_{\mathrm{B}} T}\right)$$
where $\sigma_{0}$ is the preexponential factor, $E_{\mathrm{a}}$ is the activation energy, σ is the conductivity, $k_{\mathrm{B}}$ is the Boltzmann constant, and T is the temperature.
== Mathematics ==
=== Enthalpy–entropy compensation as a requirement for LFERs ===
Linear free-energy relationships (LFERs) exist when the relative influence of changing substituents on one reactant is similar to the effect on another reactant, and include linear Hammett plots, Swain–Scott plots, and Brønsted plots. LFERs are not always found to hold, and to see when one can expect them to, we examine the relationship between the free-energy differences for the two reactions under comparison. The extent to which the free energy of the new reaction is changed, via a change in substituent, is proportional to the extent to which the reference reaction was changed by the same substitution. A ratio of the free-energy differences is the reaction quotient or constant Q.
$$(\Delta G'_{0} - \Delta G'_{x}) = Q(\Delta G_{0} - \Delta G_{x})$$
The above equation may be rewritten as the difference (δ) in free-energy changes (ΔG):
$$\delta \Delta G' = Q\, \delta \Delta G$$
Substituting the Gibbs free-energy equation (ΔG = ΔH – TΔS) into the equation above yields a form that makes clear the requirements for LFERs to hold.
$$(\Delta H' - T \Delta S') = Q(\Delta H - T \Delta S)$$
One should expect LFERs to hold if one of three conditions are met:
δΔH's are coincidentally the same for both the new reaction under study and the reference reaction, and the δΔS's are linearly proportional for the two reactions being compared.
δΔS's are coincidentally the same for both the new reaction under study and the reference reaction, and the δΔH's are linearly proportional for the two reactions being compared.
δΔH's and δΔS's are linearly related to each other for both the reference reaction and the new reaction.
The third condition describes the enthalpy–entropy effect and is the condition most commonly met.
=== Isokinetic and isoequilibrium temperature ===
For most reactions the activation enthalpy and activation entropy are unknown, but, if these parameters have been measured and a linear relationship is found to exist (meaning an LFER was found to hold), the following equation describes the relationship between $\Delta H_{i}^{\ddagger}$ and $\Delta S_{i}^{\ddagger}$:
$$\Delta H^{\ddagger} = \beta \Delta S^{\ddagger} + \Delta H_{0}^{\ddagger}$$
Inserting the Gibbs free-energy equation and combining like terms produces the following equation:
$$\Delta G^{\ddagger} = \Delta H_{0}^{\ddagger} - (T - \beta) \Delta S^{\ddagger}$$
where $\Delta H_{0}^{\ddagger}$ is constant regardless of substituents and $\Delta S^{\ddagger}$ is different for each substituent.
In this form, β has the dimension of temperature and is referred to as the isokinetic (or isoequilibrium) temperature.
Alternately, the isokinetic (or isoequilibrium) temperature may be reached by observing that, if a linear relationship is found, then the difference between the ΔH‡s for any closely related reactants will be related to the difference between ΔS‡'s for the same reactants:
\delta \Delta H^\ddagger = \beta\,\delta \Delta S^\ddagger
Using the Gibbs free-energy equation,
\delta \Delta G^\ddagger = \left(1 - \frac{T}{\beta}\right)\delta \Delta H^\ddagger
In both forms, it is apparent that the difference in Gibbs free-energies of activations (δΔG‡) will be zero when the temperature is at the isokinetic (or isoequilibrium) temperature and hence identical for all members of the reaction set at that temperature.
Beginning with the Arrhenius equation and assuming kinetic compensation (obeying ln A = ln A0 + αE‡), the isokinetic temperature may also be given by
k_\text{B}\,\beta = \frac{1}{\alpha}
The reactions will have approximately the same value of their rate constant k at an isokinetic temperature.
== History ==
In a 1925 paper, F.H. Constable described the linear relationship observed for the reaction parameters of the catalytic dehydrogenation of primary alcohols with copper-chromium oxide.
== Phenomenon explained ==
The foundations of the compensation effect are still not fully understood though many theories have been brought forward. Compensation of Arrhenius processes in solid-state materials and devices can be explained quite generally from the statistical physics of aggregating fundamental excitations from the thermal bath to surmount a barrier whose activation energy is significantly larger than the characteristic energy of the excitations used (e.g., optical phonons). To rationalize the occurrences of enthalpy-entropy compensation in protein folding and enzymatic reactions, a Carnot-cycle model in which a micro-phase transition plays a crucial role was proposed. In drug receptor binding, it has been suggested that enthalpy-entropy compensation arises due to an intrinsic property of hydrogen bonds. A mechanical basis for solvent-induced enthalpy-entropy compensation has been put forward and tested at the dilute gas limit. There is some evidence of enthalpy-entropy compensation in biochemical or metabolic networks particularly in the context of intermediate-free coupled reactions or processes. However, a single general statistical mechanical explanation applicable to all compensated processes has not yet been developed.
== Criticism ==
Kinetic relations have been observed in many systems and, since their conception, have gone by many terms, among which are the Meyer–Neldel effect or rule, the Barclay–Butler rule, the theta rule, and the Smith–Topley effect. Generally, chemists talk about the isokinetic relation (IKR), stressing the importance of the isokinetic (or isoequilibrium) temperature; condensed matter physicists and materials scientists use the Meyer–Neldel rule; and biologists use the compensation effect or rule.
An interesting homework problem appears following Chapter 7: Structure-Reactivity Relationships in Kenneth Connors's textbook Chemical Kinetics: The Study of Reaction Rates:
From the last four digits of the office telephone numbers of the faculty in your department, systematically construct pairs of "rate constants" as two-digit numbers times 10−5 s−1 at temperatures 300 K and 315 K (obviously the larger rate constant of each pair to be associated with the higher temperature). Make a two-point Arrhenius plot for each faculty member, evaluating ΔH‡ and ΔS‡. Examine the plot of ΔH‡ against ΔS‡ for evidence of an isokinetic relationship.
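A short Python sketch of this exercise (with random digits standing in for the telephone numbers, so every number below is invented) shows how such statistically meaningless data still produce a convincingly linear compensation plot; Ea and ln A, which are linearly related to ΔH‡ and ΔS‡, are used directly:

import numpy as np

rng = np.random.default_rng(0)
R, T1, T2 = 8.314, 300.0, 315.0

# Random two-digit "rate constants" (x 1e-5 s^-1), larger value at the
# higher temperature, standing in for faculty telephone digits.
pairs = np.sort(rng.integers(10, 100, size=(25, 2)), axis=1) * 1e-5

# Two-point Arrhenius fit: ln k = ln A - Ea / (R T)
Ea = R * np.log(pairs[:, 1] / pairs[:, 0]) / (1/T1 - 1/T2)
lnA = np.log(pairs[:, 0]) + Ea / (R * T1)

# ln A (a stand-in for the entropy of activation, to which it is linearly
# related) correlates almost perfectly with Ea (a stand-in for enthalpy),
# purely as an artifact of the narrow temperature range:
print("correlation(Ea, ln A) =", np.corrcoef(Ea, lnA)[0, 1])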
The existence of any real compensation effect has been widely derided in recent years and attributed to the analysis of interdependent factors and chance. Because the physical roots remain to be fully understood, it has been called into question whether compensation is a truly physical phenomenon or a coincidence due to trivial mathematical connections between parameters. The compensation effect has been criticized in other respects, namely for being the result of random experimental and systematic errors producing the appearance of compensation. The principal complaint lodged states that compensation is an artifact of data from a limited temperature range or from a limited range for the free energies.
In response to the criticisms, investigators have stressed that compensatory phenomena are real, but appropriate and in-depth data analysis is always needed. The F-test has been used to this end: the mean deviation of the data points from lines constrained to pass through a common isokinetic temperature is compared with the mean deviation of the points from the unconstrained lines. Appropriate statistical tests should be performed as well.
There are few topics in chemistry in which so many misunderstandings and controversies have arisen as in connection with the so-called isokinetic relationship (IKR) or compensation law. Up to date, a great many chemists appear to be inclined to dismiss the IKR as being accidental. The crucial problem is that the activation parameters are mutually dependent because of their determination from the experimental data. Therefore, it has been stressed repeatedly, the isokinetic plot (i.e., ΔH‡ against ΔS‡) is unfit in principle to substantiate a claim of an isokinetic relationship. At the same time, however, it is a fatal error to dismiss the IKR because of that fallacy.
Common among all defenders is the agreement that stringent criteria for the assignment of true compensation effects must be adhered to.
== References == | Wikipedia/Enthalpy–entropy_compensation |
The Gibbs–Helmholtz equation is a thermodynamic equation used to calculate changes in the Gibbs free energy of a system as a function of temperature. It was originally presented in an 1882 paper entitled "Die Thermodynamik chemischer Vorgänge" by Hermann von Helmholtz. It describes how the Gibbs free energy, which was presented originally by Josiah Willard Gibbs, varies with temperature. Helmholtz derived it first; Gibbs derived it six years later. The attribution to Gibbs goes back to Wilhelm Ostwald, who first translated Gibbs' monograph into German and promoted it in Europe.
The equation is:
\left(\frac{\partial (G/T)}{\partial T}\right)_p = -\frac{H}{T^2}
where H is the enthalpy, T the absolute temperature and G the Gibbs free energy of the system, all at constant pressure p. The equation states that the change in the G/T ratio at constant pressure as a result of an infinitesimally small change in temperature is a factor −H/T2.
Similar equations include the analogous relation for the Helmholtz free energy A at constant volume, with the internal energy U in place of the enthalpy: (∂(A/T)/∂T)_V = −U/T2.
== Chemical reactions and work ==
The typical applications of this equation are to chemical reactions. The equation reads:
\left(\frac{\partial (\Delta G^\ominus / T)}{\partial T}\right)_p = -\frac{\Delta H^\ominus}{T^2}
with ΔG as the change in Gibbs energy due to reaction, and ΔH as the enthalpy of reaction (often, but not necessarily, assumed to be independent of temperature). The ⊖ denotes the use of standard states, and particularly the choice of a particular standard pressure (1 bar), to calculate ΔG and ΔH.
Integrating with respect to T (again p is constant) yields:
\frac{\Delta G^\ominus(T_2)}{T_2} - \frac{\Delta G^\ominus(T_1)}{T_1} = \Delta H^\ominus \left(\frac{1}{T_2} - \frac{1}{T_1}\right)
This equation quickly enables the calculation of the Gibbs free energy change for a chemical reaction at any temperature T2 with knowledge of just the standard Gibbs free energy change of formation and the standard enthalpy change of formation for the individual components.
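As a worked sketch (the thermodynamic values below are invented for a hypothetical reaction), the integrated equation can be evaluated directly in Python:

def delta_G_at_T2(dG_T1, dH, T1, T2):
    """Integrated Gibbs-Helmholtz relation:
    dG(T2)/T2 - dG(T1)/T1 = dH * (1/T2 - 1/T1),
    assuming dH is independent of temperature over [T1, T2]."""
    return T2 * (dG_T1 / T1 + dH * (1.0 / T2 - 1.0 / T1))

# Hypothetical reaction: dG = -50 kJ/mol at 298.15 K, dH = -100 kJ/mol.
# Since dS is negative here, dG becomes less negative at higher T.
print(delta_G_at_T2(-50e3, -100e3, 298.15, 350.0))  # dG at 350 K, in J/mol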
Also, using the reaction isotherm equation, that is
\frac{\Delta G^\ominus}{T} = -R \ln K
which relates the Gibbs energy to a chemical equilibrium constant, the van 't Hoff equation can be derived.
Since the change in a system's Gibbs energy is equal to the maximum amount of non-expansion work that the system can do in a process, the Gibbs–Helmholtz equation may be used to estimate how much non-expansion work can be done by a chemical process as a function of temperature. For example, the capacity of rechargeable electric batteries can be estimated as a function of temperature using the Gibbs–Helmholtz equation.
== Derivation ==
=== Background ===
The definition of the Gibbs function is
H = G + TS
where H is the enthalpy defined by:
H = U + pV
Taking differentials of each definition to find dH and dG, then using the fundamental thermodynamic relation (always true for reversible or irreversible processes):
dU = T\,dS - p\,dV
where S is the entropy and V is the volume, leads, after substitution into the differentials of H and G, to a new master equation:
dG = -S\,dT + V\,dp
This is the Gibbs free energy for a closed system. The Gibbs–Helmholtz equation can be derived from this second master equation and the chain rule for partial derivatives.
== Sources ==
== External links ==
Gibbs–Helmholtz equation, by W. R. Salzman (2004).
Gibbs-Helmholtz Equation, by P. Mander (2013) | Wikipedia/Gibbs–Helmholtz_equation |
In electrochemistry, the Nernst equation is a chemical thermodynamical relationship that permits the calculation of the reduction potential of a reaction (half-cell or full cell reaction) from the standard electrode potential, absolute temperature, the number of electrons involved in the redox reaction, and activities (often approximated by concentrations) of the chemical species undergoing reduction and oxidation respectively. It was named after Walther Nernst, a German physical chemist who formulated the equation.
== Expression ==
=== General form with chemical activities ===
When an oxidized species (Ox) accepts a number z of electrons (e−) to be converted into its reduced form (Red), the half-reaction is expressed as:
\text{Ox} + z\,e^- \longrightarrow \text{Red}
The reaction quotient (Qr), also often called the ion activity product (IAP), is the ratio between the chemical activities (a) of the reduced form (the reductant, aRed) and the oxidized form (the oxidant, aOx). The chemical activity of a dissolved species corresponds to its true thermodynamic concentration, taking into account the electrical interactions between all ions present in solution at elevated concentrations. For a given dissolved species, its chemical activity (a) is the product of its activity coefficient (γ) and its molar (mol/L solution), or molal (mol/kg water), concentration (C): a = γC. So, if the concentrations (C, also denoted below with square brackets [ ]) of all the dissolved species of interest are sufficiently low and their activity coefficients are close to unity, their chemical activities can be approximated by their concentrations, as is commonly done when simplifying, or idealizing, a reaction for didactic purposes:
Q_r = \frac{a_\text{Red}}{a_\text{Ox}} = \frac{[\text{Red}]}{[\text{Ox}]}
At chemical equilibrium, the ratio Qr of the activity of the reaction product (aRed) by the reagent activity (aOx) is equal to the equilibrium constant K of the half-reaction:
K = \frac{a_\text{Red}}{a_\text{Ox}}
Standard thermodynamics also says that the actual Gibbs free energy ΔG is related to the free-energy change under standard state ΔG⊖ by the relationship:
\Delta G = \Delta G^\ominus + RT \ln Q_r
where Qr is the reaction quotient and R is the universal ideal gas constant.
The cell potential E associated with the electrochemical reaction is defined as the decrease in Gibbs free energy per coulomb of charge transferred, which leads to the relationship
\Delta G = -zFE
The constant F (the Faraday constant) is a unit conversion factor F = NAq, where NA is the Avogadro constant and q is the fundamental electron charge. This immediately leads to the Nernst equation, which for an electrochemical half-cell is
E_\text{red} = E_\text{red}^\ominus - \frac{RT}{zF} \ln Q_r = E_\text{red}^\ominus - \frac{RT}{zF} \ln \frac{a_\text{Red}}{a_\text{Ox}}
For a complete electrochemical reaction (full cell), the equation can be written as
E_\text{cell} = E_\text{cell}^\ominus - \frac{RT}{zF} \ln Q_r
where:
Ered is the half-cell reduction potential at the temperature of interest,
Eored is the standard half-cell reduction potential,
Ecell is the cell potential (electromotive force) at the temperature of interest,
Eocell is the standard cell potential in volts,
R is the universal ideal gas constant: R = 8.31446261815324 J K−1 mol−1,
T is the temperature in kelvins,
z is the number of electrons transferred in the cell reaction or half-reaction,
F is Faraday's constant, the magnitude of charge (in coulombs) per mole of electrons: F = 96485.3321233100184 C mol−1,
Qr is the reaction quotient of the cell reaction, and,
a is the chemical activity for the relevant species, where aRed is the activity of the reduced form and aOx is the activity of the oxidized form.
=== Thermal voltage ===
At room temperature (25 °C), the thermal voltage
V_T = \frac{RT}{F}
is approximately 25.693 mV. The Nernst equation is frequently expressed in terms of base-10 logarithms (i.e., common logarithms) rather than natural logarithms, in which case it is written:
E = E^\ominus - \frac{V_T}{z} \ln \frac{a_\text{Red}}{a_\text{Ox}} = E^\ominus - \frac{\lambda V_T}{z} \log_{10} \frac{a_\text{Red}}{a_\text{Ox}}
where λ = ln(10) ≈ 2.3026 and λVT ≈ 0.05916 V.
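A minimal numerical sketch (the chosen couple and activity ratio are arbitrary illustrations) shows the half-cell form of the equation in Python:

import math

R, F = 8.314462618, 96485.33212  # J/(K mol), C/mol

def nernst(E0, z, a_red, a_ox, T=298.15):
    """Half-cell reduction potential: E = E0 - (RT / zF) * ln(a_red / a_ox)."""
    return E0 - (R * T) / (z * F) * math.log(a_red / a_ox)

# One-electron couple with E0 = 0.77 V (e.g. Fe3+/Fe2+) and a 10:1
# reduced-to-oxidized activity ratio:
print(nernst(0.77, 1, a_red=0.10, a_ox=0.01))  # about 0.77 V - 0.0592 V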
=== Form with activity coefficients and concentrations ===
Similarly to equilibrium constants, activities are always measured with respect to the standard state (1 mol/L for solutes, 1 atm for gases, and T = 298.15 K, i.e., 25 °C or 77 °F). The chemical activity of a species i, ai, is related to the measured concentration Ci via the relationship ai = γi Ci, where γi is the activity coefficient of the species i. Because activity coefficients tend to unity at low concentrations, or are unknown or difficult to determine at medium and high concentrations, activities in the Nernst equation are frequently replaced by simple concentrations, and formal standard reduction potentials E⊖′red are then used.
Taking into account the activity coefficients (γ), the Nernst equation becomes:
E_\text{red} = E_\text{red}^\ominus - \frac{RT}{zF} \ln\left(\frac{\gamma_\text{Red}}{\gamma_\text{Ox}} \cdot \frac{C_\text{Red}}{C_\text{Ox}}\right)
E_\text{red} = E_\text{red}^\ominus - \frac{RT}{zF} \left(\ln\frac{\gamma_\text{Red}}{\gamma_\text{Ox}} + \ln\frac{C_\text{Red}}{C_\text{Ox}}\right)
E_\text{red} = \underbrace{\left(E_\text{red}^\ominus - \frac{RT}{zF} \ln\frac{\gamma_\text{Red}}{\gamma_\text{Ox}}\right)}_{E_\text{red}^{\ominus'}} - \frac{RT}{zF} \ln\frac{C_\text{Red}}{C_\text{Ox}}
where the first term, including the activity coefficients (γ), is denoted E⊖′red and called the formal standard reduction potential, so that Ered can be directly expressed as a function of E⊖′red and the concentrations in the simplest form of the Nernst equation:
E_\text{red} = E_\text{red}^{\ominus'} - \frac{RT}{zF} \ln\frac{C_\text{Red}}{C_\text{Ox}}
=== Formal standard reduction potential ===
When wishing to use simple concentrations in place of activities, but when the activity coefficients are far from unity, can no longer be neglected, and are unknown or too difficult to determine, it can be convenient to introduce the notion of the "so-called" formal standard reduction potential (E⊖′red), which is related to the standard reduction potential as follows:
E_\text{red}^{\ominus'} = E_\text{red}^\ominus - \frac{RT}{zF} \ln\frac{\gamma_\text{Red}}{\gamma_\text{Ox}}
The Nernst equation for the half-cell reaction can then be correctly and formally written in terms of concentrations as:
E_\text{red} = E_\text{red}^{\ominus'} - \frac{RT}{zF} \ln\frac{C_\text{Red}}{C_\text{Ox}}
and likewise for the full cell expression.
According to Wenzel (2020), a formal reduction potential E⊖′red is the reduction potential that applies to a half-reaction under a set of specified conditions such as, e.g., pH, ionic strength, or the concentration of complexing agents.
The formal reduction potential E⊖′red is often a more convenient, but conditional, form of the standard reduction potential, taking into account activity coefficients and specific conditions characteristic of the reaction medium. Its value is conditional, i.e., it depends on the experimental conditions, and because the ionic strength affects the activity coefficients, E⊖′red will vary from medium to medium. Several definitions of the formal reduction potential can be found in the literature, depending on the pursued objective and the experimental constraints imposed by the studied system. The general definition of E⊖′red refers to its value determined when Cred/Cox = 1. A more particular case is when E⊖′red is also determined at pH 7, as, e.g., for redox reactions important in biochemistry or biological systems.
==== Determination of the formal standard reduction potential when Cred/Cox = 1 ====
The formal standard reduction potential E⊖′red can be defined as the measured reduction potential Ered of the half-reaction at unity concentration ratio of the oxidized and reduced species (i.e., when Cred/Cox = 1) under given conditions.
Indeed:
Ered = E⊖red when ared/aox = 1,
Ered = E⊖′red when Cred/Cox = 1,
because ln 1 = 0, and the term γred/γox is included in E⊖′red.
The formal reduction potential makes it possible to work more simply with molar (mol/L, M) or molal (mol/kg H2O, m) concentrations in place of activities. Because molar and molal concentrations were once referred to as formal concentrations, this could explain the origin of the adjective formal in the expression formal potential.
The formal potential is thus the reversible potential of an electrode at equilibrium immersed in a solution where reactants and products are at unit concentration. If any small incremental change of potential causes a change in the direction of the reaction, i.e. from reduction to oxidation or vice versa, the system is close to equilibrium, reversible and is at its formal potential. When the formal potential is measured under standard conditions (i.e. the activity of each dissolved species is 1 mol/L, T = 298.15 K = 25 °C = 77 °F, Pgas = 1 bar) it becomes de facto a standard potential. According to Brown and Swift (1949):
"A formal potential is defined as the potential of a half-cell, measured against the standard hydrogen electrode, when the total concentration of each oxidation state is one formal".
In this case, as for the standard reduction potentials, the concentrations of dissolved species remain equal to one molar (M) or one molal (m), and so are said to be one formal (F). So, expressing the concentration C in molarity M (1 mol/L):
\frac{C_\text{red}}{C_\text{ox}} = \frac{1\,\text{M}_\text{red}}{1\,\text{M}_\text{ox}} = 1
The term formal concentration (F) is now largely ignored in the current literature and can be commonly assimilated to molar concentration (M), or molality (m) in case of thermodynamic calculations.
The formal potential is also found halfway between the two peaks in a cyclic voltammogram, where at this point the concentration of Ox (the oxidized species) and Red (the reduced species) at the electrode surface are equal.
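As a one-line numerical illustration (the peak potentials below are invented), the halfway point is simply the mean of the anodic and cathodic peak potentials read off the voltammogram:

E_pa, E_pc = 0.265, 0.205       # invented anodic and cathodic peak potentials, V
E_formal = (E_pa + E_pc) / 2    # midpoint estimate of the formal potential
print(E_formal)                 # 0.235 V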
The activity coefficients γred and γox are included in the formal potential E⊖′red, and because they depend on experimental conditions such as temperature, ionic strength, and pH, E⊖′red cannot be referred to as an immutable standard potential but needs to be determined systematically for each specific set of experimental conditions.
Formal reduction potentials are applied to simplify calculations of a considered system under given conditions and the interpretation of measurements. The experimental conditions in which they are determined and their relationship to the standard reduction potentials must be clearly described to avoid confusing them with standard reduction potentials.
==== Formal standard reduction potential at pH 7 ====
Formal standard reduction potentials E⊖′red are also commonly used in biochemistry and cell biology to refer to standard reduction potentials measured at pH 7, a value closer to the pH of most physiological and intracellular fluids than the standard-state pH of 0. The advantage is that this defines a more appropriate redox scale, corresponding better to real conditions than the standard state does. Formal standard reduction potentials E⊖′red make it easier to estimate whether a redox reaction supposed to occur in a metabolic process, or to fuel microbial activity, is feasible under given conditions.
While standard reduction potentials always refer to the standard hydrogen electrode (SHE), with [H+] = 1 M corresponding to pH 0 and E⊖red H+ fixed arbitrarily at zero by convention, this is no longer the case at pH 7. The reduction potential Ered of a hydrogen electrode operating at pH 7 is then −0.413 V with respect to the standard hydrogen electrode (SHE).
=== Expression of the Nernst equation as a function of pH ===
The Eh and pH of a solution are related by the Nernst equation, as commonly represented by a Pourbaix diagram (Eh–pH plot). Eh explicitly denotes Ered expressed versus the standard hydrogen electrode (SHE). For a half-cell equation, conventionally written as a reduction reaction (i.e., electrons accepted by an oxidant on the left side):
a\,A + b\,B + h\,\text{H}^+ + z\,e^- \rightleftharpoons c\,C + d\,D
The half-cell standard reduction potential E⊖red is given by
E_\text{red}^\ominus\,(\text{volt}) = -\frac{\Delta G^\ominus}{zF}
where ΔG⊖ is the standard Gibbs free energy change, z is the number of electrons involved, and F is the Faraday constant. The Nernst equation relates pH and Eh as follows:
E_h = E_\text{red} = E_\text{red}^\ominus - \frac{0.05916}{z} \log\left(\frac{\{C\}^c \{D\}^d}{\{A\}^a \{B\}^b}\right) - \frac{0.05916\,h}{z}\,\text{pH}
where curly brackets indicate activities, and exponents are shown in the conventional manner. This equation is the equation of a straight line for Ered as a function of pH with a slope of −0.05916 (h/z) volt (pH has no units).
This equation predicts lower Ered at higher pH values. This is observed for the reduction of O2 into H2O, or OH−, and for the reduction of H+ into H2. Ered is then often noted as Eh to indicate that it refers to the standard hydrogen electrode (SHE), whose Ered = 0 by convention under standard conditions (T = 298.15 K = 25 °C = 77 °F, Pgas = 1 atm (1.013 bar), concentrations = 1 M and thus pH = 0).
==== Main factors affecting the formal standard reduction potentials ====
The main factor affecting the formal reduction potentials in biochemical or biological processes is most often the pH. To determine approximate values of formal reduction potentials, neglecting in a first approach changes in activity coefficients due to ionic strength, the Nernst equation has to be applied, taking care to first express the relationship as a function of pH. The second factor to consider is the values of the concentrations taken into account in the Nernst equation. To define a formal reduction potential for a biochemical reaction, the pH value, the concentration values, and the hypotheses made on the activity coefficients must always be explicitly indicated. When using, or comparing, several formal reduction potentials, they must also be internally consistent.
Problems may occur when mixing different sources of data using different conventions or approximations (i.e., with different underlying hypotheses). When working at the frontier between inorganic and biological processes (e.g., when comparing abiotic and biotic processes in geochemistry when microbial activity could also be at work in the system), care must be taken not to inadvertently directly mix standard reduction potentials versus SHE (pH = 0) with formal reduction potentials (pH = 7). Definitions must be clearly expressed and carefully controlled, especially if the sources of data are different and arise from different fields (e.g., picking and mixing data from classical electrochemistry and microbiology textbooks without paying attention to the different conventions on which they are based).
==== Examples with a Pourbaix diagram ====
To illustrate the dependency of the reduction potential on pH, one can simply consider the two oxido-reduction equilibria determining the water stability domain in a Pourbaix diagram (Eh–pH plot). When water is subjected to electrolysis by applying a sufficient difference of electrical potential between two electrodes immersed in water, hydrogen is produced at the cathode (reduction of water protons) while oxygen is formed at the anode (oxidation of water oxygen atoms). The same may occur if a reductant stronger than hydrogen (e.g., metallic Na) or an oxidant stronger than oxygen (e.g., F2) comes into contact with water and reacts with it. In the Eh–pH plot shown here (the simplest possible version of a Pourbaix diagram), the water stability domain (grey surface) is delimited in terms of redox potential by two inclined red dashed lines:
Lower stability line with hydrogen gas evolution due to the proton reduction at very low Eh:
2 H+ + 2 e− ⇌ H2 (cathode: reduction)
Higher stability line with oxygen gas evolution due to water oxygen oxidation at very high Eh:
2 H2O ⇌ O2 + 4 H+ + 4 e− (anode: oxidation)
When solving the Nernst equation for each corresponding reduction reaction (the water oxidation reaction producing oxygen must first be reversed), both equations have a similar form because the number of protons and the number of electrons involved in each reaction are the same, and their ratio is one (2 H+/2 e− for H2 and 4 H+/4 e− for O2, respectively), so the Nernst equation expressed as a function of pH simplifies.
The result can be numerically expressed as follows:
E_\text{red} = E_\text{red}^\ominus - 0.05916\,\text{pH}
Note that the slopes of the two water stability domain upper and lower lines are the same (−59.16 mV/pH unit), so they are parallel on a Pourbaix diagram. As the slopes are negative, at high pH both hydrogen and oxygen evolution require a much lower reduction potential than at low pH.
For the reduction of H+ into H2, the relationship given above becomes:
E_\text{red} = -0.05916\,\text{pH}
because by convention E⊖red = 0 V for the standard hydrogen electrode (SHE: pH = 0). So, at pH = 7, Ered = −0.414 V for the reduction of protons.
For the reduction of O2 into 2 H2O, the relationship given above becomes:
E_\text{red} = 1.229 - 0.05916\,\text{pH}
because E⊖red = +1.229 V with respect to the standard hydrogen electrode (SHE: pH = 0). So, at pH = 7, Ered = +0.815 V for the reduction of oxygen.
The offset of −414 mV in Ered is the same for both reduction reactions because they share the same linear relationship as a function of pH and the slopes of their lines are the same. This can be directly verified on a Pourbaix diagram. For other reduction reactions, the value of the formal reduction potential at pH 7, commonly referred to for biochemical reactions, also depends on the slope of the corresponding line in a Pourbaix diagram, i.e., on the ratio h/z of the number of H+ to the number of e− involved in the reduction reaction, and thus on the stoichiometry of the half-reaction. Determining the formal reduction potential at pH 7 for a given biochemical half-reaction thus requires calculating it with the corresponding Nernst equation as a function of pH. One cannot simply apply an offset of −414 mV to the Eh value (SHE) when the ratio h/z differs from 1.
== Applications in biology ==
Besides important redox reactions in biochemistry and microbiology, the Nernst equation is also used in physiology for calculating the electric potential of a cell membrane with respect to one type of ion. It can be linked to the acid dissociation constant.
=== Nernst potential ===
The Nernst equation has a physiological application when used to calculate the potential of an ion of charge z across a membrane. This potential is determined using the concentration of the ion both inside and outside the cell:
E = \frac{RT}{zF} \ln\frac{[\text{ion outside cell}]}{[\text{ion inside cell}]} = 2.3026\,\frac{RT}{zF} \log_{10}\frac{[\text{ion outside cell}]}{[\text{ion inside cell}]}
When the membrane is in thermodynamic equilibrium (i.e., no net flux of ions), and if the cell is permeable to only one ion, then the membrane potential must be equal to the Nernst potential for that ion.
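For example, with representative textbook potassium concentrations (roughly 5 mM outside and 140 mM inside a mammalian cell; the exact values vary by source), the K+ Nernst potential at 37 °C can be computed as follows:

import math

R, F = 8.314462618, 96485.33212  # J/(K mol), C/mol

def nernst_potential(z, c_out, c_in, T=310.15):
    """Equilibrium (Nernst) potential of an ion across a membrane, in volts."""
    return (R * T) / (z * F) * math.log(c_out / c_in)

# Representative textbook K+ concentrations: 5 mM outside, 140 mM inside.
E_K = nernst_potential(z=+1, c_out=5.0, c_in=140.0)
print(f"E_K = {E_K * 1e3:.1f} mV")  # close to -90 mV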
=== Goldman equation ===
When the membrane is permeable to more than one ion, as is inevitably the case, the resting potential can be determined from the Goldman equation, which is a solution of the Goldman–Hodgkin–Katz influx equation under the constraint that the total current density driven by the electrochemical force is zero:
E_\text{m} = \frac{RT}{F} \ln\left(\frac{\sum_i^N P_{\text{M}_i^+} [\text{M}_i^+]_\text{out} + \sum_j^M P_{\text{A}_j^-} [\text{A}_j^-]_\text{in}}{\sum_i^N P_{\text{M}_i^+} [\text{M}_i^+]_\text{in} + \sum_j^M P_{\text{A}_j^-} [\text{A}_j^-]_\text{out}}\right)
where
Em is the membrane potential (in volts, equivalent to joules per coulomb),
Pion is the permeability for that ion (in meters per second),
[ion]out is the extracellular concentration of that ion (in moles per cubic meter, to match the other SI units, though the units strictly don't matter, as the ion concentration terms become a dimensionless ratio),
[ion]in is the intracellular concentration of that ion (in moles per cubic meter),
R is the ideal gas constant (joules per kelvin per mole),
T is the temperature in kelvins,
F is the Faraday's constant (coulombs per mole).
The potential across the cell membrane that exactly opposes net diffusion of a particular ion through the membrane is called the Nernst potential for that ion. As seen above, the magnitude of the Nernst potential is determined by the ratio of the concentrations of that specific ion on the two sides of the membrane. The greater this ratio, the greater the tendency for the ion to diffuse in one direction, and therefore the greater the Nernst potential required to prevent the diffusion. A similar expression exists that includes r (the absolute value of the transport ratio), which takes transporters with unequal exchanges into account. See: sodium-potassium pump, where the transport ratio would be 2/3, so r equals 1.5 in the formula below. The reason a factor r = 1.5 is inserted here is that the current density driven by the electrochemical force Je.c.(Na+) + Je.c.(K+) is no longer zero, but rather Je.c.(Na+) + 1.5Je.c.(K+) = 0 (for both ions, the flux driven by the electrochemical force is compensated by the flux driven by the pump, i.e. Je.c. = −Jpump), altering the constraints for applying the GHK equation. The other variables are the same as above. The following example includes two ions: potassium (K+) and sodium (Na+). Chloride is assumed to be in equilibrium.
E_m = \frac{RT}{F} \ln\left(\frac{r P_\text{K} [\text{K}^+]_\text{out} + P_\text{Na} [\text{Na}^+]_\text{out}}{r P_\text{K} [\text{K}^+]_\text{in} + P_\text{Na} [\text{Na}^+]_\text{in}}\right)
When chloride (Cl−) is taken into account,
E_m = \frac{RT}{F} \ln\left(\frac{r P_\text{K} [\text{K}^+]_\text{out} + P_\text{Na} [\text{Na}^+]_\text{out} + P_\text{Cl} [\text{Cl}^-]_\text{in}}{r P_\text{K} [\text{K}^+]_\text{in} + P_\text{Na} [\text{Na}^+]_\text{in} + P_\text{Cl} [\text{Cl}^-]_\text{out}}\right)
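As a numerical sketch of the two-ion form above (the permeability ratio and concentrations are representative textbook values, not measurements), the resting potential can be estimated in a few lines of Python:

import math

R, F, T = 8.314462618, 96485.33212, 310.15  # SI units; T = 37 C

def goldman_pump(r, P_K, P_Na, K_out, K_in, Na_out, Na_in):
    """Resting potential from the pump-corrected two-ion Goldman equation
    (chloride assumed at equilibrium), in volts."""
    num = r * P_K * K_out + P_Na * Na_out
    den = r * P_K * K_in + P_Na * Na_in
    return (R * T) / F * math.log(num / den)

# Representative textbook values: P_Na/P_K ~ 0.04, r = 1.5 for the
# 3 Na+ : 2 K+ pump stoichiometry; concentrations in mM.
Em = goldman_pump(r=1.5, P_K=1.0, P_Na=0.04,
                  K_out=5.0, K_in=140.0, Na_out=145.0, Na_in=10.0)
print(f"Em = {Em * 1e3:.1f} mV")  # on the order of -70 mV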
== Derivation ==
=== Using Boltzmann factor ===
For simplicity, we will consider a solution of redox-active molecules that undergo a one-electron reversible reaction
Ox + e− ⇌ Red
and that have a standard potential of zero, and in which the activities are well represented by the concentrations (i.e. unit activity coefficient). The chemical potential μc of this solution is the difference between the energy barriers for taking electrons from and for giving electrons to the working electrode that is setting the solution's electrochemical potential. The ratio of oxidized to reduced molecules, [Ox]/[Red], is equivalent to the probability of being oxidized (giving electrons) over the probability of being reduced (taking electrons), which we can write in terms of the Boltzmann factor for these processes:
\frac{[\text{Red}]}{[\text{Ox}]} = \frac{\exp\left(-[\text{barrier for gaining an electron}]/kT\right)}{\exp\left(-[\text{barrier for losing an electron}]/kT\right)} = \exp\left(\frac{\mu_\text{c}}{kT}\right)
Taking the natural logarithm of both sides gives
\mu_\text{c} = kT \ln\frac{[\text{Red}]}{[\text{Ox}]}
If μc ≠ 0 at [Ox]/[Red] = 1, we need to add in this additional constant:
\mu_\text{c} = \mu_\text{c}^\ominus + kT \ln\frac{[\text{Red}]}{[\text{Ox}]}
Dividing the equation by e to convert from chemical potentials to electrode potentials, and remembering that k/e = R/F, we obtain the Nernst equation for the one-electron process Ox + e− ⇌ Red :
E = E^\ominus - \frac{kT}{e} \ln\frac{[\text{Red}]}{[\text{Ox}]} = E^\ominus - \frac{RT}{F} \ln\frac{[\text{Red}]}{[\text{Ox}]}
=== Using thermodynamics (chemical potential) ===
Quantities here are given per molecule, not per mole, and so Boltzmann constant k and the electron charge e are used instead of the gas constant R and Faraday's constant F. To convert to the molar quantities given in most chemistry textbooks, it is simply necessary to multiply by the Avogadro constant: R = kNA and F = eNA. The entropy of a molecule is defined as
S \,\overset{\text{def}}{=}\, k \ln \Omega
where Ω is the number of states available to the molecule. The number of states must vary linearly with the volume V of the system (an idealized system is considered here for better understanding, so that activities are posited very close to the true concentrations). A fundamental statistical proof of this linearity goes beyond the scope of this section, but to see that it is true it is simpler to consider the usual isothermal process for an ideal gas, where the change of entropy ΔS = nR ln(V2/V1) takes place. It follows from the definition of entropy, and from the condition of constant temperature and quantity of gas n, that the change in the number of states must be proportional to the relative change in volume V2/V1. In this sense there is no difference in the statistical properties of ideal gas atoms compared with the dissolved species of a solution with activity coefficients equal to one: particles freely "hang around" filling the provided volume. The volume available per particle is inversely proportional to the concentration c, so we can also write the entropy as
S = k \ln(\text{constant} \times V) = -k \ln(\text{constant} \times c)
The change in entropy from some state 1 to another state 2 is therefore
\Delta S = S_2 - S_1 = -k \ln\frac{c_2}{c_1}
so that the entropy of state 2 is
S_2 = S_1 - k \ln\frac{c_2}{c_1}
If state 1 is at standard conditions, in which c1 is unity (e.g., 1 atm or 1 M), it will merely cancel the units of c2. We can, therefore, write the entropy of an arbitrary molecule A as
S(\text{A}) = S^\ominus(\text{A}) - k \ln[\text{A}]
where S⊖ is the entropy at standard conditions and [A] denotes the concentration of A. The change in entropy for a reaction a A + b B → y Y + z Z is then given by
\Delta S_\text{rxn} = \big(y\,S(\text{Y}) + z\,S(\text{Z})\big) - \big(a\,S(\text{A}) + b\,S(\text{B})\big) = \Delta S_\text{rxn}^\ominus - k \ln\frac{[\text{Y}]^y [\text{Z}]^z}{[\text{A}]^a [\text{B}]^b}
We define the ratio in the last term as the reaction quotient:
Q_r = \frac{\prod_j a_j^{\nu_j}}{\prod_i a_i^{\nu_i}} \approx \frac{[\text{Z}]^z [\text{Y}]^y}{[\text{A}]^a [\text{B}]^b}
where the numerator is a product of reaction product activities, aj, each raised to the power of a stoichiometric coefficient, νj, and the denominator is a similar product of reactant activities. All activities refer to a time t. Under certain circumstances (see chemical equilibrium) each activity term such as aj^νj may be replaced by a concentration term, [A].
In an electrochemical cell, the cell potential E is the chemical potential available from redox reactions (E = μc/e). E is related to the Gibbs free energy change ΔG only by a constant: ΔG = −zFE, where z is the number of electrons transferred and F is the Faraday constant. There is a negative sign because a spontaneous reaction has a negative Gibbs free energy ΔG and a positive potential E. The Gibbs free energy is related to the entropy by G = H − TS, where H is the enthalpy and T is the temperature of the system. Using these relations, we can now write the change in Gibbs free energy,
\Delta G = \Delta H - T\Delta S = \Delta G^\ominus + kT \ln Q_r
and the cell potential,
E = E^\ominus - \frac{kT}{ze} \ln Q_r
This is the more general form of the Nernst equation.
For the redox reaction Ox + z e− → Red,
Q_r = \frac{[\text{Red}]}{[\text{Ox}]}
and we have:
E = E^\ominus - \frac{kT}{ze} \ln\frac{[\text{Red}]}{[\text{Ox}]} = E^\ominus - \frac{RT}{zF} \ln\frac{[\text{Red}]}{[\text{Ox}]} = E^\ominus - \frac{RT}{zF} \ln Q_r
The cell potential at standard temperature and pressure (STP) E⊖ is often replaced by the formal potential E⊖′, which includes the activity coefficients of the dissolved species under given experimental conditions (T, P, ionic strength, pH, and complexing agents) and is the potential that is actually measured in an electrochemical cell.
== Relation to the chemical equilibrium ==
The standard Gibbs free energy ΔG⊖ is related to the equilibrium constant K as follows:
\Delta G^\ominus = -RT \ln K
At the same time, ΔG⊖ is also equal to the product of the total charge (zF) transferred during the reaction and the cell potential (E⊖cell):
\Delta G^\ominus = -zFE_\text{cell}^\ominus
The sign is negative because the considered system performs work and thus releases energy.
So,
-zFE_\text{cell}^\ominus = -RT \ln K
And therefore:
E_\text{cell}^\ominus = \frac{RT}{zF} \ln K
Starting from the Nernst equation, one can also demonstrate the same relationship in the reverse way.
At chemical equilibrium, or thermodynamic equilibrium, the cell potential E = 0, and therefore the reaction quotient Qr attains the special value known as the equilibrium constant Keq:
Qr = Keq
Therefore,
0 = E^\ominus - \frac{RT}{zF} \ln K
\frac{RT}{zF} \ln K = E^\ominus
\ln K = \frac{zF E^\ominus}{RT}
Or at standard state,
\log_{10} K = \frac{z E^\ominus}{\lambda V_T} = \frac{z E^\ominus}{0.05916\ \text{V}} \quad \text{at } T = 298.15\ \text{K}
We have thus related the standard electrode potential and the equilibrium constant of a redox reaction.
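As a numerical check (using the familiar Daniell cell, Zn + Cu2+ → Zn2+ + Cu, with E⊖ ≈ +1.10 V and z = 2), the equilibrium constant follows directly:

import math

R, F, T = 8.314462618, 96485.33212, 298.15

def equilibrium_constant(E0_cell, z):
    """K from the standard cell potential: ln K = z F E0 / (R T)."""
    return math.exp(z * F * E0_cell / (R * T))

# Daniell cell: E0 about +1.10 V, z = 2 electrons transferred.
print(f"K = {equilibrium_constant(1.10, 2):.3e}")  # a huge number, ~1e37

The enormous value of K confirms that the reaction goes essentially to completion.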
== Limitations ==
In dilute solutions, the Nernst equation can be expressed directly in terms of concentrations (since activity coefficients are close to unity). But at higher concentrations, the true activities of the ions must be used. This complicates the use of the Nernst equation, since estimating the non-ideal activities of ions generally requires experimental measurements. The Nernst equation also applies only when there is no net current flow through the electrode. The activity of ions at the electrode surface changes when there is current flow, and additional overpotential and resistive-loss terms contribute to the measured potential.
At very low concentrations of the potential-determining ions, the potential predicted by the Nernst equation approaches ±∞. This is physically meaningless because, under such conditions, the exchange current density becomes very low, and the thermodynamic equilibrium necessary for the Nernst equation to hold may not exist. The electrode is called unpoised in such a case. Other effects tend to take control of the electrochemical behavior of the system, like the involvement of the solvated electron in electricity transfer and electrode equilibria, as analyzed by Alexander Frumkin and B. Damaskin, Sergio Trasatti, etc.
=== Time dependence of the potential ===
The expression of time dependence has been established by Karaoglanoff.
== Significance in other scientific fields ==
The Nernst equation has been involved in the scientific controversy about cold fusion. Fleischmann and Pons, claiming that cold fusion could exist, calculated that a palladium cathode immersed in a heavy water electrolysis cell could achieve up to 10^27 atmospheres of pressure inside the crystal lattice of the metal of the cathode, enough pressure to cause spontaneous nuclear fusion. In reality, only 10,000–20,000 atmospheres were achieved. The American physicist John R. Huizenga claimed their original calculation was affected by a misinterpretation of the Nernst equation. He cited a paper about Pd–Zr alloys.
The Nernst equation allows the calculation of the extent of reaction between two redox systems and can be used, for example, to assess whether a particular reaction will go to completion or not. At chemical equilibrium, the electromotive forces (emf) of the two half cells are equal. This allows the equilibrium constant K of the reaction to be calculated and hence the extent of the reaction.
== See also ==
Concentration cell
Dependency of reduction potential on pH
Electrode potential
Galvanic cell
Goldman equation
Membrane potential
Nernst–Planck equation
Pourbaix diagram
Reduction potential
Solvated electron
Standard electrode potential
Standard electrode potential (data page)
Standard apparent reduction potentials in biochemistry at pH 7 (data page)
== References ==
== External links ==
Nernst/Goldman Equation Simulator
Nernst Equation Calculator
Interactive Nernst/Goldman Java Applet
DoITPoMS Teaching and Learning Package- "The Nernst Equation and Pourbaix Diagrams"
"20.5: Gibbs energy and redox reactions". Chemistry LibreTexts. 2014-11-18. Retrieved 2021-12-06. | Wikipedia/Nernst_equation |
Infrared thermography (IRT), thermal video or thermal imaging, is a process in which a thermal camera captures and creates an image of an object using the infrared radiation emitted from the object; it is an example of infrared imaging science. Thermographic cameras usually detect radiation in the long-infrared range of the electromagnetic spectrum (roughly 9,000–14,000 nanometers or 9–14 μm) and produce images of that radiation, called thermograms. Since infrared radiation is emitted by all objects with a temperature above absolute zero, according to the black body radiation law, thermography makes it possible to see one's environment with or without visible illumination. The amount of radiation emitted by an object increases with temperature; therefore, thermography allows one to see variations in temperature. When viewed through a thermal imaging camera, warm objects stand out well against cooler backgrounds; humans and other warm-blooded animals become easily visible against the environment, day or night. As a result, thermography is particularly useful to the military and other users of surveillance cameras.
Some physiological changes in human beings and other warm-blooded animals can also be monitored with thermal imaging during clinical diagnostics. Thermography is used in allergy detection and veterinary medicine. Some alternative medicine practitioners promote its use for breast screening, despite the FDA warning that "those who opt for this method instead of mammography may miss the chance to detect cancer at its earliest stage". Government and airport personnel used thermography to detect suspected swine flu cases during the 2009 pandemic.
Thermography has a long history, although its use has increased dramatically with the commercial and industrial applications of the past fifty years. Firefighters use thermography to see through smoke, to find persons, and to localize the base of a fire. Maintenance technicians use thermography to locate overheating joints and sections of power lines, which are a sign of impending failure. Building construction technicians can see thermal signatures that indicate heat leaks in faulty thermal insulation and can use the results to improve the efficiency of heating and air-conditioning units.
The appearance and operation of a modern thermographic camera is often similar to a camcorder. Often the live thermogram reveals temperature variations so clearly that a photograph is not necessary for analysis. A recording module is therefore not always built-in.
Specialized thermal imaging cameras use focal plane arrays (FPAs) that respond to longer wavelengths (mid- and long-wavelength infrared). The most common types are InSb, InGaAs, HgCdTe and QWIP FPA. The newest technologies use low-cost, uncooled microbolometers as FPA sensors. Their resolution is considerably lower than that of optical cameras, mostly 160×120 or 320×240 pixels, up to 1280 × 1024 for the most expensive models. Thermal imaging cameras are much more expensive than their visible-spectrum counterparts, and higher-end models are often export-restricted due to the military uses for this technology. Older bolometers or more sensitive models such as InSb require cryogenic cooling, usually by a miniature Stirling cycle refrigerator or liquid nitrogen.
== Thermal energy ==
Thermal images, or thermograms, are visual displays of the total infrared energy emitted, transmitted, and reflected by an object. Because there are multiple sources of the infrared energy, it is sometimes difficult to get an accurate temperature of an object using this method. A thermal imaging camera uses processing algorithms to reconstruct a temperature image. Note that the image shows an approximation of the temperature of an object, as the camera integrates multiple sources of data in the areas surrounding the object to estimate its temperature.
This phenomenon may become clearer upon consideration of the formula:
Incident Radiant Power = Emitted Radiant Power + Transmitted Radiant Power + Reflected Radiant Power;
where incident radiant power is the radiant power profile when viewed through a thermal imaging camera;
emitted radiant power is generally what is intended to be measured;
transmitted radiant power is the radiant power that passes through the subject from a remote thermal source; and
reflected radiant power is the amount of radiant power that reflects off the surface of the object from a remote thermal source.
This phenomenon occurs everywhere, all the time. It is a process known as radiant heat exchange, since radiant power × time equals radiant energy. However, in the case of infrared thermography, the above equation is used to describe the radiant power within the spectral wavelength passband of the thermal imaging camera in use. The radiant heat exchange requirements described in the equation apply equally at every wavelength in the electromagnetic spectrum.
If the object is radiating at a higher temperature than its surroundings, then power transfers from warm to cold, following the principle stated in the second law of thermodynamics. So if there is a cool area in the thermogram, that object will be absorbing radiation emitted by surrounding warm objects.
The ability of an object to emit radiation is called emissivity; its ability to absorb radiation is called absorptivity. In outdoor environments, convective cooling from wind may also need to be considered when trying to get an accurate temperature reading.
== Emissivity ==
Emissivity (or emissivity coefficient) represents a material's ability to emit thermal radiation; it is an optical property of matter. A material's emissivity can theoretically range from 0 (completely non-emitting) to 1 (completely emitting). An example of a substance with low emissivity is silver, with an emissivity coefficient of 0.02. An example of a substance with high emissivity is asphalt, with an emissivity coefficient of 0.98.
A black body is a theoretical object with an emissivity of 1 that radiates thermal radiation characteristic of its contact temperature. That is, if the contact temperature of a thermally uniform black body radiator were 50 °C (122 °F), it would emit the characteristic black-body radiation of 50 °C (122 °F). An ordinary object emits less infrared radiation than a theoretical black body. In other words, the ratio of the actual emission to the maximum theoretical emission is an object's emissivity.
Each material has a different emissivity which may vary by temperature and infrared wavelength. For example, clean metal surfaces have emissivity that decreases at longer wavelengths; many dielectric materials, such as quartz (SiO2), sapphire (Al2O3), calcium fluoride (CaF2), etc. have emissivity that increases at longer wavelength; simple oxides, such as iron oxide (Fe2O3) display relatively flat emissivity in the infrared spectrum.
== Measurement ==
A thermal imaging camera requires a series of mathematical algorithms to build a visible image, since the camera is only able to see electromagnetic radiation invisible to the human eye. The output image can be in JPG or any other image format.
The spectrum and amount of thermal radiation depend strongly on an object's surface temperature. This enables thermal imaging of an object's temperature. However, other factors also influence the received radiation, which limits the accuracy of this technique: for example, the emissivity of the object.
For a non-contact temperature measurement, the emissivity setting needs to be set properly. An object of low emissivity could have its temperature underestimated by the detector, since it only detects emitted infrared rays. For a quick estimate, a thermographer may refer to an emissivity table for a given type of object, and enter that value into the imager. It would then calculate the object's contact temperature based on the entered emissivity and the infrared radiation as detected by the imager.
For a more accurate measurement, a thermographer may apply a standard material of known, high emissivity to the surface of the object. The standard material might be an industrial emissivity spray produced specifically for the purpose, or as simple as standard black insulation tape, with an emissivity of about 0.97. The object's known temperature can then be measured using the standard emissivity. If desired, the object's actual emissivity (on a part of the object not covered by the standard material) can be determined by adjusting the imager's setting to the known temperature. There are situations, however, when such an emissivity test is not possible due to dangerous or inaccessible conditions; in those cases the thermographer must rely on tables.
Other variables can affect the measurement, including absorption by and the ambient temperature of the transmitting medium (usually air). Surrounding infrared radiation can also be reflected off the object. All these factors affect the calculated temperature of the object being viewed.
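These corrections can be illustrated with a simplified radiometric model that treats the detected signal as the sum of an emitted and a reflected component and ignores atmospheric absorption. The function below is an illustrative sketch, not any camera vendor's API:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def object_temperature(w_detected: float, emissivity: float,
                       t_reflected: float) -> float:
    """Invert W = eps*sigma*T_obj^4 + (1 - eps)*sigma*T_refl^4 for T_obj.
    Temperatures in kelvin; atmospheric absorption is ignored."""
    emitted = w_detected - (1 - emissivity) * SIGMA * t_reflected ** 4
    return (emitted / (emissivity * SIGMA)) ** 0.25

# A 320 K object with emissivity 0.95 in 293 K surroundings:
w = 0.95 * SIGMA * 320 ** 4 + 0.05 * SIGMA * 293 ** 4
print(object_temperature(w, 0.95, 293))  # ~320.0 K with the correct setting
print(object_temperature(w, 1.00, 293))  # ~318.8 K: emissivity set too high
```

The second call reproduces the underestimation described above: assuming a higher emissivity than the surface really has attributes too much of the signal to emission and reports the object as cooler than it is.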
=== Color scale ===
Images from infrared cameras tend to be monochrome because the cameras generally use an image sensor that does not distinguish different wavelengths of infrared radiation. Color image sensors require a complex construction to differentiate wavelengths, and color has less meaning outside of the normal visible spectrum because the differing wavelengths do not map uniformly into the color vision system used by humans.
Sometimes these monochromatic images are displayed in pseudo-color, where changes in color are used rather than changes in intensity to display changes in the signal. This technique, called density slicing, is useful because although humans have much greater dynamic range in intensity detection than color overall, the ability to see fine intensity differences in bright areas is fairly limited.
In temperature measurement the brightest (warmest) parts of the image are customarily colored white, intermediate temperatures reds and yellows, and the dimmest (coolest) parts black. A scale should be shown next to a false color image to relate colors to temperatures.
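A density-sliced false-color display is essentially a lookup from scalar intensity to a color palette. The sketch below uses matplotlib on a synthetic frame; the data and colormap choice are purely illustrative:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic "thermal" frame: a warm blob on a cool background.
y, x = np.mgrid[0:120, 0:160]
frame = 20 + 15 * np.exp(-((x - 80) ** 2 + (y - 60) ** 2) / 400.0)

# Density slicing: map scalar temperatures onto a false-color palette.
plt.imshow(frame, cmap="inferno")         # bright = warm, dark = cool
plt.colorbar(label="Temperature (degC)")  # the scale relating color to value
plt.show()
```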
== Cameras ==
A thermographic camera (also called an infrared camera or thermal imaging camera, thermal camera or thermal imager) is a device that creates an image using infrared (IR) radiation, similar to a normal camera that forms an image using visible light. Instead of the 400–700 nanometre (nm) range of the visible light camera, infrared cameras are sensitive to wavelengths from about 1,000 nm (1 micrometre or μm) to about 14,000 nm (14 μm). The practice of capturing and analyzing the data they provide is called thermography.
Thermal cameras convert the energy in the far infrared wavelengths into a visible light display. All objects above absolute zero emit thermal infrared energy, so thermal cameras can passively see all objects, regardless of ambient light. However, most thermal cameras only detect objects warmer than −50 °C (−58 °F).
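Wien's displacement law indicates why this wavelength range suits thermal imaging: the peak of black-body emission lies at λ_max = b/T, which for terrestrial temperatures falls inside the 1–14 μm band these cameras cover. A quick check in Python:

```python
WIEN_B = 2.897771955e-3  # Wien's displacement constant, m*K

def peak_wavelength_um(temp_kelvin: float) -> float:
    """Wavelength of peak black-body emission, in micrometres."""
    return WIEN_B / temp_kelvin * 1e6

print(peak_wavelength_um(293))   # room-temperature scene: ~9.9 um
print(peak_wavelength_um(223))   # a -50 degC object: ~13.0 um
print(peak_wavelength_um(2273))  # a 2000 degC furnace: ~1.3 um
```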
Some specification parameters of an infrared camera system are number of pixels, frame rate, responsivity, noise-equivalent power, noise-equivalent temperature difference (NETD), spectral band, distance-to-spot ratio (D:S), minimum focus distance, sensor lifetime, minimum resolvable temperature difference (MRTD), field of view, dynamic range, input power, and mass and volume.
Their resolution is considerably lower than that of optical cameras, often around 160×120 or 320×240 pixels, although more expensive ones can achieve a resolution of 1280×1024 pixels. Thermographic cameras are much more expensive than their visible-spectrum counterparts, though low-performance add-on thermal cameras for smartphones became available for hundreds of US dollars in 2014.
=== Types ===
Thermographic cameras can be broadly divided into two types: those with cooled infrared image detectors and those with uncooled detectors.
==== Cooled infrared detectors ====
Cooled detectors are typically contained in a vacuum-sealed case or Dewar and cryogenically cooled. Cooling is necessary for the operation of the semiconductor materials used. Typical operating temperatures range from 4 K (−269 °C) to just below room temperature, depending on the detector technology. Most modern cooled detectors operate in the 60 kelvin (K) to 100 K range (−213 to −173 °C), depending on type and performance level.
Without cooling, these sensors (which detect and convert light in much the same way as common digital cameras, but are made of different materials) would be 'blinded' or flooded by their own radiation. The drawbacks of cooled infrared cameras are that they are expensive both to produce and to run. Cooling is both energy-intensive and time-consuming.
The camera may need several minutes to cool down before it can begin working. The most commonly used cooling systems are Peltier coolers which, although inefficient and limited in cooling capacity, are relatively simple and compact. To obtain better image quality, or for imaging low-temperature objects, Stirling cryocoolers are needed. Although the cooling apparatus may be comparatively bulky and expensive, cooled infrared cameras provide greatly superior image quality compared to uncooled ones, particularly of objects near or below room temperature. Additionally, the greater sensitivity of cooled cameras allows the use of higher F-number lenses, making high-performance long-focal-length lenses both smaller and cheaper for cooled detectors.
An alternative to Stirling coolers is to use gases bottled at high pressure, nitrogen being a common choice. The pressurised gas is expanded via a micro-sized orifice and passed over a miniature heat exchanger resulting in regenerative cooling via the Joule–Thomson effect. For such systems the supply of pressurized gas is a logistical concern for field use.
Materials used for cooled infrared detection include photodetectors based on a wide range of narrow gap semiconductors including indium antimonide (3-5 μm), indium arsenide, mercury cadmium telluride (MCT) (1-2 μm, 3-5 μm, 8-12 μm), lead sulfide, and lead selenide. Infrared photodetectors can also be created with structures of high bandgap semiconductors such as in quantum well infrared photodetectors.
Cooled bolometer technologies can be superconducting or non-superconducting. Superconducting detectors offer extreme sensitivity, with some able to register individual photons, as in ESA's Superconducting Camera (SCAM); however, they are not in regular use outside of scientific research. In principle, superconducting tunneling junction devices could be used as infrared sensors because of their very narrow gap. Small arrays have been demonstrated, but they have not been broadly adopted because their high sensitivity requires careful shielding from background radiation.
==== Uncooled infrared detectors ====
Uncooled thermal cameras use a sensor operating at ambient temperature, or a sensor stabilized at a temperature close to ambient using small temperature control elements. Modern uncooled detectors all use sensors that work by the change of resistance, voltage or current when heated by infrared radiation. These changes are then measured and compared to the values at the operating temperature of the sensor.
In uncooled detectors the temperature differences at the sensor pixels are minute; a 1 °C difference at the scene induces just a 0.03 °C difference at the sensor. The pixel response time is also fairly slow, in the range of tens of milliseconds.
Uncooled infrared sensors can be stabilized to an operating temperature to reduce image noise, but they are not cooled to low temperatures and do not require bulky, expensive, energy consuming cryogenic coolers. This makes infrared cameras smaller and less costly. However, their resolution and image quality tend to be lower than cooled detectors. This is due to differences in their fabrication processes, limited by currently available technology. An uncooled thermal camera also needs to deal with its own heat signature.
Uncooled detectors are mostly based on pyroelectric and ferroelectric materials or microbolometer technology. These materials are used to form pixels with highly temperature-dependent properties, which are thermally insulated from the environment and read electronically.
Ferroelectric detectors operate close to the phase transition temperature of the sensor material; the pixel temperature is read as the highly temperature-dependent polarization charge. The achieved NETD of ferroelectric detectors with f/1 optics and 320×240 sensors is 70–80 mK. A possible sensor assembly consists of barium strontium titanate bump-bonded via a thermally insulating polyimide connection.
Silicon microbolometers can reach an NETD down to 20 mK. They consist of a layer of amorphous silicon, or a thin-film vanadium(V) oxide sensing element, suspended on a silicon nitride bridge above the silicon-based scanning electronics. The electrical resistance of the sensing element is measured once per frame.
Current improvements of uncooled focal plane arrays (UFPA) are focused primarily on higher sensitivity and pixel density. In 2013 DARPA announced a five-micron LWIR camera that uses a 1280 × 720 focal plane array (FPA).
Some of the materials used for the sensor arrays are amorphous silicon (a-Si), vanadium(V) oxide (VOx), lanthanum barium manganite (LBMO), lead zirconate titanate (PZT), lanthanum doped lead zirconate titanate (PLZT), lead scandium tantalate (PST), lead lanthanum titanate (PLT), lead titanate (PT), lead zinc niobate (PZN), lead strontium titanate (PSrT), barium strontium titanate (BST), barium titanate (BT), antimony sulfoiodide (SbSI), and polyvinylidene difluoride (PVDF).
=== CCD and CMOS thermography ===
Non-specialized charge-coupled device (CCD) and CMOS sensors have most of their spectral sensitivity in the visible light wavelength range. However, by utilizing the "trailing" area of their spectral sensitivity, namely the part of the infrared spectrum called near-infrared (NIR), and by using an off-the-shelf CCTV camera, it is possible under certain circumstances to obtain true thermal images of objects with temperatures at about 280 °C (536 °F) and higher.
At temperatures of 600 °C and above, inexpensive cameras with CCD and CMOS sensors have also been used for pyrometry in the visible spectrum. They have been used for soot in flames, burning coal particles, heated materials, SiC filaments, and smoldering embers. This pyrometry has been performed using external filters or only the sensor's Bayer filters. It has been performed using color ratios, grayscales, and/or a hybrid of both.
=== Infrared films ===
Infrared (IR) film is sensitive to black-body radiation in the 250 to 500 °C (482 to 932 °F) range, while the range of thermography is approximately −50 to 2,000 °C (−58 to 3,632 °F). So, for an IR film to work thermographically, the measured object must be over 250 °C (482 °F) or be reflecting infrared radiation from something that is at least that hot.
=== Comparison with night-vision devices ===
Starlight-type night-vision devices generally only magnify ambient light and are not thermal imagers.
Some infrared cameras marketed as night vision are sensitive to near-infrared just beyond the visual spectrum, and can see emitted or reflected near-infrared in complete visual darkness. However, these are not usually used for thermography due to the high equivalent black-body temperature required, but are instead used with active near-IR illumination sources.
== Passive vs. active thermography ==
All objects above the absolute zero temperature (0 K) emit infrared radiation. Hence, an excellent way to measure thermal variations is to use an infrared sensing device, usually a focal plane array (FPA) infrared camera capable of detecting radiation in the mid (3 to 5 μm) and long (7 to 14 μm) wave infrared bands, denoted as MWIR and LWIR, corresponding to two of the high transmittance infrared windows. Abnormal temperature profiles at the surface of an object are an indication of a potential problem.
In passive thermography, the features of interest are naturally at a higher or lower temperature than the background. Passive thermography has many applications such as surveillance of people on a scene and medical diagnosis (specifically thermology).
In active thermography, an energy source is required to produce a thermal contrast between the feature of interest and the background. The active approach is necessary in many cases, given that the inspected parts are usually in equilibrium with their surroundings. Given the superlinear dependence of black-body radiation on temperature, active thermography can also be used to enhance the resolution of imaging systems beyond their diffraction limit or to achieve super-resolution microscopy.
== Advantages ==
Thermography produces a visual picture, so temperatures over a large area can be compared. It can capture moving targets in real time and can reveal deterioration, i.e., higher-temperature components, prior to their failure. It can be used to measure or observe areas that are inaccessible or hazardous for other methods, and it is a non-destructive test method. It can find defects in shafts, pipes, and other metal or plastic parts, and can detect objects in dark areas. It also has some medical applications, essentially in physiotherapy.
== Limitations and disadvantages ==
Quality thermographic cameras often have a high price (often US$3,000 or more) due to the expense of the larger pixel array (state of the art 1280×1024), although less expensive models (with pixel arrays of 40×40 up to 160×120 pixels) are also available. Fewer pixels than in traditional cameras reduce the image quality, making it more difficult to distinguish nearby targets within the same field of view.
There is also a difference in refresh rate: some cameras may refresh at only 5–15 Hz, while others (e.g. the FLIR X8500sc) reach 180 Hz or more when operated in reduced-window mode.
The lens may or may not be integrated.
Many models do not provide the irradiance measurements used to construct the output image; without correct calibration for emissivity, distance, ambient temperature and relative humidity, the resulting images are inherently inaccurate measurements of temperature.
Images can be difficult to interpret accurately when based upon certain objects, specifically objects with erratic temperatures, although this problem is reduced in active thermal imaging.
Thermographic cameras create thermal images based on the radiant heat energy they receive. Because radiation levels are influenced by the emissivity of the surface being measured and by reflected radiation such as sunlight, errors are introduced into the measurements.
Most cameras have ±2% accuracy or worse in measurement of temperature and are not as accurate as contact methods.
Methods and instruments are limited to directly detecting surface temperatures.
== Applications ==
Thermography finds many uses, and thermal imaging cameras are excellent tools for the maintenance of electrical and mechanical systems in industry and commerce. For example, firefighters use it to see through smoke, find people, and localize hotspots of fires. Power line maintenance technicians locate overheating joints and parts, a telltale sign of their failure, to eliminate potential hazards. Where thermal insulation becomes faulty, building construction technicians can locate heat leaks and improve the efficiency of heating and air-conditioning.
By using proper camera settings, electrical systems can be scanned and problems can be found. Faults with steam traps in steam heating systems are easy to locate.
In the energy savings area, thermal imaging cameras can see the effective radiation temperature of an object as well as what that object is radiating towards, which can help locate sources of thermal leaks and overheated regions.
Cooled infrared cameras can be found at major astronomy research telescopes, even those that are not infrared telescopes. Examples include UKIRT, the Spitzer Space Telescope, WISE and the James Webb Space Telescope.
For automotive night vision, thermal imaging cameras are also installed in some luxury cars to aid the driver; the first was the 2000 Cadillac DeVille.
In smartphones, a thermal camera was first integrated into the Cat S60 in 2016.
=== Industry ===
In manufacturing, engineering and research, thermography can be used for:
Process control
Research and development of new products
Condition monitoring
Electrical distribution equipment diagnosis and maintenance, such as transformer yards and distribution panels
Nondestructive testing
Fault diagnosis and troubleshooting
Program process monitoring
Quality control in production environments
Predictive maintenance (early failure warning) on mechanical and electrical equipment
Data center monitoring
Inspecting photovoltaic power plants
In building inspection, thermography can be used in:
Roof inspection, such as for low slope and flat roofing
Building diagnostics, including building envelope inspections, and energy losses in buildings
Locating pest infestations
Energy auditing of building insulation and detection of refrigerant leaks
Home performance
Moisture detection in walls and roofs (and thus in turn often part of mold remediation)
Masonry wall structural analysis
=== Health ===
Some physiological activities, particularly responses such as fever, in human beings and other warm-blooded animals can also be monitored with non-contact thermography. This can be compared to contact thermography such as with traditional thermometers.
Healthcare-related uses include:
Dynamic angiothermography
Peripheral vascular disease screening.
Medical imaging in infrared
Thermography (medical) - Medical testing for diagnosis
Carotid artery stenosis (CAS) screening through skin thermal maps.
Active Dynamic Thermography (ADT) for medical applications.
Neuromusculoskeletal disorders.
Extracranial cerebral and facial vascular disease.
Facial emotion recognition.
Thyroid gland abnormalities.
Various other neoplastic, metabolic, and inflammatory conditions.
=== Security and defence ===
Thermography is often used in surveillance, security, firefighting, law enforcement, and anti-terrorism:
Quarantine monitoring of visitors to a country
Technical surveillance counter-measures
Search and rescue operations
Firefighting operations
UAV surveillance
In weapons systems, thermography can be used in military and police target detection and acquisition:
Forward-looking infrared
Infrared search and track
Night vision
Infrared targeting
Thermal weapon sight
In computer hacking, a thermal attack is an approach that exploits heat traces left after interacting with interfaces, such as touchscreens or keyboards, to uncover the user's input.
=== Other applications ===
Other areas in which these techniques are used:
Thermal mapping
Archaeological kite aerial thermography
Thermology
Veterinary thermal imaging
Thermal imaging in ornithology and other wildlife monitoring
Nighttime wildlife photography
Stereo vision
Chemical imaging
Volcanology
Agriculture, e.g., Seed-counting machine
Baby monitoring systems
Pollution effluent detection
Aerial archaeology
Flame detector
Meteorology (thermal images from weather satellites are used to determine cloud temperature/height and water vapor concentrations, depending on the wavelength)
Cricket Umpire Decision Review System, to detect faint contact of the ball with the bat (and hence a heat patch signature on the bat after contact)
Autonomous navigation
== Standards ==
ASTM International (ASTM)
ASTM C1060, Standard Practice for Thermographic Inspection of Insulation Installations in Envelope Cavities of Frame Buildings
ASTM C1153, Standard Practice for the Location of Wet Insulation in Roofing Systems Using Infrared Imaging
ASTM D4788, Standard Test Method for Detecting Delamination in Bridge Decks Using Infrared Thermography
ASTM E1186, Standard Practices for Air Leakage Site Detection in Building Envelopes and Air Barrier Systems
ASTM E1934, Standard Guide for Examining Electrical and Mechanical Equipment with Infrared Thermography
International Organization for Standardization (ISO)
ISO 6781, Thermal insulation – Qualitative detection of thermal irregularities in building envelopes – Infrared method
ISO 18434-1, Condition monitoring and diagnostics of machines – Thermography – Part 1: General procedures
ISO 18436-7, Condition monitoring and diagnostics of machines – Requirements for qualification and assessment of personnel – Part 7: Thermography
=== Regulation ===
Higher-end thermographic cameras are often deemed dual-use military grade equipment, and are export-restricted, particularly if the resolution is 640×480 or greater, unless the refresh rate is 9 Hz or less. The export from the USA of specific thermal cameras is regulated by International Traffic in Arms Regulations.
== In biology ==
Thermography, by strict definition, is a measurement using an instrument, but some living creatures have natural organs that function as counterparts to bolometers, and thus possess a crude type of thermal imaging capability. This is called thermoception. One of the best known examples is infrared sensing in snakes.
== History ==
=== Discovery and research of infrared radiation ===
Infrared was discovered in 1800 by Sir William Herschel as a form of radiation beyond red light. These "infrared rays" (infra is the Latin prefix for "below") were used mainly for thermal measurement. There are four basic laws of IR radiation: Kirchhoff's law of thermal radiation, the Stefan–Boltzmann law, Planck's law, and Wien's displacement law. The development of detectors was mainly focused on the use of thermometers and bolometers until World War I. A significant step in the development of detectors occurred in 1829, when Leopoldo Nobili, using the Seebeck effect, created the first known thermocouple, fabricating an improved thermometer, a crude thermopile. He described this instrument to Macedonio Melloni. Initially, they jointly developed a greatly improved instrument. Subsequently, Melloni worked alone, creating an instrument in 1833 (a multielement thermopile) that could detect a person 10 metres away. The next significant step in improving detectors was the bolometer, invented in 1880 by Samuel Pierpont Langley. Langley and his assistant Charles Greeley Abbot continued to make improvements in this instrument. By 1901, it could detect radiation from a cow 400 metres away and was sensitive to temperature differences of one hundred-thousandth of a degree Celsius (0.00001 °C). The first commercial thermal imaging camera was sold in 1965 for high-voltage power line inspections.
The first civil sector application of IR technology may have been a device to detect the presence of icebergs and steamships using a mirror and thermopile, patented in 1913. This was soon outdone by the first accurate IR iceberg detector, which did not use thermopiles, patented in 1914 by R.D. Parker. This was followed by G.A. Barker's proposal to use the IR system to detect forest fires in 1934. The technique was not genuinely industrialized until it was used to analyze heating uniformity in hot steel strips in 1935.
=== First thermographic camera ===
In 1929, Hungarian physicist Kálmán Tihanyi invented the infrared-sensitive (night vision) electronic television camera for anti-aircraft defense in Britain. The first American thermographic camera developed was an infrared line scanner. This was created by the US military and Texas Instruments in 1947 and took one hour to produce a single image. While several approaches were investigated to improve the speed and accuracy of the technology, one of the most crucial factors dealt with scanning an image, which the AGA company was able to commercialize using a cooled photoconductor.
The first British infrared linescan system was Yellow Duckling of the mid-1950s. This used a continuously rotating mirror and detector, with Y-axis scanning by the motion of the carrier aircraft. Although unsuccessful in its intended application of submarine tracking by wake detection, it was applied to land-based surveillance and became the foundation of military IR linescan.
This work was further developed at the Royal Signals and Radar Establishment in the UK when they discovered that mercury cadmium telluride was a photoconductor that required much less cooling. Honeywell in the United States also developed arrays of detectors that could cool at a lower temperature, but they scanned mechanically. This method had several disadvantages which could be overcome using an electronic scanning system. In 1969 Michael Francis Tompsett at English Electric Valve Company in the UK patented a camera that scanned pyro-electronically and which reached a high level of performance after several other breakthroughs during the 1970s. Tompsett also proposed an idea for solid-state thermal-imaging arrays, which eventually led to modern hybridized single-crystal-slice imaging devices.
By using video camera tubes such as vidicons with a pyroelectric material such as triglycine sulfate (TGS) as their targets, a vidicon sensitive over a broad portion of the infrared spectrum is possible. This technology was a precursor to modern microbolometer technology, and mainly used in firefighting thermal cameras.
=== Smart sensors ===
One of the essential areas of development for security systems was the ability to intelligently evaluate a signal, as well as warn of a threat's presence. Under the encouragement of the US Strategic Defense Initiative, "smart sensors" began to appear. These are sensors that integrate sensing, signal extraction, processing, and comprehension. There are two main types of smart sensors. One, similar to what is called a "vision chip" when used in the visible range, allows for preprocessing using smart sensing techniques enabled by the growth of integrated microcircuitry. The other technology is more oriented to a specific use and fulfills its preprocessing goal through its design and structure.
Towards the end of the 1990s, the use of infrared was moving towards civilian use. There was a dramatic lowering of costs for uncooled arrays, which along with the significant increase in developments, led to a dual-use market encompassing both civilian and military uses. These uses include environmental control, building/art analysis, functional medical diagnostics, and car guidance and collision avoidance systems.
== See also ==
== References ==
== External links ==
Infrared Tube, infrared imaging science demonstrations
Compix, Some uses of thermographic images in electronics
Thermographic Images, Infrared pictures
Uncooled thermal imaging works round the clock by Lawrence Mayes
Archaeological aerial thermography
IR Thermometry & Thermography Applications Repository | Wikipedia/Thermography |
Design optimization is an engineering design methodology using a mathematical formulation of a design problem to support selection of the optimal design among many alternatives. Design optimization involves the following stages:
Variables: Describe the design alternatives
Objective: Selected functional combination of variables (to be maximized or minimized)
Constraints: Combinations of variables expressed as equalities or inequalities that must be satisfied for any acceptable design alternative
Feasibility: Values for the set of variables that satisfy all constraints and minimize/maximize the objective
== Design optimization problem ==
The formal mathematical (standard form) statement of the design optimization problem is
{\displaystyle {\begin{aligned}&{\operatorname {minimize} }&&f(x)\\&\operatorname {subject\;to} &&h_{i}(x)=0,\quad i=1,\dots ,m_{1}\\&&&g_{j}(x)\leq 0,\quad j=1,\dots ,m_{2}\\&\operatorname {and} &&x\in X\subseteq R^{n}\end{aligned}}}
where
{\displaystyle x} is a vector of n real-valued design variables {\displaystyle x_{1},x_{2},...,x_{n}}
{\displaystyle f(x)} is the objective function
{\displaystyle h_{i}(x)} are the {\displaystyle m_{1}} equality constraints
{\displaystyle g_{j}(x)} are the {\displaystyle m_{2}} inequality constraints
{\displaystyle X} is a set constraint that includes additional restrictions on {\displaystyle x} besides those implied by the equality and inequality constraints.
The problem formulation stated above is a convention called the negative null form, since all constraint functions are expressed as equalities and negative inequalities with zero on the right-hand side. This convention is used so that numerical algorithms developed to solve design optimization problems can assume a standard expression of the mathematical problem.
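As an illustrative sketch (not part of the standard form itself), the negative null form maps directly onto numerical solvers such as SciPy's `minimize`. Note that SciPy's own inequality convention is g(x) ≥ 0, so a negative null form constraint g(x) ≤ 0 must be passed negated:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2  # objective
h = lambda x: x[0] - 2 * x[1] + 1                # equality constraint: h(x) = 0
g = lambda x: x[0] ** 2 / 4 + x[1] ** 2 - 1      # inequality constraint: g(x) <= 0

result = minimize(
    f, x0=np.array([0.0, 0.0]),
    constraints=[{"type": "eq", "fun": h},
                 {"type": "ineq", "fun": lambda x: -g(x)}],  # negated: SciPy wants >= 0
)
print(result.x)  # constrained optimum, approximately [0.82, 0.91]
```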
We can introduce the vector-valued functions
{\displaystyle h=(h_{1},h_{2},\dots ,h_{m_{1}})\quad \operatorname {and} \quad g=(g_{1},g_{2},\dots ,g_{m_{2}})}
to rewrite the above statement in the compact expression
{\displaystyle {\begin{aligned}&{\operatorname {minimize} }&&f(x)\\&\operatorname {subject\;to} &&h(x)=0,\quad g(x)\leq 0,\quad x\in X\subseteq R^{n}\end{aligned}}}
We call {\displaystyle h,g} the set or system of (functional) constraints and {\displaystyle X} the set constraint.
== Application ==
Design optimization applies the methods of mathematical optimization to design problem formulations, and it is sometimes used interchangeably with the term engineering optimization. When the objective function f is a vector rather than a scalar, the problem becomes a multi-objective optimization one. If the design optimization problem has more than one mathematical solution, the methods of global optimization are used to identify the global optimum.
Optimization Checklist
Problem Identification
Initial Problem Statement
Analysis Models
Optimal Design Model
Model Transformation
Local Iterative Techniques
Global Verification
Final Review
A detailed and rigorous description of the stages and practical applications with examples can be found in the book Principles of Optimal Design.
Practical design optimization problems are typically solved numerically, and many optimization software packages exist in academic and commercial forms. There are several domain-specific applications of design optimization that pose their own specific challenges in formulating and solving the resulting problems; these include shape optimization, wing-shape optimization, topology optimization, architectural design optimization, and power optimization. Several books, articles and journal publications are listed below for reference.
One modern application of design optimization is structural design optimization (SDO) in the building and construction sector. SDO emphasizes automating and optimizing structural designs and dimensions to satisfy a variety of performance objectives: maximizing strength, minimizing material usage, reducing costs, enhancing energy efficiency, and improving sustainability, among other criteria. Concurrently, structural design automation endeavors to streamline the design process, mitigate human errors, and enhance productivity through computer-based tools and optimization algorithms. Prominent practices and technologies in this domain include parametric design, generative design, building information modelling (BIM) technology, machine learning (ML), and artificial intelligence (AI), as well as integrating finite element analysis (FEA) with simulation tools.
== Journals ==
Journal of Engineering for Industry
Journal of Mechanical Design
Journal of Mechanisms, Transmissions, and Automation in Design
Design Science
Engineering Optimization
Journal of Engineering Design
Computer-Aided Design
Journal of Optimization Theory and Applications
Structural and Multidisciplinary Optimization
Journal of Product Innovation Management
International Journal of Research in Marketing
== See also ==
Design Decisions Wiki (DDWiki) : Established by the Design Decisions Laboratory at Carnegie Mellon University in 2006 as a central resource for sharing information and tools to analyze and support decision-making
== References ==
== Further reading ==
Aris, Rutherford (2016) [1961]. The Optimal Design of Chemical Reactors: A Study in Dynamic Programming. Saint Louis: Academic Press/Elsevier Science. ISBN 9781483221434. OCLC 952932441.
Bracken, Jerome; McCormick, Garth P. (1968). Selected Applications of Nonlinear Programming. New York: Wiley. ISBN 0471094404. OCLC 174465.
Fox, Richard L. (1971). Optimization Methods for Engineering Design. Reading, Mass.: Addison-Wesley. ISBN 0201020785. OCLC 150744.
Johnson, Ray C. (1971). Mechanical Design Synthesis With Optimization Applications. New York: Van Nostrand Reinhold.
Zener, Clarence (1971). Engineering Design by Geometric Programming. New York: Wiley-Interscience. ISBN 0471982008. OCLC 197022.
Mickle, Marlin H.; Sze, T. W. (1972). Optimization in Systems Engineering. Scranton: Intext Educational Publishers. ISBN 0700224076. OCLC 340906.
Avriel, M.; Rijckaert, M. J.; Wilde, Douglass J., eds. (1973). Optimization and Design. Englewood Cliffs, N.J.: Prentice-Hall. ISBN 0136380158. OCLC 618414.
Wilde, Douglass J. (1978). Globally Optimal Design. New York: Wiley. ISBN 0471038989. OCLC 3707693.
Haug, Edward J.; Arora, Jasbir S. (1979). Applied Optimal Design: Mechanical and Structural Systems. New York: Wiley. ISBN 047104170X. OCLC 4775674.
Kirsch, Uri (1981). Optimum Structural Design: Concepts, Methods, and Applications. New York: McGraw-Hill. ISBN 0070348448. OCLC 6735289.
Kirsch, Uri (1993). Structural Optimization: Fundamentals and Applications. Berlin: Springer-Verlag. ISBN 3540559191. OCLC 27676129.
Lev, Ovadia E., ed. (1981). Structural Optimization: Recent Developments and Applications. New York: ASCE. ISBN 0872622819. OCLC 8182361.
Morris, A. J., ed. (1982). Foundations of Structural Optimization: A Unified Approach. Chichester: Wiley. ISBN 0471102008. OCLC 8031383.
Siddall, James N. (1982). Optimal Engineering Design: Principles and Applications. New York: M. Dekker. ISBN 0824716337. OCLC 8389250.
Ravindran, A.; Reklaitis, G. V.; Ragsdell, K. M. (2006). Engineering Optimization: Methods and Applications (2nd ed.). Hoboken, N.J.: John Wiley & Sons. ISBN 0471558141. OCLC 61463772.
Vanderplaats, Garret N. (1984). Numerical Optimization Techniques for Engineering Design: With Applications. New York: McGraw-Hill. ISBN 0070669643. OCLC 9785595.
Haftka, Raphael T.; Gürdal, Zafer; Kamat, Manohar P. (1990). Elements of Structural Optimization (2nd rev. ed.). Dordrecht: Springer Netherlands. ISBN 9789401578622. OCLC 851381183.
Arora, Jasbir S. (2011). Introduction to Optimum Design (3rd ed.). Boston, MA: Academic Press. ISBN 9780123813756. OCLC 760173076.
Janna, William S. Design of Fluid Thermal Systems (SI edition; 4th ed.). Stamford, Connecticut. ISBN 9781285859651. OCLC 881509017.
Kamat, Manohar P., ed. (1993). Structural Optimization: Status and Promise. Washington, DC: American Institute of Aeronautics and Astronautics. ISBN 156347056X. OCLC 27918651.
Avriel, M.; Golany, B., eds. (1996). Mathematical Programming for Industrial Engineers. New York: Marcel Dekker. ISBN 0824796209. OCLC 34474279.
Eschenauer, Hans; Olhoff, Niels; Schnell, W. (1997). Applied Structural Mechanics: Fundamentals of Elasticity, Load-Bearing Structures, Structural Optimization, Including Exercises. Berlin: Springer. ISBN 3540612327. OCLC 35184040.
Belegundu, Ashok D.; Chandrupatla, Tirupathi R. (2011). Optimization Concepts and Applications in Engineering (2nd ed.). New York: Cambridge University Press. ISBN 9781139037808. OCLC 746750296.
Onwubiko, Chinyere Okechi (2000). Introduction to Engineering Design Optimization. Upper Saddle River, NJ: Prentice-Hall. ISBN 0201476738. OCLC 41368373.
Dixon, L. C. W., ed. (1976). Optimization in Action: Proceedings of the Conference on Optimization in Action Held at the University of Bristol in January 1975. London: Academic Press. ISBN 0122185501. OCLC 2715969.
Williams, H. P. (2013). Model Building in Mathematical Programming (5th ed.). Chichester, West Sussex: Wiley. ISBN 9781118506189. OCLC 810039791.
McDowell, David L., ed. (2010). Integrated Design of Multiscale, Multifunctional Materials and Products. Oxford: Butterworth-Heinemann. ISBN 9781856176620. OCLC 610001448.
Dede, Ercan M.; Lee, Jaewook; Nomura, Tsuyoshi. Multiphysics Simulation: Electromechanical System Applications and Optimization. London. ISBN 9781447156406. OCLC 881071474.
Liu, G. P.; Yang, Jian-Bo; Whidborne, J. F. (2001). Multiobjective Optimisation and Control. Baldock, Hertfordshire: Research Studies Press. ISBN 0585491941. OCLC 54380075.
=== Structural Topology Optimization === | Wikipedia/Design_Optimization |
Energy economics is a broad scientific subject area which includes topics related to supply and use of energy in societies. Considering the cost of energy services and associated value gives economic meaning to the efficiency at which energy can be produced. Energy services can be defined as functions that generate and provide energy to the “desired end services or states”. The efficiency of energy services is dependent on the engineered technology used to produce and supply energy. The goal is to minimise energy input required (e.g. kWh, mJ, see Units of Energy) to produce the energy service, such as lighting (lumens), heating (temperature) and fuel (natural gas). The main sectors considered in energy economics are transportation and building, although it is relevant to a broad scale of human activities, including households and businesses at a microeconomic level and resource management and environmental impacts at a macroeconomic level.
Interdisciplinary scientist Vaclav Smil has asserted that "every economic activity is fundamentally nothing but a conversion of one kind of energy to another, and monies are just a convenient (and often rather unrepresentative) proxy for valuing the energy flows."
== History ==
Energy related issues have been actively present in economic literature since the 1973 oil crisis, but they have their roots much further back in history. As early as 1865, W.S. Jevons expressed his concern about the eventual depletion of coal resources in his book The Coal Question. One of the best known early attempts to work on the economics of exhaustible resources (including fossil fuels) was made by H. Hotelling, who derived a price path for non-renewable resources, known as Hotelling's rule.
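In its simplest form, abstracting from extraction costs, Hotelling's rule states that along the optimal extraction path the price of the non-renewable resource grows at the discount rate r:

{\displaystyle {\frac {{\dot {p}}(t)}{p(t)}}=r\quad \Longleftrightarrow \quad p(t)=p_{0}e^{rt}}

where p(t) is the resource price at time t and p_0 its initial value; with extraction costs, the rule applies to the net price (price minus marginal extraction cost).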
Development of energy economics theory over the last two centuries can be attributed to three main economic subjects – the rebound effect, the energy efficiency gap and more recently, 'green nudges'.
The Rebound Effect (1860s to 1930s)
While energy efficiency is improved with new technology, expected energy savings are less than proportional to the efficiency gains because of behavioural responses. There are three behavioural sub-theories to be considered: the direct rebound effect, which anticipates increased use of the energy service that was improved; the indirect rebound effect, which considers the income effect created by savings allowing for increased energy consumption; and the economy-wide effect, which results from an increase in energy prices due to the newly developed technology improvements.
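The direct rebound logic can be made concrete with a stylized calculation; all numbers below are hypothetical illustrations, not estimates from the literature:

```python
# Illustrative direct-rebound arithmetic; all numbers are hypothetical.
efficiency_gain = 0.25   # the service now needs 25% less energy per unit
rebound = 0.30           # 30% of the engineering savings are "taken back"

expected_savings = efficiency_gain            # naive engineering estimate
actual_savings = expected_savings * (1 - rebound)
print(f"expected {expected_savings:.0%}, actual {actual_savings:.1%}")
# expected 25%, actual 17.5%: less than proportional, as the theory predicts
```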
The Energy Efficiency Gap (1980s to 1990s)
Suboptimal investment in the improvement of energy efficiency, resulting from market failures and barriers, prevents the optimal use of energy. From an economic standpoint, a rational decision-maker with perfect information will optimally choose between the trade-off of initial investment and energy costs. However, due to uncertainties such as environmental externalities, the optimal potential energy efficiency is not always achieved, thus creating an energy efficiency gap.
Green Nudges (1990s to Current)
While the energy efficiency gap considers economic investments, it does not consider behavioural anomalies in energy consumers. Growing concerns surrounding climate change and other environmental impacts have led to what economists would describe as irrational behaviours being exhibited by energy consumers. A contribution to this has been government interventions, coined "green nudges" by Thaler and Sunstein (2008), such as feedback on energy bills. Now that it is recognised that people do not always behave rationally, research into energy economics focuses more on behaviours and on influencing decision-making to close the energy efficiency gap.
== Economic factors ==
Due to diversity of issues and methods applied and shared with a number of academic disciplines, energy economics does not present itself as a self-contained academic discipline, but it is an applied subdiscipline of economics. From the list of main topics of economics, some relate strongly to energy economics:
Energy economics also draws heavily on results of energy engineering, geology, political sciences, ecology etc. Recent focus of energy economics includes the following issues:
Some institutions of higher education (universities) recognise energy economics as a viable career opportunity and offer it as a field of study. The University of Cambridge, Massachusetts Institute of Technology and the Vrije Universiteit Amsterdam are the top three research universities, and Resources for the Future is the top research institute. There are numerous other research departments, companies, and professionals offering energy economics studies and consultations.
== International Association for Energy Economics ==
International Association for Energy Economics (IAEE) is an international non-profit society of professionals interested in energy economics. IAEE was founded in 1977, during the period of the energy crisis. IAEE is incorporated under United States laws and has headquarters in Cleveland.
The IAEE operates through a 17-member Council of elected and appointed members. Council members and officers serve in voluntary positions.
IAEE has over 4,500 members worldwide (in over 100 countries). There are more than 25 national chapters, in countries where membership exceeds 25 individual members. Some of the regularly active national chapters of the IAEE are; USAEE - United States; GEE - Germany; BIEE - Great Britain; AEE - France; AIEE - Italy.
=== Publications ===
The International Association for Energy Economics publishes three publications throughout the year:
The Energy Journal, a quarterly academic publication
the Economics of Energy & Environmental Policy, a semi-annual publication
the Energy Forum
=== Conferences ===
The IAEE conferences address critical issues of vital concern and importance to governments and industries and provide a forum where policy issues are presented, considered and discussed at both formal sessions and informal social functions.
IAEE typically holds five Conferences each year. The main annual conference for IAEE is the IAEE International Conference which is organized at diverse locations around the world. From the year 1996 on these conferences have taken place (or will take place) in the following cities:
2021 - Online Conference
2020 - No Conference
2019 - Montreal, Canada
2018 - Groningen, The Netherlands.
2017 - Singapore.
2016 - Bergen, Norway.
2015 - Antalya, Turkey.
2014 - New York City, United States.
2013 - Daegu, South Korea.
2012 - Perth, Australia (35th).
2011 - Stockholm, Sweden.
2010 - Rio, Brazil.
2009 - San Francisco, United States.
2008 - Istanbul, Turkey.
2007 - Wellington, New Zealand.
2006 - Potsdam, Germany.
2005 - Taipei, China (Taipei).
2003 - Prague, Czech Republic.
2002 - Aberdeen, Scotland.
2001 - Houston, Texas.
2000 - Sydney, Australia.
1999 - Rome, Italy.
1998 - Quebec, Canada.
1997 - New Delhi, India.
1996 - Budapest, Hungary.
Other annual IAEE conferences are the North American Conference and the European Conference.
=== IAEE Awards ===
The Association's Immediate Past President annually chairs the Awards committee that selects the award recipients.
Outstanding Contributions to the Profession
Outstanding Contributions to the IAEE
The Energy Journal Campbell Watkins Best Paper Award
Economics of Energy & Environmental Policy Best Paper Award
Journalism Award
== Sources, links and portals ==
Leading journals of energy economics include:
Energy Economics
The Energy Journal
Resource and Energy Economics
There are several other journals that regularly publish papers in energy economics:
Energy – The International Journal
Energy Policy
International Journal of Global Energy Issues
Journal of Energy Markets
Utilities Policy
Much progress in energy economics has been made through the conferences of the International Association for Energy Economics, the model comparison exercises of the (Stanford) Energy Modeling Forum and the meetings of the International Energy Workshop.
IDEAS/RePEc has a collection of recent working papers.
== Leading energy economists ==
The top 20 leading energy economists as of December 2016 are:
== See also ==
== References ==
== Further reading ==
How to Measure the True Cost of Fossil Fuels March 30, 2013 Scientific American
Bhattacharyya, S. (2011). Energy Economics: Concepts, Issues, Markets, and Governance. London: Springer-Verlag limited.
Herberg, Mikkal (2014). Energy Security and the Asia-Pacific: Course Reader. United States: The National Bureau of Asian Research.
Zweifel, P., Praktiknjo, A., Erdmann, G. (2017). Energy Economics - Theory and Applications Archived 2017-04-26 at the Wayback Machine. Berlin, Heidelberg: Springer-Verlag.
== External links ==
United States Association for Energy Economics
UIA - International Association for Energy Economics (IAEE)
The Distinguished Lecturer Series
IAEE Newsletter | Wikipedia/Energy_economics |
A lighting control system is intelligent network-based lighting control that incorporates communication between various system inputs and outputs related to lighting control with the use of one or more central computing devices. Lighting control systems are widely used on both indoor and outdoor lighting of commercial, industrial, and residential spaces. Lighting control systems are sometimes referred to under the term smart lighting. Lighting control systems serve to provide the right amount of light where and when it is needed.
Lighting control systems are employed to maximize the energy savings from the lighting system, satisfy building codes, or comply with green building and energy conservation programs. Lighting control systems may include a lighting technology designed for energy efficiency, convenience and security. This may include high efficiency fixtures and automated controls that make adjustments based on conditions such as occupancy or daylight availability. Lighting is the deliberate application of light to achieve some aesthetic or practical effect (e.g. illumination of a security breach). It includes task lighting, accent lighting, and general lighting.
== Lighting controls ==
The term lighting controls is typically used to indicate stand-alone control of the lighting within a space. This may include occupancy sensors, timeclocks, and photocells that are hard-wired to control fixed groups of lights independently. Adjustment occurs manually at each device's location. The efficiency of and market for residential lighting controls has been characterized by the Consortium for Energy Efficiency.
The term lighting control system refers to an intelligent networked system of devices related to lighting control. These devices may include relays, occupancy sensors, photocells, light control switches or touchscreens, and signals from other building systems (such as fire alarm or HVAC). Adjustment of the system occurs both at device locations and at central computer locations via software programs or other interface devices.
=== Advantages ===
The major advantage of a lighting control system over stand-alone lighting controls or conventional manual switching is the ability to control individual lights or groups of lights from a single user interface device. This ability to control multiple light sources from a user device allows complex lighting scenes to be created. A room may have multiple scenes pre-set, each one created for different activities in the room. A major benefit of lighting control systems is reduced energy consumption. Longer lamp life is also gained when dimming and switching off lights when not in use. Wireless lighting control systems provide additional benefits including reduced installation costs and increased flexibility over where switches and sensors may be placed.
=== Minimizing energy usage ===
Lighting applications represents 19% of the world's energy use and 6% of all greenhouse emissions. In the United States, 65 percent of energy consumption is used by commercial and industrial sectors, and 22 percent of this is used for lighting.
Smart lighting enables households and users to remotely control cooling, heating, lighting and appliances, minimizing unnecessary light and energy use. This ability saves energy and provides a level of comfort and convenience. The future success of smart lighting will require the involvement of a number of stakeholders and stakeholder communities, many from outside the traditional lighting industry. The concept of smart lighting also involves utilizing natural light from the sun to reduce the use of man-made lighting, and the simple practice of people turning off lighting when they leave a room.
=== Convenience ===
A smart lighting system can ensure that dark areas are illuminated when in use. The lights actively respond to the activities of the occupants based on sensors and intelligence (logic) that anticipates the lighting needs of an occupant. This can enhance comfort, improve safety, reduce manual effort, and improve energy efficiency.
=== Security ===
Lights can be used to deter people from areas where they should not be. A security breach, for example, is an event that could trigger floodlights at the breach point. Preventative measures include illuminating key access points (such as walkways) at night and automatically adjusting the lighting when a household is away to make it appear as though there are occupants.
== Automated control ==
Lighting control systems typically provide the ability to automatically adjust a lighting device's output based on:
Chronological time (time of day)
Solar time (sunrise/sunset)
Occupancy using occupancy sensors
Daylight availability using photocells
Alarm conditions
Program logic (combination of events)
=== Chronological time ===
Chronological time schedules incorporate specific times of the day, week, month or year.
=== Solar time ===
Solar time schedules incorporate sunrise and sunset times, often used to switch outdoor lighting. Solar time scheduling requires that the location of the building be set, either via latitude and longitude or by picking the nearest city from a database, which gives the approximate location and corresponding solar times.
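As a sketch, the solar times can be computed at runtime from the configured coordinates; this example assumes the third-party Python package astral (any sunrise/sunset almanac or library would serve equally well):

```python
from datetime import date
from astral import LocationInfo
from astral.sun import sun

# Solar scheduling needs the building's location (latitude/longitude here).
site = LocationInfo(name="Site", region="", timezone="Europe/Berlin",
                    latitude=52.52, longitude=13.40)
s = sun(site.observer, date=date.today(), tzinfo=site.timezone)

print("Outdoor lights ON at sunset:", s["sunset"])
print("Outdoor lights OFF at sunrise:", s["sunrise"])
```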
=== Occupancy ===
Space occupancy is primarily determined with occupancy sensors. Smart lighting that utilizes occupancy sensors can work in unison with other lighting connected to the same network to adjust lighting per various conditions. The table below shows potential electricity savings from using occupancy sensors to control lighting in various types of spaces.
==== Ultrasonic ====
The advantages of ultrasonic devices are that they are sensitive to all types of motion and generally there are zero coverage gaps, since they can detect movements not within the line of sight.
=== Daylight availability ===
Electric lighting energy use can be adjusted by automatically dimming and/or switching electric lights in response to the level of available daylight. Reducing the amount of electric lighting used when daylight is available is known as daylight harvesting.
==== Daylight sensing ====
In response to daylighting technology, daylight-linked automated response systems have been developed to further reduce energy consumption. These technologies are helpful, but they have their downfalls. Rapid and frequent switching of the lights on and off can occur, particularly during unstable weather conditions or when daylight levels are changing around the switching illuminance. Not only does this disturb occupants, it can also reduce lamp life. A variation of this technology is 'differential switching' or 'dead-band' photoelectric control, which uses separate on and off illuminance thresholds to reduce disturbance to occupants.
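Dead-band control can be sketched as simple hysteresis: the lights switch off only above a high illuminance threshold and on only below a low one, holding their state in between. The threshold values below are illustrative only:

```python
def deadband_control(lights_on: bool, daylight_lux: float,
                     off_above: float = 800.0, on_below: float = 400.0) -> bool:
    """Hysteresis ('dead-band') switching: between the two thresholds the
    previous state is kept, which suppresses rapid on/off cycling."""
    if daylight_lux >= off_above:
        return False          # ample daylight: electric lights off
    if daylight_lux <= on_below:
        return True           # too dark: electric lights on
    return lights_on          # inside the dead band: hold state

state = True
for lux in [300, 500, 850, 700, 450, 350]:
    state = deadband_control(state, lux)
    print(lux, "->", "on" if state else "off")
```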
=== Alarm conditions ===
Alarm conditions typically include inputs from other building systems such as the fire alarm or HVAC system, which may trigger an emergency 'all lights on' or 'all lights flashing' command, for example.
=== Program logic ===
Program logic can tie all of the above elements together using constructs such as if-then-else statements and logical operators. Digital Addressable Lighting Interface (DALI) is specified in the IEC 62386 standard.
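A hedged sketch of such program logic follows; the inputs, thresholds, and dimming levels are hypothetical, and a real system would express equivalent rules through DALI scenes or a vendor's rule engine:

```python
from datetime import time

def lighting_level(now: time, occupied: bool, daylight_lux: float,
                   fire_alarm: bool) -> float:
    """Return a dimming level 0.0-1.0 from combined inputs (if-then-else logic)."""
    if fire_alarm:                        # alarm condition overrides everything
        return 1.0
    if not occupied:
        return 0.0
    if daylight_lux > 500:                # daylight harvesting
        return 0.2
    if time(8, 0) <= now <= time(18, 0):  # working-hours schedule
        return 0.8
    return 0.5                            # occupied outside working hours

print(lighting_level(time(14, 30), occupied=True, daylight_lux=120,
                     fire_alarm=False))   # prints 0.8
```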
=== Automatic dimming ===
The use of automatic light dimming is an aspect of smart lighting that serves to reduce energy consumption. Manual light dimming also has the same effect of reducing energy use.
=== Use of sensors ===
In the paper "Energy savings due to occupancy sensors and personal controls: a pilot field study", Galasiu, A.D. and Newsham, G.R. confirmed that automatic lighting systems including occupancy sensors and individual (personal) controls are suitable for open-plan office environments and can save a significant amount of energy (about 32%) compared to a conventional lighting system, even when the installed lighting power density of the automatic lighting system is about 50% higher than that of the conventional system.
==== Components ====
A complete sensor consists of a motion detector, an electronic control unit, and a controllable switch/relay. The detector senses motion and determines whether there are occupants in the space. It also has a timer that signals the electronic control unit after a set period of inactivity. The control unit uses this signal to activate the switch/relay to turn equipment on or off. For lighting applications, there are three main sensor types: passive infrared, ultrasonic, and hybrid.
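The control unit's timer behaviour can be sketched as a simple inactivity timeout; the interval and function names below are illustrative, not from any particular product:

```python
import time

TIMEOUT_S = 10 * 60          # switch off after 10 minutes without motion
last_motion = time.monotonic()
relay_closed = False

def on_motion_detected() -> None:
    """Called by the detector; re-arms the timeout and closes the relay."""
    global last_motion, relay_closed
    last_motion = time.monotonic()
    relay_closed = True       # lights on

def tick() -> None:
    """Polled periodically by the control unit."""
    global relay_closed
    if relay_closed and time.monotonic() - last_motion > TIMEOUT_S:
        relay_closed = False  # timeout elapsed: lights off
```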
==== Others ====
Motion-detecting (microwave), heat-sensing (infrared), and sound-sensing devices; optical cameras, infrared motion sensors, optical trip wires, door contact sensors, thermal cameras, micro radars, and daylight sensors.
== Standards and protocols ==
In the 1980s there was a strong requirement to make commercial lighting more controllable so that it could become more energy efficient. Initially this was done with analog control, allowing fluorescent ballasts and dimmers to be controlled from a central source. This was a step in the right direction, but cabling was complicated and therefore not cost effective.
Tridonic was an early company to go digital with its broadcast protocol, DSI, in 1991. DSI was a basic protocol, as it transmitted one control value to change the brightness of all the fixtures attached to the line. What made this protocol more attractive, and able to compete with the established analog option, was its simple wiring.
There are two types of lighting control systems:
Analog lighting control
Digital lighting control
Examples for analog lighting control systems are:
0-10V based system.
AMX192 based systems (often referred to as AMX) (USA standard).
D54 based systems (European standard).
In production lighting, the 0-10V system was replaced by analog multiplexed systems such as D54 and AMX192, which themselves have been almost completely replaced by DMX512. For dimmable fluorescent lamps (where the system instead operates at 1-10 V, with 1 V as minimum and 0 V as off), it is being replaced by DSI, which itself is in the process of being replaced by DALI.
Examples for digital lighting control systems are:
DALI based system.
DSI based system
KNX based systems
Those are all wired lighting control systems.
There are also wireless lighting control systems based on standard protocols such as MIDI, ZigBee, Bluetooth Mesh, and others. The standard for the digital addressable lighting interface, used mostly in professional and commercial deployments, is IEC 62386-104. The underlying wireless technologies include VEmesh, which operates in the industrial sub-1 GHz frequency band, and Bluetooth Mesh, which operates in the 2.4 GHz frequency band.
Other notable protocols, standards and systems include:
=== Bluetooth lighting control ===
A newer type of lighting control uses a Bluetooth connection directly to the lighting system. It was introduced by Signify (formerly Philips Lighting) under the Philips Hue brand. The system requires a smartphone or tablet on which the user installs the Philips Hue Bluetooth app. The Bluetooth bulbs do not need a Philips Hue bridge to function, and no Wi-Fi or data connection is required to control the lights.
=== Smart lighting ecosystem ===
Smart lighting systems can be controlled over the internet to adjust lighting brightness and schedules. One technology involves a smart lighting network that assigns an IP address to each light bulb.
=== Information transmitting with smart light ===
Schubert predicts that revolutionary lighting systems will provide an entirely new means of sensing and broadcasting information. By blinking far too rapidly for any human to notice, the light will pick up data from sensors and carry it from room to room, reporting such information as the location of every person within a high-security building. A major focus of the Future Chips Constellation is smart lighting, a revolutionary new field in photonics based on efficient light sources that are fully tunable in terms of such factors as spectral content, emission pattern, polarization, color temperature, and intensity. Schubert, who leads the group, says smart lighting will not only offer better, more efficient illumination; it will provide "totally new functionalities."
== Theatrical lighting control ==
Architectural lighting control systems can integrate with a theater's on-off and dimmer controls; they are often used for house lights and stage lighting and can also cover worklights, rehearsal lighting, and lobby lighting. Control stations can be placed in several locations in the building and range in complexity from single buttons that bring up preset looks to in-wall or desktop LCD touchscreen consoles. Much of the technology is related to residential and commercial lighting control systems.
The benefit of architectural lighting control systems in the theater is that theater staff can turn worklights and house lights on and off without having to use a lighting control console. Alternatively, the lighting designer can control these same lights with light cues from the lighting control console so that, for instance, the transition from the house lights being up before a show to the first light cue of the show is handled by one system.
== Smart-lighting emergency ballast for fluorescent lamps ==
The function of a traditional emergency lighting system is to supply a minimum level of illumination when a line voltage failure occurs, so emergency lighting systems have to store energy in a battery module to supply the lamps in case of failure. In such systems, internal faults, for example battery overcharging, damaged lamps and starting-circuit failures, must be detected and repaired by specialist workers.
For this reason, the smart-lighting prototype checks its functional state every fourteen days and shows the result on an LED display. Because the units test themselves, checking their functional state and displaying internal faults, maintenance costs can be reduced.
=== Overview ===
The main idea is to replace the simple line-voltage-sensing block of traditional systems with a more complex one based on a microcontroller. This new circuit takes over line voltage sensing and inverter activation on one side and, on the other, the supervision of the whole system: lamp and battery state, battery charging, external communications, correct operation of the power stage, and so on.
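A conceptual sketch of the supervision loop such a microcontroller might run; all hardware calls are placeholder stubs invented for illustration, not taken from the cited design:

```python
# Conceptual supervision loop for the microcontroller described above.
# All hardware calls are placeholder stubs invented for illustration.

class MockHardware:
    """Stand-in for the real power stage and sensing circuitry."""
    def line_voltage_present(self): return False
    def battery_overcharged(self): return False
    def lamp_ok(self): return True
    def activate_inverter(self): print("inverter ON (battery supply)")
    def deactivate_inverter(self): pass
    def charge_battery(self): pass
    def report(self, faults): print("faults:", faults or "none")

def supervise(hw):
    if not hw.line_voltage_present():
        hw.activate_inverter()        # mains failure: run the lamp from battery
    else:
        hw.deactivate_inverter()
        hw.charge_battery()           # normal operation: keep battery topped up
    faults = []                       # periodic self-test and fault reporting
    if hw.battery_overcharged():
        faults.append("BATTERY")
    if not hw.lamp_ok():
        faults.append("LAMP")
    hw.report(faults)                 # e.g. shown on the LED display

supervise(MockHardware())
```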
The system is very flexible; for instance, several devices could communicate with a master computer, which would know the state of each device at all times.
A new emergency lighting system based on an intelligent module has been developed. Using a microcontroller as the control and supervision device increases installation security and saves maintenance costs.
Another important advantage is the cost saving in mass production, especially if a microcontroller with its program in ROM is used.
== Advances in photonics ==
The advances achieved in photonics are transforming society just as electronics revolutionized the world in recent decades, and they will continue to do so. North America's optoelectronics market grew to more than $20 billion in 2003, the LED (light-emitting diode) market was expected to reach $5 billion in 2007, and the solid-state lighting market is predicted to reach $50 billion in 15–20 years, as stated by E. Fred Schubert, Wellfleet Senior Distinguished Professor of the Future Chips Constellation at Rensselaer.
== Notable inventors ==
Alexander Nikolayevich Lodygin – carbon-rod filament incandescent lamp (1874)
Joseph Swan – carbonized-thread filament incandescent lamp (1878)
Thomas Edison – long-lasting incandescent lamp with high-resistance filament (1880)
John Richardson Wigham – electric lighthouse illumination (1885)
Nick Holonyak – light-emitting diode (1962)
Howard Borden, Gerald Pighini, Mohamed Atalla, Monsanto – LED lamp (1968)
Shuji Nakamura, Isamu Akasaki, Hiroshi Amano – blue LED (1992)
== See also ==
Lists
List of light sources
List of lighting design applications
Timeline of lighting technology
== References ==
Occupant-centric building controls or occupant-centric controls (OCC) is a control strategy for the indoor environment that specifically focuses on meeting the current needs of building occupants while decreasing building energy consumption. OCC can be used to control lighting and appliances, but is most commonly used to control heating, ventilation, and air conditioning (HVAC). OCC uses real-time data collected on indoor environmental conditions, occupant presence and occupant preferences as inputs to energy system control strategies. By responding to real-time inputs, OCC is able to flexibly provide the proper level of energy services, such as heating and cooling, when and where occupants need them. Ensuring that building energy services are provided in the right quantity is intended to improve occupant comfort, while providing these services only at the right time and in the right location is intended to reduce overall energy use.
In contrast to OCC, conventional building control strategies, known as Building Energy Management Systems (BEMS), typically use predetermined temperature setpoints and setback schedules. These temperatures and temperature schedules are often determined by industry standards with no input from the building occupants. Conventional BEMS typically have static operation parameters that give minimal flexibility to meet the changing needs of building occupants throughout the day, the changing needs of new building tenants, or the diverse thermal needs of any given group of building occupants.
The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) has outlined that the thermal comfort of occupants is influenced both by environmental conditions, such as radiative heat, humidity, air speed and season, and by personal factors such as physiology, clothing worn and activity level. This dynamic and personalized nature of thermal comfort has traditionally made it complex to integrate into HVAC controls, but an increase in sensing and computing capabilities, along with a decrease in sensing and computing costs, has made it possible for OCC to be an effective and scalable means of controlling building energy systems. With buildings consuming over 33% of global energy and producing almost 40% of CO2 emissions, OCC could play a significant role in reducing global energy consumption and CO2 emissions.
== Background ==
=== Occupant-Centric Control Inputs ===
OCC relies on real-time occupancy and occupant preference data as inputs to the control algorithm. This data must be continually collected by various methods and can be collected on various scales including whole-building, floor, room, and sub-room. Often, it is most useful to collect data on a scale that matches the thermal zoning of the building. A thermal zone is a section of a building that is all conditioned under the same temperature setpoint.
Data on occupant presence (occupied or unoccupied) and occupancy levels (number of occupants) can be collected with either explicit or implicit sensors. Explicit sensors directly measure occupancy and include passive infrared sensors, ultrasonic motion detectors, and entranceway counting cameras. Implicit sensors measure a parameter that can be correlated to occupancy through some calibrated relationship; examples include CO2 sensors and counts of Wi-Fi-connected devices. The selection of occupancy sensing devices depends on the size of the space being monitored, the budget for sensors, the desired accuracy, the goal of the sensor (detecting occupant presence or count), and security considerations.
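A sketch of an implicit-sensor mapping of the kind just described; the calibration constants are invented, and a real deployment would fit them to measured data for the zone:

```python
# Hypothetical calibration: estimate occupant count from a CO2 reading
# using a linear relationship fitted for one thermal zone. The constants
# below are illustrative, not from any standard.

OUTDOOR_CO2_PPM = 420      # assumed outdoor baseline
PPM_PER_OCCUPANT = 25      # assumed steady-state rise per occupant

def estimate_occupancy(co2_ppm: float) -> int:
    excess = max(0.0, co2_ppm - OUTDOOR_CO2_PPM)
    return round(excess / PPM_PER_OCCUPANT)

print(estimate_occupancy(545))   # (545 - 420) / 25 -> 5 occupants
```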
Unlike occupant presence data, acquiring occupant preference data requires direct feedback from building occupants. This feedback can be solicited or unsolicited. Unsolicited occupant preference data can include the time and magnitude of a manual thermostat setpoint change. While this can be a good indicator of occupant thermal dissatisfaction, thermostat setpoint changes can be infrequent, creating a barrier to integrating occupant preference into OCC. Solicited occupant preference information is therefore often used to acquire more preference data, and takes the form of just-in-time surveys or ecological momentary assessments (EMA). These surveys, typically deployed to computers, smartphones, or smartwatches, can ask participants about their thermal sensation, thermal satisfaction or any other factor that reflects their comfort in the space. Implementing occupant preference information in OCC is still in its early stages, and its practical application is still being studied in academic settings.
=== Predictive Controls ===
OCC can be categorized as either reactive control or predictive control. Reactive control uses real-time occupant preference and presence feedback to immediately alter the conditions of the space. While this approach is useful for controlling systems with fast response times, such as lighting, reactive OCC is not ideal for systems with slow response times such as HVAC. For these slow-response systems, predictive control allows building services, such as heating, to be provided at the right time without a lag between when a service is needed and when it is provided.
Unlike reactive controls, predictive controls use real-time occupant preference and presence data to inform and train predictive control algorithms rather than directly impact the system operation. Predictive controls have a ‘prediction horizon’ which is the amount of time ahead that an OCC will need to change a setpoint or ventilation rate to achieve a certain temperature or indoor air quality level. The needed prediction horizon for an OCC will vary depending on the response time of the building. Building attributes that contribute to the need for a longer prediction horizon when controlling HVAC systems include large open rooms, high thermal mass, and spaces with rapid changes in occupancy levels.
For commercial HVAC OCC, predictive algorithms can be informed by the six information grades (IGs) outlined by ASHRAE: occupant presence, occupant count, and occupant preference, each considered at the building and thermal-zone level. From occupant presence data, an OCC may predict the earliest occupant arrival time and the latest departure time. From occupant count, an OCC may predict the maximum expected number of building occupants and when that maximum will occur. From occupant preference data, an OCC may predict the desired temperature and humidity of the space throughout the day. With this information, an OCC can predict when it would need to change temperature setpoints and ventilation rates to achieve a desired temperature and air quality level at a specific time. Predictive algorithms need a sufficient amount of data, as well as relatively regular occupant preference and presence patterns, to develop accurate control predictions.
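A sketch of the presence-based prediction just described, assuming logged first-arrival times and an invented 90-minute prediction horizon:

```python
# Sketch: predict tomorrow's earliest arrival from logged first-arrival
# times (minutes after midnight), then back off by the building's
# response time to find when preconditioning must start. Illustrative only.

first_arrivals = [465, 472, 458, 480, 470]   # roughly 07:38-08:00 over a week
prediction_horizon_min = 90                  # assumed HVAC response time

predicted_arrival = min(first_arrivals)      # conservative: earliest seen
start_heating = predicted_arrival - prediction_horizon_min
print(f"start conditioning at {start_heating // 60:02d}:{start_heating % 60:02d}")
# -> start conditioning at 06:08
```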
=== Occupant-Centric Control Development ===
The development of OCC is currently being supported by the International Energy Agency (IEA) Energy in Buildings and Communities (EBC) Programme Annex 79. Annex 79, which runs from 2018 to 2023, is an international collaborative initiative focused on developing and deploying technology, data collection methods, simulation methods, control algorithms, implementation policies, and application strategies aimed at occupant-centric building design and controls. This collaboration applies knowledge gained from the previous Annex 66, which ran from 2013 to 2018. Annex 66 worked to understand how occupant behavior relates to building energy consumption, as well as how building operation and design influence occupant thermal comfort. This was done primarily by collecting occupant behavior data and developing occupant simulation methods.
Additional groups working to develop OCC include the ASHRAE Multidisciplinary Task Group on Occupant Behavior in Buildings (MGT.OBB), and the National Science Foundation Future of Work Center for Intelligent Environments.
=== Occupant-Centric Control Algorithms ===
OCC is still in development where the creation and evaluation of various control algorithms are the main focus of study. Algorithms that have been studied for OCC include, but are not limited to, iterative data fusion methods, unsupervised machine learning, and reinforcement learning. Each of these algorithms has varying levels of computational complexity, needed inputs, and energy reduction potential.
Iterative data fusion methods are an example of reactive OCC and are a means of combining data from two or more sources. For this method, preference data from multiple occupants and data on indoor environmental conditions are used to balance the two optimization goals of average occupant satisfaction and energy savings. Each time new data enters the system, the algorithm determines whether any control action is needed, such as changing the temperature setpoint, based on a set of control rules that measure how well the optimization goals are being met.
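A toy reactive rule in the spirit of this approach; the vote encoding and thresholds are assumptions, not taken from any published controller:

```python
# Toy reactive rule: combine occupant votes with the current setpoint,
# acting only when a clear majority is uncomfortable. Thresholds invented.

def control_action(votes, setpoint_c, step_c=0.5):
    """votes: list of -1 (too cold), 0 (comfortable), +1 (too warm)."""
    if not votes:
        return setpoint_c
    mean_vote = sum(votes) / len(votes)
    if mean_vote > 0.5:               # most occupants too warm
        return setpoint_c - step_c
    if mean_vote < -0.5:              # most occupants too cold
        return setpoint_c + step_c
    return setpoint_c                 # satisfied on average: do nothing

print(control_action([1, 1, 0, 1], 22.0))   # mean vote 0.75 -> 21.5
```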
Unsupervised machine learning can be used to cluster occupants based on their 'thermal personalities'. These clusters can then inform reactive or predictive controls by capturing the thermal preferences of the specific occupants in the space. For this method, solicited occupant preference information is fed into an unsupervised machine-learning algorithm that groups occupants by how similar their thermal preferences are. The number and size of the groups depend on the type of unsupervised algorithm used as well as the data being analyzed.
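A sketch of such clustering with scikit-learn, using invented preference vectors (preferred temperature and tolerated deviation per occupant):

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch: cluster occupants into 'thermal personalities' from solicited
# preference data. Each row is one occupant: [preferred temp (C),
# tolerated deviation (C)]. The data values are invented.

prefs = np.array([
    [20.5, 0.5], [21.0, 1.0], [20.8, 0.8],   # cooler-preferring occupants
    [23.5, 1.5], [24.0, 1.0], [23.8, 2.0],   # warmer-preferring occupants
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(prefs)
print(km.labels_)            # group membership per occupant
print(km.cluster_centers_)   # representative preference of each group
```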
Reinforcement learning can be used as a predictive control algorithm with the goal of optimizing occupant satisfaction and energy savings. For this method, the algorithm accepts occupant presence and preference data and uses it to learn occupant preferences without needing to be trained on historical data. The algorithm evaluates each control decision it makes in order to maximize its reward, which is based on its ability to optimize occupant satisfaction and energy savings, and it makes continual adjustments based on new information it receives.
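A minimal tabular Q-learning sketch of such a loop; the state encoding, action set, and reward shaping (satisfaction minus an energy penalty) are simplifications invented for illustration:

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning loop in the spirit described above. The
# state encoding, action set, and reward definition are invented.

ACTIONS = [-1.0, 0.0, 1.0]            # setpoint change in degrees C
Q = defaultdict(float)                # Q[(state, action)] -> learned value
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def choose_action(state):
    if random.random() < epsilon:                       # explore occasionally
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])    # otherwise exploit

def update(state, action, reward, next_state):
    """Q-learning backup toward reward + discounted best future value."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One illustrative step: reward = satisfaction score minus energy penalty.
update(state=("occupied", "cool"), action=1.0, reward=0.6,
       next_state=("occupied", "ok"))
```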
== References ==
Green building certification systems are a set of rating systems and tools that are used to assess a building or a construction project's performance from a sustainability and environmental perspective. Such ratings aim to improve the overall quality of buildings and infrastructures, integrate a life cycle approach in its design and construction, and promote the fulfillment of the United Nations Sustainable Development Goals by the construction industry. Buildings that have been assessed and are deemed to meet a certain level of performance and quality, receive a certificate proving this achievement.
According to the Global Status Report 2017 published by the United Nations Environment Programme (UNEP) in coordination with the International Energy Agency (IEA), buildings and construction activities together contribute 36% of global energy use and 39% of carbon dioxide (CO2) emissions. Through certification, the environmental impacts arising during the lifecycle of buildings and other infrastructure (typically design, construction, operation and maintenance) can be better understood and mitigated. Currently, more than 100 building certification systems exist around the world. The most popular building certification models today are BREEAM (UK), LEED (US), and DGNB (Germany).
== History ==
In the mid-1980s, environmental issues were in the news and public attention due to international disasters such as the Bhopal disaster (1984), the Chernobyl nuclear explosion (1986) and the Exxon Valdez tanker spill (1989). Life-cycle assessment (LCA) had been gaining traction since its initial stages in the 1970s, and the term itself was first coined in 1991. With increasing cognizance of the environmental impacts of human activities, a more comprehensive assessment of buildings using the principles of LCA was much sought after. In 1990, BREEAM, the first sustainability assessment method for buildings, was released. In 1993, Rick Fedrizzi, David Gottfried and Mike Italiano formed the world's first green building council, the U.S. Green Building Council (USGBC). Their mission was to promote sustainability-focused practices in the building and construction industry and advance sustainable building principles. The USGBC was also responsible for creating the Leadership in Energy and Environmental Design (LEED) green building rating system in 1998. The integration of energy usage, materials performance and other building-related environmental issues, along with an aim of standardizing the comparison of assessments, led to more comprehensive building assessment methods.
With the principles of green building gaining momentum, several more GBCs were established across the world. In 2002, the World Green Building Council was officially formed to bring all the GBCs under one roof. GBCs from Australia, Brazil, Canada, India, Japan, Mexico, Spain, and USA were the founding members. As of 2018–19, there are 69 Green Building Councils under the World Green Building Council organization.
== Goals and benefits of building certification ==
The goal of all certification rating systems is to provide tools and methods to assess the environmental and resource-efficient performance of a building. The main objectives of such tools are:
optimize building performance and minimize environmental impacts
provide a way to quantify a building's environmental effects
set standards and benchmarks to assess buildings objectively
Furthermore, the result of such an assessment is to provide a certificate verifying the achievement of the desired performance and quality of the building. Some benefits of certifying a building or a property include:
the negative impacts of a building on the environment can be better understood and this knowledge can be utilized to reduce such impacts.
holistic sustainability considerations will be made for the fulfillment of technical, economic, social and functional requirements of the building.
promotes sustainable design and construction principles throughout the building lifecycle.
increases the monetary value of a building or a property in the real estate market.
== Building certification systems used around the world ==
=== Germany ===
The German Sustainable Building Council (Deutsche Gesellschaft für nachhaltiges Bauen e.V., DGNB) introduced its own green building certification in 2009 together with the German Federal Ministry of Transport, Building and Urban Development (Bundesministerium für Verkehr, Bau und Stadtentwicklung). The DGNB certification is voluntary and is based on German codes and standards (DIN and VDI). It is generally regarded as more comprehensive than BREEAM and LEED. The DGNB System is based on the three main paradigms of:
Life-cycle assessment
Holistic sustainability (environment, economy and society)
Performance-based approach.
It also takes economic aspects into consideration and, as such, assesses the associated life cycle costs and value creation of the building. It has six assessment categories and assigns different weights to each category indicator.
The assessment is done by an auditor who is appointed by the project contractor. The auditor supports the contractor and supervises the construction process from the initial registration up to the certification and the project conclusion.
=== Sweden ===
The Sweden Green Building Council introduced its own certification system in 2011 with Miljöbyggnad which is based on Swedish standards and legislations. It is currently in its 3rd iteration with Miljöbyggnad 3.1 released in April 2020.
Miljöbyggnad has three levels of certification: Bronze, Silver and Gold. It is used to certify both new and existing constructions. It assesses 3 categories, namely:
Energy consumption
Indoor environment
Materials and chemicals.
Among these categories there are 15 further sub-categories, each of which has its own set of requirements for each certification level. For example, for the "Energy use" sub-category, the Bronze level requires energy use to be less than 65% of the requirements of the BBR, the Swedish building code. After three years, a follow-up inspection is conducted to verify that the standards are still being met.
Besides Miljöbyggnad, the Sweden Green Building Council also administers the Swedish version of the British BREEAM adapted for the Swedish construction practices and standards, called BREEAM-SE. It was first introduced in Sweden in 2013 and is used to certify new constructions.
=== Taiwan ===
EEWH is Taiwan's green building label. The name is an abbreviation of "Ecology, Energy Saving, Waste Reduction, and Health", and the system was established in 1999 as the fourth green building certification in the world. It is currently the only green building evaluation system developed independently for tropical and subtropical climates, in particular for high-temperature, high-humidity climates. Apart from operating in Taiwan, there is also an international version of the EEWH certification for buildings abroad.
There are six types of EEWH certification: basic, accommodation, factory building, old building improvement, community, and an overseas version. Each type is based on the four major axes of ecology, energy saving, waste reduction, and health. Buildings are assessed against nine indicators (biodiversity, amount of greenery, site water conservation, daily energy saving, carbon dioxide reduction, waste reduction, indoor environment, water resources, and sewage and waste improvement) to determine one of five certification levels: Qualified, Bronze, Silver, Gold, and Diamond. Certified buildings are generally rewarded with a floor-area (volume) bonus: 2% for the Qualified level, 4% for Bronze, 6% for Silver, 8% for Gold, and 10% for Diamond.
=== United Kingdom ===
The Building Research Establishment Environmental Assessment Method (BREEAM) is recognized as the first Sustainability Assessment Method for buildings. It was launched in 1990 by the UK-based organization Building Research Establishment (BRE).
BREEAM certification is carried out on the basis of a scoring system in which projects are assessed against the following categories (with individual weightings differing by project type):
Management
Health and well-being
Hazards
Energy
Transport
Water
Materials
Waste
Land use and ecology
Pollution
Surface water run-off
Each category is sub-divided into a range of assessment indicators, each having its own aim, target and benchmarks. When a target or benchmark is reached, the asset is awarded credits (or points) by a qualified BREEAM assessor. The category score is calculated from the number of credits attained and the category weighting. Once the development has been fully assessed, the final performance rating is determined by the sum of the weighted category scores and is expressed on a scale ranging from Pass to Outstanding.
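A sketch of this weighted aggregation, assuming invented weights and credit counts for three of the categories (actual BREEAM weightings differ by scheme and project type):

```python
# Sketch of the weighted-score aggregation described above. Category
# weights and credit counts are invented for illustration only.

weights = {"Energy": 0.15, "Water": 0.07, "Materials": 0.125}   # fractions
achieved = {"Energy": 20, "Water": 5, "Materials": 8}           # credits earned
available = {"Energy": 30, "Water": 9, "Materials": 14}         # credits on offer

score = sum(weights[c] * achieved[c] / available[c] for c in weights)
print(f"weighted score: {score:.1%} (three categories only)")
```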
A qualified assessor evaluates a building or project and ensures that it meets the quality and performance standards of the selected scheme. In some countries, such as the Netherlands, Germany and Sweden, there are national operators that officially certify for BREEAM adapted to that country's standards, processes and construction methods.
BREEAM certification has also been made mandatory for governmental construction projects in the UK. According to the Common Minimum Standards for governmental construction, an environmental assessment is required on all public projects and further states that, "where BREEAM is used, all new projects are to achieve an 'excellent' rating and all refurbishment projects are to achieve at least 'very good' rating."
=== United States ===
In 1998, the US Green Building Council devised its own building certification system, the Leadership in Energy and Environmental Design (LEED) certification. It has its own set of criteria for assessment and utilizes the ASHRAE codes and standards. Due to its simplicity and ease of use, LEED quickly gained international recognition. Over the years, LEED has undergone many changes and is currently in its fourth iteration, launched in late 2013.
LEED rating systems differ according to the type of the project. The different types of rating systems fall under:
Building Design and Construction: For new construction or major renovations
Interior Design and Construction: For commercial interior fit-out projects
Building Operations and Maintenance: For existing buildings undergoing improvement but with little construction work
Neighborhood Development: For new land development projects or redevelopment projects
Homes: For single family, low-rise multi-family or mid-rise multi-family homes
Cities and Communities: For entire cities or sub-sections of a city. Assessment of a city's water consumption, energy use, waste, transportation etc.
LEED Recertification: For occupied and currently-in-use projects that have already received LEED certification but aiming to maintain and improve the building.
LEED Zero: For projects with net-zero goals in carbon emissions and resource use.
LEED certification is voluntary and a qualified assessor evaluates the projects on the basis of various established categories. These categories are as follows:
Integrative process
Location and transportation
Sustainable sites
Water efficiency
Energy and atmosphere
Materials and resources
Indoor environmental quality
Innovation in design
Regional priority
The four levels of LEED certification are: Platinum, Gold, Silver and Certified.
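As an illustration, the commonly cited LEED v4 point thresholds (40, 50, 60 and 80 points) map to these levels with a simple lookup:

```python
# Points-to-level lookup using the commonly cited LEED v4 thresholds
# (40/50/60/80 points); shown here purely for illustration.

def leed_level(points: int) -> str:
    if points >= 80:
        return "Platinum"
    if points >= 60:
        return "Gold"
    if points >= 50:
        return "Silver"
    if points >= 40:
        return "Certified"
    return "Not certified"

print(leed_level(63))   # -> Gold
```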
== Commercial Kitchens ==
EcoChef is a sustainability certification program specifically designed for commercial kitchens, focusing on energy efficiency, waste reduction, and health-conscious cooking environments. Launched in 2025, EcoChef evaluates kitchens based on a set of environmental and performance-based standards that align with broader decarbonization efforts in the foodservice industry. The certification process includes multiple levels—Bronze, Silver, Gold, and Platinum—each requiring increasingly rigorous sustainability practices.
EcoChef also offers professional accreditations, including the EcoChef Associate, EcoChef Culinarian, and EcoChef Practitioner, which equip culinary and design professionals with the knowledge to implement sustainable practices in their operations. Unlike general green building certifications such as LEED or WELL, EcoChef specifically addresses the unique challenges of commercial kitchens, including ventilation efficiency, water conservation, and the adoption of induction cooking and other energy-efficient technologies.
== References ==
Verification and validation of computer simulation models is conducted during the development of a simulation model with the ultimate goal of producing an accurate and credible model. Simulation models are increasingly being used to solve problems and to aid in decision-making. The developers and users of these models, the decision makers using information obtained from the results of these models, and the individuals affected by decisions based on such models are all rightly concerned with whether a model and its results are "correct". This concern is addressed through verification and validation of the simulation model.
Simulation models are approximate imitations of real-world systems and they never exactly imitate the real-world system. Due to that, a model should be verified and validated to the degree needed for the model's intended purpose or application.
The verification and validation of a simulation model starts after functional specifications have been documented and initial model development has been completed. Verification and validation is an iterative process that takes place throughout the development of a model.
== Verification ==
In the context of computer simulation, verification of a model is the process of confirming that it is correctly implemented with respect to the conceptual model (it matches specifications and assumptions deemed acceptable for the given purpose of application).
During verification the model is tested to find and fix errors in its implementation. Various processes and techniques are used to assure that the model matches its specifications and assumptions with respect to the model concept; the objective of model verification is to ensure that the implementation of the model is correct. There are many techniques that can be used to verify a model. These include, but are not limited to, having the model checked by an expert, making logic flow diagrams that include each logically possible action, examining the model output for reasonableness under a variety of settings of the input parameters, and using an interactive debugger. Many software engineering techniques used for software verification are applicable to simulation model verification.
== Validation ==
Validation checks the accuracy of the model's representation of the real system. Model validation is defined to mean "substantiation that a computerized model within its domain of applicability possesses a satisfactory range of accuracy consistent with the intended application of the model". A model should be built for a specific purpose or set of objectives and its validity determined for that purpose.
There are many approaches that can be used to validate a computer model. The approaches range from subjective reviews to objective statistical tests. One approach that is commonly used is to have the model builders determine validity of the model through a series of tests.
Naylor and Finger [1967] formulated a three-step approach to model validation that has been widely followed:
Step 1. Build a model that has high face validity.
Step 2. Validate model assumptions.
Step 3. Compare the model input-output transformations to corresponding input-output transformations for the real system.
=== Face validity ===
A model that has face validity appears to be a reasonable imitation of a real-world system to people who are knowledgeable about the real-world system. Face validity is tested by having users and people knowledgeable about the system examine model output for reasonableness and, in the process, identify deficiencies. An added advantage of involving users in validation is that the model's credibility to the users, and the users' confidence in the model, increases. Sensitivity to model inputs can also be used to judge face validity. For example, if a simulation of a fast-food restaurant drive-through were run twice with customer arrival rates of 20 per hour and 40 per hour, then model outputs such as average wait time or maximum number of customers waiting would be expected to increase with the arrival rate.
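A sketch of such a sensitivity check, using a single-server queue with exponential interarrival and service times (the 50-per-hour service rate is an invented parameter):

```python
import random

# Face-validity sensitivity check for a drive-through model: average
# wait should rise when the arrival rate doubles. Single-server queue,
# waiting time computed via the Lindley recurrence. Rates illustrative.

def avg_wait(arrivals_per_hr, services_per_hr=50, n=50_000, seed=1):
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(n):
        service = rng.expovariate(services_per_hr)
        interarrival = rng.expovariate(arrivals_per_hr)
        wait = max(0.0, wait + service - interarrival)  # next customer's wait
        total += wait
    return total / n   # hours

print(f"20/hr: {avg_wait(20)*60:.1f} min, 40/hr: {avg_wait(40)*60:.1f} min")
```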
=== Validation of model assumptions ===
Assumptions made about a model generally fall into two categories: structural assumptions about how the system works, and data assumptions. A third group, simplification assumptions, covers assumptions introduced deliberately to simplify reality.
==== Structural assumptions ====
Assumptions made about how the system operates and how it is physically arranged are structural assumptions. For example: how many servers are there in a fast-food drive-through lane, and if there is more than one, how are they utilized? Do the servers work in parallel, with a customer completing a transaction at a single server, or does one server take orders and handle payment while another prepares and serves the order? Many structural problems in a model come from poor or incorrect assumptions. If possible, the workings of the actual system should be closely observed to understand how it operates. The system's structure and operation should also be verified with users of the actual system.
==== Data assumptions ====
There must be a sufficient amount of appropriate data available to build a conceptual model and validate a model. Lack of appropriate data is often the reason attempts to validate a model fail. Data should be verified to come from a reliable source. A typical error is assuming an inappropriate statistical distribution for the data. The assumed statistical model should be tested using goodness of fit tests and other techniques. Examples of goodness of fit tests are the Kolmogorov–Smirnov test and the chi-square test. Any outliers in the data should be checked.
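A sketch of such a goodness-of-fit check using SciPy's Kolmogorov-Smirnov test on synthetic interarrival data (note that estimating the parameter from the same data biases the p-value; a Lilliefors-type correction is ignored here for brevity):

```python
import numpy as np
from scipy import stats

# Testing a data assumption: are observed interarrival times plausibly
# exponential? Kolmogorov-Smirnov test against the fitted distribution.
# The data here are synthetic, for illustration only.

rng = np.random.default_rng(0)
observed = rng.exponential(scale=3.0, size=200)      # stand-in field data

loc, scale = 0.0, observed.mean()                    # fit the scale parameter
stat, p = stats.kstest(observed, "expon", args=(loc, scale))
print(f"KS statistic={stat:.3f}, p-value={p:.3f}")   # large p: no evidence against fit
```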
==== Simplification assumptions ====
Simplification assumptions are those that are known not to be true but are needed to simplify the problem being solved. Their use must be restricted to ensure that the model remains correct enough to serve as an answer to the question being addressed.
=== Validating input-output transformations ===
The model is viewed as an input-output transformation for these tests. The validation test consists of comparing outputs from the system under consideration to model outputs for the same set of input conditions. Data recorded while observing the system must be available in order to perform this test. The model output of primary interest should be used as the measure of performance. For example, if the system under consideration is a fast-food drive-through where the model input is customer arrival time and the output measure of performance is average customer time in line, then the actual arrival times and times spent in line would be recorded. The model would be run with the actual arrival times and the model's average time in line would be compared with the actual average time spent in line using one or more tests.
==== Hypothesis testing ====
Statistical hypothesis testing using the t-test can be used as a basis to accept the model as valid or reject it as invalid.
The hypothesis to be tested is
H0 the model measure of performance = the system measure of performance
versus
H1 the model measure of performance ≠ the system measure of performance.
The test is conducted for a given sample size and level of significance, α. To perform the test, a number n of statistically independent runs of the model are conducted and an average or expected value, E(Y), for the variable of interest is produced. Then the test statistic t0 is computed from the given α, n, E(Y) and the observed system value μ0:
{\displaystyle t_{0}=(E(Y)-\mu _{0})/(S/{\sqrt {n}})}
and the critical value {\displaystyle t_{\alpha /2,n-1}} for α and n − 1 degrees of freedom is calculated.
If {\displaystyle \left\vert t_{0}\right\vert >t_{\alpha /2,n-1}}, reject H0: the model needs adjustment.
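The same test computed directly, with invented model outputs and system value:

```python
import math
from scipy import stats

# Two-sided t-test for model validity, as described above. The model
# outputs and system value mu0 are invented illustrative numbers.

model_runs = [4.2, 4.5, 3.9, 4.8, 4.1, 4.4, 4.0, 4.6]   # n independent replications
mu0 = 4.3                                                # observed system value
n = len(model_runs)
mean = sum(model_runs) / n
s = math.sqrt(sum((y - mean) ** 2 for y in model_runs) / (n - 1))

t0 = (mean - mu0) / (s / math.sqrt(n))
t_crit = stats.t.ppf(1 - 0.05 / 2, df=n - 1)             # alpha = 0.05

print(f"|t0| = {abs(t0):.3f}, critical value = {t_crit:.3f}")
if abs(t0) > t_crit:
    print("reject H0: model needs adjustment")
else:
    print("fail to reject H0: no evidence of invalidity at this alpha")
```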
There are two types of error that can occur using hypothesis testing: rejecting a valid model, called type I error or "model builder's risk", and accepting an invalid model, called type II error, β, or "model user's risk". The level of significance α is equal to the probability of a type I error. If α is small then rejecting the null hypothesis is a strong conclusion; for example, if α = 0.05 and the null hypothesis is rejected, there is only a 0.05 probability of rejecting a model that is valid. Decreasing the probability of a type II error is very important, since the probability of correctly detecting an invalid model is 1 − β. The probability of a type II error depends on the sample size and the actual difference between the sample value and the observed value; increasing the sample size decreases the risk of a type II error.
===== Model accuracy as a range =====
A statistical technique in which the amount of model accuracy is specified as a range has also been developed. The technique uses hypothesis testing to accept a model if the difference between a model's variable of interest and a system's variable of interest is within a specified range of accuracy. A requirement is that both the system data and model data be approximately normally, independently and identically distributed (NIID). The t-test statistic is used in this technique. If the mean of the model is μm and the mean of the system is μs, then the difference between the model and the system is D = μm − μs. The hypothesis to be tested is whether D is within the acceptable range of accuracy. Let L = the lower limit for accuracy and U = the upper limit for accuracy. Then
H0 L ≤ D ≤ U
versus
H1 D < L or D > U
is to be tested.
The operating characteristic (OC) curve is the probability that the null hypothesis is accepted when it is true. The OC curve characterizes the probabilities of both type I and type II errors. Risk curves for the model builder's risk and the model user's risk can be developed from the OC curves. By comparing curves with a fixed sample size, tradeoffs between model builder's risk and model user's risk can be seen easily in the risk curves. If the model builder's risk, the model user's risk, and the upper and lower limits for the range of accuracy are all specified, then the needed sample size can be calculated.
==== Confidence intervals ====
Confidence intervals can be used to evaluate whether a model is "close enough" to a system for some variable of interest. The difference between the known model value, μ0, and the system value, μ, is checked to see whether it is less than a value small enough that the model is valid with respect to that variable of interest; this value is denoted by the symbol ε. To perform the test, a number n of statistically independent runs of the model are conducted and a mean or expected value, E(Y) or μ, for the simulation output variable of interest Y, with a standard deviation S, is produced. A confidence level is selected, 100(1−α). An interval, [a,b], is constructed by
{\displaystyle a=E(Y)-t_{\alpha /2,n-1}\,S/{\sqrt {n}}\qquad {\text{and}}\qquad b=E(Y)+t_{\alpha /2,n-1}\,S/{\sqrt {n}},}
where
{\displaystyle t_{\alpha /2,n-1}}
is the critical value from the t-distribution for the given level of significance and n-1 degrees of freedom.
If |a-μ0| > ε and |b-μ0| > ε then the model needs to be calibrated since in both cases the difference is larger than acceptable.
If |a-μ0| < ε and |b-μ0| < ε then the model is acceptable, since in both cases the difference is acceptably small.
If |a-μ0| < ε and |b-μ0| > ε or vice versa then additional runs of the model are needed to shrink the interval.
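A sketch of this interval-based decision rule, with invented numbers:

```python
import math
from scipy import stats

# Interval check described above: build a 95% CI for the model mean and
# compare both endpoints to the system value mu0 with tolerance eps.
# All numbers are invented for illustration.

runs = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4]
mu0, eps, alpha = 10.0, 0.6, 0.05

n = len(runs)
mean = sum(runs) / n
s = math.sqrt(sum((y - mean) ** 2 for y in runs) / (n - 1))
half = stats.t.ppf(1 - alpha / 2, n - 1) * s / math.sqrt(n)
a, b = mean - half, mean + half

da, db = abs(a - mu0), abs(b - mu0)
if da > eps and db > eps:
    print("calibrate the model")        # both endpoints too far from mu0
elif da < eps and db < eps:
    print("model acceptable")           # whole interval within tolerance
else:
    print("run more replications")      # interval too wide to decide
```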
==== Graphical comparisons ====
If statistical assumptions cannot be satisfied, or there is insufficient data for the system, a graphical comparison of model outputs to system outputs can be used to make a subjective decision; however, other objective tests are preferable.
== ASME Standards ==
Documents and standards involving verification and validation of computational modeling and simulation are developed by the American Society of Mechanical Engineers (ASME) Verification and Validation (V&V) Committee. ASME V&V 10 provides guidance in assessing and increasing the credibility of computational solid mechanics models through the processes of verification, validation, and uncertainty quantification. ASME V&V 10.1 provides a detailed example to illustrate the concepts described in ASME V&V 10. ASME V&V 20 provides a detailed methodology for validating computational simulations as applied to fluid dynamics and heat transfer. ASME V&V 40 provides a framework for establishing model credibility requirements for computational modeling, and presents examples specific to the medical device industry.
== See also ==
Verification and validation
Software verification and validation
== References ==
The experience sampling method (ESM), also referred to as a daily diary method, or ecological momentary assessment (EMA), is an intensive longitudinal research methodology that involves asking participants to report on their thoughts, feelings, behaviors, and/or environment on multiple occasions over time. Participants report on their thoughts, feelings, behaviors, and/or environment in the moment (right then, not later; right there, not elsewhere) or shortly thereafter. Participants can be given a journal with many identical pages. Each page can have a psychometric scale, open-ended questions, or anything else used to assess their condition in that place and time. ESM studies can also operate fully automatized on portable electronic devices or via the internet. The experience sampling method was developed by Suzanne Prescott during doctoral work at University of Chicago's Committee on Human Development with assistance from her dissertation advisor Mihaly Csikszentmihalyi. Early studies that used ESM were coauthored by fellow students Reed W. Larson and Ronald Graef, whose dissertations both used the method.
== Overview ==
There are different ways to signal participants when to take notes in their journal or complete a questionnaire, such as preprogrammed stopwatches. An observer can carry an identically programmed stopwatch to record specific events at the same moments the participants are recording their feelings or other behaviors. It is best to avoid letting subjects know in advance when they will record their feelings, so that they cannot anticipate the event and are simply "acting naturally" when they stop and take notes on their current condition. Conversely, some statistical techniques require roughly equidistant time intervals, which has the limitation that assessments can be anticipated. Validity in these studies comes from repetition, which lets researchers look for patterns, such as participants reporting greater happiness right after meals. Because ESM itself only shows correlation, such patterns can then be tested for cause and effect by other means, such as vector autoregression. For instance, Stieger and Reips were able to replicate and refine past research about the dynamics of well-being fluctuations during the day (low in the morning, high in the evening) and over the course of a week (low just before the beginning of the week, highest near the end of the week). The method can also be used to study the use of mobile devices in research itself: Stieger and colleagues used experience sampling to show that smartphones can transfer computer-based tasks (CBTs) from the lab to the field.
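A sketch of a signal-contingent sampling schedule of the kind described, with illustrative parameters (number of daily prompts and waking-hours window are assumptions):

```python
import random
from datetime import datetime, timedelta

# Sketch: k prompts per day at unpredictable times within waking hours,
# so participants cannot anticipate them. Parameters are illustrative.

def daily_prompts(day, k=6, start_h=9, end_h=21, seed=None):
    rng = random.Random(seed)
    window = (end_h - start_h) * 60                  # minutes available
    minutes = sorted(rng.sample(range(window), k))   # distinct random times
    base = day.replace(hour=start_h, minute=0, second=0, microsecond=0)
    return [base + timedelta(minutes=m) for m in minutes]

for t in daily_prompts(datetime(2024, 5, 6), seed=42):
    print(t.strftime("%H:%M"))
```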
Some authors also use the term experience sampling to encompass passive data derived from sources such as smartphones, wearable sensors, the Internet of Things, email and social media that do not require explicit input from participants. These methods can be advantageous as they impose less demand on participants improving compliance and allowing data to be collected for much longer periods, are less likely to change the behaviour being studied and allow data to be sampled at much higher rates and with greater precision. Many research questions can benefit from both active and passive forms of experience sampling.
== In clinical practice ==
Increasingly, ESM is being tested as a clinical monitoring tool in psychiatric and psychological treatments. Patients then use ESM to monitor themselves for several weeks or months and discuss feedback based on their ESM data with their clinician. Patients and clinicians are enthusiastic about the clinical use of ESM. Qualitative studies suggest ESM may increase insight and awareness, help personalize treatments, and improve communication between patient and clinician. ESM may be viewed as an improved form of registration and monitoring already often used in psychiatric treatments, and may therefore be an excellent fit. Randomized controlled trials so far show mixed evidence for the efficacy of ESM in improving symptoms and functioning in patients with depression, although many more trials in diverse clinical populations are currently underway.
Several tools are being developed to aid clinicians in using personalized ESM diaries in treatment such as PETRA and m-Path. PETRA is a Dutch tool with which patients and clinicians can construct a personalized ESM diary and examine personalized feedback together. PETRA is developed in collaboration with patients and clinicians and integrated in electronic personal health records (PHR) to facilitate easy access. m-Path is a freely accessible flexible platform to facilitate real-time monitoring as well as real-life interventions. Practitioners are able to create new questionnaires and interventions from scratch or can use existing templates shared by the community.
== See also ==
Ambulatory assessment
Diary studies
Event sampling methodology
List of psychological research methods
Quantified self
== References ==
Energy management includes planning and operation of energy production and energy consumption units as well as energy distribution and storage. Energy management is performed via Energy Management Systems (EMS), which are designed with hardware and software components to implement the tasks. Energy Management can be classified into Building Energy Management, Grid-scale Energy Management (including Grid energy storage), and Marine Energy Management.
Energy management objectives are resource conservation, climate protection and cost savings, while the users have permanent access to the energy they need. It is connected closely to environmental management, production management, logistics and other established business functions. The VDI-Guideline 4602 released a definition which includes the economic dimension: "Energy management is the proactive, organized and systematic coordination of procurement, conversion, distribution and use of energy to meet the requirements, taking into account environmental and economic objectives". It is a systematic endeavor to optimize energy efficiency for specific political, economic, and environmental objectives through Engineering and Management techniques.
== Energy efficiency ==
=== Base line of energy assessment ===
One of the initial steps of an effective energy cost control program is the baseline energy assessment, which examines the pattern of existing energy usage by a government, a sub-entity of government, or a private organization. This assessment sets the reference point for improvements in energy efficiency and enables benchmarking of energy use at every level, such as by area, sub-area, or across the industry.
== Organizational integration ==
It is important to integrate energy management into the organizational structure so that it can actually be implemented. Responsibilities and the interaction of decision makers should be formalized, with functions and competencies delegated from top management down to the executive worker. Comprehensive coordination can further ensure that these tasks are fulfilled.
It is advisable to establish a separate organizational unit for energy management in large or energy-intensive companies. This unit supports senior management and keeps track of energy-related activities. Where this unit sits depends on the basic form of the organizational structure. In a functional organization the unit is located directly between the first (CEO) and the second hierarchical level (corporate functions such as production, procurement, marketing). In a divisional organization, there should be a central unit and several sector-specific energy management units, so that the diverse needs of the individual sectors and the coordination between the branches and the head office can be addressed. In a matrix organization the energy management can be included as a matrix function and thus interact with most functions directly.
== Energy management in operational functions ==
=== Facility management ===
Facility management is an important part of energy management, because a large proportion (on average 25 percent) of total operating costs are energy costs. According to the International Facility Management Association (IFMA), facility management is "a profession that encompasses multiple disciplines to ensure functionality of the built environment by integrating people, place, processes and technology."
The central task of energy management here is to reduce the cost of providing energy in buildings and facilities without compromising work processes. In particular, the availability and service life of the equipment and its ease of use should remain the same. The German Facility Management Association (GEFMA e.V.) has published guidelines (e.g. GEFMA 124-1 and 124-2) that contain methods and procedures for integrating energy management into successful facility management. In this area the facility manager has to balance economic, ecological, risk-based and quality-based targets, and tries to minimize the total cost of the energy-related processes (supply, distribution and use).
The most important key figure in this context is kilowatt-hours per square meter per year (kWh/m2a). Based on this key figure properties can be classified according to their energy consumption.
Europe: In Germany a low-energy house can have a maximum energy consumption of 70 kWh/m2a.
North America: In the United States, the ENERGY STAR program is the largest program defining low-energy homes. Homes earning ENERGY STAR certification use at least 15% less energy than standard new homes built to the International Residential Code, although homes typically achieve 20–30% savings.
In comparison, the passive house ultra-low-energy standard, currently undergoing adoption in some other European countries, has a maximum space heating requirement of 15 kWh/m2a. A passive house is a very well insulated and virtually airtight building. It does not require a conventional heating system. It is heated by solar gain and internal gains from people. Energy losses are minimized.
There are also buildings that produce more energy over the course of a year (for example through solar water heating or photovoltaic systems) than they import from external sources. These buildings are called energy-plus houses.
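A small sketch applying the kWh/m2a key figure with the thresholds mentioned above (simplified, since the passive-house limit applies specifically to space heating):

```python
# Classify a building by the key figure kWh/m2a using the thresholds
# cited above (70 for a German low-energy house, 15 for the
# passive-house space-heating limit). Simplified for illustration.

def classify(annual_kwh: float, floor_area_m2: float) -> str:
    intensity = annual_kwh / floor_area_m2          # kWh/m2a
    if intensity <= 0:
        return "energy-plus house (net exporter)"
    if intensity <= 15:
        return "passive house range"
    if intensity <= 70:
        return "low-energy house"
    return "conventional building"

print(classify(9_000, 150))    # 60 kWh/m2a -> low-energy house
```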
In addition, work regulations manage competencies, roles and responsibilities. Because the systems also include risk factors (e.g. oil tanks, gas lines), it must be ensured that all tasks are clearly described and distributed. Clear regulation can help to avoid liability risks.
=== Logistics ===
Logistics is the management of the flow of resources between the point of origin and the point of destination in order to meet some requirements - for example of customers or corporations. Especially the core logistics task, transportation of the goods, can save costs and protect the environment through efficient energy management. The relevant factors are the choice of means of transportation, duration and length of transportation, and cooperation with logistics service providers.
Transport logistics accounts for more than 24% of CO2 emissions worldwide. For this reason Green Logistics has become more prevalent in business.
Possible courses of action in terms of green logistics are:
Shift to ecofriendly transport carrier such as railroad and waterway
Route and load optimization
Formation of corporate networks that are connected by logistics services
Optimizing physical logistics processes by providing sophisticated IT support
Besides the transportation of goods, the transport of people should be an important part of an organization's logistics strategy. For business trips, attention should be paid to the choice and proportionality of the means of transport; it can be weighed whether a physical presence is mandatory or whether a telephone or video conference would serve just as well. Working from home is another option through which a company can protect the environment indirectly.
=== Energy procurement ===
Procurement is the acquisition of goods or services. Energy prices fluctuate constantly, which can significantly affect the energy bill of organizations. Therefore, poor energy procurement decisions can be expensive. Organizations can control and reduce energy costs by taking a proactive and efficient approach to buying energy. Even a change of the energy source can be a profitable and eco-friendly alternative.
=== Production ===
Production is the act of creating output, a good or service which has value and contributes to the utility of individuals. This central process differs by industry. Industrial companies have facilities that require a lot of energy, whereas service companies do not need many materials; their energy-related focus is mainly facility management or green IT. The energy-related focus therefore has to be identified first, then evaluated and optimized.
=== Production planning and control ===
Production is typically the area with the largest energy consumption within an organization, so production planning and control becomes very important. It deals with the operational, temporal, quantitative and spatial planning, control and management of all processes that are necessary in the production of goods and commodities. The production planner should plan production processes so that they operate in an energy-efficient way; for example, a large power consumer can be shifted to night-time operation. Peaks should be avoided in favor of a level load profile.
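A toy illustration of this load-shifting idea: movable consumers are scheduled into the least-loaded slots instead of the peak (all numbers invented):

```python
# Toy load-shifting pass: compare running shiftable consumers at the
# peak hour versus moving them into the valley (e.g. night). Tariff-free
# simplification; all figures invented for illustration.

baseline = [30, 28, 25, 60, 75, 70, 65, 40]   # kW fixed load per slot
movable = [20, 15]                            # kW of shiftable consumers

peak_slot = baseline.index(max(baseline))
naive = baseline[:]
for job in movable:
    naive[peak_slot] += job                   # "run when convenient": worsens the peak

shifted = baseline[:]
for job in movable:
    shifted[shifted.index(min(shifted))] += job   # move into the current valley

print("naive peak:", max(naive), "kW | shifted peak:", max(shifted), "kW")
# -> naive peak: 110 kW | shifted peak: 75 kW
```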
The impending changes in the structure of energy production create an increasing demand for storage capacity, and production planning and control has to deal with the problem of the limited storability of energy. In principle, energy can be stored electrically, mechanically, or chemically. Another trend-setting technology is lithium-based electrochemical storage, which can be used in electric vehicles or as an option for stabilizing the power grid. The German Federal Ministry of Economics and Technology recognized the significance of this topic and established an initiative aiming to promote technological breakthroughs and support the rapid introduction of new energy storage technologies.
=== Maintenance ===
Maintenance is the combination of all technical and administrative actions, including supervision actions, intended to retain an item in, or restore it to, a state in which it can perform a required function. Detailed maintenance is essential to support the energy management. Hereby power losses and cost increases can be avoided.
==== Energy management challenge ====
Energy efficiency and management is key for industries worldwide.
Examples of how it is possible to save energy and costs with the help of maintenance:
Defrost refrigerators
Check the tire pressure of cars and trucks
Insulate hot systems
Repair leaks in building envelopes
== Energy strategies ==
A long-term energy strategy should be part of the overall strategy of a company. This strategy may include the objective of increasing the use of renewable energies. Furthermore, criteria for decisions on energy investments, such as yield expectations, are determined. By formulating an energy strategy companies have the opportunity to avoid risks and to assure a competitive advance against their business rivals.
=== Potential energy strategies ===
According to Kals, the following energy strategies can be distinguished:
Passive Strategy: There is no systematic planning. The issue of energy and environmental management is not perceived as an independent field of action. The organization only deals with the most essential subjects.
Strategy of short-term profit maximization: The management is concentrating exclusively on measures that have a relatively short payback period and a high return. Measures with low profitability are not considered.
Strategy of long-term profit maximization: This strategy requires thorough knowledge of energy price and technology developments. The relevant measures (for example, heat exchangers or power stations) can have lifetimes of several decades. Moreover, these measures can help to improve the company's image and increase the motivation of its employees.
Realization of all financially attractive energy measures: This strategy aims to implement all measures that have a positive return on investment.
Maximum strategy: For the sake of climate protection, the organization is willing to change even its core business.
In practice, hybrid forms of these strategies are usually found.
=== Energy strategies of companies ===
Many companies try to promote their image and at the same time protect the climate through a proactive and public energy strategy. General Motors' (GM) strategy is based on continuous improvement and rests on six principles, including restoring and preserving the environment, reducing waste and pollutants, educating the public about environmental conservation, and collaborating in the development of environmental laws and regulations.
Nokia created its first climate strategy in 2006. The strategy evaluates the energy consumption and greenhouse gas emissions of products and operations and sets reduction targets accordingly. Its environmental efforts are based on four key issues: substance management, energy efficiency, recycling, and promoting environmental sustainability.
The energy strategy of Volkswagen (VW) is based on environmentally friendly products and a resource-efficient production according to the "Group Strategy 2018". Almost all locations of the Group are certified to the international standard ISO 14001 for environmental management systems.
When looking at the energy strategies of companies, it is important to keep the topic of greenwashing in mind. Greenwashing is a form of propaganda in which green strategies are used to promote the perception that an organization's aims are environmentally friendly.
=== Energy strategies of governments ===
Many countries also formulate energy strategies. The Swiss Federal Council decided in May 2011 to phase out nuclear energy in the medium term: the existing nuclear power plants will be shut down at the end of their service life and will not be replaced. To compensate, the focus is placed on energy efficiency, renewable energies, fossil energy sources, and the development of hydropower.
The European Union has set clear targets for its member states. The "20-20-20" targets require Member States to reduce greenhouse gas emissions by 20% below 1990 levels, increase energy efficiency by 20%, and achieve a 20% share of renewable energy in total energy consumption by 2020.
=== Ethical and normative basis of the energy strategies ===
The basis of every energy strategy is the corporate culture and the associated ethical standards that apply within the company. Ethics, in the sense of business ethics, examines ethical principles and the moral or ethical issues that arise in a business environment. Ethical standards can appear in company guidelines, energy and environmental policies, or other documents.
The most relevant ethical ideas for the energy management are:
Utilitarianism: This form of ethics holds that those actions are good or right whose consequences are optimal for the welfare of all affected by them (the principle of maximum happiness). In terms of energy management, the existence of external costs should be considered. These do not directly affect those who profit from the economic activity but rather non-participants such as future generations. This failure of the market mechanism can be addressed by internalizing external costs (a numerical sketch of internalization follows this list).
Argumentation ethics: This fundamental ethical idea holds that everyone affected by a decision must be involved in making it. This takes place in a fair dialogue whose outcome is entirely open.
Deontological ethics: Deontological ethics assigns certain obligations to individuals and organizations. A general example is the Golden Rule: "One should treat others as one would like others to treat oneself." Everyone should therefore fulfill their duties and make their own contribution to the energy economy.
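To make the internalization idea from the utilitarianism item concrete, here is a minimal Python sketch of adding a per-unit levy equal to the external cost; all prices are assumed illustrative values, not figures from the text.

```python
# Minimal sketch of internalizing an external cost via a per-unit levy.
# All prices are assumed values for illustration.
private_cost = 50.0   # $/MWh paid by the producer
external_cost = 20.0  # $/MWh borne by non-participants (e.g. future generations)

social_cost = private_cost + external_cost
levy = external_cost                      # levy set equal to the externality
internalized_price = private_cost + levy

print(internalized_price == social_cost)  # True: the price now reflects social cost
```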
== See also ==
Energy demand management
Energy management (degree) – specialized degree for those working in the petroleum industry
Energy management system
Energy storage as a service (ESaaS)
Federal Energy Management Program
Power management – by electrical devices
Management of energy in a particular context:
Hotel energy management
Marine energy management
== References ==
Heating, ventilation, and air conditioning (HVAC) is the use of various technologies to control the temperature, humidity, and purity of the air in an enclosed space. Its goal is to provide thermal comfort and acceptable indoor air quality. HVAC system design is a subdiscipline of mechanical engineering, based on the principles of thermodynamics, fluid mechanics, and heat transfer. "Refrigeration" is sometimes added to the field's abbreviation as HVAC&R or HVACR, or "ventilation" is dropped, as in HACR (as in the designation of HACR-rated circuit breakers).
HVAC is an important part of residential structures such as single family homes, apartment buildings, hotels, and senior living facilities; medium to large industrial and office buildings such as skyscrapers and hospitals; vehicles such as cars, trains, airplanes, ships and submarines; and in marine environments, where safe and healthy building conditions are regulated with respect to temperature and humidity, using fresh air from outdoors.
Ventilating or ventilation (the "V" in HVAC) is the process of exchanging or replacing air in any space to provide high indoor air quality which involves temperature control, oxygen replenishment, and removal of moisture, odors, smoke, heat, dust, airborne bacteria, carbon dioxide, and other gases. Ventilation removes unpleasant smells and excessive moisture, introduces outside air, and keeps interior air circulating. Building ventilation methods are categorized as mechanical (forced) or natural.
== Overview ==
The three major functions of heating, ventilation, and air conditioning are interrelated, especially with the need to provide thermal comfort and acceptable indoor air quality within reasonable installation, operation, and maintenance costs. HVAC systems can be used in both domestic and commercial environments. HVAC systems can provide ventilation, and maintain pressure relationships between spaces. The means of air delivery and removal from spaces is known as room air distribution.
=== Individual systems ===
In modern buildings, the design, installation, and control systems of these functions are integrated into one or more HVAC systems. For very small buildings, contractors normally estimate the capacity and type of system needed and then design the system, selecting the appropriate refrigerant and various components needed. For larger buildings, building service designers, mechanical engineers, or building services engineers analyze, design, and specify the HVAC systems. Specialty mechanical contractors and suppliers then fabricate, install and commission the systems. Building permits and code-compliance inspections of the installations are normally required for all sizes of buildings.
=== District networks ===
Although HVAC is executed in individual buildings or other enclosed spaces (like NORAD's underground headquarters), the equipment involved is in some cases an extension of a larger district heating (DH) or district cooling (DC) network, or a combined DHC network. In such cases, the operating and maintenance aspects are simplified and metering becomes necessary to bill for the energy that is consumed, and in some cases energy that is returned to the larger system. For example, at a given time one building may be utilizing chilled water for air conditioning and the warm water it returns may be used in another building for heating, or for the overall heating-portion of the DHC network (likely with energy added to boost the temperature).
Basing HVAC on a larger network helps provide an economy of scale that is often not possible for individual buildings, for utilizing renewable energy sources such as solar heat, winter's cold, the cooling potential in some places of lakes or seawater for free cooling, and the enabling function of seasonal thermal energy storage. Utilizing natural sources for HVAC can significantly benefit the environment and promote awareness of alternative methods.
== History ==
HVAC is based on inventions and discoveries made by Nikolay Lvov, Michael Faraday, Rolla C. Carpenter, Willis Carrier, Edwin Ruud, Reuben Trane, James Joule, William Rankine, Sadi Carnot, Alice Parker and many others.
Multiple inventions within this time frame preceded the beginnings of the first comfort air conditioning system, which was designed in 1902 by Alfred Wolff (Cooper, 2003) for the New York Stock Exchange, while Willis Carrier equipped the Sacketts-Wilhems Printing Company with the process AC unit the same year. Coyne College was the first school to offer HVAC training in 1899. The first residential AC was installed by 1914, and by the 1950s there was "widespread adoption of residential AC".
The invention of the components of HVAC systems went hand-in-hand with the Industrial Revolution, and new methods of modernization, higher efficiency, and system control are constantly being introduced by companies and inventors worldwide.
== Heating ==
Heaters are appliances whose purpose is to generate heat (i.e. warmth) for the building. This can be done via central heating. Such a system contains a boiler, furnace, or heat pump to heat water, steam, or air in a central location such as a furnace room in a home, or a mechanical room in a large building. The heat can be transferred by convection, conduction, or radiation. Space heaters are used to heat single rooms and only consist of a single unit.
=== Generation ===
Heaters exist for various types of fuel, including solid fuels, liquids, and gases. Another type of heat source is electricity, normally heating ribbons composed of high resistance wire (see Nichrome). This principle is also used for baseboard heaters and portable heaters. Electrical heaters are often used as backup or supplemental heat for heat pump systems.
The heat pump gained popularity in the 1950s in Japan and the United States. Heat pumps can extract heat from various sources, such as environmental air, exhaust air from a building, or from the ground. Heat pumps transfer heat from outside the structure into the air inside. Initially, heat pump HVAC systems were only used in moderate climates, but with improvements in low temperature operation and reduced loads due to more efficient homes, they are increasing in popularity in cooler climates. They can also operate in reverse to cool an interior.
=== Distribution ===
==== Water/steam ====
In the case of heated water or steam, piping is used to transport the heat to the rooms. Most modern hot water boiler heating systems have a circulator, which is a pump, to move hot water through the distribution system (as opposed to older gravity-fed systems). The heat can be transferred to the surrounding air using radiators, hot water coils (hydro-air), or other heat exchangers. The radiators may be mounted on walls or installed within the floor to produce floor heat.
The use of water as the heat transfer medium is known as hydronics. The heated water can also supply an auxiliary heat exchanger to supply hot water for bathing and washing.
==== Air ====
Warm air systems distribute the heated air through ductwork systems of supply and return air through metal or fiberglass ducts. Many systems use the same ducts to distribute air cooled by an evaporator coil for air conditioning. The air supply is normally filtered through air filters to remove dust and pollen particles.
=== Dangers ===
The use of furnaces, space heaters, and boilers as a method of indoor heating could result in incomplete combustion and the emission of carbon monoxide, nitrogen oxides, formaldehyde, volatile organic compounds, and other combustion byproducts. Incomplete combustion occurs when there is insufficient oxygen; the inputs are fuels containing various contaminants and the outputs are harmful byproducts, most dangerously carbon monoxide, which is a tasteless and odorless gas with serious adverse health effects.
Without proper ventilation, carbon monoxide can be lethal at concentrations of 1000 ppm (0.1%). However, at several hundred ppm, carbon monoxide exposure induces headaches, fatigue, nausea, and vomiting. Carbon monoxide binds with hemoglobin in the blood, forming carboxyhemoglobin, reducing the blood's ability to transport oxygen. The primary health concerns associated with carbon monoxide exposure are its cardiovascular and neurobehavioral effects. Carbon monoxide can cause atherosclerosis (the hardening of arteries) and can also trigger heart attacks. Neurologically, carbon monoxide exposure reduces hand to eye coordination, vigilance, and continuous performance. It can also affect time discrimination.
== Ventilation ==
Ventilation is the process of changing or replacing air in any space to control the temperature or remove any combination of moisture, odors, smoke, heat, dust, airborne bacteria, or carbon dioxide, and to replenish oxygen. It plays a critical role in maintaining a healthy indoor environment by preventing the buildup of harmful pollutants and ensuring the circulation of fresh air. Different methods, such as natural ventilation through windows and mechanical ventilation systems, can be used depending on the building design and air quality needs. Ventilation often refers to the intentional delivery of the outside air to the building indoor space. It is one of the most important factors for maintaining acceptable indoor air quality in buildings.
Although ventilation plays a key role in indoor air quality, it may not be sufficient on its own. A clear understanding of both indoor and outdoor air quality parameters is needed to improve the performance of ventilation in terms of ... In scenarios where outdoor pollution would deteriorate indoor air quality, other treatment devices such as filtration may also be necessary.
Methods for ventilating a building may be divided into mechanical/forced and natural types.
=== Mechanical or forced ===
Mechanical, or forced, ventilation is provided by an air handler (AHU) and used to control indoor air quality. Excess humidity, odors, and contaminants can often be controlled via dilution or replacement with outside air. However, in humid climates more energy is required to remove excess moisture from ventilation air.
Kitchens and bathrooms typically have mechanical exhausts to control odors and sometimes humidity. Factors in the design of such systems include the flow rate (which is a function of the fan speed and exhaust vent size) and noise level. Direct drive fans are available for many applications and can reduce maintenance needs.
In summer, ceiling fans and table/floor fans circulate air within a room for the purpose of reducing the perceived temperature by increasing evaporation of perspiration on the skin of the occupants. Because hot air rises, ceiling fans may be used to keep a room warmer in the winter by circulating the warm stratified air from the ceiling to the floor.
=== Passive ===
Natural ventilation is the ventilation of a building with outside air without using fans or other mechanical systems. It can be via operable windows, louvers, or trickle vents when spaces are small and the architecture permits. ASHRAE defines natural ventilation as the flow of air through open windows, doors, grilles, and other planned building envelope penetrations, driven by natural and/or artificially produced pressure differentials.
Natural ventilation strategies also include cross ventilation, which relies on wind pressure differences on opposite sides of a building. By strategically placing openings, such as windows or vents, on opposing walls, air is channeled through the space to enhance cooling and ventilation. Cross ventilation is most effective when there are clear, unobstructed paths for airflow within the building.
In more complex schemes, warm air is allowed to rise and flow out high building openings to the outside (stack effect), causing cool outside air to be drawn into low building openings. Natural ventilation schemes can use very little energy, but care must be taken to ensure comfort. In warm or humid climates, maintaining thermal comfort solely via natural ventilation might not be possible. Air conditioning systems are used, either as backups or supplements. Air-side economizers also use outside air to condition spaces, but do so using fans, ducts, dampers, and control systems to introduce and distribute cool outdoor air when appropriate.
An important component of natural ventilation is the air change rate, or air changes per hour: the hourly rate of ventilation divided by the volume of the space. For example, six air changes per hour means an amount of new air equal to the volume of the space is added every ten minutes. For human comfort, a minimum of four air changes per hour is typical, though warehouses might have only two. Too high an air change rate may be uncomfortable, akin to a wind tunnel, which has thousands of changes per hour. The highest air change rates are for crowded spaces such as bars, night clubs, and commercial kitchens, at around 30 to 50 air changes per hour.
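The arithmetic is simple enough to show directly; the following Python sketch uses assumed room dimensions and airflow, chosen to reproduce the six-air-changes example above.

```python
# Minimal sketch of the air-change-rate arithmetic.
# Room dimensions and airflow are assumed values for illustration.
room_volume_m3 = 5.0 * 4.0 * 2.5            # a 5 m x 4 m room with a 2.5 m ceiling
airflow_m3_per_hour = 300.0                 # ventilation delivered to the room

ach = airflow_m3_per_hour / room_volume_m3  # air changes per hour
minutes_per_change = 60.0 / ach             # time to replace one room volume

print(ach, minutes_per_change)              # 6.0 ACH -> one change every 10 minutes
```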
Room pressure can be either positive or negative with respect to outside the room. Positive pressure occurs when there is more air being supplied than exhausted, and is common to reduce the infiltration of outside contaminants.
=== Airborne diseases ===
Natural ventilation is a key factor in reducing the spread of airborne illnesses such as tuberculosis, the common cold, influenza, meningitis, and COVID-19. Opening doors and windows is a good way to maximize natural ventilation, which can make the risk of airborne contagion much lower than with costly, maintenance-requiring mechanical systems. Old-fashioned clinical areas with high ceilings and large windows provide the greatest protection. Because natural ventilation costs little and is virtually maintenance-free, it is particularly suited to limited-resource settings and tropical climates, where the burden of TB and institutional TB transmission is highest. In settings where respiratory isolation is difficult and climate permits, windows and doors should be opened to reduce the risk of airborne contagion.
In much of the built infrastructure, however, natural ventilation is not practical because of climate. Such facilities need effective mechanical ventilation systems and/or ceiling-level UV or far-UV disinfection systems.
Ventilation is measured in terms of Air Changes Per Hour (ACH). As of 2023, the CDC recommends that all spaces have a minimum of 5 ACH. For hospital rooms with airborne contagions the CDC recommends a minimum of 12 ACH. The challenges in facility ventilation are public unawareness, ineffective government oversight, poor building codes that are based on comfort levels, poor system operations, poor maintenance, and lack of transparency.
UVC, or ultraviolet germicidal irradiation, is a function used in some modern air conditioners to reduce airborne viruses, bacteria, and fungi by means of a built-in LED UV light that irradiates the evaporator. As the cross-flow fan circulates the room air, airborne microorganisms guided through the sterilization module's irradiation range are inactivated.
== Air conditioning ==
An air conditioning system, or a standalone air conditioner, provides cooling and/or humidity control for all or part of a building. Air conditioned buildings often have sealed windows, because open windows would work against a system intended to maintain constant indoor air conditions. Outside fresh air is generally drawn into the system through a vent into a mixed-air chamber, where it mixes with the return air from the space. The mixture then enters an indoor or outdoor heat-exchanger section, where the air is cooled before being guided into the space, creating positive air pressure. The percentage of return air made up of fresh air can usually be manipulated by adjusting the opening of this vent. Typical fresh air intake is about 10% of the total supply air.
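As a rough illustration of that mixing, the following Python sketch computes the mixed-air temperature for an assumed 10% outdoor-air fraction; the temperatures are hypothetical.

```python
# Minimal mixed-air sketch for an assumed 10% outdoor-air fraction.
# Temperatures are assumed values for illustration.
fresh_fraction = 0.10
t_outdoor_c = 32.0  # outdoor air temperature
t_return_c = 24.0   # space return-air temperature

t_mixed_c = fresh_fraction * t_outdoor_c + (1 - fresh_fraction) * t_return_c
print(t_mixed_c)    # 24.8 degC entering the cooling coil
```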
Air conditioning and refrigeration are provided through the removal of heat. Heat can be removed through radiation, convection, or conduction. The heat transfer media used in refrigeration systems, such as water, air, ice, and chemicals, are referred to as refrigerants. A refrigerant is employed either in a heat pump system, in which a compressor drives the thermodynamic refrigeration cycle, or in a free cooling system that uses pumps to circulate a cool refrigerant (typically water or a glycol mix).
It is imperative that the air conditioning capacity is sufficient for the area being cooled, as an underpowered system will waste power and run inefficiently.
=== Refrigeration cycle ===
The refrigeration cycle uses four essential elements to cool, which are compressor, condenser, metering device, and evaporator.
At the inlet of a compressor, the refrigerant inside the system is in a low pressure, low temperature, gaseous state. The compressor pumps the refrigerant gas up to high pressure and temperature.
From there it enters a heat exchanger (sometimes called a condensing coil or condenser) where it loses heat to the outside, cools, and condenses into its liquid phase.
An expansion valve (also called metering device) regulates the refrigerant liquid to flow at the proper rate.
The liquid refrigerant is returned to another heat exchanger where it is allowed to evaporate, hence the heat exchanger is often called an evaporating coil or evaporator. As the liquid refrigerant evaporates it absorbs heat from the inside air, returns to the compressor, and repeats the cycle. In the process, heat is absorbed from indoors and transferred outdoors, resulting in cooling of the building.
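The energy balance of this cycle is commonly summarized by the coefficient of performance; the following LaTeX fragment states the standard relations (generic thermodynamics, not specific to any particular equipment; temperatures are absolute):

```latex
% Cooling COP: heat absorbed at the evaporator per unit of compressor work,
% bounded above by the reversed-Carnot limit between the reservoir temperatures
% (T in kelvin).
\mathrm{COP}_{\text{cooling}} = \frac{Q_{\text{evap}}}{W_{\text{comp}}},
\qquad
\mathrm{COP}_{\text{Carnot}} = \frac{T_{\text{cold}}}{T_{\text{hot}} - T_{\text{cold}}}
```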
In variable climates, the system may include a reversing valve that switches from heating in winter to cooling in summer. By reversing the flow of refrigerant, the heat pump refrigeration cycle is changed from cooling to heating or vice versa. This allows a facility to be heated and cooled by a single piece of equipment by the same means, and with the same hardware.
=== Free cooling ===
Free cooling systems can have very high efficiencies, and are sometimes combined with seasonal thermal energy storage so that the cold of winter can be used for summer air conditioning. Common storage mediums are deep aquifers or a natural underground rock mass accessed via a cluster of small-diameter, heat-exchanger-equipped boreholes. Some systems with small storages are hybrids, using free cooling early in the cooling season, and later employing a heat pump to chill the circulation coming from the storage. The heat pump is added-in because the storage acts as a heat sink when the system is in cooling (as opposed to charging) mode, causing the temperature to gradually increase during the cooling season.
Some systems include an "economizer mode", which is sometimes called a "free-cooling mode". When economizing, the control system will open (fully or partially) the outside air damper and close (fully or partially) the return air damper. This will cause fresh, outside air to be supplied to the system. When the outside air is cooler than the demanded cool air, this will allow the demand to be met without using the mechanical supply of cooling (typically chilled water or a direct expansion "DX" unit), thus saving energy. The control system can compare the temperature of the outside air vs. return air, or it can compare the enthalpy of the air, as is frequently done in climates where humidity is more of an issue. In both cases, the outside air must be less energetic than the return air for the system to enter the economizer mode.
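The temperature-comparison variant of this decision can be sketched in a few lines of Python; the function name and example values are hypothetical, not from any real control vendor.

```python
# Minimal sketch of an economizer ("free cooling") decision using the
# temperature-comparison variant described above.
def economizer_mode(outside_temp_c: float, return_temp_c: float,
                    cooling_demanded: bool) -> bool:
    """Return True when outside air should displace mechanical cooling."""
    # Outside air must be less energetic (here: cooler) than return air,
    # and the zone must actually be calling for cooling.
    return cooling_demanded and outside_temp_c < return_temp_c

print(economizer_mode(18.0, 24.0, cooling_demanded=True))  # True: use outside air
print(economizer_mode(30.0, 24.0, cooling_demanded=True))  # False: mechanical cooling
```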
=== Packaged split system ===
Central, "all-air" air-conditioning systems (or package systems) with a combined outdoor condenser/evaporator unit are often installed in North American residences, offices, and public buildings, but are difficult to retrofit (install in a building that was not designed to receive it) because of the bulky air ducts required. (Minisplit ductless systems are used in these situations.) Outside of North America, packaged systems are only used in limited applications involving large indoor space such as stadiums, theatres or exhibition halls.
An alternative to packaged systems is the use of separate indoor and outdoor coils in split systems. Split systems are preferred and widely used worldwide except in North America. In North America, split systems are most often seen in residential applications, but they are gaining popularity in small commercial buildings. Split systems are used where ductwork is not feasible or where the space conditioning efficiency is of prime concern. The benefits of ductless air conditioning systems include easy installation, no ductwork, greater zonal control, flexibility of control, and quiet operation. In space conditioning, the duct losses can account for 30% of energy consumption. The use of minisplits can result in energy savings in space conditioning as there are no losses associated with ducting.
With the split system, the evaporator coil is connected to a remote condenser unit using refrigerant piping between an indoor and outdoor unit instead of ducting air directly from the outdoor unit. Indoor units with directional vents mount onto walls, suspended from ceilings, or fit into the ceiling. Other indoor units mount inside the ceiling cavity so that short lengths of duct handle air from the indoor unit to vents or diffusers around the rooms.
Split systems are more efficient and the footprint is typically smaller than the package systems. On the other hand, package systems tend to have a slightly lower indoor noise level compared to split systems since the fan motor is located outside.
=== Dehumidification ===
Dehumidification (air drying) in an air conditioning system is provided by the evaporator. Since the evaporator operates at a temperature below the dew point, moisture in the air condenses on the evaporator coil tubes. This moisture is collected at the bottom of the evaporator in a pan and removed by piping to a central drain or onto the ground outside.
A dehumidifier is an air-conditioner-like device that controls the humidity of a room or building. It is often employed in basements that have a higher relative humidity because of their lower temperature (and propensity for damp floors and walls). In food retailing establishments, large open chiller cabinets are highly effective at dehumidifying the internal air. Conversely, a humidifier increases the humidity of a building.
The HVAC components that dehumidify the ventilation air deserve careful attention because outdoor air constitutes most of the annual humidity load for nearly all buildings.
=== Humidification ===
=== Maintenance ===
All modern air conditioning systems, even small window package units, are equipped with internal air filters. These are generally of a lightweight gauze-like material, and must be replaced or washed as conditions warrant. For example, a building in a high dust environment, or a home with furry pets, will need to have the filters changed more often than buildings without these dirt loads. Failure to replace these filters as needed will contribute to a lower heat exchange rate, resulting in wasted energy, shortened equipment life, and higher energy bills; low air flow can result in iced-over evaporator coils, which can completely stop airflow. Additionally, very dirty or plugged filters can cause overheating during a heating cycle, which can result in damage to the system or even fire.
Because an air conditioner moves heat between the indoor coil and the outdoor coil, both must be kept clean. This means that, in addition to replacing the air filter at the evaporator coil, it is also necessary to regularly clean the condenser coil. Failure to keep the condenser clean will eventually result in harm to the compressor because the condenser coil is responsible for discharging both the indoor heat (as picked up by the evaporator) and the heat generated by the electric motor driving the compressor.
== Energy efficiency ==
HVAC is significantly responsible for promoting energy efficiency of buildings as the building sector consumes the largest percentage of global energy. Since the 1980s, manufacturers of HVAC equipment have been making an effort to make the systems they manufacture more efficient. This was originally driven by rising energy costs, and has more recently been driven by increased awareness of environmental issues. Additionally, improvements to the HVAC system efficiency can also help increase occupant health and productivity. In the US, the EPA has imposed tighter restrictions over the years. There are several methods for making HVAC systems more efficient.
=== Heating energy ===
In the past, water heating was more efficient for heating buildings and was the standard in the United States. Today, forced air systems can double for air conditioning and are more popular.
Some benefits of forced air systems, which are now widely used in churches, schools, and high-end residences, are
Better air conditioning effects
Energy savings of up to 15–20%
Even conditioning
A drawback is the installation cost, which can be slightly higher than traditional HVAC systems.
Energy efficiency can be improved even more in central heating systems by introducing zoned heating. This allows a more granular application of heat, similar to non-central heating systems. Zones are controlled by multiple thermostats. In water heating systems the thermostats control zone valves, and in forced air systems they control zone dampers inside the vents which selectively block the flow of air. In this case, the control system is very critical to maintaining a proper temperature.
Forecasting is another method of controlling building heating by calculating the demand for heating energy that should be supplied to the building in each time unit.
=== Ground source heat pump ===
Ground source, or geothermal, heat pumps are similar to ordinary heat pumps, but instead of transferring heat to or from outside air, they rely on the stable, even temperature of the earth to provide heating and air conditioning. Many regions experience seasonal temperature extremes, which would require large-capacity heating and cooling equipment to heat or cool buildings. For example, a conventional heat pump system used to heat a building in Montana's −57 °C (−70 °F) low temperature or cool a building in the highest temperature ever recorded in the US—57 °C (134 °F) in Death Valley, California, in 1913 would require a large amount of energy due to the extreme difference between inside and outside air temperatures. A metre below the earth's surface, however, the ground remains at a relatively constant temperature. Utilizing this large source of relatively moderate temperature earth, a heating or cooling system's capacity can often be significantly reduced. Although ground temperatures vary according to latitude, at 1.8 metres (6 ft) underground, temperatures generally only range from 7 to 24 °C (45 to 75 °F).
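To see why a moderate ground temperature reduces the required capacity, the following Python sketch compares ideal (Carnot) heating COPs for assumed air-source and ground-source temperatures; real machines achieve only a fraction of these upper bounds.

```python
# Minimal sketch comparing ideal (Carnot) heating COPs for an air source
# vs. a stable ground source; temperatures are assumed values.
def carnot_heating_cop(t_hot_c: float, t_cold_c: float) -> float:
    t_hot_k = t_hot_c + 273.15   # Carnot limits require absolute temperatures
    t_cold_k = t_cold_c + 273.15
    return t_hot_k / (t_hot_k - t_cold_k)

indoor_c = 20.0
print(carnot_heating_cop(indoor_c, -20.0))  # ~7.3 with -20 degC winter air
print(carnot_heating_cop(indoor_c, 10.0))   # ~29.3 with 10 degC ground
```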
=== Solar air conditioning ===
Photovoltaic solar panels offer a new way to potentially decrease the operating cost of air conditioning. Traditional air conditioners run using alternating current, and hence, any direct-current solar power needs to be inverted to be compatible with these units. New variable-speed DC-motor units allow solar power to more easily run them since this conversion is unnecessary, and since the motors are tolerant of voltage fluctuations associated with variance in supplied solar power (e.g., due to cloud cover).
=== Ventilation energy recovery ===
Energy recovery systems sometimes utilize heat recovery ventilation or energy recovery ventilation systems that employ heat exchangers or enthalpy wheels to recover sensible or latent heat from exhausted air. This is done by transfer of energy from the stale air inside the home to the incoming fresh air from outside.
=== Air conditioning energy ===
The performance of vapor compression refrigeration cycles is limited by thermodynamics. These air conditioning and heat pump devices move heat rather than convert it from one form to another, so thermal efficiencies do not appropriately describe their performance. The coefficient of performance (COP) measures this, but the dimensionless COP has not been widely adopted for rating equipment; instead, the Energy Efficiency Ratio (EER) has traditionally been used to characterize the performance of many HVAC systems. EER is the Energy Efficiency Ratio based on a 35 °C (95 °F) outdoor temperature. To more accurately describe the performance of air conditioning equipment over a typical cooling season, a modified version of the EER, the Seasonal Energy Efficiency Ratio (SEER), or in Europe the ESEER, is used. SEER ratings are based on seasonal temperature averages instead of a constant 35 °C (95 °F) outdoor temperature. The current industry minimum SEER rating is 14 SEER. Engineers have pointed out some areas where the efficiency of the existing hardware could be improved. For example, the fan blades used to move the air are usually stamped from sheet metal, an economical method of manufacture, but as a result they are not aerodynamically efficient. A well-designed blade could reduce the electrical power required to move the air by a third.
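Since EER mixes BTU/h with watts while COP is dimensionless, the two are related by a fixed unit conversion; a minimal Python sketch (the example ratings are assumed values):

```python
# Minimal sketch converting between COP (dimensionless) and EER
# (BTU/h of cooling per watt of electrical input).
BTU_PER_HOUR_PER_WATT = 3.412  # standard unit-conversion factor

def eer_from_cop(cop: float) -> float:
    return cop * BTU_PER_HOUR_PER_WATT

def cop_from_eer(eer: float) -> float:
    return eer / BTU_PER_HOUR_PER_WATT

print(eer_from_cop(4.0))   # ~13.6 EER for a COP-4 unit at the rating point
print(cop_from_eer(14.0))  # ~4.1 COP for a 14-EER unit
```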
=== Demand-controlled kitchen ventilation ===
Demand-controlled kitchen ventilation (DCKV) is a building controls approach to controlling the volume of kitchen exhaust and supply air in response to the actual cooking loads in a commercial kitchen. Traditional commercial kitchen ventilation systems operate at 100% fan speed regardless of the volume of cooking activity; DCKV technology changes that to provide significant savings in fan energy and conditioned air. By deploying smart sensing technology, both the exhaust and supply fans can be controlled to capitalize on the affinity laws for motor energy savings, reduce makeup air heating and cooling energy, increase safety, and reduce ambient kitchen noise levels.
== Air filtration and cleaning ==
Air cleaning and filtration remove particles, contaminants, vapors, and gases from the air. The filtered and cleaned air is then used in heating, ventilation, and air conditioning. Air cleaning and filtration should be taken into account when protecting building environments, since contaminants can be released from HVAC systems if they are not removed or filtered properly.
Clean air delivery rate (CADR) is the amount of clean air an air cleaner provides to a room or space. When determining CADR, the amount of airflow in a space is taken into account. For example, an air cleaner with a flow rate of 30 cubic metres (1,000 cu ft) per minute and an efficiency of 50% has a CADR of 15 cubic metres (500 cu ft) per minute. Along with CADR, filtration performance is very important when it comes to the air in our indoor environment. This depends on the size of the particle or fiber, the filter packing density and depth, and the airflow rate.
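The CADR arithmetic in the example above is just airflow times single-pass removal efficiency; a minimal Python sketch reproducing it:

```python
# Minimal sketch of the CADR arithmetic from the example above.
def cadr(airflow: float, single_pass_efficiency: float) -> float:
    """Clean air delivery rate: airflow times single-pass removal efficiency."""
    return airflow * single_pass_efficiency

print(cadr(30.0, 0.50))  # 15.0 m^3/min, matching the worked example in the text
```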
== Industry and standards ==
The HVAC industry is a worldwide enterprise, with roles including operation and maintenance, system design and construction, equipment manufacturing and sales, and in education and research. The HVAC industry was historically regulated by the manufacturers of HVAC equipment, but regulating and standards organizations such as HARDI (Heating, Air-conditioning and Refrigeration Distributors International), ASHRAE, SMACNA, ACCA (Air Conditioning Contractors of America), Uniform Mechanical Code, International Mechanical Code, and AMCA have been established to support the industry and encourage high standards and achievement. (UL as an omnibus agency is not specific to the HVAC industry.)
The starting point in carrying out an estimate both for cooling and heating depends on the exterior climate and interior specified conditions. However, before taking up the heat load calculation, it is necessary to find fresh air requirements for each area in detail, as pressurization is an important consideration.
=== International ===
ISO 16813:2006 is one of the ISO building environment standards. It establishes the general principles of building environment design. It takes into account the need to provide a healthy indoor environment for the occupants as well as the need to protect the environment for future generations and promote collaboration among the various parties involved in building environmental design for sustainability. ISO16813 is applicable to new construction and the retrofit of existing buildings.
The building environmental design standard aims to:
provide the constraints concerning sustainability issues from the initial stage of the design process, with building and plant life cycle to be considered together with owning and operating costs from the beginning of the design process;
assess the proposed design with rational criteria for indoor air quality, thermal comfort, acoustical comfort, visual comfort, energy efficiency, and HVAC system controls at every stage of the design process;
iterate decisions and evaluations of the design throughout the design process.
=== United States ===
==== Licensing ====
In the United States, licensing for the installation and servicing of HVAC devices is generally handled at the federal level through EPA certification.
Many U.S. states have licensing for boiler operation. Some of these are listed as follows:
Arkansas
Georgia
Michigan
Minnesota
Montana
New Jersey
North Dakota
Ohio
Oklahoma
Oregon
Finally, some U.S. cities may have additional labor laws that apply to HVAC professionals.
==== Societies ====
Many HVAC engineers are members of the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE). ASHRAE regularly holds two annual technical conferences and publishes recognized standards for HVAC design, which are updated every four years.
Another popular society is AHRI, which provides regular information on new refrigeration technology, and publishes relevant standards and codes.
==== Codes ====
American design standards are legislated in the Uniform Mechanical Code or International Mechanical Code. In certain states, counties, or cities, either of these codes may be adopted and amended via various legislative processes. These codes are updated and published by the International Association of Plumbing and Mechanical Officials (IAPMO) or the International Code Council (ICC) respectively, on a 3-year code development cycle. Typically, local building permit departments are charged with enforcement of these standards on private and certain public properties.
Codes such as the UMC and IMC do include much detail on installation requirements, however. Other useful reference materials include items from SMACNA, ACGIH, and technical trade journals.
==== Technicians ====
An HVAC technician is a tradesman who specializes in heating, ventilation, air conditioning, and refrigeration. HVAC technicians in the US can receive training through formal training institutions, where most earn associate degrees. Training for HVAC technicians includes classroom lectures and hands-on tasks, and can be followed by an apprenticeship wherein the recent graduate works alongside a professional HVAC technician for a temporary period. HVAC techs who have been trained can also be certified in areas such as air conditioning, heat pumps, gas heating, and commercial refrigeration.
=== United Kingdom ===
The Chartered Institution of Building Services Engineers is a body that covers the essential services (systems architecture) that allow buildings to operate. It includes the electrotechnical, heating, ventilating, air conditioning, refrigeration, and plumbing industries. To train as a building services engineer, the academic requirements are GCSEs (A–C) / Standard Grades (1–3) in Maths and Science, which are important in measurements, planning, and theory. Employers will often want a degree in a branch of engineering, such as building environment engineering, electrical engineering, or mechanical engineering. To become a full member of CIBSE, and so also to be registered by the Engineering Council UK as a chartered engineer, engineers must also attain an Honours Degree and a master's degree in a relevant engineering subject. CIBSE publishes several guides to HVAC design relevant to the UK market, and also the Republic of Ireland, Australia, New Zealand, and Hong Kong. These guides include various recommended design criteria and standards, some of which are cited within the UK building regulations, and therefore form a legislative requirement for major building services works. The main guides are:
Guide A: Environmental Design
Guide B: Heating, Ventilating, Air Conditioning and Refrigeration
Guide C: Reference Data
Guide D: Transportation systems in Buildings
Guide E: Fire Safety Engineering
Guide F: Energy Efficiency in Buildings
Guide G: Public Health Engineering
Guide H: Building Control Systems
Guide J: Weather, Solar and Illuminance Data
Guide K: Electricity in Buildings
Guide L: Sustainability
Guide M: Maintenance Engineering and Management
Within the construction sector, it is the job of the building services engineer to design and oversee the installation and maintenance of the essential services such as gas, electricity, water, heating, and lighting, as well as many others. These all help to make buildings comfortable and healthy places to live and work in. Building services is part of a sector that has over 51,000 businesses and represents 2–3% of GDP.
=== Australia ===
The Air Conditioning and Mechanical Contractors Association of Australia (AMCA), the Australian Institute of Refrigeration, Air Conditioning and Heating (AIRAH), the Australian Refrigeration Mechanical Association, and CIBSE are the responsible bodies.
=== Asia ===
Asian architectural temperature-control methods have different priorities than European ones. For example, Asian heating traditionally focuses on maintaining the temperature of objects such as the floor or furnishings (such as kotatsu tables) and on directly warming people, as opposed to the modern Western focus on designing air systems.
==== Philippines ====
The Philippine Society of Ventilating, Air Conditioning and Refrigerating Engineers (PSVARE) along with Philippine Society of Mechanical Engineers (PSME) govern on the codes and standards for HVAC / MVAC (MVAC means "mechanical ventilation and air conditioning") in the Philippines.
==== India ====
The Indian Society of Heating, Refrigerating and Air Conditioning Engineers (ISHRAE) was established to promote the HVAC industry in India. ISHRAE is an associate of ASHRAE. ISHRAE was founded in New Delhi in 1981, and a chapter was started in Bangalore in 1989. Between 1989 and 1993, ISHRAE chapters were formed in all major cities in India.
== See also ==
== References ==
== Further reading ==
International Mechanical Code (2012 (Second Printing)) by the International Code Council, Thomson Delmar Learning.
Modern Refrigeration and Air Conditioning (August 2003) by Althouse, Turnquist, and Bracciano, Goodheart-Wilcox Publisher; 18th edition.
The Cost of Cool.
What is LEV?
== External links ==
Media related to Climate control at Wikimedia Commons
Energy modeling or energy system modeling is the process of building computer models of energy systems in order to analyze them. Such models often employ scenario analysis to investigate different assumptions about the technical and economic conditions at play. Outputs may include the system feasibility, greenhouse gas emissions, cumulative financial costs, natural resource use, and energy efficiency of the system under investigation. A wide range of techniques are employed, ranging from broadly economic to broadly engineering. Mathematical optimization is often used to determine the least-cost in some sense. Models can be international, regional, national, municipal, or stand-alone in scope. Governments maintain national energy models for energy policy development.
Energy models are usually intended to contribute variously to system operations, engineering design, or energy policy development. This page concentrates on policy models. Individual building energy simulations are explicitly excluded, although they too are sometimes called energy models. IPCC-style integrated assessment models, which also contain a representation of the world energy system and are used to examine global transformation pathways through to 2050 or 2100, are not considered here in detail.
Energy modeling has increased in importance as the need for climate change mitigation has grown in importance. The energy supply sector is the largest contributor to global greenhouse gas emissions. The IPCC reports that climate change mitigation will require a fundamental transformation of the energy supply system, including the substitution of unabated (not captured by CCS) fossil fuel conversion technologies by low-GHG alternatives.
== Model types ==
A wide variety of model types are in use. This section attempts to categorize the key types and their usage. The divisions provided are not hard and fast and mixed-paradigm models exist. In addition, the results from more general models can be used to inform the specification of more detailed models, and vice versa, thereby creating a hierarchy of models. Models may, in general, need to capture "complex dynamics such as:
energy system operation
technology stock turnover
technology innovation
firm and household behaviour
energy and non-energy capital investment and labour market adjustment dynamics leading to economic restructuring
infrastructure deployment and urban planning": S28–S29 : point form added
Models may be limited in scope to the electricity sector or they may attempt to cover an energy system in its entirety (see below).
Most energy models are used for scenario analysis. A scenario is a coherent set of assumptions about a possible system. New scenarios are tested against a baseline scenario – normally business-as-usual (BAU) – and the differences in outcome noted.
The time horizon of the model is an important consideration. Single-year models – set in either the present or the future (say 2050) – assume a non-evolving capital structure and focus instead on the operational dynamics of the system. Single-year models normally embed considerable temporal (typically hourly resolution) and technical detail (such as individual generation plant and transmissions lines). Long-range models – cast over one or more decades (from the present until say 2050) – attempt to encapsulate the structural evolution of the system and are used to investigate capacity expansion and energy system transition issues.
Models often use mathematical optimization to solve for redundancy in the specification of the system. Some of the techniques used derive from operations research. Most rely on linear programming (including mixed-integer programming), although some use nonlinear programming. Solvers may use classical or genetic optimisation, such as CMA-ES. Models may be recursive-dynamic, solving sequentially for each time interval, and thus evolving through time. Or they may be framed as a single forward-looking intertemporal problem, and thereby assume perfect foresight. Single-year engineering-based models usually attempt to minimize the short-run financial cost, while single-year market-based models use optimization to determine market clearing. Long-range models, usually spanning decades, attempt to minimize both the short and long-run costs as a single intertemporal problem.
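As a concrete illustration of the least-cost framing, here is a minimal single-period dispatch linear program in Python using scipy; the generator costs, capacities, and demand are assumed toy values, not data from any established model.

```python
# Minimal sketch of least-cost dispatch as a linear program.
# Costs, capacities, and demand are assumed toy values.
from scipy.optimize import linprog

costs = [20.0, 50.0]        # $/MWh for a cheap and an expensive unit
capacities = [80.0, 100.0]  # MW nameplate limits
demand = 120.0              # MW load to be met

# Minimize total cost subject to: generation meets demand exactly,
# and 0 <= g_i <= capacity_i for each unit.
result = linprog(
    c=costs,
    A_eq=[[1.0, 1.0]], b_eq=[demand],
    bounds=list(zip([0.0, 0.0], capacities)),
)

print(result.x)    # [80. 40.]: cheap unit at full output, remainder from the other
print(result.fun)  # 3600.0 total dispatch cost
```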
The demand-side (or end-user domain) has historically received relatively scant attention, often modeled by just a simple demand curve. End-user energy demand curves, in the short-run at least, are normally found to be highly inelastic.
As intermittent energy sources and energy demand management grow in importance, models have needed to adopt an hourly temporal resolution in order to better capture their real-time dynamics. Long-range models are often limited to calculations at yearly intervals, based on typical day profiles, and are hence less suited to systems with significant variable renewable energy. Day-ahead dispatching optimization is used to aid in the planning of systems with a significant portion of intermittent energy production in which uncertainty around future energy predictions is accounted for using stochastic optimization.
Implementing languages include GAMS, MathProg, MATLAB, Mathematica, Python, Pyomo, R, Fortran, Java, C, C++, and Vensim. Occasionally spreadsheets are used.
As noted, IPCC-style integrated models (also known as integrated assessment models or IAM) are not considered here in any detail. Integrated models combine simplified sub-models of the world economy, agriculture and land-use, and the global climate system in addition to the world energy system. Examples include GCAM, MESSAGE, and REMIND.
Published surveys on energy system modeling have focused on techniques, general classification, an overview, decentralized planning, modeling methods, renewables integration, energy efficiency policies, electric vehicle integration, international development, and the use of layered models to support climate protection policy. Deep Decarbonization Pathways Project researchers have also analyzed model typologies.: S30–S31 A 2014 paper outlines the modeling challenges ahead as energy systems become more complex and human and social factors become increasingly relevant.
=== Electricity sector models ===
Electricity sector models are used to model electricity systems. The scope may be national or regional, depending on circumstances. For instance, given the presence of national interconnectors, the western European electricity system may be modeled in its entirety.
Engineering-based models usually contain a good characterization of the technologies involved, including the high-voltage AC transmission grid where appropriate. Some models (for instance, models for Germany) may assume a single common bus or "copper plate" where the grid is strong. The demand-side in electricity sector models is typically represented by a fixed load profile.
Market-based models, in addition, represent the prevailing electricity market, which may include nodal pricing.
Game theory and agent-based models are used to capture and study strategic behavior within electricity markets.
=== Energy system models ===
In addition to the electricity sector, energy system models include the heat, gas, mobility, and other sectors as appropriate. Energy system models are often national in scope, but may be municipal or international.
So-called top-down models are broadly economic in nature and based on either partial equilibrium or general equilibrium. General equilibrium models represent a specialized activity and require dedicated algorithms. Partial equilibrium models are more common.
So-called bottom-up models capture the engineering well and often rely on techniques from operations research. Individual plants are characterized by their efficiency curves (also known as input/output relations), nameplate capacities, investment costs (capex), and operating costs (opex). Some models allow for these parameters to depend on external conditions, such as ambient temperature.
Producing hybrid top-down/bottom-up models to capture both the economics and the engineering has proved challenging.
== Established models ==
This section lists some of the major models in use. These are typically run by national governments.
In a community effort, a large number of existing energy system models were collected in model fact sheets on the Open Energy Platform.
=== LEAP ===
LEAP, the Low Emissions Analysis Platform (formerly known as the Long-range Energy Alternatives Planning System) is a software tool for energy policy analysis, air pollution abatement planning and climate change mitigation assessment.
LEAP was developed at the Stockholm Environment Institute's (SEI) US Center. LEAP can be used to examine city, statewide, national, and regional energy systems. LEAP is normally used for studies spanning 20–50 years. Most of its calculations occur at yearly intervals. LEAP allows policy analysts to create and evaluate alternative scenarios and to compare their energy requirements, social costs and benefits, and environmental impacts. As of June 2021, LEAP has over 6000 users in 200 countries and territories.
=== Power system simulation ===
General Electric's MAPS (Multi-Area Production Simulation) is a production simulation model used by various Regional Transmission Organizations and Independent System Operators in the United States to plan for the economic impact of proposed electric transmission and generation facilities in FERC-regulated electric wholesale markets. Portions of the model may also be used for the commitment and dispatch phase (updated on 5 minute intervals) in operation of wholesale electric markets for RTO and ISO regions. ABB's PROMOD is a similar software package. These ISO and RTO regions also utilize a GE software package called MARS (Multi-Area Reliability Simulation) to ensure the power system meets reliability criteria (a loss of load expectation (LOLE) of no greater than 0.1 days per year). Further, a GE software package called PSLF (Positive Sequence Load Flow) and a Siemens software package called PSSE (Power System Simulation for Engineering) analyze load flow on the power system for short-circuits and stability during preliminary planning studies by RTOs and ISOs.
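The LOLE criterion mentioned above can be illustrated with a minimal Monte Carlo sketch in Python; the unit sizes, outage rate, and load are assumed toy values, not parameters of MARS or any real system.

```python
# Minimal Monte Carlo sketch of a loss-of-load-expectation (LOLE) estimate.
# Unit sizes, outage rate, and load are assumed toy values.
import random

random.seed(0)
capacities = [60.0] * 5    # five 60 MW generating units
forced_outage_rate = 0.05  # probability a unit is unavailable on a given day
peak_load = 100.0          # MW daily peak, assumed constant

days_per_year, years = 365, 10_000
loss_days = 0
for _ in range(days_per_year * years):
    available = sum(c for c in capacities if random.random() > forced_outage_rate)
    if available < peak_load:
        loss_days += 1

# Expect roughly 0.01 days/year here, well inside the 0.1 days/year criterion.
print(loss_days / years)
```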
=== MARKAL/TIMES ===
MARKAL (MARKet ALlocation) is an integrated energy systems modeling platform, used to analyze energy, economic, and environmental issues at the global, national, and municipal level over time-frames of up to several decades. MARKAL can be used to quantify the impacts of policy options on technology development and natural resource depletion. The software was developed by the Energy Technology Systems Analysis Programme (ETSAP) of the International Energy Agency (IEA) over a period of almost two decades.
TIMES (The Integrated MARKAL-EFOM System) is an evolution of MARKAL – both energy models have many similarities. TIMES succeeded MARKAL in 2008. Both models are technology explicit, dynamic partial equilibrium models of energy markets. In both cases, the equilibrium is determined by maximizing the total consumer and producer surplus via linear programming. Both MARKAL and TIMES are written in GAMS.
The TIMES model generator was also developed under the Energy Technology Systems Analysis Program (ETSAP). TIMES combines two different, but complementary, systematic approaches to modeling energy – a technical engineering approach and an economic approach. TIMES is a technology rich, bottom-up model generator, which uses linear programming to produce a least-cost energy system, optimized according to a number of user-specified constraints, over the medium to long-term. It is used for "the exploration of possible energy futures based on contrasted scenarios".: 7
As of 2015, the MARKAL and TIMES model generators are in use in 177 institutions spread over 70 countries.: 5
=== NEMS ===
NEMS (National Energy Modeling System) is a long-standing United States government policy model, run by the Department of Energy (DOE). NEMS computes equilibrium fuel prices and quantities for the US energy sector. To do so, the software iteratively solves a sequence of linear programs and nonlinear equations. NEMS has been used to explicitly model the demand-side, in particular to determine consumer technology choices in the residential and commercial building sectors.
NEMS is used to produce the Annual Energy Outlook each year, such as the 2015 edition.
== Criticisms ==
Public policy energy models have been criticized for being insufficiently transparent. The source code and data sets should at least be available for peer review, if not explicitly published. To improve transparency and public acceptance, some models are undertaken as open-source software projects, often developing a diverse community as they proceed. OSeMOSYS is an example of such a model. The Open Energy Outlook is an open community that has produced a long-term outlook of the U.S. energy system using the open-source TEMOA model.
While not a criticism per se, it is important to understand that model results do not constitute predictions of the future.
== See also ==
General
Climate change mitigation – actions to limit long-term climate change
Climate change mitigation scenarios – possible futures in which global warming is reduced by deliberate actions
Economic model
Energy system – the interpretation of the energy sector in system terms
Energy Modeling Forum – a Stanford University-based modeling forum
Open Energy Modelling Initiative – an open source energy modeling initiative, centered on Europe
Open energy system databases – database projects which collect, clean, and republish energy-related datasets
Open energy system models – a review of energy system models that are also open source
Power system simulation
Models
iNEMS (Integrated National Energy Modeling System) – a national energy model for China
MARKAL – an energy model
NEMS – the US government national energy model
POLES (Prospective Outlook on Long-term Energy Systems) – an energy sector world simulation model
KAPSARC Energy Model – an energy sector model for Saudi Arabia
== Further reading ==
Introductory video on open energy system modeling with python language example
Introductory video with reference to public policy
== References ==
== External links ==
COST TD1207 Mathematical Optimization in the Decision Support Systems for Efficient and Robust Energy Networks wiki – a typology for optimization models
EnergyPLAN — a freeware energy model from the Department of Development and Planning, Aalborg University, Denmark
Open Energy Modelling Initiative open models page – a list of open energy models
model.energy — an online "toy" model utilizing the PyPSA framework that allows the public to experiment
Building Energy Modeling Tools by National Renewable Energy Laboratory | Wikipedia/Energy_modeling |
In analytical mechanics (particularly Lagrangian mechanics), generalized forces are conjugate to generalized coordinates. They are obtained from the applied forces Fi, i = 1, …, n, acting on a system that has its configuration defined in terms of generalized coordinates. In the formulation of virtual work, each generalized force is the coefficient of the variation of a generalized coordinate.
== Virtual work ==
Generalized forces can be obtained from the computation of the virtual work, δW, of the applied forces.: 265
The virtual work of the forces, Fi, acting on the particles Pi, i = 1, ..., n, is given by
{\displaystyle \delta W=\sum _{i=1}^{n}\mathbf {F} _{i}\cdot \delta \mathbf {r} _{i}}
where δri is the virtual displacement of the particle Pi.
=== Generalized coordinates ===
Let the position vectors of each of the particles, ri, be a function of the generalized coordinates, qj, j = 1, ..., m. Then the virtual displacements δri are given by
{\displaystyle \delta \mathbf {r} _{i}=\sum _{j=1}^{m}{\frac {\partial \mathbf {r} _{i}}{\partial q_{j}}}\delta q_{j},\quad i=1,\ldots ,n,}
where δqj is the virtual displacement of the generalized coordinate qj.
The virtual work for the system of particles becomes
{\displaystyle \delta W=\mathbf {F} _{1}\cdot \sum _{j=1}^{m}{\frac {\partial \mathbf {r} _{1}}{\partial q_{j}}}\delta q_{j}+\dots +\mathbf {F} _{n}\cdot \sum _{j=1}^{m}{\frac {\partial \mathbf {r} _{n}}{\partial q_{j}}}\delta q_{j}.}
Collect the coefficients of δqj so that
{\displaystyle \delta W=\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {r} _{i}}{\partial q_{1}}}\delta q_{1}+\dots +\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {r} _{i}}{\partial q_{m}}}\delta q_{m}.}
=== Generalized forces ===
The virtual work of a system of particles can be written in the form
{\displaystyle \delta W=Q_{1}\delta q_{1}+\dots +Q_{m}\delta q_{m},}
where
{\displaystyle Q_{j}=\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {r} _{i}}{\partial q_{j}}},\quad j=1,\ldots ,m,}
are called the generalized forces associated with the generalized coordinates qj, j = 1, ..., m.
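As a concrete check of this definition, the following sympy sketch computes the generalized force for a planar pendulum of length L under gravity, with θ as the single generalized coordinate (the setup is an illustrative example, not part of the article's derivation):

```python
# Generalized force Q_theta = F . dr/dtheta for a planar pendulum.
import sympy as sp

theta, m, g, L = sp.symbols('theta m g L')
r = sp.Matrix([L * sp.sin(theta), -L * sp.cos(theta)])  # bob position
F = sp.Matrix([0, -m * g])                              # applied force: gravity

Q_theta = sp.simplify(F.dot(r.diff(theta)))
print(Q_theta)  # -L*g*m*sin(theta), the familiar gravity torque about the pivot
```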
=== Velocity formulation ===
In the application of the principle of virtual work it is often convenient to obtain virtual displacements from the velocities of the system. For the n particle system, let the velocity of each particle Pi be Vi, then the virtual displacement δri can also be written in the form
{\displaystyle \delta \mathbf {r} _{i}=\sum _{j=1}^{m}{\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{j}}}\delta q_{j},\quad i=1,\ldots ,n.}
This means that the generalized force, Qj, can also be determined as
{\displaystyle Q_{j}=\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{j}}},\quad j=1,\ldots ,m.}
== D'Alembert's principle ==
D'Alembert formulated the dynamics of a particle as the equilibrium of the applied forces with an inertia force (apparent force), called D'Alembert's principle. The inertia force of a particle, Pi, of mass mi is
{\displaystyle \mathbf {F} _{i}^{*}=-m_{i}\mathbf {A} _{i},\quad i=1,\ldots ,n,}
where Ai is the acceleration of the particle.
If the configuration of the particle system depends on the generalized coordinates qj, j = 1, ..., m, then the generalized inertia force is given by
{\displaystyle Q_{j}^{*}=\sum _{i=1}^{n}\mathbf {F} _{i}^{*}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{j}}},\quad j=1,\ldots ,m.}
D'Alembert's form of the principle of virtual work yields
{\displaystyle \delta W=(Q_{1}+Q_{1}^{*})\delta q_{1}+\dots +(Q_{m}+Q_{m}^{*})\delta q_{m}.}
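Continuing the pendulum sketch from above (again an illustration, not the article's own example), the velocity formulation gives both Q and the generalized inertia force Q*, and setting Q + Q* = 0 recovers the familiar equation of motion:

```python
# D'Alembert check for a planar pendulum: Q + Q* = 0 yields the equation
# of motion -m*L**2*theta'' - m*g*L*sin(theta) = 0.
import sympy as sp

t = sp.symbols('t')
m, g, L = sp.symbols('m g L', positive=True)
th = sp.Function('theta')(t)

r = sp.Matrix([L * sp.sin(th), -L * sp.cos(th)])  # bob position
V = r.diff(t)                                     # velocity
A = V.diff(t)                                     # acceleration

F = sp.Matrix([0, -m * g])                        # applied force
F_star = -m * A                                   # inertia force

dV_dqdot = V.diff(th.diff(t))                     # dV/d(theta-dot)
Q = F.dot(dV_dqdot)
Q_star = sp.simplify(F_star.dot(dV_dqdot))

print(sp.simplify(Q + Q_star))  # -m*L**2*theta'' - m*g*L*sin(theta)
```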
== See also ==
Lagrangian mechanics
Generalized coordinates
Degrees of freedom (physics and chemistry)
Virtual work
== References == | Wikipedia/Generalized_force |
A thermodynamic system is a body of matter and/or radiation separate from its surroundings that can be studied using the laws of thermodynamics.
Thermodynamic systems can be passive or active according to their internal processes: in a passive system there is a redistribution of available energy, while in an active system one type of energy is converted into another.
Depending on its interaction with the environment, a thermodynamic system may be an isolated system, a closed system, or an open system. An isolated system does not exchange matter or energy with its surroundings. A closed system may exchange heat, experience forces, and exert forces, but does not exchange matter. An open system can interact with its surroundings by exchanging both matter and energy.
The physical condition of a thermodynamic system at a given time is described by its state, which can be specified by the values of a set of thermodynamic state variables. A thermodynamic system is in thermodynamic equilibrium when there are no macroscopically apparent flows of matter or energy within it or between it and other systems.
== Overview ==
Thermodynamic equilibrium is characterized not only by the absence of any flow of mass or energy, but by “the absence of any tendency toward change on a macroscopic scale.”
Equilibrium thermodynamics, as a subject in physics, considers macroscopic bodies of matter and energy in states of internal thermodynamic equilibrium. It uses the concept of thermodynamic processes, by which bodies pass from one equilibrium state to another by transfer of matter and energy between them. The term 'thermodynamic system' is used to refer to bodies of matter and energy in the special context of thermodynamics. The possible equilibria between bodies are determined by the physical properties of the walls that separate the bodies. Equilibrium thermodynamics in general does not measure time. Equilibrium thermodynamics is a relatively simple and well settled subject. One reason for this is the existence of a well defined physical quantity called 'the entropy of a body'.
Non-equilibrium thermodynamics, as a subject in physics, considers bodies of matter and energy that are not in states of internal thermodynamic equilibrium, but are usually participating in processes of transfer that are slow enough to allow description in terms of quantities that are closely related to thermodynamic state variables. It is characterized by presence of flows of matter and energy. For this topic, very often the bodies considered have smooth spatial inhomogeneities, so that spatial gradients, for example a temperature gradient, are well enough defined. Thus the description of non-equilibrium thermodynamic systems is a field theory, more complicated than the theory of equilibrium thermodynamics. Non-equilibrium thermodynamics is a growing subject, not an established edifice. Example theories and modeling approaches include the GENERIC formalism for complex fluids, viscoelasticity, and soft materials. In general, it is not possible to find an exactly defined entropy for non-equilibrium problems. For many non-equilibrium thermodynamical problems, an approximately defined quantity called 'time rate of entropy production' is very useful. Non-equilibrium thermodynamics is mostly beyond the scope of the present article.
Another kind of thermodynamic system is considered in most engineering: one that takes part in a flow process. Its account is given in terms that, well enough in practice in many cases, approximate equilibrium thermodynamic concepts. This is mostly beyond the scope of the present article, and is set out in other articles, for example the article Flow process.
== History ==
The classification of thermodynamic systems arose with the development of thermodynamics as a science.
Theoretical studies of thermodynamic processes in the period from the first theory of heat engines (Sadi Carnot, France, 1824) to the theory of dissipative structures (Ilya Prigogine, Belgium, 1971) mainly concerned the patterns of interaction of thermodynamic systems with the environment.
At the same time, thermodynamic systems were mainly classified as isolated, closed and open, with corresponding properties in various thermodynamic states, for example, in states close to equilibrium, nonequilibrium and strongly nonequilibrium.
In 2010, Boris Dobroborsky (Israel, Russia) proposed a classification of thermodynamic systems according to internal processes consisting in energy redistribution (passive systems) and energy conversion (active systems).
== Passive systems ==
If there is a temperature difference inside the thermodynamic system, for example in a rod one end of which is warmer than the other, then thermal energy transfer processes occur in it, in which the temperature of the colder part rises and that of the warmer part falls. As a result, after some time the temperature in the rod will equalize – the rod will come to a state of thermodynamic equilibrium.
== Active systems ==
If the process of converting one type of energy into another takes place inside a thermodynamic system, for example in chemical reactions, in electric or pneumatic motors, or when one solid body rubs against another, then processes of energy release or absorption occur, and the thermodynamic system will always tend to a non-equilibrium state with respect to the environment.
== Systems in equilibrium ==
In isolated systems it is consistently observed that as time goes on internal rearrangements diminish and stable conditions are approached. Pressures and temperatures tend to equalize, and matter arranges itself into one or a few relatively homogeneous phases. A system in which all processes of change have gone practically to completion is considered in a state of thermodynamic equilibrium. The thermodynamic properties of a system in equilibrium are unchanging in time. Equilibrium system states are much easier to describe in a deterministic manner than non-equilibrium states. In some cases, when analyzing a thermodynamic process, one can assume that each intermediate state in the process is at equilibrium. Such a process is called quasistatic.
For a process to be reversible, each step in the process must be reversible. For a step in a process to be reversible, the system must be in equilibrium throughout the step. That ideal cannot be accomplished in practice because no step can be taken without perturbing the system from equilibrium, but the ideal can be approached by making changes slowly.
The very existence of thermodynamic equilibrium, defining states of thermodynamic systems, is the essential, characteristic, and most fundamental postulate of thermodynamics, though it is only rarely cited as a numbered law. According to Bailyn, the commonly rehearsed statement of the zeroth law of thermodynamics is a consequence of this fundamental postulate. In reality, practically nothing in nature is in strict thermodynamic equilibrium, but the postulate of thermodynamic equilibrium often provides very useful idealizations or approximations, both theoretically and experimentally; experiments can provide scenarios of practical thermodynamic equilibrium.
In equilibrium thermodynamics the state variables do not include fluxes because in a state of thermodynamic equilibrium all fluxes have zero values by definition. Equilibrium thermodynamic processes may involve fluxes but these must have ceased by the time a thermodynamic process or operation is complete bringing a system to its eventual thermodynamic state. Non-equilibrium thermodynamics allows its state variables to include non-zero fluxes, which describe transfers of mass or energy or entropy between a system and its surroundings.
== Walls ==
A system is enclosed by walls that bound it and connect it to its surroundings. Often a wall restricts passage across it by some form of matter or energy, making the connection indirect. Sometimes a wall is no more than an imaginary two-dimensional closed surface through which the connection to the surroundings is direct.
A wall can be fixed (e.g. a constant volume reactor) or moveable (e.g. a piston). For example, in a reciprocating engine, a fixed wall means the piston is locked at its position; then, a constant volume process may occur. In that same engine, a piston may be unlocked and allowed to move in and out. Ideally, a wall may be declared adiabatic, diathermal, impermeable, permeable, or semi-permeable. Actual physical materials that provide walls with such idealized properties are not always readily available.
The system is delimited by walls or boundaries, either actual or notional, across which conserved (such as matter and energy) or unconserved (such as entropy) quantities can pass into and out of the system. The space outside the thermodynamic system is known as the surroundings, a reservoir, or the environment. The properties of the walls determine what transfers can occur. A wall that allows transfer of a quantity is said to be permeable to it, and a thermodynamic system is classified by the permeabilities of its several walls. A transfer between system and surroundings can arise by contact, such as conduction of heat, or by long-range forces such as an electric field in the surroundings.
A system with walls that prevent all transfers is said to be isolated. This is an idealized conception, because in practice some transfer is always possible, for example by gravitational forces. It is an axiom of thermodynamics that an isolated system eventually reaches internal thermodynamic equilibrium, when its state no longer changes with time.
The walls of a closed system allow transfer of energy as heat and as work, but not of matter, between it and its surroundings. The walls of an open system allow transfer both of matter and of energy. This scheme of definition of terms is not uniformly used, though it is convenient for some purposes. In particular, some writers use 'closed system' where 'isolated system' is here used.
Anything that passes across the boundary and effects a change in the contents of the system must be accounted for in an appropriate balance equation. The volume can be the region surrounding a single atom resonating energy, such as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824. It could also be just one nuclide (i.e. a system of quarks) as hypothesized in quantum thermodynamics.
== Surroundings ==
The system is the part of the universe being studied, while the surroundings is the remainder of the universe that lies outside the boundaries of the system. It is also known as the environment or the reservoir. Depending on the type of system, it may interact with the system by exchanging mass, energy (including heat and work), momentum, electric charge, or other conserved properties. The environment is ignored in the analysis of the system, except in regards to these interactions.
== Closed system ==
In a closed system, no mass may be transferred in or out of the system boundaries. The system always contains the same amount of matter, but (sensible) heat and (boundary) work can be exchanged across the boundary of the system. Whether a system can exchange heat, work, or both depends on the properties of its boundary.
Adiabatic boundary – not allowing any heat exchange: A thermally isolated system
Rigid boundary – not allowing exchange of work: A mechanically isolated system
One example is fluid being compressed by a piston in a cylinder. Another example of a closed system is a bomb calorimeter, a type of constant-volume calorimeter used in measuring the heat of combustion of a particular reaction. Electrical energy travels across the boundary to produce a spark between the electrodes and initiates combustion. Heat transfer occurs across the boundary after combustion but no mass transfer takes place either way.
The first law of thermodynamics for energy transfers for a closed system may be stated:
{\displaystyle \Delta U=Q-W}
where U denotes the internal energy of the system, Q the heat added to the system, and W the work done by the system. For infinitesimal changes the first law for closed systems may be stated:
{\displaystyle \mathrm {d} U=\delta Q-\delta W.}
If the work is due to a volume expansion by dV at a pressure P then:
{\displaystyle \delta W=P\mathrm {d} V.}
For a quasi-reversible heat transfer, the second law of thermodynamics reads:
{\displaystyle \delta Q=T\mathrm {d} S}
where T denotes the thermodynamic temperature and S the entropy of the system. With these relations the fundamental thermodynamic relation, used to compute changes in internal energy, is expressed as:
{\displaystyle \mathrm {d} U=T\mathrm {d} S-P\mathrm {d} V.}
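A quick numeric illustration of these closed-system relations, using assumed values for an isobaric expansion:

```python
# First law for a closed system with boundary work W = P*dV (assumed values).
P = 101325.0        # pressure in Pa
dV = 0.001          # volume increase in m^3
Q = 350.0           # heat added to the system in J

W = P * dV          # work done by the system: 101.325 J
dU = Q - W          # change in internal energy: 248.675 J
print(W, dU)
```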
For a simple system, with only one type of particle (atom or molecule), a closed system amounts to a constant number of particles. For systems undergoing a chemical reaction, there may be all sorts of molecules being generated and destroyed by the reaction process. In this case, the fact that the system is closed is expressed by stating that the total number of each elemental atom is conserved, no matter what kind of molecule it may be a part of. Mathematically:
{\displaystyle \sum _{j=1}^{m}a_{ij}N_{j}=b_{i}^{0}}
where Nj denotes the number of j-type molecules, aij the number of atoms of element i in molecule j, and bi0 the total number of atoms of element i in the system, which remains constant, since the system is closed. There is one such equation for each element in the system.
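A small numerical check of this elemental balance for the reaction 2H2 + O2 → 2H2O (an illustrative example; the matrix rows are the elements H and O, the columns the species H2, O2, and H2O):

```python
# Verify sum_j a_ij N_j = b_i^0 before and after a complete reaction.
import numpy as np

a = np.array([[2, 0, 2],    # H atoms per molecule of H2, O2, H2O
              [0, 2, 1]])   # O atoms per molecule of H2, O2, H2O
N_before = np.array([2, 1, 0])   # moles of H2, O2, H2O before reaction
N_after = np.array([0, 0, 2])    # moles after 2 H2 + O2 -> 2 H2O

print(a @ N_before)  # [4 2]: total H and O atoms (in moles of atoms)
print(a @ N_after)   # [4 2]: unchanged, as required for a closed system
```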
== Isolated system ==
An isolated system is more restrictive than a closed system as it does not interact with its surroundings in any way. Mass and energy remains constant within the system, and no energy or mass transfer takes place across the boundary. As time passes in an isolated system, internal differences in the system tend to even out and pressures and temperatures tend to equalize, as do density differences. A system in which all equalizing processes have gone practically to completion is in a state of thermodynamic equilibrium.
Truly isolated physical systems do not exist in reality (except perhaps for the universe as a whole), because, for example, there is always gravity between a system with mass and masses elsewhere. However, real systems may behave nearly as an isolated system for finite (possibly very long) times. The concept of an isolated system can serve as a useful model approximating many real-world situations. It is an acceptable idealization used in constructing mathematical models of certain natural phenomena.
In the attempt to justify the postulate of entropy increase in the second law of thermodynamics, Boltzmann's H-theorem used equations which assumed that a system (for example, a gas) was isolated; that is, all the mechanical degrees of freedom could be specified, treating the walls simply as mirror boundary conditions. This inevitably led to Loschmidt's paradox. However, if the stochastic behavior of the molecules in actual walls is considered, along with the randomizing effect of the ambient, background thermal radiation, Boltzmann's assumption of molecular chaos can be justified.
The second law of thermodynamics for isolated systems states that the entropy of an isolated system not in equilibrium tends to increase over time, approaching maximum value at equilibrium. Overall, in an isolated system, the internal energy is constant and the entropy can never decrease. A closed system's entropy can decrease e.g. when heat is extracted from the system.
Isolated systems are not equivalent to closed systems. Closed systems cannot exchange matter with the surroundings, but can exchange energy. Isolated systems can exchange neither matter nor energy with their surroundings, and as such are only theoretical and do not exist in reality (except, possibly, the entire universe).
'Closed system' is often used in thermodynamics discussions when 'isolated system' would be correct – i.e. there is an assumption that energy does not enter or leave the system.
== Selective transfer of matter ==
For a thermodynamic process, the precise physical properties of the walls and surroundings of the system are important, because they determine the possible processes.
An open system has one or several walls that allow transfer of matter. Accounting for the internal energy of an open system requires energy transfer terms in addition to those for heat and work. It also leads to the idea of the chemical potential.
A wall selectively permeable only to a pure substance can put the system in diffusive contact with a reservoir of that pure substance in the surroundings. Then a process is possible in which that pure substance is transferred between system and surroundings. Also, across that wall a contact equilibrium with respect to that substance is possible. By suitable thermodynamic operations, the pure substance reservoir can be dealt with as a closed system. Its internal energy and its entropy can be determined as functions of its temperature, pressure, and mole number.
A thermodynamic operation can render impermeable to matter all system walls other than the contact equilibrium wall for that substance. This allows the definition of an intensive state variable, with respect to a reference state of the surroundings, for that substance. The intensive variable is called the chemical potential; for component substance i it is usually denoted μi. The corresponding extensive variable can be the number of moles Ni of the component substance in the system.
For a contact equilibrium across a wall permeable to a substance, the chemical potentials of the substance must be the same on either side of the wall. This is part of the nature of thermodynamic equilibrium, and may be regarded as related to the zeroth law of thermodynamics.
== Open system ==
In an open system, there is an exchange of energy and matter between the system and the surroundings. The presence of reactants in an open beaker is an example of an open system. Here the boundary is an imaginary surface enclosing the beaker and reactants. A system is named closed if its borders are impenetrable to substance but allow transit of energy in the form of heat, and isolated if there is no exchange of heat and substances. An open system cannot exist in the equilibrium state. To describe the deviation of the thermodynamic system from equilibrium, in addition to the constitutive variables described above, a set of internal variables
ξ1, ξ2, …
have been introduced. The equilibrium state is considered to be stable, and the main property of the internal variables, as measures of non-equilibrium of the system, is their tendency to disappear; the local law of disappearing can be written as a relaxation equation for each internal variable
where τi = τi(T, x1, x2, …, xn)
is a relaxation time of the corresponding variable. It is convenient to take the initial value ξi0 equal to zero.
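The relaxation equation itself is not reproduced above; assuming the standard exponential law dξi/dt = −ξi/τi (with the equilibrium value taken as zero, as suggested by the text), a minimal numerical sketch is:

```python
# Forward-Euler integration of dxi/dt = -xi/tau (assumed relaxation law).
import math

tau = 2.0      # relaxation time (arbitrary units)
xi = 1.0       # initial departure from equilibrium
dt = 0.01
for _ in range(1000):          # integrate to t = 10
    xi += dt * (-xi / tau)

print(xi, math.exp(-10.0 / tau))  # numerical vs. exact exponential decay
```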
The specific contribution to the thermodynamics of open non-equilibrium systems was made by Ilya Prigogine, who investigated a system of chemically reacting substances. In this case the internal variables appear to be measures of incompleteness of chemical reactions, that is, measures of how much the considered system with chemical reactions is out of equilibrium. The theory can be generalized to consider any deviations from the equilibrium state, such as the structure of the system, gradients of temperature, differences of concentrations of substances and so on, to say nothing of degrees of completeness of all chemical reactions, to be internal variables.
The increments of Gibbs free energy G and entropy S at T = const and p = const are determined as
The stationary states of the system exist due to exchange of both thermal energy (ΔQα) and a stream of particles. The sum of the last terms in the equations gives the total energy coming into the system with the stream of particles of substances ΔNα, which can be positive or negative; the quantity μα is the chemical potential of substance α. The middle terms in equations (2) and (3) depict energy dissipation (entropy production) due to the relaxation of internal variables ξj, while Ξj are thermodynamic forces.
This approach to the open system allows describing the growth and development of living objects in thermodynamic terms.
== See also ==
Dynamical system
Energy system
Isolated system
Mechanical system
Physical system
Quantum system
Thermodynamic cycle
Thermodynamic process
Two-state quantum system
GENERIC formalism
== References ==
== Sources ==
Abbott, M.M.; van Hess, H. G. (1989). Thermodynamics with Chemical Applications (2nd ed.). McGraw Hill.
Bailyn, M. (1994). A Survey of Thermodynamics. New York: American Institute of Physics Press. ISBN 0-88318-797-3.
Callen, H. B. (1985) [1960]. Thermodynamics and an Introduction to Thermostatistics (2nd ed.). New York: Wiley. ISBN 0-471-86256-8.
Carnot, Sadi (1824). Réflexions sur la puissance motrice du feu et sur les machines propres à développer cette puissance (in French). Paris: Bachelier.
Haase, R. (1971). "Survey of Fundamental Laws". In Eyring, H.; Henderson, D.; Jost, W. (eds.). Thermodynamics. Physical Chemistry: An Advanced Treatise. Vol. 1. New York: Academic Press. pp. 1–97. LCCN 73-117081.
Dobroborsky B.S. Machine safety and the human factor / Edited by Doctor of Technical Sciences, prof. S.A. Volkov. — St. Petersburg: SPbGASU, 2011. — pp. 33–35. — 114 p. — ISBN 978-5-9227-0276-8. (Ru)
Halliday, David; Resnick, Robert; Walker, Jearl (2008). Fundamentals of Physics (8th ed.). Wiley.
Moran, Michael J.; Shapiro, Howard N. (2008). Fundamentals of Engineering Thermodynamics (6th ed.). Wiley.
Rex, Andrew; Finn, C. B. P. (2017). Finn's Thermal Physics (3rd ed.). Taylor & Francis. ISBN 978-1-498-71887-5.
Tisza, László (1966). Generalized Thermodynamics. MIT Press.
Tschoegl, N. W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics. Amsterdam: Elsevier. ISBN 0-444-50426-5. | Wikipedia/Thermodynamic_systems |
The journal Studies in Applied Mathematics is published by Wiley–Blackwell on behalf of the Massachusetts Institute of Technology.
It features scholarly articles on mathematical applications in allied fields, notably computer science, mechanics, astrophysics, geophysics, biophysics, and high-energy physics.
Its pedigree traces to the Journal of Mathematics and Physics, which was founded by the MIT Mathematics Department in 1920. The journal changed to its present name in 1969.
The journal was edited from 1969 by David Benney of the Department of Mathematics, Massachusetts Institute of Technology.
According to ISI Journal Citation Reports, in 2020, it ranked 26th among the 265 journals in the Applied Mathematics category.
== Notes ==
== External links ==
Journal Home Page
MIT Faculty Page of Dr. David Benney | Wikipedia/Journal_of_Mathematics_and_Physics |
The Energy Science and Technology Database (EDB) is a multidisciplinary file containing worldwide references to basic and applied scientific and technical research literature. The information is collected for use by (United States) government managers, researchers at the national laboratories, and other research efforts sponsored by the U.S. Department of Energy, and the results of this research are transferred to the public. Abstracts are included for records from 1976 to the present day.
== Nuclear Science Abstracts ==
The EDB also contains the Nuclear Science Abstracts which is a comprehensive abstract and index collection to the international nuclear science and technology literature for the period 1948 through 1976. Included are scientific and technical reports of the US Atomic Energy Commission, United States Energy Research and Development Administration and its contractors, other agencies, universities, and industrial and research organizations. Approximately 25% of the records in the file contain abstracts. Nuclear Science Abstracts contains over 900,000 bibliographic records. In comparison, the entire Energy Science and Technology Database contains over 3 million bibliographic records.
== EDB Scope ==
This database is designed to be a source for any individual who requires access to worldwide energy-related information. It is applicable to the following:
Obtaining results of current energy research efforts.
Accessing subject-specific information on energy sources, use, and conservation; environmental effects; waste processing and disposal; regulatory considerations; as well as basic scientific studies.
Reviewing energy information from a wide variety of sources, including journal literature, conferences, patents, books, monographs, theses, and engineering and software materials.
Accessing historical records of the US Atomic Energy Commission and US Energy Research and Development Administration.
Reviewing subject-specific information on nuclear science from a wide variety of sources, including books, conference proceedings, papers, patents, dissertations, engineering drawings, and journal literature.
=== Subject coverage ===
Subject coverage includes:
== Sources ==
A combination of national and international agencies, as well as multiple non-governmental organizations, are the source for, and provide information to, this database. Information is provided through the Energy Technology Data Exchange (ETDE), which is the International Energy Agency's (IEA) multilateral information program, and through the International Atomic Energy Agency's International Nuclear Information System (INIS), and IEA Coal Research. Other source information is provided by the U.S. Department of Energy, its contractors, other government agencies, and professional societies. Engineering and software materials, references to journal literature, conferences, patents, books, monographs, and theses make up the files of this database. Approximately 50% of these references are from non-U.S. sources.
== See also ==
List of academic databases and search engines
== References ==
This article incorporates public domain material from websites or documents of the United States government.(Dept. of Commerce) | Wikipedia/Nuclear_Science_Abstracts |
Inspec is a major indexing database of scientific and technical literature, published by the Institution of Engineering and Technology (IET), and formerly by the Institution of Electrical Engineers (IEE), one of the IET's forerunners.
Inspec coverage is extensive in the fields of physics, computing, control, and engineering. Its subject coverage includes astronomy, electronics, communications, computers and computing, computer science, control engineering, electrical engineering, information technology, physics, manufacturing, production and mechanical engineering. Now, owing to the emerging concept of technology for business, Inspec also includes information technology for business in its portfolio. Inspec has indexed a number of journals publishing high-quality research that integrates technology into the management, economics, and social sciences domains. The sample journals include Annual Review of Financial Economics, Aslib Journal of Information Management, Australian Journal of Management, and International Journal of Management, Economics and Social Sciences.
Inspec was started in 1967 as an outgrowth of the Science Abstracts service. The electronic records were distributed on magnetic tape. In the 1980s, it was available in the U.S. through the Knowledge Index, a low-priced dial-up version of the Dialog service for individual users, which made it popular. For nearly 50 years, the IET has employed scientists to manually review items to be included in Inspec, hand-indexing the literature using their own expertise in the subject area and making a judgement call about which terms and classification codes should be applied. Thanks to this work, a significant thesaurus has been developed which enables content to be indexed far more accurately and in context. This in turn helps end-users discover relevant literature that might otherwise remain hidden from typical search queries, making Inspec an essential tool for prior art, patentability searches and patent drafting.
Access to Inspec is currently by the Internet through Inspec Direct and various resellers.
== Print counterparts ==
Inspec has several print counterparts:
Computer and Control Abstracts (ISSN 0036-8113)
Electrical and Electronics Abstracts (ISSN 0036-8105)
Physics Abstracts (ISSN 0036-8091)
Science Abstracts
Electrical engineering Abstracts
Electronics Abstracts
Control theory Abstracts
Information technology Abstracts
Physics Indexes
Electrical engineering Indexes
Electronics Indexes
Control theory Indexes
Information technology Indexes
Business automation Abstracts (Journals featuring management, economics and Social Sciences; organizations; management information systems related research)
=== Computer and Control Abstracts ===
Computer and Control Abstracts (ISSN 0036-8113 Frequency: 12 per year) covers computers and computing, and information technology.
=== Electrical and Electronics Abstracts ===
Electrical and Electronics Abstracts (ISSN 0036-8105 Frequency: 12 per year) covers all topics in telecommunications, electronics, radio, electrical power and optoelectronics. Printed indexes by subject, author and other indexes, and a subject guide are produced twice per year.
=== Physics Abstracts ===
Physics Abstracts (ISSN 0036-8091 LCCN 76-646597 Frequency: 24 per year) is an abstracting and indexing service first published by the Institution of Electrical Engineers. It was first circulated as Science Abstracts, volume 1 through volume 5, from 1898 to 1902. From 1903 to 1971 (volume 6 through volume 74) the database appeared under the closely related titles Science Abstracts. Section A, Physics and Science Abstracts. Section A, Physics Abstracts.
By 1972 other societies, such as the American Institute of Physics, were associated as authors of this service. In 1975 or 1976 the Institute of Electrical and Electronics Engineers also became an author. By 1980 this database was also issued as INSPEC-Physics in various formats, and it was available as part of the INSPEC database. Presently it is part of the Inspec, Section A – Physics database. The Physics Abstracts title remained in use throughout the 1990s.
=== Notable editor ===
The science fiction writer Arthur C. Clarke, who held a B.S. degree (physics and mathematics honors) from King's College, was an assistant editor for Physics Abstracts from 1949 to 1951. This position gave Clarke access to "all of the world's leading scientific journals."
=== Science Abstracts ===
The first issue of Science Abstracts was published in January 1898. During that first year, a total of 1,423 abstracts were published at monthly intervals, and at the end of the year an author and subject index were added. The first issue contained 110 abstracts and was divided into 10 sections:
General Physics
Light
Heat
Sound
Electricity
Electrochemistry and Chemical Physics
General Electrical Engineering
Dynamos, Electric Motors and Transformers
Power Distribution, Traction and Lighting
Telegraphy and Telephony
Science Abstracts was the result of a joint collaboration between the Institution of Electrical Engineers (IEE) and The Physical Society of London. The publication was (at that time) provided without charge to all members of both societies. The cost of the publication was mainly borne by the IEE and The Physical Society. Financial contributions were also received from the Institution of Civil Engineers, The Royal Society and the British Association for the Advancement of Science.
By 1902, the annual number of abstracts published had increased to 2,362. By May 1903 it was decided to split the publication into two parts: A (Physics) and B (Electrical Engineering). This decision allowed the subject scope to widen, particularly in physics, and as a result a larger quantity of material could be covered.
Since 1967, electronic access to Science Abstracts has been provided by INSPEC.
== See also ==
List of academic databases and search engines
== References ==
== External links ==
PACC Physics Abstracts Classification and Contents.
1993 to 2008 Physics Abstracts - Series A: Science Abstracts. Absolute Backorder Service, Inc. 2011.
Inspec page at the Institution of Engineering and Technology
Inspec Direct page
The History of Science Abstracts and Inspec
Memoirs of Science Abstracts editorial staff —Arthur C. Clarke | Wikipedia/Computer_&_Control_Abstracts |
The Energy Citations Database (ECD) was created in 2001 in order to make scientific literature citations, and electronic documents, publicly accessible from U.S. Department of Energy (DOE), and its predecessor agencies, at no cost to the user. This database also contains all the unclassified materials from Energy Research Abstracts. Classified materials are not available to the public. ECD does include the unclassified, unlimited distribution scientific and technical reports from the Department of Energy and its predecessor agencies, the Atomic Energy Commission and the Energy Research and Development Administration. The database is usually updated twice per week.
ECD provides free access to over 2.6 million science research citations with continued growth through regular updates. There are over 221,000 electronic documents, primarily from 1943 forward, available via the database. Citations and documents are made publicly available by the Regional Federal Depository Libraries. These institutions maintain and make available DOE research literature, providing access to non‑electronic documents prior to 1994, and electronic access to more recent documents.
ECD was created and developed by DOE's Office of Scientific and Technical Information with the science-attentive citizen in mind. It contains energy and energy‑related scientific and technical information collected by the DOE and its predecessor agencies.
== Scope ==
Topics, or subjects, and Department of Energy disciplines of interest in Energy Citations Database (ECD) are wide-ranging. Scientific and technical research encompass chemistry, physics, materials, environmental science, geology, engineering, mathematics, climatology, oceanography, computer science, and related disciplines. It includes bibliographic citations to report scientific literature, conference papers, journal articles, books, dissertations, and patents.
=== Stated capabilities ===
Bibliographic citations for scientific and technical information dating from 1943 to the present day. Search capabilities include full text, bibliographic citation, title, creator/author, subject, identifier numbers, publication date, system entry date, resource/document type, research organization, sponsoring organization, and/or any combination of these.
Commensurate with the above search capabilities, results can be sorted by various means: relevance, publication date, system entry date, resource/document type, title, research organization, sponsoring organization, or the unique Office of Scientific and Technical Information (OSTI) Identifier. Furthermore, a count of search results, combined with a link to the actual results, is available.
Ability to receive weekly alerts on topics of interest.
Information about acquiring a non-electronic document.
== Research and database in predecessor agencies ==
Since the late 1940s, the Office of Scientific and Technical Information (OSTI) and its predecessor organizations have been responsible for the management of scientific and technical information (STI) for the Department of Energy (DOE) and its predecessor agencies, the Atomic Energy Commission (AEC) and the Energy Research and Development Administration (ERDA). Growth and development of STI management has incorporated planning, developing, maintaining, and administering all services and facilities required to accomplish the dissemination of scientific and technical information for the encouragement of scientific progress.
=== Atomic Energy Commission ===
In 1942, the Manhattan Project was established by the United States Army to conduct atomic research with the goal of ending World War II. This research was performed in a manner that helped to cement the ongoing bond between basic scientific research and national security. After the war, the authority to continue this research was transferred from the Army to the United States Atomic Energy Commission (AEC) through the Atomic Energy Act of 1946. This Act was signed into law by President Harry S. Truman on August 1, 1946, and entrusted the AEC with the government monopoly in the field of atomic research and development.
=== Energy Reorganization Act ===
The Energy Reorganization Act of 1974 abolished the Atomic Energy Commission and established the Energy Research and Development Administration (ERDA). ERDA was created to achieve two goals:
The first was to focus the Federal Government's energy research and development activities within a unified agency whose major function would be to promote the speedy development of various energy technologies. The second was to separate nuclear licensing and regulatory functions from the development and production of nuclear power and weapons.
=== Department of Energy ===
To achieve a major Federal energy reorganization, the Department of Energy (DOE) was activated on October 1, 1977. DOE became the twelfth cabinet-level department in the Federal Government and brought together for the first time most of the government's energy programs and defense responsibilities that included the design, construction, and testing of nuclear weapons.
== Features section ==
The Energy Citations Database features noteworthy topics of discussion in the features section.
=== SciDAC ===
SciDAC (Scientific Discovery through Advanced Computing) is a specially designed program within the Office of Science of the U.S. Department of Energy, driven by a spirit of collaboration. Discipline scientists, applied mathematicians, and computer scientists are working together to maximize use of the most sophisticated high-performance computers for scientific discovery. Research results are promulgated through the SciDAC Review magazine. Supercomputer modeling and visualization is covered in the Spring 2010 issue of this magazine.
== Regional Federal Depository Libraries ==
There are nearly 1,250 depository libraries throughout the United States and its territories. Access to all documents (hundreds of thousands) is free of charge. Expert-assisted searches are available on site.
Federal depository libraries have been established by Congress to ensure that the American public has access to its Government's information. The Federal Depository Library Program (FDLP) involves the acquisition, format conversion, and distribution of depository materials to libraries throughout the United States and the coordination of Federal depository libraries in the 50 states, the District of Columbia and U.S. territories.
The U.S. Government Printing Office administers the FDLP.
=== Federal depository library coverage ===
Coverage generally encompasses:
Health and Nutrition
Laws, Statistics, and Presidential Materials
Science and Technology
Business and Careers
Education
History
World Maps
Available formats are publications, journals, electronic resources, microfiche, microfilm and various other formats encompassing hundreds of thousands of topics.
== References ==
This article incorporates public domain material from websites or documents of the United States Department of Energy.
This article incorporates public domain material from websites or documents of the United States government. | Wikipedia/Energy_Research_Abstracts |
SPIN (Searchable Physics Information Notices) bibliographic database is an indexing and abstracting service produced by the American Institute of Physics (AIP). The content focus of SPIN is described as the most significant areas of physics research. This type of literature coverage spans the major physical science journals and magazines. Major conference proceedings that are reported by the American Institute of Physics, member societies, as well as affiliated organizations are also included as part of this database. References, or citations, provide access to more than 1.5 million articles as of 2010. SPIN has no print counterpart.
== Journals ==
Timely indexing and abstracting is provided for what are deemed to be the significant physics and astronomy journals from the United States, Russia, and Ukraine. Citations for journal articles are derived from original publications of the AIP, which include published translated works. At the same time, citations are included from member societies and selectively chosen American journals. Citations typically become available online on the same date as the corresponding journal article.
== Sources ==
Overall, the source citations are derived from material published by the AIP and member societies, in English-language, Russian, and Ukrainian journals and conference proceedings. Certain American physics-related articles are also sources of citations. About 60 journals have cover-to-cover indexing, and about 100 journals overall are indexed.
== Scope ==
Subject coverage encompasses the following:
Applied physics, Electromagnetic technology, Microelectronics
Atomic physics and Molecular physics
Biological physics and Medical physics
Classical physics and Quantum physics
Condensed matter physics
Elementary particle physics
General physics, Optics, Acoustics, and Fluid dynamics
Geophysics, Astronomy, Astrophysics
Materials science
Nuclear physics
Plasma physics
Physical chemistry
== See also ==
List of academic databases and search engines
== References ==
== External links ==
AIP'S SPIN Database Reaches One Million Records. American Institute of Physics. March 1, 2002.
Can everything published in physics be found in the arXiv?. The Scholarly Kitchen. Society for Scholarly Publishing. June 2010.
AIP partnerships (society publishing). July 2010. | Wikipedia/SPIN_bibliographic_database |
In meteorology, convective available potential energy (commonly abbreviated as CAPE) is a measure of the capacity of the atmosphere to support upward air movement that can lead to cloud formation and storms. Some atmospheric conditions, such as very warm, moist air in an atmosphere that cools rapidly with height, can promote strong and sustained upward air movement, possibly stimulating the formation of cumulus clouds or cumulonimbus (thunderstorm clouds). In that situation the potential energy of the atmosphere to cause upward air movement is very high, so CAPE (a measure of potential energy) would be high and positive. By contrast, other conditions, such as a less warm air parcel or a parcel in an atmosphere with a temperature inversion (in which the temperature increases above a certain height), have much less capacity to support vigorous upward air movement, thus the potential energy level (CAPE) would be much lower, as would the probability of thunderstorms.
More technically, CAPE is the integrated amount of work that the upward (positive) buoyancy force would perform on a given mass of air (called an air parcel) if it rose vertically through the entire atmosphere. Positive CAPE will cause the air parcel to rise, while negative CAPE will cause the air parcel to sink.
Nonzero CAPE is an indicator of atmospheric instability in any given atmospheric sounding, a necessary condition for the development of cumulus and cumulonimbus clouds with attendant severe weather hazards.
== Mechanics ==
CAPE exists within the conditionally unstable layer of the troposphere, the free convective layer (FCL), where an ascending air parcel is warmer than the ambient air. CAPE is measured in joules per kilogram of air (J/kg). Any value greater than 0 J/kg indicates instability and an increasing possibility of thunderstorms and hail. Generic CAPE is calculated by integrating vertically the local buoyancy of a parcel from the level of free convection (LFC) to the equilibrium level (EL):
$$\mathrm{CAPE}=\int_{z_{\mathrm{f}}}^{z_{\mathrm{n}}}g\left(\frac{T_{\mathrm{v,parcel}}-T_{\mathrm{v,env}}}{T_{\mathrm{v,env}}}\right)\,dz$$
where $z_{\mathrm{f}}$ is the height of the level of free convection, $z_{\mathrm{n}}$ is the height of the equilibrium level (neutral buoyancy), $T_{\mathrm{v,parcel}}$ is the virtual temperature of the specific parcel, $T_{\mathrm{v,env}}$ is the virtual temperature of the environment (note that temperatures must be in the Kelvin scale), and $g$ is the acceleration due to gravity. This integral is the work done by the buoyant force minus the work done against gravity; hence it is the excess energy that can become kinetic energy.
CAPE for a given region is most often calculated from a thermodynamic or sounding diagram (e.g., a Skew-T log-P diagram) using air temperature and dew point data usually measured by a weather balloon.
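As a rough illustration of this calculation, the sketch below numerically integrates parcel buoyancy over a simplified sounding with the trapezoidal rule; the profile values are invented for demonstration, not taken from a real sounding, and an operational calculation would use far finer vertical resolution.

```python
import numpy as np

# Hypothetical sounding: heights (m) and virtual temperatures (K)
# of the environment and the parcel between the LFC and the EL.
z = np.array([1000.0, 2000.0, 4000.0, 6000.0, 8000.0, 10000.0])
t_env = np.array([285.0, 278.0, 264.0, 250.0, 236.0, 222.0])
t_parcel = np.array([285.0, 280.0, 268.0, 255.0, 240.0, 222.0])

g = 9.81  # gravitational acceleration, m/s^2

# Integrand: buoyancy term g * (Tv_parcel - Tv_env) / Tv_env
buoyancy = g * (t_parcel - t_env) / t_env

# Trapezoidal integration from the LFC (first level) to the EL (last level)
cape = np.sum(0.5 * (buoyancy[1:] + buoyancy[:-1]) * np.diff(z))
print(f"CAPE ~ {cape:.0f} J/kg")  # roughly 1,100 J/kg for these values
```

A value near 1,000 J/kg, as here, would indicate moderate instability.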
CAPE is effectively positive buoyancy, expressed B+ or simply B; the opposite of convective inhibition (CIN), which is expressed as B-, and can be thought of as "negative CAPE". As with CIN, CAPE is usually expressed in J/kg but may also be expressed as m2/s2, as the values are equivalent. In fact, CAPE is sometimes referred to as positive buoyant energy (PBE). This type of CAPE is the maximum energy available to an ascending parcel and to moist convection. When a layer of CIN is present, the layer must be eroded by surface heating or mechanical lifting, so that convective boundary layer parcels may reach their level of free convection (LFC).
On a sounding diagram, CAPE is the positive area above the LFC, the area between the parcel's virtual temperature line and the environmental virtual temperature line where the ascending parcel is warmer than the environment. Neglecting the virtual temperature correction may result in substantial relative errors in the calculated value of CAPE for small CAPE values. CAPE may also exist below the LFC, but if a layer of CIN (subsidence) is present, it is unavailable to deep, moist convection until CIN is exhausted. When there is mechanical lift to saturation, cloud base begins at the lifted condensation level (LCL); absent forcing, cloud base begins at the convective condensation level (CCL) where heating from below causes spontaneous buoyant lifting to the point of condensation when the convective temperature is reached. When CIN is absent or is overcome, saturated parcels at the LCL or CCL, which had been small cumulus clouds, will rise to the LFC, and then spontaneously rise until hitting the stable layer of the equilibrium level. The result is deep, moist convection (DMC), or simply, a thunderstorm.
When a parcel is unstable, it will continue to move vertically, in either direction, depending on whether it receives upward or downward forcing, until it reaches a stable layer (though momentum, gravity, and other forcing may cause the parcel to continue). There are multiple types of CAPE: downdraft CAPE (DCAPE) estimates the potential strength of rain- and evaporatively cooled downdrafts. Other types of CAPE depend on the depth being considered; examples are surface-based CAPE (SBCAPE), mixed-layer or mean-layer CAPE (MLCAPE), most-unstable or maximum-usable CAPE (MUCAPE), and normalized CAPE (NCAPE).
Fluid elements displaced upwards or downwards in such an atmosphere expand or compress adiabatically in order to remain in pressure equilibrium with their surroundings, and in this manner become less or more dense.
If the adiabatic decrease or increase in density is less than the decrease or increase in the density of the ambient (not moved) medium, then the displaced fluid element will be subject to downwards or upwards pressure, which will function to restore it to its original position. Hence, there will be a counteracting force to the initial displacement. Such a condition is referred to as convective stability.
On the other hand, if adiabatic decrease or increase in density is greater than in the ambient fluid, the upwards or downwards displacement will be met with an additional force in the same direction exerted by the ambient fluid. In these circumstances, small deviations from the initial state will become amplified. This condition is referred to as convective instability.
Convective instability is also termed static instability, because the instability does not depend on the existing motion of the air; this contrasts with dynamic instability where instability is dependent on the motion of air and its associated effects such as dynamic lifting.
== Significance to thunderstorms ==
Thunderstorms form when air parcels are lifted vertically. Deep, moist convection requires a parcel to be lifted to the LFC where it then rises spontaneously until reaching a layer of non-positive buoyancy. The atmosphere is warm at the surface and lower levels of the troposphere where there is mixing (the planetary boundary layer (PBL)), but becomes substantially cooler with height. The temperature profile of the atmosphere, the change in temperature, the degree that it cools with height, is the lapse rate. When the rising air parcel cools more slowly than the surrounding atmosphere, it remains warmer and less dense. The parcel continues to rise freely (convectively; without mechanical lift) through the atmosphere until it reaches an area of air less dense (warmer) than itself.
The amount, and shape, of the positive-buoyancy area modulates the speed of updrafts, thus extreme CAPE can result in explosive thunderstorm development; such rapid development usually occurs when CAPE stored by a capping inversion is released when the "lid" is broken by heating or mechanical lift. The amount of CAPE also modulates how low-level vorticity is entrained and then stretched in the updraft, with importance to tornadogenesis. The most important CAPE for tornadoes is within the lowest 1 to 3 km (0.6 to 1.9 mi) of the atmosphere, whilst deep layer CAPE and the width of CAPE at mid-levels is important for supercells. Tornado outbreaks tend to occur within high CAPE environments. Large CAPE is required for the production of very large hail, owing to updraft strength, although a rotating updraft may be stronger with less CAPE. Large CAPE also promotes lightning activity.
Two notable days for severe weather exhibited CAPE values over 5 kJ/kg. Two hours before the 1999 Oklahoma tornado outbreak occurred on May 3, 1999, the CAPE value sounding at Oklahoma City was at 5.89 kJ/kg. A few hours later, an F5 tornado ripped through the southern suburbs of the city. Also on May 4, 2007, CAPE values of 5.5 kJ/kg were reached and an EF5 tornado tore through Greensburg, Kansas. On these days, it was apparent that conditions were ripe for tornadoes and CAPE wasn't a crucial factor. However, extreme CAPE, by modulating the updraft (and downdraft), can allow for exceptional events, such as the deadly F5 tornadoes that hit Plainfield, Illinois on August 28, 1990, and Jarrell, Texas on May 27, 1997, on days which weren't readily apparent as conducive to large tornadoes. CAPE was estimated to exceed 8 kJ/kg in the environment of the Plainfield storm and was around 7 kJ/kg for the Jarrell storm.
Severe weather and tornadoes can develop in an area of low CAPE values. The surprise severe weather event that occurred in Illinois and Indiana on April 20, 2004, is a good example. Important in that case was that, although overall CAPE was weak, there was strong CAPE in the lowest levels of the troposphere, which enabled an outbreak of minisupercells producing large, long-track, intense tornadoes.
== Example from meteorology ==
A good example of convective instability can be found in our own atmosphere. If dry mid-level air is drawn over very warm, moist air in the lower troposphere, a hydrolapse (an area of rapidly decreasing dew point temperatures with height) results in the region where the moist boundary layer and mid-level air meet. As daytime heating increases mixing within the moist boundary layer, some of the moist air will begin to interact with the dry mid-level air above it. Owing to thermodynamic processes, as the dry mid-level air is slowly saturated its temperature begins to drop, increasing the adiabatic lapse rate. Under certain conditions, the lapse rate can increase significantly in a short amount of time, resulting in convection. High convective instability can lead to severe thunderstorms and tornadoes as moist air which is trapped in the boundary layer eventually becomes highly negatively buoyant relative to the adiabatic lapse rate and escapes as a rapidly rising bubble of humid air triggering the development of a cumulus or cumulonimbus cloud.
== Limitations ==
As with most parameters used in meteorology, there are some caveats to keep in mind, one of which is what CAPE represents physically and in what instances CAPE can be used. One example where the more common method for determining CAPE might start to break down is in the presence of tropical cyclones (TCs), such as tropical depressions, tropical storms, or hurricanes.
The more common method of determining CAPE can break down near tropical cyclones because it assumes that liquid water is lost instantaneously upon condensation, making the process irreversible upon adiabatic descent. This is not realistic for tropical cyclones. A more realistic alternative for tropical cyclones is reversible CAPE (RCAPE for short). RCAPE assumes the opposite extreme to the standard formulation of CAPE: that no liquid water is lost during the process. This gives parcels a greater density due to water loading.
RCAPE is calculated using the same formula as CAPE; the difference lies in the virtual temperature. In the reversible formulation, the parcel saturation mixing ratio (which leads to the condensation and removal of liquid water) is replaced with the parcel water content. This slight change can drastically change the values obtained through the integration.
RCAPE has its own limitations, one of which is that it assumes no evaporation; this is consistent with conditions within a TC, but the measure should be used sparingly elsewhere.
Another limitation of both CAPE and RCAPE is that neither formulation currently considers entrainment.
== See also ==
Atmospheric thermodynamics
Lifted index
Maximum potential intensity
== References ==
== Further reading ==
Barry, R.G. and Chorley, R.J. Atmosphere, weather and climate (7th ed) Routledge 1998 p. 80-81 ISBN 0-415-16020-0
== External links ==
Map of current global CAPE | Wikipedia/Convective_available_potential_energy |
The Clausius–Clapeyron relation, in chemical thermodynamics, specifies the temperature dependence of pressure, most importantly vapor pressure, at a discontinuous phase transition between two phases of matter of a single constituent. It is named after Rudolf Clausius and Benoît Paul Émile Clapeyron. However, this relation was in fact originally derived by Sadi Carnot in his Reflections on the Motive Power of Fire, which was published in 1824 but largely ignored until it was rediscovered by Clausius, Clapeyron, and Lord Kelvin decades later. Kelvin said of Carnot's argument that "nothing in the whole range of Natural Philosophy is more remarkable than the establishment of general laws by such a process of reasoning."
Kelvin and his brother James Thomson confirmed the relation experimentally in 1849–50, and it was historically important as a very early successful application of theoretical thermodynamics. Its relevance to meteorology and climatology is the increase of the water-holding capacity of the atmosphere by about 7% for every 1 °C (1.8 °F) rise in temperature.
== Definition ==
=== Exact Clapeyron equation ===
On a pressure–temperature (P–T) diagram, for any phase change the line separating the two phases is known as the coexistence curve. The Clapeyron relation gives the slope of the tangents to this curve. Mathematically,
$$\frac{\mathrm{d}P}{\mathrm{d}T}=\frac{L}{T\,\Delta v}=\frac{\Delta s}{\Delta v},$$
where $\mathrm{d}P/\mathrm{d}T$ is the slope of the tangent to the coexistence curve at any point, $L$ is the molar change in enthalpy (latent heat, the amount of energy absorbed in the transformation), $T$ is the temperature, $\Delta v$ is the molar volume change of the phase transition, and $\Delta s$ is the molar entropy change of the phase transition. Alternatively, the specific values may be used instead of the molar ones.
=== Clausius–Clapeyron equation ===
The Clausius–Clapeyron equation: 509 applies to vaporization of liquids, where the vapor follows the ideal gas law using the ideal gas constant $R$ and the liquid volume is neglected as being much smaller than the vapor volume $V$. It is often used to calculate the vapor pressure of a liquid.
$$\frac{\mathrm{d}P}{\mathrm{d}T}=\frac{PL}{T^{2}R},$$
$$v=\frac{V}{n}=\frac{RT}{P}.$$
The equation expresses this in a more convenient form just in terms of the latent heat, for moderate temperatures and pressures.
== Derivations ==
=== Derivation from state postulate ===
Using the state postulate, take the molar entropy $s$ for a homogeneous substance to be a function of molar volume $v$ and temperature $T$.: 508
$$\mathrm{d}s=\left(\frac{\partial s}{\partial v}\right)_{T}\mathrm{d}v+\left(\frac{\partial s}{\partial T}\right)_{v}\mathrm{d}T.$$
The Clausius–Clapeyron relation describes a phase transition in a closed system composed of two contiguous phases, condensed matter and ideal gas, of a single substance, in mutual thermodynamic equilibrium, at constant temperature and pressure. Therefore,: 508
$$\mathrm{d}s=\left(\frac{\partial s}{\partial v}\right)_{T}\mathrm{d}v.$$
Using the appropriate Maxwell relation gives: 508
$$\mathrm{d}s=\left(\frac{\partial P}{\partial T}\right)_{v}\mathrm{d}v,$$
where $P$ is the pressure. Since pressure and temperature are constant, the derivative of pressure with respect to temperature does not change.: 57, 62, 671 Therefore, the partial derivative of molar entropy may be changed into a total derivative
$$\mathrm{d}s=\frac{\mathrm{d}P}{\mathrm{d}T}\,\mathrm{d}v,$$
and the total derivative of pressure with respect to temperature may be factored out when integrating from an initial phase $\alpha$ to a final phase $\beta$,: 508 to obtain
$$\frac{\mathrm{d}P}{\mathrm{d}T}=\frac{\Delta s}{\Delta v},$$
where $\Delta s\equiv s_{\beta}-s_{\alpha}$ and $\Delta v\equiv v_{\beta}-v_{\alpha}$ are respectively the change in molar entropy and molar volume. Given that a phase change is an internally reversible process, and that our system is closed, the first law of thermodynamics holds:
$$\mathrm{d}u=\delta q+\delta w=T\,\mathrm{d}s-P\,\mathrm{d}v,$$
where $u$ is the internal energy of the system. Given constant pressure and temperature (during a phase change) and the definition of molar enthalpy $h$, we obtain
$$\mathrm{d}h=T\,\mathrm{d}s+v\,\mathrm{d}P,$$
$$\mathrm{d}h=T\,\mathrm{d}s,$$
$$\mathrm{d}s=\frac{\mathrm{d}h}{T}.$$
Given constant pressure and temperature (during a phase change), we obtain: 508
$$\Delta s=\frac{\Delta h}{T}.$$
Substituting the definition of molar latent heat $L=\Delta h$ gives
$$\Delta s=\frac{L}{T}.$$
Substituting this result into the pressure derivative given above ($\mathrm{d}P/\mathrm{d}T=\Delta s/\Delta v$), we obtain: 508
$$\frac{\mathrm{d}P}{\mathrm{d}T}=\frac{L}{T\,\Delta v}.$$
This result (also known as the Clapeyron equation) equates the slope $\mathrm{d}P/\mathrm{d}T$ of the coexistence curve $P(T)$ to the function $L/(T\,\Delta v)$ of the molar latent heat $L$, the temperature $T$, and the change in molar volume $\Delta v$. Instead of the molar values, corresponding specific values may also be used.
=== Derivation from Gibbs–Duhem relation ===
Suppose two phases, $\alpha$ and $\beta$, are in contact and at equilibrium with each other. Their chemical potentials are related by
$$\mu_{\alpha}=\mu_{\beta}.$$
Furthermore, along the coexistence curve,
$$\mathrm{d}\mu_{\alpha}=\mathrm{d}\mu_{\beta}.$$
One may therefore use the Gibbs–Duhem relation
$$\mathrm{d}\mu =M(-s\,\mathrm{d}T+v\,\mathrm{d}P)$$
(where $s$ is the specific entropy, $v$ is the specific volume, and $M$ is the molar mass) to obtain
$$-(s_{\beta}-s_{\alpha})\,\mathrm{d}T+(v_{\beta}-v_{\alpha})\,\mathrm{d}P=0.$$
Rearrangement gives
$$\frac{\mathrm{d}P}{\mathrm{d}T}=\frac{s_{\beta}-s_{\alpha}}{v_{\beta}-v_{\alpha}}=\frac{\Delta s}{\Delta v},$$
from which the derivation of the Clapeyron equation continues as in the previous section.
=== Ideal gas approximation at low temperatures ===
When the phase transition of a substance is between a gas phase and a condensed phase (liquid or solid), and occurs at temperatures much lower than the critical temperature of that substance, the specific volume of the gas phase $v_{\text{g}}$ greatly exceeds that of the condensed phase $v_{\text{c}}$. Therefore, one may approximate
$$\Delta v=v_{\text{g}}\left(1-\frac{v_{\text{c}}}{v_{\text{g}}}\right)\approx v_{\text{g}}$$
at low temperatures. If pressure is also low, the gas may be approximated by the ideal gas law, so that
$$v_{\text{g}}=\frac{RT}{P},$$
where $P$ is the pressure, $R$ is the specific gas constant, and $T$ is the temperature. Substituting into the Clapeyron equation
$$\frac{\mathrm{d}P}{\mathrm{d}T}=\frac{L}{T\,\Delta v},$$
we can obtain the Clausius–Clapeyron equation: 509
$$\frac{\mathrm{d}P}{\mathrm{d}T}=\frac{PL}{T^{2}R}$$
for low temperatures and pressures,: 509 where $L$ is the specific latent heat of the substance. Instead of the specific values, corresponding molar values (i.e. $L$ in kJ/mol and R = 8.31 J/(mol⋅K)) may also be used.
Let $(P_{1},T_{1})$ and $(P_{2},T_{2})$ be any two points along the coexistence curve between two phases $\alpha$ and $\beta$. In general, $L$ varies between any two such points, as a function of temperature. But if $L$ is approximated as constant,
$$\frac{\mathrm{d}P}{P}\cong \frac{L}{R}\,\frac{\mathrm{d}T}{T^{2}},$$
$$\int_{P_{1}}^{P_{2}}\frac{\mathrm{d}P}{P}\cong \frac{L}{R}\int_{T_{1}}^{T_{2}}\frac{\mathrm{d}T}{T^{2}},$$
$$\ln P\,{\Big|}_{P=P_{1}}^{P_{2}}\cong -\frac{L}{R}\cdot \left.\frac{1}{T}\right|_{T=T_{1}}^{T_{2}},$$
or: 672
$$\ln \frac{P_{2}}{P_{1}}\cong -\frac{L}{R}\left(\frac{1}{T_{2}}-\frac{1}{T_{1}}\right).$$
These last equations are useful because they relate equilibrium or saturation vapor pressure and temperature to the latent heat of the phase change without requiring specific-volume data. For instance, for water near its normal boiling point, with a molar enthalpy of vaporization of 40.7 kJ/mol and R = 8.31 J/(mol⋅K),
$$P_{\text{vap}}(T)\cong 1~\text{bar}\cdot \exp \left[-\frac{40\,700~\text{K}}{8.31}\left(\frac{1}{T}-\frac{1}{373~\text{K}}\right)\right].$$
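A quick numerical check of this formula (a minimal sketch; the 40.7 kJ/mol enthalpy and the (1 bar, 373 K) reference point are the values quoted above):

```python
import math

L = 40700.0   # molar enthalpy of vaporization of water, J/mol
R = 8.31      # ideal gas constant, J/(mol*K)
P_ref, T_ref = 1.0, 373.0  # reference point: 1 bar at 373 K

def vapor_pressure(T):
    """Saturation vapor pressure (bar) from the integrated
    Clausius-Clapeyron equation, assuming constant latent heat."""
    return P_ref * math.exp(-(L / R) * (1.0 / T - 1.0 / T_ref))

# At 363 K (90 C) the predicted vapor pressure is about 0.70 bar,
# which is why water boils near 90 C at high altitude.
print(f"{vapor_pressure(363.0):.3f} bar")
```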
=== Clapeyron's derivation ===
In the original work by Clapeyron, the following argument is advanced.
Clapeyron considered a Carnot process of saturated water vapor with horizontal isobars. As the pressure is a function of temperature alone, the isobars are also isotherms. If the process involves an infinitesimal amount of water, $\mathrm{d}x$, and an infinitesimal difference in temperature $\mathrm{d}T$, the heat absorbed is
$$Q=L\,\mathrm{d}x,$$
and the corresponding work is
$$W=\frac{\mathrm{d}p}{\mathrm{d}T}\,\mathrm{d}T\,(V''-V'),$$
where $V''-V'$ is the difference between the volumes of $\mathrm{d}x$ in the liquid and vapor phases.
The ratio $W/Q$ is the efficiency of the Carnot engine, $\mathrm{d}T/T$. Substituting and rearranging gives
$$\frac{\mathrm{d}p}{\mathrm{d}T}=\frac{L}{T(v''-v')},$$
where lowercase $v''-v'$ denotes the change in specific volume during the transition.
== Applications ==
=== Chemistry and chemical engineering ===
For transitions between a gas and a condensed phase with the approximations described above, the expression may be rewritten as
$$\ln \left(\frac{P_{1}}{P_{0}}\right)=\frac{L}{R}\left(\frac{1}{T_{0}}-\frac{1}{T_{1}}\right)$$
where $P_{0},P_{1}$ are the pressures at temperatures $T_{0},T_{1}$ respectively and $R$ is the ideal gas constant. For a liquid–gas transition, $L$ is the molar latent heat (or molar enthalpy) of vaporization; for a solid–gas transition, $L$ is the molar latent heat of sublimation. If the latent heat is known, then knowledge of one point on the coexistence curve, for instance (1 bar, 373 K) for water, determines the rest of the curve. Conversely, the relationship between $\ln P$ and $1/T$ is linear, and so linear regression is used to estimate the latent heat.
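To illustrate the regression approach, the sketch below fits ln P against 1/T for synthetic vapor-pressure data; the data points are invented for demonstration (generated from L = 40.7 kJ/mol), so the fit should recover roughly that value, with slope equal to −L/R.

```python
import numpy as np

# Hypothetical (T, P) measurements along a coexistence curve.
T = np.array([333.0, 343.0, 353.0, 363.0, 373.0])   # K
P = np.array([0.207, 0.317, 0.475, 0.696, 1.000])   # bar

# ln P is linear in 1/T with slope -L/R.
slope, intercept = np.polyfit(1.0 / T, np.log(P), 1)

R = 8.314  # ideal gas constant, J/(mol*K)
L = -slope * R
print(f"Estimated latent heat: {L / 1000:.1f} kJ/mol")  # ~40.7 kJ/mol
```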
=== Meteorology and climatology ===
Atmospheric water vapor drives many important meteorologic phenomena (notably, precipitation), motivating interest in its dynamics. The Clausius–Clapeyron equation for water vapor under typical atmospheric conditions (near standard temperature and pressure) is
$$\frac{\mathrm{d}e_{s}}{\mathrm{d}T}=\frac{L_{v}(T)\,e_{s}}{R_{v}T^{2}},$$
where $e_{s}$ is the saturation vapor pressure of water, $T$ is the absolute temperature, $L_{v}(T)$ is the specific latent heat of vaporization of water, and $R_{v}$ is the gas constant of water vapor. The temperature dependence of the latent heat $L_{v}(T)$ can be neglected in this application. The August–Roche–Magnus formula provides a solution under that approximation:
$$e_{s}(T)=6.1094\exp \left(\frac{17.625\,T}{T+243.04}\right),$$
where $e_{s}$ is in hPa, and $T$ is in degrees Celsius (whereas everywhere else on this page, $T$ is an absolute temperature, e.g. in kelvins).
This is also sometimes called the Magnus or Magnus–Tetens approximation, though this attribution is historically inaccurate. But see also the discussion of the accuracy of different approximating formulae for saturation vapour pressure of water.
Under typical atmospheric conditions, the denominator of the exponent depends weakly on $T$ (for which the unit is degrees Celsius). Therefore, the August–Roche–Magnus equation implies that saturation water vapor pressure changes approximately exponentially with temperature under typical atmospheric conditions, and hence the water-holding capacity of the atmosphere increases by about 7% for every 1 °C rise in temperature.
== Example ==
One of the uses of this equation is to determine if a phase transition will occur in a given situation. Consider the question of how much pressure is needed to melt ice at a temperature $\Delta T$ below 0 °C. Note that water is unusual in that its change in volume upon melting is negative. We can assume
$$\Delta P=\frac{L}{T\,\Delta v}\,\Delta T,$$
and substituting in the values for water, $L=3.34\times 10^{5}~\mathrm{J/kg}$ (latent heat of fusion), $T=273~\mathrm{K}$, and $\Delta v=-9.05\times 10^{-5}~\mathrm{m^{3}/kg}$ (the decrease in specific volume on melting), we obtain
$$\frac{\Delta P}{\Delta T}=-13.5~\text{MPa/K}.$$
To provide a rough example of how much pressure this is, to melt ice at −7 °C (the temperature many ice skating rinks are set at) would require balancing a small car (mass ~ 1000 kg) on a thimble (area ~ 1 cm2). This shows that ice skating cannot be simply explained by pressure-caused melting point depression, and in fact the mechanism is quite complex.
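The sketch below reproduces this arithmetic numerically, using the standard values for water assumed above, and compares the required pressure with the car-on-a-thimble estimate:

```python
L = 3.34e5        # latent heat of fusion of water, J/kg
T = 273.0         # melting temperature, K
dv = -9.05e-5     # change in specific volume on melting, m^3/kg

dP_dT = L / (T * dv)   # Pa per kelvin, about -13.5 MPa/K
dP = dP_dT * (-7.0)    # pressure rise needed to melt ice at -7 C
print(f"dP/dT = {dP_dT / 1e6:.1f} MPa/K")
print(f"Pressure to melt ice at -7 C: {dP / 1e6:.0f} MPa")

# Compare: a ~1000 kg car balanced on a ~1 cm^2 thimble.
pressure = 1000.0 * 9.81 / 1e-4  # Pa, roughly 98 MPa
print(f"Car on a thimble: {pressure / 1e6:.0f} MPa")
```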
== Second derivative ==
While the Clausius–Clapeyron relation gives the slope of the coexistence curve, it does not provide any information about its curvature or second derivative. The second derivative of the coexistence curve of phases 1 and 2 is given by
$$\begin{aligned}\frac{\mathrm{d}^{2}P}{\mathrm{d}T^{2}}&=\frac{1}{v_{2}-v_{1}}\left[\frac{c_{p2}-c_{p1}}{T}-2(v_{2}\alpha_{2}-v_{1}\alpha_{1})\frac{\mathrm{d}P}{\mathrm{d}T}\right]\\&\quad +\frac{1}{v_{2}-v_{1}}\left[(v_{2}\kappa_{T2}-v_{1}\kappa_{T1})\left(\frac{\mathrm{d}P}{\mathrm{d}T}\right)^{2}\right],\end{aligned}$$
where subscripts 1 and 2 denote the different phases, $c_{p}$ is the specific heat capacity at constant pressure, $\alpha =(1/v)(\mathrm{d}v/\mathrm{d}T)_{P}$ is the thermal expansion coefficient, and $\kappa_{T}=-(1/v)(\mathrm{d}v/\mathrm{d}P)_{T}$ is the isothermal compressibility.
== See also ==
Van 't Hoff equation
Antoine equation
Lee–Kesler method
== References ==
== Bibliography ==
== Notes == | Wikipedia/August-Roche-Magnus_approximation |
The lapse rate is the rate at which an atmospheric variable, normally temperature in Earth's atmosphere, falls with altitude. Lapse rate arises from the word lapse (in its "becoming less" sense, not its "interruption" sense). In dry air, the adiabatic lapse rate (i.e., decrease in temperature of a parcel of air that rises in the atmosphere without exchanging energy with surrounding air) is 9.8 °C/km (5.4 °F per 1,000 ft). The saturated adiabatic lapse rate (SALR), or moist adiabatic lapse rate (MALR), is the decrease in temperature of a parcel of water-saturated air that rises in the atmosphere. It varies with the temperature and pressure of the parcel and is often in the range 3.6 to 9.2 °C/km (2 to 5 °F/1000 ft), as obtained from the International Civil Aviation Organization (ICAO). The environmental lapse rate is the decrease in temperature of air with altitude for a specific time and place (see below). It can be highly variable between circumstances.
Lapse rate corresponds to the vertical component of the spatial gradient of temperature. Although this concept is most often applied to the Earth's troposphere, it can be extended to any gravitationally supported parcel of gas.
== Environmental lapse rate ==
A formal definition from the Glossary of Meteorology is:
The decrease of an atmospheric variable with height, the variable being temperature unless otherwise specified.
Typically, the lapse rate is the negative of the rate of temperature change with altitude change:
$$\Gamma =-\frac{\mathrm{d}T}{\mathrm{d}z}$$
where $\Gamma$ (sometimes $L$) is the lapse rate given in units of temperature divided by units of altitude, $T$ is temperature, and $z$ is altitude.
The environmental lapse rate (ELR) is the actual rate of decrease of temperature with altitude in the atmosphere at a given time and location.
As an average, the International Civil Aviation Organization (ICAO) defines an international standard atmosphere (ISA) with a temperature lapse rate of 6.50 °C/km (3.56 °F or 1.98 °C/1,000 ft) from sea level to 11 km (36,090 ft or 6.8 mi). From 11 km up to 20 km (65,620 ft or 12.4 mi), the constant temperature is −56.5 °C (−69.7 °F), which is the lowest assumed temperature in the ISA. The standard atmosphere contains no moisture.
Unlike the idealized ISA, the temperature of the actual atmosphere does not always fall at a uniform rate with height. For example, there can be an inversion layer in which the temperature increases with altitude.
== Cause ==
The temperature profile of the atmosphere is a result of the interaction between radiative heating from sunlight, cooling to space via thermal radiation, and upward heat transport via natural convection (which carries hot air and latent heat upward). Above the tropopause, convection does not occur and all cooling is radiative.
Within the troposphere, the lapse rate is essentially the consequence of a balance between (a) radiative cooling of the air, which by itself would lead to a high lapse rate; and (b) convection, which is activated when the lapse rate exceeds a critical value; convection stabilizes the environmental lapse rate.
Sunlight hits the surface of the earth (land and sea) and heats them. The warm surface heats the air above it. In addition, nearly a third of absorbed sunlight is absorbed within the atmosphere, heating the atmosphere directly.
Thermal conduction helps transfer heat from the surface to the air; this conduction occurs within the few millimeters of air closest to the surface. However, above that thin interface layer, thermal conduction plays a negligible role in transferring heat within the atmosphere; this is because the thermal conductivity of air is very low.: 387
The air is radiatively cooled by greenhouse gases (water vapor, carbon dioxide, etc.) and clouds emitting longwave thermal radiation to space.
If radiation were the only way to transfer energy within the atmosphere, then the lapse rate near the surface would be roughly 40 °C/km and the greenhouse effect of gases in the atmosphere would keep the ground at roughly 333 K (60 °C; 140 °F).: 59–60
However, when air gets hot or humid, its density decreases. Thus, air which has been heated by the surface tends to rise and carry internal energy upward, especially if the air has been moistened by evaporation from water surfaces. This is the process of convection. Vertical convective motion stops when a parcel of air at a given altitude has the same density as the other air at the same elevation.
Convection carries hot, moist air upward and cold, dry air downward, with a net effect of transferring heat upward. This makes the air below cooler than it would otherwise be and the air above warmer. Because convection is available to transfer heat within the atmosphere, the lapse rate in the troposphere is reduced to around 6.5 °C/km and the greenhouse effect is reduced to a point where Earth has its observed surface temperature of around 288 K (15 °C; 59 °F).
== Convection and adiabatic expansion ==
As convection causes parcels of air to rise or fall, there is little heat transfer between those parcels and the surrounding air. Air has low thermal conductivity, and the bodies of air involved are very large; so transfer of heat by conduction is negligibly small. Also, intra-atmospheric radiative heat transfer is relatively slow and so is negligible for moving air. Thus, when air ascends or descends, there is little exchange of heat with the surrounding air. A process in which no heat is exchanged with the environment is referred to as an adiabatic process.
Air expands as it moves upward, and contracts as it moves downward. The expansion of rising air parcels, and the contraction of descending air parcels, are adiabatic processes, to a good approximation. When a parcel of air expands, it pushes on the air around it, doing thermodynamic work. Since the upward-moving and expanding parcel does work but gains no heat, it loses internal energy so that its temperature decreases. Downward-moving and contracting air has work done on it, so it gains internal energy and its temperature increases.
Adiabatic processes for air have a characteristic temperature-pressure curve. As air circulates vertically, the air takes on that characteristic gradient, called the adiabatic lapse rate. When the air contains little water, this lapse rate is known as the dry adiabatic lapse rate: the rate of temperature decrease is 9.8 °C/km (5.4 °F per 1,000 ft) (3.0 °C/1,000 ft). The reverse occurs for a sinking parcel of air.
When the environmental lapse rate is less than the adiabatic lapse rate, the atmosphere is stable and convection will not occur.: 63 The environmental lapse rate is forced towards the adiabatic lapse rate whenever air is convecting vertically.
Only the troposphere (up to approximately 12 kilometres (39,000 ft) of altitude) in the Earth's atmosphere undergoes convection: the stratosphere does not generally convect. However, some exceptionally energetic convection processes, such as volcanic eruption columns and overshooting tops associated with severe supercell thunderstorms, may locally and temporarily inject convection through the tropopause and into the stratosphere.
Energy transport in the atmosphere is more complex than the interaction between radiation and dry convection. The water cycle (including evaporation, condensation, precipitation) transports latent heat and affects atmospheric humidity levels, significantly influencing the temperature profile, as described below.
== Mathematics of the adiabatic lapse rate ==
The following calculations derive the temperature as a function of altitude for a packet of air which is ascending or descending without exchanging heat with its environment.
=== Dry adiabatic lapse rate ===
Thermodynamics defines an adiabatic process as:
$$P\,\mathrm{d}V=-\frac{V\,\mathrm{d}P}{\gamma }$$
The first law of thermodynamics can then be written as
$$mc_{\text{v}}\,\mathrm{d}T-\frac{V\,\mathrm{d}P}{\gamma }=0.$$
Also, since the density $\rho =m/V$ and $\gamma =c_{\text{p}}/c_{\text{v}}$, we can show that
$$\rho c_{\text{p}}\,\mathrm{d}T-\mathrm{d}P=0,$$
where $c_{\text{p}}$ is the specific heat at constant pressure.
Assuming an atmosphere in hydrostatic equilibrium:
$$\mathrm{d}P=-\rho g\,\mathrm{d}z,$$
where g is the standard gravity. Combining these two equations to eliminate the pressure, one arrives at the result for the dry adiabatic lapse rate (DALR),
$$\Gamma_{\text{d}}=-\frac{\mathrm{d}T}{\mathrm{d}z}=\frac{g}{c_{\text{p}}}=9.8~^{\circ}\text{C/km}$$
The DALR ($\Gamma_{\text{d}}$) is the temperature gradient experienced in an ascending or descending packet of air that is not saturated with water vapor, i.e., with less than 100% relative humidity.
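A minimal numerical check of this result, using the standard values g ≈ 9.81 m/s² and c_p ≈ 1004 J/(kg·K) for dry air:

```python
g = 9.81      # gravitational acceleration, m/s^2
cp = 1004.0   # specific heat of dry air at constant pressure, J/(kg*K)

dalr = g / cp  # K/m
print(f"Dry adiabatic lapse rate: {dalr * 1000:.1f} K/km")  # ~9.8 K/km
```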
=== Moist adiabatic lapse rate ===
The presence of water within the atmosphere (usually the troposphere) complicates the process of convection. Water vapor contains latent heat of vaporization. As a parcel of air rises and cools, it eventually becomes saturated; that is, the vapor pressure of water in equilibrium with liquid water has decreased (as temperature has decreased) to the point where it is equal to the actual vapor pressure of water. With further decrease in temperature the water vapor in excess of the equilibrium amount condenses, forming cloud, and releasing heat (latent heat of condensation). Before saturation, the rising air follows the dry adiabatic lapse rate. After saturation, the rising air follows the moist (or wet) adiabatic lapse rate. The release of latent heat is an important source of energy in the development of thunderstorms.
While the dry adiabatic lapse rate is a constant 9.8 °C/km (5.4 °F per 1,000 ft, 3 °C/1,000 ft), the moist adiabatic lapse rate varies strongly with temperature. A typical value is around 5 °C/km, (9 °F/km, 2.7 °F/1,000 ft, 1.5 °C/1,000 ft). The formula for the saturated adiabatic lapse rate (SALR) or moist adiabatic lapse rate (MALR) is given by:
$$\Gamma_{\text{w}}=g\,\frac{1+\dfrac{H_{\text{v}}\,r}{R_{\text{sd}}\,T}}{c_{\text{pd}}+\dfrac{H_{\text{v}}^{2}\,r}{R_{\text{sw}}\,T^{2}}}$$
where $g$ is the gravitational acceleration, $H_{\text{v}}$ is the latent heat of vaporization of water, $r$ is the mixing ratio of water vapor mass to dry air mass, $R_{\text{sd}}$ and $R_{\text{sw}}$ are the specific gas constants of dry air and of water vapor respectively, $c_{\text{pd}}$ is the specific heat of dry air at constant pressure, and $T$ is the temperature.
The SALR or MALR ($\Gamma_{\text{w}}$) is the temperature gradient experienced in an ascending or descending packet of air that is saturated with water vapor, i.e., with 100% relative humidity.
== Effect on weather ==
The varying environmental lapse rates throughout the Earth's atmosphere are of critical importance in meteorology, particularly within the troposphere. They are used to determine if the parcel of rising air will rise high enough for its water to condense to form clouds, and, having formed clouds, whether the air will continue to rise and form bigger shower clouds, and whether these clouds will get even bigger and form cumulonimbus clouds (thunder clouds).
As unsaturated air rises, its temperature drops at the dry adiabatic rate. The dew point also drops (as a result of decreasing air pressure) but much more slowly, typically about 2 °C per 1,000 m. If unsaturated air rises far enough, eventually its temperature will reach its dew point, and condensation will begin to form. This altitude is known as the lifting condensation level (LCL) when mechanical lift is present and the convective condensation level (CCL) when mechanical lift is absent, in which case, the parcel must be heated from below to its convective temperature. The cloud base will be somewhere within the layer bounded by these parameters.
The difference between the dry adiabatic lapse rate and the rate at which the dew point drops is around 4.5 °C per 1,000 m. Given a difference in temperature and dew point readings on the ground, one can easily find the LCL by multiplying the difference by 125 m/°C.
If the environmental lapse rate is less than the moist adiabatic lapse rate, the air is absolutely stable — rising air will cool faster than the surrounding air and lose buoyancy. This often happens in the early morning, when the air near the ground has cooled overnight. Cloud formation in stable air is unlikely.
If the environmental lapse rate is between the moist and dry adiabatic lapse rates, the air is conditionally unstable — an unsaturated parcel of air does not have sufficient buoyancy to rise to the LCL or CCL, and it is stable to weak vertical displacements in either direction. If the parcel is saturated it is unstable and will rise to the LCL or CCL, and either be halted due to an inversion layer of convective inhibition, or if lifting continues, deep, moist convection (DMC) may ensue, as a parcel rises to the level of free convection (LFC), after which it enters the free convective layer (FCL) and usually rises to the equilibrium level (EL).
If the environmental lapse rate is larger than the dry adiabatic lapse rate, it has a superadiabatic lapse rate, the air is absolutely unstable — a parcel of air will gain buoyancy as it rises both below and above the lifting condensation level or convective condensation level. This often happens in the afternoon mainly over land masses. In these conditions, the likelihood of cumulus clouds, showers or even thunderstorms is increased.
Meteorologists use radiosondes to measure the environmental lapse rate and compare it to the predicted adiabatic lapse rate to forecast the likelihood that air will rise. Charts of the environmental lapse rate are known as thermodynamic diagrams, examples of which include Skew-T log-P diagrams and tephigrams. (See also Thermals).
The difference in moist adiabatic lapse rate and the dry rate is the cause of foehn wind phenomenon (also known as "Chinook winds" in parts of North America). The phenomenon exists because warm moist air rises through orographic lifting up and over the top of a mountain range or large mountain. The temperature decreases with the dry adiabatic lapse rate, until it hits the dew point, where water vapor in the air begins to condense. Above that altitude, the adiabatic lapse rate decreases to the moist adiabatic lapse rate as the air continues to rise. Condensation is also commonly followed by precipitation on the top and windward sides of the mountain. As the air descends on the leeward side, it is warmed by adiabatic compression at the dry adiabatic lapse rate. Thus, the foehn wind at a certain altitude is warmer than the corresponding altitude on the windward side of the mountain range. In addition, because the air has lost much of its original water vapor content, the descending air creates an arid region on the leeward side of the mountain.
== Impact on the greenhouse effect ==
If the environmental lapse rate were zero, so that the atmosphere had the same temperature at all elevations, then there would be no greenhouse effect. This doesn't mean the lapse rate and the greenhouse effect are the same thing; rather, the lapse rate is a prerequisite for the greenhouse effect.
The presence of greenhouse gases on a planet causes radiative cooling of the air, which leads to the formation of a non-zero lapse rate. So, the presence of greenhouse gases leads to there being a greenhouse effect at a global level. However, this need not be the case at a localized level.
The localized greenhouse effect is stronger in locations where the lapse rate is stronger. In Antarctica, thermal inversions in the atmosphere (so that air at higher altitudes is warmer) sometimes cause the localized greenhouse effect to become negative (signifying enhanced radiative cooling to space instead of inhibited radiative cooling as is the case for a positive greenhouse effect).
== Lapse rate in an isolated column of gas ==
A question has sometimes arisen as to whether a temperature gradient will arise in a column of still air in a gravitational field without external energy flows. This issue was addressed by James Clerk Maxwell, who established in 1868 that if any temperature gradient forms, then that temperature gradient must be universal (i.e., the gradient must be same for all materials) or the second law of thermodynamics would be violated. Maxwell also concluded that the universal result must be one in which the temperature is uniform, i.e., the lapse rate is zero.
Santiago and Visser (2019) confirm the correctness of Maxwell's conclusion (zero lapse rate) provided relativistic effects are neglected. When relativity is taken into account, gravity gives rise to an extremely small lapse rate, the Tolman gradient (derived by R. C. Tolman in 1930). At Earth's surface, the Tolman gradient would be about
$$\Gamma_{t}=T_{s}\times (10^{-16}~\mathrm{m^{-1}}),$$
where $T_{s}$ is the temperature of the gas at the elevation of Earth's surface. Santiago and Visser remark that "gravity is the only force capable of creating temperature gradients in thermal equilibrium states without violating the laws of thermodynamics" and "the existence of Tolman's temperature gradient is not at all controversial (at least not within the general relativity community)."
== See also ==
Adiabatic process
Atmospheric thermodynamics
Fluid dynamics
Foehn wind
Lapse rate climate feedback
Scale height
== Notes ==
== References ==
== Further reading ==
Beychok, Milton R. (2005). Fundamentals Of Stack Gas Dispersion (4th ed.). author-published. ISBN 978-0-9644588-0-2. www.air-dispersion.com
R. R. Rogers and M. K. Yau (1989). Short Course in Cloud Physics (3rd ed.). Butterworth-Heinemann. ISBN 978-0-7506-3215-7.
== External links ==
Definition, equations and tables of lapse rate from the Planetary Data system.
National Science Digital Library glossary:
Lapse Rate
Environmental lapse rate
Absolute stable air
An introduction to lapse rate calculation from first principles from U. Texas | Wikipedia/Lapse_rate |
A diffuser is "a device for reducing the velocity and increasing the static pressure of a fluid passing through a system". The fluid's static pressure rise as it passes through a duct is commonly referred to as pressure recovery. In contrast, a nozzle is used to increase the discharge velocity and lower the pressure of a fluid passing through it.
Frictional effects can sometimes be important in the analysis, but usually they are neglected. Ducts containing fluids flowing at low velocity can usually be analyzed using Bernoulli's principle. Analyzing ducts with flow at higher velocities, with Mach numbers in excess of 0.3, usually requires compressible flow relations.
A typical subsonic diffuser is a duct that increases in area in the direction of flow. As the area increases, fluid velocity decreases, and static pressure rises.
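To make the area–velocity–pressure relationship concrete, here is a sketch of an incompressible, frictionless calculation for a subsonic diffuser using continuity and Bernoulli's principle; the inlet conditions and the 2:1 area ratio are illustrative assumptions:

```python
rho = 1.2             # air density, kg/m^3
A1, A2 = 0.05, 0.10   # inlet and outlet areas, m^2 (2:1 expansion)
V1 = 20.0             # inlet velocity, m/s

# Continuity: mass flow is conserved, so A1*V1 = A2*V2.
V2 = V1 * A1 / A2

# Bernoulli: p + 0.5*rho*V^2 is constant along the duct (ideal case),
# so the static pressure recovery equals the drop in dynamic pressure.
dp = 0.5 * rho * (V1**2 - V2**2)
print(f"Outlet velocity: {V2:.1f} m/s")          # 10.0 m/s
print(f"Static pressure recovery: {dp:.0f} Pa")  # 180 Pa
```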
== Supersonic diffusers ==
A supersonic diffuser is a duct that decreases in area in the direction of flow which causes the fluid temperature, pressure, and density to increase, and velocity to decrease. These changes occur because the fluid is compressible. Shock waves may also play an important role in a supersonic diffuser.
== Applications ==
Diffusers are very common in heating, ventilating, and air-conditioning systems. Diffusers are used in both all-air and air-water HVAC systems, as part of room air distribution subsystems, and serve several purposes:
To deliver both conditioning and ventilating air
Evenly distribute the flow of air, in the desired directions
To enhance mixing of room air into the primary air being discharged
Often to cause the air jet(s) to attach to a ceiling or other surface, taking advantage of the Coandă effect
To create low-velocity air movement in the occupied portion of room
Accomplish the above while producing the minimum amount of noise
When possible, dampers, extractors, and other flow control devices should not be placed near diffusers' inlets (necks); they should either not be used at all or be placed far upstream, as they have been shown to dramatically increase noise production. For as-cataloged diffuser performance, a straight section of duct needs to serve the diffuser. An elbow, or kinked flex duct, just before a diffuser often leads to poor air distribution and increased noise.
Diffusers can be round or rectangular in shape, or can take the form of linear slot diffusers (LSDs). For example, linear slot diffusers consist of one or several long, narrow slots, mostly semi-concealed in a fixed or suspended ceiling, with airfoils behind the slots directing the airflow in the desired direction.
Occasionally, diffusers are used in a reverse fashion, as air inlets or returns. This is especially true for linear slot diffusers and perforated ('perf') diffusers. More commonly, though, grilles are used as return or exhaust air inlets.
== See also ==
Bernoulli's principle
Compressible flow
Duct (flow)
Mass flow rate
Air conditioning
ASHRAE
SMACNA
Nozzle
== References == | Wikipedia/Diffuser_(thermodynamics) |
Energy Management Software (EMS) is a general term and category referring to a variety of energy-related software applications that provide energy management functions, including utility bill tracking, real-time energy metering, consumption control (building HVAC and lighting control systems), generation control (solar PV and ESS), building simulation and modeling, carbon and sustainability reporting, IT equipment management, grid services (demand response, virtual power plant, etc.), and/or energy audits. Managing energy can require a system-of-systems approach.
Energy management software often provides tools for reducing energy costs and consumption for buildings, communities or industries. EMS collects energy data and uses it for three main purposes: Reporting, Monitoring and Engagement. Reporting may include verification of energy data, benchmarking, and setting high-level energy use reduction targets. Monitoring may include trend analysis and tracking energy consumption to identify cost-saving opportunities. Engagement can mean real-time responses (automated or manual), or the initiation of a dialogue between occupants and building managers to promote energy conservation. One engagement method that has recently gained popularity is the real-time energy consumption display available in web applications or an onsite energy dashboard/display.
== Metering and Data collection ==
Energy Management Software collects historic and/or real-time interval data, with intervals varying from quarterly billing statements to minute-by-minute smart meter readings. In addition to energy consumption, an EMS collects data related to variables that impact energy consumption such as number of people in the building, outside temperature, number of produced units, and more. The data are collected from interval meters, Building Automation Systems (BAS), directly from utilities, directly from sensors on electrical circuits, or other sources. Past bills can be used to provide a comparison between pre- and post-EMS energy consumption.
== Data analytics ==
Through energy data analytics, EMS assists users in composing mathematical formulas for analyzing, forecasting and tracking energy conservation measures, to quantify the success of a measure once implemented. Energy analytics help energy managers combine energy and non-energy data to create key performance indicators, and to calculate carbon footprint, greenhouse gas emissions, renewable heat incentives and energy efficiency certifications to meet local climate change policies, directives, regulations and certifications. Energy analytics also include intelligent algorithms, such as classification and machine learning, that analyse the energy consumption of buildings and/or their equipment, build up a memory of energy use patterns, learn good and bad energy consumption behaviours, and notify users in case of abnormal energy use.
== Reporting ==
Reporting tools are targeted at owners and executives who want to automate energy and emissions auditing. Cost and consumption data from a number of buildings can be aggregated or compared with the software, saving time relative to manual reporting. EMS offers more detailed energy information than utility billing can provide; another advantage is that outside factors affecting energy use, such as weather condition or building occupancy, can be accounted for as part of the reporting process. This information can be used to prioritize energy savings initiatives and balance energy savings against energy-related capital expenditures.
Bill verification can be used to compare metered consumption against billed consumption. Bill analysis can also demonstrate the impact of different energy costs, for example by comparing electrical demand charges to consumption costs.
Greenhouse gas (GHG) accounting can calculate direct or indirect GHG emissions, which may be used for internal reporting or enterprise carbon accounting.
== Monitoring ==
Monitoring tools track and display real-time and historical data. Often, EMS includes various benchmarking tools, such as energy consumption per square foot, weather normalization or more advanced analysis using energy modelling algorithms to identify anomalous consumption. Seeing exactly when energy is used, combined with anomaly recognition, can allow Facility or Energy Managers to identify savings opportunities.
Initiatives such as demand shaving, replacement of malfunctioning equipment, retrofits of inefficient equipment, and removal of unnecessary loads can be discovered and coordinated using the EMS. For example, an unexpected energy spike at a specific time each day may indicate an improperly set or malfunctioning timer. These tools can also be used for Energy Monitoring and Targeting. EMS uses models to correct for variable factors such as weather when performing historical comparisons to verify the effect of conservation and efficiency initiatives.
EMS may offer alerts, via text or email messages, when consumption values exceed pre-defined thresholds based on consumption or cost. These thresholds may be set at absolute levels, or use an energy model to determine when consumption is abnormally high or low. More recently, smartphones and tablets are becoming mainstream platforms for EMS.
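As a simple illustration of threshold-based alerting on interval data, the sketch below flags readings that deviate from a rolling baseline; the readings, window length, and threshold are all invented for demonstration and do not reflect any particular product:

```python
import statistics

# Hypothetical hourly consumption readings (kWh).
readings = [42, 40, 41, 43, 44, 42, 41, 95, 43, 42, 41, 40]

WINDOW = 6        # hours of history used as the baseline
THRESHOLD = 1.5   # alert if a reading exceeds 150% of the baseline mean

for i in range(WINDOW, len(readings)):
    baseline = statistics.mean(readings[i - WINDOW:i])
    if readings[i] > THRESHOLD * baseline:
        print(f"Alert: hour {i} used {readings[i]} kWh "
              f"(baseline {baseline:.1f} kWh)")
```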
== Automated and Manual Engagement ==
Engagement can refer to automated or manual responses to collected and analyzed energy data. Building control systems can respond as readily to energy fluctuation as a heating system can respond to temperature variation. Demand spikes can trigger equipment power-down processes, with or without human intervention.
Another objective of Engagement is to connect occupants’ daily choices with building energy consumption. By displaying real-time consumption information, occupants see the immediate impact of their actions. The software can be used to promote energy conservation initiatives, offer advice to the occupants, or provide a forum for feedback on sustainability initiatives.
People-driven energy conservation programs, such as those sponsored by Energy Education, can be highly effective in reducing energy use and cost.
Letting occupants know their real-time consumption alone can be responsible for a 7% reduction in energy consumption.
== Sustainability management ==
Monitoring the flows of energy in a building allows users to directly track part of a company's sustainability goals, and thereby to influence them indirectly. For this reason, EMS is becoming a tool for sustainability managers in the corporate sphere, and has given rise to a new branch of software, the Sustainability Management System (SMS), which can form part of an EMS.
Monitoring energy flows allows Scope 1 and Scope 2 emissions to be calculated from EMS data.
== See also ==
Energy & Facility Management Software
Building automation
Energy monitoring and targeting
Energy saving
Google PowerMeter
EnergyCAP
RETScreen
Energy Management System
== References ==
== External links ==
U.S. Dept of Energy's Building Technologies Program Archived 2015-07-06 at the Wayback Machine | Wikipedia/Energy_management_software |
In multivariate calculus, a differential or differential form is said to be exact or perfect (exact differential), as contrasted with an inexact differential, if it is equal to the general differential $dQ$ for some differentiable function $Q$ in an orthogonal coordinate system (hence $Q$ is a multivariable function whose variables are independent, as they are always expected to be when treated in multivariable calculus).
An exact differential is sometimes also called a total differential, or a full differential, or, in the study of differential geometry, it is termed an exact form.
The integral of an exact differential over any integral path is path-independent, and this fact is used to identify state functions in thermodynamics.
== Overview ==
=== Definition ===
Although we work in three dimensions here, the definitions of exact differentials in other dimensions are structurally similar. In three dimensions, a form of the type

$$A(x,y,z)\,dx + B(x,y,z)\,dy + C(x,y,z)\,dz$$

is called a differential form. This form is called exact on an open domain $D \subset \mathbb{R}^3$ in space if there exists some differentiable scalar function $Q = Q(x,y,z)$ defined on $D$ such that

$$dQ \equiv \left(\frac{\partial Q}{\partial x}\right)_{y,z} dx + \left(\frac{\partial Q}{\partial y}\right)_{x,z} dy + \left(\frac{\partial Q}{\partial z}\right)_{x,y} dz, \qquad dQ = A\,dx + B\,dy + C\,dz$$

throughout $D$, where $x, y, z$ are orthogonal coordinates (e.g., Cartesian, cylindrical, or spherical coordinates). In other words, in some open domain of a space, a differential form is an exact differential if it is equal to the general differential of a differentiable function in an orthogonal coordinate system.
The subscripts outside the parenthesis in the above mathematical expression indicate which variables are being held constant during differentiation. Due to the definition of the partial derivative, these subscripts are not required, but they are explicitly shown here as reminders.
=== Integral path independence ===
The exact differential for a differentiable scalar function $Q$ defined on an open domain $D \subset \mathbb{R}^n$ is equal to $dQ = \nabla Q \cdot d\mathbf{r}$, where $\nabla Q$ is the gradient of $Q$, $\cdot$ represents the scalar product, and $d\mathbf{r}$ is the general differential displacement vector, if an orthogonal coordinate system is used. If $Q$ is of differentiability class $C^1$ (continuously differentiable), then $\nabla Q$ is by definition a conservative vector field for the corresponding potential $Q$. In three-dimensional space, one may write $d\mathbf{r} = (dx, dy, dz)$ and $\nabla Q = \left(\frac{\partial Q}{\partial x}, \frac{\partial Q}{\partial y}, \frac{\partial Q}{\partial z}\right)$.

The gradient theorem states that

$$\int_i^f dQ = \int_i^f \nabla Q(\mathbf{r}) \cdot d\mathbf{r} = Q(f) - Q(i),$$

which does not depend on which integral path between the endpoints $i$ and $f$ is chosen. So it is concluded that the integral of an exact differential is independent of the choice of integral path between given endpoints (path independence).

In three-dimensional space, if $\nabla Q$ defined on an open domain $D \subset \mathbb{R}^3$ is of differentiability class $C^1$ (equivalently, $Q$ is of class $C^2$), then this path independence can also be proved by using the vector calculus identity $\nabla \times (\nabla Q) = \mathbf{0}$ and Stokes' theorem,

$$\oint_{\partial \Sigma} \nabla Q \cdot d\mathbf{r} = \iint_{\Sigma} (\nabla \times \nabla Q) \cdot d\mathbf{a} = 0,$$

for a simple closed loop $\partial \Sigma$ bounding a smooth oriented surface $\Sigma$. If the open domain $D$ is simply connected (roughly speaking, a single-piece open space without a hole in it), then any irrotational vector field (a $C^1$ vector field $\mathbf{v}$ whose curl is zero, i.e., $\nabla \times \mathbf{v} = \mathbf{0}$) is path-independent by Stokes' theorem. Hence, in a simply connected open region, any $C^1$ vector field with the path-independence property (that is, a conservative vector field) must also be irrotational, and vice versa; path independence and conservative vector fields are equivalent there.
==== Thermodynamic state function ====
In thermodynamics, when $dQ$ is exact, the function $Q$ is a state function of the system: a mathematical function which depends solely on the current equilibrium state, not on the path taken to reach that state. Internal energy $U$, entropy $S$, enthalpy $H$, Helmholtz free energy $A$, and Gibbs free energy $G$ are state functions. Generally, neither work $W$ nor heat $Q$ is a state function. (Note: $Q$ is commonly used to represent heat in physics. It should not be confused with the use earlier in this article as the parameter of an exact differential.)
=== One dimension ===
In one dimension, a differential form $A(x)\,dx$ is exact if and only if $A$ has an antiderivative (but not necessarily one in terms of elementary functions). If $A$ has an antiderivative, let $Q$ be an antiderivative of $A$, so that $\frac{dQ}{dx} = A$; then $A(x)\,dx$ obviously satisfies the condition for exactness. If $A$ does not have an antiderivative, then we cannot write $dQ = \frac{dQ}{dx}\,dx$ with $A = \frac{dQ}{dx}$ for a differentiable function $Q$, so $A(x)\,dx$ is inexact.
=== Two and three dimensions ===
By the symmetry of second derivatives, for any "well-behaved" (non-pathological) function $Q$, we have

$$\frac{\partial^2 Q}{\partial x\,\partial y} = \frac{\partial^2 Q}{\partial y\,\partial x}.$$

Hence, in a simply connected region $R$ of the $xy$-plane, where $x$ and $y$ are independent, a differential form

$$A(x,y)\,dx + B(x,y)\,dy$$

is an exact differential if and only if the equation

$$\left(\frac{\partial A}{\partial y}\right)_x = \left(\frac{\partial B}{\partial x}\right)_y$$

holds. If it is an exact differential, so that $A = \frac{\partial Q}{\partial x}$ and $B = \frac{\partial Q}{\partial y}$, then $Q$ is a twice-differentiable function of $x$ and $y$, and the symmetry of second derivatives gives

$$\left(\frac{\partial A}{\partial y}\right)_x = \frac{\partial^2 Q}{\partial y\,\partial x} = \frac{\partial^2 Q}{\partial x\,\partial y} = \left(\frac{\partial B}{\partial x}\right)_y.$$

Conversely, if $\left(\frac{\partial A}{\partial y}\right)_x = \left(\frac{\partial B}{\partial x}\right)_y$ holds throughout the simply connected region, then a potential $Q$ with $A = \frac{\partial Q}{\partial x}$ and $B = \frac{\partial Q}{\partial y}$ can be constructed by integration, so the form is exact; the sketch below makes this construction concrete.
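The two-dimensional test is mechanical enough to automate. The following is a minimal sketch using the sympy library; the form $2xy\,dx + x^2\,dy$ is an illustrative choice, not one taken from the text above.

```python
# Minimal sketch: test a 2-D form A dx + B dy for exactness and, if exact,
# reconstruct a potential Q by integration. The particular form A = 2*x*y,
# B = x**2 is an illustrative example.
import sympy as sp

x, y = sp.symbols("x y")
A = 2 * x * y
B = x**2

# Exactness test: dA/dy must equal dB/dx on a simply connected region.
is_exact = sp.simplify(sp.diff(A, y) - sp.diff(B, x)) == 0
print("exact:", is_exact)  # True

if is_exact:
    # Integrate A along x, then fix the leftover function of y using B.
    Q = sp.integrate(A, x)                    # x**2*y, plus an unknown g(y)
    g_prime = sp.simplify(B - sp.diff(Q, y))  # here 0, so g is a constant
    Q = Q + sp.integrate(g_prime, y)
    print("Q =", Q)                           # x**2*y
    # Check: dQ reproduces the original form.
    assert sp.simplify(sp.diff(Q, x) - A) == 0
    assert sp.simplify(sp.diff(Q, y) - B) == 0
```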
For three dimensions, in a simply connected region $R$ of the $xyz$-coordinate system, a similar argument shows that a differential

$$dQ = A(x,y,z)\,dx + B(x,y,z)\,dy + C(x,y,z)\,dz$$

is an exact differential if and only if between the functions $A$, $B$ and $C$ there exist the relations

$$\left(\frac{\partial A}{\partial y}\right)_{x,z} = \left(\frac{\partial B}{\partial x}\right)_{y,z}; \qquad \left(\frac{\partial A}{\partial z}\right)_{x,y} = \left(\frac{\partial C}{\partial x}\right)_{y,z}; \qquad \left(\frac{\partial B}{\partial z}\right)_{x,y} = \left(\frac{\partial C}{\partial y}\right)_{x,z}.$$

These conditions are equivalent to the following statement: if $G$ is the graph of this vector-valued function, then for all tangent vectors $X$, $Y$ of the surface $G$, $s(X, Y) = 0$, with $s$ the symplectic form.

These conditions, which are easy to generalize, arise from the independence of the order of differentiation in the calculation of the second derivatives. Thus, for a differential $dQ$ that is a function of four variables to be exact, there are six conditions (the combination $C(4,2) = 6$) to satisfy.
== Partial differential relations ==
If a differentiable function $z(x,y)$ is one-to-one (injective) in each independent variable separately (e.g., $z(x,y)$ is one-to-one in $x$ at a fixed $y$, while it is not necessarily one-to-one in $(x,y)$ jointly), then the following total differentials exist, because each independent variable is then a differentiable function of the other two variables, e.g., $x(y,z)$:

$$dx = \left(\frac{\partial x}{\partial y}\right)_z dy + \left(\frac{\partial x}{\partial z}\right)_y dz$$

$$dz = \left(\frac{\partial z}{\partial x}\right)_y dx + \left(\frac{\partial z}{\partial y}\right)_x dy.$$
Substituting the first equation into the second and rearranging, we obtain

$$dz = \left(\frac{\partial z}{\partial x}\right)_y \left[\left(\frac{\partial x}{\partial y}\right)_z dy + \left(\frac{\partial x}{\partial z}\right)_y dz\right] + \left(\frac{\partial z}{\partial y}\right)_x dy,$$

$$dz = \left[\left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial y}\right)_z + \left(\frac{\partial z}{\partial y}\right)_x\right] dy + \left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial z}\right)_y dz,$$

$$\left[1 - \left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial z}\right)_y\right] dz = \left[\left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial y}\right)_z + \left(\frac{\partial z}{\partial y}\right)_x\right] dy.$$

Since $y$ and $z$ are independent variables, $dy$ and $dz$ may be chosen without restriction. For this last equation to hold in general, the bracketed terms must each be equal to zero. The left bracket equal to zero leads to the reciprocity relation, while the right bracket equal to zero leads to the cyclic relation, as shown below.
=== Reciprocity relation ===
Setting the first term in brackets equal to zero yields

$$\left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial z}\right)_y = 1.$$

A slight rearrangement gives a reciprocity relation,

$$\left(\frac{\partial z}{\partial x}\right)_y = \frac{1}{\left(\frac{\partial x}{\partial z}\right)_y}.$$

There are two more permutations of the foregoing derivation, giving a total of three reciprocity relations between $x$, $y$ and $z$.
=== Cyclic relation ===
The cyclic relation is also known as the cyclic rule or the triple product rule. Setting the second term in brackets equal to zero yields

$$\left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial y}\right)_z = -\left(\frac{\partial z}{\partial y}\right)_x.$$

Using a reciprocity relation for $\frac{\partial z}{\partial y}$ on this equation and reordering gives a cyclic relation (the triple product rule),

$$\left(\frac{\partial x}{\partial y}\right)_z \left(\frac{\partial y}{\partial z}\right)_x \left(\frac{\partial z}{\partial x}\right)_y = -1.$$
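As a quick check of the cyclic relation, consider the ideal gas law $pV = nRT$ (a standard illustrative example, not drawn from the derivation above). The three partial derivatives are $\left(\frac{\partial p}{\partial V}\right)_T = -\frac{nRT}{V^2}$, $\left(\frac{\partial V}{\partial T}\right)_p = \frac{nR}{p}$, and $\left(\frac{\partial T}{\partial p}\right)_V = \frac{V}{nR}$, so

$$\left(\frac{\partial p}{\partial V}\right)_T \left(\frac{\partial V}{\partial T}\right)_p \left(\frac{\partial T}{\partial p}\right)_V = \left(-\frac{nRT}{V^2}\right)\left(\frac{nR}{p}\right)\left(\frac{V}{nR}\right) = -\frac{nRT}{pV} = -1,$$

as the rule requires.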
If, instead, reciprocity relations for $\frac{\partial x}{\partial y}$ and $\frac{\partial y}{\partial z}$ are used with subsequent rearrangement, a standard form for implicit differentiation is obtained:

$$\left(\frac{\partial y}{\partial x}\right)_z = -\frac{\left(\frac{\partial z}{\partial x}\right)_y}{\left(\frac{\partial z}{\partial y}\right)_x}.$$
== Some useful equations derived from exact differentials in two dimensions ==
(See also Bridgman's thermodynamic equations for the use of exact differentials in the theory of thermodynamic equations)
Suppose we have five state functions $z, x, y, u$, and $v$. Suppose that the state space is two-dimensional and any of the five quantities are differentiable. Then by the chain rule

$$dz = \left(\frac{\partial z}{\partial x}\right)_y dx + \left(\frac{\partial z}{\partial y}\right)_x dy = \left(\frac{\partial z}{\partial u}\right)_v du + \left(\frac{\partial z}{\partial v}\right)_u dv \qquad (1)$$

but also by the chain rule:

$$dx = \left(\frac{\partial x}{\partial u}\right)_v du + \left(\frac{\partial x}{\partial v}\right)_u dv \qquad (2)$$

and

$$dy = \left(\frac{\partial y}{\partial u}\right)_v du + \left(\frac{\partial y}{\partial v}\right)_u dv \qquad (3)$$

so that (by substituting (2) and (3) into (1)):

$$dz = \left[\left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial u}\right)_v + \left(\frac{\partial z}{\partial y}\right)_x \left(\frac{\partial y}{\partial u}\right)_v\right] du + \left[\left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial v}\right)_u + \left(\frac{\partial z}{\partial y}\right)_x \left(\frac{\partial y}{\partial v}\right)_u\right] dv \qquad (4)$$

which implies that (by comparing (4) with (1)):

$$\left(\frac{\partial z}{\partial u}\right)_v = \left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial u}\right)_v + \left(\frac{\partial z}{\partial y}\right)_x \left(\frac{\partial y}{\partial u}\right)_v \qquad (5)$$

Letting $v = y$ in (5) gives:

$$\left(\frac{\partial z}{\partial u}\right)_y = \left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial u}\right)_y \qquad (6)$$

Letting $u = y$ in (5) gives:

$$\left(\frac{\partial z}{\partial y}\right)_v = \left(\frac{\partial z}{\partial y}\right)_x + \left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial y}\right)_v \qquad (7)$$

Letting $u = y$ and $v = z$ in (7) gives:

$$0 = \left(\frac{\partial z}{\partial y}\right)_x + \left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial y}\right)_z \qquad (8)$$

Using $\left(\frac{\partial a}{\partial b}\right)_c = 1 \Big/ \left(\frac{\partial b}{\partial a}\right)_c$ gives the triple product rule:

$$\left(\frac{\partial x}{\partial y}\right)_z \left(\frac{\partial y}{\partial z}\right)_x \left(\frac{\partial z}{\partial x}\right)_y = -1.$$
== See also ==
Closed and exact differential forms for a higher-level treatment
Differential (mathematics)
Inexact differential
Integrating factor for solving non-exact differential equations by making them exact
Exact differential equation
== References ==
== External links ==
Inexact Differential – from Wolfram MathWorld
Exact and Inexact Differentials – University of Arizona
Exact and Inexact Differentials – University of Texas
Exact Differential – from Wolfram MathWorld
The laws of thermodynamics are a set of scientific laws which define a group of physical quantities, such as temperature, energy, and entropy, that characterize thermodynamic systems in thermodynamic equilibrium. The laws also use various parameters for thermodynamic processes, such as thermodynamic work and heat, and establish relationships between them. They state empirical facts that form a basis of precluding the possibility of certain phenomena, such as perpetual motion. In addition to their use in thermodynamics, they are important fundamental laws of physics in general and are applicable in other natural sciences.
Traditionally, thermodynamics has recognized three fundamental laws, simply named by an ordinal identification, the first law, the second law, and the third law. A more fundamental statement was later labelled as the zeroth law after the first three laws had been established.
The zeroth law of thermodynamics defines thermal equilibrium and forms a basis for the definition of temperature: if two systems are each in thermal equilibrium with a third system, then they are in thermal equilibrium with each other.
The first law of thermodynamics states that, when energy passes into or out of a system (as work, heat, or matter), the system's internal energy changes in accordance with the law of conservation of energy. This also results in the observation that, in an externally isolated system, even with internal changes, the sum of all forms of energy must remain constant, as energy cannot be created or destroyed.
The second law of thermodynamics states that in a natural thermodynamic process, the sum of the entropies of the interacting thermodynamic systems never decreases. A common corollary of the statement is that heat does not spontaneously pass from a colder body to a warmer body.
The third law of thermodynamics states that a system's entropy approaches a constant value as the temperature approaches absolute zero. With the exception of non-crystalline solids (glasses), the entropy of a system at absolute zero is typically close to zero.
The first and second laws prohibit two kinds of perpetual motion machines, respectively: the perpetual motion machine of the first kind which produces work with no energy input, and the perpetual motion machine of the second kind which spontaneously converts thermal energy into mechanical work.
== History ==
The history of thermodynamics is fundamentally interwoven with the history of physics and the history of chemistry, and ultimately dates back to theories of heat in antiquity. The laws of thermodynamics are the result of progress made in this field over the nineteenth and early twentieth centuries. The first established thermodynamic principle, which eventually became the second law of thermodynamics, was formulated by Sadi Carnot in 1824 in his book Reflections on the Motive Power of Fire. By 1860, as formalized in the works of scientists such as Rudolf Clausius and William Thomson, what are now known as the first and second laws were established. Later, Nernst's theorem (or Nernst's postulate), which is now known as the third law, was formulated by Walther Nernst over the period 1906–1912. While the numbering of the laws is universal today, various textbooks throughout the 20th century have numbered the laws differently. In some fields, the second law was considered to deal with the efficiency of heat engines only, whereas what was called the third law dealt with entropy increases. Gradually, this resolved itself and a zeroth law was later added to allow for a self-consistent definition of temperature. Additional laws have been suggested, but have not achieved the generality of the four accepted laws, and are generally not discussed in standard textbooks.
== Zeroth law ==
The zeroth law of thermodynamics provides for the foundation of temperature as an empirical parameter in thermodynamic systems and establishes the transitive relation between the temperatures of multiple bodies in thermal equilibrium. The law may be stated in the following form:
If two systems are both in thermal equilibrium with a third system, then they are in thermal equilibrium with each other.
Though this version of the law is one of the most commonly stated versions, it is only one of a diversity of statements that are labeled as "the zeroth law". Some statements go further, so as to supply the important physical fact that temperature is one-dimensional and that one can conceptually arrange bodies in a real number sequence from colder to hotter.
These concepts of temperature and of thermal equilibrium are fundamental to thermodynamics and were clearly stated in the nineteenth century. The name 'zeroth law' was invented by Ralph H. Fowler in the 1930s, long after the first, second, and third laws were widely recognized. The law allows the definition of temperature in a non-circular way without reference to entropy, its conjugate variable. Such a temperature definition is said to be 'empirical'.
== First law ==
The first law of thermodynamics is a version of the law of conservation of energy, adapted for thermodynamic processes. In general, the conservation law states that the total energy of an isolated system is constant; energy can be transformed from one form to another, but can be neither created nor destroyed.
In a closed system (i.e. there is no transfer of matter into or out of the system), the first law states that the change in internal energy of the system (ΔUsystem) is equal to the difference between the heat supplied to the system (Q) and the work (W) done by the system on its surroundings. (Note, an alternate sign convention, not used in this article, is to define W as the work done on the system by its surroundings):
$$\Delta U_{\mathrm{system}} = Q - W.$$
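For example (an illustrative numerical case): if a gas absorbs $Q = 100\ \mathrm{J}$ of heat while doing $W = 40\ \mathrm{J}$ of work on its surroundings, its internal energy rises by

$$\Delta U_{\mathrm{system}} = 100\ \mathrm{J} - 40\ \mathrm{J} = 60\ \mathrm{J}.$$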
For processes that include the transfer of matter, a further statement is needed.
When two initially isolated systems are combined into a new system, then the total internal energy of the new system, Usystem, will be equal to the sum of the internal energies of the two initial systems, U1 and U2:
$$U_{\mathrm{system}} = U_1 + U_2.$$
The First Law encompasses several principles:
Conservation of energy, which says that energy can be neither created nor destroyed, but can only change form. A particular consequence of this is that the total energy of an isolated system does not change.
The concept of internal energy and its relationship to temperature. If a system has a definite temperature, then its total energy has three distinguishable components, termed kinetic energy (energy due to the motion of the system as a whole), potential energy (energy resulting from an externally imposed force field), and internal energy. The establishment of the concept of internal energy distinguishes the first law of thermodynamics from the more general law of conservation of energy.
$$E_{\mathrm{total}} = KE_{\mathrm{system}} + PE_{\mathrm{system}} + U_{\mathrm{system}}$$
Work is a process of transferring energy to or from a system in ways that can be described by macroscopic mechanical forces acting between the system and its surroundings. The work done by the system can come from its overall kinetic energy, from its overall potential energy, or from its internal energy. For example, when a machine (not a part of the system) lifts a system upwards, some energy is transferred from the machine to the system. The system's energy increases as work is done on the system and in this particular case, the energy increase of the system is manifested as an increase in the system's gravitational potential energy. Work added to the system increases the potential energy of the system.
When matter is transferred into a system, the internal energy and potential energy associated with it are transferred into the new combined system.
$$\left(u\,\Delta M\right)_{\mathrm{in}} = \Delta U_{\mathrm{system}}$$
where u denotes the internal energy per unit mass of the transferred matter, as measured while in the surroundings; and ΔM denotes the amount of transferred mass.
The flow of heat is a form of energy transfer. Heat transfer is the natural process of moving energy to or from a system, other than by work or the transfer of matter. In a diathermal system, the internal energy can only be changed by the transfer of energy as heat:
$$\Delta U_{\mathrm{system}} = Q.$$
Combining these principles leads to one traditional statement of the first law of thermodynamics: it is not possible to construct a machine which will perpetually output work without an equal amount of energy input to that machine. Or more briefly, a perpetual motion machine of the first kind is impossible.
== Second law ==
The second law of thermodynamics indicates the irreversibility of natural processes, and in many cases, the tendency of natural processes to lead towards spatial homogeneity of matter and energy, especially of temperature. It can be formulated in a variety of interesting and important ways. One of the simplest is the Clausius statement, that heat does not spontaneously pass from a colder to a hotter body.
It implies the existence of a quantity called the entropy of a thermodynamic system. In terms of this quantity it implies that
When two initially isolated systems in separate but nearby regions of space, each in thermodynamic equilibrium with itself but not necessarily with each other, are then allowed to interact, they will eventually reach a mutual thermodynamic equilibrium. The sum of the entropies of the initially isolated systems is less than or equal to the total entropy of the final combination. Equality occurs just when the two original systems have all their respective intensive variables (temperature, pressure) equal; then the final system also has the same values.
The second law is applicable to a wide variety of processes, both reversible and irreversible. According to the second law, in a reversible heat transfer, an element of heat transferred, $\delta Q$, is the product of the temperature $T$, both of the system and of the source or destination of the heat, with the increment $dS$ of the system's conjugate variable, its entropy $S$:

$$\delta Q = T\,dS.$$
While reversible processes are a useful and convenient theoretical limiting case, all natural processes are irreversible. A prime example of this irreversibility is the transfer of heat by conduction or radiation. It was known long before the discovery of the notion of entropy that when two bodies, initially of different temperatures, come into direct thermal connection, then heat immediately and spontaneously flows from the hotter body to the colder one.
Entropy may also be viewed as a physical measure concerning the microscopic details of the motion and configuration of a system, when only the macroscopic states are known. Such details are often referred to as disorder on a microscopic or molecular scale, and less often as dispersal of energy. For two given macroscopically specified states of a system, there is a mathematically defined quantity called the 'difference of information entropy between them'. This defines how much additional microscopic physical information is needed to specify one of the macroscopically specified states, given the macroscopic specification of the other – often a conveniently chosen reference state which may be presupposed to exist rather than explicitly stated. A final condition of a natural process always contains microscopically specifiable effects which are not fully and exactly predictable from the macroscopic specification of the initial condition of the process. This is why entropy increases in natural processes – the increase tells how much extra microscopic information is needed to distinguish the initial macroscopically specified state from the final macroscopically specified state. Equivalently, in a thermodynamic process, energy spreads.
== Third law ==
The third law of thermodynamics can be stated as:
A system's entropy approaches a constant value as its temperature approaches absolute zero.
At absolute zero temperature, the system is in the state with the minimum thermal energy, the ground state. The constant value (not necessarily zero) of entropy at this point is called the residual entropy of the system. With the exception of non-crystalline solids (e.g. glass) the residual entropy of a system is typically close to zero. However, it reaches zero only when the system has a unique ground state (i.e., the state with the minimum thermal energy has only one configuration, or microstate). Microstates are used here to describe the probability of a system being in a specific state, as each microstate is assumed to have the same probability of occurring, so macroscopic states with fewer microstates are less probable. In general, entropy is related to the number of possible microstates according to the Boltzmann principle
$$S = k_{\mathrm{B}} \ln \Omega$$

where $S$ is the entropy of the system, $k_{\mathrm{B}}$ is the Boltzmann constant, and $\Omega$ the number of microstates. At absolute zero there is only one possible microstate ($\Omega = 1$, since for a pure substance all the atoms are identical and hence all orderings are equivalent), and $\ln(1) = 0$.
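As an illustrative calculation: a hypothetical system whose ground state has two equivalent configurations ($\Omega = 2$) would retain a residual entropy of

$$S = k_{\mathrm{B}} \ln 2 \approx (1.381\times 10^{-23}\ \mathrm{J/K})(0.693) \approx 9.57\times 10^{-24}\ \mathrm{J/K}.$$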
== Onsager relations ==
The Onsager reciprocal relations have been considered the fourth law of thermodynamics. They describe the relation between thermodynamic flows and forces in non-equilibrium thermodynamics, under the assumption that thermodynamic variables can be defined locally in a condition of local equilibrium. These relations are derived from statistical mechanics under the principle of microscopic reversibility (in the absence of external magnetic fields). Given a set of extensive parameters $X_i$ (energy, mass, entropy, number of particles and so on) and the corresponding thermodynamic forces $F_i$ (related to intensive parameters such as temperature and pressure), the Onsager theorem states that
$$\frac{\mathrm{d}J_k}{\mathrm{d}F_i}\bigg|_{F_i=0} = \frac{\mathrm{d}J_i}{\mathrm{d}F_k}\bigg|_{F_k=0}$$

where $i, k = 1, 2, 3, \ldots$ index every parameter and its related force, and

$$J_i = \frac{\mathrm{d}X_i}{\mathrm{d}t}$$

are called the thermodynamic flows.
== See also ==
Chemical thermodynamics
Enthalpy
Entropy production
Ginsberg's theorem (Parody of the laws of thermodynamics)
H-theorem
Statistical mechanics
Table of thermodynamic equations
== References ==
== Further reading ==
Atkins, Peter (2007). Four Laws That Drive the Universe. OUP Oxford. ISBN 978-0199232369.
Goldstein, Martin & Inge F. (1993). The Refrigerator and the Universe. Harvard Univ. Press. ISBN 978-0674753259.
Guggenheim, E.A. (1985). Thermodynamics: An Advanced Treatment for Chemists and Physicists (7th ed.). ISBN 0-444-86951-4.
Adkins, C.J. (1968). Equilibrium Thermodynamics. McGraw-Hill. ISBN 0-07-084057-1.
== External links ==
Media related to Laws of thermodynamics at Wikimedia Commons
In thermodynamics, Bridgman's thermodynamic equations are a basic set of thermodynamic equations, derived using a method of generating multiple thermodynamic identities involving a number of thermodynamic quantities. The equations are named after the American physicist Percy Williams Bridgman. (See also the exact differential article for general differential relationships).
The extensive variables of the system are fundamental. Only the entropy $S$, the volume $V$ and the four most common thermodynamic potentials will be considered. The four most common thermodynamic potentials are the internal energy $U$, the enthalpy $H = U + PV$, the Helmholtz free energy $A = U - TS$ and the Gibbs free energy $G = U + PV - TS$.
The first derivatives of the internal energy with respect to its (extensive) natural variables $S$ and $V$ yield the intensive parameters of the system, the pressure $P$ and the temperature $T$. For a simple system in which the particle numbers are constant, the second derivatives of the thermodynamic potentials can all be expressed in terms of only three material properties, which are listed below.
Bridgman's equations are a series of relationships between all of the above quantities.
== Introduction ==
Many thermodynamic equations are expressed in terms of partial derivatives. For example, the expression for the heat capacity at constant pressure is:

$$C_P = \left(\frac{\partial H}{\partial T}\right)_P$$

which is the partial derivative of the enthalpy with respect to temperature while holding pressure constant. We may write this equation as:

$$C_P = \frac{(\partial H)_P}{(\partial T)_P}$$

This method of rewriting the partial derivative was described by Bridgman (and also Lewis & Randall), and allows the use of the following collection of expressions to express many thermodynamic equations. For example, from the equations below we have:

$$(\partial H)_P = C_P$$

and

$$(\partial T)_P = 1$$

Dividing, we recover the proper expression for $C_P$.
The following summary restates various partial terms in terms of the thermodynamic potentials, the state parameters $S$, $T$, $P$, $V$, and the following three material properties, which are easily measured experimentally:

$$\left(\frac{\partial V}{\partial T}\right)_P = \alpha V \quad \text{(thermal expansion)}$$

$$\left(\frac{\partial V}{\partial P}\right)_T = -\beta_T V \quad \text{(isothermal compressibility)}$$

$$\left(\frac{\partial H}{\partial T}\right)_P = C_P = c_P N \quad \text{(isobaric heat capacity)}$$
== Bridgman's thermodynamic equations ==
Note that Lewis and Randall use F and E for the Gibbs energy and internal energy, respectively, rather than G and U which are used in this article.
$$(\partial T)_P = -(\partial P)_T = 1$$

$$(\partial V)_P = -(\partial P)_V = \left(\frac{\partial V}{\partial T}\right)_P$$

$$(\partial S)_P = -(\partial P)_S = \frac{C_P}{T}$$

$$(\partial U)_P = -(\partial P)_U = C_P - P\left(\frac{\partial V}{\partial T}\right)_P$$

$$(\partial H)_P = -(\partial P)_H = C_P$$

$$(\partial G)_P = -(\partial P)_G = -S$$

$$(\partial A)_P = -(\partial P)_A = -S - P\left(\frac{\partial V}{\partial T}\right)_P$$

$$(\partial V)_T = -(\partial T)_V = -\left(\frac{\partial V}{\partial P}\right)_T$$

$$(\partial S)_T = -(\partial T)_S = \left(\frac{\partial V}{\partial T}\right)_P$$

$$(\partial U)_T = -(\partial T)_U = T\left(\frac{\partial V}{\partial T}\right)_P + P\left(\frac{\partial V}{\partial P}\right)_T$$

$$(\partial H)_T = -(\partial T)_H = -V + T\left(\frac{\partial V}{\partial T}\right)_P$$

$$(\partial G)_T = -(\partial T)_G = -V$$

$$(\partial A)_T = -(\partial T)_A = P\left(\frac{\partial V}{\partial P}\right)_T$$

$$(\partial S)_V = -(\partial V)_S = \frac{C_P}{T}\left(\frac{\partial V}{\partial P}\right)_T + \left(\frac{\partial V}{\partial T}\right)_P^2$$

$$(\partial U)_V = -(\partial V)_U = C_P\left(\frac{\partial V}{\partial P}\right)_T + T\left(\frac{\partial V}{\partial T}\right)_P^2$$

$$(\partial H)_V = -(\partial V)_H = C_P\left(\frac{\partial V}{\partial P}\right)_T + T\left(\frac{\partial V}{\partial T}\right)_P^2 - V\left(\frac{\partial V}{\partial T}\right)_P$$

$$(\partial G)_V = -(\partial V)_G = -V\left(\frac{\partial V}{\partial T}\right)_P - S\left(\frac{\partial V}{\partial P}\right)_T$$

$$(\partial A)_V = -(\partial V)_A = -S\left(\frac{\partial V}{\partial P}\right)_T$$

$$(\partial U)_S = -(\partial S)_U = \frac{PC_P}{T}\left(\frac{\partial V}{\partial P}\right)_T + P\left(\frac{\partial V}{\partial T}\right)_P^2$$

$$(\partial H)_S = -(\partial S)_H = -\frac{VC_P}{T}$$

$$(\partial G)_S = -(\partial S)_G = -\frac{VC_P}{T} + S\left(\frac{\partial V}{\partial T}\right)_P$$

$$(\partial A)_S = -(\partial S)_A = \frac{PC_P}{T}\left(\frac{\partial V}{\partial P}\right)_T + P\left(\frac{\partial V}{\partial T}\right)_P^2 + S\left(\frac{\partial V}{\partial T}\right)_P$$

$$(\partial H)_U = -(\partial U)_H = -VC_P + PV\left(\frac{\partial V}{\partial T}\right)_P - PC_P\left(\frac{\partial V}{\partial P}\right)_T - PT\left(\frac{\partial V}{\partial T}\right)_P^2$$

$$(\partial G)_U = -(\partial U)_G = -VC_P + PV\left(\frac{\partial V}{\partial T}\right)_P + ST\left(\frac{\partial V}{\partial T}\right)_P + SP\left(\frac{\partial V}{\partial P}\right)_T$$

$$(\partial A)_U = -(\partial U)_A = P(C_P + S)\left(\frac{\partial V}{\partial P}\right)_T + PT\left(\frac{\partial V}{\partial T}\right)_P^2 + ST\left(\frac{\partial V}{\partial T}\right)_P$$

$$(\partial G)_H = -(\partial H)_G = -V(C_P + S) + TS\left(\frac{\partial V}{\partial T}\right)_P$$

$$(\partial A)_H = -(\partial H)_A = -\left[S + P\left(\frac{\partial V}{\partial T}\right)_P\right]\left[V - T\left(\frac{\partial V}{\partial T}\right)_P\right] + PC_P\left(\frac{\partial V}{\partial P}\right)_T$$

$$(\partial A)_G = -(\partial G)_A = -S\left[V + P\left(\frac{\partial V}{\partial P}\right)_T\right] - PV\left(\frac{\partial V}{\partial T}\right)_P$$
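As an illustrative cross-check (not part of Bridgman's tables): combining $(\partial H)_T = -V + T\left(\frac{\partial V}{\partial T}\right)_P$ with $(\partial P)_T = -1$ gives $\left(\frac{\partial H}{\partial P}\right)_T = V - T\left(\frac{\partial V}{\partial T}\right)_P$, which must vanish for an ideal gas, since its enthalpy depends on temperature alone. A minimal sympy sketch, assuming the ideal-gas equation of state:

```python
# Minimal sketch: verify that (dH/dP)_T = V - T*(dV/dT)_P vanishes for an
# ideal gas, as the Bridgman-style identity above predicts. The ideal-gas
# substitution V = n*R*T/P is the illustrative assumption here.
import sympy as sp

T, P, n, R = sp.symbols("T P n R", positive=True)

V = n * R * T / P                      # ideal-gas equation of state
dV_dT_at_const_P = sp.diff(V, T)       # equals n*R/P

dH_dP_at_const_T = V - T * dV_dT_at_const_P
print(sp.simplify(dH_dP_at_const_T))   # prints 0, as expected for an ideal gas
```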
== See also ==
Table of thermodynamic equations
Exact differential
== References ==
Bridgman, P.W. (1914). "A Complete Collection of Thermodynamic Formulas". Physical Review. 3 (4): 273–281. Bibcode:1914PhRv....3..273B. doi:10.1103/PhysRev.3.273.
Lewis, G.N.; Randall, M. (1961). Thermodynamics (2nd ed.). New York: McGraw-Hill Book Company.
A thermodynamic potential (or more accurately, a thermodynamic potential energy) is a scalar quantity used to represent the thermodynamic state of a system. Just as in mechanics, where potential energy is defined as capacity to do work, similarly different potentials have different meanings. The concept of thermodynamic potentials was introduced by Pierre Duhem in 1886. Josiah Willard Gibbs in his papers used the term fundamental functions. Effects of changes in thermodynamic potentials can sometimes be measured directly, while their absolute magnitudes can only be assessed using computational chemistry or similar methods.
One main thermodynamic potential that has a physical interpretation is the internal energy U. It is the energy of configuration of a given system of conservative forces (that is why it is called potential) and only has meaning with respect to a defined set of references (or data). Expressions for all other thermodynamic energy potentials are derivable via Legendre transforms from an expression for U. In other words, each thermodynamic potential is equivalent to other thermodynamic potentials; each potential is a different expression of the others.
In thermodynamics, external forces, such as gravity, are counted as contributing to total energy rather than to thermodynamic potentials. For example, the working fluid in a steam engine sitting on top of Mount Everest has higher total energy due to gravity than it has at the bottom of the Mariana Trench, but the same thermodynamic potentials. This is because the gravitational potential energy belongs to the total energy rather than to thermodynamic potentials such as internal energy.
== Description and interpretation ==
Five common thermodynamic potentials are the internal energy $U$, the Helmholtz free energy $F = U - TS$, the enthalpy $H = U + pV$, the Gibbs free energy $G = U + pV - TS$, and the Landau potential (grand potential) $\Omega = U - TS - \sum_i \mu_i N_i$,
where $T$ = temperature, $S$ = entropy, $p$ = pressure, $V$ = volume. $N_i$ is the number of particles of type $i$ in the system and $\mu_i$ is the chemical potential for an $i$-type particle. The set of all $N_i$ are also included as natural variables but may be ignored when no chemical reactions are occurring which cause them to change. The Helmholtz free energy is in the ISO/IEC standard called Helmholtz energy or Helmholtz function. It is often denoted by the symbol $F$, but the use of $A$ is preferred by IUPAC, ISO and IEC.
These five common potentials are all potential energies, but there are also entropy potentials. The thermodynamic square can be used as a tool to recall and derive some of the potentials.
Just as in mechanics, where potential energy is defined as capacity to do work, similarly different potentials have different meanings like the below:
Internal energy (U) is the capacity to do work plus the capacity to release heat.
Gibbs energy (G) is the capacity to do non-mechanical work.
Enthalpy (H) is the capacity to do non-mechanical work plus the capacity to release heat.
Helmholtz energy (F) is the capacity to do mechanical work plus non-mechanical work.
From these meanings (which actually apply in specific conditions, e.g. constant pressure, temperature, etc.), for positive changes (e.g., ΔU > 0), we can say that ΔU is the energy added to the system, ΔF is the total work done on it, ΔG is the non-mechanical work done on it, and ΔH is the sum of non-mechanical work done on the system and the heat given to it.
Note that internal energy is conserved, but Gibbs energy and Helmholtz energy are not, despite being named "energy". They can be better interpreted as the potential to perform "useful work", and that potential can be wasted.
Thermodynamic potentials are very useful when calculating the equilibrium results of a chemical reaction, or when measuring the properties of materials in a chemical reaction. The chemical reactions usually take place under some constraints such as constant pressure and temperature, or constant entropy and volume, and when this is true, there is a corresponding thermodynamic potential that comes into play. Just as in mechanics, the system will tend towards a lower value of a potential and at equilibrium, under these constraints, the potential will take the unchanging minimum value. The thermodynamic potentials can also be used to estimate the total amount of energy available from a thermodynamic system under the appropriate constraint.
In particular: (see principle of minimum energy for a derivation)
When the entropy S and "external parameters" (e.g. volume) of a closed system are held constant, the internal energy U decreases and reaches a minimum value at equilibrium. This follows from the first and second laws of thermodynamics and is called the principle of minimum energy. The following three statements are directly derivable from this principle.
When the temperature T and external parameters of a closed system are held constant, the Helmholtz free energy F decreases and reaches a minimum value at equilibrium.
When the pressure p and external parameters of a closed system are held constant, the enthalpy H decreases and reaches a minimum value at equilibrium.
When the temperature T, pressure p and external parameters of a closed system are held constant, the Gibbs free energy G decreases and reaches a minimum value at equilibrium.
== Natural variables ==
For each thermodynamic potential, there are thermodynamic variables that need to be held constant to specify the potential value at a thermodynamical equilibrium state, such as independent variables for a mathematical function. These variables are termed the natural variables of that potential. The natural variables are important not only to specify the potential value at the equilibrium, but also because if a thermodynamic potential can be determined as a function of its natural variables, all of the thermodynamic properties of the system can be found by taking partial derivatives of that potential with respect to its natural variables and this is true for no other combination of variables. If a thermodynamic potential is not given as a function of its natural variables, it will not, in general, yield all of the thermodynamic properties of the system.
The set of natural variables for each of the above four thermodynamic potentials is formed from a combination of the $T$, $S$, $p$, $V$ variables, excluding any pairs of conjugate variables; there is no natural variable set for a potential including the $T$-$S$ or $p$-$V$ variables together as conjugate variables for energy. An exception to this rule is the $N_i$-$\mu_i$ conjugate pairs, as there is no reason to ignore these in the thermodynamic potentials, and in fact we may additionally define the four potentials for each species. Using IUPAC notation, in which the brackets contain the natural variables (other than the main four), we have:

$$U[\mu_j] = U - \mu_j N_j \qquad F[\mu_j] = U - TS - \mu_j N_j \qquad H[\mu_j] = U + pV - \mu_j N_j \qquad G[\mu_j] = U + pV - TS - \mu_j N_j$$

If there is only one species, then we are done. But, if there are, say, two species, then there will be additional potentials such as

$$U[\mu_1, \mu_2] = U - \mu_1 N_1 - \mu_2 N_2$$

and so on. If there are $D$ dimensions to the thermodynamic space, then there are $2^D$ unique thermodynamic potentials. For the most simple case, a single-phase ideal gas, there will be three dimensions, yielding $2^3 = 8$ thermodynamic potentials.
== Fundamental equations ==
The definitions of the thermodynamic potentials may be differentiated and, along with the first and second laws of thermodynamics, a set of differential equations known as the fundamental equations follow. (Actually they are all expressions of the same fundamental thermodynamic relation, but are expressed in different variables.) By the first law of thermodynamics, any differential change in the internal energy U of a system can be written as the sum of heat flowing into the system subtracted by the work done by the system on the environment, along with any change due to the addition of new particles to the system:
$$\mathrm{d}U = \delta Q - \delta W + \sum_i \mu_i\,\mathrm{d}N_i$$
where δQ is the infinitesimal heat flow into the system, and δW is the infinitesimal work done by the system, μi is the chemical potential of particle type i and Ni is the number of the type i particles. (Neither δQ nor δW are exact differentials, i.e., they are thermodynamic process path-dependent. Small changes in these variables are, therefore, represented with δ rather than d.)
By the second law of thermodynamics, we can express the internal energy change in terms of state functions and their differentials. In case of reversible changes we have:
$$\delta Q = T\,\mathrm{d}S$$

$$\delta W = p\,\mathrm{d}V$$

where $T$ is temperature, $S$ is entropy, $p$ is pressure, and $V$ is volume, and the equality holds for reversible processes.
This leads to the standard differential form of the internal energy in case of a quasistatic reversible change:
$$\mathrm{d}U = T\,\mathrm{d}S - p\,\mathrm{d}V + \sum_i \mu_i\,\mathrm{d}N_i$$
Since U, S and V are thermodynamic functions of state (also called state functions), the above relation also holds for arbitrary non-reversible changes. If the system has more external variables than just the volume that can change, the fundamental thermodynamic relation generalizes to:
$$\mathrm{d}U = T\,\mathrm{d}S - p\,\mathrm{d}V + \sum_j \mu_j\,\mathrm{d}N_j + \sum_i X_i\,\mathrm{d}x_i$$
Here the Xi are the generalized forces corresponding to the external variables xi.
Applying Legendre transforms repeatedly, the following differential relations hold for the four potentials (the fundamental thermodynamic equations; actually they are all expressions of the same fundamental thermodynamic relation in different variables):

$$\mathrm{d}U = T\,\mathrm{d}S - p\,\mathrm{d}V + \sum_i \mu_i\,\mathrm{d}N_i$$

$$\mathrm{d}F = -S\,\mathrm{d}T - p\,\mathrm{d}V + \sum_i \mu_i\,\mathrm{d}N_i$$

$$\mathrm{d}H = T\,\mathrm{d}S + V\,\mathrm{d}p + \sum_i \mu_i\,\mathrm{d}N_i$$

$$\mathrm{d}G = -S\,\mathrm{d}T + V\,\mathrm{d}p + \sum_i \mu_i\,\mathrm{d}N_i$$

The infinitesimals on the right-hand side of each of the above equations are of the natural variables of the potential on the left-hand side. Similar equations can be developed for all of the other thermodynamic potentials of the system. There will be one fundamental equation for each thermodynamic potential, resulting in a total of $2^D$ fundamental equations.
The differences between the four thermodynamic potentials can be summarized as follows:
$$\mathrm{d}(pV) = \mathrm{d}H - \mathrm{d}U = \mathrm{d}G - \mathrm{d}F$$

$$\mathrm{d}(TS) = \mathrm{d}U - \mathrm{d}F = \mathrm{d}H - \mathrm{d}G$$
== Equations of state ==
We can use the above equations to derive some differential definitions of some thermodynamic parameters. If we define Φ to stand for any of the thermodynamic potentials, then the above equations are of the form:
$$\mathrm{d}\Phi = \sum_i x_i\,\mathrm{d}y_i$$
where xi and yi are conjugate pairs, and the yi are the natural variables of the potential Φ. From the chain rule it follows that:
$$x_j = \left(\frac{\partial \Phi}{\partial y_j}\right)_{\{y_{i\neq j}\}}$$
where {yi ≠ j} is the set of all natural variables of Φ except yj that are held as constants. This yields expressions for various thermodynamic parameters in terms of the derivatives of the potentials with respect to their natural variables. These equations are known as equations of state since they specify parameters of the thermodynamic state. If we restrict ourselves to the potentials U (Internal energy), F (Helmholtz energy), H (Enthalpy) and G (Gibbs energy), then we have the following equations of state (subscripts showing natural variables that are held as constants):
$$+T = \left(\frac{\partial U}{\partial S}\right)_{V,\{N_i\}} = \left(\frac{\partial H}{\partial S}\right)_{p,\{N_i\}}$$

$$-p = \left(\frac{\partial U}{\partial V}\right)_{S,\{N_i\}} = \left(\frac{\partial F}{\partial V}\right)_{T,\{N_i\}}$$

$$+V = \left(\frac{\partial H}{\partial p}\right)_{S,\{N_i\}} = \left(\frac{\partial G}{\partial p}\right)_{T,\{N_i\}}$$

$$-S = \left(\frac{\partial G}{\partial T}\right)_{p,\{N_i\}} = \left(\frac{\partial F}{\partial T}\right)_{V,\{N_i\}}$$

$$\mu_j = \left(\frac{\partial \phi}{\partial N_j}\right)_{X,Y,\{N_{i\neq j}\}}$$
where, in the last equation, $\phi$ is any of the thermodynamic potentials ($U$, $F$, $H$, or $G$), and $X, Y, \{N_{i\neq j}\}$ are the set of natural variables for that potential, excluding $N_j$. If we use all thermodynamic potentials, then we will have more equations of state such as
$$-N_j = \left(\frac{\partial U[\mu_j]}{\partial \mu_j}\right)_{S,V,\{N_{i\neq j}\}}$$
and so on. In all, if the thermodynamic space is $D$-dimensional, then there will be $D$ equations for each potential, resulting in a total of $D\,2^D$ equations of state, because $2^D$ thermodynamic potentials exist. If the $D$ equations of state for a particular potential are known, then the fundamental equation for that potential (i.e., the exact differential of the thermodynamic potential) can be determined. This means that all thermodynamic information about the system will be known, because the fundamental equations for any other potential can be found via the Legendre transforms, and the corresponding equations of state for each potential can then be found as partial derivatives of that potential.
== Measurement of thermodynamic potentials ==
The above equations of state suggest methods to experimentally measure changes in the thermodynamic potentials using physically measurable parameters. For example the free energy expressions
$$+V = \left(\frac{\partial G}{\partial p}\right)_{T,\{N_i\}}$$

and

$$-p = \left(\frac{\partial F}{\partial V}\right)_{T,\{N_i\}}$$
can be integrated at constant temperature and particle numbers to obtain:
$$\Delta G = \int_{p_1}^{p_2} V\,\mathrm{d}p \qquad \text{(at constant } T, \{N_j\}\text{)}$$

$$\Delta F = -\int_{V_1}^{V_2} p\,\mathrm{d}V \qquad \text{(at constant } T, \{N_j\}\text{)}$$
which can be measured by monitoring the measurable variables of pressure, temperature and volume. Changes in the enthalpy and internal energy can be measured by calorimetry (which measures the amount of heat ΔQ released or absorbed by a system). The expressions
$$+T = \left(\frac{\partial U}{\partial S}\right)_{V,\{N_i\}} = \left(\frac{\partial H}{\partial S}\right)_{p,\{N_i\}}$$
can be integrated:
$$\Delta H = \int_{S_1}^{S_2} T\,\mathrm{d}S = \Delta Q \qquad \text{(at constant } p, \{N_j\}\text{)}$$

$$\Delta U = \int_{S_1}^{S_2} T\,\mathrm{d}S = \Delta Q \qquad \text{(at constant } V, \{N_j\}\text{)}$$
Note that these measurements are made at constant {Nj } and are therefore not applicable to situations in which chemical reactions take place.
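For instance (an illustrative case): for $n$ moles of an ideal gas compressed isothermally from $p_1$ to $p_2$, substituting $V = nRT/p$ into the first integral gives

$$\Delta G = \int_{p_1}^{p_2} \frac{nRT}{p}\,\mathrm{d}p = nRT \ln\frac{p_2}{p_1},$$

which is positive for a compression ($p_2 > p_1$), as expected for a process that requires work to be supplied.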
== Maxwell relations ==
Again, define xi and yi to be conjugate pairs, and the yi to be the natural variables of some potential Φ. We may take the "cross differentials" of the state equations, which obey the following relationship:
$$\left(\frac{\partial}{\partial y_j}\left(\frac{\partial \Phi}{\partial y_k}\right)_{\{y_{i\neq k}\}}\right)_{\{y_{i\neq j}\}} = \left(\frac{\partial}{\partial y_k}\left(\frac{\partial \Phi}{\partial y_j}\right)_{\{y_{i\neq j}\}}\right)_{\{y_{i\neq k}\}}$$
From these we get the Maxwell relations. There will be $(D-1)/2$ of them for each potential, giving a total of $D(D-1)/2$ equations in all. If we restrict ourselves to $U$, $F$, $H$, $G$:
$$\left(\frac{\partial T}{\partial V}\right)_{S,\{N_i\}} = -\left(\frac{\partial p}{\partial S}\right)_{V,\{N_i\}}$$

$$\left(\frac{\partial T}{\partial p}\right)_{S,\{N_i\}} = +\left(\frac{\partial V}{\partial S}\right)_{p,\{N_i\}}$$

$$\left(\frac{\partial S}{\partial V}\right)_{T,\{N_i\}} = +\left(\frac{\partial p}{\partial T}\right)_{V,\{N_i\}}$$

$$\left(\frac{\partial S}{\partial p}\right)_{T,\{N_i\}} = -\left(\frac{\partial V}{\partial T}\right)_{p,\{N_i\}}$$
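As an illustrative check, not part of the derivation above, the third relation can be verified symbolically for an ideal gas, whose Helmholtz energy has volume dependence $F(T,V) = -nRT\ln V + f(T)$ with $f$ an arbitrary smooth function of temperature (the choice of $F$ here is an assumption used only for this demonstration). A minimal sympy sketch:

```python
# Minimal sketch: verify the Maxwell relation (dS/dV)_T = (dp/dT)_V for an
# ideal gas, using F(T, V) = -n*R*T*log(V) + f(T) with f arbitrary.
# S = -(dF/dT)_V and p = -(dF/dV)_T follow from the equations of state above.
import sympy as sp

T, V, n, R = sp.symbols("T V n R", positive=True)
f = sp.Function("f")

F = -n * R * T * sp.log(V) + f(T)   # volume dependence of ideal-gas Helmholtz energy

S = -sp.diff(F, T)                  # entropy from the equation of state
p = -sp.diff(F, V)                  # pressure from the equation of state

lhs = sp.diff(S, V)                 # (dS/dV)_T, equals n*R/V
rhs = sp.diff(p, T)                 # (dp/dT)_V, also n*R/V
print(sp.simplify(lhs - rhs))       # prints 0, so the Maxwell relation holds
```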
Using the equations of state involving the chemical potential we get equations such as:
$$\left(\frac{\partial T}{\partial N_j}\right)_{V,S,\{N_{i\neq j}\}} = \left(\frac{\partial \mu_j}{\partial S}\right)_{V,\{N_i\}}$$
and using the other potentials we can get equations such as:
\left(\frac{\partial N_j}{\partial V}\right)_{S,\mu_j,\{N_{i\neq j}\}} = -\left(\frac{\partial p}{\partial \mu_j}\right)_{S,V,\{N_{i\neq j}\}}

\left(\frac{\partial N_j}{\partial N_k}\right)_{S,V,\mu_j,\{N_{i\neq j,k}\}} = -\left(\frac{\partial \mu_k}{\partial \mu_j}\right)_{S,V,\{N_{i\neq j}\}}
== Euler relations ==
Again, define xi and yi to be conjugate pairs, and the yi to be the natural variables of the internal energy.
Since all of the natural variables of the internal energy U are extensive quantities
U(\{\alpha y_i\}) = \alpha U(\{y_i\})
it follows from Euler's homogeneous function theorem that the internal energy can be written as:
U(\{y_i\}) = \sum_j y_j \left(\frac{\partial U}{\partial y_j}\right)_{\{y_{i\neq j}\}}
From the equations of state, we then have:
U = TS - pV + \sum_i \mu_i N_i
This formula is known as an Euler relation, because Euler's theorem on homogeneous functions leads to it. (It was not discovered by Euler in an investigation of thermodynamics, which did not exist in his day.)
Substituting into the expressions for the other main potentials we have:
F = -pV + \sum_i \mu_i N_i

H = TS + \sum_i \mu_i N_i

G = \sum_i \mu_i N_i
As in the above sections, this process can be carried out on all of the other thermodynamic potentials. Thus, there is another Euler relation, based on the expression of entropy as a function of internal energy and other extensive variables. Yet other Euler relations hold for other fundamental equations for energy or entropy, as respective functions of other state variables including some intensive state variables.
== Gibbs–Duhem relation ==
Deriving the Gibbs–Duhem equation from basic thermodynamic state equations is straightforward. Equating any thermodynamic potential definition with its Euler relation expression yields:
U = TS - PV + \sum_i \mu_i N_i
Differentiating, and using the second law:
\mathrm{d}U = T\,\mathrm{d}S - P\,\mathrm{d}V + \sum_i \mu_i\,\mathrm{d}N_i
yields:
0 = S\,\mathrm{d}T - V\,\mathrm{d}P + \sum_i N_i\,\mathrm{d}\mu_i
This is the Gibbs–Duhem relation, a relationship among the intensive parameters of the system. It follows that for a simple system with I components, there will be I + 1 independent parameters, or degrees of freedom. For example, a simple system with a single component will have two degrees of freedom, and may be specified by only two parameters, such as temperature and pressure. The law is named after Josiah Willard Gibbs and Pierre Duhem.
== Stability conditions ==
As the internal energy is a convex function of entropy and volume, the stability condition requires that the second derivative of the internal energy with respect to entropy or volume be positive. It is commonly expressed as d^2 U > 0.
Since the maximum principle of entropy is equivalent to the minimum principle of internal energy, the combined criterion for stability or thermodynamic equilibrium is expressed as d^2 U > 0 together with dU = 0 with respect to the parameters entropy and volume. This is analogous to the condition d^2 S < 0 together with dS = 0 for the entropy at equilibrium. The same concept can be applied to the various thermodynamic potentials by identifying whether they are convex or concave functions of their respective variables.
\left(\frac{\partial^2 F}{\partial T^2}\right)_{V,N} \leq 0 \quad\text{and}\quad \left(\frac{\partial^2 F}{\partial V^2}\right)_{T,N} \geq 0,

since the Helmholtz energy is a concave function of temperature and a convex function of volume.
\left(\frac{\partial^2 H}{\partial P^2}\right)_{S,N} \leq 0 \quad\text{and}\quad \left(\frac{\partial^2 H}{\partial S^2}\right)_{P,N} \geq 0,

since the enthalpy is a concave function of pressure and a convex function of entropy.
\left(\frac{\partial^2 G}{\partial T^2}\right)_{P,N} \leq 0 \quad\text{and}\quad \left(\frac{\partial^2 G}{\partial P^2}\right)_{T,N} \leq 0,

since the Gibbs potential is a concave function of both pressure and temperature.
In general, the thermodynamic potentials (the internal energy and its Legendre transforms) are convex functions of their extensive variables and concave functions of their intensive variables. The stability conditions impose that the isothermal compressibility is positive and that, for non-negative temperature, C_P > C_V.
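As a hedged illustration, the standard identity C_P − C_V = T V α²/κ_T, combined with the positivity of the isothermal compressibility required above, gives C_P > C_V at non-negative temperature; for an ideal gas the difference reduces to nR, as the following sketch (with symbol names of our own) checks:

```python
# Hedged sketch: for an ideal gas V(T, P) = n*R*T/P, evaluate
# C_P - C_V = T * V * alpha**2 / kappa_T and confirm it reduces to n*R > 0.
import sympy as sp

T, P, n, R = sp.symbols('T P n R', positive=True)
V = n * R * T / P              # ideal gas equation of state
alpha = sp.diff(V, T) / V      # thermal expansion coefficient (1/V)(dV/dT)_P
kappa_T = -sp.diff(V, P) / V   # isothermal compressibility -(1/V)(dV/dP)_T

print(sp.simplify(T * V * alpha**2 / kappa_T))  # prints n*R
```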
== Chemical reactions ==
Changes in these quantities are useful for assessing the degree to which a chemical reaction will proceed. The relevant quantity depends on the reaction conditions, as shown in the following table (reconstructed here from the standard pairing of constant parameters and potentials, since the original table was lost in extraction). Δ denotes the change in the potential and at equilibrium the change will be zero.
constant V and S: ΔU
constant V and T: ΔF
constant p and S: ΔH
constant p and T: ΔG
Most commonly one considers reactions at constant p and T, so the Gibbs free energy is the most useful potential in studies of chemical reactions.
== See also ==
Coomber's relationship
== Notes ==
== References ==
== Further reading ==
McGraw Hill Encyclopaedia of Physics (2nd Edition), C.B. Parker, 1994, ISBN 0-07-051400-3
Thermodynamics, From Concepts to Applications (2nd Edition), A. Shavit, C. Gutfinger, CRC Press (Taylor and Francis Group, USA), 2009, ISBN 9781420073683
Chemical Thermodynamics, D.J.G. Ives, University Chemistry, Macdonald Technical and Scientific, 1971, ISBN 0-356-03736-3
Elements of Statistical Thermodynamics (2nd Edition), L.K. Nash, Principles of Chemistry, Addison-Wesley, 1974, ISBN 0-201-05229-6
Statistical Physics (2nd Edition), F. Mandl, Manchester Physics, John Wiley & Sons, 2008, ISBN 9780471566588
== External links ==
Thermodynamic Potentials – Georgia State University
Chemical Potential Energy: The 'Characteristic' vs the Concentration-Dependent Kind | Wikipedia/Euler_integral_(thermodynamics) |
The Clausius–Clapeyron relation, in chemical thermodynamics, specifies the temperature dependence of pressure, most importantly vapor pressure, at a discontinuous phase transition between two phases of matter of a single constituent. It is named after Rudolf Clausius and Benoît Paul Émile Clapeyron. However, this relation was in fact originally derived by Sadi Carnot in his Reflections on the Motive Power of Fire, which was published in 1824 but largely ignored until it was rediscovered by Clausius, Clapeyron, and Lord Kelvin decades later. Kelvin said of Carnot's argument that "nothing in the whole range of Natural Philosophy is more remarkable than the establishment of general laws by such a process of reasoning."
Kelvin and his brother James Thomson confirmed the relation experimentally in 1849–50, and it was historically important as a very early successful application of theoretical thermodynamics. Its relevance to meteorology and climatology is the increase of the water-holding capacity of the atmosphere by about 7% for every 1 °C (1.8 °F) rise in temperature.
== Definition ==
=== Exact Clapeyron equation ===
On a pressure–temperature (P–T) diagram, for any phase change the line separating the two phases is known as the coexistence curve. The Clapeyron relation gives the slope of the tangents to this curve. Mathematically,
\frac{\mathrm{d}P}{\mathrm{d}T} = \frac{L}{T\,\Delta v} = \frac{\Delta s}{\Delta v},
where dP/dT is the slope of the tangent to the coexistence curve at any point, L is the molar change in enthalpy (latent heat, the amount of energy absorbed in the transformation), T is the temperature, Δv is the molar volume change of the phase transition, and Δs is the molar entropy change of the phase transition. Alternatively, the specific values may be used instead of the molar ones.
=== Clausius–Clapeyron equation ===
The Clausius–Clapeyron equation: 509 applies to vaporization of liquids where the vapor follows the ideal gas law using the ideal gas constant R and the liquid volume is neglected as being much smaller than the vapor volume V. It is often used to calculate the vapor pressure of a liquid.
\frac{\mathrm{d}P}{\mathrm{d}T} = \frac{PL}{T^2 R}, \qquad v = \frac{V}{n} = \frac{RT}{P}.
The equation expresses this in a more convenient form just in terms of the latent heat, for moderate temperatures and pressures.
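As a hedged numeric sketch (rounded literature values, not from the source), the slope of water's vapor-pressure curve near its normal boiling point follows directly from this form:

```python
# Hedged sketch: dP/dT = P*L/(T^2 * R) for water near 1 atm and 373 K,
# using a rounded molar enthalpy of vaporization of ~40.7 kJ/mol.
P = 101_325.0    # Pa
T = 373.15       # K
L = 40_700.0     # J/mol
R = 8.314        # J/(mol K)

dP_dT = P * L / (T**2 * R)
print(dP_dT)     # ~3.6e3 Pa/K, i.e. the boiling point shifts ~1 K per 3.6 kPa
```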
== Derivations ==
=== Derivation from state postulate ===
Using the state postulate, take the molar entropy s for a homogeneous substance to be a function of molar volume v and temperature T.: 508
\mathrm{d}s = \left(\frac{\partial s}{\partial v}\right)_T \mathrm{d}v + \left(\frac{\partial s}{\partial T}\right)_v \mathrm{d}T.
The Clausius–Clapeyron relation describes a phase transition in a closed system composed of two contiguous phases, condensed matter and ideal gas, of a single substance, in mutual thermodynamic equilibrium, at constant temperature and pressure. Therefore,: 508
\mathrm{d}s = \left(\frac{\partial s}{\partial v}\right)_T \mathrm{d}v.
Using the appropriate Maxwell relation gives: 508
\mathrm{d}s = \left(\frac{\partial P}{\partial T}\right)_v \mathrm{d}v,
where P is the pressure. Since pressure and temperature are constant, the derivative of pressure with respect to temperature does not change.: 57, 62, 671 Therefore, the partial derivative of molar entropy may be changed into a total derivative
\mathrm{d}s = \frac{\mathrm{d}P}{\mathrm{d}T}\,\mathrm{d}v,
and the total derivative of pressure with respect to temperature may be factored out when integrating from an initial phase α to a final phase β,: 508 to obtain
\frac{\mathrm{d}P}{\mathrm{d}T} = \frac{\Delta s}{\Delta v},
where Δs ≡ s_β − s_α and Δv ≡ v_β − v_α are respectively the change in molar entropy and molar volume. Given that a phase change is an internally reversible process, and that our system is closed, the first law of thermodynamics holds:
\mathrm{d}u = \delta q + \delta w = T\,\mathrm{d}s - P\,\mathrm{d}v,
where u is the internal energy of the system. Given constant pressure and temperature (during a phase change) and the definition of molar enthalpy h, we obtain
\mathrm{d}h = T\,\mathrm{d}s + v\,\mathrm{d}P,

\mathrm{d}h = T\,\mathrm{d}s,

\mathrm{d}s = \frac{\mathrm{d}h}{T}.
Given constant pressure and temperature (during a phase change), we obtain: 508
\Delta s = \frac{\Delta h}{T}.
Substituting the definition of molar latent heat L = Δh gives

\Delta s = \frac{L}{T}.
Substituting this result into the pressure derivative given above (dP/dT = Δs/Δv), we obtain: 508

\frac{\mathrm{d}P}{\mathrm{d}T} = \frac{L}{T\,\Delta v}.
This result (also known as the Clapeyron equation) equates the slope dP/dT of the coexistence curve P(T) to the function L/(T Δv) of the molar latent heat L, the temperature T, and the change in molar volume Δv. Instead of the molar values, corresponding specific values may also be used.
=== Derivation from Gibbs–Duhem relation ===
Suppose two phases, α and β, are in contact and at equilibrium with each other. Their chemical potentials are related by

\mu_\alpha = \mu_\beta.
Furthermore, along the coexistence curve,

\mathrm{d}\mu_\alpha = \mathrm{d}\mu_\beta.
One may therefore use the Gibbs–Duhem relation

\mathrm{d}\mu = M(-s\,\mathrm{d}T + v\,\mathrm{d}P)

(where s is the specific entropy, v is the specific volume, and M is the molar mass) to obtain
-(s_\beta - s_\alpha)\,\mathrm{d}T + (v_\beta - v_\alpha)\,\mathrm{d}P = 0.
Rearrangement gives

\frac{\mathrm{d}P}{\mathrm{d}T} = \frac{s_\beta - s_\alpha}{v_\beta - v_\alpha} = \frac{\Delta s}{\Delta v},
from which the derivation of the Clapeyron equation continues as in the previous section.
=== Ideal gas approximation at low temperatures ===
When the phase transition of a substance is between a gas phase and a condensed phase (liquid or solid), and occurs at temperatures much lower than the critical temperature of that substance, the specific volume of the gas phase v_g greatly exceeds that of the condensed phase v_c. Therefore, one may approximate

\Delta v = v_{\text{g}}\left(1 - \frac{v_{\text{c}}}{v_{\text{g}}}\right) \approx v_{\text{g}}
at low temperatures. If pressure is also low, the gas may be approximated by the ideal gas law, so that
v_{\text{g}} = \frac{RT}{P},

where P is the pressure, R is the specific gas constant, and T is the temperature. Substituting into the Clapeyron equation
\frac{\mathrm{d}P}{\mathrm{d}T} = \frac{L}{T\,\Delta v},
we can obtain the Clausius–Clapeyron equation: 509
\frac{\mathrm{d}P}{\mathrm{d}T} = \frac{PL}{T^2 R}

for low temperatures and pressures,: 509 where L is the specific latent heat of the substance. Instead of the specific, corresponding molar values (i.e. L in kJ/mol and R = 8.31 J/(mol⋅K)) may also be used.
Let (P_1, T_1) and (P_2, T_2) be any two points along the coexistence curve between two phases α and β. In general, L varies between any two such points, as a function of temperature. But if L is approximated as constant,
\frac{\mathrm{d}P}{P} \cong \frac{L}{R}\,\frac{\mathrm{d}T}{T^2},

\int_{P_1}^{P_2} \frac{\mathrm{d}P}{P} \cong \frac{L}{R} \int_{T_1}^{T_2} \frac{\mathrm{d}T}{T^2},

\ln P\,\Big|_{P=P_1}^{P_2} \cong -\frac{L}{R}\cdot\left.\frac{1}{T}\right|_{T=T_1}^{T_2},
or: 672
\ln\frac{P_2}{P_1} \cong -\frac{L}{R}\left(\frac{1}{T_2} - \frac{1}{T_1}\right).
These last equations are useful because they relate equilibrium or saturation vapor pressure and temperature to the latent heat of the phase change without requiring specific-volume data. For instance, for water near its normal boiling point, with a molar enthalpy of vaporization of 40.7 kJ/mol and R = 8.31 J/(mol⋅K),
P_{\text{vap}}(T) \cong 1~\text{bar}\cdot\exp\left[-\frac{40\,700~\text{J/mol}}{8.31~\text{J/(mol·K)}}\left(\frac{1}{T} - \frac{1}{373~\text{K}}\right)\right].
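A minimal sketch evaluating this expression (assuming, as above, L/R ≈ 40700/8.31 in kelvins and the (1 bar, 373 K) anchor point; measured values deviate slightly because L is not truly constant):

```python
# Hedged sketch of the integrated Clausius-Clapeyron estimate quoted above.
import math

def p_vap_bar(T):
    """Approximate vapor pressure of water in bar, T in kelvins."""
    return math.exp(-(40_700.0 / 8.31) * (1.0 / T - 1.0 / 373.0))

print(p_vap_bar(373.0))  # 1.0 bar at the normal boiling point, by construction
print(p_vap_bar(298.0))  # ~0.037 bar, vs. ~0.032 bar measured at 25 C
```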
=== Clapeyron's derivation ===
In the original work by Clapeyron, the following argument is advanced.
Clapeyron considered a Carnot process of saturated water vapor with horizontal isobars. As the pressure is a function of temperature alone, the isobars are also isotherms. If the process involves an infinitesimal amount of water, dx, and an infinitesimal difference in temperature dT, the heat absorbed is

Q = L\,\mathrm{d}x,
and the corresponding work is

W = \frac{\mathrm{d}p}{\mathrm{d}T}\,\mathrm{d}T\,(V'' - V'),

where V'' − V' is the difference between the volumes of dx in the liquid phase and vapor phases.
The ratio W/Q is the efficiency of the Carnot engine, dT/T. Substituting and rearranging gives

\frac{\mathrm{d}p}{\mathrm{d}T} = \frac{L}{T(v'' - v')},

where lowercase v'' − v' denotes the change in specific volume during the transition.
== Applications ==
=== Chemistry and chemical engineering ===
For transitions between a gas and a condensed phase with the approximations described above, the expression may be rewritten as
\ln\left(\frac{P_1}{P_0}\right) = \frac{L}{R}\left(\frac{1}{T_0} - \frac{1}{T_1}\right)
where P_0, P_1 are the pressures at temperatures T_0, T_1 respectively and R is the ideal gas constant. For a liquid–gas transition, L is the molar latent heat (or molar enthalpy) of vaporization; for a solid–gas transition, L is the molar latent heat of sublimation. If the latent heat is known, then knowledge of one point on the coexistence curve, for instance (1 bar, 373 K) for water, determines the rest of the curve. Conversely, the relationship between ln P and 1/T is linear, and so linear regression is used to estimate the latent heat.
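A hedged sketch of that regression, using approximate vapor-pressure points for water (illustrative inputs of our own, not source data); the fitted slope of ln P against 1/T is −L/R:

```python
# Hedged sketch: estimate a molar latent heat from (T, P) pairs by fitting
# ln P against 1/T; the slope is -L/R. The data below are approximate.
import numpy as np

T = np.array([313.0, 333.0, 353.0, 373.0])          # K
P = np.array([0.074, 0.199, 0.474, 1.013]) * 1e5    # Pa (roughly water)

slope, intercept = np.polyfit(1.0 / T, np.log(P), 1)
L = -slope * 8.314                                  # J/mol
print(L)  # ~4.2e4 J/mol, close to water's enthalpy of vaporization
```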
=== Meteorology and climatology ===
Atmospheric water vapor drives many important meteorologic phenomena (notably, precipitation), motivating interest in its dynamics. The Clausius–Clapeyron equation for water vapor under typical atmospheric conditions (near standard temperature and pressure) is
\frac{\mathrm{d}e_s}{\mathrm{d}T} = \frac{L_v(T)\,e_s}{R_v T^2},
where e_s is the saturation vapor pressure, T is the temperature, L_v is the specific latent heat of evaporation of water, and R_v is the gas constant of water vapor. The temperature dependence of the latent heat L_v(T) can be neglected in this application. The August–Roche–Magnus formula provides a solution under that approximation:
e_s(T) = 6.1094 \exp\left(\frac{17.625\,T}{T + 243.04}\right),
where e_s is in hPa, and T is in degrees Celsius (whereas everywhere else on this page, T is an absolute temperature, e.g. in kelvins).
This is also sometimes called the Magnus or Magnus–Tetens approximation, though this attribution is historically inaccurate. But see also the discussion of the accuracy of different approximating formulae for saturation vapour pressure of water.
Under typical atmospheric conditions, the denominator of the exponent depends weakly on T (for which the unit is degree Celsius). Therefore, the August–Roche–Magnus equation implies that saturation water vapor pressure changes approximately exponentially with temperature under typical atmospheric conditions, and hence the water-holding capacity of the atmosphere increases by about 7% for every 1 °C rise in temperature.
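A minimal sketch of the formula quoted above, also checking the rule of thumb (the exact figure comes out nearer 6–7% per degree, depending on temperature):

```python
# Hedged sketch: August-Roche-Magnus saturation vapor pressure (hPa),
# with T in degrees Celsius, and the per-degree growth rate near 20 C.
import math

def e_s(T_c):
    return 6.1094 * math.exp(17.625 * T_c / (T_c + 243.04))

print(e_s(20.0))                    # ~23.4 hPa
print(e_s(21.0) / e_s(20.0) - 1.0)  # ~0.064, i.e. roughly 6-7% per degree C
```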
== Example ==
One of the uses of this equation is to determine if a phase transition will occur in a given situation. Consider the question of how much pressure is needed to melt ice at a temperature ΔT below 0 °C. Note that water is unusual in that its change in volume upon melting is negative. We can assume
\Delta P = \frac{L}{T\,\Delta v}\,\Delta T,
and substituting in L = 3.34×10^5 J/kg (the specific latent heat of fusion of water), T = 273 K, and Δv = −9.05×10^−5 m^3/kg (the change in specific volume from ice to liquid water), we obtain

\frac{\Delta P}{\Delta T} = -13.5~\text{MPa/K}.
To provide a rough example of how much pressure this is, to melt ice at −7 °C (the temperature many ice skating rinks are set at) would require balancing a small car (mass ~ 1000 kg) on a thimble (area ~ 1 cm2). This shows that ice skating cannot be simply explained by pressure-caused melting point depression, and in fact the mechanism is quite complex.
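A hedged arithmetic check of this example, using the numbers in the text (−13.5 MPa/K, 7 K of depression, a ~1 cm² contact area):

```python
# Hedged sketch: pressure needed to depress ice's melting point by 7 K,
# and the mass on a ~1 cm^2 contact area that would supply that pressure.
dP_dT = 13.5e6   # Pa per kelvin of melting-point depression (magnitude)
delta_T = 7.0    # K below 0 C
area = 1.0e-4    # m^2, about the area of a thimble
g = 9.81         # m/s^2

pressure = dP_dT * delta_T   # ~9.5e7 Pa
mass = pressure * area / g   # ~9.6e2 kg, about a small car
print(pressure, mass)
```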
== Second derivative ==
While the Clausius–Clapeyron relation gives the slope of the coexistence curve, it does not provide any information about its curvature or second derivative. The second derivative of the coexistence curve of phases 1 and 2 is given by
\frac{\mathrm{d}^2 P}{\mathrm{d}T^2} = \frac{1}{v_2 - v_1}\left[\frac{c_{p2} - c_{p1}}{T} - 2(v_2\alpha_2 - v_1\alpha_1)\frac{\mathrm{d}P}{\mathrm{d}T}\right] + \frac{1}{v_2 - v_1}\left[(v_2\kappa_{T2} - v_1\kappa_{T1})\left(\frac{\mathrm{d}P}{\mathrm{d}T}\right)^2\right],
where subscripts 1 and 2 denote the different phases, c_p is the specific heat capacity at constant pressure, α = (1/v)(dv/dT)_P is the thermal expansion coefficient, and κ_T = −(1/v)(dv/dP)_T is the isothermal compressibility.
== See also ==
Van 't Hoff equation
Antoine equation
Lee–Kesler method
== References ==
== Bibliography ==
== Notes == | Wikipedia/Clapeyron_equation |
In the thermodynamics of equilibrium, a state function, function of state, or point function for a thermodynamic system is a mathematical function relating several state variables or state quantities (that describe equilibrium states of a system) that depend only on the current equilibrium thermodynamic state of the system (e.g. gas, liquid, solid, crystal, or emulsion), not the path which the system has taken to reach that state. A state function describes equilibrium states of a system, thus also describing the type of system. A state variable is typically a state function, so fixing the values of the other state variables at an equilibrium state also fixes the value of the state variable as the state function at that state. The ideal gas law is a good example. In this law, one state variable (e.g., pressure, volume, temperature, or the amount of substance in a gaseous equilibrium system) is a function of other state variables and so is regarded as a state function. A state function could also describe the number of a certain type of atoms or molecules in a gaseous, liquid, or solid form in a heterogeneous or homogeneous mixture, or the amount of energy required to create such a system or change the system into a different equilibrium state.
Internal energy, enthalpy, and entropy are examples of state quantities or state functions because they quantitatively describe an equilibrium state of a thermodynamic system, regardless of how the system has arrived in that state. They are expressed by exact differentials. In contrast, mechanical work and heat are process quantities or path functions because their values depend on a specific "transition" (or "path") between two equilibrium states that a system has taken to reach the final equilibrium state, being expressed by inexact differentials. Exchanged heat (in certain discrete amounts) can be associated with changes of state function such as enthalpy. The description of the system heat exchange is done by a state function, and thus enthalpy changes point to an amount of heat. This can also apply to entropy when heat is compared to temperature. The description breaks down for quantities exhibiting hysteresis.
== History ==
It is likely that the term "functions of state" was used in a loose sense during the 1850s and 1860s by those such as Rudolf Clausius, William Rankine, Peter Tait, and William Thomson. By the 1870s, the term had acquired a use of its own. In his 1873 paper "Graphical Methods in the Thermodynamics of Fluids", Willard Gibbs states: "The quantities v, p, t, ε, and η are determined when the state of the body is given, and it may be permitted to call them functions of the state of the body."
== Overview ==
A thermodynamic system is described by a number of thermodynamic parameters (e.g. temperature, volume, or pressure) which are not necessarily independent. The number of parameters needed to describe the system is the dimension of the state space of the system (D). For example, a monatomic gas with a fixed number of particles is a simple case of a two-dimensional system (D = 2). Any two-dimensional system is uniquely specified by two parameters. Choosing a different pair of parameters, such as pressure and volume instead of pressure and temperature, creates a different coordinate system in two-dimensional thermodynamic state space but is otherwise equivalent. Pressure and temperature can be used to find volume, pressure and volume can be used to find temperature, and temperature and volume can be used to find pressure. An analogous statement holds for higher-dimensional spaces, as described by the state postulate.
Generally, a state space is defined by an equation of the form
F(P, V, T, \ldots) = 0,

where P denotes pressure, T denotes temperature, V denotes volume, and the ellipsis denotes other possible state variables like particle number N and entropy S. If the state space is two-dimensional as in the above example, it can be visualized as a three-dimensional graph (a surface in three-dimensional space). However, the labels of the axes are not unique (since there are more than three state variables in this case), and only two independent variables are necessary to define the state.
When a system changes state continuously, it traces out a "path" in the state space. The path can be specified by noting the values of the state parameters as the system traces out the path, whether as a function of time or a function of some other external variable. For example, having the pressure P(t) and volume V(t) as functions of time from time t0 to t1 will specify a path in two-dimensional state space. Any function of time can then be integrated over the path. For example, to calculate the work done by the system from time t0 to time t1, calculate
W(t_0, t_1) = \int P\,\mathrm{d}V = \int_{t_0}^{t_1} P(t)\,\frac{\mathrm{d}V(t)}{\mathrm{d}t}\,\mathrm{d}t.

In order to calculate the work W in the above integral, the functions P(t) and V(t) must be known at each time t over the entire path. In contrast, a state function only depends upon the system parameters' values at the endpoints of the path. For example, the following equation can be used to calculate the work plus the integral of V dP over the path:
\Phi(t_0, t_1) = \int_{t_0}^{t_1} P\,\frac{\mathrm{d}V}{\mathrm{d}t}\,\mathrm{d}t + \int_{t_0}^{t_1} V\,\frac{\mathrm{d}P}{\mathrm{d}t}\,\mathrm{d}t = \int_{t_0}^{t_1} \frac{\mathrm{d}(PV)}{\mathrm{d}t}\,\mathrm{d}t = P(t_1)V(t_1) - P(t_0)V(t_0).
In the equation, (d(PV)/dt) dt = d(PV) can be expressed as the exact differential of the function P(t)V(t). Therefore, the integral can be expressed as the difference in the value of P(t)V(t) at the end points of the integration. The product PV is therefore a state function of the system.
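A minimal sketch (our own construction, not from the source) contrasting the path dependence of work with the path independence of PV, using two different two-leg paths between the same endpoints:

```python
# Hedged sketch: the work integral of P dV depends on the path between two
# states, while the change in the state function PV does not.
P1, V1 = 1.0e5, 1.0e-3   # initial state (Pa, m^3)
P2, V2 = 2.0e5, 2.0e-3   # final state

# Path A: expand at constant P1, then raise pressure at constant V2.
W_A, VdP_A = P1 * (V2 - V1), V2 * (P2 - P1)
# Path B: raise pressure at constant V1, then expand at constant P2.
W_B, VdP_B = P2 * (V2 - V1), V1 * (P2 - P1)

print(W_A, W_B)                  # 100 J vs 200 J: work is path-dependent
print(W_A + VdP_A, W_B + VdP_B)  # 300 J on both paths = P2*V2 - P1*V1
```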
The notation d will be used for an exact differential. In other words, the integral of dΦ will be equal to Φ(t1) − Φ(t0). The symbol δ will be reserved for an inexact differential, which cannot be integrated without full knowledge of the path. For example, δW = PdV will be used to denote an infinitesimal increment of work.
State functions represent quantities or properties of a thermodynamic system, while non-state functions represent a process during which the state functions change. For example, the state function PV is proportional to the internal energy of an ideal gas, but the work W is the amount of energy transferred as the system performs work. Internal energy is identifiable; it is a particular form of energy. Work is the amount of energy that has changed its form or location.
== List of state functions ==
The following are considered to be state functions in thermodynamics (the original list was lost in extraction; these are the standard entries): mass, internal energy, enthalpy, Helmholtz free energy, Gibbs free energy, entropy, pressure, temperature, volume, and chemical composition.
== See also ==
Markov property
Conservative vector field
Nonholonomic system
Equation of state
State variable
== Notes ==
== References ==
Callen, Herbert B. (1985). Thermodynamics and an Introduction to Thermostatistics. Wiley & Sons. ISBN 978-0-471-86256-7.
Gibbs, Josiah Willard (1873). "Graphical Methods in the Thermodynamics of Fluids". Transactions of the Connecticut Academy. II. ASIN B00088UXBK – via WikiSource.
Mandl, F. (May 1988). Statistical physics (2nd ed.). Wiley & Sons. ISBN 978-0-471-91533-1.
== External links ==
Media related to State functions at Wikimedia Commons | Wikipedia/State_functions |
The principle of minimum energy is essentially a restatement of the second law of thermodynamics. It states that for a closed system, with constant external parameters and entropy, the internal energy will decrease and approach a minimum value at equilibrium. External parameters generally means the volume, but may include other parameters which are specified externally, such as a constant magnetic field.
In contrast, for isolated systems (and fixed external parameters), the second law states that the entropy will increase to a maximum value at equilibrium. An isolated system has a fixed total energy and mass. A closed system, on the other hand, is a system which is connected to another, and cannot exchange matter (i.e. particles), but can transfer other forms of energy (e.g. heat), to or from the other system. If, rather than an isolated system, we have a closed system, in which the entropy rather than the energy remains constant, then it follows from the first and second laws of thermodynamics that the energy of that system will drop to a minimum value at equilibrium, transferring its energy to the other system. To restate:
The maximum entropy principle: For a closed system with fixed internal energy (i.e. an isolated system), the entropy is maximized at equilibrium.
The minimum energy principle: For a closed system with fixed entropy, the total energy is minimized at equilibrium.
== Mathematical explanation ==
The total energy of the system is
U(S, X_1, X_2, \ldots),

where S is entropy, and the X_i are the other extensive parameters of the system (e.g. volume, particle number, etc.). The entropy of the system may likewise be written as a function of the other extensive parameters as S(U, X_1, X_2, \ldots). Suppose that X is one of the X_i which varies as a system approaches equilibrium, and that it is the only such parameter which is varying. The principle of maximum entropy may then be stated as:
\left(\frac{\partial S}{\partial X}\right)_U = 0 \quad\text{and}\quad \left(\frac{\partial^2 S}{\partial X^2}\right)_U < 0

at equilibrium.
The first condition states that entropy is at an extremum, and the second condition states that entropy is at a maximum. Note that for the partial derivatives, all extensive parameters are assumed constant except for the variables contained in the partial derivative, but only U, S, or X are shown. It follows from the properties of an exact differential (see equation 8 in the exact differential article) and from the energy/entropy equation of state that, for a closed system:
\left(\frac{\partial U}{\partial X}\right)_S = -\frac{\left(\frac{\partial S}{\partial X}\right)_U}{\left(\frac{\partial S}{\partial U}\right)_X} = -T\left(\frac{\partial S}{\partial X}\right)_U = 0
It is seen that the energy is at an extremum at equilibrium. By similar but somewhat more lengthy argument it can be shown that
\left(\frac{\partial^2 U}{\partial X^2}\right)_S = -T\left(\frac{\partial^2 S}{\partial X^2}\right)_U
which is greater than zero, showing that the energy is, in fact, at a minimum.
== Examples ==
Consider, for one, the familiar example of a marble on the edge of a bowl. If we consider the marble and bowl to be an isolated system, then when the marble drops, the potential energy will be converted to the kinetic energy of motion of the marble. Frictional forces will convert this kinetic energy to heat, and at equilibrium, the marble will be at rest at the bottom of the bowl, and the marble and the bowl will be at a slightly higher temperature. The total energy of the marble-bowl system will be unchanged. What was previously the potential energy of the marble will now reside in the increased heat energy of the marble-bowl system. This is an application of the maximum entropy principle: due to the heating effects, the entropy has increased to the maximum value possible given the fixed energy of the system.
If, on the other hand, the marble is lowered very slowly to the bottom of the bowl, so slowly that no heating effects occur (i.e. reversibly), then the entropy of the marble and bowl will remain constant, and the potential energy of the marble will be transferred as energy to the surroundings. The surroundings will maximize its entropy given its newly acquired energy, which is equivalent to the energy having been transferred as heat. Since the potential energy of the system is now at a minimum with no increase in the energy due to heat of either the marble or the bowl, the total energy of the system is at a minimum. This is an application of the minimum energy principle.
Alternatively, suppose we have a cylinder containing an ideal gas, with cross sectional area A and a variable height x. Suppose that a weight of mass m has been placed on top of the cylinder. It presses down on the top of the cylinder with a force of mg where g is the acceleration due to gravity.
Suppose that x is smaller than its equilibrium value. The upward force of the gas is greater than the downward force of the weight, and if allowed to freely move, the gas in the cylinder would push the weight upward rapidly, and there would be frictional forces that would convert the energy to heat. If we specify that an external agent presses down on the weight so as to very slowly (reversibly) allow the weight to move upward to its equilibrium position, then there will be no heat generated and the entropy of the system will remain constant while energy is transferred as work to the external agent. The total energy of the system at any value of x is given by the internal energy of the gas plus the potential energy of the weight:
U = TS - PAx + \mu N + mgx
where T is temperature, S is entropy, P is pressure, μ is the chemical potential, N is the number of particles in the gas, and the volume has been written as V=Ax. Since the system is closed, the particle number N is constant and a small change in the energy of the system would be given by:
\mathrm{d}U = T\,\mathrm{d}S - PA\,\mathrm{d}x + mg\,\mathrm{d}x
Since the entropy is constant, we may say that dS=0 at equilibrium and by the principle of minimum energy, we may say that dU=0 at equilibrium, yielding the equilibrium condition:
0 = -PA + mg
which simply states that the upward gas pressure force (PA) on the upper face of the cylinder is equal to the downward force of the mass due to gravitation (mg).
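A minimal numeric sketch of this condition, with illustrative values of our own choosing:

```python
# Hedged sketch: gas pressure that balances a weight resting on a piston,
# from P*A = m*g (any ambient pressure acting on the weight is ignored).
m = 50.0       # kg, mass resting on the piston
g = 9.81       # m/s^2
A = 0.01       # m^2, cross-sectional area of the cylinder

P = m * g / A
print(P)       # ~4.9e4 Pa
```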
== Thermodynamic potentials ==
The principle of minimum energy can be generalized to apply to constraints other than fixed entropy. For other constraints, other state functions with dimensions of energy will be minimized. These state functions are known as thermodynamic potentials. Thermodynamic potentials are at first glance just simple algebraic combinations of the energy terms in the expression for the internal energy. For a simple, multicomponent system, the internal energy may be written:
U(S, V, \{N_j\}) = TS - PV + \sum_j \mu_j N_j
where the intensive parameters (T, P, μ_j) are functions of the internal energy's natural variables (S, V, {N_j}) via the equations of state. As an example of another thermodynamic potential, the Helmholtz free energy is written:
A(T, V, \{N_j\}) = U - TS
where temperature has replaced entropy as a natural variable. In order to understand the value of the thermodynamic potentials, it is necessary to view them in a different light. They may in fact be seen as (negative) Legendre transforms of the internal energy, in which certain of the extensive parameters are replaced by the derivative of internal energy with respect to that variable (i.e. the conjugate to that variable). For example, the Helmholtz free energy may be written:
A(T, V, \{N_j\}) = \min_S \left(U(S, V, \{N_j\}) - TS\right)
and the minimum will occur when the variable T becomes equal to the temperature, since

T = \left(\frac{\partial U}{\partial S}\right)_{V,\{N_j\}}
The Helmholtz free energy is a useful quantity when studying thermodynamic transformations in which the temperature is held constant. Although the reduction in the number of variables is a useful simplification, the main advantage comes from the fact that the Helmholtz free energy is minimized at equilibrium with respect to any unconstrained internal variables for a closed system at constant temperature and volume. This follows directly from the principle of minimum energy which states that at constant entropy, the internal energy is minimized. This can be stated as:
U_0(S_0) = \min_x \left(U(S_0, x)\right)
where U_0 and S_0 are the value of the internal energy and the (fixed) entropy at equilibrium. The volume and particle number variables have been replaced by x, which stands for any internal unconstrained variables.
The minimization is with respect to the unconstrained variables. In the case of chemical reactions this is usually the number of particles or mole fractions, subject to the conservation of elements. At equilibrium, these will take on their equilibrium values, and the internal energy U_0 will be a function only of the chosen value of entropy S_0. By the definition of the Legendre transform, the Helmholtz free energy will be:
A(T, x) = \min_S \left(U(S, x) - TS\right)
The Helmholtz free energy at equilibrium will be:
A_0(T_0) = \min_{S_0} \left(U_0(S_0) - T_0 S_0\right)
where T_0 is the (unknown) temperature at equilibrium. Substituting the expression for U_0:
A_0 = \min_{S_0} \left(\min_x \left(U(S_0, x)\right) - T_0 S_0\right)
By exchanging the order of the extrema:
A_0 = \min_x \left(\min_{S_0} \left(U(S_0, x) - T_0 S_0\right)\right) = \min_x \left(A_0(T_0, x)\right)
showing that the Helmholtz free energy is minimized at equilibrium.
The enthalpy and the Gibbs free energy are similarly derived.
== References ==
Callen, Herbert B. (1985). Thermodynamics and an Introduction to Thermostatistics (2nd ed.). New York: John Wiley & Sons. ISBN 0-471-86256-8. OCLC 485487601. | Wikipedia/Principle_of_minimum_energy |
The Northwest Energy Efficiency Alliance (NEEA) is a non-profit organization working to accelerate energy efficiency in the Pacific Northwest through the acceleration and adoption of energy-efficient products, services and practices. NEEA is supported by and works in partnership with more than 140 Northwest utilities, the Bonneville Power Administration and Energy Trust of Oregon. NEEA's efforts serve Idaho, Montana, Oregon and Washington.
== History ==
In the mid-1990s, with energy efficiency programs at risk of being stranded by deregulation, the Northwest Power and Conservation Council called for the creation of a regional organization to encourage energy efficiency practices. NEEA was incorporated as a non-profit corporation in the fall of 1996 with funding from all investor-owned utilities in the region and the Bonneville Power Administration, which represented publicly owned utilities. In order to ensure the necessary collaboration would take place, the first Board represented all of the primary stakeholders including regulators from the four states, public and privately owned utilities, energy efficiency businesses, and representatives for the four state governments.
Since 1997, the region, in collaboration with NEEA, has saved enough energy through its energy efficiency efforts to power more than 900,000 homes each year.
== Funders ==
Avista Utilities
Bonneville Power Administration
Cascade Natural Gas
Clark Public Utilities
Chelan PUD
Cowlitz County PUD
Energy Trust of Oregon
Idaho Power
NorthWestern Energy
NW Natural
Pacific Power
Portland General Electric
Puget Sound Energy
Seattle City Light
Snohomish County PUD
Tacoma Power
== References ==
== External links ==
Northwest Energy Efficiency Alliance website | Wikipedia/Northwest_Energy_Efficiency_Alliance |
Johnson Controls International plc is an American, Irish-domiciled multinational conglomerate headquartered in Cork, Ireland, that produces fire, HVAC, and security equipment for buildings. As of mid-2019, it employed 105,000 people in around 2,000 locations across six continents. In 2017 it was listed as 389th in the Fortune Global 500. It became ineligible for the Fortune 500 in subsequent years since it relocated its headquarters outside the U.S.
The company was formed via the merger of American company Johnson Controls with Tyco International, announced on 25 January 2016. The merger led to the avoidance of taxation on foreign market operations and a financial windfall for the CEO of Johnson Controls at that time, Alex Molinaroli.
== History ==
In 1883, Warren S. Johnson, a professor at the Whitewater Normal School (now University of Wisconsin–Whitewater) in Whitewater, Wisconsin, received a patent for the first electric room thermostat. His invention helped launch the building control industry and was the impetus for a new company. Johnson and a group of Milwaukee investors led by William Plankinton incorporated the Johnson Electric Service Company in 1885 to manufacture, install and service automatic temperature regulation systems for buildings. After Johnson died in 1911, the company decided to focus on its temperature control business for non-residential buildings.
In 1970, the company took over clock manufacturer Standard Electric Time Company. The company was renamed Johnson Controls in 1974. In 1978, Johnson Controls acquired the battery company Globe-Union. That same year, the company divested itself of the Standard Electric Time Company and sold it to Faraday. In 1985, Johnson Controls acquired automotive seating companies Hoover Universal and Ferro Manufacturing. In 1989, Johnson acquired Pan Am World Services.
During the 2008–2009 recession, the company's president, Keith Wandell, lobbied Congress for a bailout of the companies that Johnson supplied. The Johnson Controls plant in Lakeshore, Ontario, closed in late March 2010 and the property was sold. In 2013, Stephen Roell retired and Alex Molinaroli took his position as CEO and chairman of the board.
=== Subsequent history ===
On 31 October 2016, the former Johnson Controls Automotive Experience division was spun off as a separate, publicly traded company, Adient, and began trading on the New York Stock Exchange. In March 2017, it was announced that Scott Safety, its safety gear business, would be bought by 3M for $2 billion.
On 1 September 2017, George Oliver was appointed as chairman and CEO, an acceleration by 6 months from the original plans.
On 12 May 2021, Johnson Controls completed the acquisition of Silent-Aire, a Canadian firm that specialized in data center cooling systems. Johnson Controls paid $630 million upfront, with additional payments contingent upon reaching certain milestones and the total price capped at $870 million.
In October 2021, it was announced that Johnson Controls had picked Ava Robotics to power its new 'Tyco Security Robot'. This fully autonomous security robot includes sensors, touchscreen and integrates two Tyco Illustra cameras to bring access control, video surveillance and security robotics together.
In September 2023, Johnson Controls experienced a ransomware attack that encrypted numerous company devices and servers, prompting the company to immediately shut down specific IT systems.
In July 2024, Johnson Controls said that it will sell a portfolio of its heating and ventilation units to Germany's Bosch Group for $6.7 billion.
== Women's work rights ==
In 1982, Johnson Controls enacted what it called a "fetal protection policy", which denied women the right to work on the battery production line because of the potential harm to a fetus they might conceive. Women were allowed to work on the production line only if they could prove that "... their inability to bear children had been medically documented."
In April 1984, the United Automobile Workers sued Johnson Controls on behalf of three employees. These employees were Mary Craig, who had chosen to be sterilized to avoid losing her job, Elsie Nason, a 50-year-old divorcee, who had suffered a loss of compensation when she was transferred from a high paying job that exposed her to lead, and Donald Penney, who had been denied a request for a leave of absence for the purpose of lowering his blood lead levels because he intended to become a father. The case was argued before the Supreme Court of the United States on 10 October 1990 and was decided on 20 March 1991. The Court ruled in favor of the plaintiffs. This was a landmark ruling because it affirmed that "... it is no more appropriate for the courts than it is for individual employers to decide whether a woman's reproductive role is more important to herself and her family than her economic role."
== Business units ==
The company's operations focus on Building Efficiency.
=== Building Technologies and Solutions ===
The Building Technologies and Solutions business unit designs, produces, installs and services heating, ventilation and air conditioning systems, industrial refrigeration, building management systems, fire and security systems and mechanical equipment for commercial and residential buildings. The brands produced under this business unit are York, TempMaster, Metasys, Panoptix, Frick and Sabroe. This unit also works with organizations to reduce the energy consumption and operating costs of their buildings. This includes retrofitting existing buildings such as the Empire State Building and working on maximizing efficiency in new construction such as the Burj Khalifa in Dubai. Building Technologies & Solutions is the company's longest-running business unit, dating to 1885 when Johnson founded the Johnson Electric Service Company after patenting the electric thermostat in 1883. As of 2012, the business unit operated from 700 branch offices in more than 150 countries.
Johnson Controls was one of the defendants in a multimillion-dollar federal court lawsuit in San Juan, Puerto Rico in a case where 98 people perished and 140 were injured in a fire at the DuPont Plaza Hotel and its casino on New Year's Eve, 31 December 1986. The plaintiffs claimed that Johnson Controls sold and installed an energy management system that failed to give early warning of the fire. After nine months of trial, the company and its energy management system were absolved of blame when the court issued a directed verdict. When the trial was completed the plaintiffs had accumulated approximately $220,908,549.00 in damages as a result of various settlements and a jury verdict against some other defendants.
=== Former business units ===
==== Power Solutions ====
This unit was sold to Brookfield Business Partners and re-made into a new company, Clarios, as of 1 May 2019.
The Power Solutions business unit designs and manufactures automotive batteries for passenger cars, heavy and light duty trucks, utility vehicles, motorcycles, golf carts and boats. It supplies more than one third of the world's lead-acid batteries to automakers and aftermarket retailers including Wal-Mart, Sears, Toyota, and BMW. Lead acid battery brands produced under this business unit include Continental, OPTIMA, Heliar, LTH, Delkor and VARTA automotive batteries. This part of the company also manufactures lithium-ion cells and complete battery systems to power hybrid and electric vehicles such as the Ford Fusion and Daimler's S-Class 400. Additionally, it manufactures absorbent glass mat (AGM) and enhanced flooded (EFB) batteries to power start-stop vehicles such as the Chevy Malibu and Ford Fusion. As of 2012, the business unit operated from 60 locations worldwide. On 13 November 2018, Johnson Controls agreed to sell its Power Solutions division to Brookfield Business Partners.
==== Automotive Experience ====
This business unit was spun off into a new company named Adient on 31 October 2016.
==== Global WorkPlace Solutions ====
The Global WorkPlace Solutions business unit provides outsourced facilities management services globally. It also manages corporate real estate on behalf of its customers including acquiring and disposing of property, administering leases, and managing building related projects such as equipment replacements. On 23 September 2015, CBRE, Inc. purchased the Global Workplace Solutions business unit, retaining the name "Global Workplace Solutions".
== Joint ventures ==
Amaron: Amara Raja Batteries of India signed a joint venture with Johnson Controls in December 1997 to manufacture automotive batteries in India, under the brand name "Amaron". Amara Raja Batteries terminated the partnership with Johnson Controls on 1 April 2019.
Brookfield Johnson Controls: A joint venture with Brookfield Properties to provide commercial property management services in Canada. Established in 1992, it was known as Brookfield LePage Johnson Controls or BLJC until May 2015. In 2013, Johnson Controls and Brookfield Asset Management formed a similar joint venture in Australia and New Zealand. In 2015, JCI pulled out and the company continued as Brookfield Global Integrated Solutions.
Diniz Johnson Controls: A joint venture with Diniz Holding in Turkey building complete automotive seats for major OEMs.
Johnson Controls Hitachi: A joint venture formed in 2015 with Hitachi in Japan for the RAC, PAC, VRF and chiller business.
Johnson Controls-Saft Advanced Power Solutions: Johnson Controls-Saft Advanced Power Solutions (JCS) was a joint venture between Johnson Controls and the French battery company Saft Groupe S.A. It was officially launched in January 2006. VARTA established a JCS development centre at its German HQ, following the setting-up of the VARTA–Saft joint venture. Johnson Controls exhibited a plug-in hybrid concept called the re3. Johnson Controls produced cells for lithium-ion hybrid vehicle batteries in France under the joint venture with Saft. Battery assemblies were developed and produced in Hannover (Germany), Zwickau (Germany) and Milwaukee (US). Johnson Controls was increasingly dissatisfied with the restrictions of the agreement and also sought a more important ally. In May 2011, Johnson Controls requested the dissolution of Johnson Controls-Saft Advanced Power Solutions LLC to the Delaware Court of Chancery. The two companies agreed to the separation and Johnson Controls paid Saft $145 million for its shares in the joint venture, as well as for the right to use certain technology. Johnson Controls retained the Michigan facility built by the partnership. The French joint facility was transferred to Saft.
== Brands ==
=== Coleman Heating & Air Conditioning ===
Coleman Heating & Air Conditioning is a major manufacturing brand of HVAC equipment, and was formerly an independent HVAC manufacturing company. The company began as a division of the Coleman Company in 1958 and was acquired by Evcon in 1990, which in turn was acquired by Johnson Controls in 1996. Of the twelve largest American furnace brand names represented at Gas Furnace Guide, the Coleman brand received an average ranking of 3.7 out of 5 stars.
=== York International ===
York International is the final name of a company started in York, Pennsylvania, US, in 1874, which developed the York brand of refrigeration and HVAC equipment. The York brand has been owned since August 2005 by Johnson Controls, when it was sold to them for $3.2 billion. At the time of the acquisition, it was the world's largest independent manufacturer of air conditioning, heating, and refrigeration machinery. Its stock symbol was formerly YRK.
=== Manufacturing ===
Johnson Controls operates HVAC manufacturing plants in the United States in Wichita, Kansas and Norman, Oklahoma. The Wichita plant primarily produces residential unitary equipment, such as air conditioners, furnaces, and heat pumps for the North American Market under various brands including York and Coleman. The Norman plant primarily produces rooftop units (RTUs) for commercial use.
== Controversies ==
=== Merger with Tyco ===
On 25 January 2016, Johnson Controls announced that it would merge with Tyco International to create Johnson Controls International plc, a company headquartered in Cork, Ireland. The merger was completed in September 2016. Merging with the Irish company allowed Johnson Controls to become an Irish company itself, and enjoy sharply lowered corporate taxes, a process known as a tax inversion. This restructuring came at great expense of the workforce which was reduced by 52% between 2016 and 2022. The same occurred after the takeover of York International in 2005, which led to a reduction of 76% of the workforce between 2005 and 2016.
Hillary Clinton condemned the company for wanting to escape United States taxes through the merger after having "begged" the government for financial help in 2008. The Johnson deal was termed "outrageous" by Fortune magazine. The firm estimated that it would save about US$150 million a year by avoiding American taxes.
=== Tyco International scandal ===
In 2002, former chairman and chief executive Dennis Kozlowski and former chief financial officer Mark H. Swartz were accused of the theft of more than US$150 million from the company. During their trial in March 2004, they contended the board of directors authorized it as compensation.
Kozlowski was tried twice. The first trial was ruled a mistrial after one of the jurors, reported to have made an OK sign towards Kozlowski's lawyers, was threatened by the public. Kozlowski testified on his own behalf during the second trial, stating that his pay package was "confusing" and "almost embarrassingly big," but that he never committed a crime as the company's top executive.
On 17 June 2005, after a retrial, Kozlowski and Swartz were convicted on all but one of the more than 30 counts against them. The verdicts carried potential jail terms of up to 25 years in state prison. Kozlowski and Swartz were each sentenced to no less than eight years and four months and no more than 25 years in prison. In May 2007, New Hampshire Federal District Court Judge Paul Barbadoro approved a class action settlement whereby Tyco agreed to pay $2.92 billion (in conjunction with $225 million paid by PricewaterhouseCoopers, its auditors) to a class of defrauded shareholders represented by Grant & Eisenhofer P.A.; Schiffrin, Barroway, Topaz & Kessler; and Milberg Weiss & Bershad.
On 17 January 2014, Kozlowski was granted parole from Lincoln Correctional Facility in New York City.
=== Bribery charges in China ===
In 2016, Johnson Controls agreed to pay $14.4 million to settle Foreign Corrupt Practices Act charges with the SEC. According to the SEC, employees of China Marine, a subsidiary of Johnson Controls, used sham vendors to funnel $4.9 million in bribes to Chinese government-owned shipyards in order to win business and enrich themselves.
== References ==
== External links ==
Official website
Business data for Johnson Controls: | Wikipedia/Johnson_Controls |
World energy supply and consumption refers to the global supply of energy resources and its consumption. The system of global energy supply consists of the development, refinement, and trade of energy. Energy supplies may exist in various forms, such as raw resources or more processed and refined forms of energy. Raw energy resources include, for example, coal, unprocessed oil and gas, and uranium; refined forms include, for example, refined oil that becomes fuel, and electricity. Energy resources may be used in various ways, depending on the specific resource (e.g. coal) and the intended end use (industrial, residential, etc.). Energy production and consumption play a significant role in the global economy, as energy is needed for industry and global transportation. The total energy supply chain, from production to final consumption, involves many activities that cause a loss of useful energy.
As of 2022, about 80% of energy consumption still comes from fossil fuels. The Gulf States and Russia are major energy exporters. Their customers include, for example, the European Union and China, which do not produce enough energy domestically to satisfy their demand. Total energy consumption tends to increase by about 1–2% per year. More recently, renewable energy has been growing rapidly, averaging about a 20% increase per year in the 2010s.
Two key problems with energy production and consumption are greenhouse gas emissions and environmental pollution. Of the roughly 50 billion tonnes of greenhouse gases emitted worldwide each year, 36 billion tonnes of carbon dioxide resulted from energy use (almost all of it from fossil fuels) in 2021. Many scenarios have been envisioned to reduce greenhouse gas emissions, usually under the name of net zero emissions.
There is a clear connection between energy consumption per capita and GDP per capita.
A significant lack of energy supplies is called an energy crisis.
== Primary energy production ==
Primary energy (PE) refers to the first form of energy encountered: raw resources collected directly from energy production, before any conversion or transformation of the energy occurs.
Energy production is usually classified as:
Fossil, using coal, crude oil, and natural gas;
Nuclear, using uranium;
Renewable, using biomass, geothermal, hydropower, solar, wind, tidal, wave, among others.
Primary energy assessment by IEA follows certain rules to ease measurement of different kinds of energy. These rules are controversial. Water and air flow energy that drives hydro and wind turbines, and sunlight that powers solar panels, are not taken as PE, which is set at the electric energy produced. But fossil and nuclear energy are set at the reaction heat, which is about three times the electric energy. This measurement difference can lead to underestimating the economic contribution of renewable energy.
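To make the convention concrete, here is a minimal sketch (illustrative Python; the one-third thermal efficiency is an assumed typical value for a steam-cycle plant, not an IEA figure) of how 1 TWh of electricity is credited as primary energy depending on its source:

```python
# Illustrative sketch of the primary-energy accounting convention described above.
# The 1/3 thermal efficiency is an assumption for a typical steam-cycle plant.

THERMAL_EFFICIENCY = 1 / 3  # electric energy out per unit of reaction heat in

def primary_energy_twh(electricity_twh: float, source: str) -> float:
    """Primary energy credited for a given electricity output, by source."""
    if source in ("wind", "hydro", "solar"):
        # Counted at the electric energy produced.
        return electricity_twh
    if source in ("nuclear", "fossil"):
        # Counted at the reaction heat, about three times the electric energy.
        return electricity_twh / THERMAL_EFFICIENCY
    raise ValueError(f"unknown source: {source}")

for source in ("wind", "nuclear"):
    print(source, primary_energy_twh(1.0, source))  # wind: 1.0, nuclear: 3.0
```

The same electric output is thus credited roughly three times more primary energy when it comes from a thermal plant, which is the underestimation of renewables the text describes.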
Enerdata displays data for "Total energy / production: Coal, Oil, Gas, Biomass, Heat and Electricity" and for "Renewables / % in electricity production: Renewables, non-renewables".
The table lists worldwide PE and the countries producing most (76%) of that in 2021, using Enerdata. The amounts are rounded and given in million tonnes of oil equivalent per year (1 Mtoe = 11.63 TWh = 41.9 petajoules, where 1 TWh = 10⁹ kWh) and % of Total. Renewable is Biomass plus Heat plus the renewable percentage of Electricity production (hydro, wind, solar). Nuclear is the nonrenewable percentage of Electricity production. The above-mentioned underestimation of hydro, wind and solar energy, compared to nuclear and fossil energy, applies also to Enerdata.
The 2021 world total energy production of 14,800 Mtoe corresponds to a little over 172 PWh/year, or about 19.6 TW of average power generation.
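The arithmetic behind this conversion can be checked directly; a minimal sketch in Python, using only the conversion factors given above:

```python
# Checking the unit conversion stated above (values rounded as in the text).

MTOE_TO_TWH = 11.63          # 1 Mtoe = 11.63 TWh
HOURS_PER_YEAR = 8760

production_mtoe = 14_800     # world primary energy production, 2021
production_twh = production_mtoe * MTOE_TO_TWH
production_pwh = production_twh / 1000
average_power_tw = production_twh / HOURS_PER_YEAR

print(f"{production_pwh:.0f} PWh/year")   # ~172 PWh/year
print(f"{average_power_tw:.1f} TW")       # ~19.6 TW
```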
== Energy conversion ==
Energy resources must be processed to make them suitable for final consumption. For example, raw coal as mined, or raw natural gas produced from an oil well, may contain impurities that make it unsuitable for burning in a power plant.
Primary energy is converted in many ways to energy carriers, also known as secondary energy:
Coal mainly goes to thermal power stations. Coke is derived by destructive distillation of bituminous coal.
Crude oil goes mainly to oil refineries.
Natural gas goes to natural gas processing plants to remove contaminants such as water, carbon dioxide and hydrogen sulfide, and to adjust the heating value. It is used as fuel gas, also in thermal power stations.
Nuclear reaction heat is used in thermal power stations.
Biomass is used directly or converted to biofuel.
Electricity generators are driven by steam or gas turbines in a thermal plant, or water turbines in a hydropower station, or wind turbines, usually in a wind farm. The invention of the solar cell in 1954 started electricity generation by solar panels, connected to a power inverter. Mass production of panels around the year 2000 made this economic.
== Energy trade ==
Much primary and converted energy is traded among countries. The table lists countries with a large difference between exports and imports in 2021, expressed in Mtoe. A negative value indicates that much energy must be imported to supply the economy. Russian gas exports fell sharply in 2022, as pipeline capacity to Asia plus LNG export capacity is much smaller than the volume of gas no longer sent to Europe.
Transport of energy carriers is done by tanker ship, tank truck, LNG carrier, rail freight transport, pipeline and by electric power transmission.
== Total energy supply ==
Total energy supply (TES) is the sum of production and imports, minus exports and storage changes. For the whole world TES nearly equals primary energy (PE), because imports and exports cancel out; but for individual countries TES and PE differ in quantity, and also in quality, as secondary energy is involved, e.g. the import of an oil refinery product. TES is all the energy required to supply energy for end users.
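In formula form, this definition reads (a direct restatement of the sentence above, with Δstorage denoting the net change in stocks):

```latex
\mathrm{TES} = \mathrm{production} + \mathrm{imports} - \mathrm{exports} - \Delta\,\mathrm{storage}
```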
The tables list TES and PE for some countries where these differ greatly, for 2021, along with the history of TES. Most growth of TES since 1990 occurred in Asia. The amounts are rounded and given in Mtoe. Enerdata labels TES as "Total energy consumption".
25% of worldwide primary production is used for conversion and transport, and 6% for non-energy products like lubricants, asphalt and petrochemicals. In 2019 TES was 606 EJ and final consumption was 418 EJ, 69% of TES. Most of the energy lost in conversion occurs in thermal electricity plants and in the energy industry's own use.
=== Discussion about energy loss ===
There are different qualities of energy. Heat, especially at a relatively low temperature, is low-quality energy of random motion, whereas electricity is high-quality energy that flows smoothly through wires. It takes around 3 kWh of heat to produce 1 kWh of electricity. But by the same token, a kilowatt-hour of this high-quality electricity can be used to pump several kilowatt-hours of heat into a building using a heat pump. It turns out that the loss of useful energy incurred in thermal electricity plants is very much more than the loss due to, say, resistance in power lines, because of quality differences. Electricity can also be used in many ways in which heat cannot.
In fact, the loss in thermal plants is due to the poor conversion of the chemical energy of fuel into motion by combustion. Chemical energy of fuel is not inherently low-quality; for example, the conversion of chemical energy to electricity in batteries can approach 100%. So the energy lost in thermal plants is a real loss.
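The quality argument can be made quantitative. The sketch below (illustrative Python; the 3:1 heat-to-electricity ratio comes from the text, while the heat pump's coefficient of performance of 3 is an assumed typical value) compares burning fuel in a furnace with routing the same fuel heat through a power plant and a heat pump:

```python
# Illustrative numbers for the energy-quality discussion above.

FUEL_HEAT_KWH = 3.0        # heat released by burning the fuel
PLANT_EFFICIENCY = 1 / 3   # ~3 kWh of heat -> 1 kWh of electricity (from the text)
HEAT_PUMP_COP = 3.0        # assumed coefficient of performance of the heat pump

# Option A: burn the fuel in a furnace, delivering its heat directly.
direct_heat = FUEL_HEAT_KWH                          # 3.0 kWh of heat

# Option B: generate electricity, then drive a heat pump with it.
electricity = FUEL_HEAT_KWH * PLANT_EFFICIENCY       # 1.0 kWh of electricity
pumped_heat = electricity * HEAT_PUMP_COP            # 3.0 kWh of heat delivered

print(direct_heat, pumped_heat)  # comparable heat despite the conversion loss
```

Despite losing two-thirds of the fuel heat at the plant, the heat pump route delivers comparable heat, which is why quality, not just quantity, matters when comparing losses.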
== Final consumption ==
Total final consumption (TFC) is the worldwide consumption of energy by end-users (whereas primary energy consumption (Eurostat) or total energy supply (IEA) is total energy demand and thus also includes what the energy sector uses itself and transformation and distribution losses). This energy consists of fuel (78%) and electricity (22%). The tables list amounts, expressed in million tonnes of oil equivalent per year (1 Mtoe = 11.63 TWh) and how much of these is renewable energy. Non-energy products are not considered here. The data are of 2018. The world's renewable share of TFC was 18% in 2018: 7% traditional biomass, 3.6% hydropower and 7.4% other renewables.
In the period 2005–2017 worldwide final consumption of coal increased by 23%, of oil and gas increased by 18%, and that of electricity increased by 41%.
Fuel comes in three types: fossil fuel, meaning natural gas, fuel derived from petroleum (LPG, gasoline, kerosene, gas/diesel, fuel oil), or fuel derived from coal (anthracite, bituminous coal, coke, blast furnace gas); renewable fuel (biofuel and fuel derived from waste); and fuel used for district heating.
The amounts of fuel in the tables are based on lower heating value.
The first table lists final consumption in the countries/regions which use most (85%), and per person, as of 2018. In developing countries, fuel consumption per person is low and a larger share of it is renewable. Canada, Venezuela and Brazil generate most of their electricity with hydropower.
The next table shows countries consuming most (85%) in Europe.
=== Energy for energy ===
Some fuel and electricity is used to construct, maintain and demolish/recycle installations that produce fuel and electricity, such as oil platforms, uranium isotope separators and wind turbines. For these producers to be economical the ratio of energy returned on energy invested (EROEI) or energy return on investment (EROI) should be large enough.
If the final energy delivered for consumption is E and the EROI equals R, then the net energy available is E - E/R, i.e. a percentage 100 - 100/R of the delivered energy. For R > 10, more than 90% is available; for R = 2, only 50%; and for R = 1, none at all. This steep decline is known as the net energy cliff.
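Tabulating the percentage 100 - 100/R for a few EROI values shows how steep the cliff is (a minimal sketch of the formula above):

```python
# Net energy available as a percentage of delivered energy, for several EROI values.

def available_percent(r: float) -> float:
    """Percentage of final energy E that is net, given EROI R: 100 - 100/R."""
    return 100 - 100 / r

for r in (50, 20, 10, 5, 2, 1.5, 1):
    print(f"R = {r:>4}: {available_percent(r):5.1f}% net energy")
# The drop accelerates below R ~ 5: the 'net energy cliff'.
```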
== Availability of data ==
Many countries publish statistics on the energy supply and consumption of their own country, of other countries of interest, or of all countries combined in one chart. One of the largest organizations in this field, the International Energy Agency (IEA), sells its comprehensive yearly energy data, which leaves this data paywalled and difficult for internet users to access. The organization Enerdata, on the other hand, publishes a free yearbook, making the data more accessible. Another trustworthy organization that provides energy data, mainly covering the USA, is the U.S. Energy Information Administration.
== Trends and outlook ==
Due to the COVID-19 pandemic, there was a significant decline in energy usage worldwide in 2020, but total energy demand worldwide had recovered by 2021, and has hit a record high in 2022.
In 2022, consumers worldwide spent nearly USD 10 trillion on energy, averaging more than USD 1,200 per person. This reflects a 20% increase over the previous five-year average, highlighting the significant economic impact and the increasing financial burden of energy consumption on a global scale.: 13
=== IEA scenarios ===
In World Energy Outlook 2023 the IEA notes that "We are on track to see all fossil fuels peak before 2030".: 18 The IEA presents three scenarios:: 17
The Stated Policies Scenario (STEPS) provides an outlook based on the latest policy settings. The share of fossil fuel in global energy supply – stuck for decades around 80% – starts to edge downwards and reaches 73% by 2030.: 18 This undercuts the rationale for any increase in fossil fuel investment.: 19 Renewables are set to contribute 80% of new power capacity to 2030, with solar PV alone accounting for more than half.: 20 The STEPS sees a peak in energy-related CO2 emissions in the mid-2020s but emissions remain high enough to push up global average temperatures to around 2.4 °C in 2100.: 22 Total energy demand continues to increase through to 2050.: 23 Total energy investment remains at about US$3 trillion per year.: 49
The Announced Pledges Scenario (APS) assumes all national energy and climate targets made by governments are met in full and on time. The APS is associated with a temperature rise of 1.7 °C in 2100 (with a 50% probability).: 92 Total energy investment rises to about US$4 trillion per year after 2030.: 49
The Net Zero Emissions by 2050 (NZE) Scenario limits global warming to 1.5 °C.: 17 The share of fossil fuel reaches 62% in 2030.: 101 Methane emissions from fossil fuel supply are cut by 75% by 2030.: 45 Total energy investment rises to almost US$5 trillion per year after 2030.: 49 Clean energy investment needs to rise everywhere, but the steepest increases are needed in emerging market and developing economies other than China, requiring enhanced international support.: 46 The share of electricity in final consumption exceeds 50% by 2050 in the NZE. The share of nuclear power in electricity generation remains broadly stable over time in all scenarios, at about 9%.: 106
The IEA's "Electricity 2024" report details a 2.2% growth in global electricity demand for 2023, forecasting an annual increase of 3.4% through 2026, with notable contributions from emerging economies like China and India, despite a slump in advanced economies due to economic and inflationary pressures. The report underscores the significant impact of data centers, artificial intelligence and cryptocurrency, projecting a potential doubling of electricity consumption to 1,000 TWh by 2026, which is on par with Japan's current usage. Notably, 85% of the additional demand is expected to originate from China and India, with India's demand alone predicted to grow over 6% annually until 2026, driven by economic expansion and increasing air conditioning use.
Southeast Asia's electricity demand is also forecasted to climb by 5% annually through 2026. In the United States, a decrease was seen in 2023, but a moderate rise is anticipated in the coming years, largely fueled by data centers. The report also anticipates that a surge in electricity generation from low-emissions sources will meet the global demand growth over the next three years, with renewable energy sources predicted to surpass coal by early 2025.
=== Alternative scenarios ===
The goal set in the Paris Agreement to limit climate change will be difficult to achieve. Various scenarios for achieving the Paris Climate Agreement Goals have been developed, using IEA data but proposing transition to nearly 100% renewables by mid-century, along with steps such as reforestation. Nuclear power and carbon capture are excluded in these scenarios. The researchers say the costs will be far less than the $5 trillion per year governments currently spend subsidizing the fossil fuel industries responsible for climate change.: ix
In the +2.0 °C (global warming) Scenario total primary energy demand in 2040 can be 450 EJ = 10,755 Mtoe, or 400 EJ = 9,560 Mtoe in the +1.5 °C Scenario, well below the current production. Renewable sources can increase their share to 300 EJ in the +2.0 °C Scenario or 330 EJ in the +1.5 °C Scenario in 2040. In 2050 renewables can cover nearly all energy demand. Non-energy consumption will still include fossil fuels.: xxvii Fig. 5
Global electricity generation from renewable energy sources will reach 88% by 2040 and 100% by 2050 in the alternative scenarios. "New" renewables—mainly wind, solar and geothermal energy—will contribute 83% of the total electricity generated.: xxiv The average annual investment required between 2015 and 2050, including costs for additional power plants to produce hydrogen and synthetic fuels and for plant replacement, will be around $1.4 trillion.: 182
Shifts from domestic aviation to rail and from road to rail are needed. Passenger car use must decrease in the OECD countries (but increase in developing world regions) after 2020. The decline in passenger car use will be partly compensated by a strong increase in public transport rail and bus systems.: xxii Fig.4
CO2 emissions can fall from 32 Gt in 2015 to 7 Gt (+2.0 °C Scenario) or 2.7 Gt (+1.5 °C Scenario) in 2040, and to zero in 2050.: xxviii
== See also ==
Electric energy consumption – Worldwide consumption of electricity
Energy demand management – Modification of consumer energy usage during peak hours
Energy intensity – Measure of an economy's energy inefficiency
Energy policy – How a government or business deals with energy
Sustainable energy – Energy that responsibly meets social, economic, and environmental needs
World Energy Outlook – Publication of the International Energy Agency
World energy resources – Estimated maximum capacity for energy production on Earth
Lists
List of countries by energy intensity
List of countries by electricity consumption
List of countries by electricity production
List of countries by energy consumption per capita
List of countries by greenhouse gas emissions
List of countries by energy consumption and production
== Notes ==
== References ==
== External links ==
Enerdata - World Energy & Climate Statistics
International Energy Outlook, by the U.S. Energy Information Administration
World Energy Outlook from the IEA | Wikipedia/Global_energy_consumption |
Energy Savings Performance Contracts (ESPCs), also known as Energy Performance Contracts, are an alternative financing mechanism authorized by the United States Congress, designed to accelerate investment in cost-effective energy conservation measures in existing Federal buildings. ESPCs allow Federal agencies to accomplish energy savings projects without up-front capital costs and without special Congressional appropriations. The Energy Policy Act of 1992 (EPACT 1992) authorized Federal agencies to use private sector financing to implement energy conservation methods and energy efficiency technologies.
An ESPC is a partnership between a Federal agency and an energy service company (ESCO). The ESCO conducts a comprehensive energy audit for the Federal facility and identifies improvements to save energy. In consultation with the Federal agency, the ESCO designs and constructs a project that meets the agency's needs and arranges the necessary financing. The ESCO guarantees that the improvements will generate energy cost savings sufficient to pay for the project over the term of the contract. After the contract ends, all additional cost savings accrue to the agency. The savings must be guaranteed and the Federal agencies may enter into a multiyear contract for a period not to exceed 25 years.
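As a rough illustration of the arrangement (hypothetical numbers, not an actual contract), the guaranteed annual savings must at least cover the annual payment on the ESCO-arranged financing over the contract term:

```python
# Hypothetical ESPC cash-flow check: do guaranteed savings cover the financing?

def annual_payment(principal: float, rate: float, years: int) -> float:
    """Standard annuity payment for the ESCO-arranged financing."""
    return principal * rate / (1 - (1 + rate) ** -years)

project_cost = 5_000_000      # assumed installed cost of the measures
interest_rate = 0.05          # assumed financing rate
term_years = 20               # within the 25-year statutory maximum
guaranteed_savings = 450_000  # assumed guaranteed annual energy cost savings

payment = annual_payment(project_cost, interest_rate, term_years)
print(f"annual payment: ${payment:,.0f}")          # ~ $401,000
print("savings cover payment:", guaranteed_savings >= payment)
```

Once the contract ends, the full annual savings accrue to the agency, as described above.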
== Federal policies ==
Energy Savings Performance Contracts are regulations created by the Federal Energy Management Program (FEMP) of the United States Department of Energy (DOE) as required by the Energy Policy Act of 1992. The final DOE ruling came into effect on May 10, 1995. The use of ESPCs by Federal agencies was reauthorized in the Energy Policy Act of 2005 (EPACT 2005) through the end of Fiscal Year (FY) 2016 and permanently reauthorized in The Energy Independence and Security Act of 2007 (EISA).
Energy Performance Contracts are also used extensively in the US Department of Housing & Urban Development's (HUD's) Public Housing Program as a means of reducing utility costs. Unlike federal ESPCs, Public Housing ESPCs are projects approved by HUD and implemented by state-chartered Public Housing Authorities (PHAs), with or without the assistance of an ESCO. Because PHAs are legally authorized to carry debt, ESCOs involved in the Public Housing EPC process typically do not need to provide financing to the project, but rather are simply providers of architectural/engineering services.
== Impacts of ESPCs ==
As of March 2010, more than 550 ESPC projects worth $3.6 billion had been awarded to 25 Federal agencies and organizations in 49 states and the District of Columbia (D.C.). These projects saved an estimated 30.2 trillion BTU annually, equivalent to the energy consumed by about 318,300 homes, and $11 billion in energy costs, of which $9.6 billion went to fund the energy efficiency projects and $1.4 billion was reduced Federal Government spending.
The initial program was started by John Rogers, working for the Naval Facilities Engineering Command. Actual implementation did not begin until Congress passed the required legislation.
== U.S. Department of Energy ESPCs ==
United States Department of Energy (DOE) energy savings performance contracts are indefinite delivery/indefinite quantity (IDIQ) contracts designed to make ESPCs as practical and cost-effective as possible for Federal agencies. The Department of Energy awarded these "umbrella" contracts to ESCOs based on their ability to meet terms and conditions established in IDIQ contracts. DOE ESPCs can be used for any federally owned facility worldwide.
=== Benefits ===
DOE energy savings performance contracts help Federal agencies meet energy efficiency, renewable energy, water conservation, and emissions reduction goals by streamlining contract funding for energy management projects. The streamlined financing provides multiple benefits, including:
Increased quality and value through:
Access to private-sector expertise in energy efficiency, renewable energy, water conservation, and reduced emissions
Built-in incentives for ESCOs to provide high-quality equipment, timely services, and thorough project commissioning
Infrastructure improvements to enhance mission support
Healthier, safer working and living environments
Flexible, practical contract and procurement processes that ensure "your project, your way"
Expert, objective technical support through FEMP assistance, including:
FEMP-provided legal and financing guidance, project facilitators, advanced technology experts, and training for Federal agencies
Smart project management that:
Ensures building efficiency improvements and new equipment without upfront capital costs
Finances energy improvements without relying on special Congressional appropriations
Guarantees energy and related operation and maintenance cost savings
Enhances the ability to plan and budget energy, operation, and maintenance accounts
Minimizes vulnerability to budget impacts due to volatile energy prices, weather, and equipment failure
== Cost ==
The FEMP ESPC program costs about $10 million annually to administer and an additional $1 million annually to monitor contract performance. On average, FEMP spends about $500,000 to develop each contract, effectively subsidizing the ESCOs' project development. Additionally, with interest charges accruing over contract terms of as long as 25 years, financed projects ultimately cost considerably more than they would if Congress appropriated funding for them directly rather than financing them over long periods.
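A simple comparison with hypothetical numbers shows why long-term financing raises the total cost relative to a direct appropriation:

```python
# Hypothetical comparison: appropriated funding vs. 25-year ESPC financing.

def annual_payment(principal: float, rate: float, years: int) -> float:
    """Standard annuity payment on the financed amount."""
    return principal * rate / (1 - (1 + rate) ** -years)

project_cost = 10_000_000   # assumed project cost
rate = 0.06                 # assumed financing interest rate
years = 25                  # maximum ESPC contract term

total_financed = annual_payment(project_cost, rate, years) * years
print(f"appropriated up front: ${project_cost:,.0f}")
print(f"financed over {years} years: ${total_financed:,.0f}")  # ~2x the cost
```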
== Performance Issues ==
FEMP has identified a number of ways to monitor the performance of ESPCs during the life of the contracts. Initially, the vast majority of contracts were based on "stipulated" rather than actually measured savings. This means that the parties agree on engineering calculations at the beginning of the contract and never have to measure actual performance after that. A DOE internal study found that for many contracts it was impossible to tell whether performance goals, and thus dollar savings, were actually being achieved. After just a few years in operation, with staff turnover and other operational issues taking precedence, many of those projects were no longer being monitored.
The Federal Energy Management Program (FEMP) has since implemented a number of strategies to improve the performance of these projects and reduce risk to US Federal agencies. The current FEMP M&V Guidelines (version 4.0 as of 2024) require contractors to provide IPMVP-compliant measurement and verification activities with limited or no stipulated savings.
== U.S. Department of the Army ESPCs ==
The United States Department of the Army's use of ESPCs focuses on the reduction of energy and water consumption, with the ancillary benefits of achieving facility improvements, improving the quality of life in the Army, and ultimately reducing the overall energy costs of its installations.
== Energy Performance Contracting in Switzerland ==
In Switzerland, the swissesco business association promotes and develops EPC. The business model supports the goals of the Swiss Federal Energy Strategy 2050 by promoting energy efficiency. swissesco was established in September 2015. Its members are energy companies, engineers, and financial institutions, as well as Cantons, cities and municipalities. The association is supported by the Swiss Federal Office of Energy (SFOE) and several of the Cantons.
swissesco works on the following strategic priorities:
Establishment of a business environment that supports the development of the Swiss energy services (ESCO) market as well as concrete EPC projects
Standardisation of methodologies and processes
Information about Energy Performance Contracting
Coordination of activities with other business associations and institutions
Establishment and management of a knowledge database for the energy services market
In 2016, swissesco developed the Swiss Guide for Energy Performance Contracting with the support of the Swiss Federal Office of Energy SFOE. It is available in German and French. It explains in detail the planning, development and implementation of EPC projects in Switzerland. There have been similar efforts in other countries in the past, but they did not cover the particular legal environment for tendering procedures in Switzerland. The Swiss Guide is comparable to the German documents of the Deutsche Energie-Agentur (dena) and of the German federal state of Hessen, as well as to the efforts of the European Energy Service Initiative (EESI). The Swiss Guide is free to download and explains how EPC works and what the do's and don'ts are. The public tender procedure is explained step by step and illustrated with infographics. The Guide also includes useful tools for the analysis and implementation of EPC projects, such as templates for contracts.
== Energy efficiency contract in Germany ==
In Germany, a model contract has been developed for small businesses and craft enterprises. The participation of civil energy cooperatives in efficiency measures will also be further developed. ESC is widespread in the private sector, such as industrial companies, hotels or large office buildings, as well as in the public sector (school buildings, administrative buildings, barracks, hospitals, etc.).
In principle, all measures in the field of building technology are possible, and in large contracts also measures on the building envelope. In most cases, the control system is replaced and connected to the central building management system to allow for monitoring. In addition, the heating boiler can be replaced, for example, or the distribution system can be updated. Switching to solar energy can also bring financial benefits in the form of lower energy bills. There are also combinations with energy supply contracting models. In both traditional stand-alone implementations and energy performance contracting projects, a predetermined economic limit determines the choice of measures. Since 2016, the realization of integrated building renovation, including building envelopes, has been demonstrated in an additionally developed energy efficiency contracting model in a research project and subsequently put into practice. This model also includes effects not related to energy savings: for example, savings in the maintenance and repair costs of replaced old systems are included in the profitability calculation, determined as part of the saved costs.
== References == | Wikipedia/Energy_Savings_Performance_Contract |
An energy audit is an inspection survey and an analysis of energy flows for energy conservation in a building. It may include a process or system to reduce the amount of energy input into the system without negatively affecting the output. In commercial and industrial real estate, an energy audit is the first step in identifying opportunities to reduce energy expense and carbon footprint.
== Principle ==
When the object of study is an occupied building then reducing energy consumption while maintaining or improving human comfort, health and safety are of primary concern. Beyond simply identifying the sources of energy use, an energy audit seeks to prioritize the energy uses according to the greatest to least cost-effective opportunities for energy savings.
== Home energy audit ==
A home energy audit is a service where the energy efficiency of a house is evaluated by a person using professional equipment (such as blower doors and infrared cameras), with the aim to suggest the best ways to improve energy efficiency in heating and cooling the house.
An energy audit of a home may involve recording various characteristics of the building envelope including the walls, ceilings, floors, doors, windows, and skylights. For each of these components the area and resistance to heat flow (R-value) is measured or estimated. The leakage rate or infiltration of air through the building envelope is of concern, which can be affected by window construction and quality of door seals such as weatherstripping. This exercise aims to quantify the building's overall thermal performance. The audit may also assess the efficiency, physical condition, and programming of mechanical systems such as the heating, ventilation, air conditioning equipment, and thermostat.
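The envelope figures collected in an audit feed directly into heat-loss estimates via the standard relation Q = A·ΔT/R. A minimal sketch in Python, in US customary units; the component areas and R-values are made-up examples, not recommendations:

```python
# Illustrative envelope heat-loss estimate from audited areas and R-values.
# Q = A * dT / R, with Q in BTU/h, A in ft^2, dT in degF, R in ft^2.degF.h/BTU.

components = {                 # example component: (area ft^2, R-value)
    "walls":   (1200, 13),
    "ceiling": (1000, 38),
    "windows": (150, 3),
    "doors":   (40, 5),
}
delta_t = 40                   # indoor-outdoor temperature difference, degF

total = sum(area * delta_t / r for area, r in components.values())
for name, (area, r) in components.items():
    print(f"{name:8s}: {area * delta_t / r:7.0f} BTU/h")
print(f"total   : {total:7.0f} BTU/h")
```

Note how the low-R windows lose nearly as much heat as the far larger wall area, which is why audits prioritize the weakest components rather than the biggest ones.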
A home energy audit may include a written report estimating energy use given local climate criteria, thermostat settings, roof overhang, and solar orientation. This could show energy use for a given time period, say a year, and the impact of any suggested improvements per year. The accuracy of energy estimates are greatly improved when the homeowner's billing history is available showing the quantities of electricity, natural gas, fuel oil, or other energy sources consumed over a one or two-year period.
Some of the greatest effects on energy use are user behavior, climate, and age of the home. An energy audit may therefore include an interview of the homeowners to understand their patterns of use over time. The energy billing history from the local utility company can be calibrated using heating degree day and cooling degree day data obtained from recent, local weather data in combination with the thermal energy model of the building. Advances in computer-based thermal modeling can take into account many variables affecting energy use.
A home energy audit is often used to identify cost effective ways to improve the comfort and efficiency of buildings. In addition, homes may qualify for energy efficiency grants from central government.
Recently, the improvement of smartphone technology has enabled homeowners to perform relatively sophisticated energy audits of their own homes. This technique has been identified as a method to accelerate energy efficiency improvements.
=== In the United States ===
In the United States, this kind of service can often be facilitated by:
Public utility companies, or their energy conservation department.
Independent, private-sector companies such as energy services company, insulation contractor, or air sealing specialist.
(US) State energy office.
Utility companies may provide this service, as well as loans and other incentives. Some public utilities offer energy audits as part of a coordinated service to plan or install home energy upgrades. Utilities may also provide incentives to switch, for example for oil customers considering a switch to natural gas.
Where to look for insulation recommendations:
Local building inspector's office.
Local or state building codes.
US Department of Energy.
Local builders associations.
Residential energy auditors are accredited by the Building Performance Institute (BPI) or the Residential Energy Services Network (RESNET).
There are also some simplified tools available, with which a homeowner can quickly assess energy improvement potential. Often these are supplied for free by state agencies or local utilities, who produce a report with estimates of usage by device/area (since they have usage information already). Examples include the Energy Trust of Oregon program and the Seattle Home Resource Profile. Such programs may also include free compact fluorescent lights.
A simple do-it-yourself home energy audit can be performed without using any specialized tools. With an attentive and planned assessment, a homeowner can spot many problems that cause energy losses and make decisions about possible energy efficiency upgrades. During a home energy audit it is important to have a checklist of areas that were inspected and problems identified. Once the audit is completed, a plan for suggested actions needs to be developed.
==== New York City ====
In New York City, local laws such as Local Law 87 require buildings larger than 50,000 square feet (4,600 m2) to have an energy audit once every ten years, scheduled by parcel number. Energy auditors must be certified to perform this work, although there is no oversight to enforce the rule. Because Local Law 87 requires a licensed Professional Engineer to oversee the work, building owners often turn to well-established engineering firms.
These laws are the results of New York City's PlaNYC to reduce energy used by buildings, which is the greatest source of pollution in New York City. Some engineering firms provide free energy audits for facilities committed to implementing the energy saving measures found.
=== In Lebanon ===
In 2002, the Lebanese Center for Energy Conservation (LCEC) initiated a nationwide program of energy audits for medium and large consuming facilities. By the end of 2008, LCEC had financed and supervised more than 100 audits.
Through this program, LCEC assists Lebanese energy-consuming tertiary and public buildings and industrial plants in the management of their energy.
The long-term objective of LCEC is to create a market for ESCOs, whereby any beneficiary can contact directly a specialized ESCO to conduct an energy audit, implement energy conservation measures and monitor energy saving program according to a standardized energy performance contract.
Currently, LCEC helps fund the energy audit study and thus links the beneficiary and the energy audit firm. LCEC also aims to create a special fund for the implementation of the energy conservation measures resulting from the study.
LCEC set a minimum standard for the ESCOs qualifications in Lebanon and published a list of qualified ESCOs on its website.
== Industrial energy audits ==
Industrial energy audits have become far more common over the last several decades, as the need to curb increasingly expensive energy costs and to move towards a sustainable future has made them greatly important. Their importance is magnified because energy spending is a major expense for industrial companies (energy spending accounts for roughly 10% of the average manufacturer's expenses). This trend should only continue as energy costs continue to rise.
While the overall concept is similar to a home or residential energy audit, industrial energy audits require a different skillset. Weatherproofing and insulating a house are the main focus of residential energy audits. For industrial applications, it is the HVAC, lighting, and production equipment that use the most energy, and hence are the primary focus of energy audits.
== Types of energy audit ==
The term energy audit is commonly used to describe a broad spectrum of energy studies ranging from a quick walk-through of a facility to identify major problem areas to a comprehensive analysis of the implications of alternative energy efficiency measures sufficient to satisfy the financial criteria of sophisticated investors.
Numerous audit procedures have been developed for non-residential (tertiary) buildings (ASHRAE; IEA-EBC Annex 11; Krarti, 2000). An audit is required to identify the most efficient and cost-effective Energy Conservation Opportunities (ECOs) or Measures (ECMs). Energy conservation opportunities (or measures) can consist of more efficient use of the existing installation or of its partial or complete replacement.
According to the audit methodologies developed in IEA EBC Annex 11, by ASHRAE and by Krarti (2000), the main components of an audit process are:
The analysis of building and utility data, including study of the installed equipment and analysis of energy bills;
The survey of the real operating conditions;
The understanding of the building behaviour and of the interactions with weather, occupancy and operating schedules;
The selection and the evaluation of energy conservation measures;
The estimation of energy saving potential;
The identification of customer concerns and needs.
Common types/levels of energy audits are distinguished below, although the actual tasks performed and level of effort may vary with the consultant providing services under these broad headings. The only way to ensure that a proposed audit will meet specific needs is to spell out those requirements in a detailed scope of work. Taking the time to prepare a formal solicitation will also assure the building owner of receiving competitive and comparable proposals.
Generally, four levels of analysis can be outlined (ASHRAE):
Level 0 – Benchmarking: This first analysis consists of a preliminary Whole Building Energy Use (WBEU) analysis based on the historic utility use and costs and on a comparison of the performance of the building to that of similar buildings. This benchmarking of the studied installation determines whether further analysis is required;
Level I – Walk-through audit: A preliminary analysis made to assess building energy efficiency and to identify not only simple and low-cost improvements but also a list of energy conservation measures (ECMs, or energy conservation opportunities, ECOs) to orient the future detailed audit. This inspection is based on visual verification, study of installed equipment and operating data, and detailed analysis of recorded energy consumption collected during the benchmarking phase;
Level II – Detailed/General energy audit: Based on the results of the pre-audit, this type of energy audit consists of an energy use survey providing a comprehensive analysis of the studied installation: a more detailed analysis of the facility, a breakdown of the energy use, and a first quantitative evaluation of the ECOs/ECMs selected to correct the defects or improve the existing installation. This level of analysis can involve advanced on-site measurements and sophisticated computer-based simulation tools to precisely evaluate the selected energy retrofits;
Level III – Investment-grade audit: A detailed analysis of capital-intensive modifications, focusing on potentially costly ECOs that require rigorous engineering study.
=== Benchmarking ===
It is impossible to describe every situation that might be encountered during an audit, so it is necessary to find a way of describing what constitutes good, average and bad energy performance across a range of situations. Benchmarking serves this purpose. It mainly consists of comparing the measured consumption with the reference consumption of other similar buildings, or with consumption generated by simulation tools, to identify excessive or unacceptable running costs. As mentioned before, benchmarking is also necessary to identify buildings presenting interesting energy saving potential.
An important issue in benchmarking is the use of performance indices to characterize the building.
These indexes can be:
Comfort indexes, comparing the actual comfort conditions to the comfort requirements;
Energy indexes, consisting of energy demands divided by heated/conditioned area, allowing comparison with reference values of the indexes coming from regulation or similar buildings;
Energy demands, directly compared to “reference” energy demands generated by means of simulation tools.
Typically, benchmarks are established based on the energy outlets (loads) within the building and are then further parsed into "base loads" and "weather-sensitive loads". These are established through a simple regression analysis of energy consumption and demand (if metered) against weather (temperature and degree-day) data during the period for which utility data is available. Aggregate base loads appear as the intercept of this regression, while the slope typically represents the combination of building envelope conduction and infiltration losses, less losses or gains from the base loads themselves. For example, while lighting is typically a base load, the heat generated by that lighting must be subtracted from the weather-sensitive cooling load derived from the slope to gain an accurate picture of the true contribution of the building envelope to cooling energy use and demand.
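A minimal sketch of that regression (illustrative Python with made-up monthly data): the intercept approximates the base load and the slope the weather-sensitive load per degree-day.

```python
import numpy as np

# Made-up monthly data: heating degree-days and metered energy use (kWh).
degree_days = np.array([50, 150, 400, 700, 900, 800, 500, 200, 60, 10, 5, 30])
energy_kwh = np.array([2100, 2450, 3300, 4300, 4950, 4600, 3650, 2600,
                       2150, 2000, 1980, 2080])

# Simple linear regression: energy = base_load + slope * degree_days.
slope, intercept = np.polyfit(degree_days, energy_kwh, 1)

print(f"base load (intercept): {intercept:.0f} kWh/month")
print(f"weather-sensitive load (slope): {slope:.2f} kWh per degree-day")
```

In a real audit the slope would then be corrected for internal gains from the base loads themselves, as noted above.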
=== Walk-through or preliminary audit ===
The preliminary audit (alternatively called a simple audit, screening audit or walk-through audit) is the simplest and quickest type of audit. It involves minimal interviews with site-operating personnel, a brief review of facility utility bills and other operating data, and a walk-through of the facility to become familiar with the building operation and to identify any glaring areas of energy waste or inefficiency.
Typically, only major problem areas will be covered during this type of audit. Corrective measures are briefly described, and quick estimates of implementation cost, potential operating cost savings, and simple payback periods are provided. A list of energy conservation measures (ECMs, or energy conservation opportunities, ECOs) requiring further consideration is also provided.
This level of detail, while not sufficient for reaching a final decision on implementing proposed measures, is adequate to prioritize energy-efficiency projects and to determine the need for a more detailed audit.
=== General audit ===
The general audit (alternatively called a mini-audit, site energy audit or detailed energy audit or complete site energy audit) expands on the preliminary audit described above by collecting more detailed information about facility operation and by performing a more detailed evaluation of energy conservation measures. Utility bills are collected for a 12- to 36-month period to allow the auditor to evaluate the facility's energy demand rate structures and energy usage profiles. If interval meter data is available, the detailed energy profiles that such data makes possible will typically be analyzed for signs of energy waste. Additional metering of specific energy-consuming systems is often performed to supplement utility data. In-depth interviews with facility operating personnel are conducted to provide a better understanding of major energy consuming systems and to gain insight into short- and longer-term energy consumption patterns.
This type of audit will be able to identify all energy-conservation measures appropriate for the facility, given its operating parameters. A detailed financial analysis is performed for each measure based on detailed implementation cost estimates, site-specific operating cost savings, and the customer's investment criteria. Sufficient detail is provided to justify project implementation.
The evolution of cloud-based energy auditing software platforms is enabling the managers of commercial buildings to collaborate with general and specialty trades contractors in performing general and energy system-specific audits. The benefit of software-enabled collaboration is the ability to identify the full range of energy efficiency options that may be applicable to the specific building under study with "live time" cost and benefit estimates supplied by local contractors.
=== Investment-grade audit ===
In most corporate settings, upgrades to a facility's energy infrastructure must compete for capital funding with non-energy-related investments. Both energy and non-energy investments are rated on a single set of financial criteria that generally stress the expected return on investment (ROI). The projected operating savings from the implementation of energy projects must be developed such that they provide a high level of confidence. In fact, investors often demand guaranteed savings.
The investment-grade audit expands on the detailed audit described above and relies on a complete engineering study to detail the technical and economic issues necessary to justify the investment in the proposed transformations.
== Simulation-based energy audit procedure for non-residential buildings ==
A complete audit procedure, very similar to the ones proposed by ASHRAE and Krarti (2000), has been proposed in the framework of the AUDITAC and HARMONAC projects to help in the implementation of the EPB ("Energy Performance of Buildings") directive in Europe and to fit the current European market.
The following procedure proposes to make an intensive use of modern BES tools at each step of the audit process, from benchmarking to detailed audit and financial study:
Benchmarking stage: Normalization is ordinarily required to allow comparison between data recorded on the studied installation and reference values deduced from case studies or statistics. Using simulation models to perform a code-compliant simulation of the installation under study allows the installation to be assessed directly, without any normalization. Indeed, a simulation-based benchmarking tool provides an individual normalization, avoiding the need for size and climate normalization.
Preliminary audit stage: Global monthly consumption figures are generally insufficient for an accurate understanding of the building's behaviour. Even if the analysis of the energy bills does not allow the different energy consumers present in the facility to be identified with accuracy, the consumption records can be used to calibrate building and system simulation models. To assess the existing system and to correctly simulate the building's thermal behaviour, the simulation model has to be calibrated against the studied installation. The iterations needed to calibrate the model can also be fully integrated into the audit process and help in identifying required measurements and critical issues.
Detailed audit stage: At this stage, on-site measurements, sub-metering and monitoring data are used to refine the calibration of the BES tool. Extensive attention is given to understanding not only the operating characteristics of all energy consuming systems, but also situations that cause load profile variations on short and longer term bases (e.g. daily, weekly, monthly, annual). When the calibration criteria are satisfied, the savings related to the selected ECOs/ECMs can be quantified.
Investment-grade audit stage: At this stage, the results provided by the calibrated BES tool can be used to assess the selected ECOs/ECMs and orient the detailed engineering study.
== Specific audit techniques ==
=== Infrared thermography audit ===
The advent of high-resolution thermography has enabled inspectors to identify potential issues within the building envelope by taking thermal images of the various surfaces of a building. For the purposes of an energy audit, the thermographer analyzes the patterns within the surface temperatures to identify heat transfer through convection, radiation, or conduction. Note that thermography only identifies surface temperatures; analysis must be applied to determine the reasons for the patterns within them. Thermal analysis of a home generally costs between $300 and $600.
For those who cannot afford a thermal inspection, it is possible to get a general feel for the heat loss with a non-contact infrared thermometer and several sheets of reflective insulation. The method involves measuring the temperatures on the inside surfaces of several exterior walls to establish baseline temperatures. After this, reflective barrier insulation is taped securely to the walls in 8-foot (2.4 m) by 1.5-foot (0.46 m) strips, and the temperatures are measured in the center of the insulated areas at 1-hour intervals for 12 hours (the reflective barrier is pulled away from the wall to measure the temperature in the center of the area it has covered). This is best done when the temperature differential (delta T) between the inside and outside of the structure is at least 40 degrees. A well-insulated wall will commonly change approximately 1 degree per hour when the difference between external and internal temperatures averages 40 degrees. A poorly insulated wall can drop as much as 10 degrees in an hour.
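The readings from this do-it-yourself test reduce to a single degrees-per-hour figure that can be compared against the rule of thumb above. A minimal sketch, assuming the roughly 40-degree differential the text calls for:

```python
# Rule-of-thumb interpretation of the DIY reflective-insulation test above.
# Assumes the ~40-degree indoor/outdoor differential mentioned in the text.

def classify_wall(hourly_temps: list[float]) -> str:
    """Average temperature change per hour behind the reflective barrier."""
    hours = len(hourly_temps) - 1
    drop_per_hour = (hourly_temps[0] - hourly_temps[-1]) / hours
    if drop_per_hour <= 1.0:
        return f"{drop_per_hour:.1f} deg/h: consistent with good insulation"
    return f"{drop_per_hour:.1f} deg/h: suggests poor insulation"

print(classify_wall([68.0, 67.2, 66.4, 65.5]))  # ~0.8 deg/h -> good
print(classify_wall([68.0, 59.0, 51.0, 44.0]))  # ~8.0 deg/h -> poor
```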
=== Pollution audits ===
With increases in emissions of carbon dioxide and other greenhouse gases, pollution audits are now a prominent factor in most energy audits. Implementing energy-efficient technologies helps prevent utility-generated pollution.
Online pollution and emission calculators can help approximate the emissions of other prominent air pollutants in addition to carbon dioxide.
Pollution audits generally take electricity and heating fuel consumption numbers over a two-year period and provide approximations for carbon dioxide, VOCs, nitrous oxides, carbon monoxide, sulfur dioxide, mercury, cadmium, lead, mercury compounds, cadmium compounds and lead compounds.
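Such an audit amounts to multiplying metered consumption by per-unit emission factors. The sketch below uses assumed factors for illustration only; real audits apply factors matched to the local utility's fuel mix and cover the additional pollutants listed above.

```python
# Illustrative CO2 estimate from two years of utility data.
# The emission factors below are assumptions; actual factors depend on
# the local utility's fuel mix.

CO2_PER_KWH = 0.4        # kg CO2 per kWh of electricity (assumed grid average)
CO2_PER_THERM = 5.3      # kg CO2 per therm of natural gas (assumed)

electricity_kwh = [11_500, 11_900]   # annual consumption over a two-year period
gas_therms = [620, 650]

co2_kg = sum(electricity_kwh) * CO2_PER_KWH + sum(gas_therms) * CO2_PER_THERM
print(f"approx. CO2 over two years: {co2_kg / 1000:.1f} tonnes")
```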
== History ==
Energy audits initially became popular in response to the energy crisis of 1973 and later years. Interest in energy audits has recently increased as a result of growing understanding of human impact upon global warming and climate change. Energy audits are also popular due to financial incentives for homeowners.
== Building energy rating systems ==
Australia – House Energy Rating
Canada – EnerGuide
UK – National Home Energy Rating, Standard Assessment Procedure, Energy Performance Certificate
US – Home energy rating, Energy Star
== See also ==
ASHRAE
Efficient energy use
Energy efficiency implementation
Energy law
Energy recovery
Heat recovery ventilation
Home performance
Pinch analysis
Utility bill audit
== References ==
== Further reading ==
Wulfinghoff, Donald. (2000). Energy Efficiency Manual. Energy Institute Press. ISBN 0-9657926-7-6
Clark, William. (1998). Retrofitting for Energy Conservation. McGraw-Hill. ISBN 0-07-011920-1
Thumann, Albert. (2012). Handbook of Energy Audits. 9th Edition. The Fairmont Press. ISBN 0-88173-685-6
Krarti, M. (2000). Energy audit of building systems: an engineering approach. CRC Press. ISBN 0-8493-9587-9
== External links ==
Energy Star: Home Energy Audits
US Department of Energy: Energy Efficiency | Wikipedia/Energy_audit |
Energy Star (trademarked ENERGY STAR) is an energy-efficiency program established in 1992. It is administered by the U.S. Environmental Protection Agency (EPA) in partnership with the U.S. Department of Energy (DOE). The EPA establishes energy efficiency specifications, and those that meet these specifications are eligible to display the ENERGY STAR logo.
More than 75 product categories are eligible for the ENERGY STAR label, including appliances, electronics, lighting, heating and cooling systems, and commercial equipment such as food service products. In the United States, the ENERGY STAR label often appears with the EnergyGuide label of eligible appliances to highlight energy-efficient products and compare energy use and operating costs.
One of the most successful voluntary initiatives introduced by the U.S. government, the program has saved 5 trillion kilowatt-hours of electricity, more than US$500 billion in energy costs, and prevented 4 billion metric tons of greenhouse gas emissions.
Elements of the ENERGY STAR program are implemented in Canada, Japan, and Switzerland. In 2018, a 15-year long agreement with the European Union expired. A previous agreement with the European Free Trade Association also ended.
== History ==
The Energy Star program was established by the Environmental Protection Agency in 1992 and operates under the authority of the Clean Air Act, section 103(g), and the 2005 Energy Policy Act, section 131 (which amended the Energy Policy and Conservation Act, section 324). Since 1992, Energy Star and its partners are estimated to have reduced various energy bills by at least $430 billion.
The EPA manages Energy Star products, as well as home and commercial/industrial programs. The EPA develops and manages Energy Star Portfolio Manager, an online energy tracking and benchmarking tool for commercial buildings. The DOE manages Home Performance with Energy Star and provides technical support, including test procedure development for products and some verification testing of products.
Initiated as a voluntary labeling program designed to identify and promote energy-efficient products, Energy Star began with labels for computers and their peripherals. In 1995 the program was significantly expanded, introducing labels for residential heating and cooling systems and new homes. In 2000, the Consortium for Energy Efficiency was directed by members to begin an annual survey of Energy Star impact.
According to the U.S. Energy and Employment Report for 2016, 290,000 American workers are involved in the manufacture of Energy Star certified products and building materials. The report also projects that employment in energy efficiency will grow much faster than other areas of the energy sector—9 percent in 2017 vs. average projected growth of 5 percent across all of the energy sector—and that Energy Star will be an integral part of that market.
In May 2025, leaks from a meeting on the ongoing reorganization of the EPA revealed plans to close the branches managing the Energy Star program. CNN journalists were unable to obtain clarification on the matter from the agency. Democrat Jeanne Shaheen and Republican Susan Collins, journalists, and multiple institutions see such a change as having a hugely negative impact on household finances. According to the critical voices, the program provides US citizens with $40 billion in energy savings each year, while costing the government $32 million.
== Specifications ==
Energy Star specifications differ with each item, and are set by the EPA.
=== Computers ===
Energy Star 4.0 specifications for computers became effective on July 20, 2007. The requirements are more stringent than the previous specification and existing equipment designs can no longer use the service mark unless re-qualified. They require the use of 80 Plus Bronze level or higher power supplies. Energy Star 5.0 became effective on July 1, 2009. Energy Star 6.1 became effective on September 10, 2014. Energy Star 7.1 became effective on November 16, 2018. The Version 8.0 specification for computers was finalized on October 15, 2019 and became effective on October 15, 2020.
=== Servers ===
The EPA released Version 1.0 of the Computer Server specifications on May 15, 2009. It covered standalone servers with one to four processor sockets. A second tier to the specification, adding active-state power and performance reporting for all qualified servers as well as blade and multi-node server idle-state requirements, became effective December 16, 2013. The Version 2.0 Energy Star specification for Computer Servers came into effect on December 16, 2013. The Version 3.0 Energy Star specification for Enterprise Servers came into effect on June 17, 2019.
=== Appliances ===
As of early 2008, refrigerators must achieve at least 20% energy savings over the minimum federal standard to qualify, and dishwashers at least 41%. Most appliances as well as heating and cooling systems have a yellow EnergyGuide label showing the annual cost of operation compared to other models. This label is created through the Federal Trade Commission and often shows if an appliance is Energy Star rated. While an Energy Star label indicates that the appliance is more energy efficient than the minimum guidelines, purchasing an Energy Star labeled product does not always mean one is getting the most energy efficient option available. For example, dehumidifiers that are rated under 25 US pints (12 L) per day of water extraction receive an Energy Star rating if they have an energy factor of 1.2 or higher (higher is better), while those rated 25 US pints (12 L) to 35 US pints (17 L) per day receive an Energy Star rating for an energy factor of 1.4 or higher. Thus a higher-capacity but non-Energy Star rated dehumidifier may be a more energy efficient alternative than an Energy Star rated but lower-capacity model. The Energy Star program's savings calculator has also been criticized for unrealistic assumptions in its model that tend to magnify savings benefits to the average consumer.
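The energy-factor arithmetic behind that dehumidifier comparison can be made concrete. The following Python sketch is illustrative only; it is not an Energy Star tool, and the function name and the assumption that the energy factor is expressed in liters of water removed per kilowatt-hour are ours:

def kwh_to_remove(liters_of_water: float, energy_factor: float) -> float:
    """Electricity (kWh) needed to remove a given volume of water,
    assuming energy factor = liters removed per kWh (higher is better)."""
    return liters_of_water / energy_factor

# Thresholds quoted above: a small unit qualifies at EF 1.2,
# a 25-35 US pint/day unit qualifies at EF 1.4.
small_unit_ef = 1.2
large_unit_ef = 1.4

water = 100.0  # liters of water to remove (arbitrary example)
print(kwh_to_remove(water, small_unit_ef))  # ~83.3 kWh
print(kwh_to_remove(water, large_unit_ef))  # ~71.4 kWh
# A non-qualifying large unit with EF 1.3 still beats the qualifying
# small unit (EF 1.2), illustrating the point made above.
print(kwh_to_remove(water, 1.3))            # ~76.9 kWh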
Another factor yet to be considered by the EPA and DOE is the overall effect of energy-saving requirements on the durability and expected service life of a mass-market appliance built to a consumer-level cost standard. For example, a refrigerator may be made more efficient by the use of more insulative spacing and a smaller-capacity compressor using electronics to control operation and temperature. However, this may come at the cost of reduced interior storage (or increased exterior mass) or a reduced service life due to compressor or electronic failures. In particular, electronic controls used on new-generation appliances are subject to damage from shock, vibration, moisture, or power spikes on the electrical circuit to which they are attached. Critics have pointed out that even if a new appliance is energy efficient, any consumer appliance that does not provide customer satisfaction, or must be replaced twice as often as its predecessor contributes to landfill pollution and waste of natural resources used to construct its replacement.
=== Heating and cooling systems ===
Energy Star qualified heat pumps, boilers, air conditioning systems, and furnaces are available. In addition, cooling and heating bills can be significantly lowered with air sealing and duct sealing. Air sealing reduces the outdoor air that penetrates a building, and duct sealing prevents attic or basement air from entering ducts and lessening the heating/cooling system’s efficiency. Energy Star qualified room air conditioners are at least 10% more energy efficient than the minimum U.S. federal government standards.
=== Home electronics ===
Energy Star qualified televisions use 30% less energy than average. In November 2008, television specifications were improved to limit on-mode power use, in addition to standby power which is limited by the current specifications. Standby power consumption for televisions must be 3 watts or less. A wider range of Energy Star qualified televisions will be available.
Other qualified home electronics include cordless phones, battery chargers, VCRs and external power adapters, most of which use 90% less energy.
=== Imaging equipment ===
The Energy Star Program Requirements for Imaging Products are focused on product families such as electrophotographic (EP) printers, inkjet printers (e.g., thermal), copiers, facsimile machines and other imaging equipment, including MFDs (multifunctional devices). Typical Electrical Consumption (TEC) of a product family is measured and reported against an allowance set by the maximum throughput of the device. Operation modes (OM) are measured and reported for devices such as inkjet products against an allowance set by the functions present in the EUT (equipment under test). Devices that include "adders" such as Ethernet, on-board memory, or wireless have these mathematically added to increase the OM allowance. On February 1, 2011, the EPA/DOE added the requirement that all products registered under the Energy Star service mark must be tested by an AB (Accredited Body) or CB (Certification Body) laboratory.
=== Lighting ===
The Energy Star is awarded to only certain bulbs that meet strict efficiency, quality, and lifetime criteria. Energy Star qualified fluorescent lighting uses 75% less energy and lasts up to ten times longer than normal incandescent lights.
Energy Star Qualified light-emitting diode (LED) Lighting:
Reduces energy costs — uses only 20–25% of the electricity that incandescent bulbs use and lasts up to 25 times as long. LEDs use 25–30% of the energy of halogen incandescent bulbs and last 8–25 times as long.
Reduces cooling costs — LEDs produce very little heat.
To qualify for Energy Star certification, LED lighting products must pass a variety of tests to prove that the products will display the following characteristics:
Brightness is equal to or greater than existing lighting technologies (incandescent or fluorescent) and light is well distributed over the area lighted by the fixture.
Light output remains constant over time, only decreasing towards the end of the rated lifetime (at least 35,000 hours or 12 years based on use of 8 hours per day).
Excellent color quality. The shade of white light appears clear and consistent over time.
Efficiency is as good as or better than fluorescent lighting.
Light comes on instantly when turned on.
No flicker when dimmed.
No off-state power draw. The fixture does not use power when it is turned off, with the exception of external controls, whose power should not exceed 0.5 watts in the off state.
=== New Homes ===
New homes or apartments that earn the Energy Star label have been verified to meet energy efficiency requirements set by U.S. EPA. Energy Star certified homes are at least 10% more efficient than homes built to code and achieve a 20% improvement on average, while providing homeowners with better quality, performance, and comfort. Nearly 1.9 million Energy Star certified homes and apartments have been certified to date. These high-performing homes can be found across the U.S. and include a complete thermal enclosure system, a high-efficiency heating, ventilation and cooling system, a comprehensive water management system, and energy-efficient lighting and appliances. Together, U.S. homeowners living in certified homes saved $360 million on their energy bills in 2016 alone. In 2020, ENERGY STAR separated single-family and multifamily construction types into their own programs: Single-Family New Construction (SFNC) and Multifamily New Construction (MFNC).
A new tier of ENERGY STAR certification, called the ENERGY STAR NextGen Certified Homes and Apartments, will be launched in 2023. This new certification uses a baseline of the ENERGY STAR Single-Family and Multifamily certification, with additional requirements such as heat pump water heaters and EV-ready charging capabilities.
== Energy performance ratings ==
The Energy Star program has developed energy performance rating systems for several commercial and institutional building types and manufacturing facilities. These ratings, on a scale of 1 to 100, provide a means for benchmarking the energy efficiency of specific buildings and industrial plants against the energy performance of similar facilities. The ratings are used by building and energy managers to evaluate the energy performance of existing buildings and industrial plants. The rating systems are also used by EPA to determine if a building or plant can qualify to earn Energy Star recognition. In 2020 Energy Star released an updated guide for verifying Energy Star certifications.
Energy Star ratings have been compared to other clean energy rating systems and green building certification systems such as those by independent firms like MiQ, or LEED certifications for office buildings.
=== Buildings ===
The number of space types that can receive the energy performance rating in Portfolio Manager is expanding and now includes housing, bank/financial institutions, courthouses, hospitals (acute care and children's), hotels and motels, houses of worship, K-12 schools, medical offices, offices, residence halls/dormitories, retail stores, supermarkets, warehouses (refrigerated and non-refrigerated), hotels (see hotel energy management), data centers, senior care facilities, and wastewater facilities.
Technical descriptions of the models used in the rating system are available from Energy Star. These documents provide detailed information on the methodologies used to create the energy performance ratings, including details on rating objectives, regression techniques, and the steps applied to compute a rating. Energy Star maintains a 1–100 national benchmarking rating for buildings, computed from category-dependent building attributes such as floor area and occupancy, together with energy consumption data entered into a free online tool provided by Energy Star.
Energy Star energy performance ratings have been incorporated into some green buildings standards, such as LEED for Existing Buildings. In the U.S., builders of energy efficient homes are able to qualify for Federal Income tax credits and deductions.
Energy Star estimated in 2020 that energy use in commercial buildings accounts for 20% of greenhouse gas emissions, costing more than $100B per year.
=== Industrial facilities ===
Some examples of specialised industrial facilities which Energy Star has designed specific performance ratings for include:
Automobile assembly plants (see automotive industry)
Cement plants
Corn mills
Container glass manufacturing
Flat glass manufacturing
Potato processing plants
Juice processing
Petroleum refineries
Pharmaceutical manufacturing plants
== Small business award ==
The U.S. Environmental Protection Agency (EPA) annually recognizes small businesses that demonstrate abilities to reduce waste, conserve energy, and recycle. The businesses use resources and ideas outlined in the Energy Star program. The award was established in 1999.
== Controversies ==
In March 2010, the Government Accountability Office (GAO) performed covert testing of the Energy Star product certification process and found that Energy Star was for the most part a self-certification program that was vulnerable to fraud and abuse. While the GAO demonstrated, by submitting fake products from made-up companies, that cheating was possible, they found no evidence of consumer fraud relating to the quality or performance of Energy Star qualified products.
In response, the Environmental Protection Agency instituted third-party certification of all Energy Star products starting in 2011. Under this regime, products are tested in an EPA-recognized laboratory and reviewed by an EPA-recognized certification body before they can carry the label. In order to be recognized, labs and certification bodies must meet specified criteria and be subject to oversight by a recognized accreditation body. In addition, a percentage of Energy Star certified product models in each category are subject to off-the-shelf verification testing each year.
As of 2017, there are 23 independent certification bodies and 255 independent laboratories recognized for purposes of Energy Star product certification and testing. Most cover multiple product types. In 2016, 1,881 product models were subject to verification testing with an overall compliance rate of 95%.
In March 2017 the Trump Administration proposed a budget that would eliminate the program. This prompted an outpouring of expressions of support for the Energy Star program from environmental groups, energy efficiency advocates, and businesses.
== Adoption in building codes ==
The current and projected status of energy codes and standards adoption is shown in the maps at the link.
The following cities have mandatory reporting requirements.
Atlanta, GA
Austin, TX
Boston, MA
Minneapolis, MN
New York, NY
Philadelphia, PA
San Francisco, CA
Seattle, WA
Washington, DC
== See also ==
ASUE (Germany)
Bureau of Energy Efficiency (India)
Energy performance certificate
EnerWorks
European Union energy label
Green computing
Green energy
Nationwide House Energy Rating Scheme (Australia)
Miscellaneous electric load
One Watt Initiative
Plug load
Power management
Weatherization
== References ==
== External links ==
Energy Star
Energy Star Australia
Energy Star Canada
Energy Consumption Calculator
Energy Star entry at Ecolabelling.org
Energy Efficiency Breakdown of the costs, savings, and energy efficiency of Energy Star appliances
Energy Star qualified Energy Service & Product Providers list
EPA recognized Certification Bodies (CBs) and Laboratories
Energy Star 5.0 Computer specification (November 14, 2008)
10 CFR 430, Subpart B, Appendix A to Subpart B of Part 430 – Uniform Test Method for Measuring the Energy Consumption of Electric Refrigerators and Electric Refrigerator–Freezers | Wikipedia/Energy_Star |
In mathematics, the support function hA of a non-empty closed convex set A in {\displaystyle \mathbb {R} ^{n}} describes the (signed) distances of supporting hyperplanes of A from the origin. The support function is a convex function on {\displaystyle \mathbb {R} ^{n}}.
Any non-empty closed convex set A is uniquely determined by hA. Furthermore, the support function, as a function of the set A, is compatible with many natural geometric operations, like scaling, translation, rotation and Minkowski addition.
Due to these properties, the support function is one of the most central basic concepts in convex geometry.
== Definition ==
The support function {\displaystyle h_{A}\colon \mathbb {R} ^{n}\to \mathbb {R} } of a non-empty closed convex set A in {\displaystyle \mathbb {R} ^{n}} is given by
{\displaystyle h_{A}(x)=\sup\{x\cdot a:a\in A\},\qquad x\in \mathbb {R} ^{n}.}
Its interpretation is most intuitive when x is a unit vector:
by definition, A is contained in the closed half space
{\displaystyle \{y\in \mathbb {R} ^{n}:y\cdot x\leqslant h_{A}(x)\}}
and there is at least one point of A in the boundary
{\displaystyle H(x)=\{y\in \mathbb {R} ^{n}:y\cdot x=h_{A}(x)\}}
of this half space. The hyperplane H(x) is therefore called a supporting hyperplane with exterior (or outer) unit normal vector x. The word exterior is important here, as the orientation of x plays a role: the set H(x) is in general different from H(−x). Now hA(x) is the (signed) distance of H(x) from the origin.
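For a set given as the convex hull of finitely many points, the supremum in the definition is attained at one of those points, so the support function reduces to a maximum of dot products. The following Python sketch illustrates this special case (the function name is ours, not a standard library API):

import numpy as np

def support_function(points: np.ndarray, x: np.ndarray) -> float:
    """h_A(x) = max over a in A of x . a, where A is the convex hull
    of `points` (the maximum over a convex hull is attained at a vertex)."""
    return float(np.max(points @ x))

# A = convex hull of the unit square's corners in R^2.
square = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

print(support_function(square, np.array([1.0, 0.0])))   # 1.0
print(support_function(square, np.array([-1.0, 0.0])))  # 0.0 (the origin is a vertex)
d = np.array([1.0, 1.0]) / np.sqrt(2)                   # unit diagonal direction
print(support_function(square, d))                      # sqrt(2), about 1.414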
== Examples ==
The support function of a singleton A = {a} is
{\displaystyle h_{A}(x)=x\cdot a.}
The support function of the Euclidean unit ball
{\displaystyle B=\{y\in \mathbb {R} ^{n}\,:\,\|y\|_{2}\leq 1\}}
is
{\displaystyle h_{B}(x)=\|x\|_{2},}
where {\displaystyle \|\cdot \|_{2}} is the 2-norm.
If A is a line segment through the origin with endpoints −a and a, then
{\displaystyle h_{A}(x)=|x\cdot a|.}
== Properties ==
=== As a function of x ===
The support function of a compact nonempty convex set is real valued and continuous, but if the set is closed and unbounded, its support function is extended real valued (it takes the value {\displaystyle \infty }). As any nonempty closed convex set is the intersection of its supporting half spaces, the function hA determines A uniquely. This can be used to describe certain geometric properties of convex sets analytically. For instance, a set A is point symmetric with respect to the origin if and only if hA is an even function.
In general, the support function is not differentiable.
However, directional derivatives exist and yield support functions of support sets. If A is compact and convex,
and hA'(u;x) denotes the directional derivative of
hA at u ≠ 0 in direction x,
we have
{\displaystyle h_{A}'(u;x)=h_{A\cap H(u)}(x)\qquad x\in \mathbb {R} ^{n}.}
Here H(u) is the supporting hyperplane of A with exterior normal vector u, defined
above. If A ∩ H(u) is a singleton {y}, say, it follows that the support function is differentiable at
u and its gradient coincides with y. Conversely, if hA is differentiable at u, then A ∩ H(u) is a singleton. Hence hA is differentiable at all points u ≠ 0
if and only if A is strictly convex (the boundary of A does not contain any line segments).
More generally, when {\displaystyle A} is convex and closed then for any {\displaystyle u\in \mathbb {R} ^{n}\setminus \{0\}},
{\displaystyle \partial h_{A}(u)=H(u)\cap A\,,}
where {\displaystyle \partial h_{A}(u)} denotes the set of subgradients of {\displaystyle h_{A}} at {\displaystyle u}.
It follows directly from its definition that the support function is positive homogeneous:
{\displaystyle h_{A}(\alpha x)=\alpha h_{A}(x),\qquad \alpha \geq 0,x\in \mathbb {R} ^{n},}
and subadditive:
{\displaystyle h_{A}(x+y)\leq h_{A}(x)+h_{A}(y),\qquad x,y\in \mathbb {R} ^{n}.}
It follows that hA is a convex function.
It is crucial in convex geometry that these properties characterize support functions: any positive homogeneous, convex, real valued function on {\displaystyle \mathbb {R} ^{n}} is the support function of a nonempty compact convex set. Several proofs are known; one uses the fact that the Legendre transform of a positive homogeneous, convex, real valued function is the (convex) indicator function of a compact convex set.
Many authors restrict the support function to the Euclidean unit sphere and consider it as a function on Sn−1. The homogeneity property shows that this restriction determines the support function on {\displaystyle \mathbb {R} ^{n}}, as defined above.
=== As a function of A ===
The support functions of a dilated or translated set are closely related to the original set A:
{\displaystyle h_{\alpha A}(x)=\alpha h_{A}(x),\qquad \alpha \geq 0,x\in \mathbb {R} ^{n}}
and
{\displaystyle h_{A+b}(x)=h_{A}(x)+x\cdot b,\qquad x,b\in \mathbb {R} ^{n}.}
The latter generalises to
{\displaystyle h_{A+B}(x)=h_{A}(x)+h_{B}(x),\qquad x\in \mathbb {R} ^{n},}
where A + B denotes the Minkowski sum:
{\displaystyle A+B:=\{\,a+b\in \mathbb {R} ^{n}\mid a\in A,\ b\in B\,\}.}
The Hausdorff distance dH(A, B) of two nonempty compact convex sets A and B can be expressed in terms of support functions,
{\displaystyle d_{\mathrm {H} }(A,B)=\|h_{A}-h_{B}\|_{\infty }}
where, on the right hand side, the uniform norm on the unit sphere is used.
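As a numerical illustration of this identity, the support functions of two convex polygons can be compared over a dense sample of unit directions in the plane. The following rough Python sketch (sampling directions rather than evaluating the exact supremum, with helper names of our choosing) approximates the Hausdorff distance this way:

import numpy as np

def support(points: np.ndarray, u: np.ndarray) -> float:
    """Support function of the convex hull of `points` at direction u."""
    return float(np.max(points @ u))

def hausdorff_estimate(pts_a: np.ndarray, pts_b: np.ndarray, n_dirs: int = 3600) -> float:
    """Approximate d_H(A, B) = sup over unit u of |h_A(u) - h_B(u)|
    by sampling unit directions on the circle (R^2 only)."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    dirs = np.column_stack([np.cos(angles), np.sin(angles)])
    return max(abs(support(pts_a, u) - support(pts_b, u)) for u in dirs)

# Unit square vs. the same square translated by (0.5, 0): d_H should be 0.5,
# since h_{A+b} - h_A = u . b, whose supremum over unit u is ||b||.
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
shifted = square + np.array([0.5, 0.0])
print(hausdorff_estimate(square, shifted))  # ~0.5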
The properties of the support function as a function of the set A are sometimes summarized in saying that {\displaystyle \tau \colon A\mapsto h_{A}} maps the family of non-empty compact convex sets to the cone of all real-valued continuous functions on the sphere whose positive homogeneous extension is convex. Abusing terminology slightly, {\displaystyle \tau } is sometimes called linear, as it respects Minkowski addition, although it is not defined on a linear space, but rather on an (abstract) convex cone of nonempty compact convex sets. The mapping {\displaystyle \tau } is an isometry between this cone, endowed with the Hausdorff metric, and a subcone of the family of continuous functions on Sn−1 with the uniform norm.
== Variants ==
In contrast to the above, support functions are sometimes defined on the boundary of A rather than on
Sn-1, under the assumption that there exists a unique exterior unit normal at each boundary point.
Convexity is not needed for the definition.
For an oriented regular surface, M, with a unit normal vector, N, defined everywhere on its surface, the support function
is then defined by
{\displaystyle {x}\mapsto {x}\cdot N({x})}.
In other words, for any {\displaystyle {x}\in M}, this support function gives the signed distance of the unique hyperplane that touches M in x.
== See also ==
Barrier cone
Supporting functional
== References == | Wikipedia/Support_function |
In thermodynamics, the Gibbs free energy (or Gibbs energy as the recommended name; symbol {\displaystyle G}) is a thermodynamic potential that can be used to calculate the maximum amount of work, other than pressure–volume work, that may be performed by a thermodynamically closed system at constant temperature and pressure. It also provides a necessary condition for processes such as chemical reactions that may occur under these conditions. The Gibbs free energy is expressed as
{\displaystyle G(p,T)=U+pV-TS=H-TS}
where:
{\textstyle U} is the internal energy of the system
{\textstyle H} is the enthalpy of the system
{\textstyle S} is the entropy of the system
{\textstyle T} is the temperature of the system
{\textstyle V} is the volume of the system
{\textstyle p} is the pressure of the system (which must be equal to that of the surroundings for mechanical equilibrium).
The Gibbs free energy change ({\displaystyle \Delta G=\Delta H-T\Delta S}, measured in joules in SI) is the maximum amount of non-volume expansion work that can be extracted from a closed system (one that can exchange heat and work with its surroundings, but not matter) at fixed temperature and pressure. This maximum can be attained only in a completely reversible process. When a system transforms reversibly from an initial state to a final state under these conditions, the decrease in Gibbs free energy equals the work done by the system to its surroundings, minus the work of the pressure forces.
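To make the sign convention concrete, the following illustrative Python sketch applies ΔG = ΔH − TΔS to decide whether a process is spontaneous at a given temperature; the numerical values are rough textbook figures for the melting of ice, used here only as an example:

def delta_g(delta_h: float, temp_k: float, delta_s: float) -> float:
    """Gibbs free energy change in J/mol, for Delta_H in J/mol,
    T in kelvin, and Delta_S in J/(mol K)."""
    return delta_h - temp_k * delta_s

dH = 6010.0  # J/mol, enthalpy of fusion of ice (approximate)
dS = 22.0    # J/(mol K), entropy of fusion (approximate)

for T in (263.15, 273.15, 283.15):  # -10 C, 0 C, +10 C
    dG = delta_g(dH, T, dS)
    verdict = "spontaneous" if dG < 0 else "non-spontaneous" if dG > 0 else "at equilibrium"
    print(f"T = {T:.2f} K: dG = {dG:+.1f} J/mol ({verdict})")
# Near 273 K the sign of dG flips, consistent with ice melting above 0 C.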
The Gibbs energy is the thermodynamic potential that is minimized when a system reaches chemical equilibrium at constant pressure and temperature when not driven by an applied electrolytic voltage. Its derivative with respect to the reaction coordinate of the system then vanishes at the equilibrium point. As such, a reduction in {\displaystyle G} is necessary for a reaction to be spontaneous under these conditions.
The concept of Gibbs free energy, originally called available energy, was developed in the 1870s by the American scientist Josiah Willard Gibbs. In 1873, Gibbs described this "available energy" as: 400
the greatest amount of mechanical work which can be obtained from a given quantity of a certain substance in a given initial state, without increasing its total volume or allowing heat to pass to or from external bodies, except such as at the close of the processes are left in their initial condition.
The initial state of the body, according to Gibbs, is supposed to be such that "the body can be made to pass from it to states of dissipated energy by reversible processes". In his 1876 magnum opus On the Equilibrium of Heterogeneous Substances, a graphical analysis of multi-phase chemical systems, he engaged his thoughts on chemical-free energy in full.
If the reactants and products are all in their thermodynamic standard states, then the defining equation is written as
{\displaystyle \Delta G^{\circ }=\Delta H^{\circ }-T\Delta S^{\circ }}
, where {\displaystyle H} is enthalpy, {\displaystyle T} is absolute temperature, and {\displaystyle S} is entropy.
== Overview ==
According to the second law of thermodynamics, for systems reacting at fixed temperature and pressure without input of non-pressure-volume (pV) work, there is a general natural tendency to achieve a minimum of the Gibbs free energy.
A quantitative measure of the favorability of a given reaction under these conditions is the change ΔG (sometimes written "delta G" or "dG") in Gibbs free energy that is (or would be) caused by the reaction. As a necessary condition for the reaction to occur at constant temperature and pressure, ΔG must be smaller than the non-pressure-volume (non-pV, e.g. electrical) work, which is often equal to zero (then ΔG must be negative). ΔG equals the maximum amount of non-pV work that can be performed as a result of the chemical reaction for the case of a reversible process. If analysis indicates a positive ΔG for a reaction, then energy — in the form of electrical or other non-pV work — would have to be added to the reacting system for ΔG to be smaller than the non-pV work and make it possible for the reaction to occur.: 298–299
One can think of ∆G as the amount of "free" or "useful" energy available to do non-pV work at constant temperature and pressure. The equation can be also seen from the perspective of the system taken together with its surroundings (the rest of the universe). First, one assumes that the given reaction at constant temperature and pressure is the only one that is occurring. Then the entropy released or absorbed by the system equals the entropy that the environment must absorb or release, respectively. The reaction will only be allowed if the total entropy change of the universe is zero or positive. This is reflected in a negative ΔG, and the reaction is called an exergonic process.
If two chemical reactions are coupled, then an otherwise endergonic reaction (one with positive ΔG) can be made to happen. The input of heat into an inherently endergonic reaction, such as the elimination of cyclohexanol to cyclohexene, can be seen as coupling an unfavorable reaction (elimination) to a favorable one (burning of coal or other provision of heat) such that the total entropy change of the universe is greater than or equal to zero, making the total Gibbs free energy change of the coupled reactions negative.
In traditional use, the term "free" was included in "Gibbs free energy" to mean "available in the form of useful work". The characterization becomes more precise if we add the qualification that it is the energy available for non-pressure-volume work. (An analogous, but slightly different, meaning of "free" applies in conjunction with the Helmholtz free energy, for systems at constant temperature). However, an increasing number of books and journal articles do not include the attachment "free", referring to G as simply "Gibbs energy". This is the result of a 1988 IUPAC meeting to set unified terminologies for the international scientific community, in which the removal of the adjective "free" was recommended. This standard, however, has not yet been universally adopted.
The name "free enthalpy" was also used for G in the past.
== History ==
The quantity called "free energy" is a more advanced and accurate replacement for the outdated term affinity, which was used by chemists in the earlier years of physical chemistry to describe the force that caused chemical reactions.
In 1873, Josiah Willard Gibbs published A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces, in which he sketched the principles of his new equation that was able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e., bodies composed of part solid, part liquid, and part vapor, and by using a three-dimensional volume-entropy-internal energy graph, Gibbs was able to determine three states of equilibrium, i.e., "necessarily stable", "neutral", and "unstable", and whether or not changes would ensue. Further, Gibbs stated:
If we wish to express in a single equation the necessary and sufficient condition of thermodynamic equilibrium for a substance when surrounded by a medium of constant pressure p and temperature T, this equation may be written:
δ(ε − Tη + pν) = 0
when δ refers to the variation produced by any variations in the state of the parts of the body, and (when different parts of the body are in different states) in the proportion in which the body is divided between the different states. The condition of stable equilibrium is that the value of the expression in the parenthesis shall be a minimum.
In this description, as used by Gibbs, ε refers to the internal energy of the body, η refers to the entropy of the body, and ν is the volume of the body...
Thereafter, in 1882, the German scientist Hermann von Helmholtz characterized the affinity as the largest quantity of work which can be gained when the reaction is carried out in a reversible manner, e.g., electrical work in a reversible cell. The maximum work is thus regarded as the diminution of the free, or available, energy of the system (Gibbs free energy G at T = constant, P = constant or Helmholtz free energy F at T = constant, V = constant), whilst the heat given out is usually a measure of the diminution of the total energy of the system (internal energy). Thus, G or F is the amount of energy "free" for work under the given conditions.
Until this point, the general view had been such that: "all chemical reactions drive the system to a state of equilibrium in which the affinities of the reactions vanish". Over the next 60 years, the term affinity came to be replaced with the term free energy. According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall led to the replacement of the term "affinity" by the term "free energy" in much of the English-speaking world.: 206
== Definitions ==
The Gibbs free energy is defined as
{\displaystyle G(p,T)=U+pV-TS,}
which is the same as
{\displaystyle G(p,T)=H-TS,}
where:
U is the internal energy (SI unit: joule),
p is pressure (SI unit: pascal),
V is volume (SI unit: m3),
T is the temperature (SI unit: kelvin),
S is the entropy (SI unit: joule per kelvin),
H is the enthalpy (SI unit: joule).
The expression for the infinitesimal reversible change in the Gibbs free energy as a function of its "natural variables" p and T, for an open system, subjected to the operation of external forces (for instance, electrical or magnetic) Xi, which cause the external parameters of the system ai to change by an amount dai, can be derived as follows from the first law for reversible processes:
{\displaystyle {\begin{aligned}T\,\mathrm {d} S&=\mathrm {d} U+p\,\mathrm {d} V-\sum _{i=1}^{k}\mu _{i}\,\mathrm {d} N_{i}+\sum _{i=1}^{n}X_{i}\,\mathrm {d} a_{i}+\cdots \\\mathrm {d} (TS)-S\,\mathrm {d} T&=\mathrm {d} U+\mathrm {d} (pV)-V\,\mathrm {d} p-\sum _{i=1}^{k}\mu _{i}\,\mathrm {d} N_{i}+\sum _{i=1}^{n}X_{i}\,\mathrm {d} a_{i}+\cdots \\\mathrm {d} (U-TS+pV)&=V\,\mathrm {d} p-S\,\mathrm {d} T+\sum _{i=1}^{k}\mu _{i}\,\mathrm {d} N_{i}-\sum _{i=1}^{n}X_{i}\,\mathrm {d} a_{i}+\cdots \\\mathrm {d} G&=V\,\mathrm {d} p-S\,\mathrm {d} T+\sum _{i=1}^{k}\mu _{i}\,\mathrm {d} N_{i}-\sum _{i=1}^{n}X_{i}\,\mathrm {d} a_{i}+\cdots \end{aligned}}}
where:
μi is the chemical potential of the ith chemical component. (SI unit: joules per particle or joules per mole)
Ni is the number of particles (or number of moles) composing the ith chemical component.
This is one form of the Gibbs fundamental equation. In the infinitesimal expression, the term involving the chemical potential accounts for changes in Gibbs free energy resulting from an influx or outflux of particles. In other words, it holds for an open system or for a closed, chemically reacting system where the Ni are changing. For a closed, non-reacting system, this term may be dropped.
Any number of extra terms may be added, depending on the particular system being considered. Aside from mechanical work, a system may, in addition, perform numerous other types of work. For example, in the infinitesimal expression, the contractile work energy associated with a thermodynamic system that is a contractile fiber that shortens by an amount −dl under a force f would result in a term f dl being added. If a quantity of charge −de is acquired by a system at an electrical potential Ψ, the electrical work associated with this is −Ψ de, which would be included in the infinitesimal expression. Other work terms are added on per system requirements.
Each quantity in the equations above can be divided by the amount of substance, measured in moles, to form molar Gibbs free energy. The Gibbs free energy is one of the most important thermodynamic functions for the characterization of a system. It is a factor in determining outcomes such as the voltage of an electrochemical cell, and the equilibrium constant for a reversible reaction. In isothermal, isobaric systems, Gibbs free energy can be thought of as a "dynamic" quantity, in that it is a representative measure of the competing effects of the enthalpic and entropic driving forces involved in a thermodynamic process.
The temperature dependence of the Gibbs energy for an ideal gas is given by the Gibbs–Helmholtz equation, and its pressure dependence is given by
{\displaystyle {\frac {G}{N}}={\frac {G^{\circ }}{N}}+kT\ln {\frac {p}{p^{\circ }}},}
or more conveniently as its chemical potential:
{\displaystyle {\frac {G}{N}}=\mu =\mu ^{\circ }+kT\ln {\frac {p}{p^{\circ }}}.}
In non-ideal systems, fugacity comes into play.
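As a numerical illustration of the pressure dependence, written per mole so that the gas constant R appears in place of Boltzmann's constant k, the following short Python sketch (example numbers ours) evaluates μ − μ° = RT ln(p/p°) for an ideal gas:

import math

R = 8.314  # J/(mol K), gas constant (per-mole analogue of k)

def mu_shift(temp_k: float, p: float, p0: float = 1.0e5) -> float:
    """Change of chemical potential relative to the standard state,
    mu - mu0 = R T ln(p / p0), for an ideal gas (J/mol)."""
    return R * temp_k * math.log(p / p0)

# At 298.15 K, doubling the pressure raises mu by R T ln 2, about 1.7 kJ/mol.
print(mu_shift(298.15, 2.0e5))  # ~ +1718 J/mol
print(mu_shift(298.15, 0.5e5))  # ~ -1718 J/mol (half the standard pressure)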
== Derivation ==
The Gibbs free energy total differential with respect to natural variables may be derived by Legendre transforms of the internal energy.
{\displaystyle \mathrm {d} U=T\,\mathrm {d} S-p\,\mathrm {d} V+\sum _{i}\mu _{i}\,\mathrm {d} N_{i}.}
The definition of G from above is
{\displaystyle G=U+pV-TS}.
Taking the total differential, we have
{\displaystyle \mathrm {d} G=\mathrm {d} U+p\,\mathrm {d} V+V\,\mathrm {d} p-T\,\mathrm {d} S-S\,\mathrm {d} T.}
Replacing dU with the result from the first law gives
{\displaystyle {\begin{aligned}\mathrm {d} G&=T\,\mathrm {d} S-p\,\mathrm {d} V+\sum _{i}\mu _{i}\,\mathrm {d} N_{i}+p\,\mathrm {d} V+V\,\mathrm {d} p-T\,\mathrm {d} S-S\,\mathrm {d} T\\&=V\,\mathrm {d} p-S\,\mathrm {d} T+\sum _{i}\mu _{i}\,\mathrm {d} N_{i}.\end{aligned}}}
The natural variables of G are then p, T, and {Ni}.
=== Homogeneous systems ===
Because S, V, and Ni are extensive variables, an Euler relation allows easy integration of dU:
{\displaystyle U=TS-pV+\sum _{i}\mu _{i}N_{i}.}
Because some of the natural variables of G are intensive, dG may not be integrated using Euler relations as is the case with internal energy. However, simply substituting the above integrated result for U into the definition of G gives a standard expression for G:
{\displaystyle {\begin{aligned}G&=U+pV-TS\\&=\left(TS-pV+\sum _{i}\mu _{i}N_{i}\right)+pV-TS\\&=\sum _{i}\mu _{i}N_{i}.\end{aligned}}}
This result shows that the chemical potential of a substance {\displaystyle i} is its (partial) mol(ecul)ar Gibbs free energy. It applies to homogeneous, macroscopic systems, but not to all thermodynamic systems.
== Gibbs free energy of reactions ==
The system under consideration is held at constant temperature and pressure, and is closed (no matter can come in or out). The Gibbs energy of any system is
{\displaystyle G=U+pV-TS}
and an infinitesimal change in G, at constant temperature and pressure, yields
{\displaystyle dG=dU+pdV-TdS.}
By the first law of thermodynamics, a change in the internal energy U is given by
{\displaystyle dU=\delta Q+\delta W}
where δQ is energy added as heat, and δW is energy added as work. The work done on the system may be written as δW = −pdV + δWx, where −pdV is the mechanical work of compression/expansion done on or by the system and δWx is all other forms of work, which may include electrical, magnetic, etc. Then
{\displaystyle dU=\delta Q-pdV+\delta W_{x}}
and the infinitesimal change in G is
{\displaystyle dG=\delta Q-TdS+\delta W_{x}.}
The second law of thermodynamics states that for a closed system at constant temperature (in a heat bath),
{\displaystyle TdS\geq \delta Q}, and so it follows that
{\displaystyle dG\leq \delta W_{x}}
Assuming that only mechanical work is done, this simplifies to
{\displaystyle dG\leq 0}
This means that for such a system when not in equilibrium, the Gibbs energy will always be decreasing, and in equilibrium, the infinitesimal change dG will be zero. In particular, this will be true if the system is experiencing any number of internal chemical reactions on its path to equilibrium.
=== In electrochemical thermodynamics ===
When electric charge dQele is passed between the electrodes of an electrochemical cell generating an emf {\displaystyle {\mathcal {E}}}, an electrical work term appears in the expression for the change in Gibbs energy:
{\displaystyle dG=-SdT+Vdp+{\mathcal {E}}dQ_{ele},}
where S is the entropy, V is the system volume, p is its pressure and T is its absolute temperature.
The combination ({\displaystyle {\mathcal {E}}}, Qele) is an example of a conjugate pair of variables. At constant pressure the above equation produces a Maxwell relation that links the change in open cell voltage with temperature T (a measurable quantity) to the change in entropy S when charge is passed isothermally and isobarically. The latter is closely related to the reaction entropy of the electrochemical reaction that lends the battery its power. This Maxwell relation is:
{\displaystyle \left({\frac {\partial {\mathcal {E}}}{\partial T}}\right)_{Q_{ele},p}=-\left({\frac {\partial S}{\partial Q_{ele}}}\right)_{T,p}}
If a mole of ions goes into solution (for example, in a Daniell cell, as discussed below) the charge through the external circuit is
{\displaystyle \Delta Q_{ele}=-n_{0}F_{0}\,,}
where n0 is the number of electrons/ion, and F0 is the Faraday constant and the minus sign indicates discharge of the cell. Assuming constant pressure and volume, the thermodynamic properties of the cell are related strictly to the behavior of its emf by
{\displaystyle \Delta H=-n_{0}F_{0}\left({\mathcal {E}}-T{\frac {d{\mathcal {E}}}{dT}}\right),}
where ΔH is the enthalpy of reaction. The quantities on the right are all directly measurable.
== Useful identities to derive the Nernst equation ==
During a reversible electrochemical reaction at constant temperature and pressure, the following equations involving the Gibbs free energy hold:
{\displaystyle \Delta _{\text{r}}G=\Delta _{\text{r}}G^{\circ }+RT\ln Q_{\text{r}}}
(see chemical equilibrium),
{\displaystyle \Delta _{\text{r}}G^{\circ }=-RT\ln K_{\text{eq}}}
(for a system at chemical equilibrium),
{\displaystyle \Delta _{\text{r}}G=w_{\text{elec,rev}}=-nF{\mathcal {E}}}
(for a reversible electrochemical process at constant temperature and pressure),
{\displaystyle \Delta _{\text{r}}G^{\circ }=-nF{\mathcal {E}}^{\circ }}
(definition of {\displaystyle {\mathcal {E}}^{\circ }}),
and rearranging gives
{\displaystyle {\begin{aligned}nF{\mathcal {E}}^{\circ }&=RT\ln K_{\text{eq}},\\nF{\mathcal {E}}&=nF{\mathcal {E}}^{\circ }-RT\ln Q_{\text{r}},\\{\mathcal {E}}&={\mathcal {E}}^{\circ }-{\frac {RT}{nF}}\ln Q_{\text{r}},\end{aligned}}}
which relates the cell potential resulting from the reaction to the equilibrium constant and reaction quotient for that reaction (Nernst equation),
where
ΔrG, Gibbs free energy change per mole of reaction,
ΔrG°, Gibbs free energy change per mole of reaction for unmixed reactants and products at standard conditions (i.e. 298 K, 100 kPa, 1 M of each reactant and product),
R, gas constant,
T, absolute temperature,
ln, natural logarithm,
Qr, reaction quotient (unitless),
Keq, equilibrium constant (unitless),
welec,rev, electrical work in a reversible process (chemistry sign convention),
n, number of moles of electrons transferred in the reaction,
F = NAe ≈ 96485 C/mol, Faraday constant (charge per mole of electrons),
{\displaystyle {\mathcal {E}}}, cell potential,
{\displaystyle {\mathcal {E}}^{\circ }}, standard cell potential.
Moreover, we also have
{\displaystyle {\begin{aligned}K_{\text{eq}}&=e^{-{\frac {\Delta _{\text{r}}G^{\circ }}{RT}}},\\\Delta _{\text{r}}G^{\circ }&=-RT\left(\ln K_{\text{eq}}\right)=-2.303\,RT\left(\log _{10}K_{\text{eq}}\right),\end{aligned}}}
which relates the equilibrium constant with Gibbs free energy. This implies that at equilibrium
{\displaystyle Q_{\text{r}}=K_{\text{eq}}} and {\displaystyle \Delta _{\text{r}}G=0.}
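The identities above can be exercised numerically. The following illustrative Python sketch (the values n = 2 and E° = 1.10 V are rough figures for a Daniell cell, chosen only as an example) evaluates the Nernst equation and the corresponding equilibrium constant:

import math

R = 8.314    # J/(mol K), gas constant
F = 96485.0  # C/mol, Faraday constant
T = 298.15   # K, room temperature

def nernst(e_standard: float, n: int, q_r: float, temp_k: float = T) -> float:
    """Cell potential from the Nernst equation:
    E = E_standard - (R T / (n F)) * ln(Q_r)."""
    return e_standard - (R * temp_k) / (n * F) * math.log(q_r)

E0, n = 1.10, 2  # rough Daniell-cell figures, for illustration only
print(nernst(E0, n, q_r=1.0))   # Q_r = 1: E equals E0, 1.10 V
print(nernst(E0, n, q_r=10.0))  # products favored: E drops by ~29.6 mV
print(nernst(E0, n, q_r=0.1))   # reactants favored: E rises by ~29.6 mV

# Equilibrium constant from Delta_rG0 = -n F E0 = -R T ln K_eq:
K_eq = math.exp(n * F * E0 / (R * T))
print(f"K_eq ~ {K_eq:.3e}")     # enormous: the reaction goes to completion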
== Standard Gibbs energy change of formation ==
The standard Gibbs free energy of formation of a compound is the change of Gibbs free energy that accompanies the formation of 1 mole of that substance from its component elements, in their standard states (the most stable form of the element at 25 °C and 100 kPa). Its symbol is ΔfG˚.
All elements in their standard states (diatomic oxygen gas, graphite, etc.) have standard Gibbs free energy change of formation equal to zero, as there is no change involved.
ΔfG = ΔfG˚ + RT ln Qf,
where Qf is the reaction quotient.
At equilibrium, ΔfG = 0, and Qf = K, so the equation becomes
ΔfG˚ = −RT ln K,
where K is the equilibrium constant of the formation reaction of the substance from the elements in their standard states.
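As a short worked illustration of ΔfG˚ = −RT ln K in Python, using an approximate tabulated standard Gibbs energy of formation for water vapor (the value is quoted here only for illustration):

import math

R = 8.314   # J/(mol K)
T = 298.15  # K

# Approximate standard Gibbs energy of formation of H2O(g):
dG_f = -228.6e3  # J/mol (illustrative tabulated value)

# Invert Delta_fG0 = -R T ln K to get the formation equilibrium constant.
K = math.exp(-dG_f / (R * T))
print(f"K ~ {K:.3e}")  # ~1e40: formation of water vapor from the elements
                       # is overwhelmingly favored at 298 K.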
== Graphical interpretation by Gibbs ==
Gibbs free energy was originally defined graphically. In 1873, American scientist Willard Gibbs published his first thermodynamics paper, "Graphical Methods in the Thermodynamics of Fluids", in which Gibbs used the two coordinates of the entropy and volume to represent the state of the body. In his second follow-up paper, "A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces", published later that year, Gibbs added in the third coordinate of the energy of the body, defined on three figures. In 1874, Scottish physicist James Clerk Maxwell used Gibbs' figures to make a 3D energy-entropy-volume thermodynamic surface of a fictitious water-like substance. Thus, in order to understand the concept of Gibbs free energy, it may help to understand its interpretation by Gibbs as section AB on his figure 3, and as Maxwell sculpted that section on his 3D surface figure.
== See also ==
Bioenergetics
Calphad (CALculation of PHAse Diagrams)
Critical point (thermodynamics)
Electron equivalent
Enthalpy–entropy compensation
Free entropy
Gibbs–Helmholtz equation
Grand potential
Non-random two-liquid model (NRTL model) – Gibbs energy of excess and mixing calculation and activity coefficients
Spinodal – Spinodal Curves (Hessian matrix)
Standard molar entropy
Thermodynamic free energy
UNIQUAC model – Gibbs energy of excess and mixing calculation and activity coefficients
== Notes and references ==
== External links ==
IUPAC definition (Gibbs energy)
Gibbs Free Energy – Georgia State University | Wikipedia/Gibbs_energy |
In thermodynamics, the Helmholtz free energy (or Helmholtz energy) is a thermodynamic potential that measures the useful work obtainable from a closed thermodynamic system at a constant temperature (isothermal). The change in the Helmholtz energy during a process is equal to the maximum amount of work that the system can perform in a thermodynamic process in which temperature is held constant. At constant temperature, the Helmholtz free energy is minimized at equilibrium.
In contrast, the Gibbs free energy or free enthalpy is most commonly used as a measure of thermodynamic potential (especially in chemistry) when it is convenient for applications that occur at constant pressure. For example, in explosives research Helmholtz free energy is often used, since explosive reactions by their nature induce pressure changes. It is also frequently used to define fundamental equations of state of pure substances.
The concept of free energy was developed by Hermann von Helmholtz, a German physicist, and first presented in 1882 in a lecture called "On the thermodynamics of chemical processes". From the German word Arbeit (work), the International Union of Pure and Applied Chemistry (IUPAC) recommends the symbol A and the name Helmholtz energy. In physics, the symbol F is also used in reference to free energy or Helmholtz function.
== Definition ==
The Helmholtz free energy is defined as
{\displaystyle A\equiv U-TS,}
where
A is the Helmholtz free energy (sometimes also called F, particularly in the field of physics) (SI: joules, CGS: ergs),
U is the internal energy of the system (SI: joules, CGS: ergs),
T is the absolute temperature (kelvins) of the surroundings, modelled as a heat bath,
S is the entropy of the system (SI: joules per kelvin, CGS: ergs per kelvin).
The Helmholtz energy is the Legendre transformation of the internal energy U, in which temperature replaces entropy as the independent variable.
== Formal development ==
The first law of thermodynamics in a closed system provides
{\displaystyle \mathrm {d} U=\delta Q\ +\delta W,}
where
{\displaystyle U} is the internal energy, {\displaystyle \delta Q} is the energy added as heat, and {\displaystyle \delta W} is the work done on the system. The second law of thermodynamics for a reversible process yields {\displaystyle \delta Q=T\,\mathrm {d} S}. In case of a reversible change, the work done can be expressed as {\displaystyle \delta W=-p\,\mathrm {d} V} (ignoring electrical and other non-PV work) and so:
{\displaystyle \mathrm {d} U=T\,\mathrm {d} S-p\,\mathrm {d} V.}
Applying the product rule for differentiation to
{\displaystyle \mathrm {d} (TS)=T\mathrm {d} S\,+S\mathrm {d} T}, it follows
{\displaystyle \mathrm {d} U=\mathrm {d} (TS)-S\,\mathrm {d} T-p\,\mathrm {d} V,}
and
{\displaystyle \mathrm {d} (U-TS)=-S\,\mathrm {d} T-p\,\mathrm {d} V.}
The definition of
{\displaystyle A=U-TS} allows us to rewrite this as
{\displaystyle \mathrm {d} A=-S\,\mathrm {d} T-p\,\mathrm {d} V.}
Because A is a thermodynamic function of state, this relation is also valid for a process (without electrical work or composition change) that is not reversible.
== Minimum free energy and maximum work principles ==
The laws of thermodynamics are only directly applicable to systems in thermal equilibrium. If we wish to describe phenomena like chemical reactions, then the best we can do is to consider suitably chosen initial and final states in which the system is in (metastable) thermal equilibrium. If the system is kept at fixed volume and is in contact with a heat bath at some constant temperature, then we can reason as follows.
Since the thermodynamical variables of the system are well defined in the initial state and the final state, the internal energy increase {\displaystyle \Delta U}, the entropy increase {\displaystyle \Delta S}, and the total amount of work that can be extracted, performed by the system, {\displaystyle W}, are well defined quantities. Conservation of energy implies
{\displaystyle \Delta U_{\text{bath}}+\Delta U+W=0.}
The volume of the system is kept constant. This means that the volume of the heat bath does not change either, and we can conclude that the heat bath does not perform any work. This implies that the amount of heat that flows into the heat bath is given by
{\displaystyle Q_{\text{bath}}=\Delta U_{\text{bath}}=-(\Delta U+W).}
The heat bath remains in thermal equilibrium at temperature T no matter what the system does. Therefore, the entropy change of the heat bath is
{\displaystyle \Delta S_{\text{bath}}={\frac {Q_{\text{bath}}}{T}}=-{\frac {\Delta U+W}{T}}.}
The total entropy change is thus given by
{\displaystyle \Delta S_{\text{bath}}+\Delta S=-{\frac {\Delta U-T\Delta S+W}{T}}.}
Since the system is in thermal equilibrium with the heat bath in the initial and the final states, T is also the temperature of the system in these states. The fact that the system's temperature does not change allows us to express the numerator as the free energy change of the system:
{\displaystyle \Delta S_{\text{bath}}+\Delta S=-{\frac {\Delta A+W}{T}}.}
Since the total change in entropy must always be larger or equal to zero, we obtain the inequality
{\displaystyle W\leq -\Delta A.}
We see that the total amount of work that can be extracted in an isothermal process is limited by the free-energy decrease, and that increasing the free energy in a reversible process requires work to be done on the system. If no work is extracted from the system, then
{\displaystyle \Delta A\leq 0,}
and thus for a system kept at constant temperature and volume and not capable of performing electrical or other non-PV work, the total free energy during a spontaneous change can only decrease.
This result seems to contradict the equation
{\displaystyle \mathrm {d} A=-S\,\mathrm {d} T-p\,\mathrm {d} V}, as keeping T and V constant seems to imply {\displaystyle \mathrm {d} A=0}, and hence {\displaystyle A=\mathrm {const.} }
In reality there is no contradiction: In a simple one-component system, to which the validity of the equation {\displaystyle \mathrm {d} A=-S\,\mathrm {d} T-p\,\mathrm {d} V} is restricted, no process can occur at constant T and V, since there is a unique {\displaystyle P(T,V)} relation, and thus T, V, and P are all fixed. To allow for spontaneous processes at constant T and V, one needs to enlarge the thermodynamical state space of the system. In case of a chemical reaction, one must allow for changes in the numbers Nj of particles of each type j. The differential of the free energy then generalizes to
{\displaystyle \mathrm {d} A=-S\,\mathrm {d} T-P\,\mathrm {d} V+\sum _{j}\mu _{j}\,\mathrm {d} N_{j},}
where the {\displaystyle N_{j}} are the numbers of particles of type j and the {\displaystyle \mu _{j}} are the corresponding chemical potentials. This equation is then again valid for both reversible and non-reversible changes. In case of a spontaneous change at constant T and V, the last term will thus be negative.
In case there are other external parameters, the above relation further generalizes to
{\displaystyle \mathrm {d} A=-S\,\mathrm {d} T-\sum _{i}X_{i}\,\mathrm {d} x_{i}+\sum _{j}\mu _{j}\,\mathrm {d} N_{j}.}
Here the {\displaystyle x_{i}} are the external variables, and the {\displaystyle X_{i}} the corresponding generalized forces.
== Relation to the canonical partition function ==
A system kept at constant volume, temperature, and particle number is described by the canonical ensemble. The probability of finding the system in energy eigenstate r is given by
{\displaystyle P_{r}={\frac {e^{-\beta E_{r}}}{Z}},}
where
{\displaystyle \beta ={\frac {1}{kT}},}
{\displaystyle E_{r}} is the energy of accessible state {\displaystyle r}, and
{\textstyle Z=\sum _{i}e^{-\beta E_{i}}.}
Z is called the partition function of the system. The fact that the system does not have a unique energy means that the various thermodynamical quantities must be defined as expectation values. In the thermodynamical limit of infinite system size, the relative fluctuations in these averages will go to zero.
The average internal energy of the system is the expectation value of the energy and can be expressed in terms of Z as follows:
{\displaystyle U\equiv \langle E\rangle =\sum _{r}P_{r}E_{r}=\sum _{r}{\frac {e^{-\beta E_{r}}E_{r}}{Z}}=\sum _{r}{\frac {-{\frac {\partial }{\partial \beta }}e^{-\beta E_{r}}}{Z}}={\frac {-{\frac {\partial }{\partial \beta }}\sum _{r}e^{-\beta E_{r}}}{Z}}=-{\frac {\partial \log Z}{\partial \beta }}.}
If the system is in state r, then the generalized force corresponding to an external variable x is given by
{\displaystyle X_{r}=-{\frac {\partial E_{r}}{\partial x}}.}
The thermal average of this can be written as
{\displaystyle X=\sum _{r}P_{r}X_{r}={\frac {1}{\beta }}{\frac {\partial \log Z}{\partial x}}.}
Suppose that the system has one external variable {\displaystyle x}. Then changing the system's temperature parameter by {\displaystyle d\beta } and the external variable by {\displaystyle dx} will lead to a change in {\displaystyle \log Z}:
{\displaystyle d(\log Z)={\frac {\partial \log Z}{\partial \beta }}\,d\beta +{\frac {\partial \log Z}{\partial x}}\,dx=-U\,d\beta +\beta X\,dx.}
If we write
{\displaystyle U\,d\beta } as
{\displaystyle U\,d\beta =d(\beta U)-\beta \,dU,}
we get
{\displaystyle d(\log Z)=-d(\beta U)+\beta \,dU+\beta X\,dx.}
This means that the change in the internal energy is given by
{\displaystyle dU={\frac {1}{\beta }}\,d(\log Z+\beta U)-X\,dx.}
In the thermodynamic limit, the fundamental thermodynamic relation should hold:
{\displaystyle dU=T\,dS-X\,dx.}
This then implies that the entropy of the system is given by
{\displaystyle S=k\log Z+{\frac {U}{T}}+c,}
where c is some constant. The value of c can be determined by considering the limit T → 0. In this limit the entropy becomes
{\displaystyle S=k\log \Omega _{0}}, where {\displaystyle \Omega _{0}} is the ground-state degeneracy. The partition function in this limit is {\displaystyle \Omega _{0}e^{-\beta U_{0}}}, where {\displaystyle U_{0}} is the ground-state energy. Thus, we see that {\displaystyle c=0} and that
{\displaystyle A=-kT\log Z.}
=== Relating free energy to other variables ===
Combining the definition of Helmholtz free energy
{\displaystyle A=U-TS}
along with the fundamental thermodynamic relation
{\displaystyle \mathrm {d} A=-S\,\mathrm {d} T-P\,\mathrm {d} V+\mu \,\mathrm {d} N,}
one can find expressions for entropy, pressure and chemical potential:
{\displaystyle S=\left.-\left({\frac {\partial A}{\partial T}}\right)\right|_{V,N},\quad P=\left.-\left({\frac {\partial A}{\partial V}}\right)\right|_{T,N},\quad \mu =\left.\left({\frac {\partial A}{\partial N}}\right)\right|_{T,V}.}
These three equations, along with the free energy in terms of the partition function,
A = -kT \log Z,
allow an efficient way of calculating thermodynamic variables of interest given the partition function and are often used in density of state calculations. One can also do Legendre transformations for different systems. For example, for a system with a magnetic field or potential, it is true that
m = -\left(\frac{\partial A}{\partial B}\right)\bigg|_{T,N}, \quad V = \left(\frac{\partial A}{\partial Q}\right)\bigg|_{N,T}.
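As a hedged sketch of how these relations are used in practice, consider a single quantum harmonic oscillator, for which Z is known in closed form; the Python below (illustrative only, with ħω and k set to 1 for convenience) recovers S from a finite-difference derivative of A and checks it against S = (U − A)/T:

```python
import numpy as np

# Units chosen so that hbar*omega = k = 1 (illustrative convenience).
def log_Z(T):
    b = 1.0 / T
    return -b / 2 - np.log(1 - np.exp(-b))

def A(T):
    # Helmholtz free energy A = -kT log Z
    return -T * log_Z(T)

T = 0.8
h = 1e-6
S_deriv = -(A(T + h) - A(T - h)) / (2 * h)   # S = -(dA/dT) at fixed V, N

# Independent check: U = -d(log Z)/d(beta) in closed form, then S = (U - A)/T
b = 1.0 / T
U = 0.5 + 1.0 / (np.exp(b) - 1.0)
S_check = (U - A(T)) / T

print(S_deriv, S_check)  # the two values agree to numerical precision
```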
== Bogoliubov inequality ==
Computing the free energy is an intractable problem for all but the simplest models in statistical physics. A powerful approximation method is mean-field theory, which is a variational method based on the Bogoliubov inequality. This inequality can be formulated as follows.
Suppose we replace the real Hamiltonian H of the model by a trial Hamiltonian \tilde{H}, which has different interactions and may depend on extra parameters that are not present in the original model. If we choose this trial Hamiltonian such that
\langle \tilde{H} \rangle = \langle H \rangle,
where both averages are taken with respect to the canonical distribution defined by the trial Hamiltonian \tilde{H}, then the Bogoliubov inequality states
A \leq \tilde{A},
where A is the free energy of the original Hamiltonian and \tilde{A} is the free energy of the trial Hamiltonian. We will prove this below.
By including a large number of parameters in the trial Hamiltonian and minimizing the free energy, we can expect to get a close approximation to the exact free energy.
The Bogoliubov inequality is often applied in the following way. If we write the Hamiltonian as
H = H_{0} + \Delta H,
where H_{0} is some exactly solvable Hamiltonian, then we can apply the above inequality by defining
\tilde{H} = H_{0} + \langle \Delta H \rangle_{0}.
Here we have defined \langle X \rangle_{0} to be the average of X over the canonical ensemble defined by H_{0}. Since \tilde{H} defined this way differs from H_{0} by a constant, we have in general
\langle X \rangle_{0} = \langle X \rangle,
where \langle X \rangle is still the average over \tilde{H}, as specified above. Therefore,
\langle \tilde{H} \rangle = \big\langle H_{0} + \langle \Delta H \rangle \big\rangle = \langle H \rangle,
and thus the inequality A \leq \tilde{A} holds. The free energy \tilde{A} is the free energy of the model defined by H_{0} plus \langle \Delta H \rangle. This means that
\tilde{A} = \langle H_{0} \rangle_{0} - TS_{0} + \langle \Delta H \rangle_{0} = \langle H \rangle_{0} - TS_{0},
and thus
A \leq \langle H \rangle_{0} - TS_{0}.
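A minimal numerical check of this bound (an illustrative Python sketch; the two-spin system, field strength and coupling are arbitrary choices, not from the original article): split H into a solvable part H_0 and a perturbation ΔH, form the trial Hamiltonian H̃ = H_0 + ⟨ΔH⟩_0, and confirm that the exact free energy never exceeds the trial one.

```python
import numpy as np

beta = 1.0
states = [(s1, s2) for s1 in (-1, 1) for s2 in (-1, 1)]

def H0(s):
    # solvable part: two independent spins in an external field
    return -0.5 * (s[0] + s[1])

def dH(s):
    # perturbation: a spin-spin coupling
    return -0.8 * s[0] * s[1]

def free_energy(energies):
    Z = sum(np.exp(-beta * e) for e in energies)
    return -np.log(Z) / beta

# Exact free energy of H = H0 + dH.
A_exact = free_energy([H0(s) + dH(s) for s in states])

# Canonical distribution of H0 and the average <dH>_0.
w = np.array([np.exp(-beta * H0(s)) for s in states])
P0 = w / w.sum()
mean_dH = sum(p * dH(s) for p, s in zip(P0, states))

# Trial free energy of H~ = H0 + <dH>_0 (a constant shift of H0).
A_trial = free_energy([H0(s) + mean_dH for s in states])

print(A_exact, A_trial)           # A_exact <= A_trial, as the inequality demands
assert A_exact <= A_trial + 1e-12
```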
=== Proof of the Bogoliubov inequality ===
For a classical model we can prove the Bogoliubov inequality as follows. We denote the canonical probability distributions for the Hamiltonian and the trial Hamiltonian by
P_{r} and \tilde{P}_{r}, respectively. From Gibbs' inequality we know that
\sum_{r} \tilde{P}_{r} \log \tilde{P}_{r} \geq \sum_{r} \tilde{P}_{r} \log P_{r}
holds. To see this, consider the difference between the left hand side and the right hand side. We can write this as
\sum_{r} \tilde{P}_{r} \log\left(\frac{\tilde{P}_{r}}{P_{r}}\right).
Since
\log(x) \geq 1 - \frac{1}{x},
it follows that
\sum_{r} \tilde{P}_{r} \log\left(\frac{\tilde{P}_{r}}{P_{r}}\right) \geq \sum_{r} \left(\tilde{P}_{r} - P_{r}\right) = 0,
where in the last step we have used that both probability distributions are normalized to 1.
We can write the inequality as
\left\langle \log \tilde{P}_{r} \right\rangle \geq \left\langle \log P_{r} \right\rangle,
where the averages are taken with respect to \tilde{P}_{r}. If we now substitute in here the expressions for the probability distributions
P_{r} = \frac{\exp[-\beta H(r)]}{Z}
and
\tilde{P}_{r} = \frac{\exp[-\beta \tilde{H}(r)]}{\tilde{Z}},
we get
\left\langle -\beta \tilde{H} - \log \tilde{Z} \right\rangle \geq \left\langle -\beta H - \log Z \right\rangle.
Since the averages of H and \tilde{H} are, by assumption, identical, we have
A \leq \tilde{A}.
Here we have used that the partition functions are constants with respect to taking averages and that the free energy is proportional to minus the logarithm of the partition function.
We can easily generalize this proof to the case of quantum mechanical models. We denote the eigenstates of \tilde{H} by |r\rangle. We denote the diagonal components of the density matrices for the canonical distributions for H and \tilde{H} in this basis as
P_{r} = \left\langle r \left| \frac{\exp[-\beta H]}{Z} \right| r \right\rangle
and
\tilde{P}_{r} = \left\langle r \left| \frac{\exp[-\beta \tilde{H}]}{\tilde{Z}} \right| r \right\rangle = \frac{\exp(-\beta \tilde{E}_{r})}{\tilde{Z}},
where the \tilde{E}_{r} are the eigenvalues of \tilde{H}.
We assume again that the averages of H and \tilde{H} in the canonical ensemble defined by \tilde{H} are the same:
\langle \tilde{H} \rangle = \langle H \rangle,
where
\langle H \rangle = \sum_{r} \tilde{P}_{r} \langle r | H | r \rangle.
The inequality
\sum_{r} \tilde{P}_{r} \log \tilde{P}_{r} \geq \sum_{r} \tilde{P}_{r} \log P_{r}
still holds, as both the P_{r} and the \tilde{P}_{r} sum to 1. On the left-hand side we can replace
\log \tilde{P}_{r} = -\beta \tilde{E}_{r} - \log \tilde{Z}.
On the right-hand side we can use the inequality
\left\langle e^{X} \right\rangle_{r} \geq e^{\langle X \rangle_{r}},
where we have introduced the notation
\langle Y \rangle_{r} \equiv \langle r | Y | r \rangle
for the expectation value of the operator Y in the state r; this inequality follows from Jensen's inequality applied to the convex exponential function. Taking the logarithm of this inequality gives
\log\left[\left\langle e^{X} \right\rangle_{r}\right] \geq \langle X \rangle_{r}.
This allows us to write
\log P_{r} = \log\left[\left\langle \exp(-\beta H - \log Z) \right\rangle_{r}\right] \geq \left\langle -\beta H - \log Z \right\rangle_{r}.
The fact that the averages of H and \tilde{H} are the same then leads to the same conclusion as in the classical case:
A \leq \tilde{A}.
== Generalized Helmholtz energy ==
In the more general case, the mechanical term p\,\mathrm{d}V must be replaced by the product of volume, stress, and an infinitesimal strain:
\mathrm{d}A = V \sum_{ij} \sigma_{ij}\,\mathrm{d}\varepsilon_{ij} - S\,\mathrm{d}T + \sum_{i} \mu_{i}\,\mathrm{d}N_{i},
where \sigma_{ij} is the stress tensor and \varepsilon_{ij} is the strain tensor. In the case of linear elastic materials that obey Hooke's law, the stress is related to the strain by
\sigma_{ij} = C_{ijkl} \varepsilon_{kl},
where we are now using Einstein notation for the tensors, in which repeated indices in a product are summed. We may integrate the expression for \mathrm{d}A to obtain the Helmholtz energy:
A = \frac{1}{2} V C_{ijkl} \varepsilon_{ij} \varepsilon_{kl} - ST + \sum_{i} \mu_{i} N_{i} = \frac{1}{2} V \sigma_{ij} \varepsilon_{ij} - ST + \sum_{i} \mu_{i} N_{i}.
== Application to fundamental equations of state ==
The Helmholtz free energy function for a pure substance (together with its partial derivatives) can be used to determine all other thermodynamic properties for the substance. See, for example, the equations of state for water, as given by the IAPWS in their IAPWS-95 release.
== Application to training auto-encoders ==
Hinton and Zemel "derive an objective function for training auto-encoder based on the minimum description length (MDL) principle". The description length of an input vector using a particular code is the sum of the code cost and the reconstruction cost; given an input vector, they define this sum to be the energy of the code. The true expected combined cost is
A = \sum_{i} p_{i} E_{i} - H,
"which has exactly the form of Helmholtz free energy".
== See also ==
Gibbs free energy and thermodynamic free energy for thermodynamics history overview and discussion of free energy
Grand potential
Enthalpy
Statistical mechanics, which treats the Helmholtz energy from the point of view of thermal and statistical physics
Bennett acceptance ratio for an efficient way to calculate free energy differences and comparison with other methods.
== References ==
== Further reading ==
Atkins' Physical Chemistry, 7th edition, by Peter Atkins and Julio de Paula, Oxford University Press
HyperPhysics: Helmholtz Free Energy; Helmholtz and Gibbs Free Energies
In mathematics and mathematical optimization, the convex conjugate of a function is a generalization of the Legendre transformation which applies to non-convex functions. It is also known as Legendre–Fenchel transformation, Fenchel transformation, or Fenchel conjugate (after Adrien-Marie Legendre and Werner Fenchel). The convex conjugate is widely used for constructing the dual problem in optimization theory, thus generalizing Lagrangian duality.
== Definition ==
Let X be a real topological vector space and let X^{*} be the dual space to X. Denote by
\langle \cdot, \cdot \rangle : X^{*} \times X \to \mathbb{R}
the canonical dual pairing, which is defined by \langle x^{*}, x \rangle \mapsto x^{*}(x).
For a function f : X \to \mathbb{R} \cup \{-\infty, +\infty\} taking values on the extended real number line, its convex conjugate is the function
f^{*} : X^{*} \to \mathbb{R} \cup \{-\infty, +\infty\}
whose value at x^{*} \in X^{*} is defined to be the supremum
f^{*}(x^{*}) := \sup\left\{\langle x^{*}, x \rangle - f(x) : x \in X\right\},
or, equivalently, in terms of the infimum:
f^{*}(x^{*}) := -\inf\left\{f(x) - \langle x^{*}, x \rangle : x \in X\right\}.
This definition can be interpreted as an encoding of the convex hull of the function's epigraph in terms of its supporting hyperplanes.
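The supremum in the definition can be approximated directly on a finite grid. The Python sketch below is illustrative only (the grid bounds and the choice f(x) = x², whose conjugate is (x*)²/4, are arbitrary), and it is reliable only when the maximizer lies well inside the grid:

```python
import numpy as np

def conjugate(f, xs):
    # Numerical convex conjugate restricted to the sample points xs:
    # f*(s) = sup_x [ s*x - f(x) ]
    return lambda s: np.max(s * xs - f(xs))

xs = np.linspace(-10, 10, 20001)
f = lambda x: x**2                    # example function
f_star = conjugate(f, xs)

for s in (-3.0, 0.0, 2.0):
    print(s, f_star(s), s**2 / 4)     # numeric value vs closed form s^2/4
```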
== Examples ==
For more examples, see § Table of selected convex conjugates.
The convex conjugate of an affine function f(x) = \langle a, x \rangle - b is
f^{*}(x^{*}) = \begin{cases} b, & x^{*} = a \\ +\infty, & x^{*} \neq a. \end{cases}
The convex conjugate of a power function f(x) = \frac{1}{p}|x|^{p}, 1 < p < \infty, is
f^{*}(x^{*}) = \frac{1}{q}|x^{*}|^{q}, 1 < q < \infty, \quad \text{where } \tfrac{1}{p} + \tfrac{1}{q} = 1.
The convex conjugate of the absolute value function f(x) = |x| is
f^{*}(x^{*}) = \begin{cases} 0, & |x^{*}| \leq 1 \\ \infty, & |x^{*}| > 1. \end{cases}
The convex conjugate of the exponential function f(x) = e^{x} is
f^{*}(x^{*}) = \begin{cases} x^{*} \ln x^{*} - x^{*}, & x^{*} > 0 \\ 0, & x^{*} = 0 \\ \infty, & x^{*} < 0. \end{cases}
The convex conjugate and Legendre transform of the exponential function agree except that the domain of the convex conjugate is strictly larger as the Legendre transform is only defined for positive real numbers.
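The same grid-based approximation as in the sketch above can be used to confirm the exponential example numerically (illustrative values; the grid must extend far enough to the left, since the maximizer of s·x − eˣ is x = ln s):

```python
import numpy as np

xs = np.linspace(-30.0, 5.0, 200001)     # wide grid; e^x grows fast on the right
f_star = lambda s: np.max(s * xs - np.exp(xs))

for s in (0.5, 1.0, 3.0):
    closed = s * np.log(s) - s           # x* ln x* - x*  for x* > 0
    print(s, f_star(s), closed)          # agree up to grid resolution
```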
=== Connection with expected shortfall (average value at risk) ===
See the article on expected shortfall for a worked example.
Let F denote a cumulative distribution function of a random variable X. Then (integrating by parts),
f(x) := \int_{-\infty}^{x} F(u)\,du = \operatorname{E}\left[\max(0, x - X)\right] = x - \operatorname{E}\left[\min(x, X)\right]
has the convex conjugate
f^{*}(p) = \int_{0}^{p} F^{-1}(q)\,dq = (p - 1)F^{-1}(p) + \operatorname{E}\left[\min(F^{-1}(p), X)\right] = pF^{-1}(p) - \operatorname{E}\left[\max(0, F^{-1}(p) - X)\right].
=== Ordering ===
A particular interpretation has the transform
f^{\text{inc}}(x) := \arg\sup_{t}\, t \cdot x - \int_{0}^{1} \max\{t - f(u), 0\}\,du,
as this is a nondecreasing rearrangement of the initial function f; in particular, f^{\text{inc}} = f for f nondecreasing.
== Properties ==
The convex conjugate of a closed convex function is again a closed convex function. The convex conjugate of a polyhedral convex function (a convex function with polyhedral epigraph) is again a polyhedral convex function.
=== Order reversing ===
Declare that f \leq g if and only if f(x) \leq g(x) for all x. Then convex-conjugation is order-reversing, which by definition means that if f \leq g then f^{*} \geq g^{*}.
For a family of functions \left(f_{\alpha}\right)_{\alpha} it follows from the fact that supremums may be interchanged that
\left(\inf_{\alpha} f_{\alpha}\right)^{*}(x^{*}) = \sup_{\alpha} f_{\alpha}^{*}(x^{*}),
and from the max–min inequality that
\left(\sup_{\alpha} f_{\alpha}\right)^{*}(x^{*}) \leq \inf_{\alpha} f_{\alpha}^{*}(x^{*}).
=== Biconjugate ===
The convex conjugate of a function is always lower semi-continuous. The biconjugate f^{**} (the convex conjugate of the convex conjugate) is also the closed convex hull, i.e. the largest lower semi-continuous convex function with f^{**} \leq f. For proper functions f, f = f^{**} if and only if f is convex and lower semi-continuous, by the Fenchel–Moreau theorem.
=== Fenchel's inequality ===
For any function f and its convex conjugate f^{*}, Fenchel's inequality (also known as the Fenchel–Young inequality) holds for every x \in X and p \in X^{*}:
\langle p, x \rangle \leq f(x) + f^{*}(p).
Furthermore, the equality holds only when p \in \partial f(x).
The proof follows from the definition of convex conjugate:
f^{*}(p) = \sup_{\tilde{x}}\left\{\langle p, \tilde{x} \rangle - f(\tilde{x})\right\} \geq \langle p, x \rangle - f(x).
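A simple worked instance (added here for concreteness, not part of the original text): take f(x) = \frac{x^{2}}{2} on \mathbb{R}, which is self-conjugate, f^{*}(p) = \frac{p^{2}}{2}. Fenchel's inequality then reads
px \leq \frac{x^{2}}{2} + \frac{p^{2}}{2},
which is equivalent to 0 \leq (x - p)^{2}, with equality exactly when p = x = f'(x), i.e. when p \in \partial f(x).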
=== Convexity ===
For two functions f_{0} and f_{1} and a number 0 \leq \lambda \leq 1 the convexity relation
\left((1 - \lambda)f_{0} + \lambda f_{1}\right)^{*} \leq (1 - \lambda)f_{0}^{*} + \lambda f_{1}^{*}
holds. The {*} operation is a convex mapping itself.
=== Infimal convolution ===
The infimal convolution (or epi-sum) of two functions f and g is defined as
\left(f \,\Box\, g\right)(x) = \inf\left\{f(x - y) + g(y) \mid y \in \mathbb{R}^{n}\right\}.
Let f_{1}, \ldots, f_{m} be proper, convex and lower semicontinuous functions on \mathbb{R}^{n}. Then the infimal convolution is convex and lower semicontinuous (but not necessarily proper), and satisfies
\left(f_{1} \,\Box\, \cdots \,\Box\, f_{m}\right)^{*} = f_{1}^{*} + \cdots + f_{m}^{*}.
The infimal convolution of two functions has a geometric interpretation: The (strict) epigraph of the infimal convolution of two functions is the Minkowski sum of the (strict) epigraphs of those functions.
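A grid-based numerical check of the conjugate identity (an illustrative Python sketch; the quadratics f(x) = x² and g(x) = 2x² are arbitrary choices with known conjugates s²/4 and s²/8, so both sides should come out near 3s²/8):

```python
import numpy as np

xs = np.linspace(-10, 10, 2001)
f = xs**2
g = 2 * xs**2

def num_conjugate(vals, s):
    # f*(s) = sup_x [ s*x - f(x) ] restricted to the grid
    return np.max(s * xs - vals)

def inf_conv(x):
    # (f box g)(x) = inf_y [ f(x - y) + g(y) ], with f(x - y) interpolated
    fv = np.interp(x - xs, xs, f, left=np.inf, right=np.inf)
    return np.min(fv + g)

box = np.array([inf_conv(x) for x in xs])

for s in (-2.0, 1.0, 3.0):
    lhs = num_conjugate(box, s)                      # (f box g)*(s)
    rhs = num_conjugate(f, s) + num_conjugate(g, s)  # f*(s) + g*(s)
    print(s, lhs, rhs)                               # agree up to grid error
```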
=== Maximizing argument ===
If the function f is differentiable, then its derivative is the maximizing argument in the computation of the convex conjugate:
f'(x) = x^{*}(x) := \arg\sup_{x^{*}}\, \langle x, x^{*} \rangle - f^{*}(x^{*})
and
f^{*\prime}(x^{*}) = x(x^{*}) := \arg\sup_{x}\, \langle x, x^{*} \rangle - f(x);
hence
x = \nabla f^{*}(\nabla f(x)),
x^{*} = \nabla f(\nabla f^{*}(x^{*})),
and moreover
f''(x) \cdot f^{*\prime\prime}(x^{*}(x)) = 1,
f^{*\prime\prime}(x^{*}) \cdot f''(x(x^{*})) = 1.
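These relations can be checked on the exponential/entropy pair from the examples above (a small illustrative sketch; f(x) = eˣ, f*(s) = s ln s − s, so f*′(s) = ln s and f*″(s) = 1/s):

```python
import numpy as np

f_prime       = lambda x: np.exp(x)   # f(x) = e^x
f_star_prime  = lambda s: np.log(s)   # conjugate f*(s) = s ln s - s
f_second      = lambda x: np.exp(x)
f_star_second = lambda s: 1.0 / s

for x in (-1.0, 0.3, 2.0):
    s = f_prime(x)                          # s = grad f(x)
    print(f_star_prime(s),                  # grad f*(grad f(x)) recovers x
          f_second(x) * f_star_second(s))   # f''(x) * f*''(s) equals 1
```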
=== Scaling properties ===
If for some \gamma > 0, g(x) = \alpha + \beta x + \gamma \cdot f(\lambda x + \delta), then
g^{*}(x^{*}) = -\alpha - \delta\,\frac{x^{*} - \beta}{\lambda} + \gamma \cdot f^{*}\left(\frac{x^{*} - \beta}{\lambda\gamma}\right).
=== Behavior under linear transformations ===
Let A : X \to Y be a bounded linear operator. For any convex function f on X,
\left(Af\right)^{*} = f^{*}A^{*},
where
(Af)(y) = \inf\{f(x) : x \in X,\ Ax = y\}
is the preimage of f with respect to A and A^{*} is the adjoint operator of A.
A closed convex function f is symmetric with respect to a given set G of orthogonal linear transformations,
f(Ax) = f(x) \text{ for all } x \text{ and all } A \in G,
if and only if its convex conjugate f^{*} is symmetric with respect to G.
== Table of selected convex conjugates ==
The following table provides Legendre transforms for many common functions as well as a few useful properties.
== See also ==
Dual problem
Fenchel's duality theorem
Legendre transformation
Young's inequality for products
== References ==
Arnol'd, Vladimir Igorevich (1989). Mathematical Methods of Classical Mechanics (Second ed.). Springer. ISBN 0-387-96890-3. MR 0997295.
Rockafellar, R. Tyrrell; Wets, Roger J.-B. (26 June 2009). Variational Analysis. Grundlehren der mathematischen Wissenschaften. Vol. 317. Berlin New York: Springer Science & Business Media. ISBN 9783642024313. OCLC 883392544.
Rockafellar, R. Tyrell (1970). Convex Analysis. Princeton: Princeton University Press. ISBN 0-691-01586-4. MR 0274683.
== Further reading ==
Touchette, Hugo (2014-10-16). "Legendre-Fenchel transforms in a nutshell" (PDF). Archived from the original (PDF) on 2017-04-07. Retrieved 2017-01-09.
Touchette, Hugo (2006-11-21). "Elements of convex analysis" (PDF). Archived from the original (PDF) on 2015-05-26. Retrieved 2008-03-26.
Ellerman, David Patterson (1995-03-21). "Chapter 12: Parallel Addition, Series-Parallel Duality, and Financial Mathematics". Intellectual Trespassing as a Way of Life: Essays in Philosophy, Economics, and Mathematics (PDF). The worldly philosophy: studies in intersection of philosophy and economics. Rowman & Littlefield Publishers, Inc. pp. 237–268. ISBN 0-8476-7932-2. Archived (PDF) from the original on 2016-03-05. Retrieved 2019-08-09.
Ellerman, David Patterson (May 2004) [1995-03-21]. "Introduction to Series-Parallel Duality" (PDF). University of California at Riverside. CiteSeerX 10.1.1.90.3666. Archived from the original on 2019-08-10. Retrieved 2019-08-09.
In mathematics, a real-valued function is called convex if the line segment between any two distinct points on the graph of the function lies above or on the graph between the two points. Equivalently, a function is convex if its epigraph (the set of points on or above the graph of the function) is a convex set.
In simple terms, a convex function graph is shaped like a cup \cup (or a straight line like a linear function), while a concave function's graph is shaped like a cap \cap.
A twice-differentiable function of a single variable is convex if and only if its second derivative is nonnegative on its entire domain. Well-known examples of convex functions of a single variable include a linear function f(x) = cx (where c is a real number), a quadratic function cx^{2} (c as a nonnegative real number) and an exponential function ce^{x} (c as a nonnegative real number).
Convex functions play an important role in many areas of mathematics. They are especially important in the study of optimization problems where they are distinguished by a number of convenient properties. For instance, a strictly convex function on an open set has no more than one minimum. Even in infinite-dimensional spaces, under suitable additional hypotheses, convex functions continue to satisfy such properties and as a result, they are the most well-understood functionals in the calculus of variations. In probability theory, a convex function applied to the expected value of a random variable is always bounded above by the expected value of the convex function of the random variable. This result, known as Jensen's inequality, can be used to deduce inequalities such as the arithmetic–geometric mean inequality and Hölder's inequality.
== Definition ==
Let X be a convex subset of a real vector space and let f : X \to \mathbb{R} be a function.
Then f is called convex if and only if any of the following equivalent conditions hold:
For all 0 \leq t \leq 1 and all x_{1}, x_{2} \in X:
f\left(tx_{1} + (1 - t)x_{2}\right) \leq tf\left(x_{1}\right) + (1 - t)f\left(x_{2}\right)
The right hand side represents the straight line between \left(x_{1}, f\left(x_{1}\right)\right) and \left(x_{2}, f\left(x_{2}\right)\right) in the graph of f as a function of t; increasing t from 0 to 1 or decreasing t from 1 to 0 sweeps this line. Similarly, the argument of the function f in the left hand side represents the straight line between x_{1} and x_{2} in X or the x-axis of the graph of f. So, this condition requires that the straight line between any pair of points on the curve of f be above or just meeting the graph.
For all 0 < t < 1 and all x_{1}, x_{2} \in X such that x_{1} \neq x_{2}:
f\left(tx_{1} + (1 - t)x_{2}\right) \leq tf\left(x_{1}\right) + (1 - t)f\left(x_{2}\right)
The difference between this second condition and the first condition above is that this condition does not include the intersection points (for example, \left(x_{1}, f\left(x_{1}\right)\right) and \left(x_{2}, f\left(x_{2}\right)\right)) between the straight line passing through a pair of points on the curve of f (the straight line is represented by the right hand side of this condition) and the curve of f; the first condition includes the intersection points, as it becomes f\left(x_{1}\right) \leq f\left(x_{1}\right) or f\left(x_{2}\right) \leq f\left(x_{2}\right) at t = 0 or 1, or when x_{1} = x_{2}. In fact, the intersection points do not need to be considered in a condition of convexity using
f\left(tx_{1} + (1 - t)x_{2}\right) \leq tf\left(x_{1}\right) + (1 - t)f\left(x_{2}\right),
because f\left(x_{1}\right) \leq f\left(x_{1}\right) and f\left(x_{2}\right) \leq f\left(x_{2}\right) are always true (and so not useful as part of a condition).
The second statement characterizing convex functions that are valued in the real line \mathbb{R} is also the statement used to define convex functions that are valued in the extended real number line [-\infty, \infty] = \mathbb{R} \cup \{\pm\infty\}, where such a function f is allowed to take \pm\infty as a value. The first statement is not used because it permits t to take 0 or 1 as a value, in which case, if f\left(x_{1}\right) = \pm\infty or f\left(x_{2}\right) = \pm\infty, respectively, then tf\left(x_{1}\right) + (1 - t)f\left(x_{2}\right) would be undefined (because the multiplications 0 \cdot \infty and 0 \cdot (-\infty) are undefined). The sum -\infty + \infty is also undefined, so a convex extended real-valued function is typically only allowed to take exactly one of -\infty and +\infty as a value.
The second statement can also be modified to get the definition of strict convexity, where the latter is obtained by replacing \leq with the strict inequality <. Explicitly, the map f is called strictly convex if and only if for all real 0 < t < 1 and all x_{1}, x_{2} \in X such that x_{1} \neq x_{2}:
f\left(tx_{1} + (1 - t)x_{2}\right) < tf\left(x_{1}\right) + (1 - t)f\left(x_{2}\right)
A strictly convex function f is a function for which the straight line between any pair of points on the curve of f lies above the curve except at the intersection points between the straight line and the curve. An example of a function which is convex but not strictly convex is f(x, y) = x^{2} + y. This function is not strictly convex because it is linear in y: for any two points sharing an x coordinate, the straight line between them lies on the graph, so the defining inequality holds with equality rather than strictly.
The function f is said to be concave (resp. strictly concave) if -f (f multiplied by −1) is convex (resp. strictly convex).
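The defining inequality lends itself to a direct randomized test. The Python sketch below is illustrative only: random sampling can refute convexity (by finding a violation) but can never prove it.

```python
import random

def probably_convex(f, lo, hi, trials=10000):
    # Randomized check of f(t*x1 + (1-t)*x2) <= t*f(x1) + (1-t)*f(x2).
    # Returns False on any violation; True only means none was found.
    for _ in range(trials):
        x1 = random.uniform(lo, hi)
        x2 = random.uniform(lo, hi)
        t = random.random()
        lhs = f(t * x1 + (1 - t) * x2)
        rhs = t * f(x1) + (1 - t) * f(x2)
        if lhs > rhs + 1e-12:
            return False
    return True

print(probably_convex(lambda x: x * x, -5, 5))   # True: x^2 is convex
print(probably_convex(lambda x: x ** 3, -5, 5))  # False: x^3 is not convex on R
```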
== Alternative naming ==
The term convex is often referred to as convex down or concave upward, and the term concave is often referred to as concave down or convex upward. If the term "convex" is used without an "up" or "down" keyword, then it refers strictly to a cup shaped graph \cup. As an example, Jensen's inequality refers to an inequality involving a convex or convex-(down) function.
== Properties ==
Many properties of convex functions have the same simple formulation for functions of many variables as for functions of one variable. See below the properties for the case of many variables, as some of them are not listed for functions of one variable.
=== Functions of one variable ===
Suppose f is a function of one real variable defined on an interval, and let
R(x_{1}, x_{2}) = \frac{f(x_{2}) - f(x_{1})}{x_{2} - x_{1}}
(note that R(x_{1}, x_{2}) is the slope of the chord through (x_{1}, f(x_{1})) and (x_{2}, f(x_{2})); the function R is symmetric in (x_{1}, x_{2}), meaning that R does not change by exchanging x_{1} and x_{2}).
f is convex if and only if R(x_{1}, x_{2}) is monotonically non-decreasing in x_{1} for every fixed x_{2} (or vice versa). This characterization of convexity is quite useful to prove the following results.
A convex function f of one real variable defined on some open interval C is continuous on C. Moreover, f admits left and right derivatives, and these are monotonically non-decreasing. In addition, the left derivative is left-continuous and the right derivative is right-continuous. As a consequence, f is differentiable at all but at most countably many points; the set on which f is not differentiable can however still be dense. If C is closed, then f may fail to be continuous at the endpoints of C (an example is shown in the examples section).
A differentiable function of one variable is convex on an interval if and only if its derivative is monotonically non-decreasing on that interval. If a function is differentiable and convex then it is also continuously differentiable.
A differentiable function of one variable is convex on an interval if and only if its graph lies above all of its tangents:: 69
f(x) \geq f(y) + f'(y)(x - y)
for all x and y in the interval.
A twice differentiable function of one variable is convex on an interval if and only if its second derivative is non-negative there; this gives a practical test for convexity. Visually, a twice differentiable convex function "curves up", without any bends the other way (inflection points). If its second derivative is positive at all points then the function is strictly convex, but the converse does not hold. For example, the second derivative of f(x) = x^{4} is f''(x) = 12x^{2}, which is zero for x = 0, but x^{4} is strictly convex.
This property and the above property in terms of "...its derivative is monotonically non-decreasing..." are not equivalent: if f'' is non-negative on an interval X then f' is monotonically non-decreasing on X, but the converse is not true; for example, f' may be monotonically non-decreasing on X while its derivative f'' is not defined at some points of X.
If f is a convex function of one real variable and f(0) \leq 0, then f is superadditive on the positive reals, that is,
f(a + b) \geq f(a) + f(b)
for positive real numbers a and b.
A function f is midpoint convex on an interval C if for all x_{1}, x_{2} \in C
f\left(\frac{x_{1} + x_{2}}{2}\right) \leq \frac{f(x_{1}) + f(x_{2})}{2}.
This condition is only slightly weaker than convexity. For example, a real-valued Lebesgue measurable function that is midpoint-convex is convex: this is a theorem of Sierpiński. In particular, a continuous function that is midpoint convex will be convex.
=== Functions of several variables ===
A function that is marginally convex in each individual variable is not necessarily (jointly) convex. For example, the function f(x, y) = xy is marginally linear, and thus marginally convex, in each variable, but not (jointly) convex.
A function f : X \to [-\infty, \infty] valued in the extended real numbers [-\infty, \infty] = \mathbb{R} \cup \{\pm\infty\} is convex if and only if its epigraph
\{(x, r) \in X \times \mathbb{R} : r \geq f(x)\}
is a convex set.
A differentiable function f defined on a convex domain is convex if and only if
f(x) \geq f(y) + \nabla f(y)^{T} \cdot (x - y)
holds for all x, y in the domain.
A twice differentiable function of several variables is convex on a convex set if and only if its Hessian matrix of second partial derivatives is positive semidefinite on the interior of the convex set.
For a convex function f, the sublevel sets \{x : f(x) < a\} and \{x : f(x) \leq a\} with a \in \mathbb{R} are convex sets. A function that satisfies this property is called a quasiconvex function and may fail to be a convex function.
Consequently, the set of global minimisers of a convex function f is a convex set: \operatorname{argmin} f is convex.
Any local minimum of a convex function is also a global minimum. A strictly convex function will have at most one global minimum.
Jensen's inequality applies to every convex function f. If X is a random variable taking values in the domain of f, then
\operatorname{E}(f(X)) \geq f(\operatorname{E}(X)),
where \operatorname{E} denotes the mathematical expectation. Indeed, convex functions are exactly those that satisfy the hypothesis of Jensen's inequality.
A first-order homogeneous function of two positive variables x and y (that is, a function satisfying f(ax, ay) = af(x, y) for all positive real a, x, y > 0) that is convex in one variable must be convex in the other variable.
== Operations that preserve convexity ==
-f is concave if and only if f is convex.
If r is any real number then r + f is convex if and only if f is convex.
Nonnegative weighted sums: if w_{1}, \ldots, w_{n} \geq 0 and f_{1}, \ldots, f_{n} are all convex, then so is w_{1}f_{1} + \cdots + w_{n}f_{n}. In particular, the sum of two convex functions is convex. This property extends to infinite sums, integrals and expected values as well (provided that they exist).
Elementwise maximum: let \{f_{i}\}_{i \in I} be a collection of convex functions. Then g(x) = \sup_{i \in I} f_{i}(x) is convex. The domain of g(x) is the collection of points where the expression is finite. Important special cases:
If f_{1}, \ldots, f_{n} are convex functions then so is g(x) = \max\left\{f_{1}(x), \ldots, f_{n}(x)\right\}.
Danskin's theorem: If f(x, y) is convex in x then g(x) = \sup_{y \in C} f(x, y) is convex in x even if C is not a convex set.
Composition:
If f and g are convex functions and g is non-decreasing over a univariate domain, then h(x) = g(f(x)) is convex. For example, if f is convex, then so is e^{f(x)}, because e^{x} is convex and monotonically increasing.
If f is concave and g is convex and non-increasing over a univariate domain, then h(x) = g(f(x)) is convex.
Convexity is invariant under affine maps: that is, if f is convex with domain D_{f} \subseteq \mathbf{R}^{m}, then so is g(x) = f(Ax + b), where A \in \mathbf{R}^{m \times n}, b \in \mathbf{R}^{m}, with domain D_{g} \subseteq \mathbf{R}^{n}.
Minimization: If f(x, y) is convex in (x, y) then g(x) = \inf_{y \in C} f(x, y) is convex in x, provided that C is a convex set and that g(x) \neq -\infty.
If f is convex, then its perspective g(x, t) = tf\left(\tfrac{x}{t}\right) with domain \left\{(x, t) : \tfrac{x}{t} \in \operatorname{Dom}(f), t > 0\right\} is convex.
Let X be a vector space. f : X \to \mathbf{R} is convex and satisfies f(0) \leq 0 if and only if f(ax + by) \leq af(x) + bf(y) for any x, y \in X and any non-negative real numbers a, b that satisfy a + b \leq 1.
== Strongly convex functions ==
The concept of strong convexity extends and parametrizes the notion of strict convexity. Intuitively, a strongly convex function is a function that grows as fast as a quadratic function. A strongly convex function is also strictly convex, but not vice versa. If a one-dimensional function f is twice continuously differentiable and the domain is the real line, then we can characterize it as follows:
f convex if and only if f''(x) \geq 0 for all x.
f strictly convex if f''(x) > 0 for all x (note: this is sufficient, but not necessary).
f strongly convex if and only if f''(x) \geq m > 0 for all x.
For example, let f be strictly convex, and suppose there is a sequence of points (x_{n}) such that f''(x_{n}) = \tfrac{1}{n}. Even though f''(x_{n}) > 0, the function is not strongly convex because f''(x) will become arbitrarily small.
More generally, a differentiable function f is called strongly convex with parameter m > 0 if the following inequality holds for all points x, y in its domain:
(\nabla f(x) - \nabla f(y))^{T}(x - y) \geq m\|x - y\|_{2}^{2}
or, more generally,
\langle \nabla f(x) - \nabla f(y), x - y \rangle \geq m\|x - y\|^{2}
where \langle \cdot, \cdot \rangle is any inner product, and \|\cdot\| is the corresponding norm. Some authors refer to functions satisfying this inequality as elliptic functions.
An equivalent condition is the following:
f(y) \geq f(x) + \nabla f(x)^{T}(y - x) + \frac{m}{2}\|y - x\|_{2}^{2}
It is not necessary for a function to be differentiable in order to be strongly convex. A third definition for a strongly convex function, with parameter m, is that, for all x, y in the domain and t \in [0, 1],
f(tx + (1 - t)y) \leq tf(x) + (1 - t)f(y) - \frac{1}{2}mt(1 - t)\|x - y\|_{2}^{2}
Notice that this definition approaches the definition for strict convexity as m \to 0, and is identical to the definition of a convex function when m = 0. Despite this, functions exist that are strictly convex but are not strongly convex for any m > 0 (see example below).
If the function f is twice continuously differentiable, then it is strongly convex with parameter m if and only if \nabla^{2} f(x) \succeq mI for all x in the domain, where I is the identity and \nabla^{2} f is the Hessian matrix, and the inequality \succeq means that \nabla^{2} f(x) - mI is positive semi-definite. This is equivalent to requiring that the minimum eigenvalue of \nabla^{2} f(x) be at least m for all x.
If the domain is just the real line, then \nabla^{2} f(x) is just the second derivative f''(x), so the condition becomes f''(x) \geq m. If m = 0 then this means the Hessian is positive semidefinite (or, if the domain is the real line, that f''(x) \geq 0), which implies the function is convex, and perhaps strictly convex, but not strongly convex.
Assuming still that the function is twice continuously differentiable, one can show that the lower bound on \nabla^{2} f(x) implies that it is strongly convex. Using Taylor's theorem there exists
z \in \{tx + (1 - t)y : t \in [0, 1]\}
such that
f(y) = f(x) + \nabla f(x)^{T}(y - x) + \frac{1}{2}(y - x)^{T} \nabla^{2} f(z)(y - x).
Then
(y - x)^{T} \nabla^{2} f(z)(y - x) \geq m(y - x)^{T}(y - x)
by the assumption about the eigenvalues, and hence we recover the second strong convexity equation above.
A function f is strongly convex with parameter m if and only if the function
x \mapsto f(x) - \frac{m}{2}\|x\|^{2}
is convex.
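This equivalence gives a convenient numerical test (an illustrative sketch reusing the randomized convexity check from the definition section; f(x) = eˣ + x² with claimed parameter m = 2 is an arbitrary example, consistent with f″(x) = eˣ + 2 ≥ 2):

```python
import math
import random

def probably_convex(f, lo, hi, trials=10000):
    # Randomized convexity check: False on any violation of the defining
    # inequality, True if no violation was found.
    for _ in range(trials):
        x1, x2, t = random.uniform(lo, hi), random.uniform(lo, hi), random.random()
        if f(t * x1 + (1 - t) * x2) > t * f(x1) + (1 - t) * f(x2) + 1e-9:
            return False
    return True

m = 2.0
f = lambda x: math.exp(x) + x * x      # f''(x) = e^x + 2 >= 2
g = lambda x: f(x) - (m / 2) * x * x   # g(x) = e^x, which is convex

print(probably_convex(g, -5.0, 5.0))   # True, consistent with strong convexity
```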
A twice continuously differentiable function f on a compact domain X that satisfies f''(x) > 0 for all x \in X is strongly convex. The proof of this statement follows from the extreme value theorem, which states that a continuous function on a compact set attains its maximum and minimum.
Strongly convex functions are in general easier to work with than convex or strictly convex functions, since they are a smaller class. Like strictly convex functions, strongly convex functions have unique minima on compact sets.
=== Properties of strongly-convex functions ===
If f is a strongly-convex function with parameter m, then:: Prop.6.1.4
For every real number r, the level set \{x \mid f(x) \leq r\} is compact.
The function f has a unique global minimum on \mathbb{R}^{n}.
== Uniformly convex functions ==
A uniformly convex function, with modulus \phi, is a function f that, for all x, y in the domain and t \in [0, 1], satisfies
f(tx + (1 - t)y) \leq tf(x) + (1 - t)f(y) - t(1 - t)\phi(\|x - y\|),
where \phi is a function that is non-negative and vanishes only at 0. This is a generalization of the concept of strongly convex function; by taking \phi(\alpha) = \tfrac{m}{2}\alpha^{2} we recover the definition of strong convexity.
It is worth noting that some authors require the modulus \phi to be an increasing function, but this condition is not required by all authors.
== Examples ==
=== Functions of one variable ===
The function f(x) = x^{2} has f''(x) = 2 > 0, so f is a convex function. It is also strongly convex (and hence strictly convex too), with strong convexity constant 2.
The function f(x) = x^{4} has f''(x) = 12x^{2} \geq 0, so f is a convex function. It is strictly convex, even though the second derivative is not strictly positive at all points. It is not strongly convex.
The absolute value function f(x) = |x| is convex (as reflected in the triangle inequality), even though it does not have a derivative at the point x = 0. It is not strictly convex.
The function f(x) = |x|^{p} for p \geq 1 is convex.
The exponential function f(x) = e^{x} is convex. It is also strictly convex, since f''(x) = e^{x} > 0, but it is not strongly convex since the second derivative can be arbitrarily close to zero. More generally, the function g(x) = e^{f(x)} is logarithmically convex if f is a convex function. The term "superconvex" is sometimes used instead.
The function f with domain [0,1] defined by f(0) = f(1) = 1 and f(x) = 0 for 0 < x < 1 is convex; it is continuous on the open interval (0, 1), but not continuous at 0 and 1.
The function x^{3} has second derivative 6x; thus it is convex on the set where x \geq 0 and concave on the set where x \leq 0.
Examples of functions that are monotonically increasing but not convex include f(x) = \sqrt{x} and g(x) = \log x.
Examples of functions that are convex but not monotonically increasing include h(x) = x^{2} and k(x) = -x.
The function f(x) = \tfrac{1}{x} has f''(x) = \tfrac{2}{x^{3}}, which is greater than 0 if x > 0, so f(x) is convex on the interval (0, \infty). It is concave on the interval (-\infty, 0).
The function f(x) = \tfrac{1}{x^{2}}, with f(0) = \infty, is convex on the interval (0, \infty) and convex on the interval (-\infty, 0), but not convex on the interval (-\infty, \infty), because of the singularity at x = 0.
=== Functions of n variables ===
LogSumExp function, also called softmax function, is a convex function.
The function -\log\det(X) on the domain of positive-definite matrices is convex.: 74
Every real-valued linear transformation is convex but not strictly convex, since if f is linear, then f(a + b) = f(a) + f(b). This statement also holds if we replace "convex" by "concave".
Every real-valued affine function, that is, each function of the form f(x) = a^{T}x + b, is simultaneously convex and concave.
Every norm is a convex function, by the triangle inequality and positive homogeneity.
The spectral radius of a nonnegative matrix is a convex function of its diagonal elements.
== See also ==
== Notes ==
== References ==
Bertsekas, Dimitri (2003). Convex Analysis and Optimization. Athena Scientific.
Borwein, Jonathan, and Lewis, Adrian. (2000). Convex Analysis and Nonlinear Optimization. Springer.
Donoghue, William F. (1969). Distributions and Fourier Transforms. Academic Press.
Hiriart-Urruty, Jean-Baptiste, and Lemaréchal, Claude. (2004). Fundamentals of Convex analysis. Berlin: Springer.
Krasnosel'skii M.A., Rutickii Ya.B. (1961). Convex Functions and Orlicz Spaces. Groningen: P.Noordhoff Ltd.
Lauritzen, Niels (2013). Undergraduate Convexity. World Scientific Publishing.
Luenberger, David (1984). Linear and Nonlinear Programming. Addison-Wesley.
Luenberger, David (1969). Optimization by Vector Space Methods. Wiley & Sons.
Rockafellar, R. T. (1970). Convex analysis. Princeton: Princeton University Press.
Thomson, Brian (1994). Symmetric Properties of Real Functions. CRC Press.
Zălinescu, C. (2002). Convex analysis in general vector spaces. River Edge, NJ: World Scientific Publishing Co., Inc. pp. xx+367. ISBN 981-238-067-1. MR 1921556.
== External links ==
"Convex function (of a real variable)", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"Convex function (of a complex variable)", Encyclopedia of Mathematics, EMS Press, 2001 [1994] | Wikipedia/Strictly_convex_function |
A water supply network or water supply system is a system of engineered hydrologic and hydraulic components that provide water supply. A water supply system typically includes the following:
A drainage basin (see water purification – sources of drinking water)
A raw water collection point (above or below ground) where the water accumulates, such as a lake, a river, or groundwater from an underground aquifer. Raw water may be transferred using uncovered ground-level aqueducts, covered tunnels, or underground water pipes to water purification facilities.
Water purification facilities. Treated water is transferred using water pipes (usually underground).
Water storage facilities such as reservoirs, water tanks, or water towers. Smaller water systems may store the water in cisterns or pressure vessels. Tall buildings may also need to store water locally in pressure vessels in order for the water to reach the upper floors.
Additional water pressurizing components such as pumping stations may need to be situated at the outlet of underground or aboveground reservoirs or cisterns (if gravity flow is impractical).
A pipe network for distribution of water to consumers (which may be private houses or industrial, commercial, or institution establishments) and other usage points (such as fire hydrants)
Connections to the sewers (underground pipes, or aboveground ditches in some developing countries) are generally found downstream of the water consumers, but the sewer system is considered to be a separate system, rather than part of the water supply system.
Water supply networks are often run by public utilities of the water industry.
== Water extraction and raw water transfer ==
Raw water (untreated) is from a surface water source (such as an intake on a lake or a river) or from a groundwater source (such as a water well drawing from an underground aquifer) within the watershed that provides the water resource.
The raw water is transferred to the water purification facilities using uncovered aqueducts, covered tunnels or underground water pipes.
== Water treatment ==
Virtually all large systems must treat the water, a fact that is tightly regulated by global, state and federal agencies, such as the World Health Organization (WHO) or the United States Environmental Protection Agency (EPA). Water treatment must occur before the product reaches the consumer and afterwards (when it is discharged again). Water purification usually occurs close to the final delivery points to reduce pumping costs and the chances of the water becoming contaminated after treatment.
Traditional surface water treatment plants generally consist of three steps: clarification, filtration and disinfection. Clarification refers to the separation of particles (dirt, organic matter, etc.) from the water stream. Chemical addition (i.e. alum, ferric chloride) destabilizes the particle charges and prepares them for clarification either by settling or floating out of the water stream. Sand, anthracite or activated carbon filters refine the water stream, removing smaller particulate matter. While other methods of disinfection exist, the preferred method is via chlorine addition. Chlorine effectively kills bacteria and most viruses and maintains a residual to protect the water supply through the supply network.
== Water distribution network ==
The product, delivered to the point of consumption, is called potable water if it meets the water quality standards required for human consumption.
The water in the supply network is maintained at positive pressure to ensure that water reaches all parts of the network, that a sufficient flow is available at every take-off point and to ensure that untreated water in the ground cannot enter the network. The water is typically pressurised by pumping the water into storage tanks constructed at the highest local point in the network. One network may have several such service reservoirs.
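To make the gravity-pressurisation idea concrete, here is a minimal sketch (illustrative, not from this article; all figures are invented) of the static gauge pressure an elevated service reservoir provides at a lower tap, using p = ρgh:

```python
# Minimal sketch: static gauge pressure delivered by an elevated service
# reservoir, ignoring friction losses in the pipes. All figures are invented.

RHO_WATER = 1000.0   # density of water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

def static_pressure_kpa(tank_water_level_m: float, tap_elevation_m: float) -> float:
    head = tank_water_level_m - tap_elevation_m   # available head, m
    return RHO_WATER * G * head / 1000.0          # Pa -> kPa

# A water surface 35 m above the tap yields roughly 343 kPa of static pressure.
print(static_pressure_kpa(tank_water_level_m=120.0, tap_elevation_m=85.0))
```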
In small domestic systems, the water may be pressurised by a pressure vessel or even by an underground cistern (the latter however does need additional pressurizing). This eliminates the need of a water tower or any other heightened water reserve to supply the water pressure.
These systems are usually owned and maintained by local governments such as cities or other public entities, but are occasionally operated by a commercial enterprise (see water privatization). Water supply networks are part of the master planning of communities, counties, and municipalities. Their planning and design requires the expertise of city planners and civil engineers, who must consider many factors, such as location, current demand, future growth, leakage, pressure, pipe size, pressure loss, fire fighting flows, etc.—using pipe network analysis and other tools.
As water passes through the distribution system, the water quality can degrade by chemical reactions and biological processes. Corrosion of metal pipe materials in the distribution system can cause the release of metals into the water with undesirable aesthetic and health effects. Release of iron from unlined iron pipes can result in customer reports of "red water" at the tap. Release of copper from copper pipes can result in customer reports of "blue water" and/or a metallic taste. Release of lead can occur from the solder used to join copper pipe together or from brass fixtures. Copper and lead levels at the consumer's tap are regulated to protect consumer health.
Utilities will often adjust the chemistry of the water before distribution to minimize its corrosiveness. The simplest adjustment involves control of pH and alkalinity to produce a water that tends to passivate corrosion by depositing a layer of calcium carbonate. Corrosion inhibitors are often added to reduce release of metals into the water. Common corrosion inhibitors added to the water are phosphates and silicates.
Maintenance of a biologically safe drinking water is another goal in water distribution. Typically, a chlorine based disinfectant, such as sodium hypochlorite or monochloramine is added to the water as it leaves the treatment plant. Booster stations can be placed within the distribution system to ensure that all areas of the distribution system have adequate sustained levels of disinfection.
=== Topologies ===
Like electric power lines, roads, and microwave radio networks, water systems may have a loop or branch network topology, or a combination of both. Loop networks are laid out in circular or grid patterns, so that if any one section of water distribution main fails or needs repair, that section can be isolated without disrupting service to all users on the network.
Most systems are divided into zones. Factors determining the extent or size of a zone can include hydraulics, telemetry systems, history, and population density. Sometimes systems are designed for a specific area then are modified to accommodate development. Terrain affects hydraulics and some forms of telemetry. While each zone may operate as a stand-alone system, there is usually some arrangement to interconnect zones in order to manage equipment failures or system failures.
== Water network maintenance ==
Water supply networks usually represent the majority of assets of a water utility. Systematic documentation of maintenance works using a computerized maintenance management system (CMMS) is a key to a successful operation of a water utility.
== Sustainable urban water supply ==
A sustainable urban water supply network covers all the activities related to provision of potable water. Sustainable development is of increasing importance for the water supply to urban areas. Incorporating innovative water technologies into water supply systems improves their sustainability, and gives the water supply system flexibility, providing a fundamental and effective means of sustainability based on an integrated real options approach.
Water is an essential natural resource for human existence. It is needed in every industrial and natural process, for example, it is used for oil refining, for liquid-liquid extraction in hydro-metallurgical processes, for cooling, for scrubbing in the iron and the steel industry, and for several operations in food processing facilities.
It is necessary to adopt a new approach to design urban water supply networks; water shortages are expected in the forthcoming decades and environmental regulations for water utilization and waste-water disposal are increasingly stringent.
To achieve a sustainable water supply network, new sources of water must be developed and environmental pollution reduced.
The price of water is increasing, so less water must be wasted and actions must be taken to prevent pipeline leakage. Shutting down the supply service to fix leaks is less and less tolerated by consumers. A sustainable water supply network must monitor the freshwater consumption rate and the waste-water generation rate.
Many of the urban water supply networks in developing countries face problems related to population increase, water scarcity, and environmental pollution.
=== Population growth ===
In 1900 just 13% of the global population lived in cities. By 2005, 49% of the global population lived in urban areas, and by 2030 this share is predicted to rise to 60%. Attempts by governments to expand the water supply are costly and often insufficient. The building of new illegal settlements makes it hard to map, and make connections to, the water supply, and leads to inadequate water management. In 2002, there were 158 million people with inadequate water supply. An increasing number of people live in slums, in inadequate sanitary conditions, and are therefore at risk of disease.
=== Water scarcity ===
Potable water is not well distributed in the world. According to the WHO, 1.8 million deaths are attributed to unsafe water supplies every year. Many people have no access at all, or no access to adequate quality and quantity, of potable water, even though water itself may be abundant: poor people in developing countries can live close to major rivers, or in high-rainfall areas, yet lack access to potable water entirely. In other places, absolute lack of water itself causes deaths every year.
Where the water supply system cannot reach the slums, people resort to hand pumps, pit wells, rivers, canals, swamps and any other available source of water. In most cases the water quality is unfit for human consumption. The principal cause of water scarcity is the growth in demand; water is taken from remote areas to satisfy the needs of urban areas. Another cause is climate change: precipitation patterns have changed, rivers have decreased their flow, lakes are drying up, and aquifers are being emptied.
=== Governmental issues ===
In developing countries many governments are corrupt and poor, and they respond to these problems with frequently changing policies and unclear agreements. Water demand exceeds supply, and household and industrial water supplies are prioritised over other uses, which leads to water stress. Potable water has a price in the market; water often becomes a business for private companies, which earn a profit by putting a higher price on water, imposing a barrier for lower-income people. The Millennium Development Goals propose the changes required.
Goal 6 of the United Nations' Sustainable Development Goals is to "Ensure availability and sustainable management of water and sanitation for all". This is in recognition of the human right to water and sanitation, which was formally acknowledged at the United Nations General Assembly in 2010, that "clean drinking water and sanitation are essential to the recognition of all human rights". Sustainable water supply includes ensuring availability, accessibility, affordability and quality of water for all individuals.
In advanced economies, the problems are about optimising existing supply networks. These economies have usually had continuing evolution, which allowed them to construct infrastructure to supply water to people. The European Union has developed a set of rules and policies to overcome expected future problems.
There are many international documents with interesting, but not very specific, ideas, which are therefore not put into practice. Recommendations have been made by the United Nations, such as the Dublin Statement on Water and Sustainable Development.
== Optimizing the water supply network ==
The yield of a system can be measured by either its value or its net benefit. For a water supply system, the true value or the net benefit is a reliable water supply service having adequate quantity and good quality of the product. For example, if the existing water supply of a city needs to be extended to supply a new municipality, the impact of the new branch of the system must be designed to supply the new needs, while maintaining supply to the old system.
=== Single-objective optimization ===
The design of a system is governed by multiple criteria, one being cost. If the benefit is fixed, the least cost design results in maximum benefit. However, the least cost approach normally results in a minimum capacity for a water supply network. A minimum cost model usually searches for the least cost solution (in pipe sizes), while satisfying the hydraulic constraints such as: required output pressures, maximum pipe flow rate and pipe flow velocities. The cost is a function of pipe diameters; therefore the optimization problem consists of finding a minimum cost solution by optimising pipe sizes to provide the minimum acceptable capacity.
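As a hedged illustration of such a least-cost search (not from the article; the catalogue, prices, design flow, and velocity limit are all invented), the sketch below picks the cheapest pipe diameter that keeps flow velocity within a limit for a single main; a real design would check pressures across the whole network using pipe network analysis:

```python
# Hedged sketch of a minimum-cost pipe-sizing search for a single main,
# assuming a discrete catalogue of diameters with per-metre costs.

import math

CATALOGUE = {     # diameter (m) -> cost per metre (currency units), invented
    0.10: 40.0,
    0.15: 65.0,
    0.20: 95.0,
    0.25: 130.0,
    0.30: 170.0,
}

def cheapest_diameter(design_flow_m3s: float, v_max: float = 2.0) -> float:
    """Smallest (and here cheapest) diameter keeping velocity under v_max (m/s)."""
    for d in sorted(CATALOGUE):
        area = math.pi * d * d / 4.0
        if design_flow_m3s / area <= v_max:
            return d
    raise ValueError("no catalogue diameter satisfies the velocity constraint")

d = cheapest_diameter(0.05)   # 50 L/s design flow
print(d, CATALOGUE[d])        # -> a 0.2 m main at 95.0 per metre
```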
=== Multi-objective optimization ===
However, according to the authors of the paper entitled, “Method for optimizing design and rehabilitation of water distribution systems”, “the least capacity is not a desirable solution to a sustainable water supply network in a long term, due to the uncertainty of the future demand”. It is preferable to provide extra pipe capacity to cope with unexpected demand growth and with water outages. The problem changes from a single objective optimization problem (minimizing cost), to a multi-objective optimization problem (minimizing cost and maximizing flow capacity).
=== Weighted sum method ===
To solve a multi-objective optimization problem, it is necessary to convert the problem into a single objective optimization problem, by using adjustments, such as a weighted sum of objectives, or an ε-constraint method. The weighted sum approach gives a certain weight to the different objectives, and then factors in all these weights to form a single objective function that can be solved by single factor optimization. This method is not entirely satisfactory, because the weights cannot be correctly chosen, so this approach cannot find the optimal solution for all the original objectives.
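A minimal sketch of the scalarization itself (illustrative; the objectives and weights are invented) shows how two competing objectives collapse into one function that a single-objective optimizer can minimize:

```python
# Minimal illustration of weighted-sum scalarization: two competing objectives
# (cost to be minimized, capacity to be maximized) collapse into one function.
# The weights w_cost and w_cap are the arbitrary choices the text warns about.

def weighted_objective(cost: float, capacity: float,
                       w_cost: float = 0.7, w_cap: float = 0.3) -> float:
    # Maximizing capacity is the same as minimizing its negative, so both
    # terms point the same way for a single-objective minimizer.
    return w_cost * cost - w_cap * capacity

print(weighted_objective(cost=100.0, capacity=50.0))  # -> 55.0
```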
=== The constraint method ===
The second approach (the constraint method), chooses one of the objective functions as the single objective, and the other objective functions are treated as constraints with a limited value. However, the optimal solution depends on the pre-defined constraint limits.
=== Sensitivity analysis ===
Multiple-objective optimization problems involve computing the tradeoff between the costs and benefits, resulting in a set of solutions that can be used for sensitivity analysis and tested in different scenarios. There is no single optimal solution that satisfies the global optimality of both objectives: because the objectives are to some extent contradictory, it is not possible to improve one without sacrificing the other. In some cases it is necessary to use a different approach (e.g. Pareto analysis) and choose the best combination.
=== Operational constraints ===
Returning to the cost objective function, it cannot violate any of the operational constraints. Generally this cost is dominated by the energy cost for pumping. “The operational constraints include the standards of customer service, such as: the minimum delivered pressure, in addition to the physical constraints such as the maximum and the minimum water levels in storage tanks to prevent overtopping and emptying respectively.”
In order to optimize the operational performance of the water supply network, at the same time as minimizing the energy costs, it is necessary to predict the consequences of different pump and valve settings on the behavior of the network.
Apart from Linear and Non-linear Programming, there are other methods and approaches to design, to manage and operate a water supply network to achieve sustainability—for instance, the adoption of appropriate technology coupled with effective strategies for operation and maintenance. These strategies must include effective management models, technical support to the householders and industries, sustainable financing mechanisms, and development of reliable supply chains. All these measures must ensure the following: system working lifespan; maintenance cycle; continuity of functioning; down time for repairs; water yield and water quality.
== Sustainable development ==
In an unsustainable system there is insufficient maintenance of the water networks, especially of the major pipelines in urban areas. The system deteriorates and then needs rehabilitation or renewal.
Householders and sewage treatment plants can both make the water supply networks more efficient and sustainable. Major improvements in eco-efficiency are gained through systematic separation of rainfall and wastewater. Membrane technology can be used for recycling wastewater.
The municipal government can develop a "Municipal Water Reuse System", a current approach to managing rainwater and treated wastewater. It applies a water reuse scheme for treated wastewater, on a municipal scale, to provide non-potable water for industrial, household and municipal uses. This technology consists of separating the urine fraction of sanitary wastewater and collecting it for recycling its nutrients. The feces and graywater fraction is collected, together with organic wastes from the households, using a gravity sewer system continuously flushed with non-potable water. The water is treated anaerobically and the biogas is used for energy production.
One effective way to achieve sustainable water management is to shift emphasis towards decentralized water projects, such as drip irrigation diffusion in India. This project covers large spatial areas while relying on individual technological adoption decisions, offering scalable solutions that can mitigate water scarcity and enhance agricultural productivity.
Another method that can be utilized is through the promoting of community engagement and resistance against unsustainable water infrastructure projects. Grassroots movements, as observed in anti-dam protests in various countries, play a crucial role in challenging dominant development narratives and advocating for more socially and ecologically just water management practices.
Municipalities and other forms of local government should also invest in innovative technologies, such as membrane technology for wastewater recycling, and develop policy frameworks that incentivize eco-efficient practices. Municipal water reuse systems, as demonstrated in existing implementations, offer promising avenues for integrating wastewater treatment and resource recovery into urban water networks.
The sustainable water supply system is an integrated system including water intake, water utilization, wastewater discharge and treatment and water environmental protection. It requires reducing freshwater and groundwater usage in all sectors of consumption. Developing sustainable water supply systems is a growing trend, because it serves people's long-term interests. There are several ways to reuse and recycle the water, in order to achieve long-term sustainability, such as:
Gray water re-use and treatment: gray water is wastewater coming from baths, showers, sinks and washbasins. If this water is treated it can be used as a source of water for uses other than drinking. Depending on the type of gray water and its level of treatment, it can be re-used for irrigation and toilet flushing. According to an investigation about the impacts of domestic grey water reuse on public health, carried out by the New South Wales Health Centre in Australia in the year 2000, grey water contains less nitrogen and fecal pathogenic organisms than sewage, and the organic content of grey water decomposes more rapidly.
Ecological treatment systems use little energy: there are many applications in gray water re-use, such as reed beds, soil treatment systems and plant filters. This process is ideal for gray water re-use, because of easier maintenance and higher removal rates of organic matter, ammonia, nitrogen and phosphorus.
Other possible approaches to scoping models for water supply, applicable to any urban area, include the following:
Sustainable drainage system
Borehole extraction
Intercluster groundwater flow
Canal and river extraction
Aquifer storage
A more user-friendly indoor water use
The Dublin Statement on Water and Sustainable Development is a good example of the new trend to overcome water supply problems. This statement, suggested by advanced economies, has come up with some principles that are of great significance to urban water supply. These are:
Fresh water is a finite and vulnerable resource, essential to sustain life, development and the environment.
Water development and management should be based on a participatory approach, involving users, planners and policy-makers at all levels.
Women play a central part in the provision, management and safeguarding of water. Institutional arrangements should reflect the role of women in water provision and protection.
Water has an economic value in all its competing uses and should be recognized as an economic good.
From these statements, developed in 1992, several policies have been created to give importance to water and to move urban water system management towards sustainable development. The Water Framework Directive by the European Commission is a good example of what has been created there out of former policies.
== Future approaches ==
There is a great need for more sustainable water supply systems. To achieve sustainability, several factors must be tackled at the same time: climate change, rising energy costs, and rising populations. All of these factors provoke change and put pressure on the management of available water resources.
An obstacle to transforming conventional water supply systems is the amount of time needed to achieve the transformation. More specifically, transformation must be implemented by municipal legislation bodies, which always need short-term solutions too. Another obstacle to achieving sustainability in water supply systems is the insufficient practical experience with the technologies required, and the missing know-how about the organization and the transition process.
Urban water infrastructure faces several challenges that undermine its sustainability and resilience. One critical issue highlighted in recent research is the vulnerability of water networks to climate variability and extreme weather events. Poor seasonal rains, as observed in the case of the Panama Canal's lock and dam infrastructure, exemplify how inadequate water supply can strain water-intensive infrastructure, raising questions about engineering legitimacy and the reliability of water systems.
Another key challenge is the unequal development associated with large-scale water infrastructure projects such as dams and canals. Such projects, while aimed at promoting economic growth, often actually reproduce social and economic inequalities by displacing rural communities and marginalizing indigenous populations. This phenomenon of "accumulation by dispossession" further emphasizes the need for more equitable and inclusive approaches to water infrastructure development.
Possible ways to improve this situation include simulating the network, implementing pilot projects, and learning from the costs involved and the benefits achieved.
== See also ==
Aqueduct
Civil engineering
Conduit hydroelectricity
Domestic water system
Hardy Cross method
Hydrological optimization
Hydrology
Infrastructure
Plumbing
River
Tap water
Water
Water pipes
Water meter
Water well
Automatic meter reading
Backflow prevention device
Fire hydrant
Strainers
Valve
Water tower
Water quality
Water resources
Water supply
== References ==
== External links ==
DCMMS: A web-based GIS application to record maintenance activities for water and wastewater networks.
An open-source hydraulic toolbox for water distribution systems
Water supply network schematic
A proportional–integral–derivative controller (PID controller or three-term controller) is a feedback-based control loop mechanism commonly used to manage machines and processes that require continuous control and automatic adjustment. It is typically used in industrial control systems and various other applications where constant control through modulation is necessary without human intervention. The PID controller automatically compares the desired target value (setpoint or SP) with the actual value of the system (process variable or PV). The difference between these two values is called the error value, denoted as e(t).
It then applies corrective actions automatically to bring the PV to the same value as the SP using three methods: The proportional (P) component responds to the current error value by producing an output that is directly proportional to the magnitude of the error. This provides immediate correction based on how far the system is from the desired setpoint. The integral (I) component, in turn, considers the cumulative sum of past errors to address any residual steady-state errors that persist over time, eliminating lingering discrepancies. Lastly, the derivative (D) component predicts future error by assessing the rate of change of the error, which helps to mitigate overshoot and enhance system stability, particularly when the system undergoes rapid changes. The PID output signal can directly control actuators through voltage, current, or other modulation methods, depending on the application. The PID controller reduces the likelihood of human error and improves automation.
A common example is a vehicle’s cruise control system. For instance, when a vehicle encounters a hill, its speed will decrease if the engine power output is kept constant. The PID controller adjusts the engine's power output to restore the vehicle to its desired speed, doing so efficiently with minimal delay and overshoot.
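A minimal simulation sketch of this cruise-control example (not from the article; the vehicle model, disturbance, and gains are all invented) shows the loop rebuilding engine force when a hill appears:

```python
# Hedged, minimal cruise-control simulation. The "plant" is a crude
# first-order vehicle model; real vehicle dynamics are far richer.

dt, speed, setpoint = 0.1, 30.0, 30.0   # timestep (s), speed (m/s), target (m/s)
kp, ki, kd = 800.0, 40.0, 100.0         # illustrative PID gains
integral, prev_error = 0.0, 0.0
mass, drag = 1200.0, 50.0               # kg, N*s/m (invented)

for step in range(600):                  # simulate 60 s
    grade_force = -2000.0 if step > 100 else 0.0   # hill appears at t = 10 s
    error = setpoint - speed
    integral += error * dt
    derivative = (error - prev_error) / dt
    force = kp * error + ki * integral + kd * derivative   # engine demand, N
    prev_error = error
    accel = (force + grade_force - drag * speed) / mass
    speed += accel * dt

# After the transient, the speed settles back near the 30 m/s setpoint.
print(round(speed, 2))
```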
The theoretical foundation of PID controllers dates back to the early 1920s with the development of automatic steering systems for ships. This concept was later adopted for automatic process control in manufacturing, first appearing in pneumatic actuators and evolving into electronic controllers. PID controllers are widely used in numerous applications requiring accurate, stable, and optimized automatic control, such as temperature regulation, motor speed control, and industrial process management.
== Fundamental operation ==
The distinguishing feature of the PID controller is the ability to use the three control terms of proportional, integral and derivative influence on the controller output to apply accurate and optimal control. The block diagram on the right shows the principles of how these terms are generated and applied. It shows a PID controller, which continuously calculates an error value e(t) as the difference between a desired setpoint SP = r(t) and a measured process variable PV = y(t): e(t) = r(t) − y(t), and applies a correction based on proportional, integral, and derivative terms. The controller attempts to minimize the error over time by adjustment of a control variable u(t), such as the opening of a control valve, to a new value determined by a weighted sum of the control terms.
The PID controller directly generates a continuous control signal based on error, without discrete modulation.
In this model:
Term P is proportional to the current value of the SP − PV error e(t). For example, if the error is large, the control output will be proportionately large by using the gain factor "Kp". Using proportional control alone will result in an error between the setpoint and the process value because the controller requires an error to generate the proportional output response. In steady-state process conditions an equilibrium is reached, with a steady SP − PV "offset".
Term I accounts for past values of the SP − PV error and integrates them over time to produce the I term. For example, if there is a residual SP − PV error after the application of proportional control, the integral term seeks to eliminate the residual error by adding a control effect due to the historic cumulative value of the error. When the error is eliminated, the integral term will cease to grow. This will result in the proportional effect diminishing as the error decreases, but this is compensated for by the growing integral effect.
Term D is a best estimate of the future trend of the SP − PV error, based on its current rate of change. It is sometimes called "anticipatory control", as it is effectively seeking to reduce the effect of the SP − PV error by exerting a control influence generated by the rate of error change. The more rapid the change, the greater the controlling or damping effect.
Tuning – The balance of these effects is achieved by loop tuning to produce the optimal control function. The tuning constants are shown below as "K" and must be derived for each control application, as they depend on the response characteristics of the physical system, external to the controller. These are dependent on the behavior of the measuring sensor, the final control element (such as a control valve), any control signal delays, and the process itself. Approximate values of constants can usually be initially entered knowing the type of application, but they are normally refined, or tuned, by introducing a setpoint change and observing the system response.
Control action – The mathematical model and practical loop above both use a direct control action for all the terms, which means an increasing positive error results in an increasing positive control output correction. This is because the "error" term is not the deviation from the setpoint (actual-desired) but is in fact the correction needed (desired-actual). The system is called reverse acting if it is necessary to apply negative corrective action. For instance, if the valve in the flow loop was 100–0% valve opening for 0–100% control output – meaning that the controller action has to be reversed. Some process control schemes and final control elements require this reverse action. An example would be a valve for cooling water, where the fail-safe mode, in the case of signal loss, would be 100% opening of the valve; therefore 0% controller output needs to cause 100% valve opening.
=== Control function ===
The overall control function is

{\displaystyle u(t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +K_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}},}

where Kp, Ki, and Kd, all non-negative, denote the coefficients for the proportional, integral, and derivative terms respectively (sometimes denoted P, I, and D). A discrete-time implementation of this function is sketched below.
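The following sketch (illustrative; the class name and gains are invented) discretizes the control function with rectangular integration and a backward-difference derivative:

```python
# Hedged sketch: discrete-time PID, u = Kp*e + Ki*integral(e) + Kd*de/dt.

class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement            # e(t) = SP - PV
        self.integral += error * self.dt          # rectangular integration
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
u = pid.update(setpoint=1.0, measurement=0.8)
```

Practical implementations usually add refinements such as the derivative filtering discussed later in this article.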
=== Standard form ===
In the standard form of the equation (see later in article), Ki and Kd are respectively replaced by Kp/Ti and KpTd; the advantage of this is that Ti and Td have some understandable physical meaning, as they represent an integration time and a derivative time respectively. KpTd is the time constant with which the controller will attempt to approach the set point. Kp/Ti determines how long the controller will tolerate the output being consistently above or below the set point.

{\displaystyle u(t)=K_{\text{p}}\left(e(t)+{\frac {1}{T_{\text{i}}}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +T_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}}\right)}

where Ti = Kp/Ki is the integration time constant, and Td = Kd/Kp is the derivative time constant.
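These relations translate directly into code; a small sketch (values illustrative) converting between the parallel gains and the standard-form constants:

```python
# Converting between parallel gains (Kp, Ki, Kd) and standard-form
# constants (Kp, Ti, Td) per the relations above. Values are illustrative.

def parallel_to_standard(kp: float, ki: float, kd: float):
    ti = kp / ki          # integration time constant
    td = kd / kp          # derivative time constant
    return kp, ti, td

def standard_to_parallel(kp: float, ti: float, td: float):
    return kp, kp / ti, kp * td

print(parallel_to_standard(2.0, 0.5, 0.1))   # -> (2.0, 4.0, 0.05)
```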
=== Selective use of control terms ===
Although a PID controller has three control terms, some applications need only one or two terms to provide appropriate control. This is achieved by setting the unused parameters to zero and is called a PI, PD, P, or I controller in the absence of the other control actions. PI controllers are fairly common in applications where derivative action would be sensitive to measurement noise, but the integral term is often needed for the system to reach its target value.
=== Applicability ===
The use of the PID algorithm does not guarantee optimal control of the system or its control stability (see § Limitations, below). Situations may occur where there are excessive delays: the measurement of the process value is delayed, or the control action does not apply quickly enough. In these cases lead–lag compensation is required to be effective. The response of the controller can be described in terms of its responsiveness to an error, the degree to which the system overshoots a setpoint, and the degree of any system oscillation. But the PID controller is broadly applicable since it relies only on the response of the measured process variable, not on knowledge or a model of the underlying process.
== History ==
=== Origins ===
The centrifugal governor was invented by Christiaan Huygens in the 17th century to regulate the gap between millstones in windmills depending on the speed of rotation, and thereby compensate for the variable speed of grain feed.
With the invention of the low-pressure stationary steam engine there was a need for automatic speed control, and James Watt's self-designed "conical pendulum" governor, a set of revolving steel balls attached to a vertical spindle by link arms, came to be an industry standard. This was based on the millstone-gap control concept.
Rotating-governor speed control, however, was still variable under conditions of varying load, where the shortcoming of what is now known as proportional control alone was evident. The error between the desired speed and the actual speed would increase with increasing load. In the 19th century, the theoretical basis for the operation of governors was first described by James Clerk Maxwell in 1868 in his now-famous paper On Governors. He explored the mathematical basis for control stability, and progressed a good way towards a solution, but made an appeal for mathematicians to examine the problem. The problem was examined further in 1874 by Edward Routh, Charles Sturm, and in 1895, Adolf Hurwitz, all of whom contributed to the establishment of control stability criteria.
In subsequent applications, speed governors were further refined, notably by American scientist Willard Gibbs, who in 1872 theoretically analyzed Watt's conical pendulum governor.
About this time, the invention of the Whitehead torpedo posed a control problem that required accurate control of the running depth. Use of a depth pressure sensor alone proved inadequate, and a pendulum that measured the fore and aft pitch of the torpedo was combined with depth measurement to become the pendulum-and-hydrostat control. Pressure control provided only a proportional control that, if the control gain was too high, would become unstable and go into overshoot with considerable instability of depth-holding. The pendulum added what is now known as derivative control, which damped the oscillations by detecting the torpedo dive/climb angle and thereby the rate-of-change of depth. This development (named by Whitehead as "The Secret" to give no clue to its action) was around 1868.
Another early example of a PID-type controller was developed by Elmer Sperry in 1911 for ship steering, though his work was intuitive rather than mathematically-based.
It was not until 1922, however, that a formal control law for what we now call PID or three-term control was first developed using theoretical analysis, by Russian American engineer Nicolas Minorsky. Minorsky was researching and designing automatic ship steering for the US Navy and based his analysis on observations of a helmsman. He noted the helmsman steered the ship based not only on the current course error but also on past error, as well as the current rate of change; this was then given a mathematical treatment by Minorsky.
His goal was stability, not general control, which simplified the problem significantly. While proportional control provided stability against small disturbances, it was insufficient for dealing with a steady disturbance, notably a stiff gale (due to steady-state error), which required adding the integral term. Finally, the derivative term was added to improve stability and control.
Trials were carried out on the USS New Mexico, with the controllers controlling the angular velocity (not the angle) of the rudder. PI control yielded sustained yaw (angular error) of ±2°. Adding the D element yielded a yaw error of ±1/6°, better than most helmsmen could achieve.
The Navy ultimately did not adopt the system due to resistance by personnel. Similar work was carried out and published by several others in the 1930s.
=== Industrial control ===
The wide use of feedback controllers did not become feasible until the development of wideband high-gain amplifiers to use the concept of negative feedback. This had been developed in telephone engineering electronics by Harold Black in the late 1920s, but not published until 1934. Independently, Clesson E Mason of the Foxboro Company in 1930 invented a wide-band pneumatic controller by combining the nozzle and flapper high-gain pneumatic amplifier, which had been invented in 1914, with negative feedback from the controller output. This dramatically increased the linear range of operation of the nozzle and flapper amplifier, and integral control could also be added by the use of a precision bleed valve and a bellows generating the integral term. The result was the "Stabilog" controller which gave both proportional and integral functions using feedback bellows. The integral term was called Reset. Later the derivative term was added by a further bellows and adjustable orifice.
From about 1932 onwards, the use of wideband pneumatic controllers increased rapidly in a variety of control applications. Air pressure was used for generating the controller output, and also for powering process modulating devices such as diaphragm-operated control valves. They were simple low maintenance devices that operated well in harsh industrial environments and did not present explosion risks in hazardous locations. They were the industry standard for many decades until the advent of discrete electronic controllers and distributed control systems (DCSs).
With these controllers, a pneumatic industry signaling standard of 3–15 psi (0.2–1.0 bar) was established, which had an elevated zero to ensure devices were working within their linear characteristic and represented the control range of 0-100%.
In the 1950s, when high gain electronic amplifiers became cheap and reliable, electronic PID controllers became popular, and the pneumatic standard was emulated by 10-50 mA and 4–20 mA current loop signals (the latter became the industry standard). Pneumatic field actuators are still widely used because of the advantages of pneumatic energy for control valves in process plant environments.
Most modern PID controls in industry are implemented as computer software in DCSs, programmable logic controllers (PLCs), or discrete compact controllers.
=== Electronic analog controllers ===
Electronic analog PID control loops were often found within more complex electronic systems, for example, the head positioning of a disk drive, the power conditioning of a power supply, or even the movement-detection circuit of a modern seismometer. Discrete electronic analog controllers have been largely replaced by digital controllers using microcontrollers or FPGAs to implement PID algorithms. However, discrete analog PID controllers are still used in niche applications requiring high-bandwidth and low-noise performance, such as laser-diode controllers.
== Control loop example ==
Consider a robotic arm that can be moved and positioned by a control loop. An electric motor may lift or lower the arm, depending on forward or reverse power applied, but power cannot be a simple function of position because of the inertial mass of the arm, forces due to gravity, external forces on the arm such as a load to lift or work to be done on an external object.
The sensed position is the process variable (PV).
The desired position is called the setpoint (SP).
The difference between the PV and SP is the error (e), which quantifies whether the arm is too low or too high and by how much.
The input to the process (the electric current in the motor) is the output from the PID controller. It is called either the manipulated variable (MV) or the control variable (CV).
The PID controller continuously adjusts the input current to achieve smooth motion.
By measuring the position (PV), and subtracting it from the setpoint (SP), the error (e) is found, and from it the controller calculates how much electric current to supply to the motor (MV).
=== Proportional ===
The obvious method is proportional control: the motor current is set in proportion to the existing error. However, this method fails if, for instance, the arm has to lift different weights: a greater weight needs a greater force applied for the same error on the down side, but a smaller force if the error is low on the upside. That's where the integral and derivative terms play their part.
=== Integral ===
An integral term increases action in relation not only to the error but also the time for which it has persisted. So, if the applied force is not enough to bring the error to zero, this force will be increased as time passes. A pure "I" controller could bring the error to zero, but it would be both weakly reacting at the start (because the action would be small at the beginning, depending on time to become significant) and more aggressive at the end (the action increases as long as the error is positive, even if the error is near zero).
Applying too much integral when the error is small and decreasing will lead to overshoot. After overshooting, if the controller were to apply a large correction in the opposite direction and repeatedly overshoot the desired position, the output would oscillate around the setpoint in either a constant, growing, or decaying sinusoid. If the amplitude of the oscillations increases with time, the system is unstable. If it decreases, the system is stable. If the oscillations remain at a constant magnitude, the system is marginally stable.
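A standard mitigation for this integral-driven overshoot is anti-windup, for example clamping the accumulated integral; a minimal sketch with invented limits:

```python
# Hedged sketch of integrator clamping (anti-windup), a common guard against
# the integral term accumulating so much that it drives large overshoot.

def clamped_integral(integral: float, error: float, dt: float,
                     i_min: float = -10.0, i_max: float = 10.0) -> float:
    integral += error * dt
    return max(i_min, min(i_max, integral))   # saturate the accumulator
```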
=== Derivative ===
A derivative term does not consider the magnitude of the error (meaning it cannot bring it to zero: a pure D controller cannot bring the system to its setpoint), but rather the rate of change of error, trying to bring this rate to zero. It aims at flattening the error trajectory into a horizontal line, damping the force applied, and so reduces overshoot (error on the other side because of too great applied force).
=== Control damping ===
In the interest of achieving a controlled arrival at the desired position (SP) in a timely and accurate way, the controlled system needs to be critically damped. A well-tuned position control system will also apply the necessary currents to the controlled motor so that the arm pushes and pulls as necessary to resist external forces trying to move it away from the required position. The setpoint itself may be generated by an external system, such as a PLC or other computer system, so that it continuously varies depending on the work that the robotic arm is expected to do. A well-tuned PID control system will enable the arm to meet these changing requirements to the best of its capabilities.
=== Response to disturbances ===
If a controller starts from a stable state with zero error (PV = SP), then further changes by the controller will be in response to changes in other measured or unmeasured inputs to the process that affect the process, and hence the PV. Variables that affect the process other than the MV are known as disturbances. Generally, controllers are used to reject disturbances and to implement setpoint changes. A change in load on the arm constitutes a disturbance to the robot arm control process.
=== Applications ===
In theory, a controller can be used to control any process that has a measurable output (PV), a known ideal value for that output (SP), and an input to the process (MV) that will affect the relevant PV. Controllers are used in industry to regulate temperature, pressure, force, feed rate, flow rate, chemical composition (component concentrations), weight, position, speed, and practically every other variable for which a measurement exists.
== Controller theory ==
This section describes the parallel or non-interacting form of the PID controller. For other forms please see § Alternative nomenclature and forms.
The PID control scheme is named after its three correcting terms, whose sum constitutes the manipulated variable (MV). The proportional, integral, and derivative terms are summed to calculate the output of the PID controller. Defining u(t) as the controller output, the final form of the PID algorithm is

{\displaystyle u(t)=\mathrm {MV} (t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,d\tau +K_{\text{d}}{\frac {de(t)}{dt}},}

where
Kp is the proportional gain, a tuning parameter,
Ki is the integral gain, a tuning parameter,
Kd is the derivative gain, a tuning parameter,
e(t) = SP − PV(t) is the error (SP is the setpoint, and PV(t) is the process variable),
t is the time or instantaneous time (the present),
τ is the variable of integration (takes on values from time 0 to the present t).
Equivalently, the transfer function in the Laplace domain of the PID controller is

{\displaystyle L(s)=K_{\text{p}}+K_{\text{i}}/s+K_{\text{d}}s={K_{\text{d}}s^{2}+K_{\text{p}}s+K_{\text{i}} \over s}}

where s is the complex angular frequency.
=== Proportional term ===
The proportional term produces an output value that is proportional to the current error value. The proportional response can be adjusted by multiplying the error by a constant Kp, called the proportional gain constant.
The proportional term is given by

{\displaystyle P_{\text{out}}=K_{\text{p}}e(t).}
A high proportional gain results in a large change in the output for a given change in the error. If the proportional gain is too high, the system can become unstable (see the section on loop tuning). In contrast, a small gain results in a small output response to a large input error, and a less responsive or less sensitive controller. If the proportional gain is too low, the control action may be too small when responding to system disturbances. Tuning theory and industrial practice indicate that the proportional term should contribute the bulk of the output change.
==== Steady-state error ====
The steady-state error is the difference between the desired final output and the actual one. Because a non-zero error is required to drive it, a proportional controller generally operates with a steady-state error. Steady-state error (SSE) is proportional to the process gain and inversely proportional to the proportional gain. SSE may be mitigated by adding a compensating bias term to both the setpoint and the output, or corrected dynamically by adding an integral term.
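A hedged numerical illustration (all constants invented): under pure proportional control of a first-order plant with unit feedback and a step setpoint, the equilibrium condition Kp·e = PV/K leaves a residual error of SP/(1 + Kp·K), which the simulation below reproduces:

```python
# Steady-state offset of a P-only controller on a first-order plant.
# Controller gain, plant gain, and time constant are invented.

dt, pv, sp = 0.01, 0.0, 1.0
kp, k_plant, tau = 4.0, 1.0, 1.0

for _ in range(5000):                     # 50 s, long enough to settle
    u = kp * (sp - pv)                    # proportional control action
    pv += dt * (k_plant * u - pv) / tau   # first-order plant response

print(round(pv, 3))   # ~0.8: an offset of SP/(1 + Kp*K) = 0.2 remains
```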
=== Integral term ===
The contribution from the integral term is proportional to both the magnitude of the error and the duration of the error. The integral in a PID controller is the sum of the instantaneous error over time and gives the accumulated offset that should have been corrected previously. The accumulated error is then multiplied by the integral gain (Ki) and added to the controller output.
The integral term is given by

{\displaystyle I_{\text{out}}=K_{\text{i}}\int _{0}^{t}e(\tau )\,d\tau .}
The integral term accelerates the movement of the process towards setpoint and eliminates the residual steady-state error that occurs with a pure proportional controller. However, since the integral term responds to accumulated errors from the past, it can cause the present value to overshoot the setpoint value (see the section on loop tuning).
=== Derivative term ===
The derivative of the process error is calculated by determining the slope of the error over time and multiplying this rate of change by the derivative gain Kd. The magnitude of the contribution of the derivative term to the overall control action is termed the derivative gain, Kd.
The derivative term is given by

{\displaystyle D_{\text{out}}=K_{\text{d}}{\frac {de(t)}{dt}}.}
Derivative action predicts system behavior and thus improves settling time and stability of the system. An ideal derivative is not causal, so that implementations of PID controllers include an additional low-pass filtering for the derivative term to limit the high-frequency gain and noise. Derivative action is seldom used in practice though – by one estimate in only 25% of deployed controllers – because of its variable impact on system stability in real-world applications.
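A minimal sketch of such a filtered derivative term (illustrative; the filter time constant τf is an assumption, chosen to attenuate noise above roughly 1/τf rad/s):

```python
# Hedged sketch: backward-difference derivative smoothed by a first-order
# low-pass filter so measurement noise is not amplified by Kd.

class FilteredDerivative:
    def __init__(self, kd: float, dt: float, tau_f: float = 0.05):
        self.kd, self.dt, self.tau_f = kd, dt, tau_f
        self.prev_error = 0.0
        self.d_filtered = 0.0

    def update(self, error: float) -> float:
        raw = (error - self.prev_error) / self.dt
        self.prev_error = error
        # first-order low-pass: blend the new sample into the running estimate
        alpha = self.dt / (self.tau_f + self.dt)
        self.d_filtered += alpha * (raw - self.d_filtered)
        return self.kd * self.d_filtered

d_term = FilteredDerivative(kd=0.1, dt=0.01)
print(d_term.update(0.5))
```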
== Loop tuning ==
Tuning a control loop is the adjustment of its control parameters (proportional band/gain, integral gain/reset, derivative gain/rate) to the optimum values for the desired control response. Stability (no unbounded oscillation) is a basic requirement, but beyond that, different systems have different behavior, different applications have different requirements, and requirements may conflict with one another.
Even though there are only three parameters and it is simple to describe in principle, PID tuning is a difficult problem because it must satisfy complex criteria within the limitations of PID control. Accordingly, there are various methods for loop tuning, and more sophisticated techniques are the subject of patents; this section describes some traditional, manual methods for loop tuning.
Designing and tuning a PID controller appears to be conceptually intuitive, but can be hard in practice, if multiple (and often conflicting) objectives, such as short transient and high stability, are to be achieved. PID controllers often provide acceptable control using default tunings, but performance can generally be improved by careful tuning, and performance may be unacceptable with poor tuning. Usually, initial designs need to be adjusted repeatedly through computer simulations until the closed-loop system performs or compromises as desired.
Some processes have a degree of nonlinearity, so parameters that work well at full-load conditions do not work when the process is starting up from no load. This can be corrected by gain scheduling (using different parameters in different operating regions).
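A minimal gain-scheduling sketch (the regions and gain sets are invented) selects a parameter set by operating region:

```python
# Minimal gain-scheduling sketch: pick a PID gain set by measured load fraction.

SCHEDULE = [
    # (upper load threshold, (kp, ki, kd))
    (0.25, (1.0, 0.05, 0.02)),   # startup / low load
    (0.75, (2.0, 0.20, 0.05)),   # mid load
    (1.01, (3.5, 0.40, 0.10)),   # full load
]

def gains_for(load_fraction: float):
    for threshold, gains in SCHEDULE:
        if load_fraction < threshold:
            return gains
    return SCHEDULE[-1][1]

print(gains_for(0.5))   # -> (2.0, 0.2, 0.05)
```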
=== Stability ===
If the PID controller parameters (the gains of the proportional, integral and derivative terms) are chosen incorrectly, the controlled process input can be unstable; i.e., its output diverges, with or without oscillation, and is limited only by saturation or mechanical breakage. Instability is caused by excess gain, particularly in the presence of significant lag.
Generally, stabilization of response is required and the process must not oscillate for any combination of process conditions and setpoints, though sometimes marginal stability (bounded oscillation) is acceptable or desired.
Mathematically, the origins of instability can be seen in the Laplace domain.
The closed-loop transfer function is

{\displaystyle H(s)={\frac {K(s)G(s)}{1+K(s)G(s)}},}

where K(s) is the PID transfer function, and G(s) is the plant transfer function. A system is unstable where the closed-loop transfer function diverges for some s. This happens in situations where K(s)G(s) = −1, in other words when |K(s)G(s)| = 1 with a 180° phase shift. Stability is guaranteed when |K(s)G(s)| < 1 for frequencies that suffer high phase shifts. A more general formalism of this effect is known as the Nyquist stability criterion.
=== Optimal behavior ===
The optimal behavior on a process change or setpoint change varies depending on the application.
Two basic requirements are regulation (disturbance rejection – staying at a given setpoint) and command tracking (implementing setpoint changes). These terms refer to how well the controlled variable tracks the desired value. Specific criteria for command tracking include rise time and settling time. Some processes must not allow an overshoot of the process variable beyond the setpoint if, for example, this would be unsafe. Other processes must minimize the energy expended in reaching a new setpoint.
=== Overview of tuning methods ===
There are several methods for tuning a PID loop. The most effective methods generally involve developing some form of process model and then choosing P, I, and D based on the dynamic model parameters. Manual tuning methods can be relatively time-consuming, particularly for systems with long loop times.
The choice of method depends largely on whether the loop can be taken offline for tuning, and on the response time of the system. If the system can be taken offline, the best tuning method often involves subjecting the system to a step change in input, measuring the output as a function of time, and using this response to determine the control parameters.
=== Manual tuning ===
If the system must remain online, one tuning method is to first set Ki and Kd to zero. Increase Kp until the output of the loop oscillates; then set Kp to approximately half that value for a "quarter amplitude decay"-type response. Then increase Ki until any offset is corrected in sufficient time for the process, but not so much that instability occurs. Finally, increase Kd, if required, until the loop is acceptably quick to reach its reference after a load disturbance. Too much Kp causes excessive response and overshoot. A fast PID loop tuning usually overshoots slightly to reach the setpoint more quickly; however, some systems cannot accept overshoot, in which case an overdamped closed-loop system is required, which in turn requires a Kp setting significantly less than half of the Kp setting that was causing oscillation.
=== Ziegler–Nichols method ===
Another heuristic tuning method is known as the Ziegler–Nichols method, introduced by John G. Ziegler and Nathaniel B. Nichols in the 1940s. As in the method above, the Ki and Kd gains are first set to zero. The proportional gain is increased until it reaches the ultimate gain Ku, at which the output of the loop starts to oscillate constantly. Ku and the oscillation period Tu are used to set the gains according to the classic Ziegler–Nichols rules:
P controller: Kp = 0.50 Ku
PI controller: Kp = 0.45 Ku, Ki = 0.54 Ku/Tu
PID controller: Kp = 0.60 Ku, Ki = 1.2 Ku/Tu, Kd = 0.075 Ku Tu
The oscillation frequency fu = 1/Tu is often measured instead of the period; substituting its reciprocal yields the same result.
These gains apply to the ideal, parallel form of the PID controller. When applied to the standard PID form, only the integral and derivative gains Ki and Kd are dependent on the oscillation period Tu.
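A small sketch computing the classic Ziegler–Nichols PID gains for the parallel form from measured Ku and Tu (the inputs are illustrative):

```python
# Ziegler-Nichols PID gains for the parallel form, from ultimate gain Ku
# and oscillation period Tu. The example inputs are invented.

def ziegler_nichols_pid(ku: float, tu: float):
    kp = 0.6 * ku
    ti, td = tu / 2.0, tu / 8.0   # standard-form time constants
    ki, kd = kp / ti, kp * td     # convert to parallel gains
    return kp, ki, kd

print(ziegler_nichols_pid(ku=10.0, tu=2.0))   # -> (6.0, 6.0, 1.5)
```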
=== Cohen–Coon parameters ===
This method was developed in 1953 and is based on a first-order + time delay model. Similar to the Ziegler–Nichols method, a set of tuning parameters were developed to yield a closed-loop response with a decay ratio of 1/4. Arguably the biggest problem with these parameters is that a small change in the process parameters could potentially cause a closed-loop system to become unstable.
=== Relay (Åström–Hägglund) method ===
Published in 1984 by Karl Johan Åström and Tore Hägglund, the relay method temporarily operates the process using bang-bang control and measures the resultant oscillations. The output is switched (as if by a relay, hence the name) between two values of the control variable. The values must be chosen so the process will cross the setpoint, but they need not be 0% and 100%; by choosing suitable values, dangerous oscillations can be avoided.
As long as the process variable is below the setpoint, the control output is set to the higher value. As soon as it rises above the setpoint, the control output is set to the lower value. Ideally, the output waveform is nearly square, spending equal time above and below the setpoint. The period and amplitude of the resultant oscillations are measured, and used to compute the ultimate gain and period, which are then fed into the Ziegler–Nichols method.
Specifically, the ultimate period Tu is assumed to be equal to the observed period, and the ultimate gain is computed as

{\displaystyle K_{u}=4b/\pi a,}

where a is the amplitude of the process variable oscillation, and b is the amplitude of the control output change which caused it.
There are numerous variants on the relay method.
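A minimal sketch of the ultimate-gain computation from relay-test measurements, per Ku = 4b/(πa) (the sample amplitudes are invented):

```python
# Estimating the ultimate gain and period from a relay (Astrom-Hagglund) test.

import math

def relay_ultimate(a: float, b: float, observed_period: float):
    """a: PV oscillation amplitude, b: control-output amplitude."""
    ku = 4.0 * b / (math.pi * a)
    tu = observed_period
    return ku, tu

print(relay_ultimate(a=0.5, b=2.0, observed_period=12.0))  # Ku ~ 5.09, Tu = 12
```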
=== First-order model with dead time ===
The transfer function for a first-order process with dead time is

{\displaystyle y(s)={\frac {k_{\text{p}}e^{-\theta s}}{\tau _{\text{p}}s+1}}u(s),}

where kp is the process gain, τp is the time constant, θ is the dead time, and u(s) is a step change input. Converting this transfer function to the time domain results in

{\displaystyle y(t)=k_{\text{p}}\Delta u\left(1-e^{-{\frac {t-\theta }{\tau _{\text{p}}}}}\right),}

using the same parameters found above.
It is important when using this method to apply a large enough step-change input that the output can be measured; however, too large a step change can affect the process stability. A larger step change also makes it easier to confirm that the observed output change is due to the step rather than to a disturbance (for best results, try to minimize disturbances when performing the step test).
One way to determine the parameters for the first-order process is using the 63.2% method. In this method, the process gain (kp) is equal to the change in output divided by the change in input. The dead time θ is the amount of time between when the step change occurred and when the output first changed. The time constant (τp) is the amount of time it takes for the output to reach 63.2% of the new steady-state value after the step change. One downside to using this method is that it can take a while to reach a new steady-state value if the process has large time constants.
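A hedged sketch of the 63.2% method applied to recorded step-response samples (the data, names, and tolerance are invented; a real fit would interpolate between samples rather than take the first crossing):

```python
# Estimate kp, theta (dead time), and tau_p from step-response samples.

def fit_fopdt(times, outputs, step_size, tol=1e-6):
    y0, y_inf = outputs[0], outputs[-1]   # assume the last sample is steady state
    k_p = (y_inf - y0) / step_size        # process gain = output change / input change
    # dead time: first sample at which the output visibly moves
    theta = next(t for t, y in zip(times, outputs) if abs(y - y0) > tol)
    # time constant: time to reach 63.2% of the total change, minus dead time
    target = y0 + 0.632 * (y_inf - y0)
    t63 = next(t for t, y in zip(times, outputs)
               if abs(y - y0) >= abs(target - y0))
    return k_p, theta, t63 - theta

times = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
outputs = [0, 0, 0, 1.9, 3.3, 4.3, 5.0, 5.5, 5.8, 6.0, 6.0]  # step of size 3 at t=0
print(fit_fopdt(times, outputs, step_size=3.0))
```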
=== Tuning software ===
Most modern industrial facilities no longer tune loops using the manual calculation methods shown above. Instead, PID tuning and loop optimization software are used to ensure consistent results. These software packages gather data, develop process models, and suggest optimal tuning. Some software packages can even develop tuning by gathering data from reference changes.
Mathematical PID loop tuning induces an impulse in the system and then uses the controlled system's frequency response to design the PID loop values. In loops with response times of several minutes, mathematical loop tuning is recommended, because trial and error can take days just to find a stable set of loop values. Optimal values are harder to find. Some digital loop controllers offer a self-tuning feature in which very small setpoint changes are sent to the process, allowing the controller itself to calculate optimal tuning values.
Another approach calculates initial values via the Ziegler–Nichols method, and uses a numerical optimization technique to find better PID coefficients.
Other formulas are available to tune the loop according to different performance criteria. Many patented formulas are now embedded within PID tuning software and hardware modules.
Advances in automated PID loop tuning software also deliver algorithms for tuning PID loops in a dynamic or non-steady-state (NSS) scenario. The software models the dynamics of the process through a disturbance and calculates PID control parameters in response.
== Limitations ==
While PID controllers are applicable to many control problems and often perform satisfactorily without any improvements or only coarse tuning, they can perform poorly in some applications and do not in general provide optimal control. The fundamental difficulty with PID control is that it is a feedback control system with constant parameters and no direct knowledge of the process, so overall performance is reactive and a compromise. While PID control is the best controller in the absence of a model of the process, better performance can be obtained by explicitly modeling the process rather than relying on feedback alone.
PID controllers, when used alone, can give poor performance when the PID loop gains must be reduced so that the control system does not overshoot, oscillate or hunt about the control setpoint value. They also have difficulties in the presence of non-linearities, may trade-off regulation versus response time, do not react to changing process behavior (say, the process changes after it has warmed up), and have lag in responding to large disturbances.
The most significant improvement is to incorporate feed-forward control with knowledge about the system, and using the PID only to control error. Alternatively, PIDs can be modified in more minor ways, such as by changing the parameters (either gain scheduling in different use cases or adaptively modifying them based on performance), improving measurement (higher sampling rate, precision, and accuracy, and low-pass filtering if necessary), or cascading multiple PID controllers.
=== Linearity and symmetry ===
PID controllers work best when the loop to be controlled is linear and symmetric. Thus, their performance in non-linear and asymmetric systems is degraded.
A nonlinear valve in a flow control application, for instance, will result in variable loop sensitivity that requires damping to prevent instability. One solution is to include a model of the valve's nonlinearity in the control algorithm to compensate for this.
An example of an asymmetric application is temperature control in HVAC systems that use active heating (via a heating element) but only passive cooling. Overshoot of a rising temperature can then be corrected only slowly; active cooling is not available to force the temperature downward as a function of the control output. In this case the PID controller could be tuned to be over-damped, to prevent or reduce overshoot, but this reduces performance by increasing the settling time of a rising temperature to the setpoint. The inherent degradation of control quality in this application could be solved by the application of active cooling.
=== Noise in derivative term ===
A problem with the derivative term is that it amplifies higher frequency measurement or process noise that can cause large amounts of change in the output. It is often helpful to filter the measurements with a low-pass filter in order to remove higher-frequency noise components. As low-pass filtering and derivative control can cancel each other out, the amount of filtering is limited. Therefore, low noise instrumentation can be important. A nonlinear median filter may be used, which improves the filtering efficiency and practical performance. In some cases, the differential band can be turned off with little loss of control. This is equivalent to using the PID controller as a PI controller.
== Modifications to the algorithm ==
The basic PID algorithm presents some challenges in control applications that have been addressed by minor modifications to the PID form.
=== Integral windup ===
One common problem resulting from ideal PID implementations is integral windup. Following a large change in setpoint, the integral term can accumulate an error larger than the maximal value for the regulation variable (windup); the system therefore overshoots and continues to increase until this accumulated error is unwound. This problem can be addressed by:
Disabling the integration until the PV has entered the controllable region
Preventing the integral term from accumulating above or below pre-determined bounds (clamping; a sketch follows this list)
Back-calculating the integral term to constrain the regulator output within feasible bounds.
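A minimal sketch of the clamping option in Python; the output limits out_min/out_max and the dictionary-based state are illustrative assumptions, not part of any particular product.

def pid_step(state, sp, pv, kp, ki, kd, dt, out_min, out_max):
    """One PID update with the integral term clamped to pre-determined bounds."""
    error = sp - pv
    # Clamp the accumulated integral so it alone can never exceed the output range.
    state["integral"] = max(out_min, min(out_max,
                                         state["integral"] + ki * error * dt))
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    # Saturate the final output to the actuator range.
    return max(out_min, min(out_max,
                            kp * error + state["integral"] + kd * derivative))

state = {"integral": 0.0, "prev_error": 0.0}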
=== Overshooting from known disturbances ===
For example, a PID loop is used to control the temperature of an electric resistance furnace where the system has stabilized. When the door is opened and something cold is put into the furnace, the temperature drops below the setpoint. The integral function of the controller tends to compensate for this error by introducing another error in the positive direction. The overshoot can be avoided by freezing the integral function, after the door is opened, for the time the control loop typically needs to reheat the furnace.
=== PI controller ===
A PI controller (proportional-integral controller) is a special case of the PID controller in which the derivative (D) of the error is not used.
The controller output is given by
{\displaystyle K_{P}\Delta +K_{I}\int \Delta \,dt}
where Δ = SP − PV is the error, i.e. the deviation of the actual measured value (PV) from the setpoint (SP).
A PI controller can be modelled easily in software such as Simulink or Xcos using a "flow chart" box involving Laplace operators:
{\displaystyle C={\frac {G(1+\tau s)}{\tau s}}}
where G = K_P is the proportional gain and G/τ = K_I is the integral gain.
Setting a value for G is often a trade-off between decreasing overshoot and increasing settling time.
The lack of derivative action may make the system more steady in the steady state in the case of noisy data. This is because derivative action is more sensitive to higher-frequency terms in the inputs.
Without derivative action, a PI-controlled system is less responsive to real (non-noise) and relatively fast alterations in state and so the system will be slower to reach setpoint and slower to respond to perturbations than a well-tuned PID system may be.
=== Deadband ===
Many PID loops control a mechanical device (for example, a valve). Mechanical maintenance can be a major cost and wear leads to control degradation in the form of either stiction or backlash in the mechanical response to an input signal. The rate of mechanical wear is mainly a function of how often a device is activated to make a change. Where wear is a significant concern, the PID loop may have an output deadband to reduce the frequency of activation of the output (valve). This is accomplished by modifying the controller to hold its output steady if the change would be small (within the defined deadband range). The calculated output must leave the deadband before the actual output will change.
=== Setpoint step change ===
The proportional and derivative terms can produce excessive movement in the output when a system is subjected to an instantaneous step increase in the error, such as a large setpoint change. In the case of the derivative term, this is due to taking the derivative of the error, which is very large in the case of an instantaneous step change. As a result, some PID algorithms incorporate some of the following modifications:
Setpoint ramping
In this modification, the setpoint is gradually moved from its old value to a newly specified value using a linear or first-order differential ramp function. This avoids the discontinuity present in a simple step change.
Derivative of the process variable
In this case the PID controller measures the derivative of the measured PV, rather than the derivative of the error. This quantity is always continuous (i.e., never has a step change as a result of changed setpoint). This modification is a simple case of setpoint weighting.
Setpoint weighting
Setpoint weighting multiplies the setpoint by adjustable factors (usually between 0 and 1) in the error of the proportional and derivative elements of the controller; one common parameterization is shown below. The error in the integral term must be the true control error to avoid steady-state control errors. These two extra parameters do not affect the response to load disturbances and measurement noise and can be tuned to improve the controller's setpoint response.
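A common parameterization, with setpoint weights b and c (these symbols are not used elsewhere in this article), is:
{\displaystyle u(t)=K_{p}\left(b\,SP-PV\right)+K_{i}\int _{0}^{t}\left(SP-PV\right)d\tau +K_{d}{\frac {d}{dt}}\left(c\,SP-PV\right)}
Choosing b = 1, c = 0 recovers derivative action based on the process variable, as described later in this article, while b = c = 1 recovers the ordinary error-based form.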
=== Feed-forward ===
The control system performance can be improved by combining the feedback (or closed-loop) control of a PID controller with feed-forward (or open-loop) control. Knowledge about the system (such as the desired acceleration and inertia) can be fed forward and combined with the PID output to improve the overall system performance. The feed-forward value alone can often provide the major portion of the controller output. The PID controller primarily has to compensate for whatever difference or error remains between the setpoint (SP) and the system response to the open-loop control. Since the feed-forward output is not affected by the process feedback, it can never cause the control system to oscillate, thus improving the system response without affecting stability. Feed forward can be based on the setpoint and on extra measured disturbances. Setpoint weighting is a simple form of feed forward.
For example, in most motion control systems, in order to accelerate a mechanical load under control, more force is required from the actuator. If a velocity loop PID controller is being used to control the speed of the load and command the force being applied by the actuator, then it is beneficial to take the desired instantaneous acceleration, scale that value appropriately and add it to the output of the PID velocity loop controller. This means that whenever the load is being accelerated or decelerated, a proportional amount of force is commanded from the actuator regardless of the feedback value. The PID loop in this situation uses the feedback information to change the combined output to reduce the remaining difference between the process setpoint and the feedback value. Working together, the combined open-loop feed-forward controller and closed-loop PID controller can provide a more responsive control system.
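A Python sketch of this velocity-loop example; the feed-forward scale kff (force per unit of desired acceleration) is an illustrative assumption, and a plain PI correction is used for brevity.

def velocity_command(v_sp, v_meas, a_desired, kff, kp, ki, state, dt):
    """Combine acceleration feed-forward with a PI velocity loop.

    The feed-forward term supplies the bulk of the force command;
    the feedback terms correct only the remaining velocity error.
    """
    ff_force = kff * a_desired          # open-loop part, unaffected by feedback
    error = v_sp - v_meas
    state["integral"] += ki * error * dt
    fb_force = kp * error + state["integral"]   # closed-loop correction
    return ff_force + fb_force

state = {"integral": 0.0}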
=== Bumpless operation ===
PID controllers are often implemented with a "bumpless" initialization feature that recalculates the integral accumulator term to maintain a consistent process output through parameter changes. A partial implementation is to store the integral gain times the error rather than storing the error and postmultiplying by the integral gain, which prevents discontinuous output when the I gain is changed, but not the P or D gains.
=== Other improvements ===
In addition to feed-forward, PID controllers are often enhanced through methods such as PID gain scheduling (changing parameters in different operating conditions), fuzzy logic, or computational verb logic. Further practical application issues can arise from instrumentation connected to the controller. A high enough sampling rate, measurement precision, and measurement accuracy are required to achieve adequate control performance. Another, newer method for improving the PID controller is to increase its degrees of freedom by using fractional orders: the fractional order of the integrator and differentiator adds increased flexibility to the controller.
== Cascade control ==
One distinctive advantage of PID controllers is that two PID controllers can be used together to yield better dynamic performance. This is called cascaded PID control. Two controllers are in cascade when they are arranged so that one regulates the setpoint of the other. A PID controller acts as the outer loop controller, which controls the primary physical parameter, such as fluid level or velocity. The other controller acts as the inner loop controller, which reads the output of the outer loop controller as its setpoint, usually controlling a more rapidly changing parameter, such as flow rate or acceleration. It can be mathematically proven that the working frequency of the controller is increased and the time constant of the object is reduced by using cascaded PID controllers.
For example, a temperature-controlled circulating bath has two PID controllers in cascade, each with its own thermocouple temperature sensor. The outer controller controls the temperature of the water using a thermocouple located far from the heater, where it accurately reads the temperature of the bulk of the water. The error term of this PID controller is the difference between the desired bath temperature and measured temperature. Instead of controlling the heater directly, the outer PID controller sets a heater temperature goal for the inner PID controller. The inner PID controller controls the temperature of the heater using a thermocouple attached to the heater. The inner controller's error term is the difference between this heater temperature setpoint and the measured temperature of the heater. Its output controls the actual heater to stay near this setpoint.
The proportional, integral, and differential terms of the two controllers will be very different. The outer PID controller has a long time constant – all the water in the tank needs to heat up or cool down. The inner loop responds much more quickly. Each controller can be tuned to match the physics of the system it controls – heat transfer and thermal mass of the whole tank or of just the heater – giving better total response.
== Alternative nomenclature and forms ==
=== Standard versus parallel (ideal) form ===
The form of the PID controller most often encountered in industry, and the one most relevant to tuning algorithms, is the standard form. In this form the Kp gain is applied to the Iout and Dout terms, yielding:
{\displaystyle u(t)=K_{p}\left(e(t)+{\frac {1}{T_{i}}}\int _{0}^{t}e(\tau )\,d\tau +T_{d}{\frac {d}{dt}}e(t)\right)}
where Ti is the integral time and Td is the derivative time.
In this standard form, the parameters have a clear physical meaning. In particular, the inner summation produces a new single error value which is compensated for future and past errors. The proportional term is the current error. The derivative term attempts to predict the error value Td seconds (or samples) in the future, assuming that the loop control remains unchanged. The integral component adjusts the error value to compensate for the sum of all past errors, with the intention of completely eliminating them in Ti seconds (or samples). The resulting compensated single error value is then scaled by the single gain Kp to compute the control variable.
In the parallel form, shown in the controller theory section,
{\displaystyle u(t)=K_{p}e(t)+K_{i}\int _{0}^{t}e(\tau )\,d\tau +K_{d}{\frac {d}{dt}}e(t)}
the gain parameters are related to the parameters of the standard form through Ki = Kp/Ti and Kd = Kp·Td. This parallel form, where the parameters are treated as simple gains, is the most general and flexible form. However, it is also the form where the parameters have the weakest relationship to physical behaviors and is generally reserved for theoretical treatment of the PID controller. The standard form, despite being slightly more complex mathematically, is more common in industry.
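These relations are straightforward to apply in code; a minimal sketch (function names are illustrative):

def standard_to_parallel(Kp, Ti, Td):
    """Standard form (Kp, Ti, Td) -> parallel gains (Kp, Ki, Kd)."""
    return Kp, Kp / Ti, Kp * Td

def parallel_to_standard(Kp, Ki, Kd):
    """Parallel gains (Kp, Ki, Kd) -> standard form (Kp, Ti, Td)."""
    return Kp, Kp / Ki, Kd / Kp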
=== Reciprocal gain, a.k.a. proportional band ===
In many cases, the manipulated variable output by the PID controller is a dimensionless fraction between 0 and 100% of some maximum possible value, and the translation into real units (such as pumping rate or watts of heater power) is outside the PID controller. The process variable, however, is in dimensioned units such as temperature. It is common in this case to express the gain Kp not as "output per degree", but rather in the reciprocal form of a proportional band 100/Kp, which is "degrees per full output": the range over which the output changes from 0 to 1 (0% to 100%). Beyond this range, the output is saturated, full-off or full-on. The narrower this band, the higher the proportional gain.
=== Basing derivative action on PV ===
In most commercial control systems, derivative action is based on process variable rather than error. That is, a change in the setpoint does not affect the derivative action. This is because the digitized version of the algorithm produces a large unwanted spike when the setpoint is changed. If the setpoint is constant then changes in the PV will be the same as changes in error. Therefore, this modification makes no difference to the way the controller responds to process disturbances.
=== Basing proportional action on PV ===
Most commercial control systems offer the option of also basing the proportional action solely on the process variable. This means that only the integral action responds to changes in the setpoint. The modification to the algorithm does not affect the way the controller responds to process disturbances.
Basing proportional action on PV eliminates the instant and possibly very large change in output caused by a sudden change to the setpoint. Depending on the process and tuning this may be beneficial to the response to a setpoint step.
{\displaystyle \mathrm {MV(t)} =K_{p}\left(\,{-PV(t)}+{\frac {1}{T_{i}}}\int _{0}^{t}{e(\tau )}\,{d\tau }-T_{d}{\frac {d}{dt}}PV(t)\right)}
King describes an effective chart-based method.
=== Laplace form ===
Sometimes it is useful to write the PID regulator in Laplace transform form:
{\displaystyle G(s)=K_{p}+{\frac {K_{i}}{s}}+K_{d}{s}={\frac {K_{d}{s^{2}}+K_{p}{s}+K_{i}}{s}}}
Having the PID controller written in Laplace form and having the transfer function of the controlled system makes it easy to determine the closed-loop transfer function of the system.
=== Series/interacting form ===
Another representation of the PID controller is the series, or interacting form
{\displaystyle G(s)=K_{c}({\frac {1}{\tau _{i}{s}}}+1)(\tau _{d}{s}+1)}
where the parameters are related to the parameters of the standard form through Kp = Kc·α, Ti = τi·α, and Td = τd/α, with α = 1 + τd/τi.
This form essentially consists of a PD and PI controller in series. Because the integral term is used to calculate the controller's bias, this form can track an external bias value, which is required for the proper implementation of multi-controller advanced control schemes.
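Using the relations above, converting series (interacting) parameters to the standard form is a short computation; a sketch (function name illustrative):

def series_to_standard(Kc, tau_i, tau_d):
    """Series/interacting (Kc, tau_i, tau_d) -> standard (Kp, Ti, Td)."""
    alpha = 1.0 + tau_d / tau_i
    return Kc * alpha, tau_i * alpha, tau_d / alpha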
=== Discrete implementation ===
The analysis for designing a digital implementation of a PID controller in a microcontroller (MCU) or FPGA device requires the standard form of the PID controller to be discretized. Approximations for first-order derivatives are made by backward finite differences. u(t) and e(t) are discretized with a sampling period Δt, with k as the sample index.
Differentiating both sides of PID equation using Newton's notation gives:
{\displaystyle {\dot {u}}(t)=K_{p}{\dot {e}}(t)+K_{i}e(t)+K_{d}{\ddot {e}}(t)}
Derivative terms are approximated as,
{\displaystyle {\dot {f}}(t_{k})={\dfrac {df(t_{k})}{dt}}={\dfrac {f(t_{k})-f(t_{k-1})}{\Delta t}}}
So,
{\displaystyle {\frac {u(t_{k})-u(t_{k-1})}{\Delta t}}=K_{p}{\frac {e(t_{k})-e(t_{k-1})}{\Delta t}}+K_{i}e(t_{k})+K_{d}{\frac {{\dot {e}}(t_{k})-{\dot {e}}(t_{k-1})}{\Delta t}}}
Applying backward difference again gives,
{\displaystyle {\frac {u(t_{k})-u(t_{k-1})}{\Delta t}}=K_{p}{\frac {e(t_{k})-e(t_{k-1})}{\Delta t}}+K_{i}e(t_{k})+K_{d}{\frac {{\frac {e(t_{k})-e(t_{k-1})}{\Delta t}}-{\frac {e(t_{k-1})-e(t_{k-2})}{\Delta t}}}{\Delta t}}}
By simplifying and regrouping terms of the above equation, an algorithm for an implementation of the discretized PID controller in a MCU is finally obtained:
{\displaystyle u(t_{k})=u(t_{k-1})+\left(K_{p}+K_{i}\Delta t+{\dfrac {K_{d}}{\Delta t}}\right)e(t_{k})+\left(-K_{p}-{\dfrac {2K_{d}}{\Delta t}}\right)e(t_{k-1})+{\dfrac {K_{d}}{\Delta t}}e(t_{k-2})}
or:
{\displaystyle u(t_{k})=u(t_{k-1})+K_{p}\left[\left(1+{\dfrac {\Delta t}{T_{i}}}+{\dfrac {T_{d}}{\Delta t}}\right)e(t_{k})+\left(-1-{\dfrac {2T_{d}}{\Delta t}}\right)e(t_{k-1})+{\dfrac {T_{d}}{\Delta t}}e(t_{k-2})\right]}
with Ti = Kp/Ki and Td = Kd/Kp.
Note: this method in fact solves
{\displaystyle u(t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +K_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}}+u_{0}}
where u0 is a constant independent of t. This constant is useful for implementing start and stop control of the regulation loop. For instance, setting Kp, Ki and Kd to 0 keeps u(t) constant. Likewise, when starting regulation on a system where the error is already close to 0 and u(t) is non-null, it prevents the output from being sent to 0.
== Pseudocode ==
Here is a very simple and explicit piece of pseudocode that can be easily understood by the layman:
Kp - proportional gain
Ki - integral gain
Kd - derivative gain
dt - loop interval time (assumes reasonable scale)
previous_error := 0
integral := 0
loop:
    error := setpoint − measured_value
    proportional := error
    integral := integral + error × dt
    derivative := (error − previous_error) / dt
    output := Kp × proportional + Ki × integral + Kd × derivative
    previous_error := error
    wait(dt)
    goto loop
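For comparison, a direct Python translation of this pseudocode; the callbacks read_pv and write_output are assumed placeholders for real I/O, and windup protection and derivative filtering, which practical deployments usually need, are omitted.

import time

def pid_loop(read_pv, write_output, setpoint, Kp, Ki, Kd, dt):
    """Run the textbook positional PID loop indefinitely."""
    previous_error = 0.0
    integral = 0.0
    while True:
        error = setpoint - read_pv()
        integral += error * dt
        derivative = (error - previous_error) / dt
        write_output(Kp * error + Ki * integral + Kd * derivative)
        previous_error = error
        time.sleep(dt)  # ignores computation time; see the caveat on wait(dt) below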
The pseudocode below illustrates how to implement a PID controller by treating it as an IIR filter.
The Z-transform of a PID controller can be written as (Δt being the sampling time):
{\displaystyle C(z)=K_{p}+K_{i}\Delta _{t}{\frac {z}{z-1}}+{\frac {K_{d}}{\Delta _{t}}}{\frac {z-1}{z}}}
and expressed in an IIR form (in agreement with the discrete implementation shown above):
{\displaystyle C(z)={\frac {\left(K_{p}+K_{i}\Delta _{t}+{\dfrac {K_{d}}{\Delta _{t}}}\right)+\left(-K_{p}-{\dfrac {2K_{d}}{\Delta _{t}}}\right)z^{-1}+{\dfrac {K_{d}}{\Delta _{t}}}z^{-2}}{1-z^{-1}}}}
We can then deduce the recursive iteration often found in FPGA implementations:
{\displaystyle u[n]=u[n-1]+\left(K_{p}+K_{i}\Delta _{t}+{\dfrac {K_{d}}{\Delta _{t}}}\right)\epsilon [n]+\left(-K_{p}-{\dfrac {2K_{d}}{\Delta _{t}}}\right)\epsilon [n-1]+{\dfrac {K_{d}}{\Delta _{t}}}\epsilon [n-2]}
A0 := Kp + Ki*dt + Kd/dt
A1 := -Kp - 2*Kd/dt
A2 := Kd/dt
error[2] := 0 // e(t-2)
error[1] := 0 // e(t-1)
error[0] := 0 // e(t)
output := u0 // Usually the current value of the actuator
loop:
    error[2] := error[1]
    error[1] := error[0]
    error[0] := setpoint − measured_value
    output := output + A0 * error[0] + A1 * error[1] + A2 * error[2]
    wait(dt)
    goto loop
Here, Kp is a dimensionless number, Ki is expressed in s−1, and Kd is expressed in s. When doing a regulation where the actuator and the measured value are not in the same unit (e.g., temperature regulation using a motor controlling a valve), Kp, Ki and Kd may be corrected by a unit conversion factor. It may also be interesting to use Ki in its reciprocal form (integration time). The above implementation makes it possible to run an I-only controller, which may be useful in some cases.
In the real world, this is D-to-A converted and passed into the process under control as the manipulated variable (MV). The current error is stored elsewhere for re-use in the next differentiation, the program then waits until dt seconds have passed since start, and the loop begins again, reading in new values for the PV and the setpoint and calculating a new value for the error.
Note that for real code, the use of "wait(dt)" might be inappropriate because it doesn't account for time taken by the algorithm itself during the loop, or more importantly, any pre-emption delaying the algorithm.
A common issue when using Kd is the response of the derivative term to a rising or falling edge (step) of the setpoint. A typical workaround is to filter the derivative action using a low-pass filter of time constant τd/N, where 3 ≤ N ≤ 10:
A variant of the above algorithm using an infinite impulse response (IIR) filter for the derivative:
A0 := Kp + Ki*dt
A1 := -Kp
error[2] := 0 // e(t-2)
error[1] := 0 // e(t-1)
error[0] := 0 // e(t)
output := u0 // Usually the current value of the actuator
A0d := Kd/dt
A1d := -2.0*Kd/dt
A2d := Kd/dt
N := 5
tau := Kd / (Kp*N) // IIR filter time constant
alpha := dt / (2*tau)
d0 := 0
d1 := 0
fd0 := 0
fd1 := 0
loop:
    error[2] := error[1]
    error[1] := error[0]
    error[0] := setpoint − measured_value
    // PI
    output := output + A0 * error[0] + A1 * error[1]
    // Filtered D
    d1 := d0
    d0 := A0d * error[0] + A1d * error[1] + A2d * error[2]
    fd1 := fd0
    fd0 := ((alpha) / (alpha + 1)) * (d0 + d1) - ((alpha - 1) / (alpha + 1)) * fd1
    output := output + fd0
    wait(dt)
    goto loop
== See also ==
Control theory
Active disturbance rejection control
== Notes ==
== References ==
== Further reading ==
== External links ==
PID tuning using Mathematica
PID tuning using Python
Principles of PID Control and Tuning
Introduction to the key terms associated with PID Temperature Control
=== PID tutorials ===
PID Control in MATLAB/Simulink and Python with TCLab
What's All This P-I-D Stuff, Anyhow? Article in Electronic Design
Shows how to build a PID controller with basic electronic components (pg. 22)
PID Without a PhD
PID Control with MATLAB and Simulink
PID with single Operational Amplifier
Proven Methods and Best Practices for PID Control
PID Tuning Guide: A Best-Practices Approach to Understanding and Tuning PID Controllers
Michael Barr (2002-07-30), Introduction to Closed-Loop Control, Embedded Systems Programming, archived from the original on 2010-02-09
Jinghua Zhong, Mechanical Engineering, Purdue University (Spring 2006). "PID Controller Tuning: A Short Tutorial" (PDF). Archived from the original (PDF) on 2015-04-21. Retrieved 2013-12-04.
Introduction to P,PI,PD & PID Controller with MATLAB
Improving The Beginners PID | Wikipedia/Proportional–integral–derivative_controller |
A manifold is a wider and/or larger pipe or channel, into which smaller pipes or channels lead, or a pipe fitting or similar device that connects multiple inputs or outputs for fluids.
== Manifolds ==
=== Engineering ===
Types of manifolds in engineering include:
Exhaust manifold
An engine part that collects the exhaust gases from multiple cylinders into one pipe. Also known as headers.
Hydraulic manifold
A component used to regulate fluid flow in a hydraulic system, thus controlling the transfer of power between actuators and pumps
Inlet manifold (or "intake manifold")
An engine part that supplies the air or fuel/air mixture to the cylinders
Scuba manifold
In a scuba set, connects two or more diving cylinders
Vacuum gas manifold
An apparatus used in chemistry to manipulate gases
Many dredge pipe pieces are also manifolds.
=== Biology ===
In biology manifolds are found in:
Cardiovascular system (blood vessel manifolds, etc.)
Lymphatic system
Respiratory system
=== Other fields ===
Manifolds are used in:
HVAC
Pipe organ
Plumbing
== References == | Wikipedia/Manifold_(fluid_mechanics) |
Waterborne diseases are conditions (meaning adverse effects on human health, such as death, disability, illness or disorders): 47 caused by pathogenic micro-organisms that are transmitted by water. These diseases can be spread while bathing, washing, drinking water, or by eating food exposed to contaminated water. They are a pressing issue in rural areas amongst developing countries all over the world. While diarrhea and vomiting are the most commonly reported symptoms of waterborne illness, other symptoms can include skin, ear, respiratory, or eye problems. Lack of clean water supply, sanitation and hygiene (WASH) are major causes for the spread of waterborne diseases in a community. Therefore, reliable access to clean drinking water and sanitation is the main method to prevent waterborne diseases.
Microorganisms causing diseases that characteristically are waterborne prominently include protozoa and bacteria, many of which are intestinal parasites, or invade the tissues or circulatory system through walls of the digestive tract. Various other waterborne diseases are caused by viruses.
Yet other important classes of waterborne diseases are caused by metazoan parasites. Typical examples include certain Nematoda, that is to say "roundworms". As an example of waterborne Nematode infections, one important waterborne nematode disease is Dracunculiasis. It is acquired by swallowing water in which certain copepoda occur that act as vectors for the Nematoda. Anyone swallowing a copepod that happens to be infected with Nematode larvae in the genus Dracunculus becomes liable to infection. The larvae cause guinea worm disease.
Another class of waterborne metazoan pathogens are certain members of the Schistosomatidae, a family of blood flukes. They usually infect people that make skin contact with the water. Blood flukes are pathogens that cause Schistosomiasis of various forms, more or less seriously affecting hundreds of millions of people worldwide.
== Terminology ==
The term waterborne disease is reserved largely for infections that predominantly are transmitted through contact with or consumption of microbially polluted water. Many infections may be transmitted by microbes or parasites that accidentally, possibly as a result of exceptional circumstances, have entered the water. However, the fact that there might be an occasional infection need not mean that it is useful to categorize the resulting disease as "waterborne". Nor is it common practice to refer to diseases such as malaria as "waterborne" just because mosquitoes have aquatic phases in their life cycles, or because treating the water they inhabit happens to be an effective strategy in control of the mosquitoes that are the vectors.
A related term is "water-related disease" which is defined as "any significant or widespread adverse effects on human health, such as death, disability, illness or disorders, caused directly or indirectly by the condition, or changes in the quantity or quality of any water".: 47 Water-related diseases are grouped according to their transmission mechanism: water borne, water hygiene, water based, water related.: 47 The main transmission mode for waterborne diseases is ingestion of contaminated water.
== Causes ==
Water-borne diseases are primarily transmitted through the consumption of water contaminated with pathogenic microorganisms, including bacteria, viruses, and parasites. Chemical pollutants can also contribute to water-related health issues. Contamination typically occurs at various points in the water supply chain, often due to inadequate sanitation, industrial activity, or poor hygiene practices.
=== Natural Water Sources ===
Surface water bodies such as rivers, lakes, and ponds can become contaminated through the direct discharge of human and animal waste. This is particularly common in regions where open defecation is prevalent or where sanitation infrastructure is limited. The presence of fecal matter in water significantly increases the risk of transmitting pathogens responsible for diseases such as cholera, typhoid, and dysentery.
=== Inadequate Sanitation and Sewage Disposal ===
Improperly treated or untreated sewage can pollute groundwater and surface water sources. Leaks from septic tanks or sewer systems may introduce harmful microorganisms into water supplies. In areas with limited wastewater treatment facilities, this form of contamination is a major contributor to the spread of water-borne illnesses.
=== Agricultural Runoff ===
Agricultural activities can affect water quality through runoff containing fertilizers, pesticides, and animal waste. These substances may enter water bodies during rainfall or irrigation, carrying both chemical contaminants and microbial pathogens. Nitrates from fertilizers, for example, can cause health problems such as methemoglobinemia (blue baby syndrome) in infants.
=== Industrial Pollution ===
Industries may discharge untreated or inadequately treated waste into nearby water sources. Industrial effluents often contain hazardous substances such as heavy metals, organic toxins, and chemical solvents. Prolonged exposure to these pollutants through drinking or household use of contaminated water can lead to chronic health issues, including cancer and organ damage.
=== Poor Hygiene Practices ===
In many low-resource settings, contaminated water is used for washing food, bathing, or cleaning cooking utensils. The absence of basic hygiene measures, such as handwashing with soap, further exacerbates the risk of infection. Diseases like hepatitis A and E are commonly transmitted under such conditions.
=== Influence of climate change ===
== Diseases by type of pathogen ==
=== Protozoa ===
=== Bacteria ===
=== Viruses ===
=== Algae ===
=== Parasitic worms ===
== Prevention ==
Reliable access to clean drinking water and sanitation is the main method to prevent waterborne diseases. The aim is to break the fecal–oral route of disease transmission.
== Epidemiology ==
According to the World Health Organization, waterborne diseases account for an estimated 3.6% of the total DALY (disability-adjusted life year) global burden of disease, and cause about 1.5 million human deaths annually. The World Health Organization estimates that 58% of that burden, or 842,000 deaths per year, is attributable to a lack of safe drinking water supply, sanitation and hygiene (summarized as WASH).
=== United States ===
The Waterborne Disease and Outbreak Surveillance System (WBDOSS) is the principal database used to identify the causative agents, deficiencies, water systems, and sources associated with waterborne disease and outbreaks in the United States. Since 1971, the Centers for Disease Control and Prevention (CDC), the Council of State and Territorial Epidemiologists (CSTE), and the US Environmental Protection Agency (EPA) have maintained this surveillance system for collecting and reporting data on "waterborne disease and outbreaks associated with recreational water, drinking water, environmental, and undetermined exposures to water." "Data from WBDOSS have supported EPA efforts to develop drinking water regulations and have provided guidance for CDC's recreational water activities."
WBDOSS relies on complete and accurate data from public health departments in individual states, territories, and other U.S. jurisdictions regarding waterborne disease and outbreak activity. In 2009, reporting to the WBDOSS transitioned from a paper form to the electronic National Outbreak Reporting System (NORS). Annual or biennial surveillance reports of the data collected by the WBDOSS have been published in CDC reports from 1971 to 1984; since 1985, surveillance data have been published in the Morbidity and Mortality Weekly Report (MMWR).
WBDOSS and the public health community work together to investigate the causes of the water contamination behind waterborne disease outbreaks and to contain those outbreaks: the public health community investigates the outbreaks, and WBDOSS receives the resulting reports.
== Society and culture ==
=== Socioeconomic impact ===
Waterborne diseases can have a significant impact on the economy. People who are infected by a waterborne disease are usually confronted with related healthcare costs. This is especially the case in developing countries. On average, a family spends about 10% of its monthly household income per person infected.
== History ==
Waterborne diseases were once wrongly explained by the miasma theory, the theory that bad air causes the spread of diseases. However, people started to find a correlation between water quality and waterborne diseases, which led to different water purification methods, such as sand filtration and chlorination of drinking water. The founders of microscopy, Antonie van Leeuwenhoek and Robert Hooke, used the newly invented microscope to observe for the first time small material particles suspended in water, laying the groundwork for the future understanding of waterborne pathogens and waterborne diseases.
== See also ==
Airborne disease
Food microbiology
List of diseases caused by water pollution
Neglected tropical diseases
Public health
Vector (epidemiology)
Water quality
Zoonosis
== References ==
== External links ==
Water-related Diseases, Contaminants, and Injuries Listing of water-related diseases, contaminants and injuries with alphabetical index, listing by type of disease (bacterial, parasitic, etc.) and listing by symptoms caused (diarrhea, skin rash, and many more ) including links to other resources (CDC's Healthy Water site)
World Health Organization (WHO) "Water-Related Diseases" | Wikipedia/Waterborne_disease |
In thermodynamics, the entropy of mixing is the increase in the total entropy when several initially separate systems of different composition, each in a thermodynamic state of internal equilibrium, are mixed without chemical reaction by the thermodynamic operation of removal of impermeable partition(s) between them, followed by a time for establishment of a new thermodynamic state of internal equilibrium in the new unpartitioned closed system.
In general, the mixing may be constrained to occur under various prescribed conditions. In the customarily prescribed conditions, the materials are each initially at a common temperature and pressure, and the new system may change its volume, while being maintained at that same constant temperature, pressure, and chemical component masses. The volume available for each material to explore is increased, from that of its initially separate compartment, to the total common final volume. The final volume need not be the sum of the initially separate volumes, so that work can be done on or by the new closed system during the process of mixing, as well as heat being transferred to or from the surroundings, because of the maintenance of constant pressure and temperature.
The internal energy of the new closed system is equal to the sum of the internal energies of the initially separate systems. The reference values for the internal energies should be specified in a way that is constrained to make this so, maintaining also that the internal energies are respectively proportional to the masses of the systems.
For concision in this article, the term 'ideal material' is used to refer to either an ideal gas (mixture) or an ideal solution.
In the special case of mixing ideal materials, the final common volume is in fact the sum of the initial separate compartment volumes. There is no heat transfer and no work is done. The entropy of mixing is entirely accounted for by the diffusive expansion of each material into a final volume not initially accessible to it.
In the general case of mixing non-ideal materials, however, the total final common volume may be different from the sum of the separate initial volumes, and there may occur transfer of work or heat, to or from the surroundings; also there may be a departure of the entropy of mixing from that of the corresponding ideal case. That departure is the main reason for interest in entropy of mixing. These energy and entropy variables and their temperature dependences provide valuable information about the properties of the materials.
On a molecular level, the entropy of mixing is of interest because it is a macroscopic variable that provides information about constitutive molecular properties. In ideal materials, intermolecular forces are the same between every pair of molecular kinds, so that a molecule feels no difference between other molecules of its own kind and of those of the other kind. In non-ideal materials, there may be differences of intermolecular forces or specific molecular effects between different species, even though they are chemically non-reacting. The entropy of mixing provides information about constitutive differences of intermolecular forces or specific molecular effects in the materials.
The statistical concept of randomness is used for statistical mechanical explanation of the entropy of mixing. Mixing of ideal materials is regarded as random at a molecular level, and, correspondingly, mixing of non-ideal materials may be non-random.
== Mixing of ideal species at constant temperature and pressure ==
In ideal species, intermolecular forces are the same between every pair of molecular kinds, so that a molecule "feels" no difference between itself and its molecular neighbors. This is the reference case for examining corresponding mixing of non-ideal species.
For example, two ideal gases, at the same temperature and pressure, are initially separated by a dividing partition.
Upon removal of the dividing partition, they expand into a final common volume (the sum of the two initial volumes), and the entropy of mixing is given by
{\displaystyle \Delta S_{mix}=-nR(x_{1}\ln x_{1}+x_{2}\ln x_{2})\,.}
where R is the gas constant, n the total number of moles and xi the mole fraction of component i, which initially occupies volume Vi = xiV. After the removal of the partition, the ni = nxi moles of component i may explore the combined volume V, which causes an entropy increase equal to
{\displaystyle nx_{i}R\ln(V/V_{i})=-nRx_{i}\ln x_{i}}
for each component gas.
In this case, the increase in entropy is entirely due to the irreversible processes of expansion of the two gases, and involves no heat or work flow between the system and its surroundings.
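As a concrete check of this formula (a worked example, not from the original text): for equal amounts of the two gases, x1 = x2 = 1/2, and per mole of mixture
{\displaystyle \Delta S_{\text{mix}}=-R\left({\tfrac {1}{2}}\ln {\tfrac {1}{2}}+{\tfrac {1}{2}}\ln {\tfrac {1}{2}}\right)=R\ln 2\approx 5.76\ \mathrm {J\,K^{-1}\,mol^{-1}} .}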
=== Gibbs free energy of mixing ===
The Gibbs free energy change
{\displaystyle \Delta G_{\text{mix}}=\Delta H_{\text{mix}}-T\Delta S_{\text{mix}}}
determines whether mixing at constant (absolute) temperature T and pressure p is a spontaneous process. This quantity combines two physical effects: the enthalpy of mixing, which is a measure of the energy change, and the entropy of mixing considered here.
For an ideal gas mixture or an ideal solution, there is no enthalpy of mixing (ΔHmix = 0), so that the Gibbs free energy of mixing is given by the entropy term only:
{\displaystyle \Delta G_{\text{mix}}=-T\Delta S_{\text{mix}}}
For an ideal solution, the Gibbs free energy of mixing is always negative, meaning that mixing of ideal solutions is always spontaneous. The lowest value is when the mole fraction is 0.5 for a mixture of two components, or 1/n for a mixture of n components.
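For example (a worked value, not from the original text), an equimolar binary ideal mixture has
{\displaystyle \Delta G_{\text{mix}}=-T\Delta S_{\text{mix}}=-nRT\ln 2,}
which is about −1.7 kJ per mole of mixture at 298 K.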
== Solutions and temperature dependence of miscibility ==
=== Ideal and regular solutions ===
The above equation for the entropy of mixing of ideal gases is valid also for certain liquid (or solid) solutions—those formed by completely random mixing so that the components move independently in the total volume. Such random mixing of solutions occurs if the interaction energies between unlike molecules are similar to the average interaction energies between like molecules.: 149 The value of the entropy corresponds exactly to random mixing for ideal solutions and for regular solutions, and approximately so for many real solutions.
For binary mixtures the entropy of random mixing can be considered as a function of the mole fraction of one component.
{\displaystyle \Delta S_{\text{mix}}=-nR(x_{1}\ln x_{1}+x_{2}\ln x_{2})=-nR[x\ln x+(1-x)\ln(1-x)]}
For all possible mixtures, 0 < x < 1, so that ln x and ln(1 − x) are both negative and the entropy of mixing ΔSmix is positive and favors mixing of the pure components.
The curvature of ΔSmix as a function of x is given by the second derivative
{\displaystyle \left({\frac {\partial ^{2}\Delta S_{\text{mix}}}{\partial x^{2}}}\right)_{T,P}=-nR\left({\frac {1}{x}}+{\frac {1}{1-x}}\right)}
This curvature is negative for all possible mixtures (0 < x < 1), so that mixing two solutions to form a solution of intermediate composition also increases the entropy of the system. Random mixing therefore always favors miscibility and opposes phase separation.
For ideal solutions, the enthalpy of mixing is zero so that the components are miscible in all proportions. For regular solutions a positive enthalpy of mixing may cause incomplete miscibility (phase separation for some compositions) at temperatures below the upper critical solution temperature (UCST).: 186 This is the minimum temperature at which the −TΔSmix term in the Gibbs energy of mixing is sufficient to produce miscibility in all proportions.
=== Systems with a lower critical solution temperature ===
Nonrandom mixing with a lower entropy of mixing can occur when the attractive interactions between unlike molecules are significantly stronger (or weaker) than the mean interactions between like molecules. For some systems this can lead to a lower critical solution temperature (LCST) or lower limiting temperature for phase separation.
For example, triethylamine and water are miscible in all proportions below 19 °C, but above this critical temperature, solutions of certain compositions separate into two phases at equilibrium with each other.: 187 This means that ΔGmix is negative for mixing of the two phases below 19 °C and positive above this temperature. Therefore,
{\displaystyle \Delta S_{\text{mix}}=-\left({\frac {\partial \Delta G_{\text{mix}}}{\partial T}}\right)_{P}}
is negative for mixing of these two equilibrium phases. This is due to the formation of attractive hydrogen bonds between the two components that prevent random mixing. Triethylamine molecules cannot form hydrogen bonds with each other but only with water molecules, so in solution they remain associated to water molecules with loss of entropy. The mixing that occurs below 19 °C is due not to entropy but to the enthalpy of formation of the hydrogen bonds.
Lower critical solution temperatures also occur in many polymer-solvent mixtures. For polar systems such as polyacrylic acid in 1,4-dioxane, this is often due to the formation of hydrogen bonds between polymer and solvent. For nonpolar systems such as polystyrene in cyclohexane, phase separation has been observed in sealed tubes (at high pressure) at temperatures approaching the liquid-vapor critical point of the solvent. At such temperatures the solvent expands much more rapidly than the polymer, whose segments are covalently linked. Mixing therefore requires contraction of the solvent for compatibility of the polymer, resulting in a loss of entropy.
== Statistical thermodynamical explanation of the entropy of mixing of ideal gases ==
Since thermodynamic entropy can be related to statistical mechanics or to information theory, it is possible to calculate the entropy of mixing using these two approaches. Here we consider the simple case of mixing ideal gases.
=== Proof from statistical mechanics ===
Assume that the molecules of two different substances are approximately the same size, and regard space as subdivided into a square lattice whose cells are the size of the molecules. (In fact, any lattice would do, including close packing.) This is a crystal-like conceptual model to identify the molecular centers of mass. If the two phases are liquids, there is no spatial uncertainty in each one individually. (This is, of course, an approximation. Liquids have a "free volume". This is why they are (usually) less dense than solids.) Everywhere we look in component 1, there is a molecule present, and likewise for component 2. After the two different substances are intermingled (assuming they are miscible), the liquid is still dense with molecules, but now there is uncertainty about what kind of molecule is in which location. Of course, any idea of identifying molecules in given locations is a thought experiment, not something one could do, but the calculation of the uncertainty is well-defined.
We can use Boltzmann's equation for the entropy change as applied to the mixing process
{\displaystyle \Delta S_{\text{mix}}=k_{\text{B}}\ln \Omega }
where kB is the Boltzmann constant. We then calculate the number of ways Ω of arranging N1 molecules of component 1 and N2 molecules of component 2 on a lattice, where N = N1 + N2 is the total number of molecules, and therefore the number of lattice sites.
Calculating the number of permutations of N objects, correcting for the fact that N1 of them are identical to one another, and likewise for N2:
{\displaystyle \Omega =N!/N_{1}!N_{2}!}
After applying Stirling's approximation for the factorial of a large integer m:
{\displaystyle \ln m!=\sum _{k}\ln k\approx \int _{1}^{m}dk\ln k=m\ln m-m+1\approx m\ln m-m}
the result is
{\displaystyle \Delta S_{mix}=-k_{\text{B}}[N_{1}\ln(N_{1}/N)+N_{2}\ln(N_{2}/N)]=-k_{\text{B}}N[x_{1}\ln x_{1}+x_{2}\ln x_{2}]}
where we have introduced the mole fractions, which are also the probabilities of finding any particular component in a given lattice site.
{\displaystyle x_{1}=N_{1}/N=p_{1}\;\;{\text{and}}\;\;x_{2}=N_{2}/N=p_{2}}
Since the Boltzmann constant kB = R/NA, where NA is the Avogadro constant, and the number of molecules N = nNA, we recover the thermodynamic expression for the mixing of two ideal gases,
{\displaystyle \Delta S_{\text{mix}}=-nR[x_{1}\ln x_{1}+x_{2}\ln x_{2}]}
This expression can be generalized to a mixture of r components, with Ni molecules of component i = 1, 2, 3, …, r:
{\displaystyle \Delta S_{\text{mix}}=-k_{\text{B}}\sum _{i=1}^{r}N_{i}\ln(N_{i}/N)=-Nk_{\text{B}}\sum _{i=1}^{r}x_{i}\ln x_{i}=-nR\sum _{i=1}^{r}x_{i}\ln x_{i}}
The Flory–Huggins solution theory is an example of a more detailed model along these lines.
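The Stirling step can be verified numerically; the sketch below (Python, helper names illustrative) compares the exact count ln Ω, computed via the log-gamma function, with the mole-fraction formula:

import math

def exact_entropy(N1, N2):
    """ln Omega = ln(N! / (N1! N2!)), computed exactly with lgamma (in units of kB)."""
    N = N1 + N2
    return math.lgamma(N + 1) - math.lgamma(N1 + 1) - math.lgamma(N2 + 1)

def stirling_entropy(N1, N2):
    """-N (x1 ln x1 + x2 ln x2), the result after Stirling's approximation."""
    N = N1 + N2
    x1, x2 = N1 / N, N2 / N
    return -N * (x1 * math.log(x1) + x2 * math.log(x2))

# The two agree ever more closely as N grows:
for N1, N2 in [(5, 5), (50, 50), (5000, 5000)]:
    print(N1 + N2, exact_entropy(N1, N2), stirling_entropy(N1, N2))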
=== Relationship to information theory ===
The entropy of mixing is also proportional to the Shannon entropy or compositional uncertainty of information theory, which is defined without requiring Stirling's approximation. Claude Shannon introduced this expression for use in information theory, but similar formulas can be found as far back as the work of Ludwig Boltzmann and J. Willard Gibbs. The Shannon uncertainty is not the same as the Heisenberg uncertainty principle in quantum mechanics which is based on variance. The Shannon entropy is defined as:
{\displaystyle H=-\sum _{i=1}^{r}p_{i}\ln(p_{i})}
where pi is the probability that an information source will produce the ith symbol from an r-symbol alphabet, independently of previous symbols (thus i runs from 1 to r). H is then a measure of the expected amount of information (−log pi) missing before the symbol is known or measured, or, alternatively, the expected amount of information supplied when the symbol becomes known. The set of messages of length N symbols from the source will then have an entropy of NH.
The thermodynamic entropy is only due to positional uncertainty, so we may take the "alphabet" to be any of the r different species in the gas and, at equilibrium, the probability that a given particle is of type i is simply the mole fraction xi for that particle. Since we are dealing with ideal gases, the identity of nearby particles is irrelevant. Multiplying by the number of particles N yields the change in entropy of the entire system from the unmixed case in which all of the pi were either 1 or 0. We again obtain the entropy of mixing on multiplying by the Boltzmann constant kB:
{\displaystyle \Delta S_{\text{mix}}=-Nk_{\text{B}}\sum _{i=1}^{r}x_{i}\ln x_{i}}
So thermodynamic entropy with r chemical species with a total of N particles has a parallel to an information source that has r distinct symbols with messages that are N symbols long.
=== Application to gases ===
In gases there is a lot more spatial uncertainty because most of their volume is merely empty space. We can regard the mixing process as allowing the contents of the two originally separate containers to expand into the combined volume of the two conjoined containers. The two lattices that allow us to conceptually localize molecular centers of mass also join. The total number of empty cells is the sum of the numbers of empty cells in the two components prior to mixing. Consequently, that part of the spatial uncertainty concerning whether any molecule is present in a lattice cell is the sum of the initial values, and does not increase upon "mixing".
Almost everywhere we look, we find empty lattice cells. Nevertheless, we do find molecules in a few occupied cells. When there is real mixing, for each of those few occupied cells, there is a contingent uncertainty about which kind of molecule it is. When there is no real mixing because the two substances are identical, there is no uncertainty about which kind of molecule it is. Using conditional probabilities, it turns out that the analytical problem for the small subset of occupied cells is exactly the same as for mixed liquids, and the increase in the entropy, or spatial uncertainty, has exactly the same form as obtained previously. Obviously the subset of occupied cells is not the same at different times. But only when there is real mixing and an occupied cell is found do we ask which kind of molecule is there.
See also: Gibbs paradox, in which it would seem that "mixing" two samples of the same gas would produce entropy.
=== Application to solutions ===
If the solute is a crystalline solid, the argument is much the same. A crystal has no spatial uncertainty at all, except for crystallographic defects, and a (perfect) crystal allows us to localize the molecules using the crystal symmetry group. The fact that volumes do not add when dissolving a solid in a liquid is not important for condensed phases. If the solute is not crystalline, we can still use a spatial lattice, as good an approximation for an amorphous solid as it is for a liquid.
The Flory–Huggins solution theory provides the entropy of mixing for polymer solutions, in which the macromolecules are huge compared to the solvent molecules. In this case, the assumption is made that each monomer subunit in the polymer chain occupies a lattice site.
Note that solids in contact with each other also slowly interdiffuse, and solid mixtures of two or more components may be made at will (alloys, semiconductors, etc.). Again, the same equations for the entropy of mixing apply, but only for homogeneous, uniform phases.
== Mixing under other constraints ==
=== Mixing with and without change of available volume ===
In the established customary usage, expressed in the lead section of this article, the entropy of mixing comes from two mechanisms, the intermingling and possible interactions of the distinct molecular species, and the change in the volume available for each molecular species, or the change in concentration of each molecular species. For ideal gases, the entropy of mixing at prescribed common temperature and pressure has nothing to do with mixing in the sense of intermingling and interactions of molecular species, but only with the expansion into the common volume.: 273
According to Fowler and Guggenheim (1939/1965), the conflating of the just-mentioned two mechanisms for the entropy of mixing is well established in customary terminology, but can be confusing unless it is borne in mind that the independent variables are the common initial and final temperature and total pressure; if the respective partial pressures or the total volume are chosen as independent variables instead of the total pressure, the description is different.
==== Mixing with each gas kept at constant partial volume, with changing total volume ====
In contrast to the established customary usage, "mixing" might be conducted reversibly at constant volume for each of two fixed masses of gases of equal volume, being mixed by gradually merging their initially separate volumes by use of two ideal semipermeable membranes, each permeable only to one of the respective gases, so that the respective volumes available to each gas remain constant during the merge. Either one of the common temperature or the common pressure is chosen to be independently controlled by the experimenter, the other being allowed to vary so as to maintain constant volume for each mass of gas. In this kind of "mixing", the final common volume is equal to each of the respective separate initial volumes, and each gas finally occupies the same volume as it did initially.: 163–164 : 217
This constant volume kind of "mixing", in the special case of perfect gases, is referred to in what is sometimes called Gibbs' theorem. It states that the entropy of such "mixing" of perfect gases is zero.
==== Mixing at constant total volume and changing partial volumes, with mechanically controlled varying pressure, and constant temperature ====
An experimental demonstration may be considered. The two distinct gases, in a cylinder of constant total volume, are at first separated by two contiguous pistons made respectively of two suitably specific ideal semipermeable membranes. Ideally slowly and fictively reversibly, at constant temperature, the gases are allowed to mix in the volume between the separating membranes, forcing them apart, thereby supplying work to an external system. The energy for the work comes from the heat reservoir that keeps the temperature constant. Then, by externally forcing ideally slowly the separating membranes together, back to contiguity, work is done on the mixed gases, fictively reversibly separating them again, so that heat is returned to the heat reservoir at constant temperature. Because the mixing and separation are ideally slow and fictively reversible, the work supplied by the gases as they mix is equal to the work done in separating them again. Passing from fictive reversibility to physical reality, some amount of additional work, that remains external to the gases and the heat reservoir, must be provided from an external source for this cycle, as required by the second law of thermodynamics, because this cycle has only one heat reservoir at constant temperature, and the external provision of work cannot be completely efficient.: 163–164
== Gibbs' paradox: "mixing" of identical species versus mixing of closely similar but non-identical species ==
For entropy of mixing to exist, the putatively mixing molecular species must be chemically or physically detectably distinct. Thus arises the so-called Gibbs paradox, as follows. If molecular species are identical, there is no entropy change on mixing them, because, defined in thermodynamic terms, there is no mass transfer, and thus no thermodynamically recognized process of mixing. Yet the slightest detectable difference in constitutive properties between the two species yields a thermodynamically recognized process of transfer with mixing, and a possibly considerable entropy change, namely the entropy of mixing.
The "paradox" arises because any detectable constitutive distinction, no matter how slight, can lead to a considerably large change in amount of entropy as a result of mixing. Though a continuous change in the properties of the materials that are mixed might make the degree of constitutive difference tend continuously to zero, the entropy change would nonetheless vanish discontinuously when the difference reached zero.: 87
From a general physical viewpoint, this discontinuity is paradoxical. But from a specifically thermodynamic viewpoint, it is not paradoxical, because in that discipline the degree of constitutive difference is not questioned; it is either there or not there. Gibbs himself did not see it as paradoxical. Distinguishability of two materials is a constitutive, not a thermodynamic, difference, for the laws of thermodynamics are the same for every material, while their constitutive characteristics are diverse.
Though one might imagine a continuous decrease of the constitutive difference between any two chemical substances, physically it cannot be continuously decreased till it actually vanishes.: 164 It is hard to think of a smaller difference than that between ortho- and para-hydrogen. Yet they differ by a finite amount. The hypothesis, that the distinction might tend continuously to zero, is unphysical. This is neither examined nor explained by thermodynamics. Differences of constitution are explained by quantum mechanics, which postulates discontinuity of physical processes.
For a detectable distinction, some means should be physically available. One theoretical means would be through an ideal semi-permeable membrane.: 217 It should allow passage, backwards and forwards, of one species, while passage of the other is prevented entirely. The entirety of prevention should include perfect efficacy over a practically infinite time, in view of the nature of thermodynamic equilibrium. Even the slightest departure from ideality, as assessed over a finite time, would extend to utter non-ideality, as assessed over a practically infinite time. Such quantum phenomena as tunneling ensure that nature does not allow such membrane ideality as would support the theoretically demanded continuous decrease, to zero, of detectable distinction. The decrease to zero detectable distinction must be discontinuous.
For ideal gases, the entropy of mixing does not depend on the degree of difference between the distinct molecular species, but only on the fact that they are distinct; for non-ideal gases, the entropy of mixing can depend on the degree of difference of the distinct molecular species. The suggested or putative "mixing" of identical molecular species is not in thermodynamic terms a mixing at all, because thermodynamics refers to states specified by state variables, and does not permit an imaginary labelling of particles. Only if the molecular species are different is there mixing in the thermodynamic sense.: 217–218 : 274, 516–517
== See also ==
CALPHAD
Enthalpy of mixing
Gibbs energy
== Notes ==
== References ==
== External links ==
Online lecture | Wikipedia/Entropy_of_mixing |
In thermodynamics, entropy is often associated with the amount of order or disorder in a thermodynamic system. This stems from Rudolf Clausius' 1862 assertion that any thermodynamic process always "admits to being reduced [reduction] to the alteration in some way or another of the arrangement of the constituent parts of the working body" and that internal work associated with these alterations is quantified energetically by a measure of "entropy" change, according to the following expression:
{\displaystyle \int \!{\frac {\delta Q}{T}}\geq 0}
where Q = motional energy ("heat") that is transferred reversibly to the system from the surroundings and T = the absolute temperature at which the transfer occurs.
In the years to follow, Ludwig Boltzmann translated these 'alterations of arrangement' into a probabilistic view of order and disorder in gas-phase molecular systems. In the context of entropy, "perfect internal disorder" has often been regarded as describing thermodynamic equilibrium, but since the thermodynamic concept is so far from everyday thinking, the use of the term in physics and chemistry has caused much confusion and misunderstanding.
In recent years, to interpret the concept of entropy, by further describing the 'alterations of arrangement', there has been a shift away from the words 'order' and 'disorder', to words such as 'spread' and 'dispersal'.
== History ==
This "molecular ordering" entropy perspective traces its origins to molecular movement interpretations developed by Rudolf Clausius in the 1850s, particularly with his 1862 visual conception of molecular disgregation. Similarly, in 1859, after reading a paper on the diffusion of molecules by Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics.
In 1864, Ludwig Boltzmann, a young student in Vienna, came across Maxwell's paper and was so inspired by it that he spent much of his long and distinguished life developing the subject further. Later, Boltzmann, in efforts to develop a kinetic theory for the behavior of a gas, applied the laws of probability to Maxwell's and Clausius' molecular interpretation of entropy so as to begin to interpret entropy in terms of order and disorder. Similarly, in 1882 Hermann von Helmholtz used the word "Unordnung" (disorder) to describe entropy.
== Overview ==
To highlight the fact that order and disorder are commonly understood to be measured in terms of entropy, below are current science encyclopedia and science dictionary definitions of entropy:
A measure of the unavailability of a system's energy to do work; also a measure of disorder; the higher the entropy the greater the disorder.
A measure of disorder; the higher the entropy the greater the disorder.
In thermodynamics, a parameter representing the state of disorder of a system at the atomic, ionic, or molecular level; the greater the disorder the higher the entropy.
A measure of disorder in the universe or of the unavailability of the energy in a system to do work.
Entropy and disorder also have associations with equilibrium. Technically, entropy, from this perspective, is defined as a thermodynamic property which serves as a measure of how close a system is to equilibrium—that is, to perfect internal disorder. Likewise, the value of the entropy of a distribution of atoms and molecules in a thermodynamic system is a measure of the disorder in the arrangements of its particles. In a stretched out piece of rubber, for example, the arrangement of the molecules of its structure has an "ordered" distribution and has zero entropy, while the "disordered" kinky distribution of the atoms and molecules in the rubber in the non-stretched state has positive entropy. Similarly, in a gas, the order is perfect and the measure of entropy of the system has its lowest value when all the molecules are in one place, whereas when more points are occupied the gas is all the more disorderly and the measure of the entropy of the system has its largest value.
In systems ecology, as another example, the entropy of a collection of items comprising a system is defined as a measure of their disorder or equivalently the relative likelihood of the instantaneous configuration of the items. Moreover, according to theoretical ecologist and chemical engineer Robert Ulanowicz, "that entropy might provide a quantification of the heretofore subjective notion of disorder has spawned innumerable scientific and philosophical narratives." In particular, many biologists have taken to speaking in terms of the entropy of an organism, or about its antonym negentropy, as a measure of the structural order within an organism.
The mathematical basis for the association of entropy with order and disorder began, essentially, with the famous Boltzmann formula, {\displaystyle S=k_{\mathrm {B} }\ln W}, which relates entropy S to the number of possible states W in which a system can be found. As an example, consider a box that is divided into two sections. What is the probability that a certain number, or all, of the particles will be found in one section versus the other when the particles are randomly allocated to different places within the box? If you only have one particle, then that system of one particle can subsist in two states, one side of the box versus the other. If you have more than one particle, or define states as being further locational subdivisions of the box, the entropy is larger because the number of states is greater. The relationship between entropy, order, and disorder in the Boltzmann equation is so clear among physicists that, according to the views of thermodynamic ecologists Sven Jorgensen and Yuri Svirezhev, "it is obvious that entropy is a measure of order or, most likely, disorder in the system." In this direction, the second law of thermodynamics, as famously enunciated by Rudolf Clausius in 1865, states that the entropy of the universe tends to a maximum.
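For the divided-box example above, a short sketch (illustrative only; the particle counts are arbitrary) makes the growth of W, and hence of S = k_B ln W, concrete:

import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(num_states):
    # S = k_B ln W
    return k_B * math.log(num_states)

# Each particle may sit in either half of the box, so W = 2**N.
for N in (1, 10, 100):
    W = 2 ** N
    print(N, boltzmann_entropy(W))  # S grows linearly with N: S = N k_B ln 2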
Thus, if entropy is associated with disorder and if the entropy of the universe is headed towards maximal entropy, then many are often puzzled as to the nature of the "ordering" process and operation of evolution in relation to Clausius' most famous version of the second law, which states that the universe is headed towards maximal "disorder". In the 2003 book SYNC – the Emerging Science of Spontaneous Order by Steven Strogatz, for example, we find "Scientists have often been baffled by the existence of spontaneous order in the universe. The laws of thermodynamics seem to dictate the opposite, that nature should inexorably degenerate toward a state of greater disorder, greater entropy. Yet all around us we see magnificent structures—galaxies, cells, ecosystems, human beings—that have all somehow managed to assemble themselves."
The common argument used to explain this is that, locally, entropy can be lowered by external action, e.g. solar heating action, and that this applies to machines, such as a refrigerator, where the entropy in the cold chamber is being reduced, to growing crystals, and to living organisms. This local increase in order is, however, only possible at the expense of an entropy increase in the surroundings; here more disorder must be created. This statement is conditioned on living systems being open systems, in which heat, mass, and/or work may transfer into or out of the system. Unlike temperature, the putative entropy of a living system would drastically change if the organism were thermodynamically isolated. If an organism were in this type of "isolated" situation, its entropy would increase markedly as the once-living components of the organism decayed to an unrecognizable mass.
== Phase change ==
Owing to these early developments, the typical example of entropy change ΔS is that associated with phase change. Solids, for example, which are typically ordered on the molecular scale, usually have smaller entropy than liquids, liquids have smaller entropy than gases, and colder gases have smaller entropy than hotter gases. Moreover, according to the third law of thermodynamics, at absolute zero temperature crystalline structures are approximated to have perfect "order" and zero entropy. This correlation occurs because the number of different microscopic quantum energy states available to an ordered system is usually much smaller than the number of states available to a system that appears to be disordered.
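The entropy change at a phase transition can be estimated from the latent heat as ΔS = ΔH/T. A rough illustrative calculation for melting ice, using commonly cited literature values (assumed here, not taken from the text above):

dH_fus = 6010.0   # molar enthalpy of fusion of water, J/mol (assumed literature value)
T_m = 273.15      # melting point of ice, K

dS_fus = dH_fus / T_m  # entropy gained when the ordered solid becomes liquid
print(dS_fus)          # ≈ 22.0 J/(mol·K), positive: the liquid is more "disordered"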
In his famous 1896 Lectures on Gas Theory, Boltzmann diagrams the structure of a solid body by postulating that each molecule in the body has a "rest position". According to Boltzmann, if it approaches a neighboring molecule it is repelled by it, but if it moves farther away there is an attraction. This, of course, was a revolutionary perspective in its time; many, during these years, did not believe in the existence of either atoms or molecules (see: history of the molecule). According to these early views, and others such as those developed by William Thomson, if energy in the form of heat is added to a solid, so as to make it into a liquid or a gas, a common depiction is that the ordering of the atoms and molecules becomes more random and chaotic with an increase in temperature.
Thus, according to Boltzmann, owing to increases in thermal motion, whenever heat is added to a working substance, the rest position of molecules will be pushed apart, the body will expand, and this will create more molar-disordered distributions and arrangements of molecules. These disordered arrangements, subsequently, correlate, via probability arguments, to an increase in the measure of entropy.
== Entropy-driven order ==
Entropy has been historically, e.g. by Clausius and Helmholtz, associated with disorder. However, in common speech, order is used to describe organization, structural regularity, or form, like that found in a crystal compared with a gas. This commonplace notion of order is described quantitatively by Landau theory. In Landau theory, the development of order in the everyday sense coincides with the change in the value of a mathematical quantity, a so-called order parameter. An example of an order parameter for crystallization is "bond orientational order" describing the development of preferred directions (the crystallographic axes) in space. For many systems, phases with more structural (e.g. crystalline) order exhibit less entropy than fluid phases under the same thermodynamic conditions. In these cases, labeling phases as ordered or disordered according to the relative amount of entropy (per the Clausius/Helmholtz notion of order/disorder) or via the existence of structural regularity (per the Landau notion of order/disorder) produces matching labels.
However, there is a broad class of systems that manifest entropy-driven order, in which phases with organization or structural regularity, e.g. crystals, have higher entropy than structurally disordered (e.g. fluid) phases under the same thermodynamic conditions. In these systems phases that would be labeled as disordered by virtue of their higher entropy (in the sense of Clausius or Helmholtz) are ordered in both the everyday sense and in Landau theory.
Under suitable thermodynamic conditions, entropy has been predicted or discovered to induce systems to form ordered liquid-crystals, crystals, and quasicrystals. In many systems, directional entropic forces drive this behavior. More recently, it has been shown that it is possible to precisely engineer particles for target ordered structures.
== Adiabatic demagnetization ==
In the quest for ultra-cold temperatures, a temperature-lowering technique called adiabatic demagnetization is used, in which atomic entropy considerations, describable in order-disorder terms, are exploited. In this process, a sample of a solid such as chrome alum salt, whose molecules are equivalent to tiny magnets, is placed inside an insulated enclosure cooled to a low temperature, typically 2 or 4 kelvins, and a strong magnetic field is applied to the container using a powerful external magnet, so that the tiny molecular magnets are aligned, forming a well-ordered "initial" state at that low temperature. This magnetic alignment means that the magnetic energy of each molecule is minimal. The external magnetic field is then reduced, a removal that is considered to be closely reversible. Following this reduction, the atomic magnets then assume random, less-ordered orientations, owing to thermal agitations, in the "final" state.
The "disorder" and hence the entropy associated with the change in the atomic alignments has clearly increased. In terms of energy flow, the movement from a magnetically aligned state requires energy from the thermal motion of the molecules, converting thermal energy into magnetic energy. Yet, according to the second law of thermodynamics, because no heat can enter or leave the container, due to its adiabatic insulation, the system should exhibit no change in entropy, i.e. ΔS = 0. The increase in disorder, however, associated with the randomizing directions of the atomic magnets represents an entropy increase? To compensate for this, the disorder (entropy) associated with the temperature of the specimen must decrease by the same amount. The temperature thus falls as a result of this process of thermal energy being converted into magnetic energy. If the magnetic field is then increased, the temperature rises and the magnetic salt has to be cooled again using a cold material such as liquid helium.
== Difficulties with the term "disorder" ==
In recent years the long-standing use of the term "disorder" to discuss entropy has met with some criticism. Critics of the terminology state that entropy is not a measure of 'disorder' or 'chaos', but rather a measure of energy's diffusion or dispersal to more microstates. Shannon's use of the term 'entropy' in information theory refers to the most compressed, or least dispersed, amount of code needed to encompass the content of a signal.
== See also ==
Entropy
Entropy production
Entropy rate
History of entropy
Entropy of mixing
Entropy (information theory)
Entropy (computing)
Entropy (energy dispersal)
Second law of thermodynamics
Entropy (statistical thermodynamics)
Entropy (classical thermodynamics)
== References ==
== External links ==
Lambert, F. L. Entropy Sites — A Guide
Lambert, F. L. Shuffled Cards, Messy Desks, and Disorderly Dorm Rooms – Examples of Entropy Increase? Nonsense! Journal of Chemical Education | Wikipedia/Entropy_(order_and_disorder) |
Chemical process modeling is a computer modeling technique used in chemical engineering process design. It typically involves using purpose-built software to define a system of interconnected components, which are then solved so that the steady-state or dynamic behavior of the system can be predicted. The system components and connections are represented as a process flow diagram. Simulations can be as simple as the mixing of two substances in a tank, or as complex as an entire alumina refinery.
Chemical process modeling requires a knowledge of the properties of the chemicals involved in the simulation, as well as the physical properties and characteristics of the components of the system, such as tanks, pumps, pipes, pressure vessels, and so on.
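As a minimal illustration of the kind of balance such software solves (a toy steady-state sketch, not the API of any particular simulator), mixing two streams in a tank fixes the outlet flow and composition through overall and component mass balances:

def mix_streams(streams):
    # Steady-state mixer: each stream is (mass_flow in kg/s, mass fraction of solute).
    # Returns the outlet (mass_flow, mass_fraction) from overall and component balances.
    total_flow = sum(f for f, _ in streams)       # overall mass balance
    solute_flow = sum(f * x for f, x in streams)  # component balance
    return total_flow, solute_flow / total_flow

# 2 kg/s of a 10 wt% solution mixed with 3 kg/s of a 40 wt% solution (assumed figures)
print(mix_streams([(2.0, 0.10), (3.0, 0.40)]))  # (5.0 kg/s, 0.28 mass fraction)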
== See also ==
Manufacturing process management
Process simulation
Process optimization
Process design (chemical engineering)
Process systems engineering
== External links ==
Real world examples by PEA
Comprehensive directory of topics in plant simulation, process modeling and chemical engineering, by Kimmo Klemola, Dr. Tech. (Chem. Eng.), Lappeenranta, Finland.
Parameter calibration in chemical reaction network models
== References == | Wikipedia/Chemical_process_modeling |
Systems engineering is an interdisciplinary field of engineering and engineering management that focuses on how to design, integrate, and manage complex systems over their life cycles. At its core, systems engineering utilizes systems thinking principles to organize this body of knowledge. The individual outcome of such efforts, an engineered system, can be defined as a combination of components that work in synergy to collectively perform a useful function.
Issues such as requirements engineering, reliability, logistics, coordination of different teams, testing and evaluation, maintainability, and many other disciplines, aka "ilities", necessary for successful system design, development, implementation, and ultimate decommission become more difficult when dealing with large or complex projects. Systems engineering deals with work processes, optimization methods, and risk management tools in such projects. It overlaps technical and human-centered disciplines such as industrial engineering, production systems engineering, process systems engineering, mechanical engineering, manufacturing engineering, production engineering, control engineering, software engineering, electrical engineering, cybernetics, aerospace engineering, organizational studies, civil engineering and project management. Systems engineering ensures that all likely aspects of a project or system are considered and integrated into a whole.
The systems engineering process is a discovery process that is quite unlike a manufacturing process. A manufacturing process is focused on repetitive activities that achieve high-quality outputs with minimum cost and time. The systems engineering process must begin by discovering the real problems that need to be resolved and identifying the most probable or highest-impact failures that can occur. Systems engineering involves finding solutions to these problems.
== History ==
The term systems engineering can be traced back to Bell Telephone Laboratories in the 1940s. The need to identify and manipulate the properties of a system as a whole, which in complex engineering projects may greatly differ from the sum of the parts' properties, motivated various industries, especially those developing systems for the U.S. military, to apply the discipline.
When it was no longer possible to rely on design evolution to improve upon a system and the existing tools were not sufficient to meet growing demands, new methods began to be developed that addressed the complexity directly. The continuing evolution of systems engineering comprises the development and identification of new methods and modeling techniques. These methods aid in a better comprehension of the design and developmental control of engineering systems as they grow more complex. Popular tools that are often used in the systems engineering context were developed during these times, including Universal Systems Language (USL), Unified Modeling Language (UML), Quality function deployment (QFD), and Integration Definition (IDEF).
In 1990, a professional society for systems engineering, the National Council on Systems Engineering (NCOSE), was founded by representatives from a number of U.S. corporations and organizations. NCOSE was created to address the need for improvements in systems engineering practices and education. As a result of growing involvement from systems engineers outside of the U.S., the name of the organization was changed to the International Council on Systems Engineering (INCOSE) in 1995. Schools in several countries offer graduate programs in systems engineering, and continuing education options are also available for practicing engineers.
== Concept ==
Systems engineering signifies only an approach and, more recently, a discipline in engineering. The aim of education in systems engineering is to formalize various approaches simply and, in doing so, to identify new methods and research opportunities similar to those that occur in other fields of engineering. As an approach, systems engineering is holistic and interdisciplinary in flavor.
=== Origins and traditional scope ===
The traditional scope of engineering embraces the conception, design, development, production, and operation of physical systems. Systems engineering, as originally conceived, falls within this scope. "Systems engineering", in this sense of the term, refers to the building of engineering concepts.
=== Evolution to a broader scope ===
The use of the term "systems engineer" has evolved over time to embrace a wider, more holistic concept of "systems" and of engineering processes. This evolution of the definition has been a subject of ongoing controversy, and the term continues to apply to both the narrower and a broader scope.
Traditional systems engineering was seen as a branch of engineering in the classical sense, that is, as applied only to physical systems, such as spacecraft and aircraft. More recently, systems engineering has evolved to take on a broader meaning especially when humans were seen as an essential component of a system. Peter Checkland, for example, captures the broader meaning of systems engineering by stating that 'engineering' "can be read in its general sense; you can engineer a meeting or a political agreement.": 10
Consistent with the broader scope of systems engineering, the Systems Engineering Body of Knowledge (SEBoK) has defined three types of systems engineering:
Product Systems Engineering (PSE) is the traditional systems engineering focused on the design of physical systems consisting of hardware and software.
Enterprise Systems Engineering (ESE) pertains to the view of enterprises, that is, organizations or combinations of organizations, as systems.
Service Systems Engineering (SSE) has to do with the engineering of service systems. Checkland defines a service system as a system which is conceived as serving another system. Most civil infrastructure systems are service systems.
=== Holistic view ===
Systems engineering focuses on analyzing and eliciting customer needs and required functionality early in the development cycle, documenting requirements, then proceeding with design synthesis and system validation while considering the complete problem, the system lifecycle. This includes fully understanding all of the stakeholders involved. Oliver et al. claim that the systems engineering process can be decomposed into:
A Systems Engineering Technical Process
A Systems Engineering Management Process
Within Oliver's model, the goal of the Management Process is to organize the technical effort in the lifecycle, while the Technical Process includes assessing available information, defining effectiveness measures, creating a behavior model, creating a structure model, performing trade-off analysis, and creating a sequential build and test plan.
Although there are several models used in industry, depending on their application, all of them aim to identify the relation between the various stages mentioned above and to incorporate feedback. Examples of such models include the Waterfall model and the VEE model (also called the V model).
=== Interdisciplinary field ===
System development often requires contribution from diverse technical disciplines. By providing a systems (holistic) view of the development effort, systems engineering helps mold all the technical contributors into a unified team effort, forming a structured development process that proceeds from concept to production to operation and, in some cases, to termination and disposal. In an acquisition, the holistic integrative discipline combines contributions and balances tradeoffs among cost, schedule, and performance while maintaining an acceptable level of risk covering the entire life cycle of the item.
This perspective is often replicated in educational programs, in that systems engineering courses are taught by faculty from other engineering departments, which helps create an interdisciplinary environment.
=== Managing complexity ===
The need for systems engineering arose with the increase in complexity of systems and projects, in turn exponentially increasing the possibility of component friction, and therefore the unreliability of the design. When speaking in this context, complexity incorporates not only engineering systems but also the logical human organization of data. At the same time, a system can become more complex due to an increase in size as well as with an increase in the amount of data, variables, or the number of fields that are involved in the design. The International Space Station is an example of such a system.
The development of smarter control algorithms, microprocessor design, and analysis of environmental systems also come within the purview of systems engineering. Systems engineering encourages the use of tools and methods to better comprehend and manage complexity in systems. Some examples of these tools can be seen here:
System architecture
System model, modeling, and simulation
Mathematical optimization
System dynamics
Systems analysis
Statistical analysis
Reliability engineering
Decision making
Taking an interdisciplinary approach to engineering systems is inherently complex since the behavior of and interaction among system components is not always immediately well defined or understood. Defining and characterizing such systems and subsystems and the interactions among them is one of the goals of systems engineering. In doing so, the gap that exists between informal requirements from users, operators, marketing organizations, and technical specifications is successfully bridged.
=== Scope ===
The principles of systems engineering – holism, emergent behavior, boundary, et al. – can be applied to any system, complex or otherwise, provided systems thinking is employed at all levels. Besides defense and aerospace, many information and technology-based companies, software development firms, and industries in the field of electronics & communications require systems engineers as part of their team.
An analysis by the INCOSE Systems Engineering Center of Excellence (SECOE) indicates that optimal effort spent on systems engineering is about 15–20% of the total project effort. At the same time, studies have shown that systems engineering essentially leads to a reduction in costs among other benefits. However, no quantitative survey at a larger scale encompassing a wide variety of industries has been conducted until recently. Such studies are underway to determine the effectiveness and quantify the benefits of systems engineering.
Systems engineering encourages the use of modeling and simulation to validate assumptions or theories on systems and the interactions within them.
Methods that allow the early detection of possible failures, from safety engineering, are integrated into the design process. At the same time, decisions made at the beginning of a project whose consequences are not clearly understood can have enormous implications later in the life of a system, and it is the task of the modern systems engineer to explore these issues and make critical decisions. No method guarantees that today's decisions will still be valid when a system goes into service years or decades after it was first conceived. However, there are techniques that support the process of systems engineering. Examples include soft systems methodology, Jay Wright Forrester's system dynamics method, and the Unified Modeling Language (UML)—all currently being explored, evaluated, and developed to support the engineering decision process.
== Education ==
Education in systems engineering is often seen as an extension to the regular engineering courses, reflecting the industry attitude that engineering students need a foundational background in one of the traditional engineering disciplines (e.g. aerospace engineering, civil engineering, electrical engineering, mechanical engineering, manufacturing engineering, industrial engineering, chemical engineering), plus practical, real-world experience to be effective as systems engineers. Undergraduate university programs explicitly in systems engineering are growing in number but remain uncommon; the degrees that include such material are most often presented as a BS in Industrial Engineering. Typically, programs (either by themselves or in combination with interdisciplinary study) are offered beginning at the graduate level in both academic and professional tracks, resulting in the grant of either a MS/MEng or Ph.D./EngD degree.
INCOSE, in collaboration with the Systems Engineering Research Center at Stevens Institute of Technology maintains a regularly updated directory of worldwide academic programs at suitably accredited institutions. As of 2017, it lists over 140 universities in North America offering more than 400 undergraduate and graduate programs in systems engineering. Widespread institutional acknowledgment of the field as a distinct subdiscipline is quite recent; the 2009 edition of the same publication reported the number of such schools and programs at only 80 and 165, respectively.
Education in systems engineering can be taken as systems-centric or domain-centric:
Systems-centric programs treat systems engineering as a separate discipline and most of the courses are taught focusing on systems engineering principles and practice.
Domain-centric programs offer systems engineering as an option that can be exercised with another major field in engineering.
Both of these patterns strive to educate the systems engineer who is able to oversee interdisciplinary projects with the depth required of a core engineer.
== Systems engineering topics ==
Systems engineering tools are strategies, procedures, and techniques that aid in performing systems engineering on a project or product. The purpose of these tools varies from database management, graphical browsing, simulation, and reasoning, to document production, neutral import/export, and more.
=== System ===
There are many definitions of what a system is in the field of systems engineering. Below are a few authoritative definitions:
ANSI/EIA-632-1999: "An aggregation of end products and enabling products to achieve a given purpose."
DAU Systems Engineering Fundamentals: "an integrated composite of people, products, and processes that provide a capability to satisfy a stated need or objective."
IEEE Std 1220-1998: "A set or arrangement of elements and processes that are related and whose behavior satisfies customer/operational needs and provides for life cycle sustainment of the products."
INCOSE Systems Engineering Handbook: "homogeneous entity that exhibits predefined behavior in the real world and is composed of heterogeneous parts that do not individually exhibit that behavior and an integrated configuration of components and/or subsystems."
INCOSE: "A system is a construct or collection of different elements that together produce results not obtainable by the elements alone. The elements, or parts, can include people, hardware, software, facilities, policies, and documents; that is, all things required to produce systems-level results. The results include system-level qualities, properties, characteristics, functions, behavior, and performance. The value added by the system as a whole, beyond that contributed independently by the parts, is primarily created by the relationship among the parts; that is, how they are interconnected."
ISO/IEC 15288:2008: "A combination of interacting elements organized to achieve one or more stated purposes."
NASA Systems Engineering Handbook: "(1) The combination of elements that function together to produce the capability to meet a need. The elements include all hardware, software, equipment, facilities, personnel, processes, and procedures needed for this purpose. (2) The end product (which performs operational functions) and enabling products (which provide life-cycle support services to the operational end products) that make up a system."
=== Systems engineering processes ===
Systems engineering processes encompass all creative, manual, and technical activities necessary to define the product and which need to be carried out to convert a system definition to a sufficiently detailed system design specification for product manufacture and deployment. Design and development of a system can be divided into four stages, each with different definitions:
Task definition (informative definition)
Conceptual stage (cardinal definition)
Design stage (formative definition)
Implementation stage (manufacturing definition)
Depending on their application, tools are used for various stages of the systems engineering process:
=== Using models ===
Models play important and diverse roles in systems engineering. A model can be defined in several ways, including:
An abstraction of reality designed to answer specific questions about the real world
An imitation, analog, or representation of a real-world process or structure; or
A conceptual, mathematical, or physical tool to assist a decision-maker.
Together, these definitions are broad enough to encompass physical engineering models used in the verification of a system design, as well as schematic models like a functional flow block diagram and mathematical (i.e. quantitative) models used in the trade study process. This section focuses on the last.
The main reason for using mathematical models and diagrams in trade studies is to provide estimates of system effectiveness, performance or technical attributes, and cost from a set of known or estimable quantities. Typically, a collection of separate models is needed to provide all of these outcome variables. The heart of any mathematical model is a set of meaningful quantitative relationships among its inputs and outputs. These relationships can be as simple as adding up constituent quantities to obtain a total, or as complex as a set of differential equations describing the trajectory of a spacecraft in a gravitational field. Ideally, the relationships express causality, not just correlation. Furthermore, key to successful systems engineering activities are also the methods with which these models are efficiently and effectively managed and used to simulate the systems. However, diverse domains often present recurring problems of modeling and simulation for systems engineering, and new advancements are aiming to cross-fertilize methods among distinct scientific and engineering communities, under the title of 'Modeling & Simulation-based Systems Engineering'.
=== Modeling formalisms and graphical representations ===
Initially, when the primary purpose of a systems engineer is to comprehend a complex problem, graphic representations of a system are used to communicate a system's functional and data requirements. Common graphical representations include:
Functional flow block diagram (FFBD)
Model-based design
Data flow diagram (DFD)
N2 chart
IDEF0 diagram
Use case diagram
Sequence diagram
Block diagram
Signal-flow graph
USL function maps and type maps
Enterprise architecture frameworks
A graphical representation relates the various subsystems or parts of a system through functions, data, or interfaces. Any or each of the above methods is used in an industry based on its requirements. For instance, the N2 chart may be used where interfaces between systems are important. Part of the design phase is to create structural and behavioral models of the system.
Once the requirements are understood, it is now the responsibility of a systems engineer to refine them and to determine, along with other engineers, the best technology for a job. At this point starting with a trade study, systems engineering encourages the use of weighted choices to determine the best option. A decision matrix, or Pugh method, is one way (QFD is another) to make this choice while considering all criteria that are important. The trade study in turn informs the design, which again affects graphic representations of the system (without changing the requirements). In an SE process, this stage represents the iterative step that is carried out until a feasible solution is found. A decision matrix is often populated using techniques such as statistical analysis, reliability analysis, system dynamics (feedback control), and optimization methods.
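A weighted decision matrix can be sketched in a few lines; the criteria, weights, and scores below are invented purely for illustration:

# Hypothetical trade study: choose a technology by weighted criteria scores.
criteria_weights = {"cost": 0.4, "reliability": 0.35, "schedule": 0.25}

# Scores on a 1-5 scale for each candidate against each criterion (assumed values).
candidates = {
    "Option A": {"cost": 4, "reliability": 3, "schedule": 5},
    "Option B": {"cost": 2, "reliability": 5, "schedule": 4},
}

def weighted_score(scores, weights):
    # Sum of criterion scores weighted by criterion importance
    return sum(weights[c] * s for c, s in scores.items())

for name, scores in candidates.items():
    print(name, weighted_score(scores, criteria_weights))
# Option A: 0.4*4 + 0.35*3 + 0.25*5 = 3.90
# Option B: 0.4*2 + 0.35*5 + 0.25*4 = 3.55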
=== Other tools ===
==== Systems Modeling Language ====
Systems Modeling Language (SysML), a modeling language used for systems engineering applications, supports the specification, analysis, design, verification and validation of a broad range of complex systems.
==== Lifecycle Modeling Language ====
Lifecycle Modeling Language (LML), is an open-standard modeling language designed for systems engineering that supports the full lifecycle: conceptual, utilization, support, and retirement stages.
== Related fields and sub-fields ==
Many related fields may be considered tightly coupled to systems engineering. The following areas have contributed to the development of systems engineering as a distinct entity:
=== Cognitive systems engineering ===
Cognitive systems engineering (CSE) is a specific approach to the description and analysis of human-machine systems or sociotechnical systems. The three main themes of CSE are how humans cope with complexity, how work is accomplished by the use of artifacts, and how human-machine systems and socio-technical systems can be described as joint cognitive systems. CSE has since its beginning become a recognized scientific discipline, sometimes also referred to as cognitive engineering. The concept of a Joint Cognitive System (JCS) has in particular become widely used as a way of understanding how complex socio-technical systems can be described with varying degrees of resolution. The more than 20 years of experience with CSE has been described extensively.
=== Configuration management ===
Like systems engineering, configuration management as practiced in the defense and aerospace industry is a broad systems-level practice. The field parallels the taskings of systems engineering; where systems engineering deals with requirements development, allocation to development items and verification, configuration management deals with requirements capture, traceability to the development item, and audit of development item to ensure that it has achieved the desired functionality and outcomes that systems engineering and/or Test and Verification Engineering have obtained and proven through objective testing.
=== Control engineering ===
Control engineering and its design and implementation of control systems, used extensively in nearly every industry, is a large sub-field of systems engineering. The cruise control on an automobile and the guidance system for a ballistic missile are two examples. Control systems theory is an active field of applied mathematics involving the investigation of solution spaces and the development of new methods for the analysis of the control process.
=== Industrial engineering ===
Industrial engineering is a branch of engineering that concerns the development, improvement, implementation, and evaluation of integrated systems of people, money, knowledge, information, equipment, energy, material, and process. Industrial engineering draws upon the principles and methods of engineering analysis and synthesis, as well as mathematical, physical, and social sciences together with the principles and methods of engineering analysis and design to specify, predict, and evaluate results obtained from such systems.
=== Production Systems Engineering ===
Production Systems Engineering (PSE) is an emerging branch of Engineering intended to uncover fundamental principles of production systems and utilize them for analysis, continuous improvement, and design.
=== Interface design ===
Interface design and its specification are concerned with assuring that the pieces of a system connect and inter-operate with other parts of the system and with external systems as necessary. Interface design also includes assuring that system interfaces are able to accept new features, including mechanical, electrical, and logical interfaces, including reserved wires, plug-space, command codes, and bits in communication protocols. This is known as extensibility. Human-Computer Interaction (HCI) or Human-Machine Interface (HMI) is another aspect of interface design and is a critical aspect of modern systems engineering. Systems engineering principles are applied in the design of communication protocols for local area networks and wide area networks.
=== Mechatronic engineering ===
Mechatronic engineering, like systems engineering, is a multidisciplinary field of engineering that uses dynamic systems modeling to express tangible constructs. In that regard, it is almost indistinguishable from Systems Engineering, but what sets it apart is the focus on smaller details rather than larger generalizations and relationships. As such, both fields are distinguished by the scope of their projects rather than the methodology of their practice.
=== Operations research ===
Operations research supports systems engineering. Operations research, briefly, is concerned with the optimization of a process under multiple constraints.
=== Performance engineering ===
Performance engineering is the discipline of ensuring a system meets customer expectations for performance throughout its life. Performance is usually defined as the speed with which a certain operation is executed or the capability of executing a number of such operations in a unit of time. Performance may be degraded when operations queued to execute are throttled by limited system capacity. For example, the performance of a packet-switched network is characterized by the end-to-end packet transit delay or the number of packets switched in an hour. The design of high-performance systems uses analytical or simulation modeling, whereas the delivery of high-performance implementation involves thorough performance testing. Performance engineering relies heavily on statistics, queueing theory, and probability theory for its tools and processes.
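As one example of these tools, the textbook M/M/1 queueing model relates arrival rate, service rate, utilization, and mean residence time; the sketch below uses assumed traffic figures:

def mm1_metrics(arrival_rate, service_rate):
    # Classic M/M/1 queue: utilization, mean number in system, mean residence time
    assert arrival_rate < service_rate, "queue is unstable otherwise"
    rho = arrival_rate / service_rate      # server utilization
    L = rho / (1 - rho)                    # mean number of jobs in the system
    W = 1 / (service_rate - arrival_rate)  # mean time a job spends in the system
    return rho, L, W

# 80 packets/s arriving at a switch that serves 100 packets/s (assumed figures)
print(mm1_metrics(80.0, 100.0))  # utilization 0.8, 4 packets in system, 50 ms each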
=== Program management and project management ===
Program management (or project management) has many similarities with systems engineering, but has broader-based origins than the engineering ones of systems engineering. Project management is also closely related to both program management and systems engineering. Both include scheduling as an engineering support tool in assessing interdisciplinary concerns under the management process. In particular, the direct relationship of resources, performance features, and risk to the duration of a task, or the dependency links among tasks and impacts across the system lifecycle, are systems engineering concerns.
=== Proposal engineering ===
Proposal engineering is the application of scientific and mathematical principles to design, construct, and operate a cost-effective proposal development system. Basically, proposal engineering uses the "systems engineering process" to create a cost-effective proposal and increase the odds of a successful proposal.
=== Reliability engineering ===
Reliability engineering is the discipline of ensuring a system meets customer expectations for reliability throughout its life (i.e. it does not fail more frequently than expected). Next to the prediction of failure, it is just as much about the prevention of failure. Reliability engineering applies to all aspects of the system. It is closely associated with maintainability, availability (dependability or RAMS preferred by some), and integrated logistics support. Reliability engineering is always a critical component of safety engineering, as in failure mode and effects analysis (FMEA) and hazard fault tree analysis, and of security engineering.
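A basic reliability calculation illustrates the point (component figures assumed for illustration): for components in series, the system survives only if every component survives, so reliabilities multiply:

def series_reliability(component_reliabilities):
    # Probability that a series system survives: product of component reliabilities
    r = 1.0
    for ri in component_reliabilities:
        r *= ri
    return r

# Three components, each 99% reliable over the mission (assumed figures)
print(series_reliability([0.99, 0.99, 0.99]))  # ≈ 0.9703: series chains erode reliability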
=== Risk management ===
Risk management, the practice of assessing and dealing with risk, is one of the interdisciplinary parts of systems engineering. In development, acquisition, or operational activities, the inclusion of risk in tradeoffs with cost, schedule, and performance features involves the iterative, complex configuration management of traceability and evaluation to the scheduling and requirements management across domains and for the system lifecycle, which requires the interdisciplinary technical approach of systems engineering. Systems engineering has risk management define, tailor, implement, and monitor a structured process for risk management that is integrated into the overall effort.
=== Safety engineering ===
The techniques of safety engineering may be applied by non-specialist engineers in designing complex systems to minimize the probability of safety-critical failures. The "System Safety Engineering" function helps to identify "safety hazards" in emerging designs and may assist with techniques to "mitigate" the effects of (potentially) hazardous conditions that cannot be designed out of systems.
=== Security engineering ===
Security engineering can be viewed as an interdisciplinary field that integrates the community of practice for control systems design, reliability, safety, and systems engineering. It may involve such sub-specialties as authentication of system users, system targets, and others: people, objects, and processes.
=== Software engineering ===
From its beginnings, software engineering has helped shape modern systems engineering practice. The techniques used in the handling of the complexities of large software-intensive systems have had a major effect on the shaping and reshaping of the tools, methods, and processes of Systems Engineering.
== See also ==
== References ==
== Further reading ==
Madhavan, Guru (2024). Wicked Problems: How to Engineer a Better World. New York: W.W. Norton & Company. ISBN 978-0-393-65146-1
Blockley, D. Godfrey, P. Doing it Differently: Systems for Rethinking Infrastructure, Second Edition, ICE Publications, London, 2017.
Buede, D.M., Miller, W.D. The Engineering Design of Systems: Models and Methods, Third Edition, John Wiley and Sons, 2016.
Chestnut, H., Systems Engineering Methods. Wiley, 1967.
Gianni, D. et al. (eds.), Modeling and Simulation-Based Systems Engineering Handbook, CRC Press, 2014 at CRC
Goode, H.H., Robert E. Machol System Engineering: An Introduction to the Design of Large-scale Systems, McGraw-Hill, 1957.
Hitchins, D. (1997) World Class Systems Engineering at hitchins.net.
Lienig, J., Bruemmer, H., Fundamentals of Electronic Systems Design, Springer, 2017 ISBN 978-3-319-55839-4.
Malakooti, B. (2013). Operations and Production Systems with Multiple Objectives. John Wiley & Sons. ISBN 978-1-118-58537-5
MITRE, The MITRE Systems Engineering Guide (pdf)
NASA (2007) Systems Engineering Handbook, NASA/SP-2007-6105 Rev1, December 2007.
NASA (2013) NASA Systems Engineering Processes and Requirements Archived 27 December 2016 at the Wayback Machine NPR 7123.1B, April 2013 NASA Procedural Requirements
Oliver, D.W., et al. Engineering Complex Systems with Models and Objects. McGraw-Hill, 1997.
Parnell, G.S., Driscoll, P.J., Henderson, D.L. (eds.), Decision Making in Systems Engineering and Management, 2nd. ed., Hoboken, NJ: Wiley, 2011. This is a textbook for undergraduate students of engineering.
Ramo, S., St.Clair, R.K. The Systems Approach: Fresh Solutions to Complex Problems Through Combining Science and Practical Common Sense, Anaheim, CA: KNI, Inc, 1998.
Sage, A.P., Systems Engineering. Wiley IEEE, 1992. ISBN 0-471-53639-3.
Sage, A.P., Olson, S.R., Modeling and Simulation in Systems Engineering, 2001.
SEBOK.org, Systems Engineering Body of Knowledge (SEBoK)
Shermon, D. Systems Cost Engineering, Gower Publishing, 2009
Shishko, R., et al. (2005) NASA Systems Engineering Handbook. NASA Center for AeroSpace Information, 2005.
Stevens, R., et al. Systems Engineering: Coping with Complexity. Prentice Hall, 1998.
US Air Force, SMC Systems Engineering Primer & Handbook, 2004
US DoD Systems Management College (2001) Systems Engineering Fundamentals. Defense Acquisition University Press, 2001
US DoD Guide for Integrating Systems Engineering into DoD Acquisition Contracts, 2006
US DoD MIL-STD-499 System Engineering Management
== External links ==
ICSEng homepage
INCOSE homepage
INCOSE UK homepage
PPI SE Goldmine homepage
Systems Engineering Body of Knowledge
Systems Engineering Tools
AcqNotes DoD Systems Engineering Overview
NDIA Systems Engineering Division
A continuity equation or transport equation is an equation that describes the transport of some quantity. It is particularly simple and powerful when applied to a conserved quantity, but it can be generalized to apply to any extensive quantity. Since mass, energy, momentum, electric charge and other natural quantities are conserved under their respective appropriate conditions, a variety of physical phenomena may be described using continuity equations.
Continuity equations are a stronger, local form of conservation laws. For example, a weak version of the law of conservation of energy states that energy can neither be created nor destroyed—i.e., the total amount of energy in the universe is fixed. This statement does not rule out the possibility that a quantity of energy could disappear from one point while simultaneously appearing at another point. A stronger statement is that energy is locally conserved: energy can neither be created nor destroyed, nor can it "teleport" from one place to another—it can only move by a continuous flow. A continuity equation is the mathematical way to express this kind of statement. For example, the continuity equation for electric charge states that the amount of electric charge in any volume of space can only change by the amount of electric current flowing into or out of that volume through its boundaries.
Continuity equations more generally can include "source" and "sink" terms, which allow them to describe quantities that are often but not always conserved, such as the density of a molecular species which can be created or destroyed by chemical reactions. In an everyday example, there is a continuity equation for the number of people alive; it has a "source term" to account for people being born, and a "sink term" to account for people dying.
Any continuity equation can be expressed in an "integral form" (in terms of a flux integral), which applies to any finite region, or in a "differential form" (in terms of the divergence operator) which applies at a point.
Continuity equations underlie more specific transport equations such as the convection–diffusion equation, Boltzmann transport equation, and Navier–Stokes equations.
Flows governed by continuity equations can be visualized using a Sankey diagram.
== General equation ==
=== Definition of flux ===
A continuity equation is useful when a flux can be defined. To define flux, first there must be a quantity q which can flow or move, such as mass, energy, electric charge, momentum, number of molecules, etc. Let ρ be the volume density of this quantity, that is, the amount of q per unit volume.
The way that this quantity q is flowing is described by its flux. The flux of q is a vector field, which we denote as j. Here are some examples and properties of flux:
The dimension of flux is "amount of q flowing per unit time, through a unit area". For example, in the mass continuity equation for flowing water, if 1 gram per second of water is flowing through a pipe with cross-sectional area 1 cm², then the average mass flux j inside the pipe is (1 g/s)/cm², and its direction is along the pipe in the direction that the water is flowing. Outside the pipe, where there is no water, the flux is zero.
If there is a velocity field u which describes the relevant flow—in other words, if all of the quantity q at a point x is moving with velocity u(x)—then the flux is by definition equal to the density times the velocity field:
{\displaystyle \mathbf {j} =\rho \mathbf {u} }
For example, if in the mass continuity equation for flowing water, u is the water's velocity at each point, and ρ is the water's density at each point, then j would be the mass flux, also known as the material discharge.
In a well-known example, the flux of electric charge is the electric current density.
If there is an imaginary surface S, then the surface integral of flux over S is equal to the amount of q that is passing through the surface S per unit time:
{\displaystyle \iint _{S}\mathbf {j} \cdot d\mathbf {S} ,}
in which {\textstyle \iint _{S}d\mathbf {S} } is a surface integral.
(Note that the concept that is here called "flux" is alternatively termed flux density in some literature, in which context "flux" denotes the surface integral of flux density. See the main article on Flux for details.)
=== Integral form ===
The integral form of the continuity equation states that:
The amount of q in a region increases when additional q flows inward through the surface of the region, and decreases when it flows outward;
The amount of q in a region increases when new q is created inside the region, and decreases when q is destroyed;
Apart from these two processes, there is no other way for the amount of q in a region to change.
Mathematically, the integral form of the continuity equation expressing the rate of increase of q within a volume V is:
{\displaystyle {\frac {dq}{dt}}+\oint _{S}\mathbf {j} \cdot d\mathbf {S} =\Sigma }
where
S is any imaginary closed surface, that encloses a volume V,
{\displaystyle \oint _{S}d\mathbf {S} } denotes a surface integral over that closed surface,
q is the total amount of the quantity in the volume V,
j is the flux of q,
t is time,
Σ is the net rate that q is being generated inside the volume V per unit time. When q is being generated (i.e., when {\displaystyle {\tfrac {\partial q}{\partial t}}>0}), the region is called a source of q, and it makes Σ more positive. When q is being destroyed (i.e., when {\displaystyle {\tfrac {\partial q}{\partial t}}<0}), the region is called a sink of q, and it makes Σ more negative. The term Σ is sometimes written as {\displaystyle dq/dt|_{\text{gen}}}, the total change of q from its generation or destruction inside the control volume.
In a simple example, V could be a building, and q could be the number of living people in the building. The surface S would consist of the walls, doors, roof, and foundation of the building. Then the continuity equation states that the number of living people in the building (1) increases when people enter the building (an inward flux through the surface), (2) decreases when people exit the building (an outward flux through the surface), (3) increases when someone in the building gives birth (a source, Σ > 0), and (4) decreases when someone in the building dies (a sink, Σ < 0). In this example there are thus four distinct ways in which the amount of q inside the region may change.
=== Differential form ===
By the divergence theorem, a general continuity equation can also be written in a "differential form":
{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \mathbf {j} =\sigma }
where
∇⋅ is divergence,
ρ is the density of the amount q (i.e. the quantity q per unit volume),
j is the flux of q (i.e. j = ρv, where v is the vector field describing the movement of the quantity q),
t is time,
σ is the generation of q per unit volume per unit time. Terms that generate q (i.e., σ > 0) or remove q (i.e., σ < 0) are referred to as sources and sinks respectively.
This general equation may be used to derive any continuity equation, ranging from as simple as the volume continuity equation to as complicated as the Navier–Stokes equations. This equation also generalizes the advection equation. Other equations in physics, such as Gauss's law of the electric field and Gauss's law for gravity, have a similar mathematical form to the continuity equation, but are not usually referred to by the term "continuity equation", because j in those cases does not represent the flow of a real physical quantity.
In the case that q is a conserved quantity that cannot be created or destroyed (such as energy), σ = 0 and the equations become:
{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \mathbf {j} =0}
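To make the source-free form concrete, the sketch below advances a 1-D density under the flux j = ρu using a conservative finite-volume update. This is a minimal illustration, not a production scheme; the grid size, speed, and time step are arbitrary illustrative choices.

```python
import numpy as np

# Minimal sketch: a 1-D, source-free continuity equation on a periodic grid,
# discretized in conservative (finite-volume) form with an upwind flux.
nx, dx, dt, u = 200, 0.05, 0.01, 1.0   # grid cells, spacing, time step, speed
x = np.arange(nx) * dx
rho = np.exp(-((x - 5.0) / 0.5) ** 2)  # initial density blob

for _ in range(500):
    j = rho * u                         # flux j = rho * u (u > 0, so upwind = left)
    flux_in = np.roll(j, 1)             # flux entering each cell from its left face
    rho = rho - (dt / dx) * (j - flux_in)

# Because each face flux leaves one cell and enters its neighbor, the total
# amount of q is conserved to round-off, as d/dt (integral of rho) = 0 demands.
print(np.sum(rho) * dx)  # stays equal to the initial total amount
```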
== Electromagnetism ==
In electromagnetic theory, the continuity equation is an empirical law expressing (local) charge conservation. Mathematically it is an automatic consequence of Maxwell's equations, although charge conservation is more fundamental than Maxwell's equations. It states that the divergence of the current density J (in amperes per square meter) is equal to the negative rate of change of the charge density ρ (in coulombs per cubic meter),
{\displaystyle \nabla \cdot \mathbf {J} =-{\frac {\partial \rho }{\partial t}}}
Current is the movement of charge. The continuity equation says that if charge is moving out of a differential volume (i.e., divergence of current density is positive) then the amount of charge within that volume is going to decrease, so the rate of change of charge density is negative. Therefore, the continuity equation amounts to a conservation of charge.
If magnetic monopoles exist, there would be a continuity equation for monopole currents as well, see the monopole article for background and the duality between electric and magnetic currents.
== Fluid dynamics ==
In fluid dynamics, the continuity equation states that the rate at which mass enters a system is equal to the rate at which mass leaves the system plus the accumulation of mass within the system.
The differential form of the continuity equation is:
{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot (\rho \mathbf {u} )=0}
where
ρ is fluid density,
t is time,
u is the flow velocity vector field.
The time derivative can be understood as the accumulation (or loss) of mass in the system, while the divergence term represents the difference in flow in versus flow out. In this context, this equation is also one of the Euler equations (fluid dynamics). The Navier–Stokes equations form a vector continuity equation describing the conservation of linear momentum.
If the fluid is incompressible (volumetric strain rate is zero), the mass continuity equation simplifies to a volume continuity equation:
{\displaystyle \nabla \cdot \mathbf {u} =0,}
which means that the divergence of the velocity field is zero everywhere. Physically, this is equivalent to saying that the local volume dilation rate is zero, hence a flow of water through a converging pipe will adjust solely by increasing its velocity as water is largely incompressible.
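As a worked instance of this, the snippet below applies the equivalent cross-section form of volume continuity, A·v = constant, to a converging pipe. The diameters and inlet speed are made-up illustrative numbers.

```python
import math

# Minimal sketch of volume continuity in a converging pipe: for an
# incompressible fluid, the volumetric flow rate Q = A * v is the same
# through every cross-section, so the velocity rises as the area shrinks.
def outlet_velocity(d_in_m: float, d_out_m: float, v_in_m_s: float) -> float:
    a_in = math.pi * (d_in_m / 2) ** 2
    a_out = math.pi * (d_out_m / 2) ** 2
    return v_in_m_s * a_in / a_out     # A_in * v_in = A_out * v_out

# Water entering a 10 cm pipe at 1 m/s and leaving through a 5 cm section:
print(outlet_velocity(0.10, 0.05, 1.0))  # 4.0 m/s (the area ratio is 4)
```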
== Computer vision ==
In computer vision, optical flow is the pattern of apparent motion of objects in a visual scene. Under the assumption that the brightness of the moving object does not change between two image frames, one can derive the optical flow equation as:
{\displaystyle {\frac {\partial I}{\partial x}}V_{x}+{\frac {\partial I}{\partial y}}V_{y}+{\frac {\partial I}{\partial t}}=\nabla I\cdot \mathbf {V} +{\frac {\partial I}{\partial t}}=0}
where
t is time,
x, y are coordinates in the image,
I is the image intensity at image coordinate (x, y) and time t,
V is the optical flow velocity vector {\displaystyle (V_{x},V_{y})} at image coordinate (x, y) and time t.
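A minimal sketch of how this equation is used in practice: one equation cannot determine the two unknowns (Vx, Vy) at a single pixel, so a common approach (the least-squares patch step underlying Lucas-Kanade, which the text above does not describe) solves the constraint over many pixels at once. The synthetic frames and known shift below are illustrative assumptions.

```python
import numpy as np

# Solve the optical flow constraint Ix*Vx + Iy*Vy + It = 0 by least squares
# over a whole patch. Frames are synthetic, with a known sub-pixel-scale shift.
y, x = np.mgrid[0:64, 0:64].astype(float)
frame0 = np.sin(0.3 * x) + np.cos(0.2 * y)
frame1 = np.sin(0.3 * (x - 1.0)) + np.cos(0.2 * (y - 0.5))  # Vx = 1.0, Vy = 0.5

Iy, Ix = np.gradient(frame0)       # spatial derivatives (rows = y, cols = x)
It = frame1 - frame0               # temporal derivative between the frames
A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
b = -It.ravel()
(vx, vy), *_ = np.linalg.lstsq(A, b, rcond=None)
print(vx, vy)  # roughly (1.0, 0.5); exact only in the small-displacement limit
```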
== Energy and heat ==
Conservation of energy says that energy cannot be created or destroyed. (See below for the nuances associated with general relativity.) Therefore, there is a continuity equation for energy flow:
{\displaystyle {\frac {\partial u}{\partial t}}+\nabla \cdot \mathbf {q} =0}
where
u, local energy density (energy per unit volume),
q, energy flux (transfer of energy per unit cross-sectional area per unit time) as a vector.
An important practical example is the flow of heat. When heat flows inside a solid, the continuity equation can be combined with Fourier's law (heat flux is proportional to temperature gradient) to arrive at the heat equation. The equation of heat flow may also have source terms: although energy cannot be created or destroyed, heat can be created from other types of energy, for example via friction or Joule heating.
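As a sketch of the combination just described, the code below steps the resulting 1-D heat equation ∂T/∂t = α ∂²T/∂x² with explicit finite differences; the diffusivity, grid, and boundary temperatures are assumed illustrative values.

```python
import numpy as np

# Minimal sketch: energy continuity + Fourier's law q = -k*dT/dx gives the
# 1-D heat equation. Explicit finite differences on a rod with fixed ends.
nx, dx, alpha = 51, 0.01, 1e-4           # nodes, spacing (m), diffusivity (m^2/s)
dt = 0.4 * dx**2 / alpha                 # stable explicit step (dt <= 0.5*dx^2/alpha)
T = np.zeros(nx)
T[0], T[-1] = 100.0, 0.0                 # boundary temperatures (held fixed)

for _ in range(20000):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print(T[::10])  # approaches the linear steady-state profile from 100 to 0
```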
== Probability distributions ==
If there is a quantity that moves continuously according to a stochastic (random) process, like the location of a single dissolved molecule with Brownian motion, then there is a continuity equation for its probability distribution. The flux in this case is the probability per unit area per unit time that the particle passes through a surface. According to the continuity equation, the negative divergence of this flux equals the rate of change of the probability density. The continuity equation reflects the fact that the molecule is always somewhere—the integral of its probability distribution is always equal to 1—and that it moves by a continuous motion (no teleporting).
== Quantum mechanics ==
Quantum mechanics is another domain where there is a continuity equation related to conservation of probability. The terms in the equation require the following definitions, and are slightly less obvious than the other examples above, so they are outlined here:
The wavefunction Ψ for a single particle in position space (rather than momentum space), that is, a function of position r and time t, Ψ = Ψ(r, t).
The probability density function is
{\displaystyle \rho (\mathbf {r} ,t)=\Psi ^{*}(\mathbf {r} ,t)\Psi (\mathbf {r} ,t)=|\Psi (\mathbf {r} ,t)|^{2}.}
The probability of finding the particle within V at t is denoted and defined by
{\displaystyle P=P_{\mathbf {r} \in V}(t)=\int _{V}\Psi ^{*}\Psi dV=\int _{V}|\Psi |^{2}dV.}
The probability current (probability flux) is
{\displaystyle \mathbf {j} (\mathbf {r} ,t)={\frac {\hbar }{2mi}}\left[\Psi ^{*}\left(\nabla \Psi \right)-\Psi \left(\nabla \Psi ^{*}\right)\right].}
With these definitions the continuity equation reads:
{\displaystyle \nabla \cdot \mathbf {j} +{\frac {\partial \rho }{\partial t}}=0\mathrel {\rightleftharpoons } \nabla \cdot \mathbf {j} +{\frac {\partial |\Psi |^{2}}{\partial t}}=0.}
Either form may be quoted. Intuitively, the above quantities indicate that this represents the flow of probability. The chance of finding the particle at some position r and time t flows like a fluid; hence the term probability current, a vector field. The particle itself does not flow deterministically in this vector field.
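A small numerical check of these definitions, in assumed natural units (ħ = m = 1, an illustrative choice): for a wavefunction Ψ(x) = f(x)e^{ikx} with a real envelope f, the probability current should reduce to j = ρħk/m, and finite differences confirm this.

```python
import numpy as np

# Minimal sketch: evaluate the probability current for a Gaussian wave packet
# psi(x) = f(x) * exp(i*k*x) with a real envelope f, using finite differences.
hbar, m, k = 1.0, 1.0, 2.0               # natural units, illustrative only
x = np.linspace(-10, 10, 2001)
f = np.exp(-x**2 / 4)                    # real envelope
psi = f * np.exp(1j * k * x)

dpsi = np.gradient(psi, x)               # numerical d(psi)/dx
rho = np.abs(psi) ** 2
j = (hbar / (2j * m)) * (np.conj(psi) * dpsi - psi * np.conj(dpsi))

# j is real up to round-off and equals rho * hbar * k / m everywhere:
print(np.max(np.abs(j.real - rho * hbar * k / m)))  # small discretization error
```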
== Semiconductor ==
The total current flow in the semiconductor consists of drift current and diffusion current of both the electrons in the conduction band and holes in the valence band.
General form for electrons in one-dimension:
{\displaystyle {\frac {\partial n}{\partial t}}=n\mu _{n}{\frac {\partial E}{\partial x}}+\mu _{n}E{\frac {\partial n}{\partial x}}+D_{n}{\frac {\partial ^{2}n}{\partial x^{2}}}+(G_{n}-R_{n})}
where:
n is the local concentration of electrons
{\displaystyle \mu _{n}} is electron mobility
E is the electric field across the depletion region
Dn is the diffusion coefficient for electrons
Gn is the rate of generation of electrons
Rn is the rate of recombination of electrons
Similarly, for holes:
{\displaystyle {\frac {\partial p}{\partial t}}=-p\mu _{p}{\frac {\partial E}{\partial x}}-\mu _{p}E{\frac {\partial p}{\partial x}}+D_{p}{\frac {\partial ^{2}p}{\partial x^{2}}}+(G_{p}-R_{p})}
where:
p is the local concentration of holes
{\displaystyle \mu _{p}} is hole mobility
E is the electric field across the depletion region
Dp is the diffusion coefficient for holes
Gp is the rate of generation of holes
Rp is the rate of recombination of holes
=== Derivation ===
This section presents a derivation of the equation above for electrons. A similar derivation can be found for the equation for holes.
Consider the fact that the number of electrons is conserved across a volume of semiconductor material with cross-sectional area, A, and length, dx, along the x-axis. More precisely, one can say:
{\displaystyle {\text{Rate of change of electron density}}=({\text{Electron flux in}}-{\text{Electron flux out}})+{\text{Net generation inside a volume}}}
Mathematically, this equality can be written:
{\displaystyle {\begin{aligned}{\frac {dn}{dt}}A\,dx&=\left[J(x+dx)-J(x)\right]{\frac {A}{e}}+(G_{n}-R_{n})A\,dx\\&=\left[J(x)+{\frac {dJ}{dx}}dx-J(x)\right]{\frac {A}{e}}+(G_{n}-R_{n})A\,dx\\[1.2ex]{\frac {dn}{dt}}&={\frac {1}{e}}{\frac {dJ}{dx}}+(G_{n}-R_{n})\end{aligned}}}
Here J denotes current density (whose direction is against electron flow by convention) due to electron flow within the considered volume of the semiconductor. It is also called electron current density.
Total electron current density is the sum of drift current and diffusion current densities:
{\displaystyle J_{n}=en\mu _{n}E+eD_{n}{\frac {dn}{dx}}}
Therefore, we have
{\displaystyle {\frac {dn}{dt}}={\frac {1}{e}}{\frac {d}{dx}}\left(en\mu _{n}E+eD_{n}{\frac {dn}{dx}}\right)+(G_{n}-R_{n})}
Applying the product rule results in the final expression:
{\displaystyle {\frac {dn}{dt}}=\mu _{n}E{\frac {dn}{dx}}+\mu _{n}n{\frac {dE}{dx}}+D_{n}{\frac {d^{2}n}{dx^{2}}}+(G_{n}-R_{n})}
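A minimal sketch of integrating this final expression one explicit Euler step forward in time. The grid, mobility, diffusivity, and field values are illustrative placeholders rather than data for any real device, and explicit Euler is just the simplest possible choice.

```python
import numpy as np

# One explicit time step of the derived electron continuity equation:
# dn/dt = mu_n*E*dn/dx + mu_n*n*dE/dx + D_n*d2n/dx2 + (G_n - R_n).
nx, dx, dt = 200, 1e-7, 1e-13            # grid, spacing (m), time step (s)
mu_n, D_n = 0.135, 0.0035                # mobility (m^2/Vs), diffusivity (m^2/s)
x = np.arange(nx) * dx
n = 1e16 * (1 + np.exp(-((x - 1e-5) / 2e-6) ** 2))   # electron density (m^-3)
E = np.full(nx, 1e4)                     # uniform field (V/m), so dE/dx = 0
G_minus_R = 0.0                          # no net generation in this sketch

dn_dx = np.gradient(n, dx)
d2n_dx2 = np.gradient(dn_dx, dx)
dE_dx = np.gradient(E, dx)
n_next = n + dt * (mu_n * E * dn_dx + mu_n * n * dE_dx + D_n * d2n_dx2 + G_minus_R)
print(n_next[:5])  # density after one explicit Euler step
```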
=== Solution ===
The key to solving these equations in real devices is whenever possible to select regions in which most of the mechanisms are negligible so that the equations reduce to a much simpler form.
== Relativistic version ==
=== Special relativity ===
The notation and tools of special relativity, especially 4-vectors and 4-gradients, offer a convenient way to write any continuity equation.
The density of a quantity ρ and its current j can be combined into a 4-vector called a 4-current:
{\displaystyle J=\left(c\rho ,j_{x},j_{y},j_{z}\right)}
where c is the speed of light. The 4-divergence of this current is:
{\displaystyle \partial _{\mu }J^{\mu }=c{\frac {\partial \rho }{\partial ct}}+\nabla \cdot \mathbf {j} }
where ∂μ is the 4-gradient and μ is an index labeling the spacetime dimension. Then the continuity equation is:
{\displaystyle \partial _{\mu }J^{\mu }=0}
in the usual case where there are no sources or sinks, that is, for perfectly conserved quantities like energy or charge. This continuity equation is manifestly ("obviously") Lorentz invariant.
Examples of continuity equations often written in this form include electric charge conservation
{\displaystyle \partial _{\mu }J^{\mu }=0}
where J is the electric 4-current; and energy–momentum conservation
{\displaystyle \partial _{\nu }T^{\mu \nu }=0}
where T is the stress–energy tensor.
=== General relativity ===
In general relativity, where spacetime is curved, the continuity equation (in differential form) for energy, charge, or other conserved quantities involves the covariant divergence instead of the ordinary divergence.
For example, the stress–energy tensor is a second-order tensor field containing energy–momentum densities, energy–momentum fluxes, and shear stresses, of a mass-energy distribution. The differential form of energy–momentum conservation in general relativity states that the covariant divergence of the stress-energy tensor is zero:
{\displaystyle {T^{\mu }}_{\nu ;\mu }=0.}
This is an important constraint on the form the Einstein field equations take in general relativity.
However, the ordinary divergence of the stress–energy tensor does not necessarily vanish:
{\displaystyle \partial _{\mu }T^{\mu \nu }=-\Gamma _{\mu \lambda }^{\mu }T^{\lambda \nu }-\Gamma _{\mu \lambda }^{\nu }T^{\mu \lambda },}
The right-hand side strictly vanishes for a flat geometry only.
As a consequence, the integral form of the continuity equation is difficult to define and not necessarily valid for a region within which spacetime is significantly curved (e.g. around a black hole, or across the whole universe).
== Particle physics ==
Quarks and gluons have color charge, which is always conserved like electric charge, and there is a continuity equation for such color charge currents (explicit expressions for currents are given at gluon field strength tensor).
There are many other quantities in particle physics which are often or always conserved: baryon number (proportional to the number of quarks minus the number of antiquarks), electron number, muon number, tau number, isospin, and others. Each of these has a corresponding continuity equation, possibly including source/sink terms.
== Noether's theorem ==
One reason that conservation equations frequently occur in physics is Noether's theorem. This states that whenever the laws of physics have a continuous symmetry, there is a continuity equation for some conserved physical quantity. The three most famous examples are:
The laws of physics are invariant with respect to time-translation—for example, the laws of physics today are the same as they were yesterday. This symmetry leads to the continuity equation for conservation of energy.
The laws of physics are invariant with respect to space-translation: for example, a rocket in outer space is not subject to different forces or potentials if it is displaced in any given direction (e.g., x, y, z). This symmetry leads to the continuity equation for conservation of the three components of momentum.
The laws of physics are invariant with respect to orientation—for example, floating in outer space, there is no measurement you can do to say "which way is up"; the laws of physics are the same regardless of how you are oriented. This symmetry leads to the continuity equation for conservation of angular momentum.
== See also ==
Conservation law
Conservation form
Dissipative system
== References ==
== Further reading ==
Lamb, H. (2006) [1932]. Hydrodynamics (6th ed.). Cambridge University Press. ISBN 978-0-521-45868-9.
Griffiths, D. J. (1999). Introduction to Electrodynamics (3rd ed.). Pearson Education Inc. ISBN 81-7758-293-3.
Grant, I. S.; Phillips, W. R. (2008). Electromagnetism. Manchester Physics Series (2nd ed.). ISBN 978-0-471-92712-9.
Wheeler, J. A.; Misner, C.; Thorne, K. S. (1973). Gravitation. W. H. Freeman & Co. ISBN 0-7167-0344-0.
Forced convection is a mechanism, or type of transport, in which fluid motion is generated by an external source (like a pump, fan, suction device, etc.). Alongside natural convection, thermal radiation, and thermal conduction, it is one of the methods of heat transfer and allows significant amounts of heat energy to be transported very efficiently.
== Applications ==
This mechanism is found very commonly in everyday life, including in central heating, air conditioning, and many other machines. Forced convection is often encountered by engineers designing or analyzing heat exchangers, pipe flow, and flow over a plate at a different temperature than the stream (the case of a shuttle wing during re-entry, for example).
== Mixed convection ==
In any forced convection situation, some amount of natural convection is always present whenever there are gravitational forces present (i.e., unless the system is in an inertial frame or free-fall). When the natural convection is not negligible, such flows are typically referred to as mixed convection.
== Mathematical analysis ==
When analyzing potentially mixed convection, a parameter called the Archimedes number (Ar) parametrizes the relative strength of free and forced convection. The Archimedes number is the ratio of the Grashof number to the square of the Reynolds number; it represents the ratio of buoyancy force to inertial force, and stands in for the contribution of natural convection. When Ar ≫ 1, natural convection dominates and when Ar ≪ 1, forced convection dominates.
{\displaystyle Ar={\frac {Gr}{Re^{2}}}}
When natural convection isn't a significant factor, mathematical analysis with forced convection theories typically yields accurate results. The parameter of importance in forced convection is the Péclet number, which is the ratio of advection (movement by currents) and diffusion (movement from high to low concentrations) of heat.
{\displaystyle Pe={\frac {UL}{\alpha }}}
When the Péclet number is much greater than unity (1), advection dominates diffusion. Similarly, much smaller ratios indicate a higher rate of diffusion relative to advection.
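The two dimensionless groups above translate directly into code. A minimal sketch, with made-up input values:

```python
# Classify a convection problem with the Archimedes and Peclet numbers as
# defined above. All input values below are illustrative placeholders.
def archimedes(gr: float, re: float) -> float:
    return gr / re**2                  # Ar = Gr / Re^2

def peclet(u: float, l: float, alpha: float) -> float:
    return u * l / alpha               # Pe = U*L / alpha

ar = archimedes(gr=1e8, re=1e5)        # -> 0.01: forced convection dominates
pe = peclet(u=0.5, l=0.1, alpha=2e-5)  # -> 2500: advection dominates diffusion
print(f"Ar = {ar:g}, Pe = {pe:g}")
print("natural" if ar > 1 else "forced", "convection dominates")
```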
== See also ==
Convective heat transfer
Combined forced and natural convection
== References ==
== External links ==
In physics, a pulse is a generic term describing a single disturbance that moves through a transmission medium. This medium may be vacuum (in the case of electromagnetic radiation) or matter, and may be indefinitely large or finite.
== Pulse reflection ==
Consider a pulse moving through a medium - perhaps through a rope or a slinky. When the pulse reaches the end of that medium, what happens to it depends on whether the medium is fixed in space or free to move at its end. For example, if the pulse is moving through a rope and the end of the rope is held firmly by a person, then it is said that the pulse is approaching a fixed end. On the other hand, if the end of the rope is fixed to a stick such that it is free to move up or down along the stick when the pulse reaches its end, then it is said that the pulse is approaching a free end.
=== Free end ===
A pulse will reflect off a free end and return with the same direction of displacement that it had before reflection. That is, a pulse with an upward displacement will reflect off the end and return with an upward displacement.
This is illustrated by figures 1 and 2 that were obtained by the numerical integration of the wave equation.
=== Fixed end ===
A pulse will reflect off a fixed end and return with the opposite direction of displacement. In this case, the pulse is said to have inverted. That is, a pulse with an upward displacement will reflect off the end and return with a downward displacement.
This is illustrated by figures 3 and 4 that were obtained by the numerical integration of the wave equation. In addition it is illustrated in the animation of figure 5.
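A minimal sketch of the kind of numerical integration of the wave equation mentioned above, for a string with fixed ends; the grid size, pulse width, and step count are arbitrary illustrative choices. After reflecting off an end, the returning displacement is inverted.

```python
import numpy as np

# Leapfrog finite differences for the 1-D wave equation with FIXED ends.
nx, c, dx = 400, 1.0, 1.0
dt = 0.9 * dx / c                        # satisfies the CFL stability condition
y = np.exp(-((np.arange(nx) - 200.0) / 10.0) ** 2)  # upward pulse mid-string
y_prev = y.copy()                        # start from rest

for _ in range(600):
    y_next = np.zeros(nx)                # ends stay at zero: fixed boundaries
    y_next[1:-1] = (2 * y[1:-1] - y_prev[1:-1]
                    + (c * dt / dx) ** 2 * (y[2:] - 2 * y[1:-1] + y[:-2]))
    y_prev, y = y, y_next

# The pulse splits, reflects off both fixed ends, and returns inverted:
print(y.min(), y.max())  # the dominant displacement is now downward
```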
=== Crossing media ===
When there exists a pulse in a medium that is connected to another less heavy or less dense medium, the pulse will reflect as if it were approaching a free end (no inversion). Contrarily, when a pulse is traveling through a medium connected to a heavier or denser medium, the pulse will reflect as if it were approaching a fixed end (inversion).
== Optical pulse ==
=== Dark pulse ===
Dark pulses are characterized by being formed from a localized reduction of intensity compared to a more intense continuous wave background. Scalar dark solitons (linearly polarized dark solitons) can be formed in all normal dispersion fiber lasers mode-locked by the nonlinear polarization rotation method and can be rather stable. Vector dark solitons are much less stable due to the cross-interaction between the two polarization components. Therefore, it is interesting to investigate how the polarization state of these two polarization components evolves.
In 2008, the first dark pulse laser was reported in a quantum dot diode laser with a saturable absorber.
In 2009, the dark pulse fiber laser was successfully achieved in an all-normal dispersion erbium-doped fiber laser with a polarizer in cavity. Experimentation has revealed that apart from the bright pulse emission, under appropriate conditions the fiber laser could also emit single or multiple dark pulses. Based on numerical simulations, the dark pulse formation in the laser is a result of dark soliton shaping.
In 2022, the first free-space dark pulse laser, using a nonlinear crystal inside a solid-state laser, was demonstrated.
== See also ==
Pulse (signal processing)
Soliton
== References ==
An ideal solution or ideal mixture is a solution that exhibits thermodynamic properties analogous to those of a mixture of ideal gases. The enthalpy of mixing is zero, as is the volume change on mixing. The vapor pressures of all components obey Raoult's law across the entire range of concentrations, and the activity coefficient (which measures deviation from ideality) is equal to one for each component.
The concept of an ideal solution is fundamental to both thermodynamics and chemical thermodynamics and their applications, such as the explanation of colligative properties.
== Physical origin ==
Ideality of solutions is analogous to ideality for gases, with the important difference that intermolecular interactions in liquids are strong and cannot simply be neglected as they can for ideal gases. Instead we assume that the mean strength of the interactions is the same between all the molecules of the solution.
More formally, for a mix of molecules of A and B, the interactions between unlike neighbors (UAB) and like neighbors (UAA and UBB) must be of the same average strength, i.e., 2 UAB = UAA + UBB, and the longer-range interactions must be nil (or at least indistinguishable). If the molecular forces are the same between AA, AB and BB, i.e., UAB = UAA = UBB, then the solution is automatically ideal.
If the molecules are almost identical chemically, e.g., 1-butanol and 2-butanol, then the solution will be almost ideal. Since the interaction energies between A and B are almost equal, it follows that there is only a very small overall energy (enthalpy) change when the substances are mixed. The more dissimilar the nature of A and B, the more strongly the solution is expected to deviate from ideality.
== Formal definition ==
Different related definitions of an ideal solution have been proposed. The simplest definition is that an ideal solution is a solution for which each component obeys Raoult's law
{\displaystyle p_{i}=x_{i}p_{i}^{*}}
for all compositions. Here {\displaystyle p_{i}} is the vapor pressure of component {\displaystyle i} above the solution, {\displaystyle x_{i}} is its mole fraction and {\displaystyle p_{i}^{*}} is the vapor pressure of the pure substance {\displaystyle i} at the same temperature.
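A minimal sketch of this definition for a binary solution; the pure-component vapor pressures are illustrative round numbers, not measured data.

```python
# Raoult's law for an ideal binary solution: p_i = x_i * p_i*.
def raoult_partial_pressures(x1: float, p1_pure: float, p2_pure: float):
    p1 = x1 * p1_pure
    p2 = (1.0 - x1) * p2_pure
    return p1, p2, p1 + p2           # partial pressures and total pressure

# Equimolar mixture of a volatile (60 kPa) and a less volatile (20 kPa) liquid:
p1, p2, p_total = raoult_partial_pressures(0.5, 60.0, 20.0)
print(p1, p2, p_total)               # 30.0, 10.0, 40.0 kPa
```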
This definition depends on vapor pressure, which is a directly measurable property, at least for volatile components. The thermodynamic properties may then be obtained from the chemical potential μ (which is the partial molar Gibbs energy g) of each component. If the vapor is an ideal gas,
{\displaystyle \mu (T,p_{i})=g(T,p_{i})=g^{\mathrm {u} }(T,p^{u})+RT\ln {\frac {p_{i}}{p^{u}}}.}
The reference pressure {\displaystyle p^{u}} may be taken as {\displaystyle P^{o}} = 1 bar, or as the pressure of the mix, whichever is simpler.
On substituting the value of {\displaystyle p_{i}} from Raoult's law,
{\displaystyle \mu (T,p_{i})=g^{\mathrm {u} }(T,p^{u})+RT\ln {\frac {p_{i}^{*}}{p^{u}}}+RT\ln x_{i}=\mu _{i}^{*}+RT\ln x_{i}.}
This equation for the chemical potential can be used as an alternate definition for an ideal solution.
However, the vapor above the solution may not actually behave as a mixture of ideal gases. Some authors therefore define an ideal solution as one for which each component obeys the fugacity analogue of Raoult's law,
{\displaystyle f_{i}=x_{i}f_{i}^{*}.}
Here {\displaystyle f_{i}} is the fugacity of component {\displaystyle i} in solution and {\displaystyle f_{i}^{*}} is the fugacity of {\displaystyle i} as a pure substance. Since the fugacity is defined by the equation
{\displaystyle \mu (T,P)=g(T,P)=g^{\mathrm {u} }(T,p^{u})+RT\ln {\frac {f_{i}}{p^{u}}}}
this definition leads to ideal values of the chemical potential and other thermodynamic properties even when the component vapors above the solution are not ideal gases. An equivalent statement uses thermodynamic activity instead of fugacity.
== Thermodynamic properties ==
=== Volume ===
If we differentiate this last equation with respect to {\displaystyle p} at constant {\displaystyle T} we get:
{\displaystyle \left({\frac {\partial g(T,P)}{\partial P}}\right)_{T}=RT\left({\frac {\partial \ln f}{\partial P}}\right)_{T}.}
Since we know from the Gibbs potential equation that:
{\displaystyle \left({\frac {\partial g(T,P)}{\partial P}}\right)_{T}=v}
with the molar volume {\displaystyle v}, these last two equations put together give:
{\displaystyle \left({\frac {\partial \ln f}{\partial P}}\right)_{T}={\frac {v}{RT}}.}
All of the above, derived for a pure substance, remains valid in an ideal mix if we add the subscript {\displaystyle i} to all the intensive variables and change {\displaystyle v} to {\displaystyle {\bar {v_{i}}}}, the partial molar volume:
{\displaystyle \left({\frac {\partial \ln f_{i}}{\partial P}}\right)_{T,x_{i}}={\frac {\bar {v_{i}}}{RT}}.}
Applying the first equation of this section to this last equation we find:
{\displaystyle v_{i}^{*}={\bar {v}}_{i}}
which means that the partial molar volumes in an ideal mix are independent of composition. Consequently, the total volume is the sum of the volumes of the components in their pure forms:
{\displaystyle V=\sum _{i}V_{i}^{*}.}
=== Enthalpy and heat capacity ===
Proceeding in a similar way but taking the derivative with respect to {\displaystyle T}, we get a similar result for molar enthalpies:
{\displaystyle {\frac {g(T,P)-g^{\mathrm {gas} }(T,p^{u})}{RT}}=\ln {\frac {f}{p^{u}}}.}
Remembering that
{\displaystyle \left({\frac {\partial {\frac {g}{T}}}{\partial T}}\right)_{P}=-{\frac {h}{T^{2}}}}
we get:
{\displaystyle -{\frac {{\bar {h_{i}}}-h_{i}^{\mathrm {gas} }}{R}}=-{\frac {h_{i}^{*}-h_{i}^{\mathrm {gas} }}{R}}}
which in turn means that
{\displaystyle {\bar {h_{i}}}=h_{i}^{*}}
and that the enthalpy of the mix is equal to the sum of its component enthalpies.
Since {\displaystyle {\bar {u_{i}}}={\bar {h_{i}}}-p{\bar {v_{i}}}} and {\displaystyle u_{i}^{*}=h_{i}^{*}-pv_{i}^{*}}, similarly
{\displaystyle u_{i}^{*}={\bar {u_{i}}}.}
It is also easily verifiable that
{\displaystyle C_{pi}^{*}={\bar {C_{pi}}}.}
=== Entropy of mixing ===
Finally, since
{\displaystyle {\bar {g_{i}}}=\mu _{i}=g_{i}^{\mathrm {gas} }+RT\ln {\frac {f_{i}}{p^{u}}}=g_{i}^{\mathrm {gas} }+RT\ln {\frac {f_{i}^{*}}{p^{u}}}+RT\ln x_{i}=\mu _{i}^{*}+RT\ln x_{i}}
we find that
{\displaystyle \Delta g_{i,\mathrm {mix} }=RT\ln x_{i}.}
Since the Gibbs free energy per mole of the mixture {\displaystyle G_{m}} is
{\displaystyle G_{m}=\sum _{i}x_{i}{g_{i}}}
then
{\displaystyle \Delta G_{\mathrm {m,mix} }=RT\sum _{i}{x_{i}\ln x_{i}}.}
At last we can calculate the molar entropy of mixing since {\displaystyle g_{i}^{*}=h_{i}^{*}-Ts_{i}^{*}} and {\displaystyle {\bar {g_{i}}}={\bar {h_{i}}}-T{\bar {s_{i}}}}:
{\displaystyle \Delta s_{i,\mathrm {mix} }=-R\ln x_{i}}
{\displaystyle \Delta S_{\mathrm {m,mix} }=-R\sum _{i}x_{i}\ln x_{i}.}
== Consequences ==
Solvent–solute interactions are the same as solute–solute and solvent–solvent interactions, on average. Consequently, the enthalpy of mixing (solution) is zero and the change in Gibbs free energy on mixing is determined solely by the entropy of mixing. Hence the molar Gibbs free energy of mixing is
{\displaystyle \Delta G_{\mathrm {m,mix} }=RT\sum _{i}x_{i}\ln x_{i}}
or for a two-component ideal solution
{\displaystyle \Delta G_{\mathrm {m,mix} }=RT(x_{A}\ln x_{A}+x_{B}\ln x_{B})}
where m denotes molar, i.e., change in Gibbs free energy per mole of solution, and {\displaystyle x_{i}} is the mole fraction of component {\displaystyle i}. Note that this free energy of mixing is always negative (since each {\displaystyle x_{i}\in [0,1]}, each {\displaystyle \ln x_{i}}, or its limit for {\displaystyle x_{i}\to 0}, must be negative or infinite), i.e., ideal solutions are miscible at any composition and no phase separation will occur.
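A small numerical confirmation of this claim for a binary ideal solution, using the molar Gibbs energy of mixing defined above; the composition grid is an arbitrary illustrative choice.

```python
import numpy as np

# Delta_G_mix = R*T*(xA*ln(xA) + xB*ln(xB)) for a binary ideal solution.
R, T = 8.314, 298.15                     # J/(mol K), K

def mixing_gibbs(x_a: float) -> float:
    x_b = 1.0 - x_a
    return R * T * (x_a * np.log(x_a) + x_b * np.log(x_b))

xs = np.linspace(0.01, 0.99, 99)
dg = np.array([mixing_gibbs(x) for x in xs])
print(dg.max() < 0.0)                    # True: mixing is favorable everywhere
print(xs[np.argmin(dg)])                 # ~0.5, the most negative composition
print(mixing_gibbs(0.5))                 # about -1718 J/mol at equimolar
```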
The equation above can be expressed in terms of chemical potentials of the individual components
{\displaystyle \Delta G_{\mathrm {m,mix} }=\sum _{i}x_{i}\Delta \mu _{i,\mathrm {mix} }}
where
{\displaystyle \Delta \mu _{i,\mathrm {mix} }=RT\ln x_{i}}
is the change in chemical potential of {\displaystyle i} on mixing. If the chemical potential of pure liquid {\displaystyle i} is denoted {\displaystyle \mu _{i}^{*}}, then the chemical potential of {\displaystyle i} in an ideal solution is
{\displaystyle \mu _{i}=\mu _{i}^{*}+RT\ln x_{i}.}
Any component {\displaystyle i} of an ideal solution obeys Raoult's law over the entire composition range:
{\displaystyle \ p_{i}=(p_{i})_{\text{pure}}x_{i}}
where {\displaystyle (p_{i})_{\text{pure}}} is the equilibrium vapor pressure of pure component {\displaystyle i} and {\displaystyle x_{i}} is the mole fraction of component {\displaystyle i} in solution.
== Non-ideality ==
Deviations from ideality can be described by the use of Margules functions or activity coefficients. A single Margules parameter may be sufficient to describe the properties of the solution if the deviations from ideality are modest; such solutions are termed regular.
In contrast to ideal solutions, where volumes are strictly additive and mixing is always complete, the volume of a non-ideal solution is not, in general, the simple sum of the volumes of the component pure liquids and solubility is not guaranteed over the whole composition range. By measurement of densities, thermodynamic activity of components can be determined.
== See also ==
Activity coefficient
Entropy of mixing
Margules function
Regular solution
Coil-globule transition
Apparent molar property
Dilution equation
Virial coefficient
== References ==
Transport Phenomena is the first textbook about transport phenomena. It is specifically designed for chemical engineering students. The first edition was published in 1960, two years after having been preliminarily published under the title Notes on Transport Phenomena based on mimeographed notes prepared for a chemical engineering course taught at the University of Wisconsin–Madison during the academic year 1957-1958. The second edition was published in August 2001. A revised second edition was published in 2007. This text is often known simply as BSL after its authors' initials.
== History ==
As the chemical engineering profession developed in the first half of the 20th century, the concept of "unit operations" arose as being needed in the education of undergraduate chemical engineers. The theories of mass, momentum and energy transfer were being taught at that time only to the extent necessary for a narrow range of applications. As chemical engineers began moving into a number of new areas, problem definitions and solutions required a deeper knowledge of the fundamentals of transport phenomena than those provided in the textbooks then available on unit operations.
In the 1950s, R. Byron Bird, Warren E. Stewart and Edwin N. Lightfoot stepped forward to develop an undergraduate course at the University of Wisconsin–Madison to integrate the teaching of fluid flow, heat transfer, and diffusion. From this beginning, they prepared their landmark textbook Transport Phenomena.
== Subjects covered in the book ==
The book is divided into three basic sections, named Momentum Transport, Energy Transport and Mass Transport:
Momentum Transport
Viscosity and the Mechanisms of Momentum Transport
Momentum Balances and Velocity Distributions in Laminar Flow
The Equations of Change for Isothermal Systems
Velocity Distributions in Turbulent Flow
Interphase Transport in Isothermal Systems
Macroscopic Balances for Isothermal Flow Systems
Energy Transport
Thermal Conductivity and the Mechanisms of Energy Transport
Energy Balances and Temperature Distributions in Solids and Laminar Flow
The Equations of Change for Nonisothermal Systems
Temperature Distributions in Turbulent Flow
Interphase Transport in Nonisothermal Systems
Macroscopic Balances for Nonisothermal Systems
Mass transport
Diffusivity and the Mechanisms of Mass Transport
Concentration Distributions in Solids and Laminar Flow
Equations of Change for Multicomponent Systems
Concentration Distributions in Turbulent Flow
Interphase Transport in Nonisothermal Mixtures
Macroscopic Balances for Multicomponent Systems
Other Mechanisms for Mass Transport
== Word play ==
Transport Phenomena contains many instances of hidden messages and other word play.
For example, the first letters of each sentence of the Preface spell out "This book is dedicated to O. A. Hougen." while in the revised second edition, the first letters of each paragraph spell out "Welcome". The first letters of each paragraph in the Postface spell out "On Wisconsin". In the first printing, in Fig. 9.L (p. 305) "Bird" is typeset safely outside the furnace wall.
== Advantages of the first edition over the second edition ==
According to many chemical engineering professors, the first edition is better than the second. Among the reasons given: although the second edition has been revised many times, many defects and typographical errors remain throughout the book. To address the defects of the revised second edition, the authors published "Notes for the 2nd revised edition of TRANSPORT PHENOMENA" on 9 August 2011.
== See also ==
Chemical engineer
Distillation Design
Transport phenomena
Unit Operations of Chemical Engineering
Perry's Chemical Engineers' Handbook
== External links ==
Publisher's description of this book
== References ==
The principle of maximum entropy states that the probability distribution which best represents the current state of knowledge about a system is the one with largest entropy, in the context of precisely stated prior data (such as a proposition that expresses testable information).
Another way of stating this: Take precisely stated prior data or testable information about a probability distribution function. Consider the set of all trial probability distributions that would encode the prior data. According to this principle, the distribution with maximal information entropy is the best choice.
== History ==
The principle was first expounded by E. T. Jaynes in two papers in 1957, where he emphasized a natural correspondence between statistical mechanics and information theory. In particular, Jaynes argued that the Gibbsian method of statistical mechanics is sound because the entropy of statistical mechanics and the information entropy of information theory are the same concept. Consequently, statistical mechanics should be considered a particular application of a general tool of logical inference and information theory.
== Overview ==
In most practical cases, the stated prior data or testable information is given by a set of conserved quantities (average values of some moment functions), associated with the probability distribution in question. This is the way the maximum entropy principle is most often used in statistical thermodynamics. Another possibility is to prescribe some symmetries of the probability distribution. The equivalence between conserved quantities and corresponding symmetry groups implies a similar equivalence for these two ways of specifying the testable information in the maximum entropy method.
The maximum entropy principle is also needed to guarantee the uniqueness and consistency of probability assignments obtained by different methods, statistical mechanics and logical inference in particular.
The maximum entropy principle makes explicit our freedom in using different forms of prior data. As a special case, a uniform prior probability density (Laplace's principle of indifference, sometimes called the principle of insufficient reason), may be adopted. Thus, the maximum entropy principle is not merely an alternative way to view the usual methods of inference of classical statistics, but represents a significant conceptual generalization of those methods.
However these statements do not imply that thermodynamical systems need not be shown to be ergodic to justify treatment as a statistical ensemble.
In ordinary language, the principle of maximum entropy can be said to express a claim of epistemic modesty, or of maximum ignorance. The selected distribution is the one that makes the least claim to being informed beyond the stated prior data, that is to say the one that admits the most ignorance beyond the stated prior data.
== Testable information ==
The principle of maximum entropy is useful explicitly only when applied to testable information. Testable information is a statement about a probability distribution whose truth or falsity is well-defined. For example, the statements
the expectation of the variable {\displaystyle x} is 2.87
and
{\displaystyle p_{2}+p_{3}>0.6}
(where {\displaystyle p_{2}} and {\displaystyle p_{3}} are probabilities of events) are statements of testable information.
Given testable information, the maximum entropy procedure consists of seeking the probability distribution which maximizes information entropy, subject to the constraints of the information. This constrained optimization problem is typically solved using the method of Lagrange multipliers.
Entropy maximization with no testable information respects the universal "constraint" that the sum of the probabilities is one. Under this constraint, the maximum entropy discrete probability distribution is the uniform distribution,
{\displaystyle p_{i}={\frac {1}{n}}\ {\rm {for\ all}}\ i\in \{\,1,\dots ,n\,\}.}
== Applications ==
The principle of maximum entropy is commonly applied in two ways to inferential problems:
=== Prior probabilities ===
The principle of maximum entropy is often used to obtain prior probability distributions for Bayesian inference. Jaynes was a strong advocate of this approach, claiming the maximum entropy distribution represented the least informative distribution.
A large amount of literature is now dedicated to the elicitation of maximum entropy priors and links with channel coding.
=== Posterior probabilities ===
Maximum entropy is a sufficient updating rule for radical probabilism. Richard Jeffrey's probability kinematics is a special case of maximum entropy inference. However, maximum entropy is not a generalisation of all such sufficient updating rules.
=== Maximum entropy models ===
Alternatively, the principle is often invoked for model specification: in this case the observed data itself is assumed to be the testable information. Such models are widely used in natural language processing. An example of such a model is logistic regression, which corresponds to the maximum entropy classifier for independent observations.
=== Probability density estimation ===
One of the main applications of the maximum entropy principle is in discrete and continuous density estimation. Similar to support vector machine estimators, the maximum entropy principle may require the solution to a quadratic programming problem, and thus provide a sparse mixture model as the optimal density estimator. One important advantage of the method is its ability to incorporate prior information in the density estimation.
== General solution for the maximum entropy distribution with linear constraints ==
=== Discrete case ===
We have some testable information I about a quantity x taking values in {x1, x2,..., xn}. We assume this information has the form of m constraints on the expectations of the functions fk; that is, we require our probability distribution to satisfy the moment inequality/equality constraints:
{\displaystyle \sum _{i=1}^{n}\Pr(x_{i})f_{k}(x_{i})\geq F_{k}\qquad k=1,\ldots ,m.}
where the {\displaystyle F_{k}} are observables. We also require the probability density to sum to one, which may be viewed as a primitive constraint on the identity function and an observable equal to 1 giving the constraint
{\displaystyle \sum _{i=1}^{n}\Pr(x_{i})=1.}
The probability distribution with maximum information entropy subject to these inequality/equality constraints is of the form:
{\displaystyle \Pr(x_{i})={\frac {1}{Z(\lambda _{1},\ldots ,\lambda _{m})}}\exp \left[\lambda _{1}f_{1}(x_{i})+\cdots +\lambda _{m}f_{m}(x_{i})\right],}
for some {\displaystyle \lambda _{1},\ldots ,\lambda _{m}}. It is sometimes called the Gibbs distribution. The normalization constant is determined by:
{\displaystyle Z(\lambda _{1},\ldots ,\lambda _{m})=\sum _{i=1}^{n}\exp \left[\lambda _{1}f_{1}(x_{i})+\cdots +\lambda _{m}f_{m}(x_{i})\right],}
and is conventionally called the partition function. (The Pitman–Koopman theorem states that the necessary and sufficient condition for a sampling distribution to admit sufficient statistics of bounded dimension is that it have the general form of a maximum entropy distribution.)
The λk parameters are Lagrange multipliers. In the case of equality constraints their values are determined from the solution of the nonlinear equations
{\displaystyle F_{k}={\frac {\partial }{\partial \lambda _{k}}}\log Z(\lambda _{1},\ldots ,\lambda _{m}).}
In the case of inequality constraints, the Lagrange multipliers are determined from the solution of a convex optimization program with linear constraints.
In both cases, there is no closed form solution, and the computation of the Lagrange multipliers usually requires numerical methods.
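A minimal sketch of such a numerical solution for a single equality constraint: the maximum entropy distribution on x ∈ {1, ..., 6} with prescribed mean E[x] = 4.5 (an illustrative constraint value), found by Newton iteration on F = ∂(log Z)/∂λ.

```python
import numpy as np

# Maximum entropy on x = 1..6 subject to E[x] = F. The Lagrange multiplier
# lambda solves mean(lambda) = F, where mean is d(log Z)/d(lambda).
x = np.arange(1, 7, dtype=float)
F = 4.5                                  # prescribed expectation (illustrative)

lam = 0.0
for _ in range(100):
    w = np.exp(lam * x)
    Z = w.sum()
    mean = (x * w).sum() / Z             # d(log Z)/d(lambda)
    var = (x**2 * w).sum() / Z - mean**2 # d(mean)/d(lambda), always positive
    lam -= (mean - F) / var              # Newton update

p = np.exp(lam * x) / np.exp(lam * x).sum()
print(p)             # probabilities increase with x, since 4.5 > 3.5
print(p @ x)         # 4.5: the constraint is met
```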
=== Continuous case ===
For continuous distributions, the Shannon entropy cannot be used, as it is only defined for discrete probability spaces. Instead Edwin Jaynes (1963, 1968, 2003) gave the following formula, which is closely related to the relative entropy (see also differential entropy).
{\displaystyle H_{c}=-\int p(x)\log {\frac {p(x)}{q(x)}}\,dx}
where q(x), which Jaynes called the "invariant measure", is proportional to the limiting density of discrete points. For now, we shall assume that q is known; we will discuss it further after the solution equations are given.
A closely related quantity, the relative entropy, is usually defined as the Kullback–Leibler divergence of p from q (although it is sometimes, confusingly, defined as the negative of this). The inference principle of minimizing this, due to Kullback, is known as the Principle of Minimum Discrimination Information.
We have some testable information I about a quantity x which takes values in some interval of the real numbers (all integrals below are over this interval). We assume this information has the form of m constraints on the expectations of the functions fk, i.e. we require our probability density function to satisfy the inequality (or purely equality) moment constraints:
{\displaystyle \int p(x)f_{k}(x)\,dx\geq F_{k}\qquad k=1,\dotsc ,m.}
where the {\displaystyle F_{k}} are observables. We also require the probability density to integrate to one, which may be viewed as a primitive constraint on the identity function and an observable equal to 1 giving the constraint
{\displaystyle \int p(x)\,dx=1.}
The probability density function with maximum Hc subject to these constraints is:
{\displaystyle p(x)={\frac {1}{Z(\lambda _{1},\dotsc ,\lambda _{m})}}q(x)\exp \left[\lambda _{1}f_{1}(x)+\dotsb +\lambda _{m}f_{m}(x)\right]}
with the partition function determined by
{\displaystyle Z(\lambda _{1},\dotsc ,\lambda _{m})=\int q(x)\exp \left[\lambda _{1}f_{1}(x)+\dotsb +\lambda _{m}f_{m}(x)\right]\,dx.}
As in the discrete case, where all moment constraints are equalities, the values of the {\displaystyle \lambda _{k}} parameters are determined by the system of nonlinear equations:
{\displaystyle F_{k}={\frac {\partial }{\partial \lambda _{k}}}\log Z(\lambda _{1},\dotsc ,\lambda _{m}).}
In the case with inequality moment constraints the Lagrange multipliers are determined from the solution of a convex optimization program.
The invariant measure function q(x) can be best understood by supposing that x is known to take values only in the bounded interval (a, b), and that no other information is given. Then the maximum entropy probability density function is
{\displaystyle p(x)=A\cdot q(x),\qquad a<x<b}
where A is a normalization constant. The invariant measure function is actually the prior density function encoding 'lack of relevant information'. It cannot be determined by the principle of maximum entropy, and must be determined by some other logical method, such as the principle of transformation groups or marginalization theory.
=== Examples ===
For several examples of maximum entropy distributions, see the article on maximum entropy probability distributions.
== Justifications for the principle of maximum entropy ==
Proponents of the principle of maximum entropy justify its use in assigning probabilities in several ways, including the following two arguments. These arguments take the use of Bayesian probability as given, and are thus subject to the same postulates.
=== Information entropy as a measure of 'uninformativeness' ===
Consider a discrete probability distribution among {\displaystyle m} mutually exclusive propositions. The most informative distribution would occur when one of the propositions was known to be true. In that case, the information entropy would be equal to zero. The least informative distribution would occur when there is no reason to favor any one of the propositions over the others. In that case, the only reasonable probability distribution would be uniform, and then the information entropy would be equal to its maximum possible value, {\displaystyle \log m}. The information entropy can therefore be seen as a numerical measure which describes how uninformative a particular probability distribution is, ranging from zero (completely informative) to {\displaystyle \log m} (completely uninformative).
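A quick numeric check of this range (a minimal sketch; the three example distributions are invented, with m = 4):

```python
import numpy as np

def entropy(p):
    """Shannon entropy -sum p_i log p_i, in nats; 0 log 0 is taken as 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

m = 4
print(entropy([1, 0, 0, 0]))         # 0.0 -> completely informative
print(entropy([0.7, 0.1, 0.1, 0.1])) # strictly between 0 and log m
print(entropy(np.full(m, 1 / m)))    # log 4 ~= 1.386 -> completely uninformative
```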
By choosing to use the distribution with the maximum entropy allowed by our information, the argument goes, we are choosing the most uninformative distribution possible. To choose a distribution with lower entropy would be to assume information we do not possess. Thus the maximum entropy distribution is the only reasonable distribution. The dependence of the solution on the dominating measure represented by {\displaystyle m(x)} is, however, a source of criticism of the approach, since this dominating measure is in fact arbitrary.
=== The Wallis derivation ===
The following argument is the result of a suggestion made by Graham Wallis to E. T. Jaynes in 1962. It is essentially the same mathematical argument used for the Maxwell–Boltzmann statistics in statistical mechanics, although the conceptual emphasis is quite different. It has the advantage of being strictly combinatorial in nature, making no reference to information entropy as a measure of 'uncertainty', 'uninformativeness', or any other imprecisely defined concept. The information entropy function is not assumed a priori, but rather is found in the course of the argument; and the argument leads naturally to the procedure of maximizing the information entropy, rather than treating it in some other way.
Suppose an individual wishes to make a probability assignment among {\displaystyle m} mutually exclusive propositions. They have some testable information, but are not sure how to go about including this information in their probability assessment. They therefore conceive of the following random experiment. They will distribute {\displaystyle N} quanta of probability (each worth {\displaystyle 1/N}) at random among the {\displaystyle m} possibilities. (One might imagine that they will throw {\displaystyle N} balls into {\displaystyle m} buckets while blindfolded. In order to be as fair as possible, each throw is to be independent of any other, and every bucket is to be the same size.) Once the experiment is done, they will check if the probability assignment thus obtained is consistent with their information. (For this step to be successful, the information must be a constraint given by an open set in the space of probability measures.) If it is inconsistent, they will reject it and try again. If it is consistent, their assessment will be
{\displaystyle p_{i}={\frac {n_{i}}{N}}}
where {\displaystyle p_{i}} is the probability of the {\displaystyle i}th proposition, while ni is the number of quanta that were assigned to the {\displaystyle i}th proposition (i.e. the number of balls that ended up in bucket {\displaystyle i}).
Now, in order to reduce the 'graininess' of the probability assignment, it will be necessary to use quite a large number of quanta of probability. Rather than actually carry out, and possibly have to repeat, the rather long random experiment, the protagonist decides to simply calculate and use the most probable result. The probability of any particular result is given by the multinomial distribution,
{\displaystyle Pr(\mathbf {p} )=W\cdot m^{-N}}
where
{\displaystyle W={\frac {N!}{n_{1}!\,n_{2}!\,\dotsb \,n_{m}!}}}
is sometimes known as the multiplicity of the outcome.
The most probable result is the one which maximizes the multiplicity {\displaystyle W}. Rather than maximizing {\displaystyle W} directly, the protagonist could equivalently maximize any monotonic increasing function of {\displaystyle W}. They decide to maximize
{\displaystyle {\begin{aligned}{\frac {1}{N}}\log W&={\frac {1}{N}}\log {\frac {N!}{n_{1}!\,n_{2}!\,\dotsb \,n_{m}!}}\\[6pt]&={\frac {1}{N}}\log {\frac {N!}{(Np_{1})!\,(Np_{2})!\,\dotsb \,(Np_{m})!}}\\[6pt]&={\frac {1}{N}}\left(\log N!-\sum _{i=1}^{m}\log((Np_{i})!)\right).\end{aligned}}}
At this point, in order to simplify the expression, the protagonist takes the limit as {\displaystyle N\to \infty }, i.e. as the probability levels go from grainy discrete values to smooth continuous values. Using Stirling's approximation, they find
{\displaystyle {\begin{aligned}\lim _{N\to \infty }\left({\frac {1}{N}}\log W\right)&={\frac {1}{N}}\left(N\log N-\sum _{i=1}^{m}Np_{i}\log(Np_{i})\right)\\[6pt]&=\log N-\sum _{i=1}^{m}p_{i}\log(Np_{i})\\[6pt]&=\log N-\log N\sum _{i=1}^{m}p_{i}-\sum _{i=1}^{m}p_{i}\log p_{i}\\[6pt]&=\left(1-\sum _{i=1}^{m}p_{i}\right)\log N-\sum _{i=1}^{m}p_{i}\log p_{i}\\[6pt]&=-\sum _{i=1}^{m}p_{i}\log p_{i}\\[6pt]&=H(\mathbf {p} ).\end{aligned}}}
All that remains for the protagonist to do is to maximize entropy under the constraints of their testable information. They have found that the maximum entropy distribution is the most probable of all "fair" random distributions, in the limit as the probability levels go from discrete to continuous.
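The convergence of (1/N) log W to the entropy H(p) can be checked numerically. The sketch below is illustrative only (the probability vector and the values of N are invented); it evaluates the factorials through the log-gamma function for numerical stability:

```python
import numpy as np
from scipy.special import gammaln

def log_multiplicity_rate(p, N):
    """(1/N) log W for the allocation n_i ~= N p_i (rounded to integers)."""
    n = np.round(np.asarray(p) * N).astype(int)
    log_W = gammaln(n.sum() + 1) - np.sum(gammaln(n + 1))  # log N! - sum log n_i!
    return log_W / n.sum()

p = np.array([0.5, 0.25, 0.15, 0.10])
H = -np.sum(p * np.log(p))                    # Shannon entropy in nats
for N in (100, 10_000, 1_000_000):
    print(N, log_multiplicity_rate(p, N), H)  # middle value approaches H
```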
=== Compatibility with Bayes' theorem ===
Giffin and Caticha (2007) state that Bayes' theorem and the principle of maximum entropy are completely compatible and can be seen as special cases of the "method of maximum relative entropy". They state that this method reproduces every aspect of orthodox Bayesian inference methods. In addition this new method opens the door to tackling problems that could not be addressed by either the maximum entropy principle or orthodox Bayesian methods individually. Moreover, recent contributions (Lazar 2003, and Schennach 2005) show that frequentist relative-entropy-based inference approaches (such as empirical likelihood and exponentially tilted empirical likelihood – see e.g. Owen 2001 and Kitamura 2006) can be combined with prior information to perform Bayesian posterior analysis.
Jaynes stated Bayes' theorem was a way to calculate a probability, while maximum entropy was a way to assign a prior probability distribution.
It is, however, possible in concept to solve for a posterior distribution directly from a stated prior distribution using the principle of minimum cross-entropy (the principle of maximum entropy being the special case of a uniform distribution as the given prior), independently of any Bayesian considerations, by treating the problem formally as a constrained optimisation problem with the entropy functional as the objective function. For the case of given average values as testable information (averaged over the sought-after probability distribution), the sought-after distribution is formally the Gibbs (or Boltzmann) distribution, whose parameters must be solved for in order to achieve minimum cross-entropy and satisfy the given testable information.
== Relevance to physics ==
The principle of maximum entropy bears a relation to a key assumption of the kinetic theory of gases known as molecular chaos or Stosszahlansatz. This asserts that the distribution function characterizing particles entering a collision can be factorized. Though this statement can be understood as a strictly physical hypothesis, it can also be interpreted as a heuristic hypothesis regarding the most probable configuration of particles before colliding.
== See also ==
== Notes ==
== References ==
Bajkova, A. T. (1992). "The generalization of maximum entropy method for reconstruction of complex functions". Astronomical and Astrophysical Transactions. 1 (4): 313–320. Bibcode:1992A&AT....1..313B. doi:10.1080/10556799208230532.
Fornalski, K.W.; Parzych, G.; Pylak, M.; Satuła, D.; Dobrzyński, L. (2010). "Application of Bayesian reasoning and the Maximum Entropy Method to some reconstruction problems" (PDF). Acta Physica Polonica A. 117 (6): 892–899. Bibcode:2010AcPPA.117..892F. doi:10.12693/APhysPolA.117.892.
Giffin, A. and Caticha, A., 2007, Updating Probabilities with Data and Moments
Guiasu, S.; Shenitzer, A. (1985). "The principle of maximum entropy". The Mathematical Intelligencer. 7 (1): 42–48. doi:10.1007/bf03023004. S2CID 53059968.
Harremoës, P.; Topsøe, F. (2001). "Maximum entropy fundamentals". Entropy. 3 (3): 191–226. Bibcode:2001Entrp...3..191H. doi:10.3390/e3030191.
Jaynes, E. T. (1963). "Information Theory and Statistical Mechanics". In Ford, K. (ed.). Statistical Physics. New York: Benjamin. p. 181.
Jaynes, E. T., 1986 (new version online 1996), "Monkeys, kangaroos and N", in Maximum-Entropy and Bayesian Methods in Applied Statistics, J. H. Justice (ed.), Cambridge University Press, Cambridge, p. 26.
Kapur, J. N.; and Kesavan, H. K., 1992, Entropy Optimization Principles with Applications, Boston: Academic Press. ISBN 0-12-397670-7
Kitamura, Y., 2006, Empirical Likelihood Methods in Econometrics: Theory and Practice, Cowles Foundation Discussion Papers 1569, Cowles Foundation, Yale University.
Lazar, N (2003). "Bayesian empirical likelihood". Biometrika. 90 (2): 319–326. doi:10.1093/biomet/90.2.319.
Owen, A. B., 2001, Empirical Likelihood, Chapman and Hall/CRC. ISBN 1-58-488071-6.
Schennach, S. M. (2005). "Bayesian exponentially tilted empirical likelihood". Biometrika. 92 (1): 31–46. doi:10.1093/biomet/92.1.31.
Uffink, Jos (1995). "Can the Maximum Entropy Principle be explained as a consistency requirement?" (PDF). Studies in History and Philosophy of Modern Physics. 26B (3): 223–261. Bibcode:1995SHPMP..26..223U. CiteSeerX 10.1.1.27.6392. doi:10.1016/1355-2198(95)00015-1. hdl:1874/2649. Archived from the original (PDF) on 2006-06-03.
== Further reading ==
Boyd, Stephen; Lieven Vandenberghe (2004). Convex Optimization (PDF). Cambridge University Press. p. 362. ISBN 0-521-83378-7. Retrieved 2008-08-24.
Ratnaparkhi A. (1997) "A simple introduction to maximum entropy models for natural language processing" Technical Report 97-08, Institute for Research in Cognitive Science, University of Pennsylvania. An easy-to-read introduction to maximum entropy methods in the context of natural language processing.
Tang, A.; Jackson, D.; Hobbs, J.; Chen, W.; Smith, J. L.; Patel, H.; Prieto, A.; Petrusca, D.; Grivich, M. I.; Sher, A.; Hottowy, P.; Dabrowski, W.; Litke, A. M.; Beggs, J. M. (2008). "A Maximum Entropy Model Applied to Spatial and Temporal Correlations from Cortical Networks in Vitro". Journal of Neuroscience. 28 (2): 505–518. doi:10.1523/JNEUROSCI.3359-07.2008. PMC 6670549. PMID 18184793. Open access article containing pointers to various papers and software implementations of Maximum Entropy Model on the net. | Wikipedia/Entropy_maximization_principle |
Sensory neurons, also known as afferent neurons, are neurons in the nervous system that convert a specific type of stimulus, via their receptors, into action potentials or graded receptor potentials. This process is called sensory transduction. The cell bodies of sensory neurons are located in the dorsal root ganglia of the spinal cord.
The sensory information travels on the afferent nerve fibers of a sensory nerve to the brain via the spinal cord. The stimulus can come from exteroreceptors outside the body, for example those that detect light and sound, or from interoreceptors inside the body, for example those that are responsive to blood pressure or the sense of body position.
== Types and function ==
Sensory neurons in vertebrates are predominantly pseudounipolar or bipolar, and different types of sensory neurons have different sensory receptors that respond to different kinds of stimuli. There are at least six types of external and two types of internal sensory receptors:
=== External receptors ===
External receptors that respond to stimuli from outside the body are called exteroreceptors. Exteroreceptors include chemoreceptors such as olfactory receptors (smell) and taste receptors, photoreceptors (vision), thermoreceptors (temperature), nociceptors (pain), hair cells (hearing and balance), and a number of other different mechanoreceptors for touch and proprioception (stretch, distortion and stress).
==== Smell ====
The sensory neurons involved in smell are called olfactory sensory neurons. These neurons contain receptors, called olfactory receptors, that are activated by odor molecules in the air. The molecules in the air are detected by enlarged cilia and microvilli. These sensory neurons produce action potentials. Their axons form the olfactory nerve, and they synapse directly onto neurons in the olfactory bulb. They do not use the same route as other sensory systems, bypassing the brain stem and the thalamus. The neurons in the olfactory bulb that receive direct sensory nerve input have connections to other parts of the olfactory system and many parts of the limbic system.
==== Taste ====
Taste sensation is facilitated by specialized sensory neurons located in the taste buds of the tongue and other parts of the mouth and throat. These sensory neurons detect different taste qualities, such as sweet, sour, salty, bitter, and savory. When food or drink is consumed, chemicals in it interact with receptors on these sensory neurons, triggering signals that are sent to the brain, which processes and interprets them as specific taste sensations. When taste receptor cells are stimulated by the binding of these chemical compounds (tastants), the flow of ions such as sodium (Na+), calcium (Ca2+), and potassium (K+) across the cell membrane can change: ion channels on the taste receptor cell membrane can open or close in response to tastant binding, which can depolarize the cell membrane and create an electrical signal.
Similar to olfactory receptors, taste receptors (gustatory receptors) in taste buds interact with chemicals in food to produce an action potential.
==== Vision ====
Photoreceptor cells are capable of phototransduction, a process which converts light (electromagnetic radiation) into electrical signals. These signals are refined and controlled by the interactions with other types of neurons in the retina. The five basic classes of neurons within the retina are photoreceptor cells, bipolar cells, ganglion cells, horizontal cells, and amacrine cells. The basic circuitry of the retina incorporates a three-neuron chain consisting of the photoreceptor (either a rod or cone), bipolar cell, and the ganglion cell. The first action potential occurs in the retinal ganglion cell. This pathway is the most direct way for transmitting visual information to the brain. There are three primary types of photoreceptors: Cones are photoreceptors that respond significantly to color. In humans the three different types of cones correspond with a primary response to short wavelength (blue), medium wavelength (green), and long wavelength (yellow/red). Rods are photoreceptors that are very sensitive to the intensity of light, allowing for vision in dim lighting. The concentrations and ratio of rods to cones is strongly correlated with whether an animal is diurnal or nocturnal. In humans, rods outnumber cones by approximately 20:1, while in nocturnal animals, such as the tawny owl, the ratio is closer to 1000:1. Retinal ganglion cells are involved in the sympathetic response. Of the ~1.3 million ganglion cells present in the retina, 1-2% are believed to be photosensitive.
Issues and decay of sensory neurons associated with vision lead to disorders such as:
Macular degeneration – degeneration of the central visual field due to either cellular debris or blood vessels accumulating between the retina and the choroid, thereby disturbing and/or destroying the complex interplay of neurons that are present there.
Glaucoma – loss of retinal ganglion cells which causes some loss of vision to blindness.
Diabetic retinopathy – poor blood sugar control due to diabetes damages the tiny blood vessels in the retina.
==== Auditory ====
The auditory system is responsible for converting pressure waves generated by vibrating air molecules or sound into signals that can be interpreted by the brain.
This mechanoelectrical transduction is mediated by hair cells within the ear. Depending on the movement, the hair cell can either hyperpolarize or depolarize. When the movement is towards the tallest stereocilia, cation channels open, allowing K+ to flow into the cell, and the resulting depolarization causes Ca2+ channels to open, releasing neurotransmitter onto the afferent auditory nerve. There are two types of hair cells: inner and outer. The inner hair cells are the sensory receptors.
Problems with sensory neurons associated with the auditory system leads to disorders such as:
Auditory processing disorder – Auditory information in the brain is processed in an abnormal way. Patients with auditory processing disorder can usually gain the information normally, but their brain cannot process it properly, leading to hearing disability.
Auditory verbal agnosia – Comprehension of speech is lost but hearing, speaking, reading, and writing ability is retained. This is caused by damage to the posterior superior temporal lobes, again not allowing the brain to process auditory input correctly.
==== Temperature ====
Thermoreceptors are sensory receptors that respond to varying temperatures. While the mechanisms through which these receptors operate are unclear, recent discoveries have shown that mammals have at least two distinct types of thermoreceptors.
The bulboid corpuscle is a cutaneous cold-sensitive receptor that detects cold temperatures. The other type is a warmth-sensitive receptor.
==== Mechanoreceptors ====
Mechanoreceptors are sensory receptors which respond to mechanical forces, such as pressure or distortion.
Specialized sensory receptor cells called mechanoreceptors often encapsulate afferent fibers to help tune the afferent fibers to the different types of somatic stimulation. Mechanoreceptors also help lower thresholds for action potential generation in afferent fibers and thus make them more likely to fire in the presence of sensory stimulation.
Some types of mechanoreceptors fire action potentials when their membranes are physically stretched.
Proprioceptors are another type of mechanoreceptors which literally means "receptors for self". These receptors provide spatial information about limbs and other body parts.
Nociceptors are responsible for processing pain and temperature changes. The burning pain and irritation experienced after eating a chili pepper (due to its main ingredient, capsaicin), the cold sensation experienced after ingesting a chemical such as menthol or icilin, as well as the common sensation of pain are all a result of neurons with these receptors.
Problems with mechanoreceptors lead to disorders such as:
Neuropathic pain - a severe pain condition resulting from a damaged sensory nerve
Hyperalgesia - an increased sensitivity to pain caused by the sensory ion channel TRPM8, which typically responds to temperatures between 23 and 26 °C and provides the cooling sensation associated with menthol and icilin
Phantom limb syndrome - a sensory system disorder where pain or movement is experienced in a limb that does not exist
=== Internal receptors ===
Internal receptors that respond to changes inside the body are known as interoceptors.
==== Blood ====
The aortic bodies and carotid bodies contain clusters of glomus cells – peripheral chemoreceptors that detect changes in chemical properties in the blood such as oxygen concentration. These receptors are polymodal responding to a number of different stimuli.
==== Nociceptors ====
Nociceptors respond to potentially damaging stimuli by sending signals to the spinal cord and brain. This process, called nociception, usually causes the perception of pain. They are found in internal organs as well as on the surface of the body to "detect and protect". Nociceptors detect different kinds of noxious stimuli indicating potential for damage, then initiate neural responses to withdraw from the stimulus.
Thermal nociceptors are activated by noxious heat or cold at various temperatures.
Mechanical nociceptors respond to excess pressure or mechanical deformation, such as a pinch.
Chemical nociceptors respond to a wide variety of chemicals, some of which signal a response. They are involved in the detection of some spices in food, such as the pungent ingredients in Brassica and Allium plants, which target the sensory neural receptor to produce acute pain and subsequent pain hypersensitivity.
== Connection with the central nervous system ==
Information coming from the sensory neurons in the head enters the central nervous system (CNS) through cranial nerves. Information from the sensory neurons below the head enters the spinal cord and passes towards the brain through the 31 spinal nerves. The sensory information traveling through the spinal cord follows well-defined pathways. The nervous system codes the differences among the sensations in terms of which cells are active.
== Classification ==
=== Adequate stimulus ===
A sensory receptor's adequate stimulus is the stimulus modality for which it possesses the adequate sensory transduction apparatus. Adequate stimulus can be used to classify sensory receptors:
Baroreceptors respond to pressure in blood vessels
Chemoreceptors respond to chemical stimuli
Electromagnetic radiation receptors respond to electromagnetic radiation
Infrared receptors respond to infrared radiation
Photoreceptors respond to visible light
Ultraviolet receptors respond to ultraviolet radiation
Electroreceptors respond to electric fields
Ampullae of Lorenzini respond to electric fields, salinity, and to temperature, but function primarily as electroreceptors
Hydroreceptors respond to changes in humidity
Magnetoreceptors respond to magnetic fields
Mechanoreceptors respond to mechanical stress or mechanical strain
Nociceptors respond to damage, or threat of damage, to body tissues, leading (often but not always) to pain perception
Osmoreceptors respond to the osmolarity of fluids (such as in the hypothalamus)
Proprioceptors provide the sense of position
Thermoreceptors respond to temperature, either heat, cold or both
=== Location ===
Sensory receptors can be classified by location:
Cutaneous receptors are sensory receptors found in the dermis or epidermis.
Muscle spindles contain mechanoreceptors that detect stretch in muscles.
=== Morphology ===
Somatic sensory receptors near the surface of the skin can usually be divided into two groups based on morphology:
Free nerve endings characterize the nociceptors and thermoreceptors and are called thus because the terminal branches of the neuron are unmyelinated and spread throughout the dermis and epidermis.
Encapsulated receptors consist of the remaining types of cutaneous receptors. Encapsulation exists for specialized functioning.
=== Rate of adaptation ===
A tonic receptor is a sensory receptor that adapts slowly to a stimulus and continues to produce action potentials over the duration of the stimulus. In this way it conveys information about the duration of the stimulus. Some tonic receptors are permanently active and indicate a background level. Examples of such tonic receptors are pain receptors, joint capsule receptors, and muscle spindles.
A phasic receptor is a sensory receptor that adapts rapidly to a stimulus. The response of the cell diminishes very quickly and then stops. It does not provide information on the duration of the stimulus; instead some of them convey information on rapid changes in stimulus intensity and rate. An example of a phasic receptor is the Pacinian corpuscle.
== Drugs ==
There are many drugs currently on the market that are used to manipulate or treat sensory system disorders. For instance, gabapentin is a drug that is used to treat neuropathic pain by interacting with one of the voltage-dependent calcium channels present on non-receptive neurons. Some drugs may be used to combat other health problems, but can have unintended side effects on the sensory system. Dysfunction in the hair cell mechanotransduction complex, along with the potential loss of specialized ribbon synapses, can lead to hair cell death, often caused by ototoxic drugs like aminoglycoside antibiotics poisoning the cochlea. Through the use of these toxins, the K+ pumping hair cells cease their function. Thus, the energy generated by the endocochlear potential which drives the auditory signal transduction process is lost, leading to hearing loss.
== Neuroplasticity ==
Ever since scientists observed cortical remapping in the brain of Taub's Silver Spring monkeys, there has been a large amount of research into sensory system plasticity. Huge strides have been made in treating disorders of the sensory system. Techniques such as constraint-induced movement therapy developed by Taub have helped patients with paralyzed limbs regain use of their limbs by forcing the sensory system to grow new neural pathways. Phantom limb syndrome is a sensory system disorder in which amputees perceive that their amputated limb still exists and they may still be experiencing pain in it. The mirror box developed by V.S. Ramachandran, has enabled patients with phantom limb syndrome to relieve the perception of paralyzed or painful phantom limbs. It is a simple device which uses a mirror in a box to create an illusion in which the sensory system perceives that it is seeing two hands instead of one, therefore allowing the sensory system to control the "phantom limb". By doing this, the sensory system can gradually get acclimated to the amputated limb, and thus alleviate this syndrome.
== Other animals ==
Hydrodynamic reception is a form of mechanoreception used in a range of animal species.
== Additional images ==
== See also ==
Pseudounipolar neuron
Neural coding
Posterior column
Receptive field
Sensory system
List of distinct cell types in the adult human body
Sensory nerve
Motor nerve
Afferent nerve fiber
Efferent nerve fiber
Motor neuron
== References ==
== External links ==
Media related to Sensory neuron at Wikimedia Commons
Purves D, Augustine GJ, Fitzpatrick D, et al., eds. (2001). "Table 9.1 The major classes of somatic sensory receptors". Neuroscience (2nd ed.). Sunderland MA: Sinauer Associates. ISBN 0-87893-742-0. | Wikipedia/Sensory_neuron |
A diversity index is a method of measuring how many different types (e.g. species) there are in a dataset (e.g. a community). Diversity indices are statistical representations of different aspects of biodiversity (e.g. richness, evenness, and dominance), which are useful simplifications for comparing different communities or sites.
When diversity indices are used in ecology, the types of interest are usually species, but they can also be other categories, such as genera, families, functional types, or haplotypes. The entities of interest are usually individual organisms (e.g. plants or animals), and the measure of abundance can be, for example, number of individuals, biomass or coverage. In demography, the entities of interest can be people, and the types of interest various demographic groups. In information science, the entities can be characters and the types of the different letters of the alphabet. The most commonly used diversity indices are simple transformations of the effective number of types (also known as 'true diversity'), but each diversity index can also be interpreted in its own right as a measure corresponding to some real phenomenon (but a different one for each diversity index).
Many indices only account for categorical diversity between subjects or entities. Such indices, however, do not account for the total variation (diversity) that can be held between subjects or entities, which occurs only when both categorical and qualitative diversity are calculated.
Diversity indices described in this article include:
Richness, simply a count of the number of types in a dataset.
Shannon index, which also takes into account the proportional abundance of each class under a weighted geometric mean.
The Rényi entropy, which adds the ability to freely vary the kind of weighted mean used.
Simpson index, which likewise takes into account the proportional abundance of each class, under a weighted arithmetic mean.
Berger–Parker index, which gives the proportional abundance of the most abundant type.
Effective number of species (true diversity), which allows for freely varying the kind of weighted mean used, and has an intuitive meaning.
Some more sophisticated indices also account for the phylogenetic relatedness among the types. These are called phylo-divergence indices, and are not yet described in this article.
== Effective number of species or Hill numbers ==
True diversity, or the effective number of types, refers to the number of equally abundant types needed for the average proportional abundance of the types to equal that observed in the dataset of interest (where all types may not be equally abundant). The true diversity in a dataset is calculated by first taking the weighted generalized mean Mq−1 of the proportional abundances of the types in the dataset, and then taking the reciprocal of this. The equation is:
{\displaystyle {}^{q}\!D={1 \over M_{q-1}}={1 \over {\sqrt[{q-1}]{\sum _{i=1}^{R}p_{i}p_{i}^{q-1}}}}=\left({\sum _{i=1}^{R}p_{i}^{q}}\right)^{1/(1-q)}}
The denominator Mq−1 equals the average proportional abundance of the types in the dataset as calculated with the weighted generalized mean with exponent q − 1. In the equation, R is richness (the total number of types in the dataset), and the proportional abundance of the ith type is pi. The proportional abundances themselves are used as the nominal weights. The numbers {\displaystyle ^{q}D} are called Hill numbers of order q or the effective number of species.
When q = 1, the above equation is undefined. However, the mathematical limit as q approaches 1 is well defined and the corresponding diversity is calculated with the following equation:
{\displaystyle {}^{1}\!D={1 \over {\prod _{i=1}^{R}p_{i}^{p_{i}}}}=\exp \left(-\sum _{i=1}^{R}p_{i}\ln(p_{i})\right)}
which is the exponential of the Shannon entropy calculated with natural logarithms (see above). In other domains, this statistic is also known as the perplexity.
The general equation of diversity is often written in the form
{\displaystyle {}^{q}\!D=\left({\sum _{i=1}^{R}p_{i}^{q}}\right)^{1/(1-q)}}
and the term inside the parentheses is called the basic sum. Some popular diversity indices correspond to the basic sum as calculated with different values of q.
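To make the role of the basic sum and the order q concrete, the Hill numbers can be computed directly. The sketch below is illustrative (the abundance vector is invented); it handles the q = 1 case via the Shannon entropy limit and q = ∞ via the most abundant type:

```python
import numpy as np

def hill_number(p, q):
    """Effective number of types qD for proportional abundances p and order q."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # zero abundances contribute nothing
    if q == 1:                        # limit case: exponential of Shannon entropy
        return np.exp(-np.sum(p * np.log(p)))
    if np.isinf(q):                   # limit case: reciprocal of the max abundance
        return 1.0 / p.max()
    return np.sum(p ** q) ** (1.0 / (1.0 - q))

p = np.array([0.5, 0.3, 0.15, 0.05])  # hypothetical four-species community
for q in (0, 1, 2, np.inf):
    print(q, hill_number(p, q))       # 0D = 4 (richness), then decreasing with q
```

As discussed in the next section, the value decreases from the richness R at q = 0 toward the reciprocal of the largest proportional abundance as q grows.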
== Sensitivity of the diversity value to rare vs. abundant species ==
The value of q is often referred to as the order of the diversity. It defines the sensitivity of the true diversity to rare vs. abundant species by modifying how the weighted mean of the species' proportional abundances is calculated. With some values of the parameter q, the value of the generalized mean Mq−1 assumes familiar kinds of weighted means as special cases. In particular,
q = 0 corresponds to the weighted harmonic mean,
q = 1 to the weighted geometric mean, and
q = 2 to the weighted arithmetic mean.
As q approaches infinity, the weighted generalized mean with exponent q − 1 approaches the maximum pi value, which is the proportional abundance of the most abundant species in the dataset.
Generally, increasing the value of q increases the effective weight given to the most abundant species. This leads to obtaining a larger Mq−1 value and a smaller true diversity (qD) value with increasing q.
When q = 1, the weighted geometric mean of the pi values is used, and each species is exactly weighted by its proportional abundance (in the weighted geometric mean, the weights are the exponents). When q > 1, the weight given to abundant species is exaggerated, and when q < 1, the weight given to rare species is exaggerated instead. At q = 0, the species weights exactly cancel out the species proportional abundances, such that the weighted mean of the pi values equals 1 / R even when all species are not equally abundant. At q = 0, the effective number of species, 0D, hence equals the actual number of species R. In the context of diversity, q is generally limited to non-negative values. This is because negative values of q would give rare species so much more weight than abundant ones that qD would exceed R.
== Richness ==
Richness R simply quantifies how many different types the dataset of interest contains. For example, species richness (usually noted S) is simply the number of species, e.g. at a particular site. Richness is a simple measure, so it has been a popular diversity index in ecology, where abundance data are often not available. If true diversity is calculated with q = 0, the effective number of types (0D) equals the actual number of types, which is identical to Richness (R).
== Shannon index ==
The Shannon index has been a popular diversity index in the ecological literature, where it is also known as Shannon's diversity index, Shannon–Wiener index, and (erroneously) Shannon–Weaver index. The measure was originally proposed by Claude Shannon in 1948 to quantify the entropy (hence Shannon entropy, related to Shannon information content) in strings of text. The idea is that the more letters there are, and the closer their proportional abundances in the string of interest, the more difficult it is to correctly predict which letter will be the next one in the string. The Shannon entropy quantifies the uncertainty (entropy or degree of surprise) associated with this prediction. It is most often calculated as follows:
{\displaystyle H'=-\sum _{i=1}^{R}p_{i}\ln(p_{i})}
where pi is the proportion of characters belonging to the ith type of letter in the string of interest. In ecology, pi is often the proportion of individuals belonging to the ith species in the dataset of interest. Then the Shannon entropy quantifies the uncertainty in predicting the species identity of an individual that is taken at random from the dataset.
Although the equation is here written with natural logarithms, the base of the logarithm used when calculating the Shannon entropy can be chosen freely. Shannon himself discussed logarithm bases 2, 10 and e, and these have since become the most popular bases in applications that use the Shannon entropy. Each log base corresponds to a different measurement unit, which has been called binary digits (bits), decimal digits (decits), and natural digits (nats) for the bases 2, 10 and e, respectively. Comparing Shannon entropy values that were originally calculated with different log bases requires converting them to the same log base: change from the base a to base b is obtained with multiplication by logb(a).
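For example, an entropy computed in nats is converted to bits by dividing by ln 2 (equivalently, multiplying by log2 e). A minimal sketch with an invented value:

```python
import math

H_nats = 1.0497                  # hypothetical Shannon entropy in nats (base e)
H_bits = H_nats / math.log(2)    # change from base e to base 2
H_decits = H_nats / math.log(10) # change from base e to base 10
print(H_bits, H_decits)
```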
The Shannon index (H') is related to the weighted geometric mean of the proportional abundances of the types. Specifically, it equals the logarithm of true diversity as calculated with q = 1:
{\displaystyle H'=-\sum _{i=1}^{R}p_{i}\ln(p_{i})=-\sum _{i=1}^{R}\ln \left(p_{i}^{p_{i}}\right)}
This can also be written
{\displaystyle H'=-\left(\ln \left(p_{1}^{p_{1}}\right)+\ln \left(p_{2}^{p_{2}}\right)+\ln \left(p_{3}^{p_{3}}\right)+\cdots +\ln \left(p_{R}^{p_{R}}\right)\right)}
which equals
{\displaystyle H'=-\ln \left(p_{1}^{p_{1}}p_{2}^{p_{2}}p_{3}^{p_{3}}\cdots p_{R}^{p_{R}}\right)=\ln \left({1 \over p_{1}^{p_{1}}p_{2}^{p_{2}}p_{3}^{p_{3}}\cdots p_{R}^{p_{R}}}\right)=\ln \left({1 \over {\prod _{i=1}^{R}p_{i}^{p_{i}}}}\right)}
Since the sum of the pi values equals 1 by definition, the denominator equals the weighted geometric mean of the pi values, with the pi values themselves being used as the weights (exponents in the equation). The term within the parentheses hence equals true diversity 1D, and H' equals ln(1D).
When all types in the dataset of interest are equally common, all pi values equal 1 / R, and the Shannon index hence takes the value ln(R). The more unequal the abundances of the types, the larger the weighted geometric mean of the pi values, and the smaller the corresponding Shannon entropy. If practically all abundance is concentrated to one type, and the other types are very rare (even if there are many of them), Shannon entropy approaches zero. When there is only one type in the dataset, Shannon entropy exactly equals zero (there is no uncertainty in predicting the type of the next randomly chosen entity).
In machine learning, the Shannon index is also known as information gain.
=== Rényi entropy ===
The Rényi entropy is a generalization of the Shannon entropy to other values of q than 1. It can be expressed:
{\displaystyle {}^{q}H={\frac {1}{1-q}}\;\ln \left(\sum _{i=1}^{R}p_{i}^{q}\right)}
which equals
{\displaystyle {}^{q}H=\ln \left({1 \over {\sqrt[{q-1}]{\sum _{i=1}^{R}p_{i}p_{i}^{q-1}}}}\right)=\ln({}^{q}\!D)}
This means that taking the logarithm of true diversity based on any value of q gives the Rényi entropy corresponding to the same value of q.
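This identity is easy to verify numerically for any order q ≠ 1 (a minimal sketch reusing the invented community from above):

```python
import numpy as np

p = np.array([0.5, 0.3, 0.15, 0.05])     # same hypothetical community as above
q = 2.0
renyi = np.log(np.sum(p ** q)) / (1 - q) # Renyi entropy of order q
hill = np.sum(p ** q) ** (1 / (1 - q))   # Hill number qD
print(renyi, np.log(hill))               # identical: qH = ln(qD)
```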
== Simpson index ==
The Simpson index was introduced in 1949 by Edward H. Simpson to measure the degree of concentration when individuals are classified into types. The same index was rediscovered by Orris C. Herfindahl in 1950. The square root of the index had already been introduced in 1945 by the economist Albert O. Hirschman. As a result, the same measure is usually known as the Simpson index in ecology, and as the Herfindahl index or the Herfindahl–Hirschman index (HHI) in economics.
The measure equals the probability that two entities taken at random from the dataset of interest represent the same type. It equals:
{\displaystyle \lambda =\sum _{i=1}^{R}p_{i}^{2},}
where R is richness (the total number of types in the dataset). This equation also equals the weighted arithmetic mean of the proportional abundances pi of the types of interest, with the proportional abundances themselves being used as the weights. Proportional abundances are by definition constrained to values between zero and one, but since λ is their weighted arithmetic mean, λ ≥ 1/R, with equality when all types are equally abundant.
By comparing the equation used to calculate λ with the equations used to calculate true diversity, it can be seen that 1/λ equals 2D, i.e., true diversity as calculated with q = 2. The original Simpson's index hence equals the corresponding basic sum.
The interpretation of λ as the probability that two entities taken at random from the dataset of interest represent the same type assumes that the first entity is replaced to the dataset before taking the second entity. If the dataset is very large, sampling without replacement gives approximately the same result, but in small datasets, the difference can be substantial. If the dataset is small, and sampling without replacement is assumed, the probability of obtaining the same type with both random draws is:
{\displaystyle \ell ={\frac {\sum _{i=1}^{R}n_{i}(n_{i}-1)}{N(N-1)}}}
where ni is the number of entities belonging to the ith type and N is the total number of entities in the dataset. This form of the Simpson index is also known as the Hunter–Gaston index in microbiology.
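The gap between the with-replacement form λ and the without-replacement (Hunter–Gaston) form ℓ is easy to see numerically. A minimal sketch with an invented count vector:

```python
import numpy as np

def simpson_lambda(counts):
    """Simpson index: probability two draws WITH replacement are the same type."""
    n = np.asarray(counts, dtype=float)
    p = n / n.sum()
    return np.sum(p ** 2)

def hunter_gaston(counts):
    """Without-replacement form: sum n_i(n_i - 1) / (N(N - 1))."""
    n = np.asarray(counts, dtype=float)
    N = n.sum()
    return np.sum(n * (n - 1)) / (N * (N - 1))

counts = [12, 5, 2, 1]                 # hypothetical abundances of four types
lam = simpson_lambda(counts)
print(lam, hunter_gaston(counts))      # similar for large N, diverging for small N
print(1 / lam, 1 - lam)                # inverse Simpson (2D) and Gini-Simpson
```

The last line prints the two increasing transformations of λ discussed below, the inverse Simpson index and the Gini–Simpson index.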
Since the mean proportional abundance of the types increases with decreasing number of types and increasing abundance of the most abundant type, λ obtains small values in datasets of high diversity and large values in datasets of low diversity. This is counterintuitive behavior for a diversity index, so often, such transformations of λ that increase with increasing diversity have been used instead. The most popular of such indices have been the inverse Simpson index (1/λ) and the Gini–Simpson index (1 − λ). Both of these have also been called the Simpson index in the ecological literature, so care is needed to avoid accidentally comparing the different indices as if they were the same.
=== Inverse Simpson index ===
The inverse Simpson index equals:
{\displaystyle {\frac {1}{\lambda }}={1 \over \sum _{i=1}^{R}p_{i}^{2}}={}^{2}D}
This simply equals true diversity of order 2, i.e. the effective number of types that is obtained when the weighted arithmetic mean is used to quantify average proportional abundance of types in the dataset of interest.
The index is also used as a measure of the effective number of parties.
=== Gini–Simpson index ===
The Gini–Simpson index is also called Gini impurity, or Gini's diversity index, in the field of machine learning. The original Simpson index λ equals the probability that two entities taken at random from the dataset of interest (with replacement) represent the same type. Its transformation 1 − λ therefore equals the probability that the two entities represent different types. This measure is also known in ecology as the probability of interspecific encounter (PIE) and the Gini–Simpson index. It can be expressed as a transformation of the true diversity of order 2:
{\displaystyle 1-\lambda =1-\sum _{i=1}^{R}p_{i}^{2}=1-{\frac {1}{{}^{2}D}}}
The Gibbs–Martin index of sociology, psychology, and management studies, which is also known as the Blau index, is the same measure as the Gini–Simpson index.
The quantity is also known as the expected heterozygosity in population genetics.
== Berger–Parker index ==
The Berger–Parker index, named after Wolfgang H. Berger and Frances Lawrence Parker, equals the maximum pi value in the dataset, i.e., the proportional abundance of the most abundant type. This corresponds to the weighted generalized mean of the pi values when q approaches infinity, and hence equals the inverse of the true diversity of order infinity (1/∞D).
== See also ==
== References ==
== Further reading ==
Colinvaux, Paul A. (1973). Introduction to Ecology. Wiley. ISBN 0-471-16498-4.
Cover, Thomas M.; Thomas, Joy A. (1991). Elements of Information Theory. Wiley. ISBN 0-471-06259-6. See chapter 5 for an elaboration of coding procedures described informally above.
Chao, A.; Shen, T-J. (2003). "Nonparametric estimation of Shannon's index of diversity when there are unseen species in sample" (PDF). Environmental and Ecological Statistics. 10 (4): 429–443. Bibcode:2003EnvES..10..429C. doi:10.1023/A:1026096204727. S2CID 20389926.
== External links ==
Simpson's Diversity index
Diversity indices Archived 2005-12-19 at the Wayback Machine gives some examples of estimates of Simpson's index for real ecosystems. | Wikipedia/Ecological_entropy |
Perforated metal, also known as perforated sheet, perforated plate, or perforated screen, is sheet metal that has been manually or mechanically stamped or punched, using CNC technology or in some cases laser cutting, to create holes of different sizes, shapes, and patterns. Materials used to manufacture perforated metal sheets include stainless steel, cold rolled steel, galvanized steel, brass, aluminum, tinplate, copper, Monel, Inconel, titanium, plastic, and more.
The process of perforating metal sheets has been practiced for over 150 years. In the late 19th century, metal screens were used as an efficient means of separating coal. The first perforators were laborers who would manually punch individual holes into the metal sheet. This proved to be an inefficient and inconsistent method which led to the development of new techniques, such as perforating the metal with a series of needles arranged in a way that would create the desired hole pattern.
Modern day perforation methods involve the use of technology and machines. Common equipment used for the perforation of metal include rotary pinned perforation rollers, die and punch presses, and laser perforations.
== Applications ==
Perforated metal has been utilized across a variety of industries including, but not limited to:
Architectural - infill panels, sunshade, cladding, column covers, metal signage, site amenities, fencing screens, etc.
Food & beverage - beehive construction, grain dryers, wine vats, fish farming, silo ventilation, sorting machines, fruit and vegetable juice presses, cheese molds, baking trays, coffee screens, etc.
Chemical & energy - filters, centrifuges, drying machine baskets, battery separator plates, water screens, gas purifiers, liquid gas burning tubes, mine cages, coal washing, etc.
Material development - glass reinforcement, cement slurry screens, dyeing machines, textile printers and felt mills, cinder screens, blast furnace screens, etc.
Automotive - air filters, oil filters, silencer tubes, radiator grilles, running boards, flooring, motorcycle silencers, ventilation grids, tractor engine ventilation, sand ladders and mats, etc.
Construction - ceiling noise protection, acoustic panels, stair treads, pipe guards, ventilation grilles, sun protection slats, facades, sign boards, temporary airfield surface, etc.
== Benefits ==
The acoustic performance of perforated metal helps limit the health effects of noise on people and workers. Studies have shown that perforated metals help reduce sound levels.
Studies have shown that placing perforated metal sheets in front of a building's façade can yield substantial energy savings: one study estimated 29% savings in combined HVAC and lighting consumption over one year, while another estimated 45% savings in heating, ventilation, and air conditioning. Depending on the building's location (the intensity of the external sun), solar irradiation can be decreased by 77.9%.
== See also ==
Europerf, the European trade association for the metal perforation industry
== References == | Wikipedia/Perforated_metal |
Active noise control (ANC), also known as noise cancellation (NC), or active noise reduction (ANR), is a method for reducing unwanted sound by the addition of a second sound specifically designed to cancel the first. The concept was first developed in the late 1930s; later developmental work that began in the 1950s eventually resulted in commercial airline headsets with the technology becoming available in the late 1980s. The technology is also used in road vehicles, mobile telephones, earbuds, and headphones.
== Explanation ==
Sound is a pressure wave, which consists of alternating periods of compression and rarefaction. A noise-cancellation speaker emits a sound wave with the same amplitude but with an inverted phase (also known as antiphase) relative to the original sound. The waves combine to form a new wave, in a process called interference, and effectively cancel each other out – an effect which is called destructive interference.
Modern active noise control is generally achieved through the use of analog circuits or digital signal processing. Adaptive algorithms are designed to analyze the waveform of the background aural or nonaural noise, then based on the specific algorithm generate a signal that will either phase shift or invert the polarity of the original signal. This inverted signal (in antiphase) is then amplified and a transducer creates a sound wave directly proportional to the amplitude of the original waveform, creating destructive interference. This effectively reduces the volume of the perceivable noise.
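The superposition idea can be demonstrated in a few lines of code: adding a polarity-inverted copy cancels a tone exactly, while a slightly misscaled or delayed anti-noise signal leaves a residual. This is a minimal illustrative sketch (the tone frequency, sample rate, and error values are arbitrary), not a model of any commercial system:

```python
import numpy as np

fs = 8000                                   # sample rate (Hz), arbitrary choice
t = np.arange(0, 0.1, 1 / fs)               # 100 ms of signal
noise = 0.8 * np.sin(2 * np.pi * 200 * t)   # unwanted 200 Hz tone

anti_noise = -noise                          # ideal anti-noise: inverted polarity
print(np.max(np.abs(noise + anti_noise)))    # ~0: perfect destructive interference

# An imperfect canceller: amplitude off by 5%, delayed by one sample
imperfect = -0.95 * np.roll(noise, 1)
print(np.max(np.abs(noise + imperfect)))     # nonzero residual noise remains
```

In practice, the adaptive algorithms mentioned above continuously correct such amplitude and phase errors to drive the residual toward zero.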
A noise-cancellation speaker may be co-located with the sound source to be attenuated. In this case, it must have the same audio power level as the source of the unwanted sound in order to cancel the noise. Alternatively, the transducer emitting the cancellation signal may be located at the location where sound attenuation is wanted (e.g. the user's ear). This requires a much lower power level for cancellation but is effective only for a single user. Noise cancellation at other locations is more difficult as the three-dimensional wavefronts of the unwanted sound and the cancellation signal could match and create alternating zones of constructive and destructive interference, reducing noise in some spots while doubling noise in others. In small enclosed spaces (e.g. the passenger compartment of a car) global noise reduction can be achieved via multiple speakers and feedback microphones, and measurement of the modal responses of the enclosure.
== Applications ==
Applications can be 1-dimensional or 3-dimensional, depending on the type of zone to protect. Periodic sounds, even complex ones, are easier to cancel than random sounds due to the repetition in the waveform.
Protection of a 1-dimension zone is easier and requires only one or two microphones and speakers to be effective. Several commercial applications have been successful: noise-canceling headphones, active mufflers, anti-snoring devices, vocal or center channel extraction for karaoke machines, and the control of noise in air conditioning ducts. The term 1-dimension refers to a simple pistonic relationship between the noise and the active speaker (mechanical noise reduction) or between the active speaker and the listener (headphones).
Protection of a 3-dimensional zone requires many microphones and speakers, making it more expensive. Noise reduction is more easily achieved with a single listener remaining stationary but if there are multiple listeners or if the single listener turns their head or moves throughout the space then the noise reduction challenge is made much more difficult. High-frequency waves are difficult to reduce in three dimensions due to their relatively short audio wavelength in air. The wavelength in air of sinusoidal noise at approximately 800 Hz is double the distance of the average person's left ear to the right ear; such a noise coming directly from the front will be easily reduced by an active system but coming from the side will tend to cancel at one ear while being reinforced at the other, making the noise louder, not softer. High-frequency sounds above 1000 Hz tend to cancel and reinforce unpredictably from many directions. In sum, the most effective noise reduction in three-dimensional space involves low-frequency sounds. Commercial applications of 3-D noise reduction include the protection of aircraft cabins and car interiors, but in these situations, protection is mainly limited to the cancellation of repetitive (or periodic) noise such as engine-, propeller- or rotor-induced noise. This is because an engine's cyclic nature makes analysis and noise cancellation easier to apply.
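A quick check of the wavelength claim, assuming a speed of sound of roughly 343 m/s in air at room temperature:

```python
c = 343.0           # speed of sound in air (m/s), room-temperature assumption
f = 800.0           # frequency (Hz)
wavelength = c / f  # ~0.43 m, roughly double a typical ~0.21 m ear-to-ear distance
print(wavelength)
```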
Modern mobile phones use a multi-microphone design to cancel out ambient noise from the speech signal. Sound is captured from the microphone(s) furthest from the mouth (the noise signal(s)) and from the one closest to the mouth (the desired signal). The signals are processed to cancel the noise from the desired signal, producing improved voice sound quality.
In some cases, noise can be controlled by employing active vibration control. This approach is appropriate when the vibration of a structure produces unwanted noise by coupling the vibration into the surrounding air or water.
== Active vis-à-vis passive noise control ==
Noise control is an active or passive means of reducing sound emissions, often for personal comfort, environmental considerations, or legal compliance. Active noise control is sound reduction using a power source. Passive noise control is sound reduction by noise-isolating materials such as insulation, sound-absorbing tiles, or a muffler rather than a power source.
Active noise canceling is best suited for low frequencies. For higher frequencies, the spacing requirements for free space and zone of silence techniques become prohibitive. In acoustic cavity and duct-based systems, the number of nodes grows rapidly with increasing frequency, which quickly makes active noise control techniques unmanageable. Passive treatments become more effective at higher frequencies and often provide an adequate solution without the need for active control.
== History ==
The first patent for a noise control system—U.S. patent 2,043,416—was granted to inventor Paul Lueg in 1936. The patent described how to cancel sinusoidal tones in ducts by phase-advancing the wave and canceling arbitrary sounds in the region around a loudspeaker by inverting the polarity. In the 1950s Lawrence J. Fogel patented systems to cancel the noise in helicopter and airplane cockpits. In 1957 Willard Meeker developed a working model of active noise control applied to a circumaural earmuff. This headset had an active attenuation bandwidth of approximately 50–500 Hz, with a maximum attenuation of approximately 20 dB. By the late 1980s the first commercially available active noise reduction headsets became available. They could be powered by NiCad batteries or directly from the aircraft power system.
== See also ==
Active sound design
Adaptive noise cancelling
Coherence (physics)
Noise-canceling microphone
== Notes ==
== References ==
== External links ==
BYU physicists quiet fans in computers, office equipment
Anti-Noise, Quieting the Environment with Active Noise Cancellation Technology, IEEE Potentials, April 1992
Christopher E. Ruckman's ANC FAQ (This page was created in 1994 and maintained until approximately 2010, but is no longer active.)
Waves of Silence: Digisonix, active noise control, and the digital revolution Archived 2016-03-04 at the Wayback Machine | Wikipedia/Active_noise_control |