Occupational noise is the amount of acoustic energy received by an employee's auditory system in the course of their work. Occupational noise, or industrial noise, is a term often used in occupational safety and health, as sustained exposure can cause permanent hearing damage.
Occupational noise is considered an occupational hazard traditionally linked to loud industries such as ship-building, mining, railroad work, welding, and construction, but it can occur in any workplace with hazardous noise levels.
== Regulation ==
In the US, the National Institute for Occupational Safety and Health (NIOSH) and the Occupational Safety and Health Administration (OSHA) work together to provide standards and regulations for noise in the workplace.
The National Institute for Occupational Safety and Health (NIOSH), the Occupational Safety and Health Administration (OSHA), the Mine Safety and Health Administration (MSHA), and the Federal Railroad Administration (FRA) have all set standards on hazardous occupational noise in their respective industries. Each industry is different, as workers' tasks and equipment differ, but most regulations agree that noise becomes hazardous when it exceeds 85 decibels for an 8-hour exposure (a typical work shift). This relationship between allotted noise level and exposure time is known as an exposure action value (EAV) or permissible exposure limit (PEL). The EAV or PEL can be seen as an equation that adjusts the allotted exposure time according to the intensity of the industrial noise, in an inverse exponential relationship: as the noise intensity increases, the allotted exposure time that remains safe decreases. Thus, a worker exposed to a noise level of 100 decibels for 15 minutes would be at the same risk level as a worker exposed to 85 decibels for 8 hours. Using this mathematical relationship, an employer can calculate whether or not employees are being overexposed to noise. When it is suspected that an employee will reach or exceed the PEL, the employer should implement a monitoring program for that employee.
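To make the inverse exponential relationship concrete, here is a minimal sketch in Python, assuming an 85 dBA / 8-hour criterion with a 3-dB exchange rate (the criterion, exchange rate, and function names are illustrative choices, not the text of any regulation):

```python
# Allowed exposure time under an 85 dBA, 8-hour criterion with a 3-dB
# exchange rate: the permitted time halves for every 3 dB above the
# criterion. A sketch, not a substitute for a regulatory dosimetry method.

def allowed_hours(level_dba: float, criterion_dba: float = 85.0,
                  criterion_hours: float = 8.0,
                  exchange_rate_db: float = 3.0) -> float:
    """Halve the allowed time for every exchange_rate_db above the criterion."""
    return criterion_hours / 2 ** ((level_dba - criterion_dba) / exchange_rate_db)

print(allowed_hours(85))   # 8.0 hours
print(allowed_hours(100))  # 0.25 hours = 15 minutes, matching the text
```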
The above calculations of PEL and EAV are based on measurements taken to determine the intensity of that particular industrial noise. A-weighted measurements are commonly used to determine noise levels that can cause harm to the human ear. There are also special exposure meters available that integrate noise over a period of time to give an Leq value (equivalent sound pressure level), defined by standards.
=== Regulations in different countries ===
These numerical values do not fully reflect enforcement practice. For example, the OSHA standard sets the action level at 85 dBA and the PEL at 90 dBA, but in practice a Compliance Safety and Health Officer must record an exceedance of these values with a margin that accounts for potential measurement error. In effect, the PEL of 90 dBA becomes 92 dBA, and the action level of 85 dBA becomes 87 dBA.
== Risks of occupational hearing loss ==
Occupational noise, if experienced repeatedly, at high intensity, for an extended period of time, can cause noise-induced hearing loss (NIHL) which is then classified as occupational hearing loss. Most often, this is a type of sensorineural hearing loss.
Noise, in the context of industrial noise, is hazardous to a person's hearing because of its loud intensity through repeated long-term exposure. In order for noise to cause hearing impairment for the worker, the noise has to be close enough, loud enough, and sustained long enough to damage the hair cells in the auditory system. These factors have been taken into account by the governing occupational health and safety organization to determine the unsafe noise exposure levels and durations for their respective industries.
Noise can also affect the safety of the employee and others. Noise can be a causal factor in work accidents as it may mask hazards and warning signals and impede concentration. High intensity noise interferes with vital workplace communication which increases the chance of accidents and decreases productivity.
Noise may also act synergistically with other hazards to increase the risk of harm to workers. In particular, toxic materials (e.g., some solvents, metals, asphyxiants and pesticides) have some ototoxic properties that may affect hearing function.
Modern thinking in occupational safety and health further identifies noise as hazardous to workers' safety and health. This hazard is experienced in various places of employment and through a variety of sources.
== Reduction ==
There are several ways to limit exposure to hazardous occupational noise. The hierarchy of controls is a guideline for reducing hazardous noise. Before starting a noise reduction program, baseline noise levels should first be recorded. After this, the company can try to eliminate the noise source. If the noise source cannot be eliminated, the company must try to reduce the noise by alternative methods, a process called acoustic quieting.
Acoustic quieting is the process of making machinery quieter by damping vibrations to prevent them from reaching the observer. The company can isolate a particular piece of machinery by placing materials on the machine, or between the machine and the worker, to decrease the signal intensity that reaches the worker's ear.
If elimination and substitution are not sufficient in reducing the noise exposure, engineering controls should be put in place by the employer. An engineering control usually changes the physical environment of a workplace. For noise reduction, an engineering control might be as simple as placing barriers between the noise source and the employee to disrupt the transmission path. An engineering control might also involve changing the machine that produces the noise. Ideally, most machines should be made with noise reduction in mind, but this doesn't always happen. Changing the machinery involved in an industrial process may not be possible, but is a good way to reduce the noise at its source.
To decrease an employee's exposure to hazardous noise, the company can also take administrative control by limiting the employee's exposure time. This can be done by changing work shifts and switching employees out from the noise exposure area. An employer might also implement a training program so that employees can learn about the hazards of occupational noise. Other administrative controls might include restricting access to noisy areas as well as placing warning signs around those same areas.
If all other controls fail to decrease the occupational noise exposure to an acceptable level, hearing protection devices (HPDs) should be used. There are several types of earplugs that can be used to attenuate the noise to a safe level, including single-use earplugs, multiple-use earplugs, and banded earplugs. Depending on the type of work being done and the needs of the employees, earmuffs might also be a good option. While earmuffs might not have as high a noise reduction rating (NRR) as earplugs, they can be useful if the noise exposure is not very high, or if an employee cannot wear earplugs. Unfortunately, the ability of HPDs to decrease the risk of hearing damage is, in practice, often close to zero.
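As a rough illustration of how a labeled NRR is commonly applied, the sketch below assumes the widely used rule of thumb of subtracting 7 dB from the NRR for A-weighted measurements and then derating the remainder by 50% for real-world fit; the exact derating scheme varies by agency, so treat these numbers as assumptions:

```python
# Rough estimate of protected exposure level from a labeled NRR.
# Assumes the common practice of subtracting 7 dB when the workplace
# noise is measured A-weighted, plus an optional 50% derating factor
# (regulators suggest derating because real-world fit is imperfect).

def protected_level(workplace_dba: float, nrr: float, derate: float = 0.5) -> float:
    effective_attenuation = (nrr - 7.0) * derate  # 50% derating by default
    return workplace_dba - effective_attenuation

print(protected_level(95, nrr=29))  # 95 - (29 - 7) * 0.5 = 84.0 dBA
```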
== Initiatives ==
Since the hazards of occupational noise exposure were realized, programs and initiatives such as the US Buy Quiet program have been set up to regulate or discourage noise exposure. The Buy Quiet initiative promotes the purchase of quieter tools and equipment and encourages manufacturers to design quieter machines. Additionally, the Safe-In-Sound Award was created to recognize successes in hearing loss prevention programs or initiatives.
== See also ==
Hearing conservation program
Occupational hearing loss
Noise-induced hearing loss
Buy Quiet
Earplug
Earmuffs
Protective clothing
A-weighting
ITU-R 468 noise weighting
Weighting filter
Equal-loudness contour
Safe-In-Sound Award Excellence in Hearing Loss Prevention
General:
Health effects from noise
Noise control
Noise pollution
Noise regulation
== Notes ==
== References ==
== External links ==
Occupational Safety & Health Administration
NIOSH Buy Quiet Topic Page
Establishing an OSHA-Compliant Occupational Hearing Testing Program
Engineering controls are strategies designed to protect workers from hazardous conditions by placing a barrier between the worker and the hazard or by removing a hazardous substance through air ventilation. Engineering controls involve a physical change to the workplace itself, rather than relying on workers' behavior or requiring workers to wear protective clothing.
Engineering controls are the third of five members of the hierarchy of hazard controls, which orders control strategies by their feasibility and effectiveness. Engineering controls are preferred over administrative controls and personal protective equipment (PPE) because they are designed to remove the hazard at the source, before it comes into contact with the worker. Well-designed engineering controls can be highly effective in protecting workers and will typically be independent of worker interactions to provide this high level of protection. The initial cost of engineering controls can be higher than the cost of administrative controls or PPE, but over the longer term, operating costs are frequently lower, and in some instances, can provide a cost savings in other areas of the process.
Elimination and substitution are usually considered to be separate levels of hazard controls, but in some schemes they are categorized as types of engineering control.
The U.S. National Institute for Occupational Safety and Health researches engineering control technologies, and provides information on their details and effectiveness in the NIOSH Engineering Controls Database.
== Background ==
Controlling exposures to occupational hazards is considered the fundamental method of protecting workers. Traditionally, a hierarchy of controls has been used as a means of determining how to implement feasible and effective controls, which typically include elimination, substitution, engineering controls, administrative controls, and personal protective equipment. Methods earlier in the list are considered generally more effective in reducing the risk associated with a hazard, with process changes and engineering controls recommended as the primary means for reducing exposures, and personal protective equipment being the approach of last resort. Following the hierarchy is intended to lead to the implementation of inherently safer systems, ones where the risk of illness or injury has been substantially reduced.
Engineering controls are physical changes to the workplace that isolate workers from hazards by containing them in an enclosure, or removing contaminated air from the workplace through ventilation and filtering. Well-designed engineering controls are typically passive, in the sense of being independent of worker interactions, which reduces the potential for worker behavior to impact exposure levels. They also ideally do not interfere with productivity and ease of processing for the worker, because otherwise the operator may be motivated to circumvent the controls. The initial cost of engineering controls can be higher than administrative controls or personal protective equipment, but the long-term operating costs are frequently lower, and can sometimes provide cost savings in other areas of the process.
== Chemical and biological hazards ==
Various chemical hazards and biological hazards are known to cause disease. Engineering control approaches are often oriented towards reducing inhalation exposure through ventilation and isolation of the toxic material. However, isolation can also be useful for preventing skin and eye contact as well, reducing reliance on personal protective equipment which should be the control of last resort.
=== Ventilation ===
Ventilation systems are distinguished as being either local or general. Local exhaust ventilation operates at or near the source of contamination, often in conjunction with an enclosure, while general exhaust ventilation operates on an entire room through a building's HVAC system.
==== Local exhaust ventilation ====
Local exhaust ventilation (LEV) is the application of an exhaust system at or near the source of contamination. If properly designed, it will be much more efficient at removing contaminants than dilution ventilation, requiring lower exhaust volumes, less make-up air, and, in many cases, lower costs. By applying exhaust at the source, contaminants are removed before they get into the general work environment. Examples of local exhaust systems include fume hoods, vented balance enclosures, and biosafety cabinets. Exhaust hoods lacking an enclosure are less preferable, and laminar flow hoods are not recommended because they direct air outwards towards the worker.
Fume hoods are recommended to have an average inward velocity of 80–100 feet per minute (fpm) at the face of the hood. For higher toxicity materials, a higher face velocity of 100–120 fpm is recommended in order to provide better protection. However, face velocities exceeding 150 fpm are not believed to improve performance, and could increase hood leakage. It is recommended that air exiting a fume hood should be passed through a HEPA filter and exhausted outside the work environment, with used filters being handled as hazardous waste. Turbulence can cause materials to exit the front of the hood, and can be avoided by keeping the sash in the proper position, keeping the interior of the hood uncluttered with equipment, and not making fast movements while working.
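For illustration, the exhaust airflow implied by a target face velocity is simply velocity times the open face area; a minimal sketch with hypothetical hood dimensions (actual hood performance is verified by testing, e.g. ASHRAE 110, not by this arithmetic alone):

```python
# Exhaust airflow implied by a target face velocity and sash opening:
# volumetric flow (cfm) = face velocity (fpm) x open face area (ft^2).

def exhaust_flow_cfm(face_velocity_fpm: float, sash_width_ft: float,
                     sash_height_ft: float) -> float:
    return face_velocity_fpm * sash_width_ft * sash_height_ft

# A 4 ft wide hood with the sash open 1.25 ft, at 100 fpm:
print(exhaust_flow_cfm(100, 4.0, 1.25))  # 500.0 cfm
```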
Low-turbulence balance enclosures were initially developed for the weighing of pharmaceutical powders and are also used for nanomaterials; these provide adequate containment at lower face velocities, typically operating at 65–85 fpm. They are useful for weighing operations, which disturb the material and increase its aerosolization.
Biosafety cabinets are designed to contain bioaerosols. However, common biosafety cabinets are more prone to turbulence. As with fume hoods, they are recommended to be exhausted outside the facility.
Dedicated large-scale ventilated enclosures for large pieces of equipment can also be used.
==== General exhaust ventilation ====
General exhaust ventilation (GEV), also called dilution ventilation, is different from local exhaust ventilation because instead of capturing emissions at their source and removing them from the air, general exhaust ventilation allows the contaminant to be emitted into the workplace air and then dilutes the concentration of the contaminant to an acceptable level. GEV is inefficient and costly compared to local exhaust ventilation, and given the lack of established exposure limits for most nanomaterials, it is not recommended to be relied upon for controlling exposure.
However, GEV can provide negative room pressure to prevent contaminants from exiting the room. The use of supply and exhaust air throughout the facility can provide pressurization schemes that reduce the number of workers exposed to potentially hazardous materials, for example keeping production areas at a negative pressure with respect to nearby areas. For general exhaust ventilation in laboratories, a nonrecirculating system is used with 4–12 air changes per hour when used in tandem with local exhaust ventilation, and sources of contamination are placed close to the air exhaust and downwind of workers, and away from windows or doors that may cause air drafts.
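A sketch of how air changes per hour (ACH) translate into a required airflow, using the 4–12 ACH laboratory range quoted above and a hypothetical room size:

```python
# General exhaust sizing from air changes per hour (ACH):
# flow (cfm) = ACH x room volume (ft^3) / 60 min.

def required_flow_cfm(ach: float, room_volume_ft3: float) -> float:
    return ach * room_volume_ft3 / 60.0

room = 20 * 15 * 10  # a 20 ft x 15 ft x 10 ft lab = 3000 ft^3
for ach in (4, 12):
    print(ach, required_flow_cfm(ach, room))  # 200.0 cfm and 600.0 cfm
```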
==== Control verification ====
Several control verification techniques can be used to assess room airflow patterns and verify the proper operation of LEV systems. It is considered important to confirm that an LEV system is operating as designed by regularly measuring exhaust airflows. A standard measurement, hood static pressure, provides information on airflow changes that affect hood performance. For hoods designed to prevent exposure to hazardous airborne contaminants, the American Conference of Governmental Industrial Hygienists recommends the installation of a fixed hood static pressure gauge.
Additionally, Pitot tubes, hot-wire anemometers, smoke generators, and dry ice tests can be used to qualitatively measure hood slot/face and duct air velocity, while tracer-gas leak testing is a quantitative method. Standardized testing and certification procedures such as ANSI Z9.5 and ASHRAE 110 can be used, as can qualitative indicators of proper installation and functionality such as inspection of gaskets and hoses.
=== Containment ===
Containment refers to the physical isolation of a process or a piece of equipment to prevent the release of the hazardous material into the workplace. It can be used in conjunction with ventilation measures to provide an enhanced level of protection for nanomaterial workers. Examples include placing equipment that may release toxic materials in a separate room. Standard dust control methods such as enclosures for conveyor systems or using a sealed system for bag filling are effective at reducing respirable dust concentrations.
Non-ventilation engineering controls can also include devices developed for the pharmaceutical industry, including isolation containment systems. One of the most common flexible isolation systems is glovebox containment, which can be used as an enclosure around small-scale powder processes, such as mixing and drying. Rigid glovebox isolation units also provide a method for isolating the worker from the process and are often used for medium-scale operations involving transfer of powders. Glovebags are similar to rigid gloveboxes, but they are flexible and disposable. They are used for small operations for containment or protection from contamination. Gloveboxes are sealed systems that provide a high degree of operator protection, but are more difficult to use due to limited mobility and size of operation. Transferring materials into and out of the enclosure also is an exposure risk. In addition, some gloveboxes are configured to use positive pressure, which can increase the risk of leaks.
Another non-ventilation control used in this industry is the continuous liner system, which allows the filling of product containers while enclosing the material in a polypropylene bag. This system is often used for off-loading materials when the powders are to be packed into drums.
=== Other ===
Other non-ventilation engineering controls in general cover a range of control measures, such as guards and barricades, material treatment, or additives. One example is placing walk-off sticky mats at room exits. Antistatic devices can be used when handling particulates including nanomaterials to reduce their electrostatic charge, making them less likely to disperse or adhere to clothing. Water spray application is also an effective method for reducing respirable dust concentrations.
== Physical hazards ==
=== Ergonomic hazards ===
Ergonomics is the study of how employees relate to their work environments. Ergonomists and industrial hygienists aim to prevent musculoskeletal disorders and soft tissue injuries by fitting the workers to their work space. Tools, lighting, tasks, controls, displays, and equipment as well as the employee's capabilities and limitations must all be considered to create an ergonomically appropriate workplace.
=== Falls ===
Fall protection is the use of controls designed to protect personnel from falling or in the event they do fall, to stop them without causing severe injury. Typically, fall protection is implemented when working at height, but may be relevant when working near any edge, such as near a pit or hole, or performing work on a steep surface. According to the US Department of Labor, falls account for 8% of all work-related trauma injuries leading to death.
Fall guarding is the use of guard rails or other barricades to prevent a person from falling. These barricades are placed near an edge where a fall hazard can occur, or to surround a weak surface (such as a skylight on a roof) that may break when stepped on.
Fall arrest is the form of fall protection which involves the safe stopping of a person already falling. Fall arrest is of two major types: general fall arrest, such as nets; and personal fall arrest, such as lifelines.
=== Noise ===
Occupational hearing loss is one of the most common work-related illnesses in the United States. Each year, about 22 million U.S. workers are exposed to hazardous noise levels at work. Hearing loss costs businesses $242 million annually in workers' compensation claims. There are both regulatory and recommended exposure limits for noise exposure in the U.S. The NIOSH recommended exposure limit (REL) for occupational noise exposure is 85 decibels, A-weighted, as an 8-hour time-weighted average (85 dBA as an 8-hr TWA) using a 3-dB exchange rate. The OSHA permissible exposure limit (PEL) is 90 dBA as an 8-hr TWA, using a 5-dB exchange rate. The exchange rate means that when the noise level increases by 3 dBA (per the NIOSH REL) or 5 dBA (per the OSHA PEL), the amount of time a person can be exposed while receiving the same dose is cut in half. Exposures at or above these levels are considered hazardous.
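As an illustration of the dose arithmetic implied by an exchange rate, here is a minimal sketch of an OSHA-style daily dose calculation (90 dBA criterion, 5-dB exchange rate); the segment durations and levels are hypothetical:

```python
# OSHA-style daily noise dose from several exposure segments:
# D = 100 * sum(C_i / T_i), where C_i is time spent at level L_i and
# T_i = 8 / 2**((L_i - 90) / 5) is the permitted duration at that level.
# A dose of 100% corresponds to the PEL.

def permitted_hours(level_dba: float) -> float:
    return 8.0 / 2 ** ((level_dba - 90.0) / 5.0)

def daily_dose_percent(segments: list[tuple[float, float]]) -> float:
    """segments: (hours_at_level, level_dba) pairs."""
    return 100.0 * sum(hours / permitted_hours(level) for hours, level in segments)

# 4 h at 90 dBA plus 2 h at 95 dBA -> 50% + 50% = 100% dose
print(daily_dose_percent([(4, 90), (2, 95)]))
```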
The Hierarchy of Controls approach can also be applied to reducing exposures to noise sources. The use of engineering control approaches to reduce noise at the source is preferred and can be accomplished by several means, including using quieter tools, using vibration isolation or dampers on machinery, and disrupting the noise path with barriers or sound insulation around the equipment.
=== Other ===
Lockout-tagout
Rupture disc
== Psychosocial hazards ==
Engineering controls for psychosocial hazards include workplace design to affect the amount, type, and level of personal control of work, as well as access controls and alarms. The risk of workplace violence can be reduced through physical design of the workplace or by cameras.
== See also ==
Engineering controls for nanomaterials – class of hazard controls for nanomaterials
== References ==
This article incorporates public domain material from websites or documents of the National Institute for Occupational Safety and Health.
== Further reading ==
Harold E. Roland; Brian Moriarty (10 October 1990). System Safety Engineering and Management. John Wiley & Sons. pp. 73–. ISBN 978-0-471-61816-4.
Jeanne Mager Stellman (1 January 1998). Encyclopaedia of Occupational Health and Safety: Chemical, industries and occupations. International Labour Organization. pp. 871–. ISBN 978-92-2-109816-4.
Jeanne Mager Stellman (1998). Encyclopaedia of Occupational Health and Safety: The body, health care, management and policy, tools and approaches. International Labour Organization. pp. 1026–. ISBN 978-92-2-109814-0.
Effective workplace safety and health management systems from the U.S. Occupational Safety and Health Administration
Related media at Wikimedia Commons:
Building Air Quality – A Guide for Building Owners and Facility Managers
Administrative controls are training, procedure, policy, or shift designs that lessen the threat of a hazard to an individual. Administrative controls typically change the behavior of people (e.g., factory workers) rather than removing the actual hazard or providing personal protective equipment (PPE).
Administrative controls are fourth in the larger hierarchy of hazard controls, which ranks the effectiveness and efficiency of hazard controls. Administrative controls are more effective than PPE because they involve some manner of prior planning and avoidance, whereas PPE serves only as a final barrier between the hazard and the worker. Administrative controls are second-lowest in the hierarchy because they require workers or employers to actively think or comply with regulations and do not offer permanent solutions to problems. Generally, administrative controls are cheaper to implement initially, but they may become more expensive over time as higher failure rates and the need for constant training or re-certification eclipse the initial investments of the three more desirable hazard controls in the hierarchy. The U.S. National Institute for Occupational Safety and Health recommends administrative controls when hazards cannot be removed or changed and engineering controls are not practical.
Some common examples of administrative controls include work practice controls such as prohibiting mouth pipetting and rotating worker shifts in coal mines to prevent hearing loss. Other examples include hours of service regulations for commercial vehicle operators, safety signage for hazards, and regular maintenance of equipment.
== References ==
The history of thermodynamics is a fundamental strand in the history of physics, the history of chemistry, and the history of science in general. Due to the relevance of thermodynamics in much of science and technology, its history is finely woven with the developments of classical mechanics, quantum mechanics, magnetism, and chemical kinetics, to more distant applied fields such as meteorology, information theory, and biology (physiology), and to technological developments such as the steam engine, internal combustion engine, cryogenics and electricity generation. The development of thermodynamics both drove and was driven by atomic theory. It also, albeit in a subtle manner, motivated new directions in probability and statistics; see, for example, the timeline of thermodynamics.
== Antiquity ==
The ancients viewed heat as related to fire. In 3000 BC, the ancient Egyptians connected heat with origin mythologies. Ancient Indian philosophy, including Vedic philosophy, held that five classical elements (or pancha mahā bhūta) are the basis of all cosmic creations. In the Western philosophical tradition, after much debate about the primal element among earlier pre-Socratic philosophers, Empedocles proposed a four-element theory, in which all substances derive from earth, water, air, and fire. The Empedoclean element of fire is perhaps the principal ancestor of later concepts such as phlogiston and caloric. Around 500 BC, the Greek philosopher Heraclitus became famous as the "flux and fire" philosopher for his proverbial utterance: "All things are flowing." Heraclitus argued that the three principal elements in nature were fire, earth, and water.
=== Vacuum-abhorrence ===
The 5th century BC Greek philosopher Parmenides, in his only known work, a poem conventionally titled On Nature, uses verbal reasoning to postulate that a void, essentially what is now known as a vacuum, could not occur in nature. This view was supported by the arguments of Aristotle, but was criticized by Leucippus and Hero of Alexandria. From antiquity to the Middle Ages various arguments were put forward to prove or disprove the existence of a vacuum, and several attempts were made to construct a vacuum, but all proved unsuccessful.
=== Atomism ===
Atomism is a central part of today's relationship between thermodynamics and statistical mechanics. Ancient thinkers such as Leucippus and Democritus, and later the Epicureans, by advancing atomism, laid the foundations for the later atomic theory. Until experimental proof of atoms was later provided in the 20th century, the atomic theory was driven largely by philosophical considerations and scientific intuition.
== 17th century ==
=== Early thermometers ===
The European scientists Cornelius Drebbel, Robert Fludd, Galileo Galilei and Santorio Santorio in the 16th and 17th centuries were able to gauge the relative "coldness" or "hotness" of air, using a rudimentary air thermometer (or thermoscope). This may have been influenced by an earlier device which could expand and contract the air constructed by Philo of Byzantium and Hero of Alexandria.
=== "Heat is motion" (Francis Bacon) ===
The idea that heat is a form of motion is perhaps an ancient one and is certainly discussed by the English philosopher and scientist Francis Bacon in 1620 in his Novum Organum. Bacon surmised that "Heat itself, its essence and quiddity is motion and nothing else", a motion "not ... of the whole, but of the small particles of the body."
=== René Descartes ===
==== Precursor to work ====
In 1637, in a letter to the Dutch scholar Constantijn Huygens, the French philosopher René Descartes wrote:
Lifting 100 lb one foot twice over is the same as lifting 200 lb one foot, or 100 lb two feet.
In 1686, the German philosopher Gottfried Leibniz wrote essentially the same thing: The same force ["work" in modern terms] is necessary to raise body A of 1 pound (libra) to a height of 4 yards (ulnae), as is necessary to raise body B of 4 pounds to a height of 1 yard.
==== Quantity of motion ====
In Principles of Philosophy (Principia Philosophiae) from 1644, Descartes defined "quantity of motion" (Latin: quantitas motus) as the product of size and speed, and claimed that the total quantity of motion in the universe is conserved.
"If x is twice the size of y, and is moving half as fast, then there's the same amount of motion in each."
"[God] created matter, along with its motion ... merely by letting things run their course, he preserves the same amount of motion ... as he put there in the beginning."
He claimed that merely by letting things run their course, God preserves the same amount of motion as He created, and that thus the total quantity of motion in the universe is conserved.
=== Boyle's law ===
Irish physicist and chemist Robert Boyle in 1656, in coordination with English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed the pressure–volume correlation PV = constant. At that time, air was assumed to be a system of motionless particles, and not interpreted as a system of moving molecules; the concept of thermal motion came two centuries later. Therefore, Boyle's publication in 1660 speaks of a mechanical concept, the air spring. Later, after the invention of the thermometer, the property of temperature could be quantified. This tool gave Gay-Lussac the opportunity to derive his law, which led shortly afterwards to the ideal gas law.
==== Gas laws in brief ====
Boyle's law (1662)
Charles's law was first published by Joseph Louis Gay-Lussac in 1802, but he referenced unpublished work by Jacques Charles from around 1787. The relationship had been anticipated by the work of Guillaume Amontons in 1702.
Gay-Lussac's law (1802)
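These relationships are all special cases of the later ideal gas law, PV = nRT; a toy numerical illustration (the quantities below are arbitrary):

```python
# The gas laws above as special cases of the ideal gas law PV = nRT:
# Boyle fixes n and T, Charles fixes n and P, Gay-Lussac fixes n and V.

R = 8.314  # J/(mol K)

def pressure(n_mol: float, T_kelvin: float, V_m3: float) -> float:
    return n_mol * R * T_kelvin / V_m3

n, T, V = 1.0, 300.0, 0.024  # roughly 1 mol of gas near room conditions
P = pressure(n, T, V)
print(P * V, n * R * T)       # Boyle: the product PV is fixed at constant n, T
print(pressure(n, 2 * T, V))  # Gay-Lussac: doubling T doubles P at fixed V
```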
=== Steam digester ===
Denis Papin, an associate of Boyle's, built in 1679 a bone digester, which is a closed vessel with a tightly fitting lid that confines steam until a high pressure is generated. Later designs implemented a steam release valve to keep the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and cylinder engine. He did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first "fire engine" for raising water. In 1712, Thomas Newcomen greatly improved upon it by incorporating a piston, which made the engine suitable for mechanical work in addition to pumping beyond the roughly 30-foot limit of Savery's design; the Newcomen engine is thus often considered the first true steam engine.
=== Heat transfer (Halley and Newton) ===
The phenomenon of heat conduction is immediately grasped in everyday life. That warm air rises, and the importance of this phenomenon to meteorology, was first realised by Edmond Halley in 1686.
In 1701, Sir Isaac Newton published his law of cooling.
== 18th century ==
=== Phlogiston theory ===
The theory of phlogiston arose in the 17th century, late in the period of alchemy. Its replacement by caloric theory in the 18th century is one of the historical markers of the transition from alchemy to chemistry. Phlogiston was a hypothetical substance that was presumed to be liberated from combustible substances during burning, and from metals during the process of rusting.
=== Limit to the "degree of cold" ===
In 1702 Guillaume Amontons introduced the concept of absolute zero based on observations of gases.
=== Kinetic theory (18th century) ===
An early scientific reflection on the microscopic and kinetic nature of matter and heat is found in a work by Mikhail Lomonosov, in which he wrote: "Movement should not be denied based on the fact it is not seen. ... leaves of trees move when rustled by a wind, despite it being unobservable from large distances. Just as in this case motion ... remains hidden in warm bodies due to the extremely small sizes of the moving particles."
During the same years, Daniel Bernoulli published his book Hydrodynamics (1738), in which he derived an equation for the pressure of a gas considering the collisions of its atoms with the walls of a container. He proved that this pressure is two thirds the average kinetic energy of the gas in a unit volume. Bernoulli's ideas, however, made little impact on the dominant caloric culture. Bernoulli made a connection with Gottfried Leibniz's vis viva principle, an early formulation of the principle of conservation of energy, and the two theories became intimately entwined throughout their history.
=== Thermochemistry and steam engines ===
==== Heat capacity ====
Bodies were thought capable of holding a certain amount of the heat fluid later called caloric (see below), leading to the term heat capacity, named and first investigated by Scottish chemist Joseph Black in the 1750s.
In the mid- to late 19th century, heat became understood as a manifestation of a system's internal energy. Today heat is seen as the transfer of disordered thermal energy. Nevertheless, at least in English, the term heat capacity survives. In some other languages, the term thermal capacity is preferred, and it is also sometimes used in English.
==== Steam engines ====
Prior to 1698 and the invention of the Savery engine, horses were used to power pulleys, attached to buckets, which lifted water out of flooded salt mines in England. In the years to follow, more variations of steam engines were built, such as the Newcomen engine, and later the Watt engine. In time, these early engines would eventually be utilized in place of horses. Thus, each engine began to be associated with a certain amount of "horse power" depending upon how many horses it had replaced. The main problem with these first engines was that they were slow and clumsy, converting less than 2% of the input fuel into useful work. In other words, large quantities of coal (or wood) had to be burned to yield only a small fraction of work output. Hence the need for a new science of engine dynamics was born.
==== Caloric theory ====
In the mid- to late 18th century, heat was thought to be a measurement of an invisible fluid, known as the caloric. Like phlogiston, caloric was presumed to be the "substance" of heat that would flow from a hotter body to a cooler body, thus warming it. The utility and explanatory power of kinetic theory, however, soon started to displace the caloric theory. Nevertheless, William Thomson, for example, was still trying to explain James Joule's observations within a caloric framework as late as 1850. The caloric theory was largely obsolete by the end of the 19th century.
==== Calorimetry ====
Joseph Black and Antoine Lavoisier made important contributions in the precise measurement of heat changes using the calorimeter, a subject which became known as thermochemistry. The development of the steam engine focused attention on calorimetry and the amount of heat produced from different types of coal. The first quantitative research on the heat changes during chemical reactions was initiated by Lavoisier using an ice calorimeter following research by Joseph Black on the latent heat of water.
=== Thermal conduction and thermal radiation ===
Carl Wilhelm Scheele distinguished heat transfer by thermal radiation (radiant heat) from that by convection and conduction in 1777.
In the 17th century, it came to be believed that all materials had an identical conductivity and that differences in sensation arose from their different heat capacities. Suggestions that this might not be the case came from the new science of electricity in which it was easily apparent that some materials were good electrical conductors while others were effective insulators. Jan Ingen-Housz in 1785-9 made some of the earliest measurements, as did Benjamin Thompson during the same period.
In 1791, Pierre Prévost showed that all bodies radiate heat, no matter how hot or cold they are. In 1804, Sir John Leslie observed that a matte black surface radiates heat more effectively than a polished surface, suggesting the importance of black-body radiation.
=== Heat and friction (Rumford) ===
In the 19th century, scientists abandoned the idea of a physical caloric. The first substantial experimental challenge to the caloric theory arose in a 1798 work by Benjamin Thompson (Count Rumford), in which he showed that boring cast iron cannons produced great amounts of heat, which he ascribed to friction. His work was among the first to undermine the caloric theory.
As a result of his experiments in 1798, Thompson suggested that heat was a form of motion, though no attempt was made to reconcile theoretical and experimental approaches, and it is unlikely that he was thinking of the vis viva principle.
== Early 19th century ==
=== Modern thermodynamics (Carnot) ===
Although early steam engines were crude and inefficient, they attracted the attention of the leading scientists of the time. One such scientist was Sadi Carnot, the "father of thermodynamics", who in 1824 published Reflections on the Motive Power of Fire, a discourse on heat, power, and engine efficiency. Most cite this book as the starting point for thermodynamics as a modern science. (The name "thermodynamics", however, did not arrive until 1854, when the British mathematician and physicist William Thomson (Lord Kelvin) coined the term thermo-dynamics in his paper On the Dynamical Theory of Heat.)
Carnot defined "motive power" to be the expression of the useful effect that a motor is capable of producing. Herein, Carnot introduced us to the first modern day definition of "work": weight lifted through a height. The desire to understand, via formulation, this useful effect in relation to "work" is at the core of all modern day thermodynamics.
Even though he was working with the caloric theory, Carnot in 1824 suggested that some of the caloric available for generating useful work is lost in any real process.
=== Reflection, refraction, and polarisation of radiant heat ===
Though it had come to be suspected from Scheele's work, in 1831 Macedonio Melloni demonstrated that radiant heat could be reflected, refracted and polarised in the same way as light.
=== Kinetic theory (early 19th century) ===
John Herapath independently formulated a kinetic theory in 1820, but mistakenly associated temperature with momentum rather than vis viva or kinetic energy. His work ultimately failed peer review, even from someone as well-disposed to the kinetic principle as Humphry Davy, and was neglected.
John James Waterston in 1843 provided a largely accurate account, again independently, but his work received the same reception, failing peer review.
Further progress in kinetic theory started only in the middle of the 19th century, with the works of Rudolf Clausius, James Clerk Maxwell, and Ludwig Boltzmann.
=== Mechanical equivalent of heat ===
Quantitative studies by Joule from 1843 onwards provided soundly reproducible phenomena, and helped to place the subject of thermodynamics on a solid footing. In 1843, Joule experimentally found the mechanical equivalent of heat. In 1845, Joule reported his best-known experiment, involving the use of a falling weight to spin a paddle-wheel in a barrel of water, which allowed him to estimate a mechanical equivalent of heat of 819 ft·lbf/Btu (4.41 J/cal). This led to the theory of conservation of energy and explained why heat can do work.
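Joule's figure can be checked against modern conversion factors; a short sketch (the conversion constants are modern values, not Joule's):

```python
# Checking Joule's 1845 figure: convert 819 ft·lbf/Btu to J/cal.
# The modern value is about 4.186 J/cal (778.16 ft·lbf/Btu), so
# Joule's early estimate was within roughly 5%.

FT_LBF_TO_J = 1.35582   # joules per foot-pound-force
BTU_TO_CAL = 251.996    # calories per Btu

joule_estimate = 819 * FT_LBF_TO_J / BTU_TO_CAL
print(round(joule_estimate, 2))  # 4.41 J/cal, as quoted in the text
```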
=== Absolute zero and the Kelvin scale ===
The idea of absolute zero was generalised in 1848 by Lord Kelvin.
== Late 19th century ==
=== Entropy and the second law of thermodynamics ===
==== Lord Kelvin ====
In March 1851, while grappling to come to terms with the work of Joule, Lord Kelvin started to speculate that there was an inevitable loss of useful heat in all processes. The idea was framed even more dramatically by Hermann von Helmholtz in 1854, giving birth to the spectre of the heat death of the universe.
==== William Rankine ====
In 1854, William John Macquorn Rankine started to make use of what he called thermodynamic function in calculations. This has subsequently been shown to be identical to the concept of entropy formulated by the famed mathematical physicist Rudolf Clausius.
==== Rudolf Clausius ====
In 1865, Clausius coined the term "entropy" (das Wärmegewicht, symbolized S) to denote heat lost or turned into waste. ("Wärmegewicht" translates literally as "heat-weight"; the corresponding English term stems from the Greek τρέπω, "I turn".) Clausius used the concept to develop his classic statement of the second law of thermodynamics the same year.
=== Statistical thermodynamics ===
==== Temperature is average kinetic energy of molecules ====
In his 1857 work On the nature of the motion called heat, Clausius for the first time clearly states that heat is the average kinetic energy of molecules.
==== Maxwell–Boltzmann distribution ====
Clausius' above statement interested the Scottish mathematician and physicist James Clerk Maxwell, who in 1859 derived the molecular velocity distribution later named after him. The Austrian physicist Ludwig Boltzmann subsequently generalized this distribution for the case of gases in external fields. In association with Clausius, in 1871, Maxwell formulated a new branch of thermodynamics called statistical thermodynamics, which functions to analyze large numbers of particles at equilibrium, i.e., systems where no changes are occurring, such that only their average properties such as temperature T, pressure P, and volume V become important.
==== Degrees of freedom ====
Boltzmann is perhaps the most significant contributor to kinetic theory, as he introduced many of the fundamental concepts in the theory. Besides the Maxwell–Boltzmann distribution mentioned above, he also associated the kinetic energy of particles with their degrees of freedom. The Boltzmann equation for the distribution function of a gas in non-equilibrium states is still the most effective equation for studying transport phenomena in gases and metals. By introducing the concept of thermodynamic probability as the number of microstates corresponding to the current macrostate, he showed that its logarithm is proportional to entropy.
==== Definition of entropy ====
In 1875, the Austrian physicist Ludwig Boltzmann formulated a precise connection between entropy $S$ and molecular motion:

$$S = k \log W,$$

being defined in terms of the number of possible states $W$ that such motion could occupy, where $k$ is the Boltzmann constant.
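A toy illustration of Boltzmann's formula: for N two-state particles with exactly n in one state, the macrostate has W = C(N, n) microstates (the system and numbers here are illustrative):

```python
# Boltzmann's S = k log W for a toy system: N distinguishable coins
# (two-state particles) with exactly n heads have W = C(N, n) microstates.

import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def entropy(N: int, n: int) -> float:
    W = math.comb(N, n)   # number of microstates of this macrostate
    return k_B * math.log(W)

print(entropy(100, 50))  # the evenly split macrostate maximizes W, hence S
print(entropy(100, 0))   # a single microstate: W = 1, so S = 0
```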
==== Gibbs free energy ====
In 1876, the American mathematical physicist Josiah Willard Gibbs published an obscure 300-page paper titled On the Equilibrium of Heterogeneous Substances, wherein he formulated one grand equality, the Gibbs free energy equation, which suggested a measure of the amount of "useful work" attainable in reacting systems.
=== Enthalpy ===
Gibbs also originated the concept we now know as enthalpy H, calling it "a heat function for constant pressure". The modern word enthalpy would be coined many years later by Heike Kamerlingh Onnes, who based it on the Greek word enthalpein meaning to warm.
=== Stefan–Boltzmann law ===
James Clerk Maxwell's 1862 insight that both light and radiant heat were forms of electromagnetic wave led to the start of the quantitative analysis of thermal radiation. In 1879, Jožef Stefan observed that the total radiant flux from a blackbody is proportional to the fourth power of its temperature and stated the Stefan–Boltzmann law. The law was derived theoretically by Ludwig Boltzmann in 1884.
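A quick numerical illustration of the law (the temperatures chosen are illustrative):

```python
# Stefan-Boltzmann law: total radiant flux of a blackbody j = sigma * T**4.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_flux(T_kelvin: float) -> float:
    return SIGMA * T_kelvin ** 4

print(blackbody_flux(5772))  # ~6.3e7 W/m^2 at the Sun's effective temperature
print(blackbody_flux(300))   # ~459 W/m^2 for a room-temperature surface
```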
== 20th century ==
=== Quantum thermodynamics ===
In 1900 Max Planck found an accurate formula for the spectrum of black-body radiation. Fitting new data required the introduction of a new constant, known as the Planck constant, the fundamental constant of modern physics. Looking at the radiation as coming from a cavity oscillator in thermal equilibrium, the formula suggested that energy in a cavity occurs only in multiples of frequency times the constant. That is, it is quantized. This avoided a divergence to which the theory would lead without the quantization.
=== Third law of thermodynamics ===
In 1906, Walther Nernst stated the third law of thermodynamics.
=== Erwin Schrödinger ===
Building on the foundations above, Lars Onsager, Erwin Schrödinger, Ilya Prigogine and others brought these engine "concepts" into the thoroughfare of almost every modern-day branch of science.
== Branches of thermodynamics ==
The following list is a rough disciplinary outline of the major branches of thermodynamics and their time of inception:
Thermochemistry – 1780s
Classical thermodynamics – 1824
Chemical thermodynamics – 1876
Statistical mechanics – c. 1880s
Equilibrium thermodynamics
Engineering thermodynamics
Chemical engineering thermodynamics – c. 1940s
Non-equilibrium thermodynamics – 1941
Small systems thermodynamics – 1960s
Biological thermodynamics – 1957
Ecosystem thermodynamics – 1959
Relativistic thermodynamics – 1965
Rational thermodynamics – 1960s
Quantum thermodynamics – 1968
Black hole thermodynamics – c. 1970s
Theory of critical phenomena and use of renormalization group theory in statistical physics – 1966-1974
Geological thermodynamics – c. 1970s
Biological evolution thermodynamics – 1978
Geochemical thermodynamics – c. 1980s
Atmospheric thermodynamics – c. 1980s
Natural systems thermodynamics – 1990s
Supramolecular thermodynamics – 1990s
Earthquake thermodynamics – 2000
Drug-receptor thermodynamics – 2001
Pharmaceutical systems thermodynamics – 2002
Concepts of thermodynamics have also been applied in other fields, for example:
Thermoeconomics – c. 1970s
== See also ==
History of chemistry
Timeline of heat engine technology
Timeline of low-temperature technology
Timeline of thermodynamics
== References ==
== Further reading ==
Cardwell, D.S.L. (1971). From Watt to Clausius: The Rise of Thermodynamics in the Early Industrial Age. London: Heinemann. ISBN 978-0-435-54150-7.
Leff, H.S.; Rex, A.F., eds. (1990). Maxwell's Demon: Entropy, Information and Computing. Bristol: Adam Hilger. ISBN 978-0-7503-0057-5.
== External links ==
History of Thermodynamics – University of Waterloo
Thermodynamic History Notes – WolframScience.com
Brief History of Thermodynamics – Berkeley [PDF]
Carnot's theorem, also called Carnot's rule or Carnot's law, is a principle of thermodynamics developed by Nicolas Léonard Sadi Carnot in 1824 that specifies limits on the maximum efficiency that any heat engine can obtain.
Carnot's theorem states that no heat engine operating between the same two thermal (heat) reservoirs can have an efficiency greater than that of a reversible heat engine operating between the same reservoirs. A corollary of this theorem is that every reversible heat engine operating between a pair of heat reservoirs is equally efficient, regardless of the working substance employed or the operation details. Since a Carnot heat engine is a reversible engine, the common efficiency of all reversible heat engines is the efficiency of the Carnot heat engine, which depends solely on the temperatures of its hot and cold reservoirs.
The maximum efficiency (i.e., the Carnot heat engine efficiency) of a heat engine operating between hot and cold reservoirs, denoted as H and C respectively, is the ratio of the temperature difference between the reservoirs to the hot reservoir temperature, expressed in the equation
$$\eta_{\text{max}} = \frac{T_{\mathrm{H}} - T_{\mathrm{C}}}{T_{\mathrm{H}}},$$
where $T_{\mathrm{H}}$ and $T_{\mathrm{C}}$ are the absolute temperatures of the hot and cold reservoirs, respectively, and the efficiency $\eta$ is the ratio of the work done by the engine (to the surroundings) to the heat drawn out of the hot reservoir (to the engine).
$\eta_{\text{max}}$ is greater than zero if and only if there is a temperature difference between the two thermal reservoirs. Since $\eta_{\text{max}}$ is the upper limit of all reversible and irreversible heat engine efficiencies, it is concluded that work from a heat engine can be produced if and only if there is a temperature difference between the two thermal reservoirs connected to the engine.
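A minimal numerical illustration of the formula (the reservoir temperatures are arbitrary):

```python
# Carnot (maximum) efficiency for a few reservoir pairs:
# eta_max = 1 - T_C / T_H, with temperatures in kelvin.

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    return 1.0 - t_cold_k / t_hot_k

print(carnot_efficiency(500, 300))  # 0.4: at most 40% of the heat becomes work
print(carnot_efficiency(300, 300))  # 0.0: no temperature difference, no work
```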
Carnot's theorem is a consequence of the second law of thermodynamics. Historically, it was based on contemporary caloric theory, and preceded the establishment of the second law.
== Proof ==
The proof of the Carnot theorem is a proof by contradiction or reductio ad absurdum (a method to prove a statement by assuming its falsity and logically deriving a false or contradictory statement from this assumption), based on a situation like the right figure where two heat engines with different efficiencies are operating between two thermal reservoirs at different temperatures. The relatively hotter reservoir is called the hot reservoir and the other is called the cold reservoir. A (not necessarily reversible) heat engine $M$ with a greater efficiency $\eta_M$ is driving a reversible heat engine $L$ with a lesser efficiency $\eta_L$, causing the latter to act as a heat pump. The requirement for the engine $L$ to be reversible is necessary to explain the work $W$ and heat $Q$ associated with it by using its known efficiency. However, since $\eta_M > \eta_L$, the net heat flow would be backwards, i.e., into the hot reservoir:

$$Q_{\text{h}}^{\text{out}} = Q < \frac{\eta_M}{\eta_L}\, Q = Q_{\text{h}}^{\text{in}},$$

where $Q$ represents heat, the superscript "in" denotes input to an object, "out" denotes output from an object, and the subscript "h" denotes the hot thermal reservoir. Each of these quantities is positive when the heat flows in the indicated direction, out of or into the hot reservoir respectively. This expression can be easily derived by using the definition of the efficiency of a heat engine, $\eta = W / Q_{\text{h}}^{\text{out}}$, where work and heat in this expression are net quantities per engine cycle, together with the conservation of energy for each engine, as shown below. The sign convention for work $W$ is that work done by an engine on its surroundings is positive.
The above expression means that heat into the hot reservoir from the engine pair (which can be considered as a single engine) is greater than heat into the engine pair from the hot reservoir; i.e., the hot reservoir continuously gains energy. A reversible heat engine with a low efficiency delivers more heat (energy) to the hot reservoir for a given amount of work (energy) when it is driven as a heat pump. Taken together, this means heat would be transferred from a cold place to a hot one without external work, and such a heat transfer is impossible by the second law of thermodynamics.
It may seem odd that a hypothetical reversible heat pump with a low efficiency is used to violate the second law of thermodynamics, but the figure of merit for refrigerator units is not the efficiency, $W / Q_{\text{h}}^{\text{out}}$, but the coefficient of performance (COP), which is $Q_{\text{c}}^{\text{out}} / W$, where this $W$ has the sign opposite to the above (+ for work done on the engine).
Let us find the values of work $W$ and heat $Q$ depicted in the right figure, in which a reversible heat engine $L$ with a lower efficiency $\eta_L$ is driven as a heat pump by a heat engine $M$ with a higher efficiency $\eta_M$.
The definition of the efficiency is $\eta = W / Q_{\text{h}}^{\text{out}}$ for each engine, and the following expressions can be made:

$$\eta_M = \frac{W_M}{Q_{\text{h}}^{\text{out},M}} = \frac{\eta_M Q}{Q} = \eta_M,$$

$$\eta_L = \frac{W_L}{Q_{\text{h}}^{\text{out},L}} = \frac{-\eta_M Q}{-\frac{\eta_M}{\eta_L} Q} = \eta_L.$$
The denominator of the second expression, $Q_{\text{h}}^{\text{out},L} = -\frac{\eta_M}{\eta_L} Q$, is chosen to make the expression consistent, and it helps to fill in the values of work and heat for the engine $L$.
For each engine, the absolute value of the energy entering the engine, $E_{\text{abs}}^{\text{in}}$, must be equal to the absolute value of the energy leaving the engine, $E_{\text{abs}}^{\text{out}}$. Otherwise, energy would continuously accumulate in an engine, or the conservation of energy would be violated by taking more energy from an engine than the energy input to it:

$$E_{\text{M,abs}}^{\text{in}} = Q = (1 - \eta_M) Q + \eta_M Q = E_{\text{M,abs}}^{\text{out}},$$

$$E_{\text{L,abs}}^{\text{in}} = \eta_M Q + \eta_M Q \left( \frac{1}{\eta_L} - 1 \right) = \frac{\eta_M}{\eta_L} Q = E_{\text{L,abs}}^{\text{out}}.$$
In the second expression, $\left| Q_{\text{h}}^{\text{out},L} \right| = \left| -\frac{\eta_M}{\eta_L} Q \right|$ is used to find the term $\eta_M Q \left( \frac{1}{\eta_L} - 1 \right)$ describing the amount of heat taken from the cold reservoir, completing the absolute value expressions of work and heat in the right figure.
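The bookkeeping above can be checked numerically; a sketch assuming $\eta_M = 0.5$, $\eta_L = 0.4$, and $Q = 100$ in arbitrary energy units (all values hypothetical):

```python
# Numerical check of the proof's energy bookkeeping.

eta_M, eta_L, Q = 0.5, 0.4, 100.0

W = eta_M * Q                      # work from engine M, all of it driving pump L
m_to_cold = (1 - eta_M) * Q        # heat engine M rejects to the cold reservoir
l_to_hot = W / eta_L               # heat pump L delivers to the hot reservoir
l_from_cold = W * (1 / eta_L - 1)  # heat pump L draws from the cold reservoir

net_into_hot = l_to_hot - Q                 # +25: the hot reservoir gains energy
net_out_of_cold = l_from_cold - m_to_cold   # +25: the cold reservoir loses energy

print(net_into_hot, net_out_of_cold)  # heat moved cold -> hot with zero net work
```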
Having established that the values in the right figure are correct, Carnot's theorem may be proven for irreversible and reversible heat engines, as shown below.
=== Reversible engines ===
To see that every reversible engine operating between reservoirs at temperatures $T_1$ and $T_2$ must have the same efficiency, assume that two reversible heat engines have different efficiencies, and let the relatively more efficient engine $M$ drive the relatively less efficient engine $L$ as a heat pump. As the right figure shows, this will cause heat to flow from the cold to the hot reservoir without external work, which violates the second law of thermodynamics. Therefore, both (reversible) heat engines have the same efficiency, and we conclude that:
All reversible heat engines that operate between the same two thermal (heat) reservoirs have the same efficiency.
The reversible heat engine efficiency can be determined by analyzing a Carnot heat engine as one example of a reversible heat engine.
This conclusion is an important result because it helps establish the Clausius theorem, which implies that the change in entropy $S$ is unique for all reversible processes:

$$\Delta S = \int_a^b \frac{dQ_{\text{rev}}}{T},$$

as the entropy change made during a transition from a thermodynamic equilibrium state $a$ to a state $b$ in a V-T (volume–temperature) space is the same over all reversible process paths between these two states. If this integral were not path independent, then entropy would not be a state variable.
=== Irreversible engines ===
Consider two engines, $M$ and $L$, which are irreversible and reversible respectively. We construct the machine shown in the right figure, with $M$ driving $L$ as a heat pump. Then if $M$ is more efficient than $L$, the machine will violate the second law of thermodynamics. Since a Carnot heat engine is a reversible heat engine, and all reversible heat engines operate with the same efficiency between the same reservoirs, we have the first part of Carnot's theorem:
No irreversible heat engine is more efficient than a Carnot heat engine operating between the same two thermal reservoirs.
== Definition of thermodynamic temperature ==
The efficiency of a heat engine is the work done by the engine divided by the heat introduced to the engine per engine cycle:
{\displaystyle \eta ={\frac {w_{\text{cy}}}{q_{H}}}={\frac {q_{H}-q_{C}}{q_{H}}}=1-{\frac {q_{C}}{q_{H}}},}
where {\displaystyle w_{\text{cy}}} is the work done by the engine, {\displaystyle q_{C}} is the heat to the cold reservoir from the engine, and {\displaystyle q_{H}} is the heat to the engine from the hot reservoir, per cycle. Thus, the efficiency depends only on {\displaystyle {\frac {q_{C}}{q_{H}}}}.
Because all reversible heat engines operating between temperatures T_1 and T_2 must have the same efficiency, the efficiency of a reversible heat engine is a function of only the two reservoir temperatures:
{\displaystyle {\frac {q_{C}}{q_{H}}}=f(T_{1},T_{2}).}
In addition, a reversible heat engine operating between temperatures T_1 and T_3 must have the same efficiency as one consisting of two cycles, one between T_1 and another (intermediate) temperature T_2, and the second between T_2 and T_3, where T_1 < T_2 < T_3. This can only be the case if
{\displaystyle f(T_{1},T_{3})=f(T_{1},T_{2})\,f(T_{2},T_{3}).}
Specializing to the case that T_1 is a fixed reference temperature: the temperature of the triple point of water, assigned the numerical value 273.16. (Of course any reference temperature and any positive numerical value could be used; the choice here corresponds to the Kelvin scale.) Then for any T_2 and T_3,
{\displaystyle f(T_{2},T_{3})={\frac {f(T_{1},T_{3})}{f(T_{1},T_{2})}}={\frac {273.16\cdot f(T_{1},T_{3})}{273.16\cdot f(T_{1},T_{2})}}.}
Therefore, if thermodynamic temperature is defined by
{\displaystyle T'=273.16\cdot f(T_{1},T),}
then the function f, viewed as a function of thermodynamic temperature, is
{\displaystyle f(T_{2},T_{3})={\frac {T_{3}'}{T_{2}'}}.}
It follows immediately that
{\displaystyle {\frac {q_{C}}{q_{H}}}=f(T_{H},T_{C})={\frac {T_{C}'}{T_{H}'}}.}
Substituting this equation back into the above equation {\displaystyle {\frac {q_{C}}{q_{H}}}=f(T_{H},T_{C})} gives a relationship for the efficiency in terms of thermodynamic temperatures:
{\displaystyle \eta =1-{\frac {q_{C}}{q_{H}}}=1-{\frac {T_{C}'}{T_{H}'}}.}
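A minimal helper for this efficiency bound; it assumes absolute (thermodynamic) temperatures, and the example reservoir values are illustrative.

```python
def carnot_efficiency(T_hot: float, T_cold: float) -> float:
    """Maximum (reversible) efficiency between two reservoirs, in kelvins."""
    if not (T_hot > T_cold > 0):
        raise ValueError("require T_hot > T_cold > 0 (absolute temperatures)")
    return 1.0 - T_cold / T_hot

# A cycle between roughly 800 K and 300 K can at best reach:
print(carnot_efficiency(800.0, 300.0))   # 0.625
```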
== Applicability to fuel cells ==
Since fuel cells can generate useful power when all components of the system are at the same temperature (T = T_H = T_C), they are clearly not limited by Carnot's theorem, which states that no power can be generated when T_H = T_C. This is because Carnot's theorem applies to engines converting thermal energy to work, whereas fuel cells instead convert chemical energy to work. Nevertheless, the second law of thermodynamics still provides restrictions on fuel cell energy conversion.
A Carnot battery is a type of energy storage system that stores electricity in thermal energy storage and converts the stored heat back to electricity through thermodynamic cycles.
== See also ==
Chambadal–Novikov efficiency
Heating and cooling efficiency bounds
== References == | Wikipedia/Carnot_theorem_(thermodynamics) |
In the thermodynamics of equilibrium, a state function, function of state, or point function for a thermodynamic system is a mathematical function relating several state variables or state quantities (which describe equilibrium states of a system) that depend only on the current equilibrium thermodynamic state of the system (e.g. gas, liquid, solid, crystal, or emulsion), not the path which the system has taken to reach that state. A state function describes equilibrium states of a system, and thus also describes the type of system. A state variable is typically a state function, so determining the values of the other state variables at an equilibrium state also determines the value of the state variable as the state function at that state. The ideal gas law is a good example. In this law, one state variable (e.g., pressure, volume, temperature, or the amount of substance in a gaseous equilibrium system) is a function of other state variables and so is regarded as a state function. A state function could also describe the number of a certain type of atoms or molecules in a gaseous, liquid, or solid form in a heterogeneous or homogeneous mixture, or the amount of energy required to create such a system or change the system into a different equilibrium state.
Internal energy, enthalpy, and entropy are examples of state quantities or state functions because they quantitatively describe an equilibrium state of a thermodynamic system, regardless of how the system has arrived in that state. They are expressed by exact differentials. In contrast, mechanical work and heat are process quantities or path functions because their values depend on the specific "transition" (or "path") between two equilibrium states that a system has taken to reach the final equilibrium state, and they are expressed by inexact differentials. Exchanged heat (in certain discrete amounts) can be associated with changes of a state function such as enthalpy. The system's heat exchange is then described by a state function, and thus enthalpy changes point to an amount of heat. This can also apply to entropy when heat is compared to temperature. The description breaks down for quantities exhibiting hysteresis.
== History ==
It is likely that the term "functions of state" was used in a loose sense during the 1850s and 1860s by those such as Rudolf Clausius, William Rankine, Peter Tait, and William Thomson. By the 1870s, the term had acquired a use of its own. In his 1873 paper "Graphical Methods in the Thermodynamics of Fluids", Willard Gibbs states: "The quantities v, p, t, ε, and η are determined when the state of the body is given, and it may be permitted to call them functions of the state of the body."
== Overview ==
A thermodynamic system is described by a number of thermodynamic parameters (e.g. temperature, volume, or pressure) which are not necessarily independent. The number of parameters needed to describe the system is the dimension of the state space of the system (D). For example, a monatomic gas with a fixed number of particles is a simple case of a two-dimensional system (D = 2). Any two-dimensional system is uniquely specified by two parameters. Choosing a different pair of parameters, such as pressure and volume instead of pressure and temperature, creates a different coordinate system in two-dimensional thermodynamic state space but is otherwise equivalent. Pressure and temperature can be used to find volume, pressure and volume can be used to find temperature, and temperature and volume can be used to find pressure. An analogous statement holds for higher-dimensional spaces, as described by the state postulate.
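The ideal gas law makes this concrete: with the particle number fixed, any two of (P, V, T) determine the third. A hedged sketch (the numeric state values are illustrative):

```python
# Two-dimensional state space of a fixed amount of ideal gas: PV = nRT.
R = 8.314   # gas constant, J/(mol K)
n = 1.0     # moles (fixed particle number)

def pressure(V, T):    return n * R * T / V
def volume(P, T):      return n * R * T / P
def temperature(P, V): return P * V / (n * R)

T = temperature(P=101325.0, V=0.0224)   # two parameters fix the state (~273 K)
print(T, pressure(V=0.0224, T=T))       # recovering P closes the loop
```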
Generally, a state space is defined by an equation of the form {\displaystyle F(P,V,T,\ldots )=0}
, where P denotes pressure, T denotes temperature, V denotes volume, and the ellipsis denotes other possible state variables like particle number N and entropy S. If the state space is two-dimensional as in the above example, it can be visualized as a three-dimensional graph (a surface in three-dimensional space). However, the labels of the axes are not unique (since there are more than three state variables in this case), and only two independent variables are necessary to define the state.
When a system changes state continuously, it traces out a "path" in the state space. The path can be specified by noting the values of the state parameters as the system traces out the path, whether as a function of time or a function of some other external variable. For example, having the pressure P(t) and volume V(t) as functions of time from time t0 to t1 will specify a path in two-dimensional state space. Any function of time can then be integrated over the path. For example, to calculate the work done by the system from time t0 to time t1, calculate
{\textstyle W(t_{0},t_{1})=\int P\,dV=\int _{t_{0}}^{t_{1}}P(t){\frac {dV(t)}{dt}}\,dt}. In order to calculate the work W in the above integral, the functions P(t) and V(t) must be known at each time t over the entire path. In contrast, a state function only depends upon the system parameters' values at the endpoints of the path. For example, the following equation can be used to calculate the work plus the integral of V dP over the path:
{\displaystyle {\begin{aligned}\Phi (t_{0},t_{1})&=\int _{t_{0}}^{t_{1}}P{\frac {dV}{dt}}\,dt+\int _{t_{0}}^{t_{1}}V{\frac {dP}{dt}}\,dt\\&=\int _{t_{0}}^{t_{1}}{\frac {d(PV)}{dt}}\,dt=P(t_{1})V(t_{1})-P(t_{0})V(t_{0}).\end{aligned}}}
In the equation, {\displaystyle {\frac {d(PV)}{dt}}dt=d(PV)}
can be expressed as the exact differential of the function P(t)V(t). Therefore, the integral can be expressed as the difference in the value of P(t)V(t) at the end points of the integration. The product PV is therefore a state function of the system.
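A hedged numeric illustration of the contrast: the sketch below integrates both the work ∫P dV and ∫d(PV) along two different paths between the same end states; the end states and path parametrizations are illustrative assumptions.

```python
# Work is a path function; PV is a state function. Illustrative values.
import numpy as np

P0, V0 = 1.0e5, 1.0e-3     # initial state
P1, V1 = 2.0e5, 3.0e-3     # final state
t = np.linspace(0.0, 1.0, 200001)

def integrals(P, V):
    dV = np.gradient(V, t)
    dP = np.gradient(P, t)
    work = np.trapz(P * dV, t)            # integral of P dV   (path-dependent)
    phi = np.trapz(P * dV + V * dP, t)    # integral of d(PV)  (endpoint-only)
    return work, phi

# Path A: P linear in t, V linear in t. Path B: P quadratic in t, V linear in t.
PA, VA = P0 + (P1 - P0) * t, V0 + (V1 - V0) * t
PB, VB = P0 + (P1 - P0) * t**2, V0 + (V1 - V0) * t

print(integrals(PA, VA))   # work ~300 J, phi ~500 J
print(integrals(PB, VB))   # work ~266.7 J differs; phi = P1*V1 - P0*V0 = 500 J
```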
The notation d will be used for an exact differential. In other words, the integral of dΦ will be equal to Φ(t1) − Φ(t0). The symbol δ will be reserved for an inexact differential, which cannot be integrated without full knowledge of the path. For example, δW = PdV will be used to denote an infinitesimal increment of work.
State functions represent quantities or properties of a thermodynamic system, while non-state functions represent a process during which the state functions change. For example, the state function PV is proportional to the internal energy of an ideal gas, but the work W is the amount of energy transferred as the system performs work. Internal energy is identifiable; it is a particular form of energy. Work is the amount of energy that has changed its form or location.
== List of state functions ==
The following are considered to be state functions in thermodynamics:
== See also ==
Markov property
Conservative vector field
Nonholonomic system
Equation of state
State variable
== Notes ==
== References ==
Callen, Herbert B. (1985). Thermodynamics and an Introduction to Thermostatistics. Wiley & Sons. ISBN 978-0-471-86256-7.
Gibbs, Josiah Willard (1873). "Graphical Methods in the Thermodynamics of Fluids". Transactions of the Connecticut Academy. II. ASIN B00088UXBK – via WikiSource.
Mandl, F. (May 1988). Statistical physics (2nd ed.). Wiley & Sons. ISBN 978-0-471-91533-1.
== External links ==
Media related to State functions at Wikimedia Commons | Wikipedia/Function_of_state |
The Navier–Stokes equations (nav-YAY STOHKS) are partial differential equations which describe the motion of viscous fluid substances. They were named after the French engineer and physicist Claude-Louis Navier and the Irish physicist and mathematician George Gabriel Stokes. They were developed over several decades of progressively building the theories, from 1822 (Navier) to 1842–1850 (Stokes).
The Navier–Stokes equations mathematically express momentum balance for Newtonian fluids and make use of conservation of mass. They are sometimes accompanied by an equation of state relating pressure, temperature and density. They arise from applying Isaac Newton's second law to fluid motion, together with the assumption that the stress in the fluid is the sum of a diffusing viscous term (proportional to the gradient of velocity) and a pressure term, hence describing viscous flow. The difference between them and the closely related Euler equations is that the Navier–Stokes equations take viscosity into account while the Euler equations model only inviscid flow. As a result, the Navier–Stokes equations are parabolic and therefore have better analytic properties, at the expense of having less mathematical structure (e.g. they are never completely integrable).
The Navier–Stokes equations are useful because they describe the physics of many phenomena of scientific and engineering interest. They may be used to model the weather, ocean currents, water flow in a pipe and air flow around a wing. The Navier–Stokes equations, in their full and simplified forms, help with the design of aircraft and cars, the study of blood flow, the design of power stations, the analysis of pollution, and many other problems. Coupled with Maxwell's equations, they can be used to model and study magnetohydrodynamics.
The Navier–Stokes equations are also of great interest in a purely mathematical sense. Despite their wide range of practical uses, it has not yet been proven whether smooth solutions always exist in three dimensions—i.e., whether they are infinitely differentiable (or even just bounded) at all points in the domain. This is called the Navier–Stokes existence and smoothness problem. The Clay Mathematics Institute has called this one of the seven most important open problems in mathematics and has offered a US$1 million prize for a solution or a counterexample.
== Flow velocity ==
The solution of the equations is a flow velocity. It is a vector field—to every point in a fluid, at any moment in a time interval, it gives a vector whose direction and magnitude are those of the velocity of the fluid at that point in space and at that moment in time. It is usually studied in three spatial dimensions and one time dimension, although two (spatial) dimensional and steady-state cases are often used as models, and higher-dimensional analogues are studied in both pure and applied mathematics. Once the velocity field is calculated, other quantities of interest such as pressure or temperature may be found using dynamical equations and relations. This is different from what one normally sees in classical mechanics, where solutions are typically trajectories of position of a particle or deflection of a continuum. Studying velocity instead of position makes more sense for a fluid, although for visualization purposes one can compute various trajectories. In particular, the streamlines of a vector field, interpreted as flow velocity, are the paths along which a massless fluid particle would travel. These paths are the integral curves whose derivative at each point is equal to the vector field, and they can represent visually the behavior of the vector field at a point in time.
== General continuum equations ==
The Navier–Stokes momentum equation can be derived as a particular form of the Cauchy momentum equation, whose general convective form is:
{\displaystyle {\frac {\mathrm {D} \mathbf {u} }{\mathrm {D} t}}={\frac {1}{\rho }}\nabla \cdot {\boldsymbol {\sigma }}+\mathbf {f} .}
By setting the Cauchy stress tensor {\textstyle {\boldsymbol {\sigma }}} to be the sum of a viscosity term {\textstyle {\boldsymbol {\tau }}} (the deviatoric stress) and a pressure term {\textstyle -p\mathbf {I} } (volumetric stress), we arrive at:
where
{\textstyle {\frac {\mathrm {D} }{\mathrm {D} t}}} is the material derivative, defined as {\textstyle {\frac {\partial }{\partial t}}+\mathbf {u} \cdot \nabla },
{\textstyle \rho } is the (mass) density,
{\textstyle \mathbf {u} } is the flow velocity,
{\textstyle \nabla \cdot \,} is the divergence,
{\textstyle p} is the pressure,
{\textstyle t} is time,
{\textstyle {\boldsymbol {\tau }}} is the deviatoric stress tensor, which has order 2,
{\textstyle \mathbf {a} } represents body accelerations acting on the continuum, for example gravity, inertial accelerations, electrostatic accelerations, and so on.
In this form, it is apparent that under the assumption of an inviscid fluid – no deviatoric stress – the Cauchy equations reduce to the Euler equations.
Assuming conservation of mass, and using the known properties of divergence and gradient, we can use the mass continuity equation, which represents the mass per unit volume of a homogeneous fluid with respect to space and time (i.e., the material derivative {\textstyle {\frac {\mathrm {D} }{\mathrm {D} t}}}) of any finite volume (V), to represent the change of velocity in fluid media:
{\displaystyle {\begin{aligned}&{\frac {\mathrm {D} m}{\mathrm {D} t}}=\iiint \limits _{V}\left({\frac {\mathrm {D} \rho }{\mathrm {D} t}}+\rho (\nabla \cdot \mathbf {u} )\right)\,dV\\[5pt]&{\frac {\mathrm {D} \rho }{\mathrm {D} t}}+\rho (\nabla \cdot \mathbf {u} )={\frac {\partial \rho }{\partial t}}+(\nabla \rho )\cdot \mathbf {u} +\rho (\nabla \cdot \mathbf {u} )={\frac {\partial \rho }{\partial t}}+\nabla \cdot (\rho \mathbf {u} )=0\end{aligned}}}
where
{\textstyle {\frac {\mathrm {D} m}{\mathrm {D} t}}} is the material derivative of mass per unit volume (density, {\textstyle \rho }),
{\textstyle \iiint \limits _{V}F(x_{1},x_{2},x_{3},t)\,dV} is the integration throughout the volume (V),
{\textstyle {\frac {\partial }{\partial t}}} is the partial derivative operator,
{\textstyle \nabla \cdot \mathbf {u} \,} is the divergence of the flow velocity ({\textstyle \mathbf {u} }), which is a scalar field (see Note 1),
{\textstyle \nabla \rho \,} is the gradient of density ({\textstyle \rho }), which is the vector derivative of a scalar field (see Note 1).
Note 1: refer to the mathematical operator del, represented by the nabla ({\textstyle \nabla }) symbol.
to arrive at the conservation form of the equations of motion. This is often written:
where {\textstyle \otimes } is the outer product of the flow velocity ({\textstyle \mathbf {u} }):
{\displaystyle \mathbf {u} \otimes \mathbf {u} =\mathbf {u} \mathbf {u} ^{\mathrm {T} }}
The left side of the equation describes acceleration, and may be composed of time-dependent and convective components (also the effects of non-inertial coordinates if present). The right side of the equation is in effect a summation of hydrostatic effects, the divergence of deviatoric stress and body forces (such as gravity).
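The chain of equalities in the mass continuity equation above can be verified symbolically; a hedged sympy sketch with arbitrary smooth fields, restricted to two dimensions only for brevity:

```python
# Symbolic check: D(rho)/Dt + rho(div u) = d(rho)/dt + div(rho u) in 2D.
import sympy as sp

x, y, t = sp.symbols('x y t')
rho = sp.Function('rho')(x, y, t)
ux = sp.Function('u_x')(x, y, t)
uy = sp.Function('u_y')(x, y, t)

material = (sp.diff(rho, t) + ux * sp.diff(rho, x) + uy * sp.diff(rho, y)
            + rho * (sp.diff(ux, x) + sp.diff(uy, y)))      # material form
conservation = sp.diff(rho, t) + sp.diff(rho * ux, x) + sp.diff(rho * uy, y)

print(sp.simplify(material - conservation))   # 0: the two forms coincide
```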
All non-relativistic balance equations, such as the Navier–Stokes equations, can be derived by beginning with the Cauchy equations and specifying the stress tensor through a constitutive relation. By expressing the deviatoric (shear) stress tensor in terms of viscosity and the fluid velocity gradient, and assuming constant viscosity, the above Cauchy equations will lead to the Navier–Stokes equations below.
=== Convective acceleration ===
A significant feature of the Cauchy equation and consequently all other continuum equations (including Euler and Navier–Stokes) is the presence of convective acceleration: the effect of acceleration of a flow with respect to space. While individual fluid particles indeed experience time-dependent acceleration, the convective acceleration of the flow field is a spatial effect, one example being fluid speeding up in a nozzle.
== Compressible flow ==
Remark: here, the deviatoric stress tensor is denoted {\textstyle {\boldsymbol {\tau }}} as it was in the general continuum equations and in the incompressible flow section.
The compressible momentum Navier–Stokes equation results from the following assumptions on the Cauchy stress tensor:
the stress is Galilean invariant: it does not depend directly on the flow velocity, but only on spatial derivatives of the flow velocity. So the stress variable is the tensor gradient {\textstyle \nabla \mathbf {u} }, or more simply the rate-of-strain tensor:
{\textstyle {\boldsymbol {\varepsilon }}\left(\nabla \mathbf {u} \right)\equiv {\frac {1}{2}}\nabla \mathbf {u} +{\frac {1}{2}}\left(\nabla \mathbf {u} \right)^{T}}
the deviatoric stress is linear in this variable: {\textstyle {\boldsymbol {\sigma }}({\boldsymbol {\varepsilon }})=-p\mathbf {I} +\mathbf {C} :{\boldsymbol {\varepsilon }}}, where {\textstyle p} is independent of the strain rate tensor, {\textstyle \mathbf {C} } is the fourth-order tensor representing the constant of proportionality, called the viscosity or elasticity tensor, and : is the double-dot product.
the fluid is assumed to be isotropic, as with gases and simple liquids, and consequently {\textstyle \mathbf {C} } is an isotropic tensor; furthermore, since the deviatoric stress tensor is symmetric, by Helmholtz decomposition it can be expressed in terms of two scalar Lamé parameters, the second viscosity {\textstyle \lambda } and the dynamic viscosity {\textstyle \mu }, as is usual in linear elasticity:
{\displaystyle {\boldsymbol {\tau }}=\lambda \operatorname {tr} ({\boldsymbol {\varepsilon }})\mathbf {I} +2\mu {\boldsymbol {\varepsilon }}}
where {\textstyle \mathbf {I} } is the identity tensor, and {\textstyle \operatorname {tr} ({\boldsymbol {\varepsilon }})} is the trace of the rate-of-strain tensor. So this decomposition can be explicitly defined as:
{\displaystyle {\boldsymbol {\sigma }}=-p\mathbf {I} +\lambda (\nabla \cdot \mathbf {u} )\mathbf {I} +\mu \left(\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }\right).}
Since the trace of the rate-of-strain tensor in three dimensions is the divergence (i.e. rate of expansion) of the flow:
{\displaystyle \operatorname {tr} ({\boldsymbol {\varepsilon }})=\nabla \cdot \mathbf {u} .}
Given this relation, and since the trace of the identity tensor in three dimensions is three:
{\displaystyle \operatorname {tr} ({\boldsymbol {I}})=3.}
the trace of the stress tensor in three dimensions becomes:
{\displaystyle \operatorname {tr} ({\boldsymbol {\sigma }})=-3p+(3\lambda +2\mu )\nabla \cdot \mathbf {u} .}
So by alternatively decomposing the stress tensor into isotropic and deviatoric parts, as usual in fluid dynamics:
{\displaystyle {\boldsymbol {\sigma }}=-\left[p-\left(\lambda +{\tfrac {2}{3}}\mu \right)\left(\nabla \cdot \mathbf {u} \right)\right]\mathbf {I} +\mu \left(\nabla \mathbf {u} +\left(\nabla \mathbf {u} \right)^{\mathrm {T} }-{\tfrac {2}{3}}\left(\nabla \cdot \mathbf {u} \right)\mathbf {I} \right)}
Introducing the bulk viscosity {\textstyle \zeta },
{\displaystyle \zeta \equiv \lambda +{\tfrac {2}{3}}\mu ,}
we arrive at the linear constitutive equation in the form usually employed in thermal hydraulics:
which can also be arranged in the other usual form:
{\displaystyle {\boldsymbol {\sigma }}=-p\mathbf {I} +\mu \left(\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }\right)+\left(\zeta -{\frac {2}{3}}\mu \right)(\nabla \cdot \mathbf {u} )\mathbf {I} .}
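The trace identities and the isotropic/deviatoric split above can be checked numerically; a hedged sketch with an arbitrary velocity gradient and illustrative values of p, mu, and lambda:

```python
# Numeric check of the Newtonian constitutive relation and its trace.
import numpy as np

rng = np.random.default_rng(0)
grad_u = rng.normal(size=(3, 3))   # arbitrary velocity gradient
p, mu, lam = 2.0, 0.7, 0.3         # illustrative pressure and Lame viscosities
I = np.eye(3)
div_u = np.trace(grad_u)           # div u = tr(grad u)

sigma = -p * I + lam * div_u * I + mu * (grad_u + grad_u.T)
assert np.isclose(np.trace(sigma), -3 * p + (3 * lam + 2 * mu) * div_u)

# The shear part of the alternative decomposition is traceless:
tau = mu * (grad_u + grad_u.T - (2.0 / 3.0) * div_u * I)
assert np.isclose(np.trace(tau), 0.0)
print("constitutive identities verified")
```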
Note that in the compressible case the pressure is no longer proportional to the isotropic stress term, since there is the additional bulk viscosity term:
{\displaystyle p=-{\frac {1}{3}}\operatorname {tr} ({\boldsymbol {\sigma }})+\zeta (\nabla \cdot \mathbf {u} )}
and the deviatoric stress tensor {\displaystyle {\boldsymbol {\sigma }}'} is still coincident with the shear stress tensor {\displaystyle {\boldsymbol {\tau }}} (i.e. the deviatoric stress in a Newtonian fluid has no normal stress components), and it has a compressibility term in addition to the incompressible case, which is proportional to the shear viscosity:
{\displaystyle {\boldsymbol {\sigma }}'={\boldsymbol {\tau }}=\mu \left[\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }-{\tfrac {2}{3}}(\nabla \cdot \mathbf {u} )\mathbf {I} \right]}
Both the bulk viscosity {\textstyle \zeta } and the dynamic viscosity {\textstyle \mu } need not be constant – in general, they depend on two thermodynamic variables if the fluid contains a single chemical species, say for example, pressure and temperature. Any equation that makes explicit one of these transport coefficients in terms of the conservation variables is called an equation of state.
The most general form of the Navier–Stokes equations then becomes
In index notation, the equation can be written as
The corresponding equation in conservation form can be obtained by considering that, given the mass continuity equation, the left side is equivalent to:
{\displaystyle \rho {\frac {\mathrm {D} \mathbf {u} }{\mathrm {D} t}}={\frac {\partial }{\partial t}}(\rho \mathbf {u} )+\nabla \cdot (\rho \mathbf {u} \otimes \mathbf {u} )}
to give finally:
Apart from its dependence on pressure and temperature, the second viscosity coefficient also depends on the process; that is to say, the second viscosity coefficient is not just a material property. For example, in the case of a sound wave with a definite frequency that alternately compresses and expands a fluid element, the second viscosity coefficient depends on the frequency of the wave. This dependence is called dispersion. In some cases, the second viscosity {\textstyle \zeta } can be assumed to be constant, in which case the effect of the volume viscosity {\textstyle \zeta } is that the mechanical pressure is not equivalent to the thermodynamic pressure, as demonstrated below.
{\displaystyle \nabla \cdot (\nabla \cdot \mathbf {u} )\mathbf {I} =\nabla (\nabla \cdot \mathbf {u} ),}
{\displaystyle {\bar {p}}\equiv p-\zeta \,\nabla \cdot \mathbf {u} ,}
However, this difference is usually neglected (that is, whenever we are not dealing with processes such as sound absorption and attenuation of shock waves, where the second viscosity coefficient becomes important) by explicitly assuming {\textstyle \zeta =0}. The assumption {\textstyle \zeta =0} is called the Stokes hypothesis. The validity of the Stokes hypothesis can be demonstrated for a monatomic gas both experimentally and from kinetic theory; for other gases and liquids, the Stokes hypothesis is generally incorrect. With the Stokes hypothesis, the Navier–Stokes equations become
If the dynamic viscosity μ and the bulk viscosity {\textstyle \zeta } are assumed to be uniform in space, the equations in convective form can be simplified further. By computing the divergence of the stress tensor, since the divergence of the tensor {\textstyle \nabla \mathbf {u} } is {\textstyle \nabla ^{2}\mathbf {u} } and the divergence of the tensor {\textstyle \left(\nabla \mathbf {u} \right)^{\mathrm {T} }} is {\textstyle \nabla \left(\nabla \cdot \mathbf {u} \right)}, one finally arrives at the compressible Navier–Stokes momentum equation:
where {\textstyle {\frac {\mathrm {D} }{\mathrm {D} t}}} is the material derivative, {\displaystyle \nu ={\frac {\mu }{\rho }}} is the shear kinematic viscosity, and {\displaystyle \xi ={\frac {\zeta }{\rho }}} is the bulk kinematic viscosity. The left-hand side changes in the conservation form of the Navier–Stokes momentum equation.
By bringing the operator acting on the flow velocity to the left side, one also has:
The convective acceleration term can also be written as
{\displaystyle \mathbf {u} \cdot \nabla \mathbf {u} =(\nabla \times \mathbf {u} )\times \mathbf {u} +{\tfrac {1}{2}}\nabla \mathbf {u} ^{2},}
where the vector {\textstyle (\nabla \times \mathbf {u} )\times \mathbf {u} } is known as the Lamb vector.
For the special case of an incompressible flow, the pressure constrains the flow so that the volume of fluid elements is constant: isochoric flow, resulting in a solenoidal velocity field with {\textstyle \nabla \cdot \mathbf {u} =0}.
== Incompressible flow ==
The incompressible momentum Navier–Stokes equation results from the following assumptions on the Cauchy stress tensor:
the stress is Galilean invariant: it does not depend directly on the flow velocity, but only on spatial derivatives of the flow velocity. So the stress variable is the tensor gradient {\textstyle \nabla \mathbf {u} }.
the fluid is assumed to be isotropic, as with gases and simple liquids, and consequently {\textstyle {\boldsymbol {\tau }}} is an isotropic tensor; furthermore, the deviatoric stress tensor can be expressed in terms of the dynamic viscosity {\textstyle \mu }:
{\displaystyle {\boldsymbol {\tau }}=2\mu {\boldsymbol {\varepsilon }},}
where {\displaystyle {\boldsymbol {\varepsilon }}={\tfrac {1}{2}}\left(\mathbf {\nabla u} +\mathbf {\nabla u} ^{\mathrm {T} }\right)} is the rate-of-strain tensor. So this decomposition can be made explicit as:
{\displaystyle {\boldsymbol {\tau }}=\mu \left(\nabla \mathbf {u} +\left(\nabla \mathbf {u} \right)^{\mathrm {T} }\right).}
This constitutive equation is also called the Newtonian law of viscosity.
Dynamic viscosity μ need not be constant – in incompressible flows it can depend on density and on pressure. Any equation that makes explicit one of these transport coefficients in terms of the conservative variables is called an equation of state.
The divergence of the deviatoric stress in the case of uniform viscosity is given by:
{\displaystyle \nabla \cdot {\boldsymbol {\tau }}=2\mu \nabla \cdot {\boldsymbol {\varepsilon }}=\mu \nabla \cdot \left(\nabla \mathbf {u} +\nabla \mathbf {u} ^{\mathrm {T} }\right)=\mu \,\nabla ^{2}\mathbf {u} }
because {\textstyle \nabla \cdot \mathbf {u} =0} for an incompressible fluid.
Incompressibility rules out density and pressure waves like sound or shock waves, so this simplification is not useful if these phenomena are of interest. The incompressible flow assumption typically holds well with all fluids at low Mach numbers (say up to about Mach 0.3), such as for modelling air winds at normal temperatures. The incompressible Navier–Stokes equations are best visualized by dividing by the density:
{\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} -\nu \,\nabla ^{2}\mathbf {u} =-{\frac {1}{\rho }}\nabla p+\mathbf {g} }
where {\textstyle \nu ={\frac {\mu }{\rho }}} is called the kinematic viscosity.
By isolating the fluid velocity, one can also state:
If the density is constant throughout the fluid domain, or, in other words, if all fluid elements have the same density {\textstyle \rho }, then we have
{\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} -\nu \,\nabla ^{2}\mathbf {u} =-\nabla \left({\frac {p}{\rho }}\right)+\mathbf {g} }
where {\textstyle p/\rho } is called the unit pressure head.
In incompressible flows, the pressure field satisfies the Poisson equation,
{\displaystyle \nabla ^{2}p=-\rho {\frac {\partial u_{i}}{\partial x_{k}}}{\frac {\partial u_{k}}{\partial x_{i}}}=-\rho {\frac {\partial ^{2}\left(u_{i}u_{k}\right)}{\partial x_{k}\,\partial x_{i}}},}
which is obtained by taking the divergence of the momentum equations.
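A hedged symbolic check of this Poisson equation, using the classical two-dimensional Taylor–Green vortex as an illustrative test field (this particular choice of field is an assumption made for the example):

```python
# The Taylor-Green field u = (cos x sin y, -sin x cos y) with
# p = -(rho/4)(cos 2x + cos 2y) satisfies the pressure Poisson equation.
import sympy as sp

x, y, rho = sp.symbols('x y rho')
ux, uy = sp.cos(x) * sp.sin(y), -sp.sin(x) * sp.cos(y)
p = -rho / 4 * (sp.cos(2 * x) + sp.cos(2 * y))

assert sp.simplify(sp.diff(ux, x) + sp.diff(uy, y)) == 0   # divergence-free

u, X = [ux, uy], [x, y]
source = sum(sp.diff(u[i], X[k]) * sp.diff(u[k], X[i])
             for i in range(2) for k in range(2))
lap_p = sp.diff(p, x, 2) + sp.diff(p, y, 2)
print(sp.simplify(lap_p + rho * source))   # 0: Laplacian(p) = -rho * source
```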
It is well worth observing the meaning of each term (compare to the Cauchy momentum equation):
{\displaystyle \overbrace {{\vphantom {\frac {}{}}}\underbrace {\frac {\partial \mathbf {u} }{\partial t}} _{\text{Variation}}+\underbrace {{\vphantom {\frac {}{}}}(\mathbf {u} \cdot \nabla )\mathbf {u} } _{\begin{smallmatrix}{\text{Convective}}\\{\text{acceleration}}\end{smallmatrix}}} ^{\text{Inertia (per volume)}}=\overbrace {{\vphantom {\frac {\partial }{\partial }}}\underbrace {{\vphantom {\frac {}{}}}-\nabla w} _{\begin{smallmatrix}{\text{Internal}}\\{\text{source}}\end{smallmatrix}}+\underbrace {{\vphantom {\frac {}{}}}\nu \nabla ^{2}\mathbf {u} } _{\text{Diffusion}}} ^{\text{Divergence of stress}}+\underbrace {{\vphantom {\frac {}{}}}\mathbf {g} } _{\begin{smallmatrix}{\text{External}}\\{\text{source}}\end{smallmatrix}}.}
The higher-order term, namely the shear stress divergence {\textstyle \nabla \cdot {\boldsymbol {\tau }}}, has simply reduced to the vector Laplacian term {\textstyle \mu \nabla ^{2}\mathbf {u} }. This Laplacian term can be interpreted as the difference between the velocity at a point and the mean velocity in a small surrounding volume. This implies that – for a Newtonian fluid – viscosity operates as a diffusion of momentum, in much the same way as heat conduction. In fact, neglecting the convection term, the incompressible Navier–Stokes equations lead to a vector diffusion equation (namely the Stokes equations), but in general the convection term is present, so the incompressible Navier–Stokes equations belong to the class of convection–diffusion equations.
In the usual case of an external field being a conservative field:
{\displaystyle \mathbf {g} =-\nabla \varphi }
by defining the hydraulic head:
{\displaystyle h\equiv w+\varphi }
one can finally condense the whole source into one term, arriving at the incompressible Navier–Stokes equation with conservative external field:
{\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} -\nu \,\nabla ^{2}\mathbf {u} =-\nabla h.}
The incompressible Navier–Stokes equation with uniform density and viscosity and a conservative external field is the fundamental equation of hydraulics. The domain for these equations is commonly a 3 or fewer dimensional Euclidean space, for which an orthogonal coordinate reference frame is usually set to make explicit the system of scalar partial differential equations to be solved. The commonly used three-dimensional orthogonal coordinate systems are three: Cartesian, cylindrical, and spherical. Expressing the Navier–Stokes vector equation in Cartesian coordinates is quite straightforward and not much influenced by the number of dimensions of the Euclidean space employed, and this is also the case for the first-order terms (like the variation and convection ones) in non-Cartesian orthogonal coordinate systems. But for the higher-order terms (the two coming from the divergence of the deviatoric stress that distinguish the Navier–Stokes equations from the Euler equations) some tensor calculus is required to deduce an expression in non-Cartesian orthogonal coordinate systems.
A special case of the fundamental equation of hydraulics is Bernoulli's equation.
The incompressible Navier–Stokes equation is composite, the sum of two orthogonal equations,
{\displaystyle {\begin{aligned}{\frac {\partial \mathbf {u} }{\partial t}}&=\Pi ^{S}\left(-(\mathbf {u} \cdot \nabla )\mathbf {u} +\nu \,\nabla ^{2}\mathbf {u} \right)+\mathbf {f} ^{S}\\\rho ^{-1}\,\nabla p&=\Pi ^{I}\left(-(\mathbf {u} \cdot \nabla )\mathbf {u} +\nu \,\nabla ^{2}\mathbf {u} \right)+\mathbf {f} ^{I}\end{aligned}}}
where {\textstyle \Pi ^{S}} and {\textstyle \Pi ^{I}} are solenoidal and irrotational projection operators satisfying {\textstyle \Pi ^{S}+\Pi ^{I}=1}, and {\textstyle \mathbf {f} ^{S}} and {\textstyle \mathbf {f} ^{I}} are the non-conservative and conservative parts of the body force. This result follows from the Helmholtz theorem (also known as the fundamental theorem of vector calculus). The first equation is a pressureless governing equation for the velocity, while the second equation for the pressure is a functional of the velocity and is related to the pressure Poisson equation.
The explicit functional form of the projection operator in 3D is found from the Helmholtz Theorem:
{\displaystyle \Pi ^{S}\,\mathbf {F} (\mathbf {r} )={\frac {1}{4\pi }}\nabla \times \int {\frac {\nabla ^{\prime }\times \mathbf {F} (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\,\mathrm {d} V',\quad \Pi ^{I}=1-\Pi ^{S}}
with a similar structure in 2D. Thus the governing equation is an integro-differential equation similar to Coulomb's and Biot–Savart's law, not convenient for numerical computation.
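On a periodic domain, though, the solenoidal projection is convenient to compute spectrally, since in Fourier space it acts wavenumber by wavenumber. A hedged sketch (grid size and test field are illustrative assumptions):

```python
# Spectral Leray/Helmholtz projection on a 2D periodic box:
# (Pi^S F)_k = F_k - k (k.F_k) / |k|^2, which removes the divergence.
import numpy as np

N = 64
k1 = np.fft.fftfreq(N, d=1.0 / N)           # integer wavenumbers
kx, ky = np.meshgrid(k1, k1, indexing='ij')
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                              # avoid 0/0; the mean mode is untouched

rng = np.random.default_rng(1)
F = rng.normal(size=(2, N, N))              # arbitrary 2D vector field

Fk = np.fft.fft2(F, axes=(1, 2))
div_k = kx * Fk[0] + ky * Fk[1]             # k.F_k (the factor i cancels out)
Pk = np.stack([Fk[0] - kx * div_k / k2, Fk[1] - ky * div_k / k2])
P = np.real(np.fft.ifft2(Pk, axes=(1, 2)))  # solenoidal part of F

# The spectral divergence of the projected field vanishes to round-off:
div_P = np.real(np.fft.ifft2(1j * (kx * np.fft.fft2(P[0]) + ky * np.fft.fft2(P[1]))))
print(np.max(np.abs(div_P)))                # ~1e-13
```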
An equivalent weak or variational form of the equation, proved to produce the same velocity solution as the Navier–Stokes equation, is given by,
{\displaystyle \left(\mathbf {w} ,{\frac {\partial \mathbf {u} }{\partial t}}\right)=-{\bigl (}\mathbf {w} ,\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} {\bigr )}-\nu \left(\nabla \mathbf {w} :\nabla \mathbf {u} \right)+\left(\mathbf {w} ,\mathbf {f} ^{S}\right)}
for divergence-free test functions {\textstyle \mathbf {w} }
satisfying appropriate boundary conditions. Here, the projections are accomplished by the orthogonality of the solenoidal and irrotational function spaces. The discrete form of this is eminently suited to finite element computation of divergence-free flow, as we shall see in the next section. There, one will be able to address the question, "How does one specify pressure-driven (Poiseuille) problems with a pressureless governing equation?".
The absence of pressure forces from the governing velocity equation demonstrates that the equation is not a dynamic one, but rather a kinematic equation where the divergence-free condition serves the role of a conservation equation. This would seem to refute the frequent statements that the incompressible pressure enforces the divergence-free condition.
=== Weak form of the incompressible Navier–Stokes equations ===
==== Strong form ====
Consider the incompressible Navier–Stokes equations for a Newtonian fluid of constant density {\textstyle \rho } in a domain
{\displaystyle \Omega \subset \mathbb {R} ^{d}\quad (d=2,3)}
with boundary
{\displaystyle \partial \Omega =\Gamma _{D}\cup \Gamma _{N},}
being {\textstyle \Gamma _{D}} and {\textstyle \Gamma _{N}} portions of the boundary where respectively a Dirichlet and a Neumann boundary condition is applied ({\textstyle \Gamma _{D}\cap \Gamma _{N}=\emptyset }):
{\displaystyle {\begin{cases}\rho {\dfrac {\partial \mathbf {u} }{\partial t}}+\rho (\mathbf {u} \cdot \nabla )\mathbf {u} -\nabla \cdot {\boldsymbol {\sigma }}(\mathbf {u} ,p)=\mathbf {f} &{\text{ in }}\Omega \times (0,T)\\\nabla \cdot \mathbf {u} =0&{\text{ in }}\Omega \times (0,T)\\\mathbf {u} =\mathbf {g} &{\text{ on }}\Gamma _{D}\times (0,T)\\{\boldsymbol {\sigma }}(\mathbf {u} ,p){\hat {\mathbf {n} }}=\mathbf {h} &{\text{ on }}\Gamma _{N}\times (0,T)\\\mathbf {u} (0)=\mathbf {u} _{0}&{\text{ in }}\Omega \times \{0\}\end{cases}}}
{\textstyle \mathbf {u} } is the fluid velocity, {\textstyle p} the fluid pressure, {\textstyle \mathbf {f} } a given forcing term, {\displaystyle {\hat {\mathbf {n} }}} the outward directed unit normal vector to {\textstyle \Gamma _{N}}, and {\textstyle {\boldsymbol {\sigma }}(\mathbf {u} ,p)} the viscous stress tensor defined as:
{\displaystyle {\boldsymbol {\sigma }}(\mathbf {u} ,p)=-p\mathbf {I} +2\mu {\boldsymbol {\varepsilon }}(\mathbf {u} ).}
Let {\textstyle \mu } be the dynamic viscosity of the fluid, {\textstyle \mathbf {I} } the second-order identity tensor, and {\textstyle {\boldsymbol {\varepsilon }}(\mathbf {u} )} the strain-rate tensor defined as:
{\displaystyle {\boldsymbol {\varepsilon }}(\mathbf {u} )={\frac {1}{2}}\left(\left(\nabla \mathbf {u} \right)+\left(\nabla \mathbf {u} \right)^{\mathrm {T} }\right).}
The functions {\textstyle \mathbf {g} } and {\textstyle \mathbf {h} } are given Dirichlet and Neumann boundary data, while {\textstyle \mathbf {u} _{0}} is the initial condition. The first equation is the momentum balance equation, while the second represents the mass conservation, namely the continuity equation.
Assuming constant dynamic viscosity, using the vectorial identity
{\displaystyle \nabla \cdot \left(\nabla \mathbf {f} \right)^{\mathrm {T} }=\nabla (\nabla \cdot \mathbf {f} )}
and exploiting mass conservation, the divergence of the total stress tensor in the momentum equation can also be expressed as:
{\displaystyle {\begin{aligned}\nabla \cdot {\boldsymbol {\sigma }}(\mathbf {u} ,p)&=\nabla \cdot \left(-p\mathbf {I} +2\mu {\boldsymbol {\varepsilon }}(\mathbf {u} )\right)\\&=-\nabla p+2\mu \nabla \cdot {\boldsymbol {\varepsilon }}(\mathbf {u} )\\&=-\nabla p+2\mu \nabla \cdot \left[{\tfrac {1}{2}}\left(\left(\nabla \mathbf {u} \right)+\left(\nabla \mathbf {u} \right)^{\mathrm {T} }\right)\right]\\&=-\nabla p+\mu \left(\Delta \mathbf {u} +\nabla \cdot \left(\nabla \mathbf {u} \right)^{\mathrm {T} }\right)\\&=-\nabla p+\mu {\bigl (}\Delta \mathbf {u} +\nabla \underbrace {(\nabla \cdot \mathbf {u} )} _{=0}{\bigr )}=-\nabla p+\mu \,\Delta \mathbf {u} .\end{aligned}}}
Moreover, note that the Neumann boundary conditions can be rearranged as:
{\displaystyle {\boldsymbol {\sigma }}(\mathbf {u} ,p){\hat {\mathbf {n} }}=\left(-p\mathbf {I} +2\mu {\boldsymbol {\varepsilon }}(\mathbf {u} )\right){\hat {\mathbf {n} }}=-p{\hat {\mathbf {n} }}+\mu {\frac {\partial {\boldsymbol {u}}}{\partial {\hat {\mathbf {n} }}}}.}
==== Weak form ====
In order to find the weak form of the Navier–Stokes equations, firstly, consider the momentum equation
{\displaystyle \rho {\frac {\partial \mathbf {u} }{\partial t}}-\mu \Delta \mathbf {u} +\rho (\mathbf {u} \cdot \nabla )\mathbf {u} +\nabla p=\mathbf {f} }
multiply it by a test function {\textstyle \mathbf {v} }, defined in a suitable space {\textstyle V}, and integrate both members with respect to the domain {\textstyle \Omega }:
{\displaystyle \int \limits _{\Omega }\rho {\frac {\partial \mathbf {u} }{\partial t}}\cdot \mathbf {v} -\int \limits _{\Omega }\mu \Delta \mathbf {u} \cdot \mathbf {v} +\int \limits _{\Omega }\rho (\mathbf {u} \cdot \nabla )\mathbf {u} \cdot \mathbf {v} +\int \limits _{\Omega }\nabla p\cdot \mathbf {v} =\int \limits _{\Omega }\mathbf {f} \cdot \mathbf {v} }
Integrating by parts the diffusive and pressure terms and using Gauss's theorem:
{\displaystyle {\begin{aligned}-\int \limits _{\Omega }\mu \Delta \mathbf {u} \cdot \mathbf {v} &=\int _{\Omega }\mu \nabla \mathbf {u} \cdot \nabla \mathbf {v} -\int \limits _{\partial \Omega }\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}\cdot \mathbf {v} \\\int \limits _{\Omega }\nabla p\cdot \mathbf {v} &=-\int \limits _{\Omega }p\nabla \cdot \mathbf {v} +\int \limits _{\partial \Omega }p\mathbf {v} \cdot {\hat {\mathbf {n} }}\end{aligned}}}
Using these relations, one gets:
{\displaystyle \int \limits _{\Omega }\rho {\dfrac {\partial \mathbf {u} }{\partial t}}\cdot \mathbf {v} +\int \limits _{\Omega }\mu \nabla \mathbf {u} \cdot \nabla \mathbf {v} +\int \limits _{\Omega }\rho (\mathbf {u} \cdot \nabla )\mathbf {u} \cdot \mathbf {v} -\int \limits _{\Omega }p\nabla \cdot \mathbf {v} =\int \limits _{\Omega }\mathbf {f} \cdot \mathbf {v} +\int \limits _{\partial \Omega }\left(\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}-p{\hat {\mathbf {n} }}\right)\cdot \mathbf {v} \quad \forall \mathbf {v} \in V.}
In the same fashion, the continuity equation is multiplied by a test function q belonging to a space {\textstyle Q} and integrated over the domain {\textstyle \Omega }:
{\displaystyle \int \limits _{\Omega }q\,\nabla \cdot \mathbf {u} =0\quad \forall q\in Q.}
The space functions are chosen as follows:
{\displaystyle {\begin{aligned}V=\left[H_{0}^{1}(\Omega )\right]^{d}&=\left\{\mathbf {v} \in \left[H^{1}(\Omega )\right]^{d}:\quad \mathbf {v} =\mathbf {0} {\text{ on }}\Gamma _{D}\right\},\\Q&=L^{2}(\Omega )\end{aligned}}}
Considering that the test function v vanishes on the Dirichlet boundary and considering the Neumann condition, the integral on the boundary can be rearranged as:
{\displaystyle \int \limits _{\partial \Omega }\left(\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}-p{\hat {\mathbf {n} }}\right)\cdot \mathbf {v} =\underbrace {\int \limits _{\Gamma _{D}}\left(\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}-p{\hat {\mathbf {n} }}\right)\cdot \mathbf {v} } _{\mathbf {v} =\mathbf {0} {\text{ on }}\Gamma _{D}\ }+\int \limits _{\Gamma _{N}}\underbrace {{\vphantom {\int \limits _{\Gamma _{N}}}}\left(\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}-p{\hat {\mathbf {n} }}\right)} _{=\mathbf {h} {\text{ on }}\Gamma _{N}}\cdot \mathbf {v} =\int \limits _{\Gamma _{N}}\mathbf {h} \cdot \mathbf {v} .}
Having this in mind, the weak formulation of the Navier–Stokes equations is expressed as:
{\displaystyle {\begin{aligned}&{\text{find }}\mathbf {u} \in L^{2}\left(\mathbb {R} ^{+};\left[H^{1}(\Omega )\right]^{d}\right)\cap C^{0}\left(\mathbb {R} ^{+};\left[L^{2}(\Omega )\right]^{d}\right){\text{ such that: }}\\[5pt]&\quad {\begin{cases}\displaystyle \int \limits _{\Omega }\rho {\dfrac {\partial \mathbf {u} }{\partial t}}\cdot \mathbf {v} +\int \limits _{\Omega }\mu \nabla \mathbf {u} \cdot \nabla \mathbf {v} +\int \limits _{\Omega }\rho (\mathbf {u} \cdot \nabla )\mathbf {u} \cdot \mathbf {v} -\int \limits _{\Omega }p\nabla \cdot \mathbf {v} =\int \limits _{\Omega }\mathbf {f} \cdot \mathbf {v} +\int \limits _{\Gamma _{N}}\mathbf {h} \cdot \mathbf {v} \quad \forall \mathbf {v} \in V,\\\displaystyle \int \limits _{\Omega }q\nabla \cdot \mathbf {u} =0\quad \forall q\in Q.\end{cases}}\end{aligned}}}
=== Discrete velocity ===
With partitioning of the problem domain and defining basis functions on the partitioned domain, the discrete form of the governing equation is
{\displaystyle \left(\mathbf {w} _{i},{\frac {\partial \mathbf {u} _{j}}{\partial t}}\right)=-{\bigl (}\mathbf {w} _{i},\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} _{j}{\bigr )}-\nu \left(\nabla \mathbf {w} _{i}:\nabla \mathbf {u} _{j}\right)+\left(\mathbf {w} _{i},\mathbf {f} ^{S}\right).}
It is desirable to choose basis functions that reflect the essential feature of incompressible flow – the elements must be divergence-free. While the velocity is the variable of interest, the existence of the stream function or vector potential is necessary by the Helmholtz theorem. Further, to determine fluid flow in the absence of a pressure gradient, one can specify the difference of stream function values across a 2D channel, or the line integral of the tangential component of the vector potential around the channel in 3D, the flow being given by Stokes' theorem. Discussion will be restricted to 2D in the following.
We further restrict discussion to continuous Hermite finite elements which have at least first-derivative degrees-of-freedom. With this, one can draw a large number of candidate triangular and rectangular elements from the plate-bending literature. These elements have derivatives as components of the gradient. In 2D, the gradient and curl of a scalar are clearly orthogonal, given by the expressions,
{\displaystyle {\begin{aligned}\nabla \varphi &=\left({\frac {\partial \varphi }{\partial x}},\,{\frac {\partial \varphi }{\partial y}}\right)^{\mathrm {T} },\\[5pt]\nabla \times \varphi &=\left({\frac {\partial \varphi }{\partial y}},\,-{\frac {\partial \varphi }{\partial x}}\right)^{\mathrm {T} }.\end{aligned}}}
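This orthogonality, and the fact that the curl of a scalar is automatically divergence-free, can be confirmed symbolically; a hedged sympy sketch with an arbitrary smooth scalar:

```python
# In 2D the gradient and curl of a scalar are orthogonal, and the curl
# is divergence-free, which is what makes curl(psi) a natural velocity basis.
import sympy as sp

x, y = sp.symbols('x y')
phi = sp.Function('phi')(x, y)

grad = sp.Matrix([sp.diff(phi, x), sp.diff(phi, y)])
curl = sp.Matrix([sp.diff(phi, y), -sp.diff(phi, x)])

print(sp.simplify(grad.dot(curl)))                              # 0: orthogonal
print(sp.simplify(sp.diff(curl[0], x) + sp.diff(curl[1], y)))   # 0: solenoidal
```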
Adopting continuous plate-bending elements, interchanging the derivative degrees-of-freedom and changing the sign of the appropriate one gives many families of stream function elements.
Taking the curl of the scalar stream function elements gives divergence-free velocity elements. The requirement that the stream function elements be continuous assures that the normal component of the velocity is continuous across element interfaces, all that is necessary for vanishing divergence on these interfaces.
Boundary conditions are simple to apply. The stream function is constant on no-flow surfaces, with no-slip velocity conditions on surfaces.
Stream function differences across open channels determine the flow. No boundary conditions are necessary on open boundaries, though consistent values may be used with some problems. These are all Dirichlet conditions.
The algebraic equations to be solved are simple to set up, but of course are non-linear, requiring iteration of the linearized equations.
Similar considerations apply to three dimensions, but extension from 2D is not immediate because of the vector nature of the potential, and there exists no simple relation between the gradient and the curl as was the case in 2D.
=== Pressure recovery ===
Recovering pressure from the velocity field is easy. The discrete weak equation for the pressure gradient is,
{\displaystyle (\mathbf {g} _{i},\nabla p)=-\left(\mathbf {g} _{i},\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} _{j}\right)-\nu \left(\nabla \mathbf {g} _{i}:\nabla \mathbf {u} _{j}\right)+\left(\mathbf {g} _{i},\mathbf {f} ^{I}\right)}
where the test/weight functions are irrotational. Any conforming scalar finite element may be used. However, the pressure gradient field may also be of interest. In this case, one can use scalar Hermite elements for the pressure. For the test/weight functions {\textstyle \mathbf {g} _{i}} one would choose the irrotational vector elements obtained from the gradient of the pressure element.
== Non-inertial frame of reference ==
The rotating frame of reference introduces some interesting pseudo-forces into the equations through the material derivative term. Consider a stationary inertial frame of reference {\textstyle K}, and a non-inertial frame of reference {\textstyle K'}, which is translating with velocity {\textstyle \mathbf {U} (t)} and rotating with angular velocity {\textstyle \Omega (t)} with respect to the stationary frame. The Navier–Stokes equation observed from the non-inertial frame then becomes
Here {\textstyle \mathbf {x} } and {\textstyle \mathbf {u} } are measured in the non-inertial frame. The first term in the parenthesis represents Coriolis acceleration, the second term is due to centrifugal acceleration, the third is due to the linear acceleration of {\textstyle K'} with respect to {\textstyle K}, and the fourth term is due to the angular acceleration of {\textstyle K'} with respect to {\textstyle K}.
== Other equations ==
The Navier–Stokes equations are strictly a statement of the balance of momentum. To fully describe fluid flow, more information is needed; how much depends on the assumptions made. This additional information may include boundary data (no-slip, capillary surface, etc.), conservation of mass, balance of energy, and/or an equation of state.
=== Continuity equation for incompressible fluid ===
Regardless of the flow assumptions, a statement of the conservation of mass is generally necessary. This is achieved through the mass continuity equation, as discussed in the "General continuum equations" section above, as follows:
{\displaystyle {\begin{aligned}{\frac {\mathrm {D} m}{\mathrm {D} t}}&=\iiint \limits _{V}\left({\frac {\mathrm {D} \rho }{\mathrm {D} t}}+\rho (\nabla \cdot \mathbf {u} )\right)dV\\{\frac {\mathrm {D} \rho }{\mathrm {D} t}}+\rho (\nabla \cdot \mathbf {u} )&={\frac {\partial \rho }{\partial t}}+(\nabla \rho )\cdot \mathbf {u} +\rho (\nabla \cdot \mathbf {u} )={\frac {\partial \rho }{\partial t}}+\nabla \cdot (\rho \mathbf {u} )=0\end{aligned}}}
A fluid medium for which the density ({\textstyle \rho }) is constant is called incompressible. Therefore, the rate of change of density with respect to time {\textstyle {\frac {\partial \rho }{\partial t}}} and the gradient of density {\textstyle \nabla \rho } are equal to zero. In this case the general equation of continuity, {\textstyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot (\rho \mathbf {u} )=0}, reduces to {\textstyle \rho (\nabla \cdot \mathbf {u} )=0}. Furthermore, assuming that the density {\textstyle \rho } is a non-zero constant means that the equation can be divided by {\textstyle \rho }. Therefore, the continuity equation for an incompressible fluid reduces further to:
{\displaystyle \nabla \cdot \mathbf {u} =0}
This relationship, {\textstyle \nabla \cdot \mathbf {u} =0}, identifies that the divergence of the flow velocity vector ({\textstyle \mathbf {u} }) is equal to zero, which means that for an incompressible fluid the flow velocity field is a solenoidal vector field or a divergence-free vector field. Note that this relationship can be combined with the identity for the vector Laplace operator, {\textstyle \nabla ^{2}\mathbf {u} =\nabla (\nabla \cdot \mathbf {u} )-\nabla \times (\nabla \times \mathbf {u} )}, and the vorticity {\textstyle {\vec {\omega }}=\nabla \times \mathbf {u} }, so that for an incompressible fluid:
{\displaystyle \nabla ^{2}\mathbf {u} =-(\nabla \times (\nabla \times \mathbf {u} ))=-(\nabla \times {\vec {\omega }})}
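A hedged symbolic check of this identity, using the divergence-free Taylor–Green velocity as an illustrative test case:

```python
# For a solenoidal field, Laplacian(u) = -curl(curl(u)) = -curl(omega).
import sympy as sp

x, y, z = sp.symbols('x y z')
u = sp.Matrix([sp.cos(x) * sp.sin(y), -sp.sin(x) * sp.cos(y), 0])

def curl(v):
    return sp.Matrix([sp.diff(v[2], y) - sp.diff(v[1], z),
                      sp.diff(v[0], z) - sp.diff(v[2], x),
                      sp.diff(v[1], x) - sp.diff(v[0], y)])

lap_u = sp.Matrix([sum(sp.diff(u[i], c, 2) for c in (x, y, z)) for i in range(3)])
omega = curl(u)                            # vorticity
print(sp.simplify(lap_u + curl(omega)))    # zero vector: identity verified
```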
== Stream function for incompressible 2D fluid ==
Taking the curl of the incompressible Navier–Stokes equation results in the elimination of pressure. This is especially easy to see if 2D Cartesian flow is assumed (as in the degenerate 3D case with {\textstyle u_{z}=0} and no dependence of anything on {\textstyle z}), where the equations reduce to:
{\displaystyle {\begin{aligned}\rho \left({\frac {\partial u_{x}}{\partial t}}+u_{x}{\frac {\partial u_{x}}{\partial x}}+u_{y}{\frac {\partial u_{x}}{\partial y}}\right)&=-{\frac {\partial p}{\partial x}}+\mu \left({\frac {\partial ^{2}u_{x}}{\partial x^{2}}}+{\frac {\partial ^{2}u_{x}}{\partial y^{2}}}\right)+\rho g_{x}\\\rho \left({\frac {\partial u_{y}}{\partial t}}+u_{x}{\frac {\partial u_{y}}{\partial x}}+u_{y}{\frac {\partial u_{y}}{\partial y}}\right)&=-{\frac {\partial p}{\partial y}}+\mu \left({\frac {\partial ^{2}u_{y}}{\partial x^{2}}}+{\frac {\partial ^{2}u_{y}}{\partial y^{2}}}\right)+\rho g_{y}.\end{aligned}}}
Differentiating the first with respect to {\textstyle y}, the second with respect to {\textstyle x}, and subtracting the resulting equations will eliminate pressure and any conservative force.
For incompressible flow, defining the stream function {\textstyle \psi } through
{\displaystyle u_{x}={\frac {\partial \psi }{\partial y}};\quad u_{y}=-{\frac {\partial \psi }{\partial x}}}
results in mass continuity being unconditionally satisfied (given the stream function is continuous), and then incompressible Newtonian 2D momentum and mass conservation condense into one equation:
{\displaystyle {\frac {\partial }{\partial t}}\left(\nabla ^{2}\psi \right)+{\frac {\partial \psi }{\partial y}}{\frac {\partial }{\partial x}}\left(\nabla ^{2}\psi \right)-{\frac {\partial \psi }{\partial x}}{\frac {\partial }{\partial y}}\left(\nabla ^{2}\psi \right)=\nu \nabla ^{4}\psi }
where {\textstyle \nabla ^{4}} is the 2D biharmonic operator and {\textstyle \nu ={\frac {\mu }{\rho }}} is the kinematic viscosity. We can also express this compactly using the Jacobian determinant:
{\displaystyle {\frac {\partial }{\partial t}}\left(\nabla ^{2}\psi \right)+{\frac {\partial \left(\psi ,\nabla ^{2}\psi \right)}{\partial (y,x)}}=\nu \nabla ^{4}\psi .}
This single equation together with appropriate boundary conditions describes 2D fluid flow, taking only kinematic viscosity as a parameter. Note that the equation for creeping flow results when the left side is assumed zero.
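As a concrete check of this formulation, the following sketch (a minimal SymPy script; the decaying two-dimensional Taylor–Green stream function used as the test case is a standard exact solution, not something stated in this section) verifies symbolically that ψ = sin x sin y e^(−2νt) satisfies the single stream-function equation above:

```python
import sympy as sp

x, y, t, nu = sp.symbols('x y t nu', real=True)

# Decaying 2D Taylor-Green stream function (a known exact solution)
psi = sp.sin(x) * sp.sin(y) * sp.exp(-2 * nu * t)

lap = sp.diff(psi, x, 2) + sp.diff(psi, y, 2)          # nabla^2 psi
lhs = (sp.diff(lap, t)
       + sp.diff(psi, y) * sp.diff(lap, x)
       - sp.diff(psi, x) * sp.diff(lap, y))            # advected vorticity
rhs = nu * (sp.diff(lap, x, 2) + sp.diff(lap, y, 2))   # nu * nabla^4 psi

print(sp.simplify(lhs - rhs))  # -> 0, so psi satisfies the equation
```

Because the Jacobian term vanishes identically for this particular ψ, the check also illustrates why the Taylor–Green vortex is such a convenient benchmark: the nonlinearity drops out and only viscous decay remains.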
In axisymmetric flow another stream function formulation, called the Stokes stream function, can be used to describe the velocity components of an incompressible flow with one scalar function.
The incompressible Navier–Stokes equation is a differential algebraic equation, having the inconvenient feature that there is no explicit mechanism for advancing the pressure in time. Consequently, much effort has been expended to eliminate the pressure from all or part of the computational process. The stream function formulation eliminates the pressure but only in two dimensions and at the expense of introducing higher derivatives and elimination of the velocity, which is the primary variable of interest.
== Properties ==
=== Nonlinearity ===
The Navier–Stokes equations are nonlinear partial differential equations in the general case, and they remain nonlinear in almost every real situation. In some cases, such as one-dimensional flow and Stokes flow (or creeping flow), the equations can be simplified to linear equations. The nonlinearity makes most problems difficult or impossible to solve and is the main contributor to the turbulence that the equations model.
The nonlinearity is due to convective acceleration, which is an acceleration associated with the change in velocity over position. Hence, any convective flow, whether turbulent or not, will involve nonlinearity. An example of convective but laminar (nonturbulent) flow would be the passage of a viscous fluid (for example, oil) through a small converging nozzle. Such flows, whether exactly solvable or not, can often be thoroughly studied and understood.
=== Turbulence ===
Turbulence is the time-dependent chaotic behaviour seen in many fluid flows. It is generally believed that it is due to the inertia of the fluid as a whole: the culmination of time-dependent and convective acceleration; hence flows where inertial effects are small tend to be laminar (the Reynolds number quantifies how much the flow is affected by inertia). It is believed, though not known with certainty, that the Navier–Stokes equations describe turbulence properly.
The numerical solution of the Navier–Stokes equations for turbulent flow is extremely difficult: because of the significantly different mixing-length scales involved in turbulent flow, a stable solution requires such a fine mesh resolution that the computational time becomes infeasible for direct numerical simulation. Attempts to solve turbulent flow using a laminar solver typically result in a time-unsteady solution, which fails to converge appropriately. To counter this, time-averaged equations such as the Reynolds-averaged Navier–Stokes equations (RANS), supplemented with turbulence models, are used in practical computational fluid dynamics (CFD) applications when modeling turbulent flows. Some models include the Spalart–Allmaras, k–ω, k–ε, and SST models, which add a variety of additional equations to bring closure to the RANS equations. Large eddy simulation (LES) can also be used to solve these equations numerically. This approach is computationally more expensive, in time and in computer memory, than RANS, but produces better results because it explicitly resolves the larger turbulent scales.
=== Applicability ===
Together with supplemental equations (for example, conservation of mass) and well-formulated boundary conditions, the Navier–Stokes equations seem to model fluid motion accurately; even turbulent flows seem (on average) to agree with real world observations.
The Navier–Stokes equations assume that the fluid being studied is a continuum (it is infinitely divisible and not composed of particles such as atoms or molecules), and is not moving at relativistic velocities. At very small scales or under extreme conditions, real fluids made out of discrete molecules will produce results different from the continuous fluids modeled by the Navier–Stokes equations. For example, capillarity of internal layers in fluids appears for flow with high gradients. For large Knudsen number of the problem, the Boltzmann equation may be a suitable replacement.
Failing that, one may have to resort to molecular dynamics or various hybrid methods.
Another limitation is simply the complicated nature of the equations. Time-tested formulations exist for common fluid families, but the application of the Navier–Stokes equations to less common families tends to result in very complicated formulations and often to open research problems. For this reason, these equations are usually written for Newtonian fluids where the viscosity model is linear; truly general models for the flow of other kinds of fluids (such as blood) do not exist.
== Application to specific problems ==
The Navier–Stokes equations, even when written explicitly for specific fluids, are rather generic in nature and their proper application to specific problems can be very diverse. This is partly because there is an enormous variety of problems that may be modeled, ranging from as simple as the distribution of static pressure to as complicated as multiphase flow driven by surface tension.
Generally, application to specific problems begins with some flow assumptions and initial/boundary condition formulation; this may be followed by scale analysis to further simplify the problem.
=== Parallel flow ===
Assume steady, parallel, one-dimensional, non-convective pressure-driven flow between parallel plates; the resulting scaled (dimensionless) boundary value problem is:
{\displaystyle {\frac {\mathrm {d} ^{2}u}{\mathrm {d} y^{2}}}=-1;\quad u(0)=u(1)=0.}
The boundary condition is the no-slip condition. This problem is easily solved for the flow field:
{\displaystyle u(y)={\frac {y-y^{2}}{2}}.}
From this point onward, more quantities of interest can be easily obtained, such as viscous drag force or net flow rate.
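For instance, the net flow rate follows by integrating the velocity profile across the gap. A minimal sketch (assuming SymPy is available; the variable names are mine):

```python
import sympy as sp

y = sp.symbols('y')
u = sp.Function('u')

# Dimensionless Poiseuille problem: u'' = -1 with no-slip walls
sol = sp.dsolve(sp.Eq(u(y).diff(y, 2), -1), u(y), ics={u(0): 0, u(1): 0})
profile = sp.expand(sol.rhs)

print(profile)                            # y/2 - y**2/2
print(sp.integrate(profile, (y, 0, 1)))   # net (dimensionless) flow rate: 1/12
print(profile.diff(y).subs(y, 0))         # wall velocity gradient: 1/2
```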
=== Radial flow ===
Difficulties may arise when the problem becomes slightly more complicated. A seemingly modest twist on the parallel flow above would be the radial flow between parallel plates; this involves convection and thus non-linearity. The velocity field may be represented by a function f(z) that must satisfy:
{\displaystyle {\frac {\mathrm {d} ^{2}f}{\mathrm {d} z^{2}}}+Rf^{2}=-1;\quad f(-1)=f(1)=0.}
This ordinary differential equation is what is obtained when the Navier–Stokes equations are written and the flow assumptions applied (additionally, the pressure gradient is solved for). The nonlinear term makes this a very difficult problem to solve analytically (a lengthy implicit solution may be found which involves elliptic integrals and roots of cubic polynomials). Issues with the actual existence of solutions arise for {\textstyle R>1.41} (approximately; this is not {\textstyle {\sqrt {2}}}), the parameter {\textstyle R} being the Reynolds number with appropriately chosen scales. This is an example of flow assumptions losing their applicability, and an example of the difficulty in "high" Reynolds number flows.
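Although the analytical solution is unwieldy, the boundary value problem is easy to treat numerically for moderate {\textstyle R}. A sketch using SciPy's collocation solver (the parameter value and grid are illustrative assumptions of mine):

```python
import numpy as np
from scipy.integrate import solve_bvp

R = 1.0  # Reynolds-like parameter; trouble is expected as R approaches ~1.41

def ode(z, f):
    # first-order system for f'' + R*f**2 = -1: f[0] = f, f[1] = f'
    return np.vstack([f[1], -1.0 - R * f[0] ** 2])

def bc(fa, fb):
    return np.array([fa[0], fb[0]])  # f(-1) = f(1) = 0

z = np.linspace(-1.0, 1.0, 101)
sol = solve_bvp(ode, bc, z, np.zeros((2, z.size)))
print(sol.message, "| f(0) =", float(sol.sol(0.0)[0]))
```

Raising R toward the critical value makes the solver's convergence progressively worse, mirroring the existence issue noted above.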
=== Convection ===
A type of natural convection that can be described by the Navier–Stokes equation is the Rayleigh–Bénard convection. It is one of the most commonly studied convection phenomena because of its analytical and experimental accessibility.
== Exact solutions of the Navier–Stokes equations ==
Some exact solutions to the Navier–Stokes equations exist. Examples of degenerate cases, with the non-linear terms in the Navier–Stokes equations equal to zero, are Poiseuille flow, Couette flow and the oscillatory Stokes boundary layer. But also more interesting examples, solutions to the full non-linear equations, exist, such as Jeffery–Hamel flow, Von Kármán swirling flow, stagnation point flow, Landau–Squire jet, and Taylor–Green vortex. Time-dependent self-similar solutions of the three-dimensional incompressible Navier–Stokes equations in Cartesian coordinates can be given with the help of Kummer functions with quadratic arguments. For the compressible Navier–Stokes equations the time-dependent self-similar solutions are instead Whittaker functions, again with quadratic arguments, when the polytropic equation of state is used as a closing condition. Note that the existence of these exact solutions does not imply they are stable: turbulence may develop at higher Reynolds numbers.
Under additional assumptions, the component parts can be separated.
=== A three-dimensional steady-state vortex solution ===
A steady-state example with no singularities comes from considering the flow along the lines of a Hopf fibration. Let {\textstyle r} be a constant radius of the inner coil. One set of solutions is given by:
{\displaystyle {\begin{aligned}\rho (x,y,z)&={\frac {3B}{r^{2}+x^{2}+y^{2}+z^{2}}}\\p(x,y,z)&={\frac {-A^{2}B}{\left(r^{2}+x^{2}+y^{2}+z^{2}\right)^{3}}}\\\mathbf {u} (x,y,z)&={\frac {A}{\left(r^{2}+x^{2}+y^{2}+z^{2}\right)^{2}}}{\begin{pmatrix}2(-ry+xz)\\2(rx+yz)\\r^{2}-x^{2}-y^{2}+z^{2}\end{pmatrix}}\\g&=0\\\mu &=0\end{aligned}}}
for arbitrary constants {\textstyle A} and {\textstyle B}. This is a solution for a non-viscous gas (compressible fluid) whose density, velocities and pressure go to zero far from the origin. (Note this is not a solution to the Clay Millennium problem, because that refers to incompressible fluids, where {\textstyle \rho } is a constant, and neither does it deal with the uniqueness of the Navier–Stokes equations with respect to any turbulence properties.) It is also worth pointing out that the components of the velocity vector are exactly those from the Pythagorean quadruple parametrization. Other choices of density and pressure are possible with the same velocity field.
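The claimed balances can be verified mechanically. The sketch below (SymPy; it checks only the steady continuity and inviscid momentum equations implied by g = 0 and μ = 0) should reduce both residuals to zero if the solution above is correct:

```python
import sympy as sp

x, y, z, A, B, r = sp.symbols('x y z A B r', real=True, positive=True)
s = r**2 + x**2 + y**2 + z**2

rho = 3 * B / s
p = -A**2 * B / s**3
u = sp.Matrix([2 * (-r*y + x*z), 2 * (r*x + y*z),
               r**2 - x**2 - y**2 + z**2]) * (A / s**2)

X = (x, y, z)
# steady continuity: div(rho * u) = 0
div = sum(sp.diff(rho * u[i], X[i]) for i in range(3))
print(sp.simplify(div))  # -> 0

# steady inviscid momentum: rho * (u . grad)u + grad p = 0
conv = sp.Matrix([sum(u[j] * sp.diff(u[i], X[j]) for j in range(3))
                  for i in range(3)])
res = rho * conv + sp.Matrix([sp.diff(p, v) for v in X])
print(sp.simplify(res))  # -> zero vector
```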
=== Viscous three-dimensional periodic solutions ===
Two examples of periodic, fully three-dimensional viscous solutions are described below.
These solutions are defined on a three-dimensional torus {\displaystyle \mathbb {T} ^{3}=[0,L]^{3}} and are characterized by positive and negative helicity respectively.
The solution with positive helicity is given by:
{\displaystyle {\begin{aligned}u_{x}&={\frac {4{\sqrt {2}}}{3{\sqrt {3}}}}\,U_{0}\left[\,\sin(kx-\pi /3)\cos(ky+\pi /3)\sin(kz+\pi /2)-\cos(kz-\pi /3)\sin(kx+\pi /3)\sin(ky+\pi /2)\,\right]e^{-3\nu k^{2}t}\\u_{y}&={\frac {4{\sqrt {2}}}{3{\sqrt {3}}}}\,U_{0}\left[\,\sin(ky-\pi /3)\cos(kz+\pi /3)\sin(kx+\pi /2)-\cos(kx-\pi /3)\sin(ky+\pi /3)\sin(kz+\pi /2)\,\right]e^{-3\nu k^{2}t}\\u_{z}&={\frac {4{\sqrt {2}}}{3{\sqrt {3}}}}\,U_{0}\left[\,\sin(kz-\pi /3)\cos(kx+\pi /3)\sin(ky+\pi /2)-\cos(ky-\pi /3)\sin(kz+\pi /3)\sin(kx+\pi /2)\,\right]e^{-3\nu k^{2}t}\end{aligned}}}
where {\displaystyle k=2\pi /L} is the wave number and the velocity components are normalized so that the average kinetic energy per unit of mass is {\displaystyle U_{0}^{2}/2} at {\displaystyle t=0}.
The pressure field is obtained from the velocity field as {\displaystyle p=p_{0}-\rho _{0}\|{\boldsymbol {u}}\|^{2}/2} (where {\displaystyle p_{0}} and {\displaystyle \rho _{0}} are reference values for the pressure and density fields respectively).
Since both solutions belong to the class of Beltrami flow, the vorticity field is parallel to the velocity and, for the case with positive helicity, is given by {\displaystyle \omega ={\sqrt {3}}\,k\,{\boldsymbol {u}}}.
These solutions can be regarded as a three-dimensional generalization of the classic two-dimensional Taylor–Green vortex.
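The stated properties, incompressibility and the Beltrami alignment of vorticity with velocity, can likewise be checked symbolically. A sketch assuming SymPy (the helper function exploiting the cyclic structure of the components is my own shorthand); both printed expressions should simplify to zero:

```python
import sympy as sp

x, y, z, t, k, nu, U0 = sp.symbols('x y z t k nu U0', real=True, positive=True)
amp = 4 * sp.sqrt(2) / (3 * sp.sqrt(3)) * U0 * sp.exp(-3 * nu * k**2 * t)

def comp(a, b, c):
    # common cyclic pattern of the positive-helicity components
    return amp * (sp.sin(k*a - sp.pi/3) * sp.cos(k*b + sp.pi/3) * sp.sin(k*c + sp.pi/2)
                  - sp.cos(k*c - sp.pi/3) * sp.sin(k*a + sp.pi/3) * sp.sin(k*b + sp.pi/2))

ux, uy, uz = comp(x, y, z), comp(y, z, x), comp(z, x, y)

div = sp.diff(ux, x) + sp.diff(uy, y) + sp.diff(uz, z)
print(sp.simplify(sp.expand_trig(div)))        # -> 0 (incompressible)

curl = sp.Matrix([sp.diff(uz, y) - sp.diff(uy, z),
                  sp.diff(ux, z) - sp.diff(uz, x),
                  sp.diff(uy, x) - sp.diff(ux, y)])
beltrami = curl - sp.sqrt(3) * k * sp.Matrix([ux, uy, uz])
print(sp.simplify(sp.expand_trig(beltrami)))   # -> zero vector (Beltrami)
```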
== Wyld diagrams ==
Wyld diagrams are bookkeeping graphs that correspond to the Navier–Stokes equations via a perturbation expansion of the fundamental continuum mechanics. Similar to the Feynman diagrams in quantum field theory, these diagrams are an extension of Mstislav Keldysh's technique for nonequilibrium processes in fluid dynamics. In other words, these diagrams assign graphs to the (often) turbulent phenomena in turbulent fluids by allowing correlated and interacting fluid particles to obey stochastic processes associated to pseudo-random functions in probability distributions.
== Representations in 3D ==
Note that the formulas in this section make use of the single-line notation for partial derivatives, where, e.g., {\textstyle \partial _{x}u} means the partial derivative of {\textstyle u} with respect to {\textstyle x}, and {\textstyle \partial _{y}^{2}f_{\theta }} means the second-order partial derivative of {\textstyle f_{\theta }} with respect to {\textstyle y}.
A 2022 paper provides a less costly, dynamical and recurrent solution of the Navier–Stokes equation for 3D turbulent fluid flows. On suitably short time scales, the dynamics of turbulence is deterministic.
=== Cartesian coordinates ===
From the general form of the Navier–Stokes equations, with the velocity vector expanded as {\textstyle \mathbf {u} =(u_{x},u_{y},u_{z})}, sometimes respectively named {\textstyle u}, {\textstyle v}, {\textstyle w}, we may write the vector equation explicitly:
{\displaystyle {\begin{aligned}x:\ &\rho \left({\partial _{t}u_{x}}+u_{x}\,{\partial _{x}u_{x}}+u_{y}\,{\partial _{y}u_{x}}+u_{z}\,{\partial _{z}u_{x}}\right)\\&\quad =-\partial _{x}p+\mu \left({\partial _{x}^{2}u_{x}}+{\partial _{y}^{2}u_{x}}+{\partial _{z}^{2}u_{x}}\right)+{\frac {1}{3}}\mu \ \partial _{x}\left({\partial _{x}u_{x}}+{\partial _{y}u_{y}}+{\partial _{z}u_{z}}\right)+\rho g_{x}\\\end{aligned}}}
{\displaystyle {\begin{aligned}y:\ &\rho \left({\partial _{t}u_{y}}+u_{x}{\partial _{x}u_{y}}+u_{y}{\partial _{y}u_{y}}+u_{z}{\partial _{z}u_{y}}\right)\\&\quad =-{\partial _{y}p}+\mu \left({\partial _{x}^{2}u_{y}}+{\partial _{y}^{2}u_{y}}+{\partial _{z}^{2}u_{y}}\right)+{\frac {1}{3}}\mu \ \partial _{y}\left({\partial _{x}u_{x}}+{\partial _{y}u_{y}}+{\partial _{z}u_{z}}\right)+\rho g_{y}\\\end{aligned}}}
{\displaystyle {\begin{aligned}z:\ &\rho \left({\partial _{t}u_{z}}+u_{x}{\partial _{x}u_{z}}+u_{y}{\partial _{y}u_{z}}+u_{z}{\partial _{z}u_{z}}\right)\\&\quad =-{\partial _{z}p}+\mu \left({\partial _{x}^{2}u_{z}}+{\partial _{y}^{2}u_{z}}+{\partial _{z}^{2}u_{z}}\right)+{\frac {1}{3}}\mu \ \partial _{z}\left({\partial _{x}u_{x}}+{\partial _{y}u_{y}}+{\partial _{z}u_{z}}\right)+\rho g_{z}.\end{aligned}}}
Note that gravity has been accounted for as a body force, and the values of {\textstyle g_{x}}, {\textstyle g_{y}}, {\textstyle g_{z}} will depend on the orientation of gravity with respect to the chosen set of coordinates.
The continuity equation reads:
{\displaystyle \partial _{t}\rho +\partial _{x}(\rho u_{x})+\partial _{y}(\rho u_{y})+\partial _{z}(\rho u_{z})=0.}
When the flow is incompressible, {\textstyle \rho } does not change for any fluid particle, and its material derivative vanishes: {\textstyle {\frac {\mathrm {D} \rho }{\mathrm {D} t}}=0}. The continuity equation is reduced to:
{\displaystyle \partial _{x}u_{x}+\partial _{y}u_{y}+\partial _{z}u_{z}=0.}
Thus, for the incompressible version of the Navier–Stokes equation the second part of the viscous terms falls away (see Incompressible flow).
This system of four equations comprises the most commonly used and studied form. Though comparatively more compact than other representations, this is still a nonlinear system of partial differential equations for which solutions are difficult to obtain.
=== Cylindrical coordinates ===
A change of variables on the Cartesian equations will yield the following momentum equations for {\textstyle r}, {\textstyle \varphi }, and {\textstyle z}:
{\displaystyle {\begin{aligned}r:\ &\rho \left({\partial _{t}u_{r}}+u_{r}{\partial _{r}u_{r}}+{\frac {u_{\varphi }}{r}}{\partial _{\varphi }u_{r}}+u_{z}{\partial _{z}u_{r}}-{\frac {u_{\varphi }^{2}}{r}}\right)\\&\quad =-{\partial _{r}p}\\&\qquad +\mu \left({\frac {1}{r}}\partial _{r}\left(r{\partial _{r}u_{r}}\right)+{\frac {1}{r^{2}}}{\partial _{\varphi }^{2}u_{r}}+{\partial _{z}^{2}u_{r}}-{\frac {u_{r}}{r^{2}}}-{\frac {2}{r^{2}}}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +{\frac {1}{3}}\mu \partial _{r}\left({\frac {1}{r}}{\partial _{r}\left(ru_{r}\right)}+{\frac {1}{r}}{\partial _{\varphi }u_{\varphi }}+{\partial _{z}u_{z}}\right)\\&\qquad +\rho g_{r}\\[8px]\end{aligned}}}
{\displaystyle {\begin{aligned}\varphi :\ &\rho \left({\partial _{t}u_{\varphi }}+u_{r}{\partial _{r}u_{\varphi }}+{\frac {u_{\varphi }}{r}}{\partial _{\varphi }u_{\varphi }}+u_{z}{\partial _{z}u_{\varphi }}+{\frac {u_{r}u_{\varphi }}{r}}\right)\\&\quad =-{\frac {1}{r}}{\partial _{\varphi }p}\\&\qquad +\mu \left({\frac {1}{r}}\ \partial _{r}\left(r{\partial _{r}u_{\varphi }}\right)+{\frac {1}{r^{2}}}{\partial _{\varphi }^{2}u_{\varphi }}+{\partial _{z}^{2}u_{\varphi }}-{\frac {u_{\varphi }}{r^{2}}}+{\frac {2}{r^{2}}}{\partial _{\varphi }u_{r}}\right)\\&\qquad +{\frac {1}{3}}\mu {\frac {1}{r}}\partial _{\varphi }\left({\frac {1}{r}}{\partial _{r}\left(ru_{r}\right)}+{\frac {1}{r}}{\partial _{\varphi }u_{\varphi }}+{\partial _{z}u_{z}}\right)\\&\qquad +\rho g_{\varphi }\\[8px]\end{aligned}}}
{\displaystyle {\begin{aligned}z:\ &\rho \left({\partial _{t}u_{z}}+u_{r}{\partial _{r}u_{z}}+{\frac {u_{\varphi }}{r}}{\partial _{\varphi }u_{z}}+u_{z}{\partial _{z}u_{z}}\right)\\&\quad =-{\partial _{z}p}\\&\qquad +\mu \left({\frac {1}{r}}\partial _{r}\left(r{\partial _{r}u_{z}}\right)+{\frac {1}{r^{2}}}{\partial _{\varphi }^{2}u_{z}}+{\partial _{z}^{2}u_{z}}\right)\\&\qquad +{\frac {1}{3}}\mu \partial _{z}\left({\frac {1}{r}}{\partial _{r}\left(ru_{r}\right)}+{\frac {1}{r}}{\partial _{\varphi }u_{\varphi }}+{\partial _{z}u_{z}}\right)\\&\qquad +\rho g_{z}.\end{aligned}}}
The gravity components will generally not be constants, however for most applications either the coordinates are chosen so that the gravity components are constant or else it is assumed that gravity is counteracted by a pressure field (for example, flow in horizontal pipe is treated normally without gravity and without a vertical pressure gradient). The continuity equation is:
{\displaystyle {\partial _{t}\rho }+{\frac {1}{r}}\partial _{r}\left(\rho ru_{r}\right)+{\frac {1}{r}}{\partial _{\varphi }\left(\rho u_{\varphi }\right)}+{\partial _{z}\left(\rho u_{z}\right)}=0.}
This cylindrical representation of the incompressible Navier–Stokes equations is the second most commonly seen (the first being Cartesian above). Cylindrical coordinates are chosen to take advantage of symmetry, so that a velocity component can disappear. A very common case is axisymmetric flow with the assumption of no tangential velocity ({\textstyle u_{\varphi }=0}), and the remaining quantities are independent of {\textstyle \varphi }:
{\displaystyle {\begin{aligned}\rho \left({\partial _{t}u_{r}}+u_{r}{\partial _{r}u_{r}}+u_{z}{\partial _{z}u_{r}}\right)&=-{\partial _{r}p}+\mu \left({\frac {1}{r}}\partial _{r}\left(r{\partial _{r}u_{r}}\right)+{\partial _{z}^{2}u_{r}}-{\frac {u_{r}}{r^{2}}}\right)+\rho g_{r}\\\rho \left({\partial _{t}u_{z}}+u_{r}{\partial _{r}u_{z}}+u_{z}{\partial _{z}u_{z}}\right)&=-{\partial _{z}p}+\mu \left({\frac {1}{r}}\partial _{r}\left(r{\partial _{r}u_{z}}\right)+{\partial _{z}^{2}u_{z}}\right)+\rho g_{z}\\{\frac {1}{r}}\partial _{r}\left(ru_{r}\right)+{\partial _{z}u_{z}}&=0.\end{aligned}}}
=== Spherical coordinates ===
In spherical coordinates, the {\textstyle r}, {\textstyle \varphi }, and {\textstyle \theta } momentum equations are (note the convention used: {\textstyle \theta } is the polar angle, or colatitude, {\textstyle 0\leq \theta \leq \pi }):
{\displaystyle {\begin{aligned}r:\ &\rho \left({\partial _{t}u_{r}}+u_{r}{\partial _{r}u_{r}}+{\frac {u_{\varphi }}{r\sin \theta }}{\partial _{\varphi }u_{r}}+{\frac {u_{\theta }}{r}}{\partial _{\theta }u_{r}}-{\frac {u_{\varphi }^{2}+u_{\theta }^{2}}{r}}\right)\\&\quad =-{\partial _{r}p}\\&\qquad +\mu \left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}{\partial _{r}u_{r}}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\partial _{\varphi }^{2}u_{r}}+{\frac {1}{r^{2}\sin \theta }}\partial _{\theta }\left(\sin \theta {\partial _{\theta }u_{r}}\right)-2{\frac {u_{r}+{\partial _{\theta }u_{\theta }}+u_{\theta }\cot \theta }{r^{2}}}-{\frac {2}{r^{2}\sin \theta }}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +{\frac {1}{3}}\mu \partial _{r}\left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}u_{r}\right)+{\frac {1}{r\sin \theta }}\partial _{\theta }\left(u_{\theta }\sin \theta \right)+{\frac {1}{r\sin \theta }}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +\rho g_{r}\\[8px]\end{aligned}}}
{\displaystyle {\begin{aligned}\varphi :\ &\rho \left({\partial _{t}u_{\varphi }}+u_{r}{\partial _{r}u_{\varphi }}+{\frac {u_{\varphi }}{r\sin \theta }}{\partial _{\varphi }u_{\varphi }}+{\frac {u_{\theta }}{r}}{\partial _{\theta }u_{\varphi }}+{\frac {u_{r}u_{\varphi }+u_{\varphi }u_{\theta }\cot \theta }{r}}\right)\\&\quad =-{\frac {1}{r\sin \theta }}{\partial _{\varphi }p}\\&\qquad +\mu \left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}{\partial _{r}u_{\varphi }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\partial _{\varphi }^{2}u_{\varphi }}+{\frac {1}{r^{2}\sin \theta }}\partial _{\theta }\left(\sin \theta {\partial _{\theta }u_{\varphi }}\right)+{\frac {2\sin \theta {\partial _{\varphi }u_{r}}+2\cos \theta {\partial _{\varphi }u_{\theta }}-u_{\varphi }}{r^{2}\sin ^{2}\theta }}\right)\\&\qquad +{\frac {1}{3}}\mu {\frac {1}{r\sin \theta }}\partial _{\varphi }\left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}u_{r}\right)+{\frac {1}{r\sin \theta }}\partial _{\theta }\left(u_{\theta }\sin \theta \right)+{\frac {1}{r\sin \theta }}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +\rho g_{\varphi }\\[8px]\end{aligned}}}
{\displaystyle {\begin{aligned}\theta :\ &\rho \left({\partial _{t}u_{\theta }}+u_{r}{\partial _{r}u_{\theta }}+{\frac {u_{\varphi }}{r\sin \theta }}{\partial _{\varphi }u_{\theta }}+{\frac {u_{\theta }}{r}}{\partial _{\theta }u_{\theta }}+{\frac {u_{r}u_{\theta }-u_{\varphi }^{2}\cot \theta }{r}}\right)\\&\quad =-{\frac {1}{r}}{\partial _{\theta }p}\\&\qquad +\mu \left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}{\partial _{r}u_{\theta }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\partial _{\varphi }^{2}u_{\theta }}+{\frac {1}{r^{2}\sin \theta }}\partial _{\theta }\left(\sin \theta {\partial _{\theta }u_{\theta }}\right)+{\frac {2}{r^{2}}}{\partial _{\theta }u_{r}}-{\frac {u_{\theta }+2\cos \theta {\partial _{\varphi }u_{\varphi }}}{r^{2}\sin ^{2}\theta }}\right)\\&\qquad +{\frac {1}{3}}\mu {\frac {1}{r}}\partial _{\theta }\left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}u_{r}\right)+{\frac {1}{r\sin \theta }}\partial _{\theta }\left(u_{\theta }\sin \theta \right)+{\frac {1}{r\sin \theta }}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +\rho g_{\theta }.\end{aligned}}}
Mass continuity will read:
{\displaystyle {\partial _{t}\rho }+{\frac {1}{r^{2}}}\partial _{r}\left(\rho r^{2}u_{r}\right)+{\frac {1}{r\sin \theta }}{\partial _{\varphi }(\rho u_{\varphi })}+{\frac {1}{r\sin \theta }}\partial _{\theta }\left(\sin \theta \rho u_{\theta }\right)=0.}
These equations could be (slightly) compacted by, for example, factoring {\textstyle {\frac {1}{r^{2}}}} from the viscous terms. However, doing so would undesirably alter the structure of the Laplacian and other quantities.
== See also ==
== Citations ==
== General references ==
Acheson, D. J. (1990), Elementary Fluid Dynamics, Oxford Applied Mathematics and Computing Science Series, Oxford University Press, ISBN 978-0-19-859679-0
Batchelor, G. K. (1967), An Introduction to Fluid Dynamics, Cambridge University Press, ISBN 978-0-521-66396-0
Currie, I. G. (1974), Fundamental Mechanics of Fluids, McGraw-Hill, ISBN 978-0-07-015000-3
V. Girault and P. A. Raviart. Finite Element Methods for Navier–Stokes Equations: Theory and Algorithms. Springer Series in Computational Mathematics. Springer-Verlag, 1986.
Landau, L. D.; Lifshitz, E. M. (1987), Fluid mechanics, vol. Course of Theoretical Physics Volume 6 (2nd revised ed.), Pergamon Press, ISBN 978-0-08-033932-0, OCLC 15017127
Polyanin, A. D.; Kutepov, A. M.; Vyazmin, A. V.; Kazenin, D. A. (2002), Hydrodynamics, Mass and Heat Transfer in Chemical Engineering, Taylor & Francis, London, ISBN 978-0-415-27237-7
Rhyming, Inge L. (1991), Dynamique des fluides, Presses polytechniques et universitaires romandes
Smits, Alexander J. (2014), A Physical Introduction to Fluid Mechanics, Wiley, ISBN 0-471-25349-9
Temam, Roger (1984): Navier–Stokes Equations: Theory and Numerical Analysis, ACM Chelsea Publishing, ISBN 978-0-8218-2737-6
Milne-Thomson, L.M. C.B.E (1962), Theoretical Hydrodynamics, Macmillan & Co Ltd.
Tartar, L (2006), An Introduction to Navier Stokes Equation and Oceanography, Springer ISBN 3-540-35743-2
Birkhoff, Garrett (1960), Hydrodynamics, Princeton University Press
Campos, D.(Editor) (2017) Handbook on Navier-Stokes Equations Theory and Applied Analysis, Nova Science Publisher ISBN 978-1-53610-292-5
Doering, C. R. and Gibbon, J. D. (1995) Applied Analysis of the Navier–Stokes Equations, Cambridge University Press, ISBN 0-521-44557-1
Basset, A.B. (1888) Hydrodynamics Volume I and II, Cambridge: Delighton, Bell and Co
Fox, R. W. McDonald, A.T. and Pritchard, P.J. (2004) Introduction to Fluid Mechanics, John Wiley and Sons, ISBN 0-471-2023-2
Foias, C., Manley, O., Rosa, R. and Temam, R. (2004) Navier–Stokes Equations and Turbulence, Cambridge University Press, ISBN 0-521-36032-3
Lions, P-L. (1998) Mathematical Topics in Fluid Mechanics Volume 1 and 2, Clarendon Press, ISBN 0-19-851488-3
Deville, M.O. and Gatski, T. B. (2012) Mathematical Modeling for Complex Fluids and Flows, Springer, ISBN 978-3-642-25294-5
Kochin, N.E. Kibel, I.A. and Roze, N.V. (1964) Theoretical Hydromechanics, John Wiley & Sons, Ltd.
Lamb, H. (1879) Hydrodynamics, Cambridge University Press,
White, Frank M. (2006), Viscous Fluid Flow, McGraw-Hill, ISBN 978-0-07-124493-0
== External links ==
Simplified derivation of the Navier–Stokes equations
Three-dimensional unsteady form of the Navier–Stokes equations Glenn Research Center, NASA
Paul Erlich (born 1972) is a guitarist and music theorist living near Boston, Massachusetts. He is known for his seminal role in developing the theory of regular temperaments, including being the first to define pajara temperament and its decatonic scales in 22-ET. He holds a Bachelor of Science degree in physics from Yale University.
His definition of harmonic entropy, a refinement of a model by van Eck influenced by Ernst Terhardt, has received attention from music theorists such as William Sethares. It is intended to model one of the components of dissonance as a measure of the uncertainty of the virtual pitch ("missing fundamental") evoked by a set of two or more pitches. This measures how easy or difficult it is to fit the pitches into a single harmonic series. For example, most listeners rank a 4:5:6:7 harmonic seventh chord as far more consonant than its inversion, 1/(4:5:6:7). Both have exactly the same set of intervals between the notes, under inversion, but the first one is easy to fit into a single harmonic series (overtones rather than undertones). In the harmonic series, the integers are much lower for the harmonic seventh chord, 4:5:6:7, versus its inverse, 105:120:140:168. Components of dissonance not modeled by this theory include critical band roughness as well as tonal context (e.g. an augmented second is more dissonant than a minor third even though both can be tuned to the same size, as in 12-ET).
For the {\displaystyle n}th iteration of the Farey diagram, each element {\displaystyle f_{j}=a_{j}/b_{j}} is assigned the span between two mediants: the mediant with the next highest element,
{\displaystyle {\frac {a_{j}+a_{j+1}}{b_{j}+b_{j+1}}},}
minus the mediant with the next lowest element,
{\displaystyle {\frac {a_{j-1}+a_{j}}{b_{j-1}+b_{j}}}.}
From here, the process to compute harmonic entropy is as follows:
(a) compute the areas defined by the normal (Gaussian) bell curve on top, and the mediants on the sides
(b) normalize the sum of the areas to add to 1, such that each represents a probability
(c) calculate the entropy of that set of probabilities
See external links for a detailed description of the model of harmonic entropy.
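As a toy illustration of steps (a)-(c), the sketch below computes a harmonic-entropy-like value for a real number in (0, 1) using a Farey sequence; the order n, the Gaussian width s, and the rectangle approximation of the areas are my own simplifications, not Erlich's exact model:

```python
from fractions import Fraction
from math import exp, log

def farey(n):
    """Ascending Farey sequence of order n on [0, 1]."""
    return sorted({Fraction(a, b) for b in range(1, n + 1)
                   for a in range(0, b + 1)})

def harmonic_entropy(x, n=50, s=0.01):
    """Entropy of the probabilities that x 'is' each Farey ratio."""
    f = farey(n)
    weights = []
    for j in range(1, len(f) - 1):
        # mediants with the neighbors give each ratio its span
        lo = Fraction(f[j - 1].numerator + f[j].numerator,
                      f[j - 1].denominator + f[j].denominator)
        hi = Fraction(f[j].numerator + f[j + 1].numerator,
                      f[j].denominator + f[j + 1].denominator)
        width = float(hi - lo)
        g = exp(-((x - float(f[j])) ** 2) / (2 * s * s))  # Gaussian height
        weights.append(width * g)                          # (a) area
    total = sum(weights)
    probs = [w / total for w in weights if w > 0]          # (b) normalize
    return -sum(p * log(p) for p in probs)                 # (c) entropy

print(harmonic_entropy(0.5), harmonic_entropy(0.493))
```

Simple ratios such as 1/2 sit in entropy minima because their wide mediant spans concentrate the probability mass; detuning the input raises the entropy.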
== Notes ==
== References ==
== External links ==
"Some music theory from Paul Erlich", Lumma.org.
"A Middle Path: Between Just Intonation and the Equal Temperaments", DKeenan.com.
"Harmonic Entropy on the Xenharmonic Wiki", en.xen.wiki | Wikipedia/Harmonic_entropy |
Reaction–diffusion systems are mathematical models that correspond to several physical phenomena. The most common is the change in space and time of the concentration of one or more chemical substances: local chemical reactions in which the substances are transformed into each other, and diffusion which causes the substances to spread out over a surface in space.
Reaction–diffusion systems are naturally applied in chemistry. However, the system can also describe dynamical processes of non-chemical nature. Examples are found in biology, geology, physics (neutron diffusion theory) and ecology. Mathematically, reaction–diffusion systems take the form of semi-linear parabolic partial differential equations. They can be represented in the general form
{\displaystyle \partial _{t}{\boldsymbol {q}}={\underline {\underline {\boldsymbol {D}}}}\,\nabla ^{2}{\boldsymbol {q}}+{\boldsymbol {R}}({\boldsymbol {q}}),}
where q(x, t) represents the unknown vector function, D is a diagonal matrix of diffusion coefficients, and R accounts for all local reactions. The solutions of reaction–diffusion equations display a wide range of behaviours, including the formation of travelling waves and wave-like phenomena as well as other self-organized patterns like stripes, hexagons or more intricate structures like dissipative solitons. Such patterns have been dubbed "Turing patterns". Each function for which a reaction–diffusion equation holds represents, in fact, a concentration variable.
== One-component reaction–diffusion equations ==
The simplest reaction–diffusion equation, in one spatial dimension in plane geometry,
{\displaystyle \partial _{t}u=D\partial _{x}^{2}u+R(u),}
is also referred to as the Kolmogorov–Petrovsky–Piskunov equation. If the reaction term vanishes, then the equation represents a pure diffusion process; the corresponding equation is Fick's second law. The choice R(u) = u(1 − u) yields Fisher's equation, which was originally used to describe the spreading of biological populations; the Newell–Whitehead–Segel equation with R(u) = u(1 − u²) describes Rayleigh–Bénard convection; and the more general Zeldovich–Frank-Kamenetskii equation with R(u) = u(1 − u)e^(−β(1 − u)) and 0 < β < ∞ (Zeldovich number) arises in combustion theory. Its particular degenerate case with R(u) = u² − u³ is sometimes referred to as the Zeldovich equation as well.
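A short numerical sketch of the Fisher case (explicit finite differences; the grid, time step, and initial condition are illustrative choices of mine) shows the characteristic travelling front:

```python
import numpy as np

# Fisher's equation du/dt = D d2u/dx2 + u(1 - u) on [0, L]
D, L, N = 1.0, 100.0, 501
dt, steps = 0.01, 3000            # explicit scheme: D*dt/dx**2 = 0.25 < 0.5
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
u = np.where(x < 10.0, 1.0, 0.0)  # populated region on the left

for _ in range(steps):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
    lap[0], lap[-1] = lap[1], lap[-2]      # crude zero-flux boundaries
    u = u + dt * (D * lap + u * (1.0 - u))

# the front relaxes to the minimal wave speed c = 2*sqrt(D) = 2,
# so after t = 30 it should sit near x = 10 + 2*30 = 70
front = x[np.argmin(np.abs(u - 0.5))]
print(f"front position at t = {steps * dt:.0f}: x = {front:.1f}")
```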
The dynamics of one-component systems is subject to certain restrictions as the evolution equation can also be written in the variational form
{\displaystyle \partial _{t}u=-{\frac {\delta {\mathfrak {L}}}{\delta u}}}
and therefore describes a permanent decrease of the "free energy" {\displaystyle {\mathfrak {L}}} given by the functional
{\displaystyle {\mathfrak {L}}=\int _{-\infty }^{\infty }\left[{\tfrac {D}{2}}\left(\partial _{x}u\right)^{2}-V(u)\right]\,{\text{d}}x}
with a potential V(u) such that R(u) = dV(u)/du.
In systems with more than one stationary homogeneous solution, a typical solution is given by travelling fronts connecting the homogeneous states. These solutions move with constant speed without changing their shape and are of the form u(x, t) = û(ξ) with ξ = x − ct, where c is the speed of the travelling wave. Note that while travelling waves are generically stable structures, all non-monotonous stationary solutions (e.g. localized domains composed of a front-antifront pair) are unstable. For c = 0, there is a simple proof for this statement: if u0(x) is a stationary solution and u = u0(x) + ũ(x, t) is an infinitesimally perturbed solution, linear stability analysis yields the equation
{\displaystyle \partial _{t}{\tilde {u}}=D\partial _{x}^{2}{\tilde {u}}-U(x){\tilde {u}},\qquad U(x)=-R^{\prime }(u){\Big |}_{u=u_{0}(x)}.}
With the ansatz ũ = ψ(x)exp(−λt) we arrive at the eigenvalue problem
{\displaystyle {\hat {H}}\psi =\lambda \psi ,\qquad {\hat {H}}=-D\partial _{x}^{2}+U(x),}
of Schrödinger type, where negative eigenvalues result in the instability of the solution. Due to translational invariance, ψ = ∂x u0(x) is a neutral eigenfunction with the eigenvalue λ = 0, and all other eigenfunctions can be sorted according to an increasing number of nodes, with the magnitude of the corresponding real eigenvalue increasing monotonically with the number of zeros. The eigenfunction ψ = ∂x u0(x) should have at least one zero, and for a non-monotonic stationary solution the corresponding eigenvalue λ = 0 cannot be the lowest one, thereby implying instability.
To determine the velocity c of a moving front, one may go to a moving coordinate system and look at stationary solutions:
{\displaystyle D\partial _{\xi }^{2}{\hat {u}}(\xi )+c\partial _{\xi }{\hat {u}}(\xi )+R({\hat {u}}(\xi ))=0.}
This equation has a nice mechanical analogue: the motion of a mass D with position û in the course of the "time" ξ, under the force R, with the damping coefficient c. This analogue allows for a rather illustrative access to the construction of different types of solutions and to the determination of c.
When going from one to more space dimensions, a number of statements from one-dimensional systems can still be applied. Planar or curved wave fronts are typical structures, and a new effect arises as the local velocity of a curved front becomes dependent on the local radius of curvature (this can be seen by going to polar coordinates). This phenomenon leads to the so-called curvature-driven instability.
== Two-component reaction–diffusion equations ==
Two-component systems allow for a much larger range of possible phenomena than their one-component counterparts. An important idea that was first proposed by Alan Turing is that a state that is stable in the local system can become unstable in the presence of diffusion.
A linear stability analysis, however, shows that when linearizing the general two-component system
{\displaystyle {\begin{pmatrix}\partial _{t}u\\\partial _{t}v\end{pmatrix}}={\begin{pmatrix}D_{u}&0\\0&D_{v}\end{pmatrix}}{\begin{pmatrix}\partial _{xx}u\\\partial _{xx}v\end{pmatrix}}+{\begin{pmatrix}F(u,v)\\G(u,v)\end{pmatrix}}}
a plane wave perturbation
{\displaystyle {\tilde {\boldsymbol {q}}}_{\boldsymbol {k}}({\boldsymbol {x}},t)={\begin{pmatrix}{\tilde {u}}(t)\\{\tilde {v}}(t)\end{pmatrix}}e^{i{\boldsymbol {k}}\cdot {\boldsymbol {x}}}}
of the stationary homogeneous solution will satisfy
{\displaystyle {\begin{pmatrix}\partial _{t}{\tilde {u}}_{\boldsymbol {k}}(t)\\\partial _{t}{\tilde {v}}_{\boldsymbol {k}}(t)\end{pmatrix}}=-k^{2}{\begin{pmatrix}D_{u}{\tilde {u}}_{\boldsymbol {k}}(t)\\D_{v}{\tilde {v}}_{\boldsymbol {k}}(t)\end{pmatrix}}+{\boldsymbol {R}}^{\prime }{\begin{pmatrix}{\tilde {u}}_{\boldsymbol {k}}(t)\\{\tilde {v}}_{\boldsymbol {k}}(t)\end{pmatrix}}.}
Turing's idea can only be realized in four equivalence classes of systems characterized by the signs of the Jacobian R′ of the reaction function. In particular, if a finite wave vector k is supposed to be the most unstable one, the Jacobian must have the signs
{\displaystyle {\begin{pmatrix}+&-\\+&-\end{pmatrix}},\quad {\begin{pmatrix}+&+\\-&-\end{pmatrix}},\quad {\begin{pmatrix}-&+\\-&+\end{pmatrix}},\quad {\begin{pmatrix}-&-\\+&+\end{pmatrix}}.}
This class of systems is named activator-inhibitor system after its first representative: close to the ground state, one component stimulates the production of both components while the other one inhibits their growth. Its most prominent representative is the FitzHugh–Nagumo equation
{\displaystyle {\begin{aligned}\partial _{t}u&=d_{u}^{2}\,\nabla ^{2}u+f(u)-\sigma v,\\\tau \partial _{t}v&=d_{v}^{2}\,\nabla ^{2}v+u-v\end{aligned}}}
with f(u) = λu − u³ − κ, which describes how an action potential travels through a nerve. Here, du, dv, τ, σ and λ are positive constants.
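A one-dimensional numerical sketch of this system (explicit Euler finite differences with periodic boundaries; all parameter values are illustrative guesses of mine, not tuned to a particular regime, so one may see fronts, pulses, or decay):

```python
import numpy as np

# FitzHugh-Nagumo reaction-diffusion, f(u) = lam*u - u**3 - kappa
du2, dv2, tau, sig, lam, kappa = 1.0, 4.0, 10.0, 1.0, 3.0, 0.0
L, N, dt, steps = 200.0, 400, 0.01, 20000
dx = L / N

u = np.zeros(N)
v = np.zeros(N)
u[:20] = 2.0  # local super-threshold perturbation

def lap(f):
    """Periodic 1D Laplacian."""
    return (np.roll(f, 1) - 2.0 * f + np.roll(f, -1)) / dx ** 2

for _ in range(steps):
    un = u + dt * (du2 * lap(u) + lam * u - u ** 3 - kappa - sig * v)
    vn = v + dt * (dv2 * lap(v) + u - v) / tau   # tau * dv/dt = ...
    u, v = un, vn

print("max u =", u.max(), " min u =", u.min())
```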
When an activator-inhibitor system undergoes a change of parameters, one may pass from conditions under which a homogeneous ground state is stable to conditions under which it is linearly unstable. The corresponding bifurcation may be either a Hopf bifurcation to a globally oscillating homogeneous state with a dominant wave number k = 0 or a Turing bifurcation to a globally patterned state with a dominant finite wave number. The latter in two spatial dimensions typically leads to stripe or hexagonal patterns.
Subcritical Turing bifurcation: formation of a hexagonal pattern from noisy initial conditions in the above two-component reaction–diffusion system of Fitzhugh–Nagumo type.
For the Fitzhugh–Nagumo example, the neutral stability curves marking the boundary of the linearly stable region for the Turing and Hopf bifurcation are given by
{\displaystyle {\begin{aligned}q_{\text{n}}^{H}(k):&{}\quad {\frac {1}{\tau }}+\left(d_{u}^{2}+{\frac {1}{\tau }}d_{v}^{2}\right)k^{2}&=f^{\prime }(u_{h}),\\[6pt]q_{\text{n}}^{T}(k):&{}\quad {\frac {\kappa }{1+d_{v}^{2}k^{2}}}+d_{u}^{2}k^{2}&=f^{\prime }(u_{h}).\end{aligned}}}
If the bifurcation is subcritical, often localized structures (dissipative solitons) can be observed in the hysteretic region where the pattern coexists with the ground state. Other frequently encountered structures comprise pulse trains (also known as periodic travelling waves), spiral waves and target patterns. These three solution types are also generic features of two- (or more-) component reaction–diffusion equations in which the local dynamics have a stable limit cycle.
Other patterns found in the above two-component reaction–diffusion system of Fitzhugh–Nagumo type.
== Three- and more-component reaction–diffusion equations ==
For a variety of systems, reaction–diffusion equations with more than two components have been proposed, e.g. the Belousov–Zhabotinsky reaction, for blood clotting, fission waves or planar gas discharge systems.
It is known that systems with more components allow for a variety of phenomena not possible in systems with one or two components (e.g. stable running pulses in more than one spatial dimension without global feedback). An introduction and systematic overview of the possible phenomena, depending on the properties of the underlying system, is given in the literature.
== Applications and universality ==
In recent times, reaction–diffusion systems have attracted much interest as a prototype model for pattern formation. The above-mentioned patterns (fronts, spirals, targets, hexagons, stripes and dissipative solitons) can be found in various types of reaction–diffusion systems in spite of large discrepancies e.g. in the local reaction terms. It has also been argued that reaction–diffusion processes are an essential basis for processes connected to morphogenesis in biology and may even be related to animal coats and skin pigmentation. Other applications of reaction–diffusion equations include ecological invasions, spread of epidemics, tumour growth, dynamics of fission waves, wound healing and visual hallucinations. Another reason for the interest in reaction–diffusion systems is that although they are nonlinear partial differential equations, there are often possibilities for an analytical treatment.
== Experiments ==
Well-controllable experiments in chemical reaction–diffusion systems have up to now been realized in three ways. First, gel reactors or filled capillary tubes may be used. Second, temperature pulses on catalytic surfaces have been investigated. Third, the propagation of running nerve pulses is modelled using reaction–diffusion systems.
Aside from these generic examples, it has turned out that under appropriate circumstances electric transport systems like plasmas or semiconductors can be described in a reaction–diffusion approach. For these systems various experiments on pattern formation have been carried out.
== Numerical treatments ==
A reaction–diffusion system can be solved by using methods of numerical mathematics, and several numerical treatments exist in the research literature. Numerical solution methods for complex geometries have also been proposed. Reaction–diffusion systems are described at the highest degree of detail with particle-based simulation tools like SRSim or ReaDDy, which employ, among other methods, reversible interacting-particle reaction dynamics.
== See also ==
Autowave
Diffusion-controlled reaction
Chemical kinetics
Phase space method
Autocatalytic reactions and order creation
Pattern formation
Patterns in nature
Periodic travelling wave
Self-similar solutions
Diffusion equation
Stochastic geometry
MClone
The Chemical Basis of Morphogenesis
Turing pattern
Multi-state modeling of biomolecules
== Examples ==
Fisher's equation
Zeldovich–Frank-Kamenetskii equation
FitzHugh–Nagumo model
Wrinkle paint
== References ==
== External links ==
Reaction–Diffusion by the Gray–Scott Model: Pearson's parameterization, a visual map of the parameter space of Gray–Scott reaction diffusion.
A thesis on reaction–diffusion patterns with an overview of the field
RD Tool: an interactive web application for reaction-diffusion simulation
In thermodynamics, the entropy of vaporization is the increase in entropy upon vaporization of a liquid. This is always positive, since the degree of disorder increases in the transition from a liquid in a relatively small volume to a vapor or gas occupying a much larger space. At standard pressure
{\displaystyle P^{\ominus }} = 1 bar, the value is denoted as {\displaystyle \Delta S_{\text{vap}}^{\ominus }} and normally expressed in joules per mole-kelvin, J/(mol·K).
For a phase transition such as vaporization or fusion (melting), both phases may coexist in equilibrium at constant temperature and pressure, in which case the difference in Gibbs free energy is equal to zero:
{\displaystyle \Delta G_{\text{vap}}=\Delta H_{\text{vap}}-T_{\text{vap}}\times \Delta S_{\text{vap}}=0,}
where {\displaystyle \Delta H_{\text{vap}}} is the heat or enthalpy of vaporization. Since this is a thermodynamic equation, the symbol {\displaystyle T} refers to the absolute thermodynamic temperature, measured in kelvins (K). The entropy of vaporization is then equal to the heat of vaporization divided by the boiling point:
{\displaystyle \Delta S_{\text{vap}}={\frac {\Delta H_{\text{vap}}}{T_{\text{vap}}}}.}
According to Trouton's rule, the entropy of vaporization (at standard pressure) of most liquids has similar values. The typical value is variously given as 85 J/(mol·K), 88 J/(mol·K) and 90 J/(mol·K). Hydrogen-bonded liquids have somewhat higher values of {\displaystyle \Delta S_{\text{vap}}^{\ominus }}.
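A two-line calculation makes the rule concrete (the enthalpy and boiling-point figures below are approximate textbook values I am supplying, not data from this article):

```python
# Entropy of vaporization, dS_vap = dH_vap / T_b, in J/(mol K)
liquids = {
    #           dH_vap [J/mol]  T_b [K]
    "benzene": (30_720.0,       353.2),
    "water":   (40_660.0,       373.15),
}
for name, (dH, Tb) in liquids.items():
    print(f"{name}: {dH / Tb:.1f} J/(mol K)")
# benzene lands near Trouton's 85-90 J/(mol K); hydrogen-bonded water
# comes out noticeably higher, around 109 J/(mol K)
```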
== See also ==
Entropy of fusion
== References ==
A small-world network is a graph characterized by a high clustering coefficient and low distances. In an example of the social network, high clustering implies the high probability that two friends of one person are friends themselves. The low distances, on the other hand, mean that there is a short chain of social connections between any two people (this effect is known as six degrees of separation). Specifically, a small-world network is defined to be a network where the typical distance L between two randomly chosen nodes (the number of steps required) grows proportionally to the logarithm of the number of nodes N in the network, that is:
{\displaystyle L\propto \log N}
while the global clustering coefficient is not small.
In the context of a social network, this results in the small world phenomenon of strangers being linked by a short chain of acquaintances. Many empirical graphs show the small-world effect, including social networks, wikis such as Wikipedia, gene networks, and even the underlying architecture of the Internet. It is the inspiration for many network-on-chip architectures in contemporary computer hardware.
A certain category of small-world networks were identified as a class of random graphs by Duncan Watts and Steven Strogatz in 1998. They noted that graphs could be classified according to two independent structural features, namely the clustering coefficient, and average node-to-node distance (also known as average shortest path length). Purely random graphs, built according to the Erdős–Rényi (ER) model, exhibit a small average shortest path length (varying typically as the logarithm of the number of nodes) along with a small clustering coefficient. Watts and Strogatz measured that in fact many real-world networks have a small average shortest path length, but also a clustering coefficient significantly higher than expected by random chance. Watts and Strogatz then proposed a novel graph model, currently named the Watts and Strogatz model, with (i) a small average shortest path length, and (ii) a large clustering coefficient. The crossover in the Watts–Strogatz model between a "large world" (such as a lattice) and a small world was first described by Barthelemy and Amaral in 1999. This work was followed by many studies, including exact results (Barrat and Weigt, 1999; Dorogovtsev and Mendes; Barmpoutis and Murray, 2010).
== Properties of small-world networks ==
Small-world networks tend to contain cliques, and near-cliques, meaning sub-networks which have connections between almost any two nodes within them. This follows from the defining property of a high clustering coefficient. Secondly, most pairs of nodes will be connected by at least one short path. This follows from the defining property that the mean-shortest path length be small. Several other properties are often associated with small-world networks. Typically there is an over-abundance of hubs – nodes in the network with a high number of connections (known as high degree nodes). These hubs serve as the common connections mediating the short path lengths between other edges. By analogy, the small-world network of airline flights has a small mean-path length (i.e. between any two cities you are likely to have to take three or fewer flights) because many flights are routed through hub cities. This property is often analyzed by considering the fraction of nodes in the network that have a particular number of connections going into them (the degree distribution of the network). Networks with a greater than expected number of hubs will have a greater fraction of nodes with high degree, and consequently the degree distribution will be enriched at high degree values. This is known colloquially as a fat-tailed distribution. Graphs of very different topology qualify as small-world networks as long as they satisfy the two definitional requirements above.
Network small-worldness has been quantified by a small-world coefficient, {\displaystyle \sigma }, calculated by comparing the clustering and path length of a given network to an Erdős–Rényi model with the same degree on average:
{\displaystyle \sigma ={\frac {\frac {C}{C_{r}}}{\frac {L}{L_{r}}}}}
If {\displaystyle \sigma >1} ({\textstyle C\gg C_{r}} and {\textstyle L\approx {L_{r}}}), the network is small-world.
), network is small-world. However, this metric is known to perform poorly because it is heavily influenced by the network's size.
Another method for quantifying network small-worldness utilizes the original definition of the small-world network comparing the clustering of a given network to an equivalent lattice network and its path length to an equivalent random network. The small-world measure (
{\displaystyle \omega }) is defined as
{\displaystyle \omega ={\frac {L_{r}}{L}}-{\frac {C}{C_{\ell }}}}
where the characteristic path length L and clustering coefficient C are calculated from the network being tested, Cℓ is the clustering coefficient for an equivalent lattice network and Lr is the characteristic path length for an equivalent random network.
Still another method for quantifying small-worldness normalizes both the network's clustering and path length relative to these characteristics in equivalent lattice and random networks. The Small World Index (SWI) is defined as
{\displaystyle {\text{SWI}}={\frac {L-L_{\ell }}{L_{r}-L_{\ell }}}\times {\frac {C-C_{r}}{C_{\ell }-C_{r}}}}
Both ω and SWI range between 0 and 1, and have been shown to capture aspects of small-worldness. However, they adopt slightly different conceptions of ideal small-worldness. For a given set of constraints (e.g. size, density, degree distribution), there exists a network for which ω = 1, and thus ω aims to capture the extent to which a network with given constraints is as small-worldly as possible. In contrast, there may not exist a network for which SWI = 1; thus SWI aims to capture the extent to which a network with given constraints approaches the theoretical small-world ideal of a network where C ≈ Cℓ and L ≈ Lr.
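Both styles of comparison are available off the shelf. A sketch using NetworkX (assuming a recent version that ships nx.sigma and nx.omega; the graph parameters are arbitrary):

```python
import networkx as nx

# a Watts-Strogatz graph: high clustering with short paths
G = nx.watts_strogatz_graph(n=300, k=6, p=0.1, seed=1)

# sigma > 1 suggests small-worldness; omega near 0 does as well
# (both routines build random/lattice reference graphs, so they are slow)
print("sigma =", nx.sigma(G, niter=5, nrandom=3, seed=1))
print("omega =", nx.omega(G, niter=5, nrandom=3, seed=1))
```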
== Examples of small-world networks ==
Small-world properties are found in many real-world phenomena, including websites with navigation menus, food webs, electric power grids, metabolite processing networks, networks of brain neurons, voter networks, telephone call graphs, and airport networks. Cultural networks and word co-occurrence networks have also been shown to be small-world networks.
Networks of connected proteins have small world properties such as power-law obeying degree distributions. Similarly transcriptional networks, in which the nodes are genes, and they are linked if one gene has an up or down-regulatory genetic influence on the other, have small world network properties.
== Examples of non-small-world networks ==
For example, the famous theory of "six degrees of separation" between people tacitly presumes that the domain of discourse is the set of people alive at any one time. The number of degrees of separation between Albert Einstein and Alexander the Great is almost certainly greater than 30, and this network does not have small-world properties. A similarly constrained network would be the "went to school with" network: if two people went to the same college ten years apart from one another, it is unlikely that they have acquaintances in common amongst the student body.
Similarly, the number of relay stations through which a message must pass was not always small. In the days when the post was carried by hand or on horseback, the number of times a letter changed hands between its source and destination would have been much greater than it is today. The number of times a message changed hands in the days of the visual telegraph (circa 1800–1850) was determined by the requirement that two stations be connected by line-of-sight.
Tacit assumptions, if not examined, can cause a bias in the literature on graphs in favor of finding small-world networks (an example of the file drawer effect resulting from the publication bias).
== Network robustness ==
It is hypothesized by some researchers, such as Albert-László Barabási, that the prevalence of small world networks in biological systems may reflect an evolutionary advantage of such an architecture. One possibility is that small-world networks are more robust to perturbations than other network architectures. If this were the case, it would provide an advantage to biological systems that are subject to damage by mutation or viral infection.
In a small world network with a degree distribution following a power-law, deletion of a random node rarely causes a dramatic increase in mean-shortest path length (or a dramatic decrease in the clustering coefficient). This follows from the fact that most shortest paths between nodes flow through hubs, and if a peripheral node is deleted it is unlikely to interfere with passage between other peripheral nodes. As the fraction of peripheral nodes in a small world network is much higher than the fraction of hubs, the probability of deleting an important node is very low. For example, if the small airport in Sun Valley, Idaho was shut down, it would not increase the average number of flights that other passengers traveling in the United States would have to take to arrive at their respective destinations. However, if random deletion of a node hits a hub by chance, the average path length can increase dramatically. This can be observed annually when northern hub airports, such as Chicago's O'Hare airport, are shut down because of snow; many people have to take additional flights.
By contrast, in a random network, in which all nodes have roughly the same number of connections, deleting a random node is likely to increase the mean-shortest path length slightly but significantly for almost any node deleted. In this sense, random networks are vulnerable to random perturbations, whereas small-world networks are robust. However, small-world networks are vulnerable to targeted attack of hubs, whereas random networks cannot be targeted for catastrophic failure.
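The contrast between random failure and targeted attack can be made concrete with a short simulation. The sketch below, assuming the networkx library, removes either twenty random nodes or the twenty highest-degree hubs from a scale-free graph and compares the resulting mean shortest path lengths on the largest remaining component.

```python
# Sketch: random node failure vs. targeted hub attack on a scale-free graph.
import random
import networkx as nx

def mean_path_after_removal(G, nodes_to_remove):
    H = G.copy()
    H.remove_nodes_from(nodes_to_remove)
    giant = H.subgraph(max(nx.connected_components(H), key=len))
    return nx.average_shortest_path_length(giant)

random.seed(0)
G = nx.barabasi_albert_graph(1000, 3, seed=0)
baseline = nx.average_shortest_path_length(G)

random_nodes = random.sample(list(G.nodes), 20)
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:20]

print("baseline:", baseline)
print("random failures:", mean_path_after_removal(G, random_nodes))
print("hub attack:", mean_path_after_removal(G, [n for n, _ in hubs]))
# Typically the hub attack inflates the path length far more than random failures.
```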
== Construction of small-world networks ==
The main mechanism to construct small-world networks is the Watts–Strogatz mechanism.
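A bare-bones sketch of that mechanism follows: start from a ring lattice in which each node links to its k nearest neighbors, then rewire each edge with probability p to a uniformly chosen new endpoint. The function and parameter names are illustrative; networkx ships an equivalent generator, watts_strogatz_graph.

```python
# Sketch of the Watts-Strogatz mechanism (edge collisions handled only crudely).
import random

def watts_strogatz(n, k, p, seed=None):
    rng = random.Random(seed)
    edges = set()
    for i in range(n):                     # ring lattice: k/2 neighbors per side
        for j in range(1, k // 2 + 1):
            edges.add((i, (i + j) % n))
    rewired = set()
    for (u, v) in edges:
        if rng.random() < p:               # rewire this edge's far endpoint
            w = rng.randrange(n)
            while w == u or (u, w) in rewired or (w, u) in rewired:
                w = rng.randrange(n)
            rewired.add((u, w))
        else:
            rewired.add((u, v))
    return rewired

print(len(watts_strogatz(20, 4, 0.1, seed=1)))  # about n*k/2 edges
```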
Small-world networks can also be introduced with time delay, which will produce not only fractals but also chaos under the right conditions, or a transition to chaos in dynamic networks.
Soon after the publication of the Watts–Strogatz mechanism, approaches were developed by Mashaghi and co-workers to generate network models that exhibit high degree correlations while preserving the desired degree distribution and small-world properties. These approaches are based on edge-dual transformation and can be used to generate analytically solvable small-world network models for research into these systems.
Degree–diameter graphs are constructed such that the number of neighbors each vertex in the network has is bounded, while the distance from any given vertex in the network to any other vertex (the diameter of the network) is minimized. Constructing such small-world networks is done as part of the effort to find graphs of order close to the Moore bound.
Another way to construct a small world network from scratch is given in Barmpoutis et al., where a network with very small average distance and very large average clustering is constructed. A fast algorithm of constant complexity is given, along with measurements of the robustness of the resulting graphs. Depending on the application of each network, one can start with one such "ultra small-world" network, and then rewire some edges, or use several small such networks as subgraphs to a larger graph.
Small-world properties can arise naturally in social networks and other real-world systems via the process of dual-phase evolution. This is particularly common where time or spatial constraints limit the addition of connections between vertices. The mechanism generally involves periodic shifts between phases, with connections being added during a "global" phase and being reinforced or removed during a "local" phase.
Small-world networks can change from scale-free class to broad-scale class whose connectivity distribution has a sharp cutoff following a power law regime due to constraints limiting the addition of new links. For strong enough constraints, scale-free networks can even become single-scale networks whose connectivity distribution is characterized as fast decaying. It was also shown analytically that scale-free networks are ultra-small, meaning that the distance scales according to
L \propto \log \log N.
== Applications ==
=== Applications to sociology ===
The advantages of small-world networking for social movement groups are its resistance to change, owing to the filtering effect of highly connected nodes, and its greater effectiveness in relaying information while keeping the number of links required to connect the network to a minimum.
The small world network model is directly applicable to affinity group theory represented in sociological arguments by William Finnegan. Affinity groups are social movement groups that are small and semi-independent pledged to a larger goal or function. Though largely unaffiliated at the node level, a few members of high connectivity function as connectivity nodes, linking the different groups through networking. This small world model has proven an extremely effective protest organization tactic against police action. Clay Shirky argues that the larger the social network created through small world networking, the more valuable the nodes of high connectivity within the network. The same can be said for the affinity group model, where the few people within each group connected to outside groups allowed for a large amount of mobilization and adaptation. A practical example of this is small world networking through affinity groups that William Finnegan outlines in reference to the 1999 Seattle WTO protests.
=== Applications to earth sciences ===
Many networks studied in geology and geophysics have been shown to have characteristics of small-world networks. Networks defined in fracture systems and porous substances have demonstrated these characteristics. The seismic network in the Southern California region may be a small-world network. The examples above occur on very different spatial scales, demonstrating the scale invariance of the phenomenon in the earth sciences.
=== Applications to computing ===
Small-world networks have been used to estimate the usability of information stored in large databases. The measure is termed the Small World Data Transformation Measure. The more closely the database's links align with a small-world network, the more likely a user is to be able to extract information in the future. This usability typically comes at the cost of the amount of information that can be stored in the same repository.
The Freenet peer-to-peer network has been shown to form a small-world network in simulation, allowing information to be stored and retrieved in a manner that scales efficiently as the network grows.
Nearest-neighbor search solutions like HNSW use small-world networks to efficiently find information in large item corpora.
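As a usage illustration, the sketch below builds a small HNSW index with the hnswlib package (an assumed dependency, installable with `pip install hnswlib`) and queries it for approximate nearest neighbors; the parameter values are illustrative defaults, not tuned recommendations.

```python
# Sketch: approximate nearest-neighbor search over an HNSW index (hnswlib assumed).
import numpy as np
import hnswlib

dim, n = 64, 10_000
data = np.random.default_rng(0).random((n, dim), dtype=np.float32)

index = hnswlib.Index(space="l2", dim=dim)       # squared-L2 distance
index.init_index(max_elements=n, ef_construction=200, M=16)
index.add_items(data, np.arange(n))
index.set_ef(50)                                 # query-time accuracy/speed knob

labels, distances = index.knn_query(data[:5], k=3)
print(labels)    # each row's first hit should be the query point itself
```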
=== Small-world neural networks in the brain ===
Both anatomical connections in the brain and the synchronization networks of cortical neurons exhibit small-world topology.
Structural and functional connectivity in the brain has also been found to reflect the small-world topology of short path length and high clustering. The network structure has been found in the mammalian cortex across species as well as in large-scale imaging studies in humans. Advances in connectomics and network neuroscience have found the small-worldness of neural networks to be associated with efficient communication.
In neural networks, short path length between nodes and high clustering at network hubs support efficient communication between brain regions at the lowest energetic cost. The brain is constantly processing and adapting to new information, and the small-world network model supports the intense communication demands of neural networks. High clustering of nodes forms local networks which are often functionally related. Short path length between these hubs supports efficient global communication. This balance enables the efficiency of the global network while simultaneously equipping the brain to handle disruptions and maintain homeostasis, because local subsystems are partially isolated from the global network. Loss of small-world network structure has been found to indicate changes in cognition and increased risk of psychological disorders.
In addition to characterizing whole-brain functional and structural connectivity, specific neural systems, such as the visual system, exhibit small-world network properties.
A small-world network of neurons can exhibit short-term memory. A computer model developed by Sara Solla had two stable states, a property (called bistability) thought to be important in memory storage. An activating pulse generated self-sustaining loops of communication activity among the neurons. A second pulse ended this activity. The pulses switched the system between stable states: flow (recording a "memory"), and stasis (holding it). Small world neuronal networks have also been used as models to understand seizures.
== See also ==
Barabási–Albert model – Scale-free network generation algorithm
Climate as complex networks – Conceptual model to generate insight into climate science
Dual-phase evolution – Process that drives self-organization within complex adaptive systems
Dunbar's number – Suggested cognitive limit important in sociology and anthropology
Erdős number – Closeness of someone's association with mathematician Paul Erdős
Erdős–Rényi (ER) model – Two closely related models for generating random graphs
Local World Evolving Network Models
Percolation theory – Mathematical theory on behavior of connected clusters in a random graph
Network science – Academic field – mathematical theory of networks
Scale-free network – Network whose degree distribution follows a power law
Six degrees of Kevin Bacon – Parlor game on degrees of separation
Small-world experiment – Experiments examining the average path length for social networks
Social network – Social structure made up of a set of social actors
Watts–Strogatz model – Method of generating random small-world graphs
Network on a chip – Electronic communication subsystem on an integrated circuit – systems on chip modeled on small-world networks
Zachary's karate club
== References ==
== Further reading ==
=== Books ===
=== Journal articles ===
== External links ==
Dynamic Proximity Networks by Seth J. Chandler, The Wolfram Demonstrations Project.
Small-World Networks entry on Scholarpedia (by Mason A. Porter) | Wikipedia/Small-world_networks |
In chemical thermodynamics, conformational entropy is the entropy associated with the number of conformations of a molecule. The concept is most commonly applied to biological macromolecules such as proteins and RNA, but can also be used for polysaccharides and other molecules. To calculate the conformational entropy, the possible conformations of the molecule may first be discretized into a finite number of states, usually characterized by unique combinations of certain structural parameters, each of which has been assigned an energy. In proteins, backbone dihedral angles and side chain rotamers are commonly used as parameters, and in RNA the base pairing pattern may be used. These characteristics are used to define the degrees of freedom (in the statistical mechanics sense of a possible "microstate"). The conformational entropy associated with a particular structure or state, such as an alpha-helix, a folded or an unfolded protein structure, is then dependent on the probability of the occupancy of that structure.
The entropy of heterogeneous random coil or denatured proteins is significantly higher than that of the tertiary structure of its folded native state. In particular, the conformational entropy of the amino acid side chains in a protein is thought to be a major contributor to the energetic stabilization of the denatured state and thus a barrier to protein folding. However, a recent study has shown that side-chain conformational entropy can stabilize native structures among alternative compact structures. The conformational entropy of RNA and proteins can be estimated; for example, empirical methods to estimate the loss of conformational entropy in a particular side chain on incorporation into a folded protein can roughly predict the effects of particular point mutations in a protein. Side-chain conformational entropies can be defined as Boltzmann sampling over all possible rotameric states:
S = -R \sum_{i} p_{i} \ln p_{i}
where R is the gas constant and p_i is the probability of a residue being in rotamer i.
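A minimal numeric sketch of this formula follows, with made-up rotamer probabilities used purely for illustration.

```python
# Sketch: side-chain conformational entropy S = -R * sum(p_i ln p_i).
import math

R = 8.314  # gas constant, J/(mol K)

def rotamer_entropy(probs):
    assert abs(sum(probs) - 1.0) < 1e-9   # probabilities must sum to 1
    return -R * sum(p * math.log(p) for p in probs if p > 0)

print(rotamer_entropy([1.0]))              # single rotamer: S = 0
print(rotamer_entropy([1/3, 1/3, 1/3]))    # three equal rotamers: R*ln(3) ≈ 9.13 J/(mol K)
print(rotamer_entropy([0.7, 0.2, 0.1]))    # skewed occupancy lowers the entropy
```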
The limited conformational range of proline residues lowers the conformational entropy of the denatured state and thus stabilizes the native states. A correlation has been observed between the thermostability of a protein and its proline residue content.
== See also ==
Configuration entropy
Folding funnel
Loop entropy
Molten globule
Protein folding
== References == | Wikipedia/Conformational_entropy |
Entropy is a scientific concept that is most commonly associated with a state of disorder, randomness, or uncertainty.
Entropy may also refer to:
== Science and technology ==
=== Physics ===
Entropy (classical thermodynamics), thermodynamic entropy in macroscopic terms, with less emphasis on the statistical explanation
Entropy (statistical thermodynamics), the statistical explanation of thermodynamic entropy based on probability theory
Configuration entropy, the entropy change due to a change in the knowledge of the position of particles, rather than their momentum
Conformational entropy, the entropy change due to a change in the "configuration" of a particle (e.g. a right-handed vs. a left-handed polyatomic molecule)
Tsallis entropy, a generalization of Boltzmann–Gibbs entropy
von Neumann entropy, entropy in quantum statistical physics and quantum information science
Entropy (order and disorder), the relationship of disorder with heat and work
Entropy in thermodynamics and information theory, the relationship between thermodynamic entropy and information (Shannon) entropy
Entropy (energy dispersal), dispersal of energy as a descriptor of entropy
Entropy (astrophysics), the adiabatic constant
=== Information theory and mathematics ===
Entropy (information theory), also called Shannon entropy, a measure of the unpredictability or information content of a message source
Differential entropy, a generalization of Entropy (information theory) to continuous random variables
Entropy of entanglement, related to the Shannon and von Neumann entropies for entangled systems; reflecting the degree of entanglement of subsystems
Algorithmic entropy, an (incomputable) measure of the information content of a particular message
Rényi entropy, a family of diversity measures generalising Shannon entropy; used to define fractal dimensions
Topological entropy, a measure of exponential growth in dynamical systems; equivalent to the rate of increase of the α → 0 Rényi entropy of a trajectory in trajectory-space
Volume entropy, a Riemannian invariant measuring the exponential rate of volume growth of a Riemannian metric
Graph entropy, a measure of the information rate achievable by communicating symbols over a channel in which certain pairs of values may be confused
=== Other uses in science and technology ===
Entropy encoding, data compression strategies to produce a code length equal to the entropy of a message
Entropy (computing), an indicator of the number of random bits available to seed cryptography systems
Entropy (anesthesiology), a measure of a patient's cortical function, based on the mathematical entropy of EEG signals
Entropy (ecology), measures of biodiversity in the study of biological ecology, based on Shannon and Rényi entropies
Social entropy, a measure of the natural decay within a social system
== Arts and entertainment ==
=== Film and television ===
Entropy (film), a 1999 film by Phil Joanou
"Entropy" (Buffy episode), 2002
"Entropy", an episode of Criminal Minds season 11
=== Games ===
Entropy (board game)
Entropy (video game)
Entropy or Alchemiss, a character in Freedom Force vs the 3rd Reich
N. Tropy, a character from Crash Bandicoot: Warped
Entropy: Zero, a Source mod for Half-Life 2 released in 2017 developed by Breadman
Entropy: Zero 2, a sequel to Entropy: Zero released in 2022
Entropy: Zero - Uprising, a mod of Entropy: Zero developed by Employee8 and Filipad
=== Literature ===
Entropy: A New World View, a book by Jeremy Rifkin and Ted Howard
"Entropy", a 1960 short story by Thomas Pynchon
"Entropy", a 1995 short story by Leanne Frahm
Entropy (journal), a scientific journal published by MDPI
Entropy (magazine), an online literary magazine
=== Music ===
Entropy, a 2005 album by Anathallo and Javelins
Entropy / Send Them, an EP by DJ Shadow and the Groove Robbers
"Entropy", a song by Bad Religion from Against the Grain
"Entropy", a song by Kelly Osbourne from Sleeping in the Nothing
"Entropy", a song by VNV Nation from Matter + Form
"Entropy", a song by Moxy Früvous from The 'b' Album
"Entropy", a song by Brymo from Oṣó
"Entropy", a song by Daniel Caesar from Case Study 01
"Entropy", a song by MC Hawking
"Entropy", a song by Grimes and Bleachers
Entropia (album), an album by Pain of Salvation
Entropy, a track by Nigel Stanford
The Book Of Us: Entropy, an album by DAY6
=== Other uses in arts and entertainment ===
Entropy (choreography), the difference between a transcription and a performance
== Other uses ==
Entropy (package manager), a Sabayon Linux package manager
Information entropy, relating to disruptive actions taken in Internet vigilantism
== See also ==
All pages with titles beginning with Entropy
All pages with titles containing Entropy | Wikipedia/Entropy_(disambiguation) |
Psychodynamics, also known as psychodynamic psychology, in its broadest sense, is an approach to psychology that emphasizes systematic study of the psychological forces underlying human behavior, feelings, and emotions and how they might relate to early experience. It is especially interested in the dynamic relations between conscious motivation and unconscious motivation.
The term psychodynamics is sometimes used to refer specifically to the psychoanalytical approach developed by Sigmund Freud (1856–1939) and his followers. Freud was inspired by the theory of thermodynamics and used the term psychodynamics to describe the processes of the mind as flows of psychological energy (libido or psi) in an organically complex brain. However, modern usage differentiates psychoanalytic practice as referring specifically to the earliest forms of psychotherapy, practiced by Freud and his immediate followers, and psychodynamic practice as practice that is informed by psychoanalytic theory, but diverges from the traditional practice model.
In the treatment of psychological distress, psychodynamic psychotherapy tends to be a less intensive (once- or twice-weekly) modality than the classical Freudian psychoanalysis treatment (of 3–5 sessions per week) and typically relies less on the traditional practices of psychoanalytic therapy, such as the patient facing away from the therapist during treatment and free association. Psychodynamic therapies depend upon a psychoanalytic understanding of inner conflict, wherein unconscious thoughts, desires, and memories influence behavior and psychological problems are caused by unconscious or repressed conflicts.
Despite largely falling out of favor as the primary modality of psychotherapy and facing criticism as being "non-empirical", psychodynamic treatment has been shown to be effective at treating a number of psychological conditions in randomized controlled trials, more effectively than controls and to the same degree as other psychotherapy modalities.
== Overview ==
In general, psychodynamics is the study of the interrelationship of various parts of the mind, personality, or psyche as they relate to mental, emotional, or motivational forces especially at the unconscious level. The mental forces involved in psychodynamics are often divided into two parts: (a) the interaction of the emotional and motivational forces that affect behavior and mental states, especially on a subconscious level; (b) inner forces affecting behavior: the study of the emotional and motivational forces that affect behavior and states of mind.
Freud proposed that psychological energy was constant (hence, emotional changes consisted only in displacements) and that it tended to rest (point attractor) through discharge (catharsis).
In mate selection psychology, psychodynamics is defined as the study of the forces, motives, and energy generated by the deepest of human needs.
In general, psychodynamics studies the transformations and exchanges of "psychic energy" within the personality. A focus in psychodynamics is the connection between the energetics of emotional states in the Id, ego and super-ego as they relate to early childhood developments and processes. At the heart of psychological processes, according to Freud, is the ego, which he envisions as battling with three forces: the id, the super-ego, and the outside world. The id is the unconscious reservoir of libido, the psychic energy that fuels instincts and psychic processes. The ego serves as the general manager of personality, making decisions regarding the pleasures that will be pursued at the id's demand, the person's safety requirements, and the moral dictates of the superego that will be followed. The superego refers to the repository of an individual's moral values, divided into the conscience – the internalization of a society's rules and regulations – and the ego-ideal – the internalization of one's goals. Hence, the basic psychodynamic model focuses on the dynamic interactions between the id, ego, and superego. Psychodynamics, subsequently, attempts to explain or interpret behaviour or mental states in terms of innate emotional forces or processes.
== History ==
Freud used the term psychodynamics to describe the processes of the mind as flows of psychological energy (libido) in an organically complex brain. The idea for this came from his first year adviser, Ernst von Brücke at the University of Vienna, who held the view that all living organisms, including humans, are basically energy-systems to which the principle of the conservation of energy applies. This principle states that "the total amount of energy in any given physical system is always constant, that energy quanta can be changed but not annihilated, and that consequently when energy is moved from one part of the system, it must reappear in another part." This principle is at the very root of Freud's ideas, whereby libido, which is primarily seen as sexual energy, is transformed into other behaviours. However, it is now clear that the term energy in physics means something quite different from the term energy in relation to mental functioning.
Psychodynamics was initially further developed by Carl Jung, Alfred Adler and Melanie Klein. By the mid-1940s and into the 1950s, the general application of the "psychodynamic theory" had been well established.
In his 1988 book Introduction to Psychodynamics – a New Synthesis, psychiatrist Mardi J. Horowitz states that his own interest and fascination with psychodynamics began during the 1950s, when he heard Ralph Greenson, a popular local psychoanalyst who spoke to the public on topics such as "People who Hate", speak on the radio at UCLA. In his radio discussion, according to Horowitz, he "vividly described neurotic behavior and unconscious mental processes and linked psychodynamics theory directly to everyday life."
In the 1950s, American psychiatrist Eric Berne built on Freud's psychodynamic model, particularly that of the "ego states", to develop a psychology of human interactions called transactional analysis which, according to physician James R. Allen, is a "cognitive-behavioral approach to treatment and that it is a very effective way of dealing with internal models of self and others as well as other psychodynamic issues".
Around the 1970s, a growing number of researchers began departing from the psychodynamics model and Freudian subconscious. Many felt that the evidence was over-reliant on imaginative discourse in therapy, and on patient reports of their state-of-mind. These subjective experiences are inaccessible to others. Philosopher of science Karl Popper argued that much of Freudianism was untestable and therefore not scientific. In 1975 literary critic Frederick Crews began a decades-long campaign against the scientific credibility of Freudianism. This culminated in Freud: The Making of an Illusion which aggregated years of criticism from many quarters. Medical schools and psychology departments no longer offer much training in psychodynamics, according to a 2007 survey. An Emory University psychology professor explained, “I don’t think psychoanalysis is going to survive unless there is more of an appreciation for empirical rigor and testing.”
=== Freudian analysis ===
According to American psychologist Calvin S. Hall, from his 1954 Primer in Freudian Psychology:
Freud greatly admired Brücke and quickly became indoctrinated by this new dynamic physiology. Thanks to Freud's singular genius, he was to discover some twenty years later that the laws of dynamics could be applied to man's personality as well as to his body. When he made his discovery Freud proceeded to create a dynamic psychology. A dynamic psychology is one that studies the transformations and exchanges of energy within the personality. This was Freud's greatest achievement, and one of the greatest achievements in modern science. It is certainly a crucial event in the history of psychology.
At the heart of psychological processes, according to Freud, is the ego, which he sees battling with three forces: the id, the super-ego, and the outside world. Hence, the basic psychodynamic model focuses on the dynamic interactions between the id, ego, and superego. Psychodynamics, subsequently, attempts to explain or interpret behavior or mental states in terms of innate emotional forces or processes. In his writings about the "engines of human behavior", Freud used the German word Trieb, a word that can be translated into English as either instinct or drive.
In the 1930s, Freud's daughter Anna Freud began to apply Freud's psychodynamic theories of the "ego" to the study of parent-child attachment and especially deprivation and in doing so developed ego psychology.
=== Jungian analysis ===
At the turn of the 20th century, during these decisive years, a young Swiss psychiatrist named Carl Jung had been following Freud's writings and had sent him copies of his articles and his first book, the 1907 Psychology of Dementia Praecox, in which he upheld the Freudian psychodynamic viewpoint, although with some reservations. That year, Freud invited Jung to visit him in Vienna. The two men, it is said, were greatly attracted to each other, and they talked continuously for thirteen hours. This led to a professional relationship in which they corresponded on a weekly basis, for a period of six years.
Carl Jung's contributions in psychodynamic psychology include:
The psyche tends toward wholeness.
The self is composed of the ego, the personal unconscious, and the collective unconscious. The collective unconscious contains the archetypes, which manifest in ways particular to each individual.
Archetypes are composed of dynamic tensions and arise spontaneously in the individual and collective psyche. Archetypes are autonomous energies common to the human species. They give the psyche its dynamic properties and help organize it. Their effects can be seen in many forms and across cultures.
The Transcendent Function: The emergence of the third resolves the split between dynamic polar tensions within the archetypal structure.
The recognition of the spiritual dimension of the human psyche.
The role of images which spontaneously arise in the human psyche (images include the interconnection between affect, images, and instinct) to communicate the dynamic processes taking place in the personal and collective unconscious, images which can be used to help the ego move in the direction of psychic wholeness.
Recognition of the multiplicity of psyche and psychic life, that there are several organizing principles within the psyche, and that they are at times in conflict.
== See also ==
Ernst Wilhelm Brücke
Yisrael Salanter
Cathexis
Object relations theory
Reaction formation
Robert Langs
== References ==
== Further reading ==
Brown, Junius Flagg & Menninger, Karl Augustus (1940). The Psychodynamics of Abnormal Behavior, 484 pages, McGraw-Hill Book Company, inc.
Weiss, Edoardo (1950). Principles of Psychodynamics, 268 pages, Grune & Stratton
Pearson Education (1970). The Psychodynamics of Patient Care. Prentice Hall, 422 pages. Stanford University: Higher Education Division.
Jean Laplanche et J.B. Pontalis (1974). The Language of Psycho-Analysis, Editeur: W. W. Norton & Company, ISBN 0-393-01105-4
Raphael-Leff, Joan (2005). Parent Infant Psychodynamics – Wild Things, Mirrors, and Ghosts. Wiley. ISBN 1-86156-346-9.
Shedler, Jonathan. "That was Then, This is Now: An Introduction to Contemporary Psychodynamic Therapy", PDF
PDM Task Force. (2006). Psychodynamic Diagnostic Manual. Silver Spring, MD. Alliance of Psychoanalytic Organizations.
Aziz, Robert (1990). C.G. Jung's Psychology of Religion and Synchronicity (10 ed.). The State University of New York Press. ISBN 0791401669.
Aziz, Robert (1999). "Synchronicity and the Transformation of the Ethical in Jungian Psychology". In Becker, Carl (ed.). Asian and Jungian Views of Ethics. Greenwood. ISBN 0313304521.
Aziz, Robert (2007). The Syndetic Paradigm: The Untrodden Path Beyond Freud and Jung. The State University of New York Press. ISBN 9780791469828.
Aziz, Robert (2008). "Foreword". In Storm, Lance (ed.). Synchronicity: Multiple Perspectives on Meaningful Coincidence. Pari Publishing. ISBN 9788895604022.
Bateman, Anthony; Brown, Dennis and Pedder, Jonathan (2000). Introduction to Psychotherapy: An Outline of Psychodynamic Principles and Practice. Routledge. ISBN 0415205697.{{cite book}}: CS1 maint: multiple names: authors list (link)
Bateman, Anthony; Holmes, Jeremy (1995). Introduction to Psychoanalysis: Contemporary Theory and Practice. Routledge. ISBN 0415107393.
Oberst, Ursula E.; Stewart, Alan E. (2003). Adlerian Psychotherapy: An Advanced Approach to Individual Psychology. New York: Brunner-Routledge. ISBN 1583911227.
Ellenberger, Henri F. (1970). The Discovery of the Unconscious: The History and Evolution of Dynamic Psychiatry. Basic Books. ISBN 0465016723.
Hutchinson, E.(ED.) (2017).Essentials of human behavior: Integrating person, environment, and the life course. Thousand Oaks, CA: Sage. | Wikipedia/Psychodynamics |
In thermodynamics, a temperature–entropy (T–s) diagram is a thermodynamic diagram used to visualize changes to temperature (T ) and specific entropy (s) during a thermodynamic process or cycle as the graph of a curve. It is a useful and common tool, particularly because it helps to visualize the heat transfer during a process. For reversible (ideal) processes, the area under the T–s curve of a process is the heat transferred to the system during that process.
Working fluids are often categorized on the basis of the shape of their T–s diagram.
An isentropic process is depicted as a vertical line on a T–s diagram, whereas an isothermal process is a horizontal line.
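A quick numeric check of the area interpretation, assuming numpy: for a reversible process the specific heat transfer is q = ∫ T ds, which for an isothermal step reduces to q = T Δs.

```python
# Sketch: heat per unit mass as the area under a T-s curve, q = ∫ T ds.
import numpy as np

s = np.linspace(1.0, 2.0, 201)        # specific entropy, kJ/(kg K), illustrative
T_iso = np.full_like(s, 350.0)        # isothermal process at 350 K
q_iso = np.trapz(T_iso, s)            # area under the horizontal line
print(q_iso, 350.0 * (s[-1] - s[0]))  # both 350 kJ/kg: q = T * delta-s

T_ramp = 300.0 + 50.0 * (s - s[0])    # temperature rising linearly with entropy
print(np.trapz(T_ramp, s))            # trapezoid area: 325 kJ/kg
```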
== See also ==
Carnot cycle
Pressure–volume diagram
Rankine cycle
Saturation vapor curve
Working fluid
Working fluid selection
== References == | Wikipedia/Temperature–entropy_diagram |
In statistical mechanics, configuration entropy is the portion of a system's entropy that is related to discrete representative positions of its constituent particles. For example, it may refer to the number of ways that atoms or molecules pack together in a mixture, alloy or glass, the number of conformations of a molecule, or the number of spin configurations in a magnet. The name might suggest that it relates to all possible configurations or particle positions of a system, excluding the entropy of their velocity or momentum, but that usage rarely occurs.
== Calculation ==
If the configurations all have the same weighting, or energy, the configurational entropy is given by Boltzmann's entropy formula
S = k_{B} \ln W,
where kB is the Boltzmann constant and W is the number of possible configurations. In a more general formulation, if a system can be in states n with probabilities Pn, the configurational entropy of the system is given by
S = -k_{B} \sum_{n=1}^{W} P_{n} \ln P_{n},
which in the perfect disorder limit (all Pn = 1/W) leads to Boltzmann's formula, while in the opposite limit (one configuration with probability 1), the entropy vanishes. This formulation is called the Gibbs entropy formula and is analogous to that of Shannon's information entropy.
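A short sketch of these two limits, using illustrative numbers, shows the Gibbs formula collapsing to Boltzmann's in the uniform case and vanishing for a single certain configuration.

```python
# Sketch: Gibbs entropy formula and its two limiting cases.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def gibbs_entropy(probs):
    return -K_B * sum(p * math.log(p) for p in probs if p > 0)

W = 1000
uniform = [1.0 / W] * W
print(gibbs_entropy(uniform))   # uniform probabilities: equals k_B ln W
print(K_B * math.log(W))        # ~9.54e-23 J/K, the Boltzmann value
print(gibbs_entropy([1.0]))     # one configuration with probability 1: 0.0
```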
The mathematical field of combinatorics, and in particular the mathematics of combinations and permutations is highly important in the calculation of configurational entropy. In particular, this field of mathematics offers formalized approaches for calculating the number of ways of choosing or arranging discrete objects; in this case, atoms or molecules. However, it is important to note that the positions of molecules are not strictly speaking discrete above the quantum level. Thus a variety of approximations may be used in discretizing a system to allow for a purely combinatorial approach. Alternatively, integral methods may be used in some cases to work directly with continuous position functions, usually denoted as a configurational integral.
== See also ==
Conformational entropy
Combinatorics
Entropic force
Entropy of mixing
High entropy oxide
Nanomechanics
== Notes ==
== References == | Wikipedia/Configuration_entropy |
In thermodynamics, the interpretation of entropy as a measure of energy dispersal has been exercised against the background of the traditional view, introduced by Ludwig Boltzmann, of entropy as a quantitative measure of disorder. The energy dispersal approach avoids the ambiguous term 'disorder'. An early advocate of the energy dispersal conception was Edward A. Guggenheim in 1949, using the word 'spread'.
In this alternative approach, entropy is a measure of energy dispersal or spread at a specific temperature. Changes in entropy can be quantitatively related to the distribution or the spreading out of the energy of a thermodynamic system, divided by its temperature.
Some educators propose that the energy dispersal idea is easier to understand than the traditional approach. The concept has been used to facilitate teaching entropy to students beginning university chemistry and biology.
== Comparisons with traditional approach ==
The term "entropy" has been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels.
Such descriptions have tended to be used together with commonly used terms such as disorder and randomness, which are ambiguous, and whose everyday meaning is the opposite of what they are intended to mean in thermodynamics. Not only does this situation cause confusion, but it also hampers the teaching of thermodynamics. Students were being asked to grasp meanings directly contradicting their normal usage, with equilibrium being equated to "perfect internal disorder" and the mixing of milk in coffee from apparent chaos to uniformity being described as a transition from an ordered state into a disordered state.
The description of entropy as the amount of "mixedupness" or "disorder," as well as the abstract nature of the statistical mechanics grounding this notion, can lead to confusion and considerable difficulty for those beginning the subject. Even though courses emphasised microstates and energy levels, most students could not get beyond simplistic notions of randomness or disorder. Many of those who learned by practising calculations did not understand well the intrinsic meanings of equations, and there was a need for qualitative explanations of thermodynamic relationships.
Arieh Ben-Naim recommends abandonment of the word entropy, rejecting both the 'dispersal' and the 'disorder' interpretations; instead he proposes the notion of "missing information" about microstates as considered in statistical mechanics, which he regards as commonsensical.
== Description ==
Increase of entropy in a thermodynamic process can be described in terms of "energy dispersal" and the "spreading of energy," while avoiding mention of "disorder" except when explaining misconceptions. All explanations of where and how energy is dispersing or spreading have been recast in terms of energy dispersal, so as to emphasise the underlying qualitative meaning.
In this approach, the second law of thermodynamics is introduced as "Energy spontaneously disperses from being localized to becoming spread out if it is not hindered from doing so," often in the context of common experiences such as a rock falling, a hot frying pan cooling down, iron rusting, air leaving a punctured tyre and ice melting in a warm room. Entropy is then depicted as a sophisticated kind of "before and after" yardstick — measuring how much energy is spread out over time as a result of a process such as heating a system, or how widely spread out the energy is after something happens in comparison with its previous state, in a process such as gas expansion or fluids mixing (at a constant temperature). The equations are explored with reference to the common experiences, with emphasis that in chemistry the energy that entropy measures as dispersing is the internal energy of molecules.
The statistical interpretation is related to quantum mechanics in describing the way that energy is distributed (quantized) amongst molecules on specific energy levels, with all the energy of the macrostate always in only one microstate at one instant. Entropy is described as measuring the energy dispersal for a system by the number of accessible microstates, the number of different arrangements of all its energy at the next instant. Thus, an increase in entropy means a greater number of microstates for the final state than for the initial state, and hence more possible arrangements of a system's total energy at any one instant. Here, the greater 'dispersal of the total energy of a system' means the existence of many possibilities.
Continuous movement and molecular collisions visualised as being like bouncing balls blown by air as used in a lottery can then lead on to showing the possibilities of many Boltzmann distributions and continually changing "distribution of the instant", and on to the idea that when the system changes, dynamic molecules will have a greater number of accessible microstates. In this approach, all everyday spontaneous physical happenings and chemical reactions are depicted as involving some type of energy flows from being localized or concentrated to becoming spread out to a larger space, always to a state with a greater number of microstates.
This approach does not work as well for very complex cases where the qualitative relation of energy dispersal to entropy change can be so inextricably obscured that it is moot. The entropy of mixing is one of these complex cases, when two or more different substances are mixed at the same temperature and pressure. There will be no net exchange of heat or work, so the entropy increase will be due to the literal spreading out of the motional energy of each substance in the larger combined final volume. Each component's energetic molecules become more separated from one another than they would be in the pure state, when in the pure state they were colliding only with identical adjacent molecules, leading to an increase in its number of accessible microstates.
== Current adoption ==
Variants of the energy dispersal approach have been adopted in a number of undergraduate chemistry texts, mainly in the United States. One respected text states:
The concept of the number of microstates makes quantitative the ill-defined qualitative concepts of 'disorder' and the 'dispersal' of matter and energy that are used widely to introduce the concept of entropy: a more 'disorderly' distribution of energy and matter corresponds to a greater number of micro-states associated with the same total energy. — Atkins & de Paula (2006): 81
== History ==
The concept of 'dissipation of energy' was used in Lord Kelvin's 1852 article "On a Universal Tendency in Nature to the Dissipation of Mechanical Energy." He distinguished between two types or "stores" of mechanical energy: "statical" and "dynamical." He discussed how these two types of energy can change from one form to the other during a thermodynamic transformation. When heat is created by any irreversible process (such as friction), or when heat is diffused by conduction, mechanical energy is dissipated, and it is impossible to restore the initial state.
Using the word 'spread', an early advocate of the energy dispersal concept was Edward Armand Guggenheim. In the mid-1950s, with the development of quantum theory, researchers began speaking about entropy changes in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels, such as by the reactants and products of a chemical reaction.
In 1984, the Oxford physical chemist Peter Atkins, in a book The Second Law, written for laypersons, presented a nonmathematical interpretation of what he called the "infinitely incomprehensible entropy" in simple terms, describing the Second Law of thermodynamics as "energy tends to disperse". His analogies included an imaginary intelligent being called "Boltzmann's Demon," who runs around reorganizing and dispersing energy, in order to show how the W in Boltzmann's entropy formula relates to energy dispersion. This dispersion is transmitted via atomic vibrations and collisions. Atkins wrote: "each atom carries kinetic energy, and the spreading of the atoms spreads the energy…the Boltzmann equation therefore captures the aspect of dispersal: the dispersal of the entities that are carrying the energy.": 78, 79
In 1997, John Wrigglesworth described spatial particle distributions as represented by distributions of energy states. According to the second law of thermodynamics, isolated systems will tend to redistribute the energy of the system into a more probable arrangement or a maximum probability energy distribution, i.e. from that of being concentrated to that of being spread out. By virtue of the First law of thermodynamics, the total energy does not change; instead, the energy tends to disperse over the space to which it has access. In his 1999 Statistical Thermodynamics, M.C. Gupta defined entropy as a function that measures how energy disperses when a system changes from one state to another. Other authors defining entropy in a way that embodies energy dispersal are Cecie Starr and Andrew Scott.
In a 1996 article, the physicist Harvey S. Leff set out what he called "the spreading and sharing of energy." Another physicist, Daniel F. Styer, published an article in 2000 showing that "entropy as disorder" was inadequate. In an article published in the 2002 Journal of Chemical Education, Frank L. Lambert argued that portraying entropy as "disorder" is confusing and should be abandoned. He has gone on to develop detailed resources for chemistry instructors, equating entropy increase as the spontaneous dispersal of energy, namely how much energy is spread out in a process, or how widely dispersed it becomes – at a specific temperature.
== See also ==
Introduction to entropy
== References ==
== Further reading ==
Carson, E. M., and Watson, J. R., (Department of Educational and Professional Studies, King's College, London), 2002, "Undergraduate students' understandings of entropy and Gibbs Free energy," University Chemistry Education - 2002 Papers, Royal Society of Chemistry.
Lambert, Frank L. (2002). "Disorder - A Cracked Crutch For Supporting Entropy Discussions," Journal of Chemical Education 79: 187-92.
Leff, Harvey S. (2007). "Entropy, Its Language, and Interpretation" (PDF). Found Phys. 37 (12). Springer: 1744–1766. Bibcode:2007FoPh...37.1744L. doi:10.1007/s10701-007-9163-3. S2CID 3485226. Retrieved 24 February 2016.
=== Texts using the energy dispersal approach ===
Atkins, P. W., Physical Chemistry for the Life Sciences. Oxford University Press, ISBN 0-19-928095-9; W. H. Freeman, ISBN 0-7167-8628-1
Benjamin Gal-Or, "Cosmology, Physics and Philosophy", Springer-Verlag, New York, 1981, 1983, 1987 ISBN 0-387-90581-2
Bell, J., et al., 2005. Chemistry: A General Chemistry Project of the American Chemical Society, 1st ed. W. H. Freeman, 820pp, ISBN 0-7167-3126-6
Brady, J.E., and F. Senese, 2004. Chemistry, Matter and Its Changes, 4th ed. John Wiley, 1256pp, ISBN 0-471-21517-1
Brown, T. L., H. E. LeMay, and B. E. Bursten, 2006. Chemistry: The Central Science, 10th ed. Prentice Hall, 1248pp, ISBN 0-13-109686-9
Ebbing, D.D., and S. D. Gammon, 2005. General Chemistry, 8th ed. Houghton-Mifflin, 1200pp, ISBN 0-618-39941-0
Ebbing, Gammon, and Ragsdale. Essentials of General Chemistry, 2nd ed.
Hill, Petrucci, McCreary and Perry. General Chemistry, 4th ed.
Kotz, Treichel, and Weaver. Chemistry and Chemical Reactivity, 6th ed.
Moog, Spencer, and Farrell. Thermodynamics, A Guided Inquiry.
Moore, J. W., C. L. Stanistski, P. C. Jurs, 2005. Chemistry, The Molecular Science, 2nd ed. Thompson Learning. 1248pp, ISBN 0-534-42201-2
Olmsted and Williams, Chemistry, 4th ed.
Petrucci, Harwood, and Herring. General Chemistry, 9th ed.
Silberberg, M.S., 2006. Chemistry, The Molecular Nature of Matter and Change, 4th ed. McGraw-Hill, 1183pp, ISBN 0-07-255820-2
Suchocki, J., 2004. Conceptual Chemistry 2nd ed. Benjamin Cummings, 706pp, ISBN 0-8053-3228-6
== External links ==
welcome to entropy site A large website by Frank L. Lambert with links to work on the energy dispersal approach to entropy.
The Second Law of Thermodynamics (6) | Wikipedia/Energy_dispersal |
In thermodynamics, the Joule–Thomson effect (also known as the Joule–Kelvin effect or Kelvin–Joule effect) describes the temperature change of a real gas or liquid (as differentiated from an ideal gas) when it is expanding, typically caused by the pressure loss from flow through a valve or porous plug while the fluid is kept insulated so that no heat is exchanged with the environment. This procedure is called a throttling process or Joule–Thomson process. The effect is purely due to deviation from ideality, as any ideal gas has no JT effect.
At room temperature, all gases except hydrogen, helium, and neon cool upon expansion by the Joule–Thomson process when being throttled through an orifice; these three gases rise in temperature when forced through a porous plug at room temperature, but lower in temperature when already at lower temperatures. Most liquids, such as hydraulic oils, will be warmed by the Joule–Thomson throttling process. The temperature at which the JT effect switches sign is the inversion temperature.
The gas-cooling throttling process is commonly exploited in refrigeration processes such as liquefiers in air separation industrial process. In hydraulics, the warming effect from Joule–Thomson throttling can be used to find internally leaking valves as these will produce heat which can be detected by thermocouple or thermal-imaging camera. Throttling is a fundamentally irreversible process. The throttling due to the flow resistance in supply lines, heat exchangers, regenerators, and other components of (thermal) machines is a source of losses that limits their performance.
Since it is a constant-enthalpy process, it can be used to experimentally measure the lines of constant enthalpy (isenthalps) on the (p, T) diagram of a gas. Combined with the specific heat capacity at constant pressure c_P = (∂h/∂T)_P, it allows the complete measurement of the thermodynamic potential for the gas.
== History ==
The effect is named after James Prescott Joule and William Thomson, 1st Baron Kelvin, who discovered it in 1852. It followed upon earlier work by Joule on Joule expansion, in which a gas undergoes free expansion in a vacuum and the temperature is unchanged, if the gas is ideal.
== Description ==
The adiabatic (no heat exchanged) expansion of a gas may be carried out in a number of ways. The change in temperature experienced by the gas during expansion depends not only on the initial and final pressure, but also on the manner in which the expansion is carried out.
If the expansion process is reversible, meaning that the gas is in thermodynamic equilibrium at all times, it is called an isentropic expansion. In this scenario, the gas does positive work during the expansion, and its temperature decreases.
In a free expansion, on the other hand, the gas does no work and absorbs no heat, so the internal energy is conserved. Expanded in this manner, the temperature of an ideal gas would remain constant, but the temperature of a real gas decreases, except at very high temperature.
The method of expansion discussed in this article, in which a gas or liquid at pressure P1 flows into a region of lower pressure P2 without significant change in kinetic energy, is called the Joule–Thomson expansion. The expansion is inherently irreversible. During this expansion, enthalpy remains unchanged (see proof below). Unlike a free expansion, work is done, causing a change in internal energy. Whether the internal energy increases or decreases is determined by whether work is done on or by the fluid; that is determined by the initial and final states of the expansion and the properties of the fluid.
The temperature change produced during a Joule–Thomson expansion is quantified by the Joule–Thomson coefficient, μJT. This coefficient may be either positive (corresponding to cooling) or negative (heating); the regions where each occurs for molecular nitrogen, N2, are shown in the figure. Note that most conditions in the figure correspond to N2 being a supercritical fluid, where it has some properties of a gas and some of a liquid, but cannot really be described as being either. The coefficient is negative at both very high and very low temperatures; at very high pressure it is negative at all temperatures. The maximum inversion temperature (621 K for N2) occurs as zero pressure is approached. For N2 gas at low pressures, μJT is negative at high temperatures and positive at low temperatures. At temperatures below the gas–liquid coexistence curve, N2 condenses to form a liquid and the coefficient again becomes negative. Thus, for N2 gas below 621 K, a Joule–Thomson expansion can be used to cool the gas until liquid N2 forms.
== Physical mechanism ==
There are two factors that can change the temperature of a fluid during an adiabatic expansion: a change in internal energy or the conversion between potential and kinetic internal energy. Temperature is the measure of thermal kinetic energy (energy associated with molecular motion); so a change in temperature indicates a change in thermal kinetic energy. The internal energy is the sum of thermal kinetic energy and thermal potential energy. Thus, even if the internal energy does not change, the temperature can change due to conversion between kinetic and potential energy; this is what happens in a free expansion and typically produces a decrease in temperature as the fluid expands. If work is done on or by the fluid as it expands, then the total internal energy changes. This is what happens in a Joule–Thomson expansion and can produce larger heating or cooling than observed in a free expansion.
In a Joule–Thomson expansion the enthalpy remains constant. The enthalpy H is defined as

H = U + PV

where U is internal energy, P is pressure, and V is volume. Under the conditions of a Joule–Thomson expansion, the change in PV represents the work done by the fluid (see the proof below). If PV increases, with H constant, then U must decrease as a result of the fluid doing work on its surroundings. This produces a decrease in temperature and results in a positive Joule–Thomson coefficient. Conversely, a decrease in PV means that work is done on the fluid and the internal energy increases. If the increase in kinetic energy exceeds the increase in potential energy, there will be an increase in the temperature of the fluid and the Joule–Thomson coefficient will be negative.
For an ideal gas, PV does not change during a Joule–Thomson expansion. As a result, there is no change in internal energy; since there is also no change in thermal potential energy, there can be no change in thermal kinetic energy and, therefore, no change in temperature. In real gases, PV does change.
The ratio of the value of PV to that expected for an ideal gas at the same temperature is called the compressibility factor, Z. For a gas, this is typically less than unity at low temperature and greater than unity at high temperature (see the discussion in compressibility factor). At low pressure, the value of Z always moves towards unity as a gas expands. Thus at low temperature, Z and PV will increase as the gas expands, resulting in a positive Joule–Thomson coefficient. At high temperature, Z and PV decrease as the gas expands; if the decrease is large enough, the Joule–Thomson coefficient will be negative.
For liquids, and for supercritical fluids under high pressure, PV increases as pressure increases. This is due to molecules being forced together, so that the volume can barely decrease due to higher pressure. Under such conditions, the Joule–Thomson coefficient is negative, as seen in the figure above.
The physical mechanism associated with the Joule–Thomson effect is closely related to that of a shock wave, although a shock wave differs in that the change in bulk kinetic energy of the gas flow is not negligible.
== The Joule–Thomson (Kelvin) coefficient ==
The rate of change of temperature T with respect to pressure P in a Joule–Thomson process (that is, at constant enthalpy H) is the Joule–Thomson (Kelvin) coefficient μJT. This coefficient can be expressed in terms of the gas's specific volume V, its heat capacity at constant pressure Cp, and its coefficient of thermal expansion α as:

\mu_{\mathrm{JT}} = \left(\frac{\partial T}{\partial P}\right)_{H} = \frac{V}{C_{p}}\,(\alpha T - 1)
See the § Derivation of the Joule–Thomson coefficient below for the proof of this relation. The value of μJT is typically expressed in °C/bar (SI units: K/Pa) and depends on the type of gas and on the temperature and pressure of the gas before expansion. Its pressure dependence is usually only a few percent for pressures up to 100 bar.
All real gases have an inversion point at which the value of μJT changes sign. The temperature of this point, the Joule–Thomson inversion temperature, depends on the pressure of the gas before expansion.
In a gas expansion the pressure decreases, so the sign of ∂P is negative by definition. With that in mind, the following table explains when the Joule–Thomson effect cools or warms a real gas:
Helium and hydrogen are two gases whose Joule–Thomson inversion temperatures at a pressure of one atmosphere are very low (e.g., about 40 K, −233 °C for helium). Thus, helium and hydrogen warm when expanded at constant enthalpy at typical room temperatures. On the other hand, nitrogen and oxygen, the two most abundant gases in air, have inversion temperatures of 621 K (348 °C) and 764 K (491 °C) respectively: these gases can be cooled from room temperature by the Joule–Thomson effect.
For an ideal gas, μJT is always equal to zero: ideal gases neither warm nor cool upon being expanded at constant enthalpy.
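As a quick numerical check of the statements above, a minimal Python sketch (the function name and the air-like property values are illustrative assumptions) evaluates μJT = (V/Cp)(αT − 1); since an ideal gas has α = 1/T, the coefficient vanishes:

```python
# Minimal sketch of mu_JT = (V / C_p) * (alpha * T - 1).

def mu_jt(v: float, c_p: float, alpha: float, t: float) -> float:
    """Joule-Thomson coefficient in K/Pa.

    v     -- specific volume (m^3/kg)
    c_p   -- specific heat capacity at constant pressure (J/(kg*K))
    alpha -- cubic coefficient of thermal expansion (1/K)
    t     -- absolute temperature (K)
    """
    return (v / c_p) * (alpha * t - 1.0)

# Ideal gas: alpha = 1/T, so mu_JT vanishes at every temperature.
T = 300.0                      # K
R_specific = 287.0             # J/(kg*K), dry air (assumed value)
P = 1.0e5                      # Pa
v_ideal = R_specific * T / P   # ideal-gas specific volume
print(mu_jt(v_ideal, 1005.0, 1.0 / T, T))   # -> 0.0
```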
=== Theoretical models ===
For a Van der Waals gas, the coefficient is
{\displaystyle \mu _{\text{JT}}=-{\frac {V_{m}}{C_{p}}}{\frac {RTV_{m}^{2}b-2a(V_{m}-b)^{2}}{RTV_{m}^{3}-2a(V_{m}-b)^{2}}}.}
with inversion temperature
{\displaystyle T_{I}={\frac {2a}{bR}}\left(1-{\frac {b}{V_{m}}}\right)^{2}}.
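A short sketch can make this concrete. In the dilute limit Vm → ∞ the van der Waals inversion temperature approaches 2a/(bR); the constants below are commonly tabulated van der Waals values for nitrogen, used here as assumptions:

```python
# Sketch: van der Waals inversion temperature and its dilute-gas limit.
R = 8.314    # J/(mol*K)
a = 0.1370   # Pa*m^6/mol^2, nitrogen (assumed literature value)
b = 3.87e-5  # m^3/mol, nitrogen (assumed literature value)

def t_inversion(v_m: float) -> float:
    """Van der Waals inversion temperature at molar volume v_m (m^3/mol)."""
    return (2.0 * a / (b * R)) * (1.0 - b / v_m) ** 2

print(t_inversion(1e6))   # dilute-gas limit: ~852 K
```

The ~852 K dilute-limit value overestimates nitrogen's measured maximum inversion temperature of 621 K quoted above, which is typical of the van der Waals approximation.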
For the Dieterici gas, the reduced inversion temperature is
{\displaystyle {\tilde {T}}_{I}=8-4/{\tilde {V}}_{m}}
and the relation between reduced pressure and reduced inversion temperature is
{\displaystyle {\tilde {p}}=(8-{\tilde {T}}_{I})e^{{\frac {5}{2}}-{\frac {4}{8-{\tilde {T}}_{I}}}}}.
This is plotted on the right. The critical point falls inside the region where the gas cools on expansion. The outside region is where the gas warms on expansion.
== Applications ==
In practice, the Joule–Thomson effect is achieved by allowing the gas to expand through a throttling device (usually a valve) which must be very well insulated to prevent any heat transfer to or from the gas. No external work is extracted from the gas during the expansion (the gas must not be expanded through a turbine, for example).
The cooling produced in the Joule–Thomson expansion makes it a valuable tool in refrigeration. The effect is applied in the Linde technique as a standard process in the petrochemical industry, where the cooling effect is used to liquefy gases, and in many cryogenic applications (e.g. for the production of liquid oxygen, nitrogen, and argon). A gas must be below its inversion temperature to be liquefied by the Linde cycle. For this reason, simple Linde cycle liquefiers, starting from ambient temperature, cannot be used to liquefy helium, hydrogen, or neon. They must first be cooled to their inversion temperatures, which are −233 °C (helium), −71 °C (hydrogen), and −42 °C (neon).
== Proof that the specific enthalpy remains constant ==
In thermodynamics so-called "specific" quantities are quantities per unit mass (kg) and are denoted by lower-case characters. So h, u, and v are the specific enthalpy, specific internal energy, and specific volume (volume per unit mass, or reciprocal density), respectively. In a Joule–Thomson process the specific enthalpy h remains constant. To prove this, the first step is to compute the net work done when a mass m of the gas moves through the plug. This amount of gas has a volume of V1 = m v1 in the region at pressure P1 (region 1) and a volume V2 = m v2 when in the region at pressure P2 (region 2). Then in region 1, the "flow work" done on the amount of gas by the rest of the gas is: W1 = m P1v1. In region 2, the work done by the amount of gas on the rest of the gas is: W2 = m P2v2. So, the total work done on the mass m of gas is
{\displaystyle W=mP_{1}v_{1}-mP_{2}v_{2}.}
The change in internal energy minus the total work done on the amount of gas is, by the first law of thermodynamics, the total heat supplied to the amount of gas.
{\displaystyle U-W=Q}
In the Joule–Thomson process, the gas is insulated, so no heat is absorbed. This means that
{\displaystyle {\begin{aligned}(mu_{2}-mu_{1})&-(mP_{1}v_{1}-mP_{2}v_{2})=0\\mu_{1}+mP_{1}v_{1}&=mu_{2}+mP_{2}v_{2}\\u_{1}+P_{1}v_{1}&=u_{2}+P_{2}v_{2}\end{aligned}}}
where u1 and u2 denote the specific internal energies of the gas in regions 1 and 2, respectively. Using the definition of the specific enthalpy h = u + Pv, the above equation implies that
{\displaystyle h_{1}=h_{2}}
where h1 and h2 denote the specific enthalpies of the amount of gas in regions 1 and 2, respectively.
== Throttling in the T-s diagram ==
A convenient way to get a quantitative understanding of the throttling process is by using diagrams such as h-T diagrams, h-P diagrams, and others. Commonly used are the so-called T-s diagrams. Figure 2 shows the T-s diagram of nitrogen as an example. Various points are indicated as follows:
As shown before, throttling keeps h constant. E.g., throttling from 200 bar and 300 K (point a in fig. 2) follows the isenthalp (line of constant specific enthalpy) of 430 kJ/kg. At 1 bar it results in point b, which has a temperature of 270 K. So throttling from 200 bar to 1 bar gives cooling from room temperature to below the freezing point of water. Throttling from 200 bar and an initial temperature of 133 K (point c in fig. 2) to 1 bar results in point d, which is in the two-phase region of nitrogen, at a temperature of 77.2 K. Since enthalpy is an extensive parameter, the enthalpy in d (hd) is equal to the enthalpy in e (he) multiplied by the mass fraction of the liquid in d (xd) plus the enthalpy in f (hf) multiplied by the mass fraction of the gas in d (1 − xd). So
{\displaystyle h_{d}=x_{d}h_{e}+(1-x_{d})h_{f}.}
With numbers: 150 = xd · 28 + (1 − xd) · 230, so xd is about 0.40. This means that the mass fraction of the liquid in the liquid–gas mixture leaving the throttling valve is 40%.
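The arithmetic above is just the lever rule solved for xd; a short sketch (enthalpies in kJ/kg, read off the diagram as in the text):

```python
# Lever rule used above: solve h_d = x*h_e + (1-x)*h_f for the
# liquid mass fraction x.
def liquid_fraction(h_d: float, h_e: float, h_f: float) -> float:
    """Mass fraction of liquid in the two-phase mixture."""
    return (h_f - h_d) / (h_f - h_e)

print(liquid_fraction(150.0, 28.0, 230.0))   # ~0.396, i.e. about 40% liquid
```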
== Derivation of the Joule–Thomson coefficient ==
It is difficult to think physically about what the Joule–Thomson coefficient, μJT, represents. Also, modern determinations of μJT do not use the original method used by Joule and Thomson, but instead measure a different, closely related quantity. Thus, it is useful to derive relationships between μJT and other, more conveniently measured quantities, as described below.
The first step in obtaining these results is to note that the Joule–Thomson coefficient involves the three variables T, P, and H. A useful result is immediately obtained by applying the cyclic rule; in terms of these three variables that rule may be written
{\displaystyle \left({\frac {\partial T}{\partial P}}\right)_{H}\left({\frac {\partial H}{\partial T}}\right)_{P}\left({\frac {\partial P}{\partial H}}\right)_{T}=-1.}
Each of the three partial derivatives in this expression has a specific meaning. The first is μJT, the second is the constant pressure heat capacity, Cp, defined by
{\displaystyle C_{\mathrm {p} }=\left({\frac {\partial H}{\partial T}}\right)_{P}}
and the third is the inverse of the isothermal Joule–Thomson coefficient, μT, defined by
{\displaystyle \mu _{\mathrm {T} }=\left({\frac {\partial H}{\partial P}}\right)_{T}}.
This last quantity is more easily measured than μJT. Thus, the expression from the cyclic rule becomes
{\displaystyle \mu _{\mathrm {JT} }=-{\frac {\mu _{\mathrm {T} }}{C_{p}}}.}
This equation can be used to obtain Joule–Thomson coefficients from the more easily measured isothermal Joule–Thomson coefficient. It is used in the following to obtain a mathematical expression for the Joule–Thomson coefficient in terms of the volumetric properties of a fluid.
To proceed further, the starting point is the fundamental equation of thermodynamics in terms of enthalpy; this is
{\displaystyle \mathrm {d} H=T\mathrm {d} S+V\mathrm {d} P.}
Now "dividing through" by dP, while holding temperature constant, yields
{\displaystyle \left({\frac {\partial H}{\partial P}}\right)_{T}=T\left({\frac {\partial S}{\partial P}}\right)_{T}+V}
The partial derivative on the left is the isothermal Joule–Thomson coefficient, μT, and the one on the right can be expressed in terms of the coefficient of thermal expansion via a Maxwell relation. The appropriate relation is
{\displaystyle \left({\frac {\partial S}{\partial P}}\right)_{T}=-\left({\frac {\partial V}{\partial T}}\right)_{P}=-V\alpha }
where α is the cubic coefficient of thermal expansion. Replacing these two partial derivatives yields
{\displaystyle \mu _{\mathrm {T} }=-TV\alpha +V.}
This expression can now replace μT in the earlier equation for μJT to obtain:
{\displaystyle \mu _{\mathrm {JT} }\equiv \left({\frac {\partial T}{\partial P}}\right)_{H}={\frac {V}{C_{\mathrm {p} }}}(\alpha T-1).}
This provides an expression for the Joule–Thomson coefficient in terms of the commonly available properties heat capacity, molar volume, and thermal expansion coefficient. It shows that the Joule–Thomson inversion temperature, at which μJT is zero, occurs when the coefficient of thermal expansion is equal to the inverse of the temperature. Since this is true at all temperatures for ideal gases (see expansion in gases), the Joule–Thomson coefficient of an ideal gas is zero at all temperatures.
== Joule's second law ==
For an ideal gas defined by suitable microscopic postulates, it is easy to verify that αT = 1, so the temperature change of such an ideal gas in a Joule–Thomson expansion is zero.
For such an ideal gas, this theoretical result implies that:
The internal energy of a fixed mass of an ideal gas depends only on its temperature (not pressure or volume).
This rule was originally found by Joule experimentally for real gases and is known as Joule's second law. More refined experiments found important deviations from it.
== See also ==
Critical point (thermodynamics)
Enthalpy and Isenthalpic process
Ideal gas
Liquefaction of gases
MIRI (Mid-Infrared Instrument), an instrument of the James Webb Space Telescope that is cooled by a J–T loop
Refrigeration
Reversible process (thermodynamics)
== References ==
== Bibliography ==
M. W. Zemansky (1968). Heat and Thermodynamics; An Intermediate Textbook. McGraw-Hill. pp. 182, 355. LCCN 67026891.
D. V. Schroeder (2000). An Introduction to Thermal Physics. Addison Wesley Longman. p. 142. ISBN 978-0-201-38027-9.
C. Kittel, H. Kroemer (1980). Thermal Physics. W. H. Freeman. ISBN 978-0-7167-1088-2.
== External links ==
Weisstein, Eric Wolfgang (ed.). "Joule-Thomson process". ScienceWorld.
Weisstein, Eric Wolfgang (ed.). "Joule-Thomson coefficient". ScienceWorld.
"Inversion Curve of Joule-Thomson Effect using Peng-Robinson CEOS". Demonstrations Projects of Wolfram Mathematica.
Joule–Thomson effect module, University of Notre Dame | Wikipedia/Throttling_process_(thermodynamics) |
In theoretical computer science and mathematics, the theory of computation is the branch that deals with what problems can be solved on a model of computation, using an algorithm, how efficiently they can be solved or to what degree (e.g., approximate solutions versus precise ones). The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?".
In order to perform a rigorous study of computation, computer scientists work with a mathematical abstraction of computers called a model of computation. There are several models in use, but the most commonly examined is the Turing machine. Computer scientists study the Turing machine because it is simple to formulate, can be analyzed and used to prove results, and because it represents what many consider the most powerful possible "reasonable" model of computation (see Church–Turing thesis). It might seem that the potentially infinite memory capacity is an unrealizable attribute, but any decidable problem solved by a Turing machine will always require only a finite amount of memory. So in principle, any problem that can be solved (decided) by a Turing machine can be solved by a computer that has a finite amount of memory.
== History ==
The theory of computation can be considered the creation of models of all kinds in the field of computer science; it therefore draws on mathematics and logic. During the 20th century it separated from mathematics and became an independent academic discipline, with its own conferences such as FOCS in 1960 and STOC in 1969, and its own awards such as the IMU Abacus Medal (established in 1981 as the Rolf Nevanlinna Prize), the Gödel Prize, established in 1993, and the Knuth Prize, established in 1996.
Some pioneers of the theory of computation were Ramon Llull, Alonzo Church, Kurt Gödel, Alan Turing, Stephen Kleene, Rózsa Péter, John von Neumann and Claude Shannon.
== Branches ==
=== Automata theory ===
Automata theory is the study of abstract machines (or more appropriately, abstract 'mathematical' machines or systems) and the computational problems that can be solved using these machines. These abstract machines are called automata. The word automata comes from the Greek αὐτόματα, meaning "self-acting".
Automata theory is also closely related to formal language theory, as the automata are often classified by the class of formal languages they are able to recognize. An automaton can be a finite representation of a formal language that may be an infinite set. Automata are used as theoretical models for computing machines, and are used for proofs about computability.
==== Formal language theory ====
Formal language theory is a branch of mathematics concerned with describing languages as sets of strings over an alphabet. It is closely linked with automata theory, as automata are used to generate and recognize formal languages. There are several classes of formal languages, each allowing more complex language specification than the one before it (see the Chomsky hierarchy), and each corresponding to a class of automata which recognizes it. Because automata are used as models for computation, formal languages are the preferred mode of specification for any problem that must be computed.
=== Computability theory ===
Computability theory deals primarily with the question of the extent to which a problem is solvable on a computer. The statement that the halting problem cannot be solved by a Turing machine is one of the most important results in computability theory, as it is an example of a concrete problem that is both easy to formulate and impossible to solve using a Turing machine. Much of computability theory builds on the halting problem result.
Another important step in computability theory was Rice's theorem, which states that for all non-trivial properties of partial functions, it is undecidable whether a Turing machine computes a partial function with that property.
Computability theory is closely related to the branch of mathematical logic called recursion theory, which removes the restriction of studying only models of computation which are reducible to the Turing model. Many mathematicians and computational theorists who study recursion theory will refer to it as computability theory.
=== Computational complexity theory ===
Computational complexity theory considers not only whether a problem can be solved at all on a computer, but also how efficiently the problem can be solved. Two major aspects are considered: time complexity and space complexity, which are respectively how many steps it takes to perform a computation, and how much memory is required to perform that computation.
In order to analyze how much time and space a given algorithm requires, computer scientists express the time or space required to solve the problem as a function of the size of the input problem. For example, finding a particular number in a long list of numbers becomes harder as the list of numbers grows larger. If we say there are n numbers in the list, then if the list is not sorted or indexed in any way we may have to look at every number in order to find the number we're seeking. We thus say that in order to solve this problem, the computer needs to perform a number of steps that grow linearly in the size of the problem.
To simplify this problem, computer scientists have adopted big O notation, which allows functions to be compared in a way that ensures that particular aspects of a machine's construction do not need to be considered, but rather only the asymptotic behavior as problems become large. So in our previous example, we might say that the problem requires O(n) steps to solve.
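As a minimal illustration of the search example above, the following Python sketch inspects each element in turn, so in the worst case it performs a number of comparisons proportional to n, i.e. O(n) steps (the function name is ours, chosen for illustration):

```python
def linear_search(numbers: list[int], target: int) -> int:
    """Return the index of target in an unsorted list, or -1 if absent.

    Worst case: every element is inspected once -> O(n) steps.
    """
    for i, value in enumerate(numbers):
        if value == target:
            return i
    return -1

print(linear_search([7, 3, 9, 1], 9))   # -> 2
```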
Perhaps the most important open problem in all of computer science is the question of whether a certain broad class of problems denoted NP can be solved efficiently. This is discussed further at Complexity classes P and NP, and P versus NP problem is one of the seven Millennium Prize Problems stated by the Clay Mathematics Institute in 2000. The Official Problem Description was given by Turing Award winner Stephen Cook.
== Models of computation ==
Aside from a Turing machine, other equivalent (see Church–Turing thesis) models of computation are in use.
Lambda calculus
A computation consists of an initial lambda expression (or two if you want to separate the function and its input) plus a finite sequence of lambda terms, each deduced from the preceding term by one application of Beta reduction.
Combinatory logic
is a concept which has many similarities to λ-calculus, but important differences also exist (e.g. the fixed point combinator Y has a normal form in combinatory logic but not in λ-calculus). Combinatory logic was developed with great ambitions: understanding the nature of paradoxes, making the foundations of mathematics more economical (conceptually), and eliminating the notion of variables (thus clarifying their role in mathematics).
μ-recursive functions
a computation consists of a mu-recursive function, i.e. its defining sequence, any input value(s) and a sequence of recursive functions appearing in the defining sequence with inputs and outputs. Thus, if in the defining sequence of a recursive function f(x) the functions g(x) and h(x,y) appear, then terms of the form 'g(5)=7' or 'h(3,2)=10' might appear. Each entry in this sequence needs to be an application of a basic function or follow from the entries above by using composition, primitive recursion or μ recursion. For instance if f(x)=h(x,g(x)), then for 'f(5)=3' to appear, terms like 'g(5)=6' and 'h(5,6)=3' must occur above. The computation terminates only if the final term gives the value of the recursive function applied to the inputs.
Markov algorithm
a string rewriting system that uses grammar-like rules to operate on strings of symbols.
Register machine
is a theoretically interesting idealization of a computer. There are several variants. In most of them, each register can hold a natural number (of unlimited size), and the instructions are simple (and few in number), e.g. only decrementation (combined with conditional jump) and incrementation exist (and halting). The lack of the infinite (or dynamically growing) external store (seen at Turing machines) can be understood by replacing its role with Gödel numbering techniques: the fact that each register holds a natural number allows the possibility of representing a complicated thing (e.g. a sequence, or a matrix etc.) by an appropriately huge natural number — unambiguity of both representation and interpretation can be established by number theoretical foundations of these techniques.
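To make the register machine model concrete, the following Python sketch interprets a two-instruction machine (increment, and decrement-with-conditional-jump); the instruction encoding is an assumption of this sketch, not a standard. The example program adds r1 into r0:

```python
def run(program, registers):
    """Interpret a register machine program.

    Instructions (encoding assumed for this sketch):
      ("inc", r)       -- add 1 to register r, go to the next instruction
      ("decjz", r, z)  -- if register r is 0, jump to address z;
                          otherwise subtract 1 and go to the next instruction
    The machine halts when the program counter runs off the end.
    """
    pc = 0
    while 0 <= pc < len(program):
        op = program[pc]
        if op[0] == "inc":
            registers[op[1]] += 1
            pc += 1
        else:  # "decjz"
            if registers[op[1]] == 0:
                pc = op[2]
            else:
                registers[op[1]] -= 1
                pc += 1
    return registers

# Add r1 into r0: repeatedly decrement r1 and increment r0.
# Register 2 stays 0, so ("decjz", 2, 0) acts as an unconditional jump.
addition = [("decjz", 1, 3), ("inc", 0), ("decjz", 2, 0)]
print(run(addition, {0: 2, 1: 3, 2: 0}))   # -> {0: 5, 1: 0, 2: 0}
```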
In addition to the general computational models, some simpler computational models are useful for special, restricted applications. Regular expressions, for example, specify string patterns in many contexts, from office productivity software to programming languages. Another formalism mathematically equivalent to regular expressions, finite automata are used in circuit design and in some kinds of problem-solving. Context-free grammars specify programming language syntax. Non-deterministic pushdown automata are another formalism equivalent to context-free grammars. Primitive recursive functions are a defined subclass of the recursive functions.
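As a small illustration of the equivalence between regular expressions and finite automata mentioned above, the regular expression (0|1)*0 (binary strings ending in 0) is recognized by a two-state DFA; the sketch below simulates it (the state names are our own):

```python
# DFA for binary strings that end in '0', equivalent to the regular
# expression (0|1)*0. The state "seen0" is accepting.
TRANSITIONS = {
    ("start", "0"): "seen0", ("start", "1"): "start",
    ("seen0", "0"): "seen0", ("seen0", "1"): "start",
}

def accepts(word: str) -> bool:
    state = "start"
    for symbol in word:
        state = TRANSITIONS[(state, symbol)]
    return state == "seen0"

print(accepts("1010"))   # True
print(accepts("101"))    # False
```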
Different models of computation have the ability to do different tasks. One way to measure the power of a computational model is to study the class of formal languages that the model can generate; in this way, the Chomsky hierarchy of languages is obtained.
== References ==
== Further reading ==
Textbooks aimed at computer scientists
(There are many textbooks in this area; this list is by necessity incomplete.)
Hopcroft, John E.; Motwani, Rajeev; Ullman, Jeffrey D. (2006) [1979]. Introduction to Automata Theory, Languages, and Computation (3rd ed.). Addison-Wesley. ISBN 0-321-45536-3. — One of the standard references in the field.
Linz P (2007). An introduction to formal language and automata. Narosa Publishing. ISBN 9788173197819.
Sipser, Michael (2013). Introduction to the Theory of Computation (3rd ed.). Cengage Learning. ISBN 978-1-133-18779-0.
Eitan Gurari (1989). An Introduction to the Theory of Computation. Computer Science Press. ISBN 0-7167-8182-4. Archived from the original on 2007-01-07.
Hein, James L. (1996) Theory of Computation. Sudbury, MA: Jones & Bartlett. ISBN 978-0-86720-497-1 A gentle introduction to the field, appropriate for second-year undergraduate computer science students.
Taylor, R. Gregory (1998). Models of Computation and Formal Languages. New York: Oxford University Press. ISBN 978-0-19-510983-2 An unusually readable textbook, appropriate for upper-level undergraduates or beginning graduate students.
Jon Kleinberg, and Éva Tardos (2006): Algorithm Design, Pearson/Addison-Wesley, ISBN 978-0-32129535-4
Lewis, F. D. (2007). Essentials of theoretical computer science A textbook covering the topics of formal languages, automata and grammars. The emphasis appears to be on presenting an overview of the results and their applications rather than providing proofs of the results.
Martin Davis, Ron Sigal, Elaine J. Weyuker, Computability, complexity, and languages: fundamentals of theoretical computer science, 2nd ed., Academic Press, 1994, ISBN 0-12-206382-1. Covers a wider range of topics than most other introductory books, including program semantics and quantification theory. Aimed at graduate students.
Books on computability theory from the (wider) mathematical perspective
Hartley Rogers, Jr (1987). Theory of Recursive Functions and Effective Computability, MIT Press. ISBN 0-262-68052-1
S. Barry Cooper (2004). Computability Theory. Chapman and Hall/CRC. ISBN 1-58488-237-9..
Carl H. Smith, A recursive introduction to the theory of computation, Springer, 1994, ISBN 0-387-94332-3. A shorter textbook suitable for graduate students in Computer Science.
Historical perspective
Richard L. Epstein and Walter A. Carnielli (2000). Computability: Computable Functions, Logic, and the Foundations of Mathematics, with Computability: A Timeline (2nd ed.). Wadsworth/Thomson Learning. ISBN 0-534-54644-7..
== External links ==
Theory of Computation at MIT
Theory of Computation at Harvard
Computability Logic - A theory of interactive computation. The main web source on this subject. | Wikipedia/Computation_theory |
The concept entropy was first developed by German physicist Rudolf Clausius in the mid-nineteenth century as a thermodynamic property that predicts that certain spontaneous processes are irreversible or impossible. In statistical mechanics, entropy is formulated as a statistical property using probability theory. The statistical entropy perspective was introduced in 1870 by Austrian physicist Ludwig Boltzmann, who established a new field of physics that provided the descriptive linkage between the macroscopic observation of nature and the microscopic view based on the rigorous treatment of large ensembles of microscopic states that constitute thermodynamic systems.
== Boltzmann's principle ==
Ludwig Boltzmann defined entropy as a measure of the number of possible microscopic states (microstates) of a system in thermodynamic equilibrium, consistent with its macroscopic thermodynamic properties, which constitute the macrostate of the system. A useful illustration is the example of a sample of gas contained in a container. The easily measurable parameters volume, pressure, and temperature of the gas describe its macroscopic condition (state). At a microscopic level, the gas consists of a vast number of freely moving atoms or molecules, which randomly collide with one another and with the walls of the container. The collisions with the walls produce the macroscopic pressure of the gas, which illustrates the connection between microscopic and macroscopic phenomena.
A microstate of the system is a description of the positions and momenta of all its particles. The large number of particles of the gas provides an infinite number of possible microstates for the sample, but collectively they exhibit a well-defined average of configuration, which is exhibited as the macrostate of the system, to which each individual microstate contribution is negligibly small. The ensemble of microstates comprises a statistical distribution of probability for each microstate, and the group of most probable configurations accounts for the macroscopic state. Therefore, the system can be described as a whole by only a few macroscopic parameters, called the thermodynamic variables: the total energy E, volume V, pressure P, temperature T, and so forth. However, this description is relatively simple only when the system is in a state of equilibrium.
Equilibrium may be illustrated with a simple example of a drop of food coloring falling into a glass of water. The dye diffuses in a complicated manner, which is difficult to precisely predict. However, after sufficient time has passed, the system reaches a uniform color, a state much easier to describe and explain.
Boltzmann formulated a simple relationship between entropy and the number of possible microstates of a system, which is denoted by the symbol Ω. The entropy S is proportional to the natural logarithm of this number:
{\displaystyle S=k_{\text{B}}\ln \Omega }
The proportionality constant kB is one of the fundamental constants of physics and is named the Boltzmann constant in honor of its discoverer.
Boltzmann's entropy describes the system when all the accessible microstates are equally likely. It is the configuration corresponding to the maximum of entropy at equilibrium. The randomness or disorder is maximal, and so is the lack of distinction (or information) of each microstate.
Entropy is a thermodynamic property just like pressure, volume, or temperature. Therefore, it connects the microscopic and the macroscopic world view.
Boltzmann's principle is regarded as the foundation of statistical mechanics.
== Gibbs entropy formula ==
The macroscopic state of a system is characterized by a distribution on the microstates. The entropy of this distribution is given by the Gibbs entropy formula, named after J. Willard Gibbs. For a classical system (i.e., a collection of classical particles) with a discrete set of microstates, if Ei is the energy of microstate i, and pi is the probability that it occurs during the system's fluctuations, then the entropy of the system is
{\displaystyle S=-k_{\text{B}}\,\sum _{i}p_{i}\ln(p_{i})}
The quantity kB is the Boltzmann constant, a multiplier of the summation expression. The summation is dimensionless, since each value pi is a probability and therefore dimensionless, and ln is the natural logarithm. Hence the SI unit on both sides of the equation is that of heat capacity:
{\displaystyle [S]=[k_{\text{B}}]=\mathrm {\frac {J}{K}} }
This definition remains meaningful even when the system is far away from equilibrium. Other definitions assume that the system is in thermal equilibrium, either as an isolated system, or as a system in exchange with its surroundings. The set of microstates (with probability distribution) over which the sum is found is called a statistical ensemble. Each type of statistical ensemble (micro-canonical, canonical, grand-canonical, etc.) describes a different configuration of the system's exchanges with the outside, varying from a completely isolated system to a system that can exchange one or more quantities with a reservoir, like energy, volume or molecules. In every ensemble, the equilibrium configuration of the system is dictated by the maximization of the entropy of the union of the system and its reservoir, according to the second law of thermodynamics (see the statistical mechanics article).
Neglecting correlations (or, more generally, statistical dependencies) between the states of individual particles will lead to an incorrect probability distribution on the microstates and hence to an overestimate of the entropy. Such correlations occur in any system with nontrivially interacting particles, that is, in all systems more complex than an ideal gas.
This S is almost universally called simply the entropy. It can also be called the statistical entropy or the thermodynamic entropy without changing the meaning. Note the above expression of the statistical entropy is a discretized version of Shannon entropy. The von Neumann entropy formula is an extension of the Gibbs entropy formula to the quantum-mechanical case.
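As a check on the formula above, a minimal Python sketch (the probability values are illustrative) evaluates S = −kB Σ pi ln pi and confirms that a uniform distribution over Ω microstates reproduces Boltzmann's S = kB ln Ω:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def gibbs_entropy(probs):
    """S = -k_B * sum(p_i * ln(p_i)); terms with p_i = 0 contribute 0."""
    return -K_B * sum(p * math.log(p) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]
print(gibbs_entropy(uniform))          # equals k_B * ln(4)
print(K_B * math.log(4))               # same value
print(gibbs_entropy([0.7, 0.2, 0.1]))  # lower: distribution is less spread out
```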
It has been shown that the Gibbs entropy is equal to the classical "heat engine" entropy characterized by dS = δQ/T, and the generalized Boltzmann distribution is a sufficient and necessary condition for this equivalence. Furthermore, the Gibbs entropy is the only entropy measure that is equivalent to the classical "heat engine" entropy under the following postulates:
=== Ensembles ===
The various ensembles used in statistical thermodynamics are linked to the entropy by the following relations:
{\displaystyle S=k_{\text{B}}\ln \Omega _{\text{mic}}=k_{\text{B}}(\ln Z_{\text{can}}+\beta {\bar {E}})=k_{\text{B}}(\ln {\mathcal {Z}}_{\text{gr}}+\beta ({\bar {E}}-\mu {\bar {N}}))}
Ωmic is the microcanonical partition function
Zcan is the canonical partition function
Zgr is the grand canonical partition function
== Order through chaos and the second law of thermodynamics ==
We can think of Ω as a measure of our lack of knowledge about a system. To illustrate this idea, consider a set of 100 coins, each of which is either heads up or tails up. In this example, let us suppose that the macrostates are specified by the total number of heads and tails, while the microstates are specified by the facings of each individual coin (i.e., the exact order in which heads and tails occur). For the macrostates of 100 heads or 100 tails, there is exactly one possible configuration, so our knowledge of the system is complete. At the opposite extreme, the macrostate which gives us the least knowledge about the system consists of 50 heads and 50 tails in any order, for which there are 100891344545564193334812497256 (100 choose 50) ≈ 1.0 × 10^29 possible microstates.
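The microstate count quoted above, and the corresponding Boltzmann entropies in units of kB, can be verified directly with a short sketch:

```python
import math

def boltzmann_entropy_in_kB(omega: int) -> float:
    """S / k_B = ln(Omega) for a macrostate with Omega microstates."""
    return math.log(omega)

print(math.comb(100, 50))                 # 100891344545564193334812497256
print(boltzmann_entropy_in_kB(1))         # 0.0 -- the all-heads macrostate
print(boltzmann_entropy_in_kB(math.comb(100, 50)))   # ~66.8 (in units of k_B)
```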
Even when a system is entirely isolated from external influences, its microstate is constantly changing. For instance, the particles in a gas are constantly moving, and thus occupy a different position at each moment of time; their momenta are also constantly changing as they collide with each other or with the container walls. Suppose we prepare the system in an artificially highly ordered equilibrium state. For instance, imagine dividing a container with a partition and placing a gas on one side of the partition, with a vacuum on the other side. If we remove the partition and watch the subsequent behavior of the gas, we will find that its microstate evolves according to some chaotic and unpredictable pattern, and that on average these microstates will correspond to a more disordered macrostate than before. It is possible, but extremely unlikely, for the gas molecules to bounce off one another in such a way that they remain in one half of the container. It is overwhelmingly probable for the gas to spread out to fill the container evenly, which is the new equilibrium macrostate of the system.
This is an example illustrating the second law of thermodynamics:
the total entropy of any isolated thermodynamic system tends to increase over time, approaching a maximum value.
Since its discovery, this idea has been the focus of a great deal of thought, some of it confused. A chief point of confusion is the fact that the Second Law applies only to isolated systems. For example, the Earth is not an isolated system because it is constantly receiving energy in the form of sunlight. In contrast, the universe may be considered an isolated system, so that its total entropy is constantly increasing.
== Counting of microstates ==
In classical statistical mechanics, the number of microstates is actually uncountably infinite, since the properties of classical systems are continuous. For example, a microstate of a classical ideal gas is specified by the positions and momenta of all the atoms, which range continuously over the real numbers. If we want to define Ω, we have to come up with a method of grouping the microstates together to obtain a countable set. This procedure is known as coarse graining. In the case of the ideal gas, we count two states of an atom as the "same" state if their positions and momenta are within δx and δp of each other. Since the values of δx and δp can be chosen arbitrarily, the entropy is not uniquely defined. It is defined only up to an additive constant. (As we will see, the thermodynamic definition of entropy is also defined only up to a constant.)
To avoid coarse graining one can take the entropy as defined by the H-theorem.
{\displaystyle S=-k_{\text{B}}H_{\text{B}}:=-k_{\text{B}}\int f(q_{i},p_{i})\,\ln f(q_{i},p_{i})\,dq_{1}\,dp_{1}\cdots dq_{N}\,dp_{N}}
However, this ambiguity can be resolved with quantum mechanics. The quantum state of a system can be expressed as a superposition of "basis" states, which can be chosen to be energy eigenstates (i.e. eigenstates of the quantum Hamiltonian). Usually, the quantum states are discrete, even though there may be an infinite number of them. For a system with some specified energy E, one takes Ω to be the number of energy eigenstates within a macroscopically small energy range between E and E + δE. In the thermodynamic limit, the specific entropy becomes independent of the choice of δE.
An important result, known as Nernst's theorem or the third law of thermodynamics, states that the entropy of a system at zero absolute temperature is a well-defined constant. This is because a system at zero temperature exists in its lowest-energy state, or ground state, so that its entropy is determined by the degeneracy of the ground state. Many systems, such as crystal lattices, have a unique ground state, and (since ln(1) = 0) this means that they have zero entropy at absolute zero. Other systems have more than one state with the same, lowest energy, and have a non-vanishing "zero-point entropy". For instance, ordinary ice has a zero-point entropy of 3.41 J/(mol⋅K), because its underlying crystal structure possesses multiple configurations with the same energy (a phenomenon known as geometrical frustration).
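The quoted zero-point entropy of ice is close to Pauling's classic combinatorial estimate of R ln(3/2) per mole, which treats each water molecule as having effectively 3/2 allowed proton configurations; a one-line check (values rounded):

```python
import math

R = 8.314  # gas constant, J/(mol*K)
# Pauling's estimate: effectively 3/2 proton configurations per molecule.
print(R * math.log(1.5))   # ~3.37 J/(mol*K), vs the ~3.41 measured value
```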
The third law of thermodynamics states that the entropy of a perfect crystal at absolute zero (0 K) is zero. This means that nearly all molecular motion should cease. The oscillator equation for predicting quantized vibrational levels shows that even when the vibrational quantum number is 0, the molecule still has vibrational energy:
{\displaystyle E_{\nu }=h\nu _{0}\left(n+{\tfrac {1}{2}}\right)}
where h is the Planck constant, ν0 is the characteristic frequency of the vibration, and n is the vibrational quantum number. Even when n = 0 (the zero-point energy), En does not equal 0, in adherence to the Heisenberg uncertainty principle.
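For a sense of scale, the sketch below evaluates the zero-point term hν0/2; the frequency is an assumed illustrative value, of the order of a diatomic stretch (~7.1 × 10^13 Hz):

```python
H = 6.62607015e-34   # Planck constant, J*s

def vibrational_energy(nu0: float, n: int) -> float:
    """E = h * nu0 * (n + 1/2) for a harmonic oscillator."""
    return H * nu0 * (n + 0.5)

nu0 = 7.1e13   # Hz, assumed frequency of the order of the N2 stretch
print(vibrational_energy(nu0, 0))   # ~2.4e-20 J: nonzero even at n = 0
```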
== See also ==
== References == | Wikipedia/Gibbs_entropy_formula |
An agent-based model (ABM) is a computational model for simulating the actions and interactions of autonomous agents (both individual and collective entities, such as organizations or groups) in order to understand the behavior of a system and what governs its outcomes. It combines elements of game theory, complex systems, emergence, computational sociology, multi-agent systems, and evolutionary programming. Monte Carlo methods are used to understand the stochasticity of these models. Particularly within ecology, ABMs are also called individual-based models (IBMs). A review of recent literature on individual-based models, agent-based models, and multiagent systems shows that ABMs are used in many scientific domains including biology, ecology and social science. Agent-based modeling is related to, but distinct from, the concept of multi-agent systems or multi-agent simulation in that the goal of ABM is to search for explanatory insight into the collective behavior of agents obeying simple rules, typically in natural systems, rather than in designing agents or solving specific practical or engineering problems.
Agent-based models are a kind of microscale model that simulate the simultaneous operations and interactions of multiple agents in an attempt to re-create and predict the appearance of complex phenomena. The process is one of emergence, which some express as "the whole is greater than the sum of its parts". In other words, higher-level system properties emerge from the interactions of lower-level subsystems. Or, macro-scale state changes emerge from micro-scale agent behaviors. Or, simple behaviors (meaning rules followed by agents) generate complex behaviors (meaning state changes at the whole system level).
Individual agents are typically characterized as boundedly rational, presumed to be acting in what they perceive as their own interests, such as reproduction, economic benefit, or social status, using heuristics or simple decision-making rules. ABM agents may experience "learning", adaptation, and reproduction.
Most agent-based models are composed of: (1) numerous agents specified at various scales (typically referred to as agent-granularity); (2) decision-making heuristics; (3) learning rules or adaptive processes; (4) an interaction topology; and (5) an environment. ABMs are typically implemented as computer simulations, either as custom software, or via ABM toolkits, and this software can be then used to test how changes in individual behaviors will affect the system's emerging overall behavior.
== History ==
The idea of agent-based modeling was developed as a relatively simple concept in the late 1940s. Since it requires computation-intensive procedures, it did not become widespread until the 1990s.
=== Early developments ===
The history of the agent-based model can be traced back to the Von Neumann machine, a theoretical machine capable of reproduction. The device von Neumann proposed would follow precisely detailed instructions to fashion a copy of itself. The concept was then built upon by von Neumann's friend Stanislaw Ulam, also a mathematician; Ulam suggested that the machine be built on paper, as a collection of cells on a grid. The idea intrigued von Neumann, who drew it up—creating the first of the devices later termed cellular automata.
Another advance was introduced by the mathematician John Conway. He constructed the well-known Game of Life. Unlike von Neumann's machine, Conway's Game of Life operated by simple rules in a virtual world in the form of a 2-dimensional checkerboard.
The Simula programming language, developed in the mid 1960s and widely implemented by the early 1970s, was the first framework for automating step-by-step agent simulations.
=== 1970s and 1980s: the first models ===
One of the earliest agent-based models in concept was Thomas Schelling's segregation model, which was discussed in his paper "Dynamic Models of Segregation" in 1971. Though Schelling originally used coins and graph paper rather than computers, his models embodied the basic concept of agent-based models as autonomous agents interacting in a shared environment with an observed aggregate, emergent outcome.
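The flavor of Schelling's result can be reproduced in a few lines. The following sketch is our own simplified one-dimensional variant, with an invented 50% similarity threshold and distance-2 neighborhood; even this mild preference typically produces segregated clusters:

```python
import random

random.seed(1)

# 1-D Schelling-style sketch: "A"/"B" agents and empty cells (None).
# An agent is unhappy if fewer than half of its non-empty neighbors
# (within distance 2) share its type; unhappy agents move to a random
# empty cell. Threshold and neighborhood size are illustrative choices.
SIZE, STEPS = 60, 2000
grid = [random.choice(["A", "B", None]) for _ in range(SIZE)]

def unhappy(i: int) -> bool:
    me = grid[i]
    neighbors = [grid[j] for j in range(max(0, i - 2), min(SIZE, i + 3))
                 if j != i and grid[j] is not None]
    return bool(neighbors) and sum(n == me for n in neighbors) / len(neighbors) < 0.5

for _ in range(STEPS):
    i = random.randrange(SIZE)
    if grid[i] is not None and unhappy(i):
        empties = [j for j in range(SIZE) if grid[j] is None]
        if empties:
            j = random.choice(empties)
            grid[j], grid[i] = grid[i], None

print("".join(c or "." for c in grid))   # clusters of A's and B's emerge
```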
In the late 1970s, Paulien Hogeweg and Bruce Hesper began experimenting with individual models of ecology. One of their first results was to show that the social structure of bumble-bee colonies emerged as a result of simple rules that govern the behaviour of individual bees.
They introduced the ToDo principle, referring to the way agents "do what there is to do" at any given time.
In the early 1980s, Robert Axelrod hosted a tournament of Prisoner's Dilemma strategies and had them interact in an agent-based manner to determine a winner. Axelrod would go on to develop many other agent-based models in the field of political science that examine phenomena from ethnocentrism to the dissemination of culture.
By the late 1980s, Craig Reynolds' work on flocking models contributed to the development of some of the first biological agent-based models that contained social characteristics. He tried to model the reality of lively biological agents, known as artificial life, a term coined by Christopher Langton.
The first use of the word "agent" and a definition as it is currently used today is hard to track down. One candidate appears to be John Holland and John H. Miller's 1991 paper "Artificial Adaptive Agents in Economic Theory", based on an earlier conference presentation of theirs. A stronger and earlier candidate is Allan Newell, who in the first Presidential Address of AAAI (published as The Knowledge Level) discussed intelligent agents as a concept.
At the same time, during the 1980s, social scientists, mathematicians, operations researchers, and a scattering of people from other disciplines developed Computational and Mathematical Organization Theory (CMOT). This field grew as a special interest group of The Institute of Management Sciences (TIMS) and its sister society, the Operations Research Society of America (ORSA).
=== 1990s: expansion ===
The 1990s were especially notable for the expansion of ABM within the social sciences. One notable effort was the large-scale ABM Sugarscape, developed by Joshua M. Epstein and Robert Axtell to simulate and explore the role of social phenomena such as seasonal migrations, pollution, sexual reproduction, combat, transmission of disease, and even culture. Other notable 1990s developments included Kathleen Carley's ABM at Carnegie Mellon University, which explored the co-evolution of social networks and culture. The Santa Fe Institute (SFI) was important in encouraging the development of the ABM modeling platform Swarm under the leadership of Christopher Langton. Research conducted through SFI allowed the expansion of ABM techniques to a number of fields, including the study of the social and spatial dynamics of small-scale human societies and primates. During this timeframe, Nigel Gilbert published the first textbook on social simulation, Simulation for the Social Scientist (1999), and established a journal from the perspective of social sciences: the Journal of Artificial Societies and Social Simulation (JASSS). Other than JASSS, agent-based models of any discipline are within the scope of the SpringerOpen journal Complex Adaptive Systems Modeling (CASM).
Through the mid-1990s, the social sciences thread of ABM began to focus on such issues as designing effective teams, understanding the communication required for organizational effectiveness, and the behavior of social networks. CMOT—later renamed Computational Analysis of Social and Organizational Systems (CASOS)—incorporated more and more agent-based modeling. Samuelson (2000) is a good brief overview of the early history, and Samuelson (2005) and Samuelson and Macal (2006) trace the more recent developments.
In the late 1990s, the merger of TIMS and ORSA to form INFORMS, and the move by INFORMS from two meetings each year to one, helped to spur the CMOT group to form a separate society, the North American Association for Computational Social and Organizational Sciences (NAACSOS). Kathleen Carley was a major contributor, especially to models of social networks, obtaining National Science Foundation funding for the annual conference and serving as the first President of NAACSOS. She was succeeded by David Sallach of the University of Chicago and Argonne National Laboratory, and then by Michael Prietula of Emory University. At about the same time NAACSOS began, the European Social Simulation Association (ESSA) and the Pacific Asian Association for Agent-Based Approach in Social Systems Science (PAAA), counterparts of NAACSOS, were organized. As of 2013, these three organizations collaborate internationally. The First World Congress on Social Simulation was held under their joint sponsorship in Kyoto, Japan, in August 2006. The Second World Congress was held in the northern Virginia suburbs of Washington, D.C., in July 2008, with George Mason University taking the lead role in local arrangements.
=== 2000s ===
More recently, Ron Sun developed methods for basing agent-based simulation on models of human cognition, known as cognitive social simulation. Bill McKelvey, Suzanne Lohmann, Dario Nardi, Dwight Read and others at UCLA have also made significant contributions in organizational behavior and decision-making. Since 1991, UCLA has arranged a conference at Lake Arrowhead, California, that has become another major gathering point for practitioners in this field.
=== 2020 and later ===
After the advent of large language models, researchers began applying interacting language models to agent based modeling. In one widely cited paper, agentic language models interacted in a sandbox environment to perform activities like planning birthday parties and holding elections.
== Theory ==
Most computational modeling research describes systems in equilibrium or as moving between equilibria. Agent-based modeling, however, using simple rules, can result in different sorts of complex and interesting behavior. The three ideas central to agent-based models are agents as objects, emergence, and complexity.
Agent-based models consist of dynamically interacting rule-based agents. The systems within which they interact can create real-world-like complexity. Typically agents are situated in space and time and reside in networks or in lattice-like neighborhoods. The location of the agents and their responsive behavior are encoded in algorithmic form in computer programs. In some cases, though not always, the agents may be considered as intelligent and purposeful. In ecological ABM (often referred to as "individual-based models" in ecology), agents may, for example, be trees in a forest, and would not be considered intelligent, although they may be "purposeful" in the sense of optimizing access to a resource (such as water).
The modeling process is best described as inductive. The modeler makes those assumptions thought most relevant to the situation at hand and then watches phenomena emerge from the agents' interactions. Sometimes that result is an equilibrium. Sometimes it is an emergent pattern. Sometimes, however, it is an unintelligible mangle.
In some ways, agent-based models complement traditional analytic methods. Where analytic methods enable humans to characterize the equilibria of a system, agent-based models allow the possibility of generating those equilibria. This generative contribution may be the most mainstream of the potential benefits of agent-based modeling. Agent-based models can explain the emergence of higher-order patterns—network structures of terrorist organizations and the Internet, power-law distributions in the sizes of traffic jams, wars, and stock-market crashes, and social segregation that persists despite populations of tolerant people. Agent-based models also can be used to identify lever points, defined as moments in time in which interventions have extreme consequences, and to distinguish among types of path dependency.
Rather than focusing on stable states, many models consider a system's robustness—the ways that complex systems adapt to internal and external pressures so as to maintain their functionalities. The task of harnessing that complexity requires consideration of the agents themselves—their diversity, connectedness, and level of interactions.
=== Framework ===
Recent work on the modeling and simulation of complex adaptive systems has demonstrated the need for combining agent-based and complex network based models. One proposed framework consists of four levels of developing models of complex adaptive systems, described using several example multidisciplinary case studies:
Complex Network Modeling Level for developing models using interaction data of various system components.
Exploratory Agent-based Modeling Level for developing agent-based models for assessing the feasibility of further research. This can be useful, for example, for developing proof-of-concept models for funding applications without requiring an extensive learning curve for the researchers.
Descriptive Agent-based Modeling (DREAM) for developing descriptions of agent-based models by means of using templates and complex network-based models. Building DREAM models allows model comparison across scientific disciplines.
Validated agent-based modeling using Virtual Overlay Multiagent system (VOMAS) for the development of verified and validated models in a formal manner.
Other methods of describing agent-based models include code templates and text-based methods such as the ODD (Overview, Design concepts, and Design Details) protocol.
The role of the environment where agents live, both macro and micro, is also becoming an important factor in agent-based modelling and simulation work. Simple environments afford simple agents, but complex environments generate diversity of behavior.
=== Multi-scale modelling ===
One strength of agent-based modelling is its ability to mediate information flow between scales. When additional details about an agent are needed, a researcher can integrate it with models describing the extra details. When one is interested in the emergent behaviours demonstrated by the agent population, they can combine the agent-based model with a continuum model describing population dynamics. For example, in a study about CD4+ T cells (a key cell type in the adaptive immune system), the researchers modelled biological phenomena occurring at different spatial (intracellular, cellular, and systemic), temporal, and organizational scales (signal transduction, gene regulation, metabolism, cellular behaviors, and cytokine transport). In the resulting modular model, signal transduction and gene regulation are described by a logical model, metabolism by constraint-based models, cell population dynamics are described by an agent-based model, and systemic cytokine concentrations by ordinary differential equations. In this multi-scale model, the agent-based model occupies the central place and orchestrates every stream of information flow between scales.
== Applications ==
=== In biology ===
Agent-based modeling has been used extensively in biology, including the analysis of the spread of epidemics, and the threat of biowarfare, biological applications including population dynamics, stochastic gene expression, plant-animal interactions, vegetation ecology, migratory ecology, landscape diversity, sociobiology, the growth and decline of ancient civilizations, evolution of ethnocentric behavior, forced displacement/migration, language choice dynamics, cognitive modeling, and biomedical applications including modeling 3D breast tissue formation/morphogenesis, the effects of ionizing radiation on mammary stem cell subpopulation dynamics, inflammation,
and the human immune system, and the evolution of foraging behaviors. Agent-based models have also been used for developing decision support systems such as for breast cancer. Agent-based models are increasingly being used to model pharmacological systems in early stage and pre-clinical research to aid in drug development and gain insights into biological systems that would not be possible a priori. Military applications have also been evaluated. Moreover, agent-based models have been recently employed to study molecular-level biological systems. Agent-based models have also been written to describe ecological processes at work in ancient systems, such as those in dinosaur environments and more recent ancient systems as well.
=== In epidemiology ===
Agent-based models now complement traditional compartmental models, the usual type of epidemiological models. ABMs have been shown to be superior to compartmental models in regard to the accuracy of predictions. Recently, ABMs such as CovidSim by epidemiologist Neil Ferguson, have been used to inform public health (nonpharmaceutical) interventions against the spread of SARS-CoV-2. Epidemiological ABMs have been criticized for simplifying and unrealistic assumptions. Still, they can be useful in informing decisions regarding mitigation and suppression measures in cases when ABMs are accurately calibrated. The ABMs for such simulations are mostly based on synthetic populations, since the data of the actual population is not always available.
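To illustrate the contrast with compartmental models, here is a minimal agent-based SIR-style sketch; every parameter (population size, contact count, infection and recovery probabilities) is invented for illustration. Each agent carries its own state, and infection spreads through explicit random contacts rather than through an aggregate rate equation:

```python
import random

random.seed(0)

# Minimal agent-based SIR sketch; all parameters are illustrative only.
N, P_INFECT, P_RECOVER, CONTACTS, DAYS = 500, 0.05, 0.1, 10, 60
state = ["S"] * N
for i in random.sample(range(N), 5):   # seed 5 initial infections
    state[i] = "I"

for day in range(DAYS):
    infected = [i for i in range(N) if state[i] == "I"]
    for i in infected:
        # Each infected agent meets a few random others.
        for j in random.sample(range(N), CONTACTS):
            if state[j] == "S" and random.random() < P_INFECT:
                state[j] = "I"
        if random.random() < P_RECOVER:
            state[i] = "R"

print({s: state.count(s) for s in "SIR"})   # final macro-level tallies
```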
=== In business, technology and network theory ===
Agent-based models have been used since the mid-1990s to solve a variety of business and technology problems. Examples of applications include marketing, organizational behaviour and cognition, team working, supply chain optimization and logistics, modeling of consumer behavior, including word of mouth, social network effects, distributed computing, workforce management, and portfolio management. They have also been used to analyze traffic congestion.
Recently, agent-based modelling and simulation has been applied to various domains, such as studying the impact of publication venues by researchers in the computer science domain (journals versus conferences). In addition, ABMs have been used to simulate information delivery in ambient assisted environments. A November 2016 arXiv article analyzed an agent-based simulation of the spread of posts on Facebook. In the domain of peer-to-peer, ad hoc and other self-organizing and complex networks, the usefulness of agent-based modeling and simulation has been shown. The use of a computer science-based formal specification framework coupled with wireless sensor networks and an agent-based simulation has recently been demonstrated.
Agent-based evolutionary search is an emerging research topic for solving complex optimization problems.
=== In team science ===
In the realm of team science, agent-based modeling has been utilized to assess the effects of team members' characteristics and biases on team performance across various settings. By simulating interactions between agents—each representing individual team members with distinct traits and biases—this modeling approach enables researchers to explore how these factors collectively influence the dynamics and outcomes of team performance. Consequently, agent-based modeling provides a nuanced understanding of team science, facilitating a deeper exploration of the subtleties and variabilities inherent in team-based collaborations.
=== In economics and social sciences ===
Prior to, and during, the 2008 financial crisis, interest grew in ABMs as possible tools for economic analysis. ABMs do not assume the economy can achieve equilibrium, and "representative agents" are replaced by agents with diverse, dynamic, and interdependent behavior, including herding. ABMs take a "bottom-up" approach and can generate extremely complex and volatile simulated economies. ABMs can represent unstable systems with crashes and booms that develop out of non-linear (disproportionate) responses to proportionally small changes. A July 2010 article in The Economist looked at ABMs as alternatives to DSGE models. The journal Nature also encouraged agent-based modeling with an editorial that suggested ABMs can do a better job of representing financial markets and other economic complexities than standard models, along with an essay by J. Doyne Farmer and Duncan Foley that argued ABMs could fulfill both the desires of Keynes to represent a complex economy and of Robert Lucas to construct models based on microfoundations. Farmer and Foley pointed to progress that has been made using ABMs to model parts of an economy, but argued for the creation of a very large model that incorporates low-level models. By modeling a complex system of analysts based on three distinct behavioral profiles – imitating, anti-imitating, and indifferent – financial markets were simulated to high accuracy. Results showed a correlation between network morphology and the stock market index. However, the ABM approach has been criticized for its lack of robustness between models, where similar models can yield very different results.
ABMs have been deployed in architecture and urban planning to evaluate design, to simulate pedestrian flow in the urban environment, and to examine public policy applications to land use. There is also a growing field of socio-economic analysis of infrastructure investment impact using the ability of ABMs to discern systemic impacts upon a socio-economic network. Heterogeneity and dynamics can easily be built into ABMs to address wealth inequality and social mobility.
ABMs have also been proposed as applied educational tools for diplomats in the field of international relations and for domestic and international policymakers to enhance their evaluation of public policy.
=== In water management ===
ABMs have also been applied in water resources planning and management, particularly for exploring, simulating, and predicting the performance of infrastructure design and policy decisions, and in assessing the value of cooperation and information exchange in large water resources systems.
=== Organizational ABM: agent-directed simulation ===
The agent-directed simulation (ADS) metaphor distinguishes between two categories, namely "Systems for Agents" and "Agents for Systems." Systems for Agents (sometimes referred to as agent systems) are systems that implement agents for use in engineering, human and social dynamics, military applications, and other domains. Agents for Systems are divided into two subcategories. Agent-supported systems deal with the use of agents as a support facility to enable computer assistance in problem solving or to enhance cognitive capabilities. Agent-based systems focus on the use of agents for the generation of model behavior in a system evaluation (system studies and analyses).
=== Self-driving cars ===
Hallerbach et al. discussed the application of agent-based approaches for the development and validation of automated driving systems via a digital twin of the vehicle-under-test and microscopic traffic simulation based on independent agents. Waymo has created a multi-agent simulation environment Carcraft to test algorithms for self-driving cars. It simulates traffic interactions between human drivers, pedestrians and automated vehicles. People's behavior is imitated by artificial agents based on data of real human behavior. The basic idea of using agent-based modeling to understand self-driving cars was discussed as early as 2003.
== Implementation ==
Many ABM frameworks are designed for serial von Neumann computer architectures, limiting the speed and scalability of implemented models. Since emergent behavior in large-scale ABMs is dependent on population size, scalability restrictions may hinder model validation. Such limitations have mainly been addressed using distributed computing, with frameworks such as Repast HPC specifically dedicated to this type of implementation. While such approaches map well to cluster and supercomputer architectures, issues related to communication and synchronization, as well as deployment complexity, remain potential obstacles for their widespread adoption.
A recent development is the use of data-parallel algorithms on graphics processing units (GPUs) for ABM simulation. The extreme memory bandwidth combined with the sheer number-crunching power of multi-processor GPUs has enabled simulation of millions of agents at tens of frames per second.
=== Integration with other modeling forms ===
Since Agent-Based Modeling is more of a modeling framework than a particular piece of software or platform, it has often been used in conjunction with other modeling forms. For instance, agent-based models have also been combined with Geographic Information Systems (GIS). This provides a useful combination where the ABM serves as a process model and the GIS system can provide a model of pattern. Similarly, Social Network Analysis (SNA) tools and agent-based models are sometimes integrated, where the ABM is used to simulate the dynamics on the network while the SNA tool models and analyzes the network of interactions. Tools like GAMA provide a natural way to integrate system dynamics and GIS with ABM.
== Verification and validation ==
Verification and validation (V&V) of simulation models is extremely important. Verification involves making sure the implemented model matches the conceptual model, whereas validation ensures that the implemented model has some relationship to the real world. Face validation, sensitivity analysis, calibration, and statistical validation are different aspects of validation. A discrete-event simulation framework approach for the validation of agent-based systems has been proposed, and comprehensive resources on the empirical validation of agent-based models exist.
As an example of a V&V technique, consider VOMAS (virtual overlay multi-agent system), a software-engineering-based approach in which a virtual overlay multi-agent system is developed alongside the agent-based model. Muazi et al. also provide an example of using VOMAS for verification and validation of a forest fire simulation model. Another software engineering method, Test-Driven Development, has been adapted for agent-based model validation. This approach has the added advantage of allowing automatic validation using unit-testing tools.
== See also ==
== References ==
=== General ===
== External links ==
=== Articles/general information ===
Agent-based models of social networks, java applets.
On-Line Guide for Newcomers to Agent-Based Modeling in the Social Sciences
Introduction to Agent-based Modeling and Simulation. Argonne National Laboratory, November 29, 2006.
Agent-based models in Ecology – Using computer models as theoretical tools to analyze complex ecological systems
Network for Computational Modeling in the Social and Ecological Sciences' Agent Based Modeling FAQ
Multiagent Information Systems – Article on the convergence of SOA, BPM and Multi-Agent Technology in the domain of the Enterprise Information Systems. Jose Manuel Gomez Alvarez, Artificial Intelligence, Technical University of Madrid – 2006
Artificial Life Framework
Article providing methodology for moving real world human behaviors into a simulation model where agent behaviors are represented
Agent-based Modeling Resources, an information hub for modelers, methods, and philosophy for agent-based modeling
An Agent-Based Model of the Flash Crash of May 6, 2010, with Policy Implications, Tommi A. Vuorenmaa (Valo Research and Trading), Liang Wang (University of Helsinki - Department of Computer Science), October, 2013
=== Simulation models ===
Multi-agent Meeting Scheduling System Model by Qasim Siddique
Multi-firm market simulation by Valentino Piana
List of COVID-19 simulation models
In mathematics and science, a nonlinear system (or a non-linear system) is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists since most systems are inherently nonlinear in nature. Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems.
Typically, the behavior of a nonlinear system is described in mathematics by a nonlinear system of equations, which is a set of simultaneous equations in which the unknowns (or the unknown functions in the case of differential equations) appear as variables of a polynomial of degree higher than one or in the argument of a function which is not a polynomial of degree one.
In other words, in a nonlinear system of equations, the equation(s) to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. Systems can be defined as nonlinear, regardless of whether known linear functions appear in the equations. In particular, a differential equation is linear if it is linear in terms of the unknown function and its derivatives, even if nonlinear in terms of the other variables appearing in it.
As nonlinear dynamical equations are difficult to solve, nonlinear systems are commonly approximated by linear equations (linearization). This works well up to some accuracy and some range for the input values, but some interesting phenomena such as solitons, chaos, and singularities are hidden by linearization. It follows that some aspects of the dynamic behavior of a nonlinear system can appear to be counterintuitive, unpredictable or even chaotic. Although such chaotic behavior may resemble random behavior, it is in fact not random. For example, some aspects of the weather are seen to be chaotic, where simple changes in one part of the system produce complex effects throughout. This nonlinearity is one of the reasons why accurate long-term forecasts are impossible with current technology.
Some authors use the term nonlinear science for the study of nonlinear systems. This term is disputed by others:
Using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals.
== Definition ==
In mathematics, a linear map (or linear function) {\displaystyle f(x)} is one which satisfies both of the following properties:
Additivity or superposition principle: {\displaystyle \textstyle f(x+y)=f(x)+f(y);}
Homogeneity: {\displaystyle \textstyle f(\alpha x)=\alpha f(x).}
Additivity implies homogeneity for any rational α, and, for continuous functions, for any real α. For a complex α, homogeneity does not follow from additivity. For example, an antilinear map is additive but not homogeneous. The conditions of additivity and homogeneity are often combined in the superposition principle
{\displaystyle f(\alpha x+\beta y)=\alpha f(x)+\beta f(y)}
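As a concrete illustration, the superposition property can be probed numerically. The following Python sketch is an arbitrary spot check, not part of the standard definition; the helper name and test functions are illustrative choices:

```python
import random

def is_superposable(f, trials=1000, tol=1e-6):
    """Numerically probe f(a*x + b*y) == a*f(x) + b*f(y) at random points."""
    for _ in range(trials):
        a, b, x, y = (random.uniform(-10, 10) for _ in range(4))
        if abs(f(a * x + b * y) - (a * f(x) + b * f(y))) > tol:
            return False
    return True

print(is_superposable(lambda x: 3 * x))   # True: a linear map
print(is_superposable(lambda x: x ** 2))  # False: nonlinear
```

Passing such a random test does not prove linearity, but a single failing point suffices to show a map is nonlinear.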
An equation written as {\displaystyle f(x)=C} is called linear if {\displaystyle f(x)} is a linear map (as defined above) and nonlinear otherwise. The equation is called homogeneous if {\displaystyle C=0} and {\displaystyle f(x)} is a homogeneous function.
The definition {\displaystyle f(x)=C} is very general in that {\displaystyle x} can be any sensible mathematical object (number, vector, function, etc.), and the function {\displaystyle f(x)} can literally be any mapping, including integration or differentiation with associated constraints (such as boundary values). If {\displaystyle f(x)} contains differentiation with respect to {\displaystyle x}, the result will be a differential equation.
== Nonlinear systems of equations ==
A nonlinear system of equations consists of a set of equations in several variables such that at least one of them is not a linear equation.
For a single equation of the form {\displaystyle f(x)=0,} many methods have been designed; see Root-finding algorithm. In the case where f is a polynomial, one has a polynomial equation such as
{\displaystyle x^{2}+x-1=0.}
The general root-finding algorithms apply to polynomial roots, but, generally, they do not find all the roots, and their failure to find a root does not imply that no root exists. Specific methods for polynomials allow finding all roots or the real roots; see real-root isolation.
Solving systems of polynomial equations, that is, finding the common zeros of a set of several polynomials in several variables, is a difficult problem for which elaborate algorithms have been designed, such as Gröbner basis algorithms.
For the general case of a system of equations formed by equating to zero several differentiable functions, the main method is Newton's method and its variants. Generally they may provide a solution, but do not provide any information on the number of solutions.
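As a sketch of what such a method looks like in practice, the following Python code applies Newton's method to a small system of two equations. The helper function, example system, starting point, and tolerance are all illustrative choices:

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method for F(x) = 0 with Jacobian J; may diverge or miss roots."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(x), -F(x))  # solve J(x) * step = -F(x)
        x += step
        if np.linalg.norm(step) < tol:
            break
    return x

# Example: the unit circle x^2 + y^2 = 1 intersected with the parabola y = x^2.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1, v[1] - v[0]**2])
J = lambda v: np.array([[2*v[0], 2*v[1]], [-2*v[0], 1.0]])

print(newton_system(F, J, [1.0, 1.0]))  # converges to a root near (0.786, 0.618)
```

Consistent with the text, the iteration converges to one root from this starting point but says nothing about how many other solutions exist.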
== Nonlinear recurrence relations ==
A nonlinear recurrence relation defines successive terms of a sequence as a nonlinear function of preceding terms. Examples of nonlinear recurrence relations are the logistic map and the relations that define the various Hofstadter sequences. Nonlinear discrete models that represent a wide class of nonlinear recurrence relationships include the NARMAX (Nonlinear Autoregressive Moving Average with eXogenous inputs) model and the related nonlinear system identification and analysis procedures. These approaches can be used to study a wide class of complex nonlinear behaviors in the time, frequency, and spatio-temporal domains.
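For instance, the logistic map is a one-line nonlinear recurrence. The short Python sketch below iterates it at the parameter value r = 4, a regime known to be chaotic, and illustrates sensitive dependence on initial conditions; the initial values are arbitrary:

```python
def logistic_map(r, x0, n):
    """Iterate the nonlinear recurrence x_{k+1} = r * x_k * (1 - x_k)."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# At r = 4 the map is chaotic: two nearby initial conditions separate rapidly.
a = logistic_map(4.0, 0.2, 50)
b = logistic_map(4.0, 0.2 + 1e-9, 50)
print(abs(a[-1] - b[-1]))  # typically of order 1 despite the 1e-9 difference
```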
== Nonlinear differential equations ==
A system of differential equations is said to be nonlinear if it is not a system of linear equations. Problems involving nonlinear differential equations are extremely diverse, and methods of solution or analysis are problem dependent. Examples of nonlinear differential equations are the Navier–Stokes equations in fluid dynamics and the Lotka–Volterra equations in biology.
One of the greatest difficulties of nonlinear problems is that it is not generally possible to combine known solutions into new solutions. In linear problems, for example, a family of linearly independent solutions can be used to construct general solutions through the superposition principle. A good example of this is one-dimensional heat transport with Dirichlet boundary conditions, the solution of which can be written as a time-dependent linear combination of sinusoids of differing frequencies; this makes solutions very flexible. It is often possible to find several very specific solutions to nonlinear equations, however the lack of a superposition principle prevents the construction of new solutions.
=== Ordinary differential equations ===
First order ordinary differential equations are often exactly solvable by separation of variables, especially for autonomous equations. For example, the nonlinear equation
{\displaystyle {\frac {du}{dx}}=-u^{2}}
has
{\displaystyle u={\frac {1}{x+C}}}
as a general solution (and also the special solution {\displaystyle u=0}, corresponding to the limit of the general solution when C tends to infinity). The equation is nonlinear because it may be written as
{\displaystyle {\frac {du}{dx}}+u^{2}=0}
and the left-hand side of the equation is not a linear function of {\displaystyle u} and its derivatives. Note that if the {\displaystyle u^{2}} term were replaced with {\displaystyle u}, the problem would be linear (the exponential decay problem).
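Assuming the SymPy library is available, this example can be checked symbolically with its general ODE solver dsolve:

```python
import sympy as sp

x, C = sp.symbols('x C')
u = sp.Function('u')

# Solve the separable equation du/dx = -u^2.
sol = sp.dsolve(sp.Eq(u(x).diff(x), -u(x)**2), u(x))
print(sol)  # Eq(u(x), 1/(C1 + x)), i.e. u = 1/(x + C)

# Substitute u = 1/(x + C) back into du/dx + u^2 and confirm it vanishes.
expr = 1 / (x + C)
print(sp.simplify(sp.diff(expr, x) + expr**2))  # 0
```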
Second and higher order ordinary differential equations (more generally, systems of nonlinear equations) rarely yield closed-form solutions, though implicit solutions and solutions involving nonelementary integrals are encountered.
Common methods for the qualitative analysis of nonlinear ordinary differential equations include:
Examination of any conserved quantities, especially in Hamiltonian systems
Examination of dissipative quantities (see Lyapunov function) analogous to conserved quantities
Linearization via Taylor expansion
Change of variables into something easier to study
Bifurcation theory
Perturbation methods (can be applied to algebraic equations too)
Existence of solutions of finite duration, which can happen under specific conditions for some nonlinear ordinary differential equations.
=== Partial differential equations ===
The most common basic approach to studying nonlinear partial differential equations is to change the variables (or otherwise transform the problem) so that the resulting problem is simpler (possibly linear). Sometimes, the equation may be transformed into one or more ordinary differential equations, as seen in separation of variables, which is always useful whether or not the resulting ordinary differential equation(s) is solvable.
Another common (though less mathematical) tactic, often exploited in fluid and heat mechanics, is to use scale analysis to simplify a general, natural equation in a certain specific boundary value problem. For example, the (very) nonlinear Navier-Stokes equations can be simplified into one linear partial differential equation in the case of transient, laminar, one dimensional flow in a circular pipe; the scale analysis provides conditions under which the flow is laminar and one dimensional and also yields the simplified equation.
Other methods include examining the characteristics and using the methods outlined above for ordinary differential equations.
=== Pendula ===
A classic, extensively studied nonlinear problem is the dynamics of a frictionless pendulum under the influence of gravity. Using Lagrangian mechanics, it may be shown that the motion of a pendulum can be described by the dimensionless nonlinear equation
{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\sin(\theta )=0}
where gravity points "downwards" and {\displaystyle \theta } is the angle the pendulum forms with its rest position. One approach to "solving" this equation is to use {\displaystyle d\theta /dt} as an integrating factor, which would eventually yield
{\displaystyle \int {\frac {d\theta }{\sqrt {C_{0}+2\cos(\theta )}}}=t+C_{1}}
which is an implicit solution involving an elliptic integral. This "solution" generally does not have many uses because most of the nature of the solution is hidden in the nonelementary integral (nonelementary unless {\displaystyle C_{0}=2}).
Another way to approach the problem is to linearize any nonlinearity (the sine function term in this case) at the various points of interest through Taylor expansions. For example, the linearization at {\displaystyle \theta =0}, called the small angle approximation, is
{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\theta =0}
since {\displaystyle \sin(\theta )\approx \theta } for {\displaystyle \theta \approx 0}. This is a simple harmonic oscillator corresponding to oscillations of the pendulum near the bottom of its path. Another linearization would be at {\displaystyle \theta =\pi }, corresponding to the pendulum being straight up:
{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\pi -\theta =0}
since {\displaystyle \sin(\theta )\approx \pi -\theta } for {\displaystyle \theta \approx \pi }. The solution to this problem involves hyperbolic sinusoids, and note that unlike the small angle approximation, this approximation is unstable, meaning that {\displaystyle |\theta |} will usually grow without limit, though bounded solutions are possible. This corresponds to the difficulty of balancing a pendulum upright; it is literally an unstable state.
One more interesting linearization is possible around {\displaystyle \theta =\pi /2}, around which {\displaystyle \sin(\theta )\approx 1}:
{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+1=0.}
This corresponds to a free fall problem. A very useful qualitative picture of the pendulum's dynamics may be obtained by piecing together such linearizations. Other techniques may be used to find (exact) phase portraits and approximate periods.
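The behavior described above can also be explored numerically. The following sketch (assuming NumPy and SciPy are available; the initial angle and time span are arbitrary) integrates the full pendulum equation and measures how far it drifts from the small angle approximation:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Full nonlinear pendulum theta'' + sin(theta) = 0, as a first-order system.
def pendulum(t, y):
    theta, omega = y
    return [omega, -np.sin(theta)]

theta0 = 0.5  # moderate initial angle (radians), released from rest
t_eval = np.linspace(0, 20, 200)
sol = solve_ivp(pendulum, [0, 20], [theta0, 0.0], t_eval=t_eval, rtol=1e-9)

# Small angle approximation theta'' + theta = 0 has solution theta0 * cos(t).
approx = theta0 * np.cos(t_eval)
print("max deviation from small-angle solution:",
      np.max(np.abs(sol.y[0] - approx)))
```

For small initial angles the deviation stays tiny; increasing theta0 makes the nonlinear period visibly longer than the linearized one, which is exactly the information the linearization hides.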
== Types of nonlinear dynamic behaviors ==
Amplitude death – any oscillations present in the system cease due to some kind of interaction with other system or feedback by the same system
Chaos – values of a system cannot be predicted indefinitely far into the future, and fluctuations are aperiodic
Multistability – the presence of two or more stable states
Solitons – self-reinforcing solitary waves
Limit cycles – asymptotic periodic orbits to which destabilized fixed points are attracted.
Self-oscillations – feedback oscillations taking place in open dissipative physical systems.
== Examples of nonlinear equations ==
== See also ==
== References ==
== Further reading ==
== External links ==
Command and Control Research Program (CCRP)
New England Complex Systems Institute: Concepts in Complex Systems
Nonlinear Dynamics I: Chaos at MIT's OpenCourseWare
Nonlinear Model Library – (in MATLAB) a Database of Physical Systems
The Center for Nonlinear Studies at Los Alamos National Laboratory
In thermodynamics, the entropy of fusion is the increase in entropy when melting a solid substance. This is almost always positive since the degree of disorder increases in the transition from an organized crystalline solid to the disorganized structure of a liquid; the only known exception is helium. It is denoted as {\displaystyle \Delta S_{\text{fus}}} and normally expressed in joules per mole-kelvin, J/(mol·K).
A natural process such as a phase transition will occur when the associated change in the Gibbs free energy is negative:
{\displaystyle \Delta G_{\text{fus}}=\Delta H_{\text{fus}}-T\times \Delta S_{\text{fus}}<0,}
where {\displaystyle \Delta H_{\text{fus}}} is the enthalpy of fusion. Since this is a thermodynamic equation, the symbol {\displaystyle T} refers to the absolute thermodynamic temperature, measured in kelvins (K).
Equilibrium occurs when the temperature is equal to the melting point {\displaystyle T=T_{f}} so that
{\displaystyle \Delta G_{\text{fus}}=\Delta H_{\text{fus}}-T_{f}\times \Delta S_{\text{fus}}=0,}
and the entropy of fusion is the heat of fusion divided by the melting point:
{\displaystyle \Delta S_{\text{fus}}={\frac {\Delta H_{\text{fus}}}{T_{f}}}}
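As a worked example, using the well-known approximate values for water (enthalpy of fusion about 6.01 kJ/mol, melting point 273.15 K), the formula gives an entropy of fusion of roughly 22 J/(mol·K):

```python
# Entropy of fusion from Delta S_fus = Delta H_fus / T_f, for water ice.
delta_H_fus = 6010.0  # J/mol, approximate enthalpy of fusion of ice
T_f = 273.15          # K, melting point of ice

delta_S_fus = delta_H_fus / T_f
print(f"Delta S_fus = {delta_S_fus:.1f} J/(mol K)")  # about 22.0 J/(mol K)
```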
== Helium ==
Helium-3 has a negative entropy of fusion at temperatures below 0.3 K. Helium-4 also has a very slightly negative entropy of fusion below 0.8 K. This means that, at appropriate constant pressures, these substances freeze with the addition of heat.
== See also ==
Entropy of vaporization
== Notes ==
== References ==
In physics, black hole thermodynamics is the area of study that seeks to reconcile the laws of thermodynamics with the existence of black hole event horizons. As the study of the statistical mechanics of black-body radiation led to the development of the theory of quantum mechanics, the effort to understand the statistical mechanics of black holes has had a deep impact upon the understanding of quantum gravity, leading to the formulation of the holographic principle.
== Overview ==
The second law of thermodynamics requires that black holes have entropy. If black holes carried no entropy, it would be possible to violate the second law by throwing mass into the black hole. The increase of the entropy of the black hole more than compensates for the decrease of the entropy carried by the object that was swallowed.
In 1972, Jacob Bekenstein conjectured that black holes should have an entropy proportional to the area of the event horizon; in the same year, he proposed no-hair theorems.
In 1973 Bekenstein suggested
{\displaystyle {\frac {\ln {2}}{0.8\pi }}\approx 0.276}
as the constant of proportionality, asserting that if the constant was not exactly this, it must be very close to it. The next year, in 1974, Stephen Hawking showed that black holes emit thermal Hawking radiation corresponding to a certain temperature (Hawking temperature). Using the thermodynamic relationship between energy, temperature and entropy, Hawking was able to confirm Bekenstein's conjecture and fix the constant of proportionality at {\displaystyle 1/4}:
{\displaystyle S_{\text{BH}}={\frac {k_{\text{B}}A}{4\ell _{\text{P}}^{2}}},}
where {\displaystyle A} is the area of the event horizon, {\displaystyle k_{\text{B}}} is the Boltzmann constant, and {\displaystyle \ell _{\text{P}}={\sqrt {G\hbar /c^{3}}}} is the Planck length.
This is often referred to as the Bekenstein–Hawking formula. The subscript BH either stands for "black hole" or "Bekenstein–Hawking". The black hole entropy is proportional to the area of its event horizon {\displaystyle A}. The fact that the black hole entropy is also the maximal entropy that can be obtained by the Bekenstein bound (wherein the Bekenstein bound becomes an equality) was the main observation that led to the holographic principle. This area relationship was generalized to arbitrary regions via the Ryu–Takayanagi formula, which relates the entanglement entropy of a boundary conformal field theory to a specific surface in its dual gravitational theory.
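To give a sense of scale, the formula can be evaluated numerically. The sketch below uses approximate SI values for the physical constants to estimate the Bekenstein–Hawking entropy of a Schwarzschild black hole of one solar mass:

```python
import math

# Physical constants (SI, approximate)
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s
c    = 2.998e8     # speed of light, m/s
k_B  = 1.381e-23   # Boltzmann constant, J/K
M    = 1.989e30    # one solar mass, kg

r_s = 2 * G * M / c**2        # Schwarzschild radius, about 2.95 km
A = 4 * math.pi * r_s**2      # horizon area
l_P2 = G * hbar / c**3        # Planck length squared

S_BH = k_B * A / (4 * l_P2)   # Bekenstein-Hawking entropy
print(f"S_BH ~ {S_BH:.2e} J/K, or {S_BH / k_B:.2e} in units of k_B")
# roughly 1e54 J/K, i.e. about 1e77 k_B
```

The enormous result, vastly larger than the ordinary entropy of a star of the same mass, is one way of seeing why black hole microstates were such a puzzle for statistical mechanics.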
Although Hawking's calculations gave further thermodynamic evidence for black hole entropy, until 1995 no one was able to make a controlled calculation of black hole entropy based on statistical mechanics, which associates entropy with a large number of microstates. In fact, so called "no-hair" theorems appeared to suggest that black holes could have only a single microstate. The situation changed in 1995 when Andrew Strominger and Cumrun Vafa calculated the right Bekenstein–Hawking entropy of a supersymmetric black hole in string theory, using methods based on D-branes and string duality. Their calculation was followed by many similar computations of entropy of large classes of other extremal and near-extremal black holes, and the result always agreed with the Bekenstein–Hawking formula. However, for the Schwarzschild black hole, viewed as the most far-from-extremal black hole, the relationship between micro- and macrostates has not been characterized. Efforts to develop an adequate answer within the framework of string theory continue.
In loop quantum gravity (LQG) it is possible to associate a geometrical interpretation with the microstates: these are the quantum geometries of the horizon. LQG offers a geometric explanation of the finiteness of the entropy and of its proportionality to the area of the horizon. It is possible to derive, from the covariant formulation of the full quantum theory (spinfoam), the correct relation between energy and area (first law), the Unruh temperature and the distribution that yields Hawking entropy. The calculation makes use of the notion of dynamical horizon and is done for non-extremal black holes. The calculation of the Bekenstein–Hawking entropy from the point of view of loop quantum gravity has also been discussed. The currently accepted microstate ensemble for black holes is the microcanonical ensemble. The partition function for black holes results in a negative heat capacity. In canonical ensembles, a positive heat capacity is required, whereas microcanonical ensembles can exist at a negative heat capacity.
== The laws of black hole mechanics ==
The four laws of black hole mechanics are physical properties that black holes are believed to satisfy. The laws, analogous to the laws of thermodynamics, were discovered by Jacob Bekenstein, Brandon Carter, and James Bardeen. Further considerations were made by Stephen Hawking.
=== Statement of the laws ===
The laws of black hole mechanics are expressed in geometrized units.
==== The zeroth law ====
The horizon has constant surface gravity for a stationary black hole.
==== The first law ====
For perturbations of stationary black holes, the change of energy is related to change of area, angular momentum, and electric charge by
{\displaystyle dE={\frac {\kappa }{8\pi }}\,dA+\Omega \,dJ+\Phi \,dQ,}
where {\displaystyle E} is the energy, {\displaystyle \kappa } is the surface gravity, {\displaystyle A} is the horizon area, {\displaystyle \Omega } is the angular velocity, {\displaystyle J} is the angular momentum, {\displaystyle \Phi } is the electrostatic potential and {\displaystyle Q} is the electric charge.
==== The second law ====
The horizon area is, assuming the weak energy condition, a non-decreasing function of time:
{\displaystyle {\frac {dA}{dt}}\geq 0.}
This "law" was superseded by Hawking's discovery that black holes radiate, which causes both the black hole's mass and the area of its horizon to decrease over time.
==== The third law ====
It is not possible to form a black hole with vanishing surface gravity. That is, {\displaystyle \kappa =0} cannot be achieved.
=== Discussion of the laws ===
==== The zeroth law ====
The zeroth law is analogous to the zeroth law of thermodynamics, which states that the temperature is constant throughout a body in thermal equilibrium. It suggests that the surface gravity is analogous to temperature: T constant for thermal equilibrium in a normal system is analogous to {\displaystyle \kappa } constant over the horizon of a stationary black hole.
==== The first law ====
The left side, {\displaystyle dE}, is the change in energy (proportional to mass). Although the first term does not have an immediately obvious physical interpretation, the second and third terms on the right side represent changes in energy due to rotation and electromagnetism. Analogously, the first law of thermodynamics is a statement of energy conservation, which contains on its right side the term {\displaystyle TdS}.
==== The second law ====
The second law is the statement of Hawking's area theorem. Analogously, the second law of thermodynamics states that the change in entropy of an isolated system will be greater than or equal to zero for a spontaneous process, suggesting a link between entropy and the area of a black hole horizon. Taken naively, however, this version appears to violate the second law of thermodynamics: matter falling into a black hole carries its entropy with it, so the entropy visible outside decreases. Generalizing the second law to the sum of the black hole entropy and the outside entropy shows that the second law of thermodynamics is not violated in a system that includes the universe beyond the horizon.
The generalized second law of thermodynamics (GSL) was needed to present the second law of thermodynamics as valid. This is because the second law of thermodynamics, as a result of the disappearance of entropy behind black hole horizons, is not useful on its own. The GSL allows the law to be applied because the combined entropy of the black hole interior and the exterior is now measurable. The validity of the GSL can be established by studying an example, such as a system with entropy that falls into a bigger, non-moving black hole, and establishing upper and lower entropy bounds for the increase in the black hole entropy and the entropy of the system, respectively. One should also note that the GSL will hold for theories of gravity such as Einstein gravity, Lovelock gravity, or braneworld gravity, because the conditions required to use the GSL in these theories can be met.
However, on the topic of black hole formation, the question becomes whether the generalized second law of thermodynamics remains valid, and if it does, whether it can be proved valid for all situations. Because black hole formation is not stationary, proving that the GSL holds is difficult. Proving the GSL generally valid would require using quantum-statistical mechanics, because the GSL is both a quantum and a statistical law. This discipline does not yet exist, so the GSL can only be assumed to be useful in general, as well as for prediction. For example, one can use the GSL to predict that, for a cold, non-rotating assembly of {\displaystyle N} nucleons, {\displaystyle S_{BH}-S>0}, where {\displaystyle S_{BH}} is the entropy of a black hole and {\displaystyle S} is the sum of the ordinary entropy.
==== The third law ====
The third law of black hole thermodynamics is controversial. Specific counterexamples called extremal black holes fail to obey the rule. The classical third law of thermodynamics, known as the Nernst theorem, which says the entropy of a system must go to zero as the temperature goes to absolute zero, is also not a universal law. However, the systems that fail the classical third law have not been realized in practice, leading to the suggestion that extremal black holes may not represent the physics of black holes generally. A weaker form of the classical third law, known as the "unattainability principle", states that an infinite number of steps are required to put a system into its ground state. This form of the third law does have an analog in black hole physics.
=== Interpretation of the laws ===
The four laws of black hole mechanics suggest that one should identify the surface gravity of a black hole with temperature and the area of the event horizon with entropy, at least up to some multiplicative constants. If one only considers black holes classically, then they have zero temperature and, by the no-hair theorem, zero entropy, and the laws of black hole mechanics remain an analogy. However, when quantum-mechanical effects are taken into account, one finds that black holes emit thermal radiation (Hawking radiation) at a temperature
{\displaystyle T_{\text{H}}={\frac {\kappa }{2\pi }}.}
From the first law of black hole mechanics, this determines the multiplicative constant of the Bekenstein–Hawking entropy, which is (in geometrized units)
{\displaystyle S_{\text{BH}}={\frac {A}{4}},}
which is the entropy of the black hole in Einstein's general relativity. Quantum field theory in curved spacetime can be utilized to calculate the entropy for a black hole in any covariant theory for gravity, known as the Wald entropy.
== Critique ==
While black hole thermodynamics (BHT) has been regarded as one of the deepest clues to a quantum theory of gravity, there remains a philosophical criticism that "the analogy is not nearly as good as is commonly supposed", that it “is often based on a kind of caricature of thermodynamics” and "it’s unclear what the systems in BHT are supposed to be".
These criticisms were reexamined in detail, ending with the opposite conclusion, "stationary black holes are not analogous to thermodynamic systems: they are thermodynamic systems, in the fullest sense."
== Beyond black holes ==
Gary Gibbons and Hawking have shown that black hole thermodynamics is more general than black holes—that cosmological event horizons also have an entropy and temperature.
More fundamentally, Gerard 't Hooft and Leonard Susskind used the laws of black hole thermodynamics to argue for a general holographic principle of nature, which asserts that consistent theories of gravity and quantum mechanics must be lower-dimensional. Though not yet fully understood in general, the holographic principle is central to theories like the AdS/CFT correspondence.
There are also connections between black hole entropy and fluid surface tension.
== See also ==
Joseph Polchinski
Robert Wald
== Notes ==
== Citations ==
== Bibliography ==
Bardeen, J. M.; Carter, B.; Hawking, S. W. (1973). "The four laws of black hole mechanics". Communications in Mathematical Physics. 31 (2): 161–170. Bibcode:1973CMaPh..31..161B. doi:10.1007/BF01645742. S2CID 54690354.
Bekenstein, Jacob D. (April 1973). "Black holes and entropy". Physical Review D. 7 (8): 2333–2346. Bibcode:1973PhRvD...7.2333B. doi:10.1103/PhysRevD.7.2333. S2CID 122636624.
Hawking, Stephen W. (1974). "Black hole explosions?". Nature. 248 (5443): 30–31. Bibcode:1974Natur.248...30H. doi:10.1038/248030a0. S2CID 4290107.
Hawking, Stephen W. (1975). "Particle creation by black holes". Communications in Mathematical Physics. 43 (3): 199–220. Bibcode:1975CMaPh..43..199H. doi:10.1007/BF02345020. S2CID 55539246.
Hawking, S. W.; Ellis, G. F. R. (1973). The Large Scale Structure of Space–Time. New York: Cambridge University Press. ISBN 978-0-521-09906-6.
Hawking, Stephen W. (1994). "The Nature of Space and Time". arXiv:hep-th/9409195.
't Hooft, Gerardus (1985). "On the quantum structure of a black hole" (PDF). Nuclear Physics B. 256: 727–745. Bibcode:1985NuPhB.256..727T. doi:10.1016/0550-3213(85)90418-3. Archived from the original (PDF) on 2011-09-26.
Page, Don (2005). "Hawking Radiation and Black Hole Thermodynamics". New Journal of Physics. 7 (1): 203. arXiv:hep-th/0409024. Bibcode:2005NJPh....7..203P. doi:10.1088/1367-2630/7/1/203. S2CID 119047329.
== External links ==
Bekenstein-Hawking entropy on Scholarpedia
Black Hole Thermodynamics
Black hole entropy on arxiv.org
In mathematics and physics, time-reversibility is the property of a process whose governing rules remain unchanged when the direction of its sequence of actions is reversed.
A deterministic process is time-reversible if the time-reversed process satisfies the same dynamic equations as the original process; in other words, the equations are invariant or symmetrical under a change in the sign of time. A stochastic process is reversible if the statistical properties of the process are the same as the statistical properties for time-reversed data from the same process.
== Mathematics ==
In mathematics, a dynamical system is time-reversible if the forward evolution is one-to-one, so that for every state there exists a transformation (an involution) π which gives a one-to-one mapping between the time-reversed evolution of any one state and the forward-time evolution of another corresponding state, given by the operator equation:
{\displaystyle U_{-t}=\pi \,U_{t}\,\pi }
Any time-independent structures (e.g. critical points or attractors) which the dynamics give rise to must therefore either be self-symmetrical or have symmetrical images under the involution π.
== Physics ==
In physics, the laws of motion of classical mechanics exhibit time reversibility, as long as the operator π reverses the conjugate momenta of all the particles of the system, i.e. {\displaystyle \mathbf {p} \rightarrow \mathbf {-p} } (T-symmetry).
In quantum mechanical systems, however, the weak nuclear force is not invariant under T-symmetry alone; if weak interactions are present, reversible dynamics are still possible, but only if the operator π also reverses the signs of all the charges and the parity of the spatial co-ordinates (C-symmetry and P-symmetry). This reversibility of several linked properties is known as CPT symmetry.
Thermodynamic processes can be reversible or irreversible, depending on the change in entropy during the process. Note, however, that the fundamental laws that underlie thermodynamic processes are all time-reversible (the classical laws of motion and the laws of electrodynamics), which means that on the microscopic level, if one were to keep track of all the particles and all the degrees of freedom, the many-body processes would all be reversible. However, such analysis is beyond the capability of any human being (or artificial intelligence), and the macroscopic properties (like entropy and temperature) of a many-body system are only defined from the statistics of ensembles. When we talk about such macroscopic properties in thermodynamics, in certain cases we can see irreversibility in the time evolution of these quantities on a statistical level. Indeed, the second law of thermodynamics predicates that the entropy of the entire universe must not decrease, not because the probability of a decrease is zero, but because a decrease is so unlikely as to be a statistical impossibility for all practical purposes (see Crooks fluctuation theorem).
== Stochastic processes ==
A stochastic process is time-reversible if the joint probabilities of the forward and reverse state sequences are the same for all sets of time increments { τs }, for s = 1, ..., k for any k:
{\displaystyle p(x_{t},x_{t+\tau _{1}},x_{t+\tau _{2}},\ldots ,x_{t+\tau _{k}})=p(x_{t'},x_{t'-\tau _{1}},x_{t'-\tau _{2}},\ldots ,x_{t'-\tau _{k}}).}
A univariate stationary Gaussian process is time-reversible. Markov processes can only be reversible if their stationary distributions have the property of detailed balance:
{\displaystyle p(x_{t}=i,x_{t+1}=j)=p(x_{t}=j,x_{t+1}=i).}
Kolmogorov's criterion defines the condition for a Markov chain or continuous-time Markov chain to be time-reversible.
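As an illustration, detailed balance can be checked numerically for a small discrete-time Markov chain. In the Python sketch below, the helper function and both transition matrices are illustrative examples, not standard library routines:

```python
import numpy as np

def is_time_reversible(P, tol=1e-10):
    """Check detailed balance pi_i P_ij = pi_j P_ji for transition matrix P."""
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi = pi / pi.sum()
    flows = pi[:, None] * P  # flows[i, j] = pi_i * P_ij
    return np.allclose(flows, flows.T, atol=tol)

# A symmetric chain satisfies detailed balance ...
P_sym = np.array([[0.5, 0.25, 0.25],
                  [0.25, 0.5, 0.25],
                  [0.25, 0.25, 0.5]])
print(is_time_reversible(P_sym))  # True

# ... while a deterministic 3-cycle does not: run backwards, it circulates
# in the opposite direction, so the forward and reversed statistics differ.
P_cycle = np.array([[0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0],
                    [1.0, 0.0, 0.0]])
print(is_time_reversible(P_cycle))  # False
```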
Time reversal of numerous classes of stochastic processes has been studied, including Lévy processes, stochastic networks (Kelly's lemma), birth and death processes, Markov chains, and piecewise deterministic Markov processes.
== Waves and optics ==
The time reversal method works based on the linear reciprocity of the wave equation, which states that the time-reversed solution of a wave equation is also a solution to the wave equation, since standard wave equations only contain even derivatives of the unknown variables. Thus, the wave equation is symmetrical under time reversal, so the time reversal of any valid solution is also a solution. This means that a wave's path through space is valid when travelled in either direction.
Time reversal signal processing is a process in which this property is used to reverse a received signal; this signal is then re-emitted and a temporal compression occurs, resulting in a reversal of the initial excitation waveform being played at the initial source.
== See also ==
T-symmetry
Memorylessness
Markov property
Reversible computing
== Notes ==
== References ==
In thermodynamics, the Gibbs free energy (or Gibbs energy as the recommended name; symbol {\displaystyle G}) is a thermodynamic potential that can be used to calculate the maximum amount of work, other than pressure–volume work, that may be performed by a thermodynamically closed system at constant temperature and pressure. It also provides a necessary condition for processes such as chemical reactions that may occur under these conditions. The Gibbs free energy is expressed as
{\displaystyle G(p,T)=U+pV-TS=H-TS}
where:
{\textstyle U} is the internal energy of the system
{\textstyle H} is the enthalpy of the system
{\textstyle S} is the entropy of the system
{\textstyle T} is the temperature of the system
{\textstyle V} is the volume of the system
{\textstyle p} is the pressure of the system (which must be equal to that of the surroundings for mechanical equilibrium).
The Gibbs free energy change ({\displaystyle \Delta G=\Delta H-T\Delta S}, measured in joules in SI) is the maximum amount of non-volume expansion work that can be extracted from a closed system (one that can exchange heat and work with its surroundings, but not matter) at fixed temperature and pressure. This maximum can be attained only in a completely reversible process. When a system transforms reversibly from an initial state to a final state under these conditions, the decrease in Gibbs free energy equals the work done by the system to its surroundings, minus the work of the pressure forces.
The Gibbs energy is the thermodynamic potential that is minimized when a system reaches chemical equilibrium at constant pressure and temperature when not driven by an applied electrolytic voltage. Its derivative with respect to the reaction coordinate of the system then vanishes at the equilibrium point. As such, a reduction in {\displaystyle G} is necessary for a reaction to be spontaneous under these conditions.
The concept of Gibbs free energy, originally called available energy, was developed in the 1870s by the American scientist Josiah Willard Gibbs. In 1873, Gibbs described this "available energy" as:
the greatest amount of mechanical work which can be obtained from a given quantity of a certain substance in a given initial state, without increasing its total volume or allowing heat to pass to or from external bodies, except such as at the close of the processes are left in their initial condition.
The initial state of the body, according to Gibbs, is supposed to be such that "the body can be made to pass from it to states of dissipated energy by reversible processes". In his 1876 magnum opus On the Equilibrium of Heterogeneous Substances, a graphical analysis of multi-phase chemical systems, he engaged his thoughts on chemical-free energy in full.
If the reactants and products are all in their thermodynamic standard states, then the defining equation is written as
{\displaystyle \Delta G^{\circ }=\Delta H^{\circ }-T\Delta S^{\circ },}
where {\displaystyle H} is enthalpy, {\displaystyle T} is absolute temperature, and {\displaystyle S} is entropy.
== Overview ==
According to the second law of thermodynamics, for systems reacting at fixed temperature and pressure without input of non-pressure-volume (non-pV) work, there is a general natural tendency to achieve a minimum of the Gibbs free energy.
A quantitative measure of the favorability of a given reaction under these conditions is the change ΔG (sometimes written "delta G" or "dG") in Gibbs free energy that is (or would be) caused by the reaction. As a necessary condition for the reaction to occur at constant temperature and pressure, ΔG must be smaller than the non-pressure-volume (non-pV, e.g. electrical) work, which is often equal to zero (then ΔG must be negative). ΔG equals the maximum amount of non-pV work that can be performed as a result of the chemical reaction for the case of a reversible process. If analysis indicates a positive ΔG for a reaction, then energy — in the form of electrical or other non-pV work — would have to be added to the reacting system for ΔG to be smaller than the non-pV work and make it possible for the reaction to occur.
One can think of ∆G as the amount of "free" or "useful" energy available to do non-pV work at constant temperature and pressure. The equation can be also seen from the perspective of the system taken together with its surroundings (the rest of the universe). First, one assumes that the given reaction at constant temperature and pressure is the only one that is occurring. Then the entropy released or absorbed by the system equals the entropy that the environment must absorb or release, respectively. The reaction will only be allowed if the total entropy change of the universe is zero or positive. This is reflected in a negative ΔG, and the reaction is called an exergonic process.
If two chemical reactions are coupled, then an otherwise endergonic reaction (one with positive ΔG) can be made to happen. The input of heat into an inherently endergonic reaction, such as the elimination of cyclohexanol to cyclohexene, can be seen as coupling an unfavorable reaction (elimination) to a favorable one (burning of coal or other provision of heat) such that the total entropy change of the universe is greater than or equal to zero, making the total Gibbs free energy change of the coupled reactions negative.
In traditional use, the term "free" was included in "Gibbs free energy" to mean "available in the form of useful work". The characterization becomes more precise if we add the qualification that it is the energy available for non-pressure-volume work. (An analogous, but slightly different, meaning of "free" applies in conjunction with the Helmholtz free energy, for systems at constant temperature). However, an increasing number of books and journal articles do not include the attachment "free", referring to G as simply "Gibbs energy". This is the result of a 1988 IUPAC meeting to set unified terminologies for the international scientific community, in which the removal of the adjective "free" was recommended. This standard, however, has not yet been universally adopted.
The name "free enthalpy" was also used for G in the past.
== History ==
The quantity called "free energy" is a more advanced and accurate replacement for the outdated term affinity, which was used by chemists in the earlier years of physical chemistry to describe the force that caused chemical reactions.
In 1873, Josiah Willard Gibbs published A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces, in which he sketched the principles of his new equation that was able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e., bodies composed of part solid, part liquid, and part vapor, and by using a three-dimensional volume-entropy-internal energy graph, Gibbs was able to determine three states of equilibrium, i.e., "necessarily stable", "neutral", and "unstable", and whether or not changes would ensue. Further, Gibbs stated:
If we wish to express in a single equation the necessary and sufficient condition of thermodynamic equilibrium for a substance when surrounded by a medium of constant pressure p and temperature T, this equation may be written:
{\displaystyle \delta (\epsilon -T\eta +p\nu )=0}
when δ refers to the variation produced by any variations in the state of the parts of the body, and (when different parts of the body are in different states) in the proportion in which the body is divided between the different states. The condition of stable equilibrium is that the value of the expression in the parenthesis shall be a minimum.
In this description, as used by Gibbs, ε refers to the internal energy of the body, η refers to the entropy of the body, and ν is the volume of the body...
Thereafter, in 1882, the German scientist Hermann von Helmholtz characterized the affinity as the largest quantity of work which can be gained when the reaction is carried out in a reversible manner, e.g., electrical work in a reversible cell. The maximum work is thus regarded as the diminution of the free, or available, energy of the system (Gibbs free energy G at T = constant, P = constant or Helmholtz free energy F at T = constant, V = constant), whilst the heat given out is usually a measure of the diminution of the total energy of the system (internal energy). Thus, G or F is the amount of energy "free" for work under the given conditions.
Until this point, the general view had been such that: "all chemical reactions drive the system to a state of equilibrium in which the affinities of the reactions vanish". Over the next 60 years, the term affinity came to be replaced with the term free energy. According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall led to the replacement of the term "affinity" by the term "free energy" in much of the English-speaking world.
== Definitions ==
The Gibbs free energy is defined as
{\displaystyle G(p,T)=U+pV-TS,}
which is the same as
{\displaystyle G(p,T)=H-TS,}
where:
U is the internal energy (SI unit: joule),
p is pressure (SI unit: pascal),
V is volume (SI unit: m3),
T is the temperature (SI unit: kelvin),
S is the entropy (SI unit: joule per kelvin),
H is the enthalpy (SI unit: joule).
The expression for the infinitesimal reversible change in the Gibbs free energy as a function of its "natural variables" p and T, for an open system, subjected to the operation of external forces (for instance, electrical or magnetic) Xi, which cause the external parameters of the system ai to change by an amount dai, can be derived as follows from the first law for reversible processes:
{\displaystyle {\begin{aligned}T\,\mathrm {d} S&=\mathrm {d} U+p\,\mathrm {d} V-\sum _{i=1}^{k}\mu _{i}\,\mathrm {d} N_{i}+\sum _{i=1}^{n}X_{i}\,\mathrm {d} a_{i}+\cdots \\\mathrm {d} (TS)-S\,\mathrm {d} T&=\mathrm {d} U+\mathrm {d} (pV)-V\,\mathrm {d} p-\sum _{i=1}^{k}\mu _{i}\,\mathrm {d} N_{i}+\sum _{i=1}^{n}X_{i}\,\mathrm {d} a_{i}+\cdots \\\mathrm {d} (U-TS+pV)&=V\,\mathrm {d} p-S\,\mathrm {d} T+\sum _{i=1}^{k}\mu _{i}\,\mathrm {d} N_{i}-\sum _{i=1}^{n}X_{i}\,\mathrm {d} a_{i}+\cdots \\\mathrm {d} G&=V\,\mathrm {d} p-S\,\mathrm {d} T+\sum _{i=1}^{k}\mu _{i}\,\mathrm {d} N_{i}-\sum _{i=1}^{n}X_{i}\,\mathrm {d} a_{i}+\cdots \end{aligned}}}
where:
μi is the chemical potential of the ith chemical component. (SI unit: joules per particle or joules per mole)
Ni is the number of particles (or number of moles) composing the ith chemical component.
This is one form of the Gibbs fundamental equation. In the infinitesimal expression, the term involving the chemical potential accounts for changes in Gibbs free energy resulting from an influx or outflux of particles. In other words, it holds for an open system or for a closed, chemically reacting system where the Ni are changing. For a closed, non-reacting system, this term may be dropped.
Any number of extra terms may be added, depending on the particular system being considered. Aside from mechanical work, a system may, in addition, perform numerous other types of work. For example, in the infinitesimal expression, the contractile work energy associated with a thermodynamic system that is a contractile fiber that shortens by an amount −dl under a force f would result in a term f dl being added. If a quantity of charge −de is acquired by a system at an electrical potential Ψ, the electrical work associated with this is −Ψ de, which would be included in the infinitesimal expression. Other work terms are added on per system requirements.
Each quantity in the equations above can be divided by the amount of substance, measured in moles, to form molar Gibbs free energy. The Gibbs free energy is one of the most important thermodynamic functions for the characterization of a system. It is a factor in determining outcomes such as the voltage of an electrochemical cell, and the equilibrium constant for a reversible reaction. In isothermal, isobaric systems, Gibbs free energy can be thought of as a "dynamic" quantity, in that it is a representative measure of the competing effects of the enthalpic and entropic driving forces involved in a thermodynamic process.
The temperature dependence of the Gibbs energy for an ideal gas is given by the Gibbs–Helmholtz equation, and its pressure dependence is given by
{\displaystyle {\frac {G}{N}}={\frac {G^{\circ }}{N}}+kT\ln {\frac {p}{p^{\circ }}},}
or more conveniently as its chemical potential:
{\displaystyle {\frac {G}{N}}=\mu =\mu ^{\circ }+kT\ln {\frac {p}{p^{\circ }}}.}
In non-ideal systems, fugacity comes into play.
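As an illustration of the pressure dependence above, here is a minimal Python sketch (names and numbers are illustrative, not from the source) that evaluates the per-particle chemical potential of an ideal gas, μ = μ° + kT ln(p/p°):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def chemical_potential(mu_standard_J, p_Pa, p_standard_Pa, T_K):
    """Ideal-gas chemical potential per particle: mu = mu° + kT ln(p/p°)."""
    return mu_standard_J + K_B * T_K * math.log(p_Pa / p_standard_Pa)

# Compressing an ideal gas from 1 bar to 10 bar at 298.15 K raises mu by
# kT ln(10) ≈ 9.5e-21 J per particle (about 5.7 kJ/mol).
delta_mu = chemical_potential(0.0, 10e5, 1e5, 298.15)
print(f"{delta_mu:.3e} J per particle")
```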
== Derivation ==
The Gibbs free energy total differential with respect to natural variables may be derived by Legendre transforms of the internal energy.
$$\mathrm{d}U = T\,\mathrm{d}S - p\,\mathrm{d}V + \sum_{i}\mu_{i}\,\mathrm{d}N_{i}.$$
The definition of G from above is
$$G = U + pV - TS.$$
Taking the total differential, we have
$$\mathrm{d}G = \mathrm{d}U + p\,\mathrm{d}V + V\,\mathrm{d}p - T\,\mathrm{d}S - S\,\mathrm{d}T.$$
Replacing dU with the result from the first law gives
$$\begin{aligned}
\mathrm{d}G &= T\,\mathrm{d}S - p\,\mathrm{d}V + \sum_{i}\mu_{i}\,\mathrm{d}N_{i} + p\,\mathrm{d}V + V\,\mathrm{d}p - T\,\mathrm{d}S - S\,\mathrm{d}T \\
&= V\,\mathrm{d}p - S\,\mathrm{d}T + \sum_{i}\mu_{i}\,\mathrm{d}N_{i}.
\end{aligned}$$
The natural variables of G are then p, T, and {Ni}.
=== Homogeneous systems ===
Because S, V, and Ni are extensive variables, an Euler relation allows easy integration of dU:
$$U = TS - pV + \sum_{i}\mu_{i}N_{i}.$$
Because some of the natural variables of G are intensive, dG may not be integrated using Euler relations as is the case with internal energy. However, simply substituting the above integrated result for U into the definition of G gives a standard expression for G:
$$\begin{aligned}
G &= U + pV - TS \\
&= \left(TS - pV + \sum_{i}\mu_{i}N_{i}\right) + pV - TS \\
&= \sum_{i}\mu_{i}N_{i}.
\end{aligned}$$
This result shows that the chemical potential of a substance $i$ is its (partial) molar (or molecular) Gibbs free energy. It applies to homogeneous, macroscopic systems, but not to all thermodynamic systems.
== Gibbs free energy of reactions ==
The system under consideration is held at constant temperature and pressure, and is closed (no matter can come in or out). The Gibbs energy of any system is
$$G = U + pV - TS$$
and an infinitesimal change in G, at constant temperature and pressure, yields
$$\mathrm{d}G = \mathrm{d}U + p\,\mathrm{d}V - T\,\mathrm{d}S.$$
By the first law of thermodynamics, a change in the internal energy U is given by
$$\mathrm{d}U = \delta Q + \delta W$$
where δQ is energy added as heat, and δW is energy added as work. The work done on the system may be written as δW = −pdV + δWx, where −pdV is the mechanical work of compression/expansion done on or by the system and δWx is all other forms of work, which may include electrical, magnetic, etc. Then
$$\mathrm{d}U = \delta Q - p\,\mathrm{d}V + \delta W_{x}$$
and the infinitesimal change in G is
$$\mathrm{d}G = \delta Q - T\,\mathrm{d}S + \delta W_{x}.$$
The second law of thermodynamics states that for a closed system at constant temperature (in a heat bath),
$T\,\mathrm{d}S \geq \delta Q$, and so it follows that
$$\mathrm{d}G \leq \delta W_{x}$$
Assuming that only mechanical work is done, this simplifies to
$$\mathrm{d}G \leq 0$$
This means that for such a system when not in equilibrium, the Gibbs energy will always be decreasing, and in equilibrium, the infinitesimal change dG will be zero. In particular, this will be true if the system is experiencing any number of internal chemical reactions on its path to equilibrium.
=== In electrochemical thermodynamics ===
When electric charge dQele is passed between the electrodes of an electrochemical cell generating an emf $\mathcal{E}$, an electrical work term appears in the expression for the change in Gibbs energy:
$$\mathrm{d}G = -S\,\mathrm{d}T + V\,\mathrm{d}p + \mathcal{E}\,\mathrm{d}Q_{ele},$$
where S is the entropy, V is the system volume, p is its pressure and T is its absolute temperature.
The combination ($\mathcal{E}$, Qele) is an example of a conjugate pair of variables. At constant pressure the above equation produces a Maxwell relation that links the change in open cell voltage with temperature T (a measurable quantity) to the change in entropy S when charge is passed isothermally and isobarically. The latter is closely related to the reaction entropy of the electrochemical reaction that lends the battery its power. This Maxwell relation is:
$$\left(\frac{\partial \mathcal{E}}{\partial T}\right)_{Q_{ele},p} = -\left(\frac{\partial S}{\partial Q_{ele}}\right)_{T,p}$$
If a mole of ions goes into solution (for example, in a Daniell cell, as discussed below) the charge through the external circuit is
$$\Delta Q_{ele} = -n_{0}F_{0},$$
where n0 is the number of electrons per ion, F0 is the Faraday constant, and the minus sign indicates discharge of the cell. Assuming constant pressure and volume, the thermodynamic properties of the cell are related strictly to the behavior of its emf by
$$\Delta H = -n_{0}F_{0}\left(\mathcal{E} - T\frac{d\mathcal{E}}{dT}\right),$$
where ΔH is the enthalpy of reaction. The quantities on the right are all directly measurable.
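Because ΔG = −n0F0ℰ and ΔS = n0F0 (dℰ/dT) follow from the same relations, all three reaction quantities can be computed from a measured emf and its temperature coefficient. A minimal Python sketch; the numeric inputs are hypothetical, chosen only to resemble a Daniell-type cell, and are not from the source:

```python
F0 = 96485.33  # Faraday constant, C/mol

def cell_thermodynamics(emf_V, demf_dT_V_per_K, n0, T_K):
    """Reaction Gibbs energy, enthalpy and entropy (per mole of reaction)
    from a cell's emf and its temperature coefficient."""
    dG = -n0 * F0 * emf_V                             # J/mol
    dS = n0 * F0 * demf_dT_V_per_K                    # J/(mol*K)
    dH = -n0 * F0 * (emf_V - T_K * demf_dT_V_per_K)   # J/mol, the relation above
    return dG, dH, dS

dG, dH, dS = cell_thermodynamics(1.10, -4.0e-4, 2, 298.15)
print(f"dG = {dG/1e3:.1f} kJ/mol, dH = {dH/1e3:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")
# One can verify that dG = dH - T*dS holds exactly for these outputs.
```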
== Useful identities to derive the Nernst equation ==
During a reversible electrochemical reaction at constant temperature and pressure, the following equations involving the Gibbs free energy hold:
$$\Delta_{\text{r}}G = \Delta_{\text{r}}G^{\circ} + RT\ln Q_{\text{r}}$$
(see chemical equilibrium),
$$\Delta_{\text{r}}G^{\circ} = -RT\ln K_{\text{eq}}$$
(for a system at chemical equilibrium),
$$\Delta_{\text{r}}G = w_{\text{elec,rev}} = -nF\mathcal{E}$$
(for a reversible electrochemical process at constant temperature and pressure),
$$\Delta_{\text{r}}G^{\circ} = -nF\mathcal{E}^{\circ}$$
(definition of $\mathcal{E}^{\circ}$),
and rearranging gives
$$\begin{aligned}
nF\mathcal{E}^{\circ} &= RT\ln K_{\text{eq}}, \\
nF\mathcal{E} &= nF\mathcal{E}^{\circ} - RT\ln Q_{\text{r}}, \\
\mathcal{E} &= \mathcal{E}^{\circ} - \frac{RT}{nF}\ln Q_{\text{r}},
\end{aligned}$$
which relates the cell potential resulting from the reaction to the equilibrium constant and reaction quotient for that reaction (Nernst equation),
where
ΔrG, Gibbs free energy change per mole of reaction,
ΔrG°, Gibbs free energy change per mole of reaction for unmixed reactants and products at standard conditions (i.e. 298 K, 100 kPa, 1 M of each reactant and product),
R, gas constant,
T, absolute temperature,
ln, natural logarithm,
Qr, reaction quotient (unitless),
Keq, equilibrium constant (unitless),
welec,rev, electrical work in a reversible process (chemistry sign convention),
n, number of moles of electrons transferred in the reaction,
F = NAe ≈ 96485 C/mol, Faraday constant (charge per mole of electrons),
$\mathcal{E}$, cell potential,
$\mathcal{E}^{\circ}$, standard cell potential.
Moreover, we also have
$$\begin{aligned}
K_{\text{eq}} &= e^{-\frac{\Delta_{\text{r}}G^{\circ}}{RT}}, \\
\Delta_{\text{r}}G^{\circ} &= -RT\left(\ln K_{\text{eq}}\right) = -2.303\,RT\left(\log_{10}K_{\text{eq}}\right),
\end{aligned}$$
which relates the equilibrium constant with Gibbs free energy. This implies that at equilibrium
$Q_{\text{r}} = K_{\text{eq}}$ and $\Delta_{\text{r}}G = 0.$
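These identities translate directly into a short calculation. Below is a minimal Python sketch of the Nernst equation and the associated equilibrium constant; the inputs (E° = 1.10 V, n = 2, Qr = 10) are illustrative only:

```python
import math

R = 8.314462   # gas constant, J/(mol*K)
F = 96485.33   # Faraday constant, C/mol

def nernst(E_standard_V, n, Q_r, T_K=298.15):
    """Cell potential: E = E° - (RT / nF) ln Q_r."""
    return E_standard_V - (R * T_K / (n * F)) * math.log(Q_r)

def K_eq(E_standard_V, n, T_K=298.15):
    """Equilibrium constant from nFE° = RT ln K_eq."""
    return math.exp(n * F * E_standard_V / (R * T_K))

print(f"E = {nernst(1.10, 2, Q_r=10.0):.3f} V")  # slightly below E° when Q_r > 1
print(f"K = {K_eq(1.10, 2):.2e}")                # very large K: reaction lies far to the right
```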
== Standard Gibbs energy change of formation ==
The standard Gibbs free energy of formation of a compound is the change of Gibbs free energy that accompanies the formation of 1 mole of that substance from its component elements, in their standard states (the most stable form of the element at 25 °C and 100 kPa). Its symbol is ΔfG˚.
All elements in their standard states (diatomic oxygen gas, graphite, etc.) have standard Gibbs free energy change of formation equal to zero, as there is no change involved.
ΔfG = ΔfG˚ + RT ln Qf,
where Qf is the reaction quotient.
At equilibrium, ΔfG = 0, and Qf = K, so the equation becomes
ΔfG˚ = −RT ln K,
where K is the equilibrium constant of the formation reaction of the substance from the elements in their standard states.
== Graphical interpretation by Gibbs ==
Gibbs free energy was originally defined graphically. In 1873, American scientist Willard Gibbs published his first thermodynamics paper, "Graphical Methods in the Thermodynamics of Fluids", in which Gibbs used the two coordinates of the entropy and volume to represent the state of the body. In his second follow-up paper, "A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces", published later that year, Gibbs added in the third coordinate of the energy of the body, defined on three figures. In 1874, Scottish physicist James Clerk Maxwell used Gibbs' figures to make a 3D energy-entropy-volume thermodynamic surface of a fictitious water-like substance. Thus, in order to understand the concept of Gibbs free energy, it may help to understand its interpretation by Gibbs as section AB on his figure 3, and as Maxwell sculpted that section on his 3D surface figure.
== See also ==
Bioenergetics
Calphad (CALculation of PHAse Diagrams)
Critical point (thermodynamics)
Electron equivalent
Enthalpy–entropy compensation
Free entropy
Gibbs–Helmholtz equation
Grand potential
Non-random two-liquid model (NRTL model) – Gibbs energy of excess and mixing calculation and activity coefficients
Spinodal – Spinodal Curves (Hessian matrix)
Standard molar entropy
Thermodynamic free energy
UNIQUAC model – Gibbs energy of excess and mixing calculation and activity coefficients
== Notes and references ==
== External links ==
IUPAC definition (Gibbs energy)
Gibbs Free Energy – Georgia State University
In thermodynamics, the Gibbs–Duhem equation describes the relationship between changes in chemical potential for components in a thermodynamic system:
$$\sum_{i=1}^{I}N_{i}\,\mathrm{d}\mu_{i} = -S\,\mathrm{d}T + V\,\mathrm{d}p$$
where $N_i$ is the number of moles of component $i$, $\mathrm{d}\mu_i$ the infinitesimal increase in chemical potential for this component, $S$ the entropy, $T$ the absolute temperature, $V$ the volume and $p$ the pressure. $I$ is the number of different components in the system. This equation shows that in thermodynamics intensive properties are not independent but related, making it a mathematical statement of the state postulate. When pressure and temperature are variable, only $I - 1$ of $I$ components have independent values for chemical potential, and Gibbs' phase rule follows.
The Gibbs–Duhem equation applies to homogeneous thermodynamic systems. It does not apply to inhomogeneous systems such as small thermodynamic systems, systems subject to long-range forces like electricity and gravity, or to fluids in porous media.
The equation is named after Josiah Willard Gibbs and Pierre Duhem.
== Derivation ==
The Gibbs–Duhem equation follows from assuming the system can be scaled in amount perfectly. Gibbs derived the relationship based on the thought experiment of varying the amount of substance starting from zero, keeping its nature and state the same.
Mathematically, this means the internal energy $U$ scales with its extensive variables as follows:
$$U(\lambda S, \lambda V, \lambda N_{1}, \lambda N_{2}, \ldots) = \lambda U(S, V, N_{1}, N_{2}, \ldots)$$
where $S, V, N_{1}, N_{2}, \ldots$ are all of the extensive variables of the system: entropy, volume, and particle numbers. The internal energy is thus a first-order homogeneous function. Applying Euler's homogeneous function theorem, one finds the following relation:
$$U = TS - pV + \sum_{i=1}^{I}\mu_{i}N_{i}$$
Taking the total differential, one finds
$$\mathrm{d}U = T\,\mathrm{d}S + S\,\mathrm{d}T - p\,\mathrm{d}V - V\,\mathrm{d}p + \sum_{i=1}^{I}\mu_{i}\,\mathrm{d}N_{i} + \sum_{i=1}^{I}N_{i}\,\mathrm{d}\mu_{i}$$
From both sides one can subtract the fundamental thermodynamic relation,
$$\mathrm{d}U = T\,\mathrm{d}S - p\,\mathrm{d}V + \sum_{i=1}^{I}\mu_{i}\,\mathrm{d}N_{i}$$
yielding the Gibbs–Duhem equation
$$0 = S\,\mathrm{d}T - V\,\mathrm{d}p + \sum_{i=1}^{I}N_{i}\,\mathrm{d}\mu_{i}.$$
== Applications ==
By normalizing the above equation by the extent of a system, such as the total number of moles, the Gibbs–Duhem equation provides a relationship between the intensive variables of the system. For a simple system with
$I$ different components, there will be $I + 1$ independent parameters or "degrees of freedom". For example, if we know a gas cylinder filled with pure nitrogen is at room temperature (298 K) and 25 MPa, we can determine the fluid density (258 kg/m3), enthalpy (272 kJ/kg), entropy (5.07 kJ/kg⋅K) or any other intensive thermodynamic variable. If instead the cylinder contains a nitrogen/oxygen mixture, we require an additional piece of information, usually the ratio of oxygen to nitrogen.
If multiple phases of matter are present, the chemical potentials across a phase boundary are equal. Combining expressions for the Gibbs–Duhem equation in each phase and assuming equilibrium throughout the system (i.e. that the temperature and pressure are constant throughout), we recover Gibbs' phase rule.
One particularly useful expression arises when considering binary solutions. At constant P (isobaric) and T (isothermal) it becomes:
$$0 = N_{1}\,\mathrm{d}\mu_{1} + N_{2}\,\mathrm{d}\mu_{2}$$
or, normalizing by the total number of moles in the system $N_{1} + N_{2}$, substituting in the definition of the activity coefficient $\gamma$ and using the identity $x_{1} + x_{2} = 1$:
$$0 = x_{1}\,\mathrm{d}\ln(\gamma_{1}) + x_{2}\,\mathrm{d}\ln(\gamma_{2})$$
This equation is instrumental in the calculation of thermodynamically consistent, and thus more accurate, expressions for the vapor pressure of a binary mixture from limited experimental data. It can be developed further into the Duhem–Margules equation, which relates the component vapor pressures directly.
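The isothermal, isobaric form above is also the basis of numerical consistency tests for activity-coefficient models. The sketch below (Python with NumPy) checks that a model satisfies the Gibbs–Duhem equation; the two-suffix Margules model and the parameter A = 1.5 are assumptions chosen for illustration, not from the source:

```python
import numpy as np

A = 1.5  # Margules interaction parameter (illustrative)

x1 = np.linspace(0.01, 0.99, 99)
x2 = 1.0 - x1
ln_g1 = A * x2**2  # two-suffix Margules: ln(gamma_1) = A * x2^2
ln_g2 = A * x1**2  #                      ln(gamma_2) = A * x1^2

# Gibbs-Duhem residual x1*d(ln g1)/dx1 + x2*d(ln g2)/dx1, via finite differences.
# For a thermodynamically consistent model it vanishes up to discretization error.
residual = x1 * np.gradient(ln_g1, x1) + x2 * np.gradient(ln_g2, x1)
print(f"max |residual| = {np.max(np.abs(residual)):.2e}")
```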
== Ternary and multicomponent solutions and mixtures ==
Lawrence Stamper Darken has shown that the Gibbs–Duhem equation can be applied to the determination of the chemical potentials of the components of a multicomponent system from experimental data on the chemical potential $\bar{G_{2}}$ of only one component (here component 2) at all compositions. He deduced the following relation:
$$\bar{G_{2}} = G + (1 - x_{2})\left(\frac{\partial G}{\partial x_{2}}\right)_{\frac{x_{1}}{x_{3}}}$$
where the xi are the amount (mole) fractions of the components.
Making some rearrangements and dividing by $(1 - x_{2})^{2}$ gives:
$$\frac{G}{(1 - x_{2})^{2}} + \frac{1}{1 - x_{2}}\left(\frac{\partial G}{\partial x_{2}}\right)_{\frac{x_{1}}{x_{3}}} = \frac{\bar{G_{2}}}{(1 - x_{2})^{2}}$$
or, equivalently,
$$\left(\frac{\partial}{\partial x_{2}}\,\frac{G}{1 - x_{2}}\right)_{\frac{x_{1}}{x_{3}}} = \frac{\bar{G_{2}}}{(1 - x_{2})^{2}}$$
The derivative with respect to one mole fraction x2 is taken at constant ratios of the amounts (and therefore of the mole fractions) of the other components of the solution, representable in a diagram such as a ternary plot.
The last equality can be integrated from $x_{2} = 1$ to $x_{2}$, which gives:
$$G - (1 - x_{2})\lim_{x_{2}\to 1}\frac{G}{1 - x_{2}} = (1 - x_{2})\int_{1}^{x_{2}}\frac{\bar{G_{2}}}{(1 - x_{2})^{2}}\,dx_{2}$$
Applying L'Hôpital's rule gives:
$$\lim_{x_{2}\to 1}\frac{G}{1 - x_{2}} = \lim_{x_{2}\to 1}\left(\frac{\partial G}{\partial x_{2}}\right)_{\frac{x_{1}}{x_{3}}}.$$
This can be further rewritten as:
$$\lim_{x_{2}\to 1}\frac{G}{1 - x_{2}} = -\lim_{x_{2}\to 1}\frac{\bar{G_{2}} - G}{1 - x_{2}}.$$
Expressing the mole fractions of components 1 and 3 as functions of the component 2 mole fraction and the binary mole ratios:
$$x_{1} = \frac{1 - x_{2}}{1 + \frac{x_{3}}{x_{1}}}$$
$$x_{3} = \frac{1 - x_{2}}{1 + \frac{x_{1}}{x_{3}}}$$
and the sum of partial molar quantities
$$G = \sum_{i=1}^{3}x_{i}\bar{G_{i}},$$
gives
$$G = x_{1}(\bar{G_{1}})_{x_{2}=1} + x_{3}(\bar{G_{3}})_{x_{2}=1} + (1 - x_{2})\int_{1}^{x_{2}}\frac{\bar{G_{2}}}{(1 - x_{2})^{2}}\,dx_{2}$$
$(\bar{G_{1}})_{x_{2}=1}$ and $(\bar{G_{3}})_{x_{2}=1}$ are constants which can be determined from the binary systems 1–2 and 2–3. These constants can be obtained from the previous equality by putting the complementary mole fraction x3 = 0 for x1 and vice versa.
Thus
$$(\bar{G_{1}})_{x_{2}=1} = -\left(\int_{1}^{0}\frac{\bar{G_{2}}}{(1 - x_{2})^{2}}\,dx_{2}\right)_{x_{3}=0}$$
and
$$(\bar{G_{3}})_{x_{2}=1} = -\left(\int_{1}^{0}\frac{\bar{G_{2}}}{(1 - x_{2})^{2}}\,dx_{2}\right)_{x_{1}=0}$$
The final expression is given by substitution of these constants into the previous equation:
$$G = (1 - x_{2})\left(\int_{1}^{x_{2}}\frac{\bar{G_{2}}}{(1 - x_{2})^{2}}\,dx_{2}\right)_{\frac{x_{1}}{x_{3}}} - x_{1}\left(\int_{1}^{0}\frac{\bar{G_{2}}}{(1 - x_{2})^{2}}\,dx_{2}\right)_{x_{3}=0} - x_{3}\left(\int_{1}^{0}\frac{\bar{G_{2}}}{(1 - x_{2})^{2}}\,dx_{2}\right)_{x_{1}=0}$$
== See also ==
Margules activity model
Darken's equations
Gibbs–Helmholtz equation
== References ==
== External links ==
J. Phys. Chem. Gokcen 1960
A lecture from www.chem.neu.edu
A lecture from www.chem.arizona.edu
Encyclopædia Britannica entry
In chemistry, bond energy (BE) is one measure of the strength of a chemical bond. It is sometimes called the mean bond energy, bond enthalpy, average bond enthalpy, or bond strength. IUPAC defines bond energy as the average value of the gas-phase bond-dissociation energy (usually at a temperature of 298.15 K) for all bonds of the same type within the same chemical species.
The bond dissociation energy (enthalpy) is also referred to as bond disruption energy, bond energy, bond strength, or binding energy (abbreviation: BDE, BE, or D). It is defined as the standard enthalpy change of the following fission: R—X → R + X. The BDE, denoted by Dº(R—X), is usually derived by the thermochemical equation,
$$D^{\circ}(\mathrm{R{-}}X) = \Delta H_{f}^{\circ}(\mathrm{R}) + \Delta H_{f}^{\circ}(X) - \Delta H_{f}^{\circ}(\mathrm{R}X)$$
This equation tells us that the BDE for a given bond is equal to the energy of the individual components that make up the bond when they are free and unbonded minus the energy of the components when they are bonded together. These energies are given by the enthalpy of formation ΔHfº of the components in each state.
The enthalpy of formation of a large number of atoms, free radicals, ions, clusters and compounds is available from the websites of NIST, NASA, CODATA, and IUPAC. Most authors use the BDE values at 298.15 K.
For example, the carbon–hydrogen bond energy in methane BE(C–H) is the enthalpy change (∆H) of breaking one molecule of methane into a carbon atom and four hydrogen radicals, divided by four. The exact value for a certain pair of bonded elements varies somewhat depending on the specific molecule, so tabulated bond energies are generally averages from a number of selected typical chemical species containing that type of bond.
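The methane example can be made concrete with tabulated gas-phase enthalpies of formation. A minimal Python sketch; the values are approximate textbook numbers at 298.15 K, not taken from the source:

```python
# Approximate standard enthalpies of formation of gaseous species, kJ/mol
dHf_C_g = 716.7    # C(g)
dHf_H_g = 218.0    # H(g)
dHf_CH4_g = -74.9  # CH4(g)

# Atomization: CH4(g) -> C(g) + 4 H(g)
dH_atomization = dHf_C_g + 4 * dHf_H_g - dHf_CH4_g
BE_CH = dH_atomization / 4  # mean of the four C-H bond-breaking steps
print(f"BE(C-H) in methane ~ {BE_CH:.0f} kJ/mol")  # ~416 kJ/mol
```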
== Bond energy versus bond-dissociation energy ==
Bond energy (BE) is the average of all bond-dissociation energies of a single type of bond in a given molecule. The bond-dissociation energies of several different bonds of the same type can vary even within a single molecule.
For example, a water molecule is composed of two O–H bonds bonded as H–O–H. The bond energy for H2O is the average energy required to break each of the two O–H bonds in sequence. Although the two bonds are equivalent in the original symmetric molecule, the bond-dissociation energy of an oxygen–hydrogen bond varies slightly depending on whether or not there is another hydrogen atom bonded to the oxygen atom. Thus, the bond energy of a molecule of water, the average of the two sequential bond-dissociation energies, is 461.5 kJ/mol (110.3 kcal/mol).
When the bond is broken, the bonding electron pair will split equally to the products. This process is called homolytic bond cleavage (homolytic cleavage; homolysis) and results in the formation of radicals.
== Predicting the bond strength by radius ==
The strength of a bond can be estimated by comparing the atomic radii of the atoms that form the bond to the length of bond itself. For example, the atomic radius of boron is estimated at 85 pm, while the length of the B–B bond in B2Cl4 is 175 pm. Dividing the length of this bond by the sum of each boron atom's radius gives a ratio of
$$\frac{175\ \text{pm}}{85\ \text{pm} + 85\ \text{pm}} = \frac{175\ \text{pm}}{170\ \text{pm}} \approx 1.03.$$
This ratio is slightly larger than 1, indicating that the bond itself is slightly longer than the expected minimum overlap between the two boron atoms' valence electron clouds. Thus, we can conclude that this bond is a rather weak single bond.
In another example, the atomic radius of rhenium is 135 pm, with a Re–Re bond length of 224 pm in the compound [Re2Cl8]2−. Taking the same steps as above gives a ratio of
$$\frac{224\ \text{pm}}{135\ \text{pm} + 135\ \text{pm}} = \frac{224\ \text{pm}}{270\ \text{pm}} \approx 0.83.$$
This ratio is notably lower than 1, indicating that there is a large amount of overlap between the valence electron clouds of the two rhenium atoms. From this data, we can conclude that this is a very strong bond. Experimentally, the Re–Re bond in [Re2Cl8]2− was found to be a quadruple bond. This method of determination is most useful for covalently bonded compounds.
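The ratio test in both examples reduces to one line of arithmetic; here is a small Python helper (a sketch, with the qualitative interpretation implied by the discussion rather than any formal standard):

```python
def bond_length_ratio(bond_length_pm, radius_a_pm, radius_b_pm):
    """Observed bond length divided by the sum of the two atomic radii.
    Ratios near or above 1 suggest a weak single bond; ratios well below 1
    suggest extensive overlap, as in multiple bonds."""
    return bond_length_pm / (radius_a_pm + radius_b_pm)

print(round(bond_length_ratio(175, 85, 85), 2))    # B-B in B2Cl4: ~1.03
print(round(bond_length_ratio(224, 135, 135), 2))  # Re-Re in [Re2Cl8]2-: ~0.83
```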
== Factors affecting ionic bond energy ==
In ionic compounds, the electronegativity of the two atoms bonding together has a major effect on their bond energy. The extent of this effect is described by the compound's lattice energy, where a more negative lattice energy corresponds to a stronger force of attraction between the ions. Generally, greater differences in electronegativity correspond to stronger ionic bonds. For example, the compound sodium chloride (NaCl) has a lattice energy of −786 kJ/mol with an electronegativity difference of 2.23 between sodium and chlorine. Meanwhile, the compound sodium iodide (NaI) has a lower lattice energy of −704 kJ/mol with a similarly lower electronegativity difference of 1.73 between sodium and iodine.
== See also ==
Bond-dissociation energy
Binding energy
Ionization energy
Isodesmic reaction
Lattice energy
== References ==
In thermodynamics, a component is one of a collection of chemically independent constituents of a system. The number of components represents the minimum number of independent chemical species necessary to define the composition of all phases of the system.
Calculating the number of components in a system is necessary when applying Gibbs' phase rule in determination of the number of degrees of freedom of a system.
The number of components is equal to the number of distinct chemical species (constituents), minus the number of chemical reactions between them, minus the number of any constraints (like charge neutrality or balance of molar quantities).
== Calculation ==
Suppose that a chemical system has M elements and N chemical species (elements or compounds). The latter are combinations of the former, and each species Ai can be represented as a sum of elements:
$$A_{i} = \sum_{j}a_{ij}E_{j},$$
where the aij are integers denoting the number of atoms of element Ej in molecule Ai. Each species is determined by a vector (a row of this matrix), but the rows are not necessarily linearly independent. If the rank of the matrix is C, then there are C linearly independent vectors, and the remaining N − C vectors can be obtained by adding up multiples of those vectors. The chemical species represented by those C vectors are the components of the system.
If, for example, the species are C (in the form of graphite), CO2 and CO, then
$$\begin{bmatrix}1&0\\1&2\\1&1\end{bmatrix}\begin{bmatrix}\mathrm{C}\\\mathrm{O}\end{bmatrix} = \begin{bmatrix}\mathrm{C}\\\mathrm{CO_{2}}\\\mathrm{CO}\end{bmatrix}.$$
Since CO can be expressed as CO = (1/2)C + (1/2)CO2, it is not independent, and C and CO2 can be chosen as the components of the system.
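The rank computation is easy to automate. A minimal Python/NumPy sketch for the C / CO2 / CO example above:

```python
import numpy as np

# Rows are species (C, CO2, CO); columns are elements (C, O).
formula_matrix = np.array([
    [1, 0],  # C
    [1, 2],  # CO2
    [1, 1],  # CO
])

C = np.linalg.matrix_rank(formula_matrix)
print(C)  # 2 -> two components, e.g. C and CO2
```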
There are two ways that the vectors can be dependent. One is that some pairs of elements always appear in the same ratio in each species. An example is a series of polymers that are composed of different numbers of identical units. The number of such constraints is given by Z. In addition, some combinations of elements may be forbidden by chemical kinetics. If the number of such constraints is R', then
$$C = M - Z + R'.$$
Equivalently, if R is the number of independent reactions that can take place, then
$$C = N - Z - R.$$
The constants are related by N - M = R + R'.
== Examples ==
=== CaCO3 - CaO - CO2 system ===
This is an example of a system with several phases, which at ordinary temperatures are two solids and a gas. There are three chemical species (CaCO3, CaO and CO2) and one reaction:
CaCO3 ⇌ CaO + CO2.
The number of components is then 3 - 1 = 2.
=== Water - Hydrogen - Oxygen system ===
The reactions included in the calculation are only those that actually occur under the given conditions, and not those that might occur under different conditions such as higher temperature or the presence of a catalyst. For example, the dissociation of water into its elements does not occur at ordinary temperature, so a system of water, hydrogen and oxygen at 25 °C has 3 independent components.
=== Aqueous solution of 4 kinds of salts ===
Consider an aqueous solution containing sodium chloride (NaCl), potassium chloride (KCl), sodium bromide (NaBr), and potassium bromide (KBr), in equilibrium with their respective solid phases. While 6 elements are present (H, O, Na, K, Cl, Br), their quantities are not independent due to the following constraints:
The stoichiometry of water: n(H) = 2n(O). This constraint implies that knowing the quantity of one determines the other.
Charge balance in the solution: n(Na) + n(K) = n(Cl) + n(Br). This constraint implies that knowing the quantities of 3 of the 4 ionic species (Na, K, Cl, Br) determines the fourth.
Consequently, the number of independently variable constituents, and therefore the number of components, is 4.
== References ==
A solid solution, a term popularly used for metals, is a homogeneous mixture of two compounds in solid state and having a single crystal structure. Many examples can be found in metallurgy, geology, and solid-state chemistry. The word "solution" is used to describe the intimate mixing of components at the atomic level and distinguishes these homogeneous materials from physical mixtures of components. Two terms are mainly associated with solid solutions – solvents and solutes, depending on the relative abundance of the atomic species.
In general, if two compounds are isostructural, then a solid solution will exist between the end members (also known as parents). For example, sodium chloride and potassium chloride have the same cubic crystal structure, so it is possible to make a pure compound with any ratio of sodium to potassium (Na1-xKx)Cl by dissolving that ratio of NaCl and KCl in water and then evaporating the solution. A member of this family is sold under the brand name Lo Salt, which is (Na0.33K0.66)Cl; hence it contains 66% less sodium than normal table salt (NaCl). The pure minerals are called halite and sylvite; a physical mixture of the two is referred to as sylvinite.
Because minerals are natural materials they are prone to large variations in composition. In many cases specimens are members of a solid solution family, and geologists find it more helpful to discuss the composition of the family than that of an individual specimen. Olivine is described by the formula (Mg, Fe)2SiO4, which is equivalent to (Mg1−xFex)2SiO4. The ratio of magnesium to iron varies between the two endmembers of the solid solution series: forsterite (Mg-endmember: Mg2SiO4) and fayalite (Fe-endmember: Fe2SiO4), but the ratio in olivine is not normally defined. With increasingly complex compositions the geological notation becomes significantly easier to manage than the chemical notation.
== Nomenclature ==
The IUPAC definition of a solid solution is a "solid in which components are compatible and form a unique phase".
The definition "crystal containing a second constituent which fits into and is distributed in the lattice of the host crystal" is not general and, thus, is not recommended.
The expression is to be used to describe a solid phase containing more than one substance when, for convenience, one (or more) of the substances, called the solvent, is treated differently from the other substances, called solutes.
One or several of the components can be macromolecules. Some of the other components can then act as plasticizers, i.e., as molecularly dispersed substances that decrease the glass-transition temperature at which the amorphous phase of a polymer is converted between glassy and rubbery states.
In pharmaceutical preparations, the concept of solid solution is often applied to the case of mixtures of drug and polymer.
The number of drug molecules that do behave as solvent (plasticizer) of polymers is small.
== Phase diagrams ==
On a phase diagram a solid solution is represented by an area, often labeled with the structure type, which covers the compositional and temperature/pressure ranges. Where the end members are not isostructural, there are likely to be two solid solution ranges with different structures dictated by the parents. In this case the ranges may overlap, and the materials in this region can have either structure, or there may be a miscibility gap in the solid state, indicating that attempts to generate materials with this composition will result in mixtures. In areas on a phase diagram which are not covered by a solid solution there may be line phases: these are compounds with a known crystal structure and set stoichiometry. Where the crystalline phase consists of two (non-charged) organic molecules, the line phase is commonly known as a cocrystal. In metallurgy, alloys with a set composition are referred to as intermetallic compounds. A solid solution is likely to exist when the two elements (generally metals) involved are close together on the periodic table; an intermetallic compound generally results when two metals involved are not near each other on the periodic table.
== Details ==
The solute may incorporate into the solvent crystal lattice substitutionally, by replacing a solvent particle in the lattice, or interstitially, by fitting into the space between solvent particles. Both of these types of solid solution affect the properties of the material by distorting the crystal lattice and disrupting the physical and electrical homogeneity of the solvent material. Where the atomic radius of the solute atom is larger than that of the solvent atom it replaces, the crystal structure (unit cell) often expands to accommodate it. This means that the composition of a material in a solid solution can be calculated from the unit cell volume, a relationship known as Vegard's law.
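Assuming the simplest (linear) form of Vegard's law, composition can be back-calculated from a measured lattice parameter. A Python sketch; the NaCl/KCl endmember lattice parameters are approximate literature values, and the "measured" value is invented for illustration:

```python
def vegard_composition(a_measured, a_endmember_A, a_endmember_B):
    """Mole fraction x of B in A(1-x)B(x), from a(x) = (1-x)*a_A + x*a_B."""
    return (a_measured - a_endmember_A) / (a_endmember_B - a_endmember_A)

# (Na,K)Cl solid solution: a(NaCl) ~ 5.64 angstrom, a(KCl) ~ 6.29 angstrom
x_K = vegard_composition(5.97, 5.64, 6.29)
print(f"estimated K fraction: {x_K:.2f}")  # ~0.51
```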
Some mixtures will readily form solid solutions over a range of concentrations, while other mixtures will not form solid solutions at all. The propensity for any two substances to form a solid solution is a complicated matter involving the chemical, crystallographic, and quantum properties of the substances in question. Substitutional solid solutions, in accordance with the Hume-Rothery rules, may form if the solute and solvent have:
Similar atomic radii (15% or less difference)
Same crystal structure
Similar electronegativities
Similar valency
The phase diagram in the above diagram displays an alloy of two metals which forms a solid solution at all relative concentrations of the two species. In this case, the pure phase of each element is of the same crystal structure, and the similar properties of the two elements allow for unbiased substitution through the full range of relative concentrations. Solid solutions of pseudo-binary systems in complex systems with three or more components may require a more involved representation of the phase diagram, with more than one solvus curve drawn corresponding to different equilibrium chemical conditions.
Solid solutions have important commercial and industrial applications, as such mixtures often have superior properties to pure materials. Many metal alloys are solid solutions. Even small amounts of solute can affect the electrical and physical properties of the solvent.
The binary phase diagram in the above diagram shows the phases of a mixture of two substances in varying concentrations, A and B. The region labeled "α" is a solid solution, with B acting as the solute in a matrix of A. On the other end of the concentration scale, the region labeled "β" is also a solid solution, with A acting as the solute in a matrix of B. The large solid region in between the α and β solid solutions, labeled "α + β", is not a solid solution. Instead, an examination of the microstructure of a mixture in this range would reveal two phases: solid solution A-in-B and solid solution B-in-A, which form separate phases, perhaps lamellae or grains.
== Application ==
In the phase diagram, at three different concentrations, the material will be solid until heated to its melting point, and then (after adding the heat of fusion) become liquid at that same temperature:
the unalloyed extreme left
the unalloyed extreme right
the dip in the center (the eutectic composition).
At other proportions, the material will enter a mushy or pasty phase until it warms up to being completely melted.
The mixture at the dip point of the diagram is called a eutectic alloy. Lead-tin mixtures formulated at that point (37/63 mixture) are useful when soldering electronic components, particularly if done manually, since the solid phase is quickly entered as the solder cools. In contrast, when lead-tin mixtures were used to solder seams in automobile bodies a pasty state enabled a shape to be formed with a wooden paddle or tool, so a 70–30 lead to tin ratio was used. (Lead is being removed from such applications owing to its toxicity and consequent difficulty in recycling devices and components that include lead.)
== Exsolution ==
When a solid solution becomes unstable—due to a lower temperature, for example—exsolution occurs and the two phases separate into distinct microscopic to megascopic lamellae. This is mainly caused by difference in cation size. Cations which have a large difference in radii are not likely to readily substitute.
Alkali feldspar minerals, for example, have end members of albite, NaAlSi3O8 and microcline, KAlSi3O8. At high temperatures Na+ and K+ readily substitute for each other and so the minerals will form a solid solution, yet at low temperatures albite can only substitute a small amount of K+ and the same applies for Na+ in the microcline. This leads to exsolution where they will separate into two separate phases. In the case of the alkali feldspar minerals, thin white albite layers will alternate between typically pink microcline, resulting in a perthite texture.
== See also ==
Solid solution strengthening
== Notes ==
== References ==
== External links ==
DoITPoMS Teaching and Learning Package – "Solid Solutions"
Occupational hygiene or industrial hygiene (IH) is the anticipation, recognition, evaluation, control, and confirmation (ARECC) of protection from risks associated with exposures to hazards in, or arising from, the workplace that may result in injury, illness, impairment, or affect the well-being of workers and members of the community. These hazards or stressors are typically divided into the categories biological, chemical, physical, ergonomic and psychosocial. The risk of a health effect from a given stressor is a function of the hazard multiplied by the exposure to the individual or group. For chemicals, the hazard can be understood by the dose response profile most often based on toxicological studies or models. Occupational hygienists work closely with toxicologists (see Toxicology) for understanding chemical hazards, physicists (see Physics) for physical hazards, and physicians and microbiologists for biological hazards (see Microbiology, Tropical medicine, Infection). Environmental and occupational hygienists are considered experts in exposure science and exposure risk management. Depending on an individual's type of job, a hygienist will apply their exposure science expertise for the protection of workers, consumers and/or communities.
== The profession of occupational hygienist ==
The British Occupational Hygiene Society (BOHS) defines that "occupational hygiene is about the prevention of ill-health from work, through recognizing, evaluating and controlling the risks". The International Occupational Hygiene Association (IOHA) refers to occupational hygiene as the discipline of anticipating, recognizing, evaluating and controlling health hazards in the working environment with the objective of protecting worker health and well-being and safeguarding the community at large. The term occupational hygiene (used in the UK and Commonwealth countries as well as much of Europe) is synonymous with industrial hygiene (used in the US, Latin America, and other countries that received initial technical support or training from US sources). The term industrial hygiene traditionally stems from industries such as construction, mining or manufacturing, while occupational hygiene refers to all types of industry, including those listed for industrial hygiene as well as financial and support services industries, and refers to "work", "workplace" and "place of work" in general. Environmental hygiene addresses similar issues to occupational hygiene, but is likely to be about broad industry or broad issues affecting the local community, broader society, region or country.
The profession of occupational hygiene uses strict and rigorous scientific methodology and often requires professional judgment based on experience and education in determining the potential for hazardous exposure risks in workplace and environmental studies. These aspects of occupational hygiene can often be referred to as the "art" of occupational hygiene and is used in a similar sense to the "art" of medicine. In fact "occupational hygiene" is both an aspect of preventive medicine and in particular occupational medicine, in that its goal is to prevent industrial disease, using the science of risk management, exposure assessment and industrial safety. Ultimately professionals seek to implement "safe" systems, procedures or methods to be applied in the workplace or to the environment. Prevention of exposure to long working hours has been identified as a focus for occupational hygiene when a landmark United Nations study estimated that this occupational hazard causes an estimated 745,000 occupational fatalities per year worldwide, the largest burden of disease attributed to any single occupational hazard.
Industrial hygiene refers to the science of anticipating, recognizing, evaluating, and controlling workplaces to prevent illness or injuries to the workers. Industrial hygienists use various environmental monitoring and analytical methods to establish how workers are exposed. In turn, they employ techniques such as engineering and work practice controls to control any potential health hazards.
Anticipation involves identifying potential hazards in the workplace before they are introduced. The uncertainty of health hazards ranges from reasonable expectations to mere speculations. However, it implies that the industrial hygienist must understand the nature of changes in the processes, products, environments, and workforces of the workplaces and how they can affect workers' well-being.
Recognition involves timely identification of workplace hazards; engineering, work practice, and administrative controls are the primary means of reducing workers' exposure to them. Timely recognition of hazards minimizes the workers' exposure by removing or reducing the hazard's source or isolating the workers from the hazards.
Evaluation of a worksite is a significant step that helps the industrial hygienist establish which jobs and worksites are a potential source of problems. During the evaluation, the industrial hygienist measures and identifies the problem tasks and exposures. The most effective worksite assessments include all the jobs, work activities, and operations. The industrial hygienist reviews research and evaluations of how given physical or chemical hazards affect worker health. If the workplace contains a health hazard, the industrial hygienist recommends appropriate corrective actions.
Control measures include removing toxic chemicals and replacing harmful toxic materials with less hazardous ones. Control also involves confining work operations or enclosing work processes and installing general and local ventilation systems. Controls change how the task is performed. Some of the basic work practice controls include: following established procedures to reduce exposures while at the workplace, inspecting and maintaining processes regularly, and implementing reasonable workplace procedures.
== History ==
The industrial hygiene profession gained respectability in 1700, when Bernardino Ramazzini published a comprehensive book on industrial medicine. The book, written in Latin, was known as De Morbis Artificum Diatriba, meaning "The Diseases of Workmen". It gave accurate descriptions of the occupational diseases that workers of his time suffered from. Ramazzini was critical to the future of the industrial hygiene profession because he asserted that occupational diseases should be studied in the workplace environment and not in hospital wards.
Industrial hygiene in the United States started taking shape in the early 20th century. Before then, many workers risked their lives daily working in industrial settings such as manufacturing, mills, construction, and mines. Currently, statistics on work safety are usually measured by the number of injuries and deaths yearly. Before the 20th century, these kinds of statistics were hard to come by because, it appears, no one cared enough to make tracking of job injuries and deaths a priority.
Industrial hygiene received another boost in the early 20th century when Alice Hamilton led an effort to improve industrial hygiene. She began by observing industrial conditions first and then startled mine owners, factory managers, and other state officials with evidence that there was a correlation between workers' illnesses and their exposure to chemical toxins. She presented definitive proposals for eliminating unhealthful working conditions. As a result, the US federal government also began investigating health conditions in the industry. In 1911, the states passed the first workers' compensation laws.
== The social role of occupational hygiene ==
Occupational hygienists have been involved historically with changing the perception of society about the nature and extent of hazards and preventing exposures in the workplace and communities. Many occupational hygienists work day-to-day with industrial situations that require control or improvement to the workplace situation. However larger social issues affecting whole industries have occurred in the past e.g. since 1900, asbestos exposures that have affected the lives of tens of thousands of people. Occupational hygienists have become more engaged in understanding and managing exposure risks to consumers from products with regulations such as REACh (Registration, Evaluation, Authorisation and Restriction of Chemicals) enacted in 2006.
More recent issues affecting broader society are, for example in 1976, Legionnaires' disease or legionellosis. More recently again in the 1990s, radon, and in the 2000s, the effects of mold from indoor air quality situations in the home and at work. In the later part of the 2000s, concern has been raised about the health effects of nanoparticles.
Many of these issues have required the coordination of medical and paraprofessionals in detecting and then characterizing the nature of the issue, both in terms of the hazard and in terms of the risk to the workplace and ultimately to society. This has involved occupational hygienists in research, collection of data and development of suitable and satisfactory control methodologies.
== General activities ==
The occupational hygienist may be involved with the assessment and control of physical, chemical, biological or environmental hazards in the workplace or community that could cause injury or disease. Physical hazards may include noise, temperature extremes, illumination extremes, ionizing or non-ionizing radiation, and ergonomics. Chemical hazards related to dangerous goods or hazardous substances are frequently investigated by occupational hygienists. Other related areas, including indoor air quality (IAQ) and safety, may also receive the attention of the occupational hygienist. Biological hazards may stem from the potential for legionella exposure at work, or biological injuries and effects at work, such as dermatitis, may be investigated.
As part of the investigation process, the occupational hygienist may be called upon to communicate effectively regarding the nature of the hazard, the potential for risk, and the appropriate methods of control. Appropriate controls are selected from the hierarchy of control: by elimination, substitution, engineering, administration and personal protective equipment (PPE) to control the hazard or eliminate the risk. Such controls may involve recommendations as simple as appropriate PPE such as a 'basic' particulate dust mask to occasionally designing dust extraction ventilation systems, work places or management systems to manage people and programs for the preservation of health and well-being of those who enter a workplace.
Examples of occupational hygiene include:
Analysis of physical hazards such as noise, which may require use of hearing protection earplugs and/or earmuffs to prevent hearing loss.
Developing plans and procedures to protect against infectious disease exposure in the event of a flu pandemic.
Monitoring the air for hazardous contaminants which may potentially lead to worker illness or death.
== Workplace assessment methods ==
Although there are many aspects to occupational hygiene work, the best known and most sought after is determining or estimating potential or actual exposures to hazards. For many chemicals and physical hazards, occupational exposure limits have been derived using toxicological, epidemiological and medical data, allowing hygienists to reduce the risks of health effects by implementing the "Hierarchy of Hazard Controls". Several methods can be applied in assessing the workplace or environment for exposure to a known or suspected hazard. Occupational hygienists do not rely on the accuracy of the equipment or method used, but on knowing with certainty and precision the limits of the equipment or method being used and the error or variance given by using that particular equipment or method. Well known methods for performing occupational exposure assessments can be found in the book A Strategy for Assessing and Managing Occupational Exposures, published by AIHA Press.
The main steps outlined for assessing and managing occupational exposures:
Basic Characterization (identify agents, hazards, people potentially exposed and existing exposure controls)
Exposure Assessment (select occupational exposure limits, hazard bands, relevant toxicological data to determine if exposures are "acceptable", "unacceptable" or "uncertain")
Exposure Controls (for "unacceptable" or "uncertain" exposures)
Further Information Gathering (for "uncertain" exposures)
Hazard Communication (for all exposures)
Reassessment (as needed) / Management of Change
=== Basic characterization, hazard identification and walk-through surveys ===
The first step in understanding health risks related to exposures requires the collection of "basic characterization" information from available sources. A traditional method applied by occupational hygienists is to initially survey a workplace or environment to determine both the types of hazards present (e.g. noise, chemicals, radiation) and the possible exposures from them. The walk-through survey can be targeted or limited to particular hazards, such as silica dust or noise, or broadened to focus attention on the control of all hazards to workers. A full walk-through survey is frequently used to provide information on establishing a framework for future investigations, prioritizing hazards, determining the requirements for measurement and establishing some immediate control of potential exposures. The Health Hazard Evaluation Program from the National Institute for Occupational Safety and Health is an example of an industrial hygiene walk-through survey. Other sources of basic characterization information include worker interviews, observing exposure tasks, material safety data sheets, workforce scheduling, production data, and equipment and maintenance schedules, used to identify potential exposure agents and people possibly exposed.
The information that needs to be gathered from sources should apply to the specific type of work from which the hazards can come from. As mentioned previously, examples of these sources include interviews with people who have worked in the field of the hazard, history and analysis of past incidents, and official reports of work and the hazards encountered. Of these, the personnel interviews may be the most critical in identifying undocumented practices, events, releases, hazards and other relevant information. Once the information is gathered from a collection of sources, it is recommended for these to be digitally archived (to allow for quick searching) and to have a physical set of the same information in order for it to be more accessible. One innovative way to display the complex historical hazard information is with a historical hazards identification map, which distills the hazard information into an easy to use graphical format.
=== Sampling ===
An occupational hygienist may use one or a number of commercially available electronic measuring devices to measure noise, vibration, ionizing and non-ionizing radiation, dust, solvents, gases, and so on. Each device is often specifically designed to measure a specific or particular type of contaminant. Electronic devices need to be calibrated before and after use to ensure the accuracy of the measurements taken and often require a system of certifying the precision of the instrument.
Collecting occupational exposure data is resource- and time-intensive, and can be used for different purposes, including evaluating compliance with government regulations and for planning preventive interventions. The usability of occupational exposure data is influenced by these factors:
Data storage (e.g. use of electronic and centralized databases with retention of all records)
Standardization of data collection
Collaboration between researchers, safety and health professionals and insurers
In 2018, in an effort to standardize industrial hygiene data collection among workers compensation insurers and to determine the feasibility of pooling collected IH data, IH air and noise survey forms were collected. Data fields were evaluated for importance and a study list of core fields was developed, and submitted to an expert panel for review before finalization. The final core study list was compared to recommendations published by the American Conference of Governmental Industrial Hygienists (ACGIH) and the American Industrial Hygiene Association (AIHA). Data fields essential to standardizing IH data collection were identified and verified. The "essential" data fields are available and could contribute to improved data quality and its management if incorporated into IH data management systems.
Canada and several European countries have been working to establish occupational exposure databases with standardized data elements and improved data quality. These databases include MEGA, COLCHIC, and CWED.
==== Dust sampling ====
Nuisance dust is considered to be the total dust in air including inhalable and respirable fractions.
Various dust sampling methods exist that are internationally recognised. Inhalable dust is determined using the modern equivalent of the Institute of Occupational Medicine (IOM) MRE 113A monitor. Inhalable dust is considered to be dust of less than 100 micrometers aerodynamic equivalent diameter (AED) that enters through the nose and/or mouth.
Respirable dust is sampled using a cyclone dust sampler designed to sample for a specific fraction of dust AED at a set flow rate. The respirable dust fraction is dust that enters the 'deep lung' and is considered to be less than 10 micrometers AED.
Nuisance, inhalable and respirable dust fractions are all sampled using a constant volumetric pump for a specific sampling period. By knowing the mass of the sample collected and the volume of air sampled, a concentration for the fraction sampled can be given in milligrams (mg) per cubic meter (m3). From such samples, the amount of inhalable or respirable dust can be determined and compared to the relevant occupational exposure limits.
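As a worked example of that calculation, here is a minimal Python sketch (the flow rate, duration and collected mass are illustrative values, not from the source):

```python
def dust_concentration_mg_m3(mass_collected_mg, flow_L_per_min, duration_min):
    """Concentration = collected mass / volume of air drawn through the sampler."""
    volume_m3 = flow_L_per_min * duration_min / 1000.0  # 1 m3 = 1000 L
    return mass_collected_mg / volume_m3

# 0.65 mg of respirable dust collected at 2.2 L/min over an 8-hour (480 min) shift
print(f"{dust_concentration_mg_m3(0.65, 2.2, 480):.2f} mg/m3")  # ~0.62 mg/m3
```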
By use of an inhalable, respirable or other suitable sampler (7-hole, 5-hole, etc.), these dust sampling methods can also be used to determine metal exposure in the air. This requires collection of the sample on a methyl cellulose ester (MCE) filter and acid digestion of the collection media in the laboratory, followed by measuring the metal concentration through atomic absorption spectroscopy or atomic emission spectroscopy. Both the UK Health and Safety Laboratory and the NIOSH Manual of Analytical Methods have specific methodologies for a broad range of metals in air found in industrial processing (smelting, foundries, etc.).
A further method exists for the determination of asbestos, fiberglass, synthetic mineral fiber and ceramic mineral fiber dust in air. This is the membrane filter method (MFM) and requires the collection of the dust on a gridded filter for estimation of exposure by the counting of 'conforming' fibers in 100 fields through a microscope. Results are quantified on the basis of number of fibers per milliliter of air (f/mL). Many countries strictly regulate the methodology applied to the MFM.
==== Chemical sampling ====
Two types of chemically absorbent tubes are used to sample for a wide range of chemical substances. Traditionally, a chemical absorbent 'tube' (a glass or stainless steel tube of between 2 and 10 mm internal diameter) filled with very fine absorbent silica (hydrophilic) or carbon, such as coconut charcoal (lipophilic), is used in a sampling line where air is drawn through the absorbent material for between four hours (minimum workplace sample) and 24 hours (environmental sample). The hydrophilic material readily absorbs water-soluble chemicals and the lipophilic material absorbs non-water-soluble materials. The absorbent material is then chemically or physically extracted and measurements are performed using various gas chromatography or mass spectrometry methods. These absorbent tube methods have the advantage of being usable for a wide range of potential contaminants. However, they are relatively expensive methods, are time-consuming, and require significant expertise in sampling and chemical analysis. A frequent complaint of workers is having to wear the sampling pump (up to 1 kg) for several days of work to provide adequate data for the required statistical certainty of the exposure determination.
In the last few decades, advances have been made in 'passive' badge technology. These samplers can now be purchased to measure one chemical (e.g. formaldehyde), a chemical type (e.g. ketones), or a broad spectrum of chemicals (e.g. solvents). They are relatively easy to set up and use. However, considerable cost can still be incurred in analysis of the 'badge'. They weigh 20 to 30 grams, and workers do not complain about their presence. Unfortunately, 'badges' may not exist for all types of workplace sampling that may be required, and the charcoal or silica tube method may sometimes have to be applied.
From the sampling method, results are expressed in milligrams per cubic meter (mg/m3) or parts per million (PPM) and compared to the relevant occupational exposure limits.
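Since exposure limits are quoted in either unit, converting between mg/m3 and ppm for gases and vapours is often needed. The sketch below uses the standard molar volume of 24.45 L/mol at 25 °C and 1 atm; the toluene example is an illustrative input, not from the text.

```python
# Convert between mg/m3 and ppm for a gas or vapour at 25 degC and 1 atm.

MOLAR_VOLUME_L = 24.45  # L/mol at 25 degC and 1 atm

def mg_m3_to_ppm(mg_m3: float, molar_mass_g_mol: float) -> float:
    return mg_m3 * MOLAR_VOLUME_L / molar_mass_g_mol

def ppm_to_mg_m3(ppm: float, molar_mass_g_mol: float) -> float:
    return ppm * molar_mass_g_mol / MOLAR_VOLUME_L

# Toluene, molar mass ~92.14 g/mol: 100 ppm corresponds to roughly 377 mg/m3.
print(ppm_to_mg_m3(100.0, 92.14))  # ~376.9
```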
It is a critical part of the exposure determination that the method of sampling for the specific contaminant exposure is directly linked to the exposure standard used. Many countries regulate the exposure standard, the method used to determine the exposure, and the methods to be used for chemical or other analysis of the samples collected.
==== Noise sampling ====
Two types of noise are environmental noise, which is unwanted sound that occurs outdoors, and occupational noise, the sound that is received by employees while they are in the workplace. Environmental noise can originate from various sources depending on the activity, location, and time. Environmental noise can be generated from transportation such as road, rail, and air traffic, or construction and building services, and even domestic and leisure activities.
The legal limit on environmental noise is 70 dB(A) averaged over 24 hours of exposure. Similarly, the limit on occupational noise is 85 dB(A) per NIOSH, or 90 dB(A) per OSHA, for an 8-hour work period. Several instruments are used to measure noise against these limits, including the sound level meter (SLM), sound level meter app, integrating sound level meter (ISLM), impulse sound level meter (impulse SLM), noise dosimeter, and personal sound exposure meter (PSEM), described below; a worked dose calculation follows the descriptions.
Sound level meter (SLM): measures the sound level at a single point in time and consequently requires multiple measurements to be taken at different times of the day. The SLM is primarily used for measuring relatively stable sound levels; there is increased difficulty in measuring the average sound exposure if the noise levels vary greatly.
Sound level meter app: a program that can be downloaded to a mobile device. It measures noise through the phone's built-in or an external microphone and displays the sound level measurement, with some apps emulating sound level meters and noise dosimeters.
Integrating sound level meter (ISLM): measures the equivalent sound levels within the measurement period. Because the ISLM measures noise in a particular area, it is difficult to measure a worker's personal exposure as they move throughout a workspace.
Impulse sound level meter (Impulse SLM): measures the peak of each sound impulse. The most optimal conditions to measure the peaks occur when there is little background noise.
Noise dosimeter: records the sound level at given points in time, capturing how sound levels vary across the measurement period. The noise dosimeter can measure personal exposure levels and can be used in areas with a high risk of fire.
Personal sound exposure meter (PSEM): worn by employees while they work. The advantage of the PSEM is that it eliminates the need for noise assessors to follow up with workers when the assessors measure the noise levels of the work areas.
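The dose reported by such instruments follows the exposure action value logic described earlier. Below is a hedged sketch of the OSHA-style calculation (8-hour reference duration, 90 dB(A) criterion level, 5-dB exchange rate); the shift profile is an illustrative input.

```python
# OSHA-style noise dose and 8-hour time-weighted average (TWA).
import math

def permissible_hours(level_dba: float, criterion: float = 90.0,
                      exchange_rate: float = 5.0) -> float:
    """Allowed time at a constant level: T = 8 / 2**((L - criterion) / exchange_rate)."""
    return 8.0 / 2 ** ((level_dba - criterion) / exchange_rate)

def noise_dose(exposures) -> float:
    """Percent dose from (level dB(A), hours) pairs: D = 100 * sum(C_i / T_i)."""
    return 100.0 * sum(hours / permissible_hours(level) for level, hours in exposures)

# Illustrative shift: 4 h at 90 dB(A) plus 4 h at 95 dB(A).
dose = noise_dose([(90.0, 4.0), (95.0, 4.0)])
twa = 16.61 * math.log10(dose / 100.0) + 90.0
print(f"Dose: {dose:.0f}%, TWA: {twa:.1f} dB(A)")  # Dose: 150%, TWA: ~92.9 dB(A)
```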
Excessive noise can lead to occupational hearing loss. 12% of workers report having hearing difficulties, making this the third most common chronic disease in the U.S. Among these workers, 24% have hearing difficulties caused by occupational noise, with 8% affected by tinnitus, and 4% having both hearing difficulties and tinnitus.
Ototoxic chemicals, including solvents, metals and compounds, asphyxiants, nitriles, and pharmaceuticals, may contribute further to hearing loss.
=== Exposure management and controls ===
The hierarchy of control defines the approach used to reduce exposure risks, protecting workers and communities. These methods include elimination, substitution, engineering controls (isolation or ventilation), administrative controls, and personal protective equipment. Occupational hygienists, engineers, maintenance, management and employees should all be consulted when selecting and designing the most effective and efficient controls based on the hierarchy of control.
== Professional societies ==
The development of industrial hygiene societies originated in the United States, beginning with the first convening of members for the American Conference of Governmental Industrial Hygienists in 1938, and the formation of the American Industrial Hygiene Association in 1939. In the United Kingdom, the British Occupational Hygiene Society started in 1953. Through the years, professional occupational hygiene societies have formed in many countries, leading to the formation of the International Occupational Hygiene Association in 1987 to promote and develop occupational hygiene worldwide through its member organizations. The IOHA has grown to 29 member organizations, representing over 20,000 occupational hygienists worldwide, with representation on every continent.
== Peer-reviewed literature ==
There are several academic journals specifically focused on publishing studies and research in the occupational health field. The Journal of Occupational and Environmental Hygiene (JOEH) has been published jointly since 2004 by the American Industrial Hygiene Association and the American Conference of Governmental Industrial Hygienists, replacing the former American Industrial Hygiene Association Journal and Applied Occupational & Environmental Hygiene journals. Another seminal occupational hygiene journal is The Annals of Occupational Hygiene, published by the British Occupational Hygiene Society since 1958. Further, NIOSH maintains a searchable bibliographic database (NIOSHTIC-2) of occupational safety and health publications, documents, grant reports, and other communication products.
== Occupational hygiene as a career ==
Examples of occupational hygiene careers include:
Compliance officer on behalf of regulatory agency
Professional working on behalf of company for the protection of the workforce
Consultant working on behalf of companies
Researcher performing laboratory or field occupational hygiene work
=== Education ===
The basis of the technical knowledge of occupational hygiene is from competent training in the following areas of science and management:
Basic sciences (biology, chemistry, mathematics (statistics), physics)
Occupational diseases (illness, injury and health surveillance (biostatistics, epidemiology, toxicology))
Health hazards (biological, chemical and physical hazards, ergonomics and human factors)
Working environments (mining, industrial, manufacturing, transport and storage, service industries and offices)
Programme management principles (professional and business ethics, work site and incident investigation methods, exposure guidelines, occupational exposure limits, jurisdictional based regulations, hazard identification, risk assessment and risk communication, data management, fire evacuation and other emergency responses)
Sampling, measurement and evaluation practices (instrumentation, sampling protocols, methods or techniques, analytical chemistry)
Hazard controls (elimination, substitution, engineering, administrative, PPE and air conditioning and extraction ventilation)
Environment (air pollution, hazardous waste)
However, it is not rote knowledge that identifies a competent occupational hygienist. There is an "art" to applying the technical principles in a manner that provides a reasonable solution for workplace and environmental issues. In effect, an experienced mentor in occupational hygiene is required to show a new occupational hygienist how to apply the learned scientific and management knowledge in the workplace and to the environmental issue in order to satisfactorily resolve the problem.
To be a professional occupational hygienist, experience in as wide a practice as possible is required to demonstrate knowledge in areas of occupational hygiene. This is difficult for "specialists" or those who practice in narrow subject areas. Limiting experience to an individual subject such as asbestos remediation, confined spaces, indoor air quality, or lead abatement, or learning only through a textbook or “review course”, can be a disadvantage when required to demonstrate competence in other areas of occupational hygiene.
Information presented in Wikipedia can be considered to be only an outline of the requirements for professional occupational hygiene training. This is because the actual requirements in any country, state or region may vary due to educational resources available, industry demand or regulatory mandated requirements.
During 2010, the Occupational Hygiene Training Association (OHTA), through sponsorship provided by the IOHA, initiated a training scheme for those with an interest in or those requiring training in occupational hygiene. These training modules can be downloaded and used freely. The available subject modules (Basic Principles in Occupational Hygiene, Health Effects of Hazardous Substances, Measurement of Hazardous Substances, Thermal Environment, Noise, Asbestos, Control, Ergonomics) are aimed at the ‘foundation’ and ‘intermediate’ levels in occupational hygiene. Although the modules can be used freely without supervision, attendance at an accredited training course is encouraged. These training modules are available from ohtatraining.org.
Academic programs offering industrial hygiene bachelor's or master's degrees in the United States may apply to the Accreditation Board for Engineering and Technology (ABET) to have their program accredited. As of October 1, 2006, 27 institutions had accredited industrial hygiene programs. Accreditation is not available for doctoral programs.
In the U.S., the training of IH professionals is supported by NIOSH through their NIOSH Education and Research Centers.
=== Professional credentials ===
==== Australia ====
In 2005, the Australian Institute of Occupational Hygiene (AIOH) accredited professional occupational hygienists through a certification scheme. Occupational Hygienists in Australia certified through this scheme are entitled to use the phrase Certified Occupational Hygienist (COH) as part of their qualifications.
==== Hong Kong ====
The Registered Professional Hygienist Registration & Examination Board (RPH R&EB) was set up by the Council of the Hong Kong Institute of Occupational & Environmental Hygiene (HKIOEH) with the aim of enhancing the professional development of occupational hygienists and providing a path for persons who reach professional maturity in the field of occupational hygiene to obtain a qualification recognised by peer professionals. Under HKIOEH, the RPH R&EB operates the registration program of Registered Professional Hygienist (RPH) and a qualifying examination in a standard meeting the practice recognised by the National Accreditation Recognition (NAR) Committee of the International Occupational Hygiene Association (IOHA).
==== Saudi Arabia ====
The Saudi Arabian Ministry of Health's Occupational Health Directorate and Labor Office are the government agencies responsible for decisions and surveillance related to occupational hygiene. Professional occupational hygiene and safety education programs surveilled under these offices are available through Saudi Arabian colleges.
==== United States ====
Practitioners who successfully meet specific education and work-experience requirements and pass a written examination administered by the Board for Global EHS Credentialing (BGC) are authorized to use the term Certified Industrial Hygienist (CIH) or the discontinued Certified Associate Industrial Hygienist (CAIH). Both of these terms have been codified into law in many states in the United States to identify minimum qualifications of individuals having oversight over certain activities that may affect employee and general public health.
After the initial certification, the CIH or CAIH maintains their certification by meeting on-going requirements for ethical behavior, education, and professional activities (e.g., active practice, technical committees, publishing, teaching).
Certification examinations are offered during a spring and fall testing window each year worldwide.
The CIH designation is the most well known and recognized industrial hygiene designation throughout the world. There are approximately 6800 CIHs in the world, making BGC the largest industrial hygiene certification organization. The CAIH certification program was discontinued in 2006. Those who were certified as a CAIH retain their certification through ongoing certification maintenance. People who are currently certified by BGC can be found in a public roster.
The BGC is a recognized certification board by the International Occupational Hygiene Association (IOHA). The CIH certification has been accredited internationally by the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC 17024). In the United States, the CIH has been accredited by the Council of Engineering and Scientific Specialty Boards (CESB).
==== Canada ====
In Canada, a practitioner who successfully completes a written test and an interview administered by the Canadian Registration Board of Occupational Hygienists can be recognized as a Registered Occupational Hygienist (ROH) or Registered Occupational Hygiene Technician (ROHT). There is also a designation to be recognized as a Canadian Registered Safety Professional (CRSP).
==== United Kingdom ====
The Faculty of Occupational Hygiene, part of the British Occupational Hygiene Society, represents the interests of professional occupational hygienists.
Membership of the Faculty of Occupational Hygiene is confined to BOHS members who hold a recognized professional qualification in occupational hygiene.
There are three grades of Faculty membership:
Licentiate (LFOH) holders will have obtained the BOHS Certificate of Operational Competence in Occupational Hygiene and have at least three years’ practical experience in the field.
Members (MFOH) are normally holders of the Diploma of Professional Competence in Occupational Hygiene and have at least five years’ experience at a senior level.
Fellows (FFOH) are senior members of the profession who have made a distinct contribution to the advancement of occupational hygiene.
All Faculty members participate in a Continuous Professional Development (CPD) scheme designed to maintain a high level of current awareness and knowledge in occupational hygiene.
==== India ====
The Indian Society of Industrial Hygiene was formed in 1981 at Chennai, India. Subsequently, its secretariat was shifted to Kanpur. The society has registered about 400 members, about 90 of whom are life members. The society publishes a newsletter, Industrial Hygiene Link. The secretary of the society is Shyam Singh Gautam.
== See also ==
== References ==
== Further reading ==
World Health Organization Occupational Health Publications
International Labour Organization Encyclopaedia of Occupational Health and Safety, ISBN 92-2-109203-8
UK HSEline
EPA Indoor Air Quality on-line educator
Canada hazard information
A list of MSDS sites (Partly commercial)
(US) NIOSH Pocket Guide
(US) Agency for Toxic Substances and Disease Registry
(US) National Library of Medicine Toxicology Data Network
(US) National Toxicology Program
International Agency for Research on Cancer
RTECS (by subscription only)
Chemfinder
Inchem
Many larger businesses maintain their own product and chemical information.
There are also many subscription services available (CHEMINFO, OSH, CHEMpendium, Chem Alert, Chemwatch, Infosafe, RightAnswer.com's TOMES Plus, OSH Update, OSH-ROM, et cetera).
== External links ==
OSHA standards on exposure to hexavalent chromium – Hexavalent Chromium National Emphasis Program
American Conference of Governmental Industrial Hygienists (ACGIH)
American Industrial Hygiene Association
Government of Hong Kong Occupational Safety and Health Council, Air Contaminants in the Workplace
View a PowerPoint Presentation Explaining What Industrial Hygiene Is - developed and made available by AIHA
The National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM)
UK Health and Safety Executive, Health and Safety Laboratory, Methods for the Determination of Hazardous Substances (MDHS)
International Organization for Standardization (ISO)
International Occupational Hygiene Association (IOHA)
Workplace Health Without Borders (WHWB) | Wikipedia/Industrial_hygiene |
In thermochemistry, the enthalpy of solution (heat of solution or enthalpy of solvation) is the enthalpy change associated with the dissolution of a substance in a solvent at constant pressure resulting in infinite dilution.
The enthalpy of solution is most often expressed in kJ/mol at constant temperature. The energy change can be regarded as being made up of three parts: the endothermic breaking of bonds within the solute and within the solvent, and the formation of attractions between the solute and the solvent. An ideal solution has a null enthalpy of mixing. For a non-ideal solution, it is an excess molar quantity.
== Energetics ==
Dissolution of most gases is exothermic. That is, when a gas dissolves in a liquid solvent, energy is released as heat, warming both the system (i.e. the solution) and the surroundings.
The temperature of the solution eventually decreases to match that of the surroundings. The equilibrium between the gas as a separate phase and the gas in solution will, by Le Châtelier's principle, shift to favour the gas going into solution as the temperature is decreased (decreasing the temperature increases the solubility of a gas).
When a saturated solution of a gas is heated, gas comes out of the solution.
== Steps in dissolution ==
Dissolution can be viewed as occurring in three steps:
Breaking solute–solute attractions (endothermic), for instance, lattice energy $U_{\text{latt}}$ in salts.
Breaking solvent–solvent attractions (endothermic), for instance, that of hydrogen bonding.
Forming solvent–solute attractions (exothermic), in solvation.
The value of the enthalpy of solvation is the sum of these individual steps:
$$\Delta H_{\text{solv}} = \Delta H_{\text{diss}} + U_{\text{latt}}.$$
Dissolving ammonium nitrate in water is endothermic. The energy released by the solvation of the ammonium ions and nitrate ions is less than the energy absorbed in breaking up the ammonium nitrate ionic lattice and the attractions between water molecules. Dissolving potassium hydroxide is exothermic, as more energy is released during solvation than is used in breaking up the solute and solvent.
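A rough numeric illustration of this balance for sodium chloride, using commonly quoted textbook values (approximate, and chosen here only for illustration):

```python
# Enthalpy of solution as lattice-breaking plus hydration, for NaCl.

U_latt = +787.0        # kJ/mol absorbed to break the NaCl lattice (textbook value)
H_hydration = -784.0   # kJ/mol released hydrating Na+ and Cl- (textbook value)

H_solution = U_latt + H_hydration
print(f"Estimated enthalpy of solution of NaCl: {H_solution:+.0f} kJ/mol")
# ~ +3 kJ/mol: slightly endothermic, so NaCl dissolves with only a
# barely perceptible cooling of the water.
```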
== Expressions in differential or integral form ==
The expressions of the enthalpy change of dissolution can be differential or integral, as a function of the ratio of the amounts of solute and solvent.
The molar differential enthalpy change of dissolution is
$$\Delta_{\text{diss}}^{\text{d}} H = \left( \frac{\partial \Delta_{\text{diss}} H}{\partial \Delta n_i} \right)_{T,p,n_B},$$

where $\partial \Delta n_i$
is the infinitesimal variation, or differential, of the mole number of the solute during dissolution.
The integral heat of dissolution is defined with reference to the process of obtaining a certain amount of solution with a final concentration. The enthalpy change in this process, normalized by the mole number of solute, is evaluated as the molar integral heat of dissolution. Mathematically, the molar integral heat of dissolution is denoted as
$$\Delta_{\text{diss}}^{\text{i}} H = \frac{\Delta_{\text{diss}} H}{n_B}.$$
The prime heat of dissolution is the differential heat of dissolution for obtaining an infinitely diluted solution.
== Dependence on the nature of the solution ==
The enthalpy of mixing of an ideal solution is zero by definition, but the enthalpy of dissolution of nonelectrolytes has the value of the enthalpy of fusion or vaporisation. For non-ideal solutions of electrolytes it is connected to the activity coefficient of the solute(s) and the temperature derivative of the relative permittivity through the following formula:
$$H_{\text{dil}} = \sum_i \nu_i R T \ln \gamma_i \left( 1 + \frac{T}{\epsilon} \frac{\partial \epsilon}{\partial T} \right).$$
== See also ==
Apparent molar property
Enthalpy of mixing
Heat of dilution
Heat of melting
Hydration energy
Lattice energy
Law of dilution
Solvation
Thermodynamic activity
Solubility equilibrium
== References ==
== External links ==
phase diagram | Wikipedia/Enthalpy_change_of_solution |
An aqueous solution is a solution in which the solvent is water. It is usually shown in chemical equations by appending (aq) to the relevant chemical formula. For example, a solution of table salt, also known as sodium chloride (NaCl), in water would be represented as Na+(aq) + Cl−(aq). The word aqueous (which comes from aqua) means pertaining to, related to, similar to, or dissolved in, water. As water is an excellent solvent and is also naturally abundant, it is a ubiquitous solvent in chemistry. Since water is frequently used as the solvent in experiments, the word solution refers to an aqueous solution, unless the solvent is specified.
A non-aqueous solution is a solution in which the solvent is a liquid, but is not water.
== Characteristics ==
Substances that are hydrophobic ('water-fearing') do not dissolve well in water, whereas those that are hydrophilic ('water-friendly') do. An example of a hydrophilic substance is sodium chloride. In an aqueous solution the hydrogen ions (H+) and hydroxide ions (OH−) are in Arrhenius balance ([H+][OH−] = Kw = 1 × 10−14 at 298 K).
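For example, in a dilute acid at 298 K with an illustrative $[\mathrm{H^+}] = 1 \times 10^{-4}$ mol/L, this balance fixes the hydroxide concentration:

$$[\mathrm{OH^-}] = \frac{K_{\text{w}}}{[\mathrm{H^+}]} = \frac{1 \times 10^{-14}}{1 \times 10^{-4}} = 1 \times 10^{-10}\ \text{mol/L}.$$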
Acids and bases are aqueous solutions, as part of their Arrhenius definitions. An example of an Arrhenius acid is hydrogen chloride (HCl), because it dissociates to release hydrogen ions when dissolved in water. Sodium hydroxide (NaOH) is an Arrhenius base because it dissociates to release hydroxide ions when dissolved in water.
Aqueous solutions may contain, especially in the alkaline zone or subjected to radiolysis, hydrated atomic hydrogen and hydrated electrons.
== Electrolytes ==
Aqueous solutions that conduct electric current efficiently contain "strong" electrolytes, while those that conduct poorly contain "weak" electrolytes. The former substances are completely, or at least substantially, ionized in water; conversely, weak electrolytes exhibit relatively limited ionization in water. The ability of ions to move freely through a solvent is a characteristic of an aqueous strong electrolyte solution; the solutes in a weak electrolyte solution are present as ions, but only to a small degree.
Non-electrolytes, conversely, are substances that dissolve in water yet maintain their molecular integrity: they do not dissociate into ions. Examples include sugar, urea, glycerol, and methylsulfonylmethane (MSM).
== Reactions ==
Reactions in aqueous solutions are usually metathesis reactions. Metathesis is another term for double displacement: two dissolved ionic compounds exchange partners, each cation dissociating from its original anion and forming an ionic bond with the anion of the other compound.
A common metathesis reaction in aqueous solutions is a precipitation reaction. This reaction occurs when two aqueous strong electrolyte solutions mix and produce an insoluble solid, also known as a precipitate. The ability of a substance to dissolve in water is determined by whether the substance can match or exceed the strong attractive forces that water molecules generate between themselves. If the substance lacks the ability to dissolve in water, the molecules form a precipitate.
When writing the equations of precipitation reactions, it is essential to determine the precipitate. To determine the precipitate, one must consult a chart of solubility. Soluble compounds are aqueous, while insoluble compounds are the precipitate. There may not always be a precipitate. Complete ionic equations and net ionic equations are used to show dissociated ions in metathesis reactions. When performing calculations regarding the reacting of one or more aqueous solutions, in general one must know the concentration, or molarity, of the aqueous solutions.
== See also ==
Acid–base reaction
Acidity function
Dissociation (chemistry)
Drug permeability
Inorganic nonaqueous solvent
List of ions in pure water (aqueous chemistry)
Metal ions in aqueous solution
Properties of water
Solubility
Solvated electron
== References == | Wikipedia/Aqueous_solution |
The Antoine equation is a class of semi-empirical correlations describing the relation between vapor pressure and temperature for pure substances. The Antoine equation is derived from the Clausius–Clapeyron relation. The equation was presented in 1888 by the French engineer Louis Charles Antoine (1825–1897).
== Equation ==
The Antoine equation is
$$\log_{10} p = A - \frac{B}{C+T},$$
where p is the vapor pressure, T is temperature (in °C or in K according to the value of C), and A, B and C are component-specific constants.
The simplified form with C set to zero,
$$\log_{10} p = A - \frac{B}{T},$$
is the August equation, after the German physicist Ernst Ferdinand August (1795–1870). The August equation describes a linear relation between the logarithm of the pressure and the reciprocal temperature. This assumes a temperature-independent heat of vaporization. The Antoine equation allows an improved, but still inexact description of the change of the heat of vaporization with the temperature.
The Antoine equation can also be transformed in a temperature-explicit form with simple algebraic manipulations:
$$T = \frac{B}{A - \log_{10} p} - C.$$
== Validity range ==
Usually, the Antoine equation cannot be used to describe the entire saturated vapour pressure curve from the triple point to the critical point, because it is not flexible enough. Therefore, multiple parameter sets for a single component are commonly used. A low-pressure parameter set is used to describe the vapour pressure curve up to the normal boiling point and the second set of parameters is used for the range from the normal boiling point to the critical point.
Typical deviations of a parameter fit over the entire range (experimental data for benzene).
== Example parameters ==
The two parameter sets used below for ethanol (p in mmHg, T in °C) are A = 8.20417, B = 1642.89, C = 230.300, and A = 7.68117, B = 1332.04, C = 199.200.
=== Example calculation ===
The normal boiling point of ethanol is TB = 78.32 °C. At this temperature, the two sets of parameters above produce the following vapor pressures:
$$P = 10^{\left(8.20417 - \frac{1642.89}{78.32 + 230.300}\right)} = 760.0\ \text{mmHg},$$
$$P = 10^{\left(7.68117 - \frac{1332.04}{78.32 + 199.200}\right)} = 761.0\ \text{mmHg}$$
(760 mmHg = 101.325 kPa = 1.000 atm = normal pressure).
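A minimal sketch reproducing this worked example (the function name is ours; the parameters are the two ethanol sets quoted above, with p in mmHg and T in °C):

```python
# Antoine equation: log10(p) = A - B / (C + T), p in mmHg, T in degC.

def antoine_pressure(T_c: float, A: float, B: float, C: float) -> float:
    return 10 ** (A - B / (C + T_c))

T_b = 78.32  # normal boiling point of ethanol, degC
print(antoine_pressure(T_b, 8.20417, 1642.89, 230.300))  # ~760.0 mmHg
print(antoine_pressure(T_b, 7.68117, 1332.04, 199.200))  # ~761.0 mmHg, note the mismatch
```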
This example shows a severe problem caused by using two different sets of coefficients. The described vapor pressure is not continuous—at the normal boiling point the two sets give different results. This causes severe problems for computational techniques which rely on a continuous vapor pressure curve.
Two solutions are possible: The first approach uses a single Antoine parameter set over a larger temperature range and accepts the increased deviation between calculated and real vapor pressures. A variant of this single set approach is using a special parameter set fitted for the examined temperature range. The second solution is switching to another vapor pressure equation with more than three parameters. Commonly used are simple extensions of the Antoine equation (see below) and the equations of DIPPR or Wagner.
== Units ==
The coefficients of Antoine's equation are normally given for pressure in mmHg, even today, when the SI is recommended and pascals are preferred. The usage of the pre-SI units has only historical reasons and originates directly from Antoine's original publication.
It is, however, easy to convert the parameters to different pressure and temperature units. For switching from degrees Celsius to kelvins, it is sufficient to subtract 273.15 from the C parameter. For switching from millimeters of mercury to pascals, it is sufficient to add the common logarithm of the factor between both units to the A parameter:
$$A_{\text{Pa}} = A_{\text{mmHg}} + \log_{10}\frac{101325}{760} = A_{\text{mmHg}} + 2.124903.$$
The parameters for °C and mmHg for ethanol
A = 8.20417
B = 1642.89
C = 230.300
are converted for K and Pa to
A = 10.32907
B = 1642.89
C = −42.85
The first example calculation with TB = 351.47 K becomes
$$\log_{10}(P\ \text{[Pa]}) = 10.3291 - \frac{1642.89}{351.47 - 42.85} = 5.005727378 = \log_{10}(101328).$$
A similarly simple transformation can be used if the common logarithm should be replaced by the natural logarithm. It is sufficient to multiply the A and B parameters by ln(10) = 2.302585.
The example calculation with the converted parameters (for K and Pa):
A = 23.7836
B = 3782.89
C = −42.85
becomes
$$\ln(P\ \text{[Pa]}) = 23.7836 - \frac{3782.89}{351.47 - 42.85} = 11.52616367 = \ln(101332).$$
(The small differences in the results are caused only by the limited precision of the coefficients used.)
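These conversions are mechanical enough to script; the sketch below implements the two transformations just described (the helper names are ours):

```python
# Convert Antoine parameters between unit systems and logarithm bases.
import math

def mmhg_degc_to_pa_kelvin(A, B, C):
    """Offset A for mmHg -> Pa and shift C for degC -> K (base-10 form)."""
    return A + math.log10(101325 / 760), B, C - 273.15

def base10_to_natural(A, B, C):
    """Rescale A and B by ln(10) when moving from log10 to ln."""
    return A * math.log(10), B * math.log(10), C

A, B, C = mmhg_degc_to_pa_kelvin(8.20417, 1642.89, 230.300)
print(A, B, C)                      # ~10.32907, 1642.89, -42.85
print(base10_to_natural(A, B, C))   # ~23.7836, 3782.89, -42.85
```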
== Extensions ==
To overcome the limits of the Antoine equation, some simple extensions with additional terms are used:

$$P = \exp\left(A + \frac{B}{C+T} + D \cdot T + E \cdot T^2 + F \cdot \ln T\right),$$
$$P = \exp\left(A + \frac{B}{C+T} + D \cdot \ln T + E \cdot T^F\right).$$
The additional parameters increase the flexibility of the equation and allow the description of the entire vapor pressure curve. The extended equation forms can be reduced to the original form by setting the additional parameters D, E and F to 0.
A further difference is that the extended equations use e as the base of the exponential function and the natural logarithm. This does not affect the form of the equation.
== Generalized Antoine equation with acentric factor ==
Lee developed a modified form of the Antoine equation that allows calculating vapor pressure across the entire temperature range using the acentric factor (ω) of a substance:
$$\ln p_{\text{v,r}} = \ln 27 - \frac{27/8}{A(\omega) T_{\text{r}}^{9.5663} + B(\omega) T_{\text{r}}^{2.0074} + C(\omega) T_{\text{r}}^{1.1206}},$$
where
$$A(\omega) = -0.0966\omega^3 + 0.1717\omega^2 + 0.0280\omega + 0.0498,$$
$$B(\omega) = 0.6093\omega^3 - 1.2620\omega^2 + 1.3025\omega + 0.2817,$$
$$C(\omega) = -0.5127\omega^3 + 1.0903\omega^2 - 1.3305\omega + 0.6925,$$

$\ln p_{\text{v,r}}$ is the natural logarithm of the reduced vapor pressure, $T_{\text{r}}$ is the reduced temperature, and $\omega$ is the acentric factor.
The fundamental structure of the equation is based on the van der Waals equation and builds upon the findings of Wall and of Gutmann et al., who reformulated it into the Antoine equation. The proposed equation demonstrates improved accuracy compared to the Lee–Kesler method.
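A sketch implementing the correlation above (the function name is ours; inputs are the reduced temperature Tr = T/Tc and the acentric factor ω):

```python
# Lee's generalized Antoine correlation for reduced vapor pressure.
import math

def lee_reduced_vapor_pressure(Tr: float, omega: float) -> float:
    A = -0.0966 * omega**3 + 0.1717 * omega**2 + 0.0280 * omega + 0.0498
    B = 0.6093 * omega**3 - 1.2620 * omega**2 + 1.3025 * omega + 0.2817
    C = -0.5127 * omega**3 + 1.0903 * omega**2 - 1.3305 * omega + 0.6925
    ln_pr = math.log(27.0) - (27.0 / 8.0) / (
        A * Tr**9.5663 + B * Tr**2.0074 + C * Tr**1.1206)
    return math.exp(ln_pr)

# Sanity check at the critical point: Tr = 1 must give p_v,r = 1, since the
# cubic, quadratic and linear coefficients cancel and A + B + C = 1.0240
# = (27/8)/ln(27) for any omega.
print(lee_reduced_vapor_pressure(1.0, 0.344))  # ~1.0 (omega of water, illustrative)
```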
== Sources for Antoine equation parameters ==
NIST Chemistry WebBook
Dortmund Data Bank
Directory of reference books and data banks containing Antoine constants
Several reference books and publications, e.g.:
Lange's Handbook of Chemistry, McGraw-Hill Professional
Wichterle I., Linek J., "Antoine Vapor Pressure Constants of Pure Compounds"
Yaws C. L., Yang H.-C., "To Estimate Vapor Pressure Easily. Antoine Coefficients Relate Vapor Pressure to Temperature for Almost 700 Major Organic Compounds", Hydrocarbon Processing, 68(10), Pages 65–68, 1989
== See also ==
Vapour pressure of water
Arden Buck equation
Lee–Kesler method
Goff–Gratch equation
Raoult's law
Thermodynamic activity
== References ==
== External links ==
Gallica, scanned original paper
NIST Chemistry Web Book
Calculation of vapor pressures with the Antoine equation | Wikipedia/Antoine_equation |
A heteronuclear molecule is a molecule composed of atoms of more than one chemical element. For example, a molecule of water (H2O) is heteronuclear because it has atoms of two different elements, hydrogen (H) and oxygen (O).
Similarly, a heteronuclear ion is an ion that contains atoms of more than one chemical element. For example, the carbonate ion (CO₃²⁻) is heteronuclear because it has atoms of carbon (C) and oxygen (O). The lightest heteronuclear ion is the helium hydride ion (HeH+). This is in contrast to a homonuclear ion, which contains all the same kind of atom, such as the dihydrogen cation, or atomic ions that only contain one atom, such as the hydrogen anion (H−).
== References ==
== See also ==
Homonuclear molecule
Chemical compound | Wikipedia/Heteronuclear_molecule |
The internal energy of a thermodynamic system is the energy of the system as a state function, measured as the quantity of energy necessary to bring the system from its standard internal state to its present internal state of interest, accounting for the gains and losses of energy due to changes in its internal state, including such quantities as magnetization. It excludes the kinetic energy of motion of the system as a whole and the potential energy of position of the system as a whole, with respect to its surroundings and external force fields. It includes the thermal energy, i.e., the constituent particles' kinetic energies of motion relative to the motion of the system as a whole. Without a thermodynamic process, the internal energy of an isolated system cannot change, as expressed in the law of conservation of energy, a foundation of the first law of thermodynamics. The notion was introduced by Rudolf Clausius to describe systems characterized by temperature variations, temperature being added to the set of state parameters alongside the position variables known in mechanics (and their conjugate generalized force parameters), in a way similar to the potential energy of the conservative force fields, gravitational and electrostatic. Without transfer of matter, internal energy changes equal the algebraic sum of the heat transferred and the work done. In systems without temperature changes, internal energy changes equal the work done by/on the system.
The internal energy cannot be measured absolutely. Thermodynamics concerns changes in the internal energy, not its absolute value. The processes that change the internal energy are transfers, into or out of the system, of substance, or of energy, as heat, or by thermodynamic work. These processes are measured by changes in the system's properties, such as temperature, entropy, volume, electric polarization, and molar constitution. The internal energy depends only on the internal state of the system and not on the particular choice from many possible processes by which energy may pass into or out of the system. It is a state variable, a thermodynamic potential, and an extensive property.
Thermodynamics defines internal energy macroscopically, for the body as a whole. In statistical mechanics, the internal energy of a body can be analyzed microscopically in terms of the kinetic energies of microscopic motion of the system's particles from translations, rotations, and vibrations, and of the potential energies associated with microscopic forces, including chemical bonds.
The unit of energy in the International System of Units (SI) is the joule (J). The internal energy relative to the mass with unit J/kg is the specific internal energy. The corresponding quantity relative to the amount of substance with unit J/mol is the molar internal energy.
== Cardinal functions ==
The internal energy of a system depends on its entropy S, its volume V and its number of massive particles: U(S,V,{Nj}). It expresses the thermodynamics of a system in the energy representation. As a function of state, its arguments are exclusively extensive variables of state. Alongside the internal energy, the other cardinal function of state of a thermodynamic system is its entropy, as a function, S(U,V,{Nj}), of the same list of extensive variables of state, except that the entropy, S, is replaced in the list by the internal energy, U. It expresses the entropy representation.
Each cardinal function is a monotonic function of each of its natural or canonical variables. Each provides its characteristic or fundamental equation, for example U = U(S,V,{Nj}), that by itself contains all thermodynamic information about the system. The fundamental equations for the two cardinal functions can in principle be interconverted by solving, for example, U = U(S,V,{Nj}) for S, to get S = S(U,V,{Nj}).
In contrast, Legendre transformations are necessary to derive fundamental equations for other thermodynamic potentials and Massieu functions. The entropy as a function only of extensive state variables is the one and only cardinal function of state for the generation of Massieu functions. It is not itself customarily designated a 'Massieu function', though rationally it might be thought of as such, corresponding to the term 'thermodynamic potential', which includes the internal energy.
For real and practical systems, explicit expressions of the fundamental equations are almost always unavailable, but the functional relations exist in principle. Formal, in principle, manipulations of them are valuable for the understanding of thermodynamics.
== Description and definition ==
The internal energy $U$ of a given state of the system is determined relative to that of a standard state of the system, by adding up the macroscopic transfers of energy that accompany a change of state from the reference state to the given state:

$$\Delta U = \sum_i E_i,$$
where $\Delta U$ denotes the difference between the internal energy of the given state and that of the reference state, and the $E_i$ are the various energies transferred to the system in the steps from the reference state to the given state.
It is the energy needed to create the given state of the system from the reference state. From a non-relativistic microscopic point of view, it may be divided into microscopic potential energy, $U_{\text{micro,pot}}$, and microscopic kinetic energy, $U_{\text{micro,kin}}$, components:

$$U = U_{\text{micro,pot}} + U_{\text{micro,kin}}.$$
The microscopic kinetic energy of a system arises as the sum of the motions of all the system's particles with respect to the center-of-mass frame, whether it be the motion of atoms, molecules, atomic nuclei, electrons, or other particles. The components summed algebraically into the microscopic potential energy are those of the chemical and nuclear particle bonds, and of the physical force fields within the system, such as those due to internal induced electric or magnetic dipole moments, as well as the energy of deformation of solids (stress–strain). Usually, the split into microscopic kinetic and potential energies is outside the scope of macroscopic thermodynamics.
Internal energy does not include the energy due to motion or location of a system as a whole. That is to say, it excludes any kinetic or potential energy the body may have because of its motion or location in external gravitational, electrostatic, or electromagnetic fields. It does, however, include the contribution of such a field to the energy due to the coupling of the internal degrees of freedom of the system with the field. In such a case, the field is included in the thermodynamic description of the object in the form of an additional external parameter.
For practical considerations in thermodynamics or engineering, it is rarely necessary, convenient, or even possible, to consider all energies belonging to the total intrinsic energy of a sample system, such as the energy given by the equivalence of mass. Typically, descriptions only include components relevant to the system under study. Indeed, in most systems under consideration, especially through thermodynamics, it is impossible to calculate the total internal energy. Therefore, a convenient null reference point may be chosen for the internal energy.
The internal energy is an extensive property: it depends on the size of the system, or on the amount of substance it contains.
At any temperature greater than absolute zero, microscopic potential energy and kinetic energy are constantly converted into one another, but the sum remains constant in an isolated system. In the classical picture of thermodynamics, kinetic energy vanishes at zero temperature and the internal energy is purely potential energy. However, quantum mechanics has demonstrated that even at zero temperature particles maintain a residual energy of motion, the zero point energy. A system at absolute zero is merely in its quantum-mechanical ground state, the lowest energy state available. At absolute zero a system of given composition has attained its minimum attainable entropy.
The microscopic kinetic energy portion of the internal energy gives rise to the temperature of the system. Statistical mechanics relates the pseudo-random kinetic energy of individual particles to the mean kinetic energy of the entire ensemble of particles comprising a system. Furthermore, it relates the mean microscopic kinetic energy to the macroscopically observed empirical property that is expressed as temperature of the system. While temperature is an intensive measure, this energy expresses the concept as an extensive property of the system, often referred to as the thermal energy. The scaling property between temperature and thermal energy is the entropy change of the system.
Statistical mechanics considers any system to be statistically distributed across an ensemble of $N$ microstates. In a system that is in thermodynamic contact equilibrium with a heat reservoir, each microstate has an energy $E_i$ and is associated with a probability $p_i$. The internal energy is the mean value of the system's total energy, i.e., the sum of all microstate energies, each weighted by its probability of occurrence:

$$U = \sum_{i=1}^{N} p_i E_i.$$
This is the statistical expression of the law of conservation of energy.
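A small sketch of this probability-weighted average for a two-level system in equilibrium with a heat bath (the level spacing is an illustrative value):

```python
# Internal energy U = sum_i p_i * E_i with Boltzmann weights.
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def internal_energy(levels_j, T):
    weights = [math.exp(-E / (k_B * T)) for E in levels_j]
    Z = sum(weights)  # partition function
    return sum(w / Z * E for w, E in zip(weights, levels_j))

levels = [0.0, 1.0e-21]  # ground state and one excited state, J (illustrative)
for T in (10.0, 100.0, 1000.0):
    print(T, internal_energy(levels, T))
# U climbs toward the midpoint (E0 + E1) / 2 as both states become equally likely.
```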
=== Internal energy changes ===
Thermodynamics is chiefly concerned with the changes in internal energy $\Delta U$.

For a closed system, with mass transfer excluded, the changes in internal energy are due to heat transfer $Q$ and due to thermodynamic work $W$ done by the system on its surroundings. Accordingly, the internal energy change $\Delta U$ for a process may be written

$$\Delta U = Q - W \quad \text{(closed system, no transfer of substance)}.$$
When a closed system receives energy as heat, this energy increases the internal energy. It is distributed between microscopic kinetic and microscopic potential energies. In general, thermodynamics does not trace this distribution. In an ideal gas all of the extra energy results in a temperature increase, as it is stored solely as microscopic kinetic energy; such heating is said to be sensible.
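As a numeric illustration (values chosen for illustration only), a closed system that absorbs 100 J as heat while performing 40 J of expansion work on its surroundings gains

$$\Delta U = Q - W = 100\ \text{J} - 40\ \text{J} = 60\ \text{J}.$$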
A second mechanism of change of the internal energy of a closed system is the doing of work by the system on its surroundings. Such work may be simply mechanical, as when the system expands to drive a piston, or, for example, when the system changes its electric polarization so as to drive a change in the electric field in the surroundings.
If the system is not closed, the third mechanism that can increase the internal energy is transfer of substance into the system. This increase, $\Delta U_{\text{matter}}$, cannot be split into heat and work components. If the system is so set up physically that heat transfer and work that it does are by pathways separate from and independent of matter transfer, then the transfers of energy add to change the internal energy:

$$\Delta U = Q - W + \Delta U_{\text{matter}} \quad \text{(matter transfer pathway separate from heat and work transfer pathways)}.$$
If a system undergoes certain phase transformations while being heated, such as melting and vaporization, it may be observed that the temperature of the system does not change until the entire sample has completed the transformation. The energy introduced into the system while the temperature does not change is called latent energy or latent heat, in contrast to sensible heat, which is associated with temperature change.
== Internal energy of the ideal gas ==
Thermodynamics often uses the concept of the ideal gas for teaching purposes, and as an approximation for working systems. The ideal gas consists of particles considered as point objects that interact only by elastic collisions and fill a volume such that their mean free path between collisions is much larger than their diameter. Such systems approximate monatomic gases such as helium and other noble gases. For an ideal gas the kinetic energy consists only of the translational energy of the individual atoms. Monatomic particles do not possess rotational or vibrational degrees of freedom, and are not electronically excited to higher energies except at very high temperatures.
Therefore, the internal energy of an ideal gas depends solely on its temperature (and the number of gas particles): $U = U(N,T)$. It is not dependent on other thermodynamic quantities such as pressure or density.
The internal energy of an ideal gas is proportional to its amount of substance (number of moles) $N$ and to its temperature $T$:

$$U = c_V N T,$$

where $c_V$ is the isochoric (at constant volume) molar heat capacity of the gas; $c_V$ is constant for an ideal gas. The internal energy of any gas (ideal or not) may be written as a function of the three extensive properties $S$, $V$, $N$ (entropy, volume, number of moles). In the case of the ideal gas it takes the following form:

$$U(S,V,N) = \mathrm{const} \cdot e^{\frac{S}{c_V N}} V^{\frac{-R}{c_V}} N^{\frac{R + c_V}{c_V}},$$

where $\mathrm{const}$ is an arbitrary positive constant and $R$ is the universal gas constant. It is easily seen that $U$ is a linearly homogeneous function of the three variables (that is, it is extensive in these variables), and that it is weakly convex. Knowing temperature and pressure to be the derivatives

$$T = \frac{\partial U}{\partial S}, \qquad P = -\frac{\partial U}{\partial V},$$

the ideal gas law $PV = NRT$ immediately follows as below:

$$T = \frac{\partial U}{\partial S} = \frac{U}{c_V N},$$
$$P = -\frac{\partial U}{\partial V} = U \frac{R}{c_V V},$$
$$\frac{P}{T} = \frac{UR/(c_V V)}{U/(c_V N)} = \frac{NR}{V},$$
$$PV = NRT.$$
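A numeric sketch of $U = c_V N T$, assuming a monatomic ideal gas so that $c_V = \tfrac{3}{2}R$ (the gas choice and state values are illustrative):

```python
# Internal energy change of a monatomic ideal gas, U = c_V * N * T.

R = 8.314462618        # universal gas constant, J/(mol K)
c_V = 1.5 * R          # molar isochoric heat capacity, monatomic gas

N = 2.0                # moles (e.g. helium, illustrative)
T1, T2 = 300.0, 400.0  # kelvin

delta_U = c_V * N * (T2 - T1)
print(f"Delta U = {delta_U:.0f} J")  # ~2494 J, independent of volume or pressure
```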
== Internal energy of a closed thermodynamic system ==
The above summation of all components of change in internal energy assumes that a positive energy denotes heat added to the system or the negative of work done by the system on its surroundings.
This relationship may be expressed in infinitesimal terms using the differentials of each term, though only the internal energy is an exact differential. For a closed system, with transfers only as heat and work, the change in the internal energy is

$$\mathrm{d}U = \delta Q - \delta W,$$
expressing the first law of thermodynamics. It may be expressed in terms of other thermodynamic parameters. Each term is composed of an intensive variable (a generalized force) and its conjugate infinitesimal extensive variable (a generalized displacement).
For example, the mechanical work done by the system may be related to the pressure $P$ and volume change $\mathrm{d}V$. The pressure is the intensive generalized force, while the volume change is the extensive generalized displacement:

$$\delta W = P\,\mathrm{d}V.$$
This defines the direction of work, $W$, to be energy transfer from the working system to the surroundings, indicated by a positive term. Taking the direction of heat transfer $Q$ to be into the working fluid and assuming a reversible process, the heat is

$$\delta Q = T\,\mathrm{d}S,$$

where $T$ denotes the temperature, and $S$ denotes the entropy.
The change in internal energy becomes
$$\mathrm{d}U = T\,\mathrm{d}S - P\,\mathrm{d}V.$$
=== Changes due to temperature and volume ===
The expression relating changes in internal energy to changes in temperature and volume is

$$\mathrm{d}U = C_V\,\mathrm{d}T + \left[ T\left(\frac{\partial P}{\partial T}\right)_V - P \right] \mathrm{d}V.$$

This is useful if the equation of state is known.
In the case of an ideal gas, we can derive that $\mathrm{d}U = C_V\,\mathrm{d}T$, i.e. the internal energy of an ideal gas can be written as a function that depends only on the temperature.
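To see why the volume term vanishes for an ideal gas, substitute $PV = NRT$ into the expression above (a one-line check):

$$T\left(\frac{\partial P}{\partial T}\right)_V - P = T \cdot \frac{NR}{V} - P = P - P = 0.$$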
=== Changes due to temperature and pressure ===
When considering fluids or solids, an expression in terms of the temperature and pressure is usually more useful:
$$\mathrm{d}U = \left(C_P - \alpha P V\right)\mathrm{d}T + \left(\beta_T P - \alpha T\right)V\,\mathrm{d}P,$$

where $\alpha$ is the coefficient of thermal expansion and $\beta_T$ is the isothermal compressibility, and where it is assumed that the heat capacity at constant pressure is related to the heat capacity at constant volume according to

$$C_P = C_V + VT\frac{\alpha^2}{\beta_T}.$$
=== Changes due to volume at constant temperature ===
The internal pressure is defined as a partial derivative of the internal energy with respect to the volume at constant temperature:
$$\pi_T = \left( \frac{\partial U}{\partial V} \right)_T.$$
== Internal energy of multi-component systems ==
In addition to including the entropy $S$ and volume $V$ terms in the internal energy, a system is often described also in terms of the number of particles or chemical species it contains:

$$U = U(S, V, N_1, \ldots, N_n),$$
where $N_j$ are the molar amounts of constituents of type $j$ in the system. The internal energy is an extensive function of the extensive variables $S$, $V$, and the amounts $N_j$; it may be written as a linearly homogeneous function of first degree:

$$U(\alpha S, \alpha V, \alpha N_1, \alpha N_2, \ldots) = \alpha U(S, V, N_1, N_2, \ldots),$$
where $\alpha$ is a factor describing the growth of the system. The differential internal energy may be written as

$$\mathrm{d}U = \frac{\partial U}{\partial S}\,\mathrm{d}S + \frac{\partial U}{\partial V}\,\mathrm{d}V + \sum_i \frac{\partial U}{\partial N_i}\,\mathrm{d}N_i = T\,\mathrm{d}S - P\,\mathrm{d}V + \sum_i \mu_i\,\mathrm{d}N_i,$$
which shows (or defines) temperature $T$ to be the partial derivative of $U$ with respect to entropy $S$, and pressure $P$ to be the negative of the similar derivative with respect to volume $V$:

$$T = \frac{\partial U}{\partial S}, \qquad P = -\frac{\partial U}{\partial V},$$

and where the coefficients $\mu_i$ are the chemical potentials for the components of type $i$ in the system. The chemical potentials are defined as the partial derivatives of the internal energy with respect to the variations in composition:
$$\mu_i = \left( \frac{\partial U}{\partial N_i} \right)_{S,V,N_{j \neq i}}.$$
As conjugate variables to the composition $\{N_j\}$, the chemical potentials are intensive properties, intrinsically characteristic of the qualitative nature of the system, and not proportional to its extent. Under conditions of constant $T$ and $P$, because of the extensive nature of $U$ and its independent variables, using Euler's homogeneous function theorem, the differential $\mathrm{d}U$ may be integrated and yields an expression for the internal energy:

$$U = TS - PV + \sum_i \mu_i N_i.$$
The sum over the composition of the system is the Gibbs free energy:
{\displaystyle G=\sum _{i}\mu _{i}N_{i}}
that arises from changing the composition of the system at constant temperature and pressure. For a single component system, the chemical potential equals the Gibbs energy per amount of substance, i.e. particles or moles according to the original definition of the unit for {Nj}.
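To make Euler's homogeneous function theorem concrete, the integrated relation can be checked symbolically on a toy first-degree homogeneous function. The function below is an illustrative assumption, not a physical equation of state:

```python
import sympy as sp

S, V, N = sp.symbols('S V N', positive=True)

# A toy internal energy that is homogeneous of degree one in (S, V, N):
# U(aS, aV, aN) = a U(S, V, N)
U = S**3 / (V * N)

T = sp.diff(U, S)     # temperature, (dU/dS)_{V,N}
P = -sp.diff(U, V)    # pressure, -(dU/dV)_{S,N}
mu = sp.diff(U, N)    # chemical potential, (dU/dN)_{S,V}

# Euler's theorem: U = TS - PV + mu*N, so the difference should vanish
print(sp.simplify(T*S - P*V + mu*N - U))   # -> 0
```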
== Internal energy in an elastic medium ==
For an elastic medium the potential energy component of the internal energy has an elastic nature expressed in terms of the stress σij and strain εij involved in elastic processes. In Einstein notation for tensors, with summation over repeated indices, for unit volume, the infinitesimal statement is
{\displaystyle \mathrm {d} U=T\mathrm {d} S+\sigma _{ij}\mathrm {d} \varepsilon _{ij}.}
Euler's theorem yields for the internal energy:
{\displaystyle U=TS+{\frac {1}{2}}\sigma _{ij}\varepsilon _{ij}.}
For a linearly elastic material, the stress is related to the strain by
{\displaystyle \sigma _{ij}=C_{ijkl}\varepsilon _{kl},}
where the Cijkl are the components of the 4th-rank elastic constant tensor of the medium.
Elastic deformations, such as sound passing through a body, or other forms of macroscopic internal agitation or turbulent motion, create states in which the system is not in thermodynamic equilibrium. While such energies of motion continue, they contribute to the total energy of the system; thermodynamic internal energy pertains only when such motions have ceased.
== History ==
James Joule studied the relationship between heat, work, and temperature. He observed that friction in a liquid, such as caused by its agitation with work by a paddle wheel, caused an increase in its temperature, which he described as producing a quantity of heat. Expressed in modern units, he found that c. 4186 joules of energy were needed to raise the temperature of one kilogram of water by one degree Celsius.
== Notes ==
== See also ==
Calorimetry
Enthalpy
Exergy
Thermodynamic equations
Thermodynamic potentials
Gibbs free energy
Helmholtz free energy
== References ==
=== Bibliography of cited references ===
Adkins, C. J. (1968/1975). Equilibrium Thermodynamics, second edition, McGraw-Hill, London, ISBN 0-07-084057-1.
Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 0-88318-797-3.
Born, M. (1949). Natural Philosophy of Cause and Chance, Oxford University Press, London.
Callen, H. B. (1960/1985), Thermodynamics and an Introduction to Thermostatistics, (first edition 1960), second edition 1985, John Wiley & Sons, New York, ISBN 0-471-86256-8.
Crawford, F. H. (1963). Heat, Thermodynamics, and Statistical Physics, Rupert Hart-Davis, London, Harcourt, Brace & World, Inc.
Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73–117081.
Thomas W. Leland Jr., G. A. Mansoori (ed.), Basic Principles of Classical and Statistical Thermodynamics (PDF).
Landau, L. D.; Lifshitz, E. M. (1986). Theory of Elasticity (Course of Theoretical Physics Volume 7). (Translated from Russian by J. B. Sykes and W. H. Reid) (Third ed.). Boston, MA: Butterworth Heinemann. ISBN 978-0-7506-2633-0.
Münster, A. (1970), Classical Thermodynamics, translated by E. S. Halberstadt, Wiley–Interscience, London, ISBN 0-471-62430-6.
Planck, M., (1923/1927). Treatise on Thermodynamics, translated by A. Ogg, third English edition, Longmans, Green and Co., London.
Tschoegl, N. W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics, Elsevier, Amsterdam, ISBN 0-444-50426-5.
== Bibliography ==
Alberty, R. A. (2001). "Use of Legendre transforms in chemical thermodynamics" (PDF). Pure Appl. Chem. 73 (8): 1349–1380. doi:10.1351/pac200173081349. S2CID 98264934.
Lewis, Gilbert Newton; Randall, Merle; revised by Pitzer, Kenneth S. & Brewer, Leo (1961). Thermodynamics (2nd ed.). New York, NY: McGraw-Hill Book Co. ISBN 978-0-07-113809-3. | Wikipedia/Internal_Energy |
Ehrenfest equations (named after Paul Ehrenfest) are equations which describe changes in specific heat capacity and derivatives of specific volume in second-order phase transitions. The Clausius–Clapeyron relation does not make sense for second-order phase transitions, as neither specific entropy nor specific volume changes in second-order phase transitions.
== Quantitative consideration ==
Ehrenfest equations are the consequence of the continuity of specific entropy s and specific volume v, which are the first derivatives of the specific Gibbs free energy, in second-order phase transitions. If one considers specific entropy s as a function of temperature and pressure, then its differential is:
{\displaystyle ds=\left({{\partial s} \over {\partial T}}\right)_{P}dT+\left({{\partial s} \over {\partial P}}\right)_{T}dP}.
As
{\displaystyle \left({{\partial s} \over {\partial T}}\right)_{P}={{c_{P}} \over T},\left({{\partial s} \over {\partial P}}\right)_{T}=-\left({{\partial v} \over {\partial T}}\right)_{P}},
then the differential of specific entropy also is:
{\displaystyle d{s_{i}}={{c_{iP}} \over T}dT-\left({{\partial v_{i}} \over {\partial T}}\right)_{P}dP},
where i = 1 and i = 2 denote the two phases that transform into one another. Due to continuity of specific entropy, the following holds in second-order phase transitions:
{\displaystyle {ds_{1}}={ds_{2}}}.
So,
{\displaystyle \left({c_{2P}-c_{1P}}\right){{dT} \over T}=\left[{\left({{\partial v_{2}} \over {\partial T}}\right)_{P}-\left({{\partial v_{1}} \over {\partial T}}\right)_{P}}\right]dP}
Therefore, the first Ehrenfest equation is:
{\displaystyle {\Delta c_{P}=T\cdot \Delta \left({\left({{\partial v} \over {\partial T}}\right)_{P}}\right)\cdot {{dP} \over {dT}}}}.
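The bookkeeping above can be reproduced symbolically: write dsi for both phases with abstract volume functions, impose ds1 = ds2, and solve for dP/dT. A minimal sympy sketch in which v1, v2 and the heat capacities are left abstract (all names are illustrative):

```python
import sympy as sp

T, P, dT, dP = sp.symbols('T P dT dP', positive=True)
c1P, c2P = sp.symbols('c_1P c_2P')
v1 = sp.Function('v1')(T, P)   # specific volume of phase 1
v2 = sp.Function('v2')(T, P)   # specific volume of phase 2

# ds_i = (c_iP / T) dT - (dv_i/dT)_P dP for each phase
ds1 = c1P / T * dT - sp.diff(v1, T) * dP
ds2 = c2P / T * dT - sp.diff(v2, T) * dP

# Continuity ds1 = ds2 along the transition line; solve for dP/dT
dPdT = sp.solve(sp.Eq(ds1, ds2), dP)[0] / dT
print(sp.simplify(dPdT))   # -> (c2P - c1P) / (T * (dv2/dT - dv1/dT))
```

Rearranged, this is exactly Δc_P = T · Δ(∂v/∂T)_P · dP/dT.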
The second Ehrenfest equation is obtained in a similar manner, but with specific entropy considered as a function of temperature and specific volume:
{\displaystyle {\Delta c_{V}=-T\cdot \Delta \left({\left({{\partial P} \over {\partial T}}\right)_{v}}\right)\cdot {{dv} \over {dT}}}}
The third Ehrenfest equation is obtained in a similar manner, but with specific entropy considered as a function of v and P:
{\displaystyle {\Delta \left({{\partial v} \over {\partial T}}\right)_{P}=\Delta \left({\left({{\partial P} \over {\partial T}}\right)_{v}}\right)\cdot {{dv} \over {dP}}}}.
Continuity of specific volume as a function of T and P gives the fourth Ehrenfest equation:
{\displaystyle {\Delta \left({{\partial v} \over {\partial T}}\right)_{P}=-\Delta \left({\left({{\partial v} \over {\partial P}}\right)_{T}}\right)\cdot {{dP} \over {dT}}}}.
== Limitations ==
Derivatives of the Gibbs free energy are not always finite. For example, transitions between different magnetic states of metals cannot be described by the Ehrenfest equations.
== See also ==
Paul Ehrenfest
Clausius–Clapeyron relation
Phase transition
== References == | Wikipedia/Ehrenfest_equations |
The Duhem–Margules equation, named for Pierre Duhem and Max Margules, is a thermodynamic statement of the relationship between the two components of a single liquid where the vapour mixture is regarded as an ideal gas:
{\displaystyle \left({\frac {\mathrm {d} \ln P_{A}}{\mathrm {d} \ln x_{A}}}\right)_{T,P}=\left({\frac {\mathrm {d} \ln P_{B}}{\mathrm {d} \ln x_{B}}}\right)_{T,P}}
where PA and PB are the partial vapour pressures of the two constituents and xA and xB are the mole fractions of the liquid. The equation gives the relation between changes in mole fraction and partial pressure of the components.
== Derivation ==
Let us consider a binary liquid mixture of two components, A and B, in equilibrium with their vapour at constant temperature and pressure. Then, from the Gibbs–Duhem equation, we have
{\displaystyle n_{A}\,\mathrm {d} \mu _{A}+n_{B}\,\mathrm {d} \mu _{B}=0\qquad (1)}
where nA and nB are the number of moles of the components A and B, while μA and μB are their chemical potentials.
Dividing equation (1) by nA + nB gives
{\displaystyle {\frac {n_{A}}{n_{A}+n_{B}}}\mathrm {d} \mu _{A}+{\frac {n_{B}}{n_{A}+n_{B}}}\mathrm {d} \mu _{B}=0}
or
{\displaystyle x_{A}\,\mathrm {d} \mu _{A}+x_{B}\,\mathrm {d} \mu _{B}=0\qquad (2)}
Now the chemical potential of any component in a mixture is dependent upon temperature, pressure and the composition of the mixture. Hence, if temperature and pressure are taken to be constant, the chemical potentials must satisfy
{\displaystyle \mathrm {d} \mu _{A}=\left({\frac {\partial \mu _{A}}{\partial x_{A}}}\right)_{T,P}\mathrm {d} x_{A}\qquad (3)}
{\displaystyle \mathrm {d} \mu _{B}=\left({\frac {\partial \mu _{B}}{\partial x_{B}}}\right)_{T,P}\mathrm {d} x_{B}\qquad (4)}
Putting these values in equation (2) gives
{\displaystyle x_{A}\left({\frac {\partial \mu _{A}}{\partial x_{A}}}\right)_{T,P}\mathrm {d} x_{A}+x_{B}\left({\frac {\partial \mu _{B}}{\partial x_{B}}}\right)_{T,P}\mathrm {d} x_{B}=0\qquad (5)}
Because the sum of mole fractions of all components in the mixture is unity, i.e.,
{\displaystyle x_{A}+x_{B}=1}
we have
{\displaystyle \mathrm {d} x_{A}+\mathrm {d} x_{B}=0}
so equation (5) can be re-written:
{\displaystyle x_{A}\left({\frac {\partial \mu _{A}}{\partial x_{A}}}\right)_{T,P}=x_{B}\left({\frac {\partial \mu _{B}}{\partial x_{B}}}\right)_{T,P}\qquad (6)}
Now the chemical potential of any component in the mixture is such that
{\displaystyle \mu =\mu _{0}+RT\ln P}
where P is the partial pressure of that component. By differentiating this equation with respect to the mole fraction of a component:
{\displaystyle {\frac {\mathrm {d} \mu }{\mathrm {d} x}}=RT{\frac {\mathrm {d} \ln P}{\mathrm {d} x}}}
we have for components A and B
{\displaystyle {\frac {\mathrm {d} \mu _{A}}{\mathrm {d} x_{A}}}=RT{\frac {\mathrm {d} \ln P_{A}}{\mathrm {d} x_{A}}},\qquad {\frac {\mathrm {d} \mu _{B}}{\mathrm {d} x_{B}}}=RT{\frac {\mathrm {d} \ln P_{B}}{\mathrm {d} x_{B}}}}
Substituting these values in equation (6) gives
{\displaystyle x_{A}{\frac {\mathrm {d} \ln P_{A}}{\mathrm {d} x_{A}}}=x_{B}{\frac {\mathrm {d} \ln P_{B}}{\mathrm {d} x_{B}}}}
or
{\displaystyle \left({\frac {\mathrm {d} \ln P_{A}}{\mathrm {d} \ln x_{A}}}\right)_{T,P}=\left({\frac {\mathrm {d} \ln P_{B}}{\mathrm {d} \ln x_{B}}}\right)_{T,P}}
This final equation is the Duhem–Margules equation.
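As a quick numerical check, an ideal solution obeying Raoult's law (PA = xA·PA°, PB = xB·PB°) should satisfy the equation with both sides equal to one. The pure-component vapour pressures below are arbitrary illustrative values:

```python
import numpy as np

PA0, PB0 = 50.0, 80.0          # assumed pure-component vapour pressures
xA = np.linspace(0.05, 0.95, 5)
xB = 1.0 - xA

h = 1e-6
# d ln P / d ln x, evaluated numerically at each composition
lhs = (np.log((xA + h) * PA0) - np.log(xA * PA0)) / (np.log(xA + h) - np.log(xA))
rhs = (np.log((xB + h) * PB0) - np.log(xB * PB0)) / (np.log(xB + h) - np.log(xB))

print(lhs)   # -> ~1.0 everywhere
print(rhs)   # -> ~1.0 everywhere: both sides of the equation agree
```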
== Sources ==
Atkins, Peter and Julio de Paula. 2002. Physical Chemistry, 7th ed. New York: W. H. Freeman and Co.
Carter, Ashley H. 2001. Classical and Statistical Thermodynamics. Upper Saddle River: Prentice Hall. | Wikipedia/Duhem–Margules_equation |
The classical elements typically refer to earth, water, air, fire, and (later) aether which were proposed to explain the nature and complexity of all matter in terms of simpler substances. Ancient cultures in Greece, Angola, Tibet, India, and Mali had similar lists which sometimes referred, in local languages, to "air" as "wind", and to "aether" as "space".
These different cultures and even individual philosophers had widely varying explanations concerning their attributes and how they related to observable phenomena as well as cosmology. Sometimes these theories overlapped with mythology and were personified in deities. Some of these interpretations included atomism (the idea of very small, indivisible portions of matter), but other interpretations considered the elements to be divisible into infinitely small pieces without changing their nature.
While the classification of the material world in ancient India, Hellenistic Egypt, and ancient Greece into air, earth, fire, and water was more philosophical, during the Middle Ages medieval scientists used practical, experimental observation to classify materials. In Europe, the ancient Greek concept, devised by Empedocles, evolved into the systematic classifications of Aristotle and Hippocrates. This evolved slightly into the medieval system, and eventually became the object of experimental verification in the 17th century, at the start of the Scientific Revolution.
Modern science does not support the classical elements to classify types of substances. Atomic theory classifies atoms into more than a hundred chemical elements such as oxygen, iron, and mercury, which may form chemical compounds and mixtures. The modern categories roughly corresponding to the classical elements are the states of matter produced under different temperatures and pressures. Solid, liquid, gas, and plasma share many attributes with the corresponding classical elements of earth, water, air, and fire, but these states describe the similar behavior of different types of atoms at similar energy levels, not the characteristic behavior of certain atoms or substances.
== Hellenistic philosophy ==
The ancient Greek concept of four basic elements, these being earth (γῆ gê), water (ὕδωρ hýdōr), air (ἀήρ aḗr), and fire (πῦρ pŷr), dates from pre-Socratic times and persisted throughout the Middle Ages and into the Early modern period, deeply influencing European thought and culture.
=== Pre-Socratic elements ===
==== Water, air, or fire? ====
The classical elements were first proposed independently by several early Pre-Socratic philosophers. Greek philosophers had debated which substance was the arche ("first principle"), or primordial element from which everything else was made. Thales (c. 626/623 – c. 548/545 BC) believed that water was this principle. Anaximander (c. 610 – c. 546 BC) argued that the primordial substance was not any of the known substances, but could be transformed into them, and they into each other. Anaximenes (c. 586 – c. 526 BC) favored air, and Heraclitus (fl. c. 500 BC) championed fire.
==== Fire, earth, air, and water ====
The Greek philosopher Empedocles (c. 450 BC) was the first to propose the four classical elements as a set: fire, earth, air, and water. He called them the four "roots" (ῥιζώματα, rhizōmata). Empedocles also proved (at least to his own satisfaction) that air was a separate substance by observing that a bucket inverted in water did not become filled with water, a pocket of air remaining trapped inside.
Fire, earth, air, and water have become the most popular set of classical elements in modern interpretations. One such version was provided by Robert Boyle in The Sceptical Chymist, which was published in 1661 in the form of a dialogue between five characters. Themistius, the Aristotelian of the party, says:
If You but consider a piece of green-Wood burning in a Chimney, You will readily discern in the disbanded parts of it the four Elements, of which we teach It and other mixt bodies to be compos'd. The fire discovers it self in the flame ... the smoke by ascending to the top of the chimney, and there readily vanishing into air ... manifests to what Element it belongs and gladly returnes. The water ... boyling and hissing at the ends of the burning Wood betrayes it self ... and the ashes by their weight, their firiness, and their dryness, put it past doubt that they belong to the Element of Earth.
=== Humorism (Hippocrates) ===
According to Galen, these elements were used by Hippocrates (c. 460 – c. 370 BC) in describing the human body with an association with the four humours: yellow bile (fire), black bile (earth), blood (air), and phlegm (water). Medical care was primarily about helping the patient stay in or return to their own personal natural balanced state.
=== Plato ===
Plato (428/423 – 348/347 BC) seems to have been the first to use the term "element (στοιχεῖον, stoicheîon)" in reference to air, fire, earth, and water. The ancient Greek word for element, stoicheion (from stoicheo, "to line up") meant "smallest division (of a sun-dial), a syllable", as the composing unit of an alphabet it could denote a letter and the smallest unit from which a word is formed.
=== Aristotle ===
In On the Heavens (350 BC), Aristotle defines "element" in general:
An element, we take it, is a body into which other bodies may be analysed, present in them potentially or in actuality (which of these, is still disputable), and not itself divisible into bodies different in form. That, or something like it, is what all men in every case mean by element.
In his On Generation and Corruption, Aristotle related each of the four elements to two of the four sensible qualities:
Fire is both hot and dry.
Air is both hot and wet (for air is like vapor, ἀτμὶς).
Water is both cold and wet.
Earth is both cold and dry.
A classic diagram has one square inscribed in the other, with the corners of one being the classical elements, and the corners of the other being the properties. The opposite corner is the opposite of these properties, "hot – cold" and "dry – wet".
==== Aether ====
Aristotle added a fifth element, aether (αἰθήρ aither), as the quintessence, reasoning that whereas fire, earth, air, and water were earthly and corruptible, since no changes had been perceived in the heavenly regions, the stars cannot be made out of any of the four elements but must be made of a different, unchangeable, heavenly substance. It had previously been believed by pre-Socratics such as Empedocles and Anaxagoras that aether, the name applied to the material of heavenly bodies, was a form of fire. Aristotle himself did not use the term aether for the fifth element, and strongly criticised the pre-Socratics for associating the term with fire. He preferred a number of other terms indicating eternal movement, thus emphasising the evidence for his discovery of a new element. These five elements have been associated since Plato's Timaeus with the five platonic solids. Earth was associated with the cube, air with the octahedron, water with the icosahedron, and fire with the tetrahedron. Of the fifth Platonic solid, the dodecahedron, Plato obscurely remarked, "...the god used [it] for arranging the constellations on the whole heaven". Aristotle added a fifth element, aither (aether in Latin, "ether" in English) and postulated that the heavens were made of this element, but he had no interest in matching it with Plato's fifth solid.
=== Neo-Platonism ===
The Neoplatonic philosopher Proclus rejected Aristotle's theory relating the elements to the sensible qualities hot, cold, wet, and dry. He maintained that each of the elements has three properties. Fire is sharp (ὀξυτητα), subtle (λεπτομερειαν), and mobile (εὐκινησιαν) while its opposite, earth, is blunt (αμβλυτητα), dense (παχυμερειαν), and immobile (ακινησιαν); they are joined by the intermediate elements, air and water, in the following fashion:
=== Hermeticism ===
A text written in Egypt in Hellenistic or Roman times called the Kore Kosmou ("Virgin of the World") ascribed to Hermes Trismegistus (associated with the Egyptian god Thoth), names the four elements fire, water, air, and earth. As described in this book:
And Isis answer made: Of living things, my son, some are made friends with fire, and some with water, some with air, and some with earth, and some with two or three of these, and some with all. And, on the contrary, again some are made enemies of fire, and some of water, some of earth, and some of air, and some of two of them, and some of three, and some of all. For instance, son, the locust and all flies flee fire; the eagle and the hawk and all high-flying birds flee water; fish, air and earth; the snake avoids the open air. Whereas snakes and all creeping things love earth; all swimming things love water; winged things, air, of which they are the citizens; while those that fly still higher love the fire and have the habitat near it. Not that some of the animals as well do not love fire; for instance salamanders, for they even have their homes in it. It is because one or another of the elements doth form their bodies' outer envelope. Each soul, accordingly, while it is in its body is weighted and constricted by these four.
== Ancient Indian philosophy ==
=== Hinduism ===
The system of five elements is found in the Vedas, especially Ayurveda. The pancha mahabhuta, or "five great elements", of Hinduism are:
bhūmi or pṛthvī (earth),
āpas or jala (water),
agní or tejas (fire),
vāyu, vyāna, or vāta (air or wind)
ākāśa, vyom, or śūnya (space or zero) or (aether or void).
They further suggest that all of creation, including the human body, is made of these five essential elements and that upon death, the human body dissolves into these five elements of nature, thereby balancing the cycle of nature.
The five elements are associated with the five senses, and act as the gross medium for the experience of sensations. The basest element, earth, created using all the other elements, can be perceived by all five senses — (i) hearing, (ii) touch, (iii) sight, (iv) taste, and (v) smell. The next higher element, water, has no odor but can be heard, felt, seen and tasted. Next comes fire, which can be heard, felt and seen. Air can be heard and felt. "Akasha" (aether) is beyond the senses of smell, taste, sight, and touch; it being accessible to the sense of hearing alone.
=== Buddhism ===
Buddhism has had a variety of thought about the five elements and their existence and relevance, some of which continue to this day.
In the Pali literature, the mahabhuta ("great elements") or catudhatu ("four elements") are earth, water, fire and air. In early Buddhism, the four elements are a basis for understanding suffering and for liberating oneself from suffering. The earliest Buddhist texts explain that the four primary material elements are solidity, fluidity, temperature, and mobility, characterized as earth, water, fire, and air, respectively.
The Buddha's teaching regarding the four elements is to be understood as the base of all observation of real sensations rather than as a philosophy. The four properties are cohesion (water), solidity or inertia (earth), expansion or vibration (air) and heat or energy content (fire). He promulgated a categorization of mind and matter as composed of eight types of "kalapas" of which the four elements are primary and a secondary group of four are colour, smell, taste, and nutriment which are derivative from the four primaries.
Thanissaro Bhikkhu (1997) renders an extract of Shakyamuni Buddha's from Pali into English thus:
Just as a skilled butcher or his apprentice, having killed a cow, would sit at a crossroads cutting it up into pieces, the monk contemplates this very body — however it stands, however it is disposed — in terms of properties: 'In this body there is the earth property, the liquid property, the fire property, & the wind property.'
Tibetan Buddhist medical literature speaks of the pañca mahābhūta (five elements) or "elemental properties": earth, water, fire, wind, and space. The concept was extensively used in traditional Tibetan medicine. Tibetan Buddhist theology, tantra traditions, and "astrological texts" also spoke of them making up the "environment, [human] bodies," and at the smallest or "subtlest" level of existence, parts of thought and the mind. Also at the subtlest level of existence, the elements exist as "pure natures represented by the five female buddhas", Ākāśadhātviśvarī, Buddhalocanā, Mamakī, Pāṇḍarāvasinī, and Samayatārā, and these pure natures "manifest as the physical properties of earth (solidity), water (fluidity), fire (heat and light), wind (movement and energy), and" the expanse of space. These natures exist as all "qualities" that are in the physical world and take forms in it.
== Ancient African philosophy ==
=== Angola ===
In traditional Bakongo religion, the five elements are incorporated into the Kongo cosmogram. This sacred symbol also depicts the physical world (Nseke), the spiritual world of the ancestors (Mpémba), the Kalûnga line that runs between the two worlds, the circular void that originally formed the two worlds (mbûngi), and the path of the sun. Each element correlates to a period in the life cycle, which the Bakongo people also equate to the four cardinal directions. According to their cosmology, all living things go through this cycle.
Aether represents mbûngi, the circular void that begot the universe.
Air (South) represents musoni, the period of conception that takes place during spring.
Fire (East) represents kala, the period of birth that takes place during summer.
Earth (North) represents tukula, the period of maturity that takes place during fall.
Water (West) represents luvemba, the period of death that takes place during winter.
=== Mali ===
In traditional Bambara spirituality, the Supreme God created four additional essences of himself during creation. Together, these five essences of the deity correlate with the five classical elements.
Koni is the thought and void (aether).
Bemba (also called Pemba) is the god of the sky and air.
Nyale (also called Koroni Koundyé) is the goddess of fire.
Faro is the androgynous god of water.
Ndomadyiri is the god and master of the earth.
== Post-classical history ==
=== Alchemy ===
The elemental system used in medieval alchemy was developed primarily by the anonymous authors of the Arabic works attributed to Pseudo Apollonius of Tyana. This system consisted of the four classical elements of air, earth, fire, and water, in addition to a new theory called the sulphur-mercury theory of metals, which was based on two elements: sulphur, characterizing the principle of combustibility, "the stone which burns"; and mercury, characterizing the principle of metallic properties. They were seen by early alchemists as idealized expressions of irreducible components of the universe and are of larger consideration within philosophical alchemy.
The three metallic principles—sulphur to flammability or combustion, mercury to volatility and stability, and salt to solidity—became the tria prima of the Swiss alchemist Paracelsus. He reasoned that Aristotle's four element theory appeared in bodies as three principles. Paracelsus saw these principles as fundamental and justified them by recourse to the description of how wood burns in fire. Mercury included the cohesive principle, so that when it left in smoke the wood fell apart. Smoke described the volatility (the mercurial principle), the heat-giving flames described flammability (sulphur), and the remnant ash described solidity (salt).
=== Chinese ===
Chinese traditional concepts adopt a set of elements called the 五行 (wuxing, literally "five phases"). These five are Metal or Gold, Wood, Water, Fire, and Earth or Soil. These can be linked to Taiji, Yinyang, Four Symbols, Bagua, Hexagram and I Ching.
Gold (West) represents the lesser yin symbol, autumn, the white color, and White Tiger mascot, Taotie creature (Earth).
Wood (East) represents the lesser yang symbol, spring, the green color, and Azure Dragon mascot, Feilian creature (Wind).
Water (North) represents the great yin symbol, winter, the black color, and Black Turtle-Snake mascot.
Fire (South) represents the great yang symbol, summer, the red color, and Vermilion Bird mascot.
Soil (Center) represents the Qi symbol, intermediate season, the yellow color, and Yellow Dragon mascot, Hundun creature (Void).
=== Japanese ===
Japanese traditions use a set of elements called the 五大 (godai, literally "five great"). These five are earth, water, fire, wind/air, and void. These came from Indian Vastu shastra philosophy and Buddhist beliefs; in addition, the classical Chinese elements (五行, wu xing) are also prominent in Japanese culture, especially to the influential Neo-Confucianists during the medieval Edo period.
Earth represented rocks and stability.
Water represented fluidity and adaptability.
Fire represented life and energy.
Wind represented movement and expansion.
Void or Sky/Heaven represented spirit and creative energy.
=== Medieval Aristotelian philosophy ===
The Islamic philosophers al-Kindi, Avicenna and Fakhr al-Din al-Razi followed Aristotle in connecting the four elements with the four natures heat and cold (the active force), and dryness and moisture (the recipients).
=== Medicine Wheel ===
The medicine wheel symbol is a modern invention attributed to Native American peoples dating to approximately 1972, with the following descriptions and associations being a later addition. The associations with the classical elements are not grounded in traditional Indigenous teachings and the symbol has not been adopted by all Indigenous American nations.
Earth (South) represents the youth cycle, summer, the Indigenous race, and cedar medicine.
Fire (East) represents the birth cycle, spring, the Asian race, and tobacco medicine.
Wind/Air (North) represents the elder cycle, winter, the European race, and sweetgrass medicine.
Water (West) represents the adulthood cycle, autumn, the African race, and sage medicine.
== Modern history ==
=== Chemical element ===
The Aristotelian tradition and medieval alchemy eventually gave rise to modern chemistry, scientific theories and new taxonomies. By the time of Antoine Lavoisier, for example, a list of elements would no longer refer to classical elements. Some modern scientists see a parallel between the classical elements and the four states of matter: solid, liquid, gas and weakly ionized plasma.
Modern science recognizes classes of elementary particles which have no substructure (or rather, particles that are not made of other particles) and composite particles having substructure (particles made of other particles).
=== Western astrology ===
Western astrology uses the four classical elements in connection with astrological charts and horoscopes. The twelve signs of the zodiac are divided into the four elements: Fire signs are Aries, Leo and Sagittarius, Earth signs are Taurus, Virgo and Capricorn, Air signs are Gemini, Libra and Aquarius, and Water signs are Cancer, Scorpio, and Pisces.
=== Criticism ===
The Dutch historian of science Eduard Jan Dijksterhuis writes that the theory of the classical elements "was bound to exercise a really harmful influence. As is now clear, Aristotle, by adopting this theory as the basis of his interpretation of nature and by never losing faith in it, took a course which promised few opportunities and many dangers for science." Bertrand Russell says that Aristotle's thinking became imbued with almost biblical authority in later centuries. So much so that "Ever since the beginning of the seventeenth century, almost every serious intellectual advance has had to begin with an attack on some Aristotelian doctrine".
== See also ==
Arche – Basic proposition or assumption
Bagua – Eight trigrams used in Taoist cosmology
Elemental – Mythic entity personifying one of the classical elements
Jabir ibn Hayyan § The sulfur–mercury theory of metals – Early Islamic alchemy
Periodic table – Tabular arrangement of the chemical elements
Phlogiston theory – Superseded theory of combustion
Prima materia – First or prime matter
Qi – Vital force in traditional Chinese philosophy
States of Matter – Forms which matter can take
== Notes ==
== References ==
=== Bibliography ===
== External links ==
Media related to Classical elements at Wikimedia Commons
Section on 4 elements in Buddhism | Wikipedia/Four_element_theory |
In surface science, surface energy (also interfacial free energy or surface free energy) quantifies the disruption of intermolecular bonds that occurs when a surface is created. In solid-state physics, surfaces must be intrinsically less energetically favorable than the bulk of the material (that is, the atoms on the surface must have more energy than the atoms in the bulk), otherwise there would be a driving force for surfaces to be created, removing the bulk of the material by sublimation. The surface energy may therefore be defined as the excess energy at the surface of a material compared to the bulk, or it is the work required to build an area of a particular surface. Another way to view the surface energy is to relate it to the work required to cut a bulk sample, creating two surfaces. There is "excess energy" as a result of the now-incomplete, unrealized bonding between the two created surfaces.
Cutting a solid body into pieces disrupts its bonds and increases the surface area, and therefore increases surface energy. If the cutting is done reversibly, then conservation of energy means that the energy consumed by the cutting process will be equal to the energy inherent in the two new surfaces created. The unit surface energy of a material would therefore be half of its energy of cohesion, all other things being equal; in practice, this is true only for a surface freshly prepared in vacuum. Surfaces often change their form away from the simple "cleaved bond" model just implied above. They are found to be highly dynamic regions, which readily rearrange or react, so that energy is often reduced by such processes as passivation or adsorption.
== Assessment ==
=== Measurement ===
==== Contact angle ====
The most common way to measure surface energy is through contact angle experiments. In this method, the contact angle of the surface is measured with several liquids, usually water and diiodomethane. Based on the contact angle results and knowing the surface tension of the liquids, the surface energy can be calculated. In practice, this analysis is done automatically by a contact angle meter.
There are several different models for calculating the surface energy based on the contact angle readings. The most commonly used method is OWRK, which requires the use of two probe liquids and gives out as a result the total surface energy as well as divides it into polar and dispersive components.
Contact angle method is the standard surface energy measurement method due to its simplicity, applicability to a wide range of surfaces and quickness. The measurement can be fully automated and is standardized.
In general, as surface energy increases, the contact angle decreases, because more of the liquid is "grabbed" by the surface. Conversely, as surface energy decreases, the contact angle increases, because the surface interacts only weakly with the liquid.
==== Other methods ====
The surface energy of a liquid may be measured by stretching a liquid membrane (which increases the surface area and hence the surface energy). In that case, in order to increase the surface area of a mass of liquid by an amount, δA, a quantity of work, γ δA, is needed (where γ is the surface energy density of the liquid). However, such a method cannot be used to measure the surface energy of a solid because stretching of a solid membrane induces elastic energy in the bulk in addition to increasing the surface energy.
The surface energy of a solid is usually measured at high temperatures. At such temperatures the solid creeps and even though the surface area changes, the volume remains approximately constant. If γ is the surface energy density of a cylindrical rod of radius r and length l at high temperature and a constant uniaxial tension P, then at equilibrium, the variation of the total Helmholtz free energy vanishes and we have
{\displaystyle \delta F=-P~\delta l+\gamma ~\delta A=0\quad \implies \quad \gamma =P{\frac {\delta l}{\delta A}}}
where F is the Helmholtz free energy and A is the surface area of the rod:
{\displaystyle A=2\pi r^{2}+2\pi rl\quad \implies \quad \delta A=4\pi r\delta r+2\pi l\delta r+2\pi r\delta l}
Also, since the volume (V) of the rod remains constant, the variation (δV) of the volume is zero, that is,
{\displaystyle V=\pi r^{2}l{\text{ is constant}}\quad \implies \quad \delta V=2\pi rl\delta r+\pi r^{2}\delta l=0\quad \implies \quad \delta r=-{\frac {r}{2l}}\delta l~.}
Therefore, the surface energy density can be expressed as
{\displaystyle \gamma ={\frac {Pl}{\pi r(l-2r)}}~.}
The surface energy density of the solid can be computed by measuring P, r, and l at equilibrium.
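A short numerical sketch of this zero-creep measurement, with assumed (illustrative) values for the equilibrium load and rod geometry:

```python
import math

P = 3.0e-4   # equilibrium uniaxial tension, N (assumed)
r = 1.0e-4   # rod radius, m (assumed)
l = 2.0e-2   # rod length, m (assumed)

gamma = P * l / (math.pi * r * (l - 2 * r))
print(f"surface energy density: {gamma:.2f} J/m^2")   # ~1 J/m^2, metal-like
```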
This method is valid only if the solid is isotropic, meaning the surface energy is the same for all crystallographic orientations. While this is only strictly true for amorphous solids (glass) and liquids, isotropy is a good approximation for many other materials. In particular, if the sample is polygranular (most metals) or made by powder sintering (most ceramics) this is a good approximation.
In the case of single-crystal materials, such as natural gemstones, anisotropy in the surface energy leads to faceting. The shape of the crystal (assuming equilibrium growth conditions) is related to the surface energy by the Wulff construction. The surface energy of the facets can thus be found to within a scaling constant by measuring the relative sizes of the facets.
=== Calculation ===
==== Deformed solid ====
In the deformation of solids, surface energy can be treated as the "energy required to create one unit of surface area", and is a function of the difference between the total energies of the system before and after the deformation:
{\displaystyle \gamma ={\frac {1}{A}}\left(E_{1}-E_{0}\right)}.
Calculation of surface energy from first principles (for example, density functional theory) is an alternative approach to measurement. Surface energy is estimated from the following variables: width of the d-band, the number of valence d-electrons, and the coordination number of atoms at the surface and in the bulk of the solid.
==== Surface formation energy of a crystalline solid ====
In density functional theory, surface energy can be calculated from the following expression:
{\displaystyle \gamma ={\frac {E_{\text{slab}}-NE_{\text{bulk}}}{2A}}}
where
Eslab is the total energy of surface slab obtained using density functional theory.
N is the number of atoms in the surface slab.
Ebulk is the bulk energy per atom.
A is the surface area.
For a slab, we have two surfaces and they are of the same type, which is reflected by the number 2 in the denominator. To guarantee this, we need to create the slab carefully to make sure that the upper and lower surfaces are of the same type.
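A back-of-the-envelope evaluation of this slab formula is straightforward; the energies and area below are placeholders, not results from an actual DFT calculation:

```python
EV_TO_J = 1.602176634e-19

E_slab = -795.00 * EV_TO_J   # total slab energy, eV -> J (hypothetical)
E_bulk = -8.05 * EV_TO_J     # bulk energy per atom, eV -> J (hypothetical)
N = 99                       # atoms in the slab (hypothetical)
A = 4.0e-19                  # area of one slab face, m^2 (hypothetical)

gamma = (E_slab - N * E_bulk) / (2 * A)
print(f"gamma = {gamma:.3f} J/m^2")
```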
Strength of adhesive contacts is determined by the work of adhesion, which is also called the relative surface energy of two contacting bodies. The relative surface energy can be determined by detaching bodies of well-defined shape made of one material from a substrate made of the second material. For example, the relative surface energy of the interface "acrylic glass – gelatin" is equal to 0.03 N/m.
=== Estimation from the heat of sublimation ===
To estimate the surface energy of a pure, uniform material, an individual region of the material can be modeled as a cube. In order to move a cube from the bulk of a material to the surface, energy is required. This energy cost is incorporated into the surface energy of the material, which is quantified by:
{\displaystyle \gamma ={\frac {\left(z_{\sigma }-z_{\beta }\right){\frac {1}{2}}W_{\text{AA}}}{a_{0}}}}
where zσ and zβ are coordination numbers corresponding to the surface and the bulk regions of the material, and are equal to 5 and 6, respectively; a0 is the surface area of an individual molecule, and WAA is the pairwise intermolecular energy.
Surface area can be determined by squaring the cube root of the volume of the molecule:
{\displaystyle a_{0}=V_{\text{molecule}}^{\frac {2}{3}}=\left({\frac {\bar {M}}{\rho N_{\text{A}}}}\right)^{\frac {2}{3}}}
Here, M̄ corresponds to the molar mass of the molecule, ρ corresponds to the density, and NA is the Avogadro constant.
In order to determine the pairwise intermolecular energy, all intermolecular forces in the material must be broken. This allows thorough investigation of the interactions that occur for single molecules. During sublimation of a substance, intermolecular forces between molecules are broken, resulting in a change in the material from solid to gas. For this reason, considering the enthalpy of sublimation can be useful in determining the pairwise intermolecular energy. Enthalpy of sublimation can be calculated by the following equation:
{\displaystyle \Delta _{\text{sub}}H=-{\frac {1}{2}}W_{\text{AA}}N_{\text{A}}z_{\beta }}
Using empirically tabulated values for enthalpy of sublimation, it is possible to determine the pairwise intermolecular energy. Incorporating this value into the surface energy equation allows for the surface energy to be estimated.
The following equation can be used as a reasonable estimate for surface energy:
{\displaystyle \gamma \approx {\frac {-\Delta _{\text{sub}}H\left(z_{\sigma }-z_{\beta }\right)}{a_{0}N_{\text{A}}z_{\beta }}}}
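Putting the pieces together, the estimate can be evaluated numerically. The sketch below uses rough literature-style values for ice as an assumed example; only the order of magnitude is meaningful:

```python
N_A = 6.02214076e23   # Avogadro constant, 1/mol

dH_sub = 5.1e4        # enthalpy of sublimation of ice, J/mol (approximate)
M = 0.018             # molar mass of water, kg/mol
rho = 917.0           # density of ice, kg/m^3
z_surf, z_bulk = 5, 6 # coordination numbers from the text

a0 = (M / (rho * N_A)) ** (2.0 / 3.0)   # surface area per molecule, m^2
gamma = -dH_sub * (z_surf - z_bulk) / (a0 * N_A * z_bulk)
print(f"a0 = {a0:.2e} m^2, gamma ~ {gamma:.2f} J/m^2")   # ~0.1 J/m^2
```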
== Interfacial energy ==
The presence of an interface influences generally all thermodynamic parameters of a system. There are two models that are commonly used to demonstrate interfacial phenomena: the Gibbs ideal interface model and the Guggenheim model. In order to demonstrate the thermodynamics of an interfacial system using the Gibbs model, the system can be divided into three parts: two immiscible liquids with volumes Vα and Vβ and an infinitesimally thin boundary layer known as the Gibbs dividing plane (σ) separating these two volumes.
The total volume of the system is:
{\displaystyle V=V_{\alpha }+V_{\beta }}
All extensive quantities of the system can be written as a sum of three components: bulk phase α, bulk phase β, and the interface σ. Some examples include internal energy U, the number of molecules of the ith substance ni, and the entropy S.
{\displaystyle {\begin{aligned}U&=U_{\alpha }+U_{\beta }+U_{\sigma }\\N_{i}&=N_{i\alpha }+N_{i\beta }+N_{i\sigma }\\S&=S_{\alpha }+S_{\beta }+S_{\sigma }\end{aligned}}}
While these quantities can vary between each component, the sum within the system remains constant. At the interface, these values may deviate from those present within the bulk phases. The concentration of molecules present at the interface can be defined as:
{\displaystyle N_{i\sigma }=N_{i}-c_{i\alpha }V_{\alpha }-c_{i\beta }V_{\beta }}
where ciα and ciβ represent the concentration of substance i in bulk phase α and β, respectively.
It is beneficial to define a new term interfacial excess Γi which allows us to describe the number of molecules per unit area:
{\displaystyle \Gamma _{i}={\frac {N_{i\sigma }}{A}}}
== Wetting ==
=== Spreading parameter ===
Surface energy comes into play in wetting phenomena. To examine this, consider a drop of liquid on a solid substrate. If the surface energy of the substrate changes upon the addition of the drop, the substrate is said to be wetting. The spreading parameter can be used to mathematically determine this:
{\displaystyle S=\gamma _{\text{s}}-\gamma _{\text{l}}-\gamma _{\text{s-l}}}
where S is the spreading parameter, γs the surface energy of the substrate, γl the surface energy of the liquid, and γs-l the interfacial energy between the substrate and the liquid.
If S < 0, the liquid partially wets the substrate. If S > 0, the liquid completely wets the substrate.
=== Contact angle ===
A way to experimentally determine wetting is to look at the contact angle (θ), which is the angle connecting the solid–liquid interface and the liquid–gas interface (as in the figure).
If θ = 0°, the liquid completely wets the substrate.
If 0° < θ < 90°, high wetting occurs.
If 90° < θ < 180°, low wetting occurs.
If θ = 180°, the liquid does not wet the substrate at all.
The Young equation relates the contact angle to interfacial energy:
{\displaystyle \gamma _{\text{s-g}}=\gamma _{\text{s-l}}+\gamma _{\text{l-g}}\cos \theta }
where γs-g is the interfacial energy between the solid and gas phases, γs-l the interfacial energy between the substrate and the liquid, γl-g is the interfacial energy between the liquid and gas phases, and θ is the contact angle between the solid–liquid and the liquid–gas interface.
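The Young equation can be rearranged to cos θ = (γs-g − γs-l)/γl-g and combined with the spreading parameter from the previous section. A small sketch with assumed interfacial energies (all values are illustrative only):

```python
import math

gamma_sg = 0.060   # solid-gas interfacial energy, J/m^2 (assumed)
gamma_sl = 0.025   # solid-liquid interfacial energy, J/m^2 (assumed)
gamma_lg = 0.072   # liquid-gas interfacial energy, J/m^2 (roughly water in air)

cos_theta = (gamma_sg - gamma_sl) / gamma_lg
theta = math.degrees(math.acos(cos_theta))
print(f"contact angle: {theta:.1f} deg")    # ~61 deg: high wetting

S = gamma_sg - gamma_lg - gamma_sl          # spreading parameter
print("complete wetting" if S > 0 else "partial wetting")
```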
=== Wetting of high- and low-energy substrates ===
The energy of the bulk component of a solid substrate is determined by the types of interactions that hold the substrate together. High-energy substrates are held together by chemical bonds, while low-energy substrates are held together by weaker physical forces. Covalent, ionic, and metallic bonds are much stronger than physical forces such as van der Waals forces and hydrogen bonding. High-energy substrates are more easily wetted than low-energy substrates. In addition, more complete wetting will occur if the substrate has a much higher surface energy than the liquid.
== Modification techniques ==
The most commonly used surface modification protocols are plasma activation, wet chemical treatment, including grafting, and thin-film coating. Surface energy mimicking is a technique that enables merging the device manufacturing and surface modifications, including patterning, into a single processing step using a single device material.
Many techniques can be used to enhance wetting. Surface treatments, such as corona treatment, plasma treatment and acid etching, can be used to increase the surface energy of the substrate. Additives can also be added to the liquid to decrease its surface tension. This technique is employed often in paint formulations to ensure that they will be evenly spread on a surface.
== The Kelvin equation ==
As a result of the surface tension inherent to liquids, curved surfaces are formed in order to minimize the area. This phenomenon arises from the energetic cost of forming a surface. As such the Gibbs free energy of the system is minimized when the surface is curved.
The Kelvin equation is based on thermodynamic principles and is used to describe changes in vapor pressure caused by liquids with curved surfaces. The cause for this change in vapor pressure is the Laplace pressure. The vapor pressure of a drop is higher than that of a planar surface because the increased Laplace pressure causes the molecules to evaporate more easily. Conversely, in liquids surrounding a bubble, the pressure with respect to the inner part of the bubble is reduced, thus making it more difficult for molecules to evaporate. The Kelvin equation can be stated as:
{\displaystyle RT\ln {\frac {P_{0}^{K}}{P_{0}}}=\gamma V_{m}\left({\frac {1}{R_{1}}}+{\frac {1}{R_{2}}}\right)}
where PK0 is the vapor pressure of the curved surface, P0 is the vapor pressure of the flat surface, γ is the surface tension, Vm is the molar volume of the liquid, R is the universal gas constant, T is temperature (in kelvin), and R1 and R2 are the principal radii of curvature of the surface.
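For a spherical droplet both principal radii equal the drop radius r, so the equation predicts a vapour-pressure enhancement of exp(2γVm/(rRT)). A short sketch with water-like parameters (values assumed for illustration):

```python
import math

R = 8.314462618    # gas constant, J/(mol K)
T = 298.15         # temperature, K
gamma = 0.072      # surface tension, J/m^2 (water-like)
Vm = 1.8e-5        # molar volume, m^3/mol (water-like)

for r in (1e-6, 1e-7, 1e-8, 1e-9):             # droplet radii, m
    ratio = math.exp(2.0 * gamma * Vm / (r * R * T))
    print(f"r = {r:.0e} m : PK0/P0 = {ratio:.3f}")
# The effect only becomes significant for radii below ~10 nm.
```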
== Surface modified pigments for coatings ==
Pigments offer great potential in modifying the application properties of a coating. Due to their fine particle size and inherently high surface energy, they often require a surface treatment in order to enhance their ease of dispersion in a liquid medium. A wide variety of surface treatments have been previously used, including the adsorption on the surface of a molecule in the presence of polar groups, monolayers of polymers, and layers of inorganic oxides on the surface of organic pigments.
New surfaces are constantly being created as larger pigment particles get broken down into smaller subparticles. These newly-formed surfaces consequently contribute to larger surface energies, whereby the resulting particles often become cemented together into aggregates. Because particles dispersed in liquid media are in constant thermal or Brownian motion, they exhibit a strong affinity for other pigment particles nearby as they move through the medium and collide. This natural attraction is largely attributed to the powerful short-range van der Waals forces, as an effect of their surface energies.
The chief purpose of pigment dispersion is to break down aggregates and form stable dispersions of optimally sized pigment particles. This process generally involves three distinct stages: wetting, deaggregation, and stabilization. A surface that is easy to wet is desirable when formulating a coating that requires good adhesion and appearance. This also minimizes the risks of surface tension related defects, such as crawling, cratering, and orange peel. This is an essential requirement for pigment dispersions; for wetting to be effective, the surface tension of the pigment's vehicle must be lower than the surface free energy of the pigment. This allows the vehicle to penetrate into the interstices of the pigment aggregates, thus ensuring complete wetting. Finally, the particles are subjected to a repulsive force in order to keep them separated from one another, which lowers the likelihood of flocculation.
Dispersions may become stable through two different phenomena: charge repulsion and steric or entropic repulsion. In charge repulsion, particles that carry like electrostatic charges repel each other. Alternatively, steric or entropic repulsion is a phenomenon used to describe the repelling effect when adsorbed layers of material (such as polymer molecules swollen with solvent) are present on the surface of the pigment particles in dispersion. Only certain portions (anchors) of the polymer molecules are adsorbed, with their corresponding loops and tails extending out into the solution. As the particles approach each other their adsorbed layers become crowded; this provides an effective steric barrier that prevents flocculation. This crowding effect is accompanied by a decrease in entropy, whereby the number of conformations possible for the polymer molecules is reduced in the adsorbed layer. As a result, the energy is increased, which often gives rise to repulsive forces that aid in keeping the particles separated from each other.
== Surface energies of common materials ==
== See also ==
Contact angle
Surface tension
Sessile drop technique
Capillary surface
Wulff Construction
== References ==
== External links ==
What is surface free energy?
Surface Energy and Adhesion | Wikipedia/Surface_free_energy |
In mathematics, a rate is the quotient of two quantities, often represented as a fraction. If the divisor (or fraction denominator) in the rate is equal to one expressed as a single unit, and if it is assumed that this quantity can be changed systematically (i.e., is an independent variable), then the dividend (the fraction numerator) of the rate expresses the corresponding rate of change in the other (dependent) variable. In some cases, it may be regarded as a change in one value caused by a change in another value. For example, acceleration is a change in velocity with respect to time.
Temporal rate is a common type of rate ("per unit of time"), such as speed, heart rate, and flux.
In fact, often rate is a synonym of rhythm or frequency, a count per second (i.e., hertz); e.g., radio frequencies or sample rates.
In describing the units of a rate, the word "per" is used to separate the units of the two measurements used to calculate the rate; for example, a heart rate is expressed as "beats per minute".
Rates that have a non-time divisor or denominator include exchange rates, literacy rates, and electric field (in volts per meter).
A rate defined using two numbers of the same units will result in a dimensionless quantity, also known as ratio or simply as a rate (such as tax rates) or counts (such as literacy rate). Dimensionless rates can be expressed as a percentage (for example, the global literacy rate in 1998 was 80%), fraction, or multiple.
== Properties and examples ==
Rates and ratios often vary with time, location, particular element (or subset) of a set of objects, etc. Thus they are often mathematical functions.
A rate (or ratio) may often be thought of as an output-input ratio, benefit-cost ratio, all considered in the broad sense. For example, miles per hour in transportation is the output (or benefit) in terms of miles of travel, which one gets from spending an hour (a cost in time) of traveling (at this velocity).
A set of sequential indices may be used to enumerate elements (or subsets) of a set of ratios under study. For example, in finance, one could define I by assigning consecutive integers to companies, to political subdivisions (such as states), to different investments, etc. The reason for using indices I is so that a set of ratios (i = 0, ..., N) can be used in an equation to calculate a function of the rates, such as an average of a set of ratios, for example the average velocity found from a set of velocities vi. Finding averages may involve using weighted averages and possibly using the harmonic mean, as in the sketch below.
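For instance, averaging speeds over legs of equal distance calls for the harmonic mean rather than the arithmetic mean. A small sketch; the two speeds are arbitrary example values:

```python
# Same distance driven at 30 mph and then at 60 mph: the average speed for
# the whole trip is the harmonic mean of the two rates, not the arithmetic mean.
speeds = [30.0, 60.0]

harmonic = len(speeds) / sum(1.0 / v for v in speeds)
arithmetic = sum(speeds) / len(speeds)
print(harmonic, arithmetic)   # 40.0 vs 45.0
```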
A ratio r = a/b has both a numerator "a" and a denominator "b". The values of a and b may be real numbers or integers. The inverse of a ratio r is 1/r = b/a. A rate may be equivalently expressed as an inverse of its value if the ratio of its units is also inverse. For example, 5 miles (mi) per kilowatt-hour (kWh) corresponds to 1/5 kWh/mi (or 200 Wh/mi).
Rates are relevant to many aspects of everyday life. For example:
How fast are you driving? The speed of the car (often expressed in miles per hour) is a rate. What interest does your savings account pay you? The amount of interest paid per year is a rate.
== Rate of change ==
Consider the case where the numerator f of a rate is a function f(a) where a happens to be the denominator of the rate δf/δa. A rate of change of f with respect to a (where a is incremented by h) can be formally defined in two ways:
{\displaystyle {\begin{aligned}{\mbox{Average rate of change}}&={\frac {f(x+h)-f(x)}{h}}\\{\mbox{Instantaneous rate of change}}&=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}\end{aligned}}}
where f(x) is the function with respect to x over the interval from a to a+h. An instantaneous rate of change is equivalent to a derivative.
For example, the average speed of a car can be calculated using the total distance traveled between two points, divided by the travel time. In contrast, the instantaneous velocity can be determined by viewing a speedometer.
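The two definitions can be compared numerically: as the increment h shrinks, the average rate of change approaches the instantaneous rate (the derivative). A minimal sketch using f(x) = x² at x = 3, where f′(3) = 6:

```python
def f(x):
    return x ** 2

def average_rate(f, x, h):
    # (f(x + h) - f(x)) / h over the interval [x, x + h]
    return (f(x + h) - f(x)) / h

for h in (1.0, 0.1, 0.001, 1e-6):
    print(h, average_rate(f, 3.0, h))   # tends to 6.0 = f'(3)
```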
== Temporal rates ==
In chemistry and physics:
Speed, the rate of change of position, or the change of position per unit of time
Acceleration, the rate of change in speed, or the change in speed per unit of time
Power, the rate of doing work, or the amount of energy transferred per unit time
Frequency, the number of occurrences of a repeating event per unit of time
Angular frequency and rotation speed, the number of turns per unit of time
Reaction rate, the speed at which chemical reactions occur
Volumetric flow rate, the volume of fluid which passes through a given surface per unit of time; e.g., cubic meters per second
=== Counts-per-time rates ===
Radioactive decay, the amount of radioactive material in which one nucleus decays per second, measured in becquerels
In computing:
Bit rate, the number of bits that are conveyed or processed by a computer per unit of time
Symbol rate, the number of symbol changes (signaling events) made to the transmission medium per second
Sampling rate, the number of samples (signal measurements) per second
Miscellaneous definitions:
Rate of reinforcement, number of reinforcements per unit of time, usually per minute
Heart rate, usually measured in beats per minute
== Economics/finance rates/ratios ==
Exchange rate, how much one currency is worth in terms of the other
Inflation rate, the ratio of the change in the general price level during a year to the starting price level
Interest rate, the price a borrower pays for the use of the money they do not own (ratio of payment to amount borrowed)
Price–earnings ratio, market price per share of stock divided by annual earnings per share
Rate of return, the ratio of money gained or lost on an investment relative to the amount of money invested
Tax rate, the tax amount divided by the taxable income
Unemployment rate, the ratio of the number of people who are unemployed to the number in the labor force
Wage rate, the amount paid for working a given amount of time (or doing a standard amount of accomplished work) (ratio of payment to time)
== Other rates ==
Birth rate, and mortality rate, the number of births or deaths scaled to the size of that population, per unit of time
Literacy rate, the proportion of the population over age fifteen that can read and write
Sex ratio or gender ratio, the ratio of males to females in a population
== See also ==
Derivative
Gradient
Hertz
Slope
== References == | Wikipedia/Temporal_rate |
In physics and engineering, in particular fluid dynamics, the volumetric flow rate (also known as volume flow rate, or volume velocity) is the volume of fluid which passes per unit time; usually it is represented by the symbol Q (sometimes {\displaystyle {\dot {V}}}). Its SI unit is cubic metres per second (m³/s).
It contrasts with mass flow rate, which is the other main type of fluid flow rate. In most contexts a mention of "rate of fluid flow" is likely to refer to the volumetric rate. In hydrometry, the volumetric flow rate is known as discharge.
The volumetric flow rate across a unit area is called volumetric flux, as defined by Darcy's law and represented by the symbol q. Conversely, the integration of a volumetric flux over a given area gives the volumetric flow rate.
== Units ==
The SI unit is cubic metres per second (m³/s). Another unit used is standard cubic centimetres per minute (SCCM). In US customary units and imperial units, volumetric flow rate is often expressed as cubic feet per second (ft³/s) or gallons per minute (either US or imperial definitions). In oceanography, the sverdrup (symbol: Sv, not to be confused with the sievert) is a non-SI metric unit of flow, with 1 Sv equal to 1 million cubic metres per second (35,000,000 cu ft/s); it is equivalent to the SI derived unit cubic hectometre per second (symbol: hm³/s or hm³⋅s⁻¹). Named after Harald Sverdrup, it is used almost exclusively in oceanography to measure the volumetric rate of transport of ocean currents.
== Fundamental definition ==
Volumetric flow rate is defined by the limit
{\displaystyle Q={\dot {V}}=\lim \limits _{\Delta t\to 0}{\frac {\Delta V}{\Delta t}}={\frac {\mathrm {d} V}{\mathrm {d} t}},}
that is, the flow of volume of fluid V through a surface per unit time t.
Since this is only the time derivative of volume, a scalar quantity, the volumetric flow rate is also a scalar quantity. The change in volume is the amount that flows after crossing the boundary for some time duration, not simply the initial amount of volume at the boundary minus the final amount at the boundary, since the change in volume flowing through the area would be zero for steady flow.
IUPAC prefers the notation {\displaystyle q_{v}} and {\displaystyle q_{m}} for volumetric flow and mass flow respectively, to distinguish from the notation {\displaystyle Q} for heat.
== Alternative definition ==
Volumetric flow rate can also be defined by
{\displaystyle Q=\mathbf {v} \cdot \mathbf {A} ,}
where
v = flow velocity,
A = cross-sectional vector area/surface.
The above equation is only true for uniform or homogeneous flow velocity and a flat or planar cross section. In general, including spatially variable or non-homogeneous flow velocity and curved surfaces, the equation becomes a surface integral:
{\displaystyle Q=\iint _{A}\mathbf {v} \cdot \mathrm {d} \mathbf {A} .}
This is the definition used in practice. The area required to calculate the volumetric flow rate is real or imaginary, flat or curved, either as a cross-sectional area or a surface. The vector area is a combination of the magnitude of the area through which the volume passes, A, and a unit vector normal to the area, {\displaystyle {\hat {\mathbf {n} }}}. The relation is {\displaystyle \mathbf {A} =A{\hat {\mathbf {n} }}}.
=== Derivation ===
The reason for the dot product is as follows. The only volume flowing through the cross-section is the amount normal to the area, that is, parallel to the unit normal. This amount is
{\displaystyle Q=vA\cos \theta ,}
where θ is the angle between the unit normal {\displaystyle {\hat {\mathbf {n} }}} and the velocity vector v of the substance elements. The amount passing through the cross-section is reduced by the factor cos θ. As θ increases less volume passes through. Substance which passes tangential to the area, that is perpendicular to the unit normal, does not pass through the area. This occurs when θ = π/2 and so this amount of the volumetric flow rate is zero:
{\displaystyle Q=vA\cos \left({\frac {\pi }{2}}\right)=0.}
These results are equivalent to the dot product between velocity and the normal direction to the area.
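A short numpy sketch of this dot-product definition, with made-up numbers chosen so the cos θ reduction is visible:

```python
import numpy as np

v = np.array([3.0, 0.0, 0.0])        # uniform flow velocity, m/s
theta = np.pi / 4                    # area tilted 45 degrees to the flow
n_hat = np.array([np.cos(theta), np.sin(theta), 0.0])  # unit normal
A = 2.0 * n_hat                      # vector area for a 2 m^2 surface

Q = np.dot(v, A)                     # Q = v . A = v*A*cos(theta)
print(Q)                             # 3 * 2 * cos(45 deg) ~ 4.243 m^3/s
```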
== Relationship with mass flow rate ==
When the mass flow rate is known, and the density can be assumed constant, this is an easy way to get Q:
{\displaystyle Q={\frac {\dot {m}}{\rho }},}
where
ṁ = mass flow rate (in kg/s),
ρ = density (in kg/m³).
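A one-liner in the same spirit, assuming constant density as the text requires (the numbers are illustrative):

```python
def volumetric_flow_rate(m_dot, rho):
    """Q = m_dot / rho, valid when the density rho is constant."""
    return m_dot / rho

print(volumetric_flow_rate(2.0, 1000.0))  # 2 kg/s of water -> 0.002 m^3/s
```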
== Related quantities ==
In internal combustion engines, the time area integral is considered over the range of valve opening. The time lift integral is given by
{\displaystyle \int L\,\mathrm {d} \theta ={\frac {RT}{2\pi }}(\cos \theta _{2}-\cos \theta _{1})+{\frac {rT}{2\pi }}(\theta _{2}-\theta _{1}),}
where T is the time per revolution, R is the distance from the camshaft centreline to the cam tip, r is the radius of the camshaft (that is, R − r is the maximum lift), θ₁ is the angle where opening begins, and θ₂ is where the valve closes (seconds, mm, radians). This has to be factored by the width (circumference) of the valve throat. The answer is usually related to the cylinder's swept volume.
== Some key examples ==
In cardiac physiology: the cardiac output
In hydrology: discharge
List of rivers by discharge
List of waterfalls by flow rate
Weir § Flow measurement
In dust collection systems: the air-to-cloth ratio
== See also ==
Bulk velocity
Flow measurement
Flowmeter
Mass flow rate
Orifice plate
Poiseuille's law
Stokes flow
== References == | Wikipedia/Volumetric_flow_rate |
A mass flow controller (MFC) is a device used to measure and control the flow of liquids and gases. A mass flow controller is designed and calibrated to control a specific type of liquid or gas at a particular range of flow rates. The MFC can be given a setpoint from 0 to 100% of its full scale range but is typically operated at 10 to 90% of full scale, where the best accuracy is achieved. The device will then control the rate of flow to the given setpoint. MFCs can be either analog or digital. A digital flow controller is usually able to control more than one type of fluid, whereas an analog controller is limited to the fluid for which it was calibrated.
All mass flow controllers have an inlet port, an outlet port, a mass flow sensor and a proportional control valve. The MFC is fitted with a closed loop control system which is given an input signal by the operator (or an external circuit/computer) that it compares to the value from the mass flow sensor and adjusts the proportional valve accordingly to achieve the required flow. The flow rate is specified as a percentage of its calibrated full scale flow and is supplied to the MFC as a voltage signal.
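The closed-loop behaviour described above can be sketched as a toy proportional-integral loop; this is a hedged illustration of the idea, not any vendor's firmware, and the sensor and valve callables are stand-ins:

```python
def run_mfc_loop(setpoint_pct, read_flow_pct, command_valve_pct,
                 kp=0.5, ki=0.1, dt=0.1, steps=1000):
    """Toy PI loop driving measured flow (in % of full scale) to a setpoint.

    read_flow_pct     -- callable returning the mass flow sensor reading
    command_valve_pct -- callable setting the proportional valve opening
    """
    integral = 0.0
    for _ in range(steps):
        error = setpoint_pct - read_flow_pct()
        integral += error * dt
        command = kp * error + ki * integral
        command_valve_pct(max(0.0, min(100.0, command)))  # clamp to 0-100%
```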
Mass flow controllers require the supply gas or liquid to be within a specific pressure range. Low pressure will starve the MFC of fluid and cause it to fail to achieve its setpoint, while excessively high pressure may cause erratic flow rates. Many different measurement technologies can be used to meter the flow and hence to control it; these define the types of mass flow controller, and include differential pressure (ΔP), differential temperature (ΔT), Coriolis, ultrasonic, electromagnetic, and turbine devices.
== See also ==
== References ==
== External links ==
How a Mass Flow Controller works video
Thermal Mass Flow Meter / Controller (Principle of operation) video
MEMS-based thermal mass flow controller video | Wikipedia/Mass_flow_controller |
The Journal of Experimental and Theoretical Physics (JETP) [Russian: Журнал Экспериментальной и Теоретической Физики (ЖЭТФ), or Zhurnal Éksperimental'noĭ i Teoreticheskoĭ Fiziki (ZhÉTF)] is a peer-reviewed Russian bilingual scientific journal covering all areas of experimental and theoretical physics. For example, coverage includes solid-state physics, elementary particles, and cosmology. The journal is published simultaneously in both Russian and English languages.
The editor-in-chief is Alexander F. Andreev. The journal is a continuation of Soviet Physics JETP (1931–1992), whose English translation began in 1955.
== Indexing ==
JETP is indexed in:
== References ==
== External links ==
J. Exp. Theor. Phys. website (jetp.ras.ru) (in English)
J. Exp. Theor. Phys. website (Maik)
J. Exp. Theor. Phys. website (Springer) | Wikipedia/Journal_of_Experimental_and_Theoretical_Physics |
This list includes well-known general theories in science and pre-scientific natural philosophy and natural history that have since been superseded by other scientific theories. Many discarded explanations were once supported by a scientific consensus, but replaced after more empirical information became available that identified flaws and prompted new theories which better explain the available data. Pre-modern explanations originated before the scientific method, with varying degrees of empirical support.
Some scientific theories are discarded in their entirety, such as the replacement of the phlogiston theory by energy and thermodynamics. Some theories known to be incomplete or in some ways incorrect are still used. For example, Newtonian classical mechanics is accurate enough for practical calculations at everyday distances and velocities, and it is still taught in schools. The more complicated relativistic mechanics must be used for long distances and velocities nearing the speed of light, and quantum mechanics for very small distances and objects.
Some aspects of discarded theories are reused in modern explanations. For example, miasma theory proposed that all diseases were transmitted by "bad air". The modern germ theory of disease has found that diseases are caused by microorganisms, which can be transmitted by a variety of routes, including touching a contaminated object, blood, and contaminated water. Malaria was discovered to be a mosquito-borne disease, explaining why avoiding the "bad air" near swamps prevented it. Increasing ventilation of fresh air, one of the remedies proposed by miasma theory, does remain useful in some circumstances to expel germs spread by airborne transmission, such as SARS-CoV-2.
Some theories originate in, or are perpetuated by, pseudoscience, which claims to be both scientific and factual, but fails to follow the scientific method. Scientific theories are testable and make falsifiable predictions. Thus, it can be a mark of good science if a discipline has a growing list of superseded theories, and conversely, a lack of superseded theories can indicate problems in following the use of the scientific method. Fringe science includes theories that are not currently supported by a consensus in the mainstream scientific community, either because they never had sufficient empirical support, because they were previously mainstream but later disproven, or because they are preliminary theories also known as protoscience which go on to become mainstream after empirical confirmation. Some theories, such as Lysenkoism, race science or female hysteria have been generated for political rather than empirical reasons and promoted by force.
== Science ==
=== Discarded scientific theories ===
==== Biology ====
Spontaneous generation – a principle regarding the spontaneous generation of complex life from inanimate matter, which held that this process was a commonplace and everyday occurrence, as distinguished from univocal generation, or reproduction from parent(s). Falsified by an experiment by Louis Pasteur: where apparently spontaneous generation of microorganisms occurred, it did not happen on repeating the process without access to unfiltered air; on then opening the apparatus to the atmosphere, bacterial growth started.
Transmutation of species, Inheritance of acquired characteristics, Lysenkoism – first theories of evolution. Not supported by experiment, and rendered obsolete by Darwinian evolution and Mendelian genetics, combined in the modern synthesis which finds that genes in the form of DNA are the primary way parental characteristics are passed to descendants. Discoveries in epigenetics have shown that in some very limited ways, the life experiences of organisms can affect the development of their children.
Vitalism – the theory that living things are alive because of some "vital force" independent of matter, as opposed to because of some appropriate assembly of matter. It was gradually discredited by the rise of organic chemistry, biochemistry, and molecular biology, fields that failed to discover any "vital force." Friedrich Wöhler's synthesis of urea from ammonium cyanate was only one step in a long road, not a great refutation.
Maternal impression – the theory that the mother's thoughts created birth defects. No experimental support (a notion rather than a theory), and rendered obsolete by genetic theory (see also fetal origins of adult disease, genomic imprinting).
Preformationism – the theory that all organisms have existed since the beginning of life, and that gametes contain a miniature but complete preformed individual, and in the case of humans, a homunculus. No support when microscopy became available. Rendered obsolete by cytology, discovery of DNA, and atomic theory.
Recapitulation theory – the theory that "ontogeny recapitulates phylogeny". See Baer's laws of embryology.
Telegony – the theory that an offspring can inherit characteristics from a previous mate of its mother's as well as its actual parents, often associated with racism.
Out of Asia theory of human origin – The majority view is of a recent African origin of modern humans, although a multiregional origin of modern humans hypothesis has much support (which incorporates past evidence of Asian origins).
Scientific racism – the theory that humanity consists of physically discrete superior or inferior races. Rendered obsolete by Human evolutionary genetics and modern anthropology.
Germ line theory, explained immunoglobulin diversity by proposing that each antibody was encoded in a separate germline gene.
==== Chemistry ====
Energeticism – a theory that attempted to reinterpret all chemistry in terms of energy, rejecting the concept of atoms.
Caloric theory – the theory that a self-repelling fluid called "caloric" was the substance of heat. Rendered obsolete by the mechanical theory of heat. Origin of the calorie's name, a unit of energy still used for nutrition in some countries.
Classical elements – All matter was once thought composed of various combinations of classical elements (most famously air, earth, fire, and water). Antoine Lavoisier finally refuted this in his 1789 publication, Elements of Chemistry, which contained the first modern list of chemical elements.
Electrochemical dualism – the theory that all molecules are salts composed of basic and acidic oxides
Phlogiston theory – The theory that combustible goods contain a substance called "phlogiston" that entered air during combustion. Replaced by Lavoisier's work on oxidation.
Point 2 of Dalton's Atomic Theory was rendered obsolete by discovery of isotopes, and point 3 by discovery of subatomic particles and nuclear reactions.
Radical theory – the theory that organic compounds exist as combinations of radicals that can be exchanged in chemical reactions just as chemical elements can be interchanged in inorganic compounds.
Vitalism – See section on Biology.
Nascent state refers to the form of a chemical element (or sometimes compound) at the instant of its liberation or formation. Often encountered are atomic oxygen (Onasc), nascent hydrogen (Hnasc), chlorine (Clnasc), and bromine (Brnasc).
Polywater, a hypothesized polymer form of water, the properties of which actually arose from contaminants such as sweat.
==== Physics ====
Corpuscularianism – theory that matter, gravity, light and magnetism is composed of tiny corpuscles
Corpuscular theory of light
Emission theory of vision – the belief that vision is caused by rays emanating from the eyes was superseded by the intro-mission approach and more complex theories of vision.
Aristotelian physics – superseded by Newtonian physics.
Ptolemy's law of refraction, replaced by Snell's law.
Luminiferous aether – failed to be detected by the sufficiently sensitive Michelson–Morley experiment, made obsolete by Einstein's work.
Caloric theory – Lavoisier's successor to phlogiston, discredited by Rumford's and Joule's work.
Contact tension – a theory on the source of electricity.
Vis viva – Gottfried Leibniz's elementary and limited early formulation of the principle of conservation of energy.
Horror vacui/plenum – concept that nature 'abhors' the existence of vacuum.
Imponderable fluid – various fluids used to explain the nature of heat and electricity in terms of undetectable fluids
"Purely electrostatic" theories of the generation of voltage differences.
Emitter theory – another now-obsolete theory of light propagation.
Electromotive force § History – the original theory by Alessandro Volta misunderstood the active agent of a voltaic cell to be a new type of force acting on the charges generated merely from contact of the electrodes. Michael Faraday later correctly explained the active agent was chemical reactions.
Line of force – pre-existing theory to field.
Balance of nature – superseded by catastrophe theory and chaos theory.
Progression of atomic theory
Democritus, the originator of atomic theory, held that everything is composed of atoms that are indestructible. His claim that atoms are indestructible is not the reason it is superseded—as it was later scientists who identified the concept of atoms with particles, which later science showed are destructible. Democritus' theory is superseded because of his position that several kinds of atoms explain pure materials like water or iron, and characteristics that science now identifies with molecules rather than with indestructible primary particles. Democritus also held that between atoms, an empty space of a different nature than atoms allowed atoms to move. This view on space and matter persisted until Einstein described spacetime as being relative and connected to matter.
John Dalton's model of the atom, which held that atoms are indivisible and indestructible (superseded by nuclear physics) and that all atoms of a given element are identical in mass (superseded by discovery of atomic isotopes).
Plum pudding model of the atom—assuming the protons and electrons were mixed together in a single mass
Rutherford model of the atom with an impenetrable nucleus orbited by electrons
Bohr model with quantized orbits
Electron cloud model following the development of quantum mechanics in 1925 and the eventual atomic orbital models derived from the quantum mechanical solution to the hydrogen atom
==== Astronomy and cosmology ====
Ptolemaic system – superseded by Nicolaus Copernicus' heliocentric model.
Geocentric universe – superseded by Copernicus
Copernican system – superseded by Tychonic system
Heliocentric universe – made obsolete by discovery of the structure of the Milky Way and the redshift of most galaxies. Heliocentrism only applies to the selected Solar System, and only approximately, since the Sun's center is not at the Solar System's center of mass. Superseded by barycentric coordinates.
Aristotelian Dynamics of the celestial spheres superseded by the Elliptic orbit and Kepler's laws of planetary motion
Tychonic system – superseded by Newton's laws of motion
Luminiferous aether theory
Static Universe theory
Steady state theory, a model developed by Hermann Bondi, Thomas Gold, and Fred Hoyle whereby the expanding universe was in a steady state, and had no beginning. It was a competitor of the Big Bang model until evidence supporting the Big Bang and falsifying the steady state was found.
==== Geography and climate ====
Buenaventura River
Flat Earth theory, generally known to be false among educated people in various ancient and medieval societies
Terra Australis, which technically is Antarctica, but the original idea was based on an unproven belief that land in the Northern hemisphere must have a Southern counterpart for balance.
Hollow Earth theory
The Open Polar Sea, an ice-free sea once supposed to surround the North Pole
Rain follows the plow – the theory that human settlement increases rainfall in arid regions (only true to the extent that crop fields evapotranspirate more than barren wilderness)
Island of California – the theory that California was not part of mainland North America but rather a large island
Inland sea of Australia
Pre-modern environmental determinism (as explanations for moral behavior, as opposed to modern theories such as factor endowments, state formation, and theories of the social effects of global warming)
Climatic determinism
Topographic determinism
Moral geography
Cultural acclimatization
Global cooling
Drainage divides as always being made up by hills and mountains.
Ancient and medieval concepts surrounding the antipodes, including the related theories of antichthones and the alleged existence of a torrid zone
==== Geology ====
Abiogenic petroleum origin – While some petroleum or natural gas is almost certainly abiogenic, the vast majority has origins as living organisms
Catastrophism was largely replaced by uniformitarianism and neocatastrophism
Cryptoexplosion craters, now discarded in favour of impact craters and ordinary volcanism.
Flood geology replaced by modern geology and stratigraphy
Neptunism replaced by plutonism and volcanism
Granitization, a discredited alternative to a magmatic origin of granites
Monoglaciation, the idea that the Earth had a single ice age, replaced by polyglaciation, the idea that the Earth has gone through several periods of widespread ice cover.
Oscillation theory of land-level rise and subsidence during deglaciation
The following were superseded by plate tectonics:
Elevation crater theory
Expanding Earth theory (superseded by subduction)
Contracting Earth
Geosyncline theory
Haarman's Oscillation theory
Various lost landmasses including Lemuria
==== Psychology ====
Pure behaviorist explanations for language acquisition in infancy, falsified by the study of cognitive adaptations for language.
Psychomotor patterning, a pseudoscientific approach to the treatment of intellectual disabilities, brain injury, learning disabilities, and other cognitive diseases.
==== Medicine ====
Theory of the four bodily humours (see also Four temperaments)
Heroic medicine – a therapeutic method derived from the belief in bodily humour imbalances as the cause of ailments.
Miasma theory of disease – the theory that diseases are caused by "bad air". No experimental support, and rendered obsolete by the germ theory of disease.
Phrenology – a theory of highly localised brain function popular in 19th century medicine.
Homeopathy – a theory according to which a disease can be cured by infinitesimal doses of the substance that caused it
Eclectic medicine – transformed into alternative medicine, and is no longer considered a scientific theory
Physiognomy, related to phrenology, held that inner character was strongly correlated with physical appearance
Tooth worm, an erroneous theory of the cause of dental caries, periodontitis, and toothaches
=== Obsolete branches of enquiry ===
Alchemy, which led to the development of chemistry
Astrology, which led to the development of astronomy
Phrenology, a pseudoscience
Numerology, a pseudoscience
=== Theories now considered incomplete ===
These are theories that are no longer considered the most complete representation of reality but remain useful in particular domains or under certain conditions. For some theories, a more complete model is known, but for practical use the coarser approximation provides good results with much less calculation.
Newtonian mechanics was extended by the theory of relativity and by quantum mechanics. Relativistic corrections to Newtonian mechanics are immeasurably small at velocities not approaching the speed of light, and quantum corrections are usually negligible at atomic or larger scales; Newtonian mechanics is totally satisfactory in engineering and physics under most circumstances. The anomalous perihelion precession of Mercury was the first observational evidence that relativity was a more accurate model than Newtonian gravity.
Classical electrodynamics is a very close approximation to quantum electrodynamics except at very small scales and low field strengths.
The Bohr model of the atom was extended by the quantum mechanical model of the atom.
The formula known as Newton's sine-square law of air resistance for the force of a fluid on a body was not actually formulated by Newton but by others using a method of calculation used by Newton; it has been found incorrect and not useful except for high-speed hypersonic flow.
The once-popular cycle of erosion is now considered one of many possibilities for landscape evolution.
The theory of continental drift was incorporated into and improved upon by plate tectonics.
Rational choice theory as a model of human behavior
Mendelian genetics, classical genetics, Boveri–Sutton chromosome theory – first genetic theories. Not invalidated as such, but subsumed into molecular genetics.
== See also ==
Pseudoscience
Scientific theory
Philosophy of science
Protoscience
Fringe science
Pathological science
Paradigm shift
History of evolutionary thought
Creation–evolution controversy
=== Lists ===
List of common misconceptions, including those about scientific subjects
List of discredited substances
List of experiments
List of topics characterized as pseudoscience
List of incorrect mathematical proofs
== Notes ==
== References ==
== External links ==
Media related to Obsolete scientific theories at Wikimedia Commons | Wikipedia/Obsolete_scientific_theory |
The concept entropy was first developed by German physicist Rudolf Clausius in the mid-nineteenth century as a thermodynamic property that predicts that certain spontaneous processes are irreversible or impossible. In statistical mechanics, entropy is formulated as a statistical property using probability theory. The statistical entropy perspective was introduced in 1870 by Austrian physicist Ludwig Boltzmann, who established a new field of physics that provided the descriptive linkage between the macroscopic observation of nature and the microscopic view based on the rigorous treatment of large ensembles of microscopic states that constitute thermodynamic systems.
== Boltzmann's principle ==
Ludwig Boltzmann defined entropy as a measure of the number of possible microscopic states (microstates) of a system in thermodynamic equilibrium, consistent with its macroscopic thermodynamic properties, which constitute the macrostate of the system. A useful illustration is the example of a sample of gas contained in a container. The easily measurable parameters volume, pressure, and temperature of the gas describe its macroscopic condition (state). At a microscopic level, the gas consists of a vast number of freely moving atoms or molecules, which randomly collide with one another and with the walls of the container. The collisions with the walls produce the macroscopic pressure of the gas, which illustrates the connection between microscopic and macroscopic phenomena.
A microstate of the system is a description of the positions and momenta of all its particles. The large number of particles of the gas provides an infinite number of possible microstates for the sample, but collectively they exhibit a well-defined average of configuration, which is exhibited as the macrostate of the system, to which each individual microstate contribution is negligibly small. The ensemble of microstates comprises a statistical distribution of probability for each microstate, and the group of most probable configurations accounts for the macroscopic state. Therefore, the system can be described as a whole by only a few macroscopic parameters, called the thermodynamic variables: the total energy E, volume V, pressure P, temperature T, and so forth. However, this description is relatively simple only when the system is in a state of equilibrium.
Equilibrium may be illustrated with a simple example of a drop of food coloring falling into a glass of water. The dye diffuses in a complicated manner, which is difficult to precisely predict. However, after sufficient time has passed, the system reaches a uniform color, a state much easier to describe and explain.
Boltzmann formulated a simple relationship between entropy and the number of possible microstates of a system, which is denoted by the symbol Ω. The entropy S is proportional to the natural logarithm of this number:
{\displaystyle S=k_{\text{B}}\ln \Omega }
The proportionality constant kB is one of the fundamental constants of physics and is named the Boltzmann constant in honor of its discoverer.
Boltzmann's entropy describes the system when all the accessible microstates are equally likely. It is the configuration corresponding to the maximum of entropy at equilibrium. The randomness or disorder is maximal, and so is the lack of distinction (or information) of each microstate.
Entropy is a thermodynamic property just like pressure, volume, or temperature. Therefore, it connects the microscopic and the macroscopic world view.
Boltzmann's principle is regarded as the foundation of statistical mechanics.
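As a numerical illustration of Boltzmann's principle (the toy microstate count below is our own, not from the article):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(omega):
    """S = k_B ln(Omega) for Omega equally likely microstates."""
    return k_B * math.log(omega)

# e.g. 100 independent two-state particles have Omega = 2**100 microstates:
print(boltzmann_entropy(2 ** 100))  # ~ 9.57e-22 J/K
```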
== Gibbs entropy formula ==
The macroscopic state of a system is characterized by a distribution on the microstates. The entropy of this distribution is given by the Gibbs entropy formula, named after J. Willard Gibbs. For a classical system (i.e., a collection of classical particles) with a discrete set of microstates, if Ei is the energy of microstate i, and pi is the probability that it occurs during the system's fluctuations, then the entropy of the system is
{\displaystyle S=-k_{\text{B}}\,\sum _{i}p_{i}\ln(p_{i})}
The quantity {\displaystyle k_{\text{B}}} is the Boltzmann constant, a multiplier of the summation expression. The summation is dimensionless, since the value {\displaystyle p_{i}} is a probability and therefore dimensionless, and ln is the natural logarithm. Hence the SI unit on both sides of the equation is that of heat capacity:
{\displaystyle [S]=[k_{\text{B}}]=\mathrm {\frac {J}{K}} }
This definition remains meaningful even when the system is far away from equilibrium. Other definitions assume that the system is in thermal equilibrium, either as an isolated system, or as a system in exchange with its surroundings. The set of microstates (with probability distribution) over which the sum is found is called a statistical ensemble. Each type of statistical ensemble (micro-canonical, canonical, grand-canonical, etc.) describes a different configuration of the system's exchanges with the outside, varying from a completely isolated system to a system that can exchange one or more quantities with a reservoir, like energy, volume or molecules. In every ensemble, the equilibrium configuration of the system is dictated by the maximization of the entropy of the union of the system and its reservoir, according to the second law of thermodynamics (see the statistical mechanics article).
Neglecting correlations (or, more generally, statistical dependencies) between the states of individual particles will lead to an incorrect probability distribution on the microstates and hence to an overestimate of the entropy. Such correlations occur in any system with nontrivially interacting particles, that is, in all systems more complex than an ideal gas.
This S is almost universally called simply the entropy. It can also be called the statistical entropy or the thermodynamic entropy without changing the meaning. Note the above expression of the statistical entropy is a discretized version of Shannon entropy. The von Neumann entropy formula is an extension of the Gibbs entropy formula to the quantum-mechanical case.
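A small sketch showing the Gibbs formula in code and its reduction to the Boltzmann form for a uniform distribution (the helper is ours, purely illustrative):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def gibbs_entropy(probs):
    """S = -k_B * sum_i p_i ln(p_i) over a microstate distribution."""
    return -k_B * sum(p * math.log(p) for p in probs if p > 0)

# For p_i = 1/Omega this reduces to the Boltzmann entropy k_B ln(Omega):
omega = 8
print(gibbs_entropy([1 / omega] * omega))  # equals k_B * ln(8)
print(k_B * math.log(omega))
```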
It has been shown that the Gibbs entropy is equal to the classical "heat engine" entropy characterized by
{\displaystyle dS={\delta Q}/{T}}
, and the generalized Boltzmann distribution is a sufficient and necessary condition for this equivalence. Furthermore, the Gibbs entropy is the only entropy measure that is equivalent to the classical "heat engine" entropy under the following postulates:
=== Ensembles ===
The various ensembles used in statistical thermodynamics are linked to the entropy by the following relations:
{\displaystyle S=k_{\text{B}}\ln \Omega _{\text{mic}}=k_{\text{B}}(\ln Z_{\text{can}}+\beta {\bar {E}})=k_{\text{B}}(\ln {\mathcal {Z}}_{\text{gr}}+\beta ({\bar {E}}-\mu {\bar {N}}))}
where
{\displaystyle \Omega _{\text{mic}}} is the microcanonical partition function,
{\displaystyle Z_{\text{can}}} is the canonical partition function, and
{\displaystyle {\mathcal {Z}}_{\text{gr}}} is the grand canonical partition function.
== Order through chaos and the second law of thermodynamics ==
We can think of Ω as a measure of our lack of knowledge about a system. To illustrate this idea, consider a set of 100 coins, each of which is either heads up or tails up. In this example, let us suppose that the macrostates are specified by the total number of heads and tails, while the microstates are specified by the facings of each individual coin (i.e., the exact order in which heads and tails occur). For the macrostates of 100 heads or 100 tails, there is exactly one possible configuration, so our knowledge of the system is complete. At the opposite extreme, the macrostate which gives us the least knowledge about the system consists of 50 heads and 50 tails in any order, for which there are 100891344545564193334812497256 (100 choose 50) ≈ 10²⁹ possible microstates.
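These counts are easy to reproduce (a quick check, nothing more):

```python
from math import comb, log2

print(comb(100, 100))       # 1: the all-heads macrostate has one microstate
print(comb(100, 50))        # 100891344545564193334812497256 ~ 1.01e29
print(log2(comb(100, 50)))  # ~ 96.3 bits of missing information
```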
Even when a system is entirely isolated from external influences, its microstate is constantly changing. For instance, the particles in a gas are constantly moving, and thus occupy a different position at each moment of time; their momenta are also constantly changing as they collide with each other or with the container walls. Suppose we prepare the system in an artificially highly ordered equilibrium state. For instance, imagine dividing a container with a partition and placing a gas on one side of the partition, with a vacuum on the other side. If we remove the partition and watch the subsequent behavior of the gas, we will find that its microstate evolves according to some chaotic and unpredictable pattern, and that on average these microstates will correspond to a more disordered macrostate than before. It is possible, but extremely unlikely, for the gas molecules to bounce off one another in such a way that they remain in one half of the container. It is overwhelmingly probable for the gas to spread out to fill the container evenly, which is the new equilibrium macrostate of the system.
This is an example illustrating the second law of thermodynamics:
the total entropy of any isolated thermodynamic system tends to increase over time, approaching a maximum value.
Since its discovery, this idea has been the focus of a great deal of thought, some of it confused. A chief point of confusion is the fact that the Second Law applies only to isolated systems. For example, the Earth is not an isolated system because it is constantly receiving energy in the form of sunlight. In contrast, the universe may be considered an isolated system, so that its total entropy is constantly increasing.
== Counting of microstates ==
In classical statistical mechanics, the number of microstates is actually uncountably infinite, since the properties of classical systems are continuous. For example, a microstate of a classical ideal gas is specified by the positions and momenta of all the atoms, which range continuously over the real numbers. If we want to define Ω, we have to come up with a method of grouping the microstates together to obtain a countable set. This procedure is known as coarse graining. In the case of the ideal gas, we count two states of an atom as the "same" state if their positions and momenta are within δx and δp of each other. Since the values of δx and δp can be chosen arbitrarily, the entropy is not uniquely defined. It is defined only up to an additive constant. (As we will see, the thermodynamic definition of entropy is also defined only up to a constant.)
To avoid coarse graining one can take the entropy as defined by the H-theorem.
{\displaystyle S=-k_{\text{B}}H_{\text{B}}:=-k_{\text{B}}\int f(q_{i},p_{i})\,\ln f(q_{i},p_{i})\,dq_{1}\,dp_{1}\cdots dq_{N}\,dp_{N}}
However, this ambiguity can be resolved with quantum mechanics. The quantum state of a system can be expressed as a superposition of "basis" states, which can be chosen to be energy eigenstates (i.e. eigenstates of the quantum Hamiltonian). Usually, the quantum states are discrete, even though there may be an infinite number of them. For a system with some specified energy E, one takes Ω to be the number of energy eigenstates within a macroscopically small energy range between E and E + δE. In the thermodynamical limit, the specific entropy becomes independent of the choice of δE.
An important result, known as Nernst's theorem or the third law of thermodynamics, states that the entropy of a system at zero absolute temperature is a well-defined constant. This is because a system at zero temperature exists in its lowest-energy state, or ground state, so that its entropy is determined by the degeneracy of the ground state. Many systems, such as crystal lattices, have a unique ground state, and (since ln(1) = 0) this means that they have zero entropy at absolute zero. Other systems have more than one state with the same, lowest energy, and have a non-vanishing "zero-point entropy". For instance, ordinary ice has a zero-point entropy of 3.41 J/(mol⋅K), because its underlying crystal structure possesses multiple configurations with the same energy (a phenomenon known as geometrical frustration).
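The quoted figure for ice can be compared against Pauling's classic combinatorial estimate, which assumes roughly 3/2 allowed hydrogen configurations per molecule; this back-of-envelope check is our addition, not part of the article:

```python
import math

R = 8.314462618  # molar gas constant, J/(mol K)

# Pauling's estimate for the residual entropy of ice: S = R ln(3/2)
print(R * math.log(1.5))  # ~ 3.37 J/(mol K), close to the measured 3.41
```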
The third law of thermodynamics states that the entropy of a perfect crystal at absolute zero (0 K) is zero. This means that nearly all molecular motion should cease. The oscillator equation for predicting quantized vibrational levels shows that even when the vibrational quantum number is 0, the molecule still has vibrational energy:
{\displaystyle E_{\nu }=h\nu _{0}\left(n+{\tfrac {1}{2}}\right)}
where h is the Planck constant, {\displaystyle \nu _{0}} is the characteristic frequency of the vibration, and n is the vibrational quantum number. Even when n = 0 (the zero-point energy), {\displaystyle E_{\nu }} does not equal 0, in adherence to the Heisenberg uncertainty principle.
== See also ==
== References == | Wikipedia/Entropy_(statistical_views) |
Direct digital control is the automated control of a condition or process by a digital device (computer). Direct digital control takes a centralized network-oriented approach. All instrumentation is gathered by various analog and digital converters which use the network to transport these signals to the central controller. The centralized computer then follows all of its production rules (which may incorporate sense points anywhere in the structure) and causes actions to be sent via the same network to valves, actuators, and other heating, ventilating, and air conditioning components that can be adjusted.
== Overview ==
Central controllers and most terminal unit controllers are programmable, meaning the direct digital control program code may be customized for the intended use. The program features include time schedules, setpoints, controllers, logic, timers, trend logs, and alarms.
The unit controllers typically have analog and digital inputs, that allow measurement of the variable (temperature, humidity, or pressure) and analog and digital outputs for control of the medium (hot/cold water and/or steam). Digital inputs are typically (dry) contacts from a control device, and analog inputs are typically a voltage or current measurement from a variable (temperature, humidity, velocity, or pressure) sensing device. Digital outputs are typically relay contacts used to start and stop equipment, and analog outputs are typically voltage or current signals to control the movement of the medium (air/water/steam) control devices.
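As a hedged illustration of how such an analog input is typically scaled in a DDC program (the voltage range and engineering units here are made up, not any particular controller's configuration):

```python
def scale_analog_input(raw_volts, v_min=0.0, v_max=10.0,
                       eng_min=0.0, eng_max=50.0):
    """Linearly map a 0-10 V sensor signal to engineering units (e.g. deg C)."""
    fraction = (raw_volts - v_min) / (v_max - v_min)
    return eng_min + fraction * (eng_max - eng_min)

print(scale_analog_input(4.4))  # 4.4 V on a 0-50 deg C sensor -> 22.0 deg C
```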
== History ==
An early example of a direct digital control system was completed by the Australian business Midac in 1981–1982 using Australian-designed R-Tec hardware. The system installed at the University of Melbourne used a serial communications network, connecting campus buildings back to a control room "front end" system in the basement of the Old Geology building. Each remote or Satellite Intelligence Unit (SIU) ran two Z80 microprocessors, whilst the front end ran eleven Z80s in a parallel-processing configuration with paged common memory. The Z80 microprocessors shared the load by passing tasks to each other via the common memory and the communications network. This was possibly the first successful implementation of distributed-processing direct digital control.
== Data communication ==
When direct digital controllers are networked together they can share information through a data bus. The control system may speak 'proprietary' or 'open protocol' language to communicate on the data bus. Examples of open protocol language are Building Automation Control Network (BACnet), LonWorks (Echelon), Modbus TCP and KNX.
== Integration ==
When different direct digital control data networks are linked together they can be controlled from a shared platform. This platform can then share information from one language to another. For example, a LON controller could share a temperature value with a BACnet controller. The integration platform can not only make information shareable, but can interact with all the devices.
Most integration platforms are either a PC or a network appliance. In many cases, the HMI (human–machine interface) or SCADA (supervisory control and data acquisition) system is part of the platform. Integration platform examples include, to name only a few, Tridium Niagara AX, Trend Controls, TAC Vista, CAN2GO, and OPC (Open Connectivity) Unified Architecture server technology, which is used when direct connectivity is not possible.
== Applications ==
=== In heating, ventilating, and air conditioning ===
Direct digital control is often used to control heating, ventilating, and air conditioning devices such as valves via microprocessors using software to perform the control logic. Such systems receive analog and digital inputs from the sensors and devices and, according to the control logic, provide analog or digital outputs.
These systems may be mated with a software package that graphically allows operators to monitor, control, alarm and diagnose building equipment remotely.
=== Plant growth ===
Direct digital control can be applied to optimize plant growth in a growth chamber.
=== Motor ===
Using an algorithm based on optimal control theory, it is possible to control the speed of an induction motor using a microcontroller.
== See also ==
Building automation
Fieldbus
GE Fanuc Intelligent Platforms
Industrial control systems
Plant process and emergency shutdown systems
Programmable logic controller
Safety instrumented system
== References ==
== External links ==
Role of direct digital control systems in building commissioning Archived 2009-10-03 at the Wayback Machine
DDCTalk.com - Information, news, and resources related to direct digital control of buildings
A kugelblitz (German: [ˈkuːɡl̩ˌblɪt͡s] ) is a theoretical astrophysical object predicted by general relativity. It is a concentration of heat, light or radiation so intense that its energy forms an event horizon and becomes self-trapped. In other words, if enough radiation is aimed into a region of space, the concentration of energy can warp spacetime so much that it creates a black hole. This would be a black hole the original mass–energy of which was in the form of radiant energy rather than matter; however, there is currently no uniformly accepted method of distinguishing black holes by origin.
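To get a feel for the scales involved, the horizon radius of a given amount of radiant energy follows from the standard Schwarzschild relation applied to the mass-equivalent, r_s = 2GE/c^4; the numbers below are our own illustration:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8    # speed of light, m/s

def schwarzschild_radius_of_energy(E):
    """r_s = 2*G*(E/c**2)/c**2 = 2*G*E/c**4 for an energy E in joules."""
    return 2 * G * E / c ** 4

# The Sun's total output (~3.8e26 W) collected for a year (~1.2e34 J)
# corresponds to a horizon radius of only ~2e-10 m:
print(schwarzschild_radius_of_energy(3.8e26 * 3.15e7))
```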
John Archibald Wheeler's 1955 Physical Review paper entitled "Geons" refers to the kugelblitz phenomenon and explores the idea of creating such particles (or toy models of particles) from spacetime curvature.
A study published in Physical Review Letters in 2024 argues that the formation of a kugelblitz is impossible due to dissipative quantum effects like vacuum polarization, which prevent sufficient energy buildup to create an event horizon. The study concludes that such a phenomenon cannot occur in any realistic scenario within our universe.
The kugelblitz phenomenon has been considered a possible basis for interstellar engines (drives) for future black hole starships.
== In fiction ==
A kugelblitz is a major plot point in the third season of the American superhero television series The Umbrella Academy.
A kugelblitz is the home of a major faction in Frederik Pohl's "Gateway" novels.
== See also ==
Bekenstein bound
Micro black hole
== References == | Wikipedia/Kugelblitz_(astrophysics) |
A nonsingular black hole model is a mathematical theory of black holes that avoids certain theoretical problems with the standard black hole model, including information loss and the unobservable nature of the black hole event horizon.
== Avoiding paradoxes in the standard black hole model ==
For a black hole to physically exist as a solution to Einstein's equation, it must form an event horizon in finite time relative to outside observers. This requires an accurate theory of black hole formation, of which several have been proposed. In 2007, Shuan Nan Zhang of Tsinghua University proposed a model in which the event horizon of a potential black hole only forms (or expands) after an object falls into the existing horizon, or after the horizon has exceeded the critical density. In other words, an infalling object causes the horizon of a black hole to expand, which only occurs after the object has fallen into the hole, allowing an observable horizon in finite time. This solution does not solve the information paradox, however.
== Alternative black hole models ==
Nonsingular black hole models have been proposed since theoretical problems with black holes were first realized. Today some of the most viable candidates for the result of the collapse of a star with mass well above the Chandrasekhar limit include the gravastar and the dark energy star.
While black holes were a well-established part of mainstream physics for most of the end of the 20th century, alternative models received new attention when models proposed by George Chapline and later by Lawrence Krauss, Dejan Stojkovic, and Tanmay Vachaspati of Case Western Reserve University showed in several separate models that black hole horizons could not form.
Such research has attracted much media attention, as black holes have long captured the imagination of both scientists and the public for both their innate simplicity and mysteriousness. The recent theoretical results have therefore undergone much scrutiny, and most of them are now ruled out by theoretical studies. For example, several alternative black hole models were shown to be unstable under extremely fast rotation, which, by conservation of angular momentum, would not be an unusual physical scenario for a collapsed star (see pulsar). Nevertheless, the existence of a stable model of a nonsingular black hole is still an open question.
=== Hayward metric ===
The Hayward metric is the simplest description of a black hole that is non-singular. The metric was written down by Sean Hayward as the minimal model that is regular, static, spherically symmetric and asymptotically flat.
=== Ayón-Beato–García metric ===
The Ayón-Beato–García model is the first exact charged regular black hole with a source. The model was proposed by Eloy Ayón Beato and Alberto García in 1998 based on the minimal coupling between a nonlinear electrodynamics model and general relativity, considering a static and spherically symmetric spacetime. Later the same authors reinterpreted the first non-singular black hole geometry, the Bardeen toy Model, as a nonlinear-electrodynamics-based regular black hole. Nowadays, it is known that the Ayón-Beato–García model may mimic the absorption properties of the Reissner–Nordström metric, from the perspective of the absorption of massless test scalar fields.
== Nonsingular black holes as dark matter ==
In 2024, Paul C.W. Davies, Damien A. Easson, and Phillip B. Levin proposed that nonsingular black holes are a viable candidate for dark matter. They showed that the nonsingular Schwarzschild-de Sitter black hole slowly evaporates, reaching a maximum but finite temperature, then forms a black hole remnant that does not have a singularity and whose mass is on the order of the Planck mass. This nonsingular black hole can comprise all of the dark matter in the observable universe because the fraction of primordial black holes that is dark matter is inversely proportional to the smallest mass primordial black hole that could have survived since the primordial era. It was previously thought that Hawking evaporation set the lower bound of primordial black holes to be 10¹² kg, but nonsingular black holes, which form remnants and do not evaporate completely, lower this bound to the Planck mass, which is 10⁻⁸ kg. Thus Planck mass nonsingular black holes formed primordially can comprise all of the dark matter in the observable universe today.
== See also ==
Exotic star
Planck star
Fuzzball (string theory)
== References ==
== External links ==
Black holes don't exist, Case physicists report
George Chapline (1998). "The Black Hole Information Puzzle and Evidence for a Cosmological Constant". arXiv:hep-th/9807175.
Abhas Mitra (2005). "Comments on the proposal of Dark Energy Stars by Chapline". arXiv:astro-ph/0504384. | Wikipedia/Nonsingular_black_hole_models |
In relativistic classical field theories of gravitation, particularly general relativity, an energy condition is a generalization of the statement "the energy density of a region of space cannot be negative" in a relativistically phrased mathematical formulation. There are multiple alternative ways to express such a condition so that it can be applied to the matter content of the theory. The hope is then that any reasonable matter theory will satisfy this condition, or at least will preserve the condition if it is satisfied by the starting conditions.
Energy conditions are not physical constraints per se, but are rather mathematically imposed boundary conditions that attempt to capture a belief that "energy should be positive". Many energy conditions are known to not correspond to physical reality—for example, the observable effects of dark energy are well known to violate the strong energy condition.
In general relativity, energy conditions are often used (and required) in proofs of various important theorems about black holes, such as the no hair theorem or the laws of black hole thermodynamics.
== Motivation ==
In general relativity and allied theories, the distribution of the mass, momentum, and stress due to matter and to any non-gravitational fields is described by the energy–momentum tensor (or matter tensor) {\displaystyle T^{ab}}. However, the Einstein field equation in itself does not specify what kinds of states of matter or non-gravitational fields are admissible in a spacetime model. This is both a strength, since a good general theory of gravitation should be maximally independent of any assumptions concerning non-gravitational physics, and a weakness, because without some further criterion the Einstein field equation admits putative solutions with properties most physicists regard as unphysical, i.e. too weird to resemble anything in the real universe even approximately.
The energy conditions represent such criteria. Roughly speaking, they crudely describe properties common to all (or almost all) states of matter and all non-gravitational fields that are well-established in physics while being sufficiently strong to rule out many unphysical "solutions" of the Einstein field equation.
Mathematically speaking, the most apparent distinguishing feature of the energy conditions is that they are essentially restrictions on the eigenvalues and eigenvectors of the matter tensor. A more subtle but no less important feature is that they are imposed eventwise, at the level of tangent spaces. Therefore, they have no hope of ruling out objectionable global features, such as closed timelike curves.
== Some observable quantities ==
In order to understand the statements of the various energy conditions, one must be familiar with the physical interpretation of some scalar and vector quantities constructed from arbitrary timelike or null vectors and the matter tensor.
First, a unit timelike vector field {\displaystyle {\vec {X}}} can be interpreted as defining the world lines of some family of (possibly noninertial) ideal observers. Then the scalar field
{\displaystyle \rho =T_{ab}X^{a}X^{b}}
can be interpreted as the total mass–energy density (matter plus field energy of any non-gravitational fields) measured by the observer from our family (at each event on his world line). Similarly, the vector field with components {\displaystyle -{T^{a}}_{b}X^{b}} represents (after a projection) the momentum measured by our observers.
Second, given an arbitrary null vector field {\displaystyle {\vec {k}}}, the scalar field
{\displaystyle \nu =T_{ab}k^{a}k^{b}}
can be considered a kind of limiting case of the mass–energy density.
Third, in the case of general relativity, given an arbitrary timelike vector field {\displaystyle {\vec {X}}}, again interpreted as describing the motion of a family of ideal observers, the Raychaudhuri scalar is the scalar field obtained by taking the trace of the tidal tensor corresponding to those observers at each event:
{\displaystyle {E[{\vec {X}}]^{m}}_{m}=R_{ab}X^{a}X^{b}}
This quantity plays a crucial role in Raychaudhuri's equation. Then from the Einstein field equation we immediately obtain
{\displaystyle {\frac {1}{8\pi }}{E[{\vec {X}}]^{m}}_{m}={\frac {1}{8\pi }}R_{ab}X^{a}X^{b}=\left(T_{ab}-{\frac {1}{2}}Tg_{ab}\right)X^{a}X^{b},}
where {\displaystyle T={T^{m}}_{m}} is the trace of the matter tensor.
== Mathematical statement ==
There are several alternative energy conditions in common use:
=== Null energy condition ===
The null energy condition stipulates that for every future-pointing null vector field {\displaystyle {\vec {k}}},
{\displaystyle \nu =T_{ab}k^{a}k^{b}\geq 0.}
Each of these has an averaged version, in which the properties noted above are to hold only on average along the flowlines of the appropriate vector fields. Otherwise, the Casimir effect leads to exceptions. For example, the averaged null energy condition states that for every flowline (integral curve) C of the null vector field {\displaystyle {\vec {k}}}, we must have
{\displaystyle \int _{C}T_{ab}k^{a}k^{b}d\lambda \geq 0.}
=== Weak energy condition ===
The weak energy condition stipulates that for every timelike vector field {\displaystyle {\vec {X}}}, the matter density observed by the corresponding observers is always non-negative:
{\displaystyle \rho =T_{ab}X^{a}X^{b}\geq 0.}
=== Dominant energy condition ===
The dominant energy condition stipulates that, in addition to the weak energy condition holding true, for every future-pointing causal vector field (either timelike or null) {\displaystyle {\vec {Y}}}, the vector field {\displaystyle -{T^{a}}_{b}Y^{b}} must be a future-pointing causal vector. That is, mass–energy can never be observed to be flowing faster than light.
=== Strong energy condition ===
The strong energy condition stipulates that for every timelike vector field \vec{X}, the trace of the tidal tensor measured by the corresponding observers is always non-negative:

\left( T_{ab} - \frac{1}{2} T g_{ab} \right) X^{a} X^{b} \geq 0.
There are many classical matter configurations which violate the strong energy condition, at least from a mathematical perspective. For instance, a scalar field with a positive potential can violate this condition. Moreover, observations of dark energy/cosmological constant show that the strong energy condition fails to describe our universe, even when averaged across cosmological scales. Furthermore, it is strongly violated in any cosmological inflationary process (even one not driven by a scalar field).
== Perfect fluids ==
Perfect fluids possess a matter tensor of the form

T^{ab} = \rho u^{a} u^{b} + p h^{ab},

where \vec{u} is the four-velocity of the matter particles and where

h^{ab} \equiv g^{ab} + u^{a} u^{b}

is the projection tensor onto the spatial hyperplane elements orthogonal to the four-velocity, at each event. (Notice that these hyperplane elements will not form a spatial hyperslice unless the velocity is vorticity-free, that is, irrotational.) With respect to a frame aligned with the motion of the matter particles, the components of the matter tensor take the diagonal form

T^{\hat{a}\hat{b}} = \begin{bmatrix} \rho & 0 & 0 & 0 \\ 0 & p & 0 & 0 \\ 0 & 0 & p & 0 \\ 0 & 0 & 0 & p \end{bmatrix}.

Here, \rho is the energy density and p is the pressure.
The energy conditions can then be reformulated in terms of these eigenvalues:

The null energy condition stipulates that \rho + p \geq 0.
The weak energy condition stipulates that \rho \geq 0 and \rho + p \geq 0.
The dominant energy condition stipulates that \rho \geq |p|.
The strong energy condition stipulates that \rho + p \geq 0 and \rho + 3p \geq 0.
The implications among these conditions are as follows: the dominant energy condition implies the weak energy condition, which in turn implies the null energy condition; the strong energy condition also implies the null energy condition. Note that some of these conditions allow negative pressure. Also, note that despite the names the strong energy condition does not imply the weak energy condition even in the context of perfect fluids.
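For a perfect fluid these eigenvalue forms can be checked mechanically. The following minimal Python sketch (the function name and calling convention are illustrative, not from any cited source) evaluates the four pointwise conditions for a given energy density and pressure:

```python
def energy_conditions(rho, p):
    """Pointwise energy conditions for a perfect fluid with
    energy density rho and pressure p (geometric units)."""
    return {
        "NEC": rho + p >= 0,
        "WEC": rho >= 0 and rho + p >= 0,
        "DEC": rho >= abs(p),
        "SEC": rho + p >= 0 and rho + 3 * p >= 0,
    }

# Dust (p = 0) satisfies all four conditions; a false vacuum
# (p = -rho) violates only the strong energy condition.
print(energy_conditions(1.0, 0.0))
print(energy_conditions(1.0, -1.0))
```

Any pair (rho, p) passing the dominant condition here also passes the weak and null conditions, mirroring the implication structure described above.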
== Non-perfect fluids ==
Finally, there are proposals for extension of the energy conditions to spacetimes containing non-perfect fluids, where the second law of thermodynamics provides a natural Lyapunov function to probe both stability and causality, where the physical origin of the connection between stability and causality lies in the relationship between entropy and information. These attempts generalize the Hawking-Ellis vacuum conservation theorem (according to which, if energy can enter an empty region faster than the speed of light, then the dominant energy condition is violated, and the energy density may become negative in some reference frame) to spacetimes containing out-of-equilibrium matter at finite temperature and chemical potential.
Indeed, the idea that there is a connection between causality violation and fluid instabilities has a long history. For example, in the words of W. Israel: “If the source of an effect can be delayed, it should be possible for a system to borrow energy from its ground state, and this implies instability”. It is possible to show that this is a restatement of the Hawking-Ellis vacuum conservation theorem at finite temperature and chemical potential.
== Attempts at falsifying the energy conditions ==
While the intent of the energy conditions is to provide simple criteria that rule out many unphysical situations while admitting any physically reasonable situation, in fact, at least when one introduces an effective field modeling of some quantum mechanical effects, some possible matter tensors which are known to be physically reasonable and even realistic because they have been experimentally verified, actually fail various energy conditions. In particular, in the Casimir effect, in the region between two conducting plates held parallel at a very small separation d, there is a negative energy density

\varepsilon = -\frac{\pi^{2}}{720} \frac{\hbar}{d^{4}}

between the plates. (Be mindful, though, that the Casimir effect is topological, in that the sign of the vacuum energy depends on both the geometry and topology of the configuration. Being negative for parallel plates, the vacuum energy is positive for a conducting sphere.) However, various quantum inequalities suggest that a suitable averaged energy condition may be satisfied in such cases. In particular, the averaged null energy condition is satisfied in the Casimir effect. Indeed, for energy–momentum tensors arising from effective field theories on Minkowski spacetime, the averaged null energy condition holds for everyday quantum fields. Extending these results is an open problem.
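In SI units the parallel-plate expression carries an explicit factor of c, i.e. ε = −π²ħc/(720 d⁴); the formula above is written in units with c = 1. A quick numerical sketch (the 1 µm separation is chosen purely for illustration):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
c = 2.99792458e8        # speed of light, m/s

def casimir_energy_density(d):
    """Vacuum energy density (J/m^3) between parallel conducting
    plates separated by d meters."""
    return -math.pi**2 * hbar * c / (720 * d**4)

print(casimir_energy_density(1e-6))  # about -4.3e-4 J/m^3 for a 1 micron gap
```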
The strong energy condition is obeyed by all normal/Newtonian matter, but a false vacuum can violate it. Consider the linear barotropic equation of state

p = w \rho,

where \rho is the matter energy density, p is the matter pressure, and w is a constant. Then the strong energy condition requires w \geq -1/3; but for the state known as a false vacuum, we have w = -1.
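As a sanity check on the bound, the SEC combination ρ + 3p = ρ(1 + 3w) changes sign exactly at w = −1/3 (assuming ρ > 0), which a one-line sweep makes visible:

```python
# SEC for a barotropic fluid p = w * rho with rho > 0 requires 1 + 3w >= 0
# (the other SEC inequality, rho + p >= 0, holds for all w >= -1).
for w in (1/3, 0.0, -1/3, -2/3, -1.0):
    print(f"w = {w:+.3f}: SEC {'holds' if 1 + 3 * w >= 0 else 'violated'}")
```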
== See also ==
Congruence (general relativity)
Exact solutions in general relativity
Frame fields in general relativity
Positive energy theorem
Quantum inequalities
== Notes ==
== References ==
Hawking, Stephen; Ellis, G. F. R. (1973). The Large Scale Structure of Space-Time. Cambridge: Cambridge University Press. ISBN 0-521-09906-4. The energy conditions are discussed in §4.3.
Poisson, Eric (2004). A Relativist's Toolkit: The Mathematics of Black Hole Mechanics. Cambridge: Cambridge University Press. Bibcode:2004rtmb.book.....P. ISBN 0-521-83091-5. Various energy conditions (including all of those mentioned above) are discussed in Section 2.1.
Carroll, Sean M. (2004). Spacetime and Geometry: An Introduction to General Relativity. San Francisco: Addison-Wesley. ISBN 0-8053-8732-3. Various energy conditions are discussed in Section 4.6.
Wald, Robert M. (1984). General Relativity. Chicago: University of Chicago Press. ISBN 0-226-87033-2. Common energy conditions are discussed in Section 9.2.
Ellis, G. F. R.; Maartens, R.; MacCallum, M.A.H. (2012). Relativistic Cosmology. Cambridge: Cambridge University Press. ISBN 978-0-521-38115-4. Violations of the strong energy condition are discussed in Section 6.1. | Wikipedia/Energy_conditions |
Timeline of black hole physics
== Pre-20th century ==
1640 — Ismaël Bullialdus suggests an inverse-square gravitational force law
1676 — Ole Rømer demonstrates that light has a finite speed
1684 — Isaac Newton writes down his inverse-square law of universal gravitation
1758 — Rudjer Josip Boscovich develops his theory of forces, where gravity can be repulsive on small distances. This implied that strange classical bodies that would not allow other bodies to reach their surfaces, such as what we now call white holes, could exist.
1784 — John Michell discusses classical bodies which have escape velocities greater than the speed of light
1795 — Pierre Laplace discusses classical bodies which have escape velocities greater than the speed of light
1798 — Henry Cavendish measures the gravitational constant G
1876 — William Kingdon Clifford suggests that the motion of matter may be due to changes in the geometry of space
== 20th century ==
=== Before 1960s ===
1909 — Albert Einstein, together with Marcel Grossmann, starts to develop a theory which would bind the metric tensor g_{ik}, which defines a space geometry, with a source of gravity, that is with mass
1910 — Hans Reissner and Gunnar Nordström define Reissner–Nordström singularity, Hermann Weyl solves special case for a point-body source
1915 — Albert Einstein presents (David Hilbert presented this independently five days earlier in Göttingen) the complete Einstein field equations at the Prussian Academy meeting in Berlin on 25 November 1915
1916 — Karl Schwarzschild solves the Einstein vacuum field equations for uncharged spherically symmetric non-rotating systems
1917 — Paul Ehrenfest gives conditional principle a three-dimensional space
1918 — Hans Reissner and Gunnar Nordström solve the Einstein–Maxwell field equations for charged spherically symmetric non-rotating systems
1918 — Friedrich Kottler gets Schwarzschild solution without Einstein vacuum field equations
1923 — George David Birkhoff proves that the Schwarzschild spacetime geometry is the unique spherically symmetric solution of the Einstein vacuum field equations
1931 — Subrahmanyan Chandrasekhar calculates, using special relativity, that a non-rotating body of electron-degenerate matter above a certain limiting mass (at 1.4 solar masses) has no stable solutions
1939 — Robert Oppenheimer and Hartland Snyder calculate the gravitational collapse of a pressure-free homogeneous fluid sphere into a black hole
1939 — Using the work of Richard Chace Tolman, Robert Oppenheimer and George Volkoff calculate the upper mass limit of a cold, non-rotating neutron star to be approximately 0.7 solar masses.
1958 — David Finkelstein theorises that the Schwarzschild radius is a causality barrier: an event horizon of a black hole
=== 1960s ===
1963 — Roy Kerr solves the Einstein vacuum field equations for uncharged symmetric rotating systems, deriving the Kerr metric for a rotating black hole
1963 — Maarten Schmidt discovers and analyzes the first quasar, 3C 273, as a highly red-shifted active galactic nucleus, a billion light years away
1964 — Yakov Zel’dovich and independently Edwin Salpeter propose that accretion discs around supermassive black holes are responsible for the huge amounts of energy radiated by quasars
1964 — Hong-Yee Chiu coins the word quasar for a 'quasi-stellar radio source' in his article in Physics Today
1964 — The first recorded use of the term "black hole" in writing, by journalist Ann Ewing
1965 — Roger Penrose proves that an imploding star will necessarily produce a singularity once it has formed an event horizon
1965 — Ezra T. Newman, E. Couch, K. Chinnapared, A. Exton, A. Prakash, and Robert Torrence solve the Einstein–Maxwell field equations for charged, rotating systems
1966 — Yakov Zel’dovich and Igor Novikov propose searching for black hole candidates among binary systems in which one star is optically bright and X-ray dark and the other optically dark but X-ray bright (the black hole candidate)
1967 — Jocelyn Bell discovers and analyzes the first radio pulsar, direct evidence for a neutron star
1967 — Werner Israel presents the proof of the no-hair theorem at King's College London
1967 — John Wheeler introduces the term "black hole" in his lecture to the American Association for the Advancement of Science
1968 — Brandon Carter uses Hamilton–Jacobi theory to derive first-order equations of motion for a charged particle moving in the external fields of a Kerr–Newman black hole
1969 — Roger Penrose discusses the Penrose process for the extraction of the spin energy from a Kerr black hole
1969 — Roger Penrose proposes the cosmic censorship hypothesis
=== After 1960s ===
1972 — Identification of Cygnus X-1/HDE 226868 from dynamic observations as the first binary with a stellar black hole candidate
1972 — Stephen Hawking proves that the area of a classical black hole's event horizon cannot decrease
1972 — James Bardeen, Brandon Carter, and Stephen Hawking propose four laws of black hole mechanics in analogy with the laws of thermodynamics
1972 — Jacob Bekenstein suggests that black holes have an entropy proportional to their surface area due to information loss effects
1974 — Stephen Hawking applies quantum field theory to black hole spacetimes and shows that black holes will radiate particles with a black-body spectrum which can cause black hole evaporation
1975 — James Bardeen and Jacobus Petterson show that the swirl of spacetime around a spinning black hole can act as a gyroscope stabilizing the orientation of the accretion disc and jets
1989 — Identification of microquasar V404 Cygni as a binary black hole candidate system
1989 — Eric Poisson and Werner Israel theorize the concept of mass inflation, a phenomenon in which the curvature and gravitational mass parameter inside a spinning or charged black hole grow to infinity as one approaches the inner horizon, causing an infalling observer to experience a singularity at the inner horizon of the black hole.
1994 — Charles Townes and colleagues observe ionized neon gas swirling around the center of our Galaxy at such high velocities that a possible black hole mass at the very center must be approximately equal to that of 3 million suns
== 21st century ==
2002 — Astronomers at the Max Planck Institute for Extraterrestrial Physics present evidence for the hypothesis that Sagittarius A* is a supermassive black hole at the center of the Milky Way galaxy
2002 — Physicists at The Ohio State University publish fuzzball theory, which is a quantum description of black holes positing that they are extended objects composed of strings and don't have singularities.
2002 — NASA's Chandra X-ray Observatory identifies double galactic black holes system in merging galaxies NGC 6240
2004 — Further observations by a team from UCLA present even stronger evidence supporting Sagittarius A* as a black hole
2006 — The Event Horizon Telescope begins capturing data
2012 — First visual evidence of black holes: Suvi Gezari's team at Johns Hopkins University, using the Hawaiian telescope Pan-STARRS 1, publish images of a supermassive black hole 2.7 million light-years away swallowing a red giant
2015 — LIGO Scientific Collaboration detects the distinctive gravitational waveforms from a binary black hole merging into a final black hole, yielding the basic parameters (e.g., distance, mass, and spin) of the three spinning black holes involved
2019 — Event Horizon Telescope collaboration releases the first direct photo of a black hole, the supermassive M87* at the core of the Messier 87 galaxy
== References ==
== See also ==
Timeline of gravitational physics and relativity
Schwarzschild radius | Wikipedia/Timeline_of_black_hole_physics |
A dark-energy star is a hypothetical compact astrophysical object, which a minority of physicists think might constitute an alternative explanation for observations of astronomical black hole candidates.
The concept was proposed by physicist George Chapline. The theory states that infalling matter is converted into vacuum energy or dark energy, as the matter falls through the event horizon. The space within the event horizon would end up with a large value for the cosmological constant and have negative pressure to exert against gravity. There would be no information-destroying singularity.
== Theory ==
In March 2005, physicist George Chapline claimed that quantum mechanics makes it a "near certainty" that black holes do not exist and are instead dark-energy stars. The dark-energy star is a different concept from that of a gravastar.
Dark-energy stars were first proposed because in quantum physics, absolute time is required; however, in general relativity, an object falling towards a black hole would, to an outside observer, seem to have time pass infinitely slowly at the event horizon. The object itself would feel as if time flowed normally.
In order to reconcile quantum mechanics with black holes, Chapline theorized that a phase transition in the phase of space occurs at the event horizon. He based his ideas on the physics of superfluids. As a column of superfluid grows taller, at some point, density increases, slowing down the speed of sound, so that it approaches zero. However, at that point, quantum physics makes sound waves dissipate their energy into the superfluid, so that the zero sound speed condition is never encountered.
In the dark-energy star hypothesis, infalling matter approaching the event horizon decays into successively lighter particles. Nearing the event horizon, environmental effects accelerate proton decay. This may account for high-energy cosmic-ray sources and positron sources in the sky. When the matter falls through the event horizon, the energy equivalent of some or all of that matter is converted into dark energy. This negative pressure counteracts the mass the star gains, avoiding a singularity. The negative pressure also gives a very high number for the cosmological constant.
Furthermore, 'primordial' dark-energy stars could form by fluctuations of spacetime itself, which is analogous to "blobs of liquid condensing spontaneously out of a cooling gas". This not only alters the understanding of black holes, but has the potential to explain the dark energy and dark matter that are indirectly observed.
== See also ==
== References ==
== Sources ==
Chapline, George (2005). "Dark Energy Stars". Proceedings of the Texas Symposium on Relativistic Astrophysics. p. 101. arXiv:astro-ph/0503200. Bibcode:2005tsra.conf..101C.
Barbieri, J.; Chapline, G. (2004). "Have Nucleon Decays Already Been Seen?". Physics Letters B. 590 (1–2): 8–12. Bibcode:2004PhLB..590....8B. doi:10.1016/j.physletb.2004.03.054.
Chapline, George; Hohlfeld, E.; Laughlin, R. B.; Santiago, D. I. (2003). "Quantum Phase Transitions and the Failure of Classical General Relativity". International Journal of Modern Physics A. 18 (21): 3587–3590. arXiv:gr-qc/0012094. Bibcode:2003IJMPA..18.3587C. doi:10.1142/S0217751X03016380. S2CID 119456781.
== External links ==
MPIE Galactic Center Research
George Chapline (28 March 2005). "Black holes 'do not exist'". Nature News. (subscription only) | Wikipedia/Dark-energy_star |
The Grand Design is a popular-science book written by physicists Stephen Hawking and Leonard Mlodinow and published by Bantam Books in 2010. The book examines the history of scientific knowledge about the universe and explains eleven-dimensional M-theory. The authors of the book point out that a Unified Field Theory (a theory, based on an early model of the universe, proposed by Albert Einstein and other physicists) may not exist.
It argues that invoking God is not necessary to explain the origins of the universe, and that the Big Bang is a consequence of the laws of physics alone. In response to criticism, Hawking said: "One can't prove that God doesn't exist, but science makes God unnecessary." When pressed on his own religious views by the 2010 Channel 4 documentary Genius of Britain, he clarified that he did not believe in a personal God.
Published in the United States on September 7, 2010, the book became the number one bestseller on Amazon.com just a few days after publication.
It was published in the United Kingdom on September 9, 2010, and became the number two bestseller on Amazon.co.uk on the same day. It topped the list of adult non-fiction books of The New York Times Non-fiction Best Seller list in September–October 2010.
== Synopsis ==
The book examines the history of scientific knowledge about the universe. It starts with the Ionian Greeks, who claimed that nature works by laws, and not by the will of the gods. It later presents the work of Nicolaus Copernicus, who advocated the concept that the Earth is not located in the center of the universe.
The book aims to explain its topics in an accessible manner, drawing on many examples from daily life, mythology, and history, such as the Viking myth of Sköll and Hati, the film The Matrix, and the Ptolemaic model of the universe.
The authors then describe the theory of quantum mechanics using, as an example, the probable movement of an electron around a room. The presentation has been described as easy to understand by some reviewers, but also as sometimes "impenetrable," by others.
The central claim of the book is that the theory of quantum mechanics and the theory of relativity together help us understand how universes could have formed out of nothing.
The authors write: "Because there is a law such as gravity, the universe can and will create itself from nothing. Spontaneous creation is the reason there is something rather than nothing, why the universe exists, why we exist. It is not necessary to invoke God to light the blue touch paper and set the universe going."
The authors explain, in a manner consistent with M-theory, that as the Earth is only one of several planets in our Solar System, and as our Milky Way galaxy is only one of many galaxies, the same may apply to our universe itself: that is, our universe may be one of a huge number of universes.
The book concludes with the statement that only some universes of the multiple universes (or multiverse) support life forms and that we are located in one of those universes. The laws of nature that are required for life forms to exist appear in some universes by pure chance, Hawking and Mlodinow explain (see Anthropic principle).
== Reactions ==
=== Positive reactions ===
Evolutionary biologist and advocate for atheism Richard Dawkins welcomed Hawking's position and said that "Darwinism kicked God out of biology but physics remained more uncertain. Hawking is now administering the coup de grace."
Theoretical physicist Sean M. Carroll, writing in The Wall Street Journal, described the book as speculative but ambitious: "The important lesson of The Grand Design is not so much the particular theory being advocated but the sense that science may be able to answer the deep 'Why?' questions that are part of fundamental human curiosity."
Cosmologist Lawrence Krauss, in his article "Our Spontaneous Universe", wrote that "there are remarkable, testable arguments that provide firmer empirical evidence of the possibility that our universe arose from nothing. ... If our universe arose spontaneously from nothing at all, one might predict that its total energy should be zero. And when we measure the total energy of the universe, which could have been anything, the answer turns out to be the only one consistent with this possibility. Coincidence? Maybe. But data like this coming in from our revolutionary new tools promise to turn much of what is now metaphysics into physics. Whether God survives is anyone's guess."
James Trefil, a professor of physics at George Mason University, said in his Washington Post review: "I've waited a long time for this book. It gets into the deepest questions of modern cosmology without a single equation. The reader will be able to get through it without bogging down in a lot of technical detail and will, I hope, have his or her appetite whetted for books with a deeper technical content. And who knows? Maybe in the end the whole multiverse idea will actually turn out to be right!"
Canada Press journalist Carl Hartman said: "Cosmologists, the people who study the entire cosmos, will want to read British physicist and mathematician Stephen Hawking's new book. The Grand Design may sharpen appetites for answers to questions like 'Why is there something rather than nothing?' and 'Why do we exist?' – questions that have troubled thinking people at least as far back as the ancient Greeks."
Writing in the Los Angeles Times, Michael Moorcock praised the authors: "their arguments do indeed bring us closer to seeing our world, universe and multiverse in terms that a previous generation might easily have dismissed as supernatural. This succinct, easily digested book could perhaps do with fewer dry, academic groaners, but Hawking and Mlodinow pack in a wealth of ideas and leave us with a clearer understanding of modern physics in all its invigorating complexity."
German daily Süddeutsche Zeitung devoted the whole opening page of its culture section to The Grand Design. CERN physicist and novelist Ralf Bönt reviews the history of the theory of everything from the 18th century to M-theory, and takes Hawking's conclusion on God's existence as a very good joke which he obviously welcomes very much.
Best selling author Deepak Chopra in an interview with CNN said: "We have to congratulate Leonard and Stephen for finally, finally contributing to the climatic overthrow of the superstition of materialism. Because everything that we call matter comes from this domain which is invisible, which is beyond space and time. All religious experience is based on just three basic fundamental ideas...And nothing in the book invalidates any of these three ideas".
=== Critical reactions ===
John Lennox, Professor of Mathematics at Oxford University, declared "nonsense remains nonsense, even when talked by world-famous scientists." He points to several self-contradictory elements within the central claim of the text, as well as many logical errors made throughout the book which claims "philosophy is dead."
Roger Penrose in the FT doubts that adequate understandings can come from this approach, and points out that "unlike quantum mechanics, M-theory enjoys no observational support whatsoever". Joe Silk in Science suggests that "Some humbleness would be welcome here...A century or two hence...I expect that M-theory will seem as naïve to cosmologists of the future as we now find Pythagoras's cosmology of the harmony of the spheres".
Gerald Schroeder in "The Big Bang Creation: God or the Laws of Nature" explains that "The Grand Design breaks the news, bitter to some, that … to create a universe from absolute nothing God is not necessary. All that is needed are the laws of nature. … [That is,] there can have been a big bang creation without the help of God, provided the laws of nature pre-date the universe. Our concept of time begins with the creation of the universe. Therefore if the laws of nature created the universe, these laws must have existed prior to time; that is the laws of nature would be outside of time. What we have then is totally non-physical laws, outside of time, creating a universe. Now that description might sound somewhat familiar. Very much like the biblical concept of God: not physical, outside of time, able to create a universe."
Dwight Garner in The New York Times was critical of the book, saying: "The real news about The Grand Design is how disappointingly tinny and inelegant it is. The spare and earnest voice that Mr. Hawking employed with such appeal in A Brief History of Time has been replaced here by one that is alternately condescending, as if he were Mr. Rogers explaining rain clouds to toddlers, and impenetrable."
Craig Callender, in the New Scientist, was not convinced by the theory promoted in the book: "M-theory ... is far from complete. But that doesn't stop the authors from asserting that it explains the mysteries of existence ... In the absence of theory, though, this is nothing more than a hunch doomed – until we start watching universes come into being – to remain untested. The lesson isn't that we face a dilemma between God and the multiverse, but that we shouldn't go off the rails at the first sign of coincidences."
Paul Davies in The Guardian wrote: "The multiverse comes with a lot of baggage, such as an overarching space and time to host all those bangs, a universe-generating mechanism to trigger them, physical fields to populate the universes with material stuff, and a selection of forces to make things happen. Cosmologists embrace these features by envisaging sweeping "meta-laws" that pervade the multiverse and spawn specific bylaws on a universe-by-universe basis. The meta-laws themselves remain unexplained – eternal, immutable transcendent entities that just happen to exist and must simply be accepted as given. In that respect the meta-laws have a similar status to an unexplained transcendent god." Davies concludes "there is no compelling need for a supernatural being or prime mover to start the universe off. But when it comes to the laws that explain the big bang, we are in murkier waters."
Dr. Marcelo Gleiser, in his article "Hawking And God: An Intimate Relationship", stated that "contemplating a final theory is inconsistent with the very essence of physics, an empirical science based on the gradual collection of data. Because we don’t have instruments capable of measuring all of Nature, we cannot ever be certain that we have a final theory. There’ll always be room for surprises, as the history of physics has shown again and again. In fact, I find it quite pretentious to imagine that we humans can achieve such a thing. ... Maybe Hawking should leave God alone."
Physicist Peter Woit, of Columbia University, has criticized the book: "One thing that is sure to generate sales for a book of this kind is to somehow drag in religion. The book's rather conventional claim that "God is unnecessary" for explaining physics and early universe cosmology has provided a lot of publicity for the book. I'm in favor of naturalism and leaving God out of physics as much as the next person, but if you're the sort who wants to go to battle in the science/religion wars, why you would choose to take up such a dubious weapon as M-theory mystifies me."
In Scientific American, John Horgan is not sympathetic to the book:
"M-theory, theorists now realize, comes in an almost infinite number of versions, which "predict" an almost infinite number of possible universes. Critics call this the "Alice's Restaurant problem," a reference to the refrain of the old Arlo Guthrie folk song: "You can get anything you want at Alice's Restaurant." Of course, a theory that predicts everything really doesn't predict anything...
The anthropic principle has always struck me as so dumb that I can't understand why anyone takes it seriously. It's cosmology's version of creationism. ... The physicist Tony Rothman, with whom I worked at Scientific American in the 1990s, liked to say that the anthropic principle in any form is completely ridiculous and hence should be called CRAP. ...
Hawking is telling us that unconfirmable M-theory plus the anthropic tautology represents the end of that quest. If we believe him, the joke's on us."
The Economist is also critical of the book: Hawking and Mlodinow "...say that these surprising ideas have passed every experimental test to which they have been put, but that is misleading in a way that is unfortunately typical of the authors. It is the bare bones of quantum mechanics that have proved to be consistent with what is presently known of the subatomic world. The authors' interpretations and extrapolations of it have not been subjected to any decisive tests, and it is not clear that they ever could be.
Once upon a time it was the province of philosophy to propose ambitious and outlandish theories in advance of any concrete evidence for them. Perhaps science, as Professor Hawking and Mr Mlodinow practice it in their airier moments, has indeed changed places with philosophy, though probably not quite in the way that they think."
The Bishop of Swindon, Dr. Lee Rayfield, said, "Science can never prove the non-existence of God, just as it can never prove the existence of God." Anglican priest, Cambridge theologian and psychologist Rev. Dr. Fraser N. Watts said "a creator God provides a reasonable and credible explanation of why there is a universe, and ... it is somewhat more likely that there is a God than that there is not. That view is not undermined by what Hawking has said."
British scientist Baroness Greenfield also criticized the book in an interview with BBC Radio: "Of course they can make whatever comments they like, but when they assume, rather in a Taliban-like way, that they have all the answers, then I do feel uncomfortable." She later claimed her Taliban remarks were "not intended to be personal", saying she "admired Stephen Hawking greatly" and "had no wish to compare him in particular to the Taliban".
Denis Alexander responded to Stephen Hawking's The Grand Design by stating that "the 'god' that Stephen Hawking is trying to debunk is not the creator God of the Abrahamic faiths who really is the ultimate explanation for why there is something rather than nothing", adding that "Hawking's god is a god-of-the-gaps used to plug present gaps in our scientific knowledge." "Science provides us with a wonderful narrative as to how [existence] may happen, but theology addresses the meaning of the narrative".
Mathematician and philosopher of science Wolfgang Smith wrote a chapter-by-chapter summary and critique of the book, first published in Sophia: The Journal of Traditional Studies, and subsequently published as "From Physics to Science Fiction: Response to Stephen Hawking" in the 2012 edition of his collection of essays, Science & Myth.
== See also ==
A Brief History of Time – 1988 book by Stephen Hawking
A Briefer History of Time – 2005 popular science book by Stephen Hawking
Brief Answers to the Big Questions – 2018 popular science book by Stephen Hawking
Model-dependent realism – View of scientific inquiry that focuses on the role of scientific models of phenomena
A Question and Answer Guide to Astronomy
== References == | Wikipedia/The_Grand_Design_(book) |
The Schwarzschild solution describes spacetime under the influence of a massive, non-rotating, spherically symmetric object. It is considered by some to be one of the simplest and most useful solutions to the Einstein field equations.
== Assumptions and notation ==
Working in a coordinate chart with coordinates (r, \theta, \phi, t), labelled 1 to 4 respectively, we begin with the metric in its most general form (10 independent components, each of which is a smooth function of 4 variables). The solution is assumed to be spherically symmetric, static and vacuum. For the purposes of this article, these assumptions may be stated as follows (see the relevant links for precise definitions):
A spherically symmetric spacetime is one that is invariant under rotations and taking the mirror image.
A static spacetime is one in which all metric components are independent of the time coordinate t (so that \tfrac{\partial}{\partial t} g_{\mu\nu} = 0) and the geometry of the spacetime is unchanged under a time-reversal t \rightarrow -t.
A vacuum solution is one that satisfies the equation T_{ab} = 0. From the Einstein field equations (with zero cosmological constant), this implies that R_{ab} = 0, since contracting R_{ab} - \tfrac{R}{2} g_{ab} = 0 yields R = 0.
Metric signature used here is (+ + + −).
== Diagonalising the metric ==
The first simplification to be made is to diagonalise the metric. Under the coordinate transformation (r, \theta, \phi, t) \rightarrow (r, \theta, \phi, -t), all metric components should remain the same. The metric components g_{\mu 4} (\mu \neq 4) change under this transformation as:

g_{\mu 4}' = \frac{\partial x^{\alpha}}{\partial x'^{\mu}} \frac{\partial x^{\beta}}{\partial x'^{4}} g_{\alpha\beta} = -g_{\mu 4} \qquad (\mu \neq 4)
But, as we expect g'_{\mu 4} = g_{\mu 4} (metric components remain the same), this means that:

g_{\mu 4} = 0 \qquad (\mu \neq 4)
Similarly, the coordinate transformations (r, \theta, \phi, t) \rightarrow (r, \theta, -\phi, t) and (r, \theta, \phi, t) \rightarrow (r, -\theta, \phi, t) respectively give:

g_{\mu 3} = 0 \qquad (\mu \neq 3)
g_{\mu 2} = 0 \qquad (\mu \neq 2)
Putting all these together gives:

g_{\mu\nu} = 0 \qquad (\mu \neq \nu)

and hence the metric must be of the form:

ds^{2} = g_{11}\, dr^{2} + g_{22}\, d\theta^{2} + g_{33}\, d\phi^{2} + g_{44}\, dt^{2}

where the four metric components are independent of the time coordinate t (by the static assumption).
== Simplifying the components ==
On each hypersurface of constant t, constant \theta and constant \phi (i.e., on each radial line), g_{11} should only depend on r (by spherical symmetry). Hence g_{11} is a function of a single variable:

g_{11} = A(r)

A similar argument applied to g_{44} shows that:

g_{44} = B(r)
On the hypersurfaces of constant t and constant r, it is required that the metric be that of a 2-sphere:

dl^{2} = r_{0}^{2} (d\theta^{2} + \sin^{2}\theta \, d\phi^{2})
Choosing one of these hypersurfaces (the one with radius r_{0}, say), the metric components restricted to this hypersurface (which we denote by \tilde{g}_{22} and \tilde{g}_{33}) should be unchanged under rotations through \theta and \phi (again, by spherical symmetry). Comparing the forms of the metric on this hypersurface gives:

\tilde{g}_{22} \left( d\theta^{2} + \frac{\tilde{g}_{33}}{\tilde{g}_{22}} \, d\phi^{2} \right) = r_{0}^{2} (d\theta^{2} + \sin^{2}\theta \, d\phi^{2})

which immediately yields:

\tilde{g}_{22} = r_{0}^{2} \quad \text{and} \quad \tilde{g}_{33} = r_{0}^{2} \sin^{2}\theta
But this is required to hold on each hypersurface; hence,

g_{22} = r^{2} \quad \text{and} \quad g_{33} = r^{2} \sin^{2}\theta
An alternative intuitive way to see that g_{22} and g_{33} must be the same as for a flat spacetime is that stretching or compressing an elastic material in a spherically symmetric manner (radially) will not change the angular distance between two points.
Thus, the metric can be put in the form:

ds^{2} = A(r)\, dr^{2} + r^{2}\, d\theta^{2} + r^{2} \sin^{2}\theta \, d\phi^{2} + B(r)\, dt^{2}

with A and B as yet undetermined functions of r. Note that if A or B is equal to zero at some point, the metric would be singular at that point.
== Calculating the Christoffel symbols ==
Using the metric above, we find the Christoffel symbols, where the indices are (1, 2, 3, 4) = (r, \theta, \phi, t). The sign ' denotes a total derivative of a function.

\Gamma_{ik}^{1} = \begin{bmatrix} A'/(2A) & 0 & 0 & 0 \\ 0 & -r/A & 0 & 0 \\ 0 & 0 & -r \sin^{2}\theta / A & 0 \\ 0 & 0 & 0 & -B'/(2A) \end{bmatrix}

\Gamma_{ik}^{2} = \begin{bmatrix} 0 & 1/r & 0 & 0 \\ 1/r & 0 & 0 & 0 \\ 0 & 0 & -\sin\theta \cos\theta & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}

\Gamma_{ik}^{3} = \begin{bmatrix} 0 & 0 & 1/r & 0 \\ 0 & 0 & \cot\theta & 0 \\ 1/r & \cot\theta & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}

\Gamma_{ik}^{4} = \begin{bmatrix} 0 & 0 & 0 & B'/(2B) \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ B'/(2B) & 0 & 0 & 0 \end{bmatrix}
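These components can be verified mechanically. Below is a minimal sympy sketch (variable names are my own; indices 0–3 correspond to (r, θ, φ, t)) that computes Γ^a_{ik} for the diagonal metric diag(A(r), r², r² sin²θ, B(r)) directly from the definition Γ^a_{ik} = ½ g^{am}(∂_k g_{mi} + ∂_i g_{mk} − ∂_m g_{ik}):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
ph, t = sp.symbols('phi t')
A = sp.Function('A')(r)
B = sp.Function('B')(r)

x = [r, th, ph, t]  # indices 0..3 = (r, theta, phi, t)
g = sp.diag(A, r**2, r**2 * sp.sin(th)**2, B)
ginv = g.inv()

def christoffel(a, i, k):
    """Gamma^a_{ik} from the standard definition."""
    return sp.simplify(sp.Rational(1, 2) * sum(
        ginv[a, m] * (sp.diff(g[m, i], x[k]) + sp.diff(g[m, k], x[i]) - sp.diff(g[i, k], x[m]))
        for m in range(4)))

print(christoffel(0, 0, 0))  # A'/(2A)
print(christoffel(0, 3, 3))  # -B'/(2A)
print(christoffel(3, 0, 3))  # B'/(2B)
```

Looping christoffel over all index triples should reproduce every nonzero entry of the four matrices above.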
== Using the field equations to find A(r) and B(r) ==
To determine A and B, the vacuum field equations are employed:

R_{\alpha\beta} = 0
Hence:

\Gamma_{\beta\alpha,\rho}^{\rho} - \Gamma_{\rho\alpha,\beta}^{\rho} + \Gamma_{\rho\lambda}^{\rho} \Gamma_{\beta\alpha}^{\lambda} - \Gamma_{\beta\lambda}^{\rho} \Gamma_{\rho\alpha}^{\lambda} = 0,
where a comma is used to set off the index that is being used for the derivative. The Ricci curvature is diagonal in the given coordinates:
R_{tt} = -\frac{1}{4} \frac{B'}{A} \left( \frac{A'}{A} - \frac{B'}{B} + \frac{4}{r} \right) - \frac{1}{2} \left( \frac{B'}{A} \right)',

R_{rr} = -\frac{1}{2} \left( \frac{B'}{B} \right)' - \frac{1}{4} \left( \frac{B'}{B} \right)^{2} + \frac{1}{4} \frac{A'}{A} \left( \frac{B'}{B} + \frac{4}{r} \right),

R_{\theta\theta} = 1 - \left( \frac{r}{A} \right)' - \frac{r}{2A} \left( \frac{A'}{A} + \frac{B'}{B} \right),

R_{\phi\phi} = \sin^{2}(\theta)\, R_{\theta\theta},

where the prime means the r derivative of the functions.
Only three of the field equations are nontrivial (the fourth equation is just \sin^{2}\theta times the third equation) and upon simplification become, respectively:

4 A' B^{2} - 2 r B'' A B + r A' B' B + r B'^{2} A = 0,

-2 r B'' A B + r A' B' B + r B'^{2} A - 4 B' A B = 0,

r A' B + 2 A^{2} B - 2 A B - r B' A = 0
Subtracting the first and second equations produces:

A' B + A B' = 0 \quad \Rightarrow \quad A(r) B(r) = K

where K is a non-zero real constant (note that A'B + AB' = (AB)', so the product AB must be constant). Substituting A(r) B(r) = K into the third equation and tidying up gives:

r A' = A (1 - A)
which has general solution:

A(r) = \left( 1 + \frac{1}{S r} \right)^{-1}

for some non-zero real constant S. Hence, the metric for a static, spherically symmetric vacuum solution is now of the form:

ds^{2} = \left( 1 + \frac{1}{S r} \right)^{-1} dr^{2} + r^{2} (d\theta^{2} + \sin^{2}\theta \, d\phi^{2}) + K \left( 1 + \frac{1}{S r} \right) dt^{2}
Note that the spacetime represented by the above metric is asymptotically flat, i.e. as r \rightarrow \infty, the metric approaches the Minkowski metric and the spacetime manifold resembles Minkowski space.
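The claimed general solution is easy to verify symbolically; here is a minimal sympy check (my own variable names) that A(r) = (1 + 1/(Sr))⁻¹ satisfies rA' = A(1 − A):

```python
import sympy as sp

r, S = sp.symbols('r S', positive=True)
A = (1 + 1 / (S * r))**-1

# r A' - A (1 - A) simplifies to zero, confirming A solves the ODE.
residual = r * sp.diff(A, r) - A * (1 - A)
print(sp.simplify(residual))  # 0
```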
== Using the weak-field approximation to find K and S ==
The geodesics of the metric (obtained where ds is extremised) must, in some limit (e.g., toward infinite speed of light), agree with the solutions of Newtonian motion (e.g., obtained by Lagrange equations). (The metric must also limit to Minkowski space when the mass it represents vanishes.)

0 = \delta \int \frac{ds}{dt} \, dt = \delta \int (KE + PE_{g}) \, dt
(where KE is the kinetic energy and PE_{g} is the potential energy due to gravity). The constants K and S are fully determined by some variant of this approach; from the weak-field approximation one arrives at the result:

g_{44} = K \left( 1 + \frac{1}{S r} \right) \approx -c^{2} + \frac{2 G m}{r} = -c^{2} \left( 1 - \frac{2 G m}{c^{2} r} \right)
where G is the gravitational constant, m is the mass of the gravitational source and c is the speed of light. It is found that:

K = -c^{2} \quad \text{and} \quad \frac{1}{S} = -\frac{2 G m}{c^{2}}

Hence:

A(r) = \left( 1 - \frac{2 G m}{c^{2} r} \right)^{-1} \quad \text{and} \quad B(r) = -c^{2} \left( 1 - \frac{2 G m}{c^{2} r} \right)
So, the Schwarzschild metric may finally be written in the form:

ds^{2} = \left( 1 - \frac{2 G m}{c^{2} r} \right)^{-1} dr^{2} + r^{2} (d\theta^{2} + \sin^{2}\theta \, d\phi^{2}) - c^{2} \left( 1 - \frac{2 G m}{c^{2} r} \right) dt^{2}
Note that:

\frac{2 G m}{c^{2}} = r_{\text{s}}

is the definition of the Schwarzschild radius for an object of mass m, so the Schwarzschild metric may be rewritten in the alternative form:

ds^{2} = \left( 1 - \frac{r_{\text{s}}}{r} \right)^{-1} dr^{2} + r^{2} (d\theta^{2} + \sin^{2}\theta \, d\phi^{2}) - c^{2} \left( 1 - \frac{r_{\text{s}}}{r} \right) dt^{2}
which shows that the metric becomes singular approaching the event horizon (that is, r \rightarrow r_{\text{s}}). The metric singularity is not a physical one (although there is a real physical singularity at r = 0), as can be shown by using a suitable coordinate transformation (e.g. the Kruskal–Szekeres coordinate system).
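For concreteness, r_s = 2Gm/c² is straightforward to evaluate numerically; a short sketch with SI constants (the sample masses are illustrative):

```python
G = 6.67430e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8    # speed of light, m/s
M_sun = 1.98892e30  # solar mass, kg

def schwarzschild_radius(m):
    """Schwarzschild radius in meters for a mass m in kilograms."""
    return 2 * G * m / c**2

print(schwarzschild_radius(M_sun))        # ~2.95e3 m: about 3 km for the Sun
print(schwarzschild_radius(6.8 * M_sun))  # ~2.0e4 m: about 20 km
```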
== Alternate derivation using known physics in special cases ==
[This derivation is flawed because it assumes Kepler's third law. This is unfounded because that law has relativistic corrections. For example, the meaning of "r" is physical distance in that classical law, and merely a coordinate in general relativity.] The Schwarzschild metric can also be derived using the known physics for a circular orbit and a temporarily stationary point mass. Start with the metric with coefficients that are unknown functions of r:

-c^{2} = \left( \frac{ds}{d\tau} \right)^{2} = A(r) \left( \frac{dr}{d\tau} \right)^{2} + r^{2} \left( \frac{d\phi}{d\tau} \right)^{2} + B(r) \left( \frac{dt}{d\tau} \right)^{2}.
Now apply the Euler–Lagrange equation to the arc length integral J = \int_{\tau_{1}}^{\tau_{2}} \sqrt{-(ds/d\tau)^{2}} \, d\tau. Since ds/d\tau is constant, the integrand can be replaced with (ds/d\tau)^{2}, because the E–L equation is exactly the same if the integrand is multiplied by any constant. Applying the E–L equation to J with the modified integrand yields:
A'(r) \dot{r}^{2} + 2 r \dot{\phi}^{2} + B'(r) \dot{t}^{2} = 2 A'(r) \dot{r}^{2} + 2 A(r) \ddot{r}
0 = 2 r \dot{r} \dot{\phi} + r^{2} \ddot{\phi}
0 = B'(r) \dot{r} \dot{t} + B(r) \ddot{t}

where dot denotes differentiation with respect to \tau.
In a circular orbit \dot{r} = \ddot{r} = 0, so the first E–L equation above is equivalent to

2 r \dot{\phi}^{2} + B'(r) \dot{t}^{2} = 0 \quad \Leftrightarrow \quad B'(r) = -2 r \dot{\phi}^{2} / \dot{t}^{2} = -2 r (d\phi/dt)^{2}.
Kepler's third law of motion is

\frac{T^{2}}{r^{3}} = \frac{4\pi^{2}}{G(M + m)}.
In a circular orbit, the period T equals 2\pi / (d\phi/dt), implying

\left( \frac{d\phi}{dt} \right)^{2} = G M / r^{3}

since the point mass m is negligible compared to the mass of the central body M. So B'(r) = -2 G M / r^{2}, and integrating this yields B(r) = 2 G M / r + C, where C is an unknown constant of integration.
C can be determined by setting M = 0, in which case the spacetime is flat and B(r) = -c^{2}. So C = -c^{2} and

B(r) = 2 G M / r - c^{2} = c^{2} (2 G M / c^{2} r - 1) = c^{2} (r_{\text{s}} / r - 1).
When the point mass is temporarily stationary, \dot{r} = 0 and \dot{\phi} = 0. The original metric equation becomes \dot{t}^{2} = -c^{2} / B(r) and the first E–L equation above becomes A(r) = B'(r) \dot{t}^{2} / (2 \ddot{r}). When the point mass is temporarily stationary, \ddot{r} is the acceleration of gravity, -M G / r^{2}. So

A(r) = \left( \frac{-2 M G}{r^{2}} \right) \left( \frac{-c^{2}}{2 M G / r - c^{2}} \right) \left( -\frac{r^{2}}{2 M G} \right) = \frac{1}{1 - 2 M G / (r c^{2})} = \frac{1}{1 - r_{\text{s}} / r}.
== Alternative form in isotropic coordinates ==
The original formulation of the metric uses anisotropic coordinates in which the velocity of light is not the same in the radial and transverse directions. Arthur Eddington gave alternative forms in isotropic coordinates. For isotropic spherical coordinates r_{1}, \theta, \phi, the coordinates \theta and \phi are unchanged, and then (provided that r \geq 2 G m / c^{2})
r = r_{1} \left( 1 + \frac{G m}{2 c^{2} r_{1}} \right)^{2}, \qquad dr = dr_{1} \left( 1 - \frac{(G m)^{2}}{4 c^{4} r_{1}^{2}} \right),

and

\left( 1 - \frac{2 G m}{c^{2} r} \right) = \left( 1 - \frac{G m}{2 c^{2} r_{1}} \right)^{2} / \left( 1 + \frac{G m}{2 c^{2} r_{1}} \right)^{2}
Then for isotropic rectangular coordinates x, y, z,

x = r_{1} \sin\theta \cos\phi, \qquad y = r_{1} \sin\theta \sin\phi, \qquad z = r_{1} \cos\theta
The metric then becomes, in isotropic rectangular coordinates:

ds^{2} = \left( 1 + \frac{G m}{2 c^{2} r_{1}} \right)^{4} (dx^{2} + dy^{2} + dz^{2}) - c^{2} \, dt^{2} \left( 1 - \frac{G m}{2 c^{2} r_{1}} \right)^{2} / \left( 1 + \frac{G m}{2 c^{2} r_{1}} \right)^{2}
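The coordinate change can be sanity-checked numerically: with r = r₁(1 + Gm/(2c²r₁))², the factor 1 − 2Gm/(c²r) should equal the quoted ratio of squares. A small sketch in units with G = c = 1 (the sample values are arbitrary):

```python
# Check the isotropic-coordinate identity numerically, in units G = c = 1.
m = 1.0
r1 = 7.3  # arbitrary isotropic radial coordinate, well outside the horizon

r = r1 * (1 + m / (2 * r1))**2
lhs = 1 - 2 * m / r
rhs = (1 - m / (2 * r1))**2 / (1 + m / (2 * r1))**2
print(lhs, rhs, abs(lhs - rhs) < 1e-12)  # the identity holds exactly
```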
== Dispensing with the static assumption – Birkhoff's theorem ==
In deriving the Schwarzschild metric, it was assumed that the metric was vacuum, spherically symmetric and static. The static assumption is in fact unneeded, as Birkhoff's theorem states that any spherically symmetric vacuum solution of Einstein's field equations is static; the Schwarzschild solution thus follows. Birkhoff's theorem has the consequence that any pulsating star that remains spherically symmetric does not generate gravitational waves, as the region exterior to the star remains static.
== See also ==
Karl Schwarzschild
Kerr metric
Reissner–Nordström metric
== References == | Wikipedia/Derivation_of_the_Schwarzschild_solution |
In general relativity, a geon is a nonsingular electromagnetic or gravitational wave which is held together in a confined region by the gravitational attraction of its own field energy. They were first investigated theoretically in 1955 by J. A. Wheeler, who coined the term as a contraction of "gravitational electromagnetic entity".
== Overview ==
Since general relativity is a classical field theory, Wheeler's concept of a geon does not treat them as quantum-mechanical entities, and this generally remains true today. Nonetheless, Wheeler speculated that there might be a relationship between geons and elementary particles. This idea continues to attract some attention among physicists, but in the absence of a viable theory of quantum gravity, the accuracy of this speculative idea cannot be tested.
Wheeler did not present explicit geon solutions to the vacuum Einstein field equation, a gap which was partially filled by Dieter R. Brill and James Hartle in 1964 by the Brill–Hartle geon. In 1997, Anderson and Brill gave a rigorous proof that geon solutions of the vacuum Einstein equation exist, though they are not given in a simple closed form.
A major outstanding question regarding geons is whether they are stable, or must decay over time as the energy of the wave gradually "leaks" away. This question has not yet been definitively answered, but the consensus seems to be that they probably cannot be stable. This would lay to rest Wheeler's initial hope that a geon might serve as a classical model for stable elementary particles. However, this would not rule out the possibility that geons are stabilized by quantum effects. In fact, a quantum generalization of the gravitational geon using low-energy quantum gravity shows that geons are stable systems even when quantum effects are turned on. The quantum geon (called "graviball") is described as gravitons bound by their gravitational self-interaction. Since geons (classical or quantum) have a mass but are electromagnetically neutral, they are possible candidates for dark matter.
== See also ==
Black hole electron
Edwin Power
Geometrodynamics
Kugelblitz
Quantum foam
== References ==
== Further reading == | Wikipedia/Geon_(physics) |
Fuzzballs are hypothetical objects in superstring theory, intended to provide a fully quantum description of the black holes predicted by general relativity.
The fuzzball hypothesis dispenses with the singularity at the heart of a black hole by positing that the entire region within the black hole's event horizon is actually an extended object: a ball of strings, which are advanced as the ultimate building blocks of matter and light. Under string theory, strings are bundles of energy vibrating in complex ways in both the three familiar dimensions of space as well as in extra dimensions. Fuzzballs provide resolutions to two major open problems in black hole physics. First, they avoid the gravitational singularity that exists within the event horizon of a black hole. General relativity predicts that at the singularity, the curvature of spacetime becomes infinite, and it cannot determine the fate of matter and energy that falls into it. Physicists generally believe that the singularity is not a real phenomenon, and proposed theories of quantum gravity, such as superstring theory, are expected to explain its true nature. Second, they resolve the black hole information paradox: the quantum information of matter falling into a black hole is trapped behind the event horizon, and seems to disappear from the universe entirely when the black hole evaporates due to Hawking radiation. This would violate a fundamental law of quantum mechanics requiring that quantum information be conserved.
As no direct experimental evidence supports either string theory in general or fuzzballs in particular, both are products purely of calculations and theoretical research. However, the existence of fuzzballs may be testable through gravitational-wave astronomy.
== Physical properties ==
=== String theory and composition ===
Samir D. Mathur of Ohio State University published eight scientific papers between 2001 and 2012, assisted by postdoctoral researcher Oleg Lunin, who contributed to the first two papers. The papers propose that black holes are sphere-like extended objects with a definite volume and are composed of strings. This differs from the classic view of black holes in which there is a singularity at their centers, which are thought to be a zero-dimensional, zero-volume point in which the entire mass of a black hole is concentrated at infinite density, surrounded many kilometers away by an event horizon below which light cannot escape.
All variations of string theory hold that the fundamental constituents of subatomic particles, including the force carriers (e.g., photons and gluons), are actually strings of energy that take on their identities and respective masses by vibrating in different modes and frequencies. The fuzzball concept is rooted in a particular variant of superstring theory called Type IIB (see also String duality), which holds that strings are both "open" (double-ended entities) and "closed" (looped entities) and that there are 9 + 1 spacetime dimensions wherein five of the six extra spatial dimensions are "compactified".
Unlike the view of a black hole as a singularity, a small fuzzball can be thought of as an extra-dense neutron star in which the neutrons have undergone a phase transition and decomposed, liberating the quarks comprising them. Accordingly, fuzzballs are theorized to be the terminal phase of degenerate matter. Mathur calculated that the physical surfaces of fuzzballs have radii equal to that of the event horizon of classic black holes; thus, the Schwarzschild radius of a ubiquitous 6.8 solar mass (M☉) stellar-mass-class black hole—or fuzzball—is 20 kilometers when the effects of spin are excluded. He also determined that the event horizon of a fuzzball would, at a very tiny scale (likely on the order of a few Planck lengths), be very much like a mist: fuzzy, hence the name "fuzzball."
With classical-model black holes, objects passing through the event horizon on their way to the singularity are thought to enter a realm of curved spacetime where the escape velocity exceeds the speed of light—a realm devoid of all structure. Moreover, precisely at the singularity—the heart of a classic black hole—spacetime itself is thought to break down catastrophically since infinite density demands infinite escape velocity; such conditions are problematic with known physics. Under the fuzzball premise, however, the strings comprising matter and photons are believed to fall onto and absorb into the fuzzball's surface, which is located at the event horizon—the threshold at which the escape velocity has achieved the speed of light.
A fuzzball is a black hole; spacetime, photons, and all else not exquisitely close to the surface of a fuzzball are thought to be affected in precisely the same fashion as with the classical model of black holes featuring a singularity at its center. The two theories diverge only at the quantum level; that is, classic black holes and fuzzballs differ only in their internal composition and how they affect virtual particles that form close to their event horizons (see § Information paradox, below). Fuzzballs are thought by their proponents to be the true quantum description of black holes.
=== Densities ===
Fuzzballs become less dense as their mass increases due to fractional tension. When matter or energy (strings) fall onto a fuzzball, more strings are not simply added to the fuzzball; the strings fuse, or join, and in doing so all the quantum information of the in-falling strings becomes part of larger, more complex strings. Due to fractional tension, string tension decreases exponentially as strings become more complex, with more vibration modes, relaxing to considerable lengths. The string theory formulas of Mathur and Lunin produce fuzzball surface radii that precisely equal Schwarzschild radii, which Karl Schwarzschild had calculated using an entirely different mathematical technique 87 years earlier.
Since the volume of fuzzballs is a function of the Schwarzschild radius (2953 meters per M☉ for a non-rotating black hole), fuzzballs have a variable density that decreases as the inverse square of their mass (twice the mass means twice the diameter, hence eight times the volume, and thus one-quarter the density). A typical 6.8 M☉ fuzzball would have a mean density of 4.0×10¹⁷ kg/m³. This is an average, or mean, bulk density; as with neutron stars, the Sun, and its planets, a fuzzball's density varies from the surface, where it is less dense, to its center, where it is most dense. A bit of such a non-spinning fuzzball the size of a drop of water would, on average, have a mass of twenty million metric tons, which is equivalent to that of a granite ball 243 meters in diameter.
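These figures follow from the Schwarzschild radius alone and can be reproduced in a few lines. The Python sketch below is an illustration, not from the article; the 0.05 mL volume assumed for a "drop of water" is this sketch's assumption:

```python
# Rough check of the mean-density figures quoted above.
# Assumes a non-spinning (Schwarzschild) black hole: r_s = 2GM/c^2.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2

def mean_density(mass_kg):
    r = schwarzschild_radius(mass_kg)
    return mass_kg / ((4.0 / 3.0) * math.pi * r**3)

m = 6.8 * M_SUN
print(f"r_s = {schwarzschild_radius(m) / 1e3:.1f} km")       # ~20 km
print(f"mean density = {mean_density(m):.1e} kg/m^3")        # ~4.0e17 kg/m^3

# Mass of a water-drop-sized piece at that mean density
# (drop volume of 0.05 mL is an assumption of this sketch):
drop_volume = 0.05e-6  # m^3
print(f"drop mass = {mean_density(m) * drop_volume / 1e3:.1e} metric tons")
# -> ~2e7 metric tons, i.e. twenty million
```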
Though such densities are almost unimaginably extreme, they are, mathematically speaking, infinitely far from infinite density. Although the densities of typical stellar-mass fuzzballs are extreme—about the same as neutron stars—their densities are many orders of magnitude less than the Planck density (5.155×10⁹⁶ kg/m³), which is equivalent to the mass of the universe packed into the volume of a single atomic nucleus.
Since the mean densities of fuzzballs (and the effective densities of classic black holes) decrease as the inverse square of their mass, fuzzballs greater than 7 M☉ are actually less dense than neutron stars possessing the minimum possible density. Due to the mass-density inverse-square rule, fuzzballs need not even have unimaginable densities. Supermassive black holes, which are found at the center of virtually all galaxies, can have modest densities. For instance, Sagittarius A*, the black hole at the center of our Milky Way galaxy, is 4.3 million M☉. The fuzzball model predicts that a non-spinning supermassive black hole with the same mass as Sagittarius A* has a mean density "only" 51 times that of gold. Moreover, at 3.9 billion M☉ (a rather large super-massive black hole), a non-spinning fuzzball would have a radius of 77 astronomical units—about the same size as the termination shock of the Solar System's heliosphere—and a mean density equal to that of the Earth's atmosphere at sea level (1.2 kg/m³).
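The supermassive cases can be checked the same way; in this sketch (again an illustration, not from the article) the densities of gold and of sea-level air are standard reference values, and everything else follows from the proportionality of the Schwarzschild radius to mass:

```python
# Check of the supermassive-density figures quoted above.
import math

G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30
AU = 1.496e11              # astronomical unit, m
RHO_GOLD = 19300.0         # kg/m^3
RHO_AIR = 1.2              # kg/m^3 at sea level

def mean_density(mass_kg):
    r = 2 * G * mass_kg / c**2
    return mass_kg / ((4.0 / 3.0) * math.pi * r**3)

sgr_a = 4.3e6 * M_SUN
print(mean_density(sgr_a) / RHO_GOLD)   # ~51 times the density of gold

big = 3.9e9 * M_SUN
r = 2 * G * big / c**2
print(r / AU)                           # ~77 AU
print(mean_density(big) / RHO_AIR)      # ~1, i.e. about air density
```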
=== Neutron star collapse ===
Black holes (or fuzzballs) are produced in various ways, most of which are exceedingly violent mass-shedding events like supernovas, kilonovas, and hypernovas. However, an accreting neutron star (one slowly siphoning off mass from a companion star) that exceeds a critical mass limit, Mmax, will suddenly and nonviolently (relatively speaking) collapse into a black hole or fuzzball. Such a collapse can serve as a helpful case study when examining the differences between the physical properties of neutron stars and fuzzballs.
Neutron stars have a maximum possible mass, known as the Tolman–Oppenheimer–Volkoff limit; this limit is not precisely known, but it is believed to lie between 2.2 M☉ and 2.9 M☉. If a neutron star exceeds this mass, neutron degeneracy pressure can no longer resist the force of gravity and it will rapidly collapse until some new physical process takes over. In classical general relativity, the collapsing neutron star reaches a critical density and forms an event horizon; to the outside universe it becomes a black hole, and the collapse proceeds towards a gravitational singularity. In the fuzzball model, the hadrons in its core (neutrons and perhaps a smattering of protons and mesons) decompose into what could be regarded as the final stage of degenerate matter: a ball of strings, which the fuzzball model predicts is the true quantum description of not only black holes but theorized quark stars composed of quark matter.
== Information paradox ==
Classical black holes create a problem for physics known as the black hole information paradox; there is no such paradox under the fuzzball hypothesis. The paradox was first raised in 1972 by Jacob Bekenstein and later popularized by Stephen Hawking. The information paradox is born of a requirement of quantum mechanics that quantum information must be conserved, which conflicts with general relativity's requirement that if black holes have singularities at their centers, quantum information must be extinguished from spacetime. This paradox can be viewed as a contradiction between two very different theories: general relativity, which describes the largest gravity-based phenomena in the Universe, and quantum mechanics, which describes the smallest phenomena. Fuzzball theory purports to resolve this tension because the Type IIB superstring theory it is based on is a quantum description of gravity called supergravity.
A black hole that fed primarily on the stellar atmosphere (protons, neutrons, and electrons) of a nearby companion star should, if it obeyed the known laws of quantum mechanics, grow to have a quantum composition different from another black hole that fed only on light (photons) from neighboring stars and the cosmic microwave background. This follows a core precept of both classical and quantum physics that, in principle, the state of a system at one point in time should determine its state at any other time.
Yet, general relativity's implications for classic black holes are inescapable: Other than the fact that the two black holes would become increasingly massive due to the infalling matter and light, no difference in their quantum compositions would exist, because if singularities have zero volume, black holes have no quantum composition. Moreover, even if quantum information were not extinguished at singularities, it could not climb against infinite gravitational intensity and reach up to and beyond the event horizon where it could reveal itself in normal spacetime. This is called the no-hair theorem, which states that black holes can reveal nothing about themselves to outside observers except their mass, angular momentum, and electric charge, whereby the latter two could theoretically be revealed through a phenomenon known as superradiance.
Stephen Hawking showed that quantum effects will make black holes appear to be blackbody radiators with effective temperatures inversely proportional to the mass of a black hole. This radiation, now called Hawking radiation, cannot circumvent the no-hair theorem as it can reveal only a black hole's mass. For all practical purposes, Hawking radiation is undetectable (see §Testability of the theory, below).
In a purely theoretical sense, the fuzzball theory advanced by Mathur and Lunin goes beyond Hawking's formula relating the blackbody temperature of Hawking radiation and the mass of the black hole emitting it. Fuzzball theory satisfies the requirement that quantum information be conserved because it holds, in part, that the quantum information of the strings that fall onto a fuzzball is preserved as those strings dissolve into and contribute to the fuzzball's quantum makeup. The theory further holds that a fuzzball's quantum information is not only expressed at its surface but tunnels up through the quantum fuzziness of the event horizon, where it can be imprinted on Hawking radiation, which very slowly carries that information into regular spacetime in the form of delicate correlations in the outgoing quanta.
Fuzzball theory's proposed solution to the black hole information paradox resolves a significant incompatibility between quantum mechanics and general relativity. At present, there is no widely accepted theory of quantum gravity—a quantum description of gravity—that is in harmony with general relativity. However, all five variations of superstring theory, including the Type IIB variant upon which fuzzball theory is based, have quantum gravity incorporated into them. Moreover, all five versions have been hypothesized as actually constituting five different limits, or subsets, that are unified under M-theory.
== Testability of the theory ==
As no direct experimental evidence supports either string theory or fuzzball theory, both are products purely of calculations and theoretical research. However, theories must be experimentally testable if there is to be a possibility of ascertaining their validity. To be in full accordance with the scientific method and one day be widely accepted as true—as are Einstein's theories of special and general relativity—theories regarding the natural world must make predictions that are consistently affirmed through observations of nature. Superstring theory predicts the existence of highly elusive particles that, while they are actively being searched for, have yet to be detected. Moreover, fuzzball theory cannot be substantiated by observing its predicted subtle effects on Hawking radiation because the radiation itself is for all practical purposes undetectable. However, fuzzball theory may be testable through gravitational-wave astronomy.
The first challenge to the testability of fuzzball theory is that it is rooted in unproven superstring theory, which is short for supersymmetric string theory. Supersymmetry predicts that for each known particle in the Standard Model, a superpartner particle exists that differs by spin 1/2. This means that for every boson (an integer-spin particle, with spin 0, 1, or 2), there is a fermion-like superpartner with a half-odd-integer spin (e.g., 1/2 or 3/2) that possesses a rest mass; the superpartners of the gauge bosons are known as gauginos. Examining this spin-1/2 supersymmetry in the opposite direction, superstring theory predicts that fermions from the Standard Model have boson-like superpartners known as sfermions, except that unlike actual gauge bosons from the Standard Model, sfermions do not act as force carriers. Bosons (e.g., photons) and the boson-like sfermions will readily overlap each other when crowded, whereas fermions (such as electrons, protons, and quarks) and the fermion-like gauginos will not; this is one reason why superpartners, if they exist, have properties that are exceedingly different from their Standard Model counterparts. Take the example of the photon, a massless boson with an integer spin of 1 and the carrier of electromagnetism in the Standard Model; it is predicted to have a superpartner called a photino, a fermion with a half-odd-integer spin of 1/2. Conversely, the electron (spin 1/2) is an example of a fermion whose superpartner is the spin-0 selectron, a boson that is not considered to be a primary force carrier.
The experimental detection of superpartners would not only bolster superstring theory but would also help fill gaps in current particle physics, such as the likely composition of dark matter and the muon's anomalous magnetic moment (the muon's g-factor would be exactly 2 in the simplest theory but is measured at about 2.00233184, and small discrepancies from Standard Model predictions could point to hidden interactions); particle physicists have accordingly been searching for these superpartners. Based on cosmological effects, there is strong evidence for the existence of dark matter of some sort (see Dark matter: Observational evidence), but if it is composed of subatomic particles, those particles have proven notoriously elusive despite the wide variety of detection techniques employed since 1986. This difficulty in detecting supersymmetric particles is not surprising to particle physicists, since the lightest ones are believed to be stable, electrically neutral, and to interact weakly with the particles of the Standard Model. Though many searches using particle colliders have ruled out certain mass ranges for supersymmetric particles, the hunt continues.
Fuzzball theory resolves a long-standing conflict between general relativity and quantum mechanics by holding that quantum information is preserved in fuzzballs and that Hawking radiation originating within the Planck-scale quantum foam just above a fuzzball's surface is subtly encoded with that information. As a practical matter, however, Hawking radiation is virtually impossible to detect because black holes emit it at astronomically low power levels and the individual photons constituting Hawking radiation have extraordinarily little energy. This underlies why theoretically perfectly quiescent black holes (ones in a universe containing no matter or other types of electromagnetic radiation to absorb) evaporate so slowly as they lose energy (and equivalent amounts of mass) via Hawking radiation; even a modest 4.9 M☉ black hole would require 10⁵⁹ times the current age of the Universe to vanish. Moreover, a top-of-the-list 106 billion M☉ supermassive black hole would require ten million-trillion-trillion times longer still to evaporate: 10⁹⁰ times the age of the Universe.
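Both lifetime ratios follow from the standard Hawking evaporation time t = 5120πG²M³/(ħc⁴). A short Python check (an illustration, not part of the article):

```python
# Order-of-magnitude check of the evaporation times quoted above.
import math

G, c, HBAR = 6.674e-11, 2.998e8, 1.055e-34
M_SUN = 1.989e30
AGE_UNIVERSE_S = 4.35e17   # ~13.8 billion years, in seconds

def evaporation_time(mass_kg):
    # Standard Hawking lifetime for an isolated, non-accreting black hole.
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * c**4)

for solar_masses in (4.9, 106e9):
    t = evaporation_time(solar_masses * M_SUN)
    print(f"{solar_masses} M_sun: {t / AGE_UNIVERSE_S:.1e} x age of Universe")
# -> ~1.8e59 and ~1.8e90 times the age of the Universe, matching the text.
```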
Hawking showed that the energy of photons released by Hawking radiation is inversely proportional to the mass of a black hole and, consequently, the smallest black holes emit the most energetic photons that are the least difficult to detect. However, the radiation emitted by even a minimum-size, 2.7 M☉ black hole (or fuzzball) comprises extremely low-energy photons that are equivalent to those emitted by a black body with a temperature of around 23 billionths of one kelvin above absolute zero. More challenging still, such a black hole has a radiated power—for the entire black hole—of 1.2×10⁻²⁹ watt (12 billion-billion-billionths of one milliwatt). Such an infinitesimal transmitted power is to one watt as 1/3000th of a drop of water (about one-quarter the volume of a typical grain of table salt) is to all the Earth's oceans.
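The temperature, power, and the photon rate quoted below can be reproduced from Hawking's formulas. In this sketch (an illustration, not from the article), the typical emitted photon energy of about 2.8 k_BT, from Wien's displacement law, is an assumption used to convert power into a photon rate:

```python
# Check of the Hawking temperature and radiated power for a 2.7 M_sun hole.
import math

G, c, HBAR, KB = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23
M_SUN = 1.989e30

M = 2.7 * M_SUN
T = HBAR * c**3 / (8 * math.pi * G * M * KB)        # Hawking temperature
P = HBAR * c**6 / (15360 * math.pi * G**2 * M**2)   # total radiated power
print(f"T = {T:.1e} K")   # ~2.3e-8 K, i.e. ~23 nK above absolute zero
print(f"P = {P:.1e} W")   # ~1.2e-29 W

# Typical photon energy ~2.8*kB*T (Wien's law, an assumed conversion):
print(f"rate ~ {P / (2.8 * KB * T):.0f} photons/s")  # ~14, i.e. order ten
```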
Critically though, when signals are this weak, the challenge is no longer one of classic radio astronomy technological issues like gain and signal-to-noise ratio; Hawking radiation comprises individual photon quanta, so such a weak signal means a 2.7 M☉ black hole is emitting at most only ten photons per second. Even if such a black hole were only 100 light-years away, the odds of just one of its Hawking radiation photons landing anywhere on Earth—let alone being captured by an antenna—while a human is watching are astronomically improbable. Importantly, the above values are for the smallest possible stellar-mass black holes; far more difficult yet to detect is the Hawking radiation emitted by supermassive black holes at the center of galaxies. For instance, M87*, which is an unremarkable supermassive black hole, emits Hawking radiation at a near-nonexistent radiant power of at most 13 photons per century and does so with a wavelength so great that a receiving antenna possessing even a modest degree of absorption efficiency would be larger than the Solar System.
However, fuzzball theory may be testable through gravitational-wave astronomy. Gravitational wave observatories like the Laser Interferometer Gravitational-Wave Observatory (LIGO) have proven to be a revolutionary advancement in astronomy and are enabling astronomers and theoretical physicists to develop ever-more detailed insights into compact objects such as neutron stars and black holes. Ever since the first direct detection of gravitational waves, a 2015 event known as GW150914, which was a merger between a binary pair of stellar-mass black holes, gravitational-wave signals have so far matched the predictions of general relativity for classical black holes with singularities at their centers. However, an Italian team of scientists that ran computer simulations suggested in 2021 that existing gravitational-wave observatories are capable of discerning fuzzball-theory-supporting evidence in the signals from merging binary black holes (and the resultant effects on ringdowns) by virtue of the nontrivial unique attributes of fuzzballs, which are extended objects with a physical structure. The team's simulations predicted slower-than-expected decay rates for certain vibration modes that would also be dominated by "echoes" from earlier ring oscillations. Moreover, a separate Italian team a year earlier posited that future gravitational-wave detectors, such as the proposed Laser Interferometer Space Antenna (LISA), which is intended to have the ability to observe high-mass binary mergers at frequencies far below the limits of current observatories, would improve the ability to confirm aspects of fuzzball theory by orders of magnitude.
== References ==
== External links ==
Are Black Holes Fuzzballs? – Space Today Online
The Fuzzball Fix for a Black Hole Paradox, June 23, 2015 – Quanta Magazine
Information paradox solved? If so, Black Holes are "Fuzzballs" – The Ohio State University
ArXiv.org link: Unwinding of strings thrown into a fuzzball – Stefano Giusto and Samir D. Mathur
Astronomers take virtual plunge into black hole (84 MB) (10 MB version), a 40-second animation produced by JILA – a joint venture of the University of Colorado at Boulder and the NIST
Video lecture series at CERN (four parts approximately an hour each): "The black hole information problem and the fuzzball proposal", Part 1, Part 2, Part 3, Part 4 | Wikipedia/Fuzzball_(string_theory) |
A dark star is a theoretical object compatible with Newtonian mechanics that, due to its large mass, has a surface escape velocity that equals or exceeds the speed of light. Whether light is affected by gravity under Newtonian mechanics is unclear, but if it were accelerated the same way as projectiles, any light emitted at the surface of a dark star would be trapped by the star's gravity, rendering it dark; hence the name. Dark stars are analogous to black holes in general relativity.
== Dark star theory history ==
=== John Michell and dark stars ===
In 1783, geologist John Michell wrote a letter to Henry Cavendish outlining the expected properties of dark stars; it was published by The Royal Society in their 1784 volume. Michell calculated that when the escape velocity at the surface of a star was equal to or greater than lightspeed, the generated light would be gravitationally trapped, so that the star would not be visible to a distant astronomer.
If the semi-diameter of a sphere of the same density as the Sun were to exceed that of the Sun in the proportion of 500 to 1, a body falling from an infinite height towards it would have acquired at its surface greater velocity than that of light, and consequently supposing light to be attracted by the same force in proportion to its vis inertiae, with other bodies, all light emitted from such a body would be made to return towards it by its own proper gravity.
This assumes that gravity influences light in the same way as massive objects.
Michell's idea for calculating the number of such "invisible" stars anticipated 20th century astronomers' work: he suggested that since a certain proportion of double-star systems might be expected to contain at least one "dark" star, we could search for and catalogue as many double-star systems as possible, and identify cases where only a single circling star was visible. This would then provide a statistical baseline for calculating the amount of other unseen stellar matter that might exist in addition to the visible stars.
=== Dark stars and gravitational shifts ===
Michell also suggested that future astronomers might be able to identify the surface gravity of a distant star by seeing how far the star's light was shifted to the weaker end of the spectrum, a precursor of Einstein's 1911 gravity-shift argument. However, Michell cited Newton as saying that blue light was less energetic than red (Newton thought that more massive particles were associated with bigger wavelengths), so Michell's predicted spectral shifts were in the wrong direction. It is difficult to tell whether Michell's careful citation of Newton's position reflected doubt on Michell's part about whether Newton was correct, or simply academic thoroughness.
=== Wave theory of light ===
In 1796, the mathematician Pierre-Simon Laplace promoted the same idea in the first and second editions of his book Exposition du système du Monde, independently of Michell.
With the development of the wave theory of light, light came to be thought of as a massless wave, not influenced by gravity; this may be why Laplace removed the idea from later editions. Physicists as a group dropped the idea, although the German physicist, mathematician, and astronomer Johann Georg von Soldner continued to work with Newton's corpuscular theory of light as late as 1804.
== Comparisons with black holes ==
=== Indirect radiation ===
Dark stars and black holes both have a surface escape velocity equal to or greater than lightspeed, and a critical radius of r ≤ 2M (in geometrized units where G = c = 1).
However, the dark star is capable of emitting indirect radiation – outward-aimed light and matter can leave the r = 2M surface briefly before being recaptured, and while outside the critical surface, can interact with other matter, or be accelerated free from the star through such interactions. A dark star, therefore, has a rarefied atmosphere of "visiting particles", and this ghostly halo of matter and light can radiate, albeit weakly. Also, as faster-than-light speeds are possible in Newtonian mechanics, particles can escape.
=== Radiation effects ===
A dark star may emit indirect radiation as described above. Black holes as described by current theories about quantum mechanics emit radiation through a different process, Hawking radiation, first postulated in 1975. The radiation emitted by a dark star depends on its composition and structure; Hawking radiation, by the no-hair theorem, is generally thought of as depending only on the black hole's mass, charge, and angular momentum, although the black hole information paradox makes this controversial.
=== Light-bending effects ===
If Newtonian physics does have a gravitational deflection of light (Newton, Cavendish, Soldner), general relativity predicts twice as much deflection in a light beam skimming the Sun. This difference can be explained by the additional contribution of the curvature of space under modern theory: while Newtonian gravitation is analogous to the space-time components of general relativity's Riemann curvature tensor, the curvature of space corresponds to the purely spatial components, and both forms of curvature contribute to the total deflection.
== See also ==
Black hole
Magnetospheric eternally collapsing object
Q star
== References ==
Michell, John (1784), "On the Means of Discovering the Distance, Magnitude, &c. of the Fixed Stars, in Consequence of the Diminution of the Velocity of Their Light, in Case Such a Diminution Should be Found to Take Place in any of Them, and Such Other Data Should be Procured from Observations, as Would be Farther Necessary for That Purpose. By the Rev. John Michell, B. D. F. R. S. In a Letter to Henry Cavendish, Esq. F. R. S. and A. S", Philosophical Transactions of the Royal Society of London, 74: 35–57, Bibcode:1784RSPT...74...35M, doi:10.1098/rstl.1784.0008, ISSN 0080-4614, JSTOR 106576
Schaffer, Simon (1979). "John Michell and black holes". Journal for the History of Astronomy. 10: 42–43. Bibcode:1979JHA....10...42S. doi:10.1177/002182867901000104. S2CID 123958527. Archived from the original on 2020-05-22. Retrieved 2016-02-04.
Gibbons, Gary (28 June 1979). "The man who invented black holes [his work emerges out of the dark after two centuries]". New Scientist: 1101.
Israel, Werner (1987). "Dark stars: The evolution of an idea". In Hawking, Stephen W; Israel, Werner (eds.). Three hundred years of gravitation. pp. 199–276. ISBN 9780521379762.
Eisenstaedt, J (Dec 1991). "De L'influence de la gravitation sur la propagation de la lumière en théorie Newtonienne. L'archéologie des trous noirs" [The influence of gravity on the propagation of light in Newtonian theory. The archaeology of black holes]. Archive for History of Exact Sciences. 42 (4): 315–386. Bibcode:1991AHES...42..315E. doi:10.1007/BF00375157. S2CID 121763556.
Thorne, Kip (January 1, 1995). Black Holes and Time Warps: Einstein's Outrageous Legacy (Reprint ed.). W. W. Norton & Company. ISBN 978-0-393-31276-8. See Chapter 3 "Black holes discovered and rejected" | Wikipedia/Dark_star_(Newtonian_mechanics) |
The Hawking energy or Hawking mass is one of the possible definitions of mass in general relativity. It is a measure of the bending of ingoing and outgoing rays of light that are orthogonal to a 2-sphere surrounding the region of space whose mass is to be defined.
== Definition ==
Let (M³, g_ab) be a 3-dimensional sub-manifold of a relativistic spacetime, and let Σ ⊂ M³ be a closed 2-surface. Then the Hawking mass m_H(Σ) of Σ is defined to be

$$m_{H}(\Sigma) := \sqrt{\frac{\operatorname{Area}\Sigma}{16\pi}}\left(1 - \frac{1}{16\pi}\int_{\Sigma} H^{2}\,da\right),$$

where H is the mean curvature of Σ.
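As a quick sanity check of the definition (an illustration, not from the article), one can evaluate it for a round sphere in flat Euclidean 3-space, where no mass is enclosed:

```latex
% Round sphere of radius r in flat R^3: mean curvature H = 2/r, area 4*pi*r^2.
\[
\int_{\Sigma} H^{2}\,da = \frac{4}{r^{2}}\cdot 4\pi r^{2} = 16\pi,
\qquad
m_{H}(\Sigma) = \sqrt{\frac{4\pi r^{2}}{16\pi}}\,\Bigl(1-\frac{16\pi}{16\pi}\Bigr) = 0,
\]
% so the Hawking mass correctly vanishes when no mass is present.
```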
== Properties ==
In the Schwarzschild metric, the Hawking mass of any sphere S_r about the central mass is equal to the value m of the central mass.
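This can be verified directly (a standard computation sketched here, not taken from the article): on the t = const slice of the Schwarzschild metric, the coordinate sphere S_r has mean curvature H = (2/r)√(1 − 2m/r), so

```latex
\[
\int_{S_r} H^{2}\,da
  = \frac{4}{r^{2}}\Bigl(1-\frac{2m}{r}\Bigr)\cdot 4\pi r^{2}
  = 16\pi\Bigl(1-\frac{2m}{r}\Bigr),
\]
\[
m_{H}(S_r)
  = \sqrt{\frac{4\pi r^{2}}{16\pi}}\,\Bigl(1-\Bigl(1-\frac{2m}{r}\Bigr)\Bigr)
  = \frac{r}{2}\cdot\frac{2m}{r}
  = m.
\]
```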
A result of Geroch implies that Hawking mass satisfies an important monotonicity condition. Namely, if M³ has nonnegative scalar curvature, then the Hawking mass of Σ is non-decreasing as the surface Σ flows outward at a speed equal to the inverse of the mean curvature. In particular, if Σ_t is a family of connected surfaces evolving according to

$$\frac{dx}{dt} = \frac{1}{H}\,\nu(x),$$

where H is the mean curvature of Σ_t and ν is the unit vector opposite of the mean curvature direction, then

$$\frac{d}{dt}\,m_{H}(\Sigma_{t}) \geq 0.$$
Said otherwise, Hawking mass is increasing for the inverse mean curvature flow.
Hawking mass is not necessarily positive. However, it is asymptotic to the ADM or the Bondi mass, depending on whether the surface is asymptotic to spatial infinity or null infinity.
== See also ==
Mass in general relativity
Inverse mean curvature flow
== References ==
== Further reading ==
Section 6.1 in Szabados, László B. (December 2004). Quasi-Local Energy-Momentum and Angular Momentum in GR: A Review Article. Vol. 7. p. 4. Bibcode:2004LRR.....7....4S. doi:10.12942/lrr-2004-4. ISSN 2367-3613. PMC 5255888. PMID 28179865. S2CID 40602589.
Hoffman, David A., ed. (2005). Global theory of minimal surfaces: proceedings of the Clay Mathematics Institute 2001 Summer School, Mathematical Sciences Research Institute, Berkeley, California, June 25-July 27, 2001. Clay mathematics proceedings. Providence, RI : Cambridge, MA: American Mathematical Society ; Clay Mathematics Institute. ISBN 978-0-8218-3587-6. | Wikipedia/Hawking_energy |
The U.S. Energy Information Administration (EIA) is a principal agency of the U.S. Federal Statistical System responsible for collecting, analyzing, and disseminating energy information to promote sound policymaking, efficient markets, and public understanding of energy and its interaction with the economy and the environment. EIA programs cover data on coal, petroleum, natural gas, electric, renewable and nuclear energy. EIA is part of the U.S. Department of Energy.
== Background ==
The Department of Energy Organization Act of 1977 established EIA as the primary federal government authority on energy statistics and analysis, building upon systems and organizations first established in 1974 following the oil market disruption of 1973.
EIA conducts a comprehensive data collection program that covers the full spectrum of energy sources, end uses, and energy flows; generates short- and long-term domestic and international energy projections; and performs informative energy analyses.
EIA disseminates its data products, analyses, reports, and services to customers and stakeholders primarily through its website and the customer contact center.
Located in Washington, D.C., EIA has about 325 federal employees and a budget of $126.8 million in fiscal year 2021.
== List of administrators ==
== Independence ==
By law, EIA's products are prepared independently of policy considerations. EIA neither formulates nor advocates any policy conclusions. The Department of Energy Organization Act allows EIA's processes and products to be independent from review by Executive Branch officials; specifically, Section 205(d) says:
"The Administrator shall not be required to obtain the approval of any other officer or employee of the Department in connection with the collection or analysis of any information; nor shall the Administrator be required, prior to publication, to obtain the approval of any other officer or employee of the United States with respect to the substance of any statistical or forecasting technical reports which he has prepared in accordance with law."
== Products ==
More than two million people use the EIA's information online each month. Some of the EIA's products include:
General Interest Energy Information
Energy Explained: Energy information written for a general, non-technical audience. A nonpartisan guide to the entire range of energy topics from biodiesel to uranium.
Energy Kids: Educates students, citizens, and even policymakers and journalists about energy.
Energy Glossary: Common energy terms defined in plain language.
Timely Analysis
Today in Energy: Informative content published every weekday that includes a graph or map and a short, timely story written in plain language that highlights current energy issues, topics, and data trends.
This Week in Petroleum: Weekly summary and explanation of events in United States and world petroleum markets, including weekly data. This report, together with its companion, the Weekly Petroleum Status Report, is a handy tool for investors. These are published every Wednesday (unless Monday is a holiday), with the preliminary version at 10:30 AM Eastern Time and the full report following at 1 PM Eastern. The Weekly Petroleum Status Report provides estimates of the amount of crude oil and petroleum products in storage, so that one may get a sense of whether stocks are building or declining, and of US oil production, so that one can likewise tell whether output is increasing or decreasing. It is not unusual for the price of crude oil to jump up or down by a few percentage points immediately after this report is released.
Natural Gas Weekly Update: Weekly summary and discussion of events and trends in U.S. natural gas markets.
Data and Surveys
Gasoline and Diesel Fuel Update: Weekly price data for U.S. national and regional averages.
Monthly Energy Review: Provides statistics on monthly and annual U.S. energy consumption going back in some cases to 1949. The figures are given in units of quads (quadrillion BTU).
Annual Energy Review: EIA's primary report of historical annual energy statistics. For many series, data begin with the year 1949. This report has been superseded by the Monthly Energy Review and was not produced for 2012.
Country Energy Profiles: Data by country, region, and commercial group (OECD, OPEC) for 219 countries with additional country analysis notes for 87 of these.
Country Analysis Briefs: EIA's in-depth analyses of energy production, consumption, imports, and exports for 36 individual countries and regions.
Residential Energy Consumption Survey: EIA's comprehensive survey and analysis of residential energy consumption, household characteristics, and appliance saturation.
Commercial Buildings Energy Consumption Survey: A national sample survey that collects information on the stock of U.S. commercial buildings, including their energy-related building characteristics and energy usage data (consumption and expenditures).
Projections and Outlooks
Short-Term Energy Outlook: Energy projections for the next 13–24 months, updated monthly.
Annual Energy Outlook: Projection and analysis of U.S. energy supply, demand, and prices through 2040 based on EIA's National Energy Modeling System. Projections are currently based on existing legislation, without assumption of any future congressional action or technological advancement. In 2015, after the release of the AEO 2015 report, the Advanced Energy Economy (AEE) Institute criticized EIA for "consistently underestimat[ing] the growth rate of renewable energy, leading to 'misperceptions' about the performance of these resources in the marketplace". AEE points out that the average power purchase agreement (PPA) for wind power was already at $24/MWh in 2013. Likewise, PPAs for utility-scale solar PV are seen at current levels of $50–$75/MWh. These figures contrast strongly with EIA's estimated LCOE of $125/MWh (or $114/MWh including subsidies) for solar PV in 2020. This criticism has been repeated every year since.
International Energy Outlook: EIA's assessment of the outlook for international energy markets through 2040.
== Legislation ==
The Federal Energy Administration Act of 1974 created the Federal Energy Administration (FEA), the first U.S. agency with the primary focus on energy and mandated it to collect, assemble, evaluate, and analyze energy information. It also provided the FEA with data collection enforcement authority for gathering data from energy producing and major consuming firms. Section 52 of the FEA Act mandated establishment of the National Energy Information System to "… contain such energy information as is necessary to carry out the Administration's statistical and forecasting activities …"
The Department of Energy Organization Act of 1977, Public Law 95-91, created the Department of Energy. Section 205 of this law established the Energy Information Administration (EIA) as the primary federal government authority on energy statistics and analysis to carry out a "
...central, comprehensive, and unified energy data and information program which will collect, evaluate, assemble, analyze, and disseminate data and information which is relevant to energy resource reserves, energy production, demand, and technology, and related economic and statistical information, or which is relevant to the adequacy of energy resources to meet demands in the near and longer term future for the Nation's economic and social needs."
The same law established that EIA's processes and products are independent from review by Executive Branch officials.
The majority of EIA energy data surveys are based on the general mandates set forth above. However, there are some surveys specifically mandated by law, including:
EIA-28, Financial Reporting System - Section 205(h) of the DOE Organization Act.
EIA-1605 and 1605EZ, Voluntary Reporting of Greenhouse Gases - Section 1605(b) of the Energy Policy Act of 1992.
EIA-886, Annual Survey of Alternative Fueled Vehicle Suppliers and Users - Section 503(b) of the Energy Policy Act of 1992.
EIA-858, Uranium Marketing Annual Survey - Section 1015 of the Energy Policy Act of 1992.
EIA-846A-C, Manufacturing Energy Consumption Survey - Section 205(i) of the DOE Organization Act (the act calls for a biennial survey; however, this survey is done quadrennially due to resource constraints).
EIA-457A-G, Residential Energy Consumption Survey - Section 205(k) of the DOE Organization Act (the act calls for a triennial survey; however, this survey is done quadrennially due to resource constraints).
EIA-871A-F, Commercial Buildings Energy Consumption Survey - Section 205(k) of the DOE Organization Act (the act calls for a triennial survey; however, this survey is done quadrennially due to resource constraints).
Petroleum Marketing Surveys - Section 507 of Part A of Title V of the Energy Policy and Conservation Act of 1975 broadly directs EIA to collect information on the pricing, supply, and distribution of petroleum products by product category at the wholesale and retail levels, on a State-by-State basis, which was collected as of September 1, 1981, by the Energy Information Administration.
== See also ==
Canadian Centre for Energy Information
== References ==
== External links ==
Energy Information Administration
Energy Information Administration in the Federal Register | Wikipedia/Energy_Information_Administration |
Mechanics (from Ancient Greek μηχανική (mēkhanikḗ) 'of machines') is the area of physics concerned with the relationships between force, matter, and motion among physical objects. Forces applied to objects may result in displacements, which are changes of an object's position relative to its environment.
Theoretical expositions of this branch of physics have their origins in Ancient Greece, for instance, in the writings of Aristotle and Archimedes (see History of classical mechanics and Timeline of classical mechanics). During the early modern period, scientists such as Galileo Galilei, Johannes Kepler, Christiaan Huygens, and Isaac Newton laid the foundation for what is now known as classical mechanics.
As a branch of classical physics, mechanics deals with bodies that are either at rest or are moving with velocities significantly less than the speed of light. It can also be defined as the physical science that deals with the motion of and forces on bodies not in the quantum realm.
== History ==
=== Antiquity ===
The ancient Greek philosophers were among the first to propose that abstract principles govern nature. The main theory of mechanics in antiquity was Aristotelian mechanics, though an alternative theory is exposed in the pseudo-Aristotelian Mechanical Problems, often attributed to one of his successors.
There is another tradition that goes back to the ancient Greeks where mathematics is used more extensively to analyze bodies statically or dynamically, an approach that may have been stimulated by prior work of the Pythagorean Archytas. Examples of this tradition include pseudo-Euclid (On the Balance), Archimedes (On the Equilibrium of Planes, On Floating Bodies), Hero (Mechanica), and Pappus (Collection, Book VIII).
=== Medieval age ===
In the Middle Ages, Aristotle's theories were criticized and modified by a number of figures, beginning with John Philoponus in the 6th century. A central problem was that of projectile motion, which was discussed by Hipparchus and Philoponus.
Persian Islamic polymath Ibn Sīnā published his theory of motion in The Book of Healing (1020). He said that an impetus is imparted to a projectile by the thrower, and viewed it as persistent, requiring external forces such as air resistance to dissipate it. Ibn Sīnā made a distinction between 'force' and 'inclination' (called "mayl"), and argued that an object gains mayl when it is in opposition to its natural motion. He concluded that continuation of motion is attributed to the inclination transferred to the object, and that the object remains in motion until the mayl is spent. He also claimed that a projectile in a vacuum would not stop unless acted upon, consistent with Newton's first law of motion.
On the question of a body subject to a constant (uniform) force, the 12th-century Jewish-Arab scholar Hibat Allah Abu'l-Barakat al-Baghdaadi (born Nathanel, Iraqi, of Baghdad) stated that constant force imparts constant acceleration. According to Shlomo Pines, al-Baghdaadi's theory of motion was "the oldest negation of Aristotle's fundamental dynamic law [namely, that a constant force produces a uniform motion], [and is thus an] anticipation in a vague fashion of the fundamental law of classical mechanics [namely, that a force applied continuously produces acceleration]."
Influenced by earlier writers such as Ibn Sīnā and al-Baghdaadi, the 14th-century French priest Jean Buridan developed the theory of impetus, which later developed into the modern theories of inertia, velocity, acceleration and momentum. This work and others were developed in 14th-century England by the Oxford Calculators such as Thomas Bradwardine, who studied and formulated various laws regarding falling bodies. The concept of uniformly accelerated motion (as of falling bodies) was also worked out by the 14th-century Oxford Calculators.
=== Early modern age ===
Two central figures in the early modern age are Galileo Galilei and Isaac Newton. Galileo's final statement of his mechanics, particularly of falling bodies, is his Two New Sciences (1638). Newton's 1687 Philosophiæ Naturalis Principia Mathematica provided a detailed mathematical account of mechanics, using the newly developed mathematics of calculus and providing the basis of Newtonian mechanics.
There is some dispute over priority of various ideas: Newton's Principia is certainly the seminal work and has been tremendously influential, and many of the mathematics results therein could not have been stated earlier without the development of the calculus. However, many of the ideas, particularly as pertain to inertia and falling bodies, had been developed by prior scholars such as Christiaan Huygens and the less-known medieval predecessors. Precise credit is at times difficult or contentious because scientific language and standards of proof changed, so whether medieval statements are equivalent to modern statements or sufficient proof, or instead similar to modern statements and hypotheses is often debatable.
=== Modern age ===
Two main modern developments in mechanics are Einstein's general relativity and quantum mechanics, both developed in the 20th century and based in part on earlier 19th-century ideas. The development of modern continuum mechanics, particularly in the areas of elasticity, plasticity, fluid dynamics, electrodynamics, and thermodynamics of deformable media, started in the second half of the 20th century.
== Types of mechanical bodies ==
The term body is used to stand for a wide assortment of objects, including particles, projectiles, spacecraft, stars, parts of machinery, parts of solids, parts of fluids (gases and liquids), and so on.
Other distinctions between the various sub-disciplines of mechanics concern the nature of the bodies being described. Particles are bodies with little (known) internal structure, treated as mathematical points in classical mechanics. Rigid bodies have size and shape, but retain a simplicity close to that of the particle, adding just a few so-called degrees of freedom, such as orientation in space.
Otherwise, bodies may be semi-rigid, i.e. elastic, or non-rigid, i.e. fluid. These subjects have both classical and quantum divisions of study.
For instance, the motion of a spacecraft, regarding its orbit and attitude (rotation), is described by the relativistic theory of classical mechanics, while the analogous movements of an atomic nucleus are described by quantum mechanics.
== Sub-disciplines ==
The following are the three main designations consisting of various subjects that are studied in mechanics.
Note that there is also the "theory of fields" which constitutes a separate discipline in physics, formally treated as distinct from mechanics, whether it be classical fields or quantum fields. But in actual practice, subjects belonging to mechanics and fields are closely interwoven. Thus, for instance, forces that act on particles are frequently derived from fields (electromagnetic or gravitational), and particles generate fields by acting as sources. In fact, in quantum mechanics, particles themselves are fields, as described theoretically by the wave function.
=== Classical ===
The following are described as forming classical mechanics:
Newtonian mechanics, the original theory of motion (kinematics) and forces (dynamics)
Analytical mechanics is a reformulation of Newtonian mechanics with an emphasis on system energy, rather than on forces. There are two main branches of analytical mechanics:
Hamiltonian mechanics, a theoretical formalism, based on the principle of conservation of energy
Lagrangian mechanics, another theoretical formalism, based on the principle of the least action
Classical statistical mechanics generalizes ordinary classical mechanics to consider systems in an unknown state; often used to derive thermodynamic properties.
Celestial mechanics, the motion of bodies in space: planets, comets, stars, galaxies, etc.
Astrodynamics, spacecraft navigation, etc.
Solid mechanics, elasticity, plasticity, or viscoelasticity exhibited by deformable solids
Fracture mechanics
Acoustics, sound (density, variation, propagation) in solids, fluids and gases
Statics, semi-rigid bodies in mechanical equilibrium
Fluid mechanics, the motion of fluids
Soil mechanics, mechanical behavior of soils
Continuum mechanics, mechanics of continua (both solid and fluid)
Hydraulics, mechanical properties of liquids
Fluid statics, liquids in equilibrium
Applied mechanics (also known as engineering mechanics)
Biomechanics, solids, fluids, etc. in biology
Biophysics, physical processes in living organisms
Relativistic or Einsteinian mechanics
=== Quantum ===
The following are categorized as being part of quantum mechanics:
Schrödinger wave mechanics, used to describe the movements of the wavefunction of a single particle.
Matrix mechanics is an alternative formulation that allows considering systems with a finite-dimensional state space.
Quantum statistical mechanics generalizes ordinary quantum mechanics to consider systems in an unknown state; often used to derive thermodynamic properties.
Particle physics, the motion, structure, and behavior of fundamental particles
Nuclear physics, the motion, structure, and reactions of nuclei
Condensed matter physics, quantum gases, solids, liquids, etc.
Historically, classical mechanics had been around for nearly a quarter millennium before quantum mechanics developed. Classical mechanics originated with Isaac Newton's laws of motion in Philosophiæ Naturalis Principia Mathematica, developed over the seventeenth century. Quantum mechanics developed later, over the early twentieth century, precipitated by Planck's postulate and Albert Einstein's explanation of the photoelectric effect. Both fields are commonly held to constitute the most certain knowledge that exists about physical nature.
Classical mechanics has especially often been viewed as a model for other so-called exact sciences. Essential in this respect is the extensive use of mathematics in theories, as well as the decisive role played by experiment in generating and testing them.
Quantum mechanics is broader in scope, as it encompasses classical mechanics as a sub-discipline which applies under certain restricted circumstances. According to the correspondence principle, there is no contradiction or conflict between the two subjects; each simply pertains to specific situations. The correspondence principle states that the behavior of systems described by quantum theories reproduces classical physics in the limit of large quantum numbers: if quantum mechanics is applied to a large system (e.g., a baseball), the result would be almost the same as if classical mechanics had been applied. Quantum mechanics has superseded classical mechanics at the foundation level and is indispensable for the explanation and prediction of processes at the molecular, atomic, and sub-atomic level. However, for macroscopic processes classical mechanics is able to solve problems which are unmanageably difficult (mainly due to computational limits) in quantum mechanics and hence remains useful and well used.
Modern descriptions of such behavior begin with a careful definition of such quantities as displacement (distance moved), time, velocity, acceleration, mass, and force. Until about 400 years ago, however, motion was explained from a very different point of view. For example, following the ideas of Greek philosopher and scientist Aristotle, scientists reasoned that a cannonball falls down because its natural position is in the Earth; the Sun, the Moon, and the stars travel in circles around the Earth because it is the nature of heavenly objects to travel in perfect circles.
Often cited as the father of modern science, Galileo brought together the ideas of other great thinkers of his time and began to calculate motion in terms of distance travelled from some starting position and the time that it took. He showed that the speed of falling objects increases steadily during the time of their fall. This acceleration is the same for heavy objects as for light ones, provided air friction (air resistance) is discounted. The English mathematician and physicist Isaac Newton improved this analysis by defining force and mass and relating these to acceleration. For objects traveling at speeds close to the speed of light, Newton's laws were superseded by Albert Einstein's theory of relativity. For atomic and subatomic particles, Newton's laws were superseded by quantum theory. For everyday phenomena, however, Newton's three laws of motion remain the cornerstone of dynamics, which is the study of what causes motion.
=== Relativistic ===
Akin to the distinction between quantum and classical mechanics, Albert Einstein's general and special theories of relativity have expanded the scope of Newton and Galileo's formulation of mechanics. The differences between relativistic and Newtonian mechanics become significant and even dominant as the velocity of a body approaches the speed of light. For instance, in Newtonian mechanics, the kinetic energy of a free particle is E = ½mv², whereas in relativistic mechanics, it is E = (γ − 1)mc² (where γ is the Lorentz factor; this formula reduces to the Newtonian expression in the low energy limit).
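As a concrete comparison (an illustrative sketch, not from the article), the two expressions can be evaluated side by side; the chosen test speeds are arbitrary:

```python
# Newtonian vs. relativistic kinetic energy for a 1 kg test mass.
C = 2.998e8  # speed of light, m/s

def gamma(v):
    return 1.0 / (1.0 - (v / C) ** 2) ** 0.5

def ke_newton(m, v):
    return 0.5 * m * v**2

def ke_relativistic(m, v):
    return (gamma(v) - 1.0) * m * C**2

m = 1.0
for v in (30.0, 0.1 * C, 0.9 * C):   # highway speed, 0.1c, 0.9c
    print(f"v = {v:.3e} m/s: {ke_newton(m, v):.4e} J vs {ke_relativistic(m, v):.4e} J")
# At 30 m/s the two agree to about 1 part in 1e14; at 0.9c the
# relativistic value is roughly 3.2 times the Newtonian one.
```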
For high-energy processes, quantum mechanics must be adjusted to account for special relativity; this has led to the development of quantum field theory.
== Professional organizations ==
Applied Mechanics Division, American Society of Mechanical Engineers
Fluid Dynamics Division, American Physical Society
Society for Experimental Mechanics
International Union of Theoretical and Applied Mechanics
== See also ==
Action principles
Applied mechanics
Computational mechanics
Dynamics
Engineering
Index of engineering science and mechanics articles
Kinematics
Kinetics
Non-autonomous mechanics
Statics
Wiesen Test of Mechanical Aptitude (WTMA)
== References ==
== Further reading ==
Salma Alrasheed (2019). Principles of Mechanics. Springer Nature. ISBN 978-3-030-15195-9.
Landau, L. D.; Lifshitz, E. M. (1972). Mechanics and Electrodynamics, Vol. 1. Franklin Book Company, Inc. ISBN 978-0-08-016739-8.
Practical Mechanics for Boys (1914) by James Slough Zerbe.
== External links ==
Physclips: Mechanics with animations and video clips from the University of New South Wales
The Archimedes Project | Wikipedia/mechanics |
A free motion equation is a differential equation that describes a mechanical system in the absence of external forces, but in the presence only of an inertial force depending on the choice of a reference frame.
In non-autonomous mechanics on a configuration space Q → ℝ, a free motion equation is defined as a second-order non-autonomous dynamic equation on Q → ℝ which is brought into the form

$$\overline{q}^{\,i}_{tt} = 0$$

with respect to some reference frame (t, q̄^i) on Q → ℝ. Given an arbitrary reference frame (t, q^i) on Q → ℝ, a free motion equation reads

$$q^{i}_{tt} = d_{t}\Gamma^{i} + \partial_{j}\Gamma^{i}\,(q^{j}_{t}-\Gamma^{j}) - \frac{\partial q^{i}}{\partial \overline{q}^{m}}\,\frac{\partial^{2}\overline{q}^{m}}{\partial q^{j}\,\partial q^{k}}\,(q^{j}_{t}-\Gamma^{j})(q^{k}_{t}-\Gamma^{k}),$$

where Γ^i = ∂_t q^i(t, q̄^j) is a connection on Q → ℝ associated with the initial reference frame (t, q̄^i). The right-hand side of this equation is treated as an inertial force.
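A one-dimensional illustration (constructed here, not taken from the references) makes the inertial-force interpretation concrete:

```latex
% Configuration bundle Q = R x R -> R with fiber coordinate q, and a
% reference frame \bar q related to q by a time-dependent shift
% q = \bar q + f(t). Free motion in the barred frame is \bar q_{tt} = 0,
% and \Gamma = \partial_t q(t, \bar q) = f'(t), so the transformed
% equation collapses to
\[
q_{tt} = d_{t}\Gamma = f''(t),
\]
% since \partial_q \Gamma = 0 and the coordinate change is affine, which
% kills the second-derivative term. The right-hand side f''(t) is the
% familiar inertial force seen in a uniformly accelerated reference frame.
```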
A free motion equation need not exist in general. It can be defined if and only if the configuration bundle Q → ℝ of a mechanical system is a toroidal cylinder T^m × ℝ^k.
== See also ==
Non-autonomous mechanics
Non-autonomous system (mathematics)
Analytical mechanics
Fictitious force
== References ==
De Leon, M., Rodrigues, P., Methods of Differential Geometry in Analytical Mechanics (North Holland, 1989).
Giachetta, G., Mangiarotti, L., Sardanashvily, G., Geometric Formulation of Classical and Quantum Mechanics (World Scientific, 2010) ISBN 981-4313-72-6 (arXiv:0911.0411 ). | Wikipedia/Free_motion_equation |
Orbital Mechanics for Engineering Students is an aerospace engineering textbook by Howard D. Curtis, in its fourth edition as of 2019. The book provides an introduction to orbital mechanics, while assuming an undergraduate-level background in physics, rigid body dynamics, differential equations, and linear algebra.
Topics covered by the text include a review of kinematics and Newtonian dynamics, the two-body problem, Kepler's laws of planetary motion, orbit determination, orbital maneuvers, relative motion and rendezvous, and interplanetary trajectories. The text focuses primarily on orbital mechanics, but also includes material on rigid body dynamics, rocket vehicle dynamics, and attitude control. Control theory and spacecraft control systems are less thoroughly covered.
The textbook includes exercises at the end of each chapter, and supplemental material is available online, including MATLAB code for orbital mechanics projects.
== References == | Wikipedia/Orbital_Mechanics_for_Engineering_Students |
The Lamm equation describes the sedimentation and diffusion of a solute under ultracentrifugation in traditional sector-shaped cells. (Cells of other shapes require much more complex equations.) It was named after Ole Lamm, later professor of physical chemistry at the Royal Institute of Technology, who derived it during his PhD studies under Svedberg at Uppsala University.
The Lamm equation can be written:
$$\frac{\partial c}{\partial t} = D\left[\left(\frac{\partial^2 c}{\partial r^2}\right) + \frac{1}{r}\left(\frac{\partial c}{\partial r}\right)\right] - s\omega^2\left[r\left(\frac{\partial c}{\partial r}\right) + 2c\right]$$
where c is the solute concentration, t and r are the time and radius, and the parameters D, s, and ω represent the solute diffusion constant, sedimentation coefficient and the rotor angular velocity, respectively. The first and second terms on the right-hand side of the Lamm equation are proportional to D and sω2, respectively, and describe the competing processes of diffusion and sedimentation. Whereas sedimentation seeks to concentrate the solute near the outer radius of the cell, diffusion seeks to equalize the solute concentration throughout the cell. The diffusion constant D can be estimated from the hydrodynamic radius and shape of the solute, whereas the buoyant mass mb can be determined from the ratio of s and D
$$\frac{s}{D} = \frac{m_b}{k_{\text{B}}T}$$
where $k_{\text{B}}T$ is the thermal energy, i.e., the Boltzmann constant $k_{\text{B}}$ multiplied by the absolute temperature $T$.

Solute molecules cannot pass through the inner and outer walls of the cell, resulting in the boundary conditions on the Lamm equation

$$D\left(\frac{\partial c}{\partial r}\right) - s\omega^2 r c = 0$$
at the inner and outer radii, ra and rb, respectively. By spinning samples at constant angular velocity ω and observing the variation in the concentration c(r, t), one may estimate the parameters s and D and, thence, the (effective or equivalent) buoyant mass of the solute.
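Since the Lamm equation has no general closed-form solution, $c(r, t)$ is in practice computed numerically. The following minimal explicit finite-difference sketch illustrates this; all parameter values are assumptions chosen for illustration, and the no-flux condition above is imposed with one-sided differences:

```python
# Minimal finite-difference sketch of the Lamm equation (illustrative;
# parameter values are assumed, not from the text).  The no-flux boundary
# condition D dc/dr - s w^2 r c = 0 is imposed at r_a and r_b.
import numpy as np

D = 5e-11                      # diffusion constant, m^2/s (assumed)
s = 5e-13                      # sedimentation coefficient, s (assumed, ~5 S)
w = 2 * np.pi * 50000 / 60     # rotor angular velocity, rad/s (50 krpm)
ra, rb = 0.060, 0.072          # inner and outer radii, m (assumed)

n = 400
r = np.linspace(ra, rb, n)
dr = r[1] - r[0]
c = np.ones(n)                 # initially uniform concentration
dt = 0.2 * dr**2 / D           # conservative explicit time step

def step(c):
    cn = c.copy()
    # interior points: diffusion and sedimentation terms of the Lamm equation
    d2c = (c[2:] - 2*c[1:-1] + c[:-2]) / dr**2
    dc = (c[2:] - c[:-2]) / (2*dr)
    cn[1:-1] += dt * (D * (d2c + dc / r[1:-1])
                      - s * w**2 * (r[1:-1] * dc + 2*c[1:-1]))
    # no-flux walls: solve D dc/dr = s w^2 r c with one-sided differences
    cn[0] = cn[1] / (1 + s * w**2 * r[0] * dr / D)
    cn[-1] = cn[-2] / (1 - s * w**2 * r[-1] * dr / D)
    return cn

for _ in range(5000):
    c = step(c)
print(c[0], c[-1])   # solute depleted at the meniscus, piled up at the base
```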
== References and notes == | Wikipedia/Lamm_equation |
The Clohessy–Wiltshire equations describe a simplified model of orbital relative motion, in which the target is in a circular orbit, and the chaser spacecraft is in an elliptical or circular orbit. This model gives a first-order approximation of the chaser's motion in a target-centered coordinate system. It is used to plan the rendezvous of the chaser with the target.
== History ==
Early results about relative orbital motion were published by George William Hill in 1878. Hill's paper discussed the orbital motion of the moon relative to the Earth.
In 1960, W. H. Clohessy and R. S. Wiltshire published the Clohessy–Wiltshire equations to describe relative orbital motion of a general satellite for the purpose of designing control systems to achieve orbital rendezvous.
== System Definition ==
Suppose a target body is moving in a circular orbit and a chaser body is moving in an elliptical orbit.
Let $x, y, z$ be the position of the chaser relative to the target, with $x$ directed radially outward from the target body, $y$ along the orbit track of the target body, and $z$ along the orbital angular momentum vector of the target body (i.e., $x, y, z$ form a right-handed triad).

Then, the Clohessy–Wiltshire equations are
$$\begin{aligned}\ddot{x} &= 3n^2 x + 2n\dot{y}\\ \ddot{y} &= -2n\dot{x}\\ \ddot{z} &= -n^2 z\end{aligned}$$
where $n = \sqrt{\mu/a^3}$ is the orbital rate (in units of radians per second) of the target body, $a$ is the radius of the target body's circular orbit, and $\mu$ is the standard gravitational parameter.
If we define the state vector as $\mathbf{x} = (x, y, z, \dot{x}, \dot{y}, \dot{z})$, the Clohessy–Wiltshire equations can be written as a linear time-invariant (LTI) system,

$$\dot{\mathbf{x}} = A\mathbf{x}$$
where the state matrix $A$ is

$$A = \begin{bmatrix}0&0&0&1&0&0\\0&0&0&0&1&0\\0&0&0&0&0&1\\3n^2&0&0&0&2n&0\\0&0&0&-2n&0&0\\0&0&-n^2&0&0&0\end{bmatrix}.$$
For a satellite in low Earth orbit, $\mu = 3.986\times 10^{14}\ \mathrm{m^3/s^2}$ and $a = 6{,}793{,}137\ \mathrm{m}$, implying $n = 0.00113\ \mathrm{s^{-1}}$, corresponding to an orbital period of about 93 minutes.
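These numbers can be reproduced directly; a small sanity check:

```python
# Reproducing the low-Earth-orbit figures quoted above.
import math

mu = 3.986e14          # standard gravitational parameter, m^3/s^2
a = 6_793_137.0        # radius of the target's circular orbit, m

n = math.sqrt(mu / a**3)        # orbital rate, rad/s
period = 2 * math.pi / n        # orbital period, s

print(f"n = {n:.5f} 1/s")               # ~0.00113 1/s
print(f"period = {period/60:.1f} min")  # ~92.9 min
```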
If the chaser satellite has mass $m$ and thrusters that apply a force $F = (F_x, F_y, F_z)$, then the relative dynamics are given by the LTI control system

$$\dot{\mathbf{x}} = A\mathbf{x} + B\mathbf{u}$$
where $\mathbf{u} = F/m$ is the applied force per unit mass and

$$B = \begin{bmatrix}0&0&0\\0&0&0\\0&0&0\\1&0&0\\0&1&0\\0&0&1\end{bmatrix}.$$
== Solution ==
We can obtain closed form solutions of these coupled differential equations in matrix form, allowing us to find the position and velocity of the chaser at any time given the initial position and velocity.
$$\begin{aligned}\delta\vec{r}(t) &= [\Phi_{rr}(t)]\,\delta\vec{r}_0 + [\Phi_{rv}(t)]\,\delta\vec{v}_0\\ \delta\vec{v}(t) &= [\Phi_{vr}(t)]\,\delta\vec{r}_0 + [\Phi_{vv}(t)]\,\delta\vec{v}_0\end{aligned}$$
where:
$$\begin{aligned}\Phi_{rr}(t) &= \begin{bmatrix}4 - 3\cos nt & 0 & 0\\ 6(\sin nt - nt) & 1 & 0\\ 0 & 0 & \cos nt\end{bmatrix}\\ \Phi_{rv}(t) &= \begin{bmatrix}\frac{1}{n}\sin nt & \frac{2}{n}(1 - \cos nt) & 0\\ \frac{2}{n}(\cos nt - 1) & \frac{1}{n}(4\sin nt - 3nt) & 0\\ 0 & 0 & \frac{1}{n}\sin nt\end{bmatrix}\\ \Phi_{vr}(t) &= \begin{bmatrix}3n\sin nt & 0 & 0\\ 6n(\cos nt - 1) & 0 & 0\\ 0 & 0 & -n\sin nt\end{bmatrix}\\ \Phi_{vv}(t) &= \begin{bmatrix}\cos nt & 2\sin nt & 0\\ -2\sin nt & 4\cos nt - 3 & 0\\ 0 & 0 & \cos nt\end{bmatrix}\end{aligned}$$
Note that $\Phi_{vr}(t) = \dot{\Phi}_{rr}(t)$ and $\Phi_{vv}(t) = \dot{\Phi}_{rv}(t)$. Since these matrices are easily invertible, we can also solve for the initial conditions given only the final conditions and the properties of the target vehicle's orbit.
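A minimal sketch of how these matrices are used to propagate a relative state; the orbital rate and the initial conditions below are assumed, illustrative values:

```python
# Propagating relative motion with the closed-form Clohessy-Wiltshire
# state-transition matrices defined above.
import numpy as np

def cw_phi(t, n):
    """Return the four CW sub-matrices at time t for orbital rate n."""
    s, c = np.sin(n*t), np.cos(n*t)
    phi_rr = np.array([[4 - 3*c,      0, 0],
                       [6*(s - n*t),  1, 0],
                       [0,            0, c]])
    phi_rv = np.array([[s/n,          2*(1 - c)/n,     0],
                       [2*(c - 1)/n,  (4*s - 3*n*t)/n, 0],
                       [0,            0,               s/n]])
    phi_vr = np.array([[3*n*s,        0, 0],
                       [6*n*(c - 1),  0, 0],
                       [0,            0, -n*s]])
    phi_vv = np.array([[c,            2*s,     0],
                       [-2*s,         4*c - 3, 0],
                       [0,            0,       c]])
    return phi_rr, phi_rv, phi_vr, phi_vv

n = 0.00113                          # rad/s, the LEO value quoted above
r0 = np.array([100.0, 0.0, 0.0])     # chaser 100 m radially above the target
v0 = np.array([0.0, 0.0, 0.0])       # initially at rest relative to it

prr, prv, pvr, pvv = cw_phi(600.0, n)   # state after 10 minutes of drift
r = prr @ r0 + prv @ v0
v = pvr @ r0 + pvv @ v0
print(r, v)
```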
== See also ==
Orbital maneuver
Orbital mechanics
Space rendezvous
== References ==
== Further reading ==
Prussing, John E. and Conway, Bruce A. (2012). Orbital Mechanics (2nd Edition), Oxford University Press, NY, pp. 179–196. ISBN 9780199837700
== External links ==
The Clohessy-Wiltshire Equations of Relative Motion
Derivation Of Approximate Equations For Solving The Planar Rendezvous Problem | Wikipedia/Clohessy–Wiltshire_equations |
In classical mechanics, the Euler force is the fictitious tangential force that appears when a non-uniformly rotating reference frame is used for analysis of motion and there is variation in the angular velocity of the reference frame's axes. The Euler acceleration (named for Leonhard Euler), also known as azimuthal acceleration or transverse acceleration, is that part of the absolute acceleration that is caused by the variation in the angular velocity of the reference frame.
== Intuitive example ==
The Euler force will be felt by a person riding a merry-go-round. As the ride starts, the Euler force will be the apparent force pushing the person to the back of the horse; and as the ride comes to a stop, it will be the apparent force pushing the person towards the front of the horse. A person on a horse close to the perimeter of the merry-go-round will perceive a greater apparent force than a person on a horse closer to the axis of rotation.
== Mathematical description ==
The direction and magnitude of the Euler acceleration is given, in the rotating reference frame, by:
$$\mathbf{a}_{\mathrm{Euler}} = -\frac{d\boldsymbol{\omega}}{dt}\times \mathbf{r},$$
where ω is the angular velocity of rotation of the reference frame and r is the vector position of the point in the reference frame. The Euler force on an object of mass m in the rotating reference frame is then
$$\mathbf{F}_{\mathrm{Euler}} = m\,\mathbf{a}_{\mathrm{Euler}} = -m\,\frac{d\boldsymbol{\omega}}{dt}\times \mathbf{r}.$$
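A minimal numerical illustration of this formula, using the merry-go-round example above (all values are assumed for illustration):

```python
# Euler force on a rider during a merry-go-round spin-up,
# F = -m (d omega/dt) x r.  All numbers are assumed, illustrative values.
import numpy as np

m = 70.0                               # rider mass, kg (assumed)
domega_dt = np.array([0.0, 0.0, 0.2])  # angular acceleration about z, rad/s^2
r = np.array([2.5, 0.0, 0.0])          # rider 2.5 m from the rotation axis

F_euler = -m * np.cross(domega_dt, r)
print(F_euler)   # [ 0. -35.  0.] N: tangential, opposing the spin-up,
                 # i.e. pushing the rider toward the back of the horse
```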
== See also ==
Fictitious force
Coriolis effect
Centrifugal force
Rotating reference frame
Angular acceleration
== Notes and references == | Wikipedia/Euler_force |
In physics, and in particular in biomechanics, the ground reaction force (GRF) is the force exerted by the ground on a body in contact with it.
For example, a person standing motionless on the ground exerts a contact force on it (equal to the person's weight) and at the same time an equal and opposite ground reaction force is exerted by the ground on the person.
In the above example, the ground reaction force coincides with the notion of a normal force. However, in a more general case, the GRF will also have a component parallel to the ground, for example when the person is walking – a motion that requires the exchange of horizontal (frictional) forces with the ground.
The use of the word reaction derives from Newton's third law, which essentially states that if a force, called action, acts upon a body, then an equal and opposite force, called reaction, must act upon another body. The force exerted by the ground is conventionally referred to as the reaction, although, since the distinction between action and reaction is completely arbitrary, the expression ground action would be, in principle, equally acceptable.
The component of the GRF parallel to the surface is the frictional force. At the point of impending slippage, the ratio of the magnitude of the frictional force to the normal force yields the coefficient of static friction.
GRF is often measured to evaluate force production in various groups within the community, athletes in particular, to assess a subject's ability to exert force and power. This helps establish baseline parameters when creating strength and conditioning regimens from a rehabilitation and coaching standpoint. Plyometric jumps such as the drop-jump are often used to build greater power and force, which can lead to better overall ability on the playing field. In bilateral comparisons of landings from a safe height, the literature has shown no significant differences in vertical GRF between landing with the dominant foot first, followed by the non-dominant limb, and the reverse order.
== References == | Wikipedia/Ground_reaction_force |
Classical Mechanics is a textbook written by Herbert Goldstein, a professor at Columbia University. Intended for advanced undergraduate and beginning graduate students, it has been one of the standard references on its subject around the world since its first publication in 1950.
== Overview ==
In the second edition, Goldstein corrected all the errors that had been pointed out, added a new chapter on perturbation theory, a new section on Bertrand's theorem, and another on Noether's theorem. Other arguments and proofs were simplified and supplemented.
Before the death of its primary author in 2005, a new (third) edition of the book was released, with the collaboration of Charles P. Poole and John L. Safko from the University of South Carolina. In the third edition, the book discusses at length various mathematically sophisticated reformulations of Newtonian mechanics, namely analytical mechanics, as applied to particles, rigid bodies and continua. In addition, it covers in some detail classical electromagnetism, special relativity, and field theory, both classical and relativistic. There is an appendix on group theory. New to the third edition are a chapter on nonlinear dynamics and chaos, a section on the exact solutions to the three-body problem obtained by Euler and Lagrange, and a discussion of the damped driven pendulum that explains the physics of Josephson junctions. This is counterbalanced by the reduction of several existing chapters, motivated by the desire to prevent this edition from exceeding the previous one in length. For example, the discussions of Hermitian and unitary matrices were omitted because they are more relevant to quantum mechanics than to classical mechanics, while those of Routh's procedure and time-independent perturbation theory were reduced.
== Table of Contents (3rd Edition) ==
== Editions ==
Goldstein, Herbert (1950). Classical Mechanics (1st ed.). Addison-Wesley.
Goldstein, Herbert (1951). Classical Mechanics (1st ed.). Addison-Wesley. ASIN B000OL8LOM.
Goldstein, Herbert (1980). Classical Mechanics (2nd ed.). Addison-Wesley. ISBN 978-0-201-02918-5.
Goldstein, Herbert; Poole, C. P.; Safko, J. L. (2001). Classical Mechanics (3rd ed.). Addison-Wesley. ISBN 978-0-201-65702-9.
== Reception ==
=== First edition ===
S.L. Quimby of Columbia University noted that the first half of the first edition of the book is dedicated to the development of Lagrangian mechanics with the treatment of velocity-dependent potentials, which are important in electromagnetism, and the use of the Cayley-Klein parameters and matrix algebra for rigid-body dynamics. This is followed by a comprehensive and clear discussion of Hamiltonian mechanics. End-of-chapter references improve the value of the book. Quimby pointed out that although this book is suitable for students preparing for quantum mechanics, it is not helpful for those interested in analytical mechanics because its treatment omits too much. Quimby praised the quality of printing and binding which make the book attractive.
In the Journal of the Franklin Institute, Rupen Eskergian noted that the first edition of Classical Mechanics offers a mature take on the subject using vector and tensor notations and with a welcome emphasis on variational methods. This book begins with a review of elementary concepts, then introduces the principle of virtual work, constraints, generalized coordinates, and Lagrangian mechanics. Scattering is treated in the same chapter as central forces and the two-body problem. Unlike most other books on mechanics, this one elaborates upon the virial theorem. The discussion of canonical and contact transformations, the Hamilton-Jacobi theory, and action-angle coordinates is followed by a presentation of geometric optics and wave mechanics. Eskergian believed this book serves as a bridge to modern physics.
Writing for The Mathematical Gazette on the first edition, L. Rosenhead congratulated Goldstein for a lucid account of classical mechanics leading to modern theoretical physics, which he believed would stand the test of time alongside acknowledged classics such as E.T. Whittaker's Analytical Dynamics and Arnold Sommerfeld's Lectures on Theoretical Physics. This book is self-contained and is suitable for students who have completed courses in mathematics and physics of the first two years of university. End-of-chapter references with comments and some example problems enhance the book. Rosenhead also liked the diagrams, index, and printing.
Concerning the second printing of the first edition, Vic Twersky of the Mathematical Research Group at New York University considered the book to be of pedagogical merit because it explains things in a clear and simple manner, and its humor is not forced. Published in the 1950s, this book replaced the outdated and fragmented treatises and supplements typically assigned to beginning graduate students as a modern text on classical mechanics with exercises and examples demonstrating the link between this and other branches of physics, including acoustics, electrodynamics, thermodynamics, geometric optics, and quantum mechanics. It also has a chapter on the mechanics of fields and continua. At the end of each chapter, there is a list of references with the author's candid reviews of each. Twersky said that Goldstein's Classical Mechanics is more suitable for physicists compared to the much older treatise Analytical Dynamics by E.T. Whittaker, which he deemed more appropriate for mathematicians.
E. W. Banhagel, an instructor from Detroit, Michigan, observed that despite requiring no more than multivariable and vector calculus, the first edition of Classical Mechanics successfully introduces some sophisticated new ideas in physics to students. Mathematical tools are introduced as needed. He believed that the annotated references at the end of each chapter are of great value.
=== Third edition ===
Stephen R. Addison from the University of Central Arkansas commented that while the first edition of Classical Mechanics was essentially a treatise with exercises, the third has become less scholarly and more of a textbook. This book is most useful for students who are interested in learning the necessary material in preparation for quantum mechanics. The presentation of most materials in the third edition remain unchanged compared to that of the second, though many of the old references and footnotes were removed. Sections on the relations between the action-angle coordinates and the Hamilton-Jacobi equation with the old quantum theory, wave mechanics, and geometric optics were removed. Chapter 7, which deals with special relativity, has been heavily revised and could prove to be more useful to students who want to study general relativity than its equivalent in previous editions. Chapter 11 provides a clear, if somewhat dated, survey of classical chaos. Appendix B could help advanced students refresh their memories but may be too short to learn from. In all, Addison believed that this book remains a classic text on the eighteenth- and nineteenth-century approaches to theoretical mechanics; those interested in a more modern approach – expressed in the language of differential geometry and Lie groups – should refer to Mathematical Methods of Classical Mechanics by Vladimir Arnold.
Martin Tiersten from the City University of New York pointed out a serious error in the book that persisted in all three editions and even got promoted to the front cover of the book. Such a closed orbit, depicted in a diagram on page 80 (as Figure 3.7) is impossible for an attractive central force because the path cannot be concave away from the center of force. A similarly erroneous diagram appears on page 91 (as Figure 3.13). Tiersten suggested that the reason why this error remained unnoticed for so long is because advanced mechanics texts typically do not use vectors in their treatment of central-force problems, in particular the tangential and normal components of the acceleration vector. He wrote, "Because an attractive force is always directed in toward the center of force, the direction toward the center of curvature at the turning points must be toward the center of force." In response, Poole and Safko acknowledged the error and stated they were working on a list of errata.
== See also ==
Newtonian mechanics
Classical Mechanics (Kibble and Berkshire)
Course of Theoretical Physics (Landau and Lifshitz)
List of textbooks on classical and quantum mechanics
Introduction to Electrodynamics (Griffiths)
Classical Electrodynamics (Jackson)
== References ==
== External links ==
Errata, corrections, and comments on the third edition. John L. Safko and Charles P. Poole. University of South Carolina. | Wikipedia/Classical_Mechanics_(book) |
In physics, the Rayleigh dissipation function, named after Lord Rayleigh, is a function used to handle the effects of velocity-proportional frictional forces in Lagrangian mechanics.
It was first introduced by him in 1873.
If the frictional force on a particle with velocity $\vec{v}$ can be written as $\vec{F}_f = -k\vec{v}$, where $k$ is a diagonal matrix, then the Rayleigh dissipation function can be defined for a system of $N$ particles as
$$R(v) = \frac{1}{2}\sum_{i=1}^{N}\left(k_x v_{i,x}^2 + k_y v_{i,y}^2 + k_z v_{i,z}^2\right).$$
This function represents half of the rate of energy dissipation of the system through friction. The force of friction is the negative of the velocity gradient of the dissipation function, $\vec{F}_f = -\nabla_v R(v)$, analogous to a force being equal to the negative position gradient of a potential. In terms of the set of generalized coordinates $q_i = \{q_1, q_2, \ldots, q_n\}$, this relationship reads

$$F_{f,i} = -\frac{\partial R}{\partial \dot{q}_i}.$$
As friction is not conservative, it is included in the $Q_i$ term of Lagrange's equations,

$$\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = Q_i.$$
Substituting the frictional force expressed in generalized coordinates into the Euler–Lagrange equations gives

$$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_i}\right) - \frac{\partial L}{\partial q_i} = -\frac{\partial R}{\partial \dot{q}_i}.$$
Rayleigh writes the Lagrangian $L$ as kinetic energy $T$ minus potential energy $V$, which yields Rayleigh's equation from 1873:

$$\frac{d}{dt}\left(\frac{\partial T}{\partial \dot{q}_i}\right) - \frac{\partial T}{\partial q_i} + \frac{\partial R}{\partial \dot{q}_i} + \frac{\partial V}{\partial q_i} = 0.$$
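As a concrete illustration (a single-particle example with assumed coefficients $m$, $k$, $c$, not drawn from Rayleigh), applying this equation to $T = \tfrac{1}{2}m\dot{q}^2$, $V = \tfrac{1}{2}kq^2$ and $R = \tfrac{1}{2}c\dot{q}^2$ recovers the damped harmonic oscillator $m\ddot{q} + c\dot{q} + kq = 0$:

```python
# Symbolic sketch: Rayleigh's equation applied to a 1-D particle with
# linear drag yields the damped harmonic oscillator.
import sympy as sp

t = sp.symbols('t')
m, k, c = sp.symbols('m k c', positive=True)   # assumed coefficients
q = sp.Function('q')
qd = sp.diff(q(t), t)

T = m * qd**2 / 2      # kinetic energy
V = k * q(t)**2 / 2    # potential energy
R = c * qd**2 / 2      # Rayleigh dissipation function

eom = (sp.diff(sp.diff(T, qd), t) - sp.diff(T, q(t))
       + sp.diff(R, qd) + sp.diff(V, q(t)))
print(sp.expand(eom))  # c*q' + k*q + m*q'' == 0 is the equation of motion
```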
Since the 1970s the name Rayleigh dissipation potential for $R$ has been more common. Moreover, the original theory has been generalized from quadratic functions $q \mapsto R(\dot{q}) = \frac{1}{2}\dot{q}\cdot \mathbb{V}\dot{q}$ to dissipation potentials that depend on $q$ (then called state dependence) and are non-quadratic, which leads to nonlinear friction laws such as those in Coulomb friction or in plasticity. The main assumption is then that the mapping $\dot{q} \mapsto R(q, \dot{q})$ is convex and satisfies $0 = R(q, 0) \leq R(q, \dot{q})$.
== References == | Wikipedia/Rayleigh_dissipation_function |
In mathematics, Nambu mechanics is a generalization of Hamiltonian mechanics involving multiple Hamiltonians. Recall that Hamiltonian mechanics is based upon the flows generated by a smooth Hamiltonian over a symplectic manifold. The flows are symplectomorphisms and hence obey Liouville's theorem. This was soon generalized to flows generated by a Hamiltonian over a Poisson manifold. In 1973, Yoichiro Nambu suggested a generalization involving Nambu–Poisson manifolds with more than one Hamiltonian. In 1994, Leon Takhtajan revisited Nambu dynamics.
== Nambu bracket ==
Specifically, consider a differential manifold M, for some integer N ≥ 2; one has a smooth N-linear map from N copies of $C^\infty(M)$ to itself, such that it is completely antisymmetric: the Nambu bracket,

$$\{h_1, \ldots, h_{N-1}, \cdot\}: C^\infty(M)\times \cdots \times C^\infty(M) \to C^\infty(M),$$
which acts as a derivation
$$\{h_1, \ldots, h_{N-1}, fg\} = \{h_1, \ldots, h_{N-1}, f\}\,g + f\,\{h_1, \ldots, h_{N-1}, g\},$$
whence the Filippov Identities (FI) (evocative of the Jacobi identities, but unlike them, not antisymmetrized in all arguments, for N ≥ 2):

$$\{f_1, \cdots, f_{N-1}, \{g_1, \cdots, g_N\}\} = \{\{f_1, \cdots, f_{N-1}, g_1\}, g_2, \cdots, g_N\} + \{g_1, \{f_1, \cdots, f_{N-1}, g_2\}, \cdots, g_N\} + \dots + \{g_1, \cdots, g_{N-1}, \{f_1, \cdots, f_{N-1}, g_N\}\},$$
so that {f1, ..., fN−1, •} acts as a generalized derivation over the N-fold product {. ,..., .}.
== Hamiltonians and flow ==
There are N − 1 Hamiltonians, H1, ..., HN−1, generating an incompressible flow,
$$\frac{d}{dt}f = \{f, H_1, \ldots, H_{N-1}\}.$$
The generalized phase-space velocity is divergenceless, enabling Liouville's theorem. The case N = 2 reduces to a Poisson manifold, and conventional Hamiltonian mechanics.
For larger even N, the N−1 Hamiltonians identify with the maximal number of independent invariants of motion (cf. Conserved quantity) characterizing a superintegrable system that evolves in N-dimensional phase space. Such systems are also describable by conventional Hamiltonian dynamics; but their description in the framework of Nambu mechanics is substantially more elegant and intuitive, as all invariants enjoy the same geometrical status as the Hamiltonian: the trajectory in phase space is the intersection of the N − 1 hypersurfaces specified by these invariants. Thus, the flow is perpendicular to all N − 1 gradients of these Hamiltonians, whence parallel to the generalized cross product specified by the respective Nambu bracket.
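A standard example of this geometric picture is the free rigid body (Euler top), with the kinetic energy and half the squared angular momentum serving as the two Hamiltonians for N = 3. The following minimal numerical sketch (the inertia values and integration settings are assumptions of the example) shows both invariants being conserved along the Nambu flow:

```python
# The free rigid body as a Nambu system with N = 3: with H1 the kinetic
# energy and H2 = |L|^2/2, the flow dL/dt = grad(H2) x grad(H1) reproduces
# Euler's equations dL/dt = L x Omega, and the trajectory stays on the
# intersection of the energy ellipsoid and the angular-momentum sphere.
import numpy as np

I = np.array([1.0, 2.0, 3.0])   # principal moments of inertia (assumed)

def rhs(L):
    return np.cross(L, L / I)   # grad(H2) x grad(H1) = L x Omega

L = np.array([0.1, 0.1, 1.0])
dt = 1e-3
H1_0 = 0.5 * np.sum(L**2 / I)   # kinetic energy
H2_0 = 0.5 * np.sum(L**2)       # half the squared angular momentum

for _ in range(50_000):         # classical RK4 steps
    k1 = rhs(L); k2 = rhs(L + dt/2*k1)
    k3 = rhs(L + dt/2*k2); k4 = rhs(L + dt*k3)
    L = L + dt/6*(k1 + 2*k2 + 2*k3 + k4)

# Both "Hamiltonians" are conserved to integrator accuracy:
print(0.5*np.sum(L**2 / I) - H1_0, 0.5*np.sum(L**2) - H2_0)
```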
Nambu mechanics can be extended to fluid dynamics, where the resulting Nambu brackets are non-canonical and the Hamiltonians are identified with the Casimir of the system, such as enstrophy or helicity.
== Quantization ==
From the viewpoint of Zariski quantization, Takhtajan et al. have proposed a quantization of Nambu dynamics.
Quantizing Nambu dynamics leads to intriguing structures that coincide with conventional quantization ones when superintegrable systems are involved—as they must.
In relation to matrix models and M2-branes, S. Katagiri has recently discussed the quantization of Nambu dynamics.
== See also ==
Hamiltonian mechanics
Symplectic manifold
Poisson manifold
Poisson algebra
Integrable system
Conserved quantity
Hamiltonian Fluid Mechanics
== Notes ==
== References ==
Curtright, T.; Zachos, C. (2003). "Classical and quantum Nambu mechanics". Physical Review. D68 (8): 085001. arXiv:hep-th/0212267. Bibcode:2003PhRvD..68h5001C. doi:10.1103/PhysRevD.68.085001. S2CID 17388447.
Takhtajan, Leon (1994). "On foundation of the generalized Nambu mechanics". Communications in Mathematical Physics. 160 (2): 295–315. arXiv:hep-th/9301111. Bibcode:1994CMaPh.160..295T. doi:10.1007/BF02103278. S2CID 119137896.
Filippov, V. T. (1986). "n-Lie Algebras". Sib. Math. Journal. 26 (6): 879–891. doi:10.1007/BF00969110. S2CID 125051596.
Nambu, Y. (1973). "Generalized Hamiltonian dynamics". Physical Review. D7 (8): 2405–2412. Bibcode:1973PhRvD...7.2405N. doi:10.1103/PhysRevD.7.2405.
Nevir, P.; Blender, R. (1993). "A Nambu representation of incompressible hydrodynamics using helicity and enstrophy". J. Phys. A. 26 (22): 1189–1193. Bibcode:1993JPhA...26L1189N. doi:10.1088/0305-4470/26/22/010.
Blender, R.; Badin, G. (2015). "Hydrodynamic Nambu mechanics derived by geometric constraints". J. Phys. A. 48 (10): 105501. arXiv:1510.04832. Bibcode:2015JPhA...48j5501B. doi:10.1088/1751-8113/48/10/105501. S2CID 119661148.
Leon, Takhtajan; Flato, Moshe; Sternheimer, Daniel; Giuseppe, Dito (1997). "Deformation Quantization and Nambu Mechanics". Communications in Mathematical Physics. 183 (8): 1–22. arXiv:hep-th/9602016. Bibcode:1994CMaPh.160..295T. doi:10.1103/physrevd.55.5112. S2CID 119137896.
Blender, R.; Badin, G. (2017). "Construction of Hamiltonian and Nambu Forms for the Shallow Water Equations". Fluids. 2 (2): 24. arXiv:1606.03355. doi:10.3390/fluids2020024. S2CID 36189352. | Wikipedia/Nambu_mechanics |
Hamiltonian fluid mechanics is the application of Hamiltonian methods to fluid mechanics. Note that this formalism only applies to non-dissipative fluids.
== Irrotational barotropic flow ==
Take the simple example of a barotropic, inviscid vorticity-free fluid.
Then, the conjugate fields are the mass density field ρ and the velocity potential φ. The Poisson bracket is given by
$$\{\rho(\vec{y}), \varphi(\vec{x})\} = \delta^d(\vec{x} - \vec{y})$$
and the Hamiltonian by:
$$H = \int \mathrm{d}^d x\,\mathcal{H} = \int \mathrm{d}^d x\left(\frac{1}{2}\rho(\nabla\varphi)^2 + e(\rho)\right),$$
where e is the internal energy density, as a function of ρ.
For this barotropic flow, the internal energy is related to the pressure p by:
$$e'' = \frac{1}{\rho}p',$$
where an apostrophe (') denotes differentiation with respect to ρ.
This Hamiltonian structure gives rise to the following two equations of motion:
$$\begin{aligned}\frac{\partial \rho}{\partial t} &= +\frac{\partial \mathcal{H}}{\partial \varphi} = -\nabla\cdot(\rho\vec{u}),\\ \frac{\partial \varphi}{\partial t} &= -\frac{\partial \mathcal{H}}{\partial \rho} = -\frac{1}{2}\vec{u}\cdot\vec{u} - e',\end{aligned}$$
where $\vec{u} \mathrel{\stackrel{\mathrm{def}}{=}} \nabla\varphi$ is the velocity and is vorticity-free. The second equation leads to the Euler equations:

$$\frac{\partial \vec{u}}{\partial t} + (\vec{u}\cdot\nabla)\vec{u} = -e''\nabla\rho = -\frac{1}{\rho}\nabla p$$
after exploiting the fact that the vorticity is zero:
$$\nabla\times\vec{u} = \vec{0}.$$
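A minimal one-dimensional sketch of this canonical pair (an illustrative example, not from the text, with the assumed equation of state $e(\rho) = \tfrac{1}{2}K\rho^2$, so that $e'(\rho) = K\rho$):

```python
# 1-D periodic barotropic flow integrated directly from the canonical pair
#   rho_t = +dH/dphi = -(rho * phi_x)_x,   phi_t = -dH/drho = -phi_x^2/2 - K*rho.
# The equation of state e(rho) = K*rho^2/2 is an assumption of the example.
import numpy as np

N, L, K = 256, 2*np.pi, 1.0
x = np.linspace(0, L, N, endpoint=False)
dx = x[1] - x[0]
rho = 1.0 + 0.01*np.cos(x)     # small density perturbation
phi = np.zeros(N)

def ddx(f):                    # periodic central difference
    return (np.roll(f, -1) - np.roll(f, 1)) / (2*dx)

dt = 0.1 * dx                  # small step for this explicit scheme
H0 = np.sum(0.5*rho*ddx(phi)**2 + 0.5*K*rho**2) * dx

for _ in range(2000):
    u = ddx(phi)
    rho_t = -ddx(rho * u)      # continuity equation
    phi_t = -0.5*u**2 - K*rho  # Bernoulli-type equation for the potential
    rho, phi = rho + dt*rho_t, phi + dt*phi_t

H = np.sum(0.5*rho*ddx(phi)**2 + 0.5*K*rho**2) * dx
print(H - H0)   # the Hamiltonian drifts only slightly over short times
```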
As fluid dynamics is described by non-canonical dynamics, which possess an infinite number of Casimir invariants, an alternative Hamiltonian formulation of fluid dynamics can be introduced through the use of Nambu mechanics.
== See also ==
Luke's variational principle
Hamiltonian field theory
== Notes ==
== References ==
Badin, Gualtiero; Crisciani, Fulvio (2018). Variational Formulation of Fluid and Geophysical Fluid Dynamics - Mechanics, Symmetries and Conservation Laws -. Springer. p. 218. Bibcode:2018vffg.book.....B. doi:10.1007/978-3-319-59695-2. ISBN 978-3-319-59694-5. S2CID 125902566.
Morrison, P.J. (2006). "Hamiltonian Fluid Mechanics" (PDF). In Elsevier (ed.). Encyclopedia of Mathematical Physics. Vol. 2. Amsterdam. pp. 593–600.{{cite encyclopedia}}: CS1 maint: location missing publisher (link)
Morrison, P. J. (April 1998). "Hamiltonian Description of the Ideal Fluid" (PDF). Reviews of Modern Physics. 70 (2). Austin, Texas: 467–521. Bibcode:1998RvMP...70..467M. doi:10.1103/RevModPhys.70.467. hdl:2152/61087.
R. Salmon (1988). "Hamiltonian Fluid Mechanics". Annual Review of Fluid Mechanics. 20: 225–256. Bibcode:1988AnRFM..20..225S. doi:10.1146/annurev.fl.20.010188.001301.
Shepherd, Theodore G (1990). "Symmetries, Conservation Laws, and Hamiltonian Structure in Geophysical Fluid Dynamics". Advances in Geophysics Volume 32. Vol. 32. pp. 287–338. Bibcode:1990AdGeo..32..287S. doi:10.1016/S0065-2687(08)60429-X. ISBN 9780120188321.
Swaters, Gordon E. (2000). Introduction to Hamiltonian Fluid Dynamics and Stability Theory. Boca Raton, Florida: Chapman & Hall/CRC. p. 274. ISBN 1-58488-023-6.
Nevir, P.; Blender, R. (1993). "A Nambu representation of incompressible hydrodynamics using helicity and enstrophy". J. Phys. A. 26 (22): 1189–1193. Bibcode:1993JPhA...26L1189N. doi:10.1088/0305-4470/26/22/010.
Blender, R.; Badin, G. (2015). "Hydrodynamic Nambu mechanics derived by geometric constraints". J. Phys. A. 48 (10): 105501. arXiv:1510.04832. Bibcode:2015JPhA...48j5501B. doi:10.1088/1751-8113/48/10/105501. S2CID 119661148. | Wikipedia/Hamiltonian_fluid_mechanics |
Geometric mechanics is a branch of mathematics applying particular geometric methods to many areas of mechanics, from mechanics of particles and rigid bodies to fluid mechanics and control theory.
Geometric mechanics applies principally to systems for which the configuration space is a Lie group, or a group of diffeomorphisms, or more generally where some aspect of the configuration space has this group structure. For example, the configuration space of a rigid body such as a satellite is the group of Euclidean motions (translations and rotations in space), while the configuration space for a liquid crystal is the group of diffeomorphisms coupled with an internal state (gauge symmetry or order parameter).
== Momentum map and reduction ==
One of the principal ideas of geometric mechanics is reduction, which goes back to Jacobi's elimination of the node in the 3-body problem, but in its modern form is due to K. Meyer (1973) and independently J.E. Marsden and A. Weinstein (1974), both inspired by the work of Smale (1970). Symmetry of a Hamiltonian or Lagrangian system gives rise to conserved quantities, by Noether's theorem, and these conserved quantities are the components of the momentum map J. If P is the phase space and G the symmetry group, the momentum map is a map
$$\mathbf{J}: P \to \mathfrak{g}^*,$$

and the reduced spaces are quotients of the level sets of $\mathbf{J}$ by the subgroup of G preserving the level set in question: for $\mu \in \mathfrak{g}^*$ one defines $P_\mu = \mathbf{J}^{-1}(\mu)/G_\mu$, and this reduced space is a symplectic manifold if $\mu$ is a regular value of $\mathbf{J}$.
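A minimal numerical illustration of a momentum map (the Kepler Hamiltonian and all initial data below are assumptions of this example): for rotations of the plane acting on phase space $\{(q, p)\} = \mathbb{R}^4$, the momentum map is the angular momentum $J(q, p) = q_x p_y - q_y p_x$, which is conserved for any central-force Hamiltonian:

```python
# Conservation of the momentum map for the SO(2) symmetry of a
# central-force problem (illustrative; H = |p|^2/2 - 1/|q| is assumed).
import numpy as np

def rhs(q, p):
    r = np.linalg.norm(q)
    return p, -q / r**3            # Hamilton's equations for Kepler

def J(q, p):                       # momentum map for planar rotations
    return q[0]*p[1] - q[1]*p[0]

q = np.array([1.0, 0.0]); p = np.array([0.0, 1.2])
J0, dt = J(q, p), 1e-4
for _ in range(50_000):            # RK4 integration of the flow
    k1q, k1p = rhs(q, p)
    k2q, k2p = rhs(q + dt/2*k1q, p + dt/2*k1p)
    k3q, k3p = rhs(q + dt/2*k2q, p + dt/2*k2p)
    k4q, k4p = rhs(q + dt*k3q, p + dt*k3p)
    q = q + dt/6*(k1q + 2*k2q + 2*k3q + k4q)
    p = p + dt/6*(k1p + 2*k2p + 2*k3p + k4p)
print(J(q, p) - J0)                # ~0: the momentum map is conserved
```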
== Variational principles ==
Hamilton's principle
Lagrange d'Alembert principle
Maupertuis' principle of least action
Euler–Poincaré
Vakonomic
== Geometric integrators ==
One of the important developments arising from the geometric approach to mechanics is the incorporation of the geometry into numerical methods.
In particular, symplectic and variational integrators are proving especially accurate for the long-term integration of Hamiltonian and Lagrangian systems, as illustrated below.
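A standard toy illustration of the difference (an assumed example, not from the references): for the harmonic oscillator, the symplectic Euler method keeps the energy bounded over long times, while the explicit Euler method lets it grow without bound:

```python
# Symplectic Euler vs explicit Euler on H = (p^2 + q^2)/2.
def run(symplectic, steps=100_000, dt=0.01):
    q, p = 1.0, 0.0
    for _ in range(steps):
        if symplectic:             # update p first, then q with the new p
            p -= dt * q
            q += dt * p
        else:                      # explicit Euler: both from old values
            q, p = q + dt * p, p - dt * q
    return 0.5 * (p*p + q*q)

print(run(symplectic=True))        # stays near the initial energy 0.5
print(run(symplectic=False))       # energy has blown up
```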
== History ==
The term "geometric mechanics" occasionally refers to 17th-century mechanics.
As a modern subject, geometric mechanics has its roots in four works written between 1966 and 1970: those of Vladimir Arnold (1966), Stephen Smale (1970) and Jean-Marie Souriau (1970), and the first edition of Abraham and Marsden's Foundations of Mechanics (1967). Arnold's fundamental work showed that Euler's equations for the free rigid body are the equations for geodesic flow on the rotation group SO(3) and carried this geometric insight over to the dynamics of ideal fluids, where the rotation group is replaced by the group of volume-preserving diffeomorphisms. Smale's paper on Topology and Mechanics investigates the conserved quantities arising from Noether's theorem when a Lie group of symmetries acts on a mechanical system, and defines what is now called the momentum map (which Smale calls angular momentum), and he raises questions about the topology of the energy-momentum level surfaces and the effect on the dynamics. In his book, Souriau also considers the conserved quantities arising from the action of a group of symmetries, but he concentrates more on the geometric structures involved (for example the equivariance properties of this momentum for a wide class of symmetries), and less on questions of dynamics.
These ideas, and particularly those of Smale were central in the second edition of Foundations of Mechanics (Abraham and Marsden, 1978).
== Applications ==
Computer graphics
Control theory — see Bloch (2003)
Liquid Crystals — see Gay-Balmaz, Ratiu, Tronci (2013)
Magnetohydrodynamics
Molecular oscillations
Nonholonomic constraints — see Bloch (2003)
Nonlinear stability
Plasmas — see Holm, Marsden, Weinstein (1985)
Quantum mechanics
Quantum chemistry — see Foskett, Holm, Tronci (2019)
Superfluids
Thermodynamics — see Gay-Balmaz, Yoshimura (2018)
Trajectory planning for space exploration
Underwater vehicles
Variational integrators; see Marsden and West (2001)
== References ==
Abraham, Ralph; Marsden, Jerrold E. (1978), Foundations of Mechanics (2nd ed.), Addison-Wesley
Arnold, Vladimir (1966), "Sur la géométrie différentielle des groupes de Lie de dimension infinie et ses applications à l'hydrodynamique des fluides parfaits" (PDF), Annales de l'Institut Fourier, 16: 319–361, doi:10.5802/aif.233
Arnold, Vladimir (1978), Mathematical Methods of Classical Mechanics, Springer-Verlag
Bloch, Anthony (2003). Nonholonomic Mechanics and Control. Springer-Verlag.
Foskett, Michael S.; Holm, Darryl D.; Tronci, Cesare (2019). "Geometry of Nonadiabatic Quantum Hydrodynamics". Acta Applicandae Mathematicae. 162 (1): 63–103. arXiv:1807.01031. doi:10.1007/s10440-019-00257-1. S2CID 85531406.
Gay-Balmaz, Francois; Ratiu, Tudor; Tronci, Cesare (2013). "Equivalent Theories of Liquid Crystal Dynamics". Arch. Ration. Mech. Anal. 210 (3): 773–811. arXiv:1102.2918. Bibcode:2013ArRMA.210..773G. doi:10.1007/s00205-013-0673-1. S2CID 14968950.
Holm, Darryl D.; Marsden, Jerrold E.; Ratiu, Tudor S.; Weinstein, Alan (1985). "Nonlinear stability of fluid and plasma equilibria". Physics Reports. 123 (1–2): 1–116. Bibcode:1985PhR...123....1H. doi:10.1016/0370-1573(85)90028-6.
Libermann, Paulette; Marle, Charles-Michel (1987). Symplectic geometry and analytical mechanics. Mathematics and its Applications. Vol. 35. Dordrecht: D. Reidel. doi:10.1007/978-94-009-3807-6. ISBN 90-277-2438-5.
Marsden, Jerrold; Weinstein, Alan (1974), "Reduction of Symplectic Manifolds with Symmetry", Reports on Mathematical Physics, 5 (1): 121–130, Bibcode:1974RpMP....5..121M, doi:10.1016/0034-4877(74)90021-4
Marsden, Jerrold; Ratiu, Tudor S. (1999). Introduction to mechanics and symmetry. Texts in Applied Mathematics (2 ed.). New York: Springer-Verlag. ISBN 0-387-98643-X.
Meyer, Kenneth (1973). "Symmetries and integrals in mechanics". Dynamical systems (Proc. Sympos., Univ. Bahia, Salvador, 1971). New York: Academic Press. pp. 259–272.
Ortega, Juan-Pablo; Ratiu, Tudor S. (2004). Momentum maps and Hamiltonian reduction. Progress in Mathematics. Vol. 222. Birkhauser Boston. ISBN 0-8176-4307-9.
Smale, Stephen (1970), "Topology and Mechanics I", Inventiones Mathematicae, 10 (4): 305–331, Bibcode:1970InMat..10..305S, doi:10.1007/bf01418778, S2CID 189831616
Souriau, Jean-Marie (1970), Structure des Systemes Dynamiques, Dunod | Wikipedia/Geometric_mechanics |
Symmetry (from Ancient Greek συμμετρία (summetría) 'agreement in dimensions, due proportion, arrangement') in everyday life refers to a sense of harmonious and beautiful proportion and balance. In mathematics, the term has a more precise definition and is usually used to refer to an object that is invariant under some transformations, such as translation, reflection, rotation, or scaling. Although these two meanings of the word can sometimes be told apart, they are intricately related, and hence are discussed together in this article.
Mathematical symmetry may be observed with respect to the passage of time; as a spatial relationship; through geometric transformations; through other kinds of functional transformations; and as an aspect of abstract objects, including theoretic models, language, and music.
This article describes symmetry from three perspectives: in mathematics, including geometry, the most familiar type of symmetry for many people; in science and nature; and in the arts, covering architecture, art, and music.
The opposite of symmetry is asymmetry, which refers to the absence of symmetry.
== In mathematics ==
=== In geometry ===
A geometric shape or object is symmetric if it can be divided into two or more identical pieces that are arranged in an organized fashion. This means that an object is symmetric if there is a transformation that moves individual pieces of the object, but doesn't change the overall shape. The type of symmetry is determined by the way the pieces are organized, or by the type of transformation:
An object has reflectional symmetry (line or mirror symmetry) if there is a line (or in 3D a plane) going through it which divides it into two pieces that are mirror images of each other.
An object has rotational symmetry if the object can be rotated about a fixed point (or in 3D about a line) without changing the overall shape.
An object has translational symmetry if it can be translated (moving every point of the object by the same distance) without changing its overall shape.
An object has helical symmetry if it can be simultaneously translated and rotated in three-dimensional space along a line known as a screw axis.
An object has scale symmetry if it does not change shape when it is expanded or contracted. Fractals also exhibit a form of scale symmetry, where smaller portions of the fractal are similar in shape to larger portions.
Other symmetries include glide reflection symmetry (a reflection followed by a translation) and rotoreflection symmetry (a combination of a rotation and a reflection).
=== In logic ===
A dyadic relation R ⊆ S × S is symmetric if for all elements a, b in S, whenever it is true that Rab, it is also true that Rba. Thus, the relation "is the same age as" is symmetric, for if Paul is the same age as Mary, then Mary is the same age as Paul.
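A tiny computational restatement (illustrative): a finite relation, stored as a set of ordered pairs, is symmetric exactly when it contains (b, a) whenever it contains (a, b):

```python
# Checking symmetry of a finite relation given as a set of ordered pairs.
def is_symmetric(R):
    return all((b, a) in R for (a, b) in R)

same_age = {("Paul", "Mary"), ("Mary", "Paul")}
older_than = {("Paul", "Mary")}
print(is_symmetric(same_age))    # True
print(is_symmetric(older_than))  # False
```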
In propositional logic, symmetric binary logical connectives include and (∧, or &), or (∨, or |) and if and only if (↔), while the connective if (→) is not symmetric. Other symmetric logical connectives include nand (not-and, or ⊼), xor (not-biconditional, or ⊻), and nor (not-or, or ⊽).
=== Other areas of mathematics ===
Generalizing from geometrical symmetry in the previous section, one can say that a mathematical object is symmetric with respect to a given mathematical operation, if, when applied to the object, this operation preserves some property of the object. The set of operations that preserve a given property of the object form a group.
In general, every kind of structure in mathematics will have its own kind of symmetry. Examples include even and odd functions in calculus, symmetric groups in abstract algebra, symmetric matrices in linear algebra, and Galois groups in Galois theory. In statistics, symmetry also manifests as symmetric probability distributions, and as skewness—the asymmetry of distributions.
== In science and nature ==
=== In physics ===
Symmetry in physics has been generalized to mean invariance—that is, lack of change—under any kind of transformation, for example arbitrary coordinate transformations. This concept has become one of the most powerful tools of theoretical physics, as it has become evident that practically all laws of nature originate in symmetries. In fact, this role inspired the Nobel laureate PW Anderson to write in his widely read 1972 article More is Different that "it is only slightly overstating the case to say that physics is the study of symmetry." See Noether's theorem (which, in greatly simplified form, states that for every continuous mathematical symmetry, there is a corresponding conserved quantity such as energy or momentum; a conserved current, in Noether's original language); and also, Wigner's classification, which says that the symmetries of the laws of physics determine the properties of the particles found in nature.
Important symmetries in physics include continuous symmetries and discrete symmetries of spacetime; internal symmetries of particles; and supersymmetry of physical theories.
=== In biology ===
In biology, the notion of symmetry is mostly used explicitly to describe body shapes. Bilateral animals, including humans, are more or less symmetric with respect to the sagittal plane which divides the body into left and right halves. Animals that move in one direction necessarily have upper and lower sides, head and tail ends, and therefore a left and a right. The head becomes specialized with a mouth and sense organs, and the body becomes bilaterally symmetric for the purpose of movement, with symmetrical pairs of muscles and skeletal elements, though internal organs often remain asymmetric.
Plants and sessile (attached) animals such as sea anemones often have radial or rotational symmetry, which suits them because food or threats may arrive from any direction. Fivefold symmetry is found in the echinoderms, the group that includes starfish, sea urchins, and sea lilies.
In biology, the notion of symmetry is also used as in physics, that is to say to describe the properties of the objects studied, including their interactions. A remarkable property of biological evolution is the changes of symmetry corresponding to the appearance of new parts and dynamics.
=== In chemistry ===
Symmetry is important to chemistry because it undergirds essentially all specific interactions between molecules in nature (i.e., via the interaction of natural and human-made chiral molecules with inherently chiral biological systems). The control of the symmetry of molecules produced in modern chemical synthesis contributes to the ability of scientists to offer therapeutic interventions with minimal side effects. A rigorous understanding of symmetry explains fundamental observations in quantum chemistry, and in the applied areas of spectroscopy and crystallography. The theory and application of symmetry to these areas of physical science draws heavily on the mathematical area of group theory.
=== In psychology and neuroscience ===
For a human observer, some symmetry types are more salient than others; the most salient is reflection about a vertical axis, like that present in the human face. Ernst Mach made this observation in his book "The analysis of sensations" (1897), and it implies that perception of symmetry is not a general response to all types of regularities. Both behavioural and neurophysiological studies have confirmed the special sensitivity to reflection symmetry in humans and also in other animals. Early studies within the Gestalt tradition suggested that bilateral symmetry was one of the key factors in perceptual grouping. This is known as the Law of Symmetry. The role of symmetry in grouping and figure/ground organization has been confirmed in many studies. For instance, detection of reflectional symmetry is faster when this is a property of a single object. Studies of human perception and psychophysics have shown that detection of symmetry is fast, efficient and robust to perturbations. For example, symmetry can be detected with presentations between 100 and 150 milliseconds.
More recent neuroimaging studies have documented which brain regions are active during perception of symmetry. Sasaki et al. used functional magnetic resonance imaging (fMRI) to compare responses for patterns with symmetrical or random dots. A strong activity was present in extrastriate regions of the occipital cortex but not in the primary visual cortex. The extrastriate regions included V3A, V4, V7, and the lateral occipital complex (LOC). Electrophysiological studies have found a late posterior negativity that originates from the same areas. In general, a large part of the visual system seems to be involved in processing visual symmetry, and these areas involve similar networks to those responsible for detecting and recognising objects.
== In social interactions ==
People observe the symmetrical nature, often including asymmetrical balance, of social interactions in a variety of contexts. These include assessments of reciprocity, empathy, sympathy, apology, dialogue, respect, justice, and revenge.
Reflective equilibrium is the balance that may be attained through deliberative mutual adjustment among general principles and specific judgments.
Symmetrical interactions send the moral message "we are all the same" while asymmetrical interactions may send the message "I am special; better than you." Peer relationships, such as can be governed by the Golden Rule, are based on symmetry, whereas power relationships are based on asymmetry. Symmetrical relationships can to some degree be maintained by simple (game theory) strategies seen in symmetric games such as tit for tat.
== In the arts ==
There exists a list of journals and newsletters known to deal, at least in part, with symmetry and the arts.
=== In architecture ===
Symmetry finds its ways into architecture at every scale, from the overall external views of buildings such as Gothic cathedrals and The White House, through the layout of the individual floor plans, and down to the design of individual building elements such as tile mosaics. Islamic buildings such as the Taj Mahal and the Lotfollah mosque make elaborate use of symmetry both in their structure and in their ornamentation. Moorish buildings like the Alhambra are ornamented with complex patterns made using translational and reflection symmetries as well as rotations.
It has been said that only bad architects rely on a "symmetrical layout of blocks, masses and structures"; Modernist architecture, starting with International style, relies instead on "wings and balance of masses".
=== In pottery and metal vessels ===
Since the earliest uses of pottery wheels to help shape clay vessels, pottery has had a strong relationship to symmetry. Pottery created using a wheel acquires full rotational symmetry in its cross-section, while allowing substantial freedom of shape in the vertical direction. Upon this inherently symmetrical starting point, potters from ancient times onwards have added patterns that modify the rotational symmetry to achieve visual objectives.
Cast metal vessels lacked the inherent rotational symmetry of wheel-made pottery, but otherwise provided a similar opportunity to decorate their surfaces with patterns pleasing to those who used them. The ancient Chinese, for example, used symmetrical patterns in their bronze castings as early as the 17th century BC. Bronze vessels exhibited both a bilateral main motif and a repetitive translated border design.
=== In carpets and rugs ===
A long tradition of the use of symmetry in carpet and rug patterns spans a variety of cultures. American Navajo Indians used bold diagonals and rectangular motifs. Many Oriental rugs have intricate reflected centers and borders that translate a pattern. Not surprisingly, rectangular rugs have typically the symmetries of a rectangle—that is, motifs that are reflected across both the horizontal and vertical axes (see Klein four-group § Geometry).
=== In quilts ===
As quilts are made from square blocks (usually 9, 16, or 25 pieces to a block) with each smaller piece usually consisting of fabric triangles, the craft lends itself readily to the application of symmetry.
=== In other arts and crafts ===
Symmetries appear in the design of objects of all kinds. Examples include beadwork, furniture, sand paintings, knotwork, masks, and musical instruments. Symmetries are central to the art of M.C. Escher and the many applications of tessellation in art and craft forms such as wallpaper, ceramic tilework such as in Islamic geometric decoration, batik, ikat, carpet-making, and many kinds of textile and embroidery patterns.
Symmetry is also used in designing logos. By creating a logo on a grid and using the theory of symmetry, designers can organize their work, create a symmetric or asymmetrical design, determine the space between letters, determine how much negative space is required in the design, and how to accentuate parts of the logo to make it stand out.
=== In music ===
Symmetry is not restricted to the visual arts. Its role in the history of music touches many aspects of the creation and perception of music.
==== Musical form ====
Symmetry has been used as a formal constraint by many composers, such as the arch (swell) form (ABCBA) used by Steve Reich, Béla Bartók, and James Tenney. In classical music, Johann Sebastian Bach used the symmetry concepts of permutation and invariance.
==== Pitch structures ====
Symmetry is also an important consideration in the formation of scales and chords, traditional or tonal music being made up of non-symmetrical groups of pitches, such as the diatonic scale or the major chord. Symmetrical scales or chords, such as the whole tone scale, augmented chord, or diminished seventh chord (diminished-diminished seventh), are said to lack direction or a sense of forward motion, are ambiguous as to the key or tonal center, and have a less specific diatonic functionality. However, composers such as Alban Berg, Béla Bartók, and George Perle have used axes of symmetry and/or interval cycles in an analogous way to keys or non-tonal tonal centers. George Perle explains that "C–E, D–F♯, [and] Eb–G, are different instances of the same interval … the other kind of identity. … has to do with axes of symmetry. C–E belongs to a family of symmetrically related dyads as follows:"
Thus in addition to being part of the interval-4 family, C–E is also a part of the sum-4 family (with C equal to 0), that is, the family of dyads whose pitch class numbers sum to 4 modulo 12, such as C–E (0 + 4), C♯–E♭ (1 + 3), and D–D (2 + 2).
Interval cycles are symmetrical and thus non-diatonic. However, a seven pitch segment of C5 (the cycle of fifths, which are enharmonic with the cycle of fourths) will produce the diatonic major scale. Cyclic tonal progressions in the works of Romantic composers such as Gustav Mahler and Richard Wagner form a link with the cyclic pitch successions in the atonal music of Modernists such as Bartók, Alexander Scriabin, Edgard Varèse, and the Vienna school. At the same time, these progressions signal the end of tonality.
The first extended composition consistently based on symmetrical pitch relations was probably Alban Berg's Quartet, Op. 3 (1910).
==== Equivalency ====
Tone rows or pitch class sets which are invariant under retrograde are horizontally symmetrical; those invariant under inversion are vertically symmetrical. See also Asymmetric rhythm.
=== In aesthetics ===
The relationship of symmetry to aesthetics is complex. Humans find bilateral symmetry in faces physically attractive; it indicates health and genetic fitness. Opposed to this is the tendency for excessive symmetry to be perceived as boring or uninteresting. Rudolf Arnheim suggested that people prefer shapes that have some symmetry, and enough complexity to make them interesting.
=== In literature ===
Symmetry can be found in various forms in literature, a simple example being the palindrome where a brief text reads the same forwards or backwards. Stories may have a symmetrical structure, such as the rise and fall pattern of Beowulf.
== See also ==
== Explanatory notes ==
== References ==
== Further reading ==
The Equation That Couldn't Be Solved: How Mathematical Genius Discovered the Language of Symmetry, Mario Livio, Souvenir Press, 2006, ISBN 0-285-63743-6.
== External links ==
International Symmetry Association (ISA)
Dutch: Symmetry Around a Point in the Plane Archived 2004-01-02 at the Wayback Machine
Chapman: Aesthetics of Symmetry
ISIS Symmetry Archived 2009-09-22 at the Wayback Machine
Symmetry, BBC Radio 4 discussion with Fay Dowker, Marcus du Sautoy & Ian Stewart (In Our Time, Apr. 19, 2007)
In classical mechanics, Euler's rotation equations are a vectorial quasilinear first-order ordinary differential equation describing the rotation of a rigid body, using a rotating reference frame with angular velocity ω whose axes are fixed to the body. They are named in honour of Leonhard Euler.
In the absence of applied torques, one obtains the Euler top. When the torques are due to gravity, there are special cases when the motion of the top is integrable.
== Formulation ==
Their general vector form is
{\displaystyle \mathbf {I} {\dot {\boldsymbol {\omega }}}+{\boldsymbol {\omega }}\times \left(\mathbf {I} {\boldsymbol {\omega }}\right)=\mathbf {M} .}
where M is the vector of applied torques and I is the inertia matrix.
The vector {\displaystyle {\dot {\boldsymbol {\omega }}}} is the angular acceleration. Again, note that all quantities are defined in the rotating reference frame.
In orthogonal principal axes of inertia coordinates the equations become
{\displaystyle {\begin{aligned}I_{1}\,{\dot {\omega }}_{1}+(I_{3}-I_{2})\,\omega _{2}\,\omega _{3}&=M_{1}\\I_{2}\,{\dot {\omega }}_{2}+(I_{1}-I_{3})\,\omega _{3}\,\omega _{1}&=M_{2}\\I_{3}\,{\dot {\omega }}_{3}+(I_{2}-I_{1})\,\omega _{1}\,\omega _{2}&=M_{3}\end{aligned}}}
where Mk are the components of the applied torques, Ik are the principal moments of inertia and ωk are the components of the angular velocity.
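These component equations lend themselves to direct numerical integration. The following is a minimal sketch (not from any particular library; the inertia values, timestep, and forward-Euler scheme are illustrative choices) of torque-free motion in the body frame. For torque-free motion the magnitude of the body-frame angular momentum is conserved, which gives a simple sanity check; an initial spin near the intermediate axis would also exhibit the instability known as the Dzhanibekov effect.

```python
import numpy as np

# Principal moments of inertia (kg*m^2) -- arbitrary example values.
I = np.array([1.0, 2.0, 3.0])

def euler_rhs(omega, torque):
    """Right-hand side of Euler's equations in principal axes:
    I_k * domega_k/dt = M_k - (I_j - I_i) * omega_j * omega_i (cyclic)."""
    w1, w2, w3 = omega
    M1, M2, M3 = torque
    return np.array([
        (M1 - (I[2] - I[1]) * w2 * w3) / I[0],
        (M2 - (I[0] - I[2]) * w3 * w1) / I[1],
        (M3 - (I[1] - I[0]) * w1 * w2) / I[2],
    ])

omega = np.array([0.1, 2.0, 0.1])   # initial angular velocity (rad/s)
dt = 1e-4
for _ in range(100_000):            # integrate 10 s of torque-free motion
    omega += dt * euler_rhs(omega, torque=np.zeros(3))

L = I * omega                        # angular momentum in the body frame
print(np.linalg.norm(L))             # |L| is conserved, up to integration error
```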
== Derivation ==
In an inertial frame of reference (subscripted "in"), Euler's second law states that the time derivative of the angular momentum L equals the applied torque:
{\displaystyle {\frac {d\mathbf {L} _{\text{in}}}{dt}}=\mathbf {M} _{\text{in}}}
For point particles such that the internal forces are central forces, this may be derived using Newton's second law.
For a rigid body, one has the relation between angular momentum and the moment of inertia Iin given as
{\displaystyle \mathbf {L} _{\text{in}}=\mathbf {I} _{\text{in}}{\boldsymbol {\omega }}}
In the inertial frame, the differential equation is not always helpful in solving for the motion of a general rotating rigid body, as both Iin and ω can change during the motion. One may instead change to a coordinate frame fixed in the rotating body, in which the moment of inertia tensor is constant. Using a reference frame such as that at the center of mass, the frame's position drops out of the equations.
In any rotating reference frame, the time derivative must be replaced so that the equation becomes
{\displaystyle \left({\frac {d\mathbf {L} }{dt}}\right)_{\mathrm {rot} }+{\boldsymbol {\omega }}\times \mathbf {L} =\mathbf {M} }
and so the cross product arises, see time derivative in rotating reference frame.
The vector components of the torque in the inertial and the rotating frames are related by
{\displaystyle \mathbf {M} _{\text{in}}=\mathbf {Q} \mathbf {M} ,}
where {\displaystyle \mathbf {Q} } is the rotation tensor (not rotation matrix), an orthogonal tensor related to the angular velocity vector by
{\displaystyle {\boldsymbol {\omega }}\times {\boldsymbol {u}}={\dot {\mathbf {Q} }}\mathbf {Q} ^{-1}{\boldsymbol {u}}}
for any vector u.
Now
{\displaystyle \mathbf {L} =\mathbf {I} {\boldsymbol {\omega }}}
is substituted and the time derivatives are taken in the rotating frame, while realizing that the particle positions and the inertia tensor do not depend on time. This leads to the general vector form of Euler's equations, which are valid in such a frame:
{\displaystyle \mathbf {I} {\dot {\boldsymbol {\omega }}}+{\boldsymbol {\omega }}\times \left(\mathbf {I} {\boldsymbol {\omega }}\right)=\mathbf {M} .}
The equations are also derived from Newton's laws in the discussion of the resultant torque.
More generally, by the tensor transform rules, any rank-2 tensor {\displaystyle \mathbf {T} } has a time derivative {\displaystyle \mathbf {\dot {T}} } such that for any vector {\displaystyle \mathbf {u} }, one has
{\displaystyle \mathbf {\dot {T}} \mathbf {u} ={\boldsymbol {\omega }}\times (\mathbf {T} \mathbf {u} )-\mathbf {T} ({\boldsymbol {\omega }}\times \mathbf {u} )}
This yields Euler's equations by plugging in
{\displaystyle {\frac {d}{dt}}\left(\mathbf {I} {\boldsymbol {\omega }}\right)=\mathbf {M} .}
=== Principal axes form ===
When choosing a frame so that its axes are aligned with the principal axes of the inertia tensor, its component matrix is diagonal, which further simplifies calculations. As described in the moment of inertia article, the angular momentum L can then be written
{\displaystyle \mathbf {L} =L_{1}\mathbf {e} _{1}+L_{2}\mathbf {e} _{2}+L_{3}\mathbf {e} _{3}=\sum _{i=1}^{3}I_{i}\omega _{i}\mathbf {e} _{i}}
In some frames not tied to the body, it is also possible to obtain such simple (diagonal-tensor) equations for the rate of change of the angular momentum. Then ω must be the angular velocity of the rotation of that frame's axes instead of the rotation of the body, and the chosen axes must still be principal axes of inertia. The resulting form of the Euler rotation equations is useful for rotation-symmetric objects that allow some of the principal axes of rotation to be chosen freely.
== Special case solutions ==
=== Torque-free precessions ===
Torque-free precessions are non-trivial solutions for the situation where the torque on the right-hand side is zero. When I is not constant in the external reference frame (i.e. the body is moving and its inertia tensor is not constantly diagonal) then I cannot be pulled through the derivative operator acting on L. In this case I(t) and ω(t) do change together in such a way that the derivative of their product is still zero. This motion can be visualized by Poinsot's construction.
== Generalized Euler equations ==
The Euler equations can be generalized to any simple Lie algebra. The original Euler equations come from fixing the Lie algebra to be {\displaystyle {\mathfrak {so}}(3)}, with generators {\displaystyle {t_{1},t_{2},t_{3}}} satisfying the relation {\displaystyle [t_{a},t_{b}]=\epsilon _{abc}t_{c}}. Then if {\displaystyle {\boldsymbol {\omega }}(t)=\sum _{a}\omega _{a}(t)t_{a}} (where {\displaystyle t} is a time coordinate, not to be confused with the basis vectors {\displaystyle t_{a}}) is an {\displaystyle {\mathfrak {so}}(3)}-valued function of time, and {\displaystyle \mathbf {I} =\mathrm {diag} (I_{1},I_{2},I_{3})} (with respect to the Lie algebra basis), then the (untorqued) original Euler equations can be written
{\displaystyle \mathbf {I} {\dot {\boldsymbol {\omega }}}=[\mathbf {I} {\boldsymbol {\omega }},{\boldsymbol {\omega }}].}
To define {\displaystyle \mathbf {I} } in a basis-independent way, it must be a self-adjoint map on the Lie algebra {\displaystyle {\mathfrak {g}}} with respect to the invariant bilinear form on {\displaystyle {\mathfrak {g}}}. This expression generalizes readily to an arbitrary simple Lie algebra, say in the standard classification of simple Lie algebras.
This can also be viewed as a Lax pair formulation of the generalized Euler equations, suggesting their integrability.
== See also ==
Euler angles
Dzhanibekov effect
Moment of inertia
Poinsot's ellipsoid
Rigid rotor
== References ==
C. A. Truesdell, III (1991) A First Course in Rational Continuum Mechanics. Vol. 1: General Concepts, 2nd ed., Academic Press. ISBN 0-12-701300-8. Sects. I.8-10.
C. A. Truesdell, III and R. A. Toupin (1960) The Classical Field Theories, in S. Flügge (ed.) Encyclopedia of Physics. Vol. III/1: Principles of Classical Mechanics and Field Theory, Springer-Verlag. Sects. 166–168, 196–197, and 294.
Landau L.D. and Lifshitz E.M. (1976) Mechanics, 3rd. ed., Pergamon Press. ISBN 0-08-021022-8 (hardcover) and ISBN 0-08-029141-4 (softcover).
Goldstein H. (1980) Classical Mechanics, 2nd ed., Addison-Wesley. ISBN 0-201-02918-9
Symon KR. (1971) Mechanics, 3rd. ed., Addison-Wesley. ISBN 0-201-07392-7
In physics, the number of degrees of freedom (DOF) of a mechanical system is the number of independent parameters required to completely specify its configuration or state. That number is an important property in the analysis of systems of bodies in mechanical engineering, structural engineering, aerospace engineering, robotics, and other fields.
As an example, the position of a single railcar (engine) moving along a track has one degree of freedom because the position of the car can be completely specified by a single number expressing its distance along the track from some chosen origin. A train of rigid cars connected by hinges to an engine still has only one degree of freedom because the positions of the cars behind the engine are constrained by the shape of the track.
For a second example, an automobile with a very stiff suspension can be considered to be a rigid body traveling on a plane (a flat, two-dimensional space). This body has three independent degrees of freedom consisting of two components of translation (which together specify its position) and one angle of rotation (which specifies its orientation). Skidding or drifting is a good example of an automobile's three independent degrees of freedom.
The position and orientation of a rigid body in space are defined by three components of translation and three components of rotation, which means that the body has six degrees of freedom.
To ensure that a mechanical device's degrees of freedom neither underconstrain nor overconstrain it, its design can be managed using the exact constraint method.
== Motions and dimensions ==
The position of an n-dimensional rigid body is defined by the rigid transformation, [T] = [A, d], where d is an n-dimensional translation and A is an n × n rotation matrix, which has n translational degrees of freedom and n(n − 1)/2 rotational degrees of freedom. The number of rotational degrees of freedom comes from the dimension of the rotation group SO(n).
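As a small illustration of this count (a trivial helper written for illustration, not an established API), the formula n + n(n − 1)/2 reproduces the familiar planar and spatial values:

```python
def rigid_body_dof(n: int) -> int:
    """Degrees of freedom of a rigid body in n-dimensional space:
    n translational plus n*(n-1)/2 rotational (the dimension of SO(n))."""
    return n + n * (n - 1) // 2

print(rigid_body_dof(2))  # 3  (two translations + one rotation, as for the car)
print(rigid_body_dof(3))  # 6  (three translations + three rotations)
```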
A non-rigid or deformable body may be thought of as a collection of many minute particles (an infinite number of DOFs); this is often approximated by a finite-DOF system. When motion involving large displacements is the main objective of study (e.g. for analyzing the motion of satellites), a deformable body may be approximated as a rigid body (or even a particle) in order to simplify the analysis.
The degree of freedom of a system can be viewed as the minimum number of coordinates required to specify a configuration. Applying this definition, we have:
For a single particle in a plane two coordinates define its location so it has two degrees of freedom;
A single particle in space requires three coordinates so it has three degrees of freedom;
Two particles in space have a combined six degrees of freedom;
If two particles in space are constrained to maintain a constant distance from each other, such as in the case of a diatomic molecule, then the six coordinates must satisfy a single constraint equation defined by the distance formula. This reduces the degree of freedom of the system to five, because the distance formula can be used to solve for the remaining coordinate once the other five are specified.
== Rigid bodies ==
A single rigid body has at most six degrees of freedom (6 DOF) 3T3R consisting of three translations 3T and three rotations 3R.
See also Euler angles.
For example, the motion of a ship at sea has the six degrees of freedom of a rigid body, and is described as:
Translation and rotation:
Walking (or surging): Moving forward and backward;
Strafing (or swaying): Moving left and right;
Elevating (or heaving): Moving up and down;
Roll rotation: Pivots side to side;
Pitch rotation: Tilts forward and backward;
Yaw rotation: Swivels left and right;
For example, the trajectory of an airplane in flight has three degrees of freedom and its attitude along the trajectory has three degrees of freedom, for a total of six degrees of freedom.
For rolling in flight and ship dynamics, see roll (aviation) and roll (ship motion), respectively.
An important derivative is the roll rate (or roll velocity), which is the angular speed at which an aircraft can change its roll attitude, and is typically expressed in degrees per second.
For pitching in flight and ship dynamics, see pitch (aviation) and pitch (ship motion), respectively.
For yawing in flight and ship dynamics, see yaw (aviation) and yaw (ship motion), respectively.
One important derivative is the yaw rate (or yaw velocity), the angular speed of yaw rotation, measured with a yaw rate sensor.
Another important quantity is the yawing moment, the torque about the yaw axis, which is important for adverse yaw in aircraft dynamics.
=== Lower mobility ===
Physical constraints may limit the number of degrees of freedom of a single rigid body. For example, a block sliding around on a flat table has 3 DOF 2T1R consisting of two translations 2T and 1 rotation 1R. An XYZ positioning robot like SCARA has 3 DOF 3T lower mobility.
== Mobility formula ==
The mobility formula counts the number of parameters that define the configuration of a set of rigid bodies that are constrained by joints connecting these bodies.
Consider a system of n rigid bodies moving in space, which has 6n degrees of freedom measured relative to a fixed frame. In order to count the degrees of freedom of this system, include the fixed body in the count of bodies, so that mobility is independent of the choice of the body that forms the fixed frame. Then the degree-of-freedom of the unconstrained system of N = n + 1 is
{\displaystyle M=6n=6(N-1),\!}
because the fixed body has zero degrees of freedom relative to itself.
Joints that connect bodies in this system remove degrees of freedom and reduce mobility. Specifically, hinges and sliders each impose five constraints and therefore remove five degrees of freedom. It is convenient to define the number of constraints c that a joint imposes in terms of the joint's freedom f, where c = 6 − f. A hinge or slider, each being a one-degree-of-freedom joint, has f = 1 and therefore c = 6 − 1 = 5.
The result is that the mobility of a system formed from n moving links and j joints each with freedom fi, i = 1, ..., j, is given by
{\displaystyle M=6n-\sum _{i=1}^{j}\ (6-f_{i})=6(N-1-j)+\sum _{i=1}^{j}\ f_{i}}
Recall that N includes the fixed link.
There are two important special cases: (i) a simple open chain, and (ii) a simple closed chain.
A single open chain consists of n moving links connected end to end by n joints, with one end connected to a ground link. Thus, in this case N = j + 1 and the mobility of the chain is
{\displaystyle M=\sum _{i=1}^{j}\ f_{i}}
For a simple closed chain, n moving links are connected end-to-end by n + 1 joints such that the two ends are connected to the ground link forming a loop. In this case, we have N = j and the mobility of the chain is
{\displaystyle M=\sum _{i=1}^{j}\ f_{i}-6}
An example of a simple open chain is a serial robot manipulator. These robotic systems are constructed from a series of links connected by six one degree-of-freedom revolute or prismatic joints, so the system has six degrees of freedom.
An example of a simple closed chain is the RSSR spatial four-bar linkage. The sum of the freedom of these joints is eight, so the mobility of the linkage is two, where one of the degrees of freedom is the rotation of the coupler around the line joining the two S joints.
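The mobility formula is straightforward to evaluate in code. Below is a minimal sketch (the helper name and call signature are invented for illustration) that reproduces the two examples above; the spherical S joints of the RSSR linkage contribute three freedoms each:

```python
def mobility(n_links: int, joint_freedoms: list[int], modifier: int = 6) -> int:
    """Kutzbach/Gruebler mobility: M = modifier*(N - 1 - j) + sum(f_i).
    n_links counts ALL links including the fixed frame (N).
    modifier is 6 for spatial systems, 3 for planar or spherical ones."""
    j = len(joint_freedoms)
    return modifier * (n_links - 1 - j) + sum(joint_freedoms)

# Serial 6R arm: 7 links (6 moving + ground), six 1-DOF joints -> M = 6.
print(mobility(7, [1] * 6))
# RSSR spatial four-bar: 4 links, joints R(1), S(3), S(3), R(1) -> M = 2.
print(mobility(4, [1, 3, 3, 1]))
```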
=== Planar and spherical movement ===
It is common practice to design the linkage system so that the movement of all of the bodies are constrained to lie on parallel planes, to form what is known as a planar linkage. It is also possible to construct the linkage system so that all of the bodies move on concentric spheres, forming a spherical linkage. In both cases, the degrees of freedom of the links in each system are now three rather than six, and the constraints imposed by joints are now c = 3 − f.
In this case, the mobility formula is given by
{\displaystyle M=3(N-1-j)+\sum _{i=1}^{j}\ f_{i},}
and the special cases become
planar or spherical simple open chain,
{\displaystyle M=\sum _{i=1}^{j}\ f_{i},}
planar or spherical simple closed chain,
{\displaystyle M=\sum _{i=1}^{j}\ f_{i}-3.}
An example of a planar simple closed chain is the planar four-bar linkage, which is a four-bar loop with four one degree-of-freedom joints and therefore has mobility M = 1.
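Using the same hypothetical mobility helper from the earlier sketch, the planar case just swaps the modifier 6 for 3:

```python
# Planar four-bar: 4 links, four 1-DOF revolute joints, planar modifier 3.
print(mobility(4, [1, 1, 1, 1], modifier=3))  # 3*(4 - 1 - 4) + 4 = 1
```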
=== Systems of bodies ===
A system with several bodies would have a combined DOF that is the sum of the DOFs of the bodies, less the internal constraints they may have on relative motion. A mechanism or linkage containing a number of connected rigid bodies may have more than the degrees of freedom for a single rigid body. Here the term degrees of freedom is used to describe the number of parameters needed to specify the spatial pose of a linkage. It is also defined in context of the configuration space, task space and workspace of a robot.
A specific type of linkage is the open kinematic chain, where a set of rigid links are connected at joints; a joint may provide one DOF (hinge/sliding), or two (cylindrical). Such chains occur commonly in robotics, biomechanics, and for satellites and other space structures. A human arm is considered to have seven DOFs. A shoulder gives pitch, yaw, and roll, an elbow allows for pitch, and a wrist allows for pitch, yaw and roll. Only 3 of those movements would be necessary to move the hand to any point in space, but people would lack the ability to grasp things from different angles or directions. A robot (or object) that has mechanisms to control all 6 physical DOF is said to be holonomic. An object with fewer controllable DOFs than total DOFs is said to be non-holonomic, and an object with more controllable DOFs than total DOFs (such as the human arm) is said to be redundant. Note, however, that the arm's roll is not fully redundant in practice: the wrist and shoulder roll DOFs complement each other, since neither can rotate a full 360 degrees on its own.
Each degree of freedom corresponds to a distinct, independent movement that the system can make.
In mobile robotics, a car-like robot can reach any position and orientation in 2-D space, so it needs 3 DOFs to describe its pose, but at any point, you can move it only by a forward motion and a steering angle. So it has two control DOFs and three representational DOFs; i.e. it is non-holonomic. A fixed-wing aircraft, with 3–4 control DOFs (forward motion, roll, pitch, and to a limited extent, yaw) in a 3-D space, is also non-holonomic, as it cannot move directly up/down or left/right.
A summary of formulas and methods for computing the degrees-of-freedom in mechanical systems has been given by Pennestri, Cavacece, and Vita.
== Electrical engineering ==
In electrical engineering, degrees of freedom is often used to describe the number of directions in which a phased array antenna can form either beams or nulls. It is equal to one less than the number of elements contained in the array, as one element is used as a reference against which either constructive or destructive interference may be applied using each of the remaining antenna elements. Practice differs between radar and communication links: beam steering is more prevalent for radar applications, while null steering is more prevalent for interference suppression in communication links.
== See also ==
Gimbal lock – Loss of one degree of freedom in a three-dimensional, three-gimbal mechanism
Kinematics – Branch of physics describing the motion of objects without considering forces
Kinematic pair – Connection between two physical objects which constrains their relative movement
XR-2 – Educational robot
== References ==
In mathematics and physics, many topics are named in honor of Swiss mathematician Leonhard Euler (1707–1783), who made many important discoveries and innovations. Many of these items named after Euler include their own unique function, equation, formula, identity, number (single or sequence), or other mathematical entity. Many of these entities have been given simple yet ambiguous names such as Euler's function, Euler's equation, and Euler's formula.
Euler's work touched upon so many fields that he is often the earliest written reference on a given matter. In an effort to avoid naming everything after Euler, some discoveries and theorems are attributed to the first person to have proved them after Euler.
== Conjectures ==
Euler's sum of powers conjecture – disproved for exponents 4 and 5 during the 20th century; unsolved for higher exponents
Euler's Graeco-Latin square conjecture – proved to be true for {\displaystyle n=6} and disproved otherwise, during the 20th century
== Equations ==
Usually, Euler's equation refers to one of (or a set of) differential equations (DEs). It is customary to classify them into ODEs and PDEs.
Otherwise, Euler's equation may refer to a non-differential equation, as in these three cases:
Euler–Lotka equation, a characteristic equation employed in mathematical demography
Euler's pump and turbine equation
Euler transform, used to accelerate the convergence of an alternating series; it is also frequently applied to the hypergeometric series
=== Ordinary differential equations ===
Euler rotation equations, a set of first-order ODEs concerning the rotations of a rigid body.
Euler–Cauchy equation, a linear equidimensional second-order ODE with variable coefficients. Its second-order version can emerge from Laplace's equation in polar coordinates.
Euler–Bernoulli beam equation, a fourth-order ODE concerning the elasticity of structural beams.
Euler's differential equation, a first order nonlinear ordinary differential equation
=== Partial differential equations ===
Euler conservation equations, a set of quasilinear first-order hyperbolic equations used in fluid dynamics for inviscid flows. In the (Froude) limit of no external field, they are conservation equations.
Euler–Tricomi equation – a second-order PDE emerging from Euler conservation equations.
Euler–Poisson–Darboux equation, a second-order PDE playing important role in solving the wave equation.
Euler–Lagrange equation, a second-order PDE emerging from minimization problems in calculus of variations.
== Formulas ==
== Functions ==
The Euler function, a modular form that is a prototypical q-series.
Euler's totient function (or Euler phi (φ) function) in number theory, counting the number of coprime integers less than an integer.
Euler hypergeometric integral
Euler–Riemann zeta function
== Identities ==
Euler's identity, e^{iπ} + 1 = 0.
Euler's four-square identity, which shows that the product of two sums of four squares can itself be expressed as the sum of four squares.
Euler's identity may also refer to the pentagonal number theorem.
== Numbers ==
Euler's number, e = 2.71828…, the base of the natural logarithm
Euler's idoneal numbers, a set of 65 or possibly 66 or 67 integers with special properties
Euler numbers, integers occurring in the coefficients of the Taylor series of 1/cosh t
Eulerian numbers count certain types of permutations.
Euler number (physics), the cavitation number in fluid dynamics.
Euler number (algebraic topology) – now, Euler characteristic, classically the number of vertices minus edges plus faces of a polyhedron.
Euler number (3-manifold topology) – see Seifert fiber space
Lucky numbers of Euler
Euler's constant gamma (γ), also known as the Euler–Mascheroni constant
Eulerian integers, more commonly called Eisenstein integers, the algebraic integers of form a + bω where ω is a complex cube root of 1.
Euler–Gompertz constant
== Theorems ==
Euler's homogeneous function theorem – A homogeneous function is a linear combination of its partial derivatives
Euler's infinite tetration theorem – About the limit of iterated exponentiation
Euler's rotation theorem – Movement with a fixed point is rotation
Euler's theorem (differential geometry) – Orthogonality of the directions of the principal curvatures of a surface
Euler's theorem in geometry – On distance between centers of a triangle
Euler's quadrilateral theorem – Relation between the sides of a convex quadrilateral and its diagonals
Euclid–Euler theorem, characterizing even perfect numbers
Euler's theorem, on modular exponentiation
Euler's partition theorem relating the product and series representations of the Euler function Π(1 − x^n)
Goldbach–Euler theorem, stating that sum of 1/(k − 1), where k ranges over positive integers of the form m^n for m ≥ 2 and n ≥ 2, equals 1
Gram–Euler theorem
== Laws ==
Euler's first law, the sum of the external forces acting on a rigid body is equal to the rate of change of linear momentum of the body.
Euler's second law, the sum of the external moments about a point is equal to the rate of change of angular momentum about that point.
== Other things ==
== Topics by field of study ==
Selected topics from above, grouped by subject, and additional topics from the fields of music and physical systems
=== Analysis: derivatives, integrals, and logarithms ===
=== Geometry and spatial arrangement ===
=== Graph theory ===
Euler characteristic (formerly called Euler number) in algebraic topology and topological graph theory, and the corresponding Euler's formula
{\textstyle \chi (S^{2})=F-E+V=2}
Eulerian circuit, Euler cycle or Eulerian path – a path through a graph that takes each edge once
Eulerian graph has all its vertices spanned by an Eulerian path
Euler class
Euler diagram – popularly called "Venn diagrams", although some use this term only for a subclass of Euler diagrams.
Euler tour technique
=== Music ===
Euler–Fokker genus
Euler's tritone
=== Number theory ===
Euler's criterion – quadratic residues modulo by primes
Euler product – infinite product expansion, indexed by prime numbers of a Dirichlet series
Euler pseudoprime
Euler–Jacobi pseudoprime
Euler's totient function (or Euler phi (φ) function) in number theory, counting the number of coprime integers less than an integer.
Euler system
Euler's factorization method
=== Physical systems ===
=== Polynomials ===
Euler's homogeneous function theorem, a theorem about homogeneous polynomials.
Euler polynomials
Euler spline – splines composed of arcs using Euler polynomials
== See also ==
Contributions of Leonhard Euler to mathematics
== Notes ==
Soft-body dynamics is a field of computer graphics that focuses on visually realistic physical simulations of the motion and properties of deformable objects (or soft bodies). The applications are mostly in video games and films. Unlike in simulation of rigid bodies, the shape of soft bodies can change, meaning that the relative distance of two points on the object is not fixed. While the relative distances of points are not fixed, the body is expected to retain its shape to some degree (unlike a fluid). The scope of soft body dynamics is quite broad, including simulation of soft organic materials such as muscle, fat, hair and vegetation, as well as other deformable materials such as clothing and fabric. Generally, these methods only provide visually plausible emulations rather than accurate scientific/engineering simulations, though there is some crossover with scientific methods, particularly in the case of finite element simulations. Several physics engines currently provide software for soft-body simulation.
== Deformable solids ==
The simulation of volumetric solid soft bodies can be realised by using a variety of approaches.
=== Spring/mass models ===
In this approach, the body is modeled as a set of point masses (nodes) connected by ideal weightless elastic springs obeying some variant of Hooke's law. The nodes may either derive from the edges of a two-dimensional polygonal mesh representation of the surface of the object, or from a three-dimensional network of nodes and edges modeling the internal structure of the object (or even a one-dimensional system of links, if for example a rope or hair strand is being simulated). Additional springs between nodes can be added, or the force law of the springs modified, to achieve desired effects. Applying Newton's second law to the point masses including the forces applied by the springs and any external forces (due to contact, gravity, air resistance, wind, and so on) gives a system of differential equations for the motion of the nodes, which is solved by standard numerical schemes for solving ODEs. Rendering of a three-dimensional mass-spring lattice is often done using free-form deformation, in which the rendered mesh is embedded in the lattice and distorted to conform to the shape of the lattice as it evolves. Setting all point masses to zero yields the stretched grid method, aimed at solving several engineering problems relating to elastic grid behavior. These are sometimes known as mass-spring-damper models. In pressurized soft bodies, the spring-mass model is combined with a pressure force based on the ideal gas law.
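A minimal sketch of this approach (two nodes and one spring; Hooke's law with simple velocity damping and a semi-implicit Euler step; all parameter values are illustrative):

```python
import numpy as np

# Two nodes joined by one spring, falling under gravity.
pos  = np.array([[0.0, 0.0], [1.2, 0.0]])   # node positions (m)
vel  = np.zeros((2, 2))                      # node velocities (m/s)
mass = np.array([1.0, 1.0])                  # node masses (kg)
rest, k, damping = 1.0, 50.0, 0.2            # rest length, stiffness, damping
g = np.array([0.0, -9.81])

dt = 1e-3
for _ in range(1000):
    d = pos[1] - pos[0]
    length = np.linalg.norm(d)
    direction = d / length
    # Hooke's law: force proportional to extension beyond the rest length.
    f_spring = k * (length - rest) * direction
    force = np.array([f_spring, -f_spring]) + mass[:, None] * g
    force -= damping * vel                   # simple velocity damping
    vel += dt * force / mass[:, None]        # semi-implicit (symplectic) Euler
    pos += dt * vel
```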
=== Finite element simulation ===
This is a more physically accurate approach, which uses the widely used finite element method to solve the partial differential equations which govern the dynamics of an elastic material. The body is modeled as a three-dimensional elastic continuum by breaking it into a large number of solid elements which fit together, and solving for the stresses and strains in each element using a model of the material. The elements are typically tetrahedral, the nodes being the vertices of the tetrahedra (relatively simple methods exist to tetrahedralize a three dimensional region bounded by a polygon mesh into tetrahedra, similarly to how a two-dimensional polygon may be triangulated into triangles). The strain (which measures the local deformation of the points of the material from their rest state) is quantified by the strain tensor
{\displaystyle {\boldsymbol {\epsilon }}}. The stress (which measures the local forces per-unit area in all directions acting on the material) is quantified by the Cauchy stress tensor {\displaystyle {\boldsymbol {\sigma }}}. Given the current local strain, the local stress can be computed via the generalized form of Hooke's law:
{\displaystyle {\boldsymbol {\sigma }}={\mathsf {C}}{\boldsymbol {\varepsilon }}}
where {\displaystyle {\mathsf {C}}} is the elasticity tensor, which encodes the material properties (parametrized in linear elasticity for an isotropic material by the Poisson ratio and Young's modulus).
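For an isotropic material the elasticity tensor collapses to the two Lamé parameters, so the stress follows from the strain in a couple of lines. A small sketch (working directly on 3×3 tensors; the material values are illustrative):

```python
import numpy as np

def isotropic_stress(strain, young, poisson):
    """Linear isotropic Hooke's law: sigma = lam*tr(eps)*I + 2*mu*eps,
    with Lame parameters derived from Young's modulus and Poisson ratio."""
    lam = young * poisson / ((1 + poisson) * (1 - 2 * poisson))
    mu = young / (2 * (1 + poisson))
    return lam * np.trace(strain) * np.eye(3) + 2 * mu * strain

eps = np.diag([1e-3, 0.0, 0.0])          # uniaxial stretch along x
print(isotropic_stress(eps, young=1e6, poisson=0.3))
```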
The equation of motion of the element nodes is obtained by integrating the stress field over each element and relating this, via Newton's second law, to the node accelerations.
Pixelux (developers of the Digital Molecular Matter system) use a finite-element-based approach for their soft bodies, using a tetrahedral mesh and converting the stress tensor directly into node forces. Rendering is done via a form of free-form deformation.
=== Energy minimization methods ===
This approach is motivated by variational principles and the physics of surfaces, which dictate that a constrained surface will assume the shape which minimizes the total energy of deformation (analogous to a soap bubble). Expressing the energy of a surface in terms of its local deformation (the energy is due to a combination of stretching and bending), the local force on the surface is given by differentiating the energy with respect to position, yielding an equation of motion which can be solved in the standard ways.
=== Shape matching ===
In this scheme, penalty forces or constraints are applied to the model to drive it towards its original shape (i.e. the material behaves as if it has shape memory). To conserve momentum the rotation of the body must be estimated properly, for example via polar decomposition. To approximate finite element simulation, shape matching can be applied to three dimensional lattices and multiple shape matching constraints blended.
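A minimal sketch of a single shape-matching step (equal node masses assumed; the rotation is extracted by polar decomposition computed via SVD, one common choice):

```python
import numpy as np

def shape_match_step(x, x0, stiffness=0.5):
    """Pull deformed positions x toward the best rigid transform of the
    rest shape x0 (both arrays of shape (n, 3); equal masses assumed)."""
    c, c0 = x.mean(axis=0), x0.mean(axis=0)
    p, q = x - c, x0 - c0
    A = p.T @ q                       # covariance of deformed vs. rest shape
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt                        # polar decomposition: rotational part
    if np.linalg.det(R) < 0:          # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    goals = (R @ q.T).T + c           # rigidly transformed rest shape
    return x + stiffness * (goals - x)
```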
=== Rigid-body based deformation ===
Deformation can also be handled by a traditional rigid-body physics engine, modeling the soft-body motion using a network of multiple rigid bodies connected by constraints, and using (for example) matrix-palette skinning to generate a surface mesh for rendering. This is the approach used for deformable objects in Havok Destruction.
== Cloth simulation ==
In the context of computer graphics, cloth simulation refers to the simulation of soft bodies in the form of two dimensional continuum elastic membranes, that is, for this purpose, the actual structure of real cloth on the yarn level can be ignored (though modeling cloth on the yarn level has been tried). Via rendering effects, this can produce a visually plausible emulation of textiles and clothing, used in a variety of contexts in video games, animation, and film. It can also be used to simulate two dimensional sheets of materials other than textiles, such as deformable metal panels or vegetation. In video games it is often used to enhance the realism of clothed animated characters.
Cloth simulators are generally based on mass-spring models, but a distinction must be made between force-based and position-based solvers.
=== Force-based cloth ===
The mass-spring model (obtained from a polygonal mesh representation of the cloth) determines the internal spring forces acting on the nodes at each timestep (in combination with gravity and applied forces). Newton's second law gives equations of motion which can be solved via standard ODE solvers. Creating high-resolution cloth with realistic stiffness is not possible, however, with simple explicit solvers (such as forward Euler integration), unless the timestep is made too small for interactive applications (since, as is well known, explicit integrators are numerically unstable for sufficiently stiff systems). Therefore, implicit solvers must be used, requiring solution of a large sparse matrix system (via e.g. the conjugate gradient method), which itself may also be difficult to achieve at interactive frame rates. An alternative is to use an explicit method with low stiffness, with ad hoc methods to avoid instability and excessive stretching (e.g. strain limiting corrections).
=== Position-based dynamics ===
To avoid needing to do an expensive implicit solution of a system of ODEs, many real-time cloth simulators (notably PhysX, Havok Cloth, and Maya nCloth) use position based dynamics (PBD), an approach based on constraint relaxation. The mass-spring model is converted into a system of constraints, which demands that the distance between the connected nodes be equal to the initial distance. This system is solved sequentially and iteratively, by directly moving nodes to satisfy each constraint, until sufficiently stiff cloth is obtained. This is similar to a Gauss-Seidel solution of the implicit matrix system for the mass-spring model. Care must be taken though to solve the constraints in the same sequence each timestep, to avoid spurious oscillations, and to make sure that the constraints do not violate linear and angular momentum conservation. Additional position constraints can be applied, for example to keep the nodes within desired regions of space (sufficiently close to an animated model for example), or to maintain the body's overall shape via shape matching.
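A minimal sketch of the core PBD projection for distance constraints (equal masses assumed, so each correction is split evenly; in full PBD the split is weighted by the nodes' inverse masses):

```python
import numpy as np

def project_distance_constraints(pos, edges, rest, iterations=10):
    """Iteratively move node pairs so each edge returns to its rest length.
    pos: (n, 3) float positions; edges: list of (i, j); rest: lengths."""
    for _ in range(iterations):
        for (i, j), r in zip(edges, rest):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            if dist == 0.0:
                continue
            correction = 0.5 * (dist - r) * d / dist   # equal masses: split 50/50
            pos[i] += correction
            pos[j] -= correction
    return pos
```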
== Collision detection for deformable objects ==
Realistic interaction of simulated soft objects with their environment may be important for obtaining visually realistic results. Cloth self-intersection is important in some applications for acceptably realistic simulated garments. This is challenging to achieve at interactive frame rates, particularly in the case of detecting and resolving self collisions and mutual collisions between two or more deformable objects.
Collision detection may be discrete/a posteriori (meaning objects are advanced in time through a pre-determined interval, and then any penetrations detected and resolved), or continuous/a priori (objects are advanced only until a collision occurs, and the collision is handled before proceeding). The former is easier to implement and faster, but leads to failure to detect collisions (or detection of spurious collisions) if objects move fast enough. Real-time systems generally have to use discrete collision detection, with other ad hoc ways to avoid failing to detect collisions.
Detection of collisions between cloth and environmental objects with a well defined "inside" is straightforward since the system can detect unambiguously whether the cloth mesh vertices and faces are intersecting the body and resolve them accordingly. If a well defined "inside" does not exist (e.g. in the case of collision with a mesh which does not form a closed boundary), an "inside" may be constructed via extrusion. Mutual or self-collision detection for soft bodies defined by tetrahedra is straightforward, since it reduces to detection of collisions between solid tetrahedra.
However, detection of collisions between two polygonal cloths (or collision of a cloth with itself) via discrete collision detection is much more difficult, since there is no unambiguous way to locally detect after a timestep whether a cloth node which has penetrated is on the "wrong" side or not. Solutions involve either using the history of the cloth motion to determine if an intersection event has occurred, or doing a global analysis of the cloth state to detect and resolve self-intersections. Pixar has presented a method which uses a global topological analysis of mesh intersections in configuration space to detect and resolve self-interpenetration of cloth. Currently, this is generally too computationally expensive for real-time cloth systems.
To do collision detection efficiently, primitives which are certainly not colliding must be identified as soon as possible and discarded from consideration to avoid wasting time.
To do this, some form of spatial subdivision scheme is essential, to avoid a brute-force test of {\displaystyle O[n^{2}]} primitive collisions; a minimal hashed-grid broad phase is sketched after the list below. Approaches used include:
Bounding volume hierarchies (AABB trees, OBB trees, sphere trees)
Grids, either uniform (using hashing for memory efficiency) or hierarchical (e.g. Octree, kd-tree)
Coherence-exploiting schemes, such as sweep and prune with insertion sort, or tree-tree collisions with front tracking.
Hybrid methods involving a combination of various of these schemes, e.g. a coarse AABB tree plus sweep-and-prune with coherence between colliding leaves.
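The hashed uniform grid mentioned above can be sketched briefly (cell size and hashing scheme are illustrative; for the same-or-neighbouring-cell test to be conservative, the cell size should be at least the largest object's diameter):

```python
from collections import defaultdict
from itertools import combinations, product

def broad_phase_pairs(centers, cell_size):
    """Hashed uniform grid: bin object centres into cells, then emit only
    pairs whose cells are identical or adjacent as narrow-phase candidates."""
    grid = defaultdict(list)
    for idx, (x, y, z) in enumerate(centers):
        cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        grid[cell].append(idx)

    pairs = set()
    for (cx, cy, cz), members in grid.items():
        pairs.update(combinations(sorted(members), 2))      # same-cell pairs
        for dx, dy, dz in product((-1, 0, 1), repeat=3):    # 26 neighbours
            if (dx, dy, dz) == (0, 0, 0):
                continue
            for other in grid.get((cx + dx, cy + dy, cz + dz), ()):
                for m in members:
                    if m != other:
                        pairs.add((min(m, other), max(m, other)))
    return pairs
```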
== Other applications ==
Other effects which may be simulated via the methods of soft-body dynamics are:
Destructible materials: fracture of brittle solids, cutting of soft bodies, and tearing of cloth. The finite element method is especially suited to modelling fracture as it includes a realistic model of the distribution of internal stresses in the material, which physically is what determines when fracture occurs, according to fracture mechanics.
Plasticity (permanent deformation) and melting
Simulated hair, fur, and feathers
Simulated organs for biomedical applications
Simulating fluids in the context of computer graphics would not normally be considered soft-body dynamics, which is usually restricted to mean simulation of materials which have a tendency to retain their shape and form. In contrast, a fluid assumes the shape of whatever vessel contains it, as the particles are bound together by relatively weak forces.
== Software supporting soft body physics ==
=== Simulation engines ===
=== Games ===
== See also ==
Deformable body
Dynamical simulation
Rigid body dynamics
Strength of materials
Cloth modeling
Breast physics
== References ==
== External links ==
"The Animation of Natural Phenomena", CMU course on physically based animation, including deformable bodies
Soft body dynamics video example
Introductory article
Article by Thomas Jakobsen which explains the basics of the PBD method
The Physics Abstraction Layer (PAL) is an open-source cross-platform physical simulation API abstraction system. It is similar to a physics engine wrapper, however it is far more flexible providing extended abilities. PAL is free software, released under the BSD license.
PAL is a high-level interface for low-level physics engines used in games, simulation systems, and other 3D applications. It supports a number of dynamic simulation methodologies, including rigid body, liquids, soft body, ragdoll, and vehicle dynamics. PAL features a simple C++ API and intuitive objects (e.g. Solids, Joints, Actuators, Sensors, and Materials). It also features COLLADA, Scythe Physics Editor, and XML-based file storage.
The Physics Abstraction Layer provides a number of benefits over directly using a physics engine:
Flexibility – It allows developers to switch between different physics engines to see which engine provides their needs, as well as quickly testing a new engine.
Portable – Developers are able to use the physics engine which provides the best performance for different platforms, and are able to write platform independent code.
Security – If a middleware provider is acquired by another company or development is discontinued, developers can switch engines.
Scalable – The abstraction layer allows developers to run their code on handheld console platforms up to supercomputers.
Ease of use – Implementation details of the physics engine are abstracted, providing a cleaner interface to the developer.
Benchmarking – Researchers can directly compare the performance of various dynamic simulations systems.
PAL is designed around a pluggable abstract factory, allowing code to be written and compiled once while selecting among different physics engines at runtime, as well as supporting feature upgrades.
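The pluggable abstract-factory idea can be illustrated generically (this is a sketch of the pattern, not PAL's actual C++ API; every name below is hypothetical):

```python
class PhysicsFactory:
    """Registry of engine back-ends; the concrete engine is chosen at runtime."""
    _backends = {}

    @classmethod
    def register(cls, name, backend):
        cls._backends[name] = backend

    @classmethod
    def create_solid(cls, engine, *args, **kwargs):
        return cls._backends[engine].make_solid(*args, **kwargs)

class BulletBackend:                       # hypothetical adapter for one engine
    @staticmethod
    def make_solid(mass):
        return {"engine": "bullet", "mass": mass}

PhysicsFactory.register("bullet", BulletBackend)
box = PhysicsFactory.create_solid("bullet", mass=2.0)  # engine chosen at runtime
```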
== Supported engines ==
PAL supports multiple physics engines, including:
Box2D
Bullet
Newton Game Dynamics
Open Dynamics Engine
PhysX (formerly NovodeX and incorporating Meqon)
Tokamak physics engine
== Supported file formats ==
PAL supports multiple file formats, including:
COLLADA
Scythe Physics Editor file format
XML
== Benchmark ==
The PAL project provides a set of standard benchmarks allowing developers to directly compare the physics engines and select the engine that provides the best solution in terms of computational efficiency and physical accuracy. Care should be taken when deciding on which engine to actually use though, since engines may be tweaked in ways which PAL doesn't support.
== References ==
== External links ==
The official PAL site
Interactive PAL benchmark
A physics engine is computer software that provides an approximate simulation of certain physical systems, typically classical dynamics, including rigid body dynamics (including collision detection), soft body dynamics, and fluid dynamics. It is of use in the domains of computer graphics, video games and film (CGI). Their main uses are in video games (typically as middleware), in which case the simulations are in real-time. The term is sometimes used more generally to describe any software system for simulating physical phenomena, such as high-performance scientific simulation.
== Description ==
There are generally two classes of physics engines: real-time and high-precision. High-precision physics engines require more processing power to calculate very precise physics and are usually used by scientists and computer-animated movies. Real-time physics engines—as used in video games and other forms of interactive computing—use simplified calculations and decreased accuracy to compute in time for the game to respond at an appropriate rate for game play. A physics engine is essentially a big calculator that does the mathematics needed to simulate physics.
=== Scientific engines ===
One of the first general purpose computers, ENIAC, was used as a very simple type of physics engine. It was used to design ballistics tables to help the United States military estimate where artillery shells of various mass would land when fired at varying angles and gunpowder charges, also accounting for drift caused by wind. The results were calculated a single time only, and were tabulated into printed tables handed out to the artillery commanders.
Physics engines have been commonly used on supercomputers since the 1980s to perform computational fluid dynamics modeling, where particles are assigned force vectors that are combined to show circulation. Due to the requirements of speed and high precision, special computer processors known as vector processors were developed to accelerate the calculations. The techniques can be used to model weather patterns in weather forecasting, wind tunnel data for designing air- and watercraft or motor vehicles including racecars, and thermal cooling of computer processors for improving heat sinks. As with many calculation-laden processes in computing, the accuracy of the simulation is related to the resolution of the simulation and the precision of the calculations; small fluctuations not modeled in the simulation can drastically change the predicted results.
Tire manufacturers use physics simulations to examine how new tire tread types will perform under wet and dry conditions, using new tire materials of varying flexibility and under different levels of weight loading.
=== Game engines ===
In most computer games, speed of the processors and gameplay are more important than accuracy of simulation. This leads to designs for physics engines that produce results in real-time but that replicate real world physics only for simple cases and typically with some approximation. More often than not, the simulation is geared towards providing a "perceptually correct" approximation rather than a real simulation. However some game engines, such as Source, use physics in puzzles or in combat situations. This requires more accurate physics so that, for example, the momentum of an object can knock over an obstacle or lift a sinking object.
Physically-based character animation in the past only used rigid body dynamics because they are faster and easier to calculate, but modern games and movies are starting to use soft body physics. Soft body physics are also used for particle effects, liquids and cloth. Some form of limited fluid dynamics simulation is sometimes provided to simulate water and other liquids as well as the flow of fire and explosions through the air.
==== Collision detection ====
Objects in games interact with the player, the environment, and each other. Typically, most 3D objects in games are represented by two separate meshes or shapes. One of these meshes is the highly complex and detailed shape visible to the player in the game, such as a vase with elegant curved and looping handles. For purpose of speed, a second, simplified invisible mesh is used to represent the object to the physics engine so that the physics engine treats the example vase as a simple cylinder. It would thus be impossible to insert a rod or fire a projectile through the handle holes on the vase, because the physics engine model is based on the cylinder and is unaware of the handles. The simplified mesh used for physics processing is often referred to as the collision geometry. This may be a bounding box, sphere, or convex hull. Engines that use bounding boxes or bounding spheres as the final shape for collision detection are considered extremely simple. Generally a bounding box is used for broad phase collision detection to narrow down the number of possible collisions before costly mesh on mesh collision detection is done in the narrow phase of collision detection.
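A minimal sketch of the broad-phase test described above, using axis-aligned bounding boxes: two boxes can only collide if their extents overlap on every axis.

```python
def aabb_overlap(min_a, max_a, min_b, max_b):
    """True if two axis-aligned boxes (given by min/max corner tuples)
    overlap; a separating gap on any coordinate rules the pair out."""
    return all(max_a[i] >= min_b[i] and max_b[i] >= min_a[i] for i in range(3))

print(aabb_overlap((0, 0, 0), (1, 1, 1), (0.5, 0.5, 0.5), (2, 2, 2)))  # True
print(aabb_overlap((0, 0, 0), (1, 1, 1), (3, 0, 0), (4, 1, 1)))        # False
```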
Another aspect of precision in discrete collision detection involves the framerate, or the number of moments in time per second when physics is calculated. Each frame is treated as separate from all other frames, and the space between frames is not calculated. A low framerate and a small fast-moving object causes a situation where the object does not move smoothly through space but instead seems to teleport from one point in space to the next as each frame is calculated. Projectiles moving at sufficiently high speeds will miss targets, if the target is small enough to fit in the gap between the calculated frames of the fast moving projectile. Various techniques are used to overcome this flaw, such as Second Life's representation of projectiles as arrows with invisible trailing tails longer than the gap in frames to collide with any object that might fit between the calculated frames. By contrast, continuous collision detection such as in Bullet or Havok does not suffer this problem.
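The tunnelling problem can be illustrated with a continuous test that solves for the time of impact within the frame rather than sampling only its endpoints. A minimal sketch for a point projectile against an infinite plane (a real engine sweeps volumes against meshes, but the idea is the same):

```python
import numpy as np

def time_of_impact(p, v, n, d, dt):
    """Continuous test of a point p moving with velocity v for one frame of
    length dt against the plane n.x = d. Returns the impact time, or None."""
    denom = np.dot(n, v)
    if abs(denom) < 1e-12:            # moving parallel to the plane
        return None
    t = (d - np.dot(n, p)) / denom
    return t if 0.0 <= t <= dt else None

# A fast projectile that would tunnel through the plane between frames:
p, v = np.array([0.0, 5.0, 0.0]), np.array([0.0, -1000.0, 0.0])
print(time_of_impact(p, v, n=np.array([0.0, 1.0, 0.0]), d=0.0, dt=1/60))  # 0.005
```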
==== Soft-body dynamics ====
An alternative to using bounding box-based rigid body physics systems is to use a finite element-based system. In such a system, a 3-dimensional, volumetric tessellation is created of the 3D object. The tessellation results in a number of finite elements which represent aspects of the object's physical properties such as toughness, plasticity, and volume preservation. Once constructed, the finite elements are used by a solver to model the stress within the 3D object. The stress can be used to drive fracture, deformation and other physical effects with a high degree of realism and uniqueness. As the number of modeled elements is increased, the engine's ability to model physical behavior increases. The visual representation of the 3D object is altered by the finite element system through the use of a deformation shader run on the CPU or GPU. Finite Element-based systems had been impractical for use in games due to the performance overhead and the lack of tools to create finite element representations out of 3D art objects. With higher performance processors and tools to rapidly create the volumetric tessellations, real-time finite element systems began to be used in games, beginning with Star Wars: The Force Unleashed that used Digital Molecular Matter for the deformation and destruction effects of wood, steel, flesh and plants using an algorithm developed by Dr. James O'Brien as a part of his PhD thesis.
==== Brownian motion ====
In the real world, physics is always active. There is a constant Brownian motion jitter to all particles in our universe as the forces push back and forth against each other. For a game physics engine, such constant active precision is unnecessarily wasting the limited CPU power, which can cause problems such as decreased framerate. Thus, games may put objects to "sleep" by disabling the computation of physics on objects that have not moved a particular distance within a certain amount of time. For example, in the 3D virtual world Second Life, if an object is resting on the floor and the object does not move beyond a minimal distance in about two seconds, then the physics calculations are disabled for the object and it becomes frozen in place. The object remains frozen until physics processing reactivates for the object after collision occurs with some other active physical object.
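A minimal sketch of such a sleep policy (the thresholds are illustrative): a body is put to sleep after remaining nearly motionless for a set time, and a later collision wakes it again.

```python
class SleepState:
    """Disable physics for bodies that have barely moved for a while."""
    def __init__(self, min_speed=0.01, time_to_sleep=2.0):
        self.min_speed, self.time_to_sleep = min_speed, time_to_sleep
        self.still_time, self.asleep = 0.0, False

    def update(self, speed, dt):
        if self.asleep:
            return
        self.still_time = self.still_time + dt if speed < self.min_speed else 0.0
        if self.still_time >= self.time_to_sleep:
            self.asleep = True            # stop simulating this body

    def wake_on_collision(self):
        self.asleep, self.still_time = False, 0.0
```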
==== Paradigms ====
Physics engines for video games typically have two core components: a collision detection/collision response system and a dynamics simulation component responsible for solving the forces affecting the simulated objects. Modern physics engines may also contain fluid simulations, animation control systems, and asset integration tools. There are three major paradigms for the physical simulation of solids:
Penalty methods, where interactions are commonly modelled as mass-spring systems. This type of engine is popular for deformable, or soft-body, physics (see the sketch after this list).
Constraint-based methods, where constraint equations approximating physical laws are solved.
Impulse-based methods, where impulses are applied to object interactions. However, this is actually just a special case of a constraint-based method combined with an iterative solver that propagates impulses throughout the system.
Finally, hybrid methods are possible that combine aspects of the above paradigms.
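A penalty-method contact, referred to in the first item above, can be sketched as a stiff spring-damper that resists interpenetration. The stiffness and damping values below are illustrative placeholders, not taken from any engine:

```python
K_SPRING = 5000.0   # penalty spring stiffness (N/m), illustrative
C_DAMP = 50.0       # contact damping coefficient (N*s/m), illustrative

def penalty_force(penetration_depth, approach_speed):
    """Force pushing two overlapping bodies apart along the contact normal."""
    if penetration_depth <= 0.0:
        return 0.0  # bodies are separated: no contact force
    # Spring term resists penetration; damper term resists closing speed.
    return K_SPRING * penetration_depth + C_DAMP * approach_speed
```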
== Limitations ==
A primary limit on physics engine realism is the approximate nature of constraint resolution and collision response, which stems from the slow convergence of the underlying algorithms. Collision detection computed at too low a frequency can result in objects passing through each other and then being repelled with an abnormal correction force. Likewise, the approximate reaction forces produced by the slow convergence of the typical projected Gauss-Seidel solver result in abnormal bouncing. Any type of free-moving compound physics object can demonstrate this problem, but it is especially prone to affecting chain links under high tension and wheeled objects with actively physical bearing surfaces. Higher precision reduces the positional/force errors, but at the cost of needing greater CPU power for the calculations.
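The slow convergence mentioned above can be made concrete with a minimal projected Gauss-Seidel loop of the kind used for contact solving. Here A and b stand for the matrix and vector of a contact linear complementarity problem and are illustrative placeholders:

```python
def pgs_solve(A, b, iterations=10):
    """Approximately solve A x = b subject to x >= 0 (impulses only push)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            residual = b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = max(0.0, residual / A[i][i])  # "project": clamp to >= 0
    return x

# Truncating the iteration count is what yields the approximate contact
# forces described above; more iterations cost more CPU time.
impulses = pgs_solve([[2.0, 1.0], [1.0, 2.0]], [1.0, 1.0])
```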
== Physics processing unit (PPU) ==
A physics processing unit (PPU) is a dedicated microprocessor designed to handle the calculations of physics, especially in the physics engine of video games. Examples of calculations involving a PPU might include rigid body dynamics, soft body dynamics, collision detection, fluid dynamics, hair and clothing simulation, finite element analysis, and fracturing of objects. The idea is that specialized processors offload time-consuming tasks from a computer's CPU, much like how a GPU performs graphics operations in the main CPU's place. The term was coined by Ageia's marketing to describe their PhysX chip to consumers. Several other technologies in the CPU-GPU spectrum have some features in common with it, although Ageia's solution was the only complete one designed, marketed, supported, and placed within a system exclusively as a PPU.
== General-purpose computing on graphics processing unit (GPGPU) ==
Hardware acceleration for physics processing is now usually provided by graphics processing units that support more general computation, a concept known as general-purpose computing on graphics processing units (GPGPU). AMD and NVIDIA provide support for rigid body dynamics computations on their latest graphics cards.
NVIDIA's GeForce 8 series supports a GPU-based Newtonian physics acceleration technology named Quantum Effects Technology. NVIDIA provides an SDK Toolkit for CUDA (Compute Unified Device Architecture) technology that offers both a low and high-level API to the GPU. For their GPUs, AMD offers a similar SDK, called Close to Metal (CTM), which provides a thin hardware interface.
PhysX is an example of a physics engine that can use GPGPU based hardware acceleration when it is available.
== Engines ==
=== Real-time physics engines ===
=== High precision physics engines ===
VisSim - Visual Simulation engine for linear and nonlinear dynamics
== See also ==
Game physics
Ragdoll physics
Procedural animation
Rigid body dynamics
Soft body dynamics
Physics processing unit
Cell microprocessor
Linear complementarity problem – impulse/constraint physics engines require a solver for such problems to handle multi-point collisions.
Finite Element Analysis
== References ==
== Further reading ==
Bourg, David M. (2002) Physics for Game Developers. O'Reilly & Associates.
== External links ==
"Physics Engines List". Database. Digital Rune. Mar 30, 2015 [2010]. Archived from the original on Mar 9, 2016. | Wikipedia/Physics_engine |
A Treatise on the Analytical Dynamics of Particles and Rigid Bodies is a treatise and textbook on analytical dynamics by British mathematician Sir Edmund Taylor Whittaker. Initially published in 1904 by the Cambridge University Press, the book focuses heavily on the three-body problem and has since gone through four editions and has been translated to German and Russian. Considered a landmark book in English mathematics and physics, the treatise presented what was the state-of-the-art at the time of publication and, remaining in print for more than a hundred years, it is considered a classic textbook in the subject. In addition to the original editions published in 1904, 1917, 1927, and 1937, a reprint of the fourth edition was released in 1989 with a new foreword by William Hunter McCrea.
The book was very successful and received many positive reviews. A 2014 "biography" of the book's development wrote that it had "remarkable longevity" and noted that the book remains more than historically influential. Among many others, G. H. Bryan, E. B. Wilson, P. Jourdain, G. D. Birkhoff, T. M. Cherry, and R. Thiele have reviewed the book. The 1905 review of the first edition by G. H. Bryan, who wrote reviews for the first two editions, sparked controversy among Cambridge University professors related to the use of Cambridge Tripos problems in textbooks. The book is mentioned in other textbooks as well, including Classical Mechanics, where Herbert Goldstein argued in 1980 that, although the book is outdated, it remains "a practically unique source for the discussion of many specialized topics."
== Background ==
Whittaker was 31 years old and working as a lecturer at Trinity College, Cambridge when the book was first published, less than ten years after he graduated from Cambridge University in 1895. Whittaker was ranked Second Wrangler in his Cambridge Tripos examination upon graduation in 1895 and was elected a Fellow of Trinity College, Cambridge the next year, where he remained as a lecturer until 1906. Whittaker published his first major work, the celebrated mathematics textbook A Course of Modern Analysis, in 1902, just two years before Analytical Dynamics. Following the success of these works, Whittaker was appointed Royal Astronomer of Ireland in 1906, a post which came with the role of Andrews Professor of Astronomy at Trinity College, Dublin.
The second half of the treatise is an expanded version of a report Whittaker completed on the three-body problem at the turn of the century at the request of the British Science Association (then called the British Association for the Advancement of Science). In 1898, the council of the British Association passed a resolution that "Mr E. T. Whittaker be requested to draw up a report on the planetary theory". A year later, Whittaker delivered his report, titled "Report on the progress of the solution of the problem of three bodies", in a lecture to the Association, which published it in 1900. He changed the name from the original "report on the planetary theory" to, in his own words, show "more definitely the aim of the Report", which covered the advances in theoretical astronomy that occurred between 1868 and 1898.
== Content ==
The book is a thorough treatment of analytical dynamics, covering topics in Hamiltonian mechanics, celestial mechanics, and the three-body problem. It has been noted that the book can be divided naturally into two parts: part one, consisting of the first twelve chapters, covers the basic principles of dynamics, giving a "state-of-the-art introduction to the principles of dynamics as they stood in the first years of the twentieth century", while part two, consisting of the final four chapters, is based on Whittaker's report on the three-body problem. While the first part remained mostly constant throughout the book's multiple editions, the second part was expanded considerably in the second and third editions.
=== History ===
The book's structure remained constant throughout its development, with sixteen total chapters, though the second and third editions added new sections throughout. Among other changes to the book, Whittaker expanded chapters fifteen and sixteen considerably and renamed chapters nine and sixteen. The title of chapter nine, The Principles of Least Action and Least Curvature, was The principles of Hamilton and Gauss before being renamed in the second edition, and the title of chapter sixteen, Integration by series, was Integration by trigonometric series before being renamed for the third edition. The first edition had 188 consecutively numbered sections, a count which increased in the second and third editions of the book. Among the most heavily altered, chapter fifteen went from fourteen sections to twenty-two while chapter sixteen doubled its section count from nine to eighteen.
Most of the differences between the second and third editions consisted of adding outlines of, and references to, works published after the book's second edition. The third edition included a major rewrite of chapters fifteen and sixteen to update the book in light of developments that had occurred in the eleven years since the publication of the second edition. The first fourteen chapters of the third edition were photolithographically reproduced from the second edition, with some corrections and added references. The new material contained a section on Synge's geometry of dynamics and tensor analysis. The fourth edition, published in 1937, differed from the third edition only in correcting some errors and supplying references to works published after the previous edition; aside from a new foreword by William Hunter McCrea in a 1989 reprint, this volume represented the book in its final form.
=== Synopsis ===
Part I of the book has been said to give a "state-of-the-art introduction to the principles of dynamics as they were understood in the first years of the twentieth century". The first chapter, on kinematic preliminaries, discusses the mathematical formalism required for describing the motion of rigid bodies. The second chapter begins the advanced study of mechanics, with topics beginning with relatively simple concepts such as motion and rest, frame of reference, mass, force, and work before discussing kinetic energy, introducing Lagrangian mechanics, and discussing impulsive motions. Chapter three discusses the integration of equations of motion at length, the conservation of energy and its role in reducing degrees of freedom, and separation of variables. Chapters one through three focus only on systems of point masses. The first concrete examples of dynamic systems, including the pendulum, central forces, and motion on a surface, are introduced in chapter four, where the methods of the previous chapters are employed in solving problems. Chapter five introduces the moment of inertia and angular momentum to prepare for the study of the dynamics of rigid bodies. Chapter six focuses on the solutions of problems in rigid body dynamics, with exercises including "motion of a rod on which an insect is crawling" and the motion of a spinning top. Chapter seven covers the theory of vibrations, a standard component of mechanics textbooks. Chapter eight introduces dissipative and nonholonomic systems, up to which point all the systems discussed were holonomic and conservative. Chapter nine discusses action principles, such as the principle of least action and the principle of least curvature. Chapters ten through twelve, the final three chapters of part one, discuss Hamiltonian dynamics at length.
Chapter thirteen begins part two and focuses on the applications of the material in part one to the three-body problem, where Whittaker introduces both the general problem and several restricted examples. Chapter fourteen includes a proof of Bruns's theorem and a similar proof of a theorem by Henri Poincaré on "the non-existence of a certain type of integrals in the problem of three bodies". Chapter fifteen, The General Theory of Orbits, describes the two-dimensional mechanics of a particle subject to conservative forces and discusses special-case solutions of the three-body problem. The last chapter includes discussions of solutions of the problems of previous chapters by integration of series, particularly trigonometric series.
== Reception ==
Receiving generally positive reviews throughout, the book has gone through four editions, each with multiple reviews. A reviewer of the first edition noted that the book contains "the outlines of a long series of researches for which hitherto it has been necessary to consult English, French, German, and Italian transactions". One of those first edition reviews, by George H. Bryan in 1905, began a controversy among Cambridge University professors related to the use of Cambridge Tripos problems in textbooks. In 1980, Herbert Goldstein mentioned the book in his famous textbook Classical Mechanics where he noted that it was outdated, but remained a useful reference for some specialised topics. While it is a historic textbook on the subject, presenting what was the state-of-the-art at the time of publication, a 2014 "biography" of the book's development pointed out that the book remains influential for more than historical purposes.
=== First edition ===
The first edition of the book received several reviews, including one by George H. Bryan in 1905 and one by Edwin Bidwell Wilson in 1906, as well as German reviews by Gustav Herglotz, also in 1906, and Emil Lampe in 1918. Lampe called the treatise an "excellent work" and stated that Cambridge's treatment of analytical dynamics "has had, as a consequence, that the English student is directed with great energy towards the study of mechanics in which he displays excellent performance, as can be gauged from the many, and not at all easy, problems appended at the end of each chapter of this book."
Bryan's initial book review, published in 1905, was a review of three books published by the Cambridge University Press at around the same time. Bryan opens the review by writing that, though he does not care for "University Presses competing with private firms", he believes "there can only be one opinion as to the series of standard treatises on higher mathematics emanating at the present time from Cambridge". He then noted that England's lack of national interest in higher scientific research, particularly mathematical research, placed it far behind most other important civilised countries, and that it was thus necessary for the "University Press to publish advanced mathematical works." He went on to write: "We may take it as certain that the present volumes will be keenly read in Germany and America, and will be taken as proofs that England contains good mathematicians." Bryan criticised chapter four, The Soluble Problems of Analytical Dynamics, for "mostly [representing] things which have no existence". Sparking the controversy later conducted under the title "Fictitious Problems in Mathematics", Bryan went on to write: "It is impossible for a particle to move on a smooth curve or surface because, in the first place, there is no such thing as a particle, and in the second place there is no such thing as a smooth curve or surface." Bryan concluded that the book is "essentially mathematical and advanced" and "written mainly for the advanced mathematician".
Wilson's review was published in 1906 and began with an expression of distaste for the "imminent encroachment by pure mathematics of territory that traditionally belonged to applied mathematics", but it quickly stated that at that time "there seems no immediate danger", as three recent books published by the Cambridge University Press were "highly important volumes" that "exhibit great mathematical power and attainments directed firmly and unerringly along the direction of physical research". Noting the novelty of many of the sections in the book, Wilson wrote that the book "breaks the barricade and opens the way to fruitful advance". He then noted that the book is advanced and, though it is self-contained, it is not for a beginning student. He elaborated by writing that "the book is mathematical in nature, written with a precision and developed with a logic sure to appeal to mathematicians" and that the "diversity of method taken with the compact style makes the book hard reading for any but the somewhat advanced student". Wilson also expressed a desire to see topics such as statistical mechanics added to the textbook.
=== Fictitious Problems in Mathematics ===
The review George H. Bryan published in Nature on 27 April 1905 sparked controversy among Cambridge professors at the time. The review received several notable responses from Whittaker's colleagues, although Whittaker himself never publicly spoke of it. The main actors in the polemic, other than Whittaker and Bryan, were an anonymous professor referred to only as "An Old Average College Don", Alfred Barnard Basset, Edward Routh, and Charles Baron Clarke. The controversy revolved around Bryan's claim that many of the problems included in the book are "fictitious", similar to those used in the Cambridge Tripos examinations. Of particular contention was Bryan's statement that a "perfectly rough body placed on a perfectly smooth surface forms as interesting a subject for speculation as the well-known irresistible body meeting the impenetrable obstacle" and that "[w]hat the average college don forgets is that roughness or smoothness are matters which concern two surfaces, not one body". The controversy stretched from 18 May to 22 June, with letters on the dispute published in five issues of Nature. A reviewer later wrote that "100 years after they were written, it is difficult not to view the whole polemic as prompted by a bout of hair-splitting on the part of Bryan", though it was acknowledged that Bryan's original claim was "undoubtedly correct" and the "polemic" was likely a misunderstanding.
The 18 May issue of Nature contained two letters starting the controversy: the first was an anonymous response under the title "Fictitious Problems in Mathematics" from an author referring to themself only as An Old Average College Don, while the second was a response from Bryan under the same title. The old college don challenged Bryan to point to a page number where such problems were used, while Bryan responded by saying that the problems were ubiquitous and that finding the places where the correct definition was used would be easier than pointing out all the places where it was wrong. In the 25 May issue of Nature, Alfred Barnard Basset and Edward Routh joined the debate. Routh explained that when "bodies are said to be perfectly rough, it is usually meant that they are so rough that the amount of friction necessary to prevent sliding in the given circumstances can certainly be called into play" and stated that the statements are abbreviations meant to "make the question concise". In a similar tone, Basset wrote that the wording is used to designate "an ideal state of matter". The 1 June issue of Nature contained a response from Charles Baron Clarke and another rebuttal from Bryan. Clarke insinuated that he was the "Old Average College Don" who wrote the first anonymous letter, and again emphasised his original complaint. The final two letters of the controversy were published by Routh and Bryan on 8 and 22 June, respectively.
=== Second and third editions ===
The second and third editions received several reviews, including another from George H. Bryan as well as reviews by Philip Jourdain, George David Birkhoff, and Thomas MacFarland Cherry. Jourdain published two similar reviews of the second edition in different journals, both in 1917. The more detailed of the two, published in The Mathematical Gazette, summarises the book's topics before making several criticisms of specific parts of the book, including the "neglect of work published from 1904 to 1908" on research over Hamilton's principle and the principle of least action. After listing several other problems, Jourdain ends the review by stating that "all these criticisms do not touch the very great value of the book which has been and will be the chief path by which students in English speaking countries have been and will be introduced to modern work on the general and special problems of dynamics." Bryan also reviewed the second edition of the book in 1918, in which he criticised the book for not including the dynamics of aeroplanes, a lapse Bryan believed was acceptable for the first but not for the second edition of the book. After discussing more about aeroplanes and the development of their dynamics, Bryan closes the review by stating that the book "will be found of much use by such students of a future generation as are able to find time to extend their study of particle and rigid dynamics outside the requirements of aerial navigation" and that it would serve as "a valuable source of information for those who are in search of new material of a theoretical character which they can take over and apply to any particular class of investigation." George David Birkhoff wrote a review in 1920 stating that the book is "invaluable as a condensed and suggestive presentation of the formal side of analytical dynamics". Birkhoff also included several criticisms of the book, including stating that it was incomplete in some respects, pointing to the methods used in chapter sixteen on trigonometric series.
The third edition, published in 1927, was reviewed by Thomas MacFarland Cherry, among others. Cherry's 1928 review stated that the book "has long been recognized as the standard advanced textbook in this subject". Concerning the newly rewritten chapter fifteen, The General Theory of Orbits, he wrote that for the most part "the account given is illustrative and introductory in nature, and from this point of view it is excellent and is a great improvement on the previous edition", but that overall "the chapter hardly lives up to its title." On chapter sixteen, also newly rewritten, he commented further that, in treating the formal solutions for Hamiltonian systems using trigonometric series, the third edition replaced the method used in previous editions with a new one published by Whittaker in 1916, which Cherry stated "must be regarded as suggestive rather than conclusive", noting that not all applicable proofs were included. He finishes by saying that the "optimistic view" the book takes toward the convergence of trigonometric series can be criticised, closing his review by saying that "though the question is a difficult one, all the evidence suggests that the series are generally divergent and only exceptionally convergent." Another reviewer expressed regret that the work of George David Birkhoff was not included in the third edition.
=== Fourth edition ===
The final edition of the book, published in 1937, received several reviews, including a 1990 review in German by Rüdiger Thiele. Another reviewer of the final edition noted that the discussion of the three-body problem is brief and advanced, such that it "will be difficult reading for one not already acquainted with the subject", and that the references to then-recent American articles were incomplete, pointing to specific examples relating to the stability of the equilateral triangle positions for three finite masses. The same reviewer then argued that "this does not detract from the merit of the text, which this reviewer regards as the best in its field in the English language." Another reviewer, in 1938, claimed that the attainment of a fourth edition "shows that it has become the standard work on the topics with which it deals." According to Victor Lenzen in 1952, the book was "still the best exposition of the subject on the highest possible level".
In the second edition of his Classical Mechanics, published in 1980, Herbert Goldstein wrote that this was a comprehensive, albeit outdated, treatment of analytical mechanics, with discussions of topics and side notes rarely found elsewhere, such as the examination of which central forces are soluble in terms of elliptic functions. However, he criticised the book for having no diagrams, which harmed the sections on topics such as the Euler angles, for a tendency to make things more complicated than necessary, for a refusal to use vector notation, and for "pedantic" problems of the kind found on the Cambridge Tripos examination. Despite the book's problems and its need to be updated, he went on to write: "It remains, however, a practically unique source for the discussion of many specialized topics."
=== Influence ===
The book quickly became a classic textbook in its subject and is said to have "remarkable longevity", having remained in print almost continuously since its initial release over a hundred years ago. While it is a historic textbook on the subject, presenting what was the state-of-the-art at the time of publication, a 2014 "biography" of the book's development noted that it is not "used merely as a historical document", highlighting that only three of the 114 books and papers that cited the textbook between 2000 and 2012 were historical in nature. In that period, the 2006 engineering textbook Principles of Engineering Mechanics stated that the book is "highly recommended to advanced readers" and was said to remain "one of the best mathematical treatments of analytical dynamics". In a 2015 article on modern dynamics, Miguel Ángel Fernández Sanjuán wrote: "When we think about textbooks used for the teaching of mechanics in the last century, we may think on the book A Treatise on the Analytical Dynamics of Particles and Rigid Bodies", as well as Principles of Mechanics by John L. Synge and Byron A. Griffith, and Classical Mechanics by Herbert Goldstein.
During the 1910s, Albert Einstein was working on his general theory of relativity when he contacted Constantin Carathéodory asking for clarifications on the Hamilton–Jacobi equation and canonical transformations. He wanted to see a satisfactory derivation of the former and the origins of the latter. Carathéodory explained some fundamental details of the canonical transformations and referred Einstein to E. T. Whittaker's Analytical Dynamics. Einstein was trying to solve the problem of "closed time-lines" or the geodesics corresponding to the closed trajectory of light and free particles in a static universe, which he introduced in 1917.
Paul Dirac, a pioneer of quantum mechanics, is said to be "indebted" to the book, as it contained the only material he could find on Poisson brackets, which he needed to finish his work on quantum mechanics in the 1920s. In September 1925, Dirac received proofs of a seminal paper by Werner Heisenberg on the new physics. Soon he realised that the key idea in Heisenberg's paper was the anti-commutativity of dynamical variables and remembered that the analogous mathematical construction in classical mechanics was Poisson brackets.
In a 1980 review of other works, Ian Sneddon stated that the "theoretical work of the century and more after the death of Lagrange was crystallized by E. T. Whittaker in a treatise Whittaker (1904) which has not been superseded as the definitive account of classical mechanics". In another 1980 review of other works, Shlomo Sternberg states that the books reviewed "should be on the shelf of every serious student of mechanics. One would like to be able to report that such a collection would be complete. Unfortunately, this is not so. There exist topics in the classical repertoire, such as Kowalewskaya's top which are not covered by any of these books. So hold on to your copy of Whittaker (1904)".
== Publication history ==
The treatise has remained in print for more than a hundred years, with four editions, a 1989 reprint with a new foreword by William Hunter McCrea, and translations in German and Russian.
=== Original editions ===
The original four editions of the textbook were published in Great Britain by the Cambridge University Press in 1904, 1917, 1927, and 1937.
Whittaker, E. T. (1904). A treatise on the analytical dynamics of particles and rigid bodies: with an introduction to the problem of three bodies (1st ed.). Cambridge: Cambridge University Press. OCLC 1110228082.
Whittaker, E. T. (1917). A treatise on the analytical dynamics of particles and rigid bodies; with an introduction to the problem of three bodies (2nd ed.). Cambridge: Cambridge University Press. OCLC 352133.
Whittaker, E. T. (1927). A treatise on the analytical dynamics of particles and rigid bodies: with an introduction to the problem of three bodies (3rd ed.). Cambridge: Cambridge University Press. OCLC 1020880124.
Whittaker, E. T. (1937). A treatise on the analytical dynamics of particles and rigid bodies: with an introduction to the problem of three bodies (4th ed.). Cambridge: Cambridge University Press. OCLC 959757497.
=== Reprints and international editions ===
In addition to the four editions and the reprints that have kept the book in circulation in English for the past hundred years, the book has a German edition, printed in 1924 and based on the second edition, as well as a Russian edition printed in 2004. A reprint of the fourth edition in English, with a new foreword by William Hunter McCrea, was published in 1989.
Whittaker, E. T.; Mittelsten, F.; Mittelsten, K. (1924). Analytische Dynamik der Punkte und Starren Körper: Mit Einer Einführung in das Dreikörperproblem und mit Zahlreichen Übungsaufgaben. Grundlehren der mathematischen Wissenschaften (in German). Berlin Heidelberg: Springer-Verlag. ISBN 978-3-662-24567-5.
Whittaker, E. T (1937). A treatise on the analytical dynamics of particles and rigid bodies: with an introduction to the problem of three bodies (in Spanish) (4th ed.). Cambridge: Cambridge University Press. OCLC 1123785221.
Whittaker, E. T. (1988). A treatise on the analytical dynamics of particles and rigid bodies : with an introduction to the problem of three bodies (4th ed.). Cambridge: Cambridge University Press. ISBN 0-521-35883-3. OCLC 264423700.
Whittaker, E. T. (1988). A treatise on the analytical dynamics of particles and rigid bodies : with an introduction to the problem of three bodies (4th ed.). Cambridge: Cambridge University Press. doi:10.1017/CBO9780511608797. ISBN 978-0-511-60879-7. OCLC 967696618. (online)
Whittaker, E. T. (1999). A treatise on the analytical dynamics of particles and rigid bodies : with an introduction to the problem of three bodies. McCrea, W. H. (foreword) (4th ed.). Cambridge: Cambridge University Press. ISBN 978-1-316-04314-1. OCLC 1100677089.
Уиттекер, Э. (2004). Аналитическая динамика (in Russian). Russia: Editorial URSS. ISBN 5-354-00849-2.
== See also ==
Bibliography of E. T. Whittaker
Classical Mechanics – a textbook on similar topics by Herbert Goldstein
List of textbooks on classical mechanics and quantum mechanics
== References ==
== Further reading ==
== External links ==
Full text of A treatise on the analytical dynamics of particles and rigid bodies (3rd edition) at the Internet Archive
Whittaker, E. T.; McCrea, Sir William (1988). A Treatise on the Analytical Dynamics of Particles and Rigid Bodies. Cambridge University Press. doi:10.1017/CBO9780511608797. ISBN 9780521358835. Retrieved 9 November 2020. | Wikipedia/Analytical_Dynamics_of_Particles_and_Rigid_Bodies
In physics, gravity (from Latin gravitas 'weight'), also known as gravitation or a gravitational interaction, is a fundamental interaction, a mutual attraction between all massive particles. On Earth, gravity takes on a slightly different meaning: the observed force between objects and the Earth. This force is dominated by the combined gravitational interactions of particles but also includes the effect of the Earth's rotation. Gravity gives weight to physical objects and is essential to understanding the mechanisms responsible for surface water waves and lunar tides. Gravity also has many important biological functions, helping to guide the growth of plants through the process of gravitropism and influencing the circulation of fluids in multicellular organisms.
The gravitational attraction between primordial hydrogen and clumps of dark matter in the early universe caused the hydrogen gas to coalesce, eventually condensing and fusing to form stars. At larger scales this results in galaxies and clusters, so gravity is a primary driver for the large-scale structures in the universe. Gravity has an infinite range, although its effects become weaker as objects get farther away.
Gravity is accurately described by the general theory of relativity, proposed by Albert Einstein in 1915, which describes gravity in terms of the curvature of spacetime, caused by the uneven distribution of mass. The most extreme example of this curvature of spacetime is a black hole, from which nothing—not even light—can escape once past the black hole's event horizon. However, for most applications, gravity is well approximated by Newton's law of universal gravitation, which describes gravity as a force causing any two bodies to be attracted toward each other, with magnitude proportional to the product of their masses and inversely proportional to the square of the distance between them.
Scientists are currently working to develop a theory of gravity consistent with quantum mechanics, a quantum gravity theory, which would allow gravity to be united in a common mathematical framework (a theory of everything) with the other three fundamental interactions of physics. Whether gravity is quantum is not known with certainty, although experiments designed to answer the question are now being conducted.
== Definitions ==
Gravity is the word used to describe both a fundamental physical interaction and the observed consequences of that interaction on macroscopic objects on Earth. Gravity is, by far, the weakest of the four fundamental interactions, approximately 10³⁸ times weaker than the strong interaction, 10³⁶ times weaker than the electromagnetic force, and 10²⁹ times weaker than the weak interaction. As a result, it has no significant influence at the level of subatomic particles. However, gravity is the most significant interaction between objects at the macroscopic scale, and it determines the motion of planets, stars, galaxies, and even light.
Gravity, as the gravitational attraction at the surface of a planet or other celestial body, may also include the centrifugal force resulting from the planet's rotation (see § Earth's gravity).
== History ==
=== Ancient world ===
The nature and mechanism of gravity were explored by a wide range of ancient scholars. In Greece, Aristotle believed that objects fell towards the Earth because the Earth was the center of the Universe and attracted all of the mass in the Universe towards it. He also thought that the speed of a falling object should increase with its weight, a conclusion that was later shown to be false. While Aristotle's view was widely accepted throughout Ancient Greece, there were other thinkers such as Plutarch who correctly predicted that the attraction of gravity was not unique to the Earth.
Although he did not understand gravity as a force, the ancient Greek philosopher Archimedes discovered the center of gravity of a triangle. He postulated that if two equal weights did not have the same center of gravity, the center of gravity of the two weights together would be in the middle of the line that joins their centers of gravity. Two centuries later, the Roman engineer and architect Vitruvius contended in his De architectura that gravity is not dependent on a substance's weight but rather on its "nature". In the 6th century CE, the Byzantine Alexandrian scholar John Philoponus proposed the theory of impetus, which modifies Aristotle's theory that "continuation of motion depends on continued action of a force" by incorporating a causative force that diminishes over time.
In 628 CE, the Indian mathematician and astronomer Brahmagupta proposed the idea that gravity is an attractive force that draws objects to the Earth and used the term gurutvākarṣaṇ to describe it.: 105
In the medieval Middle East, gravity was a topic of fierce debate. The Persian intellectual Al-Biruni believed that the force of gravity was not unique to the Earth, and he correctly assumed that other heavenly bodies should exert a gravitational attraction as well. In contrast, Al-Khazini held the same position as Aristotle that all matter in the Universe is attracted to the center of the Earth.
=== Scientific revolution ===
In the mid-16th century, various European scientists experimentally disproved the Aristotelian notion that heavier objects fall at a faster rate. In particular, the Spanish Dominican priest Domingo de Soto wrote in 1551 that bodies in free fall uniformly accelerate. De Soto may have been influenced by earlier experiments conducted by other Dominican priests in Italy, including those by Benedetto Varchi, Francesco Beato, Luca Ghini, and Giovan Bellaso, which contradicted Aristotle's teachings on the fall of bodies.
The mid-16th century Italian physicist Giambattista Benedetti published papers claiming that, due to specific gravity, objects made of the same material but with different masses would fall at the same speed. With the 1586 Delft tower experiment, the Flemish physicist Simon Stevin observed that two cannonballs of differing sizes and weights fell at the same rate when dropped from a tower.
In the late 16th century, Galileo Galilei's careful measurements of balls rolling down inclines allowed him to firmly establish that gravitational acceleration is the same for all objects.: 334 Galileo postulated that air resistance is the reason that objects with a low density and high surface area fall more slowly in an atmosphere. In his 1638 work Two New Sciences, Galileo proved that the distance traveled by a falling object is proportional to the square of the time elapsed. His method was a form of graphical numerical integration, since the concepts of calculus were unknown at the time.: 4 This was later confirmed by the Italian Jesuit scientists Grimaldi and Riccioli between 1640 and 1650. They also calculated the magnitude of the Earth's gravity by measuring the oscillations of a pendulum.
Galileo also broke with incorrect ideas of Aristotelian philosophy by regarding inertia as persistence of motion, not a tendency to come to rest. By considering that the laws of physics appear identical on a moving ship to those on land, Galileo developed the concepts of reference frame and the principle of relativity.: 5 These concepts would become central to Newton's mechanics, only to be transformed in Einstein's theory of gravity, the general theory of relativity.: 17
Johannes Kepler, in his 1609 book Astronomia nova, described gravity as a mutual attraction, claiming that if the Earth and Moon were not held apart by some force they would come together. He recognized that mechanical forces cause action, creating a kind of celestial machine. On the other hand, Kepler viewed the force of the Sun on the planets as magnetic and acting tangentially to their orbits, and he assumed, with Aristotle, that inertia meant objects tend to come to rest.: 846
In 1666, Giovanni Alfonso Borelli avoided the key problems that limited Kepler. By Borelli's time the concept of inertia had its modern meaning as the tendency of objects to remain in uniform motion and he viewed the Sun as just another heavenly body. Borelli developed the idea of mechanical equilibrium, a balance between inertia and gravity. Newton cited Borelli's influence on his theory.: 848
In 1665, Robert Hooke published his Micrographia, in which he hypothesized that the Moon must have its own gravity.: 57 In a communication to the Royal Society in 1666 and his 1674 Gresham lecture, An Attempt to prove the Annual Motion of the Earth, Hooke took the important step of combining related hypotheses and then forming predictions based on them. He wrote:
I will explain a system of the world very different from any yet received. It is founded on the following positions. 1. That all the heavenly bodies have not only a gravitation of their parts to their own proper centre, but that they also mutually attract each other within their spheres of action. 2. That all bodies having a simple motion, will continue to move in a straight line, unless continually deflected from it by some extraneous force, causing them to describe a circle, an ellipse, or some other curve. 3. That this attraction is so much the greater as the bodies are nearer. As to the proportion in which those forces diminish by an increase of distance, I own I have not discovered it....
Hooke was an important communicator who helped reformulate the scientific enterprise. He was one of the first professional scientists and worked as the then-new Royal Society's curator of experiments for 40 years. However, his valuable insights remained hypotheses, since he was unable to convert them into a mathematical theory of gravity and work out the consequences.: 853 For this he turned to Newton, writing him a letter in 1679 outlining a model of planetary motion in a void or vacuum due to attractive action at a distance. This letter likely turned Newton's thinking in a new direction, leading to his revolutionary work on gravity. When Newton reported his results in 1686, Hooke claimed the inverse-square law portion was his "notion".
=== Newton's theory of gravitation ===
Before 1684, scientists including Christopher Wren, Robert Hooke, and Edmond Halley had determined that Kepler's third law, relating planetary orbital periods to orbital size, would imply the inverse-square law if the orbits were circles. However, the orbits were known to be ellipses. At Halley's suggestion, Newton tackled the problem and was able to prove that elliptical orbits also imply the inverse-square relation from Kepler's observations.: 13 In 1684, Isaac Newton sent a manuscript to Edmond Halley titled De motu corporum in gyrum ('On the motion of bodies in an orbit'), which provided a physical justification for Kepler's laws of planetary motion. Halley was impressed by the manuscript and urged Newton to expand on it, and a few years later Newton published a groundbreaking book called Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy).
The revolutionary aspect of Newton's theory of gravity was the unification of Earth-bound observations of acceleration with celestial mechanics.: 4 In his book, Newton described gravitation as a universal force, and claimed that it operated on objects "according to the quantity of solid matter which they contain and propagates on all sides to immense distances always at the inverse square of the distances".: 546 This formulation had two important parts. First was equating inertial mass and gravitational mass. Newton's second law defines force via {\displaystyle F=ma} for inertial mass; his law of gravitational force uses the same mass. Newton did experiments with pendulums to verify this concept as best he could.: 11
The second aspect of Newton's formulation was the inverse square of distance. This aspect was not new: the astronomer Ismaël Bullialdus had proposed it around 1640. Seeking proof, Newton made a quantitative analysis around 1665, considering the period and distance of the Moon's orbit and the timing of objects falling on Earth. Newton did not publish these results at the time because he could not prove that the Earth's gravity acts as if all its mass were concentrated at its center. That proof took him twenty years.: 13
Newton's Principia was well received by the scientific community, and his law of gravitation quickly spread across the European world. More than a century later, in 1821, his theory of gravitation rose to even greater prominence when it was used to predict the existence of Neptune. In that year, the French astronomer Alexis Bouvard used this theory to create a table modeling the orbit of Uranus, which was shown to differ significantly from the planet's actual trajectory. In order to explain this discrepancy, many astronomers speculated that there might be a large object beyond the orbit of Uranus which was disrupting its orbit. In 1846, the astronomers John Couch Adams and Urbain Le Verrier independently used Newton's law to predict Neptune's location in the night sky, and the planet was discovered there within a day.
Newton's formulation was later condensed into the inverse-square law:

{\displaystyle F=G{\frac {m_{1}m_{2}}{r^{2}}},}
where F is the force, m₁ and m₂ are the masses of the objects interacting, r is the distance between the centers of the masses, and G is the gravitational constant, 6.674×10⁻¹¹ m³⋅kg⁻¹⋅s⁻². While G is also called Newton's constant, Newton did not use this constant or formula; he only discussed proportionality. But this allowed him to come to an astounding conclusion we take for granted today: the gravity of the Earth on the Moon is the same as the gravity of the Earth on an apple:
{\displaystyle M_{\text{earth}}\propto a_{\text{apple}}R_{\text{radius of earth}}^{2}=a_{\text{moon}}R_{\text{lunar orbit}}^{2}}
Using the values known at the time, Newton was able to verify this form of his law. The value of G was eventually measured by Henry Cavendish in 1797.: 31
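Newton's "moon test" can be reproduced today; the figures in this sketch are approximate modern values, not the cruder estimates available to Newton:

```python
import math

g = 9.81            # acceleration of a falling apple at Earth's surface, m/s^2
R_earth = 6.371e6   # radius of the Earth, m
R_orbit = 3.844e8   # mean Earth-Moon distance, m
T = 27.32 * 86400   # Moon's sidereal orbital period, s

# Centripetal acceleration of the Moon, computed from its orbit alone:
a_moon = 4 * math.pi**2 * R_orbit / T**2          # about 2.7e-3 m/s^2

# Inverse-square prediction: surface gravity scaled by (R_earth / R_orbit)^2:
a_predicted = g * (R_earth / R_orbit) ** 2        # also about 2.7e-3 m/s^2

print(f"observed {a_moon:.2e} m/s^2, predicted {a_predicted:.2e} m/s^2")
```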
=== Einstein's general relativity ===
Eventually, astronomers noticed an anomaly in the orbit of the planet Mercury which could not be explained by Newton's theory: the perihelion of the orbit was advancing by about 42.98 arcseconds per century more than Newtonian theory predicted. The most obvious explanation for this discrepancy was an as-yet-undiscovered celestial body, such as a planet orbiting the Sun even closer than Mercury, but all efforts to find such a body turned out to be fruitless. In 1915, Albert Einstein developed a theory of general relativity which was able to accurately model Mercury's orbit.
Einstein's theory brought two other ideas with independent histories into the physical theories of gravity: the principle of relativity and non-Euclidean geometry.
The principle of relativity, introduced by Galileo and used as a foundational principle by Newton, led to a long and fruitless search for a luminiferous aether after Maxwell's equations demonstrated that light propagated at a fixed speed independent of reference frame. In Newton's mechanics, velocities add: a cannonball shot from a moving ship would travel with a trajectory which included the motion of the ship. Since light speed was fixed, it was assumed to travel in a fixed, absolute medium. Many experiments sought to reveal this medium but failed, and in 1905 Einstein's special relativity theory showed the aether was not needed. Special relativity proposed that mechanics be reformulated to use the Lorentz transformation, already applicable to light, rather than the Galilean transformation adopted by Newton. Special relativity, as its name indicates, covered only a special case and specifically did not cover gravity.: 4
While relativity was associated with mechanics and thus gravity, the idea of altering geometry only joined the story of gravity once mechanics required the Lorentz transformations. Geometry was an ancient science that gradually broke free of Euclidean limitations when Carl Gauss discovered in the 1800s that surfaces in any number of dimensions could be characterized by a metric, a distance measurement along the shortest path between two points that reduces to Euclidean distance at infinitesimal separation. Gauss' student Bernhard Riemann developed this into a complete geometry by 1854. These geometries are locally flat but have global curvature.: 4
In 1907, Einstein took his first step by using special relativity to create a new form of the equivalence principle. The equivalence of inertial mass and gravitational mass was a known empirical law: the m in Newton's second law, {\displaystyle F=ma}, has the same value as the m in Newton's law of gravity on Earth, {\displaystyle F=GMm/r^{2}}. In what he later described as "the happiest thought of my life", Einstein realized this meant that in free fall an accelerated coordinate system exists with no local gravitational field. Every description of gravity in any other coordinate system must transform to give no field in the free-fall case, a powerful invariance constraint on all theories of gravity.: 20
Einstein's description of gravity was accepted by the majority of physicists for two reasons. First, by 1910 his special relativity was accepted in German physics and was spreading to other countries. Second, his theory explained experimental results like the perihelion of Mercury and the bending of light around the Sun better than Newton's theory.
In 1919, the British astrophysicist Arthur Eddington was able to confirm the predicted deflection of light during that year's solar eclipse. Eddington measured starlight deflections twice those predicted by Newtonian corpuscular theory, in accordance with the predictions of general relativity. Although Eddington's analysis was later disputed, this experiment made Einstein famous almost overnight and caused general relativity to become widely accepted in the scientific community.
In 1959, American physicists Robert Pound and Glen Rebka performed an experiment in which they used gamma rays to confirm the prediction of gravitational time dilation. By sending the rays down a 74-foot tower and measuring their frequency at the bottom, the scientists confirmed that light is Doppler shifted as it moves towards a source of gravity. The observed shift also supports the idea that time runs more slowly in the presence of a gravitational field (many more wave crests pass in a given interval). If light moves outward from a strong source of gravity it will be observed with a redshift. The time delay of light passing close to a massive object was first identified by Irwin I. Shapiro in 1964 in interplanetary spacecraft signals.
In 1971, scientists discovered the first-ever black hole, in the constellation Cygnus. The black hole was detected because it was emitting bursts of X-rays as it consumed a smaller star, and it came to be known as Cygnus X-1. This discovery confirmed yet another prediction of general relativity, because Einstein's equations implied that light could not escape from a sufficiently large and compact object.
Frame dragging, the idea that a rotating massive object should twist spacetime around it, was confirmed by Gravity Probe B results in 2011. In 2015, the LIGO observatory detected faint gravitational waves, the existence of which had been predicted by general relativity. Scientists believe that the waves emanated from a black hole merger that occurred 1.5 billion light-years away.
== On Earth ==
Every planetary body (including the Earth) is surrounded by its own gravitational field, which can be conceptualized with Newtonian physics as exerting an attractive force on all objects. Assuming a spherically symmetrical planet, the strength of this field at any given point above the surface is proportional to the planetary body's mass and inversely proportional to the square of the distance from the center of the body.
The strength of the gravitational field is numerically equal to the acceleration of objects under its influence. The rate of acceleration of falling objects near the Earth's surface varies very slightly depending on latitude, surface features such as mountains and ridges, and perhaps unusually high or low sub-surface densities. For purposes of weights and measures, a standard gravity value is defined by the International Bureau of Weights and Measures, under the International System of Units (SI).
The force of gravity experienced by objects on Earth's surface is the vector sum of two forces: (a) the gravitational attraction in accordance with Newton's universal law of gravitation, and (b) the centrifugal force, which results from the choice of an earthbound, rotating frame of reference. The force of gravity is weakest at the equator because of the centrifugal force caused by the Earth's rotation and because points on the equator are farthest from the center of the Earth. The force of gravity varies with latitude, and the resultant acceleration increases from about 9.780 m/s² at the Equator to about 9.832 m/s² at the poles.
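One standard approximation of this latitude dependence is the 1980 International Gravity Formula; the sketch below uses its published coefficients and reproduces the equatorial and polar values quoted above:

```python
import math

def normal_gravity(lat_deg):
    """Sea-level gravity (m/s^2) as a function of geodetic latitude."""
    s = math.sin(math.radians(lat_deg))
    s2 = math.sin(math.radians(2 * lat_deg))
    # 1980 International Gravity Formula coefficients.
    return 9.780327 * (1 + 0.0053024 * s**2 - 0.0000058 * s2**2)

print(normal_gravity(0.0))    # ~9.780 m/s^2 at the equator
print(normal_gravity(90.0))   # ~9.832 m/s^2 at the poles
```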
=== Gravity wave ===
Waves on oceans, lakes, and other bodies of water occur when the gravitational equilibrium at the surface of the water is disturbed, for example by wind. Similar effects occur in the atmosphere, where equilibrium is disturbed by thermal weather fronts or mountain ranges.
== Astrophysics ==
=== Stars and black holes ===
During star formation, gravitational attraction in a cloud of hydrogen gas competes with thermal gas pressure. As the gas density increases, the temperature rises, and the gas radiates energy, allowing further gravitational condensation. If the mass of gas in the region is low, the process continues until a brown dwarf or gas-giant planet is produced. If more mass is available, the additional gravitational energy allows the central region to reach pressures sufficient for nuclear fusion, forming a star. In a star, the gravitational attraction again competes with thermal and radiation pressure, in hydrostatic equilibrium until the star's atomic fuel runs out. The next phase depends upon the total mass of the star. Very low mass stars slowly cool as white dwarf stars, with a small core balancing gravitational attraction with electron degeneracy pressure. Stars with masses similar to the Sun go through a red giant phase before becoming white dwarf stars. Higher-mass stars have complex core structures that burn helium and higher atomic number elements, ultimately producing an iron core. As their fuel runs out, these stars become unstable, producing a supernova. The result can be a neutron star, where gravitational attraction balances neutron degeneracy pressure, or, for even higher masses, a black hole, where gravity operates alone with such intensity that even light cannot escape.: 121
=== Gravitational radiation ===
General relativity predicts that energy can be transported out of a system through gravitational radiation, also known as gravitational waves. The first indirect evidence for gravitational radiation came through measurements of the Hulse–Taylor binary in 1973. This system consists of a pulsar and a neutron star in orbit around one another. Its orbital period has decreased since its initial discovery due to a loss of energy, consistent with the amount of energy loss expected from gravitational radiation. This research was awarded the Nobel Prize in Physics in 1993.
The first direct evidence for gravitational radiation was measured on 14 September 2015 by the LIGO detectors. The gravitational waves emitted during the collision of two black holes 1.3 billion light years from Earth were measured. This observation confirms the theoretical predictions of Einstein and others that such waves exist. It also opens the way for practical observation and understanding of the nature of gravity and events in the Universe including the Big Bang. Neutron star and black hole formation also create detectable amounts of gravitational radiation. This research was awarded the Nobel Prize in Physics in 2017.
=== Dark matter ===
At the cosmological scale, gravity is a dominant player. About 5/6 of the total mass in the universe consists of dark matter, which interacts through gravity but not through electromagnetic interactions. The gravitation of clumps of dark matter known as dark matter halos attracts hydrogen gas, leading to the formation of stars and galaxies.
=== Gravitational lensing ===
Gravity acts on light and matter equally, meaning that a sufficiently massive object could warp light around it and create a gravitational lens. This phenomenon was first confirmed by observation in 1979 using the 2.1 meter telescope at Kitt Peak National Observatory in Arizona, which saw two mirror images of the same quasar whose light had been bent around the galaxy YGKOW G1.
Many subsequent observations of gravitational lensing provide additional evidence for substantial amounts of dark matter around galaxies. Gravitational lenses do not focus like eyeglass lenses, but rather lead to annular shapes called Einstein rings.: 370
=== Speed of gravity ===
In December 2012, a research team in China announced that it had produced measurements of the phase lag of Earth tides during full and new moons which seem to prove that the speed of gravity is equal to the speed of light. This means that if the Sun suddenly disappeared, the Earth would keep orbiting the vacant point normally for 8 minutes, which is the time light takes to travel that distance. The team's findings were released in Science Bulletin in February 2013.
In October 2017, the LIGO and Virgo interferometer detectors received gravitational wave signals within 2 seconds of gamma-ray satellites and optical telescopes observing signals from the same direction. This confirmed that the speed of gravitational waves is the same as the speed of light.
=== Anomalies and discrepancies ===
There are some observations that are not adequately accounted for, which may point to the need for better theories of gravity or perhaps be explained in other ways.
Galaxy rotation curves: Stars in galaxies follow a distribution of velocities where stars on the outskirts are moving faster than they should according to the observed distributions of luminous matter. Galaxies within galaxy clusters show a similar pattern. The pattern is considered strong evidence for dark matter, which would interact through gravitation but not electromagnetically; various modifications to Newtonian dynamics have also been proposed.
Accelerated expansion: The expansion of the universe seems to be accelerating. Dark energy has been proposed to explain this.
Flyby anomaly: Various spacecraft have experienced greater acceleration than expected during gravity assist maneuvers. The related Pioneer anomaly has been shown to be explained by thermal recoil from heat radiated asymmetrically by the spacecraft.
== General relativity ==
In modern physics, general relativity is considered the most successful theory of gravitation. Physicists continue to work to find solutions to the Einstein field equations that form the basis of general relativity, and continue to test the theory, finding excellent agreement in all cases.
=== Constraints ===
Any theory of gravity must conform to the requirements of special relativity and experimental observations. Newton's theory of gravity assumes action at a distance and therefore cannot be reconciled with special relativity. The simplest generalization of Newton's approach would be a scalar theory, with the gravitational potential represented by a single number in a four-dimensional spacetime. However, this type of theory fails to predict gravitational redshift or the deflection of light by matter, and gives incorrect values for the precession of Mercury. A vector field theory predicts negative-energy gravitational waves, so it also fails. Furthermore, no theory without curvature in spacetime can be consistent with special relativity. The simplest theory consistent with special relativity and the well-studied observations is general relativity.
=== General characteristics ===
Unlike Newton's formula with its one parameter, G, gravity in general relativity is described in terms of 10 numbers forming a metric tensor. In general relativity the effects of gravitation are described in different ways in different frames of reference. In a free-falling or co-moving coordinate system, an object travels in a straight line. In other coordinate systems, the object accelerates and thus is seen to move under a force. The path in spacetime (not 3D space) taken by a free-falling object is called a geodesic, and the length of that path, as measured by time in the object's frame, is the shortest (or, rarely, the longest) one. Consequently, the effect of gravity can be described as curving spacetime. In a weak stationary gravitational field, general relativity reduces to Newton's equations. The corrections introduced by general relativity on Earth are on the order of 1 part in a billion.
=== Einstein field equations ===
The Einstein field equations are a system of 10 partial differential equations which describe how matter affects the curvature of spacetime. The system may be expressed in the form
{\displaystyle G_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu },}
where Gμν is the Einstein tensor, gμν is the metric tensor, Tμν is the stress–energy tensor, Λ is the cosmological constant, G is the Newtonian constant of gravitation and c is the speed of light. The constant
{\displaystyle \kappa ={\frac {8\pi G}{c^{4}}}}
is referred to as the Einstein gravitational constant.
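For a sense of scale, κ can be evaluated numerically. A minimal sketch in Python, assuming CODATA-style values for G and c (the numbers below are inserted for illustration, not taken from this article):

import math

G = 6.67430e-11   # Newtonian constant of gravitation, m^3 kg^-1 s^-2 (assumed value)
c = 2.99792458e8  # speed of light, m s^-1 (assumed value)

kappa = 8 * math.pi * G / c**4
print(f"kappa = {kappa:.3e} s^2 m^-1 kg^-1")  # ~2.08e-43: matter curves spacetime very weakly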
=== Solutions ===
The non-linear second-order Einstein field equations are extremely complex and have been solved in only a few special cases. These cases, however, have been transformational in our understanding of the cosmos. Several solutions are the basis for understanding black holes and for our modern model of the evolution of the universe since the Big Bang.
=== Tests of general relativity ===
Testing the predictions of general relativity has historically been difficult, because they are almost identical to the predictions of Newtonian gravity for small energies and masses. A wide range of experiments has provided support for general relativity. Today, Einstein's theory of relativity is used for all gravitational calculations where absolute precision is desired, although Newton's inverse-square law is accurate enough for virtually all ordinary calculations.
=== Gravity and quantum mechanics ===
Despite its success in predicting the effects of gravity at large scales, general relativity is ultimately incompatible with quantum mechanics. This is because general relativity describes gravity as a smooth, continuous distortion of spacetime, while quantum mechanics holds that all forces arise from the exchange of discrete particles known as quanta. This contradiction is especially vexing to physicists because the other three fundamental forces (strong force, weak force and electromagnetism) were reconciled with a quantum framework decades ago. As a result, researchers have begun to search for a theory that could unite both gravity and quantum mechanics under a more general framework.
One path is to describe gravity in the framework of quantum field theory (QFT), which has successfully described the other fundamental interactions. Just as the electromagnetic force arises from an exchange of virtual photons, the QFT description of gravity involves an exchange of virtual gravitons. This description reproduces general relativity in the classical limit. However, this approach fails at short distances of the order of the Planck length, where a more complete theory of quantum gravity (or a new approach to quantum mechanics) is required.
=== Alternative theories ===
General relativity has withstood many tests over a large range of mass and size scales. When applied to interpret astronomical observations, cosmological models based on general relativity introduce two components to the universe, dark matter and dark energy, the nature of which is currently an unsolved problem in physics. The many successful, high-precision predictions of the standard model of cosmology have led astrophysicists to conclude that it, and thus general relativity, will be the basis for future progress. However, dark matter is not supported by the standard model of particle physics, physical models for dark energy do not match cosmological data, and some cosmological observations are inconsistent. These issues have led to the study of alternative theories of gravity.
== See also ==
== References ==
== Further reading ==
I. Bernard Cohen (1999) [1687]. "A Guide to Newton's Principia". The Principia : mathematical principles of natural philosophy. By Newton, Isaac. Translated by Cohen, I. Bernard. University of California Press. ISBN 9780520088160. OCLC 313895715.
Halliday, David; Resnick, Robert; Krane, Kenneth S. (2001). Physics v. 1. New York: John Wiley & Sons. ISBN 978-0-471-32057-9.
Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 978-0-534-40842-8.
Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W.H. Freeman. ISBN 978-0-7167-0809-4.
Thorne, Kip S.; Misner, Charles W.; Wheeler, John Archibald (1973). Gravitation. W.H. Freeman. ISBN 978-0-7167-0344-0.
Panek, Richard (2 August 2019). "Everything you thought you knew about gravity is wrong". The Washington Post.
== External links ==
The Feynman Lectures on Physics Vol. I Ch. 7: The Theory of Gravitation
"Gravitation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"Gravitation, theory of", Encyclopedia of Mathematics, EMS Press, 2001 [1994] | Wikipedia/Force_of_gravity |
Deflection is a change in a moving object's velocity, hence its trajectory, as a consequence of contact (collision) with a surface or the influence of a non-contact force field. Examples of the former include a ball bouncing off the ground or a bat; examples of the latter include a beam of electrons used to produce a picture, or the relativistic bending of light due to gravity.
== Deflective efficiency ==
An object's deflective efficiency can never equal or surpass 100%, for example:
a mirror will never reflect exactly the same amount of light cast upon it, though it may concentrate the light which is reflected into a narrower beam.
on hitting the ground, a ball previously in free-fall (meaning no force other than gravity acted upon it) will never bounce back up to the place where it first started to descend.
This transfer of some energy into heat or other radiation is a consequence of the laws of thermodynamics: in every such interaction, some energy must be converted into alternative forms of energy or be absorbed by the deformation of the objects involved in the collision.
== See also ==
Electrostatic deflection
Coriolis effect
Deflection yoke
Impulse
Reflection | Wikipedia/Deflection_(physics) |
A geographical pole or geographic pole is either of the two points on Earth where its axis of rotation intersects its surface. The North Pole lies in the Arctic Ocean while the South Pole is in Antarctica. North and South poles are also defined for other planets or satellites in the Solar System, with a North pole being on the same side of the invariable plane as Earth's North pole.
Relative to Earth's surface, the geographic poles move by a few metres over periods of a few years. This is a combination of Chandler wobble, a free oscillation with a period of about 433 days; an annual motion responding to seasonal movements of air and water masses; and an irregular drift towards the 80th west meridian. As cartography requires exact and unchanging coordinates, the averaged locations of geographical poles are taken as fixed cartographic poles and become the points where the body's great circles of longitude intersect.
== See also ==
Earth's rotation
Polar motion
Poles of astronomical bodies
True polar wander
== References == | Wikipedia/Geographical_pole |
Wolfram Research, Inc. ( WUUL-frəm) is an American multinational company that creates computational technology. Wolfram's flagship product is the technical computing program Wolfram Mathematica, first released on June 23, 1988. Other products include WolframAlpha, Wolfram System Modeler, Wolfram Workbench, gridMathematica, Wolfram Finance Platform, webMathematica, the Wolfram Cloud, and the Wolfram Programming Lab. Wolfram Research founder Stephen Wolfram is the CEO. The company is headquartered in Champaign, Illinois, United States.
== History ==
The company launched Wolfram Alpha, an answer engine, on May 16, 2009. It brings a new approach to knowledge generation and acquisition that involves large amounts of curated computable data in addition to semantic indexing of text.
Wolfram Research acquired MathCore Engineering AB on March 30, 2011.
On July 21, 2011, Wolfram Research launched the Computable Document Format (CDF). CDF is an electronic document format designed to allow easy authoring of dynamically generated interactive content.
In June 2014, Wolfram Research officially introduced the Wolfram Language as a new general multi-paradigm programming language. It is the primary programming language used in Mathematica.
On April 15, 2020, Wolfram Research received $5,575,000 to help pay its employees during the COVID-19 pandemic as part of the U.S. government's Paycheck Protection Program administered by the Small Business Administration. The loan was forgiven.
== Products and resources ==
=== Mathematica ===
Mathematica began as a software program for doing mathematics by computer, and has evolved to cover all domains of technical computing software, with features for neural networks, machine learning, image processing, geometry, data science, and visualizations. Central to Mathematica's mission is its ability to perform symbolic computation, for example, the ability to solve indefinite integrals symbolically. Mathematica includes a notebook interface and can produce slides for presentations. Mathematica is available in a desktop version, a grid computing version, and a cloud version.
=== Wolfram Alpha ===
Wolfram Alpha is a free online service that answers factual queries directly by computing the answer from externally sourced curated data, rather than providing a list of documents or web pages that might contain the answer as a search engine might. Users submit queries and computation requests via a text field and Wolfram Alpha then computes answers and relevant visualizations.
On February 8, 2012, Wolfram Alpha Pro was released, offering users additional features (e.g., the ability to upload many common file types and data — including raw tabular data, images, audio, XML, and dozens of specialized scientific, medical, and mathematical formats — for automatic analysis) for a monthly subscription fee.
In 2016, Wolfram Alpha Enterprise, a business-focused analytics tool, was launched. The program combines data supplied by a corporation with the algorithms from Wolfram Alpha to answer questions related to that corporation.
=== Wolfram System Modeler ===
Wolfram System Modeler is a platform for engineering as well as life-science modeling and simulation based on the Modelica language. Its primary interface, ModelCenter, is an interactive graphical modeling and simulation environment with a customizable set of component libraries. The software also provides tight integration with Mathematica: users can develop, simulate, document, and analyze their models within Mathematica notebooks.
== Publishing ==
Wolfram Research publishes several free websites including the MathWorld and ScienceWorld encyclopedias. ScienceWorld, which launched in 2002, is divided into sites on chemistry, physics, astronomy and scientific biography. In 2005, the physics site was deemed a "valuable resource" by American Scientist magazine. However, by 2009, the astronomy site was said to suffer from outdated information, incomplete articles and link rot.
The Wolfram Demonstrations Project is a collaborative site hosting interactive technical demonstrations powered by a free Mathematica Player runtime.
Wolfram Research publishes The Mathematica Journal. Wolfram has also published several books via Wolfram Media, Wolfram's publishing arm. In addition, they have experimented with electronic textbook creation.
== Media activities ==
Wolfram Research served as the mathematical consultant for the CBS television series Numb3rs, a show about the mathematical aspects of crime-solving.
== See also ==
A New Kind of Science
Ed Pegg, Jr.
Eric W. Weisstein
Computer-based mathematics education
== References ==
== External links ==
Official website
Official Wolfram Research Twitter Account
Hoovers Fact Sheet on Wolfram Research, Inc.
The Mathematics Behind NUMB3RS, Wolfram's site on NUMB3RS mathematics. | Wikipedia/ScienceWorld |
In mathematics, LHS is informal shorthand for the left-hand side of an equation. Similarly, RHS is the right-hand side. The two sides have the same value, expressed differently, since equality is symmetric.
More generally, these terms may apply to an inequation or inequality; the right-hand side is everything on the right side of a test operator in an expression, with LHS defined similarly.
== Example ==
The expression on the right side of the "=" sign is the right-hand side of the equation, and the expression on its left is the left-hand side.
For example, in
{\displaystyle x+5=y+8}
x + 5 is the left-hand side (LHS) and y + 8 is the right-hand side (RHS).
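Computer algebra systems expose this decomposition directly. A small illustrative sketch using SymPy, whose Eq object provides lhs and rhs attributes (the symbols mirror the example above):

import sympy as sp

x, y = sp.symbols("x y")
eq = sp.Eq(x + 5, y + 8)

print(eq.lhs)  # x + 5, the left-hand side
print(eq.rhs)  # y + 8, the right-hand side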
== Homogeneous and inhomogeneous equations ==
In solving mathematical equations, particularly linear simultaneous equations, differential equations and integral equations, the terminology homogeneous is often used for equations with some linear operator L on the LHS and 0 on the RHS. In contrast, an equation with a non-zero RHS is called inhomogeneous or non-homogeneous, as exemplified by
Lf = g,
where g is a fixed function, and the equation is to be solved for f. Any solution of the inhomogeneous equation may have a solution of the homogeneous equation added to it and still remain a solution.
For example, in mathematical physics, the homogeneous equation may correspond to a physical theory formulated in empty space, while the inhomogeneous equation asks for more 'realistic' solutions with some matter or charged particles.
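A quick symbolic check of this superposition property, sketched with SymPy using a toy operator L = d²/dx² + 1 and g = x (both chosen here purely for illustration):

import sympy as sp

x = sp.symbols("x")

# Toy linear operator L and right-hand side g = x (illustrative choices)
L = lambda u: sp.diff(u, x, 2) + u

f_particular = x           # satisfies L f = x
f_homogeneous = sp.sin(x)  # satisfies L f = 0

# Adding a homogeneous solution leaves the right-hand side unchanged:
print(sp.simplify(L(f_particular)))                  # x
print(sp.simplify(L(f_particular + f_homogeneous)))  # x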
== Syntax ==
More abstractly, when using infix notation
T * U
the term T stands as the left-hand side and U as the right-hand side of the operator *. This usage is less common, though.
== See also ==
Equals sign
== References == | Wikipedia/Left-hand_side_and_right-hand_side_of_an_equation |
In mathematics, an ordinary differential equation (ODE) is a differential equation (DE) dependent on only a single independent variable. As with any other DE, its unknowns consist of one or more functions, and the equation involves the derivatives of those functions. The term "ordinary" is used in contrast with partial differential equations (PDEs), which may involve more than one independent variable, and, less commonly, in contrast with stochastic differential equations (SDEs), where the progression is random.
== Differential equations ==
A linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is, an equation of the form
{\displaystyle a_{0}(x)y+a_{1}(x)y'+a_{2}(x)y''+\cdots +a_{n}(x)y^{(n)}+b(x)=0,}
where a0(x), ..., an(x) and b(x) are arbitrary differentiable functions that do not need to be linear, and y′, ..., y^(n) are the successive derivatives of the unknown function y of the variable x.
Among ordinary differential equations, linear differential equations play a prominent role for several reasons. Most elementary and special functions that are encountered in physics and applied mathematics are solutions of linear differential equations (see Holonomic function). When physical phenomena are modeled with non-linear equations, they are generally approximated by linear differential equations to make them easier to solve. The few non-linear ODEs that can be solved explicitly are generally solved by transforming the equation into an equivalent linear ODE (see, for example, the Riccati equation).
Some ODEs can be solved explicitly in terms of known functions and integrals. When that is not possible, the equation for computing the Taylor series of the solutions may be useful. For applied problems, numerical methods for ordinary differential equations can supply an approximation of the solution.
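As an illustration, a minimal sketch of the forward Euler method, the simplest such numerical approximation (the helper name, test equation, and step size are arbitrary choices for this example):

def euler(f, x0, y0, h, n):
    """Approximate y' = f(x, y), y(x0) = y0 with n Euler steps of size h."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        y0 = y0 + h * f(x0, y0)   # follow the local slope
        x0 = x0 + h
        xs.append(x0)
        ys.append(y0)
    return xs, ys

# Test on y' = y, y(0) = 1, whose exact solution is e^x:
xs, ys = euler(lambda x, y: y, 0.0, 1.0, 0.01, 100)
print(ys[-1])  # ~2.7048, close to e = 2.71828...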
== Background ==
Ordinary differential equations (ODEs) arise in many contexts of mathematics and social and natural sciences. Mathematical descriptions of change use differentials and derivatives. Various differentials, derivatives, and functions become related via equations, such that a differential equation is a result that describes dynamically changing phenomena, evolution, and variation. Often, quantities are defined as the rate of change of other quantities (for example, derivatives of displacement with respect to time), or gradients of quantities, which is how they enter differential equations.
Specific mathematical fields include geometry and analytical mechanics. Scientific fields include much of physics and astronomy (celestial mechanics), meteorology (weather modeling), chemistry (reaction rates), biology (infectious diseases, genetic variation), ecology and population modeling (population competition), economics (stock trends, interest rates and the market equilibrium price changes).
Many mathematicians have studied differential equations and contributed to the field, including Newton, Leibniz, the Bernoulli family, Riccati, Clairaut, d'Alembert, and Euler.
A simple example is Newton's second law of motion: the relationship between the displacement x and the time t of an object under the force F is given by the differential equation
{\displaystyle m{\frac {\mathrm {d} ^{2}x(t)}{\mathrm {d} t^{2}}}=F(x(t))\,}
which constrains the motion of a particle of constant mass m. In general, F is a function of the position x(t) of the particle at time t. The unknown function x(t) appears on both sides of the differential equation, and is indicated in the notation F(x(t)).
== Definitions ==
In what follows, y is a dependent variable representing an unknown function y = f(x) of the independent variable x. The notation for differentiation varies depending upon the author and upon which notation is most useful for the task at hand. In this context, Leibniz's notation
{\displaystyle {\frac {dy}{dx}},{\frac {d^{2}y}{dx^{2}}},\ldots ,{\frac {d^{n}y}{dx^{n}}}}
is more useful for differentiation and integration, whereas Lagrange's notation
{\displaystyle y',y'',\ldots ,y^{(n)}}
is more useful for representing higher-order derivatives compactly, and Newton's notation
{\displaystyle ({\dot {y}},{\ddot {y}},{\overset {...}{y}})}
is often used in physics for representing derivatives of low order with respect to time.
=== General definition ===
Given F, a function of x, y, and derivatives of y, an equation of the form
{\displaystyle F\left(x,y,y',\ldots ,y^{(n-1)}\right)=y^{(n)}}
is called an explicit ordinary differential equation of order n.
More generally, an implicit ordinary differential equation of order n takes the form:
{\displaystyle F\left(x,y,y',y'',\ \ldots ,\ y^{(n)}\right)=0}
There are further classifications:
Autonomous: A differential equation is autonomous if it does not depend on the variable x.
Linear: A differential equation is linear if F can be written as a linear combination of the derivatives of y; that is, it can be rewritten as
{\displaystyle y^{(n)}=\sum _{i=0}^{n-1}a_{i}(x)y^{(i)}+r(x)}
where ai(x) and r(x) are continuous functions of x. The function r(x) is called the source term, leading to further classification.
Homogeneous: A linear differential equation is homogeneous if r(x) = 0. In this case, there is always the "trivial solution" y = 0.
Nonhomogeneous (or inhomogeneous): A linear differential equation is nonhomogeneous if r(x) ≠ 0.
Non-linear: A differential equation that is not linear.
=== System of ODEs ===
A number of coupled differential equations form a system of equations. If y is a vector whose elements are functions, y(x) = [y1(x), y2(x), ..., ym(x)], and F is a vector-valued function of y and its derivatives, then
{\displaystyle \mathbf {y} ^{(n)}=\mathbf {F} \left(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n-1)}\right)}
is an explicit system of ordinary differential equations of order n and dimension m. In column vector form:
{\displaystyle {\begin{pmatrix}y_{1}^{(n)}\\y_{2}^{(n)}\\\vdots \\y_{m}^{(n)}\end{pmatrix}}={\begin{pmatrix}f_{1}\left(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n-1)}\right)\\f_{2}\left(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n-1)}\right)\\\vdots \\f_{m}\left(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n-1)}\right)\end{pmatrix}}}
These are not necessarily linear. The implicit analogue is:
{\displaystyle \mathbf {F} \left(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n)}\right)={\boldsymbol {0}}}
where 0 = (0, 0, ..., 0) is the zero vector. In matrix form
{\displaystyle {\begin{pmatrix}f_{1}(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n)})\\f_{2}(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n)})\\\vdots \\f_{m}(x,\mathbf {y} ,\mathbf {y} ',\mathbf {y} '',\ldots ,\mathbf {y} ^{(n)})\end{pmatrix}}={\begin{pmatrix}0\\0\\\vdots \\0\end{pmatrix}}}
For a system of the form F(x, y, y′) = 0, some sources also require that the Jacobian matrix ∂F(x, u, v)/∂v be non-singular in order to call this an implicit ODE [system]; an implicit ODE system satisfying this Jacobian non-singularity condition can be transformed into an explicit ODE system. In the same sources, implicit ODE systems with a singular Jacobian are termed differential-algebraic equations (DAEs). This distinction is not merely one of terminology; DAEs have fundamentally different characteristics and are generally more involved to solve than (non-singular) ODE systems. Presumably for additional derivatives, the Hessian matrix and so forth are also assumed non-singular according to this scheme, although note that any ODE of order greater than one can be (and usually is) rewritten as a system of ODEs of first order, which makes the Jacobian singularity criterion sufficient for this taxonomy to be comprehensive at all orders.
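A standard textbook illustration of the singular-Jacobian case (stated here for concreteness, not drawn from this article's sources) is a semi-explicit system in which the derivative of one variable never appears:
{\displaystyle x'=f(x,z),\qquad 0=g(x,z)}
Writing F(t, (x, z), (x′, z′)) = (x′ − f(x, z), g(x, z)), the Jacobian of F with respect to (x′, z′) has a column of zeros for z′ and is therefore singular, so by the criterion above this system is a DAE rather than an implicit ODE.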
The behavior of a system of ODEs can be visualized through the use of a phase portrait.
=== Solutions ===
Given a differential equation
{\displaystyle F\left(x,y,y',\ldots ,y^{(n)}\right)=0}
a function u: I ⊂ ℝ → ℝ, where I is an interval, is called a solution or integral curve for F if u is n-times differentiable on I and
{\displaystyle F(x,u,u',\ \ldots ,\ u^{(n)})=0\quad x\in I.}
Given two solutions u: J ⊂ ℝ → ℝ and v: I ⊂ ℝ → ℝ, u is called an extension of v if I ⊂ J and
{\displaystyle u(x)=v(x)\quad x\in I.\,}
A solution that has no extension is called a maximal solution. A solution defined on all of ℝ is called a global solution.
A general solution of an nth-order equation is a solution containing n arbitrary independent constants of integration. A particular solution is derived from the general solution by setting the constants to particular values, often chosen to fulfill given initial conditions or boundary conditions. A singular solution is a solution that cannot be obtained by assigning definite values to the arbitrary constants in the general solution.
In the context of linear ODE, the terminology particular solution can also refer to any solution of the ODE (not necessarily satisfying the initial conditions), which is then added to the homogeneous solution (a general solution of the homogeneous ODE), which then forms a general solution of the original ODE. This is the terminology used in the guessing method section in this article, and is frequently used when discussing the method of undetermined coefficients and variation of parameters.
=== Solutions of finite duration ===
For non-linear autonomous ODEs it is possible, under some conditions, to develop solutions of finite duration, meaning here that, by its own dynamics, the system reaches the value zero at an ending time and stays at zero forever after. These finite-duration solutions cannot be analytical functions on the whole real line, and because they are non-Lipschitz functions at their ending time, they are not included in the uniqueness theorem of solutions of Lipschitz differential equations.
As an example, the equation
{\displaystyle y'=-{\text{sgn}}(y){\sqrt {|y|}},\,\,y(0)=1}
admits the finite-duration solution
{\displaystyle y(x)={\frac {1}{4}}\left(1-{\frac {x}{2}}+\left|1-{\frac {x}{2}}\right|\right)^{2}}
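This closed form can be spot-checked numerically; a minimal sketch (a finite-difference check of the identity, not a proof):

import numpy as np

# y(x) = (1/4) * (1 - x/2 + |1 - x/2|)^2 on [0, 3]
x = np.linspace(0.0, 3.0, 3001)
y = 0.25 * (1 - x / 2 + np.abs(1 - x / 2)) ** 2

dydx = np.gradient(y, x)                 # numerical derivative of the formula
rhs = -np.sign(y) * np.sqrt(np.abs(y))   # right-hand side -sgn(y) sqrt(|y|)

print(y[0])                              # 1.0, the initial condition
print(np.abs(dydx - rhs).max())          # small (finite-difference error only)
print(y[x >= 2.0].max())                 # 0.0: the solution has ended by x = 2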
== Theories ==
=== Singular solutions ===
The theory of singular solutions of ordinary and partial differential equations was a subject of research from the time of Leibniz, but only since the middle of the nineteenth century has it received special attention. A valuable but little-known work on the subject is that of Houtain (1854). Darboux (from 1873) was a leader in the theory, and in the geometric interpretation of these solutions he opened a field worked by various writers, notably Casorati and Cayley. To the latter is due (1872) the theory of singular solutions of differential equations of the first order as accepted circa 1900.
=== Reduction to quadratures ===
The primitive attempt in dealing with differential equations had in view a reduction to quadratures, that is, expressing the solutions in terms of known functions and their integrals. While this is possible for linear equations with constant coefficients, it became apparent in the 19th century that it is generally impossible in other cases. Hence, analysts began to study, in their own right, functions that are solutions of differential equations, thus opening a new and fertile field. Cauchy was the first to appreciate the importance of this view. Thereafter, the real question was no longer whether a solution is possible by quadratures, but whether a given differential equation suffices for the definition of a function, and, if so, what are the characteristic properties of such functions.
=== Fuchsian theory ===
Two memoirs by Fuchs inspired a novel approach, subsequently elaborated by Thomé and Frobenius. Collet was a prominent contributor beginning in 1869. His method for integrating a non-linear system was communicated to Bertrand in 1868. Clebsch (1873) attacked the theory along lines parallel to those in his theory of Abelian integrals. As the latter can be classified according to the properties of the fundamental curve that remains unchanged under a rational transformation, Clebsch proposed to classify the transcendent functions defined by differential equations according to the invariant properties of the corresponding surfaces
f = 0 under rational one-to-one transformations.
=== Lie's theory ===
From 1870, Sophus Lie's work put the theory of differential equations on a better foundation. He showed that the integration theories of the older mathematicians can, using Lie groups, be referred to a common source, and that ordinary differential equations that admit the same infinitesimal transformations present comparable integration difficulties. He also emphasized the subject of contact transformations.
Lie's group theory of differential equations has two principal merits: (1) it unifies the many ad hoc methods known for solving differential equations, and (2) it provides powerful new ways to find solutions. The theory has applications to both ordinary and partial differential equations.
A general solution approach uses the symmetry property of differential equations: continuous infinitesimal transformations of solutions to solutions (Lie theory). Continuous group theory, Lie algebras, and differential geometry are used to understand the structure of linear and non-linear (partial) differential equations, to generate integrable equations, to find their Lax pairs, recursion operators, and Bäcklund transforms, and finally to find exact analytic solutions to the DE.
Symmetry methods have been applied to differential equations that arise in mathematics, physics, engineering, and other disciplines.
=== Sturm–Liouville theory ===
Sturm–Liouville theory is a theory of a special type of second-order linear ordinary differential equation. Their solutions are based on eigenvalues and corresponding eigenfunctions of linear operators defined via second-order homogeneous linear equations. The problems are identified as Sturm–Liouville problems (SLP) and are named after J. C. F. Sturm and J. Liouville, who studied them in the mid-1800s. SLPs have an infinite number of eigenvalues, and the corresponding eigenfunctions form a complete, orthogonal set, which makes orthogonal expansions possible. This is a key idea in applied mathematics, physics, and engineering. SLPs are also useful in the analysis of certain partial differential equations.
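As a minimal concrete example (the standard textbook case, stated here for illustration), the problem
{\displaystyle -y''=\lambda y,\quad y(0)=y(\pi )=0}
has eigenvalues λn = n² and eigenfunctions yn(x) = sin(nx) for n = 1, 2, ...; the corresponding orthogonal expansion is the familiar Fourier sine series.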
== Existence and uniqueness of solutions ==
There are several theorems that establish existence and uniqueness of solutions to initial value problems involving ODEs both locally and globally. The two main theorems are the Peano existence theorem, which guarantees local existence when F is continuous, and the Picard–Lindelöf theorem, which guarantees local existence and uniqueness when F is Lipschitz continuous.
In their basic form both of these theorems only guarantee local results, though the latter can be extended to give a global result, for example, if the conditions of Grönwall's inequality are met.
Also, uniqueness theorems like the Lipschitz one above do not apply to DAE systems, which may have multiple solutions stemming from their (non-linear) algebraic part alone.
=== Local existence and uniqueness theorem simplified ===
The theorem can be stated simply as follows. For the equation and initial value problem:
{\displaystyle y'=F(x,y)\,,\quad y_{0}=y(x_{0})}
if F and ∂F/∂y are continuous in the closed rectangle
{\displaystyle R=[x_{0}-a,x_{0}+a]\times [y_{0}-b,y_{0}+b]}
in the x–y plane, where a and b are real (symbolically: a, b ∈ ℝ), × denotes the Cartesian product and square brackets denote closed intervals, then there is an interval
{\displaystyle I=[x_{0}-h,x_{0}+h]\subset [x_{0}-a,x_{0}+a]}
for some h ∈ ℝ where the solution to the above equation and initial value problem can be found. That is, there is a solution and it is unique. Since there is no restriction on F to be linear, this applies to non-linear equations that take the form F(x, y), and it can also be applied to systems of equations.
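The construction behind this theorem is Picard iteration (successive approximation), which can be carried out symbolically. A minimal sketch with SymPy, using y′ = y, y(0) = 1 as an arbitrary test case:

import sympy as sp

x, t = sp.symbols("x t")

def picard(F, x0, y0, iterations):
    # y_{k+1}(x) = y0 + integral from x0 to x of F(t, y_k(t)) dt
    y = sp.Integer(y0)
    for _ in range(iterations):
        y = y0 + sp.integrate(F(t, y.subs(x, t)), (t, x0, x))
    return sp.expand(y)

# For y' = y, y(0) = 1 the iterates are the Taylor partial sums of e^x:
print(picard(lambda t, yk: yk, 0, 1, 4))  # x**4/24 + x**3/6 + x**2/2 + x + 1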
=== Global uniqueness and maximum domain of solution ===
When the hypotheses of the Picard–Lindelöf theorem are satisfied, then local existence and uniqueness can be extended to a global result. More precisely:
For each initial condition (x0, y0) there exists a unique maximum (possibly infinite) open interval
{\displaystyle I_{\max }=(x_{-},x_{+}),x_{\pm }\in \mathbb {R} \cup \{\pm \infty \},x_{0}\in I_{\max }}
such that any solution that satisfies this initial condition is a restriction of the solution that satisfies this initial condition with domain Imax.
In the case that x± ≠ ±∞, there are exactly two possibilities:
explosion in finite time: {\displaystyle \limsup _{x\to x_{\pm }}\|y(x)\|\to \infty }
leaves domain of definition: {\displaystyle \lim _{x\to x_{\pm }}y(x)\ \in \partial {\bar {\Omega }}}
where Ω is the open set in which F is defined, and ∂Ω̄ is its boundary.
Note that the maximum domain of the solution
is always an interval (to have uniqueness)
may be smaller than ℝ
may depend on the specific choice of (x0, y0).
Example.
{\displaystyle y'=y^{2}}
This means that F(x, y) = y², which is C¹ and therefore locally Lipschitz continuous, satisfying the Picard–Lindelöf theorem.
Even in such a simple setting, the maximum domain of solution cannot be all of ℝ, since the solution is
{\displaystyle y(x)={\frac {y_{0}}{(x_{0}-x)y_{0}+1}}}
which has maximum domain:
{\displaystyle {\begin{cases}\mathbb {R} &y_{0}=0\\[4pt]\left(-\infty ,x_{0}+{\frac {1}{y_{0}}}\right)&y_{0}>0\\[4pt]\left(x_{0}+{\frac {1}{y_{0}}},+\infty \right)&y_{0}<0\end{cases}}}
This shows clearly that the maximum interval may depend on the initial conditions. The domain of y could be taken as being
{\displaystyle \mathbb {R} \setminus (x_{0}+1/y_{0}),}
but this would lead to a domain that is not an interval, so that the side opposite to the initial condition would be disconnected from the initial condition, and therefore not uniquely determined by it.
The maximum domain is not ℝ because
{\displaystyle \lim _{x\to x_{\pm }}\|y(x)\|\to \infty ,}
which is one of the two possible cases according to the above theorem.
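The finite-time blow-up can also be observed numerically; a minimal sketch with SciPy, taking x0 = 0 and y0 = 1 so the solution 1/(1 − x) blows up at x = 1 (the tolerances and the stopping point 0.999 are arbitrary choices):

from scipy.integrate import solve_ivp

sol = solve_ivp(lambda x, y: y**2, (0.0, 0.999), [1.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

for xq in (0.5, 0.9, 0.99, 0.999):
    print(xq, sol.sol(xq)[0], 1.0 / (1.0 - xq))  # numeric vs exact: both grow without bound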
== Reduction of order ==
Differential equations are usually easier to solve if the order of the equation can be reduced.
=== Reduction to a first-order system ===
Any explicit differential equation of order n,
{\displaystyle F\left(x,y,y',y'',\ \ldots ,\ y^{(n-1)}\right)=y^{(n)}}
can be written as a system of n first-order differential equations by defining a new family of unknown functions
{\displaystyle y_{i}=y^{(i-1)}.\!}
for i = 1, 2, ..., n. The n-dimensional system of first-order coupled differential equations is then
{\displaystyle {\begin{array}{rcl}y_{1}'&=&y_{2}\\y_{2}'&=&y_{3}\\&\vdots &\\y_{n-1}'&=&y_{n}\\y_{n}'&=&F(x,y_{1},\ldots ,y_{n}).\end{array}}}
or, more compactly, in vector notation:
{\displaystyle \mathbf {y} '=\mathbf {F} (x,\mathbf {y} )}
where
{\displaystyle \mathbf {y} =(y_{1},\ldots ,y_{n}),\quad \mathbf {F} (x,y_{1},\ldots ,y_{n})=(y_{2},\ldots ,y_{n},F(x,y_{1},\ldots ,y_{n})).}
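A minimal sketch of this reduction in practice, rewriting y″ = −y (an arbitrary test equation) as the first-order system y1′ = y2, y2′ = −y1 and integrating it with SciPy:

import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, Y):
    y1, y2 = Y            # y1 = y, y2 = y'
    return [y2, -y1]      # y1' = y2, y2' = -y1

x_eval = np.linspace(0, 2 * np.pi, 5)
sol = solve_ivp(rhs, (0, 2 * np.pi), [0.0, 1.0], t_eval=x_eval, rtol=1e-8)
print(sol.y[0])  # ~[0, 1, 0, -1, 0]: the exact solution is sin(x)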
== Summary of exact solutions ==
Some differential equations have solutions that can be written in an exact and closed form. Several important classes are given here.
In the table below, P(x), Q(x), P(y), Q(y), and M(x, y), N(x, y) are any integrable functions of x and y; b and c are real given constants; and C1, C2, ... are arbitrary constants (complex in general). The differential equations are in their equivalent and alternative forms that lead to the solution through integration.
In the integral solutions, λ and ε are dummy variables of integration (the continuum analogues of indices in summation), and the notation ∫^x F(λ) dλ just means to integrate F(λ) with respect to λ, then after the integration substitute λ = x, without adding constants (explicitly stated).
=== Separable equations ===
=== General first-order equations ===
=== General second-order equations ===
=== Linear to the nth order equations ===
== The guessing method ==
When all other methods for solving an ODE fail, or when we have some intuition about what the solution to a DE might look like, it is sometimes possible to solve a DE simply by guessing the solution and validating that it is correct. To use this method, we guess a solution to the differential equation and plug it into the equation to check whether it is satisfied. If it is, we have a particular solution to the DE; otherwise, we start over and try another guess. For instance, we could guess that the solution to a DE has the form
{\displaystyle y=Ae^{\alpha t}}
since this is a very common form of solution (for imaginary α it behaves sinusoidally).
In the case of a first-order ODE that is non-homogeneous, we first need to find a solution to the homogeneous portion of the DE, otherwise known as the associated homogeneous equation, and then find a solution to the entire non-homogeneous equation by guessing. Finally, we add both of these solutions together to obtain the general solution to the ODE, that is:
{\displaystyle {\text{general solution}}={\text{general solution of the associated homogeneous equation}}+{\text{particular solution}}}
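A minimal symbolic sketch of this guess-and-verify workflow with SymPy (the equation y′ + 2y = 3e^t is a toy choice for illustration):

import sympy as sp

t, A = sp.symbols("t A")
y = sp.Function("y")

ode = sp.Eq(y(t).diff(t) + 2 * y(t), 3 * sp.exp(t))

# Guess a particular solution shaped like the forcing, y = A*exp(t),
# substitute it, and solve for the undetermined coefficient A:
guess = A * sp.exp(t)
residual = sp.simplify(guess.diff(t) + 2 * guess - 3 * sp.exp(t))
print(sp.solve(residual, A))  # [1] -> particular solution exp(t)

# The associated homogeneous equation y' + 2y = 0 contributes C1*exp(-2*t);
# dsolve returns the sum, matching the formula above:
print(sp.dsolve(ode, y(t)))   # Eq(y(t), C1*exp(-2*t) + exp(t))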
== Software for ODE solving ==
Maxima, an open-source computer algebra system.
COPASI, a free (Artistic License 2.0) software package for the integration and analysis of ODEs.
MATLAB, a technical computing application (MATrix LABoratory)
GNU Octave, a high-level language, primarily intended for numerical computations.
Scilab, an open source application for numerical computation.
Maple, a proprietary application for symbolic calculations.
Mathematica, a proprietary application primarily intended for symbolic calculations.
SymPy, a Python package that can solve ODEs symbolically
Julia (programming language), a high-level language primarily intended for numerical computations.
SageMath, an open-source application that uses a Python-like syntax with a wide range of capabilities spanning several branches of mathematics.
SciPy, a Python package that includes an ODE integration module.
Chebfun, an open-source package, written in MATLAB, for computing with functions to 15-digit accuracy.
GNU R, an open source computational environment primarily intended for statistics, which includes packages for ODE solving.
== See also ==
Boundary value problem
Examples of differential equations
Laplace transform applied to differential equations
List of dynamical systems and differential equations topics
Matrix differential equation
Method of undetermined coefficients
Recurrence relation
== Notes ==
== References ==
Halliday, David; Resnick, Robert (1977), Physics (3rd ed.), New York: Wiley, ISBN 0-471-71716-9
Harper, Charlie (1976), Introduction to Mathematical Physics, New Jersey: Prentice-Hall, ISBN 0-13-487538-9
Kreyszig, Erwin (1972), Advanced Engineering Mathematics (3rd ed.), New York: Wiley, ISBN 0-471-50728-8.
Polyanin, A. D. and V. F. Zaitsev, Handbook of Exact Solutions for Ordinary Differential Equations (2nd edition), Chapman & Hall/CRC Press, Boca Raton, 2003. ISBN 1-58488-297-2
Simmons, George F. (1972), Differential Equations with Applications and Historical Notes, New York: McGraw-Hill, LCCN 75173716
Tipler, Paul A. (1991), Physics for Scientists and Engineers: Extended version (3rd ed.), New York: Worth Publishers, ISBN 0-87901-432-6
Boscain, Ugo; Chitour, Yacine (2011), Introduction à l'automatique (PDF) (in French)
Dresner, Lawrence (1999), Applications of Lie's Theory of Ordinary and Partial Differential Equations, Bristol and Philadelphia: Institute of Physics Publishing, ISBN 978-0750305303
Ascher, Uri; Petzold, Linda (1998), Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, SIAM, ISBN 978-1-61197-139-2
== Bibliography ==
Coddington, Earl A.; Levinson, Norman (1955). Theory of Ordinary Differential Equations. New York: McGraw-Hill.
Hartman, Philip (2002) [1964], Ordinary differential equations, Classics in Applied Mathematics, vol. 38, Philadelphia: Society for Industrial and Applied Mathematics, doi:10.1137/1.9780898719222, ISBN 978-0-89871-510-1, MR 1929104
W. Johnson, A Treatise on Ordinary and Partial Differential Equations, John Wiley and Sons, 1913, in University of Michigan Historical Math Collection
Ince, Edward L. (1944) [1926], Ordinary Differential Equations, Dover Publications, New York, ISBN 978-0-486-60349-0, MR 0010757
Witold Hurewicz, Lectures on Ordinary Differential Equations, Dover Publications, ISBN 0-486-49510-8
Ibragimov, Nail H. (1993). CRC Handbook of Lie Group Analysis of Differential Equations Vol. 1-3. Providence: CRC-Press. ISBN 0-8493-4488-3.
Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0.
A. D. Polyanin, V. F. Zaitsev, and A. Moussiaux, Handbook of First Order Partial Differential Equations, Taylor & Francis, London, 2002. ISBN 0-415-27267-X
D. Zwillinger, Handbook of Differential Equations (3rd edition), Academic Press, Boston, 1997.
== External links ==
"Differential equation, ordinary", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
EqWorld: The World of Mathematical Equations, containing a list of ordinary differential equations with their solutions.
Online Notes / Differential Equations by Paul Dawkins, Lamar University.
Differential Equations, S.O.S. Mathematics.
A primer on analytical solution of differential equations from the Holistic Numerical Methods Institute, University of South Florida.
Ordinary Differential Equations and Dynamical Systems lecture notes by Gerald Teschl.
Notes on Diffy Qs: Differential Equations for Engineers An introductory textbook on differential equations by Jiri Lebl of UIUC.
Modeling with ODEs using Scilab A tutorial on how to model a physical system described by ODE using Scilab standard programming language by Openeering team.
Solving an ordinary differential equation in Wolfram|Alpha | Wikipedia/First-order_ordinary_differential_equation |
Rock mechanics is a theoretical and applied science of the mechanical behavior of rocks and rock masses.
In contrast to geology, it is the branch of mechanics concerned with the response of rock and rock masses to the force fields of their physical environment.
== Background ==
Rock mechanics is part of a much broader subject of geomechanics, which is concerned with the mechanical responses of all geological materials, including soils.
Rock mechanics is concerned with the application of the principles of engineering mechanics to the design of structures built in or on rock. Such structures include drilling wells, mine shafts, tunnels, reservoir dams, repository components, and buildings. Rock mechanics is used in many engineering disciplines, but primarily in mining, civil, geotechnical, transportation, and petroleum engineering.
Rock mechanics answers questions such as, "is reinforcement necessary for a rock, or will it be able to handle whatever load it is faced with?" It also includes the design of reinforcement systems, such as rock bolting patterns.
== Assessing the Project Site ==
Before any work begins, the construction site must be investigated to determine its geological conditions. Field observations, deep drilling, and geophysical surveys can all provide the information needed to develop a safe construction plan and create a geological model of the site. The level of investigation depends on factors such as budget, time frame, and expected geological conditions.
The first step of the investigation is the collection of maps and aerial photos to analyze. This can provide information about potential sinkholes, landslides, erosion, etc. Maps can provide information on the rock type of the site, geological structure, and boundaries between bedrock units.
=== Boreholes ===
Creating a borehole is a technique that consists of drilling through the ground in various areas at various depths to get a better understanding of the site's geology. Boreholes must be spaced properly from one another and drilled deep enough to provide accurate information for the geological model. Samples from the borehole are examined, and factors such as rock type, degree of weathering, and types of discontinuities are recorded.
== Methods ==
Testing the properties of a rock is essential to understanding how stable or unstable it is. Rock mechanics involves three categories of testing methods: tests on intact rock, on discontinuities, and on rock masses.
Two direct methods of testing are laboratory tests and in-situ tests. There are also indirect methods, which involve correlations and estimates obtained by analyzing field observations. The data these methods provide are crucial for design and research in rock mechanics and rock engineering.
Intact rock and discontinuities can be tested in the laboratory through small-scale experiments to gather empirical data; rock masses, however, require larger-scale field measurements rather than laboratory work because of their more complex nature.
Laboratory tests provide both classification and characterization of the rock and determine which rock properties will be used in the engineering design. Examples of such laboratory tests include sound velocity tests, hardness tests, creep tests, and tensile strength tests. In-situ tests, in which the rock being studied is subjected to a heavy load and observed for deformation, provide insight into what affects a rock mass's strength and stability.
Understanding the strength of a rock mass is difficult but necessary for ensuring the safety of anything built on or around it; the strength depends on factors such as the environmental conditions, the size of the mass, and the degree of discontinuity.
== See also ==
Engineering geology
Geotechnical engineering
Rock mass classification
Slope stability analysis
Rock mass plasticity
Slope mass rating
== References ==
Jaeger, Cook, and Zimmerman (2008). Fundamentals of Rock Mechanics. Blackwell Publishing. ISBN 9780632057597.
Coates, D F. (1981) "Rock Mechanics Principles." Canada: Monograph 874. | Wikipedia/Rock_mechanics |
Sports biomechanics is the quantitative study and analysis of athletes and sports activities in general. It can simply be described as the physics of sports. Within this specialized field of biomechanics, the laws of mechanics are applied in order to gain a greater understanding of athletic performance through mathematical modeling, computer simulation and measurement.
Biomechanics, as a broader discipline, is the study of the structure and function of biological systems by means of the methods of mechanics (the branch of physics involving analysis of the actions of forces).
Within mechanics there are two sub-fields of study: statics, the study of systems in a state of constant motion, either at rest (with no motion) or moving with constant velocity; and dynamics, the study of systems in motion in which acceleration is present, which may involve kinematics (the study of the motion of bodies with respect to time, displacement, velocity, and speed of movement, either in a straight line or in a rotary direction) and kinetics (the study of the forces associated with motion, including forces causing motion and forces resulting from motion). Sports biomechanists help people obtain optimal muscle recruitment and performance. A biomechanist also applies this knowledge to proper load-bearing techniques that preserve the body.
Human biomechanics helps analyze the body's movements, exploring how internal forces, such as those of muscles, ligaments, and joints, create external movement. Integrating the principles of the broad field of biomechanics with the specific discipline of human biomechanics yields a more specialized field that meets the specific demands of athletes: sports biomechanics. By analyzing sports biomechanics, changes can be implemented to improve sports performance, rehabilitation, and injury prevention.
== Sports performance ==
Sports performance is one area that can be improved by analyzing an athlete's movements. A sports biomechanics analyst can identify where an athlete makes errors in their movements and predict possible injury risks. Faults such as improper technique can be identified by comparison with elite-level athletes in the same sport. Correcting such faults improves an athlete's technique and may decrease the effort needed to execute the skill, resulting in improved athletic performance.
Preventative biomechanics is another factor that can lead to improved sports performance. Preventative biomechanics involves the integration of human biomechanical methods and medical clinical practices, with the goal of assessing and reducing the risk of musculoskeletal injuries prior to their occurrence. Preventative biomechanics can improve sports performance by mitigating the risk of an athlete becoming injured.
== Rehabilitation ==
Rehabilitation is another area that can be affected by the analysis of an athlete's movements. Improved rehabilitation can be achieved by analyzing an athlete's sports biomechanics, and the use of different modalities in combination with this analysis has shortened rehabilitation times. A notable modality used during rehabilitation is resistance training. Studies indicate that resistance training contributes to the enhancement of athletes' joint mobility and stability. Applying resistance training in an athlete's rehabilitation plan strengthens the muscles surrounding the affected joint and the other joints that help support the injury. This strengthening helps the current injury heal and prevents further injuries in the future.
== Injury prevention ==
Injury prevention is yet another area that can be influenced by the analysis of an athlete's movements. By addressing the specific points where injuries most often occur, individual biomechanics around those areas are observed and corrected if biomechanical faults are discovered. These proactive corrections help reduce injuries through the early application of preventative measures. As highlighted in the sports performance section, preventative sports biomechanics, which combines human biomechanical methods with medical clinical practice with a specific emphasis on athletes, plays a large role in injury prevention: its primary goal is assessing and reducing the risk of musculoskeletal injuries before they occur, and early recognition of errors increases the effectiveness of injury prevention.
== Things related to biomechanics ==
Engineering mechanics
Muscle mechanics
Motor coordination
Kinematics
Inverse dynamics
Statics
Kinetics
Velocity
Displacement
Acceleration
Moment of Inertia
Torque
Digital filters
== Experimental sports biomechanics ==
Methods:
3D Motion capture analysis
Force plates
Force transducers
Strain gauges
Anthropometric measurements (mathematical models)
Surface EMG (Electromyography)
== Research and applications ==
Golf swing
Tennis
Gymnastics
Track and field
Running blades
Swimming
Diving
Skiing
Trampoline
Rowing
Baseball
Figure Skating
Exergaming design and evaluation
Movement Assessment
Olympic weightlifting
Powerlifting
== See also ==
Leonardo da Vinci
== References ==
== Bibliography ==
Wolfgang Baumann (1989). Grundlagen der Biomechanik. Verlag Karl Hofman. ISBN 3-7780-8141-1.
David A. Winter (2004). Biomechanics and motor control of human movement. Wiley. ISBN 0-471-44989-X.
== External links ==
Modelling Biomechanics - Athletes go to the max - Scientific Computing World
Loughborough University - Sports Biomechanics and Motor Control Research Group
International Society of Biomechanics in Sports
BASES - The British Association of Sport and Exercise Sciences. Biomechanics Archived 2017-06-22 at the Wayback Machine
History of Biomechanics - Ariel Dynamics Video Library
Vicon | Products | Cameras
Kistler - Sports and performance diagnostics
Analysis - National Instruments
PhysiMax | Products | Movement Performance 3D | Wikipedia/Sports_biomechanics |
Heat transfer physics describes the kinetics of energy storage, transport, and energy transformation by principal energy carriers: phonons (lattice vibration waves), electrons, fluid particles, and photons. Heat is thermal energy stored in the temperature-dependent motion of particles including electrons, atomic nuclei, individual atoms, and molecules. Heat is transferred to and from matter by the principal energy carriers. The state of energy stored within matter, or transported by the carriers, is described by a combination of classical and quantum statistical mechanics. The energy is also transformed (converted) among the various carriers.
The heat transfer processes (or kinetics) are governed by the rates at which various related physical phenomena occur, such as (for example) the rate of particle collisions in classical mechanics. These various states and kinetics determine the heat transfer, i.e., the net rate of energy storage or transport. Governing these processes from the atomic level (atom or molecule length scale) to the macroscale are the laws of thermodynamics, including conservation of energy.
== Introduction ==
Heat is thermal energy associated with temperature-dependent motion of particles. The macroscopic energy equation for infinitesimal volume used in heat transfer analysis is
{\displaystyle \nabla \cdot \mathbf {q} =-\rho c_{p}{\frac {\partial T}{\partial t}}+\sum _{i,j}{\dot {s}}_{i-j},}
where q is heat flux vector, −ρcp(∂T/∂t) is temporal change of internal energy (ρ is density, cp is specific heat capacity at constant pressure, T is temperature and t is time), and
{\displaystyle {\dot {s}}}
is the energy conversion to and from thermal energy (i and j are for principal energy carriers). So, the terms represent energy transport, storage and transformation. Heat flux vector q is composed of three macroscopic fundamental modes, which are conduction (qk = −k∇T, k: thermal conductivity), convection (qu = ρcpuT, u: velocity), and radiation (
{\textstyle \mathbf {q} _{r}=2\pi \int _{0}^{\infty }\int _{0}^{\pi }\mathbf {s} I_{ph,\omega }\sin(\theta )d\theta \,d\omega }
, ω: angular frequency, θ: polar angle, Iph,ω: spectral, directional radiation intensity, s: unit vector), i.e., q = qk + qu + qr.
Once states and kinetics of the energy conversion and thermophysical properties are known, the fate of heat transfer is described by the above equation. These atomic-level mechanisms and kinetics are addressed in heat transfer physics. The microscopic thermal energy is stored, transported, and transformed by the principal energy carriers: phonons (p), electrons (e), fluid particles (f), and photons (ph).
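As a minimal illustration of the macroscopic energy equation above (conduction only, with no convection, radiation, or energy conversion terms), the sketch below marches the one-dimensional form ρcp∂T/∂t = k∂²T/∂x² with an explicit finite-difference scheme; the material properties, geometry, and boundary temperatures are assumed for illustration.

```python
import numpy as np

# 1-D transient conduction: rho*cp*dT/dt = k*d2T/dx2 (q = -k dT/dx, no s_dot terms)
k, rho, cp = 150.0, 2300.0, 700.0       # assumed silicon-like properties (W/m-K, kg/m3, J/kg-K)
alpha = k / (rho * cp)                  # thermal diffusivity (m2/s)
L, nx = 1e-3, 101                       # 1 mm slab, number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha                # explicit stability requires dt <= 0.5*dx^2/alpha

T = np.full(nx, 300.0)                  # initial temperature (K)
T[0], T[-1] = 400.0, 300.0              # fixed boundary temperatures

for _ in range(50000):                  # march toward (near) steady state
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])

q = -k * np.gradient(T, dx)             # heat flux q_k = -k dT/dx
print(f"mid-plane T = {T[nx//2]:.1f} K, surface flux = {q[0]:.3e} W/m2")
```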
== Length and time scales ==
Thermophysical properties of matter and the kinetics of interaction and energy exchange among the principal carriers are based on the atomic-level configuration and interaction. Transport properties such as thermal conductivity are calculated from these atomic-level properties using classical and quantum physics. Quantum states of the principal carriers (e.g., momentum, energy) are derived from the Schrödinger equation (called first principles or ab initio), and the interaction rates (for kinetics) are calculated using the quantum states and quantum perturbation theory (formulated as the Fermi golden rule). A variety of ab initio (Latin for "from the beginning") solvers (software) exists (e.g., ABINIT, CASTEP, Gaussian, Q-Chem, Quantum ESPRESSO, SIESTA, VASP, WIEN2k). Electrons in the inner shells (core) are not involved in heat transfer, and calculations are greatly reduced by proper approximations for the inner-shell electrons.
The quantum treatments, including equilibrium and nonequilibrium ab initio molecular dynamics (MD), involving larger lengths and times are limited by the computational resources, so various alternative treatments with simplifying assumptions have been used for the states and kinetics. In classical (Newtonian) MD, the motion of an atom or molecule (particle) is based on empirical or effective interaction potentials, which in turn can be based on curve fits of ab initio calculations or curve fits to thermophysical properties. From the ensembles of simulated particles, static or dynamic thermal properties or scattering rates are derived.
At yet larger length scales (mesoscale, involving many mean free paths), the Boltzmann transport equation (BTE), which is based on classical Hamiltonian-statistical mechanics, is applied. The BTE considers particle states in terms of position and momentum vectors (x, p), and this is represented as the state occupation probability. The occupation has equilibrium distributions (the known boson, fermion, and Maxwell–Boltzmann particles), and transport of energy (heat) is due to nonequilibrium (caused by a driving force or potential). Central to the transport is the role of scattering, which turns the distribution toward equilibrium. The scattering is represented by the relaxation time or the mean free path. The relaxation time (or its inverse, the interaction rate) is found from other calculations (ab initio or MD) or empirically. The BTE can be numerically solved with the Monte Carlo method, etc.
Depending on the length and time scale, the proper level of treatment (ab initio, MD, or BTE) is selected. Heat transfer physics analyses may involve multiple scales (e.g., BTE using interaction rates from ab initio or classical MD) with states and kinetics related to thermal energy storage, transport and transformation.
Thus, heat transfer physics covers the four principal energy carriers and their kinetics from the classical and quantum mechanical perspectives. This enables multiscale (ab initio, MD, BTE and macroscale) analyses, including low-dimensionality and size effects.
== Phonon ==
Phonon (quantized lattice vibration wave) is a central thermal energy carrier contributing to heat capacity (sensible heat storage) and conductive heat transfer in condensed phase, and plays a very important role in thermal energy conversion. Its transport properties are represented by the phonon conductivity tensor Kp (W/m-K, from the Fourier law qk,p = -Kp⋅∇ T) for bulk materials, and the phonon boundary resistance ARp,b [K/(W/m2)] for solid interfaces, where A is the interface area. The phonon specific heat capacity cv,p (J/kg-K) includes the quantum effect. The thermal energy conversion rate involving phonon is included in
{\displaystyle {\dot {s}}_{i{\mbox{-}}j}}
. Heat transfer physics describes and predicts cv,p, Kp, Rp,b (or conductance Gp,b) and
{\displaystyle {\dot {s}}_{i{\mbox{-}}j}}
, based on atomic-level properties.
For an equilibrium potential ⟨φ⟩o of a system with N atoms, the total potential ⟨φ⟩ is found by a Taylor series expansion at the equilibrium and this can be approximated by the second derivatives (the harmonic approximation) as
{\displaystyle {\begin{aligned}\langle \varphi \rangle &=\langle \varphi \rangle _{\mathrm {o} }+\left.\sum _{i}\sum _{\alpha }{\frac {\partial \langle \varphi \rangle }{\partial d_{i\alpha }}}\right|_{\mathrm {o} }d_{i\alpha }+\left.{\frac {1}{2}}\sum _{i,j}\sum _{\alpha ,\beta }{\frac {\partial ^{2}\langle \varphi \rangle }{\partial d_{i\alpha }\partial d_{j\beta }}}\right|_{\mathrm {o} }d_{i\alpha }d_{j\beta }+\left.{\frac {1}{6}}\sum _{i,j,k}\sum _{\alpha ,\beta ,\gamma }{\frac {\partial ^{3}\langle \varphi \rangle }{\partial d_{i\alpha }\partial d_{j\beta }\partial d_{k\gamma }}}\right|_{\mathrm {o} }d_{i\alpha }d_{j\beta }d_{k\gamma }+\cdots \\&\approx \langle \varphi \rangle _{\mathrm {o} }+{\frac {1}{2}}\sum _{i,j}\sum _{\alpha ,\beta }\Gamma _{\alpha \beta }d_{i\alpha }d_{j\beta },\end{aligned}}}
where di is the displacement vector of atom i, and Γ is the spring (or force) constant as the second-order derivatives of the potential. The equation of motion for the lattice vibration in terms of the displacement of atoms [d(jl,t): displacement vector of the j-th atom in the l-th unit cell at time t] is
{\displaystyle m_{j}{\frac {d^{2}\mathbf {d} (jl,t)}{dt^{2}}}=-\sum _{j'l'}{\boldsymbol {\Gamma }}{\binom {j\ j^{\prime }}{l\ l'}}\cdot \mathbf {d} (j'l',t),}
where m is the atomic mass and Γ is the force constant tensor. The atomic displacement is the summation over the normal modes [sα: unit vector of mode α, ωp: angular frequency of wave, and κp: wave vector]. Using this plane-wave displacement, the equation of motion becomes the eigenvalue equation
{\displaystyle \mathbf {M} \omega _{p}^{2}({\boldsymbol {\kappa }}_{p},\alpha )\mathbf {s} _{\alpha }({\boldsymbol {\kappa }}_{p})=\mathbf {D} ({\boldsymbol {\kappa }}_{p})\mathbf {s} _{\alpha }({\boldsymbol {\kappa }}_{p}),}
where M is the diagonal mass matrix and D is the harmonic dynamical matrix. Solving this eigenvalue equation gives the relation between the angular frequency ωp and the wave vector κp, and this relation is called the phonon dispersion relation. Thus, the phonon dispersion relation is determined by matrices M and D, which depend on the atomic structure and the strength of interaction among constituent atoms (the stronger the interaction and the lighter the atoms, the higher is the phonon frequency and the larger is the slope dωp/dκp). The Hamiltonian of phonon system with the harmonic approximation is
{\displaystyle \mathrm {H} _{p}=\sum _{x}{\frac {1}{2m}}\mathbf {p} ^{2}(\mathbf {x} )+{\frac {1}{2}}\sum _{\mathbf {x} ,\mathbf {x} '}\mathbf {d} _{i}(\mathbf {x} )D_{ij}(\mathbf {x} -\mathbf {x} ')\mathbf {d} _{j}(\mathbf {x} '),}
where Dij is the dynamical matrix element between atoms i and j, and di (dj) is the displacement of i (j) atom, and p is momentum. From this and the solution to dispersion relation, the phonon annihilation operator for the quantum treatment is defined as
{\displaystyle b_{\kappa ,\alpha }={\frac {1}{N^{1/2}}}\sum _{\kappa _{p},\alpha }e^{-i({\boldsymbol {\kappa }}_{p}\cdot \mathbf {x} )}\mathbf {s} _{\alpha }({\boldsymbol {\kappa }}_{p})\cdot \left[\left({\frac {m\omega _{p,\alpha }}{2\hbar }}\right)^{1/2}\mathbf {d} (\mathbf {x} )+i\left({\frac {1}{2\hbar m\omega _{p,\alpha }}}\right)^{1/2}\mathbf {p} (\mathbf {x} )\right],}
where N is the number of normal modes divided by α and ħ is the reduced Planck constant. The creation operator is the adjoint of the annihilation operator,
{\displaystyle b_{\kappa ,\alpha }^{\dagger }={\frac {1}{N^{1/2}}}\sum _{\kappa _{p},\alpha }e^{i({\boldsymbol {\kappa }}_{p}\cdot \mathbf {x} )}\mathbf {s} _{\alpha }({\boldsymbol {\kappa }}_{p})\cdot \left[\left({\frac {m\omega _{p,\alpha }}{2\hbar }}\right)^{1/2}\mathbf {d} (\mathbf {x} )-i\left({\frac {1}{2\hbar m\omega _{p,\alpha }}}\right)^{1/2}\mathbf {p} (\mathbf {x} )\right].}
The Hamiltonian in terms of bκ,α† and bκ,α is Hp = Σκ,αħωp,α[bκ,α†bκ,α + 1/2], where bκ,α†bκ,α is the phonon number operator. The energy of the quantum harmonic oscillator is Ep = Σκ,α [fp(κ,α) + 1/2]ħωp,α(κp), and thus the quantum of phonon energy is ħωp.
The phonon dispersion relation gives all possible phonon modes within the Brillouin zone (the zone within the primitive cell in reciprocal space), and the phonon density of states Dp (the number density of possible phonon modes). The phonon group velocity up,g is the slope of the dispersion curve, dωp/dκp. Since the phonon is a boson, its occupancy follows the Bose–Einstein distribution {fpo = [exp(ħωp/kBT)-1]−1, kB: Boltzmann constant}. Using the phonon density of states and this occupancy distribution, the phonon energy is Ep(T) = ∫Dp(ωp)fp(ωp,T)ħωpdωp, and the phonon density is np(T) = ∫Dp(ωp)fp(ωp,T)dωp. The phonon heat capacity cv,p (in solids cv,p = cp,p; cv,p: constant-volume heat capacity, cp,p: constant-pressure heat capacity) is the temperature derivative of the phonon energy; for the Debye model (linear dispersion model), it is
{\displaystyle c_{v,p}=\left.{\frac {dE_{p}}{dT}}\right|_{v}={\frac {9k_{\mathrm {B} }}{m}}\left({\frac {T}{T_{D}}}\right)^{3}n\int _{0}^{T_{D}/T}{\frac {x^{4}e^{x}}{\left(e^{x}-1\right)^{2}}}dx\qquad (x={\frac {\hbar \omega }{k_{\mathrm {B} }T}}),}
where TD is the Debye temperature, m is the atomic mass, and n is the atomic number density (the number density of phonon modes for the crystal is 3n). This gives the Debye T3 law at low temperatures and the Dulong–Petit law at high temperatures.
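A short numerical sketch of the Debye heat capacity above, evaluated per atom (reporting cv in units of kB per atom, so the per-mass prefactor divides out); the Debye temperature is an assumed, silicon-like value. It reproduces the T³ behavior at low temperature and the Dulong–Petit limit of 3kB at high temperature.

```python
import numpy as np
from scipy.integrate import quad

kB = 1.380649e-23  # Boltzmann constant (J/K)

def debye_cv_per_atom(T, T_D):
    """Debye heat capacity per atom: c_v = 9 k_B (T/T_D)^3 * int_0^{T_D/T} x^4 e^x/(e^x-1)^2 dx."""
    integrand = lambda x: x**4 * np.exp(x) / (np.exp(x) - 1.0)**2
    I, _ = quad(integrand, 0.0, T_D / T)
    return 9.0 * kB * (T / T_D)**3 * I

T_D = 645.0  # assumed Debye temperature (K), roughly that of silicon
for T in (10.0, 100.0, 300.0, 2000.0):
    print(f"T = {T:7.1f} K: c_v = {debye_cv_per_atom(T, T_D)/kB:.4f} k_B per atom")
# Low T: c_v ~ T^3 (Debye law); high T: c_v -> 3 k_B (Dulong-Petit)
```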
From the kinetic theory of gases, thermal conductivity of principal carrier i (p, e, f and ph) is
{\displaystyle k_{i}={\frac {1}{3}}n_{i}c_{v,i}u_{i}\lambda _{i},}
where ni is the carrier density and the heat capacity is per carrier, ui is the carrier speed and λi is the mean free path (the distance traveled by a carrier before a scattering event). Thus, the larger the carrier density, heat capacity and speed, and the less significant the scattering, the higher is the conductivity. For phonons, λp represents the interaction (scattering) kinetics of phonons and is related to the scattering relaxation time τp or rate (= 1/τp) through λp = upτp. Phonons interact with other phonons, and with electrons, boundaries, impurities, etc., and λp combines these interaction mechanisms through the Matthiessen rule. At low temperatures, scattering by boundaries is dominant; with increasing temperature, the interaction rates with impurities, electrons and other phonons become important, and finally phonon-phonon scattering dominates for T > 0.2TD. The interaction rates are reviewed in the literature and include treatments based on quantum perturbation theory and MD.
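The following sketch combines hypothetical boundary, impurity, and phonon-phonon mean free paths through the Matthiessen rule and evaluates the kinetic-theory conductivity k = (1/3)Cuλ; all numerical inputs are assumed for illustration only.

```python
# Kinetic-theory conductivity k = (1/3) C u lambda, with Matthiessen's rule
# combining scattering mechanisms: 1/lambda = sum_i 1/lambda_i
u_p = 6000.0                        # assumed phonon group speed (m/s)
C   = 1.6e6                         # assumed volumetric heat capacity n*c_v (J/m3-K)

lambda_boundary = 1e-6              # boundary-scattering mean free path (m), hypothetical
lambda_impurity = 2e-7              # impurity scattering, hypothetical
lambda_phonon   = 5e-8              # phonon-phonon (dominant for T > ~0.2 T_D), hypothetical

lam = 1.0 / (1/lambda_boundary + 1/lambda_impurity + 1/lambda_phonon)  # Matthiessen rule
k_p = C * u_p * lam / 3.0
print(f"combined mean free path = {lam*1e9:.1f} nm, k_p = {k_p:.1f} W/m-K")
```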
A number of conductivity models are available with approximations regarding the dispersion and λp. Using the single-mode relaxation time approximation (∂fp′/∂t|s = −fp′/τp) and the gas kinetic theory, the Callaway phonon (lattice) conductivity model is
{\displaystyle k_{p,\mathbf {s} }={\frac {1}{8\pi ^{3}}}\sum _{\alpha }\int c_{v,p}\tau _{p}(\mathbf {u} _{p,g}\cdot \mathbf {s} )^{2}d\kappa \ \ \ \ \ {\text{ for component along }}\mathbf {s} ,}
{\displaystyle k_{p}={\frac {1}{6\pi ^{3}}}\sum _{\alpha }\int c_{v,p}\tau _{p}{u}_{p,g}^{2}\kappa ^{2}d\kappa \ \ \ \ \ \ \ \ {\text{for isotropic conductivity}}.}
With the Debye model (a single group velocity up,g, and a specific heat capacity calculated above), this becomes
{\displaystyle k_{p}=\left(48\pi ^{2}\right)^{1/3}{\frac {k_{\mathrm {B} }^{3}T^{3}}{ah_{\mathrm {P} }^{2}T_{\mathrm {D} }}}\int _{0}^{T/T_{\mathrm {D} }}\tau _{p}{\frac {x^{4}e^{x}}{\left(e^{x}-1\right)^{2}}}dx,}
where a is the lattice constant (a = n−1/3 for a cubic lattice), and n is the atomic number density. The Slack phonon conductivity model, mainly considering acoustic-phonon scattering (three-phonon interactions), is given as
{\displaystyle k_{p}=k_{p,S}={\frac {3.1\times 10^{12}\langle M\rangle V_{a}^{1/3}T_{D,\infty }^{3}}{T\langle \gamma _{G}^{2}\rangle N_{o}^{2/3}}}\qquad {\text{ high temperatures }}(T>0.2T_{D},{\text{ phonon-phonon scattering only)}},}
where ⟨M⟩ is the mean atomic weight of the atoms in the primitive cell, Va=1/n is the average volume per atom, TD,∞ is the high-temperature Debye temperature, T is the temperature, No is the number of atoms in the primitive cell, and ⟨γ2G⟩ is the mode-averaged square of the Grüneisen constant or parameter at high temperatures. This model is widely tested with pure nonmetallic crystals, and the overall agreement is good, even for complex crystals.
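Because the unit convention behind the dimensional prefactor is not stated here, the sketch below uses only the scaling part of the Slack model, which is convention-independent; the crystal parameters are hypothetical. It confirms the 1/T dependence expected above roughly 0.2TD.

```python
def slack_scaling(M_bar, Va, TD_inf, T, gamma2, No):
    """Slack model up to the dimensional prefactor:
    k ~ <M> Va^(1/3) TD_inf^3 / (T <gamma^2> No^(2/3))."""
    return M_bar * Va**(1.0/3.0) * TD_inf**3 / (T * gamma2 * No**(2.0/3.0))

# Hypothetical crystal: compare 300 K and 600 K (pure 1/T behavior above ~0.2 T_D)
base = dict(M_bar=28.1, Va=2.0e-29, TD_inf=645.0, gamma2=1.0, No=2)
k300 = slack_scaling(T=300.0, **base)
k600 = slack_scaling(T=600.0, **base)
print(f"k(600 K)/k(300 K) = {k600/k300:.2f}  (expected 0.50 from the 1/T dependence)")
```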
Based on the kinetics and atomic-structure considerations, a material with high crystallinity and strong interactions, composed of light atoms (such as diamond and graphene), is expected to have a large phonon conductivity. Solids with more than one atom in the smallest unit cell representing the lattice have two types of phonons, i.e., acoustic and optical. (Acoustic phonons are in-phase movements of atoms about their equilibrium positions, while optical phonons are out-of-phase movements of adjacent atoms in the lattice.) Optical phonons have higher energies (frequencies), but make a smaller contribution to conduction heat transfer because of their smaller group velocity and occupancy.
Phonon transport across hetero-structure boundaries (represented by Rp,b, the phonon boundary resistance) is modeled, according to the boundary scattering approximations, with the acoustic and diffuse mismatch models. Larger phonon transmission (small Rp,b) occurs at boundaries where the material pair has similar phonon properties (up, Dp, etc.); in contrast, large Rp,b occurs when one material is softer (lower cut-off phonon frequency) than the other.
== Electron ==
Quantum energy states for the electron are found using the electron quantum Hamiltonian, which is generally composed of kinetic (−ħ2∇2/2me) and potential energy (φe) terms. The atomic orbital, a mathematical function describing the wave-like behavior of either an electron or a pair of electrons in an atom, can be found from the Schrödinger equation with this electron Hamiltonian. Hydrogen-like atoms (a nucleus and an electron) allow a closed-form solution to the Schrödinger equation with the electrostatic potential (the Coulomb law). The Schrödinger equation of atoms or atomic ions with more than one electron has not been solved analytically because of the Coulomb interactions among electrons. Thus, numerical techniques are used, and an electron configuration is approximated as a product of simpler hydrogen-like atomic orbitals (isolated electron orbitals). Molecules with multiple atoms (nuclei and their electrons) have molecular orbitals (MO, a mathematical function for the wave-like behavior of an electron in a molecule), obtained from simplified solution techniques such as the linear combination of atomic orbitals (LCAO). The molecular orbital is used to predict chemical and physical properties, and the difference between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) is a measure of the excitability of the molecule.
In a crystal structure of metallic solids, the free electron model (zero potential, φe = 0) for the behavior of valence electrons is used. However, in a periodic lattice (crystal), there is periodic crystal potential, so the electron Hamiltonian becomes
{\displaystyle \mathrm {H} _{e}=-{\frac {\hbar ^{2}}{2m_{e}}}\nabla ^{2}+\varphi _{c}(\mathbf {x} ),}
where me is the electron mass, and the periodic potential is expressed as φc (x) = Σg φgexp[i(g∙x)] (g: reciprocal lattice vector). The time-independent Schrödinger equation with this Hamiltonian is given as (the eigenvalue equation)
{\displaystyle \mathrm {H} _{e}\psi _{e,\mathbf {x} }(\mathbf {x} )=E_{e}({\boldsymbol {\kappa }}_{e})\psi _{e,\mathbf {x} }(\mathbf {x} ),}
where the eigenfunction ψe,κ is the electron wave function and the eigenvalue Ee(κe) is the electron energy (κe: electron wavevector). The relation between the wavevector κe and the energy Ee provides the electronic band structure. In practice, a lattice as a many-body system includes the interactions between electrons and nuclei in the potential, but this calculation can be too intricate. Thus, many approximate techniques have been suggested; one of them is density functional theory (DFT), which uses functionals of the spatially dependent electron density instead of the full interactions. DFT is widely used in ab initio software (ABINIT, CASTEP, Quantum ESPRESSO, SIESTA, VASP, WIEN2k, etc.). The electron specific heat is based on the energy states and the occupancy distribution (the Fermi–Dirac statistics). In general, the heat capacity of electrons is small except at very high temperatures when they are in thermal equilibrium with phonons (the lattice). Electrons contribute to heat conduction (in addition to charge carrying) in solids, especially in metals. The thermal conductivity tensor of a solid is the sum of the electronic and phonon thermal conductivity tensors, K = Ke + Kp.
Electrons are affected by two thermodynamic forces [from the charge, ∇(EF/ec) where EF is the Fermi level and ec is the electron charge and temperature gradient, ∇(1/T)] because they carry both charge and thermal energy, and thus electric current je and heat flow q are described with the thermoelectric tensors (Aee, Aet, Ate, and Att) from the Onsager reciprocal relations as
{\displaystyle \mathbf {j} _{e}=\mathbf {A} _{ee}\cdot \nabla {\frac {E_{\mathrm {F} }}{e_{c}}}+\mathbf {A} _{et}\cdot \nabla {\frac {1}{T}},\ \ {\text{and}}}
{\displaystyle \mathbf {q} =\mathbf {A} _{te}\cdot \nabla {\frac {E_{\mathrm {F} }}{e_{c}}}+\mathbf {A} _{tt}\cdot \nabla {\frac {1}{T}}.}
Converting these equations to have je equation in terms of electric field ee and ∇T and q equation with je and ∇T, (using scalar coefficients for isotropic transport, αee, αet, αte, and αtt instead of Aee, Aet, Ate, and Att)
{\displaystyle \mathbf {j} _{e}=\alpha _{ee}\mathbf {e} _{e}-{\frac {\alpha _{et}}{T^{2}}}\nabla T\qquad (\mathbf {e} _{e}=\alpha _{ee}^{-1}\mathbf {j} _{e}+{\frac {\alpha _{ee}^{-1}\alpha _{et}}{T^{2}}}\nabla T),}
{\displaystyle \mathbf {q} =\alpha _{te}\alpha _{ee}^{-1}\mathbf {j} _{e}-{\frac {\alpha _{tt}-\alpha _{te}\alpha _{ee}^{-1}\alpha _{et}}{T^{2}}}\nabla T.}
Electrical conductivity/resistivity σe (Ω−1m−1)/ ρe (Ω-m), electric thermal conductivity ke (W/m-K) and the Seebeck/Peltier coefficients αS (V/K)/αP (V) are defined as,
{\displaystyle \sigma _{e}={\frac {1}{\rho _{e}}}=\alpha _{ee},\ \ k_{e}={\frac {\alpha _{tt}-\alpha _{te}\alpha _{ee}^{-1}\alpha _{et}}{T^{2}}},\mathrm {and} \ \alpha _{\mathrm {S} }={\frac {\alpha _{et}\alpha _{ee}^{-1}}{T^{2}}}\ \ (\alpha _{\mathrm {S} }=\alpha _{\mathrm {P} }T).}
Various carriers (electrons, magnons, phonons, and polarons) and their interactions substantially affect the Seebeck coefficient. The Seebeck coefficient can be decomposed into two contributions, αS = αS,pres + αS,trans, where αS,pres is the sum of the contributions to the carrier-induced entropy change, i.e., αS,pres = αS,mix + αS,spin + αS,vib (αS,mix: entropy-of-mixing, αS,spin: spin entropy, and αS,vib: vibrational entropy). The other contribution, αS,trans, is the net energy transferred in moving a carrier divided by qT (q: carrier charge). The electron contributions to the Seebeck coefficient are mostly in αS,pres. The αS,mix is usually dominant in lightly doped semiconductors. The change of the entropy-of-mixing upon adding an electron to a system is given by the so-called Heikes formula
{\displaystyle \alpha _{\mathrm {S,mix} }={\frac {1}{q}}{\frac {\partial S_{\mathrm {mix} }}{\partial N}}={\frac {k_{\mathrm {B} }}{q}}\ln \left({\frac {1-f_{e}^{\mathrm {o} }}{f_{e}^{\mathrm {o} }}}\right),}
where feo = N/Na is the ratio of electrons to sites (carrier concentration). Using the chemical potential (μ), the thermal energy (kBT) and the Fermi function, the above equation can be expressed in the alternative form αS,mix = (kB/q)[(Ee − μ)/(kBT)].
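A minimal evaluation of the Heikes formula above, taking q = −ec for electrons; the site occupancies are illustrative. It shows the large Seebeck coefficients of dilute carriers and the vanishing of the entropy-of-mixing contribution at half filling.

```python
import numpy as np

kB = 1.380649e-23      # J/K
ec = 1.602176634e-19   # C

def seebeck_mixing(f_occupancy, q=-ec):
    """Heikes formula: alpha_S,mix = (k_B/q) ln[(1 - f)/f], f = N/N_a (carriers per site)."""
    return (kB / q) * np.log((1.0 - f_occupancy) / f_occupancy)

for f in (1e-4, 1e-2, 0.5):
    print(f"f = {f:6.0e}: alpha_S,mix = {seebeck_mixing(f)*1e6:8.1f} uV/K")
# Dilute electrons (small f) give a large negative Seebeck coefficient;
# at half filling (f = 0.5) the mixing contribution vanishes.
```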
Extending the Seebeck effect to spins, a ferromagnetic alloy can be a good example. The contribution to the Seebeck coefficient that results from electrons' presence altering the system's spin entropy is given by αS,spin = ΔSspin/q = (kB/q)ln[(2s + 1)/(2s0 +1)], where s0 and s are the net spins of the magnetic site in the absence and presence of the carrier, respectively. Many vibrational effects involving electrons also contribute to the Seebeck coefficient. One example is the softening of the vibrational frequencies, which produces a change of the vibrational entropy. The vibrational entropy is the negative derivative of the free energy, i.e.,
{\displaystyle S_{\mathrm {vib} }=-{\frac {\partial F_{\mathrm {mix} }}{\partial T}}=3Nk_{\mathrm {B} }T\int _{0}^{\omega }\left\{{\frac {\hbar \omega }{2k_{\mathrm {B} }T}}\coth \left({\frac {\hbar \omega }{2k_{\mathrm {B} }T}}\right)-\ln \left[2\sinh \left({\frac {\hbar \omega }{2k_{\mathrm {B} }T}}\right)\right]\right\}D_{p}(\omega )d\omega ,}
where Dp(ω) is the phonon density-of-states for the structure. For the high-temperature limit and series expansions of the hyperbolic functions, the above is simplified as αS,vib = (ΔSvib/q) = (kB/q)Σi(-Δωi/ωi).
The Seebeck coefficient derived in the above Onsager formulation is the mixing component αS,mix, which dominates in most semiconductors. The vibrational component in high-band gap materials such as B13C2 is very important.
Considering the microscopic transport (transport is a result of nonequilibrium),
{\displaystyle \mathbf {j} _{e}=-{\frac {e_{c}}{\hbar ^{3}}}\sum _{p}\mathbf {u} _{e}f_{e}^{\prime }=-{\frac {e_{c}}{\hbar ^{3}k_{\mathrm {B} }T}}\sum _{p}\mathbf {u} _{e}\tau _{e}\left(-{\frac {\partial f_{e}^{\mathrm {o} }}{\partial E_{e}}}\right)(\mathbf {u} _{e}\cdot \mathbf {F} _{te}),}
{\displaystyle \mathbf {q} ={\frac {1}{\hbar ^{3}}}\sum _{p}(E_{e}-E_{\mathrm {F} })\mathbf {u} _{e}f_{e}^{\prime }={\frac {1}{\hbar ^{3}k_{\mathrm {B} }T}}\sum _{p}\mathbf {u} _{e}\tau _{e}\left(-{\frac {\partial f_{e}^{\mathrm {o} }}{\partial E_{e}}}\right)(E_{e}-E_{\mathrm {F} })(\mathbf {u} _{e}\cdot \mathbf {F} _{te}),}
where ue is the electron velocity vector, fe (feo) is the electron nonequilibrium (equilibrium) distribution, τe is the electron scattering time, Ee is the electron energy, and Fte is the electric and thermal forces from ∇(EF/ec) and ∇(1/T).
Relating the thermoelectric coefficients to the microscopic transport equations for je and q, the thermal, electric, and thermoelectric properties are calculated. Thus, ke increases with the electrical conductivity σe and temperature T, as the Wiedemann–Franz law presents [ke/(σeTe) = (1/3)(πkB/ec)2 = 2.44×10−8 W-Ω/K2]. Electron transport (represented as σe) is a function of carrier density ne,c and electron mobility μe (σe = ecne,cμe). μe is determined by electron scattering rates
{\displaystyle {\dot {\gamma }}_{e}} (or relaxation time, {\displaystyle \tau _{e}=1/{\dot {\gamma }}_{e}}) in various interaction mechanisms including interaction with other electrons, phonons, impurities and boundaries.
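A short sketch of the Wiedemann–Franz estimate of ke from σe cited above; the conductivity value is an assumed, copper-like number, and the difference from measurement (about 400 W/m-K for copper) reflects the law's idealizations.

```python
import math

kB = 1.380649e-23      # J/K
ec = 1.602176634e-19   # C

L = (math.pi**2 / 3.0) * (kB / ec)**2   # Lorenz number, ~2.44e-8 W-Ohm/K^2
sigma_e = 5.96e7                        # assumed electrical conductivity of copper (1/Ohm-m)
T = 300.0
k_e = L * sigma_e * T                   # Wiedemann-Franz estimate of the electronic conductivity
print(f"Lorenz number = {L:.3e} W-Ohm/K^2, k_e = {k_e:.0f} W/m-K")
```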
Electrons interact with other principal energy carriers. Electrons accelerated by an electric field are relaxed through the energy conversion to phonon (in semiconductors, mostly optical phonon), which is called Joule heating. Energy conversion between electric potential and phonon energy is considered in thermoelectrics such as Peltier cooling and thermoelectric generator. Also, study of interaction with photons is central in optoelectronic applications (i.e. light-emitting diode, solar photovoltaic cells, etc.). Interaction rates or energy conversion rates can be evaluated using the Fermi golden rule (from the perturbation theory) with ab initio approach.
== Fluid particle ==
A fluid particle is the smallest unit (an atom or molecule) in the fluid phase (gas, liquid or plasma) that does not break any chemical bond. The energy of a fluid particle is divided into potential, electronic, translational, vibrational, and rotational energies. The heat (thermal) energy storage in a fluid particle is through the temperature-dependent particle motion (translational, vibrational, and rotational energies). The electronic energy is included only if the temperature is high enough to ionize or dissociate the fluid particles or to include other electronic transitions. These quantum energy states of the fluid particles are found using their respective quantum Hamiltonians, Hf,t = −(ħ2/2m)∇2, Hf,v = −(ħ2/2m)∇2 + Γx2/2 and Hf,r = −(ħ2/2If)∇2 for the translational, vibrational and rotational modes (Γ: spring constant, If: the moment of inertia of the molecule). From the Hamiltonian, the quantized fluid particle energy states Ef and partition functions Zf [with the Maxwell–Boltzmann (MB) occupancy distribution] are found as
translational
{\displaystyle E_{f,t,n}={\frac {\pi ^{2}\hbar ^{2}}{2m}}\left({\frac {n_{x}^{2}}{L^{2}}}+{\frac {n_{y}^{2}}{L^{2}}}+{\frac {n_{z}^{2}}{L^{2}}}\right)\ \ \ {\text{and}}\ \ \ Z_{f,t}=\sum _{i=0}^{\infty }g_{f,t,i}\exp \left(-{\frac {E_{f,t,i}}{k_{\mathrm {B} }T}}\right)=V\left({\frac {mk_{\mathrm {B} }T}{2\pi \hbar ^{2}}}\right)^{3/2},}
vibrational
{\displaystyle E_{f,v,l}=\hbar \omega _{f,v}\left(l+{\frac {1}{2}}\right)\ \ {\text{and}}\ \ Z_{f,v}=\sum _{l=0}^{\infty }\exp \left[-\left(l+{\frac {1}{2}}\right){\frac {\hbar \omega _{f,v}}{k_{\mathrm {B} }T}}\right]={\frac {\exp(-T_{f,v}/2T)}{1-\exp(-T_{f,v}/T)}},}
rotational
{\displaystyle E_{f,r,j}={\frac {\hbar ^{2}j(j+1)}{2I_{f}}}\ \ {\text{and}}\ \ Z_{f,r}=\sum _{j=0}^{\infty }(2j+1)\exp \left[-{\frac {\hbar ^{2}j(j+1)}{2I_{f}k_{\mathrm {B} }T}}\right]\approx {\frac {T}{T_{f,r}}}\left(1+{\frac {T_{f,r}}{3T}}+{\frac {T_{f,r}^{2}}{15T^{2}}}+\cdots \right),}
total
{\displaystyle E_{f}=\sum _{i}E_{f,i}=E_{f,t}+E_{f,v}+E_{f,r}+\dots \ \ {\text{and}}\ \ Z_{f}=\prod _{i}Z_{f,i}=Z_{f,t}Z_{f,v}Z_{f,r}\dots .}
Here, gf is the degeneracy, n, l, and j are the translational, vibrational and rotational quantum numbers, Tf,v is the characteristic temperature for vibration (= ħωf,v/kB, ωf,v: vibration frequency), and Tf,r is the rotational temperature [= ħ2/(2IfkB)]. The average specific internal energy is related to the partition function through Zf,
{\displaystyle e_{f}=(k_{\mathrm {B} }T^{2}/m)(\partial \mathrm {ln} Z_{f}/\partial T)|_{N,V}.}
With the energy states and the partition function, the fluid particle specific heat capacity cv,f is the summation of contribution from various kinetic energies (for non-ideal gas the potential energy is also added). Because the total degrees of freedom in molecules is determined by the atomic configuration, cv,f has different formulas depending on the configuration,
monatomic ideal gas
{\displaystyle c_{v,f}=\left.{\frac {\partial e_{f}}{\partial T}}\right|_{V}={\frac {3R_{g}}{2M}},}
diatomic ideal gas
{\displaystyle c_{v,f}={\frac {R_{g}}{M}}\left\{{\frac {3}{2}}+\left({\frac {T_{f,v}}{T}}\right)^{2}{\frac {\exp(T_{f,v}/T)}{[\exp(T_{f,v}/T)-1]^{2}}}+1+{\frac {2}{15}}\left({\frac {T_{f,v}}{T}}\right)^{2}\right\},}
nonlinear, polyatomic ideal gas
{\displaystyle c_{v,f}={\frac {R_{g}}{M}}\left\{3+\sum _{i=1}^{3N_{o}-6}\left({\frac {T_{f,v,i}}{T}}\right)^{2}{\frac {\exp(T_{f,v,i}/T)}{[\exp(T_{f,v,i}/T)-1]^{2}}}\right\}.}
where Rg is the gas constant (= NAkB, NA: the Avogadro constant) and M is the molecular mass (kg/kmol). (For the polyatomic ideal gas, No is the number of atoms in a molecule.) In a gas, the constant-pressure specific heat capacity cp,f has a larger value, and the difference depends on the temperature T, the volumetric thermal expansion coefficient β and the isothermal compressibility κ [cp,f – cv,f = Tβ2/(ρfκ), ρf: the fluid density]. For dense fluids, where the interactions between the particles (the van der Waals interaction) should be included, cv,f and cp,f change accordingly.
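As a check of the diatomic expression, the sketch below evaluates the molar cv,f with translational, rotational (high-temperature limit), and Einstein vibrational terms; the characteristic vibrational temperature is the value commonly quoted for N2 (about 3395 K), used here as an assumed input.

```python
import numpy as np

Rg = 8.314462618  # gas constant (J/mol-K)

def cv_diatomic(T, T_v):
    """Molar c_v of a diatomic ideal gas: translation (3/2 R) + rotation (R, high-T limit)
    + Einstein vibrational term, with x = T_v/T = hbar*omega_v/(k_B*T)."""
    x = T_v / T
    vib = x**2 * np.exp(x) / (np.exp(x) - 1.0)**2
    return Rg * (1.5 + 1.0 + vib)

T_v_N2 = 3395.0  # characteristic vibrational temperature of N2 (K)
for T in (300.0, 1000.0, 3000.0):
    print(f"T = {T:6.0f} K: c_v = {cv_diatomic(T, T_v_N2)/Rg:.3f} R")
# 300 K: ~2.5 R (vibration frozen out); 3000 K: approaching the 3.5 R limit
```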
The net motion of particles (under gravity or external pressure) gives rise to the convection heat flux qu = ρfcp,fufT. Conduction heat flux qk for ideal gas is derived with the gas kinetic theory or the Boltzmann transport equations, and the thermal conductivity is
{\displaystyle k_{f}={\tfrac {1}{3}}n_{f}c_{p,f}\langle u_{f}^{2}\rangle \tau _{f{\mbox{-}}f},}
where ⟨uf2⟩1/2 is the RMS (root mean square) thermal speed [(3kBT/m)1/2 from the MB distribution function, m: atomic mass] and τf-f is the relaxation time (or intercollision time period) [(21/2π d2nf ⟨uf⟩)−1 from the gas kinetic theory, ⟨uf⟩: average thermal speed (8kBT/πm)1/2, d: the collision diameter of the fluid particle (atom or molecule), nf: fluid number density].
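A minimal kinetic-theory estimate of kf using the expressions above, with an assumed argon-like atomic mass and collision diameter; elementary kinetic theory of this kind yields the correct order of magnitude (the measured value for argon near room temperature is about 0.018 W/m-K).

```python
import numpy as np

kB = 1.380649e-23

# Dilute-gas conductivity from kinetic theory: k_f = (1/3) n_f c <u^2> tau
T, P = 300.0, 101325.0            # temperature (K), pressure (Pa)
m, d = 6.63e-26, 3.4e-10          # argon-like atomic mass (kg) and collision diameter (m), assumed
n_f = P / (kB * T)                # number density (ideal gas)
u2 = 3.0 * kB * T / m             # mean-square thermal speed (MB distribution)
u_avg = np.sqrt(8.0 * kB * T / (np.pi * m))
tau = 1.0 / (np.sqrt(2.0) * np.pi * d**2 * n_f * u_avg)  # intercollision time
c = 2.5 * kB                      # c_p per particle, monatomic ideal gas
k_f = n_f * c * u2 * tau / 3.0
print(f"tau = {tau:.2e} s, k_f = {k_f:.4f} W/m-K")  # measured for Ar: ~0.018 W/m-K
```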
kf is also calculated using molecular dynamics (MD), which simulates physical movements of the fluid particles with the Newton equations of motion (classical) and force field (from ab initio or empirical properties). For calculation of kf, the equilibrium MD with Green–Kubo relations, which express the transport coefficients in terms of integrals of time correlation functions (considering fluctuation), or nonequilibrium MD (prescribing heat flux or temperature difference in simulated system) are generally employed.
Fluid particles can interact with the other principal carriers. Vibrational or rotational modes, which have relatively high energy, are excited or decay through the interaction with photons. Gas lasers employ the interaction kinetics between fluid particles and photons, and laser cooling has also been considered in the CO2 gas laser. Also, fluid particles can be adsorbed on solid surfaces (physisorption and chemisorption), and the frustrated vibrational modes in the adsorbates (fluid particles) decay by creating e−-h+ pairs or phonons. These interaction rates are also calculated through ab initio calculations on the fluid particle and the Fermi golden rule.
== Photon ==
The photon is the quantum of electromagnetic (EM) radiation and the energy carrier for radiation heat transfer. The EM wave is governed by the classical Maxwell equations, and the quantization of the EM wave is used for phenomena such as blackbody radiation (in particular, to explain the ultraviolet catastrophe). The energy of the quantized EM wave (photon) of angular frequency ωph is Eph = ħωph, and the photon occupancy follows the Bose–Einstein distribution function (fph). The photon Hamiltonian for the quantized radiation field (second quantization) is
{\displaystyle \mathrm {H} _{ph}={\frac {1}{2}}\int \left(\varepsilon _{\mathrm {o} }\mathbf {e} _{e}^{2}+\mu _{\mathrm {o} }^{-1}\mathbf {b} _{e}^{2}\right)dV=\sum _{\alpha }\hbar \omega _{ph,\alpha }\left(c_{\alpha }^{\dagger }c_{\alpha }+{\frac {1}{2}}\right),}
where ee and be are the electric and magnetic fields of the EM radiation, εo and μo are the free-space permittivity and permeability, V is the interaction volume, ωph,α is the photon angular frequency for the α mode and cα† and cα are the photon creation and annihilation operators. The vector potential ae of EM fields (ee = −∂ae/∂t and be = ∇×ae) is
{\displaystyle \mathbf {a} _{e}(\mathbf {x} ,t)=\sum _{\alpha }\left({\frac {\hbar }{2\varepsilon _{\mathrm {o} }\omega _{ph,\alpha }V}}\right)^{1/2}\mathbf {s} _{ph,\alpha }\left(c_{\alpha }e^{i{\boldsymbol {\kappa }}_{\alpha }\cdot \mathbf {x} }+c_{\alpha }^{\dagger }e^{-i{\boldsymbol {\kappa }}_{\alpha }\cdot \mathbf {x} }\right),}
where sph,α is the unit polarization vector, κα is the wave vector.
Blackbody radiation among various types of photon emission employs the photon gas model with thermalized energy distribution without interphoton interaction. From the linear dispersion relation (i.e., dispersionless), phase and group speeds are equal (uph = d ωph/dκ = ωph/κ, uph: photon speed) and the Debye (used for dispersionless photon) density of states is Dph,b,ωdω = ωph2dωph/π2uph3. With Dph,b,ω and equilibrium distribution fph, photon energy spectral distribution dIb,ω or dIb,λ (λph: wavelength) and total emissive power Eb are derived as
{\displaystyle dI_{b,\omega }={\frac {D_{ph,b,\omega }f_{ph}u_{ph}d\omega _{ph}}{4\pi }}={\frac {\hbar \omega _{ph}^{3}}{4\pi ^{3}u_{ph}^{2}}}{\frac {1}{e^{\hbar \omega _{ph}/k_{\mathrm {B} }T}-1}}d\omega _{ph}\ {\text{or}}\ dI_{b,\lambda }={\frac {4\pi \hbar u_{ph}^{2}d\lambda _{ph}}{\lambda _{ph}^{5}(e^{2\pi \hbar u_{ph}/\lambda _{ph}k_{\mathrm {B} }T}-1)}}}
(Planck law),
{\displaystyle E_{b}=\int _{0}^{\infty }dE_{b,\lambda }=\sigma _{\mathrm {SB} }T^{4}\ {\text{, where}}\ \sigma _{\mathrm {SB} }={\frac {\pi ^{2}k_{\mathrm {B} }^{4}}{60\hbar ^{3}u_{ph}^{2}}}}
(Stefan–Boltzmann law).
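The sketch below integrates the Planck spectral distribution numerically (after substituting x = ħωph/kBT) and compares the result with σSBT⁴, verifying the Stefan–Boltzmann law; uph is taken as the speed of light in vacuum.

```python
import numpy as np
from scipy.integrate import quad

hbar, kB, c0 = 1.054571817e-34, 1.380649e-23, 2.99792458e8

# E_b = int_0^inf pi*dI_b,omega; with x = hbar*omega/(k_B T) this becomes
# E_b = (k_B T)^4 / (4 pi^2 c0^2 hbar^3) * int_0^inf x^3/(e^x - 1) dx, where the
# integral equals pi^4/15.
T = 1000.0
I, _ = quad(lambda x: x**3 / (np.exp(x) - 1.0), 1e-8, 60.0)
Eb = (kB * T)**4 / (4.0 * np.pi**2 * c0**2 * hbar**3) * I
sigma_SB = np.pi**2 * kB**4 / (60.0 * hbar**3 * c0**2)
print(f"integrated E_b = {Eb:.5e} W/m2, sigma_SB*T^4 = {sigma_SB*T**4:.5e} W/m2")
```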
Compared to blackbody radiation, laser emission has high directionality (small solid angle ΔΩ) and spectral purity (narrow bands Δω). Lasers range from the far-infrared to the X-ray/γ-ray regimes, based on the resonant transition (stimulated emission) between electronic energy states.
Near-field radiation from thermally excited dipoles and other electric/magnetic transitions is very effective within a short distance (order of wavelength) from emission sites.
The BTE for the photon particle momentum pph = ħωphs/uph along direction s, experiencing absorption/emission {\displaystyle \textstyle {\dot {s}}_{f,ph-e}\ } (= uphσph,ω[fph(ωph,T) - fph(s)], σph,ω: spectral absorption coefficient) and generation/removal {\displaystyle \textstyle {\dot {s}}_{f,ph,i}}, is
{\displaystyle {\frac {\partial f_{ph}}{\partial t}}+u_{ph}\mathbf {s} \cdot \nabla f_{ph}=\left.{\frac {\partial f_{ph}}{\partial t}}\right|_{s}+u_{ph}\sigma _{ph,\omega }[f_{ph}(\omega _{ph},T)-f_{ph}(\mathbf {s} )]+{\dot {s}}_{f,ph,i}.}
In terms of radiation intensity (Iph,ω = uphfphħωphDph,ω/4π, Dph,ω: photon density of states), this is called the equation of radiative transfer (ERT)
{\displaystyle {\frac {\partial I_{ph,\omega }(\omega _{ph},\mathbf {s} )}{u_{ph}\partial t}}+\mathbf {s} \cdot \nabla I_{ph,\omega }(\omega _{ph},\mathbf {s} )=\left.{\frac {\partial I_{ph,\omega }(\omega _{ph},\mathbf {s} )}{u_{ph}\partial t}}\right|_{s}+\sigma _{ph,\omega }[I_{ph,\omega }(\omega _{ph},T)-I_{ph}(\omega _{ph},\mathbf {s} )]+{\dot {s}}_{ph,i}.}
The net radiative heat flux vector is
{\textstyle \mathbf {q} _{r}=\mathbf {q} _{ph}=\int _{0}^{\infty }\int _{4\pi }\mathbf {s} I_{ph,\omega }d\Omega d\omega .}
From the Einstein population rate equation, spectral absorption coefficient σph,ω in ERT is,
{\displaystyle \sigma _{ph,\omega }={\frac {\hbar \omega {\dot {\gamma }}_{ph,a}n_{e}}{u_{ph}}},}
where {\displaystyle {\dot {\gamma }}_{ph,a}} is the interaction probability (absorption) rate, or the Einstein coefficient B12 (J−1 m3 s−1), which gives the probability per unit time per unit spectral energy density of the radiation field (1: ground state, 2: excited state), and ne is the electron density (in the ground state). This can be obtained using the transition dipole moment μe with the FGR and the relationship between the Einstein coefficients. Averaging σph,ω over ω gives the average photon absorption coefficient σph.
For the case of optically thick medium of length L, i.e., σphL >> 1, and using the gas kinetic theory, the photon conductivity kph is 16σSBT3/3σph (σSB: Stefan–Boltzmann constant, σph: average photon absorption), and photon heat capacity nphcv,ph is 16σSBT3/uph.
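A minimal evaluation of the optically thick (Rosseland-type) photon conductivity above, with hypothetical medium properties; the printed optical thickness confirms the σphL >> 1 requirement.

```python
sigma_SB = 5.670374419e-8  # Stefan-Boltzmann constant (W/m2-K4)

def photon_conductivity(T, sigma_ph):
    """Optically thick limit: k_ph = 16 sigma_SB T^3 / (3 sigma_ph)."""
    return 16.0 * sigma_SB * T**3 / (3.0 * sigma_ph)

T, sigma_ph, L = 1500.0, 100.0, 0.5   # hypothetical hot, semitransparent medium (sigma_ph in 1/m)
tau_opt = sigma_ph * L                # optical thickness; the model requires tau_opt >> 1
print(f"optical thickness = {tau_opt:.0f}, k_ph = {photon_conductivity(T, sigma_ph):.1f} W/m-K")
```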
Photons have the largest range of energy and are central to a variety of energy conversions. Photons interact with electric and magnetic entities: for example, electric dipoles, which in turn are excited by optical phonons or fluid particle vibration, or the transition dipole moments of electronic transitions. In heat transfer physics, the interaction kinetics of photons are treated using the perturbation theory (the Fermi golden rule) and the interaction Hamiltonian. The photon-electron interaction is
{\displaystyle \mathrm {H} _{ph-e}=-{\frac {e_{c}}{m_{e}}}\left(a+a^{\dagger }\right)\mathbf {a} _{e}\cdot \mathbf {p} _{e}=-\left({\frac {\hbar \omega _{ph,\alpha }}{2\varepsilon _{o}V}}\right)^{1/2}(\mathbf {s} _{ph,\alpha }\cdot e_{c}\mathbf {x} _{e})\left(a+a^{\dagger }\right)\left(ce^{i\mathrm {\kappa } \cdot \mathrm {x} }+c^{\dagger }e^{-i\mathrm {\kappa } \cdot \mathrm {x} }\right),}
where pe is the dipole moment vector and a† and a are the creation and annihilation operators of the internal motion of the electron. Photons also participate in ternary interactions, e.g., phonon-assisted photon absorption/emission (transition of the electron energy level). The vibrational modes in fluid particles can decay or become excited by emitting or absorbing photons. Examples are solid and molecular-gas laser cooling.
Using ab initio calculations based on the first principles along with EM theory, various radiative properties such as dielectric function (electrical permittivity, εe,ω), spectral absorption coefficient (σph,ω), and the complex refraction index (mω), are calculated for various interactions between photons and electric/magnetic entities in matter. For example, the imaginary part (εe,c,ω) of complex dielectric function (εe,ω = εe,r,ω + i εe,c,ω) for electronic transition across a bandgap is
{\displaystyle \varepsilon _{e,c,\omega }={\frac {4\pi ^{2}}{\omega ^{2}V}}\sum _{i\in \mathrm {VB} ,j\in \mathrm {CB} }\sum _{\kappa }w_{\kappa }|p_{ij}|^{2}\delta (E_{\kappa ,j}-E_{\kappa ,i}-\hbar \omega ),}
where V is the unit-cell volume, VB and CB denote the valence and conduction bands, wκ is the weight associated with a κ-point, and pij is the transition momentum matrix element.
The real part εe,r,ω is obtained from εe,c,ω using the Kramers–Kronig relation
{\displaystyle \varepsilon _{e,r,\omega }=1+{\frac {4}{\pi }}\mathbb {P} \int _{0}^{\infty }\mathrm {d} \omega '{\frac {\omega '\varepsilon _{e,c,\omega '}}{\omega '^{2}-\omega ^{2}}}.}
Here, {\displaystyle \mathbb {P} } denotes the principal value of the integral.
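As a numerical illustration, the sketch below applies the Kramers–Kronig transform to the absorptive part of a Lorentz-oscillator dielectric function and compares the result with the analytic real part. It uses the common 2/π normalization of the transform (prefactor conventions vary with how the dielectric function is defined), and the principal value is approximated by symmetrically excluding grid points at the pole; all oscillator parameters are hypothetical.

```python
import numpy as np

# Lorentz oscillator: eps(w) = 1 + wp^2 / (w0^2 - w^2 - i*gam*w), hypothetical parameters
w0, wp, gam = 1.0, 0.8, 0.1

def eps_c(w):   # imaginary (absorptive) part
    return wp**2 * gam * w / ((w0**2 - w**2)**2 + (gam * w)**2)

def eps_r_analytic(w):
    return 1.0 + wp**2 * (w0**2 - w**2) / ((w0**2 - w**2)**2 + (gam * w)**2)

def eps_r_kk(w, n=200001, wmax=50.0):
    """Kramers-Kronig transform with 2/pi normalization; crude principal value
    via a symmetric exclusion of the singular grid points."""
    wg = np.linspace(1e-6, wmax, n)
    integrand = wg * eps_c(wg) / (wg**2 - w**2)
    integrand[np.abs(wg - w) < (wmax / n)] = 0.0
    return 1.0 + (2.0 / np.pi) * np.trapz(integrand, wg)

for w in (0.5, 2.0):
    print(f"w = {w}: KK = {eps_r_kk(w):.4f}, analytic = {eps_r_analytic(w):.4f}")
```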
In another example, for the far-IR regions where the optical phonons are involved, the dielectric function (εe,ω) is calculated as
{\displaystyle {\frac {\varepsilon _{e,\omega }}{\varepsilon _{e,\infty }}}=1+\sum _{j}{\frac {\omega _{\mathrm {LO} ,j}^{2}-\omega _{\mathrm {TO} ,j}^{2}}{\omega _{\mathrm {TO} ,j}^{2}-\omega ^{2}-i\gamma \omega }},}
where LO and TO denote the longitudinal and transverse optical phonon modes, j sums over all the IR-active modes, and γ is the temperature-dependent damping term in the oscillator model. εe,∞ is the high-frequency dielectric permittivity, which can be calculated with a DFT calculation when the ions are treated as an external potential.
From these dielectric function (εe,ω) calculations (e.g., ABINIT, VASP, etc.), the complex refractive index mω (= nω + iκω, nω: refraction index, κω: extinction index) is found, i.e., mω2 = εe,ω = εe,r,ω + iεe,c,ω. The surface reflectance R of an ideal surface at normal incidence from vacuum or air is given as R = [(nω - 1)2 + κω2]/[(nω + 1)2 + κω2]. The spectral absorption coefficient is then found from σph,ω = 2ωκω/uph.
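A short sketch of the normal-incidence reflectance and spectral absorption coefficient defined above; the optical constants and photon frequency are hypothetical, semiconductor-like values, and uph is taken as the vacuum speed of light.

```python
def surface_reflectance(n, kappa):
    """Normal-incidence reflectance from vacuum: R = [(n-1)^2 + kappa^2] / [(n+1)^2 + kappa^2]."""
    return ((n - 1.0)**2 + kappa**2) / ((n + 1.0)**2 + kappa**2)

def absorption_coefficient(omega, kappa, u_ph=2.99792458e8):
    """Spectral absorption coefficient sigma_ph,omega = 2 omega kappa / u_ph (1/m)."""
    return 2.0 * omega * kappa / u_ph

# Hypothetical semiconductor near a photon energy of 2 eV (omega ~ 3.04e15 rad/s)
n, kappa, omega = 4.0, 0.05, 3.04e15
print(f"R = {surface_reflectance(n, kappa):.3f}, "
      f"sigma_ph = {absorption_coefficient(omega, kappa):.3e} 1/m")
```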
== See also ==
Energy transfer
Mass transfer
Energy transformation (Energy conversion)
Thermal physics
Thermal science
Thermal engineering
== References == | Wikipedia/Heat_transfer_physics |
In hydrodynamics, a plume or a column is a vertical body of one fluid moving through another. Several effects control the motion of the fluid, including momentum (inertia), diffusion and buoyancy (density differences). Pure jets and pure plumes define flows that are driven entirely by momentum and buoyancy effects, respectively. Flows between these two limits are usually described as forced plumes or buoyant jets. Buoyancy is defined as being positive when, in the absence of other forces or initial motion, the entering fluid would tend to rise. Situations where the density of the plume fluid is greater than its surroundings (i.e. in still conditions, its natural tendency would be to sink), but the flow has sufficient initial momentum to carry it some distance vertically, are described as being negatively buoyant.
== Movement ==
Usually, as a plume moves away from its source, it widens because of entrainment of the surrounding fluid at its edges. Plume shapes can be influenced by flow in the ambient fluid (for example, if local wind blowing in the same direction as the plume results in a co-flowing jet). This usually causes a plume which has initially been 'buoyancy-dominated' to become 'momentum-dominated' (this transition is usually predicted by a dimensionless number called the Richardson number).
=== Flow and detection ===
A further phenomenon of importance is whether a plume has laminar flow or turbulent flow. Usually, there is a transition from laminar to turbulent as the plume moves away from its source. This phenomenon can be clearly seen in the rising column of smoke from a cigarette. When high accuracy is required, computational fluid dynamics (CFD) can be employed to simulate plumes, but the results can be sensitive to the turbulence model chosen. CFD is often undertaken for rocket plumes, where condensed phase constituents can be present in addition to gaseous constituents. These types of simulations can become quite complex, including afterburning and thermal radiation, and (for example) ballistic missile launches are often detected by sensing hot rocket plumes.
Spacecraft designers are sometimes concerned with impingement of attitude control system thruster plumes onto sensitive subsystems like solar arrays and star trackers, or with the impingement of rocket engine plumes onto moon or planetary surfaces where they can cause local damage or even mid-term disturbances to planetary atmospheres.
Another phenomenon which can also be seen clearly in the flow of smoke from a cigarette is that the leading-edge of the flow, or the starting-plume, is quite often approximately in the shape of a ring-vortex (smoke ring).
== Types ==
Pollutants released to the ground can work their way down into the groundwater, leading to groundwater pollution. The resulting body of polluted water within an aquifer is called a plume, with its migrating edges called plume fronts. Plumes are used to locate, map, and measure water pollution within the aquifer's total body of water, and plume fronts to determine directions and speed of the contamination's spreading in it.
Plumes are of considerable importance in the atmospheric dispersion modelling of air pollution. A classic work on the subject of air pollution plumes is that by Gary Briggs.
A thermal plume is one which is generated by gas rising above a heat source. The gas rises because thermal expansion makes warm gas less dense than the surrounding cooler gas.
== Simple plume modeling ==
Simple modelling will enable many properties of fully developed, turbulent plumes to be investigated. Many of the classic scaling arguments were developed in a combined analytic and laboratory study described in an influential paper by Bruce Morton, G.I. Taylor and Stewart Turner and this and subsequent work is described in the popular monograph of Stewart Turner.
It is usually sufficient to assume that the pressure gradient is set by the gradient far from the plume (this approximation is similar to the usual Boussinesq approximation).
The distribution of density and velocity across the plume are modelled either with simple Gaussian distributions or else are taken as uniform across the plume (the so-called 'top hat' model).
The rate of entrainment into the plume is proportional to the local velocity. Though initially thought to be a constant, recent work has shown that the entrainment coefficient varies with the local Richardson number. Typical values for the entrainment coefficient are of about 0.08 for vertical jets and 0.12 for vertical, buoyant plumes while for bent-over plumes, the entrainment coefficient is about 0.6.
Conservation equations for mass (including entrainment), and momentum and buoyancy fluxes are sufficient for a complete description of the flow in many cases. For a simple rising plume these equations predict that the plume will widen at a constant half-angle of about 6 to 15 degrees.
The value of the entrainment coefficient is the key parameter in simple plume models. Research continues into assessing how the entrainment coefficient is affected by, for example, the geometry of a plume, suspended particles within a plume, and background rotation.
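As a sketch of the simple (top-hat) plume model discussed above, the code below integrates the Morton–Taylor–Turner conservation equations for the volume, momentum, and buoyancy fluxes with an assumed entrainment coefficient and hypothetical source conditions. The far-field radius growth b/z → 6α/5 ≈ 0.14 corresponds to a half-angle of about 8°, inside the 6-15° range quoted above.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = 0.12   # assumed entrainment coefficient for a vertical buoyant plume (top-hat)

def mtt(z, y):
    """Morton-Taylor-Turner top-hat fluxes: Q = b^2 w, M = b^2 w^2, F = b^2 w g'."""
    Q, M, F = y
    return [2.0 * alpha * np.sqrt(M),   # entrainment: dQ/dz = 2 alpha M^(1/2)
            F * Q / M,                  # buoyancy accelerates the flow: dM/dz = F Q / M
            0.0]                        # F conserved in an unstratified ambient

y0 = [1e-4, 1e-4, 1e-3]                 # small, hypothetical source fluxes (nearly a pure plume)
sol = solve_ivp(mtt, (0.01, 10.0), y0, dense_output=True, rtol=1e-8)
z = np.array([1.0, 5.0, 10.0])
Q, M, F = sol.sol(z)
b = Q / np.sqrt(M)                      # plume radius
print("b/z:", np.round(b / z, 3))       # approaches 6*alpha/5 = 0.144 in the far field
```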
== Gaussian plume modelling ==
Gaussian plume models can be used in several fluid dynamics scenarios to calculate the concentration distribution of solutes, such as a smoke stack release or a contaminant released in a river. Concentration distributions established by Fickian diffusion follow a Gaussian (bell-shaped) profile. To calculate the expected concentration of a one-dimensional instantaneous point source, we consider a mass
{\displaystyle M} released at an instantaneous point in time, in a one-dimensional domain along {\displaystyle x}. This gives the following equation:
{\displaystyle C(x,t)={\frac {M}{\sqrt {4\pi D(t-t_{0})}}}\exp \left(-{\frac {(x-x_{0})^{2}}{4D(t-t_{0})}}\right)}
where {\displaystyle M} is the mass released at time {\displaystyle t=t_{0}} and location {\displaystyle x=x_{0}}, and {\displaystyle D} is the diffusivity {\displaystyle \left[{\frac {{\text{m}}^{2}}{\text{s}}}\right]}. This equation makes the following four assumptions:
The mass {\displaystyle M} is released instantaneously.
The mass {\displaystyle M} is released in an infinite domain.
The mass spreads only through diffusion.
Diffusion does not vary in space.
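A minimal evaluation of the one-dimensional instantaneous point-source solution above, with hypothetical mass, diffusivity, and release location; the peak decays as t^(−1/2) while the width grows as the square root of Dt, as expected for Fickian spreading.

```python
import numpy as np

def concentration(x, t, M=1.0, D=1e-3, x0=0.0, t0=0.0):
    """1-D instantaneous point source:
    C = M / sqrt(4 pi D (t-t0)) * exp(-(x-x0)^2 / (4 D (t-t0)))."""
    dt = t - t0
    return M / np.sqrt(4.0 * np.pi * D * dt) * np.exp(-(x - x0)**2 / (4.0 * D * dt))

x = np.linspace(-1.0, 1.0, 5)
for t in (10.0, 100.0):
    print(f"t = {t:5.1f} s:", np.round(concentration(x, t), 4))
```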
== Gallery ==
== See also ==
== References == | Wikipedia/Plume_(fluid_dynamics) |
Geomechanics (from the Greek γεός, i.e. prefix geo- meaning "earth"; and "mechanics") is the study of the mechanical state of the Earth's crust and the processes occurring in it under the influence of natural physical factors. It involves the study of the mechanics of soil and rock.
== Background ==
The two main disciplines of geomechanics are soil mechanics and rock mechanics. The former deals with soil behaviour from the small scale to the landslide scale. The latter deals with issues in geosciences related to rock mass characterization and rock mass mechanics, as applied to petroleum, mining and civil engineering problems such as borehole stability, tunnel design, rock breakage, slope stability, foundations, and rock drilling.
Many aspects of geomechanics overlap with parts of geotechnical engineering, engineering geology, and geological engineering. Modern developments relate to seismology, continuum mechanics, discontinuum mechanics, transport phenomena, numerical methods etc.
== Reservoir Geomechanics ==
In the petroleum industry geomechanics is used to:
predict pore pressure
establish the integrity of the cap rock
evaluate reservoir properties
determine in-situ rock stress
evaluate the wellbore stability
calculate the optimal trajectory of the borehole
predict and control sand occurrence in the well
analyze the validity of drilling on depression
characterize fractured reservoirs
increase the efficiency of the development of fractured reservoirs
evaluate hydraulic fractures stability
study the reactivation of natural fractures and structural faults
evaluate the effect of liquid and steam injection into the reservoir
analyze surface subsidence
determine the degree of the reservoir compaction
quantify production loss due to the reservoir rock deformation
evaluate shear deformation and casing collapse
To put into practice the geomechanics capabilities mentioned above, it is necessary to create a Geomechanical Model of the Earth (GEM) which consists of six key components that can be both calculated and estimated using field data:
Vertical stress, σv (often called geostatic pressure or overburden stress)
Maximum horizontal stress, σHmax
Minimum horizontal stress, σHmin
Stress orientation
Pore pressure, Pp
Elastic properties and rock strength: Young's modulus, Poisson's ratio, friction angle, UCS (unconfined compressive strength) and TSTR (tensile strength)
Geotechnical engineers rely on various techniques to obtain reliable data for geomechanical models. These techniques include coring and core testing, seismic data and log analysis, well testing methods such as transient pressure analysis and hydraulic fracturing stress testing, and geophysical methods such as acoustic emission.
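As an illustration of the first and fifth components of such a model, the sketch below integrates an assumed layered density profile to estimate the vertical (overburden) stress and compares it with a hydrostatic pore pressure; all layer properties are hypothetical.

```python
g = 9.81  # gravitational acceleration (m/s^2)

# Layered overburden: sigma_v(z) = integral of rho(z') g dz'
layers = [  # (thickness in m, bulk density in kg/m3) -- hypothetical well section
    (500.0, 2000.0),   # unconsolidated sediments
    (1000.0, 2300.0),  # shale
    (500.0, 2500.0),   # sandstone
]
rho_w = 1030.0  # brine density (kg/m3); hydrostatic pore pressure assumed

z, sigma_v = 0.0, 0.0
for h, rho in layers:
    z += h
    sigma_v += rho * g * h
Pp = rho_w * g * z
print(f"depth = {z:.0f} m, sigma_v = {sigma_v/1e6:.1f} MPa, "
      f"hydrostatic Pp = {Pp/1e6:.1f} MPa, effective = {(sigma_v - Pp)/1e6:.1f} MPa")
```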
== See also ==
Earthquake engineering
Geotechnics
Rock mechanics
== References ==
== Additional sources ==
Jaeger, Cook, and Zimmerman (2008). Fundamentals of Rock Mechanics. Blackwell Publishing. ISBN 9780632057597.
Chandramouli, P.N. (2014). Continuum Mechanics. Yes Dee Publishing Pvt Ltd. ISBN 9789380381398. Archived from the original on 2018-08-04. Retrieved 2014-04-03. | Wikipedia/Geomechanics |
In continuum mechanics, plate theories are mathematical descriptions of the mechanics of flat plates that draw on the theory of beams. Plates are defined as plane structural elements with a small thickness compared to the planar dimensions. The typical thickness to width ratio of a plate structure is less than 0.1. A plate theory takes advantage of this disparity in length scale to reduce the full three-dimensional solid mechanics problem to a two-dimensional problem. The aim of plate theory is to calculate the deformation and stresses in a plate subjected to loads.
Of the numerous plate theories that have been developed since the late 19th century, two are widely accepted and used in engineering:
the Kirchhoff–Love theory of plates (classical plate theory)
the Reissner–Mindlin theory of plates (first-order shear plate theory)
== Kirchhoff–Love theory for thin plates ==
The Kirchhoff–Love theory is an extension of Euler–Bernoulli beam theory to thin plates. The theory was developed in 1888 by Love using assumptions proposed by Kirchhoff. It is assumed that a mid-surface plane can be used to represent the three-dimensional plate in two-dimensional form.
The following kinematic assumptions are made in this theory:
straight lines normal to the mid-surface remain straight after deformation
straight lines normal to the mid-surface remain normal to the mid-surface after deformation
the thickness of the plate does not change during a deformation.
=== Displacement field ===
The Kirchhoff hypothesis implies that the displacement field has the form
{\displaystyle u_{\alpha }(\mathbf {x} )=u_{\alpha }^{0}(x_{1},x_{2})-x_{3}~w_{,\alpha }^{0}~;~~\alpha =1,2~;~~u_{3}(\mathbf {x} )=w^{0}(x_{1},x_{2})}
where {\displaystyle x_{1}} and {\displaystyle x_{2}} are the Cartesian coordinates on the mid-surface of the undeformed plate, {\displaystyle x_{3}} is the coordinate for the thickness direction, {\displaystyle u_{1}^{0},u_{2}^{0}} are the in-plane displacements of the mid-surface, and {\displaystyle w^{0}} is the displacement of the mid-surface in the {\displaystyle x_{3}} direction.
If {\displaystyle \varphi _{\alpha }} are the angles of rotation of the normal to the mid-surface, then in the Kirchhoff–Love theory
{\displaystyle \varphi _{\alpha }=w_{,\alpha }^{0}\,.}
=== Strain-displacement relations ===
For the situation where the strains in the plate are infinitesimal and the rotations of the mid-surface normals are less than 10°, the strain-displacement relations are
{\displaystyle {\begin{aligned}\varepsilon _{\alpha \beta }&={\tfrac {1}{2}}(u_{\alpha ,\beta }^{0}+u_{\beta ,\alpha }^{0})-x_{3}~w_{,\alpha \beta }^{0}\\\varepsilon _{\alpha 3}&=-w_{,\alpha }^{0}+w_{,\alpha }^{0}=0\\\varepsilon _{33}&=0\end{aligned}}}
Therefore, the only non-zero strains are in the in-plane directions.
If the rotations of the normals to the mid-surface are in the range of 10° to 15°, the strain-displacement relations can be approximated using the von Kármán strains. Then the kinematic assumptions of Kirchhoff-Love theory lead to the following strain-displacement relations
{\displaystyle {\begin{aligned}\varepsilon _{\alpha \beta }&={\frac {1}{2}}(u_{\alpha ,\beta }^{0}+u_{\beta ,\alpha }^{0}+w_{,\alpha }^{0}~w_{,\beta }^{0})-x_{3}~w_{,\alpha \beta }^{0}\\\varepsilon _{\alpha 3}&=-w_{,\alpha }^{0}+w_{,\alpha }^{0}=0\\\varepsilon _{33}&=0\end{aligned}}}
This theory is nonlinear because of the quadratic terms in the strain-displacement relations.
=== Equilibrium equations ===
The equilibrium equations for the plate can be derived from the principle of virtual work. For the situation where the strains and rotations of the plate are small, the equilibrium equations for an unloaded plate are given by
{\displaystyle {\begin{aligned}N_{\alpha \beta ,\alpha }&=0\\M_{\alpha \beta ,\alpha \beta }&=0\end{aligned}}}
where the stress resultants and stress moment resultants are defined as
{\displaystyle N_{\alpha \beta }:=\int _{-h}^{h}\sigma _{\alpha \beta }~dx_{3}~;~~M_{\alpha \beta }:=\int _{-h}^{h}x_{3}~\sigma _{\alpha \beta }~dx_{3}}
and the thickness of the plate is {\displaystyle 2h}. The quantities {\displaystyle \sigma _{\alpha \beta }} are the stresses.
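To make these definitions concrete, consider a stress that varies linearly through the thickness, {\displaystyle \sigma =\sigma _{b}\,x_{3}/h} (pure bending): the force resultant {\displaystyle N} then vanishes and the moment resultant is {\displaystyle M=2h^{2}\sigma _{b}/3}. The sketch below verifies this numerically (Python; the half-thickness and surface stress are assumed values for illustration).

```python
import numpy as np

h = 0.005          # half-thickness of the plate [m] (assumed)
sigma_b = 100e6    # bending stress at the surface x3 = +h [Pa] (assumed)

x3 = np.linspace(-h, h, 1001)
sigma = sigma_b * x3 / h          # linear through-thickness stress (pure bending)

dx3 = x3[1] - x3[0]
N = np.sum(sigma) * dx3           # stress resultant  [N/m]
M = np.sum(x3 * sigma) * dx3      # moment resultant  [N]

print(f"N = {N:.3e} N/m (should be ~0 for pure bending)")
print(f"M = {M:.3e} N, analytic 2*h^2*sigma_b/3 = {2 * h**2 * sigma_b / 3:.3e} N")
```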
If the plate is loaded by an external distributed load {\displaystyle q(x)} that is normal to the mid-surface and directed in the positive {\displaystyle x_{3}} direction, the principle of virtual work then leads to the equilibrium equations
{\displaystyle {\begin{aligned}N_{\alpha \beta ,\alpha }&=0\\M_{\alpha \beta ,\alpha \beta }-q&=0\end{aligned}}}
For moderate rotations, the strain-displacement relations take the von Karman form and the equilibrium equations can be expressed as
{\displaystyle {\begin{aligned}N_{\alpha \beta ,\alpha }&=0\\M_{\alpha \beta ,\alpha \beta }+[N_{\alpha \beta }~w_{,\beta }^{0}]_{,\alpha }-q&=0\end{aligned}}}
=== Boundary conditions ===
The boundary conditions that are needed to solve the equilibrium equations of plate theory can be obtained from the boundary terms in the principle of virtual work.
For small strains and small rotations, the boundary conditions are
{\displaystyle {\begin{aligned}n_{\alpha }~N_{\alpha \beta }&\quad \mathrm {or} \quad u_{\beta }^{0}\\n_{\alpha }~M_{\alpha \beta ,\beta }&\quad \mathrm {or} \quad w^{0}\\n_{\beta }~M_{\alpha \beta }&\quad \mathrm {or} \quad w_{,\alpha }^{0}\end{aligned}}}
Note that the quantity {\displaystyle n_{\alpha }~M_{\alpha \beta ,\beta }} is an effective shear force.
=== Stress–strain relations ===
The stress–strain relations for a linear elastic Kirchhoff plate are given by
{\displaystyle {\begin{bmatrix}\sigma _{11}\\\sigma _{22}\\\sigma _{12}\end{bmatrix}}={\begin{bmatrix}C_{11}&C_{12}&C_{13}\\C_{12}&C_{22}&C_{23}\\C_{13}&C_{23}&C_{33}\end{bmatrix}}{\begin{bmatrix}\varepsilon _{11}\\\varepsilon _{22}\\\varepsilon _{12}\end{bmatrix}}}
Since {\displaystyle \sigma _{\alpha 3}} and {\displaystyle \sigma _{33}} do not appear in the equilibrium equations, it is implicitly assumed that these quantities do not have any effect on the momentum balance and are neglected.
It is more convenient to work with the stress and moment resultants that enter the equilibrium equations. These are related to the displacements by
{\displaystyle {\begin{bmatrix}N_{11}\\N_{22}\\N_{12}\end{bmatrix}}=\left\{\int _{-h}^{h}{\begin{bmatrix}C_{11}&C_{12}&C_{13}\\C_{12}&C_{22}&C_{23}\\C_{13}&C_{23}&C_{33}\end{bmatrix}}~dx_{3}\right\}{\begin{bmatrix}u_{1,1}^{0}\\u_{2,2}^{0}\\{\frac {1}{2}}~(u_{1,2}^{0}+u_{2,1}^{0})\end{bmatrix}}}
and
{\displaystyle {\begin{bmatrix}M_{11}\\M_{22}\\M_{12}\end{bmatrix}}=-\left\{\int _{-h}^{h}x_{3}^{2}~{\begin{bmatrix}C_{11}&C_{12}&C_{13}\\C_{12}&C_{22}&C_{23}\\C_{13}&C_{23}&C_{33}\end{bmatrix}}~dx_{3}\right\}{\begin{bmatrix}w_{,11}^{0}\\w_{,22}^{0}\\w_{,12}^{0}\end{bmatrix}}\,.}
The extensional stiffnesses are the quantities
{\displaystyle A_{\alpha \beta }:=\int _{-h}^{h}C_{\alpha \beta }~dx_{3}}
The bending stiffnesses (also called flexural rigidity) are the quantities
{\displaystyle D_{\alpha \beta }:=\int _{-h}^{h}x_{3}^{2}~C_{\alpha \beta }~dx_{3}}
== Isotropic and homogeneous Kirchhoff plate ==
For an isotropic and homogeneous plate, the stress–strain relations are
{\displaystyle {\begin{bmatrix}\sigma _{11}\\\sigma _{22}\\\sigma _{12}\end{bmatrix}}={\cfrac {E}{1-\nu ^{2}}}{\begin{bmatrix}1&\nu &0\\\nu &1&0\\0&0&1-\nu \end{bmatrix}}{\begin{bmatrix}\varepsilon _{11}\\\varepsilon _{22}\\\varepsilon _{12}\end{bmatrix}}\,.}
The moments corresponding to these stresses are
{\displaystyle {\begin{bmatrix}M_{11}\\M_{22}\\M_{12}\end{bmatrix}}=-{\cfrac {2h^{3}E}{3(1-\nu ^{2})}}~{\begin{bmatrix}1&\nu &0\\\nu &1&0\\0&0&1-\nu \end{bmatrix}}{\begin{bmatrix}w_{,11}^{0}\\w_{,22}^{0}\\w_{,12}^{0}\end{bmatrix}}}
=== Pure bending ===
The displacements {\displaystyle u_{1}^{0}} and {\displaystyle u_{2}^{0}} are zero under pure bending conditions. For an isotropic, homogeneous plate under pure bending, the governing equation is
{\displaystyle {\frac {\partial ^{4}w}{\partial x_{1}^{4}}}+2{\frac {\partial ^{4}w}{\partial x_{1}^{2}\partial x_{2}^{2}}}+{\frac {\partial ^{4}w}{\partial x_{2}^{4}}}=0\quad {\text{where}}\quad w:=w^{0}\,.}
In index notation,
{\displaystyle w_{,1111}^{0}+2~w_{,1212}^{0}+w_{,2222}^{0}=0\,.}
In direct tensor notation, the governing equation is
{\displaystyle \nabla ^{2}\nabla ^{2}w=0\,.}
=== Transverse loading ===
For a transversely loaded plate without axial deformations, the governing equation has the form
{\displaystyle {\frac {\partial ^{4}w}{\partial x_{1}^{4}}}+2{\frac {\partial ^{4}w}{\partial x_{1}^{2}\partial x_{2}^{2}}}+{\frac {\partial ^{4}w}{\partial x_{2}^{4}}}=-{\frac {q}{D}}}
where, for a plate with thickness {\displaystyle 2h},
{\displaystyle D:={\cfrac {2h^{3}E}{3(1-\nu ^{2})}}\,.}
In index notation,
{\displaystyle w_{,1111}^{0}+2\,w_{,1212}^{0}+w_{,2222}^{0}=-{\frac {q}{D}}}
and in direct notation
{\displaystyle \nabla ^{2}\nabla ^{2}w=-{\frac {q}{D}}\,.}
In cylindrical coordinates {\displaystyle (r,\theta ,z)}, the governing equation is
{\displaystyle {\frac {1}{r}}{\cfrac {d}{dr}}\left[r{\cfrac {d}{dr}}\left\{{\frac {1}{r}}{\cfrac {d}{dr}}\left(r{\cfrac {dw}{dr}}\right)\right\}\right]=-{\frac {q}{D}}\,.}
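For a simply supported rectangular plate under a uniform load {\displaystyle q_{0}}, the transverse-loading equation has the classical Navier double-sine-series solution; for a square plate the center deflection magnitude is approximately {\displaystyle 0.00406\,q_{0}a^{4}/D}. The sketch below evaluates that series (Python; the steel properties, thickness, and load are assumed values for illustration).

```python
import numpy as np

# Navier series for a simply supported rectangular plate, uniform load q0:
#   w(x, y) = (16 q0 / (pi^6 D)) * sum over odd m, n of
#             sin(m pi x / a) sin(n pi y / b) / (m n ((m/a)^2 + (n/b)^2)^2)
E, nu = 200e9, 0.3                   # steel (assumed)
H = 0.01                             # plate thickness [m] (assumed)
a = b = 1.0                          # plate side lengths [m] (assumed)
q0 = 10e3                            # uniform load [Pa] (assumed)
D = E * H**3 / (12 * (1 - nu**2))    # bending rigidity

w = 0.0
for m in range(1, 100, 2):           # odd terms only; even terms vanish
    for n in range(1, 100, 2):
        num = np.sin(m * np.pi / 2) * np.sin(n * np.pi / 2)  # center x=a/2, y=b/2
        w += num / (m * n * ((m / a)**2 + (n / b)**2)**2)
w *= 16 * q0 / (np.pi**6 * D)

print(f"D = {D / 1e3:.1f} kN*m, center deflection = {w * 1e3:.3f} mm")
print(f"coefficient w*D/(q0*a^4) = {w * D / (q0 * a**4):.5f} (~0.00406 for a square plate)")
```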
== Orthotropic and homogeneous Kirchhoff plate ==
For an orthotropic plate
{\displaystyle {\begin{bmatrix}C_{11}&C_{12}&C_{13}\\C_{12}&C_{22}&C_{23}\\C_{13}&C_{23}&C_{33}\end{bmatrix}}={\cfrac {1}{1-\nu _{12}\nu _{21}}}{\begin{bmatrix}E_{1}&\nu _{12}E_{2}&0\\\nu _{21}E_{1}&E_{2}&0\\0&0&2G_{12}(1-\nu _{12}\nu _{21})\end{bmatrix}}\,.}
Therefore,
{\displaystyle {\begin{bmatrix}A_{11}&A_{12}&A_{13}\\A_{21}&A_{22}&A_{23}\\A_{31}&A_{32}&A_{33}\end{bmatrix}}={\cfrac {2h}{1-\nu _{12}\nu _{21}}}{\begin{bmatrix}E_{1}&\nu _{12}E_{2}&0\\\nu _{21}E_{1}&E_{2}&0\\0&0&2G_{12}(1-\nu _{12}\nu _{21})\end{bmatrix}}}
and
{\displaystyle {\begin{bmatrix}D_{11}&D_{12}&D_{13}\\D_{21}&D_{22}&D_{23}\\D_{31}&D_{32}&D_{33}\end{bmatrix}}={\cfrac {2h^{3}}{3(1-\nu _{12}\nu _{21})}}{\begin{bmatrix}E_{1}&\nu _{12}E_{2}&0\\\nu _{21}E_{1}&E_{2}&0\\0&0&2G_{12}(1-\nu _{12}\nu _{21})\end{bmatrix}}\,.}
=== Transverse loading ===
The governing equation of an orthotropic Kirchhoff plate loaded transversely by a distributed load {\displaystyle q} per unit area is
{\displaystyle D_{x}w_{,1111}^{0}+2D_{xy}w_{,1122}^{0}+D_{y}w_{,2222}^{0}=-q}
where
{\displaystyle {\begin{aligned}D_{x}&=D_{11}={\frac {2h^{3}E_{1}}{3(1-\nu _{12}\nu _{21})}}\\D_{y}&=D_{22}={\frac {2h^{3}E_{2}}{3(1-\nu _{12}\nu _{21})}}\\D_{xy}&=D_{33}+{\tfrac {1}{2}}(\nu _{21}D_{11}+\nu _{12}D_{22})=D_{33}+\nu _{21}D_{11}={\frac {4h^{3}G_{12}}{3}}+{\frac {2h^{3}\nu _{21}E_{1}}{3(1-\nu _{12}\nu _{21})}}\,.\end{aligned}}}
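The symmetry of the stiffness matrix above implies the reciprocity relation {\displaystyle \nu _{21}E_{1}=\nu _{12}E_{2}}, so all three rigidities follow from four measured constants. A minimal sketch of the computation (Python; the moduli are illustrative, composite-like assumed values):

```python
def orthotropic_rigidities(E1, E2, G12, nu12, h):
    """Bending rigidities of an orthotropic Kirchhoff plate of thickness 2h,
    following D_x = 2 h^3 E1 / (3 (1 - nu12 nu21)), etc."""
    nu21 = nu12 * E2 / E1                 # reciprocity: nu21 E1 = nu12 E2
    denom = 3.0 * (1.0 - nu12 * nu21)
    Dx = 2.0 * h**3 * E1 / denom
    Dy = 2.0 * h**3 * E2 / denom
    Dxy = 4.0 * h**3 * G12 / 3.0 + nu21 * Dx   # D33 + nu21 D11
    return Dx, Dy, Dxy

# Illustrative glass/epoxy-like moduli (assumed values)
Dx, Dy, Dxy = orthotropic_rigidities(E1=40e9, E2=10e9, G12=4e9, nu12=0.26, h=0.002)
print(f"Dx = {Dx:.1f} N*m, Dy = {Dy:.1f} N*m, Dxy = {Dxy:.1f} N*m")
```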
== Dynamics of thin Kirchhoff plates ==
The dynamic theory of plates describes the propagation of waves in plates and the behavior of standing waves and vibration modes.
=== Governing equations ===
The governing equations for the dynamics of a Kirchhoff–Love plate are
{\displaystyle {\begin{aligned}N_{\alpha \beta ,\beta }&=J_{1}~{\ddot {u}}_{\alpha }^{0}\\M_{\alpha \beta ,\alpha \beta }-q(x,t)&=J_{1}~{\ddot {w}}^{0}-J_{3}~{\ddot {w}}_{,\alpha \alpha }^{0}\end{aligned}}}
where, for a plate with density {\displaystyle \rho =\rho (x)},
{\displaystyle J_{1}:=\int _{-h}^{h}\rho ~dx_{3}=2~\rho ~h~;~~J_{3}:=\int _{-h}^{h}x_{3}^{2}~\rho ~dx_{3}={\frac {2}{3}}~\rho ~h^{3}}
and
{\displaystyle {\dot {u}}_{i}={\frac {\partial u_{i}}{\partial t}}~;~~{\ddot {u}}_{i}={\frac {\partial ^{2}u_{i}}{\partial t^{2}}}~;~~u_{i,\alpha }={\frac {\partial u_{i}}{\partial x_{\alpha }}}~;~~u_{i,\alpha \beta }={\frac {\partial ^{2}u_{i}}{\partial x_{\alpha }\partial x_{\beta }}}}
Classic illustrations of such standing waves are the vibrational modes of a circular plate.
=== Isotropic plates ===
For isotropic and homogeneous plates in which the in-plane deformations can be neglected, the governing equations simplify considerably and take the form
{\displaystyle D\,\left({\frac {\partial ^{4}w^{0}}{\partial x_{1}^{4}}}+2{\frac {\partial ^{4}w^{0}}{\partial x_{1}^{2}\partial x_{2}^{2}}}+{\frac {\partial ^{4}w^{0}}{\partial x_{2}^{4}}}\right)=-q(x_{1},x_{2},t)-2\rho h\,{\frac {\partial ^{2}w^{0}}{\partial t^{2}}}\,.}
where {\displaystyle D} is the bending stiffness of the plate. For a uniform plate of thickness {\displaystyle 2h},
{\displaystyle D:={\cfrac {2h^{3}E}{3(1-\nu ^{2})}}\,.}
In direct notation,
{\displaystyle D\,\nabla ^{2}\nabla ^{2}w^{0}=-q-2\rho h\,{\ddot {w}}^{0}\,.}
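For a simply supported rectangular plate, substituting a sine-mode ansatz into the free-vibration form of this equation ({\displaystyle q=0}) gives the natural frequencies {\displaystyle \omega _{mn}=\pi ^{2}\left({\frac {m^{2}}{a^{2}}}+{\frac {n^{2}}{b^{2}}}\right){\sqrt {\frac {D}{2\rho h}}}}. The sketch below evaluates a few modes (Python; material and geometry values are assumed for illustration).

```python
import numpy as np

# Natural frequencies of a simply supported rectangular Kirchhoff plate:
#   omega_mn = pi^2 (m^2/a^2 + n^2/b^2) sqrt(D / (2 rho h))
E, nu, rho = 200e9, 0.3, 7850.0    # steel (assumed)
h = 0.005                          # HALF-thickness: plate thickness 2h = 10 mm
a, b = 1.0, 0.8                    # plate dimensions [m] (assumed)
D = 2 * E * h**3 / (3 * (1 - nu**2))

for m, n in [(1, 1), (2, 1), (1, 2), (2, 2)]:
    omega = np.pi**2 * ((m / a)**2 + (n / b)**2) * np.sqrt(D / (2 * rho * h))
    print(f"mode ({m},{n}): f = {omega / (2 * np.pi):7.1f} Hz")
```

Note that {\displaystyle 2\rho h} is the mass per unit area of the plate, consistent with the half-thickness convention used throughout this section.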
== Reissner-Mindlin theory for thick plates ==
In the theory of thick plates, contributed to by Eric Reissner, Raymond Mindlin, and Yakov S. Uflyand, the normal to the mid-surface remains straight but not necessarily perpendicular to the mid-surface. If
{\displaystyle \varphi _{1}} and {\displaystyle \varphi _{2}} designate the angles which the mid-surface makes with the {\displaystyle x_{3}} axis, then
{\displaystyle \varphi _{1}\neq w_{,1}~;~~\varphi _{2}\neq w_{,2}}
Then the Mindlin–Reissner hypothesis implies that
{\displaystyle u_{\alpha }(\mathbf {x} )=u_{\alpha }^{0}(x_{1},x_{2})-x_{3}~\varphi _{\alpha }~;~~u_{3}(\mathbf {x} )=w^{0}(x_{1},x_{2})\,.}
=== Strain-displacement relations ===
Depending on the amount of rotation of the plate normals two different approximations for the strains can be derived from the basic kinematic assumptions.
For small strains and small rotations the strain-displacement relations for Mindlin–Reissner plates are
{\displaystyle {\begin{aligned}\varepsilon _{\alpha \beta }&={\frac {1}{2}}(u_{\alpha ,\beta }^{0}+u_{\beta ,\alpha }^{0})-{\frac {x_{3}}{2}}~(\varphi _{\alpha ,\beta }+\varphi _{\beta ,\alpha })\\\varepsilon _{\alpha 3}&={\cfrac {1}{2}}\left(w_{,\alpha }^{0}-\varphi _{\alpha }\right)\\\varepsilon _{33}&=0\end{aligned}}}
The shear strain, and hence the shear stress, across the thickness of the plate is not neglected in this theory. However, the shear strain is constant across the thickness of the plate. This cannot be accurate, since the shear stress is known to be parabolic even for simple plate geometries. To account for the inaccuracy in the shear strain, a shear correction factor ({\displaystyle \kappa }) is applied so that the correct amount of internal energy is predicted by the theory. Then
{\displaystyle \varepsilon _{\alpha 3}={\cfrac {1}{2}}~\kappa ~\left(w_{,\alpha }^{0}-\varphi _{\alpha }\right)}
A commonly used value of the shear correction factor for homogeneous plates is {\displaystyle \kappa =5/6}.
=== Equilibrium equations ===
The equilibrium equations have slightly different forms depending on the amount of bending expected in the plate. For the situation where the strains and rotations of the plate are small, the equilibrium equations for a Mindlin–Reissner plate are
{\displaystyle {\begin{aligned}N_{\alpha \beta ,\alpha }&=0\\M_{\alpha \beta ,\beta }-Q_{\alpha }&=0\\Q_{\alpha ,\alpha }+q&=0\end{aligned}}}
The resultant shear forces in the above equations are defined as
{\displaystyle Q_{\alpha }:=\kappa ~\int _{-h}^{h}\sigma _{\alpha 3}~dx_{3}\,.}
=== Boundary conditions ===
The boundary conditions are indicated by the boundary terms in the principle of virtual work.
If the only external force is a vertical force on the top surface of the plate, the boundary conditions are
{\displaystyle {\begin{aligned}n_{\alpha }~N_{\alpha \beta }&\quad \mathrm {or} \quad u_{\beta }^{0}\\n_{\alpha }~M_{\alpha \beta }&\quad \mathrm {or} \quad \varphi _{\alpha }\\n_{\alpha }~Q_{\alpha }&\quad \mathrm {or} \quad w^{0}\end{aligned}}}
=== Constitutive relations ===
The stress–strain relations for a linear elastic Mindlin–Reissner plate are given by
{\displaystyle {\begin{aligned}\sigma _{\alpha \beta }&=C_{\alpha \beta \gamma \theta }~\varepsilon _{\gamma \theta }\\\sigma _{\alpha 3}&=C_{\alpha 3\gamma \theta }~\varepsilon _{\gamma \theta }\\\sigma _{33}&=C_{33\gamma \theta }~\varepsilon _{\gamma \theta }\end{aligned}}}
Since {\displaystyle \sigma _{33}} does not appear in the equilibrium equations, it is implicitly assumed that it does not have any effect on the momentum balance and is neglected. This assumption is also called the plane stress assumption. The remaining stress–strain relations for an orthotropic material, in matrix form, can be written as
{\displaystyle {\begin{bmatrix}\sigma _{11}\\\sigma _{22}\\\sigma _{23}\\\sigma _{31}\\\sigma _{12}\end{bmatrix}}={\begin{bmatrix}C_{11}&C_{12}&0&0&0\\C_{12}&C_{22}&0&0&0\\0&0&C_{44}&0&0\\0&0&0&C_{55}&0\\0&0&0&0&C_{66}\end{bmatrix}}{\begin{bmatrix}\varepsilon _{11}\\\varepsilon _{22}\\\varepsilon _{23}\\\varepsilon _{31}\\\varepsilon _{12}\end{bmatrix}}}
Then,
{\displaystyle {\begin{bmatrix}N_{11}\\N_{22}\\N_{12}\end{bmatrix}}=\left\{\int _{-h}^{h}{\begin{bmatrix}C_{11}&C_{12}&0\\C_{12}&C_{22}&0\\0&0&C_{66}\end{bmatrix}}~dx_{3}\right\}{\begin{bmatrix}u_{1,1}^{0}\\u_{2,2}^{0}\\{\frac {1}{2}}~(u_{1,2}^{0}+u_{2,1}^{0})\end{bmatrix}}}
and
{\displaystyle {\begin{bmatrix}M_{11}\\M_{22}\\M_{12}\end{bmatrix}}=-\left\{\int _{-h}^{h}x_{3}^{2}~{\begin{bmatrix}C_{11}&C_{12}&0\\C_{12}&C_{22}&0\\0&0&C_{66}\end{bmatrix}}~dx_{3}\right\}{\begin{bmatrix}\varphi _{1,1}\\\varphi _{2,2}\\{\frac {1}{2}}~(\varphi _{1,2}+\varphi _{2,1})\end{bmatrix}}}
For the shear terms
{\displaystyle {\begin{bmatrix}Q_{1}\\Q_{2}\end{bmatrix}}={\cfrac {\kappa }{2}}\left\{\int _{-h}^{h}{\begin{bmatrix}C_{55}&0\\0&C_{44}\end{bmatrix}}~dx_{3}\right\}{\begin{bmatrix}w_{,1}^{0}-\varphi _{1}\\w_{,2}^{0}-\varphi _{2}\end{bmatrix}}}
The extensional stiffnesses are the quantities
{\displaystyle A_{\alpha \beta }:=\int _{-h}^{h}C_{\alpha \beta }~dx_{3}}
The bending stiffnesses are the quantities
{\displaystyle D_{\alpha \beta }:=\int _{-h}^{h}x_{3}^{2}~C_{\alpha \beta }~dx_{3}}
== Isotropic and homogeneous Reissner-Mindlin plates ==
For uniformly thick, homogeneous, and isotropic plates, the stress–strain relations in the plane of the plate are
{\displaystyle {\begin{bmatrix}\sigma _{11}\\\sigma _{22}\\\sigma _{12}\end{bmatrix}}={\cfrac {E}{1-\nu ^{2}}}{\begin{bmatrix}1&\nu &0\\\nu &1&0\\0&0&1-\nu \end{bmatrix}}{\begin{bmatrix}\varepsilon _{11}\\\varepsilon _{22}\\\varepsilon _{12}\end{bmatrix}}\,.}
where {\displaystyle E} is the Young's modulus, {\displaystyle \nu } is the Poisson's ratio, and {\displaystyle \varepsilon _{\alpha \beta }} are the in-plane strains. The through-the-thickness shear stresses and strains are related by
{\displaystyle \sigma _{31}=2G\varepsilon _{31}\quad {\text{and}}\quad \sigma _{32}=2G\varepsilon _{32}}
where {\displaystyle G=E/(2(1+\nu ))} is the shear modulus.
=== Constitutive relations ===
The relations between the stress resultants and the generalized displacements for an isotropic Mindlin–Reissner plate are:
{\displaystyle {\begin{bmatrix}N_{11}\\N_{22}\\N_{12}\end{bmatrix}}={\cfrac {2Eh}{1-\nu ^{2}}}{\begin{bmatrix}1&\nu &0\\\nu &1&0\\0&0&1-\nu \end{bmatrix}}{\begin{bmatrix}u_{1,1}^{0}\\u_{2,2}^{0}\\{\frac {1}{2}}~(u_{1,2}^{0}+u_{2,1}^{0})\end{bmatrix}}\,,}
{\displaystyle {\begin{bmatrix}M_{11}\\M_{22}\\M_{12}\end{bmatrix}}=-{\cfrac {2Eh^{3}}{3(1-\nu ^{2})}}{\begin{bmatrix}1&\nu &0\\\nu &1&0\\0&0&1-\nu \end{bmatrix}}{\begin{bmatrix}\varphi _{1,1}\\\varphi _{2,2}\\{\frac {1}{2}}(\varphi _{1,2}+\varphi _{2,1})\end{bmatrix}}\,,}
and
{\displaystyle {\begin{bmatrix}Q_{1}\\Q_{2}\end{bmatrix}}=\kappa Gh{\begin{bmatrix}w_{,1}^{0}-\varphi _{1}\\w_{,2}^{0}-\varphi _{2}\end{bmatrix}}\,.}
The bending rigidity is defined as the quantity
{\displaystyle D={\cfrac {2Eh^{3}}{3(1-\nu ^{2})}}\,.}
For a plate of thickness {\displaystyle H}, the bending rigidity has the form
{\displaystyle D={\cfrac {EH^{3}}{12(1-\nu ^{2})}}\,.}
where {\displaystyle h=H/2}.
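The equivalence of the two expressions for {\displaystyle D} is a one-line algebra check, and the numerical value for a given plate follows immediately. A small sketch (Python with SymPy; the steel values in the numeric example are assumptions for illustration):

```python
import sympy as sp

E, nu, H = sp.symbols('E nu H', positive=True)
D_half = 2 * E * (H / 2)**3 / (3 * (1 - nu**2))   # in terms of half-thickness h = H/2
D_full = E * H**3 / (12 * (1 - nu**2))            # in terms of full thickness H
print(sp.simplify(D_half - D_full))               # prints 0: the two forms agree

# Numeric example (assumed values): 10 mm steel plate, E = 200 GPa, nu = 0.3
print(D_full.subs({E: 200e9, nu: 0.3, H: 0.01}).evalf())  # ~1.83e4 N*m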
=== Governing equations ===
If we ignore the in-plane extension of the plate, the governing equations are
{\displaystyle {\begin{aligned}M_{\alpha \beta ,\beta }-Q_{\alpha }&=0\\Q_{\alpha ,\alpha }+q&=0\,.\end{aligned}}}
In terms of the generalized deformations {\displaystyle w^{0},\varphi _{1},\varphi _{2}}, the three governing equations are
The boundary conditions along the edges of a rectangular plate are
{\displaystyle {\begin{aligned}{\text{simply supported}}\quad &\quad w^{0}=0,M_{11}=0~({\text{or}}~M_{22}=0),\varphi _{1}=0~({\text{or}}~\varphi _{2}=0)\\{\text{clamped}}\quad &\quad w^{0}=0,\varphi _{1}=0,\varphi _{2}=0\,.\end{aligned}}}
== Reissner–Stein static theory for isotropic cantilever plates ==
In general, exact solutions for cantilever plates using plate theory are quite involved and few exact solutions can be found in the literature. Reissner and Stein provide a simplified theory for cantilever plates that is an improvement over older theories such as Saint-Venant plate theory.
The Reissner-Stein theory assumes a transverse displacement field of the form
{\displaystyle w(x,y)=w_{x}(x)+y\,\theta _{x}(x)\,.}
The governing equations for the plate then reduce to two coupled ordinary differential equations for {\displaystyle w_{x}} and {\displaystyle \theta _{x}}, where
{\displaystyle {\begin{aligned}q_{1}(x)&=\int _{-b/2}^{b/2}q(x,y)\,{\text{d}}y~,~~q_{2}(x)=\int _{-b/2}^{b/2}y\,q(x,y)\,{\text{d}}y~,~~n_{1}(x)=\int _{-b/2}^{b/2}n_{x}(x,y)\,{\text{d}}y\\n_{2}(x)&=\int _{-b/2}^{b/2}y\,n_{x}(x,y)\,{\text{d}}y~,~~n_{3}(x)=\int _{-b/2}^{b/2}y^{2}\,n_{x}(x,y)\,{\text{d}}y\,.\end{aligned}}}
At {\displaystyle x=0}, since the beam is clamped, the boundary conditions are
{\displaystyle w(0,y)={\cfrac {dw}{dx}}{\Bigr |}_{x=0}=0\qquad \implies \qquad w_{x}(0)={\cfrac {dw_{x}}{dx}}{\Bigr |}_{x=0}=\theta _{x}(0)={\cfrac {d\theta _{x}}{dx}}{\Bigr |}_{x=0}=0\,.}
The boundary conditions at {\displaystyle x=a} are
{\displaystyle {\begin{aligned}&bD{\cfrac {d^{3}w_{x}}{dx^{3}}}+n_{1}(x){\cfrac {dw_{x}}{dx}}+n_{2}(x){\cfrac {d\theta _{x}}{dx}}+q_{x1}=0\\&{\frac {b^{3}D}{12}}{\cfrac {d^{3}\theta _{x}}{dx^{3}}}+\left[n_{3}(x)-2bD(1-\nu )\right]{\cfrac {d\theta _{x}}{dx}}+n_{2}(x){\cfrac {dw_{x}}{dx}}+t=0\\&bD{\cfrac {d^{2}w_{x}}{dx^{2}}}+m_{1}=0\quad ,\quad {\frac {b^{3}D}{12}}{\cfrac {d^{2}\theta _{x}}{dx^{2}}}+m_{2}=0\end{aligned}}}
where
{\displaystyle {\begin{aligned}m_{1}&=\int _{-b/2}^{b/2}m_{x}(y)\,{\text{d}}y~,~~m_{2}=\int _{-b/2}^{b/2}y\,m_{x}(y)\,{\text{d}}y~,~~q_{x1}=\int _{-b/2}^{b/2}q_{x}(y)\,{\text{d}}y\\t&=q_{x2}+m_{3}=\int _{-b/2}^{b/2}y\,q_{x}(y)\,{\text{d}}y+\int _{-b/2}^{b/2}m_{xy}(y)\,{\text{d}}y\,.\end{aligned}}}
== See also ==
Bending of plates
Vibration of plates
Infinitesimal strain theory
Membrane theory of shells
Finite strain theory
Stress (mechanics)
Stress resultants
Linear elasticity
Bending
Föppl–von Kármán equations
Euler–Bernoulli beam equation
Timoshenko beam theory
== References == | Wikipedia/Theory_of_plates_and_shells |
Materials science is an interdisciplinary field focused on researching and discovering materials. Materials engineering is an engineering field concerned with finding uses for materials in other fields and industries.
The intellectual origins of materials science stem from the Age of Enlightenment, when researchers began to use analytical thinking from chemistry, physics, and engineering to understand ancient, phenomenological observations in metallurgy and mineralogy. Materials science still incorporates elements of physics, chemistry, and engineering. As such, the field was long considered by academic institutions as a sub-field of these related fields. Beginning in the 1940s, materials science began to be more widely recognized as a specific and distinct field of science and engineering, and major technical universities around the world created dedicated schools for its study.
Materials scientists emphasize understanding how the history of a material (processing) influences its structure, and thus the material's properties and performance. The understanding of processing–structure–properties relationships is called the materials paradigm. This paradigm is used to advance understanding in a variety of research areas, including nanotechnology, biomaterials, and metallurgy.
Materials science is also an important part of forensic engineering and failure analysis – investigating materials, products, structures or components, which fail or do not function as intended, causing personal injury or damage to property. Such investigations are key to understanding, for example, the causes of various aviation accidents and incidents.
== History ==
The material of choice of a given era is often a defining point. Phases such as the Stone Age, Bronze Age, Iron Age, and Steel Age are historic, if arbitrary, examples. Originally deriving from the manufacture of ceramics and its putative derivative metallurgy, materials science is one of the oldest forms of engineering and applied science. Modern materials science evolved directly from metallurgy, which itself evolved from the use of fire. A major breakthrough in the understanding of materials occurred in the late 19th century, when the American scientist Josiah Willard Gibbs demonstrated that the thermodynamic properties related to atomic structure in various phases are related to the physical properties of a material. Important elements of modern materials science were products of the Space Race: the understanding and engineering of the metallic alloys, and silica and carbon materials, used in building space vehicles that enabled the exploration of space. Materials science has driven, and been driven by, the development of revolutionary technologies such as rubbers, plastics, semiconductors, and biomaterials.
Before the 1960s (and in some cases decades after), many eventual materials science departments were metallurgy or ceramics engineering departments, reflecting the 19th and early 20th-century emphasis on metals and ceramics. The growth of materials science in the United States was catalyzed in part by the Advanced Research Projects Agency, which funded a series of university-hosted laboratories in the early 1960s, "to expand the national program of basic research and training in the materials sciences." In comparison with mechanical engineering, the nascent materials science field focused on addressing materials from the macro level and on the approach that materials are designed on the basis of knowledge of behavior at the microscopic level. Due to the expanded knowledge of the link between atomic and molecular processes as well as the overall properties of materials, the design of materials came to be based on specific desired properties. The materials science field has since broadened to include every class of materials, including ceramics, polymers, semiconductors, magnetic materials, biomaterials, and nanomaterials, generally classified into three distinct groups: ceramics, metals, and polymers. The prominent change in materials science during the recent decades is the active use of computer simulations to find new materials, predict properties, and understand phenomena.
== Fundamentals ==
A material is defined as a substance (most often a solid, but other condensed phases can be included) that is intended to be used for certain applications. There are a myriad of materials around us; they can be found in anything from buildings to spacecraft. New and advanced materials that are being developed include nanomaterials, biomaterials, and energy materials, to name a few.
The basis of materials science is studying the interplay between the structure of materials, the processing methods used to make that material, and the resulting material properties. The complex combination of these produces the performance of a material in a specific application. Many features across many length scales impact material performance, from the constituent chemical elements to the microstructure and the macroscopic features imparted by processing. Together with the laws of thermodynamics and kinetics, materials scientists aim to understand and improve materials.
=== Structure ===
Structure is one of the most important components of the field of materials science. The very definition of the field holds that it is concerned with the investigation of "the relationships that exist between the structures and properties of materials". Materials science examines the structure of materials from the atomic scale, all the way up to the macro scale. Characterization is the way materials scientists examine the structure of a material. This involves methods such as diffraction with X-rays, electrons or neutrons, and various forms of spectroscopy and chemical analysis such as Raman spectroscopy, energy-dispersive spectroscopy, chromatography, thermal analysis, electron microscope analysis, etc.
Structure is studied in the following levels.
==== Atomic structure ====
Atomic structure deals with the atoms of the materials, and how they are arranged to give rise to molecules, crystals, etc. Much of the electrical, magnetic and chemical properties of materials arise from this level of structure. The length scales involved are in angstroms (Å). The chemical bonding and atomic arrangement (crystallography) are fundamental to studying the properties and behavior of any material.
===== Bonding =====
To obtain a full understanding of the material structure and how it relates to its properties, the materials scientist must study how the different atoms, ions and molecules are arranged and bonded to each other. This involves the study and use of quantum chemistry or quantum physics. Solid-state physics, solid-state chemistry and physical chemistry are also involved in the study of bonding and structure.
===== Crystallography =====
Crystallography is the science that examines the arrangement of atoms in crystalline solids. Crystallography is a useful tool for materials scientists. One of the fundamental concepts regarding the crystal structure of a material is the unit cell, which is the smallest unit of a crystal lattice (space lattice) that repeats to make up the macroscopic crystal structure. Most common structural materials include parallelepiped and hexagonal lattice types. In single crystals, the effects of the crystalline arrangement of atoms are often easy to see macroscopically, because the natural shapes of crystals reflect the atomic structure. Further, physical properties are often controlled by crystalline defects. The understanding of crystal structures is an important prerequisite for understanding crystallographic defects. Examples of crystal defects include dislocations (edge and screw types), vacancies, and self-interstitials, among other linear, planar, and three-dimensional defect types. Mostly, materials do not occur as a single crystal, but in polycrystalline form, as an aggregate of small crystals or grains with different orientations. Because of this, the powder diffraction method, which uses diffraction patterns of polycrystalline samples with a large number of crystals, plays an important role in structural determination. Most materials have a crystalline structure, but some important materials do not exhibit regular crystal structure. Polymers display varying degrees of crystallinity, and many are completely non-crystalline. Glass, some ceramics, and many natural materials are amorphous, not possessing any long-range order in their atomic arrangements. The study of polymers combines elements of chemical and statistical thermodynamics to give thermodynamic and mechanical descriptions of physical properties.
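A standard unit-cell calculation illustrates how crystal structure fixes a macroscopic property: the theoretical density of a cubic crystal is ρ = nA/(Vc·NA), where n is the number of atoms per cell, A the molar mass, and Vc the cell volume. A short sketch (Python) using handbook values for copper, which has an FCC structure:

```python
N_A = 6.022e23   # Avogadro's number [1/mol]

def crystal_density(n_atoms, molar_mass_g, a_cm):
    """Theoretical density of a cubic crystal: rho = n * A / (a^3 * N_A)."""
    return n_atoms * molar_mass_g / (a_cm**3 * N_A)

# Copper: FCC => 4 atoms per unit cell, lattice parameter a ~ 0.3615 nm,
# molar mass ~ 63.55 g/mol
rho = crystal_density(n_atoms=4, molar_mass_g=63.55, a_cm=3.615e-8)
print(f"Cu theoretical density: {rho:.2f} g/cm^3 (measured value is about 8.96)")
```

The small difference between the theoretical and measured values reflects the vacancies and other defects present in real crystals.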
==== Nanostructure ====
Materials whose atoms and molecules form constituents on the nanoscale (i.e., they form nanostructures) are called nanomaterials. Nanomaterials are the subject of intense research in the materials science community due to the unique properties that they exhibit.
Nanostructure deals with objects and structures that are in the 1 – 100 nm range. In many materials, atoms or molecules agglomerate to form objects at the nanoscale. This causes many interesting electrical, magnetic, optical, and mechanical properties.
In describing nanostructures, it is necessary to differentiate between the number of dimensions on the nanoscale.
Nanotextured surfaces have one dimension on the nanoscale, i.e., only the thickness of the surface of an object is between 0.1 and 100 nm.
Nanotubes have two dimensions on the nanoscale, i.e., the diameter of the tube is between 0.1 and 100 nm; its length could be much greater.
Finally, spherical nanoparticles have three dimensions on the nanoscale, i.e., the particle is between 0.1 and 100 nm in each spatial dimension. The terms nanoparticles and ultrafine particles (UFP) often are used synonymously although UFP can reach into the micrometre range. The term 'nanostructure' is often used, when referring to magnetic technology. Nanoscale structure in biology is often called ultrastructure.
==== Microstructure ====
Microstructure is defined as the structure of a prepared surface or thin foil of material as revealed by a microscope above 25× magnification. It deals with objects from 100 nm to a few cm. The microstructure of a material (which can be broadly classified into metallic, polymeric, ceramic and composite) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high/low temperature behavior, wear resistance, and so on. Most of the traditional materials (such as metals and ceramics) are microstructured.
The manufacture of a perfect crystal of a material is physically impossible. For example, any crystalline material will contain defects such as precipitates, grain boundaries (Hall–Petch relationship), vacancies, interstitial atoms or substitutional atoms. The microstructure of materials reveals these larger defects and advances in simulation have allowed an increased understanding of how defects can be used to enhance material properties.
==== Macrostructure ====
Macrostructure is the appearance of a material on the scale of millimeters to meters; it is the structure of the material as seen with the naked eye.
=== Properties ===
Materials exhibit myriad properties, including the following.
Mechanical properties, see Strength of materials
Chemical properties, see Chemistry
Electrical properties, see Electricity
Thermal properties, see Thermodynamics
Optical properties, see Optics and Photonics
Magnetic properties, see Magnetism
The properties of a material determine its usability and hence its engineering application.
=== Processing ===
Synthesis and processing involves the creation of a material with the desired micro- or nanostructure. A material cannot be used in industry if no economically viable production method for it has been developed. Therefore, developing processing methods for materials that are reasonably effective and cost-efficient is vital to the field of materials science. Different materials require different processing or synthesis methods. For example, the processing of metals has historically defined eras such as the Bronze Age and Iron Age and is studied under the branch of materials science named physical metallurgy. Chemical and physical methods are also used to synthesize other materials such as polymers, ceramics, semiconductors, and thin films. As of the early 21st century, new methods are being developed to synthesize nanomaterials such as graphene.
=== Thermodynamics ===
Thermodynamics is concerned with heat and temperature and their relation to energy and work. It defines macroscopic variables, such as internal energy, entropy, and pressure, that partly describe a body of matter or radiation. It states that the behavior of those variables is subject to general constraints common to all materials. These general constraints are expressed in the four laws of thermodynamics. Thermodynamics describes the bulk behavior of the body, not the microscopic behaviors of the very large numbers of its microscopic constituents, such as molecules. The behavior of these microscopic particles is described by, and the laws of thermodynamics are derived from, statistical mechanics.
The study of thermodynamics is fundamental to materials science. It forms the foundation to treat general phenomena in materials science and engineering, including chemical reactions, magnetism, polarizability, and elasticity. It explains fundamental tools such as phase diagrams and concepts such as phase equilibrium.
=== Kinetics ===
Chemical kinetics is the study of the rates at which systems that are out of equilibrium change under the influence of various forces. When applied to materials science, it deals with how a material changes with time (moves from non-equilibrium to equilibrium state) due to application of a certain field. It details the rate of various processes evolving in materials including shape, size, composition and structure. Diffusion is important in the study of kinetics as this is the most common mechanism by which materials undergo change. Kinetics is essential in processing of materials because, among other things, it details how the microstructure changes with application of heat.
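Diffusivity typically follows an Arrhenius temperature dependence, D = D0·exp(−Q/RT), which is why modest temperature increases accelerate microstructural change so strongly. A short sketch of this relation (Python; D0 and Q are illustrative values of the order reported in textbooks for carbon diffusing in α-iron, not authoritative data):

```python
import numpy as np

R = 8.314  # gas constant [J/(mol K)]

def arrhenius_diffusivity(D0, Q, T):
    """Temperature dependence of diffusivity: D = D0 * exp(-Q / (R T))."""
    return D0 * np.exp(-Q / (R * T))

# Assumed textbook-order values for carbon in alpha-iron
D0, Q = 6.2e-7, 80e3   # m^2/s, J/mol
for T in (300.0, 600.0, 900.0):   # kelvin
    print(f"T = {T:5.0f} K  D = {arrhenius_diffusivity(D0, Q, T):.3e} m^2/s")
```

Raising the temperature from 300 K to 900 K increases the diffusivity by many orders of magnitude, which is the quantitative basis of heat-treatment processing.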
== Research ==
Materials science is a highly active area of research. Together with materials science departments, physics, chemistry, and many engineering departments are involved in materials research. Materials research covers a broad range of topics; the following non-exhaustive list highlights a few important research areas.
=== Nanomaterials ===
Nanomaterials describe, in principle, materials of which a single unit is sized (in at least one dimension) between 1 and 1000 nanometers (10−9 meter), but is usually 1 nm – 100 nm. Nanomaterials research takes a materials science based approach to nanotechnology, using advances in materials metrology and synthesis, which have been developed in support of microfabrication research. Materials with structure at the nanoscale often have unique optical, electronic, or mechanical properties. The field of nanomaterials is loosely organized, like the traditional field of chemistry, into organic (carbon-based) nanomaterials, such as fullerenes, and inorganic nanomaterials based on other elements, such as silicon. Examples of nanomaterials include fullerenes, carbon nanotubes, nanocrystals, etc.
=== Biomaterials ===
A biomaterial is any matter, surface, or construct that interacts with biological systems. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering, and materials science.
Biomaterials can be derived either from nature or synthesized in a laboratory using a variety of chemical approaches using metallic components, polymers, bioceramics, or composite materials. They are often intended or adapted for medical applications, such as biomedical devices which perform, augment, or replace a natural function. Such functions may be benign, like being used for a heart valve, or may be bioactive with a more interactive functionality such as hydroxylapatite-coated hip implants. Biomaterials are also used every day in dental applications, surgery, and drug delivery. For example, a construct with impregnated pharmaceutical products can be placed into the body, which permits the prolonged release of a drug over an extended period of time. A biomaterial may also be an autograft, allograft or xenograft used as an organ transplant material.
=== Electronic, optical, and magnetic ===
Semiconductors, metals, and ceramics are used today to form highly complex systems, such as integrated electronic circuits, optoelectronic devices, and magnetic and optical mass storage media. These materials form the basis of our modern computing world, and hence research into these materials is of vital importance.
Semiconductors are a traditional example of these types of materials. They are materials that have properties that are intermediate between conductors and insulators. Their electrical conductivities are very sensitive to the concentration of impurities, which allows the use of doping to achieve desirable electronic properties. Hence, semiconductors form the basis of the traditional computer.
This field also includes new areas of research such as superconducting materials, spintronics, metamaterials, etc. The study of these materials involves knowledge of materials science and solid-state physics or condensed matter physics.
=== Computational materials science ===
With continuing increases in computing power, simulating the behavior of materials has become possible. This enables materials scientists to understand behavior and mechanisms, design new materials, and explain properties formerly poorly understood. Efforts surrounding integrated computational materials engineering are now focusing on combining computational methods with experiments to drastically reduce the time and effort to optimize materials properties for a given application. This involves simulating materials at all length scales, using methods such as density functional theory, molecular dynamics, Monte Carlo, dislocation dynamics, phase field, finite element, and many more.
== Industry ==
Radical materials advances can drive the creation of new products or even new industries, but stable industries also employ materials scientists to make incremental improvements and troubleshoot issues with currently used materials. Industrial applications of materials science include materials design, cost-benefit tradeoffs in industrial production of materials, processing methods (casting, rolling, welding, ion implantation, crystal growth, thin-film deposition, sintering, glassblowing, etc.), and analytic methods (characterization methods such as electron microscopy, X-ray diffraction, calorimetry, nuclear microscopy (HEFIB), Rutherford backscattering, neutron diffraction, small-angle X-ray scattering (SAXS), etc.).
Besides material characterization, the material scientist or engineer also deals with extracting materials and converting them into useful forms. Thus ingot casting, foundry methods, blast furnace extraction, and electrolytic extraction are all part of the required knowledge of a materials engineer. Often the presence, absence, or variation of minute quantities of secondary elements and compounds in a bulk material will greatly affect the final properties of the materials produced. For example, steels are classified based on 1/10 and 1/100 weight percentages of the carbon and other alloying elements they contain. Thus, the extracting and purifying methods used to extract iron in a blast furnace can affect the quality of steel that is produced.
Solid materials are generally grouped into three basic classifications: ceramics, metals, and polymers. This broad classification is based on the empirical makeup and atomic structure of the solid materials, and most solids fall into one of these broad categories. An item that is often made from each of these materials types is the beverage container. The material types used for beverage containers accordingly provide different advantages and disadvantages, depending on the material used. Ceramic (glass) containers are optically transparent, impervious to the passage of carbon dioxide, relatively inexpensive, and are easily recycled, but are also heavy and fracture easily. Metal (aluminum alloy) is relatively strong, is a good barrier to the diffusion of carbon dioxide, and is easily recycled. However, the cans are opaque, expensive to produce, and are easily dented and punctured. Polymers (polyethylene plastic) are relatively strong, can be optically transparent, are inexpensive and lightweight, and can be recyclable, but are not as impervious to the passage of carbon dioxide as aluminum and glass.
=== Ceramics and glasses ===
Another application of materials science is the study of ceramics and glasses, typically the most brittle materials with industrial relevance. Many ceramics and glasses exhibit covalent or ionic-covalent bonding with SiO2 (silica) as a fundamental building block. Ceramics – not to be confused with raw, unfired clay – are usually seen in crystalline form. The vast majority of commercial glasses contain a metal oxide fused with silica. At the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. Windowpanes and eyeglasses are important examples. Fibers of glass are also used for long-range telecommunication and optical transmission. Scratch resistant Corning Gorilla Glass is a well-known example of the application of materials science to drastically improve the properties of common components.
Engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. Alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. Hot pressing provides higher density material. Chemical vapor deposition can place a film of a ceramic on another material. Cermets are ceramic particles containing some metals. The wear resistance of tools is derived from cemented carbides with the metal phase of cobalt and nickel typically added to modify properties.
Ceramics can be significantly strengthened for engineering applications using the principle of crack deflection. This process involves the strategic addition of second-phase particles within a ceramic matrix, optimizing their shape, size, and distribution to direct and control crack propagation. This approach enhances fracture toughness, paving the way for the creation of advanced, high-performance ceramics in various industries.
=== Composites ===
Another application of materials science in industry is making composite materials. These are structured materials composed of two or more macroscopic phases.
Applications range from structural elements such as steel-reinforced concrete, to the thermal insulating tiles, which play a key and integral role in NASA's Space Shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re-entry into the Earth's atmosphere. One example is reinforced Carbon-Carbon (RCC), the light gray material, which withstands re-entry temperatures up to 1,510 °C (2,750 °F) and protects the Space Shuttle's wing leading edges and nose cap. RCC is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. After curing at high temperature in an autoclave, the laminate is pyrolized to convert the resin to carbon, impregnated with furfuryl alcohol in a vacuum chamber, and cured-pyrolized to convert the furfuryl alcohol to carbon. To provide oxidation resistance for reusability, the outer layers of the RCC are converted to silicon carbide.
Other examples can be seen in the "plastic" casings of television sets, cell-phones and so on. These plastic casings are usually a composite material made up of a thermoplastic matrix such as acrylonitrile butadiene styrene (ABS) in which calcium carbonate chalk, talc, glass fibers or carbon fibers have been added for added strength, bulk, or electrostatic dispersion. These additions may be termed reinforcing fibers, or dispersants, depending on their purpose.
=== Polymers ===
Polymers are chemical compounds made up of a large number of identical components linked together like chains. Polymers are the raw materials (the resins) used to make what are commonly called plastics and rubber. Plastics and rubber are the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. Plastics in former and in current widespread use include polyethylene, polypropylene, polyvinyl chloride (PVC), polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates. Rubbers include natural rubber, styrene-butadiene rubber, chloroprene, and butadiene rubber. Plastics are generally classified as commodity, specialty and engineering plastics.
Polyvinyl chloride (PVC) is widely used, inexpensive, and annual production quantities are large. It lends itself to a vast array of applications, from artificial leather to electrical insulation and cabling, packaging, and containers. Its fabrication and processing are simple and well-established. The versatility of PVC is due to the wide range of plasticisers and other additives that it accepts. The term "additives" in polymer science refers to the chemicals and compounds added to the polymer base to modify its material properties.
Polycarbonate would be normally considered an engineering plastic (other examples include PEEK, ABS). Such plastics are valued for their superior strengths and other special material properties. They are usually not used for disposable applications, unlike commodity plastics.
Specialty plastics are materials with unique characteristics, such as ultra-high strength, electrical conductivity, electro-fluorescence, high thermal stability, etc.
The dividing lines between the various types of plastics is not based on material but rather on their properties and applications. For example, polyethylene (PE) is a cheap, low friction polymer commonly used to make disposable bags for shopping and trash, and is considered a commodity plastic, whereas medium-density polyethylene (MDPE) is used for underground gas and water pipes, and another variety called ultra-high-molecular-weight polyethylene (UHMWPE) is an engineering plastic which is used extensively as the glide rails for industrial equipment and the low-friction socket in implanted hip joints.
=== Metal alloys ===
The alloys of iron (steel, stainless steel, cast iron, tool steel, alloy steels) make up the largest proportion of metals today both by quantity and commercial value.
Iron alloyed with various proportions of carbon gives low, mid and high carbon steels. An iron-carbon alloy is only considered steel if the carbon level is between 0.01% and 2.00% by weight. For steels, the hardness and tensile strength of the steel is related to the amount of carbon present, with increasing carbon levels also leading to lower ductility and toughness. Heat treatment processes such as quenching and tempering can significantly change these properties, however. In contrast, certain metal alloys, such as Invar (an iron–nickel alloy), exhibit an unusually low thermal expansion, so their dimensions remain nearly unchanged across a range of temperatures. Cast iron is defined as an iron–carbon alloy with more than 2.00%, but less than 6.67% carbon. Stainless steel is defined as a regular steel alloy with greater than 10% by weight alloying content of chromium. Nickel and molybdenum are typically also added in stainless steels.
Other significant metallic alloys are those of aluminium, titanium, copper and magnesium. Copper alloys have been known for a long time (since the Bronze Age), while the alloys of the other three metals have been relatively recently developed. Due to the chemical reactivity of these metals, the electrolytic extraction processes required were only developed relatively recently. The alloys of aluminium, titanium and magnesium are also known and valued for their high strength to weight ratios and, in the case of magnesium, their ability to provide electromagnetic shielding. These materials are ideal for situations where high strength to weight ratios are more important than bulk cost, such as in the aerospace industry and certain automotive engineering applications.
=== Semiconductors ===
A semiconductor is a material that has a resistivity between a conductor and insulator. Modern day electronics run on semiconductors, and the industry had an estimated US$530 billion market in 2021. Its electronic properties can be greatly altered through intentionally introducing impurities in a process referred to as doping. Semiconductor materials are used to build diodes, transistors, light-emitting diodes (LEDs), and analog and digital electric circuits, among their many uses. Semiconductor devices have replaced thermionic devices like vacuum tubes in most applications. Semiconductor devices are manufactured both as single discrete devices and as integrated circuits (ICs), which consist of a number—from a few to millions—of devices manufactured and interconnected on a single semiconductor substrate.
Of all the semiconductors in use today, silicon makes up the largest portion both by quantity and commercial value. Monocrystalline silicon is used to produce wafers used in the semiconductor and electronics industry. Gallium arsenide (GaAs) is the second most popular semiconductor used. Due to its higher electron mobility and saturation velocity compared to silicon, it is a material of choice for high-speed electronics applications. These superior properties are compelling reasons to use GaAs circuitry in mobile phones, satellite communications, microwave point-to-point links and higher frequency radar systems. Other semiconductor materials include germanium, silicon carbide, and gallium nitride, which have various applications.
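The sensitivity of conductivity to doping can be illustrated with the simplest model, σ = q·n·μn, assuming full donor ionization (n ≈ Nd) and a constant electron mobility. This is a sketch under stated assumptions, not a device calculation: in real silicon the mobility falls substantially at high doping levels.

```python
q = 1.602e-19   # elementary charge [C]

def n_type_conductivity(Nd_cm3, mu_n=1350.0):
    """Conductivity of n-type silicon, sigma = q * n * mu_n [S/cm], assuming
    full donor ionization (n ~ Nd) and constant mobility mu_n in cm^2/(V s).
    The constant-mobility assumption only holds for light doping."""
    return q * Nd_cm3 * mu_n

for Nd in (1e14, 1e16, 1e18):
    sigma = n_type_conductivity(Nd)
    print(f"Nd = {Nd:.0e} cm^-3  sigma ~ {sigma:.3g} S/cm  rho ~ {1 / sigma:.3g} ohm*cm")
```

Four orders of magnitude in donor concentration translate directly into four orders of magnitude in conductivity, which is what makes doping such a powerful design tool.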
== Relation with other fields ==
Materials science evolved, starting from the 1950s because it was recognized that to create, discover and design new materials, one had to approach it in a unified manner. Thus, materials science and engineering emerged in many ways: renaming and/or combining existing metallurgy and ceramics engineering departments; splitting from existing solid state physics research (itself growing into condensed matter physics); pulling in relatively new polymer engineering and polymer science; recombining from the previous, as well as chemistry, chemical engineering, mechanical engineering, and electrical engineering; and more.
The field of materials science and engineering is important both from a scientific perspective and for applications. Materials are of the utmost importance for engineers (and for other applied fields), because the choice of appropriate materials is crucial when designing systems. As a result, materials science is an increasingly important part of an engineer's education.
Materials physics is the use of physics to describe the physical properties of materials. It is a synthesis of physical sciences such as chemistry, solid mechanics, solid state physics, and materials science. Materials physics is considered a subset of condensed matter physics and applies fundamental condensed matter concepts to complex multiphase media, including materials of technological interest. Current fields that materials physicists work in include electronic, optical, and magnetic materials, novel materials and structures, quantum phenomena in materials, nonequilibrium physics, and soft condensed matter physics. New experimental and computational tools are constantly improving how materials systems are modeled and studied, and these tools are themselves active areas in which materials physicists work.
The field is inherently interdisciplinary, and materials scientists or engineers must be aware of, and make use of, the methods of the physicist, chemist and engineer. In turn, fields such as the life sciences and archaeology can inspire the development of new materials and processes, in bioinspired and paleoinspired approaches, so close relationships with these fields remain. Conversely, many physicists, chemists and engineers find themselves working in materials science due to the significant overlaps between the fields.
== Emerging technologies ==
== Subdisciplines ==
The main branches of materials science stem from the four main classes of materials: ceramics, metals, polymers and composites.
Ceramic engineering
Metallurgy
Polymer science and engineering
Composite engineering
There are additionally broadly applicable, materials-independent endeavors.
Materials characterization (spectroscopy, microscopy, diffraction)
Computational materials science
Materials informatics and selection
There are also relatively broad focuses across materials on specific phenomena and techniques.
Crystallography
Surface science
Tribology
Microelectronics
== Related or interdisciplinary fields ==
Condensed matter physics, solid-state physics and solid-state chemistry
Nanotechnology
Mineralogy
Supramolecular chemistry
Biomaterials science
== Professional societies ==
American Ceramic Society
ASM International
Association for Iron and Steel Technology
Materials Research Society
The Minerals, Metals & Materials Society
== See also ==
== References ==
=== Citations ===
=== Bibliography ===
Ashby, Michael; Hugh Shercliff; David Cebon (2007). Materials: engineering, science, processing and design (1st ed.). Butterworth-Heinemann. ISBN 978-0-7506-8391-3.
Askeland, Donald R.; Pradeep P. Phulé (2005). The Science & Engineering of Materials (5th ed.). Thomson-Engineering. ISBN 978-0-534-55396-8.
Callister, Jr., William D. (2000). Materials Science and Engineering – An Introduction (5th ed.). John Wiley and Sons. ISBN 978-0-471-32013-5.
Eberhart, Mark (2003). Why Things Break: Understanding the World by the Way It Comes Apart. Harmony. ISBN 978-1-4000-4760-4.
Gaskell, David R. (1995). Introduction to the Thermodynamics of Materials (4th ed.). Taylor and Francis Publishing. ISBN 978-1-56032-992-3.
González-Viñas, W. & Mancini, H.L. (2004). An Introduction to Materials Science. Princeton University Press. ISBN 978-0-691-07097-1.
Gordon, James Edward (1984). The New Science of Strong Materials or Why You Don't Fall Through the Floor (eissue ed.). Princeton University Press. ISBN 978-0-691-02380-9.
Mathews, F.L. & Rawlings, R.D. (1999). Composite Materials: Engineering and Science. Boca Raton: CRC Press. ISBN 978-0-8493-0621-1.
Lewis, P.R.; Reynolds, K. & Gagg, C. (2003). Forensic Materials Engineering: Case Studies. Boca Raton: CRC Press. ISBN 9780849311826.
Wachtman, John B. (1996). Mechanical Properties of Ceramics. New York: Wiley-Interscience, John Wiley & Son's. ISBN 978-0-471-13316-2.
Walker, P., ed. (1993). Chambers Dictionary of Materials Science and Technology. Chambers Publishing. ISBN 978-0-550-13249-9.
Mahajan, S. (2015). "The role of materials science in the evolution of microelectronics". MRS Bulletin. 12 (40): 1079–1088. Bibcode:2015MRSBu..40.1079M. doi:10.1557/mrs.2015.276.
== Further reading ==
Timeline of Materials Science Archived 2011-07-27 at the Wayback Machine at The Minerals, Metals & Materials Society (TMS) – accessed March 2007
Burns, G.; Glazer, A.M. (1990). Space Groups for Scientists and Engineers (2nd ed.). Boston: Academic Press, Inc. ISBN 978-0-12-145761-7.
Cullity, B.D. (1978). Elements of X-Ray Diffraction (2nd ed.). Reading, Massachusetts: Addison-Wesley Publishing Company. ISBN 978-0-534-55396-8.
Giacovazzo, C; Monaco HL; Viterbo D; Scordari F; Gilli G; Zanotti G; Catti M (1992). Fundamentals of Crystallography. Oxford: Oxford University Press. ISBN 978-0-19-855578-0.
Green, D.J.; Hannink, R.; Swain, M.V. (1989). Transformation Toughening of Ceramics. Boca Raton: CRC Press. ISBN 978-0-8493-6594-2.
Lovesey, S. W. (1984). Theory of Neutron Scattering from Condensed Matter; Volume 1: Neutron Scattering. Oxford: Clarendon Press. ISBN 978-0-19-852015-3.
Lovesey, S. W. (1984). Theory of Neutron Scattering from Condensed Matter; Volume 2: Condensed Matter. Oxford: Clarendon Press. ISBN 978-0-19-852017-7.
O'Keeffe, M.; Hyde, B.G. (1996). "Crystal Structures; I. Patterns and Symmetry". Zeitschrift für Kristallographie – Crystalline Materials. 212 (12). Washington, DC: Mineralogical Society of America, Monograph Series: 899. Bibcode:1997ZK....212..899K. doi:10.1524/zkri.1997.212.12.899. ISBN 978-0-939950-40-9.
Squires, G.L. (1996). Introduction to the Theory of Thermal Neutron Scattering (2nd ed.). Mineola, New York: Dover Publications Inc. ISBN 978-0-486-69447-4.
Young, R.A., ed. (1993). The Rietveld Method. Oxford: Oxford University Press & International Union of Crystallography. ISBN 978-0-19-855577-3.
== External links ==
MS&T conference organized by the main materials societies
MIT OpenCourseWare for MSE
The Journal of Applied Mathematics and Mechanics, also known as Zeitschrift für Angewandte Mathematik und Mechanik or ZAMM, is a monthly peer-reviewed scientific journal dedicated to applied mathematics. It is published by Wiley-VCH on behalf of the Gesellschaft für Angewandte Mathematik und Mechanik. The editor-in-chief is Holm Altenbach (Otto von Guericke University Magdeburg). According to the Journal Citation Reports, the journal has a 2022 impact factor of 2.3.
== Publication history ==
The journal's first issue appeared in 1921, published by the Verein Deutscher Ingenieure and edited by Richard von Mises.
== References ==
== External links ==
Official website
The membrane theory of shells, or membrane theory for short, describes the mechanical behaviour of shells under the assumption that twisting and bending moments are small enough to be negligible.
The spectacular simplification of membrane theory makes possible the examination of a wide variety of shapes and supports, in particular tanks and shell roofs. The simplification carries heavy penalties, however, and its inadequacies become apparent on critical inspection of solutions obtained within the theory. Nevertheless, the theory is more than a first approximation: if a shell is shaped and supported so as to carry its load within a membrane stress system, it may be a desirable solution to the design problem, i.e., thin, light and stiff.
== See also ==
Theory of plates and shells
Stress resultants in plates and shells
== References ==
== Literature ==
Ventsel, Eduard; Krauthammer, Theodor (24 August 2001). "Chapter 13. The Membrane Theory Of Shells". Thin Plates and Shells - Theory: Analysis, and Applications. CRC Press. pp. 349–372. doi:10.1201/9780203908723.ch13. ISBN 978-0-8247-0575-6. Retrieved 5 October 2014.
Practical industry example for plates and shell analysis - animated video
In fluid dynamics, a wake may either be:
the region of recirculating flow immediately behind a moving or stationary blunt body, caused by viscosity, which may be accompanied by flow separation and turbulence, or
the wave pattern on the water surface downstream of an object in a flow, or produced by a moving object (e.g. a ship), caused by density differences of the fluids above and below the free surface and gravity (or surface tension).
== Viscosity ==
The wake is the region of disturbed flow (often turbulent) downstream of a solid body moving through a fluid, caused by the flow of the fluid around the body.
For a blunt body in subsonic external flow, for example the Apollo or Orion capsules during descent and landing, the wake is massively separated and behind the body is a reverse flow region where the flow is moving toward the body. This phenomenon is often observed in wind tunnel testing of aircraft, and is especially important when parachute systems are involved, because unless the parachute lines extend the canopy beyond the reverse flow region, the chute can fail to inflate and thus collapse. Parachutes deployed into wakes suffer dynamic pressure deficits which reduce their expected drag forces. High-fidelity computational fluid dynamics simulations are often undertaken to model wake flows, although such modeling has uncertainties associated with turbulence modeling (for example RANS versus LES implementations), in addition to unsteady flow effects. Example applications include rocket stage separation and aircraft store separation.
== Density differences ==
In incompressible fluids (liquids) such as water, a bow wake is created when a watercraft moves through the medium; as the medium cannot be compressed, it must be displaced instead, resulting in a wave. As with all wave forms, it spreads outward from the source until its energy is overcome or lost, usually by friction or dispersion.
The non-dimensional parameter of interest is the Froude number.
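For instance, taking a hull length as the characteristic length (an illustrative choice), the Froude number Fr = U/sqrt(gL) can be computed directly:

```python
import math

def froude_number(speed_m_s: float, length_m: float, g: float = 9.81) -> float:
    """Fr = U / sqrt(g * L)."""
    return speed_m_s / math.sqrt(g * length_m)

# Illustrative: a 10 m hull moving at 5 m/s
print(f"Fr = {froude_number(5.0, 10.0):.2f}")
```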
=== Kelvin wake pattern ===
== Other effects ==
The above describes an ideal wake, where the body's means of propulsion has no other effect on the water. In practice the wave pattern between the V-shaped wavefronts is usually mixed with the effects of propeller backwash and eddying behind the boat's (usually square-ended) stern.
The Kelvin angle is also derived for the case of deep water in which the fluid is not flowing in different speed or directions as a function of depth ("shear"). In cases where the water (or fluid) has shear, the results may be more complicated. Also, the deep water model neglects surface tension, which implies that the wave source is large compared to capillary length.
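The deep-water Kelvin half-angle referred to here is arcsin(1/3), about 19.47 degrees; a one-line check:

```python
import math

# Kelvin wake half-angle for deep water without shear
print(f"{math.degrees(math.asin(1.0 / 3.0)):.2f} degrees")  # ~19.47
```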
== Recreation ==
"No wake zones" may prohibit wakes in marinas, near moorings and within some distance of shore in order to facilitate recreation by other boats and reduce the damage wakes cause. Powered narrowboats on British canals are not permitted to create a breaking wash (a wake large enough to create a breaking wave) along the banks, as this erodes them. This rule normally restricts these vessels to 4 knots (4.6 mph; 7.4 km/h) or less.
Wakes are occasionally used recreationally. Swimmers, people riding personal watercraft, and aquatic mammals such as dolphins can ride the leading edge of a wake. In the sport of wakeboarding the wake is used as a jump. The wake is also used to propel a surfer in the sport of wakesurfing. In the sport of water polo, the ball carrier can swim while advancing the ball, propelled ahead with the wake created by alternating armstrokes in crawl stroke, a technique known as dribbling. Furthermore, in the sport of canoe marathon, competitors use the wake of fellow kayaks in order to save energy and gain an advantage, through the practice of sitting their boats on the wake of another, so their kayak is propelled by the wash.
== See also ==
Bow shock (aerodynamics)
Slipstream
Wake turbulence
Karman vortex street
== References ==
== External links ==
Erosion caused by boat wakes
NIST detailed derivation
Propulsion is the generation of force by any combination of pushing or pulling to modify the translational motion of an object, which is typically a rigid body (or an articulated rigid body) but may also concern a fluid. The term is derived from two Latin words: pro, meaning before or forward; and pellere, meaning to drive.
A propulsion system consists of a source of mechanical power, and a propulsor (means of converting this power into propulsive force).
Plucking a guitar string to induce a vibratory translation is technically a form of propulsion of the guitar string, though it is not commonly described in these terms, even though human muscles are considered to propel the fingertips. The motion of an object moving through a gravitational field is affected by the field, and within some frames of reference physicists speak of the gravitational field generating a force upon the object; but for deep theoretic reasons, physicists now consider the curved path of an object moving freely through space-time, as shaped by gravity, to be the natural movement of the object, unaffected by a propulsive force (in this view, a falling apple is considered unpropelled, while an observer of the apple standing on the ground is propelled by the reactive force of the Earth's surface).
Biological propulsion systems use an animal's muscles as the power source, and limbs such as wings, fins or legs as the propulsors. A technological system uses an engine or motor as the power source (commonly called a powerplant), and wheels and axles, propellers, or a propulsive nozzle to generate the force. Components such as clutches or gearboxes may be needed to connect the motor to axles, wheels, or propellers. A technological/biological system may use human, or trained animal, muscular work to power a mechanical device.
Small objects, such as bullets, propelled at high speed are known as projectiles; larger objects propelled at high speed, often into ballistic flight, are known as rockets or missiles.
Influencing rotational motion is also technically a form of propulsion, but in speech, an automotive mechanic might prefer to describe the hot gasses in an engine cylinder as propelling the piston (translational motion), which drives the crankshaft (rotational motion), the crankshaft then drives the wheels (rotational motion), and the wheels propel the car forward (translational motion). In common speech, propulsion is associated with spatial displacement more strongly than locally contained forms of motion, such as rotation or vibration. As another example, internal stresses in a rotating baseball cause the surface of the baseball to travel along a sinusoidal or helical trajectory, which would not happen in the absence of these interior forces; these forces meet the technical definition of propulsion from Newtonian mechanics, but are not commonly spoken of in this language.
== Vehicular propulsion ==
=== Air propulsion ===
An aircraft propulsion system generally consists of an aircraft engine and some means to generate thrust, such as a propeller or a propulsive nozzle.
An aircraft propulsion system must achieve two things. First, the thrust from the propulsion system must balance the drag of the airplane when the airplane is cruising. And second, the thrust from the propulsion system must exceed the drag of the airplane for the airplane to accelerate. The greater the difference between the thrust and the drag, called the excess thrust, the faster the airplane will accelerate.
Some aircraft, like airliners and cargo planes, spend most of their life in a cruise condition. For these airplanes, excess thrust is not as important as high engine efficiency and low fuel usage. Since thrust depends on both the amount of gas moved and the velocity, we can generate high thrust by accelerating a large mass of gas by a small amount, or by accelerating a small mass of gas by a large amount. Because of the aerodynamic efficiency of propellers and fans, it is more fuel efficient to accelerate a large mass by a small amount, which is why high-bypass turbofans and turboprops are commonly used on cargo planes and airliners.
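A minimal numeric sketch of this trade-off (all figures illustrative): in the simplest static picture, thrust equals mass flow times the velocity increment given to the gas, while the power added to the jet grows with the square of that increment, so a propulsor that moves a large mass slowly produces the same thrust for far less power.

```python
def thrust_and_jet_power(mass_flow_kg_s: float, delta_v_m_s: float):
    """Static case: thrust = mdot * dv; kinetic power added to the jet = 0.5 * mdot * dv**2."""
    return mass_flow_kg_s * delta_v_m_s, 0.5 * mass_flow_kg_s * delta_v_m_s ** 2

# The same 50 kN of thrust produced two ways (illustrative numbers)
cases = {
    "large fan, small delta-v": thrust_and_jet_power(500.0, 100.0),
    "small jet, large delta-v": thrust_and_jet_power(50.0, 1000.0),
}
for name, (thrust_n, power_w) in cases.items():
    print(f"{name}: thrust = {thrust_n / 1e3:.0f} kN, jet power = {power_w / 1e6:.1f} MW")
```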
Some aircraft, like fighter planes or experimental high speed aircraft, require very high excess thrust to accelerate quickly and to overcome the high drag associated with high speeds. For these airplanes, engine efficiency is not as important as very high thrust. Modern combat aircraft usually have an afterburner added to a low bypass turbofan. Future hypersonic aircraft may use some type of ramjet or rocket propulsion.
=== Ground ===
Ground propulsion is any mechanism for propelling solid bodies along the ground, usually for the purposes of transportation. The propulsion system often consists of a combination of an engine or motor, a gearbox and wheel and axles in standard applications.
=== Maglev ===
Maglev (derived from magnetic levitation) is a system of transportation that uses magnetic levitation to suspend, guide and propel vehicles with magnets rather than using mechanical methods, such as wheels, axles and bearings. With maglev a vehicle is levitated a short distance away from a guideway using magnets to create both lift and thrust. Maglev vehicles are claimed to move more smoothly and quietly and to require less maintenance than wheeled mass transit systems. It is claimed that non-reliance on friction also means that acceleration and deceleration can far surpass that of existing forms of transport. The power needed for levitation is not a particularly large percentage of the overall energy consumption; most of the power used is needed to overcome air resistance (drag), as with any other high-speed form of transport.
=== Marine ===
Marine propulsion is the mechanism or system used to generate thrust to move a ship or boat across water. While paddles and sails are still used on some smaller boats, most modern ships are propelled by mechanical systems consisting of a motor or engine turning a propeller, or less frequently, in jet drives, an impeller. Marine engineering is the discipline concerned with the design of marine propulsion systems.
Steam engines were the first mechanical engines used in marine propulsion, but have mostly been replaced by two-stroke or four-stroke diesel engines, outboard motors, and gas turbine engines on faster ships. Nuclear reactors producing steam are used to propel warships and icebreakers, and there have been attempts to utilize them to power commercial vessels. Electric motors have been used on submarines and electric boats and have been proposed for energy-efficient propulsion. Recent development in liquified natural gas (LNG) fueled engines are gaining recognition for their low emissions and cost advantages.
=== Space ===
Spacecraft propulsion is any method used to accelerate spacecraft and artificial satellites. There are many different methods. Each method has drawbacks and advantages, and spacecraft propulsion is an active area of research. However, most spacecraft today are propelled by forcing a gas from the back/rear of the vehicle at very high speed through a supersonic de Laval nozzle. This sort of engine is called a rocket engine.
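In this idealization the achievable change in velocity is governed by the Tsiolkovsky rocket equation; a small sketch with illustrative numbers:

```python
import math

def ideal_delta_v(exhaust_velocity_m_s: float, m_initial_kg: float, m_final_kg: float) -> float:
    """Tsiolkovsky rocket equation: dv = v_e * ln(m0 / m1)."""
    return exhaust_velocity_m_s * math.log(m_initial_kg / m_final_kg)

# Illustrative stage: effective exhaust velocity 4500 m/s, mass ratio 10
print(f"delta-v ~ {ideal_delta_v(4500.0, 100_000.0, 10_000.0):.0f} m/s")
```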
All current spacecraft use chemical rockets (bipropellant or solid-fuel) for launch, though some (such as the Pegasus rocket and SpaceShipOne) have used air-breathing engines on their first stage. Most satellites have simple reliable chemical thrusters (often monopropellant rockets) or resistojet rockets for orbital station-keeping and some use momentum wheels for attitude control. Soviet bloc satellites have used electric propulsion for decades, and newer Western geo-orbiting spacecraft are starting to use them for north–south stationkeeping and orbit raising. Interplanetary vehicles mostly use chemical rockets as well, although a few have used ion thrusters and Hall-effect thrusters (two different types of electric propulsion) to great success.
=== Cable ===
A cable car is any of a variety of transportation systems relying on cables to pull vehicles along or lower them at a steady rate. The terminology also refers to the vehicles on these systems. The cable car vehicles are motor-less and engine-less and they are pulled by a cable that is rotated by a motor off-board.
== Animal ==
Animal locomotion, which is the act of self-propulsion by an animal, has many manifestations, including running, swimming, jumping and flying. Animals move for a variety of reasons, such as to find food, a mate, or a suitable microhabitat, and to escape predators. For many animals the ability to move is essential to survival and, as a result, selective pressures have shaped the locomotion methods and mechanisms employed by moving organisms. For example, migratory animals that travel vast distances (such as the Arctic tern) typically have a locomotion mechanism that costs very little energy per unit distance, whereas non-migratory animals that must frequently move quickly to escape predators (such as frogs) are likely to have costly but very fast locomotion. The study of animal locomotion is typically considered to be a sub-field of biomechanics.
Locomotion requires energy to overcome friction, drag, inertia, and gravity, though in many circumstances some of these factors are negligible. In terrestrial environments gravity must be overcome, though the drag of air is much less of an issue. In aqueous environments however, friction (or drag) becomes the major challenge, with gravity being less of a concern. Although animals with natural buoyancy need not expend much energy maintaining vertical position, some will naturally sink and must expend energy to remain afloat. Drag may also present a problem in flight, and the aerodynamically efficient body shapes of birds highlight this point. Flight presents a different problem from movement in water however, as there is no way for a living organism to have lower density than air. Limbless organisms moving on land must often contend with surface friction, but do not usually need to expend significant energy to counteract gravity.
Newton's third law of motion is widely used in the study of animal locomotion: if at rest, to move forward an animal must push something backward. Terrestrial animals must push the solid ground; swimming and flying animals must push against a fluid (either water or air). The effect of forces during locomotion on the design of the skeletal system is also important, as is the interaction between locomotion and muscle physiology, in determining how the structures and effectors of locomotion enable or limit animal movement.
== See also ==
Jetpack
Transport
== References ==
== External links ==
Media related to Propulsion at Wikimedia Commons
Pickering, Steve (2009). "Propulsion Efficiency". Sixty Symbols. Brady Haran from the University of Nottingham.
In engineering, a process is a series of interrelated tasks that, together, transform inputs into a given output. These tasks may be carried out by people, nature or machines using various resources; an engineering process must be considered in the context of the agents carrying out the tasks and of the resource attributes involved. Systems engineering normative documents and those related to Maturity Models are typically based on processes, for example the systems engineering processes of EIA-632 and the processes of the Capability Maturity Model Integration (CMMI) institutionalization and improvement approach. Constraints imposed on the tasks, and on the resources required to implement them, are essential to executing those tasks.
== Semiconductor industry ==
Semiconductor process engineers face the unique challenge of transforming raw materials into high-tech devices. Common semiconductor devices include Integrated Circuits (ICs), Light-Emitting Diodes (LEDs), solar cells, and solid-state lasers. To produce these and other semiconductor devices, semiconductor process engineers rely heavily on interconnected physical and chemical processes.
A prominent example of these combined processes is the use of ultra-violet photolithography which is then followed by wet etching, the process of creating an IC pattern that is transferred onto an organic coating and etched onto the underlying semiconductor chip. Other examples include the ion implantation of dopant species to tailor the electrical properties of a semiconductor chip and the electrochemical deposition of metallic interconnects (e.g. electroplating). Process engineers are generally involved in the development, scaling, and quality control of new semiconductor processes from lab bench to manufacturing floor.
== Chemical engineering ==
A chemical process is a series of unit operations used to produce a material in large quantities.
In the chemical industry, chemical engineers will use the following to define or illustrate a process:
Process flow diagram (PFD)
Piping and instrumentation diagram (P&ID)
Simplified process description
Detailed process description
Project management
Process simulation
== CPRET ==
The Association Française d'Ingénierie Système has developed a process definition dedicated to Systems engineering (SE), but open to all domains.
The CPRET representation integrates the process Mission and Environment in order to offer an external standpoint. Several models may correspond to a single definition depending on the language used (UML or another language).
Note: process definition and process modeling are interdependent notions, but distinct from one another.
Process
A process is a set of transformations of input elements into products:
respecting constraints,
requiring resources,
meeting a defined mission, corresponding to a specific purpose adapted to a given environment.
Environment
Natural conditions and external factors impacting a process.
Mission
Purpose of the process tailored to a given environment.
This definition requires a process description to include the Constraints, Products, Resources, Input Elements and Transformations. This leads to the CPRET acronym to be used as name and mnemonic for this definition.
Constraints
Imposed conditions, rules or regulations.
Products
Everything that is generated by the transformations. Products can be of the desired or undesired type (e.g., the software system and its bugs, the defined products and waste).
Resources
Human resources, energy, time and other means required to carry out the transformations.
Elements as inputs
Elements submitted to transformations for producing the products.
Transformations
Operations organized according to a logic aimed at optimizing the attainment of specific products from the input elements, with the allocated resources and in compliance with the imposed constraints.
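As a minimal sketch (class and field names are illustrative, not part of the AFIS definition), the five CPRET components together with the mission and environment can be captured in a simple record; the instance below reuses the concert example from the next section:

```python
from dataclasses import dataclass, field

@dataclass
class CPRETProcess:
    """A process per CPRET: Constraints, Products, Resources,
    Elements as inputs, Transformations, plus mission and environment."""
    mission: str
    environment: str
    constraints: list[str] = field(default_factory=list)
    products: list[str] = field(default_factory=list)
    resources: list[str] = field(default_factory=list)
    elements_as_inputs: list[str] = field(default_factory=list)
    transformations: list[str] = field(default_factory=list)

concert = CPRETProcess(
    mission="Satisfy the public and critics",
    environment="An audience",
    constraints=["Correct acoustics"],
    products=["A show"],
    resources=["An orchestra and its instruments"],
    elements_as_inputs=["Scores"],
    transformations=["Play the scores"],
)
print(concert)
```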
=== CPRET through examples ===
The purpose of the following examples is to illustrate the definitions with concrete cases. These examples come from the Engineering field but also from other fields to show that the CPRET definition of processes is not limited to the System Engineering context.
Examples of processes
An engineering process (EIA-632, ISO/IEC 15288, etc.)
A concert
A polling campaign
A certification
Examples of environment
Various levels of maturity, technicality, equipment
An audience
A political system
Practices
Examples of mission
Supply better quality products
Satisfy the public, critics
Have candidates elected
Obtain the desired approval
Examples of constraints
Imposed technologies
Correct acoustics
Speaking times
A reference model (ISO, CMMI, etc.)
Examples of products
A mobile telephone network
A show
Vote results
A quality label
Examples of resources
Development teams
An orchestra and its instruments
An organization
An assessment team
Examples of elements as inputs
Specifications
Scores
Candidates
A company and its practices
Examples of transformations
Define an architecture
Play the scores
Make people vote for a candidate
Audit the organization
== Conclusions ==
The CPRET formalized definition systematically addresses the input Elements, Transformations and Products, but also the other essential components of a process, namely the Constraints and Resources. Among the resources, time is a special case: it passes inexorably and irreversibly, raising problems of synchronization and sequencing.
This definition states that environment is an external factor which cannot be avoided: as a matter of fact, a process is always interdependent with other phenomena including other processes.
== References ==
== Bibliography ==
A fictitious force, also known as an inertial force or pseudo-force, is a force that appears to act on an object when its motion is described or experienced from a non-inertial frame of reference. Unlike real forces, which result from physical interactions between objects, fictitious forces occur due to the acceleration of the observer’s frame of reference rather than any actual force acting on a body. These forces are necessary for describing motion correctly within an accelerating frame, ensuring that Newton's second law of motion remains applicable.
Common examples of fictitious forces include the centrifugal force, which appears to push objects outward in a rotating system; the Coriolis force, which affects moving objects in a rotating frame such as the Earth; and the Euler force, which arises when a rotating system changes its angular velocity. While these forces are not real in the sense of being caused by physical interactions, they are essential for accurately analyzing motion within accelerating reference frames, particularly in disciplines such as classical mechanics, meteorology, and astrophysics.
Fictitious forces play a crucial role in understanding everyday phenomena, such as weather patterns influenced by the Coriolis effect and the perceived weightlessness experienced by astronauts in free-fall orbits. They are also fundamental in engineering applications, including navigation systems and rotating machinery.
According to the general theory of relativity, what we perceive as gravitational force arises from the curvature of spacetime near massive objects, so even gravity might be called a fictitious force.
== Measurable examples of fictitious forces ==
For instance, passengers in a vehicle accelerating in the forward direction may perceive that a force is pressing them back into their seats. An example in a rotating reference frame is the impression that a force seems to move objects outward toward the rim of a centrifuge or carousel.
The fictitious force called a pseudo force might also be referred to as a body force. It is due to an object's inertia when the reference frame does not move inertially any more but begins to accelerate relative to the free object. In terms of the example of the passenger vehicle, a pseudo force seems to be active just before the body touches the backrest of the seat in the car. A person in the car leaning forward first moves a bit backward in relation to the already accelerating car before touching the backrest. The motion in this short period seems to be the result of a force on the person; i.e., it is a pseudo force. A pseudo force does not arise from any physical interaction between two objects, such as electromagnetism or contact forces. It is only a consequence of the acceleration of the physical object the non-inertial reference frame is connected to, i.e. the vehicle in this case. From the viewpoint of the respective accelerating frame, an acceleration of the inert object appears to be present, apparently requiring a "force" for this to have happened.
As stated by Iro:
Such an additional force due to nonuniform relative motion of two reference frames is called a pseudo-force.
The pseudo force on an object arises as an imaginary influence when the frame of reference used to describe the object's motion is accelerating compared to a non-accelerating frame. The pseudo force "explains", using Newton's second law mechanics, why an object does not follow Newton's second law and "floats freely" as if weightless. As a frame may accelerate in any arbitrary way, so may pseudo forces also be as arbitrary (but only in direct response to the acceleration of the frame). An example of a pseudo force as defined by Iro is the Coriolis force, perhaps better called the Coriolis effect. The gravitational force would also be a fictitious force (pseudo force) in a field model in which particles distort spacetime due to their mass, such as in the theory of general relativity.
Assuming Newton's second law in the form F = ma, fictitious forces are always proportional to the mass m.
The fictitious force that has been called an inertial force is also referred to as a d'Alembert force, or sometimes as a pseudo force. D'Alembert's principle is just another way of formulating Newton's second law of motion. It defines an inertial force as the negative of the product of mass times acceleration, just for the sake of easier calculations.
(A d'Alembert force is not to be confused with a contact force arising from the physical interaction between two objects, which is the subject of Newton's third law – 'action is reaction'. In terms of the example of the passenger vehicle above, a contact force emerges when the body of the passenger touches the backrest of the seat in the car. It is present for as long as the car is accelerated.)
Four fictitious forces have been defined for frames accelerated in commonly occurring ways:
one caused by any acceleration relative to the origin in a straight line (rectilinear acceleration);
two involving rotation: centrifugal force and Coriolis effect
and a fourth, called the Euler force, caused by a variable rate of rotation, should that occur.
== Background ==
The role of fictitious forces in Newtonian mechanics is described by Tonnelat:
For Newton, the appearance of acceleration always indicates the existence of absolute motion – absolute motion of matter where real forces are concerned; absolute motion of the reference system, where so-called fictitious forces, such as inertial forces or those of Coriolis, are concerned.
Fictitious forces arise in classical mechanics and special relativity in all non-inertial frames.
Inertial frames are privileged over non-inertial frames because they do not have physics whose causes are outside of the system, while non-inertial frames do. Fictitious forces, or physics whose cause is outside of the system, are no longer necessary in general relativity, since these physics are explained with the geodesics of spacetime: "The field of all possible space-time null geodesics or photon paths unifies the absolute local non-rotation standard throughout space-time."
== On Earth ==
The surface of the Earth is a rotating reference frame. To solve classical mechanics problems exactly in an Earthbound reference frame, three fictitious forces must be introduced: the Coriolis force, the centrifugal force (described below) and the Euler force. The Euler force is typically ignored because the variations in the angular velocity of the rotating surface of the Earth are usually insignificant. Both of the other fictitious forces are weak compared to most typical forces in everyday life, but they can be detected under careful conditions.
For example, Léon Foucault used his Foucault pendulum to show that the Coriolis force results from the Earth's rotation. If the Earth were to rotate twenty times faster (making each day only ~72 minutes long), people could easily get the impression that such fictitious forces were pulling on them, as on a spinning carousel. People in temperate and tropical latitudes would, in fact, need to hold on, in order to avoid being launched into orbit by the centrifugal force.
When moving along the equator in a ship heading east, objects appear to be slightly lighter than when the ship heads west. This phenomenon has been observed and is called the Eötvös effect.
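The size of this effect can be estimated with the Eötvös correction; the sketch below assumes an illustrative eastward speed of 10 m/s at the equator:

```python
import math

OMEGA_EARTH = 7.2921e-5  # Earth's angular speed, rad/s
R_EARTH = 6.371e6        # mean Earth radius, m

def eotvos_correction(east_speed_m_s: float, latitude_deg: float = 0.0) -> float:
    """Reduction of apparent gravity (m/s^2) for eastward motion:
    2 * Omega * u * cos(latitude) + u**2 / R."""
    u = east_speed_m_s
    return 2.0 * OMEGA_EARTH * u * math.cos(math.radians(latitude_deg)) + u * u / R_EARTH

print(f"{eotvos_correction(10.0):.2e} m/s^2")  # ~1.5e-3, i.e. roughly 0.015% of g
```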
== Detection of non-inertial reference frame ==
Observers inside a closed box that is moving with a constant velocity cannot detect their own motion; however, observers within an accelerating reference frame can detect that they are in a non-inertial reference frame from the fictitious forces that arise. For example, for straight-line acceleration Vladimir Arnold presents the following theorem:
In a coordinate system K which moves by translation relative to an inertial system k, the motion of a mechanical system takes place as if the coordinate system were inertial, but on every point of mass m an additional "inertial force" acted: F = −ma, where a is the acceleration of the system K.
Other accelerations also give rise to fictitious forces, as described mathematically below. The physical explanation of motions in an inertial frame is the simplest possible, requiring no fictitious forces: fictitious forces are zero, providing a means to distinguish inertial frames from others.
An example of the detection of a non-inertial, rotating reference frame is the precession of a Foucault pendulum. In the non-inertial frame of the Earth, the fictitious Coriolis force is necessary to explain observations. In an inertial frame outside the Earth, no such fictitious force is necessary.
== Example concerning circular motion ==
The effect of a fictitious force also occurs when a car takes a bend. Observed from a non-inertial frame of reference attached to the car, the fictitious force called the centrifugal force appears. As the car enters a left turn, a suitcase initially on the left rear seat slides to the right rear seat and then continues until it comes into contact with the closed door on the right. It may seem that a force must be responsible for this movement, but actually it arises because of the inertia of the suitcase, which is (still) a 'free object' within an already accelerating frame of reference.
After the suitcase has come into contact with the closed door of the car, the situation with the emergence of contact forces becomes current. The centripetal force on the car is now also transferred to the suitcase and the situation of Newton's third law comes into play, with the centripetal force as the action part and with the so-called reactive centrifugal force as the reaction part. The reactive centrifugal force is also due to the inertia of the suitcase. Now however the inertia appears in the form of a manifesting resistance to a change in its state of motion.
Suppose that a few miles further on the car travels a roundabout at constant speed, again and again; the occupants will then feel as if they are being pushed to the outside of the vehicle by the (reactive) centrifugal force, away from the centre of the turn.
The situation can be viewed from inertial as well as from non-inertial frames.
From the viewpoint of an inertial reference frame stationary with respect to the road, the car is accelerating toward the centre of the circle. It is accelerating, because the direction of the velocity is changing, despite the car having constant speed. This inward acceleration is called centripetal acceleration, it requires a centripetal force to maintain the circular motion. This force is exerted by the ground upon the wheels, in this case, from the friction between the wheels and the road. The car is accelerating, due to the unbalanced force, which causes it to move in a circle. (See also banked turn.)
From the viewpoint of a rotating frame, moving with the car, a fictitious centrifugal force appears to be present pushing the car toward the outside of the road (and pushing the occupants toward the outside of the car). The centrifugal force balances the friction between wheels and the road, making the car stationary in this non-inertial frame.
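For concreteness, the centripetal acceleration in the inertial view (equal in magnitude to the centrifugal force per unit mass in the car frame) is v²/r; a small sketch with illustrative numbers:

```python
def centripetal_acceleration(speed_m_s: float, radius_m: float) -> float:
    """a = v**2 / r, directed toward the centre of the turn."""
    return speed_m_s ** 2 / radius_m

a = centripetal_acceleration(15.0, 50.0)  # 54 km/h around a 50 m bend (illustrative)
print(f"a = {a:.2f} m/s^2 (~{a / 9.81:.2f} g)")
```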
A classic example of a fictitious force in circular motion is the experiment of rotating spheres tied by a cord and spinning around their centre of mass. In this case, the identification of a rotating, non-inertial frame of reference can be based upon the vanishing of fictitious forces. In an inertial frame, fictitious forces are not necessary to explain the tension in the string joining the spheres. In a rotating frame, Coriolis and centrifugal forces must be introduced to predict the observed tension.
In the rotating reference frame perceived on the surface of the Earth, a centrifugal force reduces the apparent force of gravity by a few parts in a thousand (about 0.35% at the equator), depending on latitude. This reduction is zero at the poles and maximal at the equator.
The fictitious Coriolis force, which is observed in rotational frames, is ordinarily visible only in very large-scale motion like the projectile motion of long-range guns or the circulation of the Earth's atmosphere (see Rossby number). Neglecting air resistance, an object dropped from a 50-meter-high tower at the equator will fall 7.7 millimetres eastward of the spot below where it is dropped because of the Coriolis force.
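Both equatorial figures can be checked with the standard small-deflection formulas (Ω²R/g for the centrifugal reduction, Ωgt³/3 for the eastward deflection of a dropped object); a short sketch:

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s
R_EQ = 6.378e6     # equatorial radius, m
G = 9.81           # m/s^2

print(f"centrifugal / g at equator = {OMEGA**2 * R_EQ / G:.4f}")  # ~0.0035

h = 50.0                          # drop height, m
t = math.sqrt(2.0 * h / G)        # fall time, neglecting air resistance
d_east = OMEGA * G * t**3 / 3.0   # eastward Coriolis deflection at the equator
print(f"eastward deflection = {d_east * 1e3:.1f} mm")  # ~7.8 mm, matching the figure above
```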
== Fictitious forces and work ==
Fictitious forces can be considered to do work, provided that they move an object on a trajectory that changes its energy from potential to kinetic. For example, consider a person in a rotating chair holding a weight in their outstretched hand. If they pull their hand inward toward their body, from the perspective of the rotating reference frame they have done work against the centrifugal force. When the weight is let go, it spontaneously flies outward relative to the rotating reference frame, because the centrifugal force does work on the object, converting its potential energy into kinetic. From an inertial viewpoint, of course, the object flies away because it is suddenly allowed to move in a straight line. This illustrates that the work done, like the total potential and kinetic energy of an object, can be different in a non-inertial frame than in an inertial one.
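In the rotating frame this bookkeeping can be made explicit. Treating the centrifugal force mω²r as derivable from a potential, the work done against it in pulling the weight inward from radius r1 to radius r2 is (a standard result, sketched here):
{\displaystyle W=\int _{r_{2}}^{r_{1}}m\omega ^{2}r\,dr={\tfrac {1}{2}}m\omega ^{2}\left(r_{1}^{2}-r_{2}^{2}\right),}
and this same amount reappears as kinetic energy in the rotating frame when the weight is released and flies back outward.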
== Gravity as a fictitious force ==
The notion of "fictitious force" also arises in Einstein's general theory of relativity. All fictitious forces are proportional to the mass of the object upon which they act, which is also true for gravity. This led Albert Einstein to wonder whether gravity could be modeled as a fictitious force. He noted that a freefalling observer in a closed box would not be able to detect the force of gravity; hence, freefalling reference frames are equivalent to inertial reference frames (the equivalence principle). Developing this insight, Einstein formulated a theory with gravity as a fictitious force, and attributed the apparent acceleration due to gravity to the curvature of spacetime. This idea underlies Einstein's theory of general relativity. See the Eötvös experiment.
== Mathematical derivation of fictitious forces ==
=== General derivation ===
Many problems require use of noninertial reference frames, for example, those involving satellites and particle accelerators. Figure 2 shows a particle with mass m and position vector xA(t) in a particular inertial frame A. Consider a non-inertial frame B whose origin relative to the inertial one is given by XAB(t). Let the position of the particle in frame B be xB(t). What is the force on the particle as expressed in the coordinate system of frame B?
To answer this question, let the coordinate axis in B be represented by unit vectors uj with j any of { 1, 2, 3 } for the three coordinate axes. Then
{\displaystyle \mathbf {x} _{\mathrm {B} }=\sum _{j=1}^{3}x_{j}\mathbf {u} _{j}\,.}
The interpretation of this equation is that xB is the vector displacement of the particle as expressed in terms of the coordinates in frame B at the time t. From frame A the particle is located at:
{\displaystyle \mathbf {x} _{\mathrm {A} }=\mathbf {X} _{\mathrm {AB} }+\sum _{j=1}^{3}x_{j}\mathbf {u} _{j}\,.}
As an aside, the unit vectors { uj } cannot change magnitude, so derivatives of these vectors express only rotation of the coordinate system B. On the other hand, vector XAB simply locates the origin of frame B relative to frame A, and so cannot include rotation of frame B.
Taking a time derivative, the velocity of the particle is:
{\displaystyle {\frac {d\mathbf {x} _{\mathrm {A} }}{dt}}={\frac {d\mathbf {X} _{\mathrm {AB} }}{dt}}+\sum _{j=1}^{3}{\frac {dx_{j}}{dt}}\mathbf {u} _{j}+\sum _{j=1}^{3}x_{j}{\frac {d\mathbf {u} _{j}}{dt}}\,.}
The second term summation is the velocity of the particle, say vB as measured in frame B. That is:
{\displaystyle {\frac {d\mathbf {x} _{\mathrm {A} }}{dt}}=\mathbf {v} _{\mathrm {AB} }+\mathbf {v} _{\mathrm {B} }+\sum _{j=1}^{3}x_{j}{\frac {d\mathbf {u} _{j}}{dt}}.}
The interpretation of this equation is that the velocity of the particle seen by observers in frame A consists of what observers in frame B call the velocity, namely vB, plus two extra terms related to the rate of change of the frame-B coordinate axes. One of these is simply the velocity of the moving origin vAB. The other is a contribution to velocity due to the fact that different locations in the non-inertial frame have different apparent velocities due to the rotation of the frame; a point seen from a rotating frame has a rotational component of velocity that is greater the further the point is from the origin.
To find the acceleration, another time differentiation provides:
{\displaystyle {\frac {d^{2}\mathbf {x} _{\mathrm {A} }}{dt^{2}}}=\mathbf {a} _{\mathrm {AB} }+{\frac {d\mathbf {v} _{\mathrm {B} }}{dt}}+\sum _{j=1}^{3}{\frac {dx_{j}}{dt}}{\frac {d\mathbf {u} _{j}}{dt}}+\sum _{j=1}^{3}x_{j}{\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}}.}
Using the same formula already used for the time derivative of xB, the velocity derivative on the right is:
{\displaystyle {\frac {d\mathbf {v} _{\mathrm {B} }}{dt}}=\sum _{j=1}^{3}{\frac {dv_{j}}{dt}}\mathbf {u} _{j}+\sum _{j=1}^{3}v_{j}{\frac {d\mathbf {u} _{j}}{dt}}=\mathbf {a} _{\mathrm {B} }+\sum _{j=1}^{3}v_{j}{\frac {d\mathbf {u} _{j}}{dt}}.}
Consequently,
{\displaystyle {\frac {d^{2}\mathbf {x} _{\mathrm {A} }}{dt^{2}}}=\mathbf {a} _{\mathrm {AB} }+\mathbf {a} _{\mathrm {B} }+2\sum _{j=1}^{3}v_{j}{\frac {d\mathbf {u} _{j}}{dt}}+\sum _{j=1}^{3}x_{j}{\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}}\,.\qquad {\text{(Eq. 1)}}}
The interpretation of this equation is as follows: the acceleration of the particle in frame A consists of what observers in frame B call the particle acceleration aB, but in addition, there are three acceleration terms related to the movement of the frame-B coordinate axes: one term related to the acceleration of the origin of frame B, namely aAB, and two terms related to the rotation of frame B. Consequently, observers in B will see the particle motion as possessing "extra" acceleration, which they will attribute to "forces" acting on the particle, but which observers in A say are "fictitious" forces arising simply because observers in B do not recognize the non-inertial nature of frame B.
The factor of two in the Coriolis force arises from two equal contributions: (i) the apparent change of an inertially constant velocity with time because rotation makes the direction of the velocity seem to change (a dvB/dt term) and (ii) an apparent change in the velocity of an object when its position changes, putting it nearer to or further from the axis of rotation (the change in
{\textstyle \sum x_{j}\,d\mathbf {u} _{j}/dt} due to change in x_{j}).
To put matters in terms of forces, the accelerations are multiplied by the particle mass:
{\displaystyle \mathbf {F} _{\mathrm {A} }=\mathbf {F} _{\mathrm {B} }+m\mathbf {a} _{\mathrm {AB} }+2m\sum _{j=1}^{3}v_{j}{\frac {d\mathbf {u} _{j}}{dt}}+m\sum _{j=1}^{3}x_{j}{\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}}\ .}
The force observed in frame B, FB = maB is related to the actual force on the particle, FA, by
{\displaystyle \mathbf {F} _{\mathrm {B} }=\mathbf {F} _{\mathrm {A} }+\mathbf {F} _{\mathrm {fictitious} },}
where:
{\displaystyle \mathbf {F} _{\mathrm {fictitious} }=-m\mathbf {a} _{\mathrm {AB} }-2m\sum _{j=1}^{3}v_{j}{\frac {d\mathbf {u} _{j}}{dt}}-m\sum _{j=1}^{3}x_{j}{\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}}\,.}
Thus, problems may be solved in frame B by assuming that Newton's second law holds (with respect to quantities in that frame) and treating Ffictitious as an additional force.
Below are a number of examples applying this result for fictitious forces. More examples can be found in the article on centrifugal force.
=== Rotating coordinate systems ===
A common situation in which noninertial reference frames are useful is when the reference frame is rotating. Because such rotational motion is non-inertial, due to the acceleration present in any rotational motion, a fictitious force can always be invoked by using a rotational frame of reference. Despite this complication, the use of fictitious forces often simplifies the calculations involved.
To derive expressions for the fictitious forces, derivatives are needed for the apparent time rate of change of vectors that take into account time-variation of the coordinate axes. If the rotation of frame 'B' is represented by a vector Ω pointed along the axis of rotation with the orientation given by the right-hand rule, and with magnitude given by
{\displaystyle |{\boldsymbol {\Omega }}|={\frac {d\theta }{dt}}=\omega (t),}
then the time derivative of any of the three unit vectors describing frame B is
{\displaystyle {\frac {d\mathbf {u} _{j}(t)}{dt}}={\boldsymbol {\Omega }}\times \mathbf {u} _{j}(t),}
and
{\displaystyle {\frac {d^{2}\mathbf {u} _{j}(t)}{dt^{2}}}={\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {u} _{j}+{\boldsymbol {\Omega }}\times {\frac {d\mathbf {u} _{j}(t)}{dt}}={\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {u} _{j}+{\boldsymbol {\Omega }}\times \left[{\boldsymbol {\Omega }}\times \mathbf {u} _{j}(t)\right],}
as is verified using the properties of the vector cross product. These derivative formulas now are applied to the relationship between acceleration in an inertial frame, and that in a coordinate frame rotating with time-varying angular velocity ω(t). From the previous section, where subscript A refers to the inertial frame and B to the rotating frame, setting aAB = 0 to remove any translational acceleration, and focusing on only rotational properties (see Eq. 1):
{\displaystyle {\frac {d^{2}\mathbf {x} _{\mathrm {A} }}{dt^{2}}}=\mathbf {a} _{\mathrm {B} }+2\sum _{j=1}^{3}v_{j}\ {\frac {d\mathbf {u} _{j}}{dt}}+\sum _{j=1}^{3}x_{j}{\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}},}
{\displaystyle {\begin{aligned}\mathbf {a} _{\mathrm {A} }&=\mathbf {a} _{\mathrm {B} }+\ 2\sum _{j=1}^{3}v_{j}{\boldsymbol {\Omega }}\times \mathbf {u} _{j}(t)+\sum _{j=1}^{3}x_{j}{\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {u} _{j}\ +\sum _{j=1}^{3}x_{j}{\boldsymbol {\Omega }}\times \left[{\boldsymbol {\Omega }}\times \mathbf {u} _{j}(t)\right]\\&=\mathbf {a} _{\mathrm {B} }+2{\boldsymbol {\Omega }}\times \sum _{j=1}^{3}v_{j}\mathbf {u} _{j}(t)+{\frac {d{\boldsymbol {\Omega }}}{dt}}\times \sum _{j=1}^{3}x_{j}\mathbf {u} _{j}+{\boldsymbol {\Omega }}\times \left[{\boldsymbol {\Omega }}\times \sum _{j=1}^{3}x_{j}\mathbf {u} _{j}(t)\right].\end{aligned}}}
Collecting terms, the result is the so-called acceleration transformation formula:
{\displaystyle \mathbf {a} _{\mathrm {A} }=\mathbf {a} _{\mathrm {B} }+2{\boldsymbol {\Omega }}\times \mathbf {v} _{\mathrm {B} }+{\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {x} _{\mathrm {B} }+{\boldsymbol {\Omega }}\times \left({\boldsymbol {\Omega }}\times \mathbf {x} _{\mathrm {B} }\right)\,.}
The physical acceleration aA due to what observers in the inertial frame A call real external forces on the object is, therefore, not simply the acceleration aB seen by observers in the rotational frame B, but has several additional geometric acceleration terms associated with the rotation of B. As seen in the rotational frame, the acceleration aB of the particle is given by rearrangement of the above equation as:
{\displaystyle \mathbf {a} _{\mathrm {B} }=\mathbf {a} _{\mathrm {A} }-2{\boldsymbol {\Omega }}\times \mathbf {v} _{\mathrm {B} }-{\boldsymbol {\Omega }}\times ({\boldsymbol {\Omega }}\times \mathbf {x} _{\mathrm {B} })-{\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {x} _{\mathrm {B} }.}
The net force upon the object according to observers in the rotating frame is FB = maB. If their observations are to result in the correct force on the object when using Newton's laws, they must consider that the additional force Ffict is present, so the end result is FB = FA + Ffict. Thus, the fictitious force used by observers in B to get the correct behaviour of the object from Newton's laws equals:
{\displaystyle \mathbf {F} _{\mathrm {fict} }=-2m{\boldsymbol {\Omega }}\times \mathbf {v} _{\mathrm {B} }-m{\boldsymbol {\Omega }}\times ({\boldsymbol {\Omega }}\times \mathbf {x} _{\mathrm {B} })-m{\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {x} _{\mathrm {B} }.}
Here, the first term is the Coriolis force, the second term is the centrifugal force, and the third term is the Euler force.
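A direct numerical transcription of this result (a sketch; the mass, angular velocity, its rate of change, and the particle state are illustrative) separates the three named contributions:

```python
import numpy as np

def fictitious_force(m, omega, domega_dt, x_b, v_b):
    """Coriolis, centrifugal and Euler terms of
    F_fict = -2m Omega x v_B - m Omega x (Omega x x_B) - m (dOmega/dt) x x_B."""
    coriolis = -2.0 * m * np.cross(omega, v_b)
    centrifugal = -m * np.cross(omega, np.cross(omega, x_b))
    euler = -m * np.cross(domega_dt, x_b)
    return coriolis, centrifugal, euler

m = 1.0                                # kg
omega = np.array([0.0, 0.0, 2.0])      # frame spinning about z, rad/s
domega_dt = np.array([0.0, 0.0, 0.5])  # frame spinning up, rad/s^2
x_b = np.array([1.0, 0.0, 0.0])        # particle position in frame B, m
v_b = np.array([0.0, 1.0, 0.0])        # particle velocity in frame B, m/s

for name, f in zip(("Coriolis", "centrifugal", "Euler"),
                   fictitious_force(m, omega, domega_dt, x_b, v_b)):
    print(f"{name:11s}: {f}")
```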
=== Orbiting coordinate systems ===
As a related example, suppose the moving coordinate system B rotates with a constant angular speed ω in a circle of radius R about the fixed origin of inertial frame A, but maintains its coordinate axes fixed in orientation, as in Figure 3. The acceleration of an observed body is now (see Eq. 1):
{\displaystyle {\begin{aligned}{\frac {d^{2}\mathbf {x} _{A}}{dt^{2}}}&=\mathbf {a} _{AB}+\mathbf {a} _{B}+2\ \sum _{j=1}^{3}v_{j}\ {\frac {d\mathbf {u} _{j}}{dt}}+\sum _{j=1}^{3}x_{j}\ {\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}}\\&=\mathbf {a} _{AB}\ +\mathbf {a} _{B}\ ,\end{aligned}}}
where the summations are zero inasmuch as the unit vectors have no time dependence. The origin of the system B is located according to frame A at:
{\displaystyle \mathbf {X} _{AB}=R\left(\cos(\omega t),\ \sin(\omega t)\right)\ ,}
leading to a velocity of the origin of frame B as:
{\displaystyle \mathbf {v} _{AB}={\frac {d}{dt}}\mathbf {X} _{AB}=\mathbf {\Omega \times X} _{AB}\ ,}
leading to an acceleration of the origin of B given by:
{\displaystyle \mathbf {a} _{AB}={\frac {d^{2}}{dt^{2}}}\mathbf {X} _{AB}=\mathbf {\Omega \ \times } \left(\mathbf {\Omega \times X} _{AB}\right)=-\omega ^{2}\mathbf {X} _{AB}\,.}
Because the first term, which is
{\displaystyle \mathbf {\Omega \ \times } \left(\mathbf {\Omega \times X} _{AB}\right)\,,}
is of the same form as the normal centrifugal force expression:
{\displaystyle {\boldsymbol {\Omega }}\times \left({\boldsymbol {\Omega }}\times \mathbf {x} _{B}\right)\,,}
it is a natural extension of standard terminology (although there is no standard terminology for this case) to call this term a "centrifugal force". Whatever terminology is adopted, the observers in frame B must introduce a fictitious force, this time due to the acceleration from the orbital motion of their entire coordinate frame, that is radially outward away from the centre of rotation of the origin of their coordinate system:
{\displaystyle \mathbf {F} _{\mathrm {fict} }=m\omega ^{2}\mathbf {X} _{AB}\,,}
and of magnitude:
{\displaystyle |\mathbf {F} _{\mathrm {fict} }|=m\omega ^{2}R\,.}
This "centrifugal force" has differences from the case of a rotating frame. In the rotating frame the centrifugal force is related to the distance of the object from the origin of frame B, while in the case of an orbiting frame, the centrifugal force is independent of the distance of the object from the origin of frame B, but instead depends upon the distance of the origin of frame B from its centre of rotation, resulting in the same centrifugal fictitious force for all objects observed in frame B.
=== Orbiting and rotating ===
As a combination example, Figure 4 shows a coordinate system B that orbits inertial frame A as in Figure 3, but the coordinate axes in frame B turn so unit vector u1 always points toward the centre of rotation. This example might apply to a test tube in a centrifuge, where vector u1 points along the axis of the tube toward its opening at its top. It also resembles the Earth–Moon system, where the Moon always presents the same face to the Earth. In this example, unit vector u3 retains a fixed orientation, while vectors u1, u2 rotate at the same rate as the origin of coordinates. That is,
{\displaystyle \mathbf {u} _{1}=(-\cos \omega t,\ -\sin \omega t)\ ;\ \mathbf {u} _{2}=(\sin \omega t,\ -\cos \omega t)\,.}
{\displaystyle {\frac {d}{dt}}\mathbf {u} _{1}=\mathbf {\Omega \times u_{1}} =\omega \mathbf {u} _{2}\ ;\ {\frac {d}{dt}}\mathbf {u} _{2}=\mathbf {\Omega \times u_{2}} =-\omega \mathbf {u} _{1}\ \ .}
Hence, the acceleration of a moving object is expressed as (see Eq. 1):
{\displaystyle {\begin{aligned}{\frac {d^{2}\mathbf {x} _{A}}{dt^{2}}}&=\mathbf {a} _{AB}+\mathbf {a} _{B}+2\ \sum _{j=1}^{3}v_{j}\ {\frac {d\mathbf {u} _{j}}{dt}}+\ \sum _{j=1}^{3}x_{j}\ {\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}}\\&=\mathbf {\Omega \ \times } \left(\mathbf {\Omega \times X} _{AB}\right)+\mathbf {a} _{B}+2\ \sum _{j=1}^{3}v_{j}\ \mathbf {\Omega \times u_{j}} \ +\ \sum _{j=1}^{3}x_{j}\ {\boldsymbol {\Omega }}\times \left({\boldsymbol {\Omega }}\times \mathbf {u} _{j}\right)\\&=\mathbf {\Omega \ \times } \left(\mathbf {\Omega \times X} _{AB}\right)+\mathbf {a} _{B}+2\ {\boldsymbol {\Omega }}\times \mathbf {v} _{B}\ \ +\ {\boldsymbol {\Omega }}\times \left({\boldsymbol {\Omega }}\times \mathbf {x} _{B}\right)\\&=\mathbf {\Omega \ \times } \left(\mathbf {\Omega \times } (\mathbf {X} _{AB}+\mathbf {x} _{B})\right)+\mathbf {a} _{B}+2\ {\boldsymbol {\Omega }}\times \mathbf {v} _{B}\ \,,\end{aligned}}}
where the angular acceleration term is zero for the constant rate of rotation.
Because the first term, which is
{\displaystyle \mathbf {\Omega \ \times } \left(\mathbf {\Omega \times } (\mathbf {X} _{AB}+\mathbf {x} _{B})\right)\,,}
is of the same form as the normal centrifugal force expression:
{\displaystyle {\boldsymbol {\Omega }}\times \left({\boldsymbol {\Omega }}\times \mathbf {x} _{B}\right)\,,}
it is a natural extension of standard terminology (although there is no standard terminology for this case) to call this term the "centrifugal force". Applying this terminology to the example of a tube in a centrifuge, if the tube is far enough from the centre of rotation, |XAB| = R ≫ |xB|, all the matter in the test tube sees the same acceleration (the same centrifugal force). Thus, in this case, the fictitious force is primarily a uniform centrifugal force along the axis of the tube, away from the centre of rotation, with a value |Ffict| = mω2R, where R is the distance of the matter in the tube from the centre of the centrifuge. It is standard practice to specify a centrifuge by its "effective" radius when estimating its ability to provide centrifugal force. Thus, a first estimate of centrifugal force in a centrifuge can be based upon the distance of the tubes from the centre of rotation, with corrections applied if needed.
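In laboratory practice this estimate is usually expressed as the relative centrifugal force (RCF): the centrifugal acceleration at the effective radius, in multiples of g. A minimal sketch in Python (the rotor speed and radius below are illustrative assumptions, not values from the text):

import math

def relative_centrifugal_force(rpm, radius_m, g0=9.80665):
    # omega^2 * R, expressed in multiples of standard gravity
    omega = 2.0 * math.pi * rpm / 60.0     # rad/s
    return omega**2 * radius_m / g0

# A benchtop rotor at 5000 rpm with tubes 8 cm from the axis:
print(round(relative_centrifugal_force(5000, 0.08)))   # about 2237 g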
Also, the test tube confines motion to the direction down the length of the tube, so vB is opposite to u1 and the Coriolis force is opposite to u2, that is, against the wall of the tube. If the tube is spun for a long enough time, the velocity vB drops to zero as the matter comes to an equilibrium distribution. For more details, see the articles on sedimentation and the Lamm equation.
A related problem is that of centrifugal forces for the Earth–Moon–Sun system, where three rotations appear: the daily rotation of the Earth about its axis, the lunar-month rotation of the Earth–Moon system about its centre of mass, and the annual revolution of the Earth–Moon system about the Sun. These three motions influence the tides.
=== Crossing a carousel ===
Figure 5 shows another example comparing the observations of an inertial observer with those of an observer on a rotating carousel. The carousel rotates at a constant angular velocity represented by the vector Ω with magnitude ω, pointing upward according to the right-hand rule. A rider on the carousel walks radially across it at a constant speed, in what appears to the walker to be the straight line path inclined at 45° in Figure 5. To the stationary observer, however, the walker travels a spiral path. The points identified on both paths in Figure 5 correspond to the same times spaced at equal time intervals. We ask how two observers, one on the carousel and one in an inertial frame, formulate what they see using Newton's laws.
==== Inertial observer ====
The observer at rest describes the path followed by the walker as a spiral. Adopting the coordinate system shown in Figure 5, the trajectory is described by r(t):
{\displaystyle \mathbf {r} (t)=R(t)\mathbf {u} _{R}={\begin{bmatrix}x(t)\\y(t)\end{bmatrix}}={\begin{bmatrix}R(t)\cos(\omega t+\pi /4)\\R(t)\sin(\omega t+\pi /4)\end{bmatrix}},}
where the added π/4 sets the path angle at 45° to start with (just an arbitrary choice of direction), uR is a unit vector in the radial direction pointing from the centre of the carousel to the walker at the time t. The radial distance R(t) increases steadily with time according to:
{\displaystyle R(t)=st,}
with s the speed of walking. According to simple kinematics, the velocity is then the first derivative of the trajectory:
{\displaystyle {\begin{aligned}\mathbf {v} (t)&={\frac {dR}{dt}}{\begin{bmatrix}\cos(\omega t+\pi /4)\\\sin(\omega t+\pi /4)\end{bmatrix}}+\omega R(t){\begin{bmatrix}-\sin(\omega t+\pi /4)\\\cos(\omega t+\pi /4)\end{bmatrix}}\\&={\frac {dR}{dt}}\mathbf {u} _{R}+\omega R(t)\mathbf {u} _{\theta },\end{aligned}}}
with uθ a unit vector perpendicular to uR at time t (as can be verified by noticing that the vector dot product with the radial vector is zero) and pointing in the direction of travel.
The acceleration is the first derivative of the velocity:
{\displaystyle {\begin{aligned}\mathbf {a} (t)&={\frac {d^{2}R}{dt^{2}}}{\begin{bmatrix}\cos(\omega t+\pi /4)\\\sin(\omega t+\pi /4)\end{bmatrix}}+2{\frac {dR}{dt}}\omega {\begin{bmatrix}-\sin(\omega t+\pi /4)\\\cos(\omega t+\pi /4)\end{bmatrix}}-\omega ^{2}R(t){\begin{bmatrix}\cos(\omega t+\pi /4)\\\sin(\omega t+\pi /4)\end{bmatrix}}\\&=2s\omega {\begin{bmatrix}-\sin(\omega t+\pi /4)\\\cos(\omega t+\pi /4)\end{bmatrix}}-\omega ^{2}R(t){\begin{bmatrix}\cos(\omega t+\pi /4)\\\sin(\omega t+\pi /4)\end{bmatrix}}\\&=2s\ \omega \ \mathbf {u} _{\theta }-\omega ^{2}R(t)\ \mathbf {u} _{R}\,.\end{aligned}}}
The last term in the acceleration is radially inward of magnitude ω2 R, which is therefore the instantaneous centripetal acceleration of circular motion. The first term is perpendicular to the radial direction, and pointing in the direction of travel. Its magnitude is 2sω, and it represents the acceleration of the walker as the edge of the carousel is neared, and the arc of the circle travelled in a fixed time increases, as can be seen by the increased spacing between points for equal time steps on the spiral in Figure 5 as the outer edge of the carousel is approached.
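The differentiation above is mechanical enough to delegate to a computer algebra system. This sketch, assuming SymPy is available, rebuilds the trajectory and confirms that its second derivative matches 2sω uθ − ω2R(t) uR:

import sympy as sp

t, s, w = sp.symbols('t s omega', positive=True)
R = s * t                                    # radial distance R(t) = s t
phase = w * t + sp.pi / 4                    # the 45-degree starting angle
x = R * sp.cos(phase)                        # trajectory seen by the
y = R * sp.sin(phase)                        # inertial observer

u_R = sp.Matrix([sp.cos(phase), sp.sin(phase)])      # radial unit vector
u_th = sp.Matrix([-sp.sin(phase), sp.cos(phase)])    # transverse unit vector

accel = sp.Matrix([sp.diff(x, t, 2), sp.diff(y, t, 2)])
expected = 2 * s * w * u_th - w**2 * R * u_R
print(sp.simplify(accel - expected))         # Matrix([[0], [0]])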
Applying Newton's laws, multiplying the acceleration by the mass of the walker, the inertial observer concludes that the walker is subject to two forces: the inward radially directed centripetal force and another force perpendicular to the radial direction that is proportional to the speed of the walker.
==== Rotating observer ====
The rotating observer sees the walker travel a straight line from the centre of the carousel to the periphery, as shown in Figure 5. Moreover, the rotating observer sees that the walker moves at a constant speed in the same direction, so applying Newton's law of inertia, there is zero force upon the walker. These conclusions do not agree with the inertial observer. To obtain agreement, the rotating observer has to introduce fictitious forces that appear to exist in the rotating world, even though there is no apparent reason for them, no apparent gravitational mass, electric charge or what have you, that could account for these fictitious forces.
To agree with the inertial observer, the forces applied to the walker must be exactly those found above. They can be related to the general formulas already derived, namely:
{\displaystyle \mathbf {F} _{\mathrm {fict} }=-2m{\boldsymbol {\Omega }}\times \mathbf {v} _{\mathrm {B} }-m{\boldsymbol {\Omega }}\times ({\boldsymbol {\Omega }}\times \mathbf {x} _{\mathrm {B} })-m{\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {x} _{\mathrm {B} }.}
In this example, the velocity seen in the rotating frame is:
{\displaystyle \mathbf {v} _{\mathrm {B} }=s\mathbf {u} _{R},}
with uR a unit vector in the radial direction. The position of the walker as seen on the carousel is:
{\displaystyle \mathbf {x} _{\mathrm {B} }=R(t)\mathbf {u} _{R},}
and the time derivative of Ω is zero for uniform angular rotation. Noticing that
{\displaystyle {\boldsymbol {\Omega }}\times \mathbf {u} _{R}=\omega \mathbf {u} _{\theta }}
and
{\displaystyle {\boldsymbol {\Omega }}\times \mathbf {u} _{\theta }=-\omega \mathbf {u} _{R}\,,}
we find:
{\displaystyle \mathbf {F} _{\mathrm {fict} }=-2m\omega s\mathbf {u} _{\theta }+m\omega ^{2}R(t)\mathbf {u} _{R}.}
To obtain a straight-line motion in the rotating world, a force exactly opposite in sign to the fictitious force must be applied to reduce the net force on the walker to zero, so Newton's law of inertia will predict a straight-line motion, in agreement with what the rotating observer sees. The fictitious forces that must be combated are the Coriolis force (first term) and the centrifugal force (second term). By applying forces to counter these two fictitious forces, the rotating observer ends up applying exactly the same forces upon the walker that the inertial observer predicted were needed.
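For concreteness, the two fictitious terms can be evaluated for illustrative numbers (a hypothetical 70 kg walker; none of these values come from the article):

m = 70.0      # kg, walker's mass (assumed)
w = 0.5       # rad/s, carousel angular speed (assumed)
s = 1.0       # m/s, radial walking speed (assumed)
R = 3.0       # m, walker's current distance from the centre

coriolis = 2.0 * m * w * s        # magnitude of -2m(Omega x vB), along -u_theta
centrifugal = m * w**2 * R        # magnitude of -m Omega x (Omega x xB), along +u_R
print(coriolis, centrifugal)      # 70.0 N sideways, 52.5 N outward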
Because their frames differ only by the walker's constant radial velocity, the walker and the rotating observer see the same accelerations. From the walker's perspective, the fictitious force is experienced as real, and combating it is necessary to stay on a straight radial path at constant speed. It is like battling a crosswind while being thrown to the edge of the carousel.
=== Observation ===
Notice that this kinematical discussion does not delve into the mechanism by which the required forces are generated. That is the subject of kinetics. In the case of the carousel, the kinetic discussion would involve perhaps a study of the walker's shoes and the friction they need to generate against the floor of the carousel, or perhaps the dynamics of skateboarding if the walker switched to travel by skateboard. Whatever the means of travel across the carousel, the forces calculated above must be realized. A very rough analogy is heating your house: you must have a certain temperature to be comfortable, but whether you heat by burning gas or by burning coal is another problem. Kinematics sets the thermostat, kinetics fires the furnace.
== See also ==
== References ==
== Further reading ==
Lev D. Landau and E. M. Lifshitz (1976). Mechanics. Course of Theoretical Physics. Vol. 1 (3rd ed.). Butterworth-Heinemann. pp. 128–130. ISBN 0-7506-2896-0.
Keith Symon (1971). Mechanics (3rd ed.). Addison-Wesley. ISBN 0-201-07392-7.
Jerry B. Marion (1970). Classical Dynamics of Particles and Systems. Academic Press. ISBN 0-12-472252-0.
Marcel J. Sidi (1997). Spacecraft Dynamics and Control: A Practical Engineering Approach. Cambridge University Press. Chapter 4.8. ISBN 0-521-78780-7.
== External links ==
Q and A from Richard C. Brill, Honolulu Community College
NASA's David Stern: Lesson Plans for Teachers #23 on Inertial Forces
Coriolis Force
Motion over a flat surface Java physlet by Brian Fiedler illustrating fictitious forces. The physlet shows both the perspective as seen from a rotating and from a non-rotating point of view.
Motion over a parabolic surface Java physlet by Brian Fiedler illustrating fictitious forces. The physlet shows both the perspective as seen from a rotating and as seen from a non-rotating point of view. | Wikipedia/Inertial_force |
The g-force or gravitational force equivalent is a mass-specific force (force per unit mass), expressed in units of standard gravity (symbol g or g0, not to be confused with "g", the symbol for grams).
It is used for sustained accelerations that cause a perception of weight. For example, an object at rest on Earth's surface is subject to 1 g, equaling the conventional value of gravitational acceleration on Earth, about 9.8 m/s2.
More transient acceleration, accompanied with significant jerk, is called shock.
When the g-force is produced by the surface of one object being pushed by the surface of another object, the reaction force to this push produces an equal and opposite force for every unit of each object's mass. The types of forces involved are transmitted through objects by interior mechanical stresses. Gravitational acceleration is one cause of an object's acceleration in relation to free fall.
The g-force experienced by an object is due to the vector sum of all gravitational and non-gravitational forces acting on an object's freedom to move. In practice, as noted, these are surface-contact forces between objects. Such forces cause stresses and strains on objects, since they must be transmitted from an object surface. Because of these strains, large g-forces may be destructive.
For example, a force of 1 g on an object sitting on the Earth's surface is caused by the mechanical force exerted in the upward direction by the ground, keeping the object from going into free fall. The upward contact force from the ground ensures that an object at rest on the Earth's surface is accelerating relative to the free-fall condition. (Free fall is the path that the object would follow when falling freely toward the Earth's center.) Stress inside the object arises because the ground contact forces are transmitted only from the point of contact with the ground.
Objects allowed to free-fall in an inertial trajectory, under the influence of gravitation only, feel no g-force – a condition known as weightlessness. Being in free fall in an inertial trajectory is colloquially called "zero-g", which is short for "zero g-force". Zero g-force conditions would occur inside an elevator falling freely toward the Earth's center (in vacuum), or (to good approximation) inside a spacecraft in Earth orbit. These are examples of coordinate acceleration (a change in velocity) without a sensation of weight.
In the absence of gravitational fields, or in directions at right angles to them, proper and coordinate accelerations are the same, and any coordinate acceleration must be produced by a corresponding g-force acceleration. An example of this is a rocket in free space: when the engines produce simple changes in velocity, those changes cause g-forces on the rocket and the passengers.
== Unit and measurement ==
The unit of measure of acceleration in the International System of Units (SI) is m/s2. However, to distinguish acceleration relative to free fall from simple acceleration (rate of change of velocity), the unit g is often used. One g is the force per unit mass due to gravity at the Earth's surface and is the standard gravity (symbol: gn), defined as 9.80665 metres per second squared, or equivalently 9.80665 newtons of force per kilogram of mass. The unit definition does not vary with location—the g-force when standing on the Moon is almost exactly 1⁄6 that on Earth.
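Since the unit is a fixed conversion factor, translating between m/s2 and g is a one-line calculation. A minimal sketch in Python (the sample values simply echo figures appearing elsewhere in this article):

G0 = 9.80665                      # m/s^2, standard gravity by definition

def ms2_to_g(a_ms2):              # acceleration in m/s^2 -> multiples of g
    return a_ms2 / G0

def g_to_ms2(n_g):                # multiples of g -> m/s^2
    return n_g * G0

print(ms2_to_g(88.0))             # about 9.0 g, a sustained fighter-pilot load
print(g_to_ms2(5.3))              # about 52 m/s^2, the dragster figure below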
The unit g is not one of the SI units, which uses "g" for gram. Also, "g" should not be confused with "G", which is the standard symbol for the gravitational constant. This notation is commonly used in aviation, especially in aerobatic or combat military aviation, to describe the increased forces that must be overcome by pilots in order to remain conscious and not g-LOC (g-induced loss of consciousness).
Measurement of g-force is typically achieved using an accelerometer (see the section Measurement using an accelerometer below). In certain cases, g-forces may be measured using suitably calibrated scales.
== Acceleration and forces ==
The term g-"force" is technically incorrect as it is a measure of acceleration, not force. While acceleration is a vector quantity, g-force accelerations ("g-forces" for short) are often expressed as a scalar, based on the vector magnitude, with positive g-forces pointing downward (indicating upward acceleration), and negative g-forces pointing upward. Thus, a g-force is a vector of acceleration. It is an acceleration that must be produced by a mechanical force, and cannot be produced by simple gravitation. Objects acted upon only by gravitation experience (or "feel") no g-force, and are weightless.
g-forces, when multiplied by a mass upon which they act, are associated with a certain type of mechanical force in the correct sense of the term "force", and this force produces compressive stress and tensile stress. Such forces result in the operational sensation of weight, but the equation carries a sign change due to the definition of positive weight in the direction downward, so the direction of weight-force is opposite to the direction of g-force acceleration:
Weight = mass × −g-force
The reason for the minus sign is that the actual force (i.e., measured weight) on an object produced by a g-force is in the opposite direction to the sign of the g-force, since in physics, weight is not the force that produces the acceleration, but rather the equal-and-opposite reaction force to it. If the direction upward is taken as positive (the normal cartesian convention) then positive g-force (an acceleration vector that points upward) produces a force/weight on any mass, that acts downward (an example is positive-g acceleration of a rocket launch, producing downward weight). In the same way, a negative-g force is an acceleration vector downward (the negative direction on the y axis), and this acceleration downward produces a weight-force in a direction upward (thus pulling a pilot upward out of the seat, and forcing blood toward the head of a normally oriented pilot).
If a g-force (acceleration) is vertically upward and is applied by the ground (which is accelerating through space-time) or applied by the floor of an elevator to a standing person, most of the body experiences compressive stress which at any height, if multiplied by the area, is the related mechanical force, which is the product of the g-force and the supported mass (the mass above the level of support, including arms hanging down from above that level). At the same time, the arms themselves experience a tensile stress, which at any height, if multiplied by the area, is again the related mechanical force, which is the product of the g-force and the mass hanging below the point of mechanical support. The mechanical resistive force spreads from points of contact with the floor or supporting structure, and gradually decreases toward zero at the unsupported ends (the top in the case of support from below, such as a seat or the floor, the bottom for a hanging part of the body or object). With compressive force counted as negative tensile force, the rate of change of the tensile force in the direction of the g-force, per unit mass (the change between parts of the object such that the slice of the object between them has unit mass), is equal to the g-force plus the non-gravitational external forces on the slice, if any (counted positive in the direction opposite to the g-force).
For a given g-force the stresses are the same, regardless of whether this g-force is caused by mechanical resistance to gravity, or by a coordinate acceleration (change in velocity) caused by a mechanical force, or by a combination of these. Hence, for people, all mechanical forces feel exactly the same whether they cause coordinate acceleration or not. For objects likewise, the question of whether they can withstand the mechanical g-force without damage is the same for any type of g-force. For example, upward acceleration (e.g., an increase of speed when going up or a decrease of speed when going down) on Earth feels the same as being stationary on a celestial body with a higher surface gravity. Gravitation acting alone does not produce any g-force; g-force is only produced from mechanical pushes and pulls. For a free body (one that is free to move in space) such g-forces only arise as the "inertial" path that is the natural effect of gravitation, or the natural effect of the inertia of mass, is modified. Such modification may only arise from influences other than gravitation.
Examples of important situations involving g-forces include:
The g-force acting on a stationary object resting on the Earth's surface is 1 g (upwards) and results from the resisting reaction of the Earth's surface bearing upwards equal to an acceleration of 1 g, and is equal and opposite to gravity. The number 1 is approximate, depending on location.
The g-force acting on an object in any weightless environment such as free-fall in a vacuum is 0 g.
The g-force acting on an object under acceleration can be much greater than 1 g, for example, the dragster pictured at top right can exert a horizontal g-force of 5.3 when accelerating.
The g-force acting on an object under acceleration may be downwards, for example when cresting a sharp hill on a roller coaster.
If there are no other external forces than gravity, the g-force in a rocket is the thrust per unit mass. Its magnitude is equal to the thrust-to-weight ratio times g, and to the consumption of delta-v per unit time (see the sketch after these examples).
In the case of a shock, e.g., a collision, the g-force can be very large during a short time.
A classic example of negative g-force is in a fully inverted roller coaster which is accelerating (changing velocity) toward the ground. In this case, the roller coaster riders are accelerated toward the ground faster than gravity would accelerate them, and are thus pinned upside down in their seats. In this case, the mechanical force exerted by the seat causes the g-force by altering the path of the passenger downward in a way that differs from gravitational acceleration. The difference in downward motion, now faster than gravity would provide, is caused by the push of the seat, and it results in a g-force toward the ground.
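To illustrate the rocket case from the list above: in free space the sensed g-force is just thrust divided by mass. The thrust and mass below are made-up round numbers for illustration, not data for any real vehicle:

G0 = 9.80665                       # m/s^2

def rocket_g(thrust_N, mass_kg):
    # In free space the sensed g-force is the thrust per unit mass,
    # i.e. the thrust-to-weight ratio with weight taken at standard gravity.
    return (thrust_N / mass_kg) / G0

print(rocket_g(7.6e6, 550e3))      # about 1.41 g for 7.6 MN pushing 550 t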
All "coordinate accelerations" (or lack of them), are described by Newton's laws of motion as follows:
The second law of motion, the law of acceleration, states that F = ma, meaning that a force F acting on a body is equal to the mass m of the body times its acceleration a.
The third law of motion, the law of reciprocal actions, states that all forces occur in pairs, and these two forces are equal in magnitude and opposite in direction. Newton's third law of motion means that not only does gravity behave as a force acting downwards on, say, a rock held in your hand but also that the rock exerts a force on the Earth, equal in magnitude and opposite in direction.
In an airplane, the pilot's seat can be thought of as the hand holding the rock, the pilot as the rock. When flying straight and level at 1 g, the pilot is acted upon by the force of gravity. His weight (a downward force) is 725 newtons (163 lbf). In accordance with Newton's third law, the plane and the seat underneath the pilot provide an equal and opposite force pushing upwards with a force of 725 N. This mechanical force provides the 1.0 g upward proper acceleration on the pilot, even though his velocity in the upward direction does not change (this is similar to the situation of a person standing on the ground, where the ground provides this force and this g-force).
If the pilot were suddenly to pull back on the stick and make his plane accelerate upwards at 9.8 m/s2, the total g‑force on his body is 2 g, half of which comes from the seat pushing the pilot to resist gravity, and half from the seat pushing the pilot to cause his upward acceleration, a change in velocity which is also a proper acceleration because it differs from a free-fall trajectory. Considered in the frame of reference of the plane, his body is now generating a force of 1,450 N (330 lbf) downwards into his seat, and the seat is simultaneously pushing upwards with an equal force of 1,450 N.
Unopposed acceleration due to mechanical forces, and consequently g-force, is experienced whenever anyone rides in a vehicle, because it always causes a proper acceleration and (in the absence of gravity) also always a coordinate acceleration (where velocity changes). Whenever the vehicle changes either direction or speed, the occupants feel lateral (side to side) or longitudinal (forward and backward) forces produced by the mechanical push of their seats.
The expression "1 g = 9.80665 m/s2" means that for every second that elapses, velocity changes 9.80665 metres per second (35.30394 km/h). This rate of change in velocity can also be denoted as 9.80665 (metres per second) per second, or 9.80665 m/s2. For example: An acceleration of 1 g equates to a rate of change in velocity of approximately 35 km/h (22 mph) for each second that elapses. Therefore, if an automobile is capable of braking at 1 g and is traveling at 35 km/h, it can brake to a standstill in one second and the driver will experience a deceleration of 1 g. The automobile traveling at three times this speed, 105 km/h (65 mph), can brake to a standstill in three seconds.
In the case of an increase in speed from 0 to v with constant acceleration within a distance of s this acceleration is v2/(2s).
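Both of the preceding calculations, the stopping time at a constant deceleration and the acceleration a = v2/(2s) needed to reach a speed v within a distance s, reduce to one-liners. A minimal sketch (the 100 km/h over 50 m case is an illustrative assumption):

G0 = 9.80665                            # m/s^2

def stop_time_s(v_kmh, decel_g=1.0):
    # time to brake to a standstill at a constant deceleration
    return (v_kmh / 3.6) / (decel_g * G0)

def accel_over_distance_g(v_kmh, s_m):
    # a = v^2 / (2 s), returned in multiples of g
    v = v_kmh / 3.6
    return v**2 / (2.0 * s_m) / G0

print(stop_time_s(35))                  # ~0.99 s, the 1 g example above
print(stop_time_s(105))                 # ~2.97 s from three times the speed
print(accel_over_distance_g(100, 50))   # ~0.79 g to reach 100 km/h in 50 m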
Preparing an object for g-tolerance (not getting damaged when subjected to a high g-force) is called g-hardening. This may apply to, e.g., instruments in a projectile shot by a gun.
== Human tolerance ==
Human tolerances depend on the magnitude of the gravitational force, the length of time it is applied, the direction it acts, the location of application, and the posture of the body.: 350
The human body is flexible and deformable, particularly the softer tissues. A hard slap on the face may briefly impose hundreds of g locally but not produce any real damage; a constant 16 g for a minute, however, may be deadly. When vibration is experienced, relatively low peak g-force levels can be severely damaging if they are at the resonant frequency of organs or connective tissues.
To some degree, g-tolerance can be trainable, and there is also considerable variation in innate ability between individuals. In addition, some illnesses, particularly cardiovascular problems, reduce g-tolerance.
=== Vertical ===
Aircraft pilots (in particular) sustain g-forces along the axis aligned with the spine. This causes significant variation in blood pressure along the length of the subject's body, which limits the maximum g-forces that can be tolerated.
Positive, or "upward" g-force, drives blood downward to the feet of a seated or standing person (more naturally, the feet and body may be seen as being driven by the upward force of the floor and seat, upward around the blood). Resistance to positive g-force varies. A typical person can handle about 5 g0 (49 m/s2) (meaning some people might pass out when riding a higher-g roller coaster, which in some cases exceeds this point) before losing consciousness, but through the combination of special g-suits and efforts to strain muscles—both of which act to force blood back into the brain—modern pilots can typically handle a sustained 9 g0 (88 m/s2) (see High-G training).
In aircraft particularly, vertical g-forces are often positive (force blood towards the feet and away from the head); this causes problems with the eyes and brain in particular. As positive vertical g-force is progressively increased (such as in a centrifuge) the following symptoms may be experienced:
Grey-out, where the vision loses hue, easily reversible on levelling out
Tunnel vision, where peripheral vision is progressively lost
Blackout, a loss of vision while consciousness is maintained, caused by a lack of blood flow to the head
G-LOC, a g-force induced loss of consciousness
Death, if g-forces are not quickly reduced
Resistance to "negative" or "downward" g, which drives blood to the head, is much lower. This limit is typically in the −2 to −3 g0 (−20 to −29 m/s2) range. This condition is sometimes referred to as red out where vision is literally reddened due to the blood-laden lower eyelid being pulled into the field of vision. Negative g-force is generally unpleasant and can cause damage. Blood vessels in the eyes or brain may swell or burst under the increased blood pressure, resulting in degraded sight or even blindness.
=== Horizontal ===
The human body is better at surviving g-forces that are perpendicular to the spine. In general when the acceleration is forwards (subject essentially lying on their back, colloquially known as "eyeballs in"), a much higher tolerance is shown than when the acceleration is backwards (lying on their front, "eyeballs out") since blood vessels in the retina appear more sensitive in the latter direction.
Early experiments showed that untrained humans were able to tolerate a range of accelerations depending on the time of exposure. This ranged from as much as 20 g0 for less than 10 seconds, to 10 g0 for 1 minute, and 6 g0 for 10 minutes, for both eyeballs in and out. These forces were endured with cognitive faculties intact, as subjects were able to perform simple physical and communication tasks. The tests were determined not to cause long- or short-term harm, although tolerance was quite subjective, with only the most motivated non-pilots capable of completing the tests. The record for peak experimental horizontal g-force tolerance is held by acceleration pioneer John Stapp, in a series of rocket sled deceleration experiments culminating in a late 1954 test in which he was brought to a stop in a little over a second from a land speed of Mach 0.9. He survived a peak "eyeballs-out" acceleration of 46.2 times the acceleration of gravity, and more than 25 g0 for 1.1 seconds, proving that the human body is capable of this. Stapp lived another 45 years to age 89 without any ill effects.
The highest recorded g-force experienced by a human who survived occurred during the 2003 IndyCar Series finale, the Chevy 500 at Texas Motor Speedway on 12 October 2003, when the car driven by Kenny Bräck made wheel-to-wheel contact with Tomas Scheckter's car and impacted the catch fence, recording a peak of 214 g0.
== Short duration shock, impact, and jerk ==
Impact and mechanical shock are usually used to describe a high-kinetic-energy, short-term excitation. A shock pulse is often measured by its peak acceleration in ɡ0·s and the pulse duration. Vibration is a periodic oscillation which can also be measured in ɡ0·s as well as frequency. The dynamics of these phenomena are what distinguish them from the g-forces caused by relatively longer-term accelerations.
After a free fall from a height h followed by deceleration over a distance d during an impact, the shock on an object is (h/d)·ɡ0. For example, a stiff and compact object dropped from 1 m that impacts over a distance of 1 mm is subjected to a 1000 ɡ0 deceleration.
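The h/d rule translates directly into code; a minimal sketch reproducing the dropped-object example:

def impact_shock_g(drop_height_m, stopping_distance_m):
    # average deceleration, in multiples of g, for a rigid object
    # dropped from height h and brought to rest over distance d
    return drop_height_m / stopping_distance_m

print(impact_shock_g(1.0, 0.001))   # 1000 g: dropped 1 m, stopped in 1 mm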
Jerk is the rate of change of acceleration. In SI units, jerk is expressed as m/s3; it can also be expressed in standard gravity per second (ɡ0/s; 1 ɡ0/s ≈ 9.81 m/s3).
== Other biological responses ==
Recent research carried out on extremophiles in Japan involved a variety of bacteria (including E. coli as a non-extremophile control) being subject to conditions of extreme gravity. The bacteria were cultivated while being rotated in an ultracentrifuge at high speeds corresponding to 403,627 g. Paracoccus denitrificans was one of the bacteria that displayed not only survival but also robust cellular growth under these conditions of hyperacceleration, which are usually only to be found in cosmic environments, such as on very massive stars or in the shock waves of supernovas. Analysis showed that the small size of prokaryotic cells is essential for successful growth under hypergravity. Notably, two multicellular species, the nematodes Panagrolaimus superbus and Caenorhabditis elegans were shown to be able to tolerate 400,000 × g for 1 hour.
The research has implications on the feasibility of panspermia.
== Typical examples ==
== Measurement using an accelerometer ==
An accelerometer, in its simplest form, is a damped mass on the end of a spring, with some way of measuring how far the mass has moved on the spring in a particular direction, called an 'axis'.
Accelerometers are often calibrated to measure g-force along one or more axes. If a stationary, single-axis accelerometer is oriented so that its measuring axis is horizontal, its output will be 0 g, and it will continue to be 0 g if mounted in an automobile traveling at a constant velocity on a level road. When the driver presses on the brake or gas pedal, the accelerometer will register positive or negative acceleration.
If the accelerometer is rotated by 90° so that it is vertical, it will read +1 g upwards even though stationary. In that situation, the accelerometer is subject to two forces: the gravitational force and the ground reaction force of the surface it is resting on. Only the latter force can be measured by the accelerometer, due to mechanical interaction between the accelerometer and the ground. The reading is the acceleration the instrument would have if it were exclusively subject to that force.
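The behaviour described here, together with the free-fall case discussed next, can be captured in a few lines. This sketch (a simplification assuming an ideal sensor and NumPy) models the reading as the component of proper acceleration, i.e. coordinate acceleration minus the gravitational field, along the instrument's axis:

import numpy as np

G_VEC = np.array([0.0, 0.0, -9.80665])   # gravitational field, z pointing up

def accelerometer_reading(coord_accel, axis):
    # An ideal accelerometer senses only non-gravitational (contact) forces,
    # i.e. the proper acceleration (a - g) projected onto its axis.
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    return float(np.dot(np.asarray(coord_accel) - G_VEC, axis))

print(accelerometer_reading([0, 0, 0], [1, 0, 0]))   # at rest, horizontal axis: 0
print(accelerometer_reading([0, 0, 0], [0, 0, 1]))   # at rest, vertical axis: +9.80665 (+1 g)
print(accelerometer_reading(G_VEC, [0, 0, 1]))       # free fall (a = g): 0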
A three-axis accelerometer will output zero‑g on all three axes if it is dropped or otherwise put into a ballistic trajectory (also known as an inertial trajectory), so that it experiences "free fall", as do astronauts in orbit (astronauts experience small tidal accelerations called microgravity, which are neglected for the sake of discussion here). Some amusement park rides can provide several seconds at near-zero g. Riding NASA's "Vomit Comet" provides near-zero g-force for about 25 seconds at a time.
== See also ==
Artificial gravity
Earth's gravity
Gravitational acceleration
Gravitational interaction
Hypergravity
Load factor (aeronautics)
Peak ground acceleration – g-force of earthquakes
Prone pilot
Relation between g-force and apparent weight
Shock and vibration data logger
Shock detector
Supine cockpit
== Notes and references ==
== Further reading ==
Faller, James E. (November–December 2005). "The Measurement of Little g: A Fertile Ground for Precision Measurement Science". Journal of Research of the National Institute of Standards and Technology. 110 (6): 559–581. doi:10.6028/jres.110.082. PMC 4846227. PMID 27308179.
== External links ==
"How Many Gs Can a Flyer Take?", October 1944, Popular Science—one of the first detailed public articles explaining this subject
Enduring a human centrifuge at the NASA Ames Research Center at Wired
HUMAN CAPABILITIES IN THE PRONE AND SUPINE POSITIONS. AN ANNOTATED BIBLIOGRAPHY | Wikipedia/G-force |