Dataset schema (column: type, min to max):
id: int64, 39 to 79M
url: stringlengths, 32 to 168
text: stringlengths, 7 to 145k
source: stringlengths, 2 to 105
categories: listlengths, 1 to 6
token_count: int64, 3 to 32.2k
subcategories: listlengths, 0 to 27
7,500,675
https://en.wikipedia.org/wiki/Calmar%20ratio
Calmar ratio (or Drawdown ratio) is a performance measurement used to evaluate Commodity Trading Advisors and hedge funds. It was created by Terry W. Young and first published in 1991 in the trade journal Futures. Young owned California Managed Accounts, a firm in Santa Ynez, California, which managed client funds and published the newsletter CMA Reports. The name of his ratio, "Calmar", is an acronym of his company's name and its newsletter: CALifornia Managed Accounts Reports. Young defined it as the compound annualized rate of return over the most recent 36 months divided by the maximum drawdown over the same period. Young believed the Calmar ratio was superior because, being computed over a rolling 36-month window and updated monthly, it smoothed out periods of over- and underachievement. A competitor newsletter, Managed Account Reports (founded in 1979 by publisher Leon Rose), had previously defined and popularized another performance measurement, the MAR ratio, equal to the compound annual return from inception divided by the maximum drawdown from inception. Although the Calmar ratio and MAR ratio are sometimes assumed to be identical, they are in fact different: the Calmar ratio uses 36 months of performance data, whereas the MAR ratio uses all performance data from inception onwards. Later versions of the Calmar ratio introduce the risk-free rate into the numerator to create a Sharpe-type ratio. See also Modigliani risk-adjusted performance Omega ratio Risk return ratio Sharpe ratio Sterling ratio Sortino ratio References Financial ratios
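For concreteness, here is a minimal Python sketch of the computation, following the 36-month reading of Young's definition given above; the function names and the random monthly-return data are illustrative assumptions, not part of the article.

```python
import numpy as np

def max_drawdown(returns):
    """Largest peak-to-trough decline of the cumulative growth curve, as a positive fraction."""
    wealth = np.cumprod(1.0 + np.asarray(returns))
    running_peak = np.maximum.accumulate(wealth)
    drawdowns = 1.0 - wealth / running_peak
    return drawdowns.max()

def calmar_ratio(monthly_returns, window=36):
    """Compound annualized return over the trailing `window` months divided by
    the maximum drawdown over the same period (illustrative reading of Young's definition)."""
    r = np.asarray(monthly_returns)[-window:]
    years = len(r) / 12.0
    annualized = np.prod(1.0 + r) ** (1.0 / years) - 1.0
    mdd = max_drawdown(r)
    return annualized / mdd if mdd > 0 else float("inf")

# Example: 36 months of hypothetical returns
rng = np.random.default_rng(0)
returns = rng.normal(0.01, 0.04, 36)
print(calmar_ratio(returns))
```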
Calmar ratio
[ "Mathematics" ]
257
[ "Financial ratios", "Quantity", "Metrics" ]
7,501,487
https://en.wikipedia.org/wiki/Personal%20robot
A personal robot is one whose human interface and design make it useful for individuals. This is by contrast to industrial robots, which are generally configured and operated by robotics specialists. A personal robot is one that enables an individual to automate the repetitive or menial parts of home or work life, making them more productive. Similar to the way that the transition from mainframe computers to personal computers revolutionized personal productivity, the transition from industrial robotics to personal robotics is changing productivity in home and work settings. Turning a robot like ASIMO or Atlas into a universally applicable personal robot or artificial servant is mainly a programming task. As of today, vast improvements in motion planning, computer vision (especially scene recognition), natural language processing, and automated reasoning are indispensable to make this a possibility. History iRobot Corp. introduced the Roomba in 2002. The Institute for Personal Robots in Education introduced the concept of teaching computing using personal robots in 2006. Stanford University's Personal Robotics Program introduced PR1 in 2007. Willow Garage introduced the PR2 robot in 2010. RoboDynamics introduced Luna in 2011. Milagrow HumanTech introduced India's first robotic vacuum cleaner, the RedHawk, in 2011 and then the world's first body-massaging robot in 2012. Toys Robotic toys, such as the well-known Furby, have been popular since 1998. There are also small humanoid remote-controlled robots. Electronic pets, such as robotic dogs, can be good companions. They have also been used by many universities in competitions such as the RoboCup. Many kinds of toy robots have been invented since the late 20th century, most of them for entertainment. One popular example was the Furby, a toy that children nurtured every day. The toy robot made it seem as if it were alive, like a pet that must be watched and given attention. There are many animal-themed toy robots, such as robotic dogs. Another type of robotic toy is the phone-powered robot, which connects to a phone and is controlled through an application. Robotic toys are now becoming more integrated with mobile device platforms, which in turn is creating a larger demand for these products. The increase in demand has a direct effect on the advancement of the technology used in the toys. Social robots Social robots take on the function of social communication. Domestic humanoid robots are used by elderly and immobilized residents to keep them company. Wakamaru is a domestic humanoid robot developed in Japan. Its function is to act as a caretaker. Wakamaru has a number of operations and "can be programmed to remind patients to take their medicine and even call a doctor when it appears that someone is in distress." Paro, a robotic baby seal, is intended to provide comfort to nursing home patients. Home-telepresence robots can move around in a remote location and let one communicate with people there via camera, speaker, and microphone. Through other remote-controlled telepresence robots, the user can visit a distant location and explore it as if they were physically present. These robots can, among other applications, permit health-care workers to monitor patients or allow children who are homebound because of injuries, illnesses, or other physical challenges to attend school remotely. 
Kuri, JIBO and ConnectR are family robots that include telepresence. Entertainment services Network robots link ubiquitous networks with robots, contributing to the creation of new lifestyles and solutions to address a variety of social problems, including the aging of the population and nursing care. Therapy Robots built for therapy have been in production for some time, for uses such as autism therapy and physical therapy. Regarding robots designed to help with autism, authors Daniel J. Ricks and Mark B. Colton suggest that these robots can elicit specific behaviors in children that were not seen before the robots were used. The goal of therapy robots for children with autism is thus to help form social behaviors. Many children with autism have communication difficulties; such robots can assist them and help them learn. They can also be used to assist adults with physical issues involving muscles or limbs. See also Companion robot DARPA Robotics Challenge Domestic robot Microsoft Robotics Developer Studio Open-source hardware Robot Operating System Soft robotics URBI References External links EZ-Robot Stanford University Personal Robotics Program Cornell University Personal Robotics Program Imperial College London Personal Robotics
Personal robot
[ "Physics", "Technology" ]
913
[ "Machines", "Robots", "Physical systems", "Social robots", "Computing and society" ]
7,502,814
https://en.wikipedia.org/wiki/Field%20telephone
Field telephones are telephones used for military communications. They can draw power from their own battery, from a telephone exchange (via a central battery, known as CB), or from an external power source. Some need no battery, being sound-powered telephones. Field telephones replaced flag signals and the telegraph as an efficient means of communication. The first field telephones had a battery to power the voice transmission, a hand-cranked generator to signal another field telephone or a manually operated telephone exchange, and an electromagnetic ringer which sounded when current from a remote generator arrived. This technology was used from the 1910s to the 1980s. Later the ring signal was operated by a pushbutton or automatically, as on domestic telephones. Manual systems are still widely used, and are often compatible with the older equipment. Shortly after the invention of the telephone, attempts were made to adapt the technology for military use. Telephones were already being used to support military campaigns in British India and in British colonies in Africa in the late 1870s and early 1880s. In the United States, telephone lines connected fortresses with each other and with army headquarters. They were also used for fire control at fixed coastal defence installations. The first telephone for use in the field was developed in the United States in 1889, but it was too expensive for mass production. Subsequent developments in several countries made the field telephone more practicable. The wire material was changed from iron to copper, devices for laying wire in the field were developed, and systems with both battery-operated sets for command posts and hand-generator sets for use in the field were developed. The first purpose-designed field telephones were used by the British in the Second Boer War. They were used more extensively in the Russo-Japanese War, where all infantry regiments and artillery divisions on both sides were equipped with telephone sets. By the First World War the use of field telephones was widespread, and a start was made at intercepting them. Field telephones operate over wire lines, sometimes commandeering civilian circuits when available, but often using wires strung in combat conditions. At least as of World War II, wire communications were the preferred method for the U.S. Army, with radio used only when needed, e.g. to communicate with mobile units, or until wires could be set up. Field phones could operate point to point or via a switchboard at a command post. A variety of wire types are used, ranging from lightweight "assault wire" (e.g. W-130), which has a limited talking range, to heavier cable with multiple pairs. Equipment for laying the wire ranges from reels on backpacks to trucks equipped with plows to bury lines. War in Ukraine During the Russo-Ukrainian War, Russian electronic warfare (EW) has excelled. During the annexation of Crimea and the war in the Donbas, Russia used "electronic warfare systems to jam and intercept communications signals, jam and spoof GPS receivers, and tap into cellular networks and hack cell phones." Russian EW was poorly optimized, however, and usage of the EW systems caused problems with Russia's own communications and GPS. Due to these negative effects on their own forces, it fell out of use. During the Battle of Bakhmut, Ukraine's forces made heavy use of field telephones because "Russian technologies aren't able to track or block field phones." 
One commander told the BBC: "This technology is very old - but it works really well and it's impossible to listen in." Torture of POWs The field telephone has been documented in human rights reports as an instrument of electric torture, with euphemisms for its use such as a "phone call to Putin" or a "call to Lenin" (referring to the TA-57 telephone). In 2024, a leaked photograph showed one of the suspects accused of the 2024 Crocus City Hall attack being tortured by Russian FSB interrogators by having his genitals electrocuted with a TA-57. According to the United States Army's Vietnam War Crimes Working Group Files, field telephones were sometimes used in Vietnam to torture POWs with electric shocks during interrogations. United States Army EE-8, World War II era through Vietnam War. TA-1, sound-powered (no batteries), with about a 4-mile range. TA-43. TA-312. TA-838, includes touch-tone keypad. Soviet Armed Forces УНА "Unified unit" (Унифицированный аппарат). ТАИ-43 field telephone set (Полевой Телефонный Аппарат). ТА-57 field telephone set (Полевой Телефонный Аппарат). Royal Norwegian Defence Forces TP-6N, developed in Norway for the armed forces in the early 1970s. TP-6NA: versions of the TP-6N, A to C. M37, a Swedish field telephone used by the Norwegian Civil Defence; this phone is fully interoperable with the EE-8, TA-1, TA-43 and TA-312 series of US field phones. EE-8: supplied as part of the Marshall Plan (from its enactment, officially the European Recovery Program, ERP), the EE-8 was used in the USA from World War II to the late seventies, and in Norway from World War II until the TP-6 could replace it. FF33: this phone was widely used from the mid-1950s until it was replaced by the TP-6 (after the EE-8); the FF33 sets were left by the Germans when World War II ended, but were not used immediately for political reasons. Mod 1932: developed by Elektrisk Bureau for the Norwegian forces and approved in 1932 (as the first standard field telephone), but never made in great numbers, due to bureaucracy and the start of World War II; based on a model made for the Turkish Army by Elektrisk Bureau. Finnish Defence Forces TA-57, made in the Soviet Union. P78, made in Sweden by L.M. Ericsson. P90, made in the UK by Racal Acoustics Ltd. ET-10, made by Terma A/S. German Armed Forces (Wehrmacht) FF33 (Wehrmacht). FF OB/ZB (Bundeswehr, Feldfernsprecher Ortsbetrieb / Zentralbetrieb). Austrian Armed Forces (Bundesheer) SFT800, made by Siemens AG. References External links Gordon L. Rottman (2010): "World War II Battlefield Communications (Elite)", Osprey Military. TA-312 Field Phone TP-6 Norwegian Field Telephone EE-8 Field Phone FF33 Field Phone Norwegian mod. 1932 Field Phone Comprehensive collectors page about field telephones worldwide Military communications Telephony equipment
Field telephone
[ "Engineering" ]
1,404
[ "Military communications", "Telecommunications engineering" ]
7,506,128
https://en.wikipedia.org/wiki/Retrocausality
Retrocausality, or backwards causation, is a concept of cause and effect in which an effect precedes its cause in time and so a later event affects an earlier one. In quantum physics, the distinction between cause and effect is not made at the most fundamental level, and so time-symmetric systems can be viewed as causal or retrocausal. Philosophical considerations of time travel often address the same issues as retrocausality, as do treatments of the subject in fiction, but the two phenomena are distinct. Philosophy Philosophical efforts to understand causality extend back at least to Aristotle's discussions of the four causes. It was long considered that an effect preceding its cause is an inherent self-contradiction because, as 18th-century philosopher David Hume discussed, when examining two related events, the cause is by definition the one that precedes the effect. The idea of retrocausality is also found in Indian philosophy. It was defended by at least two Indian Buddhist philosophers, Prajñākaragupta (ca. 8th–9th century) and Jitāri (ca. 940–1000), the latter of whom wrote a specific treatise on the topic, the Treatise on Future Cause (Bhāvikāraṇavāda). In the 1950s, Michael Dummett wrote in opposition to such definitions, stating that there was no philosophical objection to effects preceding their causes. This argument was rebutted by fellow philosopher Antony Flew and, later, by Max Black. Black's "bilking argument" held that retrocausality is impossible because the observer of an effect could act to prevent its future cause from ever occurring. A more complex discussion of how free will relates to the issues Black raised is summarized by Newcomb's paradox. Essentialist philosophers have proposed other theories, such as the existence of "genuine causal powers in nature", or have raised concerns about the role of induction in theories of causality. Physics Most physical theories are time symmetric: microscopic models like Newton's laws or electromagnetism have no inherent direction of time. The "arrow of time" that distinguishes cause and effect must have another origin. To reduce confusion, physicists distinguish strong (macroscopic) from weak (microscopic) causality. Macroscopic causality The imagined ability to affect the past is sometimes taken to suggest that causes could be negated by their own effects, creating a logical contradiction such as the grandfather paradox. This contradiction is not necessarily inherent to retrocausality or time travel; by limiting the initial conditions of time travel with consistency constraints, such paradoxes and others are avoided. Aspects of modern physics, such as the hypothetical tachyon particle and certain time-independent aspects of quantum mechanics, may allow particles or information to travel backward in time. Logical objections to macroscopic time travel may not necessarily prevent retrocausality at other scales of interaction. Even if such effects are possible, however, they may not be capable of producing effects different from those that would have resulted from normal causal relationships. Physicist John G. Cramer has explored various proposed methods for nonlocal or retrocausal quantum communication and found them all flawed and, consistent with the no-communication theorem, unable to transmit nonlocal signals. Relativity "In relativity, time and space are intertwined in the fabric of space-time, so time can contract and stretch under the influence of gravity." 
Closed timelike curves (CTCs), sometimes referred to as time loops, in which the world line of an object returns to its origin, arise from some exact solutions to the Einstein field equation. However, the chronology protection conjecture of Stephen Hawking suggests that any such closed timelike curve would be destroyed before it could be used. Although CTCs do not appear to exist under normal conditions, extreme environments of spacetime, such as a traversable wormhole or the region near certain cosmic strings, may allow their brief formation, implying a theoretical possibility of retrocausality. The exotic matter or topological defects required for the creation of those environments have not been observed. Microscopic causality Most physical models are time symmetric; some use retrocausality at the microscopic level. Electromagnetism Wheeler–Feynman absorber theory, proposed by John Archibald Wheeler and Richard Feynman, uses retrocausality and a temporal form of destructive interference to explain the absence of a type of converging concentric wave suggested by certain solutions to Maxwell's equations. These advanced waves have nothing to do with cause and effect: they are simply a different mathematical way to describe normal waves. The reason they were proposed is that a charged particle would then not have to act on itself, which, in normal classical electromagnetism, leads to an infinite self-force. Quantum physics Ernst Stueckelberg, and later Richard Feynman, proposed an interpretation of the positron as an electron moving backward in time, reinterpreting the negative-energy solutions of the Dirac equation. Electrons moving backward in time would have a positive electric charge. This time-reversal of antiparticles is required in modern quantum field theory and is, for example, a component of how nucleons in atoms are held together by the nuclear force, via exchange of virtual mesons such as the pion. A meson is made up of an equal number of quarks and antiquarks, and is thus simultaneously both emitted and absorbed. Wheeler invoked this time-reversal concept to explain the identical properties shared by all electrons, suggesting that "they are all the same electron" with a complex, self-intersecting world line. Yoichiro Nambu later applied it to all production and annihilation of particle–antiparticle pairs, stating that "the eventual creation and annihilation of pairs that may occur now and then is no creation or annihilation, but only a change of direction of moving particles, from past to future, or from future to past." The backwards-in-time point of view is nowadays accepted as completely equivalent to other pictures, but it has nothing to do with the macroscopic terms "cause" and "effect", which do not appear in a microscopic physical description. Retrocausality is associated with the Double Inferential state-Vector Formalism (DIVF), later known as the two-state vector formalism (TSVF), in quantum mechanics, where the present is characterised by quantum states of the past and the future taken in combination. Retrocausality is sometimes associated with the nonlocal correlations that generically arise from quantum entanglement, including, for example, the delayed-choice quantum eraser. However, accounts of quantum entanglement can be given which do not involve retrocausality. 
They treat the experiments demonstrating these correlations as being described from different reference frames that disagree on which measurement is a "cause" versus an "effect", as necessary to be consistent with special relativity. That is to say, the choice of which event is the cause and which the effect is not absolute but is relative to the observer. Such nonlocal quantum entanglement can thus be described in a way that is free of retrocausality if the states of the system are considered. Tachyons Hypothetical superluminal particles called tachyons have a spacelike trajectory, and thus can appear to move backward in time, according to an observer in a conventional reference frame. Despite frequent depiction in science fiction as a method to send messages back in time, hypothetical tachyons do not interact with normal tardyonic matter in a way that would violate standard causality. Specifically, the Feinberg reinterpretation principle means that ordinary matter cannot be used to make a tachyon detector capable of receiving information. Parapsychology Retrocausality is claimed to occur in some psychic phenomena such as precognition. J. W. Dunne's 1927 book An Experiment with Time studied precognitive dreams and has become a definitive classic. Parapsychologist J. B. Rhine and colleagues made intensive investigations during the mid-twentieth century. His successor Helmut Schmidt presented quantum mechanical justifications for retrocausality, eventually claiming that experiments had demonstrated the ability to manipulate radioactive decay through retrocausal psychokinesis. Such results and their underlying theories have been rejected by the mainstream scientific community and are widely regarded as pseudoscience, although they continue to have some support from fringe science sources. Efforts to associate retrocausality with prayer healing have been similarly rejected. Since 1994, psychologist Daryl J. Bem has argued for precognition. He showed experimental subjects two sets of curtains and instructed them to guess which one had a picture behind it, but did not display the picture behind the curtain until after the subject made their guess. Some results showed a higher margin of success (p. 17) for a subset of erotic images, with subjects who identified as "stimulus-seeking" in the pre-screening questionnaire scoring even higher. However, like his predecessors', his methodology has been strongly criticised and his results discounted. See also References Causality Metaphysics of science Quantum mechanics Thought experiments Parapsychology
Retrocausality
[ "Physics" ]
1,862
[ "Theoretical physics", "Quantum mechanics" ]
7,508,653
https://en.wikipedia.org/wiki/Chvorinov%27s%20rule
Chvorinov's rule is a physical relationship that relates the solidification time for a simple casting to the volume and surface area of the casting. It was first expressed by Czech engineer Nicolas Chvorinov in 1940. Description According to the rule, a casting with a larger surface area and smaller volume will cool more quickly than a casting with a smaller surface area and a larger volume under otherwise comparable conditions. The relationship can be mathematically expressed as $t = B\left(\frac{V}{A}\right)^n$, where $t$ is the solidification time, $V$ is the volume of the casting, $A$ is the surface area of the casting that contacts the mold, $n$ is a constant, and $B$ is the mold constant. This relationship can be expressed more simply as $t = B M^n$, where the modulus $M = V/A$ is the ratio of the casting's volume to its surface area. The mold constant depends on the properties of the metal, such as density, heat capacity, heat of fusion and superheat, and of the mold, such as initial temperature, density, thermal conductivity, heat capacity and wall thickness. Mold constant The S.I. unit for the mold constant (for $n = 2$) is seconds per metre squared (s/m²). According to Askeland, the exponent $n$ is usually 2; however, Degarmo claims it is between 1.5 and 2. The mold constant can be calculated using the following formula: $B = \left[\frac{\rho_m L}{T_m - T_0}\right]^2 \left[\frac{\pi}{4 k \rho c}\right] \left[1 + \left(\frac{c_m \Delta T_s}{L}\right)^2\right]$, where $T_m$ = melting or freezing temperature of the liquid (in kelvins), $T_0$ = initial temperature of the mold (in kelvins), $\Delta T_s$ = superheat (in kelvins), $L$ = latent heat of fusion (in J·kg⁻¹), $k$ = thermal conductivity of the mold (in W·m⁻¹·K⁻¹), $\rho$ = density of the mold (in kg·m⁻³), $c$ = specific heat of the mold (in J·kg⁻¹·K⁻¹), $\rho_m$ = density of the metal (in kg·m⁻³), and $c_m$ = specific heat of the metal (in J·kg⁻¹·K⁻¹). The rule is most useful in determining whether a riser will solidify before the casting, because if the riser solidifies first then defects like shrinkage or porosity can form. References Casting (manufacturing) Metallurgy
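To make the rule concrete, here is a small Python sketch evaluating $t = B(V/A)^n$ for a plate casting; the plate dimensions, mold constant, and $n = 2$ are illustrative assumptions, not figures from the article.

```python
# Minimal sketch of Chvorinov's rule, t = B * (V/A)**n, for a rectangular plate casting.
# The dimensions and mold constant below are illustrative values, not from the article.

def solidification_time(volume, area, mold_constant, n=2.0):
    """Solidification time t = B * (V/A)**n (SI units: m^3, m^2, s/m^2 for n=2)."""
    modulus = volume / area          # casting modulus M = V/A
    return mold_constant * modulus ** n

# A 0.2 m x 0.1 m x 0.05 m plate poured in a sand mold with B = 2.0e6 s/m^2 (assumed)
L, W, H = 0.2, 0.1, 0.05
V = L * W * H
A = 2 * (L * W + L * H + W * H)   # all faces assumed in contact with the mold
print(f"t = {solidification_time(V, A, 2.0e6):.1f} s")
```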
Chvorinov's rule
[ "Chemistry", "Materials_science", "Engineering" ]
400
[ "Metallurgy", "Materials science", "nan" ]
7,510,462
https://en.wikipedia.org/wiki/Lindstr%C3%B6m%27s%20theorem
In mathematical logic, Lindström's theorem (named after Swedish logician Per Lindström, who published it in 1969) states that first-order logic is the strongest logic (satisfying certain conditions, e.g. closure under classical negation) having both the (countable) compactness property and the (downward) Löwenheim–Skolem property. Lindström's theorem is perhaps the best known result of what later became known as abstract model theory, the basic notion of which is an abstract logic; the more general notion of an institution was later introduced, which advances from a set-theoretical notion of model to a category-theoretical one. Lindström had previously obtained a similar result in studying first-order logics extended with Lindström quantifiers. Lindström's theorem has been extended to various other systems of logic, in particular modal logics by Johan van Benthem and Sebastian Enqvist. Notes References Per Lindström, "On Extensions of Elementary Logic", Theoria 35, 1969, 1–11. Johan van Benthem, "A New Modal Lindström Theorem", Logica Universalis 1, 2007, 125–128. Sebastian Enqvist, "A General Lindström Theorem for Some Normal Modal Logics", Logica Universalis 7, 2013, 233–264. Shawn Hedman, A first course in logic: an introduction to model theory, proof theory, computability, and complexity, Oxford University Press, 2004, section 9.4. Mathematical logic Theorems in the foundations of mathematics Metatheorems
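For readers who want the statement in symbols, the following is a compact textbook-style rendering in LaTeX; the notation and the exact closure conditions are assumptions of this sketch rather than quotations from Lindström's paper.

```latex
% Textbook-style statement of Lindström's theorem (notation assumed).
% Requires \usepackage{amsthm} with a `theorem' environment defined.
\begin{theorem}[Lindstr\"om]
Let $\mathcal{L}$ be an abstract logic extending first-order logic
$\mathcal{L}_{\omega\omega}$ and closed under the Boolean connectives
(in particular classical negation). If $\mathcal{L}$ has
(i) the countable compactness property, and
(ii) the downward L\"owenheim--Skolem property (every satisfiable
     countable set of $\mathcal{L}$-sentences has a countable model),
then $\mathcal{L}$ is no stronger than first-order logic: every class of
structures axiomatizable in $\mathcal{L}$ is already axiomatizable in
$\mathcal{L}_{\omega\omega}$.
\end{theorem}
```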
Lindström's theorem
[ "Mathematics" ]
331
[ "Foundations of mathematics", "Mathematical logic", "Mathematical problems", "Mathematical logic stubs", "Mathematical theorems", "Theorems in the foundations of mathematics" ]
13,752,085
https://en.wikipedia.org/wiki/Woodward%27s%20rules
Woodward's rules, named after Robert Burns Woodward and also known as Woodward–Fieser rules (for Louis Fieser), are several sets of empirically derived rules which attempt to predict the wavelength of the absorption maximum (λmax) in an ultraviolet–visible spectrum of a given compound. Inputs used in the calculation are the type of chromophores present, the auxochromes (substituents on the chromophores), and the solvent. Examples are conjugated carbonyl compounds, conjugated dienes, and polyenes. Implementation One set of Woodward–Fieser rules for dienes is outlined in table 1. A diene is either homoannular, with both double bonds contained in one ring, or heteroannular, with two double bonds distributed between two rings. With the aid of these rules the UV absorption maximum can be predicted, for example in these two compounds: In the compound on the left, the base value is 214 nm (a heteroannular diene). This diene group has 4 alkyl substituents (labeled 1, 2, 3, 4), each adding 5 nm, and the double bond in one ring is exocyclic to the other, adding another 5 nm, for a predicted λmax of 214 + 4×5 + 5 = 239 nm. In the compound on the right, the diene is homoannular (base value 253 nm) with 4 alkyl substituents, and both double bonds in the central B ring are exocyclic with respect to rings A and C. For polyenes having more than 4 conjugated double bonds one must use the Fieser–Kuhn rules. References Eponymous chemical rules Physical organic chemistry Absorption spectroscopy
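The bookkeeping lends itself to a short Python sketch; the base values and increments below are the commonly tabulated ones (heteroannular 214 nm, homoannular 253 nm, +5 nm per alkyl substituent or exocyclic double bond) standing in for the missing table 1, so treat them as assumptions.

```python
# Sketch of the Woodward-Fieser increment bookkeeping for dienes.
# Base values and increments follow the commonly tabulated rules; they are
# assumptions here, not values printed in this article.

BASE_NM = {"heteroannular": 214, "homoannular": 253}
INCREMENT_NM = {"alkyl": 5, "exocyclic": 5}

def lambda_max(diene_type, alkyl_substituents=0, exocyclic_double_bonds=0):
    """Predicted UV absorption maximum (nm) for a conjugated diene."""
    return (BASE_NM[diene_type]
            + alkyl_substituents * INCREMENT_NM["alkyl"]
            + exocyclic_double_bonds * INCREMENT_NM["exocyclic"])

# Left compound from the text: heteroannular, 4 alkyl groups, 1 exocyclic bond
print(lambda_max("heteroannular", 4, 1))  # 239 nm
# Right compound: homoannular, 4 alkyl groups, 2 exocyclic bonds
print(lambda_max("homoannular", 4, 2))    # 283 nm
```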
Woodward's rules
[ "Physics", "Chemistry" ]
343
[ "Spectroscopy", "Spectrum (physical sciences)", "Absorption spectroscopy", "Physical organic chemistry" ]
13,752,761
https://en.wikipedia.org/wiki/Sod%20shock%20tube
The Sod shock tube problem, named after Gary A. Sod, is a common test for the accuracy of computational fluid codes, like Riemann solvers, and was heavily investigated by Sod in 1978. The test consists of a one-dimensional Riemann problem with the following parameters, for left and right states of an ideal gas: $(\rho_L, P_L, u_L) = (1.0, 1.0, 0.0)$ and $(\rho_R, P_R, u_R) = (0.125, 0.1, 0.0)$, where $\rho$ is the density, $P$ is the pressure and $u$ is the velocity. The time evolution of this problem can be described by solving the Euler equations, which leads to three characteristics describing the propagation speed of the various regions of the system: the rarefaction wave, the contact discontinuity and the shock discontinuity. If this is solved numerically, one can test against the analytical solution and get information on how well a code captures and resolves shocks and contact discontinuities and reproduces the correct density profile of the rarefaction wave. Analytic derivation NOTE: The equations provided below are only correct when the rarefaction takes place on the left side of the domain and the shock happens on the right side of the domain. The different states of the solution are separated by the time evolution of the three characteristics of the system, which is due to the finite speed of information propagation. Two of them are equal to the speed of sound of the left and right states, $c_L = \sqrt{\gamma P_L/\rho_L}$ and $c_R = \sqrt{\gamma P_R/\rho_R}$, where $\gamma$ is the adiabatic gamma. The first is the position of the beginning of the rarefaction wave, while the other is the velocity of the propagation of the shock. Defining $\Gamma = \frac{\gamma - 1}{\gamma + 1}$ and $\beta = \frac{\gamma - 1}{2\gamma}$, the states after the shock are connected by the Rankine–Hugoniot shock jump conditions. But to calculate the density in region 4 we need to know the pressure in that region, which is related by the contact discontinuity to the pressure in region 3. Unfortunately the pressure in region 3 can only be calculated iteratively: the right solution is found when the velocity behind the shock, obtained from the jump conditions, equals the velocity behind the rarefaction wave. This condition can be evaluated to an arbitrary precision, thus giving the pressure in region 3; finally the velocities $u_3 = u_4$ can be calculated, and $\rho_3$ follows from the adiabatic gas law. References See also Shock tube Computational fluid dynamics Fluid dynamics Computational fluid dynamics
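The iterative step can be made concrete with a short Python sketch that finds the star-region pressure by matching the velocity behind the left rarefaction with the velocity behind the right shock; the closed-form expressions follow the standard exact-Riemann-solver treatment (e.g. Toro's textbook) rather than anything printed in this article.

```python
# Sketch: solve for the pressure in the star region of the Sod problem by
# matching the velocity behind the left rarefaction with the velocity behind
# the right shock (standard exact-Riemann-solver relations, assumed from the
# usual textbook treatment, not from this article).
import numpy as np
from scipy.optimize import brentq

gamma = 1.4
rhoL, pL, uL = 1.0, 1.0, 0.0      # left state
rhoR, pR, uR = 0.125, 0.1, 0.0    # right state
cL = np.sqrt(gamma * pL / rhoL)   # left sound speed

def f_rarefaction(p):
    """Velocity change across a left rarefaction down to pressure p."""
    return 2.0 * cL / (gamma - 1.0) * ((p / pL) ** ((gamma - 1.0) / (2.0 * gamma)) - 1.0)

def f_shock(p):
    """Velocity change across a right shock up to pressure p (Rankine-Hugoniot)."""
    A = 2.0 / ((gamma + 1.0) * rhoR)
    B = (gamma - 1.0) / (gamma + 1.0) * pR
    return (p - pR) * np.sqrt(A / (p + B))

# The star pressure makes both sides agree on the contact velocity.
p_star = brentq(lambda p: f_rarefaction(p) + f_shock(p) + (uR - uL), 1e-6, 1.0)
u_star = uL - f_rarefaction(p_star)
print(p_star, u_star)   # ~0.3031 and ~0.9274 for the standard Sod states
```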
Sod shock tube
[ "Physics", "Chemistry", "Engineering" ]
417
[ "Computational fluid dynamics", "Chemical engineering", "Computational physics", "Piping", "Fluid dynamics" ]
13,753,918
https://en.wikipedia.org/wiki/Tetrachloroaluminate
Tetrachloroaluminate [AlCl4]− is an anion formed from aluminium and chlorine. The anion has a tetrahedral shape and is isoelectronic with silicon tetrachloride. Some tetrachloroaluminates are soluble in organic solvents, creating an ionic non-aqueous solution, which makes them suitable as components of electrolytes for batteries. For example, lithium tetrachloroaluminate is used in some lithium batteries. Formation Tetrachloroaluminate ions are formed as intermediates in Friedel–Crafts reactions when aluminium chloride is used as the catalyst. In the case of the Friedel–Crafts alkylation, the reaction can be broken into three steps as follows: The alkyl halide reacts with the strong Lewis acid to form an activated electrophile composed of the tetrachloroaluminate ion and the alkyl group. The aromatic ring (benzene in this case) reacts with the activated electrophile, forming an alkylbenzenium carbocation. The alkylbenzenium carbocation reacts with a tetrachloroaluminate anion, regenerating the aromatic ring and the Lewis acid and forming hydrochloric acid (HCl). A similar mechanism occurs in the Friedel–Crafts acylation. References Non-coordinating anions
Tetrachloroaluminate
[ "Chemistry" ]
289
[ "Coordination chemistry", "Non-coordinating anions" ]
13,754,920
https://en.wikipedia.org/wiki/Bundle%20adjustment
In photogrammetry and computer stereo vision, bundle adjustment is the simultaneous refining of the 3D coordinates describing the scene geometry, the parameters of the relative motion, and the optical characteristics of the camera(s) employed to acquire the images, given a set of images depicting a number of 3D points from different viewpoints. Its name refers to the geometrical bundles of light rays originating from each 3D feature and converging on each camera's optical center, which are adjusted optimally according to an optimality criterion involving the corresponding image projections of all points. Uses Bundle adjustment is almost always used as the last step of feature-based 3D reconstruction algorithms. It amounts to an optimization problem on the 3D structure and viewing parameters (i.e., camera pose and possibly intrinsic calibration and radial distortion), to obtain a reconstruction which is optimal under certain assumptions regarding the noise pertaining to the observed image features: if the image error is zero-mean Gaussian, then bundle adjustment is the maximum likelihood estimator. Bundle adjustment was originally conceived in the field of photogrammetry during the 1950s and has increasingly been used by computer vision researchers during recent years. General approach Bundle adjustment boils down to minimizing the reprojection error between the image locations of observed and predicted image points, which is expressed as the sum of squares of a large number of nonlinear, real-valued functions. Thus, the minimization is achieved using nonlinear least-squares algorithms. Of these, Levenberg–Marquardt has proven to be one of the most successful due to its ease of implementation and its use of an effective damping strategy that lends it the ability to converge quickly from a wide range of initial guesses. By iteratively linearizing the function to be minimized in the neighborhood of the current estimate, the Levenberg–Marquardt algorithm involves the solution of linear systems termed the normal equations. When solving the minimization problems arising in the framework of bundle adjustment, the normal equations have a sparse block structure owing to the lack of interaction among parameters for different 3D points and cameras. This can be exploited to gain tremendous computational benefits by employing a sparse variant of the Levenberg–Marquardt algorithm which explicitly takes advantage of the normal equations' zero pattern, avoiding storing and operating on zero elements. Mathematical definition Bundle adjustment amounts to jointly refining a set of initial camera and structure parameter estimates for finding the set of parameters that most accurately predict the locations of the observed points in the set of available images. More formally, assume that $n$ 3D points are seen in $m$ views and let $\mathbf{x}_{ij}$ be the projection of the $i$th point on image $j$. Let $v_{ij}$ denote the binary variables that equal 1 if point $i$ is visible in image $j$ and 0 otherwise. Assume also that each camera $j$ is parameterized by a vector $\mathbf{a}_j$ and each 3D point $i$ by a vector $\mathbf{b}_i$. Bundle adjustment minimizes the total reprojection error with respect to all 3D point and camera parameters, specifically $$\min_{\mathbf{a}_j,\,\mathbf{b}_i} \sum_{i=1}^{n} \sum_{j=1}^{m} v_{ij}\, d\big(\mathbf{Q}(\mathbf{a}_j, \mathbf{b}_i),\, \mathbf{x}_{ij}\big)^2,$$ where $\mathbf{Q}(\mathbf{a}_j, \mathbf{b}_i)$ is the predicted projection of point $i$ on image $j$ and $d(\mathbf{x}, \mathbf{y})$ denotes the Euclidean distance between the image points represented by vectors $\mathbf{x}$ and $\mathbf{y}$. 
Because the minimum is computed over many points and many images, bundle adjustment is by definition tolerant to missing image projections, and if the distance metric is chosen reasonably (e.g., Euclidean distance), bundle adjustment will also minimize a physically meaningful criterion. See also Adjustment of observations Stereoscopy Levenberg–Marquardt algorithm Sparse matrix Collinearity equation Structure from motion Simultaneous localization and mapping References Further reading A. Zisserman. Bundle adjustment. CV Online. External links Software : Apero/MicMac, a free open source photogrammetric software. Cecill-B licence. sba: A Generic Sparse Bundle Adjustment C/C++ Package Based on the Levenberg–Marquardt Algorithm (C, MATLAB). GPL. cvsba : An OpenCV wrapper for sba library (C++). GPL. ssba: Simple Sparse Bundle Adjustment package based on the Levenberg–Marquardt Algorithm (C++). LGPL. OpenCV: Computer Vision library in the Images stitching module. BSD license. mcba: Multi-Core Bundle Adjustment (CPU/GPU). GPL3. libdogleg: General-purpose sparse non-linear least squares solver, based on Powell's dogleg method. LGPL. ceres-solver: A Nonlinear Least Squares Minimizer. BSD license. g2o: General Graph Optimization (C++) - framework with solvers for sparse graph-based non-linear error functions. LGPL. DGAP: The program DGAP implements the photogrammetric method of bundle adjustment invented by Helmut Schmid and Duane Brown. GPL. Bundler: A structure-from-motion (SfM) system for unordered image collections (for instance, images from the Internet) by Noah Snavely. GPL. COLMAP: A general-purpose Structure-from-Motion (SfM) and Multi-View Stereo (MVS) pipeline with a graphical and command-line interface. BSD license. Theia: A computer vision library aimed at providing efficient and reliable algorithms for Structure from Motion (SfM). New BSD license. Ames Stereo Pipeline has a tool for bundle adjustment (Apache II licence). Geometry in computer vision Geodesy Photogrammetry Surveying Cartography
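As a toy illustration of the optimization (dense, not the sparse variant the text describes), the following Python sketch adjusts one camera translation and a small point cloud with SciPy's least-squares solver; the synthetic data, the translation-only camera model, and the fixed focal length are all simplifying assumptions.

```python
# Tiny synthetic bundle-adjustment sketch using scipy's Levenberg-Marquardt-style
# solver. A real system would exploit the sparse block structure (see text);
# this problem is small enough to solve densely. All data are made up.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
n_pts = 20
points_true = rng.uniform(-1, 1, (n_pts, 3)) + np.array([0, 0, 5])  # in front of cameras
cam_t_true = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])           # two translated cameras
f = 800.0                                                            # fixed focal length

def project(points, t):
    """Pinhole projection of world points for a camera translated by t (no rotation)."""
    p = points - t
    return f * p[:, :2] / p[:, 2:3]

# Observations in both images, with pixel noise
obs = np.stack([project(points_true, t) for t in cam_t_true])
obs += rng.normal(0, 0.5, obs.shape)

def residuals(params):
    t2 = params[:3]                       # second camera translation (first fixed as gauge)
    pts = params[3:].reshape(n_pts, 3)
    r1 = project(pts, cam_t_true[0]) - obs[0]
    r2 = project(pts, t2) - obs[1]
    return np.concatenate([r1.ravel(), r2.ravel()])

# Perturbed initialization; note the overall scale is a gauge freedom that is
# only pinned down by the initial guess in this minimal setup.
x0 = np.concatenate([cam_t_true[1] + 0.1, (points_true + 0.05).ravel()])
sol = least_squares(residuals, x0, method="lm")
print("RMS reprojection error:", np.sqrt(np.mean(sol.fun ** 2)))
```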
Bundle adjustment
[ "Mathematics", "Engineering" ]
1,107
[ "Applied mathematics", "Geometry in computer vision", "Surveying", "Civil engineering", "Geometry", "Geodesy" ]
13,757,128
https://en.wikipedia.org/wiki/Cuprate%20superconductor
Cuprate superconductors are a family of high-temperature superconducting materials made of layers of copper oxides (CuO2) alternating with layers of other metal oxides, which act as charge reservoirs. At ambient pressure, cuprate superconductors are the highest-temperature superconductors known. However, the mechanism by which superconductivity occurs is still not understood. History The first cuprate superconductor was found in 1986 in the non-stoichiometric cuprate lanthanum barium copper oxide by IBM researchers Georg Bednorz and Karl Alex Müller. The critical temperature for this material was 35 K, well above the previous record of 23 K. The discovery led to a sharp increase in research on the cuprates, resulting in thousands of publications between 1986 and 2001. Bednorz and Müller were awarded the Nobel Prize in Physics in 1987, only a year after their discovery. From 1986, many cuprate superconductors were identified, and can be put into three groups on a phase diagram of critical temperature vs. oxygen hole content and copper hole content: lanthanum barium copper oxide (LBCO), Tc = 35 K; yttrium barium copper oxide (YBCO), Tc = 93 K; bismuth strontium calcium copper oxide (BSCCO), Tc = 95 K; thallium barium calcium copper oxide (TBCCO), Tc = 125 K; and mercury barium calcium copper oxide (HBCCO, 1993), Tc = 133 K, currently the highest cuprate critical temperature. Structure Cuprates are layered materials, consisting of superconducting planes of copper oxide separated by layers containing ions such as lanthanum, barium or strontium, which act as a charge reservoir, doping electrons or holes into the copper-oxide planes. Thus the structure is described as a superlattice of superconducting CuO2 layers separated by spacer layers, resulting in a structure often closely related to the perovskite structure. Superconductivity takes place within the copper-oxide (CuO2) sheets, with only weak coupling between adjacent CuO2 planes, making the properties close to those of a two-dimensional material. Electrical currents flow within the CuO2 sheets, resulting in a large anisotropy in normal conducting and superconducting properties, with a much higher conductivity parallel to the CuO2 plane than in the perpendicular direction. Critical superconducting temperatures depend on the chemical composition, cation substitutions and oxygen content. Chemical formulae of superconducting materials generally contain fractional numbers to describe the doping required for superconductivity. There are several families of cuprate superconductors, which can be categorized by the elements they contain and the number of adjacent copper-oxide layers in each superconducting block. For example, YBCO and BSCCO can alternatively be referred to as Y123 and Bi2201/Bi2212/Bi2223, depending on the number of layers in each superconducting block (n). The superconducting transition temperature has been found to peak at an optimal doping value (p = 0.16) and an optimal number of layers in each superconducting block, typically n = 3. The undoped "parent" or "mother" compounds are Mott insulators with long-range antiferromagnetic order at sufficiently low temperatures. Single-band models are generally considered to be enough to describe the electronic properties. Cuprate superconductors usually feature copper in both the oxidation states 3+ and 2+. For example, YBa2Cu3O7 is described as Y3+(Ba2+)2(Cu3+)(Cu2+)2(O2−)7. 
The copper 2+ and 3+ ions tend to arrange themselves in a checkerboard pattern, a phenomenon known as charge ordering. All superconducting cuprates are layered materials having a complex structure described as a superlattice of superconducting CuO2 layers separated by spacer layers, where the misfit strain between different layers and dopants in the spacers induce a complex heterogeneity that, in the superstripes scenario, is intrinsic to high-temperature superconductivity. Superconducting mechanism Superconductivity in the cuprates is considered unconventional and is not explained by BCS theory. Possible pairing mechanisms for cuprate superconductivity continue to be the subject of considerable debate and further research. Similarities between the low-temperature antiferromagnetic state in undoped materials and the low-temperature superconducting state that emerges upon doping, primarily the dx2−y2 orbital state of the Cu2+ ions, suggest that electron-phonon coupling is less relevant in cuprates. Recent work on the Fermi surface has shown that nesting occurs at four points in the antiferromagnetic Brillouin zone where spin waves exist, and that the superconducting energy gap is larger at these points. The weak isotope effects observed for most cuprates contrast with conventional superconductors that are well described by BCS theory. In 1987, Philip Anderson proposed that superexchange could act as a high-temperature superconductor pairing mechanism. In 2016, Chinese physicists found a correlation between a cuprate's critical temperature and the size of the charge transfer gap in that cuprate, providing support for the superexchange hypothesis. A 2022 study found that the varying density of actual Cooper pairs in a bismuth strontium calcium copper oxide superconductor matched numerical predictions based on superexchange. But so far there is no consensus on the mechanism, and the search for an explanation continues. Applications BSCCO superconductors already have large-scale applications. For example, tens of kilometers of BSCCO-2223 superconducting wire, operating at 77 K, are used in the current leads of the Large Hadron Collider at CERN (but the main field coils use metallic lower-temperature superconductors, mainly based on niobium–tin). See also Thallium barium calcium copper oxide Lanthanum barium copper oxide Bismuth strontium calcium copper oxide Superconducting wire Bibliography Rybicki et al., Perspective on the phase diagram of cuprate high-temperature superconductors, University of Leipzig, 2015 References Copper compounds Superconductors Oxides
Cuprate superconductor
[ "Chemistry", "Materials_science" ]
1,369
[ "Superconductors", "Superconductivity", "Oxides", "Salts" ]
13,757,191
https://en.wikipedia.org/wiki/Sedimentation%20potential
Sedimentation potential occurs when dispersed particles move under the influence of either gravity or centrifugation in a medium. This motion disrupts the equilibrium symmetry of the particle's double layer. While the particle moves, the ions in the electric double layer lag behind due to the liquid flow. This causes a slight displacement between the surface charge and the electric charge of the diffuse layer. As a result, the moving particle creates a dipole moment. The sum of all of the dipoles generates an electric field which is called the sedimentation potential. It can be measured with an open electrical circuit; the corresponding quantity measured in a closed circuit is called the sedimentation current. There are detailed descriptions of this effect in many books on colloid and interface science. Background related to the phenomenon Electrokinetic phenomena are a family of several different effects that occur in heterogeneous fluids or in porous bodies filled with fluid. All of them involve the response of a particle to some outside force, resulting in a net electrokinetic effect; the common source of these effects is the interfacial 'double layer' of charges. Particles influenced by an external force generate tangential motion of a fluid with respect to an adjacent charged surface. This force may be electric, a pressure gradient, a concentration gradient, or gravity. In addition, the moving phase might be either the continuous fluid or the dispersed phase. Sedimentation potential is the electrokinetic phenomenon dealing with the generation of an electric field by sedimenting colloid particles. History of models This phenomenon was first discovered by Dorn in 1879. He observed that a vertical electric field developed in a suspension of glass beads in water as the beads were settling. This was the origin of sedimentation potential, which is often referred to as the Dorn effect. Smoluchowski built the first models to calculate the potential in the early 1900s. Booth created a general theory of sedimentation potential in 1954, based on Overbeek's 1943 theory of electrophoresis. In 1980, Stigter extended Booth's model to allow for higher surface potentials. Ohshima created a model based on O'Brien and White's 1978 model, used to analyze the sedimentation velocity of a single charged sphere and the sedimentation potential of a dilute suspension. Generation of a potential As a charged particle moves under gravity or centrifugation, an electric potential is induced. While the particle moves, ions in the electric double layer lag behind, creating a net dipole moment due to the liquid flow. The sum of all the dipoles on the particles is what causes the sedimentation potential. Sedimentation potential has the opposite effect compared to electrophoresis, where an electric field is applied to the system. Ionic conductivity is often referred to when dealing with sedimentation potential. The relation below, first derived by Smoluchowski in work of 1903 and 1921, provides a measure of the sedimentation potential due to the settling of charged spheres. It holds true only for non-overlapping electric double layers and for dilute suspensions. In 1954, Booth proved that this idea held true for Pyrex glass powder settling in a KCl solution. From this relation, the sedimentation potential ES is independent of the particle radius, and ES → 0 as Φp → 0 (a single particle). 
Smoluchowski's sedimentation potential is defined as $E_S = \frac{\varepsilon_0 D \zeta \Phi (\rho - \rho_o) g}{\eta \lambda}$, where ε0 is the permittivity of free space, D the dimensionless dielectric constant, ζ the zeta potential, g the acceleration due to gravity, Φ the particle volume fraction, ρ the particle density, ρo the medium density, λ the specific volume conductivity, and η the viscosity. Smoluchowski developed the equation under five assumptions: particles are spherical, nonconducting, and monodispersed; laminar flow around the particles occurs (Reynolds number < 1); interparticle interactions are negligible; surface conduction is negligible; and the double-layer thickness 1/κ is small compared to the particle radius a (κa >> 1). Here Di is the diffusion coefficient of the ith solute species, and ni∞ is the number concentration of the bulk electrolyte solution. Ohshima's model was developed in 1984 and was originally used to analyze the sedimentation velocity of a single charged sphere and the sedimentation potential of a dilute suspension. The model holds true for dilute suspensions of low zeta potential, i.e. eζ/kBT ≤ 2. Testing Measurement Sedimentation potential is measured by attaching electrodes to a glass column filled with the dispersion of interest. A voltmeter is attached to measure the potential generated by the suspension. To account for different geometries of the electrodes, the column is typically rotated 180 degrees while measuring the potential; the difference in potential upon rotation by 180 degrees is twice the sedimentation potential. The zeta potential can be determined through measurement of the sedimentation potential, as the concentration, the conductivity of the suspension, the density of the particles, and the potential difference are known. By rotating the column 180 degrees, drift and geometry differences of the column can be ignored. When dealing with concentrated systems, the zeta potential can be determined through measurement of the sedimentation potential, from the potential difference relative to the distance between the electrodes. The other parameters represent the following: the viscosity of the medium; the bulk conductivity; the relative permittivity of the medium; the permittivity of free space; the density of the particle; the density of the medium; the acceleration due to gravity; and σ∞, the electrical conductivity of the bulk electrolyte solution. An improved cell design was developed to determine the sedimentation potential, specific conductivity, volume fraction of solids, and pH. Two pairs of electrodes are used in this setup, one to measure the potential difference and the other to measure resistance. A flip switch is utilized to avoid polarization of the resistance electrodes and buildup of charge by alternating the current. The pH of the system could be monitored, and the electrolyte was drawn into the tube using a vacuum pump. Applications Applications of sedimentation field flow fractionation (SFFF) Sedimentation field flow fractionation (SFFF) is a non-destructive separation technique which can be used both for separation and for collecting fractions. Some applications of SFFF include characterization of the particle size of latex materials for adhesives, coatings and paints; colloidal silica for binders, coatings and compounding agents; titanium oxide pigments for paints, paper and textiles; emulsions for soft drinks; and biological materials like viruses and liposomes. 
Some main aspects of SFFF include: it provides high-resolution size-distribution measurements with high precision; the resolution depends on experimental conditions; the typical analysis time is 1 to 2 hours; and it is a non-destructive technique which offers the possibility of collecting fractions. Particle size analysis by sedimentation field flow fractionation As sedimentation field flow fractionation (SFFF) is one of the field flow fractionation separation techniques, it is appropriate for the fractionation and characterization of particulate materials and soluble samples in the colloidal size range. Differences in the interaction between a centrifugal force field and particles of different masses or sizes lead to the separation. An exponential distribution of particles of a certain size or weight results from the Brownian motion. Some of the assumptions used to develop the theoretical equations are that there is no interaction between individual particles and that equilibrium can occur anywhere in the separation channels. See also Various combinations of the driving force and moving phase determine various electrokinetic effects. Following "Fundamentals of Interface and Colloid Science" by Lyklema (1995), the complete family of electrokinetic phenomena includes electrophoresis, electro-osmosis, diffusiophoresis, capillary osmosis, sedimentation potential, streaming potential/current, colloid vibration current, and electric sonic amplitude. References Anand Plappally, Alfred Soboyejo, Norman Fausey, Winston Soboyejo and Larry Brown, "Stochastic Modeling of Filtrate Alkalinity in Water Filtration Devices: Transport through Micro/Nano Porous Clay Based Ceramic Materials", J Nat Env Sci 2010 1(2):96-105. Colloidal chemistry Condensed matter physics Soft matter Non-equilibrium thermodynamics Electrochemistry Electrochemical potentials
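As an order-of-magnitude illustration, the following Python sketch evaluates the Smoluchowski relation given above for an aqueous suspension; every numerical value is an assumed, illustrative input, not data from the article.

```python
# Order-of-magnitude evaluation of the Smoluchowski sedimentation potential as
# reconstructed above: E_S = eps0 * D * zeta * Phi * (rho - rho_o) * g / (eta * lam).
# All numerical values are illustrative assumptions for an aqueous silica suspension.

EPS0 = 8.854e-12   # permittivity of free space, F/m
D = 78.5           # dielectric constant of water (dimensionless)
zeta = -0.05       # zeta potential, V (assumed)
phi = 0.05         # particle volume fraction (assumed)
rho_p = 2200.0     # particle density, kg/m^3 (silica, assumed)
rho_o = 1000.0     # medium density, kg/m^3 (water)
g = 9.81           # acceleration due to gravity, m/s^2
eta = 1.0e-3       # viscosity, Pa*s (water)
lam = 0.01         # specific conductivity, S/m (dilute electrolyte, assumed)

E_s = EPS0 * D * zeta * phi * (rho_p - rho_o) * g / (eta * lam)
print(f"E_S ~ {E_s:.3e} V/m")  # field strength along the settling column
```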
Sedimentation potential
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,687
[ "Electrochemical potentials", "Colloidal chemistry", "Non-equilibrium thermodynamics", "Soft matter", "Phases of matter", "Materials science", "Colloids", "Surface science", "Electrochemistry", "Condensed matter physics", "Matter", "Dynamical systems" ]
13,757,374
https://en.wikipedia.org/wiki/Cold%20ironing
Cold ironing, or shore connection, shore-to-ship power (SSP) or alternative maritime power (AMP), is the process of providing shoreside electrical power to a ship at berth while its main and auxiliary engines are turned off. Cold ironing permits emergency equipment, refrigeration, cooling, heating, lighting and other equipment to receive continuous electrical power while the ship loads or unloads its cargo. Shorepower is a general term to describe the supply of electric power to ships, small craft, aircraft and road vehicles while stationary. Cold ironing is a shipping industry term that first came into use when all ships had coal-fired engines. When a ship tied up at port there was no need to continue to feed the fire, and the iron engines would literally cool down, eventually going completely cold, hence the term cold ironing. Shutting down main engines while in port continues as a majority practice. However, auxiliary diesel generators that power cargo handling equipment and other ship's services while in port are the primary source of air emissions from ships in ports today, because the auxiliaries run on heavy fuel oil or bunkers. Cold ironing mitigates harmful emissions from diesel engines by connecting a ship's load to a more environmentally friendly, shore-based source of electrical power. An alternative is to run auxiliary diesels either on gas (LNG or LPG) or on extra-low-sulphur distillate fuels; however, if noise pollution is a problem, cold ironing becomes the only option. A ship can cold iron by simply connecting to another ship's power supply. Naval ships have standardized processes and equipment for this procedure. However, this does not change the power source type, nor does it eliminate the source of air pollution. The source of land-based power may be grid power from an electric utility company, but also possibly an external remote generator. These generators may be powered by diesel or by renewable energy sources such as wind, water or solar. Shore power saves consumption of fuel that would otherwise be used to power vessels while in port, and eliminates the air pollution associated with consumption of that fuel. Use of shore power facilitates maintenance of the ship's engines and generators, and reduces noise. Background Unlike navies, whose ships can berth for extended periods at their bases, merchant ships have shorter port stays, during which they sustain electrical loads through on-board fossil-fuel-powered electrical generators (auxiliary engines). Oceangoing ships have generally not been subject to emissions controls, so merchant vessels throughout the world have been using bunker fuel, or HFO – which is residual petroleum – as the optimal choice of fuel. This fuel produces high levels of particulate matter. Studies show that a single ship can produce particulate emissions equal to those of 50 million cars annually. Further research indicates that cardio-pulmonary conditions caused by particulate matter from ship emissions are responsible for 60,000 deaths annually. These deaths have been detected far inland due to prevailing wind conditions. The total world trading fleet stands at 50,000+ merchant ships (Lloyds data as of January 2008), and each ship spends some 100 days in port in a year. For every kilowatt-hour of electricity generated on board, about 200 g of bunker fuel is consumed. Each 1 kg of bunker oil generates 3.1 kg of carbon dioxide. It is assessed that globally ships use 411,223,484 tonnes of fuel annually. 
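These per-kWh figures invite a back-of-the-envelope estimate. The Python sketch below combines them with an assumed hotel load (3 MW, the container-ship figure quoted later in the article) and the quoted 100 days per year in port; the function and its inputs are illustrative assumptions.

```python
# Back-of-the-envelope fuel and CO2 arithmetic using the figures quoted above
# (200 g of bunker fuel per kWh; 3.1 kg CO2 per kg of fuel). The hotel load
# and time in port are illustrative assumptions.

FUEL_PER_KWH_KG = 0.200    # bunker fuel burned per kWh generated on board
CO2_PER_KG_FUEL = 3.1      # kg CO2 released per kg of bunker fuel

def port_emissions(hotel_load_kw, hours_in_port):
    """Fuel use (t) and CO2 (t) avoided if shore power replaces auxiliary engines."""
    energy_kwh = hotel_load_kw * hours_in_port
    fuel_t = energy_kwh * FUEL_PER_KWH_KG / 1000.0
    return fuel_t, fuel_t * CO2_PER_KG_FUEL

# Assumed: a container ship with a 3 MW hotel load spending 100 days/year in port
fuel, co2 = port_emissions(hotel_load_kw=3000, hours_in_port=100 * 24)
print(f"{fuel:.0f} t fuel, {co2:.0f} t CO2 per ship-year")
```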
Keeping these reports in mind, new regulatory norms have been mandated by the International Maritime Organization (IMO). The level of sulphur is one of the benchmarks in measuring the quality of fuel, and MARPOL Annex VI requires the use of <4.5% sulphur fuel, effective 2010. The target is to reduce world maritime sulphur output to <0.5% by 2020. Some regions (e.g., California) already require ships to switch to cleaner fuel when in their local waters. Cold ironing does away with the need to burn fossil fuel on board the ships while they are docked. Under this concept as it is legally implemented, ships visiting ports are hooked up to local grid power or other power sources, which are already regulated by local pollution norms. This externally sourced power serves the ship's internal cargo handling machinery and hotelling requirements. Effectively, all the power generating sources are shut down and the ship is hence cold-ironed. This brings immediate relief from pollution by shipboard emissions and allows a more holistic maintenance schedule to be followed by ship operators, who are typically hard pressed to maintain planned maintenance schedules due to commercial operating pressures. The immediate results are lowered heat output from ships, lowered air emissions, lowered risk of accidents from fuel-based machinery, lowered disturbance to the ecosystem, and various others. Concerns and problems Incompatibility of electricity parameters: ships, having been built in diverse international yards, have no uniform voltage and frequency requirement. Some ships use 220 volts at 50 Hz, some at 60 Hz, some others use 110 volts. Primary distribution voltage can vary from 440 volts to 11 kilovolts. Wide variations in load requirements, from a few hundred kW in the case of car carriers to a dozen or more MW in the case of passenger ships or reefer ships. Connectors and cables are not internationally standardised, though work has progressed in this direction. There are other legal implications to outsourcing the primary power source. The legal implications stem from possible impacts on international trade, commercial responsibilities of stakeholders and other risks as assessed. Some ports have glass fiber data communication systems installed. The standard for data communication via glass fiber should be as set in the IEC/IEEE 80005 part 2 standard, published in 2016. All these problems are addressable, and work has already begun on reducing ship emissions by cold ironing. Various studies are being conducted to fully implement a viable, controllable and monitored method of powering the most important arm of modern-day logistics, the merchant ships. The U.S. State of California is requiring a percentage of ships calling there to use shore power by 2014. The Port of Oakland is implementing a High Voltage Shore Connection (HVSC) at 6,600 volts. The first Hapag-Lloyd vessel to use the system, Dallas Express, docked there in December 2012. The electrical and mechanical equipment to interface the ship's electrical load with the shore power is in a 40-foot container at the vessel's stern. Initially 15 Hapag-Lloyd ships will receive the system. The Massachusetts Port Authority carried out a study of cold ironing and alternatives in 2016 that pointed out a number of problems, including the high peak power demand (13 MW for a cruise ship, 3 MW for a container ship) and the high cost of providing the necessary equipment and upgraded electrical power infrastructure for Boston Harbor. 
The study also expressed concern about loss of competitiveness without a U.S. East Coast regional agreement to install such systems. The ports of Halifax and Brooklyn have installed cold ironing at one cruise-ship berth each at a cost of $10 million and $20 million, respectively, mostly paid by government grants. See also Shorepower – for electrical supply of planes, trucks, etc. References External links Thesis Papoutsoglou – A Cold Ironing Study on Modern Ports, Implementation and Benefits Thriving for Worldwide Ports Thesis by Pamela Brieske – Cold Ironing (Externe Stromversorgung für Schiffe im Hafen, German, 2007) Key Factors and Barriers to Adoption of Cold Ironing in Europe Evaluation of cold ironing and speed reduction policies to reduce ship emissions near and at ports Air pollution control systems Nautical terminology Maritime transport Ports and harbours Electrical systems Power electronics Port infrastructure
Cold ironing
[ "Physics", "Engineering" ]
1,553
[ "Physical systems", "Electronic engineering", "Power electronics", "Electrical systems" ]
13,758,067
https://en.wikipedia.org/wiki/Air-free%20technique
Air-free techniques refer to a range of manipulations in the chemistry laboratory for the handling of compounds that are air-sensitive. These techniques prevent the compounds from reacting with components of air, usually water and oxygen; less commonly carbon dioxide and nitrogen. A common theme among these techniques is the use of a fine (100–10⁻³ Torr) or high (10⁻³–10⁻⁶ Torr) vacuum to remove air, and the use of an inert gas: preferably argon, but often nitrogen. The two most common types of air-free technique involve the use of a glovebox and a Schlenk line, although some rigorous applications use a high-vacuum line. In both methods, glassware (often Schlenk tubes) is pre-dried in ovens prior to use. It may be flame-dried to remove adsorbed water. Prior to coming into an inert atmosphere, vessels are further dried by purge-and-refill — the vessel is subjected to a vacuum to remove gases and water, and then refilled with inert gas. This cycle is usually repeated three times, or the vacuum is applied for an extended period of time. One of the differences between the use of a glovebox and a Schlenk line is where the purge-and-refill cycle is applied. When using a glovebox the purge-and-refill is applied to an airlock attached to the glovebox, commonly called the "port" or "ante-chamber". In contrast, when using a Schlenk line the purge-and-refill is applied directly to the reaction vessel through a hose or ground glass joint that is connected to the manifold. Glovebox The most straightforward type of air-free technique is the use of a glovebox. A "glove bag" uses the same idea, but is usually a poorer substitute because it is more difficult to purge and less well sealed. Inventive ways of accessing items beyond the reach of the gloves exist, such as the use of tongs and strings. The main drawbacks to using a glovebox are the cost of the glovebox and limited dexterity while wearing the gloves. In the glovebox, conventional laboratory equipment can often be set up and manipulated, despite the need to handle the apparatus with the gloves. By providing a sealed but recirculating atmosphere of the inert gas, the glovebox necessitates few other precautions. Cross contamination of samples due to poor technique is also problematic, especially where a glovebox is shared between workers using differing reagents, volatile ones in particular. Two styles have evolved in the use of gloveboxes for synthetic chemistry. In a more conservative mode, they are used solely to store, weigh, and transfer air-sensitive reagents. Reactions are thereafter carried out using Schlenk techniques. The gloveboxes are thus only used for the most air-sensitive stages in an experiment. In their more liberal use, gloveboxes are used for the entire synthetic operation, including reactions in solvents, work-up, and preparation of samples for spectroscopy. Not all reagents and solvents are acceptable for use in the glovebox, although different laboratories adopt different cultures. The "box atmosphere" is usually continuously deoxygenated over a copper catalyst. Certain volatile chemicals such as halogenated compounds and especially strongly coordinating species such as phosphines and thiols can be problematic because they irreversibly poison the copper catalyst. Because of this, many experimentalists choose to handle such compounds using Schlenk techniques.
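Returning to the purge-and-refill cycle described earlier, a short sketch shows why three cycles are the usual rule of thumb. Under the idealised assumptions of perfect mixing and no leaks, each cycle dilutes the remaining air by the ratio of the vacuum pressure to the refill (atmospheric) pressure; the 0.05 Torr figure below is an assumed working vacuum, not a value from the text.

```python
ATM_TORR = 760.0

def residual_air_fraction(vacuum_torr: float, cycles: int) -> float:
    """Fraction of the original air left after repeated purge-and-refill cycles."""
    return (vacuum_torr / ATM_TORR) ** cycles

# Assumed: a modest 0.05 Torr vacuum, within the "fine vacuum" range above.
for n in (1, 2, 3):
    print(f"{n} cycle(s): {residual_air_fraction(0.05, n):.1e} of the air remains")
# Three cycles leave ~2.8e-13 of the original air -- vastly better than a
# single evacuation, which is why the cycle is repeated.
```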
In the more liberal use of gloveboxes, it is accepted that the copper catalyst will require more frequent replacement, but this cost is considered an acceptable trade-off for the efficiency of conducting an entire synthesis within a protected environment. Schlenk line The other main techniques for the preparation and handling of air-sensitive compounds are associated with the use of a Schlenk line. The main techniques include: counterflow additions, where air-stable reagents are added to the reaction vessel against a flow of inert gas; the use of syringes and rubber septa (stoppers that reseal after puncturing) to transfer liquids and solutions; and cannula transfer, where liquids or solutions of air-sensitive reagents are transferred between different vessels stoppered with septa using a long thin tube known as a cannula. Liquid flow is achieved via vacuum or inert gas pressure. Glassware is usually connected via tightly fitting, greased ground glass joints. Round bends of glass tubing with ground glass joints may be used to adjust the orientation of various vessels. Filtrations may be accomplished with dedicated equipment. Associated preparations Commercially available purified inert gas (argon or nitrogen) is adequate for most purposes. However, for certain applications, it is necessary to further remove water and oxygen. This additional purification can be accomplished by piping the inert gas line through a heated column of copper, which converts the oxygen to copper oxide. Water is removed by piping the gas through a column of desiccant such as phosphorus pentoxide or molecular sieves. Air- and water-free solvents are also necessary. If high-purity solvents are available in nitrogen-purged Winchesters, they can be brought directly into the glovebox. For use with Schlenk technique, they can be quickly poured into Schlenk flasks charged with molecular sieves, and degassed. More typically, solvent is dispensed directly from a still or solvent purification column. Degassing Two procedures for degassing are common. The first is known as freeze-pump-thaw — the solvent is frozen under liquid nitrogen, and a vacuum is applied. Thereafter, the stopcock is closed and the solvent is thawed in warm water, allowing trapped bubbles of gas to escape. The second procedure is to simply subject the solvent to a vacuum. Stirring or mechanical agitation using an ultrasonicator is useful. Dissolved gases evolve first; once the solvent starts to evaporate, noted by condensation on the outside of the flask walls, the flask is refilled with inert gas. Both procedures are repeated three times. Drying Solvents are a major source of contamination in chemical reactions. Although traditional drying techniques involve distillation from an aggressive desiccant, molecular sieves are far superior. Aside from being inefficient, sodium as a desiccant (below its melting point) reacts only slowly with trace amounts of water. When, however, the desiccant is soluble, the speed of drying is accelerated, although it is still inferior to molecular sieves. Benzophenone is often used to generate such a soluble drying agent. An advantage of this application is the intense blue color of the ketyl radical anion. Thus, sodium/benzophenone can be used as an indicator of air-free and moisture-free conditions in the purification of solvents by distillation. Distillation stills are fire hazards and are increasingly being replaced by alternative solvent-drying systems.
Systems that filter deoxygenated solvents through columns filled with activated alumina are popular. Drying of solids can be brought about by storing the solid over a drying agent such as phosphorus pentoxide or silica gel, storing in a drying oven/vacuum-drying oven, heating under a high vacuum or in a drying pistol, or, to remove trace amounts of water, simply storing the solid in a glovebox that has a dry atmosphere. Alternatives Both these techniques require rather expensive equipment and can be time consuming. Where air-free requirements are not stringent, other techniques can be used. For example, a sacrificial excess of a reagent that reacts with water/oxygen can be used. The sacrificial excess in effect "dries" the reaction by reacting with the water (e.g. in the solvent). However, this method is only suitable where the impurities produced in this reaction are not in turn detrimental to the desired product of the reaction, or can be easily removed. Typically, reactions using such a sacrificial excess are only effective when done on a reasonably large scale, such that this by-reaction is negligible compared to the desired product reaction. For example, when preparing Grignard reagents, magnesium (the cheapest reagent) is often used in excess, which reacts to remove trace water, either by reacting directly with water to give magnesium hydroxide or via the in situ formation of the Grignard reagent, which in turn reacts with water (e.g. R-Mg-X + H2O → HO-Mg-X + R-H). To maintain the resultant "dry" environment it is usually sufficient to connect a guard tube filled with calcium chloride to the reflux condenser to slow moisture re-entering the reaction over time, or to connect an inert gas line. Drying can also be achieved by the use of in situ desiccants such as molecular sieves, or the use of azeotropic distillation techniques, e.g. with a Dean-Stark apparatus. Detection of O2 and water A number of reagents can be used to detect and/or destroy O2 and water. Deeply colored radicals are often used because they bleach upon reaction with water and oxygen. One such reagent is benzophenone ketyl, which is easily generated by the reaction Na + Ph2CO → Na+[Ph2CO]•−. This deep purple ketyl rapidly gives colorless products upon oxidation or hydrolysis. Another reagent is generated in situ by treatment of titanocene dichloride with zinc. That blue-green Ti(III)-containing solution is highly sensitive to oxygen. Such solutions are useful for testing the inertness of an atmosphere within a glovebox. See also Sparging (chemistry) Degasification Schlenk-frit References External links Gallery Laboratory techniques
Air-free technique
[ "Chemistry", "Engineering" ]
2,072
[ "Vacuum systems", "Air-free techniques", "nan" ]
1,645,042
https://en.wikipedia.org/wiki/Valence%20%28chemistry%29
In chemistry, the valence (US spelling) or valency (British spelling) of an atom is a measure of its combining capacity with other atoms when it forms chemical compounds or molecules. Valence is generally understood to be the number of chemical bonds that each atom of a given chemical element typically forms. Double bonds are considered to be two bonds, triple bonds to be three, quadruple bonds to be four, quintuple bonds to be five and sextuple bonds to be six. In most compounds, the valence of hydrogen is 1, of oxygen is 2, of nitrogen is 3, and of carbon is 4. Valence is not to be confused with the related concepts of the coordination number, the oxidation state, or the number of valence electrons for a given atom. Description The valence is the combining capacity of an atom of a given element, determined by the number of hydrogen atoms that it combines with. In methane, carbon has a valence of 4; in ammonia, nitrogen has a valence of 3; in water, oxygen has a valence of 2; and in hydrogen chloride, chlorine has a valence of 1. Chlorine, as it has a valence of one, can be substituted for hydrogen in many compounds. Phosphorus has a valence of 3 in phosphine (PH3) and a valence of 5 in phosphorus pentachloride (PCl5), which shows that an element may exhibit more than one valence. The structural formula of a compound represents the connectivity of the atoms, with lines drawn between two atoms to represent bonds. Tables showing examples of different compounds, their structural formulas, and the valences for each element of the compound are often used to illustrate this. Definition Valence is defined by the IUPAC as: The maximum number of univalent atoms (originally hydrogen or chlorine atoms) that may combine with an atom of the element under consideration, or with a fragment, or for which an atom of this element can be substituted. An alternative modern description is: The number of hydrogen atoms that can combine with an element in a binary hydride, or twice the number of oxygen atoms combining with an element in its oxide or oxides. This definition differs from the IUPAC definition as an element can be said to have more than one valence. Historical development The etymology of the words valence (plural valences) and valency (plural valencies) traces back to 1425, meaning "extract, preparation", from Latin valentia "strength, capacity", from the earlier valor "worth, value", and the chemical meaning referring to the "combining power of an element" is recorded from 1884, from German Valenz. The concept of valence was developed in the second half of the 19th century and helped successfully explain the molecular structure of inorganic and organic compounds. The quest for the underlying causes of valence led to the modern theories of chemical bonding, including the cubical atom (1902), Lewis structures (1916), valence bond theory (1927), molecular orbitals (1928), valence shell electron pair repulsion theory (1958), and all of the advanced methods of quantum chemistry. In 1789, William Higgins published views on what he called combinations of "ultimate" particles, which foreshadowed the concept of valency bonds. If, for example, according to Higgins, the force between the ultimate particle of oxygen and the ultimate particle of nitrogen were 6, then the strength of the force would be divided accordingly, and likewise for the other combinations of ultimate particles.
The exact inception, however, of the theory of chemical valencies can be traced to an 1852 paper by Edward Frankland, in which he combined the older radical theory with thoughts on chemical affinity to show that certain elements have the tendency to combine with other elements to form compounds containing 3 equivalents (i.e., 3-atom groups) or 5 equivalents (i.e., 5-atom groups) of the attached elements. According to him, this is the manner in which their affinities are best satisfied, and by following these examples and postulates, he declared how obvious it is that a fixed combining power governs such combinations. This "combining power" was afterwards called quantivalence or valency (and valence by American chemists). In 1857 August Kekulé proposed fixed valences for many elements, such as 4 for carbon, and used them to propose structural formulas for many organic molecules, which are still accepted today. Lothar Meyer's 1864 book, Die modernen Theorien der Chemie, contained an early version of the periodic table containing 28 elements, which for the first time classified elements into six families by their valence. Work on organizing the elements by atomic weight had until then been stymied by the widespread use of equivalent weights for the elements, rather than atomic weights. Most 19th-century chemists defined the valence of an element as the number of its bonds without distinguishing different types of valence or of bond. However, in 1893 Alfred Werner described transition metal coordination complexes, in which he distinguished principal and subsidiary valences (German: 'Hauptvalenz' and 'Nebenvalenz'), corresponding to the modern concepts of oxidation state and coordination number respectively. For main-group elements, in 1904 Richard Abegg considered positive and negative valences (maximal and minimal oxidation states), and proposed Abegg's rule to the effect that their difference is often 8. An alternative definition of valence, developed in the 1920s and having modern proponents, differs in cases where an atom's formal charge is not zero. It defines the valence of a given atom in a covalent molecule as the number of electrons that an atom has used in bonding: valence = number of electrons in valence shell of free atom − number of non-bonding electrons on atom in molecule, or equivalently: valence = number of bonds + formal charge. In this convention, the nitrogen in an ammonium ion bonds to four hydrogen atoms, but it is considered to be pentavalent because all five of nitrogen's valence electrons participate in the bonding. Electrons and valence The Rutherford model of the nuclear atom (1911) showed that the exterior of an atom is occupied by electrons, which suggests that electrons are responsible for the interaction of atoms and the formation of chemical bonds. In 1916, Gilbert N. Lewis explained valence and chemical bonding in terms of a tendency of (main-group) atoms to achieve a stable octet of 8 valence-shell electrons. According to Lewis, covalent bonding leads to octets by the sharing of electrons, and ionic bonding leads to octets by the transfer of electrons from one atom to the other. The term covalence is attributed to Irving Langmuir, who stated in 1919 that "the number of pairs of electrons which any given atom shares with the adjacent atoms is called the covalence of that atom". The prefix co- means "together", so that a co-valent bond means that the atoms share a valence.
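As a quick illustration of the "bonds + formal charge" convention above, here is a minimal sketch; the molecules and their bond counts are standard textbook values, tabulated only for illustration.

```python
def valence(num_bonds: int, formal_charge: int) -> int:
    """Valence as electrons used in bonding: bonds + formal charge."""
    return num_bonds + formal_charge

examples = [
    ("N in NH3 (ammonia)",    3, 0),
    ("N in NH4+ (ammonium)",  4, +1),   # pentavalent in this convention
    ("C in CH4 (methane)",    4, 0),
    ("O in H3O+ (hydronium)", 3, +1),
]
for name, bonds, charge in examples:
    print(f"{name}: valence = {valence(bonds, charge)}")
# Ammonium nitrogen comes out as 4 + 1 = 5, matching the text.
```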
Since Langmuir's time, it has become more common to speak of covalent bonds rather than valence, which has fallen out of use in higher-level work with the advances in the theory of chemical bonding, but it is still widely used in elementary studies, where it provides a heuristic introduction to the subject. In the 1930s, Linus Pauling proposed that there are also polar covalent bonds, which are intermediate between covalent and ionic, and that the degree of ionic character depends on the difference of electronegativity of the two bonded atoms. Pauling also considered hypervalent molecules, in which main-group elements have apparent valences greater than the maximum of 4 allowed by the octet rule. For example, in the sulfur hexafluoride molecule (SF6), Pauling considered that the sulfur forms 6 true two-electron bonds using sp3d2 hybrid atomic orbitals, which combine one s, three p and two d orbitals. However, more recently, quantum-mechanical calculations on this and similar molecules have shown that the role of d orbitals in the bonding is minimal, and that the molecule should be described as having 6 polar covalent (partly ionic) bonds made from only four orbitals on sulfur (one s and three p) in accordance with the octet rule, together with six orbitals on the fluorines. Similar calculations on transition-metal molecules show that the role of p orbitals is minor, so that one s and five d orbitals on the metal are sufficient to describe the bonding. Common valences For elements in the main groups of the periodic table, the valence can vary between 1 and 8. Many elements have a common valence related to their position in the periodic table, and nowadays this is rationalised by the octet rule. The Greek/Latin numeral prefixes (mono-/uni-, di-/bi-, tri-/ter-, and so on) are used to describe ions in the charge states 1, 2, 3, and so on, respectively. Polyvalence or multivalence refers to species that are not restricted to a specific number of valence bonds. Species with a single charge are univalent (monovalent). For example, the Cs+ cation is a univalent or monovalent cation, the Ca2+ cation is a divalent cation, and the Fe3+ cation is a trivalent cation. Unlike Cs and Ca, Fe can also exist in other charge states, notably 2+ and 4+, and is thus known as a multivalent (polyvalent) ion. Transition metals and metals to the right are typically multivalent but there is no simple pattern predicting their valency. † The same adjectives are also used in medicine to refer to vaccine valence, with the slight difference that in the latter sense, quadri- is more common than tetra-. ‡ As demonstrated by hit counts in Google web search and Google Books search corpora (accessed 2017). § A few other forms can be found in large English-language corpora (for example, *quintavalent, *quintivalent, *decivalent), but they are not the conventionally established forms in English and thus are not entered in major dictionaries. Valence versus oxidation state Because of the ambiguity of the term valence, other notations are currently preferred. Besides the lambda notation, as used in the IUPAC nomenclature of inorganic chemistry, oxidation state is a clearer indication of the electronic state of atoms in a molecule. The oxidation state of an atom in a molecule gives the number of valence electrons it has gained or lost. In contrast to the valency number, the oxidation state can be positive (for an electropositive atom) or negative (for an electronegative atom).
Elements in a high oxidation state have an oxidation state higher than +4, and elements in a high valence state (hypervalent elements) have a valence higher than 4. For example, in perchlorates (ClO4−), chlorine has 7 valence bonds (thus, it is heptavalent, in other words, it has valence 7), and it has oxidation state +7; in ruthenium tetroxide (RuO4), ruthenium has 8 valence bonds (thus, it is octavalent, in other words, it has valence 8), and it has oxidation state +8. In some molecules, there is a difference between valence and oxidation state for a given atom. For example, in the disulfur decafluoride molecule (S2F10), each sulfur atom has 6 valence bonds (5 single bonds with fluorine atoms and 1 single bond with the other sulfur atom). Thus, each sulfur atom is hexavalent or has valence 6, but has oxidation state +5. In the dioxygen molecule (O2), each oxygen atom has 2 valence bonds and so is divalent (valence 2), but has oxidation state 0. In acetylene (C2H2), each carbon atom has 4 valence bonds (1 single bond with a hydrogen atom and a triple bond with the other carbon atom). Each carbon atom is tetravalent (valence 4), but has oxidation state −1. Examples * The perchlorate ion is monovalent, in other words, it has valence 1. ** Valences may also be different from absolute values of oxidation states due to different polarity of bonds. For example, in dichloromethane (CH2Cl2), carbon has valence 4 but oxidation state 0. *** Iron oxides appear in a crystal structure, so no typical molecule can be identified. In ferrous oxide, Fe has oxidation state +2; in ferric oxide, oxidation state +3. "Maximum number of bonds" definition Frankland took the view that the valence (he used the term "atomicity") of an element was a single value that corresponded to the maximum value observed. The number of unused valencies on atoms of what are now called the p-block elements is generally even, and Frankland suggested that the unused valencies saturated one another. For example, nitrogen has a maximum valence of 5; in forming ammonia, two valencies are left unattached. Sulfur has a maximum valence of 6; in forming hydrogen sulphide, four valencies are left unattached. The International Union of Pure and Applied Chemistry (IUPAC) has made several attempts to arrive at an unambiguous definition of valence. The current version, adopted in 1994: The maximum number of univalent atoms (originally hydrogen or chlorine atoms) that may combine with an atom of the element under consideration, or with a fragment, or for which an atom of this element can be substituted. Hydrogen and chlorine were originally used as examples of univalent atoms, because of their nature to form only one single bond. Hydrogen has only one valence electron and can form only one bond with an atom that has an incomplete outer shell. Chlorine has seven valence electrons and can form only one bond with an atom that donates a valence electron to complete chlorine's outer shell. However, chlorine can also have oxidation states from +1 to +7 and can form more than one bond by donating valence electrons. Hydrogen has only one valence electron, but it can form bonds with more than one atom. In the bifluoride ion (HF2−), for example, it forms a three-center four-electron bond with two fluoride atoms. Another example is the three-center two-electron bond in diborane (B2H6). Maximum valences of the elements Maximum valences for the elements are based on the data from the list of oxidation states of the elements.
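To keep the distinction straight, the article's own examples can be lined up side by side. The values in the sketch below are transcribed from the text above, not computed.

```python
# (atom, species, valence, oxidation state) -- from the examples above.
rows = [
    ("Cl", "ClO4- (perchlorate)",      7, +7),
    ("Ru", "RuO4",                     8, +8),
    ("S",  "S2F10",                    6, +5),
    ("O",  "O2",                       2,  0),
    ("C",  "C2H2 (acetylene)",         4, -1),
    ("C",  "CH2Cl2 (dichloromethane)", 4,  0),
]
print(f"{'atom':<5}{'species':<26}{'valence':>8}{'ox. state':>11}")
for atom, species, v, ox in rows:
    print(f"{atom:<5}{species:<26}{v:>8}{ox:>+11}")
# Valence and oxidation state agree only when bond polarity and formal
# charge line up; S2F10, O2 and C2H2 show the mismatch.
```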
See also Abegg's rule Oxidation state References Chemical bonding Chemical properties Dimensionless numbers of chemistry
Valence (chemistry)
[ "Physics", "Chemistry", "Materials_science" ]
3,108
[ "Dimensionless numbers of chemistry", "Chemical bonding", "Condensed matter physics", "nan" ]
1,645,046
https://en.wikipedia.org/wiki/Hydrant
A hydrant is an outlet from a fluid main often consisting of an upright pipe with a valve attached, from which fluid (e.g. water or fuel) can be tapped. Depending on the fluid involved, the term may refer to: Fire hydrant for firefighting water supply Flushing hydrant for cleaning water mains Hydrant network systems used to transport aviation fuel from an oil depot to an airport, to fuel aircraft Snowmaking hydrants, which use water and air Standpipe (street), a type of domestic or neighbourhood hydrant for dispensing water when supply is interrupted or absent Fluid dynamics
Hydrant
[ "Chemistry", "Engineering" ]
125
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
1,646,818
https://en.wikipedia.org/wiki/Radius%20gauge
A radius gauge, also known as a fillet gauge, is a tool used to measure the radius of an object. Radius gauges require a bright light behind the object to be measured. The gauge is placed against the edge to be checked and any light leakage between the blade and edge indicates a mismatch that requires correction. A good set of gauges will offer both convex and concave sections, and allow for their application in awkward locations. Every leaf has a different radius, for example with radius intervals of 0.25 mm or 0.5 mm. The leaves are made of stainless steel. Each gauge is one of two types, either internal or external, which are used to check the radius of inner and outer surfaces, respectively. See also Thread pitch gauge Spherometer, an instrument for the precise measurement of radii References Dimensional instruments Metalworking measuring instruments Radii
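A small sketch of how such a set is used in practice: given the leaf radii in a set (here a hypothetical 0.25 mm-interval set, not a specific commercial product), pick the leaf closest to a target radius and report the residual mismatch that would show up as light leakage.

```python
def best_leaf(target_radius_mm: float, leaves_mm: list[float]) -> tuple[float, float]:
    """Return (closest leaf radius, absolute mismatch) for a target radius."""
    leaf = min(leaves_mm, key=lambda r: abs(r - target_radius_mm))
    return leaf, abs(leaf - target_radius_mm)

leaves = [1.0 + 0.25 * i for i in range(25)]  # 1.00 mm to 7.00 mm in 0.25 mm steps
leaf, error = best_leaf(3.6, leaves)
print(f"Closest leaf: {leaf} mm (mismatch {error:.2f} mm)")  # 3.5 mm, 0.10 mm off
```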
Radius gauge
[ "Physics", "Mathematics" ]
185
[ "Quantity", "Dimensional instruments", "Physical quantities", "Size" ]
1,646,838
https://en.wikipedia.org/wiki/Grid%20energy%20storage
Grid energy storage, also known as large-scale energy storage, refers to technologies connected to the electrical power grid that store energy for later use. These systems help balance supply and demand by storing excess electricity from variable renewables such as solar and inflexible sources like nuclear power, releasing it when needed. They further provide essential grid services, such as helping to restart the grid after a power outage. The largest form of grid storage is pumped-storage hydroelectricity, with utility-scale batteries and behind-the-meter batteries coming second and third. Lithium-ion batteries are highly suited for shorter duration storage up to 8 hours. Flow batteries and compressed air energy storage may provide storage for medium duration. Two forms of storage are suited for long-duration storage: green hydrogen, produced via electrolysis, and thermal energy storage. Energy storage is one option for making grids more flexible. Another solution is the use of more dispatchable power plants that can change their output rapidly, for instance peaking power plants to fill in supply gaps. Demand response can shift load to other times, and interconnections between regions can balance out fluctuations in renewables production. The price of storage technologies typically goes down with experience. For instance, lithium-ion batteries have been getting some 20% cheaper for each doubling of worldwide capacity. Systems with under 40% variable renewables need only short-term storage. At 80%, medium-duration storage becomes essential, and beyond 90%, long-duration storage does too. The economics of long-duration storage is challenging, and alternative flexibility options like demand response may be more economic. Roles in the power grid Any electrical power grid must match electricity production to consumption, both of which vary significantly over time. Energy derived from solar and wind sources varies with the weather on time scales ranging from less than a second to weeks or longer. Nuclear power is less flexible than fossil fuels, meaning it cannot easily match the variations in demand. Thus, low-carbon electricity without storage presents special challenges to electric utilities. Electricity storage is one of the three key ways to replace flexibility from fossil fuels in the grid. Other options are demand-side response, in which consumers change when they use electricity or how much they use. For instance, households may have cheaper night tariffs to encourage them to use electricity at night. Industry and commercial consumers can also change their demand to meet supply. Improved network interconnection smooths the variations of renewables production and demand. When there is little wind in one location, another might have a surplus of production. Expansion of transmission lines usually takes a long time. Energy storage has a large set of roles in the electricity grid and can therefore provide many different services. For instance, it can perform arbitrage by storing cheap electricity until the price rises, it can help make the grid more stable, and it can help reduce investment in transmission infrastructure. The type of service provided by storage depends on who manages the technology, and whether the technology is based alongside generation of electricity, within the network, or at the side of consumption. Providing short-term flexibility is a key role for energy storage.
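The "20% cheaper per doubling" figure above is an experience curve (Wright's law) and is easy to make concrete. In this sketch only the learning rate comes from the text; the starting price is a purely hypothetical number.

```python
import math

def price_after_growth(p0: float, capacity_ratio: float, learning_rate: float) -> float:
    """Price once cumulative capacity has grown by capacity_ratio times."""
    doublings = math.log2(capacity_ratio)
    return p0 * (1.0 - learning_rate) ** doublings

p0 = 300.0  # hypothetical starting price, $/kWh
for ratio in (2, 4, 16, 256):
    p = price_after_growth(p0, ratio, learning_rate=0.20)
    print(f"{ratio:>4}x cumulative capacity -> ${p:5.0f}/kWh")
# 256x capacity is 8 doublings: 300 * 0.8**8 is roughly $50/kWh.
```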
On the generation side, storage can help with the integration of variable renewable energy, storing it when there is an oversupply of wind and solar and electricity prices are low. More generally, it can exploit the changes in prices of electricity over time in the wholesale market, charging when electricity is cheap and selling when it is expensive. It can further help with grid congestion (where there is insufficient capacity on transmission lines). Consumers can use storage to consume more of their self-produced electricity (for instance from rooftop solar power). Storage can also be used to provide essential grid services. On the generation side, storage can smooth out the variations in production, for instance for solar and wind. It can assist in a black start after a power outage. On the network side, these services include frequency regulation (continuously) and frequency response (after unexpected changes in supply or demand). On the consumption side, storage can help to improve the quality of the delivered electricity in less stable grids. Investment in storage may make some investments in the transmission and distribution network unnecessary, or may allow them to be scaled down. Additionally, storage can ensure there is sufficient capacity to meet peak demand within the electricity grid. Finally, in off-grid home systems or mini-grids, electricity storage can help provide energy access in areas that were previously not connected to the electricity grid. Forms Electricity can be stored directly for a short time in capacitors, somewhat longer electrochemically in batteries, and much longer chemically (e.g. hydrogen), mechanically (e.g. pumped hydropower) or as heat. The first pumped hydroelectricity was constructed at the end of the 19th century around the Alps in Italy, Austria, and Switzerland. The technique rapidly expanded during the 1960s to 1980s nuclear boom, due to nuclear power's inability to quickly adapt to changes in electricity demand. In the 21st century, interest in storage surged due to the rise of sustainable energy sources, which are often weather-dependent. Commercial batteries have been available for over a century, but their widespread use in the power grid is more recent, with only 1 GW available in 2013. Batteries Lithium-ion batteries Lithium-ion batteries are the most commonly used batteries for grid applications, following the application of batteries in electric vehicles (EVs). In comparison with EVs, grid batteries require less energy density, meaning that more emphasis can be put on costs, the ability to charge and discharge often, and lifespan. This has led to a shift towards lithium iron phosphate batteries (LFP batteries), which are cheaper and last longer than traditional lithium-ion batteries. Costs of batteries are declining rapidly; from 2010 to 2023 costs fell by 90%. Utility-scale systems account for two thirds of added capacity, and home applications (behind-the-meter) for one third. Lithium-ion batteries are highly suited to short-duration storage (<8 h) due to cost and the degradation associated with high states of charge. Electric vehicles The electric vehicle fleet has a large overall battery capacity, which can potentially be used for grid energy storage. This could be in the form of vehicle-to-grid (V2G), where cars store energy when they are not in use, or by repurposing batteries from cars at the end of the vehicle's life.
Car batteries typically range between 33 and 100 kWh; for comparison, a typical upper-middle-class household in Spain might use some 18 kWh in a day. By 2030, batteries in electric vehicles may be able to meet all short-term storage demand globally. There have been more than 100 V2G pilot projects globally. The effect of V2G charging on battery life can be positive or negative. Increased cycling of batteries can lead to faster degradation, but due to better management of the state of charge and gentler charging and discharging, V2G might instead increase the lifetime of batteries. Second-hand batteries may be usable for stationary grid storage for roughly 6 years, when their capacity drops from roughly 80% to 60% of the initial capacity. LFP batteries are particularly suitable for reuse, as they degrade less than other lithium-ion batteries, and recycling is less attractive because their materials are not as valuable. Other battery types In redox flow batteries, energy is stored in liquids, which are placed in two separate tanks. When charging or discharging, the liquids are pumped into a cell with the electrodes. The amount of energy stored (as set by the size of the tanks) can be adjusted separately from the power output (as set by the speed of the pumps). Flow batteries have the advantages of low capital cost for charge-discharge durations over 4 h, and of long durability (many years). Flow batteries are inferior to lithium-ion batteries in terms of energy efficiency, averaging efficiencies between 60% and 75%. Vanadium redox batteries are the most commercially advanced type of flow battery, with roughly 40 companies making them. Sodium-ion batteries are a possible alternative to lithium-ion batteries, as they are less flammable and use cheaper and less critical materials. They have a lower energy density, and possibly a shorter lifespan. If produced at the same scale as lithium-ion batteries, they may become 20% to 30% cheaper. Iron-air batteries may be suitable for even longer duration storage than flow batteries (weeks), but the technology is not yet mature. Electrical Storage in supercapacitors works well for applications where a lot of power is needed for a short amount of time. In the power grid, they are therefore mostly used in short-term frequency regulation. Hydrogen and chemical storage Various power-to-gas technologies exist that can convert excess electricity into a chemical that is easier to store. The lowest-cost and most efficient one is hydrogen. However, it is easier to use synthetic methane with existing infrastructure and appliances, as it is very similar to natural gas. There have been a number of demonstration plants where hydrogen is burned in gas turbines, either co-firing with natural gas or on its own. Similarly, a number of coal plants have demonstrated that it is possible to co-fire ammonia when burning coal. In 2022, there was also a small pilot to burn pure ammonia in a gas turbine. A portion of existing gas turbines are capable of co-firing hydrogen, which means there is, as a lower estimate, 80 GW of capacity ready to burn hydrogen. Hydrogen Hydrogen can be used as a long-term storage medium. Green hydrogen is produced from the electrolysis of water and converted back into electricity in an internal combustion engine or a fuel cell, with a round-trip efficiency of roughly 41%. Together with thermal storage, it is expected to be best suited to seasonal energy storage. Hydrogen can be stored aboveground in tanks or underground in larger quantities.
Underground storage is easiest in salt caverns, but only a certain number of places have suitable geology. Storage in porous rocks, for instance in empty gas fields and some aquifers, can store hydrogen at a larger scale, but this type of storage may have some drawbacks. For instance, some of the hydrogen may leak, or react into H2S or methane. Ammonia Hydrogen can be converted into ammonia in a reaction with nitrogen in the Haber-Bosch process. Ammonia, a gas at room temperature, is more expensive to produce than hydrogen. However, it can be stored more cheaply than hydrogen. Tank storage is usually done in liquid form, at between one and ten times atmospheric pressure and at temperatures down to about −33 °C (the boiling point of ammonia at atmospheric pressure). Ammonia has multiple uses besides being an energy carrier: it is the basis for the production of many chemicals; the most common use is for fertilizer. It can be used for power generation directly, or converted back to hydrogen first. Alternatively, it has potential applications as a fuel in shipping. Methane It is possible to further convert hydrogen into methane via the Sabatier reaction, a chemical reaction which combines CO2 and H2. While the reaction that converts CO from gasified coal into methane is mature, the process to form methane out of CO2 is less so. Efficiencies of around 80% one-way can be achieved; that is, some 20% of the energy in hydrogen is lost in the reaction. Mechanical Flywheel Flywheels store energy in the form of rotational kinetic energy. They are suited to supplying high levels of electricity over minutes and can also be charged rapidly. They have a long lifetime and can be used in settings with widely varying temperatures. The technology is mature, but more expensive than batteries and supercapacitors and not used frequently. Pumped hydro Pumped-storage hydroelectricity (PSH) remains the largest form of grid energy storage globally, with an installed capacity of 181 GW, surpassing the combined capacity of utility-scale and behind-the-meter battery storage, which totaled approximately 88 GW. PSH is particularly effective for managing daily fluctuations in energy demand. During periods of low demand, water is pumped to a higher-elevation reservoir, and during peak demand, the stored water is released to generate electricity through turbines. The system has an efficiency rate of 75% to 85% and can quickly respond to changes in demand, typically within seconds to minutes. While traditional PSH systems require specific geographical conditions, alternative designs have been proposed. These include utilizing deep salt caverns or constructing hollow structures on the seabed, where the ocean serves as the upper reservoir. However, PSH construction is often expensive, time-consuming, and can have significant environmental and social impacts on nearby communities. Innovative solutions, such as installing floating solar panels on reservoirs, can enhance the efficiency of PSH systems. These panels reduce water evaporation and benefit from cooling by the water surface, which improves their energy generation efficiency. Hydroelectric dams Hydroelectric dams with large reservoirs can also be operated to provide peak generation at times of peak demand. Water is stored in the reservoir during periods of low demand and released through the plant when demand is higher. While technically no electricity is stored, the net effect is similar to that of pumped storage. The amount of storage available in hydroelectric dams is much larger than in pumped storage.
Upgrades may be needed so that these dams can respond to variable demand. For instance, additional investment may be needed in transmission lines, or additional turbines may need to be installed to increase the peak output from the dam. Dams usually have multiple purposes. As well as energy generation, they often play a role in flood defense and protection of ecosystems, recreation, and they supply water for irrigation. This means it is not always possible to change their operation much, but even with low flexibility, they may still play an important role in responding to changes in wind and solar production. Gravity Alternative methods that use gravity include storing energy by moving large solid masses upward against gravity. This can be achieved inside old mine shafts or in specially constructed towers where heavy weights are winched up to store energy and allowed a controlled descent to release it. Compressed air Compressed air energy storage (CAES) stores electricity by compressing air. The compressed air is typically stored in large underground caverns. The expanding air can be used to drive turbines, converting the energy back into electricity. As air cools when expanding, some heat needs to be added in this stage to prevent freezing. This heat can be provided by a low-carbon source or, in the case of advanced CAES, by reusing the heat that is released when air is compressed. There are three advanced CAES projects in operation in China. Typical efficiencies of advanced CAES are between 60% and 80%. Liquid air or CO2 Another electricity storage method is to compress and cool air, turning it into liquid air, which can be stored and expanded when needed, turning a turbine to generate electricity. This is called liquid air energy storage (LAES). The air would be cooled to temperatures of around −196 °C to become liquid. Like with compressed air, heat is needed for the expansion step. In the case of LAES, low-grade industrial heat can be used for this. Energy efficiency for LAES lies between 50% and 70%. LAES is moving from pre-commercial to commercial deployment. An alternative is the compression of CO2 to store electricity. Thermal Electricity can be directly stored thermally with a Carnot battery. A Carnot battery is a type of energy storage system that stores electricity in heat storage and converts the stored heat back to electricity via thermodynamic cycles (for instance, a turbine). While less efficient than pumped hydro or battery storage, this type of system is expected to be cheap and can provide long-duration storage. A pumped-heat electricity storage system is a Carnot battery that uses a reversible heat pump to convert the electricity into heat. It usually stores the energy in both a hot and a cold reservoir. To achieve decent efficiencies (>50%), the temperature ratio between the two must reach a factor of 5. Thermal energy storage is also used in combination with concentrated solar power (CSP). In CSP, solar energy is first converted into heat, and then either directly converted into electricity or first stored. The energy is released when there is little or no sunshine. This means that CSP can be used as a dispatchable (flexible) form of generation. The energy in a CSP system can for instance be stored in molten salts or in a solid medium such as sand. Finally, heating and cooling systems in buildings can be controlled to store thermal energy in either the building's mass or dedicated thermal storage tanks.
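Before turning to economics, it helps to line up the round-trip efficiencies quoted so far in this article. The ranges in the sketch below are transcribed from the text; the midpoint column is a simple average added only for comparison, and hydrogen is given as the single ~41% figure quoted earlier.

```python
technologies = {
    "Pumped hydro (PSH)":  (0.75, 0.85),
    "Advanced CAES":       (0.60, 0.80),
    "Flow batteries":      (0.60, 0.75),
    "Liquid air (LAES)":   (0.50, 0.70),
    "Hydrogen round trip": (0.41, 0.41),  # single figure given in the text
}
for name, (lo, hi) in technologies.items():
    print(f"{name:<22} {lo:.0%}-{hi:.0%}  (midpoint {(lo + hi) / 2:.0%})")
# Per MWh charged, pumped hydro returns ~0.80 MWh; hydrogen only ~0.41 MWh.
```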
Such thermal storage in buildings can provide load-shifting or even more complex ancillary services by increasing power consumption (charging the storage) during off-peak times and lowering power consumption (discharging the storage) during higher-priced peak times. Economics Costs The levelized cost of storing electricity (LCOS) is a measure of the lifetime costs of storing electricity per MWh of electricity discharged. It includes investment costs, but also operational costs and charging costs. It depends highly on storage type and purpose, whether subsecond-scale frequency regulation, minute/hour-scale peaker plants, or day/week-scale seasonal storage. For power applications (for instance around ancillary services or black starts), a similar metric is the annuitized capacity cost (ACC), which measures the lifetime costs per kW. ACC is lowest when there are few cycles (<300) and when the discharge is less than one hour. This is because the technology is reimbursed only when it provides spare capacity, not when it is discharged. The cost of storage is coming down following technology-dependent experience curves: the price drops by a characteristic fraction for each doubling in cumulative capacity (or experience). Lithium-ion battery prices are falling fast: the price utilities pay for them falls 19% with each doubling of capacity. Hydrogen production via electrolysis has a similar learning rate, but it is much more uncertain. Vanadium-flow batteries typically get 14% cheaper for each doubling of capacity. Pumped hydropower has not seen prices fall much with increased experience. Market and system value There are four categories of services which provide economic value for storage: those related to power quality (such as frequency regulation), reliability (ensuring peak demand can be met), better use of assets in the system (e.g. avoiding transmission investments), and arbitrage (exploiting price differences over time). Before 2020, most value for storage was in providing power quality services. Arbitrage is the service with the largest economic potential for storage applications. In systems with under 40% of variable renewables, only short-term storage (of less than 4 hours) is needed for integration. When the share of variable renewables climbs to 80%, medium-duration storage (between 4 and 16 hours, for instance compressed air) is needed. Above 90%, large-scale long-duration storage is required. The economics of long-duration storage is challenging even then, as the costs are high. Alternative flexibility options, such as demand response, network expansions or flexible generation (geothermal or fossil gas with carbon capture and storage), may be lower-cost. Like renewables, storage will "cannibalise" its own income, but even more strongly. That is, with more storage on the market, there is less of an opportunity to do arbitrage or deliver other services to the grid. How markets are designed impacts revenue potential too. The income from arbitrage is quite variable between years, whereas markets that have capacity payments likely show less volatility. Electricity storage is not 100% efficient, so more electricity needs to be bought than can be sold. This implies that if there is only a small variation in price, it may not be economical to charge and discharge. For instance, if the storage application is 75% efficient, the price at which the electricity is sold needs to be at least 1.33 times the price for which it was bought.
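The 1.33 figure is simply the reciprocal of the round-trip efficiency, ignoring all other operating costs; a one-line sketch makes the relationship explicit. The efficiency values other than 75% are taken from figures quoted earlier in this article.

```python
def min_sell_to_buy_ratio(round_trip_efficiency: float) -> float:
    """Minimum sell/buy price ratio for arbitrage to break even (losses only)."""
    return 1.0 / round_trip_efficiency

for eff in (0.85, 0.75, 0.41):  # pumped hydro, the article's example, hydrogen
    print(f"{eff:.0%} efficient -> sell price >= {min_sell_to_buy_ratio(eff):.2f}x buy price")
# 75% efficiency reproduces the factor of ~1.33 quoted above; at hydrogen's
# ~41% the sell price must be ~2.4x the buy price before other costs.
```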
Typically, electricity prices vary most between day and night, which means that storage up to 8 hours has relatively high potential for profit. See also Distributed generation Energy storage as a service (ESaaS) List of energy storage projects Power-to-X U.S. Department of Energy International Energy Storage Database, a list of grid energy storage projects Virtual power plant References Cited sources External links UK Government report on the Benefits of long-duration electricity storage (Aug 2022) Power engineering
Grid energy storage
[ "Engineering" ]
4,173
[ "Power engineering", "Electrical engineering", "Energy engineering" ]
1,647,676
https://en.wikipedia.org/wiki/Bond%20energy
In chemistry, bond energy (BE) is one measure of the strength of a chemical bond. It is sometimes called the mean bond energy, bond enthalpy, average bond enthalpy, or bond strength. IUPAC defines bond energy as the average value of the gas-phase bond-dissociation energy (usually at a temperature of 298.15 K) for all bonds of the same type within the same chemical species. The bond dissociation energy (enthalpy) is also referred to as bond disruption energy, bond energy, bond strength, or binding energy (abbreviation: BDE, BE, or D). It is defined as the standard enthalpy change of the following fission: R—X → R + X. The BDE, denoted by Dº(R—X), is usually derived by the thermochemical equation Dº(R—X) = ΔHfº(R) + ΔHfº(X) − ΔHfº(RX). This equation tells us that the BDE for a given bond is equal to the energy of the individual components that make up the bond when they are free and unbonded, minus the energy of the components when they are bonded together. These energies are given by the enthalpy of formation ΔHfº of the components in each state. The enthalpy of formation of a large number of atoms, free radicals, ions, clusters and compounds is available from the websites of NIST, NASA, CODATA, and IUPAC. Most authors use the BDE values at 298.15 K. For example, the carbon–hydrogen bond energy in methane BE(C–H) is the enthalpy change (∆H) of breaking one molecule of methane into a carbon atom and four hydrogen radicals, divided by four. The exact value for a certain pair of bonded elements varies somewhat depending on the specific molecule, so tabulated bond energies are generally averages from a number of selected typical chemical species containing that type of bond. Bond energy versus bond-dissociation energy Bond energy (BE) is the average of all bond-dissociation energies of a single type of bond in a given molecule. The bond-dissociation energies of several different bonds of the same type can vary even within a single molecule. For example, a water molecule is composed of two O–H bonds bonded as H–O–H. The bond energy for H2O is the average energy required to break each of the two O–H bonds in sequence. Although the two bonds are equivalent in the original symmetric molecule, the bond-dissociation energy of an oxygen–hydrogen bond varies slightly depending on whether or not there is another hydrogen atom bonded to the oxygen atom. Thus, the bond energy of a molecule of water is 461.5 kJ/mol (110.3 kcal/mol). When the bond is broken, the bonding electron pair will split equally to the products. This process is called homolytic bond cleavage (homolytic cleavage; homolysis) and results in the formation of radicals. Predicting the bond strength by radius The strength of a bond can be estimated by comparing the atomic radii of the atoms that form the bond to the length of the bond itself. For example, the atomic radius of boron is estimated at 85 pm, while the length of the B–B bond in B2Cl4 is 175 pm. Dividing the length of this bond by the sum of each boron atom's radius gives a ratio of 175/(85 + 85) ≈ 1.03. This ratio is slightly larger than 1, indicating that the bond itself is slightly longer than the expected minimum overlap between the two boron atoms' valence electron clouds. Thus, we can conclude that this bond is a rather weak single bond. In another example, the atomic radius of rhenium is 135 pm, with a Re–Re bond length of 224 pm in the compound [Re2Cl8]2−. Taking the same steps as above gives a ratio of 224/(135 + 135) ≈ 0.83.
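The two ratio calculations above reduce to one line of arithmetic; here is a sketch reproducing the article's numbers.

```python
def bond_ratio(bond_length_pm: float, radius_a_pm: float, radius_b_pm: float) -> float:
    """Bond length divided by the sum of the two atomic radii."""
    return bond_length_pm / (radius_a_pm + radius_b_pm)

print(f"B-B in B2Cl4:        {bond_ratio(175, 85, 85):.2f}")    # ~1.03 -> weak single bond
print(f"Re-Re in [Re2Cl8]2-: {bond_ratio(224, 135, 135):.2f}")  # ~0.83 -> very strong bond
```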
A ratio of about 0.83 is notably lower than 1, indicating that there is a large amount of overlap between the valence electron clouds of the two rhenium atoms. From this data, we can conclude that this is a very strong bond. Experimentally, the Re–Re bond in [Re2Cl8]2− was found to be a quadruple bond. This method of determination is most useful for covalently bonded compounds. Factors affecting ionic bond energy In ionic compounds, the electronegativity of the two atoms bonding together has a major effect on their bond energy. The extent of this effect is described by the compound's lattice energy, where a more negative lattice energy corresponds to a stronger force of attraction between the ions. Generally, greater differences in electronegativity correspond to stronger ionic bonds. For example, the compound sodium chloride (NaCl) has a lattice energy of −786 kJ/mol with an electronegativity difference of 2.23 between sodium and chlorine. Meanwhile, the compound sodium iodide (NaI) has a lower lattice energy of −704 kJ/mol with a similarly lower electronegativity difference of 1.73 between sodium and iodine. See also Bond-dissociation energy Binding energy Ionization energy Isodesmic reaction Lattice energy References Energy Binding energy
Bond energy
[ "Chemistry" ]
1,069
[ "Chemical bond properties" ]
1,648,316
https://en.wikipedia.org/wiki/Michael%20addition%20reaction
In organic chemistry, the Michael reaction or Michael 1,4-addition is a reaction between a Michael donor (an enolate or other nucleophile) and a Michael acceptor (usually an α,β-unsaturated carbonyl) to produce a Michael adduct by creating a carbon-carbon bond at the acceptor's β-carbon. It belongs to the larger class of conjugate additions and is widely used for the mild formation of carbon-carbon bonds. The Michael addition is an important atom-economical method for diastereoselective and enantioselective C–C bond formation, and many asymmetric variants exist. In this general Michael addition scheme, either or both of R and R' on the nucleophile (the Michael donor) represent electron-withdrawing substituents such as acyl, cyano, nitro, or sulfone groups, which make the adjacent methylene hydrogen acidic enough to form a carbanion when reacted with the base, B:. For the alkene (the Michael acceptor), the R" substituent is usually a carbonyl, which makes the compound an α,β-unsaturated carbonyl compound (either an enone or an enal), or R" may be any electron-withdrawing group. Definition As originally defined by Arthur Michael, the reaction is the addition of an enolate of a ketone or aldehyde to an α,β-unsaturated carbonyl compound at the β carbon. The current definition of the Michael reaction has broadened to include nucleophiles other than enolates. Some examples of nucleophiles include doubly stabilized carbon nucleophiles such as beta-ketoesters, malonates, and beta-cyanoesters. The resulting product contains a highly useful 1,5-dioxygenated pattern. Non-carbon nucleophiles such as water, alcohols, amines, and enamines can also react with an α,β-unsaturated carbonyl in a 1,4-addition. Some authors have broadened the definition of the Michael addition to essentially refer to any 1,4-addition reaction of α,β-unsaturated carbonyl compounds. Others, however, insist that such a usage is an abuse of terminology, and limit the Michael addition to the formation of carbon–carbon bonds through the addition of carbon nucleophiles. The terms oxa-Michael reaction and aza-Michael reaction have been used to refer to the 1,4-addition of oxygen and nitrogen nucleophiles, respectively. The Michael reaction has also been associated with 1,6-addition reactions. Mechanism In the reaction mechanism, compound 1 is the nucleophile: deprotonation of 1 by a base leads to carbanion 2, stabilized by its electron-withdrawing groups. Structures 2a to 2c are three resonance structures that can be drawn for this species, two of which have enolate ions. This nucleophile reacts with the electrophilic alkene 3 to form 4 in a conjugate addition reaction. Finally, enolate 4 abstracts a proton from protonated base (or solvent) to produce 5. The reaction is dominated by orbital, rather than electrostatic, considerations. The HOMO of stabilized enolates has a large coefficient on the central carbon atom while the LUMO of many α,β-unsaturated carbonyl compounds has a large coefficient on the β carbon. Thus, both reactants can be considered soft. These polarized frontier orbitals are of similar energy, and react efficiently to form a new carbon–carbon bond. Like the aldol addition, the Michael reaction may proceed via an enol, a silyl enol ether in the Mukaiyama–Michael addition, or, more usually, an enolate nucleophile. In the latter case, the stabilized carbonyl compound is deprotonated with a strong base (hard enolization) or with a Lewis acid and a weak base (soft enolization).
The resulting enolate attacks the activated olefin with 1,4-regioselectivity, forming a carbon–carbon bond. This also transfers the enolate to the electrophile. Since the electrophile is much less acidic than the nucleophile, rapid proton transfer usually transfers the enolate back to the nucleophile if the product is enolizable; however, one may take advantage of the new locus of nucleophilicity if a suitable electrophile is pendant. Depending on the relative acidities of the nucleophile and product, the reaction may be catalytic in base. In most cases, the reaction is irreversible at low temperature. History The research done by Arthur Michael in 1887 at Tufts University was prompted by an 1884 publication by Conrad & Guthzeit on the reaction of ethyl 2,3-dibromopropionate with diethyl sodiomalonate forming a cyclopropane derivative (now recognized as involving two successive substitution reactions). Michael was able to obtain the same product by replacing the propionate with the ethyl ester of 2-bromoacrylic acid, and realized that this reaction could only work by assuming an addition reaction to the double bond of the acrylic acid. He then confirmed this assumption by reacting diethyl malonate and the ethyl ester of cinnamic acid, forming the first Michael adduct. In the same year Rainer Ludwig Claisen claimed priority for the invention. He and T. Komnenos had observed addition products to double bonds as side-products earlier in 1883 while investigating condensation reactions of malonic acid with aldehydes. However, according to biographer Takashi Tokoroyama, this claim is without merit. Asymmetric Michael reaction Researchers have expanded the scope of Michael additions to include elements of chirality via asymmetric versions of the reaction. The most common methods involve chiral phase-transfer catalysis, such as quaternary ammonium salts derived from the Cinchona alkaloids, or organocatalysis, which proceeds by enamine or iminium activation with chiral secondary amines, usually derived from proline. In the reaction between cyclohexanone and β-nitrostyrene, the base proline is derivatized and works in conjunction with a protic acid such as p-toluenesulfonic acid. Syn addition is favored with 99% ee. In the transition state believed to be responsible for this selectivity, the enamine (formed between the proline nitrogen and the cycloketone) and β-nitrostyrene are co-facial, with the nitro group hydrogen bonded to the protonated amine in the proline side group. A well-known Michael reaction is the synthesis of warfarin from 4-hydroxycoumarin and benzylideneacetone, first reported by Link in 1944. Several asymmetric versions of this reaction exist using chiral catalysts. Examples Classical examples of the Michael reaction are the reaction between diethyl malonate (Michael donor) and diethyl fumarate (Michael acceptor), that of diethyl malonate and mesityl oxide (forming dimedone), that of diethyl malonate and methyl crotonate, that of 2-nitropropane and methyl acrylate, that of ethyl phenylcyanoacetate and acrylonitrile, and that of nitropropane and methyl vinyl ketone. A classic tandem sequence of Michael and aldol additions is the Robinson annulation. Mukaiyama-Michael addition In the Mukaiyama–Michael addition, the nucleophile is a silyl enol ether and the catalyst is usually titanium tetrachloride. 1,6-Michael reaction The 1,6-Michael reaction proceeds via nucleophilic attack on the δ carbon of an α,β,γ,δ-diunsaturated Michael acceptor.
Mukaiyama–Michael addition
In the Mukaiyama–Michael addition, the nucleophile is a silyl enol ether and the catalyst is usually titanium tetrachloride:

1,6-Michael reaction
The 1,6-Michael reaction proceeds via nucleophilic attack on the δ carbon of an α,β,γ,δ-diunsaturated Michael acceptor. The 1,6-addition mechanism is similar to the 1,4-addition, the one exception being that the nucleophilic attack occurs at the δ carbon of the Michael acceptor. However, research shows that organocatalysis often favours the 1,4-addition. In many syntheses where 1,6-addition was favoured, the substrate contained certain structural features. Research has shown that catalysts can also influence the regioselectivity and enantioselectivity of a 1,6-addition reaction. For example, the image below shows the addition of ethylmagnesium bromide to ethyl sorbate 1 using a copper catalyst with a reversed Josiphos (R,S)-(–)-3 ligand. This reaction produced the 1,6-addition product 2 in 0% yield, the 1,6-addition product 3 in approximately 99% yield, and the 1,4-addition product 4 in less than 2% yield. This particular catalyst and set of reaction conditions led to the mostly regioselective and enantioselective 1,6-Michael addition of ethyl sorbate 1 to product 3.

Applications

Pharmaceuticals
A Michael reaction is used as a mechanistic step by many covalent inhibitor drugs. Cancer drugs such as ibrutinib, osimertinib, and rociletinib have an acrylamide functional group that serves as a Michael acceptor. A nucleophilic residue in the enzyme's active site acts as the Michael donor and reacts with the Michael acceptor on the drug. This is a viable cancer treatment because the target enzyme is inhibited following the Michael reaction.

Polymerization reactions

Mechanism
All polymerization reactions have three basic steps: initiation, propagation, and termination. The initiation step is the Michael addition of the nucleophile to a monomer. The resultant species undergoes a Michael addition with another monomer, with the latter acting as an acceptor. This extends the chain by forming another nucleophilic species to act as a donor for the next addition. This process repeats until the reaction is quenched by chain termination. The original Michael donor can be a neutral donor such as an amine, thiol, or alkoxide, or an alkyl ligand bound to a metal.

Examples
Linear step-growth polymerizations are some of the earliest applications of the Michael reaction in polymerizations. A wide variety of Michael donors and acceptors have been used to synthesize a diverse range of polymers. Examples of such polymers include poly(amido amine), poly(amino ester), poly(imido sulfide), poly(ester sulfide), poly(aspartamide), poly(imido ether), poly(amino quinone), poly(enone sulfide) and poly(enamine ketone). For example, linear step-growth polymerization produces the redox-active poly(amino quinone), which serves as an anti-corrosion coating on various metal surfaces. Another example includes network polymers, which are used for drug delivery, high-performance composites, and coatings. These network polymers are synthesized using a dual chain-growth, photo-induced radical and step-growth Michael addition system.

References

Addition reactions
Carbon-carbon bond forming reactions
Name reactions
Michael addition reaction
[ "Chemistry" ]
2,355
[ "Name reactions", "Carbon-carbon bond forming reactions", "Organic reactions" ]
1,648,479
https://en.wikipedia.org/wiki/Galactomannan
Galactomannans are polysaccharides consisting of a mannose backbone with galactose side groups; more specifically, a (1-4)-linked beta-D-mannopyranose backbone with branchpoints from their 6-positions linked to alpha-D-galactose (i.e. 1-6-linked alpha-D-galactopyranose). In order of increasing mannose-to-galactose ratio:
fenugreek gum, mannose:galactose ~1:1
guar gum, mannose:galactose ~2:1
tara gum, mannose:galactose ~3:1
locust bean gum or carob gum, mannose:galactose ~4:1
cassia gum, mannose:galactose ~5:1

Galactomannans are often used in food products to increase the viscosity of the water phase. Guar gum has been used to add viscosity to artificial tears, but is not as stable as carboxymethylcellulose.

Food use
Galactomannans are used in foods as stabilisers. Guar and locust bean gum (LBG) are commonly used in ice cream to improve texture and reduce ice cream meltdown. LBG is also used extensively in cream cheese, fruit preparations and salad dressings. Tara gum is seeing growing acceptability as a food ingredient but is still used to a much lesser extent than guar or LBG. Guar has the highest usage in foods, largely due to its low and stable price.

Clinical use
Galactomannan is a component of the cell wall of the mold Aspergillus and is released during growth. Detection of galactomannan in blood is used to diagnose invasive aspergillosis infections in humans. This is performed with monoclonal antibodies in a double-sandwich ELISA; this assay from Bio-Rad Laboratories was approved by the FDA in 2003 and is of moderate accuracy. The assay is most useful in patients who have had hemopoietic cell transplants (stem cell transplants). False-positive Aspergillus galactomannan tests have been found in patients on intravenous treatment with some antibiotics, or with fluids containing gluconate or citric acid, such as some platelet transfusions, parenteral nutrition, or PlasmaLyte.

References

Edible thickening agents
Polysaccharides
Carbohydrates
Natural gums
Aspergillus compounds
Immunologic tests
Galactomannan
[ "Chemistry", "Biology" ]
526
[ "Biomolecules by chemical classification", "Carbohydrates", "Immunologic tests", "Organic compounds", "Carbohydrate chemistry", "Polysaccharides" ]
1,648,525
https://en.wikipedia.org/wiki/Nuclear%20localization%20sequence
A nuclear localization signal or sequence (NLS) is an amino acid sequence that 'tags' a protein for import into the cell nucleus by nuclear transport. Typically, this signal consists of one or more short sequences of positively charged lysines or arginines exposed on the protein surface. Different nuclear localized proteins may share the same NLS. An NLS has the opposite function of a nuclear export signal (NES), which targets proteins out of the nucleus.

Types

Classical
These types of NLSs can be further classified as either monopartite or bipartite. The major structural difference between the two is that the two basic amino acid clusters in bipartite NLSs are separated by a relatively short spacer sequence (hence bipartite: two parts), while monopartite NLSs are not. The first NLS to be discovered was the sequence PKKKRKV in the SV40 Large T-antigen (a monopartite NLS). The NLS of nucleoplasmin, KR[PAATKKAGQA]KKKK, is the prototype of the ubiquitous bipartite signal: two clusters of basic amino acids, separated by a spacer of about 10 amino acids. Both signals are recognized by importin α. Importin α contains a bipartite NLS itself, which is specifically recognized by importin β. The latter can be considered the actual import mediator.

Chelsky et al. proposed the consensus sequence K-K/R-X-K/R for monopartite NLSs. A Chelsky sequence may, therefore, be part of the downstream basic cluster of a bipartite NLS. Makkerh et al. carried out comparative mutagenesis on the nuclear localization signals of SV40 T-antigen (monopartite), c-Myc (monopartite), and nucleoplasmin (bipartite), and showed amino acid features common to all three. The role of neutral and acidic amino acids in contributing to the efficiency of the NLS was shown for the first time.

Rotello et al. compared the nuclear localization efficiencies of eGFP-fused NLSs of SV40 Large T-antigen, nucleoplasmin (AVKRPAATKKAGQAKKKKLD), EGL-13 (MSRRRKANPTKLSENAKKLAKEVEN), c-Myc (PAAKRVKLD) and TUS-protein (KLKIKRPVK) through rapid intracellular protein delivery. They found significantly higher nuclear localization efficiency of the c-Myc NLS compared to that of the SV40 NLS.

Non-classical
There are many other types of NLS, such as the acidic M9 domain of hnRNP A1, the sequence KIPIK in the yeast transcription repressor Matα2, and the complex signals of U snRNPs. Most of these NLSs appear to be recognized directly by specific receptors of the importin β family without the intervention of an importin α-like protein. A signal that appears to be specific for the massively produced and transported ribosomal proteins seems to come with a specialized set of importin β-like nuclear import receptors.

Recently a class of NLSs known as PY-NLSs has been proposed, originally by Lee et al. This PY-NLS motif, so named because of the proline-tyrosine amino acid pairing in it, allows the protein to bind to importin β2 (also known as transportin or karyopherin β2), which then translocates the cargo protein into the nucleus. The structural basis for the binding of the PY-NLS contained in importin β2 has been determined and an inhibitor of import designed.
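The classical consensus above lends itself to a simple sequence scan. The following minimal Python sketch (a toy illustration, not a validated NLS predictor, since real prediction also depends on surface exposure and sequence context) searches a protein sequence for the Chelsky consensus K-K/R-X-K/R:

```python
import re

# Chelsky consensus for classical monopartite NLSs:
# K, then K or R, then any residue, then K or R.
CHELSKY = re.compile(r"K[KR].[KR]")

def find_candidate_nls(sequence: str):
    """Return (1-based position, motif) pairs of non-overlapping matches."""
    return [(m.start() + 1, m.group()) for m in CHELSKY.finditer(sequence)]

# The SV40 Large T-antigen NLS, PKKKRKV, contains a Chelsky match.
print(find_candidate_nls("PKKKRKV"))  # [(2, 'KKKR')]
```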
Discovery
The presence of the nuclear membrane that sequesters the cellular DNA is the defining feature of eukaryotic cells. The nuclear membrane therefore separates the nuclear processes of DNA replication and RNA transcription from the cytoplasmic process of protein production. Proteins required in the nucleus must be directed there by some mechanism. The first direct experimental examination of the ability of nuclear proteins to accumulate in the nucleus was carried out by John Gurdon, when he showed that purified nuclear proteins accumulate in the nucleus of frog (Xenopus) oocytes after being micro-injected into the cytoplasm. These experiments were part of a series that subsequently led to studies of nuclear reprogramming, directly relevant to stem cell research.

The presence of several million pore complexes in the oocyte nuclear membrane, and the fact that they appeared to admit many different molecules (insulin, bovine serum albumin, gold nanoparticles), led to the view that the pores are open channels, that nuclear proteins freely enter the nucleus through the pore, and that they must accumulate by binding to DNA or some other nuclear component. In other words, there was thought to be no specific transport mechanism.

This view was shown to be incorrect by Dingwall and Laskey in 1982. Using a protein called nucleoplasmin, the archetypal 'molecular chaperone', they identified a domain in the protein that acts as a signal for nuclear entry. This work stimulated research in the area, and two years later the first NLS was identified in SV40 Large T-antigen (or SV40, for short). However, a functional NLS could not be identified in another nuclear protein simply on the basis of similarity to the SV40 NLS. In fact, only a small percentage of cellular (non-viral) nuclear proteins contained a sequence similar to the SV40 NLS. A detailed examination of nucleoplasmin identified a sequence with two elements made up of basic amino acids separated by a spacer arm. One of these elements was similar to the SV40 NLS but was not able to direct a protein to the cell nucleus when attached to a non-nuclear reporter protein; both elements are required. This kind of NLS has become known as a bipartite classical NLS. The bipartite NLS is now known to represent the major class of NLS found in cellular nuclear proteins, and structural analysis has revealed how the signal is recognized by a receptor (importin α) protein (the structural basis of some monopartite NLSs is also known). Many of the molecular details of nuclear protein import are now known. This was made possible by the demonstration that nuclear protein import is a two-step process: the nuclear protein first binds to the nuclear pore complex in a step that does not require energy, followed by an energy-dependent translocation of the nuclear protein through the channel of the pore complex. By establishing the presence of two distinct steps in the process, the possibility of identifying the factors involved was established, and this led to the identification of the importin family of NLS receptors and the GTPase Ran.

Mechanism of nuclear import
Proteins gain entry into the nucleus through the nuclear envelope. The nuclear envelope consists of concentric membranes, the outer and the inner membrane. The inner and outer membranes connect at multiple sites, forming channels between the cytoplasm and the nucleoplasm. These channels are occupied by nuclear pore complexes (NPCs), complex multiprotein structures that mediate the transport across the nuclear membrane.

A protein translated with an NLS will bind strongly to importin (also known as karyopherin), and, together, the complex will move through the nuclear pore. At this point, Ran-GTP will bind to the importin–protein complex, and its binding will cause the importin to lose affinity for the protein.
The protein is released, and the Ran-GTP/importin complex then moves back out of the nucleus through the nuclear pore. A GTPase-activating protein (GAP) in the cytoplasm hydrolyzes the Ran-GTP to Ran-GDP, and this causes a conformational change in Ran, ultimately reducing its affinity for importin. Importin is released, and Ran-GDP is recycled back to the nucleus, where a guanine nucleotide exchange factor (GEF) exchanges its GDP back for GTP.

See also
A nuclear export signal (NES) can direct a protein to be exported from the nucleus.

References

Further reading

External links

Cell biology
Molecular genetics
Short linear motifs
Nuclear localization sequence
[ "Chemistry", "Biology" ]
1,718
[ "Molecular genetics", "Cell biology", "Molecular biology" ]
1,648,765
https://en.wikipedia.org/wiki/Random%20matrix
In probability theory and mathematical physics, a random matrix is a matrix-valued random variable—that is, a matrix in which some or all of its entries are sampled randomly from a probability distribution. Random matrix theory (RMT) is the study of properties of random matrices, often as they become large. RMT provides techniques like mean-field theory, diagrammatic methods, the cavity method, or the replica method to compute quantities like traces, spectral densities, or scalar products between eigenvectors. Many physical phenomena, such as the spectrum of nuclei of heavy atoms, the thermal conductivity of a lattice, or the emergence of quantum chaos, can be modeled mathematically as problems concerning large, random matrices.

Applications

Physics
In nuclear physics, random matrices were introduced by Eugene Wigner to model the nuclei of heavy atoms. Wigner postulated that the spacings between the lines in the spectrum of a heavy atom nucleus should resemble the spacings between the eigenvalues of a random matrix, and should depend only on the symmetry class of the underlying evolution. In solid-state physics, random matrices model the behaviour of large disordered Hamiltonians in the mean-field approximation. In quantum chaos, the Bohigas–Giannoni–Schmit (BGS) conjecture asserts that the spectral statistics of quantum systems whose classical counterparts exhibit chaotic behaviour are described by random matrix theory.

In quantum optics, transformations described by random unitary matrices are crucial for demonstrating the advantage of quantum over classical computation (see, e.g., the boson sampling model). Moreover, such random unitary transformations can be directly implemented in an optical circuit, by mapping their parameters to optical circuit components (that is, beam splitters and phase shifters).

Random matrix theory has also found applications to the chiral Dirac operator in quantum chromodynamics, quantum gravity in two dimensions, mesoscopic physics, spin-transfer torque, the fractional quantum Hall effect, Anderson localization, quantum dots, and superconductors.

Mathematical statistics and numerical analysis
In multivariate statistics, random matrices were introduced by John Wishart, who sought to estimate covariance matrices of large samples. Chernoff-, Bernstein-, and Hoeffding-type inequalities can typically be strengthened when applied to the maximal eigenvalue (i.e. the eigenvalue of largest magnitude) of a finite sum of random Hermitian matrices. Random matrix theory is used to study the spectral properties of random matrices—such as sample covariance matrices—which is of particular interest in high-dimensional statistics. Random matrix theory has also seen applications in neuronal networks and deep learning, with recent work using random matrices to show that hyper-parameter tunings can be cheaply transferred between large neural networks without the need for re-training.

In numerical analysis, random matrices have been used since the work of John von Neumann and Herman Goldstine to describe computation errors in operations such as matrix multiplication. Although random entries are traditional "generic" inputs to an algorithm, the concentration of measure associated with random matrix distributions implies that random matrices will not test large portions of an algorithm's input space.

Number theory
In number theory, the distribution of zeros of the Riemann zeta function (and other L-functions) is modeled by the distribution of eigenvalues of certain random matrices.
The connection was first discovered by Hugh Montgomery and Freeman Dyson. It is connected to the Hilbert–Pólya conjecture.

Free probability
The relation of free probability with random matrices is a key reason for the wide use of free probability in other subjects. Voiculescu introduced the concept of freeness around 1983 in an operator-algebraic context; at the beginning there was no relation at all with random matrices. This connection was only revealed later, in 1991, by Voiculescu; he was motivated by the fact that the limit distribution which he found in his free central limit theorem had appeared before in Wigner's semicircle law in the random matrix context.

Computational neuroscience
In the field of computational neuroscience, random matrices are increasingly used to model the network of synaptic connections between neurons in the brain. Dynamical models of neuronal networks with a random connectivity matrix were shown to exhibit a phase transition to chaos when the variance of the synaptic weights crosses a critical value, in the limit of infinite system size. Results on random matrices have also shown that the dynamics of random-matrix models are insensitive to mean connection strength. Instead, the stability of fluctuations depends on connection-strength variation, and time to synchrony depends on network topology.

In the analysis of massive data such as fMRI, random matrix theory has been applied in order to perform dimension reduction. When applying an algorithm such as PCA, it is important to be able to select the number of significant components. The criteria for selecting components can be multiple (based on explained variance, Kaiser's method, eigenvalue, etc.). Random matrix theory in this context is represented by the Marchenko–Pastur distribution, which gives the theoretical upper and lower limits of the eigenvalues of a covariance matrix computed from purely random data. The distribution calculated in this way serves as a null hypothesis that allows one to find the eigenvalues (and their eigenvectors) that deviate from the theoretical random range. The components thus excluded become the reduced dimensional space (see examples in fMRI).
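As an illustration of this null-hypothesis use, the following minimal Python/NumPy sketch (with arbitrary illustrative dimensions) compares the eigenvalues of the sample covariance matrix of pure noise with the Marchenko–Pastur edges σ²(1 ± √(p/n))²; empirical eigenvalues escaping these bounds would be flagged as significant components:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma2 = 4000, 1000, 1.0       # samples, variables, noise variance

X = rng.normal(0.0, np.sqrt(sigma2), size=(n, p))
S = X.T @ X / n                      # sample covariance of pure noise
eigvals = np.linalg.eigvalsh(S)

gamma = p / n
lam_minus = sigma2 * (1 - np.sqrt(gamma)) ** 2   # Marchenko-Pastur lower edge
lam_plus = sigma2 * (1 + np.sqrt(gamma)) ** 2    # Marchenko-Pastur upper edge

print(f"empirical range: [{eigvals.min():.3f}, {eigvals.max():.3f}]")
print(f"MP prediction:   [{lam_minus:.3f}, {lam_plus:.3f}]")
# Eigenvalues escaping [lam_minus, lam_plus] indicate non-noise structure.
```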
Optimal control
In optimal control theory, the evolution of n state variables through time depends at any time on their own values and on the values of k control variables. With linear evolution, matrices of coefficients appear in the state equation (equation of evolution). In some problems the values of the parameters in these matrices are not known with certainty, in which case there are random matrices in the state equation and the problem is known as one of stochastic control. A key result in the case of linear-quadratic control with stochastic matrices is that the certainty equivalence principle does not apply: while in the absence of multiplier uncertainty (that is, with only additive uncertainty) the optimal policy with a quadratic loss function coincides with what would be decided if the uncertainty were ignored, the optimal policy may differ if the state equation contains random coefficients.

Computational mechanics
In computational mechanics, epistemic uncertainties underlying the lack of knowledge about the physics of the modeled system give rise to mathematical operators associated with the computational model, which are deficient in a certain sense. Such operators lack certain properties linked to unmodeled physics. When such operators are discretized to perform computational simulations, their accuracy is limited by the missing physics. To compensate for this deficiency of the mathematical operator, it is not enough to make the model parameters random; it is necessary to consider a mathematical operator that is random and can thus generate families of computational models, in the hope that one of these captures the missing physics. Random matrices have been used in this sense, with applications in vibroacoustics, wave propagation, materials science, fluid mechanics, heat transfer, etc.

Engineering
Random matrix theory can be applied to electrical and communications engineering research efforts to study, model and develop massive Multiple-Input Multiple-Output (MIMO) radio systems.

History
Random matrix theory first gained attention beyond the mathematics literature in the context of nuclear physics. Experiments by Enrico Fermi and others demonstrated evidence that individual nucleons cannot be approximated to move independently, leading Niels Bohr to formulate the idea of a compound nucleus. Because there was no knowledge of direct nucleon-nucleon interactions, Eugene Wigner and Leonard Eisenbud approximated that the nuclear Hamiltonian could be modeled as a random matrix. For larger atoms, the distribution of the energy eigenvalues of the Hamiltonian could be computed in order to approximate scattering cross sections, by invoking the Wishart distribution.

Gaussian ensembles
The most commonly studied random matrix distributions are the Gaussian ensembles: GOE, GUE and GSE. They are often denoted by their Dyson index: β = 1 for GOE, β = 2 for GUE, and β = 4 for GSE. This index counts the number of real components per matrix element.

Definitions
The Gaussian unitary ensemble is described by the Gaussian measure with density on the space of Hermitian matrices, with a normalization constant chosen so that the integral of the density is equal to one. The term unitary refers to the fact that the distribution is invariant under unitary conjugation. The Gaussian unitary ensemble models Hamiltonians lacking time-reversal symmetry.

The Gaussian orthogonal ensemble is described by the Gaussian measure with density on the space of n × n real symmetric matrices H = (Hij). Its distribution is invariant under orthogonal conjugation, and it models Hamiltonians with time-reversal symmetry. Equivalently, it is generated by symmetrizing a square matrix with i.i.d. samples from the standard normal distribution.

The Gaussian symplectic ensemble is described by the Gaussian measure with density on the space of n × n Hermitian quaternionic matrices, i.e. self-adjoint square matrices composed of quaternions. Its distribution is invariant under conjugation by the symplectic group, and it models Hamiltonians with time-reversal symmetry but no rotational symmetry.
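The symmetrization recipe for the orthogonal ensemble translates directly into a sampler. A minimal NumPy sketch (using one common normalization among several in the literature, under which the rescaled spectrum fills [−2, 2]) draws a GOE matrix and checks the Wigner semicircle prediction discussed below:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# GOE sample via symmetrization of an i.i.d. Gaussian matrix
# (this convention: off-diagonal variance 1, diagonal variance 2).
X = rng.normal(size=(n, n))
H = (X + X.T) / np.sqrt(2)

# Wigner's semicircle law: eigenvalues of H / sqrt(n) fill [-2, 2]
# with density sqrt(4 - x^2) / (2 * pi).
eigs = np.linalg.eigvalsh(H) / np.sqrt(n)
hist, edges = np.histogram(eigs, bins=50, range=(-2.5, 2.5), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
semicircle = np.where(np.abs(centers) <= 2,
                      np.sqrt(np.maximum(4 - centers**2, 0)) / (2 * np.pi),
                      0.0)
print("max deviation from semicircle:",
      float(np.max(np.abs(hist - semicircle))))
```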
Point correlation functions
The ensembles as defined here have Gaussian-distributed matrix elements with mean ⟨Hij⟩ = 0 and two-point correlations determined by the symmetry class, from which all higher correlations follow by Isserlis' theorem.

Moment generating functions
The moment generating function for the GOE can be written in closed form in terms of the Frobenius norm of the source matrix.

Spectral density
The joint probability density for the eigenvalues λ1, ..., λn of GUE/GOE/GSE is given (up to the choice of variance normalization) by

    \frac{1}{Z_{\beta,n}} \prod_{j<k} |\lambda_j - \lambda_k|^{\beta} \exp\Big(-\frac{\beta}{4} \sum_{k} \lambda_k^2\Big),    (1)

where Zβ,n is a normalization constant which can be explicitly computed; see Selberg integral. In the case of GUE (β = 2), the formula (1) describes a determinantal point process. Eigenvalues repel, as the joint probability density has a zero (of order β) for coinciding eigenvalues.

The distributions of the largest eigenvalue for GOE and GUE are explicitly solvable. They converge to the Tracy–Widom distribution after shifting and scaling appropriately.

Convergence to Wigner semicircular distribution
The spectrum, divided by \sqrt{n}, converges in distribution to the semicircular distribution on the interval [-2\sigma, 2\sigma]. Here \sigma^2 is the variance of the off-diagonal entries; the variance of the on-diagonal entries does not matter.

Distribution of level spacings
From the ordered sequence of eigenvalues \lambda_1 \le \lambda_2 \le \ldots, one defines the normalized spacings s_k = (\lambda_{k+1} - \lambda_k)/\langle s \rangle, where \langle s \rangle is the mean spacing. The probability distribution of spacings is approximately given by

    p_1(s) = \tfrac{\pi}{2}\, s\, e^{-\pi s^2/4}

for the orthogonal ensemble GOE (β = 1), by

    p_2(s) = \tfrac{32}{\pi^2}\, s^2\, e^{-4 s^2/\pi}

for the unitary ensemble GUE (β = 2), and by

    p_4(s) = \tfrac{2^{18}}{3^6 \pi^3}\, s^4\, e^{-\tfrac{64}{9\pi} s^2}

for the symplectic ensemble GSE (β = 4). The numerical constants are such that p_\beta(s) is normalized, \int_0^\infty p_\beta(s)\,ds = 1, and the mean spacing is \int_0^\infty s\, p_\beta(s)\,ds = 1, for \beta = 1, 2, 4.
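The spacing law is easy to probe numerically. The sketch below reuses the GOE sampler from above and compares bulk nearest-neighbour spacings with the Wigner surmise p₁(s); the single global rescaling used here is a crude stand-in for proper spectral unfolding, so the agreement is only approximate:

```python
import numpy as np

rng = np.random.default_rng(7)
n, trials = 400, 50
spacings = []

for _ in range(trials):
    X = rng.normal(size=(n, n))
    H = (X + X.T) / np.sqrt(2)          # GOE sample, as before
    eigs = np.sort(np.linalg.eigvalsh(H))
    bulk = eigs[n // 4 : 3 * n // 4]    # stay in the bulk of the spectrum
    s = np.diff(bulk)
    spacings.extend(s / s.mean())       # crude unfolding to unit mean spacing

spacings = np.asarray(spacings)
hist, edges = np.histogram(spacings, bins=40, range=(0, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
surmise = (np.pi / 2) * centers * np.exp(-np.pi * centers**2 / 4)
print("max deviation from Wigner surmise:",
      float(np.max(np.abs(hist - surmise))))
```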
Generalizations
Wigner matrices are random Hermitian matrices such that the entries above the main diagonal are independent random variables with zero mean and identical second moments. Invariant matrix ensembles are random Hermitian matrices with a density on the space of real symmetric/Hermitian/quaternionic Hermitian matrices proportional to e^{-n \operatorname{tr} V(H)}, where the function V is called the potential. The Gaussian ensembles are the only common special cases of these two classes of random matrices; this is a consequence of a theorem by Porter and Rosenzweig.

Spectral theory of random matrices
The spectral theory of random matrices studies the distribution of the eigenvalues as the size of the matrix goes to infinity.

Empirical spectral measure
The empirical spectral measure of an n × n matrix H with eigenvalues \lambda_1, \ldots, \lambda_n is defined by \mu_H = \frac{1}{n} \sum_{j=1}^{n} \delta_{\lambda_j}. Usually, the limit of \mu_H is a deterministic measure; this is a particular case of self-averaging. The cumulative distribution function of the limiting measure is called the integrated density of states and is denoted N(λ). If the integrated density of states is differentiable, its derivative is called the density of states and is denoted ρ(λ).

Alternative expressions

Types of convergence
Given a matrix ensemble, we say that its spectral measures converge weakly to a measure μ iff, for any measurable set A, the ensemble-average μ_H(A) converges to μ(A). Convergence weakly almost surely: if we sample the matrices independently from the ensemble, then with probability 1, μ_H(A) → μ(A) for any measurable set A. In another sense, weak almost-sure convergence means that we sample the matrices not independently but by "growing" them (as a stochastic process); then, with probability 1, μ_H(A) → μ(A) for any measurable set A. For example, we can "grow" a sequence of matrices from the Gaussian ensemble as follows: sample a doubly infinite array of standard random variables, and define each n × n matrix by symmetrizing the upper-left n × n block of the array. Note that generic matrix ensembles do not allow us to grow, but most of the common ones, such as the three Gaussian ensembles, do allow us to grow.

Global regime
In the global regime, one is interested in the distribution of linear statistics of the form N_{f,H} = \frac{1}{n} \sum_k f(\lambda_k). The limit of the empirical spectral measure for Wigner matrices was described by Eugene Wigner; see Wigner semicircle distribution and Wigner surmise. As far as sample covariance matrices are concerned, a theory was developed by Marčenko and Pastur. The limit of the empirical spectral measure of invariant matrix ensembles is described by a certain integral equation which arises from potential theory.

Fluctuations
For the linear statistics N_{f,H}, one is also interested in the fluctuations about ∫ f(λ) dN(λ). For many classes of random matrices, a central limit theorem for these fluctuations is known.

The variational problem for the unitary ensembles
Consider the measure defined by the potential V of the ensemble, and let \mu be the empirical spectral measure. The probability measure can be rewritten as an exponential of a functional of \mu alone. Let now M_1(\mathbb{R}) be the space of one-dimensional probability measures, and consider the minimizer of this functional over M_1(\mathbb{R}). There exists a unique equilibrium measure, characterized through the Euler–Lagrange variational conditions, which require the effective potential to equal some real constant on the support of the measure. The equilibrium measure has a Radon–Nikodym density expressible in terms of the potential.

Mesoscopic regime
The typical statement of the Wigner semicircular law is equivalent to the following: for each fixed interval centered at a point λ, as n, the number of dimensions of the Gaussian ensemble, increases, the proportion of the eigenvalues falling within the interval converges to the integral of the density of the semicircular distribution over that interval. If the interval is allowed to shrink as n increases, then we obtain strictly stronger theorems, named "local laws" or "mesoscopic regime".

The mesoscopic regime is intermediate between the local and the global. In the mesoscopic regime, one is interested in the limit distribution of eigenvalues in a set that shrinks to zero, but slowly enough that the number of eigenvalues inside it tends to infinity. For example, the Ginibre ensemble has a mesoscopic law: for any sequence of disks inside the unit disk whose areas shrink slowly enough, the conditional distribution of the spectrum inside the disks also converges to a uniform distribution. That is, if we cut out the shrinking disks along with the spectrum falling inside them, then scale the disks up to unit area, we see the spectra converging to a flat distribution in the disks.

Local regime
In the local regime, one is interested in the limit distribution of eigenvalues in a set that shrinks so fast that the number of eigenvalues inside remains bounded. Typically this means the study of spacings between eigenvalues and, more generally, of the joint distribution of eigenvalues in an interval of length of order 1/n. One distinguishes between bulk statistics, pertaining to intervals inside the support of the limiting spectral measure, and edge statistics, pertaining to intervals near the boundary of the support.

Bulk statistics
Formally, fix a reference point in the interior of the support of N(λ). Then consider the point process formed by the eigenvalues of the random matrix, rescaled around that point. This point process captures the statistical properties of eigenvalues in the vicinity of the reference point. For the Gaussian ensembles, the limit of this point process is known; for GUE, it is a determinantal point process with the sine kernel. The universality principle postulates that the limit as n → ∞ should depend only on the symmetry class of the random matrix (and neither on the specific model of random matrices nor on the reference point). Rigorous proofs of universality are known for invariant matrix ensembles and Wigner matrices.

Edge statistics
One example of edge statistics is the Tracy–Widom distribution. As another example, consider the Ginibre ensemble. It can be real or complex. The real Ginibre ensemble has i.i.d. standard Gaussian entries, and the complex Ginibre ensemble has i.i.d. standard complex Gaussian entries.
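Before the edge result discussed next, the just-defined complex Ginibre ensemble can be illustrated numerically. With the 1/√n scaling assumed in the sketch below, its eigenvalues fill the unit disk (the circular law) and the spectral radius hovers near 1:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Complex Ginibre matrix: i.i.d. standard complex Gaussian entries,
# scaled by 1/sqrt(n) so the limiting spectrum is the unit disk.
G = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2 * n)
eigs = np.linalg.eigvals(G)

radii = np.abs(eigs)
print("fraction inside unit disk:", float(np.mean(radii <= 1.0)))
print("spectral radius:", float(radii.max()))  # close to 1 for large n
```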
Now sample a matrix from the real or complex ensemble and consider the absolute value of its maximal eigenvalue (the spectral radius). There is a precise limit theorem for these edge statistics, which refines the circular law of the Ginibre ensemble. In words, the circular law says that the spectrum of the scaled matrix almost surely falls uniformly on the unit disc, while the edge statistics theorem states that the radius of the almost-unit-disk is approximately 1 and fluctuates on a fine scale, according to the Gumbel law.

Correlation functions
For random Hermitian matrices distributed according to a partition function defined by a potential (with the standard Lebesgue measure on the space of Hermitian matrices), the joint probability density of the eigenvalues can be written explicitly. The k-point correlation functions (or marginal distributions) are defined by integrating the joint density over all but k of the eigenvalue variables; they are symmetric functions of their variables. In particular, the one-point correlation function, or density of states, is obtained by integrating out all but one eigenvalue. Its integral over a Borel set gives the expected number of eigenvalues contained in that set.

The following result expresses these correlation functions as determinants of the matrices formed from evaluating the appropriate integral kernel at the pairs of points appearing within the correlator.

Theorem (Dyson–Mehta). For any k, the k-point correlation function can be written as a determinant of the Christoffel–Darboux kernel associated to the weight, written in terms of quasipolynomials built from a complete sequence of monic polynomials, of the degrees indicated, satisfying appropriate orthogonality conditions.

Other classes of random matrices

Wishart matrices
Wishart matrices are n × n random matrices of the form H = XX*, where X is an n × m random matrix (m ≥ n) with independent entries, and X* is its conjugate transpose. In the important special case considered by Wishart, the entries of X are identically distributed Gaussian random variables (either real or complex). The limit of the empirical spectral measure of Wishart matrices was found by Vladimir Marchenko and Leonid Pastur.

Random unitary matrices

Non-Hermitian random matrices

Selected bibliography

Books

Survey articles

Historic works

References

External links

Algebra of random variables
Mathematical physics
Probability theory
Random matrix
[ "Physics", "Mathematics" ]
3,911
[ "Random matrices", "Applied mathematics", "Theoretical physics", "Mathematical objects", "Matrices (mathematics)", "Statistical mechanics", "Mathematical physics" ]
1,648,915
https://en.wikipedia.org/wiki/Chemical%20plant
A chemical plant is an industrial process plant that manufactures (or otherwise processes) chemicals, usually on a large scale. The general objective of a chemical plant is to create new material wealth via the chemical or biological transformation and/or separation of materials. Chemical plants use specialized equipment, units, and technology in the manufacturing process. Other kinds of plants, such as polymer, pharmaceutical, food, and some beverage production facilities, power plants, oil refineries or other refineries, natural gas processing and biochemical plants, and water and wastewater treatment and pollution control facilities, use many technologies that have similarities to chemical plant technology, such as fluid systems and chemical reactor systems. Some would consider an oil refinery or a pharmaceutical or polymer manufacturer to be effectively a chemical plant.

Petrochemical plants (plants using chemicals from petroleum as a raw material or feedstock) are usually located adjacent to an oil refinery to minimize transportation costs for the feedstocks produced by the refinery. Speciality chemical and fine chemical plants are usually much smaller and not as sensitive to location. Tools have been developed for converting a base project cost from one geographic location to another.

Chemical processes
Chemical plants use chemical processes, which are detailed industrial-scale methods, to transform feedstock chemicals into products. The same chemical process can be used at more than one chemical plant, with possibly differently scaled capacities at each plant. Also, a chemical plant at a site may be constructed to utilize more than one chemical process, for instance to produce multiple products.

A chemical plant commonly has large vessels or sections, called units or lines, that are interconnected by piping or other material-moving equipment which can carry streams of material. Such material streams can include fluids (gas or liquid carried in piping) or sometimes solids or mixtures such as slurries. An overall chemical process is commonly made up of steps called unit operations, which occur in the individual units.

A raw material going into a chemical process or plant as input to be converted into a product is commonly called a feedstock, or simply feed. In addition to feedstocks for the plant as a whole, an input stream of material to be processed in a particular unit can similarly be considered feed for that unit. Output streams from the plant as a whole are final products, and sometimes output streams from individual units may be considered intermediate products for their units. However, final products from one plant may be intermediate chemicals used as feedstock in another plant for further processing. For example, some products from an oil refinery may be used as feedstock in petrochemical plants, which may in turn produce feedstocks for pharmaceutical plants.

Either the feedstock(s), the product(s), or both may be individual compounds or mixtures. It is often not worthwhile separating the components in these mixtures completely; specific levels of purity depend on product requirements and process economics.

Operations
Chemical processes may be run in continuous or batch operation.

Batch operation
In batch operation, production occurs in time-sequential steps in discrete batches. A batch of feedstock(s) is fed (or charged) into a process or unit, then the chemical process takes place, then the product(s) and any other outputs are removed.
Such batch production may be repeated again and again with new batches of feedstock. Batch operation is commonly used in smaller-scale plants such as pharmaceutical or speciality chemicals production, for purposes of improved traceability as well as flexibility. Continuous plants are usually used to manufacture commodity chemicals or petrochemicals, while batch plants are more common in speciality and fine chemical production as well as active pharmaceutical ingredient (API) manufacture.

Continuous operation
In continuous operation, all steps are ongoing continuously in time. During usual continuous operation, the feeding and product removal are ongoing streams of moving material, which, together with the process itself, all take place simultaneously and continuously. Chemical plants or units in continuous operation are usually in a steady state or approximate steady state. Steady state means that quantities related to the process do not change as time passes during operation. Such constant quantities include stream flow rates, heating or cooling rates, temperatures, pressures, and chemical compositions at any given point (location). Continuous operation is more efficient in many large-scale operations like petroleum refineries. It is possible for some units to operate continuously and others to be in batch operation in a chemical plant; for example, see continuous distillation and batch distillation.

The amount of primary feedstock or product per unit of time which a plant or unit can process is referred to as the capacity of that plant or unit. For example, the capacity of an oil refinery may be given in terms of barrels of crude oil refined per day; alternatively, chemical plant capacity may be given in tons of product produced per day. In actual daily operation, a plant (or unit) will operate at a percentage of its full capacity. Engineers typically assume 90% operating time for plants which work primarily with fluids, and 80% uptime for plants which primarily work with solids.
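These capacity and uptime figures combine into simple sizing arithmetic, as in the following sketch (the numbers are purely illustrative, not from any real plant):

```python
# Annual production estimate from nameplate capacity and uptime.
# All figures below are illustrative assumptions.
nameplate_tpd = 250.0          # tons of product per day at full capacity
uptime = 0.90                  # fluid-processing plant: ~90% operating time
utilization = 0.85             # fraction of full capacity while running

annual_tons = nameplate_tpd * 365 * uptime * utilization
print(f"Estimated annual production: {annual_tons:,.0f} tons")
# 250 t/d * 365 d * 0.90 * 0.85 = 69,806 tons/year (rounded)
```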
Units and fluid systems
Specific unit operations are conducted in specific kinds of units. Although some units may operate at ambient temperature or pressure, many units operate at higher or lower temperatures or pressures. Vessels in chemical plants are often cylindrical with rounded ends, a shape which can be suited to hold either high pressure or vacuum. Chemical reactions can convert certain kinds of compounds into other compounds in chemical reactors. Chemical reactors may be packed beds and may have solid heterogeneous catalysts which stay in the reactors as fluids move through, or may simply be stirred vessels in which reactions occur. Since the surface of solid heterogeneous catalysts may sometimes become "poisoned" by deposits such as coke, regeneration of catalysts may be necessary. Fluidized beds may also be used in some cases to ensure good mixing. There can also be units (or subunits) for mixing (including dissolving), separation, heating, cooling, or some combination of these. For example, chemical reactors often have stirring for mixing and heating or cooling to maintain temperature. When designing plants on a large scale, heat produced or absorbed by chemical reactions must be considered. Some plants may have units with organism cultures for biochemical processes such as fermentation or enzyme production.

Separation processes include filtration, settling (sedimentation), extraction or leaching, distillation, recrystallization or precipitation (followed by filtration or settling), reverse osmosis, drying, and adsorption. Heat exchangers are often used for heating or cooling, including boiling or condensation, often in conjunction with other units such as distillation towers. There may also be storage tanks for storing feedstock, intermediate or final products, or waste. Storage tanks commonly have level indicators to show how full they are. There may be structures holding or supporting sometimes massive units and their associated equipment. There are often stairs, ladders, or other steps for personnel to reach points in the units for sampling, inspection, or maintenance. An area of a plant or facility with numerous storage tanks is sometimes called a tank farm, especially at an oil depot.

Fluid systems for carrying liquids and gases include piping and tubing of various diameter sizes, various types of valves for controlling or stopping flow, pumps for moving or pressurizing liquid, and compressors for pressurizing or moving gases. Vessels, piping, tubing, and sometimes other equipment at high or very low temperatures are commonly covered with insulation, for personnel safety and to maintain the temperature inside. Fluid systems and units commonly have instrumentation such as temperature and pressure sensors and flow-measuring devices at select locations in a plant. Online analyzers for chemical or physical property analysis have become more common. Solvents can sometimes be used to dissolve reactants or materials such as solids for extraction or leaching, to provide a suitable medium for certain chemical reactions to run, or so they can otherwise be treated as fluids.

Chemical plant design
Today, the fundamental aspects of designing chemical plants are done by chemical engineers. Historically, this was not always the case, and many chemical plants were constructed haphazardly before the discipline of chemical engineering became established. Chemical engineering was first established as a profession in the United Kingdom when the first chemical engineering course was given at the University of Manchester in 1887 by George E. Davis, in the form of twelve lectures covering various aspects of industrial chemical practice. As a consequence, George E. Davis is regarded as the world's first chemical engineer. Today chemical engineering is a profession, and experienced professional chemical engineers can gain "Chartered" engineer status through the Institution of Chemical Engineers.

In plant design, typically less than 1 percent of ideas for new designs ever become commercialized. During this solution process, cost studies are typically used as an initial screening to eliminate unprofitable designs. If a process appears profitable, then other factors are considered, such as safety, environmental constraints, controllability, etc. The general goal in plant design is to construct or synthesize "optimum designs" within the neighborhood of the desired constraints.

Many times chemists research chemical reactions or other chemical principles in a laboratory, commonly on a small scale in a "batch-type" experiment. The chemistry information obtained is then used by chemical engineers, along with expertise of their own, to convert it to a chemical process and scale up the batch size or capacity.
Commonly, a small chemical plant called a pilot plant is built to provide design and operating information before construction of a large plant. From the data and operating experience obtained from the pilot plant, a scaled-up plant can be designed for higher or full capacity. After the fundamental aspects of a plant design are determined, mechanical or electrical engineers may become involved with the mechanical or electrical details, respectively. Structural engineers may become involved in the plant design to ensure the structures can support the weight of the units, piping, and other equipment.

The units, streams, and fluid systems of chemical plants or processes can be represented by block flow diagrams, which are very simplified diagrams, or process flow diagrams, which are somewhat more detailed. The streams and other piping are shown as lines with arrowheads showing the usual direction of material flow. In block diagrams, units are often simply shown as blocks. Process flow diagrams may use more detailed symbols and show pumps, compressors, and major valves. Likely values or ranges of material flow rates for the various streams are determined, based on desired plant capacity, using material balance calculations. Energy balances are also done, based on heats of reaction, heat capacities, and expected temperatures and pressures at various points, to calculate the amounts of heating and cooling needed in various places and to size heat exchangers.
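A material balance of this kind is conservation of mass applied stream by stream, which for a simple steady-state case reduces to a small linear system. The following sketch solves a hypothetical two-stream blending balance (illustrative numbers only, not any particular process):

```python
import numpy as np

# Hypothetical steady-state blending balance:
# stream A is pure solvent, stream B is 40 wt% solute; we want
# 100 kg/h of product at 10 wt% solute.
#
# Total mass:   mA + mB         = 100
# Solute mass:  0.0*mA + 0.4*mB = 0.10 * 100
A = np.array([[1.0, 1.0],
              [0.0, 0.4]])
b = np.array([100.0, 10.0])

mA, mB = np.linalg.solve(A, b)
print(f"stream A: {mA:.1f} kg/h, stream B: {mB:.1f} kg/h")  # 75.0 and 25.0
```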
Chemical plant design can be shown in fuller detail in a piping and instrumentation diagram (P&ID), which shows all piping, tubing, valves, and instrumentation, typically with special symbols. Showing a full plant is often complicated in a P&ID, so often only individual units or specific fluid systems are shown in a single P&ID. In the plant design, the units are sized for the maximum capacity each may have to handle. Similarly, sizes for pipes, pumps, compressors, and associated equipment are chosen for the flow capacity they have to handle. Utility systems such as electric power and water supply should also be included in the plant design. Additional piping lines for non-routine or alternate operating procedures, such as plant or unit startups and shutdowns, may have to be included.

Fluid systems design commonly includes isolation valves around various units or parts of a plant, so that a section of the plant can be isolated in case of a problem such as a leak in a unit. If pneumatically or hydraulically actuated valves are used, a system of pressurizing lines to the actuators is needed. Any points where process samples may have to be taken should have sampling lines, valves, and access to them included in the detailed design. If necessary, provisions should be made for reducing the high pressure or temperature of a sampling stream, such as including a pressure-reducing valve or sample cooler.

Units and fluid systems in the plant, including all vessels, piping, tubing, valves, pumps, compressors, and other equipment, must be rated or designed to withstand the entire range of pressures, temperatures, and other conditions which they could possibly encounter, including any appropriate safety factors. All such units and equipment should also be checked for materials compatibility, to ensure they can withstand long-term exposure to the chemicals they will come in contact with.

Any closed system in a plant which has a means of pressurizing possibly beyond the rating of its equipment, such as heating, exothermic reactions, or certain pumps or compressors, should have an appropriately sized pressure relief valve included, to prevent overpressurization for safety. Frequently all of these parameters (temperatures, pressures, flow, etc.) are exhaustively analyzed in combination through a HAZOP or fault tree analysis, to ensure that the plant has no known risk of serious hazard.

Within any constraints the plant is subject to, design parameters are optimized for good economic performance while ensuring the safety and welfare of personnel and the surrounding community. For flexibility, a plant may be designed to operate in a range around some optimal design parameters, in case feedstock or economic conditions change and re-optimization is desirable. In more modern times, computer simulations or other computer calculations have been used to help in chemical plant design and optimization.

Plant operation

Process control
In process control, information gathered automatically from various sensors or other devices in the plant is used to control various equipment for running the plant, thereby controlling operation of the plant. Instruments receiving such information signals and sending out control signals to perform this function automatically are process controllers. Previously, pneumatic controls were sometimes used; electrical controls are now common. A plant often has a control room with displays of parameters such as key temperatures, pressures, fluid flow rates and levels, and operating positions of key valves, pumps, and other equipment. In addition, operators in the control room can control various aspects of the plant operation, often including overriding automatic control. Process control with a computer represents more modern technology. Based on possibly changing feedstock composition, changing product requirements or economics, or other changes in constraints, operating conditions may be re-optimized to maximize profit.
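Most such process controllers are, at heart, feedback loops, with the proportional-integral-derivative (PID) algorithm as the workhorse form. The sketch below closes a PID loop around a crude first-order tank model; the gains and plant constants are illustrative assumptions, not values from any real plant:

```python
# Minimal discrete PID loop regulating the temperature of a stirred tank.
# Gains and the first-order plant model are illustrative assumptions.
kp, ki, kd = 2.0, 0.5, 0.1       # controller gains
dt = 1.0                         # control interval, s
setpoint = 80.0                  # target temperature, deg C

temp = 20.0                      # initial tank temperature
integral = 0.0
prev_error = setpoint - temp

for _ in range(300):
    error = setpoint - temp
    integral += error * dt
    derivative = (error - prev_error) / dt
    heater = max(0.0, kp * error + ki * integral + kd * derivative)
    prev_error = error
    # Crude first-order tank response: heating input minus ambient losses.
    temp += dt * (0.05 * heater - 0.02 * (temp - 20.0))

print(f"temperature after 300 s: {temp:.1f} deg C")  # close to the setpoint
```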
Workers
As in any industrial setting, there are a variety of workers throughout a chemical plant facility, often organized into departments, sections, or other work groups. Such workers typically include engineers, plant operators, and maintenance technicians. Other personnel at the site may include chemists, management/administration, and office workers. Types of engineers involved in operations or maintenance may include chemical process engineers, mechanical engineers for maintaining mechanical equipment, and electrical/computer engineers for electrical or computer equipment.

Transport
Large quantities of fluid feedstock or product may enter or leave a plant by pipeline, railroad tank car, or tanker truck. For example, petroleum commonly comes to a refinery by pipeline. Pipelines can also carry petrochemical feedstock from a refinery to a nearby petrochemical plant. Natural gas is a product which comes all the way from a natural gas processing plant to final consumers by pipeline or tubing. Large quantities of liquid feedstock are typically pumped into process units. Smaller quantities of feedstock or product may be shipped to or from a plant in drums. Use of drums of about 55 gallons in capacity is common for packaging industrial quantities of chemicals. Smaller batches of feedstock may be added from drums or other containers to process units by workers.

Maintenance
In addition to feeding and operating the plant, and packaging or preparing the product for shipping, plant workers are needed for taking samples for routine and troubleshooting analysis and for performing routine and non-routine maintenance. Routine maintenance can include periodic inspections and replacement of worn catalyst, analyzer reagents, various sensors, or mechanical parts. Non-routine maintenance can include investigating problems and then fixing them, such as leaks, failure to meet feed or product specifications, or mechanical failures of valves, pumps, compressors, sensors, etc.

Statutory and regulatory compliance
When working with chemicals, safety is a concern, in order to avoid problems such as chemical accidents. In the United States, the law requires that employers provide workers working with chemicals with access to a material safety data sheet (MSDS) for every kind of chemical they work with. An MSDS for a certain chemical is prepared and provided by the supplier to whoever buys the chemical. Other laws covering chemical safety, hazardous waste, and pollution must be observed, including statutes such as the Resource Conservation and Recovery Act (RCRA) and the Toxic Substances Control Act (TSCA), and regulations such as the Chemical Facility Anti-Terrorism Standards in the United States. Hazmat (hazardous materials) teams are trained to deal with chemical leaks or spills. Process Hazard Analysis (PHA) is used to assess potential hazards in chemical plants. In 1998, the U.S. Chemical Safety and Hazard Investigation Board became operational.

Clustering of commodity chemical plants
Chemical plants used particularly for commodity chemical and petrochemical manufacture are located in relatively few manufacturing locations around the world, largely due to infrastructural needs. This is less important for speciality or fine chemical batch plants. Not all commodity chemicals or petrochemicals are produced in any one location, but groups of related materials often are, to induce industrial symbiosis as well as material, energy and utility efficiency and other economies of scale. These manufacturing locations often have business clusters of units, called chemical plants, that share utilities and large-scale infrastructure such as power stations, port facilities, and road and rail terminals. In the United Kingdom, for example, there are four main locations for commodity chemical manufacture: near the River Mersey in Northwest England, on the Humber on the east coast of Yorkshire, in Grangemouth near the Firth of Forth in Scotland, and on Teesside as part of the Northeast of England Process Industry Cluster (NEPIC). Approximately 50% of the UK's petrochemicals, which are also commodity chemicals, are produced by the industry cluster companies on Teesside, at the mouth of the River Tees, on three large chemical parks at Wilton, Billingham and Seal Sands.

Corrosion and use of new materials
Corrosion in chemical process plants is a major issue that consumes billions of dollars yearly. Electrochemical corrosion of metals is pronounced in chemical process plants due to the presence of acid fumes and other electrolytic interactions. Recently, FRP (fibre-reinforced plastic) has come into use as a material of construction. The British standard specification BS4994 is widely used for the design and construction of the vessels, tanks, etc.
See also
Chemical industry
Chemical process modeling
FRP tanks and vessels - chemical process plant equipment made of FRP
BS4994
Electrical equipment in hazardous areas
Instrumentation in petrochemical industries
Chemical accident
Chemical plant cost indexes
Fine chemicals
Speciality chemicals
Petrochemical
Northeast of England Process Industry Cluster
Institution of Chemical Engineers
S-graph
Unit operations

References

Further reading
ASME B73 Standards Committee, Chemical Standard Pumps

Chemical Plant
Chemical process engineering
Chemical industry
Chemical plant
[ "Chemistry", "Engineering" ]
3,857
[ "Chemical process engineering", "Chemical engineering", "nan", "Chemical plants" ]
5,751,973
https://en.wikipedia.org/wiki/Pasch%27s%20axiom
In geometry, Pasch's axiom is a statement in plane geometry, used implicitly by Euclid, which cannot be derived from the postulates as Euclid gave them. Its essential role was discovered by Moritz Pasch in 1882.

Statement
The axiom states that, given three points A, B, C not lying in the same line and a line in the plane ABC not passing through any of A, B, C: if the line passes through a point of the segment AB, it also passes through a point of the segment AC or a point of the segment BC. The fact that segments AC and BC are not both intersected by the line is proved in Supplement I,1, which was written by P. Bernays.

A more modern version of this axiom is as follows: in the plane, if a line does not pass through a vertex of a triangle and intersects one side of the triangle internally, then it intersects precisely one other side internally and the third side externally. (In case the third side is parallel to our line, we count an "intersection at infinity" as external.) A more informal version of the axiom is often seen: if a line, not passing through any vertex of a triangle, enters the triangle through one side, it must leave through another side.

History
Pasch published this axiom in 1882, and showed that Euclid's axioms were incomplete. The axiom was part of Pasch's approach to introducing the concept of order into plane geometry.

Equivalences
In other treatments of elementary geometry, using different sets of axioms, Pasch's axiom can be proved as a theorem; it is a consequence of the plane separation axiom when that is taken as one of the axioms. Hilbert uses Pasch's axiom in his axiomatic treatment of Euclidean geometry. Given the remaining axioms in Hilbert's system, it can be shown that Pasch's axiom is logically equivalent to the plane separation axiom.

Hilbert's use of Pasch's axiom
David Hilbert uses Pasch's axiom in his book Foundations of Geometry, which provides an axiomatic basis for Euclidean geometry. Depending upon the edition, it is numbered either II.4 or II.5. His statement is given above. In Hilbert's treatment, this axiom appears in the section concerning axioms of order and is referred to as a plane axiom of order. Since he does not phrase the axiom in terms of the sides of a triangle (considered as lines rather than line segments), there is no need to talk about internal and external intersections of the line with the sides of the triangle ABC.

Caveats
Pasch's axiom is distinct from Pasch's theorem, which is a statement about the order of four points on a line. However, in the literature there are many instances where Pasch's axiom is referred to as Pasch's theorem. A notable instance of this is .

Pasch's axiom should not be confused with the Veblen–Young axiom for projective geometry, which may be stated as: if distinct points A, B, C, and D are such that the lines AB and CD meet, then the lines AC and BD also meet. There is no mention of internal and external intersections in the statement of the Veblen–Young axiom, which is only concerned with the incidence property of the lines meeting. In projective geometry the concept of betweenness (required to define internal and external) is not valid, and all lines meet (so the issue of parallel lines does not arise).

Notes

References

External links

Euclidean plane geometry
Foundations of geometry
Pasch's axiom
[ "Mathematics" ]
592
[ "Planes (geometry)", "Euclidean plane geometry", "Foundations of geometry", "Mathematical axioms" ]
5,752,817
https://en.wikipedia.org/wiki/Strasbourg%20astronomical%20clock
The Strasbourg astronomical clock is located in the Cathédrale Notre-Dame of Strasbourg, Alsace, France. It is the third clock on that spot and dates from the time of the first French possession of the city (1681–1870). The first clock had been built in the 14th century and the second in the 16th century, when Strasbourg was a Free imperial city of the Holy Roman Empire. The current, third clock dates from 1843. Its main features, besides the automata, are a perpetual calendar (including a computus), an orrery (planetary dial), a display of the real position of the Sun and the Moon, and solar and lunar eclipses. The main attraction is the procession of the 18-inch high figures of Christ and the Apostles, which occurs every day at solar noon, while the life-size rooster crows thrice. Clocks First clock The first clock was built in the Cathedrale Notre-Dame of Strasbourg sometime between 1352 and 1354; the name of its maker is unknown. Its centerpiece, a mechanical gilded rooster, is believed to be the oldest surviving example of automata in the world. It was used as part of the second clock before being put on display at the Strasbourg Museum for Decorative Arts in the Palais du Rohan. This bird, a symbol of Christ's passion, was made of iron, copper, and wood. At noon it flapped its wings and spread out its feathers. It also opened its beak, put out its tongue, and by means of a bellows and a reed, crowed. In the top compartment at noon, to the sound of a small carillon, the Three Kings bowed before the figure of The Virgin Mary and the Christ Child. The clock most certainly had an astrolabe dial and a calendar dial. It was standing on the wall opposite the current clock, and a staircase led to its various levels. Supports for former balconies can still be seen today, and suggest that the height of the clock was about 18 m (59'), with a width of about 7.70 m (25') at the base. At the base a painted figure of a zodiacal man showed the relationship between the signs of the zodiac and parts of the human body. There is also a big circle engraved on the wall, but this circle is not a remnant of the first clock. It was added at a later stage, for some unclarified reason. The entire structure was dismantled in 1572–4 when the second and even more ambitious clock was mounted on the opposite wall of the south transept. Second clock The first clock stopped working and a new one was started in the 16th century. It was designed by the mathematician Christian Herlin. At the start of the construction, around 1547, the cathedral was under the control of the Protestants. During this time the stone case and the staircase were built along with the dial and iron framework. Work was halted a year later when the cathedral was put under Catholic control and would not resume until 1571, when the cathedral was again under the control of the Protestants. Construction of the clock resumed under the direction of Conrad Dasypodius, a pupil of and successor to Herlin, who had since died. Dasypodius enlisted the Swiss clockmakers Isaac Habrecht and Josia Habrecht, as well as the astronomer and musician David Wolckenstein, and Swiss artists Tobias Stimmer and his brother Josias. The clock was completed in 1574. This clock was remarkable both for its complexity as an astronomical device and for the range and richness of its decorations and accessories. 
As well as the many dials and indicators - the calendar dial, the astrolabe, the indicators for planets and eclipses - the clock was also well endowed with paintings, moving statues, automata, and musical entertainment in the form of a six-tune carillon. The Stimmers painted large panels that depicted the three Fates, Urania, Colossus, Nicolaus Copernicus, and various sacred themes, including the Creation, the resurrection of the dead, the Last Judgment, and the rewards of virtue and vice. At the base of the clock there was an 86 cm (34") diameter celestial globe, accompanied by the figure of a pelican. The globe was connected to the clock movement, and set for the meridian of Strasbourg. A popular feature of the new clock was the golden cockerel, a relic of the first clock, which perched on the top of the cupola and entertained the onlookers at noon every day until 1640, when it was struck by lightning. Most of the works are still preserved in the Museum of Decorative Arts. Third clock The second clock stopped working around 1788 and stood still until 1838, when Jean-Baptiste Schwilgué (1776–1856) started to build the current clock. He designed new mechanisms to replace the old ones, which were meant to be state-of-the-art. Schwilgué had wanted to work on the clock since his boyhood, but he only got the contract 50 years later. In the meantime, he had become acquainted with clockmaking, mathematics, and mechanics. He spent one year preparing his 30 workers before actually starting construction. Then, construction lasted from 1838 until June 24, 1843. The clock, however, was inaugurated on December 31, 1842. The gold hands of the clock show mean solar time, or "temps moyen"; the silver hands show Central European Time, labelled "heure publique". In winter, mean solar time is approximately 30½ minutes behind Central European Time. The clock features a planetary calendar, which shows the current positions of the sun and moon, and a mechanical rooster. Every day at 12:30 the rooster crows and apostles move around the clock. This clock probably contains the first perpetual mechanical Gregorian computus, designed by Schwilgué in 1816. In the 1970s, Frédéric Klinghammer built a reduced replica of it. Model In 1887, a 25-year-old Sydney watchmaker named Richard Smith built a working model of the third clock in the scale 1:5. Having never seen the original, Smith had to work from a pamphlet which described its timekeeping and astronomical functions. This model is in the collection of the Powerhouse Museum, Sydney, Australia. See also Astronomical clock References External links Strasbourg Cathedral, astronomical clock, 1574 Images of Strasbourg Astronomical clock Further reading The Astronomical Clock of Strasbourg Cathedral Notes Henry King: "Geared to the Stars: the evolution of planetariums, orreries, and astronomical clocks", University of Toronto Press, 1978 Alfred Ungerer and Théodore Ungerer: "L'horloge astronomique de la cathédrale de Strasbourg", Strasbourg, 1922 Henri Bach, Jean-Pierre Rieb, and Robert Wilhelm: "Les trois horloges astronomiques de la cathédrale de Strasbourg", 1992 Günther Oestmann: Die Straßburger Münsteruhr: Funktion und Bedeutung eines Kosmos-Modells des 16. Jahrhunderts. Stuttgart 1993; 2nd edition Berlin/Diepholz 2000. Buildings and structures in Strasbourg Astronomical clocks in France Collection of the Powerhouse Museum Tourist attractions in Strasbourg Clocks
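Schwilgué's computus mechanism computes the date of Easter under the full Gregorian rules. For comparison, here is a software sketch of the same calendrical arithmetic using the well-known anonymous Gregorian (Meeus/Jones/Butcher) algorithm; it illustrates the calculation the mechanism performs and says nothing about how Schwilgué's gear train actually implements it.

```python
def gregorian_easter(year: int) -> tuple[int, int]:
    """Return (month, day) of Easter Sunday in the Gregorian calendar,
    using the anonymous Gregorian (Meeus/Jones/Butcher) algorithm."""
    a = year % 19                        # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)             # century and year within century
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30   # epact-like quantity
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(gregorian_easter(1843))
```

Running it for 1843, the year the third clock entered service, yields (4, 16), i.e. Easter fell on April 16.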
Strasbourg astronomical clock
[ "Physics", "Technology", "Engineering" ]
1,501
[ "Physical systems", "Machines", "Clocks", "Measuring instruments" ]
5,752,845
https://en.wikipedia.org/wiki/Radium%20bromide
Radium bromide is the bromide salt of radium, with the formula RaBr2. It is produced during the process of separating radium from uranium ore. This inorganic compound was discovered by Pierre and Marie Curie in 1898, and the discovery sparked a huge interest in radiochemistry and radiotherapy. Since elemental radium oxidizes readily in air and water, radium salts are the preferred chemical form of radium to work with. Even though it is more stable than elemental radium, radium bromide is still extremely toxic, and can explode under certain conditions. History After the Curies discovered radium (in the form of radium chloride) in 1898, scientists began to isolate radium on an industrial scale, with the intent of using it for radiotherapy treatments. Radium salts, including radium bromide, were most often used by placing the chemical in a tube that was then passed over or inserted into diseased tissue in the body. Many of the first scientists to try to determine radium's uses were affected by their exposure to the radioactive material. Pierre Curie went so far as to self-inflict a severe radiation burn by applying a radium source directly to his forearm, which ultimately created a skin lesion. All types of therapeutic tests were performed for different skin diseases including eczema, lichen and psoriasis. Later, it was hypothesized that radium could be used to treat cancerous diseases. However, during this time frame, radium also gained popularity among pseudoscientific "health remedy" industries, which promoted radium as an essential element that could "heal" and "reinvigorate" cells in the human body and remove poisonous substances. As a result, radium gained popularity as a "health trend" in the 1920s and radium salts were added to food, drinks, clothing, toys, and even toothpaste. Furthermore, many respectable journals and newspapers in the early 1900s published statements claiming that radium posed no health hazard. The main problem with the growth of interest in radium was the scarcity of radium itself. In 1913, it was reported that the Radium Institute had a total of four grams of radium, which at the time was more than half the world supply. Numerous countries and institutions across the world set out to extract as much radium as possible, a time-consuming and expensive task. It was reported in Science magazine in 1919 that the United States had produced approximately 55 grams of radium since 1913, which was also more than half the radium produced in the world at the time. A principal source for radium is pitchblende, which holds a total of 257 mg of radium per ton of U3O8. With so little product recovered from such a large amount of material, it was difficult to extract a large quantity of radium. This was the reason radium bromide became one of the most expensive materials on Earth. In 1921, it was stated in Time magazine that one ton of radium cost 17,000,000,000 Euros, whereas one ton of gold cost 208,000 Euros and one ton of diamond cost 400,000,000 Euros. Radium bromide was also found to induce phosphorescence at normal temperatures. This led to the US army manufacturing and supplying luminous watches and gun sights to soldiers. It also allowed for the invention of the spinthariscope, which soon became a popular household item. Properties Radium bromide is a luminous salt that causes the air surrounding it, even when encased in a tube, to glow a brilliant green and demonstrate all bands of the nitrogen spectrum. 
It is possible that the effect of the alpha radiation on the nitrogen in the air causes this luminescence. Radium bromide is highly reactive and crystals can sometimes explode, especially if heated. Helium gas evolved from alpha particles can accumulate within the crystals, which can cause them to weaken and rupture. Radium bromide will crystallize when separated from aqueous solution. It forms a dihydrate, very similar to barium bromide. Production Radium is obtained from uranium or pitchblende ores by the "Curie method", which involves two major stages. In the first stage, the ore is treated with sulfuric acid, which dissolves many components. The residue contains barium, radium, and lead sulfates. The mixture is then treated with sodium chloride and sodium carbonate to remove the lead. The second stage involves separation of the barium from the radium. Radium bromide can be obtained from radium chloride by reaction with a stream of hydrogen bromide. Hazards Radium bromide, like all radium compounds, is highly radioactive and very toxic. Due to its chemical similarity to calcium, radium tends to accumulate in the bones, where it irradiates the bone marrow and can cause anemia, leukemia, sarcoma, bone cancer, genetic defects, infertility, ulcers, and necrosis. Symptoms of poisoning can take years to develop, by which time it is usually too late for any effective medical treatment. Radium bromide also poses a severe environmental hazard, amplified due to its high solubility in water, and it can bioaccumulate and cause long-lasting damage to organisms. Radium bromide is highly reactive, and crystals can explode if violently shocked or heated. This is, in part, due to self-damage of the crystals by alpha radiation, which weakens the lattice structure. Uses Radium and radium salts were commonly used for treating cancer; however, these treatments have been mostly phased out in favor of less toxic chemicals such as technetium or strontium-89. Radium bromide was also used in luminous paint on watches, but its use was ultimately phased out in the 1960s and 1970s in favor of less dangerous chemicals like promethium and tritium. See also Radiation therapy Ionizing radiation Radiology References Bromides Alkaline earth metal halides Explosive chemicals Radium compounds
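The hazards and the luminosity described above stem from radium-226's radioactivity. As a rough, self-contained illustration of the quantities involved, the sketch below estimates the activity of the radium contained in one gram of RaBr2 from textbook constants (half-life of Ra-226 about 1600 years); the calculation is ours, not something reported in the sources above.

```python
import math

AVOGADRO = 6.022e23                              # atoms per mole
HALF_LIFE_RA226_S = 1600 * 365.25 * 24 * 3600    # ~1600 years, in seconds
M_RABR2 = 226.0 + 2 * 79.9                       # molar mass of RaBr2, g/mol

# Decay constant of Ra-226 and activity of the radium in 1 g of the salt
decay_const = math.log(2) / HALF_LIFE_RA226_S    # per second
atoms_ra_per_gram_salt = AVOGADRO / M_RABR2      # one Ra atom per formula unit
activity_bq = decay_const * atoms_ra_per_gram_salt

print(f"{activity_bq:.2e} Bq per gram of RaBr2")      # roughly 2e10 Bq
print(f"{activity_bq / 3.7e10:.2f} Ci per gram of RaBr2")  # roughly 0.6 Ci
```

The result, about 0.6 curies per gram of salt, is consistent with the historical definition of the curie as roughly the activity of one gram of pure radium, scaled by radium's mass fraction in RaBr2.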
Radium bromide
[ "Chemistry" ]
1,236
[ "Explosive chemicals", "Bromides", "Salts" ]
5,753,978
https://en.wikipedia.org/wiki/Ohnesorge%20number
The Ohnesorge number (Oh) is a dimensionless number that relates the viscous forces to inertial and surface tension forces. The number was defined by Wolfgang von Ohnesorge in his 1936 doctoral thesis. It is defined as: Oh = μ / √(ρσL) = √We / Re, where μ is the dynamic viscosity of the liquid, ρ is the density of the liquid, σ is the surface tension, L is the characteristic length scale (typically drop diameter), Re is the Reynolds number, and We is the Weber number. Applications The Ohnesorge number for a 3 mm diameter rain drop is typically ~0.002. Larger Ohnesorge numbers indicate a greater influence of the viscosity. The number is often used in free-surface fluid dynamics, such as the dispersion of liquids in gases and in spray technology. In inkjet printing, liquids whose Ohnesorge number is in the range 0.1 < Oh < 1.0 are jettable (1 < Z < 10, where Z is the reciprocal of the Ohnesorge number). See also Laplace number - There is an inverse relationship, La = 1/Oh², between the Laplace number and the Ohnesorge number. It is more historically correct to use the Ohnesorge number, but often mathematically neater to use the Laplace number. References Dimensionless numbers of fluid mechanics Fluid dynamics
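As a quick numeric check of the rain-drop figure quoted above, the sketch below evaluates Oh for a 3 mm water drop; the property values for water are typical handbook numbers assumed here, not values taken from the article.

```python
from math import sqrt

def ohnesorge(mu: float, rho: float, sigma: float, length: float) -> float:
    """Ohnesorge number: Oh = mu / sqrt(rho * sigma * L)."""
    return mu / sqrt(rho * sigma * length)

# Typical values for water at room temperature (assumed)
mu = 1.0e-3       # dynamic viscosity, Pa*s
rho = 1000.0      # density, kg/m^3
sigma = 0.072     # surface tension, N/m
diameter = 3e-3   # 3 mm rain drop, m

print(f"Oh = {ohnesorge(mu, rho, sigma, diameter):.4f}")  # ~0.0022
```

The computed value of about 0.002 matches the order of magnitude cited above for a rain drop.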
Ohnesorge number
[ "Chemistry", "Engineering" ]
270
[ "Piping", "Chemical engineering", "Fluid dynamics stubs", "Fluid dynamics" ]
5,754,502
https://en.wikipedia.org/wiki/Prefabricated%20building
A prefabricated building, informally a prefab, is a building that is manufactured and constructed using prefabrication. It consists of factory-made components or units that are transported and assembled on-site to form the complete building. Components may be made from various materials and are combined during the on-site installation process. History Buildings have been built in one place and reassembled in another throughout history. This was especially true for mobile activities, or for new settlements. Elmina Castle, the first slave fort in West Africa, was also the first European prefabricated building in Sub-Saharan Africa. In North America, in 1624 one of the first buildings at Cape Ann was probably partially prefabricated, and was rapidly disassembled and moved at least once. In 1801, John Rollo described the earlier use of portable hospital buildings in the West Indies. Possibly the first advertised prefab house was the "Manning cottage". A London carpenter, Henry Manning, constructed a house that was built in components, then shipped and assembled by British emigrants. This was published at the time (advertisement, South Australian Record, 1837) and a few still stand in Australia. One such is the Friends Meeting House, Adelaide. The peak year for the importation of portable buildings to Australia was 1853, when several hundred arrived. These have been identified as coming from Liverpool, Boston and Singapore (with Chinese instructions for re-assembly). In Barbados the Chattel house was a form of prefabricated building which was developed by emancipated slaves who had limited rights to build upon land they did not own. As the buildings were moveable they were legally regarded as chattels. In 1855, during the Crimean War, after Florence Nightingale wrote a letter to The Times, Isambard Kingdom Brunel was commissioned to design a prefabricated modular hospital. In five months he designed the Renkioi Hospital: a 1,000-patient hospital, with innovations in sanitation, ventilation and a flushing toilet. Fabricator William Eassie constructed the required 16 units in Gloucester Docks, which were shipped directly to the Dardanelles. Only used from March 1856 to September 1857, it reduced the death rate from 42% to 3.5%. The world's first prefabricated, pre-cast panelled apartment blocks were pioneered in Liverpool. The process was invented by city engineer John Alexander Brodie, who also invented the football goal net. The tram stables at Walton in Liverpool followed in 1906. The idea was not extensively adopted in Britain; however, it was widely adopted elsewhere, particularly in Eastern Europe. Prefabricated homes were produced during the Gold Rush in the United States, when kits were produced to enable Californian prospectors to quickly construct accommodation. Homes were available in kit form by mail order in the United States in 1908. Prefabricated housing was popular during the Second World War due to the need for mass accommodation for military personnel. The United States used Quonset huts as military buildings, and in the United Kingdom prefabricated buildings used included Nissen huts and Bellman Hangars. 'Prefabs' were built after the war as a means of quickly and cheaply providing quality housing as a replacement for the housing destroyed during the Blitz. The proliferation of prefabricated housing across the country was a result of the Burt Committee and the Housing (Temporary Accommodation) Act 1944. 
Under the Ministry of Works Emergency Factory Made housing programme, a specification was drawn up and bid on by various private construction and manufacturing companies. After approval by the MoW, companies could bid on Council-led development schemes, resulting in whole estates of prefabs constructed to provide accommodation for those made homeless by the War and ongoing slum clearance. Almost 160,000 had been built in the UK by 1948 at a cost of close to £216 million. The largest single prefab estate in Britain was at Belle Vale (South Liverpool), where more than 1,100 were built after World War II. The estate was demolished in the 1960s amid much controversy, as the prefabs were very popular with residents at the time. Prefabs were aimed at families, and typically had an entrance hall, two bedrooms (parents and children), a bathroom (a room with a bath), which was a novel innovation for many Britons at that time, a separate toilet, a living room and an equipped (not fitted in the modern sense) kitchen. Construction materials included steel, aluminium, timber or asbestos cement, depending on the type of dwelling. The aluminium Type B2 prefab was produced as four pre-assembled sections which could be transported by lorry anywhere in the country. The Universal House was given to the Chiltern Open Air Museum after 40 years of temporary use. The Mark 3 was manufactured by the Universal Housing Company Ltd, Rickmansworth. The United States used prefabricated housing for troops during the war and for GIs returning home. Prefab classrooms were popular with UK schools whose rolls were increasing during the baby boom of the 1950s and 1960s. Many buildings were designed with a five- to ten-year life span, but have far exceeded this, with a number surviving today. In 2002, for example, the city of Bristol still had residents living in 700 examples. Many UK councils have been in the process of demolishing the last surviving examples of Second World War prefabs in order to comply with the British government's Decent Homes Standard, which came into effect in 2010. There has, however, been a recent revival in prefabricated methods of construction in order to compensate for the United Kingdom's current housing shortage. Prefabs and the modernist movement Architects are incorporating modern designs into the prefabricated houses of today. Prefab housing should no longer be compared to a mobile home in terms of appearance, but to that of a complex modernist design. There has also been an increase in the use of "green" materials in the construction of these prefab houses. Consumers can easily select between different environmentally friendly finishes and wall systems. Since these homes are built in parts, it is easy for a home owner to add additional rooms or even solar panels to the roofs. Many prefab houses can be customized to the client's specific location and climate, making prefab homes much more flexible and modern than before. The zeitgeist in architectural circles favors the small carbon footprint of "prefab". Efficiency The process of building pre-fabricated buildings has become so efficient in China that a builder in Changsha built a ten-storey building in 28 hours and 45 minutes. Sustainability Prefabricated construction generates a smaller carbon footprint, improves energy use and efficiency, and produces less waste, making it more sustainable and environmentally friendly, and compliant with sustainable design standards. 
Modular Architecture Modular architecture allows, through 3D modeling, a structure to be designed and built away from the site where it will be installed. This offers several advantages, such as more sustainable design, greater cost and time savings, and standardization of design. This is especially important for large-scale construction projects. In communist countries Many Eastern European countries had suffered physical damage during World War II and their economies were in a very poor state. There was a need to reconstruct cities which had been severely damaged due to the war. For example, Warsaw had been practically razed to the ground under the planned destruction of Warsaw by German forces after the 1944 Warsaw Uprising. The centre of Dresden, Germany, had been totally destroyed by the 1945 Allied bombardment. Stalingrad had been largely destroyed and only a small number of structures were left standing. Prefabricated buildings served as an inexpensive and quick way to alleviate the massive housing shortages associated with the wartime destruction and large-scale urbanization and rural flight. Prefabricated commercial buildings Prefabrication for commercial uses has a long history - a major expansion was made in the Second World War when ARCON (short for Architecture Consultants) developed a system using steel components that could be rapidly erected and then clad with a variety of materials to suit local conditions, availability, and cost. McDonald's uses prefabricated structures for its buildings, and set a record by constructing a building and opening for business within 13 hours (on pre-prepared ground works). In the UK, the major supermarkets have each developed a modular unit system of shop building, based on the systems developed by German discount retailer Aldi and the Danish supermarket chain Netto. PEB Structure In structural engineering, a pre-engineered building (PEB) is designed by a PEB supplier or PEB manufacturer with a single design to be fabricated using various materials and methods to satisfy a wide range of structural and aesthetic design requirements. This is contrasted with a building built to a design that was created specifically for that building. Within some geographic industry sectors pre-engineered buildings are also called pre-engineered metal buildings (PEMB) or, as is becoming increasingly common due to the reduced amount of pre-engineering involved in custom computer-aided designs, simply engineered metal buildings (EMB). During the 1960s, standardized engineering designs for buildings were first marketed as PEBs. Historically, the primary framing structure of a pre-engineered building is an assembly of Ɪ-shaped members, often referred to as I-beams. In pre-engineered buildings, the I-beams used are usually formed by welding together steel plates to form the I-section. The I-beams are then field-assembled (e.g. bolted connections) to form the entire frame of the pre-engineered building. Some manufacturers taper the framing members (varying in web depth) according to the local loading effects. Larger plate dimensions are used in areas of higher load effects. Other forms of primary framing can include trusses, mill sections rather than three-plate welded, castellated beams, etc. The choice of economic form can vary depending on factors such as local capabilities (e.g. manufacturing, transportation, construction) and variations in material vs. labour costs. Typically, primary frames are 2D-type frames (i.e. 
may be analyzed using two-dimensional techniques). Advances in computer-aided design technology, materials and manufacturing capabilities have assisted a growth in alternate forms of pre-engineered building such as the tension fabric building, and more sophisticated analysis (e.g. three-dimensional) as is required by some building codes. Cold-formed Z- and C-shaped members may be used as secondary structural elements to fasten and support the external cladding. Roll-formed profiled steel sheet, wood, tensioned fabric, precast concrete, masonry block, glass curtainwall or other materials may be used for the external cladding of the building. In order to accurately design a pre-engineered building, engineers consider the clear span between bearing points, bay spacing, roof slope, live loads, dead loads, collateral loads, wind uplift, deflection criteria, internal crane system and maximum practical size and weight of fabricated members. Historically, pre-engineered building manufacturers have developed pre-calculated tables for different structural elements in order to allow designers to select the most efficient I-beam size for their projects. However, the table selection procedures are becoming rare with the evolution in computer-aided custom designs. While pre-engineered buildings can be adapted to suit a wide variety of structural applications, the greatest economy will be realized when utilising standard details. An efficiently designed pre-engineered building can be lighter than a conventional steel building by up to 30%. Lighter weight equates to less steel and a potential price savings in structural framework. Project professionals and manufacturer-designed buildings The project architect, sometimes called the Architect of Record, is typically responsible for aspects such as aesthetics, dimensions, occupant comfort and fire safety. When a pre-engineered building is selected for a project, the architect accepts conditions inherent in the manufacturer's product offerings for aspects such as materials, colours, structural form, dimensional modularity, etc. Despite the existence of the manufacturer's standard assembly details, the architect remains responsible for ensuring that the manufacturer's product and assembly are consistent with the building code requirements (e.g. continuity of air/vapour retarders, insulation, rain screen; size and location of exits; fire rated assemblies) and occupant/owner expectations. Many jurisdictions recognize the distinction between the project engineer, sometimes called the Engineer of Record, and the manufacturer's employee or subcontract engineer, sometimes called a specialty engineer. The principal differences between these two entities on a project are the limits of commercial obligation, professional responsibility and liability. The structural Engineer of Record is responsible for specifying the design parameters for the project (e.g. materials, loads, design standards, service limits) and for ensuring that the element and assembly designs by others are consistent in the global context of the finished building. The specialty engineer is responsible for designing only those elements which the manufacturer is commercially obligated to supply (e.g. by contract) and for communicating the assembly procedures, design assumptions and responses, to the extent that the design relies on or affects work by others, to the Engineer of Record – usually described in the manufacturer's erection drawings and assembly manuals. 
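The design parameters listed above (clear span, loads, deflection criteria) feed directly into member sizing. As a deliberately simplified sketch, assume a simply supported primary member under a uniform factored load; the required elastic section modulus then follows from the classic formulas M = wL²/8 and S = M/σ. Real PEB practice uses code-specified load combinations and tapered-member analysis, so the function and numbers below are illustrative assumptions only.

```python
def required_section_modulus(w_kn_per_m: float, span_m: float,
                             allow_stress_mpa: float) -> float:
    """Required elastic section modulus (cm^3) for a simply supported
    beam under a uniform load: M = w*L^2/8, S = M / sigma_allow."""
    m_max_knm = w_kn_per_m * span_m ** 2 / 8.0               # max moment, kN*m
    s_req_m3 = (m_max_knm * 1e3) / (allow_stress_mpa * 1e6)  # N*m / Pa = m^3
    return s_req_m3 * 1e6                                    # m^3 -> cm^3

# Illustrative, assumed numbers: 10 kN/m factored load, 24 m clear span,
# 165 MPa allowable bending stress for mild steel
print(f"S required ~ {required_section_modulus(10.0, 24.0, 165.0):.0f} cm^3")
```

For these assumed inputs the requirement comes out at roughly 4,400 cm³, the kind of figure a designer would then match against the manufacturer's section tables mentioned above.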
The manufacturer produces an engineered product but does not typically provide engineering services to the project. In the context described, the Architect and Engineer of Record are the designers of the building and bear ultimate responsibility for the performance of the completed work. A buyer should be aware of the project professional distinctions when developing the project plan. These prefabricated structures are widely used in both the residential and industrial sectors because of their versatility. Evolving design features and functional flexibility Recent advancements in pre-engineered building systems have led to the integration of diverse structural sub-systems and accessories, enhancing both functionality and aesthetic appeal. These structures now commonly include mezzanine floors for optimised interior space, crane runway beams for industrial applications, and specialised roof platforms or catwalks for operational efficiency. Aesthetic components such as fascias, parapets, and customised canopies contribute to modern design flexibility, catering to varied architectural requirements. Furthermore, pre-engineered buildings have gained recognition for their cost-effectiveness and speed of construction compared to traditional methods, making them a preferred choice for both commercial and industrial projects worldwide. See also Kit house MAN steel house Modular building Prefabricated home Prefabrication Containerized housing unit References External links National Association of Home Builders (US) - "NAHB's Building Systems Council's Concrete, Log, Modular, and Panelized Homes Estate of Prefabs in SE London is listed for preservation Full questions and answers to building a prefab home Warehouses Building engineering Buildings and structures by type Prefabricated buildings Building Buildings and structures
Prefabricated building
[ "Engineering" ]
3,023
[ "Buildings and structures by type", "Building", "Building engineering", "Construction", "Civil engineering", "Buildings and structures", "Prefabricated buildings", "Architecture" ]
5,755,338
https://en.wikipedia.org/wiki/Radical%20polynomial
In mathematics, in the realm of abstract algebra, a radical polynomial is a multivariate polynomial over a field that can be expressed as a polynomial in the sum of squares of the variables. That is, if k[x₁, …, xₙ] is a polynomial ring, the ring of radical polynomials is the subring generated by the polynomial x₁² + x₂² + ⋯ + xₙ². Radical polynomials are characterized as precisely those polynomials that are invariant under the action of the orthogonal group. The ring of radical polynomials is a graded subalgebra of the ring of all polynomials. The standard separation of variables theorem asserts that every polynomial can be expressed as a finite sum of terms, each term being a product of a radical polynomial and a harmonic polynomial. This is equivalent to the statement that the ring of all polynomials is a free module over the ring of radical polynomials. References Abstract algebra Polynomials Invariant theory
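As a concrete check of the invariance characterization, the SymPy sketch below verifies that a sample radical polynomial in two variables is unchanged by an arbitrary rotation of the plane; the example polynomial is our own choice for illustration, not one from the literature.

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)

# A sample radical polynomial: a polynomial in r^2 = x^2 + y^2
r2 = x**2 + y**2
p = r2**2 + 3 * r2

# Rotate the coordinates by an arbitrary angle t (a generic rotation in O(2))
x_rot = sp.cos(t) * x - sp.sin(t) * y
y_rot = sp.sin(t) * x + sp.cos(t) * y

p_rotated = p.subs({x: x_rot, y: y_rot}, simultaneous=True)
print(sp.simplify(sp.expand(p_rotated - p)))  # prints 0: p is invariant
```

The difference simplifies to zero because cos²t + sin²t = 1, so x² + y² (and hence any polynomial in it) is unchanged by the rotation.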
Radical polynomial
[ "Physics", "Mathematics" ]
163
[ "Symmetry", "Group actions", "Polynomials", "Invariant theory", "Abstract algebra", "Algebra" ]
5,758,407
https://en.wikipedia.org/wiki/Coproporphyrinogen%20III%20oxidase
Coproporphyrinogen-III oxidase, mitochondrial (abbreviated as CPOX) is an enzyme that in humans is encoded by the CPOX gene. A genetic defect in the enzyme results in a reduced production of heme in animals. The medical condition associated with this enzyme defect is called hereditary coproporphyria. CPOX, the sixth enzyme of the haem biosynthetic pathway, converts coproporphyrinogen III to protoporphyrinogen IX through two sequential steps of oxidative decarboxylation. The activity of the CPOX enzyme, located in the mitochondrial membrane, is measured in lymphocytes. Function CPOX is an enzyme involved in the sixth step of porphyrin metabolism; it catalyses the oxidative decarboxylation of coproporphyrinogen III to protoporphyrinogen IX in the haem and chlorophyll biosynthetic pathways. The protein is a homodimer containing two internally bound iron atoms per molecule of native protein. The enzyme is active in the presence of molecular oxygen, which acts as an electron acceptor. The enzyme is widely distributed, having been found in a variety of eukaryotic and prokaryotic sources. Structure Gene Human CPOX is a mitochondrial enzyme encoded by a 14 kb CPOX gene containing seven exons located on chromosome 3 at q11.2. Protein CPOX is expressed as a 40 kDa precursor and contains an amino-terminal mitochondrial targeting signal. After proteolytic processing, the protein is present as a mature form of a homodimer with a molecular mass of 37 kDa. Clinical significance Hereditary coproporphyria (HCP) and harderoporphyria are two phenotypically separate disorders that concern partial deficiency of CPOX. Neurovisceral symptomatology predominates in HCP. Additionally, it may be associated with abdominal pain and/or skin photosensitivity. Hyper-excretion of coproporphyrin III in urine and faeces has been recorded in biochemical tests. HCP is an autosomal dominant inherited disorder, whereas harderoporphyria is a rare erythropoietic variant form of HCP and is inherited in an autosomal recessive fashion. Clinically, it is characterized by neonatal haemolytic anaemia. Sometimes, the presence of skin lesions with marked faecal excretion of harderoporphyrin is also described in harderoporphyric patients. To date, over 50 CPOX mutations causing HCP have been described. Most of these mutations result in substitution of amino acid residues within the structural framework of CPOX. At least 32 of these mutations are considered to be disease-causing mutations. In terms of the molecular basis of HCP and harderoporphyria, mutations of CPOX in patients with harderoporphyria were demonstrated in the region of exon 6, where mutations in those with HCP were also identified. As only patients with a mutation in this region (K404E) develop harderoporphyria, this mutation diminishes the second step of the decarboxylation reaction during the conversion of coproporphyrinogen to protoporphyrinogen, implying that the active site of the enzyme involved in the second step of decarboxylation is encoded by exon 6. Interactions CPOX has been shown to interact with the atypical keto-isocoproporphyrin (KICP) in human subjects with mercury (Hg) exposure. References Further reading External links Protein families
Coproporphyrinogen III oxidase
[ "Biology" ]
747
[ "Protein families", "Protein classification" ]
5,758,621
https://en.wikipedia.org/wiki/Porphobilinogen
Porphobilinogen (PBG) is an organic compound that occurs in living organisms as an intermediate in the biosynthesis of porphyrins, which include critical substances like hemoglobin and chlorophyll. The structure of the molecule can be described as a molecule of pyrrole with side chains substituted for hydrogen atoms at positions 2, 3 and 4 in the ring (1 being the nitrogen atom): respectively, an aminomethyl group (–CH2NH2), an acetic acid (carboxymethyl) group (–CH2COOH), and a propionic acid (carboxyethyl) group (–CH2CH2COOH). Biosynthesis In the first step of the porphyrin biosynthesis pathway, porphobilinogen is generated from two molecules of aminolevulinate (ALA) by the enzyme ALA dehydratase. Metabolism In the typical porphyrin biosynthesis pathway, four molecules of porphobilinogen are concatenated through carbons 2 and 5 of the pyrrole ring (those adjacent to the nitrogen atom) into hydroxymethylbilane by the enzyme porphobilinogen deaminase, also known as hydroxymethylbilane synthase. Pathologies Acute intermittent porphyria causes an increase in urinary porphobilinogen. References Metabolism Pyrroles Carboxylic acids Amines
Porphobilinogen
[ "Chemistry", "Biology" ]
286
[ "Carboxylic acids", "Functional groups", "Cellular processes", "Biochemistry", "Metabolism" ]
5,758,805
https://en.wikipedia.org/wiki/Uroporphyrinogen
Uroporphyrinogens are cyclic tetrapyrroles with four propionic acid groups ("P" groups) and four acetic acid groups ("A" groups). There are four forms, which vary based upon the arrangements of the "P" and "A" groups (in clockwise order): In the "I" variety (i.e. uroporphyrinogen I), the order repeats four times: AP-AP-AP-AP. In the "III" variety (i.e. uroporphyrinogen III), the fourth is reversed: AP-AP-AP-PA. This is the most common form. In the synthesis of porphyrin, uroporphyrinogen III is created from the linear tetrapyrrole hydroxymethylbilane by the enzyme uroporphyrinogen III synthase, and is further converted into coproporphyrinogen III by the enzyme uroporphyrinogen III decarboxylase. The "II" and "IV" varieties can be created synthetically, but do not appear in nature. External links Biomolecules
Uroporphyrinogen
[ "Chemistry", "Biology" ]
237
[ "Natural products", "Biotechnology stubs", "Organic compounds", "Biochemistry stubs", "Structural biology", "Biomolecules", "Biochemistry", "Molecular biology" ]
11,198,470
https://en.wikipedia.org/wiki/Annual%20Review%20of%20Astronomy%20and%20Astrophysics
The Annual Review of Astronomy and Astrophysics is an annual peer-reviewed scientific journal published by Annual Reviews. The co-editors are Ewine van Dishoeck and Robert C. Kennicutt. The journal reviews scientific literature pertaining to local and distant celestial entities throughout the observable universe, as well as cosmology, instrumentation, techniques, and the history of developments. It was established in 1963. As of 2023, it is being published as open access, under the Subscribe to Open model. History In November 1960, the board of directors of the nonprofit publisher Annual Reviews began investigating the need for a new journal of review articles that covered developments in astronomy and astrophysics. The board consulted an advisory group of experts, including Ronald Bracewell, Robert Jastrow, Joseph Kaplan, Paul Merrill, Otto Struve, and Harold Urey. The editorial committee met in August 1961 to determine the authors and topics for the first volume, which was published in 1963. As of 2020, it was published both in print and electronically. It defines its scope as covering significant developments in astronomy and astrophysics, including the Sun, the Solar System, exoplanets, stars, the interstellar medium, the Milky Way and other galaxies, galactic nuclei, cosmology, and the instrumentation and techniques used for research and analysis. As of 2024, Journal Citation Reports gives the journal an impact factor of 26.3, ranking it second out of 84 journals in the category "Astronomy and Astrophysics". It is abstracted and indexed in Scopus, Science Citation Index Expanded, Civil Engineering Abstracts, Inspec, and Academic Search, among others. Editorial processes The Annual Review of Astronomy and Astrophysics is led by the editor or co-editors. They are assisted by the editorial committee, which includes associate editors, regular members, and occasionally guest editors. Guest members participate at the invitation of the editor, and serve terms of one year. All other members of the editorial committee are appointed by the Annual Reviews board of directors and serve five-year terms. The editorial committee determines which topics should be included in each volume and solicits reviews from qualified authors. Unsolicited manuscripts are not accepted. Peer review of accepted manuscripts is undertaken by the editorial committee. Editors of volumes Dates indicate publication years in which someone was credited as a lead editor or co-editor of a journal volume. The planning process for a volume begins well before the volume appears, so appointment to the position of lead editor generally occurred prior to the first year shown here. An editor who has retired or died may be credited as a lead editor of a volume that they helped to plan, even if it is published after their retirement or death. Leo Goldberg (1963–1973) Geoffrey Burbidge (1974–2004) Roger Blandford (2005–2011) Ewine van Dishoeck and Sandra Faber (2012–2021) Ewine van Dishoeck and Robert C. Kennicutt (present) Current editorial committee As of 2022, the editorial committee consists of the co-editors and the following members: Conny Aerts Joss Bland-Hawthorn Xiaohui Fan Jonathan J. Fortney Eve C. Ostriker Eliot Quataert See also List of astronomy journals References Astronomy and Astrophysics Annual journals Astronomy journals Astrophysics journals Academic journals established in 1963 English-language journals Physics review journals
Annual Review of Astronomy and Astrophysics
[ "Physics", "Astronomy" ]
686
[ "Astrophysics journals", "Astronomy journals", "Works about astronomy", "Astrophysics" ]
11,200,529
https://en.wikipedia.org/wiki/Manufacturing%20engineering
Manufacturing engineering or production engineering is a branch of professional engineering that shares many common concepts and ideas with other fields of engineering such as mechanical, chemical, electrical, and industrial engineering. Manufacturing engineering requires the ability to plan the practices of manufacturing; to research and to develop tools, processes, machines, and equipment; and to integrate the facilities and systems for producing quality products with the optimum expenditure of capital. The manufacturing or production engineer's primary focus is to turn raw material into an updated or new product in the most effective, efficient and economical way possible. An example would be a company using computer-integrated technology to produce its product faster and with less human labor. Overview Manufacturing Engineering is based on core industrial engineering and mechanical engineering skills, adding important elements from mechatronics, commerce, economics, and business management. This field also deals with the integration of different facilities and systems for producing quality products (with optimal expenditure) by applying the principles of physics and the results of manufacturing systems studies, such as the following: Craft Putting-out system British factory system American system of manufacturing Mass production Computer integrated manufacturing Computer-aided technologies in manufacturing Just in time manufacturing Lean manufacturing Flexible manufacturing Mass customization Agile manufacturing Rapid manufacturing Prefabrication Ownership Fabrication Publication Additive manufacturing Manufacturing engineers develop and create physical artifacts, production processes, and technology. It is a very broad area which includes the design and development of products. Manufacturing engineering is considered to be a subdiscipline of industrial engineering/systems engineering and has very strong overlaps with mechanical engineering. Manufacturing engineers' success or failure directly impacts the advancement of technology and the spread of innovation. This field of manufacturing engineering emerged from the tool and die discipline in the early 20th century. It expanded greatly from the 1960s when industrialized countries introduced factories with: 1. Numerical control machine tools and automated systems of production. 2. Advanced statistical methods of quality control: These factories were pioneered by the American electrical engineer William Edwards Deming, who was initially ignored by his home country. The same methods of quality control later turned Japanese factories into world leaders in cost-effectiveness and production quality. 3. Industrial robots on the factory floor, introduced in the late 1970s: These computer-controlled welding arms and grippers could perform simple tasks such as attaching a car door quickly and flawlessly 24 hours a day. This cut costs and improved production speed. History The history of manufacturing engineering can be traced to factories in the mid-19th century USA and 18th century UK. Although large home production sites and workshops were established in China, ancient Rome, and the Middle East, the Venice Arsenal provides one of the first examples of a factory in the modern sense of the word. Founded in 1104 in the Republic of Venice several hundred years before the Industrial Revolution, this factory mass-produced ships on assembly lines using manufactured parts. 
The Venice Arsenal apparently produced nearly one ship every day and, at its height, employed 16,000 people. Many historians regard Matthew Boulton's Soho Manufactory (established in 1761 in Birmingham) as the first modern factory. Similar claims can be made for John Lombe's silk mill in Derby (1721), or Richard Arkwright's Cromford Mill (1771). The Cromford Mill was purpose-built to accommodate the equipment it held and to take the material through the various manufacturing processes. One historian, Jack Weatherford, contends that the first factory was in Potosí. The Potosí factory took advantage of the abundant silver that was mined nearby and processed silver ingot slugs into coins. British colonies in the 19th century built factories simply as buildings where a large number of workers gathered to perform hand labor, usually in textile production. This proved more efficient for the administration and distribution of materials to individual workers than earlier methods of manufacturing, such as cottage industries or the putting-out system. Cotton mills used inventions such as the steam engine and the power loom to pioneer the industrial factories of the 19th century, where precision machine tools and replaceable parts allowed greater efficiency and less waste. This experience formed the basis for the later studies of manufacturing engineering. Between 1820 and 1850, non-mechanized factories supplanted traditional artisan shops as the predominant form of manufacturing institution. Henry Ford further revolutionized the factory concept and thus manufacturing engineering in the early 20th century with the innovation of mass production. Highly specialized workers situated alongside a series of rolling ramps would build up a product such as (in Ford's case) an automobile. This concept dramatically decreased production costs for virtually all manufactured goods and brought about the age of consumerism. Modern developments Modern manufacturing engineering studies include all intermediate processes required for the production and integration of a product's components. Some industries, such as semiconductor and steel manufacturers, use the term "fabrication" for these processes. Automation is used in different processes of manufacturing such as machining and welding. Automated manufacturing refers to the application of automation to produce goods in a factory. The main advantages of automated manufacturing, realized with effective implementation of automation, include higher consistency and quality, reduction of lead times, simplification of production, reduced handling, improved workflow, and improved worker morale. Robotics is the application of mechatronics and automation to create robots, which are often used in manufacturing to perform tasks that are dangerous, unpleasant, or repetitive. These robots may be of any shape and size, but all are preprogrammed and interact physically with the world. To create a robot, an engineer typically employs kinematics (to determine the robot's range of motion) and mechanics (to determine the stresses within the robot). Robots are used extensively in manufacturing engineering. Robots allow businesses to save money on labor, perform tasks that are either too dangerous or too precise for humans to perform economically, and ensure better quality. Many companies employ assembly lines of robots, and some factories are so robotized that they can run by themselves. 
Outside the factory, robots have been employed in bomb disposal, space exploration, and many other fields. Robots are also sold for various residential applications. Education Manufacturing Engineers Manufacturing engineers focus on the design, development, and operation of integrated systems of production to obtain high-quality, economically competitive products. These systems may include material handling equipment, machine tools, robots, or even computers or networks of computers. Certification Programs Manufacturing engineers possess an associate's or bachelor's degree in engineering with a major in manufacturing engineering. The length of study for such a degree is usually two to five years, followed by five more years of professional practice to qualify as a professional engineer. Working as a manufacturing engineering technologist involves a more applications-oriented qualification path. Academic degrees for manufacturing engineers are usually the Associate or Bachelor of Engineering, [BE] or [BEng], and the Associate or Bachelor of Science, [BS] or [BSc]. For manufacturing technologists the required degrees are Associate or Bachelor of Technology [B.TECH] or Associate or Bachelor of Applied Science [BASc] in Manufacturing, depending upon the university. Master's degrees in engineering manufacturing include Master of Engineering [ME] or [MEng] in Manufacturing, Master of Science [M.Sc] in Manufacturing Management, Master of Science [M.Sc] in Industrial and Production Management, and Master of Science [M.Sc] as well as Master of Engineering [ME] in Design, which is a subdiscipline of manufacturing. Doctoral [PhD] or [DEng] level courses in manufacturing are also available, depending on the university. The undergraduate degree curriculum generally includes courses in physics, mathematics, computer science, project management, and specific topics in mechanical and manufacturing engineering. Initially, such topics cover most, if not all, of the subdisciplines of manufacturing engineering. Students then choose to specialize in one or more subdisciplines towards the end of their degree work. Syllabus The foundational curriculum for a bachelor's degree in manufacturing engineering or production engineering includes the subjects listed below. This syllabus is closely related to Industrial Engineering and Mechanical Engineering, but it differs by placing more emphasis on Manufacturing Science or Production Science. It includes the following areas: Mathematics (Calculus, Differential Equations, Statistics and Linear Algebra) Mechanics (Statics & Dynamics) Solid Mechanics Fluid Mechanics Materials Science Strength of Materials Fluid Dynamics Hydraulics Pneumatics HVAC (Heating, Ventilation & Air Conditioning) Heat Transfer Applied Thermodynamics Energy Conversion Instrumentation and Measurement Engineering Drawing (Drafting) & Engineering Design Engineering Graphics Mechanism Design including Kinematics and Dynamics Manufacturing Processes Mechatronics Circuit Analysis Lean Manufacturing Automation Reverse Engineering Quality Control CAD (Computer Aided Design) CAM (Computer Aided Manufacturing) Project Management A degree in Manufacturing Engineering typically differs from Mechanical Engineering in only a few specialized classes. Mechanical Engineering degrees focus more on the product design process and on complex products, which requires more mathematical expertise. 
Manufacturing engineering certification Certification and licensure: In some countries, "professional engineer" is the term for registered or licensed engineers who are permitted to offer their professional services directly to the public. Professional Engineer, abbreviated (PE - USA) or (PEng - Canada), is the designation for licensure in North America. To qualify for this license, a candidate needs a bachelor's degree from an ABET-recognized university in the USA, a passing score on a state examination, and four years of work experience usually gained via a structured internship. In the USA, more recent graduates have the option of dividing this licensure process into two segments. The Fundamentals of Engineering (FE) exam is often taken immediately after graduation and the Principles and Practice of Engineering exam is taken after four years of working in a chosen engineering field. Society of Manufacturing Engineers (SME) certification (USA): The SME administers qualifications specifically for the manufacturing industry. These are not degree level qualifications and are not recognized at the professional engineering level. The following discussion deals with qualifications in the USA only. Qualified candidates for the Certified Manufacturing Technologist Certificate (CMfgT) must pass a three-hour, 130-question multiple-choice exam. The exam covers math, manufacturing processes, manufacturing management, automation, and related subjects. Additionally, a candidate must have at least four years of combined education and manufacturing-related work experience. Certified Manufacturing Engineer (CMfgE) is an engineering qualification administered by the Society of Manufacturing Engineers, Dearborn, Michigan, USA. Candidates qualifying for a Certified Manufacturing Engineer credential must pass a four-hour, 180-question multiple-choice exam which covers more in-depth topics than the CMfgT exam. CMfgE candidates must also have eight years of combined education and manufacturing-related work experience, with a minimum of four years of work experience. Certified Engineering Manager (CEM). The Certified Engineering Manager Certificate is also designed for engineers with eight years of combined education and manufacturing experience. The test is four hours long and has 160 multiple-choice questions. The CEM certification exam covers business processes, teamwork, responsibility, and other management-related categories. Modern tools Many manufacturing companies, especially those in industrialized nations, have begun to incorporate computer-aided engineering (CAE) programs into their existing design and analysis processes, including 2D and 3D solid modeling computer-aided design (CAD). This method has many benefits, including easier and more exhaustive visualization of products, the ability to create virtual assemblies of parts, and ease of use in designing mating interfaces and tolerances. Other CAE programs commonly used by product manufacturers include product life cycle management (PLM) tools and analysis tools used to perform complex simulations. Analysis tools may be used to predict product response to expected loads, including fatigue life and manufacturability. These tools include finite element analysis (FEA), computational fluid dynamics (CFD), and computer-aided manufacturing (CAM). Using CAE programs, a mechanical design team can quickly and cheaply iterate the design process to develop a product that better meets cost, performance, and other constraints. 
No physical prototype need be created until the design nears completion, allowing hundreds or thousands of designs to be evaluated, instead of relatively few. In addition, CAE analysis programs can model complicated physical phenomena which cannot be solved by hand, such as viscoelasticity, complex contact between mating parts, or non-Newtonian flows. Just as manufacturing engineering is linked with other disciplines, such as mechatronics, multidisciplinary design optimization (MDO) is also being used with other CAE programs to automate and improve the iterative design process. MDO tools wrap around existing CAE processes, allowing product evaluation to continue even after the analyst goes home for the day. They also utilize sophisticated optimization algorithms to more intelligently explore possible designs, often finding better, innovative solutions to difficult multidisciplinary design problems. On the business side of manufacturing engineering, enterprise resource planning (ERP) tools can overlap with PLM tools and use connector programs with CAD tools to share drawings, sync revisions, and be the master for certain data used in the other modern tools above, like part numbers and descriptions. Manufacturing Engineering around the world Manufacturing engineering is an extremely important discipline worldwide. It goes by different names in different countries. In the United States and the continental European Union it is commonly known as Industrial Engineering and in the United Kingdom and Australia it is called Manufacturing Engineering. Subdisciplines Mechanics Mechanics, in the most general sense, is the study of forces and their effects on matter. Typically, engineering mechanics is used to analyze and predict the acceleration and deformation (both elastic and plastic) of objects under known forces (also called loads) or stresses. Subdisciplines of mechanics include: Statics, the study of non-moving bodies under known loads Dynamics (or kinetics), the study of how forces affect moving bodies Mechanics of materials, the study of how different materials deform under various types of stress Fluid mechanics, the study of how fluids react to forces Continuum mechanics, a method of applying mechanics that assumes that objects are continuous (rather than discrete) If the engineering project were to design a vehicle, statics might be employed to design the frame of the vehicle to evaluate where the stresses will be most intense. Dynamics might be used when designing the car's engine to evaluate the forces in the pistons and cams as the engine cycles. Mechanics of materials might be used to choose appropriate materials for the manufacture of the frame and engine. Fluid mechanics might be used to design a ventilation system for the vehicle or to design the intake system for the engine. Kinematics Kinematics is the study of the motion of bodies (objects) and systems (groups of objects), while ignoring the forces that cause the motion. The movement of a crane and the oscillations of a piston in an engine are both simple kinematic systems. The crane is a type of open kinematic chain, while the piston is part of a closed four-bar linkage. Engineers typically use kinematics in the design and analysis of mechanisms. Kinematics can be used to find the possible range of motion for a given mechanism, or, working in reverse, can be used to design a mechanism that has a desired range of motion. 
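The piston mentioned above is the standard slider-crank example in kinematics. The sketch below evaluates the textbook position equation x(θ) = r·cos θ + √(l² − r²·sin²θ) over one crank revolution and recovers the expected stroke of 2r; the crank and rod dimensions are made-up illustrative values.

```python
from math import cos, sin, sqrt, tau

def piston_position(theta: float, crank_r: float, rod_l: float) -> float:
    """Distance from crank center to piston pin for an inline slider-crank:
    x = r*cos(theta) + sqrt(l^2 - (r*sin(theta))^2)."""
    return crank_r * cos(theta) + sqrt(rod_l**2 - (crank_r * sin(theta))**2)

# Illustrative dimensions (assumed): 50 mm crank radius, 150 mm connecting rod
r, l = 0.050, 0.150
positions = [piston_position(tau * i / 360, r, l) for i in range(360)]
stroke = max(positions) - min(positions)
print(f"stroke = {stroke * 1000:.1f} mm")  # 100 mm, i.e. 2*r for this geometry
```

Sweeping θ like this is exactly the "range of motion" analysis described above, here applied to the closed chain rather than an open one.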
Drafting Drafting or technical drawing is the means by which manufacturers create instructions for manufacturing parts. A technical drawing can be a computer model or hand-drawn schematic showing all the dimensions necessary to manufacture a part, as well as assembly notes, a list of required materials, and other pertinent information. A U.S. engineer or skilled worker who creates technical drawings may be referred to as a drafter or draftsman. Drafting has historically been a two-dimensional process, but computer-aided design (CAD) programs now allow the designer to create in three dimensions. Instructions for manufacturing a part must be fed to the necessary machinery, either manually, through programmed instructions, or through the use of a computer-aided manufacturing (CAM) or combined CAD/CAM program. Optionally, an engineer may also manually manufacture a part using the technical drawings, but this is becoming increasingly rare with the advent of computer numerically controlled (CNC) manufacturing. Engineers primarily manufacture parts manually in the areas of applied spray coatings, finishes, and other processes that cannot economically or practically be done by a machine. Drafting is used in nearly every subdiscipline of mechanical and manufacturing engineering, and by many other branches of engineering and architecture. Three-dimensional models created using CAD software are also commonly used in finite element analysis (FEA) and computational fluid dynamics (CFD). Machine tools and metal fabrication Machine tools employ some sort of tool that does the cutting or shaping. All machine tools have some means of constraining the workpiece and providing a guided movement of the parts of the machine. Metal fabrication is the building of metal structures by cutting, bending, and assembling processes. Computer Integrated Manufacturing Computer-integrated manufacturing (CIM) is the manufacturing approach of using computers to control the entire production process. Computer-integrated manufacturing is used in the automotive, aviation, space, and shipbuilding industries. Mechatronics Mechatronics is an engineering discipline that deals with the convergence of electrical, mechanical and manufacturing systems. Such combined systems are known as electromechanical systems and are widespread. Examples include automated manufacturing systems; heating, ventilation and air-conditioning systems; and various aircraft and automobile subsystems. The term mechatronics is typically used to refer to macroscopic systems, but futurists have predicted the emergence of very small electromechanical devices. Already such small devices, known as microelectromechanical systems (MEMS), are used in automobiles to initiate the deployment of airbags, in digital projectors to create sharper images, and in inkjet printers to create nozzles for high-definition printing. In the future, it is hoped that such devices will be used in tiny implantable medical devices and to improve optical communication. Textile engineering Textile engineering courses deal with the application of scientific and engineering principles to the design and control of all aspects of fiber, textile, and apparel processes, products, and machinery. These include natural and man-made materials, interaction of materials with machines, safety and health, energy conservation, and waste and pollution control. 
Additionally, students are given experience in plant design and layout, machine and wet process design and improvement, and designing and creating textile products. Throughout the textile engineering curriculum, students take classes from other engineering disciplines, including mechanical, chemical, materials, and industrial engineering. Advanced composite materials Advanced composite materials (ACMs) are also known as advanced polymer matrix composites. These are generally characterized by unusually high-strength fibres with unusually high stiffness (modulus of elasticity) compared to other materials, bound together by weaker matrices. Advanced composite materials have broad, proven applications in the aircraft, aerospace, and sports equipment sectors. More specifically, ACMs are very attractive for aircraft and aerospace structural parts. Manufacturing ACMs is a multibillion-dollar industry worldwide. Composite products range from skateboards to components of the space shuttle. The industry can be generally divided into two basic segments, industrial composites and advanced composites. Employment Manufacturing engineering is just one facet of the engineering manufacturing industry. Manufacturing engineers enjoy improving the production process from start to finish. They have the ability to keep the whole production process in mind as they focus on a particular portion of the process. Successful students in manufacturing engineering degree programs are inspired by the notion of starting with a natural resource, such as a block of wood, and ending with a usable, valuable product, such as a desk, produced efficiently and economically. Manufacturing engineers are closely connected with engineering and industrial design efforts. Examples of major companies that employ manufacturing engineers in the United States include General Motors Corporation, Ford Motor Company, Chrysler, Boeing, Gates Corporation and Pfizer. Examples in Europe include Airbus, Daimler, BMW, Fiat, Navistar International, and Michelin Tyre. Manufacturing engineers are generally employed in the aerospace, automotive, chemical, computer, food processing, garment, pharmaceutical, pulp and paper, and toy industries, as well as in engineering management, industrial engineering, mechanical engineering, process engineering, and systems engineering. Frontiers of research Flexible manufacturing systems A flexible manufacturing system (FMS) is a manufacturing system in which there is some amount of flexibility that allows the system to react to changes, whether predicted or unpredicted. This flexibility is generally considered to fall into two categories, both of which have numerous subcategories. The first category, machine flexibility, covers the system's ability to be changed to produce new product types and the ability to change the order of operations executed on a part. The second category, called routing flexibility, consists of the ability to use multiple machines to perform the same operation on a part, as well as the system's ability to absorb large-scale changes, such as in volume, capacity, or capability. Most FMS installations comprise three main systems. The work machines, which are often automated CNC machines, are connected by a material handling system to optimize parts flow, and to a central control computer, which controls material movements and machine flow. 
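Routing flexibility, as just described, can be pictured with a small scheduling sketch. The Python snippet below is a toy dispatcher, not a real FMS controller: the machine names, operation times, and least-loaded greedy rule are all illustrative assumptions.

```python
from collections import defaultdict

# Each operation can run on any machine in its alternatives set;
# this is the "routing flexibility" of an FMS in miniature.
alternatives = {
    "milling":  ["cnc_1", "cnc_2"],
    "drilling": ["cnc_2", "cnc_3"],
    "turning":  ["cnc_1", "cnc_3"],
}
op_time = {"milling": 12.0, "drilling": 4.0, "turning": 8.0}  # minutes

load = defaultdict(float)  # accumulated busy time per machine

def route(operation):
    """Send the operation to the currently least-loaded capable machine."""
    machine = min(alternatives[operation], key=lambda m: load[m])
    load[machine] += op_time[operation]
    return machine

for op in ["milling", "turning", "drilling", "milling", "turning"]:
    print(f"{op:8s} -> {route(op)}")
print(dict(load))
```

If one machine is taken offline, removing it from the alternatives sets is all that is needed for production to continue, which is the practical payoff of routing flexibility.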
The main advantage of an FMS is its high flexibility in managing manufacturing resources, such as time and effort, in order to manufacture a new product. The best application of an FMS is found in the production of small sets of products of the kind otherwise made by mass production. Computer integrated manufacturing Computer-integrated manufacturing (CIM) in engineering is a method of manufacturing in which the entire production process is controlled by computer. Traditionally separated process methods are joined through a computer by CIM. This integration allows the processes to exchange information and to initiate actions. Through this integration, manufacturing can be faster and less error-prone, although the main advantage is the ability to create automated manufacturing processes. Typically CIM relies on closed-loop control processes based on real-time input from sensors. It is also known as flexible design and manufacturing. Friction stir welding Friction stir welding was discovered in 1991 by The Welding Institute (TWI). This innovative steady state (non-fusion) welding technique joins previously un-weldable materials, including several aluminum alloys. It may play an important role in the future construction of airplanes, potentially replacing rivets. Current uses of this technology include: welding the seams of the aluminum main space shuttle external tank, the Orion Crew Vehicle test article, Boeing Delta II and Delta IV Expendable Launch Vehicles and the SpaceX Falcon 1 rocket; armor plating for amphibious assault ships; and welding the wings and fuselage panels of the new Eclipse 500 aircraft from Eclipse Aviation, among a growing range of uses. Other areas of research are Product Design, MEMS (Micro-Electro-Mechanical Systems), Lean Manufacturing, Intelligent Manufacturing Systems, Green Manufacturing, Precision Engineering, Smart Materials, etc. See also Industrial engineering Mechanical engineering Automation Computer-aided design Manufacturing Industrial Revolution Mechatronics Robotics Associations Society of Manufacturing Engineers INFORMS Institute of Industrial Engineers Notes External links Institute of Manufacturing - UK Georgia Tech Manufacturing Institute Application note "Yield Learning Flow Provides Faster Production Ramp" Industrial engineering
Manufacturing engineering
[ "Engineering" ]
4,734
[ "Industrial engineering", "Manufacturing", "Mechanical engineering" ]
4,320,083
https://en.wikipedia.org/wiki/Clausius%20theorem
The Clausius theorem, also known as the Clausius inequality, states that for a thermodynamic system (e.g. heat engine or heat pump) exchanging heat with external thermal reservoirs and undergoing a thermodynamic cycle, the following inequality holds:

$$\oint \frac{\delta Q}{T_{surr}} \leq 0, \qquad \text{equivalently} \qquad \Delta S_{Res} = -\oint \frac{\delta Q}{T_{surr}} \geq 0,$$

where $\Delta S_{Res}$ is the total entropy change in the external thermal reservoirs (surroundings), $\delta Q$ is an infinitesimal amount of heat that is taken from the reservoirs and absorbed by the system ($\delta Q > 0$ if heat from the reservoirs is absorbed by the system, and $\delta Q < 0$ if heat is leaving from the system to the reservoirs) and $T_{surr}$ is the common temperature of the reservoirs at a particular instant in time. The closed integral is carried out along a thermodynamic process path from the initial/final state to the same initial/final state (thermodynamic cycle). In principle, the closed integral can start and end at an arbitrary point along the path.

The Clausius theorem or inequality obviously implies $\Delta S_{Res} \geq 0$ per thermodynamic cycle, meaning that the entropy of the reservoirs increases or does not change, and never decreases, per cycle. For multiple thermal reservoirs with different temperatures $T_i$ interacting with a thermodynamic system undergoing a thermodynamic cycle, the Clausius inequality can be written as the following for expression clarity:

$$\oint \sum_i \frac{\delta Q_i}{T_i} \leq 0,$$

where $\delta Q_i$ is an infinitesimal heat from the reservoir $i$ to the system. In the special case of a reversible process, the equality holds, and the reversible case is used to introduce the state function known as entropy. This is because in a cyclic process the variation of a state function is zero per cycle, so the fact that this integral is equal to zero per cycle in a reversible process implies that there is some function (entropy) whose infinitesimal change is $dS = \delta Q / T$.

The generalized "inequality of Clausius", $dS_{sys} \geq \frac{\delta Q}{T_{surr}}$, for $dS_{sys}$ as an infinitesimal change in entropy of the system (denoted by sys) under consideration, applies not only to cyclic processes, but to any process that occurs in a closed system. The Clausius inequality is a consequence of applying the second law of thermodynamics at each infinitesimal stage of heat transfer. The Clausius statement states that it is impossible to construct a device whose sole effect is the transfer of heat from a cool reservoir to a hot reservoir. Equivalently, heat spontaneously flows from a hot body to a cooler one, not the other way around.

History The Clausius theorem is a mathematical representation of the second law of thermodynamics. It was developed by Rudolf Clausius, who intended to explain the relationship between the heat flow in a system and the entropy of the system and its surroundings. Clausius developed this in his efforts to explain entropy and define it quantitatively. In more direct terms, the theorem gives us a way to determine if a cyclical process is reversible or irreversible. The Clausius theorem provides a quantitative formula for understanding the second law. Clausius was one of the first to work on the idea of entropy and is even responsible for giving it that name. What is now known as the Clausius theorem was first published in 1862 in Clausius' sixth memoir, "On the Application of the Theorem of the Equivalence of Transformations to Interior Work". Clausius sought to show a proportional relationship between entropy and the energy flow by heating (δQ) into a system. In a system, this heat energy can be transformed into work, and work can be transformed into heat through a cyclical process. 
Clausius writes that "The algebraic sum of all the transformations occurring in a cyclical process can only be less than zero, or, as an extreme case, equal to nothing." In other words, the equation

$$\oint \frac{\delta Q}{T} = 0,$$

with $\delta Q$ being energy flow into the system due to heating and $T$ being the absolute temperature of the body when that energy is absorbed, is found to be true for any process that is cyclical and reversible. Clausius then took this a step further and determined that the following relation must be found true for any cyclical process that is possible, reversible or not:

$$\oint \frac{\delta Q}{T_{surr}} \leq 0.$$

This relation is the "Clausius inequality", where $\delta Q$ is an infinitesimal amount of heat that is from the thermal reservoir interacting with the system and absorbed by the system ($\delta Q > 0$ if heat from the reservoir is absorbed by the system, and $\delta Q < 0$ if heat is leaving from the system to the reservoir) and $T_{surr}$ is the temperature of the reservoir at a particular instant in time. Now that this is known, there must be a relation developed between the Clausius inequality and entropy. The amount of entropy $S$ added to the system during the cycle is defined as

$$\Delta S = \oint \frac{\delta Q}{T}.$$

It has been determined, as stated in the second law of thermodynamics, that the entropy is a state function: it depends only upon the state that the system is in, and not what path the system took to get there. This is in contrast to the amount of energy added as heat ($\delta Q$) and as work ($\delta W$), which may vary depending on the path. In a cyclic process, therefore, the entropy of the system at the beginning of the cycle must equal the entropy at the end of the cycle, $\Delta S = 0$, regardless of whether the process is reversible or irreversible. In irreversible cases, net entropy is added to the reservoirs per thermodynamic cycle, while in reversible cases, no entropy is created or added to the reservoirs. If the amount of energy added by heating can be measured during the process, and the temperature can be measured during the process, the Clausius inequality can be used to determine whether the process is reversible or irreversible by carrying out the integration $\Delta S_{Res} = -\oint \frac{\delta Q}{T_{surr}}$: if the result is equal to zero then it is a reversible process, while if greater than zero then an irreversible process (less than zero is not possible). 
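That reversibility test reduces to simple arithmetic once a cycle is idealized as a finite list of heat exchanges. The Python sketch below (the heat and temperature values are invented for illustration) sums -Q/T over the exchanges of one cycle and classifies the cycle accordingly.

```python
# Each (Q, T) pair is one heat exchange with a reservoir during one cycle:
# Q > 0 means heat absorbed by the system, T is the reservoir temperature
# in kelvin. The numbers below are made up for the example.
def reservoir_entropy_change(exchanges):
    """Return -sum(Q/T): the entropy added to the reservoirs per cycle."""
    return -sum(q / t for q, t in exchanges)

# Carnot-like reversible cycle: Q_H/T_H exactly balances Q_C/T_C.
reversible = [(1000.0, 500.0), (-600.0, 300.0)]
# Irreversible cycle: same intake, but more heat dumped to the cold side.
irreversible = [(1000.0, 500.0), (-750.0, 300.0)]

for name, cycle in [("reversible", reversible), ("irreversible", irreversible)]:
    ds_res = reservoir_entropy_change(cycle)
    verdict = ("reversible" if abs(ds_res) < 1e-12
               else "irreversible" if ds_res > 0
               else "impossible")
    print(f"{name}: dS_res = {ds_res:+.4f} J/K -> {verdict}")
```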
Proof The temperature that enters in the denominator of the integrand in the Clausius inequality is the temperature of the external thermal reservoir with which the system exchanges heat. At each instant of the process, the system is in contact with an external reservoir. Because of the second law of thermodynamics, in each infinitesimal heat exchange process between the system and the reservoirs, the net change in entropy of the "universe", so to say, is $dS_{Total} = dS_{Sys} + dS_{Res} \geq 0$, where Sys and Res stand for System and Reservoir, respectively. In the proof of the Clausius theorem or inequality, a sign convention of heat is used: in the perspective of an object under consideration, when heat is absorbed by the object then the heat is positive, while when heat leaves the object then the heat is negative.

When the system takes heat from a hotter (hot) reservoir by an infinitesimal amount $\delta Q_1$ ($\delta Q_1 > 0$), for the net change in entropy to be positive or zero (i.e., non-negative) in this step (called step 1 here), so as to fulfill the second law of thermodynamics, the temperature of the hot reservoir $T_{Hot}$ needs to be equal to or greater than the temperature of the system at that instant. If the temperature of the system is given by $T_1$ at that instant, then $dS_{Sys,1} = \frac{\delta Q_1}{T_1}$ is the entropy change in the system at the instant, $dS_{Res,1} = -\frac{\delta Q_1}{T_{Hot}}$, and

$$dS_{Total,1} = \frac{\delta Q_1}{T_1} - \frac{\delta Q_1}{T_{Hot}} \geq 0$$

forces us to have $T_{Hot} \geq T_1$. This means the magnitude of the entropy "loss" from the hot reservoir, $\left|\frac{\delta Q_1}{T_{Hot}}\right|$, is equal to or less than the magnitude of the entropy "gain" ($\left|\frac{\delta Q_1}{T_1}\right|$) by the system, so the net entropy change is zero or positive.

Similarly, when the system at temperature $T_2$ expels heat in magnitude $-\delta Q_2$ ($\delta Q_2 < 0$) into a colder (cold) reservoir (at temperature $T_{Cold} \leq T_2$) in an infinitesimal step (called step 2), then again, for the second law of thermodynamics to hold, one would have, in a very similar manner,

$$dS_{Total,2} = \frac{\delta Q_2}{T_2} - \frac{\delta Q_2}{T_{Cold}} \geq 0,$$

which forces $T_{Cold} \leq T_2$. Here, the amount of heat "absorbed" by the system is given by $\delta Q_2 \leq 0$, signifying that heat is actually transferring from the system to the cold reservoir. The magnitude of the entropy gained by the cold reservoir, $\frac{|\delta Q_2|}{T_{Cold}}$, is equal to or greater than the magnitude of the entropy loss of the system, $\left|\frac{\delta Q_2}{T_2}\right|$, so the net entropy change is also zero or positive in this case.

Because the total change in entropy for the system is zero in a thermodynamic cyclic process, where all state functions of the system are reset or returned to initial values (the values at which the process starts) upon the completion of each cycle, if one adds all the infinitesimal steps of heat intake from and heat expulsion to the reservoirs, signified by the previous two inequalities, with the temperature of each reservoir at each instant given by $T_{surr}$, one gets

$$\oint \frac{\delta Q}{T_{surr}} \leq \oint dS_{Sys} = 0.$$

In particular,

$$\oint \frac{\delta Q}{T_{surr}} \leq 0,$$

which was to be proven (and is now proven). In summary (the inequality in the third statement below being guaranteed by the second law of thermodynamics, which is the basis of this calculation):

$$\oint \frac{\delta Q}{T_{surr}} = -\oint dS_{Res},$$
$$\oint dS_{Sys} = 0 \quad \text{(as a cyclic process)},$$
$$\oint dS_{Res} \geq 0.$$

For a reversible cyclic process, there is no generation of entropy in each of the infinitesimal heat transfer processes, since there is practically no temperature difference between the system and the thermal reservoirs (i.e., the system entropy change and the reservoir entropy change are equal in magnitude and opposite in sign at any instant), so the following equality holds:

$$\oint \frac{\delta Q}{T_{surr}} = -\oint dS_{Res} = \oint dS_{Sys} = 0 \quad \text{(as a cyclic process)}.$$

The Clausius inequality is a consequence of applying the second law of thermodynamics at each infinitesimal stage of heat transfer, and is thus in a sense a weaker condition than the second law itself.

Heat engine efficiency In the heat engine model with two thermal reservoirs (hot and cold reservoirs), the limit of the efficiency of any heat engine,

$$\eta = \frac{W}{Q_H},$$

where $W$ and $Q_H$ are the work done by the heat engine and the heat transferred from the hot thermal reservoir to the engine, respectively, can be derived from the first law of thermodynamics (i.e., the law of conservation of energy) together with the Clausius theorem or inequality. Respecting the abovementioned sign convention of heat, the first law over a cycle gives $W = Q_H + Q_C$, where $Q_C < 0$ is the heat transferred from the engine to the cold reservoir. The Clausius inequality can be expressed as $\frac{Q_H}{T_H} + \frac{Q_C}{T_C} \leq 0$, i.e., $Q_C \leq -\frac{T_C}{T_H} Q_H$. Substituting this inequality into the above equation results in

$$\eta = \frac{W}{Q_H} = 1 + \frac{Q_C}{Q_H} \leq 1 - \frac{T_C}{T_H}.$$

This is the limit of heat engine efficiencies, and the equality of this expression is what is called the Carnot efficiency: the efficiency of all reversible heat engines and the maximum efficiency of all heat engines. 
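To put numbers on the bound just derived: with the sign convention above (Q_C < 0), the Clausius inequality caps the work a two-reservoir engine can extract per cycle. The short Python sketch below uses illustrative reservoir temperatures and heat intake.

```python
# Worked numbers for the efficiency bound: Q_C <= -(T_C/T_H) * Q_H,
# hence W = Q_H + Q_C <= Q_H * (1 - T_C/T_H). Values are illustrative.
T_H, T_C = 600.0, 300.0   # reservoir temperatures, K
Q_H = 1000.0              # heat absorbed from the hot reservoir, J/cycle

eta_carnot = 1.0 - T_C / T_H
W_max = Q_H * eta_carnot
print(f"Carnot efficiency limit: {eta_carnot:.3f}")
print(f"Maximum work per cycle:  {W_max:.1f} J")

# Any claimed engine can be screened against the bound:
W_claimed = 600.0
print("claim is", "impossible" if W_claimed > W_max + 1e-9 else "allowed")
```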
See also Kelvin-Planck statement Carnot's theorem (thermodynamics) Carnot heat engine Introduction to entropy References Further reading Morton, A. S., and P.J. Beckett. Basic Thermodynamics. New York: Philosophical Library Inc., 1969. Print. Saad, Michel A. Thermodynamics for Engineers. Englewood Cliffs: Prentice-Hall, 1966. Print. Hsieh, Jui Sheng. Principles of Thermodynamics. Washington, D.C.: Scripta Book Company, 1975. Print. Zemansky, Mark W. Heat and Thermodynamics. 4th ed. New York: McGraw-Hill Book Company, 1957. Print. Clausius, Rudolf. The Mechanical Theory of Heat. London: Taylor and Francis, 1867. eBook External links Eponymous theorems of physics Laws of thermodynamics
Clausius theorem
[ "Physics", "Chemistry" ]
2,300
[ "Equations of physics", "Eponymous theorems of physics", "Thermodynamics", "Laws of thermodynamics", "Physics theorems" ]
4,322,402
https://en.wikipedia.org/wiki/Globally%20Harmonized%20System%20of%20Classification%20and%20Labelling%20of%20Chemicals
The Globally Harmonized System of Classification and Labelling of Chemicals (GHS) is an internationally agreed-upon standard managed by the United Nations that was set up to replace the assortment of hazardous material classification and labelling schemes previously used around the world. Core elements of the GHS include standardized hazard testing criteria, universal warning pictograms, and safety data sheets which provide users of dangerous goods relevant information with consistent organization. The system acts as a complement to the UN numbered system of regulated hazardous material transport. Implementation is managed through the UN Secretariat. Although adoption has taken time, as of 2017, the system has been enacted to significant extents in most major countries of the world. This includes the European Union, which has implemented the United Nations' GHS into EU law as the CLP Regulation, and United States Occupational Safety and Health Administration standards. History Before the GHS was created and implemented, there were many different regulations on hazard classification in use in different countries, resulting in multiple standards, classifications and labels for the same hazard. Given the $1.7 trillion per year international trade in chemicals requiring hazard classification, the cost of compliance with multiple systems of classification and labeling is significant. Developing a worldwide standard accepted as an alternative to local and regional systems presented an opportunity to reduce costs and improve compliance. The GHS development began at the 1992 Rio Conference on Environment and Development by the United Nations, also called Earth Summit (1992), when the International Labour Organization (ILO), the Organisation for Economic Co-operation and Development (OECD), various governments, and other stakeholders agreed that "A globally harmonized hazard classification and compatible labelling system, including material safety data sheets and easily understandable symbols, should be available if feasible, by the year 2000". The universal standard for all countries was to replace all the diverse classification systems; however, it is not a compulsory provision of any treaty. The GHS provides a common infrastructure for participating countries to use when implementing a hazard classification and Hazard Communication Standard. Hazard classification The GHS classification system defines and classifies the physical, health, and/or environmental hazards of a substance. Each category within the classifications has associated pictograms to be used when applied to a material or mixture. Physical hazards As of the 10th revision of the GHS, substances or articles are assigned to 17 different hazard classes largely based on the United Nations Dangerous Goods System. Explosives are assigned to one of four subcategories depending on the type of hazard they present, similar to the categories used in the UN Dangerous Goods System. Category 1 includes explosives not covered by the 6 Dangerous Goods categories. Flammable gases are assigned to one of 3 categories based on reactivity: Category 1A includes extremely flammable gases ignitable at 20 °C and standard pressure of 101.3 kPa, pyrophoric gases, and chemically unstable gases that may react in the absence of oxygen. Category 1B gases meet the flammability criteria of 1A, but are not pyrophoric or chemically unstable and have a lower flammability limit in air. 
Category 2 includes gases which do not meet the above criteria but otherwise are flammable at 20 °C and standard pressure. Aerosols and chemicals under pressure are categorized into one of 3 categories, but may be additionally classified as explosives or flammable gases if material properties match the previous classifications. From category 1 to 3, aerosols are classified as most to least flammable. All aerosols under these categories carry a bursting hazard. Oxidizing gases are gaseous substances which contribute to the combustion of other materials more than air would. There is only one category of oxidizing gases. Gases under pressure are categorized as compressed, liquefied, refrigerated, or dissolved gases, all of which may explode when heated or (in the case of refrigerated gases) cause cryogenic injury, such as frostbite. Flammable liquids are categorized by flammability, from Category 1 with flash point < 23 °C and initial boiling point ≤ 35 °C to Category 4 with flash point > 60 °C and ≤ 93 °C. Flammable solids are classified as solid substances which are readily combustible or may contribute to a fire through friction, and ignitable metal powders. They are placed into Category 1 if a fire is not stopped by wetting the substance, and Category 2 if wetting stops the fire for at least 4 minutes. Self-reactive substances and mixtures are liable to detonate or combust without the participation of air and are placed into 7 categories from A to G with decreasing reactivity. Pyrophoric liquids are liable to ignite within 5 minutes of coming into contact with air. Pyrophoric solids follow the same criteria as pyrophoric liquids. Self-heating substances differ from self-reactive substances in that they ignite only in large quantities (kilograms) and after a long duration of time (hours or days). Category 1 is reserved for substances which self-heat even in a small (25 mm) sample cube, and all other self-heating substances that only heat in large quantities are listed under Category 2. Substances and mixtures which, in contact with water, emit flammable gases are categorized from 1 to 3 based on the ignitability of the gas emitted. Oxidizing liquids contribute to the combustion of other materials and are categorized from 1 to 3 in decreasing oxidizing potential. Oxidizing solids follow the same criteria as oxidizing liquids. Organic peroxides are unstable substances or mixtures and may be derivatives of hydrogen peroxide. They are categorized from A to G based on inherent ability to explode or otherwise combust. Corrosive to metals materials may damage or destroy metals, based on tests done on aluminum and steel. The corrosion rate must be greater than 6.25 mm/year on either material to qualify under this classification. Desensitized explosives are materials that would otherwise be classified as explosive, but have been stabilized, or phlegmatized, to be exempted from said class. Health hazards Acute toxicity includes five GHS categories from which the appropriate elements relevant to transport, consumer, worker and environment protection can be selected. Substances are assigned to one of the five toxicity categories on the basis of LD50 (oral, dermal) or LC50 (inhalation). Skin corrosion means the production of irreversible damage to the skin following the application of a test substance for up to 4 hours. Substances and mixtures in this hazard class are assigned to a single harmonized corrosion category. 
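Returning to the acute toxicity scheme above: once an LD50 is known, the category assignment is a simple threshold lookup. In the Python sketch below, the cutoffs are the commonly cited GHS oral LD50 boundaries (5, 50, 300, 2000 and 5000 mg/kg); treat them as an assumption to be checked against the current GHS revision, and note that the fifth category is not adopted in every jurisdiction.

```python
# Sketch of category assignment for acute oral toxicity.
# Cutoffs are commonly cited GHS oral LD50 boundaries in mg/kg body
# weight; verify against the current GHS revision before relying on
# them. Category 5 is not adopted in all jurisdictions.
ORAL_LD50_CUTOFFS = [(5, 1), (50, 2), (300, 3), (2000, 4), (5000, 5)]

def acute_oral_category(ld50_mg_per_kg):
    """Return the GHS acute oral toxicity category, or None if unclassified."""
    for cutoff, category in ORAL_LD50_CUTOFFS:
        if ld50_mg_per_kg <= cutoff:
            return category
    return None  # above 5000 mg/kg: not classified for acute oral toxicity

for ld50 in (2, 40, 250, 1500, 4000, 9000):
    print(f"LD50 = {ld50:>5} mg/kg -> category {acute_oral_category(ld50)}")
```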
Skin irritation means the production of reversible damage to the skin following the application of a test substance for up to 4 hours. Substances and mixtures in this hazard class are assigned to a single irritant category. For those authorities, such as pesticide regulators, wanting more than one designation for skin irritation, an additional mild irritant category is provided. Serious eye damage means the production of tissue damage in the eye, or serious physical decay of vision, following application of a test substance to the front surface of the eye, which is not fully reversible within 21 days of application. Substances and mixtures in this hazard class are assigned to a single harmonized category. Eye irritation means changes in the eye following the application of a test substance to the front surface of the eye, which are fully reversible within 21 days of application. Substances and mixtures in this hazard class are assigned to a single harmonized hazard category. For authorities, such as pesticide regulators, wanting more than one designation for eye irritation, one of two subcategories can be selected, depending on whether the effects are reversible in 21 or 7 days. Respiratory sensitizer means a substance that induces hypersensitivity of the airways following inhalation of the substance. Substances and mixtures in this hazard class are assigned to one hazard category. Skin sensitizer means a substance that will induce an allergic response following skin contact. The definition for "skin sensitizer" is equivalent to "contact sensitizer". Substances and mixtures in this hazard class are assigned to one hazard category. Germ cell mutagenicity means an agent giving rise to an increased occurrence of mutations in populations of cells and/or organisms. Substances and mixtures in this hazard class are assigned to one of two hazard categories. Category 1 has two subcategories. Carcinogenicity means a chemical substance or a mixture of chemical substances that induce cancer or increase its incidence. Substances and mixtures in this hazard class are assigned to one of two hazard categories. Category 1 has two subcategories. Reproductive toxicity includes adverse effects on sexual function and fertility in adult males and females, as well as developmental toxicity in offspring. Substances and mixtures with reproductive and/or developmental effects are assigned to one of two hazard categories, 'known or presumed' and 'suspected'. Category 1 has two subcategories for reproductive and developmental effects. Materials which cause concern for the health of breastfed children have a separate category: effects on or via Lactation. Specific target organ toxicity (STOT) category distinguishes between single and repeated exposure for Target Organ Effects. All significant health effects, not otherwise specifically included in the GHS, that can impair function, both reversible and irreversible, immediate and/or delayed are included in the non-lethal target organ/systemic toxicity class (TOST). Narcotic effects and respiratory tract irritation are considered to be target organ systemic effects following a single exposure. Substances and mixtures of the single exposure target organ toxicity hazard class are assigned to one of three hazard categories. Substances and mixtures of the repeated exposure target organ toxicity hazard class are assigned to one of two hazard categories. 
Aspiration hazard includes severe acute effects such as chemical pneumonia, varying degrees of pulmonary injury or death following aspiration. Aspiration is the entry of a liquid or solid directly through the oral or nasal cavity, or indirectly from vomiting, into the trachea and lower respiratory system. Substances and mixtures of this hazard class are assigned to one of two hazard categories on the basis of viscosity. Environmental hazards Acute aquatic toxicity indicates the intrinsic property of a material of causing injury to an aquatic organism in a short-term exposure. Substances and mixtures of this hazard class are assigned to one of three toxicity categories on the basis of acute toxicity data: LC50 (fish) or EC50 (crustacean) or ErC50 (for algae or other aquatic plants). These acute toxicity categories may be subdivided or extended for certain sectors. Chronic aquatic toxicity indicates the potential or actual properties of a material to cause adverse effects to aquatic organisms during exposures that are determined in relation to the lifecycle of the organism. Substances and mixtures in this hazard class are assigned to one of four toxicity categories on the basis of acute data and environmental fate data: LC50 (fish), EC50 (crustacea), ErC50 (for algae or other aquatic plants), and degradation or bioaccumulation. Ozone Depleting Potential indicates the ability of a material to damage the ozone layer, as determined under the Montreal Protocol. Substances and mixtures bearing this quality are assigned the hazard statement H420. Classification of mixtures The GHS approach to the classification of mixtures for health and environmental hazards uses a tiered approach and is dependent upon the amount of information available for the mixture itself and for its components. The principles developed for the classification of mixtures draw on existing systems, such as the European Union (EU) system for classification of preparations laid down in Directive 1999/45/EC. The process for the classification of mixtures is based on the following steps (a sketch of the additivity calculation used in the third step follows this list):
1. Where toxicological or ecotoxicological test data are available for the mixture itself, the classification of the mixture will be based on that data;
2. Where test data are not available for the mixture itself, then the appropriate bridging principles should be applied, which use test data for components and/or similar mixtures;
3. If (1) test data are not available for the mixture itself, and (2) the bridging principles cannot be applied, then use the calculation or cutoff values described in the specific endpoint to classify the mixture.
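For the calculation tier in the third step, GHS acute toxicity uses an additivity formula, 100/ATE_mix = Σ C_i/ATE_i, summed over the ingredients with known acute toxicity estimates (ATE), where C_i is each ingredient's concentration in percent. The Python sketch below applies it to invented ingredient data; real classifications involve further rules (for example, the handling of ingredients of unknown toxicity) that are omitted here.

```python
# Additivity formula for the acute toxicity of a mixture:
# 100 / ATE_mix = sum(C_i / ATE_i) over the relevant ingredients,
# with C_i the concentration (%) and ATE_i the acute toxicity
# estimate of ingredient i. Ingredient data are invented.
def mixture_ate(ingredients):
    """ingredients: iterable of (concentration_percent, ate_mg_per_kg)."""
    denominator = sum(c / ate for c, ate in ingredients)
    return 100.0 / denominator

# 10% of a fairly toxic substance, 30% of a mildly toxic one; the
# remaining 60% is assumed non-toxic and is left out of the sum.
ate_mix = mixture_ate([(10.0, 50.0), (30.0, 2000.0)])
print(f"ATE_mix is approximately {ate_mix:.0f} mg/kg")
```

With these numbers the mixture's estimate lands near 465 mg/kg, which under the oral cutoffs sketched earlier would correspond to category 4.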
Substitute substances Companies are encouraged to replace hazardous substances with substances featuring a reduced health risk. As an aid in assessing possible substitute substances, the Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA) has developed the Column Model. On the basis of just a small amount of information on a product, substitute substances can be evaluated with the support of this table. The current version from 2020 already includes the amendments of the 12th CLP Adaptation Regulation 2019/521. Testing requirements The GHS generally defers to the United States Environmental Protection Agency and the OECD to provide and verify toxicity testing requirements for substances or mixtures. Overall, the GHS criteria for determining health and environmental hazards are test method neutral, allowing different approaches as long as they are scientifically sound and validated according to international procedures and criteria already referred to in existing systems. Test data already generated for the classification of chemicals under existing systems should be accepted when classifying these chemicals under the GHS, thereby avoiding duplicative testing and the unnecessary use of test animals. For physical hazards, the test criteria are linked to specific UN test methods. Hazard communication Per GHS, hazards need to be communicated: in more than one form (for example, placards, labels or Safety Data Sheets), with hazard statements and precautionary statements, in an easily comprehensible and standardized manner, consistent with other statements to reduce confusion, and taking into account all existing research and any new evidence. Comprehensibility is a significant consideration in GHS implementation. The GHS Purple Book includes a comprehensibility-testing instrument in Annex 6. Factors that were considered in developing the GHS communication tools include: different philosophies in existing systems on how and what should be communicated; language differences around the world; ability to translate phrases meaningfully; and ability to understand and appropriately respond to pictograms. GHS label elements The standardized label elements included in the GHS are: Symbols (GHS hazard pictograms): Convey health, physical and environmental hazard information, assigned to a GHS hazard class and category. Pictograms include the harmonized hazard symbols plus other graphic elements, such as borders, background patterns or colors. A dedicated health hazard pictogram is used for, among other hazards, carcinogens and substances which have target organ toxicity. Also, harmful chemicals and irritants are marked with an exclamation mark, replacing the European saltire. Pictograms will have a black symbol on a white background with a red diamond frame. For transport, pictograms will have the background, symbol and colors currently used in the UN Recommendations on the Transport of Dangerous Goods. Where a transport pictogram appears, the GHS pictogram for the same hazard should not appear. Signal word: "Danger" or "Warning" will be used to emphasize hazards and indicate the relative level of severity of the hazard, assigned to a GHS hazard class and category. Some lower level hazard categories do not use signal words. Only one signal word, corresponding to the class of the most severe hazard, should be used on a label. GHS hazard statements: Standard phrases assigned to a hazard class and category that describe the nature of the hazard. An appropriate statement for each GHS hazard should be included on the label for products possessing more than one hazard. The additional label elements included in the GHS are: GHS precautionary statements: Measures to minimize or prevent adverse effects. There are four types of precautionary statements covering: prevention, response in cases of accidental spillage or exposure, storage, and disposal. The precautionary statements have been linked to each GHS hazard statement and type of hazard. Product identifier (ingredient disclosure): Name or number used for a hazardous product on a label or in the SDS. The GHS label for a substance should include the chemical identity of the substance. 
For mixtures, the label should include the chemical identities of all ingredients that contribute to acute toxicity, skin corrosion or serious eye damage, germ cell mutagenicity, carcinogenicity, reproductive toxicity, skin or respiratory sensitization, or Specific Target Organ Toxicity (STOT), when these hazards appear on the label. Supplier identification: The name, address and telephone number should be provided on the label. Supplemental information: Non-harmonized information on the container of a hazardous product that is not required or specified under the GHS. Supplemental information may be used to provide further detail that does not contradict or cast doubt on the validity of the standardized hazard information. GHS label format The GHS includes directions for application of the hazard communication elements on the label. In particular, it specifies for each hazard, and for each class within the hazard, what signal word, pictogram, and hazard statement should be used. The GHS hazard pictograms, signal words and hazard statements should be located together on the label. The actual label format or layout is not specified. National authorities may choose to specify where information should appear on the label, or to allow supplier discretion in the placement of GHS information. The diamond shape of GHS pictograms resembles the shape of signs mandated for use by the United States Department of Transportation. To address this, in cases where a pictogram would be required by both the Department of Transportation and the GHS indicating the same hazard, only the Transportation pictogram is to be used. Safety data sheet Safety data sheets or SDS are specifically aimed at use in the workplace. Safety data sheets take precedence over and are intended to replace the previously used material safety data sheets (MSDS), which did not have a standard layout and section format. An SDS should provide comprehensive information about the chemical product that allows employers and workers to obtain concise, relevant and accurate information with respect to the hazards, uses and risk management of the chemical product in the workplace. In contrast to the differences found between manufacturers' MSDS, SDS have specific requirements to include the following headings in the order specified:
1. Identification
2. Hazard(s) identification
3. Composition/information on ingredients
4. First-aid measures
5. Fire-fighting measures
6. Accidental release measures
7. Handling and storage
8. Exposure control/personal protection
9. Physical and chemical properties
10. Chemical stability and reactivity
11. Toxicological information
12. Ecological information
13. Disposal considerations
14. Transport information
15. Regulatory information
16. Other information
The primary difference between the GHS and previous international industry recommendations is that sections 2 and 3 have been reversed in order. The GHS SDS headings, sequence, and content are similar to the ISO, European Union and ANSI MSDS/SDS requirements. A table comparing the content and format of a MSDS/SDS versus the GHS SDS is provided in Appendix A of the U.S. Occupational Safety and Health Administration (OSHA) GHS guidance. Training Current training procedures for hazard communication in the United States are more detailed than the GHS training recommendations. Training is a key component of the overall GHS approach. 
Employees and emergency responders must be trained on all program elements, though there has been confusion among these groups of workers in the implementation process regarding which training elements have changed and are required to maintain regulatory compliance. Implementation The United Nations goal was for broad international adoption of the system, and as of 2017, the GHS had been adopted to varying degrees in many major countries. Smaller economies continue to develop regulations to implement the GHS throughout the 2020s. GHS adoption by country Australia: In 2012, adopted regulation for GHS implementation, setting January 1, 2017 as the GHS implementation deadline. Brazil: Established an implementation deadline of February 2011 for substances and June 2015 for mixtures. Canada: GHS was incorporated into WHMIS 2015 as of February 2015. In 2023 the WHMIS requirements were updated to align with the 7th revised edition and certain provisions of the 8th revised edition of the Globally Harmonized System of Classification and Labelling of Chemicals (GHS). China: Established implementation deadline of December 1, 2011. Colombia: Following the issuance of Resolution 0773/2021 on April 9, 2021, Colombia enforced the implementation of the GHS, with deadlines taking effect April 7, 2023 for pure substances and with mixtures following the same protocols the following year. European Union: The deadline for substance classification was December 1, 2010 and for mixtures it was June 1, 2015, per regulation for GHS implementation on December 31, 2008. Japan: Established deadline of December 31, 2010 for products containing one of 640 designated substances. South Korea: Established the GHS implementation deadline of July 1, 2013. Malaysia: Deadline for substances and mixtures was April 17, 2015, per its Industry Code of Practice on Chemicals Classification and Hazard Communication (ICOP) of 16 April 2014. Mexico: GHS has been incorporated into the Official Mexican Standard as of 2015. Pakistan: The country does not have a single streamlined system for chemical labeling, although there are many rules in place. The Pakistani government has requested assistance in developing future regulations to implement GHS. Philippines: The deadline for substances and mixtures was March 14, 2015, per Guidelines for the Implementation of GHS in Chemical Safety Program in the Workplace in 2014. Russian Federation: GHS was approved for optional use as of August 2014. Manufacturers could continue using non-GHS Russian labels through 2021, after which compliance with the system is compulsory. Taiwan: Full GHS implementation was scheduled for 2016 for all hazardous chemicals with physical and health hazards. Thailand: The deadline for substances was March 13, 2013. The deadline for mixtures was March 13, 2017. Turkey: Published Turkish CLP regulation and SDS regulation in 2013 and 2014 respectively. The deadline for substance classification was June 1, 2015; for mixtures, it was June 1, 2016. United Kingdom: Implemented under EU directive by REACH regulations; this may be subject to change due to Brexit. United States: GHS compliant labels and SDSs are required for many applications including laboratory chemicals, commercial cleaning agents, and other workplace cases regulated by previous US Occupational Safety and Health Administration (OSHA) standards. 
The first widespread implementation set by OSHA was on March 26, 2012, requiring manufacturers to adopt the standard by June 1, 2015, and product distributors to adopt the standard by December 1, 2015. Workers had to be trained by December 1, 2013. In the US, GHS labels are not required on most hazardous consumer-grade products (e.g. laundry detergent); however, some manufacturers that also sell the same product in Canada or Europe include GHS-compliant warnings on these products as well. The US Consumer Product Safety Commission is not opposed to this and has been evaluating the possibility of incorporating elements of GHS into future consumer regulations. Uruguay: Regulation approved in 2011, setting December 31, 2012 as the deadline for pure substances and December 31, 2017 for compounds. Vietnam: The deadline for substances was March 30, 2014. The deadline for mixtures was March 30, 2016. See also ISO 7010 NFPA Toxicity category rating UN number GHS hazard pictograms References Bibliography External links About the GHS Globally Harmonized System of Classification and Labelling of Chemicals (GHS) - Ninth revised edition Chemical safety Chemical classification Globally Harmonized System Hazard analysis International standards
Globally Harmonized System of Classification and Labelling of Chemicals
[ "Chemistry", "Engineering" ]
4,781
[ "Chemical accident", "Safety engineering", "Hazard analysis", "nan", "Globally Harmonized System", "Chemical safety" ]
4,323,521
https://en.wikipedia.org/wiki/J-integral
The J-integral represents a way to calculate the strain energy release rate, or work (energy) per unit fracture surface area, in a material. The theoretical concept of J-integral was developed in 1967 by G. P. Cherepanov and independently in 1968 by James R. Rice, who showed that an energetic contour path integral (called J) was independent of the path around a crack. Experimental methods were developed using the integral that allowed the measurement of critical fracture properties in sample sizes that are too small for Linear Elastic Fracture Mechanics (LEFM) to be valid. These experiments allow the determination of fracture toughness from the critical value of fracture energy JIc, which defines the point at which large-scale plastic yielding during propagation takes place under mode I loading. The J-integral is equal to the strain energy release rate for a crack in a body subjected to monotonic loading. This is generally true, under quasistatic conditions, only for linear elastic materials. For materials that experience small-scale yielding at the crack tip, J can be used to compute the energy release rate under special circumstances such as monotonic loading in mode III (antiplane shear). The strain energy release rate can also be computed from J for pure power-law hardening plastic materials that undergo small-scale yielding at the crack tip. The quantity J is not path-independent for monotonic mode I and mode II loading of elastic-plastic materials, so only a contour very close to the crack tip gives the energy release rate. Also, Rice showed that J is path-independent in plastic materials when there is no non-proportional loading. Unloading is a special case of this, but non-proportional plastic loading also invalidates the path-independence. Such non-proportional loading is the reason for the path-dependence for the in-plane loading modes on elastic-plastic materials.

Two-dimensional J-integral The two-dimensional J-integral was originally defined as

$$J := \int_\Gamma \left( W\, dx_2 - \mathbf{t} \cdot \frac{\partial \mathbf{u}}{\partial x_1}\, ds \right),$$

where $W(x_1, x_2)$ is the strain energy density, $x_1, x_2$ are the coordinate directions, $\mathbf{t} = [\boldsymbol\sigma]\,\mathbf{n}$ is the surface traction vector, $\mathbf{n}$ is the normal to the curve $\Gamma$, $[\boldsymbol\sigma]$ is the Cauchy stress tensor, and $\mathbf{u}$ is the displacement vector. The strain energy density is given by

$$W = \int_0^{\varepsilon} \sigma_{ij}\, d\varepsilon_{ij}.$$

The J-integral around a crack tip is frequently expressed in a more general form (and in index notation) as

$$J_i := \lim_{\varepsilon \to 0} \int_{\Gamma_\varepsilon} \left( W n_i - n_j \sigma_{jk}\, \frac{\partial u_k}{\partial x_i} \right) d\Gamma,$$

where $J_i$ is the component of the J-integral for crack opening in the $x_i$ direction and $\Gamma_\varepsilon$ is a small contour around the crack tip. Using Green's theorem we can show that this integral is zero when the boundary is closed and encloses a region that contains no singularities and is simply connected. If the faces of the crack do not have any surface tractions on them then the J-integral is also path independent. Rice also showed that the value of the J-integral represents the energy release rate for planar crack growth. The J-integral was developed because of the difficulties involved in computing the stress close to a crack in a nonlinear elastic or elastic-plastic material. Rice showed that if monotonic loading was assumed (without any plastic unloading) then the J-integral could be used to compute the energy release rate of plastic materials too.

Proof that the J-integral is zero over a closed path To show the path independence of the J-integral, we first have to show that the value of $J_1$ is zero over a closed contour in a simply connected domain. 
Let us just consider the expression for $J_1$, which is

$$J_1 = \int_\Gamma \left( W n_1 - n_j \sigma_{jk}\, \frac{\partial u_k}{\partial x_1} \right) d\Gamma.$$

We can write this as

$$J_1 = \int_\Gamma W n_1\, d\Gamma - \int_\Gamma n_j\, \sigma_{jk}\, \frac{\partial u_k}{\partial x_1}\, d\Gamma.$$

From Green's theorem (or the two-dimensional divergence theorem) we have

$$\int_\Gamma n_j f_j\, d\Gamma = \int_A \frac{\partial f_j}{\partial x_j}\, dA.$$

Using this result we can express $J_1$ as

$$J_1 = \int_A \left[ \frac{\partial W}{\partial x_1} - \frac{\partial}{\partial x_j}\left( \sigma_{jk}\, \frac{\partial u_k}{\partial x_1} \right) \right] dA,$$

where $A$ is the area enclosed by the contour $\Gamma$. Now, if there are no body forces present, equilibrium (conservation of linear momentum) requires that

$$\frac{\partial \sigma_{jk}}{\partial x_j} = 0.$$

Also,

$$\frac{\partial}{\partial x_j}\left( \sigma_{jk}\, \frac{\partial u_k}{\partial x_1} \right) = \frac{\partial \sigma_{jk}}{\partial x_j}\, \frac{\partial u_k}{\partial x_1} + \sigma_{jk}\, \frac{\partial^2 u_k}{\partial x_j\, \partial x_1} = \sigma_{jk}\, \frac{\partial^2 u_k}{\partial x_j\, \partial x_1}.$$

Therefore,

$$J_1 = \int_A \left[ \frac{\partial W}{\partial x_1} - \sigma_{jk}\, \frac{\partial^2 u_k}{\partial x_j\, \partial x_1} \right] dA.$$

From the balance of angular momentum we have $\sigma_{jk} = \sigma_{kj}$. Hence, with $\varepsilon_{jk} = \tfrac{1}{2}\left( \partial u_j/\partial x_k + \partial u_k/\partial x_j \right)$,

$$\sigma_{jk}\, \frac{\partial^2 u_k}{\partial x_j\, \partial x_1} = \sigma_{jk}\, \frac{\partial \varepsilon_{jk}}{\partial x_1}.$$

The J-integral may then be written as

$$J_1 = \int_A \left[ \frac{\partial W}{\partial x_1} - \sigma_{jk}\, \frac{\partial \varepsilon_{jk}}{\partial x_1} \right] dA.$$

Now, for an elastic material the stress can be derived from the stored energy function $W$ using

$$\sigma_{jk} = \frac{\partial W}{\partial \varepsilon_{jk}}.$$

Then, if the elastic modulus tensor is homogeneous, using the chain rule of differentiation,

$$\frac{\partial W}{\partial x_1} = \frac{\partial W}{\partial \varepsilon_{jk}}\, \frac{\partial \varepsilon_{jk}}{\partial x_1} = \sigma_{jk}\, \frac{\partial \varepsilon_{jk}}{\partial x_1}.$$

Therefore, we have $J_1 = 0$ for a closed contour enclosing a simply connected region without any elastic inhomogeneity, such as voids and cracks.

Proof that the J-integral is path-independent Consider the contour $\Gamma = \Gamma_1 + \Gamma^{+} + \Gamma_2 + \Gamma^{-}$, where $\Gamma_1$ and $\Gamma_2$ are two contours around the crack tip and $\Gamma^{+}$, $\Gamma^{-}$ are the segments along the crack faces joining them. Since this contour is closed and encloses a simply connected region, the J-integral around the contour is zero, i.e.

$$J = J_{\Gamma_1} + J_{\Gamma^{+}} - J_{\Gamma_2} + J_{\Gamma^{-}} = 0,$$

assuming that counterclockwise integrals around the crack tip have positive sign. Now, since the crack surfaces are parallel to the $x_1$ axis, the normal component $n_1 = 0$ on these surfaces. Also, since the crack surfaces are traction free, $\mathbf{t} = 0$. Therefore, $J_{\Gamma^{+}} = J_{\Gamma^{-}} = 0$. Therefore,

$$J_{\Gamma_1} = J_{\Gamma_2},$$

and the J-integral is path independent.

J-integral and fracture toughness For isotropic, perfectly brittle, linear elastic materials, the J-integral can be directly related to the fracture toughness if the crack extends straight ahead with respect to its original orientation. For plane strain, under Mode I loading conditions, this relation is

$$J_{Ic} = G_{Ic} = K_{Ic}^2\, \frac{1 - \nu^2}{E},$$

where $G_{Ic}$ is the critical strain energy release rate, $K_{Ic}$ is the fracture toughness in Mode I loading, $\nu$ is the Poisson's ratio, and $E$ is the Young's modulus of the material. For Mode II loading, the relation between the J-integral and the mode II fracture toughness ($K_{IIc}$) is

$$J_{IIc} = G_{IIc} = K_{IIc}^2\, \frac{1 - \nu^2}{E}.$$

For Mode III loading, the relation is

$$J_{IIIc} = G_{IIIc} = K_{IIIc}^2\, \frac{1 + \nu}{E}.$$
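The toughness relations above can be checked numerically against the original contour definition of J. The Python sketch below integrates J = ∮ (W dx₂ − t·∂u/∂x₁ ds) around a circle centered on a mode III crack tip, using the standard antiplane-shear singular field, and compares the result with K_III²(1+ν)/E = K_III²/(2μ). The material constants and contour radius are arbitrary illustrative values; this is a verification sketch, not a general-purpose J evaluator.

```python
import numpy as np

# Mode III (antiplane shear) crack-tip field and a numerical evaluation
# of J on a circle of radius R around the tip. Constants are illustrative.
E, nu = 200.0e9, 0.3
mu = E / (2.0 * (1.0 + nu))   # shear modulus
K3 = 2.0e6                    # mode III stress intensity, Pa*sqrt(m)
R = 1.0e-3                    # contour radius, m

theta = np.linspace(-np.pi, np.pi, 20001)
s13 = -K3 / np.sqrt(2.0 * np.pi * R) * np.sin(theta / 2.0)
s23 =  K3 / np.sqrt(2.0 * np.pi * R) * np.cos(theta / 2.0)

W = (s13**2 + s23**2) / (2.0 * mu)     # strain energy density
n1, n2 = np.cos(theta), np.sin(theta)  # outward normal of the circle
t3 = s13 * n1 + s23 * n2               # traction on the contour
du3_dx1 = s13 / mu                     # since s13 = mu * du3/dx1

# On the circle, ds = R dtheta and dx2 = n1 * ds; trapezoidal rule in theta.
integrand = (W * n1 - t3 * du3_dx1) * R
J_numeric = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(theta))
J_exact = K3**2 * (1.0 + nu) / E       # equals K3**2 / (2*mu)

print(f"J from contour integration: {J_numeric:.6e} J/m^2")
print(f"J from K_III^2 (1+nu)/E:    {J_exact:.6e} J/m^2")
```

Shrinking or enlarging R leaves the numerical result unchanged, which is exactly the path independence proved above.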
Elastic-plastic materials and the HRR solution Hutchinson, Rice and Rosengren subsequently showed that J characterizes the singular stress and strain fields at the tip of a crack in nonlinear (power law hardening) elastic-plastic materials where the size of the plastic zone is small compared with the crack length. Hutchinson used a material constitutive law of the form suggested by W. Ramberg and W. Osgood:

$$\frac{\varepsilon}{\varepsilon_y} = \frac{\sigma}{\sigma_y} + \alpha \left( \frac{\sigma}{\sigma_y} \right)^n,$$

where σ is the stress in uniaxial tension, σy is a yield stress, ε is the strain, and εy = σy/E is the corresponding yield strain. The quantity E is the elastic Young's modulus of the material. The model is parametrized by α, a dimensionless constant characteristic of the material, and n, the coefficient of work hardening. This model is applicable only to situations where the stress increases monotonically, the stress components remain approximately in the same ratios as loading progresses (proportional loading), and there is no unloading. If a far-field tensile stress σfar is applied to the cracked body, the J-integral around the path Γ1 (chosen to be completely inside the elastic zone) is given by the linear elastic result

$$J_{\Gamma_1} = \beta\, \pi\, \frac{\sigma_{far}^2}{E}\, a = \beta\, \pi\, \sigma_y\, \varepsilon_y \left( \frac{\sigma_{far}}{\sigma_y} \right)^2 a,$$

where a is the crack length. Since the total integral around the crack vanishes and the contributions along the surface of the crack are zero, we have

$$J_{\Gamma_1} = J_{\Gamma_2}.$$

If the path Γ2 is chosen such that it is inside the fully plastic domain, Hutchinson showed that the near-tip stress has the separable form $\sigma_{ij} = \sigma_y\, K\, r^{-s}\, \tilde{\sigma}_{ij}(\theta)$ and that

$$J_{\Gamma_2} = \alpha\, \sigma_y\, \varepsilon_y\, I\, K^{n+1}\, r^{\,1 - s(n+1)},$$

where K is a stress amplitude, (r,θ) is a polar coordinate system with origin at the crack tip, s is a constant determined from an asymptotic expansion of the stress field around the crack, and I is a dimensionless integral. The relation between the J-integrals around Γ1 and Γ2 (which requires $J_{\Gamma_2}$ to be independent of r) leads to the constraint

$$s = \frac{1}{n+1}$$

and an expression for K in terms of the far-field stress,

$$K^{n+1} = \frac{\beta\, \pi}{\alpha\, I} \left( \frac{\sigma_{far}}{\sigma_y} \right)^2 a,$$

where β = 1 for plane stress and β = 1 − ν² for plane strain (ν is the Poisson's ratio). The asymptotic expansion of the stress field and the above ideas can be used to determine the stress and strain fields in terms of the J-integral:

$$\sigma_{ij} = \sigma_y \left( \frac{E\, J}{\alpha\, \sigma_y^2\, I\, r} \right)^{\frac{1}{n+1}} \tilde{\sigma}_{ij}(n, \theta), \qquad \varepsilon_{ij} = \frac{\alpha\, \sigma_y}{E} \left( \frac{E\, J}{\alpha\, \sigma_y^2\, I\, r} \right)^{\frac{n}{n+1}} \tilde{\varepsilon}_{ij}(n, \theta),$$

where $\tilde{\sigma}_{ij}$ and $\tilde{\varepsilon}_{ij}$ are dimensionless functions. These expressions indicate that J can be interpreted as a plastic analog to the stress intensity factor (K) that is used in linear elastic fracture mechanics, i.e., we can use a criterion such as J > JIc as a crack growth criterion. See also Fracture toughness Toughness Fracture mechanics Stress intensity factor Nature of the slip band local field References External links J. R. Rice, "A Path Independent Integral and the Approximate Analysis of Strain Concentration by Notches and Cracks", Journal of Applied Mechanics, 35, 1968, pp. 379–386. Van Vliet, Krystyn J. (2006); "3.032 Mechanical Behavior of Materials", X. Chen (2014), "Path-Independent Integral", In: Encyclopedia of Thermal Stresses, edited by R. B. Hetnarski, Springer, . Nonlinear Fracture Mechanics Notes by Prof. John Hutchinson (from Harvard University) Notes on Fracture of Thin Films and Multilayers by Prof. John Hutchinson (from Harvard University) Mixed mode cracking in layered materials by Profs. John Hutchinson and Zhigang Suo (from Harvard University) Fracture Mechanics by Piet Schreurs (from TU Eindhoven, The Netherlands) Introduction to Fracture Mechanics by Dr. C. H. Wang (DSTO - Australia) Fracture mechanics course notes by Prof. Rui Huang (from Univ. of Texas at Austin) HRR solutions by Ludovic Noels (University of Liege) Failure Solid mechanics Materials testing Mechanics Fracture mechanics
J-integral
[ "Physics", "Materials_science", "Engineering" ]
1,924
[ "Structural engineering", "Solid mechanics", "Fracture mechanics", "Materials science", "Materials testing", "Mechanics", "Mechanical engineering", "Materials degradation" ]
16,506,844
https://en.wikipedia.org/wiki/Committed%20dose%20equivalent
Committed dose equivalent and Committed effective dose equivalent are dose quantities used in the United States system of radiological protection for irradiation due to an internal source.

Committed dose equivalent (CDE) CDE is defined by the United States Nuclear Regulatory Commission in Title 10, Section 20.1003, of the Code of Federal Regulations (10 CFR 20.1003), such that "The Committed dose equivalent, CDE ($H_{T,50}$) is the dose to some specific organ or tissue of reference (T) that will be received from an intake of radioactive material by an individual during the 50-year period following the intake". "The calculation of the committed effective dose equivalent (CEDE) begins with the determination of the equivalent dose, $H_T$, to a tissue or organ, T:

$$H_T = \sum_R w_R\, D_{T,R},$$

where $D_{T,R}$ is the absorbed dose in rads (one gray, an SI unit, equals 100 rads) averaged over the tissue or organ, T, due to radiation type, R, and $w_R$ is the radiation weighting factor. The unit of equivalent dose is the rem (sievert, in SI units)."

Committed effective dose equivalent (CEDE) Title 10, Section 20.1003, of the Code of Federal Regulations of the USA defines the CEDE dose ($H_{E,50}$) as the sum of the products of the committed dose equivalents for each of the body organs or tissues that are irradiated multiplied by the weighting factors ($w_T$) applicable to each of those organs or tissues:

$$H_{E,50} = \sum_T w_T\, H_{T,50}.$$

"The probability of occurrence of a stochastic effect in a tissue or organ is assumed to be proportional to the equivalent dose in the tissue or organ. The constant of proportionality differs for the various tissues of the body, but in assessing health detriment the total risk is required. This is taken into account using the tissue weighting factors, $w_T$, which represent the proportion of the stochastic risk resulting from irradiation of the tissue or organ to the total risk when the whole body is irradiated uniformly, and $H_T$ is the equivalent dose in the tissue or organ, T, in the equation

$$E = \sum_T w_T\, H_T.$$"

Committed Effective Dose Equivalent (CEDE) refers to the dose resulting from internal radiation exposures. The CEDE is combined with the Deep-Dose Equivalent (DDE), the dose from external whole body exposures, to produce the Total Effective Dose Equivalent (TEDE), the dose resulting from internal and external radiation exposures: TEDE = DDE + CEDE.

Units Both quantities can be expressed in rem or sieverts (Sv).
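The bookkeeping defined by these equations is straightforward to organize in code. In the Python sketch below, the radiation and tissue weighting factors and the organ doses are illustrative placeholders, not values taken from 10 CFR 20 or ICRP publications, and the function and variable names are likewise invented for the example.

```python
# Sketch of the committed-dose bookkeeping defined above:
# H_T,50 = sum_R w_R * D_T,R  and  CEDE = sum_T w_T * H_T,50.
# All weighting factors and doses below are illustrative placeholders.
W_R = {"gamma": 1.0, "alpha": 20.0}      # radiation weighting factors
W_T = {"lung": 0.12, "thyroid": 0.04}    # tissue weighting factors

# 50-year committed absorbed dose per organ and radiation type, in gray.
D = {
    "lung":    {"gamma": 0.001, "alpha": 0.0002},
    "thyroid": {"gamma": 0.0005},
}

def committed_dose_equivalent(organ):
    """H_T,50 in sieverts for one organ."""
    return sum(W_R[r] * d for r, d in D[organ].items())

cede = sum(W_T[t] * committed_dose_equivalent(t) for t in D)
dde = 0.002  # external deep-dose equivalent (Sv), illustrative
print(f"CEDE = {cede:.6f} Sv")
print(f"TEDE = CEDE + DDE = {cede + dde:.6f} Sv")
```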
Radionuclides of elements that the body needs can be absorbed especially readily, above all by individuals deficient in those elements; because even small intakes can tip from harmless to harmful, deliberate administration of such isotopes is suggested only for people who actually lack them. Ingestion is often regarded as the most hazardous pathway, because once contaminated material is consumed it is almost impossible to control how much enters the body. Committed dose equivalent in the practice of radiological protection In the case of internal exposure, the dose is not received at the moment of exposure, as happens with external exposure, since the incorporated radionuclide irradiates the various organs and tissues during the time it is present in the body. By definition, the committed dose equivalent corresponds to the received dose integrated over 50 years from the date of intake. In order to calculate it, one has to know the intake activity and the value of the committed dose equivalent per unit of intake activity. The uncertainties of the first parameter are such that the committed dose equivalent can only be regarded as an order of magnitude and not as a very accurate quantity. Its use is justified, however, because, like the dose equivalent for external exposure, it expresses the risk of stochastic effects for the individual concerned, since these effects, should they appear, would do so only after a latent period which is generally longer than the dose integration time. Moreover, the use of the committed dose equivalent offers certain advantages for dosimetric management, especially when it is simplified. A practical problem which may arise is that the annual dose limit is apparently exceeded by virtue of the fact that one is taking account, in the first year, of doses which will actually be received only in the following years. These problems are rare enough in practice to be dealt with individually in each case. Cigarette smoke measured with SSNTD and corresponding committed equivalent dose "Uranium and Thorium contents were measured inside various tobacco samples by using a method based on determining detection efficiencies of the CR-39 and LR-115 II solid state nuclear track detectors (SSNTD) for the emitted alpha particles. Alpha and beta activities per unit volume, due to radon, thoron and their decay products, were evaluated inside cigarette smokes of tobacco samples studied. Annual committed equivalent doses due to short-lived radon decay products from the inhalation of various cigarette smokes were determined in the thoracic and extrathoracic regions of the respiratory tract. Three types of cigarettes made in Morocco of black tobacco show higher annual committed equivalent doses in the extrathoracic and thoracic regions of the respiratory tract than the other studied cigarettes (except one type of cigarettes made in France of yellow tobacco); their corresponding annual committed equivalent dose ratios are larger than 1.8. Measured annual committed equivalent doses ranged upward from 1.8×10−9 Sv/yr in the extrathoracic region and from 1.3×10−10 Sv/yr in the thoracic region of the respiratory tract for a smoker consuming 20 cigarettes a day." See also Radioactivity Radiation poisoning Ionizing radiation Collective dose Cumulative dose Committed effective dose equivalent References US Nuclear Regulatory Commission glossary Radon and daughters in cigarette smoke measured with SSNTD and corresponding committed equivalent dose to respiratory tract Schlenker, R. A. 
"Comparison of Intake and Committed Dose Equivalent Permitted by Radiation Protection Systems Based on Annual Dose Equivalent and Committed Dose Equivalent for a Nuclide of Intermediate Effective Half-life." Health Physics, 51.2 (1986): 207–214. External links - "The confusing world of radiation dosimetry" - M.A. Boyd, U.S. Environmental Protection Agency. An account of chronological differences between USA and ICRP dosimetry systems. Radioactivity Radiation health effects Equivalent units
Committed dose equivalent
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
1,527
[ "Radiation health effects", "Equivalent quantities", "Units of measurement", "Quantity", "Equivalent units", "Nuclear physics", "Radiation effects", "Radioactivity" ]
16,507,851
https://en.wikipedia.org/wiki/Cumulative%20dose
Cumulative dose is the total dose resulting from repeated exposures to ionizing radiation of an occupationally exposed worker, to the same portion of the body or to the whole body, over a period of time. In medicine, it denotes the total amount of a drug or radiation given to a patient over time; for example, the total dose of radiation delivered in a series of radiation treatments or imaging exams. Recent studies have drawn attention to high cumulative doses (>100 mSv) to millions of patients undergoing recurrent CT scans during a 1- to 5-year period. This has resulted in a debate on whether CT is really a low-dose imaging modality. See also Radioactivity Radiation poisoning Collective dose Committed dose equivalent Committed effective dose equivalent References USNRC Glossary Radioactivity Radiation health effects
Cumulative dose
[ "Physics", "Chemistry", "Materials_science" ]
161
[ "Radiation health effects", "Nuclear chemistry stubs", "Nuclear and atomic physics stubs", "Nuclear physics", "Radiation effects", "Radioactivity" ]
16,508,375
https://en.wikipedia.org/wiki/Fuel%20temperature%20coefficient%20of%20reactivity
Fuel temperature coefficient of reactivity is the change in reactivity of the nuclear fuel per degree change in the fuel temperature. The coefficient quantifies how the number of neutrons that the nuclear fuel (such as uranium-238) absorbs from the fission process changes as the fuel temperature increases. It is a measure of the stability of reactor operation. This coefficient is also known as the Doppler coefficient because of the contribution of Doppler broadening, which is the dominant effect in thermal systems. Contributing effects Doppler broadening Increased thermal motion of atoms within the fuel broadens the resonance capture cross-section peaks, which increases the neutron capture rate in the non-fissile portions of the fuel and thereby reduces the overall neutron flux. Thermal expansion Thermal expansion of the fuel at higher temperatures results in a lower density, which reduces the likelihood of a neutron interacting with the fuel. See also Nuclear fission Nuclear reactor physics Void coefficient References USNRC Glossary Nuclear technology
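In symbols (a standard textbook definition, not quoted from the glossary cited above), the coefficient is the partial derivative of core reactivity with respect to fuel temperature:

\alpha_{\mathrm{fuel}} = \frac{\partial \rho}{\partial T_{\mathrm{fuel}}}

where \rho is the reactivity and T_{\mathrm{fuel}} the fuel temperature. Both contributing effects described above make \alpha_{\mathrm{fuel}} negative in most designs, so a rise in fuel temperature inserts negative reactivity and acts to stabilize the reactor.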
Fuel temperature coefficient of reactivity
[ "Physics" ]
202
[ "Nuclear technology", "Nuclear physics" ]
16,509,260
https://en.wikipedia.org/wiki/Total%20effective%20dose%20equivalent
The Total effective dose equivalent (TEDE) is a radiation dosimetry quantity defined by the US Nuclear Regulatory Commission to monitor and control human exposure to ionizing radiation. It is defined differently in the NRC regulations and NRC glossary. According to the regulations, it is the sum of effective dose equivalent from external exposure and committed effective dose equivalent from internal exposure, thereby taking into account all known exposures. However, the NRC glossary defines it as the sum of the deep-dose equivalent and committed effective dose equivalent, which would appear to exclude the effective dose to the skin and eyes from non-penetrating radiation such as beta. These surface doses are included in the NRC's shallow dose equivalent, along with contributions from penetrating (gamma) radiation. Regulatory limits are imposed on the TEDE for occupationally exposed individuals and members of the general public. See also Radioactivity Radiation poisoning Ionizing radiation Deep-dose equivalent Collective dose Cumulative dose Committed dose equivalent Committed effective dose equivalent References 10 CFR 20.1003 External links - "The confusing world of radiation dosimetry" - M.A. Boyd, U.S. Environmental Protection Agency. An account of chronological differences between USA and ICRP dosimetry systems. Radioactivity Radiation health effects
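As a compact restatement of the glossary definition above (the notation here is mine, not the NRC's):

\text{TEDE} = \text{DDE} + \text{CEDE}

where the deep-dose equivalent accounts for external penetrating radiation and the committed effective dose equivalent accounts for the 50-year dose committed by intakes of radioactive material.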
Total effective dose equivalent
[ "Physics", "Chemistry", "Materials_science" ]
258
[ "Radiation health effects", "Nuclear chemistry stubs", "Nuclear and atomic physics stubs", "Nuclear physics", "Radiation effects", "Radioactivity" ]
16,509,322
https://en.wikipedia.org/wiki/Deep-dose%20equivalent
The Deep-dose equivalent (DDE) is a measure of external radiation exposure defined by US regulations. It is reported alongside eye and shallow dose equivalents on typical US dosimetry reports. It represents the dose equivalent at a tissue depth of 1 cm (1000 mg/cm2) due to external whole-body exposure to ionizing radiation. Dose due to external radiation tends to decrease with depth because of the shielding effects of outer tissues. The reference depth of 1 cm essentially discounts alpha and beta radiation that are easily shielded by the skin, clothing, and bone surface, while taking minimal credit for any self-shielding from the more penetrating gamma rays. This makes the deep-dose equivalent a conservative measure of internal organ exposure to external radiation, while eye and skin exposure to external radiation must be accounted for differently. Deep-dose equivalent does not include any contribution from internal contamination. See also Radioactivity Radiation poisoning Ionizing radiation Dosimetry Absorbed dose Total effective dose equivalent Collective dose Cumulative dose Committed dose equivalent Committed effective dose equivalent References USNRC glossary External links - "The confusing world of radiation dosimetry" - M.A. Boyd, 2009, U.S. Environmental Protection Agency. An account of chronological differences between USA and ICRP dosimetry systems. Radioactivity Radiation health effects Equivalent units
Deep-dose equivalent
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
266
[ "Radiation health effects", "Equivalent quantities", "Units of measurement", "Quantity", "Equivalent units", "Nuclear chemistry stubs", "Nuclear and atomic physics stubs", "Nuclear physics", "Radiation effects", "Radioactivity" ]
16,509,656
https://en.wikipedia.org/wiki/Power%20plant%20efficiency
The efficiency of a power plant is the percentage of the total energy content of the plant's fuel that is converted into electricity. The remaining energy is usually lost to the environment as heat unless it is used for district heating. Rating efficiency is complicated by the fact that there are two different ways to measure the fuel energy input:
LCV = Lower Calorific Value (same as NCV = Net Calorific Value): neglects the thermal energy gained from exhaust H2O condensation
HCV = Higher Calorific Value (same as GCV = Gross Calorific Value): includes the exhaust H2O condensed to liquid water
Depending on which convention is used, a difference of about 10% in the apparent efficiency of a gas-fired plant can arise, so it is very important to know which convention, HCV or LCV (GCV or NCV), is being used. Heat rate Heat rate is a term commonly used in power stations to indicate the power plant efficiency. The heat rate is the inverse of the efficiency: a lower heat rate is better. The term efficiency is a dimensionless measure (sometimes quoted in percent), and strictly heat rate is dimensionless as well, but it is often written as energy per energy in the relevant units. In SI units it is joule per joule, but it is often also expressed as joule/kilowatt hour or British thermal units/kWh. This is because the kilowatt hour is often used when referring to electrical energy, while the joule or Btu is commonly used when referring to thermal energy. Heat rate in the context of power plants can be thought of as the input needed to produce one unit of output. It generally indicates the amount of fuel required to generate one unit of electricity. Performance parameters tracked for any thermal power plant (efficiency, fuel costs, plant load factor, emission levels, etc.) are a function of the station heat rate and can be linked to it directly. Given that heat rate and efficiency are inversely related to each other, it is easy to convert from one to the other. A 100% efficiency implies equal input and output: for 1 kWh of output, the input is 1 kWh. This thermal energy input of 1 kWh = 3.6 MJ = 3,412 Btu. Therefore, the heat rate of a 100% efficient plant is simply 1, or 1 kWh/kWh, or 3.6 MJ/kWh, or 3,412 Btu/kWh. To express the efficiency of a generator or power plant as a percentage, invert the value if dimensionless notation or identical units are used. For example:
A heat rate value of 5 gives an efficiency factor of 20%.
A heat rate value of 2 kWh/kWh gives an efficiency factor of 50%.
A heat rate value of 4 MJ/MJ gives an efficiency factor of 25%.
For other units, make sure to use a corresponding conversion factor for the units. For example, if using Btu/kWh, use a conversion factor of 3,412 Btu per kWh to calculate the efficiency factor. For example, if the heat rate is 10,500 Btu/kWh, the efficiency is 32.5% (since 3,412 Btu / 10,500 Btu = 32.5%). The higher the heat rate (i.e. the more energy input that is required to produce one unit of electric output), the lower the efficiency of the power plant. The U.S. Energy Information Administration gives a general explanation of how to translate a heat rate value into a power plant's efficiency value. Most power plants have a target or design heat rate. If the actual heat rate does not match the target, the difference between the actual and target heat rate is the heat rate deviation. 
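Given the conversions above, a short illustrative sketch (the function names are invented here) makes the heat-rate-to-efficiency arithmetic concrete; the 3,412 Btu/kWh equivalence is the one used in the text.

def efficiency_from_heat_rate_btu(heat_rate_btu_per_kwh):
    """Efficiency (as a fraction) from a heat rate in Btu/kWh; 1 kWh = 3,412 Btu."""
    return 3412.0 / heat_rate_btu_per_kwh

def efficiency_from_heat_rate_dimensionless(heat_rate):
    """Efficiency (as a fraction) when heat rate uses identical units (e.g. kWh/kWh)."""
    return 1.0 / heat_rate

print(efficiency_from_heat_rate_btu(10500))        # ~0.325, matching the worked example
print(efficiency_from_heat_rate_dimensionless(2))  # 0.5, i.e. 50%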
See also Fuel efficiency Energy conversion efficiency Thermal efficiency Electrical efficiency Mechanical efficiency Cost of electricity by source References Thermodynamic properties Energy conversion Energy conservation Engineering thermodynamics Energy economics Power stations
Power plant efficiency
[ "Physics", "Chemistry", "Mathematics", "Engineering", "Environmental_science" ]
818
[ "Thermodynamic properties", "Physical quantities", "Engineering thermodynamics", "Quantity", "Energy economics", "Thermodynamics", "Mechanical engineering", "Environmental social science" ]
16,510,408
https://en.wikipedia.org/wiki/Nuclear%20entombment
Nuclear entombment (also referred to as "safe enclosure") is a method of nuclear decommissioning in which radioactive contaminants are encased in a structurally long-lived material, such as concrete. This prevents radioactive material and other contaminated substances from being exposed to human activity and the environment. Entombment is usually applied to nuclear reactors, but also to some nuclear test sites. Nuclear entombment is the least used of three methods for decommissioning nuclear power plants, the others being dismantling and deferred dismantling (also known as "safe storage"). The use of nuclear entombment is more practical for larger nuclear power plants that are in need of both long- and short-term burials, as well as for power plants which seek to terminate their facility licenses. Entombment is used on a case-by-case basis because it entails a major commitment: years of surveillance and considerable complexity until the radioactivity is no longer a major concern, permitting decommissioning and ultimate unrestricted release of the property. Considerations such as financial backing and the availability of technical know-how are also major factors. Preparation The first step is to cease operations and stow any spent fuel or waste. Nuclear reactors produce high-level waste in the form of spent nuclear fuel, which continues to release decay heat due to its powerful radioactivity. Storing this waste underwater in a spent fuel pool prevents damage and safely absorbs the radiation. Over a period of years the radioactivity and heat generation decline, until the spent fuel can be removed from the water and stored in casks for burial. When a reactor is decommissioned, partially spent fuel can be treated the same way. The reactor is sealed so that no radioactive particles or gases can escape. Lastly, the remaining water is pumped out and put in containers to await proper decontamination. Decontamination is the process of removing radioactive contaminants from the remaining surfaces. Washing and mechanical cleaning are performed during the decontamination process, often using chemical agents; the overall objective is to protect public safety and the environment. The coolant is also removed and stored for proper disposal. This procedure is often performed by the company that owns the plant; if the company is unable to, properly qualified contractors are brought in. The next stage deals with the radioactivity and radioactive waste itself. The second procedure is the dismantling of the site, the aim of which is to remove the radioactive materials. Thermal cutting and mechanical cutting are two technical ways to dismantle and demolish. Thermal cutting is used on metals, burning through them with high energy concentrated in a small area. Mechanical cutting takes place in the workshop, applying mechanical force to cut radioactive components into two parts or into small pieces. The most dangerous waste is placed inside radiation-resistant containers, after which the containers are transported to storage facilities. The rest of the site can then be decontaminated. The site is then checked thoroughly for any signs of radiation. Most of the remaining waste onsite can be disposed of normally, as it is either not contaminated or its radioactivity levels have dropped to within safe limits. This process is often completed using robots, which are able to access the difficult-to-reach areas deemed too radioactive for human workers. 
One such robot was developed for WWER-440-type nuclear reactors, which are found mostly in Central and Eastern Europe and Russia. The main idea of using robots in decontamination is to reduce radioactivity to a level at which workers can safely be exposed. Power is supplied through the robot control system to the manipulator, which can be operated remotely. The "Decomler" decontamination robot moves using combined wheel and track systems. Such robots must be strictly licensed by national regulatory authorities to ensure that the materials they process are not discharged to the outside, which would otherwise contaminate both the environment and people. Entombment Entombment is a more time-intensive process than protective storage and dismantlement as a decommissioning mode. The simplest of the procedures is entombing the radioactive waste source at the site itself. After containment and disposal of lower-level radioactive spent fuel sources, the entombment process for the high-level radioactive parts of the plant may begin. The entombment itself is accomplished by numerous layers of sturdy materials, concrete usually among them. The first step is to cover the area with a protective shield, usually made of radiation-resistant materials; this allows workers to continue working in a significantly less radioactive environment. The second step is the most crucial and time-consuming. Cementitious materials are used to encase the site in cement, absorbent grout, and/or infills. Each layer of cement, grout or infill must set and cure before the next layer is added. Time and proper testing are required to ensure the safe containment of radiation within the layers of cement. The final step is often to surround the site with a clay or sand/gravel mixture, after which soil is laid on top of the site. Entombment designs must be defined and agreed upon by an authorized organization, such as the NRC. These designs must also be an approved alternative to other decommissioning methods. Furthermore, because the nuclear facility is often in close proximity to other public environments, the public must accept entombment as a decontamination & decommissioning (D&D) option before proceeding. Small-scale tests will sometimes be performed to prove to organizations like the NRC that a standard process can be transferred. A consortium approach is also necessary to ensure a broader understanding and funding of nuclear entombment. Sites for potential entombment have been identified in the U.K., Japan, Lithuania, Russia, and Taiwan, but further research and development of nuclear entombment methods has been called for as of the early 21st century. Sites must be routinely checked for breaches in the containment barrier for decades. Therefore, entombment is often considered a last-resort solution for the decommissioning of a nuclear power plant or nuclear disaster site. Concerns Many of the concerns about nuclear entombment center around ethics and long-term reliability. Given the inherently dangerous contents of entombment structures, they serve as a serious disamenity to nearby residents. Once established, entombment structures cannot practically be transported or modified, making disposal sites effectively permanent for their intended lifespan, often up to 1,000 years. In addition, the intended permanence of such structures raises the concern of leak integrity over long periods of time. 
Should a leak occur, the nuclear waste contents could potentially contaminate nearby water sources, posing a serious health risk to surrounding inhabitants and the biosphere, possibly violating the polluter pays principle. Public perception plays an important role in the development of nuclear entombment sites, and it can be difficult to ensure a steady supply of both funding and willing workers. Constant, thorough monitoring and sanitation of any nuclear entombment site is required to ensure its stability and effectiveness over a long period of time, a significant expense that is not necessarily predictable for the entire life of the site, leaving a financial liability for future generations. The health and safety of workers monitoring the structure is also a concern; for reference, Chernobyl entombment workers receive about 9.2 mSv per month, compared to the average US resident receiving 3.1 mSv per year. Entombment is not a solution for every type of radioactive waste and is not viable for long-lived radionuclides. Benefits The surveillance cost is lower than for the SAFSTOR (safe storage) option. The cost of entombment is less than the cost of dismantling, since the waste is disposed of in the same facility from which it came. However, surveillance costs continue indefinitely and may grow over the years. The use of entombment requires fewer workers and prevents them from being in major contact with the nuclear waste. In some cases, entombment also provides further financial benefits by reducing the costs devoted to waste conditioning and management, as radioactive waste can be placed within the vicinity of entombment enclosures to benefit from decay. In addition to reducing cost, it also minimizes public interaction with the project and the amount of nuclear radiation emitted from the waste. Disposing of the nuclear waste in the same facility allows engineers to reinforce the facility to ensure safety for the public and the environment. Entombment is also preferable in time-sensitive scenarios, in which the deferred dismantling of a nuclear power plant could potentially increase the financial burden and/or the hazard from radioactive decay. Beyond direct practical benefits, entombment has also been explored as a step that can benefit the overall decontamination and decommissioning process, though further research and development is needed before it can be deemed a viable option. United States Nuclear Regulatory Commission The United States Nuclear Regulatory Commission (USNRC) provides licensing for the entombment process, as well as research and development (R&D) programs to help decommission nuclear power plants. The USNRC will continue the development of rulemaking for entombment. The NRC asks companies running power plants to set money aside while the power plant is operating, for future shutdown and cleanup costs. The NRC has decided that, in order for nuclear entombment to be possible, a long-term structure must be created specifically for encasing the radioactive waste. If the structures are not correctly built, water can seep into them and expose the public to radioactive waste. The NRC operates under acts such as the Nuclear Waste Policy Act of 1982 and the Low-Level Radioactive Waste Policy Act. These policies help regulate state governments on the procedures and precautions needed to dispose of the nuclear waste. 
The Nuclear Waste Policy Act of 1982 makes the federal government responsible for providing permanent disposal facilities for high-level radioactive waste and spent nuclear fuel. If states have also agreed to follow §274 of the Atomic Energy Act, they may take on the responsibility of disposing of low-level waste and receive facilities from the federal government for this purpose. Other organizations working to improve nuclear entombment as a solution include the Cementitious Barriers Partnership (CBP) and the U.S. Department of Energy (DOE). Research facilities such as those at the Savannah River and Lawrence Livermore laboratories have contributed to the understanding of safe nuclear entombment. Containment examples There are several examples of successfully completed entombment procedures. In El Cabril, Spain, a multi-barrier concrete concept was used wherein the radioactive waste drums are placed inside concrete boxes. Those boxes are then placed inside a reinforced concrete vault sealed with a waterproof coating to prevent any hazardous liquid from escaping the drums. At the Hallam Nuclear Power Facility, expanding concrete, seal-welding at penetrations, sand, waterproof polyvinyl membranes, and earth were all used to envelop radioactive residuals. At the Piqua Nuclear Power Facility, seal-welding and sand were again used to seal the internal reactor, finished with a waterproof membrane. At the Boiling Nuclear Superheater Power Station (BONUS) in Rincón, Puerto Rico, a concrete slab was constructed to cover the upper surface while seal-welding was used to secure lower surface penetrations. The Chernobyl disaster is one of the worst nuclear disasters. The initial containment building, commonly known as the sarcophagus, did not qualify as a proper entombment device. It was difficult or impossible to repair and maintain because of extremely high levels of radiation. A new structure was moved into place in late 2016 and completed in 2019. The structure measures 108 meters tall, with a span of 260 meters and a length of 165 meters. The main arch is composed of triple-layered radiation-resistant panels made of stainless steel coated in polycarbonate, which provide the shielding necessary for radioactive containment. The structure weighs over 30,000 tons and completely covers Reactor number 4. This new tomb is designed to last over 100 years, and has special ventilation and temperature systems to prevent condensation of radioactive fluids on the inside, which could result in compromised containment. The new containment structure is still intended to be temporary, with the goal of allowing the Ukrainian Government and the EU time to develop ways of properly decommissioning the plant and cleaning up the site. Other examples Lucens, Switzerland - initially entombed in a cavern and later decontaminated Dodewaard, the Netherlands - entombed for 40 years, awaiting final decommissioning; also referred to as 'safe enclosure' Runit Dome, Marshall Islands - large concrete tomb constructed in 1980 in an atomic blast crater, encasing contaminated soil See also Nuclear Decommissioning Authority References Further reading Lochbaum, Dave (2013). Nuclear Plant Decommissioning. Bulletin of the Atomic Scientists. Retrieved from http://thebulletin.org/nuclear-plant-decommissioning United States Nuclear Regulatory Commission (2017). NRC: Decommissioning of Nuclear Facilities. 
Retrieved from https://www.nrc.gov/waste/decommissioning.html Nuclear technology Nuclear power stations Radioactive waste Nuclear safety and security
Nuclear entombment
[ "Physics", "Chemistry", "Technology" ]
2,801
[ "Nuclear technology", "Environmental impact of nuclear power", "Hazardous waste", "Radioactivity", "Nuclear physics", "Radioactive waste" ]
16,513,502
https://en.wikipedia.org/wiki/Thermal%20conductivity%20measurement
There are a number of possible ways to measure thermal conductivity, each of them suitable for a limited range of materials, depending on the thermal properties and the medium temperature. Three classes of methods exist to measure the thermal conductivity of a sample: steady-state, time-domain, and frequency-domain methods. Steady-state methods In general, steady-state techniques perform a measurement when the temperature of the material measured does not change with time. This makes the signal analysis straightforward (steady state implies constant signals). The disadvantage is that a well-engineered experimental setup is usually needed. Steady-state methods, in general, work by applying a known heating power, Q, to a sample with a surface area, A, and thickness, L; once the sample's steady-state temperature is reached, the difference in temperature, ΔT, across the thickness of the sample is measured. After assuming one-dimensional heat flow and an isotropic medium, Fourier's law is then used to calculate the measured thermal conductivity, k:

k = (Q · L) / (A · ΔT)

Major sources of error in steady-state measurements include radiative and convective heat losses in the setup, as well as errors in the thickness of the sample propagating to the thermal conductivity. In geology and geophysics, the most common method for consolidated rock samples is the divided bar. There are various modifications to these devices depending on the temperatures and pressures needed, as well as sample sizes. A sample of unknown conductivity is placed between two samples of known conductivity (usually brass plates). The setup is usually vertical, with the hot brass plate at the top, the sample in between, and the cold brass plate at the bottom. Heat is supplied at the top and made to move downwards to stop any convection within the sample. Measurements are taken after the sample has reached steady state (with a constant temperature gradient across the entire sample); this usually takes 30 minutes or more. Other steady-state methods For good conductors of heat, Searle's bar method can be used. For poor conductors of heat, Lee's disc method can be used. Time-domain methods The transient techniques perform a measurement during the process of heating up. The advantage is that measurements can be made relatively quickly. Transient methods are usually carried out with needle probes. Non-steady-state methods to measure the thermal conductivity do not require the signal to obtain a constant value. Instead, the signal is studied as a function of time. The advantage of these methods is that they can in general be performed more quickly, since there is no need to wait for a steady-state situation. The disadvantage is that the mathematical analysis of the data is generally more difficult. Transient hot wire method The transient hot wire method (THW) is a very popular, accurate and precise technique to measure the thermal conductivity of gases, liquids, solids, nanofluids and refrigerants in a wide temperature and pressure range. The technique is based on recording the transient temperature rise of a thin vertical metal wire of effectively infinite length when a step voltage is applied to it. The wire is immersed in a fluid and can act both as an electrical heating element and as a resistance thermometer. The transient hot wire method has an advantage over other thermal conductivity methods, since there is a fully developed theory and no calibration is needed beyond, at most, a single-point calibration. 
Furthermore, because of the very small measuring time (1 s), no convection is present in the measurements, and only the thermal conductivity of the fluid is measured, with very high accuracy. Most of the THW sensors used in academia consist of two identical very thin wires that differ only in length. Sensors using a single wire are used both in academia and industry; their advantage over two-wire sensors is the ease of handling the sensor and of changing the wire. An ASTM standard is published for the measurement of engine coolants using a single-wire transient hot wire method. Transient plane source method The transient plane source (TPS) method uses a plane sensor and a special mathematical model describing the heat conductivity, combined with suitable electronics, to measure thermal transport properties. It covers a thermal conductivity range of at least 0.01-500 W/m/K (in accordance with ISO 22007-2) and can be used for measuring various kinds of materials, such as solids, liquids, pastes and thin films. In 2008 it was approved as an ISO standard for measuring the thermal transport properties of polymers (November 2008). This TPS standard also covers the use of the method to test both isotropic and anisotropic materials. The transient plane source technique typically employs two sample halves, in between which the sensor is sandwiched. Normally the samples should be homogeneous, but extended use of transient plane source testing of heterogeneous materials is possible, with proper selection of sensor size to maximize sample penetration. The method can also be used in a single-sided configuration, with the introduction of a known insulation material as sensor support. The flat sensor consists of a continuous double spiral of electrically conducting nickel (Ni) metal, etched out of a thin foil. The nickel spiral is situated between two layers of thin polyimide film (Kapton), which provide electrical insulation and mechanical stability to the sensor. The sensor is placed between the two halves of the sample to be measured. During the measurement, a constant electrical power is passed through the conducting spiral, increasing the sensor temperature. The heat generated dissipates into the sample on both sides of the sensor, at a rate depending on the thermal transport properties of the material. By recording the temperature vs. time response of the sensor, the thermal conductivity, thermal diffusivity and specific heat capacity of the material can be calculated. For highly conducting materials, very large samples are needed (some litres in volume). Modified transient plane source (MTPS) method A variation of the above method is the modified transient plane source (MTPS) method developed by Dr. Nancy Mathis. The device uses a one-sided, interfacial, heat reflectance sensor that applies a momentary, constant heat source to the sample. The difference between this method and the traditional transient plane source technique described above is that the heating element is supported on a backing, which provides mechanical support, electrical insulation and thermal insulation. This modification provides a one-sided interfacial measurement, offering maximum flexibility in testing liquids, powders, pastes and solids. Transient line source method The physical model behind this method is the infinite line source with constant power per unit length. 
The temperature profile at a distance r at time t is as follows:

ΔT(r,t) = (Q / 4πλ) · E1(r² / 4at)

where
Q is the power per unit length, in [W·m−1]
λ is the thermal conductivity of the sample, in [W·m−1·K−1]
E1 is the exponential integral, a transcendent mathematical function
r is the radial distance to the line source
a is the thermal diffusivity, in [m2·s−1]
t is the amount of time that has passed since heating has started, in [s]

When performing an experiment, one measures the temperature at a point at a fixed distance and follows that temperature in time. For large times, the exponential integral can be approximated by making use of the following relation:

E1(x) ≈ −γ − ln(x) for small x

where γ is the Euler–Mascheroni constant. This leads to the following expression:

ΔT(r,t) ≈ (Q / 4πλ) · [ln(4a/r²) − γ + ln(t)]

Note that the first two terms in the brackets on the RHS are constants. Thus if the probe temperature is plotted versus the natural logarithm of time, the thermal conductivity can be determined from the slope, given knowledge of Q. Typically this means ignoring the first 60 to 120 seconds of data and measuring for 600 to 1200 seconds. Typically, this method is used for gases and liquids whose thermal conductivities are between 0.1 and 50 W/(mK). If the thermal conductivities are too high, the plot often does not show linearity, so that no evaluation is possible. Modified transient line source method A variation on the transient line source method is used for measuring the thermal conductivity of a large mass of the earth for geothermal heat pump (GHP/GSHP) system design. This is generally called ground thermal response testing (TRT) by the GHP industry. Understanding the ground conductivity and thermal capacity is essential to proper GHP design, and using TRT to measure these properties was first presented in 1983 (Mogensen). The now commonly used procedure, introduced by Eklöf and Gehlin in 1996 and now approved by ASHRAE, involves inserting a pipe loop deep into the ground in a well bore, filling the annulus of the bore with a grout of known thermal properties, heating the fluid in the pipe loop, and measuring the temperature drop in the loop between the inlet and return pipes in the bore. The ground thermal conductivity is estimated using the line source approximation method, by fitting a straight line to the thermal response plotted against the logarithm of time. A very stable thermal source and pumping circuit are required for this procedure. More advanced ground TRT methods are currently under development. The DOE is now validating a new advanced thermal conductivity test said to require half the time of the existing approach, while also eliminating the requirement for a stable thermal source. This new technique is based on multi-dimensional model-based TRT data analysis. Laser flash method The laser flash method is used to measure the thermal diffusivity of a thin disc in the thickness direction. This method is based upon the measurement of the temperature rise at the rear face of the thin-disc specimen produced by a short energy pulse on the front face. With a reference sample, the specific heat can be determined, and with known density the thermal conductivity follows as:

λ = a · cp · ρ

where
λ is the thermal conductivity of the sample, in [W·m−1·K−1]
a is the thermal diffusivity of the sample, in [m2·s−1]
cp is the specific heat capacity of the sample, in [J·kg−1·K−1]
ρ is the density of the sample, in [kg·m−3]

It is suitable for a multiplicity of different materials over a broad temperature range (−120 °C to 2800 °C). 
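The large-time evaluation just described (slope of temperature versus the logarithm of time) lends itself to a short numerical sketch. The following illustrative Python fit assumes the line-source approximation above; the function name and the synthetic readings (generated from λ = 0.6 W/(m·K), a water-like value) are invented for the example.

import math

def conductivity_from_line_source(times_s, delta_T_K, power_per_length_W_per_m):
    """Least-squares slope of dT versus ln(t); then lambda = Q / (4*pi*slope)."""
    x = [math.log(t) for t in times_s]
    y = delta_T_K
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return power_per_length_W_per_m / (4 * math.pi * slope)

# Synthetic data: ignore the early transient (per the text) and sample later times.
Q = 2.0                                   # W per metre of wire
lam_true = 0.6                            # W/(m K)
times = [60, 120, 240, 480, 960]          # seconds
temps = [Q / (4 * math.pi * lam_true) * math.log(t) + 1.0 for t in times]  # + offset
print(conductivity_from_line_source(times, temps, Q))  # recovers ~0.6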
Time-domain thermoreflectance method Time-domain thermoreflectance is a method by which the thermal properties of a material can be measured, most importantly thermal conductivity. This method can be applied most notably to thin film materials, which have properties that vary greatly when compared to the same materials in bulk. The idea behind this technique is that once a material is heated up, the change in the reflectance of the surface can be utilized to derive the thermal properties. The change in reflectivity is measured with respect to time, and the data received can be matched to a model containing coefficients that correspond to thermal properties. Frequency-domain methods 3ω-method One popular technique for electro-thermal characterization of materials is the 3ω-method, in which a thin metal structure (generally a wire or a film) is deposited on the sample to function as a resistive heater and a resistance temperature detector (RTD). The heater is driven with AC current at frequency ω, which induces periodic Joule heating at frequency 2ω due to the oscillation of the AC signal during a single period. There will be some delay between the heating of the sample and the temperature response, which is dependent upon the thermal properties of the sensor/sample. This temperature response is measured by logging the amplitude and phase delay of the AC voltage signal from the heater across a range of frequencies (generally accomplished using a lock-in amplifier). Note that the phase delay of the signal is the lag between the heating signal and the temperature response. The measured voltage will contain both the fundamental and third harmonic components (ω and 3ω respectively), because the Joule heating of the metal structure induces oscillations in its resistance at frequency 2ω, due to the temperature coefficient of resistance (TCR) of the metal heater/sensor, as stated in the following equation:

R(t) = R0 + C0 · cos(2ωt + φ), where C0 is constant.

Thermal conductivity is determined from the linear slope of the ΔT vs. log(ω) curve. The main advantages of the 3ω-method are minimization of radiation effects and easier acquisition of the temperature dependence of the thermal conductivity than in the steady-state techniques. Although some expertise in thin film patterning and microlithography is required, this technique is considered the best pseudo-contact method available. Frequency-domain hot-wire method The transient hot wire method can be combined with the 3ω-method to accurately measure the thermal conductivity of solid and molten compounds from room temperature up to 800 °C. In high temperature liquids, errors from convection and radiation make steady-state and time-domain thermal conductivity measurements vary widely; this is evident in the previous measurements for molten nitrates. By operating in the frequency domain, the thermal conductivity of the liquid can be measured using a 25 μm diameter hot-wire while rejecting the influence of ambient temperature fluctuations, minimizing error from radiation, and minimizing errors from convection by keeping the probed volume below 1 μL. Freestanding sensor-based 3ω-method The freestanding sensor-based 3ω technique has been proposed and developed as an alternative to the conventional 3ω method for thermophysical property measurement. The method covers the characterization of solids, powders and fluids from cryogenic temperatures to around 400 K. For solid samples, the method is applicable both to bulk samples and to wafers/membranes tens of micrometers thick, with dense or porous surfaces. 
Thermal conductivity and thermal effusivity can be measured with the appropriate choice of sensor. Two basic forms are now available: the linear source freestanding sensor and the planar source freestanding sensor. The range of thermophysical properties can be covered by the different forms of the technique; the recommended range in which the highest precision can be attained is 0.01 to 150 W/m•K (thermal conductivity) for the linear source freestanding sensor and 500 to 8000 J/m2•K•s0.5 (thermal effusivity) for the planar source freestanding sensor. Measuring devices A thermal conductance tester, one of the instruments of gemology, determines if gems are genuine diamonds using diamond's uniquely high thermal conductivity. For an example, see Measuring Instrument of Heat Conductivity of ITP-MG4 "Zond" (Russia). Standards EN 12667, "Thermal performance of building materials and products. Determination of thermal resistance by means of guarded hot plate and heat flow meter methods. Products of high and medium thermal resistance". ISO 8301, "Thermal insulation – Determination of steady-state thermal resistance and related properties – Heat flow meter apparatus" ISO 8497, "Thermal insulation – Determination of steady-state thermal transmission properties of thermal insulation for circular pipes" ISO 22007-2:2008 "Plastics – Determination of thermal conductivity and thermal diffusivity – Part 2: Transient plane heat source (hot disc) method" ISO 22007-4:2008 "Plastics – Determination of thermal conductivity and thermal diffusivity – Part 4: Laser flash method" IEEE Standard 442–1981, "IEEE guide for soil thermal resistivity measurements". See also soil thermal properties. IEEE Standard 98-2002, "Standard for the Preparation of Test Procedures for the Thermal Evaluation of Solid Electrical Insulating Materials" ASTM Standard C518 – 10, "Standard Test Method for Steady-State Thermal Transmission Properties by Means of the Heat Flow Meter Apparatus" ASTM Standard D5334-14, "Standard Test Method for Determination of Thermal Conductivity of Soil and Soft Rock by Thermal Needle Probe Procedure" ASTM Standard D5470-06, "Standard Test Method for Thermal Transmission Properties of Thermally Conductive Electrical Insulation Materials" ASTM Standard E1225-04, "Standard Test Method for Thermal Conductivity of Solids by Means of the Guarded-Comparative-Longitudinal Heat Flow Technique" ASTM Standard D5930-01, "Standard Test Method for Thermal Conductivity of Plastics by Means of a Transient Line-Source Technique" ASTM Standard D2717-95, "Standard Test Method for Thermal Conductivity of Liquids" ASTM Standard E1461-13(2022), "Standard Test Method for Thermal Diffusivity by the Flash Method." References External links A brief description of the Modified Transient Plane Source (MTPS) method at http://patents.ic.gc.ca/opic-cipo/cpd/eng/patent/2397102/page/2397102_20120528_description.pdf Materials testing
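As a companion to the steady-state relation near the top of this article, k = (Q · L) / (A · ΔT), here is a minimal illustrative calculation; the heater power, sample dimensions and temperature drop are invented for the example.

def steady_state_conductivity(power_W, thickness_m, area_m2, delta_T_K):
    """Fourier's law for 1-D steady-state heat flow: k = Q*L / (A*dT)."""
    return power_W * thickness_m / (area_m2 * delta_T_K)

# A 5 W heater across a 10 mm thick, 50 mm x 50 mm sample showing a 4 K drop:
print(steady_state_conductivity(5.0, 0.010, 0.05 * 0.05, 4.0))  # 5.0 W/(m K)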
Thermal conductivity measurement
[ "Materials_science", "Engineering" ]
3,467
[ "Materials testing", "Materials science" ]
16,515,095
https://en.wikipedia.org/wiki/EF86
The EF86 is a high transconductance sharp cutoff pentode vacuum tube with Noval (B9A) base for audio-frequency applications. It was introduced by the Mullard company in 1953 and was produced by Philips, Mullard, Telefunken, Valvo, and GEC among others. It is very similar electrically to the octal base EF37A and the Rimlock base EF40. Unlike many pentodes, it was designed specifically for audio applications, with low noise and low microphony claimed as advantages, although a rubber-mounted vibration-resistant base was still recommended. It has a much higher stage gain than any triode, which makes it susceptible to microphony. The EF86 was used in many preamplifier designs during the last decades of vacuum tube hi-fi development. An industrial tube variant is known as 6267. In the former Soviet Union a variant was also produced as type 6Zh32P (Russian: 6Ж32П). EF86s have been produced in Russia in two versions under the Electro-Harmonix brand and in the Slovak Republic by JJ Electronic (formerly Tesla). Characteristics 6.3 Volt, 200 mA indirectly heated A.F. miniature pentode with Noval (B9A) base with an EIA 9CQ (or 9BJ) basing diagram. Transconductance: 2.2 mA/V at Ia=3.0 mA, Ig2=0.6 mA, Va=250 V, Vg1=-2.2 V, Vg2=140 V, Vg3=0 V Voltage gain: 185 (45 dB) at Vsupply=250 V, Ik=0.9 mA, Rk=2.2 kilohm, Ra=220 kilohm, Rg1=1 megohm, Vout<44 VRMS Special precautions have been taken in the design to reduce: hum (through a bifilar-wound twisted pair of heater wires), noise, and microphony (through a rigid internal structure that reduces resonances). The EF86 is much less noisy than other pentodes, but slightly noisier than some triodes, at about 2 μV equivalent input noise to 10 kHz. Although used in circuits such as tape recorder input stages and instrument amplifiers, microphony can be a problem, even when the tube is mounted in a vibration-reducing valve holder. Equivalent and similar devices 6267 * Z729 * CV2901 * 6BK8 * 6CF8 * 6F22 * CV8068 * CV10098 Special quality: EF86SQ * M8195 * CV4085 * EF806S Different heater requirements: PF86, 300 mA (4.5 V) UF86, 100 mA (12.6 V) The rarely used EF83 is a remote-cutoff pentode otherwise similar to the EF86; the remote cutoff (variable mu) makes it suitable for applications such as automatic gain control (AGC) in tape recorders. References External links Tubeworld.com's EF86 page Mullard's EF86 at the National Valve Museum The Mullard EF36, EF37 and EF37A at the National Valve Museum TDSL Tube data: EF86 Datasheet for Thorn/Mazda 6F22/EF86 (pdf) Vacuum tubes Guitar amplification tubes
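As a quick arithmetic check (not taken from any datasheet), the quoted voltage gain of 185 corresponds to the 45 dB figure via the standard conversion dB = 20·log10(A):

import math

def voltage_gain_db(linear_gain):
    """Convert a linear voltage gain to decibels: dB = 20*log10(A)."""
    return 20 * math.log10(linear_gain)

print(round(voltage_gain_db(185), 1))  # 45.3 dB, i.e. ~45 dB as quoted above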
EF86
[ "Physics" ]
741
[ "Vacuum tubes", "Vacuum", "Matter" ]
16,523,806
https://en.wikipedia.org/wiki/Od%20%28Unix%29
od is a command on various operating systems for displaying ("dumping") data in various human-readable output formats. The name is an acronym for "octal dump" since it defaults to printing in the octal data format. Overview The od program can display output in a variety of formats, including octal, hexadecimal, decimal, and ASCII. It is useful for visualizing data that is not in a human-readable format, like the executable code of a program, or where the primary form is ambiguous (e.g. some Latin, Greek and Cyrillic characters looking similar). od is one of the earliest Unix programs, having appeared in version 1 AT&T Unix. It is also specified in the POSIX standards. The implementation of od used on Linux systems is usually provided by GNU Core Utilities. Since it predates the Bourne shell, its existence causes an inconsistency in the do loop syntax: other loops and logical blocks are opened by a keyword and closed by the reversed keyword, e.g. if ... fi and case ... esac, but because the reversed form of do was already taken by the od command, the do loop is closed with do ... done. The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. The command has also been ported to the IBM i operating system. Example session Normally a dump of an executable file is very long. The head program prints out the first few lines of the output. Here is an example of a dump of the "Hello world" program, piped through head.

% od hello | head
0000000 042577 043114 000401 000001 000000 000000 000000 000000
0000020 000002 000003 000001 000000 101400 004004 000064 000000
0000040 003610 000000 000000 000000 000064 000040 000006 000050
0000060 000033 000030 000006 000000 000064 000000 100064 004004
0000100 100064 004004 000300 000000 000300 000000 000005 000000
0000120 000004 000000 000003 000000 000364 000000 100364 004004
0000140 100364 004004 000023 000000 000023 000000 000004 000000
0000160 000001 000000 000001 000000 000000 000000 100000 004004
0000200 100000 004004 002121 000000 002121 000000 000005 000000
0000220 010000 000000 000001 000000 002124 000000 112124 004004

Here is an example of od used to diagnose the output of echo, where the user literally inserts a tab and a ^C character after writing "Hello":

% echo "Hello ^C" | od -cb
0000000   H   e   l   l   o  \t 003  \n
        110 145 154 154 157 011 003 012
0000010

See also Hex editor Hex dump References External links od - GNU Core Utilities manpage Unix SUS2008 utilities IBM i Qshell commands
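To illustrate the default output format shown in the first session above (a seven-digit octal offset followed by 16-bit words, each printed as six octal digits), here is a minimal, illustrative re-implementation sketch. It assumes little-endian word packing, as on the machine that produced the ELF dump (note the first word 042577 = 0x457F, i.e. the bytes 0x7F 'E' read as one little-endian word); the real od honors the host byte order and supports many more options.

def octal_dump(data):
    """Print bytes in od's default style: 7-digit octal offsets, then up to
    eight 16-bit little-endian words per line, each as 6 octal digits."""
    for off in range(0, len(data), 16):
        chunk = data[off:off + 16]
        words = []
        for i in range(0, len(chunk), 2):
            lo = chunk[i]
            hi = chunk[i + 1] if i + 1 < len(chunk) else 0
            words.append(format((hi << 8) | lo, '06o'))
        print(format(off, '07o'), *words)
    print(format(len(data), '07o'))   # final offset, like od's last line

octal_dump(b'Hello\t\x03\n')  # first word: 062510 (0x6548, "He" little-endian)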
Od (Unix)
[ "Technology" ]
729
[ "IBM i Qshell commands", "Computing commands" ]
16,524,351
https://en.wikipedia.org/wiki/Protein%E2%80%93protein%20interaction%20screening
Protein–protein interaction screening refers to the identification of protein–protein interactions with high-throughput screening methods such as computer- and/or robot-assisted plate reading and flow cytometry analysis. The interactions between proteins are central to virtually every process in a living cell. Information about these interactions improves understanding of diseases and can provide the basis for new therapeutic approaches. Methods to screen protein–protein interactions Though there are many methods to detect protein–protein interactions, the majority of these methods, such as co-immunoprecipitation, fluorescence resonance energy transfer (FRET) and dual polarisation interferometry, are not screening approaches. Ex vivo or in vivo methods These methods screen protein–protein interactions in living cells. Bimolecular fluorescence complementation (BiFC) is a technique for observing the interactions of proteins. Combined with other new techniques, such as the dual expression recombinase based (DERB) platform, it can enable the screening of protein–protein interactions and their modulators. The yeast two-hybrid screen investigates the interaction between artificial fusion proteins inside the nucleus of yeast. This approach can identify the binding partners of a protein without bias. However, the method has a notoriously high false-positive rate, which makes it necessary to verify the identified interactions by co-immunoprecipitation. In-vitro methods The tandem affinity purification (TAP) method allows the high-throughput identification of protein interactions. In contrast with the Y2H approach, the accuracy of the method can be compared to that of small-scale experiments (Collins et al., 2007), and the interactions are detected within the correct cellular environment, as in co-immunoprecipitation. However, the TAP tag method requires two successive steps of protein purification, and thus cannot readily detect transient protein–protein interactions. Recent genome-wide TAP experiments were performed by Krogan et al., 2006, and Gavin et al., 2006, providing updated protein interaction data for yeast organisms. Chemical crosslinking is often used to "fix" protein interactions in place before trying to isolate and identify the interacting proteins. Common crosslinkers for this application include the non-cleavable NHS-ester crosslinker bis-sulfosuccinimidyl suberate (BS3); a cleavable version of BS3, dithiobis(sulfosuccinimidyl propionate) (DTSSP); and the imidoester crosslinker dimethyl dithiobispropionimidate (DTBP), which is popular for fixing interactions in ChIP assays. References External links HPRD Human Protein Reference Database, a (manually) curated database of human protein information with visualization tools IntAct Interaction Database, a public repository for manually curated molecular interaction data from the literature DIP Database of Interacting Proteins, a manual and automatic catalog of experimentally determined interactions between proteins MIPS Mammalian Protein–Protein Interaction Database, the MIPS mammalian protein–protein interaction database Signal transduction Biochemistry methods Proteomics
Protein–protein interaction screening
[ "Chemistry", "Biology" ]
639
[ "Biochemistry methods", "Neurochemistry", "Biochemistry", "Signal transduction" ]
17,716,161
https://en.wikipedia.org/wiki/Naphthenic%20oil
Crude oil is extracted from the bedrock before being processed in several stages, removing natural contaminants and undesirable hydrocarbons. This separation process produces mineral oil, which can in turn be denoted as paraffinic, naphthenic or aromatic. The differences between these types of oils are not clear-cut, but mainly depend on the predominant hydrocarbon types in the oil. Paraffinic oil, for example, contains primarily higher alkanes, whereas naphthenic oils have a high share of cyclic alkanes in the mixture. Classification Crude oil appears in a host of different forms, which in turn determine how it should be refined. Classification of the crude oil can vary, because different actors have different starting points. For refineries, the interest has been primarily focused on the distribution between the distillation fractions: petrol, paraffin, gas oil, lubricant distillate, etc. Refiners look at the density of the crude oil – whether it is light, medium or heavy – or the sulfur content, i.e. whether the crude oil is “sweet” or “sour”. The general classification of different kinds of crude oil is based on the guidelines drawn up by the American Petroleum Institute (API), in which the properties can vary depending on, for example, hydrocarbon composition and sulfur content. 1. General Crude Oil Classification Crude oil classification provides refiners with a rough guide to appropriate processing conditions for reaching the desired products. Terminology like paraffinic, asphaltic, aromatic and naphthenic has been in use for a long time. With the progress of petroleum science, additional physical and chemical properties have been utilized to further refine the classification of crude oils. 1.1 API gravity Density has always been an important criterion for oils; generally, an oil with low density is considered more valuable than an oil with higher density, because it contains more light fractions (i.e. gasoline). Thus, the API gravity or specific gravity is widely used for the classification of crude oils, based on a scheme proposed by the American Petroleum Institute (Table 1). A high API value (>30) means a light crude with paraffinic character; a low API value means a heavy crude with increasing aromatic character.
Table 1. Classification of crude oil according to API gravity:
Low specific gravity ⇒ high °API value = paraffinic
High specific gravity ⇒ low °API value = naphthenic
1.2 UOP characterisation factor, K factor The UOP characterisation factor (Kw) (UOP 375-07) is based on the observation that the specific gravities of hydrocarbons are related to their H/C ratio (and thus to their chemical character) and that their boiling points are linked to the number of carbon atoms in their molecules. High values of Kw (12.5–13) indicate a predominantly paraffinic character of the components; naphthenic hydrocarbons fall between 11 and 12, and values near 10 indicate aromatic character. 2. Chemical composition The major types of hydrocarbons present in crude oils are 1) normal paraffins, 2) branched paraffins (iso-paraffins), 3) cycloparaffins (naphthenes) and 4) aromatics. 3. General Base Oil Classification 3.1 API base stock classification According to API guidelines, base stocks (the lubricant components obtained after the crude oil is refined) are divided into five general categories.
Table 2. API base stock classifications (*Viscosity Index, section 4.1):
Group I: less than 90% saturates and/or more than 0.03% sulfur; VI* 80–120
Group II: at least 90% saturates and no more than 0.03% sulfur; VI* 80–120
Group III: at least 90% saturates and no more than 0.03% sulfur; VI* 120 or higher
Group IV: polyalphaolefins (PAO)
Group V: all base stocks not included in Groups I–IV (including naphthenic base oils)
Besides the above-mentioned properties, there are other methods that can be used to check if an oil is naphthenic or paraffinic. Among those: 3.2 Viscosity-gravity-constant (VGC) The viscosity gravity constant is a mathematical relationship between the viscosity and specific gravity (ASTM D2501). Paraffinic cuts have lower densities (and specific gravities) than naphthenic ones of about the same distillation range. VGC is of particular value in indicating a predominantly paraffinic or cyclic composition. VGC is low for paraffinic crudes and high for naphthenic. VGC is reported for base stocks and ranges from approximately 0.78 (paraffinic base stocks) to 1.0 (highly aromatic base stocks), and its value provides some guidance on the solvency properties of the oil. Like the results of the n-d-M method, VGC is usually reported for naphthenic products, but not for paraffinic ones. 3.3 n-d-M method One way of obtaining compositional information on lubricating base oil is the n-d-M method (ASTM D3238), an empirical method for determining the carbon type distribution by indicating the percentage of carbon in aromatic structure (%CA), the percentage of carbon in naphthenic structure (%CN) and the percentage of carbon in paraffinic structure (%CP). Development of the n-d-M method was the consequence of much preceding work relating composition to refractive index (n), density (d), and molecular weight (M). 3.4 VGC and refractivity intercept If the viscosity, density, relative density (specific gravity) and refractive index for a mineral oil are determined, the viscosity-gravity constant (VGC) and refractivity intercept (ri) can be calculated. Using the given values, the percent carbons (%CA, %CN, %CP) can be derived from a correlation chart, the ASTM D2140 method. As with the n-d-M method, the results are normally reported for naphthenic oils. 4. Properties of Naphthenic Base Oils Although several systems have been developed for the purpose of classifying crude oils, they are usually referred to as (1) paraffin base, (2) naphthene base, (3) mixed base or (4) asphalt base. However, there appears to be no specific definition for these classifications. Base oil specifications, as defined by the producer or the purchaser, largely encompass the physical properties required for the fluid: density, viscosity, viscosity index (VI), pour point and flash point, and solubility information from aniline point or viscosity-gravity constant (VGC). Naphthenic base oils generally have intermediate VIs and very low pour points, which make them useful in the manufacture of specialty lubricants. Dewaxing is normally not required due to the low quantities of linear paraffins (n-paraffins). 4.1 Viscosity index (VI) Viscosity index (ASTM D2270) is a measure of the extent of viscosity change with temperature; the higher the VI, the less the change. VI is calculated from viscosity measurements at 40°C and 100°C. The viscosities of paraffinic and naphthenic base oils behave very differently with temperature change. Normally, paraffinic base oils show less viscosity variation (higher VI) than naphthenic base oils, which display larger variation with temperature (lower VI). The low to intermediate VI makes naphthenic base oils particularly suitable for specialty applications. 4.2 Pour point The pour point (ASTM D97) measures the temperature at which a base oil no longer flows. For paraffinic base oils, pour points are usually between −12 °C and −15 °C, and are determined by operation of the dewaxing unit. 
The pour points of naphthenic base oils, generally devoid of wax content, may be much lower (down to <−70 °C). 4.3 Aniline point The aniline point (ASTM D611) is of considerable value in the characterization of petroleum products. The aniline point is a measure of the ability of the base oil to act as a solvent and is determined as the temperature at which equal volumes of aniline and the base stock are mutually soluble. High aniline points (approximately 100°C or greater) imply a paraffinic base stock, while low aniline points (less than 100°C) imply a naphthenic or aromatic stock. 4.4 Viscosity gravity constant (VGC) VGC is an indicator of base oil composition and solvency that is calculated from the density and viscosity. High values indicate higher solvency and therefore greater naphthenic or aromatic content. 4.5 Refractive index (RI) The refractive index can provide information on the composition of the base oil. Low RI values indicate paraffinic materials and high RI values indicate aromatic components. The RI value also increases with molecular weight. 5. Naphthenic base oils, summary Contain little or no wax Excellent low temperature properties Good solvency Low aniline point High VGC High proportion of naphthenic molecules %CN (~50%) as determined by ASTM D3238 or ASTM D2140 Preferred for specialty products manufacture and compounding Intermediate VI 6. Areas of application Naphthenic oils have extraordinary low-temperature properties, high compatibility with many polymers and good solvent power. These are properties that make naphthenic oils particularly useful for the speciality oil market: 1. Transformer oils. Naphthenic oils have excellent cooling and insulating properties because of a low viscosity index. The good solubility of the oils is also important for enhanced compatibility with seals and gaskets, for example. 2. Process oils. Naphthenic oils are used in a large number of chemical processes due to their good solvent power. These include, for example, plasticizers in polymer-based formulations, rheology modifiers in printing inks and carrier oils in anti-foaming agents. Naphthenic oils have proven suitable in the tyre oils segment because of their low content of polycyclic aromatic hydrocarbons (PAHs), which are hazardous to health and the environment. 3. Lubricating oils. Base oils are needed to manufacture products such as greases and industrial lubricants. Naphthenic base oils are particularly suited as metalworking fluids. The main functions of the naphthenic oil in this case are cooling and lubrication, providing a balance between the two. References Oils
Naphthenic oil
[ "Chemistry" ]
2,189
[ "Oils", "Carbohydrates" ]
14,862,734
https://en.wikipedia.org/wiki/Alpha-4%20beta-2%20nicotinic%20receptor
The alpha-4 beta-2 nicotinic receptor, also known as the α4β2 receptor, is a type of nicotinic acetylcholine receptor implicated in learning, consisting of α4 and β2 subunits. It is located in the brain, where activation yields post- and presynaptic excitation, mainly by increased Na+ and K+ permeability. Stimulation of this receptor subtype is also associated with growth hormone secretion. People with the inactive CHRNA4 mutation Ser248Phe are on average 10 cm (4 inches) shorter than normal and predisposed to obesity. A 2015 review noted that stimulation of the α4β2 nicotinic receptor in the brain is responsible for certain improvements in attentional performance; among the nicotinic receptor subtypes, nicotine has the highest binding affinity at the α4β2 receptor (Ki ≈ 1 nM), which is also the primary biological target that mediates nicotine's addictive properties. The receptors exist in two stoichiometries: (α4)2(β2)3 receptors have high sensitivity to nicotine and low Ca2+ permeability (HS receptors) (α4)3(β2)2 receptors have low sensitivity to nicotine and high Ca2+ permeability (LS receptors) Structure The α4β2 receptor assembles in two distinct stoichiometric forms. One stoichiometry contains three α4 and two β2 subunits [ (α4)3(β2)2 ] whereas the other stoichiometry contains two α4 and three β2 [ (α4)2(β2)3 ]. The x-ray structure of the (α4)2(β2)3 receptor has been known since 2016 and reveals a circular α–β–β–α–β ordering of subunits. Ligands Agonists 3-Bromocytisine Acetylcholine Cytisine Galantamine Epibatidine Epiboxidine Nicotine A-84,543 A-366,833 ABT-418 Arecoline Altinicline Dianicline Ispronicline Pozanicline Rivanicline Tebanicline TC-1827 Varenicline Sazetidine A: full agonist on (α4)2(β2)3, 6% efficacy on (α4)3(β2)2 N-(3-pyridinyl)-bridged bicyclic diamines PAMs NS-9283: 60-fold left-shifting of concentration-response curve, no change in maximum efficacy Desformylflustrabromine Further compounds (see references) Antagonists (−)-7-methyl-2-exo-[3'-(6-[18F]fluoropyridin-2-yl)-5'-pyridinyl]-7-azabicyclo[2.2.1]heptane 2-fluoro-3-(4-nitro-phenyl)deschloroepibatidine Coclaurine - alkaloid from Nelumbo nucifera Mecamylamine α-Conotoxin PNU-120,596 Bupropion Dihydro-β-erythroidine, selective Nitrous oxide Isoflurane 1-(6-(((R,S)-7-Hydroxychroman-2-yl)methylamino]hexyl)-3-((S)-1-methylpyrrolidin-2-yl)pyridinium bromide (compound 2) (heterobivalent ligand: D2R agonist and nAChR antagonist) NAMs Oxantel See also α3β2-Nicotinic receptor α3β4-Nicotinic receptor α6β2-Nicotinic receptor α7-Nicotinic receptor References Ion channels Addiction Nicotinic acetylcholine receptors
Alpha-4 beta-2 nicotinic receptor
[ "Chemistry" ]
862
[ "Neurochemistry", "Ion channels" ]
14,862,757
https://en.wikipedia.org/wiki/Alpha-7%20nicotinic%20receptor
The alpha-7 nicotinic receptor, also known as the α7 receptor, is a type of nicotinic acetylcholine receptor implicated in long-term memory, consisting entirely of α7 subunits. As with other nicotinic acetylcholine receptors, functional α7 receptors are pentameric [i.e., (α7)5 stoichiometry]. It is located in the brain, spleen, and lymphocytes of lymph nodes, where activation yields post- and presynaptic excitation, mainly by increased Ca2+ permeability. Further, recent work has implicated this receptor as being important for the generation of neurons in the adult mammalian retina. Functional α7 receptors are present in the submucous plexus neurons of the guinea-pig ileum. Medical relevance Recent work has demonstrated a potential role in reducing inflammatory neurotoxicity in stroke, myocardial infarction, sepsis, and Alzheimer's disease. The α7 receptor is strongly implicated in the efficacy of varenicline for smoking cessation therapy, significantly more so than α4β2, which is responsible for nicotine's rewarding effects. Upregulation of α7 nAChRs is strongly correlated with a stronger response. An α7 nicotinic agonist appears to have positive effects on neurocognition in persons with schizophrenia. Activation of the α7 nicotinic acetylcholine receptor on mast cells is a mechanism by which nicotine enhances atherosclerosis. Both α4β2 and α7 nicotinic receptors appear to be critical for memory, working memory, learning, and attention. α7 nicotinic receptors also appear to be involved in cancer progression. They have been shown to mediate cancer cell proliferation and metastasis. α7 receptors are also involved in angiogenic and neurogenic activity, and have anti-apoptotic effects. Ligands Agonists (+)-N-(1-azabicyclo[2.2.2]oct-3-yl)benzo[b]furan-2-carboxamide: potent and highly subtype-selective Tilorone A-582941: partial agonist; activates ERK1/2 and CREB phosphorylation; enhances cognitive performance AR-R17779: full agonist, nootropic Amyloid beta: neurotoxic marker of Alzheimer's disease TC-1698: subtype-selective; neuroprotective effects via activation of the JAK2/PI-3K cascade, neutralized by angiotensin II AT(2) receptor activation Bradanicline — partial agonist, in development for treatment of schizophrenia Encenicline — partial agonist with nootropic properties, in development for treatment of schizophrenia and Alzheimer's disease GTS-21 — partial agonist, in development for treatment of schizophrenia and/or Alzheimer's disease PHA-543,613 — selective and potent agonist with nootropic properties PNU-282,987 — selective and potent agonist, but may cause long QT syndrome PHA-709829: potent and subtype-selective; robust in vivo efficacy in a rat auditory sensory gating model Analogues: improved hERG safety profile over PNU-282,987 SSR-180,711: partial agonist Tropisetron: subtype-selective partial agonist; 5-HT3 receptor antagonist WAY-317,538 — selective potent full agonist with nootropic and neuroprotective properties Anabasine Acetylcholine Nicotine Varenicline Epiboxidine Choline ICH-3: subtype-selective partial agonist Positive allosteric modulators (PAMs) At least two types of positive allosteric modulators (PAMs) can be distinguished. PNU-120,596 NS-1738: marginal effects on α7 desensitization kinetics; modestly brain-penetrant AVL-3288: unlike the above PAMs, AVL-3288 does not affect α7 desensitization kinetics, and is readily brain penetrant. Improves cognitive behavior in animal models In clinical development for cognitive deficits in schizophrenia. 
A-867744 Ivermectin Other Nefiracetam Antagonists Anandamide and ethanol have been found to cause additive inhibition of α7 receptor function by interacting with distinct regions of the receptor. Although ethanol inhibition of the α7 receptor is likely to involve the N-terminal region of the receptor, the site of action for anandamide is located in the transmembrane and carboxyl-terminal domains of the receptor. Anandamide α-Bungarotoxin α-Conotoxin ArIB[V11L,V16D]: potent and highly subtype-selective; slowly reversible β-Caryophyllene Bupropion (very weakly) Dehydronorketamine Ethanol Hydroxybupropion (very weakly) Kynurenic acid Memantine Lobeline Methyllycaconitine Norketamine Quinolizidine (−)-1-epi-207I: α7 subtype preferring blocker Negative allosteric modulators (NAMs) Hydroxynorketamine See also α3β2-Nicotinic receptor α3β4-Nicotinic receptor α4β2-Nicotinic receptor RIC3, a chaperone protein for α7 receptors Endocannabinoids References Nicotinic acetylcholine receptors Ion channels
Alpha-7 nicotinic receptor
[ "Chemistry" ]
1,210
[ "Neurochemistry", "Ion channels" ]
14,863,422
https://en.wikipedia.org/wiki/TRISPHAT
TRISPHAT (full name tris(tetrachlorocatecholato)phosphate(1−)) is an inorganic anion with the formula [P(O2C6Cl4)3]−, often prepared as the tributylammonium (Bu3NH+) or tetrabutylammonium (Bu4N+) salt. The anion features phosphorus(V) bonded to three tetrachlorocatecholate (O2C6Cl42−) ligands. This anion can be resolved into its axially chiral enantiomers, which are optically stable. The TRISPHAT anion has been used as a chiral shift reagent for cations. It improves the resolution of 1H NMR spectra by forming diastereomeric ion pairs. Preparation The anion is prepared by treatment of phosphorus pentachloride with tetrachlorocatechol, followed by a tertiary amine: PCl5 + 3 C6Cl4(OH)2 → H[P(O2C6Cl4)3] + 5 HCl H[P(O2C6Cl4)3] + Bu3N → Bu3NH+ [P(O2C6Cl4)3]− Using a chiral amine, the anion can be readily resolved. References Organophosphates Anions Chloroarenes Catechols Phosphates Nuclear magnetic resonance
TRISPHAT
[ "Physics", "Chemistry" ]
288
[ "Matter", "Nuclear magnetic resonance", "Anions", "Salts", "Phosphates", "Nuclear physics", "Ions" ]
14,873,529
https://en.wikipedia.org/wiki/DTX2
Protein deltex-2, also known as E3 ubiquitin-protein ligase DTX2, is an enzyme that in humans is encoded by the DTX2 gene. DTX2 functions as a ubiquitin E3 ligase in conjunction with the E2 enzyme UBCH5A. References Further reading External links Transcription factors
DTX2
[ "Chemistry", "Biology" ]
73
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,873,775
https://en.wikipedia.org/wiki/ATF5
Activating transcription factor 5, also known as ATF5, is a protein that, in humans, is encoded by the ATF5 gene. Function First described by Nishizawa and Nagata, ATF5 has been classified as a member of the activating transcription factor (ATF)/cAMP response-element binding protein (CREB) family. ATF5 transcripts and protein are expressed in a wide variety of tissues, with particularly high transcript expression in the liver. It is also present in a variety of tumor cell types. ATF5 expression is regulated at both the transcriptional and the translational level. ATF5 is expressed in the ventricular zone (VZ) and subventricular zone (SVZ) during brain development. The human ATF5 protein is made up of 282 amino acids. ATF5 is a transcription factor that contains a bZip domain. See also Activating transcription factor Interactions ATF5 has been shown to interact with DISC1 and TRIB3. References Further reading External links Transcription factors Biology of bipolar disorder
ATF5
[ "Chemistry", "Biology" ]
208
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,873,851
https://en.wikipedia.org/wiki/Nordic%20Data%20Grid%20Facility
The Nordic Data Grid Facility, or NDGF, is a common e-Science infrastructure provided by the Nordic countries (Denmark, Finland, Norway, Sweden and Iceland) for scientific computing and data storage. It is the first and so far only internationally distributed WLCG Tier1 center, providing computing and storage services to experiments at CERN. History The Nordic Data Grid Facility traces its history back to the end of 2001 and is intrinsically related to the NorduGrid project. The success of the latter indicated the need for a larger pan-Nordic facility, with storage resources being of high priority. This need was addressed by establishing a pilot NDGF infrastructure, which was operational from 2002 to 2005 and provided distributed storage in addition to the NorduGrid computing resources. During this phase, NDGF committed to providing a Nordic Tier1 (regional computing center) for the Worldwide LHC Computing Grid project at CERN. The specifics of this Tier1 are such that it has to be an internationally distributed facility. The Nordic Data Grid Facility in its present function as a provider of the Nordic Grid Infrastructure was established in April 2006 by the Nordic Research Councils. It came into operation on June 1, 2006, and its initial priority was to live up to the original commitment of establishing the Nordic Tier1, with the traditional focus on storage facilities. The NDGF team includes software experts who take part in the development of various Grid middleware. In 2012 NDGF became a part of a wider initiative, the Nordic e-Infrastructure Collaboration. Users and operations The NDGF Tier1 is a production Grid facility that leverages existing national computational resources and Grid infrastructures. To qualify for support, research groups should form a Virtual Organization, a VO. The VO provides compute resources for sharing, and the NDGF Tier1 operates a Grid interface for the sharing of these resources. Currently, most computational resources of the NDGF Tier1 are accessible through the ARC middleware. Some resources are also available via the AliEn software. The distributed storage facility is realised through the dCache storage management solution. Today, the dominant user community of the NDGF Tier1 is high-energy physics – the ALICE, ATLAS and CMS Virtual Organizations – through the operation of the Nordic Tier1, which together with the Tier0, CERN, and the other 12 Tier1s collects, stores and processes the data produced by the Large Hadron Collider at CERN. Since 2010, the NDGF Tier1 has been a part of the European Grid Infrastructure. The NDGF Tier1 was hosted by NORDUnet from 2006 to 2011, and since 2012 has been hosted by NordForsk. NDGF vs NorduGrid Many confuse NDGF and NorduGrid – which is not surprising, especially since in its second phase NDGF was proposed to assume the name "NorduGrid". It was, however, decided to distinguish between the mostly development-oriented project, NorduGrid, and the mostly operations-oriented one, NDGF. As a rule of thumb, NDGF provides mostly services, while NorduGrid provides mostly the ARC middleware. See also NeIC, the organisation operating NDGF Tier-1 since 2012 NorduGrid collaboration, provider of the ARC middleware NORDUnet, the NDGF hosting organisation in 2006-2011 References External links NDGF Tier-1 page CERN, host of the Worldwide LHC Computing Grid dCache, system for storing and retrieving massive data Distributed computing projects Grid computing Information technology organizations based in Europe Institutes associated with CERN
Nordic Data Grid Facility
[ "Engineering" ]
713
[ "Distributed computing projects", "Information technology projects" ]
14,873,917
https://en.wikipedia.org/wiki/FOSB
Protein fosB, also known as FosB and G0/G1 switch regulatory protein 3 (G0S3), is a protein that in humans is encoded by the FBJ murine osteosarcoma viral oncogene homolog B (FOSB) gene. The FOS gene family consists of four members: FOS, FOSB, FOSL1, and FOSL2. These genes encode leucine zipper proteins that can dimerize with proteins of the JUN family (e.g., c-Jun, JunD), thereby forming the transcription factor complex AP-1. As such, the FOS proteins have been implicated as regulators of cell proliferation, differentiation, and transformation. FosB and its truncated splice variants, ΔFosB and the further truncated Δ2ΔFosB, are all involved in osteosclerosis, although Δ2ΔFosB lacks a known transactivation domain, in turn preventing it from affecting transcription through the AP-1 complex. The ΔFosB splice variant has been identified as playing a central, crucial role in the development and maintenance of addiction. ΔFosB overexpression (i.e., an abnormally and excessively high level of ΔFosB expression which produces a pronounced gene-related phenotype) triggers the development of addiction-related neuroplasticity throughout the reward system and produces a behavioral phenotype that is characteristic of an addiction. ΔFosB differs from the full-length FosB and further truncated Δ2ΔFosB in its capacity to produce these effects, as only accumbal ΔFosB overexpression is associated with pathological responses to drugs. DeltaFosB DeltaFosB – more commonly written as ΔFosB – is a truncated splice variant of the FOSB gene. ΔFosB has been implicated as a critical factor in the development of virtually all forms of behavioral and drug addictions. In the brain's reward system, it is linked to changes in a number of other gene products, such as CREB and sirtuins. In the body, ΔFosB regulates the commitment of mesenchymal precursor cells to the adipocyte or osteoblast lineage. In the nucleus accumbens, ΔFosB functions as a "sustained molecular switch" and "master control protein" in the development of an addiction. In other words, once "turned on" (sufficiently overexpressed) ΔFosB triggers a series of transcription events that ultimately produce an addictive state (i.e., compulsive reward-seeking involving a particular stimulus); this state is sustained for months after cessation of drug use due to the abnormal and exceptionally long half-life of ΔFosB isoforms. ΔFosB expression in D1-type nucleus accumbens medium spiny neurons directly and positively regulates drug self-administration and reward sensitization through positive reinforcement while decreasing sensitivity to aversion. Based upon the accumulated evidence, a medical review from late 2014 argued that accumbal ΔFosB expression can be used as an addiction biomarker and that the degree of accumbal ΔFosB induction by a drug is a metric for how addictive it is relative to others. Chronic administration of anandamide (N-arachidonylethanolamide, AEA), an endogenous cannabinoid, or of additives such as sucralose, a noncaloric sweetener used in many everyday food products, has been found to induce overexpression of ΔFosB in the infralimbic cortex (Cx), nucleus accumbens (NAc) core and shell, and central nucleus of the amygdala (Amy), which induces long-term changes in the reward system. Role in addiction Chronic addictive drug use causes alterations in gene expression in the mesocorticolimbic projection, which arise through transcriptional and epigenetic mechanisms. 
The most important transcription factors that produce these alterations are ΔFosB, cyclic adenosine monophosphate (cAMP) response element binding protein (CREB), and nuclear factor kappa B (NF-κB). ΔFosB is the most significant biomolecular mechanism in addiction because the overexpression of ΔFosB in the D1-type medium spiny neurons in the nucleus accumbens is necessary and sufficient for many of the neural adaptations and behavioral effects (e.g., expression-dependent increases in drug self-administration and reward sensitization) seen in drug addiction. ΔFosB overexpression has been implicated in addictions to alcohol, cannabinoids, cocaine, methylphenidate, nicotine, opioids, phencyclidine, propofol, and substituted amphetamines, among others. ΔJunD, a transcription factor, and G9a, a histone methyltransferase, both oppose the function of ΔFosB and inhibit increases in its expression. Increases in nucleus accumbens ΔJunD expression (via viral vector-mediated gene transfer) or G9a expression (via pharmacological means) reduce, or with a large increase can even block, many of the neural and behavioral alterations seen in chronic drug abuse (i.e., the alterations mediated by ΔFosB). Repression of c-Fos by ΔFosB, which consequently further induces expression of ΔFosB, forms a positive feedback loop that serves to indefinitely perpetuate the addictive state. ΔFosB also plays an important role in regulating behavioral responses to natural rewards, such as palatable food, sex, and exercise. Natural rewards, similar to drugs of abuse, induce gene expression of ΔFosB in the nucleus accumbens, and chronic acquisition of these rewards can result in a similar pathological addictive state through ΔFosB overexpression. Consequently, ΔFosB is the key mechanism involved in addictions to natural rewards (i.e., behavioral addictions) as well; in particular, ΔFosB in the nucleus accumbens is critical for the reinforcing effects of sexual reward. Research on the interaction between natural and drug rewards suggests that dopaminergic psychostimulants (e.g., amphetamine) and sexual behavior act on similar biomolecular mechanisms to induce ΔFosB in the nucleus accumbens and possess bidirectional reward cross-sensitization effects that are mediated through ΔFosB. This phenomenon is notable since, in humans, a dopamine dysregulation syndrome, characterized by drug-induced compulsive engagement in natural rewards (specifically, sexual activity, shopping, and gambling), has also been observed in some individuals taking dopaminergic medications. ΔFosB inhibitors (drugs or treatments that oppose its action or reduce its expression) may be an effective treatment for addiction and addictive disorders. Current medical reviews of research involving lab animals have identified a drug class – class I histone deacetylase inhibitors – that indirectly inhibits the function and further increases in the expression of accumbal ΔFosB by inducing G9a expression in the nucleus accumbens after prolonged use. 
These reviews, and subsequent preliminary evidence that used oral or intraperitoneal administration of the sodium salt of butyric acid or other class I HDAC inhibitors for an extended period, indicate that these drugs have efficacy in reducing addictive behavior in lab animals that have developed addictions to ethanol, psychostimulants (i.e., amphetamine and cocaine), nicotine, and opiates; however, few clinical trials involving humans with addiction and any HDAC class I inhibitors have been conducted to test for treatment efficacy in humans or identify an optimal dosing regimen. Plasticity in cocaine addiction ΔFosB levels have been found to increase upon the use of cocaine. Each subsequent dose of cocaine continues to increase ΔFosB levels with no apparent ceiling of tolerance. Elevated levels of ΔFosB lead to increases in brain-derived neurotrophic factor (BDNF) levels, which in turn increases the number of dendritic branches and spines present on neurons involved with the nucleus accumbens and prefrontal cortex areas of the brain. This change can be identified rather quickly, and may be sustained weeks after the last dose of the drug. Transgenic mice with inducible expression of ΔFosB primarily in the nucleus accumbens and dorsal striatum exhibit sensitized behavioural responses to cocaine. They self-administer cocaine at lower doses than controls, but have a greater likelihood of relapse when the drug is withheld. ΔFosB increases the expression of AMPA receptor subunit GluR2 and also decreases expression of dynorphin, thereby enhancing sensitivity to reward. Summary of addiction-related plasticity Other functions in the brain Viral overexpression of ΔFosB in the output neurons of the nigrostriatal dopamine pathway (i.e., the medium spiny neurons in the dorsal striatum) induces levodopa-induced dyskinesias in animal models of Parkinson's disease. Dorsal striatal ΔFosB is overexpressed in rodents and primates with dyskinesias; postmortem studies of individuals with Parkinson's disease who were treated with levodopa have also observed similar dorsal striatal ΔFosB overexpression. Levetiracetam, an antiepileptic drug, has been shown to dose-dependently decrease the induction of dorsal striatal ΔFosB expression in rats when co-administered with levodopa; the signal transduction involved in this effect is unknown. ΔFosB expression in the nucleus accumbens shell increases resilience to stress and is induced in this region by acute exposure to social defeat stress. Antipsychotic drugs have been shown to increase ΔFosB as well, more specifically in the prefrontal cortex. This increase has been found to be part of pathways for the negative side effects that such drugs produce. See also AP-1 (transcription factor) Notes Image legend References Further reading External links ROLE OF ΔFOSB IN THE NUCLEUS ACCUMBENS KEGG Pathway – human alcohol addiction KEGG Pathway – human amphetamine addiction KEGG Pathway – human cocaine addiction Oncogenes Transcription factors
FOSB
[ "Chemistry", "Biology" ]
2,161
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,873,938
https://en.wikipedia.org/wiki/FOSL2
Fos-related antigen 2 (FRA2) is a protein that in humans is encoded by the FOSL2 gene. Function The Fos gene family consists of 4 members: c-Fos, FOSB, FOSL1, and FOSL2. These genes encode leucine zipper proteins that can dimerize with proteins of the JUN family, thereby forming the transcription factor complex AP-1. As such, the FOS proteins have been implicated as regulators of cell proliferation, differentiation, and transformation. See also AP-1 (transcription factor) References Further reading External links Transcription factors
FOSL2
[ "Chemistry", "Biology" ]
126
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,874,007
https://en.wikipedia.org/wiki/SEPT8
Septin-8 is a protein that in humans is encoded by the SEPT8 gene. Function SEPT8 is a member of the highly conserved septin family. Septins are 40- to 60-kD GTPases that assemble as filamentous scaffolds. They are involved in the organization of submembranous structures, in neuronal polarity, and in vesicle trafficking. Interactions SEPT8 has been shown to interact with PFTK1 and SEPT5. References Further reading
SEPT8
[ "Chemistry" ]
107
[ "Biochemistry stubs", "Protein stubs" ]
14,874,037
https://en.wikipedia.org/wiki/EXOC7
Exocyst complex component 7 is a protein that in humans is encoded by the EXOC7 gene. It was formerly known as Exo70. It forms one subunit of the exocyst complex. First discovered in Saccharomyces cerevisiae, this and other exocyst proteins have been observed in several other eukaryotes, including humans. In S. cerevisiae, the exocyst complex is involved in the late stages of exocytosis, and is localised at the tip of the bud, the major site of exocytosis in yeast. It interacts with the Rho3 GTPase. This interaction mediates one of the three known functions of Rho3 in cell polarity: vesicle docking and fusion with the plasma membrane (the other two functions are regulation of actin polarity and transport of exocytic vesicles from the mother cell to the bud). In humans, the functions of this protein and the exocyst complex are less well characterised: this protein is expressed in several tissues and is thought to also be involved in exocytosis. Interactions EXOC7 has been shown to interact with EXOC4 and RHOQ. References Further reading Protein families
EXOC7
[ "Biology" ]
257
[ "Protein families", "Protein classification" ]
14,874,270
https://en.wikipedia.org/wiki/HIVEP1
Zinc finger protein 40 is a protein that in humans is encoded by the HIVEP1 gene. Members of the ZAS family, such as ZAS1 (HIVEP1), are large proteins that contain a ZAS domain, a modular protein structure consisting of a pair of C2H2 zinc fingers with an acidic-rich region and a serine/threonine-rich sequence. These proteins bind specific DNA sequences, including the kappa-B motif (GGGACTTTCC), in the promoters and enhancer regions of several genes and viruses, including human immunodeficiency virus (HIV). ZAS genes span more than 150 kb and contain at least 10 exons, one of which is longer than 5.5 kb (Allen and Wu, 2004).[supplied by OMIM] References Further reading External links Transcription factors
HIVEP1
[ "Chemistry", "Biology" ]
175
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,874,292
https://en.wikipedia.org/wiki/HLX%20%28gene%29
Homeobox Protein HB24 is a protein that in humans is encoded by the HLX gene. Role in development Hlx belongs to the class of homeobox transcription factors and was initially cloned from a B-lymphocyte cell line. Targeted knockout of the gene has demonstrated its vital role in liver and gut organogenesis. Its expression is first detected at embryonic day 9.5 (E9.5) in the splanchnic mesoderm caudal to the level of the heart and foregut pocket, and in the branchial arches. Around E10–E12.5, the expression becomes more prominent in the mesenchyme of the visceral organs of the gut such as the liver, intestines and gall bladder. Hlx is essential for liver and gut expansion, but not for the onset of their development. Heterozygous knockouts of Hlx (Hlx +/−) are normal, whereas homozygous knockouts (Hlx −/−) develop severe hypoplasia of the liver and gut along with anaemia. Hlx controls the epithelial-mesenchymal interaction necessary for liver and gut expansion. At E8.0, the primary liver bud is formed from the midgut endoderm in response to signals from the cardiogenic mesoderm. This is followed by signals from the septum transversum that induce epithelial-mesenchymal transition in the hepatic progenitors of the gut endoderm. In a third stage, these signaling factors induce the liver endoderm to undergo proliferation and form liver cords. The same factor controls gut proliferation, and Hlx governs its expression. Although these mice develop anaemia, it is likely due to insufficient support from the liver in producing the matrix components needed for hematopoiesis rather than an intrinsic defect in the hematopoietic cells. References Further reading External links Transcription factors
HLX (gene)
[ "Chemistry", "Biology" ]
414
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,874,339
https://en.wikipedia.org/wiki/IGHA2
Ig alpha-2 chain C region is a protein that in humans is encoded by the IGHA2 gene. References Further reading Proteins
IGHA2
[ "Chemistry" ]
30
[ "Biomolecules by chemical classification", "Protein stubs", "Biochemistry stubs", "Molecular biology", "Proteins" ]
14,874,442
https://en.wikipedia.org/wiki/EPHA6
Ephrin type-A receptor 6 is a protein that in humans is encoded by the EPHA6 gene. EphA6 may play an important role in breast carcinogenesis and may serve as a novel prognostic indicator and therapeutic target for breast cancer, particularly in patients with steroid receptor-negative expression and HER‑2 overexpression. References Further reading External links Tyrosine kinase receptors
EPHA6
[ "Chemistry" ]
82
[ "Tyrosine kinase receptors", "Signal transduction" ]
14,874,586
https://en.wikipedia.org/wiki/MEOX2
Homeobox protein MOX-2 is a protein that in humans is encoded by the MEOX2 gene. Function This gene encodes a member of a subfamily of non-clustered, diverged, antennapedia-like homeobox-containing genes. The encoded protein may play a role in the regulation of vertebrate limb myogenesis. Mutations in the related mouse protein may be associated with craniofacial and/or skeletal abnormalities, in addition to neurovascular dysfunction observed in Alzheimer's disease. MEOX2 has been implicated in the initiation of tumors in glioma. Additionally, MEOX2 influences several critical processes in lung cancer, including cellular proliferation, invasion, metastasis, angiogenesis, and the development of drug resistance. Interactions MEOX2 has been shown to interact with PAX1 and PAX3. References Further reading External links Transcription factors
MEOX2
[ "Chemistry", "Biology" ]
185
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,874,715
https://en.wikipedia.org/wiki/NFIA
Nuclear factor 1 A-type is a protein that in humans is encoded by the NFIA gene. Function Nuclear factor I (NFI) proteins constitute a family of dimeric DNA-binding proteins with similar, and possibly identical, DNA-binding specificity. They function as cellular transcription factors and as replication factors for adenovirus DNA replication. Diversity in this protein family is generated by multiple genes, differential splicing, and heterodimerization.[supplied by OMIM] References Further reading External links Transcription factors
NFIA
[ "Chemistry", "Biology" ]
107
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,874,725
https://en.wikipedia.org/wiki/NFIX
Nuclear factor 1 X-type is a protein that in humans is encoded by the NFIX gene. NFI-X3, a splice variant of NFIX, regulates glial fibrillary acidic protein and YKL-40 in astrocytes. Interactions Nfix has been shown to interact with the SKI protein, and it is also known to interact with AP-1. NFI-X3 has been shown to interact with STAT3. In embryonic cells, Nfix has been shown to regulate intermediate progenitor cell (IPC) generation by promoting the transcription of the protein inscuteable (INSC). INSC regulates spindle orientation to facilitate the division of radial glia cells into IPCs. Nfix is thought to be necessary for the commitment of glial progeny to the intermediate progenitor fate. Mutations may cause overproduction of radial glia, impaired and improperly timed IPC development, and underproduction of neurons. In adult development, the timing of neural differentiation is regulated by Nfix to promote ongoing growth of the hippocampus and proper memory function. Nfix may suppress oligodendrocyte expression so cells remain committed to neuron development within the dentate gyrus. Intermediate progenitor cells can divide to produce neuroblasts. Neurons produced by Nfix-null IPCs do not mature, usually die, and can contribute to cognitive impairments. Nfix interacts with myostatin and regulates the temporal progression of muscle regeneration through modulation of myostatin expression. Nfix also inhibits the slow-twitch muscle phenotype. References Further reading External links Transcription factors
NFIX
[ "Chemistry", "Biology" ]
348
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,875,325
https://en.wikipedia.org/wiki/60S%20ribosomal%20protein%20L21
60S ribosomal protein L21 is a protein that in humans is encoded by the RPL21 gene. Ribosomes, the organelles that catalyze protein synthesis, consist of a small 40S subunit and a large 60S subunit. Together these subunits are composed of 4 RNA species and approximately 80 structurally distinct proteins. This gene encodes a ribosomal protein that is a component of the 60S subunit. The protein belongs to the L21E family of ribosomal proteins. It is located in the cytoplasm. As is typical for genes encoding ribosomal proteins, there are multiple processed pseudogenes of this gene dispersed through the genome. Clinical relevance Mutations in the RPL21 gene result in hypotrichosis simplex of the scalp. References Further reading External links Ribosomal proteins
60S ribosomal protein L21
[ "Chemistry" ]
163
[ "Biochemistry stubs", "Protein stubs" ]
9,729,735
https://en.wikipedia.org/wiki/Stonesetting
Stonesetting is the art of securely setting or attaching gemstones into jewelry. Cuts There are two general types of gemstone cutting: cabochon and facet. Cabochons are smooth, often domed, with flat backs. Agates and turquoise are usually cut this way, but precious stones such as rubies, emeralds and sapphires may also be. Many stones like star sapphires and moonstones must be cut this way in order to properly display their unusual appearance. A faceted shape resembles that of the modern diamond. It has flat, polished surfaces, typically transparent, that refract light inside the gemstone and reflect light on the outside. In the case of a cabochon stone, the side of the stone is usually cut at a shallow angle, so that when the bezel is pushed over the stone, the angle permits it to hold the stone tightly in place. In the case of faceted stones, a shallow groove is cut into the side of the bezel into which the girdle of the stone is placed, with metal prongs then pushed over the face of the stone, holding it in place; cabochons may also be set into prong settings. In both cases, the pressure and the angle of the prongs hold the stone in place. Just as the angle of the sides of a cabochon creates the pressure to hold the stone in place, so there is an overlying principle in setting faceted stones. If one looks at a side view of a round diamond, for example, one will see that there is an outer edge, called the girdle; the top angles up from there, and the bottom angles down from there. Faceted stones are set by "pinching" that angle with metal. All of the styles of faceted stone setting use this concept in one way or another. Settings There are thousands of variations of setting styles, but there are several fundamental types: Bezel The earliest known technique of attaching stones to jewelry was bezel setting. A bezel is a strip of metal bent into the shape and size of the stone and then soldered to the piece of jewelry. The stone is then inserted into the bezel, and the metal edge of the bezel pressed over the edge of the stone, holding it in place. This method works well for either cabochons or faceted stones. Prong A prong setting is the simplest and most common type of setting, largely because it uses the least amount of metal to hold the stone in place, displaying most of the stone and forming a secure setting. Generally, a prong setting is formed of a number of short, thin strips of metal, called prongs, which are arranged in a shape and size to hold the given stone, and are fixed at the base. Then a burr of the proper size is used to cut what is known as a "bearing", which is a notch that corresponds to the angles of the stone. The burr most often used is called a "hart burr", and is angled and sized for the job of setting diamonds. The bearing is cut equally into all of the prongs and at the same height above the base. The stone is then inserted, and pliers or a pusher are used to bend the prongs gently over the crown of the stone; the tops of the prongs are then clipped off with snips, filed to an even height above the stone, and finished. Usually a "cup burr" is used to give the prong a nice round tip. A cup burr is in the shape of a hemisphere with teeth on the inside, for making rounded tips on wires and prongs. 
There are many variations of prong settings, including just two prongs, the more common four-prong setting, or up to 24 or more, with many variations involving decoration, size and shapes of the prongs themselves, and how they are fixed or used in jewelry. The method of setting is generally the same for all, no matter how many prongs are present. Channel A channel setting is a method whereby stones are suspended between two bars or strips of metal, called channels. Typically, a line of small stones set between two bars is called a channel setting, and a design where the bars cross the stones is called a bar set. The channel is a variation of a "U" shape, with two sides and a bottom. The sides are made slightly narrower than the width of the stone or stones to be set, and then, using the same burrs as in prong setting, a small notch, called a bearing, is cut into each wall. The stone is put in place in those notches, and the metal on top is pushed down, tightening the stone in place. The proper way to set a channel is to cut a notch for each stone, but for cheaper production work sometimes a groove is cut along each channel. Since the metal can be very stiff and strong, a reciprocating hammer, similar to a jackhammer but jewelry-sized, may be used to hammer down the metal, as it can be difficult to do by hand. The metal is then filed down and finished, and the inner edge near the stones cleaned up and straightened as necessary. As with all jewelry, there can be many variations of channel work. At times the walls will be raised—sometimes a center stone will be set between two bars that rise high from the base ring—or the channel may be cut directly into the surface, making the stones flush with the metal. Bead "Bead setting" is a generic term for setting a stone directly into metal using gravers, also called burins, which are essentially tiny chisels. A hole is drilled directly into the surface of the metal, before a ball burr is used to make a concave depression the size of the stone. Some setters will set the stone into the concave depression, and some will use a hart burr to cut a bearing around the edge. The stone is then inserted into the space, and gravers or burins are used to lift and push a tiny bit of the metal into and over the edge of the stone. Then a beading tool – a simple steel shaft with a concave dimple cut into the tip – is pushed onto the bit of metal, rounding and smoothing it, pushing it firmly onto the stone, and creating a "bead". There are many types of setting that use the bead setting technique. When many stones are set closely together in this fashion, covering a surface, this is known as a "pavé" setting, from the French for "paved" or "cobblestoned". When a long line is engraved into the metal going up to each of the beads, this is known as a "star setting". The other common usage of this setting is known as "bead and bright", "grain setting" or "threading" in Europe, alongside many other names. This is when, after the stone is set as described above, the background metal around the stone is cut away, usually in geometric shapes, resulting in the stone being left with four beads in a lowered box shape with an edge around it. Often it is a row of stones, so it will be in a long shape with a raised edge and a row of stones and beads down the center. This type of setting is still used often, but it was very common in the early- to mid-20th century. 
Burnish Burnish setting, also sometimes referred to as flush setting, shot setting, or gypsy setting, is similar to bead setting, but after the stone is inserted into the space, instead of using a graver to lift beads, a burnishing tool is used to push the metal around the stone. The stone will be roughly flush with the surface, with a burnished or rubbed edge around it. This type of setting has a long history, and has seen a resurgence in contemporary jewelry. Sometimes the metal is finished using sandblasting. References Jewellery making Gemstones
Stonesetting
[ "Physics" ]
1,670
[ "Materials", "Gemstones", "Matter" ]
9,731,029
https://en.wikipedia.org/wiki/Ceiling%20rose
In the United Kingdom and Australia, a ceiling rose is a decorative element affixed to the ceiling from which a chandelier or light fitting is often suspended. They are typically round in shape and display a variety of ornamental designs. In modern British wiring setups, light fittings usually use loop-in ceiling roses, which also include the functionality of a junction box. Etymology The rose has symbolised secrecy since Roman times, due to a confused association with the Egyptian god Horus. For its associations with ceilings and confidentiality, refer to the Scottish Government's Sub Rosa initiative. Through its promise of secrecy, the rose, suspended above a meeting table, symbolises the freedom to speak plainly without repercussion. The physical carving of a rose on a ceiling was used for this purpose during the rule of England's Tudor King Henry VIII and has over the centuries evolved into a standard item of domestic vernacular architecture, to such an extent that it now constitutes a term for the aforementioned circular device that conceals and comprises the wiring box for an overhead light fitting. See also Sub rosa References Electrical wiring
Ceiling rose
[ "Physics", "Engineering" ]
223
[ "Electrical systems", "Building engineering", "Physical systems", "Electrical engineering", "Electrical wiring" ]
9,733,137
https://en.wikipedia.org/wiki/Promela
PROMELA (Process or Protocol Meta Language) is a verification modeling language introduced by Gerard J. Holzmann. The language allows for the dynamic creation of concurrent processes to model, for example, distributed systems. In PROMELA models, communication via message channels can be defined to be synchronous (i.e., rendezvous) or asynchronous (i.e., buffered). PROMELA models can be analyzed with the SPIN model checker to verify that the modeled system produces the desired behavior. An implementation verified with Isabelle/HOL is also available, as part of the Computer Aided Verification of Automata (CAVA) project. Files written in Promela traditionally have a .pml file extension. Introduction PROMELA is a process-modeling language whose intended use is to verify the logic of parallel systems. Given a program in PROMELA, Spin can verify the model for correctness by performing random or iterative simulations of the modeled system's execution, or it can generate a C program that performs a fast exhaustive verification of the system state space. During simulations and verifications, SPIN checks for the absence of deadlocks, unspecified receptions, and unexecutable code. The verifier can also be used to prove the correctness of system invariants, and it can find non-progress execution cycles. Finally, it supports the verification of linear-time temporal constraints, either with Promela never-claims or by directly formulating the constraints in temporal logic. Each model can be verified with SPIN under different types of assumptions about the environment. Once the correctness of a model has been established with SPIN, that fact can be used in the construction and verification of all subsequent models. PROMELA programs consist of processes, message channels, and variables. Processes are global objects that represent the concurrent entities of the distributed system. Message channels and variables can be declared either globally or locally within a process. Processes specify behavior; channels and global variables define the environment in which the processes run. Language reference Data types The basic data types used in PROMELA are bit and bool, which are synonyms for a single bit of information; byte, an unsigned quantity that can store a value between 0 and 255; and short and int, signed quantities that differ only in the range of values they can hold (on a PC i386/Linux machine, a byte is 8 bits wide, a short 16 bits and an int 32 bits). Variables can also be declared as arrays. For example, the declaration int x[10]; declares an array of 10 integers that can be accessed in array subscript expressions like x[0] = x[1] + x[2]; Arrays cannot be initialised with an enumerated list of values on creation; each element must be assigned individually. The index to an array can be any expression that determines a unique integer value. The effect of an index outside the range is undefined. Multi-dimensional arrays can be defined indirectly with the help of the typedef construct (see below). Processes The state of a variable or of a message channel can only be changed or inspected by processes. The behavior of a process is defined by a proctype declaration. For example, the following declares a process type A with one variable state: proctype A() { byte state; state = 3 } The definition only declares process behavior; it does not execute it. Initially, in the PROMELA model, just one process will be executed: a process of type init, which must be declared explicitly in every PROMELA specification. 
New processes can be spawned using the run statement, which takes as its argument the name of a proctype, from which a process is then instantiated. The run operator can be used in the body of any proctype definition, not only in the initial process. This allows for dynamic creation of processes in PROMELA. An executing process disappears when it terminates—that is, when it reaches the end of the body in the definition, and all child processes that it started have terminated. A proctype may also be active (below). Atomic construct By prefixing a sequence of statements enclosed in curly braces with the keyword atomic, the user can indicate that the sequence is to be executed as one indivisible unit, non-interleaved with any other processes. Atomic sequences can be an important tool in reducing the complexity of verification models. Note that atomic sequences restrict the amount of interleaving that is allowed in a distributed system. Intractable models can be made tractable by labeling all manipulations of local variables with atomic sequences. Message passing Message channels are used to model the transfer of data from one process to another. They are declared either locally or globally, for instance as follows: chan qname = [16] of { short } This declares a buffered channel that can store up to 16 messages of type short (the capacity is 16 here). The statement: qname ! expr; sends the value of the expression expr to the channel with name qname, that is, it appends the value to the tail of the channel. The statement: qname ? msg; receives the message, retrieves it from the head of the channel, and stores it in the variable msg. The channels pass messages in first-in-first-out order. A rendezvous port can be declared as a message channel with a store length of zero. For example, the following: chan port = [0] of { byte } defines a rendezvous port that can pass messages of type byte. Message interactions via such rendezvous ports are by definition synchronous, i.e. the sender or receiver (whichever arrives first at the channel) will block until the other party arrives. When a buffered channel has been filled to its capacity (sending is "capacity" number of outputs ahead of receiving inputs), the default behavior of the channel is to become synchronous, and the sender will block on the next send. Observe that there is no common message buffer shared between channels. At the cost of increased complexity compared to using a channel unidirectionally and point-to-point, it is possible to share channels between multiple receivers or multiple senders, and to merge independent data streams into a single shared channel. From this it follows that a single channel may also be used for bidirectional communication. Control flow constructs There are three control flow constructs in PROMELA: the case selection, the repetition and the unconditional jump. Case selection The simplest construct is the selection structure. Using the relative values of two variables a and b, for example, one can write: if :: (a != b) -> option1 :: (a == b) -> option2 fi The selection structure contains two execution sequences, each preceded by a double colon. One sequence from the list will be executed. A sequence can be selected only if its first statement is executable. The first statement of a control sequence is called a guard. In the example above, the guards are mutually exclusive, but they need not be. If more than one guard is executable, one of the corresponding sequences is selected non-deterministically. If all guards are unexecutable, the process will block until one of them can be selected. 
(By contrast, the occam programming language would stop, or not be able to proceed, when no guard is executable.) For instance, given if :: A -> value = 3 :: A -> value = 4 fi the consequence of the non-deterministic choice is that, if A is true, both choices may be taken. In "traditional" programming, one would understand an if structure sequentially. Here, the if – double colon – double colon must be understood as "any one being ready", and if none is ready, the process blocks (or, if an else option is present, that option is taken). In the example above, value is non-deterministically given the value 3 or 4. There are two pseudo-statements that can be used as guards: the timeout statement and the else statement. The timeout statement models a special condition that allows a process to abort waiting for a condition that may never become true. The else statement can be used as the initial statement of the last option sequence in a selection or iteration statement. The else is only executable if all other options in the same selection are not executable. Also, the else may not be used together with channels. Repetition (loop) A logical extension of the selection structure is the repetition structure. For example: do :: count = count + 1 :: count = count - 1 :: (count == 0) -> break od describes a repetition structure in PROMELA. Only one option can be selected at a time. After the option completes, the execution of the structure is repeated. The normal way to terminate the repetition structure is with a break statement. It transfers control to the instruction that immediately follows the repetition structure. Unconditional jumps Another way to break a loop is the goto statement. For example, one can modify the example above as follows: do :: count = count + 1 :: count = count - 1 :: (count == 0) -> goto done od done: skip The goto in this example jumps to a label named done. A label can only appear before a statement. To jump to the end of the program, for example, a dummy statement such as skip is useful: it is a place-holder that is always executable and has no effect. Assertions An important language construct in PROMELA that needs a little explanation is the assert statement. Statements of the form: assert(any_boolean_condition) are always executable. If the boolean condition specified holds, the statement has no effect. If, however, the condition does not necessarily hold, the statement will produce an error during verifications with SPIN. Complex data structures A PROMELA typedef definition can be used to introduce a new name for a list of data objects of predefined or earlier defined types, for example: typedef MyStruct { short Field1; byte Field2 } The new type name can be used to declare and instantiate new data objects, which can be used in any context in an obvious way. Access to the fields declared in a typedef construction is done in the same manner as in the C programming language. For example: MyStruct x; x.Field1 = 1; is a valid PROMELA sequence that assigns the value 1 to the field Field1 of the variable x. Active proctypes The keyword active can be prefixed to any proctype definition. If the keyword is present, an instance of that proctype will be active in the initial system state. Multiple instantiations of that proctype can be specified with an optional array suffix of the keyword, for example: active [4] proctype worker() { skip } Executability The semantics of executability provides the basic means in Promela for modeling process synchronizations. In a model such as chan c1 = [0] of { bit }; chan c2 = [0] of { bit }; active proctype P1() { do :: c1 ? 0 :: c2 ! 0 od } active proctype P2() { do :: c2 ? 0 :: c1 ! 0 od } the two processes P1 and P2 have non-deterministic choices of (1) input from the other or (2) output to the other. Two rendezvous handshakes are possible, or executable, and one of them is chosen. This repeats forever. Therefore, this model will not deadlock. When Spin analyzes a model like the above, it will verify the choices with a non-deterministic algorithm, where all executable choices will be explored. 
However, when Spin's simulator visualizes possible non-verified communication patterns, it may use a random generator to resolve the "non-deterministic" choice. Therefore, the simulator may fail to show a bad execution (in the example, there is no bad trail). This illustrates a difference between verification and simulation. In addition, it is also possible to generate executable code from PROMELA models using refinement. Keywords The following identifiers are reserved for use as keywords. References External links Spin homepage Spin Tutorials and Online References Concise Promela Reference Computer Aided Verification of Automata: "The CAVA Project" (2010–2019) and "Verified Model Checkers" (2016–present) website Specification languages Model checkers
Promela
[ "Mathematics", "Engineering" ]
2,335
[ "Software engineering", "Specification languages", "Model checkers", "Mathematical software" ]
9,733,327
https://en.wikipedia.org/wiki/Blackwell%20channel
The Blackwell channel is a deterministic broadcast channel model used in coding theory and information theory. It was first proposed by mathematician David Blackwell. In this model, a transmitter transmits one of three symbols to two receivers. For two of the symbols, both receivers receive exactly what was sent; the third symbol, however, is received differently at each of the receivers. This is one of the simplest examples of a non-trivial capacity result for a non-stochastic channel. Definition The Blackwell channel is composed of one input (transmitter) and two outputs (receivers). The channel input is ternary (three symbols) and is selected from {0, 1, 2}. This symbol is broadcast to the receivers; that is, the transmitter sends one symbol simultaneously to both receivers. Each of the channel outputs is binary (two symbols), labeled {0, 1}. Whenever a 0 is sent, both outputs receive a 0. Whenever a 1 is sent, both outputs receive a 1. When a 2 is sent, however, the first output is 0 and the second output is 1. Therefore, the symbol 2 is confused by each of the receivers in a different way. The operation of the channel is memoryless and completely deterministic. Capacity of the Blackwell channel The capacity of the channel was found by S. I. Gel'fand. It is given by the region bounded by the following segments, where H denotes the binary entropy function:
1. R1 = 1, 0 ≤ R2 ≤ 1/2
2. R1 = H(a), R2 = 1 − a, for 1/3 ≤ a ≤ 1/2
3. R1 + R2 = log2 3, for log2 3 − 2/3 ≤ R1 ≤ 2/3
4. R1 = 1 − a, R2 = H(a), for 1/3 ≤ a ≤ 1/2
5. 0 ≤ R1 ≤ 1/2, R2 = 1
(The segments meet consistently at their endpoints: for instance, H(1/3) = log2 3 − 2/3, so segment 2 at a = 1/3 joins segment 3.) A solution was also found by Pinsker et al. (1995). References Coding theory
Blackwell channel
[ "Mathematics" ]
387
[ "Discrete mathematics", "Coding theory" ]
9,734,068
https://en.wikipedia.org/wiki/Indium%20halides
There are three sets of indium halides: the trihalides, the monohalides, and several intermediate halides. In the monohalides the oxidation state of indium is +1 and their proper names are indium(I) fluoride, indium(I) chloride, indium(I) bromide and indium(I) iodide. The intermediate halides contain indium with oxidation states +1, +2 and +3. Indium trihalides In all of the trihalides the oxidation state of indium is +3, and their proper names are indium(III) fluoride, indium(III) chloride, indium(III) bromide, and indium(III) iodide. The trihalides are Lewis acidic. Indium trichloride is a starting point in the production of trimethylindium, which is used in the semiconductor industry. Indium(III) fluoride InF3 is a white solid, m.p. 1170 °C. Its structure contains 6 coordinate indium. Indium(III) chloride InCl3 is a white solid, m.p. 586 °C. It is obtained by oxidation of indium with chlorine. It is isostructural with AlCl3. Indium(III) bromide InBr3 is a pale yellow solid, m.p. 435 °C. It is isostructural with AlCl3. It is prepared by combining the elements. InBr3 finds some use in organic synthesis as a water-tolerant Lewis acid. Indium(III) iodide InI3 is a yellow solid. It is obtained by evaporation of a solution of indium in HI. Distinct yellow and red forms are known. The red form undergoes a transition to the yellow at 57 °C. The structure of the red form has not been determined by X-ray crystallography; however, spectroscopic evidence indicates that indium may be six coordinate. The yellow form consists of In2I6 with 4 coordinate indium centres. It is used as an "iodide getter" in the Cativa process. Intermediate halides A surprising number of intermediate chlorides and bromides are known, but only one iodide, and no difluoride. Rather than the apparent oxidation state of +2, these compounds contain indium in the +1 and +3 oxidation states. Thus the diiodide is described as InIInIIIX4. It was some time later that the existence of compounds containing the In2X62− anion, which contains an indium–indium bond, was confirmed. Early work on the chlorides and bromides involved investigations of the binary phase diagrams of the trihalides and the related monohalide. Many of the compounds were initially misidentified, as many of them melt incongruently or decompose before melting. The majority of the previously reported chlorides and bromides have now either had their existence and structures confirmed by X-ray diffraction studies or have been consigned to history. Perhaps the most unexpected case of mistaken identity was the surprising result that a careful reinvestigation of the InCl/InCl3 binary phase diagram did not find InCl2. The reason for this abundance of compounds is that indium forms 4 and 6 coordinate anions containing indium(III), e.g. InX4− and InX63−, as well as the In2X62− anion that surprisingly contains an indium–indium bond. In7Cl9 and In7Br9 In7Cl9 is a yellow solid stable up to 250 °C that is formulated InI6(InIIICl6)Cl3. In7Br9 has a similar structure to In7Cl9 and can be formulated as InI6(InIIIBr6)Br3. In5Br7 In5Br7 is a pale yellow solid. It is formulated InI3(InII2Br6)Br. The InII2Br6 anion has an eclipsed ethane-like structure with a metal–metal bond length of 270 pm. In2Cl3 and In2Br3 In2Cl3 is colourless and is formulated InI3(InIIICl6). In contrast, In2Br3 contains the In2Br6 anion as present in In5Br7, and is formulated InI2(InII2Br6), with a structure similar to Ga2Br3. In4Br7 In4Br7 is nearly colourless with a pale greenish-yellow tint.
It is light sensitive (like TlCl and TlBr), decaying to InBr2 and In metal. It is a mixed salt containing the InBr4− and InBr63− anions balanced by In+ cations. It is formulated InI5(InIIIBr4)2(InIIIBr6). The reasons for the distorted lattice have been ascribed to an antibonding combination between doubly filled, non-directional indium 5s orbitals and neighboring bromine 4p hybrid orbitals. In5Cl9 In5Cl9 is formulated as InI3InIII2Cl9. The In2Cl93− anion has two six-coordinate indium atoms with three bridging chlorine atoms (face-sharing bioctahedra), a structure found in a number of other M2X93− ions. InBr2 InBr2 is a greenish-white crystalline solid, which is formulated InIInIIIBr4. It has the same structure as GaCl2. InBr2 is soluble in aromatic solvents, and some compounds containing η6-arene In(I) complexes have been identified. (See hapticity for an explanation of the bonding in such arene–metal ion complexes.) With some ligands InBr2 forms neutral complexes containing an indium–indium bond. InI2 InI2 is a yellow solid that is formulated InIInIIII4. Monohalides The solid monohalides InCl, InBr and InI are all unstable with respect to water, decomposing to the metal and indium(III) species. They fall between gallium(I) compounds, which are more reactive, and thallium(I) compounds, which are stable with respect to water. InI is the most stable. Until relatively recently the monohalides were scientific curiosities; however, with the discovery that they can be used to prepare indium cluster and chain compounds, they are now attracting much more interest. InF InF is known only as an unstable gaseous compound. InCl The room temperature form of InCl is yellow, with a cubic distorted NaCl structure. The red high-temperature (>390 K) form has the β-TlI structure. InBr InBr is a red crystalline solid, m.p. 285 °C. It has the same structure as β-TlI, with an orthorhombic distorted rock-salt structure. It can be prepared from indium metal and InBr3. InI InI is a deep red-purple crystalline solid. It has the same structure as β-TlI. It can be made by direct combination of its constituent elements at high temperature. Alternatively, it can be prepared from InI3 and indium metal in refluxing xylenes. It is the most stable of the solid monohalides and is soluble in some organic solvents. Solutions of InI in a pyridine/m-xylene mixture are stable below 243 K. Anionic halide complexes of In(III) The trihalides are Lewis acids and form addition compounds with ligands. For InF3 few examples are known; however, for the other halides, addition compounds with tetrahedral, trigonal bipyramidal and octahedral coordination geometries are known. With halide ions there are examples of all of these geometries, along with some anions with octahedrally coordinated indium and bridging halogen atoms, with either three bridging halogen atoms or just one. Additionally, there are examples of indium with square pyramidal geometry in the InX52− ion. The square pyramidal geometry of InCl52− was the first found for a main group element. Salts of InF4− are known. The salt LiInF4 has been prepared; however, it has an unusual layer structure with octahedrally coordinated indium centres. Salts of InF63−, InCl63− and InBr63− have all been made. The InCl52− ion has been found to be square pyramidal in the salt (NEt4)2InCl5, with the same structure as (NEt4)2TlCl5, but is trigonal bipyramidal in tetraphenylphosphonium pentachloroindate acetonitrile solvate. The InBr52− ion has similarly been found to be square pyramidal, albeit distorted, in the bis(4-chloropyridinium) salt, and trigonal bipyramidal in Bi37InBr48.
The In2X7− ions contain a single bridging halogen atom. Whether the bridge is bent or linear cannot be determined from the spectra. The chloride and bromide have been detected using electrospray mass spectrometry. The In2I7− ion has been prepared in the salt CsIn2I7. The caesium salts of the chloride and bromide analogues both contain binuclear anions with octahedrally coordinated indium atoms. Anionic halide complexes of In(I) and In(II) InIX2− is produced when the In2X62− ion disproportionates. Salts containing the trigonal pyramidal InX32− ions have been made, and their vibrational spectra have been interpreted as showing that they have C3v symmetry, with structures similar to the isoelectronic SnX3− ions. Salts of the chloride, bromide and iodide In2X62− ions have been prepared. In non-aqueous solvents this ion disproportionates to give InX2− and InX4−. Neutral Indium(II) halide adducts Following the discovery of the In2Br62− anion, a number of related neutral compounds containing the InII2X4 kernel have been formed from the reaction of indium dihalides with neutral ligands. Some chemists refer to these adducts, when used as the starting point for the synthesis of cluster compounds, as 'In2X4', e.g. the TMEDA adduct. General sources WebElements Periodic Table » Indium » compounds information References Indium compounds Metal halides Mixed valence compounds
Indium halides
[ "Chemistry" ]
2,163
[ "Mixed valence compounds", "Inorganic compounds", "Metal halides", "Salts" ]
9,736,518
https://en.wikipedia.org/wiki/Myogenic%20regulatory%20factors
Myogenic regulatory factors (MRF) are basic helix-loop-helix (bHLH) transcription factors that regulate myogenesis: MyoD, Myf5, myogenin, and MRF4. These proteins contain a conserved basic DNA binding domain that binds the E box DNA motif. They dimerize with other HLH containing proteins through an HLH-HLH interaction. MRF Gene Family Evolution There are typically four vertebrate MRF paralogues, which are homologous to what is usually a single MRF gene in non-vertebrates. These four genes are thought to have been duplicated in the two rounds of whole-genome duplication early in vertebrate evolution that played a role in the evolution of more complex vertebrate body plans. The four MRFs have four distinct expression profiles, though with some redundancy, as MyoD and Myf5 are both involved in myoblast determination, and are followed by the activation of Myf6 (MRF4) and Myog in myoblast differentiation. There have also been instances of independent duplication of the MRFs in invertebrate lineages, similarly followed by subfunctionalization of the expression of the genes in time and/or in space. In amphioxus, an invertebrate chordate closely related to vertebrates, there are five MRFs, which are expressed in different patterns during development. References External links Transcription factors DNA-binding proteins
Myogenic regulatory factors
[ "Chemistry", "Biology" ]
307
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
9,738,540
https://en.wikipedia.org/wiki/Phylogenetic%20comparative%20methods
Phylogenetic comparative methods (PCMs) use information on the historical relationships of lineages (phylogenies) to test evolutionary hypotheses. The comparative method has a long history in evolutionary biology; indeed, Charles Darwin used differences and similarities between species as a major source of evidence in The Origin of Species. However, the fact that closely related lineages share many traits and trait combinations as a result of the process of descent with modification means that lineages are not independent. This realization inspired the development of explicitly phylogenetic comparative methods. Initially, these methods were primarily developed to control for phylogenetic history when testing for adaptation; however, in recent years the use of the term has broadened to include any use of phylogenies in statistical tests. Although most studies that employ PCMs focus on extant organisms, many methods can also be applied to extinct taxa and can incorporate information from the fossil record. PCMs can generally be divided into two types of approaches: those that infer the evolutionary history of some character (phenotypic or genetic) across a phylogeny and those that infer the process of evolutionary branching itself (diversification rates), though there are some approaches that do both simultaneously. Typically the tree that is used in conjunction with PCMs has been estimated independently (see computational phylogenetics) such that both the relationships between lineages and the length of branches separating them are assumed to be known. Applications Phylogenetic comparative approaches can complement other ways of studying adaptation, such as studying natural populations, experimental studies, and mathematical models. Interspecific comparisons allow researchers to assess the generality of evolutionary phenomena by considering independent evolutionary events. Such an approach is particularly useful when there is little or no variation within species. And because they can be used to explicitly model evolutionary processes occurring over very long time periods, they can provide insight into macroevolutionary questions, once the exclusive domain of paleontology. Phylogenetic comparative methods are commonly applied to such questions as:
What is the slope of an allometric scaling relationship?
→ Example: how does brain mass vary in relation to body mass?
Do different clades of organisms differ with respect to some phenotypic trait?
→ Example: do canids have larger hearts than felids?
Do groups of species that share a behavioral or ecological feature (e.g., social system, diet) differ in average phenotype?
→ Example: do carnivores have larger home ranges than herbivores?
What was the ancestral state of a trait?
→ Example: where did endothermy evolve in the lineage that led to mammals?
→ Example: where, when, and why did placentas and viviparity evolve?
Does a trait exhibit significant phylogenetic signal in a particular group of organisms? Do certain types of traits tend to "follow phylogeny" more than others?
→ Example: are behavioral traits more labile during evolution?
Do species differences in life history traits trade off, as in the so-called fast-slow continuum?
→ Example: why do small-bodied species have shorter life spans than their larger relatives?
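For the first of these questions, the quantity being estimated is simply the slope of a log-log regression across species. A minimal sketch in Python (the species values below are invented for illustration):

import numpy as np

# Illustrative species means (no phylogenetic correction yet):
body_mass = np.array([10.0, 50.0, 400.0, 3000.0, 5000.0])   # grams
brain_mass = np.array([0.4, 1.5, 8.0, 40.0, 60.0])          # grams

# Allometric model: brain = b * body**k, i.e. log(brain) = log(b) + k*log(body)
k, log_b = np.polyfit(np.log(body_mass), np.log(brain_mass), 1)
print(f"allometric exponent k = {k:.2f}")

Because the species share phylogenetic history, the residuals of such an ordinary regression are not independent, which is exactly the problem the methods described next are designed to address.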
Phylogenetically independent contrasts Felsenstein proposed the first general statistical method in 1985 for incorporating phylogenetic information, i.e., the first that could use any arbitrary topology (branching order) and a specified set of branch lengths. The method is now recognized as an algorithm that implements a special case of what are termed phylogenetic generalized least-squares models. The logic of the method is to use phylogenetic information (and an assumed Brownian motion-like model of trait evolution) to transform the original tip data (mean values for a set of species) into values that are statistically independent and identically distributed. The algorithm involves computing values at internal nodes as an intermediate step, but they are generally not used for inferences by themselves. An exception occurs for the basal (root) node, which can be interpreted as an estimate of the ancestral value for the entire tree (assuming that no directional evolutionary trends [e.g., Cope's rule] have occurred) or as a phylogenetically weighted estimate of the mean for the entire set of tip species (terminal taxa). The value at the root is equivalent to that obtained from the "squared-change parsimony" algorithm and is also the maximum likelihood estimate under Brownian motion. The independent contrasts algebra can also be used to compute a standard error or confidence interval. Phylogenetic generalized least squares (PGLS) Probably the most commonly used PCM is phylogenetic generalized least squares (PGLS). This approach is used to test whether there is a relationship between two (or more) variables while accounting for the fact that lineages are not independent. The method is a special case of generalized least squares (GLS), and as such the PGLS estimator is also unbiased, consistent, efficient, and asymptotically normal. In many statistical situations where GLS (or ordinary least squares, OLS) is used, the residual errors ε are assumed to be independent and identically distributed normal random variables, whereas in PGLS the errors are assumed to be distributed as ε ~ N(0, σ²V), where V is a matrix of the expected variances and covariances of the residuals given an evolutionary model and a phylogenetic tree. Therefore, it is the structure of the residuals and not the variables themselves that shows phylogenetic signal. This has long been a source of confusion in the scientific literature. A number of models have been proposed for the structure of V, such as Brownian motion, the Ornstein–Uhlenbeck process, and Pagel's λ model. (When a Brownian motion model is used, PGLS is identical to the independent contrasts estimator.) In PGLS, the parameters of the evolutionary model are typically co-estimated with the regression parameters. PGLS can only be applied to questions where the dependent variable is continuously distributed; however, the phylogenetic tree can also be incorporated into the residual distribution of generalized linear models, making it possible to generalize the approach to a broader set of distributions for the response. Phylogenetically informed Monte Carlo computer simulations Martins and Garland proposed in 1991 that one way to account for phylogenetic relations when conducting statistical analyses was to use computer simulations to create many data sets that are consistent with the null hypothesis under test (e.g., no correlation between two traits, no difference between two ecologically defined groups of species) but that mimic evolution along the relevant phylogenetic tree.
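A minimal sketch, in Python, of generating such null data sets under Brownian motion (the phylogenetic covariance matrix C and all names below are illustrative assumptions; C[i, j] is the branch length shared by tips i and j):

import numpy as np

rng = np.random.default_rng(0)

# Illustrative phylogenetic covariance matrix for 4 tips:
C = np.array([[1.0, 0.6, 0.0, 0.0],
              [0.6, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.3],
              [0.0, 0.0, 0.3, 1.0]])
L = np.linalg.cholesky(C)
n_tips = C.shape[0]

def simulate_trait():
    # One Brownian-motion realization along the tree: draws from N(0, C)
    return L @ rng.standard_normal(n_tips)

# Null distribution of the correlation between two traits that evolve
# independently of each other but share the same phylogeny:
null_r = [np.corrcoef(simulate_trait(), simulate_trait())[0, 1]
          for _ in range(1000)]
print(np.percentile(null_r, [2.5, 97.5]))  # phylogenetically correct critical values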
If such data sets (typically 1,000 or more) are analyzed with the same statistical procedure that is used to analyze a real data set, then results for the simulated data sets can be used to create phylogenetically correct (or "PC") null distributions of the test statistic (e.g., a correlation coefficient, t, F). Such simulation approaches can also be combined with such methods as phylogenetically independent contrasts or PGLS (see above). See also Allometry Behavioral ecology Biodiversity Bioinformatics Cladistics Comparative anatomy Comparative method in linguistics Comparative physiology Computational phylogenetics Disk-covering method Ecophysiology Evolutionary neurobiology Evolutionary physiology Generalized least squares (GLS) Generalized linear model Joe Felsenstein Mark Pagel Maximum likelihood Maximum parsimony Paul H. Harvey Phylogenetics Phylogenetic reconciliation Roderic D.M. Page Sexual selection Statistics Systematics Theodore Garland Jr. References Further reading Ackerly, D. D. 1999. Comparative plant ecology and the role of phylogenetic information. Pages 391–413 in M. C. Press, J. D. Scholes, and M. G. Braker, eds. Physiological plant ecology. The 39th symposium of the British Ecological Society held at the University of York 7–9 September 1998. Blackwell Science, Oxford, U.K. Brooks, D. R., and D. A. McLennan. 1991. Phylogeny, ecology, and behavior: a research program in comparative biology. Univ. Chicago Press, Chicago. 434 pp. Eggleton, P., and R. I. Vane-Wright, eds. 1994. Phylogenetics and ecology. Linnean Society Symposium Series Number 17. Academic Press, London. Felsenstein, J. 2004. Inferring phylogenies. Sinauer Associates, Sunderland, Mass. xx + 664 pp. Ives, A. R. 2018. Mixed and phylogenetic models: a conceptual introduction to correlated data. leanpub.com, 125 pp., https://leanpub.com/correlateddata Maddison, W. P., and D. R. Maddison. 1992. MacClade. Analysis of phylogeny and character evolution. Version 3. Sinauer Associates, Sunderland, Mass. 398 pp. Martins, E. P., ed. 1996. Phylogenies and the comparative method in animal behavior. Oxford University Press, Oxford. 415 pp. Erratum Am. Nat. 153:448. Page, R. D. M., ed. 2003. Tangled trees: phylogeny, cospeciation, and coevolution. University of Chicago Press, Chicago. Rezende, E. L., and Garland, T. Jr. 2003. Comparaciones interespecíficas y métodos estadísticos filogenéticos. Pages 79–98 in F. Bozinovic, ed. Fisiología Ecológica & Evolutiva. Teoría y casos de estudios en animales. Ediciones Universidad Católica de Chile, Santiago. PDF Ridley, M. 1983. The explanation of organic diversity: The comparative method and adaptations for mating. Clarendon, Oxford, U.K. 
External links Adaptation and the comparative method online lecture, with worked example of phylogenetically independent contrasts and mastery quiz List of phylogeny programs Phylogenetic Tools for Comparative Biology Phylogeny of Sleep website Tree of Life Journals American Naturalist Behavioral Ecology Ecology Evolution Evolutionary Ecology Research Functional Ecology Journal of Evolutionary Biology Philosophical Transactions of the Royal Society of London B Physiological and Biochemical Zoology Systematic Biology Software packages (incomplete list) Analyses of Phylogenetics and Evolution BayesTraits Comparative Analysis by Independent Contrasts COMPARE Felsenstein's List Mesquite PDAP:PDTree for Mesquite mvMorph ouch: Ornstein-Uhlenbeck for Comparative Hypotheses PDAP: Phenotypic Diversity Analysis Programs Phylogenetic Regression PHYSIG Laboratories Ackerly Bininda-Emonds Blomberg Butler Felsenstein Freckleton Garland Gittleman Grafen Hansen Harmon Harvey Housworth Irschick Ives Losos Martins Mooers Mort Nunn Oakley Page Pagel Paradis Purvis Rambaut Rohlf Sanderson Phylogenetics
Phylogenetic comparative methods
[ "Biology" ]
2,195
[ "Bioinformatics", "Phylogenetics", "Taxonomy (biology)" ]
3,181,579
https://en.wikipedia.org/wiki/Trichostatin%20A
Trichostatin A (TSA) is an organic compound that serves as an antifungal antibiotic and selectively inhibits the class I and II mammalian histone deacetylase (HDAC) families of enzymes, but not class III HDACs (i.e., sirtuins). However, there are recent reports of interactions of this molecule with the SIRT6 protein. TSA inhibits the eukaryotic cell cycle during the beginning of the growth stage. TSA can be used to alter gene expression by interfering with the removal of acetyl groups from histones by histone deacetylases (HDACs), and therefore altering the ability of DNA transcription factors to access the DNA molecules inside chromatin. It is a member of a larger class of histone deacetylase inhibitors (HDIs or HDACIs) that have a broad spectrum of epigenetic activities. Thus, TSA has some potential as an anti-cancer drug. One suggested mechanism is that TSA promotes the expression of apoptosis-related genes, leading to cancerous cells surviving at lower rates, thus slowing the progression of cancer. Other mechanisms may include the activity of HDIs to induce cell differentiation, thus acting to "mature" some of the de-differentiated cells found in tumors. HDIs have multiple effects on non-histone effector molecules, so their anti-cancer mechanisms are not fully understood at this time. TSA inhibits HDACs 1, 3, 4, 6 and 10 with IC50 values around 20 nM. TSA represses IL (interleukin)-1β/LPS (lipopolysaccharide)/IFNγ (interferon γ)-induced nitric oxide synthase 2 (NOS2) expression in murine macrophage-like cells but increases LPS-stimulated NOS2 expression in murine N9 and primary rat microglial cells. Vorinostat is structurally related to trichostatin A and is used to treat cutaneous T cell lymphoma. See also Histone deacetylase inhibitor Vorinostat (SAHA) References Further reading External links Trichostatin_A Safety data sheet by Fermentek Anilines Antibiotics Antifungals Aromatic ketones Histone deacetylase inhibitors Hydroxamic acids
Trichostatin A
[ "Chemistry", "Biology" ]
489
[ "Biotechnology products", "Functional groups", "Organic compounds", "Antibiotics", "Biocides", "Hydroxamic acids" ]
3,182,367
https://en.wikipedia.org/wiki/Lowest%20usable%20high%20frequency
The lowest usable high frequency (LUF), in radio transmission, is a frequency in the HF band at which the received field intensity is sufficient to provide the required signal-to-noise ratio for a specified time period, e.g., 0100 to 0200 UTC, on 90% of the undisturbed days of the month. Any frequency lower than this is unable to fulfill those requirements, while higher frequencies usually yield better results, up until the maximum usable frequency is reached. The amount of energy absorbed by the lower regions of the ionosphere (the D region, primarily) directly impacts the LUF. See also Maximum usable frequency Frequency of optimum transmission Sources Federal Standard 1037C Radio frequency propagation
Lowest usable high frequency
[ "Physics" ]
145
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves" ]
3,183,005
https://en.wikipedia.org/wiki/Blytt%E2%80%93Sernander%20system
The Blytt–Sernander classification, or sequence, is a series of North European climatic periods or phases based on the study of Danish peat bogs by Axel Blytt (1876) and Rutger Sernander (1908). The classification was incorporated into a sequence of pollen zones later defined by Lennart von Post, one of the founders of palynology. Description Layers of peat were first noticed by Heinrich Dau in 1829. A prize was offered by the Royal Danish Academy of Sciences and Letters to anyone who could explain them. Blytt hypothesized that the darker layers were deposited in drier times and the lighter ones in moister times, applying his terms Atlantic (warm, moist) and Boreal (cool, dry). In 1926 C. A. Weber noticed the sharp boundary horizons, or Grenzhorizonte, in German peat, which matched Blytt's classification. Sernander defined the subboreal and subatlantic periods, as well as the late glacial periods. Other scientists have since added other information. The classification was devised before the development of more accurate dating methods, such as C-14 dating and oxygen isotope ratio cycles. Geologists working in different regions are studying sea levels, peat bogs, and ice core samples by a variety of methods, intending to further verify and refine the Blytt–Sernander sequence. They find a general correspondence across Eurasia and North America. The fluctuations of climatic change are more complex than Blytt–Sernander periodizations can identify. For example, recent peat core samples at Roskilde Fjord and Lake Kornerup in Denmark identified 40 and 62 distinguishable layers of pollen, respectively. However, no universally accepted replacement model has been proposed. Problems Dating and calibration Today the Blytt–Sernander sequence has been substantiated by a wide variety of scientific dating methods, mainly radiocarbon dates obtained from peat. Earlier radiocarbon dates were often left uncalibrated; that is, they were derived by assuming a constant concentration of atmospheric radiocarbon. The atmospheric radiocarbon concentration has varied over time, and thus radiocarbon dates need to be calibrated. Cross-discipline correlation The Blytt–Sernander classification has been used as a temporal framework for the archaeological cultures of Europe and America. Some have gone so far as to identify stages of technology in north Europe with specific periods; however, this approach is an oversimplification not generally accepted. There is no reason, for example, why the north Europeans should stop using bronze and start using iron abruptly at the lower boundary of the Subatlantic at 600 BC. In the warm Atlantic period, Denmark was occupied by Mesolithic cultures, rather than Neolithic, notwithstanding the climatic evidence. Moreover, the technology stages vary widely globally.
Sequence The Pleistocene phases and approximate calibrated dates (see above) are:
Older Dryas stadial, 14,000–13,600 BP (Before Present)
Allerød interstadial, 13,600–12,900 BP
Younger Dryas stadial, 12,900–11,640 BP
The Holocene phases are:
Preboreal
Boreal, cool, dry, rising temperature, 11,500–8,900 BP
Atlantic, warm, moist, maximum temperature, 8,900–5,700 BP
Subboreal, 5,700–2,600 BP
Subatlantic, 2,600–0 BP
Marker species Some marker plant genera or species studied in peat are:
Sphagnum
Carex limosa
Scheuchzeria palustris, Rannoch rush
Eriophorum vaginatum, cotton grass
Vaccinium oxycoccos, bog cranberry
Andromeda polifolia, bog rosemary
Erica tetralix, cross-leaved heather
Calluna vulgaris, heather
Pinus, pine
Betula, birch
More sphagnum appears in wet periods. Dry periods feature more tree stumps, of birch and pine. References External links the Holocene 10,000 Years of Climate Change Bogs and Mires of the Baltic Region Chronology Holocene Paleoclimatology Paleoecology Palynology Dating methodologies in archaeology
Blytt–Sernander system
[ "Physics", "Biology" ]
866
[ "Evolution of the biosphere", "Chronology", "Physical quantities", "Time", "Spacetime", "Paleoecology" ]
3,183,152
https://en.wikipedia.org/wiki/Gun-type%20fission%20weapon
Gun-type fission weapons are fission-based nuclear weapons whose design assembles their fissile material into a supercritical mass by the use of the "gun" method: shooting one piece of sub-critical material into another. Although this is sometimes pictured as two sub-critical hemispheres driven together to make a supercritical sphere, typically a hollow projectile is shot onto a spike, which fills the hole in its center. The name is a reference to the fact that the material is shot through an artillery barrel as if it were a projectile. Since it is a relatively slow method of assembly, plutonium cannot be used unless it is purely the 239Pu isotope. Production of such impurity-free plutonium is very difficult and impractical. The required amount of uranium is relatively large, and thus the overall efficiency is relatively low. The main reason for this is that the uranium metal does not undergo compression (and the resulting density increase) as it does in the implosion design. Instead, gun-type bombs assemble the supercritical mass by amassing such a large quantity of uranium that the overall distance through which daughter neutrons must travel spans so many mean free paths that it becomes very probable that most neutrons will find uranium nuclei to collide with before escaping the supercritical mass. Gun-type fission weapons were first discussed as part of the British Tube Alloys nuclear bomb development program, the world's first nuclear bomb development program. The British MAUD Report of 1941 laid out how "an effective uranium bomb which, containing some 25 lb of active material, would be equivalent as regards destructive effect to 1,800 tons of T.N.T". The bomb would use the gun-type design "to bring the two halves together at high velocity and it is proposed to do this by firing them together with charges of ordinary explosive in a form of double gun". The method was applied in four known US programs. The first was the "Little Boy" weapon, detonated over Hiroshima, along with several additional units of the same design prepared after World War II; the design was also used in 40 Mark 8 bombs and their replacement, 40 Mark 11 bombs. Both the Mark 8 and Mark 11 designs were intended for use as earth-penetrating bombs (see nuclear bunker buster), for which the gun-type method was preferred for a time by designers who were less than certain that early implosion-type weapons would successfully detonate following an impact. The second program was a family of 11-inch (280 mm) nuclear artillery shells, the W9 and its derivative W19, plus a repackaged W19 in a 16-inch (406 mm) shell for US Navy battleships, the W23. The third family was an 8-inch (203 mm) artillery shell, the W33. South Africa also developed six nuclear bombs based on the gun-type principle, and was working on missile warheads using the same basic design – see South Africa and weapons of mass destruction. There are currently no known gun-type weapons in service: advanced nuclear weapon states tended to abandon the design in favor of implosion-type weapons, boosted fission weapons, and thermonuclear weapons, while new nuclear weapon states tend to develop boosted fission and thermonuclear weapons only. All known gun-type nuclear weapons previously built worldwide have been dismantled. Little Boy The "gun" method is roughly how the Little Boy weapon, which was detonated over Hiroshima, worked, using uranium-235 as its fissile material. In the Little Boy design, the U-235 "bullet" had a mass of around , and it was long, with a diameter of .
The hollow cylindrical shape made it subcritical. It was propelled by a cordite charge. The uranium target spike was about . Both the bullet and the target consisted of multiple rings stacked together. The use of rings had two advantages: it allowed the larger bullet to confidently remain subcritical (the hollow column served to keep the material from having too much contact with other material), and it allowed sub-critical assemblies to be tested using the same bullet but with just one ring. The barrel had an inside diameter of . Its length was , which allowed the bullet to accelerate to its final speed of about before coming into contact with the target. When the bullet is at a distance of , the combination becomes critical. This means that some free neutrons may cause the chain reaction to take place before the material is fully joined (see nuclear chain reaction). Typically the chain reaction takes less than 1 μs (100 shakes), during which time the bullet travels only about 0.3 mm (0.01 inch). Although the chain reaction is slower when the supercriticality is low, it still happens in a time so brief that the bullet hardly moves in that time. This could cause a fizzle, a predetonation that would blow the material apart before creating much of an explosion. Thus, it is important that the frequency at which free neutrons occur is kept low compared with the assembly time from this point. This also means that the speed of the projectile must be sufficiently high; its speed can be increased, but this requires a longer and heavier barrel, or a higher pressure of the propellant gas for greater acceleration of the bullet subcritical mass. In the case of Little Boy, the 20% 238U in the uranium underwent 70 spontaneous fissions per second. With the fissionable material in a supercritical state, each fission had a large probability of starting a detonation: each fission creates on average 2.52 neutrons, which each have a probability of more than 1 in 2.52 of creating another fission. During the 1.35 ms of supercriticality prior to full assembly, there was a 10% probability of a fission, with a somewhat lower probability of pre-detonation. Initially the Manhattan Project gun-type effort was directed at making a gun weapon that used plutonium as its source of fissile material, known as the "Thin Man" because of its extreme length. It was thought that if a plutonium gun-type bomb could be created, then the uranium gun-type bomb would be very easy to make by comparison. However, it was discovered in April 1944 that reactor-bred plutonium (Pu-239) is contaminated with another isotope of plutonium, Pu-240, which increases the material's spontaneous neutron-release rate, making pre-detonation inevitable. For this reason, a gun-type bomb is thought to be usable only with enriched uranium fuel. It is unknown, though possible, whether a composite design using high-grade plutonium in the bullet only could be made to work. After it was discovered that the "Thin Man" program would not be successful, Los Alamos redirected its efforts into creating the implosion-type plutonium weapon: "Fat Man". The gun program switched completely over to developing a uranium bomb. Although in Little Boy of 80%-grade 235U was used (hence ), the minimum is about 44 to 55 pounds (20 to 25 kg), versus for the implosion method. Little Boy's target subcritical mass was enclosed in a neutron reflector made of tungsten carbide (WC). The presence of a neutron reflector reduced neutron losses during the chain reaction, and so reduced the quantity of uranium fuel needed.
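As a rough check of the pre-detonation figures quoted above, treating spontaneous fissions as a Poisson process (a sketch; the rate and time window are the values given in the text):

import math

rate = 70.0        # spontaneous fissions per second (from the 238U content)
window = 1.35e-3   # seconds of supercriticality before full assembly

# Probability of at least one spontaneous fission during the window:
p = 1.0 - math.exp(-rate * window)
print(f"{p:.1%}")  # about 9%, consistent with the roughly 10% figure above

The actual pre-detonation probability is somewhat lower still, since a fission very late in the window may still allow a near-complete assembly.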
A more effective reflector material would be metallic beryllium, but this was not known until the postwar years when Ted Taylor developed an implosion design known as "Scorpion". The scientists who designed the "Little Boy" weapon were confident enough of its success that they did not field-test a design before using it in war (though scientists such as Louis Slotin did perform non-destructive tests with sub-critical assemblies, dangerous experiments nicknamed "tickling the dragon's tail"). In any event, it could not be tested before being deployed, as there was only sufficient U-235 available for one device. Even though the design was never proof-tested, there was thought to be no risk of the device being captured by an enemy if it malfunctioned. Even a "fizzle" would have completely disintegrated the device, while the multiple redundancies built into the "Little Boy" design meant there was negligible, if any, potential for the device to strike the ground without detonating at all. For a quick start of the chain reaction at the right moment a neutron trigger/initiator is used. An initiator is not strictly necessary for an effective gun design, as long as the design uses "target capture" (in essence, ensuring that the two subcritical masses, once fired together, cannot come apart until they explode). Considering the 70 spontaneous fissions per second, this only causes a delay of a few times 1/70 second, which in this case does not matter. Initiators were only added to Little Boy late in its design. Proliferation and terrorism With regard to the risk of proliferation and use by terrorists, the relatively simple design is a concern, as it does not require as much fine engineering or manufacturing as other methods. With enough highly enriched uranium, nations or groups with relatively low levels of technological sophistication could create an inefficient—though still quite powerful—gun-type nuclear weapon. Comparison with the implosion method For technologically advanced states the gun-type method is now essentially obsolete, for reasons of efficiency and safety (discussed above). The gun type method was largely abandoned by the United States as soon as the implosion technique was perfected, though it was retained in the specialised role of nuclear artillery for a time. Other nuclear powers, such as the United Kingdom and Soviet Union, never built an example of this type of weapon. Besides requiring the use of highly enriched U-235, the technique has other severe limitations. The implosion technique is much better suited to the various methods employed to reduce the mass of the weapon and increase the proportion of material which fissions. Apartheid South Africa built around five gun-type weapons, and no implosion-type weapons. They later abandoned their nuclear weapon program altogether. They were unique in their abandonment of nuclear weapons, and probably also by building gun-type weapons rather than implosion-type weapons. There are also safety problems with gun-type weapons. For example, it is inherently dangerous to have a weapon containing a quantity and shape of fissile material that can form a critical mass through a relatively simple accident. Furthermore, if the weapon is dropped from an aircraft into the sea, then the moderating effect of the seawater can also cause a criticality accident without the weapon even being physically damaged. 
Neither can happen with an implosion-type weapon, since there is normally insufficient fissile material to form a critical mass without the correct detonation of the explosive lenses. US nuclear artillery The gun method has also been applied to nuclear artillery shells, since the simpler design can more easily be engineered to withstand the rapid acceleration and g-forces imparted by an artillery gun, and since the smaller diameter of the gun-type design can be relatively easily fitted to projectiles that can be fired from existing artillery. A US gun-type nuclear artillery weapon, the W9, was tested on May 25, 1953, at the Nevada Test Site. Fired as part of Operation Upshot–Knothole and codenamed Shot GRABLE, a shell was fired and detonated above the ground with an estimated yield of 15 kilotons. This is approximately the same yield as Little Boy, although the W9 had less than a tenth of Little Boy's weight (365 kg vs. 4,000 kg, or 805 lbs vs. 8,819 lbs). The shell was long. This was the only nuclear artillery shell ever actually fired (from an artillery gun) in the US test program. It was fired from a specially built artillery piece, nicknamed Atomic Annie. Eighty shells were produced from 1952 to 1953. It was retired in 1957. The W19 was also a 280 mm gun-type nuclear shell, a longer version of the W-9. Eighty warheads were produced, and the system was retired in 1963. The W33 was a smaller, 8-inch (203 mm) gun-type nuclear artillery shell, which was produced starting in 1957 and remained in service until 1992. Two were test fired (detonated, not fired from an artillery gun): one hung under a balloon in the open air, and one in a tunnel. Later versions were based on the implosion design. List of US gun-type weapons Bombs
Mark 1 "Little Boy", 1945–1951
Mark 2 "Thin Man", cancelled 1944
Mark 8 (bunker buster), 1952–1957
Mark 10, cancelled 1952
Mark 11 (bunker buster), 1956–1960
Artillery
W9, 1952–1957
W19, 1955–1963
W23, 1956–1962
W33, 1957–1992
Others
T-4 Atomic Demolition Munition, 1957–1963
W8 for the SSM-N-8 Regulus cruise missile, cancelled 1955
References External links Simulation of "Little Boy" an interactive simulation of the gun-type design atomic bomb "Little Boy" Nuclear weapon design Gun-type nuclear bombs Nuclear fission
Gun-type fission weapon
[ "Physics", "Chemistry" ]
2,712
[ "Nuclear fission", "Nuclear physics" ]
3,183,229
https://en.wikipedia.org/wiki/Ion%20plating
Ion plating (IP) is a physical vapor deposition (PVD) process that is sometimes called ion assisted deposition (IAD) or ion vapor deposition (IVD) and is a modified version of vacuum deposition. Ion plating uses concurrent or periodic bombardment of the substrate and of the depositing film by atomic-sized energetic particles called ions. Bombardment prior to deposition is used to sputter-clean the substrate surface. During deposition, the bombardment is used to modify and control the properties of the depositing film. It is important that the bombardment be continuous between the cleaning and the deposition portions of the process to maintain an atomically clean interface. If this interface is not properly cleaned, the result can be poor adhesion or a weaker coating. There are many different vacuum-deposition coating processes, and they are used for various applications such as resistance to corrosion and wear. Process In ion plating, the energy, flux and mass of the bombarding species, along with the ratio of bombarding particles to depositing particles, are important processing variables. The depositing material may be vaporized either by evaporation, sputtering (bias sputtering), arc vaporization or by decomposition of a chemical vapor precursor (chemical vapor deposition, CVD). The energetic particles used for bombardment are usually ions of an inert or reactive gas, or, in some cases, ions of the condensing film material ("film ions"). Ion plating can be done in a plasma environment, where ions for bombardment are extracted from the plasma, or it may be done in a vacuum environment, where ions for bombardment are formed in a separate ion gun. The latter ion plating configuration is often called ion beam assisted deposition (IBAD). By using a reactive gas or vapor in the plasma, films of compound materials can be deposited. Ion plating is used to deposit hard coatings of compound materials on tools, adherent metal coatings, optical coatings with high densities, and conformal coatings on complex surfaces. Pros
Better surface coverage than other methods (physical vapor deposition, sputter deposition).
More energy available at the surface from the bombarding species, resulting in more complete bonding.
Flexibility in the level of ion bombardment.
Improved chemical reactions when plasma and energy are supplied to the surface.
Durability of the material improves substantially (by a factor of eight or more).
Cons
More variables to take into account than in other techniques.
Uniformity of plating is not always consistent.
Excessive heating of the substrate.
Compressive stress in the deposited film.
The process is costly and time consuming.
Background information on ion plating The ion plating process was first described in the technical literature by Donald M. Mattox of Sandia National Laboratories in 1964. As described in that article, it was initially used to enhance film adhesion and improve surface coverage. History The process was first used in the 1960s, and its use continued with the development of specific cleaning techniques and of reactive and quasi-reactive deposition techniques for film growth. Sputter cleaning has been used since the 1950s for cleaning scientific surfaces. In the 1970s, work with high-rate DC magnetron sputtering showed that bombardment densified the films and improved the hardness of materials. By 1983, concurrent bombardment with inert gas ions during deposition was in use.
See also List of coating techniques References Further reading Chemical processes Physical vapor deposition techniques Thin film deposition
Ion plating
[ "Chemistry", "Materials_science", "Mathematics" ]
687
[ "Thin film deposition", "Coatings", "Thin films", "Chemical processes", "nan", "Chemical process engineering", "Planes (geometry)", "Solid state engineering" ]
13,763,527
https://en.wikipedia.org/wiki/Lommel%20function
The Lommel differential equation, named after Eugen von Lommel, is an inhomogeneous form of the Bessel differential equation:

z^2 \frac{d^2 y}{dz^2} + z \frac{dy}{dz} + (z^2 - \nu^2) y = z^{\mu+1}.

Solutions are given by the Lommel functions sμ,ν(z) and Sμ,ν(z), introduced by Lommel (1880):

s_{\mu,\nu}(z) = \frac{\pi}{2} \left[ Y_\nu(z) \int_0^z t^\mu J_\nu(t)\, dt - J_\nu(z) \int_0^z t^\mu Y_\nu(t)\, dt \right],

where Jν(z) is a Bessel function of the first kind and Yν(z) a Bessel function of the second kind. The s function can also be written as

s_{\mu,\nu}(z) = \frac{z^{\mu+1}}{(\mu - \nu + 1)(\mu + \nu + 1)} \, {}_1F_2\!\left(1; \frac{\mu - \nu + 3}{2}, \frac{\mu + \nu + 3}{2}; -\frac{z^2}{4}\right),

where pFq is a generalized hypergeometric function. See also Anger function Lommel polynomial Struve function Weber function References External links Weisstein, Eric W. "Lommel Differential Equation." From MathWorld—A Wolfram Web Resource. Weisstein, Eric W. "Lommel Function." From MathWorld—A Wolfram Web Resource. Special functions Ordinary differential equations
Lommel function
[ "Mathematics" ]
172
[ "Special functions", "Combinatorics" ]
13,764,124
https://en.wikipedia.org/wiki/Electric%20machine
In electrical engineering, electric machine is a general term for machines using electromagnetic forces, such as electric motors, electric generators, and others. They are electromechanical energy converters: an electric motor converts electricity to mechanical power while an electric generator converts mechanical power to electricity. The moving parts in a machine can be rotating (rotating machines) or linear (linear machines). While transformers are occasionally called "static electric machines", since they do not have moving parts, generally they are not considered "machines", but as electrical devices "closely related" to the electrical machines. Electric machines, in the form of synchronous and induction generators, produce about 95% of all electric power on Earth (as of the early 2020s), and in the form of electric motors consume approximately 60% of all electric power produced. Electric machines were developed beginning in the mid 19th century and since that time have been a ubiquitous component of the infrastructure. Developing more efficient electric machine technology is crucial to any global conservation, green energy, or alternative energy strategy. Generator An electric generator is a device that converts mechanical energy to electrical energy. A generator forces electrons to flow through an external electrical circuit. It is somewhat analogous to a water pump, which creates a flow of water but does not create the water inside. The source of mechanical energy, the prime mover, may be a reciprocating or turbine steam engine, water falling through a turbine or waterwheel, an internal combustion engine, a wind turbine, a hand crank, compressed air or any other source of mechanical energy. The two main parts of an electrical machine can be described in either mechanical or electrical terms. In mechanical terms, the rotor is the rotating part, and the stator is the stationary part of an electrical machine. In electrical terms, the armature is the power-producing component and the field is the magnetic field component of an electrical machine. The armature can be on either the rotor or the stator. The magnetic field can be provided by either electromagnets or permanent magnets mounted on either the rotor or the stator. Generators are classified into two types, AC generators and DC generators. AC generator An AC generator converts mechanical energy into alternating current electricity. Because the power transferred into the field circuit is much less than the power transferred into the armature circuit, AC generators nearly always have the field winding on the rotor and the armature winding on the stator. AC generators are classified into several types. In an induction generator, the stator magnetic flux induces currents in the rotor. The prime mover then drives the rotor above the synchronous speed, causing the opposing rotor flux to cut the stator coils, producing an active current in the stator coils and thus sending power back to the electrical grid. An induction generator draws reactive power from the connected system and so cannot be an isolated source of power. In a synchronous generator (alternator), the current for the magnetic field is provided by a DC current source, either separate or rectified from the output of the machine using a full bridge rectifier. DC generator A DC generator is a machine that converts mechanical energy into direct current electrical energy. A DC generator generally has a commutator with a split ring to produce a direct current instead of an alternating current.
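As a numerical illustration of the alternator principle (a sketch, not drawn from the original text; the frequency, number of turns and flux value are illustrative assumptions), the standard EMF equation for a sinusoidally varying flux, E_rms = 4.44 f N Φ_peak, gives the open-circuit voltage of an idealized winding:

import math

f = 50.0      # electrical frequency in Hz (illustrative)
N = 120       # number of turns in the winding (illustrative)
phi = 0.01    # peak flux linked by the winding, in webers (illustrative)

# E_rms = 4.44 * f * N * phi; the constant 4.44 is sqrt(2) * pi, rounded
e_rms = 4.44 * f * N * phi
print(f"open-circuit EMF ~ {e_rms:.0f} V")  # about 266 V

Doubling the speed of the prime mover doubles both f and the voltage, which is one reason practical alternators are regulated to run at a fixed synchronous speed.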
Motor An electric motor converts electrical energy into mechanical energy. Performing the reverse process of electric generators, most electric motors operate through interacting magnetic fields and current-carrying conductors to generate rotational force. Motors and generators have many similarities, and many types of electric motors can be run as generators, and vice versa. Electric motors are found in applications as diverse as industrial fans, blowers and pumps, machine tools, household appliances, power tools, and disk drives. They may be powered by direct current or by alternating current, which leads to the two main classifications: AC motors and DC motors. AC motor An AC motor converts alternating current into mechanical energy. It commonly consists of two basic parts: an outside stationary stator having coils supplied with alternating current to produce a rotating magnetic field, and an inside rotor attached to the output shaft that is given a torque by the rotating field. The two main types of AC motors are distinguished by the type of rotor used. In an induction (asynchronous) motor, the rotor magnetic field is created by an induced current. The rotor must turn slightly slower (or faster) than the stator magnetic field to provide the induced current. There are three types of induction motor rotors: the squirrel-cage rotor, the wound rotor and the solid core rotor. A synchronous motor does not rely on induction and so can rotate exactly at the supply frequency or a sub-multiple of it. The magnetic field of the rotor is either generated by direct current delivered through slip rings (an exciter) or by a permanent magnet. DC motor The brushed DC electric motor generates torque directly from DC power supplied to the motor by using internal commutation, stationary permanent magnets, and rotating electrical magnets. Brushes and springs carry the electric current from the commutator to the spinning wire windings of the rotor inside the motor. Brushless DC motors use a rotating permanent magnet in the rotor, and stationary electrical magnets on the motor housing. A motor controller converts DC to AC. This design is simpler than that of brushed motors because it eliminates the complication of transferring power from outside the motor to the spinning rotor. An example of a brushless, synchronous DC motor is the stepper motor, which can divide a full rotation into a large number of steps.
The magnetomotive force in a PM (arising from electrons with aligned spins) is generally much higher than what is possible in a copper coil. The copper coil can, however, be filled with a ferromagnetic material, which gives the coil much lower magnetic reluctance. Still, the magnetic field created by modern PMs (neodymium magnets) is stronger, which means that PM machines have a better torque/volume and torque/weight ratio than machines with rotor coils under continuous operation. This may change with the introduction of superconductors in the rotor. Since the permanent magnets in a PM machine already introduce considerable magnetic reluctance, the reluctance in the air gap and coils is less important. This gives considerable freedom when designing PM machines. It is usually possible to overload electric machines for a short time, until the current in the coils heats parts of the machine to a temperature which causes damage. PM machines are less tolerant of such overloads, because too high a current in the coils can create a magnetic field strong enough to demagnetise the magnets. Brushed machines Brushed machines are machines where the rotor coil is supplied with current through brushes, in much the same way as current is supplied to the car in an electric slot car track. More durable brushes can be made of graphite or liquid metal. It is even possible to eliminate the brushes in a "brushed machine" by using a part of the rotor and stator as a transformer that transfers current without creating torque. Brushes must not be confused with a commutator. The difference is that the brushes only transfer electric current to a moving rotor, while a commutator also provides switching of the current direction. There is iron (usually laminated steel cores made of sheet metal) between the rotor coils, and teeth of iron between the stator coils, in addition to back iron behind the stator coils. The gap between rotor and stator is also made as small as possible. All this is done to minimize the magnetic reluctance of the magnetic circuit which the magnetic field created by the rotor coils travels through, something which is important for optimizing these machines. Large brushed machines which are run with DC in the rotor field windings at synchronous speed are the most common generators in power plants, because they also supply reactive power to the grid, because they can be started by the turbine and because the machine in this system can generate power at a constant speed without a controller. This type of machine is often referred to in the literature as a synchronous machine. This machine can also be run by connecting the stator coils to the grid and supplying the rotor coils with AC from an inverter. The advantage is that it is possible to control the rotating speed of the machine with a fractionally rated inverter. When run this way, the machine is known as a brushed doubly fed "induction" machine. "Induction" is misleading here, because there is no useful current in the machine that is set up by induction. Induction machines Induction machines have short-circuited rotor coils where a current is set up and maintained by induction. This requires that the rotor rotate at other than the synchronous speed, so that the rotor coils are subjected to a varying magnetic field created by the stator coils. An induction machine is an asynchronous machine. Induction eliminates the need for brushes, which are usually a weak part in an electric machine. It also allows designs which make it very easy to manufacture the rotor.
A metal cylinder will work as a rotor, but to improve efficiency a "squirrel cage" rotor or a rotor with closed windings is usually used. The speed of asynchronous induction machines will decrease with increased load, because a larger speed difference between stator and rotor is necessary to set up sufficient rotor current and rotor magnetic field. Asynchronous induction machines can be made so that they start and run without any means of control if connected to an AC grid, but the starting torque is low. A special case would be an induction machine with superconductors in the rotor. The current in the superconductors will be set up by induction, but the rotor will run at synchronous speed, because there will be no need for a speed difference between the magnetic field of the stator and the speed of the rotor to maintain the rotor current. Another special case would be the brushless doubly fed induction machine, which has a double set of coils in the stator. Since it has two moving magnetic fields in the stator, it is meaningless to speak of synchronous or asynchronous speed. Reluctance machines Reluctance machines have no windings on the rotor, only a ferromagnetic material shaped so that "electromagnets" in the stator can "grab" the teeth of the rotor and advance it a little. The electromagnets are then turned off, while another set of electromagnets is turned on to move the rotor further. Another name is the step motor, and it is suited for low speed and accurate position control. Reluctance machines can be supplied with permanent magnets in the stator to improve performance. The "electromagnet" is then "turned off" by sending a negative current into the coil. When the current is positive, the magnet and the current cooperate to create a stronger magnetic field, which will improve the reluctance machine's maximum torque without increasing the current's maximum absolute value. Polyphase AC machines The armature of polyphase electric machines includes multiple windings powered by AC currents offset from one another by equal phase angles. The most popular are the three-phase machines, where the windings are (electrically) 120° apart. The three-phase machines have major advantages over the single-phase ones: steady-state torque is constant, leading to less vibration and longer service life (the instantaneous torque of a single-phase motor pulsates with the cycle); power is constant (the power consumption of the single-phase motor varies over the cycle); smaller size (and thus lower cost) for the same power; transmission over 3 wires needs only 3/4 of the metal for the wires that would be required for a two-wire single-phase transmission line of the same power; better power factor. Sequence The winding phases of the three-phase motor must be energized in a sequence for the motor to rotate, for example with phase V lagging phase U by 120°, and phase W lagging phase V (U > V > W, normal phase rotation, positive sequence). If the sequence is reversed (W < V < U), the motor will rotate in the opposite direction (negative sequence). The common current through all three windings is called the zero sequence. Any combination of the AC currents in the three windings can be expressed as a sum of three symmetrical currents, corresponding to positive, negative, and zero sequences; a minimal numerical decomposition is sketched below. 
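The decomposition into positive-, negative-, and zero-sequence currents just mentioned (the Fortescue transform) is easy to state numerically. The following is a minimal sketch; the phasor values in the example are assumed for illustration.

```python
import numpy as np

a = np.exp(2j * np.pi / 3)  # the 120-degree rotation operator

def symmetrical_components(iu, iv, iw):
    # Fortescue transform: phase phasors -> (zero, positive, negative) sequence.
    T = np.array([[1, 1,    1   ],
                  [1, a,    a**2],
                  [1, a**2, a   ]]) / 3
    return T @ np.array([iu, iv, iw])

# A balanced set with V lagging U by 120 degrees is pure positive sequence:
i0, i1, i2 = symmetrical_components(1.0, a**2, a)
print(abs(i0), abs(i1), abs(i2))  # ~0, 1, ~0
```

Any unbalanced set of three phasors decomposes uniquely in this way, which is what makes the positive-, negative-, and zero-sequence language useful.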
Electrostatic machines In electrostatic machines, torque is created by attraction or repulsion of electric charge in the rotor and stator. Electrostatic generators generate electricity by building up electric charge. Early types were friction machines; later ones were influence machines that worked by electrostatic induction. The Van de Graaff generator is an electrostatic generator still used in research today. Homopolar machines Homopolar machines are true DC machines where current is supplied to a spinning wheel through brushes. The wheel is inserted in a magnetic field, and torque is created as the current travels from the edge to the centre of the wheel through the magnetic field. Electric machine systems For optimized or practical operation of electric machines, today's electric machine systems are complemented with electronic control. References Sources This has a detailed survey of the contemporaneous history and state of electric machines. Machines Electrical engineering Electric motors Electromechanical engineering
Electric machine
[ "Physics", "Technology", "Engineering" ]
2,934
[ "Machines", "Engines", "Electric motors", "Physical systems", "Electromechanical engineering", "Mechanical engineering by discipline", "Mechanical engineering", "Electrical engineering" ]
13,769,301
https://en.wikipedia.org/wiki/Lattice%20gas%20automaton
Lattice gas automata (LGCA), or lattice gas cellular automata, are a type of cellular automaton used to simulate fluid flows, pioneered by Hardy–Pomeau–de Pazzis and Frisch–Hasslacher–Pomeau. They were the precursor to the lattice Boltzmann methods. From lattice gas automata, it is possible to derive the macroscopic Navier–Stokes equations. Interest in lattice gas automaton methods levelled off in the early 1990s as interest in the lattice Boltzmann method started to rise. However, an LGCA variant, termed BIO-LGCA, is still widely used to model collective migration in biology. Basic principles As a cellular automaton, these models comprise a lattice, where the sites on the lattice can take a certain number of different states. In a lattice gas, the various states are particles with certain velocities. Evolution of the simulation is done in discrete time steps. After each time step, the state at a given site can be determined by the state of the site itself and of the neighboring sites before the time step. The state at each site is purely boolean: at a given site, there either is or is not a particle moving in each direction. At each time step, two processes are carried out, propagation and collision. In the propagation step, each particle will move to a neighboring site determined by the velocity that particle had. Barring any collisions, a particle with an upwards velocity will after the time step maintain that velocity, but be moved to the neighboring site above the original site. The so-called exclusion principle prevents two or more particles from travelling on the same link in the same direction. In the collision step, collision rules are used to determine what happens if multiple particles reach the same site. These collision rules are required to maintain mass conservation and conserve the total momentum; the block cellular automaton model can be used to achieve these conservation laws. Note that the exclusion principle does not prevent two particles from travelling on the same link in opposite directions; when this happens, the two particles pass each other without colliding. Early attempts with a square lattice In papers published in 1973 and 1976, Jean Hardy, Yves Pomeau and Olivier de Pazzis introduced the first lattice gas model, which is called the HPP model after the authors. The HPP model is a two-dimensional model of fluid particle interactions. In this model, the lattice is square, and the particles travel independently at unit speed in discrete time steps. The particles can move to any of the four sites whose cells share a common edge. Particles cannot move diagonally. If two particles collide head-on, for example a particle moving to the left meets a particle moving to the right, the outcome will be two particles leaving the site at right angles to the direction they came in. The HPP model lacked rotational invariance, which made the model highly anisotropic. This means, for example, that the vortices produced by the HPP model are square-shaped; a minimal implementation of the HPP update rule is sketched below. 
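As a concrete illustration of the propagation and collision steps described under "Basic principles", here is a minimal sketch of the HPP update rule on a periodic square lattice. The direction encoding and the array layout are implementation choices of this sketch, not part of the model's definition.

```python
import numpy as np

# state[d] is a boolean lattice: state[d, y, x] is True when a particle at
# site (y, x) moves in direction d (0 = east, 1 = west, 2 = north, 3 = south).

def hpp_step(state):
    e, w, n, s = state
    # Collision: a head-on east/west pair (with no north/south particles
    # present) becomes a north/south pair, and vice versa. Each exchange
    # replaces two particles of zero net momentum with two others, so both
    # mass and momentum are conserved.
    ew = e & w & ~n & ~s
    ns = n & s & ~e & ~w
    e, w = (e & ~ew) | ns, (w & ~ew) | ns
    n, s = (n & ~ns) | ew, (s & ~ns) | ew
    # Propagation: every particle hops to the neighbouring site in its
    # direction of motion; np.roll gives periodic boundaries.
    return np.array([np.roll(e, 1, axis=1),
                     np.roll(w, -1, axis=1),
                     np.roll(n, -1, axis=0),
                     np.roll(s, 1, axis=0)])
```

The exclusion principle is built into the representation: one boolean per direction per site cannot hold two particles travelling on the same link in the same direction.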
Hexagonal grids The hexagonal grid model was first introduced in 1986, in a paper by Uriel Frisch, Brosl Hasslacher and Pomeau, and this has become known as the FHP model after its inventors. The model has six or seven velocities, depending on which variation is used. In any case, six of the velocities represent movement to each of the neighboring sites. In some models (called FHP-II and FHP-III), a seventh velocity representing particles "at rest" is introduced. The "at rest" particles do not propagate to neighboring sites, but they are capable of colliding with other particles. The FHP-III model allows all possible collisions that conserve density and momentum. Increasing the number of collisions raises the Reynolds number, so the FHP-II and FHP-III models can simulate less viscous flows than the six-speed FHP-I model. The simple update rule of the FHP model proceeds in two stages, chosen to conserve particle number and momentum. The first is collision handling. The collision rules in the FHP model are not deterministic: some input situations produce two possible outcomes, and when this happens, one of them is picked at random. Since true random number generation is not possible through purely computational means, a pseudorandom process is usually chosen. After the collision step, a particle on a link is taken to be leaving the site. If a site has two particles approaching head-on, they scatter; a random choice is made between the two possible outgoing directions that conserve momentum. The hexagonal grid does not suffer from anisotropy as severe as that which plagues the HPP square-lattice model, a fortunate fact that is not entirely obvious, and that prompted Frisch to remark that "the symmetry gods are benevolent". Three dimensions For a three-dimensional grid, the only regular polytope that fills the whole space is the cube, while the only regular polytopes with a sufficiently large symmetry group are the dodecahedron and icosahedron (without the second constraint the model will suffer the same drawbacks as the HPP model). To make a model that tackles three dimensions therefore requires an increase in the number of dimensions, such as in the 1986 model by D'Humières, Lallemand and Frisch, which employed a face-centered hypercube model. Obtaining macroscopic quantities The density at a site can be found by counting the number of particles at each site. If the particles are multiplied by their unit velocities before being summed, one obtains the momentum at the site. However, calculating density, momentum, and velocity for individual sites is subject to a large amount of noise, and in practice one would average over a larger region to obtain more reasonable results. Ensemble averaging is often used to reduce the statistical noise further (a sketch of these computations is given below). Advantages and disadvantages The main assets held by the lattice gas model are that the boolean states mean there will be exact computing without any round-off error due to floating-point precision, and that the cellular automata system makes it possible to run lattice gas automaton simulations with parallel computing. Disadvantages of the lattice gas method include the lack of Galilean invariance and statistical noise. Another problem is the difficulty in expanding the model to handle three-dimensional problems, requiring the use of more dimensions to maintain a sufficiently symmetric grid to tackle such issues. As a model in biology Lattice-gas cellular automata have been adapted and are still widely used for modeling collective migration in biology. Due to the active nature of biological agents, as well as the viscous environments cells live in, momentum conservation is not required. Furthermore, agents may die or reproduce, so mass conservation may also be absent. During the collision step, particles reorient stochastically following a Boltzmann distribution, simulating local interaction between individuals. 
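The macroscopic quantities described above can be read off directly from such a boolean state array. A short sketch, reusing the direction encoding assumed in the HPP example earlier:

```python
import numpy as np

# Unit velocity (vx, vy) for directions 0..3 (east, west, north, south).
VELS = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

def density(state):
    return state.sum(axis=0)  # number of particles at each site

def momentum(state):
    # Particles weighted by their unit velocities, summed per site;
    # the result has shape (2, ny, nx): one plane for each component.
    return np.tensordot(VELS, state, axes=(0, 0))
```

As the text notes, these raw per-site fields are noisy, so in practice they are averaged over blocks of neighbouring sites or over an ensemble of runs.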
Notes References (Chapter 2 is about lattice gas cellular automata.) James Maxwell Buick (1997). Lattice Boltzmann Methods in Interfacial Wave Modelling. PhD thesis, University of Edinburgh. (Chapter 3 is about the lattice gas model.) (archive.org) 2008-11-13 External links Master thesis (2000) – Details on programming and optimising the simulation of the FHP LGA Master thesis (2010) – Implementation of the FHP model in Nvidia CUDA technology. Computational fluid dynamics Cellular automata
Lattice gas automaton
[ "Physics", "Chemistry", "Mathematics" ]
1,539
[ "Computational fluid dynamics", "Recreational mathematics", "Cellular automata", "Computational physics", "Fluid dynamics" ]
13,769,943
https://en.wikipedia.org/wiki/Polydicyclopentadiene
Polydicyclopentadiene (PDCPD) is a polymer material which is formed through ring-opening metathesis polymerization (ROMP) of dicyclopentadiene (DCPD). PDCPD exhibits a high degree of crosslinking, which gives rise to its characteristic properties, such as high impact resistance, good resistance to chemical corrosion, and a high heat deflection temperature. PDCPD is frequently used in the automotive industry to make body panels, bumpers, and other components for trucks, buses, tractors, and construction equipment. PDCPD is being investigated for the creation of porous materials for tissue engineering or gas storage applications, as well as for self-healing polymers. Polymerization can be achieved through the use of different transition metal catalysts, such as ruthenium, molybdenum, tungsten, and titanium, as well as under metal-free conditions through photoredox catalysis. The exact structure of the PDCPD polymer depends upon the reaction conditions used for the polymerization. While the crosslinked polymer may arise from the metathesis of both alkenes in the parent monomer, it has been suggested that many polymerization conditions result in only the strained norbornene ring in the monomer undergoing olefin metathesis, while subsequent crosslinking steps result from thermal condensation of the remaining olefins in the linear polymer. Several new catalytic systems for the synthesis of linear PDCPD have been run successfully using tungsten hexachloride, tungsten(VI) oxytetrachloride, and organosilicon compounds. Chemical process The reacting system is formulated to maximize the speed of the reaction, and in this system, two components must be mixed in a 1:1 volume ratio. Both components contain mainly DCPD with some additional additives. The catalyst system is divided into two parts, each part going into a separate component. When both components are mixed, the complete catalyst system is recombined and becomes active. This is an important difference from other reaction injection molding (RIM) systems, such as polyurethane, since the reaction is not stoichiometric. The 1:1 volume ratio for DCPD molding is not critical, since this is not a combination of two different chemical components to form a specific matrix. However, significant changes in the ratio will slow down the system's reactivity, because fewer active reaction nuclei are being formed. Equipment DCPD resins are transformed using high-pressure RIM equipment as used in the polyurethane industry, with some small changes to be considered. The most important change is that the resin can never be in contact with air or moisture, which requires a nitrogen blanket in the tanks. The tools or molds are closed tools and are clamped using a hydraulic press. Because the resins shrink approximately 6% in volume during reaction, these presses (also called clamping units) do not have to handle high pressures, such as those for sheet molding compound (SMC) or expanding polyurethane. Tooling Most tooling for PDCPD is made from aluminium. Flat parts can be made from machined aluminum, while deeper 3D-shaped parts are often made as cast aluminium tools. It is important to take volumetric shrinkage into account, and gaskets must be used around all cavities. Process considerations The liquid resin has a relative density of 0.97 and reacts into a solid with a relative density of 1.03, which corresponds to a volumetric shrinkage of about 6% (see the short calculation below). Since most parts are panels, most of the shrinkage will happen on the Z-axis, causing a change in thickness. 
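The quoted densities imply the stated shrinkage directly, since for a fixed mass the volume scales as the reciprocal of the density. A one-line check, using only the figures given in the text:

```python
rho_liquid, rho_solid = 0.97, 1.03  # relative densities quoted in the text

# For a fixed mass, V is proportional to 1/rho, so the volume loss on curing is:
shrinkage = 1 - rho_liquid / rho_solid
print(f"volumetric shrinkage = {shrinkage:.1%}")  # about 5.8%, i.e. roughly 6%
```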
This thickness change makes the parts self-demolding, as they do not keep good contact with the core side (which is the back side) of the tool. A reacting system is always governed by temperature, which means that the temperature of the liquid components has a strong influence on the reactivity. To ensure that one side has the required surface finish, the temperature on that side needs to be higher than on the core side. The two tool halves are therefore held at different temperatures, with typical values of 60 °C and 80 °C. Typical cycle times for molding parts range between 4 and 6 minutes. Properties PDCPD has several useful properties: high impact resistance high chemical corrosion resistance high heat deflection temperature (HDT) PDCPD does not contain any fiber reinforcement, although a fiber-reinforced version has been in development. PDCPD allows the thickness to vary throughout a part, ribs to be incorporated, and inserts to be overmolded for an uncomplicated assembly of the parts. PDCPD cannot be colored in the mass and needs to be painted after molding. Applications Since PDCPD is still a new material, the number of applications is quite limited. The major application is in body panels, mainly for tractors, construction equipment, trucks and buses. In industrial applications, the main use is in components for chlor-alkali production (e.g. cell covers for electrolyzers). It is used in other applications where impact resistance in combination with rigidity, 3D design and/or corrosion resistance is required. Recycling PDCPD is not recyclable. In July 2020, researchers reported the development of a technique to produce a degradable version of this tough thermoset plastic, a technique which may also apply to other plastics that are not included among the 75% of plastics that are recyclable. References External links Polydicyclopentadiene - Polymer Science Learning Center Properties of Polydicyclopentadiene (PDCPD) - MatWeb database Polymers
Polydicyclopentadiene
[ "Chemistry", "Materials_science" ]
1,157
[ "Polymers", "Polymer chemistry" ]
13,771,489
https://en.wikipedia.org/wiki/Micropipe
A micropipe, also called a micropore, microtube, capillary defect or pinhole defect, is a crystallographic defect in a single-crystal substrate. Minimizing the presence of micropipes is important in semiconductor manufacturing, as their presence on a wafer can result in the failure of integrated circuits made from that wafer. Micropipes are also relevant to makers of silicon carbide (SiC) substrates, used in a variety of industries such as power semiconductor devices for vehicles and high-frequency communication devices; during the production of these materials, the crystal undergoes internal and external stresses causing the growth of defects, or dislocations, within the atomic lattice. A screw dislocation is a common dislocation that transforms successive atomic planes within a crystal lattice into the shape of a helix. Once a screw dislocation propagates through the bulk of a sample during the wafer growth process, a micropipe is formed. Micropipes and screw dislocations in epitaxial layers are normally derived from the substrates on which the epitaxy is performed. Micropipes are considered to be empty-core screw dislocations with large strain energy (i.e. they have a large Burgers vector); they follow the growth direction (c-axis) in silicon carbide boules and substrates, propagating into the deposited epitaxial layers. Factors which influence the formation of micropipes (and other defects) include growth parameters such as temperature, supersaturation, vapor phase stoichiometry, impurities and the polarity of the seed crystal surface. References United States Patent 7,201,799, V Velidandla, KLA-Tencor Technologies Corporation (Milpitas, CA), April 10, 2007, System and method for classifying, detecting, and counting micropipes. Performance Limiting Micropipe Defects in Silicon Carbide Wafers by Philip G. Neudeck and J. Anthony Powell of NASA Lewis Research Center. Cree Demonstrates 100-mm Zero-Micropipe Silicon Carbide Substrates. Crystallographic defects Semiconductor device fabrication
Micropipe
[ "Chemistry", "Materials_science", "Engineering" ]
436
[ "Materials science stubs", "Microtechnology", "Crystallographic defects", "Materials science", "Crystallography stubs", "Semiconductor device fabrication", "Crystallography", "Materials degradation" ]
13,772,012
https://en.wikipedia.org/wiki/Prices%20of%20chemical%20elements
This is a list of prices of chemical elements. Listed here are mainly average market prices for bulk trade of commodities. Data on elements' abundance in Earth's crust is added for comparison. The most expensive non-synthetic element by both mass and volume is rhodium. It is followed by caesium, iridium and palladium by mass, and by iridium, gold and platinum by volume. Carbon in the form of diamond can be more expensive than rhodium. Per-kilogram prices of some synthetic radioisotopes range to trillions of dollars. While the difficulty of obtaining macroscopic samples of synthetic elements in part explains their high value, there has been interest in converting base metals to gold (chrysopoeia) since ancient times, but only a deeper understanding of nuclear physics has allowed the actual production of a tiny amount of gold from other elements for research purposes, as demonstrated by Glenn Seaborg. However, both this and other routes of synthesizing precious metals via nuclear reactions are orders of magnitude removed from economic viability. Chlorine, sulfur and carbon (as coal) are cheapest by mass. Hydrogen, nitrogen, oxygen and chlorine are cheapest by volume at atmospheric pressure. When there is no public data on the element in its pure form, the price of a compound is used, per mass of element contained. This implicitly puts the value of the compound's other constituents, and the cost of extraction of the element, at zero. For elements whose radiological properties are important, individual isotopes and isomers are listed. The price listing for radioisotopes is not exhaustive. Chart See also 2000s commodities boom Notes References Properties of chemical elements Chemical industry Pricing Commodity markets
Prices of chemical elements
[ "Chemistry" ]
347
[ "Properties of chemical elements" ]
2,306,420
https://en.wikipedia.org/wiki/%CE%92-Hydride%20elimination
β-Hydride elimination is a reaction in which a metal-alkyl centre is converted into the corresponding metal-hydride-alkene. β-Hydride elimination can also occur for many alkoxide complexes. The main requirements are that the alkyl group possess a C-H bond β to the metal and that the metal be coordinatively unsaturated. Thus, metal-butyl complexes are susceptible to this reaction whereas metal-methyl complexes are not. The complex must have an empty (vacant) site cis to the alkyl group for this reaction to occur. β-Hydride elimination, which can be desirable or undesirable, affects the behavior of many organometallic complexes. Moreover, for facile cleavage of the C–H bond, a d electron pair is needed for donation into the σ* orbital of the C–H bond. Thus, d0 metal alkyls are generally more stable to β-hydride elimination than d2 and higher metal alkyls and may form isolable agostic complexes, even if an empty coordination site is available. Role of β-hydride elimination The Shell higher olefin process relies on β-hydride elimination to produce α-olefins, which are used to produce detergents. β-Hydride elimination interferes with Ziegler–Natta polymerization, leading to decreased molecular weight. The production of branched polymers from ethylene relies on chain walking, a key step of which is β-hydride elimination. Nickel- and palladium-catalyzed couplings mainly focus on aryl-aryl couplings. Aryl-alkyl and especially alkyl-alkyl couplings are less successful because β-hydride elimination can lower the yield. In hydroformylation, β-hydride elimination can act as a side reaction that influences product regioselectivity. For example, in the hydroformylation of open-chain unsaturated ethers, it reverses the formation of branched metal-alkyl intermediates at high temperatures, leading to a greater yield of linear products. β-Hydride elimination is one step in the synthesis of some metal hydrides. For instance, in the synthesis of RuHCl(CO)(PPh3)3 from ruthenium trichloride, triphenylphosphine and 2-methoxyethanol, an intermediate alkoxide complex undergoes a β-hydride elimination to form the hydride ligand and a pi-bonded aldehyde, which is later converted into the carbonyl (carbon monoxide) ligand. Mechanism β-Hydride elimination transforms a metal-alkyl complex into a metal-hydrido-alkene complex. Starting from a coordinatively saturated complex, the transformation proceeds in stages: 1) Dissociation of a ligand from the metal alkyl complex, yielding a coordinatively unsaturated derivative. 2) Alignment of the beta hydrogen. In this step, a vacant site on the metal forms an agostic complex by binding a C-H bond of the alkyl (or alkoxide). 3) Hydride transfer/alkene formation. In this step, the M-H bond forms concomitant with cleavage of a C-H bond and the development of a double bond in what was once an alkyl (or alkoxide) ligand. The resulting metal hydride can eliminate the alkene ligand. The transition state for this β-hydride elimination involves a 4-membered ring. Non-dissociative Especially for Pt(II) complexes, β-hydride eliminations may occur without the dissociation of an ancillary ligand. This was suggested primarily based on the observed order of the L-type ligand in the rate law derived from kinetic studies. This mechanism appears to be operative for a minority of the reactions studied. 
Structure-Reactivity Relationships Relative to an arbitrary reference complex, β-hydride elimination is faster in a complex with the following characteristics: a more electron-deficient metal center, which can result from less-donating ancillary ligands; more labile ancillary ligands, such as weakly coordinating (e.g. solvent) or sterically demanding ligands; an abstracted H that is more hydridic (has a higher pKa). Avoiding β-hydride elimination Several strategies exist for avoiding β-hydride elimination. The most common strategy is to employ alkyl ligands that lack hydrogen atoms at the β position. Common substituents include methyl and neopentyl. β-Hydride elimination is also inhibited when the reaction would produce a strained alkene. This situation is illustrated by the stability of metal complexes containing norbornyl ligands, where the β-hydride elimination product would violate Bredt's rule. Further reading Dissociation-induced β-hydride eliminations. β-Hydride elimination involving metal alkoxide and amido complexes. References Organometallic chemistry
Β-Hydride elimination
[ "Chemistry" ]
1,086
[ "Organometallic chemistry" ]
2,307,854
https://en.wikipedia.org/wiki/Rotations%20in%204-dimensional%20Euclidean%20space
In mathematics, the group of rotations about a fixed point in four-dimensional Euclidean space is denoted SO(4). The name comes from the fact that it is the special orthogonal group of order 4. In this article rotation means rotational displacement. For the sake of uniqueness, rotation angles are assumed to be in the segment except where mentioned or clearly implied by the context otherwise. A "fixed plane" is a plane for which every vector in the plane is unchanged after the rotation. An "invariant plane" is a plane for which every vector in the plane, although it may be affected by the rotation, remains in the plane after the rotation. Geometry of 4D rotations Four-dimensional rotations are of two types: simple rotations and double rotations. Simple rotations A simple rotation about a rotation centre leaves an entire plane through (axis-plane) fixed. Every plane that is completely orthogonal to intersects in a certain point . For each such point is the centre of the 2D rotation induced by in . All these 2D rotations have the same rotation angle . Half-lines from in the axis-plane are not displaced; half-lines from orthogonal to are displaced through ; all other half-lines are displaced through an angle less than . Double rotations For each rotation of 4-space (fixing the origin), there is at least one pair of orthogonal 2-planes and each of which is invariant and whose direct sum is all of 4-space. Hence operating on either of these planes produces an ordinary rotation of that plane. For almost all (all of the 6-dimensional set of rotations except for a 3-dimensional subset), the rotation angles in plane and in plane – both assumed to be nonzero – are different. The unequal rotation angles and satisfying , are almost uniquely determined by . Assuming that 4-space is oriented, then the orientations of the 2-planes and can be chosen consistent with this orientation in two ways. If the rotation angles are unequal (), is sometimes termed a "double rotation". In that case of a double rotation, and are the only pair of invariant planes, and half-lines from the origin in , are displaced through and respectively, and half-lines from the origin not in or are displaced through angles strictly between and . Isoclinic rotations If the rotation angles of a double rotation are equal then there are infinitely many invariant planes instead of just two, and all half-lines from are displaced through the same angle. Such rotations are called isoclinic or equiangular rotations, or Clifford displacements. Beware: not all planes through are invariant under isoclinic rotations; only planes that are spanned by a half-line and the corresponding displaced half-lines are invariant. Assuming that a fixed orientation has been chosen for 4-dimensional space, isoclinic 4D rotations may be put into two categories. To see this, consider an isoclinic rotation , and take an orientation-consistent ordered set of mutually perpendicular half-lines at (denoted as ) such that and span an invariant plane, and therefore and also span an invariant plane. Now assume that only the rotation angle is specified. Then there are in general four isoclinic rotations in planes and with rotation angle , depending on the rotation senses in and . We make the convention that the rotation senses from to and from to are reckoned positive. Then we have the four rotations , , and . and are each other's inverses; so are and . As long as lies between 0 and , these four rotations will be distinct. 
Isoclinic rotations with like signs are denoted as left-isoclinic; those with opposite signs as right-isoclinic. Left- and right-isoclinic rotations are represented respectively by left- and right-multiplication by unit quaternions; see the paragraph "Relation to quaternions" below. The four rotations are pairwise different except if or . The angle corresponds to the identity rotation; corresponds to the central inversion, given by the negative of the identity matrix. These two elements of SO(4) are the only ones that are simultaneously left- and right-isoclinic. Left- and right-isocliny defined as above seem to depend on which specific isoclinic rotation was selected. However, when another isoclinic rotation with its own axes , , , is selected, then one can always choose the order of , , , such that can be transformed into by a rotation rather than by a rotation-reflection (that is, so that the ordered basis , , , is also consistent with the same fixed choice of orientation as , , , ). Therefore, once one has selected an orientation (that is, a system of axes that is universally denoted as right-handed), one can determine the left or right character of a specific isoclinic rotation. Group structure of SO(4) SO(4) is a noncommutative compact 6-dimensional Lie group. Each plane through the rotation centre is the axis-plane of a commutative subgroup isomorphic to SO(2). All these subgroups are mutually conjugate in SO(4). Each pair of completely orthogonal planes through the rotation centre is the pair of invariant planes of a commutative subgroup of SO(4) isomorphic to SO(2) × SO(2). These groups are maximal tori of SO(4), which are all mutually conjugate in SO(4). See also Clifford torus. All left-isoclinic rotations form a noncommutative subgroup S3L of SO(4), which is isomorphic to the multiplicative group S3 of unit quaternions. All right-isoclinic rotations likewise form a subgroup S3R of SO(4) isomorphic to S3. Both S3L and S3R are maximal subgroups of SO(4). Each left-isoclinic rotation commutes with each right-isoclinic rotation. This implies that there exists a direct product S3L × S3R with normal subgroups S3L and S3R; both of the corresponding factor groups are isomorphic to the other factor of the direct product, i.e. isomorphic to S3. (This is not SO(4) or a subgroup of it, because S3L and S3R are not disjoint: the identity and the central inversion each belong to both S3L and S3R.) Each 4D rotation is in two ways the product of a left-isoclinic and a right-isoclinic rotation; the two factors are determined up to the central inversion, i.e. when both factors are multiplied by the central inversion their product is the same rotation again. This implies that S3L × S3R is the universal covering group of SO(4) — its unique double cover — and that S3L and S3R are normal subgroups of SO(4). The identity rotation and the central inversion form a group C2 of order 2, which is the centre of SO(4) and of both S3L and S3R. The centre of a group is a normal subgroup of that group. The factor group of SO(4) by C2 is isomorphic to SO(3) × SO(3). The factor groups of S3L by C2 and of S3R by C2 are each isomorphic to SO(3). Similarly, the factor groups of SO(4) by S3L and of SO(4) by S3R are each isomorphic to SO(3). The topology of SO(4) is the same as that of the Lie group SO(3) × S3, namely the space P3 × S3, where P3 is the real projective space of dimension 3 and S3 is the 3-sphere. However, it is noteworthy that, as a Lie group, SO(4) is not a direct product of Lie groups, and so it is not isomorphic to SO(3) × S3. Special property of SO(4) among rotation groups in general The odd-dimensional rotation groups do not contain the central inversion and are simple groups. 
The even-dimensional rotation groups do contain the central inversion and have the group C2 as their centre. For even n ≥ 6, SO(n) is almost simple, in that the factor group SO(n)/C2 of SO(n) by its centre is a simple group. SO(4) is different: there is no conjugation by any element of SO(4) that transforms left- and right-isoclinic rotations into each other. Reflections transform a left-isoclinic rotation into a right-isoclinic one by conjugation, and vice versa. This implies that under the group O(4) of all isometries with fixed point the distinct subgroups S3L and S3R are conjugate to each other, and so cannot be normal subgroups of O(4). The 5D rotation group SO(5) and all higher rotation groups contain subgroups isomorphic to O(4). Like SO(4), all even-dimensional rotation groups contain isoclinic rotations. But unlike SO(4), in SO(6) and all higher even-dimensional rotation groups any two isoclinic rotations through the same angle are conjugate, and the set of all isoclinic rotations is not even a subgroup of SO(2n), let alone a normal subgroup. Algebra of 4D rotations SO(4) is commonly identified with the group of orientation-preserving isometric linear mappings of a 4D vector space with inner product over the real numbers onto itself. With respect to an orthonormal basis in such a space, SO(4) is represented as the group of real 4th-order orthogonal matrices with determinant +1. Isoclinic decomposition A 4D rotation given by its matrix is decomposed into a left-isoclinic and a right-isoclinic rotation as follows: Let be its matrix with respect to an arbitrary orthonormal basis. Calculate from this the so-called associate matrix. This associate matrix has rank one and is of unit Euclidean norm as a 16D vector if and only if the original matrix is indeed a 4D rotation matrix. In this case there exist real numbers and such that and There are exactly two sets of and such that and . They are each other's opposites. The rotation matrix then equals the product of the two corresponding isoclinic factors. This formula is due to Van Elfrinkhof (1897). The first factor in this decomposition represents a left-isoclinic rotation, the second factor a right-isoclinic rotation. The factors are determined up to the negative 4th-order identity matrix, i.e. the central inversion. Relation to quaternions A point in 4-dimensional space with Cartesian coordinates (u, x, y, z) may be represented by the quaternion P = u + xi + yj + zk. A left-isoclinic rotation is represented by left-multiplication by a unit quaternion QL, giving P ↦ QL P; likewise, a right-isoclinic rotation is represented by right-multiplication by a unit quaternion QR, giving P ↦ P QR. In the preceding section (isoclinic decomposition) it is shown how a general 4D rotation is split into left- and right-isoclinic factors. In quaternion language Van Elfrinkhof's formula reads P′ = QL P QR. According to the German mathematician Felix Klein this formula was already known to Cayley in 1854. Quaternion multiplication is associative. Therefore, P′ = (QL P) QR = QL (P QR), which shows that left-isoclinic and right-isoclinic rotations commute; a small numerical sketch of this quaternion representation is given below. 
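The double multiplication just described is short to write down in code. The sketch below is illustrative; the (w, x, y, z) component ordering and the helper names are choices of this sketch, not a fixed convention.

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions given as (w, x, y, z) arrays.
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def rotate4(ql, qr, p):
    # General SO(4) rotation of the point represented by quaternion p:
    # P' = QL * P * QR with unit quaternions QL, QR. Left multiplication
    # alone is a left-isoclinic rotation, right multiplication alone a
    # right-isoclinic one; associativity is exactly why the two commute.
    return qmul(qmul(ql, p), qr)

# Example: a left-isoclinic rotation through 45 degrees (QR = identity).
ql = np.array([np.cos(np.pi/4), np.sin(np.pi/4), 0.0, 0.0])
qr = np.array([1.0, 0.0, 0.0, 0.0])
point = np.array([1.0, 0.0, 0.0, 0.0])   # the point (u, x, y, z) = (1, 0, 0, 0)
print(rotate4(ql, qr, point))            # ~ (0.7071, 0.7071, 0, 0)
```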
The eigenvalues of 4D rotation matrices The four eigenvalues of a 4D rotation matrix generally occur as two conjugate pairs of complex numbers of unit magnitude. If an eigenvalue is real, it must be ±1, since a rotation leaves the magnitude of a vector unchanged. The conjugate of that eigenvalue is also unity, yielding a pair of eigenvectors which define a fixed plane, and so the rotation is simple. In quaternion notation, a proper (i.e., non-inverting) rotation in SO(4) is a proper simple rotation if and only if the real parts of the unit quaternions QL and QR are equal in magnitude and have the same sign. If they are both zero, all eigenvalues of the rotation are unity, and the rotation is the null rotation. If the real parts of QL and QR are not equal then all eigenvalues are complex, and the rotation is a double rotation. The Euler–Rodrigues formula for 3D rotations Our ordinary 3D space is conveniently treated as the subspace with coordinate system OXYZ of the 4D space with coordinate system OUXYZ. Its rotation group SO(3) is identified with the subgroup of SO(4) consisting of the matrices In Van Elfrinkhof's formula in the preceding subsection this restriction to three dimensions leads to QR = Q̄L, the conjugate of QL. The 3D rotation matrix then becomes the Euler–Rodrigues formula for 3D rotations, which is the representation of the 3D rotation by its Euler–Rodrigues parameters. The corresponding quaternion formula P′ = QPQ⁻¹, where Q = QL, or, in expanded form, is known as the Hamilton–Cayley formula. Hopf coordinates Rotations in 3D space are made mathematically much more tractable by the use of spherical coordinates. Any rotation in 3D can be characterized by a fixed axis of rotation and an invariant plane perpendicular to that axis. Without loss of generality, we can take the xy-plane as the invariant plane and the z-axis as the fixed axis. Since radial distances are not affected by rotation, we can characterize a rotation by its effect on the unit sphere (2-sphere) by spherical coordinates referred to the fixed axis and invariant plane: Because x² + y² + z² = 1, the points (x, y, z) lie on the unit 2-sphere. A point with angles (θ, φ), rotated by an angle φ0 about the z-axis, becomes the point with angles (θ, φ + φ0). While hyperspherical coordinates are also useful in dealing with 4D rotations, an even more useful coordinate system for 4D is provided by Hopf coordinates, which are a set of three angular coordinates specifying a position on the 3-sphere. For example: Because the squares of the four coordinates sum to 1, the points lie on the 3-sphere. In 4D space, every rotation about the origin has two invariant planes which are completely orthogonal to each other and intersect at the origin, and are rotated by two independent angles. Without loss of generality, we can choose, respectively, the - and -planes as these invariant planes. A rotation in 4D of a point through these two angles is then simply expressed in Hopf coordinates as . Visualization of 4D rotations Every rotation in 3D space has a fixed axis unchanged by rotation. The rotation is completely specified by specifying the axis of rotation and the angle of rotation about that axis. Without loss of generality, this axis may be chosen as the z-axis of a Cartesian coordinate system, allowing a simpler visualization of the rotation. In 3D space, the spherical coordinates may be seen as a parametric expression of the 2-sphere. For fixed θ they describe circles on the 2-sphere which are perpendicular to the z-axis, and these circles may be viewed as trajectories of a point on the sphere. A point on the sphere, under a rotation about the z-axis, will follow a trajectory as the angle φ varies. The trajectory may be viewed as a rotation parametric in time, where the angle of rotation is linear in time: φ = ωt, with ω being an "angular velocity". Analogous to the 3D case, every rotation in 4D space has at least two invariant axis-planes which are left invariant by the rotation and are completely orthogonal (i.e. 
they intersect at a point). The rotation is completely specified by specifying the axis planes and the angles of rotation about them. Without loss of generality, these axis planes may be chosen to be the - and -planes of a Cartesian coordinate system, allowing a simpler visualization of the rotation. In 4D space, the Hopf angles parameterize the 3-sphere. For fixed they describe a torus parameterized by and , with being the special case of the Clifford torus in the - and -planes. These tori are not the usual tori found in 3D-space. While they are still 2D surfaces, they are embedded in the 3-sphere. The 3-sphere can be stereographically projected onto the whole Euclidean 3D-space, and these tori are then seen as the usual tori of revolution. It can be seen that a point specified by undergoing a rotation with the - and -planes invariant will remain on the torus specified by . The trajectory of a point can be written as a function of time as and stereographically projected onto its associated torus, as in the figures below. In these figures, the initial point is taken to be , i.e. on the Clifford torus. In Fig. 1, two simple rotation trajectories are shown in black, while a left and a right isoclinic trajectory is shown in red and blue respectively. In Fig. 2, a general rotation in which and is shown, while in Fig. 3, a general rotation in which and is shown. Below, a spinning 5-cell is visualized with the fourth dimension squashed and displayed as colour. The Clifford torus described above is depicted in its rectangular (wrapping) form. Generating 4D rotation matrices Four-dimensional rotations can be derived from Rodrigues' rotation formula and the Cayley formula. Let be a 4 × 4 skew-symmetric matrix. The skew-symmetric matrix can be uniquely decomposed as into two skew-symmetric matrices and satisfying the properties , and , where and are the eigenvalues of . Then, the 4D rotation matrices can be obtained from the skew-symmetric matrices and by Rodrigues' rotation formula and the Cayley formula. Let be a 4 × 4 nonzero skew-symmetric matrix with the set of eigenvalues Then can be decomposed as where and are skew-symmetric matrices satisfying the properties Moreover, the skew-symmetric matrices and are uniquely obtained as and Then, is a rotation matrix in , which is generated by Rodrigues' rotation formula, with the set of eigenvalues Also, is a rotation matrix in , which is generated by Cayley's rotation formula, such that the set of eigenvalues of is, The generating rotation matrix can be classified with respect to the values and as follows: If and or vice versa, then the formulae generate simple rotations; If and are nonzero and , then the formulae generate double rotations; If and are nonzero and , then the formulae generate isoclinic rotations. See also Laplace–Runge–Lenz vector Lorentz group Orthogonal group Orthogonal matrix Plane of rotation Poincaré group Quaternions and spatial rotation Notes References Bibliography L. van Elfrinkhof: Eene eigenschap van de orthogonale substitutie van de vierde orde. Handelingen van het 6e Nederlandsch Natuurkundig en Geneeskundig Congres, Delft, 1897. Felix Klein: Elementary Mathematics from an Advanced Standpoint: Arithmetic, Algebra, Analysis. Translated by E.R. Hedrick and C.A. Noble. The Macmillan Company, New York, 1932. Henry Parker Manning: Geometry of four dimensions. The Macmillan Company, 1914. Republished unaltered and unabridged by Dover Publications in 1954. 
In this monograph four-dimensional geometry is developed from first principles in a synthetic axiomatic way. Manning's work can be considered as a direct extension of the works of Euclid and Hilbert to four dimensions. J. H. Conway and D. A. Smith: On Quaternions and Octonions: Their Geometry, Arithmetic, and Symmetry. A. K. Peters, 2003. P.H.Schoute: Mehrdimensionale Geometrie. Leipzig: G.J.Göschensche Verlagshandlung. Volume 1 (Sammlung Schubert XXXV): Die linearen Räume, 1902. Volume 2 (Sammlung Schubert XXXVI): Die Polytope, 1905. Four-dimensional geometry Quaternions Rotation
Rotations in 4-dimensional Euclidean space
[ "Physics" ]
4,181
[ "Physical phenomena", "Motion (physics)", "Classical mechanics", "Rotation" ]
2,309,594
https://en.wikipedia.org/wiki/Flipped%20SO%2810%29
Flipped SO(10) is a grand unified theory which is to standard SO(10) as flipped SU(5) is to SU(5). Details In conventional SO(10) models, the fermions lie in three spinorial 16 representations, one for each generation, which decompose under [SU(5) × U(1)χ]/Z5 as This can either be the Georgi–Glashow SU(5) or flipped SU(5). In flipped SO(10) models, however, the gauge group is not just SO(10) but SO(10)F × U(1)B or [SO(10)F × U(1)B]/Z4. The fermion fields are now three copies of These contain the Standard Model fermions as well as additional vector fermions with GUT-scale masses. If we suppose [SU(5) × U(1)A]/Z5 is a subgroup of SO(10)F, then we have the intermediate scale symmetry breaking [SO(10)F × U(1)B]/Z4 → [SU(5) × U(1)χ]/Z5 where In that case, note that the Standard Model fermion fields (including the right-handed neutrinos) come from all three [SO(10)F × U(1)B]/Z4 representations. In particular, they happen to be the 101 of 161, the of 10−2 and the 15 of 14 (apologies to the readers for mixing up SO(10) × U(1) notation with SU(5) × U(1) notation, but it would be really cumbersome if we had to spell out which group any given notation happens to refer to. It is left up to the reader to determine the group from the context. This is a standard practice in the GUT model-building literature anyway). The other remaining fermions are vectorlike. To see this, note that with a 161H and a Higgs field, we can have VEVs which break the GUT group down to [SU(5) × U(1)χ]/Z5. The Yukawa coupling 161H 161 10−2 will pair up the 5−2 and fermions. And we can always introduce a sterile neutrino φ which is invariant under [SO(10) × U(1)B]/Z4 and add the Yukawa coupling or we can add the nonrenormalizable term Either way, the 10 component of the fermion 161 is taken care of, so that it is no longer chiral. It has been left unspecified so far whether [SU(5) × U(1)χ]/Z5 is the Georgi–Glashow SU(5) or the flipped SU(5). This is because both alternatives lead to reasonable GUT models. One reason for studying flipped SO(10) is that it can be derived from an E6 GUT model. References Nobuhiro Maekawa, Toshifumi Yamashita, "Flipped SO(10) model", 2003 K. Tamvakis, "Flipped SO(10)", 1988 Grand Unified Theory
Flipped SO(10)
[ "Physics" ]
671
[ "Unsolved problems in physics", "Particle physics", "Grand Unified Theory", "Particle physics stubs", "Physics beyond the Standard Model" ]
2,310,753
https://en.wikipedia.org/wiki/Radial%20basis%20function
In mathematics, a radial basis function (RBF) is a real-valued function whose value depends only on the distance between the input and some fixed point: either the origin, so that , or some other fixed point , called a center, so that . Any function that satisfies the property is a radial function. The distance is usually Euclidean distance, although other metrics are sometimes used. Radial basis functions are often used as a collection which forms a basis for some function space of interest, hence the name. Sums of radial basis functions are typically used to approximate given functions. This approximation process can also be interpreted as a simple kind of neural network; this was the context in which they were originally applied to machine learning, in work by David Broomhead and David Lowe in 1988, which stemmed from Michael J. D. Powell's seminal research from 1977. RBFs are also used as a kernel in support vector classification. The technique has proven effective and flexible enough that radial basis functions are now applied in a variety of engineering applications. Definition A radial function is a function . When paired with a norm on a vector space , a function of the form is said to be a radial kernel centered at . A radial function and the associated radial kernels are said to be radial basis functions if, for any finite set of nodes , all of the following conditions are true: Examples Commonly used types of radial basis functions include (writing and using to indicate a shape parameter that can be used to scale the input of the radial kernel): Approximation Radial basis functions are typically used to build up function approximations of the form where the approximating function is represented as a sum of radial basis functions, each associated with a different center and weighted by an appropriate coefficient. The weights can be estimated using the matrix methods of linear least squares, because the approximating function is linear in the weights. Approximation schemes of this kind have been particularly used in time series prediction and control of nonlinear systems exhibiting sufficiently simple chaotic behaviour, and in 3D reconstruction in computer graphics (for example, hierarchical RBF and Pose Space Deformation). RBF Network The sum can also be interpreted as a rather simple single-layer type of artificial neural network called a radial basis function network, with the radial basis functions taking on the role of the activation functions of the network. It can be shown that any continuous function on a compact interval can in principle be interpolated with arbitrary accuracy by a sum of this form, if a sufficiently large number of radial basis functions is used. The approximant is differentiable with respect to the weights. The weights could thus be learned using any of the standard iterative methods for neural networks. Using radial basis functions in this manner yields a reasonable interpolation approach, provided that the fitting set has been chosen such that it covers the entire range systematically (equidistant data points are ideal). However, without a polynomial term that is orthogonal to the radial basis functions, estimates outside the fitting set tend to perform poorly; a small least-squares fitting sketch is given below. 
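As an illustration of the approximation scheme above, the following sketch fits the weights by linear least squares in one dimension. The Gaussian radial function, the shape parameter, and the test data are assumptions of this sketch, not prescriptions from the text.

```python
import numpy as np

def design_matrix(x, centers, eps=1.0):
    # Phi[i, j] = phi(|x_i - c_j|) with a Gaussian radial function.
    return np.exp(-(eps * np.abs(x[:, None] - centers[None, :])) ** 2)

def rbf_fit(x, y, centers, eps=1.0):
    # The approximant is linear in the weights, so least squares suffices.
    weights, *_ = np.linalg.lstsq(design_matrix(x, centers, eps), y, rcond=None)
    return weights

# Example: approximate sin(x) on [0, 2*pi] from 20 samples and 10 centers.
x = np.linspace(0.0, 2.0 * np.pi, 20)
centers = np.linspace(0.0, 2.0 * np.pi, 10)
w = rbf_fit(x, np.sin(x), centers)
residual = design_matrix(x, centers) @ w - np.sin(x)
print(np.max(np.abs(residual)))  # small on the fitting set
```

Consistent with the caveat above, such a fit can behave poorly outside the range covered by the data, which is why a low-order polynomial term is often added in practice.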
RBFs for PDEs Radial basis functions are used to approximate functions and so can be used to discretize and numerically solve partial differential equations (PDEs). This was first done in 1990 by E. J. Kansa, who developed the first RBF-based numerical method. It is called the Kansa method and was used to solve the elliptic Poisson equation and the linear advection-diffusion equation. The function values at points in the domain are approximated by the linear combination of RBFs: The derivatives are approximated as follows: where are the number of points in the discretized domain, the dimension of the domain and the scalar coefficients that are unchanged by the differential operator. Several numerical methods based on radial basis functions were developed thereafter, including the RBF-FD method, the RBF-QR method and the RBF-PUM method. See also Matérn covariance function Radial basis function interpolation Kansa method References Further reading Sirayanone, S., 1988, Comparative studies of kriging, multiquadric-biharmonic, and other methods for solving mineral resource problems, Ph.D. dissertation, Dept. of Earth Sciences, Iowa State University, Ames, Iowa. Artificial neural networks Interpolation Numerical analysis
Radial basis function
[ "Mathematics" ]
862
[ "Mathematical relations", "Computational mathematics", "Approximations", "Numerical analysis" ]
12,179,236
https://en.wikipedia.org/wiki/Periplanone%20B
Periplanone B is a pheromone produced by the female American cockroach, Periplaneta americana. It acts as a sex attractant for male cockroaches, especially at short range. History The activity of this pheromone was first described in 1952, but it was not until 25 years later that Persoons et al. reported the gross structure of periplanones A and B. The stereochemical configuration and the first total synthesis were reported by W. Clark Still's group at Columbia University in 1979. References Insect pheromones Epoxides Sesquiterpenes Spiro compounds Ketones Dienes Oxygen heterocycles Vinylidene compounds
Periplanone B
[ "Chemistry" ]
144
[ "Insect pheromones", "Chemical ecology", "Ketones", "Functional groups", "Organic compounds", "Spiro compounds" ]
12,181,958
https://en.wikipedia.org/wiki/Eikonal%20approximation
In theoretical physics, the eikonal approximation (Greek εἰκών for likeness, icon or image) is an approximation method useful in wave scattering equations, which occur in optics, seismology, quantum mechanics, quantum electrodynamics, and partial wave expansion. Informal description The main advantage that the eikonal approximation offers is that the equations reduce to a differential equation in a single variable. This reduction into a single variable is the result of the straight-line approximation, or the eikonal approximation, which allows us to choose the straight line as a special direction. Relation to the WKB approximation The early steps involved in the eikonal approximation in quantum mechanics are very closely related to the WKB approximation for one-dimensional waves. The WKB method, like the eikonal approximation, reduces the equations to a differential equation in a single variable. But the difficulty with the WKB approximation is that this variable is described by the trajectory of the particle, which, in general, is complicated. Formal description Making use of the WKB approximation, we can write the wave function of the scattered system in terms of the action S: Ψ = exp(iS/ħ). Inserting this wavefunction into the Schrödinger equation, without the presence of a magnetic field, we obtain (∇S)² − iħ∇²S = 2m(E − V). We write S as a power series in ħ: S = S₀ + ħS₁ + ħ²S₂ + ⋯. For the zeroth order: (∇S₀)² = 2m(E − V). If we consider the one-dimensional case, then ∇ reduces to d/dx. We obtain a differential equation for S₀ that can be integrated directly, with the boundary condition fixed at large distance from the scatterer. See also Eikonal equation Correspondence principle Principle of least action References Notes Eikonal Approximation K. V. Shajesh Department of Physics and Astronomy, University of Oklahoma Further reading Theoretical physics Mathematical analysis
Eikonal approximation
[ "Physics", "Mathematics" ]
337
[ "Mathematical analysis", "Theoretical physics" ]
12,182,350
https://en.wikipedia.org/wiki/Remez%20inequality
In mathematics, the Remez inequality, discovered by the Soviet mathematician Evgeny Yakovlevich Remez, gives a bound on the sup norms of certain polynomials, the bound being attained by the Chebyshev polynomials. The inequality Let σ be an arbitrary fixed positive number. Define the class of polynomials πn(σ) to be those polynomials p of degree n for which |p(x)| ≤ 1 on some set of measure ≥ 2 contained in the closed interval [−1, 1+σ]. Then the Remez inequality states that sup{ ||p|| : p ∈ πn(σ) } = ||Tn||, where Tn(x) is the Chebyshev polynomial of degree n, and the supremum norm is taken over the interval [−1, 1+σ]. Observe that Tn is increasing on [1, +∞), hence ||Tn|| = Tn(1+σ). The Remez inequality, combined with an estimate on Chebyshev polynomials, implies the following corollary: If J ⊂ R is a finite interval, and E ⊂ J is an arbitrary measurable set, then max over J of |p| ≤ (4|J|/|E|)^n · sup over E of |p| for any polynomial p of degree n. Extensions: Nazarov–Turán lemma Inequalities similar to the Remez inequality have been proved for different classes of functions, and are known as Remez-type inequalities. One important example is Nazarov's inequality for exponential sums: Nazarov's inequality. Let p(t) = a1·e^(λ1 t) + ⋯ + an·e^(λn t) be an exponential sum (with arbitrary λk ∈ C), and let J ⊂ R be a finite interval, E ⊂ J an arbitrary measurable set. Then where C > 0 is a numerical constant. In the special case when the λk are pure imaginary and integer, and the subset E is itself an interval, the inequality was proved by Pál Turán and is known as Turán's lemma. This inequality also extends to in the following way for some A > 0 independent of p, E, and n. When a similar inequality holds for p > 2. For p = ∞ there is an extension to multidimensional polynomials. Proof: Applying Nazarov's lemma to leads to thus Now fix a set and choose such that , that is Note that this implies: Now which completes the proof. Pólya inequality One of the corollaries of the Remez inequality is the Pólya inequality, which was proved by George Pólya, and states that the Lebesgue measure of a sub-level set of a polynomial p of degree n is bounded in terms of the leading coefficient LC(p) as follows: meas{ x ∈ R : |p(x)| ≤ a } ≤ 4 · (a / (2·LC(p)))^(1/n). References Theorems in analysis Inequalities
Remez inequality
[ "Mathematics" ]
512
[ "Theorems in mathematical analysis", "Mathematical analysis", "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Mathematical theorems" ]
12,182,912
https://en.wikipedia.org/wiki/PAVA%20spray
PAVA spray is an incapacitant spray similar to pepper spray. It is dispensed from a handheld canister in a liquid stream. It contains a 0.3% solution of pelargonic acid vanillylamide (PAVA), also called nonivamide, a synthetic capsaicinoid (analogue of capsaicin), in a solvent of aqueous ethanol. The propellant is nitrogen. This concentration has been selected because it is the minimum concentration which will fulfill the purpose of the equipment, namely to minimise a person's capacity for resistance without unnecessarily prolonging their discomfort. PAVA is significantly more potent than CS gas. The liquid stream is a spray pattern and has a maximum effective range of up to . Maximum accuracy, however, will be achieved over a distance of . The operating distance is the distance between the canister and the subject's eyes, not the distance between the user and the subject. Effects PAVA primarily affects the eyes, causing closure and severe pain. The pain to the eyes is reported to be greater than that caused by CS. The effectiveness rate is very high once PAVA gets into the eyes; however, there have been occasions where PAVA and CS have failed to work, especially when the subject is under the influence of alcohol or other drugs. Exposure to fresh moving air will normally result in a significant recovery from the effects of PAVA within 15–35 minutes. Pharmacologically, like other capsaicinoids, PAVA works by binding directly to the receptors (TRPV1) that normally produce the pain and sensation of heat, as if the subject were exposed to scalding heat. Usage PAVA is used widely as a less-lethal, temporary defence tool around the world, including in the United Kingdom, India, Switzerland, and other countries. British police and HM Prison Service PAVA is approved for police and prison service use in the United Kingdom. British police forces had traditionally used CS gas spray, but with the more widespread carriage of tasers, PAVA has now entirely replaced its predecessor due to its non-flammable nature. Legal restrictions United States: California, Minnesota, Delaware, Washington, D.C. and Wisconsin restrict the use of less-than-lethal projectiles and devices using them. United Kingdom: Citizens may not use PAVA, under Section 5 of the Firearms Act 1968; however, police, prison officers and other Crown servants are allowed to use PAVA to uphold the law. Treatment There are various treatments to combat the effects of nonivamide. One popular method involves administering a one-to-one solution of milk of magnesia and water to the eyes. Doctors also recommend not using oils or creams on the skin, and not wearing contact lenses, if one wants to minimise the effects of nonivamide. See also Nonivamide References Chemical weapons Lachrymatory agents Incapacitating agents
PAVA spray
[ "Chemistry", "Biology" ]
592
[ "Incapacitating agents", "Chemical accident", "Chemical weapons", "Lachrymatory agents", "Biochemistry" ]
9,031,857
https://en.wikipedia.org/wiki/CAD/CAM%20dentistry
CAD/CAM dentistry is a field of dentistry and prosthodontics using CAD/CAM (computer-aided design and computer-aided manufacturing) to improve the design and creation of dental restorations, especially dental prostheses, including crowns, crown lays, veneers, inlays and onlays, fixed dental prostheses (bridges), dental implant supported restorations, dentures (removable or fixed), and orthodontic appliances. CAD/CAM technology allows the delivery of well-fitting, aesthetic, and durable prostheses for the patient. CAD/CAM complements earlier technologies used for these purposes by any combination of increasing the speed of design and creation; increasing the convenience or simplicity of the design, creation, and insertion processes; and making possible restorations and appliances that otherwise would have been infeasible. Other goals include reducing unit cost and making affordable restorations and appliances that otherwise would have been prohibitively expensive. However, to date, chairside CAD/CAM often involves extra time on the part of the dentist, and the fee is often at least two times higher than for conventional restorative treatments using lab services. Like other CAD/CAM fields, CAD/CAM dentistry uses subtractive processes (such as CNC milling) and additive processes (such as 3D printing) to produce physical instances from 3D models. Some mentions of "CAD/CAM" and "milling technology" in dental technology have loosely treated those two terms as if they were interchangeable, largely because before the 2010s most CAD/CAM-directed manufacturing was CNC cutting rather than additive manufacturing, so CAD/CAM and CNC were usually coinstantiated. Whereas this loose usage was once somewhat close to accurate, it no longer is: the term "CAD/CAM" does not specify the method of production, and today additive and subtractive methods are both widely used. Application of CAD/CAM in dentistry Computer-aided design (CAD) and computer-aided manufacture (CAM) together describe a process where non-digital data is captured, converted into a digital format, edited as necessary, and subsequently converted back into a physical form with the exact dimensions and materials specified during the digital design process, usually by either 3D printing or milling. This set of stages is known as a "digital workflow". CAD/CAM may be used to provide a machine-led means of fabricating dental prostheses that are used to restore or replace teeth. This is an alternative to the traditional process of prosthesis fabrication using physical techniques, in which the dentist makes an impression of the site that is to be restored. This is then transported to the laboratory where a study model is made. On that model, an imitation of the final design is made using wax – known as a wax-up – which represents the size and shape of the finished dental prosthesis. The wax is then encased in an investment mold, burned out and replaced with the desired material as part of lost wax casting. CAD/CAM makes such procedures unnecessary because the impression is recorded digitally and the manufacture of the appliance is accomplished by additive (3D printing) or subtractive (milling) means. 
Examples of dental prostheses that can be manufactured using this system include: Study models Orthodontic devices Cuspal coverage restorations Fixed dental prostheses Veneers Removable denture frameworks Implant planning and fabrication History Although CAD/CAM dentistry was used in the mid-1980s, early efforts were considered a cumbersome novelty, requiring an inordinate amount of time to produce a viable product. This inefficiency prevented its use within dental offices and limited it to labside use (that is, use within dental laboratories). As adjunctive techniques, software, and materials improved, the chairside use of CAD/CAM (use within dental offices/surgeries) increased. For example, the commercialization of Cerec by Sirona made CAD/CAM available to dentists who formerly would not have had avenues for using it. The first CAD/CAM system used in dentistry was produced in the 1970s by Professor François Duret and colleagues. The process contains a number of steps. Firstly, an optical impression of the intraoral abutment is obtained by scanning with an intra-oral digitizer. The digitized information is transferred to the monitor where a 3D graphic design is produced. The restoration can then be designed on the computer, and the final restoration is milled from a block. Professor Duret and colleagues subsequently developed the 'Sopha system'; however, this was not widely used, perhaps because it lacked the accuracy, materials, and computing capabilities required in dentistry. The second generation of CAD/CAM attempted to develop this system further but struggled to obtain occlusal morphology using an intra-oral scanner, so a stone model was prepared first and then digitised. Development of various digitizers followed: a laser beam with a position-sensitive detector, a contact probe, and a laser with a charge-coupled device camera. Due to the development of more sophisticated CAD/CAM systems, both metal and ceramic restorations could be produced. Mormann and colleagues later developed a CAD/CAM system named CEREC, which they used to produce a type of dental restoration called an inlay. The inlay preparation is scanned using an intra-oral camera, and a compact chairside machine allows the restoration to be designed and milled from a ceramic block. The major advantage of this system was the chairside approach allowing same-day restorations. However, this technique was limited in that it could not be used for contouring or occlusal patterns. The CEREC system is used widely across the world, and studies have shown long-term clinical success. The Procera system was developed by Anderson and colleagues, who used CAD/CAM to develop composite veneers. The Procera system later developed into a processing centre connected to satellite digitisers worldwide to produce all-ceramic frameworks. This system is used around the world today. Difference from conventional restoration Chairside CAD/CAM restoration typically allows the prosthesis to be created and luted (bonded) the same day. Conventional prostheses, such as crowns, have temporaries placed for one to several weeks while a dental laboratory or in-house dental lab produces the restoration. The patient returns later to have the temporaries removed and the laboratory-made crown cemented or bonded in place. An in-house CAD/CAM system enables the dentist to create a finished inlay in as little as one hour. 
CAD/CAM systems use an optical camera to take a virtual impression by creating a 3D image which is imported into a software program, resulting in a computer-generated cast on which the restoration is designed. Bonded veneer CAD/CAM restorations are more conservative in their preparation of the tooth. As bonding is more effective on tooth enamel than the underlying dentin, care is taken not to remove the enamel layer. Though one-day service is a benefit typically claimed by dentists offering chairside CAD/CAM services, the dentist's time is commonly doubled and the fee is therefore doubled. Process All CAD/CAM systems consist of a computer-aided design (CAD) and a computer-aided manufacture (CAM) stage, and the key stages can broadly be summarised as the following: Optical/contact scanning that captures the intraoral or extraoral condition of the patient. Use of software that can turn the captured images into a digital model upon which a dental prosthesis can be designed and prepared for fabrication. Instruction of devices that convert the design into a product by way of 3D printing or milling, depending on the CAD/CAM system used. For a single-unit prosthesis, after decayed or broken areas of the tooth are corrected by the dentist, an optical impression is made of the prepared tooth and the surrounding teeth. These images are then turned into a digital model by proprietary software, within which the prosthesis is created virtually. The software sends this data to a milling machine where the prosthesis is milled. Stains and glazes can be added to the surfaces of the milled ceramic crown or bridge to correct the otherwise monochromatic appearance of the restoration. The restoration is then adjusted in the patient's mouth and luted or bonded in place. Integrating optical scan data with cone beam computed tomography datasets within implantology software also enables surgical teams to digitally plan implant placement and fabricate a surgical guide for precise implementation of that plan. Combining CAD/CAM software with 3D images from a 3D imaging system reduces the risk of intraoperative mistakes. Computer-aided design (CAD) To design and manufacture a dental prosthesis, the physical space which it will replace within the mouth has to be converted into a digital format. To do this, a digital impression must be taken. This will convert the space into a digital image, which must then be converted into a file format that can be read by the CAD software system being used. Once in digital form, the structures within the mouth are displayed as a 3D image. Using CAD software, the size and shape of the restoration can be virtually altered, thus replacing the wax-up stage present in the traditional approach. Digital impressions Digital impressions are a means of recording the shape of a patient's dental structures by using scanners. In CAD/CAM's infancy, desktop scanners were used which digitised study models or dental impressions – indirect representations of the patient's dentition. These devices are also known as extraoral scanners and can be contact or non-contact. Contact scanners use stylus profilometers that are placed against and run along the contours of an object. The contact of the stylus against the object is represented digitally as a set of co-ordinates (point cloud), which is analyzed by an onboard mathematical algorithm to build up a 3D image of the object (mesh).   
Non-contact scanners capture the shape of dental structures by using optics, such as light-emitting diodes. Light is emitted from the scanner, hits the object, and then reflects into an onboard sensor, usually a charge-coupled device (CCD) or a position-sensing detector (PSD). These reflections allow the scanner to build up a 3D image of the object, as with contact scanners. Extraoral non-contact scanners can obtain this information by different means, namely structured light, laser light, and confocal microscopy. Contact scanners are more accurate than non-contact scanners but are rarely used anymore because they are slow and their imaging is unnecessarily detailed, ten times what is required for the success of a dental prosthesis. Intra-oral scanners are a form of non-contact scanner that has grown in popularity due to their ability to digitize a patient's dentition directly in the mouth, avoiding the need for either a physical impression or a plaster study model, as is the case with extraoral scanners. This allows the fabrication of dental prostheses to be a completely digital process from the very first stage. Older scanners required a contrast powder to be placed on all the structures that were to be scanned, whereas newer products do not require such a step. Intra-oral scanners interpret reflected light to produce a 3D image representing the patient's teeth, using systems including: Confocal laser scanning microscopy Triangulation Optical coherence tomography Accordion fringe interferometry Active wavefront sampling The file format most widely recognised by CAD software is the STL file. This file type records and describes an object's geometry as a series of connected triangles, the density of which depends upon the "resolution and the mathematical algorithm that was used to create the data". Most available scanners will produce STL files; however, some produce proprietary file types that can only be interpreted by select CAD software. CAD Software CAD software visualises the digital impression captured by extraoral or intraoral scanners and provides numerous design tools. Popular software packages include Dental System, DentalCAD and CEREC. Some of the most common ways in which the virtual dental prosthesis can be edited are as follows: The size and shape of restorations can be adjusted. The shape of teeth is often adjusted using dental burs prior to scanning to accommodate a dental prosthesis such as a crown. This is called a preparation, and the edge of this is known as the margin. Margins need to be demarcated so that the dental prosthesis finishes flush with the rest of the tooth, to reduce the chances of plaque build-up under the prosthesis. Margins, which would normally have to be delineated visually by a technician, can be detected automatically. They can also be adjusted manually. The path-of-insertion axis, which dictates the direction the dental prosthesis must move to fit onto the tooth, can be determined automatically. Measurements can be made between points on the digital model, which can help inform the technician if any modifications to the tooth are needed to accommodate the dental prosthesis. The material must be thick enough to provide adequate strength, but also not so thick as to cause the restored tooth to contact the opposing tooth before all other teeth in the arch – this would prop the patient's mouth open and prevent them from being able to bite normally. 
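Since scanned designs are commonly exchanged as STL meshes of connected triangles, here is a minimal sketch (an assumed illustration, not from the article) of reading a binary STL file and summing its triangle areas; the helper names and the file name crown_design.stl are hypothetical.

```python
import struct

def read_binary_stl(path):
    """Read a binary STL file: an 80-byte header, a uint32 triangle count,
    then 50 bytes per triangle (normal + 3 vertices as float32, plus a
    2-byte attribute field)."""
    triangles = []
    with open(path, "rb") as f:
        f.read(80)                                  # header (ignored)
        (count,) = struct.unpack("<I", f.read(4))
        for _ in range(count):
            data = struct.unpack("<12fH", f.read(50))
            vertices = [data[3:6], data[6:9], data[9:12]]  # skip the normal
            triangles.append(vertices)
    return triangles

def surface_area(triangles):
    """Sum triangle areas via the cross-product formula |u x v| / 2."""
    total = 0.0
    for a, b, c in triangles:
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        cx = u[1] * v[2] - u[2] * v[1]
        cy = u[2] * v[0] - u[0] * v[2]
        cz = u[0] * v[1] - u[1] * v[0]
        total += 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5
    return total

# Hypothetical usage with a placeholder file name:
# tris = read_binary_stl("crown_design.stl")
# print(len(tris), "triangles, area =", surface_area(tris))
```

The triangle density mentioned above corresponds directly to the count read from the header; a denser mesh trades larger files for a smoother approximation of the scanned surface.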
Materials used in computer-aided manufacture CAD/CAM is a rapidly evolving field, hence the materials in use are always changing. Materials that can currently be manufactured using CAD/CAM software include metals, porcelain, lithium disilicate, zirconia and resin materials. CAD/CAM restorations are milled from solid blocks of ceramic or composite resin. If pre-sintered ceramic ingots are used, subsequent sintering to reduce porosity is required, and the CAD/CAM technology needs to account for any sintering shrinkage during this process. Glass-based restorations can also be manufactured using CAD/CAM. Similar to ceramics, milling of glass ingots occurs, and molten glass infiltration is used to reduce porosity. An advantage of materials manufactured by CAD/CAM is the consistency in quality of restorations when mass-produced. Metals Metals such as CoCr and titanium can be manufactured using CAD/CAM software. Precious metals cannot be machined for a variety of reasons, including expense. Pre-sintered CoCr blocks are available and require subsequent sintering to achieve the desired mechanical properties. This method replaces the more traditional lost-wax technique. Ceramics Feldspathic and leucite-reinforced ceramics The microstructure of feldspathic and leucite-reinforced ceramics is a glassy matrix loaded with crystalline particles. These ceramics have low flexural strength, very good optical properties, and advantageous bonding ability. A major advantage is their good aesthetics, with a variety of shades available and high translucency. However, the material is fragile and is susceptible to damage by occlusal forces. Lithium disilicate, zirconium oxide and lithium silicate ceramics Lithium disilicate, zirconium oxide and lithium silicate ceramics also have a biphasic structure with crystalline particles dispersed in a glass matrix. They have high flexural strength, good optical properties, and the ability to bond. The ability to produce highly aesthetic restorations in a variety of shades is useful, as is the high mechanical strength. Zirconia Zirconia has a polycrystalline structure and a high flexural strength. However, both its optical properties and its ability to bond are weak. Its main advantage is its mechanical strength. CAD/CAM processing means that polycrystalline zirconia can be utilised for copings and frameworks. Its superior mechanical properties mean it can be used for long-span bridgework and posterior fixed partial dentures, and cores can be produced in thinner layers. However, the aesthetics of zirconia restorations are not as good as those of other types of ceramic. Resin materials Three resin materials are available: resin composite, PMMA, and nano-ceramics. PMMA is made of polymethylmethacrylate polymers with no filler, whereas resin composite is composed of inorganic filler in a resin matrix. Similarly, nano-ceramics consist of nanoparticles embedded in a resin matrix. All three materials have weak flexural strength and disadvantageous optical properties; however, their ability to bond is very effective. An advantage of these materials is the ability to manufacture them quickly through fast milling, so they are well suited for direct composite repairs. However, the aesthetic quality of these materials limits their utility. Advantages and drawbacks CAD/CAM has improved the quality of prostheses in dentistry and standardised the production process. It has increased productivity and the opportunity to work with new materials with a high level of accuracy. 
Though CAD/CAM is a major technological advancement, it is important that dentists' technique is suited to CAD/CAM milling. This includes: correct tooth preparation with a continuous preparation margin (which is recognisable to the scanner, e.g. in the form of a chamfer); avoiding the use of shoulderless preparations and parallel walls; and the use of rounded incisal and occlusal edges to prevent the concentration of tension. Crowns and bridges require a precise fit on tooth abutments or stumps. Fit accuracy varies according to the CAD/CAM system utilized and from user to user. Some systems are designed to attain higher standards of accuracy than others, and some users are more skilled than others. Twenty new systems were expected to become available by 2020. Further research is needed to evaluate CAD/CAM technology, compared to other attachment systems (such as ball, magnetic and telescopic systems), as an option for attaching overdentures to implants. Advantages of CAD/CAM The advantages CAD/CAM provides when compared with traditional laboratory- and chairside-led techniques are that it 1) allows for the use of materials otherwise unavailable in the laboratory; 2) provides cheaper alternatives when compared with conventional materials; 3) decreases labour cost and time for dental technicians; and 4) standardises the quality of restorations. Ceramic materials in particular can be highly time-consuming to work with. To make a ceramic dental prosthesis by hand, the technician has to meticulously build up porcelain powder and sinter it onto the surface of a coping. With CAD/CAM, labour times are significantly reduced, with some reviews reporting that only 5–6 minutes of technician input is required to produce a dental prosthesis. In this way, the cost of production is reduced because labour costs are lower. Furthermore, CAD/CAM systems mill prostheses from blocks of material which are mass-manufactured, again reducing costs for dental offices and laboratories when compared with traditional techniques. These blocks are manufactured so that internal porosities, which are difficult to eliminate during conventional fabrication, have been removed. CAD/CAM has also found great merit with regard to reducing the shrinkage which occurs when ceramics are heated during sintering – a process required to give the ceramic restoration adequate strength so it can be used successfully within the mouth. It is difficult to account for this phenomenon in a dental laboratory using traditional techniques. CAM can reduce shrinkage by two different methods. The first is to produce a prosthesis slightly larger than the desired size; on firing, the prosthesis will then shrink to the intended size. The second is to mill the prosthesis from a block that has already been fully sintered, which eliminates shrinkage but causes increased wear on cutting tools because the block is stronger than when partially sintered. Benefits of intraoral scanning The advent of intraoral scanners affords additional advantages when compared with the traditional physical workflow, particularly for dentists. In the traditional method, dental impressions must be taken, and the materials used to facilitate this are vulnerable to distortion over time, which can decrease the accuracy of the eventual dental prosthesis. These inaccuracies are compounded by subsequent steps such as the fabrication of study models based on the impressions. 
Intra-oral scanners rapidly digitise what they scan, which removes the risk of distortion of or damage to the data. Furthermore, dental impressions are often discomforting for patients, particularly those who have a strong gag reflex, due to the bulk of material needed to capture a patient's entire dentition. Intra-oral scanners reduce this discomfort. Intraoral scanning saves considerable time in post-processing when compared with conventional dental impressions, because the 3D model can be instantly emailed to a dental laboratory, whereas with the conventional technique the impression must be disinfected and physically transported to the laboratory, which is a longer process. Disadvantages of CAD/CAM Learning curve: As with any new technology, there is a steep learning curve; operators need time and experience to learn how to work the equipment and software used for CAD/CAM technology. Initially it can be difficult to adopt a new digital workflow when operators are comfortable with their long-standing processes, and staff need to be trained to feel comfortable using CAD/CAM systems. Cost: digital dentistry requires a large financial investment, including buying and maintaining equipment as well as software updates. However, in the long run the investment can pay off, as it can save money on expenses such as laboratory fees and single-use impression equipment. Errors in occlusion assessment: Compared to the conventional technique of making complete dentures, CAD/CAM has a few disadvantages. The systems do not accurately assess elements of balanced occlusion. As the denture teeth are not set on the denture base with the assistance of an articulator, there is difficulty in achieving a balanced occlusion. Hence, human assessment is still required, and the teeth will have to be clinically remounted to achieve balanced occlusion. Environmental impact: resin particles are produced during the milling process, which adds to plastic pollution. Future prospects Digital dentistry is growing at an accelerating rate, and CAD/CAM systems will continue to evolve and improve. References Computer-aided design Computer-aided manufacturing Dentistry branches
CAD/CAM dentistry
[ "Engineering" ]
4,532
[ "Computer-aided design", "Design engineering" ]
9,032,663
https://en.wikipedia.org/wiki/Airway%20resistance
In respiratory physiology, airway resistance is the resistance of the respiratory tract to airflow during inhalation and exhalation. Airway resistance can be measured using plethysmography. Definition Analogously to Ohm's law: $R_{AW} = \frac{\Delta P}{\dot{V}}$ where $\Delta P = P_{ATM} - P_A$, so: $R_{AW} = \frac{P_{ATM} - P_A}{\dot{V}}$ where: $R_{AW}$ = airway resistance; $\Delta P$ = pressure difference driving airflow; $P_{ATM}$ = atmospheric pressure; $P_A$ = alveolar pressure; $\dot{V}$ = volumetric airflow (not minute ventilation which, confusingly, may be represented by the same symbol). N.B. $P_A$ and $\dot{V}$ change constantly during the respiratory cycle. Determinants of airway resistance There are several important determinants of airway resistance including: The diameter of the airways Whether airflow is laminar or turbulent Hagen–Poiseuille equation In fluid dynamics, the Hagen–Poiseuille equation is a physical law that gives the pressure drop in a fluid flowing through a long cylindrical pipe. The assumptions of the equation are that the flow is laminar, viscous and incompressible, and that the flow is through a constant circular cross-section that is substantially longer than its diameter. The equation is also known as the Hagen–Poiseuille law, Poiseuille law and Poiseuille equation: $\Delta P = \frac{8 \mu l \dot{V}}{\pi r^4}$ where: $\Delta P$ = pressure difference between the ends of the pipe; $l$ = length of pipe; $\mu$ = the dynamic viscosity; $\dot{V}$ = the volumetric flow rate (Q is usually used in fluid dynamics, however in respiratory physiology it denotes cardiac output); $r$ = the radius of the pipe. Dividing both sides by $\dot{V}$ and given the above definition shows: $R_{AW} = \frac{8 \mu l}{\pi r^4}$ While the assumptions of the Hagen–Poiseuille equation are not strictly true of the respiratory tract, it serves to show that, because of the fourth power, relatively small changes in the radius of the airways cause large changes in airway resistance. An individual small airway has much greater resistance than a large airway; however, there are many more small airways than large ones. Therefore, resistance is greatest at the bronchi of intermediate size, in between the fourth and eighth bifurcation. Laminar flow versus turbulent flow Where air is flowing in a laminar manner it has less resistance than when it is flowing in a turbulent manner. If flow becomes turbulent, and the pressure difference is increased to maintain flow, this response itself increases resistance. This means that a large increase in pressure difference is required to maintain flow if it becomes turbulent. Whether flow is laminar or turbulent is complicated; however, generally flow within a pipe will be laminar as long as the Reynolds number is less than 2300: $Re = \frac{\rho v d}{\mu}$ where: $Re$ is the Reynolds number; $d$ is the diameter of the pipe; $v$ is the mean velocity; $\mu$ is the dynamic viscosity; $\rho$ is the density. This shows that larger airways are more prone to turbulent flow than smaller airways. In cases of upper airway obstruction the development of turbulent flow is a very important mechanism of increased airway resistance; this can be treated by administering Heliox, a breathing gas which is much less dense than air and consequently more conducive to laminar flow. Changes in airway resistance Airway resistance is not constant. As shown above, airway resistance is markedly affected by changes in the diameter of the airways. Therefore, diseases affecting the respiratory tract can increase airway resistance. Airway resistance can also change over time: during an asthma attack the airways constrict, causing an increase in airway resistance. Airway resistance can also vary between inspiration and expiration: in emphysema there is destruction of the elastic tissue of the lungs which helps hold the small airways open. 
Therefore, during expiration, particularly forced expiration, these airways may collapse, causing increased airway resistance. Derived parameters Airway conductance ($G_{AW}$) This is simply the mathematical inverse of airway resistance: $G_{AW} = \frac{1}{R_{AW}}$ Specific airway resistance ($sR_{AW}$) $sR_{AW} = R_{AW} \times V$ where $V$ is the lung volume at which $R_{AW}$ was measured. Also called volumic airway resistance. Due to the elastic nature of the tissue that supports the small airways, airway resistance changes with lung volume. It is not practically possible to measure airway resistance at a set absolute lung volume, therefore specific airway resistance attempts to correct for differences in the lung volume at which different measurements of airway resistance were made. Specific airway resistance is often measured at FRC, in which case: $sR_{AW} = R_{AW} \times FRC$ Specific airway conductance ($sG_{AW}$) $sG_{AW} = \frac{G_{AW}}{V}$ where $V$ is the lung volume at which $G_{AW}$ was measured. Also called volumic airway conductance. Similarly to specific airway resistance, specific airway conductance attempts to correct for differences in lung volume. Specific airway conductance is often measured at FRC, in which case: $sG_{AW} = \frac{G_{AW}}{FRC}$ See also Turbulent flow Laminar flow Reynolds number Upper airway resistance syndrome (UARS) References External links Calculator at medstudents.com.br Respiratory therapy Respiratory physiology Mathematics in medicine
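As a numerical illustration (added here, not from the article) of how strongly resistance depends on radius, this sketch applies the Poiseuille relation $R = 8\mu l/(\pi r^4)$ and the Reynolds number to assumed airway dimensions; the values are round-number assumptions, not physiological reference data.

```python
import math

# Assumed illustrative values (round numbers, not reference physiology).
mu = 1.8e-5    # dynamic viscosity of air, Pa*s
rho = 1.2      # density of air, kg/m^3
length = 0.02  # airway segment length, m

def poiseuille_resistance(radius):
    """R = 8*mu*l / (pi * r^4): halving r multiplies R by 16."""
    return 8 * mu * length / (math.pi * radius**4)

def reynolds(velocity, diameter):
    """Re = rho*v*d/mu; laminar flow is expected below roughly 2300."""
    return rho * velocity * diameter / mu

for r_mm in (4.0, 2.0, 1.0):
    r = r_mm / 1000
    print(f"r = {r_mm} mm -> R = {poiseuille_resistance(r):.3e} Pa*s/m^3")

# Larger airways reach higher Reynolds numbers at the same flow velocity,
# so they are more prone to turbulence:
print("Re in 8 mm airway at 1 m/s:", reynolds(1.0, 0.008))
print("Re in 2 mm airway at 1 m/s:", reynolds(1.0, 0.002))
```

Running this shows the fourth-power effect directly: each halving of the radius multiplies the computed resistance sixteen-fold, while the wider airway's Reynolds number is four times that of the narrower one.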
Airway resistance
[ "Mathematics" ]
976
[ "Applied mathematics", "Mathematics in medicine" ]
9,033,403
https://en.wikipedia.org/wiki/Technetium%20%2899mTc%29%20fanolesomab
{{DISPLAYTITLE:Technetium (99mTc) fanolesomab}} Technetium (99mTc) fanolesomab (trade name NeutroSpec, manufactured by Palatin Technologies) is a mouse monoclonal antibody formerly used to aid in the diagnosis of appendicitis. It is labeled with a radioisotope, technetium-99m (99mTc). History and use NeutroSpec was approved by the U.S. Food and Drug Administration (FDA) in June 2004 for imaging of patients with symptoms of appendicitis. It consisted of an intact murine (mouse) IgM monoclonal antibody against human CD15, labeled with technetium-99m so as to be visible on a gamma camera image. Since anti-CD15 antibodies bind selectively to white blood cells such as neutrophils, it could be used to localize the site of an infection. Deaths and associated recall The FDA received reports from Palatin of 2 deaths and 15 life-threatening adverse events in patients who had received NeutroSpec. These events occurred within minutes of administration of NeutroSpec and included shortness of breath, low blood pressure, and cardiopulmonary arrest. Affected patients required resuscitation with intravenous fluids, blood pressure support, and oxygen. Most, but not all, of the patients who experienced these events had existing cardiac and/or pulmonary conditions that may have placed them at higher risk for these adverse events. A review of all post-marketing reports showed an additional 46 patients who experienced adverse events that were similar but less severe. All of the reactions occurred immediately after NeutroSpec was administered. Marketing of the product was suspended in December 2005. References Further reading External links Technetium (99mTc) Fanolesomab from Micromedex Monoclonal antibodies Withdrawn drugs Technetium compounds Technetium-99m Antibody-drug conjugates Radiopharmaceuticals
Technetium (99mTc) fanolesomab
[ "Chemistry", "Biology" ]
422
[ "Medicinal radiochemistry", "Antibody-drug conjugates", "Drug safety", "Radiopharmaceuticals", "Chemicals in medicine", "Withdrawn drugs" ]
9,034,035
https://en.wikipedia.org/wiki/Computer-aided%20diagnosis
Computer-aided detection (CADe) and computer-aided diagnosis (CADx) are systems that assist doctors in the interpretation of medical images. Imaging techniques in X-ray, MRI, endoscopy, and ultrasound diagnostics yield a great deal of information that the radiologist or other medical professional has to analyze and evaluate comprehensively in a short time. CAD systems process digital images or videos for typical appearances and highlight conspicuous sections, such as possible diseases, in order to offer input to support a decision taken by the professional. CAD also has potential future applications in digital pathology with the advent of whole-slide imaging and machine learning algorithms. So far its application has been limited to quantifying immunostaining, but it is also being investigated for the standard H&E stain. CAD is an interdisciplinary technology combining elements of artificial intelligence and computer vision with radiological and pathology image processing. A typical application is the detection of a tumor. For instance, some hospitals use CAD to support preventive medical check-ups in mammography (diagnosis of breast cancer), the detection of polyps in colonoscopy, and lung cancer. Computer-aided detection (CADe) systems are usually confined to marking conspicuous structures and sections. Computer-aided diagnosis (CADx) systems evaluate the conspicuous structures. For example, in mammography CAD highlights microcalcification clusters and hyperdense structures in the soft tissue. This allows the radiologist to draw conclusions about the condition of the pathology. Another application is CADq, which quantifies, e.g., the size of a tumor or the tumor's behavior in contrast medium uptake. Computer-aided simple triage (CAST) is another type of CAD, which performs a fully automatic initial interpretation and triage of studies into some meaningful categories (e.g. negative and positive). CAST is particularly applicable in emergency diagnostic imaging, where a prompt diagnosis of a critical, life-threatening condition is required. Although CAD has been used in clinical environments for over 40 years, CAD usually does not substitute for the doctor or other professional, but rather plays a supporting role. The professional (generally a radiologist) is generally responsible for the final interpretation of a medical image. However, the goal of some CAD systems is to detect the earliest signs of abnormality in patients that human professionals cannot, as in diabetic retinopathy, architectural distortion in mammograms, ground-glass nodules in thoracic CT, and non-polypoid ("flat") lesions in CT colonography. History In the late 1950s, with the dawn of modern computers, researchers in various fields started exploring the possibility of building computer-aided medical diagnostic (CAD) systems. These first CAD systems used flow-charts, statistical pattern-matching, probability theory, or knowledge bases to drive their decision-making process. In the early 1970s, some of the very early CAD systems in medicine, which were often referred to as "expert systems" in medicine, were developed and used mainly for educational purposes. Examples include the MYCIN expert system, the Internist-I expert system and CADUCEUS. In these early developments, researchers aimed at building entirely automated CAD/expert systems, with unrealistically optimistic expectations of what computers could achieve. 
However, after the breakthrough paper "Reducibility among Combinatorial Problems" by Richard M. Karp, it became clear that there were limitations, but also potential opportunities, when one develops algorithms to solve groups of important computational problems. As a result of this new understanding of the various algorithmic limitations that Karp discovered in the early 1970s, researchers started realizing the serious limitations of CAD and expert systems in medicine. The recognition of these limitations led investigators to develop new kinds of CAD systems using advanced approaches. Thus, by the late 1980s and early 1990s the focus shifted to the use of data mining approaches for the purpose of building more advanced and flexible CAD systems. In 1998, the first commercial CAD system for mammography, the ImageChecker system, was approved by the US Food and Drug Administration (FDA). In the following years several commercial CAD systems for analyzing mammography, breast MRI, and medical imaging of the lung, colon, and heart also received FDA approvals. Currently, CAD systems are used as diagnostic aids to support physicians in medical decision-making. Methodology CAD is fundamentally based on highly complex pattern recognition. X-ray or other types of images are scanned for suspicious structures. Normally a few thousand images are required to optimize the algorithm. Digital image data are copied to a CAD server in DICOM format and are prepared and analyzed in several steps. 1. Preprocessing for Reduction of artifacts (defects in images) Image noise reduction Leveling (harmonization) of image quality (increased contrast) to compensate for the images' different basic conditions, e.g. different exposure parameters Filtering 2. Segmentation for Differentiation of different structures in the image, e.g. heart, lung, ribcage, blood vessels, possible round lesions Matching with an anatomical databank Sampling of gray values in the volume of interest 3. Structure/ROI (Region of Interest) analysis Every detected region is analyzed individually for special characteristics: Compactness Form, size and location Reference to nearby structures / ROIs Average grey-level value within a ROI Proportion of grey levels relative to the border of the structure inside the ROI 4. Evaluation / classification After the structure is analyzed, every ROI is evaluated individually (scoring) for the probability of a true positive. The following procedures are examples of classification algorithms: Nearest-Neighbor Rule (e.g. k-nearest neighbors) Minimum distance classifier Cascade classifier Naive Bayes classifier Artificial neural network Radial basis function network (RBF) Support vector machine (SVM) Principal component analysis (PCA) If the detected structures have reached a certain threshold level, they are highlighted in the image for the radiologist. Depending on the CAD system, these markings can be saved permanently or temporarily. The advantage of the latter is that only the markings which are approved by the radiologist are saved; false hits should not be saved, because they make examination at a later date more difficult. Relation to provider metrics Sensitivity and specificity CAD systems seek to highlight suspicious structures. Today's CAD systems cannot detect 100% of pathological changes. The hit rate (sensitivity) can be up to 90% depending on system and application. A correct hit is termed a True Positive (TP), while the incorrect marking of healthy sections constitutes a False Positive (FP). 
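To make these metrics concrete, here is a minimal sketch (an illustration added here, not from the article) computing sensitivity, specificity, and false positives per examination from raw counts; the numbers are invented for demonstration.

```python
def sensitivity(tp, fn):
    """Fraction of true lesions the system marks (the hit rate)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of healthy regions correctly left unmarked."""
    return tn / (tn + fp)

def fp_per_exam(fp, n_exams):
    """Average number of false marks per examination."""
    return fp / n_exams

# Invented example counts: 90 lesions found, 10 missed,
# 50 false marks over 25 examinations, 900 healthy regions unmarked.
tp, fn, fp, tn, n_exams = 90, 10, 50, 900, 25
print(f"sensitivity = {sensitivity(tp, fn):.2f}")       # 0.90
print(f"specificity = {specificity(tn, fp):.2f}")       # 0.95
print(f"FP per exam = {fp_per_exam(fp, n_exams):.1f}")  # 2.0
```

The example yields 2 false positives per examination, the order of magnitude the text below cites for chest CAD.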
The fewer FPs indicated, the higher the specificity. A low specificity reduces the acceptance of the CAD system, because the user has to identify all of these wrong hits. The FP rate in lung overview examinations (CAD Chest) could be reduced to 2 per examination. In other segments (e.g. CT lung examinations) the FP rate could be 25 or more. In CAST systems the FP rate must be extremely low (less than 1 per examination) to allow meaningful study triage. Absolute detection rate The absolute detection rate of a radiologist is an alternative metric to sensitivity and specificity. Overall, results of clinical trials about sensitivity, specificity, and the absolute detection rate can vary markedly. Each study result depends on its basic conditions and has to be evaluated on those terms. The following facts have a strong influence: Retrospective or prospective design Quality of the used images Conditions of the X-ray examination Radiologist's experience and education Type of lesion Size of the considered lesion Challenges Despite the many developments that CAD has achieved since the dawn of computers, there are still certain challenges that CAD systems face today. Some challenges are related to various algorithmic limitations in the procedures of a CAD system, including input data collection, preprocessing, processing and system assessments. Algorithms are generally designed to select a single likely diagnosis, thus providing suboptimal results for patients with multiple, concurrent disorders. Today input data for CAD mostly come from electronic health records (EHR). Effective design, implementation and analysis of EHR data is a major necessity for any CAD system. Due to the massive availability of data and the need to analyze such data, big data is also one of the biggest challenges that CAD systems face today. The increasingly vast amount of patient data is a serious problem. Often the patient data are complex and can be semi-structured or unstructured; highly developed approaches are required to store, retrieve and analyze them in reasonable time. During the preprocessing stage, input data must be normalized. The normalization of input data includes noise reduction and filtering. Processing may contain a few sub-steps depending on the application. The three basic sub-steps in medical imaging are segmentation, feature extraction/selection, and classification. These sub-steps require advanced techniques to analyze input data with less computational time. Although much effort has been devoted to creating innovative techniques for these procedures of CAD systems, no single best algorithm has emerged for any individual step. Ongoing research into innovative algorithms for all aspects of CAD systems is therefore essential. There is also a lack of standardized assessment measures for CAD systems. This can make it difficult to obtain approval for commercial use from governing bodies such as the FDA. Moreover, while many positive developments of CAD systems have been demonstrated, studies validating their algorithms for clinical practice are still lacking. Other challenges are related to the difficulty for healthcare providers of adopting new CAD systems in clinical practice. Some negative studies may discourage the use of CAD. In addition, the lack of training of health professionals in the use of CAD sometimes leads to incorrect interpretation of the system outcomes. 
Applications CAD is used in the diagnosis of breast cancer, lung cancer, colon cancer, prostate cancer, bone metastases, coronary artery disease, congenital heart defects, pathological brain conditions, fractures, Alzheimer's disease, and diabetic retinopathy. Breast cancer CAD is used in screening mammography (X-ray examination of the female breast). Screening mammography is used for the early detection of breast cancer. CAD systems are often utilized to help classify a tumor as malignant (cancerous) or benign (non-cancerous). CAD is especially established in the US and the Netherlands, and is used in addition to human evaluation, usually by a radiologist. The first CAD system for mammography was developed in a research project at the University of Chicago. Today it is commercially offered by iCAD and Hologic. However, while achieving high sensitivities, CAD systems tend to have very low specificity, and the benefits of using CAD remain uncertain. A 2008 systematic review on computer-aided detection in screening mammography concluded that CAD does not have a significant effect on cancer detection rate, but does undesirably increase recall rate (i.e. the rate of false positives). However, it noted considerable heterogeneity in the impact on recall rate across studies. Recent advances in machine learning, deep learning and artificial intelligence technology have enabled the development of CAD systems that are clinically proven to assist radiologists in addressing the challenges of reading mammographic images by improving cancer detection rates and reducing false positives and unnecessary patient recalls, while significantly decreasing reading times. Procedures to evaluate mammography based on magnetic resonance imaging (MRI) exist too. Lung cancer (bronchial carcinoma) In the diagnosis of lung cancer, computed tomography with special three-dimensional CAD systems is established and considered appropriate as a second opinion. Here a volumetric dataset with up to 3,000 single images is prepared and analyzed. Round lesions (lung cancer, metastases and benign changes) from 1 mm are detectable. Today all well-known vendors of medical systems offer corresponding solutions. Early detection of lung cancer is valuable. However, the chance detection of early-stage (stage 1) lung cancer in the X-ray image is difficult. Round lesions of 5–10 mm are easily overlooked. The routine application of CAD chest systems may help to detect small changes without initial suspicion. A number of researchers have developed CAD systems for detection of lung nodules (round lesions less than 30 mm) in chest radiography and CT, and CAD systems for diagnosis (e.g., distinction between malignant and benign) of lung nodules in CT. Virtual dual-energy imaging improved the performance of CAD systems in chest radiography. Colon cancer CAD is available for detection of colorectal polyps in the colon in CT colonography. Polyps are small growths that arise from the inner lining of the colon. CAD detects the polyps by identifying their characteristic "bump-like" shape. To avoid excessive false positives, CAD ignores the normal colon wall, including the haustral folds. Cardiovascular disease State-of-the-art methods in cardiovascular computing, cardiovascular informatics, and mathematical and computational modeling can provide valuable tools in clinical decision-making. 
CAD systems with novel image-analysis-based markers as input can aid vascular physicians in deciding with higher confidence on the most suitable treatment for cardiovascular disease patients. Reliable early detection and risk stratification of carotid atherosclerosis is of utmost importance for predicting strokes in asymptomatic patients. To this end, various noninvasive and low-cost markers have been proposed, using ultrasound-image-based features. These combine echogenicity, texture, and motion characteristics to assist clinical decision-making towards improved prediction, assessment and management of cardiovascular risk. CAD is available for the automatic detection of significant (causing more than 50% stenosis) coronary artery disease in coronary CT angiography (CCTA) studies. Congenital heart defect Early detection of pathology can be the difference between life and death. CADe can be done by auscultation with a digital stethoscope and specialized software, also known as computer-aided auscultation. Murmurs, irregular heart sounds caused by blood flowing through a defective heart, can be detected with high sensitivity and specificity. Computer-aided auscultation is sensitive to external noise and bodily sounds and requires an almost silent environment to function accurately. Pathological brain detection (PBD) Chaplot et al. were the first to use Discrete Wavelet Transform (DWT) coefficients to detect pathological brains. Maitra and Chatterjee employed the Slantlet transform, which is an improved version of DWT. Their feature vector for each image is created by considering the magnitudes of Slantlet transform outputs corresponding to six spatial positions chosen according to a specific logic. In 2010, Wang and Wu presented a feed-forward neural network (FNN) based method to classify a given MR brain image as normal or abnormal. The parameters of the FNN were optimized via adaptive chaotic particle swarm optimization (ACPSO). Results over 160 images showed that the classification accuracy was 98.75%. In 2011, Wu and Wang proposed using DWT for feature extraction, PCA for feature reduction, and an FNN with scaled chaotic artificial bee colony (SCABC) as classifier. In 2013, Saritha et al. were the first to apply wavelet entropy (WE) to detect pathological brains. Saritha also suggested the use of spider-web plots; later, Zhang et al. proved that removing spider-web plots did not influence the performance. A genetic pattern search method was applied to identify abnormal brains from normal controls; its classification accuracy was reported as 95.188%. Das et al. proposed to use the Ripplet transform. Zhang et al. proposed to use particle swarm optimization (PSO). Kalbkhani et al. suggested the use of a GARCH model. In 2014, El-Dahshan et al. suggested the use of a pulse-coupled neural network. In 2015, Zhou et al. suggested applying a naive Bayes classifier to detect pathological brains. Alzheimer's disease CADs can be used to distinguish subjects with Alzheimer's disease and mild cognitive impairment from normal elderly controls. In 2014, Padma et al. used combined wavelet statistical texture features to segment and classify benign and malignant AD tumor slices. Zhang et al. found that a kernel support vector machine decision tree had 80% classification accuracy, with an average computation time of 0.022 s for each image classification. In 2019, Signaevsky et al. first reported a trained Fully Convolutional Network (FCN) for detection and quantification of neurofibrillary tangles (NFT) in Alzheimer's disease and an array of other tauopathies. 
The trained FCN achieved high precision and recall in naive digital whole-slide image (WSI) semantic segmentation, correctly identifying NFT objects using a SegNet model trained for 200 epochs. The FCN reached near-practical efficiency with an average processing time of 45 min per WSI per graphics processing unit (GPU), enabling reliable and reproducible large-scale detection of NFTs. The measured performance on test data of eight naive WSIs across various tauopathies resulted in recall, precision, and F1 scores of 0.92, 0.72, and 0.81, respectively. Eigenbrain is a novel brain feature that can help to detect AD, based on principal component analysis (PCA) or independent component analysis decomposition. Polynomial kernel SVM has been shown to achieve good accuracy; the polynomial KSVM performs better than linear SVM and RBF kernel SVM. Other approaches with decent results involve the use of texture analysis, morphological features, or high-order statistical features. Nuclear medicine CADx is available for nuclear medicine images. Commercial CADx systems for the diagnosis of bone metastases in whole-body bone scans and coronary artery disease in myocardial perfusion images exist. With high sensitivity and an acceptable rate of falsely detected lesions, computer-aided automatic lesion detection systems have been demonstrated to be useful, and will probably in the future be able to help nuclear medicine physicians to identify possible bone lesions. Diabetic retinopathy Diabetic retinopathy is a disease of the retina that is diagnosed predominantly by fundoscopic images. Diabetic patients in industrialised countries generally undergo regular screening for the condition. Imaging is used to recognize early signs of abnormal retinal blood vessels. Manual analysis of these images can be time-consuming and unreliable. CAD has been employed to enhance the accuracy, sensitivity, and specificity of automated detection methods. The use of some CAD systems to replace human graders can be safe and cost-effective. Image pre-processing, and feature extraction and classification, are the two main stages of these CAD algorithms. Pre-processing methods Image normalization minimizes the variation across the entire image. Intensity variations in areas between the periphery and the central macular region of the eye have been reported to cause inaccuracy in vessel segmentation. Based on the 2014 review, this technique was the most frequently used and appeared in 11 of the 40 primary research articles published since 2011. Histogram equalization is useful in enhancing contrast within an image. This technique is used to increase local contrast. At the end of the processing, areas that were dark in the input image are brightened, greatly enhancing the contrast among the features present in the area. On the other hand, brighter areas in the input image remain bright or are reduced in brightness to equalize with the other areas in the image. Besides vessel segmentation, other features related to diabetic retinopathy can be further separated by using this pre-processing technique. Microaneurysms and hemorrhages are red lesions, whereas exudates are yellow spots. Increasing the contrast between these two groups allows better visualization of lesions on images. The 2014 review found that this technique appeared in 10 of the 14 primary research articles published since 2011. Green channel filtering is another technique that is useful in differentiating lesions rather than vessels. 
This method is important because it provides the maximal contrast between diabetic retinopathy-related lesions. Microaneurysms and hemorrhages are red lesions that appear dark after application of green channel filtering. In contrast, exudates, which appear yellow in the normal image, are transformed into bright white spots after green filtering. According to the 2014 review, this technique is the most widely used, appearing in 27 of the 40 articles published in the preceding three years. In addition, green channel filtering can be used to detect the center of the optic disc in conjunction with a double-windowing system. Non-uniform illumination correction is a technique that adjusts for non-uniform illumination in the fundoscopic image. Non-uniform illumination can be a potential source of error in automated detection of diabetic retinopathy because of changes in the statistical characteristics of the image. These changes can affect later processing such as feature extraction and are not observable by humans. Correction of non-uniform illumination (f′) can be achieved by modifying the pixel intensity using the known original pixel intensity (f) and the average intensities of local (λ) and desired pixels (μ): $f' = f + \mu - \lambda$. The Walter–Klein transformation is then applied to achieve uniform illumination. This technique is the least used pre-processing method in the review from 2014. Morphological operations are the second least used pre-processing method in the 2014 review. The main objective of this method is to provide contrast enhancement, especially of darker regions compared to the background. Feature extractions and classifications After pre-processing, the funduscopic image is further analyzed using different computational methods. However, the current literature agrees that some methods are used more often than others during vessel segmentation analyses. These methods are SVM, multi-scale, vessel tracking, the region-growing approach, and model-based approaches. The support vector machine is by far the most frequently used classifier in vessel segmentation, in up to 90% of cases. SVM is a supervised learning model that belongs to the broader category of pattern recognition techniques. The algorithm works by creating the largest possible gap between distinct samples in the data, which minimizes the potential error in classification. In order to successfully segregate blood vessel information from the rest of the eye image, the SVM algorithm creates support vectors that separate the blood vessel pixels from the rest of the image through a supervised environment. Blood vessels in new images can then be detected in a similar manner using the support vectors. Combination with other pre-processing techniques, such as green channel filtering, greatly improves the accuracy of detection of blood vessel abnormalities. Some beneficial properties of SVM include: Flexibility – highly flexible in terms of function Simplicity – simple, especially with large datasets (only support vectors are needed to create separation between data) The multi-scale approach is a multiple-resolution approach to vessel segmentation. At low resolution, large-diameter vessels can first be extracted. By increasing resolution, smaller branches of the large vessels can easily be recognized. Therefore, one advantage of using this technique is the increased analytical speed. Additionally, this approach can be used with 3D images. 
The multi-scale approach is a multiple-resolution approach to vessel segmentation. At low resolution, large-diameter vessels can first be extracted. By increasing the resolution, smaller branches of the large vessels can be easily recognized. One advantage of this technique is therefore increased analytical speed. Additionally, this approach can be used with 3D images: the surface representation is a surface normal to the curvature of the vessels, allowing the detection of abnormalities on the vessel surface.

Vessel tracking is the ability of the algorithm to detect the "centerline" of vessels. These centerlines are the maximal peaks of vessel curvature. Centers of vessels can be found using the directional information provided by a Gaussian filter. Similar approaches that utilize the concept of a centerline are the skeleton-based and differential geometry-based methods.

The region-growing approach is a method of detecting neighboring pixels with similarities. A seed point is required for the method to start, and two elements are needed for the technique to work: similarity and spatial proximity. A neighboring pixel with an intensity similar to that of the seed pixel is likely to be of the same type and is added to the growing region. One disadvantage of this technique is that it requires manual selection of the seed point, which introduces bias and inconsistency into the algorithm; a sketch of the procedure is given at the end of this section. The technique is also used in optic disc identification.

Model-based approaches employ representations to extract vessels from images. Three broad categories of model-based approaches are known: deformable, parametric, and template matching. Deformable methods use objects that are deformed to fit the contours of the objects in the image. Parametric methods use geometric parameters such as tubular, cylindrical, or ellipsoidal representations of blood vessels. A classical snake contour in combination with blood vessel topological information can also be used as a model-based approach. Lastly, template matching is the use of a template, fitted by a stochastic deformation process using Hidden Markov Model 1.
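As referenced in the region-growing paragraph above, here is a minimal sketch of the procedure on a grayscale NumPy image; the fixed intensity threshold used as the similarity criterion, and all parameter values, are illustrative choices.

```python
from collections import deque

import numpy as np

def region_grow(image, seed, threshold=10):
    """Grow a region from a manually chosen seed pixel, adding
    4-connected neighbors whose intensity lies within `threshold`
    of the seed intensity (similarity + spatial proximity)."""
    height, width = image.shape
    seed_value = int(image[seed])
    region = np.zeros((height, width), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < height and 0 <= nx < width and not region[ny, nx]:
                # The neighbor joins only if its intensity is
                # close enough to the seed's.
                if abs(int(image[ny, nx]) - seed_value) <= threshold:
                    region[ny, nx] = True
                    queue.append((ny, nx))
    return region

# Example on a random 8-bit test image, seeded at the center.
img = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
mask = region_grow(img, seed=(32, 32), threshold=40)
```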
Effects on employment
Automation of medical diagnosis labor (for example, quantifying red blood cells) has some historical precedent. The deep learning revolution of the 2010s has already produced AI systems that are more accurate in many areas of visual diagnosis than radiologists and dermatologists, and this gap is expected to grow. Some experts, including many doctors, are dismissive of the effects that AI will have on medical specialties. In contrast, many economists and artificial intelligence experts believe that fields such as radiology will be massively disrupted, with unemployment or downward pressure on the wages of radiologists; hospitals will need fewer radiologists overall, and many of the radiologists who remain will require substantial retraining. Geoffrey Hinton, the "Godfather of deep learning", argues that, in light of the advances likely in the next five or ten years, hospitals should immediately stop training radiologists, as their time-consuming and expensive training in visual diagnosis will soon be mostly obsolete, leading to a glut of traditional radiologists. An op-ed in JAMA argues that pathologists and radiologists should merge into a single "information specialist" role, stating that "To avoid being replaced by computers, radiologists must allow themselves to be displaced by computers." Information specialists would be trained in "Bayesian logic, statistics, data science", and some genomics and biometrics; manual visual pattern recognition would be greatly de-emphasized compared with current, onerous radiology training.

See also
Computerized Systems Used In Clinical Trials
Diagnostic robot

Footnotes

References

External links
Digital Retinal Images for Vessel Extraction (DRIVE)
STructured Analysis of the REtina (STARE)
High-Resolution Fundus (HRF) Image Database

Computing in medical imaging
Medical expert systems
Radiology
Health informatics
Applications of computer vision
Computer-aided diagnosis
[ "Biology" ]
5,392
[ "Health informatics", "Medical technology" ]
9,034,184
https://en.wikipedia.org/wiki/Pressure%20altimeter
Altitude can be determined based on the measurement of atmospheric pressure. The greater the altitude, the lower the pressure. When a barometer is supplied with a nonlinear calibration so as to indicate altitude, the instrument is a type of altimeter called a pressure altimeter or barometric altimeter. A pressure altimeter is the altimeter found in most aircraft, and skydivers use wrist-mounted versions for similar purposes. Hikers and mountain climbers use wrist-mounted or hand-held altimeters, in addition to other navigational tools such as a map, magnetic compass, or GPS receiver.

Calibration
The calibration of an altimeter follows the equation

z = cT ln(P0/P),

where c is a constant, T is the absolute temperature, P is the pressure at altitude z, and P0 is the pressure at sea level. The constant c depends on the acceleration of gravity and the molar mass of the air. However, one must be aware that this type of altimeter relies on "density altitude", and its readings can vary by hundreds of feet owing to a sudden change in air pressure, such as from a cold front, without any actual change in altitude.

The most common unit of measurement used for altimeter calibration worldwide is the hectopascal (hPa), except in North America (other than Canada) and Japan, where inches of mercury (inHg) are used. To obtain an accurate altitude reading in either feet or meters, the local barometric pressure must be set correctly using the barometric formula.
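The calibration relation can be illustrated with a short numerical sketch. The constant is written out as c = R/(Mg), and the function name and standard-day inputs are illustrative assumptions, not part of any particular altimeter specification.

```python
import math

# Physical constants (SI units).
R = 8.314462618   # universal gas constant, J/(mol*K)
M = 0.0289644     # molar mass of dry air, kg/mol
g = 9.80665       # standard gravity, m/s^2

def pressure_altitude(p_hpa, p0_hpa=1013.25, temp_k=288.15):
    """Altitude z = c*T*ln(P0/P), with c = R/(M*g).

    p_hpa  : measured static pressure, hPa
    p0_hpa : sea-level reference pressure (the altimeter setting)
    temp_k : absolute air temperature, K
    """
    c = R / (M * g)
    return c * temp_k * math.log(p0_hpa / p_hpa)

# Example: a reading of 900 hPa on a standard day corresponds
# to roughly 1000 m of altitude.
print(round(pressure_altitude(900.0)))  # ~1000
```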
History
The scientific principles behind the pressure altimeter were first set out in 1772 by Rev. Alexander Bryce, a Scottish minister and astronomer, who realised that the principles of a barometer could be adapted to measure height.

Applications

Use in hiking, climbing and skiing
A barometric altimeter, used along with a topographic map, can help to verify one's location. It is more reliable, and often more accurate, than a GPS receiver for measuring altitude; the GPS signal may be unavailable, for example, when one is deep in a canyon, or it may give wildly inaccurate altitudes when all available satellites are near the horizon. Because barometric pressure changes with the weather, hikers must periodically re-calibrate their altimeters when they reach a known altitude, such as a trail junction or peak marked on a topographical map.

Skydiving
An altimeter is the most important piece of skydiving equipment, after the parachute itself. Altitude awareness is crucial at all times during the jump, and determines the appropriate response to maintain safety. Since altitude awareness is so important in skydiving, there is a wide variety of altimeter designs made specifically for use in the sport, and a non-student skydiver will typically use two or more altimeters in a single jump:

Hand-, wrist- or chest-mounted mechanical analogue visual altimeters are the most basic and common devices, and are used by (and commonly mandated for) virtually all student skydivers. The common design has a face marked from 0 to 4000 m (or 0 to 12000 ft, mimicking the clock face), with an arrow pointing to the current altitude. The face plate has sections prominently marked in yellow and red, signifying the recommended deployment altitude and the emergency procedure decision altitude (commonly known as the "hard deck") respectively. A mechanical altimeter has a knob that needs to be manually adjusted to make it point to 0 on the ground before the jump, and if the landing spot is not at the same altitude as the takeoff spot, the user needs to adjust it appropriately. Some advanced electronic altimeters are also available which make use of the familiar analogue display, despite internally operating digitally.

Digital visual altimeters are mounted on the wrist or hand. This type always operates electronically, and conveys the altitude as a number rather than as a pointer on a dial. Since these altimeters already contain all the electronic circuitry necessary for altitude calculation, they are commonly equipped with auxiliary functions such as an electronic logbook, real-time jump profile replay, speed indication, a simulator mode for use in ground training, etc. An electronic altimeter is activated on the ground before the jump, and calibrates itself automatically to point to 0. It is thus essential that the user not turn it on earlier than necessary: activating it, for example, before the drive to a dropzone located at a different altitude than one's home could cause a potentially fatal false reading. If the intended landing zone is at a different elevation than the takeoff point, the user needs to input the appropriate offset using a designated function.

Audible altimeters (also known as "dytters", a genericised trademark of the first such product on the market) are inserted into one's helmet, and emit a warning tone at a predefined altitude. Contemporary audibles have evolved significantly from their crude beginnings, and sport a vast array of functions, such as multiple tones at different altitudes, multiple saved profiles that can be switched quickly, an electronic logbook with data transfer to a PC for later analysis, distinct free fall and canopy modes with different warning altitudes, swoop approach guiding tones, etc. Audibles are strictly auxiliary devices, and do not replace, but complement, a visual altimeter, which remains the primary tool for maintaining altitude awareness. The advent of modern skydiving disciplines such as freeflying, in which the ground might not be in one's field of view for long periods of time, has made the use of audibles nearly universal, and virtually all skydiving helmets come with one or more built-in ports in which an audible might be placed. Audibles are not recommended, and often banned from use, for student skydivers, who need to build up a proper altitude awareness regime for themselves.

Auxiliary visual altimeters do not show the precise altitude, but rather help maintain a general indication in one's peripheral vision. They might either operate in tandem with an audible equipped with an appropriate port, in which case they emit warning flashes complementing the audible tones, or be standalone and use another display mode, such as showing either a green or red light depending on the altitude.

Speaking altimeters (also known as voice altimeters) combine both audible and visual altimeter functions. Units of this type operate at all altitudes experienced in skydiving, and announce their altitude as a number in the diver's native language. They are inserted into the helmet like audible altimeters, and have automatic volume adjustment dependent on the skydiver's speed. The main goal of this type of altimeter is to ensure that skydivers always know their position; this feature is useful for experienced skydivers as well as formation skydiving load organizers or accelerated freefall instructors.

The exact choice of altimeters depends heavily on the individual skydiver's preferences, experience level, and primary disciplines, as well as the type of the jump.
On one end of the spectrum, a low-altitude demonstration jump with a water landing and no free fall might waive the mandated use of altimeters and use none at all. In contrast, a jumper doing freeflying jumps and flying a high-performance canopy might use a mechanical analogue altimeter for easy reference in free fall, an in-helmet audible for breakaway altitude warning, additionally programmed with swoop guide tones for canopy flying, as well as a digital altimeter on an armband for quickly glancing at the precise altitude on approach. Another skydiver doing similar types of jumps might wear a digital altimeter as their primary visual one, preferring the direct altitude readout of a numeric display.

Use in aircraft
In aircraft, an aneroid altimeter or aneroid barometer measures the atmospheric pressure from a static port outside the aircraft. Air pressure decreases with an increase of altitude: approximately 100 hectopascals per 800 meters, or one inch of mercury per 1000 feet, or 1 hectopascal per 30 feet near sea level. The aneroid altimeter is calibrated to show the pressure directly as an altitude above mean sea level, in accordance with a mathematical model atmosphere defined by the International Standard Atmosphere (ISA).

Older aircraft used a simple aneroid barometer where the needle made less than one revolution around the face from zero to full scale. This design evolved into three-pointer altimeters with a primary needle and one or more secondary needles that show the number of revolutions, similar to a clock face; in other words, each needle points to a different digit of the current altitude measurement. However, this design has fallen out of favor due to the risk of misreading in stressful situations. The design evolved further into drum-type altimeters, the final step in analogue instrumentation, where each revolution of a single needle accounted for 1,000 feet, with thousand-foot increments recorded on a numerical odometer-type drum. To determine altitude, a pilot had first to read the drum for the thousands of feet, then look at the needle for the hundreds of feet. Modern analogue altimeters in transport aircraft are typically of the drum type. The latest development in clarity is the electronic flight instrument system with integrated digital altimeter displays. This technology has trickled down from airliners and military planes until it is now standard in many general aviation aircraft.

Modern aircraft use a "sensitive altimeter", on which the sea-level reference pressure can be adjusted with a setting knob. The reference pressure, in inches of mercury in Canada and the United States and in hectopascals (previously millibars) elsewhere, is displayed in the small Kollsman window on the face of the aircraft altimeter. This is necessary, since the sea-level reference atmospheric pressure at a given location varies over time with temperature and the movement of pressure systems in the atmosphere. In aviation terminology, the regional or local air pressure at mean sea level (MSL) is called the QNH or "altimeter setting", and the pressure that will calibrate the altimeter to show the height above ground at a given airfield is called the QFE of the field. An altimeter cannot, however, be adjusted for variations in air temperature; differences in temperature from the ISA model will accordingly cause errors in indicated altitude.
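The practical effect of the altimeter setting can be sketched with the near-sea-level rule of thumb quoted above (roughly 30 feet per hectopascal); the function and its name are illustrative, not a standard aviation calculation tool.

```python
def indicated_altitude_error_ft(qnh_set_hpa, qnh_actual_hpa):
    """How many feet the altimeter over-reads when the Kollsman
    window is set to qnh_set_hpa but the actual sea-level pressure
    is qnh_actual_hpa, using ~30 ft per hPa near sea level."""
    FEET_PER_HPA = 30.0
    return (qnh_set_hpa - qnh_actual_hpa) * FEET_PER_HPA

# Flying from high pressure into low pressure without resetting
# the altimeter: it reads about 300 ft higher than true altitude
# ("from high to low, look out below").
print(indicated_altitude_error_ft(1013.0, 1003.0))  # 300.0
```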
In aerospace, mechanical stand-alone altimeters based on diaphragm bellows have been replaced by integrated measurement systems called air data computers (ADCs). This module measures altitude, airspeed, and outside temperature to provide more precise output data, allowing automatic flight control and flight level division. Multiple altimeters can be used to design a pressure reference system that provides information about the airplane's position angles, further supporting inertial navigation system calculations.

Pilots can perform preflight altimeter checks by setting the barometric scale to the current reported altimeter setting; the altimeter pointers should then indicate the surveyed field elevation of the airport. The Federal Aviation Administration requires that if the indication is off from the surveyed field elevation by more than a set tolerance, the instrument should be recalibrated.

Other modes of transport
The altimeter is an optional instrument in off-road vehicles to aid in navigation. Some high-performance luxury cars that were never intended to leave paved roads, such as the Duesenberg in the 1930s, have also been equipped with altimeters.

References
Altimeters
Atmospheric pressure
Pressure altimeter
[ "Physics", "Technology", "Engineering" ]
2,288
[ "Physical quantities", "Meteorological quantities", "Atmospheric pressure", "Measuring instruments", "Aircraft instruments", "Altimeters" ]
9,035,542
https://en.wikipedia.org/wiki/Extragalactic%20cosmic%20ray
Extragalactic cosmic rays are very-high-energy particles that flow into the Solar System from beyond the Milky Way galaxy. While at low energies the majority of cosmic rays originate within the Galaxy (such as from supernova remnants), at high energies the cosmic ray spectrum is dominated by these extragalactic cosmic rays. The exact energy at which the transition from galactic to extragalactic cosmic rays occurs is not clear, but it is in the range 10¹⁷ to 10¹⁸ eV.

Observation
The observation of extragalactic cosmic rays requires detectors with an extremely large surface area, due to the very limited flux. As a result, extragalactic cosmic rays are generally detected with ground-based observatories, by means of the extensive air showers they create. These ground-based observatories can be either surface detectors, which observe the air shower particles that reach the ground, or air fluorescence detectors (also called 'fly's eye' detectors), which observe the fluorescence caused by the interaction of the charged air shower particles with the atmosphere. In either case, the ultimate aim is to find the mass and energy of the primary cosmic ray which created the shower. Surface detectors accomplish this by measuring the density of particles at the ground, while fluorescence detectors do so by measuring the depth of shower maximum (the depth from the top of the atmosphere at which the maximum number of particles are present in the shower). The two currently operating high energy cosmic ray observatories, the Pierre Auger Observatory and the Telescope Array, are hybrid detectors which use both of these methods. This hybrid methodology allows for a full three-dimensional reconstruction of the air shower, and gives much better directional information as well as more accurate determination of the type and energy of the primary cosmic ray than either technique on its own.

Pierre Auger Observatory
The Pierre Auger Observatory, located in the Mendoza province of Argentina, consists of 1660 surface detectors, each separated by 1.5 km and covering a total area of 3000 km², and 27 fluorescence detectors at 4 different locations overlooking the surface detectors. The observatory has been in operation since 2004, and began operating at full capacity in 2008 once construction was completed. The surface detectors are water Cherenkov detectors, each detector being a tank 3.6 m in diameter. One of the Pierre Auger Observatory's most notable results is the detection of a dipole anisotropy in the arrival directions of cosmic rays with energy greater than 8 × 10¹⁸ eV, which was the first conclusive indication of their extragalactic origin.

Telescope Array
The Telescope Array is located in the state of Utah in the United States of America, and consists of 507 surface detectors separated by 1.2 km and covering a total area of 700 km², and 3 fluorescence detector stations with 12–14 fluorescence detectors at each station. The Telescope Array was constructed by a collaboration between the teams formerly operating the Akeno Giant Air Shower Array (AGASA), which was a surface detector array in Japan, and the High Resolution Fly's Eye (HiRes), which was an air fluorescence detector also located in Utah.
The Telescope Array was initially designed to detect cosmic rays with energy above 10¹⁹ eV, but an extension to the project, the Telescope Array Low Energy extension (TALE), is currently underway and will allow observation of cosmic rays with energies above 3 × 10¹⁶ eV.

Spectrum and Composition
Two clear and long-known features of the spectrum of extragalactic cosmic rays are the 'ankle', a flattening of the spectrum at around 5 × 10¹⁸ eV, and the suppression of the cosmic ray flux at high energies (above about 4 × 10¹⁹ eV). More recently, the Pierre Auger Observatory has also observed a steepening of the cosmic ray spectrum above the ankle, before the steep cutoff above 10¹⁹ eV. The spectrum measured by the Pierre Auger Observatory does not appear to depend on the arrival direction of the cosmic rays. However, there are some discrepancies between the spectrum (specifically the energy at which the suppression of flux occurs) measured by the Pierre Auger Observatory in the Southern hemisphere and by the Telescope Array in the Northern hemisphere. It is unclear whether this is the result of an unknown systematic error or a true difference between the cosmic rays arriving at the Northern and Southern hemispheres.

The interpretation of these features of the cosmic ray spectrum depends on the details of the model assumed. Historically, the ankle has been interpreted as the energy at which the steep Galactic cosmic ray spectrum transitions to a flat extragalactic spectrum. However, diffusive shock acceleration in supernova remnants, which is the predominant source of cosmic rays below 10¹⁵ eV, can accelerate protons only up to 3 × 10¹⁵ eV and iron up to 8 × 10¹⁶ eV, so there must be an additional source of Galactic cosmic rays up to around 10¹⁸ eV. On the other hand, the 'dip' model assumes that the transition between Galactic and extragalactic cosmic rays occurs at about 10¹⁷ eV. This model assumes that extragalactic cosmic rays are composed purely of protons, and the ankle is interpreted as being due to pair production arising from interactions of cosmic rays with the cosmic microwave background (CMB), which suppresses the cosmic ray flux and thus causes a flattening of the spectrum. Older data, as well as more recent data from the Telescope Array, favour a pure proton composition. However, recent Auger data suggest a composition which is dominated by light elements up to 2 × 10¹⁸ eV but becomes increasingly dominated by heavier elements with increasing energy. In this case a source of the protons below 2 × 10¹⁸ eV is needed.

The suppression of flux at high energies is generally assumed to be due to the Greisen–Zatsepin–Kuz'min (GZK) effect in the case of protons, or to photodisintegration by the CMB (the Gerasimova–Rozental or GR effect) in the case of heavy nuclei. However, it could also reflect the nature of the sources themselves, that is, the maximum energy to which sources can accelerate cosmic rays. As mentioned above, the Telescope Array and the Pierre Auger Observatory give different results for the most likely composition, although the data used to infer composition from these two observatories are consistent once all systematic effects are taken into account. The composition of extragalactic cosmic rays is thus still ambiguous.
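The energy scale of the GZK suppression can be estimated from the kinematic threshold for photo-pion production on a CMB photon. The sketch below assumes a head-on collision with a photon at the mean energy of the 2.725 K blackbody spectrum, both simplifying choices rather than values from the measurements discussed above.

```python
# Proton threshold for photo-pion production on a CMB photon,
# head-on collision:  E_th = m_pi * (m_p + m_pi / 2) / (2 * eps).
# All energies in eV.
M_PION = 139.57e6    # charged pion rest energy
M_PROTON = 938.27e6  # proton rest energy
EPS_CMB = 6.3e-4     # mean CMB photon energy (2.725 K blackbody)

e_threshold = M_PION * (M_PROTON + M_PION / 2) / (2 * EPS_CMB)
print(f"{e_threshold:.1e} eV")  # ~1.1e+20 eV, the GZK energy scale
```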
Origin
Unlike solar or galactic cosmic rays, little is known about the origins of extragalactic cosmic rays. This is largely due to a lack of statistics: only about one extragalactic cosmic ray particle per square kilometer per year reaches the Earth's surface. The possible sources of these cosmic rays must satisfy the Hillas criterion,

E ≲ qBRc,

where E is the energy of the particle, q its electric charge, B the magnetic field in the source, R the size of the source, and c the speed of light. This criterion comes from the fact that for a particle to be accelerated to a given energy, its Larmor radius must be less than the size of the accelerating region; once the Larmor radius of the particle exceeds the size of the accelerating region, it escapes and gains no more energy. As a consequence, heavier nuclei (with a greater number of protons), if present, can be accelerated to higher energies than protons within the same source.
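A back-of-the-envelope evaluation of the Hillas criterion might look like the sketch below. The field strengths and source sizes are rough illustrative values, not measurements from a catalogue, and the results are confinement upper bounds rather than realistic acceleration limits.

```python
C_LIGHT = 3.0e8     # speed of light, m/s
PARSEC = 3.086e16   # metres per parsec

def hillas_emax_ev(z, b_tesla, r_metres):
    """Hillas confinement bound E_max ~ Z e B R c, in eV.
    Since the elementary charge in coulombs equals the number of
    joules per eV, E_max in eV is simply Z * B * R * c."""
    return z * b_tesla * r_metres * C_LIGHT

# Illustrative: a proton in an AGN jet region (~1 G over ~0.01 pc)
# and iron (Z = 26) in a cluster-scale shock (~1 µG over ~1 Mpc).
print(f"{hillas_emax_ev(1, 1e-4, 0.01 * PARSEC):.1e} eV")   # ~9e+18
print(f"{hillas_emax_ev(26, 1e-10, 1e6 * PARSEC):.1e} eV")  # ~2e+22
```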
Active galactic nuclei
Active galactic nuclei (AGNs) are well known to be some of the most energetic objects in the universe, and are therefore often considered as candidates for the production of extragalactic cosmic rays. Given their extremely high luminosity, AGNs can accelerate cosmic rays to the required energies even if only 1/1000 of their energy is used for this acceleration. There is some observational support for this hypothesis: analysis of cosmic ray measurements with the Pierre Auger Observatory suggests a correlation between the arrival directions of cosmic rays of the highest energies (more than 5 × 10¹⁹ eV) and the positions of nearby active galaxies. In 2017, IceCube detected a high-energy neutrino with energy 290 TeV whose direction was consistent with a flaring blazar, TXS 0506+056, which strengthened the case for AGNs as a source of extragalactic cosmic rays. Since high-energy neutrinos are assumed to come from the decay of pions produced by the interaction of correspondingly high-energy protons with the CMB (photo-pion production), or from the photodisintegration of energetic nuclei, and since neutrinos travel essentially unimpeded through the universe, they can be traced back to the source of high-energy cosmic rays.

Clusters of galaxies
Galaxy clusters continuously accrete gas and galaxies from the filaments of the cosmic web. As the cold accreted gas falls into the hot intracluster medium, it gives rise to shocks at the outskirts of the cluster, which could accelerate cosmic rays through the diffusive shock acceleration mechanism. Large-scale radio halos and radio relics, which are expected to be due to synchrotron emission from relativistic electrons, show that clusters do host high-energy particles. Studies have found that shocks in clusters can accelerate iron nuclei to 10²⁰ eV, which is nearly as much as the most energetic cosmic rays observed by the Pierre Auger Observatory. However, if clusters do accelerate protons or nuclei to such high energies, they should also produce gamma ray emission due to the interaction of the high-energy particles with the intracluster medium. This gamma ray emission has not yet been observed, which is difficult to explain.

Gamma ray bursts
Gamma ray bursts (GRBs) were originally proposed as a possible source of extragalactic cosmic rays because the energy required to produce the observed flux of cosmic rays was similar to their typical luminosity in γ-rays, and because they could accelerate protons to energies of 10²⁰ eV through diffusive shock acceleration. Long GRBs are especially interesting as possible sources of extragalactic cosmic rays in light of the evidence for a heavier composition at higher energies, since long GRBs are associated with the death of massive stars, which are well known to produce heavy elements. However, in this case many of the heavy nuclei would be photo-disintegrated, leading to considerable neutrino emission associated with GRBs, which has not been observed. Some studies have suggested that a specific population of GRBs known as low-luminosity GRBs might resolve this, as the lower luminosity would lead to less photo-disintegration and neutrino production; these low-luminosity GRBs could also simultaneously account for the observed high-energy neutrinos. However, it has also been argued that low-luminosity GRBs are not energetic enough to be a major source of high-energy cosmic rays.

Neutron stars
Neutron stars are formed from the core collapse of massive stars, and as with GRBs they can be a source of heavy nuclei. In models with neutron stars (specifically young pulsars or magnetars) as the source of extragalactic cosmic rays, heavy elements (mainly iron) are stripped from the surface of the object by the electric field created by the magnetized neutron star's rapid rotation. This same electric field can accelerate iron nuclei up to 10²⁰ eV. The photodisintegration of the heavy nuclei would produce lighter elements with lower energies, matching the observations of the Pierre Auger Observatory. In this scenario, the cosmic rays accelerated by neutron stars within the Milky Way could fill in the 'transition region' between Galactic cosmic rays produced in supernova remnants and extragalactic cosmic rays.

See also
Ultra-high-energy cosmic ray

References

Astrophysics
Astroparticle physics
Cosmic rays
Extragalactic cosmic ray
[ "Physics", "Astronomy" ]
2,419
[ "Physical phenomena", "Astronomical sub-disciplines", "Astroparticle physics", "Astrophysics", "Radiation", "Particle physics", "Cosmic rays" ]