Dataset fields: id (int64, 39–79M), url (string, 32–168 chars), text (string, 7–145k chars), source (string, 2–105 chars), categories (list, 1–6 items), token_count (int64, 3–32.2k), subcategories (list, 0–27 items)
1,617,188
https://en.wikipedia.org/wiki/Laws%20of%20motion
In physics, a number of noted theories of the motion of objects have been developed. Among the best known are: classical mechanics (Newton's laws of motion, Euler's laws of motion, Cauchy's equations of motion, Kepler's laws of planetary motion), general relativity, special relativity, and quantum mechanics. Motion (physics)
Laws of motion
[ "Physics" ]
64
[ "Physical phenomena", "Motion (physics)", "Space", "Mechanics", "Spacetime" ]
1,617,558
https://en.wikipedia.org/wiki/Bridgman%27s%20thermodynamic%20equations
In thermodynamics, Bridgman's thermodynamic equations are a basic set of thermodynamic equations, derived using a method of generating multiple thermodynamic identities involving a number of thermodynamic quantities. The equations are named after the American physicist Percy Williams Bridgman. (See also the exact differential article for general differential relationships.) The extensive variables of the system are fundamental. Only the entropy S, the volume V, and the four most common thermodynamic potentials will be considered. The four most common thermodynamic potentials are: internal energy U, enthalpy H, Helmholtz free energy A, and Gibbs free energy G. The first derivatives of the internal energy with respect to its (extensive) natural variables S and V yield the intensive parameters of the system: the pressure P and the temperature T. For a simple system in which the particle numbers are constant, the second derivatives of the thermodynamic potentials can all be expressed in terms of only three material properties: the heat capacity at constant pressure CP, the coefficient of thermal expansion α, and the isothermal compressibility βT. Bridgman's equations are a series of relationships between all of the above quantities. Introduction Many thermodynamic equations are expressed in terms of partial derivatives. For example, the expression for the heat capacity at constant pressure is CP = (∂H/∂T)P, which is the partial derivative of the enthalpy with respect to temperature while holding pressure constant. We may write this equation as CP = (∂H)P / (∂T)P. This method of rewriting the partial derivative was described by Bridgman (and also Lewis & Randall), and allows the use of the following collection of expressions to express many thermodynamic equations. For example, from the equations below we have (∂H)P = CP and (∂T)P = 1. Dividing, we recover the proper expression for CP. The following summary restates various partial terms in terms of the thermodynamic potentials, the state parameters S, T, P, V, and the three material properties above, which are easily measured experimentally. Bridgman's thermodynamic equations Note that Lewis and Randall use F and E for the Gibbs energy and internal energy, respectively, rather than G and U which are used in this article. See also Table of thermodynamic equations Exact differential References Thermodynamic equations Equations
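To make the rewriting step concrete, here is a short worked example in LaTeX; it states the conventional textbook definitions of the three material properties named above and shows how CP is recovered from Bridgman-style sub-expressions:

```latex
\[
C_P = \left(\frac{\partial H}{\partial T}\right)_P, \qquad
\alpha = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_P, \qquad
\beta_T = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_T
\]
% Bridgman's rewriting: treat each partial derivative as a ratio of tabulated terms,
% with the convention (dx/dy)_z = (dx)_z / (dy)_z.
\[
(\partial H)_P = C_P, \qquad (\partial T)_P = 1
\quad\Longrightarrow\quad
\left(\frac{\partial H}{\partial T}\right)_P
  = \frac{(\partial H)_P}{(\partial T)_P} = C_P .
\]
```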
Bridgman's thermodynamic equations
[ "Physics", "Chemistry", "Mathematics" ]
554
[ "Thermodynamic equations", "Equations of physics", "Mathematical objects", "Equations", "Thermodynamics" ]
1,618,377
https://en.wikipedia.org/wiki/Oceanic%20basin
In hydrology, an oceanic basin (or ocean basin) is anywhere on Earth that is covered by seawater. Geologically, most of the ocean basins are large geologic basins that are below sea level. Most commonly the ocean is divided into basins following the distribution of the continents: the North and South Atlantic (together approximately 75 million km2 / 29 million mi2), North and South Pacific (together approximately 155 million km2 / 59 million mi2), Indian Ocean (68 million km2 / 26 million mi2) and Arctic Ocean (14 million km2 / 5.4 million mi2). Also recognized is the Southern Ocean (20 million km2 / 7 million mi2). All ocean basins collectively cover 71% of the Earth's surface, and together they contain almost 97% of all water on the planet. They have an average depth of almost 4 km (about 2.5 miles). Definitions of boundaries Boundaries based on continents "Limits of Oceans and Seas", published by the International Hydrographic Organization in 1953, is a document that defined the ocean's basins as they are largely known today. The main ocean basins are the ones named in the previous section. These main basins are divided into smaller parts. Some examples are: the Baltic Sea (with three subdivisions), the North Sea, the Greenland Sea, the Norwegian Sea, the Laptev Sea, the Gulf of Mexico, the South China Sea, and many more. The limits were set for the convenience of compiling sailing directions but had no geographical or physical basis, and to this day they have no political significance. For instance, the line between the North and South Atlantic is set at the equator. The Antarctic or Southern Ocean, which reaches from 60° south to Antarctica, had been omitted until 2000, but is now also recognized by the International Hydrographic Organization. Nevertheless, since ocean basins are interconnected, many oceanographers prefer to refer to one single ocean basin instead of multiple ones. Older references (e.g., Littlehales 1930) consider the oceanic basins to be the complement to the continents, with erosion dominating the latter and the sediments so derived ending up in the ocean basins. This vision is supported by the fact that oceans lie lower than continents, so the former serve as sedimentary basins that collect sediment eroded from the continents, known as clastic sediments, as well as precipitation sediments. Ocean basins also serve as repositories for the skeletons of carbonate- and silica-secreting organisms such as coral reefs, diatoms, radiolarians, and foraminifera. More modern sources (e.g., Floyd 1991) regard the ocean basins more as basaltic plains than as sedimentary depositories, since most sedimentation occurs on the continental shelves and not in the geologically defined ocean basins. Definition based on surface connectivity The flow in the ocean is not uniform but varies with depth. Vertical circulation in the ocean is very slow compared to horizontal flow, and observing the deep ocean is difficult. Defining the ocean basins based on connectivity of the entire ocean (depth and width) is therefore not possible. Froyland et al. (2014) defined ocean basins based on surface connectivity. This is achieved by creating a Markov chain model of the surface ocean dynamics using short-term trajectory data from a global ocean model. These trajectories are of particles that move only on the surface of the ocean. The model outcome gives the probability that a particle at a certain grid point will end up somewhere else on the ocean's surface.
With the model outcome a matrix can be created, from which the eigenvectors and eigenvalues are taken. These eigenvectors show regions of attraction, that is, regions where things on the surface of the ocean (plastic, biomass, water, etc.) become trapped. One example of such a region is the Atlantic garbage patch. With this approach the five main ocean basins are still the North and South Atlantic, North and South Pacific and the Arctic Ocean, but with different boundaries between the basins. These boundaries mark lines of very little surface connectivity between the different regions, which means that a particle on the ocean surface in a certain region is more likely to stay in the same region than to pass over to a different one. Formation of oceanic crusts and basins Earth's structure Depending on the chemical composition and the physical state, the Earth can be divided into three major components: the mantle, the core, and the crust. The crust is the outside layer of the Earth. It is made of solid rock, mostly basalt and granite. The crust that lies below sea level is known as the oceanic crust, while on land it is known as the continental crust. The former is thinner and is composed of relatively dense basalt, while the latter is less dense and mainly composed of granite. The lithosphere is composed of the crust (oceanic and continental) and the uppermost part of the mantle. The lithosphere is broken into sections called plates. Processes of tectonic plates Tectonic plates move very slowly (5 to 10 cm (2 to 4 inches) per year) relative to each other and interact along their boundaries. This movement is responsible for most of the Earth's seismic and volcanic activity. Depending on how the plates interact with each other, there are three types of boundaries. Convergent boundary: the plates collide, and eventually the denser one slides underneath the lighter one, a process known as subduction. This type of interaction can take place between two sections of oceanic crust, creating a so-called oceanic trench. It can also take place between oceanic and continental crust, forming a mountain range on the continent, like the Andes, and it can take place between two sections of continental crust, resulting in large mountain chains, like the Himalayas. Divergent boundary: the plates move apart from each other. If this occurs on land, a rift is formed, which eventually becomes a rift valley. The most active divergent boundaries lie under the sea. In the ocean, if magma (molten rock) ascends from the mantle and fills the gap created by two diverging plates, a mid-ocean ridge is formed. Transform boundary: also called a transform fault, this occurs when the movement between the plates is horizontal, so no crust is created or destroyed. It can happen both on land and in the sea, but most of these faults are in the oceanic crust. Size of trenches The Earth's deepest trench is the Mariana Trench, which extends for about 2,500 km (1,600 miles) across the seabed. It is near the Mariana Islands, a volcanic archipelago in the West Pacific. Its deepest point is 10,994 m (nearly 7 miles) below the surface of the sea. The Earth's longest trench runs alongside the coast of Peru and Chile, reaching a depth of 8,065 m (26,460 feet) and extending for approximately 5,900 km (3,700 miles). It occurs where the oceanic Nazca plate slides under the continental South American plate and is associated with the upthrust and volcanic activity of the Andes.
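The surface-connectivity construction just described can be sketched numerically. The following Python snippet is a minimal illustration only (the toy grid, trajectory data, and function names are assumptions of this sketch, not details from Froyland et al.): it builds a row-stochastic transition matrix from particle start and end cells and looks at its leading left eigenvectors, whose sign pattern separates weakly connected surface regions.

```python
import numpy as np

def transition_matrix(start_cells, end_cells, n_cells):
    """Row-stochastic Markov matrix P[i, j]: probability that a particle
    starting in grid cell i ends up in grid cell j after a short time step."""
    counts = np.zeros((n_cells, n_cells))
    for i, j in zip(start_cells, end_cells):
        counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1          # avoid division by zero for empty cells
    return counts / row_sums

# Toy data: two clusters of cells (0-2 and 3-5) with only rare exchange between them.
rng = np.random.default_rng(0)
starts = rng.integers(0, 6, size=5000)
ends = np.where(rng.random(5000) < 0.95,
                (starts // 3) * 3 + rng.integers(0, 3, size=5000),  # stay in own cluster
                rng.integers(0, 6, size=5000))                       # occasionally cross over

P = transition_matrix(starts, ends, n_cells=6)
eigvals, eigvecs = np.linalg.eig(P.T)     # left eigenvectors of P
order = np.argsort(-eigvals.real)
second = eigvecs[:, order[1]].real        # its +/- sign pattern splits the two basins
print(np.round(second, 2))
```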
History and age of oceanic crust The oldest oceanic crust is in the far western equatorial Pacific, east of the Mariana Islands. It is located far away from oceanic spreading centers, where oceanic crust is constantly being created. The oldest crust is estimated to be only around 200 million years old, compared to the age of the Earth, which is 4.6 billion years. 200 million years ago nearly all land mass was one large continent called Pangea, which started to split up. During the splitting process of Pangea, some ocean basins shrank, such as the Pacific, while others were created, such as the Atlantic and Arctic basins. The Atlantic Basin began to form around 180 million years ago, when the continent Laurasia (North America and Eurasia) started to drift away from Africa and South America. The Pacific plate grew, and subduction led to a shrinking of its bordering plates. The Pacific plate continues to move northward. Around 130 million years ago the South Atlantic started to form, as South America and Africa started to separate. At around this time India and Madagascar rifted northwards, away from Australia and Antarctica, creating seafloor around Western Australia and East Antarctica. When Madagascar and India separated between 90 and 80 million years ago, the spreading ridges in the Indian Ocean were reorganized. The northernmost part of the Atlantic Ocean was also formed at this time, when Europe and Greenland separated. About 60 million years ago a new rift and oceanic ridge formed between Greenland and Europe, separating them and initiating the formation of oceanic crust in the Norwegian Sea and the Eurasian Basin in the eastern Arctic Ocean. Changes in ocean basins State of the current ocean basins The area occupied by the individual ocean basins has fluctuated in the past due to, amongst other factors, tectonic plate movements. Therefore, an oceanic basin can be actively changing size and/or depth or can be relatively inactive. The elements of an active and growing oceanic basin include an elevated mid-ocean ridge, flanking abyssal hills leading down to abyssal plains, and an oceanic trench. Changes in biodiversity, flooding, and other climate variations are linked to sea level, and are reconstructed with different models and observations (e.g., the age of the oceanic crust). Sea level is affected not only by the volume of the ocean basins, but also by the volume of water in them. Factors that influence the volume of the ocean basins are: Plate tectonics and the volume of mid-ocean ridges: the depth of the seafloor increases with distance from a ridge, as the oceanic lithosphere cools and thickens. The volume of ocean basins can be modeled using reconstructions of plate tectonics and using an age-depth relationship (see also Seafloor depth vs age). Marine sedimentation: this influences the global mean depth and volume of the ocean, but it is difficult to determine and reconstruct. Passive margins and crustal extensions: to compensate for the extension of continents due to continental rifting, oceanic crust decreases and therefore so does the volume of the ocean basin. However, the increase in continental area leads to a stretching and thinning of the continental crust, much of which ends up below sea level, thus again leading to an increase in ocean basin volume. The Atlantic Ocean and the Arctic Ocean are good examples of active, growing oceanic basins, whereas the Mediterranean Sea is shrinking.
The Pacific Ocean is also an active, shrinking oceanic basin, even though it has both spreading ridges and oceanic trenches. Perhaps the best example of an inactive oceanic basin is the Gulf of Mexico, which formed in Jurassic times and has done little but collect sediments since then. The Aleutian Basin is another example of a relatively inactive oceanic basin. The Japan Basin in the Sea of Japan, which formed in the Miocene, is still tectonically active, although recent changes have been relatively mild. See also List of abyssal plains and oceanic basins List of oceanic landforms Trough (geology) Solid Earth Notes Further reading External links Global Solid Earth Topography Physical oceanography Marine geology Coastal and oceanic landforms Oceanographical terminology
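The age-depth relationship mentioned above (see also Seafloor depth vs age) is often illustrated with a simple square-root-of-age fit; the coefficients below are a commonly quoted Parsons-and-Sclater-style parametrization chosen here for illustration, not values given in the article:

```python
import math

def seafloor_depth_m(age_myr, ridge_depth_m=2500.0, coeff=350.0):
    """Illustrative half-space-cooling fit: depth increases with the square root
    of crustal age (roughly valid for young seafloor, under ~70 Myr)."""
    return ridge_depth_m + coeff * math.sqrt(age_myr)

# Depth deepens from ~2.5 km at the ridge to ~5.3 km for 64-Myr-old crust.
for age in (0, 16, 64):
    print(age, round(seafloor_depth_m(age)))
```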
Oceanic basin
[ "Physics" ]
2,250
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
1,618,989
https://en.wikipedia.org/wiki/Sphaleron
A sphaleron ( "slippery") is a static (time-independent) solution to the electroweak field equations of the Standard Model of particle physics, and is involved in certain hypothetical processes that violate baryon and lepton numbers. Such processes cannot be represented by perturbative methods such as Feynman diagrams, and are therefore called non-perturbative. Geometrically, a sphaleron is a saddle point of the electroweak potential (in infinite-dimensional field space). This saddle point rests at the top of a barrier between two different low-energy equilibria of a given system; the two equilibria are labeled with two different baryon numbers. One of the equilibria might consist of three baryons; the other, alternative, equilibrium for the same system might consist of three antileptons. In order to cross this barrier and change the baryon number, a system must either tunnel through the barrier (in which case the transition is an instanton-like process) or must for a reasonable period of time be brought up to a high enough energy that it can classically cross over the barrier (in which case the process is termed a "sphaleron" process and can be modeled with an eponymous sphaleron particle). In both the instanton and sphaleron cases, the process can only convert groups of three baryons into three antileptons (or three antibaryons into three leptons) and vice versa. This violates conservation of baryon number and lepton number, but the difference B − L is conserved. The minimum energy required to trigger the sphaleron process is believed to be around 10 TeV; however, sphalerons cannot be produced in existing LHC collisions, because although the LHC can create collisions of energy 10 TeV and greater, the generated energy cannot be concentrated in a manner that would create sphalerons. A sphaleron is similar to the midpoint of the instanton, so it is non-perturbative. This means that under normal conditions sphalerons are unobservably rare. However, they would have been more common at the higher temperatures of the early universe. Baryogenesis Since a sphaleron may convert baryons to antileptons and antibaryons to leptons and thus change the baryon number, if the density of sphalerons was at some stage high enough, they could wipe out any net excess of baryons or anti-baryons. This has two important implications in any theory of baryogenesis within the Standard Model: Any baryon net excess arising before the electroweak symmetry breaking would be wiped out due to abundant sphalerons caused by high temperatures existing in the early universe. While a baryon net excess can be created during the electroweak symmetry breaking, it can be preserved only if this phase transition was first-order. This is because in a second-order phase transition, sphalerons would wipe out any baryon asymmetry as it is created, while in a first-order phase transition, sphalerons would wipe out baryon asymmetry only in the unbroken phase. In absence of processes which violate B − L it is possible for an initial baryon asymmetry to be protected if it has a non-zero projection onto B − L. In this case the sphaleron processes would impose an equilibrium which distributes the initial B asymmetry between both B and L numbers. In some theories of baryogenesis, an imbalance of the number of leptons and antileptons is formed first by leptogenesis and sphaleron transitions then convert this to an imbalance in the numbers of baryons and antibaryons. 
Details For an SU(2) gauge theory, neglecting , we have the following equations for the gauge field and the Higgs field in the gauge where , , the symbols represent the generators of SU(2), is the electroweak coupling constant, and is the Higgs VEV absolute value. The functions and , which must be determined numerically, go from 0 to 1 in value as their argument, , goes from 0 to . For a sphaleron in the background of a non-broken phase, the Higgs field must obviously fall off eventually to zero as goes to infinity. Note that in the limit , the gauge sector approaches one of the pure-gauge transformation , which is the same as the pure gauge transformation to which the BPST instanton approaches as at , hence establishing the connection between the sphaleron and the instanton. Baryon number violation is caused by the "winding" of the fields from one equilibrium to another. Each time the weak gauge fields wind, the count for each of the quark families and each of the lepton families is raised (or lowered, depending on the winding direction) by one; as there are three quark families, baryon number can only change in multiples of three. The baryon number violation can alternatively be visualized in terms of a kind of Dirac sea: in the course of the winding, a baryon originally considered to be part of the vacuum is now considered a real baryon, or vice versa, and all the other baryons stacked inside the sea are accordingly shifted by one energy level. Energy release According to physicist Max Tegmark, the theoretical energy efficiency from conversion of baryons to antileptons would be orders of magnitude higher than the energy efficiency of existing power-generation technology such as nuclear fusion. Tegmark speculates that an extremely advanced civilization might use a "sphalerizer" to generate energy from ordinary baryonic matter. See also References and notes Notes Citations Electroweak theory Anomalies (physics)
Sphaleron
[ "Physics" ]
1,195
[ "Physical phenomena", "Fundamental interactions", "Electroweak theory" ]
1,619,050
https://en.wikipedia.org/wiki/Game%20of%20the%20Amazons
The Game of the Amazons (in Spanish, El Juego de las Amazonas; often called Amazons for short) is a two-player abstract strategy game invented in 1988 by Walter Zamkauskas of Argentina. The game is played by moving pieces and blocking the opponents from squares, and the last player able to move is the winner. It is a member of the territorial game family, a distant relative of Go and chess. The Game of the Amazons is played on a 10x10 chessboard (or an international checkerboard). Some players prefer to use a monochromatic board. The two players are White and Black; each player has four amazons (not to be confused with the amazon fairy chess piece), which start on the board in the configuration shown at right. A supply of markers (checkers, poker chips, etc.) is also required. Rules White moves first, and the players alternate moves thereafter. Each move consists of two parts. First, one moves one of one's own amazons one or more empty squares in a straight line (orthogonally or diagonally), exactly as a queen moves in chess; it may not cross or enter a square occupied by an amazon of either color or an arrow. Second, after moving, the amazon shoots an arrow from its landing square to another square, using another queenlike move. This arrow may travel in any orthogonal or diagonal direction (even backwards along the same path the amazon just traveled, into or across the starting square if desired). An arrow, like an amazon, cannot cross or enter a square where another arrow has landed or an amazon of either color stands. The square where the arrow lands is marked to show that it can no longer be used. The last player to be able to make a move wins. Draws are impossible. Territory and scoring The strategy of the game is based on using arrows (as well as one's four amazons) to block the movement of the opponent's amazons and gradually wall off territory, trying to trap the opponents in smaller regions and gain larger areas for oneself. Each move reduces the available playing area, and eventually each amazon finds itself in a territory blocked off from all other amazons. The amazon can then move about its territory firing arrows until it no longer has any room to move. Since it would be tedious to actually play out all these moves, in practice the game usually ends when all of the amazons are in separate territories. The player with the largest amount of territory will be able to win, as the opponent will have to fill in their own territory more quickly. Scores are sometimes used for tie-breaking purposes in Amazons tournaments. When scoring, it is important to note that although the number of moves remaining to a player is usually equal to the number of empty squares in the territories occupied by that player's amazons, it is nonetheless possible to have defective territories in which there are fewer moves left than there are empty squares. The simplest such territory is three squares of the same colour, not in a straight line, with the amazon in the middle (for example, a1+b2+c1 with the amazon at b2). History El Juego de las Amazonas was first published in Spanish in the Argentine puzzle magazine El Acertijo in December 1992. An approved English translation written by Michael Keller appeared in the magazine World Game Review in January 1994. Other game publications also published the rules, and the game gathered a small but devoted following. The Internet spread the game more widely. 
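Because a full move is a queen-style amazon move followed by a queen-style arrow shot, generating the legal moves is the natural first step in any computer implementation of the rules above. The following Python sketch is illustrative only; the board representation and function names are assumptions of this sketch, not part of the game's definition.

```python
# 10x10 board: 0 = empty, 'W'/'B' = amazons, 'X' = arrow.
DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def queen_targets(board, r, c):
    """All empty squares reachable from (r, c) by a queen-like slide."""
    for dr, dc in DIRECTIONS:
        nr, nc = r + dr, c + dc
        while 0 <= nr < 10 and 0 <= nc < 10 and board[nr][nc] == 0:
            yield nr, nc
            nr, nc = nr + dr, nc + dc

def legal_moves(board, color):
    """Yield (from, to, arrow) triples: move an amazon, then shoot an arrow
    from its landing square (the vacated square counts as empty again)."""
    for r in range(10):
        for c in range(10):
            if board[r][c] != color:
                continue
            for tr, tc in list(queen_targets(board, r, c)):
                board[r][c], board[tr][tc] = 0, color      # make the move temporarily
                for ar, ac in queen_targets(board, tr, tc):
                    yield (r, c), (tr, tc), (ar, ac)
                board[r][c], board[tr][tc] = color, 0      # undo

# A player whose legal_moves(board, color) yields nothing has lost.
```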
Michael Keller wrote the first known computer version of the game in VAX Fortran in 1994, and an updated version with graphics in Visual Basic in 1995. There are Amazons tournaments at the Computer Olympiad, a series of computer-versus-computer competitions. El Juego de las Amazonas (The Game of the Amazons) is a trademark of Ediciones de Mente. Computational complexity Usually, in the endgame, the board is partitioned into separate "royal chambers", with queens inside each chamber. We define simple Amazons endgames to be endgames where each chamber has at most one queen. Determining who wins in a simple Amazons endgame is NP-hard. This is proven by reducing the problem of finding a Hamiltonian path in a cubic subgraph of the square grid graph to it. Generalized Amazons (that is, determining the winner of a game of Amazons played on an n × n grid, started from an arbitrary configuration) is PSPACE-complete. This can be proved in two ways. The first way is by reducing a generalized Hex position, which is known to be PSPACE-complete, into an Amazons position. The second way is by reducing a certain kind of generalized geography called GEOGRAPHY-BP3, which is PSPACE-complete, to an Amazons position. This Amazons position uses only one black queen and one white queen, thus showing that generalized Amazons is PSPACE-complete even if only one queen on each side is allowed. References Further reading Board games introduced in 1988 Abstract strategy games PSPACE-complete problems
Game of the Amazons
[ "Mathematics" ]
1,040
[ "PSPACE-complete problems", "Mathematical problems", "Computational problems" ]
1,619,127
https://en.wikipedia.org/wiki/Shot%20peening
Shot peening is a cold working process used to produce a compressive residual stress layer and modify the mechanical properties of metals and composites. It entails striking a surface with shot (round metallic, glass, or ceramic particles) with force sufficient to create plastic deformation. In machining, shot peening is used to strengthen and relieve stress in components like steel automobile crankshafts and connecting rods. In architecture it provides a muted finish to metal. Shot peening is mechanically similar to sandblasting, though its purpose is not to remove material; rather, it employs the mechanism of plasticity to achieve its goal, with each particle functioning as a ball-peen hammer. Details Peening a surface spreads it plastically, causing changes in the mechanical properties of the surface. Its main application is to avoid the propagation of microcracks in a surface. By putting a material under compressive stress, shot peening prevents such cracks from propagating. Shot peening is often called for in aircraft repairs to relieve tensile stresses built up in the grinding process and replace them with beneficial compressive stresses. Depending on the part geometry, part material, shot material, shot quality, shot intensity, and shot coverage, shot peening can increase fatigue life by up to 1,000%. Plastic deformation induces a residual compressive stress in a peened surface, along with tensile stress in the interior. Surface compressive stresses confer resistance to metal fatigue and to some forms of stress corrosion. The tensile stresses deep in the part are not as troublesome as tensile stresses on the surface because cracks are less likely to start in the interior. Intensity is a key parameter of the shot peening process. After some development of the process, an analog was needed to measure its effects. John Almen noticed that shot peening made the exposed side of a sheet of metal bend and stretch, and he created the Almen strip to measure the compressive stresses created in the strip by the shot peening operation. One obtains what is referred to as the "intensity of the blast stream" by measuring the arc-height deformation of an Almen strip placed in the shot peening operation. The strip is peened for a given exposure time and then again for twice that time at the same intensity; when doubling the exposure time increases the deformation by no more than about 10%, saturation has been reached, and the arc height at that point is taken as the intensity of the blast stream. Another way to gauge the intensity of a shot peening process is the use of an Almen round, developed by R. Bosshard. Coverage, the percentage of the surface indented once or more, is subject to variation due to the angle of the shot blast stream relative to the workpiece surface. The stream is cone-shaped; thus, shot arrives at varying angles. Processing the surface with a series of overlapping passes improves coverage, although variation in "stripes" will still be present. Alignment of the axis of the shot stream with the axis of the Almen strip is important. A continuous compressively stressed surface of the workpiece has been shown to be produced at less than 50% coverage but falls as 100% is approached. Optimizing the coverage level for the process being performed is important for producing the desired surface effect. SAE International publishes several standards for shot peening in aerospace and other industries. Process and equipment Popular methods for propelling shot media include air blast systems and centrifugal blast wheels.
In the air blast systems, media are introduced by various methods into the path of high pressure air and accelerated through a nozzle directed at the part to be peened. The centrifugal blast wheel consists of a high speed paddle wheel. Shot media are introduced in the center of the spinning wheel and propelled by the centrifugal force by the spinning paddles towards the part by adjusting the media entrance location, effectively timing the release of the media. Other methods include ultrasonic peening, wet peening, and laser peening (which does not use media). Media choices include spherical cast steel shot, ceramic bead, glass bead or conditioned (rounded) cut wire. Cut wire shot is preferred because it maintains its roundness as it is degraded, unlike cast shot which tends to break up into sharp pieces that can damage the workpiece. Cut wire shot can last five times longer than cast shot. Because peening demands well-graded shot of consistent hardness, diameter, and shape, a mechanism for removing shot fragments throughout the process is desirable. Equipment is available that includes separators to clean and recondition shot and feeders to add new shot automatically to replace the damaged material. Wheel blast systems include satellite rotation models, rotary throughfeed components, and various manipulator designs. There are overhead monorail systems as well as reverse-belted models. Workpiece holding equipment includes rotating index tables, loading and unloading robots, and jigs that hold multiple workpieces. For larger workpieces, manipulators to reposition them to expose features to the shot blast stream are available. Cut wire shot Cut wire shot is a metal shot used for shot peening, where small particles are fired at a workpiece by a compressed air jet. It is a low-cost manufacturing process, as the basic feedstock is inexpensive. As-cut particles are an effective abrasive due to the sharp edges created in the cutting process; however, as-cut shot is not a desirable shot peening medium, as its sharp edges are not suitable to the process. Cut shot is manufactured from high quality wire in which each particle is cut to a length about equal to its diameter. If required, the particles are conditioned (rounded) to remove the sharp corners produced during the cutting process. Depending on application, various hardness ranges are available, with the higher the hardness of the media the lower its durability. Other cut-wire shot applications include tumbling and vibratory finishing. Coverage Factors affecting coverage density include: number of impacts (shot flow), exposure time, shot properties (size, chemistry), and work piece properties. Coverage is monitored by visual examination to determine the percent coverage (0–100%). Coverage beyond 100% cannot be determined. The number of individual impacts is linearly proportional to shot flow, exposure area, and exposure time. Coverage is not linearly proportional because of the random nature of the process (chaos theory). When 100% coverage is achieved, locations on the surface have been impacted multiple times. At 150% coverage, 5 or more impacts occur at 52% of locations. At 200% coverage, 5 or more impacts occur at 84% of locations. Coverage is affected by shot geometry and the shot and workpiece chemistry. The size of the shot controls how many impacts there are per pound, where smaller shot produces more impacts per pound therefore requiring less exposure time. 
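A minimal sketch of this random, non-linear build-up of coverage, under the simplifying assumption that impact centres land uniformly at random (a Poisson model chosen here purely for illustration; it is not claimed to reproduce the specific percentage figures quoted above):

```python
import math

def fraction_covered(mean_impacts_per_point):
    """Poisson model: probability that a point has been hit at least once."""
    return 1.0 - math.exp(-mean_impacts_per_point)

def fraction_hit_at_least(k, mean_impacts_per_point):
    """Probability that a point has been hit k or more times."""
    lam = mean_impacts_per_point
    below = sum(math.exp(-lam) * lam**n / math.factorial(n) for n in range(k))
    return 1.0 - below

# Coverage rises steeply at first, then saturates: doubling the exposure
# time (i.e. the mean impact count) does not double the covered fraction.
for lam in (1, 2, 4, 8):
    print(lam, round(fraction_covered(lam), 3), round(fraction_hit_at_least(5, lam), 3))
```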
Soft shot impacting hard material will take more exposure time to reach acceptable coverage compared to hard shot impacting a soft material (since the harder shot can penetrate deeper, thus creating a larger impression). Coverage and intensity (measured by Almen strips) can have a profound effect on fatigue life, across the variety of materials that are typically shot peened. Incomplete or excessive coverage and intensity can result in reduced fatigue life. Over-peening will cause excessive cold working on the surface of the workpiece, which can also cause fatigue cracks. Diligence is required when developing parameters for coverage and intensity, especially when using materials having different properties (e.g., a softer metal versus a harder metal). Testing fatigue life over a range of parameters reveals a "sweet spot": fatigue life grows nearly exponentially to a peak (x = peening intensity or media stream energy, y = time-to-crack or fatigue strength) and then decays rapidly as more intensity or coverage is added. The "sweet spot" correlates directly with the kinetic energy transferred and the material properties of the shot media and workpiece. Applications Shot peening is used on gear parts, cams and camshafts, clutch springs, coil springs, connecting rods, crankshafts, gearwheels, leaf and suspension springs, rock drills, and turbine blades. It is also used in foundries for sand removal, decoring, descaling, and surface finishing of castings such as engine blocks and cylinder heads. Its descaling action can be used in the manufacturing of steel products such as strip, plates, sheets, wire, and bar stock. Shot peening is a crucial process in spring making. Types of springs include leaf springs, extension springs, and compression springs. The most widespread application is engine valve springs (compression springs), due to their high cyclic fatigue loading. In an OEM valve spring application, the mechanical design combined with some shot peening ensures longevity. Automotive makers are shifting to higher-performance, more highly stressed valve spring designs as engines evolve. In aftermarket high-performance valve spring applications, controlled, multi-step shot peening is required to withstand extreme surface stresses that sometimes exceed material specifications. The fatigue life of an extreme performance spring (NHRA, IHRA) can be as short as two passes on a 1/4 mile drag racing track before relaxation or failure occurs. Shot peening may be used for cosmetic effect. The surface roughness resulting from the overlapping dimples causes light to scatter upon reflection. Because peening typically produces larger surface features than sand-blasting, the resulting effect is more pronounced. Shot peening and abrasive blasting can also be used to apply materials to metal surfaces. When the shot or grit particles are blasted through a powder or liquid containing the desired surface coating, the impact plates or coats the workpiece surface. The process has been used to embed ceramic coatings, though the coverage is random rather than coherent. 3M developed a process where a metal surface was blasted with particles with a core of alumina and an outer layer of silica; the result was fusion of the silica to the surface. The process known as peen plating was developed by NASA. Fine powders of metals or non-metals are plated onto metal surfaces using glass bead shot as the blast medium. The process has evolved to applying solid lubricants such as molybdenum disulphide to surfaces.
Biocompatible ceramics have been applied this way to biomedical implants. Peen plating subjects the coating material to high heat in the collisions with the shot and the coating must also be available in powder form, limiting the range of materials that can be used. To overcome the problem of heat, a process called temperature moderated-collision mediated coating (TM-CMC) has allowed the use of polymers and antibiotic materials as peened coatings. The coating is presented as an aerosol directed to the surface at the same time as a stream of shot particles. The TM-CMC process is still in the R&D phase of development. Compressive residual stress A sub-surface compressive residual stress profile is measured using techniques such as x-ray diffraction and hardness profile testings. The X-axis is depth in mm or inches and the Y-axis is residual stress in ksi or MPa. The maximum residual stress profile can be affected by the factors of shot peening, including: part geometry, part material, shot material, shot quality, shot intensity, and shot coverage. For example, shot peening a hardened steel part with a process and then using the same process for another unhardened part could result in over-peening; causing a sharp decrease in surface residual stresses, but not affecting sub-surface stresses. This is critical because maximum stresses are typically at the surface of the material. Mitigation of these lower surface stresses can be accomplished by a multi-stage post process with varied shot diameters and other surface treatments that remove the low residual stress layer. The compressive residual stress in a metal alloy is produced by the transfer of kinetic energy (K.E.) from a moving mass (shot particle or ball peen) into the surface of a material with the capacity to plastically deform. The residual stress profile is also dependent on coverage density. The mechanics of the collisions involve properties of the shot hardness, shape, and structure; as well as the properties of the workpiece. Factors for process development and the control for K.E. transfer for shot peening are: shot velocity (wheel speed or air pressure/nozzle design), shot mass, shot chemistry, impact angle and work piece properties. Example: if you needed very high residual stresses you would likely want to use large diameter cut-wire shot, a high-intensity process, direct blast onto the workpiece, and a very hard workpiece material. See also Autofrettage, which produces compressive residual stresses in pressure vessels. Case hardening Differential hardening Steel abrasives Shot peening of steel belts High-frequency impact treatment after-treatment of weld transitions Suncorite Trimite References
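As a small numerical illustration of the kinetic-energy transfer discussed above, the following estimates the kinetic energy of a single spherical shot particle from its diameter, density, and velocity; the example values are assumptions chosen for illustration, not figures from the text.

```python
import math

def shot_kinetic_energy_joules(diameter_m, density_kg_m3, velocity_m_s):
    """K.E. = 1/2 m v^2 for a single spherical shot particle."""
    volume = (math.pi / 6.0) * diameter_m ** 3    # volume of a sphere
    mass = density_kg_m3 * volume
    return 0.5 * mass * velocity_m_s ** 2

# Example: 0.6 mm steel shot (about 7,850 kg/m^3) travelling at 70 m/s.
print(f"{shot_kinetic_energy_joules(0.6e-3, 7850.0, 70.0):.2e} J")
```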
Shot peening
[ "Materials_science" ]
2,659
[ "Strengthening mechanisms of materials", "Shot peening" ]
1,619,396
https://en.wikipedia.org/wiki/Q-analog
In mathematics, a q-analog of a theorem, identity or expression is a generalization involving a new parameter q that returns the original theorem, identity or expression in the limit as q → 1. Typically, mathematicians are interested in q-analogs that arise naturally, rather than in arbitrarily contriving q-analogs of known results. The earliest q-analog studied in detail is the basic hypergeometric series, which was introduced in the 19th century. q-analogs are most frequently studied in the mathematical fields of combinatorics and special functions. In these settings, the limit q → 1 is often formal, as q is often discrete-valued (for example, it may represent a prime power). q-analogs find applications in a number of areas, including the study of fractals and multi-fractal measures, and expressions for the entropy of chaotic dynamical systems. The relationship to fractals and dynamical systems results from the fact that many fractal patterns have the symmetries of Fuchsian groups in general (see, for example, Indra's pearls and the Apollonian gasket) and the modular group in particular. The connection passes through hyperbolic geometry and ergodic theory, where the elliptic integrals and modular forms play a prominent role; the q-series themselves are closely related to elliptic integrals. q-analogs also appear in the study of quantum groups and in q-deformed superalgebras. The connection here is similar, in that much of string theory is set in the language of Riemann surfaces, resulting in connections to elliptic curves, which in turn relate to q-series. "Classical" q-theory Classical q-theory begins with the q-analogs of the nonnegative integers. The equality (1 − q^n)/(1 − q) = 1 + q + q^2 + ⋯ + q^(n−1) suggests that we define the q-analog of n, also known as the q-bracket or q-number of n, to be [n]q = (1 − q^n)/(1 − q). By itself, the choice of this particular q-analog among the many possible options is unmotivated. However, it appears naturally in several contexts. For example, having decided to use [n]q as the q-analog of n, one may define the q-analog of the factorial, known as the q-factorial, by [n]q! = [1]q [2]q ⋯ [n]q. This q-analog appears naturally in several contexts. Notably, while n! counts the number of permutations of length n, [n]q! counts permutations while keeping track of the number of inversions. That is, if inv(w) denotes the number of inversions of the permutation w and Sn denotes the set of permutations of length n, we have Σ_{w in Sn} q^inv(w) = [n]q!. In particular, one recovers the usual factorial by taking the limit as q → 1. The q-factorial also has a concise definition in terms of the q-Pochhammer symbol, a basic building-block of all q-theories: [n]q! = (q; q)n / (1 − q)^n. From the q-factorials, one can move on to define the q-binomial coefficients, also known as Gaussian coefficients, Gaussian polynomials, or Gaussian binomial coefficients: (n choose k)q = [n]q! / ([k]q! [n − k]q!). The q-exponential is defined as eq(z) = Σ_{n ≥ 0} z^n / [n]q!, and q-trigonometric functions, along with a q-Fourier transform, have been defined in this context. Combinatorial q-analogs The Gaussian coefficients count subspaces of a finite vector space. Let q be the number of elements in a finite field. (The number q is then a power of a prime number, so using the letter q is especially appropriate.) Then the number of k-dimensional subspaces of the n-dimensional vector space over the q-element field equals the Gaussian binomial coefficient (n choose k)q. Letting q approach 1, we get the binomial coefficient (n choose k), or in other words, the number of k-element subsets of an n-element set. Thus, one can regard a finite vector space as a q-generalization of a set, and the subspaces as the q-generalization of the subsets of the set.
As another example, the number of complete flags in the n-dimensional vector space over the q-element field is [n]q!, as the order in which we build the flag matters, and after taking the limit we get the ordinary factorial n!. This has been a fruitful point of view in finding interesting new theorems. For example, there are q-analogs of Sperner's theorem and Ramsey theory. Cyclic sieving Let q = (e^(2πi/n))^d be the d-th power of a primitive n-th root of unity. Let C be a cyclic group of order n generated by an element c. Let X be the set of k-element subsets of the n-element set {1, 2, ..., n}. The group C has a canonical action on X given by sending c to the cyclic permutation (1, 2, ..., n). Then the number of fixed points of c^d on X is equal to the Gaussian binomial coefficient (n choose k)q evaluated at this root of unity q. q → 1 Conversely, by letting q vary and seeing q-analogs as deformations, one can consider the combinatorial case of q = 1 as a limit of q-analogs as q → 1 (often one cannot simply set q = 1 in the formulae, hence the need to take a limit). This can be formalized in the field with one element, which recovers combinatorics as linear algebra over the field with one element: for example, Weyl groups are simple algebraic groups over the field with one element. Applications in the physical sciences q-analogs are often found in exact solutions of many-body problems. In such cases, the q → 1 limit usually corresponds to relatively simple dynamics, e.g., without nonlinear interactions, while q ≠ 1 gives insight into the complex nonlinear regime with feedbacks. An example from atomic physics is the model of molecular condensate creation from an ultracold fermionic atomic gas during a sweep of an external magnetic field through the Feshbach resonance. This process is described by a model with a q-deformed version of the SU(2) algebra of operators, and its solution is described by q-deformed exponential and binomial distributions. See also List of q-analogs Stirling number Young tableau References Andrews, G. E., Askey, R. A. & Roy, R. (1999), Special Functions, Cambridge University Press, Cambridge. Gasper, G. & Rahman, M. (2004), Basic Hypergeometric Series, Cambridge University Press. Ismail, M. E. H. (2005), Classical and Quantum Orthogonal Polynomials in One Variable, Cambridge University Press. Koekoek, R. & Swarttouw, R. F. (1998), The Askey-scheme of hypergeometric orthogonal polynomials and its q-analogue, 98-17, Delft University of Technology, Faculty of Information Technology and Systems, Department of Technical Mathematics and Informatics. External links q-analog from MathWorld q-bracket from MathWorld q-factorial from MathWorld q-binomial coefficient from MathWorld Combinatorics
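To connect the counting statements above to something executable, here is a small self-contained check (a sketch; the function names are chosen here) that the Gaussian binomial coefficient, built from q-factorials as polynomials, reduces to the ordinary binomial coefficient at q = 1 and counts k-dimensional subspaces over a q-element field:

```python
from sympy import symbols, cancel, binomial

q = symbols('q')

def q_bracket(n):
    """[n]_q = 1 + q + ... + q^(n-1)."""
    return sum(q**i for i in range(n))

def q_factorial(n):
    """[n]_q! = [1]_q [2]_q ... [n]_q."""
    result = 1
    for i in range(1, n + 1):
        result *= q_bracket(i)
    return result

def gaussian_binomial(n, k):
    """(n choose k)_q as a polynomial in q."""
    return cancel(q_factorial(n) / (q_factorial(k) * q_factorial(n - k)))

g = gaussian_binomial(4, 2)
print(g.expand())                      # q**4 + q**3 + 2*q**2 + q + 1
print(g.subs(q, 1), binomial(4, 2))    # both 6: the q -> 1 limit
print(g.subs(q, 2))                    # 35 = number of 2-dim subspaces of F_2^4
```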
Q-analog
[ "Mathematics" ]
1,431
[ "Discrete mathematics", "Q-analogs", "Combinatorics" ]
1,619,428
https://en.wikipedia.org/wiki/Flow%20control%20%28data%29
In data communications, flow control is the process of managing the rate of data transmission between two nodes to prevent a fast sender from overwhelming a slow receiver. Flow control should be distinguished from congestion control, which is used for controlling the flow of data when congestion has actually occurred. Flow control mechanisms can be classified by whether or not the receiving node sends feedback to the sending node. Flow control is important because it is possible for a sending computer to transmit information at a faster rate than the destination computer can receive and process it. This can happen if the receiving computers have a heavy traffic load in comparison to the sending computer, or if the receiving computer has less processing power than the sending computer. Stop-and-wait Stop-and-wait flow control is the simplest form of flow control. In this method the message is broken into multiple frames, and the receiver indicates its readiness to receive a frame of data. The sender waits for a receipt acknowledgement (ACK) after every frame for a specified time (called a time out). The receiver sends the ACK to let the sender know that the frame of data was received correctly. The sender will then send the next frame only after the ACK. Operations Sender: Transmits a single frame at a time. Sender waits to receive ACK within time out. Receiver: Transmits acknowledgement (ACK) as it receives a frame. Go to step 1 when ACK is received, or time out is hit. If a frame or ACK is lost during transmission then the frame is re-transmitted. This re-transmission process is known as ARQ (automatic repeat request). The problem with Stop-and-wait is that only one frame can be transmitted at a time, and that often leads to inefficient transmission, because until the sender receives the ACK it cannot transmit any new packet. During this time both the sender and the channel are unutilised. Pros and cons of stop and wait Pros The only advantage of this method of flow control is its simplicity. Cons The sender needs to wait for the ACK after every frame it transmits. This is a source of inefficiency, and is particularly bad when the propagation delay is much longer than the transmission delay. Stop and wait can also create inefficiencies when sending longer transmissions. When longer transmissions are sent there is more likely chance for error in this protocol. If the messages are short the errors are more likely to be detected early. More inefficiency is created when single messages are broken into separate frames because it makes the transmission longer. Sliding window A method of flow control in which a receiver gives a transmitter permission to transmit data until a window is full. When the window is full, the transmitter must stop transmitting until the receiver advertises a larger window. Sliding-window flow control is best utilized when the buffer size is limited and pre-established. During a typical communication between a sender and a receiver the receiver allocates buffer space for n frames (n is the buffer size in frames). The sender can send and the receiver can accept n frames without having to wait for an acknowledgement. A sequence number is assigned to frames in order to help keep track of those frames which did receive an acknowledgement. The receiver acknowledges a frame by sending an acknowledgement that includes the sequence number of the next frame expected. 
This acknowledgement announces that the receiver is ready to receive n frames, beginning with the number specified. Both the sender and receiver maintain what is called a window. The size of the window is less than or equal to the buffer size. Sliding window flow control has far better performance than stop-and-wait flow control. For example, in a wireless environment, if data rates are low and the noise level is very high, waiting for an acknowledgement for every packet that is transferred is not very feasible. Therefore, transferring data in bulk yields better performance in terms of higher throughput. Sliding window flow control is a point-to-point protocol, assuming that no other entity tries to communicate until the current data transfer is complete. The window maintained by the sender indicates which frames it can send. The sender sends all the frames in the window and waits for an acknowledgement (as opposed to acknowledging after every frame). The sender then shifts the window to the corresponding sequence number, thus indicating that frames within the window starting from the current sequence number can be sent. Go back N An automatic repeat request (ARQ) algorithm, used for error correction, in which a negative acknowledgement (NACK) causes retransmission of the word in error as well as the next N − 1 words. The value of N is usually chosen such that the time taken to transmit the N words is less than the round trip delay from transmitter to receiver and back again. Therefore, a buffer is not needed at the receiver. The normalized propagation delay is a = Tp / Tt, where Tp = L / V is the propagation time (link length L over propagation velocity V) and Tt = F / r is the transmission time (frame size F in bits over bit rate r), so that a = Lr / (VF). To get the utilization you must define a window size (N). If N is greater than or equal to 2a + 1, then the utilization is 1 (full utilization) of the transmission channel. If it is less than 2a + 1, then the utilization is U = N / (1 + 2a). Selective repeat Selective repeat is a connection-oriented protocol in which both transmitter and receiver have a window of sequence numbers. The protocol has a maximum number of messages that can be sent without acknowledgement. If this window becomes full, the protocol is blocked until an acknowledgement is received for the earliest outstanding message. At this point the transmitter is clear to send more messages. Comparison This section compares stop-and-wait with sliding-window flow control and its go-back-N and selective-repeat variants. Stop-and-wait Error free: U = 1 / (1 + 2a). With errors: U = (1 − P) / (1 + 2a), where P is the probability that a frame is received in error. Selective repeat We define throughput T as the average number of blocks communicated per transmitted block. It is more convenient to calculate the average number of transmissions necessary to communicate a block, a quantity we denote by N̄, and then to determine T from the equation T = 1 / N̄. Transmit flow control Transmit flow control may occur: between data terminal equipment (DTE) and a switching center, via data circuit-terminating equipment (DCE), the opposite types interconnected straightforwardly, or between two devices of the same type (two DTEs, or two DCEs), interconnected by a crossover cable. The transmission rate may be controlled because of network or DTE requirements. Transmit flow control can occur independently in the two directions of data transfer, thus permitting the transfer rates in one direction to be different from the transfer rates in the other direction. Transmit flow control can be either stop-and-wait, or use a sliding window.
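The utilization formulas above translate directly into a few lines of Python; this is a minimal sketch (the function and variable names are chosen here) that computes the normalized propagation delay a and the resulting stop-and-wait and sliding-window utilizations:

```python
def normalized_delay(link_length_m, prop_velocity_mps, frame_bits, bitrate_bps):
    """a = Tp / Tt: propagation time over frame transmission time."""
    tp = link_length_m / prop_velocity_mps   # Tp = L / V
    tt = frame_bits / bitrate_bps            # Tt = F / r
    return tp / tt

def stop_and_wait_utilization(a, frame_error_prob=0.0):
    """U = (1 - P) / (1 + 2a); with P = 0 this reduces to 1 / (1 + 2a)."""
    return (1.0 - frame_error_prob) / (1.0 + 2.0 * a)

def sliding_window_utilization(a, window_size):
    """Full utilization when N >= 2a + 1, otherwise U = N / (1 + 2a)."""
    if window_size >= 2 * a + 1:
        return 1.0
    return window_size / (1.0 + 2.0 * a)

if __name__ == "__main__":
    # Example: 10 km link, 2e8 m/s propagation, 1000-bit frames at 10 Mbit/s -> a = 0.5.
    a = normalized_delay(10_000, 2e8, 1_000, 10e6)
    print(a, stop_and_wait_utilization(a), sliding_window_utilization(a, window_size=1))
```

With a window size of 1 the sliding-window result matches stop-and-wait, as expected.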
Flow control can be performed either by control signal lines in a data communication interface (see serial port and RS-232), or by reserving in-band control characters to signal flow start and stop (such as the ASCII codes for XON/XOFF). Hardware flow control In common RS-232 there are pairs of control lines which are usually referred to as hardware flow control: RTS (request to send) and CTS (clear to send), used in RTS flow control DTR (data terminal ready) and DSR (data set ready), used in DTR flow control Hardware flow control is typically handled by the DTE or "master end", as it is first raising or asserting its line to command the other side: In the case of RTS control flow, DTE sets its RTS, which signals the opposite end (the slave end such as a DCE) to begin monitoring its data input line. When ready for data, the slave end will raise its complementary line, CTS in this example, which signals the master to start sending data, and for the master to begin monitoring the slave's data output line. If either end needs to stop the data, it lowers its respective "data readiness" line. For PC-to-modem and similar links, in the case of DTR flow control, DTR/DSR are raised for the entire modem session (say a dialup internet call where DTR is raised to signal the modem to dial, and DSR is raised by the modem when the connection is complete), and RTS/CTS are raised for each block of data. An example of hardware flow control is a half-duplex radio modem to computer interface. In this case, the controlling software in the modem and computer may be written to give priority to incoming radio signals such that outgoing data from the computer is paused by lowering CTS if the modem detects a reception. Polarity: RS-232 level signals are inverted by the driver ICs, so line polarity is TxD-, RxD-, CTS+, RTS+ (clear to send when HI, data 1 is a LO) for microprocessor pins the signals are TxD+, RxD+, CTS-, RTS- (clear to send when LO, data 1 is a HI) Software flow control Conversely, XON/XOFF is usually referred to as software flow control. Open-loop flow control The open-loop flow control mechanism is characterized by having no feedback between the receiver and the transmitter. This simple means of control is widely used. The allocation of resources must be a "prior reservation" or "hop-to-hop" type. Open-loop flow control has inherent problems with maximizing the utilization of network resources. Resource allocation is made at connection setup using a CAC (connection admission control) and this allocation is made using information that is already "old news" during the lifetime of the connection. Often there is an over-allocation of resources and reserved but unused capacities are wasted. Open-loop flow control is used by ATM in its CBR, VBR and UBR services (see traffic contract and congestion control). Open-loop flow control incorporates two controls; the controller and a regulator. The regulator is able to alter the input variable in response to the signal from the controller. An open-loop system has no feedback or feed forward mechanism, so the input and output signals are not directly related and there is increased traffic variability. There is also a lower arrival rate in such system and a higher loss rate. In an open control system, the controllers can operate the regulators at regular intervals, but there is no assurance that the output variable can be maintained at the desired level. While it may be cheaper to use this model, the open-loop model can be unstable. 
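Relating this to the hardware (RTS/CTS) and software (XON/XOFF) flow control described above: on a serial link the choice is typically made when the port is opened. The sketch below uses the pyserial library; the port names and settings are illustrative assumptions.

```python
import serial  # pyserial

# Hardware flow control: let the RTS/CTS lines pace the byte stream.
hw_port = serial.Serial("/dev/ttyUSB0", baudrate=115200, rtscts=True, xonxoff=False)

# Software flow control: reserve the in-band XON/XOFF control characters instead.
sw_port = serial.Serial("/dev/ttyUSB1", baudrate=9600, rtscts=False, xonxoff=True)

hw_port.write(b"data paced by CTS from the far end\n")
sw_port.write(b"data paced by XON/XOFF characters\n")
```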
Closed-loop flow control The closed-loop flow control mechanism is characterized by the ability of the network to report pending network congestion back to the transmitter. This information is then used by the transmitter in various ways to adapt its activity to existing network conditions. Closed-loop flow control is used by ABR (see traffic contract and congestion control). Transmit flow control described above is a form of closed-loop flow control. This system incorporates all the basic control elements, such as the sensor, transmitter, controller and regulator. The sensor is used to capture a process variable. The process variable is sent to a transmitter, which translates the variable to the controller. The controller examines the information with respect to a desired value and initiates a corrective action if required. The controller then communicates to the regulator what action is needed to ensure that the output variable value matches the desired value. Therefore, there is a high degree of assurance that the output variable can be maintained at the desired level. The closed-loop control system can be a feedback or a feed-forward system: A feedback closed-loop system has a feedback mechanism that directly relates the input and output signals. The feedback mechanism monitors the output variable and determines if additional correction is required. The output variable value that is fed back is used to initiate corrective action on a regulator. Most control loops in industry are of the feedback type. In a feed-forward closed-loop system, the measured process variable is an input variable. The measured signal is then used in the same fashion as in a feedback system. The closed-loop model produces lower loss rates and queuing delays, and it results in congestion-responsive traffic. The closed-loop model is always stable, as the number of active flows is bounded. See also Software flow control Computer networking Traffic contract Congestion control Teletraffic engineering in broadband networks Teletraffic engineering Ethernet flow control Handshaking References Sliding window: last accessed 27 November 2012. External links RS-232 flow control and handshaking Network performance Logical link control Data transmission
Flow control (data)
[ "Engineering" ]
2,623
[ "Computer networks engineering", "Flow control (data)" ]
1,619,553
https://en.wikipedia.org/wiki/Gliese%20777
Gliese 777, often abbreviated as Gl 777 or GJ 777, is a binary star system approximately 52 light-years away in the constellation of Cygnus, made up of two stars and possibly a third. As of 2005, two extrasolar planets are known to orbit the primary star. Stellar components The primary star of the system (catalogued as Gliese 777 A) is a yellow subgiant, a Sun-like star that is ceasing to fuse hydrogen in its core. The star is much older than the Sun, about 6.7 billion years old. It is 4% less massive than the Sun. It is also rather metal-rich, having about 70% more "metals" (elements heavier than helium) than the Sun, which is typical for stars with extrasolar planets. The secondary star (Gliese 777 B) is a distant, dim red dwarf star orbiting the primary at a distance of 3,000 astronomical units (0.047 light years). One orbit takes at least tens of thousands of years to complete. This star may itself be a binary, its companion being a very dim red dwarf. Not much information is available on the star system. Planetary system In 2002, the discovery of a long-period, wide-orbiting planet (Gliese 777 b) was announced by the Geneva extrasolar planet search team. The planet was estimated to orbit in a circular path with low orbital eccentricity, but that estimate was increased with later measurements (e=0.36). Initially, therefore, the planet was believed to be a true "Jupiter-twin", but it was later redefined as being more like an "eccentric Jupiter", with a mass of at least 1.5 times that of Jupiter and about the same size. In 2021, the true mass of Gliese 777 Ab was measured via astrometry. In 2005, further observation of the star revealed another periodic signal, with a period of 17.1 days. The mass of this second planet (Gliese 777 c) was only about 18 times that of Earth, or about the same as Neptune, making it one of the smallest planets discovered at the time. It too was initially thought to be on a circular orbit, but later measurements showed that this was not the case. A METI message was sent to Gliese 777. It was transmitted from Eurasia's largest radar, the 70-meter Eupatoria Planetary Radar. The message was named Cosmic Call 1; it was sent on July 1, 1999, and it will arrive at Gliese 777 in April 2051. See also 47 Ursae Majoris 51 Pegasi Gliese 229 References External links Extrasolar Planet Interactions by Rory Barnes & Richard Greenberg, Lunar and Planetary Lab, University of Arizona Binary stars Cygnus (constellation) 190360 098767 7670 0777 M-type main-sequence stars G-type subgiants Planetary systems with two confirmed planets Durchmusterung objects TIC objects
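The statement that one orbit of the wide companion takes at least tens of thousands of years can be checked with Kepler's third law. The figure below is a rough illustrative estimate rather than a value from the article: it assumes a combined system mass of roughly 1.2 solar masses (the subgiant plus its red dwarf companion) and treats the 3,000 AU separation as the semi-major axis.

$$P \approx \sqrt{\frac{a^{3}}{M_{\text{total}}}}\;\text{yr} \approx \sqrt{\frac{3000^{3}}{1.2}} \approx 1.5\times10^{5}\;\text{yr},$$

with $a$ in astronomical units and $M_{\text{total}}$ in solar masses, which is consistent with an orbital period of at least tens of thousands of years.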
Gliese 777
[ "Astronomy" ]
633
[ "Cygnus (constellation)", "Constellations" ]
1,619,576
https://en.wikipedia.org/wiki/Mountain%20jet
Mountain jets are a type of jet stream created by surface winds channeled through mountain passes, sometimes causing high wind speeds and drastic temperature changes. Central America jets The North Pacific east of about 120°W is strongly influenced by winds blowing through gaps in the Central American cordillera. Air flow in the region forms the Intra-Americas Low-Level Jet, a westward flow about 1 km above sea level. This flow, trade winds, and cold air flowing south from North America contribute to winds flowing through several mountain valleys. Along Central America are three main wind jets through breaks in the American Cordillera, on the Pacific Ocean side due to prevailing winds. Tehuano wind blows from the Gulf of Mexico through Chivela Pass in Mexico's Isthmus of Tehuantepec and out over the Gulf of Tehuantepec on the Pacific coast. Chivela Pass is a gap between the Sierra Madre del Sur and the Sierra Madre range to the south. Papagayo wind shrieks over the lakes of Nicaragua and pushes far out over the Gulf of Papagayo on the Pacific coast. The Cordillera Central Mountains rise to the south, gradually descending to Gatun Lake and the Isthmus of Panama. Panama winds slice through to the Pacific through the Gaillard Cut in Panama, which also holds the Panama Canal. Cause The air flow is due to surges of cold dense air originating from the North American continent. The meteorological mechanism that causes Tehuano and Papagayo winds is relatively simple. In the winter, cold high-pressure weather systems move southward from North America over the Gulf of Mexico. These high pressure systems create strong pressure gradients between the atmosphere over the Gulf of Mexico and the warmer, moister atmosphere over the Pacific Ocean. Just as a river flows from high elevations to lower elevations, the air in the high pressure system will flow "downhill" toward lower pressure, but the Cordillera mountains block the flow of air, channeling it through Chivela Pass in Mexico, the lake district of Nicaragua, and also Gaillard (Culebra) Cut in Panama. Many times, a Tehuano wind is followed by Papagayo and Panama winds a few days later as the high pressure system moves south. The arrival of these cold surges, and their associated anticyclonic circulation, strengthens the trade winds at low latitudes, and this effect can last for several days. The wind flow over Central America is actually composed of the confluence of two air streams; one from the north, associated with cold surges, and the other from the northeast, associated with trade winds north of South America. Local effects The winds blow at speeds of 80 km/h or more down the hillsides from Chivela Pass and over the waters of the Gulf of Tehuantepec, sometimes extending more than 500 miles (800 km) into the Pacific Ocean. The surface waters under the Gulf of Tehuantepec wind jet can cool by as much as 10 °C in a day. In addition to the cold water that is detectable from other satellite sensors, the ocean's response to these winds shows up in satellite estimates of chlorophyll from ocean color measurements. The cold water and high chlorophyll concentration are signatures of mixing and upwelling of cold, nutrient-rich deep water. Fish converge on this food source, which supports the highly successful fishing industry in the Gulf of Tehuantepec. External links Atmospheric dynamics Geography of Central America Mountains
Mountain jet
[ "Chemistry" ]
720
[ "Atmospheric dynamics", "Fluid dynamics" ]
60,710
https://en.wikipedia.org/wiki/Ferrocene
Ferrocene is an organometallic compound with the formula Fe(C5H5)2. The molecule is a complex consisting of two cyclopentadienyl rings sandwiching a central iron atom. It is an orange solid with a camphor-like odor that sublimes above room temperature, and is soluble in most organic solvents. It is remarkable for its stability: it is unaffected by air, water, and strong bases, and can be heated to 400 °C without decomposition. In oxidizing conditions it can reversibly react with strong acids to form the ferrocenium cation. Ferrocene and the ferrocenium cation are sometimes abbreviated as Fc and Fc+ respectively. The first reported synthesis of ferrocene was in 1951. Its unusual stability puzzled chemists, and required the development of new theory to explain its formation and bonding. The discovery of ferrocene and its many analogues, known as metallocenes, sparked excitement and led to a rapid growth in the discipline of organometallic chemistry. Geoffrey Wilkinson and Ernst Otto Fischer, both of whom worked on elucidating the structure of ferrocene, later shared the 1973 Nobel Prize in Chemistry for their work on organometallic sandwich compounds. Ferrocene itself has no large-scale applications, but has found more niche uses in catalysis, as a fuel additive, and as a tool in undergraduate education. History Discovery Ferrocene was discovered by accident three times. The first known synthesis may have been made in the late 1940s by unknown researchers at Union Carbide, who tried to pass hot cyclopentadiene vapor through an iron pipe. The vapor reacted with the pipe wall, creating a "yellow sludge" that clogged the pipe. Years later, a sample of the sludge that had been saved was obtained and analyzed by Eugene O. Brimm, shortly after he read Kealy and Pauson's article, and was found to consist of ferrocene. The second time was around 1950, when Samuel A. Miller, John A. Tebboth, and John F. Tremaine, researchers at British Oxygen, were attempting to synthesize amines from hydrocarbons and nitrogen in a modification of the Haber process. When they tried to react cyclopentadiene with nitrogen at 300 °C, at atmospheric pressure, they were disappointed to see the hydrocarbon react with some source of iron, yielding ferrocene. While they too observed its remarkable stability, they put the observation aside and did not publish it until after Pauson reported his findings. Kealy and Pauson were later provided with a sample by Miller et al., who confirmed that the products were the same compound. In 1951, Peter L. Pauson and Thomas J. Kealy at Duquesne University attempted to prepare fulvalene by oxidative dimerization of cyclopentadiene. To that end, they reacted the Grignard compound cyclopentadienyl magnesium bromide in diethyl ether with ferric chloride as an oxidizer. However, instead of the expected fulvalene, they obtained a light orange powder of "remarkable stability" with the formula Fe(C5H5)2. Determining the structure Pauson and Kealy conjectured that the compound had two cyclopentadienyl groups, each with a single covalent bond from the saturated carbon atom to the iron atom. However, that structure was inconsistent with then-existing bonding models and did not explain the unexpected stability of the compound, and chemists struggled to find the correct structure. The structure was deduced and reported independently by three groups in 1952. Robert Burns Woodward, Geoffrey Wilkinson, et al. deduced the structure after observing that the compound was diamagnetic and nonpolar.
A few months later they described its reactions as being typical of aromatic compounds such as benzene. The name ferrocene was coined by Mark Whiting, a postdoc with Woodward.. Ernst Otto Fischer and Wolfgang Pfab also noted ferrocene's diamagneticity and high symmetry. They also synthesize nickelocene and cobaltocene and confirmed they had the same structure. Fischer described the structure as Doppelkegelstruktur ("double-cone structure"), although the term "sandwich" came to be preferred by British and American chemists. Philip Frank Eiland and Raymond Pepinsky confirmed the structure through X-ray crystallography and later by NMR spectroscopy. The "sandwich" structure of ferrocene was shockingly novel and led to intensive theoretical studies. Application of molecular orbital theory with the assumption of a Fe2+ centre between two cyclopentadienide anions resulted in the successful Dewar–Chatt–Duncanson model, allowing correct prediction of the geometry of the molecule as well as explaining its remarkable stability. Impact The discovery of ferrocene was considered so significant that Wilkinson and Fischer shared the 1973 Nobel Prize in Chemistry "for their pioneering work, performed independently, on the chemistry of the organometallic, called sandwich compounds". Structure and bonding Mössbauer spectroscopy indicates that the iron center in ferrocene should be assigned the +2 oxidation state. Each cyclopentadienyl (Cp) ring should then be allocated a single negative charge. Thus ferrocene could be described as iron(II) bis(cyclopentadienide), . Each ring has six π-electrons, which makes them aromatic according to Hückel's rule. These π-electrons are then shared with the metal via covalent bonding. Since Fe2+ has six d-electrons, the complex attains an 18-electron configuration, which accounts for its stability. In modern notation, this sandwich structural model of the ferrocene molecule is denoted as , where η denotes hapticity, the number of atoms through which each ring binds. The carbon–carbon bond distances around each five-membered ring are all 1.40 Å, and all Fe–C bond distances are 2.04 Å. From room temperature down to 164 K, X-ray crystallography yields the monoclinic space group; the cyclopentadienide rings are a staggered conformation, resulting in a centrosymmetric molecule, with symmetry group D5d. However, below 110 K, ferrocene crystallizes in an orthorhombic crystal lattice in which the Cp rings are ordered and eclipsed, so that the molecule has symmetry group D5h. In the gas phase, electron diffraction and computational studies show that the Cp rings are eclipsed. While ferrocene has no permanent dipole moment at room temperature, between 172.8 and 163.5 K the molecule exhibits an "incommensurate modulation", breaking the D5 symmetry and acquiring an electric dipole. The Cp rings rotate with a low barrier about the Cp(centroid)–Fe–Cp(centroid) axis, as observed by measurements on substituted derivatives of ferrocene using 1H and 13C nuclear magnetic resonance spectroscopy. For example, methylferrocene (CH3C5H4FeC5H5) exhibits a singlet for the C5H5 ring. In solution, and at room temperature, eclipsed D5h ferrocene was determined to dominate over the staggered D5d conformer, as suggested by both Fourier-transform infrared spectroscopy and DFT calculations. Synthesis Early methods The first reported syntheses of ferrocene were nearly simultaneous. Pauson and Kealy synthesised ferrocene using iron(III) chloride and cyclopentadienyl magnesium bromide. 
A redox reaction produces iron(II) chloride. The formation of fulvalene, the intended outcome does not occur. Another early synthesis of ferrocene was by Miller et al., who treated metallic iron with gaseous cyclopentadiene at elevated temperature. An approach using iron pentacarbonyl was also reported. Fe(CO)5 + 2 C5H6 → Fe(C5H5)2 + 5 CO + H2 Via alkali cyclopentadienide More efficient preparative methods are generally a modification of the original transmetalation sequence using either commercially available sodium cyclopentadienide or freshly cracked cyclopentadiene deprotonated with potassium hydroxide and reacted with anhydrous iron(II) chloride in ethereal solvents. Modern modifications of Pauson and Kealy's original Grignard approach are known: Using sodium cyclopentadienide:       2 NaC5H5   +   FeCl2   →   Fe(C5H5)2   +   2 NaCl Using freshly-cracked cyclopentadiene:     FeCl2·4H2O   +   2 C5H6   +   2 KOH   →   Fe(C5H5)2   +   2 KCl   +   6 H2O Using an iron(II) salt with a Grignard reagent:     2 C5H5MgBr   +   FeCl2   →   Fe(C5H5)2   +   2 MgBrCl Even some amine bases (such as diethylamine) can be used for the deprotonation, though the reaction proceeds more slowly than when using stronger bases: 2 C5H6   +   2 (CH3CH2)2NH   +   FeCl2   →   Fe(C5H5)2   +   2 (CH3CH2)2NH2Cl Direct transmetalation can also be used to prepare ferrocene from some other metallocenes, such as manganocene: FeCl2   +   Mn(C5H5)2   →   MnCl2   +   Fe(C5H5)2 Properties Ferrocene is an air-stable orange solid with a camphor-like odor. As expected for a symmetric, uncharged species, ferrocene is soluble in normal organic solvents, such as benzene, but is insoluble in water. It is stable to temperatures as high as 400 °C. Ferrocene readily sublimes, especially upon heating in a vacuum. Its vapor pressure is about 1 Pa at 25 °C, 10 Pa at 50 °C, 100 Pa at 80 °C, 1000 Pa at 116 °C, and 10,000 Pa (nearly 0.1 atm) at 162 °C. Reactions With electrophiles Ferrocene undergoes many reactions characteristic of aromatic compounds, enabling the preparation of substituted derivatives. A common undergraduate experiment is the Friedel–Crafts reaction of ferrocene with acetic anhydride (or acetyl chloride) in the presence of phosphoric acid as a catalyst. Under conditions for a Mannich reaction, ferrocene gives N,N-dimethylaminomethylferrocene. Ferrocene can itself be oxidized to the ferrocenium cation (Fc+); the ferrocene/ferrocenium couple is often used as a reference in electrochemistry. It is an aromatic substance and undergoes substitution reactions rather than addition reactions on the cyclopentadienyl ligands. For example, Friedel-Crafts acylation of ferrocene with acetic anhydride yields acetylferrocene just as acylation of benzene yields acetophenone under similar conditions. Vilsmeier-Haack reaction (formylation) using formylanilide and phosphorus oxychloride gives ferrocenecarboxaldehyde. Diformylation does not occur readily, showing the electronic communication between the two rings. Protonation of ferrocene allows isolation of [Cp2FeH]PF6. In the presence of aluminium chloride, Me2NPCl2 and ferrocene react to give ferrocenyl dichlorophosphine, whereas treatment with phenyldichlorophosphine under similar conditions forms P,P-diferrocenyl-P-phenyl phosphine. Ferrocene reacts with P4S10 forms a diferrocenyl-dithiadiphosphetane disulfide. Lithiation Ferrocene reacts with butyllithium to give 1,1′-dilithioferrocene, which is a versatile nucleophile. 
tert-Butyllithium, by contrast, produces monolithioferrocene. Redox chemistry Ferrocene undergoes a one-electron oxidation at around 0.4 V versus a saturated calomel electrode (SCE), becoming ferrocenium. This reversible oxidation has been used as a standard in electrochemistry, with Fc+/Fc = 0.64 V versus the standard hydrogen electrode, although other values have been reported. Ferrocenium tetrafluoroborate is a common reagent. The remarkably reversible oxidation-reduction behaviour has been extensively used to control electron-transfer processes in electrochemical and photochemical systems. Substituents on the cyclopentadienyl ligands alter the redox potential in the expected way: electron-withdrawing groups such as a carboxylic acid shift the potential in the anodic direction (i.e., make it more positive), whereas electron-releasing groups such as methyl groups shift the potential in the cathodic direction (more negative). Thus, decamethylferrocene is much more easily oxidised than ferrocene and can even be oxidised to the corresponding dication. Ferrocene is often used as an internal standard for calibrating redox potentials in non-aqueous electrochemistry. Stereochemistry of substituted ferrocenes Disubstituted ferrocenes can exist as either 1,2-, 1,3- or 1,1′-isomers, none of which are interconvertible. Ferrocenes that are asymmetrically disubstituted on one ring are chiral – for example [CpFe(EtC5H3Me)]. This planar chirality arises despite no single atom being a stereogenic centre. A substituted ferrocene of this kind (a 4-(dimethylamino)pyridine derivative) has been shown to be effective when used for the kinetic resolution of racemic secondary alcohols. Several approaches have been developed to asymmetrically 1,1′-functionalise the ferrocene. Applications of ferrocene and its derivatives Ferrocene and its numerous derivatives have no large-scale applications, but have many niche uses that exploit the unusual structure (ligand scaffolds, pharmaceutical candidates), robustness (anti-knock formulations, precursors to materials), and redox properties (reagents and redox standards). Ligand scaffolds Chiral ferrocenyl phosphines are employed as ligands for transition-metal catalyzed reactions. Some of them have found industrial applications in the synthesis of pharmaceuticals and agrochemicals. For example, the diphosphine 1,1′-bis(diphenylphosphino)ferrocene (dppf) is a valued ligand for palladium-coupling reactions, and the Josiphos ligands are useful for hydrogenation catalysis. They are named after the technician who made the first one, Josi Puleo. Fuel additives Ferrocene and its derivatives are antiknock agents used in the fuel for petrol engines. They are safer than previously used tetraethyllead. Petrol additive solutions containing ferrocene can be added to unleaded petrol to enable its use in vintage cars designed to run on leaded petrol. The iron-containing deposits formed from ferrocene can form a conductive coating on spark plug surfaces. Ferrocene polyglycol copolymers, prepared by effecting a polycondensation reaction between a ferrocene derivative and a substituted dihydroxy alcohol, have promise as a component of rocket propellants. These copolymers provide rocket propellants with heat stability, serving as a propellant binder and controlling propellant burn rate. Ferrocene has been found to be effective at reducing smoke and sulfur trioxide produced when burning coal.
The addition by any practical means, impregnating the coal or adding ferrocene to the combustion chamber, can significantly reduce the amount of these undesirable byproducts, even with a small amount of the metal cyclopentadienyl compound. Pharmaceuticals Ferrocene derivatives have been investigated as drugs, with one compound approved for use in the USSR in the 1970s as an iron supplement, though it is no longer marketed today. Only one drug has entered clinical trials in recent years, Ferroquine (7-chloro-N-(2-((dimethylamino)methyl)ferrocenyl)quinolin-4-amine), an antimalarial, which has reached Phase IIb trials. Ferrocene-containing polymer-based drug delivery systems have been investigated. The anticancer activity of ferrocene derivatives was first investigated in the late 1970s, when derivatives bearing amine or amide groups were tested against lymphocytic leukemia. Some ferrocenium salts exhibit anticancer activity, but no compound has seen evaluation in the clinic. Ferrocene derivatives have strong inhibitory activity against human lung cancer cell line A549, colorectal cancer cell line HCT116, and breast cancer cell line MCF-7. An experimental drug was reported which is a ferrocenyl version of tamoxifen. The idea is that the tamoxifen will bind to the estrogen binding sites, resulting in cytotoxicity. Ferrocifens are exploited for cancer applications by a French biotech, Feroscan, founded by Pr. Gerard Jaouen. Solid rocket propellant Ferrocene and related derivatives are used as powerful burn rate catalysts in ammonium perchlorate composite propellant. Derivatives and variations Ferrocene analogues can be prepared with variants of cyclopentadienyl. For example, bisindenyliron and bisfluorenyliron. Carbon atoms can be replaced by heteroatoms as illustrated by Fe(η5-C5Me5)(η5-P5) and Fe(η5-C5H5)(η5-C4H4N) ("azaferrocene"). Azaferrocene arises from decarbonylation of Fe(η5-C5H5)(CO)2(η1-pyrrole) in cyclohexane. This compound on boiling under reflux in benzene is converted to ferrocene. Because of the ease of substitution, many structurally unusual ferrocene derivatives have been prepared. For example, the penta(ferrocenyl)cyclopentadienyl ligand, features a cyclopentadienyl anion derivatized with five ferrocene substituents. In hexaferrocenylbenzene, C6[(η5-C5H4)Fe(η5-C5H5)]6, all six positions on a benzene molecule have ferrocenyl substituents (R). X-ray diffraction analysis of this compound confirms that the cyclopentadienyl ligands are not co-planar with the benzene core but have alternating dihedral angles of +30° and −80°. Due to steric crowding the ferrocenyls are slightly bent with angles of 177° and have elongated C-Fe bonds. The quaternary cyclopentadienyl carbon atoms are also pyramidalized. Also, the benzene core has a chair conformation with dihedral angles of 14° and displays bond length alternation between 142.7 pm and 141.1 pm, both indications of steric crowding of the substituents. The synthesis of hexaferrocenylbenzene has been reported using Negishi coupling of hexaiodidobenzene and diferrocenylzinc, using tris(dibenzylideneacetone)dipalladium(0) as catalyst, in tetrahydrofuran: The yield is only 4%, which is further evidence consistent with substantial steric crowding around the arene core. Materials chemistry Ferrocene, a precursor to iron nanoparticles, can be used as a catalyst for the production of carbon nanotubes. 
Vinylferrocene can be converted to a polymer (polyvinylferrocene, PVFc), a ferrocenyl version of polystyrene (the phenyl groups are replaced with ferrocenyl groups). Another polyferrocene which can be formed is poly(2-(methacryloyloxy)ethyl ferrocenecarboxylate), PFcMA. In addition to using organic polymer backbones, these pendant ferrocene units have been attached to inorganic backbones such as polysiloxanes, polyphosphazenes, and polyphosphinoboranes, (–PH(R)–BH2–)n, and the resulting materials exhibit unusual physical and electronic properties relating to the ferrocene/ferrocenium redox couple. Both PVFc and PFcMA have been tethered onto silica wafers and the wettability measured when the polymer chains are uncharged and when the ferrocene moieties are oxidised to produce positively charged groups. The contact angle with water on the PFcMA-coated wafers was 70° smaller following oxidation, while in the case of PVFc the decrease was 30°, and the switching of wettability is reversible. In the PFcMA case, lengthening the chains, and hence introducing more ferrocene groups, produces significantly larger reductions in the contact angle upon oxidation. See also Josiphos ligands References External links Ferrocene at The Periodic Table of Videos (University of Nottingham) NIOSH Pocket Guide to Chemical Hazards (Centers for Disease Control and Prevention) Antiknock agents Sandwich compounds Cyclopentadienyl complexes Substances discovered in the 1950s
Ferrocene
[ "Chemistry" ]
4,617
[ "Organometallic chemistry", "Cyclopentadienyl complexes", "Sandwich compounds" ]
60,744
https://en.wikipedia.org/wiki/Cubic%20zirconia
Cubic zirconia (abbreviated CZ) is the cubic crystalline form of zirconium dioxide (ZrO2). The synthesized material is hard and usually colorless, but may be made in a variety of different colors. It should not be confused with zircon, which is a zirconium silicate (ZrSiO4). It is sometimes erroneously called cubic zirconium. Because of its low cost, durability, and close visual likeness to diamond, synthetic cubic zirconia has remained the most gemologically and economically important competitor for diamonds since commercial production began in 1976. Its main competitor as a synthetic gemstone is a more recently cultivated material, synthetic moissanite. Technical aspects Cubic zirconia is crystallographically isometric, an important attribute of a would-be diamond simulant. During synthesis zirconium oxide naturally forms monoclinic crystals, which are stable under normal atmospheric conditions. A stabilizer is required for cubic crystals (taking on the fluorite structure) to form and remain stable at ordinary temperatures; typically this is either yttrium or calcium oxide, the amount of stabilizer used depending on the many recipes of individual manufacturers. Therefore, the physical and optical properties of synthesized CZ vary, all values being ranges. It is a dense substance, with a density between 5.6 and 6.0 g/cm3—about 1.65 times that of diamond. Cubic zirconia is relatively hard, 8–8.5 on the Mohs scale—slightly harder than most semi-precious natural gems. Its refractive index is high at 2.15–2.18 (compared to 2.42 for diamonds) and its luster is adamantine. Its dispersion is very high at 0.058–0.066, exceeding that of diamond (0.044). Cubic zirconia has no cleavage and exhibits a conchoidal fracture. Because of its high hardness, it is generally considered brittle. Under shortwave UV cubic zirconia typically fluoresces a yellow, greenish yellow or "beige". Under longwave UV the effect is greatly diminished, with a whitish glow sometimes being seen. Colored stones may show a strong, complex rare earth absorption spectrum. History Discovered in 1892, the yellowish monoclinic mineral baddeleyite is a natural form of zirconium oxide. The high melting point of zirconia (2750 °C or 4976 °F) hinders controlled growth of single crystals. However, stabilization of cubic zirconium oxide had been realized early on, with the synthetic product stabilized zirconia introduced in 1929. Although cubic, it was in the form of a polycrystalline ceramic: it was used as a refractory material, highly resistant to chemical and thermal attack (up to 2540 °C or 4604 °F). In 1937, German mineralogists M. V. Stackelberg and K. Chudoba discovered naturally occurring cubic zirconia in the form of microscopic grains included in metamict zircon. This was thought to be a byproduct of the metamictization process, but the two scientists did not think the mineral important enough to give it a formal name. The discovery was confirmed through X-ray diffraction, proving the existence of a natural counterpart to the synthetic product. As with the majority of grown diamond substitutes, the idea of producing single-crystal cubic zirconia arose in the minds of scientists seeking a new and versatile material for use in lasers and other optical applications. Its production eventually exceeded that of earlier synthetics, such as synthetic strontium titanate, synthetic rutile, YAG (yttrium aluminium garnet) and GGG (gadolinium gallium garnet).
Some of the earliest research into controlled single-crystal growth of cubic zirconia occurred in 1960s France, much work being done by Y. Roulin and R. Collongues. This technique involved molten zirconia being contained within a thin shell of still-solid zirconia, with crystal growth from the melt. The process was named cold crucible, an allusion to the system of water cooling used. Though promising, these attempts yielded only small crystals. Later, Soviet scientists under V. V. Osiko in the Laser Equipment Laboratory at the Lebedev Physical Institute in Moscow perfected the technique, which was then named skull crucible (an allusion either to the shape of the water-cooled container or to the form of crystals sometimes grown). They named the jewel Fianit after the institute's name FIAN (Physical Institute of the Academy of Science), but the name was not used outside of the USSR. This was known at the time as the Institute of Physics at the Russian Academy of Science. Their breakthrough was published in 1973, and commercial production began in 1976. In 1977, cubic zirconia began to be mass-produced in the jewelry marketplace by the Ceres Corporation, with crystals stabilized with 94% yttria. Other major producers as of 1993 include Taiwan Crystal Company Ltd, Swarovski and ICT inc. By 1980, annual global production had reached 60 million carats (12 tonnes) and continued to increase, with production reaching around 400 tonnes per year in 1998. Because the natural form of cubic zirconia is so rare, all cubic zirconia used in jewelry has been synthesized, one method of which was patented by Josep F. Wenckus & Co. in 1997. Synthesis The skull-melting method refined by Josep F. Wenckus and coworkers in 1997 remains the industry standard. This is largely due to the process allowing for temperatures of over 3000  °C to be achieved, lack of contact between crucible and material as well as the freedom to choose any gas atmosphere. Primary downsides to this method include the inability to predict the size of the crystals produced and it is impossible to control the crystallization process through temperature changes. The apparatus used in this process consists of a cup-shaped crucible surrounded by radio frequency-activated (RF-activated) copper coils and a water-cooling system. Zirconium dioxide thoroughly mixed with a stabilizer (normally 10% yttrium oxide) is fed into a cold crucible. Metallic chips of either zirconium or the stabilizer are introduced into the powder mix in a compact pile manner. The RF generator is switched on and the metallic chips quickly start heating up and readily oxidize into more zirconia. Consequently, the surrounding powder heats up by thermal conduction, begins melting and, in turn, becomes electroconductive, and thus it begins to heat up via the RF generator as well. This continues until the entire product is molten. Due to the cooling system surrounding the crucible, a thin shell of sintered solid material is formed. This causes the molten zirconia to remain contained within its own powder which prevents it from being contaminated from the crucible and reduces heat loss. The melt is left at high temperatures for some hours to ensure homogeneity and ensure that all impurities have evaporated. Finally, the entire crucible is slowly removed from the RF coils to reduce the heating and let it slowly cool down (from bottom to top). 
The rate at which the crucible is removed from the RF coils is chosen as a function of the stability of crystallization dictated by the phase transition diagram. This causes the crystallization process to begin, and useful crystals begin to form. Once the crucible has been completely cooled to room temperature, the resulting crystals are multiple elongated-crystalline blocks. This shape is dictated by a concept known as crystal degeneration according to Tiller. The size and diameter of the obtained crystals are a function of the cross-sectional area of the crucible, volume of the melt and composition of the melt. The diameter of the crystals is heavily influenced by the concentration of Y2O3 stabilizer. Phase relations in zirconia solid solutions As seen on the phase diagram, the cubic phase will crystallize first as the solution is cooled down no matter the concentration of Y2O3. If the concentration of Y2O3 is not high enough the cubic structure will start to break down into the tetragonal state which will then break down into a monoclinic phase. If the concentration of Y2O3 is between 2.5-5% the resulting product will be PSZ (partially stabilized zirconia) while monophasic cubic crystals will form from around 8-40%. Below 14%, crystals grown at low growth rates tend to be opaque, indicating partial phase separation in the solid solution (likely due to diffusion in the crystals remaining in the high temperature region for a longer time). Above this threshold, crystals tend to remain clear at reasonable growth rates and under good annealing conditions. Doping Because of cubic zirconia's isomorphic capacity, it can be doped with several elements to change the color of the crystal. A list of specific dopants and colors produced by their addition can be seen below. Primary growth defects The vast majority of YCZ (yttrium bearing cubic zirconia) crystals are clear with high optical perfection and with gradients of the refractive index lower than . However, some samples contain defects; the most characteristic and common ones are listed below. Growth striations: These are located perpendicular to the growth direction of the crystal and are caused mainly by either fluctuations in the crystal growth rate or the non-congruent nature of liquid-solid transition, thus leading to non-uniform distribution of Y2O3. Light-scattering phase inclusions: Caused by contaminants in the crystal (primarily precipitates of silicates or aluminates of yttrium), typically of magnitude 0.03-10 μm. Mechanical stresses: Typically caused by the high temperature gradients of the growth and cooling processes, causing the crystal to form with internal mechanical stresses acting on it. This causes refractive index values of up to , although the effect of this can be reduced by annealing at 2100 °C followed by a slow enough cooling process. Dislocations: Similar to mechanical stresses, dislocations can be greatly reduced by annealing. Uses outside jewelry Due to its optical properties yttrium cubic zirconia (YCZ) has been used for windows, lenses, prisms, filters and laser elements. Particularly in the chemical industry it is used as window material for the monitoring of corrosive liquids due to its chemical stability and mechanical toughness. YCZ has also been used as a substrate for semiconductor and superconductor films in similar industries.
Mechanical properties of partially stabilized zirconia (high hardness and shock resistance, low friction coefficient, high chemical and thermal resistance as well as high wear and tear resistance) allow it to be used as a very particular building material, especially in the bio-engineering industry: It has been used to make reliable super-sharp medical scalpels for doctors that are compatible with bio-tissues and contain an edge much smoother than one made of steel. Innovations In recent years manufacturers have sought ways of distinguishing their product by supposedly "improving" cubic zirconia. Coating finished cubic zirconia with a film of diamond-like carbon (DLC) is one such innovation, a process using chemical vapor deposition. The resulting material is purportedly harder, more lustrous and more like diamond overall. The coating is thought to quench the excess fire of cubic zirconia, while improving its refractive index, thus making it appear more like diamond. Additionally, because of the high percentage of diamond bonds in the amorphous diamond coating, the finished simulant will show a positive diamond signature in Raman spectra. Another technique first applied to quartz and topaz has also been adapted to cubic zirconia: An iridescent effect created by vacuum-sputtering onto finished stones an extremely thin layer of a precious metal (typically gold), or certain metal oxides, metal nitrides, or other coatings. This material is marketed as "mystic" by many dealers. Unlike diamond-like carbon and other hard synthetic ceramic coatings, the iridescent effect made with precious metal coatings is not durable, due to their extremely low hardness and poor abrasion wear properties, compared to the remarkably durable cubic zirconia substrate. Cubic zirconia vis-à-vis diamond Key features of cubic zirconia distinguish it from diamond: Hardness: cubic zirconia has a rating of approximately 8 on Mohs hardness scale vs. a rating of 10 for diamond. This may cause dull and rounded edges in CZ facets; the edges of diamond facets are much sharper by comparison. Furthermore, diamond rarely shows polish marks, and those which are apparent are oriented in different directions on adjoining facets, whereas CZ shows marks in the same direction of the polish throughout. The Specific gravity or density of cubic zirconia is approximately 1.7 times that of diamond. This allows gemologists to differentiate the two substances by weight alone. This property can also be exploited, for example, by dropping the stones in a heavy liquid and comparing their relative rates of descent: diamond will sink more slowly than CZ. Refractive index: cubic zirconia has a refractive index of 2.15–2.18, compared to a diamond's 2.42. This has led to the development of other immersion techniques for identification. In these methods, stones with refractive indices higher than that of the liquid used will have dark borders around the girdle and light facet edges whereas those with indices lower than the liquid will have light borders around the girdle and dark facet junctions. Dispersion is very high at 0.058–0.066, exceeding a diamond's 0.044. Cut: Cubic zirconia gemstones can be cut differently than diamonds: The facet edges can be rounded or "smooth". Color: only the rarest of diamonds are truly colorless, most having a tinge of yellow or brown to some extent. A cubic zirconia is often entirely colorless: equivalent to a perfect "D" on diamond's color grading scale. 
That said, desirable colors of cubic zirconia can be produced including near colorless, yellow, pink, purple, green, and even multicolored. Thermal conductivity: Cubic zirconia is a thermal insulator whereas diamond is the most powerful thermal conductor. This provides the basis for Wenckus’ canonical identification method, the industry standard. Effects on the diamond market Cubic zirconia, as a diamond simulant and jewel competitor, can potentially reduce demand for conflict diamonds, and impact the controversy surrounding the rarity and value of diamonds. Regarding value, the paradigm that diamonds are costly due to their rarity and visual beauty has been replaced by an artificial rarity attributed to price-fixing practices of De Beers Company which held a monopoly on the market from the 1870s to early 2000s. The company pleaded guilty to these charges in an Ohio court in 13 July 2004. However, while De Beers has less market power, the price of diamonds continues to increase due to the demand in emerging markets such as India and China. The emergence of artificial stones such as cubic zirconia with optic properties similar to diamonds, could be an alternative for jewelry buyers given their lower price and noncontroversial history. An issue closely related to monopoly is the emergence of conflict diamonds. The Kimberley Process (KP) was established to deter the illicit trade of diamonds that fund civil wars in Angola and Sierra Leone. However, the KP is not as effective in decreasing the number of conflict diamonds reaching the European and American markets. Its definition does not include forced labor conditions or human right violations. A 2015 study from the Enough Project, showed that groups in the Central African Republic have reaped between US$3 million and US$6 million annually from conflict diamonds. UN reports show that more than US$24 million in conflict diamonds have been smuggled since the establishment of the KP. Diamond simulants have become an alternative to boycott the funding of unethical practices. Terms such as “Eco-friendly Jewelry” define them as conflict free origin and environmentally sustainable. However, concerns from mining countries such as the Democratic Republic of Congo are that a boycott in purchases of diamonds would only worsen their economy. According to the Ministry of Mines in Congo, 10% of its population relies on the income from diamonds. Therefore, cubic zirconia are a short term alternative to reduce conflict but a long term solution would be to establish a more rigorous system of identifying the origin of these stones. See also Diamond Diamond simulant Shelby Gem Factory Synthetic diamond Yttria-stabilized zirconia References Further reading Crystals Diamond simulants Gemstones Refractory materials Synthetic minerals Zirconium dioxide Fluorite crystal structure fr:Zircone
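The density-based separation of the two stones described in the comparison above can be made concrete. Taking a typical diamond density of about 3.5 g/cm3 (a commonly cited value assumed here rather than stated in the article) together with the 5.6–6.0 g/cm3 range quoted earlier for cubic zirconia gives

$$\frac{\rho_{\mathrm{CZ}}}{\rho_{\mathrm{diamond}}} \approx \frac{5.6\ \text{to}\ 6.0}{3.5} \approx 1.6\ \text{to}\ 1.7,$$

so a loose stone that weighs roughly two-thirds more than a diamond of the same dimensions is readily flagged as cubic zirconia, and the denser cubic zirconia also descends faster than diamond in a heavy liquid.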
Cubic zirconia
[ "Physics", "Chemistry", "Materials_science" ]
3,533
[ "Refractory materials", "Synthetic materials", "Materials", "Crystallography", "Crystals", "Gemstones", "Synthetic minerals", "Matter" ]
60,879
https://en.wikipedia.org/wiki/Electroluminescence
Electroluminescence (EL) is an optical and electrical phenomenon, in which a material emits light in response to the passage of an electric current or to a strong electric field. This is distinct from black body light emission resulting from heat (incandescence), chemical reactions (chemiluminescence), reactions in a liquid (electrochemiluminescence), sound (sonoluminescence), or other mechanical action (mechanoluminescence), or organic electroluminescence. Mechanism Electroluminescence is the result of radiative recombination of electrons and holes in a material, usually a semiconductor. The excited electrons release their energy as photons – light. Prior to recombination, electrons and holes may be separated either by doping the material to form a p-n junction (in semiconductor electroluminescent devices such as light-emitting diodes) or through excitation by impact of high-energy electrons accelerated by a strong electric field (as with the phosphors in electroluminescent displays). It has been recently shown that as a solar cell improves its light-to-electricity efficiency (improved open-circuit voltage), it will also improve its electricity-to-light (EL) efficiency. Characteristics Electroluminescent technologies have low power consumption compared to competing lighting technologies, such as neon or fluorescent lamps. This, together with the thinness of the material, has made EL technology valuable to the advertising industry. Relevant advertising applications include electroluminescent billboards and signs. EL manufacturers can control precisely which areas of an electroluminescent sheet illuminate, and when. This has given advertisers the ability to create more dynamic advertising that is still compatible with traditional advertising spaces. An EL film is a so-called Lambertian radiator: unlike with neon lamps, filament lamps, or LEDs, the brightness of the surface appears the same from all angles of view; electroluminescent light is not directional. The light emitted from the surface is perfectly homogeneous and is well-perceived by the eye. EL film produces single-frequency (monochromatic) light that has a very narrow bandwidth, is uniform and visible from a great distance. In principle, EL lamps can be made in any color. However, the commonly used greenish color closely matches the peak sensitivity of human vision, producing the greatest apparent light output for the least electrical power input. Unlike neon and fluorescent lamps, EL lamps are not negative resistance devices so no extra circuitry is needed to regulate the amount of current flowing through them. A new technology now being used is based on multispectral phosphors that emit light from 600 to 400nm depending on the drive frequency; this is similar to the color-changing effect seen with aqua EL sheet but on a larger scale. Examples of electroluminescent materials Electroluminescent devices are fabricated using either organic or inorganic electroluminescent materials. The active materials are generally semiconductors of wide enough bandwidth to allow the exit of the light. The most typical inorganic thin-film EL (TFEL) is ZnS:Mn with yellow-orange emission. Examples of the range of EL material include: Powdered zinc sulfide doped with copper (producing greenish light) or silver (producing bright blue light) Thin-film zinc sulfide doped with manganese (producing orange-red color) Naturally blue diamond, which includes a trace of boron that acts as a dopant. 
Semiconductors containing Group III and Group V elements, such as indium phosphide (InP), gallium arsenide (GaAs), and gallium nitride (GaN) (Light-emitting diodes). Certain organic semiconductors, such as [Ru(bpy)3]2+(PF6−)2, where bpy is 2,2'-bipyridine Terbium oxide (yellow-green light) Practical implementations The most common electroluminescent (EL) devices are composed of either powder (primarily used in lighting applications) or thin films (for information displays.) Light-emitting capacitor (LEC) Light-emitting capacitor, or LEC, is a term used since at least 1961 to describe electroluminescent panels. General Electric has patents dating to 1938 on flat electroluminescent panels that are still made as night lights and backlights for instrument panel displays. Electroluminescent panels are a capacitor where the dielectric between the outside plates is a phosphor that gives off photons when the capacitor is charged. By making one of the contacts transparent, the large area exposed emits light. Electroluminescent automotive instrument panel backlighting, with each gauge pointer also an individual light source, entered production on 1960 Chrysler and Imperial passenger cars, and was continued successfully on several Chrysler vehicles through 1967 and marketed as "Panelescent Lighting". Night lights The Sylvania Lighting Division in Salem and Danvers, Massachusetts, produced and marketed an EL night light, under the trade name Panelescent at roughly the same time that the Chrysler instrument panels entered production. These lamps have proven extremely reliable, with some samples known to be still functional after nearly 50 years of continuous operation. Later in the 1960s, Sylvania's Electronic Systems Division in Needham, Massachusetts developed and manufactured several instruments for the Apollo Lunar Module and Command Module using electroluminescent display panels manufactured by the Electronic Tube Division of Sylvania at Emporium, Pennsylvania. Raytheon in Sudbury, Massachusetts manufactured the Apollo Guidance Computer, which used a Sylvania electroluminescent display panel as part of its display-keyboard interface (DSKY). Display backlighting Powder phosphor-based electroluminescent panels are frequently used as backlights for liquid crystal displays. They readily provide gentle, even illumination for the entire display while consuming relatively little electric power. This makes them convenient for battery-operated devices such as pagers, wristwatches, and computer-controlled thermostats, and their gentle green-cyan glow is common in the technological world. EL backlights require relatively high voltage (between 60 and 600 volts). For battery-operated devices, this voltage must be generated by a boost converter circuit within the device. This converter often makes a faintly audible whine or siren sound while the backlight is activated. Line-voltage-operated devices may be activated directly from the power line; some electroluminescent nightlights operate in this fashion. Brightness per unit area increases with increased voltage and frequency. Thin-film phosphor electroluminescence was first commercialized during the 1980s by Sharp Corporation in Japan, Finlux (Oy Lohja Ab) in Finland, and Planar Systems in the US. In these devices, bright, long-life light emission is achieved in thin-film yellow-emitting manganese-doped zinc sulfide material. 
Displays using this technology were manufactured for medical and vehicle applications where ruggedness and wide viewing angles were crucial, and liquid crystal displays were not well developed. In 1992, Timex introduced its Indiglo EL display on some watches. Recently, blue-, red-, and green-emitting thin film electroluminescent materials that offer the potential for long life and full-color electroluminescent displays have been developed. The EL material must be enclosed between two electrodes and at least one electrode must be transparent to allow the escape of the produced light. Glass coated with indium tin oxide is commonly used as the front (transparent) electrode, while the back electrode is coated with reflective metal. Additionally, other transparent conducting materials, such as carbon nanotube coatings or PEDOT can be used as the front electrode. The display applications are primarily passive (i.e., voltages are driven from the edge of the display cf. driven from a transistor on the display). Similar to LCD trends, there have also been Active Matrix EL (AMEL) displays demonstrated, where the circuitry is added to prolong voltages at each pixel. The solid-state nature of TFEL allows for a very rugged and high-resolution display fabricated even on silicon substrates. AMEL displays of 1280×1024 at over 1000 lines per inch (LPI) have been demonstrated by a consortium including Planar Systems. Thick-film dielectric electroluminescent technology Thick-film dielectric electroluminescent technology (TDEL) is a phosphor-based flat panel display technology developed by Canadian company iFire Technology Corp. TDEL is based on inorganic electroluminescent (IEL) technology that combines both thick-and thin-film processes. The TDEL structure is made with glass or other substrates, consisting of a thick-film dielectric layer and a thin-film phosphor layer sandwiched between two sets of electrodes to create a matrix of pixels. Inorganic phosphors within this matrix emit light in the presence of an alternating electric field. Color By Blue Color By Blue (CBB) was developed in 2003. The Color By Blue process achieves higher luminance and better performance than the previous triple pattern process, with increased contrast, grayscale rendition, and color uniformity across the panel. Color By Blue is based on the physics of photoluminescence. High luminance inorganic blue phosphor is used in combination with specialized color conversion materials, which absorb the blue light and re-emit red or green light, to generate the other colors. New applications Electroluminescent lighting is now used as an application for public safety identification involving alphanumeric characters on the roof of vehicles for clear visibility from an aerial perspective. Electroluminescent lighting, especially electroluminescent wire (EL wire), has also made its way into clothing as many designers have brought this technology to the entertainment and nightlife industry. From 2006, t-shirts with an electroluminescent panel stylized as an audio equalizer, the T-Qualizer, saw a brief period of popularity. Engineers have developed an electroluminescent "skin" that can stretch more than six times its original size while still emitting light. This hyper-elastic light-emitting capacitor (HLEC) can endure more than twice the strain of previously tested stretchable displays. It consists of layers of transparent hydrogel electrodes sandwiching an insulating elastomer sheet. 
The elastomer changes luminance and capacitance when stretched, rolled, and otherwise deformed. In addition to its ability to emit light under a strain of greater than 480% of its original size, the group's HLEC was shown to be capable of being integrated into a soft robotic system. Three six-layer HLEC panels were bound together to form a crawling soft robot, with the top four layers making up the light-up skin and the bottom two the pneumatic actuators. The discovery could lead to significant advances in health care, transportation, electronic communication and other areas. See also List of light sources OLED Photoelectric effect References External links Overview of electroluminescent display technology, and thediscovery of electroluminescence Chrysler Corporation press release introducing Panelescent (EL) Lighting on 8 September, 1959. Condensed matter physics Electrical phenomena Light sources Lighting Luminescence
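Since the devices described above behave electrically as capacitors, the alternating current an EL panel draws from its driver can be estimated from the ideal-capacitor relation I ≈ 2πfCV, which also reflects the observation that light output rises with both voltage and frequency. The capacitance, voltage, and frequency in the sketch below are hypothetical values chosen only for illustration, and a real panel is a lossy rather than ideal capacitor.

```python
import math

def el_drive_current(capacitance_f, voltage_rms, frequency_hz):
    """Approximate RMS current drawn by an EL panel modelled as an ideal capacitor."""
    return 2 * math.pi * frequency_hz * capacitance_f * voltage_rms

# Hypothetical example: a small backlight of ~3 nF driven at 100 V RMS and 1 kHz.
i_rms = el_drive_current(3e-9, 100.0, 1_000.0)
print(f"Estimated drive current: {i_rms * 1e3:.2f} mA")     # about 1.88 mA
print(f"Apparent power: {100.0 * i_rms * 1e3:.1f} mVA")     # about 188.5 mVA
```

The small currents and apparent power in this estimate are consistent with the low power consumption attributed to EL backlights earlier in the article, even though the drive voltage itself is high.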
Electroluminescence
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,366
[ "Physical phenomena", "Luminescence", "Molecular physics", "Phases of matter", "Materials science", "Electrical phenomena", "Condensed matter physics", "Matter" ]
60,933
https://en.wikipedia.org/wiki/Triboelectric%20effect
The triboelectric effect (also known as triboelectricity, triboelectric charging, triboelectrification, or tribocharging) describes electric charge transfer between two objects when they contact or slide against each other. It can occur with different materials, such as the sole of a shoe on a carpet, or between two pieces of the same material. It is ubiquitous, and occurs with differing amounts of charge transfer (tribocharge) for all solid materials. There is evidence that tribocharging can occur between combinations of solids, liquids and gases, for instance liquid flowing in a solid tube or an aircraft flying through air. Often static electricity is a consequence of the triboelectric effect when the charge stays on one or both of the objects and is not conducted away. The term triboelectricity has been used to refer to the field of study or the general phenomenon of the triboelectric effect, or to the static electricity that results from it. When there is no sliding, tribocharging is sometimes called contact electrification, and any static electricity generated is sometimes called contact electricity. The terms are often used interchangeably, and may be confused. Triboelectric charge plays a major role in industries such as packaging of pharmaceutical powders, and in many processes such as dust storms and planetary formation. It can also increase friction and adhesion. While many aspects of the triboelectric effect are now understood and extensively documented, significant disagreements remain in the current literature about the underlying details. History The historical development of triboelectricity is interwoven with work on static electricity and electrons themselves. Experiments involving triboelectricity and static electricity occurred before the discovery of the electron. The name ēlektron (ἤλεκτρον) is Greek for amber, which is connected to the recording of electrostatic charging by Thales of Miletus around 585 BCE, and possibly others even earlier. The prefix (Greek for 'rub') refers to sliding, friction and related processes, as in tribology. From the axial age (8th to 3rd century BC) the attraction of materials due to static electricity by rubbing amber and the attraction of magnetic materials were considered to be similar or the same. There are indications that it was known both in Europe and outside, for instance China and other places. Syrian women used amber whorls in weaving and exploited the triboelectric properties, as noted by Pliny the Elder. The effect was mentioned in records from the medieval period. Archbishop Eustathius of Thessalonica, Greek scholar and writer of the 12th century, records that Woliver, king of the Goths, could draw sparks from his body. He also states that a philosopher was able, while dressing, to draw sparks from his clothes, similar to the report by Robert Symmer of his silk stocking experiments, which may be found in the 1759 Philosophical Transactions. It is generally considered that the first major scientific analysis was by William Gilbert in his publication De Magnete in 1600. He discovered that many more materials than amber such as sulphur, wax, glass could produce static electricity when rubbed, and that moisture prevented electrification. Others such as Sir Thomas Browne made important contributions slightly later, both in terms of materials and the first use of the word electricity in Pseudodoxia Epidemica. He noted that metals did not show triboelectric charging, perhaps because the charge was conducted away. 
An important step was around 1663 when Otto von Guericke invented a machine that could automate triboelectric charge generation, making it much easier to produce more tribocharge; other electrostatic generators followed. For instance, shown in the Figure is an electrostatic generator built by Francis Hauksbee the Younger. Another key development was in the 1730s when C. F. du Fay pointed out that there were two types of charge which he named vitreous and resinous. These names corresponded to the glass (vitreous) rods and bituminous coal, amber, or sealing wax (resinous) used in du Fay's experiments. These names were used throughout the 19th century. The use of the terms positive and negative for types of electricity grew out of the independent work of Benjamin Franklin around 1747 where he ascribed electricity to an over- or under- abundance of an electrical fluid. At about the same time Johan Carl Wilcke published in his 1757 PhD thesis a triboelectric series. In this work materials were listed in order of the polarity of charge separation when they are touched or slide against another. A material towards the bottom of the series, when touched to a material near the top of the series, will acquire a more negative charge. The first systematic analysis of triboelectricity is considered to be the work of Jean Claude Eugène Péclet in 1834. He studied triboelectric charging for a range of conditions such as the material, pressure and rubbing of surfaces. It was some time before there were further quantitative works by Owen in 1909 and Jones in 1915. The most extensive early set of experimental analyses was from 1914–1930 by the group of Professor Shaw, who laid much of the foundation of experimental knowledge. In a series of papers he: was one of the first to mention some of the failings of the triboelectric series, also showing that heat had a major effect on tribocharging; analyzed in detail where different materials would fall in a triboelectric series, at the same time pointing out anomalies; separately analyzed glass and solid elements and solid elements and textiles, carefully measuring both tribocharging and friction; analyzed charging due to air-blown particles; demonstrated that surface strain and relaxation played a critical role for a range of materials, and examined the tribocharging of many different elements with silica. Much of this work predates an understanding of solid state variations of energies levels with position, and also band bending. It was in the early 1950s in the work of authors such as Vick that these were taken into account along with concepts such as quantum tunnelling and behavior such as Schottky barrier effects, as well as including models such as asperities for contacts based upon the work of Frank Philip Bowden and David Tabor. Basic characteristics Triboelectric charging occurs when two materials are brought into contact then separated, or slide against each other. An example is rubbing a plastic pen on a shirt sleeve made of cotton, wool, polyester, or the blended fabrics used in modern clothing. An electrified pen will attract and pick up pieces of paper less than a square centimeter, and will repel a similarly electrified pen. This repulsion is detectable by hanging both pens on threads and setting them near one another. Such experiments led to the theory of two types of electric charge, one being the negative of the other, with a simple sum respecting signs giving the total charge. 
The electrostatic attraction of the charged plastic pen to neutral uncharged pieces of paper (for example) is due to induced dipoles in the paper. The triboelectric effect can be unpredictable because many details are often not controlled. Phenomena which do not have a simple explanation have been known for many years. For instance, as early as 1910, Jaimeson observed that for a piece of cellulose, the sign of the charge was dependent upon whether it was bent concave or convex during rubbing. The same behavior with curvature was reported in 1917 by Shaw, who noted that the effect of curvature with different materials made them either more positive or negative. In 1920, Richards pointed out that for colliding particles the velocity and mass played a role, not just what the materials were. In 1926, Shaw pointed out that with two pieces of identical material, the sign of the charge transfer from "rubber" to "rubbed" could change with time. There are other more recent experimental results which also do not have a simple explanation. For instance the work of Burgo and Erdemir, which showed that the sign of charge transfer reverses between when a tip is pushing into a substrate versus when it pulls out; the detailed work of Lee et al and Forward, Lacks and Sankaran and others measuring the charge transfer during collisions between particles of zirconia of different size but the same composition, with one size charging positive, the other negative; the observations using sliding or Kelvin probe force microscope of inhomogeneous charge variations between nominally identical materials. The details of how and why tribocharging occurs are not established science as of 2023. One component is the difference in the work function (also called the electron affinity) between the two materials. This can lead to charge transfer as, for instance, analyzed by Harper. As has been known since at least 1953, the contact potential is part of the process but does not explain many results, such as the ones mentioned in the last two paragraphs. Many studies have pointed out issues with the work function difference (Volta potential) as a complete explanation. There is also the question of why sliding is often important. Surfaces have many nanoscale asperities where the contact is taking place, which has been taken into account in many approaches to triboelectrification. Volta and Helmholtz suggested that the role of sliding was to produce more contacts per second. In modern terms, the idea is that electrons move many times faster than atoms, so the electrons are always in equilibrium when atoms move (the Born–Oppenheimer approximation). With this approximation, each asperity contact during sliding is equivalent to a stationary one; there is no direct coupling between the sliding velocity and electron motion. An alternative view (beyond the Born–Oppenheimer approximation) is that sliding acts as a quantum mechanical pump which can excite electrons to go from one material to another. A different suggestion is that local heating during sliding matters, an idea first suggested by Frenkel in 1941. Other papers have considered that local bending at the nanoscale produces voltages which help drive charge transfer via the flexoelectric effect. There are also suggestions that surface or trapped charges are important. More recently there have been attempts to include a full solid state description. 
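As a rough quantitative footnote to the induced-dipole attraction of neutral paper to a charged object mentioned at the start of this subsection, the following sketch estimates the attractive force using the standard polarizability of a small dielectric sphere. All the input values (pen charge, distance, size and permittivity of the paper scrap) are assumptions chosen only for illustration, not figures from this article.

```python
import math

EPS0 = 8.854e-12                 # vacuum permittivity, F/m
K = 1.0 / (4 * math.pi * EPS0)   # Coulomb constant

# Assumed, illustrative values (not from the article):
q = 10e-9       # tribocharge on the rubbed pen, C (~10 nC)
r = 0.01        # pen-to-paper distance, m
a = 0.003       # paper scrap modelled as a dielectric sphere of this radius, m
eps_r = 3.0     # assumed relative permittivity of paper
mass = 8e-6     # mass of a ~1 cm^2 scrap of 80 g/m^2 paper, kg

# Clausius-Mossotti polarizability of a small dielectric sphere
alpha = 4 * math.pi * EPS0 * a**3 * (eps_r - 1) / (eps_r + 2)

E = K * q / r**2            # field of the pen treated as a point charge, V/m
# Force on the induced dipole: F = -d/dr(-alpha*E^2/2) = 2*alpha*E^2/r (attractive)
F = 2 * alpha * E**2 / r

print(f"field {E:.2e} V/m, attractive force {F:.2e} N, scrap weight {mass*9.81:.2e} N")
```

With numbers of this order the computed force comes out comparable to or somewhat larger than the scrap's weight, consistent with the everyday observation that small pieces of paper jump to the pen.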
Explanations and mechanisms From early work starting around the end of the 19th century, a large amount of information is available about what, empirically, causes triboelectricity. While there is extensive experimental data on triboelectricity there is not as yet full scientific consensus on the source, or perhaps more probably the sources. Some aspects are established, and will be part of the full picture: Work function differences between the two materials. Local curvature, strain and roughness. The forces used during sliding, and the velocities when particles collide as well as the sizes. The electronic structure of the materials, and the crystallographic orientation of the two contacting materials. Surface or interface states, as well as environmental factors such as humidity. Triboelectric series An empirical approach to triboelectricity is a triboelectric series. This is a list of materials ordered by how they develop a charge relative to other materials on the list. Johan Carl Wilcke published the first one in a 1757 paper. The series was expanded by Shaw and Henniker by including natural and synthetic polymers, and included alterations in the sequence depending on surface and environmental conditions. Lists vary somewhat as to the order of some materials. Another triboelectric series based on measuring the triboelectric charge density of materials was proposed by the group of Zhong Lin Wang. The triboelectric charge density of the tested materials was measured with respect to liquid mercury in a glove box under well-defined conditions, with fixed temperature, pressure and humidity. It is known, however, that the series approach is too simple and unreliable. There are many cases of triangles: material A charges positive when rubbed against B, B charges positive when rubbed against C, yet C charges positive when rubbed against A, an issue mentioned by Shaw in 1914. Such cyclic behaviour cannot be reproduced by any linear series. Furthermore, there are many cases where charging occurs with contacts between two pieces of the same material. This has been modelled as a consequence of the electric fields from local bending (flexoelectricity). Work function differences In all materials there is a positive electrostatic potential from the positive atomic nuclei, partially balanced by a negative electrostatic potential of what can be described as a sea of electrons. The average potential is positive, and is called the mean inner potential (MIP). Different materials have different MIPs, depending upon the types of atoms and how close they are. At a surface the electrons also spill out a little into the vacuum, as analyzed in detail by Lang and Kohn. This leads to a dipole at the surface. Combined, the dipole and the MIP lead to a potential barrier for electrons to leave the material, which is called the work function. A rationalization of the triboelectric series is that different members have different work functions, so electrons can go from the material with a small work function to the one with a large work function. The potential difference between the two materials is called the Volta potential, also called the contact potential. Experiments have validated the importance of this for metals and other materials. However, because the surface dipole varies between different surfaces of the same solid, the contact potential is not a universal parameter. By itself it cannot explain many of the results which were established in the early 20th century. 
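As a rough numerical illustration of the contact (Volta) potential just described, the sketch below estimates the charge density needed to compensate it when two contacting surfaces are treated as a simple parallel-plate capacitor. The work functions, effective gap and the parallel-plate treatment are all assumptions made for the example, not values from this article; real tribocharge densities are usually far lower, in part because only a small fraction of the surfaces makes intimate contact.

```python
E_CHARGE = 1.602e-19      # elementary charge, C
EPS0 = 8.854e-12          # vacuum permittivity, F/m

# Assumed, illustrative work functions (eV); real values depend on the surface
W_a = 4.3                 # material A
W_b = 4.8                 # material B

# Contact (Volta) potential difference in volts, since work functions are in eV
delta_V = W_b - W_a

# Crude parallel-plate estimate of the compensating charge density,
# assuming the contacting surfaces sit an effective distance d apart.
d = 1e-9                  # assumed effective gap, m (roughly atomic scale)
sigma = EPS0 * delta_V / d                  # C/m^2
electrons_per_um2 = sigma / E_CHARGE * 1e-12

print(f"contact potential {delta_V:.2f} V")
print(f"compensating charge ~{sigma:.2e} C/m^2 "
      f"(~{electrons_per_um2:.0f} electrons per square micrometre)")
```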
Electromechanical contributions Whenever a solid is strained, electric fields can be generated. One process is due to linear strains and is called piezoelectricity; the second depends upon how rapidly strains change with distance (the strain gradient) and is called flexoelectricity. Both are established science, and can be both measured and calculated using density functional theory methods. Because flexoelectricity depends upon a gradient it can be much larger at the nanoscale during sliding or asperity contact between two objects. There has been considerable work on the connection between piezoelectricity and triboelectricity. While it can be important, piezoelectricity only occurs in the small number of materials which do not have inversion symmetry, so it is not a general explanation. It has recently been suggested that flexoelectricity may be very important in triboelectricity as it occurs in all insulators and semiconductors. Quite a few of the experimental results such as the effect of curvature can be explained by this approach, although full details have not as yet been determined. There is also early work from Shaw and Hanstock, and from the group of Daniel Lacks demonstrating that strain matters. Capacitor charge compensation model An explanation that has appeared in different forms is analogous to charge on a capacitor. If there is a potential difference between two materials due to the difference in their work functions (contact potential), this can be thought of as equivalent to the potential difference across a capacitor. The charge to compensate this is that which cancels the electric field. If an insulating dielectric is in between the two materials, then this will lead to a polarization density P and a bound surface charge σ = P · n̂, where n̂ is the unit surface normal. The total charge in the capacitor is then the combination of the bound surface charge from the polarization and that from the potential. The triboelectric charge from this compensation model has been frequently considered as a key component. If the additional polarization due to strain (piezoelectricity) or bending of samples (flexoelectricity) is included, this can explain observations such as the effect of curvature or inhomogeneous charging. Electron and/or ion transfer There is debate about whether electrons or ions are transferred in triboelectricity. For instance, Harper discusses both possibilities, whereas Vick was more in favor of electron transfer. The debate remains to this day with, for instance, George M. Whitesides advocating for ions, while Diaz and Fenzel-Alexander as well as Laurence D. Marks support both, and others just electrons. Thermodynamic irreversibility In the latter half of the 20th century the Soviet school led by chemist Boris Derjaguin argued that triboelectricity and the associated phenomenon of triboluminescence are fundamentally irreversible. A similar point of view to Derjaguin's has been more recently advocated by Seth Putterman and his collaborators at the University of California, Los Angeles (UCLA). A proposed theory of triboelectricity as a fundamentally irreversible process was published in 2020 by theoretical physicists Robert Alicki and Alejandro Jenkins. They argued that the electrons in the two materials that slide against each other have different velocities, giving a non-equilibrium state. Quantum effects cause this imbalance to pump electrons from one material to the other. 
This is a fermionic analog of the mechanism of rotational superradiance originally described by Yakov Zeldovich for bosons. Electrons are pumped in both directions, but small differences in the electronic potential landscapes for the two surfaces can cause net charging. Alicki and Jenkins argue that such an irreversible pumping is needed to understand how the triboelectric effect can generate an electromotive force. Humidity Generally, increased humidity (water in the air) leads to a decrease in the magnitude of triboelectric charging. The size of this effect varies greatly depending on the contacting materials; the decrease in charging ranges from up to a factor of 10 or more to very little humidity dependence. Some experiments find increased charging at moderate humidity compared to extremely dry conditions before a subsequent decrease at higher humidity. The most widespread explanation is that higher humidity leads to more water adsorbed at the surface of contacting materials, leading to a higher surface conductivity. The higher conductivity allows for greater charge recombination as contacts separate, resulting in a smaller transfer of charge. Another proposed explanation for humidity effects considers the case when charge transfer is observed to increase with humidity in dry conditions. Increasing humidity may lead to the formation of water bridges between contacting materials that promote the transfer of ions. Examples Friction and adhesion from tribocharging Friction is a retarding force due to different energy dissipation process such as elastic and plastic deformation, phonon and electron excitation, and also adhesion. As an example, in a car or any other vehicle the wheels elastically deform as they roll. Part of the energy needed for this deformation is recovered (elastic deformation), some is not and goes into heating the tires. The energy which is not recovered contributes to the back force, a process called rolling friction. Similar to rolling friction there are energy terms in charge transfer, which contribute to friction. In static friction there is coupling between elastic strains, polarization and surface charge which contributes to the frictional force. In sliding friction, when asperities contact and there is charge transfer, some of the charge returns as the contacts are released, some does not and will contribute to the macroscopically observed friction. There is evidence for a retarding Coulomb force between asperities of different charges, and an increase in the adhesion from contact electrification when geckos walk on water. There is also evidence of connections between jerky (stick–slip) processes during sliding with charge transfer, electrical discharge and x-ray emission. How large the triboelectric contribution is to friction has been debated. It has been suggested by some that it may dominate for polymers, whereas Harper has argued that it is small. Liquids and gases The generation of static electricity from the relative motion of liquids or gases is well established, with one of the first analyses in 1886 by Lord Kelvin in his water dropper which used falling drops to create an electric generator. Liquid mercury is a special case as it typically acts as a simple metal, so has been used as a reference electrode. More common is water, and electricity due to water droplets hitting surfaces has been documented since the discovery by Philipp Lenard in 1892 of the spray electrification or waterfall effect. 
This is when falling water generates static electricity either by collisions between water drops or with the ground, leading to the finer mist in updrafts being mainly negatively charged, with positive charge near the lower surface. It can also occur for sliding drops. Another type of charge can be produced during rapid solidification of water containing ions, which is called the Workman–Reynolds effect. During the solidification the positive and negative ions may not be equally distributed between the liquid and solid. For instance, in thunderstorms this can contribute (together with the waterfall effect) to separation of positive hydrogen ions and negative hydroxide ions, leading to static charge and lightning. A third class is associated with contact potential differences between liquids or gases and other materials, similar to the work function differences for solids. It has been suggested that a triboelectric series for liquids is useful. One difference from solids is that liquids often have charged double layers, and most of the work to date supports the view that ion transfer (rather than electron transfer) dominates for liquids, as first suggested by Irving Langmuir in 1938. Finally, with liquids there can be flow-rate gradients at interfaces, and also viscosity gradients. These can produce electric fields and also polarization of the liquid, a field called electrohydrodynamics. These are analogous to the electromechanical terms for solids where electric fields can occur due to elastic strains as described earlier. Powders During commercial powder processing or in natural processes such as dust storms, triboelectric charge transfer can occur. There can be electric fields of up to 160 kV/m in moderate wind conditions, which leads to Coulomb forces of about the same magnitude as gravity. There does not need to be air present; significant charging can occur, for instance, on airless planetary bodies. With pharmaceutical powders and other commercial powders, tribocharging needs to be controlled for quality control of the materials and doses. Static discharge is also a particular hazard in grain elevators owing to the danger of a dust explosion, in places that store explosive powders, and in many other cases. Triboelectric powder separation has been discussed as a method of separating powders, for instance different biopolymers. The principle here is that different degrees of charging can be exploited for electrostatic separation, a general concept for powders. In industry There are many areas in industry where triboelectricity is known to be an issue. Some examples are: Non-conducting pipes carrying combustible liquids or fuels such as petrol can result in tribocharge accumulation on the walls of the pipes, which can lead to potentials as large as 90 kV. Pneumatic transport systems in industry can lead to fires due to the tribocharge generated during use. On ships, contact between cargo and pipelines during loading and unloading, as well as flow in steam pipes and water jets in cleaning machines can lead to dangerous charging. Courses exist to teach mariners the dangers. US authorities require nearly all industrial facilities to measure particulate dust emissions. Various sensors based on triboelectricity are used, and in 1997 the United States Environmental Protection Agency issued guidelines for triboelectric fabric-filter bag leak-detection systems. Commercial sensors are available for triboelectric dust detection. 
Wiping a rail near a chemical tank while it is being filled with a flammable chemical can lead to sparks which ignite the chemical. This was the cause of a 2017 explosion that killed one and injured many. Other examples While the simple case of stroking a cat is familiar to many, there are other areas in modern technological civilization where triboelectricity is exploited or is a concern: Air moving past an aircraft can lead to a buildup of charge called "precipitation static" or "P-static"; aircraft typically have one or more static wicks to remove it. Checking the status of these is a standard task for pilots. Similarly, helicopter blades move fast, and tribocharging can generate voltages up to 200 kV. During planetary formation, a key step is aggregation of dust or smaller particles. There is evidence that triboelectric charging during collisions of granular material plays a key role in overcoming barriers to aggregation. Single-use medical protective clothing must fulfill certain triboelectric charging regulations in China. Space vehicles can accumulate significant tribocharge which can interfere with communications such as the sending of self-destruct signals. Some launches have been delayed by weather conditions where tribocharging could occur. Triboelectric nanogenerators are energy harvesting devices which convert mechanical energy into electricity. Triboelectric noise within medical cable assemblies and lead wires is generated when the conductors, insulation, and fillers rub against each other as the cables are flexed during movement. Keeping triboelectric noise at acceptable levels requires careful material selection, design, and processing. It is also an issue with underwater electroacoustic transducers if there are flexing motions of the cables; the mechanism is believed to involve relative motion between a dielectric and a conductor in the cable. Vehicle tires are normally dark because carbon black is added to help conduct away tribocharge that can shock passengers when they exit. There are also discharging straps than can be purchased. See also Electrostatic generator, machine to produce static electricity Electrostatic induction, separation of charges and polarization due to other charges Electrostriction, coupling between an electric field and volume of unit cells Electrohydrodynamics, coupling in liquids between electric fields and properties Flexoelectricity, polarization due to bending and other strain gradients Mechanoluminescence, light produced by mechanical action, often involving triboelectric effect Nanotribology, science of tribology (friction, lubrication and wear processes) at the nanoscale Piezoelectricity, polarization due to linear strains Polarization density, general description of the physics of polarization Static electricity, electric charge often but not always due to triboelectricity Tribology, science of friction, lubrication and wear Triboluminescence, light associated with sliding or contacts Work function, the energy to remove an electron from a surface References External links The return of Static Man, a podcast for kids about a masked menace who is electrified and goes around zapping people. Video of a charged rod demonstration at the University of Minnesota showing repulsion after rods are tribocharged, different cases giving repulsive and attractive forces. Video demonstrating tribocharging with a plastic comb rubbed by a cotton cloth attracting small pieces of paper. Video on Triboelectric Charging from the Khan Academy. 
It discusses the contact potential difference model, using the term electron affinity which has the same meaning as work function. Electrical phenomena Electrostatics Electricity Tribology
Triboelectric effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
5,522
[ "Tribology", "Physical phenomena", "Materials science", "Surface science", "Electrical phenomena", "Mechanical engineering" ]
61,220
https://en.wikipedia.org/wiki/Spintronics
Spintronics (a portmanteau meaning spin transport electronics), also known as spin electronics, is the study of the intrinsic spin of the electron and its associated magnetic moment, in addition to its fundamental electronic charge, in solid-state devices. The field of spintronics concerns spin-charge coupling in metallic systems; the analogous effects in insulators fall into the field of multiferroics. Spintronics fundamentally differs from traditional electronics in that, in addition to charge state, electron spins are used as a further degree of freedom, with implications in the efficiency of data storage and transfer. Spintronic systems are most often realised in dilute magnetic semiconductors (DMS) and Heusler alloys and are of particular interest in the field of quantum computing and neuromorphic computing. History Spintronics emerged from discoveries in the 1980s concerning spin-dependent electron transport phenomena in solid-state devices. This includes the observation of spin-polarized electron injection from a ferromagnetic metal to a normal metal by Johnson and Silsbee (1985) and the discovery of giant magnetoresistance independently by Albert Fert et al. and Peter Grünberg et al. (1988). The origin of spintronics can be traced to the ferromagnet/superconductor tunneling experiments pioneered by Meservey and Tedrow and initial experiments on magnetic tunnel junctions by Julliere in the 1970s. The use of semiconductors for spintronics began with the theoretical proposal of a spin field-effect-transistor by Datta and Das in 1990 and of the electric dipole spin resonance by Rashba in 1960. Theory The spin of the electron is an intrinsic angular momentum that is separate from the angular momentum due to its orbital motion. The magnitude of the projection of the electron's spin along an arbitrary axis is ħ/2, implying that the electron acts as a fermion by the spin-statistics theorem. Like orbital angular momentum, the spin has an associated magnetic moment, the magnitude of which is approximately one Bohr magneton, μB. In a solid, the spins of many electrons can act together to affect the magnetic and electronic properties of a material, for example endowing it with a permanent magnetic moment as in a ferromagnet. In many materials, electron spins are equally present in both the up and the down state, and no transport properties are dependent on spin. A spintronic device requires generation or manipulation of a spin-polarized population of electrons, resulting in an excess of spin up or spin down electrons. The polarization of any spin-dependent property X can be written as PX = (X↑ − X↓)/(X↑ + X↓). A net spin polarization can be achieved either by creating an equilibrium energy split between spin up and spin down, for example by putting a material in a large magnetic field (Zeeman effect) or through the exchange energy present in a ferromagnet, or by forcing the system out of equilibrium. The period of time that such a non-equilibrium population can be maintained is known as the spin lifetime, τ. In a diffusive conductor, a spin diffusion length can be defined as the distance over which a non-equilibrium spin population can propagate. Spin lifetimes of conduction electrons in metals are relatively short (typically less than 1 nanosecond). An important research area is devoted to extending this lifetime to technologically relevant timescales. The mechanisms of decay for a spin polarized population can be broadly classified as spin-flip scattering and spin dephasing. 
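Before turning to those decay mechanisms in detail, the quantities just defined can be put into a short numerical sketch. All the input values below (spin-resolved carrier densities, diffusion constant, spin lifetime) are assumptions chosen only to illustrate the definition PX = (X↑ − X↓)/(X↑ + X↓); the spin diffusion length is estimated with the standard diffusive form L = √(Dτ), which is an assumption of this sketch rather than a formula quoted in the article.

```python
import math

def spin_polarization(x_up, x_down):
    """P_X = (X_up - X_down) / (X_up + X_down) for any spin-dependent quantity X."""
    return (x_up - x_down) / (x_up + x_down)

def spin_diffusion_length(diffusion_const, spin_lifetime):
    """Diffusive estimate L = sqrt(D * tau) of how far a spin imbalance propagates."""
    return math.sqrt(diffusion_const * spin_lifetime)

# Assumed, illustrative numbers (not taken from the article):
n_up, n_down = 6.0e27, 4.0e27   # spin-resolved carrier densities, m^-3
D = 5e-3                        # diffusion constant, m^2/s
tau = 1e-10                     # spin lifetime, s (sub-nanosecond, as for many metals)

print(f"polarization P = {spin_polarization(n_up, n_down):.2f}")
print(f"spin diffusion length ~ {spin_diffusion_length(D, tau)*1e9:.0f} nm")
```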
Spin-flip scattering is a process inside a solid that does not conserve spin, and can therefore switch an incoming spin up state into an outgoing spin down state. Spin dephasing is the process wherein a population of electrons with a common spin state becomes less polarized over time due to different rates of electron spin precession. In confined structures, spin dephasing can be suppressed, leading to spin lifetimes of milliseconds in semiconductor quantum dots at low temperatures. Superconductors can enhance central effects in spintronics such as magnetoresistance effects, spin lifetimes and dissipationless spin-currents. The simplest method of generating a spin-polarised current in a metal is to pass the current through a ferromagnetic material. The most common applications of this effect involve giant magnetoresistance (GMR) devices. A typical GMR device consists of at least two layers of ferromagnetic materials separated by a spacer layer. When the two magnetization vectors of the ferromagnetic layers are aligned, the electrical resistance will be lower (so a higher current flows at constant voltage) than if the ferromagnetic layers are anti-aligned. This constitutes a magnetic field sensor. Two variants of GMR have been applied in devices: (1) current-in-plane (CIP), where the electric current flows parallel to the layers and (2) current-perpendicular-to-plane (CPP), where the electric current flows in a direction perpendicular to the layers. Other metal-based spintronics devices: Tunnel magnetoresistance (TMR), where CPP transport is achieved by using quantum-mechanical tunneling of electrons through a thin insulator separating ferromagnetic layers. Spin-transfer torque, where a current of spin-polarized electrons is used to control the magnetization direction of ferromagnetic electrodes in the device. Spin-wave logic devices carry information in the phase. Interference and spin-wave scattering can perform logic operations. Spintronic-logic devices Non-volatile spin-logic devices to enable scaling are being extensively studied. Spin-transfer, torque-based logic devices that use spins and magnets for information processing have been proposed. These devices are part of the ITRS exploratory road map. Logic-in memory applications are already in the development stage. A 2017 review article can be found in Materials Today. A generalized circuit theory for spintronic integrated circuits has been proposed so that the physics of spin transport can be utilized by SPICE developers and subsequently by circuit and system designers for the exploration of spintronics for “beyond CMOS computing.” Applications Read heads of magnetic hard drives are based on the GMR or TMR effect. Motorola developed a first-generation 256 kb magnetoresistive random-access memory (MRAM) based on a single magnetic tunnel junction and a single transistor that has a read/write cycle of under 50 nanoseconds. Everspin has since developed a 4 Mb version. Two second-generation MRAM techniques are in development: thermal-assisted switching (TAS) and spin-transfer torque (STT). Another design, racetrack memory, a novel memory architecture proposed by Dr. Stuart S. P. Parkin, encodes information in the direction of magnetization between domain walls of a ferromagnetic wire. In 2012, persistent spin helices of synchronized electrons were made to persist for more than a nanosecond, a 30-fold increase over earlier efforts, and longer than the duration of a modern processor clock cycle. 
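The GMR spin-valve behaviour described above is often rationalized with Mott's two-current model, in which each spin channel sees a low or high resistance in a ferromagnetic layer depending on whether its spin is aligned with that layer's magnetization. The sketch below is a minimal, idealized version of that model with made-up channel resistances; it is not a description of any particular device.

```python
def parallel(r1, r2):
    """Two spin channels conducting side by side."""
    return r1 * r2 / (r1 + r2)

def gmr_ratio(r_low, r_high):
    """
    Two-current (Mott) model of a simple spin valve with two identical
    ferromagnetic layers. r_low / r_high: resistance a spin channel sees in a
    layer whose magnetization is aligned / anti-aligned with that spin.
    """
    r_parallel_state = parallel(2 * r_low, 2 * r_high)              # layers aligned
    r_antiparallel_state = parallel(r_low + r_high, r_low + r_high) # layers anti-aligned
    return (r_antiparallel_state - r_parallel_state) / r_parallel_state

# Assumed, illustrative channel resistances (arbitrary units):
print(f"GMR = {gmr_ratio(1.0, 3.0):.0%}")   # equals (r_high - r_low)^2 / (4 r_low r_high)
```

With these inputs the antiparallel configuration is more resistive than the parallel one, reproducing the qualitative behaviour exploited in magnetic field sensors and read heads.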
Semiconductor-based spintronic devices Doped semiconductor materials display dilute ferromagnetism. In recent years, dilute magnetic oxides (DMOs) including ZnO-based DMOs and TiO2-based DMOs have been the subject of numerous experimental and computational investigations. Approaches to spin injection include using non-oxide ferromagnetic semiconductor sources (like manganese-doped gallium arsenide), increasing the interface resistance with a tunnel barrier, or using hot-electron injection. Spin detection in semiconductors has been addressed with multiple techniques: Faraday/Kerr rotation of transmitted/reflected photons Circular polarization analysis of electroluminescence Nonlocal spin valve (adapted from Johnson and Silsbee's work with metals) Ballistic spin filtering The latter technique was used to overcome the lack of spin-orbit interaction and materials issues to achieve spin transport in silicon. Because external magnetic fields (and stray fields from magnetic contacts) can cause large Hall effects and magnetoresistance in semiconductors (which mimic spin-valve effects), the only conclusive evidence of spin transport in semiconductors is demonstration of spin precession and dephasing in a magnetic field non-collinear to the injected spin orientation, called the Hanle effect. Applications Applications using spin-polarized electrical injection have shown threshold current reduction and controllable circularly polarized coherent light output. Examples include semiconductor lasers. Future applications may include a spin-based transistor having advantages over MOSFET devices such as steeper sub-threshold slope. Magnetic-tunnel transistor: The magnetic-tunnel transistor with a single base layer has the following terminals: Emitter (FM1): Injects spin-polarized hot electrons into the base. Base (FM2): Spin-dependent scattering takes place in the base. It also serves as a spin filter. Collector (GaAs): A Schottky barrier is formed at the interface. It only collects electrons that have enough energy to overcome the Schottky barrier, and when states are available in the semiconductor. The magnetocurrent (MC) is given as MC = (IC,P − IC,AP) / IC,AP, where IC,P and IC,AP are the collector currents for parallel and antiparallel magnetization of the two ferromagnetic layers, and the transfer ratio (TR) is TR = IC / IE, the ratio of collector to emitter current. The MTT promises a highly spin-polarized electron source at room temperature. Storage media Antiferromagnetic storage media have been studied as an alternative to ferromagnetism, especially since with antiferromagnetic material the bits can be stored as well as with ferromagnetic material. Instead of the usual definition 0 ↔ 'magnetisation upwards', 1 ↔ 'magnetisation downwards', the states can be, e.g., 0 ↔ 'vertically-alternating spin configuration' and 1 ↔ 'horizontally-alternating spin configuration'. The main advantages of antiferromagnetic material are: insensitivity to data-damaging perturbations by stray fields due to zero net external magnetization; no effect on nearby particles, implying that antiferromagnetic device elements would not magnetically disturb their neighboring elements; far shorter switching times (antiferromagnetic resonance frequency is in the THz range compared to GHz ferromagnetic resonance frequency); broad range of commonly available antiferromagnetic materials including insulators, semiconductors, semimetals, metals, and superconductors. Research is being done into how to read and write information to antiferromagnetic spintronics as their net zero magnetization makes this difficult compared to conventional ferromagnetic spintronics. 
In modern MRAM, detection and manipulation of ferromagnetic order by magnetic fields has largely been abandoned in favor of more efficient and scalable reading and writing by electrical current. Methods of reading and writing information by current rather than fields are also being investigated in antiferromagnets as fields are ineffective anyway. Writing methods currently being investigated in antiferromagnets are through spin-transfer torque and spin-orbit torque from the spin Hall effect and the Rashba effect. Reading information in antiferromagnets via magnetoresistance effects such as tunnel magnetoresistance is also being explored. See also Stuart S. P. Parkin Electric dipole spin resonance Josephson effect Magnetoresistive random-access memory (MRAM) Magnonics Potential applications of graphene#Spintronics Rashba effect Spin pumping Spin-transfer torque Spinhenge@Home Spinmechatronics Spinplasmonics Unconventional computing Valleytronics List of emerging technologies Multiferroics References Further reading "Introduction to Spintronics". Marc Cahay, Supriyo Bandyopadhyay, CRC Press, "Spintronics Steps Forward.", University of South Florida News External links 23 milestones in the history of spin compiled by Nature Milestone 18: A Giant Leap for Electronics: Giant Magneto-resistance, compiled by Nature Milestone 20: Information in a Spin: Datta-Das, compiled by Nature Spintronics portal with news and resources RaceTrack:InformationWeek (April 11, 2008) Spintronics research targets GaAs. Spintronics Tutorial Lecture on Spin transport by S. Datta (from Datta Das transistor)—Part 1 and Part 2 Electronics Quantum electronics Condensed matter physics Theoretical computer science Non-volatile memory Solid-state computer storage
Spintronics
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
2,534
[ "Quantum electronics", "Theoretical computer science", "Applied mathematics", "Spintronics", "Phases of matter", "Quantum mechanics", "Materials science", "Condensed matter physics", "Nanotechnology", "Matter" ]
61,255
https://en.wikipedia.org/wiki/Bacterial%20artificial%20chromosome
A bacterial artificial chromosome (BAC) is a DNA construct, based on a functional fertility plasmid (or F-plasmid), used for transforming and cloning in bacteria, usually E. coli. F-plasmids play a crucial role because they contain partition genes that promote the even distribution of plasmids after bacterial cell division. The bacterial artificial chromosome's usual insert size is 150–350 kbp. A similar cloning vector called a PAC has also been produced from the DNA of P1 bacteriophage. BACs were often used to sequence the genomes of organisms in genome projects, for example the Human Genome Project, though they have been replaced by more modern technologies. In BAC sequencing, short pieces of the organism's DNA are amplified as inserts in BACs and then sequenced. Finally, the sequenced parts are rearranged in silico, resulting in the genomic sequence of the organism. BACs were replaced by faster and less laborious sequencing methods such as whole genome shotgun sequencing and, more recently, next-generation sequencing. Common gene components repE for plasmid replication and regulation of copy number. parA and parB for partitioning F plasmid DNA to daughter cells during division, ensuring stable maintenance of the BAC. A selectable marker for antibiotic resistance; some BACs also have lacZ at the cloning site for blue/white selection. T7 & Sp6 phage promoters for transcription of inserted genes. Disease modeling BACs are now being utilized to a greater extent in modeling inherited genetic disease, often alongside transgenic mice. BACs have been useful in this field as complex genes may have several regulatory sequences upstream of the encoding sequence, including various promoter sequences that will govern a gene's expression level. BACs have been used with some degree of success in mice when studying neurological diseases such as Alzheimer's disease, or aneuploidy associated with Down syndrome. There have also been instances when they have been used to study specific oncogenes associated with cancers. They are transferred over to these genetic disease models by electroporation/transformation, transfection with a suitable virus or microinjection. BACs can also be utilized to detect genes or large sequences of interest and then used to map them onto human chromosomes using BAC arrays. BACs are preferred for these kinds of genetic studies because they accommodate much larger sequences without the risk of rearrangement, and are therefore more stable than other types of cloning vectors. Infectious The genomes of several large DNA viruses and RNA viruses have been cloned as BACs. These constructs are referred to as "infectious clones", as transfection of the BAC construct into host cells is sufficient to initiate viral infection. The infectious property of these BACs has made the study of many viruses such as the herpesviruses, poxviruses and coronaviruses more accessible. Molecular studies of these viruses can now be achieved using genetic approaches to mutate the BAC while it resides in bacteria. Such genetic approaches rely on either linear or circular targeting vectors to carry out homologous recombination. 
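For the BAC-based genome sequencing strategy described above, a standard back-of-the-envelope question is how many clones a library needs so that essentially every locus is represented. The sketch below uses the classic Clarke–Carbon expression; the human-sized genome and the mid-range insert size are illustrative assumptions, not values from this article.

```python
import math

def clones_needed(genome_size_bp, insert_size_bp, probability):
    """
    Clarke-Carbon estimate: number of random clones required so that any given
    locus appears in the library with the stated probability.
    """
    f = insert_size_bp / genome_size_bp          # fraction of the genome per clone
    return math.ceil(math.log(1 - probability) / math.log(1 - f))

genome = 3.2e9        # bp, roughly human-sized (illustrative)
insert = 200_000      # bp, within the usual 150-350 kbp BAC range
for p in (0.95, 0.99, 0.999):
    print(f"P = {p}: ~{clones_needed(genome, insert, p):,} BAC clones")
```

Real libraries are generally built several-fold deeper than such minimum estimates so that overlapping clones are available for ordering and assembly.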
See also Cosmid End-sequence profiling Fosmid Human artificial chromosome Secondary chromosome Yeast artificial chromosome References External links The Big Bad BAC: Bacterial Artificial Chromosomes — a review from the Science Creative Quarterly Empire Genomics (company that sells BAC clones from genomic libraries) Amplicon Express (company that makes custom BAC libraries) Genomics techniques Molecular biology techniques
Bacterial artificial chromosome
[ "Chemistry", "Biology" ]
734
[ "Genetics techniques", "Genomics techniques", "Molecular biology techniques", "Molecular biology" ]
61,271
https://en.wikipedia.org/wiki/Auxiliary%20power%20unit
An auxiliary power unit (APU) is a device on a vehicle that provides energy for functions other than propulsion. They are commonly found on large aircraft and naval ships as well as some large land vehicles. Aircraft APUs generally produce 115 V AC at 400 Hz (rather than the 50/60 Hz of mains supply) to run the electrical systems of the aircraft; others can produce 28 V DC. APUs can provide power through single or three-phase systems. A jet fuel starter (JFS) is a similar device to an APU but directly linked to the main engine and started by an onboard compressed air bottle. Transport aircraft History During World War I, the British Coastal class blimps, one of several types of airship operated by the Royal Navy, carried an ABC auxiliary engine. These powered a generator for the craft's radio transmitter and, in an emergency, could power an auxiliary air blower. One of the first military fixed-wing aircraft to use an APU was the British World War I Supermarine Nighthawk, an anti-Zeppelin night fighter. During World War II, a number of large American military aircraft were fitted with APUs. These were typically known as putt–putts, even in official training documents. The putt-putt on the B-29 Superfortress bomber was fitted in the unpressurised section at the rear of the aircraft. Various models of four-stroke flat-twin or V-twin engines were used. The engine drove a P2 DC generator, rated at 28.5 volts and 200 amps (several of the same P2 generators, driven by the main engines, were the B-29's DC power source in flight). The putt-putt provided power for starting the main engines and was used after take-off to a height of . The putt-putt was restarted when the B-29 was descending to land. Some models of the B-24 Liberator had a putt–putt fitted at the front of the aircraft, inside the nose-wheel compartment. Some models of the Douglas C-47 Skytrain transport aircraft carried a putt-putt under the cockpit floor. As mechanical "startup" APUs for jet engines The first German jet engines built during the Second World War used a mechanical APU starting system designed by the German engineer Norbert Riedel. It consisted of a two-stroke flat engine, which for the Junkers Jumo 004 design was hidden in the engine nose cone, essentially functioning as a pioneering example of an auxiliary power unit for starting a jet engine. A hole in the extreme nose of the cone contained a manual pull-handle which started the piston engine, which in turn rotated the compressor. Two spark plug access ports existed in the Jumo 004's nose cone to service the Riedel unit's cylinders in situ, for maintenance purposes. Two small "premix" tanks for the Riedel's petrol/oil fuel were fitted in the annular intake. The engine was considered an extreme short-stroke design (bore/stroke: 70 mm/35 mm = 2:1) so it could fit within the nose cone of jet engines like the Jumo 004. For speed reduction it had an integrated planetary gear. It was produced by Victoria in Nuremberg and served as a mechanical APU-style starter for all three German jet engine designs to have made it to at least the prototype stage before May 1945 – the Junkers Jumo 004, the BMW 003 (which uniquely appears to use an electric starter for the Riedel APU), and the prototypes (19 built) of the more advanced Heinkel HeS 011 engine, which mounted it just above the intake passage in the Heinkel-crafted sheetmetal of the engine nacelle nose. 
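As a small worked example of the 115 V, 400 Hz three-phase output described at the start of this article, the sketch below converts an assumed generator rating into line-to-line voltage and full-load current. The 90 kVA rating is an illustrative assumption, not a figure from this article.

```python
import math

# Assumed, illustrative rating: a 90 kVA generator on a 115 V
# (line-to-neutral) / 400 Hz three-phase aircraft bus.
apparent_power_va = 90_000
v_phase = 115.0                     # line-to-neutral volts
v_line = v_phase * math.sqrt(3)     # ~200 V line-to-line

# Full-load line current for a balanced three-phase load: I = S / (sqrt(3) * V_LL)
line_current = apparent_power_va / (math.sqrt(3) * v_line)

print(f"line-to-line voltage ~{v_line:.0f} V, "
      f"full-load line current ~{line_current:.0f} A per phase")
```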
The Boeing 727 in 1963 was the first jetliner to feature a gas turbine APU, allowing it to operate at smaller airports, independent of ground facilities. The APU can be identified on many modern airliners by an exhaust pipe at the aircraft's tail. Sections A typical gas-turbine APU for commercial transport aircraft comprises three main sections: Power section The power section is the gas-generator portion of the engine and produces all the shaft power for the APU. In this section of the engine, air is compressed, mixed with fuel and ignited to create hot, expanding gas. This gas is highly energetic and is used to spin the turbine, which in turn powers other sections of the engine, such as auxiliary gearboxes, pumps, electrical generators, and in the case of a turbofan engine, the main fan. Load compressor section The load compressor is generally a shaft-mounted compressor that provides pneumatic power for the aircraft, though some APUs extract bleed air from the power section compressor. There are two actuated devices to help control the flow of air: the inlet guide vanes that regulate airflow to the load compressor and the surge control valve that maintains stable or surge-free operation of the turbo machine. Gearbox section The gearbox transfers power from the main shaft of the engine to an oil-cooled generator for electrical power. Within the gearbox, power is also transferred to engine accessories such as the fuel control unit, the lubrication module, and cooling fan. There is also a starter motor connected through the gear train to perform the starting function of the APU. Some APU designs use a combination starter/generator for APU starting and electrical power generation to reduce complexity. On the Boeing 787, an aircraft which has greater reliance on its electrical systems, the APU delivers only electricity to the aircraft. The absence of a pneumatic system simplifies the design, but high demand for electricity requires heavier generators. Onboard solid oxide fuel cell (SOFC) APUs are being researched. Manufacturers The market for auxiliary power units is dominated by Honeywell, followed by Pratt & Whitney, Motorsich and other manufacturers such as PBS Velká Bíteš, Safran Power Units, Aerosila and Klimov. Local manufacturers include Bet Shemesh Engines and Hanwha Aerospace. The 2018 market share varied according to the application platforms: Large commercial aircraft: Honeywell 70–80%, Pratt & Whitney 20–30%, others 0–5% Regional aircraft: Pratt & Whitney 50–60%, Honeywell 40–50%, others 0–5% Business jets: Honeywell 90–100%, others 0–5% Helicopters: Pratt & Whitney 40–50%, Motorsich 40–50%, Honeywell 5–10%, Safran Power Units 5–10%, others 0–5% On June 4, 2018, Boeing and Safran announced their 50–50 partnership to design, build and service APUs after regulatory and antitrust clearance in the second half of 2018. Boeing produced several hundred T50/T60 small turboshafts and their derivatives in the early 1960s. Safran produces helicopter and business jet APUs but stopped making large APUs when Labinal exited the APIC joint venture with Sundstrand in 1996. This could threaten the dominance of Honeywell and United Technologies. Honeywell has a 65% share of the mainliner APU market and is the sole supplier for the Airbus A350, the Boeing 777 and all single-aisles: the Boeing 737 MAX, Airbus A220 (formerly Bombardier CSeries), Comac C919, Irkut MC-21 and Airbus A320neo since Airbus eliminated the P&WC APS3200 option. 
P&WC claims the remaining 35% with the Airbus A380, Boeing 787 and Boeing 747-8. It should take at least a decade for the Boeing/Safran JV to reach $100 million in service revenue. The 2017 market for production was worth $800 million (88% civil and 12% military), while the MRO market was worth $2.4 billion, spread equally between civil and military. Spacecraft The Space Shuttle APUs provided hydraulic pressure. The Space Shuttle had three redundant APUs, powered by hydrazine fuel. They were only powered up for ascent, re-entry, and landing. During ascent, the APUs provided hydraulic power for gimballing of the Shuttle's three engines and control of their large valves, and for movement of the control surfaces. During landing, they moved the control surfaces, lowered the wheels, and powered the brakes and nose-wheel steering. Landing could be accomplished with only one APU working. In the early years of the Shuttle there were problems with APU reliability, with malfunctions on three of the first nine Shuttle missions. Armored vehicles APUs are fitted to some tanks to provide electrical power without the high fuel consumption and large infrared signature of the main engine. As early as World War II, the American M4 Sherman had a small, piston-engine powered APU for charging the tank's batteries, a feature the Soviet-produced T-34 tank did not have. Commercial vehicles A refrigerated or frozen food semi trailer or train car may be equipped with an independent APU and fuel tank to maintain low temperatures while in transit, without the need for an external transport-supplied power source. On some older diesel engined-equipment, a small gasoline engine (often called a "pony engine") was used instead of an electric motor to start the main engine. The exhaust path of the pony engine was typically arranged so as to warm the intake manifold of the diesel, to ease starting in colder weather. These were primarily used on large pieces of construction equipment. Fuel cells In recent years, truck and fuel cell manufacturers have teamed up to create, test and demonstrate a fuel cell APU that eliminates nearly all emissions and uses diesel fuel more efficiently. In 2008, a DOE sponsored partnership between Delphi Electronics and Peterbilt demonstrated that a fuel cell could provide power to the electronics and air conditioning of a Peterbilt Model 386 under simulated "idling" conditions for ten hours. Delphi has said the 5 kW system for Class 8 trucks will be released in 2012, at an $8000–9000 price tag that would be competitive with other "midrange" two-cylinder diesel APUs, should they be able to meet those deadlines and cost estimates. See also Air-start system Auxiliary hydraulic system Coffman engine starter Ram air turbine Uninterruptible power supply Notes References External links "Space Shuttle Orbiter APU" "Sound of an APU from inside a Boeing 737 cabin" The Riedel Starter Motor In: Messerschmitt Me 262B in Detail; The airframe, engines and canopy YouTube video of restored Junkers Jumo 004 jet engine, being started with "integral" Riedel APU, from September 2019 Starting systems Electrical generators Aircraft components
Auxiliary power unit
[ "Physics", "Technology" ]
2,230
[ "Physical systems", "Electrical generators", "Machines" ]
61,273
https://en.wikipedia.org/wiki/Supersonic%20speed
Supersonic speed is the speed of an object that exceeds the speed of sound (Mach 1). For objects traveling in dry air of a temperature of 20 °C (68 °F) at sea level, this speed is approximately 343 m/s (1,125 ft/s; 1,235 km/h; 767 mph). Speeds greater than five times the speed of sound (Mach 5) are often referred to as hypersonic. Flights during which only some parts of the air surrounding an object, such as the ends of rotor blades, reach supersonic speeds are called transonic. This occurs typically somewhere between Mach 0.8 and Mach 1.2. Sounds are traveling vibrations in the form of pressure waves in an elastic medium. Objects move at supersonic speed when the objects move faster than the speed at which sound propagates through the medium. In gases, sound travels longitudinally at different speeds, mostly depending on the molecular mass and temperature of the gas, and pressure has little effect. Since air temperature and composition vary significantly with altitude, the speed of sound, and hence the Mach number of a steadily moving object, may change with altitude. In water at room temperature supersonic speed means any speed greater than 1,440 m/s (4,724 ft/s). In solids, sound waves can be polarized longitudinally or transversely and have higher velocities. Supersonic fracture is crack formation faster than the speed of sound in a brittle material. Early meaning The word supersonic comes from two Latin-derived words: 1) super: above and 2) sonus: sound, which together mean above sound, or faster than sound. At the beginning of the 20th century, the term "supersonic" was used as an adjective to describe sound whose frequency is above the range of normal human hearing. The modern term for this meaning is "ultrasonic", but the older meaning sometimes still lives on, as in the word superheterodyne. Supersonic objects The tip of a bullwhip is generally seen as the first object designed to reach the speed of sound. This action results in its telltale "crack", which is actually just a sonic boom. The first human-made supersonic boom was likely caused by a piece of common cloth, leading to the whip's eventual development. It is the wave motion travelling through the bullwhip that makes it capable of achieving supersonic speeds. Most modern firearm bullets are supersonic, with rifle projectiles often travelling at speeds approaching and in some cases well exceeding Mach 3. Most spacecraft are supersonic at least during portions of their reentry, though the effects on the spacecraft are reduced by low air densities. During ascent, launch vehicles generally avoid going supersonic below 30 km (~98,400 feet) to reduce air drag. Note that the speed of sound decreases somewhat with altitude, due to lower temperatures found there (typically up to 25 km). At even higher altitudes the temperature starts increasing, with the corresponding increase in the speed of sound. When an inflated balloon is burst, the torn pieces of latex contract at supersonic speed, which contributes to the sharp and loud popping noise. Supersonic land vehicles To date, only one land vehicle has officially travelled at supersonic speed, the ThrustSSC. The vehicle, driven by Andy Green, holds the world land speed record, having achieved an average speed on its bi-directional run of 1,228 km/h (763 mph) in the Black Rock Desert on 15 October 1997. The Bloodhound LSR project planned an attempt on the record in 2020 at Hakskeenpan in South Africa with a combination jet and hybrid rocket propelled car. 
The aim was to break the existing record, then make further attempts during which the team hoped to reach speeds of up to 1,000 mph (1,609 km/h). The effort was originally run by Richard Noble, who was the leader of the ThrustSSC project; however, following funding issues in 2018, the team was bought by Ian Warhurst and renamed Bloodhound LSR. Later the project was indefinitely delayed due to the COVID-19 pandemic and the vehicle was put up for sale. Supersonic flight Most modern fighter aircraft are supersonic aircraft. No modern-day passenger aircraft are capable of supersonic speed, but there have been supersonic passenger aircraft, namely Concorde and the Tupolev Tu-144. Both of these passenger aircraft and some modern fighters are also capable of supercruise, a condition of sustained supersonic flight without the use of an afterburner. Due to its ability to supercruise for several hours and the relatively high frequency of flight over several decades, Concorde spent more time flying supersonically than all other aircraft combined by a considerable margin. Since Concorde's final retirement flight on November 26, 2003, there are no supersonic passenger aircraft left in service. Some large bombers, such as the Tupolev Tu-160 and Rockwell B-1 Lancer, are also supersonic-capable. The aerodynamics of supersonic aircraft is simpler than subsonic aerodynamics because the airflow at different points along the plane often cannot affect each other. Supersonic jets and rocket vehicles require several times greater thrust to push through the extra aerodynamic drag experienced within the transonic region (around Mach 0.85–1.2). At these speeds aerospace engineers can gently guide air around the fuselage of the aircraft without producing new shock waves, but any change in cross-sectional area farther down the vehicle leads to shock waves along the body. Designers use the supersonic area rule and the Whitcomb area rule to minimize sudden changes in size. However, in practical applications, a supersonic aircraft must operate stably in both subsonic and supersonic profiles, hence aerodynamic design is more complex. The main key to having low supersonic drag is to properly shape the overall aircraft to be long and thin, and close to a "perfect" shape, the von Karman ogive or Sears-Haack body. This has led to almost every supersonic cruising aircraft looking very similar to every other, with a very long and slender fuselage and large delta wings, cf. SR-71, Concorde, etc. Although not ideal for passenger aircraft, this shaping is quite adaptable for bomber use. See also Area rule Hypersonic speed Sonic boom Supersonic aircraft Supersonic airfoils Transonic speed Vapor cone Prandtl–Glauert singularity Supersonic (Oasis song) References External links "Can We Ever Fly Faster Speed of Sound", October 1944, Popular Science one of the earliest articles on shock waves and flying the speed of sound "Britain Goes Supersonic", January 1946, Popular Science 1946 article trying to explain supersonic flight to the general public MathPages – The Speed of Sound Supersonic sound pressure levels Aerodynamics Aerospace engineering Airspeed Sound Temporal rates
Supersonic speed
[ "Physics", "Chemistry", "Engineering" ]
1,359
[ "Temporal quantities", "Physical quantities", "Temporal rates", "Aerodynamics", "Airspeed", "Aerospace engineering", "Wikipedia categories named after physical quantities", "Fluid dynamics" ]
61,275
https://en.wikipedia.org/wiki/Cathodoluminescence
Cathodoluminescence is an optical and electromagnetic phenomenon in which electrons impacting on a luminescent material, such as a phosphor, cause the emission of photons which may have wavelengths in the visible spectrum. A familiar example is the generation of light by an electron beam scanning the phosphor-coated inner surface of the screen of a television that uses a cathode-ray tube. Cathodoluminescence is the inverse of the photoelectric effect, in which electron emission is induced by irradiation with photons. Origin Luminescence in a semiconductor results when an electron in the conduction band recombines with a hole in the valence band. The energy difference (band gap) of this transition can be emitted in the form of a photon. The energy (color) of the photon, and the probability that a photon and not a phonon will be emitted, depends on the material, its purity, and the presence of defects. First, the electron has to be excited from the valence band into the conduction band. In cathodoluminescence, this occurs as the result of a high-energy electron beam impinging onto a semiconductor. However, these primary electrons carry far too much energy to directly excite electrons. Instead, the inelastic scattering of the primary electrons in the crystal leads to the emission of secondary electrons, Auger electrons and X-rays, which in turn can scatter as well. Such a cascade of scattering events leads to up to 10³ secondary electrons per incident electron. These secondary electrons can excite valence electrons into the conduction band when they have a kinetic energy about three times the band gap energy of the material. From there the electron recombines with a hole in the valence band and creates a photon. The excess energy is transferred to phonons and thus heats the lattice. One of the advantages of excitation with an electron beam is that the band gap energy of materials that are investigated is not limited by the energy of the incident light as in the case of photoluminescence. Therefore, in cathodoluminescence, the "semiconductor" examined can, in fact, be almost any non-metallic material. In terms of band structure, classical semiconductors, insulators, ceramics, gemstones, minerals, and glasses can be treated the same way. Microscopy In geology, mineralogy, materials science and semiconductor engineering, a scanning electron microscope (SEM) fitted with a cathodoluminescence detector, or an optical cathodoluminescence microscope, may be used to examine internal structures of semiconductors, rocks, ceramics, glass, etc. in order to get information on the composition, growth and quality of the material. Optical cathodoluminescence microscope A cathodoluminescence (CL) microscope combines a regular (light optical) microscope with a cathode-ray tube. It is designed to image the luminescence characteristics of polished thin sections of solids irradiated by an electron beam. Using a cathodoluminescence microscope, structures within crystals or fabrics can be made visible which cannot be seen in normal light conditions. Thus, for example, valuable information on the growth of minerals can be obtained. CL-microscopy is used in geology, mineralogy and materials science for the investigation of rocks, minerals, volcanic ash, glass, ceramic, concrete, fly ash, etc. CL color and intensity are dependent on the characteristics of the sample and on the working conditions of the electron gun. Here, acceleration voltage and beam current of the electron beam are of major importance. 
Today, two types of CL microscopes are in use. One works with a "cold cathode", generating an electron beam with a corona discharge tube; the other produces a beam using a "hot cathode". Cold-cathode CL microscopes are the simplest and most economical type. Unlike other electron bombardment techniques like electron microscopy, cold cathodoluminescence microscopy provides positive ions along with the electrons, which neutralize surface charge buildup and eliminate the need for conductive coatings to be applied to the specimens. The "hot cathode" type generates an electron beam by an electron gun with a tungsten filament. The advantage of a hot cathode is the precisely controllable high beam intensity, which allows the emission of light to be stimulated even in weakly luminescing materials (e.g. quartz). To prevent charging of the sample, the surface must be coated with a conductive layer of gold or carbon. This is usually done by a sputter deposition device or a carbon coater. Cathodoluminescence from a scanning electron microscope In scanning electron microscopes a focused beam of electrons impinges on a sample and induces it to emit light that is collected by an optical system, such as an elliptical mirror. From there, a fiber optic will transfer the light out of the microscope where it is separated into its component wavelengths by a monochromator and is then detected with a photomultiplier tube. By scanning the microscope's beam in an X-Y pattern and measuring the light emitted with the beam at each point, a map of the optical activity of the specimen can be obtained (cathodoluminescence imaging). Alternatively, by measuring the wavelength dependence for a fixed point or a certain area, the spectral characteristics can be recorded (cathodoluminescence spectroscopy). Furthermore, if the photomultiplier tube is replaced with a CCD camera, an entire spectrum can be measured at each point of a map (hyperspectral imaging). Moreover, the optical properties of an object can be correlated to structural properties observed with the electron microscope. The primary advantage of the electron microscope-based technique is its spatial resolution. In a scanning electron microscope, the attainable resolution is on the order of a few tens of nanometers, while in a (scanning) transmission electron microscope (TEM), nanometer-sized features can be resolved. Additionally, it is possible to perform nanosecond- to picosecond-level time-resolved measurements if the electron beam can be "chopped" into nano- or pico-second pulses by a beam-blanker or with a pulsed electron source. These advanced techniques are useful for examining low-dimensional semiconductor structures, such as quantum wells or quantum dots. While an electron microscope with a cathodoluminescence detector provides high magnification, an optical cathodoluminescence microscope benefits from its ability to show actual visible color features directly through the eyepiece. More recently developed systems try to combine both an optical and an electron microscope to take advantage of both these techniques. Extended applications Although direct bandgap semiconductors such as GaAs or GaN are most easily examined by these techniques, indirect semiconductors such as silicon also emit weak cathodoluminescence, and can be examined as well. In particular, the luminescence of dislocated silicon is different from that of intrinsic silicon, and can be used to map defects in integrated circuits. 
Recently, cathodoluminescence performed in electron microscopes is also being used to study surface plasmon resonances in metallic nanoparticles. Surface plasmons in metal nanoparticles can absorb and emit light, though the process is different from that in semiconductors. Similarly, cathodoluminescence has been exploited as a probe to map the local density of states of planar dielectric photonic crystals and nanostructured photonic materials. See also Electron-stimulated luminescence Luminescence Photoluminescence Scanning electron microscopy References Further reading Electron beams set nanostructures aglow [PDF], E. S. Reich, Nature 493, 143 (2013) Scanning Cathodoluminescence Microscopy, C. M. Parish and P. E. Russell, in Advances in Imaging and Electron Physics, V.147, ed. P. W. Hawkes, P. 1 (2007) Quick look cathodoluminescence analyses and their impact on the interpretation of carbonate reservoirs. Case study of mid-Jurassic oolitic reservoirs in the Paris Basin, B. Granier and C. Staffelbach (2009) Cathodoluminescence Microscopy of Inorganic Solids, B. G. Yacobi and D. B. Holt, New York, Springer (1990) External links Application laboratory time-resolved cathodoluminescence spectroscopy at Paul-Drude-Institut LumiSpy – Luminescence spectroscopy data analysis with python Scientific Results about High Spatial Resolution Cathodoluminescence Electron beam Light sources Luminescence Materials science Scientific techniques
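As a rough numerical companion to the band-gap recombination picture described in the Origin section, the emission wavelength corresponding to a given band-gap energy follows from λ = hc/E. The Python sketch below is illustrative only; the band-gap values are common textbook figures assumed here, not data from the article, and real CL spectra also contain defect- and impurity-related lines.

PLANCK_EV_NM = 1239.84  # h*c expressed in eV*nm

def emission_wavelength_nm(band_gap_ev: float) -> float:
    """Approximate photon wavelength for band-to-band recombination."""
    return PLANCK_EV_NM / band_gap_ev

# Typical textbook band gaps (eV):
for material, gap in [("GaAs", 1.42), ("GaN", 3.4), ("diamond", 5.5)]:
    print(material, round(emission_wavelength_nm(gap)), "nm")
# GaAs ~873 nm (near infrared), GaN ~365 nm (near UV), diamond ~225 nm (deep UV)

This is why cathodoluminescence detection systems for wide-band-gap materials must be sensitive in the ultraviolet, while narrow-gap semiconductors emit in the visible or infrared.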
Cathodoluminescence
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,799
[ "Electron", "Luminescence", "Molecular physics", "Applied and interdisciplinary physics", "Electron beam", "Materials science", "nan" ]
61,419
https://en.wikipedia.org/wiki/Tokenization%20%28data%20security%29
Tokenization, when applied to data security, is the process of substituting a sensitive data element with a non-sensitive equivalent, referred to as a token, that has no intrinsic or exploitable meaning or value. The token is a reference (i.e. identifier) that maps back to the sensitive data through a tokenization system. The mapping from original data to a token uses methods that render tokens infeasible to reverse in the absence of the tokenization system, for example using tokens created from random numbers. A one-way cryptographic function is used to convert the original data into tokens, making it difficult to recreate the original data without obtaining entry to the tokenization system's resources. To deliver such services, the system maintains a vault database of tokens that are connected to the corresponding sensitive data. Protecting the system vault is vital to the system, and improved processes must be put in place to offer database integrity and physical security. The tokenization system must be secured and validated using security best practices applicable to sensitive data protection, secure storage, audit, authentication and authorization. The tokenization system provides data processing applications with the authority and interfaces to request tokens, or detokenize back to sensitive data. The security and risk reduction benefits of tokenization require that the tokenization system is logically isolated and segmented from data processing systems and applications that previously processed or stored sensitive data replaced by tokens. Only the tokenization system can tokenize data to create tokens, or detokenize back to redeem sensitive data under strict security controls. The token generation method must be proven to have the property that there is no feasible means through direct attack, cryptanalysis, side channel analysis, token mapping table exposure or brute force techniques to reverse tokens back to live data. Replacing live data with tokens in systems is intended to minimize exposure of sensitive data to those applications, stores, people and processes, reducing risk of compromise or accidental exposure and unauthorized access to sensitive data. Applications can operate using tokens instead of live data, with the exception of a small number of trusted applications explicitly permitted to detokenize when strictly necessary for an approved business purpose. Tokenization systems may be operated in-house within a secure isolated segment of the data center, or as a service from a secure service provider. Tokenization may be used to safeguard sensitive data involving, for example, bank accounts, financial statements, medical records, criminal records, driver's licenses, loan applications, stock trades, voter registrations, and other types of personally identifiable information (PII). Tokenization is often used in credit card processing. The PCI Council defines tokenization as "a process by which the primary account number (PAN) is replaced with a surrogate value called a token. A PAN may be linked to a reference number through the tokenization process. In this case, the merchant simply has to retain the token and a reliable third party controls the relationship and holds the PAN. The token may be created independently of the PAN, or the PAN can be used as part of the data input to the tokenization technique. The communication between the merchant and the third-party supplier must be secure to prevent an attacker from intercepting to gain the PAN and the token. 
De-tokenization is the reverse process of redeeming a token for its associated PAN value. The security of an individual token relies predominantly on the infeasibility of determining the original PAN knowing only the surrogate value". The choice of tokenization as an alternative to other techniques such as encryption will depend on varying regulatory requirements, interpretation, and acceptance by respective auditing or assessment entities. This is in addition to any technical, architectural or operational constraint that tokenization imposes in practical use. Concepts and origins The concept of tokenization, as adopted by the industry today, has existed since the first currency systems emerged centuries ago as a means to reduce risk in handling high value financial instruments by replacing them with surrogate equivalents. In the physical world, coin tokens have a long history of use replacing the financial instrument of minted coins and banknotes. In more recent history, subway tokens and casino chips found adoption for their respective systems to replace physical currency and cash handling risks such as theft. Exonumia and scrip are terms synonymous with such tokens. In the digital world, similar substitution techniques have been used since the 1970s as a means to isolate real data elements from exposure to other data systems. In databases for example, surrogate key values have been used since 1976 to isolate data associated with the internal mechanisms of databases and their external equivalents for a variety of uses in data processing. More recently, these concepts have been extended to consider this isolation tactic to provide a security mechanism for the purposes of data protection. In the payment card industry, tokenization is one means of protecting sensitive cardholder data in order to comply with industry standards and government regulations. Tokenization was applied to payment card data by Shift4 Corporation and released to the public during an industry Security Summit in Las Vegas, Nevada in 2005. The technology is meant to prevent the theft of the credit card information in storage. Shift4 defines tokenization as: “The concept of using a non-decryptable piece of data to represent, by reference, sensitive or secret data. In payment card industry (PCI) context, tokens are used to reference cardholder data that is managed in a tokenization system, application or off-site secure facility.” To protect data over its full lifecycle, tokenization is often combined with end-to-end encryption to secure data in transit to the tokenization system or service, with a token replacing the original data on return. For example, to avoid the risks of malware stealing data from low-trust systems such as point of sale (POS) systems, as in the Target breach of 2013, cardholder data encryption must take place prior to card data entering the POS and not after. Encryption takes place within the confines of a security hardened and validated card reading device and data remains encrypted until received by the processing host, an approach pioneered by Heartland Payment Systems as a means to secure payment data from advanced threats, now widely adopted by industry payment processing companies and technology companies. The PCI Council has also specified end-to-end encryption (certified point-to-point encryption—P2PE) for various service implementations in various PCI Council Point-to-point Encryption documents. 
The tokenization process The process of tokenization consists of the following steps: The application sends the tokenization data and authentication information to the tokenization system. The process stops if authentication fails, and the data is delivered to an event management system. As a result, administrators can discover problems and effectively manage the system. The system moves on to the next phase if authentication is successful. Using one-way cryptographic techniques, a token is generated and kept in a highly secure data vault. The new token is provided to the application for further use. Tokenization systems share several components according to established standards. Token Generation is the process of producing a token using any means, such as mathematically reversible cryptographic functions based on strong encryption algorithms and key management mechanisms, one-way nonreversible cryptographic functions (e.g., a hash function with strong, secret salt), or assignment via a randomly generated number. Random Number Generator (RNG) techniques are often the best choice for generating token values. Token Mapping – this is the process of assigning the created token value to its original value. To enable permitted look-ups of the original value using the token as the index, a secure cross-reference database must be constructed. Token Data Store – this is a central repository for the Token Mapping process that holds the original values as well as the related token values after the Token Generation process. On data servers, sensitive data and token values must be securely kept in encrypted format. Encrypted Data Storage – this is the encryption of sensitive data while it is in transit. Management of Cryptographic Keys. Strong key management procedures are required for sensitive data encryption on Token Data Stores. Difference from encryption Tokenization and "classic" encryption effectively protect data if implemented properly, and a computer security system may use both. While similar in certain regards, tokenization and classic encryption differ in a few key aspects. Both are cryptographic data security methods and they essentially have the same function; however, they do so with differing processes and have different effects on the data they are protecting. Tokenization is a non-mathematical approach that replaces sensitive data with non-sensitive substitutes without altering the type or length of data. This is an important distinction from encryption because changes in data length and type can render information unreadable in intermediate systems such as databases. Tokenized data can still be processed by legacy systems, which makes tokenization more flexible than classic encryption. In many situations, the encryption process is a constant consumer of processing power; hence such a system needs significant expenditure on specialized hardware and software. Another difference is that tokens require significantly fewer computational resources to process. With tokenization, specific data is kept fully or partially visible for processing and analytics while sensitive information is kept hidden. This allows tokenized data to be processed more quickly and reduces the strain on system resources. This can be a key advantage in systems that rely on high performance. In comparison to encryption, tokenization technologies reduce time, expense, and administrative effort while enabling teamwork and communication. 
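A minimal sketch of the token generation, token mapping and token data store steps described above is given below, with an in-memory dictionary standing in for the secure vault. The class and method names (TokenVault, tokenize, detokenize) are illustrative assumptions, not the interface of any particular product; a real deployment would add the authentication, auditing and encrypted storage that the text requires.

import secrets

class TokenVault:
    """Toy token data store: maps random tokens back to the original sensitive values."""

    def __init__(self) -> None:
        self._token_to_value: dict[str, str] = {}  # stand-in for an encrypted vault database

    def tokenize(self, sensitive_value: str) -> str:
        # Token generation via a cryptographically secure random number generator,
        # so the token has no mathematical relationship to the original value.
        token = secrets.token_hex(16)
        self._token_to_value[token] = sensitive_value  # token mapping step
        return token

    def detokenize(self, token: str) -> str:
        # Only the tokenization system can redeem a token for the original value.
        return self._token_to_value[token]

vault = TokenVault()
token = vault.tokenize("4111 1111 1111 1111")
print(token)                    # opaque surrogate, safe to pass to downstream systems
print(vault.detokenize(token))  # original value, available only through the vault

Because the token is purely random, compromising a system that holds only tokens reveals nothing about the underlying data; the vault itself remains the single component that must be isolated and protected.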
Types of tokens There are many ways that tokens can be classified; however, there is currently no unified classification. Tokens can be: single or multi-use, cryptographic or non-cryptographic, reversible or irreversible, authenticable or non-authenticable, and various combinations thereof. In the context of payments, the difference between high- and low-value tokens plays a significant role. High-value tokens (HVTs) HVTs serve as surrogates for actual PANs in payment transactions and are used as an instrument for completing a payment transaction. In order to function, they must look like actual PANs. Multiple HVTs can map back to a single PAN and a single physical credit card without the owner being aware of it. Additionally, HVTs can be limited to certain networks and/or merchants whereas PANs cannot. HVTs can also be bound to specific devices so that anomalies between token use, physical devices, and geographic locations can be flagged as potentially fraudulent. HVT blocking enhances efficiency by reducing computational costs while maintaining accuracy and reducing record linkage as it reduces the number of records that are compared. Low-value tokens (LVTs) or security tokens LVTs also act as surrogates for actual PANs in payment transactions; however, they serve a different purpose. LVTs cannot be used by themselves to complete a payment transaction. In order for an LVT to function, it must be possible to match it back to the actual PAN it represents, albeit only in a tightly controlled fashion. Using tokens to protect PANs becomes ineffectual if a tokenization system is breached; therefore, securing the tokenization system itself is extremely important. System operations, limitations and evolution First-generation tokenization systems use a database to map from live data to surrogate substitute tokens and back. This requires the storage, management, and continuous backup for every new transaction added to the token database to avoid data loss. Another problem is ensuring consistency across data centers, requiring continuous synchronization of token databases. Significant consistency, availability and performance trade-offs, per the CAP theorem, are unavoidable with this approach. This overhead adds complexity to real-time transaction processing to avoid data loss and to assure data integrity across data centers, and also limits scale. Storing all sensitive data in one service creates an attractive target for attack and compromise, and introduces privacy and legal risk in the aggregation of data, particularly in the EU. Another limitation of tokenization technologies is measuring the level of security for a given solution through independent validation. With the lack of standards, the latter is critical to establish the strength of tokenization offered when tokens are used for regulatory compliance. The PCI Council recommends independent vetting and validation of any claims of security and compliance: "Merchants considering the use of tokenization should perform a thorough evaluation and risk analysis to identify and document the unique characteristics of their particular implementation, including all interactions with payment card data and the particular tokenization systems and processes." The method of generating tokens may also have limitations from a security perspective. 
With concerns about security and attacks on random number generators, which are a common choice for the generation of tokens and token mapping tables, scrutiny must be applied to ensure proven and validated methods are used versus arbitrary design. Random-number generators have limitations in terms of speed, entropy, seeding and bias, and security properties must be carefully analysed and measured to avoid predictability and compromise. With tokenization's increasing adoption, new tokenization technology approaches have emerged to remove such operational risks and complexities and to enable increased scale suited to emerging big data use cases and high performance transaction processing, especially in financial services and banking. In addition to conventional tokenization methods, Protegrity provides additional security through its so-called "obfuscation layer." This creates a barrier that prevents not only regular users from accessing information they would not otherwise see, but also privileged users who have access, such as database administrators. Stateless tokenization allows live data elements to be mapped to surrogate values randomly, without relying on a database, while maintaining the isolation properties of tokenization. In November 2014, American Express released its token service, which meets the EMV tokenization standard. Other notable examples of tokenization-based payment systems, according to the EMVCo standard, include Google Wallet, Apple Pay, Samsung Pay, Microsoft Wallet, Fitbit Pay and Garmin Pay. Visa uses tokenization techniques to provide secure online and mobile shopping. Using blockchain, as opposed to relying on trusted third parties, it is possible to run highly accessible, tamper-resistant databases for transactions. With the help of blockchain, tokenization is the process of converting the value of a tangible or intangible asset into a token that can be exchanged on the network. This enables the tokenization of conventional financial assets, for instance, by transforming rights into a digital token backed by the asset itself using blockchain technology. Besides that, tokenization enables the simple and efficient compartmentalization and management of data across multiple users. Individual tokens created through tokenization can be used to split ownership and partially resell an asset. Consequently, only entities with the appropriate token can access the data. Numerous blockchain companies support asset tokenization. In 2019, eToro acquired Firmo and renamed it eToroX. Through its Token Management Suite, which is backed by USD-pegged stablecoins, eToroX enables asset tokenization. The tokenization of equity is facilitated by STOKR, a platform that links investors with small and medium-sized businesses. Tokens issued through the STOKR platform are legally recognized as transferable securities under European Union capital market regulations. Breakers enable tokenization of intellectual property, allowing content creators to issue their own digital tokens. Tokens can be distributed to a variety of project participants. Without intermediaries or a governing body, content creators can integrate reward-sharing features into the token. Application to alternative payment systems Building an alternate payments system requires a number of entities working together in order to deliver near-field communication (NFC) or other technology-based payment services to end users. 
One of the issues is interoperability between the players, and to resolve this issue the role of a trusted service manager (TSM) has been proposed to establish a technical link between mobile network operators (MNOs) and providers of services, so that these entities can work together. Tokenization can play a role in mediating such services. Tokenization as a security strategy lies in the ability to replace a real card number with a surrogate (target removal) and the subsequent limitations placed on the surrogate card number (risk reduction). If the surrogate value can be used in an unlimited fashion or even in a broadly applicable manner, the token value gains as much value as the real credit card number. In these cases, the token may be secured by a second dynamic token that is unique for each transaction and also associated with a specific payment card. Examples of dynamic, transaction-specific tokens include cryptograms used in the EMV specification. Application to PCI DSS standards The Payment Card Industry Data Security Standard, an industry-wide set of guidelines that must be met by any organization that stores, processes, or transmits cardholder data, mandates that credit card data must be protected when stored. Tokenization, as applied to payment card data, is often implemented to meet this mandate, replacing credit card and ACH numbers in some systems with a random value or string of characters. Tokens can be formatted in a variety of ways. Some token service providers or tokenization products generate the surrogate values in such a way as to match the format of the original sensitive data. In the case of payment card data, a token might be the same length as a Primary Account Number (bank card number) and contain elements of the original data such as the last four digits of the card number. When a payment card authorization request is made to verify the legitimacy of a transaction, a token might be returned to the merchant instead of the card number, along with the authorization code for the transaction. The token is stored in the receiving system while the actual cardholder data is mapped to the token in a secure tokenization system. Storage of tokens and payment card data must comply with current PCI standards, including the use of strong cryptography. Standards (ANSI, the PCI Council, Visa, and EMV) Tokenization is currently in standards definition in ANSI X9 as X9.119 Part 2. X9 is responsible for the industry standards for financial cryptography and data protection including payment card PIN management, credit and debit card encryption and related technologies and processes. The PCI Council has also stated support for tokenization in reducing risk in data breaches, when combined with other technologies such as Point-to-Point Encryption (P2PE) and assessments of compliance to PCI DSS guidelines. Visa Inc. released Visa Tokenization Best Practices for tokenization uses in credit and debit card handling applications and services. In March 2014, EMVCo LLC released its first payment tokenization specification for EMV. PCI DSS is the most frequently utilized standard for tokenization systems used by payment industry players. Risk reduction Tokenization can make it more difficult for attackers to gain access to sensitive data outside of the tokenization system or service. Implementation of tokenization may simplify the requirements of the PCI DSS, as systems that no longer store or process sensitive data may have a reduction of applicable controls required by the PCI DSS guidelines. 
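As an illustration of the format-matching tokens just described, the sketch below generates a surrogate that keeps the length and the last four digits of a card number while randomizing the rest. This is a simplified, hypothetical scheme, not the format used by any particular token service provider, and a production system would also take care not to emit values that could collide with or pass as real PANs (for example, Luhn-valid numbers).

import secrets

def format_preserving_token(pan: str) -> str:
    """Return a surrogate with the same number of digits as the PAN, preserving the last four."""
    digits = pan.replace(" ", "")
    random_part = "".join(secrets.choice("0123456789") for _ in range(len(digits) - 4))
    return random_part + digits[-4:]

print(format_preserving_token("4111 1111 1111 1111"))  # e.g. '5830174926521111' (random digits + original last four)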
As a security best practice, independent assessment and validation of any technologies used for data protection, including tokenization, must be in place to establish the security and strength of the method and implementation before any claims of privacy compliance, regulatory compliance, and data security can be made. This validation is particularly important in tokenization, as the tokens are shared externally in general use and thus exposed in high-risk, low-trust environments. The infeasibility of reversing a token or set of tokens to live sensitive data must be established using industry-accepted measurements and proofs by appropriate experts independent of the service or solution provider. Restrictions on token use Not all organizational data can be tokenized, and it needs to be examined and filtered. When databases are utilized on a large scale, they expand exponentially, causing the search process to take longer, restricting system performance, and increasing backup processes. A database that links sensitive information to tokens is called a vault. With the addition of new data, the vault's maintenance workload increases significantly. To ensure database consistency, token databases need to be continuously synchronized. Apart from that, secure communication channels must be built between sensitive data and the vault so that data is not compromised on the way to or from storage. See also Adaptive Redaction PAN truncation Format preserving encryption References External links Cloud vs Payment – Introduction to tokenization via cloud payments. Cryptography
Tokenization (data security)
[ "Mathematics", "Engineering" ]
4,192
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
61,476
https://en.wikipedia.org/wiki/Radius%20of%20convergence
In mathematics, the radius of convergence of a power series is the radius of the largest disk at the center of the series in which the series converges. It is either a non-negative real number or ∞. When it is positive, the power series converges absolutely and uniformly on compact sets inside the open disk of radius equal to the radius of convergence, and it is the Taylor series of the analytic function to which it converges. In case of multiple singularities of a function (singularities are those values of the argument for which the function is not defined), the radius of convergence is the shortest or minimum of all the respective distances (which are all non-negative numbers) calculated from the center of the disk of convergence to the respective singularities of the function. Definition For a power series f defined as: f(z) = Σ from n = 0 to ∞ of cₙ(z − a)ⁿ, where a is a complex constant, the center of the disk of convergence, cₙ is the n-th complex coefficient, and z is a complex variable. The radius of convergence r is a nonnegative real number or ∞ such that the series converges if |z − a| < r and diverges if |z − a| > r. Some may prefer an alternative definition, as existence is obvious: r = sup { |z − a| : the series converges at z }. On the boundary, that is, where |z − a| = r, the behavior of the power series may be complicated, and the series may converge for some values of z and diverge for others. The radius of convergence is infinite if the series converges for all complex numbers z. Finding the radius of convergence Two cases arise: The first case is theoretical: when you know all the coefficients then you take certain limits and find the precise radius of convergence. The second case is practical: when you construct a power series solution of a difficult problem you typically will only know a finite number of terms in a power series, anywhere from a couple of terms to a hundred terms. In this second case, extrapolating a plot estimates the radius of convergence. Theoretical radius The radius of convergence can be found by applying the root test to the terms of the series. The root test uses the number C = lim sup as n → ∞ of |cₙ(z − a)ⁿ|^(1/n) = |z − a| · lim sup as n → ∞ of |cₙ|^(1/n), where "lim sup" denotes the limit superior. The root test states that the series converges if C < 1 and diverges if C > 1. It follows that the power series converges if the distance from z to the center a is less than r = 1 / (lim sup as n → ∞ of |cₙ|^(1/n)) and diverges if the distance exceeds that number; this statement is the Cauchy–Hadamard theorem. Note that r = 1/0 is interpreted as an infinite radius, meaning that f is an entire function. The limit involved in the ratio test is usually easier to compute, and when that limit exists, it shows that the radius of convergence is finite. This is shown as follows. The ratio test says the series converges if lim as n → ∞ of |cₙ₊₁(z − a)ⁿ⁺¹| / |cₙ(z − a)ⁿ| < 1. That is equivalent to |z − a| < lim as n → ∞ of |cₙ| / |cₙ₊₁|, so the radius of convergence is r = lim as n → ∞ of |cₙ / cₙ₊₁| when this limit exists. Practical estimation of radius in the case of real coefficients Usually, in scientific applications, only a finite number of coefficients are known. Typically, as n increases, these coefficients settle into a regular behavior determined by the nearest radius-limiting singularity. In this case, two main techniques have been developed, based on the fact that the coefficients of a Taylor series are roughly exponential with ratio 1/r, where r is the radius of convergence. The basic case is when the coefficients ultimately share a common sign or alternate in sign. As pointed out earlier in the article, in many cases the limit of cₙ/cₙ₋₁ exists, and in this case 1/r = lim as n → ∞ of cₙ/cₙ₋₁. A negative value of this limit means the convergence-limiting singularity is on the negative axis. Estimate this limit by plotting cₙ/cₙ₋₁ versus 1/n, and graphically extrapolate to 1/n = 0 (effectively n = ∞) via a linear fit. 
The intercept with 1/n = 0 estimates the reciprocal of the radius of convergence, 1/r. This plot is called a Domb–Sykes plot. The more complicated case is when the signs of the coefficients have a more complex pattern. Mercer and Roberts proposed the following procedure. Define an associated sequence from the coefficients and plot its finitely many known values versus 1/n, graphically extrapolating to 1/n = 0 via a linear fit. The intercept with 1/n = 0 estimates the reciprocal of the radius of convergence, 1/r. This procedure also estimates two other characteristics of the convergence-limiting singularity. Suppose the nearest singularity is of degree p and has angle θ to the real axis. Then the slope of the linear fit given above is determined by p, and θ can be estimated from the intercept of a second linear fit. Radius of convergence in complex analysis A power series with a positive radius of convergence can be made into a holomorphic function by taking its argument to be a complex variable. The radius of convergence can be characterized by the following theorem: The radius of convergence of a power series f centered on a point a is equal to the distance from a to the nearest point where f cannot be defined in a way that makes it holomorphic. The set of all points whose distance to a is strictly less than the radius of convergence is called the disk of convergence. The nearest point means the nearest point in the complex plane, not necessarily on the real line, even if the center and all coefficients are real. For example, the function f(z) = 1/(1 + z²) has no singularities on the real line, since 1 + z² has no real roots. Its Taylor series about 0 is given by Σ from n = 0 to ∞ of (−1)ⁿ z²ⁿ. The root test shows that its radius of convergence is 1. In accordance with this, the function f(z) has singularities at ±i, which are at a distance 1 from 0. For a proof of this theorem, see analyticity of holomorphic functions. A simple example The arctangent function of trigonometry can be expanded in a power series: arctan(z) = z − z³/3 + z⁵/5 − z⁷/7 + ⋯. It is easy to apply the root test in this case to find that the radius of convergence is 1. A more complicated example Consider this power series: z/(e^z − 1) = Σ from n = 0 to ∞ of (Bₙ/n!) zⁿ, where the rational numbers Bₙ are the Bernoulli numbers. It may be cumbersome to try to apply the ratio test to find the radius of convergence of this series. But the theorem of complex analysis stated above quickly solves the problem. At z = 0, there is in effect no singularity since the singularity is removable. The only non-removable singularities are therefore located at the other points where the denominator is zero. We solve e^z − 1 = 0 by recalling that if z = x + iy then e^z = e^x e^(iy), and then take x and y to be real. Since y is real, the absolute value of e^(iy) is necessarily 1. Therefore, the absolute value of e^z can be 1 only if e^x is 1; since x is real, that happens only if x = 0. Therefore z is purely imaginary and e^(iy) = cos(y) + i sin(y) = 1. Since y is real, that happens only if cos(y) = 1 and sin(y) = 0, so that y is an integer multiple of 2π. Consequently the singular points of this function occur at z = a nonzero integer multiple of 2πi. The singularities nearest 0, which is the center of the power series expansion, are at ±2πi. The distance from the center to either of those points is 2π, so the radius of convergence is 2π. Convergence on the boundary If the power series is expanded around the point a and the radius of convergence is r, then the set of all points z such that |z − a| = r is a circle called the boundary of the disk of convergence. A power series may diverge at every point on the boundary, or diverge on some points and converge at other points, or converge at all the points on the boundary. 
Furthermore, even if the series converges everywhere on the boundary (even uniformly), it does not necessarily converge absolutely. Example 1: The power series for the function f(z) = 1/(1 − z), expanded around z = 0, which is simply Σ from n = 0 to ∞ of zⁿ, has radius of convergence 1 and diverges at every point on the boundary. Example 2: The power series for g(z) = −ln(1 − z), expanded around z = 0, which is Σ from n = 1 to ∞ of zⁿ/n, has radius of convergence 1, and diverges for z = 1 but converges for all other points on the boundary. The function f(z) of Example 1 is the derivative of g(z). Example 3: The power series Σ from n = 1 to ∞ of zⁿ/n² has radius of convergence 1 and converges everywhere on the boundary absolutely. If h(z) is the function represented by this series on the unit disk, then the derivative of h(z) is equal to g(z)/z with g of Example 2. It turns out that h(z) is the dilogarithm function. Example 4: The power series has radius of convergence 1 and converges uniformly on the entire boundary |z| = 1, but does not converge absolutely on the boundary. Rate of convergence If we expand the function sin(x) = Σ from n = 0 to ∞ of (−1)ⁿ x^(2n+1)/(2n+1)! around the point x = 0, we find out that the radius of convergence of this series is ∞, meaning that this series converges for all complex numbers. However, in applications, one is often interested in the precision of a numerical answer. Both the number of terms and the value at which the series is to be evaluated affect the accuracy of the answer. For example, if we want to calculate sin(0.1) accurate up to five decimal places, we only need the first two terms of the series. However, if we want the same precision for x = 1 we must evaluate and sum the first five terms of the series. For x = 10, one requires the first 18 terms of the series, and for x = 100 we need to evaluate the first 141 terms. So for these particular values the fastest convergence of a power series expansion is at the center, and as one moves away from the center of convergence, the rate of convergence slows down until you reach the boundary (if it exists) and cross over, in which case the series will diverge. Abscissa of convergence of a Dirichlet series An analogous concept is the abscissa of convergence of a Dirichlet series Σ from n = 1 to ∞ of aₙ/nˢ. Such a series converges if the real part of s is greater than a particular number depending on the coefficients aₙ: the abscissa of convergence. Notes References See also Abel's theorem Convergence tests Root test External links What is radius of convergence? Analytic functions Convergence (mathematics) Mathematical physics Radii
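The ratio-test formula and the extrapolation idea described above lend themselves to a quick numerical check. The Python sketch below estimates the radius of convergence from a finite number of coefficients of −ln(1 − z/2), whose nearest singularity is at z = 2; the series, the number of coefficients used and the two-point extrapolation are all illustrative choices, not a full Domb–Sykes analysis.

from fractions import Fraction

# Taylor coefficients of -ln(1 - z/2) about 0: c_n = 1 / (n * 2**n) for n >= 1,
# so the radius of convergence should come out as 2.
coeffs = [Fraction(1, n * 2**n) for n in range(1, 40)]

# Ratio-test estimate: r is approximately |c_n / c_{n+1}| for large n.
ratios = [float(coeffs[k] / coeffs[k + 1]) for k in range(len(coeffs) - 1)]
print(ratios[0], ratios[10], ratios[-1])   # 4.0, 2.1818..., 2.0526... -> approaching 2

# The ratios here equal 2*(1 + 1/n), i.e. they are linear in 1/n,
# so a linear extrapolation to 1/n = 0 recovers the radius exactly.
n1, n2 = len(coeffs) - 2, len(coeffs) - 1  # the two largest n with known ratios
r1, r2 = ratios[n1 - 1], ratios[n2 - 1]
slope = (r2 - r1) / (1 / n2 - 1 / n1)
print(r2 - slope * (1 / n2))               # -> 2.0, the radius of convergence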
Radius of convergence
[ "Physics", "Mathematics" ]
1,958
[ "Sequences and series", "Functions and mappings", "Convergence (mathematics)", "Mathematical structures", "Applied mathematics", "Theoretical physics", "Mathematical objects", "Mathematical relations", "Mathematical physics" ]
61,580
https://en.wikipedia.org/wiki/Electrical%20resistivity%20and%20conductivity
Electrical resistivity (also called volume resistivity or specific electrical resistance) is a fundamental specific property of a material that measures its electrical resistance or how strongly it resists electric current. A low resistivity indicates a material that readily allows electric current. Resistivity is commonly represented by the Greek letter ρ (rho). The SI unit of electrical resistivity is the ohm-metre (Ω⋅m). For example, if a 1 m × 1 m × 1 m solid cube of material has sheet contacts on two opposite faces, and the resistance between these contacts is 1 Ω, then the resistivity of the material is 1 Ω⋅m. Electrical conductivity (or specific conductance) is the reciprocal of electrical resistivity. It represents a material's ability to conduct electric current. It is commonly signified by the Greek letter σ (sigma), but κ (kappa) (especially in electrical engineering) and γ (gamma) are sometimes used. The SI unit of electrical conductivity is siemens per metre (S/m). Resistivity and conductivity are intensive properties of materials, giving the opposition of a standard cube of material to current. Electrical resistance and conductance are corresponding extensive properties that give the opposition of a specific object to electric current. Definition Ideal case In an ideal case, cross-section and physical composition of the examined material are uniform across the sample, and the electric field and current density are both parallel and constant everywhere. Many resistors and conductors do in fact have a uniform cross section with a uniform flow of electric current, and are made of a single material, so that this is a good model. When this is the case, the resistance of the conductor is directly proportional to its length and inversely proportional to its cross-sectional area, where the electrical resistivity ρ (Greek: rho) is the constant of proportionality. This is written as: ρ = R·A/ℓ, where R is the electrical resistance of a uniform specimen of the material, ℓ is the length of the specimen, and A is its cross-sectional area. The resistivity can be expressed using the SI unit ohm metre (Ω⋅m) — i.e. ohms multiplied by square metres (for the cross-sectional area) then divided by metres (for the length). Both resistance and resistivity describe how difficult it is to make electrical current flow through a material, but unlike resistance, resistivity is an intrinsic property and does not depend on geometric properties of a material. This means that all pure copper (Cu) wires (which have not been subjected to distortion of their crystalline structure etc.), irrespective of their shape and size, have the same resistivity, but a long, thin copper wire has a much larger resistance than a thick, short copper wire. Every material has its own characteristic resistivity. For example, rubber has a far larger resistivity than copper. In a hydraulic analogy, passing current through a high-resistivity material is like pushing water through a pipe full of sand - while passing current through a low-resistivity material is like pushing water through an empty pipe. If the pipes are the same size and shape, the pipe full of sand has higher resistance to flow. Resistance, however, is not determined by the presence or absence of sand. It also depends on the length and width of the pipe: short or wide pipes have lower resistance than narrow or long pipes. The above equation can be transposed to get Pouillet's law (named after Claude Pouillet): R = ρ·ℓ/A. The resistance of a given element is proportional to the length, but inversely proportional to the cross-sectional area. 
For example, if ℓ = 1 m and A = 1 m² (forming a cube with perfectly conductive contacts on opposite faces), then the resistance of this element in ohms is numerically equal to the resistivity of the material it is made of in Ω⋅m. Conductivity, σ, is the inverse of resistivity: σ = 1/ρ. Conductivity has SI units of siemens per metre (S/m). General scalar quantities If the geometry is more complicated, or if the resistivity varies from point to point within the material, the current and electric field will be functions of position. Then it is necessary to use a more general expression in which the resistivity at a particular point is defined as the ratio of the electric field to the density of the current it creates at that point: ρ = E/J, where E is the magnitude of the electric field and J is the magnitude of the current density. The current density is parallel to the electric field by necessity. Conductivity is the inverse (reciprocal) of resistivity. Here, it is given by: σ = 1/ρ = J/E. For example, rubber is a material with large ρ and small σ — because even a very large electric field in rubber makes almost no current flow through it. On the other hand, copper is a material with small ρ and large σ — because even a small electric field pulls a lot of current through it. This expression simplifies to the formula given above under "ideal case" when the resistivity is constant in the material and the geometry has a uniform cross-section. In this case, the electric field and current density are constant and parallel. Derivation of the constant case from the general case We will combine three equations. Assume the geometry has a uniform cross-section and the resistivity is constant in the material. Then the electric field and current density are constant and parallel, and by the general definition of resistivity, we obtain ρ = E/J. Since the electric field is constant, it is given by the total voltage V across the conductor divided by the length ℓ of the conductor: E = V/ℓ. Since the current density is constant, it is equal to the total current divided by the cross sectional area: J = I/A. Plugging in the values of E and J into the first expression, we obtain: ρ = (V/ℓ)/(I/A) = V·A/(I·ℓ). Finally, we apply Ohm's law, V = I·R, to obtain ρ = R·A/ℓ. Tensor resistivity When the resistivity of a material has a directional component, the most general definition of resistivity must be used. It starts from the tensor-vector form of Ohm's law, which relates the electric field inside a material to the electric current flow. This equation is completely general, meaning it is valid in all cases, including those mentioned above. However, this definition is the most complicated, so it is only directly used in anisotropic cases, where the more simple definitions cannot be applied. If the material is not anisotropic, it is safe to ignore the tensor-vector definition, and use a simpler expression instead. Here, anisotropic means that the material has different properties in different directions. For example, a crystal of graphite consists microscopically of a stack of sheets, and current flows very easily through each sheet, but much less easily from one sheet to the adjacent one. In such cases, the current does not flow in exactly the same direction as the electric field. Thus, the appropriate equations are generalized to the three-dimensional tensor form: J = σE and E = ρJ, where the conductivity σ and resistivity ρ are rank-2 tensors, and electric field E and current density J are vectors. These tensors can be represented by 3×3 matrices, the vectors with 3×1 matrices, with matrix multiplication used on the right side of these equations. 
In matrix form, the resistivity relation is given by: [Ex, Ey, Ez] = [[ρxx, ρxy, ρxz], [ρyx, ρyy, ρyz], [ρzx, ρzy, ρzz]] [Jx, Jy, Jz], where Ex, Ey, Ez are the components of the electric field vector, Jx, Jy, Jz are the components of the current density vector, and the ρij are the components of the resistivity tensor. Equivalently, resistivity can be given in the more compact Einstein notation: Ei = ρij Jj. In either case, the resulting expression for each electric field component is: Ex = ρxx Jx + ρxy Jy + ρxz Jz, Ey = ρyx Jx + ρyy Jy + ρyz Jz, and Ez = ρzx Jx + ρzy Jy + ρzz Jz. Since the choice of the coordinate system is free, the usual convention is to simplify the expression by choosing an x-axis parallel to the current direction, so Jy = Jz = 0. This leaves: ρxx = Ex/Jx, ρyx = Ey/Jx, and ρzx = Ez/Jx. Conductivity is defined similarly: [Jx, Jy, Jz] = [[σxx, σxy, σxz], [σyx, σyy, σyz], [σzx, σzy, σzz]] [Ex, Ey, Ez], or Ji = σij Ej, both resulting in expressions such as Jx = σxx Ex + σxy Ey + σxz Ez. Looking at the two expressions, ρ and σ are the matrix inverse of each other. However, in the most general case, the individual matrix elements are not necessarily reciprocals of one another; for example, σxx may not be equal to 1/ρxx. This can be seen in the Hall effect, where ρxy is nonzero. In the Hall effect, due to rotational invariance about the z-axis, ρyy = ρxx and ρyx = −ρxy, so the relation between resistivity and conductivity simplifies to: σxx = ρxx/(ρxx² + ρxy²) and σxy = −ρxy/(ρxx² + ρxy²). If the electric field is parallel to the applied current, ρxy and ρxz are zero. When they are zero, one number, ρxx, is enough to describe the electrical resistivity. It is then written as simply ρ, and this reduces to the simpler expression. Conductivity and current carriers Relation between current density and electric current velocity Electric current is the ordered movement of electric charges. Causes of conductivity Band theory simplified According to elementary quantum mechanics, an electron in an atom or crystal can only have certain precise energy levels; energies between these levels are impossible. When a large number of such allowed levels have close-spaced energy values – i.e. have energies that differ only minutely – those close energy levels in combination are called an "energy band". There can be many such energy bands in a material, depending on the atomic number of the constituent atoms and their distribution within the crystal. The material's electrons seek to minimize the total energy in the material by settling into low energy states; however, the Pauli exclusion principle means that only one can exist in each such state. So the electrons "fill up" the band structure starting from the bottom. The characteristic energy level up to which the electrons have filled is called the Fermi level. The position of the Fermi level with respect to the band structure is very important for electrical conduction: Only electrons in energy levels near or above the Fermi level are free to move within the broader material structure, since the electrons can easily jump among the partially occupied states in that region. In contrast, the low energy states are completely filled with a fixed limit on the number of electrons at all times, and the high energy states are empty of electrons at all times. Electric current consists of a flow of electrons. In metals there are many electron energy levels near the Fermi level, so there are many electrons available to move. This is what causes the high electronic conductivity of metals. An important part of band theory is that there may be forbidden bands of energy: energy intervals that contain no energy levels. In insulators and semiconductors, the number of electrons is just the right amount to fill a certain integer number of low energy bands, exactly to the boundary. In this case, the Fermi level falls within a band gap. Since there are no available states near the Fermi level, and the electrons are not freely movable, the electronic conductivity is very low. 
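A rough sense of how strongly a band gap suppresses conduction can be had from the Boltzmann factor exp(−Eg/2kT) that governs the thermal excitation of carriers across the gap in an intrinsic semiconductor. The Python sketch below is a back-of-the-envelope illustration using common textbook band-gap values; it is an assumption-laden order-of-magnitude estimate, not a calculation taken from the article.

import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def activation_factor(band_gap_ev: float, temp_k: float = 300.0) -> float:
    """Boltzmann factor exp(-Eg / 2kT) controlling intrinsic carrier excitation."""
    return math.exp(-band_gap_ev / (2 * K_B_EV * temp_k))

# Silicon (Eg ~ 1.1 eV) versus a wide-gap insulator such as diamond (Eg ~ 5.5 eV) at 300 K:
print(f"{activation_factor(1.1):.1e}")   # ~6e-10: few carriers, but enough to matter when doped or heated
print(f"{activation_factor(5.5):.1e}")   # ~6e-47: essentially no thermally excited carriers

The roughly 37 orders of magnitude between these two factors is one simple way to see why a modest difference in band gap separates semiconductors from good insulators.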
In metals A metal consists of a lattice of atoms, each with an outer shell of electrons that freely dissociate from their parent atoms and travel through the lattice. This is also known as a positive ionic lattice. This 'sea' of dissociable electrons allows the metal to conduct electric current. When an electrical potential difference (a voltage) is applied across the metal, the resulting electric field causes electrons to drift towards the positive terminal. The actual drift velocity of electrons is typically small, on the order of magnitude of metres per hour. However, due to the sheer number of moving electrons, even a slow drift velocity results in a large current density. The mechanism is similar to the transfer of momentum of balls in a Newton's cradle, but the rapid propagation of electrical energy along a wire is due not to mechanical forces, but to the propagation of an energy-carrying electromagnetic field guided by the wire. Most metals have electrical resistance. In simpler models (non-quantum-mechanical models) this can be explained by replacing electrons and the crystal lattice by a wave-like structure. When the electron wave travels through the lattice, the waves interfere, which causes resistance. The more regular the lattice is, the less disturbance happens and thus the less resistance. The amount of resistance is thus mainly caused by two factors. First, it is caused by the temperature and thus the amount of vibration of the crystal lattice. Higher temperatures cause bigger vibrations, which act as irregularities in the lattice. Second, the purity of the metal is relevant, as a mixture of different ions is also an irregularity. The small decrease in conductivity on melting of pure metals is due to the loss of long-range crystalline order. The short-range order remains, and strong correlation between positions of ions results in coherence between waves diffracted by adjacent ions. In semiconductors and insulators In metals, the Fermi level lies in the conduction band (see Band Theory, above) giving rise to free conduction electrons. However, in semiconductors the position of the Fermi level is within the band gap, about halfway between the conduction band minimum (the bottom of the first band of unfilled electron energy levels) and the valence band maximum (the top of the band below the conduction band, of filled electron energy levels). That applies for intrinsic (undoped) semiconductors. This means that at absolute zero temperature, there would be no free conduction electrons, and the resistance is infinite. However, the resistance decreases as the charge carrier density (i.e., without introducing further complications, the density of electrons) in the conduction band increases. In extrinsic (doped) semiconductors, dopant atoms increase the majority charge carrier concentration by donating electrons to the conduction band or producing holes in the valence band. (A "hole" is a position where an electron is missing; such holes can behave in a similar way to electrons.) For both types of donor or acceptor atoms, increasing dopant density reduces resistance. Hence, highly doped semiconductors behave metallically. At very high temperatures, the contribution of thermally generated carriers dominates over the contribution from dopant atoms, and the resistance decreases exponentially with temperature. In ionic liquids/electrolytes In electrolytes, electrical conduction happens not by band electrons or holes, but by full atomic species (ions) traveling, each carrying an electrical charge. 
The resistivity of ionic solutions (electrolytes) varies tremendously with concentration – while distilled water is almost an insulator, salt water is a reasonable electrical conductor. Conduction in ionic liquids is also controlled by the movement of ions, but here we are talking about molten salts rather than solvated ions. In biological membranes, currents are carried by ionic salts. Small holes in cell membranes, called ion channels, are selective to specific ions and determine the membrane resistance. The concentration of ions in a liquid (e.g., in an aqueous solution) depends on the degree of dissociation of the dissolved substance, characterized by a dissociation coefficient α, which is the ratio of the concentration of ions N to the concentration of molecules of the dissolved substance N0: α = N/N0. The specific electrical conductivity (σ) of a solution is equal to: σ = q(b⁺ + b⁻)αN0, where q is the module of the ion charge, b⁺ and b⁻ are the mobilities of positively and negatively charged ions, N0 is the concentration of molecules of the dissolved substance, and α is the coefficient of dissociation. Superconductivity The electrical resistivity of a metallic conductor decreases gradually as temperature is lowered. In normal (that is, non-superconducting) conductors, such as copper or silver, this decrease is limited by impurities and other defects. Even near absolute zero, a real sample of a normal conductor shows some resistance. In a superconductor, the resistance drops abruptly to zero when the material is cooled below its critical temperature. In a normal conductor, the current is driven by a voltage gradient, whereas in a superconductor, there is no voltage gradient and the current is instead related to the phase gradient of the superconducting order parameter. A consequence of this is that an electric current flowing in a loop of superconducting wire can persist indefinitely with no power source. In a class of superconductors known as type II superconductors, including all known high-temperature superconductors, an extremely low but nonzero resistivity appears at temperatures not too far below the nominal superconducting transition when an electric current is applied in conjunction with a strong magnetic field, which may be caused by the electric current. This is due to the motion of magnetic vortices in the electronic superfluid, which dissipates some of the energy carried by the current. The resistance due to this effect is tiny compared with that of non-superconducting materials, but must be taken into account in sensitive experiments. However, as the temperature decreases far enough below the nominal superconducting transition, these vortices can become frozen so that the resistance of the material becomes truly zero. Plasma Plasmas are very good conductors and electric potentials play an important role. The potential as it exists on average in the space between charged particles, independent of the question of how it can be measured, is called the plasma potential, or space potential. If an electrode is inserted into a plasma, its potential generally lies considerably below the plasma potential, due to what is termed a Debye sheath. The good electrical conductivity of plasmas makes their electric fields very small. This results in the important concept of quasineutrality, which says the density of negative charges is approximately equal to the density of positive charges over large volumes of the plasma (ne ≈ ⟨Z⟩ni), but on the scale of the Debye length there can be charge imbalance. 
In the special case that double layers are formed, the charge separation can extend some tens of Debye lengths. The magnitude of the potentials and electric fields must be determined by means other than simply finding the net charge density. A common example is to assume that the electrons satisfy the Boltzmann relation: n_e ∝ exp(eΦ/(k_B T_e)). Differentiating this relation provides a means to calculate the electric field from the density: E = −(k_B T_e/e)(∇n_e/n_e). (∇ is the vector gradient operator; see nabla symbol and gradient for more information.) It is possible to produce a plasma that is not quasineutral. An electron beam, for example, has only negative charges. The density of a non-neutral plasma must generally be very low, or its extent must be very small. Otherwise, the repulsive electrostatic force dissipates it. In astrophysical plasmas, Debye screening prevents electric fields from directly affecting the plasma over large distances, i.e., greater than the Debye length. However, the existence of charged particles causes the plasma to generate, and be affected by, magnetic fields. This can and does cause extremely complex behavior, such as the generation of plasma double layers, an object that separates charge over a few tens of Debye lengths. The dynamics of plasmas interacting with external and self-generated magnetic fields are studied in the academic discipline of magnetohydrodynamics. Plasma is often called the fourth state of matter after solids, liquids and gases. It is distinct from these and other lower-energy states of matter. Although it is closely related to the gas phase in that it also has no definite form or volume, it differs from a gas in a number of ways. Resistivity and conductivity of various materials A conductor such as a metal has high conductivity and a low resistivity. An insulator such as glass has low conductivity and a high resistivity. The conductivity of a semiconductor is generally intermediate, but varies widely under different conditions, such as exposure of the material to electric fields or specific frequencies of light, and, most important, with temperature and composition of the semiconductor material. The degree of doping in semiconductors makes a large difference in conductivity. To a point, more doping leads to higher conductivity. The conductivity of a water/aqueous solution is highly dependent on its concentration of dissolved salts, and other chemical species that ionize in the solution. Electrical conductivity of water samples is used as an indicator of how salt-free, ion-free, or impurity-free the sample is; the purer the water, the lower the conductivity (the higher the resistivity). Conductivity measurements in water are often reported as specific conductance, relative to the conductivity of pure water at 25 °C. An EC meter is normally used to measure conductivity in a solution. This table shows the resistivity (ρ), conductivity and temperature coefficient of various materials at 20 °C. The effective temperature coefficient varies with temperature and purity level of the material. The 20 °C value is only an approximation when used at other temperatures. For example, the coefficient becomes lower at higher temperatures for copper, and the value 0.00427 is commonly specified at 0 °C. The extremely low resistivity (high conductivity) of silver is characteristic of metals. 
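A short worked example shows how tabulated resistivities of this kind translate into a component resistance via R = ρ·ℓ/A. The room-temperature resistivities used below for copper, aluminium and nichrome are typical handbook figures, quoted here as assumptions rather than values taken from this article.

```python
import math

def wire_resistance(rho, length_m, diameter_m):
    """Resistance in ohms of a round wire of uniform cross-section: R = rho * L / A."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return rho * length_m / area

# Assumed room-temperature resistivities in ohm·m (typical handbook values):
materials = {"copper": 1.68e-8, "aluminium": 2.65e-8, "nichrome": 1.10e-6}

for name, rho in materials.items():
    R = wire_resistance(rho, length_m=10.0, diameter_m=1.0e-3)  # 10 m of 1 mm wire
    print(f"{name:9s}: {R:.3f} ohm")
```

Ten metres of 1 mm copper wire comes out at roughly 0.2 Ω, while the same geometry in nichrome gives about 14 Ω, illustrating why resistive alloys are used for heating elements and pure metals for wiring.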
George Gamow tidily summed up the nature of the metals' dealings with electrons in his popular science book One, Two, Three...Infinity (1947). More technically, the free electron model gives a basic description of electron flow in metals. Wood is widely regarded as an extremely good insulator, but its resistivity is sensitively dependent on moisture content, with damp wood being a factor of at least 10^10 worse insulator than oven-dry. In any case, a sufficiently high voltage – such as that in lightning strikes or some high-tension power lines – can lead to insulation breakdown and electrocution risk even with apparently dry wood. Temperature dependence Linear approximation The electrical resistivity of most materials changes with temperature. If the temperature does not vary too much, a linear approximation is typically used: ρ(T) = ρ0[1 + α(T − T0)], where α is called the temperature coefficient of resistivity, T0 is a fixed reference temperature (usually room temperature), and ρ0 is the resistivity at temperature T0. The parameter α is an empirical parameter fitted from measurement data. Because the linear approximation is only an approximation, α is different for different reference temperatures. For this reason it is usual to specify the temperature that α was measured at with a suffix, such as α15, and the relationship only holds in a range of temperatures around the reference. When the temperature varies over a large temperature range, the linear approximation is inadequate and a more detailed analysis and understanding should be used. Metals In general, electrical resistivity of metals increases with temperature. Electron–phonon interactions can play a key role. At high temperatures, the resistance of a metal increases linearly with temperature. As the temperature of a metal is reduced, the temperature dependence of resistivity follows a power law function of temperature. Mathematically the temperature dependence of the resistivity of a metal can be approximated through the Bloch–Grüneisen formula: ρ(T) = ρ(0) + A(T/Θ_R)^n ∫ from 0 to Θ_R/T of x^n / ((e^x − 1)(1 − e^(−x))) dx, where ρ(0) is the residual resistivity due to defect scattering, A is a constant that depends on the velocity of electrons at the Fermi surface, the Debye radius and the number density of electrons in the metal. Θ_R is the Debye temperature as obtained from resistivity measurements and matches very closely with the values of Debye temperature obtained from specific heat measurements. n is an integer that depends upon the nature of interaction: n = 5 implies that the resistance is due to scattering of electrons by phonons (as it is for simple metals); n = 3 implies that the resistance is due to s-d electron scattering (as is the case for transition metals); n = 2 implies that the resistance is due to electron–electron interaction. The Bloch–Grüneisen formula is an approximation obtained assuming that the studied metal has a spherical Fermi surface inscribed within the first Brillouin zone and a Debye phonon spectrum. If more than one source of scattering is simultaneously present, Matthiessen's rule (first formulated by Augustus Matthiessen in the 1860s) states that the total resistance can be approximated by adding up several different terms, each with the appropriate value of n. As the temperature of the metal is sufficiently reduced (so as to 'freeze' all the phonons), the resistivity usually reaches a constant value, known as the residual resistivity. This value depends not only on the type of metal, but on its purity and thermal history. The value of the residual resistivity of a metal is decided by its impurity concentration. 
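A brief sketch of the linear approximation just described, using assumed reference values for copper (ρ0 ≈ 1.68×10⁻⁸ Ω·m and α ≈ 0.0039 K⁻¹ at 20 °C, typical published figures rather than values from this article):

```python
def resistivity_linear(T, rho0, alpha, T0=20.0):
    """Linear approximation rho(T) = rho0 * (1 + alpha * (T - T0)), temperatures in deg C."""
    return rho0 * (1.0 + alpha * (T - T0))

rho0_cu  = 1.68e-8   # ohm·m at 20 °C (assumed handbook value)
alpha_cu = 0.0039    # 1/K, approximate temperature coefficient for copper (assumed)

for T in (0.0, 20.0, 100.0):
    rho = resistivity_linear(T, rho0_cu, alpha_cu)
    print(f"{T:6.1f} °C : rho ≈ {rho:.3e} ohm·m")
```

Between 20 °C and 100 °C the model predicts roughly a 30% rise in the resistivity of copper, which is why wire resistance at operating temperature matters in power engineering; outside a modest range around the reference, the Bloch–Grüneisen description above should be used instead.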
Some materials lose all electrical resistivity at sufficiently low temperatures, due to an effect known as superconductivity. An investigation of the low-temperature resistivity of metals was the motivation for Heike Kamerlingh Onnes's experiments that led in 1911 to the discovery of superconductivity. For details see History of superconductivity. Wiedemann–Franz law The Wiedemann–Franz law states that for materials where heat and charge transport is dominated by electrons, the ratio of thermal to electrical conductivity is proportional to the temperature: κ/σ = (π²/3)(k_B/e)² T, where κ is the thermal conductivity, k_B is the Boltzmann constant, e is the electron charge, T is temperature, and σ is the electric conductivity. The constant of proportionality, (π²/3)(k_B/e)², is called the Lorenz number. Semiconductors In general, intrinsic semiconductor resistivity decreases with increasing temperature. The electrons are bumped to the conduction energy band by thermal energy, where they flow freely, and in doing so leave behind holes in the valence band, which also flow freely. The electric resistance of a typical intrinsic (non-doped) semiconductor decreases exponentially with temperature following an Arrhenius model: ρ = ρ0 e^(E_A/(k_B T)), where E_A is an activation energy. An even better approximation of the temperature dependence of the resistivity of a semiconductor is given by the Steinhart–Hart equation: 1/T = A + B ln ρ + C (ln ρ)³, where A, B and C are the so-called Steinhart–Hart coefficients. This equation is used to calibrate thermistors. Extrinsic (doped) semiconductors have a far more complicated temperature profile. As temperature increases starting from absolute zero they first decrease steeply in resistance as the carriers leave the donors or acceptors. After most of the donors or acceptors have lost their carriers, the resistance starts to increase again slightly due to the reducing mobility of carriers (much as in a metal). At higher temperatures, they behave like intrinsic semiconductors as the carriers from the donors/acceptors become insignificant compared to the thermally generated carriers. In non-crystalline semiconductors, conduction can occur by charges quantum tunnelling from one localised site to another. This is known as variable range hopping and has the characteristic form of ρ = A exp(T^(−1/n)), where n = 2, 3, 4, depending on the dimensionality of the system. Kondo insulators Kondo insulators are materials where the resistivity follows the formula ρ(T) = ρ0 + aT² + bT⁵ + c_K ln(μ/T), where ρ0, a, b and c_K are constant parameters; the terms represent, in order, the residual resistivity, the Fermi liquid contribution, a lattice vibrations term and the Kondo effect. Complex resistivity and conductivity When analyzing the response of materials to alternating electric fields (dielectric spectroscopy), in applications such as electrical impedance tomography, it is convenient to replace resistivity with a complex quantity called impedivity (in analogy to electrical impedance). Impedivity is the sum of a real component, the resistivity, and an imaginary component, the reactivity (in analogy to reactance). The magnitude of impedivity is the square root of the sum of squares of magnitudes of resistivity and reactivity. Conversely, in such cases the conductivity must be expressed as a complex number (or even as a matrix of complex numbers, in the case of anisotropic materials) called the admittivity. Admittivity is the sum of a real component called the conductivity and an imaginary component called the susceptivity. An alternative description of the response to alternating currents uses a real (but frequency-dependent) conductivity, along with a real permittivity. 
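To illustrate the complex description, the following sketch combines a real conductivity and a relative permittivity into an admittivity σ + iωε and inverts it to obtain the corresponding impedivity. The tissue-like values of σ, ε_r and the excitation frequency are arbitrary assumptions chosen only for illustration.

```python
import math

def admittivity(sigma, eps_r, freq_hz):
    """Complex admittivity sigma + j*omega*eps, in siemens per metre."""
    eps0 = 8.854e-12                 # vacuum permittivity, F/m
    omega = 2.0 * math.pi * freq_hz  # angular frequency, rad/s
    return complex(sigma, omega * eps_r * eps0)

sigma = 0.2    # S/m, assumed real conductivity
eps_r = 80.0   # relative permittivity, assumed
f     = 50e3   # excitation frequency, Hz (assumed)

Y = admittivity(sigma, eps_r, f)
Z = 1.0 / Y    # impedivity (complex resistivity), ohm·m

print(f"admittivity = {Y.real:.4e} + {Y.imag:.4e}j S/m")
print(f"impedivity  = {Z.real:.4e} + {Z.imag:.4e}j ohm·m")
```

At low frequencies the imaginary part is small compared with the real conductivity, so the material behaves almost resistively; as the frequency rises, the susceptive term grows and the complex description becomes essential.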
The larger the conductivity is, the more quickly the alternating-current signal is absorbed by the material (i.e., the more opaque the material is). For details, see Mathematical descriptions of opacity. Resistance versus resistivity in complicated geometries Even if the material's resistivity is known, calculating the resistance of something made from it may, in some cases, be much more complicated than the formula above. One example is spreading resistance profiling, where the material is inhomogeneous (different resistivity in different places), and the exact paths of current flow are not obvious. In cases like this, the formulas must be replaced with E = ρJ and J = σE, where E and J are now vector fields. These equations, along with the continuity equation for J and Poisson's equation for E, form a set of partial differential equations. In special cases, an exact or approximate solution to these equations can be worked out by hand, but for very accurate answers in complex cases, computer methods like finite element analysis may be required. Resistivity-density product In some applications where the weight of an item is very important, the product of resistivity and density is more important than absolute low resistivity – it is often possible to make the conductor thicker to make up for a higher resistivity; and then a low-resistivity-density-product material (or equivalently a high conductivity-to-density ratio) is desirable. For example, for long-distance overhead power lines, aluminium is frequently used rather than copper (Cu) because it is lighter for the same conductance. Silver, although it is the least resistive metal known, has a high density and performs similarly to copper by this measure, but is much more expensive. Calcium and the alkali metals have the best resistivity-density products, but are rarely used for conductors due to their high reactivity with water and oxygen (and lack of physical strength). Aluminium is far more stable. Toxicity excludes the choice of beryllium. (Pure beryllium is also brittle.) Thus, aluminium is usually the metal of choice when the weight or cost of a conductor is the driving consideration. History John Walsh and the conductivity of a vacuum In a 1774 letter to Dutch-born British scientist Jan Ingenhousz, Benjamin Franklin relates an experiment by another British scientist, John Walsh, that purportedly showed this astonishing fact: Although rarified air conducts electricity better than common air, a vacuum does not conduct electricity at all. However, to this statement a note (based on modern knowledge) was added by the editors—at the American Philosophical Society and Yale University—of the webpage hosting the letter. See also Charge transport mechanisms Chemiresistor Classification of materials based on permittivity Conductivity near the percolation threshold Contact resistance Electrical resistivities of the elements (data page) Electrical resistivity tomography Sheet resistance SI electromagnetism units Skin effect Spitzer resistivity Dielectric strength Notes References Further reading Measuring Electrical Resistivity and Conductivity External links Comparison of the electrical conductivity of various elements in WolframAlpha https://edu-physics.com/2021/01/07/resistivity-of-the-material-of-a-wire-physics-practical/ Physical quantities Materials science
Electrical resistivity and conductivity
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
6,199
[ "Physical phenomena", "Applied and interdisciplinary physics", "Physical quantities", "Quantity", "Materials science", "nan", "Wikipedia categories named after physical quantities", "Physical properties", "Electrical resistance and conductance" ]
61,866
https://en.wikipedia.org/wiki/Max%20Born
Max Born (; 11 December 1882 – 5 January 1970) was a German-British theoretical physicist who was instrumental in the development of quantum mechanics. He also made contributions to solid-state physics and optics and supervised the work of a number of notable physicists in the 1920s and 1930s. Born was awarded the 1954 Nobel Prize in Physics for his "fundamental research in quantum mechanics, especially in the statistical interpretation of the wave function". Born entered the University of Göttingen in 1904, where he met the three renowned mathematicians Felix Klein, David Hilbert, and Hermann Minkowski. He wrote his PhD thesis on the subject of the stability of elastic wires and tapes, winning the university's Philosophy Faculty Prize. In 1905, he began researching special relativity with Minkowski, and subsequently wrote his habilitation thesis on the Thomson model of the atom. A chance meeting with Fritz Haber in Berlin in 1918 led to discussion of how an ionic compound is formed when a metal reacts with a halogen, which is today known as the Born–Haber cycle. In World War I he was originally placed as a radio operator, but his specialist knowledge led to his being moved to research duties on sound ranging. In 1921 Born returned to Göttingen, where he arranged another chair for his long-time friend and colleague James Franck. Under Born, Göttingen became one of the world's foremost centres for physics. In 1925 Born and Werner Heisenberg formulated the matrix mechanics representation of quantum mechanics. The following year, he formulated the now-standard interpretation of the probability density function for ψ*ψ in the Schrödinger equation, for which he was awarded the Nobel Prize in 1954. His influence extended far beyond his own research. Max Delbrück, Siegfried Flügge, Friedrich Hund, Pascual Jordan, Maria Goeppert-Mayer, Lothar Wolfgang Nordheim, Robert Oppenheimer, and Victor Weisskopf all received their PhD degrees under Born at Göttingen, and his assistants included Enrico Fermi, Werner Heisenberg, Gerhard Herzberg, Friedrich Hund, Wolfgang Pauli, Léon Rosenfeld, Edward Teller, and Eugene Wigner. In January 1933, the Nazi Party came to power in Germany, and Born, who was Jewish, was suspended from his professorship at the University of Göttingen. He emigrated to the United Kingdom, where he took a job at St John's College, Cambridge, and wrote a popular science book, The Restless Universe, as well as Atomic Physics, which soon became a standard textbook. In October 1936, he became the Tait Professor of Natural Philosophy at the University of Edinburgh, where, working with German-born assistants E. Walter Kellermann and Klaus Fuchs, he continued his research into physics. Born became a naturalised British subject on 31 August 1939, one day before World War II broke out in Europe. He remained in Edinburgh until 1952. He retired to Bad Pyrmont, in West Germany, and died in a hospital in Göttingen on 5 January 1970. Early life Max Born was born on 11 December 1882 in Breslau (now Wrocław, Poland), which at the time of Born's birth was part of the Prussian Province of Silesia in the German Empire, to a family of Jewish descent. He was one of two children born to Gustav Born, an anatomist and embryologist, who was a professor of embryology at the University of Breslau, and his wife Margarethe (Gretchen) née Kauffmann, from a Silesian family of industrialists. She died when Max was four years old, on 29 August 1886. 
Max had a sister, Käthe, who was born in 1884, and a half-brother, Wolfgang, from his father's second marriage, to Bertha Lipstein. Wolfgang later became Professor of Art History at the City College of New York. Initially educated at the König-Wilhelm-Gymnasium in Breslau, Born entered the University of Breslau in 1901. The German university system allowed students to move easily from one university to another, so he spent summer semesters at Heidelberg University in 1902 and the University of Zurich in 1903. Fellow students at Breslau, Otto Toeplitz and Ernst Hellinger, told Born about the University of Göttingen, and Born went there in April 1904. At Göttingen he found three renowned mathematicians: Felix Klein, David Hilbert and Hermann Minkowski. Very soon after his arrival, Born formed close ties to the latter two men. From the first class he took with Hilbert, Hilbert identified Born as having exceptional abilities and selected him as the lecture scribe, whose function was to write up the class notes for the students' mathematics reading room at the University of Göttingen. Being class scribe put Born into regular, invaluable contact with Hilbert. Hilbert became Born's mentor after selecting him to be the first to hold the unpaid, semi-official position of assistant. Born's introduction to Minkowski came through Born's stepmother, Bertha, as she knew Minkowski from dancing classes in Königsberg. The introduction netted Born invitations to the Minkowski household for Sunday dinners. In addition, while performing his duties as scribe and assistant, Born often saw Minkowski at Hilbert's house. Born's relationship with Klein was more problematic. Born attended a seminar conducted by Klein and professors of applied mathematics, Carl Runge and Ludwig Prandtl, on the subject of elasticity. Although not particularly interested in the subject, Born was obliged to present a paper. He presented one in which, taking the simple case of a curved wire with both ends fixed, he used Hilbert's calculus of variations to determine the configuration that would minimise potential energy and therefore be the most stable. Klein was impressed, and invited Born to submit a thesis on the subject of "Stability of Elastica in a Plane and Space" – a subject near and dear to Klein – which Klein had arranged to be the subject for the prestigious annual Philosophy Faculty Prize offered by the university. Entries could also qualify as doctoral dissertations. Born responded by turning down the offer, as applied mathematics was not his preferred area of study. Klein was greatly offended. Klein had the power to make or break academic careers, so Born felt compelled to atone by submitting an entry for the prize. Because Klein refused to supervise him, Born arranged for Carl Runge to be his supervisor. Woldemar Voigt and Karl Schwarzschild became his other examiners. Starting from his paper, Born developed the equations for the stability conditions. As he became more interested in the topic, he had an apparatus constructed that could test his predictions experimentally. On 13 June 1906, the rector announced that Born had won the prize. A month later, he passed his oral examination and was awarded his PhD in mathematics magna cum laude. On graduation, Born was obliged to perform his military service, which he had deferred while a student. He found himself drafted into the German army, and posted to the 2nd Guards Dragoons "Empress Alexandra of Russia", which was stationed in Berlin. 
His service was brief, as he was discharged early after an asthma attack in January 1907. He then travelled to England, where he was admitted to Gonville and Caius College, Cambridge, and studied physics for six months at the Cavendish Laboratory under J. J. Thomson, George Searle and Joseph Larmor. After Born returned to Germany, the Army re-inducted him, and he served with the elite 1st (Silesian) Life Cuirassiers "Great Elector" until he was again medically discharged after just six weeks' service. He then returned to Breslau, where he worked under the supervision of Otto Lummer and Ernst Pringsheim, hoping to do his habilitation in physics. A minor accident involving Born's black body experiment, a ruptured cooling water hose, and a flooded laboratory, led to Lummer telling him that he would never become a physicist. In 1905, Albert Einstein published his paper On the Electrodynamics of Moving Bodies about special relativity. Born was intrigued, and began researching the subject. He was devastated to discover that Minkowski was also researching special relativity along the same lines, but when he wrote to Minkowski about his results, Minkowski asked him to return to Göttingen and do his habilitation there. Born accepted. Toeplitz helped Born brush up on his matrix algebra so he could work with the four-dimensional Minkowski space matrices used in the latter's project to reconcile relativity with electrodynamics. Born and Minkowski got along well, and their work made good progress, but Minkowski died suddenly of appendicitis on 12 January 1909. The mathematics students had Born speak on their behalf at the funeral. A few weeks later, Born attempted to present their results at a meeting of the Göttingen Mathematics Society. He did not get far before he was publicly challenged by Klein and Max Abraham, who rejected relativity, forcing him to terminate the lecture. However, Hilbert and Runge were interested in Born's work, and, after some discussion with Born, they became convinced of the veracity of his results and persuaded him to give the lecture again. This time he was not interrupted, and Voigt offered to sponsor Born's habilitation thesis. Born subsequently published his talk as an article on "The Theory of the Rigid Electron in the Kinematics of the Principle of Relativity" (), which introduced the concept of Born rigidity. On 23 October Born presented his habilitation lecture on the Thomson model of the atom. Career Berlin and Frankfurt Born settled in as a young academic at Göttingen as a . In Göttingen, Born stayed at a boarding house run by Sister Annie at Dahlmannstraße 17, known as El BoKaReBo. The name was derived from the first letters of the last names of its boarders: "El" for Ella Philipson (a medical student), "Bo" for Born and Hans Bolza (a physics student), "Ka" for Theodore von Kármán (a ), and "Re" for Albrecht Renner (another medical student). A frequent visitor to the boarding house was Paul Peter Ewald, a doctoral student of Arnold Sommerfeld on loan to Hilbert at Göttingen as a special assistant for physics. Richard Courant, a mathematician and , called these people the "in group". In 1912, Born met Hedwig (Hedi) Ehrenberg, the daughter of a Leipzig University law professor, and a friend of Carl Runge's daughter Iris. She was of Jewish background on her father's side, although he had become a practising Lutheran when he got married, as did Max's sister Käthe. 
Despite never practising his religion, Born refused to convert, and his wedding on 2 August 1913 was a garden ceremony. However, he was baptised as a Lutheran in March 1914 by the same pastor who had performed his wedding ceremony. Born regarded "religious professions and churches as a matter of no importance". His decision to be baptised was made partly in deference to his wife, and partly due to his desire to assimilate into German society. The marriage produced three children: two daughters, Irene, born in 1914, and Margarethe (Gritli), born in 1915, and a son, Gustav, born in 1921. Through marriage, Born is related to jurists Victor Ehrenberg, his father-in-law, and Rudolf von Jhering, his wife's maternal grandfather, as well as to philosopher and theologian Hans Ehrenberg, and is a great uncle of British comedian Ben Elton. By the end of 1913, Born had published 27 papers, including important work on relativity and the dynamics of crystal lattices (3 with Theodore von Karman), which became a book. In 1914, he received a letter from Max Planck explaining that a new professor extraordinarius chair of theoretical physics had been created at the University of Berlin. The chair had been offered to Max von Laue, but he had turned it down. Born accepted. The First World War was now raging. Soon after arriving in Berlin in 1915, he enlisted in an Army signals unit. In October, he joined the Artillerie Prüfungskommission, the Army's Berlin-based artillery research and development organisation, under Rudolf Ladenburg, who had established a special unit dedicated to the new technology of sound ranging. In Berlin, Born formed a lifelong friendship with Einstein, who became a frequent visitor to Born's home. Within days of the armistice in November 1918, Planck had the Army release Born. A chance meeting with Fritz Haber that month led to discussion of the manner in which an ionic compound is formed when a metal reacts with a halogen, which is today known as the Born–Haber cycle. Even before Born had taken up the chair in Berlin, von Laue had changed his mind, and decided that he wanted it after all. He arranged with Born and the faculties concerned for them to exchange jobs. In April 1919, Born became professor ordinarius and Director of the Institute of Theoretical Physics on the science faculty at the University of Frankfurt am Main. While there, he was approached by the University of Göttingen, which was looking for a replacement for Peter Debye as Director of the Physical Institute. "Theoretical physics," Einstein advised him, "will flourish wherever you happen to be; there is no other Born to be found in Germany today." In negotiating for the position with the education ministry, Born arranged for another chair, of experimental physics, at Göttingen for his long-time friend and colleague James Franck. In 1919 Elisabeth Bormann joined the Institut für Theoretische Physik as his assistant. She developed the first atomic beams. Working with Born, Bormann was the first to measure the free path of atoms in gases and the size of molecules. Göttingen For the 12 years Born and Franck were at the University of Göttingen (1921 to 1933), Born had a collaborator with shared views on basic scientific concepts—a benefit for teaching and research. 
Born's collaborative approach with experimental physicists was similar to that of Arnold Sommerfeld at the University of Munich, who was ordinarius professor of theoretical physics and Director of the Institute of Theoretical Physics—also a prime mover in the development of quantum theory. Born and Sommerfeld collaborated with experimental physicists to test and advance their theories. In 1922, when lecturing in the United States at the University of Wisconsin–Madison, Sommerfeld sent his student Werner Heisenberg to be Born's assistant. Heisenberg returned to Göttingen in 1923, where he completed his habilitation under Born in 1924, and became a Privatdozent at Göttingen. In 1919 and 1920, Max Born became displeased by the large number of objections against Einstein's relativity, and gave speeches in the winter of 1919 in support of Einstein. Born received pay for his relativity speeches, which helped with expenses through the year of rapid inflation. The speeches, in German, became a book published in 1920, of which Einstein received the proofs before publication. A third edition was published in 1922 and an English translation was published in 1924. Born represented light speed as a function of curvature: "the velocity of light is much greater for some directions of the light ray than its ordinary value c, and other bodies can also attain much greater velocities." In 1925, Born and Heisenberg formulated the matrix mechanics representation of quantum mechanics. On 9 July, Heisenberg gave Born a paper entitled Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen ("Quantum-Theoretical Re-interpretation of Kinematic and Mechanical Relations") to review, and submit for publication. In the paper, Heisenberg formulated quantum theory, avoiding the concrete, but unobservable, representations of electron orbits by using parameters such as transition probabilities for quantum jumps, which necessitated using two indexes corresponding to the initial and final states. When Born read the paper, he recognized the formulation as one which could be transcribed and extended to the systematic language of matrices, which he had learned from his study under Jakob Rosanes at Breslau University. Up until this time, matrices were seldom used by physicists; they were considered to belong to the realm of pure mathematics. Gustav Mie had used them in a paper on electrodynamics in 1912, and Born had used them in his work on the lattice theory of crystals in 1921. While matrices were used in these cases, the algebra of matrices with their multiplication did not enter the picture as they did in the matrix formulation of quantum mechanics. With the help of his assistant and former student Pascual Jordan, Born began immediately to make a transcription and extension, and they submitted their results for publication; the paper was received for publication just 60 days after Heisenberg's paper. A follow-on paper was submitted for publication before the end of the year by all three authors. The result was a surprising formulation: pq − qp = (h/2πi) I, where p and q were matrices for location and momentum, and I is the identity matrix. The left hand side of the equation is not zero because matrix multiplication is not commutative. This formulation was entirely attributable to Born, who also established that all the elements not on the diagonal of the matrix were zero. Born considered that his paper with Jordan contained "the most important principles of quantum mechanics including its extension to electrodynamics." 
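A small numerical sketch illustrates Born's observation. Using the standard truncated harmonic-oscillator matrices for position and momentum (an assumed, textbook choice of q and p, in units where ħ = 1), the commutator pq − qp comes out proportional to the identity matrix with all off-diagonal elements zero, apart from the last diagonal entry, which is an artifact of truncating the infinite matrices.

```python
import numpy as np

N = 6  # truncation size; the true matrices are infinite-dimensional
# Harmonic-oscillator matrix elements in units where hbar = m = omega = 1.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # lowering operator
q = (a + a.T) / np.sqrt(2.0)                 # position matrix
p = (a - a.T) / (1j * np.sqrt(2.0))          # momentum matrix

comm = p @ q - q @ p                         # pq - qp, should be -i * identity
print(np.round((comm * 1j).real, 3))         # identity-like diagonal; last entry is a truncation artifact
```

With ħ = 1 the expected result pq − qp = (h/2πi) I reduces to −iI, which is what the printout shows on all but the final diagonal element.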
The paper put Heisenberg's approach on a solid mathematical basis. Born was surprised to discover that Paul Dirac had been thinking along the same lines as Heisenberg. Soon, Wolfgang Pauli used the matrix method to calculate the energy values of the hydrogen atom and found that they agreed with the Bohr model. Another important contribution was made by Erwin Schrödinger, who looked at the problem using wave mechanics. This had a great deal of appeal to many at the time, as it offered the possibility of returning to deterministic classical physics. Born would have none of this, as it ran counter to facts determined by experiment. He formulated the now-standard interpretation of the probability density function for ψ*ψ in the Schrödinger equation, which he published in July 1926. In a letter to Born on 4 December 1926, Einstein made his famous remark regarding quantum mechanics: This quotation is often paraphrased as 'God does not play dice'. In 1928, Einstein nominated Heisenberg, Born, and Jordan for the Nobel Prize in Physics, but Heisenberg alone won the 1932 Prize "for the creation of quantum mechanics, the application of which has led to the discovery of the allotropic forms of hydrogen", while Schrödinger and Dirac shared the 1933 Prize "for the discovery of new productive forms of atomic theory". On 25 November 1933, Born received a letter from Heisenberg in which he said he had been delayed in writing due to a "bad conscience" that he alone had received the Prize "for work done in Göttingen in collaboration—you, Jordan and I." Heisenberg went on to say that Born and Jordan's contribution to quantum mechanics cannot be changed by "a wrong decision from the outside." In 1954, Heisenberg wrote an article honouring Planck for his insight in 1900, in which he credited Born and Jordan for the final mathematical formulation of matrix mechanics and Heisenberg went on to stress how great their contributions were to quantum mechanics, which were not "adequately acknowledged in the public eye." Those who received their PhD degrees under Born at Göttingen included Max Delbrück, Siegfried Flügge, Friedrich Hund, Pascual Jordan, Maria Goeppert-Mayer, Lothar Wolfgang Nordheim, Robert Oppenheimer, and Victor Weisskopf. Born's assistants at the University of Göttingen's Institute for Theoretical Physics included Enrico Fermi, Werner Heisenberg, Gerhard Herzberg, Friedrich Hund, Pascual Jordan, Wolfgang Pauli, Léon Rosenfeld, Edward Teller, and Eugene Wigner. Walter Heitler became an assistant to Born in 1928, and completed his habilitation under him in 1929. Born not only recognised talent to work with him, but he "let his superstars stretch past him; to those less gifted, he patiently handed out respectable but doable assignments." Delbrück, and Goeppert-Mayer went on to be awarded Nobel Prizes. Later life In January 1933, the Nazi Party came to power in Germany. In May, Born became one of six Jewish professors at Göttingen who were suspended with pay; Franck had already resigned. In twelve years they had built Göttingen into one of the world's foremost centres for physics. Born began looking for a new job, writing to Maria Göppert-Mayer at Johns Hopkins University and Rudi Ladenburg at Princeton University. He accepted an offer from St John's College, Cambridge. At Cambridge, he wrote a popular science book, The Restless Universe, and a textbook, Atomic Physics, that soon became a standard text, going through seven editions. 
His family soon settled into life in England, with his daughters Irene and Gritli becoming engaged to Welshman Brinley (Bryn) Newton-John and Englishman Maurice Pryce respectively. Born's granddaughter Olivia Newton-John was the daughter of Irene. Born's position at Cambridge was only a temporary one, and his tenure at Göttingen was terminated in May 1935. He therefore accepted an offer from C. V. Raman to go to Bangalore in 1935. Born considered taking a permanent position there, but the Indian Institute of Science did not create an additional chair for him. In November 1935, the Born family had their German citizenship revoked, rendering them stateless. A few weeks later Göttingen cancelled Born's doctorate. Born considered an offer from Pyotr Kapitsa in Moscow, and started taking Russian lessons from Rudolf Peierls's Russian-born wife Genia. But then Charles Galton Darwin asked Born if he would consider becoming his successor as Tait Professor of Natural Philosophy at the University of Edinburgh, an offer that Born promptly accepted, assuming the chair in October 1936. In Edinburgh, Born promoted the teaching of mathematical physics. He had two German assistants, E. Walter Kellermann and Klaus Fuchs, and one Scottish assistant, Robert Schlapp, and together they continued to investigate the mysterious behaviour of electrons. Born became a Fellow of the Royal Society of Edinburgh in 1937, and of the Royal Society of London in March 1939. During 1939, he got as many of his remaining friends and relatives still in Germany as he could out of the country, including his sister Käthe, in-laws Kurt and Marga, and the daughters of his friend Heinrich Rausch von Traubenberg. Hedi ran a domestic bureau, placing young Jewish women in jobs. Born received his certificate of naturalisation as a British subject on 31 August 1939, one day before the Second World War broke out in Europe. Born remained at Edinburgh until he reached the retirement age of 70 in 1952. He retired to Bad Pyrmont, in West Germany, in 1954. In October, he received word that he was being awarded the Nobel Prize. His fellow physicists had never stopped nominating him. Franck and Fermi had nominated him in 1947 and 1948 for his work on crystal lattices, and over the years, he had also been nominated for his work on solid state physics, quantum mechanics and other topics. In 1954, he received the prize for "fundamental research in Quantum Mechanics, especially in the statistical interpretation of the wave function"—something that he had worked on alone. In his Nobel lecture he reflected on the philosophical implications of his work: In retirement, he continued scientific work, and produced new editions of his books. In 1955 he became one of signatories to the Russell-Einstein Manifesto. He died at age 87 in hospital in Göttingen on 5 January 1970, and is buried in the Stadtfriedhof there, in the same cemetery as Walther Nernst, Wilhelm Weber, Max von Laue, Otto Hahn, Max Planck, and David Hilbert. Global policy He was one of the signatories of the agreement to convene a convention for drafting a world constitution. As a result, for the first time in human history, a World Constituent Assembly convened to draft and adopt a Constitution for the Federation of Earth. Personal life Born's wife Hedwig (Hedi) Martha Ehrenberg (1891–1972) was a daughter of the jurist Victor Ehrenberg and Elise von Jhering (a daughter of the jurist Rudolf von Jhering). Born was survived by his wife Hedi and their children Irene, Gritli and Gustav. 
Singer and actress Olivia Newton-John was a daughter of Irene (1914–2003), while Gustav is the father of musician and academic Georgina Born and actor Max Born (Fellini Satyricon) who are thus also Max's grandchildren. His great-grandchildren include songwriter Brett Goldsmith, singer Tottie Goldsmith, racing car driver Emerson Newton-John, and singer Chloe Rose Lattanzi. Born helped his nephew, architect, Otto Königsberger (1908–1999) obtain commission in the Mysore State. Awards and honors 1934 – Stokes Medal of Cambridge 1939 – Fellow of the Royal Society 1945 – Makdougall–Brisbane Prize of the Royal Society of Edinburgh 1945 – Gunning Victoria Jubilee Prize of the Royal Society of Edinburgh 1948 – Max Planck Medaille der Deutschen Physikalischen Gesellschaft 1950 – Hughes Medal of the Royal Society of London 1953 – Honorary citizen of the town of Göttingen 1954 – Nobel Prize in Physics The award was for Born's fundamental research in quantum mechanics, especially for his statistical interpretation of the wavefunction. 1954 – Nobel Prize Banquet Speech 1954 – Born Nobel Prize Lecture 1956 – Hugo Grotius Medal for International Law, Munich 1959 – Grand Cross of Merit with Star of the Order of Merit of the German Federal Republic 1972 – Max Born Medal and Prize was created by the German Physical Society and the British Institute of Physics. It is awarded annually. 1982 – Ceremony at the University of Göttingen in the 100th Birth Year of Max Born and James Franck, Institute Directors 1921–1933. 1991 – – Institute named in his honor. 2017 – On 11 December 2017, Google showed a Google doodle, designed by Kati Szilagyi, in honouring the 135th birth anniversary of Born. Bibliography During his life, Born wrote several semi-popular and technical books. His volumes on topics like atomic physics and optics were very well received. They are considered classics in their fields, and are still in print. The following is a chronological listing of his major works: Über das Thomson'sche Atommodell Habilitations-Vortrag (FAM, 1909) – The Habilitation was done at the University of Göttingen, on 23 October 1909. – Based on Born's lectures at the University of Frankfurt am Main. Available in English under the title . Dynamik der Kristallgitter (Teubner, 1915) – After its publication, the physicist Arnold Sommerfeld asked Born to write an article based on it for the 5th volume of the Mathematical Encyclopedia. The First World War delayed the start of work on this article, but it was taken up in 1919 and finished in 1922. It was published as a revised edition under the title Atomic Theory of Solid States. Vorlesungen über Atommechanik (Springer, 1925) Problems of Atomic Dynamics (MIT Press, 1926) – A first account of matrix mechanics being developed in Germany, based on two series of lectures given at MIT, over three months, in late 1925 and early 1926. Mechanics of the Atom (George Bell & Sons, 1927) – Translated by J. W. Fisher and revised by D. R. Hartree. Elementare Quantenmechanik (Zweiter Band der Vorlesungen über Atommechanik), with Pascual Jordan. (Springer, 1930) – This was the first volume of what was intended as a two-volume work. This volume was limited to the work Born did with Jordan on matrix mechanics. The second volume was to deal with Erwin Schrödinger's wave mechanics. However, the second volume was not even started by Born, as he believed his friend and colleague Hermann Weyl had written it before he could do so. 
Optik: Ein Lehrbuch der elektromagnetische Lichttheorie (Springer, 1933) – The book was released just as the Borns were emigrating to England. Moderne Physik (1933) – Based on seven lectures given at the Technischen Hochschule Berlin. Atomic Physics (Blackie, London, 1935) – Authorized translation of Moderne Physik by John Dougall, with updates. The Restless Universe (Blackie and Son Limited, 1935) – A popularised rendition of the workshop of nature, translated by Winifred Margaret Deans. Born's nephew, Otto Königsberger, whose successful career as an architect in Berlin was brought to an end when the Nazis took over, was temporarily brought to England to illustrate the book. Experiment and Theory in Physics (Cambridge University Press, 1943) – The address given King's College, Newcastle upon Tyne, at the request of the Durham Philosophical Society and the Pure Science Society. An expanded version of the lecture appeared in a 1956 Dover Publications edition. Natural Philosophy of Cause and Chance (Oxford University Press, 1949) – Based on Born's 1948 Waynflete lectures, given at the College of St. Mary Magdalen, Oxford University. A later edition (Dover, 1964) included two appendices: "Symbol and Reality" and Born's lecture given at the Nobel laureates 1964 meeting in Landau, Germany. A General Kinetic Theory of Liquids with H. S. Green (Cambridge University Press, 1949) – The six papers in this book were reproduced with permission from the Proceedings of the Royal Society. Natural Philosophy Of Cause And Chance, Oxford 1949 Dynamical Theory of Crystal Lattices, with Kun Huang. (Oxford, Clarendon Press, 1954) Max Born The statistical interpretation of quantum mechanics. Nobel Lecture – 11 December 1954. Physics in My Generation: A Selection of Papers (Pergamon, 1956) Physik im Wandel meiner Zeit (Vieweg, 1957) Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, with Emil Wolf. (Pergamon, 1959) – This book is not an English translation of Optik, but rather a substantially new book. Shortly after World War II, a number of scientists suggested that Born update and translate his work into English. Since there had been many advances in optics in the intervening years, updating was warranted. In 1951, Wolf began as Born's private assistant on the book; it was eventually published in 1959 by Robert Maxwell's Pergamon Press. – the delay being due to the lengthy time needed "to resolve all the financial and publishing tricks created by Maxwell." Physik und Politik (VandenHoeck und Ruprecht, 1960) Zur Begründung der Matrizenmechanik, with Werner Heisenberg and Pascual Jordan (Battenberg, 1962) – Published in honor of Max Born's 80th birthday. This edition reprinted the authors' articles on matrix mechanics published in Zeitschrift für Physik, Volumes 26 and 33–35, 1924–1926. My Life and My Views: A Nobel Prize Winner in Physics Writes Provocatively on a Wide Range of Subjects (Scribner, 1968) – Part II (pp. 63–206) is a translation of Von der Verantwortung des Naturwissenschaftlers. Briefwechsel 1916–1955, kommentiert von Max Born with Hedwig Born and Albert Einstein (Nymphenburger, 1969) The Born–Einstein Letters: Correspondence between Albert Einstein and Max and Hedwig Born from 1916–1955, with commentaries by Max Born (Macmillan, 1971). Mein Leben: Die Erinnerungen des Nobelpreisträgers (Munich: Nymphenburger, 1975). Born's published memoirs. My Life: Recollections of a Nobel Laureate (Scribner, 1978). Translation of Mein Leben. 
For a full list of his published papers, see HistCite . For his published works, see Published Works – Berlin-Brandenburgische Akademie der Wissenschaften Akademiebibliothek. See also List of things named after Max Born List of refugees List of Jewish Nobel laureates Citations General references Reprinted as chapter 7 in Bernstein, Jeremy (2014). A Chorus of Bells and Other Scientific Inquiries. Also published in Germany: Max Born – Baumeister der Quantenwelt. Eine Biographie Spektrum Akademischer Verlag, 2005, . External links American Institute of Physics History Search: Max Born Encyclopædia Britannica, Max Born – full article Annotated bibliography for Max Born from the Alsos Digital Library for Nuclear Issues Freeview video of Gustav Born (son of Max) with conversation and film on Gustav's memories of his father by the Vega Science Trust Max Born information from Nobel Winners site including his Nobel Lecture, 11 December 1954 The Statistical Interpretations of Quantum Mechanics Papers of Professor Max Born (1882–1970) Held at the Edinburgh University Library, Special Collections Division The Papers of Professor Max Born held at Churchill Archives Centre, Cambridge Kuhn, Thomas S., John L. Heilbron, Paul Forman, and Lini Allen Sources for History of Quantum Physics (American Philosophical Society, 1967) Oral history interview transcript for Max Born on 1 June 1960, American Institute of Physics, Niels Bohr Library & Archives - Session I Oral history interview transcript for Max Born on 1 June 1960, American Institute of Physics, Niels Bohr Library & Archives - Session II Oral history interview transcript for Max Born on 17 October 1962, American Institute of Physics, Niels Bohr Library & Archives - Session III Oral history interview transcript for Max Born on 18 October 1962, American Institute of Physics, Niels Bohr Library & Archives - Session IV 1882 births 1970 deaths Scientists from Göttingen 20th-century German physicists Academics of the University of Cambridge Academics of the University of Edinburgh Alumni of Gonville and Caius College, Cambridge 20th-century British physicists British theoretical physicists Fellows of the Royal Society of Edinburgh Fellows of the Royal Society Foreign associates of the National Academy of Sciences Foreign members of the USSR Academy of Sciences German emigrants to Scotland German Nobel laureates Academic staff of Goethe University Frankfurt Grand Crosses with Star and Sash of the Order of Merit of the Federal Republic of Germany Heidelberg University alumni Honorary members of the USSR Academy of Sciences Academic staff of the Humboldt University of Berlin Jewish emigrants from Nazi Germany to the United Kingdom Jewish German physicists Members of the German Academy of Sciences at Berlin Members of the Prussian Academy of Sciences Nobel laureates in Physics Optical physicists People associated with the University of Zurich People from the Province of Silesia Scientists from Wrocław Quantum physicists Scientists from Frankfurt Silesian Jews Theoretical physicists German theoretical physicists University of Breslau alumni University of Göttingen alumni Academic staff of the University of Göttingen Winners of the Max Planck Medal Max Members of the Göttingen Academy of Sciences and Humanities Members of the Royal Swedish Academy of Sciences Ehrenberg family World Constitutional Convention call signatories Jewish British physicists
Max Born
[ "Physics" ]
7,279
[ "Theoretical physics", "Quantum physicists", "Theoretical physicists", "Quantum mechanics" ]
61,891
https://en.wikipedia.org/wiki/Genus%20%28mathematics%29
In mathematics, genus (plural: genera) has a few different, but closely related, meanings. Intuitively, the genus is the number of "holes" of a surface. A sphere has genus 0, while a torus has genus 1. Topology Orientable surfaces The genus of a connected, orientable surface is an integer representing the maximum number of cuttings along non-intersecting closed simple curves without rendering the resultant manifold disconnected. It is equal to the number of handles on it. Alternatively, it can be defined in terms of the Euler characteristic χ, via the relationship χ = 2 − 2g for closed surfaces, where g is the genus. For surfaces with b boundary components, the equation reads χ = 2 − 2g − b. In layman's terms, the genus is the number of "holes" an object has ("holes" interpreted in the sense of doughnut holes; a hollow sphere would be considered as having zero holes in this sense). A torus has 1 such hole, while a sphere has 0. A double torus, for example, has 2 holes of the relevant sort. For instance: The sphere and a disc both have genus zero. A torus has genus one, as does the surface of a coffee mug with a handle. This is the source of the joke "topologists are people who can't tell their donut from their coffee mug." Explicit construction of surfaces of the genus g is given in the article on the fundamental polygon. Non-orientable surfaces The non-orientable genus, demigenus, or Euler genus of a connected, non-orientable closed surface is a positive integer representing the number of cross-caps attached to a sphere. Alternatively, it can be defined for a closed surface in terms of the Euler characteristic χ, via the relationship χ = 2 − k, where k is the non-orientable genus. For instance: A real projective plane has a non-orientable genus 1. A Klein bottle has non-orientable genus 2. Knot The genus of a knot K is defined as the minimal genus of all Seifert surfaces for K. A Seifert surface of a knot is however a manifold with boundary, the boundary being the knot, i.e. homeomorphic to the unit circle. The genus of such a surface is defined to be the genus of the two-manifold, which is obtained by gluing the unit disk along the boundary. Handlebody The genus of a 3-dimensional handlebody is an integer representing the maximum number of cuttings along embedded disks without rendering the resultant manifold disconnected. It is equal to the number of handles on it. For instance: A ball has genus 0. A solid torus D2 × S1 has genus 1. Graph theory The genus of a graph is the minimal integer n such that the graph can be drawn without crossing itself on a sphere with n handles (i.e. an oriented surface of the genus n). Thus, a planar graph has genus 0, because it can be drawn on a sphere without self-crossing. The non-orientable genus of a graph is the minimal integer n such that the graph can be drawn without crossing itself on a sphere with n cross-caps (i.e. a non-orientable surface of (non-orientable) genus n). (This number is also called the demigenus.) The Euler genus is the minimal integer n such that the graph can be drawn without crossing itself on a sphere with n cross-caps or on a sphere with n/2 handles. In topological graph theory there are several definitions of the genus of a group. Arthur T. White introduced the following concept. The genus of a group G is the minimum genus of a (connected, undirected) Cayley graph for G. The graph genus problem is NP-complete. Algebraic geometry There are two related definitions of genus of any projective algebraic scheme: the arithmetic genus and the geometric genus. 
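Returning to the topological definition above, a minimal sketch computes the genus of an orientable closed surface from the Euler characteristic of a polyhedral model via g = (2 − χ)/2, with χ = V − E + F. The cube and the n×n square grid on a torus used below are assumed example meshes chosen for illustration.

```python
def genus_from_counts(V, E, F):
    """Genus of a closed orientable surface from Euler characteristic chi = V - E + F."""
    chi = V - E + F
    assert (2 - chi) % 2 == 0, "chi must be even for a closed orientable surface"
    return (2 - chi) // 2

# Cube (a polyhedral sphere): V = 8, E = 12, F = 6, so chi = 2 and genus 0.
print("cube :", genus_from_counts(8, 12, 6))

# Square grid on a torus with n x n cells: V = n^2, E = 2*n^2, F = n^2, so chi = 0 and genus 1.
n = 9
print("torus:", genus_from_counts(n * n, 2 * n * n, n * n))
```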
When X is an algebraic curve with field of definition the complex numbers, and if X has no singular points, then these definitions agree and coincide with the topological definition applied to the Riemann surface of X (its manifold of complex points). For example, the definition of elliptic curve from algebraic geometry is a connected non-singular projective curve of genus 1 with a given rational point on it. By the Riemann–Roch theorem, an irreducible plane curve of degree d, given by the vanishing locus of a section of O(d) on the projective plane, has geometric genus g = (d − 1)(d − 2)/2 − s, where s is the number of singularities when properly counted. Differential geometry In differential geometry, a genus of an oriented manifold M may be defined as a complex number Φ(M) subject to the conditions that Φ is additive under disjoint union, multiplicative under products, and satisfies Φ(M1) = Φ(M2) if M1 and M2 are cobordant. In other words, Φ is a ring homomorphism on Thom's oriented cobordism ring. The genus Φ is multiplicative for all bundles on spinor manifolds with a connected compact structure group if the logarithm of Φ is an elliptic integral of the form ∫ from 0 to x of (1 − 2δt² + εt⁴)^(−1/2) dt for some constants δ and ε. Such a genus is called an elliptic genus. The Euler characteristic is not a genus in this sense since it is not invariant under cobordisms. Biology Genus can also be calculated for the graph spanned by the net of chemical interactions in nucleic acids or proteins. In particular, one may study the growth of the genus along the chain. Such a function (called the genus trace) shows the topological complexity and domain structure of biomolecules. See also Group (mathematics) Arithmetic genus Geometric genus Genus of a multiplicative sequence Genus of a quadratic form Spinor genus Citations References Topology Geometric topology Surfaces Algebraic topology Algebraic curves Graph invariants Topological graph theory Geometry processing
Genus (mathematics)
[ "Physics", "Mathematics" ]
1,125
[ "Graph theory", "Algebraic topology", "Geometric topology", "Graph invariants", "Topology", "Mathematical relations", "Space", "Geometry", "Fields of abstract algebra", "Spacetime", "Topological graph theory" ]
61,967
https://en.wikipedia.org/wiki/Detonator
A detonator is a device used to make an explosive or explosive device explode. Detonators come in a variety of types, depending on how they are initiated (chemically, mechanically, or electrically) and details of their inner working, which often involve several stages. Types of detonators include non-electric and electric. Non-electric detonators are typically stab or pyrotechnic while electric are typically "hot wire" (low voltage), exploding bridge wire (high voltage) or explosive foil (very high voltage). The original electric detonators invented in 1875 independently by Julius Smith and Perry Gardiner used mercury fulminate as the primary explosive. Around the turn of the century performance was enhanced in the Smith-Gardiner blasting cap by the addition of 10-20% potassium chlorate. This compound was superseded by others: lead azide, lead styphnate, some aluminium, or other materials such as DDNP (diazo dinitro phenol) to reduce the amount of lead emitted into the atmosphere by mining and quarrying operations. They also often use a small amount of TNT or tetryl in military detonators and PETN in commercial detonators. History The first blasting cap or detonator was demonstrated in 1745 when British physician and apothecary William Watson showed that the electric spark of a friction machine could ignite black powder, by way of igniting a flammable substance mixed in with the black powder. In 1750, Benjamin Franklin in Philadelphia made a commercial blasting cap consisting of a paper tube full of black powder, with wires leading in both sides and wadding sealing up the ends. The two wires came close but did not touch, so a large electric spark discharge between the two wires would fire the cap. In 1832, a hot wire detonator was produced by American chemist Robert Hare, although attempts along similar lines had earlier been attempted by the Italians Volta and Cavallo. Hare constructed his blasting cap by passing a multistrand wire through a charge of gunpowder inside a tin tube; he had cut all but one fine strand of the multistrand wire so that the fine strand would serve as the hot bridgewire. When a strong current from a large battery (which he called a "deflagrator" or "calorimotor") was passed through the fine strand, it became incandescent and ignited the charge of gunpowder. In 1863, Alfred Nobel realized that although nitroglycerin could not be detonated by a fuse, it could be detonated by the explosion of a small charge of gunpowder, which in turn was ignited by a fuse. Within a year, he was adding mercury fulminate to the gunpowder charges of his detonators, and by 1867 he was using small copper capsules of mercury fulminate, triggered by a fuse, to detonate nitroglycerin. In 1868, Henry Julius Smith of Boston introduced a cap that combined a spark gap ignitor and mercury fulminate, the first electric cap able to detonate dynamite. In 1875, Smith—and then in 1887, Perry G. Gardner of North Adams, Massachusetts—developed electric detonators that combined a hot wire detonator with mercury fulminate explosive. These were the first generally modern type blasting caps. Modern caps use different explosives and separate primary and secondary explosive charges, but are generally very similar to the Gardner and Smith caps. Smith also invented the first satisfactory portable power supply for igniting blasting caps: a high-voltage magneto that was driven by a rack and pinion, which in turn was driven by a T-handle that was pushed downwards. 
Electric match caps were developed in the early 1900s in Germany, and spread to the US in the 1950s when ICI International purchased Atlas Powder Co. These match caps have become the predominant world standard cap type. Purpose The need for detonators such as blasting caps came from the development of safer secondary and tertiary explosives . Secondary and tertiary explosives are typically initiated by an explosives train starting with the detonator. For safety, detonators and the main explosive device are typically only joined just before use. Design A detonator is usually a multi stage device, with three parts: at the first stage, the initiation mean (fire, electricity, etc.) provide enough energy (as heat or mechanical shock) to activate an easy-to-ignite primary explosive, which in turn detonates a small amount of a more powerful secondary explosive, directly in contact with the primary, and called "base" or "output" explosive, able to carry out the detonation through the casing of the detonator to the main explosive device to activate it. Explosives commonly used as primary in detonators include lead azide, lead styphnate, tetryl, and DDNP. Early blasting caps also used silver fulminate, but it has been replaced with cheaper and safer primary explosives. Silver azide is still used sometimes, but very rarely due to its high price. It is possible to construct a Non Primary Explosive Detonator (NPED) in which the primary explosive is replaced by a flammable but non-explosive mixture that propagates a shock wave along a tube into the secondary explosive. NPEDs are harder to accidentally trigger by shock and can avoid the use of lead. As secondary "base" or "output" explosive, TNT or tetryl are typically found in military detonators and PETN in commercial detonators. While detonators make explosive handling safer, they are hazardous to handle since, despite their small size, they contain enough explosive to injure people; untrained personnel might not recognize them as explosives or wrongly deem them not dangerous due to their appearance and handle them without the required care. Types Ordinary detonators usually take the form of ignition-based explosives. While they are mainly used in commercial operations, ordinary detonators are still used in military operations. This form of detonator is most commonly initiated using a safety fuse, and used in non time-critical detonations e.g. conventional munitions disposal. Well known detonators are lead azide [Pb(N3)2], silver azide [AgN3] and mercury fulminate [Hg(ONC)2]. There are three categories of electrical detonators: instantaneous electrical detonators (IED), short period delay detonators (SPD) and long period delay detonators (LPD). SPDs are measured in milliseconds and LPDs are measured in seconds. In situations where nanosecond accuracy is required, specifically in the implosion charges in nuclear weapons, exploding-bridgewire detonators are employed. The initial shock wave is created by vaporizing a length of a thin wire by an electric discharge. A new development is a slapper detonator, which uses thin plates accelerated by an electrically exploded wire or foil to deliver the initial shock. It is in use in some modern weapons systems. A variant of this concept is used in mining operations, when the foil is exploded by a laser pulse delivered to the foil by optical fiber. 
A non-electric detonator is a shock tube detonator designed to initiate explosions, generally for the purpose of demolition of buildings and for use in the blasting of rock in mines and quarries. Instead of electric wires, a hollow plastic tube delivers the firing impulse to the detonator, making it immune to most of the hazards associated with stray electric current. It consists of a small diameter, three-layer plastic tube coated on the innermost wall with a reactive explosive compound, which, when ignited, propagates a low energy signal, similar to a dust explosion. The reaction travels at approximately 6,500 ft/s (2,000 m/s) along the length of the tubing with minimal disturbance outside of the tube. Non-electric detonators were invented by the Swedish company Nitro Nobel in the 1960s and 1970s, and launched to the demolitions market in 1973. In civil mining, electronic detonators have a better precision for delays. Electronic detonators are designed to provide the precise control necessary to produce accurate and consistent blasting results in a variety of blasting applications in the mining, quarrying, and construction industries. Electronic detonators may be programmed in millisecond or sub-millisecond increments using a dedicated programming device. Wireless electronic detonators are beginning to be available in the civil mining market. Encrypted radio signals are used to communicate the blast signal to each detonator at the correct time. While currently expensive, wireless detonators can enable new mining techniques as multiple blasts can be loaded at once and fired in sequence without putting humans in harm's way. A number 8 test blasting cap is one containing 2 grams of a mixture of 80 percent mercury fulminate and 20 percent potassium chlorate, or a blasting cap of equivalent strength. An equivalent strength cap comprises 0.40-0.45 grams of PETN base charge pressed in an aluminum shell with bottom thickness not to exceed to 0.03 of an inch, to a specific gravity of not less than 1.4 g/cc, and primed with standard weights of primer depending on the manufacturer. Blasting caps The oldest and simplest type of cap, fuse caps are a metal cylinder, closed at one end. From the open end inwards, there is first an empty space into which a pyrotechnic fuse is inserted and crimped, then a pyrotechnic ignition mix, a primary explosive, and then the main detonating explosive charge. The primary hazard of pyrotechnic blasting caps is that for proper usage, the fuse must be inserted and then crimped into place by crushing the base of the cap around the fuse. If the tool used to crimp the cap is used too close to the explosives, the primary explosive compound can detonate during crimping. A common hazardous practice is crimping caps with one's teeth; an accidental detonation can cause serious injury to the mouth. Fuse type blasting caps are still in active use today. They are the safest type to use around certain types of electromagnetic interference, and they have a built in time delay as the fuse burns down. Solid pack electric blasting caps use a thin bridgewire in direct contact (hence solid pack) with a primary explosive, which is heated by electric current and causes the detonation of the primary explosive. That primary explosive then detonates a larger charge of secondary explosive. Some solid pack fuses incorporate a small pyrotechnic delay element, up to a few hundred milliseconds, before the cap fires. 
Match type blasting caps use an electric match (insulating sheet with electrodes on both sides, a thin bridgewire soldered across the sides, all dipped in ignition and output mixes) to initiate the primary explosive, rather than direct contact between the bridgewire and the primary explosive. The match can be manufactured separately from the rest of the cap and only assembled at the end of the process. Match type caps are now the most common type found worldwide. The exploding-bridgewire detonator was invented in the 1940s as part of the Manhattan Project to develop nuclear weapons. The design goal was to produce a detonator which functioned very rapidly and predictably). Both Match and Solid Pack type electric caps take a few milliseconds to fire, as the bridgewire heats up and heats the explosive to the point of detonation. Exploding bridgewire or EBW detonators use a higher voltage electric charge and a very thin bridgewire, .04 inch long, .0016 diameter, (1 mm long, 0.04 mm diameter). Instead of heating up the explosive, the EBW detonator wire is heated so quickly by the high firing current that the wire actually vaporizes and explodes due to electric resistance heating. That electrically-driven explosion causes the low-density initiating explosive (usually PETN) to detonate, which in turn detonates a higher density secondary explosive (typically RDX or HMX) in many EBW designs. In addition to firing very quickly when properly initiated, EBW detonators are much safer than blasting caps from stray static electricity and other electric current. Enough current will melt the bridgewire, but it cannot detonate the initiator explosive without the full high-voltage high-current charge passing through the bridgewire. EBW detonators are used in many civilian applications where radio signals, static electricity, or other electrical hazards might cause accidents with conventional electric detonators. Exploding foil initiators (EFI), also known as Slapper detonators are an improvement on EBW detonators. Slappers, instead of directly using the exploding foil to detonate the initiator explosive, use the electrical vaporization of the foil to drive a small circle of insulating material such as PET film or kapton down a circular hole in an additional disc of insulating material. At the far end of that hole is a pellet of high-density secondary explosive. Slapper detonators omit the low-density initiating explosive used in EBW designs and they require much greater energy density than EBW detonators to function, making them inherently safer. Laser initiation of explosives, propellants or pyrotechnics has been attempted in three different ways, (1) direct interaction with the HE or Direct Optical Initiation (DOI); (2) rapid heating of a thin film in contact with a HE; and (3) ablating a thin metal foil to produce a high velocity flyer plate that impacts the HE (laser flyer). See also References Further reading Cooper, Paul W. Explosives Engineering. New York: Wiley-VCH, 1996. . External links 1956 safety film "Blasting Cap - Danger!" from Prelinger Archives Modelling and Simulation of Burst Phenomenon in Electrically Exploded Foils Bombs Explosives Pyrotechnic initiators
Detonator
[ "Chemistry" ]
2,887
[ "Explosives", "Explosions" ]
61,983
https://en.wikipedia.org/wiki/Tannin
Tannins (or tannoids) are a class of astringent, polyphenolic biomolecules that bind to and precipitate proteins and various other organic compounds including amino acids and alkaloids. The term tannin is widely applied to any large polyphenolic compound containing sufficient hydroxyls and other suitable groups (such as carboxyls) to form strong complexes with various macromolecules. The term tannin (from scientific French tannin, from French tan "crushed oak bark", tanner "to tan", cognate with English tanning, Medieval Latin tannare, from Proto-Celtic *tannos "oak") refers to the abundance of these compounds in oak bark, which was used in tanning animal hides into leather. The tannin compounds are widely distributed in many species of plants, where they play a role in protection from predation (acting as pesticides) and might help in regulating plant growth. The astringency from the tannins is what causes the dry and puckery feeling in the mouth following the consumption of unripened fruit, red wine or tea. Likewise, the destruction or modification of tannins with time plays an important role when determining harvesting times. Tannins have molecular weights ranging from 500 to over 3,000 (gallic acid esters) and up to 20,000 daltons (proanthocyanidins). Structure and classes of tannins There are three major classes of tannins: Shown below are the base unit or monomer of the tannin. Particularly in the flavone-derived tannins, the base shown must be (additionally) heavily hydroxylated and polymerized in order to give the high molecular weight polyphenol motif that characterizes tannins. Typically, tannin molecules require at least 12 hydroxyl groups and at least five phenyl groups to function as protein binders. Oligostilbenoids (oligo- or polystilbenes) are oligomeric forms of stilbenoids and constitute a minor class of tannins. Pseudo-tannins Pseudo-tannins are low molecular weight compounds associated with other compounds. They do not change color during the Goldbeater's skin test, unlike hydrolysable and condensed tannins, and cannot be used as tanning compounds. Some examples of pseudo tannins and their sources are: History Ellagic acid, gallic acid, and pyrogallic acid were first discovered by chemist Henri Braconnot in 1831. Julius Löwe was the first person to synthesize ellagic acid by heating gallic acid with arsenic acid or silver oxide. Maximilian Nierenstein studied natural phenols and tannins found in different plant species. Working with Arthur George Perkin, he prepared ellagic acid from algarobilla and certain other fruits in 1905. He suggested its formation from galloyl-glycine by Penicillium in 1915. Tannase is an enzyme that Nierenstein used to produce m-digallic acid from gallotannins. He proved the presence of catechin in cocoa beans in 1931. He showed in 1945 that luteic acid, a molecule present in the myrobalanitannin, a tannin found in the fruit of Terminalia chebula, is an intermediary compound in the synthesis of ellagic acid. At these times, molecule formulas were determined through combustion analysis. The discovery in 1943 by Martin and Synge of paper chromatography provided for the first time the means of surveying the phenolic constituents of plants and for their separation and identification. There was an explosion of activity in this field after 1945, including prominent work by Edgar Charles Bate-Smith and Tony Swain at Cambridge University. 
In 1966, Edwin Haslam proposed a first comprehensive definition of plant polyphenols based on the earlier proposals of Bate-Smith, Swain and Theodore White, which includes specific structural characteristics common to all phenolics having a tanning property. It is referred to as the White–Bate-Smith–Swain–Haslam (WBSSH) definition. Occurrence Tannins are distributed in species throughout the plant kingdom. They are commonly found in both gymnosperms and angiosperms. Mole (1993) studied the distribution of tannin in 180 families of dicotyledons and 44 families of monocotyledons (Cronquist). Most families of dicot contain tannin-free species (tested by their ability to precipitate proteins). The best known families of which all species tested contain tannin are: Aceraceae, Actinidiaceae, Anacardiaceae, Bixaceae, Burseraceae, Combretaceae, Dipterocarpaceae, Ericaceae, Grossulariaceae, Myricaceae for dicot and Najadaceae and Typhaceae in Monocot. To the family of the oak, Fagaceae, 73% of the species tested contain tannin. For those of acacias, Mimosaceae, only 39% of the species tested contain tannin, among Solanaceae rate drops to 6% and 4% for the Asteraceae. Some families like the Boraginaceae, Cucurbitaceae, Papaveraceae contain no tannin-rich species. The most abundant polyphenols are the condensed tannins, found in virtually all families of plants, and comprising up to 50% of the dry weight of leaves. Cellular localization In all vascular plants studied, tannins are manufactured by a chloroplast-derived organelle, the tannosome. Tannins are mainly physically located in the vacuoles or surface wax of plants. These storage sites keep tannins active against plant predators, but also keep some tannins from affecting plant metabolism while the plant tissue is alive. Tannins are classified as ergastic substances, i.e., non-protoplasm materials found in cells. Tannins, by definition, precipitate proteins. In this condition, they must be stored in organelles able to withstand the protein precipitation process. Idioblasts are isolated plant cells which differ from neighboring tissues and contain non-living substances. They have various functions such as storage of reserves, excretory materials, pigments, and minerals. They could contain oil, latex, gum, resin or pigments etc. They also can contain tannins. In Japanese persimmon (Diospyros kaki) fruits, tannin is accumulated in the vacuole of tannin cells, which are idioblasts of parenchyma cells in the flesh. Presence in soils The convergent evolution of tannin-rich plant communities has occurred on nutrient-poor acidic soils throughout the world. Tannins were once believed to function as anti-herbivore defenses, but more and more ecologists now recognize them as important controllers of decomposition and nitrogen cycling processes. As concern grows about global warming, there is great interest to better understand the role of polyphenols as regulators of carbon cycling, in particular in northern boreal forests. Leaf litter and other decaying parts of kauri (Agathis australis), a tree species found in New Zealand, decompose much more slowly than those of most other species. Besides its acidity, the plant also bears substances such as waxes and phenols, most notably tannins, that are harmful to microorganisms. Presence in water and wood The leaching of highly water soluble tannins from decaying vegetation and leaves along a stream may produce what is known as a blackwater river. 
Water flowing out of bogs has a characteristic brown color from dissolved peat tannins. The presence of tannins (or humic acid) in well water can make it smell bad or taste bitter, but this does not make it unsafe to drink. Tannins leaching from an unprepared driftwood decoration in an aquarium can cause pH lowering and coloring of the water to a tea-like tinge. A way to avoid this is to boil the wood in water several times, discarding the water each time. Using peat as an aquarium substrate can have the same effect. Many hours of boiling the driftwood may need to be followed by many weeks or months of constant soaking and many water changes before the water will stay clear. Raising the water's pH level, e.g. by adding baking soda, will accelerate the process of leaching. Tannins in water can lead to feather staining on wild and domestic waterfowl which frequent the water; mute swans, which are typically white in colour, can often be observed with reddish-brown staining as a result of coming into contact with dissolved tannins, though dissolved iron compounds also play a role. Softwoods, while in general much lower in tannins than hardwoods, are usually not recommended for use in an aquarium so using a hardwood with a very light color, indicating a low tannin content, can be an easy way to avoid tannins. Tannic acid is brown in color, so in general white woods have a low tannin content. Woods with a lot of yellow, red, or brown coloration to them (like cedar, redwood, red oak, etc.) tend to contain a lot of tannin. Extraction There is no single protocol for extracting tannins from all plant material. The procedures used for tannins are widely variable. It may be that acetone in the extraction solvent increases the total yield by inhibiting interactions between tannins and proteins during extraction or even by breaking hydrogen bonds between tannin-protein complexes. Tests for tannins There are three groups of methods for the analysis of tannins: precipitation of proteins or alkaloids, reaction with phenolic rings, and depolymerization. Alkaloid precipitation Alkaloids such as caffeine, cinchonine, quinine or strychnine, precipitates polyphenols and tannins. This property can be used in a quantitation method. Goldbeater's skin test When goldbeater's skin or ox skin is dipped in HCl, rinsed in water, soaked in the tannin solution for 5 minutes, washed in water, and then treated with 1% FeSO4 solution, it gives a blue black color if tannin was present. Ferric chloride test The following describes the use of ferric chloride (FeCl3) tests for phenolics in general: Powdered plant leaves of the test plant (1.0 g) are weighed into a beaker and 10 ml of distilled water are added. The mixture is boiled for five minutes. Two drops of 5% FeCl3 are then added. Production of a greenish precipitate is an indication of the presence of tannins. Alternatively, a portion of the water extract is diluted with distilled water in a ratio of 1:4 and few drops of 10% ferric chloride solution is added. A blue or green color indicates the presence of tannins (Evans, 1989). Other methods The hide-powder method is used in tannin analysis for leather tannin and the Stiasny method for wood adhesives. Statistical analysis reveals that there is no significant relationship between the results from the hide-powder and the Stiasny methods. Hide-powder method 400 mg of sample tannins are dissolved in 100 ml of distilled water. 
3 g of slightly chromated hide-powder previously dried in vacuum for 24h over CaCl2 are added and the mixture stirred for 1 h at ambient temperature. The suspension is filtered without vacuum through a sintered glass filter. The weight gain of the hide-powder expressed as a percentage of the weight of the starting material is equated to the percentage of tannin in the sample. Stiasny's method 100 mg of sample tannins are dissolved in 10 ml distilled water. 1 ml of 10M HCl and 2 ml of 37% formaldehyde are added and the mixture heated under reflux for 30 min. The reaction mixture is filtered while hot through a sintered glass filter. The precipitate is washed with hot water (5× 10 ml) and dried over CaCl2. The yield of tannin is expressed as a percentage of the weight of the starting material. Reaction with phenolic rings The bark tannins of Commiphora angolensis have been revealed by the usual color and precipitation reactions and by quantitative determination by the methods of Löwenthal-Procter and of Deijs (formalin-hydrochloric acid method). Colorimetric methods have existed such as the Neubauer-Löwenthal method which uses potassium permanganate as an oxidizing agent and indigo sulfate as an indicator, originally proposed by Löwenthal in 1877. The difficulty is that the establishing of a titer for tannin is not always convenient since it is extremely difficult to obtain the pure tannin. Neubauer proposed to remove this difficulty by establishing the titer not with regard to the tannin but with regard to crystallised oxalic acid, whereby he found that 83 g oxalic acid correspond to 41.20 g tannin. Löwenthal's method has been criticized. For instance, the amount of indigo used is not sufficient to retard noticeably the oxidation of the non-tannins substances. The results obtained by this method are therefore only comparative. A modified method, proposed in 1903 for the quantification of tannins in wine, Feldmann's method, is making use of calcium hypochlorite, instead of potassium permanganate, and indigo sulfate. Food items with tannins Pomegranates Accessory fruits Strawberries contain both hydrolyzable and condensed tannins. Berries Most berries, such as cranberries, and blueberries, contain both hydrolyzable and condensed tannins. Nuts Nuts vary in the amount of tannins they contain. Some species of acorns of oak contain large amounts. For example, acorns of Quercus robur and Quercus petraea in Poland were found to contain 2.4–5.2% and 2.6–4.8% tannins as a proportion of dry matter, but the tannins can be removed by leaching in water so that the acorns become edible. Other nuts – such as hazelnuts, walnuts, pecans, and almonds – contain lower amounts. Tannin concentration in the crude extract of these nuts did not directly translate to the same relationships for the condensed fraction. Herbs and spices Cloves, tarragon, cumin, thyme, vanilla, and cinnamon all contain tannins. Legumes Most legumes contain tannins. Red-colored beans contain the most tannins, and white-colored beans have the least. Peanuts without shells have a very low tannin content. Chickpeas (garbanzo beans) have a smaller amount of tannins. Chocolate Chocolate liquor contains about 6% tannins. Drinks with tannins Principal human dietary sources of tannins are tea and coffee. Most wines aged in charred oak barrels possess tannins absorbed from the wood. Soils high in clay also contribute to tannins in wine grapes. This concentration gives wine its signature astringency. 
Coffee pulp has been found to contain low to trace amounts of tannins. Fruit juices Although citrus fruits do not contain tannins, orange-colored juices often contain tannins from food colouring. Apple, grape and berry juices all contain high amounts of tannins. Sometimes tannins are even added to juices and ciders to create a more astringent feel to the taste. Beer In addition to the alpha acids extracted from hops to provide bitterness in beer, condensed tannins are also present. These originate both from malt and hops. Trained brewmasters, particularly those in Germany, consider the presence of tannins to be a flaw. However, in some styles, the presence of this astringency is acceptable or even desired, as, for example, in a Flanders red ale. In lager type beers, the tannins can form a precipitate with specific haze-forming proteins in the beer resulting in turbidity at low temperature. This chill haze can be prevented by removing part of the tannins or part of the haze-forming proteins. Tannins are removed using PVPP, haze-forming proteins by using silica or tannic acid. Properties for animal nutrition Tannins have traditionally been considered antinutritional, depending upon their chemical structure and dosage. Many studies suggest that chestnut tannins have positive effects on silage quality in the round bale silages, in particular reducing NPNs (non-protein nitrogen) in the lowest wilting level. Improved fermentability of soya meal nitrogen in the rumen may occur. Condensed tannins inhibit herbivore digestion by binding to consumed plant proteins and making them more difficult for animals to digest, and by interfering with protein absorption and digestive enzymes (for more on that topic, see plant defense against herbivory). Histatins, another type of salivary proteins, also precipitate tannins from solution, thus preventing alimentary adsorption. Legume fodders containing condensed tannins are a possible option for integrated sustainable control of gastrointestinal nematodes in ruminants, which may help address the worldwide development of resistance to synthetic anthelmintics. These include nuts, temperate and tropical barks, carob, coffee and cocoa. Tannin uses and market Tannins have been used since antiquity in the processes of tanning hides for leather, and in helping preserve iron artifacts (as with Japanese iron teapots). Industrial tannin production began at the beginning of the 19th century with the industrial revolution, to produce tanning material for the need for more leather. Before that time, processes used plant material and were long (up to six months). There was a collapse in the vegetable tannin market in the 1950s–1960s, due to the appearance of synthetic tannins, which were invented in response to a scarcity of vegetable tannins during World War II. At that time, many small tannin industry sites closed. Vegetable tannins are estimated to be used for the production of 10–20% of the global leather production. The cost of the final product depends on the method used to extract the tannins, in particular the use of solvents, alkali and other chemicals used (for instance glycerin). For large quantities, the most cost-effective method is hot water extraction. Tannic acid is used worldwide as clarifying agent in alcoholic drinks and as aroma ingredient in both alcoholic and soft drinks or juices. Tannins from different botanical origins also find extensive uses in the wine industry. Uses Tannins are an important ingredient in the process of tanning leather. 
Tanbark from oak, mimosa, chestnut and quebracho tree has traditionally been the primary source of tannery tannin, though inorganic tanning agents are also in use today and account for 90% of the world's leather production. Tannins produce different colors with ferric chloride (either blue, blue black, or green to greenish-black) according to the type of tannin. Iron gall ink is produced by treating a solution of tannins with iron(II) sulfate. Tannins can also be used as a mordant, and is especially useful in natural dyeing of cellulose fibers such as cotton. The type of tannin used may or may not have an impact on the final color of the fiber. Tannin is a component in a type of industrial particleboard adhesive developed jointly by the Tanzania Industrial Research and Development Organization and Forintek Labs Canada. Pinus radiata tannins has been investigated for the production of wood adhesives. Condensed tannins, e.g., quebracho tannin, and Hydrolyzable tannins, e.g., chestnut tannin, appear to be able to substitute a high proportion of synthetic phenol in phenol-formaldehyde resins for wood particleboard. Tannins can be used for production of anti-corrosive primers for treating rusted steel surfaces prior to painting, converting rust to iron tannate and consolidating and sealing the surface. The use of resins made of tannins has been investigated to remove mercury and methylmercury from solution. Immobilized tannins have been tested to recover uranium from seawater. References External links Tannins: fascinating but sometimes dangerous molecules   Nutrition Oenology Organic polymers Wine terminology Astringent flavors Phenol antioxidants Wood products Food stabilizers Phytochemicals Wood extracts
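Both gravimetric assays described above (the hide-powder and Stiasny methods) report tannin content as a simple percentage of the starting sample mass — the weight gained by the hide powder, or the weight of the dried formaldehyde precipitate. A minimal sketch of that bookkeeping follows; the sample figures are invented purely for illustration:

```python
def percent_tannin(recovered_mg, sample_mg):
    """Tannin content expressed as a percentage of the starting material."""
    return 100.0 * recovered_mg / sample_mg

# Hide-powder method: 400 mg sample, hypothetical hide-powder weight gain of 130 mg.
print(f"hide-powder: {percent_tannin(130, 400):.1f}% tannin")

# Stiasny method: 100 mg sample, hypothetical dried precipitate mass of 62 mg.
print(f"Stiasny:     {percent_tannin(62, 100):.1f}% tannin")
```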
Tannin
[ "Chemistry" ]
4,380
[ "Organic compounds", "Organic polymers" ]
62,047
https://en.wikipedia.org/wiki/Mrs.%20Miniver%27s%20problem
Mrs. Miniver's problem is a geometry problem about the area of circles. It asks how to place two circles of given radii in such a way that the lens formed by intersecting their two interiors has the same area as the symmetric difference of the two circles (the area contained in one but not both circles). It was named for an analogy between geometry and social dynamics enunciated by fictional character Mrs. Miniver, who "saw every relationship as a pair of intersecting circles". Its solution involves a transcendental equation. Origin The problem derives from "A Country House Visit", one of Jan Struther's newspaper articles appearing in the Times of London between 1937 and 1939 featuring her character Mrs. Miniver. According to the story: She saw every relationship as a pair of intersecting circles. It would seem at first glance that the more they overlapped the better the relationship; but this is not so. Beyond a certain point the law of diminishing returns sets in, and there are not enough private resources left on either side to enrich the life that is shared. Probably perfection is reached when the area of the two outer crescents, added together, is exactly equal to that of the leaf-shaped piece in the middle. On paper there must be some neat mathematical formula for arriving at this; in life, none. Louis A. Graham and Clifton Fadiman formalized the mathematics of the problem and popularized it among recreational mathematicians. Solution The problem can be solved by cutting the lens along the line segment between the two crossing points of the circles into two circular segments, and using the formula for the area of a circular segment to relate the distance between the crossing points to the total area that the problem requires the lens to have. This gives a transcendental equation for the distance between the crossing points, but it can be solved numerically. There are two boundary cases in which the distance between the centers can be found readily: the farthest apart the centers can be is when the circles have equal radii, and the closest they can be is when one circle is contained completely within the other, which happens when the ratio between the radii is √2 (the larger radius being √2 times the smaller). If the ratio of radii falls beyond these limiting cases, the circles cannot satisfy the problem's area constraint. In the case of two circles of equal size, these equations can be simplified somewhat. The rhombus formed by the two circle centers and the two crossing points, with side lengths equal to the radius, has an angle of θ ≈ 2.605 radians at the circle centers, found by solving the transcendental equation θ − sin θ = 2π/3, from which it follows that the ratio of the distance between their centers to their radius is 2 cos(θ/2) ≈ 0.530. See also Goat problem#Interior grazing problem, another problem of equalizing the areas of circular lunes and lenses References Circles Area Mathematical problems
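The equal-circles case above reduces to the transcendental equation θ − sin θ = 2π/3, which has no closed-form solution but, as the article notes, is easy to solve numerically. The following is a minimal illustrative sketch (not part of the original article; the bisection bounds are hand-chosen and valid because θ − sin θ is increasing on [0, π]):

```python
from math import sin, cos, pi

def f(theta):
    # Lens-area condition for two equal circles of radius r:
    # lens area = r^2 * (theta - sin(theta)) must equal (2*pi/3) * r^2.
    return theta - sin(theta) - 2 * pi / 3

# Bisection on [0, pi]: f(0) < 0, f(pi) > 0, and f is monotonically increasing.
lo, hi = 0.0, pi
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        hi = mid
    else:
        lo = mid

theta = (lo + hi) / 2
print(f"theta ≈ {theta:.4f} rad")           # ≈ 2.605
print(f"d / r ≈ {2 * cos(theta / 2):.4f}")  # ≈ 0.530
```

The same approach extends to unequal radii by writing the lens as two circular segments, one from each circle, and solving the resulting pair of equations numerically.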
Mrs. Miniver's problem
[ "Physics", "Mathematics" ]
567
[ "Circles", "Scalar physical quantities", "Physical quantities", "Quantity", "Size", "Wikipedia categories named after physical quantities", "Mathematical problems", "Pi", "Area" ]
62,200
https://en.wikipedia.org/wiki/Oganesson
Oganesson is a synthetic chemical element; it has symbol Og and atomic number 118. It was first synthesized in 2002 at the Joint Institute for Nuclear Research (JINR) in Dubna, near Moscow, Russia, by a joint team of Russian and American scientists. In December 2015, it was recognized as one of four new elements by the Joint Working Party of the international scientific bodies IUPAC and IUPAP. It was formally named on 28 November 2016. The name honors the nuclear physicist Yuri Oganessian, who played a leading role in the discovery of the heaviest elements in the periodic table. Oganesson has the highest atomic number and highest atomic mass of all known elements. On the periodic table of the elements it is a p-block element, a member of group 18 and the last member of period 7. Its only known isotope, oganesson-294, is highly radioactive, with a half-life of 0.7 ms, and only five atoms have been successfully produced. This has so far prevented any experimental studies of its chemistry. Because of relativistic effects, theoretical studies predict that it would be a solid at room temperature, and significantly reactive, unlike the other members of group 18 (the noble gases). Introduction History Early speculation The possibility of a seventh noble gas, after helium, neon, argon, krypton, xenon, and radon, was considered almost as soon as the noble gas group was discovered. Danish chemist Hans Peter Jørgen Julius Thomsen predicted in April 1895, the year after the discovery of argon, that there was a whole series of chemically inert gases similar to argon that would bridge the halogen and alkali metal groups: he expected that the seventh of this series would end a 32-element period which contained thorium and uranium and have an atomic weight of 292, close to the 294 now known for the first and only confirmed isotope of oganesson. Danish physicist Niels Bohr noted in 1922 that this seventh noble gas should have atomic number 118 and predicted its electronic structure as 2, 8, 18, 32, 32, 18, 8, matching modern predictions. Following this, German chemist Aristid von Grosse wrote an article in 1965 predicting the likely properties of element 118. It was 107 years from Thomsen's prediction before oganesson was successfully synthesized, although its chemical properties have not been investigated to determine if it behaves as the heavier congener of radon. In a 1975 article, American chemist Kenneth Pitzer suggested that element 118 should be a gas or volatile liquid due to relativistic effects. Unconfirmed discovery claims In late 1998, Polish physicist Robert Smolańczuk published calculations on the fusion of atomic nuclei towards the synthesis of superheavy atoms, including oganesson. His calculations suggested that it might be possible to make element 118 by fusing lead with krypton under carefully controlled conditions, and that the fusion probability (cross section) of that reaction would be close to the lead–chromium reaction that had produced element 106, seaborgium. This contradicted predictions that the cross sections for reactions with lead or bismuth targets would go down exponentially as the atomic number of the resulting elements increased. In 1999, researchers at Lawrence Berkeley National Laboratory made use of these predictions and announced the discovery of elements 118 and 116, in a paper published in Physical Review Letters, and very soon after the results were reported in Science. The researchers reported that they had performed the reaction 208Pb + 86Kr → 293Og + n. 
In 2001, they published a retraction after researchers at other laboratories were unable to duplicate the results and the Berkeley lab could not duplicate them either. In June 2002, the director of the lab announced that the original claim of the discovery of these two elements had been based on data fabricated by principal author Victor Ninov. Newer experimental results and theoretical predictions have confirmed the exponential decrease in cross sections with lead and bismuth targets as the atomic number of the resulting nuclide increases. Discovery reports The first genuine decay of atoms of oganesson was observed in 2002 at the Joint Institute for Nuclear Research (JINR) in Dubna, Russia, by a joint team of Russian and American scientists. Headed by Yuri Oganessian, a Russian nuclear physicist of Armenian ethnicity, the team included American scientists from the Lawrence Livermore National Laboratory in California. The discovery was not announced immediately, because the decay energy of 294Og matched that of 212mPo, a common impurity produced in fusion reactions aimed at producing superheavy elements, and thus announcement was delayed until after a 2005 confirmatory experiment aimed at producing more oganesson atoms. The 2005 experiment used a different beam energy (251 MeV instead of 245 MeV) and target thickness (0.34 mg/cm2 instead of 0.23 mg/cm2). On 9 October 2006, the researchers announced that they had indirectly detected a total of three (possibly four) nuclei of oganesson-294 (one or two in 2002 and two more in 2005) produced via collisions of californium-249 atoms and calcium-48 ions: 249Cf + 48Ca → 294Og + 3n. In 2011, IUPAC evaluated the 2006 results of the Dubna–Livermore collaboration and concluded: "The three events reported for the Z = 118 isotope have very good internal redundancy but with no anchor to known nuclei do not satisfy the criteria for discovery". Because of the very small fusion reaction probability, the experiment took four months and involved a very large beam dose of calcium ions that had to be shot at the californium target to produce the first recorded event believed to be the synthesis of oganesson. Nevertheless, researchers were highly confident that the results were not a false positive, since the chance that the detections were random events was estimated to be negligibly small. In the experiments, the alpha-decay of three atoms of oganesson was observed. A fourth decay by direct spontaneous fission was also proposed. A half-life of 0.89 ms was calculated: 294Og decays into 290Lv by alpha decay, 294Og → 290Lv + 4He. Since there were only three nuclei, the half-life derived from the observed lifetimes has a large uncertainty. The identification of the nuclei was verified by separately creating the putative daughter nucleus 290Lv directly by means of a bombardment of 245Cm with 48Ca ions, 245Cm + 48Ca → 290Lv + 3n, and checking that the decay matched the decay chain of the 294Og nuclei. The daughter nucleus 290Lv is very unstable, decaying with a lifetime of 14 milliseconds into 286Fl, which may experience either spontaneous fission or alpha decay into 282Cn, which will undergo spontaneous fission. Confirmation In December 2015, the Joint Working Party of international scientific bodies International Union of Pure and Applied Chemistry (IUPAC) and International Union of Pure and Applied Physics (IUPAP) recognized the element's discovery and assigned the priority of the discovery to the Dubna–Livermore collaboration. 
This was on account of two 2009 and 2010 confirmations of the properties of the granddaughter of 294Og, 286Fl, at the Lawrence Berkeley National Laboratory, as well as the observation of another consistent decay chain of 294Og by the Dubna group in 2012. The goal of that experiment had been the synthesis of 294Ts via the reaction 249Bk(48Ca,3n), but the short half-life of 249Bk resulted in a significant quantity of the target having decayed to 249Cf, resulting in the synthesis of oganesson instead of tennessine. From 1 October 2015 to 6 April 2016, the Dubna team performed a similar experiment with 48Ca projectiles aimed at a mixed-isotope californium target containing 249Cf, 250Cf, and 251Cf, with the aim of producing the heavier oganesson isotopes 295Og and 296Og. Two beam energies at 252 MeV and 258 MeV were used. Only one atom was seen at the lower beam energy, whose decay chain fitted the previously known one of 294Og (terminating with spontaneous fission of 286Fl), and none were seen at the higher beam energy. The experiment was then halted, as the glue from the sector frames covered the target and blocked evaporation residues from escaping to the detectors. The production of 293Og and its daughter 289Lv, as well as the even heavier isotope 297Og, is also possible using this reaction. The isotopes 295Og and 296Og may also be produced in the fusion of 248Cm with 50Ti projectiles. A search beginning in summer 2016 at RIKEN for 295Og in the 3n channel of this reaction was unsuccessful, though the study is planned to resume; a detailed analysis and cross section limit were not provided. These heavier and likely more stable isotopes may be useful in probing the chemistry of oganesson. Naming Using Mendeleev's nomenclature for unnamed and undiscovered elements, oganesson is sometimes known as eka-radon (until the 1960s as eka-emanation, emanation being the old name for radon). In 1979, IUPAC assigned the systematic placeholder name ununoctium to the undiscovered element, with the corresponding symbol of Uuo, and recommended that it be used until after confirmed discovery of the element. Although widely used in the chemical community on all levels, from chemistry classrooms to advanced textbooks, the recommendations were mostly ignored among scientists in the field, who called it "element 118", with the symbol of E118, (118), or simply 118. Before the retraction in 2001, the researchers from Berkeley had intended to name the element ghiorsium (Gh), after Albert Ghiorso (a leading member of the research team). The Russian discoverers reported their synthesis in 2006. According to IUPAC recommendations, the discoverers of a new element have the right to suggest a name. In 2007, the head of the Russian institute stated the team were considering two names for the new element: flyorium, in honor of Georgy Flyorov, the founder of the research laboratory in Dubna; and moskovium, in recognition of the Moscow Oblast where Dubna is located. He also stated that although the element was discovered as an American collaboration, who provided the californium target, the element should rightly be named in honor of Russia since the Flyorov Laboratory of Nuclear Reactions at JINR was the only facility in the world which could achieve this result. These names were later suggested for element 114 (flerovium) and element 116 (moscovium). 
Flerovium became the name of element 114; the final name proposed for element 116 was instead livermorium, with moscovium later being proposed and accepted for element 115 instead. Traditionally, the names of all noble gases end in "-on", with the exception of helium, which was not known to be a noble gas when discovered. The IUPAC guidelines valid at the moment of the discovery approval however required all new elements be named with the ending "-ium", even if they turned out to be halogens (traditionally ending in "-ine") or noble gases (traditionally ending in "-on"). While the provisional name ununoctium followed this convention, a new IUPAC recommendation published in 2016 recommended using the "-on" ending for new group 18 elements, regardless of whether they turn out to have the chemical properties of a noble gas. The scientists involved in the discovery of element 118, as well as those of 117 and 115, held a conference call on 23 March 2016 to decide their names. Element 118 was the last to be decided upon; after Oganessian was asked to leave the call, the remaining scientists unanimously decided to have the element "oganesson" after him. Oganessian was a pioneer in superheavy element research for sixty years reaching back to the field's foundation: his team and his proposed techniques had led directly to the synthesis of elements 107 through 118. Mark Stoyer, a nuclear chemist at the LLNL, later recalled, "We had intended to propose that name from Livermore, and things kind of got proposed at the same time from multiple places. I don't know if we can claim that we actually proposed the name, but we had intended it." In internal discussions, IUPAC asked the JINR if they wanted the element to be spelled "oganeson" to match the Russian spelling more closely. Oganessian and the JINR refused this offer, citing the Soviet-era practice of transliterating names into the Latin alphabet under the rules of the French language ("Oganessian" is such a transliteration) and arguing that "oganesson" would be easier to link to the person. In June 2016, IUPAC announced that the discoverers planned to give the element the name oganesson (symbol: Og). The name became official on 28 November 2016. In 2017, Oganessian commented on the naming: The naming ceremony for moscovium, tennessine, and oganesson was held on 2 March 2017 at the Russian Academy of Sciences in Moscow. In a 2019 interview, when asked what it was like to see his name in the periodic table next to Einstein, Mendeleev, the Curies, and Rutherford, Oganessian responded: Characteristics Other than nuclear properties, no properties of oganesson or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that it decays very quickly. Thus only predictions are available. Nuclear stability and isotopes The stability of nuclei quickly decreases with the increase in atomic number after curium, element 96, whose most stable isotope, 247Cm, has a half-life four orders of magnitude longer than that of any subsequent element. All nuclides with an atomic number above 101 undergo radioactive decay with half-lives shorter than 30 hours. No elements with atomic numbers above 82 (after lead) have stable isotopes. This is because of the ever-increasing Coulomb repulsion of protons, so that the strong nuclear force cannot hold the nucleus together against spontaneous fission for long. 
Calculations suggest that in the absence of other stabilizing factors, elements with more than 104 protons should not exist. However, researchers in the 1960s suggested that the closed nuclear shells around 114 protons and 184 neutrons should counteract this instability, creating an island of stability in which nuclides could have half-lives reaching thousands or millions of years. While scientists have still not reached the island, the mere existence of the superheavy elements (including oganesson) confirms that this stabilizing effect is real, and in general the known superheavy nuclides become exponentially longer-lived as they approach the predicted location of the island. Oganesson is radioactive, decaying via alpha decay and spontaneous fission, with a half-life that appears to be less than a millisecond. Nonetheless, this is still longer than some predicted values. Calculations using a quantum-tunneling model predict the existence of several heavier isotopes of oganesson with alpha-decay half-lives close to 1 ms. Theoretical calculations done on the synthetic pathways for, and the half-life of, other isotopes have shown that some could be slightly more stable than the synthesized isotope 294Og, most likely 293Og, 295Og, 296Og, 297Og, 298Og, 300Og and 302Og (the last reaching the N = 184 shell closure). Of these, 297Og might provide the best chances for obtaining longer-lived nuclei, and thus might become the focus of future work with this element. Some isotopes with many more neutrons, such as some located around 313Og, could also provide longer-lived nuclei. The isotopes from 291Og to 295Og might be produced as daughters of element 120 isotopes that can be reached in the reactions 249–251Cf+50Ti, 245Cm+48Ca, and 248Cm+48Ca. In a quantum-tunneling model, the alpha decay half-life of was predicted to be with the experimental Q-value published in 2004. Calculation with theoretical Q-values from the macroscopic-microscopic model of Muntian–Hofman–Patyk–Sobiczewski gives somewhat lower but comparable results. Calculated atomic and physical properties Oganesson is a member of group 18, the zero-valence elements. The members of this group are usually inert to most common chemical reactions (for example, combustion) because the outer valence shell is completely filled with eight electrons. This produces a stable, minimum energy configuration in which the outer electrons are tightly bound. It is thought that similarly, oganesson has a closed outer valence shell in which its valence electrons are arranged in a 7s27p6 configuration. Consequently, some expect oganesson to have similar physical and chemical properties to other members of its group, most closely resembling the noble gas above it in the periodic table, radon. Following the periodic trend, oganesson would be expected to be slightly more reactive than radon. However, theoretical calculations have shown that it could be significantly more reactive. In addition to being far more reactive than radon, oganesson may be even more reactive than the elements flerovium and copernicium, which are heavier homologs of the more chemically active elements lead and mercury, respectively. The reason for the possible enhancement of the chemical activity of oganesson relative to radon is an energetic destabilization and a radial expansion of the last occupied 7p-subshell. 
More precisely, considerable spin–orbit interactions between the 7p electrons and the inert 7s electrons effectively lead to a second valence shell closing at flerovium, and a significant decrease in stabilization of the closed shell of oganesson. It has also been calculated that oganesson, unlike the other noble gases, binds an electron with release of energy, or in other words, it exhibits positive electron affinity, due to the relativistically stabilized 8s energy level and the destabilized 7p3/2 level, whereas copernicium and flerovium are predicted to have no electron affinity. Nevertheless, quantum electrodynamic corrections have been shown to be quite significant in reducing this affinity by decreasing the binding in the anion Og− by 9%, thus confirming the importance of these corrections in superheavy elements. 2022 calculations expect the electron affinity of oganesson to be 0.080(6) eV. Monte Carlo simulations of oganesson's molecular dynamics predict it has a melting point of and a boiling point of due to relativistic effects (if these effects are ignored, oganesson would melt at ≈). Thus oganesson would probably be a solid rather than a gas under standard conditions, though still with a rather low melting point. Oganesson is expected to have an extremely broad polarizability, almost double that of radon. Because of its tremendous polarizability, oganesson is expected to have an anomalously low first ionization energy of about 860 kJ/mol, similar to that of cadmium and less than those of iridium, platinum, and gold. This is significantly smaller than the values predicted for darmstadtium, roentgenium, and copernicium, although it is greater than that predicted for flerovium. Its second ionization energy should be around 1560 kJ/mol. Even the shell structure in the nucleus and electron cloud of oganesson is strongly impacted by relativistic effects: the valence and core electron subshells in oganesson are expected to be "smeared out" in a homogeneous Fermi gas of electrons, unlike those of the "less relativistic" radon and xenon (although there is some incipient delocalisation in radon), due to the very strong spin–orbit splitting of the 7p orbital in oganesson. A similar effect for nucleons, particularly neutrons, is incipient in the closed-neutron-shell nucleus 302Og and is strongly in force at the hypothetical superheavy closed-shell nucleus 472164, with 164 protons and 308 neutrons. Studies have also predicted that due to increasing electrostatic forces, oganesson may have a semibubble structure in proton density, having few protons at the center of its nucleus. Moreover, spin–orbit effects may cause bulk oganesson to be a semiconductor, with a band gap of  eV predicted. All the lighter noble gases are insulators instead: for example, the band gap of bulk radon is expected to be  eV. Predicted compounds The only confirmed isotope of oganesson, 294Og, has much too short a half-life to be chemically investigated experimentally. Therefore, no compounds of oganesson have been synthesized yet. Nevertheless, calculations on theoretical compounds have been performed since 1964. It is expected that if the ionization energy of the element is high enough, it will be difficult to oxidize and therefore, the most common oxidation state would be 0 (as for the noble gases); nevertheless, this appears not to be the case. 
Calculations on the diatomic molecule Og2 showed a bonding interaction roughly equivalent to that calculated for Hg2, and a dissociation energy of 6 kJ/mol, roughly 4 times that of Rn2. Most strikingly, it was calculated to have a bond length shorter than in Rn2 by 0.16 Å, which would be indicative of a significant bonding interaction. On the other hand, the compound OgH+ exhibits a dissociation energy (in other words proton affinity of oganesson) that is smaller than that of RnH+. The bonding between oganesson and hydrogen in OgH is predicted to be very weak and can be regarded as a pure van der Waals interaction rather than a true chemical bond. On the other hand, with highly electronegative elements, oganesson seems to form more stable compounds than, for example, copernicium or flerovium. The stable oxidation states +2 and +4 have been predicted to exist in the fluorides OgF2 and OgF4. The +6 state would be less stable due to the strong binding of the 7p1/2 subshell. This is a result of the same spin–orbit interactions that make oganesson unusually reactive. For example, it was shown that the reaction of oganesson with F2 to form the compound OgF2 would release an energy of 106 kcal/mol, of which about 46 kcal/mol come from these interactions. For comparison, the spin–orbit interaction for the similar molecule RnF2 is about 10 kcal/mol out of a formation energy of 49 kcal/mol. The same interaction stabilizes the tetrahedral Td configuration for OgF4, as distinct from the square planar D4h one of XeF4, which RnF4 is also expected to have; this is because OgF4 is expected to have two inert electron pairs (7s and 7p1/2). As such, OgF6 is expected to be unbound, continuing an expected trend in the destabilisation of the +6 oxidation state (RnF6 is likewise expected to be much less stable than XeF6). The Og–F bond will most probably be ionic rather than covalent, rendering the oganesson fluorides non-volatile. OgF2 is predicted to be partially ionic due to oganesson's high electropositivity. Oganesson is predicted to be sufficiently electropositive to form an Og–Cl bond with chlorine. A compound of oganesson and tennessine, OgTs4, has been predicted to be potentially stable chemically. See also Island of stability Superheavy element Transuranium element Extended periodic table Notes References Bibliography Further reading External links 5 ways the heaviest element on the periodic table is really bizarre, ScienceNews.org Element 118: Experiments on discovery, archive of discoverers' official web page Element 118, Heaviest Ever, Reported for 1,000th of a Second, The New York Times. It's Elemental: Oganesson Oganesson at The Periodic Table of Videos (University of Nottingham) On the Claims for Discovery of Elements 110, 111, 112, 114, 116, and 118 (IUPAC Technical Report) WebElements: Oganesson 2002 introductions Chemical elements Chemical elements with face-centered cubic structure Noble gases Synthetic elements
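As noted in the discovery section, the half-life quoted for 294Og was derived from only a handful of observed decays, which is why its uncertainty is so large. For an exponential decay, the maximum-likelihood estimate of the mean lifetime from a few observed individual lifetimes is simply their average, and the half-life is ln 2 times that. A rough sketch follows, using made-up lifetimes purely for illustration (these are not the actual JINR measurements):

```python
from math import log, sqrt

# Hypothetical observed lifetimes of individual atoms, in milliseconds.
# Illustrative values only, not the measured data.
lifetimes_ms = [0.4, 1.1, 2.3]

n = len(lifetimes_ms)
mean_lifetime = sum(lifetimes_ms) / n   # ML estimate of the mean lifetime tau
half_life = log(2) * mean_lifetime      # t_1/2 = tau * ln 2

# For an exponential distribution the relative statistical uncertainty of this
# estimate scales like 1/sqrt(n), so three events give roughly +/- 60%.
relative_error = 1 / sqrt(n)

print(f"half-life ≈ {half_life:.2f} ms  (relative uncertainty ≈ {relative_error:.0%})")
```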
Oganesson
[ "Physics", "Chemistry", "Materials_science" ]
5,049
[ "Matter", "Noble gases", "Chemical elements", "Synthetic materials", "Nonmetals", "Synthetic elements", "Atoms", "Radioactivity" ]
62,247
https://en.wikipedia.org/wiki/Backus%E2%80%93Naur%20form
In computer science, BackusNaur form (BNF; ; Backus normal form) is a notation used to describe the syntax of programming languages or other formal languages. It was developed by John Backus and Peter Naur. BNF can be described as a metasyntax notation for context-free grammars. Backus–Naur form is applied wherever exact descriptions of languages are needed, such as in official language specifications, in manuals, and in textbooks on programming language theory. BNF can be used to describe document formats, instruction sets, and communication protocols. Over time, many extensions and variants of the original Backus–Naur notation have been created; some are exactly defined, including extended Backus–Naur form (EBNF) and augmented Backus–Naur form (ABNF). Overview BNFs describe how to combine different symbols to produce a syntactically correct sequence. BNFs consist of three components: a set of non-terminal symbols, a set of terminal symbols, and rules for replacing non-terminal symbols with a sequence of symbols. These so-called "derivation rules" are written as <symbol> ::= __expression__ where: <symbol> is a nonterminal variable that is always enclosed between the pair <>. means that the symbol on the left must be replaced with the expression on the right. __expression__ consists of one or more sequences of either terminal or nonterminal symbols where each sequence is separated by a vertical bar "|" indicating a choice, the whole being a possible substitution for the symbol on the left. All syntactically correct sequences must be generated in the following manner: Initialize the sequence so that it just contains one start symbol. Apply derivation rules to this start symbol and the ensuing sequences of symbols. Applying rules in this manner can produce longer and longer sequences, so many BNF definitions allow for a special "delete" symbol to be included in the specification. We can specify a rule that allows us to replace some symbols with this "delete" symbol, which is meant to indicate that we can remove the symbols from our sequence and still have a syntactically correct sequence. Example As an example, consider this possible BNF for a U.S. postal address: <postal-address> ::= <name-part> <street-address> <zip-part> <name-part> ::= <personal-part> <last-name> <opt-suffix-part> <EOL> | <personal-part> <name-part> <personal-part> ::= <first-name> | <initial> "." <street-address> ::= <house-num> <street-name> <opt-apt-num> <EOL> <zip-part> ::= <town-name> "," <state-code> <ZIP-code> <EOL> <opt-suffix-part> ::= "Sr." | "Jr." | <roman-numeral> | "" <opt-apt-num> ::= "Apt" <apt-num> | "" This translates into English as: A postal address consists of a name-part, followed by a street-address part, followed by a zip-code part. A name-part consists of either: a personal-part followed by a last name followed by an optional suffix (Jr. Sr., or dynastic number) and end-of-line, or a personal part followed by a name part (this rule illustrates the use of recursion in BNFs, covering the case of people who use multiple first and middle names and initials). A personal-part consists of either a first name or an initial followed by a dot. A street address consists of a house number, followed by a street name, followed by an optional apartment specifier, followed by an end-of-line. A zip-part consists of a town-name, followed by a comma, followed by a state code, followed by a ZIP-code followed by an end-of-line. An opt-suffix-part consists of a suffix, such as "Sr.", "Jr." 
or a roman-numeral, or an empty string (i.e. nothing). An opt-apt-num consists of a prefix "Apt" followed by an apartment number, or an empty string (i.e. nothing). Note that many things (such as the format of a first-name, apartment number, ZIP-code, and Roman numeral) are left unspecified here. If necessary, they may be described using additional BNF rules. History The idea of describing the structure of language using rewriting rules can be traced back to at least the work of Pāṇini, an ancient Indian Sanskrit grammarian and a revered scholar in Hinduism who lived sometime between the 6th and 4th century BC. His notation to describe Sanskrit word structure is equivalent in power to that of Backus and has many similar properties. In Western society, grammar was long regarded as a subject for teaching, rather than scientific study; descriptions were informal and targeted at practical usage. In the first half of the 20th century, linguists such as Leonard Bloomfield and Zellig Harris started attempts to formalize the description of language, including phrase structure. Meanwhile, string rewriting rules as formal logical systems were introduced and studied by mathematicians such as Axel Thue (in 1914), Emil Post (1920s–40s) and Alan Turing (1936). Noam Chomsky, teaching linguistics to students of information theory at MIT, combined linguistics and mathematics by taking what is essentially Thue's formalism as the basis for the description of the syntax of natural language. He also introduced a clear distinction between generative rules (those of context-free grammars) and transformation rules (1956). John Backus, a programming language designer at IBM, proposed a metalanguage of "metalinguistic formulas" to describe the syntax of the new programming language IAL, known today as ALGOL 58 (1959). His notation was first used in the ALGOL 60 report. BNF is a notation for Chomsky's context-free grammars. Backus may have been familiar with Chomsky's work, but there are some doubts about this. As proposed by Backus, the formula defined "classes" whose names are enclosed in angle brackets. For example, <ab>. Each of these names denotes a class of basic symbols. Further development of ALGOL led to ALGOL 60. In the committee's 1963 report, Peter Naur called Backus's notation Backus normal form. Donald Knuth argued that BNF should rather be read as Backus–Naur form, as it is "not a normal form in the conventional sense", unlike, for instance, Chomsky normal form. The name Pāṇini Backus form was also once suggested in view of the fact that the expansion Backus normal form may not be accurate, and that Pāṇini had independently developed a similar notation earlier. BNF is described by Peter Naur in the ALGOL 60 report as metalinguistic formula: Another example from the ALGOL 60 report illustrates a major difference between the BNF metalanguage and a Chomsky context-free grammar. Metalinguistic variables do not require a rule defining their formation. Their formation may simply be described in natural language within the <> brackets. The following ALGOL 60 report section 2.3 comments specification, exemplifies how this works: For the purpose of including text among the symbols of a program the following "comment" conventions hold: Equivalence here means that any of the three structures shown in the left column may be replaced, in any occurrence outside of strings, by the symbol shown in the same line in the right column without any effect on the action of the program. 
Naur changed two of Backus's symbols to commonly available characters. The ::= symbol was originally a :≡. The | symbol was originally the word "or" (with a bar over it). BNF is very similar to canonical-form Boolean algebra equations that are, and were at the time, used in logic-circuit design. Backus was a mathematician and the designer of the FORTRAN programming language. The study of Boolean algebra is commonly part of a mathematics curriculum. Neither Backus nor Naur described the names enclosed in < > as non-terminals. Chomsky's terminology was not originally used in describing BNF. Naur later described them as classes in ALGOL course materials. In the ALGOL 60 report they were called metalinguistic variables. Anything other than the metasymbols ::=, |, and class names enclosed in < > are symbols of the language being defined. The metasymbol ::= is to be interpreted as "is defined as". The | is used to separate alternative definitions and is interpreted as "or". The metasymbols < > are delimiters enclosing a class name. BNF is described as a metalanguage for talking about ALGOL by Peter Naur and Saul Rosen. In 1947 Saul Rosen became involved in the activities of the fledgling Association for Computing Machinery, first on the languages committee that became the IAL group and eventually led to ALGOL. He was the first managing editor of the Communications of the ACM. BNF was first used as a metalanguage to talk about the ALGOL language in the ALGOL 60 report. That is how it is explained in ALGOL programming course material developed by Peter Naur in 1962. Early ALGOL manuals by IBM, Honeywell, Burroughs and Digital Equipment Corporation followed the ALGOL 60 report using it as a metalanguage. Saul Rosen in his book describes BNF as a metalanguage for talking about ALGOL. An example of its use as a metalanguage would be in defining an arithmetic expression: <expr> ::= <term> | <expr> <addop> <term> The first symbol of an alternative may be the class being defined, the repetition, as explained by Naur, having the function of specifying that the alternative sequence can recursively begin with a previous alternative and can be repeated any number of times. For example, above <expr> is defined as a <term> followed by any number of <addop> <term>. In some later metalanguages, such as Schorre's META II, the BNF recursive repeat construct is replaced by a sequence operator and target language symbols defined using quoted strings. The < and > brackets were removed. Parentheses () for mathematical grouping were added. In META II the <expr> rule would therefore be written with quoted terminal strings and the $ (zero-or-more) sequence operator, without angle brackets. These changes enabled META II and its derivative programming languages to define and extend their own metalanguage, at the cost of the ability to use a natural-language description (a metalinguistic variable) as a language construct description. Many spin-off metalanguages were inspired by BNF. See META II, TREE-META, and Metacompiler. A BNF class describes a language construct formation, with formation defined as a pattern or the action of forming the pattern. The class name expr is described in a natural language as a <term> followed by a sequence <addop> <term>. A class is an abstraction; we can talk about it independent of its formation. We can talk about term, independent of its definition, as being added or subtracted in expr. We can talk about a term being a specific data type and how an expr is to be evaluated having specific combinations of data types, or even reordering an expression to group data types and evaluation results of mixed types.
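The <expr> rule just discussed can be made concrete with a small, self-contained sketch; one conventional way to turn such a rule into a hand-written parser is recursive descent, with the left-recursive alternative <expr> <addop> <term> realised as a loop, mirroring Naur's reading of the rule as a <term> followed by any number of <addop> <term>. The token set, simplified <term>, and function names below are illustrative assumptions, not taken from the article.

import re

# Grammar being illustrated (simplified so the example is self-contained):
#   <expr>  ::= <term> | <expr> <addop> <term>
#   <addop> ::= "+" | "-"
#   <term>  ::= an unsigned integer literal

def tokenize(text):
    """Split the input into integer literals and <addop> symbols."""
    return re.findall(r"\d+|[+-]", text)

def parse_term(tokens, pos):
    """Parse a <term>; return (value, next position)."""
    tok = tokens[pos]
    if not tok.isdigit():
        raise SyntaxError(f"expected <term>, got {tok!r}")
    return int(tok), pos + 1

def parse_expr(tokens, pos=0):
    """Parse an <expr>; the left recursion becomes iteration."""
    value, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] in ("+", "-"):
        op = tokens[pos]                      # <addop>
        rhs, pos = parse_term(tokens, pos + 1)
        value = value + rhs if op == "+" else value - rhs
    return value, pos

value, _ = parse_expr(tokenize("12 + 3 - 4"))
print(value)   # prints 11

Parser generators such as yacc, mentioned below, automate exactly this translation from grammar rules to parsing code.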
The natural-language supplement provided specific details of the language class semantics to be used by a compiler implementation and a programmer writing an ALGOL program. Natural-language description further supplemented the syntax as well. The integer rule is a good example of natural and metalanguage used to describe syntax: There are no specifics on white space in the above. As far as the rule states, we could have space between the digits. In the natural language we complement the BNF metalanguage by explaining that the digit sequence can have no white space between the digits. English is only one of the possible natural languages. Translations of the ALGOL reports were available in many natural languages. The origin of BNF is not as important as its impact on programming language development. During the period immediately following the publication of the ALGOL 60 report BNF was the basis of many compiler-compiler systems. Some, like "A Syntax Directed Compiler for ALGOL 60" developed by Edgar T. Irons and "A Compiler Building System" Developed by Brooker and Morris, directly used BNF. Others, like the Schorre Metacompilers, made it into a programming language with only a few changes. <class name> became symbol identifiers, dropping the enclosing <, > and using quoted strings for symbols of the target language. Arithmetic-like grouping provided a simplification that removed using classes where grouping was its only value. The META II arithmetic expression rule shows grouping use. Output expressions placed in a META II rule are used to output code and labels in an assembly language. Rules in META II are equivalent to a class definitions in BNF. The Unix utility yacc is based on BNF with code production similar to META II. yacc is most commonly used as a parser generator, and its roots are obviously BNF. BNF today is one of the oldest computer-related languages still in use. Further examples BNF's syntax itself may be represented with a BNF like the following: <syntax> ::= <rule> | <rule> <syntax> <rule> ::= <opt-whitespace> "<" <rule-name> ">" <opt-whitespace> "::=" <opt-whitespace> <expression> <line-end> <opt-whitespace> ::= " " <opt-whitespace> | "" <expression> ::= <list> | <list> <opt-whitespace> "|" <opt-whitespace> <expression> <line-end> ::= <opt-whitespace> <EOL> | <line-end> <line-end> <list> ::= <term> | <term> <opt-whitespace> <list> <term> ::= <literal> | "<" <rule-name> ">" <literal> ::= '"' <text1> '"' | "'" <text2> "'" <text1> ::= "" | <character1> <text1> <text2> ::= "" | <character2> <text2> <character> ::= <letter> | <digit> | <symbol> <letter> ::= "A" | "B" | "C" | "D" | "E" | "F" | "G" | "H" | "I" | "J" | "K" | "L" | "M" | "N" | "O" | "P" | "Q" | "R" | "S" | "T" | "U" | "V" | "W" | "X" | "Y" | "Z" | "a" | "b" | "c" | "d" | "e" | "f" | "g" | "h" | "i" | "j" | "k" | "l" | "m" | "n" | "o" | "p" | "q" | "r" | "s" | "t" | "u" | "v" | "w" | "x" | "y" | "z" <digit> ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" <symbol> ::= "|" | " " | "!" | "#" | "$" | "%" | "&" | "(" | ")" | "*" | "+" | "," | "-" | "." | "/" | ":" | ";" | ">" | "=" | "<" | "?" | "@" | "[" | "\" | "]" | "^" | "_" | "`" | "{" | "}" | "~" <character1> ::= <character> | "'" <character2> ::= <character> | '"' <rule-name> ::= <letter> | <rule-name> <rule-char> <rule-char> ::= <letter> | <digit> | "-" Note that "" is the empty string. The original BNF did not use quotes as shown in <literal> rule. This assumes that no whitespace is necessary for proper interpretation of the rule. 
<EOL> represents the appropriate line-end specifier (in ASCII, carriage-return, line-feed or both depending on the operating system). <rule-name> and <text> are to be substituted with a declared rule's name/label or literal text, respectively. In the U.S. postal address example above, the entire block-quote is a <syntax>. Each line or unbroken grouping of lines is a rule; for example one rule begins with <name-part> ::=. The other part of that rule (aside from a line-end) is an expression, which consists of two lists separated by a vertical bar |. These two lists consists of some terms (three terms and two terms, respectively). Each term in this particular rule is a rule-name. Variants EBNF There are many variants and extensions of BNF, generally either for the sake of simplicity and succinctness, or to adapt it to a specific application. One common feature of many variants is the use of regular expression repetition operators such as * and +. The extended Backus–Naur form (EBNF) is a common one. Another common extension is the use of square brackets around optional items. Although not present in the original ALGOL 60 report (instead introduced a few years later in IBM's PL/I definition), the notation is now universally recognised. ABNF Augmented Backus–Naur form (ABNF) and Routing Backus–Naur form (RBNF) are extensions commonly used to describe Internet Engineering Task Force (IETF) protocols. Parsing expression grammars build on the BNF and regular expression notations to form an alternative class of formal grammar, which is essentially analytic rather than generative in character. Others Many BNF specifications found online today are intended to be human-readable and are non-formal. These often include many of the following syntax rules and extensions: Optional items enclosed in square brackets: [<item-x>]. Items existing 0 or more times are enclosed in curly brackets or suffixed with an asterisk (*) such as <word> ::= <letter> {<letter>} or <word> ::= <letter> <letter>* respectively. Items existing 1 or more times are suffixed with an addition (plus) symbol, +, such as <word> ::= <letter>+. Terminals may appear in bold rather than italics, and non-terminals in plain text rather than angle brackets. Where items are grouped, they are enclosed in simple parentheses. Software using BNF or variants Software that accepts BNF (or a superset) as input ANTLR, a parser generator written in Java Coco/R, compiler generator accepting an attributed grammar in EBNF DMS Software Reengineering Toolkit, program analysis and transformation system for arbitrary languages GOLD, a BNF parser generator RPA BNF parser. 
Online (PHP) demo parsing: JavaScript, XML XACT X4MR System, a rule-based expert system for programming language translation XPL Analyzer, a tool which accepts simplified BNF for a language and produces a parser for that language in XPL; it may be integrated into the supplied SKELETON program, with which the language may be debugged (a SHARE contributed program, which was preceded by A Compiler Generator) bnfparser2, a universal syntax verification utility bnf2xml, Markup input with XML tags using advanced BNF matching JavaCC, Java Compiler Compiler tm (JavaCC tm) - The Java Parser Generator Similar software GNU bison, GNU version of yacc Yacc, parser generator (most commonly used with the Lex preprocessor) Racket's parser tools, lex and yacc-style parsing (Beautiful Racket edition) Qlik Sense, a BI tool, uses a variant of BNF for scripting BNF Converter (BNFC), operating on a variant called "labeled Backus–Naur form" (LBNF). In this variant, each production for a given non-terminal is given a label, which can be used as a constructor of an algebraic data type representing that nonterminal. The converter is capable of producing types and parsers for abstract syntax in several languages, including Haskell and Java See also Augmented Backus–Naur form (ABNF) Compiler Description Language (CDL) Definite clause grammar – a more expressive alternative to BNF used in Prolog Extended Backus–Naur form (EBNF) Meta-II – an early compiler writing tool and notation Syntax diagram – railroad diagram Translational Backus–Naur form (TBNF) Van Wijngaarden grammar – used in preference to BNF to define Algol68 Wirth syntax notation – an alternative to BNF from 1977 References External links . — Augmented BNF for Syntax Specifications: ABNF. — Routing BNF: A Syntax Used in Various Protocol Specifications. ISO/IEC 14977:1996(E) Information technology – Syntactic metalanguage – Extended BNF, available from or from (the latter is missing the cover page, but is otherwise much cleaner) Language grammars , the original BNF. , freely available BNF grammars for SQL. , freely available BNF grammars for SQL, Ada, Java. , freely available BNF/EBNF grammars for C/C++, Pascal, COBOL, Ada 95, PL/I. . Includes parts 11, 14, and 21 of the ISO 10303 (STEP) standard. Formal languages Compiler construction Metalanguages
Backus–Naur form
[ "Mathematics" ]
4,789
[ "Formal languages", "Mathematical logic" ]
4,262,587
https://en.wikipedia.org/wiki/Thermodynamic%20diagrams
Thermodynamic diagrams are diagrams used to represent the thermodynamic states of a material (typically a fluid) and the consequences of manipulating this material. For instance, a temperature–entropy diagram (T–s diagram) may be used to demonstrate the behavior of a fluid as it is changed by a compressor. Overview Especially in meteorology they are used to analyze the actual state of the atmosphere derived from the measurements of radiosondes, usually obtained with weather balloons. In such diagrams, temperature and humidity values (represented by the dew point) are displayed with respect to pressure. Thus the diagram gives at a first glance the actual atmospheric stratification and vertical water vapor distribution. Further analysis gives the actual base and top height of convective clouds or possible instabilities in the stratification. By estimating the energy input due to solar radiation, it is possible to predict the 2 m (6.6 ft) temperature, humidity, and wind during the day, the development of the boundary layer of the atmosphere, the occurrence and development of clouds, and the conditions for soaring flight during the day. The main feature of thermodynamic diagrams is the equivalence between the area in the diagram and energy. When air changes pressure and temperature during a process and traces out a closed curve within the diagram, the area enclosed by this curve is proportional to the energy which has been gained or released by the air. Types of thermodynamic diagrams General purpose diagrams include: PV diagram T–s diagram h–s (Mollier) diagram Psychrometric chart Cooling curve Indicator diagram Saturation vapor curve Thermodynamic surface Specific to weather services, there are mainly four different types of thermodynamic diagrams used: Skew-T log-P diagram Tephigram Emagram Stüve diagram All four diagrams are derived from the physical P–alpha diagram, which combines pressure (P) and specific volume (alpha) as its basic coordinates. The P–alpha diagram shows a strong deformation of the grid for atmospheric conditions and is therefore not useful in atmospheric sciences. The first three diagrams are constructed from the P–alpha diagram by using appropriate coordinate transformations. The Stüve diagram is not a thermodynamic diagram in a strict sense, since it does not display the energy–area equivalence, but due to its simpler construction it is preferred in education. Another widely used diagram that does not display the energy–area equivalence is the θ-z diagram (theta-height diagram), extensively used in boundary layer meteorology. Characteristics Thermodynamic diagrams usually show a net of five different lines: isobars = lines of constant pressure isotherms = lines of constant temperature dry adiabats = lines of constant potential temperature, representing the temperature of a rising parcel of dry air saturated adiabats or pseudoadiabats = lines representing the temperature of a rising parcel saturated with water vapor mixing ratio = lines representing the dewpoint of a rising parcel From these lines, the lapse rates, the dry adiabatic lapse rate (DALR) and the moist adiabatic lapse rate (MALR), are obtained. With the help of these lines, parameters such as the cloud condensation level, the level of free convection, the onset of cloud formation, etc. can be derived from the soundings.
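Two of the line families just listed, the dry adiabats and the dewpoint-related lines, are easy to reproduce numerically, and doing so also illustrates how the cloud condensation level mentioned above is obtained. The sketch below is an illustration only; the physical constants and the 125 m per kelvin rule of thumb for the lifted condensation level are common textbook approximations assumed here, not values taken from this text.

# Dry adiabats are lines of constant potential temperature,
#   theta = T * (P0 / P) ** (R_d / c_p)   (Poisson's equation),
# and a rough cloud-base (lifting condensation level) estimate follows
# from the spread between surface temperature and dew point.

R_D = 287.05    # gas constant for dry air, J/(kg K)
C_P = 1004.0    # specific heat of dry air at constant pressure, J/(kg K)
P0 = 1000.0     # reference pressure, hPa

def potential_temperature(t_kelvin, p_hpa):
    """Temperature the parcel would have if brought dry-adiabatically to P0."""
    return t_kelvin * (P0 / p_hpa) ** (R_D / C_P)

def dry_adiabat(theta_kelvin, p_hpa):
    """Temperature along the dry adiabat labelled theta, evaluated at pressure p."""
    return theta_kelvin * (p_hpa / P0) ** (R_D / C_P)

def lcl_height_m(t_surface_c, dewpoint_c):
    """Approximate lifting condensation level above ground (about 125 m per K of spread)."""
    return 125.0 * (t_surface_c - dewpoint_c)

theta = potential_temperature(288.15, 900.0)          # a 15 degC parcel at 900 hPa
print(round(theta, 1), "K potential temperature")
print(round(dry_adiabat(theta, 700.0), 1), "K at 700 hPa on the same dry adiabat")
print(round(lcl_height_m(25.0, 15.0)), "m approximate cloud base for T = 25 C, Td = 15 C")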
Example The path, or series of states, through which a system passes from an initial equilibrium state to a final equilibrium state can be viewed graphically on pressure-volume (P-V), pressure-temperature (P-T), and temperature-entropy (T-s) diagrams. There are an infinite number of possible paths from an initial point to an end point in a process. In many cases the path matters; however, changes in the thermodynamic properties depend only on the initial and final states and not upon the path. Consider a gas in a cylinder with a free-floating piston resting on top of a volume of gas V1 at a temperature T1. If the gas is heated so that the temperature of the gas goes up to T2 while the piston is allowed to rise to V2 as in Figure 1, then the pressure is kept the same in this process because the free-floating piston is allowed to rise, making the process an isobaric, or constant-pressure, process. This process path is a straight horizontal line from state one to state two on a P-V diagram. It is often valuable to calculate the work done in a process. The work done in a process is the area beneath the process path on a P-V diagram (Figure 2). If the process is isobaric, then the work done on the piston is easily calculated. For example, if the gas expands slowly against the piston, the work done by the gas to raise the piston is the force F times the distance d. But the force is just the pressure P of the gas times the area A of the piston, F = PA. Thus W = Fd = PAd = P(V2 − V1). Now suppose that the piston was not able to move smoothly within the cylinder due to static friction with the walls of the cylinder. Assuming that the temperature was increased slowly, you would find that the process path is not straight and no longer isobaric; instead the gas would undergo an isometric process until the pressure force exceeded the frictional force, and then would undergo an isothermal process back to an equilibrium state. This process would be repeated until the end state is reached (see Figure 3). The work done on the piston in this case would be different, due to the additional work required to overcome the friction. The work done due to friction would be the difference between the work done along these two process paths. Many engineers neglect friction at first in order to generate a simplified model. In a more accurate model, the height of the highest point, i.e. the maximum pressure needed to surpass the static friction, would be proportional to the coefficient of friction, and the slope going back down to the normal pressure would be the same as an isothermal process, provided the temperature was increased at a slow enough rate. Another path in this process is an isometric process. This is a process in which volume is held constant, which shows as a vertical line on a P-V diagram (Figure 3). Since the piston is not moving during this process, no work is being done. See also Thermodynamics Timeline of thermodynamics References The Physics of Atmospheres by John Houghton, Cambridge University Press 2002. Especially chapter 3.3. deals solely with the tephigram. German version of Handbook of meteorological soaring flight from the Organisation Scientifique et Technique Internationale du Vol à Voile (OSTIV) (chapter 2.3) Further reading Handbook of meteorological forecasting for soaring flight WMO Technical Note No. 158, especially chapter 2.3. External links www.met.tamu.edu/../aws-tr79-006.pdf A very large technical manual (164 pages) on how to use the diagrams.
www.comet.ucar.edu/../sld010.htm A course on how to use diagrams at Comet, the 'Cooperative Program for Operational Meteorology, Education and Training'. diagrams Diagrams
Thermodynamic diagrams
[ "Physics", "Chemistry", "Mathematics" ]
1,472
[ "Thermodynamics", "Dynamical systems" ]
4,262,792
https://en.wikipedia.org/wiki/Perfectly%20matched%20layer
A perfectly matched layer (PML) is an artificial absorbing layer for wave equations, commonly used to truncate computational regions in numerical methods to simulate problems with open boundaries, especially in the FDTD and FE methods. The key property of a PML that distinguishes it from an ordinary absorbing material is that it is designed so that waves incident upon the PML from a non-PML medium do not reflect at the interface—this property allows the PML to strongly absorb outgoing waves from the interior of a computational region without reflecting them back into the interior. PML was originally formulated by Berenger in 1994 for use with Maxwell's equations, and since that time there have been several related reformulations of PML for both Maxwell's equations and for other wave-type equations, such as elastodynamics, the linearized Euler equations, Helmholtz equations, and poroelasticity. Berenger's original formulation is called a split-field PML, because it splits the electromagnetic fields into two unphysical fields in the PML region. A later formulation that has become more popular because of its simplicity and efficiency is called uniaxial PML or UPML, in which the PML is described as an artificial anisotropic absorbing material. Although both Berenger's formulation and UPML were initially derived by manually constructing the conditions under which incident plane waves do not reflect from the PML interface from a homogeneous medium, both formulations were later shown to be equivalent to a much more elegant and general approach: stretched-coordinate PML. In particular, PMLs were shown to correspond to a coordinate transformation in which one (or more) coordinates are mapped to complex numbers; more technically, this is actually an analytic continuation of the wave equation into complex coordinates, replacing propagating (oscillating) waves by exponentially decaying waves. This viewpoint allows PMLs to be derived for inhomogeneous media such as waveguides, as well as for other coordinate systems and wave equations. Technical description Specifically, for a PML designed to absorb waves propagating in the x direction, the following transformation is included in the wave equation. Wherever an x derivative appears in the wave equation, it is replaced by: ∂/∂x → [1/(1 + iσ(x)/ω)] ∂/∂x, where ω is the angular frequency and σ(x) is some function of x. Wherever σ is positive, propagating waves are attenuated because: e^(i(kx − ωt)) → e^(i(kx − ωt)) e^(−(k/ω) ∫₀ˣ σ(x′) dx′), where we have taken a planewave propagating in the +x direction (for k > 0) and applied the transformation (analytic continuation) to complex coordinates: x → x + (i/ω) ∫₀ˣ σ(x′) dx′, or equivalently dx → (1 + iσ(x)/ω) dx. The same coordinate transformation causes waves to attenuate whenever their x dependence is of the form e^(ikx) for some propagation constant k: this includes planewaves propagating at some angle with the x axis and also transverse modes of a waveguide. The above coordinate transformation can be left as-is in the transformed wave equations, or can be combined with the material description (e.g. the permittivity and permeability in Maxwell's equations) to form a UPML description. The coefficient σ/ω depends upon frequency—this is so the attenuation rate is proportional to k/ω, which is independent of frequency in a homogeneous material (not including material dispersion, e.g. for vacuum) because of the dispersion relation between ω and k. However, this frequency-dependence means that a time domain implementation of PML, e.g.
in the FDTD method, is more complicated than for a frequency-independent absorber, and involves the auxiliary differential equation (ADE) approach (equivalently, i/ω appears as an integral or convolution in time domain). Perfectly matched layers, in their original form, only attenuate propagating waves; purely evanescent waves (exponentially decaying fields) oscillate in the PML but do not decay more quickly. However, the attenuation of evanescent waves can also be accelerated by including a real coordinate stretching in the PML: this corresponds to making σ in the above expression a complex number, where the imaginary part yields a real coordinate stretching that causes evanescent waves to decay more quickly. Limitations of perfectly matched layers PML is widely used and has become the absorbing boundary technique of choice in much of computational electromagnetism. Although it works well in most cases, there are a few important cases in which it breaks down, suffering from unavoidable reflections or even exponential growth. One caveat with perfectly matched layers is that they are only reflectionless for the exact, continuous wave equation. Once the wave equation is discretized for simulation on a computer, some small numerical reflections appear (which vanish with increasing resolution). For this reason, the PML absorption coefficient σ is typically turned on gradually from zero (e.g. quadratically) over a short distance on the scale of the wavelength of the wave. In general, any absorber, whether PML or not, is reflectionless in the limit where it turns on sufficiently gradually (and the absorbing layer becomes thicker), but in a discretized system the benefit of PML is to reduce the finite-thickness "transition" reflection by many orders of magnitude compared to a simple isotropic absorption coefficient. In certain materials, there are "backward-wave" solutions in which group and phase velocity are opposite to one another. This occurs in "left-handed" negative index metamaterials for electromagnetism and also for acoustic waves in certain solid materials, and in these cases the standard PML formulation is unstable: it leads to exponential growth rather than decay, simply because the sign of k is flipped in the analysis above. Fortunately, there is a simple solution in a left-handed medium (for which all waves are backwards): merely flip the sign of σ. A complication, however, is that physical left-handed materials are dispersive: they are only left-handed within a certain frequency range, and therefore the σ coefficient must be made frequency-dependent. Unfortunately, even without exotic materials, one can design certain waveguiding structures (such as a hollow metal tube with a high-index cylinder in its center) that exhibit both backwards- and forwards-wave solutions at the same frequency, such that any sign choice for σ will lead to exponential growth, and in such cases PML appears to be irrecoverably unstable. Another important limitation of PML is that it requires that the medium be invariant in the direction orthogonal to the boundary, in order to support the analytic continuation of the solution to complex coordinates (the complex "coordinate stretching"). As a consequence, the PML approach is no longer valid (no longer reflectionless at infinite resolution) in the case of periodic media (e.g. photonic crystals or phononic crystals) or even simply a waveguide that enters the boundary at an oblique angle. 
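The gradual, typically quadratic, turn-on of the absorption coefficient described above is easy to demonstrate in one dimension. The following sketch is an illustration only: it terminates a 1-D scalar wave simulation with a damping layer whose coefficient σ is graded quadratically over the last cells of the grid. In one dimension at normal incidence such a graded loss term already absorbs the outgoing pulse well; a production FDTD code would instead implement the split-field or uniaxial PML update equations, which this sketch does not attempt. The grid sizes and coefficients are arbitrary choices for the demonstration, and NumPy is assumed to be available.

import numpy as np

nx, npml, steps = 400, 40, 1200
c, dx = 1.0, 1.0
dt = 0.5 * dx / c                      # Courant-stable time step

# Quadratic grading of the damping coefficient inside the layer.
sigma = np.zeros(nx)
sigma_max = 0.3
for i in range(npml):
    sigma[nx - npml + i] = sigma_max * ((i + 1) / npml) ** 2

u_prev = np.zeros(nx)
u = np.zeros(nx)

for n in range(steps):
    lap = np.zeros(nx)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
    # Discretised damped wave equation: u_tt + 2*sigma*u_t = c^2 * u_xx
    u_next = (2.0 * u - (1.0 - sigma * dt) * u_prev
              + (c * dt / dx) ** 2 * lap) / (1.0 + sigma * dt)
    u_next[20] += np.exp(-((n - 60) / 20.0) ** 2)   # Gaussian pulse source
    u_prev, u = u, u_next

# After the pulse has had time to reach the layer, little of it remains in the
# interior; turning sigma on abruptly instead of gradually leaves a visibly
# larger reflected residue, which is the point made in the text above.
print(float(np.abs(u[: nx - npml]).max()))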
See also Cagniard–de Hoop method References External links Animation on the effects of PML (YouTube) Numerical differential equations Partial differential equations Wave mechanics Computational electromagnetics
Perfectly matched layer
[ "Physics" ]
1,447
[ "Physical phenomena", "Computational electromagnetics", "Classical mechanics", "Computational physics", "Waves", "Wave mechanics" ]
13,632,049
https://en.wikipedia.org/wiki/Gamma-ray%20burst%20emission%20mechanisms
Gamma-ray burst emission mechanisms are theories that explain how the energy from a gamma-ray burst progenitor (regardless of the actual nature of the progenitor) is turned into radiation. These mechanisms are a major topic of research as of 2007. Neither the light curves nor the early-time spectra of GRBs show resemblance to the radiation emitted by any familiar physical process. Compactness problem It has been known for many years that ejection of matter at relativistic velocities (velocities very close to the speed of light) is a necessary requirement for producing the emission in a gamma-ray burst. GRBs vary on such short timescales (as short as milliseconds) that the size of the emitting region must be very small, or else the time delay due to the finite speed of light would "smear" the emission out in time, wiping out any short-timescale behavior. At the energies involved in a typical GRB, so much energy crammed into such a small space would make the system opaque to photon-photon pair production, making the burst far less luminous and also giving it a very different spectrum from what is observed. However, if the emitting system is moving towards Earth at relativistic velocities, the burst is compressed in time (as seen by an Earth observer, due to the relativistic Doppler effect) and the emitting region inferred from the finite speed of light becomes much smaller than the true size of the GRB (see relativistic beaming). GRBs and internal shocks A related constraint is imposed by the relative timescales seen in some bursts between the short-timescale variability and the total length of the GRB. Often this variability timescale is far shorter than the total burst length. For example, in bursts as long as 100 seconds, the majority of the energy can be released in short episodes less than 1 second long. If the GRB were due to matter moving towards Earth (as the relativistic motion argument enforces), it is hard to understand why it would release its energy in such brief interludes. The generally accepted explanation for this is that these bursts involve the collision of multiple shells traveling at slightly different velocities; so-called "internal shocks". The collision of two thin shells flash-heats the matter, converting enormous amounts of kinetic energy into the random motion of particles, greatly amplifying the energy release due to all emission mechanisms. Which physical mechanisms are at play in producing the observed photons is still an area of debate, but the most likely candidates appear to be synchrotron radiation and inverse Compton scattering. As of 2007 there is no theory that has successfully described the spectrum of all gamma-ray bursts (though some theories work for a subset). However, the so-called Band function (named after David Band) has been fairly successful at fitting, empirically, the spectra of most gamma-ray bursts: A few gamma-ray bursts have shown evidence for an additional, delayed emission component at very high energies (GeV and higher). One theory for this emission invokes inverse Compton scattering. If a GRB progenitor, such as a Wolf-Rayet star, were to explode within a stellar cluster, the resulting shock wave could generate gamma-rays by scattering photons from neighboring stars. About 30% of known galactic Wolf-Rayet stars, are located in dense clusters of O stars with intense ultraviolet radiation fields, and the collapsar model suggests that WR stars are likely GRB progenitors. 
Therefore, a substantial fraction of GRBs are expected to occur in such clusters. As the relativistic matter ejected from an explosion slows and interacts with ultraviolet-wavelength photons, some photons gain energy, generating gamma-rays. Afterglows and external shocks The GRB itself is very rapid, lasting from less than a second up to a few minutes at most. Once it disappears, it leaves behind a counterpart at longer wavelengths (X-ray, UV, optical, infrared, and radio) known as the afterglow that generally remains detectable for days or longer. In contrast to the GRB emission, the afterglow emission is not believed to be dominated by internal shocks. In general, all the ejected matter has by this time coalesced into a single shell traveling outward into the interstellar medium (or possibly the stellar wind) around the star. At the front of this shell of matter is a shock wave referred to as the "external shock" as the still relativistically moving matter ploughs into the tenuous interstellar gas or the gas surrounding the star. As the interstellar matter moves across the shock, it is immediately heated to extreme temperatures. (How this happens is still poorly understood as of 2007, since the particle density across the shock wave is too low to create a shock wave comparable to those familiar in dense terrestrial environments – the topic of "collisionless shocks" is still largely hypothesis but seems to accurately describe a number of astrophysical situations. Magnetic fields are probably critically involved.) These particles, now relativistically moving, encounter a strong local magnetic field and are accelerated perpendicular to the magnetic field, causing them to radiate their energy via synchrotron radiation. Synchrotron radiation is well understood, and the afterglow spectrum has been modeled fairly successfully using this template. It is generally dominated by electrons (which move and therefore radiate much faster than protons and other particles) so radiation from other particles is generally ignored. In general, the GRB assumes the form of a power-law with three break points (and therefore four different power-law segments.) The lowest break point, , corresponds to the frequency below which the GRB is opaque to radiation and so the spectrum attains the form Rayleigh-Jeans tail of blackbody radiation. The two other break points, and , are related to the minimum energy acquired by an electron after it crosses the shock wave and the time it takes an electron to radiate most of its energy, respectively. Depending on which of these two frequencies is higher, two different regimes are possible: Fast cooling () - Shortly after the GRB, the shock wave imparts immense energy to the electrons and the minimum electron Lorentz factor is very high. In this case, the spectrum looks like: Slow cooling () – Later after the GRB, the shock wave has slowed down and the minimum electron Lorentz factor is much lower.: The afterglow changes with time. It must fade, obviously, but the spectrum changes as well. For the simplest case of adiabatic expansion into a uniform-density medium, the critical parameters evolve as: Here is the flux at the current peak frequency of the GRB spectrum. (During fast-cooling this is at ; during slow-cooling it is at .) Note that because drops faster than , the system eventually switches from fast-cooling to slow-cooling. 
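For reference, the evolution referred to in the preceding sentences is usually quoted, in the standard synchrotron afterglow model for adiabatic expansion into a uniform-density medium, with the following scalings in observer time t; the notation ν_m for the injection break, ν_c for the cooling break and F_ν,max for the peak flux is the conventional one and is assumed here rather than taken from this text:

\nu_c \propto t^{-1/2}, \qquad \nu_m \propto t^{-3/2}, \qquad F_{\nu,\mathrm{max}} \approx \text{constant}.

Since ν_m then falls faster than ν_c, the ordering of the two breaks eventually reverses, which is the switch from fast cooling to slow cooling noted above.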
Different scalings are derived for radiative evolution and for a non-constant-density environment (such as a stellar wind), but share the general power-law behavior observed in this case. Several other known effects can modify the evolution of the afterglow: Reverse shocks and the optical flash There can be "reverse shocks", which propagate back into the shocked matter once it begins to encounter the interstellar medium. The twice-shocked material can produce a bright optical/UV flash, which has been seen in a few GRBs, though it appears not to be a common phenomenon. Refreshed shocks and late-time flares There can be "refreshed" shocks if the central engine continues to release fast-moving matter in small amounts even out to late times, these new shocks will catch up with the external shock to produce something like a late-time internal shock. This explanation has been invoked to explain the frequent flares seen in X-rays and at other wavelengths in many bursts, though some theorists are uncomfortable with the apparent demand that the progenitor (which one would think would be destroyed by the GRB) remains active for very long. Jet effects Gamma-ray burst emission is believed to be released in jets, not spherical shells. Initially the two scenarios are equivalent: the center of the jet is not "aware" of the jet edge, and due to relativistic beaming we only see a small fraction of the jet. However, as the jet slows down, two things eventually occur (each at about the same time): First, information from the edge of the jet that there is no pressure to the side propagates to its center, and the jet matter can spread laterally. Second, relativistic beaming effects subside, and once Earth observers see the entire jet the widening of the relativistic beam is no longer compensated by the fact that we see a larger emitting region. Once these effects appear the jet fades very rapidly, an effect that is visible as a power-law "break" in the afterglow light curve. This is the so-called "jet break" that has been seen in some events and is often cited as evidence for the consensus view of GRBs as jets. Many GRB afterglows do not display jet breaks, especially in the X-ray, but they are more common in the optical light curves. Though as jet breaks generally occur at very late times (~1 day or more) when the afterglow is quite faint, and often undetectable, this is not necessarily surprising. Dust extinction and hydrogen absorption There may be dust along the line of sight from the GRB to Earth, both in the host galaxy and in the Milky Way. If so, the light will be attenuated and reddened and an afterglow spectrum may look very different from that modeled. At very high frequencies (far-ultraviolet and X-ray) interstellar hydrogen gas becomes a significant absorber. In particular, a photon with a wavelength of less than 91 nanometers is energetic enough to completely ionize neutral hydrogen and is absorbed with almost 100% probability even through relatively thin gas clouds. (At much shorter wavelengths the probability of absorption begins to drop again, which is why X-ray afterglows are still detectable.) As a result, observed spectra of very high-redshift GRBs often drop to zero at wavelengths less than that of where this hydrogen ionization threshold (known as the Lyman break) would be in the GRB host's reference frame. Other, less dramatic hydrogen absorption features are also commonly seen in high-z GRBs, such as the Lyman alpha forest. References Gamma-ray bursts
Gamma-ray burst emission mechanisms
[ "Physics", "Astronomy" ]
2,200
[ "Physical phenomena", "Stellar phenomena", "Astronomical events", "Gamma-ray bursts" ]
13,633,477
https://en.wikipedia.org/wiki/Direct%20integration%20of%20a%20beam
Direct integration is a structural analysis method for measuring internal shear, internal moment, rotation, and deflection of a beam. For a beam with an applied distributed weight w(x), taking downward to be positive, the internal shear force V(x) is given by taking the negative integral of the weight: V(x) = −∫ w(x) dx. The internal moment M(x) is the integral of the internal shear: M(x) = ∫ V(x) dx. The angle of rotation from the horizontal, θ(x), is the integral of the internal moment divided by the product of the Young's modulus E and the area moment of inertia I: θ(x) = (1/EI) ∫ M(x) dx. Integrating the angle of rotation obtains the vertical displacement ν(x): ν(x) = ∫ θ(x) dx. Integrating Each time an integration is carried out, a constant of integration needs to be obtained. These constants are determined by using either the forces at supports, or at free ends. For internal shear and moment, the constants can be found by analyzing the beam's free body diagram. For rotation and displacement, the constants are found using conditions dependent on the type of supports. For a cantilever beam, the fixed support has zero rotation and zero displacement. For a beam supported by a pin and roller, both the supports have zero displacement. Sample calculations Take the beam shown at right, 15 m long, supported by a pin at the left and a roller at the right. There are no applied moments, the distributed weight is a constant w = 10 kN/m, and - due to symmetry - each support applies a 75 kN vertical force to the beam. Taking x as the distance from the pin, V(x) = −∫ w dx = −10x + C1, where the constant C1 represents the applied loads. For these calculations, the only load having an effect on the beam is the 75 kN load applied by the pin at x = 0, giving V(x) = 75 − 10x. Integrating the internal shear, M(x) = ∫ V(x) dx = 75x − 5x² + C2, where, because there is no applied moment, C2 = 0. Assuming an EI value of 1 kN·m² (for simplicity; real EI values for structural members such as steel are normally greater by powers of ten), EI·θ(x) = 37.5x² − (5/3)x³ + C3 and EI·ν(x) = 12.5x³ − (5/12)x⁴ + C3·x + C4. Because of the vertical supports at each end of the beam, the displacement ν at x = 0 and x = 15 m is zero. Substituting (x = 0, ν(0) = 0) and (x = 15 m, ν(15 m) = 0), we can solve for the constants C3 = −1406.25 and C4 = 0, yielding EI·θ(x) = 37.5x² − (5/3)x³ − 1406.25 and EI·ν(x) = 12.5x³ − (5/12)x⁴ − 1406.25x. For the given EI value, the maximum displacement, at x = 7.5 m, is approximately 440 times the length of the beam. For a more realistic situation, such as a uniform load of 1 kN/m and an EI value of 5,000 kN·m², the displacement would be approximately 13 cm. Note that for the rotation the units are meters divided by meters (or any other units of length which reduce to unity). This is because rotation is given as a slope, the vertical displacement divided by the horizontal change. See also Bending Beam theory Euler–Bernoulli static beam equation Solid Mechanics Virtual Work References Hibbeler, R.C., Mechanics of Materials, sixth edition; Pearson Prentice Hall, 2005. External links Beam Deflection by Double Integration Method
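As an addendum to the sample calculation above, the constants of integration and the quoted deflection can be checked mechanically. The following sketch is an illustration only; it repeats the double integration symbolically for the same assumed data (w = 10 kN/m, span 15 m, 75 kN pin reaction, EI = 1 kN·m²) using the SymPy library.

import sympy as sp

x, C3, C4 = sp.symbols('x C3 C4')
w, L, EI = 10, 15, 1            # kN/m, m, kN*m^2, as in the text
R = w * L / 2                   # each support reaction, kN

V = R - sp.integrate(w, x)               # internal shear: 75 - 10*x
M = sp.integrate(V, x)                   # internal moment (no applied moment)
EI_theta = sp.integrate(M, x) + C3       # EI times the rotation
EI_nu = sp.integrate(EI_theta, x) + C4   # EI times the deflection

# Boundary conditions: zero deflection at both supports.
sol = sp.solve([EI_nu.subs(x, 0), EI_nu.subs(x, L)], [C3, C4])
print(sol)                               # {C3: -1406.25, C4: 0}

nu_mid = EI_nu.subs(sol).subs(x, sp.Rational(L, 2)) / EI
print(float(nu_mid), float(abs(nu_mid)) / L)   # about -6592 m, roughly 440 spans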
Direct integration of a beam
[ "Engineering" ]
611
[ "Structural engineering", "Structural analysis", "Mechanical engineering", "Aerospace engineering" ]
13,635,394
https://en.wikipedia.org/wiki/Indo-1
Indo-1 is a popular dye that is used as a ratiometric calcium indicator, similar to Fura-2. In contrast to Fura-2, Indo-1 has dual emission peaks and a single excitation wavelength. The main emission peak in calcium-free solution is 475 nm, while in the presence of calcium the emission is shifted to 400 nm. It is widely used in flow cytometry and laser scanning microscopy due to its single-excitation property. However, its use for confocal microscopy is limited by its photoinstability (photobleaching). Unlike Fura-2, whose ratio is formed from two excitation wavelengths, Indo-1 retains a ratiometric readout from its two emission peaks even with a single excitation source. The pentapotassium salt is commercially available and is preferred to the free acid because of its higher solubility in water. While Indo-1 itself is not cell-permeant, the pentaacetoxymethyl ester Indo-1 AM enters the cell, where it is cleaved by intracellular esterases to Indo-1. The synthesis and properties of Indo-1 were presented in 1985 by the group of Roger Y. Tsien. In intact heart muscle, Indo-1, in combination with the bioluminescent protein aequorin, can be used as a tool to distinguish between internal and external inotropic regulation processes. References Biochemistry methods Cell imaging Chelating agents Fluorescent dyes Glycol ethers Indoles
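As a practical note on how the ratiometric readout described above is used: the free calcium concentration is commonly back-calculated from the ratio of the two emission bands with the calibration equation of Grynkiewicz, Poenie and Tsien (1985), from the same group credited above with Indo-1. The sketch below is an illustration only; the calibration numbers are placeholders that would have to be measured on the actual instrument, not properties quoted in this text.

def indo1_calcium_nM(ratio, r_min=0.2, r_max=2.5, sf2_over_sb2=3.0, kd_nM=250.0):
    """Estimate free [Ca2+] (nM) from an Indo-1 ~400 nm / ~475 nm emission ratio.

    Uses [Ca2+] = Kd * (R - Rmin) / (Rmax - R) * (Sf2 / Sb2), where Rmin and
    Rmax are the ratios at zero and saturating calcium, and Sf2/Sb2 is the
    475 nm signal of the free dye divided by that of the calcium-bound dye.
    All default values are placeholder calibration constants.
    """
    if not (r_min < ratio < r_max):
        raise ValueError("ratio outside the calibrated range")
    return kd_nM * (ratio - r_min) / (r_max - ratio) * sf2_over_sb2

for r in (0.4, 0.8, 1.5):
    print(r, "->", round(indo1_calcium_nM(r)), "nM")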
Indo-1
[ "Chemistry", "Biology" ]
295
[ "Biochemistry methods", "Microscopy", "Biochemistry", "Chelating agents", "Cell imaging", "Process chemicals" ]
13,636,007
https://en.wikipedia.org/wiki/Sperm-mediated%20gene%20transfer
Sperm-mediated gene transfer (SMGT) is a transgenic technique that transfers genes based on the ability of sperm cells to spontaneously bind to and internalize exogenous DNA and transport it into an oocyte during fertilization to produce genetically modified animals.1 Exogenous DNA refers to DNA that originates outside of the organism. Transgenic animals have been obtained using SMGT, but the efficiency of this technique is low. Low efficiency is mainly due to low uptake of exogenous DNA by the spermatozoa, reducing the chances of fertilizing the oocytes with transfected spermatozoa.2 In order to successfully produce transgenic animals by SMGT, the spermatozoa must attach the exogenous DNA into the head and these transfected spermatozoa must maintain their functionality to fertilize the oocyte.2 Genetically modified animals produced by SMGT are useful for research in biomedical, agricultural, and veterinary fields of study. SMGT could also be useful in generating animals as models for human diseases or lead to future discoveries relating to human gene therapy. Sperm-Mediated Gene Transfer Mechanism The method for SMGT uses the sperm cell, a natural vector of genetic material, to transport exogenous DNA. The exogenous DNA molecules bind to the cell membrane of the head of the sperm cell. This binding and internalization of the DNA is not a random event. The exogenous DNA interacts with the DNA-binding proteins (DBPs) that are present on the surface of the sperm cell.3 Spermatozoa are naturally protected against the intrusion of exogenous DNA molecules by an inhibitory factor present in mammals’ seminal fluid. This factor blocks the binding of sperm cells and exogenous DNA because in the presence of the inhibitory factor, DBPs lose their ability to bind to exogenous DNA. In the absence of this inhibitory factor, DBPs on sperm cells are able to interact with DNA and can then translocate the DNA into the cell. Therefore, the seminal fluid must be removed from the sperm samples by extensive washing immediately after ejaculation.3 After the DNA is internalized, the exogenous DNA must be integrated into the genome. There are various mechanisms suggested for DNA integration, including integrating DNA at oocyte activation, at nucleus decondensation, or at the formation of the pronuclei, but all of these suggested mechanisms imply that the integration of DNA happens after the penetration of the sperm cell into the oocyte.3 Sperm-Mediated Gene Transfer Controversy Sperm-mediated gene transfer is considered controversial because despite the successes, it has not yet become established as a reliable form of genetic manipulation. Skepticism arises based on the assumption that evolutionary chaos could arise if sperm cells could act as vectors for exogenous DNA.4 Reasonable assumption tells us that because reproductive tracts contain free DNA molecules, sperm cells should be highly resistant to the risk of picking up exogenous DNA molecules. SMGT has been demonstrated experimentally and followed the assumption that nature has barriers against SMGT. These barriers are not always absolute and could explain the inconsistent experimental outcomes of SMGT.4 If there are natural barriers against SMGT, then the successes may only represent unusual cases in which the barriers failed. 
Two barriers have been identified; the inhibitory factor in seminal fluid that prevents binding to foreign DNA molecules and a sperm endogenous nuclease activity that is triggered upon interaction of sperm cells with foreign DNA molecules.4 These protections give reason to believe that unintentional interactions between sperm and exogenous genetic sequences is kept to a minimal. These barriers allow for protection against the threat that every fertilization event could become a potentially mutagenic one.4 Applications of Sperm-Mediated Gene Transfer Animal Transgenesis Transgenic animals have been produced successfully using gene transfer techniques such as sperm-mediated gene transfer. Though this production has been successful, the efficiency of the process is low. Low efficiency of SMGT in the production of transgenic animals is mainly due to poor uptake of the exogenous DNA by the sperm cells, thus reducing the number of fertilized oocytes with transfected spermatozoa.5 From 1989 to 2004, there were over 30 claims for the production of viable transgenic animals using SMGT, but only about 25 percent of these demonstrated a transmission of the transgenes beyond the F0 generation.4 This transmission is required in order to claim usable animal transgenesis. According to previous studies, numerous animal species, including mammals, birds, insects, and fish, have been found susceptible to SMGT techniques, thus indicating that SMGT has broad applicability across a wide variety of Metazoan species.4 Currently, despite the low frequency of transmission of transgenes, the frequency of phenotype modifications and overall animal transgenesis has been as high as 80 percent in some experiments.4 Gene Therapy The potential use of sperm-mediated gene transfer for embryo somatic gene therapy is a possibility for future research. Embryo somatic gene therapy would be advantageous because there seems to be an inverse correlation between the age of the patient and the effectiveness of gene therapy. Therefore, the possibility of gene therapy treatment before irreversible damage occurs would be ideal.4 A majority of the experiments that report successful SMGT provide evidence of post-fertilization transfer and maintenance of transgenes.6 SMGT has potential advantages of being a simple and cost-effective method of gene therapy, especially in contrast with pronuclear microinjection, another transgenic technique. Nevertheless, despite some successes and its potential utility, SMGT is not yet established as a reliable form of genetic modification.6 References 1. Lavitrano M, Giovannoni R, Cerrito MG. 2013. Methods for sperm-mediated gene transfer. Methods Molecular Biology. 927:519-529. 2. García-Vázquez FA, Ruiz S, Grullón LA, Ondiz AD, Gutiérrez-Adán A, Gadea J. 2011. Factors affecting porcine sperm mediated gene transfer. Research in Veterinary Science. 91(3):446-53. 3. Lavitrano M, Busnelli M, Cerrito MG, Giovannoni R, Manzini S, Vargiolu A. 2006. Sperm-mediated gene transfer. Reproduction, Fertility and Development. 18:19-23. 4. Smith K, Spadafora C. 2005. Sperm-mediated gene transfer: applications and implications. BioEssays. 27(5):551-562. 5. Collares T, Campos VF, de Leon PM, Moura, Cavalcanti PV, Amaral, MG, et al. 2011. Transgene transmission in chickens by sperm-mediated gene transfer after seminal plasma removal and exogenous DNA treated with dimethylsulfoxide or N,N-dimethylacetamide. Journal of Biosciences. 36(4):613-620. 6. Smith K. 2004. 
Gene therapy: the potential applicability of gene transfer technology to the human germline. International Journal of Medical Sciences. 1(2):76-91. Genetic engineering
Sperm-mediated gene transfer
[ "Chemistry", "Engineering", "Biology" ]
1,488
[ "Biological engineering", "Genetic engineering", "Molecular biology" ]
13,636,040
https://en.wikipedia.org/wiki/Lema%C3%AEtre%20coordinates
Lemaître coordinates are a particular set of coordinates for the Schwarzschild metric—a spherically symmetric solution to the Einstein field equations in vacuum—introduced by Georges Lemaître in 1932. Changing from Schwarzschild to Lemaître coordinates removes the coordinate singularity at the Schwarzschild radius. Metric The original Schwarzschild coordinate expression of the Schwarzschild metric, in natural units (), is given as where is the invariant interval; is the Schwarzschild radius; is the mass of the central body; are the Schwarzschild coordinates (which asymptotically turn into the flat spherical coordinates); is the speed of light; and is the gravitational constant. This metric has a coordinate singularity at the Schwarzschild radius . Georges Lemaître was the first to show that this is not a real physical singularity but simply a manifestation of the fact that the static Schwarzschild coordinates cannot be realized with material bodies inside the Schwarzschild radius. Indeed, inside the Schwarzschild radius everything falls towards the centre and it is impossible for a physical body to keep a constant radius. A transformation of the Schwarzschild coordinate system from to the new coordinates (the numerator and denominator are switched inside the square-roots), leads to the Lemaître coordinate expression of the metric, where The metric in Lemaître coordinates is non-singular at the Schwarzschild radius . This corresponds to the point . There remains a genuine gravitational singularity at the center, where , which cannot be removed by a coordinate change. The time coordinate used in the Lemaître coordinates is identical to the "raindrop" time coordinate used in the Gullstrand–Painlevé coordinates. The other three: the radial and angular coordinates of the Gullstrand–Painlevé coordinates are identical to those of the Schwarzschild chart. That is, Gullstrand–Painlevé applies one coordinate transform to go from the Schwarzschild time to the raindrop coordinate . Then Lemaître applies a second coordinate transform to the radial component, so as to get rid of the off-diagonal entry in the Gullstrand–Painlevé chart. The notation used in this article for the time coordinate should not be confused with the proper time. It is true that gives the proper time for radially infalling observers; it does not give the proper time for observers traveling along other geodesics. Geodesics The trajectories with ρ constant are timelike geodesics with τ the proper time along these geodesics. They represent the motion of freely falling particles which start out with zero velocity at infinity. At any point their speed is just equal to the escape velocity from that point. The Lemaître coordinate system is synchronous, that is, the global time coordinate of the metric defines the proper time of co-moving observers. The radially falling bodies reach the Schwarzschild radius and the centre within finite proper time. Radial null geodesics correspond to , which have solutions . Here, is just a short-hand for The two signs correspond to outward-moving and inward-moving light rays, respectively. Re-expressing this in terms of the coordinate gives Note that when . This is interpreted as saying that no signal can escape from inside the Schwarzschild radius, with light rays emitted radially either inwards or outwards both end up at the origin as the proper time increases. The Lemaître coordinate chart is not geodesically complete. This can be seen by tracing outward-moving radial null geodesics backwards in time. 
The outward-moving geodesics correspond to the plus sign in the above. Selecting a starting point at , the above equation integrates to as . Going backwards in proper time, one has as . Starting at and integrating forward, one arrives at in finite proper time. Going backwards, one has, once again that as . Thus, one concludes that, although the metric is non-singular at , all outward-traveling geodesics extend to as . See also Kruskal-Szekeres coordinates Eddington–Finkelstein coordinates Lemaître–Tolman metric Introduction to the mathematics of general relativity Stress–energy tensor Metric tensor (general relativity) Relativistic angular momentum References Metric tensors Spacetime Coordinate charts in general relativity General relativity Gravity Exact solutions in general relativity
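For reference, the coordinate transformation and line element discussed above take the following standard form (written here with c = 1 and r_s the Schwarzschild radius; this is the usual textbook presentation rather than text recovered from this article):

d\tau = dt + \frac{\sqrt{r_s/r}}{1 - r_s/r}\,dr, \qquad
d\rho = dt + \frac{\sqrt{r/r_s}}{1 - r_s/r}\,dr,

ds^2 = d\tau^2 - \frac{r_s}{r}\,d\rho^2 - r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right),
\qquad r = \left[\tfrac{3}{2}\,(\rho - \tau)\right]^{2/3} r_s^{1/3},

so that the coefficient of dρ² is finite and nonzero at r = r_s, consistent with the statement above that the Lemaître form is non-singular at the Schwarzschild radius.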
Lemaître coordinates
[ "Physics", "Mathematics", "Engineering" ]
921
[ "Exact solutions in general relativity", "Tensors", "Vector spaces", "Mathematical objects", "Theory of relativity", "General relativity", "Equations", "Space (mathematics)", "Metric tensors", "Coordinate systems", "Spacetime", "Coordinate charts in general relativity" ]
17,588,004
https://en.wikipedia.org/wiki/Marine%20outfall
A marine outfall (or ocean outfall) is a pipeline or tunnel that discharges municipal or industrial wastewater, stormwater, combined sewer overflows (CSOs), cooling water, or brine effluents from water desalination plants to the sea. Usually they discharge under the sea's surface (submarine outfall). In the case of municipal wastewater, effluent is often being discharged after having undergone no or only primary treatment, with the intention of using the assimilative capacity of the sea for further treatment. Submarine outfalls are common throughout the world and probably number in the thousands. The light intensity and salinity in natural sea water disinfects the wastewater to ocean outfall system significantly. More than 200 outfalls alone have been listed in a single international database maintained by the Institute for Hydromechanics at Karlsruhe University for the International Association of Hydraulic Engineering and Research (IAHR) / International Water Association (IWA) Committee on Marine Outfall Systems. The world's first marine outfall was built in Santa Monica, United States, in 1910. In Latin America and the Caribbean there were 134 outfalls with more than 500 m length in 2006 for wastewater disposal alone, according to a survey by the Pan American Center for Sanitary Engineering and Environmental Sciences (CEPIS) of PAHO. According to the survey, the largest number of municipal wastewater outfalls in the region exist in Venezuela (39), Chile (39) and Brazil (22). The world's largest marine outfall stems from the Deer Island Waste Water Treatment Plant located in Boston, United States. Currently, Boston has approximately 235 miles of combined sewers and 37 active CSO outfalls. Many outfalls are simply known by a public used name, e.g. Boston Outfall. Advantages The main advantages of marine outfalls for the discharge of wastewater are: the natural dilution and dispersion of organic matter, pathogens and other pollutants the ability to keep the sewage field submerged because of the depth at which the sewage is being released the greater die-off rate of pathogens due to the greater distance they will have to travel to shore. They also tend to be less expensive than advanced wastewater treatment plants, using the natural assimilative capacity of the sea instead of energy-intensive treatment processes in a plant. For example, preliminary treatment of wastewater is sufficient with an effective outfall and diffuser. The costs of preliminary treatment are about one tenth that of secondary treatment. Preliminary treatment also requires much less land than advanced wastewater treatment. Disadvantages Marine outfalls for partially treated or untreated wastewater remain controversial. The design calculation and computer models for pollution modeling have been criticized, arguing that dilution has been overemphasized and that other mechanisms work in the opposite direction, such as bioaccumulation of toxins, sedimentation of sludge particles and agglomeration of sewage particles with grease. Accumulative mechanisms include slick formation, windrow formation, flocculate formation and agglomerated formation. Grease or wax can interfere with dispersion, so that bacteria and viruses could be carried to remote locations where the concentration of bacterial predators would be low and the die-off rate much lower. 
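The advantage of longer outfalls noted above—greater pathogen die-off during transport to shore—can be sketched with a simple first-order (T90) decay estimate. The figures used below (distance, current speed, T90 value) are illustrative assumptions only, not design values taken from this article:

```python
import numpy as np

# Illustrative first-order die-off estimate for pathogens travelling from a
# submarine outfall diffuser to the shoreline. T90 is the time needed for a
# 90% reduction in concentration; all numbers are placeholder assumptions.
def remaining_fraction(travel_time_h, t90_h):
    """Fraction of the initial pathogen concentration remaining after transport."""
    return 10.0 ** (-travel_time_h / t90_h)

distance_m = 3000.0        # assumed distance from diffuser to shore
current_speed_m_s = 0.1    # assumed onshore current speed
t90_h = 4.0                # assumed T90 for coliforms in sunlit seawater

travel_time_h = distance_m / current_speed_m_s / 3600.0
print(f"travel time: {travel_time_h:.1f} h, "
      f"surviving fraction: {remaining_fraction(travel_time_h, t90_h):.2e}")
```

Under these assumed numbers only about one pathogen in a hundred survives the transit, which is the qualitative point made for long outfalls; real designs use site-specific hydrodynamic and die-off data.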
Technology Outfalls vary in diameter from as narrow as 15 cm to as wide as 8 m; the widest registered outfall in the world with 8 m diameter is located in Navia (Spain) for the discharge of industrial wastewater. Outfalls vary in length from 50 m to 55 km, the longest registered outfalls being the Boston outfall with a length of 16 km and an industrial outfall in Ankleshwar (India) with a length of 55 km. The depth of the deepest point of an outfall varies from 3 m to up to 60 m, the deepest registered outfall being located in Macuto, Vargas (Venezuela) for the discharge of untreated municipal wastewater. Outfall materials include polyethylene, stainless steel, carbon steel, glass-reinforced plastic, reinforced concrete, cast iron or tunnels through rock. Common installation methods for pipelines are float and sink, bottom pull and top pull. Examples Submarine outfalls exist, existed or have been considered in the following locations, among many others: Africa Casablanca (Morocco). Cape Town (South Africa). Asia Manila Bay (Philippines). Mumbai (India). Mutwall ( Sri Lanka). Wellawaththa (Sri Lanka). Lunawa (Sri Lanka). Oceania Anglesea, Victoria. Geelong, Victoria. Sydney (e.g., Bondi Ocean Outfall Sewer) Europe Barcelona, Spain Costa do Estoril (Portugal) Marmara Sea near Istanbul (Turkey) San Sebastián (Spain) Split (Croatia) Thames Estuary downstream of London (UK) Edinburgh, Scotland. North America Honolulu (USA) New York Bight (USA) Southern California Bight (USA). and Victoria, British Columbia, (Canada). Santa Monica, United States (world's first) Boston, United States (world's largest) The city of San Diego used Pacific Ocean dilution of primary treated effluent into the 21st century. Latin America and the Caribbean Cartagena, Colombia Ipanema Beach beach from Rio de Janeiro (Brazil). This outfall, built in 1975, discharges untreated wastewater through a pipe with a diameter of 2.4m and a length of 4,775m at a depth of 27m. Sosua (Dominican Republic). Controversies In the 1960s the city of Sydney decided to build ocean sewage outfalls to discharge partially treated sewage 2–4 km offshore at a cost of US$300 million. In the late 1980s, however, the government promised to upgrade the coastal treatment plants so that sewage would be treated to at least secondary treatment standards before discharge into the ocean. The submarine outfall in Cartagena, Colombia was financed with a loan by the World Bank. It was subsequently challenged by residents claiming that the wastewater caused damage to the marine environment and to fisheries. The case was taken up by the World Bank's Inspection Panel, which contracted two independent three-dimensional modeling efforts in 2006. Both "confirmed that the 2.85km long submarine outfall (was) adequate." For disposal into the ocean, environmental treaty requirements have to met. As international treaties often manage water over countries' borders, wastewater disposal is easier in bodies of water found entirely under the jurisdiction of one country. References Sources IWA Committee on Marine Outfall Systems Salas, Henry J.:Submarine outfalls a viable alternative for sewage discharge of coastal cities in Latin America and the Caribbean, Lima; CEPIS, 2000 External links IAHR/IWA Committee on Marine Outfall Systems Waste treatment technology Environmental engineering Hydrology Hydraulics Oceanography
Marine outfall
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
1,390
[ "Hydrology", "Applied and interdisciplinary physics", "Water treatment", "Oceanography", "Chemical engineering", "Physical systems", "Hydraulics", "Civil engineering", "Environmental engineering", "Waste treatment technology", "Fluid dynamics" ]
17,590,286
https://en.wikipedia.org/wiki/Yoshiaki%20Arata
Yoshiaki Arata (1924 – 2018) was a Japanese physicist. Arata was one of the pioneering researchers into nuclear fusion in Japan and a former professor at Osaka University. He was reported to be a strong nationalist, speaking only Japanese in public. He received the Order of Culture in 2006. Arata started researching and publishing in the field of cold fusion around 1998, together with his colleague Yue Chang Zhang. Further reading "Japan's 'Cold fusion' Effort Produces Startling Claims of Bursts of Neutrons", Wall Street Journal, 4 December 1989 "New life for cold fusion?" New Scientist, 9 December 1989, p. 19 N. Wada and K. Nishizawa, "Nuclear fusion in solid", Japanese Journal of Applied Physics, 1989, 28:L2017 Publications Y. Arata and Y. C. Zhang. "Achievement of intense 'cold' fusion reaction," Proceedings of the Japanese Academy, series B, 1990. 66:l. Y. Arata. Patent Application US 2006/0153752 A References 1924 births 2018 deaths Japanese nuclear physicists Japanese metallurgists Recipients of the Order of Culture Academic staff of Osaka University Osaka University alumni Cold fusion People from Kyoto Prefecture
Yoshiaki Arata
[ "Physics", "Chemistry" ]
238
[ "Nuclear fusion", "Cold fusion", "Nuclear physics" ]
17,590,530
https://en.wikipedia.org/wiki/Sequential%20dynamical%20system
Sequential dynamical systems (SDSs) are a class of graph dynamical systems. They are discrete dynamical systems which generalize many aspects of for example classical cellular automata, and they provide a framework for studying asynchronous processes over graphs. The analysis of SDSs uses techniques from combinatorics, abstract algebra, graph theory, dynamical systems and probability theory. Definition An SDS is constructed from the following components: A finite graph Y with vertex set v[Y] = {1,2, ... , n}. Depending on the context the graph can be directed or undirected. A state xv for each vertex i of Y taken from a finite set K. The system state is the n-tuple x = (x1, x2, ... , xn), and x[i] is the tuple consisting of the states associated to the vertices in the 1-neighborhood of i in Y (in some fixed order). A vertex function fi for each vertex i. The vertex function maps the state of vertex i at time t to the vertex state at time t + 1 based on the states associated to the 1-neighborhood of i in Y. A word w = (w1, w2, ... , wm) over v[Y]. It is convenient to introduce the Y-local maps Fi constructed from the vertex functions by The word w specifies the sequence in which the Y-local maps are composed to derive the sequential dynamical system map F: Kn → Kn as If the update sequence is a permutation one frequently speaks of a permutation SDS to emphasize this point. The phase space associated to a sequential dynamical system with map F: Kn → Kn is the finite directed graph with vertex set Kn and directed edges (x, F(x)). The structure of the phase space is governed by the properties of the graph Y, the vertex functions (fi)i, and the update sequence w. A large part of SDS research seeks to infer phase space properties based on the structure of the system constituents. Example Consider the case where Y is the graph with vertex set {1,2,3} and undirected edges {1,2}, {1,3} and {2,3} (a triangle or 3-circle) with vertex states from K = {0,1}. For vertex functions use the symmetric, boolean function nor : K3 → K defined by nor(x,y,z) = (1+x)(1+y)(1+z) with boolean arithmetic. Thus, the only case in which the function nor returns the value 1 is when all the arguments are 0. Pick w = (1,2,3) as update sequence. Starting from the initial system state (0,0,0) at time t = 0 one computes the state of vertex 1 at time t=1 as nor(0,0,0) = 1. The state of vertex 2 at time t=1 is nor(1,0,0) = 0. Note that the state of vertex 1 at time t=1 is used immediately. Next one obtains the state of vertex 3 at time t=1 as nor(1,0,0) = 0. This completes the update sequence, and one concludes that the Nor-SDS map sends the system state (0,0,0) to (1,0,0). The system state (1,0,0) is in turned mapped to (0,1,0) by an application of the SDS map. See also Graph dynamical system Boolean network Gene regulatory network Dynamic Bayesian network Petri net References Predecessor and Permutation Existence Problems for Sequential Dynamical Systems Genetic Sequential Dynamical Systems Combinatorics Graph theory Networks Abstract algebra Dynamical systems
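The Nor-SDS example above can be reproduced with a short script. This is a minimal sketch of the construction, assuming the triangle graph, nor vertex functions, and update sequence (1, 2, 3) described in the example; it is not taken from any particular SDS software package:

```python
# Minimal sketch of the Nor-SDS example: triangle graph, nor vertex
# functions, update sequence (1, 2, 3). Vertices are indexed 0..2 here.
neighbors = {0: (0, 1, 2), 1: (1, 0, 2), 2: (2, 0, 1)}  # vertex plus its 1-neighborhood

def nor(args):
    # nor returns 1 only when every argument is 0
    return 1 if all(a == 0 for a in args) else 0

def sds_map(state, update_sequence):
    x = list(state)
    for v in update_sequence:          # compose the Y-local maps in the order given by w
        x[v] = nor(x[u] for u in neighbors[v])
    return tuple(x)

state = (0, 0, 0)
for _ in range(3):
    state = sds_map(state, (0, 1, 2))  # word w = (1, 2, 3) in the article's 1-based labels
    print(state)
# The first two iterates match the article: (1, 0, 0) and then (0, 1, 0).
```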
Sequential dynamical system
[ "Physics", "Mathematics" ]
799
[ "Discrete mathematics", "Graph theory", "Combinatorics", "Mathematical relations", "Mechanics", "Abstract algebra", "Algebra", "Dynamical systems" ]
17,597,147
https://en.wikipedia.org/wiki/Pulse-height%20analyzer
A pulse-height analyzer (PHA) is an instrument that accepts electronic pulses of varying heights from particle and event detectors, digitizes the pulse heights, and saves the number of pulses of each height in registers or channels, thus recording a pulse-height spectrum or pulse-height distribution used for later pulse-height analysis. PHAs are used in nuclear- and elementary-particle physics research. A PHA is a specific modification to multichannel analyzers. A pulse-height analyzer is also integrated into particle counters or used as a discrete module to calibrate particle counters. See also Nuclear electronics Experimental particle physics
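The core operation—accumulating digitized pulse heights into channels to form a spectrum—can be illustrated with a short script. The simulated pulse data and channel count below are invented for illustration and do not correspond to any particular instrument:

```python
import numpy as np

# Illustrative only: simulated pulse heights (arbitrary units) binned into
# channels, as a pulse-height analyzer would accumulate a spectrum.
rng = np.random.default_rng(0)
pulse_heights = rng.normal(loc=662.0, scale=30.0, size=10_000)  # fake "photopeak"

n_channels = 1024
counts, edges = np.histogram(pulse_heights, bins=n_channels, range=(0.0, 1024.0))

peak_channel = int(np.argmax(counts))
print(f"peak at channel {peak_channel} with {counts[peak_channel]} counts")
```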
Pulse-height analyzer
[ "Physics" ]
127
[ "Experimental particle physics", "Nuclear and atomic physics stubs", "Particle physics", "Experimental physics", "Nuclear physics", "Particle physics stubs" ]
3,125,808
https://en.wikipedia.org/wiki/Instantaneous%20phase%20and%20frequency
Instantaneous phase and frequency are important concepts in signal processing that occur in the context of the representation and analysis of time-varying functions. The instantaneous phase (also known as local phase or simply phase) of a complex-valued function s(t), is the real-valued function: where arg is the complex argument function. The instantaneous frequency is the temporal rate of change of the instantaneous phase. And for a real-valued function s(t), it is determined from the function's analytic representation, sa(t): where represents the Hilbert transform of s(t). When φ(t) is constrained to its principal value, either the interval or , it is called wrapped phase. Otherwise it is called unwrapped phase, which is a continuous function of argument t, assuming sa(t) is a continuous function of t. Unless otherwise indicated, the continuous form should be inferred. Examples Example 1 where ω > 0. In this simple sinusoidal example, the constant θ is also commonly referred to as phase or phase offset. φ(t) is a function of time; θ is not. In the next example, we also see that the phase offset of a real-valued sinusoid is ambiguous unless a reference (sin or cos) is specified. φ(t) is unambiguously defined. Example 2 where ω > 0. In both examples the local maxima of s(t) correspond to φ(t) = 2N for integer values of N. This has applications in the field of computer vision. Formulations Instantaneous angular frequency is defined as: and instantaneous (ordinary) frequency is defined as: where φ(t) must be the unwrapped phase; otherwise, if φ(t) is wrapped, discontinuities in φ(t) will result in Dirac delta impulses in f(t). The inverse operation, which always unwraps phase, is: This instantaneous frequency, ω(t), can be derived directly from the real and imaginary parts of sa(t), instead of the complex arg without concern of phase unwrapping. 2m1 and m2 are the integer multiples of necessary to add to unwrap the phase. At values of time, t, where there is no change to integer m2, the derivative of φ(t) is For discrete-time functions, this can be written as a recursion: Discontinuities can then be removed by adding 2 whenever Δφ[n] ≤ −, and subtracting 2 whenever Δφ[n] > . That allows φ[n] to accumulate without limit and produces an unwrapped instantaneous phase. An equivalent formulation that replaces the modulo 2 operation with a complex multiplication is: where the asterisk denotes complex conjugate. The discrete-time instantaneous frequency (in units of radians per sample) is simply the advancement of phase for that sample Complex representation In some applications, such as averaging the values of phase at several moments of time, it may be useful to convert each value to a complex number, or vector representation: This representation is similar to the wrapped phase representation in that it does not distinguish between multiples of 2 in the phase, but similar to the unwrapped phase representation since it is continuous. A vector-average phase can be obtained as the arg of the sum of the complex numbers without concern about wrap-around. See also Angular displacement Analytic signal Frequency modulation Group delay Instantaneous amplitude Negative frequency References Further reading Signal processing Digital signal processing Time–frequency analysis Fourier analysis Electrical engineering Audio engineering
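A minimal numerical sketch of these definitions uses the analytic representation formed with a Hilbert transform; the chirp test signal below is an arbitrary illustration, and SciPy's hilbert routine stands in for the analytic-signal construction:

```python
import numpy as np
from scipy.signal import hilbert

# Instantaneous phase and frequency of a real test signal via its analytic
# representation. The chirp parameters below are arbitrary illustration values.
fs = 1000.0                          # sample rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
s = np.cos(2 * np.pi * (50 * t + 20 * t**2))   # frequency sweeping upward from 50 Hz

s_a = hilbert(s)                     # analytic signal  s(t) + j * H{s(t)}
wrapped_phase = np.angle(s_a)        # principal value in (-pi, pi]
unwrapped_phase = np.unwrap(wrapped_phase)

# Instantaneous (ordinary) frequency: derivative of the unwrapped phase over 2*pi
inst_freq = np.diff(unwrapped_phase) * fs / (2 * np.pi)
print(inst_freq[:5], inst_freq[-5:])   # rises from about 50 Hz towards about 90 Hz
```

np.unwrap performs exactly the discontinuity removal described above, adding or subtracting multiples of 2π so that the phase accumulates continuously.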
Instantaneous phase and frequency
[ "Physics", "Technology", "Engineering" ]
750
[ "Telecommunications engineering", "Computer engineering", "Signal processing", "Spectrum (physical sciences)", "Time–frequency analysis", "Frequency-domain analysis", "Electrical engineering", "Audio engineering" ]
3,127,042
https://en.wikipedia.org/wiki/Behavioral%20medicine
Behavioral medicine is concerned with the integration of knowledge in the biological, behavioral, psychological, and social sciences relevant to health and illness. These sciences include epidemiology, anthropology, sociology, psychology, physiology, pharmacology, nutrition, neuroanatomy, endocrinology, and immunology. The term is often used interchangeably, but incorrectly, with health psychology. The practice of behavioral medicine encompasses health psychology, but also includes applied psychophysiological therapies such as biofeedback, hypnosis, and bio-behavioral therapy of physical disorders, aspects of occupational therapy, rehabilitation medicine, and physiatry, as well as preventive medicine. In contrast, health psychology represents a stronger emphasis specifically on psychology's role in both behavioral medicine and behavioral health. Behavioral medicine is especially relevant in recent days, where many of the health problems are primarily viewed as behavioral in nature, as opposed to medical. For example, smoking, leading a sedentary lifestyle, and alcohol use disorder or other substance use disorder are all factors in the leading causes of death in the modern society. Practitioners of behavioral medicine include appropriately qualified nurses, social workers, psychologists, and physicians (including medical students and residents), and these professionals often act as behavioral change agents, even in their medical roles. Behavioral medicine uses the biopsychosocial model of illness instead of the medical model. This model incorporates biological, psychological, and social elements into its approach to disease instead of relying only on a biological deviation from the standard or normal functioning. Origins and history Writings from the earliest civilizations have alluded to the relationship between mind and body, the fundamental concept underlying behavioral medicine. The field of psychosomatic medicine is among its academic forebears, albeit, it is now obsolete as an psychological discipline. In the form in which it is generally understood today, the field dates back to the 1970s. The earliest uses of the term were in the title of a book by Lee Birk (Biofeedback: Behavioral Medicine), published in 1973; and in the names of two clinical research units, the Center for Behavioral Medicine, founded by Ovide F. Pomerleau and John Paul Brady at the University of Pennsylvania in 1973, and the Laboratory for the Study of Behavioral Medicine, founded by William Stewart Agras at Stanford University in 1974. Subsequently, the field burgeoned, and inquiry into behavioral, physiological, and biochemical interactions with health and illness gained prominence under the rubric of behavioral medicine. In 1976, in recognition of this trend, the National Institutes of Health created the Behavioral Medicine Study Section to encourage and facilitate collaborative research across disciplines. The 1977 Yale Conference on Behavioral Medicine and a meeting of the National Academy of Sciences were explicitly aimed at defining and delineating the field in the hopes of helping to guide future research. Based on deliberations at the Yale conference, Schwartz and Weiss proposed the biopsychosocial model, emphasizing the new field's interdisciplinary roots and calling for the integration of knowledge and techniques broadly derived from behavioral and biomedical science. 
Shortly after, Pomerleau and Brady published a book entitled Behavioral Medicine: Theory and Practice, in which they offered an alternative definition focusing more closely on the particular contribution of the experimental analysis of behavior in shaping the field. Additional developments during this period of growth and ferment included the establishment of learned societies (the Society of Behavioral Medicine and the Academy of Behavioral Medicine Research, both in 1978) and of journals (the Journal of Behavioral Medicine in 1977 and the Annals of Behavioral Medicine in 1979). In 1990, at the International Congress of Behavioral Medicine in Sweden, the International Society of Behavioral Medicine was founded to provide, through its many daughter societies and through its own peer-reviewed journal (the International Journal of Behavioral Medicine), an international focus for professional and academic development. Areas of study Behavior-related illnesses Many chronic diseases have a behavioral component, but the following illnesses can be significantly and directly modified by behavior, as opposed to using pharmacological treatment alone: Substance use: many studies demonstrate that medication is most effective when combined with behavioral intervention Hypertension: deliberate attempts to reduce stress can also reduce high blood pressure Insomnia: cognitive and behavioural interventions are recommended as a first line treatment for insomnia Treatment adherence and compliance Medications work best for controlling chronic illness when the patients use them as prescribed and do not deviate from the physician's instructions. This is true for both physiological and mental illnesses. However, in order for the patient to adhere to a treatment regimen, the physician must provide accurate information about the regimen, an adequate explanation of what the patient must do, and should also offer more frequent reinforcement of appropriate compliance. Patients with strong social support systems, particularly through marriages and families, typically exhibit better compliance with their treatment regimen. Examples: telemonitoring through telephone or video conference with the patient case management by using a range of medical professionals to consistently follow up with the patient Doctor-patient relationship It is important for doctors to make meaningful connections and relationships with their patients, instead of simply having interactions with them, which often occurs in a system that relies heavily on specialist care. For this reason, behavioral medicine emphasizes honest and clear communication between the doctor and the patient in the successful treatment of any illness, and also in the maintenance of an optimal level of physical and mental health. Obstacles to effective communication include power dynamics, vulnerability, and feelings of helplessness or fear. Doctors and other healthcare providers also struggle with interviewing difficult or uncooperative patients, as well as giving undesirable medical news to patients and their families. The field has placed increasing emphasis on working towards sharing the power in the relationship, as well as training the doctor to empower the patient to make their own behavioral changes. More recently, behavioral medicine has expanded its area of practice to interventions with providers of medical services, in recognition of the fact that the behavior of providers can have a determinative effect on patient outcomes. 
Objectives include maintaining professional conduct, productivity, and altruism, in addition to preventing burnout, depression, and job dissatisfaction among practitioners. Learning principles, models and theories Behavioral medicine includes understanding the clinical applications of learning principles such as reinforcement, avoidance, generalisation, and discrimination, and of cognitive-social learning models as well, such as the cognitive-social learning model of relapse prevention by Marlatt. Learning theory Learning can be defined as a relatively permanent change in a behavioral tendency occurring as a result of reinforced practice. A behavior is significantly more likely to occur again in the future as a result of learning, making learning important in acquiring maladaptive physiological responses that can lead to psychosomatic disease. This also implies that patients can change their unhealthy behaviors in order to improve their diagnoses or health, especially in treating addictions and phobias. The three primary theories of learning are: classical conditioning operant conditioning modeling Other areas include correcting perceptual bias in diagnostic behavior; remediating clinicians' attitudes that impinge negatively upon patient treatment; and addressing clinicians' behaviors that promote disease development and illness maintenance in patients, whether within a malpractice framework or not. Our modern-day culture involves many acute, microstressors that add up to a large amount of chronic stress over time, leading to disease and illness. According to Hans Selye, the body's stress response is designed to heal and involves three phases of his General Adaptation Syndrome: alarm, resistance, and exhaustion. Applications An example of how to apply the biopsychosocial model that behavioral medicine utilizes is through chronic pain management. Before this model was adopted, physicians were unable to explain why certain patients did not experience pain despite experiencing significant tissue damage, which led them to see the purely biomedical model of disease as inadequate. However, increasing damage to body parts and tissues is generally associated with increasing levels of pain. Doctors started including a cognitive component to pain, leading to the gate control theory and the discovery of the placebo effect. Psychological factors that affect pain include self-efficacy, anxiety, fear, abuse, life stressors, and pain catastrophizing, which is particularly responsive to behavioral interventions. In addition, one's genetic predisposition to psychological distress and pain sensitivity will affect pain management. Finally, social factors such as socioeconomic status, race, and ethnicity also play a role in the experience of pain. Behavioral medicine involves examining all of the many factors associated with illness, instead of just the biomedical aspect, and heals disease by including a component of behavioral change on the part of the patient. In a review published 2011 Fisher et al. illustrates how a behavior medical approach can be applied on a number of common diseases and risk factors such as cardiovascular disease/diabetes, cancer, HIV/AIDS and tobacco use, poor diet, physical inactivity and excessive alcohol consumption. Evidence indicates that behavioral interventions are cost effectiveness and add in terms of quality of life. Importantly behavioral interventions can have broad effects and benefits on prevention, disease management, and well-being across the life span. 
Journals Annals of Behavioral Medicine International Journal of Behavioral Medicine Journal of Behavior Analysis of Sports, Health, Fitness and Behavioral Medicine Journal of Behavioral Health and Medicine Journal of Behavioral Medicine Organizations Association for Behavior Analysis International's Behavioral Medicine Special Interest Group Society of Behavioral Medicine International Society of Behavioral Medicine See also Health psychology Organizational psychology Medical psychology Occupational health psychology References Epidemiology Health Interdisciplinary branches of psychology Neuroanatomy
Behavioral medicine
[ "Environmental_science" ]
1,930
[ "Epidemiology", "Environmental social science" ]
3,127,378
https://en.wikipedia.org/wiki/Bicinchoninic%20acid%20assay
The bicinchoninic acid assay (BCA assay), also known as the Smith assay, after its inventor, Paul K. Smith at the Pierce Chemical Company, now part of Thermo Fisher Scientific, is a biochemical assay for determining the total concentration of protein in a solution (0.5 μg/mL to 1.5 mg/mL), similar to Lowry protein assay, Bradford protein assay or biuret reagent. The total protein concentration is exhibited by a color change of the sample solution from blue to purple in proportion to protein concentration, which can then be measured using colorimetric techniques. The BCA assay was patented by Pierce Chemical Company in 1989 & the patent expired in 2006. Mechanism A stock BCA solution contains the following ingredients in a highly alkaline solution with a pH 11.25: bicinchoninic acid, sodium carbonate, sodium bicarbonate, sodium tartrate, and copper(II) sulfate pentahydrate. The BCA assay primarily relies on two reactions. First, the peptide bonds in protein reduce Cu2+ ions from the copper(II) sulfate to Cu1+ (a temperature dependent reaction). The amount of Cu2+ reduced is proportional to the amount of protein present in the solution. Next, two molecules of bicinchoninic acid chelate with each Cu1+ ion, forming a purple-colored complex that strongly absorbs light at a wavelength of 562 nm. The bicinchoninic acid Cu1+ complex is influenced in protein samples by the presence of cysteine/cystine, tyrosine, and tryptophan side chains. At higher temperatures (37 to 60 °C), peptide bonds assist in the formation of the reaction complex. Incubating the BCA assay at higher temperatures is recommended as a way to increase assay sensitivity while minimizing the variances caused by unequal amino acid composition. The amount of protein present in a solution can be quantified by measuring the absorption spectra and comparing with protein solutions of known concentration. Limitations The BCA assay is largely incompatible with reducing agents and metal chelators, although trace quantities may be tolerated. The BCA assay also reportedly responds to common membrane lipids and phospholipids. Assay variants There are a few alternative variants of the BCA assay: Original BCA assay As described by Smith, the original BCA assay is a two-component protocol. The two reagents are "stable indefinitely at room temperature". Modern (likely exact or highly similar) formulations are available from at least two commercial vendors. The BCA Working solution is generated by mixing Reagent A and Reagent B in a 50:1 ratio, and can be prepared either weekly (it is moderately stable), or as needed. Reagent A 1% w/v BCA-Na2 (CAS: 979-88-4) 2% w/v Na2CO3·H2O (CAS: 5968-11-6) 0.16% w/v Na2 tartrate (CAS: 868-18-8) 0.4% w/v NaOH (CAS: 1310-73-2) 0.95% w/v NaHCO3 (CAS: 144-55-8) Add 50% NaOH or solid NaHCO3 to adjust the pH to 11.25 A suggested but untested alternative formulation in the Smith manuscript is to leave out the NaOH (and presumably not perform the manual pH adjustment to 11.25), but instead to dissolve the other components in a preprepared buffer of 0.25 M Na2CO3 and 0.01 M NaHCO3. Notably, Smith synthesized their own BCA via the Pfitzinger reaction of isatin and acetoin, substituting NaOH for KOH but otherwise following the synthetic method of Lesene and Henze, as the BCA available from commercial vendors of that time was too impure for their use. At least three successive recrystallizations of their synthesized BCA from 70˚C water was needed to sufficiently purify it for the assay. 
Reagent B 4% w/v CuSO4·5H2O Micro BCA assay (for dilute solutions) The BCA Micro BCA assay is a 3-component protocol which uses concentrated stocks of the Biuret reaction, BCA, and copper(II) reagents. It allows for an improved sensitivity of ~2 - 40 μg/mL vs 20 - 2000 μg/mL of the original BCA assay. However, it has a different, and generally speaking more sensitive, interference from non-protein components. Kits for the Micro BCA assay are available from at least two commercial vendors. Notably, the composition and use of a "Micro BCA Reagent and Protocol" was described in the original manuscript by Smith, and modern kits likely consist of an exact or highly similar formulation. The protocol consists of mixing Micro-Reagent B and the Copper Solution 25:1 to form Micro-Reagent C (MC), which is not shelf stable and should be freshly prepared, and then mixing MC 1:1 with Micro-Reagent A to produce the final (also unstable) assay working solution. Micro-Reagent A, Micro-Reagent B, and Copper Solution are stable indefinitely at room temperature. Micro-Reagent A (MA) 8% w/v Na2CO3·H2O (CAS: 5968-11-6) 1.6% w/v NaOH (CAS: 1310-73-2) 1.6% w/v Na2 tartrate (CAS: 868-18-8) (10x concentration as Reagent A in Original BCA Assay above) Sufficient NaHCO3 (CAS: 144-55-8) to adjust pH to 11.25 Micro-Reagent B (MB) 4% w/v BCA-Na2 (CAS: 979-88-4) (4x concentration as Reagent A in Original BCA Assay above) Copper Solution 4% w/v CuSO4·5H2O (CAS: 7758-99-8) (Same concentration as Reagent B in Original BCA Assay above) Reducing agent compatible (RAC) BSA assay This type of BCA assay includes a proprietary thiol covalent blocking "Compatibility Reagent" aka a Reducing Agent Compatibility Agent (RACA). Although this allows greater compatibility with reducing agents, the assay has a different interference profile from other non-protein components. Rapid Gold BCA This type of BCA assay seems to only be available from Thermo Fisher Scientific. Reportedly it uses "the same copper reduction method as the traditional BCA Protein Assay with a unique [proprietary] copper chelator.", that absorbs at 480 nm instead of 562 nm. This proprietor chelator and presumed optimized Biuret reaction formulation allows the assay to provide rapid (<5 min) results without the 37˚C+ incubation of the original BCA assay. However the assay has a different interference profile from other non-protein components. The Pierce Quantitative Colorimetric Peptide Assay (now owned by and available from Thermo Fisher Scientific) appears to use a similar or identical 480 nm absorbing proprietary copper chelator. See also Biuret test Bradford assay Colloidal gold protein assay References External links OpenWetWare BCA assay chemistry Biochemistry methods Chemical tests de:Bicinchoninsäure
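Quantification of an unknown from the measured 562 nm absorbances is normally done against a standard curve of known protein concentrations; a minimal sketch follows, in which the BSA standards, absorbance readings, and sample value are all made-up placeholders rather than measured data:

```python
import numpy as np

# Hypothetical BCA standard curve: absorbance at 562 nm for BSA standards.
# All numbers are illustrative placeholders, not measured values.
standards_ug_per_ml = np.array([0, 125, 250, 500, 1000, 2000], dtype=float)
absorbance_562 = np.array([0.05, 0.17, 0.29, 0.52, 0.98, 1.86])

# Simple linear fit (A = m*c + b); a quadratic fit is sometimes preferred
# near the top of the working range, where the response flattens.
m, b = np.polyfit(standards_ug_per_ml, absorbance_562, deg=1)

sample_absorbance = 0.64
sample_conc = (sample_absorbance - b) / m
print(f"estimated protein concentration: {sample_conc:.0f} µg/mL")
```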
Bicinchoninic acid assay
[ "Chemistry", "Biology" ]
1,611
[ "Biochemistry methods", "Biochemistry", "Chemical tests" ]
3,129,901
https://en.wikipedia.org/wiki/Stacking%20%28chemistry%29
In chemistry, stacking refers to superposition of molecules or atomic sheets owing to attractive interactions between these molecules or sheets. Metal dichalcogenide compounds Metal dichalcogenides have the formula ME2, where M = a transition metal and E = S, Se, Te. In terms of their electronic structures, these compounds are usually viewed as derivatives of M4+. They adopt stacked structures, which is relevant to their ability to undergo intercalation, e.g. by lithium, and their lubricating properties. The corresponding diselenides and even ditellurides are known, e.g., TiSe2, MoSe2, and WSe2. Charge transfer salts A combination of tetracyanoquinodimethane (TCNQ) and tetrathiafulvalene (TTF) forms a strong charge-transfer complex referred to as TTF-TCNQ. The solid shows almost metallic electrical conductance. In a TTF-TCNQ crystal, TTF and TCNQ molecules are arranged independently in separate parallel-aligned stacks, and an electron transfer occurs from donor (TTF) to acceptor (TCNQ) stacks. Graphite Graphite consists of stacked sheets of covalently bonded carbon. The individual layers are called graphene. In each layer, each carbon atom is bonded to three other atoms forming a continuous layer of sp2 bonded carbon hexagons, like a honeycomb lattice with a bond length of 0.142 nm, and the distance between planes is 0.335 nm. Bonding between layers is relatively weak van der Waals bonds, which allows the graphene-like layers to be easily separated and to glide past each other. Electrical conductivity perpendicular to the layers is consequently about 1000 times lower. Linear chain compounds Linear chain compounds are materials composed of stacked arrays of metal-metal bonded molecules or ions. Such materials exhibit anisotropic electrical conductivity. One example is (acac = acetylacetonate, which stack with distances of about 326 pm. Classic examples include Krogmann's salt and Magnus's green salt. Counterexample: benzene dimer and related species π–π stacking is a noncovalent interaction between the pi bonds of aromatic rings. Such "sandwich interactions" are however generally electrostatically repulsive. What is more commonly observed are either a staggered stacking (parallel displaced) or pi-teeing (perpendicular T-shaped) interaction both of which are electrostatic attractive. For example, the most commonly observed interactions between aromatic rings of amino acid residues in proteins is a staggered stacked followed by a perpendicular orientation. Sandwiched orientations are relatively rare. Pi stacking is repulsive as it places carbon atoms with partial negative charges from one ring on top of other partial negatively charged carbon atoms from the second ring and hydrogen atoms with partial positive charges on top of other hydrogen atoms that likewise carry partial positive charges. π–π interactions play a role in supramolecular chemistry, specifically the synthesis of catenane. The major challenge for the synthesis of catenane is to interlock molecules in a controlled fashion. Attractive π–π interactions exist between electron-rich benzene derivatives and electron-poor pyridinium rings. [2]Catanene was synthesized by treating bis(pyridinium) (A), bisparaphenylene-34-crown-10 (B), and 1, 4-bis(bromomethyl)benzene (C) (Fig. 2). The π–π interaction between A and B directed the formation of an interlocked template intermediate that was further cyclized by substitution reaction with compound C to generate the [2]catenane product. 
See also Noncovalent interaction Dispersion (chemistry) Cation–pi interaction Intercalation (biochemistry) Intercalation (chemistry) References External links Larry Wolf (2011): π-π (π-Stacking) interactions: origin and modulation Organic chemistry Chemical bonding Supramolecular chemistry
Stacking (chemistry)
[ "Physics", "Chemistry", "Materials_science" ]
837
[ "Condensed matter physics", "nan", "Nanotechnology", "Chemical bonding", "Supramolecular chemistry" ]
11,084,869
https://en.wikipedia.org/wiki/Gravitational-wave%20observatory
A gravitational-wave detector (used in a gravitational-wave observatory) is any device designed to measure tiny distortions of spacetime called gravitational waves. Since the 1960s, various kinds of gravitational-wave detectors have been built and constantly improved. The present-day generation of laser interferometers has reached the necessary sensitivity to detect gravitational waves from astronomical sources, thus forming the primary tool of gravitational-wave astronomy. The first direct observation of gravitational waves was made in September 2015 by the Advanced LIGO observatories, detecting gravitational waves with wavelengths of a few thousand kilometers from a merging binary of stellar black holes. In June 2023, four pulsar timing array collaborations presented the first strong evidence for a gravitational wave background of wavelengths spanning light years, most likely from many binaries of supermassive black holes. Challenge The direct detection of gravitational waves is complicated by the extraordinarily small effect the waves produce on a detector. The amplitude of a spherical wave falls off as the inverse of the distance from the source. Thus, even waves from extreme systems such as merging binary black holes die out to a very small amplitude by the time they reach the Earth. Astrophysicists predicted that some gravitational waves passing the Earth might produce differential motion on the order 10−18 m in a LIGO-size instrument. Resonant mass antennas A simple device to detect the expected wave motion is called a resonant mass antenna – a large, solid body of metal isolated from outside vibrations. This type of instrument was the first type of gravitational-wave detector. Strains in space due to an incident gravitational wave excite the body's resonant frequency and could thus be amplified to detectable levels. Conceivably, a nearby supernova might be strong enough to be seen without resonant amplification. However, up to 2018, no gravitational wave observation that would have been widely accepted by the research community has been made on any type of resonant mass antenna, despite certain claims of observation by researchers operating the antennas. There are three types of resonant mass antenna that have been built: room-temperature bar antennas, cryogenically cooled bar antennas and cryogenically cooled spherical antennas. The earliest type was the room-temperature bar-shaped antenna called a Weber bar; these were dominant in 1960s and 1970s and many were built around the world. It was claimed by Weber and some others in the late 1960s and early 1970s that these devices detected gravitational waves; however, other experimenters failed to detect gravitational waves using them, and a consensus developed that Weber bars would not be a practical means to detect gravitational waves. The second generation of resonant mass antennas, developed in the 1980s and 1990s, were the cryogenic bar antennas which are also sometimes called Weber bars. In the 1990s there were five major cryogenic bar antennas: AURIGA (Padua, Italy), NAUTILUS (Rome, Italy), EXPLORER (CERN, Switzerland), ALLEGRO (Louisiana, US), and NIOBE (Perth, Australia). In 1997, these five antennas run by four research groups formed the International Gravitational Event Collaboration (IGEC) for collaboration. While there were several cases of unexplained deviations from the background signal, there were no confirmed instances of the observation of gravitational waves with these detectors. 
In the 1980s, there was also a cryogenic bar antenna called ALTAIR, which, along with a room-temperature bar antenna called GEOGRAV, was built in Italy as a prototype for later bar antennas. Operators of the GEOGRAV-detector claimed to have observed gravitational waves coming from the supernova SN1987A (along with another room-temperature bar antenna), but these claims were not adopted by the wider community. These modern cryogenic forms of the Weber bar operated with superconducting quantum interference devices to detect vibration (ALLEGRO, for example). Some of them continued in operation after the interferometric antennas started to reach astrophysical sensitivity, such as AURIGA, an ultracryogenic resonant cylindrical bar gravitational wave detector based at INFN in Italy. The AURIGA and LIGO teams collaborated in joint observations. In the 2000s, the third generation of resonant mass antennas, the spherical cryogenic antennas, emerged. Four spherical antennas were proposed around year 2000 and two of them were built as downsized versions, the others were cancelled. The proposed antennas were GRAIL (Netherlands, downsized to MiniGRAIL), TIGA (US, small prototypes made), SFERA (Italy), and Graviton (Brasil, downsized to Mario Schenberg). The two downsized antennas, MiniGRAIL and the Mario Schenberg, are similar in design and are operated as a collaborative effort. MiniGRAIL is based at Leiden University, and consists of an exactingly machined sphere cryogenically cooled to . The spherical configuration allows for equal sensitivity in all directions, and is somewhat experimentally simpler than larger linear devices requiring high vacuum. Events are detected by measuring deformation of the detector sphere. MiniGRAIL is highly sensitive in the 2–4 kHz range, suitable for detecting gravitational waves from rotating neutron star instabilities or small black hole mergers. It is the current consensus that current cryogenic resonant mass detectors are not sensitive enough to detect anything but extremely powerful (and thus very rare) gravitational waves. As of 2020, no detection of gravitational waves by cryogenic resonant antennas has occurred. Laser interferometers A more sensitive detector uses laser interferometry to measure gravitational-wave induced motion between separated 'free' masses. This allows the masses to be separated by large distances (increasing the signal size); a further advantage is that it is sensitive to a wide range of frequencies (not just those near a resonance as is the case for Weber bars). Ground-based interferometers are now operational. Currently, the most sensitive ground-based laser interferometer is LIGO – the Laser Interferometer Gravitational Wave Observatory. LIGO is famous as the site of the first confirmed detections of gravitational waves in 2015. LIGO has two detectors: one in Livingston, Louisiana; the other at the Hanford site in Richland, Washington. Each consists of two light storage arms which are 4 km in length. These are at 90 degree angles to each other, with the light passing through diameter vacuum tubes running the entire . A passing gravitational wave will slightly stretch one arm as it shortens the other. This is precisely the motion to which a Michelson interferometer is most sensitive. Even with such long arms, the strongest gravitational waves will only change the distance between the ends of the arms by at most roughly 10−18 meters. LIGO should be able to detect gravitational waves as small as . 
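The size of the effect quoted above can be checked with a one-line estimate; the strain used here is an assumed round number of the order LIGO is designed to reach, not a quoted specification:

```python
# Back-of-the-envelope arm-length change for a laser interferometer.
# A gravitational wave of strain h changes an arm of length L by roughly dL = h * L / 2.
# The strain below is an assumed illustrative value of the order targeted by LIGO.
h = 1e-21          # dimensionless strain (assumed)
L = 4.0e3          # LIGO arm length in metres

dL = h * L / 2
print(f"arm-length change: {dL:.1e} m")   # about 2e-18 m, far smaller than a proton
```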
Upgrades to LIGO and other detectors such as Virgo, GEO600, and TAMA 300 should increase the sensitivity further, and the next generation of instruments (Advanced LIGO Plus and Advanced Virgo Plus) will be more sensitive still. Another highly sensitive interferometer (KAGRA) began operations in 2020. A key point is that a ten-times increase in sensitivity (radius of "reach") increases the volume of space accessible to the instrument by one thousand. This increases the rate at which detectable signals should be seen from one per tens of years of observation, to tens per year. Interferometric detectors are limited at high frequencies by shot noise, which occurs because the lasers produce photons randomly. One analogy is to rainfall: the rate of rainfall, like the laser intensity, is measurable, but the raindrops, like photons, fall at random times, causing fluctuations around the average value. This leads to noise at the output of the detector, much like radio static. In addition, for sufficiently high laser power, the random momentum transferred to the test masses by the laser photons shakes the mirrors, masking signals at low frequencies. Thermal noise (e.g., Brownian motion) is another limit to sensitivity. In addition to these "stationary" (constant) noise sources, all ground-based detectors are also limited at low frequencies by seismic noise and other forms of environmental vibration, and other "non-stationary" noise sources; creaks in mechanical structures, lightning or other large electrical disturbances, etc. may also create noise masking an event or may even imitate an event. All these must be taken into account and excluded by analysis before a detection may be considered a true gravitational-wave event. Space-based interferometers, such as LISA and DECIGO, are also being developed. LISA's design calls for three test masses forming an equilateral triangle, with lasers from each spacecraft to each other spacecraft forming two independent interferometers. LISA is planned to occupy a solar orbit trailing the Earth, with each arm of the triangle being five million kilometers. This puts the detector in an excellent vacuum far from Earth-based sources of noise, though it will still be susceptible to shot noise, as well as artifacts caused by cosmic rays and solar wind. Einstein@Home In some sense, the easiest signals to detect should be constant sources. Supernovae and neutron star or black hole mergers should have larger amplitudes and be more interesting, but the waves generated will be more complicated. The waves given off by a spinning, bumpy neutron star would be "monochromatic" – like a pure tone in acoustics. It would not change very much in amplitude or frequency. The Einstein@Home project is a distributed computing project similar to SETI@home intended to detect this type of simple gravitational wave. By taking data from LIGO and GEO, and sending it out in little pieces to thousands of volunteers for parallel analysis on their home computers, Einstein@Home can sift through the data far more quickly than would be possible otherwise. Pulsar timing arrays A different approach to detecting gravitational waves is used by pulsar timing arrays, such as the European Pulsar Timing Array, the North American Nanohertz Observatory for Gravitational Waves, and the Parkes Pulsar Timing Array. These projects propose to detect gravitational waves by looking at the effect these waves have on the incoming signals from an array of 20–50 well-known millisecond pulsars. 
As a gravitational wave passing through the Earth contracts space in one direction and expands space in another, the times of arrival of pulsar signals from those directions are shifted correspondingly. By studying a fixed set of pulsars across the sky, these arrays should be able to detect gravitational waves in the nanohertz range. Such signals are expected to be emitted by pairs of merging supermassive black holes. In June 2023, four pulsar timing array collaborations, the three mentioned above and the Chinese Pulsar Timing Array, presented independent but similar evidence for a stochastic background of nanohertz gravitational waves. The source of this background could not yet be identified. Cosmic microwave background The cosmic microwave background, radiation left over from when the Universe cooled sufficiently for the first atoms to form, can contain the imprint of gravitational waves from the very early Universe. The microwave radiation is polarized. The pattern of polarization can be split into two classes called E-modes and B-modes. This is in analogy to electrostatics where the electric field (E-field) has a vanishing curl and the magnetic field (B-field) has a vanishing divergence. The E-modes can be created by a variety of processes, but the B-modes can only be produced by gravitational lensing, gravitational waves, or scattering from dust. On 17 March 2014, astronomers at the Harvard-Smithsonian Center for Astrophysics announced the apparent detection of the imprint gravitational waves in the cosmic microwave background, which, if confirmed, would provide strong evidence for inflation and the Big Bang. However, on 19 June 2014, lowered confidence in confirming the findings was reported; and on 19 September 2014, even more lowered confidence. Finally, on 30 January 2015, the European Space Agency announced that the signal can be entirely attributed to dust in the Milky Way. Novel detector designs There are currently two detectors focusing on detections at the higher end of the gravitational-wave spectrum (10−7 to 105 Hz): one at University of Birmingham, England, and the other at INFN Genoa, Italy. A third is under development at Chongqing University, China. The Birmingham detector measures changes in the polarization state of a microwave beam circulating in a closed loop about one meter across. Two have been fabricated and they are currently expected to be sensitive to periodic spacetime strains of , given as an amplitude spectral density. The INFN Genoa detector is a resonant antenna consisting of two coupled spherical superconducting harmonic oscillators a few centimeters in diameter. The oscillators are designed to have (when uncoupled) almost equal resonant frequencies. The system is currently expected to have a sensitivity to periodic spacetime strains of , with an expectation to reach a sensitivity of . The Chongqing University detector is planned to detect relic high-frequency gravitational waves with the predicted typical parameters ~ 1010 Hz (10 GHz) and h ~ 10−30 to 10−31. Levitated Sensor Detector is a proposed detector for gravitational waves with a frequency between 10 kHz and 300 kHz, potentially coming from primordial black holes. It will use optically-levitated dielectric particles in an optical cavity. A torsion-bar antenna (TOBA) is a proposed design composed of two, long, thin bars, suspended as torsion pendula in a cross-like fashion, in which the differential angle is sensitive to tidal gravitational wave forces. 
Detectors based on matter waves (atom interferometers) have also been proposed and are being developed. There have been proposals since the beginning of the 2000s. Atom interferometry is proposed to extend the detection bandwidth in the infrasound band (10 mHz – 10 Hz), where current ground based detectors are limited by low frequency gravity noise. A demonstrator project called Matter wave laser based Interferometer Gravitation Antenna (MIGA) started construction in 2018 in the underground environment of LSBB (Rustrel, France). List of gravitational wave detectors Resonant mass detectors First generation Weber bar (1960s–80s) Second generation EXPLORER (CERN, 1985–) GEOGRAV (Rome, 1980s–) ALTAIR (Frascati, 1990–) ALLEGRO (Baton Rouge, 1991–2008) NIOBE (Perth, 1993–) NAUTILUS (Rome, 1995–) AURIGA (Padova, 1997–) Third generation Mario Schenberg (São Paulo, 2003–) MiniGrail (Leiden, 2003–) Interferometers Interferometric gravitational-wave detectors are often grouped into generations based on the technology used. The interferometric detectors deployed in the 1990s and 2000s were proving grounds for many of the foundational technologies necessary for initial detection and are commonly referred to as the first generation. The second generation of detectors operating in the 2010s, mostly at the same facilities like LIGO and Virgo, improved on these designs with sophisticated techniques such as cryogenic mirrors and the injection of squeezed vacuum. This led to the first unambiguous detection of a gravitational wave by Advanced LIGO in 2015. The third generation of detectors are currently in the planning phase, and seek to improve over the second generation by achieving greater detection sensitivity and a larger range of accessible frequencies. All these experiments involve many technologies under continuous development over multiple decades, so the categorization by generation is necessarily only rough. First generation (1995) TAMA 300 (1995) GEO600 (2002) LIGO (2006) CLIO (2007) Virgo interferometer Second generation (2010) GEO High Frequency (2015) Advanced LIGO (2016) Advanced Virgo (2019) KAGRA (LCGT) (2023) IndIGO (LIGO-India) Third generation (2030s) Einstein Telescope (2030s) Cosmic Explorer Space based (2034) Laser Interferometer Space Antenna (LISA, its technology demonstrator LISA Pathfinder was launched December 2015) (2030s?) Taiji (gravitational wave observatory) (2035) TianQin (2027) Deci-hertz Interferometer Gravitational wave Observatory (DECIGO) Pulsar timing (2005) Parkes Pulsar Timing Array (2009) European Pulsar Timing Array (2010) North American Nanohertz Observatory for Gravitational Waves (NANOGrav) (2016) International Pulsar Timing Array, a joint project combining the Parkes, European and NANOGrav arrays above (2016) Indian Pulsar Timing Array Experiment (InPTA) (?) Chinese Pulsar Timing Array (CPTA) (?) MeerKAT Pulsar Timing Array (MeerTime) See also Detection theory Gravitational-wave astronomy Matched filter References External links Video (04:36) – Detecting a gravitational wave, Dennis Overbye, NYT (11 February 2016). Video (71:29) – Press Conference announcing discovery: "LIGO detects gravitational waves", National Science Foundation (11 February 2016). Astronomical observatories Gravitational instruments observatory Articles containing video clips
Gravitational-wave observatory
[ "Physics", "Astronomy", "Technology", "Engineering" ]
3,539
[ "Astronomical observatories", "Astrophysics", "Astronomy organizations", "Measuring instruments", "Gravitational instruments", "Gravitational-wave astronomy", "Astronomical sub-disciplines" ]
11,085,324
https://en.wikipedia.org/wiki/Free-energy%20perturbation
Free-energy perturbation (FEP) is a method based on statistical mechanics that is used in computational chemistry for computing free-energy differences from molecular dynamics or Metropolis Monte Carlo simulations. The FEP method was introduced by Robert W. Zwanzig in 1954. According to the free-energy perturbation method, the free-energy difference for going from state A to state B is obtained from the following equation, known as the Zwanzig equation: where T is the temperature, kB is the Boltzmann constant, and the angular brackets denote an average over a simulation run for state A. In practice, one runs a normal simulation for state A, but each time a new configuration is accepted, the energy for state B is also computed. The difference between states A and B may be in the atom types involved, in which case the ΔF obtained is for "mutating" one molecule onto another, or it may be a difference of geometry, in which case one obtains a free-energy map along one or more reaction coordinates. This free-energy map is also known as a potential of mean force (PMF). Free-energy perturbation calculations only converge properly when the difference between the two states is small enough; therefore it is usually necessary to divide a perturbation into a series of smaller "windows", which are computed independently. Since there is no need for constant communication between the simulation for one window and the next, the process can be trivially parallelized by running each window on a different CPU, in what is known as an "embarrassingly parallel" setup. Application FEP calculations have been used for studying host–guest binding energetics, pKa predictions, solvent effects on reactions, and enzymatic reactions. Other applications are the virtual screening of ligands in drug discovery, in silico mutagenesis studies and antibody affinity maturation. For the study of reactions it is often necessary to involve a quantum-mechanical (QM) representation of the reaction center because the molecular mechanics (MM) force fields used for FEP simulations cannot handle breaking bonds. A hybrid method that has the advantages of both QM and MM calculations is called QM/MM. Umbrella sampling is another free-energy calculation technique that is typically used for calculating the free-energy change associated with a change in "position" coordinates as opposed to "chemical" coordinates, although umbrella sampling can also be used for a chemical transformation when the "chemical" coordinate is treated as a dynamic variable (as in the case of the Lambda dynamics approach of Kong and Brooks). An alternative to free-energy perturbation for computing potentials of mean force in chemical space is thermodynamic integration. Another alternative, which is probably more efficient, is the Bennett acceptance ratio method. Adaptations to FEP exist which attempt to apportion free-energy changes to subsections of the chemical structure. Software Several software packages have been developed to help perform FEP calculations. Below is a short list of some of the most common programs: Flare FEP FEP+ AMBER BOSS CHARMM Desmond GROMACS MacroModel MOLARIS NAMD Tinker Q QUELO See also Thermodynamic integration Umbrella sampling References Computational chemistry Statistical mechanics
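The Zwanzig relation can be turned into a simple estimator once the energy difference between states B and A has been recorded for each accepted configuration of the state-A simulation. The sketch below uses synthetic energy gaps as placeholders for values that would normally come from a molecular dynamics or Monte Carlo run:

```python
import numpy as np

# Free-energy perturbation (Zwanzig) estimator:
#   dF = -kT * ln < exp(-(U_B - U_A) / kT) >_A
# The "energy gaps" below are synthetic placeholders standing in for values
# collected along a simulation of state A.
kT = 0.593  # kcal/mol at roughly 298 K

rng = np.random.default_rng(1)
delta_u = rng.normal(loc=1.0, scale=0.5, size=5_000)   # U_B - U_A for each sampled frame

delta_f = -kT * np.log(np.mean(np.exp(-delta_u / kT)))
print(f"estimated dF(A->B) = {delta_f:.3f} kcal/mol")
```

In practice the estimate only converges when the distribution of energy gaps is narrow, which is why the perturbation is split into many small windows as described above.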
Free-energy perturbation
[ "Physics", "Chemistry" ]
665
[ "Theoretical chemistry", "Statistical mechanics", "Computational chemistry" ]
11,087,760
https://en.wikipedia.org/wiki/Biomimetic%20material
Biomimetic materials are materials developed using inspiration from nature. This may be useful in the design of composite materials. Natural structures have inspired and innovated human creations. Notable examples of these natural structures include: honeycomb structure of the beehive, strength of spider silks, bird flight mechanics, and shark skin water repellency. The etymological roots of the neologism "biomimetic" derive from Greek, since means "life" and means "imitative". Tissue engineering Biomimetic materials in tissue engineering are materials that have been designed such that they elicit specified cellular responses mediated by interactions with scaffold-tethered peptides from extracellular matrix (ECM) proteins; essentially, the incorporation of cell-binding peptides into biomaterials via chemical or physical modification. Amino acids located within the peptides are used as building blocks by other biological structures. These peptides are often referred to as "self-assembling peptides", since they can be modified to contain biologically active motifs. This allows them to replicate information derived from tissue and to reproduce the same information independently. Thus, these peptides act as building blocks capable of conducting multiple biochemical activities, including tissue engineering. Tissue engineering research currently being performed on both short chain and long chain peptides is still in early stages. Such peptides include both native long chains of ECM proteins as well as short peptide sequences derived from intact ECM proteins. The idea is that the biomimetic material will mimic some of the roles that an ECM plays in neural tissue. In addition to promoting cellular growth and mobilization, the incorporated peptides could also mediate by specific protease enzymes or initiate cellular responses not present in a local native tissue. In the beginning, long chains of ECM proteins including fibronectin (FN), vitronectin (VN), and laminin (LN) were used, but more recently the advantages of using short peptides have been discovered. Short peptides are more advantageous because, unlike the long chains that fold randomly upon adsorption causing the active protein domains to be sterically unavailable, short peptides remain stable and do not hide the receptor binding domains when adsorbed. Another advantage to short peptides is that they can be replicated more economically due to the smaller size. A bi-functional cross-linker with a long spacer arm is used to tether peptides to the substrate surface. If a functional group is not available for attaching the cross-linker, photochemical immobilization may be used. In addition to modifying the surface, biomaterials can be modified in bulk, meaning that the cell signaling peptides and recognition sites are present not just on the surface but also throughout the bulk of the material. The strength of cell attachment, cell migration rate, and extent of cytoskeletal organization formation is determined by the receptor binding to the ligand bound to the material; thus, receptor-ligand affinity, the density of the ligand, and the spatial distribution of the ligand must be carefully considered when designing a biomimetic material. Biomimetic mineralization Proteins of the developing enamel extracellular matrix (such as amelogenin) control initial mineral deposition (nucleation) and subsequent crystal growth, ultimately determining the physico-mechanical properties of the mature mineralized tissue. 
Nucleators bring together mineral ions from the surrounding fluids (such as saliva) into the form of a crystal lattice structure, by stabilizing small nuclei to permit crystal growth, forming mineral tissue. Mutations in enamel ECM proteins result in enamel defects such as amelogenesis imperfecta. Type-I collagen is thought to have a similar role for the formation of dentin and bone. Dental enamel mineral (as well as dentin and bone) is made of hydroxylapatite with foreign ions incorporated in the structure. Carbonate, fluoride, and magnesium are the most common heteroionic substituents. In a biomimetic mineralization strategy based on normal enamel histogenesis, a three-dimensional scaffold is formed to attract and arrange calcium and/or phosphate ions to induce de novo precipitation of hydroxylapatite. Two general strategies have been applied. One is using fragments known to support natural mineralization proteins, such as Amelogenin, Collagen, or Dentin Phosphophoryn as the basis. Alternatively, de novo macromolecular structures have been designed to support mineralization, not based on natural molecules, but on rational design. One example is oligopeptide P11-4. In dental orthopedics and implants, a more traditional strategy to improve the density of the underlying jaw bone is via the in situ application of calcium phosphate materials. Commonly used materials include hydroxylapatite, tricalcium phosphate, and calcium phosphate cement. Newer bioactive glasses follow this line of strategy, where the added silicone provides an important bonus to the local absorption of calcium. Extracellular matrix proteins Many studies utilize laminin-1 when designing a biomimetic material. Laminin is a component of the extracellular matrix that is able to promote neuron attachment and differentiation, in addition to axon growth guidance. Its primary functional site for bioactivity is its core protein domain isoleucine-lysine-valine-alanine-valine (IKVAV), which is located in the α-1 chain of laminin. A recent study by Wu, Zheng et al., synthesized a self-assembled IKVAV peptide nanofiber and tested its effect on the adhesion of neuron-like pc12 cells. Early cell adhesion is very important for preventing cell degeneration; the longer cells are suspended in culture, the more likely they are to degenerate. The purpose was to develop a biomaterial with good cell adherence and bioactivity with IKVAV, which is able to inhibit differentiation and adhesion of glial cells in addition to promoting neuronal cell adhesion and differentiation. The IKVAV peptide domain is on the surface of the nanofibers so that it is exposed and accessible for promoting cell contact interactions. The IKVAV nanofibers promoted stronger cell adherence than the electrostatic attraction induced by poly-L-lysine, and cell adherence increased with increasing density of IKVAV until the saturation point was reached. IKVAV does not exhibit time dependent effects because the adherence was shown to be the same at 1 hour and at 3 hours. Laminin is known to stimulate neurite outgrowth and it plays a role in the developing nervous system. It is known that gradients are critical for the guidance of growth cones to their target tissues in the developing nervous system. There has been much research done on soluble gradients; however, little emphasis has been placed on gradients of substratum bound substances of the extracellular matrix such as laminin. 
Dodla and Bellamkonda fabricated an anisotropic 3D agarose gel with gradients of coupled laminin-1 (LN-1). Concentration gradients of LN-1 were shown to promote faster neurite extension than the highest neurite growth rate observed with isotropic LN-1 concentrations. Neurites grew both up and down the gradients, but growth was faster at less steep gradients and was faster up the gradients than down the gradients. Biomimetic artificial muscles Electroactive polymers (EAPs) are also known as artificial muscles. EAPs are polymeric materials that can produce large deformations when an electric field is applied. This gives them great potential for applications in biotechnology, robotics, sensors, and actuators. Biomimetic photonic structures The production of structural colours concerns a large array of organisms. From bacteria (Flavobacterium strain IR1) to multicellular organisms (Hibiscus trionum, Doryteuthis pealeii (squid), or Chrysochroa fulgidissima (beetle)), manipulation of light is not limited to rare and exotic life forms. Different organisms evolved different mechanisms to produce structural colours: multilayered cuticles in some insects and plants, grating-like surfaces in plants, geometrically organised cells in bacteria... all of them serve as sources of inspiration for the development of structurally coloured materials. Study of the firefly abdomen revealed the presence of a three-layer system comprising the cuticle, the photogenic layer, and a reflector layer. Microscopy of the reflector layer revealed a granulate structure. Directly inspired by the firefly reflector layer, an artificial granulate film composed of hollow silica beads of about 1.05 μm exhibited a high reflection index and could be used to improve light emission in chemiluminescent systems. Artificial enzyme Artificial enzymes are synthetic materials that can mimic the (partial) function of a natural enzyme without necessarily being a protein. Among them, some nanomaterials have been used to mimic natural enzymes. These nanomaterials are termed nanozymes. Nanozymes, as well as other artificial enzymes, have found wide applications, from biosensing and immunoassays to stem cell growth and pollutant removal. Biomimetic composite Biomimetic composites are being made by mimicking natural design strategies. The designs and structures found in animals and plants have been studied, and these biological structures are applied to manufacture composite structures. Advanced manufacturing techniques like 3D printing are being used by researchers to fabricate them. References material degradation Neuroscience Tissues (biology) Biomedical engineering
Biomimetic material
[ "Physics", "Engineering", "Biology" ]
2,006
[ "Biomaterials", "Biological engineering", "Neuroscience", "Biomedical engineering", "Materials", "Matter", "Medical technology" ]
11,091,779
https://en.wikipedia.org/wiki/Selenium-79
Selenium-79 is a radioisotope of selenium present in spent nuclear fuel and the wastes resulting from reprocessing this fuel. It is one of only seven long-lived fission products. Its fission yield is low (about 0.04%), as it is near the lower end of the mass range for fission products. Its half-life has been variously reported as 650,000 years, 65,000 years, 1.13 million years, 480,000 years, 295,000 years, 377,000 years and most recently with best current precision, 327,000 years. 79Se decays to 79Br by emitting a beta particle with no attendant gamma radiation (i.e., 100% β decay). This complicates its detection and liquid scintillation counting (LSC) is required for measuring it in environmental samples. The low specific activity () and relatively low energy (151 keV) of its beta particles have been said to limit the radioactive hazards of this isotope. Performance assessment calculations for the Belgian deep geological repository estimated 79Se may be the major contributor to activity release in terms of becquerels (decays per second), "attributable partly to the uncertainties about its migration behaviour in the Boom Clay and partly to its conversion factor in the biosphere." (p. 169). However, "calculations for the Belgian safety assessments use a half-life of 65 000 years" (p. 177), much less than the currently estimated half-life, and "the migration parameters ... have been estimated very cautiously for 79Se." (p. 179) Neutron absorption cross sections for 79Se have been estimated at 50 barns for thermal neutrons and 60.9 barns for resonance integral. Selenium-80 and selenium-82 have higher fission yields, about 20 times the yield of 79Se in the case of uranium-235, 6 times in the case of plutonium-239 or uranium-233, and 14 times in the case of plutonium-241. Mobility of selenium in the environment Due to redox-disequilibrium, selenium could be very reluctant to abiotic chemical reduction and would be released from the waste (spent fuel or vitrified waste) as selenate (), a soluble Se(VI) species, not sorbed onto clay minerals. Without solubility limit and retardation for aqueous selenium, the dose of 79Se is comparable to that of 129I. Moreover, selenium is an essential micronutrient as it is present in the catalytic centers in the glutathione peroxidase, an enzyme needed by many organisms for the protection of their cell membrane against oxidative stress damages; therefore, radioactive 79Se can be easily bioconcentrated in the food web. In the presence of nitrate () released in deep geological clay formations by bituminized waste issued from the spent fuel dissolution step during their reprocessing, even reduced forms of selenium could be easily oxidised and mobilised. References See also Isotopes of selenium ANL factsheet Journal of Analytical Atomic Spectrometry Fission products Isotopes of selenium Radioactive waste
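The low specific activity quoted above follows directly from the long half-life via A = (ln 2 / T½)·(N_A / M). A small illustrative calculation, assuming the most recent 327,000-year half-life and an approximate molar mass of 79 g/mol:

```python
import math

AVOGADRO = 6.02214076e23        # atoms per mole
SECONDS_PER_YEAR = 3.1557e7     # approximate

def specific_activity_bq_per_g(half_life_years, molar_mass_g_per_mol):
    """Specific activity A = lambda * N for one gram of a pure radionuclide."""
    decay_constant = math.log(2) / (half_life_years * SECONDS_PER_YEAR)
    atoms_per_gram = AVOGADRO / molar_mass_g_per_mol
    return decay_constant * atoms_per_gram

# Se-79 with the 327,000-year half-life: roughly 5e8 Bq/g (about 14 mCi/g)
print(specific_activity_bq_per_g(327_000, 79.0))
```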
Selenium-79
[ "Chemistry", "Technology" ]
680
[ "Nuclear fission", "Isotopes of selenium", "Radioactive waste", "Isotopes", "Fission products", "Nuclear fallout", "Hazardous waste", "Environmental impact of nuclear power", "Radioactivity" ]
11,092,014
https://en.wikipedia.org/wiki/Named%20data%20networking
Named Data Networking (NDN) (related to content-centric networking (CCN), content-based networking, data-oriented networking or information-centric networking (ICN)) is a proposed Future Internet architecture that seeks to address problems in contemporary internet architectures like IP. NDN has its roots in an earlier project, Content-Centric Networking (CCN), which Van Jacobson first publicly presented in 2006. The NDN project is investigating Jacobson's proposed evolution from today's host-centric network architecture IP to a data-centric network architecture (NDN). The stated goal of this project is that with a conceptually simple shift, far-reaching implications for how people design, develop, deploy, and use networks and applications could be realized. NDN has three core concepts that distinguish NDN from other network architectures. First, applications name data and data names will directly be used in network packet forwarding; consumer applications would request desired data by its name, so communications in NDN are consumer-driven. Second, NDN communications are secured in a data-centric manner wherein each piece of data (called a Data packet) will be cryptographically signed by its producer and sensitive payload or name components can also be encrypted for the purpose of privacy. In this way, consumers can verify the packet regardless of how the packet is fetched. Third, NDN adopts a stateful forwarding plane where forwarders will keep a state for each data request (called an Interest packet), and erase the state when a corresponding data packet comes back. NDN's stateful forwarding allows intelligent forwarding strategies, and eliminates loops. Its premise is that the Internet is primarily used as an information distribution network, which is not a good match for IP, and that the future Internet's "thin waist" should be based on named data rather than numerically addressed hosts. The underlying principle is that a communication network should allow a user to focus on the data they need, named content, rather than having to reference a specific, physical location where that data is to be retrieved from, named hosts. The motivation for this is derived from the fact that the vast majority of current Internet usage (a "high 90% level of traffic") consists of data being disseminated from a source to a number of users. Named-data networking comes with potential for a wide range of benefits such as content caching to reduce congestion and improve delivery speed, simpler configuration of network devices, and building security into the network at the data level. Overview Today's Internet's hourglass architecture centers on a universal network layer, IP, which implements the minimal functionality necessary for global inter-connectivity. The contemporary Internet architecture revolves around a host-based conversation model, which was created in the 1970s to allow geographically distributed users to use a few big, immobile computers. This thin waist enabled the Internet's explosive growth by allowing both lower and upper layer technologies to innovate independently. However, IP was designed to create a communication network, where packets named only communication endpoints. Sustained growth in e-commerce, digital media, social networking, and smartphone applications has led to dominant use of the Internet as a distribution network. 
Distribution networks are more general than communication networks, and solving distribution problems via a point-to-point communication protocol is complex and error-prone. The Named Data Networking (NDN) project proposed an evolution of the IP architecture that generalizes the role of this thin waist, such that packets can name objects other than communication endpoints. More specifically, NDN changes the semantics of network service from delivering the packet to a given destination address to fetching data identified by a given name. The name in an NDN packet can name anything – an endpoint, a data chunk in a movie or a book, a command to turn on some lights, etc. The hope is that this conceptually simple change allows NDN networks to apply almost all of the Internet's well-tested engineering properties to a broader range of problems beyond end-to-end communications. Examples of NDN applying lessons learned from 30 years of networking engineering are that self-regulation of network traffic (via flow balance between Interest (data request) and data packets), and security primitives (via signatures on all named data) are integrated into the protocol from the start. History Early research The philosophy behind NDN was pioneered by Ted Nelson in 1979, and later by Brent Baccala in 2002. In 1999, the TRIAD project at Stanford proposed avoiding DNS lookups by using the name of an object to route towards a close replica of it. In 2006, the Data-Oriented Network Architecture (DONA) project at UC Berkeley and ICSI proposed a content-centric network architecture, which improved TRIAD by incorporating security (authenticity) and persistence as first-class primitives in the architecture. Van Jacobson gave a Google Talk, A New Way to Look at Networking, in 2006 on the evolution of the network, and argued that NDN was the next step. In 2009, PARC announced their content-centric architecture within the CCNx project, which was led by Jacobson who was a research fellow at PARC at the time. On 21 September 2009, PARC published the specifications for interoperability and released an initial open source implementation (under GPL) of the Content-Centric Networking research project on the Project CCNx site. NDN is one instance of a more general network research direction called information-centric networking (ICN), under which different architecture designs have emerged. The Internet Research Task Force (IRTF) established an ICN research working group in 2012. Current state NDN includes sixteen NSF-funded principal investigators at twelve campuses, and growing interest from the academic and industrial research communities. More than 30 institutions form a global testbed. There exists a large body of research and an actively growing code base. contributed to NDN. The NDN forwarder is currently supported on Ubuntu 18.04 and 20.04, Fedora 20+, CentOS 6+, Gentoo Linux, Raspberry Pi, OpenWRT, FreeBSD 10+, and several other platforms. Common client libraries are actively supported for C++, Java, Javascript, Python, .NET Framework (C#), and Squirrel programming languages. The NDN-LITE is a lightweight NDN library designed for IoT networks and constrained devices. NDN-LITE is being actively developed and so far, NDN-LITE has been adapted to POSIX, RIOT OS, NRF boards. An NDN simulator and emulator are also available and actively developed. Several client applications are being developed in the areas of real-time conferencing, NDN friendly file systems, chat, file sharing, and IoT. 
Key architectural principles End-to-end principle: Enables the development of robust applications in the face of network failures. NDN retains and expands this design principle. Routing and forwarding plane separation: This has proven necessary for Internet development. It allows the forwarding plane to function while the routing system continues to evolve over time. NDN uses the same principle to allow the deployment of NDN with the best available forwarding technology while new routing system research is ongoing. Stateful forwarding: NDN routers keep the state of recently forwarded packets, which allows smart forwarding, loop detection, flow balance, ubiquitous caching, etc. Built-in security: In NDN, data transfer is secured at the network layer by signing and verification of any named data. Enable user choice and competition: The architecture should facilitate user choice and competition where possible. Although not a relevant factor in the original Internet design, global deployment has demonstrated that “architecture is not neutral". NDN makes a conscious effort to empower end users and enable competition. Architecture overview Types of packets Communication in NDN is driven by receivers i.e., data consumers, through the exchange of two types of packets: Interest and Data. Both types of packets carry a name that identifies a piece of data that can be transmitted in one Data packet. Packet types Interest: A consumer puts the name of a desired piece of data into an Interest packet and sends it to the network. Routers use this name to forward the Interest toward the data producer(s). Data: Once the Interest reaches a node that has the requested data, the node will return a Data packet that contains both the name and the content, together with a signature by the producer's key which binds the two. This Data packet follows in reverse the path taken by the Interest to get back to the requesting consumer. For the complete specification see NDN Packet Format Specification. Router architecture To carry out the Interest and Data packet forwarding functions, each NDN router maintains three data structures, and a forwarding policy: Pending Interest Table (PIT): stores all the Interests that a router has forwarded but not satisfied yet. Each PIT entry records the data name carried in the Interest, together with its incoming and outgoing interface(s). Forwarding Information Base (FIB): a routing table which maps name components to interfaces. The FIB itself is populated by a name-prefix based routing protocol, and can have multiple output interfaces for each prefix. Content Store (CS): a temporary cache of Data packets the router has received. Because an NDN Data packet is meaningful independent of where it comes from or where it is forwarded, it can be cached to satisfy future Interests. Replacement strategy is traditionally least recently used, but the replacement strategy is determined by the router and may differ. Forwarding Strategies: a series of policies and rules about forwarding interest and data packets. Note that the Forwarding Strategy may decide to drop an Interest in certain situations, e.g., if all upstream links are congested or the Interest is suspected to be part of a DoS attack. These strategies use a series of triggers in the forwarding pipeline and are assigned to name prefixes. For instance, by default /localhost uses the Multicast forwarding strategy to forward interests and data to any local application running on a client NFD. The default forwarding strategy (i.e. 
"/") is the Best Route forwarding strategy. When an Interest packet arrives, an NDN router first checks the Content Store for matching data; if it exists in the router returns the Data packet on the interface from which the Interest came. Otherwise the router looks up the name in its PIT, and if a matching entry exists, it simply records the incoming interface of this Interest in the PIT entry. In the absence of a matching PIT entry, the router will forward the Interest toward the data producer(s) based on information in the FIB as well as the router's adaptive Forwarding Strategy. When a router receives Interests for the same name from multiple downstream nodes, it forwards only the first one upstream toward the data producer(s). When a Data packet arrives, an NDN router finds the matching PIT entry and forwards the data to all down-stream interfaces listed in that PIT entry. It then removes that PIT entry, and caches the Data in the Content Store. Data packets always take the reverse path of Interests, and, in the absence of packet losses, one Interest packet results in one Data packet on each link, providing flow balance. To fetch large content objects that comprise multiple packets, Interests provide a similar role in controlling traffic flow as TCP ACKs in today's Internet: a fine-grained feedback loop controlled by the consumer of the data. Neither Interest nor Data packets carry any host or interface addresses; routers forward Interest packets toward data producers based on the names carried in the packets, and forward Data packets to consumers based on the PIT state information set up by the Interests at each hop. This Interest/Data packet exchange symmetry induces a hop-by-hop control loop (not to be confused with symmetric routing, or with routing at all!), and eliminates the need for any notion of source or destination nodes in data delivery, unlike in IP's end-to-end packet delivery model. Names Design NDN names are opaque to the network. This allows each application to choose the naming scheme that fits its needs, and naming can thus evolve independently from the network. Structure The NDN design assumes hierarchically structured names, e.g., a video produced by UCLA may have the name /ucla/videos/demo.mpg, where ‘/’ delineates name components in text representations, similar to URLs. This hierarchical structure has many potential benefits: Relationship specification: allows applications to represent the context and relationships of data elements. EX: segment 3 of version 1 of a UCLA demo video might be named /ucla/videos/demo.mpg/1/3 Name aggregation: /ucla could correspond to an autonomous system originating the video Routing: allows the system to scale and aids in providing the necessary context for the data Specifying a name To retrieve dynamically generated data, consumers must be able to deterministically construct the name for a desired piece of data without having previously seen the name or the data through either: an algorithm allows the producer and consumer to arrive at the same name based on information available to both. Interest selectors in conjunction with longest prefix matching retrieve the desired data through one or more iterations. Current research is exploring how applications should choose names that can facilitate both application development and network delivery. 
The aim of this work is to develop and refine existing principles and guidelines for naming, converting these rules into naming conventions implemented in system libraries to simplify future application development. Namespaces Data that may be retrieved globally must have globally unique names, but names used for local communications may require only local routing (or local broadcast) to find matching data. Individual data names can be meaningful in various scopes and contexts, ranging from “the light switch in this room” to “all country names in the world”. Namespace management is not part of the NDN architecture, just as address space management is not part of the IP architecture. However naming is the most important part of NDN application designs. Enabling application developers, and sometimes users, to design their own namespaces for data exchange has several benefits: increasing the closeness of mapping between an application's data and its use of the network. reducing the need for secondary notation (record-keeping to map application configuration to network configuration). expanding the range of abstractions available to the developers. named based content requests also introduces the concerns on privacy leakage. Thanks to separation of namespace management from NDN architecture, it is possible to provide privacy preserving naming scheme by making minor changes in conventional NDN naming scheme. Routing Solutions to IP issues NDN routes and forwards packets based on names, which eliminates three problems caused by addresses in the IP architecture: Address space exhaustion: NDN namespace is essentially unbounded. The namespace is only bounded by the max interest packet size of 8kb and the number of possible unique combinations of characters composing names. NAT traversal: NDN does away with addresses, public or private, so NAT is unnecessary. Address management: address assignment and management is no longer required in local networks. In network multicasting: A producer of data does not need to receive multiple interests for the same data since the PIT entries at downstream forwarders will aggregate interests. The producer receives and responds to a single interest and those forwarding nodes in which multiple incoming interest were received will multicast the data replies to the interfaces those interests were received from. High loss end to end reliability: IP based networks require lost or dropped packets to be retransmitted by the sender. However, in NDN if an interest expires before a data reply reaches the requester the data reply is still cached by forwarders along the return path. The retransmitted interest only needs to reach a forwarder with a cached copy of the data giving NDN based networks higher throughput than IP based networks when packet loss rates are high. Protocols NDN can use conventional routing algorithms such as link state and distance vector. Instead of announcing IP prefixes, an NDN router announces name prefixes that cover the data the router is willing to serve. Conventional routing protocols, such as OSPF and BGP, can be adapted to route on name prefixes by treating names as a sequence of opaque components and doing component-wise longest prefix match of a name in an Interest packet against the FIB table. This enables a wide array of inputs to be aggregated in real time and distributed across multiple interface environments simultaneously without compromising content encryption. Key interface analytics are likewise spared by the process. 
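The Interest/Data processing pipeline and the component-wise longest-prefix match described above can be condensed into a toy sketch. The class, method, and face-object names here are invented for illustration only and do not correspond to the actual NFD code.

```python
class Forwarder:
    """Toy NDN node: Content Store -> Pending Interest Table -> FIB."""

    def __init__(self):
        self.cs = {}    # name -> Data packet (temporary cache)
        self.pit = {}   # name -> set of downstream faces awaiting Data
        self.fib = {}   # name prefix (tuple of components) -> upstream face

    def longest_prefix_match(self, name):
        # Names are hierarchical, e.g. ("ucla", "videos", "demo.mpg", "1", "3").
        for i in range(len(name), 0, -1):
            if name[:i] in self.fib:
                return self.fib[name[:i]]
        return None

    def on_interest(self, name, in_face):
        if name in self.cs:                       # 1. satisfy from cache
            in_face.send_data(self.cs[name])
        elif name in self.pit:                    # 2. aggregate duplicate Interests
            self.pit[name].add(in_face)
        else:                                     # 3. forward toward the producer
            out_face = self.longest_prefix_match(name)
            if out_face is None:
                in_face.send_nack(name)           # no route: NACK downstream
                return
            self.pit[name] = {in_face}
            out_face.send_interest(name)

    def on_data(self, name, data):
        for face in self.pit.pop(name, set()):    # Data retraces the Interest path
            face.send_data(data)
        self.cs[name] = data                      # cache for future Interests
```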
Application transfer and data sharing within the environment are defined by a multi-modal distribution framework, such that the affected cloud relay protocols are unique to the individual runtime identifier. PIT state The PIT state at each router supports forwarding across NDN's data plane, recording each pending Interest and the incoming interface(s), and removing the Interest after the matching Data is received or a timeout occurs. This per hop, per packet state differs from IP's stateless data plane. Based on information in the FIB and performance measurements, an adaptive forwarding strategy module in each router makes informed decisions about: Control flow: since each Interest retrieves at most one Data packet, a router can directly control flow by controlling the number of pending interests it keeps. Multicast data delivery: the PIT recording the set of interface on which the same data has arrive, naturally supports this feature. Updating paths to accommodate changes in their view of the network. Delivery: a router can reason about which Interests to forward to which interfaces, how many unsatisfied Interests to allow in the PIT, as well as the relative priority of different Interests. Interest If a router decides that the Interest cannot be satisfied, e.g., the upstream link is down, there is no forwarding entry in the FIB, or extreme congestion occurs, the router can send a NACK to its downstream neighbor(s) that transmitted the Interest. Such a Negative Acknowledgment (NACK) may trigger the receiving router to forward the Interest to other interfaces to explore alternate paths. The PIT state enables routers to identify and discard looping packets, allowing them to freely use multiple paths toward the same data producer. Packets cannot loop in NDN, which means there is no need for time-to-live and other measures implemented in IP and related protocols to address these issues. Security Overview In contrast to TCP/IP security (e.g., TLS) which secures communication by securing IP-to-IP channels, NDN secures the data itself by requiring data producers to cryptographically sign every Data packet. The publisher's signature ensures the integrity and enables authentication of data provenance, allowing a consumer's trust in data to be decoupled from how or where it is obtained. NDN also supports fine-grained trust, allowing consumers to reason about whether a public key owner is an acceptable publisher for a specific piece of data in a specific context. The second primary research thrust is designing and developing usable mechanisms to manage user trust. There has been research into 3 different types of trust models: hierarchical trust model: where a key namespace authorizes use of keys. A data packet carrying a public key is effectively a certificate, since it is signed by a third party, and this public key is used to sign specific data. web of trust: to enable secure communication without requiring pre-agreed trust anchors. lightweight trust for IoT: The NDN trust model primarily based on asymmetric cryptography, which is infeasible for resource constraint devices in IoT paradigm. Application security NDN's data-centric security has natural applications to content access control and infrastructure security. Applications can encrypt data and distribute keys as named packets using the same named infrastructure to distribute keys, effectively limiting the data security perimeter to the context of a single application. 
To verify a data packet's signature, an application can fetch the appropriate key, identified in the packet's key locator field, just like any other content. But trust management, i.e., how to determine the authenticity of a given key for a particular packet in a given application, is a primary research challenge. Consistent with an experimental approach, NDN trust management research is driven by application development and use: solving specific problems first and then identifying common patterns. For example, the security needs of NLSR required development of a simple hierarchical trust model, with keys at lower (closer to root) levels, being used to sign keys in higher levels in which keys are published with names that reflect their trust relationship. In this trust model, the namespace matches the hierarchy of trust delegation, i.e., /root/site/operator/ router/process. Publishing keys with a particular name in the hierarchy authorizes them to sign specific data packets and limits their scope. This paradigm can be easily extended to Other applications where real world trust tends to follow a hierarchical pattern, such as in our building management systems (BMS). Since NDN leaves the trust model under the control of each application, more flexible and expressive trust relations, may also be expressed. One such example is ChronoChat, which motivated experimentation with a web-of-trust model. The security model is that a current chatroom participant can introduce a newcomer to others by signing the newcomer's key. Future applications will implement a cross-certifying model (SDSI) [13, 3], which provides more redundancy of verification, allowing data and key names to be independent, which more easily accommodates a variety of real-world trust relationships. Routing efficiency and security Furthermore, NDN treats network routing and control messages like all NDN data, requiring signatures. This provides a solid foundation for securing routing protocols against attack, e.g., spoofing and tampering. NDN's use of multipath forwarding, together with the adaptive forwarding strategy module, mitigates prefix hijacking because routers can detect anomalies caused by hijacks and retrieve data through alternate paths. Owing to multi-source, multicast content-delivery nature of Named Data Networking, the random linear coding can improve over all network efficiency. Since NDN packets reference content rather than devices, it is trickier to maliciously target a particular device, although mitigation mechanisms will be needed against other NDN-specific attacks, e.g., Interest flooding DoS. Furthermore, having a Pending Interest Table, which keeps state regarding past requests, which can make informed forward decisions about how to handle interest has numerous security advantages: Load Balancing: the number of PIT entries is an indicator of router load; constraining its size limits the effect of a DDoS attack. Interest timeout: PIT entry timeouts offer relatively cheap attack detection, and the arrival interface information in each PIT entry could support a push-back scheme in which down stream routers are informed of unserved interests, which aides in detecting attacks. 
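Data-centric signing, in which a signature binds a Data packet's name and content together regardless of where the packet is later cached, can be sketched as below. The example uses Ed25519 from the third-party cryptography package purely as a stand-in; NDN's actual packet encoding, signature types, and key-locator handling are defined by the NDN packet specification and are not reproduced here.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

producer_key = Ed25519PrivateKey.generate()

def make_data_packet(name: str, content: bytes) -> dict:
    """Producer signs the (name, content) pair once, at publication time."""
    signed_portion = name.encode() + content
    return {"name": name, "content": content,
            "signature": producer_key.sign(signed_portion)}

def verify_data_packet(packet: dict, public_key) -> bool:
    """A consumer can verify the packet no matter which cache delivered it."""
    signed_portion = packet["name"].encode() + packet["content"]
    try:
        public_key.verify(packet["signature"], signed_portion)
        return True
    except InvalidSignature:
        return False

packet = make_data_packet("/ucla/videos/demo.mpg/1/3", b"...segment bytes...")
assert verify_data_packet(packet, producer_key.public_key())
```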
See also Information-centric networking caching policies Future Internet Research and Experimentation (EU) References External links DEATH TO TCP/IP cry Cisco, Intel, US gov and boffins galore FIA-NP: Collaborative Research: Named Data Networking Next Phase (NDN-NP) Named Data Research Home Page NSF Awards for NDN 2 FIA: Collaborative Research: Named Data Networking (NDN) Named Function Networking (NFN) NDN on Galileo (WebArchive snapshot) Computer networking Internet protocols Network layer protocols
Named data networking
[ "Technology", "Engineering" ]
4,955
[ "Computer networking", "Computer science", "Computer engineering" ]
11,092,492
https://en.wikipedia.org/wiki/Amalgamation%20property
In the mathematical field of model theory, the amalgamation property is a property of collections of structures that guarantees, under certain conditions, that two structures in the collection can be regarded as substructures of a larger one. This property plays a crucial role in Fraïssé's theorem, which characterises classes of finite structures that arise as ages of countable homogeneous structures. The diagram of the amalgamation property appears in many areas of mathematical logic. Examples include in modal logic as an incestual accessibility relation, and in lambda calculus as a manner of reduction having the Church–Rosser property. Definition An amalgam can be formally defined as a 5-tuple (A,f,B,g,C) such that A,B,C are structures having the same signature, and f: A → B, g: A → C are embeddings. Recall that f: A → B is an embedding if f is an injective morphism which induces an isomorphism from A to the substructure f(A) of B. A class K of structures has the amalgamation property if for every amalgam with A,B,C ∈ K and A ≠ Ø, there exist both a structure D ∈ K and embeddings f': B → D, g': C → D such that A first-order theory has the amalgamation property if the class of models of has the amalgamation property. The amalgamation property has certain connections to the quantifier elimination. In general, the amalgamation property can be considered for a category with a specified choice of the class of morphisms (in place of embeddings). This notion is related to the categorical notion of a pullback, in particular, in connection with the strong amalgamation property (see below). Examples The class of sets, where the embeddings are injective functions, and if they are assumed to be inclusions then an amalgam is simply the union of the two sets. The class of free groups where the embeddings are injective homomorphisms, and (assuming they are inclusions) an amalgam is the quotient group , where * is the free product. The class of finite linear orderings. This is due to the fact that any homogeneous structure from an amalgamation class of finite structure. A similar but different notion to the amalgamation property is the joint embedding property. To see the difference, first consider the class K (or simply the set) containing three models with linear orders, L1 of size one, L2 of size two, and L3 of size three. This class K has the joint embedding property because all three models can be embedded into L3. However, K does not have the amalgamation property. The counterexample for this starts with L1 containing a single element e and extends in two different ways to L3, one in which e is the smallest and the other in which e is the largest. Now any common model with an embedding from these two extensions must be at least of size five so that there are two elements on either side of e. Now consider the class of algebraically closed fields. This class has the amalgamation property since any two field extensions of a prime field can be embedded into a common field. However, two arbitrary fields cannot be embedded into a common field when the characteristic of the fields differ. 
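In the definition above, the condition required of D and the embeddings f', g' is, in its usual formulation, that the resulting square commutes:

```latex
f' \circ f \;=\; g' \circ g \,\colon\; A \to D ,
```

i.e. B and C are amalgamated over their common substructure A inside D.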
Strong amalgamation property A class K of structures has the strong amalgamation property (SAP), also called the disjoint amalgamation property (DAP), if for every amalgam with A,B,C ∈ K there exist both a structure D ∈ K and embeddings f': B → D, g': C → D such that f' ∘ f = g' ∘ g and f'(B) ∩ g'(C) = (f' ∘ f)(A), where for any set X and function h on X, h(X) denotes the image {h(x) : x ∈ X}. See also Span (category theory) Pushout (category theory) Joint embedding property Fraïssé's theorem References Entries on amalgamation property and strong amalgamation property in online database of classes of algebraic structures (Department of Mathematics and Computer Science, Chapman University). E.W. Kiss, L. Márki, P. Pröhle, W. Tholen, Categorical algebraic properties. A compendium on amalgamation, congruence extension, epimorphisms, residual smallness, and injectivity, Studia Sci. Math. Hungar 18 (1), 79-141, 1983 (whole journal issue). Model theory

Amalgamation property
[ "Mathematics" ]
922
[ "Mathematical logic", "Model theory" ]
11,092,582
https://en.wikipedia.org/wiki/Cold%20saw
A cold saw is a circular saw designed to cut metal which uses a toothed blade to transfer the heat generated by cutting to the chips created by the saw blade, allowing both the blade and material being cut to remain cool. This is in contrast to an abrasive saw, which abrades the metal and generates a great deal of heat absorbed by the material being cut and saw blade. As metals expand when heated, abrasive cutting causes both the material being cut and blade to expand, resulting in increased effort to produce a cut and potential binding. This produces more heat through friction, resulting in increased blade wear and greater energy consumption. Cold saws use either a solid high-speed steel (HSS) or tungsten carbide-tipped, resharpenable circular saw blade. They are equipped with an electric motor and often a gear reduction unit to reduce the saw blade's rotational speed while maintaining constant torque. This allows the HSS saw blade to feed at a constant rate with a very high chip load per tooth. Cold saws are capable of machining most ferrous and non-ferrous alloys. Additional advantages include minimal burr production, fewer sparks, less discoloration and no dust. Saws designed to employ a flood coolant system to keep saw blade teeth cooled and lubricated may reduce sparks and discoloration completely. Saw blade type and number of teeth, cutting speed, and feed rate all must be appropriate to the type and size of material being cut, which must be mechanically clamped to prevent movement during the cutting process. Blades Cold saw blades are circular metal cutting saw blades categorized into two types: solid HSS or tungsten carbide-tipped (TCT). Both types of blades are resharpenable and may be used many times before being discarded. Cold saw blades are used to cut metal using a relatively slow rotational speed, usually less than 5000 surface feet per minute (SFM) (25 m/s), and a high chip load per tooth, usually between .001"–.003" (0.025–0.08 mm) per tooth. These blades are driven by a high power motor and high-torque gear reduction unit or an AC vector drive. During the cutting process, the metal is released in a shearing action by the teeth as the blade turns and the feed mechanism moves the blade forward. They are called "cold saw blades" because they transfer all the energy and heat created during the cutting process to the chip. This enables the blade and the work material to remain cold. Classification The first type of cold saw blade, solid HSS, may be made from either M2 tool steel or M35 tool steel, alloyed with additional cobalt. Solid HSS saw blades are heat treated and hardened to 64/65 HRC for ferrous cutting applications and 58/60 HRC for non-ferrous cutting applications. This high hardness gives the cutting edges of the teeth a high resistance to heat and wear. However, this increased hardness also makes the blades brittle and not very resistant to shock. In order to produce a high quality HSS cold saw blade, it is necessary to start with very flat and properly tensioned raw material. The blades must be press quenched after hardening to prevent them from being warped. HSS saw blades are typically hollow ground for clearance during the cutting process. The term HSS doesn't necessarily mean what it implies. These blades are usually never run at surface speeds higher than 350 SFM. Solid HSS cold saw blades may be used for cutting many different shapes and types of metal including: tubes, extrusions, structural sections, billets, bars, ingots, castings, forgings etc. 
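The interplay of surface speed, blade diameter, tooth count, and chip load described above reduces to two standard machining relations: RPM = 12·SFM/(π·D) and feed = RPM · teeth · chip load. A small illustrative sketch follows; the example figures are hypothetical, chosen only to fall inside the ranges quoted above.

```python
import math

def blade_rpm(surface_feet_per_min, blade_diameter_in):
    """Spindle speed that produces the requested surface speed at the blade rim."""
    return (surface_feet_per_min * 12.0) / (math.pi * blade_diameter_in)

def feed_rate_in_per_min(rpm, tooth_count, chip_load_in_per_tooth):
    """Linear feed needed to keep every tooth cutting at the target chip load."""
    return rpm * tooth_count * chip_load_in_per_tooth

# e.g. a 14-inch HSS blade limited to 350 SFM, with 90 teeth at 0.002" per tooth
rpm = blade_rpm(350, 14)                       # about 95 RPM
feed = feed_rate_in_per_min(rpm, 90, 0.002)    # about 17 in/min
print(round(rpm, 1), round(feed, 1))
```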
These blades may also be coated with special wear resistant coatings such as titanium nitride (TiN) or titanium aluminum nitride (TiAlN), but are more commonly used commercially with a black oxide coating aiding in better coolant distribution over the surface area of the cutting blade. The second type of cold saw blade, tungsten carbide-tipped (TCT), are made with an alloy steel body and tungsten carbide inserts brazed to the tips of the teeth. These tips are ground on all surfaces to create tangential and radial clearance and provide the proper cutting and clearance angles on the teeth. The alloy body is generally made from a wear-resistant material such as a chrome vanadium steel, heat treated to 38/42 HRC. The tungsten carbide tips are capable of operating at much higher temperatures than solid HSS, therefore, TCT saw blades are usually run at much higher surface speeds. This allows carbide-tipped blades to cut at faster rates and still maintain an acceptable chip load per tooth. These blades are commonly used for cutting non-ferrous alloys, but have gained significant popularity for ferrous metal cutting applications in the last 10 years. The tungsten carbide inserts are extremely hard (98 HRC) and capable of very long wear life. However, they are less resistant to shock than solid HSS cold saw blades. Any vibration during the cutting process may severely damage the teeth. These cold saw blades need to be driven by a backlash free gear box and a constant feed mechanism like a ball-screw feed. Portable saws Portable cold saws were primarily designed for sheet metal roofers in the building industry, and can cut up to thick mild steel. Cold saws, as opposed to abrasive saws, are used so that protective coatings are not damaged. They have a heavy duty aluminium catcher which is useful for capturing the swarf, and use cermet tipped blades. References Saws Cutting machines Metalworking cutting tools
Cold saw
[ "Physics", "Technology" ]
1,176
[ "Physical systems", "Machines", "Cutting machines" ]
11,093,946
https://en.wikipedia.org/wiki/Ascofuranone
Ascofuranone is an antibiotic produced by various ascomycete fungi, including Acremonium sclerotigenum, that inhibits the Trypanosoma brucei alternative oxidase and is a lead compound in efforts to produce other drugs targeting this enzyme for the treatment of sleeping sickness. The compound is effective both in in vitro cell culture and in infections in mice. Ascofuranone has also been reported to have anti-tumor activity and to modulate the immune system. Biosynthesis The proposed biosynthesis of ascofuranone was reported by Kita et al., as well as by Abe et al. Ascofuranone is synthesized through prenylation of orsellinic acid, followed by terminal cyclization via epoxidation. Compound (1), ilicicolinic acid B, was found to be produced by the polyketide synthase (PKS) StBA, and AscCABD were found to be responsible for the biosynthesis of ilicicolin A (3). Ilicicolin B (2) was found to be produced by expressing AscC (polyketide synthase), which is then followed by expression of AscA (prenyltransferase). AscD (flavin-dependent halogenase, a flavin-binding enzyme) is able to catalyze the chlorination of ilicicolinic acid B (2) to yield ilicicolin A (3). Epoxidation of (3) by AscE (P450 monooxygenase) leads to the formation of ilicicolin A epoxide (4). Ilicicolin A epoxide can then be hydroxylated at C-16 by AscH (P450 monooxygenase) to yield intermediate (5), which can then be cyclized by AscI (eight-transmembrane protein, TPC) into ascofuranol (6), specifically through 6-endo-tet cyclization. Finally, ascofuranol (6) can be oxidized by AscJ (NAD(P)-dependent alcohol dehydrogenase), leading to the formation of ascofuranone. References Antibiotics Terpeno-phenolic compounds Aromatic aldehydes Ketones Chloroarenes Tetrahydrofurans Halogen-containing natural products
Ascofuranone
[ "Chemistry", "Biology" ]
505
[ "Biotechnology products", "Ketones", "Functional groups", "Antibiotics", "Biocides" ]
11,094,380
https://en.wikipedia.org/wiki/Enzootic
Enzootic describes the situation where a disease or pathogen is continuously present in at least one species of non-human animal in a particular region. It is the non-human equivalent of endemic. In epizoology, an infection is said to be "enzootic" in a population when the infection is maintained in the population without the need for external inputs (cf. endemic). See also Epizootic Biodiversity Pathology Epidemiology
Enzootic
[ "Biology", "Environmental_science" ]
91
[ "Epidemiology", "Environmental social science", "Biodiversity", "Pathology" ]
11,095,324
https://en.wikipedia.org/wiki/Integro-differential%20equation
In mathematics, an integro-differential equation is an equation that involves both integrals and derivatives of a function. General first order linear equations The general first-order, linear (only with respect to the term involving derivative) integro-differential equation is of the form As is typical with differential equations, obtaining a closed-form solution can often be difficult. In the relatively few cases where a solution can be found, it is often by some kind of integral transform, where the problem is first transformed into an algebraic setting. In such situations, the solution of the problem may be derived by applying the inverse transform to the solution of this algebraic equation. Example Consider the following second-order problem, where is the Heaviside step function. The Laplace transform is defined by, Upon taking term-by-term Laplace transforms, and utilising the rules for derivatives and integrals, the integro-differential equation is converted into the following algebraic equation, Thus, . Inverting the Laplace transform using contour integral methods then gives . Alternatively, one can complete the square and use a table of Laplace transforms ("exponentially decaying sine wave") or recall from memory to proceed: . Applications Integro-differential equations model many situations from science and engineering, such as in circuit analysis. By Kirchhoff's second law, the net voltage drop across a closed loop equals the voltage impressed . (It is essentially an application of energy conservation.) An RLC circuit therefore obeys where is the current as a function of time, is the resistance, the inductance, and the capacitance. The activity of interacting inhibitory and excitatory neurons can be described by a system of integro-differential equations, see for example the Wilson-Cowan model. The Whitham equation is used to model nonlinear dispersive waves in fluid dynamics. Epidemiology Integro-differential equations have found applications in epidemiology, the mathematical modeling of epidemics, particularly when the models contain age-structure or describe spatial epidemics. The Kermack-McKendrick theory of infectious disease transmission is one particular example where age-structure in the population is incorporated into the modeling framework. See also Delay differential equation Differential equation Integral equation Integrodifference equation References Further reading Vangipuram Lakshmikantham, M. Rama Mohana Rao, “Theory of Integro-Differential Equations”, CRC Press, 1995 External links Interactive Mathematics Numerical solution of the example using Chebfun Differential equations
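In standard notation, the general first-order linear form and the RLC circuit relation discussed above can be written out as follows; the worked problem shown last is one representative example whose solution follows exactly the Laplace-transform steps described (Heaviside forcing, completing the square, an exponentially decaying sine).

```latex
% General first-order linear integro-differential equation
\frac{du}{dx} + \int_{x_0}^{x} f\bigl(t, u(t)\bigr)\,dt = g\bigl(x, u(x)\bigr),
\qquad u(x_0) = u_0 .

% Kirchhoff's voltage law for a series RLC circuit with impressed voltage E(t)
L\frac{di}{dt} + R\,i(t) + \frac{1}{C}\int_0^t i(\tau)\,d\tau = E(t) .

% Representative example solved by the Laplace transform
u'(t) + 2u(t) + 5\int_0^t u(\tau)\,d\tau = H(t), \quad u(0)=0
\;\Longrightarrow\; U(s) = \frac{1}{(s+1)^2 + 4}
\;\Longrightarrow\; u(t) = \tfrac12\, e^{-t}\sin 2t .
```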
Integro-differential equation
[ "Mathematics" ]
523
[ "Mathematical objects", "Differential equations", "Equations" ]
14,752,143
https://en.wikipedia.org/wiki/Tekla%20Structures
Tekla Structures is a building information modeling software able to model structures that incorporate different kinds of building materials, including steel, concrete, timber and glass. Tekla allows structural drafters and engineers to design a building structure and its components using 3D modeling, generate 2D drawings and access building information. Tekla Structures was formerly known as Xsteel (X as in X Window System, the foundation of the Unix GUI). Features Tekla Structures is used in the construction industry for steel and concrete detailing, precast and cast in-situ. The software enables users to create and manage 3D structural models in concrete or steel, and guides them through the process from concept to fabrication. The process of shop drawing creation is automated. It is available in different configurations and localized environments. Tekla Structures is known to support large models with multiple simultaneous users, but is regarded as relatively expensive, complex to learn and fully utilize. It competes in the BIM market with AutoCAD, Autodesk Revit, DProfiler and Digital Project, Lucas Bridge, PERICad and others. Tekla Structures is Industry Foundation Classes (IFC) compliant. Modeling scopes within Tekla Structures includes Structural Steel, Cast-in-Place (CIP), Concrete, Reinforcing Bar, Miscellaneous Steel and Light Gauge Drywall Framing. The transition of Xsteel to Tekla Structures in 2004 added significant more functionality and interoperability. It is often used in conjunction with Autodesk Revit, where structural framing is designed in Tekla and exported to Revit using the DWG/DXF formats. Applications Engineers have used Tekla Structures to model stadiums, offshore structures, pipe rack structures, plants, factories, residential buildings, bridges and skyscrapers. Tekla Structures was used in the construction design for various projects around the world, including: Grandstand Replacement, Daytona International Speedway (USA) Frontstretch Grandstands, Daytona International Speedway (USA) Denver International Airport Expansion (USA) San Jose Earthquakes Stadium (USA) BB&T Ballpark (Charlotte, USA) Spillway Replacement, Manitoba Hydro (USA) National Stadium Roof, Singapore Sports Hub (Singapore) Red Bear Student Center, University of Saskatchewan (Canada) Troja Bridge (Prague) Tesco Supermarket (Sheringham, UK) Baylor University Stadium (Australia) Canopée des Halles, Forum des Halles (Paris, France) Sutter Medical Center (California, USA) Expansion, Chennai International Airport (India) Dongdaemun Design Plaza (Seoul) Capital Gate (Abu Dhabi) Midfield Terminal Complex, Abu Dhabi Airport (Abu Dhabi) King Abdullah Financial District (Saudi Arabia) King Abdulaziz Center for World Culture (Saudi Arabia) National Museum of Qatar (Qatar) Hilton Garden Inn (UAE) Puuvilla Shopping Centre (Finland) College Football Hall of Fame (Atlanta, GA) Optus Stadium (Perth, Australia) Tekla Structures was used extensively for the steel design of Capital Gate at Abu Dhabi, UAE. Files exported from Tekla facilitated faster steel fabrication. One of the architects, Jeff Schofield, stated that "it was the right time in history and we had the right technology to make this happen". The Manitoba Hydro Spillway Replacement was designed using Tekla Structures to "successfully model and co-ordinate its design", a project that won the TEKLA 2012 North American BIM Award for "Best Concrete Project". 
It was the "first hydroelectric project that has seen steel, concrete, and rebar fully detailed using Tekla Structures". Stable version release - history dates Tekla Structures 16.0 - March 2010 Tekla Structures 17.0 - February 2011 Tekla Structures 18.0 - March 2012 Tekla Structures 19.0 - March 2013 Tekla Structures 20.0 - February 2014 Tekla Structures 21.0 - March 2015 Tekla Structures 2016 - March 2016 Tekla Structures 2017 - March 2017 Tekla Structures 2018 - March 2018 Tekla Structures 2019 - March 2019 Tekla Structures 2020 - March 2020 Tekla Structures 2021 - March 2021 Tekla Structures 2022 - March 2022 Tekla Structures 2023 - March 2023 Tekla Structures 2024 - March 2024 See also Comparison of CAD editors for CAE References Computer-aided design software Building information modeling
Tekla Structures
[ "Engineering" ]
870
[ "Construction", "Building information modeling", "Building engineering", "Construction software" ]
14,753,789
https://en.wikipedia.org/wiki/APBB1
Amyloid beta A4 precursor protein-binding family B member 1 is a protein that in humans is encoded by the APBB1 gene. Function The protein encoded by this gene is a member of the Fe65 protein family. It is an adaptor protein localized in the nucleus. It interacts with the Alzheimer's disease amyloid precursor protein (APP), transcription factor CP2/LSF/LBP1 and the low-density lipoprotein receptor-related protein. APP functions as a cytosolic anchoring site that can prevent the gene product's nuclear translocation. The encoded protein could play an important role in the pathogenesis of Alzheimer's disease. It is thought to regulate transcription. It has also been observed to block cell cycle progression by downregulating thymidylate synthase expression. Multiple alternatively spliced transcript variants have been described for this gene, but the full-length sequences of some of them have not been determined. Interactions APBB1 has been shown to interact with APLP2, TFCP2, LRP1 and Amyloid precursor protein. References External links Further reading Proteins
APBB1
[ "Chemistry" ]
228
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
14,754,020
https://en.wikipedia.org/wiki/DDR1
Discoidin domain receptor family, member 1, also known as DDR1 or CD167a (cluster of differentiation 167a), is a human gene. Function Receptor tyrosine kinases (RTKs) play a key role in the communication of cells with their microenvironment. These molecules are involved in the regulation of cell growth, differentiation and metabolism. The protein encoded by this gene is an RTK that is widely expressed in normal and transformed epithelial cells and is activated by various types of collagen. This protein belongs to a subfamily of tyrosine kinase receptors with a homology region to the Dictyostelium discoideum protein discoidin I in their extracellular domain. Its autophosphorylation is achieved by all collagens so far tested (type I to type VI). A closely related family member is the DDR2 protein. In situ studies and Northern-blot analysis showed that expression of this encoded protein is restricted to epithelial cells, particularly in the kidney, lung, gastrointestinal tract, and brain. In addition, this protein is significantly over-expressed in several human tumors of the breast, ovary, esophagus, and pediatric brain. This gene is located on chromosome 6p21.3 in proximity to several HLA class I genes. Alternative splicing of this gene results in multiple transcript variants. References Further reading Clusters of differentiation Tyrosine kinase receptors
DDR1
[ "Chemistry" ]
303
[ "Tyrosine kinase receptors", "Signal transduction" ]
14,754,054
https://en.wikipedia.org/wiki/PITX2
Paired-like homeodomain transcription factor 2 also known as pituitary homeobox 2 is a protein that in humans is encoded by the PITX2 gene. Function This gene encodes a member of the RIEG/PITX homeobox family, which is in the bicoid class of homeodomain proteins. This protein acts as a transcription factor and regulates procollagen lysyl hydroxylase gene expression. This protein is involved in the development of the eye, tooth, and abdominal organs. This protein acts as a transcriptional regulator involved in the basal and hormone-regulated activity of prolactin. A similar protein in other vertebrates is involved in the determination of left-right asymmetry during development. Three transcript variants encoding distinct isoforms have been identified for this gene. Pitx2 is responsible for the establishment of the left-right axis, the asymmetrical development of the heart, lungs, and spleen, twisting of the gut and stomach, as well as the development of the eyes. Once activated, Pitx2 is locally expressed in the left lateral mesoderm, tubular heart, and early gut, which leads to the asymmetrical development of organs and looping of the gut. When Pitx2 is deleted, irregular morphogenesis of the organs on the left-hand side results. Pitx2 is expressed left-laterally, controlling the morphology of the left visceral organs. Expression of Pitx2 is controlled by an intronic enhancer ASE and Nodal. It appears that while Nodal controls cranial expression of Pitx2, ASE controls left–right expression of Pitx2, which leads to the asymmetrical development of the left-sided visceral organs, such as the spleen and liver. Collectively, Pitx2 first acts to prevent the apoptosis of the extraocular muscles, and then acts as the myogenic programmer of the extraocular muscle cells. There have also been studies showing different isoforms of the transcription factor: Pitx2a, Pitx2b, and Pitx2c, each with distinct and non-overlapping functions. Studies have shown that in chick embryos, Pitx2 is a direct regulator of cVg1, a growth factor homologous to mammalian GDF1. cVg1 is a Transforming growth factor beta signal that is expressed posteriorly before the formation of the embryo germ layers. The Pitx2 regulation of cVg1 is essential both during normal embryonic development and during establishment of polarity in twins created by experimental division of a single, original embryo. Pitx2 is shown to be essential for upregulation of cVg1 through the binding of enhancers, and is necessary for the proper expression of cVg1 in the posterior marginal zone. Expression of cVg1 in the PMZ is in turn necessary for the proper development of the primitive streak. Experimental knockouts of the PITX2 gene are associated with the subsequent upregulation of related Pitx1, which is able to partially compensate for the loss of Pitx2. Pitx2's ability to regulate the polarity of the embryo may be responsible for the ability of developing chicks to establish proper polarity in embryos created by cuts performed as late as the blastoderm stage. Pitx2 plays a role in limb myogenesis. Pitx2 can determine the development and activation of the MyoD gene (the gene responsible for skeletal myogenesis). Studies have shown that expression of Pitx2 happens before MyoD is expressed in muscles. Further studies show that Pitx2 is directly recruited to act on the MyoD core enhancer and thus directs the expression of the MyoD gene. Pitx2 is in a parallel pathway with Myf5 and Myf6, as both paths affect expression of MyoD. 
However, in the absence of the parallel pathway, Pitx2 can continue activating MyoD genes. The expression of Pitx2 rescues MyoD gene expression and maintains the expression of this gene for limb myogenesis. Yet, the Pitx2 pathway is PAX3-dependent and requires this gene to enact limb myogenesis. Studies support this finding: in the absence of PAX3, there is a deficit in Pitx2 expression and thus MyoD is not expressed in limb myogenesis. The Pitx2 gene is thus shown to be downstream of Pax3 and to serve as an intermediate between Pax3 and MyoD. In conclusion, Pitx2 plays an integral role in limb myogenesis. Pitx2 isoforms are expressed in a sexually dimorphic manner during rat gonadal development. Pitx2 expression has been shown to be important for normal anterior pituitary gland development. Studies using mouse embryos established that Pitx2 expression is required in a dosage-dependent manner. Mice with a homozygous null mutation of the Pitx2 gene showed that it is not required for initial pituitary formation but is needed for further development. Littermates of normal homozygotes, Pitx2+/+, versus homozygous null, Pitx2-/-, at embryonic day 10.5 provided a comparison of differing pouch sizes and cell types. Mice with the homozygous null gene had a smaller pouch, and mesenchymal cell growth and differentiation were arrested. Embryos with a hypomorphic mutation of the gene, Pitx2neo/+, were considered morphologically normal. Along with normal pituitary expansion, Pitx2 is needed for normal expression of the transcription genes for hormones produced in the anterior pituitary, including luteinizing hormone (LH), follicle stimulating hormone (FSH), gonadotropin-releasing hormone (GnRH), growth hormone (GH), and thyroid stimulating hormone (TSH). A study conducted using Pitx2neo/neo mice at postnatal day 1 found that the transcripts of the hormone genes for LH beta (LHb) and FSH beta (FSHb), and of the GnRH receptor (GnRHR), were nearly absent or nearly abolished, respectively. Transcription genes for GH- and TSH-producing cells, and the growth hormone releasing hormone receptor (GHRHR), were moderately reduced in Pitx2neo homozygous mice. Further analysis of the transcription factors, Gata2, Egr1 and SF1, involved in LHb and FSHb differentiation found a reduction or absence of them in Pitx2neo/neo mice. The transcription factors Prop1 and Pit1, which control development of GH- and TSH-producing cells, were also studied in Pitx2neo homozygous mice, but only Pit1 expression was reduced. A reduction or absence of the transcription factors of the gonadotropin cells of the anterior pituitary leads to a loss of full pituitary cell function. Clinical significance Mutations in this gene are associated with Axenfeld-Rieger syndrome (ARS), iridogoniodysgenesis syndrome (IGDS), and sporadic cases of Peters anomaly. This protein plays a role in the terminal differentiation of somatotroph and lactotroph cell phenotypes. Pitx2 is overexpressed in many cancers. For example, thyroid, ovarian, and colon cancer all have higher levels of Pitx2 compared to noncancerous tissues. Scientists speculate that cancer cells improperly turn on Pitx2, leading to uncontrolled cell proliferation. This is consistent with the role of Pitx2 in regulating the growth-regulating genes cyclin D2, cyclin D1, and C-Myc. In renal cancer, Pitx2 regulates expression of ABCB1, a multidrug transporter, by binding to the promoter region of ABCB1. Increased expression of Pitx2 in renal cancer cells is associated with increased expression of ABCB1. 
Thus, renal cancer cells that overexpress ABCB1 have a greater resistance to chemotherapeutic agents. In experiments where Pitx2 expression was decreased, renal cancer cells had decreased cell proliferation and greater susceptibility to doxorubicin treatment, which is consistent with other results. In human esophageal squamous cell carcinoma (ESCC), Pitx2 is overexpressed compared to normal esophageal squamous cells. In addition, greater expression of Pitx2 is positively correlated with clinical aggressiveness of ESCC. Also, ESCC patients with high Pitx2 expression did not respond as well to definitive chemoradiotherapy (CRT) compared to ESCC patients with low Pitx2 expression. Thus, physicians may be able to use Pitx2 expression to predict how ESCC patients will respond to cancer treatment. In congenital heart disease, heterozygous mutations in Pitx2 have been implicated in the development of Tetralogy of Fallot, ventricular septal defects, atrial septal defects, transposition of the great arteries, and endocardial cushion defect (ECD). The mutations of the Pitx2 gene are created through alternative splicing. The isoform of Pitx2 important for cardiogenesis is Pitx2c. The lack of expression of this particular isoform correlates with these congenital defects. Pitx2 mutations significantly reduce transcriptional activity of Pitx2 and synergistic activation between Pitx2 and NKX2 (also important for development of the heart). The large phenotypic spectrum due to the mutation of Pitx2 may be attributed to a variety of factors, including different genetic backgrounds, epigenetic modifiers and delayed/incomplete penetrance. The mutation of Pitx2 is not defined as the cause of these congenital heart defects, but is currently perceived as a risk factor for their development. Studies have also shown that Pitx2 displays an oncogenic role in patients who have lung adenocarcinoma (LUAD). Pitx2 was overexpressed in LUAD when compared with neighboring normal tissues, and is reported to be associated with higher clinical stages of the carcinoma and decreased survival. Patients with LUAD who presented with higher levels of Pitx2 had a lower overall survival rate compared to those with lower levels of Pitx2. The Pitx2 gene plays a role in lung adenocarcinoma that is dependent on activating the Wnt/β-catenin signaling pathway. When analyzing experimental findings from this Wnt/β-catenin signaling pathway, a TCGA dataset showed that Pitx2 had a positive correlation with WNT3A. These results suggest that Pitx2 binds directly to the WNT3A promoter region, which enhances WNT3A's transcription. This transcriptional regulation of WNT3A has been reported to encourage migration and the infiltration process of LUAD, which can worsen LUAD patients' prognosis. Experimental knockdown of Pitx2 repressed tumor growth of LUAD; this supports the claim that Pitx2 is associated with the tumorigenesis of cancers, specifically lung adenocarcinoma. These results suggest that Pitx2 may have the potential to serve as a biomarker for patients that present with LUAD. References Further reading External links Transcription factors
PITX2
[ "Chemistry", "Biology" ]
2,361
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,754,384
https://en.wikipedia.org/wiki/CTAG1B
Cancer/testis antigen 1 also known as LAGE2 or LAGE2B is a protein that in humans is encoded by the CTAG1B gene. It is most often referenced by its alias NY-ESO-1. Cancer/Testis Antigen 1B is a protein belonging to the family of Cancer Testis Antigens (CTA), which are expressed in a variety of malignant tumours at the mRNA and protein levels but are restricted to testicular germ cells in normal adult tissues. A clone of the CTAG gene was originally identified by immunological methods in oesophageal carcinoma using patient serum. The aberrant re-expression of CTAs is induced by molecular mechanisms including DNA demethylation, histone post-translational modification, and microRNA-mediated regulation. The effect of DNA demethylation is evident from the capability of demethylating agents, such as 5-aza-2-deoxycytidine, to induce CTA re-expression in tumour cells but not in normal epithelial cells. Gene CTAG1B is located on the long arm of chromosome X (Xq28), containing three exons that are approximately 8 kb in length. CTAG1B is found to have a neighbouring gene of identical sequence: CTAG1A. Protein The gene encodes a 180-amino acid polypeptide, expressed from 18 weeks during embryonic development until birth in human fetal testis. It is also strongly expressed in spermatogonia and in primary spermatocytes of adult testis, but not in post-meiotic cells or testicular somatic cells. Structurally, CTAG1B features a glycine-rich N-terminal region, as well as a hydrophobic C-terminal region with a Pcc-1 domain. The protein has been shown to be homologous to two other CTAs located in the same region: LAGE-1 and ESO3. The exact function of CTAG1B remains unknown. Studies analysing CTAG1B's structure and expression pattern have suggested a role in cell cycle progression and growth, although this has not been conclusively established. The coexpression of CTAG1B with melanoma antigen gene C1 (MAGE-C1), another CTA, further supports its involvement in cell cycle regulation and apoptosis, due to the role of MAGE proteins in these processes. Moreover, its restricted expression pattern in male germ cells suggests its role in germ cell self-renewal or differentiation, supported by the nuclear localization of CTAG1B in mesenchymal stem cells in contrast to its cytoplasmic expression in cancer cells. Humoral Immune Response It is also believed that cancer-testis antigens are immunogenic proteins, since many members of the family have been shown to induce spontaneous cellular and humoral immune responses in patients with advanced stage tumours. The first reported simultaneous humoral and cellular response against CTAG1B was from a metastatic melanoma patient. Three HLA-A2-restricted epitopes in CTAG1B were identified as the recognition sites for CD8+ cytotoxic T lymphocytes. Integrated humoral immune responses against CTAG1B have been detected in patients with multiple myeloma, breast cancer, non-small-cell lung carcinoma, and ovarian cancer. As such, CTAG1B is believed to be a promising candidate for cancer immunotherapy due to its restricted expression in normal tissues and re-expression in tumour cells, as well as its high immunogenicity. These features also suggest a limited off-target toxicity of CTAG1B-based cancer therapies. Immunisation with CTAG1B could be a successful approach to inducing antigen-specific immune responses in cancer patients. Up until May 2018, 12 clinical trials had been registered using a CTAG1B cancer vaccine, 23 using modified T cells, and 13 using combinatorial immunotherapy. 
Examining the expression of a number of CTA genes in 23 samples of sporadic medullary thyroid carcinoma has revealed that CTAG1B expression significantly correlates with tumour recurrence. A humoral response against this CTA was detected in 54.5% of CTAG1B-expressing patients, and in 1 of 6 patients with a CTAG1B-negative tumour. Anti-CTAG1B antibodies were present in 35.7% of patients, demonstrating that medullary thyroid carcinoma is associated with a humoral immune response to CTAG1B. Another study has shown that CTAG1B binding to CALR on macrophages and dendritic cells provides a link between CTAG1B, the innate immune system, and possibly the adaptive immune response against CTAG1B. References Further reading External links Proteins
CTAG1B
[ "Chemistry" ]
1,015
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
14,754,403
https://en.wikipedia.org/wiki/Neutrophil%20cytosolic%20factor%204
Neutrophil cytosol factor 4 is a protein that in humans is encoded by the NCF4 gene. Function The protein encoded by this gene is a cytosolic regulatory component of the superoxide-producing phagocyte NADPH-oxidase, a multicomponent enzyme system important for host defense. This protein is preferentially expressed in cells of myeloid lineage. It interacts primarily with neutrophil cytosolic factor 2 (NCF2/p67-phox) to form a complex with neutrophil cytosolic factor 1 (NCF1/p47-phox), which further interacts with the small G protein RAC1 and translocates to the membrane upon cell stimulation. This complex then activates flavocytochrome b, the membrane-integrated catalytic core of the enzyme system. The PX domain of this protein can bind phospholipid products of the PI(3) kinase, which suggests its role in PI(3) kinase-mediated signaling events. The phosphorylation of this protein was found to negatively regulate the enzyme activity. Alternatively spliced transcript variants encoding distinct isoforms have been observed. Clinical significance Genome-wide association studies (GWAS) have shown that individuals with certain SNPs in NCF4 are more susceptible to Crohn's disease. Crohn's disease patients with the rs4821544 variant showed decreased reactive oxygen species production after stimulation with GM-CSF, a proinflammatory cytokine. Interactions Neutrophil cytosolic factor 4 has been shown to interact with Ku70, Neutrophil cytosolic factor 1 and Moesin. References Further reading Proteins
Neutrophil cytosolic factor 4
[ "Chemistry" ]
365
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
14,754,630
https://en.wikipedia.org/wiki/Homeobox%20protein%20Nkx-2.5
Homeobox protein Nkx-2.5 is a protein that in humans is encoded by the NKX2-5 gene. Function Homeobox-containing genes play critical roles in regulating tissue-specific gene expression essential for tissue differentiation, as well as determining the temporal and spatial patterns of development (Shiojima et al., 1995). It has been demonstrated that a Drosophila homeobox-containing gene called 'tinman' is expressed in the developing dorsal vessel and in the equivalent of the vertebrate heart. Mutations in tinman result in loss of heart formation in the embryo, suggesting that tinman is essential for Drosophila heart formation. Furthermore, abundant expression of Csx, the presumptive mouse homolog of tinman, is observed only in the heart from the time of cardiac differentiation. CSX, the human homolog of murine Csx, has a homeodomain sequence identical to that of Csx and is expressed only in the heart, again suggesting that CSX plays an important role in human heart formation. In humans, proper NKX2-5 expression is essential for the development of atrial, ventricular, and conotruncal septation, atrioventricular (AV) valve formation, and maintenance of AV conduction. Mutations affecting its expression are associated with congenital heart disease (CHD) and related ailments. Patients with NKX2-5 mutations commonly present AV conduction block and atrial septal defects (ASD). Recently, postnatal roles of cardiac transcription factors have been extensively investigated. Consistent with the direct transactivation of numerous cardiac genes reactivated in response to hypertrophic stimulation, cardiac transcription factors are profoundly involved in the generation of cardiac hypertrophy or in cardioprotection from cytotoxic stress in the adult heart. The NKX2-5 transcription factor may help myocytes endure cytotoxic stress; however, further exploration in this field is required. NK-2 homeobox genes are a family of genes that encode numerous transcription factors that aid in the development of many structures, including the thyroid, colon, and heart. Of the NK-2 genes, the NKX2-5 transcription factor is mostly involved in cardiac development, and defects in this gene can lead to congenital heart defects including, but not limited to, atrial septal defects. NKX2-5 is expressed in precursor cardiac cells, and this expression is necessary for proper cardiac development. Knockout of the NKX2-5 gene in mice was found to induce congenital heart defects by leading to differentially expressed genes. In the case of loss of function of NKX2-5, test subjects developed increased heart rate and decreased variability in heart rate. This discovery indicates that NKX2-5 is necessary for proper cardiac formation as well as proper cardiac function after formation. NKX2-5 has also been shown to bind to the promoter of FGF-16 and regulate its expression. This finding suggests that NKX2-5 is implicated in cardiac injury via cytotoxic effects. Interactions During embryogenesis, NKX2-5 is expressed in early cardiac mesoderm cells throughout the left ventricle and atrial chambers. In early cardiogenesis, cardiac precursor cells from the cardiac crescent congregate along the ventral midline of the developing embryo and form the linear heart tube. In Nkx2-5 knockout mice, cardiac development halts at the linear heart tube stage and looping morphogenesis is disrupted. NKX2-5 has been shown to interact with GATA4 and TBX5. 
NKX2-5 is a transcription factor that regulates heart development from the Cardiac Crescent of the splanchnic mesoderm in humans. NKX2-5 is dependent upon the JAK-STAT pathway and works along with MEF2, HAND1, and HAND2 transcription factors to direct heart looping during early heart development. NKX2-5 in vertebrates is equivalent to the 'tinman' gene in Drosophila and directly activates the MEF2 gene to control cardiomyocyte differentiation. NKX2-5 operates in a positive feedback loop with GATA transcription factors to regulate cardiomyocyte formation. NKX2-5 influences the HAND1 and HAND2 transcription factors that control the essential asymmetrical development of the heart's ventricles. The gene has been shown to play a role in the heart's conduction system postnatally. NKX2-5 is also involved in the intrinsic mechanisms that decide ventricular and atrial cellular fate. During ventricular chamber formation, NKX2-5 and NKX2-7 are required to maintain cardiomyocyte cellular identity. Repression of either gene causes the differentiating cardiomyocytes to move towards atrial chamber identity. NKX2-5 mutations have also been associated with preeclampsia, though research is still being conducted in this area. References Further reading External links Transcription factors
Homeobox protein Nkx-2.5
[ "Chemistry", "Biology" ]
1,050
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,755,368
https://en.wikipedia.org/wiki/PMEL%20%28gene%29
Melanocyte protein PMEL also known as premelanosome protein (PMEL), silver locus protein homolog (SILV) or Glycoprotein 100 (gp100), is a protein that in humans is encoded by the PMEL gene. Its gene product may be referred to as PMEL, silver, ME20, gp100 or Pmel17. Structure and function PMEL is a 100 kDa, 661-amino-acid type I transmembrane glycoprotein that is expressed primarily in melanosomes, which are the melanin-producing organelles in melanocytes, the pigment cells of the skin and eye, and in most malignant melanomas. This protein is involved in melanosome maturation, including melanogenesis, melanosome biogenesis, and melanin polymerization (Eisenberg). The transmembrane form of PMEL is modified in the secretory pathway by elaboration of N-linked oligosaccharides and addition and modification of O-linked oligosaccharides. It is then targeted to precursors of the pigment organelle, the melanosome, where it is proteolytically processed to several small fragments. Some of these fragments form non-pathological amyloid that assembles into sheets and forms the striated pattern that underlies melanosomal ultrastructure. PMEL cleavage is mediated by several proteases including a proprotein convertase of the furin family, a "sheddase" that might include members of the a disintegrin and metalloproteinase (ADAM) family, and additional proteases in melanosomes or their precursors. After the amyloidogenic region is cleaved, the small remaining integral membrane fragment is digested by γ-secretase. The expression of the PMEL gene is regulated by the microphthalmia-associated transcription factor (MITF). Function in cancer and cancer treatment The gp100 protein is a melanoma antigen, i.e. a tumor-associated antigen. Short fragments of it have been used to develop the gp100 cancer vaccine which is or contains gp100:209-217(210M). Hydrophilic recombinant gp100 protein (HR-gp100) has been topically applied to intact human skin in vitro, and used as a vaccine in a mouse model. It was demonstrated that HR-gp100 permeates into human skin, and is processed and presented by human dendritic cells. In the mouse model, an HR-gp100-based vaccine triggered antigen-specific T cell responses, as shown by proliferation assays, ELISA and intracellular staining for IFN-γ. The gp100 protein contains differentiation antigens, and has been widely studied as a target for melanoma immunotherapy. Different sequences of the GP100 peptide could be used for immunization against tumors. According to a case study, modifications of GP100, such as GP100-209 and GP100-208, have shown a greater number of antigen-specific CTLs (cytotoxic T lymphocytes), which can target and kill cancer cells (Eisenberg). References Further reading External links Oncology
PMEL (gene)
[ "Chemistry" ]
687
[ "Biochemistry stubs", "Protein stubs" ]
14,756,569
https://en.wikipedia.org/wiki/P%20protein
P protein, also known as melanocyte-specific transporter protein or pink-eyed dilution protein homolog, is a protein that in humans is encoded by the oculocutaneous albinism II (OCA2) gene. The P protein is believed to be an integral membrane protein involved in small molecule transport, specifically of tyrosine—a precursor of melanin. Certain mutations in OCA2 result in type 2 oculocutaneous albinism. OCA2 encodes the human homologue of the mouse p (pink-eyed dilution) gene. The human OCA2 gene is located on the long arm (q) of chromosome 15, specifically from base pair 28,000,020 to base pair 28,344,457. Function OCA2 provides instructions for making the P protein, which is located in melanocytes, specialized cells that produce melanin, and in the cells of the retinal pigment epithelium. Melanin is responsible for giving color to the skin, hair, and eyes. Moreover, melanin is found in the light-sensitive tissue of the retina of the eye, where it plays a role in normal vision. The exact function of the P protein is unknown, but it has been found to be essential for the normal coloring of skin, eyes, and hair, and it is likely involved in melanin production. This gene seems to be the main determinant of eye color depending on the amount of melanin production in the iris stroma (large amounts giving rise to brown eyes; little to no melanin giving rise to blue eyes). This gene is mutated in Astyanax mexicanus, a Mexican fish whose cave-dwelling individuals are characterized by albinism. The mutation exists as a deletion in fish from the Pachón and Molino caves, which produces albinism. Clinical significance Mutations in the OCA2 gene disrupt the normal production of melanin, thereby causing vision problems and reductions in hair, skin, and eye color. Oculocutaneous albinism caused by mutations in the OCA2 gene is called oculocutaneous albinism type 2. The prevalence of OCA type 2 is estimated at 1/38,000–1/40,000 in most populations throughout the world, with a higher prevalence in the African population of 1/3,900–1/1,500. Other diseases associated with the deletion of the OCA2 gene are Angelman syndrome (light-colored hair and fair skin) and Prader–Willi syndrome (unusually light-colored hair and fair skin); the OCA2 deletion often occurs in individuals with either syndrome. A mutation in the HERC2 gene adjacent to OCA2, affecting OCA2's expression in the human iris, is found to be common to nearly all people with blue eyes. It has been hypothesized that all blue-eyed humans share a single common ancestor with whom the mutation originated. The His615Arg allele of OCA2 is involved in light skin tone; the derived allele is largely restricted to East Asia, where it occurs at high frequencies, with the highest frequencies in Eastern East Asia (49–63%), midrange frequencies in Southeast Asia, and the lowest frequencies in Western China and some Eastern European populations. References Further reading External links GeneReviews/NCBI/NIH/UW entry on Oculocutaneous Albinism Type 2 Genes on human chromosome 15 Eye color Proteins
P protein
[ "Chemistry" ]
748
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
14,756,976
https://en.wikipedia.org/wiki/Linear%20acetylenic%20carbon
Linear acetylenic carbon (LAC), also known as carbyne or a Linear Carbon Chain (LCC), is an allotrope of carbon that has the chemical structure as a repeat unit, with alternating single and triple bonds. It would thus be the ultimate member of the polyyne family. This polymeric carbyne is of considerable interest to nanotechnology as its Young's modulus is – forty times that of diamond; this extraordinary number is, however, based on a novel definition of cross-sectional area that does not correspond to the space occupied by the structure. Carbyne has also been identified in interstellar space; however, its existence in condensed phases has been contested recently, as such chains would crosslink exothermically (and perhaps explosively) if they approached each other. History and controversy The first claims of detection of this allotrope were made in 1960 and repeated in 1978. A 1982 re-examination of samples from several previous reports determined that the signals originally attributed to carbyne were in fact due to silicate impurities in the samples. The absence of crystalline carbyne has made the direct observation of a pure carbyne-assembled solid a major challenge, because carbyne crystals with well-defined structures and sufficient sizes are not available to date. This is indeed the major obstacle to general acceptance of carbyne as a true carbon allotrope. The mysterious carbyne has nevertheless continued to attract scientists with its possible extraordinary properties. During the past thirty-five years an increasing body of experimental and theoretical work has been published in the scientific literature dealing with the preparation of carbyne and the study of its structure, properties and potential applications. In 1968 a new silver-white mineral was discovered in graphitic gneisses of the Ries Crater (Nordlingen, Bavaria, Germany). This material was found to consist entirely of carbon and its hexagonal cell dimensions matched those reported earlier for carbyne by Russian scientists. It was concluded that this novel form of natural carbon, chaoite, was generated from graphite by the combined action of high temperature and high pressure, presumably caused by the impact of a meteorite. Soon afterwards this "white" carbon was synthesized by sublimation of pyrolytic graphite in vacuum. In 1984, a group at Exxon reported the detection of clusters with even numbers of carbons, between 30 and 180, in carbon evaporation experiments, and attributed them to polyyne carbon. However, these clusters were later identified as fullerenes. In 1991, carbyne was allegedly detected among various other allotropes of carbon in samples of amorphous carbon black vaporized and quenched by shock waves produced by shaped explosive charges. In 1995, the preparation of carbyne chains with over 300 carbons was reported. They were claimed to be reasonably stable, even against moisture and oxygen, as long as the terminal alkynes on the chain are capped with inert groups (such as tert-butyl or trifluoromethyl) rather than hydrogen atoms. The study claimed that the data specifically indicated carbyne-like structures rather than fullerene-like ones. However, according to H. Kroto, the properties and synthetic methods used in those studies are consistent with generation of fullerenes. 
Another 1995 report claimed detection of carbyne chains of indeterminate length in a layer of carbonized material, about thick, resulting from the reaction of solid polytetrafluoroethylene (PTFE, Teflon) immersed in alkali metal amalgam at ambient temperature (with no hydrogen-bearing species present). The assumed reaction was , where M is either lithium, sodium, or potassium. The authors conjectured that nanocrystals of the metal fluoride between the chains prevented their polymerization. In 1999, it was reported that copper(I) acetylide (), after partial oxidation by exposure to air or copper(II) ions followed by decomposition with hydrochloric acid, leaves a "carbonaceous" residue with the spectral signature of chains with n=2–6. The proposed mechanism involves oxidative polymerization of the acetylide anions into carbyne-type anions or cumulene-type anions . Also, thermal decomposition of copper acetylide in vacuum yielded a fluffy deposit of fine carbon powder on the walls of the flask, which, on the basis of spectral data, was claimed to be carbyne rather than graphite. Finally, the oxidation of copper acetylide in ammoniacal solution (Glaser's reaction) produces a carbonaceous residue that was claimed to consist of "polyacetylide" anions capped with residual copper(I) ions, . On the basis of the residual amount of copper, the mean number of units n was estimated to be around 230. In 2004, an analysis of a synthesized linear carbon allotrope found it to have a cumulene electronic structure—sequential double bonds along an sp-hybridized carbon chain—rather than the alternating triple–single pattern of linear carbyne. In 2016, the synthesis of linear chains of up to 6,000 sp-hybridized carbon atoms was reported. The chains were grown inside double-walled carbon nanotubes, and are highly stable protected by their hosts. Polyynes While the existence of "carbyne" chains in pure neutral carbon material is still disputed, short chains are well established as substructures of larger molecules (polyynes). As of 2010, the longest such chain in a stable molecule had 22 acetylenic units (44 atoms), stabilized by rather bulky end groups. Structure The carbon atoms in this form are each linear in geometry with sp orbital hybridisation. The estimated length of the bonds is (triple) and (single). Other possible configurations for a chain of carbon atoms include polycumulene (polyethylene-diylidene) chains with double bonds only (). This chain is expected to have slightly higher energy, with a Peierls gap of . For short molecules, however, the polycumulene structure seems favored. When n is even, two ground configurations, very close in energy, may coexist: one linear, and one cyclic (rhombic). The limits of flexibility of the carbyne chain are illustrated by a synthetic polyyne with a backbone of 8 acetylenic units, whose chain was found to be bent by or more (about at each carbon) in the solid state, to accommodate the bulky end groups of adjacent molecules. The highly symmetric carbyne chain is expected to have only one Raman-active mode with Σg symmetry, due to stretching of bonds in each single-double pair, with frequency typically between 1800 and , and affected by their environments. Properties Carbyne chains have been claimed to be the strongest material known per density. Calculations indicate that carbyne's specific tensile strength (strength divided by density) of beats graphene (), carbon nanotubes (), and diamond (). 
Its specific modulus (Young's modulus divided by density) of around is also double that of graphene, which is around . Stretching carbyne 10% alters its electronic band gap from . Outfitted with molecular handles at the chain's ends, it can also be twisted to alter its band gap. With a end-to-end twist, carbyne turns into a magnetic semiconductor. In 2017, the band gaps of confined linear carbon chains (LCC) inside double-walled carbon nanotubes with lengths ranging from 36 up to 6000 carbon atoms were determined for the first time, ranging from , following a linear relation with Raman frequency. This lower bound is the smallest band gap of linear carbon chains observed so far. In 2020, the strength (Young's modulus) of linear carbon chains (LCC) was experimentally calculated to be about , which is much higher than that of other carbon materials like graphene and carbon nanotubes. The comparison with experimental data obtained for short chains in gas phase or in solution demonstrates the effect of the DWCNT encapsulation, leading to an essential downshift of the band gap. The LCCs inside double-walled carbon nanotubes lead to an increase of the photoluminescence (PL) signal of the inner tubes by up to a factor of 6 for tubes with (8,3) chirality. This behavior can be attributed to a local charge transfer from the inner tubes to the carbon chains, counterbalancing quenching mechanisms induced by the outer tubes. Carbyne chains can take on side molecules that may make the chains suitable for energy and hydrogen storage. With a differential Raman scattering cross section of 10−22 cm2 sr−1 per atom, carbyne chains confined inside carbon nanotubes are the strongest Raman scatterers ever reported, exceeding any other known material by two orders of magnitude. References Further reading Nanotechnology Allotropes of carbon Alkynes Polyynes
Linear acetylenic carbon
[ "Chemistry", "Materials_science", "Engineering" ]
1,866
[ "Alkynes", "Allotropes of carbon", "Allotropes", "Materials science", "Organic compounds", "Nanotechnology" ]
14,757,643
https://en.wikipedia.org/wiki/Stream%20restoration
Stream restoration or river restoration, also sometimes referred to as river reclamation, is work conducted to improve the environmental health of a river or stream, in support of biodiversity, recreation, flood management and/or landscape development. Stream restoration approaches can be divided into two broad categories: form-based restoration, which relies on physical interventions in a stream to improve its conditions; and process-based restoration, which advocates the restoration of hydrological and geomorphological processes (such as sediment transport or connectivity between the channel and the floodplain) to ensure a stream's resilience and ecological health. Form-based restoration techniques include deflectors; cross-vanes; weirs, step-pools and other grade-control structures; engineered log jams; bank stabilization methods and other channel-reconfiguration efforts. These induce immediate change in a stream, but sometimes fail to achieve the desired effects if degradation originates at a wider scale. Process-based restoration includes restoring lateral or longitudinal connectivity of water and sediment fluxes and limiting interventions within a corridor defined based on the stream's hydrology and geomorphology. The beneficial effects of process-based restoration projects may sometimes take time to be felt since changes in the stream will occur at a pace that depends on the stream dynamics. Despite the significant number of stream-restoration projects worldwide, the effectiveness of stream restoration remains poorly quantified, partly due to insufficient monitoring. However, in response to growing environmental awareness, stream-restoration requirements are increasingly adopted in legislation in different parts of the world. Definition, objectives and popularity Stream restoration or river restoration, sometimes called river reclamation in the United Kingdom, is a set of activities that aim to improve the environmental health of a river or stream. These activities aim to restore rivers and streams to their original states or to a reference state, in support of biodiversity, recreation, flood management, landscape development, or a combination of these phenomena. Stream restoration is generally associated with environmental restoration and ecological restoration. In that sense, stream restoration differs from: river engineering, a term which typically refers to physical alterations of a water body, for purposes that include navigation, flood control or water supply diversion and are not necessarily related to ecological restoration; waterway restoration, a term used in the United Kingdom describing alterations to a canal or river to improve navigability and related recreational amenities. Improved stream health may be indicated by expanded habitat for diverse species (e.g. fish, aquatic insects, other wildlife) and reduced stream bank erosion, although bank erosion is increasingly generally recognized as contributing to the ecological health of streams. Enhancements may also include improved water quality (i.e., reduction of pollutant levels and increase of dissolved oxygen levels) and achieving a self-sustaining, resilient stream system that does not require periodic human intervention, such as dredging or construction of flood or erosion control structures. Stream restoration projects can also yield increased property values in adjacent areas. 
In the past decades, stream restoration has emerged as a significant discipline in the field of water-resources management, due to the degradation of many aquatic and riparian ecosystems related to human activities. In the U.S. alone, it was estimated in the early 2000s that more than one billion U.S. dollars were spent each year to restore rivers and that close to 40,000 restoration projects had been conducted in the continental part of the country. Restoration approaches and techniques Stream restoration activities may range from the simple improvement or removal of a structure that inhibits natural stream functions (e.g. repairing or replacing a culvert, or removing barriers to fish passage such as weirs), to the stabilization of stream banks, or other interventions such as riparian zone restoration or the installation of stormwater-management facilities like constructed wetlands. The use of recycled water to augment stream flows that have been depleted as a result of human activities can also be considered a form of stream restoration. When present, navigation locks have the potential to be operated as vertical slot fishways to restore fish passage to some extent for a wide range of fish, including poor swimmers. Stream-restoration projects normally begin with an assessment of a focal stream system, including climatic data, geology, watershed hydrology, stream hydraulics, sediment transport patterns, channel geometry, historical channel mobility, and flood records. Numerous systems exist to classify streams according to their geomorphology. This preliminary assessment helps to understand the stream dynamics and determine the cause of the observed degradation to be addressed; it can also be used to determine the target state for the intended restoration work, especially since the "natural" or undisturbed state is sometimes no longer achievable due to various constraints. Two broad approaches to stream restoration have been defined in the past decades: form-based restoration and process-based restoration. Whereas the former focuses on the restoration of structural features and/or patterns considered to be characteristic of the target stream system, the latter is based on the restoration of hydrological and geomorphological processes (such as sediment transport or connectivity between the channel and the floodplain) to ensure a stream's resilience and ecological health. 
Installation of in-stream structures Deflectors Deflectors are generally wooden or rock structures installed at a bank toe and extending towards the center of a stream, in order to concentrate stream flow away from its banks. They can limit bank erosion and generate varying flow conditions in terms of depth and velocity, which can positively impact fish habitat. Cross-vanes and related structures Cross-vanes are U-shaped structures made of boulders or logs, built across the channel to concentrate stream flow in the center of the channel and thereby reduce bank erosion. They do not impact channel capacity and provide other benefits such as improved habitat for aquatic species. Similar structures used to dissipate stream energy include the W-weirs and J-Hook vanes. Weirs, step pools and grade-control structures These structures, which can be built with rocks or wood (logs or woody debris), gradually lower the elevation of the stream and dissipate flow energy, thereby reducing flow velocity. They can help limit bed degradation. They generate water accumulation upstream from them and fast-flowing conditions downstream from them, which can improve fish habitat. However, they can limit fish passage if they are too high. Engineered log jams An emerging stream restoration technique is the installation of engineered log jams. Because of channelization and removal of beaver dams and woody debris, many streams lack the hydraulic complexity that is necessary to maintain bank stabilization and healthy aquatic habitats. Reintroduction of large woody debris into streams is a method that is being experimented with in streams such as Lagunitas Creek in Marin County, California and Thornton Creek, in Seattle, Washington. Log jams add diversity to the water flow by creating riffles, pools, and temperature variations. Large wood pieces, both living and dead, play an important role in the long-term stability of engineered log jams. However, individual pieces of wood in log jams are rarely stable over long periods and are naturally transported downstream, where they can get trapped in further log jams, other stream features or human infrastructures, which can generate nuisances for human use. Bank stabilization Bank stabilization is a common objective for stream-restoration projects, although bank erosion is generally viewed as favorable for the sustainability and diversity of aquatic and riparian habitats. This technique may be employed where a stream reach is highly confined, or where infrastructure is threatened. Bank stabilization is achieved through the installation of riprap or gabions, or through the use of revegetation and/or bioengineering methods, which rely on the use of live plants to build bank-stabilizing structures. As new plants sprout from the live branches, the roots anchor the soil and prevent erosion. This makes bioengineering structures more natural and more adaptable to evolving conditions than "hard" engineering structures. Bioengineering structures include fascines, brush mattresses, brush layers, and vegetated geogrids. Other channel-reconfiguration techniques Channel reconfiguration involves the physical modification of the stream. Depending on the scale of a project, a channel's cross-section can be modified, and meanders can be constructed through earthworks to achieve the target stream morphology. In the U.S., such work is frequently based on the Natural Channel Design (NCD), a method developed in the 1990s. 
This method involves a classification of the stream to be restored based on parameters such as channel pattern and geometry, topography, slope, and bed material. This classification is followed by a design phase based on the NCD method, which includes 8 phases and 40 steps. The method relies on the construction of the desired morphology, and its stabilization with natural materials such as boulders and vegetation to limit erosion and channel mobility. Criticisms of form-based restoration Despite its popularity, form-based restoration has been criticized by the scientific community. Common criticisms are that the scale at which form-based restoration is carried out is often much smaller than the spatial and temporal scales of the processes that cause the observed problems, and that the target state is frequently influenced by the social conception of what a stream should look like and does not necessarily take into account the stream's geomorphological context (e.g., meandering rivers tend to be viewed as more "natural" and more beautiful, whereas local conditions sometimes favour other patterns such as braided rivers). Numerous criticisms have also been directed at the NCD method by fluvial geomorphologists, who claim that the method is a "cookbook" approach sometimes used by practitioners who do not have sufficient knowledge of fluvial geomorphology, resulting in project failures. Another criticism is the importance given to channel stability in the NCD method (and in some other form-based restoration methods), which can limit the streams' alluvial dynamics and adaptability to evolving conditions. The NCD method has been criticized for its improper application in the Washington, D.C. area to small-order, interior-forested, upper-headwater streams and wetlands, leading to loss of natural forest ecosystems. Process-based restoration Contrary to form-based restoration, which consists of improving a stream's conditions by modifying its structure, process-based restoration focuses on restoring the hydrological and geomorphological processes (or functions) that contribute to the stream's alluvial and ecological dynamics. This type of stream restoration has gained in popularity since the mid-1990s, as a more ecosystem-centered approach. Process-based restoration includes restoring lateral connectivity (between the stream and its floodplain), longitudinal connectivity (along the stream) and water and/or sediment fluxes, which might be impacted by hydro-power dams, grade control structures, erosion control structures and flood protection structures. Valley Floor Resetting epitomises process-based restoration by infilling the river channel and allowing the stream to carve its anastomosed channel anew, matching 'Stage Zero' on the Stream Evolution Model. In general, process-based restoration aims to maximize the resilience of the system and minimize maintenance requirements. In some instances, form-based restoration methods might be coupled with process-based restoration to restore key structures and achieve quicker results while waiting for restored processes to ensure adequate conditions in the long term. Improving connectivity The connectivity of streams to their adjacent floodplain along their entire length plays an important role in the equilibrium of the river system. 
Streams are shaped by the water and sediment fluxes from their watershed, and any alteration of these fluxes (either in quantity, intensity or timing) will result in changes in equilibrium planform and cross-sectional geometry, as well as modifications of the aquatic and riparian ecosystem. Removal or modification of levees can allow a better connection between streams and their floodplain. Similarly, removing dams and grade control structures can restore water and sediment fluxes and result in more diversified habitats, although impacts on fish communities can be difficult to assess. In streams where existing infrastructures cannot be removed or modified, it is also possible to optimize sediment and water management in order to maximize connectivity and achieve flow patterns that ensure minimum ecosystem requirements. This can include releases from dams, but also delaying and/or treating water from agricultural and urban sources. Implementing a minimum stream corridor width Another method of ensuring the ecological health of streams while limiting impacts on human infrastructures is to delineate a corridor within which the stream is expected to migrate over time. This method is based on the concept of minimum intervention within this corridor, whose limits should be determined based on the stream's hydrology and geomorphology. Although this concept is often restricted to the lateral mobility of streams (related to bank erosion), some systems also integrate the space necessary for floods of various return periods. This concept has been developed and adapted in various countries around the world, resulting in the notion of "stream corridor" or "river corridor" in the U.S., "room for the river" in the Netherlands, "freedom space" in France (where the concept of "erodible corridor" is also used) and Québec (Canada), "space reserved for water(courses)" in Switzerland, an equivalent concept in Italy, "fluvial territory" in Spain and "making space for water" in the United Kingdom. A cost-benefit analysis has shown that this approach could be beneficial in the long term due to lower stream stabilization and maintenance costs, lower damages resulting from erosion and flooding, and ecological services rendered by the restored streams. However, this approach cannot be implemented alone if watershed-scale stressors contribute to stream degradation. Additional practices In addition to the aforementioned restoration approaches and methods, additional measures can be implemented if stream degradation factors occur at the watershed scale. First, high-quality areas should also be protected. Additional measures include revegetation/reforestation efforts (ideally with native species); the adoption of agricultural best management practices that minimize erosion and runoff; adequate treatment of sewage water and industrial discharge across the watershed; and improved stormwater management to delay/minimize the transport of water to the stream and minimize pollutant migration. Alternative stormwater management facilities include the following options: Bioretention systems and rain gardens Constructed wetlands Infiltration basins Retention basins Effectiveness of stream restoration projects In the 2000s, a study of stream restoration efforts in the U.S. led to the creation of the National River Restoration Science Synthesis (NRRSS) database, which included information on over 35,000 stream restoration projects carried out in the U.S. Synthesizing efforts are also carried out in other parts of the world, such as Europe. 
However, despite the large number of stream restoration projects carried out each year worldwide, the effectiveness of stream restoration projects remains poorly quantified. This situation appears to result from limited data on the restored streams' biophysical and geochemical contexts, from insufficient post-monitoring work and from the varying metrics used to evaluate project effectiveness. Depending on the objectives of the restoration project, the goals (restoration of fish populations, of alluvial dynamics, etc.) may take considerable time to be fully achieved. Therefore, whereas monitoring efforts should be proportional to the scale of the situation to be addressed, long-term monitoring is often necessary in order to fully evaluate a project's effectiveness. In general, project effectiveness has been found to be dependent on selection of an appropriate restoration method considering the nature, cause and scale of the degradation problem. As such, reach-scale projects generally fail at restoring conditions whose root cause lies at the watershed scale, such as water quality issues. Furthermore, project failures have sometimes been attributed to designs based on insufficient scientific grounding; in some cases, restoration techniques may have been selected mainly for aesthetic reasons. Additional factors that can influence the effectiveness of river restoration projects include the selection of sites to be restored (for example, sites located near undisturbed reaches could be recolonized more effectively) and the amount of tree cutting and other destructive work necessary to carry out the restoration work (which can have long-lasting detrimental effects on the quality of the habitat). Although often viewed as a challenge, public involvement is generally considered to be a positive factor for the long-term success of stream restoration projects. Introduction in legislation Stream restoration is gradually being introduced in the legislative framework of various states. Examples include the European water framework's commitment to restoring surface water bodies, the adoption of the concept of freedom space in the French legislation, the inclusion in the Swiss legislation of the notion of space reserved for watercourses and of the requirement to restore streams to a state close to their natural state, and the inclusion of river corridors in land use planning in the American states of Vermont and Washington. Although this evolution is generally viewed positively by the scientific community, a concern expressed by some is that it could lead to less flexibility and less room for innovation in a field that is still in development. Informational resources The River Restoration Centre, based at Cranfield University, is responsible for the National River Restoration Inventory, which is used to document best practice in river watercourse and floodplain restoration, enhancement and management efforts in the United Kingdom. Other established sources for information on stream restoration include the NRRSS in the U.S. and the European Centre for River Restoration (ECRR), which holds details of projects across Europe. ECRR and the LIFE+ RESTORE project have developed a wiki-based inventory of river restoration case studies. See also Daylighting (streams) Environmental restoration Land rehabilitation Retrofit (environmental management) Restoration ecology Riparian zone restoration Subterranean river References Notes Federal Interagency Stream Restoration Working Group (United States) (2001). 
Stream Corridor Restoration: Principles, Processes, and Practices. GPO Item No. 0120-A; SuDocs No. A 57.6/2:EN 3/PT.653. Water streams Ecological restoration Environmental engineering Environmental terminology Freshwater ecology Hydraulic engineering Hydrology Habitat Riparian zone Rivers Water and the environment
Stream restoration
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
3,882
[ "Hydrology", "Ecological restoration", "Chemical engineering", "Physical systems", "Hydraulics", "Civil engineering", "Environmental engineering", "Riparian zone", "Hydraulic engineering" ]
8,920,050
https://en.wikipedia.org/wiki/Melde%27s%20experiment
Melde's experiment is a scientific experiment carried out in 1859 by the German physicist Franz Melde on the standing waves produced in a taut cable originally set oscillating by a tuning fork, later improved with connection to an electric vibrator. This experiment, "a lecture-room standby", attempted to demonstrate that mechanical waves undergo interference phenomena. In the experiment, mechanical waves traveling in opposite directions form immobile points, called nodes. These waves were called standing waves by Melde since the position of the nodes and loops (points where the cord vibrated) stayed static. Standing waves were first discovered by Franz Melde, who coined the term "standing wave" around 1860. Melde generated parametric oscillations in a string by employing a tuning fork to periodically vary the tension at twice the resonance frequency of the string. History Wave phenomena in nature have been investigated for centuries, and some of them rank among the most controversial themes in the history of science, as is the case with the wave nature of light. In the 17th century, Sir Isaac Newton described light through a corpuscular theory. The English physicist Thomas Young later challenged Newton's theories in the 18th century and established the scientific basis on which the wave theories rest. In the 19th century, during the Second Industrial Revolution, the emergence of electrical technology offered a new contribution to the wave theories. This advance allowed Franz Melde to recognize the phenomena of wave interference and the creation of standing waves. Later, the Scottish physicist James Clerk Maxwell, in his study of the wave nature of light, succeeded in expressing waves and the electromagnetic spectrum in a mathematical formula. Principle A string undergoing transverse vibration illustrates many features common to all vibrating acoustic systems, whether these are the vibrations of a guitar string or the standing wave nodes in a studio monitoring room. In this experiment, the change in frequency produced when the tension is increased in the string – similar to the change in pitch when a guitar string is tuned – will be measured. From this, the mass per unit length of the string or wire can be derived. This is the principle behind Melde's experiment. Finding the mass per unit length of a piece of string is also possible by using a simpler method – a ruler and some scales – and this will be used to check the results and offer a comparison. See also Sonar Wind instruments Sonometer References Wave mechanics
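The measurement described in the Principle section can be illustrated with the standard relation for standing waves on a stretched string, f = (n / 2L) · sqrt(T / μ), where n is the number of loops, L the vibrating length, T the tension and μ the mass per unit length. The short Python sketch below simply solves this relation for μ; it is an illustration only, not part of Melde's original procedure, and the numerical values (length, driving frequency, hanging mass) are hypothetical.

# Illustrative sketch: estimating the mass per unit length of a string from a
# Melde-type measurement, using f = (n / 2L) * sqrt(T / mu), where n is the
# number of loops (antinodes), L the vibrating length, T the tension and
# mu the linear mass density.

def linear_density(frequency_hz, length_m, tension_n, loops):
    """Solve f = (n / 2L) * sqrt(T / mu) for mu."""
    return tension_n * (loops / (2.0 * length_m * frequency_hz)) ** 2

# Hypothetical example: a 1.2 m string driven at 50 Hz carrying 4 loops,
# tensioned by a 200 g hanging mass.
tension = 0.200 * 9.81            # weight of the hanging mass, in newtons
mu = linear_density(50.0, 1.2, tension, 4)
print(f"estimated mass per unit length: {mu * 1000:.2f} g/m")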
Melde's experiment
[ "Physics" ]
489
[ "Waves", "Wave mechanics", "Physical phenomena", "Classical mechanics" ]
8,920,717
https://en.wikipedia.org/wiki/Roasting%20%28metallurgy%29
Roasting is a process of heating a sulfide ore to a high temperature in the presence of air. It is a step in the processing of certain ores. More specifically, roasting is often a metallurgical process involving gas–solid reactions at elevated temperatures with the goal of purifying the metal component(s). Often before roasting, the ore has already been partially purified, e.g. by froth flotation. The concentrate is mixed with other materials to facilitate the process. The technology is useful in making certain ores usable but it can also be a serious source of air pollution. Roasting consists of thermal gas–solid reactions, which can include oxidation, reduction, chlorination, sulfation, and pyrohydrolysis. In roasting, the ore or ore concentrate is treated with very hot air. This process is generally applied to sulfide minerals. During roasting, the sulfide is converted to an oxide, and sulfur is released as sulfur dioxide, a gas. For the ores Cu2S (chalcocite) and ZnS (sphalerite), balanced equations for the roasting are: 2 Cu2S + 3 O2 → 2 Cu2O + 2 SO2 2 ZnS + 3 O2 → 2 ZnO + 2 SO2 The gaseous product of sulfide roasting, sulfur dioxide (SO2) is often used to produce sulfuric acid. Many sulfide minerals contain other components such as arsenic that are released into the environment. Up until the early 20th century, roasting was started by burning wood on top of ore. This would raise the temperature of the ore to the point where its sulfur content would become its source of fuel, and the roasting process could continue without external fuel sources. Early sulfide roasting was practiced in this manner in "open hearth" roasters, which were manually stirred (a practice called "rabbling") using rake-like tools to expose unroasted ore to oxygen as the reaction proceeded. This process released large amounts of acidic, metallic, and other toxic compounds. Results of this include areas that even after 60–80 years are still largely lifeless, often exactly corresponding to the area of the roast bed, some of which are hundreds of metres wide by kilometres long. Roasting is an exothermic process. Roasting operations The following describe different forms of roasting: Oxidizing roasting Oxidizing roasting, the most commonly practiced roasting process, involves heating the ore in excess of air or oxygen, to burn out or replace the impurity element, generally sulfur, partly or completely by oxygen. For sulfide roasting, the general reaction can be given by: 2MS (s) + 3O2 (g) -> 2MO (s) + 2SO2 (g) Roasting the sulfide ore, until almost complete removal of the sulfur from the ore, results in a dead roast. Volatilizing roasting Volatilizing roasting, involves oxidation at elevated temperatures of the ores, to eliminate impurity elements in the form of their volatile oxides. Examples of such volatile oxides include As2O3, Sb2O3, ZnO and sulfur oxides. Careful control of the oxygen content in the roaster is necessary, as excessive oxidation can form non-volatile oxides. Chloridizing roasting Chloridizing roasting transforms certain metal compounds to chlorides through oxidation or reduction. Some metals such as uranium, titanium, beryllium and some rare earths are processed in their chloride form. Certain forms of chloridizing roasting may be represented by the overall reactions: 2NaCl + MS + 2O2 -> Na2SO4 + MCl, 4NaCl + 2MO + S2 + 3O2 -> 2Na2SO4 + 2MCl2 The first reaction represents the chlorination of a sulfide ore involving an exothermic reaction. 
The second reaction involving an oxide ore is facilitated by addition of elemental sulfur. Carbonate ores react in a similar manner as the oxide ore, after decomposing to their oxide form at high temperatures. Sulfating roasting Sulfating roasting oxidizes certain sulfide ores to sulfates in a supply of air to enable leaching of the sulfate for further processing. Magnetic roasting Magnetic roasting involves controlled roasting of the ore to convert it into a magnetic form, thus enabling easy separation and processing in subsequent steps. For example, controlled reduction of haematite (non magnetic Fe2O3) to magnetite (magnetic Fe3O4). Reduction roasting Reduction roasting partially reduces an oxide ore before the actual smelting process. Sinter roasting Sinter roasting involves heating the fine ores at high temperatures, where simultaneous oxidation and agglomeration of the ores take place. For example, lead sulfide ores are subjected to sinter roasting in a continuous process after froth flotation to convert the fine ores to workable agglomerates for further smelting operations. References Metallurgy Metallurgical processes
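As a rough illustration of the oxidizing-roast stoichiometry given above (2 ZnS + 3 O2 → 2 ZnO + 2 SO2), the following Python sketch estimates the oxygen demand and the sulfur dioxide release per tonne of concentrate. It assumes, purely for simplicity, a feed of pure sphalerite with standard molar masses; real concentrates contain gangue and other sulfides.

# Stoichiometry sketch for 2 ZnS + 3 O2 -> 2 ZnO + 2 SO2 (illustrative only;
# assumes a pure ZnS feed, i.e. no gangue or other sulfides).

M_ZNS, M_O2, M_SO2, M_ZNO = 97.45, 32.00, 64.07, 81.38  # molar masses, g/mol

def roast_zns(feed_kg):
    mol_zns = feed_kg * 1000.0 / M_ZNS
    o2_kg  = mol_zns * 1.5 * M_O2  / 1000.0   # 3 mol O2 per 2 mol ZnS
    so2_kg = mol_zns * 1.0 * M_SO2 / 1000.0   # 1 mol SO2 per mol ZnS
    zno_kg = mol_zns * 1.0 * M_ZNO / 1000.0
    return o2_kg, so2_kg, zno_kg

o2, so2, zno = roast_zns(1000.0)  # one tonne of concentrate
print(f"O2 required: {o2:.0f} kg, SO2 released: {so2:.0f} kg, ZnO produced: {zno:.0f} kg")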
Roasting (metallurgy)
[ "Chemistry", "Materials_science", "Engineering" ]
1,064
[ "Metallurgical processes", "Metallurgy", "nan", "Materials science" ]
8,921,202
https://en.wikipedia.org/wiki/DSSP%20%28algorithm%29
The DSSP algorithm is the standard method for assigning secondary structure to the amino acids of a protein, given the atomic-resolution coordinates of the protein. The abbreviation is only mentioned once in the 1983 paper describing this algorithm, where it is the name of the Pascal program that implements the algorithm: Define Secondary Structure of Proteins. Algorithm DSSP begins by identifying the intra-backbone hydrogen bonds of the protein using a purely electrostatic definition, assuming partial charges of −0.42 e and +0.20 e to the carbonyl oxygen and amide hydrogen respectively, their opposites assigned to the carbonyl carbon and amide nitrogen. A hydrogen bond is identified if E in the following equation is less than −0.5 kcal/mol: E = 0.084 {1/r(ON) + 1/r(CH) − 1/r(OH) − 1/r(CN)} · 332 kcal/mol, where the factor 0.084 is the product of the partial charge magnitudes (0.42 × 0.20), 332 is the electrostatic conversion factor that yields E in kcal/mol when distances are in ångströms, and the terms r(AB) indicate the distance between atoms A and B, taken from the carbon (C) and oxygen (O) atoms of the C=O group and the nitrogen (N) and hydrogen (H) atoms of the N-H group. Based on this, nine types of secondary structure are assigned. The 310 helix, α helix and π helix have symbols G, H and I and are recognized by having a repetitive sequence of hydrogen bonds in which the residues are three, four, or five residues apart respectively. Two types of beta sheet structures exist; a beta bridge has symbol B while longer sets of hydrogen bonds and beta bulges have symbol E. T is used for turns, featuring hydrogen bonds typical of helices, S is used for bends, i.e. regions of high curvature (where the angle between the Cα(i−2)→Cα(i) and Cα(i)→Cα(i+2) directions is at least 70°). As of DSSP version 4, PPII helices are also detected based on a combination of backbone torsion angles and the absence of hydrogen bonds compatible with other types. PPII helices have symbol P. A blank (or space) is used if no other rule applies, referring to loops. These types are usually grouped into three larger classes: helix (G, H and I), strand (E and B) and loop (S, T, and C, where C sometimes is represented also as blank space). π helices In the original DSSP algorithm, residues were preferentially assigned to α helices, rather than π helices. In 2011, it was shown that DSSP failed to annotate many "cryptic" π helices, which are commonly flanked by α helices. In 2012, DSSP was rewritten so that the assignment of π helices was given preference over α helices, resulting in better detection of π helices. Versions of DSSP from 2.1.0 onwards therefore produce slightly different output from older versions. Variants In 2002, a continuous DSSP assignment was developed by introducing multiple hydrogen bond thresholds, where the new assignment was found to correlate with protein motion. See also STRIDE (algorithm) an alternative algorithm Chris Sander (scientist) References External links DSSP Analysis tool Continuous DSSP tool Protein structure
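The electrostatic hydrogen-bond criterion above reduces to a few lines of code. The sketch below evaluates the DSSP energy expression for a single candidate C=O···H-N pair; it is not a reimplementation of the full algorithm, and the coordinates in the example are invented for illustration.

# Evaluate the DSSP hydrogen-bond energy for one candidate C=O ... H-N pair.
# E = 0.084 * (1/r_ON + 1/r_CH - 1/r_OH - 1/r_CN) * 332  (kcal/mol, r in angstroms)
# A bond is assigned when E < -0.5 kcal/mol.  The coordinates below are invented.

import math

def hbond_energy(c, o, n, h):
    q1q2, f = 0.084, 332.0
    return q1q2 * (1/math.dist(o, n) + 1/math.dist(c, h)
                   - 1/math.dist(o, h) - 1/math.dist(c, n)) * f

# Hypothetical, roughly ideal backbone geometry (angstroms)
C = (0.00, 0.0, 0.0)
O = (1.23, 0.0, 0.0)   # carbonyl O about 1.23 A from C
N = (4.10, 0.0, 0.0)   # donor N about 2.9 A from the acceptor O
H = (3.10, 0.0, 0.0)   # amide H pointing back toward O

E = hbond_energy(C, O, N, H)
print(f"E = {E:.2f} kcal/mol ->", "hydrogen bond" if E < -0.5 else "no hydrogen bond")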
DSSP (algorithm)
[ "Chemistry" ]
598
[ "Protein structure", "Structural biology" ]
8,921,317
https://en.wikipedia.org/wiki/Lifson%E2%80%93Roig%20model
In polymer science, the Lifson–Roig model is a helix-coil transition model applied to the alpha helix-random coil transition of polypeptides; it is a refinement of the Zimm–Bragg model that recognizes that a polypeptide alpha helix is stabilized by a hydrogen bond only once three consecutive residues have adopted the helical conformation. To consider three consecutive residues each with two states (helix and coil), the Lifson–Roig model uses a 4x4 transfer matrix instead of the 2x2 transfer matrix of the Zimm–Bragg model, which considers only two consecutive residues. However, the simple nature of the coil state allows this to be reduced to a 3x3 matrix for most applications. The Zimm–Bragg and Lifson–Roig models are but the first two in a series of analogous transfer-matrix methods in polymer science that have also been applied to nucleic acids and branched polymers. The transfer-matrix approach is especially elegant for homopolymers, since the statistical mechanics may be solved exactly using a simple eigenanalysis. Parameterization The Lifson–Roig model is characterized by three parameters: the statistical weight for nucleating a helix, the weight for propagating a helix and the weight for forming a hydrogen bond, which is granted only if three consecutive residues are in a helical state. Weights are assigned at each position in a polymer as a function of the conformation of the residue in that position and as a function of its two neighbors. A statistical weight of 1 is assigned to the "reference state" of a coil unit whose neighbors are both coils, and a "nucleation" unit is defined (somewhat arbitrarily) as two consecutive helical units neighbored by a coil. A major modification of the original Lifson–Roig model introduces "capping" parameters for the helical termini, in which the N- and C-terminal capping weights may vary independently. The correlation matrix for this modification can be represented as a matrix M, reflecting the statistical weights of the helix state h and coil state c. The Lifson–Roig model may be solved by the transfer-matrix method using the transfer matrix M introduced above, where w is the statistical weight for helix propagation, v for initiation, n for N-terminal capping, and c for C-terminal capping. (In the traditional model n and c are equal to 1.) The partition function for the helix-coil transition equilibrium is where V is the end vector, arranged to ensure the coil state of the first and last residues in the polymer. This strategy for parameterizing helix-coil transitions was originally developed for alpha helices, whose hydrogen bonds occur between residues i and i+4; however, it is straightforward to extend the model to 310 helices and pi helices, with i+3 and i+5 hydrogen bonding patterns respectively. The complete alpha/310/pi transfer matrix includes weights for transitions between helix types as well as between helix and coil states. However, because 310 helices are much more common in the tertiary structures of proteins than pi helices, extension of the Lifson–Roig model to accommodate 310 helices - resulting in a 9x9 transfer matrix when capping is included - has found a greater range of application. Analogous extensions of the Zimm–Bragg model have been put forth but have not accommodated mixed helical conformations. References Polymer physics Protein structure Statistical mechanics
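The weighting scheme described above can be made concrete with a small brute-force calculation. The Python sketch below enumerates all helix/coil conformations of a short homopolymer, assigns each residue a weight of w (helical with two helical neighbours), v (helical otherwise) or 1 (coil), and accumulates the partition function and mean helicity. It deliberately ignores the capping parameters (n = c = 1), treats residues beyond the chain ends as coil, and uses invented values of v and w; for long chains the transfer-matrix evaluation described in the article is the appropriate method.

# Brute-force sketch of Lifson-Roig statistics for a short homopolymer.
# Weighting rule (capping parameters omitted, n = c = 1):
#   coil residue ............................. weight 1
#   helical residue, both neighbours helical . weight w
#   helical residue otherwise ................ weight v
# Residues beyond the chain ends are treated as coil.  Enumerating the 2**N
# states is only meant to make the bookkeeping explicit.

from itertools import product

def residue_weight(conf, i, v, w):
    if conf[i] == 'c':
        return 1.0
    left = conf[i - 1] if i > 0 else 'c'
    right = conf[i + 1] if i < len(conf) - 1 else 'c'
    return w if (left == 'h' and right == 'h') else v

def partition_and_helicity(n_res, v, w):
    Z, mean_h = 0.0, 0.0
    for conf in product('hc', repeat=n_res):
        weight = 1.0
        for i in range(n_res):
            weight *= residue_weight(conf, i, v, w)
        Z += weight
        mean_h += weight * conf.count('h') / n_res
    return Z, mean_h / Z

# Hypothetical parameter values, chosen only for illustration.
Z, frac_helix = partition_and_helicity(n_res=10, v=0.05, w=1.6)
print(f"Z = {Z:.3f}, mean helical fraction = {frac_helix:.3f}")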
Lifson–Roig model
[ "Physics", "Chemistry", "Materials_science" ]
729
[ "Polymer physics", "Structural biology", "Polymer chemistry", "Statistical mechanics", "Protein structure" ]
8,921,481
https://en.wikipedia.org/wiki/Multiplicity%20%28statistical%20mechanics%29
In statistical mechanics, multiplicity (also called statistical weight) refers to the number of microstates corresponding to a particular macrostate of a thermodynamic system. Commonly denoted Ω, it is related to the configuration entropy of an isolated system via Boltzmann's entropy formula S = k ln Ω, where S is the entropy and k is the Boltzmann constant. Example: the two-state paramagnet A simplified model of the two-state paramagnet provides an example of the process of calculating the multiplicity of a particular macrostate. This model consists of a system of N microscopic dipoles which may either be aligned or anti-aligned with an externally applied magnetic field B. Let N↑ represent the number of dipoles that are aligned with the external field and N↓ represent the number of anti-aligned dipoles, so that N = N↑ + N↓. The energy of a single aligned dipole is −μB, where μ is the magnetic moment of a dipole, while the energy of an anti-aligned dipole is +μB; thus the overall energy of the system is U = μB(N↓ − N↑). The goal is to determine the multiplicity as a function of U; from there, the entropy and other thermodynamic properties of the system can be determined. However, it is useful as an intermediate step to calculate multiplicity as a function of N and N↑. This approach shows that the number of available macrostates is N + 1. For example, in a very small system with N = 2 dipoles, there are three macrostates, corresponding to N↑ = 0, 1, 2. Since the N↑ = 0 and N↑ = 2 macrostates require both dipoles to be either anti-aligned or aligned, respectively, the multiplicity of either of these states is 1. However, in the N↑ = 1 macrostate either dipole can be chosen for the aligned dipole, so the multiplicity is 2. In the general case, the multiplicity of a state, or the number of microstates, with N↑ aligned dipoles follows from combinatorics, resulting in Ω(N, N↑) = N! / (N↑! (N − N↑)!) = N! / (N↑! N↓!), where the second step follows from the fact that N↓ = N − N↑. The energy U can be related to N↑ and N↓ as follows: N↑ = N/2 − U/(2μB) and N↓ = N/2 + U/(2μB). Thus the final expression for multiplicity as a function of internal energy is Ω(U) = N! / [(N/2 − U/(2μB))! (N/2 + U/(2μB))!]. This can be used to calculate entropy in accordance with Boltzmann's entropy formula; from there one can calculate other useful properties such as temperature and heat capacity. References Statistical mechanics
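The combinatorial expression discussed above is easy to evaluate numerically. The following Python sketch computes Ω(N, N↑) and the corresponding Boltzmann entropy for a few macrostates of a 100-dipole system; the system size is chosen arbitrarily for illustration.

# Multiplicity and entropy of two-state paramagnet macrostates:
# Omega(N, N_up) = N! / (N_up! * (N - N_up)!)  and  S = k * ln(Omega).

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def multiplicity(n_total, n_up):
    return math.comb(n_total, n_up)

def entropy(n_total, n_up):
    return K_B * math.log(multiplicity(n_total, n_up))

N = 100
for n_up in (0, 25, 50, 75, 100):
    print(f"N_up = {n_up:3d}: Omega = {multiplicity(N, n_up):.3e}, "
          f"S = {entropy(N, n_up):.3e} J/K")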
Multiplicity (statistical mechanics)
[ "Physics", "Chemistry" ]
432
[ "Thermodynamics stubs", "Statistical mechanics", "Physical chemistry stubs", "Thermodynamics" ]
8,923,815
https://en.wikipedia.org/wiki/Ascendency
Ascendency or ascendancy is a quantitative attribute of an ecosystem, defined as a function of the ecosystem's trophic network. Ascendency is derived using mathematical tools from information theory. It is intended to capture in a single index the ability of an ecosystem to prevail against disturbance by virtue of its combined organization and size. One way of depicting ascendency is to regard it as "organized power", because the index represents the magnitude of the power that is flowing within the system towards particular ends, as distinct from power that is dissipated naturally. Almost half a century earlier, Alfred J. Lotka (1922) had suggested that a system's capacity to prevail in evolution was related to its ability to capture useful power. Ascendency can thus be regarded as a refinement of Lotka's supposition that also takes into account how power is actually being channeled within a system. In mathematical terms, ascendency is the product of the aggregate amount of material or energy being transferred in an ecosystem times the coherency with which the outputs from the members of the system relate to the set of inputs to the same components (Ulanowicz 1986). Coherence is gauged by the average mutual information shared between inputs and outputs (Rutledge et al. 1976). Originally, it was thought that ecosystems increase uniformly in ascendency as they developed, but subsequent empirical observation has suggested that all sustainable ecosystems are confined to a narrow "window of vitality" (Ulanowicz 2002). Systems with relative values of ascendency plotting below the window tend to fall apart due to lack of significant internal constraints, whereas systems above the window tend to be so "brittle" that they become vulnerable to external perturbations. Sensitivity analysis on the components of the ascendency reveals the controlling transfers within the system in the sense of Liebig (Ulanowicz and Baird 1999). That is, ascendency can be used to identify which resource is limiting the functioning of each component of the ecosystem. It is thought that autocatalytic feedback is the primary route by which systems increase and maintain their ascendencies (Ulanowicz 1997.) References Ulanowicz, R.E. 1986. Growth & Development: Ecosystems Phenomenology. Springer-Verlag, NY. 203 p. Ulanowicz, R.E. 1997. Ecology, the Ascendent Perspective. Columbia University Press, NY. 201p. Information theory Entropy and information Trophic ecology
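One common formulation of the index, following Ulanowicz, takes ascendency to be the total system throughput multiplied by the average mutual information of the flow network. The Python sketch below implements that formulation for a small flow matrix; both the three-compartment network and the choice of base-2 logarithms are assumptions made here for illustration, not values taken from the references above.

# Sketch of ascendency for a small flow network, following the common
# formulation A = sum_ij T_ij * log2( T_ij * T.. / (T_i. * T._j) ),
# where T_ij is the flow from compartment i to compartment j, T_i. and T._j
# are row and column sums, and T.. is the total system throughput.
# The 3-compartment flow matrix below is invented purely for illustration.

import math

flows = [
    [0.0, 8.0, 2.0],
    [0.0, 0.0, 6.0],
    [1.0, 0.0, 0.0],
]

def ascendency(T):
    total = sum(map(sum, T))
    row = [sum(r) for r in T]
    col = [sum(T[i][j] for i in range(len(T))) for j in range(len(T))]
    A = 0.0
    for i, r in enumerate(T):
        for j, tij in enumerate(r):
            if tij > 0:
                A += tij * math.log2(tij * total / (row[i] * col[j]))
    return A, total

A, total = ascendency(flows)
print(f"total system throughput = {total:.1f}, ascendency = {A:.3f} (flow units * bits)")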
Ascendency
[ "Physics", "Mathematics", "Technology", "Engineering" ]
517
[ "Telecommunications engineering", "Physical quantities", "Applied mathematics", "Entropy and information", "Computer science", "Entropy", "Information theory", "Dynamical systems" ]
8,924,366
https://en.wikipedia.org/wiki/Dewar%E2%80%93Chatt%E2%80%93Duncanson%20model
The Dewar–Chatt–Duncanson model is a model in organometallic chemistry that explains the chemical bonding in transition metal alkene complexes. The model is named after Michael J. S. Dewar, Joseph Chatt and L. A. Duncanson. The alkene donates electron density into a π-acid metal d-orbital from a π-symmetry bonding orbital between the carbon atoms. The metal donates electrons back from a (different) filled d-orbital into the empty π* antibonding orbital. Both of these effects tend to reduce the carbon-carbon bond order, leading to an elongated C−C distance and a lowering of its vibrational frequency. In Zeise's salt K[PtCl3(C2H4)].H2O the C−C bond length has increased to 134 picometres from 133 pm for ethylene. In the nickel compound Ni(C2H4)(PPh3)2 the value is 143 pm. The interaction also causes carbon atoms to "rehybridise" from sp2 towards sp3, which is indicated by the bending of the hydrogen atoms on the ethylene back away from the metal. In silico calculations show that 75% of the binding energy is derived from the forward donation and 25% from backdonation. This model is a specific manifestation of the more general π backbonding model. Main group elements can also form π-complexes with alkenes and alkynes. The β-diketiminato aluminum(I) complex Al{HC(CMeNAr)2} (Ar = 2,6-diisopropylphenyl), which bears an Al-based spx lone pair, reacts with alkenes and alkynes to give alumina(III)cyclopropanes and alumina(III)cyclopropenes in a process analogous to the formation of π-complexes by transition metals. However, in most cases, the backbonding interaction is absent in these complexes due to the lack of energetically accessible filled orbitals for backdonation, resulting in π-complexes that dissociate readily and are therefore more challenging to observe or isolate. References Organometallic chemistry Chemical bonding
Dewar–Chatt–Duncanson model
[ "Physics", "Chemistry", "Materials_science" ]
471
[ "Chemical bonding", "Organometallic chemistry", "Condensed matter physics", "nan" ]
8,924,792
https://en.wikipedia.org/wiki/Racah%20W-coefficient
Racah's W-coefficients were introduced by Giulio Racah in 1942. These coefficients have a purely mathematical definition. In physics they are used in calculations involving the quantum mechanical description of angular momentum, for example in atomic theory. The coefficients appear when there are three sources of angular momentum in the problem. For example, consider an atom with one electron in an s orbital and one electron in a p orbital. Each electron has electron spin angular momentum and in addition the p orbital has orbital angular momentum (an s orbital has zero orbital angular momentum). The atom may be described by LS coupling or by jj coupling as explained in the article on angular momentum coupling. The transformation between the wave functions that correspond to these two couplings involves a Racah W-coefficient. Apart from a phase factor, Racah's W-coefficients are equal to Wigner's 6-j symbols, so any equation involving Racah's W-coefficients may be rewritten using 6-j symbols. This is often advantageous because the symmetry properties of 6-j symbols are easier to remember. Racah coefficients are related to recoupling coefficients by Recoupling coefficients are elements of a unitary transformation and their definition is given in the next section. Racah coefficients have more convenient symmetry properties than the recoupling coefficients (but less convenient than the 6-j symbols). Recoupling coefficients Coupling of two angular momenta and is the construction of simultaneous eigenfunctions of and , where , as explained in the article on Clebsch–Gordan coefficients. The result is where and . Coupling of three angular momenta , , and , may be done by first coupling and to and next coupling and to total angular momentum : Alternatively, one may first couple and to and next couple and to : Both coupling schemes result in complete orthonormal bases for the dimensional space spanned by Hence, the two total angular momentum bases are related by a unitary transformation. The matrix elements of this unitary transformation are given by a scalar product and are known as recoupling coefficients. The coefficients are independent of and so we have The independence of follows readily by writing this equation for and applying the lowering operator to both sides of the equation. The definition of Racah W-coefficients lets us write this final expression as Algebra Let be the usual triangular factor, then the Racah coefficient is a product of four of these by a sum over factorials, where and The sum over is finite over the range Relation to Wigner's 6-j symbol Racah's W-coefficients are related to Wigner's 6-j symbols, which have even more convenient symmetry properties Cf. or See also Clebsch–Gordan coefficients 3-j symbol 6-j symbol Pandya theorem Notes Further reading External links Rotational symmetry Representation theory of Lie groups
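If SymPy is available, its sympy.physics.wigner module can be used to check numerically the phase relation between W-coefficients and 6-j symbols stated above. The sketch below assumes the argument convention documented by SymPy, racah(a, b, c, d, e, f) = W(a, b, c, d; e, f), and uses arbitrary small quantum numbers.

# Numerical check that the Racah W-coefficient equals the Wigner 6-j symbol
# up to a phase.  Requires SymPy; argument conventions are taken from the
# SymPy documentation.  The quantum numbers are arbitrary illustrative values.

from sympy.physics.wigner import racah, wigner_6j

j1, j2, j12 = 1, 1, 2   # first pair coupling
j3, j, j23 = 1, 1, 1    # third momentum, total, alternative coupling

w = racah(j1, j2, j, j3, j12, j23)         # W(j1 j2 j j3; j12 j23)
six = wigner_6j(j1, j2, j12, j3, j, j23)   # {j1 j2 j12; j3 j j23}

print("W  =", w)
print("6j =", six)
if w != 0:
    print("ratio =", six / w, "(expected to be +1 or -1)")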
Racah W-coefficient
[ "Physics" ]
586
[ "Symmetry", "Rotational symmetry" ]
8,925,452
https://en.wikipedia.org/wiki/6-j%20symbol
Wigner's 6-j symbols were introduced by Eugene Paul Wigner in 1940 and published in 1965. They are defined as a sum over products of four Wigner 3-j symbols, The summation is over all six allowed by the selection rules of the 3-j symbols. They are closely related to the Racah W-coefficients, which are used for recoupling 3 angular momenta, although Wigner 6-j symbols have higher symmetry and therefore provide a more efficient means of storing the recoupling coefficients. Their relationship is given by: Symmetry relations The 6-j symbol is invariant under any permutation of the columns: The 6-j symbol is also invariant if upper and lower arguments are interchanged in any two columns: These equations reflect the 24 symmetry operations of the automorphism group that leave the associated tetrahedral Yutsis graph with 6 edges invariant: mirror operations that exchange two vertices and a swap an adjacent pair of edges. The 6-j symbol is zero unless j1, j2, and j3 satisfy triangle conditions, i.e., In combination with the symmetry relation for interchanging upper and lower arguments this shows that triangle conditions must also be satisfied for the triads (j1, j5, j6), (j4, j2, j6), and (j4, j5, j3). Furthermore, the sum of the elements of each triad must be an integer. Therefore, the members of each triad are either all integers or contain one integer and two half-integers. Special case When j6 = 0 the expression for the 6-j symbol is: The triangular delta is equal to 1 when the triad (j1, j2, j3) satisfies the triangle conditions, and zero otherwise. The symmetry relations can be used to find the expression when another j is equal to zero. Orthogonality relation The 6-j symbols satisfy this orthogonality relation: Asymptotics A remarkable formula for the asymptotic behavior of the 6-j symbol was first conjectured by Ponzano and Regge and later proven by Roberts. The asymptotic formula applies when all six quantum numbers j1, ..., j6 are taken to be large and associates to the 6-j symbol the geometry of a tetrahedron. If the 6-j symbol is determined by the quantum numbers j1, ..., j6 the associated tetrahedron has edge lengths Ji = ji+1/2 (i=1,...,6) and the asymptotic formula is given by, The notation is as follows: Each θi is the external dihedral angle about the edge Ji of the associated tetrahedron and the amplitude factor is expressed in terms of the volume, V, of this tetrahedron. Mathematical interpretation In representation theory, 6-j symbols are matrix coefficients of the associator isomorphism in a tensor category. For example, if we are given three representations Vi, Vj, Vk of a group (or quantum group), one has a natural isomorphism of tensor product representations, induced by coassociativity of the corresponding bialgebra. One of the axioms defining a monoidal category is that associators satisfy a pentagon identity, which is equivalent to the Biedenharn-Elliot identity for 6-j symbols. When a monoidal category is semisimple, we can restrict our attention to irreducible objects, and define multiplicity spaces so that tensor products are decomposed as: where the sum is over all isomorphism classes of irreducible objects. 
Then: The associativity isomorphism induces a vector space isomorphism and the 6j symbols are defined as the component maps: When the multiplicity spaces have canonical basis elements and dimension at most one (as in the case of SU(2) in the traditional setting), these component maps can be interpreted as numbers, and the 6-j symbols become ordinary matrix coefficients. In abstract terms, the 6-j symbols are precisely the information that is lost when passing from a semisimple monoidal category to its Grothendieck ring, since one can reconstruct a monoidal structure using the associator. For the case of representations of a finite group, it is well known that the character table alone (which determines the underlying abelian category and the Grothendieck ring structure) does not determine a group up to isomorphism, while the symmetric monoidal category structure does, by Tannaka-Krein duality. In particular, the two nonabelian groups of order 8 have equivalent abelian categories of representations and isomorphic Grothdendieck rings, but the 6-j symbols of their representation categories are distinct, meaning their representation categories are inequivalent as monoidal categories. Thus, the 6-j symbols give an intermediate level of information, that in fact uniquely determines the groups in many cases, such as when the group is odd order or simple. See also Clebsch–Gordan coefficients 3-j symbol Racah W-coefficient 9-j symbol Representations of classical Lie groups Notes References External links (Gives exact answer) (accurate; C, fortran, python) (fast lookup, accurate; C, fortran) Rotational symmetry Representation theory of Lie groups Quantum mechanics Monoidal categories
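The column-permutation symmetry stated above is easy to spot-check numerically, for example with SymPy's wigner_6j function. The j values in the sketch below are arbitrary, chosen only so that all four triangle conditions are satisfied.

# Spot-check of the 6-j column-permutation symmetry using SymPy.

from sympy import Rational
from sympy.physics.wigner import wigner_6j

half = Rational(1, 2)
j1, j2, j3 = 1, half, Rational(3, 2)   # upper row
j4, j5, j6 = half, 1, 1                # lower row

original = wigner_6j(j1, j2, j3, j4, j5, j6)
columns_1_2 = wigner_6j(j2, j1, j3, j5, j4, j6)   # swap columns 1 and 2
columns_2_3 = wigner_6j(j1, j3, j2, j4, j6, j5)   # swap columns 2 and 3

print(original, columns_1_2, columns_2_3)
print("all equal:", original == columns_1_2 == columns_2_3)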
6-j symbol
[ "Physics", "Mathematics" ]
1,113
[ "Mathematical structures", "Theoretical physics", "Monoidal categories", "Quantum mechanics", "Category theory", "Symmetry", "Rotational symmetry" ]
8,927,344
https://en.wikipedia.org/wiki/9-j%20symbol
In physics, Wigner's 9-j symbols were introduced by Eugene Paul Wigner in 1937. They are related to recoupling coefficients in quantum mechanics involving four angular momenta: Recoupling of four angular momentum vectors Coupling of two angular momenta and is the construction of simultaneous eigenfunctions of and , where , as explained in the article on Clebsch–Gordan coefficients. Coupling of three angular momenta can be done in several ways, as explained in the article on Racah W-coefficients. Using the notation and techniques of that article, total angular momentum states that arise from coupling the angular momentum vectors , , , and may be written as Alternatively, one may first couple and to and and to , before coupling and to : Both sets of functions provide a complete, orthonormal basis for the space with dimension spanned by Hence, the transformation between the two sets is unitary and the matrix elements of the transformation are given by the scalar products of the functions. As in the case of the Racah W-coefficients the matrix elements are independent of the total angular momentum projection quantum number (): Symmetry relations A 9-j symbol is invariant under reflection about either diagonal as well as even permutations of its rows or columns: An odd permutation of rows or columns yields a phase factor , where For example: Reduction to 6j symbols The 9-j symbols can be calculated as sums over triple-products of 6-j symbols where the summation extends over all admitted by the triangle conditions in the factors: . Special case When the 9-j symbol is proportional to a 6-j symbol: Orthogonality relation The 9-j symbols satisfy this orthogonality relation: The triangular delta is equal to 1 when the triad (j1, j2, j3) satisfies the triangle conditions, and zero otherwise. 3n-j symbols The 6-j symbol is the first representative, , of -j symbols that are defined as sums of products of of Wigner's 3-jm coefficients. The sums are over all combinations of that the -j coefficients admit, i.e., which lead to non-vanishing contributions. If each 3-jm factor is represented by a vertex and each j by an edge, these -j symbols can be mapped on certain 3-regular graphs with edges and nodes. The 6-j symbol is associated with the K4 graph on 4 vertices, the 9-j symbol with the utility graph on 6 vertices (K3,3), and the two distinct (non-isomorphic) 12-j symbols with the Q3 and Wagner graphs on 8 vertices. Symmetry relations are generally representative of the automorphism group of these graphs. See also Clebsch–Gordan coefficients 3-j symbol, also called 3-jm symbol Racah W-coefficient 6-j symbol References External links (Gives answer in exact fractions) (Answer as floating point numbers) (accurate; C, fortran, python) (fast lookup, accurate; C, fortran) Rotational symmetry Representation theory of Lie groups Quantum mechanics
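The symmetry relations stated above can be spot-checked numerically with SymPy's wigner_9j function. The sketch below uses a set of arguments corresponding to a typical LS-to-jj recoupling of two p electrons; the values are illustrative only.

# Spot-check of two 9-j properties: invariance under reflection (transposition)
# and the phase (-1)**S picked up under an odd permutation of rows, where S is
# the sum of all nine arguments.  Requires SymPy.

from sympy import Rational
from sympy.physics.wigner import wigner_9j

half = Rational(1, 2)
m = [[1, 1, 2],
     [half, half, 1],
     [Rational(3, 2), Rational(3, 2), 3]]

def ninej(rows):
    (a, b, c), (d, e, f), (g, h, i) = rows
    return wigner_9j(a, b, c, d, e, f, g, h, i)

original = ninej(m)
transposed = ninej([[m[j][i] for j in range(3)] for i in range(3)])
row_swap = ninej([m[1], m[0], m[2]])          # odd permutation of rows

S = sum(sum(row) for row in m)
phase = (-1) ** int(S)

print("original  :", original)
print("transposed:", transposed, "(should equal the original)")
print("row swap  :", row_swap, "(should equal phase * original, phase =", phase, ")")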
9-j symbol
[ "Physics" ]
638
[ "Theoretical physics", "Quantum mechanics", "Symmetry", "Rotational symmetry" ]
9,611,617
https://en.wikipedia.org/wiki/Landslide%20mitigation
Landslide mitigation refers to several human-made activities on slopes with the goal of lessening the effect of landslides. Landslides can be triggered by many, sometimes concomitant causes. In addition to shallow erosion or reduction of shear strength caused by seasonal rainfall, landslides may be triggered by anthropic activities, such as adding excessive weight above the slope, digging at mid-slope or at the foot of the slope. Often, individual phenomena join to generate instability over time, which often does not allow a reconstruction of the evolution of a particular landslide. Therefore, landslide hazard mitigation measures are not generally classified according to the phenomenon that might cause a landslide. Instead, they are classified by the sort of slope stabilization method used: Geometric methods, in which the geometry of the hillside is changed (in general the slope); Hydrogeological methods, in which an attempt is made to lower the groundwater level or to reduce the water content of the material Chemical and mechanical methods, in which attempts are made to increase the shear strength of the unstable mass or to introduce active external forces (e.g. anchors, rock or ground nailing) or passive (e.g. structural wells, piles or reinforced ground) to counteract the destabilizing forces. Each of these methods varies somewhat with the type of material that makes up the slope. Rock slopes Reinforcement measures Reinforcement measures generally consist of the introduction of metal elements which increase the shear strength of the rock and to reduce the stress release created when the rock is cut. Reinforcement measures are made up of metal rock nails or anchors. Anchorage subjected to pretensioning is classified as active anchorage. Passive anchorage, not subjected to pretensioning, can be used both to nail single unstable blocks and to reinforce large portions of rock. Anchorage can also be used as pre-reinforcement elements on a scarp to limit hillside decompression associated with cutting. Parts of an anchorage include: the header: the set of elements (anchor plate, blocking device, etc.) that transmit the traction strength of the anchor to the anchored structure or to the rock the reinforcement: part of the anchor, concreted and otherwise, placed under traction; can be constituted by a metal rod, a metal cable, a strand, etc. the length of the foundation: the deepest portion of the anchor, fixed to the rock with chemical bonds or mechanical devices, which transfer the load to the rock itself the free length: the non-concreted length. When the anchorage acts over a short length it is defined as a bolt, which is not structurally connected to the free length, made up of an element resistant to traction (normally a steel bar of less than 12 m protected against corrosion by a concrete sheath). The anchorage device may be connected to the ground by chemical means, mechanical expansion or concreting. In the first case, polyester resin cartridges are placed in a perforation to fill the ring space around the end part of the bolt. The main advantage of this type of anchorage lies in its simplicity and in the speed of installation. The main disadvantage is in its limited strength. In the second case, the anchorage is composed of steel wedges driven into the sides of the hole. The advantage of this type of anchorage lies in the speed of installation and in the fact that the tensioning can be achieved immediately. 
The main disadvantage with this type of anchorage is that it can only be used with hard rock, and the maximum traction force is limited. In the third case, the anchorage is achieved by concreting the whole metal bar. This is the most-used method since the materials are cheap and installation is simple. Injected concrete mixes can be used in many different rocks and grounds, and the concrete sheath protects the bar from corrosion. The concrete mixture is generally made up of water and cement in the ratio W/C = 0.40-0.45, producing a sufficiently fluid mixture to allow pumping into the hole, while at the same time, providing high mechanical strength when set. As far as the working mechanism of a rock nail is concerned, the strains of the rock induce a stress state in the nail composed of shear and traction stress, due to the roughness of the joints, to their opening and to the direction of the nail, generally non-orthogonal to the joint itself. The execution phases of setting up the nail provides for: formation of any header niche and perforation setting up of a reinforcement bar (e.g. a 4–6 m long FeB44k bar) concrete injection of the bar sealing of the header or of the top part of the hole It is anyway opportune to close up and cement any cracks in the rock to prevent pressure caused by water during the freeze-thaw cycles from producing progressive breaking in the reinforcement system set up. To this purpose a procedure is provided for of: cleaning out and washing of the cracks; plastering of the crack; predisposition of the injection tubes at suitable inter-axes, parallel to the crack, through which the concrete mix is injected; sequential injection of the mixture from bottom to top and at low pressure (1-3 atm.) until refusal or until no flow back of the mixture is noted from the tubes placed higher up. The injection mixtures have approximately the following composition: cement 10 kg; water 65 l fluidity and anti-shrinkage additive or bentonite 1-5 kg. Shotcrete As defined by the American Concrete Institute, shotcrete is mortar or concrete conveyed through a hose and pneumatically projected at high velocity onto a surface. Shotcrete is also called spray-concrete, or spritzbeton (German). Drainage The presence of water within a rocky hillside is one of the major factors leading to instability. Knowledge of the water pressure and of the runoff mode is important to stability analysis, and to planning measures to improve hillside stability. Hoek and Bray (1981) provide a scheme of possible measures to reduce not only the amount of water, which is itself negligible as a cause of instability, but also the pressure applied by the water. The proposed scheme was elaborated taking three principles into account: Preventing water entering the hillside through open or discontinuity traction cracks Reducing water pressure in the vicinity of potential breakage surfaces through selective shallow and sub-shallow drainage. Placing drainage in order to reduce water pressure in the immediate vicinity of the hillside. The measures that can be achieved to reduce the effects of water can be shallow or deep. Shallow drainage work mainly intercepts surface runoff and keeps it away from potentially unstable areas. In reality, on rocky hillsides this type of measure alone is usually insufficient to stabilise a hillside. Deep drainage is the most effective. Sub horizontal drainage is very effective in reducing pore-pressure along crack surfaces or potential breakage surfaces. 
In rocks, the choice of drain spacing, slope, and length is dependent on the hillside geometry and, more importantly, the structural formation of the mass. Features such as position, spacing and discontinuity opening persistence condition, apart from the mechanical characteristics of the rock, the water runoff mode inside the mass. Therefore, only by intercepting the mostly drained discontinuities can there be an efficient result. Sub horizontal drains are accompanied by surficial collectors which gather the water and take it away through networks of small surface channels. Vertical drainage is generally associated with sunken pumps which have the task of draining the water and lowering the groundwater level. The use of continuous cycle pumps implies very high running costs conditioning the use of this technique for only limited periods. Drainage galleries are rather different in terms of efficiency. They are considered to be the most efficient drainage system for rocks even if they have the drawback of requiring very high technological and financial investment. In particular, used in rocks this technique can be highly efficient in lowering water pressure. Drainage galleries can be associated with a series of radial drains which augment their efficiency. The positioning of this type of work is certainly connected to the local morphological, geological and structural conditions. Geometry modification This type of measure is used in those cases in which, below the material to be removed, the rock face is sound and stable (for example unstable material at the top of the hillside, rock blocks thrusting out from the hillside profile, vegetation that can widen the rock joints, rock blocks isolated from the joints). Detachment measures are carried out where there are risk conditions due to infrastructures or the passage of people at the foot of the hillside. Generally this type of measure can solve the problem by eliminating the hazard. However, it should be ensured that once the measure is carried out, the problem does not re-emerge in the short term. In fact, where there are very cracked rocks, the shallower rock portions can undergo mechanical incoherence, sometimes encouraged by extremes of climate, causing the isolation of unstable blocks. The measure can be effected in various ways, which range from demolition with pick axes to the use of explosives. In the case of high and/or not easily accessible faces it is necessary to turn to specialists who work acrobatically. When explosives are used, sometimes controlled demolition is needed, with the aim of minimising or nullifying the undesired effects resulting from the explosion of the charges, safeguarding the integrity of the surrounding rock. Controlled demolition is based on the drilling of holes placed at a short distance from each other and parallel to the scarp to be demolished. The diameter of the holes generally varies from 40 to 80 mm; the spacing of the holes is generally about 10 to 12 times the diameter. The charge fuse times are established so that those at the outer edges explode first and the more internal ones successively, so that the area of the operation is delimited. Protection measures The protection of natural and quarry faces can have two different aims: Protecting the rock from alteration or weathering Protecting infrastructure and towns from rockfalls. Identification of the cause of alteration or the possibility of rockfall allows mitigation measures to be tailored to individual sites. 
The most-used passive protection measures are boulder-gathering trenches at the foot of the hillside, metal containment nets, and boulder barriers. Boulder barriers are generally composed of suitably rigid metal nets. Various structural types are on the market, for which the manufacturers specify the kinetic energy of absorption based on an elemental analysis of the structure under projectile collision conditions. Another type of boulder containment barrier is the earth embankment, sometimes reinforced with geo-synthetics (reinforced ground). The advantages of such earthworks over nets are: easier maintenance, higher absorption of kinetic energy, and lower environmental impacts. Soil slopes Geometric modification The operation of re-profiling a slope with the aim of improving its stability, can be achieved by either: Lowering the angle of the slope, or Positioning infill at the foot of the slope Slope angles can be reduced by digging out the brow of the slope, usually in a step-wise fashion. This method is effective for correcting shallow forms of instability, where movement is limited to layers of ground near the surface and when the slopes are higher than 5m. Steps created by this method may also reduce surface erosion. However, caution is necessary to avoid the onset of local breakage following the cuts. In contrast, infill at the foot of the slope has a stabilising effect on a translational or deep rotational landslide, in which the landslide surface at the top submerges and describes a sub-vertical surface that re-emerges in the area at the foot of the slope. The process of infill at the foot of the slope may include construction of berms, gravitational structures such as gabions, or reinforced ground (i.e., concrete blocks). The choice between reducing the slope or infilling at the foot is usually controlled by location-specific constraints at the top or at the foot of the slope. In cases of slope stabilisation where there are no constraints (usually natural slopes) a combination of slope reduction and infilling at the foot of the slope is adopted to avoid heavy work of just one type. In the case of natural slopes the choice of re-profiling scheme is not as simple as that for artificial slopes. The natural profile is often highly irregular with large areas of natural creep, so that its shallow development can make some areas unserviceable as a cutting or infill point. Where the buried shapes of older landslides are complicated, depositing infill material in one area can trigger a new landslide. When planning this type of work the stepping effect of the cuts and infill should be taken into account: their beneficial influence on the increase in safety factor will be reduced in relationship to the size of the landslide under examination. It is very important to ensure that neither the cuts nor the infill mobilise any existing or potential creep plane(s). Usually, infilling at the foot of the landslide is cheaper than cutting at the top. Moreover, in complex and compound landslides, infill at the foot of the slope, at the tip of the foot itself, has a lesser probability of interfering with the interaction of the individual landslide elements. An important aspect of stabilisation work that changes the morphology of the slope is that cuts and infill generate non-drained charge and discharge stresses. In the case of positioning infill, the safety factor SF, will be less in the short term than in the long term. 
In the case of a cut in the slope, SF will be less in the long term than in the short term. Therefore, in both cases the SF must be calculated in both the short and long terms. Finally, the effectiveness of infill increases with time so long as it is associated with an appropriate infill drainage system, achieved with an underlying drainage cover or appropriate shallow drainage. More generally, therefore, re-profiling systems are associated with and integrated by surficial protection of the slope against erosion and by regulation of meteoric waters through drainage systems made up of ditches and small channels (clad or unclad and prefabricated) to run off the water collected. These surficial water regulation systems are designed by modelling the land itself around the body of the landslide. These provisions will serve the purpose of avoiding penetration of the landslide body by circulating water or into any cracks or fissures, further decreasing ground shear strength. Surface erosion control Water near the surface of the hillside can cause the erosion of surface material due to water runoff. This process tends to weaken the slope by removing material and triggering excess pore pressures due to the water flow. For defense against erosion, several solutions may be used. The following measures share the superficial character of their installation and low environmental impact. Geomats are anti-eroding biomats or bionets that are purpose-made synthetic products for the protection and grassing of slopes subject to surface wash. Geomats provide two main erosion control mechanisms: containment and reinforcement of the surficial ground; and protection from the impact of the raindrops. Geogrids made of geosynthetic materials Steel wire mesh may be used for soil and rock slope stabilization. After leveling, the surface is covered by a steel-wire mesh, which is fastened to the slope and tensioned. It is a cost-effective approach. Wicker or brushwood mats made of vegetable material. Very long and flexible willow branches can be used, which are then covered with infill soil. Alternating stakes of different woody species are used and they are woven to form a barrier against the downward drag of the material eroded by free water on the surface. Coir (coconut fiber) geotextiles are used globally for bioengineering and slope stabilization applications due to the mechanical strength necessary to hold soil together. Coir geotextiles last for 3–5 years depending on the weight, and as the product degrades, it converts itself it to humus, which enriches the soil. Draining techniques Drainage systems reduce the water level inside a potentially unstable hillside, which leads to reduction in pore water pressures in the ground and an increase in the shear strength within the slope. The reduction in pore pressure by drainage can be achieved by shallow and/or deep drains, depending on hillside morphology, the kinematics of movement predicted and the depth of creep surfaces. Usually, shallow drainage is adopted where the potential hillside movement is shallow, affecting a depth of 5-6m. Where there are deeper slippage surfaces, deep drainage must be introduced, but shallow drainage systems may also be installed, with the aim of running off surface water. Shallow drainage Shallow drainage is facilitated through trenches. Traditional drainage trenches are cut in an unbroken length and filled with highly permeable, granular, draining material. Shallow drainage trenches may also be equipped with geocomposites. 
The scarped sides of the trenches are covered with geocomposite panels. The bottom of the trenches houses a drainage tube placed in continuity to the geocomposite canvas. Deep drainage Deep drainage modifies the filtration routes in the ground. Often more expensive than shallow drains, deep drains are usually more effective because they directly remove the water that induces instability within the hillside. Deep drainage in earth slopes can be achieved in several ways: Large diameter drainage wells with sub-horizontal drains These systems can serve a structural function, a drainage function, or both. The draining elements are microdrains, perforated and positioned sub-horizontally and fanned out, oriented uphill to favour water discharge by gravity. The size of the wells is chosen with the aim of allowing the insertion and functioning of the perforation equipment for the microdrains. Generally, the minimum internal diameter is greater than 3.5 m for drains with a length of 20 to 30 m. Longer drains require wells with a diameter of up to 8–10 m. To determine the network of microdrains planners take into consideration the makeup of the subsoil and the hydraulic regime of the slope. The drainage in these wells is passive, realised by linking the bottom of adjacent wells by sub-horizontal perforations (provided with temporary sheathing pipes) in which the microdrains are placed at a gradient of about 15-20° and are equipped with microperforated PVC pipes, protected by non-filtering fabric along the draining length. Once the drain is embedded in the ground, the temporary sheathing is completely removed and the head of the drain is cemented to the well. In this way a discharge line is created linking all the wells emerging to the surface downhill, where the water is discharged naturally without the help of pumps. The wells are placed at such a distance apart that the individual collecting areas of the microdrains, appertaining to each well, are overlaid. In this way all the volume of the slope involved with the water table is drained. Medium-diameter drainage wells linked at the bottom. The technique involves the dry cutting with temporary sheathing pipes, of aligned drainage wells, with a diameter of 1200–1500 mm., positioned at an interaxis of 6–8 m., their bottoms linked together to a bottom tube for the discharge of drained water. In this way the water discharge takes place passively, due to gravity by perforated pipes with mini-tubes, positioned at the bottom of the wells themselves. The linking pipes, generally made of steel, are blind in the linking length and perforated or windowed in the length corresponding to the well. The wells have a concrete bung at the bottom and are filled, after withdrawal of the temporary sheathing pipe, with dry draining material and are closed with an impermeable clay bung. In normal conditions, these wells reach a depth of 20–30 m, but, in especially favourable cases, may reach 50 m. Some of these wells have drainage functions across their whole section and others can be inspected. The latter serve for maintenance of the whole drainage screen. Such wells that can be inspected are also a support point for the creation of new drainage wells and access for the installation, also on a later occasion, for a range of sub-horizontal drains at the bottom or along the walls of the wells themselves, with the purpose of increasing the drainage capacity of the well. 
Isolated wells fitted with drainage pumps This system provides for the installation of a drainage pump for each well. The distribution of the wells is established according to the permeability of the land to be drained and the lowering of the water pressure to be achieved. The use of isolated wells with drainage pumps leads to high operational costs and imposes a very time-consuming level of control and maintenance. Deep drainage trenches Deep drainage trenches consist of unbroken cuts with a small cross-section that can be covered at the bottom with geofabric canvas having a primary filter function. They are filled with draining material that has a filtering function and exploits the passive drainage to carry away the drained water downhill. The effectiveness of these systems is connected to the geometry of the trench and the continuity of the draining material along the whole trench. As far as the geometry of the cut is concerned attention should be paid to the slope given to the bottom of the cut. In fact, deep drainage trenches do not have bottom piping that is inserted in the end part of the trench, downhill, where the depth of the cut is reduced until the campaign level is reached. Drainage galleries fitted with microdrains Drainage galleries constitute a rather expensive stabilisation provision for large, deep landslide movements, used where the ground is unsuitable for cutting trenches or drainage wells and where it is impossible to work on the surface owing to a lack of space for the work machinery. Their effectiveness is due to the extensiveness of the area to be drained. Moreover, these drainage systems must be installed on the stable part of the slope. Drainage systems made up of microdrains are placed inside galleries with lengths that can reach 50–60 m. The sizes of the galleries are conditioned by the need to insert the drain perforation equipment. For this reason the minimum transversal internal size of the galleries vary from a minimum of 2 m, when using special reduced size equipment, to at least 3.5 m, when using traditional equipment. Siphon drain This is a technique conceived and developed in France, which works like the system of isolated drainage wells but overcoming the inconvenience of installing a pump for each well. Once motion is triggered in the siphon tube, without the entry of air into the loop, the flow of water is uninterrupted. For this reason, the two ends of the siphon tube are submerged in the water of two permanent storage tanks. This drain is created vertically starting from the campaign level but can also be sub-vertical or inclined. The diameter of the well can vary from 100 to 300 mm;. Inside a PVC pipe is placed or a perforated or microperforated steel pipe, filled with draining material. The siphon drain in this way carries off drainage water by gravity without the need for drainage pumps or pipes linking the bottom of each well. This system proves to be economically advantageous and relatively simple to set up, but requires a programme of controls and maintenance. Microdrains Microdrains is a simple to create drainage system with contained costs. They consist of small diameter perforations, made from surface locations, in trenches, in wells or in galleries. The microdrains are set to work in a sub-horizontal or sub-vertical position, according to the type of application. 
Drainage anti-slide pile (DASP) The Drainage anti-slide pile (DASP) is a reinforced concrete structure with a hollow upper section and a solid lower section, designed to resist slope deformation. The hollow part is filled with compacted, high-permeability gravels and can drain water via a vertical drain-pipe or sub-horizontal pipes connected to the slope surface. Reinforcement measures Stabilization of a hillside by increasing the mechanical strength of the unstable ground, can be achieved in two ways: Insertion of reinforcement elements into the ground The improvement of the mechanical characteristics of the ground through chemical, thermal, or mechanical treatment. Insertion of reinforcement elements into the ground Types of mechanical reinforcement include: Large diameter wells supported by one or more crowns of consolidated and possibly Reinforced Earth columns Anchors Networks of micropiles Soil nailing Geogrids for reinforced ground Cellular faces Large diameter wells To guarantee slope stability it may be necessary to insert very rigid, strong elements. These elements are large diameter full section or ring section reinforced concrete wells with circular or elliptical cross-sections. The depth of the static wells can reach 30-40m. Often the static stabilising action of the wells is integrated with a series of microdrains laid out radially on several levels, reducing pore-pressures. Anchors Stabilising an unstable slope also can be achieved by the application of active forces to the unstable ground. These forces increase the normal stress and therefore resistance to friction along the creeping surface. Anchors can be applied for this purpose, linked at the surface to each other by a beam frame, which is generally made of reinforced concrete. The anchors are fixed in a place known to be stable. They are usually installed with orthogonal axes to the slope surface and therefore, at first, approximately orthogonal to the surface of the creep. Sometimes anchorage problems occur, as in the case of silty-clayey ground. Where there is water or the anchors are embedded in a clayey sub-layer, the adherence of the anchor to the ground must be confirmed. The surface contained within the grid of the beam frame should also be protected, using geofabrics, in order to prevent erosion from removing the ground underlying the beam frame. Networks of micropiles This solution requires the installation of a series of micropiles that make up a three-dimensional grid, variably tilted and linked at the head by a rigid reinforced concrete mortise. This structure constitutes a reinforcement for the ground, inducing an intrinsic improvement of the ground characteristics incorporated in the micropiles. This type of measure is used in cases of smaller landslides. The effectiveness of micropiles is linked to the insertion of micropiles over the entire landslide area. In the case of rotational landslides in soft clay, the piles contribute to increasing the resisting moment by friction on the upper part of the pile shaft found in the landslide. In the case of suspended piles, strength is governed by the part of the pile offering the least resistance. In practice, those piles in the most unstable area of the slope are positioned first, in order to reduce any possible lateral ground displacements. 
Preliminary design methods for the micropiles, are entrusted to computer codes that carry out numerical simulations, but which are subject to simplifications in the models that necessitate characterizations of rather precise potential landslide material. Nailing The soil nailing technique applied to temporarily and/or permanently stabilise natural slopes and artificial scarps is based on a fundamental principle in construction engineering: mobilizing the intrinsic mechanical characteristics of the ground, such as cohesion and the angle of internal friction, so that the ground actively collaborates with the stabilisation work. Nailing, on a par with anchors, induces normal stress, thereby increasing friction and stability within the hillside. One nailing method is rapid response diffuse nailing: CLOUJET, where the nails are embedded in the ground by means of an expanded bulb obtained by means of injecting mortar at high pressure into the anchorage area. Drainage is important to the CLOUJET method since the hydraulic regime, considered in the form of pore-pressure applied normally to the fractured surfaces, directly influences the characteristics of the system. The drained water, both through fabric and by means of pipes embedded in the ground, flows together at the foot of the slope in a collector installed parallel to the direction of the face. Another nailing system is the soil nail and root technology (SNART). Here, steel nails are inserted very rapidly into a slope by percussion, vibration or screw methods. Grid spacing is typically 0.8 to 1.5 m, nails are 25 to 50 mm in diameter and may be as long as 20 m. Nails are installed perpendicular to and through the failure plane, and are designed to resist bending and shear (rather than tension) using geotechnical engineering principles. Potential failure surfaces less than 2 m deep normally require the nails to be wider near the top, which may be achieved with steel plates fastened at the nail heads. Plant roots often form an effective and aesthetic facing to prevent soil loss between the nails. Geogrids Geogrids are synthetic materials used to reinforce the ground. The insertion of geosynthetic reinforcements (generally in the direction in which the deformation has developed) has the function of conferring greater stiffness and stability upon the ground, increasing its capacity to be subjected to greater deformations without fracturing. Cellular faces Cellular faces, also known by the name of "crib faces" are special supporting walls made of head grids prefabricated in reinforced concrete or wood (treated with preservatives). The heads have a length of about 1–2 m and the wall can reach 5 m in height. Compacted granular material is inserted in the spaces of the grid. The modularity of the system confers notable flexibility of use, both in terms of adaptability to the ground morphology, and because the structure does not require a deep foundation other than a laying plane of lean concrete used to make the support plane of the whole structure regular. Vegetation may be planted in the grid spaces, camouflaging of the structure. Chemical, thermal and mechanical treatments A variety of treatments may be used to improve the mechanical characteristics of the soil volume affected by landslides. Among these treatments, the technique of jet-grouting is often used, often as a substitute for and/or complement to previously discussed structural measures. 
The phases of jet-grouting work are: Perforation phase: insertion, with perforation destroying the nucleus, of a set of poles into the ground up to the depth of treatment required by the project. Extraction and programmed injection phase: injection of the mixture at very high pressure is done during the extraction phase of the set of poles. It is in this phase that through the insistence of the jet in a certain direction for a certain interval of time, the effect is obtained by the speed of extraction and rotation of the set of poles, so that volumes of ground can be treated in the shape and size desired. (see) The high energy jet produces a mixture of the ground and a continuous and systematic "claquage" with only a local effect within the radius of action without provoking deformations at the surface that could induce negative consequences on the stability of adjacent constructions. The projection of the mixture at high speed through the nozzles, using the effect of the elevated energy in play, allows the modification of the natural disposition and mechanical characteristics of the ground in the desired direction and in accordance with the mixture used (cement, bentonite, water, chemical, mixtures etc.). Depending on the characteristics of the natural ground, the type of mixture used, and work parameters, compression strength from 1 to 500 kgf/cm² (100 kPa to 50 MPa) can be obtained in the treated area. The realisation of massive consolidated ground elements of various shapes and sizes (buttresses and spurs) within the mass to be stabilised, is achieved by acting opportunely on the injection parameters. In this way the following can be obtained: thin diaphragms, horizontal and vertical cylinders of various diameter and generally any geometrical shapes. Another method for improving the mechanical characteristics of the ground is thermal treatment of potentially unstable hillsides made up of clayey materials. Historically, unstable clayey slopes along railways were hardened by lighting of wood or coal fires within holes dug into the slope. In large diameter holes (from 200 to 400 mm.), about 0.8-1.2m. apart and horizontally interconnected, burners were introduced to form cylinders of hardened clay. The temperatures reached were around 800 °C. These clay cylinders worked like piles giving greater shear strength to the creep surface. This system was useful for surface creep, as in the case of an embankment. In other cases the depth of the holes or the amount of fuel necessary led to either the exclusion of this technique or made the effort ineffective. Other stabilisation attempts were made by using electro-osmotic treatment of the ground. This type of treatment is applicable only in clayey grounds. It consists of subjecting the material to the action of a continuous electrical field, introducing pairs of electrodes embedded in the ground. These electrodes, when current is introduced cause the migration of the ion charges in the clay. Therefore, the inter-pore waters are collected in the cathode areas and they are dragged by the ion charges. In this way a reduction in water content is achieved. Moreover, by suitable choice of anodic electrode a structural transformation of the clay can be induced due to the ions freed by the anode triggering a series of chemo-physical reactions improving the mechanical characteristics of the unstable ground. This stabilisation method, however, is effective only in homogeneous clayey grounds. 
This condition is hard to find in unstable slopes; therefore, electro-osmotic treatment, after some applications, has been abandoned. See also Rockfall protection embankment References Bomhad E. N. (1986). Stabilità dei pendii, Dario Flaccovio Editore, Palermo. Cruden D. M. & Varnes D. J. (1996). Landslide types and process. In "Landslides – Investigation and Mitigation", Transportation Research Board Special Report n. 247, National Academy Press, Washington DC, 36–75. Fell R. (1994). Landslide risk assessment and acceptable risk, Can. Geotech. J., vol. 31, 261–272. Giant G. (1997). Caduta di massi – Analisi del moto e opere di protezione, Hevelius edizioni, Naples. Hunge O. (1981). Dynamics of rock avalanches and other types of mass movements. PhD Thesis, University of Alberta, Canada. Peck R.P. (1969). Advantages and limitations of the observational method in applied soil mechanics, Geotechnique 19, n. 2, 171–187. Tambura F. (1998). Stabilizzazione di pendii – Tipologie, tecnologie, realizzazioni, Hevelius edizioni, Naples. Tanzini M. (2001). Fenomeni franosi e opere di stabilizzazione, Dario Flaccovio Editore, Palermo. Terzaghi K. & Peck R. B. (1948). Soil mechanics in engineering practice, New York, Wiley. Coir Green (1998). "Erosion Control – Soil Erosion". Landslide analysis, prevention and mitigation
Landslide mitigation
[ "Environmental_science" ]
7,171
[ "Environmental soil science", " prevention and mitigation", "Landslide analysis" ]
9,612,212
https://en.wikipedia.org/wiki/Flame%20rectification
Flame rectification is a phenomenon in which a flame can act as an electrical rectifier. The effect is commonly described as being caused by the greater mobility of electrons relative to that of positive ions within the flame, and the asymmetric nature of the electrodes used to detect the phenomenon. This effect is used by rectification flame sensors to detect the presence of flame. The rectifying effect of the flame on an AC voltage allows the presence of flame to be distinguished from a resistive leakage path. One experimental study suggested that the effect is caused by the ionization process occurring mostly at the base of the flame, making it more difficult for the electrode further from the base of the flame to attract positive ions from the burner, yet leaving the electron current largely unchanged with distance because of the greater mobility of the electron charge carriers. See also Flame detection Flame supervision device References External links A video of a flame being used as a rectifier in a simple AM radio Using a flame as a triode amplifier Plasma technology and applications
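The asymmetry described above can be illustrated with a toy numerical model. The C sketch below treats the flame as an idealised conductance that passes far more current in one polarity than in the other; the 50:1 conductance ratio, the drive amplitude and the sample count are illustrative assumptions, not measured values. Averaging the current over one AC cycle yields a clearly non-zero DC component for the flame-like path and essentially zero for a purely resistive leak, which is the distinction a flame-rectification sensor relies on.

```c
/*
 * Minimal numerical sketch of flame rectification (assumption: the flame is
 * idealised as a two-valued conductance, passing far more current in one
 * polarity than the other; the 50:1 ratio and component values are
 * illustrative, not measured data).
 */
#include <stdio.h>
#include <math.h>

#define TWO_PI 6.283185307179586

/* Current through an asymmetric (flame-like) path: high conductance on the
 * positive half-cycle, low conductance on the negative half-cycle. */
static double flame_current(double v)
{
    const double g_forward = 1.0e-6;   /* 1 uS when electrons flow easily   */
    const double g_reverse = 0.02e-6;  /* 50x less in the opposite polarity */
    return v >= 0.0 ? g_forward * v : g_reverse * v;
}

/* Current through a purely resistive leakage path (symmetric). */
static double leak_current(double v)
{
    const double g_leak = 1.0e-6;
    return g_leak * v;
}

int main(void)
{
    const double v_peak = 120.0;  /* AC drive amplitude, volts (illustrative) */
    const int n = 10000;          /* samples over one full cycle              */
    double dc_flame = 0.0, dc_leak = 0.0;

    for (int i = 0; i < n; i++) {
        double v = v_peak * sin(TWO_PI * i / n);
        dc_flame += flame_current(v);
        dc_leak  += leak_current(v);
    }
    dc_flame /= n;   /* average over the cycle = DC component of the current */
    dc_leak  /= n;

    printf("DC component with flame present : %.3f uA\n", dc_flame * 1e6);
    printf("DC component with resistive leak: %.3f uA\n", dc_leak * 1e6);
    /* The asymmetric path averages to a nonzero DC current, while the
     * symmetric leak averages to ~0: this is the signature a
     * flame-rectification sensing circuit looks for. */
    return 0;
}
```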
Flame rectification
[ "Physics" ]
214
[ "Plasma technology and applications", "Plasma physics" ]
9,612,488
https://en.wikipedia.org/wiki/Misiurewicz%20point
In mathematics, a Misiurewicz point is a parameter value in the Mandelbrot set (the parameter space of complex quadratic maps) and also in real quadratic maps of the interval for which the critical point is strictly pre-periodic (i.e., it becomes periodic after finitely many iterations but is not periodic itself). By analogy, the term Misiurewicz point is also used for parameters in a multibrot set where the unique critical point is strictly pre-periodic. This term makes less sense for maps in greater generality that have more than one free critical point because some critical points might be periodic and others not. These points are named after the Polish-American mathematician Michał Misiurewicz, who was the first to study them. Mathematical notation A parameter $c$ is a Misiurewicz point $M_{k,n}$ if it satisfies the equations: $f_c^{(k)}(z_{cr}) = f_c^{(k+n)}(z_{cr})$ and: $f_c^{(k-1)}(z_{cr}) \neq f_c^{(k+n-1)}(z_{cr})$ so: $M_{k,n} = \{ c : f_c^{(k)}(z_{cr}) = f_c^{(k+n)}(z_{cr}) \}$ where: $z_{cr}$ is a critical point of $f_c$, $k$ and $n$ are positive integers, and $f_c^{(k)}$ denotes the $k$-th iterate of $f_c$. Name The term "Misiurewicz point" is used ambiguously: Misiurewicz originally investigated maps in which all critical points were non-recurrent; that is, in which there exists a neighbourhood for every critical point that is not visited by the orbit of this critical point. This meaning is firmly established in the context of the dynamics of iterated interval maps. Only in very special cases does a quadratic polynomial have a strictly pre-periodic and unique critical point. In this restricted sense, the term is used in complex dynamics; a more appropriate one would be Misiurewicz–Thurston points (after William Thurston, who investigated post-critically finite rational maps). Quadratic maps A complex quadratic polynomial has only one critical point. By a suitable conjugation any quadratic polynomial can be transformed into a map of the form $P_c(z) = z^2 + c$, which has a single critical point at $z = 0$. The Misiurewicz points of this family of maps are roots of the equations $P_c^{(k)}(0) = P_c^{(k+n)}(0)$, subject to the condition that the critical point is not periodic, where: k is the pre-period, n is the period, and $P_c^{(n)}$ denotes the $n$-fold composition of $P_c(z) = z^2 + c$ with itself, i.e. the $n$-th iteration of $P_c$. For example, the Misiurewicz points with k = 2 and n = 1, denoted by $M_{2,1}$, are roots of $P_c^{(2)}(0) = P_c^{(3)}(0)$, that is, $(c^2 + c)^2 + c = c^2 + c$, which reduces to $c^3(c + 2) = 0$. The root c = 0 is not a Misiurewicz point because the critical point is a fixed point when c = 0, and so is periodic rather than pre-periodic. This leaves a single Misiurewicz point M2,1 at c = −2. Properties of Misiurewicz points of complex quadratic mapping Misiurewicz points belong to, and are dense in, the boundary of the Mandelbrot set. If $c$ is a Misiurewicz point, then the associated filled Julia set is equal to the Julia set, which means the filled Julia set has no interior. If $c$ is a Misiurewicz point, then in the corresponding Julia set all periodic cycles are repelling (in particular the cycle that the critical orbit falls onto). The Mandelbrot set and Julia set are locally asymptotically self-similar around Misiurewicz points. Types Misiurewicz points in the context of the Mandelbrot set can be classified based on several criteria. One such criterion is the number of external rays that converge on such a point. Branch points, which can divide the Mandelbrot set into two or more sub-regions, have three or more external arguments (or angles). Non-branch points have exactly two external rays (these correspond to points lying on arcs within the Mandelbrot set). These non-branch points are generally more subtle and challenging to identify in visual representations. End points, or branch tips, have only one external ray converging on them.
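As a numerical cross-check of the quadratic-map example worked out above, the short C sketch below iterates the critical orbit z → z² + c from z = 0 and reports the first pre-period k and period n it finds. The search depth and tolerance are arbitrary choices made for this sketch; the two parameters tested, c = −2 and c = i, are the examples discussed in this article.

```c
/*
 * Minimal check of the worked example above: iterate the critical orbit
 * z -> z^2 + c from z = 0 and report the first (pre-period k, period n)
 * found. The parameters tested (c = -2 and c = i) come from the text; the
 * search depth and tolerance are arbitrary choices for the sketch.
 */
#include <stdio.h>
#include <complex.h>
#include <math.h>

#define DEPTH 32
#define EPS   1e-9

static void classify(double complex c, const char *label)
{
    double complex orbit[DEPTH];
    double complex z = 0.0;
    for (int i = 0; i < DEPTH; i++) {
        orbit[i] = z;       /* orbit[i] = i-th iterate of the critical point 0 */
        z = z * z + c;
    }
    for (int k = 0; k < DEPTH; k++) {              /* candidate pre-period */
        for (int n = 1; k + 2 * n < DEPTH; n++) {  /* candidate period     */
            if (cabs(orbit[k] - orbit[k + n]) < EPS &&
                cabs(orbit[k + n] - orbit[k + 2 * n]) < EPS) {
                printf("%s: pre-period k = %d, period n = %d "
                       "(Misiurewicz point M_{%d,%d} if k > 0)\n",
                       label, k, n, k, n);
                return;
            }
        }
    }
    printf("%s: no periodicity detected within %d iterates\n", label, DEPTH);
}

int main(void)
{
    classify(-2.0 + 0.0 * I, "c = -2");  /* expected: k = 2, n = 1 */
    classify( 0.0 + 1.0 * I, "c = i ");  /* expected: k = 2, n = 2 */
    return 0;
}
```

For c = −2 the sketch reports pre-period 2 and period 1 (the point M2,1 derived above), and for c = i it reports pre-period 2 and period 2.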
Another criterion for classifying Misiurewicz points is their appearance within a plot of a subset of the Mandelbrot set. Misiurewicz points can be found at the centers of spirals as well as at points where two or more branches meet. According to the Branch Theorem of the Mandelbrot set, all branch points of the Mandelbrot set are Misiurewicz points. Most Misiurewicz parameters within the Mandelbrot set exhibit a "center of a spiral". This occurs due to the behavior at a Misiurewicz parameter where the critical value jumps onto a repelling periodic cycle after a finite number of iterations. At each point during the cycle, the Julia set exhibits asymptotic self-similarity through complex multiplication by the derivative of this cycle. If the derivative is non-real, it implies that the Julia set near the periodic cycle has a spiral structure. Consequently, a similar spiral structure occurs in the Julia set near the critical value, and by Tan Lei's theorem, also in the Mandelbrot set near any Misiurewicz parameter for which the repelling orbit has a non-real multiplier. The visibility of the spiral shape depends on the value of this multiplier. The number of arms in the spiral corresponds to the number of branches at the Misiurewicz parameter, which in turn equals the number of branches at the critical value in the Julia set. Even the principal Misiurewicz point in the 1/3-limb, located at the end of the parameter rays at angles 9/56, 11/56, and 15/56, is asymptotically a spiral with infinitely many turns, although this is difficult to discern without magnification. External arguments External arguments of Misiurewicz points, measured in turns, are rational numbers, specifically proper fractions with an even denominator: dyadic fractions with denominator $2^b$ have a finite (terminating) binary expansion, while fractions with a denominator of the form $2^a(2^b - 1)$ have an eventually repeating binary expansion (for example $\tfrac{1}{2} = 0.1_2$ and $\tfrac{1}{6} = 0.0\overline{01}_2$). The subscript number in each of these expressions is the base of the numeral system being used. Examples of Misiurewicz points of complex quadratic mapping End points Point $c = i$ is considered an end point as it is a tip of a filament, and the landing point of the external ray for the angle 1/6. Its critical orbit is $0,\ i,\ i-1,\ -i,\ i-1,\ -i,\ \ldots$, with a pre-period of 2 and a period of 2. Point $c = -2$ is considered an end point as it is the endpoint of the main antenna of the Mandelbrot set, and the landing point of only one external ray (parameter ray), of angle 1/2. It is also considered an end point because its critical orbit is $0,\ -2,\ 2,\ 2,\ 2,\ \ldots$, following the symbolic sequence C L R R R ... with a pre-period of 2 and period of 1. Branch points The principal Misiurewicz point of the 1/3 limb is considered a branch point because it has 3 external rays: 9/56, 11/56 and 15/56. Other points These are points which are neither branch points nor end points; they lie near Misiurewicz points that are centers of two-arm spirals, are the landing points of exactly 2 external rays, and have strictly pre-periodic critical orbits with a finite pre-period and period. See also Arithmetic dynamics Feigenbaum point Dendrite (mathematics) References Further reading Michał Misiurewicz (1981), "Absolutely continuous measures for certain maps of an interval" (in French). Publications Mathématiques de l'IHÉS, 53 (1981), p. 17-51 External links Preperiodic (Misiurewicz) points in the Mandelbrot set by Evgeny Demidov M & J-sets similarity for preperiodic points. Lei's theorem by Douglas C. Ravenel Misiurewicz Point of the logistic map by J.
C. Sprott Fractals Systems theory Dynamical systems
Misiurewicz point
[ "Physics", "Mathematics" ]
1,639
[ "Functions and mappings", "Mathematical analysis", "Mathematical objects", "Fractals", "Mathematical relations", "Mechanics", "Dynamical systems" ]
9,614,445
https://en.wikipedia.org/wiki/Oxytocin%20receptor
The oxytocin receptor, also known as OXTR, is a protein which functions as receptor for the hormone and neurotransmitter oxytocin. In humans, the oxytocin receptor is encoded by the OXTR gene which has been localized to human chromosome 3p25. Function and location The OXTR protein belongs to the G-protein coupled receptor family, specifically Gq, and acts as a receptor for oxytocin. Its activity is mediated by G proteins that activate several different second messenger systems. Oxytocin receptors are expressed by the myoepithelial cells of the mammary gland, and in both the myometrium and endometrium of the uterus at the end of pregnancy. The oxytocin-oxytocin receptor system plays an important role as an inducer of uterine contractions during parturition and of milk ejection. OXTR is also associated with the central nervous system. The gene is believed to play a major role in social, cognitive, and emotional behavior. A decrease in OXTR expression by methylation of the OXTR gene is associated with Callous and unemotional traits in adolescence, rigid thinking in anorexia nervosa, problems with facial and emotional recognition, and difficulties in the affect regulation. A reduction in this gene is believed to lead to prenatal stress, postnatal depression, and social anxiety. Further research must be gathered before concluding these findings, however strong evidence is pointing in this direction. Studies on OXTR methylation—which downregulates oxytocin mechanisms—suggest this process is associated with increased gray matter density in the amygdala, implicating OXTR regulation in stress and parasympathetic regulation. In some mammals, oxytocin receptors are also found in the kidney and heart. Mesolimbic dopamine pathways The oxytocinergic circuit projecting from the paraventricular hypothalamic nucleus (PVN) innervates the ventral tegmental area (VTA) dopaminergic neurons that project to the nucleus accumbens, i.e., the mesolimbic pathway. Activation of the PVN→VTA projection by oxytocin affects sexual, social, and addictive behavior via this link to the mesolimbic pathway; specifically, oxytocin exerts a prosexual and prosocial effect in this region. Polymorphism The receptors for oxytocin (OXTR) have genetic differences with varied effects on individual behavior. The polymorphism (rs53576) occurs on the third intron of OXTR in three types: GG, AG, AA. The GG allele is connected with oxytocin levels in people . A-allele carrier individuals are associated with more sensitivity to stress, fewer social skills, and more mental health issues than the GG-carriers. In a study looking at empathy and stress, individuals with the allele GG scored higher than A-carrier individuals in a "Reading the Mind in the Eyes" test. GG carriers, with their naturally higher levels of oxytocin , were better able to distinguish between emotions. A-allele carriers responded with more stress to stressful situations than GG-allele carriers. A-allele carriers had lower scores on psychological resources, like optimism, mastery, and self-esteem, than GG individuals when measured with factor analysis for depressive symptomology and psychological resources, along with the Beck Depression Inventory. A-allele carriers had higher depressive symptomology and lower psychological resources than GG individuals. A-allele individuals scored lower in human sociality than GG people on a Tridimensional Personality Questionnaire. 
AA individuals had the lowest amygdala activation while processing emotionally salient information and those with GG had the highest activity when tested using BOLD during an fMRI. On the other hand, variations at the CD38 rs3796863 and OXTR rs53576 loci were not associated with psychosocial characteristics of adolescents assessed with the Strengths and Difficulties Questionnaire (SDQ); in studies with a similar design, authors recommend replication with larger samples and greater power to detect small effects, especially in age–sex subgroups of adolescents. The frequency of the A allele varies among ethnic groups, being significantly more common among East Asians than Europeans. Some evidence suggests an association between OXTR gene polymorphism, IQ, and autism spectrum disorder (ASD). Studies have done research focusing on variants in the third intron of the gene, a region that is strongly correlated with personality traits and ASD. OXTR knockout mice have shown abnormal behaviors such as social impairments and aggressiveness. These abnormalities can be reduced with oxytocin or oxytocin agonist administration. Overall, the study suggests that rare variants are considerably more abundant in individuals with ASD compared to that of a normal individual, however further research with larger sample sizes must be completed before concluding any information. Ligands Several selective ligands for the oxytocin receptor have recently been developed, but close similarity between the oxytocin and related vasopressin receptors make it difficult to achieve high selectivity with peptide derivatives. However the search for a druggable, non-peptide template has led to several potent, highly selective, orally bioavailable oxytocin antagonists. Oxytocin receptor agonists have also been developed. Agonists Peptide Carbetocin Demoxytocin Lipo-oxytocin-1 Merotocin Oxytocin Non-peptide LIT-001 — improved social deficits in mice; non-selective over vasopressin receptors TC OT 39 – non-selective over vasopressin receptors WAY-267,464 – anxiolytic in mice; possibly non-selective over vasopressin receptors Antagonists Peptide Atosiban Barusiban Non-peptide Epelsiban L-368,899 (CAS# 148927-60-0) L-371,257 (CAS# 162042-44-6) – peripherally selective (i.e. poor blood brain barrier penetration, few central effects) L-372,662 Nolasiban Retosiban (GSK-221,149) SSR-126,768 WAY-162,720 – centrally active following peripheral administration References External links G protein-coupled receptors Genes on human chromosome 3
Oxytocin receptor
[ "Chemistry" ]
1,355
[ "G protein-coupled receptors", "Signal transduction" ]
9,615,240
https://en.wikipedia.org/wiki/Nitazoxanide
Nitazoxanide, sold under the brand name Alinia among others, is a broad-spectrum antiparasitic and broad-spectrum antiviral medication that is used in medicine for the treatment of various helminthic, protozoal, and viral infections. It is indicated for the treatment of infection by Cryptosporidium parvum and Giardia lamblia in immunocompetent individuals and has been repurposed for the treatment of influenza. Nitazoxanide has also been shown to have in vitro antiparasitic activity and clinical treatment efficacy for infections caused by other protozoa and helminths; evidence suggested that it possesses efficacy in treating a number of viral infections as well. Chemically, nitazoxanide is the prototype member of the thiazolides, a class of drugs which are synthetic nitrothiazolyl-salicylamide derivatives with antiparasitic and antiviral activity. Tizoxanide, an active metabolite of nitazoxanide in humans, is also an antiparasitic drug of the thiazolide class. Nitazoxanide tablets were approved as a generic medication in the United States in 2020. Uses Nitazoxanide is an effective first-line treatment for infection by Blastocystis species and is indicated for the treatment of infection by Cryptosporidium parvum or Giardia lamblia in immunocompetent adults and children. It is also an effective treatment option for infections caused by other protozoa and helminths (e.g., Entamoeba histolytica, Hymenolepis nana, Ascaris lumbricoides, and Cyclospora cayetanensis). Chronic hepatitis B Nitazoxanide alone has shown preliminary evidence of efficacy in the treatment of chronic hepatitis B over a one-year course of therapy. Nitazoxanide 500 mg twice daily resulted in a decrease in serum HBV DNA in all of 4 HBeAg-positive patients, with undetectable HBV DNA in 2 of 4 patients, loss of HBeAg in 3 patients, and loss of HBsAg in one patient. Seven of 8 HBeAg-negative patients treated with nitazoxanide 500 mg twice daily had undetectable HBV DNA and 2 had loss of HBsAg. Additionally, nitazoxanide monotherapy in one case and nitazoxanide plus adefovir in another case resulted in undetectable HBV DNA, loss of HBeAg and loss of HBsAg. These preliminary studies showed a higher rate of HBsAg loss than any currently licensed therapy for chronic hepatitis B. The similar mechanism of action of interferon and nitazoxanide suggest that stand-alone nitazoxanide therapy or nitazoxanide in concert with nucleos(t)ide analogs have the potential to increase loss of HBsAg, which is the ultimate end-point of therapy. A formal phase 2 study is being planned for 2009. Chronic hepatitis C Romark initially decided to focus on the possibility of treating chronic hepatitis C with nitazoxanide. The drug garnered interest from the hepatology community after three phase II clinical trials involving the treatment of hepatitis C with nitazoxanide produced positive results for treatment efficacy and similar tolerability to placebo without any signs of toxicity. A meta-analysis from 2014 concluded that the previous held trials were of low-quality and withheld with a risk of bias. The authors concluded that more randomized trials with low risk of bias are needed to determine if Nitazoxanide can be used as an effective treatment for chronic hepatitis C patients. Contraindications Nitazoxanide is contraindicated only in individuals who have experienced a hypersensitivity reaction to nitazoxanide or the inactive ingredients of a nitazoxanide formulation. 
Adverse effects The side effects of nitazoxanide do not significantly differ from a placebo treatment for giardiasis; these symptoms include stomach pain, headache, upset stomach, vomiting, discolored urine, excessive urination, skin rash, itching, fever, flu syndrome, and others. Nitazoxanide does not appear to cause any significant adverse effects when taken by healthy adults. Overdose Information on nitazoxanide overdose is limited. Oral doses of 4 grams in healthy adults do not appear to cause any significant adverse effects. In various animals, the oral LD50 is higher than 10 . Interactions Due to the exceptionally high plasma protein binding (>99.9%) of nitazoxanide's metabolite, tizoxanide, the concurrent use of nitazoxanide with other highly plasma protein-bound drugs with narrow therapeutic indices (e.g., warfarin) increases the risk of drug toxicity. In vitro evidence suggests that nitazoxanide does not affect the CYP450 system. Pharmacology Pharmacodynamics The anti-protozoal activity of nitazoxanide is believed to be due to interference with the pyruvate:ferredoxin oxidoreductase (PFOR) enzyme-dependent electron-transfer reaction that is essential to anaerobic energy metabolism. PFOR inhibition may also contribute to its activity against anaerobic bacteria. It has also been shown to have activity against influenza A virus in vitro. The mechanism appears to be by selectively blocking the maturation of the viral hemagglutinin at a stage preceding resistance to endoglycosidase H digestion. This impairs hemagglutinin intracellular trafficking and insertion of the protein into the host plasma membrane. Nitazoxanide modulates a variety of other pathways in vitro, including glutathione-S-transferase and glutamate-gated chloride ion channels in nematodes, respiration and other pathways in bacteria and cancer cells, and viral and host transcriptional factors. Pharmacokinetics Following oral administration, nitazoxanide is rapidly hydrolyzed to the pharmacologically active metabolite, tizoxanide, which is 99% protein bound. Tizoxanide is then glucuronide conjugated into the active metabolite, tizoxanide glucuronide. Peak plasma concentrations of the metabolites tizoxanide and tizoxanide glucuronide are observed 1–4 hours after oral administration of nitazoxanide, whereas nitazoxanide itself is not detected in blood plasma. Roughly two-thirds of an oral dose of nitazoxanide is excreted as its metabolites in feces, while the remainder of the dose is excreted in urine. Tizoxanide is excreted in the urine, bile and feces. Tizoxanide glucuronide is excreted in urine and bile. Chemistry Acetic acid [2-[(5-nitro-2-thiazolyl)amino]-oxomethyl]phenyl ester is a carboxylic ester and a member of benzamides. It is functionally related to a salicylamide. Nitazoxanide is the prototype member of the thiazolides, which is a drug class of structurally-related broad-spectrum antiparasitic compounds. Nitazoxanide belongs to the class of drugs known as thiazolides. It is a broad-spectrum anti-infective drug that significantly modulates the survival, growth, and proliferation of a range of extracellular and intracellular protozoa, helminths, anaerobic and microaerophilic bacteria, in addition to viruses. Nitazoxanide is a light yellow crystalline powder. It is poorly soluble in ethanol and practically insoluble in water. The molecular formula of nitazoxanide is C12H9N3O5S and its molecular weight is 307.28 g/mol.
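As a quick arithmetic check of the molecular weight just quoted, the following C sketch sums rounded standard atomic weights over the formula C12H9N3O5S; the atomic-weight values are conventional tabulated figures and are not taken from this article.

```c
/*
 * Quick arithmetic check of the molecular weight quoted above for
 * nitazoxanide (C12H9N3O5S). Standard atomic weights are rounded to
 * a few decimals; the result should land close to 307.28 g/mol.
 */
#include <stdio.h>

int main(void)
{
    const double m_C = 12.011, m_H = 1.008, m_N = 14.007,
                 m_O = 15.999, m_S = 32.06;

    double mw = 12 * m_C   /* 144.132 */
              +  9 * m_H   /*   9.072 */
              +  3 * m_N   /*  42.021 */
              +  5 * m_O   /*  79.995 */
              +  1 * m_S;  /*  32.060 */

    printf("C12H9N3O5S molecular weight = %.2f g/mol\n", mw);  /* ~307.28 */
    return 0;
}
```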
Tizoxanide, an active metabolite of nitazoxanide in humans, is also an antiparasitic drug of the thiazolide class. IUPAC Name: [2-[(5-nitro-1,3-thiazol-2-yl)carbamoyl]phenyl] acetate Canonical SMILES: CC(=O)OC1=CC=CC=C1C(=O)NC2=NC=C(S2)[N+](=O)[O-] MeSH Synonyms: 1) 2-(acetolyloxy)-N-(5-nitro-2-thiazolyl)benzamide 2) Alinia 3) Colufase 4) Cryptaz 5) Daxon 6) Heliton 7) Ntz 8) Taenitaz History Nitazoxanide was originally discovered in the 1980s by Jean-François Rossignol at the Pasteur Institute. Initial studies demonstrated activity versus tapeworms. In vitro studies demonstrated much broader activity. Dr. Rossignol co-founded Romark Laboratories, with the goal of bringing nitazoxanide to market as an anti-parasitic drug. Initial studies in the USA were conducted in collaboration with Unimed Pharmaceuticals, Inc. (Marietta, GA) and focused on development of the drug for treatment of cryptosporidiosis in AIDS. Controlled trials began shortly after the advent of effective anti-retroviral therapies. The trials were abandoned due to poor enrollment and the FDA rejected an application based on uncontrolled studies. Subsequently, Romark launched a series of controlled trials. A placebo-controlled study of nitazoxanide in cryptosporidiosis demonstrated significant clinical improvement in adults and children with mild illness. Among malnourished children in Zambia with chronic cryptosporidiosis, a three-day course of therapy led to clinical and parasitologic improvement and improved survival. In Zambia and in a study conducted in Mexico, nitazoxanide was not successful in the treatment of cryptosporidiosis in advanced infection with human immunodeficiency virus at the doses used. However, it was effective in patients with higher CD4 counts. In treatment of giardiasis, nitazoxanide was superior to placebo and comparable to metronidazole. Nitazoxanide was successful in the treatment of metronidazole-resistant giardiasis. Studies have suggested efficacy in the treatment of cyclosporiasis, isosporiasis, and amebiasis. Recent studies have also found it to be effective against beef tapeworm (Taenia saginata). Pharmaceutical products Dosage forms Nitazoxanide is currently available in two oral dosage forms: a tablet (500 mg) and an oral suspension (100 mg per 5 ml when reconstituted). An extended release tablet (675 mg) has been used in clinical trials for chronic hepatitis C; however, this form is not currently marketed or available for prescription. Brand names Nitazoxanide is sold under the brand names Adonid, Alinia, Allpar, Annita, Celectan, Colufase, Daxon, Dexidex, Diatazox, Kidonax, Mitafar, Nanazoxid, Parazoxanide, Netazox, Niazid, Nitamax, Nitax, Nitaxide, Nitaz, Nizonide, Pacovanton, Paramix, Toza, and Zox. Research Nitazoxanide was in phase 3 clinical trials for the treatment of influenza due to its inhibitory effect on a broad range of influenza virus subtypes and efficacy against influenza viruses that are resistant to neuraminidase inhibitors like oseltamivir. Nitazoxanide is also being researched as a potential treatment for COVID-19, chronic hepatitis B, chronic hepatitis C, rotavirus and norovirus gastroenteritis. References Further reading Acetate esters Antiparasitic agents Antiviral drugs Nitrothiazoles Salicylamide ethers
Nitazoxanide
[ "Biology" ]
2,621
[ "Antiviral drugs", "Biocides", "Antiparasitic agents" ]
9,616,023
https://en.wikipedia.org/wiki/C%20to%20HDL
C to HDL tools convert C language or C-like computer code into a hardware description language (HDL) such as VHDL or Verilog. The converted code can then be synthesized and translated into a hardware device such as a field-programmable gate array. Compared to software, equivalent designs in hardware consume less power (yielding higher performance per watt) and execute faster with lower latency, more parallelism and higher throughput. However, system design and functional verification in a hardware description language can be tedious and time-consuming, so systems engineers often write critical modules in HDL and other modules in a high-level language and synthesize these into HDL through C to HDL or high-level synthesis tools. C to RTL is another name for this methodology. RTL refers to the register transfer level representation of a program necessary to implement it in logic. History Early development on C to HDL was done by Ian Page, Charles Sweeney and colleagues at Oxford University in the 1990s, who developed the Handel-C language. They commercialized their research by forming Embedded Solutions Limited (ESL) in 1999, which was renamed Celoxica in September 2000. In 2008, the embedded systems department of Celoxica was sold to Catalytic for $3 million, which later merged to become Agility Computing. In January 2009, Mentor Graphics acquired Agility's C synthesis assets. Celoxica continues to trade, concentrating on hardware acceleration to process transactions in the financial sector and other industries. Applications C to HDL techniques are most commonly applied to applications that have unacceptably high execution times on existing general-purpose supercomputer architectures. Examples include bioinformatics, computational fluid dynamics (CFD), financial processing, and oil and gas survey data analysis. Embedded applications requiring high performance or real-time data processing are also an area of use. System-on-chip (SoC) design may also take advantage of C to HDL techniques. C-to-VHDL compilers are very useful for large designs or for implementing code that might change in the future. Designing a large application entirely in HDL may be very difficult and time-consuming; the abstraction of a high level language for such a large application will often reduce total development time. Furthermore, an application coded in HDL will almost certainly be more difficult to modify than one coded in a higher level language. If the designer needs to add new functionality to the application, adding a few lines of C code will almost always be easier than remodeling the equivalent HDL code (a minimal sketch of the kind of C function such tools accept is shown after the tool list below). Flow to HDL tools have a similar aim, but with flow-based rather than C-based design. Example tools SmartHLS (originally LegUp), an ANSI C to Verilog tool developed by Microchip Technology, based on the LLVM compiler. CBG CtoV, a tool developed 1995-99 by DJ Greaves (University of Cambridge) that instantiated RAMs and interpreted various SystemC constructs and datatypes. C-to-Verilog tool (NISC) from University of California, Irvine Altium Designer 6.9 and 7.0 (a.k.a.
Summer 08) from Altium Nios II C-to-Hardware Acceleration Compiler from Altera Catapult C tool from Mentor Graphics Cynthesizer from Forte Design Systems SystemC from Celoxica (defunct) Handel-C from Celoxica (defunct) DIME-C from Nallatech Impulse C from Impulse Accelerated Technologies Instant-SoC from FPGA-Cores FpgaC which is an open source initiative SA-C programming language Cascade (C to RTL synthesizer) from CriticalBlue Mitrion-C from Mitrionics SPARK (a C-to-VHDL) from University of California, San Diego VLSI/VHDL CAD Group Index of Useful Tools from Case Western Reserve University MyHDL is a Python-subset compiler and simulator to VHDL and Verilog See also Comparison of EDA Software Electronic design automation (EDA) High-level synthesis Silicon compiler Hardware acceleration References External links A good article on Dr Dobbs Journal about ImpulseC. An overview of flows by Daresbury Labs. An Overview of Hardware Compilation and the Handel-C language. Xilinx's ESL initiative, some products listed and C to VHDL tools. Altium's C-to-Hardware Compiler overview. Altera's Nios II C2H Acceleration Compiler White Paper. Hardware description languages Program transformation Hardware acceleration
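As noted in the applications discussion above, the input to these tools is ordinary, statically bounded C. The sketch below is a generic example of the style of function a C-to-HDL or high-level synthesis flow typically accepts; it is not drawn from any particular tool's documentation, and real flows usually add tool-specific pragmas or annotations to steer loop unrolling, pipelining and interface generation.

```c
/*
 * Illustrative sketch of the kind of C function a C-to-HDL / high-level
 * synthesis tool consumes (see the applications discussion above). The
 * function and its structure are generic examples, not taken from any
 * specific tool's documentation.
 */
#include <stdint.h>

#define TAPS 8

/* Fixed-size dot product: a bounded loop over statically sized arrays.
 * A synthesis tool can unroll this into TAPS parallel multipliers feeding
 * an adder tree, or pipeline it as one multiply-accumulate per clock,
 * turning the accumulator into a register at the register-transfer level. */
int32_t fir_step(const int16_t coeff[TAPS], const int16_t sample[TAPS])
{
    int32_t acc = 0;
    for (int i = 0; i < TAPS; i++) {
        acc += (int32_t)coeff[i] * (int32_t)sample[i];
    }
    return acc;
}
```

Because the loop bound and array sizes are fixed at compile time, a synthesis tool can map this loop either onto parallel multipliers feeding an adder tree or onto a pipelined multiply-accumulate unit; constructs such as unbounded recursion or dynamic memory allocation generally cannot be turned into hardware this way.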
C to HDL
[ "Technology", "Engineering" ]
925
[ "Electronic engineering", "Hardware acceleration", "Hardware description languages", "Computer systems" ]
13,255
https://en.wikipedia.org/wiki/Hydrogen
Hydrogen is a chemical element; it has symbol H and atomic number 1. It is the lightest element and, at standard conditions, is a gas of diatomic molecules with the formula H2, sometimes called dihydrogen, hydrogen gas, molecular hydrogen, or simply hydrogen. It is colorless, odorless, non-toxic, and highly combustible. Constituting about 75% of all normal matter, hydrogen is the most abundant chemical element in the universe. Stars, including the Sun, mainly consist of hydrogen in a plasma state, while on Earth, hydrogen is found in water, organic compounds, as dihydrogen, and in other molecular forms. The most common isotope of hydrogen (protium, H) consists of one proton, one electron, and no neutrons. In the early universe, the formation of hydrogen's protons occurred in the first second after the Big Bang; neutral hydrogen atoms only formed about 370,000 years later during the recombination epoch as the universe expanded and plasma had cooled enough for electrons to remain bound to protons. Hydrogen gas was first produced artificially in the early 16th century by the reaction of acids with metals. Henry Cavendish, in 1766–81, identified hydrogen gas as a distinct substance and discovered its property of producing water when burned; hence its name means "water-former" in Greek. Understanding the colors of light absorbed and emitted by hydrogen was a crucial part of developing quantum mechanics. Hydrogen, typically nonmetallic except under extreme pressure, readily forms covalent bonds with most nonmetals, contributing to the formation of compounds like water and various organic substances. Its role is crucial in acid-base reactions, which mainly involve proton exchange among soluble molecules. In ionic compounds, hydrogen can take the form of either a negatively charged anion, where it is known as hydride, or as a positively charged cation, H+, called a proton. Although tightly bonded to water molecules, protons strongly affect the behavior of aqueous solutions, as reflected in the importance of pH. Hydride, on the other hand, is rarely observed because it tends to deprotonate solvents, yielding H2. Industrial hydrogen production occurs through steam reforming of natural gas. The more familiar electrolysis of water is uncommon because it is energy-intensive, i.e. expensive. Its main industrial uses include fossil fuel processing, such as hydrocracking and hydrodesulfurization. Ammonia production also is a major consumer of hydrogen. Fuel cells for electricity generation from hydrogen are rapidly emerging. Properties Combustion Hydrogen gas is highly flammable: 2 H2(g) + O2(g) → 2 H2O(l) + 572 kJ (572 kJ/2 mol = 286 kJ/mol = 141.865 MJ/kg) Enthalpy of combustion: −286 kJ/mol. Hydrogen gas forms explosive mixtures with air in concentrations from 4–74% and with chlorine at 5–95%. The hydrogen autoignition temperature, the temperature of spontaneous ignition in air, is about 500 °C. Flame Pure hydrogen-oxygen flames emit ultraviolet light and with high oxygen mix are nearly invisible to the naked eye, as illustrated by the faint plume of the Space Shuttle Main Engine, compared to the highly visible plume of a Space Shuttle Solid Rocket Booster, which uses an ammonium perchlorate composite. The detection of a burning hydrogen leak may require a flame detector; such leaks can be very dangerous. Hydrogen flames in other conditions are blue, resembling blue natural gas flames. The destruction of the Hindenburg airship was a notorious example of hydrogen combustion and the cause is still debated.
The visible flames in the photographs were the result of carbon compounds in the airship skin burning. Electron energy levels The ground state energy level of the electron in a hydrogen atom is −13.6 eV, equivalent to an ultraviolet photon of roughly 91 nm wavelength. The energy levels of hydrogen are referred to by consecutive quantum numbers, with being the ground state. The hydrogen spectral series corresponds to emission of light due to transitions from higher to lower energy levels. The energy levels of hydrogen can be calculated fairly accurately using the Bohr model of the atom, in which the electron "orbits" the proton, like how Earth orbits the Sun. However, the electron and proton are held together by electrostatic attraction, while planets and celestial objects are held by gravity. Due to the discretization of angular momentum postulated in early quantum mechanics by Bohr, the electron in the Bohr model can only occupy certain allowed distances from the proton, and therefore only certain allowed energies. An accurate description of the hydrogen atom comes from a quantum analysis that uses the Schrödinger equation, Dirac equation or Feynman path integral formulation to calculate the probability density of the electron around the proton. The most complex formulas include the small effects of special relativity and vacuum polarization. In the quantum mechanical treatment, the electron in a ground state hydrogen atom has no angular momentum—illustrating how the "planetary orbit" differs from electron motion. Spin isomers Molecular exists as two nuclear isomers that differ in the spin states of their nuclei. In the orthohydrogen form, the spins of the two nuclei are parallel, forming a spin triplet state having a total molecular spin ; in the parahydrogen form the spins are antiparallel and form a spin singlet state having spin . The equilibrium ratio of ortho- to para-hydrogen depends on temperature. At room temperature or warmer, equilibrium hydrogen gas contains about 25% of the para form and 75% of the ortho form. The ortho form is an excited state, having higher energy than the para form by 1.455 kJ/mol, and it converts to the para form over the course of several minutes when cooled to low temperature. The thermal properties of these isomers differ because each has distinct rotational quantum states. The ortho-to-para ratio in is an important consideration in the liquefaction and storage of liquid hydrogen: the conversion from ortho to para is exothermic and produces sufficient heat to evaporate most of the liquid if not converted first to parahydrogen during the cooling process. Catalysts for the ortho-para interconversion, such as ferric oxide and activated carbon compounds, are used during hydrogen cooling to avoid this loss of liquid. Phases Liquid hydrogen can exist at temperatures below hydrogen's critical point of 33 K. However, for it to be in a fully liquid state at atmospheric pressure, H2 needs to be cooled to . Hydrogen was liquefied by James Dewar in 1898 by using regenerative cooling and his invention, the vacuum flask. Liquid hydrogen is a common rocket propellant, and it can also be used as the fuel for an internal combustion engine or fuel cell. Solid hydrogen can be made at standard pressure, by decreasing the temperature below hydrogen's melting point of . It was collected for the first time by James Dewar in 1899. Multiple distinct solid phases exist, known as Phase I through Phase V, each exhibiting a characteristic molecular arrangement. 
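Referring back to the electron energy levels described earlier in this section, the simple Bohr/Rydberg formula can be evaluated numerically. The C sketch below uses the rounded constants 13.6057 eV for the hydrogen ionization energy and 1239.84 eV·nm for hc; these are standard figures assumed for the sketch rather than values taken from this article.

```c
/*
 * Numerical sketch of the hydrogen energy levels discussed in the
 * "Electron energy levels" paragraph above: E_n = -13.6057 eV / n^2
 * (Bohr/Rydberg value), and the photon wavelengths for transitions
 * down to the ground state (the Lyman series), using hc = 1239.84 eV*nm.
 */
#include <stdio.h>

int main(void)
{
    const double rydberg_ev = 13.6057;   /* hydrogen ionization energy, eV */
    const double hc_ev_nm   = 1239.84;   /* Planck constant times c, eV*nm */

    for (int n = 1; n <= 5; n++) {
        double e_n = -rydberg_ev / (n * n);   /* level energy in eV */
        printf("n = %d   E_n = %7.3f eV", n, e_n);
        if (n > 1) {
            double de = e_n - (-rydberg_ev);  /* energy released falling to n = 1 */
            printf("   Lyman line to n = 1: %6.1f nm", hc_ev_nm / de);
        }
        printf("\n");
    }
    /* Ionizing the ground state takes 13.6 eV, i.e. a photon of about
     * 1239.84 / 13.6057 = 91 nm, matching the value quoted above. */
    printf("Ionization limit: %.1f nm\n", hc_ev_nm / rydberg_ev);
    return 0;
}
```

The transition from n = 2 to n = 1 comes out near 121.5 nm (Lyman-alpha), and the ionization limit near 91 nm, matching the ultraviolet wavelength quoted above for the ground-state binding energy.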
Liquid and solid phases can exist in combination at the triple point, a substance known as slush hydrogen. Metallic hydrogen, a phase obtained at extremely high pressures (in excess of ), is an electrical conductor. It is believed to exist deep within giant planets like Jupiter. When ionized, hydrogen becomes a plasma. This is the form in which hydrogen exists within stars. Isotopes Hydrogen has three naturally occurring isotopes, denoted , and . Other, highly unstable nuclei ( to ) have been synthesized in the laboratory but not observed in nature. is the most common hydrogen isotope, with an abundance of >99.98%. Because the nucleus of this isotope consists of only a single proton, it is given the descriptive but rarely used formal name protium. It is the only stable isotope with no neutrons; see diproton for a discussion of why others do not exist. , the other stable hydrogen isotope, is known as deuterium and contains one proton and one neutron in the nucleus. Nearly all deuterium in the universe is thought to have been produced at the time of the Big Bang, and has endured since then. Deuterium is not radioactive, and is not a significant toxicity hazard. Water enriched in molecules that include deuterium instead of normal hydrogen is called heavy water. Deuterium and its compounds are used as a non-radioactive label in chemical experiments and in solvents for -NMR spectroscopy. Heavy water is used as a neutron moderator and coolant for nuclear reactors. Deuterium is also a potential fuel for commercial nuclear fusion. is known as tritium and contains one proton and two neutrons in its nucleus. It is radioactive, decaying into helium-3 through beta decay with a half-life of 12.32 years. It is radioactive enough to be used in luminous paint to enhance the visibility of data displays, such as for painting the hands and dial-markers of watches. The watch glass prevents the small amount of radiation from escaping the case. Small amounts of tritium are produced naturally by cosmic rays striking atmospheric gases; tritium has also been released in nuclear weapons tests. It is used in nuclear fusion, as a tracer in isotope geochemistry, and in specialized self-powered lighting devices. Tritium has also been used in chemical and biological labeling experiments as a radiolabel. Unique among the elements, distinct names are assigned to its isotopes in common use. During the early study of radioactivity, heavy radioisotopes were given their own names, but these are mostly no longer used. The symbols D and T (instead of and ) are sometimes used for deuterium and tritium, but the symbol P was already used for phosphorus and thus was not available for protium. In its nomenclatural guidelines, the International Union of Pure and Applied Chemistry (IUPAC) allows any of D, T, , and to be used, though and are preferred. The exotic atom muonium (symbol Mu), composed of an antimuon and an electron, can also be considered a light radioisotope of hydrogen. Because muons decay with lifetime , muonium is too unstable for observable chemistry. Nevertheless, muonium compounds are important test cases for quantum simulation, due to the mass difference between the antimuon and the proton, and IUPAC nomenclature incorporates such hypothetical compounds as muonium chloride (MuCl) and sodium muonide (NaMu), analogous to hydrogen chloride and sodium hydride respectively. Antihydrogen () is the antimatter counterpart to hydrogen. It consists of an antiproton with a positron. 
Antihydrogen is the only type of antimatter atom to have been produced . Thermal and physical properties Table of thermal and physical properties of hydrogen (H) at atmospheric pressure: History 18th century In 1671, Irish scientist Robert Boyle discovered and described the reaction between iron filings and dilute acids, which results in the production of hydrogen gas. Boyle did not note that the gas was inflammable, but hydrogen would play a key role in overturning the phlogiston theory of combustion. In 1766, Henry Cavendish was the first to recognize hydrogen gas as a discrete substance, by naming the gas from a metal-acid reaction "inflammable air". He speculated that "inflammable air" was in fact identical to the hypothetical substance "phlogiston" and further finding in 1781 that the gas produces water when burned. He is usually given credit for the discovery of hydrogen as an element. In 1783, Antoine Lavoisier identified the element that came to be known as hydrogen when he and Laplace reproduced Cavendish's finding that water is produced when hydrogen is burned. Lavoisier produced hydrogen for his experiments on mass conservation by treating metallic iron with a steam of H2 through an incandescent iron tube heated in a fire. Anaerobic oxidation of iron by the protons of water at high temperature can be schematically represented by the set of following reactions: 1) 2) 3) Many metals react similarly with water leading to the production of hydrogen. In some situations, this H2-producing process is problematic as is the case of zirconium cladding on nuclear fuel rods. 19th century By 1806 hydrogen was used to fill balloons. François Isaac de Rivaz built the first de Rivaz engine, an internal combustion engine powered by a mixture of hydrogen and oxygen in 1806. Edward Daniel Clarke invented the hydrogen gas blowpipe in 1819. The Döbereiner's lamp and limelight were invented in 1823.Hydrogen was liquefied for the first time by James Dewar in 1898 by using regenerative cooling and his invention, the vacuum flask. He produced solid hydrogen the next year. One of the first quantum effects to be explicitly noticed (but not understood at the time) was James Clerk Maxwell's observation that the specific heat capacity of unaccountably departs from that of a diatomic gas below room temperature and begins to increasingly resemble that of a monatomic gas at cryogenic temperatures. According to quantum theory, this behavior arises from the spacing of the (quantized) rotational energy levels, which are particularly wide-spaced in because of its low mass. These widely spaced levels inhibit equal partition of heat energy into rotational motion in hydrogen at low temperatures. Diatomic gases composed of heavier atoms do not have such widely spaced levels and do not exhibit the same effect. 20th century The existence of the hydride anion was suggested by Gilbert N. Lewis in 1916 for group 1 and 2 salt-like compounds. In 1920, Moers electrolyzed molten lithium hydride (LiH), producing a stoichiometric quantity of hydrogen at the anode. Because of its simple atomic structure, consisting only of a proton and an electron, the hydrogen atom, together with the spectrum of light produced from it or absorbed by it, has been central to the development of the theory of atomic structure. Hydrogen's unique position as the only neutral atom for which the Schrödinger equation can be directly solved, has significantly contributed to the understanding of quantum mechanics through the exploration of its energetics. 
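The widely spaced rotational levels mentioned in the preceding paragraph also fix the equilibrium ortho-to-para ratio quoted in the spin isomers section. The C sketch below estimates that ratio from the rotational partition function; the rotational temperature of roughly 87.6 K is an approximate literature value, the sum is truncated at J = 30, and vibrational and other corrections are ignored, so this is an illustrative estimate rather than a precise calculation.

```c
/*
 * Sketch of how the widely spaced rotational levels of H2 fix the
 * equilibrium ortho:para ratio. Odd rotational levels J (statistical
 * weight 3 from the nuclear spin triplet) belong to orthohydrogen, even J
 * (weight 1) to parahydrogen. The rotational temperature ~87.6 K is an
 * approximate literature value; the sum is truncated at J = 30.
 */
#include <stdio.h>
#include <math.h>

static double ortho_fraction(double temperature)
{
    const double theta_rot = 87.6;   /* approximate rotational temperature of H2, K */
    double z_para = 0.0, z_ortho = 0.0;

    for (int J = 0; J <= 30; J++) {
        double term = (2 * J + 1) * exp(-theta_rot * J * (J + 1) / temperature);
        if (J % 2 == 0)
            z_para += term;          /* even J: para (singlet, weight 1) */
        else
            z_ortho += 3.0 * term;   /* odd J: ortho (triplet, weight 3) */
    }
    return z_ortho / (z_ortho + z_para);
}

int main(void)
{
    const double temps[] = { 20.0, 77.0, 300.0 };
    for (int i = 0; i < 3; i++)
        printf("T = %5.1f K   equilibrium ortho fraction = %.3f\n",
               temps[i], ortho_fraction(temps[i]));
    /* Expected trend: ~0.00 at 20 K (nearly pure para), rising toward
     * 0.75 (the 3:1 ortho:para ratio quoted earlier) near room temperature. */
    return 0;
}
```

The output rises from an ortho fraction near zero at 20 K to roughly one half at 77 K and about 0.75 near room temperature, consistent with the 75% ortho figure given earlier and with the need to catalyse ortho-to-para conversion when liquefying hydrogen.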
Furthermore, study of the corresponding simplicity of the hydrogen molecule and the corresponding cation brought understanding of the nature of the chemical bond, which followed shortly after the quantum mechanical treatment of the hydrogen atom had been developed in the mid-1920s. Hydrogen-lifted airship The first hydrogen-filled balloon was invented by Jacques Charles in 1783. Hydrogen provided the lift for the first reliable form of air-travel following the 1852 invention of the first hydrogen-lifted airship by Henri Giffard. German count Ferdinand von Zeppelin promoted the idea of rigid airships lifted by hydrogen that later were called Zeppelins; the first of which had its maiden flight in 1900. Regularly scheduled flights started in 1910 and by the outbreak of World War I in August 1914, they had carried 35,000 passengers without a serious incident. Hydrogen-lifted airships were used as observation platforms and bombers during the war. The first non-stop transatlantic crossing was made by the British airship R34 in 1919. Regular passenger service resumed in the 1920s and the discovery of helium reserves in the United States promised increased safety, but the U.S. government refused to sell the gas for this purpose. Therefore, was used in the Hindenburg airship, which was destroyed in a midair fire over New Jersey on 6 May 1937. The incident was broadcast live on radio and filmed. Ignition of leaking hydrogen is widely assumed to be the cause, but later investigations pointed to the ignition of the aluminized fabric coating by static electricity. But the damage to hydrogen's reputation as a lifting gas was already done and commercial hydrogen airship travel ceased. Hydrogen is still used, in preference to non-flammable but more expensive helium, as a lifting gas for weather balloons. Deuterium and tritium Deuterium was discovered in December 1931 by Harold Urey, and tritium was prepared in 1934 by Ernest Rutherford, Mark Oliphant, and Paul Harteck. Heavy water, which consists of deuterium in the place of regular hydrogen, was discovered by Urey's group in 1932. Hydrogen-cooled turbogenerator The first hydrogen-cooled turbogenerator went into service using gaseous hydrogen as a coolant in the rotor and the stator in 1937 at Dayton, Ohio, owned by the Dayton Power & Light Co. This was justified by the high thermal conductivity and very low viscosity of hydrogen gas, thus lower drag than air. This is the most common coolant used for generators 60 MW and larger; smaller generators are usually air-cooled. Nickel–hydrogen battery The nickel–hydrogen battery was used for the first time in 1977 aboard the U.S. Navy's Navigation technology satellite-2 (NTS-2). The International Space Station, Mars Odyssey and the Mars Global Surveyor are equipped with nickel-hydrogen batteries. In the dark part of its orbit, the Hubble Space Telescope is also powered by nickel-hydrogen batteries, which were finally replaced in May 2009, more than 19 years after launch and 13 years beyond their design life. Chemistry Laboratory syntheses is produced in labs, often as a by-product of other reactions. Many metals react with water to produce , but the rate of hydrogen evolution depends on the metal, the pH, and the presence of alloying agents. Most often, hydrogen evolution is induced by acids. The alkali and alkaline earth metals, aluminium, zinc, manganese, and iron react readily with aqueous acids. 
This reaction is the basis of the Kipp's apparatus, which once was used as a laboratory gas source: Zn + 2 HCl → ZnCl2 + H2 In the absence of acid, the evolution of H2 is slower. Because iron is a widely used structural material, its anaerobic corrosion is of technological significance: Fe + 2 H2O → Fe(OH)2 + H2 Many metals, such as aluminium, are slow to react with water because they form passivating coatings of oxides. An alloy of aluminium and gallium, however, does react with water. At high pH, aluminium can produce H2: 2 Al + 6 H2O + 2 OH− → 2 [Al(OH)4]− + 3 H2 Reactions of H2 H2 is relatively unreactive. The thermodynamic basis of this low reactivity is the very strong H–H bond, with a bond dissociation energy of 435.7 kJ/mol. It does form coordination complexes called dihydrogen complexes. These species provide insights into the early steps in the interactions of hydrogen with metal catalysts. According to neutron diffraction, the metal and two H atoms form a triangle in these complexes. The H-H bond remains intact but is elongated. They are acidic. Although exotic on Earth, the H3+ ion is common in the universe. It is a triangular species, like the aforementioned dihydrogen complexes. It is known as protonated molecular hydrogen or the trihydrogen cation. Hydrogen reacts directly with fluorine, chlorine, and bromine to give HF, HCl, and HBr, respectively. The conversion involves a radical chain mechanism. With heating, H2 reacts efficiently with the alkali and alkaline earth metals to give the saline hydrides of the formula MH and MH2, respectively. One of the striking properties of H2 is its inertness toward unsaturated organic compounds, such as alkenes and alkynes. These species only react with H2 in the presence of catalysts. Especially active catalysts are the platinum metals (platinum, rhodium, palladium, etc.). A major driver for the mining of these rare and expensive elements is their use as catalysts. Hydrogen-containing compounds Most known compounds contain hydrogen, not as H2, but as covalently bonded H atoms. This interaction is the basis of organic chemistry and biochemistry. Hydrogen forms many compounds with carbon, called the hydrocarbons; these and related carbon compounds are known as organic compounds, which in nature almost always also contain "heteroatoms" such as nitrogen, oxygen, and sulfur. The study of their properties is known as organic chemistry and their study in the context of living organisms is called biochemistry. By some definitions, "organic" compounds are only required to contain carbon. However, most of them also contain hydrogen, and because it is the carbon-hydrogen bond that gives this class of compounds most of its particular chemical characteristics, carbon-hydrogen bonds are required in some definitions of the word "organic" in chemistry. Millions of hydrocarbons are known, and they are usually formed by complicated pathways that seldom involve elemental hydrogen. Hydrides Hydrogen forms compounds with less electronegative elements, such as metals and main group elements. In these compounds, hydrogen takes on a partial negative charge. The term "hydride" suggests that the H atom has acquired a negative or anionic character, denoted H−. Usually hydride refers to hydrogen in a compound with a more electropositive element. For hydrides other than group 1 and 2 metals, the term can be misleading, considering the low electronegativity of hydrogen. A well-known hydride is lithium aluminium hydride, whose AlH4− anion carries hydridic centers firmly attached to the Al(III). Perhaps the most extensive series of hydrides are the boranes, compounds consisting only of boron and hydrogen.
Hydrides can bond to these electropositive elements not only as a terminal ligand but also as bridging ligands. In diborane (B2H6), four H's are terminal and two bridge between the two B atoms. Protons and acids When bonded to a more electronegative element, particularly fluorine, oxygen, or nitrogen, hydrogen can participate in a form of medium-strength noncovalent bonding with another electronegative element with a lone pair, a phenomenon called hydrogen bonding that is critical to the stability of many biological molecules. The hydrogen cation, H+, can also be obtained by oxidation of H2. Under the Brønsted–Lowry acid–base theory, acids are proton donors, while bases are proton acceptors. A bare proton, H+, essentially cannot exist in anything other than a vacuum; otherwise it attaches to other atoms, ions, or molecules. Even species as inert as methane can be protonated. The term 'proton' is used loosely and metaphorically to refer to solvated H+, without any implication that any single protons exist freely as a species. To avoid the implication of the naked proton in solution, acidic aqueous solutions are sometimes considered to contain the "hydronium ion" (H3O+) or, still more accurately, H9O4+. Other oxonium ions are found when water is in acidic solution with other solvents. Occurrence Cosmic Hydrogen, as atomic H, is the most abundant chemical element in the universe, making up 75% of normal matter by mass and >90% by number of atoms. In astrophysics, neutral hydrogen in the interstellar medium is called H I and ionized hydrogen is called H II. Radiation from stars ionizes H I to H II, creating spheres of ionized H II around stars. In the chronology of the universe, neutral hydrogen dominated until the birth of stars during the era of reionization, which led to bubbles of ionized hydrogen that grew and merged over roughly 500 million years. Neutral hydrogen atoms are the source of the 21-cm hydrogen line at 1420 MHz that is detected in order to probe primordial hydrogen. The large amount of neutral hydrogen found in the damped Lyman-alpha systems is thought to dominate the cosmological baryonic density of the universe up to a redshift of z = 4. Hydrogen is found in great abundance in stars and gas giant planets. Molecular clouds of H2 are associated with star formation. Hydrogen plays a vital role in powering stars through the proton-proton reaction in lower-mass stars, and through the CNO cycle of nuclear fusion in the case of stars more massive than the Sun. Hydrogen plasma states have properties quite distinct from those of molecular or atomic hydrogen. As a plasma, hydrogen's electron and proton are not bound together, resulting in very high electrical conductivity and high emissivity (producing the light from the Sun and other stars). The charged particles are highly influenced by magnetic and electric fields. For example, in the solar wind they interact with the Earth's magnetosphere giving rise to Birkeland currents and the aurora. A molecular form called protonated molecular hydrogen (H3+) is found in the interstellar medium, where it is generated by ionization of molecular hydrogen from cosmic rays. This ion has also been observed in the upper atmosphere of Jupiter. The ion is long-lived in outer space due to the low temperature and density. H3+ is one of the most abundant ions in the universe, and it plays a notable role in the chemistry of the interstellar medium. Neutral triatomic hydrogen can exist only in an excited form and is unstable. By contrast, the positive hydrogen molecular ion (H2+) is a rare molecule in the universe.
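A small arithmetic sketch shows how the two figures quoted above (about 75% of normal matter by mass, more than 90% by number of atoms) fit together. The 24%/1% split between helium and heavier elements and the 16 u mean mass assumed for heavier atoms are illustrative assumptions, not figures taken from the article.

```python
# Sketch: converting cosmic mass fractions into number fractions of atoms.
ATOMIC_MASS_U = {"H": 1.008, "He": 4.003, "heavier": 16.0}   # 16 u ~ oxygen-like average (assumed)
MASS_FRACTION = {"H": 0.75, "He": 0.24, "heavier": 0.01}     # assumed split of the remaining 25%

moles = {el: MASS_FRACTION[el] / ATOMIC_MASS_U[el] for el in MASS_FRACTION}
total = sum(moles.values())
for el, n in moles.items():
    print(f"{el:8s} number fraction = {n / total:.3f}")
# Hydrogen comes out near 0.92, i.e. more than 90% of all atoms.
```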
Terrestrial Under ordinary conditions on Earth, elemental hydrogen exists as the diatomic gas, H2. Hydrogen gas is very rare in Earth's atmosphere (around 0.53 ppm on a molar basis) because of its light weight, which enables it to escape the atmosphere more rapidly than heavier gases. However, hydrogen is the third most abundant element on the Earth's surface, mostly in the form of chemical compounds such as hydrocarbons and water. Despite its low concentration in our atmosphere, terrestrial hydrogen is sufficiently abundant to support the metabolism of several bacteria. Deposits of hydrogen gas have been discovered in several countries including Mali, France and Australia. Production and storage Industrial routes Many methods exist for producing H2, but three dominate commercially: steam reforming often coupled to water-gas shift, partial oxidation of hydrocarbons, and water electrolysis. Steam reforming Hydrogen is mainly produced by steam methane reforming (SMR), the reaction of water and methane. Thus, at high temperature (1000–1400 K, 700–1100 °C or 1300–2000 °F), steam (water vapor) reacts with methane to yield carbon monoxide and H2: CH4 + H2O → CO + 3 H2 Steam reforming is also used for the industrial preparation of ammonia. This reaction is favored at low pressures but is nonetheless conducted at high pressures (2.0 MPa, 20 atm or 600 inHg), because high-pressure H2 is the most marketable product, and pressure swing adsorption (PSA) purification systems work better at higher pressures. The product mixture is known as "synthesis gas" because it is often used directly for the production of methanol and many other compounds. Hydrocarbons other than methane can be used to produce synthesis gas with varying product ratios. One of the many complications to this highly optimized technology is the formation of coke or carbon: CH4 → C + 2 H2 Therefore, steam reforming typically employs an excess of steam. Additional hydrogen can be recovered from the steam by using carbon monoxide through the water gas shift reaction (WGS). This process requires an iron oxide catalyst: CO + H2O → CO2 + H2 Hydrogen is sometimes produced and consumed in the same industrial process, without being separated. In the Haber process for ammonia production, hydrogen is generated from natural gas. Partial oxidation of hydrocarbons Other methods for CO and H2 production include partial oxidation of hydrocarbons: 2 CH4 + O2 → 2 CO + 4 H2 Although less important commercially, coal can serve as a prelude to the shift reaction above: C + H2O → CO + H2 Olefin production units may produce substantial quantities of byproduct hydrogen, particularly from cracking light feedstocks like ethane or propane. Water electrolysis Electrolysis of water is a conceptually simple method of producing hydrogen. Commercial electrolyzers use nickel-based catalysts in strongly alkaline solution. Platinum is a better catalyst but is expensive. Electrolysis of brine to yield chlorine also produces high-purity hydrogen as a co-product, which is used for a variety of transformations such as hydrogenations. The electrolysis process is more expensive than producing hydrogen from methane without CCS, and the efficiency of energy conversion is inherently low. Innovation in hydrogen electrolyzers could make large-scale production of hydrogen from electricity more cost-competitive. Hydrogen produced in this manner could play a significant role in decarbonizing energy systems where there are challenges and limitations to replacing fossil fuels with direct use of electricity.
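For a rough sense of scale, the following sketch applies simple stoichiometry to the reforming and shift reactions given earlier in this section (overall CH4 + 2 H2O → CO2 + 4 H2). It assumes complete conversion, so real plant yields are somewhat lower.

```python
# Sketch: ideal mass balance for steam reforming plus water-gas shift.
# CH4 + H2O -> CO + 3 H2, then CO + H2O -> CO2 + H2 (overall CH4 + 2 H2O -> CO2 + 4 H2).
M_CH4, M_H2, M_CO2 = 16.043, 2.016, 44.009   # molar masses, g/mol

h2_per_ch4 = 4 * M_H2 / M_CH4     # kg of H2 per kg of CH4 at full conversion
co2_per_ch4 = M_CO2 / M_CH4       # kg of CO2 per kg of CH4
print(f"H2 yield : {h2_per_ch4:.2f} kg H2 per kg CH4")
print(f"CO2 made : {co2_per_ch4:.2f} kg CO2 per kg CH4 "
      f"(~{co2_per_ch4 / h2_per_ch4:.1f} kg CO2 per kg H2)")
```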
Methane pyrolysis Hydrogen can be produced by pyrolysis of natural gas (methane). This route has a lower carbon footprint than commercial hydrogen production processes. Developing a commercial methane pyrolysis process could expedite the expanded use of hydrogen in industrial and transportation applications. Methane pyrolysis is accomplished by passing methane through a molten metal catalyst containing dissolved nickel. Methane is converted to hydrogen gas and solid carbon: CH4(g) → C(s) + 2 H2(g) (ΔH° = 74 kJ/mol) The carbon may be sold as a manufacturing feedstock or fuel, or landfilled. Further research continues in several laboratories, including at Karlsruhe Liquid-metal Laboratory and at University of California – Santa Barbara. BASF built a methane pyrolysis pilot plant. Thermochemical Water splitting is the process by which water is decomposed into its components. Relevant to the biological scenario is this simple equation: 2 H2O → O2 + 4 H+ + 4 e− The reaction occurs in the light reactions in all photosynthetic organisms. A few organisms, including the alga Chlamydomonas reinhardtii and cyanobacteria, have evolved a second step in the dark reactions in which protons and electrons are reduced to form H2 gas by specialized hydrogenases in the chloroplast. Efforts have been undertaken to genetically modify cyanobacterial hydrogenases to more efficiently generate H2 gas even in the presence of oxygen. Efforts have also been undertaken with genetically modified alga in a bioreactor. Relevant to the thermal water-splitting scenario is this simple equation: 2 H2O → 2 H2 + O2 More than 200 thermochemical cycles can be used for water splitting. Many of these cycles such as the iron oxide cycle, cerium(IV) oxide–cerium(III) oxide cycle, zinc–zinc oxide cycle, sulfur-iodine cycle, copper-chlorine cycle and hybrid sulfur cycle have been evaluated for their commercial potential to produce hydrogen and oxygen from water and heat without using electricity. A number of labs (including in France, Germany, Greece, Japan, and the United States) are developing thermochemical methods to produce hydrogen from solar energy and water. Natural routes Biohydrogen is produced by enzymes called hydrogenases. This process allows the host organism to use fermentation as a source of energy. These same enzymes also can oxidize H2, such that the host organisms can subsist by reducing oxidized substrates using electrons extracted from H2. The hydrogenase enzymes feature iron or nickel-iron centers at their active sites. The natural cycle of hydrogen production and consumption by organisms is called the hydrogen cycle. Some bacteria such as Mycobacterium smegmatis can use the small amount of hydrogen in the atmosphere as a source of energy when other sources are lacking. Their hydrogenases have small channels that exclude oxygen and so permit the reaction to occur even though the hydrogen concentration is very low and the oxygen concentration is that of normal air. Confirming the existence of hydrogenases in the human gut, H2 occurs in human breath. The concentration in the breath of fasting people at rest is typically less than 5 parts per million (ppm) but can be 50 ppm when people with intestinal disorders consume molecules they cannot absorb during diagnostic hydrogen breath tests. Serpentinization Serpentinization is a geological mechanism that produces highly reducing conditions. Under these conditions, water is capable of oxidizing ferrous (Fe2+) ions in fayalite.
The process is of interest because it generates hydrogen gas: 3 Fe2SiO4 + 2 H2O → 2 Fe3O4 + 3 SiO2 + 2 H2 Closely related to this geological process is the Schikorr reaction: 3 Fe(OH)2 → Fe3O4 + 2 H2O + H2 This process also is relevant to the corrosion of iron and steel in oxygen-free groundwater and in reducing soils below the water table. Storage Hydrogen produced when there is a surplus of variable renewable electricity could in principle be stored and later used to generate heat or to re-generate electricity. The hydrogen created through electrolysis using renewable energy is commonly referred to as "green hydrogen". It can be further transformed into synthetic fuels such as ammonia and methanol. Disadvantages of hydrogen as an energy carrier include high costs of storage and distribution due to hydrogen's explosivity, its large volume compared to other fuels, and its tendency to make pipes brittle. If H2 is to be used as an energy source, its storage is important. It dissolves only poorly in solvents. For example, at room temperature and 0.1 MPa, ca. 0.05 moles of H2 dissolve in one kilogram of diethyl ether. The H2 can be stored in compressed form, although compressing costs energy. Liquefaction is impractical given its low critical temperature. In contrast, ammonia and many hydrocarbons can be liquefied at room temperature under pressure. For these reasons, hydrogen carriers - materials that reversibly bind H2 - have attracted much attention. The key question is then the weight percent of H2-equivalents within the carrier material. For example, hydrogen can be reversibly absorbed into many rare earth and transition metals and is soluble in both nanocrystalline and amorphous metals. Hydrogen solubility in metals is influenced by local distortions or impurities in the crystal lattice. These properties may be useful when hydrogen is purified by passage through hot palladium disks, but the gas's high solubility is also a metallurgical problem, contributing to the embrittlement of many metals, complicating the design of pipelines and storage tanks. The most problematic aspect of metal hydrides for storage is their modest H2 content, often on the order of 1%. For this reason, there is interest in storage of H2 in compounds of low molecular weight. For example, ammonia borane (NH3BH3) contains 19.8 weight percent of H2. The problem with this material is that after release of H2, the resulting boron nitride does not re-add H2, i.e. ammonia borane is an irreversible hydrogen carrier. More attractive, somewhat ironically, are organic carriers such as tetrahydroquinoline, which reversibly release some H2 when heated in the presence of a catalyst: tetrahydroquinoline ⇌ quinoline + 2 H2 Applications Petrochemical industry Large quantities of H2 are used in the "upgrading" of fossil fuels. Key consumers of H2 include hydrodesulfurization and hydrocracking. Many of these reactions can be classified as hydrogenolysis, i.e., the cleavage of bonds by hydrogen. Illustrative is the separation of sulfur from liquid fossil fuels: R2S + 2 H2 → H2S + 2 RH Hydrogenation Hydrogenation, the addition of H2 to various substrates, is done on a large scale. Hydrogenation of N2 to produce ammonia by the Haber process consumes a few percent of the energy budget of the entire industry. The resulting ammonia is used to supply most of the protein consumed by humans. Hydrogenation is used to convert unsaturated fats and oils to saturated fats and oils. The major application is the production of margarine. Methanol is produced by hydrogenation of carbon dioxide. It is similarly the source of hydrogen in the manufacture of hydrochloric acid.
H2 is also used as a reducing agent for the conversion of some ores to the metals. Coolant Hydrogen is commonly used in power stations as a coolant in generators due to a number of favorable properties that are a direct result of its light diatomic molecules. These include low density, low viscosity, and the highest specific heat and thermal conductivity of all gases. Fuel Hydrogen (H2) is widely discussed as a carrier of energy with the potential to help decarbonize economies and mitigate greenhouse gas emissions. This scenario requires the efficient production and storage of hydrogen. Hydrogen fuel can produce the intense heat required for industrial production of steel, cement, glass, and chemicals, thus contributing to the decarbonisation of industry alongside other technologies, such as electric arc furnaces for steelmaking. However, it is likely to play a larger role in providing industrial feedstock for cleaner production of ammonia and organic chemicals. For example, in steelmaking, hydrogen could function as a clean energy carrier and also as a low-carbon reducing agent, replacing coal-derived coke (carbon): Fe2O3 + 3 H2 → 2 Fe + 3 H2O vs 2 Fe2O3 + 3 C → 4 Fe + 3 CO2 Hydrogen used to decarbonise transportation is likely to find its largest applications in shipping, aviation and, to a lesser extent, heavy goods vehicles, through the use of hydrogen-derived synthetic fuels such as ammonia and methanol and fuel cell technology. For light-duty vehicles including cars, hydrogen is far behind other alternative fuel vehicles, especially compared with the rate of adoption of battery electric vehicles, and may not play a significant role in future. Liquid hydrogen and liquid oxygen together serve as cryogenic propellants in liquid-propellant rockets, as in the Space Shuttle main engines. NASA has investigated the use of rocket propellant made from atomic hydrogen, boron or carbon that is frozen into solid molecular hydrogen particles suspended in liquid helium. Upon warming, the mixture vaporizes to allow the atomic species to recombine, heating the mixture to high temperature. Semiconductor industry Hydrogen is employed to saturate broken ("dangling") bonds of amorphous silicon and amorphous carbon, which helps stabilize material properties. It is also a potential electron donor in various oxide materials, including ZnO, CdO, MgO, and a number of other oxides. Niche and evolving uses Shielding gas: Hydrogen is used as a shielding gas in welding methods such as atomic hydrogen welding. Cryogenic research: Liquid hydrogen is used in cryogenic research, including superconductivity studies. Buoyant lifting: Because H2 is only 7% the density of air, it was once widely used as a lifting gas in balloons and airships. Leak detection: Pure or mixed with nitrogen (sometimes called forming gas), hydrogen is a tracer gas for detection of minute leaks. Applications can be found in the automotive, chemical, power generation, aerospace, and telecommunications industries. Hydrogen is an authorized food additive (E 949) that allows food package leak testing, as well as having anti-oxidizing properties. Neutron moderation: Deuterium (hydrogen-2) is used in nuclear fission applications as a moderator to slow neutrons. Nuclear fusion fuel: Deuterium is used in nuclear fusion reactions. Isotopic labeling: Deuterium compounds have applications in chemistry and biology in studies of isotope effects on reaction rates.
Tritium uses: Tritium (hydrogen-3), produced in nuclear reactors, is used in the production of hydrogen bombs, as an isotopic label in the biosciences, and as a source of beta radiation in radioluminescent paint for instrument dials and emergency signage. Safety and precautions Hydrogen poses few hazards to human safety. The chief hazards are detonation and asphyxiation, but both are mitigated by its high diffusivity. Because hydrogen has been intensively investigated as a fuel, there is extensive documentation on the risks. Because H2 reacts with very few substrates, it is nontoxic, as evidenced by the fact that humans exhale small amounts of it. See also Combined cycle hydrogen power plant (for hydrogen) Notes References Further reading The Hyperfine Splitting in Hydrogen - The Feynman Lectures on Physics External links Basic Hydrogen Calculations of Quantum Mechanics Hydrogen at The Periodic Table of Videos (University of Nottingham) High temperature hydrogen phase diagram Wavefunction of hydrogen Chemical elements Reactive nonmetals Diatomic nonmetals Nuclear fusion fuels Airship technology Reducing agents Refrigerants Gaseous signaling molecules E-number additives Least dense things
Hydrogen
[ "Physics", "Chemistry", "Materials_science" ]
8,160
[ "Chemical elements", "Redox", "Diatomic nonmetals", "Nonmetals", "Signal transduction", "Reducing agents", "Gaseous signaling molecules", "Reactive nonmetals", "Atoms", "Matter" ]
13,256
https://en.wikipedia.org/wiki/Helium
Helium (from the Greek helios, 'Sun') is a chemical element; it has symbol He and atomic number 2. It is a colorless, odorless, non-toxic, inert, monatomic gas and the first in the noble gas group in the periodic table. Its boiling point is the lowest among all the elements, and it does not have a melting point at standard pressures. It is the second-lightest and second most abundant element in the observable universe, after hydrogen. It is present at about 24% of the total elemental mass, which is more than 12 times the mass of all the heavier elements combined. Its abundance is similar to this figure in both the Sun and Jupiter, because of the very high nuclear binding energy (per nucleon) of helium-4, with respect to the next three elements after helium. This helium-4 binding energy also accounts for why it is a product of both nuclear fusion and radioactive decay. The most common isotope of helium in the universe is helium-4, the vast majority of which was formed during the Big Bang. Large amounts of new helium are created by nuclear fusion of hydrogen in stars. Helium was first detected as an unknown, yellow spectral line signature in sunlight during a solar eclipse in 1868 by Georges Rayet, Captain C. T. Haig, Norman R. Pogson, and Lieutenant John Herschel, and was subsequently confirmed by French astronomer Jules Janssen. Janssen is often jointly credited with detecting the element, along with Norman Lockyer. Janssen recorded the helium spectral line during the solar eclipse of 1868, while Lockyer observed it from Britain. However, only Lockyer proposed that the line was due to a new element, which he named after the Sun. The formal discovery of the element was made in 1895 by chemists Sir William Ramsay, Per Teodor Cleve, and Nils Abraham Langlet, who found helium emanating from the uranium ore cleveite, which is now not regarded as a separate mineral species, but as a variety of uraninite. In 1903, large reserves of helium were found in natural gas fields in parts of the United States, by far the largest supplier of the gas today. Liquid helium is used in cryogenics (its largest single use, consuming about a quarter of production), and in the cooling of superconducting magnets, with its main commercial application in MRI scanners. Helium's other industrial uses—as a pressurizing and purge gas, as a protective atmosphere for arc welding, and in processes such as growing crystals to make silicon wafers—account for half of the gas produced. A small but well-known use is as a lifting gas in balloons and airships. As with any gas whose density differs from that of air, inhaling a small volume of helium temporarily changes the timbre and quality of the human voice. In scientific research, the behavior of the two fluid phases of helium-4 (helium I and helium II) is important to researchers studying quantum mechanics (in particular the property of superfluidity) and to those looking at the phenomena, such as superconductivity, produced in matter near absolute zero. On Earth, it is relatively rare—5.2 ppm by volume in the atmosphere. Most terrestrial helium present today is created by the natural radioactive decay of heavy radioactive elements (thorium and uranium, although there are other examples), as the alpha particles emitted by such decays consist of helium-4 nuclei. This radiogenic helium is trapped with natural gas in concentrations as great as 7% by volume, from which it is extracted commercially by a low-temperature separation process called fractional distillation.
Terrestrial helium is a non-renewable resource because once released into the atmosphere, it promptly escapes into space. Its supply is thought to be rapidly diminishing. However, some studies suggest that helium produced deep in the Earth by radioactive decay can collect in natural gas reserves in larger-than-expected quantities, in some cases having been released by volcanic activity. History Scientific discoveries The first evidence of helium was observed on August 18, 1868, as a bright yellow line with a wavelength of 587.49 nanometers in the spectrum of the chromosphere of the Sun. The line was detected by French astronomer Jules Janssen during a total solar eclipse in Guntur, India. This line was initially assumed to be sodium. On October 20 of the same year, English astronomer Norman Lockyer observed a yellow line in the solar spectrum, which he named the D3 because it was near the known D1 and D2 Fraunhofer lines of sodium. He concluded that it was caused by an element in the Sun unknown on Earth. Lockyer named the element with the Greek word for the Sun, ἥλιος (helios). It is sometimes said that English chemist Edward Frankland was also involved in the naming, but this is unlikely as he doubted the existence of this new element. The ending "-ium" is unusual, as it normally applies only to metallic elements; probably Lockyer, being an astronomer, was unaware of the chemical conventions. In 1881, Italian physicist Luigi Palmieri detected helium on Earth for the first time through its D3 spectral line, when he analyzed a material that had been sublimated during a recent eruption of Mount Vesuvius. On March 26, 1895, Scottish chemist Sir William Ramsay isolated helium on Earth by treating the mineral cleveite (a variety of uraninite with at least 10% rare-earth elements) with mineral acids. Ramsay was looking for argon but, after separating nitrogen and oxygen from the gas, liberated by sulfuric acid, he noticed a bright yellow line that matched the D3 line observed in the spectrum of the Sun. These samples were identified as helium by Lockyer and British physicist William Crookes. It was independently isolated from cleveite in the same year by chemists Per Teodor Cleve and Abraham Langlet in Uppsala, Sweden, who collected enough of the gas to accurately determine its atomic weight. Helium was also isolated by American geochemist William Francis Hillebrand prior to Ramsay's discovery, when he noticed unusual spectral lines while testing a sample of the mineral uraninite. Hillebrand, however, attributed the lines to nitrogen. His letter of congratulations to Ramsay offers an interesting case of discovery, and near-discovery, in science. In 1907, Ernest Rutherford and Thomas Royds demonstrated that alpha particles are helium nuclei by allowing the particles to penetrate the thin glass wall of an evacuated tube, then creating a discharge in the tube, to study the spectrum of the new gas inside. In 1908, helium was first liquefied by Dutch physicist Heike Kamerlingh Onnes by cooling the gas to less than . He tried to solidify it by further reducing the temperature but failed, because helium does not solidify at atmospheric pressure. Onnes' student Willem Hendrik Keesom was eventually able to solidify 1 cm3 of helium in 1926 by applying additional external pressure. In 1913, Niels Bohr published his "trilogy" on atomic structure that included a reconsideration of the Pickering–Fowler series as central evidence in support of his model of the atom. 
This series is named for Edward Charles Pickering, who in 1896 published observations of previously unknown lines in the spectrum of the star ζ Puppis (these are now known to occur with Wolf–Rayet and other hot stars). Pickering attributed the observation (lines at 4551, 5411, and 10123 Å) to a new form of hydrogen with half-integer transition levels. In 1912, Alfred Fowler managed to produce similar lines from a hydrogen-helium mixture, and supported Pickering's conclusion as to their origin. Bohr's model does not allow for half-integer transitions (nor does quantum mechanics) and Bohr concluded that Pickering and Fowler were wrong, and instead assigned these spectral lines to ionised helium, He+. Fowler was initially skeptical but was ultimately convinced that Bohr was correct, and by 1915 "spectroscopists had transferred [the Pickering–Fowler series] definitively [from hydrogen] to helium." Bohr's theoretical work on the Pickering series had demonstrated the need for "a re-examination of problems that seemed already to have been solved within classical theories" and provided important confirmation for his atomic theory. In 1938, Russian physicist Pyotr Leonidovich Kapitsa discovered that helium-4 has almost no viscosity at temperatures near absolute zero, a phenomenon now called superfluidity. This phenomenon is related to Bose–Einstein condensation. In 1972, the same phenomenon was observed in helium-3, but at temperatures much closer to absolute zero, by American physicists Douglas D. Osheroff, David M. Lee, and Robert C. Richardson. The phenomenon in helium-3 is thought to be related to pairing of helium-3 fermions to make bosons, in analogy to Cooper pairs of electrons producing superconductivity. In 1961, Vignos and Fairbank reported the existence of a different phase of solid helium-4, designated the gamma-phase. It exists for a narrow range of pressure between 1.45 and 1.78 K. Extraction and use After an oil drilling operation in 1903 in Dexter, Kansas produced a gas geyser that would not burn, Kansas state geologist Erasmus Haworth collected samples of the escaping gas and took them back to the University of Kansas at Lawrence where, with the help of chemists Hamilton Cady and David McFarland, he discovered that the gas consisted of, by volume, 72% nitrogen, 15% methane (a combustible percentage only with sufficient oxygen), 1% hydrogen, and 12% an unidentifiable gas. With further analysis, Cady and McFarland discovered that 1.84% of the gas sample was helium. This showed that despite its overall rarity on Earth, helium was concentrated in large quantities under the American Great Plains, available for extraction as a byproduct of natural gas. Following a suggestion by Sir Richard Threlfall, the United States Navy sponsored three small experimental helium plants during World War I. The goal was to supply barrage balloons with the non-flammable, lighter-than-air gas. A total of of 92% helium was produced in the program even though less than a cubic meter of the gas had previously been obtained. Some of this gas was used in the world's first helium-filled airship, the U.S. Navy's C-class blimp C-7, which flew its maiden voyage from Hampton Roads, Virginia, to Bolling Field in Washington, D.C., on December 1, 1921, nearly two years before the Navy's first rigid helium-filled airship, the Naval Aircraft Factory-built USS Shenandoah, flew in September 1923. 
Although the extraction process using low-temperature gas liquefaction was not developed in time to be significant during World War I, production continued. Helium was primarily used as a lifting gas in lighter-than-air craft. During World War II, the demand increased for helium for lifting gas and for shielded arc welding. The helium mass spectrometer was also vital in the atomic bomb Manhattan Project. The government of the United States set up the National Helium Reserve in 1925 at Amarillo, Texas, with the goal of supplying military airships in time of war and commercial airships in peacetime. Because of the Helium Act of 1925, which banned the export of scarce helium on which the US then had a production monopoly, together with the prohibitive cost of the gas, German Zeppelins were forced to use hydrogen as lifting gas, which would gain infamy in the Hindenburg disaster. The helium market after World War II was depressed but the reserve was expanded in the 1950s to ensure a supply of liquid helium as a coolant to create oxygen/hydrogen rocket fuel (among other uses) during the Space Race and Cold War. Helium use in the United States in 1965 was more than eight times the peak wartime consumption. After the Helium Acts Amendments of 1960 (Public Law 86–777), the U.S. Bureau of Mines arranged for five private plants to recover helium from natural gas. For this helium conservation program, the Bureau built a pipeline from Bushton, Kansas, to connect those plants with the government's partially depleted Cliffside gas field near Amarillo, Texas. This helium-nitrogen mixture was injected and stored in the Cliffside gas field until needed, at which time it was further purified. By 1995, a billion cubic meters of the gas had been collected and the reserve was US$1.4 billion in debt, prompting the Congress of the United States in 1996 to discontinue the reserve. The resulting Helium Privatization Act of 1996 (Public Law 104–273) directed the United States Department of the Interior to empty the reserve, with sales starting by 2005. Helium produced between 1930 and 1945 was about 98.3% pure (2% nitrogen), which was adequate for airships. In 1945, a small amount of 99.9% helium was produced for welding use. By 1949, commercial quantities of Grade A 99.95% helium were available. For many years, the United States produced more than 90% of commercially usable helium in the world, while extraction plants in Canada, Poland, Russia, and other nations produced the remainder. In the mid-1990s, a new plant in Arzew, Algeria, began operation, with enough production to cover all of Europe's demand. Meanwhile, by 2000, the consumption of helium within the U.S. had risen to more than 15 million kg per year. In 2004–2006, additional plants in Ras Laffan, Qatar, and Skikda, Algeria were built. Algeria quickly became the second leading producer of helium. Through this time, both helium consumption and the costs of producing helium increased. From 2002 to 2007 helium prices doubled. The United States National Helium Reserve accounted for 30 percent of the world's helium. The reserve was expected to run out of helium in 2018. Despite that, a proposed bill in the United States Senate would allow the reserve to continue to sell the gas. Other large reserves were in the Hugoton in Kansas, United States, and nearby gas fields of Kansas and the panhandles of Texas and Oklahoma.
New helium plants were scheduled to open in 2012 in Qatar, Russia, and the US state of Wyoming, but they were not expected to ease the shortage. In 2013, Qatar started up the world's largest helium unit, although the 2017 Qatar diplomatic crisis severely affected helium production there. 2014 was widely acknowledged to be a year of over-supply in the helium business, following years of renowned shortages. Nasdaq reported (2015) that for Air Products, an international corporation that sells gases for industrial use, helium volumes remain under economic pressure due to feedstock supply constraints. Characteristics Atom In quantum mechanics In the perspective of quantum mechanics, helium is the second simplest atom to model, following the hydrogen atom. Helium is composed of two electrons in atomic orbitals surrounding a nucleus containing two protons and (usually) two neutrons. As in Newtonian mechanics, no system that consists of more than two particles can be solved with an exact analytical mathematical approach (see 3-body problem) and helium is no exception. Thus, numerical mathematical methods are required, even to solve the system of one nucleus and two electrons. Such computational chemistry methods have been used to create a quantum mechanical picture of helium electron binding which is accurate to within < 2% of the correct value, in a few computational steps. Such models show that each electron in helium partly screens the nucleus from the other, so that the effective nuclear charge Zeff which each electron sees is about 1.69 units, not the 2 charges of a classic "bare" helium nucleus. Related stability of the helium-4 nucleus and electron shell The nucleus of the helium-4 atom is identical with an alpha particle. High-energy electron-scattering experiments show its charge to decrease exponentially from a maximum at a central point, exactly as does the charge density of helium's own electron cloud. This symmetry reflects similar underlying physics: the pair of neutrons and the pair of protons in helium's nucleus obey the same quantum mechanical rules as do helium's pair of electrons (although the nuclear particles are subject to a different nuclear binding potential), so that all these fermions fully occupy 1s orbitals in pairs, none of them possessing orbital angular momentum, and each cancelling the other's intrinsic spin. This arrangement is thus energetically extremely stable for all these particles and has astrophysical implications. Namely, adding another particle – proton, neutron, or alpha particle – would consume rather than release energy; all systems with mass number 5, as well as beryllium-8 (comprising two alpha particles), are unbound. For example, the stability and low energy of the electron cloud state in helium accounts for the element's chemical inertness, and also the lack of interaction of helium atoms with each other, producing the lowest melting and boiling points of all the elements. In a similar way, the particular energetic stability of the helium-4 nucleus, produced by similar effects, accounts for the ease of helium-4 production in atomic reactions that involve either heavy-particle emission or fusion. Some stable helium-3 (two protons and one neutron) is produced in fusion reactions from hydrogen, though its estimated abundance in the universe is about relative to helium-4. 
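The effective nuclear charge of about 1.69 quoted in the quantum-mechanics discussion above can be reproduced with the classic one-parameter variational treatment sketched below. This is a standard textbook estimate rather than material from the article itself: both electrons are assumed to occupy hydrogen-like 1s orbitals sharing a single effective charge Zeff.

```python
# Sketch: variational estimate of the helium ground state with a screened 1s orbital.
HARTREE_EV = 27.2114
Z = 2                                   # helium nuclear charge

def energy_hartree(zeff: float) -> float:
    # kinetic + electron-nucleus + electron-electron terms for the product-of-1s trial state
    return zeff**2 - 2 * Z * zeff + (5.0 / 8.0) * zeff

zeff_opt = Z - 5.0 / 16.0               # from dE/dZeff = 0, giving 1.6875
print(f"optimal Zeff       = {zeff_opt:.4f}")
print(f"variational energy = {energy_hartree(zeff_opt) * HARTREE_EV:.1f} eV "
      "(experiment: about -79.0 eV)")
```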
The unusual stability of the helium-4 nucleus is also important cosmologically: it explains the fact that in the first few minutes after the Big Bang, as the "soup" of free protons and neutrons which had initially been created in about 6:1 ratio cooled to the point that nuclear binding was possible, almost all first compound atomic nuclei to form were helium-4 nuclei. Owing to the relatively tight binding of helium-4 nuclei, its production consumed nearly all of the free neutrons in a few minutes, before they could beta-decay, and thus few neutrons were available to form heavier atoms such as lithium, beryllium, or boron. Helium-4 nuclear binding per nucleon is stronger than in any of these elements (see nucleogenesis and binding energy) and thus, once helium had been formed, no energetic drive was available to make elements 3, 4 and 5. It is barely energetically favorable for helium to fuse into the next element with a lower energy per nucleon, carbon. However, due to the short lifetime of the intermediate beryllium-8, this process requires three helium nuclei striking each other nearly simultaneously (see triple-alpha process). There was thus no time for significant carbon to be formed in the few minutes after the Big Bang, before the early expanding universe cooled to the temperature and pressure point where helium fusion to carbon was no longer possible. This left the early universe with a very similar ratio of hydrogen/helium as is observed today (3 parts hydrogen to 1 part helium-4 by mass), with nearly all the neutrons in the universe trapped in helium-4. All heavier elements (including those necessary for rocky planets like the Earth, and for carbon-based or other life) have thus been created since the Big Bang in stars which were hot enough to fuse helium itself. All elements other than hydrogen and helium today account for only 2% of the mass of atomic matter in the universe. Helium-4, by contrast, comprises about 24% of the mass of the universe's ordinary matter—nearly all the ordinary matter that is not hydrogen. Gas and plasma phases Helium is the second least reactive noble gas after neon, and thus the second least reactive of all elements. It is chemically inert and monatomic in all standard conditions. Because of helium's relatively low molar (atomic) mass, its thermal conductivity, specific heat, and sound speed in the gas phase are all greater than any other gas except hydrogen. For these reasons and the small size of helium monatomic molecules, helium diffuses through solids at a rate three times that of air and around 65% that of hydrogen. Helium is the least water-soluble monatomic gas, and one of the least water-soluble of any gas (CF4, SF6, and C4F8 have lower mole fraction solubilities: 0.3802, 0.4394, and 0.2372 x2/10−5, respectively, versus helium's 0.70797 x2/10−5), and helium's index of refraction is closer to unity than that of any other gas. Helium has a negative Joule–Thomson coefficient at normal ambient temperatures, meaning it heats up when allowed to freely expand. Only below its Joule–Thomson inversion temperature (of about 32 to 50 K at 1 atmosphere) does it cool upon free expansion. Once precooled below this temperature, helium can be liquefied through expansion cooling. Most extraterrestrial helium is plasma in stars, with properties quite different from those of atomic helium. In a plasma, helium's electrons are not bound to its nucleus, resulting in very high electrical conductivity, even when the gas is only partially ionized. 
The charged particles are highly influenced by magnetic and electric fields. For example, in the solar wind together with ionized hydrogen, the particles interact with the Earth's magnetosphere, giving rise to Birkeland currents and the aurora. Liquid phase Helium liquifies when cooled below 4.2 K at atmospheric pressure. Unlike any other element, however, helium remains liquid down to a temperature of absolute zero. This is a direct effect of quantum mechanics: specifically, the zero point energy of the system is too high to allow freezing. Pressures above about 25 atmospheres are required to freeze it. There are two liquid phases: Helium I is a conventional liquid, and Helium II, which occurs at a lower temperature, is a superfluid. Helium I Below its boiling point of and above the lambda point of , the isotope helium-4 exists in a normal colorless liquid state, called helium I. Like other cryogenic liquids, helium I boils when it is heated and contracts when its temperature is lowered. Below the lambda point, however, helium does not boil, and it expands as the temperature is lowered further. Helium I has a gas-like index of refraction of 1.026 which makes its surface so hard to see that floats of Styrofoam are often used to show where the surface is. This colorless liquid has a very low viscosity and a density of 0.145–0.125 g/mL (between about 0 and 4 K), which is only one-fourth the value expected from classical physics. Quantum mechanics is needed to explain this property and thus both states of liquid helium (helium I and helium II) are called quantum fluids, meaning they display atomic properties on a macroscopic scale. This may be an effect of its boiling point being so close to absolute zero, preventing random molecular motion (thermal energy) from masking the atomic properties. Helium II Liquid helium below its lambda point (called helium II) exhibits very unusual characteristics. Due to its high thermal conductivity, when it boils, it does not bubble but rather evaporates directly from its surface. Helium-3 also has a superfluid phase, but only at much lower temperatures; as a result, less is known about the properties of the isotope. Helium II is a superfluid, a quantum mechanical state of matter with strange properties. For example, when it flows through capillaries as thin as 10 to 100 nm it has no measurable viscosity. However, when measurements were done between two moving discs, a viscosity comparable to that of gaseous helium was observed. Existing theory explains this using the two-fluid model for helium II. In this model, liquid helium below the lambda point is viewed as containing a proportion of helium atoms in a ground state, which are superfluid and flow with exactly zero viscosity, and a proportion of helium atoms in an excited state, which behave more like an ordinary fluid. In the fountain effect, a chamber is constructed which is connected to a reservoir of helium II by a sintered disc through which superfluid helium leaks easily but through which non-superfluid helium cannot pass. If the interior of the container is heated, the superfluid helium changes to non-superfluid helium. In order to maintain the equilibrium fraction of superfluid helium, superfluid helium leaks through and increases the pressure, causing liquid to fountain out of the container. The thermal conductivity of helium II is greater than that of any other known substance, a million times that of helium I and several hundred times that of copper. 
This is because heat conduction occurs by an exceptional quantum mechanism. Most materials that conduct heat well have a valence band of free electrons which serve to transfer the heat. Helium II has no such valence band but nevertheless conducts heat well. The flow of heat is governed by equations that are similar to the wave equation used to characterize sound propagation in air. When heat is introduced, it moves at 20 meters per second at 1.8 K through helium II as waves in a phenomenon known as second sound. Helium II also exhibits a creeping effect. When a surface extends past the level of helium II, the helium II moves along the surface, against the force of gravity. Helium II will escape from a vessel that is not sealed by creeping along the sides until it reaches a warmer region where it evaporates. It moves in a 30 nm-thick film regardless of surface material. This film is called a Rollin film and is named after the man who first characterized this trait, Bernard V. Rollin. As a result of this creeping behavior and helium II's ability to leak rapidly through tiny openings, it is very difficult to confine. Unless the container is carefully constructed, the helium II will creep along the surfaces and through valves until it reaches somewhere warmer, where it will evaporate. Waves propagating across a Rollin film are governed by the same equation as gravity waves in shallow water, but rather than gravity, the restoring force is the van der Waals force. These waves are known as third sound. Solid phases Helium remains liquid down to absolute zero at atmospheric pressure, but it freezes at high pressure. Solid helium requires a temperature of 1–1.5 K (about −272 °C or −457 °F) at about 25 bar (2.5 MPa) of pressure. It is often hard to distinguish solid from liquid helium since the refractive indices of the two phases are nearly the same. The solid has a sharp melting point and has a crystalline structure, but it is highly compressible; applying pressure in a laboratory can decrease its volume by more than 30%. With a bulk modulus of about 27 MPa, it is ~100 times more compressible than water. Solid helium has a density of at 1.15 K and 66 atm; the projected density at 0 K and 25 bar (2.5 MPa) is . At higher temperatures, helium will solidify with sufficient pressure. At room temperature, this requires about 114,000 atm. Helium-4 and helium-3 both form several crystalline solid phases, all requiring at least 25 bar. They both form an α phase, which has a hexagonal close-packed (hcp) crystal structure, a β phase, which is face-centered cubic (fcc), and a γ phase, which is body-centered cubic (bcc). Isotopes There are nine known isotopes of helium of which two, helium-3 and helium-4, are stable. In the Earth's atmosphere, there is about one helium-3 atom for every million helium-4 atoms. Unlike most elements, helium's isotopic abundance varies greatly by origin, due to the different formation processes. The most common isotope, helium-4, is produced on Earth by alpha decay of heavier radioactive elements; the alpha particles that emerge are fully ionized helium-4 nuclei. Helium-4 is an unusually stable nucleus because its nucleons are arranged into complete shells. It was also formed in enormous quantities during Big Bang nucleosynthesis. Helium-3 is present on Earth only in trace amounts. Most of it has been present since Earth's formation, though some falls to Earth trapped in cosmic dust. Trace amounts are also produced by the beta decay of tritium.
Rocks from the Earth's crust have isotope ratios varying by as much as a factor of ten, and these ratios can be used to investigate the origin of rocks and the composition of the Earth's mantle. Helium-3 is much more abundant in stars as a product of nuclear fusion. Thus in the interstellar medium, the proportion of helium-3 to helium-4 is about 100 times higher than on Earth. Extraplanetary material, such as lunar and asteroid regolith, has trace amounts of helium-3 from being bombarded by solar winds. The Moon's surface contains helium-3 at concentrations on the order of 10 ppb, much higher than the approximately 5 ppt found in the Earth's atmosphere. A number of people, starting with Gerald Kulcinski in 1986, have proposed to explore the Moon, mine lunar regolith, and use the helium-3 for fusion. Liquid helium-4 can be cooled to about 1 K using evaporative cooling in a 1-K pot. Similar cooling of helium-3, which has a lower boiling point, can achieve still lower temperatures in a helium-3 refrigerator. Equal mixtures of liquid helium-3 and helium-4 below a sufficiently low temperature separate into two immiscible phases due to their dissimilarity (they follow different quantum statistics: helium-4 atoms are bosons while helium-3 atoms are fermions). Dilution refrigerators use this immiscibility to achieve temperatures of a few millikelvins. It is possible to produce exotic helium isotopes, which rapidly decay into other substances. The shortest-lived heavy helium isotope is the unbound helium-10. Helium-6 decays by emitting a beta particle and has a half-life of 0.8 second. Helium-7 and helium-8 are created in certain nuclear reactions. Helium-6 and helium-8 are known to exhibit a nuclear halo. Properties Table of thermal and physical properties of helium gas at atmospheric pressure: Compounds Helium has a valence of zero and is chemically unreactive under all normal conditions. It is an electrical insulator unless ionized. As with the other noble gases, helium has metastable energy levels that allow it to remain ionized in an electrical discharge with a voltage below its ionization potential. Helium can form unstable compounds, known as excimers, with tungsten, iodine, fluorine, sulfur, and phosphorus when it is subjected to a glow discharge, to electron bombardment, or reduced to plasma by other means. The molecular compounds HeNe, HgHe10, and WHe2, and molecular ions such as HeH+ have been created this way. HeH+ is also stable in its ground state but is extremely reactive—it is the strongest Brønsted acid known, and therefore can exist only in isolation, as it will protonate any molecule or counteranion it contacts. This technique has also produced the neutral molecule He2, which has a large number of band systems, and HgHe, which is apparently held together only by polarization forces. Van der Waals compounds of helium can also be formed with cryogenic helium gas and atoms of some other substance, such as LiHe and He2. Theoretically, other true compounds may be possible, such as helium fluorohydride (HHeF), which would be analogous to HArF, discovered in 2000. Calculations show that two new compounds containing a helium-oxygen bond could be stable. Two new molecular species, predicted using theory, CsFHeO and N(CH3)4FHeO, are derivatives of a metastable FHeO− anion first theorized in 2005 by a group from Taiwan. Helium atoms have been inserted into the hollow carbon cage molecules (the fullerenes) by heating under high pressure. The endohedral fullerene molecules formed are stable at high temperatures.
When chemical derivatives of these fullerenes are formed, the helium stays inside. If helium-3 is used, it can be readily observed by helium nuclear magnetic resonance spectroscopy. Many fullerenes containing helium-3 have been reported. Although the helium atoms are not attached by covalent or ionic bonds, these substances have distinct properties and a definite composition, like all stoichiometric chemical compounds. Under high pressures helium can form compounds with various other elements. Helium-nitrogen clathrate (He(N2)11) crystals have been grown at room temperature at pressures ca. 10 GPa in a diamond anvil cell. The insulating electride Na2He has been shown to be thermodynamically stable at pressures above 113 GPa. It has a fluorite structure. Occurrence and production Natural abundance Although it is rare on Earth, helium is the second most abundant element in the known Universe, constituting 23% of its baryonic mass. Only hydrogen is more abundant. The vast majority of helium was formed by Big Bang nucleosynthesis one to three minutes after the Big Bang. As such, measurements of its abundance contribute to cosmological models. In stars, it is formed by the nuclear fusion of hydrogen in proton–proton chain reactions and the CNO cycle, part of stellar nucleosynthesis. In the Earth's atmosphere, the concentration of helium by volume is only 5.2 parts per million. The concentration is low and fairly constant despite the continuous production of new helium because most helium in the Earth's atmosphere escapes into space by several processes. In the Earth's heterosphere, a part of the upper atmosphere, helium and hydrogen are the most abundant elements. Most helium on Earth is a result of radioactive decay. Helium is found in large amounts in minerals of uranium and thorium, including uraninite and its varieties cleveite and pitchblende, carnotite and monazite (a group name; "monazite" usually refers to monazite-(Ce)), because they emit alpha particles (helium nuclei, He2+) to which electrons immediately combine as soon as the particle is stopped by the rock. In this way an estimated 3000 metric tons of helium are generated per year throughout the lithosphere. In the Earth's crust, the concentration of helium is 8 parts per billion. In seawater, the concentration is only 4 parts per trillion. There are also small amounts in mineral springs, volcanic gas, and meteoric iron. Because helium is trapped in the subsurface under conditions that also trap natural gas, the greatest natural concentrations of helium on the planet are found in natural gas, from which most commercial helium is extracted. The concentration varies in a broad range from a few ppm to more than 7% in a small gas field in San Juan County, New Mexico. , the world's helium reserves were estimated at 31 billion cubic meters, with a third of that being in Qatar. In 2015 and 2016 additional probable reserves were announced to be under the Rocky Mountains in North America and in the East African Rift. The Bureau of Land Management (BLM) has proposed an October 2024 plan for managing natural resources in western Colorado. The plan involves closing 543,000 acres to oil and gas leasing while keeping 692,300 acres open. Among the open areas, 165,700 acres have been identified as suitable for helium recovery. The United States possesses an estimated 306 billion cubic feet of recoverable helium, sufficient to meet current consumption rates of 2.15 billion cubic feet per year for approximately 150 years. 
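The "approximately 150 years" figure quoted just above follows directly from dividing the estimated recoverable helium by the stated consumption rate. The sketch below assumes, as the article does implicitly, that the consumption rate stays constant.

```python
# Sketch: static reserve lifetime from the figures quoted above.
recoverable_bcf = 306.0      # billion cubic feet of recoverable helium (quoted)
annual_use_bcf = 2.15        # billion cubic feet consumed per year (quoted)

years = recoverable_bcf / annual_use_bcf
print(f"static lifetime ~ {years:.0f} years")   # ~142 years, i.e. roughly 150
```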
Modern extraction and distribution For large-scale use, helium is extracted by fractional distillation from natural gas, which can contain as much as 7% helium. Since helium has a lower boiling point than any other element, low temperatures and high pressure are used to liquefy nearly all the other gases (mostly nitrogen and methane). The resulting crude helium gas is purified by successive exposures to lowering temperatures, in which almost all of the remaining nitrogen and other gases are precipitated out of the gaseous mixture. Activated charcoal is used as a final purification step, usually resulting in 99.995% pure Grade-A helium. The principal impurity in Grade-A helium is neon. In a final production step, most of the helium that is produced is liquefied via a cryogenic process. This is necessary for applications requiring liquid helium and also allows helium suppliers to reduce the cost of long-distance transportation, as the largest liquid helium containers have more than five times the capacity of the largest gaseous helium tube trailers. In 2008, approximately 169 million standard cubic meters (SCM) of helium were extracted from natural gas or withdrawn from helium reserves, with approximately 78% from the United States, 10% from Algeria, and most of the remainder from Russia, Poland, and Qatar. By 2013, increases in helium production in Qatar (under the company Qatargas managed by Air Liquide) had increased Qatar's fraction of world helium production to 25%, making it the second largest exporter after the United States. An estimated deposit of helium was found in Tanzania in 2016. A large-scale helium plant was opened in Ningxia, China in 2020. In the United States, most helium is extracted from the natural gas of the Hugoton and nearby gas fields in Kansas, Oklahoma, and the Panhandle Field in Texas. Much of this gas was once sent by pipeline to the National Helium Reserve, but since 2005, this reserve has been depleted and sold off, and it is expected to be largely depleted by 2021 under the October 2013 Responsible Helium Administration and Stewardship Act (H.R. 527). The helium fields of the western United States are emerging as an alternate source of helium supply, particularly those of the "Four Corners" region (the states of Arizona, Colorado, New Mexico and Utah). Diffusion of crude natural gas through special semipermeable membranes and other barriers is another method to recover and purify helium. In 1996, the U.S. had proven helium reserves in such gas well complexes of about 147 billion standard cubic feet (4.2 billion SCM). At rates of use at that time (72 million SCM per year in the U.S.; see pie chart below) this would have been enough helium for about 58 years of U.S. use, and less than this (perhaps 80% of the time) at world use rates, although factors in saving and processing impact effective reserve numbers. Helium is generally extracted from natural gas because it is present in air at only a fraction of that of neon, yet the demand for it is far higher. It is estimated that if all neon production were retooled to save helium, 0.1% of the world's helium demands would be satisfied. Similarly, only 1% of the world's helium demands could be satisfied by re-tooling all air distillation plants. Helium can be synthesized by bombardment of lithium or boron with high-velocity protons, or by bombardment of lithium with deuterons, but these processes are a completely uneconomical method of production. Helium is commercially available in either liquid or gaseous form. 
As a liquid, it can be supplied in small insulated containers called dewars which hold as much as 1,000 liters of helium, or in large ISO containers, which have nominal capacities as large as 42 m3 (around 11,000 U.S. gallons). In gaseous form, small quantities of helium are supplied in high-pressure cylinders holding as much as 8 m3 (approximately 282 standard cubic feet), while large quantities of high-pressure gas are supplied in tube trailers, which have capacities of as much as 4,860 m3 (approx. 172,000 standard cubic feet). Conservation advocates According to helium conservationists like Nobel laureate physicist Robert Coleman Richardson, writing in 2010, the free market price of helium has contributed to "wasteful" usage (e.g. for helium balloons). Prices in the 2000s had been lowered by the decision of the U.S. Congress to sell off the country's large helium stockpile by 2015. According to Richardson, the price needed to be multiplied by 20 to eliminate the excessive wasting of helium. In the 2012 Nuttall et al. paper titled "Stop squandering helium", it was also proposed to create an International Helium Agency that would build a sustainable market for "this precious commodity". Applications While balloons are perhaps the best-known use of helium, they are a minor part of all helium use. Helium is used for many purposes that require some of its unique properties, such as its low boiling point, low density, low solubility, high thermal conductivity, or inertness. Of the 2014 world total helium production of about 32 million kg (180 million standard cubic meters) per year, the largest use (about 32% of the total in 2014) is in cryogenic applications, most of which involves cooling the superconducting magnets in medical MRI scanners and NMR spectrometers. Other major uses were pressurizing and purging systems, welding, maintenance of controlled atmospheres, and leak detection. Other uses by category were relatively minor fractions. Controlled atmospheres Helium is used as a protective gas in growing silicon and germanium crystals, in titanium and zirconium production, and in gas chromatography, because it is inert. Because of its inertness, thermally and calorically perfect nature, high speed of sound, and high value of the heat capacity ratio, it is also useful in supersonic wind tunnels and impulse facilities. Gas tungsten arc welding Helium is used as a shielding gas in arc welding processes on materials that, at welding temperatures, are contaminated and weakened by air or nitrogen. A number of inert shielding gases are used in gas tungsten arc welding, but helium is used instead of cheaper argon especially for welding materials that have higher heat conductivity, like aluminium or copper. Minor uses Industrial leak detection One industrial application for helium is leak detection. Because helium diffuses through solids three times faster than air, it is used as a tracer gas to detect leaks in high-vacuum equipment (such as cryogenic tanks) and high-pressure containers. The tested object is placed in a chamber, which is then evacuated and filled with helium. The helium that escapes through the leaks is detected by a sensitive device (helium mass spectrometer), even at leak rates as small as 10−9 mbar·L/s (10−10 Pa·m3/s). The measurement procedure is normally automatic and is called helium integral test. A simpler procedure is to fill the tested object with helium and to manually search for leaks with a hand-held device.
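The two leak-rate units quoted above are related only by fixed unit definitions (1 mbar = 100 Pa, 1 L = 10−3 m3). A small sketch of the conversion; the function name is chosen here purely for illustration:

```python
# Convert a leak rate from mbar·L/s to Pa·m³/s using unit definitions only:
# 1 mbar = 100 Pa and 1 L = 1e-3 m³, so 1 mbar·L/s = 0.1 Pa·m³/s.
MBAR_TO_PA = 100.0
LITRE_TO_M3 = 1.0e-3

def leak_rate_pa_m3_per_s(rate_mbar_l_per_s: float) -> float:
    """Return the same leak rate expressed in Pa·m³/s."""
    return rate_mbar_l_per_s * MBAR_TO_PA * LITRE_TO_M3

print(leak_rate_pa_m3_per_s(1e-9))  # ~1e-10, matching the figure quoted above
```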
Helium leaks through cracks should not be confused with gas permeation through a bulk material. While helium has documented permeation constants (thus a calculable permeation rate) through glasses, ceramics, and synthetic materials, inert gases such as helium will not permeate most bulk metals. Flight Because it is lighter than air, airships and balloons are inflated with helium for lift. While hydrogen gas is more buoyant and escapes permeating through a membrane at a lower rate, helium has the advantage of being non-flammable, and indeed fire-retardant. Another minor use is in rocketry, where helium is used as an ullage medium to backfill rocket propellant tanks in flight and to condense hydrogen and oxygen to make rocket fuel. It is also used to purge fuel and oxidizer from ground support equipment prior to launch and to pre-cool liquid hydrogen in space vehicles. For example, the Saturn V rocket used in the Apollo program needed large quantities of helium to launch. Minor commercial and recreational uses Helium as a breathing gas has no narcotic properties, so helium mixtures such as trimix, heliox and heliair are used for deep diving to reduce the effects of narcosis, which worsen with increasing depth. As pressure increases with depth, the density of the breathing gas also increases, and the low molecular weight of helium is found to considerably reduce the effort of breathing by lowering the density of the mixture. This reduces the Reynolds number of flow, leading to a reduction of turbulent flow and an increase in laminar flow, which requires less work to breathe. At great depths, divers breathing helium-oxygen mixtures begin to experience tremors and a decrease in psychomotor function, symptoms of high-pressure nervous syndrome. This effect may be countered to some extent by adding an amount of narcotic gas such as hydrogen or nitrogen to a helium–oxygen mixture. Helium–neon lasers, a type of low-powered gas laser producing a red beam, had various practical applications which included barcode readers and laser pointers, before they were almost universally replaced by cheaper diode lasers. For its inertness and high thermal conductivity, neutron transparency, and because it does not form radioactive isotopes under reactor conditions, helium is used as a heat-transfer medium in some gas-cooled nuclear reactors. Helium, mixed with a heavier gas such as xenon, is useful for thermoacoustic refrigeration due to the resulting high heat capacity ratio and low Prandtl number. The inertness of helium has environmental advantages over conventional refrigeration systems which contribute to ozone depletion or global warming. Helium is also used in some hard disk drives. Scientific uses The use of helium reduces the distorting effects of temperature variations in the space between lenses in some telescopes due to its extremely low index of refraction. This method is especially used in solar telescopes where a vacuum tight telescope tube would be too heavy. Helium is a commonly used carrier gas for gas chromatography. The age of rocks and minerals that contain uranium and thorium can be estimated by measuring the level of helium with a process known as helium dating. Helium at low temperatures is used in cryogenics and in certain cryogenic applications. As examples of applications, liquid helium is used to cool certain metals to the extremely low temperatures required for superconductivity, such as in superconducting magnets for magnetic resonance imaging.
The Large Hadron Collider at CERN uses 96 metric tons of liquid helium to maintain the temperature at 1.9 K. Medical uses Helium was approved for medical use in the United States in April 2020 for humans and animals. As a contaminant Although helium is chemically inert, helium contamination impairs the operation of microelectromechanical systems (MEMS) such that iPhones may fail. Inhalation and safety Effects Neutral helium at standard conditions is non-toxic, plays no biological role and is found in trace amounts in human blood. The speed of sound in helium is nearly three times the speed of sound in air. Because the natural resonance frequency of a gas-filled cavity is proportional to the speed of sound in the gas, when helium is inhaled, a corresponding increase occurs in the resonant frequencies of the vocal tract, which is the amplifier of vocal sound. This increase in the resonant frequency of the amplifier (the vocal tract) gives increased amplification to the high-frequency components of the sound wave produced by the direct vibration of the vocal folds, compared to the case when the voice box is filled with air. When a person speaks after inhaling helium gas, the muscles that control the voice box still move in the same way as when the voice box is filled with air; therefore the fundamental frequency (sometimes called pitch) produced by direct vibration of the vocal folds does not change. However, the high-frequency-preferred amplification causes a change in timbre of the amplified sound, resulting in a reedy, duck-like vocal quality. The opposite effect, lowering resonant frequencies, can be obtained by inhaling a dense gas such as sulfur hexafluoride or xenon. Hazards Inhaling helium can be dangerous if done to excess, since helium is a simple asphyxiant and so displaces oxygen needed for normal respiration. Fatalities have been recorded, including a youth who suffocated in Vancouver in 2003 and two adults who suffocated in South Florida in 2006. In 1998, an Australian girl from Victoria fell unconscious and temporarily turned blue after inhaling the entire contents of a party balloon. Inhaling helium directly from pressurized cylinders or even balloon filling valves is extremely dangerous, as high flow rate and pressure can result in barotrauma, fatally rupturing lung tissue. Death caused by helium is rare. The first media-recorded case was that of a 15-year-old girl from Texas who died in 1998 from helium inhalation at a friend's party; the exact type of helium death is unidentified. In the United States, only two fatalities were reported between 2000 and 2004, including a man who died in North Carolina of barotrauma in 2002. A youth asphyxiated in Vancouver during 2003, and a 27-year-old man in Australia had an embolism after breathing from a cylinder in 2000. Since then, two adults asphyxiated in South Florida in 2006, and there were cases in 2009 and 2010, one of whom was a Californian youth who was found with a bag over his head, attached to a helium tank, and another teenager in Northern Ireland died of asphyxiation. At Eagle Point, Oregon, a teenage girl died in 2012 from barotrauma at a party. A girl from Michigan died from hypoxia later in the year.
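The "nearly three times" speed-of-sound ratio noted in the Effects subsection above follows from the ideal-gas relation v = sqrt(γRT/M). A rough sketch of that estimate, using standard textbook values for the heat-capacity ratios and molar masses (illustrative assumptions, not measured data):

```python
# Ideal-gas estimate of the speed of sound, v = sqrt(gamma * R * T / M),
# comparing helium with air at roughly room temperature.
from math import sqrt

R = 8.314   # J/(mol·K), molar gas constant
T = 293.0   # K, roughly room temperature

def speed_of_sound(gamma: float, molar_mass_kg_per_mol: float) -> float:
    return sqrt(gamma * R * T / molar_mass_kg_per_mol)

v_helium = speed_of_sound(5.0 / 3.0, 0.0040)  # monatomic helium
v_air = speed_of_sound(1.4, 0.0290)           # air, treated as a diatomic mixture

print(f"helium ~{v_helium:.0f} m/s, air ~{v_air:.0f} m/s, ratio ~{v_helium / v_air:.1f}")
```

The ratio comes out close to 2.9, which is why a helium-filled vocal tract amplifies frequencies roughly three times higher than an air-filled one.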
On February 4, 2015, it was revealed that, during the recording of their main TV show on January 28, a 12-year-old member (name withheld) of Japanese all-girl singing group 3B Junior suffered from air embolism, losing consciousness and falling into a coma as a result of air bubbles blocking the flow of blood to the brain after inhaling huge quantities of helium as part of a game. The incident was not made public until a week later. The staff of TV Asahi held an emergency press conference to communicate that the member had been taken to the hospital and is showing signs of rehabilitation such as moving eyes and limbs, but her consciousness has not yet been sufficiently recovered. Police have launched an investigation due to a neglect of safety measures. The safety issues for cryogenic helium are similar to those of liquid nitrogen; its extremely low temperatures can result in cold burns, and the liquid-to-gas expansion ratio can cause explosions if no pressure-relief devices are installed. Containers of helium gas at 5 to 10 K should be handled as if they contain liquid helium due to the rapid and significant thermal expansion that occurs when helium gas at less than 10 K is warmed to room temperature. At high pressures (more than about 20 atm or two MPa), a mixture of helium and oxygen (heliox) can lead to high-pressure nervous syndrome, a sort of reverse-anesthetic effect; adding a small amount of nitrogen to the mixture can alleviate the problem. See also Abiogenic petroleum origin Helium-3 propulsion Leidenfrost effect Superfluid Tracer-gas leak testing method Hamilton Cady Notes References Bibliography External links General U.S. Government's Bureau of Land Management: Sources, Refinement, and Shortage. With some history of helium. U.S. Geological Survey publications on helium beginning 1996: Helium Where is all the helium? Aga website It's Elemental – Helium Chemistry in its element podcast (MP3) from the Royal Society of Chemistry's Chemistry World: Helium International Chemical Safety Cards – Helium includes health and safety information regarding accidental exposures to helium More detail Helium at The Periodic Table of Videos (University of Nottingham) Helium at the Helsinki University of Technology; includes pressure-temperature phase diagrams for helium-3 and helium-4 Lancaster University, Ultra Low Temperature Physics – includes a summary of some low temperature techniques Video: Demonstration of superfluid helium (Alfred Leitner, 1963, 38 min.) Miscellaneous Physics in Speech with audio samples that demonstrate the unchanged voice pitch Article about helium and other noble gases Helium shortage America's Helium Supply: Options for Producing More Helium from Federal Land: Oversight Hearing before the Subcommittee on Energy and Mineral Resources of the Committee on Natural Resources, U.S. House Of Representatives, One Hundred Thirteenth Congress, First Session, Thursday, July 11, 2013 Helium Program: Urgent Issues Facing BLM's Storage and Sale of Helium Reserves: Testimony before the Committee on Natural Resources, House of Representatives Government Accountability Office Chemical elements Noble gases Quantum phases Airship technology Coolants Nuclear reactor coolants Underwater diving equipment E-number additives Helios
Helium
[ "Physics", "Chemistry", "Materials_science" ]
10,905
[ "Quantum phases", "Noble gases", "Chemical elements", "Phases of matter", "Quantum mechanics", "Nonmetals", "Condensed matter physics", "Atoms", "Matter" ]
13,311
https://en.wikipedia.org/wiki/Hormone
A hormone (from a Greek participle meaning "setting in motion") is any member of a class of signaling molecules in multicellular organisms that are sent to distant organs or tissues by complex biological processes to regulate physiology and behavior. Hormones are required for the correct development of animals, plants and fungi. Due to the broad definition of a hormone (as a signaling molecule that exerts its effects far from its site of production), numerous kinds of molecules can be classified as hormones. Among the substances that can be considered hormones are eicosanoids (e.g. prostaglandins and thromboxanes), steroids (e.g. oestrogen and brassinosteroid), amino acid derivatives (e.g. epinephrine and auxin), protein or peptides (e.g. insulin and CLE peptides), and gases (e.g. ethylene and nitric oxide). Hormones are used to communicate between organs and tissues. In vertebrates, hormones are responsible for regulating a wide range of processes including both physiological processes and behavioral activities such as digestion, metabolism, respiration, sensory perception, sleep, excretion, lactation, stress induction, growth and development, movement, reproduction, and mood manipulation. In plants, hormones modulate almost all aspects of development, from germination to senescence. Hormones affect distant cells by binding to specific receptor proteins in the target cell, resulting in a change in cell function. When a hormone binds to the receptor, it results in the activation of a signal transduction pathway that typically activates gene transcription, resulting in increased expression of target proteins. Hormones can also act in non-genomic pathways that synergize with genomic effects. Water-soluble hormones (such as peptides and amines) generally act on the surface of target cells via second messengers. Lipid-soluble hormones (such as steroids) generally pass through the plasma membranes of target cells (both cytoplasmic and nuclear) to act within their nuclei. Brassinosteroids, a type of polyhydroxysteroids, are a sixth class of plant hormones and may be useful as an anticancer drug for endocrine-responsive tumors to cause apoptosis and limit plant growth. Despite being lipid soluble, they nevertheless attach to their receptor at the cell surface. In vertebrates, endocrine glands are specialized organs that secrete hormones into the endocrine signaling system. Hormone secretion occurs in response to specific biochemical signals and is often subject to negative feedback regulation. For instance, high blood sugar (serum glucose concentration) promotes insulin synthesis. Insulin then acts to reduce glucose levels and maintain homeostasis, leading to reduced insulin levels. Upon secretion, water-soluble hormones are readily transported through the circulatory system. Lipid-soluble hormones must bind to carrier plasma glycoproteins (e.g., thyroxine-binding globulin (TBG)) to form ligand-protein complexes. Some hormones, such as insulin and growth hormones, can be released into the bloodstream already fully active. Other hormones, called prohormones, must be activated in certain cells through a series of steps that are usually tightly controlled. The endocrine system secretes hormones directly into the bloodstream, typically via fenestrated capillaries, whereas the exocrine system secretes its hormones indirectly using ducts. Hormones with paracrine function diffuse through the interstitial spaces to nearby target tissue.
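The insulin example above is a negative feedback loop: the hormone's effect suppresses the stimulus that triggered its release. A deliberately schematic toy simulation of that idea follows; the variables and rate constants are arbitrary illustrative values, not physiological parameters.

```python
# Toy negative-feedback loop: a glucose-like signal stimulates an
# insulin-like hormone, and the hormone in turn lowers the signal.
# All constants are arbitrary; this sketches the feedback idea only.
def simulate_feedback(steps: int = 50, dt: float = 0.1):
    glucose, insulin = 2.0, 0.0            # start with an elevated signal (arbitrary units)
    for _ in range(steps):
        glucose -= 0.8 * insulin * glucose * dt           # hormone drives the signal down
        insulin += (0.5 * glucose - 0.3 * insulin) * dt   # secretion tracks the signal; hormone decays
    return glucose, insulin

print(simulate_feedback())  # elevated signal is brought down; hormone rises, then declines as the stimulus fades
```

The qualitative point is the loop itself: as the signal drops, further hormone release drops with it, which is the homeostatic behavior described above.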
Plants lack specialized organs for the secretion of hormones, although there is spatial distribution of hormone production. For example, the hormone auxin is produced mainly at the tips of young leaves and in the shoot apical meristem. The lack of specialised glands means that the main site of hormone production can change throughout the life of a plant, and the site of production is dependent on the plant's age and environment. Introduction and overview Hormone producing cells are found in the endocrine glands, such as the thyroid gland, ovaries, and testes. Hormonal signaling involves the following steps: Biosynthesis of a particular hormone in a particular tissue. Storage and secretion of the hormone. Transport of the hormone to the target cell(s). Recognition of the hormone by an associated cell membrane or intracellular receptor protein. Relay and amplification of the received hormonal signal via a signal transduction process: This then leads to a cellular response. The reaction of the target cells may then be recognized by the original hormone-producing cells, leading to a downregulation in hormone production. This is an example of a homeostatic negative feedback loop. Breakdown of the hormone. Exocytosis and other methods of membrane transport are used to secrete hormones when the endocrine glands are signaled. The hierarchical model is an oversimplification of the hormonal signaling process. Cellular recipients of a particular hormonal signal may be one of several cell types that reside within a number of different tissues, as is the case for insulin, which triggers a diverse range of systemic physiological effects. Different tissue types may also respond differently to the same hormonal signal. Discovery Arnold Adolph Berthold (1849) Arnold Adolph Berthold was a German physiologist and zoologist, who, in 1849, had a question about the function of the testes. He noticed in castrated roosters that they did not have the same sexual behaviors as roosters with their testes intact. He decided to run an experiment on male roosters to examine this phenomenon. He kept a group of roosters with their testes intact, and saw that they had normal sized wattles and combs (secondary sexual organs), a normal crow, and normal sexual and aggressive behaviors. He also had a group with their testes surgically removed, and noticed that their secondary sexual organs were decreased in size, had a weak crow, did not have sexual attraction towards females, and were not aggressive. He realized that this organ was essential for these behaviors, but he did not know how. To test this further, he removed one testis and placed it in the abdominal cavity. The roosters acted and had normal physical anatomy. He was able to see that location of the testes does not matter. He then wanted to see if it was a genetic factor that was involved in the testes that provided these functions. He transplanted a testis from another rooster to a rooster with one testis removed, and saw that they had normal behavior and physical anatomy as well. Berthold determined that the location or genetic factors of the testes do not matter in relation to sexual organs and behaviors, but that some chemical in the testes being secreted is causing this phenomenon. It was later identified that this factor was the hormone testosterone. Charles and Francis Darwin (1880) Although known primarily for his work on the Theory of Evolution, Charles Darwin was also keenly interested in plants. 
Through the 1870s, he and his son Francis studied the movement of plants towards light. They were able to show that light is perceived at the tip of a young stem (the coleoptile), whereas the bending occurs lower down the stem. They proposed that a 'transmissible substance' communicated the direction of light from the tip down to the stem. The idea of a 'transmissible substance' was initially dismissed by other plant biologists, but their work later led to the discovery of the first plant hormone. In the 1920s Dutch scientist Frits Warmolt Went and Russian scientist Nikolai Cholodny (working independently of each other) conclusively showed that asymmetric accumulation of a growth hormone was responsible for this bending. In 1933 this hormone was finally isolated by Kögl, Haagen-Smit and Erxleben and given the name 'auxin'. Oliver and Schäfer (1894) British physician George Oliver and physiologist Edward Albert Schäfer, professor at University College London, collaborated on the physiological effects of adrenal extracts. They first published their findings in two reports in 1894, a full publication followed in 1895. Though frequently falsely attributed to secretin, found in 1902 by Bayliss and Starling, Oliver and Schäfer's adrenal extract containing adrenaline, the substance causing the physiological changes, was the first hormone to be discovered. The term hormone would later be coined by Starling. Bayliss and Starling (1902) William Bayliss and Ernest Starling, a physiologist and biologist, respectively, wanted to see if the nervous system had an impact on the digestive system. They knew that the pancreas was involved in the secretion of digestive fluids after the passage of food from the stomach to the intestines, which they believed to be due to the nervous system. They cut the nerves to the pancreas in an animal model and discovered that it was not nerve impulses that controlled secretion from the pancreas. It was determined that a factor secreted from the intestines into the bloodstream was stimulating the pancreas to secrete digestive fluids. This was named secretin: a hormone. Types of signaling Hormonal effects are dependent on where they are released, as they can be released in different manners. Not all hormones are released from a cell and into the blood until it binds to a receptor on a target. The major types of hormone signaling are: Chemical classes As hormones are defined functionally, not structurally, they may have diverse chemical structures. Hormones occur in multicellular organisms (plants, animals, fungi, brown algae, and red algae). These compounds occur also in unicellular organisms, and may act as signaling molecules however there is no agreement that these molecules can be called hormones. Vertebrates Invertebrates Compared with vertebrates, insects and crustaceans possess a number of structurally unusual hormones such as the juvenile hormone, a sesquiterpenoid. Plants Examples include abscisic acid, auxin, cytokinin, ethylene, and gibberellin. Receptors Most hormones initiate a cellular response by initially binding to either cell surface receptors or intracellular receptors. A cell may have several different receptors that recognize the same hormone but activate different signal transduction pathways, or a cell may have several different receptors that recognize different hormones and activate the same biochemical pathway. 
Receptors for most peptide as well as many eicosanoid hormones are embedded in the cell membrane as cell surface receptors, and the majority of these belong to the G protein-coupled receptor (GPCR) class of seven alpha helix transmembrane proteins. The interaction of hormone and receptor typically triggers a cascade of secondary effects within the cytoplasm of the cell, described as signal transduction, often involving phosphorylation or dephosphorylation of various other cytoplasmic proteins, changes in ion channel permeability, or increased concentrations of intracellular molecules that may act as secondary messengers (e.g., cyclic AMP). Some protein hormones also interact with intracellular receptors located in the cytoplasm or nucleus by an intracrine mechanism. For steroid or thyroid hormones, their receptors are located inside the cell within the cytoplasm of the target cell. These receptors belong to the nuclear receptor family of ligand-activated transcription factors. To bind their receptors, these hormones must first cross the cell membrane. They can do so because they are lipid-soluble. The combined hormone-receptor complex then moves across the nuclear membrane into the nucleus of the cell, where it binds to specific DNA sequences, regulating the expression of certain genes, and thereby increasing the levels of the proteins encoded by these genes. However, it has been shown that not all steroid receptors are located inside the cell. Some are associated with the plasma membrane. Effects in humans Hormones have the following effects on the body: stimulation or inhibition of growth wake-sleep cycle and other circadian rhythms mood swings induction or suppression of apoptosis (programmed cell death) activation or inhibition of the immune system regulation of metabolism preparation of the body for mating, fighting, fleeing, and other activity preparation of the body for a new phase of life, such as puberty, parenting, and menopause control of the reproductive cycle hunger cravings A hormone may also regulate the production and release of other hormones. Hormone signals control the internal environment of the body through homeostasis. Regulation The rate of hormone biosynthesis and secretion is often regulated by a homeostatic negative feedback control mechanism. Such a mechanism depends on factors that influence the metabolism and excretion of hormones. Thus, higher hormone concentration alone cannot trigger the negative feedback mechanism. Negative feedback must be triggered by overproduction of an "effect" of the hormone. Hormone secretion can be stimulated and inhibited by: Other hormones (stimulating- or releasing -hormones) Plasma concentrations of ions or nutrients, as well as binding globulins Neurons and mental activity Environmental changes, e.g., of light or temperature One special group of hormones is the tropic hormones that stimulate the hormone production of other endocrine glands. For example, thyroid-stimulating hormone (TSH) causes growth and increased activity of another endocrine gland, the thyroid, which increases output of thyroid hormones. To release active hormones quickly into the circulation, hormone biosynthetic cells may produce and store biologically inactive hormones in the form of pre- or prohormones. These can then be quickly converted into their active hormone form in response to a particular stimulus. Eicosanoids are considered to act as local hormones. 
They are considered to be "local" because they possess specific effects on target cells close to their site of formation. They also have a rapid degradation cycle, making sure they do not reach distant sites within the body. Hormones are also regulated by receptor agonists. Hormones are ligands, which are any kinds of molecules that produce a signal by binding to a receptor site on a protein. Hormone effects can be inhibited, thus regulated, by competing ligands that bind to the same target receptor as the hormone in question. When a competing ligand is bound to the receptor site, the hormone is unable to bind to that site and is unable to elicit a response from the target cell. These competing ligands are called antagonists of the hormone. Therapeutic use Many hormones and their structural and functional analogs are used as medication. The most commonly prescribed hormones are estrogens and progestogens (as methods of hormonal contraception and as HRT), thyroxine (as levothyroxine, for hypothyroidism) and steroids (for autoimmune diseases and several respiratory disorders). Insulin is used by many diabetics. Local preparations for use in otolaryngology often contain pharmacologic equivalents of adrenaline, while steroid and vitamin D creams are used extensively in dermatological practice. A "pharmacologic dose" or "supraphysiological dose" of a hormone is a medical usage referring to an amount of a hormone far greater than naturally occurs in a healthy body. The effects of pharmacologic doses of hormones may be different from responses to naturally occurring amounts and may be therapeutically useful, though not without potentially adverse side effects. An example is the ability of pharmacologic doses of glucocorticoids to suppress inflammation. Hormone-behavior interactions At the neurological level, behavior can be inferred based on hormone concentration, which in turn is influenced by hormone-release patterns; the numbers and locations of hormone receptors; and the efficiency of hormone receptors for those involved in gene transcription. Hormone concentration does not incite behavior, as that would undermine other external stimuli; however, it influences the system by increasing the probability that a certain event will occur. Not only can hormones influence behavior, but also behavior and the environment can influence hormone concentration. Thus, a feedback loop is formed, meaning behavior can affect hormone concentration, which in turn can affect behavior, which in turn can affect hormone concentration, and so on. For example, hormone-behavior feedback loops are essential in providing constancy to episodic hormone secretion, as the behaviors affected by episodically secreted hormones directly prevent the continuous release of said hormones. Three broad stages of reasoning may be used to determine if a specific hormone-behavior interaction is present within a system: The frequency of occurrence of a hormonally dependent behavior should correspond to that of its hormonal source. A hormonally dependent behavior is not expected if the hormonal source (or its types of action) is non-existent. The reintroduction of a missing behaviorally dependent hormonal source (or its types of action) is expected to bring back the absent behavior.
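The competing-ligand (antagonist) mechanism described above is commonly summarized by the competitive-binding relation for fractional receptor occupancy. A small sketch of that relation; the concentrations and dissociation constants below are arbitrary illustrative numbers:

```python
# Fractional receptor occupancy by a hormone in the presence of a competing
# ligand, using the standard competitive-binding relation.
def occupancy(hormone: float, kd_hormone: float,
              competitor: float = 0.0, kd_competitor: float = 1.0) -> float:
    """Fraction of receptors bound by the hormone."""
    a = hormone / kd_hormone          # hormone concentration relative to its Kd
    b = competitor / kd_competitor    # competitor concentration relative to its Kd
    return a / (1.0 + a + b)

print(occupancy(1.0, 1.0))                   # ~0.50: half the receptors bound, no competitor
print(occupancy(1.0, 1.0, competitor=9.0))   # ~0.09: the antagonist occupies most receptors
```

Raising the competitor concentration lowers the hormone's receptor occupancy without changing the hormone level itself, which is the sense in which antagonists regulate a hormone's effect.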
Comparison with neurotransmitters Though colloquially oftentimes used interchangeably, there are various clear distinctions between hormones and neurotransmitters: A hormone can perform functions over a larger spatial and temporal scale than can a neurotransmitter, which often acts in micrometer-scale distances. Hormonal signals can travel virtually anywhere in the circulatory system, whereas neural signals are restricted to pre-existing nerve tracts. Assuming the travel distance is equivalent, neural signals can be transmitted much more quickly (in the range of milliseconds) than can hormonal signals (in the range of seconds, minutes, or hours). Neural signals can be sent at speeds up to 100 meters per second. Neural signalling is an all-or-nothing (digital) action, whereas hormonal signalling is an action that can be continuously variable as it is dependent upon hormone concentration. Neurohormones are a type of hormone that share a commonality with neurotransmitters. They are produced by endocrine cells that receive input from neurons, or neuroendocrine cells. Both classic hormones and neurohormones are secreted by endocrine tissue; however, neurohormones are the result of a combination between endocrine reflexes and neural reflexes, creating a neuroendocrine pathway. While endocrine pathways produce chemical signals in the form of hormones, the neuroendocrine pathway involves the electrical signals of neurons. In this pathway, the result of the electrical signal produced by a neuron is the release of a chemical, which is the neurohormone. Finally, like a classic hormone, the neurohormone is released into the bloodstream to reach its target. Binding proteins Hormone transport and the involvement of binding proteins is an essential aspect when considering the function of hormones. The formation of a complex with a binding protein has several benefits: the effective half-life of the bound hormone is increased, and a reservoir of bound hormones is created, which evens the variations in concentration of unbound hormones (bound hormones will replace the unbound hormones when these are eliminated). An example of the usage of hormone-binding proteins is in the thyroxine-binding protein which carries up to 80% of all thyroxine in the body, a crucial element in regulating the metabolic rate. See also Autocrine signaling Adipokine Cytokine Hepatokine Endocrine disease Endocrine system Endocrinology Environmental hormones Growth factor Intracrine List of investigational sex-hormonal agents Metabolomics Myokine Neohormone Neuroendocrinology Paracrine signaling Plant hormones, a.k.a. plant growth regulators Semiochemical Sex-hormonal agent Sexual motivation and hormones Xenohormone List of human hormones References External links HMRbase: A database of hormones and their receptors Physiology Endocrinology Cell signaling Signal transduction Human female endocrine system
Hormone
[ "Chemistry", "Biology" ]
4,080
[ "Biochemistry", "Neurochemistry", "Physiology", "Signal transduction" ]
13,435
https://en.wikipedia.org/wiki/Hydrology
Hydrology is the scientific study of the movement, distribution, and management of water on Earth and other planets, including the water cycle, water resources, and drainage basin sustainability. A practitioner of hydrology is called a hydrologist. Hydrologists are scientists studying earth or environmental science, civil or environmental engineering, and physical geography. Using various analytical methods and scientific techniques, they collect and analyze data to help solve water-related problems such as environmental preservation, natural disasters, and water management. Hydrology subdivides into surface water hydrology, groundwater hydrology (hydrogeology), and marine hydrology. Domains of hydrology include hydrometeorology, surface hydrology, hydrogeology, drainage-basin management, and water quality. Oceanography and meteorology are not included because water is only one of many important aspects within those fields. Hydrological research can inform environmental engineering, policy, and planning. Branches Chemical hydrology is the study of the chemical characteristics of water. Ecohydrology is the study of interactions between organisms and the hydrologic cycle. Hydrogeology is the study of the presence and movement of groundwater. Hydrogeochemistry is the study of how terrestrial water dissolves minerals through weathering, and the effect of this on water chemistry. Hydroinformatics is the adaptation of information technology to hydrology and water resources applications. Hydrometeorology is the study of the transfer of water and energy between land and water body surfaces and the lower atmosphere. Isotope hydrology is the study of the isotopic signatures of water. Surface hydrology is the study of hydrologic processes that operate at or near Earth's surface. Drainage basin management covers water storage, in the form of reservoirs, and flood protection. Water quality includes the chemistry of water in rivers and lakes, both of pollutants and natural solutes. Applications Calculation of rainfall. Calculation of evapotranspiration. Calculating surface runoff and precipitation. Determining the water balance of a region. Determining the agricultural water balance. Designing riparian-zone restoration projects. Mitigating and predicting flood, landslide and drought risk. Real-time flood forecasting, flood warning, flood frequency analysis. Designing irrigation schemes and managing agricultural productivity. Part of the hazard module in catastrophe modeling. Providing drinking water. Designing dams for water supply or hydroelectric power generation. Designing bridges. Designing sewers and urban drainage systems. Analyzing the impacts of antecedent moisture on sanitary sewer systems. Predicting geomorphologic changes, such as erosion or sedimentation. Assessing the impacts of natural and anthropogenic environmental change on water resources. Assessing contaminant transport risk and establishing environmental policy guidelines. Estimating the water resource potential of river basins. Water resources management. Water resources engineering - application of hydrological and hydraulic principles to the planning, development, and management of water resources for beneficial human use. It involves assessing water availability, quality, and demand; designing and operating water infrastructure; and implementing strategies for sustainable water management. History Hydrology has been subject to investigation and engineering for millennia.
Ancient Egyptians were one of the first to employ hydrology in their engineering and agriculture, inventing a form of water management known as basin irrigation. Mesopotamian towns were protected from flooding with high earthen walls. Aqueducts were built by the Greeks and Romans, while history shows that the Chinese built irrigation and flood control works. The ancient Sinhalese used hydrology to build complex irrigation works in Sri Lanka, also known for the invention of the Valve Pit which allowed construction of large reservoirs, anicuts and canals which still function. Marcus Vitruvius, in the first century BC, described a philosophical theory of the hydrologic cycle, in which precipitation falling in the mountains infiltrated the Earth's surface and led to streams and springs in the lowlands. With the adoption of a more scientific approach, Leonardo da Vinci and Bernard Palissy independently reached an accurate representation of the hydrologic cycle. It was not until the 17th century that hydrologic variables began to be quantified. Pioneers of the modern science of hydrology include Pierre Perrault, Edme Mariotte and Edmund Halley. By measuring rainfall, runoff, and drainage area, Perrault showed that rainfall was sufficient to account for the flow of the Seine. Mariotte combined velocity and river cross-section measurements to obtain a discharge value, again in the Seine. Halley showed that the evaporation from the Mediterranean Sea was sufficient to account for the outflow of rivers flowing into the sea. Advances in the 18th century included the Bernoulli piezometer and Bernoulli's equation, by Daniel Bernoulli, and the Pitot tube, by Henri Pitot. The 19th century saw development in groundwater hydrology, including Darcy's law, the Dupuit-Thiem well formula, and Hagen-Poiseuille's capillary flow equation. Rational analyses began to replace empiricism in the 20th century, while governmental agencies began their own hydrological research programs. Of particular importance were Leroy Sherman's unit hydrograph, the infiltration theory of Robert E. Horton, and C.V. Theis' aquifer test/equation describing well hydraulics. Since the 1950s, hydrology has been approached with a more theoretical basis than in the past, facilitated by advances in the physical understanding of hydrological processes and by the advent of computers and especially geographic information systems (GIS). (See also GIS and hydrology) Themes The central theme of hydrology is that water circulates throughout the Earth through different pathways and at different rates. The most vivid image of this is in the evaporation of water from the ocean, which forms clouds. These clouds drift over the land and produce rain. The rainwater flows into lakes, rivers, or aquifers. The water in lakes, rivers, and aquifers then either evaporates back to the atmosphere or eventually flows back to the ocean, completing a cycle. Water changes its state of being several times throughout this cycle. The areas of research within hydrology concern the movement of water between its various states, or within a given state, or simply quantifying the amounts in these states in a given region. Parts of hydrology concern developing methods for directly measuring these flows or amounts of water, while others concern modeling these processes either for scientific knowledge or for making a prediction in practical applications. Groundwater Ground water is water beneath Earth's surface, often pumped for drinking water. 
Groundwater hydrology (hydrogeology) considers quantifying groundwater flow and solute transport. Problems in describing the saturated zone include the characterization of aquifers in terms of flow direction, groundwater pressure and, by inference, groundwater depth (see: aquifer test). Measurements here can be made using a piezometer. Aquifers are also described in terms of hydraulic conductivity, storativity and transmissivity. There are a number of geophysical methods for characterizing aquifers. There are also problems in characterizing the vadose zone (unsaturated zone). Infiltration Infiltration is the process by which water enters the soil. Some of the water is absorbed, and the rest percolates down to the water table. The infiltration capacity, the maximum rate at which the soil can absorb water, depends on several factors. The layer that is already saturated provides a resistance that is proportional to its thickness, while that plus the depth of water above the soil provides the driving force (hydraulic head). Dry soil can allow rapid infiltration by capillary action; this force diminishes as the soil becomes wet. Compaction reduces the porosity and the pore sizes. Surface cover increases capacity by retarding runoff, reducing compaction and other processes. Higher temperatures reduce viscosity, increasing infiltration. Soil moisture Soil moisture can be measured in various ways; by capacitance probe, time domain reflectometer or tensiometer. Other methods include solute sampling and geophysical methods. Surface water flow Hydrology considers quantifying surface water flow and solute transport, although the treatment of flows in large rivers is sometimes considered as a distinct topic of hydraulics or hydrodynamics. Surface water flow can include flow both in recognizable river channels and otherwise. Methods for measuring flow once the water has reached a river include the stream gauge (see: discharge), and tracer techniques. Other topics include chemical transport as part of surface water, sediment transport and erosion. One of the important areas of hydrology is the interchange between rivers and aquifers. Groundwater/surface water interactions in streams and aquifers can be complex and the direction of net water flux (into surface water or into the aquifer) may vary spatially along a stream channel and over time at any particular location, depending on the relationship between stream stage and groundwater levels. Precipitation and evaporation In some considerations, hydrology is thought of as starting at the land-atmosphere boundary and so it is important to have adequate knowledge of both precipitation and evaporation. Precipitation can be measured in various ways: disdrometer for precipitation characteristics at a fine time scale; radar for cloud properties, rain rate estimation, hail and snow detection; rain gauge for routine accurate measurements of rain and snowfall; satellite for rainy area identification, rain rate estimation, land-cover/land-use, and soil moisture, snow cover or snow water equivalent for example. Evaporation is an important part of the water cycle. It is partly affected by humidity, which can be measured by a sling psychrometer. It is also affected by the presence of snow, hail, and ice and can relate to dew, mist and fog. Hydrology considers evaporation of various forms: from water surfaces; as transpiration from plant surfaces in natural and agronomic ecosystems. Direct measurement of evaporation can be obtained using Simon's evaporation pan. 
Detailed studies of evaporation involve boundary layer considerations as well as momentum, heat flux, and energy budgets. Remote sensing Remote sensing of hydrologic processes can provide information on locations where in situ sensors may be unavailable or sparse. It also enables observations over large spatial extents. Many of the variables constituting the terrestrial water balance, for example surface water storage, soil moisture, precipitation, evapotranspiration, and snow and ice, are measurable using remote sensing at various spatial-temporal resolutions and accuracies. Sources of remote sensing include land-based sensors, airborne sensors and satellite sensors which can capture microwave, thermal and near-infrared data or use lidar, for example. Water quality In hydrology, studies of water quality concern organic and inorganic compounds, and both dissolved and sediment material. In addition, water quality is affected by the interaction of dissolved oxygen with organic material and various chemical transformations that may take place. Measurements of water quality may involve either in-situ methods, in which analyses take place on-site, often automatically, and laboratory-based analyses and may include microbiological analysis. Integrating measurement and modelling Budget analyses Parameter estimation Scaling in time and space Data assimilation Quality control of data – see for example Double mass analysis Prediction Observations of hydrologic processes are used to make predictions of the future behavior of hydrologic systems (water flow, water quality). One of the major current concerns in hydrologic research is "Prediction in Ungauged Basins" (PUB), i.e. in basins where no or only very few data exist. Statistical hydrology The aims of Statistical hydrology is to provide appropriate statistical methods for analyzing and modeling various parts of the hydrological cycle. By analyzing the statistical properties of hydrologic records, such as rainfall or river flow, hydrologists can estimate future hydrologic phenomena. When making assessments of how often relatively rare events will occur, analyses are made in terms of the return period of such events. Other quantities of interest include the average flow in a river, in a year or by season. These estimates are important for engineers and economists so that proper risk analysis can be performed to influence investment decisions in future infrastructure and to determine the yield reliability characteristics of water supply systems. Statistical information is utilized to formulate operating rules for large dams forming part of systems which include agricultural, industrial and residential demands. Modeling Hydrological models are simplified, conceptual representations of a part of the hydrologic cycle. They are primarily used for hydrological prediction and for understanding hydrological processes, within the general field of scientific modeling. Two major types of hydrological models can be distinguished: Models based on data. These models are black box systems, using mathematical and statistical concepts to link a certain input (for instance rainfall) to the model output (for instance runoff). Commonly used techniques are regression, transfer functions, and system identification. The simplest of these models may be linear models, but it is common to deploy non-linear components to represent some general aspects of a catchment's response without going deeply into the real physical processes involved. 
An example of such an aspect is the well-known behavior that a catchment will respond much more quickly and strongly when it is already wet than when it is dry. Models based on process descriptions. These models try to represent the physical processes observed in the real world. Typically, such models contain representations of surface runoff, subsurface flow, evapotranspiration, and channel flow, but they can be far more complicated. Within this category, models can be divided into conceptual and deterministic. Conceptual models link simplified representations of the hydrological processes in an area, whereas deterministic models seek to resolve as much of the physics of a system as possible. These models can be subdivided into single-event models and continuous simulation models. Recent research in hydrological modeling tries to have a more global approach to the understanding of the behavior of hydrologic systems to make better predictions and to face the major challenges in water resources management. Transport Water movement is a significant means by which other materials, such as soil, gravel, boulders or pollutants, are transported from place to place. Initial input to receiving waters may arise from a point source discharge or a line source or area source, such as surface runoff. Since the 1960s rather complex mathematical models have been developed, facilitated by the availability of high-speed computers. The most common pollutant classes analyzed are nutrients, pesticides, total dissolved solids and sediment. Organizations Intergovernmental organizations International Hydrological Programme (IHP) International research bodies International Water Management Institute (IWMI) UN-IHE Delft Institute for Water Education National research bodies Centre for Ecology and Hydrology – UK Centre for Water Science, Cranfield University, UK eawag – aquatic research, ETH Zürich, Switzerland Institute of Hydrology, Albert-Ludwigs-University of Freiburg, Germany United States Geological Survey – Water Resources of the United States NOAA's National Weather Service – Office of Hydrologic Development, US US Army Corps of Engineers Hydrologic Engineering Center, US Hydrologic Research Center, US NOAA Economics and Social Sciences, United States University of Oklahoma Center for Natural Hazards and Disasters Research, US National Hydrology Research Centre, Canada National Institute of Hydrology, India National and international societies American Institute of Hydrology (AIH) Geological Society of America (GSA) – Hydrogeology Division American Geophysical Union (AGU) – Hydrology Section National Ground Water Association (NGWA) American Water Resources Association Consortium of Universities for the Advancement of Hydrologic Science, Inc. 
(CUAHSI) International Association of Hydrological Sciences (IAHS) Statistics in Hydrology Working Group (subgroup of IAHS) German Hydrological Society (DHG: Deutsche Hydrologische Gesellschaft) Italian Hydrological Society (SII-IHS) – Società Idrologica Italiana Nordic Association for Hydrology British Hydrological Society Russian Geographical Society (Moscow Center) – Hydrology Commission International Association for Environmental Hydrology International Association of Hydrogeologists Society of Hydrologists and Meteorologists – Nepal Basin- and catchment-wide overviews Connected Waters Initiative, University of New South Wales – Investigating and raising awareness of groundwater and water resource issues in Australia Murray Darling Basin Initiative, Department of Environment and Heritage, Australia Research journals International Journal of Hydrology Science and Technology Hydrological Processes, (electronic) 0885-6087 (paper), John Wiley & Sons Hydrology Research, , IWA Publishing (formerly Nordic Hydrology) Journal of Hydroinformatics, , IWA Publishing Journal of Hydrologic Engineering, , ASCE Publication Journal of Hydrology Water Research Water Resources Research Hydrological Sciences Journal - Journal of the International Association of Hydrological Sciences (IAHS) (Print), (Online) Hydrology and Earth System Sciences Journal of Hydrometeorology See also Aqueous solution Climatology Environmental engineering science Geological Engineering Green Kenue – a software tool for hydrologic modellers Hydraulics HydroCAD – hydrology and hydraulics modeling software Hydrography Hydrology (agriculture) International Hydrological Programme Nash–Sutcliffe model efficiency coefficient Outline of hydrology Potamal Socio-hydrology Soil science Water distribution on Earth WEAP (Water Evaluation And Planning) software to model catchment hydrology from climate and land use data Catchment hydrology Other water-related fields Oceanography is the more general study of water in the oceans and estuaries. Meteorology is the more general study of the atmosphere and of weather, including precipitation as snow and rainfall. Limnology is the study of lakes, rivers and wetlands ecosystems. It covers the biological, chemical, physical, geological, and other attributes of all inland waters (running and standing waters, both fresh and saline, natural or man-made). Water resources are sources of water that are useful or potentially useful. Hydrology studies the availability of those resources, but usually not their uses. References Further reading Eslamian, S., 2014, (ed.) Handbook of Engineering Hydrology, Vol. 1: Fundamentals and Applications, Francis and Taylor, CRC Group, 636 Pages, USA. Eslamian, S., 2014, (ed.) Handbook of Engineering Hydrology, Vol. 2: Modeling, Climate Change and Variability, Francis and Taylor, CRC Group, 646 Pages, USA. Eslamian, S, 2014, (ed.) Handbook of Engineering Hydrology, Vol. 3: Environmental Hydrology and Water Management, Francis and Taylor, CRC Group, 606 Pages, USA. External links Hydrology.nl – Portal to international hydrology and water resources Decision tree to choose an uncertainty method for hydrological and hydraulic modelling (archived 1 June 2013) Experimental Hydrology Wiki Hydraulic engineering Environmental engineering Environmental science Physical geography
Hydrology
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
3,812
[ "Hydrology", "Chemical engineering", "Physical systems", "Hydraulics", "Civil engineering", "nan", "Environmental engineering", "Hydraulic engineering" ]
13,465
https://en.wikipedia.org/wiki/Holmium
Holmium is a chemical element; it has symbol Ho and atomic number 67. It is a rare-earth element and the eleventh member of the lanthanide series. It is a relatively soft, silvery, fairly corrosion-resistant and malleable metal. Like many other lanthanides, holmium is too reactive to be found in native form, as pure holmium slowly forms a yellowish oxide coating when exposed to air. When isolated, holmium is relatively stable in dry air at room temperature. However, it reacts with water and corrodes readily, and also burns in air when heated. In nature, holmium occurs together with the other rare-earth metals (like thulium). It is a relatively rare lanthanide, making up 1.4 parts per million of the Earth's crust, an abundance similar to tungsten. Holmium was discovered through isolation by Swedish chemist Per Theodor Cleve. It was also independently discovered by Jacques-Louis Soret and Marc Delafontaine, who together observed it spectroscopically in 1878. Its oxide was first isolated from rare-earth ores by Cleve in 1878. The element's name comes from Holmia, the Latin name for the city of Stockholm. Like many other lanthanides, holmium is found in the minerals monazite and gadolinite and is usually commercially extracted from monazite using ion-exchange techniques. Its compounds in nature and in nearly all of its laboratory chemistry are trivalently oxidized, containing Ho(III) ions. Trivalent holmium ions have fluorescent properties similar to many other rare-earth ions (while yielding their own set of unique emission light lines), and thus are used in the same way as some other rare earths in certain laser and glass-colorant applications. Holmium has the highest magnetic permeability and magnetic saturation of any element and is thus used for the pole pieces of the strongest static magnets. Because holmium strongly absorbs neutrons, it is also used as a burnable poison in nuclear reactors. Properties Holmium is the eleventh member of the lanthanide series. In the periodic table, it appears in period 6, between the lanthanides dysprosium to its left and erbium to its right, and above the actinide einsteinium. Physical properties With a boiling point of , holmium is the sixth most volatile lanthanide after ytterbium, europium, samarium, thulium and dysprosium. At standard temperature and pressure, holmium, like many of the second half of the lanthanides, normally assumes a hexagonally close-packed (hcp) structure. Its 67 electrons are arranged in the configuration [Xe] 4f11 6s2, so that it has thirteen valence electrons filling the 4f and 6s subshells. Holmium, like all of the lanthanides, is paramagnetic at standard temperature and pressure. However, holmium is ferromagnetic at temperatures below . It has the highest magnetic moment () of any naturally occurring element and possesses other unusual magnetic properties. When combined with yttrium, it forms highly magnetic compounds. Chemical properties Holmium metal tarnishes slowly in air, forming a yellowish oxide layer that has an appearance similar to that of iron rust. It burns readily to form holmium(III) oxide: 4 Ho + 3 O2 → 2 Ho2O3 It is a relatively soft and malleable element that is fairly corrosion-resistant and chemically stable in dry air at standard temperature and pressure. In moist air and at higher temperatures, however, it quickly oxidizes, forming a yellowish oxide. In pure form, holmium possesses a metallic, bright silvery luster. 
Holmium is quite electropositive: on the Pauling electronegativity scale, it has an electronegativity of 1.23. It is generally trivalent. It reacts slowly with cold water and quickly with hot water to form holmium(III) hydroxide: 2 Ho (s) + 6 H2O (l) → 2 Ho(OH)3 (aq) + 3 H2 (g) Holmium metal reacts with all the stable halogens: 2 Ho (s) + 3 F2 (g) → 2 HoF3 (s) [pink] 2 Ho (s) + 3 Cl2 (g) → 2 HoCl3 (s) [yellow] 2 Ho (s) + 3 Br2 (g) → 2 HoBr3 (s) [yellow] 2 Ho (s) + 3 I2 (g) → 2 HoI3 (s) [yellow] Holmium dissolves readily in dilute sulfuric acid to form solutions containing the yellow Ho(III) ions, which exist as [Ho(OH2)9]3+ complexes: 2 Ho (s) + 3 H2SO4 (aq) → 2 Ho3+ (aq) + 3 SO42− (aq) + 3 H2 (g) Oxidation states As with many lanthanides, holmium is usually found in the +3 oxidation state, forming compounds such as holmium(III) fluoride (HoF3) and holmium(III) chloride (HoCl3). Holmium in solution is in the form of Ho3+ surrounded by nine molecules of water. Holmium dissolves in acids. However, holmium is also found to exist in +2, +1 and 0 oxidation states. Isotopes The isotopes of holmium range from 140Ho to 175Ho. The primary decay mode before the most abundant stable isotope, 165Ho, is positron emission, and the primary mode after is beta minus decay. The primary decay products before 165Ho are terbium and dysprosium isotopes, and the primary products after are erbium isotopes. Natural holmium consists of one primordial isotope, holmium-165; it is the only isotope of holmium that is thought to be stable, although it is predicted to undergo alpha decay to terbium-161 with a very long half-life. Of the 35 synthetic radioactive isotopes that are known, the most stable one is holmium-163 (163Ho), with a half-life of 4570 years. All other radioisotopes have ground-state half-lives not greater than 1.117 days, with the longest, holmium-166 (166Ho), having a half-life of 26.83 hours, and most have half-lives under 3 hours. 166m1Ho has a half-life of around 1200 years. The high excitation energy, resulting in a particularly rich spectrum of decay gamma rays produced when the metastable state de-excites, makes this isotope useful as a means for calibrating gamma ray spectrometers. Compounds Oxides and chalcogenides Holmium(III) oxide is the only oxide of holmium. It changes its color depending on the lighting conditions. In daylight, it has a yellowish color. Under trichromatic light, it appears orange red, almost indistinguishable from the appearance of erbium oxide under the same lighting conditions. The color change is related to the sharp emission lines of trivalent holmium ions acting as red phosphors. Holmium(III) oxide appears pink under a cold-cathode fluorescent lamp. Other chalcogenides are known for holmium. Holmium(III) sulfide has orange-yellow crystals in the monoclinic crystal system, with the space group P21/m (No. 11). Under high pressure, holmium(III) sulfide can form in the cubic and orthorhombic crystal systems. It can be obtained by the reaction of holmium(III) oxide and hydrogen sulfide at . Holmium(III) selenide is also known. It is antiferromagnetic below 6 K. Halides All four trihalides of holmium are known. Holmium(III) fluoride is a yellowish powder that can be produced by reacting holmium(III) oxide and ammonium fluoride, then crystallising it from the ammonium salt formed in solution. Holmium(III) chloride can be prepared in a similar way, with ammonium chloride instead of ammonium fluoride. It has the YCl3 layer structure in the solid state. 
These compounds, as well as holmium(III) bromide and holmium(III) iodide, can be obtained by the direct reaction of the elements: 2 Ho + 3 X2 → 2 HoX3 In addition, holmium(III) iodide can be obtained by the direct reaction of holmium and mercury(II) iodide, then removing the mercury by distillation. Organoholmium compounds Organoholmium compounds are very similar to those of the other lanthanides, as they all share an inability to undergo π backbonding. They are thus mostly restricted to the mostly ionic cyclopentadienides (isostructural with those of lanthanum) and the σ-bonded simple alkyls and aryls, some of which may be polymeric. History Holmium (from Holmia, the Latin name for Stockholm) was discovered by the Swiss chemists Jacques-Louis Soret and Marc Delafontaine in 1878, who noticed the aberrant spectrographic emission spectrum of the then-unknown element (they called it "Element X"). The Swedish chemist Per Teodor Cleve also independently discovered the element while he was working on erbia earth (erbium oxide). He was the first to isolate the new element. Using the method developed by the Swedish chemist Carl Gustaf Mosander, Cleve first removed all of the known contaminants from erbia. The result of that effort was two new materials, one brown and one green. He named the brown substance holmia (after the Latin name for Cleve's home town, Stockholm) and the green one thulia. Holmia was later found to be holmium oxide, and thulia was thulium oxide. In the English physicist Henry Moseley's classic paper on atomic numbers, holmium was assigned the value 66. The holmium preparation he had been given to investigate had been impure, dominated by neighboring (at the time undiscovered) dysprosium. He would have seen x-ray emission lines for both elements, but assumed that the dominant ones belonged to holmium, instead of the dysprosium impurity. Occurrence and production Like all the other rare-earth elements, holmium is not naturally found as a free element. It occurs combined with other elements in gadolinite, monazite and other rare-earth minerals. No holmium-dominant mineral has yet been found. The main mining areas are China, United States, Brazil, India, Sri Lanka, and Australia, with reserves of holmium estimated as 400,000 tonnes. The annual production of holmium metal is about 10 tonnes. Holmium makes up 1.3 parts per million of the Earth's crust by mass. Holmium makes up 1 part per million of the soils, 400 parts per quadrillion of seawater, and almost none of Earth's atmosphere, which is very rare for a lanthanide. It makes up 500 parts per trillion of the universe by mass. Holmium is commercially extracted by ion exchange from monazite sand (0.05% holmium), but is still difficult to separate from other rare earths. The element has been isolated through the reduction of its anhydrous chloride or fluoride with metallic calcium. Its estimated abundance in the Earth's crust is 1.3 mg/kg. Holmium obeys the Oddo–Harkins rule: as an odd-numbered element, it is less abundant than both dysprosium and erbium. However, it is the most abundant of the odd-numbered heavy lanthanides. Of the lanthanides, only promethium, thulium, lutetium and terbium are less abundant on Earth. The principal current sources are some of the ion-adsorption clays of southern China. Some of these have a rare-earth composition similar to that found in xenotime or gadolinite. Yttrium makes up about two-thirds of the total by mass; holmium is around 1.5%. 
Holmium is relatively inexpensive for a rare-earth metal with the price about 1000 USD/kg. Applications Glass containing holmium oxide and holmium oxide solutions (usually in perchloric acid) have sharp optical absorption peaks in the spectral range 200 to 900 nm. They are therefore used as a calibration standard for optical spectrophotometers. The radioactive but long-lived 166m1Ho is used in calibration of gamma-ray spectrometers. Holmium is used to create the strongest artificially generated magnetic fields, when placed within high-strength magnets as a magnetic pole piece (also called a magnetic flux concentrator). Holmium is also used in the manufacture of some permanent magnets. Holmium-doped yttrium iron garnet (YIG) and yttrium lithium fluoride have applications in solid-state lasers, and Ho-YIG has applications in optical isolators and in microwave equipment (e.g., YIG spheres). Holmium lasers emit at 2.1 micrometres. They are used in medical, dental, and fiber-optical applications. It is also being considered for usage in the enucleation of the prostate. Since holmium can absorb nuclear fission-bred neutrons, it is used as a burnable poison to regulate nuclear reactors. It is used as a colorant for cubic zirconia, providing pink coloring, and for glass, providing yellow-orange coloring. In March 2017, IBM announced that they had developed a technique to store one bit of data on a single holmium atom set on a bed of magnesium oxide. With sufficient quantum and classical control techniques, holmium may be a good candidate to make quantum computers. Holmium is used in the medical field, particularly in laser surgery for procedures such as kidney stone removal and prostate treatment, due to its precision and minimal tissue damage. Its isotope, holmium-166, is applied in targeted cancer therapies, especially for liver cancer, and it also enhances MRI imaging as a contrast agent. Biological role and precautions Holmium plays no biological role in humans, but its salts are able to stimulate metabolism. Humans typically consume about a milligram of holmium a year. Plants do not readily take up holmium from the soil. Some vegetables have had their holmium content measured, and it amounted to 100 parts per trillion. Holmium and its soluble salts are slightly toxic if ingested, but insoluble holmium salts are nontoxic. Metallic holmium in dust form presents a fire and explosion hazard. Large amounts of holmium salts can cause severe damage if inhaled, consumed orally, or injected. The biological effects of holmium over a long period of time are not known. Holmium has a low level of acute toxicity. See also :Category:Holmium compounds Period 6 element References Bibliography Further reading R. J. Callow, The Industrial Chemistry of the Lanthanons, Yttrium, Thorium, and Uranium, Pergamon Press, 1967. External links Holmium at The Periodic Table of Videos (University of Nottingham) Chemical elements Chemical elements with hexagonal close-packed structure Ferromagnetic materials Lanthanides Neutron poisons Reducing agents
Holmium
[ "Physics", "Chemistry" ]
3,269
[ "Chemical elements", "Redox", "Ferromagnetic materials", "Reducing agents", "Materials", "Atoms", "Matter" ]
13,466
https://en.wikipedia.org/wiki/Hafnium
Hafnium is a chemical element; it has symbol Hf and atomic number 72. A lustrous, silvery gray, tetravalent transition metal, hafnium chemically resembles zirconium and is found in many zirconium minerals. Its existence was predicted by Dmitri Mendeleev in 1869, though it was not identified until 1922, by Dirk Coster and George de Hevesy. Hafnium is named after , the Latin name for Copenhagen, where it was discovered. Hafnium is used in filaments and electrodes. Some semiconductor fabrication processes use its oxide for integrated circuits at 45 nanometers and smaller feature lengths. Some superalloys used for special applications contain hafnium in combination with niobium, titanium, or tungsten. Hafnium's large neutron capture cross section makes it a good material for neutron absorption in control rods in nuclear power plants, but at the same time requires that it be removed from the neutron-transparent corrosion-resistant zirconium alloys used in nuclear reactors. Characteristics Physical characteristics Hafnium is a shiny, silvery, ductile metal that is corrosion-resistant and chemically similar to zirconium in that they have the same number of valence electrons and are in the same group. Also, their relativistic effects are similar: The expected expansion of atomic radii from period 5 to 6 is almost exactly canceled out by the lanthanide contraction. Hafnium changes from its alpha form, a hexagonal close-packed lattice, to its beta form, a body-centered cubic lattice, at . The physical properties of hafnium metal samples are markedly affected by zirconium impurities, especially the nuclear properties, as these two elements are among the most difficult to separate because of their chemical similarity. A notable physical difference between these metals is their density, with zirconium having about one-half the density of hafnium. The most notable nuclear properties of hafnium are its high thermal neutron capture cross section and that the nuclei of several different hafnium isotopes readily absorb two or more neutrons apiece. In contrast with this, zirconium is practically transparent to thermal neutrons, and it is commonly used for the metal components of nuclear reactors—especially the cladding of their nuclear fuel rods. Chemical characteristics Hafnium reacts in air to form a protective film that inhibits further corrosion. Despite this, the metal is attacked by hydrofluoric acid and concentrated sulfuric acid, and can be oxidized with halogens or burnt in air. Like its sister metal zirconium, finely divided hafnium can ignite spontaneously in air. The metal is resistant to concentrated alkalis. As a consequence of lanthanide contraction, the chemistry of hafnium and zirconium is so similar that the two cannot be separated based on differing chemical reactions. The melting and boiling points of the compounds and the solubility in solvents are the major differences in the chemistry of these twin elements. Isotopes At least 40 isotopes of hafnium have been observed, ranging in mass number from 153 to 192. The five stable isotopes have mass numbers ranging from 176 to 180 inclusive. The radioactive isotopes' half-lives range from 400 ms for 153Hf to years for the most stable one, the primordial 174Hf. The extinct radionuclide 182Hf has a half-life of , and is an important tracker isotope for the formation of planetary cores. The nuclear isomer 178m2Hf was at the center of a controversy for several years regarding its potential use as a weapon. 
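The half-lives quoted above all follow the same exponential decay law; as a quick illustration, a minimal Python sketch of how a half-life translates into a surviving fraction (the numbers used here are placeholders, not values taken from this article):

def surviving_fraction(elapsed, half_life):
    """Fraction of a nuclide remaining after `elapsed` time, given its half-life
    (both arguments in the same time units)."""
    return 0.5 ** (elapsed / half_life)

# Example: after one half-life 50% remains; after ten half-lives, roughly 0.1%.
for n_half_lives in (1, 2, 10):
    print(n_half_lives, "half-lives ->", surviving_fraction(n_half_lives, 1.0))

This is why a primordial isotope such as 174Hf is still present on Earth while short-lived isotopes are not: the surviving fraction depends only on elapsed time measured in half-lives.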
Occurrence Hafnium is estimated to make up between 3.0 and 4.8 ppm of the Earth's upper crust by mass. It does not exist as a free element on Earth, but is found combined in solid solution with zirconium in natural zirconium compounds such as zircon, ZrSiO4, which usually has about 1–4% of the Zr replaced by Hf. Rarely, the Hf/Zr ratio increases during crystallization to give the isostructural mineral hafnon, with atomic Hf > Zr. An obsolete name for a variety of zircon containing unusually high Hf content is alvite. Major sources of zircon (and hence hafnium) ores are heavy mineral sands ore deposits, pegmatites (particularly in Brazil and Malawi), and carbonatite intrusions, particularly the Crown Polymetallic Deposit at Mount Weld, Western Australia. A potential source of hafnium is trachyte tuffs containing the rare zircon-hafnium silicates eudialyte or armstrongite, at Dubbo in New South Wales, Australia. Production The heavy mineral sands ore deposits of the titanium ores ilmenite and rutile yield most of the mined zirconium, and therefore also most of the hafnium. Zirconium is a good nuclear fuel-rod cladding metal, with the desirable properties of a very low neutron capture cross section and good chemical stability at high temperatures. However, because of hafnium's neutron-absorbing properties, hafnium impurities in zirconium would cause it to be far less useful for nuclear reactor applications. Thus, a nearly complete separation of zirconium and hafnium is necessary for their use in nuclear power. The production of hafnium-free zirconium is the main source of hafnium. The chemical properties of hafnium and zirconium are nearly identical, which makes the two difficult to separate. The methods first used—fractional crystallization of ammonium fluoride salts or the fractional distillation of the chloride—have not proven suitable for industrial-scale production. After zirconium was chosen as a material for nuclear reactor programs in the 1940s, a separation method had to be developed. Liquid–liquid extraction processes with a wide variety of solvents were developed and are still used for producing hafnium. About half of all hafnium metal manufactured is produced as a by-product of zirconium refinement. The end product of the separation is hafnium(IV) chloride. The purified hafnium(IV) chloride is converted to the metal by reduction with magnesium or sodium, as in the Kroll process: HfCl4 + 2 Mg → Hf + 2 MgCl2 (at about 1100 °C) Further purification is effected by a chemical transport reaction developed by van Arkel and de Boer: In a closed vessel, hafnium reacts with iodine at temperatures of about 500 °C, forming hafnium(IV) iodide; at a tungsten filament at about 1700 °C the reverse reaction happens preferentially, and the chemically bound iodine and hafnium dissociate into the native elements. The hafnium forms a solid coating on the tungsten filament, and the iodine can react with additional hafnium, resulting in a steady iodine turnover and ensuring the chemical equilibrium remains in favor of hafnium production. Hf + 2 I2 → HfI4 (at about 500 °C) HfI4 → Hf + 2 I2 (at about 1700 °C) Chemical compounds Due to the lanthanide contraction, the ionic radius of hafnium(IV) (0.78 ångström) is almost the same as that of zirconium(IV) (0.79 ångström). Consequently, compounds of hafnium(IV) and zirconium(IV) have very similar chemical and physical properties. 
Hafnium and zirconium tend to occur together in nature and the similarity of their ionic radii makes their chemical separation rather difficult. Hafnium tends to form inorganic compounds in the oxidation state of +4. Halogens react with it to form hafnium tetrahalides. At higher temperatures, hafnium reacts with oxygen, nitrogen, carbon, boron, sulfur, and silicon. Some hafnium compounds in lower oxidation states are known. Hafnium(IV) chloride and hafnium(IV) iodide have some applications in the production and purification of hafnium metal. They are volatile solids with polymeric structures. These tetrachlorides are precursors to various organohafnium compounds such as hafnocene dichloride and tetrabenzylhafnium. The white hafnium oxide (HfO2), with a melting point of and a boiling point of roughly , is very similar to zirconia, but slightly more basic. Hafnium carbide is the most refractory binary compound known, with a melting point over , and hafnium nitride is the most refractory of all known metal nitrides, with a melting point of . This has led to proposals that hafnium or its carbides might be useful as construction materials that are subjected to very high temperatures. The mixed carbide tantalum hafnium carbide () possesses the highest melting point of any currently known compound, . Recent supercomputer simulations suggest a hafnium alloy with a melting point of . History Hafnium's existence was predicted by Dmitri Mendeleev in 1869. In his report on The Periodic Law of the Chemical Elements, in 1869, Dmitri Mendeleev had implicitly predicted the existence of a heavier analog of titanium and zirconium. At the time of his formulation in 1871, Mendeleev believed that the elements were ordered by their atomic masses and placed lanthanum (element 57) in the spot below zirconium. The exact placement of the elements and the location of missing elements was done by determining the specific weight of the elements and comparing the chemical and physical properties. The X-ray spectroscopy done by Henry Moseley in 1914 showed a direct dependency between spectral line and effective nuclear charge. This led to the nuclear charge, or atomic number of an element, being used to ascertain its place within the periodic table. With this method, Moseley determined the number of lanthanides and showed the gaps in the atomic number sequence at numbers 43, 61, 72, and 75. The discovery of the gaps led to an extensive search for the missing elements. In 1914, several people claimed the discovery after Henry Moseley predicted the gap in the periodic table for the then-undiscovered element 72. Georges Urbain asserted that he found element 72 in the rare earth elements in 1907 and published his results on celtium in 1911. Neither the spectra nor the chemical behavior he claimed matched with the element found later, and therefore his claim was turned down after a long-standing controversy. The controversy was partly because the chemists favored the chemical techniques which led to the discovery of celtium, while the physicists relied on the use of the new X-ray spectroscopy method that proved that the substances discovered by Urbain did not contain element 72. In 1921, Charles R. Bury suggested that element 72 should resemble zirconium and therefore was not part of the rare earth elements group. By early 1923, Niels Bohr and others agreed with Bury. 
These suggestions were based on Bohr's theories of the atom, which were essentially identical to those of the chemist Charles Bury, the X-ray spectroscopy of Moseley, and the chemical arguments of Friedrich Paneth. Encouraged by these suggestions and by the reappearance in 1922 of Urbain's claims that element 72 was a rare earth element discovered in 1911, Dirk Coster and Georg von Hevesy were motivated to search for the new element in zirconium ores. Hafnium was discovered by the two in 1923 in Copenhagen, Denmark, validating the original 1869 prediction of Mendeleev. It was ultimately found in zircon in Norway through X-ray spectroscopy analysis. The location of the discovery led to the element being named for Hafnia, the Latin name for Copenhagen, the home town of Niels Bohr. Today, the Faculty of Science of the University of Copenhagen uses in its seal a stylized image of the hafnium atom. Hafnium was separated from zirconium through repeated recrystallization of the double ammonium or potassium fluorides by Valdemar Thal Jantzen and von Hevesy. Anton Eduard van Arkel and Jan Hendrik de Boer were the first to prepare metallic hafnium by passing hafnium tetraiodide vapor over a heated tungsten filament in 1924. This process for differential purification of zirconium and hafnium is still in use today. Hafnium was one of the last two stable elements to be discovered. The element rhenium was found in 1908 by Masataka Ogawa, though its atomic number was misidentified at the time, and it was not generally recognised by the scientific community until its rediscovery by Walter Noddack, Ida Noddack, and Otto Berg in 1925. This makes it somewhat difficult to say whether hafnium or rhenium was discovered last. In 1923, six predicted elements were still missing from the periodic table: 43 (technetium), 61 (promethium), 85 (astatine), and 87 (francium) are radioactive elements and are only present in trace amounts in the environment, thus making elements 75 (rhenium) and 72 (hafnium) the last two unknown non-radioactive elements. Applications Most of the hafnium produced is used in the manufacture of control rods for nuclear reactors. Hafnium has limited technical applications due to a few factors. First, it is very similar to zirconium, a more abundant element that can be used in most cases. Second, pure hafnium was not widely available until the late 1950s, when it became a byproduct of the nuclear industry's need for hafnium-free zirconium. Additionally, hafnium is rare and difficult to separate from other elements, making it expensive. After the Fukushima disaster reduced the demand for hafnium-free zirconium, the price of hafnium increased significantly from around $500–$600/kg ($227–$272/lb) in 2014 to around $1000/kg ($454/lb) in 2015. Nuclear reactors The nuclei of several hafnium isotopes can each absorb multiple neutrons. This makes hafnium a good material for nuclear reactors' control rods. Its neutron capture cross section (Capture Resonance Integral Io ≈ 2000 barns) is about 600 times that of zirconium (other elements that are good neutron absorbers for control rods are cadmium and boron). Excellent mechanical properties and exceptional corrosion resistance allow its use in the harsh environment of pressurized water reactors. The German research reactor FRM II uses hafnium as a neutron absorber. It is also common in military reactors, particularly in US naval submarine reactors, to slow reaction rates that are too high. 
It is seldom found in civilian reactors, the first core of the Shippingport Atomic Power Station (a conversion of a naval reactor) being a notable exception. Alloys Hafnium is used in alloys with iron, titanium, niobium, tantalum, and other metals. An alloy used for liquid-rocket thruster nozzles, for example the main engine of the Apollo Lunar Modules, is C103 which consists of 89% niobium, 10% hafnium and 1% titanium. Small additions of hafnium increase the adherence of protective oxide scales on nickel-based alloys. It thereby improves the corrosion resistance, especially under cyclic temperature conditions that tend to break oxide scales, by inducing thermal stresses between the bulk material and the oxide layer. Microprocessors Hafnium-based compounds are employed in gates of transistors as insulators in the 45 nm (and below) generation of integrated circuits from Intel, IBM and others. Hafnium oxide-based compounds are practical high-k dielectrics, allowing reduction of the gate leakage current which improves performance at such scales. Isotope geochemistry Isotopes of hafnium and lutetium (along with ytterbium) are also used in isotope geochemistry and geochronological applications, in lutetium-hafnium dating. It is often used as a tracer of isotopic evolution of Earth's mantle through time. This is because 176Lu decays to 176Hf with a half-life of approximately 37 billion years. In most geologic materials, zircon is the dominant host of hafnium (>10,000 ppm) and is often the focus of hafnium studies in geology. Hafnium is readily substituted into the zircon crystal lattice, and is therefore very resistant to hafnium mobility and contamination. Zircon also has an extremely low Lu/Hf ratio, making any correction for initial lutetium minimal. Although the Lu/Hf system can be used to calculate a "model age", i.e. the time at which it was derived from a given isotopic reservoir such as the depleted mantle, these "ages" do not carry the same geologic significance as do other geochronological techniques as the results often yield isotopic mixtures and thus provide an average age of the material from which it was derived. Garnet is another mineral that contains appreciable amounts of hafnium to act as a geochronometer. The high and variable Lu/Hf ratios found in garnet make it useful for dating metamorphic events. Other uses Due to its heat resistance and its affinity to oxygen and nitrogen, hafnium is a good scavenger for oxygen and nitrogen in gas-filled and incandescent lamps. Hafnium is also used as the electrode in plasma cutting because of its ability to shed electrons into the air. The high energy content of 178m2Hf was the concern of a DARPA-funded program in the US. This program eventually concluded that using the above-mentioned 178m2Hf nuclear isomer of hafnium to construct high-yield weapons with X-ray triggering mechanisms—an application of induced gamma emission—was infeasible because of its expense. See hafnium controversy. Hafnium metallocene compounds can be prepared from hafnium tetrachloride and various cyclopentadiene-type ligand species. Perhaps the simplest hafnium metallocene is hafnocene dichloride. Hafnium metallocenes are part of a large collection of Group 4 transition metal metallocene catalysts that are used worldwide in the production of polyolefin resins like polyethylene and polypropylene. 
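The lutetium–hafnium dating discussed in the isotope-geochemistry paragraphs above rests on the standard radiogenic-ingrowth relation; as a sketch, the textbook isochron form (this is conventional notation, not an equation reproduced from the article, and λ denotes the 176Lu decay constant):

\left(\frac{^{176}\mathrm{Hf}}{^{177}\mathrm{Hf}}\right)_{\text{measured}} = \left(\frac{^{176}\mathrm{Hf}}{^{177}\mathrm{Hf}}\right)_{\text{initial}} + \frac{^{176}\mathrm{Lu}}{^{177}\mathrm{Hf}}\left(e^{\lambda t} - 1\right)

Measuring the present-day ratios in several cogenetic minerals defines a line whose slope, e^{λt} − 1, yields the age t; the decay constant corresponds to the roughly 37-billion-year half-life of 176Lu quoted above.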
A pyridyl-amidohafnium catalyst can be used for the controlled iso-selective polymerization of propylene which can then be combined with polyethylene to make a much tougher recycled plastic. Hafnium diselenide is studied in spintronics thanks to its charge density wave and superconductivity. Precautions Care needs to be taken when machining hafnium because it is pyrophoric—fine particles can spontaneously combust when exposed to air. Compounds that contain this metal are rarely encountered by most people. The pure metal is not considered toxic, but hafnium compounds should be handled as if they were toxic because the ionic forms of metals are normally at greatest risk for toxicity, and limited animal testing has been done for hafnium compounds. People can be exposed to hafnium in the workplace by breathing, swallowing, skin, and eye contact. The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for exposure to hafnium and hafnium compounds in the workplace as TWA 0.5 mg/m3 over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set the same recommended exposure limit (REL). At levels of 50 mg/m3, hafnium is immediately dangerous to life and health. References Further reading External links Hafnium at Los Alamos National Laboratory's periodic table of the elements Hafnium at The Periodic Table of Videos (University of Nottingham) Hafnium Technical & Safety Data NLM Hazardous Substances Databank – Hafnium, elemental Don Clark: Intel Shifts from Silicon to Lift Chip Performance – WSJ, 2007 Hafnium-based Intel 45nm Process Technology CDC – NIOSH Pocket Guide to Chemical Hazards Chemical elements Transition metals Neutron poisons 1923 in science Chemical elements with hexagonal close-packed structure
Hafnium
[ "Physics" ]
4,339
[ "Chemical elements", "Atoms", "Matter" ]
13,483
https://en.wikipedia.org/wiki/Hemoglobin
Hemoglobin (haemoglobin, Hb or Hgb) is a protein containing iron that facilitates the transportation of oxygen in red blood cells. Almost all vertebrates contain hemoglobin, with the sole exception of the fish family Channichthyidae. Hemoglobin in the blood carries oxygen from the respiratory organs (lungs or gills) to the other tissues of the body, where it releases the oxygen to enable aerobic respiration, which powers an animal's metabolism. A healthy human has 12 to 20 grams of hemoglobin in every 100 mL of blood. Hemoglobin is a metalloprotein, a chromoprotein, and a globulin. In mammals, hemoglobin makes up about 96% of a red blood cell's dry weight (excluding water), and around 35% of the total weight (including water). Hemoglobin has an oxygen-binding capacity of 1.34 mL of O2 per gram, which increases the total blood oxygen capacity seventy-fold compared to dissolved oxygen in blood plasma alone. The mammalian hemoglobin molecule can bind and transport up to four oxygen molecules. Hemoglobin also transports other gases. It carries off some of the body's respiratory carbon dioxide (about 20–25% of the total) as carbaminohemoglobin, in which CO2 binds to the heme protein. The molecule also carries the important regulatory molecule nitric oxide bound to a thiol group in the globin protein, releasing it at the same time as oxygen. Hemoglobin is also found in other cells, including in the A9 dopaminergic neurons of the substantia nigra, macrophages, alveolar cells, lungs, retinal pigment epithelium, hepatocytes, mesangial cells of the kidney, endometrial cells, cervical cells, and vaginal epithelial cells. In these tissues, hemoglobin absorbs unneeded oxygen as an antioxidant, and regulates iron metabolism. Excessive glucose in the blood can attach to hemoglobin and raise the level of hemoglobin A1c. Hemoglobin and hemoglobin-like molecules are also found in many invertebrates, fungi, and plants. In these organisms, hemoglobins may carry oxygen, or they may transport and regulate other small molecules and ions such as carbon dioxide, nitric oxide, hydrogen sulfide and sulfide. A variant called leghemoglobin serves to scavenge oxygen away from anaerobic systems such as the nitrogen-fixing nodules of leguminous plants, preventing oxygen poisoning. The medical condition hemoglobinemia, a form of anemia, is caused by intravascular hemolysis, in which hemoglobin leaks from red blood cells into the blood plasma. Research history In 1825, Johann Friedrich Engelhart discovered that the ratio of iron to protein is identical in the hemoglobins of several species. From the known atomic mass of iron, he calculated the molecular mass of hemoglobin as n × 16000 (n = number of iron atoms per hemoglobin molecule, now known to be 4), the first determination of a protein's molecular mass. This "hasty conclusion" drew ridicule from colleagues who could not believe that any molecule could be so large. However, Gilbert Smithson Adair confirmed Engelhart's results in 1925 by measuring the osmotic pressure of hemoglobin solutions. Although blood had been known to carry oxygen since at least 1794, the oxygen-carrying property of hemoglobin was described by Hünefeld in 1840. In 1851, German physiologist Otto Funke published a series of articles in which he described growing hemoglobin crystals by successively diluting red blood cells with a solvent such as pure water, alcohol or ether, followed by slow evaporation of the solvent from the resulting protein solution. 
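Engelhart's iron-based estimate above amounts to a simple mass-fraction argument: if each hemoglobin unit contains at least one iron atom, its molar mass must be at least the atomic mass of iron divided by iron's mass fraction in the protein. A minimal Python sketch of that arithmetic (the ~0.35% iron content used here is an assumed illustrative figure, not a value quoted in this article):

ATOMIC_MASS_FE = 55.845  # g/mol

def minimum_molar_mass(iron_mass_fraction):
    """Smallest molar mass consistent with one iron atom per molecule."""
    return ATOMIC_MASS_FE / iron_mass_fraction

per_iron = minimum_molar_mass(0.0035)  # ~0.35% iron by mass (illustrative assumption)
print(f"per iron atom: ~{per_iron:,.0f} g/mol")          # roughly 16,000
print(f"with 4 iron atoms: ~{4 * per_iron:,.0f} g/mol")  # roughly 64,000

The per-iron figure of roughly 16,000 g/mol is the "n × 16000" result described above; with four iron atoms per molecule it gives the familiar molecular mass of about 64,000 daltons.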
Hemoglobin's reversible oxygenation was described a few years later by Felix Hoppe-Seyler. With the development of X-ray crystallography, it became possible to sequence protein structures. In 1959, Max Perutz determined the molecular structure of hemoglobin. For this work he shared the 1962 Nobel Prize in Chemistry with John Kendrew, who sequenced the globular protein myoglobin. The role of hemoglobin in the blood was elucidated by French physiologist Claude Bernard. The name hemoglobin (or haemoglobin) is derived from the words heme (or haem) and globin, reflecting the fact that each subunit of hemoglobin is a globular protein with an embedded heme group. Each heme group contains one iron atom, that can bind one oxygen molecule through ion-induced dipole forces. The most common type of hemoglobin in mammals contains four such subunits. Genetics Hemoglobin consists of protein subunits (globin molecules), which are polypeptides, long folded chains of specific amino acids which determine the protein's chemical properties and function. The amino acid sequence of any polypeptide is translated from a segment of DNA, the corresponding gene. There is more than one hemoglobin gene. In humans, hemoglobin A (the main form of hemoglobin in adults) is coded by genes HBA1, HBA2, and HBB. Alpha 1 and alpha 2 subunits are respectively coded by genes HBA1 and HBA2 close together on chromosome 16, while the beta subunit is coded by gene HBB on chromosome 11. The amino acid sequences of the globin subunits usually differ between species, with the difference growing with evolutionary distance. For example, the most common hemoglobin sequences in humans, bonobos and chimpanzees are completely identical, with exactly the same alpha and beta globin protein chains. Human and gorilla hemoglobin differ in one amino acid in both alpha and beta chains, and these differences grow larger between less closely related species. Mutations in the genes for hemoglobin can result in variants of hemoglobin within a single species, although one sequence is usually "most common" in each species. Many of these mutations cause no disease, but some cause a group of hereditary diseases called hemoglobinopathies. The best known hemoglobinopathy is sickle-cell disease, which was the first human disease whose mechanism was understood at the molecular level. A mostly separate set of diseases called thalassemias involves underproduction of normal and sometimes abnormal hemoglobins, through problems and mutations in globin gene regulation. All these diseases produce anemia. Variations in hemoglobin sequences, as with other proteins, may be adaptive. For example, hemoglobin has been found to adapt in different ways to the thin air at high altitudes, where lower partial pressure of oxygen diminishes its binding to hemoglobin compared to the higher pressures at sea level. Recent studies of deer mice found mutations in four genes that can account for differences between high- and low-elevation populations. It was found that the genes of the two breeds are "virtually identical—except for those that govern the oxygen-carrying capacity of their hemoglobin. . . . The genetic difference enables highland mice to make more efficient use of their oxygen." Mammoth hemoglobin featured mutations that allowed for oxygen delivery at lower temperatures, thus enabling mammoths to migrate to higher latitudes during the Pleistocene. This was also found in hummingbirds that inhabit the Andes. 
Hummingbirds already expend a lot of energy and thus have high oxygen demands and yet Andean hummingbirds have been found to thrive in high altitudes. Non-synonymous mutations in the hemoglobin gene of multiple species living at high elevations (Oreotrochilus, A. castelnaudii, C. violifer, P. gigas, and A. viridicuada) have caused the protein to have less of an affinity for inositol hexaphosphate (IHP), a molecule found in birds that has a similar role as 2,3-BPG in humans; this results in the ability to bind oxygen in lower partial pressures. Birds' unique circulatory lungs also promote efficient use of oxygen at low partial pressures of O2. These two adaptations reinforce each other and account for birds' remarkable high-altitude performance. Hemoglobin adaptation extends to humans, as well. There is a higher offspring survival rate among Tibetan women with high oxygen saturation genotypes residing at 4,000 m. Natural selection seems to be the main force working on this gene because the mortality rate of offspring is significantly lower for women with higher hemoglobin-oxygen affinity when compared to the mortality rate of offspring from women with low hemoglobin-oxygen affinity. While the exact genotype and mechanism by which this occurs is not yet clear, selection is acting on these women's ability to bind oxygen in low partial pressures, which overall allows them to better sustain crucial metabolic processes. Synthesis Hemoglobin (Hb) is synthesized in a complex series of steps. The heme part is synthesized in a series of steps in the mitochondria and the cytosol of immature red blood cells, while the globin protein parts are synthesized by ribosomes in the cytosol. Production of Hb continues in the cell throughout its early development from the proerythroblast to the reticulocyte in the bone marrow. At this point, the nucleus is lost in mammalian red blood cells, but not in birds and many other species. Even after the loss of the nucleus in mammals, residual ribosomal RNA allows further synthesis of Hb until the reticulocyte loses its RNA soon after entering the vasculature (this hemoglobin-synthetic RNA in fact gives the reticulocyte its reticulated appearance and name). Structure of heme Hemoglobin has a quaternary structure characteristic of many multi-subunit globular proteins. Most of the amino acids in hemoglobin form alpha helices, and these helices are connected by short non-helical segments. Hydrogen bonds stabilize the helical sections inside this protein, causing attractions within the molecule, which then causes each polypeptide chain to fold into a specific shape. Hemoglobin's quaternary structure comes from its four subunits in roughly a tetrahedral arrangement. In most vertebrates, the hemoglobin molecule is an assembly of four globular protein subunits. Each subunit is composed of a protein chain tightly associated with a non-protein prosthetic heme group. Each protein chain arranges into a set of alpha-helix structural segments connected together in a globin fold arrangement. Such a name is given because this arrangement is the same folding motif used in other heme/globin proteins such as myoglobin. This folding pattern contains a pocket that strongly binds the heme group. A heme group consists of an iron (Fe) ion held in a heterocyclic ring, known as a porphyrin. This porphyrin ring consists of four pyrrole molecules cyclically linked together (by methine bridges) with the iron ion bound in the center. 
The iron ion, which is the site of oxygen binding, coordinates with the four nitrogen atoms in the center of the ring, which all lie in one plane. The heme is bound strongly (covalently) to the globular protein via the N atoms of the imidazole ring of F8 histidine residue (also known as the proximal histidine) below the porphyrin ring. A sixth position can reversibly bind oxygen by a coordinate covalent bond, completing the octahedral group of six ligands. This reversible bonding with oxygen is why hemoglobin is so useful for transporting oxygen around the body. Oxygen binds in an "end-on bent" geometry where one oxygen atom binds to Fe and the other protrudes at an angle. When oxygen is not bound, a very weakly bonded water molecule fills the site, forming a distorted octahedron. Even though carbon dioxide is carried by hemoglobin, it does not compete with oxygen for the iron-binding positions but is bound to the amine groups of the protein chains attached to the heme groups. The iron ion may be either in the ferrous Fe2+ or in the ferric Fe3+ state, but ferrihemoglobin (methemoglobin) (Fe3+) cannot bind oxygen. In binding, oxygen temporarily and reversibly oxidizes (Fe2+) to (Fe3+) while oxygen temporarily turns into the superoxide ion, thus iron must exist in the +2 oxidation state to bind oxygen. If superoxide ion associated to Fe3+ is protonated, the hemoglobin iron will remain oxidized and incapable of binding oxygen. In such cases, the enzyme methemoglobin reductase will be able to eventually reactivate methemoglobin by reducing the iron center. In adult humans, the most common hemoglobin type is a tetramer (which contains four subunit proteins) called hemoglobin A, consisting of two α and two β subunits non-covalently bound, each made of 141 and 146 amino acid residues, respectively. This is denoted as α2β2. The subunits are structurally similar and about the same size. Each subunit has a molecular weight of about 16,000 daltons, for a total molecular weight of the tetramer of about 64,000 daltons (64,458 g/mol). Thus, 1 g/dL=0.1551 mmol/L. Hemoglobin A is the most intensively studied of the hemoglobin molecules. In human infants, the fetal hemoglobin molecule is made up of 2 α chains and 2 γ chains. The γ chains are gradually replaced by β chains as the infant grows. The four polypeptide chains are bound to each other by salt bridges, hydrogen bonds, and the hydrophobic effect. Oxygen saturation In general, hemoglobin can be saturated with oxygen molecules (oxyhemoglobin), or desaturated with oxygen molecules (deoxyhemoglobin). Oxyhemoglobin Oxyhemoglobin is formed during physiological respiration when oxygen binds to the heme component of the protein hemoglobin in red blood cells. This process occurs in the pulmonary capillaries adjacent to the alveoli of the lungs. The oxygen then travels through the blood stream to be dropped off at cells where it is utilized as a terminal electron acceptor in the production of ATP by the process of oxidative phosphorylation. It does not, however, help to counteract a decrease in blood pH. Ventilation, or breathing, may reverse this condition by removal of carbon dioxide, thus causing a shift up in pH. Hemoglobin exists in two forms, a taut (tense) form (T) and a relaxed form (R). Various factors such as low pH, high CO2 and high 2,3 BPG at the level of the tissues favor the taut form, which has low oxygen affinity and releases oxygen in the tissues. 
Conversely, a high pH, low CO2, or low 2,3 BPG favors the relaxed form, which can better bind oxygen. The partial pressure of the system also affects O2 affinity where, at high partial pressures of oxygen (such as those present in the alveoli), the relaxed (high affinity, R) state is favoured. Inversely, at low partial pressures (such as those present in respiring tissues), the (low affinity, T) tense state is favoured. Additionally, the binding of oxygen to the iron(II) heme pulls the iron into the plane of the porphyrin ring, causing a slight conformational shift. The shift encourages oxygen to bind to the three remaining heme units within hemoglobin (thus, oxygen binding is cooperative). Classically, the iron in oxyhemoglobin is seen as existing in the iron(II) oxidation state. However, the complex of oxygen with heme iron is diamagnetic, whereas both oxygen and high-spin iron(II) are paramagnetic. Experimental evidence strongly suggests heme iron is in the iron(III) oxidation state in oxyhemoglobin, with the oxygen existing as superoxide anion (O2•−) or in a covalent charge-transfer complex. Deoxygenated hemoglobin Deoxygenated hemoglobin (deoxyhemoglobin) is the form of hemoglobin without the bound oxygen. The absorption spectra of oxyhemoglobin and deoxyhemoglobin differ. The oxyhemoglobin has significantly lower absorption of the 660 nm wavelength than deoxyhemoglobin, while at 940 nm its absorption is slightly higher. This difference is used for the measurement of the amount of oxygen in a patient's blood by an instrument called a pulse oximeter. This difference also accounts for the presentation of cyanosis, the blue to purplish color that tissues develop during hypoxia. Deoxygenated hemoglobin is paramagnetic; it is weakly attracted to magnetic fields. In contrast, oxygenated hemoglobin exhibits diamagnetism, a weak repulsion from a magnetic field. Evolution of vertebrate hemoglobin Scientists agree that the event that separated myoglobin from hemoglobin occurred after lampreys diverged from jawed vertebrates. This separation of myoglobin and hemoglobin allowed for the different functions of the two molecules to arise and develop: myoglobin has more to do with oxygen storage while hemoglobin is tasked with oxygen transport. The α- and β-like globin genes encode the individual subunits of the protein. The predecessors of these genes arose through another duplication event also after the gnathosome common ancestor derived from jawless fish, approximately 450–500 million years ago. Ancestral reconstruction studies suggest that the preduplication ancestor of the α and β genes was a dimer made up of identical globin subunits, which then evolved to assemble into a tetrameric architecture after the duplication. The development of α and β genes created the potential for hemoglobin to be composed of multiple distinct subunits, a physical composition central to hemoglobin's ability to transport oxygen. Having multiple subunits contributes to hemoglobin's ability to bind oxygen cooperatively as well as be regulated allosterically. Subsequently, the α gene also underwent a duplication event to form the HBA1 and HBA2 genes. These further duplications and divergences have created a diverse range of α- and β-like globin genes that are regulated so that certain forms occur at different stages of development. Most ice fish of the family Channichthyidae have lost their hemoglobin genes as an adaptation to cold water. 
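The difference in red (660 nm) and infrared (940 nm) absorption noted earlier is what a pulse oximeter exploits: it compares the pulsatile ("AC") and steady ("DC") signal at each wavelength and maps the resulting ratio to an oxygen saturation through an empirical calibration. A minimal Python sketch of that ratio-of-ratios idea (the linear calibration constants and signal values are illustrative placeholders, not a clinical calibration):

def ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir):
    """Compare the pulsatile (AC) to steady (DC) signal at 660 nm and 940 nm."""
    return (ac_red / dc_red) / (ac_ir / dc_ir)

def estimated_spo2(r):
    """Map the ratio to a saturation with a toy linear calibration (placeholder constants)."""
    return max(0.0, min(100.0, 110.0 - 25.0 * r))

r = ratio_of_ratios(ac_red=0.02, dc_red=1.0, ac_ir=0.04, dc_ir=1.0)
print(f"R = {r:.2f}, estimated SpO2 ~ {estimated_spo2(r):.0f}%")

Real devices replace the toy linear mapping with a calibration curve determined experimentally, but the principle is the same: highly oxygenated blood absorbs relatively little red light compared with infrared, so a small ratio corresponds to a high saturation.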
Cooperativity When oxygen binds to the iron complex, it causes the iron atom to move back toward the center of the plane of the porphyrin ring (see moving diagram). At the same time, the imidazole side-chain of the histidine residue interacting at the other pole of the iron is pulled toward the porphyrin ring. This interaction forces the plane of the ring sideways toward the outside of the tetramer, and also induces a strain in the protein helix containing the histidine as it moves nearer to the iron atom. This strain is transmitted to the remaining three monomers in the tetramer, where it induces a similar conformational change in the other heme sites such that binding of oxygen to these sites becomes easier. As oxygen binds to one monomer of hemoglobin, the tetramer's conformation shifts from the T (tense) state to the R (relaxed) state. This shift promotes the binding of oxygen to the remaining three monomers' heme groups, thus saturating the hemoglobin molecule with oxygen. In the tetrameric form of normal adult hemoglobin, the binding of oxygen is, thus, a cooperative process. The binding affinity of hemoglobin for oxygen is increased by the oxygen saturation of the molecule, with the first molecules of oxygen bound influencing the shape of the binding sites for the next ones, in a way favorable for binding. This positive cooperative binding is achieved through steric conformational changes of the hemoglobin protein complex as discussed above; i.e., when one subunit protein in hemoglobin becomes oxygenated, a conformational or structural change in the whole complex is initiated, causing the other subunits to gain an increased affinity for oxygen. As a consequence, the oxygen binding curve of hemoglobin is sigmoidal, or S-shaped, as opposed to the normal hyperbolic curve associated with noncooperative binding. The dynamic mechanism of the cooperativity in hemoglobin and its relation with low-frequency resonance has been discussed. Binding of ligands other than oxygen Besides the oxygen ligand, which binds to hemoglobin in a cooperative manner, hemoglobin ligands also include competitive inhibitors such as carbon monoxide (CO) and allosteric ligands such as carbon dioxide (CO2) and nitric oxide (NO). The carbon dioxide is bound to amino groups of the globin proteins to form carbaminohemoglobin; this mechanism is thought to account for about 10% of carbon dioxide transport in mammals. Nitric oxide can also be transported by hemoglobin; it is bound to specific thiol groups in the globin protein to form an S-nitrosothiol, which dissociates into free nitric oxide and thiol again, as the hemoglobin releases oxygen from its heme site. This nitric oxide transport to peripheral tissues is hypothesized to assist oxygen transport in tissues, by releasing vasodilatory nitric oxide to tissues in which oxygen levels are low. Competitive The binding of oxygen is affected by molecules such as carbon monoxide (for example, from tobacco smoking, exhaust gas, and incomplete combustion in furnaces). CO competes with oxygen at the heme binding site. Hemoglobin's binding affinity for CO is 250 times greater than its affinity for oxygen, Since carbon monoxide is a colorless, odorless and tasteless gas, and poses a potentially fatal threat, carbon monoxide detectors have become commercially available to warn of dangerous levels in residences. 
When hemoglobin combines with CO, it forms a very bright red compound called carboxyhemoglobin, which may cause the skin of CO poisoning victims to appear pink in death, instead of white or blue. When inspired air contains CO levels as low as 0.02%, headache and nausea occur; if the CO concentration is increased to 0.1%, unconsciousness will follow. In heavy smokers, up to 20% of the oxygen-active sites can be blocked by CO. In similar fashion, hemoglobin also has competitive binding affinity for cyanide (CN−), sulfur monoxide (SO), and sulfide (S2−), including hydrogen sulfide (H2S). All of these bind to iron in heme without changing its oxidation state, but they nevertheless inhibit oxygen-binding, causing grave toxicity. The iron atom in the heme group must initially be in the ferrous (Fe2+) oxidation state to support oxygen and other gases' binding and transport (it temporarily switches to ferric during the time oxygen is bound, as explained above). Initial oxidation to the ferric (Fe3+) state without oxygen converts hemoglobin into "hemiglobin" or methemoglobin, which cannot bind oxygen. Hemoglobin in normal red blood cells is protected by a reduction system to keep this from happening. Nitric oxide is capable of converting a small fraction of hemoglobin to methemoglobin in red blood cells. The latter reaction is a remnant activity of the more ancient nitric oxide dioxygenase function of globins. Allosteric Carbon dioxide occupies a different binding site on the hemoglobin. At tissues, where carbon dioxide concentration is higher, carbon dioxide binds to allosteric site of hemoglobin, facilitating unloading of oxygen from hemoglobin and ultimately its removal from the body after the oxygen has been released to tissues undergoing metabolism. This increased affinity for carbon dioxide by the venous blood is known as the Bohr effect. Through the enzyme carbonic anhydrase, carbon dioxide reacts with water to give carbonic acid, which decomposes into bicarbonate and protons: CO2 + H2O → H2CO3 → HCO3− + H+ Hence, blood with high carbon dioxide levels is also lower in pH (more acidic). Hemoglobin can bind protons and carbon dioxide, which causes a conformational change in the protein and facilitates the release of oxygen. Protons bind at various places on the protein, while carbon dioxide binds at the α-amino group. Carbon dioxide binds to hemoglobin and forms carbaminohemoglobin. This decrease in hemoglobin's affinity for oxygen by the binding of carbon dioxide and acid is known as the Bohr effect. The Bohr effect favors the T state rather than the R state. (shifts the O2-saturation curve to the right). Conversely, when the carbon dioxide levels in the blood decrease (i.e., in the lung capillaries), carbon dioxide and protons are released from hemoglobin, increasing the oxygen affinity of the protein. A reduction in the total binding capacity of hemoglobin to oxygen (i.e. shifting the curve down, not just to the right) due to reduced pH is called the root effect. This is seen in bony fish. It is necessary for hemoglobin to release the oxygen that it binds; if not, there is no point in binding it. The sigmoidal curve of hemoglobin makes it efficient in binding (taking up O2 in lungs), and efficient in unloading (unloading O2 in tissues). In people acclimated to high altitudes, the concentration of 2,3-Bisphosphoglycerate (2,3-BPG) in the blood is increased, which allows these individuals to deliver a larger amount of oxygen to tissues under conditions of lower oxygen tension. 
This phenomenon, where molecule Y affects the binding of molecule X to a transport molecule Z, is called a heterotropic allosteric effect. Hemoglobin in organisms at high altitudes has also adapted such that it has less of an affinity for 2,3-BPG and so the protein will be shifted more towards its R state. In its R state, hemoglobin will bind oxygen more readily, thus allowing organisms to perform the necessary metabolic processes when oxygen is present at low partial pressures. Animals other than humans use different molecules to bind to hemoglobin and change its O2 affinity under unfavorable conditions. Fish use both ATP and GTP. These bind to a phosphate "pocket" on the fish hemoglobin molecule, which stabilizes the tense state and therefore decreases oxygen affinity. GTP reduces hemoglobin oxygen affinity much more than ATP, which is thought to be due to an extra hydrogen bond formed that further stabilizes the tense state. Under hypoxic conditions, the concentration of both ATP and GTP is reduced in fish red blood cells to increase oxygen affinity. A variant hemoglobin, called fetal hemoglobin (HbF, α2γ2), is found in the developing fetus, and binds oxygen with greater affinity than adult hemoglobin. This means that the oxygen binding curve for fetal hemoglobin is left-shifted (i.e., a higher percentage of hemoglobin has oxygen bound to it at lower oxygen tension), in comparison to that of adult hemoglobin. As a result, fetal blood in the placenta is able to take oxygen from maternal blood. Hemoglobin also carries nitric oxide (NO) in the globin part of the molecule. This improves oxygen delivery in the periphery and contributes to the control of respiration. NO binds reversibly to a specific cysteine residue in globin; the binding depends on the state (R or T) of the hemoglobin. The resulting S-nitrosylated hemoglobin influences various NO-related activities such as the control of vascular resistance, blood pressure and respiration. NO is not released in the cytoplasm of red blood cells but transported out of them by an anion exchanger called AE1. Types of hemoglobin in humans Hemoglobin variants are a part of the normal embryonic and fetal development. They may also be pathologic mutant forms of hemoglobin in a population, caused by variations in genetics. Some well-known hemoglobin variants, such as sickle-cell anemia, are responsible for diseases and are considered hemoglobinopathies. Other variants cause no detectable pathology, and are thus considered non-pathological variants. In embryos: Gower 1 (ζ2ε2). Gower 2 (α2ε2) (). Hemoglobin Portland I (ζ2γ2). Hemoglobin Portland II (ζ2β2). In fetuses: Hemoglobin F (α2γ2) (). In neonates (newborns inmmediately after birth): Hemoglobin A (adult hemoglobin) (α2β2) () – The most common with a normal amount over 95% Hemoglobin A2 (α2δ2) – δ chain synthesis begins late in the third trimester and, in adults, it has a normal range of 1.5–3.5% Hemoglobin F (fetal hemoglobin) (α2γ2) – In adults Hemoglobin F is restricted to a limited population of red cells called F-cells. However, the level of Hb F can be elevated in persons with sickle-cell disease and beta-thalassemia. Abnormal forms that occur in diseases: Hemoglobin D – (α2βD2) – A variant form of hemoglobin. Hemoglobin H (β4) – A variant form of hemoglobin, formed by a tetramer of β chains, which may be present in variants of α thalassemia. Hemoglobin Barts (γ4) – A variant form of hemoglobin, formed by a tetramer of γ chains, which may be present in variants of α thalassemia. 
Hemoglobin S (α2βS2) – A variant form of hemoglobin found in people with sickle cell disease. There is a variation in the β-chain gene, causing a change in the properties of hemoglobin, which results in sickling of red blood cells. Hemoglobin C (α2βC2) – Another variant due to a variation in the β-chain gene. This variant causes a mild chronic hemolytic anemia. Hemoglobin E (α2βE2) – Another variant due to a variation in the β-chain gene. This variant causes a mild chronic hemolytic anemia. Hemoglobin AS – A heterozygous form causing sickle cell trait with one adult gene and one sickle cell disease gene Hemoglobin SC disease – A compound heterozygous form with one sickle gene and another encoding hemoglobin C. Hemoglobin Hopkins-2 – A variant form of hemoglobin that is sometimes viewed in combination with hemoglobin S to produce sickle cell disease. Degradation in vertebrate animals When red blood cells reach the end of their life due to aging or defects, they are removed from the circulation by the phagocytic activity of macrophages in the spleen or the liver or hemolyze within the circulation. Free hemoglobin is then cleared from the circulation via the hemoglobin transporter CD163, which is exclusively expressed on monocytes or macrophages. Within these cells the hemoglobin molecule is broken up, and the iron gets recycled. This process also produces one molecule of carbon monoxide for every molecule of heme degraded. Heme degradation is the only natural source of carbon monoxide in the human body, and is responsible for the normal blood levels of carbon monoxide in people breathing normal air. The other major final product of heme degradation is bilirubin. Increased levels of this chemical are detected in the blood if red blood cells are being destroyed more rapidly than usual. Improperly degraded hemoglobin protein or hemoglobin that has been released from the blood cells too rapidly can clog small blood vessels, especially the delicate blood filtering vessels of the kidneys, causing kidney damage. Iron is removed from heme and salvaged for later use, it is stored as hemosiderin or ferritin in tissues and transported in plasma by beta globulins as transferrins. When the porphyrin ring is broken up, the fragments are normally secreted as a yellow pigment called bilirubin, which is secreted into the intestines as bile. Intestines metabolize bilirubin into urobilinogen. Urobilinogen leaves the body in faeces, in a pigment called stercobilin. Globulin is metabolized into amino acids that are then released into circulation. Diseases related to hemoglobin Hemoglobin deficiency can be caused either by a decreased amount of hemoglobin molecules, as in anemia, or by decreased ability of each molecule to bind oxygen at the same partial pressure of oxygen. Hemoglobinopathies (genetic defects resulting in abnormal structure of the hemoglobin molecule) may cause both. In any case, hemoglobin deficiency decreases blood oxygen-carrying capacity. Hemoglobin deficiency is, in general, strictly distinguished from hypoxemia, defined as decreased partial pressure of oxygen in blood, although both are causes of hypoxia (insufficient oxygen supply to tissues). Other common causes of low hemoglobin include loss of blood, nutritional deficiency, bone marrow problems, chemotherapy, kidney failure, or abnormal hemoglobin (such as that of sickle-cell disease). 
The ability of each hemoglobin molecule to carry oxygen is normally modified by altered blood pH or CO2, causing an altered oxygen–hemoglobin dissociation curve. However, it can also be pathologically altered in, e.g., carbon monoxide poisoning. Decrease of hemoglobin, with or without an absolute decrease of red blood cells, leads to symptoms of anemia. Anemia has many different causes, although iron deficiency and its resultant iron deficiency anemia are the most common causes in the Western world. As absence of iron decreases heme synthesis, red blood cells in iron deficiency anemia are hypochromic (lacking the red hemoglobin pigment) and microcytic (smaller than normal). Other anemias are rarer. In hemolysis (accelerated breakdown of red blood cells), associated jaundice is caused by the hemoglobin metabolite bilirubin, and the circulating hemoglobin can cause kidney failure. Some mutations in the globin chain are associated with the hemoglobinopathies, such as sickle-cell disease and thalassemia. Other mutations, as discussed at the beginning of the article, are benign and are referred to merely as hemoglobin variants. There is a group of genetic disorders, known as the porphyrias that are characterized by errors in metabolic pathways of heme synthesis. King George III of the United Kingdom was probably the most famous porphyria sufferer. To a small extent, hemoglobin A slowly combines with glucose at the terminal valine (an alpha aminoacid) of each β chain. The resulting molecule is often referred to as Hb A1c, a glycated hemoglobin. The binding of glucose to amino acids in the hemoglobin takes place spontaneously (without the help of an enzyme) in many proteins, and is not known to serve a useful purpose. However, as the concentration of glucose in the blood increases, the percentage of Hb A that turns into Hb A1c increases. In diabetics whose glucose usually runs high, the percent Hb A1c also runs high. Because of the slow rate of Hb A combination with glucose, the Hb A1c percentage reflects a weighted average of blood glucose levels over the lifetime of red cells, which is approximately 120 days. The levels of glycated hemoglobin are therefore measured in order to monitor the long-term control of the chronic disease of type 2 diabetes mellitus (T2DM). Poor control of T2DM results in high levels of glycated hemoglobin in the red blood cells. The normal reference range is approximately 4.0–5.9%. Though difficult to obtain, values less than 7% are recommended for people with T2DM. Levels greater than 9% are associated with poor control of the glycated hemoglobin, and levels greater than 12% are associated with very poor control. Diabetics who keep their glycated hemoglobin levels close to 7% have a much better chance of avoiding the complications that may accompany diabetes (than those whose levels are 8% or higher). In addition, increased glycated of hemoglobin increases its affinity for oxygen, therefore preventing its release at the tissue and inducing a level of hypoxia in extreme cases. Elevated levels of hemoglobin are associated with increased numbers or sizes of red blood cells, called polycythemia. This elevation may be caused by congenital heart disease, cor pulmonale, pulmonary fibrosis, too much erythropoietin, or polycythemia vera. High hemoglobin levels may also be caused by exposure to high altitudes, smoking, dehydration (artificially by concentrating Hb), advanced lung disease and certain tumors. 
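The glycated-hemoglobin percentages discussed above are often translated into an estimated average blood glucose. The linear relation used below (eAG in mg/dL ≈ 28.7 × HbA1c − 46.7) is the commonly cited ADAG regression rather than a formula taken from this article, so the outputs should be read as rough estimates.

```python
def estimated_average_glucose(hba1c_percent):
    """Estimated average glucose in mg/dL from HbA1c in percent.

    Uses the widely cited ADAG linear regression eAG = 28.7 * A1c - 46.7;
    an assumption for illustration, not a formula from this article.
    """
    return 28.7 * hba1c_percent - 46.7

# The sample values mirror the control levels quoted above (~7% good, >9% poor).
for a1c in (5.9, 7.0, 9.0, 12.0):
    print(f"HbA1c {a1c:4.1f}%  ->  ~{estimated_average_glucose(a1c):3.0f} mg/dL")
```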
Diagnostic uses Hemoglobin concentration measurement is among the most commonly performed blood tests, usually as part of a complete blood count. For example, it is typically tested before or after blood donation. Results are reported in g/L, g/dL or mol/L. 1 g/dL equals about 0.6206 mmol/L, although the latter units are not used as often due to uncertainty regarding the polymeric state of the molecule. This conversion factor, using the single globin unit molecular weight of 16,000 Da, is more common for hemoglobin concentration in blood. For MCHC (mean corpuscular hemoglobin concentration) the conversion factor 0.155, which uses the tetramer weight of 64,500 Da, is more common. Normal levels are: Men: 13.8 to 18.0 g/dL (138 to 180 g/L, or 8.56 to 11.17 mmol/L) Women: 12.1 to 15.1 g/dL (121 to 151 g/L, or 7.51 to 9.37 mmol/L) Children: 11 to 16 g/dL (110 to 160 g/L, or 6.83 to 9.93 mmol/L) Pregnant women: 11 to 14 g/dL (110 to 140 g/L, or 6.83 to 8.69 mmol/L) (9.5 to 15 usual value during pregnancy) Normal values of hemoglobin in the 1st and 3rd trimesters of pregnant women must be at least 11 g/dL and at least 10.5 g/dL during the 2nd trimester. Dehydration or hyperhydration can greatly influence measured hemoglobin levels. Albumin can indicate hydration status. If the concentration is below normal, this is called anemia. Anemias are classified by the size of red blood cells, the cells that contain hemoglobin in vertebrates. The anemia is called "microcytic" if red cells are small, "macrocytic" if they are large, and "normocytic" otherwise. Hematocrit, the proportion of blood volume occupied by red blood cells, is typically about three times the hemoglobin concentration measured in g/dL. For example, if the hemoglobin is measured at 17 g/dL, that compares with a hematocrit of 51%. Laboratory hemoglobin test methods require a blood sample (arterial, venous, or capillary) and analysis on hematology analyzer and CO-oximeter. Additionally, a new noninvasive hemoglobin (SpHb) test method called Pulse CO-Oximetry is also available with comparable accuracy to invasive methods. Concentrations of oxy- and deoxyhemoglobin can be measured continuously, regionally and noninvasively using NIRS. NIRS can be used both on the head and on muscles. This technique is often used for research in e.g. elite sports training, ergonomics, rehabilitation, patient monitoring, neonatal research, functional brain monitoring, brain–computer interface, urology (bladder contraction), neurology (Neurovascular coupling) and more. Hemoglobin mass can be measured in humans using the non-radioactive, carbon monoxide (CO) rebreathing technique that has been used for more than 100 years. With this technique, a small volume of pure CO gas is inhaled and rebreathed for a few minutes. During rebreathing, CO binds to hemoglobin present in red blood cells. Based on the increase in blood CO after the rebreathing period, the hemoglobin mass can be determined through the dilution principle. Long-term control of blood sugar concentration can be measured by the concentration of Hb A1c. Measuring it directly would require many samples because blood sugar levels vary widely through the day. Hb A1c is the product of the irreversible reaction of hemoglobin A with glucose. A higher glucose concentration results in more Hb A1c. Because the reaction is slow, the Hb A1c proportion represents glucose level in blood averaged over the half-life of red blood cells, is typically ~120 days. 
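The unit conventions just described translate into simple helpers. The sketch below uses the two conversion factors quoted above (0.6206 for the 16,000 Da single-globin convention and 0.155 for the 64,500 Da tetramer convention used for MCHC) together with the approximate hematocrit ≈ 3 × hemoglobin rule of thumb; it is a convenience illustration, not a clinical tool.

```python
def g_dl_to_mmol_l(hb_g_dl, per_tetramer=False):
    """Convert hemoglobin from g/dL to mmol/L.

    per_tetramer=False uses the single-globin (16,000 Da) convention, factor 0.6206;
    per_tetramer=True uses the tetramer (64,500 Da) convention, factor 0.155 (as for MCHC).
    """
    return hb_g_dl * (0.155 if per_tetramer else 0.6206)

def estimated_hematocrit_percent(hb_g_dl):
    """Rule of thumb quoted above: hematocrit is roughly three times Hb in g/dL."""
    return 3.0 * hb_g_dl

hb = 17.0
print(f"{hb} g/dL ~ {g_dl_to_mmol_l(hb):.2f} mmol/L (monomer basis)")
print(f"{hb} g/dL ~ {g_dl_to_mmol_l(hb, per_tetramer=True):.2f} mmol/L (tetramer basis)")
print(f"estimated hematocrit ~ {estimated_hematocrit_percent(hb):.0f}%")
```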
An Hb A1c proportion of 6.0% or less show good long-term glucose control, while values above 7.0% are elevated. This test is especially useful for diabetics. The functional magnetic resonance imaging (fMRI) machine uses the signal from deoxyhemoglobin, which is sensitive to magnetic fields since it is paramagnetic. Combined measurement with NIRS shows good correlation with both the oxy- and deoxyhemoglobin signal compared to the BOLD signal. Athletic tracking and self-tracking uses Hemoglobin can be tracked noninvasively, to build an individual data set tracking the hemoconcentration and hemodilution effects of daily activities for better understanding of sports performance and training. Athletes are often concerned about endurance and intensity of exercise. The sensor uses light-emitting diodes that emit red and infrared light through the tissue to a light detector, which then sends a signal to a processor to calculate the absorption of light by the hemoglobin protein. This sensor is similar to a pulse oximeter, which consists of a small sensing device that clips to the finger. Analogues in non-vertebrate organisms A variety of oxygen-transport and -binding proteins exist in organisms throughout the animal and plant kingdoms. Organisms including bacteria, protozoans, and fungi all have hemoglobin-like proteins whose known and predicted roles include the reversible binding of gaseous ligands. Since many of these proteins contain globins and the heme moiety (iron in a flat porphyrin support), they are often called hemoglobins, even if their overall tertiary structure is very different from that of vertebrate hemoglobin. In particular, the distinction of "myoglobin" and hemoglobin in lower animals is often impossible, because some of these organisms do not contain muscles. Or, they may have a recognizable separate circulatory system but not one that deals with oxygen transport (for example, many insects and other arthropods). In all these groups, heme/globin-containing molecules (even monomeric globin ones) that deal with gas-binding are referred to as oxyhemoglobins. In addition to dealing with transport and sensing of oxygen, they may also deal with NO, CO2, sulfide compounds, and even O2 scavenging in environments that must be anaerobic. They may even deal with detoxification of chlorinated materials in a way analogous to heme-containing P450 enzymes and peroxidases. The structure of hemoglobins varies across species. Hemoglobin occurs in all kingdoms of organisms, but not in all organisms. Primitive species such as bacteria, protozoa, algae, and plants often have single-globin hemoglobins. Many nematode worms, molluscs, and crustaceans contain very large multisubunit molecules, much larger than those in vertebrates. In particular, chimeric hemoglobins found in fungi and giant annelids may contain both globin and other types of proteins. One of the most striking occurrences and uses of hemoglobin in organisms is in the giant tube worm (Riftia pachyptila, also called Vestimentifera), which can reach 2.4 meters length and populates ocean volcanic vents. Instead of a digestive tract, these worms contain a population of bacteria constituting half the organism's weight. The bacteria oxidize H2S from the vent with O2 from the water to produce energy to make food from H2O and CO2. The worms' upper end is a deep-red fan-like structure ("plume"), which extends into the water and absorbs H2S and O2 for the bacteria, and CO2 for use as synthetic raw material similar to photosynthetic plants. 
The structures are bright red due to their content of several extraordinarily complex hemoglobins that have up to 144 globin chains, each including associated heme structures. These hemoglobins are remarkable for being able to carry oxygen in the presence of sulfide, and even to carry sulfide, without being completely "poisoned" or inhibited by it as hemoglobins in most other species are. Other oxygen-binding proteins Myoglobin Found in the muscle tissue of many vertebrates, including humans, it gives muscle tissue a distinct red or dark gray color. It is very similar to hemoglobin in structure and sequence, but is not a tetramer; instead, it is a monomer that lacks cooperative binding. It is used to store oxygen rather than transport it. Hemocyanin The second most common oxygen-transporting protein found in nature, it is found in the blood of many arthropods and molluscs. Uses copper prosthetic groups instead of iron heme groups and is blue in color when oxygenated. Hemerythrin Some marine invertebrates and a few species of annelid use this iron-containing non-heme protein to carry oxygen in their blood. Appears pink/violet when oxygenated, clear when not. Chlorocruorin Found in many annelids, it is very similar to erythrocruorin, but the heme group is significantly different in structure. Appears green when deoxygenated and red when oxygenated. Vanabins Also known as vanadium chromagens, they are found in the blood of sea squirts. They were once hypothesized to use the metal vanadium as an oxygen binding prosthetic group. However, although they do contain vanadium by preference, they apparently bind little oxygen, and thus have some other function, which has not been elucidated (sea squirts also contain some hemoglobin). They may act as toxins. Erythrocruorin Found in many annelids, including earthworms, it is a giant free-floating blood protein containing many dozens—possibly hundreds—of iron- and heme-bearing protein subunits bound together into a single protein complex with a molecular mass greater than 3.5 million daltons. Leghemoglobin In leguminous plants, such as alfalfa or soybeans, the nitrogen fixing bacteria in the roots are protected from oxygen by this iron heme containing oxygen-binding protein. The specific enzyme protected is nitrogenase, which is unable to reduce nitrogen gas in the presence of free oxygen. Coboglobin A synthetic cobalt-based porphyrin. Coboprotein would appear colorless when oxygenated, but yellow when in veins. Presence in nonerythroid cells Some nonerythroid cells (i.e., cells other than the red blood cell line) contain hemoglobin. In the brain, these include the A9 dopaminergic neurons in the substantia nigra, astrocytes in the cerebral cortex and hippocampus, and in all mature oligodendrocytes. It has been suggested that brain hemoglobin in these cells may enable the "storage of oxygen to provide a homeostatic mechanism in anoxic conditions, which is especially important for A9 DA neurons that have an elevated metabolism with a high requirement for energy production". It has been noted further that "A9 dopaminergic neurons may be at particular risk of anoxic degeneration since in addition to their high mitochondrial activity they are under intense oxidative stress caused by the production of hydrogen peroxide via autoxidation and/or monoamine oxidase (MAO)-mediated deamination of dopamine and the subsequent reaction of accessible ferrous iron to generate highly toxic hydroxyl radicals". 
This may explain the risk of degeneration of these cells in Parkinson's disease. The hemoglobin-derived iron in these cells is not the cause of the post-mortem darkness of these cells (origin of the Latin name, substantia nigra), but rather is due to neuromelanin. Outside the brain, hemoglobin has non-oxygen-carrying functions as an antioxidant and a regulator of iron metabolism in macrophages, alveolar cells, and mesangial cells in the kidney. In history, art, and music Historically, an association between the color of blood and rust occurs in the association of the planet Mars, with the Roman god of war, since the planet is an orange-red, which reminded the ancients of blood. Although the color of the planet is due to iron compounds in combination with oxygen in the Martian soil, it is a common misconception that the iron in hemoglobin and its oxides gives blood its red color. The color is actually due to the porphyrin moiety of hemoglobin to which the iron is bound, not the iron itself, although the ligation and redox state of the iron can influence the pi to pi* or n to pi* electronic transitions of the porphyrin and hence its optical characteristics. Artist Julian Voss-Andreae created a sculpture called Heart of Steel (Hemoglobin) in 2005, based on the protein's backbone. The sculpture was made from glass and weathering steel. The intentional rusting of the initially shiny work of art mirrors hemoglobin's fundamental chemical reaction of oxygen binding to iron. Montreal artist Nicolas Baier created Lustre (Hémoglobine), a sculpture in stainless steel that shows the structure of the hemoglobin molecule. It is displayed in the atrium of McGill University Health Centre's research centre in Montreal. The sculpture measures about 10 metres × 10 metres × 10 metres. See also Carbaminohemoglobin (Hb associated with ) Carboxyhemoglobin (Hb associated with CO) Chlorophyll (Mg heme) Complete blood count Delta globin Hemoglobinometer Hemoprotein Methemoglobin (ferric Hb, or ferrihemoglobin) Oxyhemoglobin (with diatomic oxygen, colored blood-red) Tegillarca granosa - "blood clam" Vaska's complex – iridium organometallic complex notable for its ability to bind to O2 reversibly References Notes Sources Further reading Hazelwood, Loren (2001) Can't Live Without It: The story of hemoglobin in sickness and in health, Nova Science Publishers External links National Anemia Action Council at anemia.org New hemoglobin type causes mock diagnosis with pulse oxymeters at www.life-of-science.net Animation of hemoglobin: from deoxy to oxy form at vimeo.com Hemoglobins Equilibrium chemistry Respiratory physiology
Hemoglobin
[ "Chemistry" ]
11,313
[ "Equilibrium chemistry" ]
13,564
https://en.wikipedia.org/wiki/Homomorphism
In algebra, a homomorphism is a structure-preserving map between two algebraic structures of the same type (such as two groups, two rings, or two vector spaces). The word homomorphism comes from the Ancient Greek language: () meaning "same" and () meaning "form" or "shape". However, the word was apparently introduced to mathematics due to a (mis)translation of German meaning "similar" to meaning "same". The term "homomorphism" appeared as early as 1892, when it was attributed to the German mathematician Felix Klein (1849–1925). Homomorphisms of vector spaces are also called linear maps, and their study is the subject of linear algebra. The concept of homomorphism has been generalized, under the name of morphism, to many other structures that either do not have an underlying set, or are not algebraic. This generalization is the starting point of category theory. A homomorphism may also be an isomorphism, an endomorphism, an automorphism, etc. (see below). Each of those can be defined in a way that may be generalized to any class of morphisms. Definition A homomorphism is a map between two algebraic structures of the same type (e.g. two groups, two fields, two vector spaces), that preserves the operations of the structures. This means a map between two sets , equipped with the same structure such that, if is an operation of the structure (supposed here, for simplification, to be a binary operation), then for every pair , of elements of . One says often that preserves the operation or is compatible with the operation. Formally, a map preserves an operation of arity , defined on both and if for all elements in . The operations that must be preserved by a homomorphism include 0-ary operations, that is the constants. In particular, when an identity element is required by the type of structure, the identity element of the first structure must be mapped to the corresponding identity element of the second structure. For example: A semigroup homomorphism is a map between semigroups that preserves the semigroup operation. A monoid homomorphism is a map between monoids that preserves the monoid operation and maps the identity element of the first monoid to that of the second monoid (the identity element is a 0-ary operation). A group homomorphism is a map between groups that preserves the group operation. This implies that the group homomorphism maps the identity element of the first group to the identity element of the second group, and maps the inverse of an element of the first group to the inverse of the image of this element. Thus a semigroup homomorphism between groups is necessarily a group homomorphism. A ring homomorphism is a map between rings that preserves the ring addition, the ring multiplication, and the multiplicative identity. Whether the multiplicative identity is to be preserved depends upon the definition of ring in use. If the multiplicative identity is not preserved, one has a rng homomorphism. A linear map is a homomorphism of vector spaces; that is, a group homomorphism between vector spaces that preserves the abelian group structure and scalar multiplication. A module homomorphism, also called a linear map between modules, is defined similarly. An algebra homomorphism is a map that preserves the algebra operations. An algebraic structure may have more than one operation, and a homomorphism is required to preserve each operation. 
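A brute-force check of the defining property over a finite sample can make this concrete. The parity map used below, from the integers under addition onto the integers modulo 2, is a standard textbook example of a group homomorphism chosen here for brevity; it is not an example taken from this article.

```python
from itertools import product

def is_homomorphism(f, elements, op_source, op_target):
    """Check f(x op y) == f(x) op' f(y) over a finite sample of element pairs."""
    return all(
        f(op_source(x, y)) == op_target(f(x), f(y))
        for x, y in product(elements, repeat=2)
    )

# Parity map from (Z, +) onto (Z/2Z, +): it preserves the operation
# and sends the identity element 0 to the identity element 0.
parity = lambda n: n % 2
sample = range(-10, 11)

print(is_homomorphism(parity, sample,
                      lambda a, b: a + b,
                      lambda a, b: (a + b) % 2))   # True
```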
Thus a map that preserves only some of the operations is not a homomorphism of the structure, but only a homomorphism of the substructure obtained by considering only the preserved operations. For example, a map between monoids that preserves the monoid operation and not the identity element, is not a monoid homomorphism, but only a semigroup homomorphism. The notation for the operations does not need to be the same in the source and the target of a homomorphism. For example, the real numbers form a group for addition, and the positive real numbers form a group for multiplication. The exponential function satisfies and is thus a homomorphism between these two groups. It is even an isomorphism (see below), as its inverse function, the natural logarithm, satisfies and is also a group homomorphism. Examples The real numbers are a ring, having both addition and multiplication. The set of all 2×2 matrices is also a ring, under matrix addition and matrix multiplication. If we define a function between these rings as follows: where is a real number, then is a homomorphism of rings, since preserves both addition: and multiplication: For another example, the nonzero complex numbers form a group under the operation of multiplication, as do the nonzero real numbers. (Zero must be excluded from both groups since it does not have a multiplicative inverse, which is required for elements of a group.) Define a function from the nonzero complex numbers to the nonzero real numbers by That is, is the absolute value (or modulus) of the complex number . Then is a homomorphism of groups, since it preserves multiplication: Note that cannot be extended to a homomorphism of rings (from the complex numbers to the real numbers), since it does not preserve addition: As another example, the diagram shows a monoid homomorphism from the monoid to the monoid . Due to the different names of corresponding operations, the structure preservation properties satisfied by amount to and . A composition algebra over a field has a quadratic form, called a norm, , which is a group homomorphism from the multiplicative group of to the multiplicative group of . Special homomorphisms Several kinds of homomorphisms have a specific name, which is also defined for general morphisms. Isomorphism An isomorphism between algebraic structures of the same type is commonly defined as a bijective homomorphism. In the more general context of category theory, an isomorphism is defined as a morphism that has an inverse that is also a morphism. In the specific case of algebraic structures, the two definitions are equivalent, although they may differ for non-algebraic structures, which have an underlying set. More precisely, if is a (homo)morphism, it has an inverse if there exists a homomorphism such that If and have underlying sets, and has an inverse , then is bijective. In fact, is injective, as implies , and is surjective, as, for any in , one has , and is the image of an element of . Conversely, if is a bijective homomorphism between algebraic structures, let be the map such that is the unique element of such that . One has and it remains only to show that is a homomorphism. If is a binary operation of the structure, for every pair , of elements of , one has and is thus compatible with As the proof is similar for any arity, this shows that is a homomorphism. This proof does not work for non-algebraic structures. 
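The two examples just given can also be verified numerically: the diagonal embedding of a real number into 2×2 matrices preserves both ring operations, while the modulus of a nonzero complex number preserves multiplication but not addition. The helper functions below are merely an illustrative restatement of those maps.

```python
import random

def to_matrix(r):
    """f(r) = [[r, 0], [0, r]], the diagonal embedding described above."""
    return ((r, 0.0), (0.0, r))

def mat_add(a, b):
    return tuple(tuple(x + y for x, y in zip(ra, rb)) for ra, rb in zip(a, b))

def mat_mul(a, b):
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

for _ in range(100):
    r, s = random.uniform(-5, 5), random.uniform(-5, 5)
    assert to_matrix(r + s) == mat_add(to_matrix(r), to_matrix(s))   # preserves addition
    assert to_matrix(r * s) == mat_mul(to_matrix(r), to_matrix(s))   # preserves multiplication

# |z| is a group homomorphism for multiplication, but not a ring
# homomorphism: it fails to preserve addition.
z, w = complex(3, 4), complex(1, -2)
print(abs(z * w), abs(z) * abs(w))   # equal (up to rounding)
print(abs(z + w), abs(z) + abs(w))   # generally different
```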
For example, for topological spaces, a morphism is a continuous map, and the inverse of a bijective continuous map is not necessarily continuous. An isomorphism of topological spaces, called homeomorphism or bicontinuous map, is thus a bijective continuous map, whose inverse is also continuous. Endomorphism An endomorphism is a homomorphism whose domain equals the codomain, or, more generally, a morphism whose source is equal to its target. The endomorphisms of an algebraic structure, or of an object of a category, form a monoid under composition. The endomorphisms of a vector space or of a module form a ring. In the case of a vector space or a free module of finite dimension, the choice of a basis induces a ring isomorphism between the ring of endomorphisms and the ring of square matrices of the same dimension. Automorphism An automorphism is an endomorphism that is also an isomorphism. The automorphisms of an algebraic structure or of an object of a category form a group under composition, which is called the automorphism group of the structure. Many groups that have received a name are automorphism groups of some algebraic structure. For example, the general linear group is the automorphism group of a vector space of dimension over a field . The automorphism groups of fields were introduced by Évariste Galois for studying the roots of polynomials, and are the basis of Galois theory. Monomorphism For algebraic structures, monomorphisms are commonly defined as injective homomorphisms. In the more general context of category theory, a monomorphism is defined as a morphism that is left cancelable. This means that a (homo)morphism is a monomorphism if, for any pair , of morphisms from any other object to , then implies . These two definitions of monomorphism are equivalent for all common algebraic structures. More precisely, they are equivalent for fields, for which every homomorphism is a monomorphism, and for varieties of universal algebra, that is algebraic structures for which operations and axioms (identities) are defined without any restriction (the fields do not form a variety, as the multiplicative inverse is defined either as a unary operation or as a property of the multiplication, which are, in both cases, defined only for nonzero elements). In particular, the two definitions of a monomorphism are equivalent for sets, magmas, semigroups, monoids, groups, rings, fields, vector spaces and modules. A split monomorphism is a homomorphism that has a left inverse and thus it is itself a right inverse of that other homomorphism. That is, a homomorphism is a split monomorphism if there exists a homomorphism such that A split monomorphism is always a monomorphism, for both meanings of monomorphism. For sets and vector spaces, every monomorphism is a split monomorphism, but this property does not hold for most common algebraic structures. An injective homomorphism is left cancelable: If one has for every in , the common source of and . If is injective, then , and thus . This proof works not only for algebraic structures, but also for any category whose objects are sets and arrows are maps between these sets. For example, an injective continuous map is a monomorphism in the category of topological spaces. For proving that, conversely, a left cancelable homomorphism is injective, it is useful to consider a free object on . 
Given a variety of algebraic structures a free object on is a pair consisting of an algebraic structure of this variety and an element of satisfying the following universal property: for every structure of the variety, and every element of , there is a unique homomorphism such that . For example, for sets, the free object on is simply ; for semigroups, the free object on is which, as, a semigroup, is isomorphic to the additive semigroup of the positive integers; for monoids, the free object on is which, as, a monoid, is isomorphic to the additive monoid of the nonnegative integers; for groups, the free object on is the infinite cyclic group which, as, a group, is isomorphic to the additive group of the integers; for rings, the free object on is the polynomial ring for vector spaces or modules, the free object on is the vector space or free module that has as a basis. If a free object over exists, then every left cancelable homomorphism is injective: let be a left cancelable homomorphism, and and be two elements of such . By definition of the free object , there exist homomorphisms and from to such that and . As , one has by the uniqueness in the definition of a universal property. As is left cancelable, one has , and thus . Therefore, is injective. Existence of a free object on for a variety (see also ): For building a free object over , consider the set of the well-formed formulas built up from and the operations of the structure. Two such formulas are said equivalent if one may pass from one to the other by applying the axioms (identities of the structure). This defines an equivalence relation, if the identities are not subject to conditions, that is if one works with a variety. Then the operations of the variety are well defined on the set of equivalence classes of for this relation. It is straightforward to show that the resulting object is a free object on . Epimorphism In algebra, epimorphisms are often defined as surjective homomorphisms. On the other hand, in category theory, epimorphisms are defined as right cancelable morphisms. This means that a (homo)morphism is an epimorphism if, for any pair , of morphisms from to any other object , the equality implies . A surjective homomorphism is always right cancelable, but the converse is not always true for algebraic structures. However, the two definitions of epimorphism are equivalent for sets, vector spaces, abelian groups, modules (see below for a proof), and groups. The importance of these structures in all mathematics, especially in linear algebra and homological algebra, may explain the coexistence of two non-equivalent definitions. Algebraic structures for which there exist non-surjective epimorphisms include semigroups and rings. The most basic example is the inclusion of integers into rational numbers, which is a homomorphism of rings and of multiplicative semigroups. For both structures it is a monomorphism and a non-surjective epimorphism, but not an isomorphism. A wide generalization of this example is the localization of a ring by a multiplicative set. Every localization is a ring epimorphism, which is not, in general, surjective. As localizations are fundamental in commutative algebra and algebraic geometry, this may explain why in these areas, the definition of epimorphisms as right cancelable homomorphisms is generally preferred. A split epimorphism is a homomorphism that has a right inverse and thus it is itself a left inverse of that other homomorphism. 
That is, a homomorphism is a split epimorphism if there exists a homomorphism such that A split epimorphism is always an epimorphism, for both meanings of epimorphism. For sets and vector spaces, every epimorphism is a split epimorphism, but this property does not hold for most common algebraic structures. In summary, one has the last implication is an equivalence for sets, vector spaces, modules, abelian groups, and groups; the first implication is an equivalence for sets and vector spaces. Let be a homomorphism. We want to prove that if it is not surjective, it is not right cancelable. In the case of sets, let be an element of that not belongs to , and define such that is the identity function, and that for every except that is any other element of . Clearly is not right cancelable, as and In the case of vector spaces, abelian groups and modules, the proof relies on the existence of cokernels and on the fact that the zero maps are homomorphisms: let be the cokernel of , and be the canonical map, such that . Let be the zero map. If is not surjective, , and thus (one is a zero map, while the other is not). Thus is not cancelable, as (both are the zero map from to ). Kernel Any homomorphism defines an equivalence relation on by if and only if . The relation is called the kernel of . It is a congruence relation on . The quotient set can then be given a structure of the same type as , in a natural way, by defining the operations of the quotient set by , for each operation of . In that case the image of in under the homomorphism is necessarily isomorphic to ; this fact is one of the isomorphism theorems. When the algebraic structure is a group for some operation, the equivalence class of the identity element of this operation suffices to characterize the equivalence relation. In this case, the quotient by the equivalence relation is denoted by (usually read as " mod "). Also in this case, it is , rather than , that is called the kernel of . The kernels of homomorphisms of a given type of algebraic structure are naturally equipped with some structure. This structure type of the kernels is the same as the considered structure, in the case of abelian groups, vector spaces and modules, but is different and has received a specific name in other cases, such as normal subgroup for kernels of group homomorphisms and ideals for kernels of ring homomorphisms (in the case of non-commutative rings, the kernels are the two-sided ideals). Relational structures In model theory, the notion of an algebraic structure is generalized to structures involving both operations and relations. Let L be a signature consisting of function and relation symbols, and A, B be two L-structures. Then a homomorphism from A to B is a mapping h from the domain of A to the domain of B such that h(FA(a1,...,an)) = FB(h(a1),...,h(an)) for each n-ary function symbol F in L, RA(a1,...,an) implies RB(h(a1),...,h(an)) for each n-ary relation symbol R in L. In the special case with just one binary relation, we obtain the notion of a graph homomorphism. Formal language theory Homomorphisms are also used in the study of formal languages and are often briefly referred to as morphisms. Given alphabets and , a function such that for all is called a homomorphism on . If is a homomorphism on and denotes the empty string, then is called an -free homomorphism when for all in . A homomorphism on that satisfies for all is called a -uniform homomorphism. If for all (that is, is 1-uniform), then is also called a coding or a projection. 
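As a quick sketch of the formal-language definition, a homomorphism on Σ* is determined by the images of the individual letters and then automatically respects concatenation. The particular letter-to-word mapping below is invented for illustration; since no letter is sent to the empty string, the resulting homomorphism is ε-free.

```python
def string_homomorphism(letter_map):
    """Build h on Sigma* from the images of single letters: h(a1...an) = h(a1)...h(an)."""
    def h(word):
        return "".join(letter_map[c] for c in word)
    return h

# An invented mapping; h is epsilon-free because no letter maps to the empty string.
h = string_homomorphism({"a": "01", "b": "1", "c": "001"})

u, v = "abc", "cab"
assert h(u + v) == h(u) + h(v)   # concatenation is preserved
assert h("") == ""               # the empty word maps to the empty word
print(h(u), h(v), h(u + v))
```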
The set of words Σ* formed from the alphabet Σ may be thought of as the free monoid generated by Σ. Here the monoid operation is concatenation and the identity element is the empty word. From this perspective, a language homomorphism is precisely a monoid homomorphism. See also Diffeomorphism Homomorphic encryption Homomorphic secret sharing – a simplistic decentralized voting protocol Morphism Quasimorphism Notes Citations References Morphisms
Homomorphism
[ "Mathematics" ]
3,871
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Category theory", "Mathematical relations", "Morphisms" ]
13,606
https://en.wikipedia.org/wiki/Half-life
Half-life (symbol ) is the time required for a quantity (of substance) to reduce to half of its initial value. The term is commonly used in nuclear physics to describe how quickly unstable atoms undergo radioactive decay or how long stable atoms survive. The term is also used more generally to characterize any type of exponential (or, rarely, non-exponential) decay. For example, the medical sciences refer to the biological half-life of drugs and other chemicals in the human body. The converse of half-life (in exponential growth) is doubling time. The original term, half-life period, dating to Ernest Rutherford's discovery of the principle in 1907, was shortened to half-life in the early 1950s. Rutherford applied the principle of a radioactive element's half-life in studies of age determination of rocks by measuring the decay period of radium to lead-206. Half-life is constant over the lifetime of an exponentially decaying quantity, and it is a characteristic unit for the exponential decay equation. The accompanying table shows the reduction of a quantity as a function of the number of half-lives elapsed. Probabilistic nature A half-life often describes the decay of discrete entities, such as radioactive atoms. In that case, it does not work to use the definition that states "half-life is the time required for exactly half of the entities to decay". For example, if there is just one radioactive atom, and its half-life is one second, there will not be "half of an atom" left after one second. Instead, the half-life is defined in terms of probability: "Half-life is the time required for exactly half of the entities to decay on average". In other words, the probability of a radioactive atom decaying within its half-life is 50%. For example, the accompanying image is a simulation of many identical atoms undergoing radioactive decay. Note that after one half-life there are not exactly one-half of the atoms remaining, only approximately, because of the random variation in the process. Nevertheless, when there are many identical atoms decaying (right boxes), the law of large numbers suggests that it is a very good approximation to say that half of the atoms remain after one half-life. Various simple exercises can demonstrate probabilistic decay, for example involving flipping coins or running a statistical computer program. Formulas for half-life in exponential decay An exponential decay can be described by any of the following four equivalent formulas: where is the initial quantity of the substance that will decay (this quantity may be measured in grams, moles, number of atoms, etc.), is the quantity that still remains and has not yet decayed after a time , is the half-life of the decaying quantity, is a positive number called the mean lifetime of the decaying quantity, is a positive number called the decay constant of the decaying quantity. The three parameters , , and are directly related in the following way:where is the natural logarithm of 2 (approximately 0.693). Half-life and reaction orders In chemical kinetics, the value of the half-life depends on the reaction order: Zero order kinetics The rate of this kind of reaction does not depend on the substrate concentration, . Thus the concentration decreases linearly. 
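The equivalent forms of the exponential-decay law referred to above, together with the order-dependent half-life expressions derived in the subsections that follow, can be restated and checked numerically. The snippet below encodes the standard relations N(t) = N0(1/2)^(t/t½) = N0·2^(−t/t½) = N0·e^(−t/τ) = N0·e^(−λt), with t½ = ln 2/λ = τ·ln 2, plus the usual zero-, first- and second-order half-life formulas; all numeric inputs are arbitrary.

```python
import math

def remaining(n0, t, t_half):
    """N(t) = N0 * (1/2)**(t / t_half), the halving form of exponential decay."""
    return n0 * 0.5 ** (t / t_half)

t_half = 10.0                       # arbitrary half-life
lam = math.log(2) / t_half          # decay constant: lambda = ln 2 / t_half
tau = t_half / math.log(2)          # mean lifetime:  tau = t_half / ln 2

n0, t = 1000.0, 2 * t_half
equivalent_forms = (
    n0 * 0.5 ** (t / t_half),       # N0 * (1/2)^(t/t_half)
    n0 * 2.0 ** (-t / t_half),      # N0 * 2^(-t/t_half)
    n0 * math.exp(-t / tau),        # N0 * e^(-t/tau)
    n0 * math.exp(-lam * t),        # N0 * e^(-lambda*t)
)
print(equivalent_forms)             # all ~250: two half-lives leave one quarter

# Order-dependent half-lives (standard chemical-kinetics results):
half_life_zero_order = lambda a0, k: a0 / (2 * k)      # depends on [A]0
half_life_first_order = lambda k: math.log(2) / k      # independent of [A]0
half_life_second_order = lambda a0, k: 1 / (k * a0)    # inversely proportional to [A]0

# Two parallel decay processes combine harmonically: 1/T = 1/t1 + 1/t2.
combined = lambda t1, t2: 1 / (1 / t1 + 1 / t2)
print(half_life_first_order(0.1), combined(10.0, 30.0))   # ~6.93 and 7.5
```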
The integrated rate law of zero order kinetics is: [A] = [A]0 − kt. In order to find the half-life, we have to replace the concentration value for the initial concentration divided by 2: [A]0/2 = [A]0 − kt1/2, and isolate the time: t1/2 = [A]0/(2k). This formula indicates that the half-life for a zero order reaction depends on the initial concentration and the rate constant. First order kinetics In first order reactions, the rate of reaction will be proportional to the concentration of the reactant. Thus the concentration will decrease exponentially as time progresses until it reaches zero, and the half-life will be constant, independent of concentration. The time t1/2 for [A] to decrease from [A]0 to [A]0/2 in a first-order reaction is given by the following equation: kt1/2 = ln([A]0 / ([A]0/2)) = ln 2. It can be solved for t1/2 = ln 2/k. For a first-order reaction, the half-life of a reactant is independent of its initial concentration. Therefore, if the concentration of [A] at some arbitrary stage of the reaction is [A], then it will have fallen to [A]/2 after a further interval of ln 2/k. Hence, the half-life of a first order reaction is given as the following: t1/2 = ln 2/k. The half-life of a first order reaction is independent of its initial concentration and depends solely on the reaction rate constant, k. Second order kinetics In second order reactions, the rate of reaction is proportional to the square of the concentration. By integrating this rate, it can be shown that the concentration of the reactant decreases following this formula: 1/[A] = 1/[A]0 + kt. We replace [A]0/2 for [A] in order to calculate the half-life of the reactant and isolate the time of the half-life (t1/2): t1/2 = 1/(k[A]0). This shows that the half-life of second order reactions depends on the initial concentration and rate constant. Decay by two or more processes Some quantities decay by two exponential-decay processes simultaneously. In this case, the actual half-life T1/2 can be related to the half-lives t1 and t2 that the quantity would have if each of the decay processes acted in isolation: 1/T1/2 = 1/t1 + 1/t2. For three or more processes, the analogous formula is: 1/T1/2 = 1/t1 + 1/t2 + 1/t3 + ⋯. For a proof of these formulas, see Exponential decay § Decay by two or more processes. Examples There is a half-life describing any exponential-decay process. For example: As noted above, in radioactive decay the half-life is the length of time after which there is a 50% chance that an atom will have undergone nuclear decay. It varies depending on the atom type and isotope, and is usually determined experimentally. See List of nuclides. The current flowing through an RC circuit or RL circuit decays with a half-life of ln(2)RC or ln(2)L/R, respectively. For this example the term half time tends to be used rather than "half-life", but they mean the same thing. In a chemical reaction, the half-life of a species is the time it takes for the concentration of that substance to fall to half of its initial value. In a first-order reaction the half-life of the reactant is ln(2)/λ, where λ (also denoted as k) is the reaction rate constant. In non-exponential decay The term "half-life" is almost exclusively used for decay processes that are exponential (such as radioactive decay or the other examples above), or approximately exponential (such as biological half-life discussed below). In a decay process that is not even close to exponential, the half-life will change dramatically while the decay is happening.
In this situation it is generally uncommon to talk about half-life in the first place, but sometimes people will describe the decay in terms of its "first half-life", "second half-life", etc., where the first half-life is defined as the time required for decay from the initial value to 50%, the second half-life is from 50% to 25%, and so on. In biology and pharmacology A biological half-life or elimination half-life is the time it takes for a substance (drug, radioactive nuclide, or other) to lose one-half of its pharmacologic, physiologic, or radiological activity. In a medical context, the half-life may also describe the time that it takes for the concentration of a substance in blood plasma to reach one-half of its steady-state value (the "plasma half-life"). The relationship between the biological and plasma half-lives of a substance can be complex, due to factors including accumulation in tissues, active metabolites, and receptor interactions. While a radioactive isotope decays almost perfectly according to first order kinetics, where the rate constant is a fixed number, the elimination of a substance from a living organism usually follows more complex chemical kinetics. For example, the biological half-life of water in a human being is about 9 to 10 days, though this can be altered by behavior and other conditions. The biological half-life of caesium in human beings is between one and four months. The concept of a half-life has also been utilized for pesticides in plants, and certain authors maintain that pesticide risk and impact assessment models rely on and are sensitive to information describing dissipation from plants. In epidemiology, the concept of half-life can refer to the length of time for the number of incident cases in a disease outbreak to drop by half, particularly if the dynamics of the outbreak can be modeled exponentially. See also Half time (physics) List of radioactive nuclides by half-life Mean lifetime Median lethal dose References External links https://www.calculator.net/half-life-calculator.html Comprehensive half-life calculator wiki: Decay Engine, Nucleonica.net (archived 2016) System Dynamics – Time Constants, Bucknell.edu Researchers Nikhef and UvA measure slowest radioactive decay ever: Xe-124 with 18 billion trillion years https://academo.org/demos/radioactive-decay-simulator/ Interactive radioactive decay simulator demonstrating how half-life is related to the rate of decay Chemical kinetics Radioactivity Nuclear fission Temporal exponentials
Half-life
[ "Physics", "Chemistry" ]
1,897
[ "Nuclear fission", "Chemical reaction engineering", "Physical quantities", "Time", "Nuclear physics", "Temporal exponentials", "Spacetime", "Chemical kinetics", "Radioactivity" ]
13,609
https://en.wikipedia.org/wiki/Hydrogen%20bond
In chemistry, a hydrogen bond (or H-bond) is primarily an electrostatic force of attraction between a hydrogen (H) atom which is covalently bonded to a more electronegative "donor" atom or group (Dn), and another electronegative atom bearing a lone pair of electrons—the hydrogen bond acceptor (Ac). Such an interacting system is generally denoted , where the solid line denotes a polar covalent bond, and the dotted or dashed line indicates the hydrogen bond. The most frequent donor and acceptor atoms are the period 2 elements nitrogen (N), oxygen (O), and fluorine (F). Hydrogen bonds can be intermolecular (occurring between separate molecules) or intramolecular (occurring among parts of the same molecule). The energy of a hydrogen bond depends on the geometry, the environment, and the nature of the specific donor and acceptor atoms and can vary between 1 and 40 kcal/mol. This makes them somewhat stronger than a van der Waals interaction, and weaker than fully covalent or ionic bonds. This type of bond can occur in inorganic molecules such as water and in organic molecules like DNA and proteins. Hydrogen bonds are responsible for holding materials such as paper and felted wool together, and for causing separate sheets of paper to stick together after becoming wet and subsequently drying. The hydrogen bond is also responsible for many of the physical and chemical properties of compounds of N, O, and F that seem unusual compared with other similar structures. In particular, intermolecular hydrogen bonding is responsible for the high boiling point of water (100 °C) compared to the other group-16 hydrides that have much weaker hydrogen bonds. Intramolecular hydrogen bonding is partly responsible for the secondary and tertiary structures of proteins and nucleic acids. Bonding Definitions and general characteristics In a hydrogen bond, the electronegative atom not covalently attached to the hydrogen is named the proton acceptor, whereas the one covalently bound to the hydrogen is named the proton donor. This nomenclature is recommended by the IUPAC. The hydrogen of the donor is protic and therefore can act as a Lewis acid and the acceptor is the Lewis base. Hydrogen bonds are represented as system, where the dots represent the hydrogen bond. Liquids that display hydrogen bonding (such as water) are called associated liquids. Hydrogen bonds arise from a combination of electrostatics (multipole-multipole and multipole-induced multipole interactions), covalency (charge transfer by orbital overlap), and dispersion (London forces). In weaker hydrogen bonds, hydrogen atoms tend to bond to elements such as sulfur (S) or chlorine (Cl); even carbon (C) can serve as a donor, particularly when the carbon or one of its neighbors is electronegative (e.g., in chloroform, aldehydes and terminal acetylenes). Gradually, it was recognized that there are many examples of weaker hydrogen bonding involving donor other than N, O, or F and/or acceptor Ac with electronegativity approaching that of hydrogen (rather than being much more electronegative). Although weak (≈1 kcal/mol), "non-traditional" hydrogen bonding interactions are ubiquitous and influence structures of many kinds of materials. The definition of hydrogen bonding has gradually broadened over time to include these weaker attractive interactions. In 2011, an IUPAC Task Group recommended a modern evidence-based definition of hydrogen bonding, which was published in the IUPAC journal Pure and Applied Chemistry. 
This definition specifies: Bond strength Hydrogen bonds can vary in strength from weak (1–2 kJ/mol) to strong (161.5 kJ/mol in the bifluoride ion, ). Typical enthalpies in vapor include: (161.5 kJ/mol or 38.6 kcal/mol), illustrated uniquely by (29 kJ/mol or 6.9 kcal/mol), illustrated water-ammonia (21 kJ/mol or 5.0 kcal/mol), illustrated water-water, alcohol-alcohol (13 kJ/mol or 3.1 kcal/mol), illustrated by ammonia-ammonia (8 kJ/mol or 1.9 kcal/mol), illustrated water-amide (18 kJ/mol or 4.3 kcal/mol) The strength of intermolecular hydrogen bonds is most often evaluated by measurements of equilibria between molecules containing donor and/or acceptor units, most often in solution. The strength of intramolecular hydrogen bonds can be studied with equilibria between conformers with and without hydrogen bonds. The most important method for the identification of hydrogen bonds also in complicated molecules is crystallography, sometimes also NMR-spectroscopy. Structural details, in particular distances between donor and acceptor which are smaller than the sum of the van der Waals radii can be taken as indication of the hydrogen bond strength. One scheme gives the following somewhat arbitrary classification: those that are 15 to 40 kcal/mol, 5 to 15 kcal/mol, and >0 to 5 kcal/mol are considered strong, moderate, and weak, respectively. Hydrogen bonds involving C-H bonds are both very rare and weak. Resonance assisted hydrogen bond The resonance assisted hydrogen bond (commonly abbreviated as RAHB) is a strong type of hydrogen bond. It is characterized by the π-delocalization that involves the hydrogen and cannot be properly described by the electrostatic model alone. This description of the hydrogen bond has been proposed to describe unusually short distances generally observed between or . Structural details The distance is typically ≈110 pm, whereas the distance is ≈160 to 200 pm. The typical length of a hydrogen bond in water is 197 pm. The ideal bond angle depends on the nature of the hydrogen bond donor. The following hydrogen bond angles between a hydrofluoric acid donor and various acceptors have been determined experimentally: Spectroscopy Strong hydrogen bonds are revealed by downfield shifts in the 1H NMR spectrum. For example, the acidic proton in the enol tautomer of acetylacetone appears at  15.5, which is about 10 ppm downfield of a conventional alcohol. In the IR spectrum, hydrogen bonding shifts the stretching frequency to lower energy (i.e. the vibration frequency decreases). This shift reflects a weakening of the bond. Certain hydrogen bonds - improper hydrogen bonds - show a blue shift of the stretching frequency and a decrease in the bond length. H-bonds can also be measured by IR vibrational mode shifts of the acceptor. The amide I mode of backbone carbonyls in α-helices shifts to lower frequencies when they form H-bonds with side-chain hydroxyl groups. The dynamics of hydrogen bond structures in water can be probed by this OH stretching vibration. In the hydrogen bonding network in protic organic ionic plastic crystals (POIPCs), which are a type of phase change material exhibiting solid-solid phase transitions prior to melting, variable-temperature infrared spectroscopy can reveal the temperature dependence of hydrogen bonds and the dynamics of both the anions and the cations. The sudden weakening of hydrogen bonds during the solid-solid phase transition seems to be coupled with the onset of orientational or rotational disorder of the ions. 
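Since the enthalpies above are quoted in both kJ/mol and kcal/mol while the strong/moderate/weak classification is stated in kcal/mol, a small conversion helper is convenient. The sketch below assumes the standard thermochemical factor of 4.184 kJ per kcal and the somewhat arbitrary cutoffs quoted above.

```python
KJ_PER_KCAL = 4.184   # standard thermochemical calorie

def kj_to_kcal(kj_per_mol):
    return kj_per_mol / KJ_PER_KCAL

def classify(kcal_per_mol):
    """Scheme quoted above: >15 strong, 5-15 moderate, <=5 weak (kcal/mol)."""
    if kcal_per_mol > 15:
        return "strong"
    if kcal_per_mol > 5:
        return "moderate"
    return "weak"

# Reproduce the paired values quoted in the text (kJ/mol -> kcal/mol).
for kj in (161.5, 29, 21, 13, 8, 18):
    kcal = kj_to_kcal(kj)
    print(f"{kj:6.1f} kJ/mol = {kcal:4.1f} kcal/mol ({classify(kcal)})")
```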
Theoretical considerations Hydrogen bonding is of persistent theoretical interest. According to a modern description integrates both the intermolecular O:H lone pair ":" nonbond and the intramolecular polar-covalent bond associated with repulsive coupling. Quantum chemical calculations of the relevant interresidue potential constants (compliance constants) revealed large differences between individual H bonds of the same type. For example, the central interresidue hydrogen bond between guanine and cytosine is much stronger in comparison to the bond between the adenine-thymine pair. Theoretically, the bond strength of the hydrogen bonds can be assessed using NCI index, non-covalent interactions index, which allows a visualization of these non-covalent interactions, as its name indicates, using the electron density of the system. Interpretations of the anisotropies in the Compton profile of ordinary ice claim that the hydrogen bond is partly covalent. However, this interpretation was challenged and subsequently clarified. Most generally, the hydrogen bond can be viewed as a metric-dependent electrostatic scalar field between two or more intermolecular bonds. This is slightly different from the intramolecular bound states of, for example, covalent or ionic bonds. However, hydrogen bonding is generally still a bound state phenomenon, since the interaction energy has a net negative sum. The initial theory of hydrogen bonding proposed by Linus Pauling suggested that the hydrogen bonds had a partial covalent nature. This interpretation remained controversial until NMR techniques demonstrated information transfer between hydrogen-bonded nuclei, a feat that would only be possible if the hydrogen bond contained some covalent character. History The concept of hydrogen bonding once was challenging. Linus Pauling credits T. S. Moore and T. F. Winmill with the first mention of the hydrogen bond, in 1912. Moore and Winmill used the hydrogen bond to account for the fact that trimethylammonium hydroxide is a weaker base than tetramethylammonium hydroxide. The description of hydrogen bonding in its better-known setting, water, came some years later, in 1920, from Latimer and Rodebush. In that paper, Latimer and Rodebush cited the work of a fellow scientist at their laboratory, Maurice Loyal Huggins, saying, "Mr. Huggins of this laboratory in some work as yet unpublished, has used the idea of a hydrogen kernel held between two atoms as a theory in regard to certain organic compounds." Hydrogen bonds in small molecules Water An ubiquitous example of a hydrogen bond is found between water molecules. In a discrete water molecule, there are two hydrogen atoms and one oxygen atom. The simplest case is a pair of water molecules with one hydrogen bond between them, which is called the water dimer and is often used as a model system. When more molecules are present, as is the case with liquid water, more bonds are possible because the oxygen of one water molecule has two lone pairs of electrons, each of which can form a hydrogen bond with a hydrogen on another water molecule. This can repeat such that every water molecule is H-bonded with up to four other molecules, as shown in the figure (two through its two lone pairs, and two through its two hydrogen atoms). Hydrogen bonding strongly affects the crystal structure of ice, helping to create an open hexagonal lattice. 
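Whether two water molecules in such a network count as hydrogen-bonded is usually decided with a simple geometric test; the sketch below uses one common convention (an O···O distance cut-off near 3.5 angstrom together with a small H-O···O angle), with cut-off values that are typical of the simulation literature rather than taken from this article:

```python
def is_hydrogen_bonded(d_oo, angle_hoo_deg, d_max=3.5, angle_max=30.0):
    """One common geometric hydrogen-bond criterion for water pairs:
    donor-acceptor O...O distance below ~3.5 angstrom and
    H-O...O angle below ~30 degrees. Cut-offs are conventional, not unique."""
    return d_oo <= d_max and angle_hoo_deg <= angle_max

# Illustrative pair geometries (example numbers only)
print(is_hydrogen_bonded(2.8, 12.0))   # a typical bonded pair -> True
print(is_hydrogen_bonded(3.9, 25.0))   # too far apart -> False
```

Different cut-off choices change the counts obtained from simulations, which is one reason the averages quoted from different studies disagree.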
The density of ice is less than the density of water at the same temperature; thus, the solid phase of water floats on the liquid, unlike most other substances. Liquid water's high boiling point is due to the high number of hydrogen bonds each molecule can form, relative to its low molecular mass. Owing to the difficulty of breaking these bonds, water has a very high boiling point, melting point, and viscosity compared to otherwise similar liquids not conjoined by hydrogen bonds. Water is unique because its oxygen atom has two lone pairs and two hydrogen atoms, meaning that the total number of bonds of a water molecule is up to four. The number of hydrogen bonds formed by a molecule of liquid water fluctuates with time and temperature. From TIP4P liquid water simulations at 25 °C, it was estimated that each water molecule participates in an average of 3.59 hydrogen bonds. At 100 °C, this number decreases to 3.24 due to the increased molecular motion and decreased density, while at 0 °C, the average number of hydrogen bonds increases to 3.69. Another study found a much smaller number of hydrogen bonds: 2.357 at 25 °C. Defining and counting the hydrogen bonds is not straightforward however. Because water may form hydrogen bonds with solute proton donors and acceptors, it may competitively inhibit the formation of solute intermolecular or intramolecular hydrogen bonds. Consequently, hydrogen bonds between or within solute molecules dissolved in water are almost always unfavorable relative to hydrogen bonds between water and the donors and acceptors for hydrogen bonds on those solutes. Hydrogen bonds between water molecules have an average lifetime of 10−11 seconds, or 10 picoseconds. Bifurcated and over-coordinated hydrogen bonds in water A single hydrogen atom can participate in two hydrogen bonds. This type of bonding is called "bifurcated" (split in two or "two-forked"). It can exist, for instance, in complex organic molecules. It has been suggested that a bifurcated hydrogen atom is an essential step in water reorientation. Acceptor-type hydrogen bonds (terminating on an oxygen's lone pairs) are more likely to form bifurcation (it is called overcoordinated oxygen, OCO) than are donor-type hydrogen bonds, beginning on the same oxygen's hydrogens. Other liquids For example, hydrogen fluoride—which has three lone pairs on the F atom but only one H atom—can form only two bonds; (ammonia has the opposite problem: three hydrogen atoms but only one lone pair). H-F***H-F***H-F Further manifestations of solvent hydrogen bonding Increase in the melting point, boiling point, solubility, and viscosity of many compounds can be explained by the concept of hydrogen bonding. Negative azeotropy of mixtures of HF and water. The fact that ice is less dense than liquid water is due to a crystal structure stabilized by hydrogen bonds. Dramatically higher boiling points of , , and HF compared to the heavier analogues , , and HCl, where hydrogen-bonding is absent. Viscosity of anhydrous phosphoric acid and of glycerol. Dimer formation in carboxylic acids and hexamer formation in hydrogen fluoride, which occur even in the gas phase, resulting in gross deviations from the ideal gas law. Pentamer formation of water and alcohols in apolar solvents. Hydrogen bonds in polymers Hydrogen bonding plays an important role in determining the three-dimensional structures and the properties adopted by many proteins. Compared to the , , and bonds that comprise most polymers, hydrogen bonds are far weaker, perhaps 5%. 
Thus, hydrogen bonds can be broken by chemical or mechanical means while retaining the basic structure of the polymer backbone. This hierarchy of bond strengths (covalent bonds being stronger than hydrogen-bonds being stronger than van der Waals forces) is relevant in the properties of many materials. DNA In these macromolecules, bonding between parts of the same macromolecule cause it to fold into a specific shape, which helps determine the molecule's physiological or biochemical role. For example, the double helical structure of DNA is due largely to hydrogen bonding between its base pairs (as well as pi stacking interactions), which link one complementary strand to the other and enable replication. Proteins In the secondary structure of proteins, hydrogen bonds form between the backbone oxygens and amide hydrogens. When the spacing of the amino acid residues participating in a hydrogen bond occurs regularly between positions i and , an alpha helix is formed. When the spacing is less, between positions i and , then a 310 helix is formed. When two strands are joined by hydrogen bonds involving alternating residues on each participating strand, a beta sheet is formed. Hydrogen bonds also play a part in forming the tertiary structure of protein through interaction of R-groups. (See also protein folding). Bifurcated H-bond systems are common in alpha-helical transmembrane proteins between the backbone amide of residue i as the H-bond acceptor and two H-bond donors from residue : the backbone amide and a side-chain hydroxyl or thiol . The energy preference of the bifurcated H-bond hydroxyl or thiol system is -3.4 kcal/mol or -2.6 kcal/mol, respectively. This type of bifurcated H-bond provides an intrahelical H-bonding partner for polar side-chains, such as serine, threonine, and cysteine within the hydrophobic membrane environments. The role of hydrogen bonds in protein folding has also been linked to osmolyte-induced protein stabilization. Protective osmolytes, such as trehalose and sorbitol, shift the protein folding equilibrium toward the folded state, in a concentration dependent manner. While the prevalent explanation for osmolyte action relies on excluded volume effects that are entropic in nature, circular dichroism (CD) experiments have shown osmolyte to act through an enthalpic effect. The molecular mechanism for their role in protein stabilization is still not well established, though several mechanisms have been proposed. Computer molecular dynamics simulations suggest that osmolytes stabilize proteins by modifying the hydrogen bonds in the protein hydration layer. Several studies have shown that hydrogen bonds play an important role for the stability between subunits in multimeric proteins. For example, a study of sorbitol dehydrogenase displayed an important hydrogen bonding network which stabilizes the tetrameric quaternary structure within the mammalian sorbitol dehydrogenase protein family. A protein backbone hydrogen bond incompletely shielded from water attack is a dehydron. Dehydrons promote the removal of water through proteins or ligand binding. The exogenous dehydration enhances the electrostatic interaction between the amide and carbonyl groups by de-shielding their partial charges. Furthermore, the dehydration stabilizes the hydrogen bond by destabilizing the nonbonded state consisting of dehydrated isolated charges. Wool, being a protein fibre, is held together by hydrogen bonds, causing wool to recoil when stretched. 
However, washing at high temperatures can permanently break the hydrogen bonds and a garment may permanently lose its shape. Other polymers The properties of many polymers are affected by hydrogen bonds within and/or between the chains. Prominent examples include cellulose and its derived fibers, such as cotton and flax. In nylon, hydrogen bonds between carbonyl and the amide NH effectively link adjacent chains, which gives the material mechanical strength. Hydrogen bonds also affect the aramid fibre, where hydrogen bonds stabilize the linear chains laterally. The chain axes are aligned along the fibre axis, making the fibres extremely stiff and strong. Hydrogen-bond networks make both polymers sensitive to humidity levels in the atmosphere because water molecules can diffuse into the surface and disrupt the network. Some polymers are more sensitive than others. Thus nylons are more sensitive than aramids, and nylon 6 more sensitive than nylon-11. Symmetric hydrogen bond A symmetric hydrogen bond is a special type of hydrogen bond in which the proton is spaced exactly halfway between two identical atoms. The strength of the bond to each of those atoms is equal. It is an example of a three-center four-electron bond. This type of bond is much stronger than a "normal" hydrogen bond. The effective bond order is 0.5, so its strength is comparable to a covalent bond. It is seen in ice at high pressure, and also in the solid phase of many anhydrous acids such as hydrofluoric acid and formic acid at high pressure. It is also seen in the bifluoride ion . Due to severe steric constraint, the protonated form of Proton Sponge (1,8-bis(dimethylamino)naphthalene) and its derivatives also have symmetric hydrogen bonds (), although in the case of protonated Proton Sponge, the assembly is bent. Dihydrogen bond The hydrogen bond can be compared with the closely related dihydrogen bond, which is also an intermolecular bonding interaction involving hydrogen atoms. These structures have been known for some time, and well characterized by crystallography; however, an understanding of their relationship to the conventional hydrogen bond, ionic bond, and covalent bond remains unclear. Generally, the hydrogen bond is characterized by a proton acceptor that is a lone pair of electrons in nonmetallic atoms (most notably in the nitrogen, and chalcogen groups). In some cases, these proton acceptors may be pi-bonds or metal complexes. In the dihydrogen bond, however, a metal hydride serves as a proton acceptor, thus forming a hydrogen-hydrogen interaction. Neutron diffraction has shown that the molecular geometry of these complexes is similar to hydrogen bonds, in that the bond length is very adaptable to the metal complex/hydrogen donor system. Application to drugs The Hydrogen bond is relevant to drug design. According to Lipinski's rule of five the majority of orally active drugs have no more than five hydrogen bond donors and fewer than ten hydrogen bond acceptors. These interactions exist between nitrogen–hydrogen and oxygen–hydrogen centers. Many drugs do not, however, obey these "rules". References Further reading George A. Jeffrey. An Introduction to Hydrogen Bonding (Topics in Physical Chemistry). Oxford University Press, US (March 13, 1997). External links The Bubble Wall (Audio slideshow from the National High Magnetic Field Laboratory explaining cohesion, surface tension and hydrogen bonds) isotopic effect on bond dynamics Chemical bonding Hydrogen physics Supramolecular chemistry Intermolecular forces
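Counting hydrogen-bond donors and acceptors in the spirit of Lipinski's rule of five is routinely automated; below is a minimal sketch using the open-source RDKit toolkit (not mentioned in the article and used here only for illustration, with aspirin as an arbitrary example molecule):

```python
from rdkit import Chem
from rdkit.Chem import Lipinski

smiles = "CC(=O)Oc1ccccc1C(=O)O"   # aspirin, chosen only as an example
mol = Chem.MolFromSmiles(smiles)

# Lipinski-style counts: N-H/O-H groups as donors, N and O atoms as acceptors
print("H-bond donors:   ", Lipinski.NumHDonors(mol))
print("H-bond acceptors:", Lipinski.NumHAcceptors(mol))
```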
Hydrogen bond
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
4,430
[ "Molecular physics", "Materials science", "Intermolecular forces", "Condensed matter physics", "nan", "Nanotechnology", "Chemical bonding", "Supramolecular chemistry" ]
13,654
https://en.wikipedia.org/wiki/Heat%20engine
A heat engine is a system that converts heat to usable energy, particularly mechanical energy, which can then be used to do mechanical work. While originally conceived in the context of mechanical energy, the concept of the heat engine has been applied to various other kinds of energy, particularly electrical, since at least the late 19th century. The heat engine does this by bringing a working substance from a higher state temperature to a lower state temperature. A heat source generates thermal energy that brings the working substance to the higher temperature state. The working substance generates work in the working body of the engine while transferring heat to the colder sink until it reaches a lower temperature state. During this process some of the thermal energy is converted into work by exploiting the properties of the working substance. The working substance can be any system with a non-zero heat capacity, but it usually is a gas or liquid. During this process, some heat is normally lost to the surroundings and is not converted to work. Also, some energy is unusable because of friction and drag. In general, an engine is any machine that converts energy to mechanical work. Heat engines distinguish themselves from other types of engines by the fact that their efficiency is fundamentally limited by Carnot's theorem of thermodynamics. Although this efficiency limitation can be a drawback, an advantage of heat engines is that most forms of energy can be easily converted to heat by processes like exothermic reactions (such as combustion), nuclear fission, absorption of light or energetic particles, friction, dissipation and resistance. Since the heat source that supplies thermal energy to the engine can thus be powered by virtually any kind of energy, heat engines cover a wide range of applications. Heat engines are often confused with the cycles they attempt to implement. Typically, the term "engine" is used for a physical device and "cycle" for the models. Overview In thermodynamics, heat engines are often modeled using a standard engineering model such as the Otto cycle. The theoretical model can be refined and augmented with actual data from an operating engine, using tools such as an indicator diagram. Since very few actual implementations of heat engines exactly match their underlying thermodynamic cycles, one could say that a thermodynamic cycle is an ideal case of a mechanical engine. In any case, fully understanding an engine and its efficiency requires a good understanding of the (possibly simplified or idealised) theoretical model, the practical nuances of an actual mechanical engine and the discrepancies between the two. In general terms, the larger the difference in temperature between the hot source and the cold sink, the larger is the potential thermal efficiency of the cycle. On Earth, the cold side of any heat engine is limited to being close to the ambient temperature of the environment, or not much lower than 300 kelvin, so most efforts to improve the thermodynamic efficiencies of various heat engines focus on increasing the temperature of the source, within material limits. The maximum theoretical efficiency of a heat engine (which no engine ever attains) is equal to the temperature difference between the hot and cold ends divided by the temperature at the hot end, each expressed in absolute temperature. 
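As a quick numerical illustration of the limit just stated (the temperatures below are arbitrary example values, not taken from the text):

```python
# Maximum theoretical (Carnot) efficiency = (T_hot - T_cold) / T_hot,
# with both temperatures on an absolute scale (kelvin here).
T_hot = 800.0    # K, example heat-source temperature
T_cold = 300.0   # K, roughly ambient
eta_max = (T_hot - T_cold) / T_hot
print(f"maximum theoretical efficiency ~ {eta_max:.1%}")   # 62.5% for these values
```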
The efficiency of various heat engines proposed or used today has a large range: 3% (97 percent waste heat using low quality heat) for the ocean thermal energy conversion (OTEC) ocean power proposal 25% for most automotive gasoline engines 49% for a supercritical coal-fired power station such as the Avedøre Power Station 50%+ for long stroke marine Diesel engines 60% for a combined cycle gas turbine The efficiency of these processes is roughly proportional to the temperature drop across them. Significant energy may be consumed by auxiliary equipment, such as pumps, which effectively reduces efficiency. Examples Although some cycles have a typical combustion location (internal or external), they can often be implemented with the other. For example, John Ericsson developed an external heated engine running on a cycle very much like the earlier Diesel cycle. In addition, externally heated engines can often be implemented in open or closed cycles. In a closed cycle the working fluid is retained within the engine at the completion of the cycle whereas is an open cycle the working fluid is either exchanged with the environment together with the products of combustion in the case of the internal combustion engine or simply vented to the environment in the case of external combustion engines like steam engines and turbines. Everyday examples Everyday examples of heat engines include the thermal power station, internal combustion engine, firearms, refrigerators and heat pumps. Power stations are examples of heat engines run in a forward direction in which heat flows from a hot reservoir and flows into a cool reservoir to produce work as the desired product. Refrigerators, air conditioners and heat pumps are examples of heat engines that are run in reverse, i.e. they use work to take heat energy at a low temperature and raise its temperature in a more efficient way than the simple conversion of work into heat (either through friction or electrical resistance). Refrigerators remove heat from within a thermally sealed chamber at low temperature and vent waste heat at a higher temperature to the environment and heat pumps take heat from the low temperature environment and 'vent' it into a thermally sealed chamber (a house) at higher temperature. In general heat engines exploit the thermal properties associated with the expansion and compression of gases according to the gas laws or the properties associated with phase changes between gas and liquid states. Earth's heat engine Earth's atmosphere and hydrosphere—Earth's heat engine—are coupled processes that constantly even out solar heating imbalances through evaporation of surface water, convection, rainfall, winds and ocean circulation, when distributing heat around the globe. A Hadley cell is an example of a heat engine. It involves the rising of warm and moist air in the earth's equatorial region and the descent of colder air in the subtropics creating a thermally driven direct circulation, with consequent net production of kinetic energy. Phase-change cycles In phase change cycles and engines, the working fluids are gases and liquids. The engine converts the working fluid from a gas to a liquid, from liquid to gas, or both, generating work from the fluid expansion or compression. 
Rankine cycle (classical steam engine) Regenerative cycle (steam engine more efficient than Rankine cycle) Organic Rankine cycle (Coolant changing phase in temperature ranges of ice and hot liquid water) Vapor to liquid cycle (drinking bird, injector, Minto wheel) Liquid to solid cycle (frost heaving – water changing from ice to liquid and back again can lift rock up to 60 cm.) Solid to gas cycle (firearms – solid propellants combust to hot gases.) Gas-only cycles In these cycles and engines the working fluid is always a gas (i.e., there is no phase change): Carnot cycle (Carnot heat engine) Ericsson cycle (Caloric Ship John Ericsson) Stirling cycle (Stirling engine, thermoacoustic devices) Internal combustion engine (ICE): Otto cycle (e.g. gasoline/petrol engine) Diesel cycle (e.g. Diesel engine) Atkinson cycle (Atkinson engine) Brayton cycle or Joule cycle originally Ericsson cycle (gas turbine) Lenoir cycle (e.g., pulse jet engine) Miller cycle (Miller engine) Liquid-only cycles In these cycles and engines the working fluid are always like liquid: Stirling cycle (Malone engine) Electron cycles Johnson thermoelectric energy converter Thermoelectric (Peltier–Seebeck effect) Thermogalvanic cell Thermionic emission Thermotunnel cooling Magnetic cycles Thermo-magnetic motor (Tesla) Cycles used for refrigeration A domestic refrigerator is an example of a heat pump: a heat engine in reverse. Work is used to create a heat differential. Many cycles can run in reverse to move heat from the cold side to the hot side, making the cold side cooler and the hot side hotter. Internal combustion engine versions of these cycles are, by their nature, not reversible. Refrigeration cycles include: Air cycle machine Gas-absorption refrigerator Magnetic refrigeration Stirling cryocooler Vapor-compression refrigeration Vuilleumier cycle Evaporative heat engines The Barton evaporation engine is a heat engine based on a cycle producing power and cooled moist air from the evaporation of water into hot dry air. Mesoscopic heat engines Mesoscopic heat engines are nanoscale devices that may serve the goal of processing heat fluxes and perform useful work at small scales. Potential applications include e.g. electric cooling devices. In such mesoscopic heat engines, work per cycle of operation fluctuates due to thermal noise. There is exact equality that relates average of exponents of work performed by any heat engine and the heat transfer from the hotter heat bath. This relation transforms the Carnot's inequality into exact equality. This relation is also a Carnot cycle equality Efficiency The efficiency of a heat engine relates how much useful work is output for a given amount of heat energy input. From the laws of thermodynamics, after a completed cycle: and therefore where is the net work extracted from the engine in one cycle. (It is negative, in the IUPAC convention, since work is done by the engine.) is the heat energy taken from the high temperature heat source in the surroundings in one cycle. (It is positive since heat energy is added to the engine.) is the waste heat given off by the engine to the cold temperature heat sink. (It is negative since heat is lost by the engine to the sink.) In other words, a heat engine absorbs heat energy from the high temperature heat source, converting part of it to useful work and giving off the rest as waste heat to the cold temperature heat sink. In general, the efficiency of a given heat transfer process is defined by the ratio of "what is taken out" to "what is put in". 
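With the sign conventions just listed, the cycle energy balance that this passage builds on can be written out explicitly (a reconstruction of the standard first-law bookkeeping under the stated IUPAC convention, not text from the source):

```latex
\Delta U_{\mathrm{cycle}} = Q_H + Q_C + W = 0
\quad\Longrightarrow\quad
W = -\left(Q_H + Q_C\right), \qquad -W = Q_H - |Q_C| .
```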
(For a refrigerator or heat pump, which can be considered as a heat engine run in reverse, this is the coefficient of performance and it is ≥ 1.) In the case of an engine, one desires to extract work and has to put in heat Q_H, for instance from combustion of a fuel, so the engine efficiency is reasonably defined as η = −W/Q_H (the minus sign making the efficiency positive under the IUPAC convention, in which W is negative). The efficiency is less than 100% because of the waste heat unavoidably lost to the cold sink (and corresponding compression work put in) during the required recompression at the cold temperature before the power stroke of the engine can occur again. The theoretical maximum efficiency of any heat engine depends only on the temperatures it operates between. This efficiency is usually derived using an ideal imaginary heat engine such as the Carnot heat engine, although other engines using different cycles can also attain maximum efficiency. Mathematically, after a full cycle, the overall change of entropy is zero: ΔS_H + ΔS_C = 0. Note that ΔS_H is positive because isothermal expansion in the power stroke increases the multiplicity of the working fluid while ΔS_C is negative since recompression decreases the multiplicity. If the engine is ideal and runs reversibly, Q_H = T_H·ΔS_H and Q_C = T_C·ΔS_C, and thus Q_C/Q_H = −T_C/T_H, which gives η = (Q_H + Q_C)/Q_H = 1 + Q_C/Q_H and thus the Carnot limit for heat-engine efficiency, η_max = 1 − T_C/T_H, where T_H is the absolute temperature of the hot source and T_C that of the cold sink, usually measured in kelvins. The reasoning behind this being the maximal efficiency goes as follows. It is first assumed that if a more efficient heat engine than a Carnot engine is possible, then it could be driven in reverse as a heat pump. Mathematical analysis can be used to show that this assumed combination would result in a net decrease in entropy. Since, by the second law of thermodynamics, this is statistically improbable to the point of exclusion, the Carnot efficiency is a theoretical upper bound on the reliable efficiency of any thermodynamic cycle. Empirically, no heat engine has ever been shown to run at a greater efficiency than a Carnot cycle heat engine. Figure 2 and Figure 3 show variations on Carnot cycle efficiency with temperature. Figure 2 indicates how efficiency changes with an increase in the heat addition temperature for a constant compressor inlet temperature. Figure 3 indicates how the efficiency changes with an increase in the heat rejection temperature for a constant turbine inlet temperature. Endo-reversible heat-engines By its nature, any maximally efficient Carnot cycle must operate at an infinitesimal temperature gradient; this is because any transfer of heat between two bodies of differing temperatures is irreversible, therefore the Carnot efficiency expression applies only to the infinitesimal limit. The major problem is that the objective of most heat-engines is to output power, and infinitesimal power is seldom desired. A different measure of ideal heat-engine efficiency is given by considerations of endoreversible thermodynamics, where the system is broken into reversible subsystems, but with non-reversible interactions between them. A classical example is the Curzon–Ahlborn engine, very similar to a Carnot engine, but where the thermal reservoirs at temperatures T_H and T_C are allowed to be different from the temperatures T'_H and T'_C of the substance going through the reversible Carnot cycle. The heat transfers between the reservoirs and the substance are considered as conductive (and irreversible), with a heat flux proportional to the temperature difference between reservoir and working substance, dQ/dt ∝ (T − T'). In this case, a tradeoff has to be made between power output and efficiency. 
If the engine is operated very slowly, the heat flux is low, and the classical Carnot result η = 1 − T_C/T_H is found, but at the price of a vanishing power output. If instead one chooses to operate the engine at its maximum output power, the efficiency becomes η = 1 − √(T_C/T_H) (Note: T in units of K or °R). This model does a better job of predicting how well real-world heat-engines can do (Callen 1985, see also endoreversible thermodynamics): in such comparisons, the Curzon–Ahlborn efficiency much more closely models the efficiencies actually observed. History Heat engines have been known since antiquity but were only made into useful devices at the time of the industrial revolution in the 18th century. They continue to be developed today. Enhancements Engineers have studied the various heat-engine cycles to improve the amount of usable work they could extract from a given power source. The Carnot cycle limit cannot be reached with any gas-based cycle, but engineers have found at least two ways to bypass that limit and one way to get better efficiency without bending any rules: Increase the temperature difference in the heat engine. The simplest way to do this is to increase the hot side temperature, which is the approach used in modern combined-cycle gas turbines. Unfortunately, physical limits (such as the melting point of the materials used to build the engine) and environmental concerns regarding NOx production (if the heat source is combustion with ambient air) restrict the maximum temperature on workable heat-engines. Modern gas turbines run at temperatures as high as possible within the range of temperatures necessary to maintain acceptable NOx output. Another way of increasing efficiency is to lower the output temperature. One new method of doing so is to use mixed chemical working fluids, then exploit the changing behavior of the mixtures. One of the most famous is the so-called Kalina cycle, which uses a 70/30 mix of ammonia and water as its working fluid. This mixture allows the cycle to generate useful power at considerably lower temperatures than most other processes. Exploit the physical properties of the working fluid. The most common such exploitation is the use of water above the critical point (supercritical water). The behavior of fluids above their critical point changes radically, and with materials such as water and carbon dioxide it is possible to exploit those changes in behavior to extract greater thermodynamic efficiency from the heat engine, even if it is using a fairly conventional Brayton or Rankine cycle. A newer and very promising material for such applications is supercritical CO2. SO2 and xenon have also been considered for such applications. Downsides include issues of corrosion and erosion, the different chemical behavior above and below the critical point, the needed high pressures and – in the case of sulfur dioxide and to a lesser extent carbon dioxide – toxicity. Among the mentioned compounds xenon is least suitable for use in a nuclear reactor due to the high neutron absorption cross section of almost all isotopes of xenon, whereas carbon dioxide and water can also double as a neutron moderator for a thermal spectrum reactor. Exploit the chemical properties of the working fluid. A fairly new and novel exploit is to use exotic working fluids with advantageous chemical properties. One such is nitrogen dioxide (NO2), a toxic component of smog, which has a natural dimer as di-nitrogen tetraoxide (N2O4). At low temperature, the N2O4 is compressed and then heated. 
The increasing temperature causes each N2O4 to break apart into two NO2 molecules. This lowers the molecular weight of the working fluid, which drastically increases the efficiency of the cycle. Once the NO2 has expanded through the turbine, it is cooled by the heat sink, which makes it recombine into N2O4. This is then fed back by the compressor for another cycle. Such species as aluminium bromide (Al2Br6), NOCl, and Ga2I6 have all been investigated for such uses. To date, their drawbacks have not warranted their use, despite the efficiency gains that can be realized. Heat engine processes Each process is one of the following: isothermal (at constant temperature, maintained with heat added or removed from a heat source or sink) isobaric (at constant pressure) isometric/isochoric (at constant volume), also referred to as iso-volumetric adiabatic (no heat is added or removed from the system during adiabatic process) isentropic (reversible adiabatic process, no heat is added or removed during isentropic process) See also Carnot heat engine Cogeneration Einstein refrigerator Heat pump Reciprocating engine for a general description of the mechanics of piston engines Stirling engine Thermosynthesis Timeline of heat engine technology References Energy conversion Engine technology Engines Heating, ventilation, and air conditioning Thermodynamics
Heat engine
[ "Physics", "Chemistry", "Mathematics", "Technology" ]
3,733
[ "Machines", "Engines", "Physical systems", "Engine technology", "Thermodynamics", "Dynamical systems" ]
13,733
https://en.wikipedia.org/wiki/Hilbert%27s%20basis%20theorem
In mathematics Hilbert's basis theorem asserts that every ideal of a polynomial ring over a field has a finite generating set (a finite basis in Hilbert's terminology). In modern algebra, rings whose ideals have this property are called Noetherian rings. Every field, and the ring of integers are Noetherian rings. So, the theorem can be generalized and restated as: every polynomial ring over a Noetherian ring is also Noetherian. The theorem was stated and proved by David Hilbert in 1890 in his seminal article on invariant theory, where he solved several problems on invariants. In this article, he proved also two other fundamental theorems on polynomials, the Nullstellensatz (zero-locus theorem) and the syzygy theorem (theorem on relations). These three theorems were the starting point of the interpretation of algebraic geometry in terms of commutative algebra. In particular, the basis theorem implies that every algebraic set is the intersection of a finite number of hypersurfaces. Another aspect of this article had a great impact on mathematics of the 20th century; this is the systematic use of non-constructive methods. For example, the basis theorem asserts that every ideal has a finite generator set, but the original proof does not provide any way to compute it for a specific ideal. This approach was so astonishing for mathematicians of that time that the first version of the article was rejected by Paul Gordan, the greatest specialist of invariants of that time, with the comment "This is not mathematics. This is theology." Later, he recognized "I have convinced myself that even theology has its merits." Statement If is a ring, let denote the ring of polynomials in the indeterminate over . Hilbert proved that if is "not too large", in the sense that if is Noetherian, the same must be true for . Formally, Hilbert's Basis Theorem. If is a Noetherian ring, then is a Noetherian ring. Corollary. If is a Noetherian ring, then is a Noetherian ring. Hilbert proved the theorem (for the special case of multivariate polynomials over a field) in the course of his proof of finite generation of rings of invariants. The theorem is interpreted in algebraic geometry as follows: every algebraic set is the set of the common zeros of finitely many polynomials. Hilbert's proof is highly non-constructive: it proceeds by induction on the number of variables, and, at each induction step uses the non-constructive proof for one variable less. Introduced more than eighty years later, Gröbner bases allow a direct proof that is as constructive as possible: Gröbner bases produce an algorithm for testing whether a polynomial belong to the ideal generated by other polynomials. So, given an infinite sequence of polynomials, one can construct algorithmically the list of those polynomials that do not belong to the ideal generated by the preceding ones. Gröbner basis theory implies that this list is necessarily finite, and is thus a finite basis of the ideal. However, for deciding whether the list is complete, one must consider every element of the infinite sequence, which cannot be done in the finite time allowed to an algorithm. Proof Theorem. If is a left (resp. right) Noetherian ring, then the polynomial ring is also a left (resp. right) Noetherian ring. Remark. We will give two proofs, in both only the "left" case is considered; the proof for the right case is similar. First proof Suppose is a non-finitely generated left ideal. 
Then by recursion (using the axiom of dependent choice) there is a sequence of polynomials such that if is the left ideal generated by then is of minimal degree. By construction, is a non-decreasing sequence of natural numbers. Let be the leading coefficient of and let be the left ideal in generated by . Since is Noetherian the chain of ideals must terminate. Thus for some integer . So in particular, Now consider whose leading term is equal to that of ; moreover, . However, , which means that has degree less than , contradicting the minimality. Second proof Let be a left ideal. Let be the set of leading coefficients of members of . This is obviously a left ideal over , and so is finitely generated by the leading coefficients of finitely many members of ; say . Let be the maximum of the set , and let be the set of leading coefficients of members of , whose degree is . As before, the are left ideals over , and so are finitely generated by the leading coefficients of finitely many members of , say with degrees . Now let be the left ideal generated by: We have and claim also . Suppose for the sake of contradiction this is not so. Then let be of minimal degree, and denote its leading coefficient by . Case 1: . Regardless of this condition, we have , so is a left linear combination of the coefficients of the . Consider which has the same leading term as ; moreover while . Therefore and , which contradicts minimality. Case 2: . Then so is a left linear combination of the leading coefficients of the . Considering we yield a similar contradiction as in Case 1. Thus our claim holds, and which is finitely generated. Note that the only reason we had to split into two cases was to ensure that the powers of multiplying the factors were non-negative in the constructions. Applications Let be a Noetherian commutative ring. Hilbert's basis theorem has some immediate corollaries. By induction we see that will also be Noetherian. Since any affine variety over (i.e. a locus-set of a collection of polynomials) may be written as the locus of an ideal and further as the locus of its generators, it follows that every affine variety is the locus of finitely many polynomials — i.e. the intersection of finitely many hypersurfaces. If is a finitely-generated -algebra, then we know that , where is an ideal. The basis theorem implies that must be finitely generated, say , i.e. is finitely presented. Formal proofs Formal proofs of Hilbert's basis theorem have been verified through the Mizar project (see HILBASIS file) and Lean (see ring_theory.polynomial). References Further reading Cox, Little, and O'Shea, Ideals, Varieties, and Algorithms, Springer-Verlag, 1997. The definitive English-language biography of Hilbert. Commutative algebra Invariant theory Articles containing proofs Theorems in ring theory David Hilbert Theorems about polynomials
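The Gröbner-basis ideal-membership test described above can be carried out with a computer algebra system; here is a minimal sketch using SymPy's groebner routine, on a small two-generator ideal chosen purely for illustration:

```python
from sympy import symbols, groebner

x, y = symbols('x y')
f1, f2 = x**2 + y**2 - 1, x*y - 1           # generators of an illustrative ideal
G = groebner([f1, f2], x, y, order='lex')   # a Groebner basis of the ideal (f1, f2)

p = y*f1 + x*f2      # lies in the ideal by construction
q = x + y + 1        # an arbitrary polynomial to test
print(G.contains(p)) # True: division by the Groebner basis leaves zero remainder
print(G.contains(q)) # membership decided algorithmically, as described in the article
```

This is exactly the finitely many generators promised by the basis theorem being used constructively: once a Gröbner basis is computed, membership in the ideal reduces to polynomial division.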
Hilbert's basis theorem
[ "Physics", "Mathematics" ]
1,348
[ "Symmetry", "Group actions", "Theorems in algebra", "Fields of abstract algebra", "Theorems about polynomials", "Articles containing proofs", "Commutative algebra", "Invariant theory" ]
13,734
https://en.wikipedia.org/wiki/Heterocyclic%20compound
A heterocyclic compound or ring structure is a cyclic compound that has atoms of at least two different elements as members of its ring(s). Heterocyclic organic chemistry is the branch of organic chemistry dealing with the synthesis, properties, and applications of organic heterocycles. Examples of heterocyclic compounds include all of the nucleic acids, the majority of drugs, most biomass (cellulose and related materials), and many natural and synthetic dyes. More than half of known compounds are heterocycles. 59% of US FDA-approved drugs contain nitrogen heterocycles. Classification The study of organic heterocyclic chemistry focuses especially on organic unsaturated derivatives, and the preponderance of work and applications involves unstrained organic 5- and 6-membered rings. Included are pyridine, thiophene, pyrrole, and furan. Another large class of organic heterocycles refers to those fused to benzene rings. For example, the fused benzene derivatives of pyridine, thiophene, pyrrole, and furan are quinoline, benzothiophene, indole, and benzofuran, respectively. The fusion of two benzene rings gives rise to a third large family of organic compounds. Analogs of the previously mentioned heterocycles for this third family of compounds are acridine, dibenzothiophene, carbazole, and dibenzofuran, respectively. Heterocyclic organic compounds can be usefully classified based on their electronic structure. The saturated organic heterocycles behave like the acyclic derivatives. Thus, piperidine and tetrahydrofuran are conventional amines and ethers, with modified steric profiles. Therefore, the study of organic heterocyclic chemistry focuses on organic unsaturated rings. Inorganic rings Some heterocycles contain no carbon. Examples are borazine (B3N3 ring), hexachlorophosphazenes (P3N3 rings), and tetrasulfur tetranitride S4N4. In comparison with organic heterocycles, which have numerous commercial applications, inorganic ring systems are mainly of theoretical interest. IUPAC recommends the Hantzsch-Widman nomenclature for naming heterocyclic compounds. Notes on lists "Heteroatoms" are atoms in the ring other than carbon atoms. Names in italics are retained by IUPAC and do not follow the Hantzsch-Widman nomenclature Some of the names refer to classes of compounds rather than individual compounds. Also no attempt is made to list isomers. 3-membered rings Although subject to ring strain, 3-membered heterocyclic rings are well characterized. 4-membered rings 5-membered rings The 5-membered ring compounds containing two heteroatoms, at least one of which is nitrogen, are collectively called the azoles. Thiazoles and isothiazoles contain a sulfur and a nitrogen atom in the ring. Dithioles have two sulfur atoms. A large group of 5-membered ring compounds with three or more heteroatoms also exists. One example is the class of dithiazoles, which contain two sulfur atoms and one nitrogen atom. 6-membered rings The 6-membered ring compounds containing two heteroatoms, at least one of which is nitrogen, are collectively called the azines. Thiazines contain a sulfur and a nitrogen atom in the ring. Dithiines have two sulfur atoms. Six-membered rings with five heteroatomsThe hypothetical chemical compound with five nitrogen heteroatoms would be pentazine. Six-membered rings with six heteroatomsThe hypothetical chemical compound with six nitrogen heteroatoms would be hexazine. Borazine is a six-membered ring with three nitrogen heteroatoms and three boron heteroatoms. 
7-membered rings In a 7-membered ring, the heteroatom must be able to provide an empty π-orbital (e.g. boron) for "normal" aromatic stabilization to be available; otherwise, homoaromaticity may be possible. 8-membered rings Borazocine is an eight-membered ring with four nitrogen heteroatoms and four boron heteroatoms. 9-membered rings Images of rings with one heteroatom Fused/condensed rings Heterocyclic rings systems that are formally derived by fusion with other rings, either carbocyclic or heterocyclic, have a variety of common and systematic names. For example, with the benzo-fused unsaturated nitrogen heterocycles, pyrrole provides indole or isoindole depending on the orientation. The pyridine analog is quinoline or isoquinoline. For azepine, benzazepine is the preferred name. Likewise, the compounds with two benzene rings fused to the central heterocycle are carbazole, acridine, and dibenzoazepine. Thienothiophene are the fusion of two thiophene rings. Phosphaphenalenes are a tricyclic phosphorus-containing heterocyclic system derived from the carbocycle phenalene. History of heterocyclic chemistry The history of heterocyclic chemistry began in the 1800s, in step with the development of organic chemistry. Some noteworthy developments: 1818: Brugnatelli makes alloxan from uric acid 1832: Dobereiner produces furfural (a furan) by treating starch with sulfuric acid 1834: Runge obtains pyrrole ("fiery oil") by dry distillation of bones 1906: Friedlander synthesizes indigo dye, allowing synthetic chemistry to displace a large agricultural industry 1936: Treibs isolates chlorophyll derivatives from crude oil, explaining the biological origin of petroleum. 1951: Chargaff's rules are described, highlighting the role of heterocyclic compounds (purines and pyrimidines) in the genetic code. Uses Heterocyclic compounds are pervasive in many areas of life sciences and technology. Many drugs are heterocyclic compounds. See also Spiroketals References External links Hantzsch-Widman nomenclature, IUPAC Heterocyclic amines in cooked meat, US CDC List of known and probable carcinogens, American Cancer Society List of known carcinogens by the State of California, Proposition 65 (more comprehensive)
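The defining criterion (a ring containing atoms of at least two different elements) is easy to check programmatically; a minimal sketch with the open-source RDKit toolkit follows (RDKit is not part of the article, and the three molecules are arbitrary single-ring examples):

```python
from rdkit import Chem

# Pyridine and furan as illustrative heterocycles; benzene as a carbocyclic control.
# The check below gathers the elements of all ring atoms, which is adequate for
# these single-ring examples (fused systems would need per-ring bookkeeping).
examples = [("benzene", "c1ccccc1"), ("pyridine", "c1ccncc1"), ("furan", "c1ccoc1")]
for name, smi in examples:
    mol = Chem.MolFromSmiles(smi)
    ring_elements = {a.GetSymbol() for a in mol.GetAtoms() if a.IsInRing()}
    kind = "heterocyclic" if len(ring_elements) > 1 else "carbocyclic"
    print(name, sorted(ring_elements), kind)
```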
Heterocyclic compound
[ "Chemistry" ]
1,407
[ "Organic compounds", "Heterocyclic compounds" ]
14,022
https://en.wikipedia.org/wiki/Haber%20process
The Haber process, also called the Haber–Bosch process, is the main industrial procedure for the production of ammonia. It converts atmospheric nitrogen (N2) to ammonia (NH3) by a reaction with hydrogen (H2) using finely divided iron metal as a catalyst: This reaction is thermodynamically favorable at room temperature, but the kinetics are prohibitively slow. At high temperatures at which catalysts are active enough that the reaction proceeds to equilibrium, the reaction is reactant-favored rather than product-favored. As a result, high pressures are needed to drive the reaction forward. The German chemists Fritz Haber and Carl Bosch developed the process in the first decade of the 20th century, and its improved efficiency over existing methods such as the Birkeland-Eyde and Frank-Caro processes was a major advancement in the industrial production of ammonia. The Haber process can be combined with steam reforming to produce ammonia with just three chemical inputs: water, natural gas, and atmospheric nitrogen. Both Haber and Bosch were eventually awarded the Nobel Prize in Chemistry: Haber in 1918 for ammonia synthesis specifically, and Bosch in 1931 for related contributions to high-pressure chemistry. History During the 19th century, the demand rapidly increased for nitrates and ammonia for use as fertilizers, which supply plants with the nutrients they need to grow, and for industrial feedstocks. The main source was mining niter deposits and guano from tropical islands. At the beginning of the 20th century these reserves were thought insufficient to satisfy future demands, and research into new potential sources of ammonia increased. Although atmospheric nitrogen (N2) is abundant, comprising ~78% of the air, it is exceptionally stable and does not readily react with other chemicals. Haber, with his assistant Robert Le Rossignol, developed the high-pressure devices and catalysts needed to demonstrate the Haber process at a laboratory scale. They demonstrated their process in the summer of 1909 by producing ammonia from the air, drop by drop, at the rate of about per hour. The process was purchased by the German chemical company BASF, which assigned Carl Bosch the task of scaling up Haber's tabletop machine to industrial scale. He succeeded in 1910. Haber and Bosch were later awarded Nobel Prizes, in 1918 and 1931 respectively, for their work in overcoming the chemical and engineering problems of large-scale, continuous-flow, high-pressure technology. Ammonia was first manufactured using the Haber process on an industrial scale in 1913 in BASF's Oppau plant in Germany, reaching 20 tonnes/day in 1914. During World War I, the production of munitions required large amounts of nitrate. The Allied powers had access to large deposits of sodium nitrate in Chile (Chile saltpetre) controlled by British companies. India had large supplies too, but it was also controlled by the British. Moreover, even if German commercial interests had nominal legal control of such resources, the Allies controlled the sea lanes and imposed a highly effective blockade which would have prevented such supplies from reaching Germany. The Haber process proved so essential to the German war effort that it is considered virtually certain Germany would have been defeated in a matter of months without it. Synthetic ammonia from the Haber process was used for the production of nitric acid, a precursor to the nitrates used in explosives. 
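The overall reaction referred to above can be written with its standard reaction enthalpy as commonly tabulated (the numerical value is quoted from standard thermochemical tables rather than from the passage itself):

```latex
\mathrm{N_2(g)} + 3\,\mathrm{H_2(g)} \;\rightleftharpoons\; 2\,\mathrm{NH_3(g)},
\qquad \Delta H^{\circ}_{298} \approx -92\ \mathrm{kJ\,mol^{-1}}
```

The negative enthalpy is what makes the reaction thermodynamically favorable at room temperature, while the reduction from four moles of gas to two is what makes high pressure effective at driving it forward.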
The original Haber–Bosch reaction chambers used osmium as the catalyst, but this was available in extremely small quantities. Haber noted that uranium was almost as effective and easier to obtain than osmium. In 1909, BASF researcher Alwin Mittasch discovered a much less expensive iron-based catalyst that is still used. A major contributor to the discovery of this catalysis was Gerhard Ertl. The most popular catalysts are based on iron promoted with K2O, CaO, SiO2, and Al2O3. During the interwar years, alternative processes were developed, most notably the Casale process, the Claude process, and the Mont-Cenis process developed by the Friedrich Uhde Ingenieurbüro. Luigi Casale and Georges Claude proposed to increase the pressure of the synthesis loop to , thereby increasing the single-pass ammonia conversion and making nearly complete liquefaction at ambient temperature feasible. Claude proposed to have three or four converters with liquefaction steps in series, thereby avoiding recycling. Most plants continue to use the original Haber process ( and ), albeit with improved single-pass conversion and lower energy consumption due to process and catalyst optimization. Process Combined with the energy needed to produce hydrogen and purified atmospheric nitrogen, ammonia production is energy-intensive, accounting for 1% to 2% of global energy consumption, 3% of global carbon emissions, and 3% to 5% of natural gas consumption. Hydrogen required for ammonia synthesis is most often produced through gasification of carbon-containing material, mostly natural gas, but other potential carbon sources include coal, petroleum, peat, biomass, or waste. As of 2012, the global production of ammonia produced from natural gas using the steam reforming process was 72%, however in China as of 2022 natural gas and coal were responsible for 20% and 75% respectively. Hydrogen can also be produced from water and electricity using electrolysis: at one time, most of Europe's ammonia was produced from the Hydro plant at Vemork. Other possibilities include biological hydrogen production or photolysis, but at present, steam reforming of natural gas is the most economical means of mass-producing hydrogen. The choice of catalyst is important for synthesizing ammonia. In 2012, Hideo Hosono's group found that Ru-loaded calcium-aluminum oxide C12A7: electride works well as a catalyst and pursued more efficient formation. This method is implemented in a small plant for ammonia synthesis in Japan. In 2019, Hosono's group found another catalyst, a novel perovskite oxynitride-hydride , that works at lower temperature and without costly ruthenium. Hydrogen production The major source of hydrogen is methane. Steam reforming of natural gas extracts hydrogen from methane in a high-temperature and pressure tube inside a reformer with a nickel catalyst. Other fossil fuel sources include coal, heavy fuel oil and naphtha. Green hydrogen is produced without fossil fuels or carbon dioxide emissions from biomass, water electrolysis and thermochemical (solar or another heat source) water splitting. Starting with a natural gas () feedstock, the steps are as follows; Remove sulfur compounds from the feedstock, because sulfur deactivates the catalysts used in subsequent steps. 
Sulfur removal requires catalytic hydrogenation to convert sulfur compounds in the feedstocks to gaseous hydrogen sulfide (hydrodesulfurization, hydrotreating): H2 + RSH -> RH + H2S Hydrogen sulfide is adsorbed and removed by passing it through beds of zinc oxide where it is converted to solid zinc sulfide: H2S + ZnO -> ZnS + H2O Catalytic steam reforming of the sulfur-free feedstock forms hydrogen plus carbon monoxide: CH4 + H2O -> CO + 3 H2 Catalytic shift conversion converts the carbon monoxide to carbon dioxide and more hydrogen: CO + H2O -> CO2 + H2 Carbon dioxide is removed either by absorption in aqueous ethanolamine solutions or by adsorption in pressure swing adsorbers (PSA) using proprietary solid adsorption media. The final step in producing hydrogen is to use catalytic methanation to remove residual carbon monoxide or carbon dioxide: CO + 3 H2 -> CH4 + H2O CO2 + 4 H2 -> CH4 + 2 H2O Ammonia production The hydrogen is catalytically reacted with nitrogen (derived from process air) to form anhydrous liquid ammonia. It is difficult and expensive, as lower temperatures result in slower reaction kinetics (hence a slower reaction rate) and high pressure requires high-strength pressure vessels that resist hydrogen embrittlement. Diatomic nitrogen is bound together by a triple bond, which makes it relatively inert. Yield and efficiency are low, meaning that the ammonia must be extracted and the gases reprocessed for the reaction to proceed at an acceptable pace. This step is known as the ammonia synthesis loop: 3 H2 + N2 -> 2 NH3 The gases (nitrogen and hydrogen) are passed over four beds of catalyst, with cooling between each pass to maintain a reasonable equilibrium constant. On each pass, only about 15% conversion occurs, but unreacted gases are recycled, and eventually conversion of 97% is achieved. Due to the nature of the (typically multi-promoted magnetite) catalyst used in the ammonia synthesis reaction, only low levels of oxygen-containing (especially CO, CO2 and H2O) compounds can be tolerated in the hydrogen/nitrogen mixture. Relatively pure nitrogen can be obtained by air separation, but additional oxygen removal may be required. Because of relatively low single pass conversion rates (typically less than 20%), a large recycle stream is required. This can lead to the accumulation of inerts in the gas. Nitrogen gas (N2) is unreactive because the atoms are held together by triple bonds. The Haber process relies on catalysts that accelerate the scission of these bonds. Two opposing considerations are relevant: the equilibrium position and the reaction rate. At room temperature, the equilibrium is in favor of ammonia, but the reaction does not proceed at a detectable rate due to its high activation energy. Because the reaction is exothermic, the equilibrium constant decreases with increasing temperature following Le Châtelier's principle. It becomes unity at around . Above this temperature, the equilibrium quickly becomes unfavorable at atmospheric pressure, according to the Van 't Hoff equation. Lowering the temperature is unhelpful because the catalyst requires a temperature of at least 400 °C to be efficient. Increased pressure favors the forward reaction because 4 moles of reactant produce 2 moles of product, and the pressure used () alters the equilibrium concentrations to give a substantial ammonia yield. 
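A minimal arithmetic sketch of the recycle loop just described, under idealized assumptions (a fixed 15% single-pass conversion, complete recycle of unreacted gas, and purge losses and inert build-up ignored), shows how repeated passes approach the quoted overall conversion:

```python
# Idealized bookkeeping for the ammonia synthesis loop: each pass converts 15%
# of the remaining feed, and everything unreacted is sent around again.
single_pass = 0.15
unreacted = 1.0
passes = 0
while 1.0 - unreacted < 0.97:
    unreacted *= (1.0 - single_pass)
    passes += 1
print(passes, f"passes give a cumulative conversion of {1.0 - unreacted:.1%}")
# Roughly twenty passes are needed in this toy model, consistent with the ~97%
# overall conversion quoted for real plants (which also purge inerts).
```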
The reason for this is evident in the equilibrium relationship: where is the fugacity coefficient of species , is the mole fraction of the same species, is the reactor pressure, and is standard pressure, typically . Economically, reactor pressurization is expensive: pipes, valves, and reaction vessels need to be strong enough, and safety considerations affect operating at 20 MPa. Compressors take considerable energy, as work must be done on the (compressible) gas. Thus, the compromise used gives a single-pass yield of around 15%. While removing the ammonia from the system increases the reaction yield, this step is not used in practice, since the temperature is too high; instead it is removed from the gases leaving the reaction vessel. The hot gases are cooled under high pressure, allowing the ammonia to condense and be removed as a liquid. Unreacted hydrogen and nitrogen gases are returned to the reaction vessel for another round. While most ammonia is removed (typically down to 2–5 mol.%), some ammonia remains in the recycle stream. In academic literature, a more complete separation of ammonia has been proposed by absorption in metal halides, metal-organic frameworks or zeolites. Such a process is called an absorbent-enhanced Haber process or adsorbent-enhanced Haber–Bosch process. Pressure/temperature The steam reforming, shift conversion, carbon dioxide removal, and methanation steps each operate at absolute pressures of about 25 to 35 bar, while the ammonia synthesis loop operates at temperatures of and pressures ranging from 60 to 180 bar depending upon the method used. The resulting ammonia must then be separated from the residual hydrogen and nitrogen at temperatures of . Catalysts The Haber–Bosch process relies on catalysts to accelerate N2 hydrogenation. The catalysts are heterogeneous solids that interact with gaseous reagents. The catalyst typically consists of finely divided iron bound to an iron oxide carrier containing promoters possibly including aluminium oxide, potassium oxide, calcium oxide, potassium hydroxide, molybdenum, and magnesium oxide. Iron-based catalysts The iron catalyst is obtained from finely ground iron powder, which is usually obtained by reduction of high-purity magnetite (Fe3O4). The pulverized iron is oxidized to give magnetite or wüstite (FeO, ferrous oxide) particles of a specific size. The magnetite (or wüstite) particles are then partially reduced, removing some of the oxygen. The resulting catalyst particles consist of a core of magnetite, encased in a shell of wüstite, which in turn is surrounded by an outer shell of metallic iron. The catalyst maintains most of its bulk volume during the reduction, resulting in a highly porous high-surface-area material, which enhances its catalytic effectiveness. Minor components include calcium and aluminium oxides, which support the iron catalyst and help it maintain its surface area. These oxides of Ca, Al, K, and Si are unreactive to reduction by hydrogen. The production of the catalyst requires a particular melting process in which used raw materials must be free of catalyst poisons and the promoter aggregates must be evenly distributed in the magnetite melt. Rapid cooling of the magnetite, which has an initial temperature of about 3500 °C, produces the desired precursor. Unfortunately, the rapid cooling ultimately forms a catalyst of reduced abrasion resistance. Despite this disadvantage, the method of rapid cooling is often employed. 
The reduction of the precursor magnetite to α-iron is carried out directly in the production plant with synthesis gas. The reduction of the magnetite proceeds via the formation of wüstite (FeO) so that particles with a core of magnetite become surrounded by a shell of wüstite. The further reduction of magnetite and wüstite leads to the formation of α-iron, which forms together with the promoters the outer shell. The involved processes are complex and depend on the reduction temperature: At lower temperatures, wüstite disproportionates into an iron phase and a magnetite phase; at higher temperatures, the reduction of the wüstite and magnetite to iron dominates. The α-iron forms primary crystallites with a diameter of about 30 nanometers. These crystallites form a bimodal pore system with pore diameters of about 10 nanometers (produced by the reduction of the magnetite phase) and of 25 to 50 nanometers (produced by the reduction of the wüstite phase). With the exception of cobalt oxide, the promoters are not reduced. During the reduction of the iron oxide with synthesis gas, water vapor is formed. This water vapor must be considered for high catalyst quality as contact with the finely divided iron would lead to premature aging of the catalyst through recrystallization, especially in conjunction with high temperatures. The vapor pressure of the water in the gas mixture produced during catalyst formation is thus kept as low as possible, target values are below 3 gm−3. For this reason, the reduction is carried out at high gas exchange, low pressure, and low temperatures. The exothermic nature of the ammonia formation ensures a gradual increase in temperature. The reduction of fresh, fully oxidized catalyst or precursor to full production capacity takes four to ten days. The wüstite phase is reduced faster and at lower temperatures than the magnetite phase (Fe3O4). After detailed kinetic, microscopic, and X-ray spectroscopic investigations it was shown that wüstite reacts first to metallic iron. This leads to a gradient of iron(II) ions, whereby these diffuse from the magnetite through the wüstite to the particle surface and precipitate there as iron nuclei. A high-activity novel catalyst based on this phenomenon was discovered in the 1980s at the Zhejiang University of Technology and commercialized by 2003. Pre-reduced, stabilized catalysts occupy a significant market share. They are delivered showing the fully developed pore structure, but have been oxidized again on the surface after manufacture and are therefore no longer pyrophoric. The reactivation of such pre-reduced catalysts requires only 30 to 40 hours instead of several days. In addition to the short start-up time, they have other advantages such as higher water resistance and lower weight. Catalysts other than iron Many efforts have been made to improve the Haber–Bosch process. Many metals were tested as catalysts. The requirement for suitability is the dissociative adsorption of nitrogen (i. e. the nitrogen molecule must be split into nitrogen atoms upon adsorption). If the binding of the nitrogen is too strong, the catalyst is blocked and the catalytic ability is reduced (self-poisoning). The elements in the periodic table to the left of the iron group show such strong bonds. Further, the formation of surface nitrides makes, for example, chromium catalysts ineffective. Metals to the right of the iron group, in contrast, adsorb nitrogen too weakly for ammonia synthesis. Haber initially used catalysts based on osmium and uranium. 
Uranium reacts to its nitride during catalysis, while osmium oxide is rare. According to theoretical and practical studies, improvements over pure iron are limited. The activity of iron catalysts is increased by the inclusion of cobalt. Ruthenium Ruthenium forms highly active catalysts. Because they allow milder operating pressures and temperatures, Ru-based materials are referred to as second-generation catalysts. Such catalysts are prepared by the decomposition of triruthenium dodecacarbonyl on graphite. A drawback of activated-carbon-supported ruthenium-based catalysts is the methanation of the support in the presence of hydrogen. Their activity is strongly dependent on the catalyst carrier and the promoters. A wide range of substances can be used as carriers, including carbon, magnesium oxide, aluminium oxide, zeolites, spinels, and boron nitride. Ruthenium-activated carbon-based catalysts have been used industrially in the KBR Advanced Ammonia Process (KAAP) since 1992. The carbon carrier is partially degraded to methane; however, this can be mitigated by a special treatment of the carbon at 1500 °C, thus prolonging the catalyst lifetime. In addition, the finely dispersed carbon poses a risk of explosion. For these reasons and due to its low acidity, magnesium oxide has proven to be a good choice of carrier. Carriers with acidic properties extract electrons from ruthenium, make it less reactive, and have the undesirable effect of binding ammonia to the surface. Catalyst poisons Catalyst poisons lower catalyst activity. They are usually impurities in the synthesis gas. Permanent poisons cause irreversible loss of catalytic activity, while temporary poisons lower the activity while present. Sulfur compounds, phosphorus compounds, arsenic compounds, and chlorine compounds are permanent poisons. Oxygenic compounds like water, carbon monoxide, carbon dioxide, and oxygen are temporary poisons. Although chemically inert components of the synthesis gas mixture such as noble gases or methane are not strictly poisons, they accumulate through the recycling of the process gases and thus lower the partial pressure of the reactants, which in turn slows conversion. Industrial production Synthesis parameters The reaction is: N2 + 3 H2 ⇌ 2 NH3 The reaction is an exothermic equilibrium reaction in which the gas volume is reduced. The equilibrium constant Keq of the reaction (see table) is obtained from the partial pressures of the components: Keq = p(NH3)^2 / (p(N2) · p(H2)^3) Since the reaction is exothermic, the equilibrium of the reaction shifts at lower temperatures to the ammonia side. Furthermore, four volumetric units of the raw materials produce two volumetric units of ammonia. According to Le Chatelier's principle, higher pressure favours ammonia. High pressure is necessary to ensure sufficient surface coverage of the catalyst with nitrogen. For this reason, a ratio of nitrogen to hydrogen of 1 to 3, a pressure of 250 to 350 bar, a temperature of 450 to 550 °C and α iron are optimal. The catalyst ferrite (α-Fe) is produced in the reactor by the reduction of magnetite with hydrogen. The catalyst has its highest efficiency at temperatures of about 400 to 500 °C. Even though the catalyst greatly lowers the activation energy for the cleavage of the triple bond of the nitrogen molecule, high temperatures are still required for an appropriate reaction rate. At the industrially used reaction temperature of 450 to 550 °C an optimum between the decomposition of ammonia into the starting materials and the effectiveness of the catalyst is achieved. 
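The combined effect of temperature and pressure summarised above can be illustrated numerically. The sketch below treats the gas as ideal (equivalently, it sets the fugacity coefficients in the relationship quoted earlier to 1) and solves the mass-action expression for a stoichiometric 1:3 nitrogen/hydrogen feed. It is only an illustration: the equilibrium-constant value used is an assumed, order-of-magnitude figure for a typical synthesis temperature, not a number taken from this article, and real design calculations rely on fugacity corrections and measured data.

    # Sketch: equilibrium NH3 mole fraction for N2 + 3 H2 <=> 2 NH3,
    # assuming ideal-gas behaviour (all fugacity coefficients set to 1).
    # The equilibrium constant K below is an assumed, illustrative value.

    def nh3_equilibrium_fraction(K, P_bar, P0_bar=1.0):
        """Solve for the extent of reaction x of a stoichiometric 1:3 N2/H2
        feed, then return the equilibrium NH3 mole fraction."""
        def residual(x):
            total = 4.0 - 2.0 * x              # moles left per mole of N2 fed
            y_n2 = (1.0 - x) / total
            y_h2 = 3.0 * (1.0 - x) / total
            y_nh3 = 2.0 * x / total
            # mass-action expression including the (P0/P)^2 pressure factor
            return (y_nh3**2 / (y_n2 * y_h2**3)) * (P0_bar / P_bar) ** 2 - K
        lo, hi = 1e-9, 1.0 - 1e-9              # bisection on x in (0, 1)
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if residual(mid) > 0.0:
                hi = mid
            else:
                lo = mid
        x = 0.5 * (lo + hi)
        return 2.0 * x / (4.0 - 2.0 * x)

    K_assumed = 5e-5   # assumed, illustrative order-of-magnitude value only
    for P in (1, 100, 200, 300):
        y = nh3_equilibrium_fraction(K_assumed, P)
        print(f"P = {P:>3} bar -> equilibrium NH3 mole fraction ~ {y:.1%}")

For any fixed value of K, raising the pressure raises the equilibrium ammonia fraction, which is the quantitative counterpart of the Le Chatelier argument above.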
The formed ammonia is continuously removed from the system. The volume fraction of ammonia in the gas mixture is about 20%. The inert components, especially the noble gases such as argon, should not exceed a certain content in order not to reduce the partial pressure of the reactants too much. To remove the inert gas components, part of the gas is removed and the argon is separated in a gas separation plant. The extraction of pure argon from the circulating gas is carried out using the Linde process. Large-scale implementation Modern ammonia plants produce more than 3000 tons per day in one production line. The following diagram shows the set-up of a modern (designed in the early 1960s by Kellogg) "single-train" Haber–Bosch plant: Depending on its origin, the synthesis gas must first be freed from impurities such as hydrogen sulfide or organic sulfur compounds, which act as catalyst poisons. High concentrations of hydrogen sulfide, which occur in synthesis gas from carbonization coke, are removed in a wet cleaning stage such as the sulfosolvan process, while low concentrations are removed by adsorption on activated carbon. Organosulfur compounds are separated by pressure swing adsorption together with carbon dioxide after CO conversion. To produce hydrogen by steam reforming, methane reacts with water vapor using a nickel oxide-alumina catalyst in the primary reformer to form carbon monoxide and hydrogen. The energy required for this, the enthalpy ΔH, is 206 kJ/mol. The methane gas reacts in the primary reformer only partially. To increase the hydrogen yield and keep the content of inert components (i. e. methane) as low as possible, the remaining methane gas is converted in a second step with oxygen to hydrogen and carbon monoxide in the secondary reformer. The secondary reformer is supplied with air as the oxygen source. Also, the required nitrogen for the subsequent ammonia synthesis is added to the gas mixture. In the third step, the carbon monoxide is oxidized to carbon dioxide, which is called CO conversion or the water-gas shift reaction. Carbon monoxide and carbon dioxide would form carbamates with ammonia, which would clog (as solids) pipelines and apparatus within a short time. In the following process step, the carbon dioxide must therefore be removed from the gas mixture. In contrast to carbon monoxide, carbon dioxide can easily be removed from the gas mixture by gas scrubbing with triethanolamine. The gas mixture then still contains methane and noble gases such as argon, which, however, behave inertly. The gas mixture is then compressed to operating pressure by turbo compressors. The resulting compression heat is dissipated by heat exchangers; it is used to preheat raw gases. The actual production of ammonia takes place in the ammonia reactor. The first reactors burst under the high pressure because atomic hydrogen diffusing into the carbon steel reacted with the carbon in the steel to form methane, producing cracks in the steel. Bosch, therefore, developed tube reactors consisting of a pressure-bearing steel tube in which a low-carbon iron lining tube was inserted and filled with the catalyst. Hydrogen that diffused through the inner steel pipe escaped to the outside via thin holes in the outer steel jacket, the so-called Bosch holes. A disadvantage of the tubular reactors was the relatively high pressure loss, which had to be made up by additional compression. The development of hydrogen-resistant chromium-molybdenum steels made it possible to construct single-walled pipes. 
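The inert-gas management described at the start of this section can be made concrete with a steady-state balance on the synthesis loop: at steady state, the inerts entering with the make-up gas must leave with the purge (losses dissolved in the product ammonia are neglected here). The figures below are assumptions chosen purely to show the shape of the relationship, not plant data.

    # Sketch: steady-state inert (Ar + CH4) level in the synthesis loop
    # as a function of the purge rate. Assumed, illustrative figures only;
    # inert losses with the condensed ammonia are neglected.

    def loop_inert_fraction(inerts_in_makeup, purge_to_makeup_ratio):
        """Steady state: make-up flow * z_in = purge flow * z_loop,
        so the loop inert fraction is z_in / (purge/make-up ratio)."""
        return inerts_in_makeup / purge_to_makeup_ratio

    z_in = 0.01   # assumed 1 mol% inerts in the make-up gas
    for purge_ratio in (0.02, 0.05, 0.10, 0.20):
        z_loop = loop_inert_fraction(z_in, purge_ratio)
        print(f"purge = {purge_ratio:.0%} of make-up -> loop inerts ~ {z_loop:.0%}")

The smaller the purge, the higher the inert level at which the loop settles, which is why part of the circulating gas is withdrawn and sent to argon recovery.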
Modern ammonia reactors are designed as multi-storey reactors with a low pressure drop, in which the catalysts are distributed as fills over about ten storeys one above the other. The gas mixture flows through them one after the other from top to bottom. Cold gas is injected from the side for cooling. A disadvantage of this reactor type is the incomplete conversion of the cold gas mixture in the last catalyst bed. Alternatively, the reaction mixture between the catalyst layers is cooled using heat exchangers, whereby the hydrogen-nitrogen mixture is preheated to the reaction temperature. Reactors of this type have three catalyst beds. In addition to good temperature control, this reactor type has the advantage of better conversion of the raw material gases compared to reactors with cold gas injection. Uhde has developed and is using an ammonia converter with three radial flow catalyst beds and two internal heat exchangers instead of axial flow catalyst beds. This further reduces the pressure drop in the converter. The reaction product is continuously removed for maximum yield. The gas mixture is cooled to 450 °C in a heat exchanger using water, freshly supplied gases, and other process streams. The ammonia also condenses and is separated in a pressure separator. Unreacted nitrogen and hydrogen are then compressed back to the process by a circulating gas compressor, supplemented with fresh gas, and fed to the reactor. In a subsequent distillation, the product ammonia is purified. Mechanism Elementary steps The mechanism of ammonia synthesis contains the following seven elementary steps: (1) transport of the reactants from the gas phase through the boundary layer to the surface of the catalyst; (2) pore diffusion to the reaction center; (3) adsorption of reactants; (4) reaction; (5) desorption of product; (6) transport of the product through the pore system back to the surface; (7) transport of the product into the gas phase. Transport and diffusion (the first and last two steps) are fast compared to adsorption, reaction, and desorption because of the shell structure of the catalyst. It is known from various investigations that the rate-determining step of the ammonia synthesis is the dissociation of nitrogen. In contrast, exchange reactions between hydrogen and deuterium on the Haber–Bosch catalysts still take place at temperatures of at a measurable rate; the exchange between deuterium and hydrogen on the ammonia molecule also takes place at room temperature. Since the adsorption of both molecules is rapid, it cannot determine the speed of ammonia synthesis. In addition to the reaction conditions, the adsorption of nitrogen on the catalyst surface depends on the microscopic structure of the catalyst surface. Iron has different crystal surfaces, whose reactivity is very different. The Fe(111) and Fe(211) surfaces have by far the highest activity. The explanation for this is that only these surfaces have so-called C7 sites – these are iron atoms with seven closest neighbours. The dissociative adsorption of nitrogen on the surface follows the following scheme, where S* symbolizes an iron atom on the surface of the catalyst: N2 → S*–N2 (γ-species) → S*–N2–S* (α-species) → 2 S*–N (β-species, surface nitride) The adsorption of nitrogen is similar to the chemisorption of carbon monoxide. On a Fe(111) surface, the adsorption of nitrogen first leads to an adsorbed γ-species with an adsorption energy of 24 kJ mol−1 and an N-N stretch vibration of 2100 cm−1. 
Since the nitrogen is isoelectronic to carbon monoxide, it adsorbs in an on-end configuration in which the molecule is bound perpendicular to the metal surface at one nitrogen atom. This has been confirmed by photoelectron spectroscopy. Ab-initio-MO calculations have shown that, in addition to the σ binding of the free electron pair of nitrogen to the metal, there is a π binding from the d orbitals of the metal to the π* orbitals of nitrogen, which strengthens the iron-nitrogen bond. The nitrogen in the α state is more strongly bound with 31 kJ mol−1. The resulting N–N bond weakening could be experimentally confirmed by a reduction of the wave numbers of the N–N stretching oscillation to 1490 cm−1. Further heating of the Fe(111) area covered by α-N2 leads to both desorption and the emergence of a new band at 450 cm−1. This represents a metal-nitrogen oscillation, the β state. A comparison with vibration spectra of complex compounds allows the conclusion that the N2 molecule is bound "side-on", with an N atom in contact with a C7 site. This structure is called "surface nitride". The surface nitride is very strongly bound to the surface. Hydrogen atoms (Hads), which are very mobile on the catalyst surface, quickly combine with it. Infrared spectroscopically detected surface imides (NHad), surface amides (NH2,ad) and surface ammoniacates (NH3,ad) are formed; the latter decay under NH3 release (desorption). The individual molecules were identified or assigned by X-ray photoelectron spectroscopy (XPS), high-resolution electron energy loss spectroscopy (HREELS) and IR spectroscopy. On the basis of these experimental findings, the reaction mechanism is believed to involve the following steps (see also figure): (1) N2 (g) → N2 (adsorbed); (2) N2 (adsorbed) → 2 N (adsorbed); (3) H2 (g) → H2 (adsorbed); (4) H2 (adsorbed) → 2 H (adsorbed); (5) N (adsorbed) + 3 H (adsorbed) → NH3 (adsorbed); (6) NH3 (adsorbed) → NH3 (g). Reaction 5 occurs in three steps, forming NH, NH2, and then NH3. Experimental evidence points to reaction 2 as being the slow, rate-determining step. This is not unexpected, since that step breaks the nitrogen triple bond, the strongest of the bonds broken in the process. As with all Haber–Bosch catalysts, nitrogen dissociation is the rate-determining step for ruthenium-activated carbon catalysts. The active center for ruthenium is a so-called B5 site, a 5-fold coordinated position on the Ru(0001) surface where two ruthenium atoms form a step edge with three ruthenium atoms on the Ru(0001) surface. The number of B5 sites depends on the size and shape of the ruthenium particles, the ruthenium precursor and the amount of ruthenium used. The reinforcing effect of the basic carrier used in the ruthenium catalyst is similar to the promoter effect of alkali metals used in the iron catalyst. Energy diagram An energy diagram can be created based on the enthalpy of reaction of the individual steps. The energy diagram can be used to compare homogeneous and heterogeneous reactions: Due to the high activation energy of the dissociation of nitrogen, the homogeneous gas phase reaction is not realizable. The catalyst avoids this problem as the energy gain resulting from the binding of nitrogen atoms to the catalyst surface overcompensates for the necessary dissociation energy so that the reaction is finally exothermic. Nevertheless, the dissociative adsorption of nitrogen remains the rate-determining step: not because of the activation energy, but mainly because of the unfavorable pre-exponential factor of the rate constant. 
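The closing remark, that nitrogen dissociation is rate-limiting mainly because of an unfavorable pre-exponential factor rather than a large barrier, can be illustrated with the Arrhenius expression k = A exp(-Ea/RT). The numbers below are purely illustrative assumptions rather than measured kinetics for any catalyst; they only show how a step with a modest barrier but a small pre-exponential factor can still be slower than a step with a higher barrier and an ordinary pre-exponential factor.

    # Illustration of how the pre-exponential factor A can dominate the rate.
    # All numerical values are assumptions for illustration, not data for
    # the Haber-Bosch catalyst.
    import math

    R = 8.314  # J mol^-1 K^-1

    def arrhenius(A, Ea_kJ_per_mol, T_K):
        """Rate constant k = A * exp(-Ea / (R T))."""
        return A * math.exp(-Ea_kJ_per_mol * 1000.0 / (R * T_K))

    T = 700.0  # K, roughly the scale of industrial synthesis temperatures

    k_high_barrier = arrhenius(A=1e13, Ea_kJ_per_mol=100.0, T_K=T)  # ordinary A
    k_small_A = arrhenius(A=1e6, Ea_kJ_per_mol=40.0, T_K=T)         # low barrier, small A

    print(f"Ea = 100 kJ/mol, A = 1e13: k ~ {k_high_barrier:.2e}")
    print(f"Ea =  40 kJ/mol, A = 1e6 : k ~ {k_small_A:.2e}")

With these assumed values the low-barrier step comes out slower by more than two orders of magnitude, purely because of its small pre-exponential factor.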
Although hydrogenation is endothermic, this energy can easily be applied by the reaction temperature (about 700 K). Economic and environmental aspects When first invented, the Haber process competed against another industrial process, the cyanamide process. However, the cyanamide process consumed large amounts of electrical power and was more labor-intensive than the Haber process. As of 2018, the Haber process produces 230 million tonnes of anhydrous ammonia per year. The ammonia is used mainly as a nitrogen fertilizer as ammonia itself, in the form of ammonium nitrate, and as urea. The Haber process consumes 3–5% of the world's natural gas production (around 1–2% of the world's energy supply). In combination with advances in breeding, herbicides, and pesticides, these fertilizers have helped to increase the productivity of agricultural land: The energy-intensity of the process contributes to climate change and other environmental problems such as the leaching of nitrates into groundwater, rivers, ponds, and lakes; expanding dead zones in coastal ocean waters, resulting from recurrent eutrophication; atmospheric deposition of nitrates and ammonia affecting natural ecosystems; higher emissions of nitrous oxide (N2O), now the third most important greenhouse gas following CO2 and CH4. The Haber–Bosch process is one of the largest contributors to a buildup of reactive nitrogen in the biosphere, causing an anthropogenic disruption to the nitrogen cycle. Since nitrogen use efficiency is typically less than 50%, farm runoff from heavy use of fixed industrial nitrogen disrupts biological habitats. Nearly 50% of the nitrogen found in human tissues originated from the Haber–Bosch process. Thus, the Haber process serves as the "detonator of the population explosion", enabling the global population to increase from 1.6 billion in 1900 to 7.7 billion by November 2018. Reverse fuel cell technology converts electric energy, water and nitrogen into ammonia without a separate hydrogen electrolysis process. The use of synthetic nitrogen fertilisers reduces the incentive for farmers to use more sustainable crop rotations which include legumes for their natural nitrogen-fixing ability. See also Crop rotation Legume References Sources External links , 29 July 1999. BASF – Fertilizer out of thin air Britannica guide to Nobel Prizes: Fritz Haber Haber Process for Ammonia Synthesis Haber–Bosch process, most important invention of the 20th century, according to V. Smil, Nature, 29 July 1999, p. 415 (by Jürgen Schmidhuber) Nobel e-Museum – Biography of Fritz Haber Uses and Production of Ammonia BASF Chemical processes Industrial processes Equilibrium chemistry Peak oil Catalysis History of mining in Chile German inventions Industrial gases Name reactions Fritz Haber 1909 in science 1909 in Germany
Haber process
[ "Chemistry" ]
7,081
[ "Catalysis", "Equilibrium chemistry", "Name reactions", "Chemical processes", "Industrial gases", "nan", "Chemical process engineering", "Chemical kinetics" ]
14,029
https://en.wikipedia.org/wiki/Histone
In biology, histones are highly basic proteins abundant in lysine and arginine residues that are found in eukaryotic cell nuclei and in most Archaeal phyla. They act as spools around which DNA winds to create structural units called nucleosomes. Nucleosomes in turn are wrapped into 30-nanometer fibers that form tightly packed chromatin. Histones prevent DNA from becoming tangled and protect it from DNA damage. In addition, histones play important roles in gene regulation and DNA replication. Without histones, unwound DNA in chromosomes would be very long. For example, each human cell has about 1.8 meters of DNA if completely stretched out; however, when wound about histones, this length is reduced to about 90 micrometers (0.09 mm) of 30 nm diameter chromatin fibers. There are five families of histones, which are designated H1/H5 (linker histones) and H2A, H2B, H3, and H4 (core histones). The nucleosome core is formed of two H2A-H2B dimers and a H3-H4 tetramer. The tight wrapping of DNA around histones is, to a large degree, a result of electrostatic attraction between the positively charged histones and the negatively charged phosphate backbone of DNA. Histones may be chemically modified through the action of enzymes to regulate gene transcription. The most common modifications are the methylation of arginine or lysine residues or the acetylation of lysine. Methylation can affect how other proteins such as transcription factors interact with the nucleosomes. Lysine acetylation eliminates a positive charge on lysine, thereby weakening the electrostatic attraction between histone and DNA, resulting in partial unwinding of the DNA and making it more accessible for gene expression. Classes and variants Five major families of histone proteins exist: H1/H5, H2A, H2B, H3, and H4. Histones H2A, H2B, H3 and H4 are known as the core or nucleosomal histones, while histones H1/H5 are known as the linker histones. The core histones all exist as dimers, which are similar in that they all possess the histone fold domain: three alpha helices linked by two loops. It is this helical structure that allows for interaction between distinct dimers, particularly in a head-tail fashion (also called the handshake motif). The resulting four distinct dimers then come together to form one octameric nucleosome core, approximately 63 Angstroms in diameter (a solenoid-like particle). Around 146 base pairs (bp) of DNA wrap around this core particle 1.65 times in a left-handed super-helical turn to give a particle of around 100 Angstroms across. The linker histone H1 binds the nucleosome at the entry and exit sites of the DNA, thus locking the DNA into place and allowing the formation of higher order structure. The most basic such formation is the 10 nm fiber or beads on a string conformation. This involves the wrapping of DNA around nucleosomes with approximately 50 base pairs of DNA separating each pair of nucleosomes (also referred to as linker DNA). Higher-order structures include the 30 nm fiber (forming an irregular zigzag) and 100 nm fiber, these being the structures found in normal cells. During mitosis and meiosis, the condensed chromosomes are assembled through interactions between nucleosomes and other regulatory proteins. Histones are subdivided into canonical replication-dependent histones, whose genes are expressed during the S-phase of the cell cycle, and replication-independent histone variants, which are expressed throughout the cell cycle. 
In mammals, genes encoding canonical histones are typically clustered along chromosomes in 4 different highly-conserved loci, lack introns and use a stem loop structure at the 3' end instead of a polyA tail. Genes encoding histone variants are usually not clustered, have introns and their mRNAs are regulated with polyA tails. Complex multicellular organisms typically have a higher number of histone variants providing a variety of different functions. Recent data are accumulating about the roles of diverse histone variants, highlighting the functional links between variants and the delicate regulation of organism development. Histone variant proteins from different organisms, their classification and variant-specific features can be found in the "HistoneDB 2.0 - Variants" database. Several pseudogenes with sequences very close to those of their respective functional ortholog genes have also been identified. The following is a list of human histone proteins, genes and pseudogenes: Structure The nucleosome core is formed of two H2A-H2B dimers and a H3-H4 tetramer, forming two nearly symmetrical halves by tertiary structure (C2 symmetry; one macromolecule is the mirror image of the other). The H2A-H2B dimers and H3-H4 tetramer also show pseudodyad symmetry. The 4 'core' histones (H2A, H2B, H3 and H4) are relatively similar in structure and are highly conserved through evolution, all featuring a 'helix turn helix turn helix' motif (a DNA-binding protein motif that recognizes specific DNA sequences). They also share the feature of long 'tails' on one end of the amino acid structure - this being the location of post-translational modification (see below). Archaeal histones contain only an H3-H4-like dimeric structure made out of a single type of unit. Such dimeric structures can stack into a tall superhelix ("hypernucleosome") onto which DNA coils in a manner similar to nucleosome spools. Only some archaeal histones have tails. The distance between the spools around which eukaryotic cells wind their DNA has been determined to range from 59 to 70 Å. In all, histones make five types of interactions with DNA: (1) salt bridges and hydrogen bonds between side chains of basic amino acids (especially lysine and arginine) and phosphate oxygens on DNA; (2) helix dipoles from the alpha-helices in H2B, H3, and H4, which cause a net positive charge to accumulate at the point of interaction with negatively charged phosphate groups on DNA; (3) hydrogen bonds between the DNA backbone and the amide group on the main chain of histone proteins; (4) nonpolar interactions between the histone and deoxyribose sugars on DNA; and (5) non-specific minor groove insertions of the H3 and H2B N-terminal tails into two minor grooves each on the DNA molecule. The highly basic nature of histones, aside from facilitating DNA-histone interactions, contributes to their water solubility. Histones are subject to post-translational modification by enzymes primarily on their N-terminal tails, but also in their globular domains. Such modifications include methylation, citrullination, acetylation, phosphorylation, SUMOylation, ubiquitination, and ADP-ribosylation. This affects their function in gene regulation. In general, genes that are active have less bound histone, while inactive genes are highly associated with histones during interphase. It also appears that the structure of histones has been evolutionarily conserved, as any deleterious mutations would be severely maladaptive. All histones have a highly positively charged N-terminus with many lysine and arginine residues. 
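The packaging figures quoted in the introduction (about 1.8 m of DNA per human cell, condensed to roughly 90 micrometres of 30 nm fibre) can be combined with the structural numbers above (about 146 bp wrapped per nucleosome core plus roughly 50 bp of linker DNA) into a rough order-of-magnitude estimate of nucleosome count and compaction. The only value not taken from this article is the assumed rise of about 0.34 nm per base pair of B-form DNA.

    # Back-of-the-envelope sketch using figures quoted in this article plus
    # one assumed textbook value (the ~0.34 nm rise per base pair of B-DNA).

    DNA_LENGTH_M = 1.8            # metres of DNA per cell (from the introduction)
    RISE_PER_BP_M = 0.34e-9       # assumed rise per base pair, not from this article
    BP_PER_NUCLEOSOME = 146 + 50  # core wrap plus linker DNA (from the text above)
    FIBRE_LENGTH_M = 90e-6        # ~90 micrometres of 30 nm fibre (introduction)

    base_pairs = DNA_LENGTH_M / RISE_PER_BP_M
    nucleosomes = base_pairs / BP_PER_NUCLEOSOME
    compaction = DNA_LENGTH_M / FIBRE_LENGTH_M

    print(f"implied base pairs per cell      ~ {base_pairs:.1e}")
    print(f"approximate nucleosome count     ~ {nucleosomes:.1e}")
    print(f"linear compaction in 30 nm fibre ~ {compaction:,.0f}-fold")

Under these assumptions the estimate comes out at a few tens of millions of nucleosomes per cell and a roughly twenty-thousand-fold linear compaction.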
Evolution and species distribution Core histones are found in the nuclei of eukaryotic cells and in most Archaeal phyla, but not in bacteria. The unicellular algae known as dinoflagellates were previously thought to be the only eukaryotes that completely lack histones, but later studies showed that their DNA still encodes histone genes. Unlike the core histones, homologs of the lysine-rich linker histone (H1) proteins are found in bacteria, otherwise known as nucleoprotein HC1/HC2. It has been proposed that core histone proteins are evolutionarily related to the helical part of the extended AAA+ ATPase domain, the C-domain, and to the N-terminal substrate recognition domain of Clp/Hsp100 proteins. Despite the differences in their topology, these three folds share a homologous helix-strand-helix (HSH) motif. It's also proposed that they may have evolved from ribosomal proteins (RPS6/RPS15), both being short and basic proteins. Archaeal histones may well resemble the evolutionary precursors to eukaryotic histones. Histone proteins are among the most highly conserved proteins in eukaryotes, emphasizing their important role in the biology of the nucleus. In contrast mature sperm cells largely use protamines to package their genomic DNA, most likely because this allows them to achieve an even higher packaging ratio. There are some variant forms in some of the major classes. They share amino acid sequence homology and core structural similarity to a specific class of major histones but also have their own feature that is distinct from the major histones. These minor histones usually carry out specific functions of the chromatin metabolism. For example, histone H3-like CENPA is associated with only the centromere region of the chromosome. Histone H2A variant H2A.Z is associated with the promoters of actively transcribed genes and also involved in the prevention of the spread of silent heterochromatin. Furthermore, H2A.Z has roles in chromatin for genome stability. Another H2A variant H2A.X is phosphorylated at S139 in regions around double-strand breaks and marks the region undergoing DNA repair. Histone H3.3 is associated with the body of actively transcribed genes. Function Compacting DNA strands Histones act as spools around which DNA winds. This enables the compaction necessary to fit the large genomes of eukaryotes inside cell nuclei: the compacted molecule is 40,000 times shorter than an unpacked molecule. Chromatin regulation Histones undergo posttranslational modifications that alter their interaction with DNA and nuclear proteins. The H3 and H4 histones have long tails protruding from the nucleosome, which can be covalently modified at several places. Modifications of the tail include methylation, acetylation, phosphorylation, ubiquitination, SUMOylation, citrullination, and ADP-ribosylation. The core of the histones H2A and H2B can also be modified. Combinations of modifications, known as histone marks, are thought to constitute a code, the so-called "histone code". Histone modifications act in diverse biological processes such as gene regulation, DNA repair, chromosome condensation (mitosis) and spermatogenesis (meiosis). The common nomenclature of histone modifications is: The name of the histone (e.g., H3) The single-letter amino acid abbreviation (e.g., K for Lysine) and the amino acid position in the protein The type of modification (Me: methyl, P: phosphate, Ac: acetyl, Ub: ubiquitin) The number of modifications (only Me is known to occur in more than one copy per residue. 
1, 2 or 3 is mono-, di- or tri-methylation) So H3K4me1 denotes the monomethylation of the 4th residue (a lysine) from the start (i.e., the N-terminal) of the H3 protein. Modification A huge catalogue of histone modifications have been described, but a functional understanding of most is still lacking. Collectively, it is thought that histone modifications may underlie a histone code, whereby combinations of histone modifications have specific meanings. However, most functional data concerns individual prominent histone modifications that are biochemically amenable to detailed study. Chemistry Lysine methylation The addition of one, two, or many methyl groups to lysine has little effect on the chemistry of the histone; methylation leaves the charge of the lysine intact and adds a minimal number of atoms so steric interactions are mostly unaffected. However, proteins containing Tudor, chromo or PHD domains, amongst others, can recognise lysine methylation with exquisite sensitivity and differentiate mono, di and tri-methyl lysine, to the extent that, for some lysines (e.g.: H4K20) mono, di and tri-methylation appear to have different meanings. Because of this, lysine methylation tends to be a very informative mark and dominates the known histone modification functions. Glutamine serotonylation Recently it has been shown, that the addition of a serotonin group to the position 5 glutamine of H3, happens in serotonergic cells such as neurons. This is part of the differentiation of the serotonergic cells. This post-translational modification happens in conjunction with the H3K4me3 modification. The serotonylation potentiates the binding of the general transcription factor TFIID to the TATA box. Arginine methylation What was said above of the chemistry of lysine methylation also applies to arginine methylation, and some protein domains—e.g., Tudor domains—can be specific for methyl arginine instead of methyl lysine. Arginine is known to be mono- or di-methylated, and methylation can be symmetric or asymmetric, potentially with different meanings. Arginine citrullination Enzymes called peptidylarginine deiminases (PADs) hydrolyze the imine group of arginines and attach a keto group, so that there is one less positive charge on the amino acid residue. This process has been involved in the activation of gene expression by making the modified histones less tightly bound to DNA and thus making the chromatin more accessible. PADs can also produce the opposite effect by removing or inhibiting mono-methylation of arginine residues on histones and thus antagonizing the positive effect arginine methylation has on transcriptional activity. Lysine acetylation Addition of an acetyl group has a major chemical effect on lysine as it neutralises the positive charge. This reduces electrostatic attraction between the histone and the negatively charged DNA backbone, loosening the chromatin structure; highly acetylated histones form more accessible chromatin and tend to be associated with active transcription. Lysine acetylation appears to be less precise in meaning than methylation, in that histone acetyltransferases tend to act on more than one lysine; presumably this reflects the need to alter multiple lysines to have a significant effect on chromatin structure. The modification includes H3K27ac. 
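The nomenclature summarised at the start of this Modification section (histone name, one-letter amino-acid code with its position, modification abbreviation, and an optional copy number for methylation) is regular enough to be parsed mechanically. The sketch below only illustrates that naming convention; it is not a tool from any cited database, and it covers only the abbreviations mentioned in this article.

    # Sketch of the histone-mark naming convention described above,
    # e.g. "H3K4me1": histone H3, lysine (K) at position 4, mono-methylation.
    import re

    AMINO_ACIDS = {"K": "lysine", "R": "arginine", "S": "serine",
                   "T": "threonine", "Y": "tyrosine", "Q": "glutamine"}
    MODS = {"me": "methylation", "ac": "acetylation",
            "p": "phosphorylation", "ub": "ubiquitination"}

    MARK = re.compile(r"^(H2A|H2B|H3|H4|H1)([A-Z])(\d+)(me|ac|ub|p)(\d)?$",
                      re.IGNORECASE)

    def parse_mark(mark):
        m = MARK.match(mark)
        if not m:
            raise ValueError(f"unrecognised histone mark: {mark}")
        histone, residue, position, mod, copies = m.groups()
        return {"histone": histone.upper(),
                "residue": AMINO_ACIDS.get(residue.upper(), residue),
                "position": int(position),
                "modification": MODS[mod.lower()],
                "copies": int(copies) if copies else 1}

    for example in ("H3K4me3", "H3K27ac", "H2AK119Ub", "H4K20me3"):
        print(example, "->", parse_mark(example))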
Serine/threonine/tyrosine phosphorylation Addition of a negatively charged phosphate group can lead to major changes in protein structure, leading to the well-characterised role of phosphorylation in controlling protein function. It is not clear what structural implications histone phosphorylation has, but histone phosphorylation has clear functions as a post-translational modification, and binding domains such as BRCT have been characterised. Effects on transcription Most well-studied histone modifications are involved in control of transcription. Actively transcribed genes Two histone modifications are particularly associated with active transcription: Trimethylation of H3 lysine 4 (H3K4me3) This trimethylation occurs at the promoter of active genes and is performed by the COMPASS complex. Despite the conservation of this complex and histone modification from yeast to mammals, it is not entirely clear what role this modification plays. However, it is an excellent mark of active promoters and the level of this histone modification at a gene's promoter is broadly correlated with transcriptional activity of the gene. The formation of this mark is tied to transcription in a rather convoluted manner: early in transcription of a gene, RNA polymerase II undergoes a switch from initiating' to 'elongating', marked by a change in the phosphorylation states of the RNA polymerase II C terminal domain (CTD). The same enzyme that phosphorylates the CTD also phosphorylates the Rad6 complex, which in turn adds a ubiquitin mark to H2B K123 (K120 in mammals). H2BK123Ub occurs throughout transcribed regions, but this mark is required for COMPASS to trimethylate H3K4 at promoters. Trimethylation of H3 lysine 36 (H3K36me3) This trimethylation occurs in the body of active genes and is deposited by the methyltransferase Set2. This protein associates with elongating RNA polymerase II, and H3K36Me3 is indicative of actively transcribed genes. H3K36Me3 is recognised by the Rpd3 histone deacetylase complex, which removes acetyl modifications from surrounding histones, increasing chromatin compaction and repressing spurious transcription. Increased chromatin compaction prevents transcription factors from accessing DNA, and reduces the likelihood of new transcription events being initiated within the body of the gene. This process therefore helps ensure that transcription is not interrupted. Repressed genes Three histone modifications are particularly associated with repressed genes: Trimethylation of H3 lysine 27 (H3K27me3) This histone modification is deposited by the polycomb complex PRC2. It is a clear marker of gene repression, and is likely bound by other proteins to exert a repressive function. Another polycomb complex, PRC1, can bind H3K27me3 and adds the histone modification H2AK119Ub which aids chromatin compaction. Based on this data it appears that PRC1 is recruited through the action of PRC2, however, recent studies show that PRC1 is recruited to the same sites in the absence of PRC2. Di and tri-methylation of H3 lysine 9 (H3K9me2/3) H3K9me2/3 is a well-characterised marker for heterochromatin, and is therefore strongly associated with gene repression. The formation of heterochromatin has been best studied in the yeast Schizosaccharomyces pombe, where it is initiated by recruitment of the RNA-induced transcriptional silencing (RITS) complex to double stranded RNAs produced from centromeric repeats. RITS recruits the Clr4 histone methyltransferase which deposits H3K9me2/3. 
This process is called histone methylation. H3K9Me2/3 serves as a binding site for the recruitment of Swi6 (heterochromatin protein 1 or HP1, another classic heterochromatin marker) which in turn recruits further repressive activities including histone modifiers such as histone deacetylases and histone methyltransferases. Trimethylation of H4 lysine 20 (H4K20me3) This modification is tightly associated with heterochromatin, although its functional importance remains unclear. This mark is placed by the Suv4-20h methyltransferase, which is at least in part recruited by heterochromatin protein 1. Bivalent promoters Analysis of histone modifications in embryonic stem cells (and other stem cells) revealed many gene promoters carrying both H3K4Me3 and H3K27Me3, in other words these promoters display both activating and repressing marks simultaneously. This peculiar combination of modifications marks genes that are poised for transcription; they are not required in stem cells, but are rapidly required after differentiation into some lineages. Once the cell starts to differentiate, these bivalent promoters are resolved to either active or repressive states depending on the chosen lineage. Other functions DNA damage repair Marking sites of DNA damage is an important function for histone modifications. Without a repair marker, DNA would get destroyed by damage accumulated from sources such as the ultraviolet radiation of the sun. Phosphorylation of H2AX at serine 139 (γH2AX) Phosphorylated H2AX (also known as gamma H2AX) is a marker for DNA double strand breaks, and forms part of the response to DNA damage. H2AX is phosphorylated early after detection of DNA double strand break, and forms a domain extending many kilobases either side of the damage. Gamma H2AX acts as a binding site for the protein MDC1, which in turn recruits key DNA repair proteins (this complex topic is well reviewed in) and as such, gamma H2AX forms a vital part of the machinery that ensures genome stability. Acetylation of H3 lysine 56 (H3K56Ac) H3K56Acx is required for genome stability. H3K56 is acetylated by the p300/Rtt109 complex, but is rapidly deacetylated around sites of DNA damage. H3K56 acetylation is also required to stabilise stalled replication forks, preventing dangerous replication fork collapses. Although in general mammals make far greater use of histone modifications than microorganisms, a major role of H3K56Ac in DNA replication exists only in fungi, and this has become a target for antibiotic development. Trimethylation of H3 lysine 36 (H3K36me3) H3K36me3 has the ability to recruit the MSH2-MSH6 (hMutSα) complex of the DNA mismatch repair pathway. Consistently, regions of the human genome with high levels of H3K36me3 accumulate less somatic mutations due to mismatch repair activity. Chromosome condensation Phosphorylation of H3 at serine 10 (phospho-H3S10) The mitotic kinase aurora B phosphorylates histone H3 at serine 10, triggering a cascade of changes that mediate mitotic chromosome condensation. Condensed chromosomes therefore stain very strongly for this mark, but H3S10 phosphorylation is also present at certain chromosome sites outside mitosis, for example in pericentric heterochromatin of cells during G2. H3S10 phosphorylation has also been linked to DNA damage caused by R-loop formation at highly transcribed sites. 
Phosphorylation H2B at serine 10/14 (phospho-H2BS10/14) Phosphorylation of H2B at serine 10 (yeast) or serine 14 (mammals) is also linked to chromatin condensation, but for the very different purpose of mediating chromosome condensation during apoptosis. This mark is not simply a late acting bystander in apoptosis as yeast carrying mutations of this residue are resistant to hydrogen peroxide-induced apoptotic cell death. Addiction Epigenetic modifications of histone tails in specific regions of the brain are of central importance in addictions. Once particular epigenetic alterations occur, they appear to be long lasting "molecular scars" that may account for the persistence of addictions. Cigarette smokers (about 15% of the US population) are usually addicted to nicotine. After 7 days of nicotine treatment of mice, acetylation of both histone H3 and histone H4 was increased at the FosB promoter in the nucleus accumbens of the brain, causing 61% increase in FosB expression. This would also increase expression of the splice variant Delta FosB. In the nucleus accumbens of the brain, Delta FosB functions as a "sustained molecular switch" and "master control protein" in the development of an addiction. About 7% of the US population is addicted to alcohol. In rats exposed to alcohol for up to 5 days, there was an increase in histone 3 lysine 9 acetylation in the pronociceptin promoter in the brain amygdala complex. This acetylation is an activating mark for pronociceptin. The nociceptin/nociceptin opioid receptor system is involved in the reinforcing or conditioning effects of alcohol. Methamphetamine addiction occurs in about 0.2% of the US population. Chronic methamphetamine use causes methylation of the lysine in position 4 of histone 3 located at the promoters of the c-fos and the C-C chemokine receptor 2 (ccr2) genes, activating those genes in the nucleus accumbens (NAc). c-fos is well known to be important in addiction. The ccr2 gene is also important in addiction, since mutational inactivation of this gene impairs addiction. Synthesis The first step of chromatin structure duplication is the synthesis of histone proteins: H1, H2A, H2B, H3, H4. These proteins are synthesized during S phase of the cell cycle. There are different mechanisms which contribute to the increase of histone synthesis. Yeast Yeast carry one or two copies of each histone gene, which are not clustered but rather scattered throughout chromosomes. Histone gene transcription is controlled by multiple gene regulatory proteins such as transcription factors which bind to histone promoter regions. In budding yeast, the candidate gene for activation of histone gene expression is SBF. SBF is a transcription factor that is activated in late G1 phase, when it dissociates from its repressor Whi5. This occurs when Whi5 is phosphorylated by Cdc8 which is a G1/S Cdk. Suppression of histone gene expression outside of S phases is dependent on Hir proteins which form inactive chromatin structure at the locus of histone genes, causing transcriptional activators to be blocked. Metazoan In metazoans the increase in the rate of histone synthesis is due to the increase in processing of pre-mRNA to its mature form as well as decrease in mRNA degradation; this results in an increase of active mRNA for translation of histone proteins. The mechanism for mRNA activation has been found to be the removal of a segment of the 3' end of the mRNA strand, and is dependent on association with stem-loop binding protein (SLBP). 
SLBP also stabilizes histone mRNAs during S phase by blocking degradation by the 3'hExo nuclease. SLBP levels are controlled by cell-cycle proteins, causing SLBP to accumulate as cells enter S phase and degrade as cells leave S phase. SLBP are marked for degradation by phosphorylation at two threonine residues by cyclin dependent kinases, possibly cyclin A/ cdk2, at the end of S phase. Metazoans also have multiple copies of histone genes clustered on chromosomes which are localized in structures called Cajal bodies as determined by genome-wide chromosome conformation capture analysis (4C-Seq). Link between cell-cycle control and synthesis Nuclear protein Ataxia-Telangiectasia (NPAT), also known as nuclear protein coactivator of histone transcription, is a transcription factor which activates histone gene transcription on chromosomes 1 and 6 of human cells. NPAT is also a substrate of cyclin E-Cdk2, which is required for the transition between G1 phase and S phase. NPAT activates histone gene expression only after it has been phosphorylated by the G1/S-Cdk cyclin E-Cdk2 in early S phase. This shows an important regulatory link between cell-cycle control and histone synthesis. History Histones were discovered in 1884 by Albrecht Kossel. The word "histone" dates from the late 19th century and is derived from the German word "Histon", a word itself of uncertain origin, perhaps from Ancient Greek ἵστημι (hístēmi, “make stand”) or ἱστός (histós, “loom”). In the early 1960s, before the types of histones were known and before histones were known to be highly conserved across taxonomically diverse organisms, James F. Bonner and his collaborators began a study of these proteins that were known to be tightly associated with the DNA in the nucleus of higher organisms. Bonner and his postdoctoral fellow Ru Chih C. Huang showed that isolated chromatin would not support RNA transcription in the test tube, but if the histones were extracted from the chromatin, RNA could be transcribed from the remaining DNA. Their paper became a citation classic. Paul T'so and James Bonner had called together a World Congress on Histone Chemistry and Biology in 1964, in which it became clear that there was no consensus on the number of kinds of histone and that no one knew how they would compare when isolated from different organisms. Bonner and his collaborators then developed methods to separate each type of histone, purified individual histones, compared amino acid compositions in the same histone from different organisms, and compared amino acid sequences  of the same histone from different organisms in collaboration with Emil Smith from UCLA. For example, they found Histone IV sequence to be highly conserved between peas and calf thymus. However, their work on the biochemical characteristics of individual histones did not reveal how the histones interacted with each other or with DNA to which they were tightly bound. Also in the 1960s, Vincent Allfrey and Alfred Mirsky had suggested, based on their analyses of histones, that acetylation and methylation of histones could provide a transcriptional control mechanism, but did not have available the kind of detailed analysis that later investigators were able to conduct to show how such regulation could be gene-specific. 
Until the early 1990s, histones were dismissed by most as inert packing material for eukaryotic nuclear DNA, a view based in part on the models of Mark Ptashne and others, who believed that transcription was activated by protein-DNA and protein-protein interactions on largely naked DNA templates, as is the case in bacteria. During the 1980s, Yahli Lorch and Roger Kornberg showed that a nucleosome on a core promoter prevents the initiation of transcription in vitro, and Michael Grunstein demonstrated that histones repress transcription in vivo, leading to the idea of the nucleosome as a general gene repressor. Relief from repression is believed to involve both histone modification and the action of chromatin-remodeling complexes. Vincent Allfrey and Alfred Mirsky had earlier proposed a role of histone modification in transcriptional activation, regarded as a molecular manifestation of epigenetics. Michael Grunstein and David Allis found support for this proposal, in the importance of histone acetylation for transcription in yeast and the activity of the transcriptional activator Gcn5 as a histone acetyltransferase. The discovery of the H5 histone appears to date back to the 1970s, and it is now considered an isoform of Histone H1. See also Histone variants Chromatin Gene silencing Genetics Histone acetyltransferase Histone deacetylases Histone methyltransferase Histone-modifying enzymes Nucleosome PRMT4 pathway Protamine Histone H1 References External links HistoneDB 2.0 - Database of histones and variants at NCBI Chromatin, Histones & Cathepsin; PMAP The Proteolysis Map-animation Epigenetics Proteins DNA-binding proteins
Histone
[ "Chemistry" ]
6,713
[ "Proteins", "Biomolecules by chemical classification", "Molecular biology" ]
14,073
https://en.wikipedia.org/wiki/Hydropower
Hydropower (from Ancient Greek ὑδρο-, "water"), also known as water power or water energy, is the use of falling or fast-running water to produce electricity or to power machines. This is achieved by converting the gravitational potential or kinetic energy of a water source to produce power. Hydropower is a method of sustainable energy production. Hydropower is now used principally for hydroelectric power generation, and is also applied as one half of an energy storage system known as pumped-storage hydroelectricity. Hydropower is an attractive alternative to fossil fuels as it does not directly produce carbon dioxide or other atmospheric pollutants and it provides a relatively consistent source of power. Nonetheless, it has economic, sociological, and environmental downsides and requires a sufficiently energetic source of water, such as a river or elevated lake. International institutions such as the World Bank view hydropower as a low-carbon means for economic development. Since ancient times, hydropower from watermills has been used as a renewable energy source for irrigation and the operation of mechanical devices, such as gristmills, sawmills, textile mills, trip hammers, dock cranes, domestic lifts, and ore mills. A trompe, which produces compressed air from falling water, is sometimes used to power other machinery at a distance. Calculating the amount of available power A hydropower resource can be evaluated by its available power. Power is a function of the hydraulic head and volumetric flow rate. The head is the energy per unit weight (or unit mass) of water. The static head is proportional to the difference in height through which the water falls. Dynamic head is related to the velocity of moving water. Each unit of water can do an amount of work equal to its weight times the head. The power available from falling water can be calculated from the flow rate and density of water, the height of fall, and the local acceleration due to gravity: P = η · ṁ · g · Δh = η · ρ · Q · g · Δh, where P (the work flow rate out) is the useful power output (SI unit: watts), η ("eta") is the efficiency of the turbine (dimensionless), ṁ is the mass flow rate (SI unit: kilograms per second), ρ ("rho") is the density of water (SI unit: kilograms per cubic metre), Q is the volumetric flow rate (SI unit: cubic metres per second), g is the acceleration due to gravity (SI unit: metres per second per second), and Δh ("Delta h") is the difference in height between the inlet and outlet (SI unit: metres). To illustrate, the power output of a turbine that is 85% efficient, with a flow rate of 80 cubic metres per second (2800 cubic feet per second) and a head of , is 97 megawatts (a worked check of this figure is sketched below). Operators of hydroelectric stations compare the total electrical energy produced with the theoretical potential energy of the water passing through the turbine to calculate efficiency. Procedures and definitions for calculation of efficiency are given in test codes such as ASME PTC 18 and IEC 60041. Field testing of turbines is used to validate the manufacturer's efficiency guarantee. Detailed calculation of the efficiency of a hydropower turbine accounts for the head lost due to flow friction in the power canal or penstock, rise in tailwater level due to flow, the location of the station and effect of varying gravity, the air temperature and barometric pressure, the density of the water at ambient temperature, and the relative altitudes of the forebay and tailbay. For precise calculations, errors due to rounding and the number of significant digits of constants must be considered. 
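As a worked check of the illustration above, the sketch below evaluates P = η · ρ · Q · g · Δh for an 85% efficient turbine passing 80 cubic metres per second. The head is not stated explicitly in the text, so the script back-calculates the head implied by the quoted 97 megawatts (it comes out near 145 m); the water density and gravitational acceleration are standard values.

    # Worked check of P = eta * rho * Q * g * delta_h.
    # eta, Q and the 97 MW figure come from the example in the text; the head
    # is back-calculated because it is not stated explicitly. rho and g are
    # standard values.

    RHO_WATER = 1000.0   # kg/m^3, fresh water
    G = 9.81             # m/s^2

    def hydro_power_watts(eta, flow_m3_s, head_m, rho=RHO_WATER, g=G):
        """Useful power output in watts."""
        return eta * rho * flow_m3_s * g * head_m

    eta = 0.85            # 85% efficient turbine
    flow = 80.0           # m^3/s
    target_power = 97e6   # W, the figure quoted in the text

    head = target_power / (eta * RHO_WATER * flow * G)   # implied head
    print(f"implied head ~ {head:.0f} m")
    print(f"power at that head ~ {hydro_power_watts(eta, flow, head) / 1e6:.0f} MW")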
Some hydropower systems such as water wheels can draw power from the flow of a body of water without necessarily changing its height. In this case, the available power is the kinetic energy of the flowing water. Over-shot water wheels can efficiently capture both types of energy. The flow in a stream can vary widely from season to season. The development of a hydropower site requires analysis of flow records, sometimes spanning decades, to assess the reliable annual energy supply. Dams and reservoirs provide a more dependable source of power by smoothing seasonal changes in water flow. However, reservoirs have a significant environmental impact, as does alteration of naturally occurring streamflow. Dam design must account for the worst-case, "probable maximum flood" that can be expected at the site; a spillway is often included to route flood flows around the dam. A computer model of the hydraulic basin and rainfall and snowfall records are used to predict the maximum flood. Disadvantages and limitations Some disadvantages of hydropower have been identified. Dam failures can have catastrophic effects, including loss of life, property and pollution of land. Dams and reservoirs can have major negative impacts on river ecosystems such as preventing some animals traveling upstream, cooling and de-oxygenating of water released downstream, and loss of nutrients due to settling of particulates. River sediment builds river deltas and dams prevent them from restoring what is lost from erosion. Furthermore, studies found that the construction of dams and reservoirs can result in habitat loss for some aquatic species.Large and deep dam and reservoir plants cover large areas of land which causes greenhouse gas emissions from underwater rotting vegetation. Furthermore, although at lower levels than other renewable energy sources, it was found that hydropower produces methane equivalent to almost a billion tonnes of CO2 greenhouse gas a year. This occurs when organic matters accumulate at the bottom of the reservoir because of the deoxygenation of water which triggers anaerobic digestion. People who live near a hydro plant site are displaced during construction or when reservoir banks become unstable. Another potential disadvantage is cultural or religious sites may block construction. Applications Mechanical power Watermills Compressed air A plentiful head of water can be made to generate compressed air directly without moving parts. In these designs, a falling column of water is deliberately mixed with air bubbles generated through turbulence or a venturi pressure reducer at the high-level intake. This allows it to fall down a shaft into a subterranean, high-roofed chamber where the now-compressed air separates from the water and becomes trapped. The height of the falling water column maintains compression of the air in the top of the chamber, while an outlet, submerged below the water level in the chamber allows water to flow back to the surface at a lower level than the intake. A separate outlet in the roof of the chamber supplies the compressed air. A facility on this principle was built on the Montreal River at Ragged Shutes near Cobalt, Ontario, in 1910 and supplied 5,000 horsepower to nearby mines. Electricity Hydroelectricity is the biggest hydropower application. Hydroelectricity generates about 15% of global electricity and provides at least 50% of the total electricity supply for more than 35 countries. 
In 2021, global installed hydropower electrical capacity reached almost 1400 GW, the highest among all renewable energy technologies. Hydroelectricity generation starts with converting either the potential energy of water that is present due to the site's elevation or the kinetic energy of moving water into electrical energy. Hydroelectric power plants vary in terms of the way they harvest energy. One type involves a dam and a reservoir. The water in the reservoir is available on demand to be used to generate electricity by passing through channels that connect the dam to the reservoir. The water spins a turbine, which is connected to the generator that produces electricity. The other type is called a run-of-river plant. In this case, a barrage is built to control the flow of water, absent a reservoir. The run-of-river power plant needs continuous water flow and therefore has less ability to provide power on demand. The kinetic energy of flowing water is the main source of energy. Both designs have limitations. For example, dam construction can result in discomfort to nearby residents. The dam and reservoirs occupy a relatively large amount of space that may be opposed by nearby communities. Moreover, reservoirs can potentially have major environmental consequences such as harming downstream habitats. On the other hand, the limitation of the run-of-river project is the decreased efficiency of electricity generation because the process depends on the speed of the seasonal river flow. This means that the rainy season increases electricity generation compared to the dry season. The size of hydroelectric plants can vary from small plants called micro hydro, to large plants that supply power to a whole country. As of 2019, the five largest power stations in the world are conventional hydroelectric power stations with dams. Hydroelectricity can also be used to store energy in the form of potential energy between two reservoirs at different heights with pumped-storage. Water is pumped uphill into reservoirs during periods of low demand to be released for generation when demand is high or system generation is low. Other forms of electricity generation with hydropower include tidal stream generators, which use the energy of tidal flows in oceans, rivers, and human-made canal systems to generate electricity. Rain power Rain has been referred to as "one of the last unexploited energy sources in nature. When it rains, billions of litres of water can fall, which have an enormous electric potential if used in the right way." Research is being done into the different methods of generating power from rain, such as by using the energy in the impact of raindrops. This is in its very early stages with new and emerging technologies being tested, prototyped and created. Such power has been called rain power. One method in which this has been attempted is by using hybrid solar panels called "all-weather solar panels" that can generate electricity from both the sun and the rain. According to zoologist and science and technology educator Luis Villazon, "A 2008 French study estimated that you could use piezoelectric devices, which generate power when they move, to extract 12 milliwatts from a raindrop. Over a year, this would amount to less than 0.001 kWh per square metre – enough to power a remote sensor." Villazon suggested a better application would be to collect the water from fallen rain and use it to drive a turbine, with an estimated energy generation of 3 kWh of energy per year for a 185 m2 roof. 
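Villazon's roof figure can be checked with a rough order-of-magnitude calculation. The sketch below is illustrative only and not from the original text: the annual rainfall (about 1 metre per year), the usable head (about 10 metres) and the small-turbine efficiency (about 60%) are assumptions chosen to make the estimate, not figures from the article.

```python
# Rough check of the ~3 kWh/year figure for rainwater collected from a 185 m^2 roof.
# Assumed inputs (not from the article): 1 m/year rainfall, 10 m head, 60% efficiency.

RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

roof_area_m2 = 185.0
annual_rainfall_m = 1.0   # assumption
head_m = 10.0             # assumption: drop from rooftop gutter to turbine
efficiency = 0.6          # assumption: small-turbine efficiency

mass_kg = RHO_WATER * roof_area_m2 * annual_rainfall_m   # ~185,000 kg of water per year
energy_joules = efficiency * mass_kg * G * head_m
energy_kwh = energy_joules / 3.6e6
print(f"{energy_kwh:.1f} kWh per year")  # ~3 kWh/year, consistent with the estimate quoted above
```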
A microturbine-based system created by three students from the Technological University of Mexico has been used to generate electricity. The Pluvia system "uses the stream of rainwater runoff from houses' rooftop rain gutters to spin a microturbine in a cylindrical housing. Electricity generated by that turbine is used to charge 12-volt batteries." The term rain power has also been applied to hydropower systems which include the process of capturing the rain. History Ancient history Evidence suggests that the fundamentals of hydropower date to ancient Greek civilization. Other evidence indicates that the waterwheel independently emerged in China around the same period. Evidence of water wheels and watermills dates to the ancient Near East in the 4th century BC. Moreover, evidence indicates the use of hydropower for irrigation machines in ancient civilizations such as Sumer and Babylonia. Studies suggest that the water wheel was the initial form of water power and it was driven by either humans or animals. In the Roman Empire, water-powered mills were described by Vitruvius by the first century BC. The Barbegal mill, located in modern-day France, had 16 water wheels processing up to 28 tons of grain per day. Roman waterwheels were also used for sawing marble, such as the Hierapolis sawmill of the late 3rd century AD. Such sawmills had a waterwheel that drove two crank-and-connecting rods to power two saws. The same mechanism also appears in two 6th century Eastern Roman sawmills excavated at Ephesus and Gerasa respectively. The crank and connecting rod mechanism of these Roman watermills converted the rotary motion of the waterwheel into the linear movement of the saw blades. Water-powered trip hammers and bellows in China, during the Han dynasty (202 BC – 220 AD), were initially thought to be powered by water scoops. However, some historians suggested that they were powered by waterwheels. This is because it was theorized that water scoops would not have had the motive force to operate their blast furnace bellows. Many texts describe the Hun waterwheel; some of the earliest ones are the Jijiupian dictionary of 40 BC, Yang Xiong's text known as the Fangyan of 15 BC, as well as Xin Lun, written by Huan Tan about 20 AD. It was also during this time that the engineer Du Shi (c. AD 31) applied the power of waterwheels to piston-bellows in forging cast iron. Ancient Indian texts dating back to the 4th century BC refer to the term cakkavattaka (turning wheel), which commentaries explain as arahatta-ghati-yanta (machine with wheel-pots attached); however, whether this was water-powered or hand-powered is disputed by scholars. According to Greek sources, India received Roman water mills and baths in the early 4th century AD. Dams, spillways, reservoirs, channels, and water balance would develop in India during the Mauryan, Gupta and Chola empires. Another example of the early use of hydropower is seen in hushing, a historic method of mining that uses a flood or torrent of water to reveal mineral veins. The method was first used at the Dolaucothi Gold Mines in Wales from 75 AD onwards. This method was further developed in Spain in mines such as Las Médulas. Hushing was also widely used in Britain in the Medieval and later periods to extract lead and tin ores. It later evolved into hydraulic mining when used during the California Gold Rush in the 19th century. The Islamic Empire spanned a large region, mainly in Asia and Africa, along with other surrounding areas. 
During the Islamic Golden Age and the Arab Agricultural Revolution (8th–13th centuries), hydropower was widely used and developed. Early uses of tidal power emerged along with large hydraulic factory complexes. A wide range of water-powered industrial mills were used in the region including fulling mills, gristmills, paper mills, hullers, sawmills, ship mills, stamp mills, steel mills, sugar mills, and tide mills. By the 11th century, every province throughout the Islamic Empire had these industrial mills in operation, from Al-Andalus and North Africa to the Middle East and Central Asia. Muslim engineers also used water turbines while employing gears in watermills and water-raising machines. They also pioneered the use of dams as a source of water power, used to provide additional power to watermills and water-raising machines. Islamic irrigation techniques, including Persian wheels, would be introduced to India, and would be combined with local methods, during the Delhi Sultanate and the Mughal Empire. Furthermore, in his book, The Book of Knowledge of Ingenious Mechanical Devices, the Muslim mechanical engineer Al-Jazari (1136–1206) described designs for 50 devices. Many of these devices were water-powered, including clocks, a device to serve wine, and five devices to lift water from rivers or pools, where three of them are animal-powered and one can be powered by animal or water. Moreover, they included an endless belt with jugs attached, a cow-powered shadoof (a crane-like irrigation tool), and a reciprocating device with hinged valves. 19th century In the 19th century, French engineer Benoît Fourneyron developed the first hydropower turbine. This device was implemented in the commercial plant of Niagara Falls in 1895 and it is still operating. In the late 19th century, English engineer William Armstrong built and operated the first private electrical power station, which was located at his house, Cragside, in Northumberland, England. Earlier, in 1753, the French engineer Bernard Forest de Bélidor had published his book, Architecture Hydraulique, which described vertical-axis and horizontal-axis hydraulic machines. The growing demands of the Industrial Revolution would drive development as well. At the beginning of the Industrial Revolution in Britain, water was the main power source for new inventions such as Richard Arkwright's water frame. Although water power gave way to steam power in many of the larger mills and factories, it was still used during the 18th and 19th centuries for many smaller operations, such as driving the bellows in small blast furnaces (e.g. the Dyfi Furnace) and gristmills, such as those built at Saint Anthony Falls, which uses the drop in the Mississippi River. Technological advances moved the open water wheel into an enclosed turbine or water motor. In 1848, the British-American engineer James B. Francis, head engineer of Lowell's Locks and Canals company, improved on these designs to create a turbine with 90% efficiency. He applied scientific principles and testing methods to the problem of turbine design. His mathematical and graphical calculation methods allowed the confident design of high-efficiency turbines to exactly match a site's specific flow conditions. The Francis reaction turbine is still in use. In the 1870s, deriving from uses in the California mining industry, Lester Allan Pelton developed the high-efficiency Pelton wheel impulse turbine, which used hydropower from the high head streams characteristic of the Sierra Nevada. 
20th century The modern history of hydropower begins in the 1900s, with large dams built not simply to power neighboring mills or factories but provide extensive electricity for increasingly distant groups of people. Competition drove much of the global hydroelectric craze: Europe competed amongst itself to electrify first, and the United States' hydroelectric plants in Niagara Falls and the Sierra Nevada inspired bigger and bolder creations across the globe. American and USSR financers and hydropower experts also spread the gospel of dams and hydroelectricity across the globe during the Cold War, contributing to projects such as the Three Gorges Dam and the Aswan High Dam. Feeding desire for large scale electrification with water inherently required large dams across powerful rivers, which impacted public and private interests downstream and in flood zones. Inevitably smaller communities and marginalized groups suffered. They were unable to successfully resist companies flooding them out of their homes or blocking traditional salmon passages. The stagnant water created by hydroelectric dams provides breeding ground for pests and pathogens, leading to local epidemics. However, in some cases, a mutual need for hydropower could lead to cooperation between otherwise adversarial nations. Hydropower technology and attitude began to shift in the second half of the 20th century. While countries had largely abandoned their small hydropower systems by the 1930s, the smaller hydropower plants began to make a comeback in the 1970s, boosted by government subsidies and a push for more independent energy producers. Some politicians who once advocated for large hydropower projects in the first half of the 20th century began to speak out against them, and citizen groups organizing against dam projects increased. In the 1980s and 90s the international anti-dam movement had made finding government or private investors for new large hydropower projects incredibly difficult, and given rise to NGOs devoted to fighting dams. Additionally, while the cost of other energy sources fell, the cost of building new hydroelectric dams increased 4% annually between 1965 and 1990, due both to the increasing costs of construction and to the decrease in high quality building sites. In the 1990s, only 18% of the world's electricity came from hydropower. Tidal power production also emerged in the 1960s as a burgeoning alternative hydropower system, though still has not taken hold as a strong energy contender. United States Especially at the start of the American hydropower experiment, engineers and politicians began major hydroelectricity projects to solve a problem of 'wasted potential' rather than to power a population that needed the electricity. When the Niagara Falls Power Company began looking into damming Niagara, the first major hydroelectric project in the United States, in the 1890s they struggled to transport electricity from the falls far enough away to actually reach enough people and justify installation. The project succeeded in large part due to Nikola Tesla's invention of the alternating current motor. On the other side of the country, San Francisco engineers, the Sierra Club, and the federal government fought over acceptable use of the Hetch Hetchy Valley. Despite ostensible protection within a national park, city engineers successfully won the rights to both water and power in the Hetch Hetchy Valley in 1913. 
After their victory they delivered Hetch Hetchy hydropower and water to San Francisco a decade later and at twice the promised cost, selling power to PG&E which resold to San Francisco residents at a profit. The American West, with its mountain rivers and lack of coal, turned to hydropower early and often, especially along the Columbia River and its tributaries. The Bureau of Reclamation built the Hoover Dam in 1931, symbolically linking the job creation and economic growth priorities of the New Deal. The federal government quickly followed Hoover with the Shasta Dam and Grand Coulee Dam. Power demand in Oregon did not justify damming the Columbia until WWI revealed the weaknesses of a coal-based energy economy. The federal government then began prioritizing interconnected power—and lots of it. Electricity from all three dams poured into war production during WWII. After the war, the Grand Coulee Dam and accompanying hydroelectric projects electrified almost all of the rural Columbia Basin, but failed to improve the lives of those living and farming there the way its boosters had promised and also damaged the river ecosystem and migrating salmon populations. In the 1940s as well, the federal government took advantage of the sheer amount of unused power and flowing water from the Grand Coulee to build a nuclear site placed on the banks of the Columbia. The nuclear site leaked radioactive matter into the river, contaminating the entire area. Post-WWII Americans, especially engineers from the Tennessee Valley Authority, refocused from simply building domestic dams to promoting hydropower abroad. While domestic dam building continued well into the 1970s, with the Reclamation Bureau and Army Corps of Engineers building more than 150 new dams across the American West, organized opposition to hydroelectric dams sparked up in the 1950s and 60s based on environmental concerns. Environmental movements successfully shut down proposed hydropower dams in Dinosaur National Monument and the Grand Canyon, and gained more hydropower-fighting tools with 1970s environmental legislation. As nuclear and fossil fuels grew in the 70s and 80s and environmental activists push for river restoration, hydropower gradually faded in American importance. Africa Foreign powers and IGOs have frequently used hydropower projects in Africa as a tool to interfere in the economic development of African countries, such as the World Bank with the Kariba and Akosombo Dams, and the Soviet Union with the Aswan Dam. The Nile River especially has borne the consequences of countries both along the Nile and distant foreign actors using the river to expand their economic power or national force. After the British occupation of Egypt in 1882, the British worked with Egypt to construct the first Aswan Dam, which they heightened in 1912 and 1934 to try to hold back the Nile floods. Egyptian engineer Adriano Daninos developed a plan for the Aswan High Dam, inspired by the Tennessee Valley Authority's multipurpose dam. When Gamal Abdel Nasser took power in the 1950s, his government decided to undertake the High Dam project, publicizing it as an economic development project. After American refusal to help fund the dam, and anti-British sentiment in Egypt and British interests in neighboring Sudan combined to make the United Kingdom pull out as well, the Soviet Union funded the Aswan High Dam. Between 1977 and 1990 the dam's turbines generated one third of Egypt's electricity. 
The building of the Aswan Dam triggered a dispute between Sudan and Egypt over the sharing of the Nile, especially since the dam flooded part of Sudan and decreased the volume of water available to them. Ethiopia, also located on the Nile, took advantage of the Cold War tensions to request assistance from the United States for their own irrigation and hydropower investments in the 1960s. While progress stalled due to the coup d'état of 1974 and following 17-year-long Ethiopian Civil War Ethiopia began construction on the Grand Ethiopian Renaissance Dam in 2011. Beyond the Nile, hydroelectric projects cover the rivers and lakes of Africa. The Inga powerplant on the Congo River had been discussed since Belgian colonization in the late 19th century, and was successfully built after independence. Mobutu's government failed to regularly maintain the plants and their capacity declined until the 1995 formation of the Southern African Power Pool created a multi-national power grid and plant maintenance program. States with an abundance of hydropower, such as the Democratic Republic of the Congo and Ghana, frequently sell excess power to neighboring countries. Foreign actors such as Chinese hydropower companies have proposed a significant amount of new hydropower projects in Africa, and already funded and consulted on many others in countries like Mozambique and Ghana. Small hydropower also played an important role in early 20th century electrification across Africa. In South Africa, small turbines powered gold mines and the first electric railway in the 1890s, and Zimbabwean farmers installed small hydropower stations in the 1930s. While interest faded as national grids improved in the second half of the century, 21st century national governments in countries including South Africa and Mozambique, as well as NGOs serving countries like Zimbabwe, have begun re-exploring small-scale hydropower to diversify power sources and improve rural electrification. Europe In the early 20th century, two major factors motivated the expansion of hydropower in Europe: in the northern countries of Norway and Sweden, high rainfall and mountains proved exceptional resources for abundant hydropower, and in the south, coal shortages pushed governments and utility companies to seek alternative power sources. Early on, Switzerland dammed the Alpine rivers and the Swiss Rhine, creating, along with Italy and Scandinavia, a Southern Europe hydropower race. In Italy's Po Valley, the main 20th-century transition was not the creation of hydropower but the transition from mechanical to electrical hydropower. 12,000 watermills churned in the Po watershed in the 1890s, but the first commercial hydroelectric plant, completed in 1898, signaled the end of the mechanical reign. These new large plants moved power away from rural mountainous areas to urban centers in the lower plain. Italy prioritized early near-nationwide electrification, almost entirely from hydropower, which powered its rise as a dominant European and imperial force. However, they failed to reach any conclusive standard for determining water rights before WWI. Modern German hydropower dam construction was built on a history of small dams powering mines and mills in the 15th century. Some parts of the German industry relied more on waterwheels than steam until the 1870s. The German government did not set out building large dams such as the prewar Urft, Mohne, and Eder dams to expand hydropower: they mostly wanted to reduce flooding and improve navigation. 
However, hydropower quickly emerged as a bonus for all these dams, especially in the coal-poor south. Bavaria even achieved a statewide power grid by damming the Walchensee in 1924, inspired in part by loss of coal reserves after WWI. Hydropower became a symbol of regional pride and distaste for northern 'coal barons', although the north also held strong enthusiasm for hydropower. Dam building rapidly increased after WWII, aiming to increase hydropower. However, conflict accompanied the dam building and spread of hydropower: agrarian interests suffered from decreased irrigation, small mills lost water flow, and different interest groups fought over where dams should be located, controlling who benefited and whose homes they drowned. See also Deep water source cooling Energy conversion efficiency Gravitation water vortex power plant Hydraulic ram Hydropower Sustainability Assessment Protocol International Hydropower Association Low-head hydro power Marine current power Marine energy Ocean thermal energy conversion Osmotic power Pumped-storage hydroelectricity Run-of-the-river hydroelectricity Tidal power Tidal stream generator Wave power Notes References Sources External links International Hydropower Association International Centre for Hydropower (ICH) hydropower portal with links to numerous organizations related to hydropower worldwide IEC TC 4: Hydraulic turbines (International Electrotechnical Commission – Technical Committee 4) IEC TC 4 portal with access to scope, documents and TC 4 website Micro-hydro power, Adam Harvey, 2004, Intermediate Technology Development Group. Retrieved 1 January 2005 Microhydropower Systems, US Department of Energy, Energy Efficiency and Renewable Energy, 2005 Power station technology Energy conversion Hydraulic engineering Sustainable technologies
Hydropower
[ "Physics", "Engineering", "Environmental_science" ]
5,709
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Hydraulic engineering" ]
14,136
https://en.wikipedia.org/wiki/Hydrophobe
In chemistry, hydrophobicity is the chemical property of a molecule (called a hydrophobe) that is seemingly repelled from a mass of water. In contrast, hydrophiles are attracted to water. Hydrophobic molecules tend to be nonpolar and, thus, prefer other neutral molecules and nonpolar solvents. Because water molecules are polar, hydrophobes do not dissolve well among them. Hydrophobic molecules in water often cluster together, forming micelles. Water on hydrophobic surfaces will exhibit a high contact angle. Examples of hydrophobic molecules include the alkanes, oils, fats, and greasy substances in general. Hydrophobic materials are used for oil removal from water, the management of oil spills, and chemical separation processes to remove non-polar substances from polar compounds. The term hydrophobic—which comes from the Ancient Greek for "having a fear of water"—is often used interchangeably with lipophilic, "fat-loving". However, the two terms are not synonymous. While hydrophobic substances are usually lipophilic, there are exceptions, such as the silicones and fluorocarbons. Chemical background The hydrophobic interaction is mostly an entropic effect originating from the disruption of the highly dynamic hydrogen bonds between molecules of liquid water by the nonpolar solute, causing the water to form a clathrate-like structure around the non-polar molecules. This structure is more highly ordered than free water molecules, because the water molecules arrange themselves to interact as much as possible with one another, and so corresponds to a lower-entropy state. Non-polar molecules therefore clump together to reduce the surface area exposed to water, which releases some of this ordered water and increases the entropy of the system. Thus, the two immiscible phases (hydrophilic vs. hydrophobic) will change so that their corresponding interfacial area will be minimal. This effect can be visualized in the phenomenon called phase separation. Superhydrophobicity Superhydrophobic surfaces, such as the leaves of the lotus plant, are those that are extremely difficult to wet. The contact angle of a water droplet on such a surface exceeds 150°. This is referred to as the lotus effect, and is primarily a physical property related to interfacial tension, rather than a chemical property. Theory In 1805, Thomas Young defined the contact angle θ by analyzing the forces acting on a fluid droplet resting on a solid surface surrounded by a gas: $\gamma_{SG} = \gamma_{SL} + \gamma_{LG}\cos\theta$, where $\gamma_{SG}$ is the interfacial tension between the solid and gas, $\gamma_{SL}$ is the interfacial tension between the solid and liquid, and $\gamma_{LG}$ is the interfacial tension between the liquid and gas. θ can be measured using a contact angle goniometer. Wenzel determined that when the liquid is in intimate contact with a microstructured surface, θ will change to θW*, given by $\cos\theta_{W}^{*} = r\cos\theta$, where r is the ratio of the actual area to the projected area. Wenzel's equation shows that microstructuring a surface amplifies the natural tendency of the surface. A hydrophobic surface (one that has an original contact angle greater than 90°) becomes more hydrophobic when microstructured – its new contact angle becomes greater than the original. However, a hydrophilic surface (one that has an original contact angle less than 90°) becomes more hydrophilic when microstructured – its new contact angle becomes less than the original. Cassie and Baxter found that if the liquid is suspended on the tops of microstructures, θ will change to θCB*: $\cos\theta_{CB}^{*} = \varphi(\cos\theta + 1) - 1$, where φ is the area fraction of the solid that touches the liquid. Liquid in the Cassie–Baxter state is more mobile than in the Wenzel state. 
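The Wenzel and Cassie–Baxter relations above lend themselves to a short numerical illustration. The sketch below is not part of the original text; the intrinsic contact angle, roughness ratio and solid fraction are made-up example values used only to show how the apparent angles are obtained from the two equations.

```python
# Illustrative sketch of the Wenzel and Cassie-Baxter apparent contact angles.
# Example inputs (intrinsic angle, roughness r, solid fraction phi) are assumptions.
import math

def wenzel_angle(theta_deg: float, r: float) -> float:
    """Apparent contact angle for intimate contact: cos(theta_W*) = r * cos(theta)."""
    c = r * math.cos(math.radians(theta_deg))
    c = max(-1.0, min(1.0, c))  # clamp: beyond this range the model saturates at 0 or 180 degrees
    return math.degrees(math.acos(c))

def cassie_baxter_angle(theta_deg: float, phi: float) -> float:
    """Apparent angle for a droplet suspended on microstructure tops:
    cos(theta_CB*) = phi * (cos(theta) + 1) - 1."""
    c = phi * (math.cos(math.radians(theta_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(c))

theta = 110.0  # assumed intrinsic contact angle of the flat material, degrees
r = 1.8        # assumed roughness ratio (actual area / projected area)
phi = 0.2      # assumed area fraction of solid in contact with the liquid

print(f"Wenzel:        {wenzel_angle(theta, r):.1f} deg")   # roughening amplifies hydrophobicity (~128 deg)
print(f"Cassie-Baxter: {cassie_baxter_angle(theta, phi):.1f} deg")  # suspended state (~150 deg)
```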
We can predict whether the Wenzel or Cassie–Baxter state should exist by calculating the new contact angle with both equations. By a minimization of free energy argument, the relation that predicted the smaller new contact angle is the state most likely to exist. Stated in mathematical terms, for the Cassie–Baxter state to exist, the following inequality must be true. A recent alternative criterion for the Cassie–Baxter state asserts that the Cassie–Baxter state exists when the following 2 criteria are met:1) Contact line forces overcome body forces of unsupported droplet weight and 2) The microstructures are tall enough to prevent the liquid that bridges microstructures from touching the base of the microstructures. A new criterion for the switch between Wenzel and Cassie-Baxter states has been developed recently based on surface roughness and surface energy. The criterion focuses on the air-trapping capability under liquid droplets on rough surfaces, which could tell whether Wenzel's model or Cassie-Baxter's model should be used for certain combination of surface roughness and energy. Contact angle is a measure of static hydrophobicity, and contact angle hysteresis and slide angle are dynamic measures. Contact angle hysteresis is a phenomenon that characterizes surface heterogeneity. When a pipette injects a liquid onto a solid, the liquid will form some contact angle. As the pipette injects more liquid, the droplet will increase in volume, the contact angle will increase, but its three-phase boundary will remain stationary until it suddenly advances outward. The contact angle the droplet had immediately before advancing outward is termed the advancing contact angle. The receding contact angle is now measured by pumping the liquid back out of the droplet. The droplet will decrease in volume, the contact angle will decrease, but its three-phase boundary will remain stationary until it suddenly recedes inward. The contact angle the droplet had immediately before receding inward is termed the receding contact angle. The difference between advancing and receding contact angles is termed contact angle hysteresis and can be used to characterize surface heterogeneity, roughness, and mobility. Surfaces that are not homogeneous will have domains that impede motion of the contact line. The slide angle is another dynamic measure of hydrophobicity and is measured by depositing a droplet on a surface and tilting the surface until the droplet begins to slide. In general, liquids in the Cassie–Baxter state exhibit lower slide angles and contact angle hysteresis than those in the Wenzel state. Research and development Dettre and Johnson discovered in 1964 that the superhydrophobic lotus effect phenomenon was related to rough hydrophobic surfaces, and they developed a theoretical model based on experiments with glass beads coated with paraffin or TFE telomer. The self-cleaning property of superhydrophobic micro-nanostructured surfaces was reported in 1977. Perfluoroalkyl, perfluoropolyether, and RF plasma -formed superhydrophobic materials were developed, used for electrowetting and commercialized for bio-medical applications between 1986 and 1995. Other technology and applications have emerged since the mid-1990s. A durable superhydrophobic hierarchical composition, applied in one or two steps, was disclosed in 2002 comprising nano-sized particles ≤ 100 nanometers overlaying a surface having micrometer-sized features or particles ≤ 100 micrometers. 
The larger particles were observed to protect the smaller particles from mechanical abrasion. In recent research, superhydrophobicity has been reported by allowing alkylketene dimer (AKD) to solidify into a nanostructured fractal surface. Many papers have since presented fabrication methods for producing superhydrophobic surfaces including particle deposition, sol-gel techniques, plasma treatments, vapor deposition, and casting techniques. Current opportunity for research impact lies mainly in fundamental research and practical manufacturing. Debates have recently emerged concerning the applicability of the Wenzel and Cassie–Baxter models. In an experiment designed to challenge the surface energy perspective of the Wenzel and Cassie–Baxter model and promote a contact line perspective, water drops were placed on a smooth hydrophobic spot in a rough hydrophobic field, a rough hydrophobic spot in a smooth hydrophobic field, and a hydrophilic spot in a hydrophobic field. Experiments showed that the surface chemistry and geometry at the contact line affected the contact angle and contact angle hysteresis, but the surface area inside the contact line had no effect. An argument that increased jaggedness in the contact line enhances droplet mobility has also been proposed. Many hydrophobic materials found in nature rely on Cassie's law and are biphasic on the submicrometer level with one component air. The lotus effect is based on this principle. Inspired by it, many functional superhydrophobic surfaces have been prepared. An example of a bionic or biomimetic superhydrophobic material in nanotechnology is nanopin film. One study presents a vanadium pentoxide surface that switches reversibly between superhydrophobicity and superhydrophilicity under the influence of UV radiation. According to the study, any surface can be modified to this effect by application of a suspension of rose-like V2O5 particles, for instance with an inkjet printer. Once again hydrophobicity is induced by interlaminar air pockets (separated by 2.1 nm distances). The UV effect is also explained. UV light creates electron-hole pairs, with the holes reacting with lattice oxygen, creating surface oxygen vacancies, while the electrons reduce V5+ to V3+. The oxygen vacancies are met by water, and it is this water absorbency by the vanadium surface that makes it hydrophilic. By extended storage in the dark, water is replaced by oxygen and hydrophilicity is once again lost. A significant majority of hydrophobic surfaces have their hydrophobic properties imparted by structural or chemical modification of a surface of a bulk material, through either coatings or surface treatments. That is to say, the presence of molecular species (usually organic) or structural features results in high contact angles of water. In recent years, rare earth oxides have been shown to possess intrinsic hydrophobicity. The intrinsic hydrophobicity of rare earth oxides depends on surface orientation and oxygen vacancy levels, and is naturally more robust than coatings or surface treatments, having potential applications in condensers and catalysts that can operate at high temperatures or corrosive environments. Applications and potential applications Hydrophobic concrete has been produced since the mid-20th century. Active recent research on superhydrophobic materials might eventually lead to more industrial applications. 
A simple routine of coating cotton fabric with silica or titania particles by sol-gel technique has been reported, which protects the fabric from UV light and makes it superhydrophobic. An efficient routine has been reported for making polyethylene superhydrophobic and thus self-cleaning. 99% of dirt on such a surface is easily washed away. Patterned superhydrophobic surfaces also have promise for lab-on-a-chip microfluidic devices and can drastically improve surface-based bioanalysis. In pharmaceuticals, hydrophobicity of pharmaceutical blends affects important quality attributes of final products, such as drug dissolution and hardness. Methods have been developed to measure the hydrophobicity of pharmaceutical materials. The development of hydrophobic passive daytime radiative cooling (PDRC) surfaces, whose effectiveness at solar reflectance and thermal emittance is predicated on their cleanliness, has improved the "self-cleaning" of these surfaces. Scalable and sustainable hydrophobic PDRCs that avoid VOCs have further been developed. See also References External links What are superhydrophobic surfaces? Chemical properties Intermolecular forces Surface science Articles containing video clips
Hydrophobe
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,382
[ "Molecular physics", "Materials science", "Surface science", "Intermolecular forces", "Condensed matter physics", "nan" ]
14,286
https://en.wikipedia.org/wiki/Holographic%20principle
The holographic principle is a property of string theories and a supposed property of quantum gravity that states that the description of a volume of space can be thought of as encoded on a lower-dimensional boundary to the region – such as a light-like boundary like a gravitational horizon. First proposed by Gerard 't Hooft, it was given a precise string theoretic interpretation by Leonard Susskind, who combined his ideas with previous ones of 't Hooft and Charles Thorn. Susskind said, "The three-dimensional world of ordinary experience—the universe filled with galaxies, stars, planets, houses, boulders, and people—is a hologram, an image of reality coded on a distant two-dimensional surface." As pointed out by Raphael Bousso, Thorn observed in 1978 that string theory admits a lower-dimensional description in which gravity emerges from it in what would now be called a holographic way. The prime example of holography is the AdS/CFT correspondence. The holographic principle was inspired by the Bekenstein bound of black hole thermodynamics, which conjectures that the maximum entropy in any region scales with the radius squared, rather than cubed as might be expected. In the case of a black hole, the insight was that the information content of all the objects that have fallen into the hole might be entirely contained in surface fluctuations of the event horizon. The holographic principle resolves the black hole information paradox within the framework of string theory. However, there exist classical solutions to the Einstein equations that allow values of the entropy larger than those allowed by an area law (radius squared), hence in principle larger than those of a black hole. These are the so-called "Wheeler's bags of gold". The existence of such solutions conflicts with the holographic interpretation, and their effects in a quantum theory of gravity including the holographic principle are not yet fully understood. High-level summary The physical universe is widely seen to be composed of "matter" and "energy". In his 2003 article published in Scientific American magazine, Jacob Bekenstein speculatively summarized a current trend started by John Archibald Wheeler, which suggests scientists may "regard the physical world as made of information, with energy and matter as incidentals". Bekenstein asks "Could we, as William Blake memorably penned, 'see a world in a grain of sand', or is that idea no more than 'poetic license'?", referring to the holographic principle. Unexpected connection Bekenstein's topical overview "A Tale of Two Entropies" describes potentially profound implications of Wheeler's trend, in part by noting a previously unexpected connection between the world of information theory and classical physics. This connection was first described shortly after the seminal 1948 papers of American applied mathematician Claude Shannon introduced today's most widely used measure of information content, now known as Shannon entropy. As an objective measure of the quantity of information, Shannon entropy has been enormously useful, as the design of all modern communications and data storage devices, from cellular phones to modems to hard disk drives and DVDs, relies on Shannon entropy. In thermodynamics (the branch of physics dealing with heat), entropy is popularly described as a measure of the "disorder" in a physical system of matter and energy. 
In 1877, Austrian physicist Ludwig Boltzmann described it more precisely in terms of the number of distinct microscopic states that the particles composing a macroscopic "chunk" of matter could be in, while still "looking" like the same macroscopic "chunk". As an example, for the air in a room, its thermodynamic entropy would equal the logarithm of the count of all the ways that the individual gas molecules could be distributed in the room, and all the ways they could be moving. Energy, matter, and information equivalence Shannon's efforts to find a way to quantify the information contained in, for example, a telegraph message, led him unexpectedly to a formula with the same form as Boltzmann's. In an article in the August 2003 issue of Scientific American titled "Information in the Holographic Universe", Bekenstein summarizes that "Thermodynamic entropy and Shannon entropy are conceptually equivalent: the number of arrangements that are counted by Boltzmann entropy reflects the amount of Shannon information one would need to implement any particular arrangement" of matter and energy. The only salient difference between the thermodynamic entropy of physics and Shannon's entropy of information is in the units of measure; the former is expressed in units of energy divided by temperature, the latter in essentially dimensionless "bits" of information. The holographic principle states that the entropy of ordinary mass (not just black holes) is also proportional to surface area and not volume; that volume itself is illusory and the universe is really a hologram which is isomorphic to the information "inscribed" on the surface of its boundary. The AdS/CFT correspondence The anti-de Sitter/conformal field theory correspondence, sometimes called Maldacena duality (after ref.) or gauge/gravity duality, is a conjectured relationship between two kinds of physical theories. On one side are anti-de Sitter spaces (AdS) which are used in theories of quantum gravity, formulated in terms of string theory or M-theory. On the other side of the correspondence are conformal field theories (CFT) which are quantum field theories, including theories similar to the Yang–Mills theories that describe elementary particles. The duality represents a major advance in understanding of string theory and quantum gravity. This is because it provides a non-perturbative formulation of string theory with certain boundary conditions and because it is the most successful realization of the holographic principle. It also provides a powerful toolkit for studying strongly coupled quantum field theories. Much of the usefulness of the duality results from a strong-weak duality: when the fields of the quantum field theory are strongly interacting, the ones in the gravitational theory are weakly interacting and thus more mathematically tractable. This fact has been used to study many aspects of nuclear and condensed matter physics by translating problems in those subjects into more mathematically tractable problems in string theory. The AdS/CFT correspondence was first proposed by Juan Maldacena in late 1997. Important aspects of the correspondence were elaborated in articles by Steven Gubser, Igor Klebanov, and Alexander Markovich Polyakov, and by Edward Witten. By 2015, Maldacena's article had over 10,000 citations, becoming the most highly cited article in the field of high energy physics. Black hole entropy An object with relatively high entropy is microscopically random, like a hot gas. 
A known configuration of classical fields has zero entropy: there is nothing random about electric and magnetic fields, or gravitational waves. Since black holes are exact solutions of Einstein's equations, they were thought not to have any entropy. But Jacob Bekenstein noted that this leads to a violation of the second law of thermodynamics. If one throws a hot gas with entropy into a black hole, once it crosses the event horizon, the entropy would disappear. The random properties of the gas would no longer be seen once the black hole had absorbed the gas and settled down. One way of salvaging the second law is if black holes are in fact random objects with an entropy that increases by an amount greater than the entropy of the consumed gas. Given a fixed volume, a black hole whose event horizon encompasses that volume should be the object with the highest amount of entropy. Otherwise, imagine something with a larger entropy, then by throwing more mass into that something, we obtain a black hole with less entropy, violating the second law. In a sphere of radius R, the entropy in a relativistic gas increases as the energy increases. The only known limit is gravitational; when there is too much energy, the gas collapses into a black hole. Bekenstein used this to put an upper bound on the entropy in a region of space, and the bound was proportional to the area of the region. He concluded that the black hole entropy is directly proportional to the area of the event horizon. Gravitational time dilation causes time, from the perspective of a remote observer, to stop at the event horizon. Due to the natural limit on maximum speed of motion, this prevents falling objects from crossing the event horizon no matter how close they get to it. Since any change in quantum state requires time to flow, all objects and their quantum information state stay imprinted on the event horizon. Bekenstein concluded that from the perspective of any remote observer, the black hole entropy is directly proportional to the area of the event horizon. Stephen Hawking had shown earlier that the total horizon area of a collection of black holes always increases with time. The horizon is a boundary defined by light-like geodesics; it is those light rays that are just barely unable to escape. If neighboring geodesics start moving toward each other they eventually collide, at which point their extension is inside the black hole. So the geodesics are always moving apart, and the number of geodesics which generate the boundary, the area of the horizon, always increases. Hawking's result was called the second law of black hole thermodynamics, by analogy with the law of entropy increase. At first, Hawking did not take the analogy too seriously. He argued that the black hole must have zero temperature, since black holes do not radiate and therefore cannot be in thermal equilibrium with any black body of positive temperature. Then he discovered that black holes do radiate. When heat is added to a thermal system, the change in entropy is the increase in mass–energy divided by temperature: (Here the term δM c2 is substituted for the thermal energy added to the system, generally by non-integrable random processes, in contrast to dS, which is a function of a few "state variables" only, i.e. in conventional thermodynamics only of the Kelvin temperature T and a few additional state variables, such as the pressure.) If black holes have a finite entropy, they should also have a finite temperature. 
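To give a sense of the scales implied by an area-law entropy and a finite temperature for black holes, here is a rough numerical sketch. It is illustrative only and not from the original text; it uses the standard Schwarzschild expressions and the constant of proportionality (one quarter of the horizon area in Planck units) discussed just below, with a solar-mass black hole chosen as an arbitrary example.

```python
# Rough illustration of Bekenstein-Hawking entropy and Hawking temperature
# for a solar-mass Schwarzschild black hole. Standard textbook formulas; values are approximate.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s
k_B = 1.381e-23      # Boltzmann constant, J/K
M_sun = 1.989e30     # solar mass, kg

M = M_sun
r_s = 2 * G * M / c**2                 # Schwarzschild radius, ~3 km
A = 4 * math.pi * r_s**2               # horizon area
l_P2 = hbar * G / c**3                 # Planck length squared

S_over_kB = A / (4 * l_P2)             # entropy in units of k_B: one quarter of the area in Planck units
T_H = hbar * c**3 / (8 * math.pi * G * M * k_B)   # Hawking temperature

print(f"Horizon radius:      {r_s/1e3:.1f} km")
print(f"Entropy S/k_B:       {S_over_kB:.2e}")   # ~1e77, enormous compared with ordinary matter
print(f"Hawking temperature: {T_H:.2e} K")        # ~6e-8 K, far colder than the cosmic background
```

On these numbers, a solar-mass black hole carries an enormous entropy and a tiny but finite temperature.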
In particular, they would come to equilibrium with a thermal gas of photons. This means that black holes would not only absorb photons, but they would also have to emit them in the right amount to maintain detailed balance. Time-independent solutions to field equations do not emit radiation, because a time-independent background conserves energy. Based on this principle, Hawking set out to show that black holes do not radiate. But, to his surprise, a careful analysis convinced him that they do, and in just the right way to come to equilibrium with a gas at a finite temperature. Hawking's calculation fixed the constant of proportionality at 1/4; the entropy of a black hole is one quarter its horizon area in Planck units. The entropy is proportional to the logarithm of the number of microstates, the enumerated ways a system can be configured microscopically while leaving the macroscopic description unchanged. Black hole entropy is deeply puzzling – it says that the logarithm of the number of states of a black hole is proportional to the area of the horizon, not the volume in the interior. Later, Raphael Bousso came up with a covariant version of the bound based upon null sheets. Black hole information paradox Hawking's calculation suggested that the radiation which black holes emit is not related in any way to the matter that they absorb. The outgoing light rays start exactly at the edge of the black hole and spend a long time near the horizon, while the infalling matter only reaches the horizon much later. The infalling and outgoing mass/energy interact only when they cross. It is implausible that the outgoing state would be completely determined by some tiny residual scattering. Hawking interpreted this to mean that when black holes absorb some photons in a pure state described by a wave function, they re-emit new photons in a thermal mixed state described by a density matrix. This would mean that quantum mechanics would have to be modified because, in quantum mechanics, states which are superpositions with probability amplitudes never become states which are probabilistic mixtures of different possibilities. Troubled by this paradox, Gerard 't Hooft analyzed the emission of Hawking radiation in more detail. He noted that when Hawking radiation escapes, there is a way in which incoming particles can modify the outgoing particles. Their gravitational field would deform the horizon of the black hole, and the deformed horizon could produce different outgoing particles than the undeformed horizon. When a particle falls into a black hole, it is boosted relative to an outside observer, and its gravitational field assumes a universal form. 't Hooft showed that this field makes a logarithmic tent-pole shaped bump on the horizon of a black hole, and like a shadow, the bump is an alternative description of the particle's location and mass. For a four-dimensional spherical uncharged black hole, the deformation of the horizon is similar to the type of deformation which describes the emission and absorption of particles on a string-theory world sheet. Since the deformations on the surface are the only imprint of the incoming particle, and since these deformations would have to completely determine the outgoing particles, 't Hooft believed that the correct description of the black hole would be by some form of string theory. This idea was made more precise by Leonard Susskind, who had also been developing holography, largely independently. 
Susskind argued that the oscillation of the horizon of a black hole is a complete description of both the infalling and outgoing matter, because the world-sheet theory of string theory was just such a holographic description. While short strings have zero entropy, he could identify long highly excited string states with ordinary black holes. This was a deep advance because it revealed that strings have a classical interpretation in terms of black holes. This work showed that the black hole information paradox is resolved when quantum gravity is described in an unusual string-theoretic way assuming the string-theoretical description is complete, unambiguous and non-redundant. The space-time in quantum gravity would emerge as an effective description of the theory of oscillations of a lower-dimensional black-hole horizon, and suggest that any black hole with appropriate properties, not just strings, would serve as a basis for a description of string theory. In 1995, Susskind, along with collaborators Tom Banks, Willy Fischler, and Stephen Shenker, presented a formulation of the new M-theory using a holographic description in terms of charged point black holes, the D0 branes of type IIA string theory. The matrix theory they proposed was first suggested as a description of two branes in eleven-dimensional supergravity by Bernard de Wit, Jens Hoppe, and Hermann Nicolai. The later authors reinterpreted the same matrix models as a description of the dynamics of point black holes in particular limits. Holography allowed them to conclude that the dynamics of these black holes give a complete non-perturbative formulation of M-theory. In 1997, Juan Maldacena gave the first holographic descriptions of a higher-dimensional object, the 3+1-dimensional type IIB membrane, which resolved a long-standing problem of finding a string description which describes a gauge theory. These developments simultaneously explained how string theory is related to some forms of supersymmetric quantum field theories. Limit on information density Information content is defined as the logarithm of the reciprocal of the probability that a system is in a specific microstate, and the information entropy of a system is the expected value of the system's information content. This definition of entropy is equivalent to the standard Gibbs entropy used in classical physics. Applying this definition to a physical system leads to the conclusion that, for a given energy in a given volume, there is an upper limit to the density of information (the Bekenstein bound) about the whereabouts of all the particles which compose matter in that volume. In particular, a given volume has an upper limit of information it can contain, at which it will collapse into a black hole. This suggests that matter itself cannot be subdivided infinitely many times and there must be an ultimate level of fundamental particles. As the degrees of freedom of a particle are the product of all the degrees of freedom of its sub-particles, were a particle to have infinite subdivisions into lower-level particles, the degrees of freedom of the original particle would be infinite, violating the maximal limit of entropy density. The holographic principle thus implies that the subdivisions must stop at some level. The most rigorous realization of the holographic principle is the AdS/CFT correspondence by Juan Maldacena. However, J. 
David Brown and Marc Henneaux had rigorously proved in 1986, that the asymptotic symmetry of 2+1 dimensional gravity gives rise to a Virasoro algebra, whose corresponding quantum theory is a 2-dimensional conformal field theory. Experimental tests The Fermilab physicist Craig Hogan claims that the holographic principle would imply quantum fluctuations in spatial position that would lead to apparent background noise or "holographic noise" measurable at gravitational wave detectors, in particular GEO 600. However these claims have not been widely accepted, or cited, among quantum gravity researchers and appear to be in direct conflict with string theory calculations. Analyses in 2011 of measurements of gamma ray burst GRB 041219A in 2004 by the INTEGRAL space observatory launched in 2002 by the European Space Agency, shows that Craig Hogan's noise is absent down to a scale of 10−48 meters, as opposed to the scale of 10−35 meters predicted by Hogan, and the scale of 10−16 meters found in measurements of the GEO 600 instrument. Research continued at Fermilab under Hogan as of 2013. Jacob Bekenstein claimed to have found a way to test the holographic principle with a tabletop photon experiment. See also Bekenstein bound Beyond black holes Bousso's holographic bound Brane cosmology Digital physics Entropic gravity Implicate and explicate order Quantum speed limit theorems Physical cosmology Quantum foam Notes References Citations Sources . 't Hooft's original paper. External links Alfonso V. Ramallo: Introduction to the AdS/CFT correspondence, , pedagogical lecture. For the holographic principle: see especially Fig. 1. UC Berkeley's Raphael Bousso gives an introductory lecture on the holographic principle – Video. Scientific American article on holographic principle by Jacob Bekenstein Theoretical physics Black holes Quantum information science Holography
Holographic principle
[ "Physics", "Astronomy" ]
3,876
[ "Physical phenomena", "Black holes", "Physical quantities", "Theoretical physics", "Unsolved problems in physics", "Astrophysics", "Density", "Stellar phenomena", "Astronomical objects" ]
14,307
https://en.wikipedia.org/wiki/Hall%20effect
The Hall effect is the production of a potential difference (the Hall voltage) across an electrical conductor that is transverse to an electric current in the conductor and to an applied magnetic field perpendicular to the current. It was discovered by Edwin Hall in 1879. The Hall coefficient is defined as the ratio of the induced electric field to the product of the current density and the applied magnetic field. It is a characteristic of the material from which the conductor is made, since its value depends on the type, number, and properties of the charge carriers that constitute the current. Discovery Wires carrying current in a magnetic field experience a mechanical force perpendicular to both the current and magnetic field. In the 1820s, André-Marie Ampère observed this underlying mechanism that led to the discovery of the Hall effect. However it was not until a solid mathematical basis for electromagnetism was systematized by James Clerk Maxwell's "On Physical Lines of Force" (published in 1861–1862) that details of the interaction between magnets and electric current could be understood. Edwin Hall then explored the question of whether magnetic fields interacted with the conductors or the electric current, and reasoned that if the force was specifically acting on the current, it should crowd current to one side of the wire, producing a small measurable voltage. In 1879, he discovered this Hall effect while he was working on his doctoral degree at Johns Hopkins University in Baltimore, Maryland. Eighteen years before the electron was discovered, his measurements of the tiny effect produced in the apparatus he used were an experimental tour de force, published under the name "On a New Action of the Magnet on Electric Currents". Hall effect within voids The term ordinary Hall effect can be used to distinguish the effect described in the introduction from a related effect which occurs across a void or hole in a semiconductor or metal plate when current is injected via contacts that lie on the boundary or edge of the void. The charge then flows outside the void, within the metal or semiconductor material. The effect becomes observable, in a perpendicular applied magnetic field, as a Hall voltage appearing on either side of a line connecting the current-contacts. It exhibits apparent sign reversal in comparison to the "ordinary" effect occurring in the simply connected specimen. It depends only on the current injected from within the void. Hall effect superposition Superposition of these two forms of the effect, the ordinary and void effects, can also be realized. First imagine the "ordinary" configuration, a simply connected (void-less) thin rectangular homogeneous element with current-contacts on the (external) boundary. This develops a Hall voltage, in a perpendicular magnetic field. Next, imagine placing a rectangular void within this ordinary configuration, with current-contacts, as mentioned above, on the interior boundary of the void. (For simplicity, imagine the contacts on the boundary of the void lined up with the ordinary-configuration contacts on the exterior boundary.) In such a combined configuration, the two Hall effects may be realized and observed simultaneously in the same doubly connected device: A Hall effect on the external boundary that is proportional to the current injected only via the outer boundary, and an apparently sign-reversed Hall effect on the interior boundary that is proportional to the current injected only via the interior boundary. 
The superposition of multiple Hall effects may be realized by placing multiple voids within the Hall element, with current and voltage contacts on the boundary of each void. Further "Hall effects" may have additional physical mechanisms but are built on these basics.

Theory
The Hall effect is due to the nature of the current in a conductor. Current consists of the movement of many small charge carriers, typically electrons, holes, ions (see Electromigration) or all three. When a magnetic field is present, these charges experience a force, called the Lorentz force. When such a magnetic field is absent, the charges follow approximately straight paths between collisions with impurities, phonons, etc. However, when a magnetic field with a perpendicular component is applied, their paths between collisions are curved; thus, moving charges accumulate on one face of the material. This leaves equal and opposite charges exposed on the other face, where there is a scarcity of mobile charges. The result is an asymmetric distribution of charge density across the Hall element, arising from a force that is perpendicular to both the straight path and the applied magnetic field. The separation of charge establishes an electric field that opposes the migration of further charge, so a steady electric potential is established for as long as the charge is flowing.

In classical electromagnetism, electrons move in the direction opposite to the current (by convention, "current" describes a theoretical "hole flow"). In some metals and semiconductors it appears that "holes" are actually flowing, because the direction of the voltage is opposite to that given by the derivation below.

For a simple metal where there is only one type of charge carrier (electrons), the Hall voltage V_H can be derived by using the Lorentz force and seeing that, in the steady-state condition, charges are not moving in the y-axis direction. Thus, the magnetic force on each electron in the y-axis direction is cancelled by a y-axis electrical force due to the buildup of charges. The v_x term is the drift velocity of the current, which at this point is assumed, by convention, to be carried by holes. The v_x B_z term is negative in the y-axis direction by the right-hand rule.

F = q(E + v × B)

In steady state, F = 0, so 0 = E_y − v_x B_z, where E_y is assigned in the direction of the y-axis (and not with the arrow of the induced electric field as in the image, which points in the −y direction and tells you where the field caused by the electrons is pointing).

In wires, electrons instead of holes are flowing, so v_x → −v_x and q → −q. Also E_y = −V_H / w, where w is the width of the conductor. Substituting these changes gives

V_H = v_x B_z w

The conventional "hole" current is in the negative direction of the electron current and the negative of the electrical charge, which gives I_x = n t w (−v_x)(−e), where n is the charge-carrier density, t w is the cross-sectional area (thickness t times width w), and −e is the charge of each electron. Solving for w and plugging into the above gives the Hall voltage:

V_H = I_x B_z / (n t e)

If the charge buildup had been positive (as it appears in some metals and semiconductors), then the V_H assigned in the image would have been negative (positive charge would have built up on the left side).

The Hall coefficient is defined as

R_H = E_y / (j_x B_z)

where j_x is the current density of the carrier electrons and E_y is the induced electric field. In SI units, and in terms of the quantities measured above, this becomes

R_H = E_y / (j_x B) = V_H t / (I B) = −1 / (n e)

(The units of R_H are usually expressed as m³/C, or Ω·cm/G, or other variants.) As a result, the Hall effect is very useful as a means to measure either the carrier density or the magnetic field.
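
As a numerical illustration of these relations, the short Python sketch below evaluates the single-carrier expressions for a thin metal strip. The copper carrier density (about 8.5 × 10^28 m⁻³) and the strip dimensions are assumed, textbook-style values, not figures taken from this article.

e = 1.602176634e-19   # elementary charge, C
n = 8.5e28            # assumed free-electron density of copper, 1/m^3
t = 100e-6            # strip thickness, m (assumed)
I = 1.0               # current through the strip, A (assumed)
B = 1.0               # perpendicular magnetic field, T (assumed)

R_H = -1.0 / (n * e)        # Hall coefficient for electron carriers, m^3/C
V_H = I * B / (n * t * e)   # magnitude of the Hall voltage, V_H = I B / (n t e)

print(f"R_H   = {R_H:.3e} m^3/C")
print(f"|V_H| = {V_H:.3e} V")   # roughly 7e-7 V here: a sub-microvolt signal

The smallness of the result for a good metal also illustrates why Hall's original measurement was so delicate, and why practical Hall sensors use semiconductors, whose much lower carrier densities give correspondingly larger Hall voltages.
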
One very important feature of the Hall effect is that it differentiates between positive charges moving in one direction and negative charges moving in the opposite direction. In the diagram above, the Hall effect with a negative charge carrier (the electron) is presented. Now consider the case in which the same magnetic field and current are applied, but the current is carried inside the Hall effect device by a positive particle. The particle would of course have to be moving in the opposite direction to the electron in order for the current to be the same: down in the diagram, not up like the electron. Thus, mnemonically speaking, the thumb in the Lorentz force law, representing (conventional) current, points in the same direction as before, because the current is the same: an electron moving up is the same current as a positive charge moving down. With the fingers (magnetic field) also being the same, the charge carrier is deflected to the left in the diagram regardless of whether it is positive or negative. But if positive carriers are deflected to the left, they build up a relatively positive voltage on the left, whereas if negative carriers (namely electrons) are deflected there, they build up a negative voltage on the left, as shown in the diagram. Thus, for the same current and magnetic field, the electric polarity of the Hall voltage depends on the internal nature of the conductor and is useful for elucidating its inner workings.

This property of the Hall effect offered the first real proof that electric currents in most metals are carried by moving electrons, not by protons. It also showed that in some substances (especially p-type semiconductors), it is contrarily more appropriate to think of the current as positive "holes" moving rather than negative electrons. A common source of confusion with the Hall effect in such materials is that holes moving one way are really electrons moving the opposite way, so one expects the Hall voltage polarity to be the same as if electrons were the charge carriers, as in most metals and n-type semiconductors. Yet the opposite polarity of Hall voltage is observed, indicating positive charge carriers. However, of course there are no actual positrons or other positive elementary particles carrying the charge in p-type semiconductors, hence the name "holes". In the same way that the oversimplified picture of light in glass as photons being absorbed and re-emitted to explain refraction breaks down upon closer scrutiny, this apparent contradiction can only be resolved by the modern quantum mechanical theory of quasiparticles, wherein the collective quantized motion of multiple particles can, in a real physical sense, be considered a particle in its own right (albeit not an elementary one).

Unrelatedly, inhomogeneity in the conductive sample can result in a spurious sign of the Hall effect, even in an ideal van der Pauw configuration of electrodes. For example, a Hall effect consistent with positive carriers has been observed in evidently n-type semiconductors. Another source of artefact, in uniform materials, occurs when the sample's aspect ratio is not long enough: the full Hall voltage only develops far away from the current-introducing contacts, since at the contacts the transverse voltage is shorted out to zero.
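
The sign convention just described can be turned into a small diagnostic routine. The Python sketch below, which uses made-up measurement values and a hypothetical helper name rather than anything from this article, infers the dominant carrier type from the sign of a measured Hall voltage and estimates the carrier density from its magnitude, assuming a single dominant carrier as in the derivation above.

e = 1.602176634e-19   # elementary charge, C

def analyze_hall(V_H, I, B, t):
    """Infer carrier type and density from a Hall measurement (single-carrier model).

    V_H : measured Hall voltage, V (with its sign)
    I   : current, A
    B   : perpendicular magnetic field, T
    t   : sample thickness, m
    """
    R_H = V_H * t / (I * B)                 # Hall coefficient, m^3/C
    carrier = "electrons (n-type)" if R_H < 0 else "holes (p-type)"
    density = 1.0 / (abs(R_H) * e)          # |R_H| = 1/(n e) for one carrier type
    return carrier, density

# Hypothetical readings from a thin semiconductor sample
carrier, density = analyze_hall(V_H=-3.1e-4, I=1.0e-3, B=0.5, t=0.5e-3)
print(carrier, f"density = {density:.2e} per m^3")

In practice the measurement is usually repeated with the field reversed and the two readings averaged with opposite signs, which cancels offset voltages from slightly misaligned contacts; the sketch omits that refinement.
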
Hall effect in semiconductors
When a current-carrying semiconductor is kept in a magnetic field, the charge carriers of the semiconductor experience a force in a direction perpendicular to both the magnetic field and the current. At equilibrium, a voltage appears at the semiconductor edges.

The simple formula for the Hall coefficient given above is usually a good explanation when conduction is dominated by a single charge carrier. However, in semiconductors and many metals the theory is more complex, because in these materials conduction can involve significant, simultaneous contributions from both electrons and holes, which may be present in different concentrations and have different mobilities. For moderate magnetic fields the Hall coefficient is

R_H = (p μ_h² − n μ_e²) / (e (p μ_h + n μ_e)²)

or, equivalently,

R_H = (p − n b²) / (e (p + n b)²)

with b = μ_e / μ_h. Here n is the electron concentration, p the hole concentration, μ_e the electron mobility, μ_h the hole mobility and e the elementary charge. For large applied fields the simpler expression analogous to that for a single carrier type holds,

R_H = 1 / ((p − n) e).
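
To make the two-carrier expression concrete, the Python sketch below evaluates the moderate-field Hall coefficient for a sample in which both electrons and holes conduct, and compares it with the single-carrier limits. The concentrations, mobilities and the function name are illustrative assumptions, not values from this article.

e = 1.602176634e-19   # elementary charge, C

def hall_coefficient_two_carrier(n, p, mu_e, mu_h):
    """Moderate-field Hall coefficient: (p*mu_h^2 - n*mu_e^2) / (e*(p*mu_h + n*mu_e)^2)."""
    return (p * mu_h**2 - n * mu_e**2) / (e * (p * mu_h + n * mu_e)**2)

# Hypothetical mixed-conduction sample (roughly silicon-like mobilities)
n, p = 1.0e21, 5.0e21        # electron and hole concentrations, 1/m^3
mu_e, mu_h = 0.15, 0.045     # electron and hole mobilities, m^2/(V*s)

print(f"two-carrier R_H: {hall_coefficient_two_carrier(n, p, mu_e, mu_h):+.3e} m^3/C")
print(f"holes only:      {+1.0 / (p * e):+.3e} m^3/C")
print(f"electrons only:  {-1.0 / (n * e):+.3e} m^3/C")

With these particular numbers the two-carrier coefficient comes out negative even though holes outnumber electrons five to one, because the assumed electron mobility is higher; in mixed conductors the sign of the Hall voltage reflects a mobility-weighted competition between the two carrier types, not simply which one is more numerous.
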
Relationship with star formation
Although it is well known that magnetic fields play an important role in star formation, research models indicate that Hall diffusion critically influences the dynamics of gravitational collapse that forms protostars.

Quantum Hall effect
For a two-dimensional electron system, which can be produced in a MOSFET, in the presence of large magnetic field strength and low temperature, one can observe the quantum Hall effect, in which the Hall conductance undergoes quantum Hall transitions to take on quantized values.

Spin Hall effect
The spin Hall effect consists in the spin accumulation on the lateral boundaries of a current-carrying sample. No magnetic field is needed. It was predicted by Mikhail Dyakonov and V. I. Perel in 1971 and observed experimentally more than 30 years later, both in semiconductors and in metals, at cryogenic as well as at room temperatures. The quantity describing the strength of the spin Hall effect is known as the spin Hall angle, defined as the ratio

θ_SH = j_s / j

where j_s is the spin current generated by the applied (charge) current density j.

Quantum spin Hall effect
For mercury telluride two-dimensional quantum wells with strong spin–orbit coupling, in zero magnetic field and at low temperature, the quantum spin Hall effect was observed in 2007.

Anomalous Hall effect
In ferromagnetic materials (and paramagnetic materials in a magnetic field), the Hall resistivity includes an additional contribution, known as the anomalous Hall effect (or the extraordinary Hall effect), which depends directly on the magnetization of the material and is often much larger than the ordinary Hall effect. (Note that this effect is not due to the contribution of the magnetization to the total magnetic field.) For example, in nickel, the anomalous Hall coefficient is about 100 times larger than the ordinary Hall coefficient near the Curie temperature, but the two are similar at very low temperatures. Although a well-recognized phenomenon, there is still debate about its origins in the various materials. The anomalous Hall effect can be either an extrinsic (disorder-related) effect due to spin-dependent scattering of the charge carriers, or an intrinsic effect which can be described in terms of the Berry phase effect in the crystal momentum space (k-space).

Hall effect in ionized gases
The Hall effect in an ionized gas (plasma) is significantly different from the Hall effect in solids (where the Hall parameter is always much less than unity). In a plasma, the Hall parameter can take any value. The Hall parameter, β, in a plasma is the ratio between the electron gyrofrequency, Ω_e, and the electron–heavy-particle collision frequency, ν:

β = Ω_e / ν = e B / (m_e ν)

where
e is the elementary charge (approximately 1.6 × 10⁻¹⁹ C),
B is the magnetic field (in teslas),
m_e is the electron mass (approximately 9.1 × 10⁻³¹ kg).

The Hall parameter value increases with the magnetic field strength. Physically, the trajectories of electrons are curved by the Lorentz force. Nevertheless, when the Hall parameter is low, their motion between two encounters with heavy particles (neutral or ion) is almost linear. But if the Hall parameter is high, the electron movements are highly curved. The current density vector, J, is then no longer collinear with the electric field vector, E. The two vectors J and E make the Hall angle, θ, which also gives the Hall parameter:

β = tan(θ).

Other Hall effects
The family of Hall effects has expanded to encompass other quasiparticles in semiconductor nanostructures. Specifically, a set of Hall effects has emerged based on excitons and exciton–polaritons in 2D materials and quantum wells.

Applications
Hall sensors amplify and exploit the Hall effect for a variety of sensing applications.

Corbino effect
The Corbino effect, named after its discoverer Orso Mario Corbino, is a phenomenon involving the Hall effect, but a disc-shaped metal sample is used in place of a rectangular one. Because of its shape the Corbino disc allows the observation of Hall effect–based magnetoresistance without the associated Hall voltage. A radial current through a circular disc, subjected to a magnetic field perpendicular to the plane of the disc, produces a "circular" current through the disc. The absence of free transverse boundaries renders the interpretation of the Corbino effect simpler than that of the Hall effect.

See also
Electromagnetic induction
Nernst effect
Thermal Hall effect

References

Condensed matter physics Electric and magnetic fields in matter
Hall effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,247
[ "Physical phenomena", "Hall effect", "Phases of matter", "Electric and magnetic fields in matter", "Materials science", "Electrical phenomena", "Condensed matter physics", "Solid state engineering", "Matter" ]