id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
398,540 | https://en.wikipedia.org/wiki/Euler%20product | In number theory, an Euler product is an expansion of a Dirichlet series into an infinite product indexed by prime numbers. The original such product was given for the sum of all positive integers raised to a certain power as proven by Leonhard Euler. This series and its continuation to the entire complex plane would later become known as the Riemann zeta function.
Definition
In general, if f is a bounded multiplicative function, then the Dirichlet series
∑ f(n)/n^s (summed over n ≥ 1)
is equal to
∏ P(p, s),
where the product is taken over prime numbers p, and P(p, s) is the sum
P(p, s) = 1 + f(p)/p^s + f(p^2)/p^(2s) + f(p^3)/p^(3s) + ⋯ .
In fact, if we consider these as formal generating functions, the existence of such a formal Euler product expansion is a necessary and sufficient condition that f be multiplicative: this says exactly that f(n) is the product of the f(p^k) whenever n factors as the product of the powers p^k of distinct primes p.
An important special case is that in which f is totally multiplicative, so that P(p, s) is a geometric series. Then
P(p, s) = 1/(1 − f(p)/p^s),
as is the case for the Riemann zeta function, where f(n) = 1, and more generally for Dirichlet characters.
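As an illustrative aside (added here, not part of the original article), the following sketch numerically compares a truncated Dirichlet series for the Riemann zeta function with the corresponding truncated Euler product over primes; helper names such as `primes_up_to` are invented for this example.

```python
# Minimal numeric sketch (illustrative only): compare the Dirichlet series
# for the Riemann zeta function with its Euler product over primes.

def primes_up_to(limit):
    """Sieve of Eratosthenes returning all primes <= limit."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i:: i] = [False] * len(sieve[i * i:: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def zeta_series(s, n_max):
    """Partial sum of the Dirichlet series sum over n >= 1 of 1/n^s."""
    return sum(n ** -s for n in range(1, n_max + 1))

def zeta_euler_product(s, p_max):
    """Partial Euler product of 1/(1 - p^-s) over primes p <= p_max."""
    result = 1.0
    for p in primes_up_to(p_max):
        result *= 1.0 / (1.0 - p ** -s)
    return result

# Both truncations approach pi^2/6 = 1.6449... for s = 2.
print(zeta_series(2.0, 200000), zeta_euler_product(2.0, 10000))
```

For s = 2 both truncations agree with π^2/6 ≈ 1.6449 to several decimal places, illustrating the identity in its region of absolute convergence.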
Convergence
In practice all the important cases are such that the infinite series and infinite product expansions are absolutely convergent in some region Re(s) > C,
that is, in some right half-plane in the complex numbers. This already gives some information, since the infinite product, to converge, must give a non-zero value; hence the function given by the infinite series is not zero in such a half-plane.
In the theory of modular forms it is typical to have Euler products with quadratic polynomials in the denominator here. The general Langlands philosophy includes a comparable explanation of the connection of polynomials of degree m, and the representation theory for GL_m.
Examples
The following examples will use the notation P for the set of all primes, that is:
P = {p : p is prime} = {2, 3, 5, 7, 11, ...}.
The Euler product attached to the Riemann zeta function ζ(s), also using the sum of the geometric series, is
ζ(s) = ∑ 1/n^s = ∏ 1/(1 − p^(−s))   (product over p ∈ P),
while for the Liouville function λ(n) = (−1)^Ω(n), it is
∑ λ(n)/n^s = ∏ 1/(1 + p^(−s)) = ζ(2s)/ζ(s).
Using their reciprocals, two Euler products for the Möbius function μ(n) are
∏ (1 − p^(−s)) = ∑ μ(n)/n^s = 1/ζ(s)
and
∏ (1 + p^(−s)) = ζ(s)/ζ(2s).
Taking the ratio of these two gives
∏ (1 + p^(−s))/(1 − p^(−s)) = ∏ (p^s + 1)/(p^s − 1) = ζ(s)^2/ζ(2s).
Since for even values of s the Riemann zeta function ζ(s) has an analytic expression in terms of a rational multiple of π^s, then for even exponents, this infinite product evaluates to a rational number. For example, since ζ(2) = π^2/6, ζ(4) = π^4/90, and ζ(8) = π^8/9450, then
∏ (p^2 + 1)/(p^2 − 1) = ζ(2)^2/ζ(4) = 5/2,
∏ (p^4 + 1)/(p^4 − 1) = ζ(4)^2/ζ(8) = 7/6,
and so on, with the first result known by Ramanujan. This family of infinite products is also equivalent to
∑ 2^ω(n)/n^s = ζ(s)^2/ζ(2s),
where ω(n) counts the number of distinct prime factors of n, and 2^ω(n) is the number of square-free divisors.
If χ is a Dirichlet character of conductor N, so that χ is totally multiplicative, χ(n) depends only on n mod N, and χ(n) = 0 if n is not coprime to N, then
∏ 1/(1 − χ(p)/p^s) = ∑ χ(n)/n^s.
Here it is convenient to omit the primes p dividing the conductor N from the product. In his notebooks, Ramanujan generalized the Euler product for the zeta function as
∏ (x − p^(−s)) ≈ 1/Li_s(x)
for s > 1, where Li_s(x) is the polylogarithm. For x = 1 the product above is just 1/ζ(s).
Notable constants
Many well known constants have Euler product expansions.
The Leibniz formula for π,
π/4 = 1 − 1/3 + 1/5 − 1/7 + ⋯ ,
can be interpreted as a Dirichlet series using the unique non-principal Dirichlet character modulo 4, and converted to an Euler product of superparticular ratios (fractions where numerator and denominator differ by 1):
π/4 = (3/4)(5/4)(7/8)(11/12)(13/12)(17/16)(19/20)⋯ ,
where each numerator is an odd prime and each denominator is the nearest multiple of 4.
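As a hedged numeric illustration (added here, not from the original text), the sketch below forms the partial product of odd primes over their nearest multiples of 4 and compares it with π/4; the product converges only slowly, so the agreement is approximate.

```python
# Illustrative sketch: partial Euler product for the Leibniz formula,
# taking each odd prime over its nearest multiple of 4.
import math

def is_prime(n):
    """Simple trial-division primality test (adequate for this small demo)."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def leibniz_euler_product(p_max):
    product = 1.0
    for p in range(3, p_max + 1, 2):
        if is_prime(p):
            # Nearest multiple of 4 is p - 1 when p = 1 (mod 4), else p + 1.
            denominator = p - 1 if p % 4 == 1 else p + 1
            product *= p / denominator
    return product

print(leibniz_euler_product(100000))  # slowly approaches pi/4
print(math.pi / 4)                    # 0.7853981...
```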
Other Euler products for known constants include:
The Hardy–Littlewood twin prime constant:
The Landau–Ramanujan constant:
Murata's constant :
The strongly carefree constant :
Artin's constant :
Landau's totient constant :
The carefree constant :
and its reciprocal :
The Feller–Tornier constant :
The quadratic class number constant :
The totient summatory constant :
Sarnak's constant :
The carefree constant :
The strongly carefree constant :
Stephens' constant :
Barban's constant :
Taniguchi's constant :
The Heath-Brown and Moroz constant :
Notes
References
G. Polya, Induction and Analogy in Mathematics, Volume 1, Princeton University Press (1954), L.C. Card 53-6388. (A very accessible English translation of Euler's memoir regarding this "Most Extraordinary Law of the Numbers" appears starting on page 91.)
(Provides an introductory discussion of the Euler product in the context of classical number theory.)
G.H. Hardy and E.M. Wright, An introduction to the theory of numbers, 5th ed., Oxford (1979). (Chapter 17 gives further examples.)
George E. Andrews, Bruce C. Berndt, Ramanujan's Lost Notebook: Part I, Springer (2005).
G. Niklasch, "Some number theoretical constants: 1000-digit values".
External links
Analytic number theory
Zeta and L-functions
Mathematical constants
Infinite products | Euler product | [
"Mathematics"
] | 985 | [
"Analytic number theory",
"Mathematical analysis",
"Mathematical objects",
"nan",
"Infinite products",
"Mathematical constants",
"Numbers",
"Number theory"
] |
398,638 | https://en.wikipedia.org/wiki/Biogeochemical%20cycle | A biogeochemical cycle, or more generally a cycle of matter, is the movement and transformation of chemical elements and compounds between living organisms, the atmosphere, and the Earth's crust. Major biogeochemical cycles include the carbon cycle, the nitrogen cycle and the water cycle. In each cycle, the chemical element or molecule is transformed and cycled by living organisms and through various geological forms and reservoirs, including the atmosphere, the soil and the oceans. It can be thought of as the pathway by which a chemical substance cycles (is turned over or moves through) the biotic compartment and the abiotic compartments of Earth. The biotic compartment is the biosphere and the abiotic compartments are the atmosphere, lithosphere and hydrosphere.
For example, in the carbon cycle, atmospheric carbon dioxide is absorbed by plants through photosynthesis, which converts it into organic compounds that are used by organisms for energy and growth. Carbon is then released back into the atmosphere through respiration and decomposition. Additionally, carbon is stored in fossil fuels and is released into the atmosphere through human activities such as burning fossil fuels. In the nitrogen cycle, atmospheric nitrogen gas is converted into usable forms such as ammonia and nitrates through the process of nitrogen fixation, carried out largely by nitrogen-fixing microorganisms. These compounds can be used by other organisms, and nitrogen is returned to the atmosphere through denitrification and other processes. In the water cycle, the universal solvent water evaporates from land and oceans to form clouds in the atmosphere, and then precipitates back to different parts of the planet. Precipitation can seep into the ground and become part of groundwater systems used by plants and other organisms, or can run off the surface to form lakes and rivers. Subterranean water can then seep into the ocean along with river discharges, rich with dissolved and particulate organic matter and other nutrients.
There are biogeochemical cycles for many other elements, such as for oxygen, hydrogen, phosphorus, calcium, iron, sulfur, mercury and selenium. There are also cycles for molecules, such as water and silica. In addition there are macroscopic cycles such as the rock cycle, and human-induced cycles for synthetic compounds such as for polychlorinated biphenyls (PCBs). In some cycles there are geological reservoirs where substances can remain or be sequestered for long periods of time.
Biogeochemical cycles involve the interaction of biological, geological, and chemical processes. Biological processes include the influence of microorganisms, which are critical drivers of biogeochemical cycling. Microorganisms have the ability to carry out wide ranges of metabolic processes essential for the cycling of nutrients and chemicals throughout global ecosystems. Without microorganisms many of these processes would not occur, with significant impact on the functioning of land and ocean ecosystems and the planet's biogeochemical cycles as a whole. Changes to cycles can impact human health. The cycles are interconnected and play important roles regulating climate, supporting the growth of plants, phytoplankton and other organisms, and maintaining the health of ecosystems generally. Human activities such as burning fossil fuels and using large amounts of fertilizer can disrupt cycles, contributing to climate change, pollution, and other environmental problems.
Overview
Energy flows directionally through ecosystems, entering as sunlight (or inorganic molecules for chemoautotrophs) and leaving as heat during the many transfers between trophic levels. However, the matter that makes up living organisms is conserved and recycled. The six most common elements associated with organic molecules — carbon, nitrogen, hydrogen, oxygen, phosphorus, and sulfur — take a variety of chemical forms and may exist for long periods in the atmosphere, on land, in water, or beneath the Earth's surface. Geologic processes, such as weathering, erosion, water drainage, and the subduction of the continental plates, all play a role in this recycling of materials. Because geology and chemistry have major roles in the study of this process, the recycling of inorganic matter between living organisms and their environment is called a biogeochemical cycle.
The six aforementioned elements are used by organisms in a variety of ways. Hydrogen and oxygen are found in water and organic molecules, both of which are essential to life. Carbon is found in all organic molecules, whereas nitrogen is an important component of nucleic acids and proteins. Phosphorus is used to make nucleic acids and the phospholipids that comprise biological membranes. Sulfur is critical to the three-dimensional shape of proteins. The cycling of these elements is interconnected. For example, the movement of water is critical for leaching sulfur and phosphorus into rivers which can then flow into oceans. Minerals cycle through the biosphere between the biotic and abiotic components and from one organism to another.
Ecological systems (ecosystems) have many biogeochemical cycles operating as a part of the system, for example, the water cycle, the carbon cycle, the nitrogen cycle, etc. All chemical elements occurring in organisms are part of biogeochemical cycles. In addition to being a part of living organisms, these chemical elements also cycle through abiotic factors of ecosystems such as water (hydrosphere), land (lithosphere), and/or the air (atmosphere).
The living factors of the planet can be referred to collectively as the biosphere. All the nutrients — such as carbon, nitrogen, oxygen, phosphorus, and sulfur — used in ecosystems by living organisms are a part of a closed system; therefore, these chemicals are recycled instead of being lost and replenished constantly such as in an open system.
The major parts of the biosphere are connected by the flow of chemical elements and compounds in biogeochemical cycles. In many of these cycles, the biota plays an important role. Matter from the Earth's interior is released by volcanoes. The atmosphere exchanges some compounds and elements rapidly with the biota and oceans. Exchanges of materials between rocks, soils, and the oceans are generally slower by comparison.
The flow of energy in an ecosystem is an open system; the Sun constantly gives the planet energy in the form of light while it is eventually used and lost in the form of heat throughout the trophic levels of a food web. Carbon is used to make carbohydrates, fats, and proteins, the major sources of food energy. These compounds are oxidized to release carbon dioxide, which can be captured by plants to make organic compounds. The chemical reaction is powered by the light energy of sunshine.
Sunlight is required to combine carbon with hydrogen and oxygen into an energy source, but ecosystems in the deep sea, where no sunlight can penetrate, obtain energy from sulfur. Hydrogen sulfide near hydrothermal vents can be utilized by organisms such as the giant tube worm. In the sulfur cycle, sulfur can be forever recycled as a source of energy. Energy can be released through the oxidation and reduction of sulfur compounds (e.g., oxidizing elemental sulfur to sulfite and then to sulfate).
Although the Earth constantly receives energy from the Sun, its chemical composition is essentially fixed, as the additional matter is only occasionally added by meteorites. Because this chemical composition is not replenished like energy, all processes that depend on these chemicals must be recycled. These cycles include both the living biosphere and the nonliving lithosphere, atmosphere, and hydrosphere.
Biogeochemical cycles can be contrasted with geochemical cycles. The latter deals only with crustal and subcrustal reservoirs, even though some processes from both overlap.
Compartments
Atmosphere
Hydrosphere
The global ocean covers more than 70% of the Earth's surface and is remarkably heterogeneous. Marine productive areas and coastal ecosystems comprise a minor fraction of the ocean in terms of surface area, yet have an enormous impact on global biogeochemical cycles carried out by microbial communities, which represent 90% of the ocean's biomass. Work in recent years has largely focused on cycling of carbon and macronutrients such as nitrogen, phosphorus, and silicate; other important elements such as sulfur or trace elements have been less studied, reflecting associated technical and logistical issues. Increasingly, these marine areas, and the taxa that form their ecosystems, are subject to significant anthropogenic pressure, impacting marine life and the recycling of energy and nutrients. A key example is that of cultural eutrophication, where agricultural runoff leads to nitrogen and phosphorus enrichment of coastal ecosystems, greatly increasing productivity and resulting in algal blooms, deoxygenation of the water column and seabed, and increased greenhouse gas emissions, with direct local and global impacts on nitrogen and carbon cycles. However, the runoff of organic matter from the mainland to coastal ecosystems is just one of a series of pressing threats stressing microbial communities due to global change. Climate change has also resulted in changes in the cryosphere, as glaciers and permafrost melt, resulting in intensified marine stratification, while shifts of the redox state in different biomes are rapidly reshaping microbial assemblages at an unprecedented rate.
Global change is, therefore, affecting key processes including primary productivity, CO2 and N2 fixation, organic matter respiration/remineralization, and the sinking and burial deposition of fixed CO2. In addition to this, oceans are experiencing an acidification process, with a change of ~0.1 pH units between the pre-industrial period and today, affecting carbonate/bicarbonate buffer chemistry. In turn, acidification has been reported to impact planktonic communities, principally through effects on calcifying taxa. There is also evidence for shifts in the production of key intermediary volatile products, some of which have marked greenhouse effects (e.g., N2O and CH4, reviewed by Breitburg in 2018), due to the increase in global temperature, ocean stratification and deoxygenation, driving as much as 25 to 50% of nitrogen loss from the ocean to the atmosphere in the so-called oxygen minimum zones or anoxic marine zones, driven by microbial processes. Other products that are typically toxic for marine nekton, including reduced sulfur species such as H2S, have a negative impact on marine resources like fisheries and coastal aquaculture. While global change has accelerated, there has been a parallel increase in awareness of the complexity of marine ecosystems, and especially the fundamental role of microbes as drivers of ecosystem functioning.
Lithosphere
Biosphere
Microorganisms drive much of the biogeochemical cycling in the earth system.
Reservoirs
The chemicals are sometimes held for long periods of time in one place. This place is called a reservoir, which, for example, includes such things as coal deposits that are storing carbon for a long period of time. When chemicals are held for only short periods of time, they are being held in exchange pools. Examples of exchange pools include plants and animals.
Plants and animals utilize carbon to produce carbohydrates, fats, and proteins, which can then be used to build their internal structures or to obtain energy. Plants and animals temporarily use carbon in their systems and then release it back into the air or surrounding medium. Generally, reservoirs are abiotic factors whereas exchange pools are biotic factors. Carbon is held for a relatively short time in plants and animals in comparison to coal deposits. The amount of time that a chemical is held in one place is called its residence time or turnover time (also called the renewal time or exit age).
Box models
Box models are widely used to model biogeochemical systems. Box models are simplified versions of complex systems, reducing them to boxes (or storage reservoirs) for chemical materials, linked by material fluxes (flows). Simple box models have a small number of boxes with properties, such as volume, that do not change with time. The boxes are assumed to behave as if they were mixed homogeneously. These models are often used to derive analytical formulas describing the dynamics and steady-state abundance of the chemical species involved.
The diagram at the right shows a basic one-box model. The reservoir contains the amount of material M under consideration, as defined by chemical, physical or biological properties. The source Q is the flux of material into the reservoir, and the sink S is the flux of material out of the reservoir. The budget is the check and balance of the sources and sinks affecting material turnover in a reservoir. The reservoir is in a steady state if Q = S, that is, if the sources balance the sinks and there is no change over time.
The residence or turnover time is the average time material spends resident in the reservoir. If the reservoir is in a steady state, this is the same as the time it takes to fill or drain the reservoir. Thus, if τ is the turnover time, then τ = M/S. The equation describing the rate of change of content in a reservoir is dM/dt = Q − S.
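To make the one-box bookkeeping concrete, here is a minimal sketch (not from the article) that integrates dM/dt = Q − S for a single reservoir whose sink is proportional to its content, S = kM; all parameter values are invented for illustration, and at steady state the sources balance the sinks (Q = S) with turnover time τ = M/S = 1/k.

```python
# One-box model sketch: dM/dt = Q - S with a first-order sink S = k * M.
# All numbers below are illustrative placeholders, not measured fluxes.

def simulate_one_box(Q, k, M0, dt, steps):
    """Integrate dM/dt = Q - k*M with a simple forward-Euler scheme."""
    M = M0
    for _ in range(steps):
        S = k * M              # sink flux proportional to reservoir content
        M += (Q - S) * dt      # budget: sources minus sinks
    return M

Q = 10.0   # source flux into the reservoir (arbitrary units per year)
k = 0.1    # loss rate constant (per year)

M_final = simulate_one_box(Q, k, M0=0.0, dt=0.1, steps=2000)
print(M_final)    # approaches the steady state Q / k = 100
print(1.0 / k)    # turnover time tau = M / S = 1 / k = 10 years
```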
When two or more reservoirs are connected, the material can be regarded as cycling between the reservoirs, and there can be predictable patterns to the cyclic flow. More complex multibox models are usually solved using numerical techniques.
The diagram on the left shows a simplified budget of ocean carbon flows. It is composed of three simple interconnected box models, one for the euphotic zone, one for the ocean interior or dark ocean, and one for ocean sediments. In the euphotic zone, net phytoplankton production is about 50 Pg C each year. About 10 Pg is exported to the ocean interior while the other 40 Pg is respired. Organic carbon degradation occurs as particles (marine snow) settle through the ocean interior. Only 2 Pg eventually arrives at the seafloor, while the other 8 Pg is respired in the dark ocean. In sediments, the time scale available for degradation increases by orders of magnitude with the result that 90% of the organic carbon delivered is degraded and only 0.2 Pg C yr−1 is eventually buried and transferred from the biosphere to the geosphere.
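The per-box bookkeeping described above can be checked directly; the short sketch below (added for illustration) encodes the fluxes quoted in the text, in Pg C per year, and verifies that each box's sources balance its sinks.

```python
# Hedged sketch of the simplified ocean carbon budget described above.
# Fluxes (Pg C per year) are taken from the text; the structure is illustrative.
import math

boxes = {
    "euphotic zone":  {"in": {"net phytoplankton production": 50.0},
                       "out": {"respired": 40.0, "exported to interior": 10.0}},
    "ocean interior": {"in": {"export from euphotic zone": 10.0},
                       "out": {"respired in dark ocean": 8.0, "reaches seafloor": 2.0}},
    "sediments":      {"in": {"delivered to seafloor": 2.0},
                       "out": {"degraded": 1.8, "buried": 0.2}},
}

for name, budget in boxes.items():
    total_in = sum(budget["in"].values())
    total_out = sum(budget["out"].values())
    balanced = math.isclose(total_in, total_out)  # steady state: Q = S
    print(f"{name}: in = {total_in}, out = {total_out}, balanced = {balanced}")
```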
The diagram on the right shows a more complex model with many interacting boxes. Reservoir masses here represent carbon stocks, measured in Pg C. Carbon exchange fluxes, measured in Pg C yr−1, occur between the atmosphere and its two major sinks, the land and the ocean. The black numbers and arrows indicate the reservoir mass and exchange fluxes estimated for the year 1750, just before the Industrial Revolution. The red arrows (and associated numbers) indicate the annual flux changes due to anthropogenic activities, averaged over the 2000–2009 time period. They represent how the carbon cycle has changed since 1750. Red numbers in the reservoirs represent the cumulative changes in anthropogenic carbon since the start of the Industrial Period, 1750–2011.
Fast and slow cycles
There are fast and slow biogeochemical cycles. Fast cycles operate in the biosphere and slow cycles operate in rocks. Fast or biological cycles can complete within years, moving substances from the atmosphere to the biosphere, then back to the atmosphere. Slow or geological cycles can take millions of years to complete, moving substances through the Earth's crust between rocks, soil, ocean and atmosphere.
As an example, the fast carbon cycle is illustrated in the diagram below on the left. This cycle involves relatively short-term biogeochemical processes between the environment and living organisms in the biosphere. It includes movements of carbon between the atmosphere and terrestrial and marine ecosystems, as well as soils and seafloor sediments. The fast cycle includes annual cycles involving photosynthesis and decadal cycles involving vegetative growth and decomposition. The reactions of the fast carbon cycle to human activities will determine many of the more immediate impacts of climate change.
The slow cycle is illustrated in the diagram above on the right. It involves medium to long-term geochemical processes belonging to the rock cycle. The exchange between the ocean and atmosphere can take centuries, and the weathering of rocks can take millions of years. Carbon in the ocean precipitates to the ocean floor where it can form sedimentary rock and be subducted into the Earth's mantle. Mountain building processes result in the return of this geologic carbon to the Earth's surface. There the rocks are weathered and carbon is returned to the atmosphere by degassing and to the ocean by rivers. Other geologic carbon returns to the ocean through the hydrothermal emission of calcium ions. In a given year between 10 and 100 million tonnes of carbon moves around this slow cycle. This includes volcanoes returning geologic carbon directly to the atmosphere in the form of carbon dioxide. However, this is less than one percent of the carbon dioxide put into the atmosphere by burning fossil fuels.
Deep cycles
The terrestrial subsurface is the largest reservoir of carbon on earth, containing 14–135 Pg of carbon and 2–19% of all biomass. Microorganisms drive organic and inorganic compound transformations in this environment and thereby control biogeochemical cycles. Current knowledge of the microbial ecology of the subsurface is primarily based on 16S ribosomal RNA (rRNA) gene sequences. Recent estimates show that <8% of 16S rRNA sequences in public databases derive from subsurface organisms and only a small fraction of those are represented by genomes or isolates. Thus, there is remarkably little reliable information about microbial metabolism in the subsurface. Further, little is known about how organisms in subsurface ecosystems are metabolically interconnected. Some cultivation-based studies of syntrophic consortia and small-scale metagenomic analyses of natural communities suggest that organisms are linked via metabolic handoffs: the transfer of redox reaction products of one organism to another. However, no complex environments have been dissected completely enough to resolve the metabolic interaction networks that underpin them. This restricts the ability of biogeochemical models to capture key aspects of the carbon and other nutrient cycles. New approaches such as genome-resolved metagenomics, an approach that can yield a comprehensive set of draft and even complete genomes for organisms without the requirement for laboratory isolation have the potential to provide this critical level of understanding of biogeochemical processes.
Some examples
Some of the more well-known biogeochemical cycles are shown below:
Many biogeochemical cycles are currently being studied for the first time. Climate change and human impacts are drastically changing the speed, intensity, and balance of these relatively unknown cycles, which include:
the mercury cycle, and
the human-caused cycle of PCBs.
Biogeochemical cycles always involve active equilibrium states: a balance in the cycling of the element between compartments. However, overall balance may involve compartments distributed on a global scale.
As biogeochemical cycles describe the movements of substances on the entire globe, the study of these is inherently multidisciplinary. The carbon cycle may be related to research in ecology and atmospheric sciences. Biochemical dynamics would also be related to the fields of geology and pedology.
See also
Carbonate–silicate cycle
Ecological recycling
Great Acceleration
Hydrogen cycle
Redox gradient
References
Further reading
Schink, Bernhard, "Microbes: Masters of the Global Element Cycles", pp. 33–58 in Metals, Microbes and Minerals: The Biogeochemical Side of Life (pp. xiv + 341), Walter de Gruyter, Berlin. DOI 10.1515/9783110589771-002
Biogeography
Biosphere
Geochemistry | Biogeochemical cycle | [
"Chemistry",
"Biology"
] | 3,978 | [
"Biogeochemical cycle",
"Biogeography",
"Biogeochemistry",
"nan"
] |
399,116 | https://en.wikipedia.org/wiki/Quantum%20solvent | A quantum solvent is essentially a superfluid (also known as a quantum liquid) used to dissolve another chemical species. Any superfluid can theoretically act as a quantum solvent, but in practice the only viable superfluid medium that can currently be used is helium-4, with which quantum solvation has been successfully accomplished under controlled conditions. Such solvents are currently under investigation for use in spectroscopic techniques in the field of analytical chemistry, due to their superior kinetic properties.
Any matter dissolved (or otherwise suspended) in the superfluid will tend to aggregate together in clumps, encapsulated by a 'quantum solvation shell'. Due to the totally frictionless nature of the superfluid medium, the entire object then proceeds to act very much like a nanoscopic ball bearing, allowing effectively complete rotational freedom of the solvated chemical species. A quantum solvation shell consists of a region of non-superfluid helium-4 atoms that surround the molecule(s) and exhibit adiabatic following around the centre of gravity of the solute. As such, the kinetics of an effectively gaseous molecule can be studied without the need to use an actual gas (which can be impractical or impossible). It is necessary to make a small alteration to the rotational constant of the chemical species being examined, in order to compensate for the higher mass entailed by the quantum solvation shell.
Quantum solvation has so far been achieved with a number of organic, inorganic and organometallic compounds, and it has been speculated that as well as the obvious use in the field of spectroscopy, quantum solvents could be used as tools in nanoscale chemical engineering, perhaps to manufacture components for use in nanotechnology.
References
Solvents
Nanotechnology
Chemical physics
Superfluidity | Quantum solvent | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 364 | [
"Physical phenomena",
"Phase transitions",
"Applied and interdisciplinary physics",
"Matter",
"Materials science stubs",
"Phases of matter",
"Quantum mechanics",
"Quantum physics stubs",
"Superfluidity",
"Materials science",
"Nanotechnology stubs",
"Condensed matter physics",
"nan",
"Exoti... |
399,123 | https://en.wikipedia.org/wiki/Solvation%20shell | A solvation shell or solvation sheath is the solvent interface of any chemical compound or biomolecule that constitutes the solute in a solution. When the solvent is water it is called a hydration shell or hydration sphere. The number of solvent molecules surrounding each unit of solute is called the hydration number of the solute.
A classic example is when water molecules arrange around a metal ion. If the metal ion is a cation, the electronegative oxygen atom of the water molecule would be attracted electrostatically to the positive charge on the metal ion. The result is a solvation shell of water molecules that surround the ion. This shell can be several molecules thick, dependent upon the charge of the ion, its distribution and spatial dimensions.
A number of molecules of solvent are involved in the solvation shell around anions and cations from a dissolved salt in a solvent. Metal ions in aqueous solutions form metal aquo complexes. This number can be determined by various methods like compressibility and NMR measurements among others.
Relation to activity coefficient of an electrolyte and its solvation shell number
The solvation shell number of a dissolved electrolyte can be linked to the statistical component of the activity coefficient of the electrolyte and to the ratio between the apparent molar volume of a dissolved electrolyte in a concentrated solution and the molar volume of the solvent (water):
Hydration shells of proteins
The hydration shell (also sometimes called hydration layer) that forms around proteins is of particular importance in biochemistry. This interaction of the protein surface with the surrounding water is often referred to as protein hydration and is fundamental to the activity of the protein. The hydration layer around a protein has been found to have dynamics distinct from the bulk water to a distance of 1 nm. The duration of contact of a specific water molecule with the protein surface may be in the subnanosecond range while molecular dynamics simulations suggest the time water spends in the hydration shell before mixing with the outside bulk water could be in the femtosecond to picosecond range, and that near features conventionally regarded as attractive to water, such as hydrogen bond donors, the water molecules are actually relatively weakly bound and are easily displaced. Solvation shell water molecules can also influence the molecular design of protein binders or inhibitors.
With other solvents and solutes, varying steric and kinetic factors can also affect the solvation shell.
See also
Activity coefficient
Metal ions in aqueous solution
Ion transport number
Ionic radius
Water model
Poisson-Boltzmann equation
Hydration energy
Solvation
References
Solutions
Chemical properties
Chemical bonding | Solvation shell | [
"Physics",
"Chemistry",
"Materials_science"
] | 538 | [
"Homogeneous chemical mixtures",
"Condensed matter physics",
"nan",
"Solutions",
"Chemical bonding"
] |
399,665 | https://en.wikipedia.org/wiki/Centauro%20event | A Centauro event is a kind of anomalous event observed in cosmic-ray detectors since 1972. They are so named because their shape resembles that of a centaur: i.e., highly asymmetric.
If some versions of string theory are correct, then high-energy cosmic rays could create black holes when they collide with molecules in the Earth's atmosphere. These black holes would be tiny, with a mass of around 10 micrograms. They would also be unstable enough to explode in a burst of particles within around 10^−27 seconds.
Theodore Tomaras, a physicist at the University of Crete in Heraklion, Greece, and his Russian collaborators hypothesize that these miniature black holes could explain certain anomalous observations made by cosmic-ray detectors in the Bolivian Andes and on a mountain in Tajikistan.
In 1972, the Andean detector registered a cascade that was strangely rich in charged, quark-based particles; far more particles were detected in the bottom portion of the detector than in the top portion.
In years since, the detectors in Bolivia and Tajikistan have detected more than 40 Centauro events. Various explanations have been suggested. One possible explanation might be if the strong force between particles behaves unusually when they have extremely high energies.
Exploding black holes are also a possibility. The team calculated what signal a detector would register if a cosmic ray creates a miniature black hole that explodes nearby. The researchers' prediction is consistent with the observed Centauro events.
The Tomaras team hopes that computer simulations of mini-black holes exploding, and further observations, will solve the puzzle.
Solution to the Centauro puzzle
In 2003, an international team of researchers from Russia and Japan found that the mysterious observations from mountain-top cosmic ray experiments can be explained with conventional physics.

The new analysis of Centauro I reveals that there is a difference in the arrival angle between the upper block and lower block events, so the two are not products of the same interaction. That leaves only the lower chamber data connected to the Centauro I event. In other words, the man-horse analogy becomes redundant: there is only an obvious "tail", and no "head".

The original detector setup had gaps between neighboring blocks in the upper chamber, and the linear dimensions of these gaps were comparable to the geometrical size of the event. The signal observed in the lower detector was similar to an ordinary interaction occurring at low altitude above the chamber, thus providing a natural solution: the passage of a cascade of particles through a gap between the upper blocks.

In 2005 it was shown that the "other Centauro events" can be explained by peculiarities of the Chacaltaya detector. The so-called "exotic signals" observed so far in cosmic ray experiments using a traditional X-ray emulsion chamber detector can be consistently explained within the framework of standard physics.
Further reading
Cosmic rays | Centauro event | [
"Physics"
] | 586 | [
"Astrophysics",
"Physical phenomena",
"Radiation",
"Cosmic rays"
] |
399,678 | https://en.wikipedia.org/wiki/Fermi%20Gamma-ray%20Space%20Telescope | The Fermi Gamma-ray Space Telescope (FGST, also FGRST), formerly called the Gamma-ray Large Area Space Telescope (GLAST), is a space observatory being used to perform gamma-ray astronomy observations from low Earth orbit. Its main instrument is the Large Area Telescope (LAT), with which astronomers mostly intend to perform an all-sky survey studying astrophysical and cosmological phenomena such as active galactic nuclei, pulsars, other high-energy sources and dark matter. Another instrument aboard Fermi, the Gamma-ray Burst Monitor (GBM; formerly GLAST Burst Monitor), is being used to study gamma-ray bursts and solar flares.
Fermi, named for high-energy physics pioneer Enrico Fermi, was launched on 11 June 2008 at 16:05 UTC aboard a Delta II 7920-H rocket. The mission is a joint venture of NASA, the United States Department of Energy, and government agencies in France, Germany, Italy, Japan, and Sweden. At launch it became the most sensitive gamma-ray telescope in orbit, succeeding INTEGRAL. The project is a recognized CERN experiment (RE7).
Overview
Fermi includes two scientific instruments, the Large Area Telescope (LAT) and the Gamma-ray Burst Monitor (GBM).
The LAT is an imaging gamma-ray detector (a pair-conversion instrument) which detects photons with energy from about 20 million to about 300 billion electronvolts (20 MeV to 300 GeV), with a field of view of about 20% of the sky; it may be thought of as a sequel to the EGRET instrument on the Compton Gamma Ray Observatory.
The GBM consists of 14 scintillation detectors (twelve sodium iodide crystals for the 8 keV to 1 MeV range and two bismuth germanate crystals with sensitivity from 150 keV to 30 MeV), and can detect gamma-ray bursts in that energy range across the whole of the sky not occluded by the Earth.
General Dynamics Advanced Information Systems (formerly Spectrum Astro and now Orbital Sciences) in Gilbert, Arizona, designed and built the spacecraft that carries the instruments. It travels in a low, circular orbit with a period of about 95 minutes. Its normal mode of operation maintains its orientation so that the instruments will look away from the Earth, with a "rocking" motion to equalize the coverage of the sky. The view of the instruments will sweep out across most of the sky about 16 times per day. The spacecraft can also maintain an orientation that points to a chosen target.
Both science instruments underwent environmental testing, including vibration, vacuum, and high and low temperatures to ensure that they can withstand the stresses of launch and continue to operate in space. They were integrated with the spacecraft at the General Dynamics ASCENT facility in Gilbert, Arizona.
Data from the instruments are available to the public through the Fermi Science Support Center web site. Software for analyzing the data is also available.
GLAST renamed Fermi Gamma-ray Space Telescope
NASA's Alan Stern, associate administrator for Science at NASA Headquarters, launched a public competition 7 February 2008, closing 31 March 2008, to rename GLAST in a way that would "capture the excitement of GLAST's mission and call attention to gamma-ray and high-energy astronomy ... something memorable to commemorate this spectacular new astronomy mission ... a name that is catchy, easy to say and will help make the satellite and its mission a topic of dinner table and classroom discussion".
Fermi gained its new name in 2008: On 26 August 2008, GLAST was renamed the "Fermi Gamma-ray Space Telescope" in honor of Enrico Fermi, a pioneer in high-energy physics.
Mission
NASA designed the mission with a five-year lifetime, with a goal of ten years of operations.
The key scientific objectives of the Fermi mission have been described as:
To understand the mechanisms of particle acceleration in active galactic nuclei (AGN), pulsars, and supernova remnants (SNR).
Resolve the gamma-ray sky: unidentified sources and diffuse emission.
Determine the high-energy behavior of gamma-ray bursts and transients.
Probe dark matter (e.g. by looking for an excess of gamma rays from the center of the Milky Way) and early Universe.
Search for evaporating primordial micro black holes (MBH) from their presumed gamma burst signatures (Hawking Radiation component).
The National Academies of Sciences ranked this mission as a top priority. Many new possibilities and discoveries are anticipated to emerge from this single mission and greatly expand our view of the Universe.
Blazars and active galaxies
Study energy spectra and variability of wavelengths of light coming from blazars so as to determine the composition of the black hole jets aimed directly at Earth -- whether they are
(a) a combination of electrons and positrons or
(b) only protons.
Gamma-ray bursts
Study gamma-ray bursts with an energy range several times more intense than ever before so that scientists may be able to understand them better.
Neutron stars
Study younger, more energetic pulsars in the Milky Way than ever before so as to broaden our understanding of stars. Study the pulsed emissions of magnetospheres so as to possibly solve how they are produced. Study how pulsars generate winds of interstellar particles.
Milky Way galaxy
Provide new data to help improve upon existing theoretical models of our own galaxy.
Gamma-ray background radiation
Study better than ever before whether ordinary galaxies are responsible for gamma-ray background radiation. The potential for a tremendous discovery awaits if ordinary sources are determined not to be responsible, in which case the cause may be anything from self-annihilating dark matter to entirely new chain reactions among interstellar particles that have yet to be conceived.
The early universe
Study better than ever before how concentrations of visible and ultraviolet light change over time. The mission should easily detect regions of spacetime where gamma-rays interacted with visible or UV light to make matter. This can be seen as an example of E=mc2 working in reverse, where energy is converted into mass, in the early universe.
Sun
Study better than ever before how our own Sun produces gamma rays in solar flares.
Dark matter
Search for evidence that dark matter is made up of weakly interacting massive particles, complementing similar experiments already planned for the Large Hadron Collider as well as other underground detectors. The potential for a tremendous discovery in this area is possible over the next several years.
Fundamental physics
Test better than ever before certain established theories of physics, such as whether the speed of light in vacuum remains constant regardless of wavelength. Einstein's general theory of relativity contends that it does, yet some models in quantum mechanics and quantum gravity predict that it may not. Search for gamma rays emanating from former black holes that once exploded, providing yet another potential step toward the unification of quantum mechanics and general relativity. Determine whether photons naturally split into smaller photons, as predicted by quantum mechanics and already achieved under controlled, man-made experimental conditions.
Unknown discoveries
Scientists estimate a very high possibility for new scientific discoveries, even revolutionary discoveries, emerging from this single mission.
Mission timeline
Prelaunch
On 4 March 2008, the spacecraft arrived at the Astrotech payload processing facility in Titusville, Florida. On 4 June 2008, after several previous delays, launch status was retargeted for 11 June at the earliest, the last delays resulting from the need to replace the Flight Termination System batteries. The launch window extended from 15:45 to 17:40 UTC daily, until 7 August 2008.
Launch
Launch occurred successfully on 11 June 2008 at 16:05 UTC aboard a Delta 7920H-10C rocket from Cape Canaveral Air Force Station Space Launch Complex 17-B. Spacecraft separation took place about 75 minutes after launch.
Orbit
Fermi resides in a low-Earth circular orbit at an altitude of about 550 km, and at an inclination of 28.5 degrees.
Software modifications
GLAST received some minor modifications to its computer software on 23 June 2008.
LAT/GBM computers operational
Computers operating both the LAT and GBM and most of the LAT's components were turned on 24 June 2008. The LAT high voltage was turned on 25 June, and it began detecting high-energy particles from space, but minor adjustments were still needed to calibrate the instrument. The GBM high voltage was also turned on 25 June, but the GBM still required one more week of testing/calibrations before searching for gamma-ray bursts.
Sky survey mode
After presenting an overview of the Fermi instrumentation and goals, Jennifer Carson of SLAC National Accelerator Laboratory had concluded that the primary goals were "all achievable with the all-sky scanning mode of observing". Fermi switched to "sky survey mode" on 26 June 2008 so as to begin sweeping its field of view over the entire sky every three hours (every two orbits).
Collision avoided
On 30 April 2013, NASA revealed that the telescope had narrowly avoided a collision a year earlier with a defunct Cold War-era Soviet spy satellite, Kosmos 1805, in April 2012. Orbital predictions several days earlier indicated that the two satellites were expected to occupy the same point in space within 30 milliseconds of each other. On 3 April, telescope operators decided to stow the satellite's high-gain parabolic antenna, rotate the solar panels out of the way and to fire Fermi's rocket thrusters for one second to move it out of the way. Even though the thrusters had been idle since the telescope had been placed in orbit nearly five years earlier, they worked correctly and potential disaster was thus avoided.
Extended mission 2013–2018
In August 2013 Fermi started its 5-year mission extension.
Pass 8 software upgrade
In June 2015, the Fermi LAT Collaboration released "Pass 8 LAT data". Iterations of the analysis framework used by LAT are called "passes" and at launch Fermi LAT data was analyzed using Pass 6. Significant improvements to Pass 6 were included in Pass 7 which debuted in August 2011.
Every detection by the Fermi LAT since its launch, was reexamined with the latest tools to learn how the LAT detector responded to both each event and to the background. This improved understanding led to two major improvements: gamma-rays that had been missed by previous analysis were detected and the direction they arrived from was determined with greater accuracy. The impact of the latter is to sharpen Fermi LAT's vision as illustrated in the figure on the right. Pass 8 also delivers better energy measurements and a significantly increased effective area. The entire mission dataset was reprocessed.
These improvements have the greatest impact on both the low and high ends of the range of energy Fermi LAT can detect - in effect expanding the energy range within which LAT can make useful observations. The improvement in the performance of Fermi LAT due to Pass 8 is so dramatic that this software update is sometimes called the cheapest satellite upgrade in history. Among numerous advances, it allowed for a better search for Galactic spectral lines from dark matter interactions, analysis of extended supernova remnants, and to search for extended sources in the Galactic plane.
For almost all event classes, Version P8R2 had a residual background that was not fully isotropic. This anisotropy was traced to cosmic-ray electrons leaking through the ribbons of the Anti-Coincidence Detector and a set of cuts allowed rejection of these events while minimally impacting acceptance. This selection was used to create the P8R3 version of LAT data.
Solar array drive failure
On 16 March 2018 one of Fermi's solar arrays quit rotating, prompting a transition to "safe hold" mode and instrument power off. This was the first mechanical failure in nearly 10 years. Fermi's solar arrays rotate to maximize the exposure of the arrays to the Sun. The motor that drives that rotation failed to move as instructed in one direction. On 27 March, the satellite was placed at a fixed angle relative to its orbit to maximize solar power. The next day the GBM instrument was turned back on. On 2 April, operators turned LAT on and it resumed operations on 8 April. Alternative observation strategies are being developed due to power and thermal requirements.
Discoveries
Pulsar discovery
The first major discovery came when the space telescope detected a pulsar in the CTA 1 supernova remnant that appeared to emit radiation in the gamma ray bands only, a first for its kind. This new pulsar sweeps the Earth every 316.86 milliseconds and is about 4,600 light-years away.
Greatest gamma-ray burst energy release
In September 2008, the gamma-ray burst GRB 080916C in the constellation Carina was recorded by the Fermi telescope. This burst is notable as having "the largest apparent energy release yet measured". The explosion had the power of about 9,000 ordinary supernovae, and the relativistic jet of material ejected in the blast must have moved at a minimum of 99.9999% the speed of light. Overall, GRB 080916C had "the greatest total energy, the fastest motions, and the highest initial-energy emissions" ever seen.
Galactic Center gamma ray excess
In 2009, a surplus of gamma rays from a spherical region around the Galactic Center of the Milky Way was found in data from the Fermi telescope. This is now known as the Galactic Center GeV excess. The source of this surplus is not known. Suggestions include self-annihilation of dark matter or a population of pulsars.
Cosmic rays and supernova remnants
In February 2010, it was announced that Fermi-LAT had determined that supernova remnants act as enormous accelerators for cosmic particles. This determination fulfills one of the stated missions for this project.
Background gamma ray sources
In March 2010 it was announced that active galactic nuclei are not responsible for most gamma-ray background radiation. Though active galactic nuclei do produce some of the gamma-ray radiation detected here on Earth, less than 30% originates from these sources. The search now is to locate the sources for the remaining 70% or so of all gamma-rays detected. Possibilities include star forming galaxies, galactic mergers, and yet-to-be explained dark matter interactions.
Milky Way Gamma- and X-ray emitting Fermi bubbles
In November 2010, it was announced that two gamma-ray and X-ray emitting bubbles were detected around our galaxy, the Milky Way. The bubbles, named Fermi bubbles, extend about 25 thousand light-years distant above and below the galactic center. The galaxy's diffuse gamma-ray fog hampered prior observations, but the discovery team led by D. Finkbeiner, building on research by G. Dobler, worked around this problem.
Highest energy light ever seen from the Sun
In early 2012, Fermi/GLAST observed the highest energy light ever seen in a solar eruption.
Terrestrial gamma-ray flash observations
Fermi telescope has observed and detected numerous terrestrial gamma-ray flashes and discovered that such flashes can produce 100 trillion positrons, far more than scientists had previously expected.
GRB 130427A
On 27 April 2013, Fermi detected GRB 130427A, a gamma-ray burst with one of the highest energy outputs yet recorded.
This included detection of a gamma-ray over 94 billion electron volts (GeV). This broke Fermi's previous record detection, by over three times the amount.
GRB coincident with gravitational wave event GW150914
Fermi reported that its GBM instrument detected a weak gamma-ray burst above 50 keV, starting 0.4 seconds after the LIGO event and with a positional uncertainty region overlapping that of the LIGO observation. The Fermi team calculated the odds of such an event being the result of a coincidence or noise at 0.22%. However, observations from the INTEGRAL telescope's all-sky SPI-ACS instrument indicated that any energy emission in gamma-rays and hard X-rays from the event was less than one millionth of the energy emitted as gravitational waves, concluding that "this limit excludes the possibility that the event is associated with substantial gamma-ray radiation, directed towards the observer." If the signal observed by the Fermi GBM was associated with GW150914, SPI-ACS would have detected it with a significance of 15 sigma above the background. The AGILE space telescope also did not detect a gamma-ray counterpart of the event. A follow-up analysis of the Fermi report by an independent group, released in June 2016, purported to identify statistical flaws in the initial analysis, concluding that the observation was consistent with a statistical fluctuation or an Earth albedo transient on a 1-second timescale. A rebuttal of this follow-up analysis, however, pointed out that the independent group misrepresented the analysis of the original Fermi GBM Team paper and therefore misconstrued the results of the original analysis. The rebuttal reaffirmed that the false coincidence probability is calculated empirically and is not refuted by the independent analysis.
In October 2018, astronomers reported that GRB 150101B, 1.7 billion light years away from Earth, may be analogous to the historic GW170817. It was detected on 1 January 2015 at 15:23:35 UT by the Gamma-ray Burst Monitor on board the Fermi Gamma-ray Space Telescope, along with detections by the Burst Alert Telescope (BAT) on board the Swift Observatory Satellite.
Black hole mergers of the type thought to have produced the gravitational wave event are not expected to produce gamma-ray bursts, as stellar-mass black hole binaries are not expected to have large amounts of orbiting matter. Avi Loeb has theorised that if a massive star is rapidly rotating, the centrifugal force produced during its collapse will lead to the formation of a rotating bar that breaks into two dense clumps of matter with a dumbbell configuration that becomes a black hole binary, and at the end of the star's collapse it triggers a gamma-ray burst. Loeb suggests that the 0.4 second delay is the time it took the gamma-ray burst to cross the star, relative to the gravitational waves.
GRB 170817A signals a multi-messenger transient
On 17 August 2017, Fermi Gamma-Ray Burst Monitor software detected, classified, and localized a gamma-ray burst which was later designated as GRB 170817A. Six minutes later, a single detector at Hanford LIGO registered a gravitational-wave candidate which was consistent with a binary neutron star merger, occurring 2 seconds before the GRB 170817A event. This observation was "the first joint detection of gravitational and electromagnetic radiation from a single source".
Instruments
Gamma-ray Burst Monitor
The Gamma-ray Burst Monitor (GBM) (formerly GLAST Burst Monitor) detects sudden flares of gamma-rays produced by gamma ray bursts and solar flares. Its scintillators are on the sides of the spacecraft to view all of the sky which is not blocked by the Earth. The design is optimized for good resolution in time and photon energy, and is sensitive from 8 keV (a medium X-ray) to 30 MeV (a medium-energy gamma-ray).
"Gamma-ray bursts are so bright we can see them from billions of light-years away, which means they occurred billions of years ago, and we see them as they looked then", stated Charles Meegan of NASA's Marshall Space Flight Center.
The Gamma-ray Burst Monitor has detected gamma rays from positrons generated in powerful thunderstorms.
Large Area Telescope
The Large Area Telescope (LAT) detects individual gamma rays using technology similar to that used in terrestrial particle accelerators. Photons hit thin metal sheets, converting to electron-positron pairs, via a process termed pair production. These charged particles pass through interleaved layers of silicon microstrip detectors, causing ionization which produces detectable tiny pulses of electric charge. Researchers can combine information from several layers of this tracker to determine the path of the particles. After passing through the tracker, the particles enter the calorimeter, which consists of a stack of caesium iodide scintillator crystals to measure the total energy of the particles. The LAT's field of view is large, about 20% of the sky. The resolution of its images is modest by astronomical standards, a few arc minutes for the highest-energy photons and about 3 degrees at 100 MeV. It is sensitive from about 20 MeV to about 300 GeV (from medium-energy up to some very-high-energy gamma rays). The LAT is a bigger and better successor to the EGRET instrument on NASA's Compton Gamma Ray Observatory satellite in the 1990s. Several countries produced the components of the LAT, which were then sent for assembly at SLAC National Accelerator Laboratory. SLAC also hosts the LAT Instrument Science Operations Center, which supports the operation of the LAT during the Fermi mission for the LAT scientific collaboration and for NASA.
Education and public outreach
Education and public outreach are important components of the Fermi project. The main Fermi education and public outreach website at http://glast.sonoma.edu offers gateways to resources for students, educators, scientists, and the public. NASA's Education and Public Outreach (E/PO) group operates the Fermi education and outreach resources at Sonoma State University.
Rossi Prize
The 2011 Bruno Rossi Prize was awarded to Bill Atwood, Peter Michelson and the Fermi LAT team "for enabling, through the development of the Large Area Telescope, new insights into neutron stars, supernova remnants, cosmic rays, binary systems, active galactic nuclei and gamma-ray bursts."
In 2013, the prize was awarded to Roger W. Romani of Leland Stanford Junior University and Alice Harding of Goddard Space Flight Center for their work in developing the theoretical framework underpinning the many exciting pulsar results from Fermi Gamma-ray Space Telescope.
The 2014 prize went to Tracy Slatyer, Douglas Finkbeiner and Meng Su "for their discovery, in gamma rays, of the large unanticipated Galactic structure called the Fermi bubbles."
The 2018 prize was awarded to Colleen Wilson-Hodge and the Fermi GBM team for the detection of GRB 170817A, the first unambiguous and completely independent discovery of an electromagnetic counterpart to a gravitational wave signal (GW170817) that "confirmed that short gamma-ray bursts are produced by binary neutron star mergers and enabled a global multi-wavelength follow-up campaign."
See also
Galactic Center GeV excess
GRB 160625B
List of gamma-ray bursts
eROSITA
References
External links
Fermi website at NASA.gov
Fermi website by NASA's Goddard Space Flight Center
Fermi website at Sonoma.edu
Large Area Telescope website at Stanford.edu
Large Area Telescope publications
Gamma-ray Burst Monitor website by NASA's Marshall Space Flight Center
Gamma-ray Burst Monitor publications
Fermi's 14-Year Time-Lapse of the Gamma-Ray Sky
Astrophysics
Sonoma State University
Space telescopes
Gamma-ray telescopes
Spacecraft launched in 2008
Spacecraft launched by Delta II rockets
Articles containing video clips
CERN experiments | Fermi Gamma-ray Space Telescope | [
"Physics",
"Astronomy"
] | 4,796 | [
"Space telescopes",
"Astronomical sub-disciplines",
"Astrophysics"
] |
399,730 | https://en.wikipedia.org/wiki/Dividing%20a%20circle%20into%20areas | In geometry, the problem of dividing a circle into areas by means of an inscribed polygon with n sides in such a way as to maximise the number of areas created by the edges and diagonals, sometimes called Moser's circle problem (named after Leo Moser), has a solution by an inductive method. The greatest possible number of regions is rG = C(n, 4) + C(n, 2) + 1, where C(n, k) denotes a binomial coefficient, giving the sequence 1, 2, 4, 8, 16, 31, 57, 99, 163, 256, ... (OEIS A000127). Though the first five terms match the geometric progression 2^(n − 1), the sequence deviates at n = 6, showing the risk of generalising from only a few observations.
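As a quick numerical illustration of this closed form (a minimal Python sketch; the function name is ours, not part of the article):

```python
from math import comb

def moser_regions(n: int) -> int:
    """Maximum number of regions produced by joining n points on a circle."""
    # Closed form discussed above: C(n, 4) + C(n, 2) + 1
    return comb(n, 4) + comb(n, 2) + 1

# Prints [1, 2, 4, 8, 16, 31, 57, 99, 163, 256]
print([moser_regions(n) for n in range(1, 11)])
```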
Lemma
If there are n points on the circle and one more point is added, n lines can be drawn from the new point to previously existing points. Two cases are possible. In the first case (a), the new line passes through a point where two or more old lines (between previously existing points) cross. In the second case (b), the new line crosses each of the old lines in a different point. It will be useful to know the following fact.
Lemma. The new point A can be chosen so that case b occurs for each of the new lines.
Proof. For the case a, three points must be on one line: the new point A, the old point O to which the line is drawn, and the point I where two of the old lines intersect. There are n old points O, and hence finitely many points I where two of the old lines intersect. For each O and I, the line OI crosses the circle in one point other than O. Since the circle has infinitely many points, it has a point A which will be on none of the lines OI. Then, for this point A and all of the old points O, case b will be true.
This lemma means that, if there are k lines crossing AO, then each of them crosses AO at a different point and k + 1 new areas are created by the line AO.
Solution
Inductive method
The lemma establishes an important property for solving the problem. By employing an inductive proof, one can arrive at a formula for f(n) in terms of f(n − 1).
In the figure the dark lines are connecting points 1 through 4 dividing the circle into 8 total regions (i.e., f(4) = 8). This figure illustrates the inductive step from n = 4 to n = 5 with the dashed lines. When the fifth point is added (i.e., when computing f(5) using f(4)), this results in four new lines (the dashed lines in the diagram) being added, numbered 1 through 4, one for each point that they connect to. The number of new regions introduced by the fifth point can therefore be determined by considering the number of regions added by each of the 4 lines. Set i to count the lines being added. Each new line can cross a number of existing lines, depending on which point it is to (the value of i). The new lines will never cross each other, except at the new point.
The number of lines that each new line intersects can be determined by considering the number of points on the "left" of the line and the number of points on the "right" of the line. Since all existing points already have lines between them, the number of points on the left multiplied by the number of points on the right is the number of lines that will be crossing the new line. For the line to point i, there are n − i − 1 points on the left and i − 1 points on the right, so a total of (n − i − 1)(i − 1) lines must be crossed.
In this example, the lines to i = 1 and i = 4 each cross zero lines, while the lines to i = 2 and i = 3 each cross two lines (there are two points on one side and one on the other).
So the recurrence can be expressed as
f(n) = f(n − 1) + Σi=1..n−1 [1 + (i − 1)(n − i − 1)],
which can be easily reduced to
f(n) = f(n − 1) + (n − 1) + Σi=1..n−1 (i − 1)(n − i − 1).
Using the sums of the first natural numbers and the first squares, this combines to
f(n) = f(n − 1) + (n − 1) + (n − 1)(n − 2)(n − 3)/6.
Finally,
f(n) = f(1) + Σk=2..n [(k − 1) + (k − 1)(k − 2)(k − 3)/6],
with
f(1) = 1,
which yields
f(n) = (n^4 − 6n^3 + 23n^2 − 18n + 24)/24 = C(n, 4) + C(n, 2) + 1.
Combinatorics and topology method
The lemma asserts that the number of regions is maximal if all "inner" intersections of chords are simple (exactly two chords pass through each point of intersection in the interior). This will be the case if the points on the circle are chosen "in general position". Under this assumption of "generic intersection", the number of regions can also be determined in a non-inductive way, using the formula for the Euler characteristic of a connected planar graph (viewed here as a graph embedded in the 2-sphere S 2).
A planar graph determines a cell decomposition of the plane with F faces (2-dimensional cells), E edges (1-dimensional cells) and V vertices (0-dimensional cells). As the graph is connected, the Euler relation for the 2-dimensional sphere S 2,
V − E + F = 2,
holds. View the diagram (the circle together with all the chords) above as a planar graph. If the general formulas for V and E can both be found, the formula for F can also be derived, which will solve the problem.
Its vertices include the n points on the circle, referred to as the exterior vertices, as well as the interior vertices, the intersections of distinct chords in the interior of the circle. The "generic intersection" assumption made above guarantees that each interior vertex is the intersection of no more than two chords.
Thus the main task in determining V is finding the number of interior vertices. As a consequence of the lemma, any two intersecting chords will uniquely determine an interior vertex. These chords are in turn uniquely determined by the four corresponding endpoints of the chords, which are all exterior vertices. Any four exterior vertices determine a cyclic quadrilateral, and all cyclic quadrilaterals are convex quadrilaterals, so each set of four exterior vertices have exactly one point of intersection formed by their diagonals (chords). Further, by definition all interior vertices are formed by intersecting chords.
Therefore, each interior vertex is uniquely determined by a combination of four exterior vertices, where the number of interior vertices is given by C(n, 4), and so
V = n + C(n, 4).
The edges include the n circular arcs connecting pairs of adjacent exterior vertices, as well as the chordal line segments (described below) created inside the circle by the collection of chords. Since there are two groups of vertices: exterior and interior, the chordal line segments can be further categorized into three groups:
Edges directly (not cut by other chords) connecting two exterior vertices. These are chords between adjacent exterior vertices, and form the perimeter of the polygon. There are n such edges.
Edges connecting two interior vertices.
Edges connecting an interior and exterior vertex.
To find the number of edges in groups 2 and 3, consider each interior vertex, which is connected to exactly four edges. This yields
4 C(n, 4)
edges. Since each edge is defined by two endpoint vertices and only the interior vertices were enumerated, group 2 edges are counted twice while group 3 edges are counted only once.
Every chord that is cut by another (i.e., chords not in group 1) must contain two group 3 edges, its beginning and ending chordal segments. As chords are uniquely determined by two exterior vertices, there are altogether
2 (C(n, 2) − n)
group 3 edges. This is twice the total number of chords that are not themselves members of group 1.
The sum of these results divided by two gives the combined number of edges in groups 2 and 3, namely 2 C(n, 4) + C(n, 2) − n. Adding the n edges from group 1, and the n circular arc edges, brings the total to
E = 2 C(n, 4) + C(n, 2) + n.
Substituting V and E into the Euler relation solved for F, one then obtains
F = E − V + 2 = C(n, 4) + C(n, 2) + 2.
Since one of these faces is the exterior of the circle, the number of regions rG inside the circle is F − 1, or
rG = C(n, 4) + C(n, 2) + 1,
which resolves to
rG = (n^4 − 6n^3 + 23n^2 − 18n + 24)/24,
the same quartic polynomial obtained by using the inductive method.
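The bookkeeping above can be checked numerically; the sketch below (illustrative only, with our own function names) recomputes the region count from the V and E formulas via the Euler relation and compares it with the closed form:

```python
from math import comb

def regions_via_euler(n: int) -> int:
    """Regions inside the circle, via V - E + F = 2 for points in general position."""
    V = n + comb(n, 4)                   # n exterior vertices + C(n, 4) interior crossings
    E = n + 2 * comb(n, 4) + comb(n, 2)  # n circular arcs + chordal segments
    F = E - V + 2                        # Euler relation on the sphere
    return F - 1                         # discard the exterior face

def regions_closed_form(n: int) -> int:
    return comb(n, 4) + comb(n, 2) + 1

assert all(regions_via_euler(n) == regions_closed_form(n) for n in range(1, 30))
```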
The fifth column of Bernoulli's triangle (k = 4) gives the maximum number of regions in the problem of dividing a circle into areas for n + 1 points, where n ≥ 4.
Application to mathematical billiards inside the circle
Considering the force-free motion of a particle inside a circle, it was shown (see D. Jaud) that for specific reflection angles along the circle boundary, the associated area division sequence is given by an arithmetic series.
Evenly-spaced points
If the points are uniformly spaced around the circle, the number of regions is reduced for even n > 4, yielding the OEIS sequence A006533:
Though the numbers of divisors of n! for n > 0 also start with 1, 2, 4, 8, 16 and 30, the following terms (60, 96, 160, 270, 540, 792, ...) diverge from the above.
See also
Cake number
Lazy caterer's sequence – where n is the number of straight cuts
Pizza theorem
References
Conway, J. H. and Guy, R. K. "How Many Regions." In The Book of Numbers. New York: Springer-Verlag, pp. 76–79, 1996.
http://www.arbelos.co.uk/Papers/Chords-regions.pdf
Jaud, D. "Integer Sequences from Circle Divisions by Rational Billiard Trajectories". In "ICGG 2022 - Proceedings of the 20th International Conference on Geometry and Graphics", DOI: 10.1007/978-3-031-13588-0_8
Combinatorics
Circles
Area | Dividing a circle into areas | [
"Physics",
"Mathematics"
] | 1,971 | [
"Scalar physical quantities",
"Discrete mathematics",
"Physical quantities",
"Quantity",
"Size",
"Combinatorics",
"Wikipedia categories named after physical quantities",
"Circles",
"Pi",
"Area"
] |
12,332,868 | https://en.wikipedia.org/wiki/Columbia%20Non-neutral%20Torus | The Columbia Non-neutral Torus (CNT) is a small stellarator at the Columbia University Plasma Physics Laboratory designed by Thomas Sunn Pedersen with the aid of Wayne Reiersen and Fred Dahlgren of the Princeton Plasma Physics Laboratory to conduct the first investigation of non-neutral plasmas confined on magnetic surfaces. The experiment, which began operation in November 2004, is funded by the National Science Foundation and the United States Department of Energy in the form of a Faculty Early Career Development (CAREER) award.
Technical design
CNT, which is housed in a cylindrical vacuum chamber made of 316 stainless steel, measures 60 inches in diameter and stands 75 inches tall. The empty chamber is capable of reaching a pressure of 2x10−10 Torr.
CNT is unique in its simple geometry. Magnetic surfaces are created using only four electromagnetic coils – two interlocking coils inside the chamber, and two poloidal field coils outside the chamber. The two interlocking coils have a radius of .405m, and the angle between them can be manually selected to be 64°, 78°, or 88°, allowing for different shear and rotational transform values, and magnetic surface configuration. The poloidal field coils have a radius of 1.08 m. The coils are powered by a 200 kW power supply and are capable of producing magnetic fields of 0.01–0.2T. The configuration of CNT creates a very low aspect ratio of 1.9, the lowest of any stellarator built.
Research
Thomas Sunn Pedersen is the principal investigator of CNT, which studies several areas of theoretical and experimental non-neutral plasma physics. These include the equilibrium of non-neutral plasmas, transport and confinement, and ion-related instabilities. The CNT theory program is run by Pedersen and Prof. Allen Boozer, also at Columbia University.
First studies on CNT showed the successful creation of magnetic surfaces with the simple four-coil design. At sufficiently low neutral pressures and sufficiently high magnetic field strengths, the plasmas are essentially pure electron plasmas and are macroscopically stable with confinement times of up to 20 ms. Transport is driven by collisions with neutrals as well as E x B drift along insulating rods inserted into the plasma. At higher neutral pressures (10−7 Torr and above), an ion-related instability is observed, with a frequency in the 10–50 kHz range, and a poloidal mode number m = 1.
The CNT group installed a conducting boundary in August 2007 to study its effects on confinement times, and to allow measurements in the absence of internal rods. Future plans for CNT include the study of electron-positron plasmas confined on magnetic surfaces and further studies of partly neutralized plasmas.
References
External links
CNT homepage
CNT publications at Columbia University Department of Applied Physics and Applied Mathematics, Fu Foundation School of Engineering and Applied Science
Fusion power
Plasma physics facilities
Stellarators | Columbia Non-neutral Torus | [
"Physics",
"Chemistry"
] | 597 | [
"Nuclear fusion",
"Fusion power",
"Plasma physics facilities",
"Plasma physics"
] |
12,335,704 | https://en.wikipedia.org/wiki/Made-to-measure | Made-to-measure (MTM) typically refers to custom clothing that is cut and sewn using a standard-sized base pattern. Suits and sport coats are the most common garments made-to-measure. The fit of a made-to-measure garment is expected to be superior to that of a ready-to-wear garment because made-to-measure garments are constructed to fit each customer individually based on a few body measurements to customize the pre-existing pattern.
Made-to-measure garments always involve some form of standardization in the pattern and manufacturing, whereas bespoke tailoring is entirely made from scratch based on a customer's specifications with far more attention to minute fit details and using multiple fittings during the construction process.
All else being equal, a made-to-measure garment will be more expensive than a ready-to-wear garment but cheaper than a bespoke one. "Custom made" most often refers to MTM.
Overview
The made-to-measure process is simple considering that its purpose is to make an existing garment fit a person better. The very first step is for the tailor to take precise measurements of both the customer and the garment itself. Basic measurements taken include measurements of the chest, waist, seat etc., and most importantly, these measurements are taken as the tailor studies the posture of the person to make sure the garment won't be uncomfortable for the customer to wear. Then, the tailor will proceed to either cut the fabric of the original garment to make it fit tighter/smaller or will select the fabric that most closely resembles the original garment's fabric to make it looser/bigger. A base pattern is similarly selected which most closely corresponds with the customer's measurements and is further altered to match the customer's measurements on the original garment. Lastly, the garment is given to the customer to try it on. Although the process is simple, it is however lengthy because tailors measure and remeasure the garment on the person to make sure it fits exactly how it's wanted.
Made-to-measure items typically have a price markup compared to their ready-to-wear counterparts due to the customization and personalized service involved in their production. The primary benefits to the customer of made-to-measure clothing are that the garments will be well-fitted to the customer's body and the customer may have the opportunity to customize the fabric and detailing. However, the primary disadvantage of made-to-measure is that the customer must wait up to several weeks for the garment to be sewn and delivered.
Made-to-measure retailers often travel internationally meeting clients in cities, providing samples of the latest materials and styles.
Unlike bespoke garments, which traditionally involves hand sewing, made-to-measure manufacturers use both machine- and hand-sewing. Made-to-measure also requires fewer fittings than bespoke, resulting in a shorter wait between customer measurement and garment delivery.
Made-to-measure is sometimes also referred to as personal tailoring.
Advertising Standards Authority ruling
In the United Kingdom, the legal definition of "made-to-measure" has been conflated with bespoke tailoring by a ruling of the Advertising Standards Authority. The ruling is based on the Oxford English Dictionary definition of bespoke as "made to order". While this ruling clarified the difference between bespoke and ready-to-wear, it had the effect of blurring the line between bespoke and made-to-measure.
The ruling established that a "made-to-measure suit would be cut, usually by machine, from an existing pattern, and adjusted according to the customer's measurements," while "a bespoke suit would be fully hand-made and the pattern cut from scratch, with an intermediary baste stage which involved a first fitting so that adjustments could be made to a half-made suit." The ruling concluded, however, that a "majority of people... would not expect that bespoke suit to be fully hand-made with the pattern cut from scratch," effectively equalizing the terms bespoke and made-to-measure.
While etymologist Michael Quinion observed that by definition "it was legitimate for a tailor offering clothes cut and sewn by machine to refer to them as bespoke, provided that they were made to the customer's measurements", since the traditional use of bespoke inside the tailoring community has been more nuanced than the Oxford definition, others concluded that the ASA "took a rather ignorant decision to declare that there is no difference between bespoke and made-to-measure."
Comparison
In essence, MTM (made-to-measure) is a step up from pret-a-porter. It's affordable for most people and solves some of the common fit issues that shoppers may have with ready-to-wear garments.
The main advantage of MTM is that the clothes will be adapted to the customer's physique and the customer can have the option of choosing fabric, details, sleeve length, pose adjustment, shirt length, jacket length, etc. The main disadvantage is that customers have to wait up to several weeks for the clothes to be ready and delivered. A typical price for a bespoke fashion item is 15% higher than that of a ready-made garment.[7]
Making MTM garments takes longer than ready-to-wear (RTW), but not as long as making bespoke garments. Unlike bespoke, which is traditionally sewn by hand, makers or tailors producing MTM use both machine and hand sewing. MTM production also requires fewer fittings than bespoke production, resulting in shorter waiting times between customer measurement and delivery.
Advantages:
Better body fit than RTW
Customers choose fabrics and aesthetic details
Faster production time than Bespoke
Cheaper than Bespoke, slightly more expensive than RTW
More precision and dedication
Disadvantages:
More time than ready-made clothes
Does not offer as many quality options as Bespoke
Costs more than RTW; quality not necessarily better
See also
Ready-to-wear
Mass Customization
Bespoke tailoring
References
Sizes in clothing
Fashion design | Made-to-measure | [
"Physics",
"Mathematics",
"Engineering"
] | 1,276 | [
"Sizes in clothing",
"Fashion design",
"Physical quantities",
"Quantity",
"Size",
"Design"
] |
12,337,275 | https://en.wikipedia.org/wiki/Flow-accelerated%20corrosion | Flow-accelerated corrosion (FAC), also known as flow-assisted corrosion, is a corrosion mechanism in which a normally protective oxide layer on a metal surface dissolves in a fast flowing water. The underlying metal corrodes to re-create the oxide, and thus the metal loss continues.
By definition, the rate of FAC depends on the flow velocity. FAC often affects carbon steel piping carrying ultra-pure, deoxygenated water or wet steam. Stainless steel does not suffer from FAC. FAC of carbon steel halts in the presence of small amount of oxygen dissolved in water. FAC rates rapidly decrease with increasing water pH.
FAC has to be distinguished from erosion corrosion because the fundamental mechanisms for the two corrosion modes are different. FAC does not involve impingement of particles, bubbles, or cavitation which cause the mechanical (often crater-like) wear on the surface. By contrast to mechanical erosion, FAC involves dissolution of normally poorly soluble oxide by combined electrochemical, water chemistry and mass-transfer phenomena. Nevertheless, the terms FAC and erosion are sometimes used interchangeably because the actual mechanism may, in some cases, be unclear.
FAC was the cause of several high-profile accidents in power plants, for example, a rupture of a high-pressure condensate line in Virginia Power's Surry nuclear plant in 1986, that resulted in four fatalities and four injuries.
See also
Erosion Corrosion of Copper Water Tubes
Oxygenated treatment
References
Further reading
"Flow Accelerated Corrosion is Still With Us...," 2008 By Dave Daniels, M&M Engineering
"Flow Accelerated Corrosion Evaluation, A Case Study " 2008 By Jon McFarlen, M&M Engineering
Corrosion | Flow-accelerated corrosion | [
"Chemistry",
"Materials_science"
] | 354 | [
"Metallurgy",
"Corrosion",
"Electrochemistry",
"Electrochemistry stubs",
"Materials degradation",
"Physical chemistry stubs",
"Chemical process stubs"
] |
12,340,141 | https://en.wikipedia.org/wiki/Iron%20nanoparticle | Nanoscale iron particles are sub-micrometer particles of iron metal. They are highly reactive because of their large surface area. In the presence of oxygen and water, they rapidly oxidize to form free iron ions. They are widely used in medical and laboratory applications and have also been studied for remediation of industrial sites contaminated with chlorinated organic compounds.
Synthesis
Iron nanoparticles can be synthesized by the reduction of Fe(II) or Fe(III) salt with sodium borohydride in an aqueous medium.
Reactivity
When exposed to oxygen and water, iron oxidizes. This redox process can occur under either acidic or neutral/basic conditions:
2 Fe0(s) + 4 H+(aq) + O2(aq) → 2 Fe2+(aq) + 2 H2O(l)
Fe0(s) + 2 H2O (aq) → Fe2+(aq) + H2(g) + 2 OH−(aq)
Research
Research has shown that nanoscale iron particles can be effectively used to treat several forms of ground contamination, including grounds contaminated by polychlorinated biphenyls (PCBs), chlorinated organic solvents, and organochlorine pesticides. Nanoscale iron particles are easily transportable through ground water, allowing for in situ treatment. Additionally, the nanoparticle-water slurry can be injected into the contaminated area and stay there for long periods of time. These factors combine to make this method cheaper than the most currently used alternative.
Researchers have found that although metallic iron nanoparticles remediate contaminants well, they tend to agglomerate on the soil surfaces. In response, carbon nanoparticles and water-soluble polyelectrolytes have been used as supports to the metallic iron nanoparticles. The hydrophobic contaminants adsorb to these supports, improving permeability in sand and soil.
Field tests have generally confirmed lab findings. However, research is still ongoing and nanoscale iron particles are not yet commonly used for treating ground contamination.
See also
Health and safety hazards of nanomaterials
Environmental implications of nanotechnology
References
External links
National Nanotechnology Initiative
Nanotechnology methods to clean up water pollution
Largescale production and applications of zero-valent iron nanoparticles (nZVI)
Nanoparticles by composition
Environmental science
Pollution control technologies
Iron | Iron nanoparticle | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 543 | [
"nan",
"Pollution control technologies",
"Environmental engineering"
] |
12,342,942 | https://en.wikipedia.org/wiki/Anderson%20orthogonality%20theorem | The Anderson orthogonality theorem is a theorem in physics by the physicist P. W. Anderson.
It relates to the introduction of a magnetic impurity in a metal. When a magnetic impurity is introduced into a metal, the conduction electrons will tend to screen the potential V that the impurity creates. The N-electron ground states for the system when V = 0, which corresponds to the absence of the impurity, and when V ≠ 0, which corresponds to the introduction of the impurity, are orthogonal in the thermodynamic limit N → ∞.
References
Condensed matter physics | Anderson orthogonality theorem | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 108 | [
"Materials science stubs",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Condensed matter stubs",
"Matter"
] |
5,867,217 | https://en.wikipedia.org/wiki/Archie%27s%20law | In petrophysics, Archie's law is a purely empirical law relating the measured electrical conductivity of a porous rock to its porosity and fluid saturation. It is named after Gus Archie (1907–1978) and laid the foundation for modern well log interpretation, as it relates borehole electrical conductivity measurements to hydrocarbon saturations.
Statement of the law
The in-situ electrical conductivity (σ) of a fluid saturated, porous rock is described as
σ = (1/a) σw φ^m Sw^n,
where
φ denotes the porosity
σw represents the electrical conductivity of the aqueous solution (fluid or liquid phase)
Sw is the water saturation, or more generally the fluid saturation, of the pores
m is the cementation exponent of the rock (usually in the range 1.8–2.0 for sandstones)
n is the saturation exponent (usually close to 2)
a is the tortuosity factor.
This relationship attempts to describe ion flow (mostly sodium and chloride) in clean, consolidated sands, with varying intergranular porosity. Electrical conduction is assumed to be exclusively performed by ions dissolved in the pore-filling fluid. Electrical conduction is considered to be absent in the rock grains of the solid phase or in organic fluids other than water (oil, hydrocarbon, gas).
Reformulated for resistivity measurements
The electrical resistivity ρ, the inverse of the electrical conductivity σ, is expressed as
ρt = a ρw φ^−m Sw^−n,
with ρt for the total fluid saturated rock resistivity, and ρw for the resistivity of the fluid itself (w meaning water or an aqueous solution containing dissolved salts with ions bearing electricity in solution).
The factor
F = a / φ^m = ρt / ρw
is also called the formation factor, where ρt (index t standing for total) is the resistivity of the rock saturated with the fluid and ρw is the resistivity of the fluid (index w standing for water) inside the porosity of the rock. The porosity being saturated with the fluid (often water, Sw = 1), F = ρt / ρw.
In case the fluid filling the porosity is a mixture of water and hydrocarbon (petroleum, oil, gas), a resistivity index (I) can be defined:
I = ρt / ρ0 = Sw^−n,
where ρ0 is the resistivity of the rock saturated in water only.
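A worked illustration of these relations, with invented log values (the numbers and function name are ours, not from any referenced dataset):

```python
def archie_water_saturation(Rt, Rw, phi, a=1.0, m=2.0, n=2.0):
    """Water saturation from Archie's law: Sw = (a * Rw / (phi**m * Rt)) ** (1/n)."""
    return (a * Rw / (phi ** m * Rt)) ** (1.0 / n)

# Hypothetical reading: Rt = 20 ohm-m, Rw = 0.05 ohm-m, porosity = 20 %
Sw = archie_water_saturation(Rt=20.0, Rw=0.05, phi=0.20)
F = 1.0 / 0.20 ** 2   # formation factor a / phi**m with a = 1, m = 2
print(f"Formation factor F = {F:.0f}, water saturation Sw = {Sw:.2f}")  # F = 25, Sw = 0.25
```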
Parameters
Cementation exponent, m
The cementation exponent models how much the pore network increases the resistivity, as the rock itself is assumed to be non-conductive. If the pore network were to be modelled as a set of parallel capillary tubes, a cross-section area average of the rock's resistivity would yield porosity dependence equivalent to a cementation exponent of 1. However, the tortuosity of the rock increases this to a higher number than 1. This relates the cementation exponent to the permeability of the rock, increasing permeability decreases the cementation exponent.
The exponent has been observed near 1.3 for unconsolidated sands, and is believed to increase with cementation. Common values for this cementation exponent for consolidated sandstones are 1.8 < < 2.0.
In carbonate rocks, the cementation exponent shows higher variance due to strong diagenetic affinity and complex pore structures. Values between 1.7 and 4.1 have been observed.
The cementation exponent is usually assumed not to be dependent on temperature.
Saturation exponent, n
The saturation exponent usually is fixed to values close to 2. The saturation exponent models the dependency on the presence of non-conductive fluid (hydrocarbons) in the pore-space, and is related to the wettability of the rock. Water-wet rocks will, for low water saturation values, maintain a continuous film along the pore walls making the rock conductive. Oil-wet rocks will have discontinuous droplets of water within the pore space, making the rock less conductive.
Tortuosity factor, a
The constant a, called the tortuosity factor, cementation intercept, lithology factor or lithology coefficient, is sometimes used. It is meant to correct for variation in compaction, pore structure and grain size.
The parameter a is called the tortuosity factor and is related to the path length of the current flow. The value lies in the range 0.5 to 1.5, and it may be different in different reservoirs. However, a typical value to start with for a sandstone reservoir might be 0.6, which can then be tuned during the log data matching process with other sources of data such as core.
Measuring the exponents
In petrophysics, the only reliable source for the numerical value of both exponents is experiments on sand plugs from cored wells. The fluid electrical conductivity can be measured directly on produced fluid (groundwater) samples. Alternatively, the fluid electrical conductivity and the cementation exponent can also be inferred from downhole electrical conductivity measurements across fluid-saturated intervals. For fluid-saturated intervals (Sw = 1) Archie's law can be written
log σ = log σw + m log φ.
Hence, plotting the logarithm of the measured in-situ electrical conductivity against the logarithm of the measured in-situ porosity (Pickett plot), according to Archie's law a straight-line relationship is expected with slope equal to the cementation exponent and intercept equal to the logarithm of the in-situ fluid electrical conductivity.
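In code, the Pickett-plot estimate of m and the fluid conductivity is just a straight-line fit in log–log space; the following sketch uses synthetic data (all values are illustrative):

```python
import numpy as np

# Synthetic water-saturated intervals obeying sigma = sigma_w * phi**m
true_m, true_sigma_w = 2.0, 5.0   # illustrative values (sigma_w in S/m)
phi = np.array([0.10, 0.15, 0.20, 0.25, 0.30])
sigma = true_sigma_w * phi ** true_m

# Fit log(sigma) = m * log(phi) + log(sigma_w)
m_est, intercept = np.polyfit(np.log10(phi), np.log10(sigma), 1)
print(f"cementation exponent m = {m_est:.2f}, fluid conductivity = {10**intercept:.2f} S/m")
```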
Sands with clay/shaly sands
Archie's law postulates that the rock matrix is non-conductive. For sandstone with clay minerals, this assumption is no longer true in general, due to the clay's structure and cation exchange capacity. The Waxman–Smits equation is one model that tries to correct for this.
See also
Birch's law
Byerlee's law
References
Geophysics
Equations
Well logging | Archie's law | [
"Physics",
"Mathematics",
"Engineering"
] | 1,177 | [
"Applied and interdisciplinary physics",
"Petroleum engineering",
"Mathematical objects",
"Equations",
"Well logging",
"Geophysics"
] |
5,868,379 | https://en.wikipedia.org/wiki/Ns%20%28simulator%29 | ns (from network simulator) is a name for a series of discrete event network simulators, specifically ns-1, ns-2, and ns-3. All are discrete-event computer network simulators, primarily used in research and teaching.
History
ns-1
The first version of ns, known as ns-1, was developed at Lawrence Berkeley National Laboratory (LBNL) in the 1995-97 timeframe by Steve McCanne, Sally Floyd, Kevin Fall, and other contributors. This was known as the LBNL Network Simulator, and derived in 1989 from an earlier simulator known as REAL by S. Keshav.
ns-2
Ns-2 began as a revision of ns-1. From 1997 to 2000, ns development was supported by DARPA through the VINT project at LBL, Xerox PARC, UC Berkeley, and USC/ISI. In 2000, ns-2 development was supported through DARPA with SAMAN and through NSF with CONSER, both at USC/ISI, in collaboration with other researchers including ACIRI.
Features of NS2
1. It is a discrete event simulator for networking research.
2. It provides substantial support to simulate several protocols like TCP, FTP, UDP, https, and DSR.
3. It simulates wired and wireless networks.
4. It is primarily Unix-based.
5. Uses TCL as its scripting language.
6. Otcl: Object-oriented support
7. Tclcl: C++ and otcl linkage
8. Discrete event scheduler
Ns-2 incorporates substantial contributions from third parties, including wireless code from the UCB Daedelus and CMU Monarch projects and Sun Microsystems.
ns-3
In 2005, a team led by Tom Henderson, George Riley, Sally Floyd, and Sumit Roy, applied for and received funding from the U.S. National Science Foundation (NSF) to build a replacement for ns-2, called ns-3. This team collaborated with the Planete project of INRIA at Sophia Antipolis, with Mathieu Lacage as the software lead, and formed a new open source project.
In the process of developing ns-3, it was decided to completely abandon backward-compatibility with ns-2. The new simulator would be written from scratch, using the C++ programming language. Development of ns-3 began in July 2006.
Current status of the three versions is:
ns-1 development stopped when ns-2 was founded. It is no longer developed nor maintained.
ns-2 development stopped around 2010. It is no longer developed nor maintained.
ns-3 is actively being developed and maintained.
Design of ns-3
ns-3 is a discrete-event network simulator, sometimes called a 'system simulator' in contrast to a 'link simulator' that models an individual communications link in more detail. ns-3 is written in C++ and compiled into a set of shared libraries that are linked by executable programs that describe the desired simulation topology and configuration. Python bindings are optionally provided using cppyy, allowing users to write simulation programs in Python. The ns-3 simulator features an integrated attribute-based system to manage default and per-instance values for simulation parameters.
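The sketch below is not ns-3 code; it is a toy Python illustration of the discrete-event scheduling idea that ns-2 and ns-3 are built around, with all names invented for the example:

```python
import heapq

class Scheduler:
    """Minimal discrete-event scheduler: events fire strictly in timestamp order."""
    def __init__(self):
        self._queue, self.now, self._seq = [], 0.0, 0

    def schedule(self, delay, callback, *args):
        self._seq += 1  # tie-breaker so equal timestamps keep insertion order
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback, args))

    def run(self):
        while self._queue:
            self.now, _, callback, args = heapq.heappop(self._queue)
            callback(*args)

sim = Scheduler()

def send_packet(src, dst):
    print(f"t={sim.now:.3f}s  {src} sends a packet to {dst}")
    sim.schedule(0.002, lambda: print(f"t={sim.now:.3f}s  {dst} receives it"))  # 2 ms link delay

sim.schedule(0.000, send_packet, "n0", "n1")
sim.schedule(0.010, send_packet, "n0", "n1")
sim.run()
```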
Requirements for ns-3
To build ns-3, you need a computer with a C++ compiler, Python, and the CMake build system. Simple scenarios should run on typical home or office computers, but very large scenarios benefit from large amounts of memory and faster CPUs. The project provides an installation guide that details the requirements, and a tutorial on how to get started.
Simulation workflow
The general process of creating a simulation using either ns-2 or ns-3 can be divided into several steps:
Topology definition: To ease the creation of basic facilities and define their interrelationships, ns-3 has a system of containers and helpers that facilitates this process.
Model development: Models are added to simulation (for example, UDP, IPv4, point-to-point devices and links, applications); most of the time this is done using helpers.
Node and link configuration: models set their default values (for example, the size of packets sent by an application or MTU of a point-to-point link); most of the time this is done using the attribute system.
Execution: Simulation facilities generate events, data requested by the user is logged.
Performance analysis: After the simulation is finished, data is available as a time-stamped event trace. This data can then be statistically analysed with tools like R to draw conclusions.
Graphical Visualization: Raw or processed data collected in a simulation can be graphed using tools like Gnuplot, matplotlib or XGRAPH.
See also
GloMoSim
References
External links
ns-3 webpage
Computer networking
Computer network analysis
Simulation software
Telecommunications engineering
Free software programmed in C++
Software using the GNU General Public License | Ns (simulator) | [
"Technology",
"Engineering"
] | 1,011 | [
"Computer networking",
"Telecommunications engineering",
"Computer engineering",
"Computer science",
"Electrical engineering"
] |
5,871,160 | https://en.wikipedia.org/wiki/Electron-cloud%20effect | The electron-cloud effect is a phenomenon that occurs in particle accelerators and reduces the quality of the particle beam.
Explanation
Electron clouds are created when accelerated charged particles disturb stray electrons already floating in the tube, and bounce or slingshot the electrons into the wall. These stray electrons can be photo-electrons from synchrotron radiation or electrons from ionized gas molecules. When an electron hits the wall, the wall emits more electrons due to secondary emission. These electrons in turn hit another wall, releasing more and more electrons into the accelerator chamber.
Exacerbating factors
This effect is especially a problem in positron accelerations, where electrons are attracted and slingshot into the walls at variable incident angles. Negatively charged electrons liberated from the accelerator walls are attracted to the positively charged beam, and form a "cloud" around it.
The effect is most pronounced for electrons with around 300eV of kinetic energy - with a steep drop-off of the effect at less than that energy, and a gradual drop-off at higher energies, which occurs because electrons "bury" themselves deep inside the walls of the accelerator tube, making it difficult for secondary electrons to escape into the tube.
The effect is also more pronounced for higher incidence angles (angles farther from the normal).
Electron cloud growth can be a grave limitation in bunch currents and total beam currents if multipacting occurs. Multipacting can occur when the electron cloud dynamics can achieve a resonance with the bunch spacing of the accelerator beam. This can cause instabilities along a bunch train and even instabilities within a single bunch, which are known as head-tail instabilities.
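A heavily simplified toy model (not used at any real machine; all numbers are invented) shows why the effective secondary-emission yield per bunch passage sets a sharp threshold between a saturating and an exponentially growing cloud:

```python
def cloud_buildup(delta_eff, n_seed=1e3, passages=50):
    """Electron count after successive bunch passages in a crude multiplication model."""
    n, history = 0.0, []
    for _ in range(passages):
        n = n * delta_eff + n_seed   # survivors are multiplied, photoelectrons reseed
        history.append(n)
    return history

for delta in (0.8, 1.0, 1.2):
    print(f"delta_eff = {delta}: ~{cloud_buildup(delta)[-1]:.3g} electrons after 50 passages")
```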
Proposed remedies
A few remedies have been proposed to deal with this, such as putting ridges in the accelerator tube, adding antechambers to the tube, coating the tube to reduce the yield of electrons from the surface, or creating an electric field to pull in stray electrons. At the PEP-II accelerator at SLAC National Accelerator Laboratory, the vacuum pipe which contains the positron ring has a wire coiled around its entire length. Running a current through this wire creates a solenoidal magnetic field which tends to contain the electrons liberated from the beam pipe walls.
The Large Hadron Collider is very prone to multipacting due to the tight spacing (25 ns) of its proton bunches. During Run 1 (2010–2013) science operation mainly used beams with 50 ns spacing, while 25 ns beams were only employed for short tests in 2011 and 2012. In addition to using a ribbed beam screen designed to minimize secondary electron emission, the effect can also be reduced by in-situ electron bombardment. This is done in the LHC by circulating a special non-science "scrubbing" beam that is specifically designed to generate as many electrons as possible within the constraints of heat dissipation and beam stability. This technique was tested during Run 1, and will be used to allow operation at 25 ns bunch spacing during Run 2 (2015–2018).
Measurement techniques
There are many different ways of measuring the electron cloud in a vacuum chamber. Each one gives insight into a different aspect of the electron cloud.
Retarding field analyzers are local grids in the chamber wall that allow some of the cloud to escape. These electrons can be filtered by an electric field and the resultant energy spectrum can be measured. Retarding field analyzers can be installed in drift regions, dipoles, quadrupoles, and wiggler magnets. A limitation is that retarding field analyzers measure only the local cloud, and because they measure current, there is inherently some time averaging involved. The RFA can also interact with the measurement it is taking through secondary electrons from the retarding grid being expelled from the RFA and kicked back into the device by the beam.
Witness bunch studies measure the tune shift along successive bunches in a train and in a witness bunch that is placed at varying locations behind the train. Since the tune shift is related to the ring-averaged central cloud density, the central cloud density can be calculated if the tune shift is known. An advantage of witness bunch studies is that the tune shifts can be measured bunch by bunch, and so the time evolution of the cloud can be measured.
The vacuum chamber in an accelerator can be used as a waveguide for radio-frequency transmission. Transverse-electric waves can be propagated in the chamber. The electron cloud acts as a plasma and causes a density dependent phase shift in the RF. The phase shift can be measured as frequency sidebands which can then be converted back into a plasma density.
Further reading
References
External links
"Electron Cloud Buildup in the ISIS Proton Synchrotron and Related Machines", by G Bellodi
"Battling the Clouds" article in symmetry magazine
Cornell CESRTA wiki
"Cloudscapes - Diagnosing electron clouds in positive-particle accelerators", by Paul Preuss
Accelerator physics | Electron-cloud effect | [
"Physics"
] | 1,002 | [
"Applied and interdisciplinary physics",
"Accelerator physics",
"Experimental physics"
] |
5,872,318 | https://en.wikipedia.org/wiki/Steady-state%20free%20precession%20imaging | Steady-state free precession (SSFP) imaging is a magnetic resonance imaging (MRI) sequence which uses steady states of magnetizations. In general, SSFP MRI sequences are based on a (low flip angle) gradient echo MRI sequence with a short repetition time which in its generic form has been described as the FLASH MRI technique. While spoiled gradient-echo sequences refer to a steady state of the longitudinal magnetization only, SSFP gradient-echo sequences include transverse coherences (magnetizations) from overlapping multi-order spin echoes and stimulated echoes. This is usually accomplished by refocusing the phase-encoding gradient in each repetition interval in order to keep the phase integral (or gradient moment) constant. Fully balanced SSFP MRI sequences achieve a phase of zero by refocusing all imaging gradients.
Gradient moments are zero or not
If, within one TR, either one of the gradient moments of magnetic gradients along three logical directions, including slice selection direction (Gss), phase encoding (Gpe) and readout (Gro), is not zero, then spins along such direction obtain different phases, making the signal intensity (SI) of a single voxel the vector sum of magnetizations therein. It causes some inevitable loss of signal. Such situations belong to ordinary SSFP imaging, with its commercial names listed below.
Otherwise, if all gradient moments are zero within one TR, i.e. gradients of opposite polarities cancel out, then there are no additional effects on the phase from gradients; that is to say, the SI of each voxel is the contribution of a series of RF pulses and relaxation phenomena. Although the principles underlying echo formation in balanced SSFP have long been known, widespread clinical implementation has been slow due to stringent technical requirements. bSSFP sequences demand a very high level of magnetic field homogeneity and control over gradient switching and shaping. The refocusing mechanism fails if intravoxel dephasing exceeds ±180°, which manifests as band-like artifacts. During the last decade, modern scanners have overcome these limitations, making bSSFP a viable and useful sequence on most mid- and high-field systems. When the echo is recorded close to the middle of the interval (TE ≈ TR/2, as is usually the case), the final term e^(−TE/T2) depends on T2, not T2*. Thus, bSSFP sequences behave more like spin echo than gradient echo sequences in that they do not have T2*-dependence. Also, since TR is nearly always much, much shorter than T1 or T2, the exponential terms containing TR can be disregarded.
Localizer
SSFP is beneficial as a localizer sequence, such as for initial images of the anal canal in order to align the planes of subsequent T2-weighted images to be cross-sections and longitudinal sections of the canal. A particular SSFP used for this purpose is one termed TRUE FISP by Siemens, FIESTA by GE, and balanced FFE by Philips.
Commercial names
SSFP protocols have different names among different MRI manufacturers.
See also
MRI
FLASH MRI
References
Magnetic resonance imaging | Steady-state free precession imaging | [
"Chemistry"
] | 640 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging"
] |
1,702,398 | https://en.wikipedia.org/wiki/Hubbard%20model | The Hubbard model is an approximate model used to describe the transition between conducting and insulating systems. It is particularly useful in solid-state physics. The model is named for John Hubbard.
The Hubbard model states that each electron experiences competing forces: one pushes it to tunnel to neighboring atoms, while the other pushes it away from its neighbors. Its Hamiltonian thus has two terms: a kinetic term allowing for tunneling ("hopping") of particles between lattice sites and a potential term reflecting on-site interaction. The particles can either be fermions, as in Hubbard's original work, or bosons, in which case the model is referred to as the "Bose–Hubbard model".
The Hubbard model is a useful approximation for particles in a periodic potential at sufficiently low temperatures, where all the particles may be assumed to be in the lowest Bloch band, and long-range interactions between the particles can be ignored. If interactions between particles at different sites of the lattice are included, the model is often referred to as the "extended Hubbard model". In particular, the Hubbard term, most commonly denoted by U, is applied in first principles based simulations using Density Functional Theory, DFT. The inclusion of the Hubbard term in DFT simulations is important as this improves the prediction of electron localisation and thus it prevents the incorrect prediction of metallic conduction in insulating systems.
The Hubbard model introduces short-range interactions between electrons to the tight-binding model, which only includes kinetic energy (a "hopping" term) and interactions with the atoms of the lattice (an "atomic" potential). When the interaction between electrons is strong, the behavior of the Hubbard model can be qualitatively different from a tight-binding model. For example, the Hubbard model correctly predicts the existence of Mott insulators: materials that are insulating due to the strong repulsion between electrons, even though they satisfy the usual criteria for conductors, such as having an odd number of electrons per unit cell.
History
The model was originally proposed in 1963 to describe electrons in solids. Hubbard, Martin Gutzwiller and Junjiro Kanamori each independently proposed it.
Since then, it has been applied to the study of high-temperature superconductivity, quantum magnetism, and charge density waves.
Narrow energy band theory
The Hubbard model is based on the tight-binding approximation from solid-state physics, which describes particles moving in a periodic potential, typically referred to as a lattice. For real materials, each lattice site might correspond with an ionic core, and the particles would be the valence electrons of these ions. In the tight-binding approximation, the Hamiltonian is written in terms of Wannier states, which are localized states centered on each lattice site. Wannier states on neighboring lattice sites are coupled, allowing particles on one site to "hop" to another. Mathematically, the strength of this coupling is given by a "hopping integral", or "transfer integral", between nearby sites. The system is said to be in the tight-binding limit when the strength of the hopping integrals falls off rapidly with distance. This coupling allows states associated with each lattice site to hybridize, and the eigenstates of such a crystalline system are Bloch's functions, with the energy levels divided into separated energy bands. The width of the bands depends upon the value of the hopping integral.
The Hubbard model introduces a contact interaction between particles of opposite spin on each site of the lattice. When the Hubbard model is used to describe electron systems, these interactions are expected to be repulsive, stemming from the screened Coulomb interaction. However, attractive interactions have also been frequently considered. The physics of the Hubbard model is determined by competition between the strength of the hopping integral, which characterizes the system's kinetic energy, and the strength of the interaction term. The Hubbard model can therefore explain the transition from metal to insulator in certain interacting systems. For example, it has been used to describe metal oxides as they are heated, where the corresponding increase in nearest-neighbor spacing reduces the hopping integral to the point where the on-site potential is dominant. Similarly, the Hubbard model can explain the transition from conductor to insulator in systems such as rare-earth pyrochlores as the atomic number of the rare-earth metal increases, because the lattice parameter increases (or the angle between atoms can also change) as the rare-earth element atomic number increases, thus changing the relative importance of the hopping integral compared to the on-site repulsion.
Example: one dimensional hydrogen atom chain
The hydrogen atom has one electron, in the so-called s orbital, which can either be spin up () or spin down (). This orbital can be occupied by at most two electrons, one with spin up and one down (see Pauli exclusion principle).
Under band theory, for a 1D chain of hydrogen atoms, the 1s orbital forms a continuous band, which would be exactly half-full. The 1D chain of hydrogen atoms is thus predicted to be a conductor under conventional band theory. This 1D string is the only configuration simple enough to be solved directly.
But in the case where the spacing between the hydrogen atoms is gradually increased, at some point the chain must become an insulator.
Expressed using the Hubbard model, the Hamiltonian is made up of two terms. The first term describes the kinetic energy of the system, parameterized by the hopping integral, t. The second term is the on-site interaction of strength U that represents the electron repulsion. Written out in second quantization notation, the Hubbard Hamiltonian then takes the form
H = −t Σ⟨i,j⟩,σ (c†iσ cjσ + c†jσ ciσ) + U Σi ni↑ ni↓,
where niσ = c†iσ ciσ is the spin-density operator for spin σ on the i-th site. The density operator is ni = ni↑ + ni↓ and the occupation of the i-th site for the wavefunction |ψ⟩ is ⟨ψ|ni|ψ⟩. Typically t is taken to be positive, and U may be either positive or negative, but is assumed to be positive when considering electronic systems.
Without the contribution of the second term, the Hamiltonian resolves to the tight binding formula from regular band theory.
Including the second term yields a realistic model that also predicts a transition from conductor to insulator as the ratio of interaction to hopping, U/t, is varied. This ratio can be modified by, for example, increasing the inter-atomic spacing, which would decrease the magnitude of t without affecting U. In the limit where U/t → ∞, the chain simply resolves into a set of isolated magnetic moments. If U/t is not too large, the overlap integral provides for superexchange interactions between neighboring magnetic moments, which may lead to a variety of interesting magnetic correlations, such as ferromagnetic, antiferromagnetic, etc. depending on the model parameters. The one-dimensional Hubbard model was solved by Lieb and Wu using the Bethe ansatz. Essential progress was achieved in the 1990s: a hidden symmetry was discovered, and the scattering matrix, correlation functions, thermodynamics and quantum entanglement were evaluated.
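For the smallest nontrivial case, the Hamiltonian written above can be diagonalized by hand or numerically. The sketch below treats the two-site, two-electron (half-filled) Hubbard model in the Sz = 0 sector and checks the ground-state energy against the known closed form (U − √(U² + 16t²))/2; the basis ordering and sign convention are one illustrative choice:

```python
import numpy as np

def two_site_hubbard_spectrum(t=1.0, U=4.0):
    """Spectrum of the two-site Hubbard model, two electrons, Sz = 0 sector.

    Basis: |up,down>, |down,up>, |updown,0>, |0,updown> (one sign convention).
    """
    H = np.array([[0.0, 0.0,  -t,  -t],
                  [0.0, 0.0,  +t,  +t],
                  [ -t,  +t,   U, 0.0],
                  [ -t,  +t, 0.0,   U]])
    return np.sort(np.linalg.eigvalsh(H))

t, U = 1.0, 4.0
energies = two_site_hubbard_spectrum(t, U)
exact_ground = 0.5 * (U - np.sqrt(U**2 + 16 * t**2))
print(energies)                               # approx. [-0.83, 0.0, 4.0, 4.83]
print(np.isclose(energies[0], exact_ground))  # True
```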
More complex systems
Although Hubbard is useful in describing systems such as a 1D chain of hydrogen atoms, it is important to note that more complex systems may experience other effects that the Hubbard model does not consider. In general, insulators can be divided into Mott–Hubbard insulators and charge-transfer insulators.
A Mott–Hubbard insulator can be described as
This can be seen as analogous to the Hubbard model for hydrogen chains, where conduction between unit cells can be described by a transfer integral.
However, it is possible for the electrons to exhibit another kind of behavior:
This is known as charge transfer and results in charge-transfer insulators. Unlike Mott–Hubbard insulators electron transfer happens only within a unit cell.
Both of these effects may be present and compete in complex ionic systems.
Numerical treatment
The fact that the Hubbard model has not been solved analytically in arbitrary dimensions has led to intense research into numerical methods for these strongly correlated electron systems. One major goal of this research is to determine the low-temperature phase diagram of this model, particularly in two-dimensions. Approximate numerical treatment of the Hubbard model on finite systems is possible via various methods.
One such method, the Lanczos algorithm, can produce static and dynamic properties of the system. Ground state calculations using this method require the storage of three vectors of the size of the number of states. The number of states scales exponentially with the size of the system, which limits the number of sites in the lattice to about 20 on 21st century hardware. With projector and finite-temperature auxiliary-field Monte Carlo, two statistical methods exist that can obtain certain properties of the system. For low temperatures, convergence problems appear that lead to an exponential computational effort with decreasing temperature due to the so-called fermion sign problem.
The Hubbard model can be studied within dynamical mean-field theory (DMFT). This scheme maps the Hubbard Hamiltonian onto a single-site impurity model, a mapping that is formally exact only in infinite dimensions and in finite dimensions corresponds to the exact treatment of all purely local correlations only. DMFT allows one to compute the local Green's function of the Hubbard model for a given U and a given temperature. Within DMFT, the evolution of the spectral function can be computed and the appearance of the upper and lower Hubbard bands can be observed as correlations increase.
Simulator
Stacks of heterogeneous 2-dimensional transition metal dichalcogenides (TMD) have been used to simulate geometries in more than one dimension. Tungsten diselenide and tungsten disulfide were stacked. This created a moiré superlattice consisting of hexagonal supercells (repetition units defined by the relationship of the two materials). Each supercell then behaves as though it were a single atom. The distance between supercells is roughly 100x that of the atoms within them. This larger distance drastically reduces electron tunneling across supercells.
They can be used to form Wigner crystals. Electrodes can be attached to regulate an electric field. The electric field controls how many electrons fill each supercell. The number of electrons per supercell effectively determines which "atom" the lattice simulates. One electron/cell behaves like hydrogen, two/cell like helium, etc. As of 2022, supercells with up to eight electrons (oxygen) could be simulated. One result of the simulation showed that the difference between metal and insulator is a continuous function of the electric field strength.
A "backwards" stacking regime allows the creation of a Chern insulator via the anomalous quantum Hall effect (with the edges of the device acting as a conductor while the interior acted as an insulator.) The device functioned at a temperature of 5 Kelvins, far above the temperature at which the effect had first been observed.
See also
Anderson impurity model
Bloch's theorem
Electronic band structure
Solid-state physics
Bose–Hubbard model
t-J model
Heisenberg model (quantum)
Dynamical mean-field theory
Stoner criterion
References
Further reading
Correlated electrons
Condensed matter physics
Quantum chemistry
Lattice models
Quantum lattice models | Hubbard model | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,271 | [
"Quantum chemistry",
"Phases of matter",
"Quantum mechanics",
"Lattice models",
"Computational physics",
"Materials science",
"Theoretical chemistry",
"Condensed matter physics",
" molecular",
"Correlated electrons",
"Atomic",
" and optical physics",
"Statistical mechanics",
"Matter",
"Q... |
1,705,432 | https://en.wikipedia.org/wiki/Routh%E2%80%93Hurwitz%20stability%20criterion | In the control system theory, the Routh–Hurwitz stability criterion is a mathematical test that is a necessary and sufficient condition for the stability of a linear time-invariant (LTI) dynamical system or control system. A stable system is one whose output signal is bounded; the position, velocity or energy do not increase to infinity as time goes on. The Routh test is an efficient recursive algorithm that English mathematician Edward John Routh proposed in 1876 to determine whether all the roots of the characteristic polynomial of a linear system have negative real parts. German mathematician Adolf Hurwitz independently proposed in 1895 to arrange the coefficients of the polynomial into a square matrix, called the Hurwitz matrix, and showed that the polynomial is stable if and only if the sequence of determinants of its principal submatrices are all positive. The two procedures are equivalent, with the Routh test providing a more efficient way to compute the Hurwitz determinants () than computing them directly. A polynomial satisfying the Routh–Hurwitz criterion is called a Hurwitz polynomial.
The importance of the criterion is that the roots p of the characteristic equation of a linear system with negative real parts represent solutions ept of the system that are stable (bounded). Thus the criterion provides a way to determine if the equations of motion of a linear system have only stable solutions, without solving the system directly. For discrete systems, the corresponding stability test can be handled by the Schur–Cohn criterion, the Jury test and the Bistritz test. With the advent of computers, the criterion has become less widely used, as an alternative is to solve the polynomial numerically, obtaining approximations to the roots directly.
The Routh test can be derived through the use of the Euclidean algorithm and Sturm's theorem in evaluating Cauchy indices. Hurwitz derived his conditions differently.
Using Euclid's algorithm
The criterion is related to the Routh–Hurwitz theorem. From the statement of that theorem, we have p − q = w(+∞) − w(−∞), where:
p is the number of roots of the polynomial f(z) with negative real part;
q is the number of roots of the polynomial f(z) with positive real part (according to the theorem, f is supposed to have no roots lying on the imaginary line);
w(x) is the number of variations of the generalized Sturm chain obtained from P0(y) and P1(y) (by successive Euclidean divisions), where f(iy) = P0(y) + iP1(y) for a real y.
By the fundamental theorem of algebra, each polynomial of degree n must have n roots in the complex plane (i.e., for an ƒ with no roots on the imaginary line, p + q = n). Thus, we have the condition that ƒ is a (Hurwitz) stable polynomial if and only if p − q = n (the proof is given below). Using the Routh–Hurwitz theorem, we can replace the condition on p and q by a condition on the generalized Sturm chain, which will give in turn a condition on the coefficients of ƒ.
Using matrices
Let f(z) be a complex polynomial. The process is as follows:
Compute the polynomials P0(y) and P1(y) such that f(iy) = P0(y) + iP1(y), where y is a real number.
Compute the Sylvester matrix associated to the pair P0(y) and P1(y).
Rearrange each row in such a way that an odd row and the following one have the same number of leading zeros.
Compute each principal minor of that matrix.
If at least one of the minors is negative (or zero), then the polynomial f is not stable.
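The determinant form of the criterion is easy to check numerically. The following minimal sketch is not part of the procedure above: it uses the Hurwitz matrix built directly from the coefficients rather than the Sylvester matrix, and the coefficient ordering, function names and use of NumPy are assumptions made purely for illustration.

import numpy as np

def hurwitz_matrix(coeffs):
    """Hurwitz matrix of a real polynomial given as [a0, a1, ..., an] in
    descending powers, i.e. a0*s^n + a1*s^(n-1) + ... + an (a0 > 0 assumed)."""
    n = len(coeffs) - 1
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = 2 * i - j + 1          # index of the coefficient placed at (i, j)
            if 0 <= k <= n:
                H[i, j] = coeffs[k]
    return H

def is_hurwitz_stable(coeffs):
    """True when a0 > 0 and every leading principal minor of the Hurwitz matrix is positive."""
    H = hurwitz_matrix(coeffs)
    minors = [np.linalg.det(H[:k, :k]) for k in range(1, H.shape[0] + 1)]
    return coeffs[0] > 0 and all(m > 0 for m in minors)

# (s + 1)^3 = s^3 + 3s^2 + 3s + 1 is stable; s^3 + s^2 + s + 2 is not.
print(is_hurwitz_stable([1, 3, 3, 1]), is_hurwitz_stable([1, 1, 1, 2]))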
Example
Let f(z) = az^2 + bz + c (for the sake of simplicity we take real coefficients) where c ≠ 0 (to avoid a root at zero so that we can use the Routh–Hurwitz theorem). First, we have to calculate the real polynomials P0(y) and P1(y):
Next, we divide those polynomials to obtain the generalized Sturm chain:
and the Euclidean division stops.
Notice that we had to suppose b different from zero in the first division. The generalized Sturm chain is in this case
Putting , the sign of is the opposite sign of and the sign of is the sign of . When we put , the sign of the first element of the chain is again the opposite sign of and the sign of is the opposite sign of . Finally, has always the opposite sign of .
Suppose now that is Hurwitz-stable. This means that (the degree of ). By the properties of the function , this is the same as and . Thus, and must have the same sign. We have thus found the necessary condition of stability for polynomials of degree 2.
Routh–Hurwitz criterion for second, third and fourth-order polynomials
For a second-order polynomial P(s) = a_2 s^2 + a_1 s + a_0,
all coefficients must be positive, i.e. a_i > 0 for each i; for second order this condition is both necessary and sufficient.
For a third-order polynomial P(s) = a_3 s^3 + a_2 s^2 + a_1 s + a_0,
all coefficients must be positive, and in addition a_2 a_1 > a_3 a_0.
For a fourth-order polynomial P(s) = a_4 s^4 + a_3 s^3 + a_2 s^2 + a_1 s + a_0,
all coefficients must be positive, and in addition a_3 a_2 > a_4 a_1 and a_3 a_2 a_1 > a_4 a_1^2 + a_3^2 a_0.
(When this is derived it is not yet known that all coefficients must be positive, and the condition a_0 > 0 is added separately.)
In general the Routh stability criterion states a polynomial has all roots in the open left half-plane if and only if all first-column elements of the Routh array have the same sign.
All coefficients being positive (or all negative) is necessary for all roots to be located in the open left half-plane. That is why the leading coefficient is here fixed to 1, which is positive. When this is assumed, one condition can be removed for the fourth-order polynomial, and the conditions for fifth- and sixth-order polynomials can be simplified in the same way, so that only a reduced set of checks is needed; this is further optimised in the Liénard–Chipart criterion. Indeed, requiring some coefficients to be positive is not independent of requiring the principal minors to be positive; for example, one such check can be removed for the third-order polynomial.
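These low-order conditions can be sanity-checked against numerically computed roots. The sketch below assumes the conventional coefficient labelling a3*s^3 + a2*s^2 + a1*s + a0 and uses NumPy; the example coefficients are arbitrary illustrations, not values taken from the article.

import numpy as np

def third_order_stable(a3, a2, a1, a0):
    # Closed-form conditions for a3*s^3 + a2*s^2 + a1*s + a0 (standard convention assumed)
    return a3 > 0 and a2 > 0 and a1 > 0 and a0 > 0 and a2 * a1 > a3 * a0

def all_roots_in_left_half_plane(coeffs):
    return all(r.real < 0 for r in np.roots(coeffs))

# The closed-form test agrees with the numerically computed roots on these example cubics.
for coeffs in [(1, 6, 11, 6), (1, 1, 1, 2), (2, 3, 4, 5)]:
    print(coeffs, third_order_stable(*coeffs), all_roots_in_left_half_plane(coeffs))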
Higher-order example
A tabular method can be used to determine the stability when the roots of a higher-order characteristic polynomial are difficult to obtain. For an nth-degree polynomial whose coefficients all have the same sign
the table has n + 1 rows and the following structure:
where the elements b_i and c_i can be computed as follows:
When completed, the number of sign changes in the first column will be the number of roots whose real parts are non-negative.
In the first column there are two sign changes, thus there are two roots whose real parts are non-negative and the system is unstable.
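A minimal implementation of this tabular procedure is sketched below. The function names, the epsilon replacement of a lone zero pivot (the same trick used in a later example), and the test polynomial are illustrative assumptions; a row that becomes entirely zero (the marginal-stability case discussed further below) is not handled.

def routh_array(coeffs, eps=1e-9):
    """Build the Routh table for a polynomial given in descending powers."""
    coeffs = [float(c) for c in coeffs]
    n = len(coeffs) - 1                     # polynomial degree, table has n + 1 rows
    width = (n // 2) + 1
    rows = [coeffs[0::2], coeffs[1::2]]
    for r in rows:
        r.extend([0.0] * (width - len(r)))  # pad rows with zeros
    for i in range(2, n + 1):
        prev, prev2 = rows[i - 1], rows[i - 2]
        if prev[0] == 0:
            prev[0] = eps                   # 'epsilon method' for a lone zero pivot
        row = [(prev[0] * prev2[j + 1] - prev2[0] * prev[j + 1]) / prev[0]
               for j in range(width - 1)] + [0.0]
        rows.append(row)
    return rows

def unstable_root_count(coeffs):
    """Count sign changes in the first column of the Routh table."""
    first_col = [row[0] for row in routh_array(coeffs)]
    return sum(1 for a, b in zip(first_col, first_col[1:]) if a * b < 0)

# s^4 + 2s^3 + 3s^2 + 4s + 5 is a classic unstable example with two right-half-plane roots.
print(unstable_root_count([1, 2, 3, 4, 5]))   # -> 2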
The characteristic equation of an example servo system is given by:
For which we have the following table:
For stability, all the elements in the first column of the Routh array must be positive. The conditions that must be satisfied for stability of the given system are as follows:
We see that if
then
is satisfied.
Another example is:
We have the following table:
There are two sign changes. The system is unstable, since it has two right-half-plane poles and two left-half-plane poles. The system cannot have jω poles since a row of zeros did not appear in the Routh table.
For the case
We have the following table, in which a zero appears in the first column and prevents further calculation steps:
We replace the 0 by a small positive number ε and we have the table
When we let ε tend to 0, there are two sign changes.
The system is unstable, since it has two right-half-plane poles and two left-half-plane poles.
Sometimes the presence of poles on the imaginary axis creates a situation of marginal stability. In that case the coefficients of the "Routh array" in a whole row become zero, and thus further solution of the polynomial for finding changes in sign is not possible. Then another approach comes into play: the row of the polynomial which is just above the row containing the zeroes is called the "auxiliary polynomial".
We have the following table:
In such a case the auxiliary polynomial is the polynomial formed from that row, which is again equal to zero. The next step is to differentiate the above equation, which yields a new polynomial. The coefficients of the row containing the zeroes now become "8" and "24". The process of the Routh array then proceeds using these values, which yield two points on the imaginary axis. These two points on the imaginary axis are the prime cause of marginal stability.
See also
Control engineering
Derivation of the Routh array
Nyquist stability criterion
Routh–Hurwitz theorem
Root locus
Transfer function
Liénard–Chipart criterion (variant requiring fewer computations)
Kharitonov's theorem (variant for unknown coefficients bounded within intervals)
Jury stability criterion (analog for discrete-time LTI systems)
Bistritz stability criterion (analog for discrete-time LTI systems)
References
Felix Gantmacher (J.L. Brenner translator) (1959). Applications of the Theory of Matrices, pp 177–80, New York: Interscience.
Stephen Barnett (1983). Polynomials and Linear Control Systems, New York: Marcel Dekker, Inc.
External links
A MATLAB script implementing the Routh-Hurwitz test
Online implementation of the Routh-Hurwitz Criterion
Stability theory
Electronic feedback
Electronic amplifiers
Signal processing
Polynomials | Routh–Hurwitz stability criterion | [
"Mathematics",
"Technology",
"Engineering"
] | 1,803 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Polynomials",
"Stability theory",
"Electronic amplifiers",
"Amplifiers",
"Algebra",
"Dynamical systems"
] |
1,705,627 | https://en.wikipedia.org/wiki/Liquidmetal | Liquidmetal and Vitreloy are commercial names of a series of amorphous metal alloys developed by a California Institute of Technology (Caltech) research team and marketed by Liquidmetal Technologies. Liquidmetal alloys combine a number of desirable material features, including high tensile strength, excellent corrosion resistance, very high coefficient of restitution and excellent anti-wearing characteristics, while also being able to be heat-formed in processes similar to thermoplastics. Despite the name, they are not liquid at room temperature.
Liquidmetal was introduced for commercial applications in 2003. It is used for, among other things, golf clubs, watches, and covers of cell phones.
The alloy was the result of a research program into amorphous metals carried out at Caltech. It was the first of a series of experimental alloys that could achieve an amorphous structure at relatively slow cooling rates. Amorphous metals had been made before, but only in small batches because cooling rates needed to be in the millions of degrees per second. For example, amorphous wires could be fabricated by splat quenching a stream of molten metal on a spinning disk. Because Vitreloy allowed such slow cooling rates, production of larger batch sizes was possible. More recently, a number of additional alloys have been added to the Liquidmetal portfolio. These alloys also retain their amorphous structure after repeated re-heating, allowing them to be used in a wide variety of traditional machining processes.
Characteristics
Liquidmetal alloys, created by Dr. Atakan Peker, contain atoms of significantly different sizes. They form a dense mix with low free volume. Unlike crystalline metals, there is no obvious melting point at which viscosity drops suddenly. Vitreloy behaves more like other glasses, in that its viscosity drops gradually with increased temperature. At high temperature, it behaves in a plastic manner, allowing the mechanical properties to be controlled relatively easily during casting. The viscosity prevents the atoms from moving enough to form an ordered lattice, so the material retains its amorphous properties even after being heat-formed.
The alloys have relatively low softening temperatures, allowing casting of complex shapes without needing finishing. The material properties immediately after casting are much better than those of conventional metals; usually, cast metals have worse properties than forged or wrought ones. The alloys are also malleable at low temperatures ( for the earliest formulation), and can be molded. The low free volume also results in low shrinkage during cooling. For all of these reasons, Liquidmetal can be formed into complex shapes using processes similar to thermoplastics, which makes Liquidmetal a potential replacement for many applications where plastics would normally be used.
Due to their non-crystalline (amorphous) structures, Liquidmetals are harder than alloys of titanium or aluminum of similar composition. The zirconium and titanium based Liquidmetal alloys achieved yield strength of over 1723 MPa, nearly twice the strength of conventional crystalline titanium alloys ( is ~830 MPa), and about the strength of high-strength steels and some highly engineered bulk composite materials (see tensile strength for a list of common materials). However, the early casting methods introduced microscopic flaws that were excellent sites for crack propagation which led to Vitreloy being fragile like glass. Although strong, these early batches shattered easily when struck. Newer casting methods, adjustments of the alloy mixtures and other changes have improved this.
The lack of grain boundaries contributes to the high yield strength (and thereby resilience) exhibited. In a demonstration, a metal sphere dropped on amorphous steel bounced significantly longer than the same metal sphere dropped on crystalline steel.
The lack of grain boundaries in a metallic glass eliminates grain-boundary corrosion—a common problem in high-strength alloys produced by precipitation hardening and sensitized stainless steels. Liquidmetal alloys are therefore generally more corrosion resistant, both due to the mechanical structure as well as the elements used in its alloy. The combination of mechanical hardness, high elasticity and corrosion resistance makes Liquidmetal wear resistant.
Although at high temperatures, plastic deformation occurs easily, almost none occurs at room temperature before the onset of catastrophic failure. This limits the material's applicability in reliability-critical applications, as the impending failure is not evident. The material is also susceptible to metal fatigue with crack growth. A two-phase composite structure with amorphous matrix and a ductile dendritic crystalline-phase reinforcement, or a metal matrix composite reinforced with fibers of other material can reduce or eliminate this disadvantage.
Uses
Liquidmetal combines a number of features that are normally not found in any one material. This makes them useful in a wide variety of applications.
One of the first commercial uses of Liquidmetal was in golf clubs made by the company, where the highly elastic metal was used in portions of the club face. These were highly rated by users, but the product was later dropped, in part because the prototypes shattered after fewer than 40 hits. Since then, Liquidmetal has appeared in other sports equipment, including the cores of golf balls, skis, baseball and softball bats, and tennis racquets.
The ability to be cast and molded, combined with high wear resistance, has also led to Liquidmetal being used as a replacement for plastics in some applications. It has been used on the casing of late-model SanDisk "Cruzer Titanium" USB flash drives as well as their Sansa line of flash-based MP3 players, and casings of some mobile phones, like the luxury Vertu products, and other toughened consumer electronics. Liquidmetal was used in the Biolase dental laser Ilase and the Socketmobile ring bar code scanner. Liquidmetal has also notably been used for making the SIM ejector tool of some iPhone 3Gs made by Apple Inc., shipped in the US. This was done by Apple as an exercise to test the viability of using the metal. Liquidmetal parts retain a scratch-free surface longer than competing materials, while still being made in complex shapes. The same qualities lend the material to use as protective coatings for industrial machinery, including petroleum drill pipes and power plant boiler tubes.
It also replaces titanium in applications ranging from medical instruments and cars to the military and aerospace industry. In military applications, amorphous metals could replace depleted uranium in kinetic energy penetrators. Plates of Liquidmetal were used in the solar wind ion collector array in the Genesis space probe.
Commercial alloys
A range of zirconium-based alloys have been marketed under this trade name. Some example compositions are listed below, in molar percent:
An early alloy, Vitreloy 1:
Zirconium: 41.2, beryllium: 22.5, titanium: 13.8, copper: 12.5, nickel: 10
A variant, Vitreloy 4 (Vit4):
Zirconium: 46.75, beryllium: 27.5, titanium: 8.25, copper: 7.5, nickel: 10
Vitreloy 105 (Vit105):
Zirconium: 52.5, titanium: 5, copper: 17.9, nickel: 14.6, aluminium: 10
A more recent development (Vitreloy 106a), which forms glass under less rapid cooling:
Zirconium: 58.5, copper: 15.6, nickel: 12.8, aluminium: 10.3, niobium: 2.8
Licensed uses
Apple Inc. acquired a perpetual, exclusive license to use the technologies developed after 2010 in consumer electronics.
The Swatch Group was granted an exclusive license to utilize Liquidmetal alloys developed after 2010 in its timepieces.
References
External links
Official website
Overview of amorphous metals
NASA Spinoff 2004: Amorphous Alloy Surpasses Steel and Titanium
Another NASA article on LiquidMetal
Live Video demonstration showing elastic properties of Liquidmetal
Alloys
Amorphous metals
Apple Inc. industrial design
California Institute of Technology
Products introduced in 2003
The Swatch Group | Liquidmetal | [
"Physics",
"Chemistry"
] | 1,662 | [
"Unsolved problems in physics",
"Amorphous metals",
"Chemical mixtures",
"Alloys",
"Amorphous solids"
] |
81,231 | https://en.wikipedia.org/wiki/Pyrometer | A pyrometer, or radiation thermometer, is a type of remote sensing thermometer used to measure the temperature of distant objects. Various forms of pyrometers have historically existed. In the modern usage, it is a device that from a distance determines the temperature of a surface from the amount of the thermal radiation it emits, a process known as pyrometry, a type of radiometry.
The word pyrometer comes from the Greek word for fire, "πῦρ" (pyr), and meter, meaning to measure. The word pyrometer was originally coined to denote a device capable of measuring the temperature of an object by its incandescence, visible light emitted by a body which is at least red-hot. Infrared thermometers can also measure the temperature of cooler objects, down to room temperature, by detecting their infrared radiation flux. Modern pyrometers are available for a wide range of wavelengths and are generally called radiation thermometers.
Principle
It is based on the principle that the intensity of light received by the observer depends upon the distance of the observer from the source and the temperature of the distant source. A modern pyrometer has an optical system and a detector. The optical system focuses the thermal radiation onto the detector. The output signal of the detector (temperature T) is related to the thermal radiation or irradiance j* of the target object through the Stefan–Boltzmann law, the constant of proportionality σ, called the Stefan–Boltzmann constant, and the emissivity ε of the object:
j* = εσT^4
This output is used to infer the object's temperature from a distance, with no need for the pyrometer to be in thermal contact with the object; most other thermometers (e.g. thermocouples and resistance temperature detectors (RTDs)) are placed in thermal contact with the object and allowed to reach thermal equilibrium.
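In code, inferring temperature from a total-radiation measurement is a direct inversion of this relation. The sketch below assumes an idealized detector reporting the radiated power per unit area in W/m^2 and a known, constant emissivity; the function name and numbers are illustrative only.

SIGMA = 5.670374419e-8   # Stefan–Boltzmann constant, W m^-2 K^-4

def brightness_to_temperature(j, emissivity=1.0):
    """Invert j = emissivity * SIGMA * T**4 for T (idealized total-radiation pyrometer)."""
    return (j / (emissivity * SIGMA)) ** 0.25

# A grey body with emissivity 0.8 radiating about 45 kW/m^2 is near 1000 K.
print(round(brightness_to_temperature(45.4e3, 0.8)))   # -> 1000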
Pyrometry of gases presents difficulties. These are most commonly overcome by using thin-filament pyrometry or soot pyrometry. Both techniques involve small solids in contact with hot gases.
History
The term "pyrometer" was coined in the 1730s by Pieter van Musschenbroek, better known as the inventor of the Leyden jar. His device, of which no surviving specimens are known, may be now called a dilatometer because it measured the dilation of a metal rod.
The earliest example of a pyrometer thought to be in existence is the Hindley Pyrometer held by the London Science Museum, dating from 1752, produced for the Royal collection. The pyrometer was a well known enough instrument that it was described in some detail by the mathematician Euler in 1760.
Around 1782 potter Josiah Wedgwood invented a different type of pyrometer (or rather a pyrometric device) to measure the temperature in his kilns, which first compared the color of clay fired at known temperatures, but was eventually upgraded to measuring the shrinkage of pieces of clay, which depended on kiln temperature (see Wedgwood scale for details). Later examples used the expansion of a metal bar.
In the 1860s–1870s brothers William and Werner Siemens developed a platinum resistance thermometer, initially to measure temperature in undersea cables, but then adapted for measuring temperatures in metallurgy up to 1000 °C, hence deserving a name of a pyrometer.
Around 1890 Henry Louis Le Chatelier developed the thermoelectric pyrometer.
The first disappearing-filament pyrometer was built by L. Holborn and F. Kurlbaum in 1901. This device had a thin electrical filament between an observer's eye and an incandescent object. The current through the filament was adjusted until it was of the same colour (and hence temperature) as the object, and no longer visible; it was calibrated to allow temperature to be inferred from the current.
The temperature returned by the vanishing-filament pyrometer and others of its kind, called brightness pyrometers, is dependent on the emissivity of the object. With greater use of brightness pyrometers, it became obvious that problems existed with relying on knowledge of the value of emissivity. Emissivity was found to change, often drastically, with surface roughness, bulk and surface composition, and even the temperature itself.
To get around these difficulties, the ratio or two-color pyrometer was developed. They rely on the fact that Planck's law, which relates temperature to the intensity of radiation emitted at individual wavelengths, can be solved for temperature if the ratio of the intensities at two different wavelengths is taken. This solution assumes that the emissivity is the same at both wavelengths and cancels out in the division. This is known as the gray-body assumption. Ratio pyrometers are essentially two brightness pyrometers in a single instrument. The operational principles of the ratio pyrometers were developed in the 1920s and 1930s, and they were commercially available in 1939.
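Under the gray-body assumption the ratio can be inverted in closed form if Wien's approximation to Planck's law is used (an extra simplification made here; a real instrument relies on calibration). The sketch below first simulates the two spectral signals at a known temperature and then recovers that temperature from their ratio, showing that a constant emissivity cancels; the wavelengths, names and values are illustrative assumptions.

import math

C2 = 1.438777e-2   # second radiation constant, m*K

def wien_radiance(wavelength, T, emissivity=1.0):
    """Spectral radiance in Wien's approximation (common constant prefactor omitted)."""
    return emissivity * wavelength ** -5 * math.exp(-C2 / (wavelength * T))

def ratio_temperature(r, lam1, lam2):
    """Recover T from r = I(lam1)/I(lam2), assuming equal emissivity at both wavelengths."""
    return C2 * (1.0 / lam2 - 1.0 / lam1) / math.log(r * (lam1 / lam2) ** 5)

lam1, lam2, T_true = 0.65e-6, 0.90e-6, 1800.0   # metres, kelvin (illustrative values)
r = wien_radiance(lam1, T_true, 0.4) / wien_radiance(lam2, T_true, 0.4)
print(round(ratio_temperature(r, lam1, lam2)))   # -> 1800, independent of the emissivity 0.4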
As the ratio pyrometer came into popular use, it was determined that many materials, of which metals are an example, do not have the same emissivity at two wavelengths. For these materials, the emissivity does not cancel out, and the temperature measurement is in error. The amount of error depends on the emissivities and the wavelengths where the measurements are taken. Two-color ratio pyrometers cannot measure whether a material's emissivity is wavelength-dependent.
To more accurately measure the temperature of real objects with unknown or changing emissivities, multiwavelength pyrometers were envisioned at the US National Institute of Standards and Technology and described in 1992. Multiwavelength pyrometers use three or more wavelengths and mathematical manipulation of the results to attempt to achieve accurate temperature measurement even when the emissivity is unknown, changing or differs according to wavelength of measurement.
Applications
Pyrometers are suited especially to the measurement of moving objects or any surfaces that cannot be reached or cannot be touched. Contemporary multispectral pyrometers are suitable for measuring high temperatures inside combustion chambers of gas turbine engines with high accuracy.
Temperature is a fundamental parameter in metallurgical furnace operations. Reliable and continuous measurement of the metal temperature is essential for effective control of the operation. Smelting rates can be maximized, slag can be produced at the optimal temperature, fuel consumption is minimized and refractory life may also be lengthened. Thermocouples were the traditional devices used for this purpose, but they are unsuitable for continuous measurement because they melt and degrade.
Salt bath furnaces operate at temperatures up to 1300 °C and are used for heat treatment. At very high working temperatures with intense heat transfer between the molten salt and the steel being treated, precision is maintained by measuring the temperature of the molten salt. Most errors are caused by slag on the surface, which is cooler than the salt bath.
The tuyère pyrometer is an optical instrument for temperature measurement through the tuyeres, which are normally used for feeding air or reactants into the bath of the furnace.
A steam boiler may be fitted with a pyrometer to measure the steam temperature in the superheater.
A hot air balloon is equipped with a pyrometer for measuring the temperature at the top of the envelope in order to prevent overheating of the fabric.
Pyrometers may be fitted to experimental gas turbine engines to measure the surface temperature of turbine blades. Such pyrometers can be paired with a tachometer to tie the pyrometer output with the position of an individual turbine blade. Timing combined with a radial position encoder allows engineers to determine the temperature at exact points on blades moving past the probe.
See also
Aethrioscope
Tasimeter
Thermography
References
External links
The tuyère pyrometer patent
Infrared and radiation pyrometers
A multiwavelength pyrometer patent
Optical Pyrometer
Radiometry
Thermometers
Combustion
Measuring instruments
Metallurgical processes
Infrared imaging
English inventions
French inventions
18th-century inventions | Pyrometer | [
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 1,688 | [
"Telecommunications engineering",
"Metallurgical processes",
"Metallurgy",
"Measuring instruments",
"Combustion",
"Thermometers",
"Radiometry"
] |
81,511 | https://en.wikipedia.org/wiki/Stall%20%28fluid%20dynamics%29 | In fluid dynamics, a stall is a reduction in the lift coefficient generated by a foil as angle of attack exceeds its critical value. The critical angle of attack is typically about 15°, but it may vary significantly depending on the fluid, foil – including its shape, size, and finish – and Reynolds number.
Stalls in fixed-wing aircraft are often experienced as a sudden reduction in lift. It may be caused either by the pilot increasing the wing's angle of attack or by a decrease in the critical angle of attack. The latter may be due to slowing down (below stall speed) or the accretion of ice on the wings (especially if the ice is rough). A stall does not mean that the engine(s) have stopped working, or that the aircraft has stopped moving—the effect is the same even in an unpowered glider aircraft. Vectored thrust in aircraft is used to maintain altitude or controlled flight with wings stalled by replacing lost wing lift with engine or propeller thrust, thereby giving rise to post-stall technology.
Because stalls are most commonly discussed in connection with aviation, this article discusses stalls as they relate mainly to aircraft, in particular fixed-wing aircraft. The principles of stall discussed here translate to foils in other fluids as well.
Formal definition
A stall is a condition in aerodynamics and aviation such that if the angle of attack on an aircraft increases beyond a certain point, then lift begins to decrease. The angle at which this occurs is called the critical angle of attack. If the angle of attack increases beyond the critical value, the lift decreases and the aircraft descends, further increasing the angle of attack and causing further loss of lift. The critical angle of attack is dependent upon the airfoil section or profile of the wing, its planform, its aspect ratio, and other factors, but is typically in the range of 8 to 20 degrees relative to the incoming wind (relative wind) for most subsonic airfoils. The critical angle of attack is the angle of attack on the lift coefficient versus angle-of-attack (Cl~alpha) curve at which the maximum lift coefficient occurs.
Stalling is caused by flow separation which, in turn, is caused by the air flowing against a rising pressure. Whitford describes three types of stall: trailing-edge, leading-edge and thin-aerofoil, each with distinctive Cl~alpha features. For the trailing-edge stall, separation begins at small angles of attack near the trailing edge of the wing while the rest of the flow over the wing remains attached. As angle of attack increases, the separated regions on the top of the wing increase in size as the flow separation moves forward, and this hinders the ability of the wing to create lift. This is shown by the reduction in lift-slope on a Cl~alpha curve as the lift nears its maximum value. The separated flow usually causes buffeting. Beyond the critical angle of attack, separated flow is so dominant that additional increases in angle of attack cause the lift to fall from its peak value.
Piston-engined and early jet transports had very good stall behaviour with pre-stall buffet warning and, if ignored, a straight nose-drop for a natural recovery. Wing developments that came with the introduction of turbo-prop engines introduced unacceptable stall behaviour. Leading-edge developments on high-lift wings, and the introduction of rear-mounted engines and high-set tailplanes on the next generation of jet transports, also introduced unacceptable stall behaviour. The probability of achieving the stall speed inadvertently, a potentially hazardous event, had been calculated, in 1965, at about once in every 100,000 flights, often enough to justify the cost of development of warning devices, such as stick shakers, and devices to automatically provide an adequate nose-down pitch, such as stick pushers.
When the mean angle of attack of the wings is beyond the stall a spin, which is an autorotation of a stalled wing, may develop. A spin follows departures in roll, yaw and pitch from balanced flight. For example, a roll is naturally damped with an unstalled wing, but with wings stalled the damping moment is replaced with a propelling moment.
Variation of lift with angle of attack
The graph shows that the greatest amount of lift is produced as the critical angle of attack is reached (which in early-20th century aviation was called the "burble point"). This angle is 17.5 degrees in this case, but it varies from airfoil to airfoil. In particular, for aerodynamically thick airfoils (thickness to chord ratios of around 10%), the critical angle is higher than with a thin airfoil of the same camber. Symmetric airfoils have lower critical angles (but also work efficiently in inverted flight). The graph shows that, as the angle of attack exceeds the critical angle, the lift produced by the airfoil decreases.
The information in a graph of this kind is gathered using a model of the airfoil in a wind tunnel. Because aircraft models are normally used, rather than full-size machines, special care is needed to make sure that data is taken in the same Reynolds number regime (or scale speed) as in free flight. The separation of flow from the upper wing surface at high angles of attack is quite different at low Reynolds number from that at the high Reynolds numbers of real aircraft. In particular at high Reynolds numbers the flow tends to stay attached to the airfoil for longer because the inertial forces are dominant with respect to the viscous forces which are responsible for the flow separation ultimately leading to the aerodynamic stall. For this reason wind tunnel results carried out at lower speeds and on smaller scale models of the real life counterparts often tend to overestimate the aerodynamic stall angle of attack. High-pressure wind tunnels are one solution to this problem.
In general, steady operation of an aircraft at an angle of attack above the critical angle is not possible because, after exceeding the critical angle, the loss of lift from the wing causes the nose of the aircraft to fall, reducing the angle of attack again. This nose drop, independent of control inputs, indicates the pilot has actually stalled the aircraft.
This graph shows the stall angle, yet in practice most pilot operating handbooks (POH) or generic flight manuals describe stalling in terms of airspeed. This is because all aircraft are equipped with an airspeed indicator, but fewer aircraft have an angle of attack indicator. An aircraft's stalling speed is published by the manufacturer (and is required for certification by flight testing) for a range of weights and flap positions, but the stalling angle of attack is not published.
As speed reduces, angle of attack has to increase to keep lift constant until the critical angle is reached. The airspeed at which this angle is reached is the (1g, unaccelerated) stalling speed of the aircraft in that particular configuration. Deploying flaps/slats decreases the stall speed to allow the aircraft to take off and land at a lower speed.
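The dependence of the 1g stall speed on weight and configuration follows from the standard lift equation L = ½ρV²SC_L, with L equal to the weight and C_L equal to C_Lmax at the stall. The sketch below is illustrative only; the aircraft numbers are assumptions, not data from any flight manual.

import math

RHO_SEA_LEVEL = 1.225   # air density, kg/m^3 (ISA sea level)

def stall_speed(weight_n, wing_area_m2, cl_max, rho=RHO_SEA_LEVEL):
    """1g stall speed in m/s from L = 0.5*rho*V^2*S*CL, with L = W and CL = CL_max."""
    return math.sqrt(2.0 * weight_n / (rho * wing_area_m2 * cl_max))

# Illustrative light aircraft: 1000 kg, 16 m^2 wing, CL_max of 1.6 clean vs 2.1 with flaps.
w = 1000 * 9.81
print(round(stall_speed(w, 16.0, 1.6), 1))   # clean configuration, ~25 m/s
print(round(stall_speed(w, 16.0, 2.1), 1))   # flaps extended: lower stall speed, ~21.8 m/s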
Aerodynamic description
Fixed-wing aircraft
A fixed-wing aircraft can be made to stall in any pitch attitude or bank angle or at any airspeed but deliberate stalling is commonly practiced by reducing the speed to the unaccelerated stall speed, at a safe altitude. Unaccelerated (1g) stall speed varies on different fixed-wing aircraft and is represented by colour codes on the airspeed indicator. As the plane flies at this speed, the angle of attack must be increased to prevent any loss of altitude or gain in airspeed (which corresponds to the stall angle described above). The pilot will notice the flight controls have become less responsive and may also notice some buffeting, a result of the turbulent air separated from the wing hitting the tail of the aircraft.
In most light aircraft, as the stall is reached, the aircraft will start to descend (because the wing is no longer producing enough lift to support the aircraft's weight) and the nose will pitch down. Recovery from the stall involves lowering the aircraft nose, to decrease the angle of attack and increase the air speed, until smooth air-flow over the wing is restored. Normal flight can be resumed once recovery is complete. The maneuver is normally quite safe, and, if correctly handled, leads to only a small loss in altitude (). It is taught and practised in order for pilots to recognize, avoid, and recover from stalling the aircraft. A pilot is required to demonstrate competency in controlling an aircraft during and after a stall for certification in the United States, and it is a routine maneuver for pilots when getting to know the handling of an unfamiliar aircraft type. The only dangerous aspect of a stall is a lack of altitude for recovery.
A special form of asymmetric stall in which the aircraft also rotates about its yaw axis is called a spin. A spin can occur if an aircraft is stalled and there is an asymmetric yawing moment applied to it. This yawing moment can be aerodynamic (sideslip angle, rudder, adverse yaw from the ailerons), thrust related (p-factor, one engine inoperative on a multi-engine non-centreline thrust aircraft), or from less likely sources such as severe turbulence. The net effect is that one wing is stalled before the other and the aircraft descends rapidly while rotating, and some aircraft cannot recover from this condition without correct pilot control inputs (which must stop yaw) and loading. A new solution to the problem of difficult (or impossible) stall-spin recovery is provided by the ballistic parachute recovery system.
The most common stall-spin scenarios occur on takeoff (departure stall) and during landing (base to final turn) because of insufficient airspeed during these maneuvers. Stalls also occur during a go-around manoeuvre if the pilot does not properly respond to the out-of-trim situation resulting from the transition from low power setting to high power setting at low speed. Stall speed is increased when the wing surfaces are contaminated with ice or frost creating a rougher surface, and heavier airframe due to ice accumulation.
Stalls occur not only at slow airspeed, but at any speed when the wings exceed their critical angle of attack. Attempting to increase the angle of attack at 1g by moving the control column back normally causes the aircraft to climb. However, aircraft often experience higher g-forces, such as when turning steeply or pulling out of a dive. In these cases, the wings are already operating at a higher angle of attack to create the necessary force (derived from lift) to accelerate in the desired direction. Increasing the g-loading still further, by pulling back on the controls, can cause the stalling angle to be exceeded, even though the aircraft is flying at a high speed. These "high-speed stalls" produce the same buffeting characteristics as 1g stalls and can also initiate a spin if there is also any yawing.
Characteristics
Different aircraft types have different stalling characteristics, but they only have to be good enough to satisfy their particular Airworthiness authority. For example, the Short Belfast heavy freighter had a marginal nose drop which was acceptable to the Royal Air Force. When the aircraft were sold to a civil operator, they had to be fitted with a stick pusher to meet the civil requirements. Some aircraft may naturally have very good behaviour well beyond what is required. For example, first-generation jet transports have been described as having an immaculate nose drop at the stall. Loss of lift on one wing is acceptable as long as the roll, including during stall recovery, does not exceed about 20 degrees, or, in turning flight, the roll does not exceed 90 degrees of bank. If pre-stall warning followed by nose drop and limited wing drop are naturally not present, or are deemed to be unacceptably marginal by an Airworthiness authority, the stalling behaviour has to be made good enough with airframe modifications or devices such as a stick shaker and pusher. These are described in "Warning and safety devices".
Stall speeds
Stalls depend only on angle of attack, not airspeed. However, the slower an aircraft flies, the greater the angle of attack it needs to produce lift equal to the aircraft's weight. As the speed decreases further, at some point this angle will be equal to the critical (stall) angle of attack. This speed is called the "stall speed". An aircraft flying at its stall speed cannot climb, and an aircraft flying below its stall speed cannot stop descending. Any attempt to do so by increasing angle of attack, without first increasing airspeed, will result in a stall.
The actual stall speed will vary depending on the airplane's weight, altitude, configuration, and vertical and lateral acceleration. Propeller slipstream reduces the stall speed by energizing the flow over the wings.
Speed definitions vary and include:
VS: Stall speed: the speed at which the airplane exhibits those qualities accepted as defining the stall.
VS0: The stall speed or minimum steady flight speed in landing configuration. The zero-thrust stall speed at the most extended landing flap setting.
VS1: The stall speed or minimum steady flight speed obtained in a specified configuration. The zero thrust stall speed at a specified flap setting.
An airspeed indicator, for the purpose of flight-testing, may have the following markings: the bottom of the white arc indicates VS0 at maximum weight, while the bottom of the green arc indicates VS1 at maximum weight. While an aircraft's VS speed is computed by design, its VS0 and VS1 speeds must be demonstrated empirically by flight testing.
In accelerated and turning flight
The normal stall speed, specified by the VS values above, always refers to straight and level flight, where the load factor is equal to 1g. However, if the aircraft is turning or pulling up from a dive, additional lift is required to provide the vertical or lateral acceleration, and so the stall speed is higher. An accelerated stall is a stall that occurs under such conditions.
In a banked turn, the lift required is equal to the weight of the aircraft plus extra lift to provide the centripetal force necessary to perform the turn:
L = n W
where:
L = lift
n = load factor (greater than 1 in a turn)
W = weight of the aircraft
To achieve the extra lift, the lift coefficient, and so the angle of attack, will have to be higher than it would be in straight and level flight at the same speed. Therefore, given that the stall always occurs at the same critical angle of attack, by increasing the load factor (e.g. by tightening the turn) the critical angle will be reached at a higher airspeed:
Vst = Vs √n
where:
Vst = stall speed
Vs = stall speed of the aircraft in straight, level flight
n = load factor
The table that follows gives some examples of the relation between the angle of bank and the square root of the load factor. It derives from the trigonometric relation (secant) n = 1/cos θ between the load factor n and the bank angle θ.
{| class="wikitable" style="text-align:center;"
|-
! Bank angle
! √n
|-
| 30°
| 1.07
|-
| 45°
| 1.19
|-
| 60°
| 1.41
|}
For example, in a turn with bank angle of 45°, Vst is 19% higher than Vs.
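The same secant relation is easy to evaluate directly. The short sketch below reproduces the factors in the table above and applies them to an assumed 1g stall speed; the function name and numbers are illustrative only.

import math

def turn_stall_speed(vs_level, bank_deg):
    """Stall speed in a level, coordinated banked turn: Vst = Vs * sqrt(n), with n = 1/cos(bank)."""
    n = 1.0 / math.cos(math.radians(bank_deg))
    return vs_level * math.sqrt(n)

for bank in (30, 45, 60):
    print(bank, round(turn_stall_speed(1.0, bank), 2))   # factors 1.07, 1.19, 1.41

# With an assumed 1g stall speed of 50 kt, a 60-degree bank raises the stall speed to about 71 kt.
print(round(turn_stall_speed(50.0, 60.0), 1))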
According to Federal Aviation Administration (FAA) terminology, the above example illustrates a so-called turning flight stall, while the term accelerated is used to indicate an accelerated turning stall only, that is, a turning flight stall where the airspeed decreases at a given rate.
The tendency of powerful propeller aircraft to roll in reaction to engine torque creates a risk of accelerated stalls. When an aircraft such as a Mitsubishi MU-2 is flying close to its stall speed, the sudden application of full power may cause it to roll, creating the same aerodynamic conditions that induce an accelerated stall in turning flight even if the pilot did not deliberately initiate a turn. Pilots of such aircraft are trained to avoid sudden and drastic increases in power at low altitude and low airspeed as it may be difficult to recover from an accelerated stall under these conditions.
A notable example of an air accident involving a low-altitude turning flight stall is the 1994 Fairchild Air Force Base B-52 crash.
Types
Dynamic stall
Dynamic stall is a non-linear unsteady aerodynamic effect that occurs when airfoils rapidly change the angle of attack. The rapid change can cause a strong vortex to be shed from the leading edge of the aerofoil, and travel backwards above the wing. The vortex, containing high-velocity airflows, briefly increases the lift produced by the wing. As soon as it passes behind the trailing edge, however, the lift reduces dramatically, and the wing is in normal stall.
Dynamic stall is an effect most associated with helicopters and flapping wings, though also occurs in wind turbines, and due to gusting airflow. During forward flight, some regions of a helicopter blade may incur flow that reverses (compared to the direction of blade movement), and thus includes rapidly changing angles of attack. Oscillating (flapping) wings, such as those of insects like the bumblebee—may rely almost entirely on dynamic stall for lift production, provided the oscillations are fast compared to the speed of flight, and the angle of the wing changes rapidly compared to airflow direction.
Stall delay can occur on airfoils subject to a high angle of attack and a three-dimensional flow. When the angle of attack on an airfoil is increasing rapidly, the flow will remain substantially attached to the airfoil to a significantly higher angle of attack than can be achieved in steady-state conditions. As a result, the stall is delayed momentarily and a lift coefficient significantly higher than the steady-state maximum is achieved. The effect was first noticed on propellers.
Deep stall
A deep stall (or super-stall) is a dangerous type of stall that affects certain aircraft designs, notably jet aircraft with a T-tail configuration and rear-mounted engines. In these designs, the turbulent wake of a stalled main wing, nacelle-pylon wakes and the wake from the fuselage "blanket" the horizontal stabilizer, rendering the elevators ineffective and preventing the aircraft from recovering from the stall. Aircraft with rear-mounted nacelles may also exhibit a loss of thrust. T-tail propeller aircraft are generally resistant to deep stalls, because the prop wash increases airflow over the wing root, but may be fitted with a precautionary vertical tail booster during flight testing, as happened with the A400M.
Trubshaw gives a broad definition of deep stall as penetrating to such angles of attack that pitch control effectiveness is reduced by the wing and nacelle wakes. He also gives a definition that relates deep stall to a locked-in condition where recovery is impossible. This is a single value of angle of attack, for a given aircraft configuration, where there is no pitching moment, i.e. a trim point.
Typical values both for the range of deep stall, as defined above, and the locked-in trim point are given for the Douglas DC-9 Series 10 by Schaufele. These values are from wind-tunnel tests for an early design. The final design had no locked-in trim point, so recovery from the deep stall region was possible, as required to meet certification rules. Normal stall beginning at the "g break" (sudden decrease of the vertical load factor) was at , deep stall started at about 30°, and the locked-in unrecoverable trim point was at 47°.
The very high angle of attack for a deep stall locked-in condition occurs well beyond the normal stall but can be attained very rapidly, as the aircraft is unstable beyond the normal stall and requires immediate action to arrest it. The loss of lift causes high sink rates, which, together with the low forward speed at the normal stall, give a high angle of attack with little or no rotation of the aircraft. BAC 1-11 G-ASHG, during stall flight tests before the type was modified to prevent a locked-in deep-stall condition, descended at over and struck the ground in a flat attitude moving only forward after initial impact. Sketches showing how the wing wake blankets the tail may be misleading if they imply that deep stall requires a high body angle. Taylor and Ray show how the aircraft attitude in the deep stall is relatively flat, even less than during the normal stall, with very high negative flight-path angles.
Effects similar to deep stall had been known to occur on some aircraft designs before the term was coined. A prototype Gloster Javelin (serial WD808) was lost in a crash on 11 June 1953 to a "locked-in" stall. However, Waterton states that the trimming tailplane was found to be the wrong way for recovery. Low-speed handling tests were being done to assess a new wing. Handley Page Victor XL159 was lost to a "stable stall" on 23 March 1962. It had been clearing the fixed droop leading edge with the test being stall approach, landing configuration, C of G aft. The brake parachute had not been streamed, as it may have hindered rear crew escape.
The name "deep stall" first came into widespread use after the crash of the prototype BAC 1-11 G-ASHG on 22 October 1963, which killed its crew. This led to changes to the aircraft, including the installation of a stick shaker (see below) to clearly warn the pilot of an impending stall. Stick shakers are now a standard part of commercial airliners. Nevertheless, the problem continues to cause accidents; on 3 June 1966, a Hawker Siddeley Trident (G-ARPY), was lost to deep stall; deep stall is suspected to be cause of another Trident (the British European Airways Flight 548 G-ARPI) crash – known as the "Staines Disaster" – on 18 June 1972, when the crew failed to notice the conditions and had disabled the stall-recovery system. On 3 April 1980, a prototype of the Canadair Challenger business jet crashed after initially entering a deep stall from 17,000 ft and having both engines flame-out. It recovered from the deep stall after deploying the anti-spin parachute but crashed after being unable to jettison the chute or relight the engines. One of the test pilots was unable to escape from the aircraft in time and was killed. On 26 July 1993, a Canadair CRJ-100 was lost in flight testing due to a deep stall. It has been reported that a Boeing 727 entered a deep stall in a flight test, but the pilot was able to rock the airplane to increasingly higher bank angles until the nose finally fell through and normal control response was recovered. The crash of West Caribbean Airways Flight 708 in 2005 was also attributed to a deep stall.
Deep stalls can occur at apparently normal pitch attitudes, if the aircraft is descending quickly enough. The airflow is coming from below, so the angle of attack is increased. Early speculation on reasons for the crash of Air France Flight 447 blamed an unrecoverable deep stall, since it descended in an almost flat attitude (15°) at an angle of attack of 35° or more. However, it was held in a stalled glide by the pilots, who held the nose up amid all the confusion of what was actually happening to the aircraft.
Canard-configured aircraft are also at risk of getting into a deep stall. Two Velocity aircraft crashed due to locked-in deep stalls. Testing revealed that the addition of leading-edge cuffs to the outboard wing prevented the aircraft from getting into a deep stall. The Piper Advanced Technologies PAT-1, N15PT, another canard-configured aircraft, also crashed in an accident attributed to a deep stall. Wind-tunnel testing of the design at the NASA Langley Research Center showed that it was vulnerable to a deep stall.
In the early 1980s, a Schweizer SGS 1-36 sailplane was modified for NASA's controlled deep-stall flight program.
Tip stall
Wing sweep and taper cause stalling at the tip of a wing before the root. The position of a swept wing along the fuselage has to be such that the lift from the wing root, well forward of the aircraft center of gravity (c.g.), must be balanced by the wing tip, well aft of the c.g. If the tip stalls first the balance of the aircraft is upset causing dangerous nose pitch up. Swept wings have to incorporate features which prevent pitch-up caused by premature tip stall.
A swept wing has a higher lift coefficient on its outer panels than on the inner wing, causing them to reach their maximum lift capability first and to stall first. This is caused by the downwash pattern associated with swept/tapered wings. To delay tip stall the outboard wing is given washout to reduce its angle of attack. The root can also be modified with a suitable leading-edge and airfoil section to make sure it stalls before the tip. However, when taken beyond stalling incidence the tips may still become fully stalled before the inner wing despite initial separation occurring inboard. This causes pitch-up after the stall and entry to a super-stall on those aircraft with super-stall characteristics. Span-wise flow of the boundary layer is also present on swept wings and causes tip stall. The amount of boundary layer air flowing outboard can be reduced by generating vortices with a leading-edge device such as a fence, notch, saw tooth or a set of vortex generators behind the leading edge.
Warning and safety devices
Fixed-wing aircraft can be equipped with devices to prevent or postpone a stall or to make it less (or in some cases more) severe, or to make recovery easier.
An aerodynamic twist can be introduced to the wing with the leading edge near the wing tip twisted downward. This is called washout and causes the wing root to stall before the wing tip. This makes the stall gentle and progressive. Since the stall is delayed at the wing tips, where the ailerons are, roll control is maintained when the stall begins.
A stall strip is a small sharp-edged device that, when attached to the leading edge of a wing, encourages the stall to start there in preference to any other location on the wing. If attached close to the wing root, it makes the stall gentle and progressive; if attached near the wing tip, it encourages the aircraft to drop a wing when stalling.
A stall fence is a flat plate in the direction of the chord to stop separated flow progressing out along the wing
Vortex generators, tiny strips of metal or plastic placed on top of the wing near the leading edge that protrude past the boundary layer into the free stream. As the name implies, they energize the boundary layer by mixing free stream airflow with boundary layer flow, thereby creating vortices; this increases the momentum in the boundary layer. By increasing the momentum of the boundary layer, airflow separation and the resulting stall may be delayed.
An anti-stall strake is a leading edge extension that generates a vortex on the wing upper surface to postpone the stall.
A stick pusher is a mechanical device that prevents the pilot from stalling an aircraft. It pushes the elevator control forward as the stall is approached, causing a reduction in the angle of attack. In generic terms, a stick pusher is known as a stall identification device or stall identification system.
A stick shaker is a mechanical device that shakes the pilot's controls to warn of the onset of stall.
A stall warning is an electronic or mechanical device that sounds an audible warning as the stall speed is approached. The majority of aircraft contain some form of this device that warns the pilot of an impending stall. The simplest such device is a stall warning horn, which consists of either a pressure sensor or a movable metal tab that actuates a switch and produces an audible warning in response.
An angle-of-attack indicator for light aircraft, the "AlphaSystemsAOA" and a nearly identical "lift reserve indicator", are both pressure-differential instruments that display margin above stall and/or angle of attack on an instantaneous, continuous readout. The General Technics CYA-100 displays true angle of attack via a magnetically coupled vane. An AOA indicator provides a visual display of the amount of available lift throughout its slow-speed envelope regardless of the many variables that act upon an aircraft. This indicator is immediately responsive to changes in speed, angle of attack, and wind conditions, and automatically compensates for aircraft weight, altitude, and temperature.
An angle of attack limiter or an "alpha limiter" is a flight computer that automatically prevents pilot input from causing the plane to rise over the stall angle. Some alpha limiters can be disabled by the pilot.
Stall warning systems often involve inputs from a broad range of sensors and systems to include a dedicated angle of attack sensor.
Blockage, damage, or inoperation of stall and angle of attack (AOA) probes can lead to unreliability of the stall warning and cause the stick pusher, overspeed warning, autopilot, and yaw damper to malfunction.
If a forward canard is used for pitch control, rather than an aft tail, the canard is designed to meet the airflow at a slightly greater angle of attack than the wing. Therefore, when the aircraft pitch increases abnormally, the canard will usually stall first, causing the nose to drop and so preventing the wing from reaching its critical AOA. Thus, the risk of main-wing stalling is greatly reduced. However, if the main wing stalls, recovery becomes difficult, as the canard is more deeply stalled, and angle of attack increases rapidly.
If an aft tail is used, the wing is designed to stall before the tail. In this case, the wing can be flown at higher lift coefficient (closer to stall) to produce more overall lift.
Most military combat aircraft have an angle of attack indicator among the pilot's instruments, which lets the pilot know precisely how close to the stall point the aircraft is. Modern airliner instrumentation may also measure angle of attack, although this information may not be directly displayed on the pilot's display, instead driving a stall warning indicator or giving performance information to the flight computer (for fly-by-wire systems).
Flight beyond the stall
As a wing stalls, aileron effectiveness is reduced, rendering the plane difficult to control and increasing the risk of a spin. Post stall, steady flight beyond the stalling angle (where the coefficient of lift is largest) requires engine thrust to replace lift, as well as alternative controls to replace the loss of effectiveness of the ailerons. Short-term stalls at 90–120° (e.g. Pugachev's cobra) are sometimes performed at airshows. The highest angle of attack in sustained flight so far demonstrated was 70° in the X-31 at the Dryden Flight Research Center. Sustained post-stall flight is a type of supermaneuverability.
Spoilers
Except for flight training, airplane testing, and aerobatics, a stall is usually an undesirable event. Spoilers (sometimes called lift dumpers), however, are devices that are intentionally deployed to create a carefully controlled flow separation over part of an aircraft's wing to reduce the lift it generates, increase the drag, and allow the aircraft to descend more rapidly without gaining speed. Spoilers are also deployed asymmetrically (one wing only) to enhance roll control. Spoilers can also be used on aborted take-offs and after main wheel contact on landing to increase the aircraft's weight on its wheels for better braking action.
Unlike powered airplanes, which can control descent by increasing or decreasing thrust, gliders have to increase drag to increase the rate of descent. In high-performance gliders, spoiler deployment is extensively used to control the approach to landing.
Spoilers can also be thought of as "lift reducers" because they reduce the lift of the wing in which the spoiler resides. For example, an uncommanded roll to the left could be reversed by raising the right wing spoiler (or only a few of the spoilers present in large airliner wings). This has the advantage of avoiding the need to increase lift in the wing that is dropping (which may bring that wing closer to stalling).
History
German aviator Otto Lilienthal died while flying in 1896 as the result of a stall. Wilbur Wright encountered stalls for the first time in 1901, while flying his second glider. Awareness of Lilienthal's accident and Wilbur's experience motivated the Wright Brothers to design their plane in a "canard" configuration. This purportedly made recoveries from stalls easier and more gentle. The design allegedly saved the brothers' lives more than once. However, canard configurations, without careful design, can actually make a stall unrecoverable.
The aircraft engineer Juan de la Cierva worked on his "Autogiro" project to develop a rotary wing aircraft which, he hoped, would be unable to stall and which therefore would be safer than aeroplanes. In developing the resulting "autogyro" aircraft, he solved many engineering problems which made the helicopter possible.
See also
Articles
Aviation safety
Coffin corner (aerodynamics)
Compressor stall
Lift coefficient
Spin (flight)
Spoiler (aeronautics)
Wing twist
Notable accidents
1963 BAC One-Eleven test crash
1966 Felthorpe Trident crash
British European Airways Flight 548
China Airlines Flight 140
China Airlines Flight 676
Yeti Airlines Flight 691
Air France Flight 447
Colgan Air Flight 3407
Turkish Airlines Flight 1951
Indonesia AirAsia Flight 8501
West Caribbean Airways Flight 708
2017 Teterboro Learjet crash
Northwest Orient Airlines Flight 6231
Voepass Linhas Aéreas Flight 2283
Notes
References
USAF & NATO Report RTO-TR-015 AC/323/(HFM-015)/TP-1 (2001).
Anderson, J.D., A History of Aerodynamics (1997). Cambridge University Press.
L. J. Clancy (1975), Aerodynamics, Pitman Publishing Limited, London.
Stengel, R. (2004), Flight Dynamics, Princeton University Press.
Aircraft aerodynamics
Aviation risks
Aircraft wing design
Aerial maneuvers
Emergency aircraft operations
Aerospace engineering | Stall (fluid dynamics) | [
"Engineering"
] | 6,985 | [
"Aerospace engineering"
] |
81,580 | https://en.wikipedia.org/wiki/Spider%20silk | Spider silk is a protein fibre or silk spun by spiders. Spiders use silk to make webs or other structures that function as adhesive traps to catch prey, to entangle and restrain prey before biting, to transmit tactile information, or as nests or cocoons to protect their offspring. They can use the silk to suspend themselves from height, to float through the air, or to glide away from predators. Most spiders vary the thickness and adhesiveness of their silk according to its use.
In some cases, spiders may use silk as a food source. While methods have been developed to collect silk from a spider by force, gathering silk from many spiders is more difficult than from silk-spinning organisms such as silkworms.
All spiders produce silk, although some spiders do not make webs. Silk is tied to courtship and mating. Silk produced by females provides a transmission channel for male vibratory courtship signals, while webs and draglines provide a substrate for female sex pheromones. Observations of male spiders producing silk during sexual interactions are common across widespread taxa. The function of male-produced silk in mating has received little study.
Properties
Structural
Silks have a hierarchical structure. The primary structure is the amino acid sequence of their proteins (spidroin), mainly consisting of highly repetitive glycine and alanine blocks, which is why silks are often referred to as block co-polymers. On a secondary level, the short side-chained alanine is mainly found in the crystalline domains (beta sheets) of the nanofibril. Glycine is mostly found in the so-called amorphous matrix consisting of helical and beta turn structures. The interplay between the hard crystalline segments and the strained elastic semi-amorphous regions gives spider silk its extraordinary properties. Various compounds other than protein are used to enhance the fibre's properties. Pyrrolidine has hygroscopic properties that keep the silk moist while warding off ant invasion. It occurs in high concentration in glue threads. Potassium hydrogen phosphate releases hydrogen ions in aqueous solution, resulting in a pH of about 4, making the silk acidic and thus protecting it from fungi and bacteria that would otherwise digest the protein. Potassium nitrate is believed to prevent the protein from denaturing in the acidic milieu.
Termonia introduced the first basic model of silk in 1994, suggesting crystallites embedded in an amorphous matrix interlinked with hydrogen bonds. Refinements to this model include the identification of semi-crystalline regions and a fibrillar skin-core model for spider silk, later visualised by AFM and TEM. The sizes of the nanofibrillar structure and of the crystalline and semi-crystalline regions were revealed by neutron scattering.
The fibres' microstructure and macroscopic mechanical properties are related: in lightly stretched fibres the ordered regions mainly reorient under deformation, while for higher fibre stretching the fraction of ordered regions increases progressively.
Mechanical
Each spider and each type of silk has a set of mechanical properties optimised for their biological function.
Most silks, in particular dragline silk, have exceptional mechanical properties. They exhibit a unique combination of high tensile strength and extensibility (ductility). This enables a silk fibre to absorb a large amount of energy before breaking (toughness, the area under a stress-strain curve).
Strength and toughness are distinct quantities. Weight for weight, silk is stronger than steel, but not as strong as Kevlar. Spider silk is, however, tougher than both.
The variability of spider silk fibre mechanical properties is related to their degree of molecular alignment. Mechanical properties also depend on ambient conditions, i.e. humidity and temperature.
Young's modulus
Young's modulus measures the resistance to elastic deformation along the direction of the tensile force. Unlike steel or Kevlar, which are stiff, spider silk is ductile and elastic, with a lower Young's modulus. According to the Spider Silkome Database, Ariadna lateralis silk has the highest Young's modulus among spider silks, at 37 GPa, compared to 208 GPa for steel and 112 GPa for Kevlar.
Tensile strength
A dragline silk's tensile strength is comparable to that of high-grade alloy steel (450−2000 MPa), and about half as strong as aramid filaments, such as Twaron or Kevlar (3000 MPa). According to Spider Silkome Database, Clubiona vigil silk has the highest tensile strength.
Density
Consisting mainly of protein, silks are about a sixth of the density of steel (1.3 g/cm3). As a result, a strand long enough to circle the Earth would weigh about . (Spider dragline silk has a tensile strength of roughly 1.3 GPa. The tensile strength listed for steel might be slightly higher, e.g. 1.65 GPa, but spider silk is a much less dense material, so that a given weight of spider silk is five times as strong as the same weight of steel.)
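The "five times as strong by weight" comparison above is simply a ratio of tensile strength to density (specific strength). A minimal sketch in Python, using the figures quoted in this section plus an assumed density for steel of roughly 7.8 g/cm3 (the steel density is not given in the text):

# Specific strength = tensile strength / density.
# Silk figures come from this section; the steel density (~7.8 g/cm3)
# is an assumption added for illustration only.
def specific_strength(tensile_strength_gpa, density_g_cm3):
    """Strength per unit density, in GPa·cm3/g."""
    return tensile_strength_gpa / density_g_cm3

silk = specific_strength(1.3, 1.3)    # dragline silk: ~1.3 GPa, ~1.3 g/cm3
steel = specific_strength(1.65, 7.8)  # high-grade steel: ~1.65 GPa, assumed ~7.8 g/cm3
print(round(silk / steel, 1))         # ~4.7, i.e. roughly the "five times" quoted above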
Energy density
The energy density of dragline spider silk is roughly .
Ductility
Silks are ductile, with some able to stretch up to five times their relaxed length without breaking.
Toughness
The combination of strength and ductility gives dragline silks a high toughness (or work to fracture), which "equals that of commercial polyaramid (aromatic nylon) filaments, which themselves are benchmarks of modern polymer fibre technology". According to Spider Silkome Database, Araneus ishisawai silk is the toughest.
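Toughness, as used here, is the area under the stress-strain curve up to the breaking point. A minimal numerical sketch in Python, using made-up, purely illustrative data points (not measurements of any particular silk) and the trapezoidal rule; integrating stress in MPa over dimensionless strain gives energy density directly in MJ/m3:

# Toughness = area under the stress-strain curve (energy absorbed per unit
# volume before failure). The points below are illustrative only.
strain = [0.00, 0.05, 0.10, 0.20, 0.30, 0.40]   # dimensionless
stress = [0, 150, 300, 600, 900, 1100]          # MPa

def toughness_mj_per_m3(strain, stress):
    """Trapezoidal integration of stress over strain (1 MPa x strain = 1 MJ/m3)."""
    area = 0.0
    for i in range(1, len(strain)):
        area += 0.5 * (stress[i] + stress[i - 1]) * (strain[i] - strain[i - 1])
    return area

print(round(toughness_mj_per_m3(strain, stress), 1))  # ~235 MJ/m3 for this illustrative curve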
Elongation at break
Elongation at break compares initial object length to final length at break. According to Spider Silkome Database, Caerostris darwini silk has the highest strain at break for any spider silk, breaking at 65% extension.
Temperature
While unlikely to be relevant in nature, dragline silks can hold their strength below -40 °C (-40 °F) and up to 220 °C (428 °F). As occurs in many materials, spider silk fibres undergo a glass transition. The glass-transition temperature depends on humidity, as water is a plasticiser for spider silk.
Supercontraction
When exposed to water, dragline silks undergo supercontraction, shrinking up to 50% in length and behaving like a weak rubber under tension. Many hypotheses have attempted to explain its use in nature, most popularly to re-tension webs built in the night using the morning dew.
Highest-performance
The toughest known spider silk is produced by the species Darwin's bark spider (Caerostris darwini): "The toughness of forcibly silked fibers averages 350 MJ/m3, with some samples reaching 520 MJ/m3. Thus, C. darwini silk is more than twice as tough as any previously described silk and over 10 times tougher than Kevlar".
Adhesive
Silk fibre is a two-compound pyriform secretion, spun into patterns (called "attachment discs") using a minimum of silk substrate. The pyriform threads polymerise under ambient conditions, become functional immediately, and are usable indefinitely, remaining biodegradable, versatile and compatible with other materials in the environment. The adhesive and durability properties of the attachment disc are controlled by functions within the spinnerets. Some adhesive properties of the silk resemble glue, consisting of microfibrils and lipid enclosures.
Uses
All spiders produce silks, and a single spider can produce up to seven different types of silk for different uses. This is in contrast to insect silks, where an individual usually only produces a single type. Spiders use silks in many ways, in accord with the silk's properties. As spiders have evolved, so have the complexity and uses of their silks, for example from primitive tube webs 300–400 million years ago to complex orb webs 110 million years ago.
Silk types
Meeting the specification for all these ecological uses requires different types of silk presenting different properties, as either a fibre, a structure of fibres, or a globule. These types include glues and fibres. Some types of fibres are used for structural support, others for protective structures. Some can absorb energy effectively, whereas others transmit vibration efficiently. These silk types are produced in different glands; so the silk from a particular gland can be linked to its use.
Many species have different glands to produce silk with different properties for different purposes, including housing, web construction, defence, capturing and detaining prey, egg protection, and mobility (fine "gossamer" thread for ballooning, or for a strand allowing the spider to drop down as silk is extruded).
Synthesis and fibre spinning
Silk production differs in an important aspect from that of most other fibrous biomaterials. It is pulled on demand from a precursor out of specialised glands, rather than continuously grown like plant cell walls.
The spinning process occurs when a fibre is pulled away from the body of a spider, whether by the spider's legs, by the spider's falling under its own weight, or by any other method. The term "spinning" is misleading because no rotation occurs; it comes by analogy with textile spinning wheels. Silk production is a pultrusion, similar to extrusion, with the subtlety that the force is induced by pulling at the finished fibre rather than by squeezing it out of a reservoir. The fibre is pulled through (possibly multiple) silk glands of multiple types.
Silk gland
The gland's visible, or external, part is termed the spinneret. Depending on the complexity of the species, spiders have two to eight spinnerets, usually in pairs. Species have varying specialised glands, ranging from a sac with an opening at one end, to the complex, multiple-section ampullate glands of the golden silk orb-weavers.
Behind each spinneret on the surface of the spider lies a gland, a generalised form of which is shown in the figure.
Gland characteristics
The leftmost section is the secretory or tail section. The walls of this section are lined with cells that secrete proteins Spidroin I and Spidroin II, the main components of this spider's dragline. These proteins are found in the form of droplets that gradually elongate to form long channels along the length of the final fibre, hypothesised to assist in preventing crack formation or self-healing.
The ampulla (storage sac) is next. This stores and maintains the gel-like unspun silk dope. In addition, it secretes proteins that coat the surface of the final fibre.
The funnel rapidly reduces the large diameter of the storage sac to the small diameter of the tapering duct.
The final length is the tapering duct, the site of most of the fibre formation. It consists of a tapering tube with several tight, sharp turns, a valve near the end, and a spigot from which the solid silk fibre emerges. The tube tapers hyperbolically, so the unspun silk is under constant elongational shear stress, an important factor in fibre formation. This section is lined with cells that exchange ions, reduce the dope pH from neutral to acidic, and remove water from the fibre. Collectively, the shear stress and the ion and pH changes induce the liquid silk dope to undergo a phase transition and condense into a solid protein fibre with high molecular organisation. The spigot at the end has lips that clamp around the fibre, controlling fibre diameter and further retaining water.
Almost at the end is a valve. Though discovered some time ago, its precise purpose is still under discussion. It is believed to assist in restarting and rejoining broken fibres, acting much in the way of a helical pump, regulating the thickness of the fibre, and/or clamping the fibre as a spider falls upon it. Its similarity to the silkworm's silk press, and the roles the two structures play in silk production in their respective organisms, remain under discussion.
Throughout the process the silk appears to have a nematic texture, in a manner similar to a liquid crystal, arising in part due to the high protein concentration of silk dope (around 30% in terms of weight per volume). This allows the silk to flow through the duct as a liquid while maintaining molecular order.
As an example of a complex spinning field, the spinneret apparatus of an adult Araneus diadematus (garden cross spider) consists of many glands shown below. A similar gland architecture appears in the black widow spider.
500 pyriform glands for attachment points
4 ampullate glands for the web frame
300 aciniform glands for the outer lining of egg sacs, and for ensnaring prey
4 tubuliform glands for egg sac silk
4 aggregate glands for adhesive functions
2 coronate glands for the thread of adhesion lines
Artificial synthesis
To artificially synthesise spider silk into fibres, two broad tasks are required. These are synthesis of the feedstock (the unspun silk dope in spiders), and synthesis of the production conditions (the funnel, valve, tapering duct, and spigot). Few strategies have produced silk that can efficiently be synthesised into fibres.
Feedstock
The molecular structure of unspun silk is both complex and long. Though this endows the fibres with desirable properties, it also complicates replication. Various organisms have been used as a basis for attempts to replicate necessary protein components. These proteins must then be extracted, purified, and then spun before their properties can be tested.
Geometry
Spider silks with comparatively simple molecular structure need complex ducts to be able to form an effective fibre. Approaches:
Syringe and needle
Feedstock is forced through a hollow needle using a syringe.
Although cheap and easy to produce, gland shape and conditions are loosely approximated. Fibres created using this method may need encouragement to solidify by removing water from the fibre with chemicals such as (environmentally undesirable) methanol or acetone, and also may require later stretching of the fibre to achieve desirable properties.
Superhydrophobic surfaces
Placing a solution of spider silk on a superhydrophobic surface can generate sheets, particles, and nanowires of spider silk.
Sheets
Self-assembly of silk at the standing liquid-gas interface of a solution produces tough and strong sheets. These sheets are now being explored for mimicking the basal membrane in tissue modeling.
Microfluidics
Microfluidics have the advantage of being controllable and able to test spin small volumes of unspun fibre, but setup and development costs are high. A patent has been granted and continuously spun fibres have achieved commercial use.
Electrospinning
Electrospinning is an old technique whereby a fluid is held in a container such that it flows out through capillary action. A conducting substrate is positioned below, and a difference in electrical potential is applied between the fluid and the substrate. The fluid is attracted to the substrate, and tiny fibres jump from their point of emission, the Taylor cone, to the substrate, drying as they travel. This method creates nano-scale fibres from silk dissected from organisms and regenerated silk fibroin.
Other shapes
Silk can be formed into other shapes and sizes such as spherical capsules for drug delivery, cell scaffolds and wound healing, textiles, cosmetics, coatings, and many others. Spider silk proteins can self-assemble on superhydrophobic surfaces into nanowires, as well as micron-sized circular sheets. Recombinant spider silk proteins can self-assemble at the liquid-air interface of a standing solution to form protein-permeable, strong and flexible nanomembranes that support cell proliferation. Potential applications include skin transplants, and supportive membranes in organ-on-a-chip. These nanomembranes have been used to create a static in-vitro model of a blood vessel.
Synthetic spider silk
Replicating the complex conditions required to produce comparable fibres has challenged research and early-stage manufacturing. Through genetic engineering, E. coli bacteria, yeasts, plants, silkworms, and animals other than silkworms have been used to produce spider silk-like proteins, which have different characteristics than those from a spider. Extrusion of protein fibres in an aqueous environment is known as "wet-spinning". This process has produced silk fibres of diameters ranging from 10 to 60 μm, compared to diameters of 2.5–4 μm for natural spider silk. Artificial spider silks have fewer and simpler proteins than natural dragline silk, and consequently offer half the diameter, strength, and flexibility of natural dragline silk.
Research
In March 2010, researchers from the Korea Advanced Institute of Science & Technology succeeded in making spider silk directly using E. coli modified with certain genes of the spider Nephila clavipes. This approach eliminates the need to "milk" spiders.
A 556 kDa spider silk protein was manufactured from 192 repeat motifs of the N. clavipes dragline spidroin, with mechanical characteristics similar to those of its natural counterpart, i.e., tensile strength (1.03 ± 0.11 GPa), modulus (13.7 ± 3.0 GPa), extensibility (18 ± 6%), and toughness (114 ± 51 MJ/m3).
AMSilk developed spidroin using bacteria.
Bolt Threads produced a recombinant spidroin using yeast, for use in apparel fibers and personal care. They produced the first commercial apparel products made of recombinant spider silk, trademarked Microsilk, demonstrated in ties and beanies.
Kraig Biocraft Laboratories used research from the Universities of Wyoming and Notre Dame to create silkworms genetically altered to produce spider silk.
Defunct Canadian biotechnology company Nexia produced spider silk protein in transgenic goats; the milk produced by the goats contained significant quantities of the protein, 1–2 grams of silk proteins per litre of milk. Attempts to spin the protein into a fibre similar to natural spider silk resulted in fibres with tenacities of 2–3 grams per denier. Nexia used wet spinning and squeezed the silk protein solution through small extrusion holes to simulate the spinneret, but this was not sufficient to replicate native spider silk properties.
Spiber produced a synthetic spider silk (Q/QMONOS). In partnership with Goldwin, a ski parka made from this was in testing in 2016.
Researchers from Japan's RIKEN Center constructed an artificial gland that reproduced spider silk's molecular structure. Precise microfluidic mechanisms directed proteins to self-assemble into functional fibers. The process used negative pressure to pull (rather than push) a spidroin solution through the device. The resulting fibers matched the hierarchical structure of natural fiber.
Human uses
The earliest recorded attempt to weave fabric from spider silk was in 1709 by François Xavier Bon who, using a process similar to that for silkworm silk, wove silk derived from spiders' egg cocoons into stockings and gloves. Fifty years later the Jesuit missionary Termeyer invented a reeling device for harvesting spider silk directly from spiders, allowing it to be spun into threads. Neither Bon nor Termeyer was successful in producing commercially viable quantities.
The development of methods to mass-produce spider silk led to the manufacturing of military, medical, and consumer goods, such as ballistic armour, athletic footwear, personal care products, breast implant and catheter coatings, mechanical insulin pumps, fashion clothing, and outerwear. However, due to the difficulties in extracting and processing, the largest known piece of cloth made of spider silk is an textile with a golden tint made in Madagascar in 2009. Eighty-two people worked for four years to collect over one million golden orb spiders and extract silk from them. In 2012, spider silk fibres were used to create a set of violin strings.
Medicine
Peasants in the southern Carpathian Mountains used to cut up tubes built by Atypus and cover wounds with the inner lining. It reportedly facilitated healing, and connected with the skin. This is believed to be due to the silk's antiseptic properties, and because silk is rich in vitamin K, which can aid in clotting blood. N. clavipes silk was used in research concerning mammalian neuronal regeneration.
Science and technology
Spider silk has been used as a thread for crosshairs in optical instruments such as telescopes, microscopes, and telescopic rifle sights. In 2011, silk fibres were used to generate fine diffraction patterns over N-slit interferometric signals used in optical communications. Silk has been used to create biolenses that could be used in conjunction with lasers to create high-resolution images of the inside of the human body.
Silk has been used to suspend inertial confinement fusion targets during laser ignition, as it remains considerably elastic and has a high energy to break at temperatures as low as 10–20 K. In addition, it is made from "light" atomic number elements that emit no x-rays during irradiation that could preheat the target, limiting the pressure differential required for fusion.
References
External links
"The Silk Spinners", a BBC program about silk-producing animals
Materials science
Animal glandular products
Polyamides
Silk
Spider anatomy
Articles containing video clips | Spider silk | [
"Physics",
"Materials_science",
"Engineering"
] | 4,436 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
81,611 | https://en.wikipedia.org/wiki/Coenzyme%20A | Coenzyme A (CoA, SHCoA, CoASH) is a coenzyme, notable for its role in the synthesis and oxidation of fatty acids, and the oxidation of pyruvate in the citric acid cycle. All genomes sequenced to date encode enzymes that use coenzyme A as a substrate, and around 4% of cellular enzymes use it (or a thioester) as a substrate. In humans, CoA biosynthesis requires cysteine, pantothenate (vitamin B5), and adenosine triphosphate (ATP).
In its acetyl form, coenzyme A is a highly versatile molecule, serving metabolic functions in both the anabolic and catabolic pathways. Acetyl-CoA is utilised in the post-translational regulation and allosteric regulation of pyruvate dehydrogenase and carboxylase to maintain and support the partition of pyruvate synthesis and degradation.
Discovery of structure
Coenzyme A was identified by Fritz Lipmann in 1946, who also later gave it its name. Its structure was determined during the early 1950s at the Lister Institute, London, together by Lipmann and other workers at Harvard Medical School and Massachusetts General Hospital. Lipmann initially intended to study acetyl transfer in animals, and from these experiments he noticed a unique factor that was not present in enzyme extracts but was evident in all organs of the animals. He was able to isolate and purify the factor from pig liver and discovered that its function was related to a coenzyme that was active in choline acetylation. Work with Beverly Guirard, Nathan Kaplan, and others determined that pantothenic acid was a central component of coenzyme A. The coenzyme was named coenzyme A to stand for "activation of acetate". In 1953, Fritz Lipmann won the Nobel Prize in Physiology or Medicine "for his discovery of co-enzyme A and its importance for intermediary metabolism".
Biosynthesis
Coenzyme A is naturally synthesized from pantothenate (vitamin B5), which is found in food such as meat, vegetables, cereal grains, legumes, eggs, and milk. In humans and most living organisms, pantothenate is an essential vitamin that has a variety of functions. In some plants and bacteria, including Escherichia coli, pantothenate can be synthesised de novo and is therefore not considered essential. These bacteria synthesize pantothenate from the amino acid aspartate and a metabolite in valine biosynthesis.
In all living organisms, coenzyme A is synthesized in a five-step process that requires four molecules of ATP, pantothenate and cysteine (see figure):
Pantothenate (vitamin B5) is phosphorylated to 4′-phosphopantothenate by the enzyme pantothenate kinase (PanK; CoaA; CoaX). This is the committed step in CoA biosynthesis and requires ATP.
A cysteine is added to 4′-phosphopantothenate by the enzyme phosphopantothenoylcysteine synthetase (PPCS; CoaB) to form 4'-phospho-N-pantothenoylcysteine (PPC). This step is coupled with ATP hydrolysis.
PPC is decarboxylated to 4′-phosphopantetheine by phosphopantothenoylcysteine decarboxylase (PPC-DC; CoaC)
4′-phosphopantetheine is adenylated (or more properly, AMPylated) to form dephospho-CoA by the enzyme phosphopantetheine adenylyl transferase (COASY; PPAT; CoaD)
Finally, dephospho-CoA is phosphorylated to coenzyme A by the enzyme dephosphocoenzyme A kinase (COASY, DPCK; CoaE). This final step requires ATP.
Enzyme nomenclature abbreviations in parentheses represent mammalian, other eukaryotic, and prokaryotic enzymes respectively. In mammals steps 4 and 5 are catalyzed by a bifunctional enzyme called COASY. This pathway is regulated by product inhibition. CoA is a competitive inhibitor for Pantothenate Kinase, which normally binds ATP. Coenzyme A, three ADP, one monophosphate, and one diphosphate are harvested from biosynthesis.
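The ATP bookkeeping for the five steps above can be tallied directly from the figures stated in the text (four ATP consumed; three ADP, one monophosphate and one diphosphate released, with one adenylyl group carried into dephospho-CoA at the adenylylation step). A minimal illustrative sketch in Python, not a model of the enzymology:

# Overall ATP accounting for the five-step CoA pathway described above.
# Figures come from the text; this is bookkeeping only, not kinetics.
atp_consumed = 4
released = {"ADP": 3, "Pi (monophosphate)": 1, "PPi (diphosphate)": 1}

# Adenine nucleotide balance: 3 ATP leave as ADP, and 1 donates its adenylyl
# group to dephospho-CoA in the adenylylation step (step 4).
adenylyl_into_coa = 1
assert released["ADP"] + adenylyl_into_coa == atp_consumed
print(atp_consumed, released)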
Coenzyme A can be synthesized through alternate routes when intracellular coenzyme A levels are reduced and the de novo pathway is impaired. In these pathways, coenzyme A needs to be provided from an external source, such as food, in order to produce 4′-phosphopantetheine. Ectonucleotide pyrophosphatases (ENPP) degrade coenzyme A to 4′-phosphopantetheine, a stable molecule in organisms. Acyl carrier proteins (ACP) (such as ACP synthase and ACP degradation) are also used to produce 4′-phosphopantetheine. This pathway allows for 4′-phosphopantetheine to be replenished in the cell and allows for the conversion to coenzyme A through the enzymes PPAT and PPCK.
A 2024 article detailed a plausible chemical synthesis mechanism for the pantetheine component (the main functional part) of coenzyme A in a primordial prebiotic world.
Commercial production
Coenzyme A is produced commercially via extraction from yeast; however, this is an inefficient process (yields of approximately 25 mg/kg) that results in an expensive product. Various ways of producing CoA synthetically, or semi-synthetically, have been investigated, although none are currently operating at an industrial scale.
Function
Fatty acid synthesis
Since coenzyme A is, in chemical terms, a thiol, it can react with carboxylic acids to form thioesters, thus functioning as an acyl group carrier. It assists in transferring fatty acids from the cytoplasm to mitochondria. A molecule of coenzyme A carrying an acyl group is also referred to as acyl-CoA. When it is not attached to an acyl group, it is usually referred to as 'CoASH' or 'HSCoA'. This process facilitates the production of fatty acids in cells, which are essential in cell membrane structure.
Coenzyme A is also the source of the phosphopantetheine group that is added as a prosthetic group to proteins such as acyl carrier protein and formyltetrahydrofolate dehydrogenase.
Energy production
Coenzyme A is one of five crucial coenzymes that are necessary in the reaction mechanism of the citric acid cycle. Its acetyl-coenzyme A form is the primary input in the citric acid cycle and is obtained from glycolysis, amino acid metabolism, and fatty acid beta oxidation. This process is the body's primary catabolic pathway and is essential in breaking down the building blocks of the cell such as carbohydrates, amino acids, and lipids.
Regulation
When there is excess glucose, coenzyme A is used in the cytosol for synthesis of fatty acids. This process is implemented by regulation of acetyl-CoA carboxylase, which catalyzes the committed step in fatty acid synthesis. Insulin stimulates acetyl-CoA carboxylase, while epinephrine and glucagon inhibit its activity.
During cell starvation, coenzyme A is synthesized and transports fatty acids in the cytosol to the mitochondria. Here, acetyl-CoA is generated for oxidation and energy production. In the citric acid cycle, coenzyme A works as an allosteric regulator in the stimulation of the enzyme pyruvate dehydrogenase.
Antioxidant function and regulation
Discovery of the novel antioxidant function of coenzyme A highlights its protective role during cellular stress. Mammalian and bacterial cells subjected to oxidative and metabolic stress show significant increase in the covalent modification of protein cysteine residues by coenzyme A. This reversible modification is termed protein CoAlation (Protein-S-SCoA), which plays a similar role to protein S-glutathionylation by preventing the irreversible oxidation of the thiol group of cysteine residues.
Using anti-coenzyme A antibody and liquid chromatography tandem mass spectrometry (LC-MS/MS) methodologies, more than 2,000 CoAlated proteins were identified from stressed mammalian and bacterial cells. The majority of these proteins are involved in cellular metabolism and stress response. Different research studies have focused on deciphering the coenzyme A-mediated regulation of proteins. Upon protein CoAlation, inhibition of the catalytic activity of different proteins (e.g., metastasis suppressor NME1, peroxiredoxin 5, GAPDH, among others) is reported. To restore the protein's activity, antioxidant enzymes that reduce the disulfide bond between coenzyme A and the protein cysteine residue play an important role. This process is termed protein deCoAlation. Thioredoxin A and Thioredoxin-like protein (YtpP), two bacterial proteins, are shown to deCoAlate proteins.
Use in biological research
Coenzyme A is available from various chemical suppliers as the free acid and lithium or sodium salts. The free acid of coenzyme A is detectably unstable, with around 5% degradation observed after 6 months when stored at −20 °C, and near complete degradation after 1 month at 37 °C. The lithium and sodium salts of CoA are more stable, with negligible degradation noted over several months at various temperatures. Aqueous solutions of coenzyme A are unstable above pH 8, with 31% of activity lost after 24 hours at 25 °C and pH 8. CoA stock solutions are relatively stable when frozen at pH 2–6. The major route of CoA activity loss is likely the air oxidation of CoA to CoA disulfides. CoA mixed disulfides, such as CoA-S–S-glutathione, are commonly noted contaminants in commercial preparations of CoA. Free CoA can be regenerated from CoA disulfide and mixed CoA disulfides with reducing agents such as dithiothreitol or 2-mercaptoethanol.
Non-exhaustive list of coenzyme A-activated acyl groups
Acetyl-CoA
fatty acyl-CoA (activated form of all fatty acids; only the CoA esters are substrates for important reactions such as mono-, di-, and triacylglycerol synthesis, carnitine palmitoyl transferase, and cholesterol esterification)
Propionyl-CoA
Butyryl-CoA
Myristoyl-CoA
Crotonyl-CoA
Acetoacetyl-CoA
Coumaroyl-CoA (used in flavonoid and stilbenoid biosynthesis)
Benzoyl-CoA
Phenylacetyl-CoA
Acyl derived from dicarboxylic acids
Malonyl-CoA (important in chain elongation in fatty acid biosynthesis and polyketide biosynthesis)
Succinyl-CoA (used in heme biosynthesis)
Hydroxymethylglutaryl-CoA (used in isoprenoid biosynthesis)
Pimelyl-CoA (used in biotin biosynthesis)
References
Bibliography
Coenzymes
Metabolism
Thiols | Coenzyme A | [
"Chemistry",
"Biology"
] | 2,470 | [
"Coenzymes",
"Thiols",
"Organic compounds",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
81,660 | https://en.wikipedia.org/wiki/Flavin%20group | Flavins (from Latin flavus, "yellow") refer generally to the class of organic compounds containing the tricyclic heterocycle isoalloxazine or its isomer alloxazine, and derivatives thereof. The biochemical source of flavin is the yellow B vitamin riboflavin. The flavin moiety is often attached to an adenosine diphosphate to form flavin adenine dinucleotide (FAD), and, in other circumstances, is found as flavin mononucleotide (or FMN), a phosphorylated form of riboflavin. It is in one or the other of these forms that flavin is present as a prosthetic group in flavoproteins. Despite the similar names, flavins (with "i") are chemically and biologically distinct from the flavanoids (with "a") and the flavonols (with "o").
The flavin group is capable of undergoing oxidation-reduction reactions, and can accept either one electron at a time in a two-step process or two electrons at once. Reduction is achieved by the addition of hydrogen atoms to specific nitrogen atoms on the isoalloxazine ring system:
In aqueous solution, flavins are yellow-coloured when oxidized, taking a red colour in the semi-reduced anionic state or blue in the neutral (semiquinone) state, and colourless when totally reduced. The oxidized and reduced forms are in fast equilibrium with the semiquinone (radical) form, shifted against the formation of the radical:
Flox + FlredH2 ⇌ 2 FlH•
where Flox is the oxidized flavin, FlredH2 the reduced flavin (upon addition of two hydrogen atoms) and FlH• the semiquinone form (addition of one hydrogen atom).
In the form of FADH2, it is one of the cofactors that can transfer electrons to the electron transfer chain.
Photoreduction
Both free and protein-bound flavins are photoreducible, that is, able to be reduced by light, in a mechanism mediated by several organic compounds, such as some amino acids, carboxylic acids and amines. This property of flavins is exploited by various light-sensitive proteins. For example, the LOV domain, found in many species of plant, fungi and bacteria, undergoes a reversible, light-dependent structural change which involves the formation of a bond between a cysteine residue in its peptide sequence and a bound FMN.
FAD
Flavin adenine dinucleotide is a group bound to many enzymes including ferredoxin-NADP+ reductase, monoamine oxidase, D-amino acid oxidase, glucose oxidase, xanthine oxidase, and acyl CoA dehydrogenase.
FADH/FADH2
FADH and FADH2 are reduced forms of FAD. FADH2 is produced as a prosthetic group in succinate dehydrogenase, an enzyme involved in the citric acid cycle. In oxidative phosphorylation, two molecules of FADH2 typically yield 1.5 ATP each, or three ATP combined.
FMN
Flavin mononucleotide is a prosthetic group found in, among other proteins, NADH dehydrogenase, E.coli nitroreductase and old yellow enzyme.
See also
Pteridine
Pterin
Deazaflavin (5-deazaflavin)
References
Further reading
Cellular respiration | Flavin group | [
"Chemistry",
"Biology"
] | 764 | [
"Biochemistry",
"Cellular respiration",
"Metabolism"
] |
81,666 | https://en.wikipedia.org/wiki/Pyruvic%20acid | Pyruvic acid (CH3COCOOH) is the simplest of the alpha-keto acids, with a carboxylic acid and a ketone functional group. Pyruvate, the conjugate base, CH3COCOO−, is an intermediate in several metabolic pathways throughout the cell.
Pyruvic acid can be made from glucose through glycolysis, converted back to carbohydrates (such as glucose) via gluconeogenesis, or converted to fatty acids through a reaction with acetyl-CoA. It can also be used to construct the amino acid alanine and can be converted into ethanol or lactic acid via fermentation.
Pyruvic acid supplies energy to cells through the citric acid cycle (also known as the Krebs cycle) when oxygen is present (aerobic respiration), and alternatively ferments to produce lactate when oxygen is lacking.
Chemistry
In 1834, Théophile-Jules Pelouze distilled tartaric acid and isolated glutaric acid and another unknown organic acid. Jöns Jacob Berzelius characterized this other acid the following year and named it pyruvic acid because it was distilled using heat. The correct molecular structure was deduced by the 1870s.
Pyruvic acid is a colorless liquid with a smell similar to that of acetic acid and is miscible with water. In the laboratory, pyruvic acid may be prepared by heating a mixture of tartaric acid and potassium hydrogen sulfate, by the oxidation of propylene glycol by a strong oxidizer (e.g., potassium permanganate or bleach), or by the hydrolysis of acetyl cyanide, formed by reaction of acetyl chloride with potassium cyanide:
CH3COCl + KCN → CH3COCN + KCl
CH3COCN → CH3COCOOH
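As a worked example of the stoichiometry implied by the two equations above, the route runs 1:1:1 from acetyl chloride through acetyl cyanide to pyruvic acid. A minimal sketch in Python; the 10.0 g starting amount is an arbitrary illustrative figure, and the calculation assumes both steps go to completion:

# Theoretical yield of pyruvic acid from acetyl chloride (1:1:1 stoichiometry).
M_ACETYL_CHLORIDE = 78.50   # g/mol, CH3COCl
M_PYRUVIC_ACID = 88.06      # g/mol, CH3COCOOH

def theoretical_yield_g(mass_acetyl_chloride_g):
    """Mass of pyruvic acid obtainable at 100% conversion."""
    moles = mass_acetyl_chloride_g / M_ACETYL_CHLORIDE
    return moles * M_PYRUVIC_ACID

print(round(theoretical_yield_g(10.0), 2))   # ~11.22 g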
Biochemistry
Pyruvate is an important chemical compound in biochemistry. It is the output of the metabolism of glucose known as glycolysis. One molecule of glucose breaks down into two molecules of pyruvate, which are then used to provide further energy, in one of two ways. Pyruvate is converted into acetyl-coenzyme A, which is the main input for a series of reactions known as the Krebs cycle (also known as the citric acid cycle or tricarboxylic acid cycle). Pyruvate is also converted to oxaloacetate by an anaplerotic reaction, which replenishes Krebs cycle intermediates; also, the oxaloacetate is used for gluconeogenesis.
These reactions are named after Hans Adolf Krebs, the biochemist awarded the 1953 Nobel Prize in Physiology or Medicine, jointly with Fritz Lipmann, for research into metabolic processes. The cycle is also known as the citric acid cycle or tricarboxylic acid cycle, because citric acid is one of the intermediate compounds formed during the reactions.
If insufficient oxygen is available, the acid is broken down anaerobically, creating lactate in animals and ethanol in plants and microorganisms (and in carp). Pyruvate from glycolysis is converted by fermentation to lactate using the enzyme lactate dehydrogenase and the coenzyme NADH in lactate fermentation, or to acetaldehyde (with the enzyme pyruvate decarboxylase) and then to ethanol in alcoholic fermentation.
Pyruvate is a key intersection in the network of metabolic pathways. Pyruvate can be converted into carbohydrates via gluconeogenesis, to fatty acids or energy through acetyl-CoA, to the amino acid alanine, and to ethanol. Therefore, it unites several key metabolic processes.
Pyruvic acid production by glycolysis
In the last step of glycolysis, phosphoenolpyruvate (PEP) is converted to pyruvate by pyruvate kinase. This reaction is strongly exergonic and irreversible; in gluconeogenesis, it takes two enzymes, pyruvate carboxylase and PEP carboxykinase, to catalyze the reverse transformation of pyruvate to PEP.
Decarboxylation to acetyl CoA
Pyruvate decarboxylation by the pyruvate dehydrogenase complex produces acetyl-CoA.
Carboxylation to oxaloacetate
Carboxylation by pyruvate carboxylase produces oxaloacetate.
Transamination to alanine
Transamination by alanine transaminase produces alanine.
Reduction to lactate
Reduction by lactate dehydrogenase produces lactate.
Environmental chemistry
Pyruvic acid is an abundant carboxylic acid in secondary organic aerosols.
Uses
Pyruvate is sold as a weight-loss supplement, though credible science has yet to back this claim. A systematic review of six trials found a statistically significant difference in body weight with pyruvate compared to placebo. However, all of the trials had methodological weaknesses and the magnitude of the effect was small. The review also identified adverse events associated with pyruvate such as diarrhea, bloating, gas, and increase in low-density lipoprotein (LDL) cholesterol. The authors concluded that there was insufficient evidence to support the use of pyruvate for weight loss.
There is also in vitro as well as in vivo evidence in hearts that pyruvate improves metabolism by stimulating NADH production and increases cardiac function.
See also
Pyruvate scale
Uvitonic acid
Notes
References
External links
Pyruvic acid mass spectrum
Alpha-keto acids
Cellular respiration
Exercise physiology
Metabolism
Glycolysis | Pyruvic acid | [
"Chemistry",
"Biology"
] | 1,236 | [
"Carbohydrate metabolism",
"Cellular respiration",
"Glycolysis",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
81,756 | https://en.wikipedia.org/wiki/Rochester%20Institute%20of%20Technology | The Rochester Institute of Technology (RIT) is a private research university in Henrietta, New York, a suburb of Rochester. It was founded in 1829. It is one of only two institutes of technology in New York state, the other being the New York Institute of Technology.
RIT enrolls about 19,000 students, of whom 16,000 are undergraduate and 3,000 are graduate students. These students come from all 50 states in the United States and more than 100 countries. The university has more than 4,000 faculty and staff. It also has branches abroad in China, Croatia, Kosovo, and the United Arab Emirates. The university is classified among "R2: Doctoral Universities – High research activity".
History
The university began as a result of an 1891 merger between Rochester Athenæum, a struggling literary society founded in 1829 by Colonel Nathaniel Rochester and associates, and The Mechanics Institute, a Rochester school of practical technical training for local residents founded in 1885 by a consortium of local businessmen including Captain Henry Lomb, co-founder of Bausch & Lomb. The merged institution was named Rochester Athenæum and Mechanics Institute (RAMI). The Mechanics Institute was considered the surviving school and took over The Rochester Athenaeum's 1829 founding charter. From the time of the merger until 1944, many of its students, administrators and faculty alike celebrated not only the former Mechanics Institute's 1885 founding charter but its former name as well. In 1944, the school changed its name to Rochester Institute of Technology, re-established The Athenaeum's 1829 founding charter and became a full-fledged research university.
The university originally resided within the city of Rochester, New York, proper, on a block bounded by the Erie Canal, South Plymouth Avenue, Spring Street, and South Washington Street (approximately ). Its art department was originally located in the Bevier Memorial Building. By the middle of the twentieth century, RIT began to outgrow its facilities, and surrounding land was scarce and expensive; additionally, in 1959, the New York Department of Public Works announced a new freeway, the Inner Loop, was to be built through the city along a path that bisected the university's campus and required demolition of key university buildings. In 1961, a donation of $3.27 million from local Grace Watson, for whom RIT's dining hall was later named, allowed the university to purchase land for a new campus several miles south along the east bank of the Genesee River in suburban Henrietta. Upon completion in 1968, the university moved to the new suburban campus, where it resides today.
In 1966, RIT was selected by the federal government to be the site of the newly founded National Technical Institute for the Deaf (NTID). NTID admitted its first students in 1968, concurrent with RIT's transition to the Henrietta campus.
In 1979, RIT took over Eisenhower College, a liberal arts college located in Seneca Falls, New York. Despite making a 5-year commitment to keep Eisenhower open, RIT announced in July 1982 that the college would close immediately. One final year of operation by Eisenhower's academic program took place in the 1982–83 school year on the Henrietta campus. The final Eisenhower graduation took place in May 1983 back in Seneca Falls.
The microelectronic engineering program, created in 1982 and the only ABET-accredited undergraduate program in the country, was the nation's first Bachelor of Science program specializing in the fabrication of semiconductor devices and integrated circuits. In 1990, RIT started its first PhD program, in imaging science – the first PhD program of its kind in the U.S. The information technology program was the first nationally recognized IT degree, created in 1993. In 1996, RIT became the first college in the U.S. to offer a Software Engineering degree at the undergraduate level.
Campus
The main campus is housed on a property. This property is largely covered with woodland and fresh-water swamp making it a very diverse wetland that is home to a number of somewhat rare plant species. The campus comprises 237 buildings and of building space. The nearly universal use of bricks in the campus's construction – estimated at 15,710,693 bricks as of August 6, 2018 – prompted students to give it the semi-affectionate nickname "Brick City," reflected in the name of events such as the annual "Brick City Homecoming." Though the buildings erected in the first few decades of the campus's existence reflected the architectural style known as brutalism, the warm color of the bricks softened the impact somewhat. More recent additions to the campus have diversified the architecture while still incorporating the traditional brick colors. The main campus was listed as a census-designated place in 2020.
In 2009, the campus was named a "Campus Sustainability Leader" by the Sustainable Endowments Institute.
The residence halls and the academic side of campus are connected with a walkway called the "Quarter Mile". Along the Quarter Mile, between the academic and residence hall side are various administration and support buildings. On the academic side of the walkway is a courtyard, known as the Infinity Quad due to a striking polished stainless steel sculpture (by José de Rivera, 1968, 19'×8'×2') of a continuous ribbon-like Möbius strip (commonly referred to as the infinity loop because if the sun hits the strip at a certain angle it will cast a shadow in the shape of an infinity symbol on the ground) in the middle of it; on the residence hall side is a sundial and a clock. Standing near the Administration Building and the Student Alumni Union is The Sentinel, a steel structure created by the acclaimed metal sculptor Albert Paley. Reaching 73 feet high and weighing 110 tons, the sculpture is the largest on any American university campus. There are four RIT-owned apartment complexes: Global Village, Perkins Green, Riverknoll, and University Commons.
Along the Quarter Mile is the Gordon Field House, a , two-story athletic center. Opened in 2004 and named in honor of Lucius "Bob" Gordon and his wife Marie, the Field House hosts numerous campus and community activities, including concerts, career fairs, athletic competitions, graduations, and other functions. Other facilities between the residence halls and academic buildings include the Hale-Andrews Student Life Center, Student Alumni Union, Ingle Auditorium, Clark Gymnasium, Frank Ritter Memorial Ice Arena, and the Schmitt Interfaith Center.
Art on campus
The RIT Art Collection, part of the RIT Archive Collections at RIT Libraries, comprises thousands of works, including hundreds by RIT faculty, students, and alumni. The collection grows every year through the Purchase Prize Program, which enables the university to purchase select art works from students in the School of Art and Design, the School for American Crafts, and the School of Photographic Arts and Sciences.
Many pieces from the collection are on public display around campus, including:
Sentinel – a 73-foot-tall sculpture created by the acclaimed metal sculptor, Albert Paley, located on Administration Circle.
Growth and Youth – a set of two murals by Josef Albers located in the lobby of the George Eastman Building.
Principia – a mural by Larry Kirkland that is etched into the black granite floor of the atrium in the College of Science (Gosnell Hall). The work features illustrations, symbols, formulae, quotes, and images representing milestones in the history of science.
Three Piece Reclining Figure No. 1 – a bronze sculpture by English artist Henry Moore located in Eastman Kodak Quad.
Grand Hieroglyph – a 24-foot-long tapestry by Sheila Hicks located in the George Eastman Building.
Sundial – a sculpture by Alistair Bevington located on the Residence Quad.
The Monument to Ephemeral Facts – a mixed media sculpture by Douglas Holleley located in Wallace Library.
Unity – a 24-foot-tall stainless steel sculpture sited between the College of Art and Design, the College of Engineering Technology, and the College of Engineering.
Demographics
The RIT campus is a census-designated place (CDP) with a population of 7,322.
Note: the US Census treats Hispanic/Latino as an ethnic category. This table excludes Latinos from the racial categories and assigns them to a separate category. Hispanics/Latinos can be of any race.
Organization and administration
As of 2017, the president is David C. Munson Jr., formerly the dean of engineering at the University of Michigan. Munson, the university's tenth president, took office on July 1, 2017, replacing William W. Destler, who retired after 10 years at RIT. Prabu David, formerly vice provost at Michigan State University, was named provost in August 2023. He replaced Ellen Granberg, the first woman to serve in that role at RIT.
The school is also a member of the Association of Independent Technological Universities.
Colleges
RIT has nine colleges:
College of Art and Design
Saunders College of Business
Golisano College of Computing and Information Sciences
Kate Gleason College of Engineering
College of Engineering Technology
College of Health Sciences and Technology
College of Liberal Arts
National Technical Institute for the Deaf
College of Science
There are also two smaller academic units that grant RIT degrees but do not have full college faculties:
Golisano Institute for Sustainability
School of Individualized Study
In addition to these colleges, RIT operates three branch campuses in Europe, one in the Middle East and one in East Asia:
RIT Croatia (formerly the American College of Management and Technology) in Dubrovnik and Zagreb, Croatia
RIT Kosovo (formerly the American University in Kosovo) in Pristina, Kosovo
RIT Dubai in Dubai, United Arab Emirates
RIT China – Weihai
Academics
The university is chartered by the New York state legislature and accredited by the Middle States Association of Colleges and Schools. The university offers more than 200 academic programs, including seven doctoral programs across its nine constituent colleges. In 2008–2009, RIT awarded 2,483 bachelor's degrees, 912 master's degrees, 10 doctorates, and 523 other certificates and diplomas.
The four-year, full-time undergraduate program constitutes the majority of enrollments at the university and emphasizes instruction in the "arts & sciences/professions." RIT is a member of the Rochester Area College consortium, which allows students to register at other colleges in the Rochester metropolitan area without tuition charges. RIT's full-time undergraduate and graduate programs used to operate on an approximately 10-week quarter system with the primary three academic quarters beginning on Labor Day in early September and ending in late May. In August 2013, RIT transitioned from a quarter system to a semester system. The change was hotly debated on campus, with a majority of students opposed according to an informal survey; Student Government also voted against the change.
Among the eight colleges, 6.8% of the student body is enrolled in the Saunders College of Business, 15.0% in the Kate Gleason College of Engineering, 4.3% in the College of Liberal Arts, 25.4% in the College of Applied Science and Technology, 18.0% in the B. Thomas Golisano College of Computing and Information Sciences, 13.9% in the College of Imaging Arts and Science, 5.7% in the National Technical Institute for the Deaf, and 9.2% in the College of Science. The five most commonly awarded degrees are in Business Administration, Engineering Technology, School of Photographic Arts & Sciences, School of Art and Design, and Information Technology.
RIT has struggled with student retention, although the situation has improved during president Destler's tenure. 91.3% of freshmen in the fall of 2009 registered for fall 2010 classes, which Destler noted as a school record.
Student body
RIT enrolled 13,711 undergraduate and 3,131 graduate students in fall 2015. Admissions are characterized as "more selective, higher transfer-in" by the Carnegie Foundation. RIT received 12,725 applications for undergraduate admission in Fall 2008, 60% were admitted, 34% enrolled, and 84% of students re-matriculated as second-year students. The interquartile range on the SAT was 1630–1910. 26% of students graduated after four years and 64% after six years. As of 2013, the 25th–75th percentile SAT scores are 540–650 Critical Reading, 570–680 Math, and 520–630 Writing—the average composite score being 1630–1960.
Rankings
In 2017, RIT was ranked No. 97 (tie) in the National Universities category by U.S. News & World Report.
Business Insider ranked RIT No. 14 in Northeast and No. 36 in the country for Computer Science.
RIT was ranked among the top 50 national universities in a national survey of "High School Counselors Top College Picks".
RIT's Saunders College of Business ranked No. 26 in the United States for "Best Online MBA Programs" for the online executive MBA program by U.S. News & World Report. Times Higher Education/The Wall Street Journal ranked the MBA program at Saunders College of Business No. 54 among business colleges and universities around the world for the year 2019.
RIT was ranked among the top 20 universities recognized for excellent co-operative learning and internship programs. It was further placed at No. 24 in the top 30 universities for Computer Science with the best Returns on Investment (ROI) in the US.
The Princeton Review ranked RIT No. 8 nationally for "top schools for video game design for 2019" in undergraduate programs and No. 7 in graduate programs. Among the top 75 universities for Video Game Design in the US, RIT was ranked No. 4.
Co-op program
RIT's co-op program, which began in 1912, is the fourth-oldest in the world. It is also the fifth-largest in the nation, with approximately 3,500 students completing a co-op each year at over 2,000 businesses. The program requires (or allows, depending on major) students to work for up to five quarters, alternating with quarters of class. The amount of co-op varies by major, usually between 3 and 5 three-month "blocks" or academic quarters. Many employers prefer students to co-op for two consecutive blocks, referred to as a "double-block co-op". During a co-op, the student is not required to pay tuition to the school and is still considered a "full time" student.
Library and special collections
RIT library services are based in the Wallace Library. The Cary Graphic Arts Collection contains books, manuscripts, printing-type specimens, letterpress printing equipment, documents, and other artifacts related to the history of graphic communication. RIT Archives document more than 180 years of the university's history, and students in the Museum Studies program frequently work with these artifacts and help create exhibitions. The RIT/NTID Deaf Studies Archive preserves and illustrates the history, art, culture, technology, and language of the Deaf community. The RIT Art Collection contains thousands of works showcasing RIT's visual arts curriculum.
Vignelli Center for Design Studies
The Vignelli Center for Design Studies was established in 2010 and houses the archives of Italian designers Massimo and Lella Vignelli. The center is a hub for design education, scholarship and research.
ESL Global Cybersecurity Institute
Founded in 2020, the Global Cybersecurity Institute was funded in part by a $50 million gift from RIT alumnus Austin McChord. The gift also funded four named endowments for students and cybersecurity researchers. In 2022, the Institute received a $3 million naming gift from ESL Federal Credit Union, a Rochester-area company that provides banking and wealth management services.
Research
The total value of research grants to university faculty for fiscal year 2022 totaled $92 million. The university currently offers twelve PhD programs: Imaging science, Microsystems Engineering, Computing and Information Sciences, Color science, Astrophysical Sciences and Technology, Sustainability, Electrical and Computer Engineering, Biomedical and Chemical Engineering, Business Administration, Physics, and Mathematical Modeling.
In 1986, RIT founded the Chester F. Carlson Center for Imaging Science, and started its first doctoral program in Imaging Science in 1989. The Imaging Science department also offers the only Bachelors (BS) and Masters (MS) degree programs in imaging science in the country. The Carlson Center features a diverse research portfolio; its major research areas include Digital Image Restoration, Remote Sensing, Magnetic Resonance Imaging, Printing Systems Research, Color Science, Nanoimaging, Imaging Detectors, Astronomical Imaging, Visual Perception, and Ultrasonic Imaging.
The Center for Advancing the Study of CyberInfrastructure (CASCI) is a multidisciplinary center housed in the College of Computing and Information Sciences. The departments of Computer Science, Software Engineering, Information Technology, Computer Engineering, Imaging Science, and Bioinformatics collaborate in a variety of research programs at this center. RIT was the first university to launch a bachelor's program in information technology in 1991, the first university to launch a bachelor's program in software engineering in 1996, and was also among the first universities to launch a computer science bachelor's program in 1972. RIT helped standardize the Forth programming language and developed the CLAWS software package.
RIT has also collaborated with many industry partners on research, including IBM, Xerox, Rochester's Democrat and Chronicle, Siemens, NASA, and the Defense Advanced Research Projects Agency (DARPA). In 2005, Russell W. Bessette, Executive Director of the New York State Office of Science, Technology & Academic Research (NYSTAR), announced that RIT would lead the University at Buffalo and Alfred University in an initiative to create key technologies in microsystems, photonics, nanomaterials, and remote sensing systems and to integrate next-generation IT systems. In addition, the collaboratory is tasked with helping to facilitate economic development and technology transfer in New York State. More than 35 other notable organizations have joined the collaboratory, including Boeing, Eastman Kodak, IBM, Intel, SEMATECH, ITT, Motorola, Xerox, and several federal agencies, including NASA.
In 2017, the U.S. Department of Energy selected RIT to lead its Reducing Embodied-Energy and Decreasing Emissions (REMADE) Institute aimed at forging new clean energy measures through the Manufacturing USA initiative.
Athletics
RIT was a long-time member of the Empire 8, an NCAA Division III athletic conference, but moved to the Liberty League beginning with the 2011–2012 academic year. All of RIT's teams compete at the Division III level, with the exception of the men's and women's ice hockey programs. Those teams play at the Division I level in Atlantic Hockey America, formed after the 2023–24 season by the merger of the Tigers' former hockey homes of the men-only Atlantic Hockey Association and the women-only College Hockey America. In 2010, the men's ice hockey team was the first ever from the Atlantic Hockey Association to reach the NCAA tournament semi-finals: The Frozen Four.
In 2011–2012, the RIT women's ice hockey team had a regular season record of 28–1–1, and won the NCAA Division III national championship, defeating the defending champion Norwich University 4–1. The women's team had carried a record of 54–3–3 over their past two regular seasons leading up to that point. The women's hockey team then moved from Division III to Division I. Starting in the 2012–2013 season, the women's team played in the College Hockey America conference. In 2014–2015, the team became eligible for NCAA Division I postseason play.
In 2021, the RIT men's lacrosse team beat Salisbury in double overtime to take the NCAA Division III national championship. In 2022, the RIT men's lacrosse team won a second national title, following a 12–10 victory over Union College.
RIT's Alpine Ski Club competes in the United States Collegiate Ski & Snowboard Association (USCSA), which uses NCAA Division II competition and academic standards. The varsity Alpine Ski Team competes in the USCSA Mid East Region.
Tom Coughlin, coach of the NFL's 2008 and 2012 Super Bowl champion New York Giants, taught physical education and was the head coach of the RIT men's varsity football team for four seasons in the early 1970s. It was the first head coaching job of Coughlin's career, during which he oversaw RIT football's transition from a club sport to an NCAA Division III team; he later called his time at RIT "a great experience."
From 1968, RIT's hockey teams played at the Frank Ritter Memorial Ice Arena on campus. In 2010, RIT began raising money for a new arena. In 2011, B. Thomas Golisano and the Polisseni Foundation donated $4.5 million for the new arena, which came to be named the Gene Polisseni Center.
Mascot
RIT's athletics nickname is the "Tigers", a name adopted following the undefeated men's basketball season of 1955–56. Prior to that, RIT's athletic teams were called the "Techmen" and used blue and silver as the sports colors. In 1963, RIT students raised funds by selling ‘Tigershares’ to buy a rescued Bengal tiger cub that became the university's mascot, named SpiRIT, which stands for Student Pride in RIT. Ambitious students were trained as the tiger cub's handlers and took him to most sporting events until 1964. The cub was then discovered to be ill and was eventually put down because of these health complications. The original tiger's pelt now resides in the RIT Archive Collections at RIT Libraries. RIT helped the Seneca Park Zoo purchase a new tiger shortly after SpiRIT's death, but it was not used as a school mascot. A bronze sculpture by D.H.S. Wehle in the center of the Henrietta campus now provides a permanent version of the mascot.
A costumed tiger mascot named RITchie was later introduced, appearing at a variety of campus events. The name was selected as part of a student contest in 1989.
Student life
In addition to its academic and athletic endeavors, RIT has over 150 student clubs, 10 major student organizations, an interfaith center and 30 different Greek organizations.
Reporter magazine, founded in 1951, is the university's primary student-run magazine. RIT also has its own ambulance corps, bi-weekly television athletics program RIT SportsZone, pep band, radio station, and tech crew.
The university's Gordon Field House and Activities Center is home to competitive and recreational athletics and aquatics, a fitness center, and an auditorium hosting frequent concerts and other entertainment. Its opening in late 2004 was inaugurated by concerts by performers including Kanye West and Bob Dylan. It is the second-largest venue in Monroe County.
Deaf and hard-of-hearing students
One of RIT's unique features is the large presence of deaf and hard-of-hearing students, who make up 8.8% of the student body. The National Technical Institute for the Deaf, one of RIT's nine colleges, provides interpreting and captioning services to students for classes and events. Many courses' lectures at RIT are interpreted into American Sign Language or captioned in real-time for the benefit of hard-of-hearing and deaf students. There are several deaf and hard-of-hearing professors and lecturers, too; an interpreter can vocalize their lectures for hearing students. This significant portion of the RIT population provides another dynamic to the school's diversity, and it has contributed to Rochester's high number of deaf residents per capita.
Fraternities and sororities
RIT's campus is host to thirty fraternities and sororities (eighteen fraternities and twelve sororities), which make up 6.5% of the total RIT population. RIT and Phi Kappa Psi alumni built six large buildings for Greek students on the academic side of campus next to the Riverknoll apartments. In addition to these six houses, there is also limited space within the residence halls for another six chapters.
Special Interest Houses
RIT's dormitories are home to six "Special Interest Houses" — Art House, Computer Science House, Engineering House, House of General Science, Photo House, and Unity House — that provide an environment to live immersed in a specific interest, such as art, engineering, or computing. Members of a special-interest house share their interests with each other and the rest of campus through academic focus and special activities. Special Interest Houses are self-governing and accept members based on their own criteria. Two earlier houses no longer exist: Business Leaders for Tomorrow, active in the early 2000s, and International House, which closed prior to the 2022–2023 academic year.
ROTC programs
RIT is the host of the Air Force ROTC Detachment 538 "Blue Tigers" and the Army ROTC "Tiger Battalion". RIT students may also enroll in the Naval ROTC program based at the University of Rochester.
In 2009, the "Tiger Battalion" was awarded the Eastern Region's Outstanding ROTC Unit Award, given annually by the Order of the Founders and Patriots of America. In 2010, it was awarded the National MacArthur Award for 2nd Brigade.
Reporter Magazine
Reporter magazine (Reporter) is a completely student-run organization. The magazine publishes a 32-page full-color issue on the first Monday of each month during the academic year, supplemented with regular online content.
Reporter began as a newspaper in 1951 and changed to a magazine format in 1969 to better showcase the talents of students enrolled in programs at the College of Imaging Arts & Sciences. The first magazine issue was released on January 10, 1969. The magazine continued to be released on a weekly cycle until 2013.
K2GXT – RIT Amateur Radio Club
Students interested in amateur radio can join K2GXT, the RIT amateur radio club. It is the oldest club on campus, founded in 1952 at the original downtown Rochester campus. The club maintains a VHF and UHF amateur radio repeater system operating on the 2 meter and 70 centimeter bands. The repeater system serves the campus and surrounding areas.
WITR 89.7
An FM radio station run by students at RIT, WITR 89.7 broadcasts various music genres, RIT athletic events, and several talk radio programs.
College Activities Board
The College Activities Board, frequently abbreviated as CAB, is a student-run organization responsible for providing "diverse entertainment and activities to enhance student life on the RIT campus." CAB is responsible for annual concerts, class trips, movie screenings, and other frequent events.
Imagine RIT
An annual festival, publicized as "Imagine RIT", was initiated in May 2008 to showcase innovative and creative activity at RIT. It is one of the most prominent changes brought to RIT by former university president William Destler.
An open event, Imagine RIT gives visitors an opportunity to tour the RIT campus and view new ideas for products and services, admire fine art, explore faculty and student research, examine engineering design projects, and interact with hundreds of hands-on exhibits. Theatrical and musical performances take place on stages in many locations on the RIT campus. Intended to appeal to visitors of all ages, including children, the festival features a variety of exhibits. More than 17,000 people attended the inaugural festival on May 3, 2008, and ten years later attendance had doubled to almost 35,000.
Rochester Game Festival
Sponsored by RIT's MAGIC Center, ROC Game Dev, and the Irondequoit Library, the Rochester Game Festival is an annual convention that showcases video games and tabletop games produced by students and by independent developers in the surrounding region. More than 1,300 people attended the festival in 2019.
RIT Ambulance
RIT Ambulance (RITA) is a community-run, 9-1-1-dispatched, New York State-certified basic life support ambulance agency.
Public Safety
RIT Public Safety is the primary agency responsible for the protection of students, staff, and property, as well as enforcement of both college policies and state laws. Officers are NYS-licensed security guards who possess an expanded scope of authority under NYS Education Law, and many officers have prior law enforcement backgrounds. In 2016, it was announced that RIT Public Safety would deploy officers armed with long guns to respond to active shooter incidents. Public Safety officers operate both a dispatch center and various types of patrol units on campus and at off-campus holdings (such as The Inn and Conference Center) and also manage the call box system. Activating a call box automatically places the user in touch with an officer in the dispatch center, who will direct patrol officers to respond to the location; if necessary, officers will summon the Monroe County Sheriff's Office to respond as well. As the college does not have 24/7 on-campus crisis intervention counselors, Public Safety officers are also trained to act as mediators during a mental or behavioral health incident until an on-call counselor can be summoned.
Dining services
RIT Dining Services manages a large number of restaurants and food shops, along with the sole dining hall on campus. There are multiple cafeterias and small retail locations throughout the campus, including near the residence halls, in the Student Alumni Union, in Global Village, and in certain academic buildings. Dining services at RIT are completely internal and run through the university. RIT Dining Services also provides opportunities for international students to work on campus. In early 2019, the campus started providing food from an on-campus hydroponic farm that supplied lettuce, kale, and other crops.
Governance
RIT is governed under a shared governance model. The shared governance system is composed of the Student Government, the Staff Council, and the Academic Senate. The University Council brings together representatives from all three groups and makes recommendations to the president of the university. Once the University Council has made a recommendation, the president makes the final decision.
Student Government
The Student Government consists of an elected student senate and a cabinet appointed by the president and vice president. Elections for academic and community senators occur each spring, along with the elections for the president and vice president.
The Student Government is an advocate for students and is responsible for basic representation as well as improving campus life for students. The Student Government endorses proposals that are brought before the University Council.
Academic Senate
The Academic Senate is responsible for representing faculty within the shared governance system. The Academic Senate has 43 senators.
Staff Council
The Staff Council represents staff in the shared governance system.
Notable alumni
RIT has over 125,000 alumni worldwide. Eleven RIT alumni, affiliates, and faculty members have been recipients of the Pulitzer Prize, winning a total of 15 prizes.
Notable alumni include Fredericka Douglass Sprague Perry, a philanthropist, a pioneer in the welfare of Black children, and the granddaughter of Frederick Douglass; Bob Duffy, former New York Lieutenant Governor; Tom Curley, former president and CEO of the Associated Press; Daniel Carp, former chairman of the Eastman Kodak Company; John Resig, software developer and creator of jQuery; N. Katherine Hayles, critical theorist; Austin McChord, founder and CEO of Datto; Jack Van Antwerp, former director of photography for The Wall Street Journal; and photojournalist Bernie Boston.
Presidents and provosts
In the decades prior to the selection of RIT's first president, the university was administered primarily by the board of trustees.
In addition to the ten official presidents, Thomas R. Plough served as acting president twice: once, in February 1991 when M. Richard Rose was on sabbatical with the CIA, and again in 1992 between Rose's retirement and Albert J. Simone's installation.
See also
Association of Independent Technological Universities
List of Rochester Institute of Technology alumni
RIT Tigers
References
Further reading
External links
Rochester Institute of Technology
Universities and colleges in Monroe County, New York
Education in Rochester, New York
Engineering universities and colleges in New York (state)
Technological universities in the United States
Private universities and colleges in New York (state)
Universities and colleges established in 1829
1829 establishments in New York (state)
Educational buildings in Rochester, New York
Glassmaking schools | Rochester Institute of Technology | [
"Materials_science",
"Engineering"
] | 6,635 | [
"Glass engineering and science",
"Glassmaking schools"
] |
81,825 | https://en.wikipedia.org/wiki/Composite%20armour | Composite armour is a type of vehicle armour consisting of layers of different materials such as metals, plastics, ceramics or air. Most composite armours are lighter than their all-metal equivalent, but instead occupy a larger volume for the same resistance to penetration. It is possible to design composite armour stronger, lighter and less voluminous than traditional armour, but the cost is often prohibitively high, restricting its use to especially vulnerable parts of a vehicle. Its primary purpose is to help defeat high-explosive anti-tank (HEAT) projectiles.
HEAT had posed a serious threat to armoured vehicles since its introduction in World War II. Lightweight and small, HEAT projectiles could nevertheless penetrate hundreds of millimetres of the most resistant steel armours. The capability of most materials for defeating HEAT follows the "density law", which states that the penetration of shaped charge jets is proportional to the square root of the shaped charge liner density (typically copper) divided by the square root of the target density. On a weight basis, lighter targets are more advantageous than heavier targets, but using large quantities of lightweight materials has obvious disadvantages in terms of mechanical layout. Certain materials have an optimal compromise in terms of density that makes them particularly useful in this role.
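In symbols, this rule of thumb for an idealised shaped-charge jet is commonly written as

P ≈ L √(ρ_jet / ρ_target)

where P is the penetration depth, L the effective length of the jet, ρ_jet the density of the liner material and ρ_target the density of the armour; the notation is illustrative rather than taken from a particular source. Because the thickness needed to stop the jet scales as 1/√ρ_target while the areal mass of that thickness scales as √ρ_target, low-density materials save weight at the cost of bulk, which is the compromise described above.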
History
Some early ironclads used armor composed of multiple layers of thinner armor bolted or welded together. The result was considerably less effective for a given overall thickness than a single plate, but the approach was used because making thicker plates, or plates with different metallurgical properties through their thickness (for example, Harvey armor), was prohibitively expensive or too technically challenging. Teak was used to sandwich layers that could not be easily fitted together, or to provide a backing to catch splinters.
During WWII, in an effort to provide protection against the German Army’s Panzerfaust anti-tank weapon, an M4A3 was fitted with an armor “kit” incorporating a mixture of quartz gravel, asphalt and wood flour known as “HCR2.” This add-on armor was successfully live-fire tested in September 1945 against both the German Panzerfaust and 76mm High-Velocity Armor Piercing (HVAP) ammunition. Aside from this early test, the first serious development began as part of the US Army's T95 experimental series from the mid-1950s. The T95 featured siliceous-cored armour, which contained a plate of fused silica glass between rolled steel plates. The stopping power of glass exceeds that of steel armour on a thickness basis; in many cases glass is more than twice as effective as steel. Although the T95 never entered production, a number of its concepts were used on the M60 Patton, and during the development stage (as the XM60) the siliceous-cored armour was at least considered for use, although it was not a feature of the production vehicles.
The first widespread use of a composite armour appears to have been on the Soviet T-64. It used an armour known as combination K, which apparently is glass-reinforced plastic sandwiched between inner and outer steel layers. Through a mechanism called thixotropy, the resin changes to a fluid under constant pressure, allowing the armour to be moulded into curved shapes. Later models of the T-64, along with newer designs, use a boron carbide-filled resin aggregate for greatly improved protection. The Soviets also invested heavily in reactive armour, which allowed them some ability to control quality, even after production.
Among NATO nations and allies, the most common type of composite armour today is Chobham armour, first developed and used by the British in the experimental FV 4211 tank, which was based on Chieftain tank components. Chobham uses multiple non-explosive reactive armour plates (NERA), which sandwich a layer of elastomer (like rubber) between two plates of steel armour. This was shown to dramatically increase the resistance to HEAT projectiles, even in comparison to other composite armour designs. Chobham was such an improvement that it was soon used on the new U.S. M1 Abrams main battle tank (MBT) as well. The need to mount multiple angled plates, along with an outer steel layer to protect the armour array, gives the Challenger and Abrams their "slab sided" look.
The Soviets/Russians had a similar composite armour to the West's own "NERA", with rubber sandwiches between plates of steel. This armour was confirmed to be inside the T-72B's "Super Dolly Parton" armour, but is suspected to be inside the T-80A as well, since it is unlikely the Soviets would put worse armour in their "premier" tank.
Design
Chobham armour defeats HEAT warheads by disrupting the high speed jet generated by the warhead. The outer steel "burster" plate detonates the shell and protects the composite array from the blast, increasing the armour's multi-hit capability. After making it through the burster plate, the jet penetrates into the first NERA plate and begins to compress the elastomer. The elastomer quickly reaches maximum compression and rapidly expands, pushing the two steel plates in opposite directions. It is the movement of the steel plates that disrupts the jet, both by feeding more material into the jet's path and by introducing lateral forces to break the jet apart. The effectiveness of the system was amply demonstrated in Desert Storm, where not a single British Army Challenger tank was lost to enemy tank fire. (However, one Challenger 2 was destroyed by friendly fire on March 25, 2003, during the invasion of Iraq, killing two crew members after a HESH projectile detonated on the commander's hatch, causing high-velocity fragments to enter the turret.) Chobham-type armour is currently in its third generation and is used on modern western tanks such as the British Challenger 2 and the American M1 Abrams. The Abrams is also unique in its usage of depleted uranium armour plates in conjunction with composite armour, increasing overall vehicle protection. The Leopard 2A4 is similar in its use of tungsten inserts.
Use
All modern third-generation main battle tanks use composite armour arrays in their construction. While many of these vehicles feature the composite armour permanently integrated with the vehicle, the Japanese Type 10 and Type 90 Kyū-maru MBTs, French Leclerc, Iranian Karrar, Turkish Altay, Indian Arjun, Italian Ariete and Chinese Type 96/98 and Type 99 tanks use a modular composite armour, where sections of the composite armour can be easily and quickly switched out or upgraded with armour modules. The adoption of modular composite armour design facilitates far more efficient and easier upgrades and exchanges of the armour.
Soviet/Russian main battle tanks such as the T-90 and T-80U and the Chinese Type 96/99s use composite armour in tandem with explosive reactive armour (ERA), making it hard for shaped charge munitions such as HEAT projectiles and missile warheads to penetrate the frontal and a portion of their side armour. The most advanced versions of these armours, such as the Relikt and Kontakt-5 armour, provide protection not only against shaped charges but also kinetic energy penetrators by using the explosive force to shear the projectile apart.
Applique armour has also been used in conjunction with composite armour to provide increased amounts of protection and to supplement existing composite arrays on a vehicle. The German Leopard 2A5 featured distinctive arrowhead-shaped laminated armour modules that were mounted directly onto the turret composite arrays, increasing protection markedly above the previous 2A4 model.
Composite armour has since been applied to smaller vehicles, right down to jeep-sized automobiles. Many of these systems are applied as upgrades to existing armour, which makes them difficult to place around the entire vehicle. Nevertheless, they are often surprisingly effective; upgrades with MEXAS ceramic armour to Canadian M113s were carried out in the 1990s, after it was realized that it would offer more protection than newly built IFVs like the M2 Bradley.
Improvised
In 2004, American Marvin Heemeyer used an ad hoc composite armour on his Komatsu D355A bulldozer (which he called the MK Tank, and which became known in popular culture as the Killdozer) during a rampage in response to a dispute with the city he lived in over a zoning issue. The armour consisted of a thick layer of concrete sandwiched between layers of steel, successfully rendering the vehicle impervious to small arms fire and the small explosives used by law enforcement in an attempt to stop the vehicle.
See also
Advanced Modular Armor Protection (AMAP)
Chobham armour
Combination K
Compound armour
Kanchan armour
MEXAS
Plastic composite
Pykrete
References
Vehicle armour
Armour, Composite | Composite armour | [
"Physics"
] | 1,765 | [
"Materials",
"Composite materials",
"Matter"
] |
81,859 | https://en.wikipedia.org/wiki/James%20Prescott%20Joule | James Prescott Joule (; 24 December 1818 11 October 1889) was an English physicist. Joule studied the nature of heat and discovered its relationship to mechanical work. This led to the law of conservation of energy, which in turn led to the development of the first law of thermodynamics. The SI unit of energy, the joule (J), is named after him.
He worked with Lord Kelvin to develop an absolute thermodynamic temperature scale, which came to be called the Kelvin scale. Joule also made observations of magnetostriction, and he found the relationship between the current through a resistor and the heat dissipated, which is also called Joule's first law. His experiments about energy transformations were first published in 1843.
Early years
James Joule was born in 1818, the son of Benjamin Joule (1784–1858), a wealthy brewer, and his wife, Alice Prescott, on New Bailey Street in Salford. Joule was tutored as a young man by the famous scientist John Dalton and was strongly influenced by chemist William Henry and Manchester engineers Peter Ewart and Eaton Hodgkinson. He was fascinated by electricity, and he and his brother experimented by giving electric shocks to each other and to the family's servants.
As an adult, Joule managed the brewery. Science was merely a serious hobby. Sometime around 1840, he started to investigate the feasibility of replacing the brewery's steam engines with the newly invented electric motor. His first scientific papers on the subject were contributed to William Sturgeon's Annals of Electricity. Joule was a member of the London Electrical Society, established by Sturgeon and others.
Motivated in part by a businessman's desire to quantify the economics of the choice, and in part by his scientific inquisitiveness, he set out to determine which prime mover was more efficient. He discovered Joule's first law in 1841, that "the heat which is evolved by the proper action of any voltaic current is proportional to the square of the intensity of that current, multiplied by the resistance to conduction which it experiences". He went on to realize that burning a pound of coal in a steam engine was more economical than a costly pound of zinc consumed in an electric battery. Joule captured the output of the alternative methods in terms of a common standard, the ability to raise a mass weighing one pound to a height of one foot, the foot-pound.
However, Joule's interest diverted from the narrow financial question to that of how much work could be extracted from a given source, leading him to speculate about the convertibility of energy. In 1843 he published results of experiments showing that the heating effect he had quantified in 1841 was due to generation of heat in the conductor and not its transfer from another part of the equipment. This was a direct challenge to the caloric theory which held that heat could neither be created nor destroyed. Caloric theory had dominated thinking in the science of heat since introduced by Antoine Lavoisier in 1783. Lavoisier's prestige and the practical success of Sadi Carnot's caloric theory of the heat engine since 1824 ensured that the young Joule, working outside either academia or the engineering profession, had a difficult road ahead. Supporters of the caloric theory readily pointed to the symmetry of the Peltier–Seebeck effect to claim that heat and current were convertible in an, at least approximately, reversible process.
The mechanical equivalent of heat
Further experiments and measurements with his electric motor led Joule to estimate the mechanical equivalent of heat as 4.1868 joules per calorie of work to raise the temperature of one gram of water by one kelvin. He announced his results at a meeting of the chemical section of the British Association for the Advancement of Science in Cork in August 1843 and was met by silence.
Joule was undaunted and started to seek a purely mechanical demonstration of the conversion of work into heat. By forcing water through a perforated cylinder, he could measure the slight viscous heating of the fluid. He obtained a mechanical equivalent of . The fact that the values obtained both by electrical and purely mechanical means were in agreement to at least two significant digits was, to Joule, compelling evidence of the reality of the convertibility of work into heat.
Joule now tried a third route. He measured the heat generated against the work done in compressing a gas. He obtained a mechanical equivalent of . In many ways, this experiment offered the easiest target for Joule's critics but Joule disposed of the anticipated objections by clever experimentation. Joule read his paper to the Royal Society on 20 June 1844, but his paper was rejected for publication by the Royal Society and he had to be content with publishing in the Philosophical Magazine in 1845. In the paper he was forthright in his rejection of the caloric reasoning of Carnot and Émile Clapeyron, a rejection partly theologically driven:
Joule here adopts the language of vis viva (energy), possibly because Hodgkinson had read a review of Ewart's On the measure of moving force to the Literary and Philosophical Society in April 1844.
In June 1845, Joule read his paper On the Mechanical Equivalent of Heat to the British Association meeting in Cambridge. In this work, he reported his best-known experiment, involving the use of a falling weight, in which gravity does the mechanical work, to spin a paddle wheel in an insulated barrel of water which increased the temperature. He now estimated a mechanical equivalent of . He wrote a letter to the Philosophical Magazine, published in September 1845 describing his experiment.
In 1850, Joule published a refined measurement of , closer to twentieth century estimates.
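As a rough modern cross-check of what such figures mean in SI units, the sketch below converts the 772.55 foot-pound value later inscribed on Joule's gravestone (see the Honours section) into joules per calorie. The conversion factors are present-day definitions assumed for this example, not data from the article.

```python
# Cross-check of Joule's figure of 772.55 foot-pounds: the work needed at sea level
# to raise the temperature of one pound of water by one degree Fahrenheit.
# All conversion factors below are modern definitions, assumed for illustration.
FT_LBF_TO_J = 1.3558179   # joules per foot-pound force
LB_TO_G = 453.59237       # grams per pound
DEG_F_TO_K = 5.0 / 9.0    # kelvins per degree Fahrenheit

work_joules = 772.55 * FT_LBF_TO_J      # mechanical work, in joules
heat_calories = LB_TO_G * DEG_F_TO_K    # heat, in calories (1 cal warms 1 g of water by 1 K)

print(round(work_joules / heat_calories, 3))  # ~4.157 J/cal, within about 1% of the modern 4.1868
```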
Reception and priority
Much of the initial resistance to Joule's work stemmed from its dependence upon extremely precise measurements. He claimed to be able to measure temperatures to within 1/200 of a degree Fahrenheit (3 mK). Such precision was certainly uncommon in contemporary experimental physics but his doubters may have neglected his experience in the art of brewing and his access to its practical technologies. He was also ably supported by scientific instrument-maker John Benjamin Dancer. Joule's experiments complemented the theoretical work of Rudolf Clausius, who is considered by some to be the coinventor of the energy concept.
Joule was proposing a kinetic theory of heat (he believed it to be a form of rotational, rather than translational, kinetic energy), and this required a conceptual leap: if heat was a form of molecular motion, why did the motion of the molecules not gradually die out? Joule's ideas required one to believe that the collisions of molecules were perfectly elastic. Importantly, the very existence of atoms and molecules was not widely accepted for another 50 years, though the essential work on the existence of molecules, atoms and electrons was underway throughout the 19th and early 20th centuries, from that of John Dalton through to Ernest Rutherford. A collection of Dalton’s works was published in 1893, 49 years after his death.
Although it may be hard today to understand the allure of the caloric theory, at the time it seemed to have some clear advantages. Carnot's successful theory of heat engines had also been based on the caloric assumption, and only later was it proved by Lord Kelvin that Carnot's mathematics were equally valid without assuming a caloric fluid.
However, in Germany, Hermann Helmholtz became aware both of Joule's work and the similar 1842 work of Julius Robert von Mayer. Though both men had been neglected since their respective publications, Helmholtz's definitive 1847 declaration of the conservation of energy credited them both.
Also in 1847, another of Joule's presentations at the British Association in Oxford was attended by George Gabriel Stokes, Michael Faraday, and the precocious and maverick William Thomson, later to become Lord Kelvin, who had just been appointed professor of natural philosophy at the University of Glasgow. Stokes was "inclined to be a Joulite" and Faraday was "much struck with it" though he harboured doubts. Thomson was intrigued but sceptical.
Unanticipated, Thomson and Joule met later that year in Chamonix. Joule married Amelia Grimes on 18 August and the couple went on honeymoon. Marital enthusiasm notwithstanding, Joule and Thomson arranged to attempt an experiment a few days later to measure the temperature difference between the top and bottom of the Cascade de Sallanches waterfall, though this subsequently proved impractical.
Though Thomson felt that Joule's results demanded theoretical explanation, he retreated into a spirited defence of the Carnot–Clapeyron school. In his 1848 account of absolute temperature, Thomson wrote that "the conversion of heat (or caloric) into mechanical effect is probably impossible, certainly undiscovered" – but a footnote signalled his first doubts about the caloric theory, referring to Joule's "very remarkable discoveries". Surprisingly, Thomson did not send Joule a copy of his paper but when Joule eventually read it he wrote to Thomson on 6 October, claiming that his studies had demonstrated conversion of heat into work but that he was planning further experiments. Thomson replied on the 27th, revealing that he was planning his own experiments and hoping for a reconciliation of their two views. Though Thomson conducted no new experiments, over the next two years he became increasingly dissatisfied with Carnot's theory and convinced of Joule's. In his 1851 paper, Thomson was willing to go no further than a compromise and declared "the whole theory of the motive power of heat is founded on two propositions, due respectively to Joule, and to Carnot and Clausius".
As soon as Joule read the paper he wrote to Thomson with his comments and questions. Thus began a fruitful, though largely epistolary, collaboration between the two men, Joule conducting experiments, Thomson analysing the results and suggesting further experiments. The collaboration lasted from 1852 to 1856, its discoveries including the Joule–Thomson effect, and the published results did much to bring about general acceptance of Joule's work and the kinetic theory.
Kinetic theory
Kinetics is the science of motion. Joule was a pupil of Dalton and it is no surprise that he had learned a firm belief in the atomic theory, even though there were many scientists of his time who were still skeptical. He had also been one of the few people receptive to the neglected work of John Herapath on the kinetic theory of gases. He was further profoundly influenced by Peter Ewart's 1813 paper "On the measure of moving force".
Joule perceived the relationship between his discoveries and the kinetic theory of heat. His laboratory notebooks reveal that he believed heat to be a form of rotational, rather than translational motion.
Joule could not resist finding antecedents of his views in Francis Bacon, Sir Isaac Newton, John Locke, Benjamin Thompson (Count Rumford) and Sir Humphry Davy. Though such views are justified, Joule went on to estimate a value for the mechanical equivalent of heat of 1,034 foot-pounds from Rumford's publications. Some modern writers have criticised this approach on the grounds that Rumford's experiments in no way represented systematic quantitative measurements. In one of his personal notes, Joule contends that Mayer's measurement was no more accurate than Rumford's, perhaps in the hope that Mayer had not anticipated his own work.
Joule has been attributed with explaining the sunset green flash phenomenon in a letter to the Manchester Literary and Philosophical Society in 1869; actually, he merely noted (with a sketch) the last glimpse as bluish green, without attempting to explain the cause of the phenomenon.
Published work
Read before the British Association at Cambridge, June 1845.
Honours
Joule died at home in Sale and is buried in Brooklands cemetery there. His gravestone is inscribed with the number "772.55", his climacteric 1878 measurement of the mechanical equivalent of heat, in which he found that this amount of foot-pounds of work must be expended at sea level to raise the temperature of one pound of water by one degree Fahrenheit. There is also a quotation from the Gospel of John: "I must work the work of him that sent me, while it is day: the night cometh, when no man can work". The Wetherspoons pub in Sale, the town of his death, is named "The J. P. Joule" after him.
Joule's many honours and commendations include:
Fellow of the Royal Society (1850)
Royal Medal (1852), 'For his paper on the mechanical equivalent of heat, printed in the Philosophical Transactions for 1850'
Copley Medal (1870), 'For his experimental researches on the dynamical theory of heat'
President of Manchester Literary and Philosophical Society (1860)
President of the British Association for the Advancement of Science (1872, 1887)
Honorary Membership of the Institution of Engineers and Shipbuilders in Scotland (1857)
Honorary degrees:
LL.D., Trinity College, Dublin (1857)
DCL, University of Oxford (1860)
LL.D., University of Edinburgh (1871)
Joule received a civil list pension of £200 per annum in 1878 for services to science
Albert Medal of the Royal Society of Arts (1880), 'for having established, after most laborious research, the true relation between heat, electricity and mechanical work, thus affording to the engineer a sure guide in the application of science to industrial pursuits'
There is a memorial to Joule in the north choir aisle of Westminster Abbey, though he is not buried there, contrary to what some biographies state. A statue of Joule by Alfred Gilbert stands in Manchester Town Hall, opposite that of Dalton.
Family
Joule married Amelia Grimes in 1847. She died in 1854, seven years after their wedding. They had three children together: a son, Benjamin Arthur Joule (1850–1922), a daughter, Alice Amelia (1852–1899), and a second son, Joe (born 1854, died three weeks later).
See also
Latent heat
Sensible heat
Internal energy
References
Notes
Citations
Sources
Further reading
Fox, R, "James Prescott Joule, 1818–1889", in
External links
The scientific papers of James Prescott Joule (1884) – annotated by Joule
The joint scientific papers of James Prescott Joule (1887) – annotated by Joule
Classic papers of 1845 and 1847 at ChemTeam website On the Mechanical Equivalent of Heat and On the Existence of an Equivalent Relation between Heat and the ordinary Forms of Mechanical Power
Joule's water friction apparatus at London Science Museum
Some Remarks on Heat and the Constitution of Elastic Fluids, Joule's 1851 estimate of the speed of a gas molecule
Joule Manuscripts at the University of Manchester Library.
University of Manchester material on Joule – includes photographs of Joule's house and gravesite
Joule Physics Laboratory at the University of Salford
1818 births
1889 deaths
19th-century British physicists
English brewers
English physicists
Fellows of the American Academy of Arts and Sciences
Fellows of the Royal Society
Foreign associates of the National Academy of Sciences
Fluid dynamicists
History of Greater Manchester
People associated with electricity
People associated with energy
Scientists from Salford
Recipients of the Copley Medal
Royal Medal winners
Thermodynamicists
Manchester Literary and Philosophical Society | James Prescott Joule | [
"Physics",
"Chemistry"
] | 3,131 | [
"Fluid dynamics",
"Fluid dynamicists",
"Thermodynamics",
"Thermodynamicists"
] |
81,884 | https://en.wikipedia.org/wiki/Newton%27s%20law%20of%20cooling | In the study of heat transfer, Newton's law of cooling is a physical law which states that the rate of heat loss of a body is directly proportional to the difference in the temperatures between the body and its environment. The law is frequently qualified to include the condition that the temperature difference is small and the nature of heat transfer mechanism remains the same. As such, it is equivalent to a statement that the heat transfer coefficient, which mediates between heat losses and temperature differences, is a constant.
In heat conduction, Newton's Law is generally followed as a consequence of Fourier's law. The thermal conductivity of most materials is only weakly dependent on temperature, so the constant heat transfer coefficient condition is generally met. In convective heat transfer, Newton's Law is followed for forced air or pumped fluid cooling, where the properties of the fluid do not vary strongly with temperature, but it is only approximately true for buoyancy-driven convection, where the velocity of the flow increases with temperature difference. In the case of heat transfer by thermal radiation, Newton's law of cooling holds only for very small temperature differences.
When stated in terms of temperature differences, Newton's law (with several further simplifying assumptions, such as a low Biot number and a temperature-independent heat capacity) results in a simple differential equation expressing temperature-difference as a function of time. The solution to that equation describes an exponential decrease of temperature-difference over time. This characteristic decay of the temperature-difference is also associated with Newton's law of cooling.
Historical background
Isaac Newton published his work on cooling anonymously in 1701 as "Scala graduum Caloris" in Philosophical Transactions.
Newton did not originally state his law in the above form in 1701. Rather, using today's terms, Newton noted after some mathematical manipulation that the rate of temperature change of a body is proportional to the difference in temperatures between the body and its surroundings. This final simplest version of the law, given by Newton himself, was partly due to confusion in Newton's time between the concepts of heat and temperature, which would not be fully disentangled until much later.
In 2020, Maruyama and Moriya repeated Newton's experiments with modern apparatus, and they applied modern data reduction techniques. In particular, these investigators took account of thermal radiation at high temperatures (as for the molten metals Newton used), and they accounted for buoyancy effects on the air flow. By comparison to Newton's original data, they concluded that his measurements (from 1692 to 1693) had been "quite accurate".
Relationship to mechanism of cooling
Convection cooling is sometimes said to be governed by "Newton's law of cooling." When the heat transfer coefficient is independent, or relatively independent, of the temperature difference between object and environment, Newton's law is followed. The law holds well for forced air and pumped liquid cooling, where the fluid velocity does not rise with increasing temperature difference. Newton's law is most closely obeyed in purely conduction-type cooling. However, the heat transfer coefficient is a function of the temperature difference in natural convective (buoyancy driven) heat transfer. In that case, Newton's law only approximates the result when the temperature difference is relatively small. Newton himself realized this limitation.
A correction to Newton's law concerning convection for larger temperature differentials, made by including an exponent, was published in 1817 by Dulong and Petit. (These men are better known for their formulation of the Dulong–Petit law concerning the molar specific heat capacity of a crystal.)
Another situation that does not obey Newton's law is radiative heat transfer. Radiative cooling is better described by the Stefan–Boltzmann law in which the heat transfer rate varies as the difference in the 4th powers of the absolute temperatures of the object and of its environment.
Mathematical formulation of Newton's law
The statement of Newton's law used in the heat transfer literature puts into mathematics the idea that the rate of heat loss of a body is proportional to the difference in temperatures between the body and its surroundings. For a temperature-independent heat transfer coefficient, the statement is:

q = h (T(t) − T_env) = h ΔT(t)

where
q is the heat flux transferred out of the body (SI unit: watt/m2),
h is the heat transfer coefficient (assumed independent of T and averaged over the surface) (SI unit: W/(m2⋅K)),
T is the temperature of the object's surface (SI unit: K),
T_env is the temperature of the environment; i.e., the temperature suitably far from the surface (SI unit: K),
ΔT(t) = T(t) − T_env is the time-dependent temperature difference between environment and object (SI unit: K).
In terms of global parameters, obtained by integrating the heat flux over the heat transfer surface area, the law can also be stated as:

Q̇ = ∫_A h (T(t) − T_env) dA

where
Q̇ is the rate of heat transfer out of the body (SI unit: watt),
h is the heat transfer coefficient (assumed independent of T and averaged over the surface) (SI unit: W/(m2⋅K)),
A is the heat transfer surface area (SI unit: m2),
T is the temperature of the object's surface (SI unit: K),
T_env is the temperature of the environment; i.e., the temperature suitably far from the surface (SI unit: K),
ΔT(t) = T(t) − T_env is the time-dependent temperature difference between environment and object (SI unit: K).
If the heat transfer coefficient and the temperature difference are uniform along the heat transfer surface, the above formula simplifies to:

Q̇ = h A (T(t) − T_env) = h A ΔT(t).
The heat transfer coefficient h depends upon physical properties of the fluid and the physical situation in which convection occurs. Therefore, a single usable heat transfer coefficient (one that does not vary significantly across the temperature-difference ranges covered during cooling and heating) must be derived or found experimentally for every system that is to be analyzed.
Formulas and correlations are available in many references to calculate heat transfer coefficients for typical configurations and fluids. For laminar flows, the heat transfer coefficient is usually smaller than in turbulent flows because turbulent flows have strong mixing within the boundary layer on the heat transfer surface. Note the heat transfer coefficient changes in a system when a transition from laminar to turbulent flow occurs.
The Biot number
The Biot number, a dimensionless quantity, is defined for a body as

Bi = h LC / kb
where
h = film coefficient or heat transfer coefficient or convective heat transfer coefficient,
LC = characteristic length, which is commonly defined as the volume of the body divided by the surface area of the body, such that LC = Vbody / Asurface,
kb = thermal conductivity of the body.
The physical significance of Biot number can be understood by imagining the heat flow from a hot metal sphere suddenly immersed in a pool to the surrounding fluid. The heat flow experiences two resistances: the first outside the surface of the sphere, and the second within the solid metal (which is influenced by both the size and composition of the sphere). The ratio of these resistances is the dimensionless Biot number.
If the thermal resistance at the fluid/sphere interface exceeds that thermal resistance offered by the interior of the metal sphere, the Biot number will be less than one. For systems where it is much less than one, the interior of the sphere may be presumed always to have the same temperature, although this temperature may be changing, as heat passes into the sphere from the surface. The equation to describe this change in (relatively uniform) temperature inside the object, is the simple exponential one described in Newton's law of cooling expressed in terms of temperature difference (see below).
In contrast, the metal sphere may be large, causing the characteristic length to increase to the point that the Biot number is larger than one. In this case, temperature gradients within the sphere become important, even though the sphere material is a good conductor. Equivalently, if the sphere is made of a thermally insulating (poorly conductive) material, such as wood or styrofoam, the interior resistance to heat flow will exceed that at the fluid/sphere boundary, even with a much smaller sphere. In this case, again, the Biot number will be greater than one.
Values of the Biot number smaller than 0.1 imply that the heat conduction inside the body is much faster than the heat convection away from its surface, and temperature gradients are negligible inside of it. This can indicate the applicability (or inapplicability) of certain methods of solving transient heat transfer problems. For example, a Biot number less than 0.1 typically indicates less than 5% error will be present when assuming a lumped-capacitance model of transient heat transfer (also called lumped system analysis). Typically, this type of analysis leads to simple exponential heating or cooling behavior ("Newtonian" cooling or heating) since the internal energy of the body is directly proportional to its temperature, which in turn determines the rate of heat transfer into or out of it. This leads to a simple first-order differential equation which describes heat transfer in these systems.
Having a Biot number smaller than 0.1 labels a substance as "thermally thin," and temperature can be assumed to be constant throughout the material's volume. The opposite is also true: A Biot number greater than 0.1 (a "thermally thick" substance) indicates that one cannot make this assumption, and more complicated heat transfer equations for "transient heat conduction" will be required to describe the time-varying and non-spatially-uniform temperature field within the material body. Analytic methods for handling these problems, which may exist for simple geometric shapes and uniform material thermal conductivity, are described in the article on the heat equation.
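As a concrete illustration of this criterion, the sketch below computes the Biot number for a small copper sphere cooling in still air; the property values and the heat transfer coefficient are typical textbook figures assumed for the example rather than data taken from this article.

```python
import math

# Biot number check for a small copper sphere cooling in still air.
# All property values are assumed, order-of-magnitude figures for illustration.
h = 10.0        # convective heat transfer coefficient, W/(m^2*K), free convection in air
k_body = 400.0  # thermal conductivity of copper, W/(m*K)
radius = 0.01   # sphere radius, m

volume = (4.0 / 3.0) * math.pi * radius ** 3
area = 4.0 * math.pi * radius ** 2
L_c = volume / area          # characteristic length V/A, which is r/3 for a sphere

biot = h * L_c / k_body
print(f"Bi = {biot:.1e}")    # about 8e-05, far below 0.1, so lumped-capacitance analysis applies
```

Repeating the check with a poorly conducting material such as wood (thermal conductivity on the order of 0.1 W/(m·K)) pushes the Biot number above 0.1, matching the "thermally thick" case discussed above.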
Application of Newton's law of transient cooling
Simple solutions for transient cooling of an object may be obtained when the internal thermal resistance within the object is small in comparison to the resistance to heat transfer away from the object's surface (by external conduction or convection), which is the condition for which the Biot number is less than about 0.1. This condition allows the presumption of a single, approximately uniform temperature inside the body, which varies in time but not with position. (Otherwise the body would have many different temperatures inside it at any one time.) This single temperature will generally change exponentially as time progresses (see below).
The condition of low Biot number leads to the so-called lumped capacitance model. In this model, the internal energy (the amount of thermal energy in the body) is calculated by assuming a constant heat capacity. In that case, the internal energy of the body is a linear function of the body's single internal temperature.
The lumped capacitance solution that follows assumes a constant heat transfer coefficient, as would be the case in forced convection. For free convection, the lumped capacitance model can be solved with a heat transfer coefficient that varies with temperature difference.
First-order transient response of lumped-capacitance objects
A body treated as a lumped capacitance object, with a total internal energy of U (in joules), is characterized by a single uniform internal temperature, T(t). The heat capacitance, C, of the body is measured in J/K, for the case of an incompressible material. The internal energy may be written in terms of the temperature of the body, the heat capacitance (taken to be independent of temperature), and a reference temperature T_ref at which the internal energy is zero:

U = C (T − T_ref)

Differentiating with respect to time gives:

dU/dt = C dT/dt

Applying the first law of thermodynamics to the lumped object gives dU/dt = −Q̇, where the rate of heat transfer out of the body, Q̇, may be expressed by Newton's law of cooling, and where no work transfer occurs for an incompressible material. Thus,

dT(t)/dt = −(h A / C) (T(t) − T_env) = −(1/τ) ΔT(t)

where the time constant of the system is τ = C / (h A). The heat capacitance may be written in terms of the object's specific heat capacity, c (J/kg-K), and mass, m (kg). The time constant is then τ = m c / (h A).

When the environmental temperature is constant in time, we may define ΔT(t) = T(t) − T_env. The equation becomes

dΔT(t)/dt = −(1/τ) ΔT(t)

The solution of this differential equation, by integration from the initial condition, is

ΔT(t) = ΔT(0) e^(−t/τ)

where ΔT(0) is the temperature difference at time 0. Reverting to temperature, the solution is

T(t) = T_env + (T(0) − T_env) e^(−t/τ)
The temperature difference between the body and the environment decays exponentially as a function of time.
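The exponential solution lends itself to a quick numerical sanity check. The sketch below uses invented values for a small aluminium block (they are assumptions for illustration, not data from the article) and compares the closed-form solution with a crude explicit Euler integration of the same differential equation.

```python
import math

# Lumped-capacitance cooling: closed-form solution versus explicit Euler integration.
# The object (a 0.5 kg aluminium block in a light breeze) is invented for illustration.
m, c_p = 0.5, 900.0        # mass (kg) and specific heat capacity (J/(kg*K))
h, A = 25.0, 0.03          # heat transfer coefficient (W/(m^2*K)) and surface area (m^2)
T0, T_env = 90.0, 20.0     # initial body temperature and environment temperature (deg C)

tau = m * c_p / (h * A)    # time constant in seconds (600 s for these numbers)

def analytic(t):
    """Closed-form solution T(t) = T_env + (T0 - T_env) * exp(-t / tau)."""
    return T_env + (T0 - T_env) * math.exp(-t / tau)

# Explicit Euler integration of dT/dt = -(T - T_env) / tau with a 1 s step.
dt, T = 1.0, T0
for _ in range(30 * 60):
    T += dt * (-(T - T_env) / tau)

print(f"tau = {tau:.0f} s")
print(f"T after 30 min: Euler = {T:.2f} C, analytic = {analytic(30 * 60):.2f} C")
```

Both routes agree to within a few hundredths of a degree for this step size, which is the expected behaviour when the lumped-capacitance assumption holds.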
Standard Formulation
By defining the time constant τ as above, the differential equation becomes

dT(t)/dt = −(1/τ) (T(t) − T_env)

where
dT(t)/dt is the rate of heat loss (SI unit: K/second),
T is the temperature of the object's surface (SI unit: K),
T_env is the temperature of the environment; i.e., the temperature suitably far from the surface (SI unit: K),
τ is the coefficient of heat transfer (SI unit: second).
Solving the initial-value problem using separation of variables gives

T(t) = T_env + (T(0) − T_env) e^(−t/τ).
See also
Thermal transmittance
List of thermal conductivities
Convection–diffusion equation
R-value (insulation)
Heat pipe
Fick's laws of diffusion
Relativistic heat conduction
Churchill–Bernstein equation
Fourier number
Biot number
False diffusion
Mpemba effect
References
See also:
Dehghani, F 2007, CHNG2801 – Conservation and Transport Processes: Course Notes, University of Sydney, Sydney
External links
Heat conduction – Thermal-FluidsPedia
Newton's Law of Cooling by Jeff Bryant based on a program by Stephen Wolfram, Wolfram Demonstrations Project.
A Heat Transfer Textbook, 5/e, free ebook.
Equations of physics
Heat conduction
Heat transfer
Isaac Newton
History of physics
Scientific observation
Experimental physics
Eponymous laws of physics | Newton's law of cooling | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,765 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Equations of physics",
"Mathematical objects",
"Equations",
"Experimental physics",
"Thermodynamics",
"Heat conduction"
] |
81,959 | https://en.wikipedia.org/wiki/Metacharacter | A metacharacter is a character that has a special meaning to a computer program, such as a shell interpreter or a regular expression (regex) engine.
In POSIX extended regular expressions, there are 14 metacharacters that must be escaped — preceded by a backslash (\) — in order to drop their special meaning and be treated literally inside an expression: opening and closing square brackets ([ and ]); backslash (\); caret (^); dollar sign ($); period/full stop/dot (.); vertical bar/pipe symbol (|); question mark (?); asterisk (*); plus sign (+); opening and closing curly brackets/braces ({ and }); and opening and closing parentheses (( and )).
For example, to match the arithmetic expression (1+1)*3=6 with a regex, the correct regex is \(1\+1\)\*3=6; otherwise, the parentheses, plus sign, and asterisk will have special meanings.
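The effect of escaping can be checked directly with a regular expression engine. The short sketch below uses Python's re module purely as an illustration; its treatment of these particular metacharacters mirrors the POSIX behaviour described above, but the module itself is not part of the POSIX specification.

```python
import re

text = "(1+1)*3=6"

# Unescaped: the parentheses group, and + and * act as quantifiers, so the
# pattern matches strings such as "11113=6" rather than the literal expression.
print(re.fullmatch(r"(1+1)*3=6", text))        # None, no literal match

# Escaped: each metacharacter is preceded by a backslash and matched literally.
print(re.fullmatch(r"\(1\+1\)\*3=6", text))    # matches the whole string "(1+1)*3=6"
```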
Other examples
Some other characters may have special meaning in some environments.
In some Unix shells the semicolon (";") is a statement separator.
In XML and HTML, the ampersand ("&") introduces an HTML entity. It also has special meaning in MS-DOS/Windows Command Prompt.
In some Unix shells and MS-DOS/Windows Command Prompt, the less-than sign and greater-than sign ("<" and ">") are used for redirection and the backtick/grave accent ("`") is used for command substitution.
In many programming languages, strings are delimited using quotes (" or '). In some cases, escape characters (and other methods) are used to avoid delimiter collision, e.g. "He said, \"Hello\"".
In printf format strings, the percent sign ("%") is used to introduce format specifiers and must be escaped as "%%" to be interpreted literally. In SQL, the percent is used as a wildcard character.
In SQL, the underscore ("_") is used to match any single character.
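Two of the cases above can be demonstrated in a few lines; the snippet below uses Python's printf-style string formatting and the built-in sqlite3 module as stand-ins for C's printf and a full SQL database, so the table and file names are invented for the example.

```python
import sqlite3

# printf-style formatting: "%" introduces a specifier, so a literal percent sign
# must be written as "%%".
print("progress: %d%%" % 75)                     # progress: 75%

# SQL LIKE wildcards: "%" matches any run of characters, "_" matches exactly one.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (name TEXT)")
db.executemany("INSERT INTO files VALUES (?)", [("a.txt",), ("ab.txt",), ("a.csv",)])
print(db.execute("SELECT name FROM files WHERE name LIKE 'a_.txt' ORDER BY name").fetchall())
# [('ab.txt',)]
print(db.execute("SELECT name FROM files WHERE name LIKE '%.txt' ORDER BY name").fetchall())
# [('a.txt',), ('ab.txt',)]
```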
Escaping
The term "to escape a metacharacter" means to make the metacharacter ineffective (to strip it of its special meaning), causing it to have its literal meaning. For example, in PCRE, a dot (".") stands for any single character. The regular expression "A.C" will match "ABC", "A3C", or even "A C". However, if the "." is escaped, it will lose its meaning as a metacharacter and will be interpreted literally as ".", causing the regular expression "A\.C" to only match the string "A.C".
The usual way to escape a character in a regex and elsewhere is by prefixing it with a backslash ("\"). Other environments may employ different methods, like MS-DOS/Windows Command Prompt, where a caret ("^") is used instead.
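A brief sketch of the dot-escaping example above, again using Python's re module as a stand-in for PCRE (the behaviour shown is common to both):

    import re

    for text in ("ABC", "A3C", "A C", "A.C"):
        print(text,
              bool(re.fullmatch(r"A.C", text)),    # unescaped dot: matches any single character
              bool(re.fullmatch(r"A\.C", text)))   # escaped dot: matches a literal '.' only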
See also
Markup language
References
Formal languages
Pattern matching
Programming language topics | Metacharacter | [
"Mathematics",
"Engineering"
] | 684 | [
"Software engineering",
"Formal languages",
"Mathematical logic",
"Programming language topics"
] |
82,104 | https://en.wikipedia.org/wiki/Bioprospecting | Bioprospecting (also known as biodiversity prospecting) is the exploration of natural sources for small molecules, macromolecules and biochemical and genetic information that could be developed into commercially valuable products for the agricultural, aquaculture, bioremediation, cosmetics, nanotechnology, or pharmaceutical industries. In the pharmaceutical industry, for example, almost one third of all small-molecule drugs approved by the U.S. Food and Drug Administration (FDA) between 1981 and 2014 were either natural products or compounds derived from natural products.
Terrestrial plants, fungi and actinobacteria have been the focus of many past bioprospecting programs, but interest is growing in less explored ecosystems (e.g. seas and oceans) and organisms (e.g. myxobacteria, archaea) as a means of identifying new compounds with novel biological activities. Species may be randomly screened for bioactivity or rationally selected and screened based on ecological, ethnobiological, ethnomedical, historical or genomic information.
When a region's biological resources or indigenous knowledge are unethically appropriated or commercially exploited without providing fair compensation, this is known as biopiracy. Various international treaties have been negotiated to provide countries legal recourse in the event of biopiracy and to offer commercial actors legal certainty for investment. These include the UN Convention on Biological Diversity and the Nagoya Protocol. The WIPO is currently negotiating more treaties to bridge gaps in this field.
Other risks associated with bioprospecting are the overharvesting of individual species and environmental damage, but legislation has been developed to combat these also. Examples include national laws such as the US Marine Mammal Protection Act and US Endangered Species Act, and international treaties such as the UN Convention on Biological Diversity, the UN Convention on the Law of the Sea, the Biodiversity Beyond National Jurisdictions Treaty, and the Antarctic Treaty.
Bioprospecting-derived resources and products
Agriculture
Bioprospecting-derived resources and products used in agriculture include biofertilizers, biopesticides and veterinary antibiotics. Rhizobium is a genus of soil bacteria used as biofertilizers, Bacillus thuringiensis (also called Bt) and the annonins (obtained from seeds of the plant Annona squamosa) are examples of biopesticides, and valnemulin and tiamulin (discovered and developed from the basidiomycete fungi Omphalina mutila and Clitopilus passeckerianus) are examples of veterinary antibiotics.
Bioremediation
Examples of bioprospecting products used in bioremediation include Coriolopsis gallica- and Phanerochaete chrysosporium-derived laccase enzymes, used for treating beer factory wastewater and for dechlorinating and decolorizing paper mill effluent.
Cosmetics and personal care
Cosmetics and personal care products obtained from bioprospecting include Porphyridium cruentum-derived oligosaccharide and oligoelement blends used to treat erythema (rosacea, flushing and dark circles), Xanthobacter autotrophicus-derived zeaxanthin used for skin hydration and UV protection, Clostridium histolyticum-derived collagenases used for skin regeneration, and Microsporum-derived keratinases used for hair removal.
Nanotechnology and biosensors
Because microbial laccases have a broad substrate range, they can be used in biosensor technology to detect a wide range of organic compounds. For example, laccase-containing electrodes are used to detect polyphenolic compounds in wine, and lignins and phenols in wastewater.
Pharmaceuticals
Many of the antibacterial drugs in current clinical use were discovered through bioprospecting including the aminoglycosides, tetracyclines, amphenicols, polymyxins, cephalosporins and other β-lactam antibiotics, macrolides, pleuromutilins, glycopeptides, rifamycins, lincosamides, streptogramins, and phosphonic acid antibiotics. The aminoglycoside antibiotic streptomycin, for example, was discovered from the soil bacterium Streptomyces griseus, the fusidane antibiotic fusidic acid was discovered from the soil fungus Acremonium fusidioides, and the pleuromutilin antibiotics (eg. lefamulin) were discovered and developed from the basidiomycete fungi Omphalina mutila and Clitopilus passeckerianus.
Other examples of bioprospecting-derived anti-infective drugs include the antifungal drug griseofulvin (discovered from the soil fungus Penicillium griseofulvum), the antifungal and antileishmanial drug amphotericin B (discovered from the soil bacterium Streptomyces nodosus), the antimalarial drug artemisinin (discovered from the plant Artemisia annua), and the antihelminthic drug ivermectin (developed from the soil bacterium Streptomyces avermitilis).
Bioprospecting-derived pharmaceuticals have been developed for the treatment of non-communicable diseases and conditions too. These include the anticancer drug bleomycin (obtained from the soil bacterium Streptomyces verticillus), the immunosuppressant drug ciclosporin used to treat autoimmune diseases such as rheumatoid arthritis and psoriasis (obtained from the soil fungus Tolypocladium inflatum), the anti-inflammatory drug colchicine used to treat and prevent gout flares (obtained from the plant Colchicum autumnale), the analgesic drug ziconotide (developed from the cone snail Conus magus), and the acetylcholinesterase inhibitor galantamine used to treat Alzheimer's disease (obtained from plants in the Galanthus genus).
Bioprospecting as a discovery strategy
Bioprospecting has both strengths and weaknesses as a strategy for discovering new genes, molecules, and organisms suitable for development and commercialization.
Strengths
Bioprospecting-derived small molecules (also known as natural products) are more structurally complex than synthetic chemicals, and therefore show greater specificity towards biological targets. This is a big advantage in drug discovery and development, especially pharmacological aspects of drug discovery and development, where off-target effects can cause adverse drug reactions.
Natural products are also more amenable to membrane transport than synthetic compounds. This is advantageous when developing antibacterial drugs, which may need to traverse both an outer membrane and plasma membrane to reach their target.
For some biotechnological innovations to work, it is important to have enzymes that function at unusually high or low temperatures. An example of this is the polymerase chain reaction (PCR), which is dependent on a DNA polymerase that can operate at 60°C and above. In other situations, for example dephosphorylation, it can be desirable to run the reaction at low temperature. Extremophile bioprospecting is an important source of such enzymes, yielding thermostable enzymes such as Taq polymerase (from Thermus aquaticus), and cold-adapted enzymes such as shrimp alkaline phosphatase (from Pandalus borealis).
With the Convention on Biological Diversity (CBD) now ratified by most countries, bioprospecting has the potential to bring biodiversity-rich and technologically advanced nations together, and benefit them both educationally and economically (eg. information sharing, technology transfer, new product development, royalty payment).
For useful molecules identified through microbial bioprospecting, scale up of production is feasible at reasonable cost because the producing microorganism can be cultured in a bioreactor.
Weaknesses
Although some potentially very useful microorganisms are known to exist in nature (eg. lignocellulose-metabolizing microbes), difficulties have been encountered cultivating these in a laboratory setting. This problem may be resolvable by genetically manipulating easier-to-culture organisms such as Escherichia coli or Streptomyces coelicolor to express the gene cluster responsible for the desired activity.
Isolating and identifying the compound(s) responsible for a biological extract's activity can be difficult. Also, subsequent elucidation of the mechanism of action of the isolated compound can be time-consuming. Technological advancements in liquid chromatography, mass spectrometry and other techniques are helping to overcome these challenges.
Implementing and enforcing bioprospecting-related treaties and legislation is not always easy. Drug development is an inherently expensive and time-consuming process with low success rates, and this makes it difficult to quantify the value of potential products when drafting bioprospecting agreements. Intellectual property rights may be difficult to award too. For example, legal rights to a medicinal plant may be disputable if it has been discovered by different people in different parts of the world at different times.
Whilst the structural complexity of natural products is generally advantageous in drug discovery, it can make the subsequent manufacture of drug candidates difficult. This problem is sometimes resolvable by identifying the part of the natural product structure responsible for activity and developing a simplified synthetic analogue. This was necessary with the natural product halichondrin B, its simplified analogue eribulin now approved and marketed as an anticancer drug.
Bioprospecting pitfalls
Errors and oversights can occur at different steps in the bioprospecting process including collection of source material, screening source material for bioactivity, testing isolated compounds for toxicity, and identification of mechanism of action.
Collection of source material
Prior to collecting biological material or traditional knowledge, the correct permissions must be obtained from the source country, land owner etc. Failure to do so can result in criminal proceedings and rejection of any subsequent patent applications. It is also important to collect biological material in adequate quantities, to have biological material formally identified, and to deposit a voucher specimen with a repository for long-term preservation and storage. This helps ensure any important discoveries are reproducible.
Bioactivity and toxicity testing
When testing extracts and isolated compounds for bioactivity and toxicity, the use of standard protocols (eg. CLSI, ISO, NIH, EURL ECVAM, OECD) is desirable because this improves test result accuracy and reproducibility. Also, if the source material is likely to contain known (previously discovered) active compounds (eg. streptomycin in the case of actinomycetes), then dereplication is necessary to exclude these extracts and compounds from the discovery pipeline as early as possible. In addition, it is important to consider solvent effects on the cells or cell lines being tested, to include reference compounds (ie. pure chemical compounds for which accurate bioactivity and toxicity data are available), to set limits on cell line passage number (eg. 10–20 passages), to include all the necessary positive and negative controls, and to be aware of assay limitations. These steps help ensure assay results are accurate, reproducible and interpreted correctly.
Identification of mechanism of action
When attempting to elucidate the mechanism of action of an extract or isolated compound, it is important to use multiple orthogonal assays. Using just a single assay, especially a single in vitro assay, gives a very incomplete picture of an extract or compound's effect on the human body. In the case of Valeriana officinalis root extract, for example, the sleep-inducing effects of this extract are due to multiple compounds and mechanisms including interaction with GABA receptors and relaxation of smooth muscle. The mechanism of action of an isolated compound can also be misidentified if a single assay is used because some compounds interfere with assays. For example, the sulfhydryl-scavenging assay used to detect histone acetyltransferase inhibition can give a false positive result if the test compound reacts covalently with cysteines.
Biopiracy
The term biopiracy was coined by Pat Mooney, to describe a practice in which indigenous knowledge of nature, originating with indigenous peoples, is used by others for profit, without authorization or compensation to the indigenous people themselves. For example, when bioprospectors draw on indigenous knowledge of medicinal plants which is later patented by medical companies without recognizing the fact that the knowledge is not new or invented by the patenter, this deprives the indigenous community of their potential rights to the commercial product derived from the technology that they themselves had developed. Critics of this practice, such as Greenpeace, claim these practices contribute to inequality between developing countries rich in biodiversity, and developed countries hosting biotech firms.
In the 1990s many large pharmaceutical and drug discovery companies responded to charges of biopiracy by ceasing work on natural products, turning to combinatorial chemistry to develop novel compounds.
Famous cases of biopiracy
The rosy periwinkle
The rosy periwinkle case dates from the 1950s. The rosy periwinkle, while native to Madagascar, had been widely introduced into other tropical countries around the world well before the discovery of vincristine. Different countries are reported as having acquired different beliefs about the medical properties of the plant. This meant that researchers could obtain local knowledge from one country and plant samples from another. The use of the plant for diabetes was the original stimulus for research. Effectiveness in the treatment of both Hodgkin lymphoma and leukemia were discovered instead. The Hodgkin lymphoma chemotherapeutic drug vinblastine is derivable from the rosy periwinkle.
The Maya ICBG controversy
The Maya ICBG bioprospecting controversy took place in 1999–2000, when the International Cooperative Biodiversity Group led by ethnobiologist Brent Berlin was accused of being engaged in unethical forms of bioprospecting by several NGOs and indigenous organizations. The ICBG aimed to document the biodiversity of Chiapas, Mexico, and the ethnobotanical knowledge of the indigenous Maya people – in order to ascertain whether there were possibilities of developing medical products based on any of the plants used by the indigenous groups.
The Maya ICBG case was among the first to draw attention to the problems of distinguishing between benign forms of bioprospecting and unethical biopiracy, and to the difficulties of securing community participation and prior informed consent for would-be bioprospectors.
The neem tree
In 1994, the U.S. Department of Agriculture and W. R. Grace and Company received a European patent on methods of controlling fungal infections in plants using a composition that included extracts from the neem tree (Azadirachta indica), which grows throughout India and Nepal. In 2000 the patent was successfully opposed by several groups from the EU and India including the EU Green Party, Vandana Shiva, and the International Federation of Organic Agriculture Movements (IFOAM) on the basis that the fungicidal activity of neem extract had long been known in Indian traditional medicine. WR Grace appealed and lost in 2005.
Basmati rice
In 1997, the US corporation RiceTec (a subsidiary of RiceTec AG of Liechtenstein) attempted to patent certain hybrids of basmati rice and semidwarf long-grain rice. The Indian government challenged this patent and, in 2002, fifteen of the patent's twenty claims were invalidated.
The Enola bean
The Enola bean is a variety of Mexican yellow bean, so called after the wife of the man who patented it in 1999. The allegedly distinguishing feature of the variety is seeds of a specific shade of yellow. The patent-holder subsequently sued a large number of importers of Mexican yellow beans with the following result: "...export sales immediately dropped over 90% among importers that had been selling these beans for years, causing economic damage to more than 22,000 farmers in northern Mexico who depended on sales of this bean." A lawsuit was filed on behalf of the farmers and, in 2005, the US-PTO ruled in favor of the farmers. In 2008, the patent was revoked.
Hoodia gordonii
Hoodia gordonii, a succulent plant, originates from the Kalahari Desert of South Africa. For generations it has been known to the traditionally living San people as an appetite suppressant. In 1996 South Africa's Council for Scientific and Industrial Research began working with companies, including Unilever, to develop dietary supplements based on Hoodia. Originally the San people were not scheduled to receive any benefits from the commercialization of their traditional knowledge, but in 2003 the South African San Council made an agreement with CSIR in which they would receive from 6 to 8% of the revenue from the sale of Hoodia products.
In 2008 after having invested €20 million in R&D on Hoodia as a potential ingredient in dietary supplements for weight loss, Unilever terminated the project because their clinical studies did not show that Hoodia was safe and effective enough to bring to market.
Further cases
The following is a selection of further recent cases of biopiracy. Most of them do not relate to traditional medicines.
Thirty-six cases of biopiracy in Africa.
The case of the Maya people's pozol drink.
The case of the Maya and other people's use of Mimosa tenuiflora and many other cases.
The case of the Andean maca radish.
The cases of turmeric (India), karela (India), quinoa (Bolivia), oubli berries (Gabon), and others.
The case of captopril (developed from a Brazilian tribe's arrowhead poison).
Legal and political aspects
Patent law
One common misunderstanding is that pharmaceutical companies patent the plants they collect. While obtaining a patent on a naturally occurring organism as previously known or used is not possible, patents may be taken out on specific chemicals isolated or developed from plants. Often these patents are obtained with a stated and researched use of those chemicals. Generally the existence, structure and synthesis of those compounds is not a part of the indigenous medical knowledge that led researchers to analyze the plant in the first place. As a result, even if the indigenous medical knowledge is taken as prior art, that knowledge does not by itself make the active chemical compound "obvious," which is the standard applied under patent law.
In the United States, patent law can be used to protect "isolated and purified" compounds – even, in one instance, a new chemical element (see USP 3,156,523). In 1873, Louis Pasteur patented a "yeast" which was "free from disease" (patent #141072). Patents covering biological inventions have been treated similarly. In the 1980 case of Diamond v. Chakrabarty, the Supreme Court upheld a patent on a bacterium that had been genetically modified to consume petroleum, reasoning that U.S. law permits patents on "anything under the sun that is made by man." The United States Patent and Trademark Office (USPTO) has observed that "a patent on a gene covers the isolated and purified gene but does not cover the gene as it occurs in nature".
Also possible under US law is patenting a cultivar, a new variety of an existing organism. The patent on the Enola bean (now revoked) was an example of this sort of patent. The intellectual property laws of the US also recognize plant breeders' rights under the Plant Variety Protection Act, 7 U.S.C. §§ 2321–2582.
Convention on Biological Diversity
The Convention on Biological Diversity (CBD) came into force in 1993. It secured rights to control access to genetic resources for the countries in which those resources are located. One objective of the CBD is to enable lesser-developed countries to better benefit from their resources and traditional knowledge. Under the rules of the CBD, bioprospectors are required to obtain informed consent to access such resources, and must share any benefits with the biodiversity-rich country. However, some critics believe that the CBD has failed to establish appropriate regulations to prevent biopiracy. Others claim that the main problem is the failure of national governments to pass appropriate laws implementing the provisions of the CBD. The Nagoya Protocol to the CBD, which came into force in 2014, provides further regulations. The CBD has been ratified, acceded or accepted by 196 countries and jurisdictions globally, with exceptions including the Holy See and United States.
Bioprospecting contracts
The requirements for bioprospecting as set by the CBD have created a new branch of international patent and trade law, bioprospecting contracts. Bioprospecting contracts lay down the rules of benefit sharing between researchers and countries, and can bring royalties to lesser-developed countries. However, although these contracts are based on prior informed consent and compensation (unlike biopiracy), not every owner or carrier of indigenous knowledge and resources is consulted or compensated, as it would be difficult to ensure every individual is included. Because of this, some have proposed that indigenous or other communities form a type of representative micro-government that would negotiate with researchers to form contracts in such a way that the community benefits from the arrangements. Unethical bioprospecting contracts (as distinct from ethical ones) can be viewed as a new form of biopiracy.
An example of a bioprospecting contract is the agreement between Merck and INBio of Costa Rica.
Traditional knowledge database
Due to previous cases of biopiracy and to prevent further cases, the Government of India has converted traditional Indian medicinal information from ancient manuscripts and other resources into an electronic resource; this resulted in the Traditional Knowledge Digital Library in 2001. The texts are being recorded from Tamil, Sanskrit, Urdu, Persian and Arabic; made available to patent offices in English, German, French, Japanese and Spanish. The aim is to protect India's heritage from being exploited by foreign companies. Hundreds of yoga poses are also kept in the collection. The library has also signed agreements with leading international patent offices such as European Patent Office (EPO), United Kingdom Trademark & Patent Office (UKTPO) and the United States Patent and Trademark Office to protect traditional knowledge from biopiracy as it allows patent examiners at International Patent Offices to access TKDL databases for patent search and examination purposes.
See also
Intellectual capital/Intellectual property
Natural capital
Biological patent
Traditional knowledge/Indigenous knowledge
Pharmacognosy
Plant breeders' rights
Bioethics
Maya ICBG bioprospecting controversy
International Cooperative Biodiversity Group
Biological Diversity Act, 2002
Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) (1994)
International Treaty on Plant Genetic Resources for Food and Agriculture (2001)
References
Bibliography and resources
The Secretariat of the Convention on Biological Diversity (United Nations Environment Programme) maintains an information centre which as of April 2006 lists some 3000 "monographs, reports and serials".
Secretariat of the Convention on Biological Diversity (United Nations Environment Programme), Bibliography of Journal Articles on the Convention on Biological Diversity (March 2006). Contains references to almost 200 articles. Some of these are available in full text from the CBD information centre.
External links
Out of Africa: Mysteries of Access and Benefit-Sharing – a 2006 report on biopiracy in Africa by The Edmonds Institute
Cape Town Declaration – Biowatch South Africa
Genetic Resources Action International (GRAIN)
Indian scientist denies accusation of biopiracy – SciDev.Net
African 'biopiracy' debate heats up – SciDev.Net
Bioprospecting: legitimate research or 'biopiracy'? – SciDev.Net
ETC Group papers on Biopiracy : Topics include: Monsanto's species-wide patent on all genetically modified soybeans (EP0301749); Synthetic Biology Patents (artificial, unique life forms); Terminator Seed Technology; etc...
Who Owns Biodiversity, and How Should the Owners Be Compensated?, Plant Physiology, April 2004, Vol. 134, pp. 1295–1307
Bioethics
Biopiracy
Botany
Plant genetics
Plant breeding
Biodiversity
Food security
Plant conservation
Seeds
Sustainable agriculture
Commercialization of traditional medicines | Bioprospecting | [
"Chemistry",
"Technology",
"Biology"
] | 5,087 | [
"Bioethics",
"Molecular biology",
"Plants",
"Plant genetics",
"Biopiracy",
"Botany",
"Biodiversity",
"Ethics of science and technology",
"Plant breeding"
] |
82,245 | https://en.wikipedia.org/wiki/Geiger%E2%80%93M%C3%BCller%20tube | The Geiger–Müller tube or G–M tube is the sensing element of the Geiger counter instrument used for the detection of ionizing radiation. It is named after Hans Geiger, who invented the principle in 1908, and Walther Müller, who collaborated with Geiger in developing the technique further in 1928 to produce a practical tube that could detect a number of different radiation types.
It is a gaseous ionization detector and uses the Townsend avalanche phenomenon to produce an easily detectable electronic pulse from as little as a single ionizing event due to a radiation particle. It is used for the detection of gamma radiation, X-rays, and alpha and beta particles. It can also be adapted to detect neutrons. The tube operates in the "Geiger" region of ion pair generation. This is shown on the accompanying plot for gaseous detectors showing ion current against applied voltage.
While it is a robust and inexpensive detector, the G–M is unable to measure high radiation rates efficiently, has a finite life in high radiation areas and cannot measure incident radiation energy, so no spectral information can be generated and there is no discrimination between radiation types; such as between alpha and beta particles.
Principle of operation
A G-M tube consists of a chamber filled with a gas mixture at a low pressure of about 0.1 atmosphere. The chamber contains two electrodes, between which there is a potential difference of several hundred volts. The walls of the tube are either metal or have their inside surface coated with a conducting material or a spiral wire to form the cathode, while the anode is a wire mounted axially in the center of the chamber.
When ionizing radiation strikes the tube, some molecules of the fill gas are ionized directly by the incident radiation, and if the tube cathode is an electrical conductor, such as stainless steel, indirectly by means of secondary electrons produced in the walls of the tube, which migrate into the gas. This creates positively charged ions and free electrons, known as ion pairs, in the gas. The strong electric field created by the voltage across the tube's electrodes accelerates the positive ions towards the cathode and the electrons towards the anode. Close to the anode in the "avalanche region" where the electric field strength rises inversely proportional to radial distance as the anode is approached, free electrons gain sufficient energy to ionize additional gas molecules by collision and create a large number of electron avalanches. These spread along the anode and effectively throughout the avalanche region. This is the "gas multiplication" effect which gives the tube its key characteristic of being able to produce a significant output pulse from a single original ionizing event.
If there were to be only one avalanche per original ionizing event, then the number of excited molecules would be in the order of 10⁶ to 10⁸. However, the production of multiple avalanches results in an increased multiplication factor which can produce 10⁹ to 10¹⁰ ion pairs. The creation of multiple avalanches is due to the production of UV photons in the original avalanche, which are not affected by the electric field and move laterally to the axis of the anode to instigate further ionizing events by collision with gas molecules. These collisions produce further avalanches, which in turn produce more photons, and thereby more avalanches in a chain reaction which spreads laterally through the fill gas, and envelops the anode wire. The accompanying diagram shows this graphically. The speed of propagation of the avalanches is typically 2–4 cm per microsecond, so that for common sizes of tubes the complete ionization of the gas around the anode takes just a few microseconds.
This short, intense pulse of current can be measured as a count event in the form of a voltage pulse developed across an external electrical resistor. This can be in the order of volts, thus making further electronic processing simple.
The discharge is terminated by the collective effect of the positive ions created by the avalanches. These ions have lower mobility than the free electrons due to their higher mass and move slowly from the vicinity of the anode wire. This creates a "space charge" which counteracts the electric field that is necessary for continued avalanche generation. For a particular tube geometry and operating voltage this termination always occurs when a certain number of avalanches has been created, therefore the pulses from the tube are always of the same magnitude regardless of the energy of the initiating particle. Consequently, there is no radiation energy information in the pulses which means the Geiger–Müller tube cannot be used to generate spectral information about the incident radiation. In practice the termination of the avalanche is improved by the use of "quenching" techniques (see later).
Pressure of the fill gas is important in the generation of avalanches. Too low a pressure and the efficiency of interaction with incident radiation is reduced. Too high a pressure, and the “mean free path” for collisions between accelerated electrons and the fill gas is too small, and the electrons cannot gather enough energy between each collision to cause ionization of the gas. The energy gained by electrons is proportional to the ratio “e/p”, where “e” is the electric field strength at that point in the gas, and “p” is the gas pressure.
Types of tube
Broadly, there are two important types of Geiger tube construction.
End window type
For alpha particles, low energy beta particles, and low energy X-rays, the usual form is a cylindrical end-window tube. This type has a window at one end covered in a thin material through which low-penetrating radiation can easily pass. Mica is a commonly used material due to its low mass per unit area. The other end houses the electrical connection to the anode.
Pancake tube
The pancake tube is a variant of the end window tube, but which is designed for use for beta and gamma contamination monitoring. It has roughly the same sensitivity to particles as the end window type, but has a flat annular shape so the largest window area can be utilized with a minimum of gas space. Like the cylindrical end window tube, mica is a commonly used window material due to its low mass per unit area. The anode is normally multi-wired in concentric circles so it extends fully throughout the gas space.
Windowless type
This general type is distinct from the dedicated end window type, but has two main sub-types, which use different radiation interaction mechanisms to obtain a count.
Thick walled
Used for gamma radiation detection above energies of about 25 keV, this type generally has an overall wall thickness of about 1–2 mm of chrome steel. Because most high energy gamma photons will pass through the low density fill gas without interacting, the tube uses the interaction of photons on the molecules of the wall material to produce high energy secondary electrons within the wall. Some of these electrons are produced close enough to the inner wall of the tube to escape into the fill gas. As soon as this happens the electron drifts to the anode and an electron avalanche occurs as though the free electron had been created within the gas. The avalanche is a secondary effect of a process that starts within the tube wall with the production of electrons that migrate to the inner surface of the tube wall, and then enter the fill gas. This effect is considerably attenuated at low energies below about 20 keV.
Thin walled
Thin walled tubes are used for:
High energy beta detection, where the beta enters via the side of the tube and interacts directly with the gas, but the radiation has to be energetic enough to penetrate the tube wall. Low energy beta, which would penetrate an end window, would be stopped by the tube wall.
Low energy gamma and X-ray detection. The lower energy photons interact better with the fill gas so this design concentrates on increasing the volume of the fill gas by using a long thin walled tube and does not use the interaction of photons in the tube wall. The transition from thin walled to thick walled design takes place at the 300–400 keV energy levels. Above these levels thick walled designs are used, and beneath these levels the direct gas ionization effect is predominant.
Neutron detection
G–M tubes will not detect neutrons since these do not ionize the gas. However, neutron-sensitive tubes can be produced which either have the inside of the tube coated with boron, or the tube contains boron trifluoride or helium-3 as the fill gas, or the tube is wrapped in cadmium foil. The neutrons interact with the boron nuclei, producing alpha particles, or directly with the helium-3 nuclei producing hydrogen and tritium ions and electrons, or with the cadmium, producing gamma rays. These energetic particles interact and produce ions that then trigger the normal avalanche process.
Gas mixtures
The components of the gas mixture are vital to the operation and application of a G-M tube. The mixture is composed of an inert gas such as helium, argon or neon which is ionized by incident radiation, and a "quench" gas of 5–10% of an organic vapor or a halogen gas to prevent spurious pulsing by quenching the electron avalanches. This combination of gases is known as a Penning mixture and makes use of the Penning ionization effect.
The modern halogen-filled G–M tube was invented by Sidney H. Liebson in 1947 and has several advantages over the older tubes with organic mixtures. The halogen tube discharge takes advantage of a metastable state of the inert gas atom to more-readily ionize a halogen molecule than an organic vapor, enabling the tube to operate at much lower voltages, typically 400–600 volts instead of 900–1200 volts. While halogen-quenched tubes have greater plateau voltage slopes compared to organic-quenched tubes (an undesirable quality), they have a vastly longer life than tubes quenched with organic compounds. This is because an organic vapor is gradually destroyed by the discharge process, giving organic-quenched tubes a useful life of around 10⁹ events. However, halogen ions can recombine over time, giving halogen-quenched tubes an effectively unlimited lifetime for most uses, although they will still eventually fail at some point due to other ionization-initiated processes that limit the lifetime of all Geiger tubes. For these reasons, the halogen-quenched tube is now the most common.
Neon is the most common filler gas. Chlorine is the most common quencher, though bromine is occasionally used as well. Halogens are most commonly used with neon, argon or krypton, organic quenchers with helium.
An example of a gas mixture, used primarily in proportional detectors, is P10 (90% argon, 10% methane).
Another is used in bromine-quenched tubes, typically 0.1% argon, 1-2% bromine, and the balance of neon.
Halogen quenchers are highly chemically reactive and attack the materials of the electrodes, especially at elevated temperatures, leading to tube performance degradation over time. The cathode materials can be chosen from e.g. chromium, platinum, or nickel-copper alloy, or coated with colloidal graphite, and suitably passivated. Oxygen plasma treatment can provide a passivation layer on stainless steel. Dense non-porous coating with platinum or a tungsten layer or a tungsten foil liner can provide protection here.
Pure noble gases exhibit threshold voltages increasing with increasing atomic weight. Addition of polyatomic organic quenchers increases threshold voltage, due to dissipation of large percentage of collisions energy in molecular vibrations. Argon with alcohol vapors was one of the most common fills of early tubes. As little as 1 ppm of impurities (argon, mercury, and krypton in neon) can significantly lower the threshold voltage. Admixture of chlorine or bromine provides quenching and stability to low-voltage neon-argon mixtures, with wide temperature range. Lower operating voltages lead to longer rise times of pulses, without appreciably changing the dead times.
Spurious pulses are caused mostly by secondary electrons emitted by the cathode due to positive ion bombardment. The resulting spurious pulses have the nature of a relaxation oscillator and show uniform spacing, dependent on the tube fill gas and overvoltage. At high enough overvoltages, but still below the onset of continuous corona discharges, sequences of thousands of pulses can be produced. Such spurious counts can be suppressed by coating the cathode with higher work function materials, chemical passivation, lacquer coating, etc.
The organic quenchers can decompose to smaller molecules (ethyl alcohol and ethyl acetate) or polymerize into solid deposits (typical for methane). Degradation products of organic molecules may or may not have quenching properties. Larger molecules degrade to more quenching products than small ones; tubes quenched with amyl acetate tend to have ten times higher lifetime than ethanol ones. Tubes quenched with hydrocarbons often fail due to coating of the electrodes with polymerization products, before the gas itself can be depleted; simple gas refill won't help, washing the electrodes to remove the deposits is necessary. Low ionization efficiency is sometimes deliberately sought; mixtures of low pressure hydrogen or helium with organic quenchers are used in some cosmic rays experiments, to detect heavily ionizing muons and electrons.
Argon, krypton and xenon are used to detect soft x-rays, with increasing absorption of low energy photons with decreasing atomic mass, due to direct ionization by photoelectric effect. Above 60-70 keV the direct ionization of the filler gas becomes insignificant, and secondary photoelectrons, Compton electrons or electron-positron pair production by interaction of the gamma photons with the cathode material become the dominant ionization initiation mechanisms. Tube windows can be eliminated by putting the samples directly inside the tube, or, if gaseous, mixing them with the filler gas. Vacuum-tightness requirement can be eliminated by using continuous flow of gas at atmospheric pressure.
Geiger plateau
The Geiger plateau is the voltage range in which the G-M tube operates in its correct mode, where ionization occurs along the length of the anode. If a G–M tube is exposed to a steady radiation source and the applied voltage is increased from zero, it follows the plot of current shown in the "Geiger region" where the gradient flattens; this is the Geiger plateau.
This is shown in more detail in the accompanying Geiger Plateau Curve diagram. If the tube voltage is progressively increased from zero the efficiency of detection will rise until the most energetic radiation starts to produce pulses which can be detected by the electronics. This is the "starting voltage". Increasing the voltage still further results in rapidly rising counts until the "knee" or threshold of the plateau is reached, where the rate of increase of counts falls off. This is where the tube voltage is sufficient to allow a complete discharge along the anode for each detected radiation count, and the effect of different radiation energies are equal. However, the plateau has a slight slope mainly due to the lower electric fields at the ends of the anode because of tube geometry. As the tube voltage is increased, these fields strengthen to produce avalanches. At the end of the plateau the count rate begins to increase rapidly again, until the onset of continuous discharge where the tube cannot detect radiation, and may be damaged.
Depending on the characteristics of the specific tube (manufacturer, size, gas type, etc.) the voltage range of the plateau will vary. The slope is usually expressed as percentage change of counts per 100 V. To prevent overall efficiency changes due to variation of tube voltage, a regulated voltage supply is used, and it is normal practice to operate in the middle of the plateau to reduce the effect of any voltage variations.
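As a rough sketch of the slope figure mentioned above (percentage change of counts per 100 V), one way to compute it from two calibration points on the plateau is shown below; the function name, the exact definition (here referenced to the lower-voltage point rather than the midpoint) and the sample numbers are illustrative assumptions, not taken from the article.

    def plateau_slope_per_100v(v1, counts1, v2, counts2):
        # Percentage change in count rate, relative to the lower-voltage point,
        # normalised to a 100 V interval.
        return 100.0 * (counts2 - counts1) / counts1 * (100.0 / (v2 - v1))

    # Example: 2000 cps at 450 V and 2060 cps at 550 V gives a 3 %/100 V slope.
    print(plateau_slope_per_100v(450, 2000, 550, 2060))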
Quenching and dead time
The ideal G–M tube should produce a single pulse for every single ionizing event due to radiation. It should not give spurious pulses, and should recover quickly to the passive state, ready for the next radiation event. However, when positive argon ions reach the cathode and become neutral atoms by gaining electrons, the atoms can be elevated to enhanced energy levels. These atoms then return to their ground state by emitting photons which in turn produce further ionization and thereby spurious secondary discharges. If nothing were done to counteract this, ionization would be prolonged and could even escalate. The prolonged avalanche would increase the "dead time" when new events cannot be detected, and could become continuous and damage the tube. Some form of quenching of the ionization is therefore essential to reduce the dead time and protect the tube, and a number of quenching techniques are used.
Gas quenching
Self-quenching or internal-quenching tubes stop the discharge without external assistance, originally by means of the addition of a small amount of a polyatomic organic vapor originally such as butane or ethanol, but for modern tubes is a halogen such as bromine or chlorine.
If a poor gas quencher is introduced to the tube, the positive argon ions, during their motion toward the cathode, would have multiple collisions with the quencher gas molecules and transfer their charge and some energy to them. Thus, neutral argon atoms would be produced and the quencher gas ions in their turn would reach the cathode, gain electrons therefrom, and move into excited states which would decay by photon emission, producing tube discharge. However, effective quencher molecules, when excited, lose their energy not by photon emission, but by dissociation into neutral quencher molecules. No spurious pulses are thus produced.
Even with chemical quenching, for a short time after a discharge pulse there is a period during which the tube is rendered insensitive and is thus temporarily unable to detect the arrival of any new ionizing particle (the so-called dead time; typically 50–100 microseconds). This causes a loss of counts at sufficiently high count rates and limits the G–M tube to an effective (accurate) count rate of approximately 10³ counts per second even with external quenching. While a G-M tube is technically capable of reading higher count rates before it truly saturates, the level of uncertainty involved and the risk of saturation make it extremely dangerous to rely upon higher count rate readings when attempting to calculate an equivalent radiation dose rate from the count rate; however, a modern external quenching technique can extend this upper limit considerably. A consequence of this is that ion chamber instruments are usually preferred for higher count rates.
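The count-rate loss described above is often corrected with the standard non-paralyzable dead-time model; this model is not given in the article itself, so the sketch below is only an illustration of the general idea, with illustrative numbers.

    def true_count_rate(measured_cps, dead_time_s):
        # Non-paralyzable model: n = m / (1 - m * tau), valid while m * tau is well below 1.
        return measured_cps / (1.0 - measured_cps * dead_time_s)

    # Example: 900 cps measured with a 100 microsecond dead time.
    print(round(true_count_rate(900.0, 100e-6)))   # roughly 989 events per second actually arriving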
External quenching
External quenching, sometimes called "active quenching" or "electronic quenching", uses simplistic high speed control electronics to rapidly remove and re-apply the high voltage between the electrodes for a fixed time after each discharge peak in order to increase the maximum count rate and lifetime of the tube. Although this can be used instead of a quench gas, it is much more commonly used in conjunction with a quench gas.
The "time-to-first-count method" is a sophisticated modern implementation of external quenching that allows for dramatically increased maximum count rates via the use of statistical signal processing techniques and much more complex control electronics. Due to uncertainty in the count rate introduced by the simplistic implementation of external quenching, the count rate of a Geiger tube becomes extremely unreliable above approximately 103 counts per second. With the time-to-first-count method, effective count rates of 105 counts per second are achievable, two orders of magnitude larger than the normal effective limit. The time-to-first-count method is significantly more complicated to implement than traditional external quenching methods, and as a result of this it has not seen widespread use.
Fold-back effect
One consequence of the dead time effect is the possibility of a high count rate continually triggering the tube before the recovery time has elapsed. This may produce pulses too small for the counting electronics to detect and lead to the very undesirable situation whereby a G–M counter in a very high radiation field is falsely indicating a low level. This phenomenon is known as "fold-back". An industry rule of thumb is that the discriminator circuit receiving the output from the tube should detect down to 1/10 of the magnitude of a normal pulse to guard against this. Additionally the circuit should detect when "pulse pile-up " has occurred, where the apparent anode voltage has moved to a new DC level through the combination of high pulse count and noise. The electronic design of Geiger–Müller counters must be able to detect this situation and give an alarm; it is normally done by setting a threshold for excessive tube current.
Detection efficiency
The efficiency of detection of a G–M tube varies with the type of incident radiation. Tubes with thin end windows have very high efficiencies (can be nearly 100%) for high energy beta, though this drops off as the beta energy decreases due to attenuation by the window material. Alpha particles are also attenuated by the window. As alpha particles have a maximum range of less than 50 mm in air, the detection window should be as close as possible to the source of radiation. The attenuation of the window adds to the attenuation of air, so the window should have an areal density as low as 1.5 to 2.0 mg/cm² to give an acceptable level of detection efficiency. The article on stopping power explains in more detail the ranges for particle types of various energies.
The counting efficiency of photon radiation (gamma and X-rays above 25 keV) depends on the efficiency of radiation interaction in the tube wall, which increases with the atomic number of the wall material. Chromium iron is a commonly used material, which gives an efficiency of about 1% over a wide range of energies.
Photon energy compensation
If a G–M tube is to be used for gamma or X-ray dosimetry measurements, the energy of incident radiation, which affects the ionizing effect, must be taken into account. However pulses from a G–M tube do not carry any energy information, and attribute equal dose to each count event. Consequently, the count rate response of a "bare" G–M tube to photons at different energy levels is non-linear with the effect of over-reading at low energies. The variation in dose response can be a factor between 5 and 15, according to individual tube construction; the very small tubes having the highest values.
To correct this a technique known as "energy compensation" is applied, which consists of adding a shield of absorbing material around the tube. This filter preferentially absorbs the low energy photons and the dose response is "flattened". The aim is that sensitivity/energy characteristic of the tube should be matched by the absorption/energy characteristic of the filter. This cannot be exactly achieved, but the result is a more uniform response over the stated range of detection energies for the tube.
Lead and tin are commonly used materials, and a simple filter effective above a certain photon energy can be made using a continuous collar along the length of the tube. However, at lower energy levels this attenuation can become too great, so air gaps are left in the collar to allow low energy radiation to have a greater effect. In practice, compensation filter design is an empirical compromise to produce an acceptably uniform response, and a number of different materials and geometries are used to obtain the required correction.
See also
Dosimeter
Geiger counter
Gaseous ionization detectors
Ionization chamber
Stopping power of radiation particles
References
External links
Patents
, H. J. Spanner, "Gas Filled Tube"
, G. J. Weissenberg, "Electron Discharge Tube"
, J. A. Victoreen, "Geiger tube"
, J. A. Victoreen, "Geiger tube"
Other
Geiger counter history
IAEA Practical Radiation Technical Manual
Electrical breakdown
Particle detectors
Measuring instruments
Ionising radiation detectors | Geiger–Müller tube | [
"Physics",
"Technology",
"Engineering"
] | 4,956 | [
"Physical phenomena",
"Radioactive contamination",
"Particle detectors",
"Measuring instruments",
"Ionising radiation detectors",
"Electrical phenomena",
"Electrical breakdown"
] |
82,269 | https://en.wikipedia.org/wiki/Focal%20length | The focal length of an optical system is a measure of how strongly the system converges or diverges light; it is the inverse of the system's optical power. A positive focal length indicates that a system converges light, while a negative focal length indicates that the system diverges light. A system with a shorter focal length bends the rays more sharply, bringing them to a focus in a shorter distance or diverging them more quickly. For the special case of a thin lens in air, a positive focal length is the distance over which initially collimated (parallel) rays are brought to a focus, or alternatively a negative focal length indicates how far in front of the lens a point source must be located to form a collimated beam. For more general optical systems, the focal length has no intuitive meaning; it is simply the inverse of the system's optical power.
In most photography and all telescopy, where the subject is essentially infinitely far away, longer focal length (lower optical power) leads to higher magnification and a narrower angle of view; conversely, shorter focal length or higher optical power is associated with lower magnification and a wider angle of view. On the other hand, in applications such as microscopy in which magnification is achieved by bringing the object close to the lens, a shorter focal length (higher optical power) leads to higher magnification because the subject can be brought closer to the center of projection.
Thin lens approximation
For a thin lens in air, the focal length is the distance from the center of the lens to the principal foci (or focal points) of the lens. For a converging lens (for example a convex lens), the focal length is positive and is the distance at which a beam of collimated light will be focused to a single spot. For a diverging lens (for example a concave lens), the focal length is negative and is the distance to the point from which a collimated beam appears to be diverging after passing through the lens.
When a lens is used to form an image of some object, the distance from the object to the lens u, the distance from the lens to the image v, and the focal length f are related by 1/u + 1/v = 1/f.
The focal length of a thin convex lens can be easily measured by using it to form an image of a distant light source on a screen. The lens is moved until a sharp image is formed on the screen. In this case 1/u is negligible, and the focal length is then given by f ≈ v.
Determining the focal length of a concave lens is somewhat more difficult. The focal length of such a lens is defined as the point at which the spreading beams of light meet when they are extended backwards. No image is formed during such a test, and the focal length must be determined by passing light (for example, the light of a laser beam) through the lens, examining how much that light becomes dispersed/bent, and following the beam of light backwards to the lens's focal point.
General optical systems
For a thick lens (one which has a non-negligible thickness), or an imaging system consisting of several lenses or mirrors (e.g. a photographic lens or a telescope), there are several related concepts that are referred to as focal lengths:
Effective focal length (EFL) The effective focal length is the inverse of the optical power of an optical system, and is the value used to calculate the magnification of the system. The imaging properties of the optical system can be modeled by replacing the system with an ideal thin lens with the same EFL. The EFL also provides a simple method for finding the nodal points without tracing any rays. It was previously called equivalent focal length (not to be confused with 35 mm-equivalent focal length).
Front focal length (FFL) The front focal length is the distance from the front focal point to the front principal plane.
Rear focal length (RFL) The rear focal length is the distance from the rear principal plane to the rear focal point.
Front focal distance (FFD) The front focal distance (FFD) is the distance from the front focal point of the system to the vertex of the first optical surface. Some authors refer to this as "front focal length".
Back focal distance (BFD) Back focal distance (BFD) is the distance from the vertex of the last optical surface of the system to the rear focal point. Some authors refer to this as "back focal length".
For an optical system in air the effective focal length, front focal length, and rear focal length are all the same and may be called simply "focal length".
For an optical system in a medium other than air or vacuum, the front and rear focal lengths are equal to the EFL times the refractive index of the medium in front of or behind the lens. The term "focal length" by itself is ambiguous in this case. The historical usage was to define the "focal length" as the EFL times the index of refraction of the medium. For a system with different media on both sides, such as the human eye, the front and rear focal lengths are not equal to one another, and convention may dictate which one is called "the focal length" of the system. Some modern authors avoid this ambiguity by instead defining "focal length" to be a synonym for EFL.
The distinction between front/rear focal length and EFL is important for studying the human eye. The eye can be represented by an equivalent thin lens at an air/fluid boundary with front and rear focal lengths equal to those of the eye, or it can be represented by an equivalent thin lens that is totally in air, with focal length equal to the eye's EFL.
For the case of a lens of thickness d in air, with surfaces having radii of curvature R1 and R2, the effective focal length f is given by the Lensmaker's equation: 1/f = (n − 1) [1/R1 − 1/R2 + (n − 1)d/(n R1 R2)],
where n is the refractive index of the lens medium. The quantity 1/f is also known as the optical power of the lens.
The corresponding front focal distance is FFD = f (1 + (n − 1)d/(n R2)),
and the back focal distance is BFD = f (1 − (n − 1)d/(n R1)).
In the sign convention used here, the value of R1 will be positive if the first lens surface is convex, and negative if it is concave. The value of R2 is negative if the second surface is convex, and positive if concave. Sign conventions vary between different authors, which results in different forms of these equations depending on the convention used.
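A small numerical sketch of the thick-lens relation above, using the stated sign convention (R1 positive for a convex front surface, R2 negative for a convex back surface); the lens parameters are illustrative.

    def effective_focal_length(n, r1, r2, d):
        # 1/f = (n - 1) * (1/R1 - 1/R2 + (n - 1) * d / (n * R1 * R2)); all lengths in mm.
        power = (n - 1.0) * (1.0 / r1 - 1.0 / r2 + (n - 1.0) * d / (n * r1 * r2))
        return 1.0 / power

    # Symmetric biconvex lens: n = 1.5, R1 = +100 mm, R2 = -100 mm, 5 mm thick.
    print(round(effective_focal_length(1.5, 100.0, -100.0, 5.0), 1))   # about 100.8 mm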
For a spherically-curved mirror in air, the magnitude of the focal length is equal to the radius of curvature of the mirror divided by two. The focal length is positive for a concave mirror, and negative for a convex mirror. In the sign convention used in optical design, a concave mirror has negative radius of curvature, so f = −R/2,
where R is the radius of curvature of the mirror's surface.
See Radius of curvature (optics) for more information on the sign convention for radius of curvature used here.
In photography
Camera lens focal lengths are usually specified in millimetres (mm), but some older lenses are marked in centimetres (cm) or inches.
Focal length and field of view (FOV) of a lens are inversely related. For a standard rectilinear lens focused at infinity, FOV = 2 arctan(x/(2f)), where x is the width of the film or imaging sensor and f is the focal length.
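Evaluating this relation for a 50 mm lens on a full-frame sensor gives the familiar "normal lens" figures; the 36 mm × 24 mm sensor dimensions (43.3 mm diagonal) used below are an assumption of convenience.

    import math

    def angle_of_view_deg(focal_mm, sensor_dimension_mm):
        # FOV = 2 * arctan(d / (2 f)) for a rectilinear lens focused at infinity.
        return math.degrees(2.0 * math.atan(sensor_dimension_mm / (2.0 * focal_mm)))

    for label, dim in (("horizontal", 36.0), ("vertical", 24.0), ("diagonal", 43.3)):
        print(label, round(angle_of_view_deg(50.0, dim), 1))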
When a photographic lens is set to "infinity", its rear principal plane is separated from the sensor or film, which is then situated at the focal plane, by the lens's focal length. Objects far away from the camera then produce sharp images on the sensor or film, which is also at the image plane.
To render closer objects in sharp focus, the lens must be adjusted to increase the distance between the rear principal plane and the film, to put the film at the image plane. The focal length , the distance from the front principal plane to the object to photograph , and the distance from the rear principal plane to the image plane are then related by:
As is decreased, must be increased. For example, consider a normal lens for a 35 mm camera with a focal length of 50 mm. To focus a distant object (), the rear principal plane of the lens must be located a distance 50 mm from the film plane, so that it is at the location of the image plane. To focus an object 1 m away ( 1,000 mm), the lens must be moved 2.6 mm farther away from the film plane, to 52.6 mm.
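The following MATLAB/Octave sketch reproduces this example using the Gaussian form of the relation, 1/f = 1/s + 1/v, where s is the object distance and v the image distance (variable names are illustrative):

% Thin-lens check of the 50 mm example above.
f = 50;                    % focal length, mm
s = 1000;                  % object distance, mm
v = 1 / (1/f - 1/s);       % distance from rear principal plane to image plane, mm
fprintf('image plane at %.1f mm, i.e. %.1f mm beyond the infinity position\n', v, v - f);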
The focal length of a lens determines the magnification at which it images distant objects. It is equal to the distance between the image plane and a pinhole that images distant objects the same size as the lens in question. For rectilinear lenses (that is, with no image distortion), the imaging of distant objects is well modelled as a pinhole camera model.
This model leads to the simple geometric model that photographers use for computing the angle of view of a camera; in this case, the angle of view depends only on the ratio of focal length to film size. In general, the angle of view depends also on the distortion.
A lens with a focal length about equal to the diagonal size of the film or sensor format is known as a normal lens; its angle of view is similar to the angle subtended by a large-enough print viewed at a typical viewing distance of the print diagonal, which therefore yields a normal perspective when viewing the print;
this angle of view is about 53 degrees diagonally. For full-frame 35 mm-format cameras, the diagonal is 43 mm and a typical "normal" lens has a 50 mm focal length. A lens with a focal length shorter than normal is often referred to as a wide-angle lens (typically 35 mm and less, for 35 mm-format cameras), while a lens significantly longer than normal may be referred to as a telephoto lens (typically 85 mm and more, for 35 mm-format cameras). Technically, long focal length lenses are only "telephoto" if the focal length is longer than the physical length of the lens, but the term is often used to describe any long focal length lens.
Due to the popularity of the 35 mm standard, camera–lens combinations are often described in terms of their 35 mm-equivalent focal length, that is, the focal length of a lens that would have the same angle of view, or field of view, if used on a full-frame 35 mm camera. Use of a 35 mm-equivalent focal length is particularly common with digital cameras, which often use sensors smaller than 35 mm film, and so require correspondingly shorter focal lengths to achieve a given angle of view, by a factor known as the crop factor.
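A short MATLAB/Octave sketch of this conversion; the APS-C sensor dimensions used to form the crop factor are illustrative assumptions.

% 35 mm-equivalent focal length = actual focal length x crop factor,
% with the crop factor taken as the ratio of sensor diagonals.
diag_ff   = hypot(36, 24);           % full-frame diagonal, about 43.3 mm
diag_apsc = hypot(23.5, 15.6);       % a typical APS-C diagonal, mm (illustrative)
crop      = diag_ff / diag_apsc;     % crop factor, about 1.5
f_actual  = 35;                      % marked focal length of the lens, mm
fprintf('crop factor %.2f -> 35 mm-equivalent of a %d mm lens: %.0f mm\n', ...
        crop, f_actual, f_actual * crop);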
Optical power
The optical power of a lens or curved mirror is a physical quantity equal to the reciprocal of the focal length, expressed in metres. A dioptre is its unit of measurement with dimension of reciprocal length, equivalent to one reciprocal metre, 1 dioptre = 1 m⁻¹. For example, a 2-dioptre lens brings parallel rays of light to focus at 0.5 metre. A flat window has an optical power of zero dioptres, as it does not cause light to converge or diverge.
The main benefit of using optical power rather than focal length is that the thin lens formula has the object distance, image distance, and focal length all as reciprocals. Additionally, when relatively thin lenses are placed close together their powers approximately add. Thus, a thin 2.0-dioptre lens placed close to a thin 0.5-dioptre lens yields almost the same focal length as a single 2.5-dioptre lens.
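A minimal MATLAB/Octave sketch of both points, the reciprocal relation and the approximate addition of powers for thin lenses in contact:

% Optical power (dioptres) is the reciprocal of focal length (metres);
% powers of thin lenses placed close together approximately add.
P1 = 2.0;  P2 = 0.5;                 % powers of the two thin lenses, dioptres
P  = P1 + P2;                        % combined power, thin-lens approximation
fprintf('f1 = %.2f m, f2 = %.2f m, combined f = %.2f m (%.1f dioptres)\n', ...
        1/P1, 1/P2, 1/P, P);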
See also
Depth of field
Dioptre
f-number or focal ratio
References
Geometrical optics
Length
Science of photography
Optical quantities | Focal length | [
"Physics",
"Mathematics"
] | 2,417 | [
"Scalar physical quantities",
"Physical quantities",
"Distance",
"Quantity",
"Size",
"Optical quantities",
"Length",
"Wikipedia categories named after physical quantities"
] |
82,330 | https://en.wikipedia.org/wiki/Electric%20generator | In electricity generation, a generator is a device that converts motion-based power (potential and kinetic energy) or fuel-based power (chemical energy) into electric power for use in an external circuit. Sources of mechanical energy include steam turbines, gas turbines, water turbines, internal combustion engines, wind turbines and even hand cranks. The first electromagnetic generator, the Faraday disk, was invented in 1831 by British scientist Michael Faraday. Generators provide nearly all the power for electrical grids.
In addition to electricity- and motion-based designs, photovoltaic and fuel cell powered generators use solar power and hydrogen-based fuels, respectively, to generate electrical output.
The reverse conversion of electrical energy into mechanical energy is done by an electric motor, and motors and generators are very similar. Many motors can generate electricity from mechanical energy.
Terminology
Electromagnetic generators fall into one of two broad categories, dynamos and alternators.
Dynamos generate pulsing direct current through the use of a commutator.
Alternators generate alternating current.
Mechanically, a generator consists of a rotating part and a stationary part which together form a magnetic circuit:
Rotor: The rotating part of an electrical machine.
Stator: The stationary part of an electrical machine, which surrounds the rotor.
One of these parts generates a magnetic field, the other has a wire winding in which the changing field induces an electric current:
Field winding or field (permanent) magnets: The magnetic field-producing component of an electrical machine. The magnetic field of the dynamo or alternator can be provided by either wire windings called field coils or permanent magnets. Electrically-excited generators include an excitation system to produce the field flux. A generator using permanent magnets (PMs) is sometimes called a magneto, or a permanent magnet synchronous generator (PMSG).
Armature: The power-producing component of an electrical machine. In a generator, alternator, or dynamo, the armature windings generate the electric current, which provides power to an external circuit.
The armature can be on either the rotor or the stator, depending on the design, with the field coil or magnet on the other part.
History
Before the connection between magnetism and electricity was discovered, electrostatic generators were invented. They operated on electrostatic principles, by using moving electrically charged belts, plates and disks that carried charge to a high potential electrode. The charge was generated using either of two mechanisms: electrostatic induction or the triboelectric effect. Such generators generated very high voltage and low current. Because of their inefficiency and the difficulty of insulating machines that produced very high voltages, electrostatic generators had low power ratings, and were never used for generation of commercially significant quantities of electric power. Their only practical applications were to power early X-ray tubes, and later in some atomic particle accelerators.
Faraday disk generator
The operating principle of electromagnetic generators was discovered in the years of 1831–1832 by Michael Faraday. The principle, later called Faraday's law, is that an electromotive force is generated in an electrical conductor which encircles a varying magnetic flux.
Faraday also built the first electromagnetic generator, called the Faraday disk; a type of homopolar generator, using a copper disc rotating between the poles of a horseshoe magnet. It produced a small DC voltage.
This design was inefficient, due to self-cancelling counterflows of current in regions of the disk that were not under the influence of the magnetic field. While current was induced directly underneath the magnet, the current would circulate backwards in regions that were outside the influence of the magnetic field. This counterflow limited the power output to the pickup wires and induced waste heating of the copper disc. Later homopolar generators would solve this problem by using an array of magnets arranged around the disc perimeter to maintain a steady field effect in one current-flow direction.
Another disadvantage was that the output voltage was very low, due to the single current path through the magnetic flux. Experimenters found that using multiple turns of wire in a coil could produce higher, more useful voltages. Since the output voltage is proportional to the number of turns, generators could be easily designed to produce any desired voltage by varying the number of turns. Wire windings became a basic feature of all subsequent generator designs.
Jedlik and the self-excitation phenomenon
Independently of Faraday, Ányos Jedlik started experimenting in 1827 with the electromagnetic rotating devices which he called electromagnetic self-rotors. In the prototype of the single-pole electric starter (finished between 1852 and 1854) both the stationary and the revolving parts were electromagnetic. It was also the discovery of the principle of dynamo self-excitation, which replaced permanent magnet designs. He also may have formulated the concept of the dynamo in 1861 (before Siemens and Wheatstone) but did not patent it as he thought he was not the first to realize this.
Direct current generators
A coil of wire rotating in a magnetic field produces a current which changes direction with each 180° rotation, an alternating current (AC). However many early uses of electricity required direct current (DC). In the first practical electric generators, called dynamos, the AC was converted into DC with a commutator, a set of rotating switch contacts on the armature shaft. The commutator reversed the connection of the armature winding to the circuit every 180° rotation of the shaft, creating a pulsing DC current. One of the first dynamos was built by Hippolyte Pixii in 1832.
The dynamo was the first electrical generator capable of delivering power for industry.
The Woolrich Electrical Generator of 1844, now in Thinktank, Birmingham Science Museum, is the earliest electrical generator used in an industrial process. It was used by the firm of Elkingtons for commercial electroplating.
The modern dynamo, fit for use in industrial applications, was invented independently by Sir Charles Wheatstone, Werner von Siemens and Samuel Alfred Varley. Varley took out a patent on 24 December 1866, while Siemens and Wheatstone both announced their discoveries on 17 January 1867 by delivering papers at the Royal Society.
The "dynamo-electric machine" employed self-powering electromagnetic field coils rather than permanent magnets to create the stator field. Wheatstone's design was similar to Siemens', with the difference that in the Siemens design the stator electromagnets were in series with the rotor, but in Wheatstone's design they were in parallel. The use of electromagnets rather than permanent magnets greatly increased the power output of a dynamo and enabled high power generation for the first time. This invention led directly to the first major industrial uses of electricity. For example, in the 1870s Siemens used electromagnetic dynamos to power electric arc furnaces for the production of metals and other materials.
The dynamo machine that was developed consisted of a stationary structure, which provides the magnetic field, and a set of rotating windings which turn within that field. On larger machines the constant magnetic field is provided by one or more electromagnets, which are usually called field coils.
Large power generation dynamos are now rarely seen due to the now nearly universal use of alternating current for power distribution. Before the adoption of AC, very large direct-current dynamos were the only means of power generation and distribution. AC has come to dominate due to the ability of AC to be easily transformed to and from very high voltages to permit low losses over large distances.
Synchronous generators (alternating current generators)
Through a series of discoveries, the dynamo was succeeded by many later inventions, especially the AC alternator, which was capable of generating alternating current. Such machines are commonly known as synchronous generators (SGs). Synchronous machines are directly connected to the grid and need to be properly synchronized during startup. Moreover, their excitation is specially controlled to enhance the stability of the power system.
Alternating current generating systems were known in simple forms from Michael Faraday's original discovery of the magnetic induction of electric current. Faraday himself built an early alternator. His machine was a "rotating rectangle", whose operation was heteropolar: each active conductor passed successively through regions where the magnetic field was in opposite directions.
Large two-phase alternating current generators were built by a British electrician, J. E. H. Gordon, in 1882. The first public demonstration of an "alternator system" was given by William Stanley Jr., an employee of Westinghouse Electric in 1886.
Sebastian Ziani de Ferranti established Ferranti, Thompson and Ince in 1882, to market his Ferranti-Thompson Alternator, invented with the help of renowned physicist Lord Kelvin. His early alternators produced frequencies between 100 and 300 Hz. Ferranti went on to design the Deptford Power Station for the London Electric Supply Corporation in 1887 using an alternating current system. On its completion in 1891, it was the first truly modern power station, supplying high-voltage AC power that was then "stepped down" for consumer use on each street. This basic system remains in use today around the world.
After 1891, polyphase alternators were introduced to supply currents of multiple differing phases. Later alternators were designed for varying alternating-current frequencies between sixteen and about one hundred hertz, for use with arc lighting, incandescent lighting and electric motors.
Self-excitation
As the requirements for larger scale power generation increased, a new limitation rose: the magnetic fields available from permanent magnets. Diverting a small amount of the power generated by the generator to an electromagnetic field coil allowed the generator to produce substantially more power. This concept was dubbed self-excitation.
The field coils are connected in series or parallel with the armature winding. When the generator first starts to turn, the small amount of remanent magnetism present in the iron core provides a magnetic field to get it started, generating a small current in the armature. This flows through the field coils, creating a larger magnetic field which generates a larger armature current. This "bootstrap" process continues until the magnetic field in the core levels off due to saturation and the generator reaches a steady state power output.
Very large power station generators often utilize a separate smaller generator to excite the field coils of the larger. In the event of a severe widespread power outage where islanding of power stations has occurred, the stations may need to perform a black start to excite the fields of their largest generators, in order to restore customer power service.
Specialised types of generator
Direct current (DC)
A dynamo uses commutators to produce direct current. It is self-excited, i.e. its field electromagnets are powered by the machine's own output. Other types of DC generators use a separate source of direct current to energise their field magnets.
Homopolar generator
A homopolar generator is a DC electrical generator comprising an electrically conductive disc or cylinder rotating in a plane perpendicular to a uniform static magnetic field. A potential difference is created between the center of the disc and the rim (or ends of the cylinder), the electrical polarity depending on the direction of rotation and the orientation of the field.
It is also known as a unipolar generator, acyclic generator, disk dynamo, or Faraday disc. The voltage is typically low, on the order of a few volts in the case of small demonstration models, but large research generators can produce hundreds of volts, and some systems have multiple generators in series to produce an even larger voltage. They are unusual in that they can produce tremendous electric current, some more than a million amperes, because the homopolar generator can be made to have very low internal resistance.
Magnetohydrodynamic (MHD) generator
A magnetohydrodynamic generator directly extracts electric power from moving hot gases through a magnetic field, without the use of rotating electromagnetic machinery. MHD generators were originally developed because the output of a plasma MHD generator is a flame, well able to heat the boilers of a steam power plant. The first practical design was the AVCO Mk. 25, developed in 1965. The U.S. government funded substantial development, culminating in a 25 MW demonstration plant in 1987. In the Soviet Union from 1972 until the late 1980s, the MHD plant U 25 was in regular utility operation on the Moscow power system with a rating of 25 MW, the largest MHD plant rating in the world at that time. MHD generators operated as a topping cycle are currently (2007) less efficient than combined cycle gas turbines.
Alternating current (AC)
Induction generator
Induction AC motors may be used as generators, turning mechanical energy into electric current. Induction generators operate by mechanically turning their rotor faster than the synchronous speed, giving negative slip. A regular AC asynchronous (induction) motor usually can be used as a generator, without any internal modifications. Induction generators are useful in applications such as mini hydro power plants, wind turbines, or reducing high-pressure gas streams to lower pressure, because they can recover energy with relatively simple controls. They do not require a separate excitation circuit because the rotating magnetic field is provided by induction from the system they are connected to. They also do not require speed governor equipment as they inherently operate at the connected grid frequency.
An induction generator must be powered with a leading voltage; this is usually done by connection to an electrical grid, or by powering themselves with phase correcting capacitors.
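A minimal MATLAB/Octave sketch of the slip calculation mentioned above; the grid frequency, pole count and rotor speed are illustrative values.

% Slip of an induction machine: s = (n_sync - n_rotor) / n_sync.
% Negative slip (rotor driven above synchronous speed) corresponds to generating.
f_grid  = 50;                        % grid frequency, Hz
poles   = 4;                         % number of poles
n_sync  = 120 * f_grid / poles;      % synchronous speed, rpm
n_rotor = 1530;                      % rotor driven slightly above synchronous speed
s = (n_sync - n_rotor) / n_sync;
fprintf('synchronous speed %d rpm, slip = %.3f (negative -> generating)\n', n_sync, s);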
Linear electric generator
In the simplest form of linear electric generator, a sliding magnet moves back and forth through a solenoid, a copper wire or a coil. An alternating current is induced in the wire, or loops of wire, by Faraday's law of induction each time the magnet slides through. This type of generator is used in the Faraday flashlight. Larger linear electricity generators are used in wave power schemes.
Variable-speed constant-frequency generators
Grid-connected generators deliver power at a constant frequency. For generators of the synchronous or induction type, the prime mover speed turning the generator shaft must be held at a particular speed (or within a narrow range of speeds) to deliver power at the required utility frequency. Mechanical speed-regulating devices may waste a significant fraction of the input energy to maintain the required fixed frequency.
Where it is impractical or undesired to tightly regulate the speed of the prime mover, doubly fed electric machines may be used as generators. With the assistance of power electronic devices, these can regulate the output frequency to a desired value over a wider range of generator shaft speeds. Alternatively, a standard generator can be used with no attempt to regulate frequency, and the resulting power converted to the desired output frequency with a rectifier and converter combination. Allowing a wider range of prime mover speeds can improve the overall energy production of an installation, at the cost of more complex generators and controls. For example, where a wind turbine operating at fixed frequency might be required to spill energy at high wind speeds, a variable speed system can allow recovery of energy contained during periods of high wind speed.
Common use cases
Power station
A power station, also known as a power plant or powerhouse and sometimes generating station or generating plant, is an industrial facility that generates electricity. Most power stations contain one or more generators, or spinning machines converting mechanical power into three-phase electrical power. The relative motion between a magnetic field and a conductor creates an electric current. The energy source harnessed to turn the generator varies widely. Most power stations in the world burn fossil fuels such as coal, oil, and natural gas to generate electricity. Cleaner sources include nuclear power, and increasingly use renewables such as the sun, wind, waves and running water.
Vehicular generators
Roadway vehicles
Motor vehicles require electrical energy to power their instrumentation, keep the engine itself operating, and recharge their batteries. Until about the 1960s motor vehicles tended to use DC generators (dynamos) with electromechanical regulators. Following the historical trend above and for many of the same reasons, these have now been replaced by alternators with built-in rectifier circuits.
Bicycles
Bicycles require energy to power running lights and other equipment. There are two common kinds of generator in use on bicycles: bottle dynamos which engage the bicycle's tire on an as-needed basis, and hub dynamos which are directly attached to the bicycle's drive train. The name is conventional as they are small permanent-magnet alternators, not self-excited DC machines as are dynamos. Some electric bicycles are capable of regenerative braking, where the drive motor is used as a generator to recover some energy during braking.
Sailboats
Sailing boats may use a water- or wind-powered generator to trickle-charge the batteries. A small propeller or wind turbine is connected to a low-power generator to supply current at typical wind or cruising speeds.
Recreational vehicles
Recreational vehicles need an extra power supply to power their onboard accessories, including air conditioning units, and refrigerators. An RV power plug is connected to the electric generator to obtain a stable power supply.
Electric scooters
Electric scooters with regenerative braking have become popular all over the world. Engineers use kinetic energy recovery systems on these scooters to reduce energy consumption and increase their range by up to 40–60%, simply by recovering energy with the magnetic brake, which generates electric energy for further use. Modern vehicles of this kind reach speeds of up to 25–30 km/h and can run up to 35–40 km.
Genset
An engine-generator is the combination of an electrical generator and an engine (prime mover) mounted together to form a single piece of self-contained equipment. The engines used are usually piston engines, but gas turbines can also be used, and there are even hybrid diesel-gas units, called dual-fuel units. Many different versions of engine-generators are available – ranging from very small portable petrol powered sets to large turbine installations. The primary advantage of engine-generators is the ability to independently supply electricity, allowing the units to serve as backup power sources.
Human powered electrical generators
A generator can also be driven by human muscle power (for instance, in field radio station equipment).
Human powered electric generators are commercially available, and have been the project of some DIY enthusiasts. Typically operated by means of pedal power, a converted bicycle trainer, or a foot pump, such generators can be practically used to charge batteries, and in some cases are designed with an integral inverter. An average "healthy human" can produce a steady 75 watts (0.1 horsepower) for a full eight hour period, while a "first class athlete" can produce approximately 298 watts (0.4 horsepower) for a similar period, at the end of which an undetermined period of rest and recovery will be required. At 298 watts, the average "healthy human" becomes exhausted within 10 minutes. The net electrical power that can be produced will be less, due to the efficiency of the generator. Portable radio receivers with a crank are made to reduce battery purchase requirements, see clockwork radio. During the mid 20th century, pedal powered radios were used throughout the Australian outback, to provide schooling (School of the Air), medical and other needs in remote stations and towns.
Mechanical measurement
A tachogenerator is an electromechanical device which produces an output voltage proportional to its shaft speed. It may be used for a speed indicator or in a feedback speed control system. Tachogenerators are frequently used to power tachometers to measure the speeds of electric motors, engines, and the equipment they power. Generators generate voltage roughly proportional to shaft speed. With precise construction and design, generators can be built to produce very precise voltages for certain ranges of shaft speeds.
Equivalent circuit
An equivalent circuit of a generator and load is shown in the adjacent diagram. The generator is represented by an abstract generator consisting of an ideal voltage source and an internal impedance. The generator's and parameters can be determined by measuring the winding resistance (corrected to operating temperature), and measuring the open-circuit and loaded voltage for a defined current load.
This is the simplest model of a generator, further elements may need to be added for an accurate representation. In particular, inductance can be added to allow for the machine's windings and magnetic leakage flux, but a full representation can become much more complex than this.
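A minimal MATLAB/Octave sketch of this parameter estimation for the simplest, purely resistive model; the measured values are illustrative.

% Equivalent-circuit parameters from an open-circuit and a loaded measurement.
V_oc   = 24.0;                       % open-circuit terminal voltage, V
V_load = 22.8;                       % terminal voltage under load, V
I_load = 6.0;                        % load current, A
V_src  = V_oc;                       % ideal source voltage of the model
Z_int  = (V_oc - V_load) / I_load;   % internal impedance (here purely resistive), ohm
fprintf('source voltage %.1f V, internal impedance %.2f ohm\n', V_src, Z_int);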
See also
Diesel generator
Electricity generation
Electric motor
Engine-generator
Faraday's law of induction
Gas turbine
Generation expansion planning
Goodness factor
Hydropower
Steam generator (boiler)
Steam generator (railroad)
Steam turbine
Superconducting electric machine
Thermoelectric generator
Thermal power station
Tidal stream generator
References
English inventions
19th-century inventions | Electric generator | [
"Physics",
"Technology"
] | 4,260 | [
"Physical systems",
"Electrical generators",
"Machines"
] |
82,342 | https://en.wikipedia.org/wiki/Lymph%20node | A lymph node, or lymph gland, is a kidney-shaped organ of the lymphatic system and the adaptive immune system. A large number of lymph nodes are linked throughout the body by the lymphatic vessels. They are major sites of lymphocytes that include B and T cells. Lymph nodes are important for the proper functioning of the immune system, acting as filters for foreign particles including cancer cells, but have no detoxification function.
In the lymphatic system, a lymph node is a secondary lymphoid organ. A lymph node is enclosed in a fibrous capsule and is made up of an outer cortex and an inner medulla.
Lymph nodes become inflamed or enlarged in various diseases, which may range from trivial throat infections to life-threatening cancers. The condition of lymph nodes is very important in cancer staging, which decides the treatment to be used and determines the prognosis. Lymphadenopathy refers to glands that are enlarged or swollen. When inflamed or enlarged, lymph nodes can be firm or tender.
Structure
Lymph nodes are kidney or oval shaped and range in size from 2 mm to 25 mm on their long axis, with an average of 15 mm.
Each lymph node is surrounded by a fibrous capsule (made of collagenous connective tissue), which extends inside a lymph node to form trabeculae. The substance of a lymph node is divided into the outer cortex and the inner medulla. These are rich with cells. The hilum is an indent on the concave surface of the lymph node where lymphatic vessels leave and blood vessels enter and leave.
Lymph enters the convex side of a lymph node through multiple afferent lymphatic vessels, and from there, it flows into a series of sinuses. Upon entering the lymph node, lymph first passes into a space beneath the capsule known as the subcapsular sinus, then moves into the cortical sinuses. After traversing the cortex, lymph collects in the medullary sinuses. Finally, all of these sinuses drain into the efferent lymphatic vessels, which carry the lymph away from the node, exiting at the hilum on the concave side.
Location
Lymph nodes are present throughout the body, are more concentrated near and within the trunk, and are divided into groups. There are about 450 lymph nodes in the adult. Some lymph nodes can be felt when enlarged (and occasionally when not), such as the axillary lymph nodes under the arm, the cervical lymph nodes of the head and neck and the inguinal lymph nodes near the groin crease. Most lymph nodes lie within the trunk adjacent to other major structures in the body - such as the paraaortic lymph nodes and the tracheobronchial lymph nodes. The lymphatic drainage patterns are different from person to person and even asymmetrical on each side of the same body.
There are no lymph nodes in the central nervous system, which is separated from the body by the blood–brain barrier. Lymph from the meningeal lymphatic vessels in the CNS drains to the deep cervical lymph nodes. However, the CNS does innervate lymph nodes via sympathetic nerves. These regulate lymphocyte proliferation and migration, antibody secretion, blood perfusion, and inflammatory cytokine production.
Size
Subdivisions
A lymph node is divided into compartments called nodules (or lobules), each consisting of a region of cortex with combined follicle B cells, a paracortex of T cells, and a part of the nodule in the medulla. The substance of a lymph node is divided into the outer cortex and the inner medulla. The cortex of a lymph node is the outer portion of the node, underneath the capsule and the subcapsular sinus. It has an outer part and a deeper part known as the paracortex. The outer cortex consists of groups of mainly inactivated B cells called follicles. When activated, these may develop into what is called a germinal centre. The deeper paracortex mainly consists of the T cells. Here the T-cells mainly interact with dendritic cells, and the reticular network is dense.
The medulla contains large blood vessels, sinuses and medullary cords that contain antibody-secreting plasma cells. There are fewer cells in the medulla.
The medullary cords are cords of lymphatic tissue, and include plasma cells, macrophages, and B cells.
Cells
In the lymphatic system a lymph node is a secondary lymphoid organ. Lymph nodes contain lymphocytes, a type of white blood cell, and are primarily made up of B cells and T cells. B cells are mainly found in the outer cortex where they are clustered together as follicular B cells in lymphoid follicles, and T cells and dendritic cells are mainly found in the paracortex.
There are fewer cells in the medulla than the cortex. The medulla contains plasma cells, as well as macrophages which are present within the medullary sinuses.
As part of the reticular network, there are follicular dendritic cells in the B cell follicle and fibroblastic reticular cells in the T cell cortex. The reticular network provides structural support and a surface for adhesion of the dendritic cells, macrophages and lymphocytes. It also allows exchange of material with blood through the high endothelial venules and provides the growth and regulatory factors necessary for activation and maturation of immune cells.
Lymph flow
Lymph enters the convex side of a lymph node through multiple afferent lymphatic vessels, which form a network of lymphatic vessels () and flows into a space () underneath the capsule called the subcapsular sinus. From here, lymph flows into sinuses within the cortex. After passing through the cortex, lymph then collects in medullary sinuses. All of these sinuses drain into the efferent lymphatic vessels to exit the node at the hilum on the concave side.
These are channels within the node lined by endothelial cells along with fibroblastic reticular cells, allowing for the smooth flow of lymph. The endothelium of the subcapsular sinus is continuous with that of the afferent lymph vessel and also with that of the similar sinuses flanking the trabeculae and within the cortex. These vessels are smaller and do not allow the passage of macrophages so that they remain contained to function within a lymph node. In the course of the lymph, lymphocytes may be activated as part of the adaptive immune response.
There is usually only one efferent vessel though sometimes there may be two, in contrast to the multiple afferent channels that bring lymph into the node. Medullary sinuses contain histiocytes (immobile macrophages) and reticular cells, the former of which, along with T and B cells, become activated in the presence of antigens through lymphatic flow. The fewer efferent vessels allow this flow to be slowed, providing time to activate and distribute a larger number of immune cells in the event of an infection.
A lymph node contains lymphoid tissue, i.e., a meshwork of fibers known as a reticulum, with white blood cells enmeshed in it. The regions of the meshwork where there are few cells are known as lymph sinuses. These are lined by reticular cells, fibroblasts and fixed macrophages.
Capsule
Thin reticular fibers (reticulin) of reticular connective tissue form a supporting meshwork inside the node.
These reticular cells also form a conduit network within the lymph node that functions as a molecular sieve, to prevent pathogens that enter the lymph node through afferent vessels re-enter the blood stream. The lymph node capsule is composed of dense irregular connective tissue with some plain collagenous fibers, and a number of membranous processes or trabeculae extend from its internal surface. The trabeculae pass inward, radiating toward the center of the node, for about one-third or one-fourth of the space between the circumference and the center of the node. In some animals they are sufficiently well-marked to divide the peripheral or cortical portion of the node into a number of compartments (nodules), but in humans this arrangement is not obvious. The larger trabeculae springing from the capsule break up into finer bands, and these interlace to form a mesh-work in the central or medullary portion of the node. These trabecular spaces formed by the interlacing trabeculae contain the proper lymph node substance or lymphoid tissue. The node pulp does not, however, completely fill the spaces, but leaves between its outer margin and the enclosing trabeculae a channel or space of uniform width throughout. This is termed the subcapsular sinus (lymph path or lymph sinus). Running across it are a number of finer trabeculae of reticular fibers, mostly covered by ramifying cells.
Function
In the lymphatic system, a lymph node is a secondary lymphoid organ.
The primary function of lymph nodes is the filtering of lymph to identify and fight infection. In order to do this, lymph nodes contain lymphocytes, a type of white blood cell, which includes B cells and T cells. These circulate through the bloodstream and enter and reside in lymph nodes. B cells produce antibodies. Each antibody has a single predetermined target, an antigen, that it can bind to. These circulate throughout the bloodstream and if they find this target, the antibodies bind to it and stimulate an immune response. Each B cell produces different antibodies, and this process is driven in lymph nodes. B cells enter the bloodstream as "naive" cells produced in bone marrow. After entering a lymph node, they then enter a lymphoid follicle, where they multiply and divide, each producing a different antibody. If a cell is stimulated, it will go on to produce more antibodies (a plasma cell) or act as a memory cell to help the body fight future infection. If a cell is not stimulated, it will undergo apoptosis and die.
Antigens are molecules found on bacterial cell walls, chemical substances secreted from bacteria, or sometimes even molecules present in body tissue itself. These are taken up by cells throughout the body called antigen-presenting cells, such as dendritic cells. These antigen presenting cells enter the lymph system and then lymph nodes. They present the antigen to T cells and, if there is a T cell with the appropriate T cell receptor, it will be activated.
B cells acquire antigen directly from the afferent lymph. If a B cell binds its cognate antigen it will be activated. Some B cells will immediately develop into antibody-secreting plasma cells, and secrete IgM. Other B cells will internalize the antigen and present it to follicular helper T (Tfh) cells at the interface of the B and T cell zones. If a cognate Tfh cell is found it will upregulate CD40L and promote somatic hypermutation and isotype class switching of the B cell, increasing its antigen-binding affinity and changing its effector function. Proliferation of cells within a lymph node will make the node expand.
Lymph is present throughout the body, and circulates through lymphatic vessels. These drain into and out of lymph nodes: afferent vessels drain into nodes, and efferent vessels drain out of them. When lymph fluid enters a node, it drains into the node just beneath the capsule in a space called the subcapsular sinus. The subcapsular sinus drains into trabecular sinuses and finally into medullary sinuses. The sinus space is criss-crossed by the pseudopods of macrophages, which act to trap foreign particles and filter the lymph. The medullary sinuses converge at the hilum and lymph then leaves the lymph node via the efferent lymphatic vessel towards either a more central lymph node or ultimately for drainage into a central venous subclavian blood vessel.
The B cells migrate to the nodular cortex and medulla.
The T cells migrate to the deep cortex. This is a region of a lymph node called the paracortex that immediately surrounds the medulla. Because both naive T cells and dendritic cells express CCR7, they are drawn into the paracortex by the same chemotactic factors, increasing the chance of T cell activation. Both B and T lymphocytes enter lymph nodes from circulating blood through specialized high endothelial venules found in the paracortex.
Clinical significance
Swelling
Lymph node enlargement or swelling is known as lymphadenopathy. Swelling may be due to many causes, including infections, tumors, autoimmune disease, drug reactions, diseases such as amyloidosis and sarcoidosis, or because of lymphoma or leukemia. Depending on the cause, swelling may be painful, particularly if the expansion is rapid and due to an infection or inflammation. Lymph node enlargement may be localized to an area, which might suggest a local source of infection or a tumour in that area that has spread to the lymph node. It may also be generalized, which might suggest infection, connective tissue or autoimmune disease, or a malignancy of blood cells such as a lymphoma or leukemia. Rarely, depending on location, lymph node enlargement may cause problems such as difficulty breathing, or compression of a blood vessel (for example, superior vena cava obstruction).
Enlarged lymph nodes might be felt as part of a medical examination, or found on medical imaging. Features of the medical history may point to the cause, such as the speed of onset of swelling, pain, and other constitutional symptoms such as fevers or weight loss. For example, a tumour of the breast may result in swelling of the lymph nodes under the arms and weight loss and night sweats may suggest a malignancy such as lymphoma.
In addition to a medical exam by a medical practitioner, medical tests may include blood tests and scans may be needed to further examine the cause. A biopsy of a lymph node may also be needed.
Cancer
Lymph nodes can be affected by both primary cancers of lymph tissue, and secondary cancers affecting other parts of the body. Primary cancers of lymph tissue are called lymphomas and include Hodgkin lymphoma and non-Hodgkin lymphoma. Cancer of lymph nodes can cause a wide range of symptoms from painless long-term slowly growing swelling to sudden, rapid enlargement over days or weeks, with symptoms depending on the grade of the tumour. Most lymphomas are tumours of B-cells. Lymphoma is managed by haematologists and oncologists.
Local cancer in many parts of the body can cause lymph nodes to enlarge because of tumorous cells that have metastasised into the node. Lymph node involvement is often a key part in the diagnosis and treatment of cancer, acting as "sentinels" of local disease, incorporated into TNM staging and other cancer staging systems. As part of the investigations or workup for cancer, lymph nodes may be imaged or even surgically removed. If removed, the lymph node will be stained and examined under a microscope by a pathologist to determine if there is evidence of cells that appear cancerous (i.e. have metastasized into the node). The staging of the cancer, and therefore the treatment approach and prognosis, is predicated on the presence of node metastases.
Lymphedema
Lymphedema is the condition of swelling (edema) of tissue relating to insufficient clearance by the lymphatic system. It can be congenital as a result usually of undeveloped or absent lymph nodes, and is known as primary lymphedema. Lymphedema most commonly arises in the arms or legs, but can also occur in the chest wall, genitals, neck, and abdomen. Secondary lymphedema usually results from the removal of lymph nodes during breast cancer surgery or from other damaging treatments such as radiation. It can also be caused by some parasitic infections. Affected tissues are at a great risk of infection. Management of lymphedema may include advice to lose weight, exercise, keep the affected limb moist, and compress the affected area. Sometimes surgical management is also considered.
Similar lymphoid organs
The spleen and the tonsils are the larger secondary lymphoid organs that serve somewhat similar functions to lymph nodes, though the spleen filters blood cells rather than lymph. The tonsils are sometimes erroneously referred to as lymph nodes. Although the tonsils and lymph nodes do share certain characteristics, there are also many important differences between them, such as their location, structure and size. Furthermore, the tonsils filter tissue fluid whereas lymph nodes filter lymph.
The appendix contains lymphoid tissue and is therefore believed to play a role not only in the digestive system, but also in the immune system.
See also
Peyer's patch
Lymph sacs
References
Bibliography
External links
Lymph Nodes
Lymph Nodes Drainage
An overview of Normal Lymph Nodes and Swollen lymph nodes and their evaluation
Immune system
Lymphoid organ | Lymph node | [
"Biology"
] | 3,860 | [
"Immune system",
"Organ systems"
] |
82,361 | https://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt%20process | In mathematics, particularly linear algebra and numerical analysis, the Gram–Schmidt process or Gram-Schmidt algorithm is a way of finding a set of two or more vectors that are perpendicular to each other.
By technical definition, it is a method of constructing an orthonormal basis from a set of vectors in an inner product space, most commonly the Euclidean space equipped with the standard inner product. The Gram–Schmidt process takes a finite, linearly independent set of vectors for and generates an orthogonal set that spans the same -dimensional subspace of as .
The method is named after Jørgen Pedersen Gram and Erhard Schmidt, but Pierre-Simon Laplace had been familiar with it before Gram and Schmidt. In the theory of Lie group decompositions, it is generalized by the Iwasawa decomposition.
The application of the Gram–Schmidt process to the column vectors of a full column rank matrix yields the QR decomposition (it is decomposed into an orthogonal and a triangular matrix).
The Gram–Schmidt process
The vector projection of a vector on a nonzero vector is defined as
where denotes the inner product of the vectors and . This means that is the orthogonal projection of onto the line spanned by . If is the zero vector, then is defined as the zero vector.
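A minimal MATLAB/Octave sketch of this projection for column vectors with the standard inner product; the function name proj is illustrative.

function p = proj(u, v)
    % Orthogonal projection of v onto the line spanned by u;
    % returns the zero vector when u is the zero vector, as in the definition above.
    if all(u == 0)
        p = zeros(size(u));
    else
        p = (u' * v) / (u' * u) * u;
    end
end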
Given vectors the Gram–Schmidt process defines the vectors as follows:
The sequence is the required system of orthogonal vectors, and the normalized vectors form an orthonormal set. The calculation of the sequence is known as Gram–Schmidt orthogonalization, and the calculation of the sequence is known as Gram–Schmidt orthonormalization.
To check that these formulas yield an orthogonal sequence, first compute by substituting the above formula for : we get zero. Then use this to compute again by substituting the formula for : we get zero. For arbitrary the proof is accomplished by mathematical induction.
Geometrically, this method proceeds as follows: to compute , it projects orthogonally onto the subspace generated by , which is the same as the subspace generated by . The vector is then defined to be the difference between and this projection, guaranteed to be orthogonal to all of the vectors in the subspace .
The Gram–Schmidt process also applies to a linearly independent countably infinite sequence . The result is an orthogonal (or orthonormal) sequence such that for natural number : the algebraic span of is the same as that of .
If the Gram–Schmidt process is applied to a linearly dependent sequence, it outputs the zero vector on the th step, namely the step at which the current vector is a linear combination of the preceding ones. If an orthonormal basis is to be produced, then the algorithm should test for zero vectors in the output and discard them because no multiple of a zero vector can have a length of 1. The number of vectors output by the algorithm will then be the dimension of the space spanned by the original inputs.
A variant of the Gram–Schmidt process using transfinite recursion applied to a (possibly uncountably) infinite sequence of vectors yields a set of orthonormal vectors with such that for any , the completion of the span of is the same as that of In particular, when applied to a (algebraic) basis of a Hilbert space (or, more generally, a basis of any dense subspace), it yields a (functional-analytic) orthonormal basis. Note that in the general case often the strict inequality holds, even if the starting set was linearly independent, and the span of need not be a subspace of the span of (rather, it's a subspace of its completion).
Example
Euclidean space
Consider the following set of vectors in (with the conventional inner product)
Now, perform Gram–Schmidt, to obtain an orthogonal set of vectors:
We check that the vectors and are indeed orthogonal:
noting that if the dot product of two vectors is 0 then they are orthogonal.
For non-zero vectors, we can then normalize the vectors by dividing out their sizes as shown above:
Properties
Denote by the result of applying the Gram–Schmidt process to a collection of vectors . This yields a map .
It has the following properties:
It is continuous
It is orientation preserving in the sense that .
It commutes with orthogonal maps:
Let be orthogonal (with respect to the given inner product). Then we have
Further, a parametrized version of the Gram–Schmidt process yields a (strong) deformation retraction of the general linear group onto the orthogonal group .
Numerical stability
When this process is implemented on a computer, the vectors are often not quite orthogonal, due to rounding errors. For the Gram–Schmidt process as described above (sometimes referred to as "classical Gram–Schmidt") this loss of orthogonality is particularly bad; therefore, it is said that the (classical) Gram–Schmidt process is numerically unstable.
The Gram–Schmidt process can be stabilized by a small modification; this version is sometimes referred to as modified Gram-Schmidt or MGS. This approach gives the same result as the original formula in exact arithmetic and introduces smaller errors in finite-precision arithmetic.
Instead of computing the vector as
it is computed as
This method is used in the previous animation, when the intermediate vector is used when orthogonalizing the blue vector .
Here is another description of the modified algorithm. Given the vectors , in our first step we produce vectors by removing components along the direction of . In formulas, . After this step we already have two of our desired orthogonal vectors , namely , but we also made already orthogonal to . Next, we orthogonalize those remaining vectors against . This means we compute by subtraction . Now we have stored the vectors where the first three vectors are already and the remaining vectors are already orthogonal to . As should be clear now, the next step orthogonalizes against . Proceeding in this manner we find the full set of orthogonal vectors . If orthonormal vectors are desired, then we normalize as we go, so that the denominators in the subtraction formulas turn into ones.
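A MATLAB/Octave sketch of this modified process, written in the same style as the classical routine given below; it assumes the columns of V are linearly independent (no zero-norm check), and the function name mgs is illustrative.

function U = mgs(V)
    % Modified Gram-Schmidt: columns of V are replaced by orthonormal columns of U.
    [n, k] = size(V);
    U = V;
    for i = 1:k
        U(:,i) = U(:,i) / norm(U(:,i));                        % normalize the current vector
        for j = i+1:k
            U(:,j) = U(:,j) - (U(:,i)'*U(:,j)) * U(:,i);       % orthogonalize the rest against it
        end
    end
end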
Algorithm
The following MATLAB algorithm implements classical Gram–Schmidt orthonormalization. The vectors (columns of matrix V, so that V(:,j) is the th vector) are replaced by orthonormal vectors (columns of U) which span the same subspace.
function U = gramschmidt(V)
[n, k] = size(V);
U = zeros(n,k);
U(:,1) = V(:,1) / norm(V(:,1));
for i = 2:k
U(:,i) = V(:,i);
for j = 1:i-1
U(:,i) = U(:,i) - (U(:,j)'*U(:,i)) * U(:,j);
end
U(:,i) = U(:,i) / norm(U(:,i));
end
end
The cost of this algorithm is asymptotically 2nk² floating point operations, where n is the dimensionality of the vectors and k their number.
Via Gaussian elimination
If the rows are written as a matrix , then applying Gaussian elimination to the augmented matrix will produce the orthogonalized vectors in place of . However the matrix must be brought to row echelon form, using only the row operation of adding a scalar multiple of one row to another. For example, taking as above, we have
And reducing this to row echelon form produces
The normalized vectors are then
as in the example above.
Determinant formula
The result of the Gram–Schmidt process may be expressed in a non-recursive formula using determinants.
where and, for , is the Gram determinant
Note that the expression for is a "formal" determinant, i.e. the matrix contains both scalars and vectors; the meaning of this expression is defined to be the result of a cofactor expansion along the row of vectors.
The determinant formula for the Gram-Schmidt is computationally (exponentially) slower than the recursive algorithms described above; it is mainly of theoretical interest.
Expressed using geometric algebra
Expressed using notation used in geometric algebra, the unnormalized results of the Gram–Schmidt process can be expressed as
which is equivalent to the expression using the operator defined above. The results can equivalently be expressed as
which is closely related to the expression using determinants above.
Alternatives
Other orthogonalization algorithms use Householder transformations or Givens rotations. The algorithms using Householder transformations are more stable than the stabilized Gram–Schmidt process. On the other hand, the Gram–Schmidt process produces the th orthogonalized vector after the th iteration, while orthogonalization using Householder reflections produces all the vectors only at the end. This makes only the Gram–Schmidt process applicable for iterative methods like the Arnoldi iteration.
Yet another alternative is motivated by the use of Cholesky decomposition for inverting the matrix of the normal equations in linear least squares. Let be a full column rank matrix, whose columns need to be orthogonalized. The matrix is Hermitian and positive definite, so it can be written as using the Cholesky decomposition. The lower triangular matrix with strictly positive diagonal entries is invertible. Then columns of the matrix are orthonormal and span the same subspace as the columns of the original matrix . The explicit use of the product makes the algorithm unstable, especially if the product's condition number is large. Nevertheless, this algorithm is used in practice and implemented in some software packages because of its high efficiency and simplicity.
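A MATLAB/Octave sketch of this approach (the function name is illustrative); as noted above, it can lose accuracy when the Gram matrix is ill-conditioned.

function Q = chol_orth(V)
    % Orthonormalization via the Cholesky factor of the Gram matrix:
    % chol returns upper-triangular R with R'*R = V'*V, and the columns of V/R are orthonormal.
    R = chol(V' * V);
    Q = V / R;       % right division solves Q*R = V
end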
In quantum mechanics there are several orthogonalization schemes with characteristics better suited for certain applications than original Gram–Schmidt. Nevertheless, it remains a popular and effective algorithm for even the largest electronic structure calculations.
Run-time complexity
Gram-Schmidt orthogonalization can be done in strongly-polynomial time. The run-time analysis is similar to that of Gaussian elimination.
See also
Linear algebra
Recursion
Orthogonality (mathematics)
References
Notes
Sources
.
.
.
.
External links
Harvey Mudd College Math Tutorial on the Gram-Schmidt algorithm
Earliest known uses of some of the words of mathematics: G The entry "Gram-Schmidt orthogonalization" has some information and references on the origins of the method.
Demos: Gram Schmidt process in plane and Gram Schmidt process in space
Gram-Schmidt orthogonalization applet
NAG Gram–Schmidt orthogonalization of n vectors of order m routine
Proof: Raymond Puzio, Keenan Kidwell. "proof of Gram-Schmidt orthogonalization algorithm" (version 8). PlanetMath.org.
Linear algebra
Functional analysis
Articles with example MATLAB/Octave code | Gram–Schmidt process | [
"Mathematics"
] | 2,187 | [
"Functions and mappings",
"Functional analysis",
"Mathematical objects",
"Mathematical relations",
"Linear algebra",
"Algebra"
] |
82,381 | https://en.wikipedia.org/wiki/Electron%20capture | Electron capture (K-electron capture, also K-capture, or L-electron capture, L-capture) is a process in which the proton-rich nucleus of an electrically neutral atom absorbs an inner atomic electron, usually from the K or L electron shells. This process thereby changes a nuclear proton to a neutron and simultaneously causes the emission of an electron neutrino.
or, when written as a nuclear reaction equation, ${}^{0}_{-1}\mathrm{e} + {}^{1}_{1}\mathrm{p} \rightarrow {}^{1}_{0}\mathrm{n} + {}^{0}_{0}\nu_{e}$
Since this single emitted neutrino carries the entire decay energy, it has this single characteristic energy. Similarly, the momentum of the neutrino emission causes the daughter atom to recoil with a single characteristic momentum.
The resulting daughter nuclide, if it is in an excited state, then transitions to its ground state. Usually, a gamma ray is emitted during this transition, but nuclear de-excitation may also take place by internal conversion.
Following capture of an inner electron from the atom, an outer electron replaces the electron that was captured and one or more characteristic X-ray photons is emitted in this process. Electron capture sometimes also results in the Auger effect, where an electron is ejected from the atom's electron shell due to interactions between the atom's electrons in the process of seeking a lower energy electron state.
Following electron capture, the atomic number is reduced by one, the neutron number is increased by one, and there is no change in mass number. Simple electron capture by itself results in a neutral atom, since the loss of the electron in the electron shell is balanced by a loss of positive nuclear charge. However, a positive atomic ion may result from further Auger electron emission.
Electron capture is an example of weak interaction, one of the four fundamental forces.
Electron capture is the primary decay mode for isotopes with a relative superabundance of protons in the nucleus, but with insufficient energy difference between the isotope and its prospective daughter (the isobar with one less positive charge) for the nuclide to decay by emitting a positron. Electron capture is always an alternative decay mode for radioactive isotopes that do have sufficient energy to decay by positron emission. Electron capture is sometimes included as a type of beta decay, because the basic nuclear process, mediated by the weak force, is the same. In nuclear physics, beta decay is a type of radioactive decay in which a beta ray (fast energetic electron or positron) and a neutrino are emitted from an atomic nucleus. Electron capture is sometimes called inverse beta decay, though this term usually refers to the interaction of an electron antineutrino with a proton.
If the energy difference between the parent atom and the daughter atom is less than 1.022 MeV, positron emission is forbidden as not enough decay energy is available to allow it, and thus electron capture is the sole decay mode. For example, rubidium-83 (37 protons, 46 neutrons) will decay to krypton-83 (36 protons, 47 neutrons) solely by electron capture (the energy difference, or decay energy, is about 0.9 MeV).
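A minimal MATLAB/Octave sketch of this energy criterion, using the approximate 0.9 MeV decay energy quoted above for rubidium-83:

% Decay modes allowed by the parent-daughter atomic mass difference (in MeV).
Q = 0.9;              % decay energy, MeV (approximate value for Rb-83 -> Kr-83)
if Q <= 0
    fprintf('neither electron capture nor positron emission is possible\n');
elseif Q < 1.022      % 1.022 MeV = twice the electron rest energy
    fprintf('only electron capture is allowed (Q = %.3f MeV)\n', Q);
else
    fprintf('both electron capture and positron emission are allowed (Q = %.3f MeV)\n', Q);
end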
History
The theory of electron capture was first discussed by Gian-Carlo Wick in a 1934 paper, and then developed by Hideki Yukawa and others. K-electron capture was first observed by Luis Alvarez in vanadium, which he reported in 1937. Alvarez went on to study electron capture in gallium and other nuclides.
Reaction details
The electron that is captured is one of the atom's own electrons, and not a new, incoming electron, as might be suggested by the way the reactions are written below. A few examples of electron capture are:
^{26}_{13}Al + ^{0}_{-1}e -> ^{26}_{12}Mg + ^{0}_{0}ν_e
^{40}_{19}K + ^{0}_{-1}e -> ^{40}_{18}Ar + ^{0}_{0}ν_e
^{7}_{4}Be + ^{0}_{-1}e -> ^{7}_{3}Li + ^{0}_{0}ν_e
Radioactive isotopes that decay by pure electron capture can be inhibited from radioactive decay if they are fully ionized ("stripped" is sometimes used to describe such ions). It is hypothesized that such elements, if formed by the r-process in exploding supernovae, are ejected fully ionized and so do not undergo radioactive decay as long as they do not encounter electrons in outer space. Anomalies in elemental distributions are thought to be partly a result of this effect on electron capture. Inverse decays can also be induced by full ionisation: a nuclide that ordinarily decays into its daughter by electron capture can, once that daughter is fully ionised, be regenerated when the daughter decays into a bound state of the parent by bound-state β− decay.
Chemical bonds can also affect the rate of electron capture to a small degree (in general, less than 1%) depending on the proximity of electrons to the nucleus. For example, in 7Be, a difference of 0.9% has been observed between half-lives in metallic and insulating environments. This relatively large effect is due to the fact that beryllium is a small atom whose valence electrons are close to the nucleus and occupy orbitals with no orbital angular momentum. Electrons in s orbitals (regardless of shell or principal quantum number) have a probability antinode at the nucleus, and are thus far more subject to electron capture than p or d electrons, which have a probability node at the nucleus.
Around the elements in the middle of the periodic table, isotopes that are lighter than stable isotopes of the same element tend to decay through electron capture, while isotopes heavier than the stable ones decay by electron emission. Electron capture happens most often in the heavier neutron-deficient elements where the mass change is smallest and positron emission is not always possible. When the loss of mass in a nuclear reaction is greater than zero but less than 2m_ec^2 (about 1.022 MeV), the process cannot occur by positron emission but can occur spontaneously by electron capture.
Common examples
Some common radionuclides that decay solely by electron capture include beryllium-7, chromium-51, and iron-55. For a full list, see the table of nuclides.
See also
Chandrasekhar limit
References
External links
Nuclear physics
Nuclear chemistry
Radioactivity | Electron capture | [
"Physics",
"Chemistry"
] | 1,267 | [
"Nuclear chemistry",
"nan",
"Radioactivity",
"Nuclear physics"
] |
82,728 | https://en.wikipedia.org/wiki/Quantum%20superposition | Quantum superposition is a fundamental principle of quantum mechanics that states that linear combinations of solutions to the Schrödinger equation are also solutions of the Schrödinger equation. This follows from the fact that the Schrödinger equation is a linear differential equation in time and position. More precisely, the state of a system is given by a linear combination of all the eigenfunctions of the Schrödinger equation governing that system.
An example is a qubit used in quantum information processing. A qubit state is most generally a superposition of the basis states |0⟩ and |1⟩:
|Ψ⟩ = c_0|0⟩ + c_1|1⟩,
where |Ψ⟩ is the quantum state of the qubit, |0⟩ and |1⟩ denote particular solutions to the Schrödinger equation in Dirac notation, and c_0 and c_1 are the two probability amplitudes, both complex numbers. Here |0⟩ corresponds to the classical 0 bit, and |1⟩ to the classical 1 bit. The probabilities of measuring the system in the |0⟩ or |1⟩ state are given by |c_0|^2 and |c_1|^2 respectively (see the Born rule). Before the measurement occurs the qubit is in a superposition of both states.
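A minimal numerical sketch of the Born rule for a qubit, using arbitrarily chosen amplitudes rather than values from the text:

import numpy as np

# Arbitrary example amplitudes c0 and c1 for |0> and |1>, then normalized.
c = np.array([3.0 + 0.0j, 0.0 + 4.0j])
c = c / np.linalg.norm(c)        # enforce |c0|^2 + |c1|^2 = 1

p0, p1 = np.abs(c) ** 2          # Born rule: measurement probabilities
print(p0, p1)                    # 0.36 and 0.64 for this choice of amplitudes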
The interference fringes in the double-slit experiment provide another example of the superposition principle.
Wave postulate
The theory of quantum mechanics postulates that a wave equation completely determines the state of a quantum system at all times. Furthermore, this differential equation is restricted to be linear and homogeneous. These conditions mean that for any two solutions of the wave equation, Ψ_1 and Ψ_2, a linear combination of those solutions also solves the wave equation:
Ψ = c_1Ψ_1 + c_2Ψ_2,
for arbitrary complex coefficients c_1 and c_2. If the wave equation has more than two solutions, combinations of all such solutions are again valid solutions.
Transformation
The quantum wave equation can be solved using functions of position, ψ(x), or using functions of momentum, φ(p), and consequently superpositions of momentum functions are also solutions.
The position and momentum solutions are related by a linear transformation, a Fourier transformation. This transformation is itself a quantum superposition and every position wave function can be represented as a superposition of momentum wave functions and vice versa. These superpositions involve an infinite number of component waves.
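The position-to-momentum relationship can be illustrated numerically: a discretized Gaussian wave packet is expanded into momentum components with a fast Fourier transform and then reassembled. The grid, packet width, and mean momentum below are arbitrary choices for the sketch (hbar = 1).

import numpy as np

# Discretize a Gaussian wave packet on a position grid (arbitrary units).
N = 1024
x = np.linspace(-20.0, 20.0, N)
dx = x[1] - x[0]
psi_x = np.exp(-x**2 / 2.0) * np.exp(1j * 1.5 * x)   # packet with mean momentum about 1.5
psi_x /= np.sqrt(np.sum(np.abs(psi_x)**2) * dx)       # normalize in position space

# Expand into momentum components (a discrete stand-in for the Fourier transform).
phi_p = np.fft.fftshift(np.fft.fft(psi_x)) * dx / np.sqrt(2.0 * np.pi)
p = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
dp = p[1] - p[0]

print(np.sum(np.abs(phi_p)**2) * dp)       # ~1: the momentum superposition is normalized
print(np.sum(p * np.abs(phi_p)**2) * dp)   # ~1.5: mean momentum of the packet

# Superposing the momentum components reconstructs the position wave function.
psi_back = np.fft.ifft(np.fft.ifftshift(phi_p)) * np.sqrt(2.0 * np.pi) / dx
print(np.allclose(psi_x, psi_back))        # True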
Generalization to basis states
Other transformations express a quantum solution as a superposition of eigenvectors, each corresponding to a possible result of a measurement on the quantum system. An eigenvector |v_k⟩ of a mathematical operator Â satisfies the equation
Â|v_k⟩ = λ_k|v_k⟩,
where λ_k is one possible measured quantum value for the observable Â. A superposition of these eigenvectors can represent any solution:
|ψ⟩ = Σ_k c_k|v_k⟩.
The states |v_k⟩ are called basis states.
Compact notation for superpositions
Important mathematical operations on quantum system solutions can be performed using only the coefficients of the superposition, suppressing the details of the superposed functions. This leads to quantum systems expressed in the Dirac bra-ket notation.
This approach is especially effective for systems like quantum spin with no classical coordinate analog. Such shorthand notation is very common in textbooks and papers on quantum mechanics, and superposition of basis states is a fundamental tool in quantum mechanics.
Consequences
Paul Dirac described the superposition principle as follows:
The non-classical nature of the superposition process is brought out clearly if we consider the superposition of two states, A and B, such that there exists an observation which, when made on the system in state A, is certain to lead to one particular result, a say, and when made on the system in state B is certain to lead to some different result, b say. What will be the result of the observation when made on the system in the superposed state? The answer is that the result will be sometimes a and sometimes b, according to a probability law depending on the relative weights of A and B in the superposition process. It will never be different from both a and b [i.e., either a or b]. The intermediate character of the state formed by superposition thus expresses itself through the probability of a particular result for an observation being intermediate between the corresponding probabilities for the original states, not through the result itself being intermediate between the corresponding results for the original states.
Anton Zeilinger, referring to the prototypical example of the double-slit experiment, has elaborated regarding the creation and destruction of quantum superposition:
"[T]he superposition of amplitudes ... is only valid if there is no way to know, even in principle, which path the particle took. It is important to realize that this does not imply that an observer actually takes note of what happens. It is sufficient to destroy the interference pattern, if the path information is accessible in principle from the experiment or even if it is dispersed in the environment and beyond any technical possibility to be recovered, but in principle still ‘‘out there.’’ The absence of any such information is the essential criterion for quantum interference to appear.
Theory
General formalism
Any quantum state can be expanded as a sum or superposition of the eigenstates of a Hermitian operator, like the Hamiltonian, because the eigenstates form a complete basis:
|ψ⟩ = Σ_n c_n|n⟩,
where |n⟩ are the energy eigenstates of the Hamiltonian. For continuous variables like the position eigenstates |x⟩:
|ψ⟩ = ∫ dx ψ(x)|x⟩,
where ψ(x) = ⟨x|ψ⟩ is the projection of the state onto the |x⟩ basis and is called the wave function of the particle. In both instances we notice that |ψ⟩ can be expanded as a superposition of an infinite number of basis states.
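A small numerical sketch of this expansion, with an arbitrary 3×3 Hermitian matrix standing in for the Hamiltonian and an arbitrary state vector:

import numpy as np

# Arbitrary Hermitian matrix playing the role of a Hamiltonian.
H = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
energies, eigvecs = np.linalg.eigh(H)    # columns of eigvecs are the energy eigenstates
print(energies)                          # the possible measured energies E_n

# An arbitrary normalized state vector.
psi = np.array([1.0, 2.0, 3.0], dtype=complex)
psi /= np.linalg.norm(psi)

# Expansion coefficients c_n = <n|psi>, and the reconstructed superposition.
c = eigvecs.conj().T @ psi
psi_rebuilt = eigvecs @ c
print(np.allclose(psi, psi_rebuilt))     # True: the eigenstates form a complete basis
print(np.sum(np.abs(c)**2))              # 1.0: Born-rule probabilities sum to one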
Example
Given the Schrödinger equation for the energy eigenstates,
Ĥ|n⟩ = E_n|n⟩,
where n indexes the set of eigenstates of the Hamiltonian with energy eigenvalues E_n, we see immediately that
Ĥ(c_a|a⟩ + c_b|b⟩) = c_aE_a|a⟩ + c_bE_b|b⟩,
where
|Ψ⟩ = c_a|a⟩ + c_b|b⟩
is a solution of the Schrödinger equation but is not generally an eigenstate, because E_a and E_b are not generally equal. We say that |Ψ⟩ is made up of a superposition of energy eigenstates. Now consider the more concrete case of an electron that has either spin up or down. We now index the eigenstates with the spinors in the Ŝ_z basis:
|Ψ⟩ = c_1|↑⟩ + c_2|↓⟩,
where |↑⟩ and |↓⟩ denote spin-up and spin-down states respectively. As previously discussed, the magnitudes of the complex coefficients give the probability of finding the electron in either definite spin state:
P(↑) = |c_1|^2, P(↓) = |c_2|^2, with P(↑) + P(↓) = 1,
where the probability of finding the particle with either spin up or down is normalized to 1. Notice that c_1 and c_2 are complex numbers, so that, for example,
|Ψ⟩ = (3i/5)|↑⟩ + (4/5)|↓⟩
is an allowed state. We now get
P(↑) = |3i/5|^2 = 9/25 and P(↓) = |4/5|^2 = 16/25.
If we consider a qubit with both position and spin, the state is a superposition of all possibilities for both:
|Ψ⟩ = ∫ dx (ψ_+(x)|x⟩⊗|↑⟩ + ψ_−(x)|x⟩⊗|↓⟩),
so the general state is a sum of tensor products of position-space wave functions and spinors.
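A sketch of such a combined state on a coarse grid, with an arbitrary three-point "position" basis and two spin components (all amplitudes below are illustrative):

import numpy as np

# Arbitrary position-space amplitudes on 3 grid points, for spin-up and spin-down.
psi_up   = np.array([0.2, 0.5, 0.1], dtype=complex)
psi_down = np.array([0.1, 0.3, 0.6], dtype=complex)

up   = np.array([1.0, 0.0], dtype=complex)   # |up> spinor
down = np.array([0.0, 1.0], dtype=complex)   # |down> spinor

# General state: a sum of tensor (Kronecker) products of position amplitudes and spinors.
state = np.kron(psi_up, up) + np.kron(psi_down, down)
state /= np.linalg.norm(state)               # normalize the 6-component state
print(state.shape)                           # (6,)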
Experiments
Successful experiments involving superpositions of relatively large (by the standards of quantum physics) objects have been performed.
A beryllium ion has been trapped in a superposed state.
A double slit experiment has been performed with molecules as large as buckyballs and functionalized oligoporphyrins with up to 2000 atoms.
Molecules with masses exceeding 10,000 atomic mass units and composed of over 810 atoms have been successfully superposed.
Very sensitive magnetometers have been realized using superconducting quantum interference devices (SQUIDS) that operate using quantum interference effects in superconducting circuits.
A piezoelectric "tuning fork" has been constructed, which can be placed into a superposition of vibrating and non-vibrating states. The resonator comprises about 10 trillion atoms.
Recent research indicates that chlorophyll within plants appears to exploit the feature of quantum superposition to achieve greater efficiency in transporting energy, allowing pigment proteins to be spaced further apart than would otherwise be possible.
In quantum computers
In quantum computers, a qubit is the analog of the classical information bit and qubits can be superposed. Unlike classical bits, a superposition of qubits represents information about two states in parallel. Controlling the superposition of qubits is a central challenge in quantum computation. Qubit systems like nuclear spins with small coupling strength are robust to outside disturbances, but the same small coupling makes it difficult to read out results.
See also
References
Further reading
Bohr, N. (1927/1928). The quantum postulate and the recent development of atomic theory, Nature Supplement 14 April 1928, 121: 580–590.
Cohen-Tannoudji, C., Diu, B., Laloë, F. (1973/1977). Quantum Mechanics, translated from the French by S. R. Hemley, N. Ostrowsky, D. Ostrowsky, second edition, volume 1, Wiley, New York, .
Einstein, A. (1949). Remarks concerning the essays brought together in this co-operative volume, translated from the original German by the editor, pp. 665–688 in Schilpp, P. A. editor (1949), Albert Einstein: Philosopher-Scientist, volume , Open Court, La Salle IL.
Feynman, R. P., Leighton, R.B., Sands, M. (1965). The Feynman Lectures on Physics, volume 3, Addison-Wesley, Reading, MA.
Merzbacher, E. (1961/1970). Quantum Mechanics, second edition, Wiley, New York.
Messiah, A. (1961). Quantum Mechanics, volume 1, translated by G.M. Temmer from the French Mécanique Quantique, North-Holland, Amsterdam.
Quantum mechanics
Articles containing video clips | Quantum superposition | [
"Physics"
] | 1,875 | [
"Theoretical physics",
"Quantum mechanics"
] |
82,916 | https://en.wikipedia.org/wiki/Gear | A gear or gearwheel is a rotating machine part typically used to transmit rotational motion and/or torque by means of a series of teeth that engage with compatible teeth of another gear or other part. The teeth can be integral saliences or cavities machined on the part, or separate pegs inserted into it. In the latter case, the gear is usually called a cogwheel. A cog may be one of those pegs or the whole gear. Two or more meshing gears are called a gear train.
The smaller member of a pair of meshing gears is often called pinion. Most commonly, gears and gear trains can be used to trade torque for rotational speed between two axles or other rotating parts and/or to change the axis of rotation and/or to invert the sense of rotation. A gear may also be used to transmit linear force and/or linear motion to a rack, a straight bar with a row of compatible teeth.
Gears are among the most common mechanical parts. They come in a great variety of shapes and materials, and are used for many different functions and applications. Diameters may range from a few μm in micromachines, to a few mm in watches and toys to over 10 metres in some mining equipment. Other types of parts that are somewhat similar in shape and function to gears include the sprocket, which is meant to engage with a link chain instead of another gear, and the timing pulley, meant to engage a timing belt. Most gears are round and have equal teeth, designed to operate as smoothly as possible; but there are several applications for non-circular gears, and the Geneva drive has an extremely uneven operation, by design.
Gears can be seen as instances of the basic lever "machine". When a small gear drives a larger one, the mechanical advantage of this ideal lever causes the torque T to increase but the rotational speed ω to decrease. The opposite effect is obtained when a large gear drives a small one. The changes are proportional to the gear ratio r, the ratio of the tooth counts: namely, if gear A drives gear B, then r = N_B/N_A, T_B = r T_A, and ω_B = ω_A/r. Depending on the geometry of the pair, the sense of rotation may also be inverted (from clockwise to anti-clockwise, or vice-versa).
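A minimal sketch of these relations, assuming an ideal, lossless gear pair (the function and variable names are illustrative, not from the article):

def driven_torque_and_speed(teeth_driver, teeth_driven, torque_in, speed_in_rpm):
    """Ideal (lossless) gear pair: torque scales up by the ratio, speed scales down."""
    r = teeth_driven / teeth_driver           # gear ratio
    return r * torque_in, speed_in_rpm / r

# A 20-tooth pinion driving an 80-tooth gear at 10 N*m and 1200 rpm.
torque_out, speed_out = driven_torque_and_speed(20, 80, 10.0, 1200.0)
print(torque_out, speed_out)                  # 40.0 N*m and 300.0 rpm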
Most vehicles have a transmission or "gearbox" containing a set of gears that can be meshed in multiple configurations. The gearbox lets the operator vary the torque that is applied to the wheels without changing the engine's speed. Gearboxes are used also in many other machines, such as lathes and conveyor belts. In all those cases, terms like "first gear", "high gear", and "reverse gear" refer to the overall torque ratios of different meshing configurations, rather than to specific physical gears. These terms may be applied even when the vehicle does not actually contain gears, as in a continuously variable transmission.
History
The earliest surviving gears date from the 4th century BC in China (Zhan Guo times – Late East Zhou dynasty), which have been preserved at the Luoyang Museum of Henan Province, China.
In Europe, Aristotle mentions gears around 330 BC, as wheel drives in windlasses. He observed that the direction of rotation is reversed when one gear wheel drives another gear wheel. Philon of Byzantium was one of the first who used gears in water raising devices. Gears appear in works connected to Hero of Alexandria, in Roman Egypt circa AD 50, but can be traced back to the mechanics of the Library of Alexandria in 3rd-century BC Ptolemaic Egypt, and were greatly developed by the Greek polymath Archimedes (287–212 BC). The earliest surviving gears in Europe were found in the Antikythera mechanism an example of a very early and intricate geared device, designed to calculate astronomical positions of the sun, moon, and planets, and predict eclipses. Its time of construction is now estimated between 150 and 100 BC.
The Chinese engineer Ma Jun (–265) described a south-pointing chariot. A set of differential gears connected to the wheels and to a pointer on top of the chariot kept the direction of the latter unchanged as the chariot turned.
Another early surviving example of a geared mechanism is a complex calendrical device, showing the phase of the Moon, the day of the month and the places of the Sun and the Moon in the Zodiac, which was invented in the Byzantine Empire in the early 6th century.
Geared mechanical water clocks were built in China by 725.
Around 1221, a geared astrolabe was built in Isfahan showing the position of the moon in the zodiac and its phase, and the number of days since new moon.
The worm gear was invented in the Indian subcontinent, for use in roller cotton gins, some time during the 13th–14th centuries.
A complex astronomical clock, called the Astrarium, was built between 1348 and 1364 by Giovanni Dondi dell'Orologio. It had seven faces and 107 moving parts; it showed the positions of the sun, the moon and the five planets then known, as well as religious feast days. The Salisbury Cathedral clock, built in 1386, is the world's oldest still-working geared mechanical clock.
Differential gears were used by the British clock maker Joseph Williamson in 1720.
However, the oldest functioning gears by far were created by Nature, and are seen in the hind legs of the nymphs of the planthopper insect Issus coleoptratus.
Etymology
The word gear is probably from Old Norse gørvi (plural gørvar) 'apparel, gear,' related to gøra, gørva 'to make, construct, build; set in order, prepare,' a common verb in Old Norse, "used in a wide range of situations from writing a book to dressing meat". In this context, the meaning of 'toothed wheel in machinery' first attested 1520s; specific mechanical sense of 'parts by which a motor communicates motion' is from 1814; specifically of a vehicle (bicycle, automobile, etc.) by 1888.
A cog is a tooth on a wheel. From Middle English cogge, from Old Norse (compare Norwegian kugg ('cog'), Swedish kugg, kugge ('cog, tooth')), from Proto-Germanic *kuggō (compare Dutch kogge ('cogboat'), German Kock), from Proto-Indo-European *gugā ('hump, ball') (compare Lithuanian gugà ('pommel, hump, hill'), from PIE *gēw- ('to bend, arch')). First used c. 1300 in the sense of 'a wheel having teeth or cogs'; late 14c., 'tooth on a wheel'; cog-wheel, early 15c.
Materials
The gears of the Antikythera mechanism are made of bronze, and the earliest surviving Chinese gears are made of iron. These metals, as well as tin, have been generally used for clocks and similar mechanisms to this day.
Historically, large gears, such as used in flour mills, were commonly made of wood rather than metal. They were cogwheels, made by inserting a series of wooden pegs or cogs around the rim of a wheel. The cogs were often made of maple wood.
Wooden gears have been gradually replaced by ones made of metal, such as cast iron at first, then steel and aluminum. Steel is most commonly used because of its high strength-to-weight ratio and low cost. Aluminum is not as strong as steel for the same geometry, but is lighter and easier to machine. Powder metallurgy may be used with alloys that cannot be easily cast or machined.
Still, because of cost or other considerations, some early metal gears had wooden cogs, each tooth forming a type of specialised 'through' mortise and tenon joint.
More recently, engineering plastics and composite materials have been replacing metals in many applications, especially those with moderate speed and torque. They are not as strong as steel, but are cheaper, can be mass-manufactured by injection molding, and don't need lubrication. Plastic gears may even be intentionally designed to be the weakest part in a mechanism, so that in case of jamming they will fail first and thus avoid damage to more expensive parts. Such sacrificial gears may be a simpler alternative to other overload-protection devices such as clutches and torque- or current-limited motors.
In spite of the advantages of metal and plastic, wood continued to be used for large gears until a couple of centuries ago, because of cost, weight, tradition, or other considerations. In 1967 the Thompson Manufacturing Company of Lancaster, New Hampshire still had a very active business in supplying tens of thousands of maple gear teeth per year, mostly for use in paper mills and grist mills, some dating back over 100 years.
Manufacture
The most common techniques for gear manufacturing are die, sand, and investment casting; injection molding; powder metallurgy; blanking; and gear cutting.
As of 2014, an estimated 80% of all gearing produced worldwide is produced by net shape molding. Molded gearing is usually powder metallurgy, plastic injection, or metal die casting. Gears produced by powder metallurgy often require a sintering step after they are removed from the mold. Cast gears require gear cutting or other machining to shape the teeth to the necessary precision. The most common form of gear cutting is hobbing, but gear shaping, milling, and broaching may be used instead.
For metal gears intended for heavy duty operation, such as in the transmissions of cars and trucks, the teeth are heat treated to make them hard and more wear resistant while leaving the core soft but tough. For large gears that are prone to warping, a quench press is used.
Gears can be made by 3D printing; however, this alternative is typically used only for prototypes or very limited production quantities, because of its high cost, low accuracy, and relatively low strength of the resulting part.
Comparison with other drive mechanisms
Besides gear trains, other alternative methods of transmitting torque between non-coaxial parts include link chains driven by sprockets, friction drives, belts and pulleys, hydraulic couplings, and timing belts.
One major advantage of gears is that their rigid body and the snug interlocking of the teeth ensure precise tracking of the rotation across the gear train, limited only by backlash and other mechanical defects. For this reason they are favored in precision applications such as watches. Gear trains also can have fewer separate parts (only two) and have minimal power loss, minimal wear, and long life. Gears are also often the most efficient and compact way of transmitting torque between two non-parallel axes.
On the other hand, gears are more expensive to manufacture, may require periodic lubrication, and may have greater mass and rotational inertia than the equivalent pulleys. More importantly, the distance between the axes of matched gears is limited and cannot be changed once they are manufactured. There are also applications where slippage under overload or transients (as occurs with belts, hydraulics, and friction wheels) is not only acceptable but desirable.
Ideal gear model
For basic analysis purposes, each gear can be idealized as a perfectly rigid body that, in normal operation, turns around a rotation axis that is fixed in space, without sliding along it. Thus, each point of the gear can move only along a circle that is perpendicular to its axis and centered on it. At any moment t, all points of the gear will be rotating around that axis with the same angular speed ω(t), in the same sense. The speed need not be constant over time.
The action surface of the gear consists of all points of its surface that, in normal operation, may contact the matching gear with positive pressure. All other parts of the surface are irrelevant (except that they cannot be crossed by any part of the matching gear). In a gear with N teeth, the working surface has N-fold rotational symmetry about the axis, meaning that it is congruent with itself when the gear rotates by 1/N of a turn.
If the gear is meant to transmit or receive torque with a definite sense only (clockwise or counterclockwise with respect to some reference viewpoint), the action surface consists of N separate patches, the tooth faces; these have the same shape and are positioned in the same way relative to the axis, spaced 1/N turn apart.
If the torque on each gear may have both senses, the action surface will have two sets of N tooth faces; each set will be effective only while the torque has one specific sense, and the two sets can be analyzed independently of each other. However, in this case the gear usually also has "flip over" symmetry, so that the two sets of tooth faces are congruent after the gear is flipped. This arrangement ensures that the two gears are firmly locked together, at all times, with no backlash.
During operation, each point p of each tooth face will at some moment contact a tooth face of the matching gear at some point q of one of its tooth faces. At that moment and at those points, the two faces must have the same perpendicular direction but opposite orientation. But since the two gears are rotating around different axes, the points p and q are moving along different circles; therefore, the contact cannot last more than one instant, and p will then either slide across the other face, or stop contacting it altogether.
On the other hand, at any given moment there is at least one such pair of contact points; usually more than one, even a whole line or surface of contact.
Actual gears deviate from this model in many ways: they are not perfectly rigid, their mounting does not ensure that the rotation axis will be perfectly fixed in space, the teeth may have slightly different shapes and spacing, the tooth faces are not perfectly smooth, and so on. Yet, these deviations from the ideal model can be ignored for a basic analysis of the operation of a gear set.
Relative axis position
One criterion for classifying gears is the relative position and direction of the axes or rotation of the gears that are to be meshed together.
Parallel
In the most common configuration, the axes of rotation of the two gears are parallel, and usually their sizes are such that they contact near a point between the two axes. In this configuration, the two gears turn in opposite senses.
Occasionally the axes are parallel but one gear is nested inside the other. In this configuration, both gears turn in the same sense.
If the two gears are cut by an imaginary plane perpendicular to the axes, each section of one gear will interact only with the corresponding section of the other gear. Thus the three-dimensional gear train can be understood as a stack of gears that are flat and infinitesimally thin — that is, essentially two-dimensional.
Crossed
In a crossed arrangement, the axes of rotation of the two gears are not parallel but cross at an arbitrary angle except zero or 180 degrees.
For best operation, each wheel then must be a bevel gear, whose overall shape is like a slice (frustum) of a cone whose apex is the meeting point of the two axes.
Bevel gears with equal numbers of teeth and shaft axes at 90 degrees are called miter (US) or mitre (UK) gears.
Independently of the angle between the axes, the larger of two unequal matching bevel gears may be internal or external, depending on the desired relative sense of rotation.
If the two gears are sliced by an imaginary sphere whose center is the point where the two axes cross, each section will remain on the surface of that sphere as the gear rotates, and the section of one gear will interact only with the corresponding section of the other gear. In this way, a pair of meshed 3D gears can be understood as a stack of nested infinitely thin cup-like gears.
Skew
The gears in a matching pair are said to be skew if their axes of rotation are skew lines: neither parallel nor intersecting.
In this case, the best shape for each pitch surface is neither cylindrical nor conical but a portion of a hyperboloid of revolution. Such gears are called hypoid for short. Hypoid gears are most commonly found with shafts at 90 degrees.
Contact between hypoid gear teeth may be even smoother and more gradual than with spiral bevel gear teeth, but there is also a sliding action along the meshing teeth as they rotate, so hypoid gears usually require some of the most viscous types of gear oil to avoid it being extruded from the mating tooth faces; the oil is normally designated HP (for hypoid) followed by a number denoting the viscosity. Also, the pinion can be designed with fewer teeth than a spiral bevel pinion, with the result that gear ratios of 60:1 and higher are feasible using a single set of hypoid gears. This style of gear is most common in motor vehicle drive trains, in concert with a differential. Whereas a regular (nonhypoid) ring-and-pinion gear set is suitable for many applications, it is not ideal for vehicle drive trains because it generates more noise and vibration than a hypoid does. Bringing hypoid gears to market for mass-production applications was an engineering improvement of the 1920s.
Tooth orientation
Internal and external
A gear is said to be external if its teeth are directed generally away from the rotation axis, and internal otherwise. In a pair of matching wheels, only one of them (the larger one) may be internal.
Crown
A crown gear or contrate gear is one whose teeth project at right angles to the plane of the wheel. A crown gear is also sometimes meshed with an escapement such as found in mechanical clocks.
Tooth cut direction
Gear teeth typically extend across the whole thickness of the gear. Another criterion for classifying gears is the general direction of the teeth across that dimension. This attribute is affected by the relative position and direction of the axes or rotation of the gears that are to be meshed together.
Straight
In a cylindrical spur gear or straight-cut gear, the tooth faces are straight along the direction parallel to the axis of rotation. Any imaginary cylinder with the same axis will cut the teeth along parallel straight lines.
The teeth can be either internal or external. Two spur gears mesh together correctly only if fitted to parallel shafts. No axial thrust is created by the tooth loads. Spur gears are excellent at moderate speeds but tend to be noisy at high speeds.
For arrangements with crossed non-parallel axes, the faces in a straight-cut gear are parts of a general conical surface whose generating lines (generatrices) go through the meeting point of the two axes, resulting in a bevel gear. Such gears are generally used only at low speeds or, for small gears, below 1000 rpm.
Helical
In a helical or dry fixed gear the tooth walls are not parallel to the axis of rotation, but are set at an angle. An imaginary pitch surface (cylinder, cone, or hyperboloid, depending on the relative axis positions) intersects each tooth face along an arc of a helix. Helical gears can be meshed in parallel or crossed orientations. The former refers to when the shafts are parallel to each other; this is the most common orientation. In the latter, the shafts are non-parallel, and in this configuration the gears are sometimes known as "skew gears".
The angled teeth engage more gradually than do spur gear teeth, causing them to run more smoothly and quietly. With parallel helical gears, each pair of teeth first make contact at a single point at one side of the gear wheel; a moving curve of contact then grows gradually across the tooth face to a maximum, then recedes until the teeth break contact at a single point on the opposite side. In spur gears, teeth suddenly meet at a line contact across their entire width, causing stress and noise. Spur gears make a characteristic whine at high speeds. For this reason spur gears are used in low-speed applications and in situations where noise control is not a problem, and helical gears are used in high-speed applications, large power transmission, or where noise abatement is important. The speed is considered high when the pitch line velocity exceeds 25 m/s.
A disadvantage of helical gears is a resultant thrust along the axis of the gear, which must be accommodated by appropriate thrust bearings. However, this issue can be circumvented by using a herringbone gear or double helical gear, which has no axial thrust, and which also provides self-aligning of the gears. This results in less axial thrust than a comparable spur gear.
A second disadvantage of helical gears is a greater degree of sliding friction between the meshing teeth, often addressed with additives in the lubricant.
For a "crossed" or "skew" configuration, the gears must have the same pressure angle and normal pitch; however, the helix angle and handedness can be different. The relationship between the two shafts is actually defined by the helix angle(s) of the two shafts and the handedness, as defined:
E = β1 + β2 for gears of the same handedness,
E = β1 − β2 for gears of opposite handedness,
where E is the angle between the shafts and β1, β2 are the helix angles of the two gears. The crossed configuration is less mechanically sound because there is only a point contact between the gears, whereas in the parallel configuration there is a line contact.
Quite commonly, helical gears are used with the helix angle of one having the negative of the helix angle of the other; such a pair might also be referred to as having a right-handed helix and a left-handed helix of equal angles. The two equal but opposite angles add to zero: the angle between shafts is zero—that is, the shafts are parallel. Where the sum or the difference (as described in the equations above) is not zero, the shafts are crossed. For shafts crossed at right angles, the helix angles are of the same hand because they must add to 90 degrees. (This is the case with the gears in the illustration above: they mesh correctly in the crossed configuration: for the parallel configuration, one of the helix angles should be reversed. The gears illustrated cannot mesh with the shafts parallel.)
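A small sketch of this shaft-angle relation (the helper name and the example angles are illustrative only):

def shaft_angle_deg(beta1_deg, beta2_deg, same_handedness=True):
    """Angle between the shafts of a crossed helical pair, from the two helix angles."""
    return beta1_deg + beta2_deg if same_handedness else beta1_deg - beta2_deg

print(shaft_angle_deg(45.0, 45.0, same_handedness=True))    # 90.0: shafts at right angles
print(shaft_angle_deg(30.0, 30.0, same_handedness=False))   # 0.0: parallel shafts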
3D animation of helical gears (parallel axis)
3D animation of helical gears (crossed axis)
Double helical
Double helical gears overcome the problem of axial thrust presented by single helical gears by using a double set of teeth, slanted in opposite directions. A double helical gear can be thought of as two mirrored helical gears mounted closely together on a common axle. This arrangement cancels out the net axial thrust, since each half of the gear thrusts in the opposite direction, resulting in a net axial force of zero. This arrangement can also remove the need for thrust bearings. However, double helical gears are more difficult to manufacture due to their more complicated shape.
Herringbone gears are a special type of helical gears. They do not have a groove in the middle like some other double helical gears do; the two mirrored helical gears are joined so that their teeth form a V shape. This can also be applied to bevel gears, as in the final drive of the Citroën Type A. Another type of double helical gear is a Wüst gear.
For both possible rotational directions, there exist two possible arrangements for the oppositely-oriented helical gears or gear faces. One arrangement is called stable, and the other unstable. In a stable arrangement, the helical gear faces are oriented so that each axial force is directed toward the center of the gear. In an unstable arrangement, both axial forces are directed away from the center of the gear. In either arrangement, the total (or net) axial force on each gear is zero when the gears are aligned correctly. If the gears become misaligned in the axial direction, the unstable arrangement generates a net force that may lead to disassembly of the gear train, while the stable arrangement generates a net corrective force. If the direction of rotation is reversed, the direction of the axial thrusts is also reversed, so a stable configuration becomes unstable, and vice versa.
Stable double helical gears can be directly interchanged with spur gears without any need for different bearings.
Worm
Worms resemble screws. A worm is meshed with a worm wheel, which looks similar to a spur gear.
Worm-and-gear sets are a simple and compact way to achieve a high torque, low speed gear ratio. For example, helical gears are normally limited to gear ratios of less than 10:1 while worm-and-gear sets vary from 10:1 to 500:1. A disadvantage is the potential for considerable sliding action, leading to low efficiency.
A worm gear is a species of helical gear, but its helix angle is usually somewhat large (close to 90 degrees) and its body is usually fairly long in the axial direction. These attributes give it screw like qualities. The distinction between a worm and a helical gear is that at least one tooth persists for a full rotation around the helix. If this occurs, it is a 'worm'; if not, it is a 'helical gear'. A worm may have as few as one tooth. If that tooth persists for several turns around the helix, the worm appears, superficially, to have more than one tooth, but what one in fact sees is the same tooth reappearing at intervals along the length of the worm. The usual screw nomenclature applies: a one-toothed worm is called single thread or single start; a worm with more than one tooth is called multiple thread or multiple start. The helix angle of a worm is not usually specified. Instead, the lead angle, which is equal to 90 degrees minus the helix angle, is given.
In a worm-and-gear set, the worm can always drive the gear. However, if the gear attempts to drive the worm, it may or may not succeed. Particularly if the lead angle is small, the gear's teeth may simply lock against the worm's teeth, because the force component circumferential to the worm is not sufficient to overcome friction. In traditional music boxes, however, the gear drives the worm, which has a large helix angle. This mesh drives the speed-limiter vanes which are mounted on the worm shaft.
Worm-and-gear sets that do lock are called self locking, which can be used to advantage, as when it is desired to set the position of a mechanism by turning the worm and then have the mechanism hold that position. An example is the machine head found on some types of stringed instruments.
If the gear in a worm-and-gear set is an ordinary helical gear only a single point of contact is achieved. If medium to high power transmission is desired, the tooth shape of the gear is modified to achieve more intimate contact by making both gears partially envelop each other. This is done by making both concave and joining them at a saddle point; this is called a cone-drive or "Double enveloping".
Worm gears can be right or left-handed, following the long-established practice for screw threads.
Tooth profile
Another criterion to classify gears is the tooth profile, the shape of the cross-section of a tooth face by an imaginary cut perpendicular to the pitch surface, such as the transverse, normal, or axial plane.
The tooth profile is crucial for the smoothness and uniformity of the movement of matching gears, as well as for the friction and wear.
Artisanal
The teeth of antique or artisanal gears that were cut by hand from sheet material, like those in the Antikythera mechanism, generally had simple profiles, such as triangles.
The teeth of larger gears — such as used in windmills — were usually pegs with simple shapes like cylinders, parallelepipeds, or triangular prisms inserted into a smooth wooden or metal wheel; or were holes with equally simple shapes cut into such a wheel.
Because of their sub-optimal profile, the effective gear ratio of such artisanal matching gears was not constant, but fluctuated over each tooth cycle, resulting in vibrations, noise, and accelerated wear.
Cage
A cage gear, also called a lantern gear or lantern pinion, is one of those artisanal gears; it has cylindrical rods for teeth, parallel to the axle and arranged in a circle around it, much as the bars on a round bird cage or lantern. The assembly is held together by disks at each end, into which the tooth rods and axle are set. Cage gears are more efficient than solid pinions, and dirt can fall through the rods rather than becoming trapped and increasing wear. They can be constructed with very simple tools as the teeth are not formed by cutting or milling, but rather by drilling holes and inserting rods.
Sometimes used in clocks, a cage gear should always be driven by a gearwheel, not used as the driver. The cage gear was not initially favoured by conservative clock makers. It became popular in turret clocks where dirty working conditions were most commonplace. Domestic American clock movements often used them.
Mathematical
In most modern gears, the tooth profile is usually not straight or circular, but of special form designed to achieve a constant angular velocity ratio.
There is an infinite variety of tooth profiles that will achieve this goal. In fact, given a fairly arbitrary tooth shape, it is possible to develop a tooth profile for the mating gear that will do it.
Parallel and crossed axes
However, two constant velocity tooth profiles are the most commonly used in modern times for gears with parallel or crossed axes, based on the cycloid and involute curves.
Cycloidal gears were more common until the late 1800s. Since then, the involute has largely superseded it, particularly in drive train applications. The cycloid is in some ways the more interesting and flexible shape; however the involute has two advantages: it is easier to manufacture, and it permits the center-to-center spacing of the gears to vary over some range without ruining the constancy of the velocity ratio. Cycloidal gears only work properly if the center spacing is exactly right. Cycloidal gears are still commonly used in mechanical clocks.
Skew axes
For non-parallel axes with non-straight tooth cuts, the best tooth profile is one of several spiral bevel gear shapes. These include Gleason types (circular arc with non-constant tooth depth), Oerlikon and Curvex types (circular arc with constant tooth depth), Klingelnberg Cyclo-Palloid (Epicycloid with constant tooth depth) or Klingelnberg Palloid.
The tooth faces in these gear types are not involute cylinders or cones but patches of octoidal surfaces. Manufacturing such tooth faces may require a 5-axis milling machine.
Spiral bevel gears have the same advantages and disadvantages relative to their straight-cut cousins as helical gears do to spur gears, such as lower noise and vibration. Bevel gears calculated in a simplified way, on the basis of an equivalent cylindrical gear in normal section with an involute tooth form, show a deviant tooth form with tooth strength reduced by 10–28% without offset and by 45% with offset.
Special gear trains
Rack and pinion
A rack is a toothed bar or rod that can be thought of as a sector gear with an infinitely large radius of curvature. Torque can be converted to linear force by meshing a rack with a round gear called a pinion: the pinion turns, while the rack moves in a straight line. Such a mechanism is used in the steering of automobiles to convert the rotation of the steering wheel into the left-to-right motion of the tie rod(s) that are attached to the front wheels.
Racks also feature in the theory of gear geometry, where, for instance, the tooth shape of an interchangeable set of gears may be specified for the rack (infinite radius), and the tooth shapes for gears of particular actual radii are then derived from that. The rack and pinion gear type is also used in a rack railway.
Epicyclic gear train
In epicyclic gearing, one or more of the gear axes moves. Examples are sun and planet gearing (see below), cycloidal drive, automatic transmissions, and mechanical differentials.
Sun and planet
Sun and planet gearing is a method of converting reciprocating motion into rotary motion that was used in steam engines. James Watt used it on his early steam engines to get around the patent on the crank, but it also provided the advantage of increasing the flywheel speed so Watt could use a lighter flywheel.
In the illustration, the sun is yellow, the planet red, the reciprocating arm is blue, the flywheel is green and the driveshaft is gray.
Non-circular gears
Non-circular gears are designed for special purposes. While a regular gear is optimized to transmit torque to another engaged member with minimum noise and wear and maximum efficiency, a non-circular gear's main objective might be ratio variations, axle displacement oscillations and more. Common applications include textile machines, potentiometers and continuously variable transmissions.
Non-rigid gears
Most gears are ideally rigid bodies which transmit torque and movement through the lever principle and contact forces between the teeth. Namely, the torque applied to one gear causes it to rotate as a rigid body, so that its teeth push against those of the matched gear, which in turn rotates as a rigid body transmitting the torque to its axle. Some specialized gears escape this pattern, however.
Harmonic gear
A harmonic gear or strain wave gear is a specialized gearing mechanism often used in industrial motion control, robotics and aerospace for its advantages over traditional gearing systems, including lack of backlash, compactness and high gear ratios.
Though the diagram does not demonstrate the correct configuration, it is a "timing gear," conventionally with far more teeth than a traditional gear to ensure a higher degree of precision.
Magnetic gear
In a magnetic gear pair there is no contact between the two members; the torque is instead transmitted through magnetic fields. The cogs of each gear are permanent magnets with periodic alternation of opposite magnetic poles on mating surfaces. Gear components are mounted with a backlash capability similar to other mechanical gearings. Although they cannot exert as much force as a traditional gear due to limits on magnetic field strength, such gears work without touching and so are immune to wear, have very low noise and minimal power losses from friction, and can slip without damage, making them very reliable. They can be used in configurations that are not possible for gears that must be physically touching and can operate with a non-metallic barrier completely separating the driving force from the load. The magnetic coupling can transmit force into a hermetically sealed enclosure without using a radial shaft seal, which may leak. Magnetic gears are also used in brushless motors along with electromagnets to make the motor spin.
Nomenclature
General
Rotational frequency, n Measured in rotation over time, such as revolutions per minute (RPM or rpm).
Angular frequency, ω Measured in radians per second. 1 RPM = 2π rad/minute = π/30 rad/second.
Number of teeth, N How many teeth a gear has, an integer. In the case of worms, it is the number of thread starts that the worm has.
Gear, wheel The larger of two interacting gears or a gear on its own.
Pinion The smaller of two interacting gears.
Path of contact Path followed by the point of contact between two meshing gear teeth.
Line of action, pressure line Line along which the force between two meshing gear teeth is directed. It has the same direction as the force vector. In general, the line of action changes from moment to moment during the period of engagement of a pair of teeth. For involute gears, however, the tooth-to-tooth force is always directed along the same line—that is, the line of action is constant. This implies that for involute gears the path of contact is also a straight line, coincident with the line of action—as is indeed the case.
Axis Axis of revolution of the gear; center line of the shaft.
Pitch point Point where the line of action crosses a line joining the two gear axes.
Pitch circle, pitch line Circle centered on and perpendicular to the axis, and passing through the pitch point. A predefined diametral position on the gear where the circular tooth thickness, pressure angle and helix angles are defined.
Pitch diameter, d A predefined diametral position on the gear where the circular tooth thickness, pressure angle and helix angles are defined. The standard pitch diameter is a design dimension and cannot be measured, but is a location where other measurements are made. Its value is based on the number of teeth (N), the normal module (mn; or normal diametral pitch, Pd), and the helix angle (ψ):
d = N mn / cos ψ in metric units, or d = N / (Pd cos ψ) in imperial units.
Module or modulus, m Since it is impractical to calculate circular pitch with irrational numbers, mechanical engineers usually use a scaling factor that replaces it with a regular value instead. This is known as the module or modulus of the wheel and is simply defined as
m = p / π,
where m is the module and p the circular pitch. The units of module are customarily millimeters; an English Module is sometimes used with the units of inches. When the diametral pitch, DP, is in English units,
m = 25.4 / DP in conventional metric units.
The distance between the two axes becomes
a = m (z1 + z2) / 2,
where a is the axis distance and z1 and z2 are the numbers of cogs (teeth) of the two wheels (gears). These numbers (or at least one of them) are often chosen among primes to create an even contact between every cog of both wheels, and thereby avoid unnecessary wear and damage. Even, uniform gear wear is achieved by ensuring the tooth counts of the two meshing gears are relatively prime to each other; this occurs when their greatest common divisor (GCD) equals 1, e.g. GCD(16,25)=1. If a 1:1 gear ratio is desired, a relatively prime idler gear may be inserted between the two gears; this maintains the 1:1 ratio but reverses the gear direction. A second relatively prime idler gear could also be inserted to restore the original rotational direction while maintaining uniform wear with all four gears in this case. Mechanical engineers, at least in continental Europe, usually use the module instead of circular pitch. The module, just like the circular pitch, can be used for all types of cogs, not just involute-based straight cogs. (A short numerical sketch of these relations is given at the end of this list of general terms.)
Operating pitch diameters Diameters determined from the number of teeth and the center distance at which gears operate. Example for pinion:
Pitch surface In cylindrical gears, cylinder formed by projecting a pitch circle in the axial direction. More generally, the surface formed by the sum of all the pitch circles as one moves along the axis. For bevel gears it is a cone.
Angle of action Angle with vertex at the gear center, one leg on the point where mating teeth first make contact, the other leg on the point where they disengage.
Arc of action Segment of a pitch circle subtended by the angle of action.
Pressure angle, θ The complement of the angle between the direction that the teeth exert force on each other, and the line joining the centers of the two gears. For involute gears, the teeth always exert force along the line of action, which, for involute gears, is a straight line; and thus, for involute gears, the pressure angle is constant.
Outside diameter, Do Diameter of the gear, measured from the tops of the teeth.
Root diameter Diameter of the gear, measured at the base of the tooth.
Addendum, a Radial distance from the pitch surface to the outermost point of the tooth.
Dedendum, b Radial distance from the depth of the tooth trough to the pitch surface.
Whole depth, ht The distance from the top of the tooth to the root; it is equal to addendum plus dedendum or to working depth plus clearance.
Clearance Distance between the root circle of a gear and the addendum circle of its mate.
Working depth Depth of engagement of two gears, that is, the sum of their operating addendums.
Circular pitch, p Distance from one face of a tooth to the corresponding face of an adjacent tooth on the same gear, measured along the pitch circle.
Diametral pitch, DP
Ratio of the number of teeth to the pitch diameter. It could be measured in teeth per inch or teeth per centimeter, but conventionally has units of per inch of diameter. When the module, m, is in metric units (millimetres),
DP = 25.4 / m in imperial units (teeth per inch).
Base circle In involute gears, the tooth profile is generated by the involute of the base circle. The radius of the base circle is somewhat smaller than that of the pitch circle
Base pitch, normal pitch, pb In involute gears, distance from one face of a tooth to the corresponding face of an adjacent tooth on the same gear, measured along the base circle
Interference Contact between teeth other than at the intended parts of their surfaces
Interchangeable set A set of gears, any of which mates properly with any other
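The sketch below illustrates the module, pitch diameter, circular pitch, and centre-distance relations defined above for an unshifted spur gear pair; the tooth counts and module are arbitrary illustrative values.

import math

def spur_gear_dimensions(module_mm, teeth):
    """Pitch diameter and circular pitch for a spur gear of given module."""
    d = module_mm * teeth              # pitch diameter, d = m * N
    p = math.pi * module_mm            # circular pitch, p = pi * m
    return d, p

m = 2.0                                # module in millimetres
z1, z2 = 16, 25                        # relatively prime tooth counts (GCD = 1)
a = m * (z1 + z2) / 2                  # centre distance for an unshifted pair
print(spur_gear_dimensions(m, z1))     # (32.0, 6.283...)
print(a)                               # 41.0 mm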
Helical gear
Helix angle, ψ The angle between a tangent to the helix and the gear axis. It is zero in the limiting case of a spur gear, albeit it can be considered as the hypotenuse angle as well.
Normal circular pitch, pn Circular pitch in the plane normal to the teeth.
Transverse circular pitch, p Circular pitch in the plane of rotation of the gear. Sometimes just called "circular pitch".
Several other helix parameters can be viewed either in the normal or transverse planes. The subscript n usually indicates the normal.
Worm gear
Lead Distance from any point on a thread to the corresponding point on the next turn of the same thread, measured parallel to the axis.
Linear pitch, p Distance from any point on a thread to the corresponding point on the adjacent thread, measured parallel to the axis. For a single-thread worm, lead and linear pitch are the same.
Lead angle, λ Angle between a tangent to the helix and a plane perpendicular to the axis. Note that the complement of the helix angle is usually given for helical gears.
Pitch diameter, dw Same as described earlier in this list. Note that for a worm it is still measured in a plane perpendicular to the gear axis, not a tilted plane.
Subscript w denotes the worm, subscript g denotes the gear.
Tooth contact
Point of contact Any point at which two tooth profiles touch each other.
Line of contact A line or curve along which two tooth surfaces are tangent to each other.
Path of action The locus of successive contact points between a pair of gear teeth, during the phase of engagement. For conjugate gear teeth, the path of action passes through the pitch point. It is the trace of the surface of action in the plane of rotation.
Line of action The path of action for involute gears. It is the straight line passing through the pitch point and tangent to both base circles.
Surface of action The imaginary surface in which contact occurs between two engaging tooth surfaces. It is the summation of the paths of action in all sections of the engaging teeth.
Plane of action The surface of action for involute, parallel axis gears with either spur or helical teeth. It is tangent to the base cylinders.
Zone of action (contact zone) For involute, parallel-axis gears with either spur or helical teeth, is the rectangular area in the plane of action bounded by the length of action and the effective face width.
Path of contact The curve on either tooth surface along which theoretical single point contact occurs during the engagement of gears with crowned tooth surfaces or gears that normally engage with only single point contact.
Length of action The distance on the line of action through which the point of contact moves during the action of the tooth profile.
Arc of action, Qt The arc of the pitch circle through which a tooth profile moves from the beginning to the end of contact with a mating profile.
Arc of approach, Qa The arc of the pitch circle through which a tooth profile moves from its beginning of contact until the point of contact arrives at the pitch point.
Arc of recess, Qr The arc of the pitch circle through which a tooth profile moves from contact at the pitch point until contact ends.
Contact ratio, mc or ε The number of angular pitches through which a tooth surface rotates from the beginning to the end of contact. In a simple way, it can be defined as a measure of the average number of teeth in contact during the period during which a tooth comes and goes out of contact with the mating gear.
Transverse contact ratio, mp or εα The contact ratio in a transverse plane. It is the ratio of the angle of action to the angular pitch. For involute gears it is most directly obtained as the ratio of the length of action to the base pitch.
Face contact ratio, mF or εβ The contact ratio in an axial plane, or the ratio of the face width to the axial pitch. For bevel and hypoid gears it is the ratio of face advance to circular pitch.
Total contact ratio, mt or εγ The sum of the transverse contact ratio and the face contact ratio.
Modified contact ratio, mo For bevel gears, the square root of the sum of the squares of the transverse and face contact ratios.
Limit diameter Diameter on a gear at which the line of action intersects the maximum (or minimum for internal pinion) addendum circle of the mating gear. This is also referred to as the start of active profile, the start of contact, the end of contact, or the end of active profile.
Start of active profile (SAP) Intersection of the limit diameter and the involute profile.
Face advance Distance on a pitch circle through which a helical or spiral tooth moves from the position at which contact begins at one end of the tooth trace on the pitch surface to the position where contact ceases at the other end.
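The contact-ratio definitions above reduce to simple arithmetic. The following Python sketch is only illustrative: the numerical values are hypothetical and the function names are not taken from any gearing standard.

```python
import math

def transverse_contact_ratio(length_of_action: float, base_pitch: float) -> float:
    """Transverse contact ratio for involute gears: length of action / base pitch."""
    return length_of_action / base_pitch

def face_contact_ratio(face_width: float, axial_pitch: float) -> float:
    """Face contact ratio: face width / axial pitch."""
    return face_width / axial_pitch

def total_contact_ratio(m_p: float, m_f: float) -> float:
    """Total contact ratio: sum of the transverse and face contact ratios."""
    return m_p + m_f

def modified_contact_ratio(m_p: float, m_f: float) -> float:
    """Modified contact ratio (bevel gears): square root of the sum of the squares."""
    return math.hypot(m_p, m_f)

# Hypothetical values, lengths in millimetres
m_p = transverse_contact_ratio(length_of_action=18.2, base_pitch=11.8)
m_f = face_contact_ratio(face_width=30.0, axial_pitch=25.1)
print(round(m_p, 2), round(m_f, 2))
print(round(total_contact_ratio(m_p, m_f), 2), round(modified_contact_ratio(m_p, m_f), 2))
```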
Tooth thickness
Circular thickness Length of arc between the two sides of a gear tooth, on the specified datum circle.
Transverse circular thickness Circular thickness in the transverse plane.
Normal circular thickness Circular thickness in the normal plane. In a helical gear it may be considered as the length of arc along a normal helix.
Axial thickness In helical gears and worms, tooth thickness in an axial cross section at the standard pitch diameter.
Base circular thickness In involute teeth, length of arc on the base circle between the two involute curves forming the profile of a tooth.
Normal chordal thickness Length of the chord that subtends a circular thickness arc in the plane normal to the pitch helix. Any convenient measuring diameter may be selected, not necessarily the standard pitch diameter.
Chordal addendum (chordal height) Height from the top of the tooth to the chord subtending the circular thickness arc. Any convenient measuring diameter may be selected, not necessarily the standard pitch diameter.
Profile shift Displacement of the basic rack datum line from the reference cylinder, made non-dimensional by dividing by the normal module. It is used to specify the tooth thickness, often for zero backlash.
Rack shift Displacement of the tool datum line from the reference cylinder, made non-dimensional by dividing by the normal module. It is used to specify the tooth thickness.
Measurement over pins Measurement of the distance taken over a pin positioned in a tooth space and a reference surface. The reference surface may be the reference axis of the gear, a datum surface or either one or two pins positioned in the tooth space or spaces opposite the first. This measurement is used to determine tooth thickness.
Span measurement Measurement of the distance across several teeth in a normal plane. As long as the measuring device has parallel measuring surfaces that contact on an unmodified portion of the involute, the measurement is along a line tangent to the base cylinder. It is used to determine tooth thickness.
Modified addendum teeth Teeth of engaging gears, one or both of which have non-standard addendum.
Full-depth teeth Teeth in which the working depth equals 2.000 divided by the normal diametral pitch.
Stub teeth Teeth in which the working depth is less than 2.000 divided by the normal diametral pitch.
Equal addendum teeth Teeth in which two engaging gears have equal addendums.
Long and short-addendum teeth Teeth in which the addendums of two engaging gears are unequal.
Undercut An undercut is a condition in generated gear teeth when any part of the fillet curve lies inside of a line drawn tangent to the working profile at its point of juncture with the fillet. Undercut may be deliberately introduced to facilitate finishing operations. With undercut the fillet curve intersects the working profile. Without undercut the fillet curve and the working profile have a common tangent.
Root fillet or fillet curve The concave portion of the tooth profile where it joins the bottom of the tooth space.
Pitch
Pitch is the distance between a point on one tooth and the corresponding point on an adjacent tooth. It is a dimension measured along a line or curve in the transverse, normal, or axial directions. The use of the single word pitch without qualification may be ambiguous, and for this reason it is preferable to use specific designations such as transverse circular pitch, normal base pitch, axial pitch.
Circular pitch, p Arc distance along a specified pitch circle or pitch line between corresponding profiles of adjacent teeth.
Transverse circular pitch, pt Circular pitch in the transverse plane.
Normal circular pitch, pn, pe Circular pitch in the normal plane, and also the length of the arc along the normal pitch helix between helical teeth or threads.
Axial pitch, px Linear pitch in an axial plane and in a pitch surface. In helical gears and worms, axial pitch has the same value at all diameters. In gearing of other types, axial pitch may be confined to the pitch surface and may be a circular measurement. The term axial pitch is preferred to the term linear pitch. The axial pitch of a helical worm and the circular pitch of its worm gear are the same.
Normal base pitch, pN, pbn In an involute helical gear, the base pitch in the normal plane. It is the normal distance between parallel helical involute surfaces on the plane of action in the normal plane, or the length of arc on the normal base helix. It is a constant distance in any helical involute gear.
Transverse base pitch, pb, pbt In an involute gear, the pitch on the base circle or along the line of action. Corresponding sides of involute gear teeth are parallel curves, and the base pitch is the constant and fundamental distance between them along a common normal in a transverse plane.
Diametral pitch (transverse), Pd Ratio of the number of teeth to the standard pitch diameter in inches.
Normal diametral pitch, Pnd Value of diametral pitch in a normal plane of a helical gear or worm.
Angular pitch, θN, τ Angle subtended by the circular pitch, usually expressed in radians: τ = 2π/z radians (equivalently 360°/z degrees), where z is the number of teeth.
Backlash
Backlash is the error in motion that occurs when gears change direction. It exists because there is always some gap between the trailing face of the driving tooth and the leading face of the tooth behind it on the driven gear, and that gap must be closed before force can be transferred in the new direction. The term "backlash" can also be used to refer to the size of the gap, not just the phenomenon it causes; thus, one could speak of a pair of gears as having, for example, "0.1 mm of backlash." A pair of gears could be designed to have zero backlash, but this would presuppose perfection in manufacturing, uniform thermal expansion characteristics throughout the system, and no lubricant. Therefore, gear pairs are designed to have some backlash. It is usually provided by reducing the tooth thickness of each gear by half the desired gap distance. In the case of a large gear and a small pinion, however, the backlash is usually taken entirely off the gear and the pinion is given full sized teeth. Backlash can also be provided by moving the gears further apart. The backlash of a gear train equals the sum of the backlash of each pair of gears, so in long trains backlash can become a problem.
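As a minimal sketch of the two rules of thumb just described (splitting the designed gap between the two members of a pair, or assigning it entirely to the gear when the pinion is small, and summing backlash over the meshes of a train), the following Python fragment uses hypothetical values in millimetres.

```python
def thinning_per_member(desired_backlash: float, all_on_gear: bool = False):
    """Tooth-thickness reduction for each member of a gear pair.

    By default the gap is split equally (half per gear); with a small pinion
    the full reduction is often taken on the gear and none on the pinion.
    Returns (gear reduction, pinion reduction).
    """
    if all_on_gear:
        return desired_backlash, 0.0
    return desired_backlash / 2, desired_backlash / 2

def train_backlash(pair_backlashes):
    """Backlash of a gear train: the sum of the backlash of each meshing pair."""
    return sum(pair_backlashes)

# A hypothetical train with three meshes, each designed for 0.1 mm of backlash
print(thinning_per_member(0.10))           # (0.05, 0.05)
print(thinning_per_member(0.10, True))     # (0.1, 0.0)
print(round(train_backlash([0.10, 0.10, 0.10]), 3))   # 0.3 mm total
```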
For situations that require precision, such as instrumentation and control, backlash can be minimized through one of several techniques. For instance, the gear can be split along a plane perpendicular to the axis, one half fixed to the shaft in the usual manner, the other half placed alongside it, free to rotate about the shaft, but with springs between the two halves providing relative torque between them, so that one achieves, in effect, a single gear with expanding teeth. Another method involves tapering the teeth in the axial direction and letting the gear slide in the axial direction to take up slack.
Standard pitches and the module system
Although gears can be made with any pitch, for convenience and interchangeability standard pitches are frequently used. Pitch is a property associated with linear dimensions and so differs depending on whether the standard values are in the imperial (inch) or metric system. Using inch measurements, standard diametral pitch values with units of "per inch" are chosen; the diametral pitch is the number of teeth on a gear of one inch pitch diameter. Common standard values for spur gears are 3, 4, 5, 6, 8, 10, 12, 16, 20, 24, 32, 48, 64, 72, 80, 96, 100, 120, and 200. Certain standard pitches such as and in inch measurements, which mesh with linear rack, are actually (linear) circular pitch values with units of "inches".
When gear dimensions are in the metric system the pitch specification is generally in terms of module or modulus, which is effectively a length measurement across the pitch diameter. The term module is understood to mean the pitch diameter in millimetres divided by the number of teeth. When the module is based upon inch measurements, it is known as the English module to avoid confusion with the metric module. Module is a direct dimension ("millimeters per tooth"), unlike diametral pitch, which is an inverse dimension ("teeth per inch"). Thus, if the pitch diameter of a gear is 40 mm and the number of teeth 20, the module is 2, which means that there are 2 mm of pitch diameter for each tooth. The preferred standard module values are 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 1.0, 1.25, 1.5, 2.0, 2.5, 3, 4, 5, 6, 8, 10, 12, 16, 20, 25, 32, 40 and 50.
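The module arithmetic above, including the 40 mm, 20-tooth example, can be written out as a short Python sketch. The helper names are illustrative only; the conversion m = 25.4 / Pd simply restates that module is a direct dimension while diametral pitch is an inverse one.

```python
import math

def module_mm(pitch_diameter_mm: float, teeth: int) -> float:
    """Metric module: pitch diameter in millimetres divided by the tooth count."""
    return pitch_diameter_mm / teeth

def diametral_pitch(teeth: int, pitch_diameter_in: float) -> float:
    """Diametral pitch: teeth per inch of pitch diameter (an inverse dimension)."""
    return teeth / pitch_diameter_in

def module_from_diametral_pitch(p_d: float) -> float:
    """Convert diametral pitch (per inch) to module (mm per tooth)."""
    return 25.4 / p_d

def circular_pitch_mm(m: float) -> float:
    """Circular pitch along the pitch circle: p = pi * m."""
    return math.pi * m

print(module_mm(40, 20))                          # 2.0 (the example from the text)
print(round(module_from_diametral_pitch(8), 3))   # 3.175
print(round(circular_pitch_mm(2.0), 3))           # 6.283
```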
Gear model in modern physics
Modern physics adopted the gear model in different ways. In the nineteenth century, James Clerk Maxwell developed a model of electromagnetism in which magnetic field lines were rotating tubes of incompressible fluid. Maxwell used a gear wheel and called it an "idle wheel" to explain the electric current as a rotation of particles in opposite directions to that of the rotating field lines.
More recently, quantum physics uses "quantum gears" in its models. A group of gears can serve as a model for several different systems, such as an artificially constructed nanomechanical device or a group of ring molecules.
The three wave hypothesis compares the wave–particle duality to a bevel gear.
Gear mechanism in natural world
The gear mechanism was previously considered exclusively artificial, but as early as 1957 gears had been recognized in the hind legs of various species of planthopper, and in 2013 scientists from the University of Cambridge characterized their functional significance by doing high-speed photography of the nymphs of Issus coleoptratus. These gears are found only in the nymph forms of all planthoppers, and are lost during the final molt to the adult stage. In I. coleoptratus, each leg has a 400-micrometer strip of teeth, pitch radius 200 micrometers, with 10 to 12 fully interlocking spur-type gear teeth, including filleted curves at the base of each tooth to reduce the risk of shearing. The joint rotates like mechanical gears, and synchronizes Issus's hind legs when it jumps to within 30 microseconds, preventing yaw rotation. The gears are not connected all the time. One is located on each of the juvenile insect's hind legs, and when it prepares to jump, the two sets of teeth lock together. As a result, the legs move in almost perfect unison, giving the insect more power as the gears rotate to their stopping point and then unlock.
See also
Gear box
Sprocket
Differential
Superposition principle
Kinematic chain
References
Bibliography
Industrial Press (2012), Machinery's Handbook (29th ed.).
Further reading
Kravchenko A.I., Bovda A.M. Gear with magnetic couple. Pat. of Ukraine N. 56700 – Bul. N. 2, 2011 – F16H 49/00.
Sclater, Neil. (2011). "Gears: devices, drives and mechanisms." Mechanisms and Mechanical Devices Sourcebook. 5th ed. New York: McGraw Hill. pp. 131–174. Drawings and designs of various gearings.
"Wheels That Can't Slip." Popular Science, February 1945, pp. 120–125.
External links
Geararium. Museum of gears and toothed wheels - antique and vintage gears, sprockets, ratchets and other gear-related objects.
Kinematic Models for Design Digital Library (KMODDL) - movies and photos of hundreds of working models at Cornell University
Short historical account on the application of analytical geometry to the form of gear teeth
Mathematical Tutorial for Gearing (Relating to Robotics)
American Gear Manufacturers Association
Gear Technology, the Journal of Gear Manufacturing
Tribology
Articles containing video clips | Gear | ["Chemistry", "Materials_science", "Engineering"] | 12,011 | ["Tribology", "Mechanical engineering", "Materials science", "Surface science"]
83,060 | https://en.wikipedia.org/wiki/Magnetometer | A magnetometer is a device that measures magnetic field or magnetic dipole moment. Different types of magnetometers measure the direction, strength, or relative change of a magnetic field at a particular location. A compass is one such device, one that measures the direction of an ambient magnetic field, in this case, the Earth's magnetic field. Other magnetometers measure the magnetic dipole moment of a magnetic material such as a ferromagnet, for example by recording the effect of this magnetic dipole on the induced current in a coil.
The first magnetometer capable of measuring the absolute magnetic intensity at a point in space was invented by Carl Friedrich Gauss in 1833 and notable developments in the 19th century included the Hall effect, which is still widely used.
Magnetometers are widely used for measuring the Earth's magnetic field, in geophysical surveys, to detect magnetic anomalies of various types, and to determine the dipole moment of magnetic materials. In an aircraft's attitude and heading reference system, they are commonly used as a heading reference. Magnetometers are also used by the military as a triggering mechanism in magnetic mines to detect submarines. Consequently, some countries, such as the United States, Canada and Australia, classify the more sensitive magnetometers as military technology, and control their distribution.
Magnetometers can be used as metal detectors: they can detect only magnetic (ferrous) metals, but can detect such metals at a much greater distance than conventional metal detectors, which rely on conductivity. Magnetometers are capable of detecting large objects, such as cars, at over , while a conventional metal detector's range is rarely more than .
In recent years, magnetometers have been miniaturized to the extent that they can be incorporated in integrated circuits at very low cost and are finding increasing use as miniaturized compasses (MEMS magnetic field sensor).
Introduction
Magnetic fields
Magnetic fields are vector quantities characterized by both strength and direction. The strength of a magnetic field is measured in units of tesla in the SI units, and in gauss in the cgs system of units. 10,000 gauss are equal to one tesla. Measurements of the Earth's magnetic field are often quoted in units of nanotesla (nT), also called a gamma. The Earth's magnetic field can vary from 20,000 to 80,000 nT depending on location; fluctuations in the Earth's magnetic field are on the order of 100 nT, and magnetic field variations due to magnetic anomalies can be in the picotesla (pT) range. Gaussmeters and teslameters are magnetometers that measure in units of gauss or tesla, respectively. In some contexts, magnetometer is the term used for an instrument that measures fields of less than 1 millitesla (mT) and gaussmeter is used for those measuring greater than 1 mT.
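A minimal Python sketch of the unit relationships quoted above (10,000 gauss to the tesla, 10^9 nanotesla to the tesla); the function names are only illustrative.

```python
def gauss_to_tesla(b_gauss: float) -> float:
    """10,000 gauss equal one tesla."""
    return b_gauss / 1e4

def tesla_to_nanotesla(b_tesla: float) -> float:
    """One tesla is 1e9 nanotesla; a nanotesla is also called a gamma."""
    return b_tesla * 1e9

# The Earth's field of roughly 0.5 gauss expressed in SI units
b = gauss_to_tesla(0.5)
print(b, tesla_to_nanotesla(b))   # 5e-05 (tesla), 50000.0 (nT)
```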
Types of magnetometer
There are two basic types of magnetometer measurement. Vector magnetometers measure the vector components of a magnetic field. Total field magnetometers or scalar magnetometers measure the magnitude of the vector magnetic field. Magnetometers used to study the Earth's magnetic field may express the vector components of the field in terms of declination (the angle between the horizontal component of the field vector and true, or geographic, north) and the inclination (the angle between the field vector and the horizontal surface).
Absolute magnetometers measure the absolute magnitude or vector magnetic field, using an internal calibration or known physical constants of the magnetic sensor. Relative magnetometers measure magnitude or vector magnetic field relative to a fixed but uncalibrated baseline. Also called variometers, relative magnetometers are used to measure variations in magnetic field.
Magnetometers may also be classified by their situation or intended use. Stationary magnetometers are installed to a fixed position and measurements are taken while the magnetometer is stationary. Portable or mobile magnetometers are meant to be used while in motion and may be manually carried or transported in a moving vehicle. Laboratory magnetometers are used to measure the magnetic field of materials placed within them and are typically stationary. Survey magnetometers are used to measure magnetic fields in geomagnetic surveys; they may be fixed base stations, as in the INTERMAGNET network, or mobile magnetometers used to scan a geographic region. An early adoption (in the 1950s) of airborne magnetometry by Inco prompted the discovery of Thompson, Manitoba.
Performance and capabilities
The performance and capabilities of magnetometers are described through their technical specifications. Major specifications include
Sample rate is the number of readings given per second. The inverse is the cycle time in seconds per reading. Sample rate is important in mobile magnetometers; the sample rate and the vehicle speed determine the distance between measurements (see the short sketch after this list).
Bandwidth or bandpass characterizes how well a magnetometer tracks rapid changes in magnetic field. For magnetometers with no onboard signal processing, bandwidth is determined by the Nyquist limit set by sample rate. Modern magnetometers may perform smoothing or averaging over sequential samples, achieving a lower noise in exchange for lower bandwidth.
Resolution is the smallest change in a magnetic field the magnetometer can resolve. A magnetometer should have a resolution a good deal smaller than the smallest change one wishes to observe. This includes quantization error which is caused by recording roundoff and truncation of digital expressions of the data.
Absolute error is the difference between the readings of a magnetometer and the true magnetic field.
Drift is the change in absolute error over time.
Thermal stability is the dependence of the measurement on temperature. It is given as a temperature coefficient in units of nT per degree Celsius.
Noise is the random fluctuations generated by the magnetometer sensor or electronics. Noise is typically given as a spectral density, in units such as nT/√Hz, so the total noise depends on the measurement bandwidth.
Sensitivity is the larger of the noise or the resolution.
Heading error is the change in the measurement due to a change in orientation of the instrument in a constant magnetic field.
The dead zone is the angular region of magnetometer orientation in which the instrument produces poor or no measurements. All optically pumped, proton free-precession, and Overhauser magnetometers experience some dead zone effects.
Gradient tolerance is the ability of a magnetometer to obtain a reliable measurement in the presence of a magnetic field gradient. In surveys of unexploded ordnance or landfills, gradients can be large.
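A minimal sketch of the sample-rate and bandwidth relationships mentioned in the list above; the survey speed and sample rate used here are hypothetical.

```python
def measurement_spacing_m(vehicle_speed_m_s: float, sample_rate_hz: float) -> float:
    """Distance between readings for a mobile magnetometer: speed divided by sample rate."""
    return vehicle_speed_m_s / sample_rate_hz

def nyquist_bandwidth_hz(sample_rate_hz: float) -> float:
    """Upper bandwidth limit set by the sample rate for data with no onboard processing."""
    return sample_rate_hz / 2

# Hypothetical airborne survey: 60 m/s aircraft with a 10 Hz magnetometer
print(measurement_spacing_m(60, 10))   # 6.0 m between readings
print(nyquist_bandwidth_hz(10))        # 5.0 Hz
```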
Early magnetometers
The compass, consisting of a magnetized needle whose orientation changes in response to the ambient magnetic field, is a simple type of magnetometer, one that measures the direction of the field. The oscillation frequency of a magnetized needle is proportional to the square-root of the strength of the ambient magnetic field; so, for example, the oscillation frequency of the needle of a horizontally situated compass is proportional to the square-root of the horizontal intensity of the ambient field.
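Since the oscillation frequency scales with the square root of the field, the ratio of two field strengths follows from the squared ratio of the measured frequencies. A small hypothetical example:

```python
def field_ratio_from_frequencies(f1_hz: float, f2_hz: float) -> float:
    """Ratio B2/B1 of horizontal field strengths inferred from needle
    oscillation frequencies, using f proportional to sqrt(B)."""
    return (f2_hz / f1_hz) ** 2

# Hypothetical: the needle oscillates 1.2 times faster at the second site,
# so the horizontal field there is about 1.44 times stronger.
print(field_ratio_from_frequencies(0.50, 0.60))   # ~1.44
```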
In 1833, Carl Friedrich Gauss, head of the Geomagnetic Observatory in Göttingen, published a paper on measurement of the Earth's magnetic field. It described a new instrument that consisted of a permanent bar magnet suspended horizontally from a gold fibre. The difference in the oscillations when the bar was magnetised and when it was demagnetised allowed Gauss to calculate an absolute value for the strength of the Earth's magnetic field.
The gauss, the CGS unit of magnetic flux density, was named in his honour; it is defined as one maxwell per square centimeter and equals 10⁻⁴ tesla (the SI unit).
Francis Ronalds and Charles Brooke independently invented magnetographs in 1846 that continuously recorded the magnet's movements using photography, thus easing the load on observers. They were quickly utilised by Edward Sabine and others in a global magnetic survey and updated machines were in use well into the 20th century.
Laboratory magnetometers
Laboratory magnetometers measure the magnetization, also known as the magnetic moment of a sample material. Unlike survey magnetometers, laboratory magnetometers require the sample to be placed inside the magnetometer, and often the temperature, magnetic field, and other parameters of the sample can be controlled. A sample's magnetization, is primarily dependent on the ordering of unpaired electrons within its atoms, with smaller contributions from nuclear magnetic moments, Larmor diamagnetism, among others. Ordering of magnetic moments are primarily classified as diamagnetic, paramagnetic, ferromagnetic, or antiferromagnetic (although the zoology of magnetic ordering also includes ferrimagnetic, helimagnetic, toroidal, spin glass, etc.). Measuring the magnetization as a function of temperature and magnetic field can give clues as to the type of magnetic ordering, as well as any phase transitions between different types of magnetic orders that occur at critical temperatures or magnetic fields. This type of magnetometry measurement is very important to understand the magnetic properties of materials in physics, chemistry, geophysics and geology, as well as sometimes biology.
SQUID (superconducting quantum interference device)
SQUIDs are a type of magnetometer used both as survey and as laboratory magnetometers. SQUID magnetometry is an extremely sensitive absolute magnetometry technique. However, SQUIDs are sensitive to noise, making them impractical as laboratory magnetometers in high DC magnetic fields and in pulsed magnets. Commercial SQUID magnetometers are available for sample temperatures between 300 mK and 400 K, and magnetic fields up to 7 tesla.
Inductive pickup coils
Inductive pickup coils (also referred as inductive sensor) measure the magnetic dipole moment of a material by detecting the current induced in a coil due to the changing magnetic moment of the sample. The sample's magnetization can be changed by applying a small ac magnetic field (or a rapidly changing dc field), as occurs in capacitor-driven pulsed magnets. These measurements require differentiating between the magnetic field produced by the sample and that from the external applied field. Often a special arrangement of cancellation coils is used. For example, half of the pickup coil is wound in one direction, and the other half in the other direction, and the sample is placed in only one half. The external uniform magnetic field is detected by both halves of the coil, and since they are counter-wound, the external magnetic field produces no net signal.
VSM (vibrating-sample magnetometer)
Vibrating-sample magnetometers (VSMs) detect the dipole moment of a sample by mechanically vibrating the sample inside of an inductive pickup coil or inside of a SQUID coil. Induced current or changing flux in the coil is measured. The vibration is typically created by a motor or a piezoelectric actuator. Typically the VSM technique is about an order of magnitude less sensitive than SQUID magnetometry. VSMs can be combined with SQUIDs to create a system that is more sensitive than either one alone. Heat due to the sample vibration can limit the base temperature of a VSM, typically to 2 kelvin. VSM is also impractical for measuring a fragile sample that is sensitive to rapid acceleration.
Pulsed-field extraction magnetometry
Pulsed-field extraction magnetometry is another method making use of pickup coils to measure magnetization. Unlike VSMs where the sample is physically vibrated, in pulsed-field extraction magnetometry, the sample is secured and the external magnetic field is changed rapidly, for example in a capacitor-driven magnet. One of multiple techniques must then be used to cancel out the external field from the field produced by the sample. These include counterwound coils that cancel the external uniform field and background measurements with the sample removed from the coil.
Torque magnetometry
Magnetic torque magnetometry can be even more sensitive than SQUID magnetometry. However, magnetic torque magnetometry doesn't measure magnetism directly as all the previously mentioned methods do. Magnetic torque magnetometry instead measures the torque τ acting on a sample's magnetic moment μ as a result of a uniform magnetic field B, τ = μ × B. A torque is thus a measure of the sample's magnetic or shape anisotropy. In some cases the sample's magnetization can be extracted from the measured torque. In other cases, the magnetic torque measurement is used to detect magnetic phase transitions or quantum oscillations. The most common way to measure magnetic torque is to mount the sample on a cantilever and measure the displacement via capacitance measurement between the cantilever and nearby fixed object, or by measuring the piezoelectricity of the cantilever, or by optical interferometry off the surface of the cantilever.
Faraday force magnetometry
Faraday force magnetometry uses the fact that a spatial magnetic field gradient produces force that acts on a magnetized object, F = (M⋅∇)B. In Faraday force magnetometry the force on the sample can be measured by a scale (hanging the sample from a sensitive balance), or by detecting the displacement against a spring. Commonly a capacitive load cell or cantilever is used because of its sensitivity, size, and lack of mechanical parts. Faraday force magnetometry is approximately one order of magnitude less sensitive than a SQUID. The biggest drawback to Faraday force magnetometry is that it requires some means of not only producing a magnetic field, but also producing a magnetic field gradient. While this can be accomplished by using a set of special pole faces, a much better result can be achieved by using set of gradient coils. A major advantage to Faraday force magnetometry is that it is small and reasonably tolerant to noise, and thus can be implemented in a wide range of environments, including a dilution refrigerator. Faraday force magnetometry can also be complicated by the presence of torque (see previous technique). This can be circumvented by varying the gradient field independently of the applied DC field so the torque and the Faraday force contribution can be separated, and/or by designing a Faraday force magnetometer that prevents the sample from being rotated.
Optical magnetometry
Optical magnetometry makes use of various optical techniques to measure magnetization. One such technique, Kerr magnetometry makes use of the magneto-optic Kerr effect, or MOKE. In this technique, incident light is directed at the sample's surface. Light interacts with a magnetized surface nonlinearly so the reflected light has an elliptical polarization, which is then measured by a detector. Another method of optical magnetometry is Faraday rotation magnetometry. Faraday rotation magnetometry utilizes nonlinear magneto-optical rotation to measure a sample's magnetization. In this method a Faraday modulating thin film is applied to the sample to be measured and a series of images are taken with a camera that senses the polarization of the reflected light. To reduce noise, multiple pictures are then averaged together. One advantage to this method is that it allows mapping of the magnetic characteristics over the surface of a sample. This can be especially useful when studying such things as the Meissner effect on superconductors. Microfabricated optically pumped magnetometers (μOPMs) can be used to detect the origin of brain seizures more precisely and generate less heat than currently available superconducting quantum interference devices, better known as SQUIDs. The device works by using polarized light to control the spin of rubidium atoms which can be used to measure and monitor the magnetic field.
Survey magnetometers
Survey magnetometers can be divided into two basic types:
Scalar magnetometers measure the total strength of the magnetic field to which they are subjected, but not its direction
Vector magnetometers have the capability to measure the component of the magnetic field in a particular direction, relative to the spatial orientation of the device.
A vector is a mathematical entity with both magnitude and direction. The Earth's magnetic field at a given point is a vector. A magnetic compass is designed to give a horizontal bearing direction, whereas a vector magnetometer measures both the magnitude and direction of the total magnetic field. Three orthogonal sensors are required to measure the components of the magnetic field in all three dimensions.
They are also rated as "absolute" if the strength of the field can be calibrated from their own known internal constants or "relative" if they need to be calibrated by reference to a known field.
A magnetograph is a magnetometer that continuously records data over time. This data is typically represented in magnetograms.
Magnetometers can also be classified as "AC" if they measure fields that vary relatively rapidly in time (>100 Hz), and "DC" if they measure fields that vary only slowly (quasi-static) or are static. AC magnetometers find use in electromagnetic systems (such as magnetotellurics), and DC magnetometers are used for detecting mineralisation and corresponding geological structures.
Scalar magnetometers
Proton precession magnetometer
Proton precession magnetometers, also known as proton magnetometers, PPMs or simply mags, measure the resonance frequency of protons (hydrogen nuclei) in the magnetic field to be measured, due to nuclear magnetic resonance (NMR). Because the precession frequency depends only on atomic constants and the strength of the ambient magnetic field, the accuracy of this type of magnetometer can reach 1 ppm.
A direct current flowing in a solenoid creates a strong magnetic field around a hydrogen-rich fluid (kerosene and decane are popular, and even water can be used), causing some of the protons to align themselves with that field. The current is then interrupted, and as protons realign themselves with the ambient magnetic field, they precess at a frequency that is directly proportional to the magnetic field. This produces a weak rotating magnetic field that is picked up by a (sometimes separate) inductor, amplified electronically, and fed to a digital frequency counter whose output is typically scaled and displayed directly as field strength or output as digital data.
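The frequency-to-field conversion described above uses the proton gyromagnetic ratio, roughly 0.042577 Hz per nanotesla (42.577 MHz/T). A minimal Python sketch, with hypothetical readings:

```python
# Proton gyromagnetic ratio divided by 2*pi, approximately 0.042577 Hz per nanotesla
GAMMA_P_HZ_PER_NT = 0.042577

def field_from_precession_frequency(freq_hz: float) -> float:
    """Ambient field strength in nanotesla from the measured precession frequency."""
    return freq_hz / GAMMA_P_HZ_PER_NT

def precession_frequency(field_nt: float) -> float:
    """Expected proton precession frequency for a given field strength."""
    return field_nt * GAMMA_P_HZ_PER_NT

# In a typical Earth field of about 50,000 nT the protons precess near 2.1 kHz
print(round(precession_frequency(50_000), 1))           # ~2128.9 Hz
print(round(field_from_precession_frequency(2128.85)))  # ~50000 nT
```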
For hand/backpack carried units, PPM sample rates are typically limited to less than one sample per second. Measurements are typically taken with the sensor held at fixed locations at approximately 10 metre increments.
Portable instruments are also limited by sensor volume (weight) and power consumption. PPMs work in field gradients up to 3,000 nT/m, which is adequate for most mineral exploration work. For higher gradient tolerance, such as mapping banded iron formations and detecting large ferrous objects, Overhauser magnetometers can handle 10,000 nT/m, and caesium magnetometers can handle 30,000 nT/m.
They are relatively inexpensive (< US$8,000) and were once widely used in mineral exploration. Three manufacturers dominate the market: GEM Systems, Geometrics and Scintrex. Popular models include G-856/857, Smartmag, GSM-18, and GSM-19T.
For mineral exploration, they have been superseded by Overhauser, caesium, and potassium instruments, all of which are fast-cycling, and do not require the operator to pause between readings.
Overhauser effect magnetometer
The Overhauser effect magnetometer or Overhauser magnetometer uses the same fundamental effect as the proton precession magnetometer to take measurements. By adding free radicals to the measurement fluid, the nuclear Overhauser effect can be exploited to significantly improve upon the proton precession magnetometer. Rather than aligning the protons using a solenoid, a low power radio-frequency field is used to align (polarise) the electron spin of the free radicals, which then couples to the protons via the Overhauser effect. This has two main advantages: driving the RF field takes a fraction of the energy (allowing lighter-weight batteries for portable units), and faster sampling as the electron-proton coupling can happen even as measurements are being taken. An Overhauser magnetometer produces readings with a 0.01 nT to 0.02 nT standard deviation while sampling once per second.
Caesium vapour magnetometer
The optically pumped caesium vapour magnetometer is a highly sensitive (300 fT/√Hz) and accurate device used in a wide range of applications. It is one of a number of alkali vapours (including rubidium and potassium) that are used in this way.
The device broadly consists of a photon emitter, such as a laser, an absorption chamber containing caesium vapour mixed with a "buffer gas" through which the emitted photons pass, and a photon detector, arranged in that order. The buffer gas is usually helium or nitrogen, and it is used to reduce collisions between the caesium vapour atoms.
The basic principle that allows the device to operate is the fact that a caesium atom can exist in any of nine energy levels, which can be informally thought of as the placement of electron atomic orbitals around the atomic nucleus. When a caesium atom within the chamber encounters a photon from the laser, it is excited to a higher energy state, emits a photon and falls to an indeterminate lower energy state. The caesium atom is "sensitive" to the photons from the laser in three of its nine energy states, and therefore, assuming a closed system, all the atoms eventually fall into a state in which all the photons from the laser pass through unhindered and are measured by the photon detector. The caesium vapour has become transparent. This process happens continuously to maintain as many of the electrons as possible in that state.
At this point, the sample (or population) is said to have been optically pumped and ready for measurement to take place. When an external field is applied it disrupts this state and causes atoms to move to different states which makes the vapour less transparent. The photo detector can measure this change and therefore measure the magnitude of the magnetic field.
In the most common type of caesium magnetometer, a very small AC magnetic field is applied to the cell. Since the difference in the energy levels of the electrons is determined by the external magnetic field, there is a frequency at which this small AC field makes the electrons change states. In this new state, the electrons once again can absorb a photon of light. This causes a signal on a photo detector that measures the light passing through the cell. The associated electronics use this fact to create a signal exactly at the frequency that corresponds to the external field.
Another type of caesium magnetometer modulates the light applied to the cell. This is referred to as a Bell-Bloom magnetometer, after the two scientists who first investigated the effect. If the light is turned on and off at the frequency corresponding to the Earth's field, there is a change in the signal seen at the photo detector. Again, the associated electronics use this to create a signal exactly at the frequency that corresponds to the external field. Both methods lead to high performance magnetometers.
Potassium vapour magnetometer
The potassium vapour magnetometer is the only optically pumped magnetometer that operates on a single, narrow electron spin resonance (ESR) line, in contrast to other alkali vapour magnetometers, which use irregular, composite and wide spectral lines, and to helium magnetometers, which use an inherently wide spectral line.
Metastable helium-4 scalar magnetometer
Magnetometers based on helium-4 excited to its metastable triplet state by a plasma discharge were developed in the 1960s and 1970s by Texas Instruments, then by its spinoff Polatomic, and from the late 1980s by CEA-Leti. The latter pioneered a configuration which cancels the dead zones, which are a recurrent problem of atomic magnetometers. This configuration was demonstrated to show an accuracy of 50 pT in orbit operation. The ESA chose this technology for the Swarm mission, which was launched in 2013. An experimental vector mode, which could compete with fluxgate magnetometers, was tested in this mission with overall success.
Applications
The caesium and potassium magnetometers are typically used where a higher performance magnetometer than the proton magnetometer is needed. In archaeology and geophysics, where the sensor sweeps through an area and many accurate magnetic field measurements are often needed, caesium and potassium magnetometers have advantages over the proton magnetometer.
The caesium and potassium magnetometer's faster measurement rate allows the sensor to be moved through the area more quickly for a given number of data points. Caesium and potassium magnetometers are insensitive to rotation of the sensor while the measurement is being made.
The lower noise of caesium and potassium magnetometers allow those measurements to more accurately show the variations in the field with position.
Vector magnetometers
Vector magnetometers measure one or more components of the magnetic field electronically. Using three orthogonal magnetometers, both azimuth and dip (inclination) can be measured. By taking the square root of the sum of the squares of the components, the total magnetic field strength (also called total magnetic intensity, TMI) can be calculated by the Pythagorean theorem.
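A minimal Python sketch of these relationships, assuming a north-east-down component convention; the readings are hypothetical.

```python
import math

def total_field(b_north: float, b_east: float, b_down: float) -> float:
    """Total magnetic intensity from three orthogonal components (Pythagorean theorem)."""
    return math.sqrt(b_north**2 + b_east**2 + b_down**2)

def declination_deg(b_north: float, b_east: float) -> float:
    """Angle between the horizontal field component and geographic north."""
    return math.degrees(math.atan2(b_east, b_north))

def inclination_deg(b_north: float, b_east: float, b_down: float) -> float:
    """Dip angle between the field vector and the horizontal plane."""
    return math.degrees(math.atan2(b_down, math.hypot(b_north, b_east)))

# Hypothetical mid-latitude reading in nanotesla (north, east, down)
bn, be, bd = 20_000.0, -1_500.0, 45_000.0
print(round(total_field(bn, be, bd)))         # ~49267 nT
print(round(declination_deg(bn, be), 1))      # ~-4.3 degrees (west of north)
print(round(inclination_deg(bn, be, bd), 1))  # ~66.0 degrees
```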
Vector magnetometers are subject to temperature drift and the dimensional instability of the ferrite cores. They also require leveling to obtain component information, unlike total field (scalar) instruments. For these reasons they are no longer used for mineral exploration.
Rotating coil magnetometer
The magnetic field induces a sine wave in a rotating coil. The amplitude of the signal is proportional to the strength of the field, provided it is uniform, and to the sine of the angle between the rotation axis of the coil and the field lines. This type of magnetometer is obsolete.
Hall effect magnetometer
The most common magnetic sensing devices are solid-state Hall effect sensors. These sensors produce a voltage proportional to the applied magnetic field and also sense polarity. They are used in applications where the magnetic field strength is relatively large, such as in anti-lock braking systems in cars, which sense wheel rotation speed via slots in the wheel disks.
Magnetoresistive devices
These are made of thin strips of Permalloy, a high magnetic permeability, nickel-iron alloy, whose electrical resistance varies with a change in magnetic field. They have a well-defined axis of sensitivity, can be produced in 3-D versions and can be mass-produced as an integrated circuit. They have a response time of less than 1 microsecond and can be sampled in moving vehicles up to 1,000 times/second. They can be used in compasses that read within 1°, for which the underlying sensor must reliably resolve 0.1°.
Fluxgate magnetometer
A fluxgate magnetometer consists of a small magnetically susceptible core wrapped by two coils of wire. An alternating electric current is passed through one coil, driving the core through an alternating cycle of magnetic saturation; i.e., magnetised, unmagnetised, inversely magnetised, unmagnetised, magnetised, and so forth. This constantly changing field induces a voltage in the second coil which is measured by a detector. In a magnetically neutral background, the input and output signals match. However, when the core is exposed to a background field, it is more easily saturated in alignment with that field and less easily saturated in opposition to it. Hence the alternating magnetic field and the induced output voltage, are out of step with the input current. The extent to which this is the case depends on the strength of the background magnetic field. Often, the signal in the output coil is integrated, yielding an output analog voltage proportional to the magnetic field.
The fluxgate magnetometer was invented by H. Aschenbrenner and G. Goubau in 1936. A team at Gulf Research Laboratories led by Victor Vacquier developed airborne fluxgate magnetometers to detect submarines during World War II and after the war confirmed the theory of plate tectonics by using them to measure shifts in the magnetic patterns on the sea floor.
A wide variety of sensors are currently available and used to measure magnetic fields. Fluxgate compasses and gradiometers measure the direction and magnitude of magnetic fields. Fluxgates are affordable, rugged and compact, with miniaturization recently advancing to the point of complete sensor solutions in the form of IC chips, including examples from both academia and industry. This, plus their typically low power consumption, makes them ideal for a variety of sensing applications. Gradiometers are commonly used for archaeological prospecting and unexploded ordnance (UXO) detection, for example with the German military's popular Foerster. Utility location specialists also use gradiometers for locating underground utilities such as pipeline valves, septic tanks, and manhole covers.
The typical fluxgate magnetometer consists of a "sense" (secondary) coil surrounding an inner "drive" (primary) coil that is closely wound around a highly permeable core material, such as mu-metal or permalloy. An alternating current is applied to the drive winding, which drives the core in a continuous repeating cycle of saturation and unsaturation. To an external field, the core is alternately weakly permeable and highly permeable. The core is often a toroidally wrapped ring or a pair of linear elements whose drive windings are each wound in opposing directions. Such closed flux paths minimise coupling between the drive and sense windings. In the presence of an external magnetic field, with the core in a highly permeable state, such a field is locally attracted or gated (hence the name fluxgate) through the sense winding. When the core is weakly permeable, the external field is less attracted. This continuous gating of the external field in and out of the sense winding induces a signal in the sense winding, whose principal frequency is twice that of the drive frequency, and whose strength and phase orientation vary directly with the external-field magnitude and polarity.
There are additional factors that affect the size of the resultant signal. These factors include the number of turns in the sense winding, magnetic permeability of the core, sensor geometry, and the gated flux rate of change with respect to time.
Phase synchronous detection is used to extract these harmonic signals from the sense winding and convert them into a DC voltage proportional to the external magnetic field. Active current feedback may also be employed, such that the sense winding is driven to counteract the external field. In such cases, the feedback current varies linearly with the external magnetic field and is used as the basis for measurement. This helps to counter inherent non-linearity between the applied external field strength and the flux gated through the sense winding.
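The second-harmonic, phase-synchronous detection described above can be illustrated with a toy numerical model. This is only a sketch: the sense-coil waveform is simulated with an assumed proportionality between the external field and the second-harmonic amplitude, and the multiply-and-average step stands in for a real lock-in amplifier's demodulation and low-pass filtering.

```python
import numpy as np

FS = 100_000          # sample rate, Hz
F_DRIVE = 1_000       # drive frequency, Hz
t = np.arange(0, 0.1, 1 / FS)

def sense_coil_signal(b_external: float, rng=np.random.default_rng(0)):
    """Simulated sense-winding output: a second-harmonic component whose amplitude
    tracks the external field, plus drive feedthrough and measurement noise."""
    second_harmonic = b_external * np.sin(2 * np.pi * 2 * F_DRIVE * t)
    feedthrough = 0.5 * np.sin(2 * np.pi * F_DRIVE * t)
    noise = 0.05 * rng.standard_normal(t.size)
    return second_harmonic + feedthrough + noise

def lock_in_at_second_harmonic(signal: np.ndarray) -> float:
    """Multiply by a reference at 2 * F_DRIVE and average (a crude low-pass filter);
    the result is a DC value proportional to the external field."""
    reference = np.sin(2 * np.pi * 2 * F_DRIVE * t)
    return 2 * float(np.mean(signal * reference))

# The recovered DC value tracks the applied external field
for b in (0.0, 0.3, 1.0):
    print(b, round(lock_in_at_second_harmonic(sense_coil_signal(b)), 3))
```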
SQUID magnetometer
SQUIDs, or superconducting quantum interference devices, measure extremely small changes in magnetic fields. They are very sensitive vector magnetometers, with noise levels as low as 3 fT/√Hz in commercial instruments and 0.4 fT/√Hz in experimental devices. Many liquid-helium-cooled commercial SQUIDs achieve a flat noise spectrum from near DC (less than 1 Hz) to tens of kilohertz, making such devices ideal for time-domain biomagnetic signal measurements. SERF atomic magnetometers demonstrated in laboratories so far reach a competitive noise floor, but only in relatively small frequency ranges.
SQUID magnetometers require cooling with liquid helium (4.2 K) or liquid nitrogen (77 K) to operate, hence the packaging requirements to use them are rather stringent both from a thermal-mechanical as well as magnetic standpoint. SQUID magnetometers are most commonly used to measure the magnetic fields produced by laboratory samples, and also for brain or heart activity (magnetoencephalography and magnetocardiography, respectively). Geophysical surveys use SQUIDs from time to time, but the logistics of cooling the SQUID are much more complicated than for other magnetometers that operate at room temperature.
Zero-field optically-pumped magnetometers
Magnetometers based on atomic gases can perform vector measurements of the magnetic field in the low field regime, where the decay of the atomic coherence becomes faster than the Larmor frequency. The physics of such magnetometers is based on the Hanle effect. Such zero-field optically pumped magnetometers have been tested in various configurations and with different atomic species, notably alkali metals (potassium, rubidium and caesium), helium and mercury. For the alkalis, the coherence times were greatly limited due to spin-exchange relaxation. A major breakthrough happened at the beginning of the 2000s, when the Romalis group at Princeton demonstrated that in such a low field regime, alkali coherence times can be greatly enhanced if a high enough density can be reached by high-temperature heating; this is the so-called SERF effect.
The main interest of optically-pumped magnetometers is to replace SQUID magnetometers in applications where cryogenic cooling is a drawback. This is notably the case of medical imaging where such cooling imposes a thick thermal insulation, strongly affecting the amplitude of the recorded biomagnetic signals. Several startup companies are currently developing optically pumped magnetometers for biomedical applications: those of TwinLeaf, quSpin and FieldLine being based on alkali vapors, and those of Mag4Health on metastable helium-4.
Spin-exchange relaxation-free (SERF) atomic magnetometers
At sufficiently high atomic density, extremely high sensitivity can be achieved. Spin-exchange-relaxation-free (SERF) atomic magnetometers containing potassium, caesium, or rubidium vapor operate similarly to the caesium magnetometers described above, yet can reach sensitivities lower than 1 fT/√Hz. The SERF magnetometers only operate in small magnetic fields. The Earth's field is about 50 μT; SERF magnetometers operate in fields less than 0.5 μT.
Large volume detectors have achieved a sensitivity of 200 aT/√Hz. This technology has greater sensitivity per unit volume than SQUID detectors. The technology can also produce very small magnetometers that may in the future replace coils for detecting radio-frequency magnetic fields. This technology may produce a magnetic sensor that has all of its input and output signals in the form of light on fiber-optic cables. This lets the magnetic measurement be made near high electrical voltages.
Calibration of magnetometers
The calibration of magnetometers is usually performed by means of coils which are supplied with an electrical current to create a magnetic field. This allows the sensitivity of the magnetometer to be characterized (in terms of V/T). In many applications the homogeneity of the calibration coil is an important feature. For this reason, coils like Helmholtz coils are commonly used, either in a single-axis or a three-axis configuration. For demanding applications a highly homogeneous magnetic field is mandatory; in such cases magnetic field calibration can be performed using a Maxwell coil, cosine coils, or calibration in the highly homogeneous Earth's magnetic field.
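A minimal sketch of such a calibration, assuming a single-axis sensor placed in a Helmholtz coil with a known coil constant; the coil constant, sensitivity, offset and noise level below are all hypothetical, and the slope of a least-squares line fit gives the sensitivity in volts per tesla.

```python
import numpy as np

COIL_CONSTANT_T_PER_A = 5e-5                 # assumed field produced per ampere of coil current
currents_a = np.linspace(-0.2, 0.2, 9)
applied_fields_t = COIL_CONSTANT_T_PER_A * currents_a

# Simulated sensor readings: 40,000 V/T sensitivity, 2 mV offset, a little noise
rng = np.random.default_rng(1)
readings_v = 40_000 * applied_fields_t + 0.002 + 1e-4 * rng.standard_normal(currents_a.size)

# Least-squares line fit: the slope is the sensitivity in volts per tesla
sensitivity_v_per_t, offset_v = np.polyfit(applied_fields_t, readings_v, 1)
print(round(float(sensitivity_v_per_t)), round(float(offset_v), 4))
```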
Uses
Magnetometers have a very diverse range of applications, including locating objects such as submarines, sunken ships, hazards affecting tunnel boring machines, coal mine hazards, unexploded ordnance, toxic waste drums, as well as a wide range of mineral deposits and geological structures. They also have applications in heart beat monitors, concealed weapons detection, military weapon systems positioning, sensors in anti-locking brakes, weather prediction (via solar cycles), steel pylons, drill guidance systems, archaeology, plate tectonics, radio wave propagation, and planetary exploration. Laboratory magnetometers determine the magnetic dipole moment of a magnetic sample, typically as a function of temperature, magnetic field, or other parameter. This helps to reveal its magnetic properties such as ferromagnetism, antiferromagnetism, superconductivity, or other properties that affect magnetism.
Depending on the application, magnetometers can be deployed in spacecraft, aeroplanes (fixed wing magnetometers), helicopters (stinger and bird), on the ground (backpack), towed at a distance behind quad bikes (ATVs) on a (sled or trailer), lowered into boreholes (tool, probe, or sonde), or towed behind boats (tow fish).
Mechanical stress measurement
Magnetometers are used to measure or monitor mechanical stress in ferromagnetic materials. Mechanical stress improves the alignment of magnetic domains at the microscopic scale, which raises the magnetic field measured close to the material by magnetometers. There are different hypotheses about the stress-magnetisation relationship. However, the effect of mechanical stress on the magnetic field measured near the specimen is claimed to be proven in many scientific publications. There have been efforts to solve the inverse problem of magnetisation-stress resolution in order to quantify the stress based on the measured magnetic field.
Accelerator physics
Magnetometers are used extensively in experimental particle physics to measure the magnetic field of pivotal components such as the concentration or focusing beam magnets.
Archaeology
Magnetometers are also used to detect archaeological sites, shipwrecks, and other buried or submerged objects. Fluxgate gradiometers are popular due to their compact configuration and relatively low cost. Gradiometers enhance shallow features and negate the need for a base station. Caesium and Overhauser magnetometers are also very effective when used as gradiometers or as single-sensor systems with base stations.
The TV program Time Team popularised 'geophys', including magnetic techniques used in archaeological work to detect fire hearths, walls of baked bricks and magnetic stones such as basalt and granite. Walking tracks and roadways can sometimes be mapped with differential compaction in magnetic soils or with disturbances in clays, such as on the Great Hungarian Plain. Ploughed fields behave as sources of magnetic noise in such surveys.
Auroras
Magnetometers can give an indication of auroral activity before the light from the aurora becomes visible. A grid of magnetometers around the world constantly measures the effect of the solar wind on the Earth's magnetic field, which is then published as the K-index.
Coal exploration
While magnetometers can be used to help map basin shape at a regional scale, they are more commonly used to map hazards to coal mining, such as basaltic intrusions (dykes, sills, and volcanic plugs) that destroy resources and are dangerous to longwall mining equipment. Magnetometers can also locate zones ignited by lightning and map siderite (an impurity in coal).
The best survey results are achieved on the ground in high-resolution surveys (with approximately 10 m line spacing and 0.5 m station spacing). Bore-hole magnetometers using a Ferret can also assist when coal seams are deep, by using multiple sills or looking beneath surface basalt flows.
Modern surveys generally use magnetometers with GPS technology to automatically record the magnetic field and their location. The data set is then corrected with data from a second magnetometer (the base station) that is left stationary and records the change in the Earth's magnetic field during the survey.
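A minimal sketch of the base-station correction just described: the time variation recorded at the stationary magnetometer is subtracted from rover readings taken at the same instants. The readings and reference value are hypothetical, and real processing pipelines are more elaborate.

```python
import numpy as np

def diurnal_correction(rover_nt, base_nt, base_reference_nt=None):
    """Remove the time variation recorded at a stationary base station from
    rover readings taken at the same instants:

        corrected = rover - (base - base_reference)
    """
    rover_nt = np.asarray(rover_nt, dtype=float)
    base_nt = np.asarray(base_nt, dtype=float)
    if base_reference_nt is None:
        base_reference_nt = base_nt.mean()
    return rover_nt - (base_nt - base_reference_nt)

# Hypothetical synchronous readings in nanotesla
rover = [50_120.0, 50_135.0, 50_090.0]
base = [50_000.0, 50_012.0, 49_995.0]
print(diurnal_correction(rover, base, base_reference_nt=50_000.0))
# -> [50120. 50123. 50095.]  (the drift seen at the base station is removed)
```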
Directional drilling
Magnetometers are used in directional drilling for oil or gas to detect the azimuth of the drilling tools near the drill. They are most often paired with accelerometers in drilling tools so that both the inclination and azimuth of the drill can be found.
Military
For defensive purposes, navies use arrays of magnetometers laid across sea floors in strategic locations (i.e. around ports) to monitor submarine activity. The Russian Alfa-class titanium submarines were designed and built at great expense to thwart such systems (as pure titanium is non-magnetic).
Military submarines are degaussed—by passing through large underwater loops at regular intervals—to help them escape detection by sea-floor monitoring systems, magnetic anomaly detectors, and magnetically-triggered mines. However, submarines are never completely de-magnetised. It is possible to tell the depth at which a submarine has been by measuring its magnetic field, which is distorted as the pressure distorts the hull and hence the field. Heating can also change the magnetization of steel.
Submarines tow long sonar arrays to detect ships, and can even recognise different propeller noises. The sonar arrays need to be accurately positioned so they can triangulate direction to targets (e.g. ships). The arrays do not tow in a straight line, so fluxgate magnetometers are used to orient each sonar node in the array.
Fluxgates can also be used in weapons navigation systems, but have been largely superseded by GPS and ring laser gyroscopes.
Magnetometers such as the German Foerster are used to locate ferrous ordnance. Caesium and Overhauser magnetometers are used to locate and help clean up old bombing and test ranges.
UAV payloads also include magnetometers for a range of defensive and offensive tasks.
Mineral exploration
Magnetometric surveys can be useful in defining magnetic anomalies which represent ore (direct detection), or in some cases gangue minerals associated with ore deposits (indirect or inferential detection). This includes iron ore, magnetite, hematite, and often pyrrhotite.
Developed countries such as Australia, Canada and the USA invest heavily in systematic airborne magnetic surveys of their respective continents and surrounding oceans, to assist with mapping geology and in the discovery of mineral deposits. Such aeromag surveys are typically undertaken with 400 m line spacing at 100 m elevation, with readings every 10 metres or more. To overcome the asymmetry in the data density, data is interpolated between lines (usually 5 times) and data along the line is then averaged. Such data is gridded to an 80 m × 80 m pixel size and image processed using a program like ERMapper. At an exploration lease scale, the survey may be followed by a more detailed helimag or crop duster style fixed wing survey at 50 m line spacing and 50 m elevation (terrain permitting). Such an image is gridded on a 10 m × 10 m pixel, offering 64 times the resolution.
Where targets are shallow (<200 m), aeromag anomalies may be followed up with ground magnetic surveys on 10 m to 50 m line spacing with 1 m station spacing to provide the best detail (2 to 10 m pixel grid) (or 25 times the resolution prior to drilling).
Magnetic fields from magnetic bodies of ore fall off with the inverse distance cubed (dipole target), or at best inverse distance squared (magnetic monopole target). One analogy to the resolution-with-distance is a car driving at night with lights on. At a distance of 400 m one sees one glowing haze, but as it approaches, two headlights, and then the left blinker, are visible.
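A minimal sketch of the fall-off behaviour described above, using a hypothetical anomaly amplitude; the exponent is 3 for a dipole-like source and 2 for a monopole-like source.

```python
def anomaly_scaling(amplitude: float, r1: float, r2: float, exponent: int = 3) -> float:
    """Scale an anomaly amplitude measured at distance r1 to distance r2,
    assuming a power-law fall-off (3 for a dipole, 2 for a monopole)."""
    return amplitude * (r1 / r2) ** exponent

# Hypothetical 100 nT dipole anomaly measured 50 m from the source
print(anomaly_scaling(100.0, r1=50, r2=100))               # 12.5 nT at twice the distance
print(anomaly_scaling(100.0, r1=50, r2=100, exponent=2))   # 25.0 nT for a monopole-like source
```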
There are many challenges interpreting magnetic data for mineral exploration. Multiple targets mix together like multiple heat sources and, unlike light, there is no magnetic telescope to focus fields. The combination of multiple sources is measured at the surface. The geometry, depth, or magnetisation direction (remanence) of the targets are also generally not known, and so multiple models can explain the data.
Potent by Geophysical Software Solutions is a leading magnetic (and gravity) interpretation package used extensively in the Australian exploration industry.
Magnetometers assist mineral explorers both directly (i.e., gold mineralisation associated with magnetite, diamonds in kimberlite pipes) and, more commonly, indirectly, such as by mapping geological structures conducive to mineralisation (i.e., shear zones and alteration haloes around granites).
Airborne Magnetometers detect the change in the Earth's magnetic field using sensors attached to the aircraft in the form of a "stinger" or by towing a magnetometer on the end of a cable. The magnetometer on a cable is often referred to as a "bomb" because of its shape. Others call it a "bird".
Because hills and valleys under the aircraft make the magnetic readings rise and fall, a radar altimeter keeps track of the transducer's deviation from the nominal altitude above ground. There may also be a camera that takes photos of the ground. The location of each measurement is determined by also recording GPS data.
Mobile phones
Many smartphones contain miniaturized microelectromechanical systems (MEMS) magnetometers, which measure magnetic field strength and serve as compasses. The iPhone 3GS has a magnetometer, a magnetoresistive permalloy sensor, the AN-203 produced by Honeywell. In 2009, the price of three-axis magnetometers dipped below US$1 per device and continued to drop rapidly. A three-axis device is insensitive to the orientation or elevation in which it is held. Hall effect devices are also popular.
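As a minimal sketch of the compass use mentioned above (assuming a level device, a particular axis convention, and no calibration, tilt compensation or declination correction, all of which real phone software applies):

```python
import math

def compass_heading_deg(mx, my):
    """Heading in degrees clockwise from magnetic north for a level device.
    Assumed convention: x = forward axis, y = right-hand axis, z = down.
    Real devices also need hard/soft-iron calibration, tilt compensation
    (from an accelerometer) and a declination correction for true north."""
    return math.degrees(math.atan2(-my, mx)) % 360.0

print(compass_heading_deg(20.0, 0.0))    # 0.0   facing magnetic north
print(compass_heading_deg(0.0, -20.0))   # 90.0  facing magnetic east
```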
Researchers at Deutsche Telekom have used magnetometers embedded in mobile devices to permit touchless 3D interaction. Their interaction framework, called MagiTact, tracks changes to the magnetic field around a cellphone to identify different gestures made by a hand holding or wearing a magnet.
Oil exploration
Seismic methods are preferred to magnetometers as the primary survey method for oil exploration although magnetic methods can give additional information about the underlying geology and in some environments evidence of leakage from traps. Magnetometers are also used in oil exploration to show locations of geologic features that make drilling impractical, and other features that give geophysicists a more complete picture of stratigraphy.
Spacecraft
A three-axis fluxgate magnetometer was part of the Mariner 2 and Mariner 10 missions. A dual-technique magnetometer is part of the Cassini–Huygens mission to explore Saturn. This system is composed of a vector helium magnetometer and a fluxgate magnetometer. Magnetometers were also a component instrument on the MESSENGER mission to Mercury. A magnetometer can also be used by satellites like GOES to measure both the magnitude and direction of the magnetic field of a planet or moon.
Magnetic surveys
Systematic surveys can be used in searching for mineral deposits or locating lost objects. Such surveys are divided into:
Aeromagnetic survey
Borehole
Ground
Marine
Aeromag datasets for Australia can be downloaded from the GADDS database.
Data can be divided in point located and image data, the latter of which is in ERMapper format.
Magnetovision
From the spatially measured distribution of magnetic field parameters (e.g. amplitude or direction), magnetovision images may be generated. Such a presentation of magnetic data is very useful for further analysis and data fusion.
Gradiometer
Magnetic gradiometers are pairs of magnetometers with their sensors separated, usually horizontally, by a fixed distance. The readings are subtracted to measure the difference between the sensed magnetic fields, which gives the field gradients caused by magnetic anomalies. This is one way of compensating both for the variability in time of the Earth's magnetic field and for other sources of electromagnetic interference, thus allowing for more sensitive detection of anomalies. Because nearly equal values are being subtracted, the noise performance requirements for the magnetometers are more demanding.
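A minimal sketch of the differencing idea, with invented readings and a 1 m sensor separation:

```python
def gradient_nT_per_m(reading_a_nT, reading_b_nT, separation_m):
    """Approximate field gradient between two sensors a fixed distance apart."""
    return (reading_a_nT - reading_b_nT) / separation_m

# Example: a small anomaly seen mostly by the lower sensor of a 1 m vertical pair;
# the ~50,000 nT background field and its slow diurnal drift cancel in the difference.
lower, upper = 50002.0, 50000.4
print(gradient_nT_per_m(lower, upper, 1.0))   # 1.6 nT/m
```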
Gradiometers enhance shallow magnetic anomalies and are thus good for archaeological and site investigation work. They are also good for real-time work such as unexploded ordnance (UXO) location. It is twice as efficient to run a base station and use two (or more) mobile sensors to read parallel lines simultaneously (assuming data is stored and post-processed). In this manner, both along-line and cross-line gradients can be calculated.
Position control of magnetic surveys
In traditional mineral exploration and archaeological work, grid pegs placed by theodolite and tape measure were used to define the survey area. Some UXO surveys used ropes to define the lanes. Airborne surveys used radio triangulation beacons, such as Siledus.
Non-magnetic electronic hipchain triggers were developed to trigger magnetometers. They used rotary shaft encoders to measure distance along disposable cotton reels.
Modern explorers use a range of low-magnetic signature GPS units, including real-time kinematic GPS.
Heading errors in magnetic surveys
Magnetic surveys can suffer from noise coming from a range of sources. Different magnetometer technologies suffer different kinds of noise problems.
Heading errors are one group of noise. They can come from three sources:
Sensor
Console
Operator
Some total field sensors give different readings depending on their orientation. Magnetic materials in the sensor itself are the primary cause of this error. In some magnetometers, such as the vapor magnetometers (caesium, potassium, etc.), there are sources of heading error in the physics that contribute small amounts to the total heading error.
Console noise comes from magnetic components on or within the console. These include ferrite cores in inductors and transformers, steel frames around LCDs, legs on IC chips and steel cases in disposable batteries. Some popular MIL-spec connectors also have steel springs.
Operators must take care to be magnetically clean and should check the 'magnetic hygiene' of all apparel and items carried during a survey. Akubra hats are very popular in Australia, but their steel rims must be removed before use on magnetic surveys. Steel rings on notepads, steel capped boots and steel springs in overall eyelets can all cause unnecessary noise in surveys. Pens, mobile phones and stainless steel implants can also be problematic.
The magnetic response (noise) from ferrous objects on the operator and console can change with heading direction because of induction and remanence. Aeromagnetic survey aircraft and quad bike systems can use special compensators to correct for heading error noise.
Heading errors look like herringbone patterns in survey images. Alternate lines can also be corrugated.
Image processing of magnetic data
Recording data for later image processing is superior to real-time work because subtle anomalies often missed by the operator (especially in magnetically noisy areas) can be correlated between lines, and shapes and clusters are better defined. A range of sophisticated enhancement techniques can also be used. A hard-copy record is also produced, and systematic coverage is ensured.
Aircraft navigation
The Magnetometer Navigation (MAGNAV) algorithm initially ran as a flight experiment in 2004. Later, diamond magnetometers were developed by the United States Air Force Research Laboratory (AFRL) as a better method of navigation that cannot be jammed by an adversary.
See also
References
Further reading
External links
Earthquake forecasting techniques and more research on the study of electromagnetic fields
USGS Geomagnetism Program
Earth's Field NMR (EFNMR)
Space-based magnetometers
Practical guidelines for building a magnetometer by hobbyists – Part 1 Introduction
Practical guidelines for building a magnetometer by hobbyists – Part 2 Building
Articles containing video clips
Magnetic devices
Measuring instruments
Nuclear magnetic resonance
Sensors | Magnetometer | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 10,473 | [
"Nuclear magnetic resonance",
"Measuring instruments",
"Magnetometers",
"Nuclear physics",
"Sensors"
] |
83,124 | https://en.wikipedia.org/wiki/Black%20dwarf | A black dwarf is a theoretical stellar remnant, specifically a white dwarf that has cooled sufficiently to no longer emit significant heat or light. Because the time required for a white dwarf to reach this state is calculated to be longer than the current age of the universe (13.8 billion years), no black dwarfs are expected to exist in the universe at the present time. The temperature of the coolest white dwarfs is one observational limit on the universe's age.
The name "black dwarf" has also been applied to hypothetical late-stage cooled brown dwarfs substellar objects with insufficient mass (less than approximately 0.07 ) to maintain hydrogen-burning nuclear fusion.
Formation
A white dwarf is what remains of a main sequence star of low or medium mass (below approximately 9 to 10 solar masses) after it has either expelled or fused all the elements for which it has sufficient temperature to fuse. What is left is then a dense sphere of electron-degenerate matter that cools slowly by thermal radiation, eventually becoming a black dwarf.
If black dwarfs were to exist, they would be challenging to detect because, by definition, they would emit very little radiation. They would, however, be detectable through their gravitational influence. Various white dwarfs cooled below (equivalent to M0 spectral class) were found in 2012 by astronomers using MDM Observatory's 2.4 meter telescope. They are estimated to be 11 to 12 billion years old.
Because the far-future evolution of stars depends on physical questions which are poorly understood, such as the nature of dark matter and the possibility and rate of proton decay (which is yet to be proven to exist), it is not known precisely how long it would take white dwarfs to cool to blackness. Barrow and Tipler estimate that it would take 10^15 years for a white dwarf to cool to ; however, if weakly interacting massive particles (WIMPs) exist, interactions with these particles may keep some white dwarfs much warmer than this for approximately 10^25 years. If protons are not stable, white dwarfs will also be kept warm by energy released from proton decay. For a hypothetical proton lifetime of 10^37 years, Adams and Laughlin calculate that proton decay will raise the effective surface temperature of an old one-solar-mass white dwarf to approximately . Although cold, this is thought to be hotter than the cosmic microwave background radiation temperature 10^37 years in the future.
It is speculated that some massive black dwarfs may eventually produce supernova explosions. These will occur if pycnonuclear (density-based) fusion processes much of the star to nickel-56, which decays into iron via emitting a positron. This would lower the Chandrasekhar limit for some black dwarfs below their actual mass. If this point is reached, it would then collapse and initiate runaway nuclear fusion. The most massive to explode would be just below the Chandrasekhar limit at around 1.41 solar masses and would take of the order of , while the least massive to explode would be about 1.16 solar masses and would take of the order , totaling around 1% of all black dwarfs. One major caveat is that proton decay would decrease the mass of a black dwarf far more rapidly than pycnonuclear processes occur, preventing any supernova explosions.
Future of the Sun
Once the Sun stops fusing helium in its core and ejects its outer layers in a planetary nebula in about 8 billion years, it will become a white dwarf and, over trillions of years, will eventually no longer emit any light. After that, the Sun will not be visible to the equivalent of the naked human eye, removing it from optical view even if its gravitational effects are evident. The estimated time for the Sun to cool enough to become a black dwarf is at least 10^15 (1 quadrillion) years, though it could take much longer than this if weakly interacting massive particles (WIMPs) exist, as described above. The described phenomena are considered a promising method of verification for the existence of WIMPs and black dwarfs.
See also
References
Stellar evolution
Hypothetical stars | Black dwarf | [
"Physics"
] | 834 | [
"Astrophysics",
"Stellar evolution"
] |
83,137 | https://en.wikipedia.org/wiki/Software-defined%20radio | Software-defined radio (SDR) is a radio communication system where components that conventionally have been implemented in analog hardware (e.g. mixers, filters, amplifiers, modulators/demodulators, detectors, etc.) are instead implemented by means of software on a computer or embedded system. While the concept of SDR is not new, the rapidly evolving capabilities of digital electronics render practical many processes which were once only theoretically possible.
A basic SDR system may consist of a computer equipped with a sound card, or other analog-to-digital converter, preceded by some form of RF front end. Significant amounts of signal processing are handed over to the general-purpose processor, rather than being done in special-purpose hardware (electronic circuits). Such a design produces a radio which can receive and transmit widely different radio protocols (sometimes referred to as waveforms) based solely on the software used.
Software radios have significant utility for the military and cell phone services, both of which must serve a wide variety of changing radio protocols in real time. In the long term, software-defined radios are expected by proponents like the Wireless Innovation Forum to become the dominant technology in radio communications. SDRs, along with software defined antennas are the enablers of cognitive radio.
Operating principles
Superheterodyne receivers use a VFO (variable-frequency oscillator), mixer, and filter to tune the desired signal to a common IF (intermediate frequency) or baseband. Typically in SDR, this signal is then sampled by the analog-to-digital converter. However, in some applications it is not necessary to tune the signal to an intermediate frequency and the radio frequency signal is directly sampled by the analog-to-digital converter (after amplification).
Real analog-to-digital converters lack the dynamic range to pick up sub-microvolt, nanowatt-power radio signals produced by an antenna. Therefore, a low-noise amplifier must precede the conversion step and this device introduces its own problems. For example, if spurious signals are present (which is typical), these compete with the desired signals within the amplifier's dynamic range. They may introduce distortion in the desired signals, or may block them completely. The standard solution is to put band-pass filters between the antenna and the amplifier, but these reduce the radio's flexibility. Real software radios often have two or three analog channel filters with different bandwidths that are switched in and out.
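For a sense of scale, the standard rule of thumb for the quantisation-limited SNR of an ideal N-bit converter (a textbook formula, not something stated above) is about 6.02·N + 1.76 dB for a full-scale sine wave:

```python
def ideal_adc_snr_db(bits):
    """Quantisation-limited SNR of an ideal N-bit ADC for a full-scale sine wave."""
    return 6.02 * bits + 1.76

for bits in (8, 12, 14, 16):
    print(f"{bits:2d}-bit ADC: ~{ideal_adc_snr_db(bits):.1f} dB")
```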
The flexibility of SDR allows for dynamic spectrum usage, alleviating the need to statically assign the scarce spectral resources to a single fixed service.
History
In 1970, a researcher at a United States Department of Defense laboratory coined the term "digital receiver". A laboratory called the Gold Room at TRW in California created a software baseband analysis tool called Midas, which had its operation defined in software.
In 1982, while working under a US Department of Defense contract at RCA, Ulrich L. Rohde's department developed the first SDR, which used the COSMAC (Complementary Symmetry Monolithic Array Computer) chip. Rohde was the first to present on this topic with his February 1984 talk, "Digital HF Radio: A Sampling of Techniques" at the Third International Conference on HF Communication Systems and Techniques in London.
In 1984, a team at the Garland, Texas, Division of E-Systems Inc. (now Raytheon) coined the term "software radio" to refer to a digital baseband receiver, as published in their E-Team company newsletter. A 'Software Radio Proof-of-Concept' laboratory was developed by the E-Systems team that popularized Software Radio within various government agencies. This 1984 Software Radio was a digital baseband receiver that provided programmable interference cancellation and demodulation for broadband signals, typically with thousands of adaptive filter taps, using multiple array processors accessing shared memory.
In 1991, Joe Mitola independently reinvented the term software radio for a plan to build a GSM base station that would combine Ferdensi's digital receiver with E-Systems Melpar's digitally controlled communications jammers for a true software-based transceiver. E-Systems Melpar sold the software radio idea to the US Air Force. Melpar built a prototype commanders' tactical terminal in 1990–1991 that employed Texas Instruments TMS320C30 processors and Harris Corporation digital receiver chip sets with digitally synthesized transmission. The Melpar prototype didn't last long because when E-Systems ECI Division manufactured the first limited production units, they decided to "throw out those useless C30 boards", replacing them with conventional RF filtering on transmit and receive and reverting to a digital baseband radio instead of the SpeakEasy like IF ADC/DACs of Mitola's prototype. The Air Force would not let Mitola publish the technical details of that prototype, nor would they let Diane Wasserman publish related software life cycle lessons learned because they regarded it as a "USAF competitive advantage". So instead, with USAF permission, in 1991, Mitola described the architecture principles without implementation details in a paper, "Software Radio: Survey, Critical Analysis and Future Directions" which became the first IEEE publication to employ the term in 1992. When Mitola presented the paper at the conference, Bob Prill of GEC Marconi began his presentation following Mitola with: "Joe is absolutely right about the theory of a software radio and we are building one." Prill gave a GEC Marconi paper on PAVE PILLAR, a SpeakEasy precursor. SpeakEasy, the military software radio was formulated by Wayne Bonser, then of Rome Air Development Center (RADC), now Rome Labs; by Alan Margulies of MITRE Rome, NY; and then Lt Beth Kaspar, the original DARPA SpeakEasy project manager and by others at Rome including Don Upmal. Although Mitola's IEEE publications resulted in the largest global footprint for software radio, Mitola privately credits that DoD lab of the 1970s with its leaders Carl, Dave, and John with inventing the digital receiver technology on which he based software radio once it was possible to transmit via software.
A few months after the National Telesystems Conference 1992, in an E-Systems corporate program review, a vice-president of E-Systems Garland Division objected to Melpar's (Mitola's) use of the term "software radio" without credit to Garland. Alan Jackson, Melpar VP of marketing at that time, asked the Garland VP if their laboratory or devices included transmitters. The Garland VP said: "No, of course not — ours is a software radio receiver." Al replied: "Then it's a digital receiver but without a transmitter, it's not a software radio." Corporate leadership agreed with Al, so the publication stood. Many amateur radio operators and HF radio engineers had realized the value of digitizing HF at RF and of processing it with Texas Instruments TI C30 digital signal processors (DSPs) and their precursors during the 1980s and early 1990s. Radio engineers at Roke Manor in the UK and at an organization in Germany had recognized the benefits of ADC at the RF in parallel. Mitola's publication of software radio in the IEEE opened the concept to the broad community of radio engineers. His May 1995 special issue of the IEEE Communications Magazine with the cover "Software Radio" was regarded as a watershed event with thousands of academic citations. Mitola was introduced by Joao da Silva in 1997 at the First International Conference on Software Radio as "godfather" of software radio in no small part for his willingness to share such a valuable technology "in the public interest".
Perhaps the first software-based radio transceiver was designed and implemented by Peter Hoeher and Helmuth Lang at the German Aerospace Research Establishment (DLR, formerly DFVLR) in Oberpfaffenhofen, Germany, in 1988. Both transmitter and receiver of an adaptive digital satellite modem were implemented according to the principles of a software radio, and a flexible hardware periphery was proposed.
In 1995, Stephen Blust coined the term "software defined radio", publishing a request for information from Bell South Wireless at the first meeting of the Modular Multifunction Information Transfer Systems (MMITS) forum in 1996 (in 1998 the name was changed to the Software Defined Radio Forum), organized by the USAF and DARPA around the commercialization of their SpeakEasy II program. Mitola objected to Blust's term, but finally accepted it as a pragmatic pathway towards the ideal software radio. Although the concept was first implemented with an IF ADC in the early 1990s, software-defined radios have their origins in the U.S. and European defense sectors of the late 1970s (for example, Walter Tuttlebee described a VLF radio that used an ADC and an 8085 microprocessor), about a year after the First International Conference in Brussels. One of the first public software radio initiatives was the U.S. DARPA-Air Force military project named SpeakEasy. The primary goal of the SpeakEasy project was to use programmable processing to emulate more than 10 existing military radios, operating in frequency bands between 2 and 2000 MHz. Another SpeakEasy design goal was to be able to easily incorporate new coding and modulation standards in the future, so that military communications can keep pace with advances in coding and modulation techniques.
In 1997, Blaupunkt introduced the term "DigiCeiver" for their new range of DSP-based tuners with Sharx in car radios such as the Modena & Lausanne RD 148.
SpeakEasy phase I
From 1990 to 1995, the goal of the SpeakEasy program was to demonstrate a radio for the U.S. Air Force tactical ground air control party that could operate from 2 MHz to 2 GHz, and thus could interoperate with ground force radios (frequency-agile VHF, FM, and SINCGARS), Air Force radios (VHF AM), Naval Radios (VHF AM and HF SSB teleprinters) and satellites (microwave QAM). Some particular goals were to provide a new signal format in two weeks from a standing start, and demonstrate a radio into which multiple contractors could plug parts and software.
The project was demonstrated at TF-XXI Advanced Warfighting Exercise, and demonstrated all of these goals in a non-production radio. There was some discontent with failure of these early software radios to adequately filter out of band emissions, to employ more than the simplest of interoperable modes of the existing radios, and to lose connectivity or crash unexpectedly. Its cryptographic processor could not change context fast enough to keep several radio conversations on the air at once. Its software architecture, though practical enough, bore no resemblance to any other. The SpeakEasy architecture was refined at the MMITS Forum between 1996 and 1999 and inspired the DoD integrated process team (IPT) for programmable modular communications systems (PMCS) to proceed with what became the Joint Tactical Radio System (JTRS).
The basic arrangement of the radio receiver used an antenna feeding an amplifier and down-converter (see Frequency mixer) feeding an automatic gain control, which fed an analog-to-digital converter that was on a computer VMEbus with a lot of digital signal processors (Texas Instruments C40s). The transmitter had digital-to-analog converters on the PCI bus feeding an up converter (mixer) that led to a power amplifier and antenna. The very wide frequency range was divided into a few sub-bands with different analog radio technologies feeding the same analog to digital converters. This has since become a standard design scheme for wideband software radios.
SpeakEasy phase II
The goal was to get a more quickly reconfigurable architecture, i.e., several conversations at once, in an open software architecture, with cross-channel connectivity (the radio can "bridge" different radio protocols). The secondary goals were to make it smaller, cheaper, and weigh less.
The project produced a demonstration radio only fifteen months into a three-year research project. This demonstration was so successful that further development was halted, and the radio went into production with only a 4 MHz to 400 MHz range.
The software architecture identified standard interfaces for different modules of the radio: "radio frequency control" to manage the analog parts of the radio, "modem control" managed resources for modulation and demodulation schemes (FM, AM, SSB, QAM, etc.), "waveform processing" modules actually performed the modem functions, "key processing" and "cryptographic processing" managed the cryptographic functions, a "multimedia" module did voice processing, a "human interface" provided local or remote controls, there was a "routing" module for network services, and a "control" module to keep it all straight.
The modules are said to communicate without a central operating system. Instead, they send messages over the PCI computer bus to each other with a layered protocol.
As a military project, the radio strongly distinguished "red" (unsecured secret data) and "black" (cryptographically-secured data).
The project was the first known to use FPGAs (field programmable gate arrays) for digital processing of radio data. The time to reprogram these was an issue limiting application of the radio. Today, the time to write a program for an FPGA is still significant, but the time to download a stored FPGA program is around 20 milliseconds. This means an SDR could change transmission protocols and frequencies in one fiftieth of a second, probably not an intolerable interruption for that task.
2000s
The 1994 SpeakEasy SDR system used a Texas Instruments TMS320C30 CMOS digital signal processor (DSP), along with several hundred integrated circuit chips, with the radio filling the back of a truck. By the late 2000s, the emergence of RF CMOS technology made it practical to scale down an entire SDR system onto a single mixed-signal system-on-a-chip, which Broadcom demonstrated with the BCM21551 processor in 2007. The Broadcom BCM21551 has practical commercial applications, for use in 3G mobile phones.
Military usage
United States
The Joint Tactical Radio System (JTRS) was a program of the US military to produce radios that provide flexible and interoperable communications. Examples of radio terminals that require support include hand-held, vehicular, airborne and dismounted radios, as well as base-stations (fixed and maritime).
This goal is achieved through the use of SDR systems based on an internationally endorsed open Software Communications Architecture (SCA). This standard uses CORBA on POSIX operating systems to coordinate various software modules.
The program is providing a flexible new approach to meet diverse soldier communications needs through software programmable radio technology. All functionality and expandability is built upon the SCA.
The SCA, despite its military origin, is under evaluation by commercial radio vendors for applicability in their domains. The adoption of general-purpose SDR frameworks outside of military, intelligence, experimental and amateur uses, however, is inherently hampered by the fact that civilian users can more easily settle for a fixed architecture, optimized for a specific function, and as such more economical in mass market applications. Still, software defined radio's inherent flexibility can yield substantial benefits in the longer run, once the fixed costs of implementing it have fallen far enough to be outweighed by the cost of repeatedly redesigning purpose-built systems. This explains the increasing commercial interest in the technology.
SCA-based infrastructure software and rapid development tools for SDR education and research are provided by the Open Source SCA Implementation Embedded (OSSIE) project. The Wireless Innovation Forum funded the SCA Reference Implementation project, an open source implementation of the SCA specification (SCARI), which can be downloaded for free.
Amateur and home use
A typical amateur software radio uses a direct conversion receiver. Unlike direct conversion receivers of the more distant past, the mixer technologies used are based on the quadrature sampling detector and the quadrature sampling exciter.
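The digital counterpart of quadrature detection is multiplication by a complex local oscillator. A minimal sketch with arbitrary frequencies and no filtering (a real receiver would low-pass filter and decimate the I/Q stream):

```python
import numpy as np

fs = 48_000                      # sample rate in Hz (sound-card-like, for illustration)
f_rf, f_lo = 12_000, 11_000      # arbitrary signal and local-oscillator frequencies
t = np.arange(0, 0.01, 1 / fs)

rf = np.cos(2 * np.pi * f_rf * t)        # real sampled input
lo = np.exp(-2j * np.pi * f_lo * t)      # complex local oscillator (I and Q phases)
baseband = rf * lo                       # I = baseband.real, Q = baseband.imag

# The product contains the wanted difference term at +1 kHz and an image at
# -(f_rf + f_lo) = -23 kHz that a low-pass filter would normally remove.
freqs = np.fft.fftfreq(len(t), 1 / fs)
spectrum = np.abs(np.fft.fft(baseband))
positive = freqs > 0
print(freqs[positive][np.argmax(spectrum[positive])])   # 1000.0 Hz
```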
The receiver performance of this line of SDRs is directly related to the dynamic range of the analog-to-digital converters (ADCs) utilized. Radio frequency signals are down converted to the audio frequency band, which is sampled by a high performance audio frequency ADC. First generation SDRs used a 44 kHz PC sound card to provide ADC functionality. The newer software defined radios use embedded high performance ADCs that provide higher dynamic range and are more resistant to noise and RF interference.
A fast PC performs the digital signal processing (DSP) operations using software specific for the radio hardware. Several software radio implementations use the open source SDR library DttSP.
The SDR software performs all of the demodulation, filtering (both radio frequency and audio frequency), and signal enhancement (equalization and binaural presentation). Uses include every common amateur modulation: morse code, single-sideband modulation, frequency modulation, amplitude modulation, and a variety of digital modes such as radioteletype, slow-scan television, and packet radio. Amateurs also experiment with new modulation methods: for instance, the DREAM open-source project decodes the COFDM technique used by Digital Radio Mondiale.
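As one example of such software demodulation, here is a minimal FM discriminator on synthetic complex baseband samples (the phase-difference method shown is one common approach; the tone, deviation and sample rate are arbitrary):

```python
import numpy as np

fs = 48_000
t = np.arange(0, 0.02, 1 / fs)
audio = np.sin(2 * np.pi * 440 * t)                  # test "audio" tone
deviation = 3_000                                    # peak frequency deviation, Hz
phase = 2 * np.pi * deviation * np.cumsum(audio) / fs
iq = np.exp(1j * phase)                              # ideal FM signal already at baseband

# Phase difference between consecutive I/Q samples is proportional to instantaneous frequency
demod = np.angle(iq[1:] * np.conj(iq[:-1])) * fs / (2 * np.pi * deviation)
print(round(float(np.corrcoef(demod, audio[1:])[0, 1]), 3))   # ~1.0: tone recovered
```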
There is a broad range of hardware solutions for radio amateurs and home use. There are professional-grade transceiver solutions, e.g. the Zeus ZS-1 or FlexRadio, home-brew solutions, e.g. PicAStar transceiver, the SoftRock SDR kit, and starter or professional receiver solutions, e.g. the FiFi SDR for shortwave, or the Quadrus coherent multi-channel SDR receiver for short wave or VHF/UHF in direct digital mode of operation.
RTL-SDR
Eric Fry discovered that some common low-cost DVB-T USB dongles with the Realtek RTL2832U controller and tuner, e.g. the Elonics E4000 or the Rafael Micro R820T, can be used as a wide-band (3 MHz) SDR receiver. Experiments proved the capability of this setup to analyze the Perseids meteor shower using Graves radar signals. This project is being maintained at Osmocom.
HPSDR
The HPSDR (High Performance Software Defined Radio) project uses a 16-bit analog-to-digital converter that provides performance over the range 0 to comparable to that of a conventional analogue HF radio. The receiver will also operate in the VHF and UHF range using either mixer image or alias responses. Interface to a PC is provided by a USB 2.0 interface, although Ethernet could be used as well. The project is modular and comprises a backplane onto which other boards plug in. This allows experimentation with new techniques and devices without the need to replace the entire set of boards. An exciter provides of RF over the same range or into the VHF and UHF range using image or alias outputs.
WebSDR
WebSDR is a project initiated by Pieter-Tjerk de Boer providing access via browser to multiple SDR receivers worldwide covering the complete shortwave spectrum. De Boer has analyzed Chirp Transmitter signals using the coupled system of receivers.
KiwiSDR
KiwiSDR is also a via-browser SDR like WebSDR. Unlike WebSDR, the frequency is limited to 3 Hz to 30 MHz (ELF to HF)
Other applications
On account of its increasing accessibility, with lower cost hardware, more software tools and documentation, the applications of SDR have expanded past their primary and historic use cases. SDR is now being used in areas such as wildlife tracking, radio astronomy, medical imaging research, and art.
See also
List of software-defined radios
List of amateur radio software
Digital radio
Digital signal processing (DSP)
Radio Interface Layer (RIL)
Softmodem
Software defined mobile network (SDMN)
Software GNSS Receiver
White space (radio)
White space (database)
Bit banging
References
Further reading
Software defined radio : architectures, systems, and functions. Dillinger, Madani, Alonistioti. Wiley, 2003. 454 pages.
Cognitive Radio Technology. Bruce Fette. Elsevier Science & Technology Books, 2006. 656 pages.
Software Defined Radio for 3G, Burns. Artech House, 2002.
Software Radio: A Modern Approach to Radio Engineering, Jeffrey H. Reed. Prentice Hall PTR, 2002.
Signal Processing Techniques for Software Radio, Behrouz Farhang-Beroujeny. LuLu Press.
RF and Baseband Techniques for Software Defined Radio, Peter B. Kenington. Artech House, 2005,
The ABC's of Software Defined Radio, Martin Ewing, AA6E. The American Radio Relay League, Inc., 2012,
Software Defined Radio using MATLAB & Simulink and the RTL-SDR, R Stewart, K Barlee, D Atkinson, L Crockett, Strathclyde Academic Media, September 2015.
External links
The world's first web-based software-defined receiver at the university of Twente, the Netherlands
Software-defined receivers connected to the Internet
Using software-defined television tuners as multimode HF / VHF / UHF receivers
Free SDR textbook: Software Defined Radio using MATLAB & Simulink and the RTL-SDR
Software Defined Terahertz Radio at Polytechnique Montreal, Canada | Software-defined radio | [
"Engineering"
] | 4,438 | [
"Radio electronics",
"Software-defined radio"
] |
15,064,099 | https://en.wikipedia.org/wiki/RFX2 | DNA-binding protein RFX2 is a protein that in humans is encoded by the RFX2 gene.
This gene is a member of the regulatory factor X gene family, which encodes transcription factors that contain a highly-conserved winged helix DNA binding domain. The protein encoded by this gene is structurally related to regulatory factors X1, X3, X4, and X5. It is a transcriptional activator that can bind DNA as a monomer or as a heterodimer with other RFX family members. This protein can bind to cis elements in the promoter of the IL-5 receptor alpha gene. Two transcript variants encoding different isoforms have been described for this gene, and both variants utilize alternative polyadenylation sites.
References
Further reading
External links
Transcription factors | RFX2 | [
"Chemistry",
"Biology"
] | 160 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
15,064,950 | https://en.wikipedia.org/wiki/TEAD3 | Transcriptional enhancer factor TEF-5 is a protein that in humans is encoded by the TEAD3 gene.
Function
This gene product is a member of the transcriptional enhancer factor (TEF) family of transcription factors, which contain the TEA/ATTS DNA-binding domain. Members of the family in mammals are TEAD1, TEAD2, TEAD3, TEAD4. Transcriptional coregulators, such as WWTR1 (TAZ) bind to these transcription factors. TEAD3 is predominantly expressed in the placenta and is involved in the transactivation of the chorionic somatomammotropin-B gene enhancer. It is expressed in nervous system and muscle in fish embryos. Translation of this protein is initiated at a non-AUG (AUA) start codon.
References
Further reading
External links
Transcription factors | TEAD3 | [
"Chemistry",
"Biology"
] | 181 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
11,381,694 | https://en.wikipedia.org/wiki/Orphan%20source | An orphan source is a self-contained radioactive source that is no longer under regulatory control.
The United States Nuclear Regulatory Commission definition is:
...a sealed source of radioactive material contained in a small volume—but not radioactively contaminated soils and bulk metals—in any one or more of the following conditions:
In an uncontrolled condition that requires removal to protect public health and safety from a radiological threat
Controlled or uncontrolled, but for which a responsible party cannot be readily identified
Controlled, but the material's continued security cannot be assured. If held by a licensee, the licensee has few or no options for, or is incapable of providing for, the safe disposition of the material
In the possession of a person, not licensed to possess the material, who did not seek to possess the material
In the possession of a State radiological protection program for the sole purpose of mitigating a radiological threat because the orphan source is in one of the conditions described in one of the first four bullets and for which the State does not have a means to provide for the material's appropriate disposition
Most known orphan sources were, generally, small radioactive sources produced legitimately under governmental regulation and put into service for radiography, generating electricity in radioisotope thermoelectric generators, medical radiotherapy or irradiation. These sources were then "abandoned, lost, misplaced or stolen" and so no longer subject to proper regulation.
See also
List of orphan source incidents
References
Radiation accidents and incidents | Orphan source | [
"Physics"
] | 308 | [
"Nuclear and atomic physics stubs",
"Nuclear physics"
] |
11,384,086 | https://en.wikipedia.org/wiki/Spin%E2%80%93spin%20relaxation | In physics, the spin–spin relaxation is the mechanism by which , the transverse component of the magnetization vector, exponentially decays towards its equilibrium value in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI). It is characterized by the spin–spin relaxation time, known as 2, a time constant characterizing the signal decay. It is named in contrast to 1, the spin–lattice relaxation time. It is the time it takes for the magnetic resonance signal to irreversibly decay to 37% (1/e) of its initial value after its generation by tipping the longitudinal magnetization towards the magnetic transverse plane. Hence the relation
.
T2 relaxation generally proceeds more rapidly than T1 recovery, and different samples and different biological tissues have different T2. For example, fluids have the longest T2, and water-based tissues are in the 40–200 ms range, while fat-based tissues are in the 10–100 ms range. Amorphous solids have T2 in the range of milliseconds, while the transverse magnetization of crystalline samples decays in around 1/20 ms.
Origin
When excited nuclear spins—i.e., those lying partially in the transverse plane—interact with each other by sampling local magnetic field inhomogeneities on the micro- and nanoscales, their respective accumulated phases deviate from expected values. While the slow- or non-varying component of this deviation is reversible, some net signal will inevitably be lost due to short-lived interactions such as collisions and random processes such as diffusion through heterogeneous space.
T2 decay does not occur due to the tilting of the magnetization vector away from the transverse plane. Rather, it is observed due to the interactions of an ensemble of spins dephasing from each other. Unlike spin-lattice relaxation, considering spin-spin relaxation using only a single isochromat is trivial and not informative.
Determining parameters
Like spin-lattice relaxation, spin-spin relaxation can be studied using a molecular tumbling autocorrelation framework. The relaxation rate experienced by a spin, which is the inverse of T2, is proportional to a spin's tumbling energy at the frequency difference between one spin and another; in less mathematical terms, energy is transferred between two spins when they rotate at a frequency similar to their beat frequency. Because the beat frequency range is very small relative to the average rotation rate, spin-spin relaxation is not heavily dependent on magnetic field strength. This directly contrasts with spin-lattice relaxation, which occurs at tumbling frequencies equal to the Larmor frequency. Some frequency shifts, such as the NMR chemical shift, occur at frequencies proportional to the Larmor frequency, and the related but distinct parameter T2* can be heavily dependent on field strength due to the difficulty of correcting for inhomogeneity in stronger magnet bores.
Assuming isothermal conditions, spins tumbling faster through space will generally have a longer T2. Since slower tumbling displaces the spectral energy at high tumbling frequencies to lower frequencies, the relatively low beat frequency will experience a monotonically increasing amount of energy as increases, decreasing relaxation time. The figure at the left illustrates this relationship. Fast tumbling spins, such as those in pure water, have similar T1 and T2 relaxation times, while slow tumbling spins, such as those in crystal lattices, have very distinct relaxation times.
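One common simplified way to show this trend (a Lorentzian spectral density for isotropic tumbling, with arbitrary constants; only the trend is meaningful, not the absolute numbers) is sketched below:

```python
def spectral_density(omega, tau_c):
    """Lorentzian spectral density for isotropic tumbling (arbitrary normalisation)."""
    return 2 * tau_c / (1 + (omega * tau_c) ** 2)

for tau_c in (1e-12, 1e-9, 1e-6):                      # fast (liquid) -> slow (solid-like)
    r2_proxy = spectral_density(0.0, tau_c)            # R2 is dominated by the J(0) term
    print(f"tau_c = {tau_c:.0e} s  ->  relative R2 ~ {r2_proxy:.1e},  T2 ~ {1 / r2_proxy:.1e}")
```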
Measurement
A spin echo experiment can be used to reverse time-invariant dephasing phenomena such as millimeter-scale magnetic inhomogeneities. The resulting signal decays exponentially as the echo time (TE), i.e., the time after excitation at which readout occurs, increases. In more complicated experiments, multiple echoes can be acquired simultaneously in order to quantitatively evaluate one or more superimposed T2 decay curves.
In MRI, T2-weighted images can be obtained by selecting an echo time on the order of the various tissues' T2s. In order to reduce the amount of T1 information and therefore contamination in the image, excited spins are allowed to return to near-equilibrium on a T1 scale before being excited again. (In MRI parlance, this waiting time is called the "repetition time" and is abbreviated TR). Pulse sequences other than the conventional spin echo can also be used to measure T2; gradient echo sequences such as steady-state free precession (SSFP) and multiple spin echo sequences can be used to accelerate image acquisition or inform on additional parameters.
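A minimal sketch of extracting T2 from such multi-echo data, assuming a single noise-free mono-exponential decay and using a log-linear least-squares fit (the tissue value and echo times below are illustrative):

```python
import numpy as np

true_t2, s0 = 0.080, 1000.0                            # 80 ms "tissue", arbitrary units
te = np.array([0.010, 0.030, 0.060, 0.090, 0.120])     # echo times in seconds
signal = s0 * np.exp(-te / true_t2)                    # noise-free synthetic echoes

# Log-linearise S = S0*exp(-TE/T2)  ->  ln S = ln S0 - TE/T2, then least squares
slope, intercept = np.polyfit(te, np.log(signal), 1)
print(f"fitted T2 = {-1 / slope * 1000:.1f} ms, S0 = {np.exp(intercept):.0f}")
```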
See also
Relaxation (NMR)
Spin–lattice relaxation
Spin echo
References
McRobbie D., et al. MRI, From picture to proton. 2003
Hashemi Ray, et al. MRI, The Basics 2ED. 2004.
Magnetic resonance imaging
Nuclear magnetic resonance
Articles containing video clips | Spin–spin relaxation | [
"Physics",
"Chemistry"
] | 1,045 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging",
"Nuclear physics"
] |
11,384,459 | https://en.wikipedia.org/wiki/Spin%E2%80%93lattice%20relaxation | During nuclear magnetic resonance observations, spin–lattice relaxation is the mechanism by which the longitudinal component of the total nuclear magnetic moment vector (parallel to the constant magnetic field) exponentially relaxes from a higher energy, non-equilibrium state to thermodynamic equilibrium with its surroundings (the "lattice"). It is characterized by the spin–lattice relaxation time, a time constant known as T1.
There is a different parameter, T2, the spin–spin relaxation time, which concerns the exponential relaxation of the transverse component of the nuclear magnetization vector (perpendicular to the external magnetic field). Measuring the variation of T1 and T2 in different materials is the basis for some magnetic resonance imaging techniques.
Nuclear physics
T1 characterizes the rate at which the longitudinal Mz component of the magnetization vector recovers exponentially towards its thermodynamic equilibrium value Mz,eq, according to the equation
Mz(t) = Mz,eq − [Mz,eq − Mz(0)] e^(−t/T1)
Or, for the specific case in which Mz(0) = 0 (for example, immediately after a 90° pulse),
Mz(t) = Mz,eq (1 − e^(−t/T1))
It is thus the time it takes for the longitudinal magnetization to recover approximately 63% [1-(1/e)] of its initial value after being flipped into the magnetic transverse plane by a 90° radiofrequency pulse.
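A short numerical check of the recovery expression above, using an illustrative T1 value:

```python
import numpy as np

t1 = 0.9                                  # seconds; an illustrative tissue value
t = np.array([0.0, t1, 3 * t1, 5 * t1])
mz = 1.0 - np.exp(-t / t1)                # Mz(t)/Mz_eq after a 90° pulse (Mz(0) = 0)
print(np.round(mz, 3))                    # [0.    0.632 0.95  0.993]
```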
Nuclei are contained within a molecular structure, and are in constant vibrational and rotational motion, creating a complex magnetic field. The magnetic field caused by thermal motion of nuclei within the lattice is called the lattice field. The lattice field of a nucleus in a lower energy state can interact with nuclei in a higher energy state, causing the energy of the higher energy state to distribute itself between the two nuclei. Therefore, the energy gained by nuclei from the RF pulse is dissipated as increased vibration and rotation within the lattice, which can slightly increase the temperature of the sample. The name spin–lattice relaxation refers to the process in which the spins give the energy they obtained from the RF pulse back to the surrounding lattice, thereby restoring their equilibrium state. The same process occurs after the spin energy has been altered by a change of the surrounding static magnetic field (e.g. pre-polarization by or insertion into high magnetic field) or if the nonequilibrium state has been achieved by other means (e.g., hyperpolarization by optical pumping).
The relaxation time, T1 (the average lifetime of nuclei in the higher energy state) is dependent on the gyromagnetic ratio of the nucleus and the mobility of the lattice. As mobility increases, the vibrational and rotational frequencies increase, making it more likely for a component of the lattice field to be able to stimulate the transition from high to low energy states. However, at extremely high mobilities, the probability decreases as the vibrational and rotational frequencies no longer correspond to the energy gap between states.
Different tissues have different T1 values. For example, fluids have long T1s (1500-2000 ms), and water-based tissues are in the 400-1200 ms range, while fat based tissues are in the shorter 100-150 ms range. The presence of strongly magnetic ions or particles (e.g., ferromagnetic or paramagnetic) also strongly alter T1 values and are widely used as MRI contrast agents.
T1 weighted images
Magnetic resonance imaging uses the resonance of the protons to generate images. Protons are excited by a radio frequency pulse at an appropriate frequency (Larmor frequency) and then the excess energy is released in the form of a minuscule amount of heat to the surroundings as the spins return to their thermal equilibrium. The magnetization of the proton ensemble goes back to its equilibrium value with an exponential curve characterized by a time constant T1 (see Relaxation (NMR)).
T1 weighted images can be obtained by setting a short repetition time (TR) such as < 750 ms and echo time (TE) such as < 40 ms in conventional spin echo sequences, while in gradient echo sequences they can be obtained by using flip angles larger than 50° while setting TE values to less than 15 ms.
T1 is significantly different between grey matter and white matter and is used when undertaking brain scans. A strong T1 contrast is present between fluid and more solid anatomical structures, making T1 contrast suitable for morphological assessment of the normal or pathological anatomy, e.g., for musculoskeletal applications.
In the rotating frame
Spin–lattice relaxation in the rotating frame is the mechanism by which Mxy, the transverse component of the magnetization vector, exponentially decays towards its equilibrium value of zero, under the influence of a radio frequency (RF) field in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI). It is characterized by the spin–lattice relaxation time constant in the rotating frame, T1ρ. It is named in contrast to T1, the spin-lattice relaxation time.
T1ρ MRI is an alternative to conventional T1 and T2 MRI by its use of a long-duration, low-power radio frequency referred to as spin-lock (SL) pulse applied to the magnetization in the transverse plane. The magnetization is effectively spin-locked around an effective B1 field created by the vector sum of the applied B1 and any off-resonant component. The spin-locked magnetization will relax with a time constant T1ρ, which is the time it takes for the magnetic resonance signal to reach 37% (1/e) of its initial value, Mxy(0). Hence the relation:
Mxy(tSL) = Mxy(0) e^(−tSL/T1ρ), where tSL is the duration of the RF field.
Measurement
T1ρ can be quantified (relaxometry) by curve fitting the signal expression above as a function of the duration of the spin-lock pulse while the amplitude of spin-lock pulse (γB1~0.1-few kHz) is fixed. Quantitative T1ρ MRI relaxation maps reflect the biochemical composition of tissues.
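A minimal relaxometry sketch, fitting the mono-exponential spin-lock decay to synthetic, noise-free data (the durations and "true" T1ρ are illustrative; this uses SciPy's curve_fit):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t_sl, m0, t1rho):
    """Mono-exponential spin-lock decay M(t_SL) = M0 * exp(-t_SL / T1rho)."""
    return m0 * np.exp(-t_sl / t1rho)

t_sl = np.array([0.002, 0.010, 0.020, 0.040, 0.060])   # spin-lock durations, seconds
data = model(t_sl, 100.0, 0.045)                       # synthetic data, "true" T1rho = 45 ms

popt, _ = curve_fit(model, t_sl, data, p0=(data[0], 0.030))
print(f"fitted T1rho = {popt[1] * 1000:.1f} ms")       # ~45.0 ms
```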
Imaging
T1ρ MRI has been used to image tissues such as cartilage, intervertebral discs, brain, and heart, as well as certain types of cancers.
See also
Relaxation (NMR)
Spin–spin relaxation time
Ernst angle
References
Further reading
McRobbie D., et al. MRI, From picture to proton. 2003
Hashemi Ray, et al. MRI, The Basics 2ED. 2004
Magnetic resonance imaging
Nuclear magnetic resonance | Spin–lattice relaxation | [
"Physics",
"Chemistry"
] | 1,277 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging",
"Nuclear physics"
] |
11,386,287 | https://en.wikipedia.org/wiki/Levy%E2%80%93Mises%20equations | The Levi–Mises equations (also called flow rules) describe the relationship between stress and strain for an ideal plastic solid where the elastic strains are negligible.
The generalized Levy–Mises equation relates the plastic strain increments to the deviatoric stresses and can be written as:
dε1/σ1′ = dε2/σ2′ = dε3/σ3′ = dλ
where σi′ are the deviatoric principal stresses and dλ is a positive scalar of proportionality (equivalently, in tensor form, dεij = σij′ dλ).
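A small numerical sketch of the tensor form of this flow rule; the stress state and the plastic multiplier increment below are arbitrary, chosen only to show that the resulting strain increments are proportional to the deviatoric stress and volume-preserving:

```python
import numpy as np

sigma = np.array([[200.0,  30.0,  0.0],
                  [ 30.0, 120.0,  0.0],
                  [  0.0,   0.0, 80.0]])               # arbitrary stress state, MPa
s = sigma - np.trace(sigma) / 3.0 * np.eye(3)          # deviatoric stress
d_lambda = 1e-5                                        # arbitrary plastic multiplier increment
d_eps = d_lambda * s                                   # plastic strain increments

print(np.round(d_eps, 6))
print(bool(np.isclose(np.trace(d_eps), 0.0)))          # True: plastic flow conserves volume
```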
Materials science
Continuum mechanics
Solid mechanics | Levy–Mises equations | [
"Physics",
"Materials_science",
"Mathematics",
"Engineering"
] | 53 | [
"Solid mechanics",
"Applied and interdisciplinary physics",
"Continuum mechanics",
"Applied mathematics",
"Materials science",
"Classical mechanics",
"Mechanics",
"Applied mathematics stubs",
"nan"
] |
9,933,420 | https://en.wikipedia.org/wiki/DNA%20repair%20protein%20XRCC4 | DNA repair protein XRCC4 (hXRCC4) also known as X-ray repair cross-complementing protein 4 is a protein that in humans is encoded by the XRCC4 gene. XRCC4 is also expressed in many other animals, fungi and plants. hXRCC4 is one of several core proteins involved in the non-homologous end joining (NHEJ) pathway to repair DNA double strand breaks (DSBs).
NHEJ requires two main components to achieve successful completion. The first component is the cooperative binding and phosphorylation of Artemis by the catalytic subunit of the DNA-dependent protein kinase (DNA-PKcs). Artemis cleaves the ends of damaged DNA to prepare it for ligation. The second component involves the bridging of DNA to DNA ligase 4, by hXRCC4, with the aid of Cernunnos-XLF. DNA-PKcs and hXRCC4 are anchored to the Ku70/Ku80 heterodimer, which is bound to the DNA ends.
Since hXRCC4 is the key protein that enables interaction of DNA ligase 4 to damaged DNA and therefore ligation of the ends, mutations in the XRCC4 gene were found to cause embryonic lethality in mice and developmental inhibition and immunodeficiency in humans. Furthermore, certain mutations in XRCC4 are associated with an increased risk of cancer.
Double strand breaks
Double strand breaks (DSBs) are mainly caused by free radicals generated from ionizing radiation in the environment and from by-products released continually during cellular metabolism. DSBs that are not efficiently repaired may result in the loss of important protein coding genes and regulatory sequences required for gene expression necessary for the life of a cell. DSBs that cannot rely on a newly copied sister chromosome generated by DNA replication to fill in the gap will go into the NHEJ pathway. This method of repair is essential as it is a last resort to prevent loss of long stretches of the chromosome. NHEJ is also used to repair DSBs generated during V(D)J recombination when gene regions are rearranged to create the unique antigen binding sites of antibodies and T-cell receptors.
Sources of DNA damage
DNA damage occurs very frequently and is generated from exposure to a variety of both exogenous and endogenous genotoxic sources. One of these include ionizing radiation, such as gamma radiation and X-rays, which ionize the deoxyribose groups in the DNA backbone and can induce DSBs. Reactive oxygen species (ROS), such as superoxide (O2–), hydrogen peroxide (H2O2), hydroxyl radicals (HO•), and singlet oxygen (1O2), can also produce DSBs as a result of ionizing radiation as well as cellular metabolic processes that are naturally occurring. DSBs can also be caused by the action of DNA polymerase while attempting to replicate DNA over a nick that was introduced as a result of DNA damage.
Consequences of DSBs
There are many types of DNA damage, but DSBs, in particular, are the most harmful as both strands are completely disjointed from the rest of the chromosome. If an efficient repair mechanism does not exist, the ends of the DNA can eventually degrade, leading to a permanent loss of sequence. A double-stranded gap in DNA will also prevent replication from proceeding, resulting in an incomplete copy of that specific chromosome, targeting the cell for apoptosis. As with all DNA damage, DSBs can introduce new mutations that can ultimately lead to cancer.
DSB repair methods
There are two methods for repairing DSBs depending on when the damage occurs during mitosis. If the DSB occurs after DNA replication has been completed in S phase of the cell cycle, the DSB repair pathway will use homologous recombination, pairing with the newly synthesized daughter strand to repair the break. However, if the DSB is generated prior to synthesis of the sister chromosome, then the template sequence that is required will be absent. For this circumstance, the NHEJ pathway provides a solution for repairing the break and is the main system used to repair DSBs in humans and multicellular eukaryotes. During NHEJ, very short stretches of complementary DNA, one base pair or more at a time, are hybridized together, and the overhangs are removed. As a result, this specific region of the genome is permanently lost and the deletion can lead to cancer and premature aging.
Properties
Gene and protein
XRCC4 is located on chromosome 5, specifically at 5q14.2. This gene contains eight exons and three mRNA transcript variants, which encode two different protein isoforms. Transcript variant 1, mRNA, RefSeq NM_003401.3, is 1688 bp long and is the shortest out of the three variants. It is missing a short sequence in the 3' coding region as compared to variant 2. Isoform 1 contains 334 amino acids. Transcript variant 2, mRNA, RefSeq NM_022406, is 1694 bp long and encodes the longest isoform 2, which contains 336 amino acids. Transcript variant 3, RefSeq NM_022550.2, is 1735 bp and is the longest, but it also encodes for the same isoform 1 as variant 1. It contains an additional sequence in the 5'UTR of the mRNA transcript and lacks a short sequence in the 3'coding region as compared to variant 2.
Structure
hXRCC4 is a tetramer that resembles the shape of a dumbbell containing two globular ends separated by a long, thin stalk. The tetramer is composed of two dimers, and each dimer is made up of two similar subunits. The first subunit (L) contains amino acid residues 1–203 and has a longer stalk than the second subunit (S) which contains residues 1–178.
The globular N-terminal domains of each subunit are identical. They are made up of two, antiparallel beta sheets that face each other in a beta sandwich-like structure (i.e., a "flattened" beta barrel) and are separated by two alpha helices on one side. The N-terminus begins with one beta sheet composed of strands 1, 2, 3, and 4, followed by a helix-turn-helix motif of the two alpha helices, αA and αB, which continues into strands 5, 6, 7, and ending with one alpha-helical stalk at the C-terminus. αA and αB are perpendicular to one another, and because one end of αB is partially inserted between the two beta sheets, it causes them to flare out away from each other. The beta sandwich structure is held together through three hydrogen bonds between antiparallel strands 4 and 7 and one hydrogen bond between strands 1 and 5.
The two helical stalks between subunits L and S intertwine with a single left-handed crossover into a coiled-coil at the top, near the globular domains forming a palm tree configuration. This region interacts with the two alpha helices of the second dimer in an opposite orientation to form a four-helix bundle and the dumbbell-shaped tetramer.
Post-translational modifications
In order for hXRCC4 to be sequestered from the cytoplasm to the nucleus to repair a DSB during NHEJ or to complete V(D)J recombination, post-translational modification at lysine 210 with a small ubiquitin-related modifier (SUMO), or sumoylation, is required. SUMO modification of diverse types of DNA repair proteins can be found in topoisomerases, base excision glycosylase TDG, Ku70/80, and BLM helicase. A common conserved motif is typically found to be a target of SUMO modification, ΨKXE (where Ψ is a bulky, hydrophobic amino acid). In the case of the XRCC4 protein, the consensus sequence surrounding lysine 210 is IKQE. Chinese hamster ovary cells, CHO, that express the mutated form of XRCC4 at K210 cannot be modified with SUMO, fail recruitment to the nucleus and instead accumulate in the cytoplasm. Furthermore, these cells are radiation sensitive and do not successfully complete V(D)J recombination.
Interactions
Upon generation of a DSB, Ku proteins will move through the cytoplasm until they find the site of the break and bind to it. Ku recruits XRCC4 and Cer-XLF and both of these proteins interact cooperatively with one another through specific residues to form a nucleoprotein pore complex that wraps around DNA. Cer-XLF is a homodimer that is very similar to XRCC4 in the structure and size of its N-terminal and C-terminal domains. Residues arginine 64, leucine 65, and leucine 115 in Cer-XLF interact with lysines 65 and 99 in XRCC4 within their N-terminal domains. Together they form a filament bundle that wraps around DNA in an alternating pattern. Hyper-phosphorylation of the C-terminal alpha helical domains of XRCC4 by DNA-PKcs facilitates this interaction. XRCC4 dimer binds to a second dimer on an adjacent DNA strand to create a tetramer for DNA bridging early on in NHEJ. Prior to ligation, Lig IV binds to the C-terminal stalk of XRCC4 at the site of the break and displaces the second XRCC4 dimer. The BRCT2 domain of Lig IV hydrogen bonds with XRCC4 at this domain through multiple residues and introduces a kink in the two alpha helical tails. The helix-loop-helix clamp connected to the BRCT-linker also makes extensive contacts.
Mechanism
NHEJ
The process of NHEJ involves XRCC4 and a number of tightly coupled proteins acting in concert to repair the DSB. The system begins with the binding of one heterodimeric protein called Ku70/80 to each end of the DSB to maintain them close together in preparation for ligation and prevent their degradation. Ku70/80 then recruits one DNA-dependent protein kinase catalytic subunit (DNA-PKcs) to each DNA end, enabling the binding of Artemis protein to one end of each DNA-PKcs. The ends of the DNA-PKcs molecules join to stabilize the proximity of the DSB and allow very short regions of DNA complementarity to hybridize. DNA-PKcs then phosphorylates Artemis at a serine/threonine to activate its exonuclease activity and cleave nucleotides at the single strand tails that are not hybridized in a 5' to 3' direction. Two XRCC4 proteins are post-translationally modified for recognition and localization to Ku70/80. The two XRCC4 proteins dimerize together and bind to Ku70/80 at the ends of the DNA strands to promote ligation. XRCC4 then forms a strong complex with DNA ligase IV, LigIV, which is enhanced by Cernunnos XRCC4-like factor, Cer-XLF. Cer-XLF only binds to XRCC4 without direct interaction with LigIV. LigIV then joins the DNA ends by catalyzing a covalent phosphodiester bond.
V(D)J recombination
V(D)J recombination is the rearrangement of multiple, distinct gene segments in germ-line DNA to produce the unique protein domains of immune cells, B cells and T cells, that will specifically recognize foreign antigens such as viruses, bacteria, and pathogenic eukaryotes. B cells produce antibodies that are secreted into the bloodstream and T cells produce receptors that once translated are transported to the outer lipid bilayer of the cell. Antibodies are composed of two light and two heavy chains. The antigen binding site consists of two variable regions, VL and VH. The remainder of the antibody structure is made up of constant regions, CL, CH, CH2 and CH3. The Kappa locus in the mouse encodes an antibody light chain and contains approximately 300 gene segments for the variable region, V, four J segments that encode a short protein region, and one constant, C, segment. To produce a light chain with one unique type of VL, when B cells are differentiating, DNA is rearranged to incorporate a unique combination of the V and J segments. RNA splicing joins the recombined region with the C segment. The heavy chain gene also contains numerous diversity segments, D, and multiple constant segments, Cμ, Cδ, Cγ, Cε, Cα. Recombination occurs in a specific region of the gene that is located between two conserved sequence motifs called recombination signal sequences. Each motif is flanked by a 7 bp and a 9 bp sequence separated by either a 12 bp spacer, referred to as class 1, or a 23 bp spacer, referred to as class 2. A recombinase made up of RAG1 and RAG2 subunits always cleaves between these two sites. The cleavage results in hairpin structures at the ends of the V and J coding segments, while the non-coding region is separated from them by a DSB. The hairpin coding region goes through the process of NHEJ where the closed end is cleaved and repaired. The non-coding region is circularized and degraded. Thus, NHEJ is also important in the development of the immune system via its role in V(D)J recombination.
Pathology
Recent studies have shown an association between XRCC4 and potential susceptibility to a variety of pathologies. The most frequently observed linkage is between XRCC4 mutations and susceptibility to cancers such as bladder cancer, breast cancer, and lymphomas. Studies have also pointed to a potential linkage between XRCC4 mutation and endometriosis. Autoimmunity is also being studied in this regard. Linkage between XRCC4 mutations and certain pathologies may provide a basis for diagnostic biomarkers and, eventually, potential development of new therapeutics.
Cancer susceptibility
XRCC4 polymorphisms have been linked to a risk of susceptibility for cancers such as bladder cancer, breast cancer, prostate cancer, hepatocellular carcinoma, lymphomas, and multiple myeloma. With respect to bladder cancer, for example, the link between XRCC4 and risk of cancer susceptibility was based on hospital-based case-control histological studies of gene variants of both XRCC4 and XRCC3 and their possible association with risk for urothelial bladder cancer. The linkage with risk for urothelial bladder cancer susceptibility was shown for XRCC4, but not for XRCC3. With regard to breast cancer, the linkage with "increased risk of breast cancer" was based on an examination of functional polymorphisms of the XRCC4 gene carried out in connection with a meta-analysis of five case-control studies. There is also at least one hospital-based case-control histological study indicating that polymorphisms in XRCC4 may have an "influence" on prostate cancer susceptibility. Conditional (CD21-cre-mediated) deletion of the XRCC4 NHEJ gene in p53-deficient peripheral mouse B cells resulted in surface Ig-negative B-cell lymphomas, and these lymphomas often had a "reciprocal chromosomal translocation" fusing IgH to Myc (and also had "large chromosomal deletions or translocations" involving IgK or IgL, with IgL "fusing" to oncogenes or to IgH). XRCC4- and p53-deficient pro-B lymphomas "routinely activate c-myc by gene amplification"; and furthermore, XRCC4- and p53-deficient peripheral B-cell lymphomas "routinely ectopically activate" a single copy of c-myc. Indeed, in view of the observation by some that "DNA repair enzymes are correctives for DNA damage induced by carcinogens and anticancer drugs", it should not be surprising that "SNPs in DNA repair genes may play an important part" in cancer susceptibility. In addition to the cancers identified above, XRCC4 polymorphisms have been identified as having a potential link to various additional cancers such as oral cancer, lung cancer, gastric cancer, and gliomas.
Senescence
Declining ability to repair DNA double-strand breaks by NHEJ may be a significant factor in the aging process. Li et al. found that, in humans, the efficiency of NHEJ repair declines from age 16 to 75 years. Their study indicated that decreased expression of XRCC4 and other NHEJ proteins drives an age-associated decline in NHEJ efficiency and fidelity. They suggested that the age related decline in expression of XRCC4 may contribute to cellular senescence.
Autoimmunity
Based on the findings that (1) several polypeptides in the NHEJ pathway are "potential targets of autoantibodies" and (2) "one of the autoimmune epitopes in XRCC4 coincides with a sequence that is a nexus for radiation-induced regulatory events", it has been suggested that exposure to DNA double-strand break-introducing agents "may be one of the factors" mediating autoimmune responses.
Endometriosis susceptibility
There has been speculation that "XRCC4 codon 247*A and XRCC4 promoter -1394*T related genotypes and alleles... might be associated with higher endometriosis susceptibilities and pathogenesis".
Potential use as a cancer biomarker
In view of the possible associations of XRCC4 polymorphisms with risk of cancer susceptibility (see discussion above), XRCC4 could be used as a biomarker for cancer screening, particularly with respect to prostate cancer, breast cancer, and bladder cancer. In fact, XRCC4 polymorphisms were specifically identified as having the potential to be novel useful markers for "primary prevention and anticancer intervention" in the case of urothelial bladder cancer.
Radiosensitization of tumor cells
In view of the role of XRCC4 in DNA double-strand break repair, the relationship between impaired XRCC4 function and the radiosensitization of tumor cells has been investigated. For instance, it has been reported that "RNAi-mediated targeting of noncoding and coding sequences in DNA repair gene messages efficiently radiosensitizes human tumor cells".
Potential role in therapeutics
There has been discussion in the literature concerning the potential role of XRCC4 in the development of novel therapeutics. For instance, Wu et al. have suggested that since the XRCC4 gene is "critical in NHEJ" and is "positively associated with cancer susceptibility", some XRCC4 SNPs such as G-1394T (rs6869366) "may serve as a common SNP for detecting and predict[ing] various cancers (so far for breast, gastric and prostate cancers...)"; and, although further investigation is needed, "they may serve as candidate targets for personalized anticancer drugs". The possibility of detecting endometriosis on this basis has also been mentioned, and this may also possibly lead to the eventual development of treatments. In evaluating further possibilities for anticancer treatments, Wu et al. also commented on the importance of "co-treatments of DNA-damaging agents and radiation". Specifically, Wu et al. noted that the "balance between DNA damage and capacity of DNA repair mechanisms determines the final therapeutic outcome" and "the capacity of cancer cells to complete DNA repair mechanisms is important for therapeutic resistance and has a negative impact upon therapeutic efficacy", and thus theorized that "[p]harmacological inhibition of recently detected targets of DNA repair with several small-molecule compounds... has the potential to enhance the cytotoxicity of anticancer agents".
Microcephalic primordial dwarfism
In humans, mutations in the XRCC4 gene cause microcephalic primordial dwarfism, a phenotype characterized by marked microcephaly, facial dysmorphism, developmental delay and short stature. Although immunoglobulin junctional diversity is impaired, these individuals do not show a recognizable immunological phenotype. In contrast to individuals with a LIG4 mutation, pancytopenia resulting in bone marrow failure is not observed in individuals with XRCC4 deficiency. At the cellular level, disruption of XRCC4 induces hypersensitivity to agents that induce double-strand breaks, defective double-strand break repair and increased apoptosis after induction of DNA damage.
Anti-XRCC4 antibodies
Anti-XRCC4 antibodies including phosphospecific antibodies to pS260 and pS318 in XRCC4 have been developed. Antibodies to XRCC4 can have a variety of uses, including use in immunoassays to conduct research in areas such as DNA damage and repair, non-homologous end joining, transcription factors, epigenetics and nuclear signaling.
History
Research carried out in the 1980s revealed that a Chinese hamster ovary (CHO) cell mutant called XR-1 was "extremely sensitive" with regard to being killed by gamma rays during the G1 portion of the cell cycle but, in the same research studies, showed "nearly normal resistance" to gamma-ray damage during the late S phase; and in the course of this research, XR-1's cell-cycle sensitivity was correlated with its inability to repair DNA double-strand breaks produced by ionizing radiation and restriction enzymes. In particular, in a study using somatic cell hybrids of XR-1 cells and human fibroblasts, Giaccia et al. (1989) showed that the XR-1 mutation was a recessive mutation; and in follow-up to this work, Giaccia et al. (1990) carried out further studies examining the XR-1 mutation (again using somatic cell hybrids formed between XR-1 and human fibroblasts) and were able to map the human complementing gene to chromosome 5 using chromosome-segregation analysis. Giaccia et al. tentatively assigned this human gene the name "XRCC4" (an abbreviation of "X-ray-complementing Chinese hamster gene 4") and determined that (a) the newly named XRCC4 gene biochemically restored the hamster defect to normal levels of resistance to gamma-ray radiation and bleomycin and (b) the XRCC4 gene restored the proficiency to repair DNA DSBs. Based on these findings, Giaccia et al. proposed that XRCC4―as a single gene―was responsible for the XR-1 phenotype.
References
Further reading
External links
DNA repair

Radiation damage

Radiation damage is the effect of ionizing radiation on physical objects including non-living structural materials. It can be either detrimental or beneficial for materials.
Radiobiology is the study of the action of ionizing radiation on living things, including the health effects of radiation in humans. High doses of ionizing radiation can cause damage to living tissue such as radiation burning and harmful mutations such as causing cells to become cancerous, and can lead to health problems such as radiation poisoning.
Causes
This radiation may take several forms:
Cosmic rays and subsequent energetic particles caused by their collision with the atmosphere and other materials.
Radioactive daughter products (radioisotopes) caused by the collision of cosmic rays with the atmosphere and other materials, including living tissues.
Energetic particle beams from a particle accelerator.
Energetic particles or electro-magnetic radiation (X-rays) released from collisions of such particles with a target, as in an X ray machine or incidentally in the use of a particle accelerator.
Particles or various types of rays released by radioactive decay of elements, which may be naturally occurring, created by accelerator collisions, or created in a nuclear reactor. They may be manufactured for therapeutic or industrial use or be released accidentally by nuclear accident, or released intentionally by a dirty bomb, or released into the atmosphere, ground, or ocean incidental to the explosion of a nuclear weapon for warfare or nuclear testing.
Effects on materials and devices
Radiation may affect materials and devices in deleterious and beneficial ways:
By causing the materials to become radioactive (mainly by neutron activation, or in presence of high-energy gamma radiation by photodisintegration).
By nuclear transmutation of the elements within the material including, for example, the production of Hydrogen and Helium which can in turn alter the mechanical properties of the materials and cause swelling and embrittlement.
By radiolysis (breaking chemical bonds) within the material, which can weaken it, cause it to swell, polymerize, promote corrosion, cause embrittlement, promote cracking or otherwise change its desirable mechanical, optical, or electronic properties. On the other hand, radiolysis can also be used to induce crosslinking of polymers, which can harden them or make them more resistant to wear.
By formation of reactive compounds, affecting other materials (e.g. ozone cracking by ozone formed by ionization of air).
By ionization, causing electrical breakdown, particularly in semiconductors employed in electronic equipment, with subsequent currents introducing operation errors or even permanently damaging the devices. Devices intended for high radiation environments such as the nuclear industry and extra atmospheric (space) applications may be made radiation hard to resist such effects through design, material selection, and fabrication methods.
By introducing dopants or defects by ion implantation to modify their electrical functionality in desired ways
To treat cancer by electron, gamma or ion irradiation or via boron neutron capture therapy.
Many of the radiation effects on materials are produced by collision cascades and covered by radiation chemistry.
Effects on metals and concrete
Radiation can have harmful effects on solid materials as it can degrade their properties so that they are no longer mechanically sound. This is of special concern as it can greatly affect their ability to perform in nuclear reactors and is the emphasis of radiation material science, which seeks to mitigate this danger.
As a result of their usage and exposure to radiation, the effects on metals and concrete are particular areas of study. For metals, exposure to radiation can result in radiation hardening which strengthens the material while subsequently embrittling it (lowers toughness, allowing brittle fracture to occur). This occurs as a result of knocking atoms out of their lattice sites through both the initial interaction as well as a resulting cascade of damage, leading to the creation of defects, dislocations (similar to work hardening and precipitation hardening). Grain boundary engineering through thermomechanical processing has been shown to mitigate these effects by changing the fracture mode from intergranular (occurring along grain boundaries) to transgranular. This increases the strength of the material, mitigating the embrittling effect of radiation. Radiation can also lead to segregation and diffusion of atoms within materials, leading to phase segregation and voids as well as enhancing the effects of stress corrosion cracking through changes in both the water chemistry and alloy microstructure.
As concrete is used extensively in the construction of nuclear power plants, where it provides structure as well as containing radiation, the effect of radiation on it is also of major interest. During its lifetime, concrete will change properties naturally due to its normal aging process, however nuclear exposure will lead to a loss of mechanical properties due to swelling of the concrete aggregates, and thus damaging the bulk material. For instance, the biological shield of the reactor is frequently composed of Portland cement, where dense aggregates are added in order to decrease the radiation flux through the shield. These aggregates can swell and make the shield mechanically unsound. Numerous studies have shown decreases in both compressive and tensile strength as well as elastic modulus of concrete at a dosage of around 10^19 neutrons per square centimeter. These trends were also shown to exist in reinforced concrete, a composite of both concrete and steel.
The knowledge gained from current analyses of materials in fission reactors in regards to the effects of temperature, irradiation dosage, materials compositions, and surface treatments will be helpful in the design of future fission reactors as well as the development of fusion reactors.
Solids subject to radiation are constantly being bombarded with high energy particles. The interaction between particles, and atoms in the lattice of the reactor materials causes displacement in the atoms. Over the course of sustained bombardment, some of the atoms do not come to rest at lattice sites, which results in the creation of defects. These defects cause changes in the microstructure of the material, and ultimately result in a number of radiation effects.
Radiation damage event
Interaction of an energetic incident particle with a lattice atom
Transfer of kinetic energy to the lattice atom, giving birth to a primary displacement atom
Displacement of the atom from its lattice site
Movement of the atom through the lattice, creating additional displaced atoms
Production of displacement cascade (collection of point defects created by primary displacement atom)
Termination of displacement atom as an interstitial
Radiation cross section
The probability of an interaction between two atoms is dependent on the thermal neutron cross section (measured in barn). Given a macroscopic cross section Σ = σN (where σ is the microscopic cross section and N is the density of atoms in the target), and a reaction rate R = ΦΣ (where Φ is the beam flux), the probability of interaction for a particle traversing a thin layer of thickness dx becomes P = Σ dx.
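As a rough numerical illustration of these relations, the following Python sketch computes the macroscopic cross section, reaction rate density, and interaction probability for a finite target thickness using the exponential attenuation law. All numerical values (microscopic cross section, atom density, flux, thickness) are hypothetical placeholders, not data from this article.

```python
# Rough numerical sketch of the cross-section relations described above.
# All numerical values are illustrative placeholders, not data from the text.

import math

BARN = 1e-24  # cm^2 per barn

sigma_micro = 2.6 * BARN   # microscopic cross section (cm^2), hypothetical
atom_density = 8.5e22      # atoms per cm^3 in the target, hypothetical
flux = 1e13                # neutron flux (neutrons / cm^2 / s), hypothetical
thickness = 0.5            # target thickness (cm), hypothetical

# Macroscopic cross section: Sigma = sigma * N  (units: 1/cm)
Sigma = sigma_micro * atom_density

# Reaction rate per unit volume: R = flux * Sigma  (reactions / cm^3 / s)
reaction_rate = flux * Sigma

# Probability that a neutron interacts while crossing the target,
# using the exponential attenuation law P = 1 - exp(-Sigma * x).
p_interaction = 1.0 - math.exp(-Sigma * thickness)

print(f"Macroscopic cross section: {Sigma:.4f} 1/cm")
print(f"Reaction rate density:     {reaction_rate:.3e} reactions/cm^3/s")
print(f"Interaction probability over {thickness} cm: {p_interaction:.3f}")
```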
Microstructural evolution under irradiation
Microstructural evolution is driven in the material by the accumulation of defects over a period of sustained radiation. This accumulation is limited by defect recombination, by clustering of defects, and by the annihilation of defects at sinks. Defects must thermally migrate to sinks, and in doing so often recombine, or arrive at sinks to recombine. In most cases, Drad = DvCv + DiCi >> Dtherm, that is to say, the motion of interstitials and vacancies throughout the lattice structure of a material as a result of radiation often outweighs the thermal diffusion of the same material.
One consequence of a flux of vacancies towards sinks is a corresponding flux of atoms away from the sink. If vacancies are not annihilated or recombined before collecting at sinks, they will form voids. At sufficiently high temperature, dependent on the material, these voids can fill with gases from the decomposition of the alloy, leading to swelling in the material. This is a tremendous issue for pressure sensitive or constrained materials that are under constant radiation bombardment, like pressurized water reactors. In many cases, the radiation flux is non-stoichiometric, which causes segregation within the alloy. This non-stoichiometric flux can result in significant change in local composition near grain boundaries, where the movement of atoms and dislocations is impeded. When this flux continues, solute enrichment at sinks can result in the precipitation of new phases.
Thermo-mechanical effects of irradiation
Hardening
Radiation hardening is the strengthening of the material in question by the introduction of defect clusters, impurity-defect cluster complexes, dislocation loops, dislocation lines, voids, bubbles and precipitates. For pressure vessels, the loss in ductility that occurs as a result of the increase in hardness is a particular concern.
Embrittlement
Radiation embrittlement results in a reduction of the energy to fracture, due to a reduction in strain hardening (as hardening is already occurring during irradiation). This is motivated for very similar reasons to those that cause radiation hardening; development of defect clusters, dislocations, voids, and precipitates. Variations in these parameters make the exact amount of embrittlement difficult to predict, but the generalized values for the measurement show predictable consistency.
Creep
Thermal creep in irradiated materials is negligible by comparison to the irradiation creep, which can exceed 10^−6 s^−1. The mechanism is not enhanced diffusivities, as would be intuitive from the elevated temperature, but rather interaction between the stress and the developing microstructure. Stress induces the nucleation of loops, and causes preferential absorption of interstitials at dislocations, which results in swelling. Swelling, in combination with the embrittlement and hardening, can have disastrous effects on any nuclear material under substantial pressure.
Growth
Growth in irradiated materials is caused by Diffusion Anisotropy Difference (DAD). This phenomenon frequently occurs in zirconium, graphite, and magnesium because of their naturally anisotropic crystal structures.
Conductivity
Thermal and electrical conductivity rely on the transport of energy through the electrons and the lattice of a material. Defects in the lattice and substitution of atoms via transmutation disturb these pathways, leading to a reduction in both types of conduction by radiation damage. The magnitude of reduction depends on the dominant type of conductivity in the material (electronic, to which the Wiedemann–Franz law applies, or phononic), on the details of the radiation damage, and is therefore still hard to predict.
Effects on polymers
Radiation damage can affect polymers that are found in nuclear reactors, medical devices, electronic packaging, and aerospace parts, as well as polymers that undergo sterilization or irradiation for use in food and pharmaceutical industries. Ionizing radiation can also be used to intentionally strengthen and modify the properties of polymers. Research in this area has focused on the three most common sources of radiation used for these applications, including gamma, electron beam, and x-ray radiation.
The mechanisms of radiation damage are different for polymers and metals, since dislocations and grain boundaries do not have real significance in a polymer. Instead, polymers deform via the movement and rearrangement of chains, which interact through Van der Waals forces and hydrogen bonding. In the presence of high energy, such as ionizing radiation, the covalent bonds that connect the polymer chains themselves can overcome their forces of attraction to form a pair of free radicals. These radicals then participate in a number of polymerization reactions that fall under the classification of radiation chemistry. Crosslinking describes the process through which carbon-centered radicals on different chains combine to form a network of crosslinks. In contrast, chain scission occurs when a carbon-centered radical on the polymer backbone reacts with another free radical, typically from oxygen in the atmosphere, causing a break in the main chain. Free radicals can also undergo reactions that graft new functional groups onto the backbone, or laminate two polymer sheets without an adhesive.
There is contradictory information about the expected effects of ionizing radiation for most polymers, since the conditions of radiation are so influential. For example, dose rate determines how fast free radicals are formed and whether they are able to diffuse through the material to recombine, or participate in chemical reactions. The ratio of crosslinking to chain scission is also affected by temperature, environment, presence of oxygen versus inert gases, radiation source (changing the penetration depth), and whether the polymer has been dissolved in an aqueous solution.
Crosslinking and chain scission have diverging effects on mechanical properties. Irradiated polymers typically undergo both types of reactions simultaneously, but not necessarily to the same extent. Crosslinks strengthen the polymer by preventing chain sliding, effectively leading to thermoset behavior. Crosslinks and branching lead to higher molecular weight and polydispersity. Thus, these polymers will generally have increased stiffness, tensile strength, and yield strength, and decreased solubility. Polyethylene is well known to experience improved mechanical properties as a result of crosslinking, including increased tensile strength and decreased elongation at break. Thus, it has “several advantageous applications in areas as diverse as rock bolts for mining, reinforcement of concrete, manufacture of light weight high strength ropes and high performance fabrics.”
In contrast, chain scission reactions will weaken the material by decreasing the average molecular weight of the chains, such that tensile and flexural strength decrease and solubility increases. Chain scission occurs primarily in the amorphous regions of the polymer. It can increase crystallinity in these regions by making it easier for the short chains to reassemble. Thus, it has been observed that crystallinity increases with dose, leading to a more brittle material on the macroscale. In addition, "gaseous products may be trapped in the polymer, and this can lead to subsequent crazing and cracking due to accumulated local stresses." An example of this phenomenon is 3D printed materials, which are often porous as a result of their printing configuration. Oxygen can diffuse into the pores and react with the surviving free radicals, leading to embrittlement. Some materials continue to weaken through aging, as the remaining free radicals react.
The resistance of these polymers to radiation damage can be improved by grafting or copolymerizing aromatic groups, which enhance stability and decrease reactivity, and by adding antioxidants and nanomaterials, which act as free radical scavengers. In addition, higher molecular weight polymers will be more resistant to radiation.
Effects on gases
Exposure to radiation causes chemical changes in gases. The least susceptible to damage are noble gases, where the major concern is the nuclear transmutation with follow-up chemical reactions of the nuclear reaction products.
High-intensity ionizing radiation in air can produce a visible ionized air glow of telltale bluish-purplish color. The glow can be observed e.g. during criticality accidents, around mushroom clouds shortly after a nuclear explosion, or inside of a damaged nuclear reactor like during the Chernobyl disaster.
Significant amounts of ozone can be produced. Even small amounts of ozone can cause ozone cracking in many polymers over time, in addition to the damage by the radiation itself.
Gas-filled radiation detectors
In some gaseous ionisation detectors, radiation damage to gases plays an important role in the device's ageing, especially in devices exposed for long periods to high intensity radiation, e.g. detectors for the Large Hadron Collider or the Geiger–Müller tube
Ionization processes require energy above 10 eV, while splitting covalent bonds in molecules and generating free radicals requires only 3-4 eV. The electrical discharges initiated by the ionization events by the particles result in plasma populated by a large amount of free radicals. The highly reactive free radicals can recombine back to original molecules, or initiate a chain of free-radical polymerization reactions with other molecules, yielding compounds with increasing molecular weight. These high molecular weight compounds then precipitate from the gaseous phase, forming conductive or non-conductive deposits on the electrodes and insulating surfaces of the detector and distorting its response. Gases containing hydrocarbon quenchers, e.g. argon–methane, are typically sensitive to aging by polymerization; addition of oxygen tends to lower the aging rates. Trace amounts of silicone oils, present from outgassing of silicone elastomers and especially from traces of silicone lubricants, tend to decompose and form deposits of silicon crystals on the surfaces. Gaseous mixtures of argon (or xenon) with carbon dioxide and optionally also with 2-3% of oxygen are highly tolerant to high radiation fluxes. The oxygen is added because a mixture of noble gas with carbon dioxide alone is too transparent to high-energy photons; ozone formed from the oxygen is a strong absorber of ultraviolet photons. Carbon tetrafluoride can be used as a component of the gas for high-rate detectors; the fluorine radicals produced during the operation however limit the choice of materials for the chambers and electrodes (e.g. gold electrodes are required, as the fluorine radicals attack metals, forming fluorides). Addition of carbon tetrafluoride can however eliminate the silicon deposits. Presence of hydrocarbons with carbon tetrafluoride leads to polymerization. A mixture of argon, carbon tetrafluoride, and carbon dioxide shows low aging in high hadron flux.
Effects on liquids
Like gases, liquids lack fixed internal structure; the effects of radiation are therefore mainly limited to radiolysis, altering the chemical composition of the liquids. As with gases, one of the primary mechanisms is formation of free radicals.
All liquids are subject to radiation damage, with a few exotic exceptions: for example, molten sodium, where there are no chemical bonds to be disrupted, and liquid hydrogen fluoride, which produces gaseous hydrogen and fluorine that spontaneously react back to hydrogen fluoride.
Effects on water
Water subjected to ionizing radiation forms free radicals of hydrogen and hydroxyl, which can recombine to form gaseous hydrogen, oxygen, hydrogen peroxide, hydroxyl radicals, and peroxide radicals. In living organisms, which are composed mostly of water, majority of the damage is caused by the reactive oxygen species, free radicals produced from water. The free radicals attack the biomolecules forming structures within the cells, causing oxidative stress (a cumulative damage which may be significant enough to cause the cell death, or may cause DNA damage possibly leading to cancer).
In cooling systems of nuclear reactors, the formation of free oxygen would promote corrosion and is counteracted by addition of hydrogen to the cooling water. The hydrogen is not consumed as for each molecule reacting with oxygen one molecule is liberated by radiolysis of water; the excess hydrogen just serves to shift the reaction equilibriums by providing the initial hydrogen radicals. The reducing environment in pressurized water reactors is less prone to buildup of oxidative species. The chemistry of boiling water reactor coolant is more complex, as the environment can be oxidizing. Most of the radiolytic activity occurs in the core of the reactor where the neutron flux is highest; the bulk of energy is deposited in water from fast neutrons and gamma radiation, the contribution of thermal neutrons is much lower. In air-free water, the concentration of hydrogen, oxygen, and hydrogen peroxide reaches steady state at about 200 Gy of radiation. In presence of dissolved oxygen, the reactions continue until the oxygen is consumed and the equilibrium is shifted. Neutron activation of water leads to buildup of low concentrations of nitrogen species; due to the oxidizing effects of the reactive oxygen species, these tend to be present in the form of nitrate anions. In reducing environments, ammonia may be formed. Ammonia ions may be however also subsequently oxidized to nitrates. Other species present in the coolant water are the oxidized corrosion products (e.g. chromates) and fission products (e.g. pertechnetate and periodate anions, uranyl and neptunyl cations). Absorption of neutrons in hydrogen nuclei leads to buildup of deuterium and tritium in the water.
Behavior of supercritical water, important for the supercritical water reactors, differs from the radiochemical behavior of liquid water and steam and is currently under investigation.
The magnitude of the effects of radiation on water is dependent on the type and energy of the radiation, namely its linear energy transfer. Gas-free water subjected to low-LET gamma rays yields almost no radiolysis products and sustains an equilibrium with their low concentration. High-LET alpha radiation produces larger amounts of radiolysis products. In the presence of dissolved oxygen, radiolysis always occurs. Dissolved hydrogen completely suppresses radiolysis by low-LET radiation, while radiolysis still occurs with high-LET radiation.
The presence of reactive oxygen species has a strongly disruptive effect on dissolved organic chemicals. This is exploited in groundwater remediation by electron beam treatment.
Countermeasures
Two main approaches to reduce radiation damage are reducing the amount of energy deposited in the sensitive material (e.g. by shielding, distance from the source, or spatial orientation), or modification of the material to be less sensitive to radiation damage (e.g. by adding antioxidants, stabilizers, or choosing a more suitable material).
In addition to the electronic device hardening mentioned above, some degree of protection may be obtained by shielding, usually with the interposition of high density materials (particularly lead, where space is critical, or concrete where space is available) between the radiation source and areas to be protected. For biological effects of substances such as radioactive iodine the ingestion of non-radioactive isotopes may substantially reduce the biological uptake of the radioactive form, and chelation therapy may be applied to accelerate the removal of radioactive materials formed from heavy metals from the body by natural processes.
For solid radiation damage
Solid countermeasures to radiation damage consist of three approaches. Firstly, saturating the matrix with oversized solutes. This acts to trap the swelling that occurs as a result of the creep and dislocation motion. They also act to help prevent diffusion, which restricts the ability of the material to undergo radiation induced segregation. Secondly, dispersing an oxide inside the matrix of the material. Dispersed oxide helps to prevent creep, and to mitigate swelling and reduce radiation induced segregation as well, by preventing dislocation motion and the formation and motion of interstitials. Finally, by engineering grain boundaries to be as small as possible, dislocation motion can be impeded, which prevents the embrittlement and hardening that result in material failure.
Effects on humans
Ionizing radiation is generally harmful and potentially lethal to living things but can have health benefits in radiation therapy for the treatment of cancer and thyrotoxicosis. Its most common impact is the induction of cancer with a latent period of years or decades after exposure. High doses can cause visually dramatic radiation burns, and/or rapid fatality through acute radiation syndrome. Controlled doses are used for medical imaging and radiotherapy.
Most adverse health effects of radiation exposure may be grouped in two general categories:
Deterministic effects (harmful tissue reactions) due in large part to the killing or malfunction of cells following high doses; and
Stochastic effects, i.e., cancer and heritable effects involving either cancer development in exposed individuals owing to mutation of somatic cells or heritable disease in their offspring owing to mutation of reproductive (germ) cells.
See also
Radiation material science
Stopping power (particle radiation)
Collision cascade
Ion track
Radiation hardening
Radiation Damage in Metals and Alloys
Further reading
References
Radiation effects

Ribosomal s6 kinase

In molecular biology, ribosomal s6 kinase (rsk) is a family of protein kinases involved in signal transduction. There are two subfamilies of rsk, p90rsk, also known as MAPK-activated protein kinase-1 (MAPKAP-K1), and p70rsk, also known as S6-H1 Kinase or simply S6 Kinase. There are three variants of p90rsk in humans, rsk 1-3. Rsks are serine/threonine kinases and are activated by the MAPK/ERK pathway. There are two known mammalian homologues of S6 Kinase: S6K1 and S6K2.
Substrates
Both p90 and p70 Rsk phosphorylate ribosomal protein s6, part of the translational machinery, but several other substrates have been identified, including other ribosomal proteins. Cytosolic substrates of p90rsk include protein phosphatase 1; glycogen synthase kinase 3 (GSK3); L1 CAM, a neural cell adhesion molecule; Son of Sevenless, the Ras exchange factor; and Myt1, an inhibitor of cdc2.
RSK phosphorylation of SOS1 (Son of Sevenless) at serines 1134 and 1161 creates a 14-3-3 docking site. This interaction of phospho-SOS1 and 14-3-3 negatively regulates the Ras-MAPK pathway.
p90rsk also regulates transcription factors including cAMP response element-binding protein (CREB); estrogen receptor-α (ERα); IκBα/NF-κB; and c-Fos.
Genomics
p90 Rsk-1 is located at 1p.
p90 Rsk-2 is located at Xp22.2 and contains 22 exons. Mutations in this gene have been associated with Coffin–Lowry syndrome, a disease characterised by severe psychomotor retardation and other developmental abnormalities.
p90 Rsk-3 is located at 6q27.
Proteomics
The main distinguishing feature between p90rsk and p70rsk is that the 90 kDa family contain two non-identical kinase domains, while the 70 kDa family contain only one kinase domain.
Research history
Rsk was first identified in Xenopus laevis eggs by Erikson and Maller in 1985.
References
External links
Proteins
Signal transduction

Ammonium perchlorate composite propellant

Ammonium perchlorate composite propellant (APCP) is a solid rocket propellant. It differs from many traditional solid rocket propellants such as black powder or zinc-sulfur, not only in chemical composition and overall performance but also by being cast into shape, as opposed to powder pressing as with black powder. This provides manufacturing regularity and repeatability, which are necessary requirements for use in the aerospace industry.
Uses
Ammonium perchlorate composite propellant is typically used for aerospace rocket propulsion where simplicity and reliability are desired and moderate specific impulses (depending on the composition and operating pressure) are adequate. Because of these performance attributes, APCP has been used in the Space Shuttle Solid Rocket Boosters, aircraft ejection seats, and specialty space exploration applications such as NASA's Mars Exploration Rover descent stage retrorockets. In addition, the high-power rocketry community regularly uses APCP in the form of commercially available propellant "reloads", as well as single-use motors. Experienced experimental and amateur rocketeers also often work with APCP, processing the APCP themselves.
Composition
Overview
Ammonium perchlorate composite propellant is a composite propellant, meaning that it has both fuel and oxidizer combined into a homogeneous mixture, in this case with a rubbery binder as part of the fuel. The propellant is most often composed of ammonium perchlorate (AP), an elastomer binder such as hydroxyl-terminated polybutadiene (HTPB) or polybutadiene acrylic acid acrylonitrile prepolymer (PBAN), powdered metal (typically aluminium), and various burn rate catalysts. In addition, curing additives induce elastomer binder cross-linking to solidify the propellant before use. The perchlorate serves as the oxidizer, while the binder and aluminium serve as the fuel. Burn rate catalysts determine how quickly the mixture burns. The resulting cured propellant is fairly elastic (rubbery), which also helps limit fracturing during accumulated damage (such as shipping, installing, cutting) and high acceleration applications such as rocketry. This includes the Space Shuttle missions, in which APCP was used for the two SRBs.
The composition of APCP can vary significantly depending on the application, intended burn characteristics, and constraints such as nozzle thermal limitations or specific impulse (Isp). Rough mass proportions (in high-performance configurations) tend to be about 70/15/15 AP/HTPB/Al, though fairly high performance "low-smoke" can have compositions of roughly 80/18/2 AP/HTPB/Al. While metal fuel is not required in APCP, most formulations include at least a few percent as a combustion stabilizer, propellant opacifier (to limit excessive infrared propellant preheating), and increase the temperature of the combustion gases (increasing Isp).
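As a minimal illustration of how such mass proportions translate into a batch recipe, the sketch below scales the rough 70/15/15 AP/HTPB/Al split quoted above to a hypothetical grain mass; the 2.5 kg total is an invented example value, not a figure from the text.

```python
# Scale rough APCP mass fractions to a hypothetical propellant grain.
# The 70/15/15 split is the approximate figure quoted in the text;
# the 2.5 kg grain mass is an arbitrary example value.

mass_fractions = {
    "ammonium perchlorate (AP)": 0.70,
    "HTPB binder": 0.15,
    "aluminium powder": 0.15,
}

grain_mass_kg = 2.5  # hypothetical total grain mass

assert abs(sum(mass_fractions.values()) - 1.0) < 1e-9, "fractions must sum to 1"

for component, fraction in mass_fractions.items():
    print(f"{component}: {fraction * grain_mass_kg:.3f} kg")
```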
Common components
Oxidizers
Ammonium perchlorate as the primary oxidizer
Metal-oxide catalysts as thermite oxidizers
High energy fuels
Aluminium (high performance, most common)
Magnesium (medium performance)
Zinc (low performance)
Low energy fuels acting as binders
Hydroxyl-terminated polybutadiene (HTPB)
Carboxyl-terminated polybutadiene (CTPB)
Polybutadiene acrylonitrile (PBAN)
Special considerations
Though increasing the ratio of metal-fuel to oxidizer up to the stoichiometric point increases the combustion temperature, the presence of an increasing molar fraction of metal oxides, particularly aluminium oxide (Al2O3) precipitating from the gaseous solution creates globules of solids or liquids that slow down the flow velocity as the mean molecular mass of the flow increases. In addition, the chemical composition of the gases changes, varying the effective heat capacity of the gas. Because of these phenomena, there exists an optimal non-stoichiometric composition for maximizing Isp of roughly 16% by mass, assuming the combustion reaction goes to completion inside the combustion chamber.
The combustion time of the aluminium particles in the hot combustion gas varies depending on aluminium particle size and shape. In small APCP motors with high aluminium content, the residence time of the combustion gases does not allow for full combustion of the aluminium and thus a substantial fraction of the aluminium is burned outside the combustion chamber, leading to decreased performance. This effect is often mitigated by reducing aluminium particle size, inducing turbulence (and therefore a long characteristic path length and residence time), and/or by reducing the aluminium content to ensure a combustion environment with a higher net oxidizing potential, ensuring more complete aluminium combustion. Aluminium combustion inside the motor is the rate-limiting pathway since the aluminium droplets (which remain liquid even at combustion-gas temperatures) limit the reaction to a heterogeneous globule interface, making the surface area to volume ratio an important factor in determining the combustion residence time and required combustion chamber size/length.
Particle size
The propellant particle size distribution has a profound impact on APCP rocket motor performance. Smaller AP and Al particles lead to higher combustion efficiency but also lead to increased linear burn rate. The burn rate is heavily dependent on mean AP particle size as the AP absorbs heat to decompose into a gas before it can oxidize the fuel components. This process may be a rate-limiting step in the overall combustion rate of APCP. The phenomenon can be explained by considering the heat-flux-to-mass ratio: As the particle radius increases the volume (and, therefore, mass and heat capacity) increases as the cube of the radius. However, the surface area increases as the square of the radius, which is roughly proportional to the heat flux into the particle. Therefore, a particle's rate of temperature rise is maximized when the particle size is minimized.
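To illustrate the surface-area-to-volume argument above, this short sketch compares the area-to-mass ratio of spherical particles of several radii; the radii and the density value are arbitrary illustrative numbers, not formulation data from the article.

```python
# Surface-area-to-mass ratio of spherical particles versus radius.
# Heat flux into a particle scales roughly with surface area, while the heat
# needed to raise its temperature scales with mass, so smaller particles
# heat up (and decompose) faster.  Radii and density are illustrative only.

import math

density = 1950.0  # kg/m^3, an illustrative oxidizer-like value

for radius_um in (400, 100, 30, 5):
    r = radius_um * 1e-6                           # radius in metres
    area = 4.0 * math.pi * r**2                    # surface area, m^2
    mass = density * (4.0 / 3.0) * math.pi * r**3  # mass, kg
    print(f"r = {radius_um:>3} um  ->  area/mass = {area / mass:.1f} m^2/kg")
```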
Common APCP formulations call for 30–400 μm AP particles (often spherical), as well as 2–50 μm Al particles (often spherical). Because of the size discrepancy between the AP and Al, Al will often take an interstitial position in a pseudo-lattice of AP particles.
Characteristics
Geometric
APCP deflagrates from the surface of exposed propellant in the combustion chamber. In this fashion, the geometry of the propellant inside the rocket motor plays an important role in the overall motor performance. As the surface of the propellant burns, the shape evolves (a subject of study in internal ballistics), most often changing the propellant surface area exposed to the combustion gases. The mass flux (kg/s) [and therefore pressure] of combustion gases generated is a function of the instantaneous surface area (m2), propellant density (kg/m3), and linear burn rate (m/s): ṁ = ρ × As × r.
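A minimal sketch of this mass-flux relation, with invented values for the burning area, propellant density, and burn rate (none are taken from the article):

```python
# Mass flow of combustion gas from a burning propellant surface:
#   mdot = density * burning_area * linear_burn_rate
# All numbers are illustrative placeholders, not motor data from the text.

density = 1750.0        # propellant density, kg/m^3 (hypothetical)
burning_area = 0.030    # instantaneous burning surface area, m^2 (hypothetical)
burn_rate = 0.005       # linear burn rate, m/s (hypothetical)

mdot = density * burning_area * burn_rate  # kg/s
print(f"Gas generation rate: {mdot:.3f} kg/s")
```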
Several geometric configurations are often used depending on the application and desired thrust curve:
Circular bore: if in BATES configuration, produces progressive-regressive thrust curve.
End burner: propellant burns from one axial end to another producing steady long burn, though has thermal difficulties, CG shift.
C-slot: propellant with large wedge cut out of the side (along axial direction), producing fairly long regressive thrust, though has thermal difficulties and asymmetric CG characteristics.
Moon burner: off-center circular bore produces progressive-regressive long burn though has slight asymmetric CG characteristics.
Finocyl: usually a 5 or 6 legged star-like shape that can produce very level thrust, with a bit quicker burn than circular bore due to increased surface area.
Burn rate
While the surface area can be easily tailored by careful geometric design of the propellant, the burn rate is dependent on several subtle factors:
Propellant chemical composition.
AP, Al, additive particle sizes.
Combustion pressure.
Heat transfer characteristics.
Erosive burning (high-velocity flow moving past the propellant).
Initial temperature of propellant.
In summary, however, most formulations have a burn rate between 1–3 mm/s at STP and 6–12 mm/s at 68 atm. The burn characteristics (such as linear burn rate) are often determined prior to rocket motor firing using a strand burner test. This test allows the APCP manufacturer to characterize the burn rate as a function of pressure. Empirically, APCP adheres fairly well to the following power-function model: r = a·Pc^n, where r is the linear burn rate, a is the burn rate coefficient, Pc is the chamber pressure, and n is the pressure exponent.
It is worth noting that typically for APCP, n is 0.3–0.5 indicating that APCP is sub-critically pressure sensitive. That is, if surface area were maintained constant during a burn the combustion reaction would not run away to (theoretically) infinite as the pressure would reach an internal equilibrium. This isn't to say that APCP cannot cause an explosion, just that it will not detonate. Thus, any explosion would be caused by the pressure surpassing the burst pressure of the container (rocket motor).
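The sketch below evaluates this power-law model at a few chamber pressures. The coefficient a and exponent n are arbitrary example values (n chosen inside the 0.3–0.5 range quoted above); real values come from strand-burner characterisation of a specific formulation.

```python
# Burn rate power law r = a * P**n evaluated at several chamber pressures.
# 'a' and 'n' are arbitrary illustrative values, not data for any real
# propellant formulation.

a = 2.0   # burn-rate coefficient, mm/s at 1 atm (hypothetical)
n = 0.4   # pressure exponent, within the typical APCP range of 0.3-0.5

def burn_rate_mm_per_s(pressure_atm: float) -> float:
    """Linear burn rate in mm/s for a given chamber pressure in atm."""
    return a * pressure_atm ** n

for p in (1, 10, 34, 68):
    print(f"P = {p:>3} atm -> r = {burn_rate_mm_per_s(p):.1f} mm/s")
```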
Model/high-power rocketry applications
Commercial APCP rocket engines usually come in the form of reloadable motor systems (RMS) and fully assembled single-use rocket motors. For RMS, the APCP "grains" (cylinders of propellant) are loaded into the reusable motor casing along with a sequence of insulator disks and o-rings and a (graphite or glass-filled phenolic resin) nozzle. The motor casing and closures are typically bought separately from the motor manufacturer and are often precision-machined from aluminium. The assembled RMS contains both reusable (typically metal) and disposable components.
The major APCP suppliers for hobby use are:
Aerotech Consumer Aerospace
Cesaroni Technology
Loki Research
Gorilla Rocket Motors
To achieve different visual effects and flight characteristics, hobby APCP suppliers offer a variety of different characteristic propellant types. These can range from fast-burning with little smoke and blue flame to classic white smoke and white flame. In addition, colored formulations are available to display reds, greens, blues, and even black smoke.
In the medium- and high-power rocket applications, APCP has largely replaced black powder as a rocket propellant. Compacted black powder slugs become prone to fracture in larger applications, which can result in catastrophic failure in rocket vehicles. APCP's elastic material properties make it less vulnerable to fracture from accidental shock or high-acceleration flights. Due to these attributes, widespread adoption of APCP and related propellant types in the hobby has significantly enhanced the safety of rocketry.
Environmental and other concerns
The exhaust from APCP solid rocket motors contains mostly water, carbon dioxide, hydrogen chloride, and a metal oxide (typically aluminium oxide). The hydrogen chloride can easily dissolve in water and create corrosive hydrochloric acid. The environmental fate of hydrogen chloride is not well documented. The hydrochloric acid component of APCP exhaust leads to the condensation of atmospheric moisture in the plume and this enhances the visible signature of the contrail. This visible signature, among other reasons, led to research in cleaner burning propellants with no visible signatures. Minimum signature propellants contain primarily nitrogen-rich organic molecules (e.g., ammonium dinitramide) and depending on their oxidizer source can be hotter burning than APCP composite propellants.
Regulation and legality
In the United States, APCP for hobby use is regulated indirectly by two non-government agencies: the National Association of Rocketry (NAR), and the Tripoli Rocketry Association (TRA). Both agencies set forth rules regarding the impulse classification of rocket motors and the level of certification required by rocketeers in order to purchase certain impulse (size) motors. The NAR and TRA require motor manufacturers to certify their motors for distribution to vendors and ultimately hobbyists. The vendor is charged with the responsibility (by the NAR and TRA) to check hobbyists for high-power rocket certification before a sale can be made. The amount of APCP that can be purchased (in the form of a rocket motor reload) correlates to the impulse classification, and therefore the quantity of APCP purchasable by a hobbyist (in any single reload kit) is regulated by the NAR and TRA.
The overarching legality concerning the implementation of APCP in rocket motors is outlined in NFPA 1125. Use of APCP outside hobby use is regulated by state and municipal fire codes. On March 16, 2009, it was ruled that APCP is not an explosive and that manufacture and use of APCP no longer requires a license or permit from the ATF.
Footnotes
References
Rocket Propulsion Elements. Sutton, George P.
Amateur Experimental Solid Propellants by Richard Nakka
Solid Propellant Burn Rate by Richard Nakka
Intro to Solid Propulsion by Graham Orr, Harvey Mudd College Experimental Engineering
BATFE Lawsuit Documents, 2002–Present, Tripoli Rocketry Association
Rocketry
Model rocketry
Rocket propellants
Solid fuels

Pan evaporation

Pan evaporation is a measurement that combines or integrates the effects of several climate elements: temperature, humidity, rainfall, drought dispersion, solar radiation, and wind. Evaporation is greatest on hot, windy, dry, sunny days; and is greatly reduced when clouds block the sun and when air is cool, calm, and humid. Pan evaporation measurements enable farmers and ranchers to understand how much water their crops will need.
Evaporation pan
An evaporation pan is used to hold water during observations for the determination of the quantity of evaporation at a given location. Such pans are of varying sizes and shapes, the most commonly used being circular or square. The best known of the pans are the "Class A" evaporation pan and the "Sunken Colorado Pan". In Europe, India and South Africa, a Symon's Pan (or sometimes Symon's Tank) is used. Often the evaporation pans are automated with water level sensors and a small weather station is located nearby.
Standard methods
A variety of evaporation pans are used throughout the world. There are formulas for converting from one type of pan to another and to measures representative of the environment. Also, research has been done about the installation practices of evaporation pans so that they can make more reliable and repeatable measurements.
Class A evaporation pan
In the United States, the National Weather Service has standardized its measurements on the Class A evaporation pan, a cylinder with a diameter of 47.5 in (120.7 cm) that has a depth of 10 in (25 cm). The pan rests on a carefully leveled, wooden base and is often enclosed by a chain link fence to prevent animals drinking from it. Evaporation is measured daily as the depth of water (in inches) evaporates from the pan. The measurement day begins with the pan filled to exactly two inches (5 cm) from the pan top. At the end of 24 hours, the amount of water to refill the pan to exactly two inches from its top is measured.
If precipitation occurs in the 24-hour period, it is taken into account in calculating the evaporation. Sometimes precipitation is greater than evaporation, and measured increments of water must be dipped from the pan. Evaporation cannot be measured in a Class A pan when the pan's water surface is frozen.
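One way to do the daily bookkeeping described above is sketched below: the evaporation is the drop in water level plus any rain that fell into the pan over the same period. The readings are invented example values.

```python
# Daily Class A pan evaporation from water-level readings, accounting for rain.
# All readings are invented example values.

start_depth_in = 8.00   # water depth at the start of the 24 h period (inches)
end_depth_in = 7.67     # water depth at the end of the period (inches)
rainfall_in = 0.12      # rain recorded over the same period (inches)

# Water lost to evaporation = drop in level + rain that was added to the pan.
evaporation_in = (start_depth_in - end_depth_in) + rainfall_in
print(f"Pan evaporation: {evaporation_in:.2f} in/day")
```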
The Class A evaporation pan is of limited use on days with rainfall events of >30 mm (203 mm rain gauge) unless it is emptied more than once per 24 hours. Analysis of daily rainfall and evaporation readings in areas with regular heavy rainfall events shows that, almost without fail, on days with rainfall in excess of 30 mm (203 mm rain gauge) the recorded daily evaporation is spuriously higher than on other days in the same month when conditions were more conducive to evaporation.
The most common and obvious error is in daily rainfall events of >55mm (203mm rain gauge) where the Class A Evaporation pan will likely overflow.
The less obvious, and therefore more concerning, is the influence of heavy or intense rainfall causing spuriously high daily evaporation totals without obvious overflow.
Sunken Colorado pan
The sunken Colorado pan is square, 0.92 m (3 ft) on a side and 0.46 m (18 in.) deep and made of unpainted galvanized iron. As the name suggests, it is buried in the ground to within about 5 cm (2 in.) of its rim. Evaporation from a Sunken Colorado Pan can be compared with a Class A pan using conversion constants. The pan coefficient, on an annual basis, is about 0.8.
Symons pan or tank
The Symons pan or tank is a standard instrument of the UK Met Office. It is a steel container 1.83 m (6 ft) on a side and 0.61 m (2 ft) deep, sunk into the ground so that only a small rim remains above the surface, and is painted black internally. Its evaporation rate is lower than that of the Class A pan and conversion factors must be used.
Decreasing trend of pan evaporation
Over the last 50 or so years, pan evaporation has been carefully monitored. For decades, pan evaporation measurements were not analyzed critically for long term trends. But in the 1990s scientists reported that the rate of evaporation was falling. According to data, the downward trend had been observed all over the world except in a few places where it has increased.
It is currently theorized that, all other things being equal, as the global climate warms evaporation would increase proportionately and as a result, the hydrological cycle in its most general sense is bound to accelerate. The downward trend of pan evaporation has since also been linked to a phenomenon called global dimming. In 2005 Wild et al. and Pinker et al. found that the "dimming" trend had reversed since about 1990.
Other theories suggest that measurements have not taken the local environment into account. Since the local moisture level has increased in the local terrain, less water evaporates from the pan. This leads to false measurements and must be compensated for in the data analysis. Models accounting for additional local terrain moisture match global estimates.
In a different view, an analysis of pan records from 154 instruments shows no coherent pattern of statistically significant trends, with 38% decreasing, 42% showing no change and 20% increasing. Changes in the local environment are implicated: increasing tree density near the pans elevates surface friction and slows local wind runs, reducing pan evaporation. In this interpretation, the evaporation paradox is a result of ongoing changes in the environments near the pans.
Lake evaporation vs. pan evaporation
Pan evaporation is used to estimate the evaporation from lakes. There is a correlation between lake evaporation and pan evaporation. Evaporation from a natural body of water is usually at a lower rate because the body of water does not have metal sides that get hot with the sun, and while light penetration in a pan is essentially uniform, light penetration in natural bodies of water will decrease as depth increases. Most textbooks suggest multiplying the pan evaporation by 0.75 to correct for this.
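As an illustration of the textbook correction just described, the following sketch converts a daily Class A pan reading into an estimated lake evaporation. The 0.75 pan coefficient follows the value quoted above; the daily reading used in the example is a made-up figure.

```python
# Estimating lake evaporation from a Class A pan reading using a pan coefficient.
# The 0.75 coefficient is the common textbook value quoted above; the daily
# pan reading below is a hypothetical example value.

PAN_COEFFICIENT = 0.75

def lake_evaporation_mm(pan_evaporation_mm: float, coefficient: float = PAN_COEFFICIENT) -> float:
    """Convert a pan evaporation depth (mm/day) to an estimated lake evaporation depth (mm/day)."""
    return pan_evaporation_mm * coefficient

if __name__ == "__main__":
    daily_pan_reading_mm = 8.0  # hypothetical daily pan evaporation
    print(f"Estimated lake evaporation: {lake_evaporation_mm(daily_pan_reading_mm):.1f} mm/day")
```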
Relationship to hydrological cycle
"It is generally agreed that the evaporation from pans has been decreasing for the past half century over many regions of the Earth. However, the significance of this negative trend, as regards terrestrial evaporation, is still somewhat controversial, and its implications for the global hydrologic cycle remain unclear. The controversy stems from the alternative views that these evaporative changes resulted, either from global radiative dimming, or from the complementary relationship between pan and terrestrial evaporation. Actually, these factors are not mutually exclusive but act concurrently."
See also
Atmometer (evaporimeter)
Hydrology
External links
References
Environmental engineering
Meteorological instrumentation and equipment | Pan evaporation | [
"Chemistry",
"Technology",
"Engineering"
] | 1,450 | [
"Meteorological instrumentation and equipment",
"Chemical engineering",
"Measuring instruments",
"Civil engineering",
"Environmental engineering"
] |
19,006,537 | https://en.wikipedia.org/wiki/Malibu%20Hydro | Malibu Hydro System was designed to provide electricity to the Malibu Club in Canada. This hydro system starts from a high alpine lake where water is diverted from the lake through a steel penstock to a power house nearly below near the shore of Jervis Inlet. The flow of water turns a pelton wheel which is attached to a generator to create electricity. The electricity is then transmitted to camp by a submarine cable running under Jervis Inlet at a high voltage to reduce losses. Power is then distributed throughout camp on the existing and upgraded electrical system.
Sources
The hydro turbine is fed by a year-round creek called McCannel Creek, which is directly across Jervis Inlet from Malibu. The creek's source is the high-elevation McCannel Lake. At the outlet of the lake, a dam was built to maintain the lake level and control the flow of the creek.
Weir (Item A)
A small weir-type dam was built at the lake outlet and, by limiting the discharge into the creek, aids in maintaining the lake level. The weir has a large pipe and valve in its base to pass the minimum flow required to maintain the creek. This is in addition to the water that is spilled to create power.
Intake (Item B)
Water regulated at the lake flows down the creek from the lake outlet to the intake. Here a second weir was built across the creek, forming a large deep pool from which the penstock draws water.
Power House (Item C & D)
The power house contains the turbine, generator, and the equipment needed to control and distribute the electricity for Malibu. It is located a short distance above the beach.
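For orientation, the electrical output of a small high-head scheme of this kind can be estimated from the gross head and the penstock flow. Since the actual head, flow and efficiency figures are not given in this text, the numbers in the sketch below are placeholders only.

```python
# Rough hydroelectric power estimate: P = rho * g * Q * H * efficiency.
# All numeric inputs are hypothetical placeholders, not Malibu Hydro's actual figures.

RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def hydro_power_kw(flow_m3_s: float, head_m: float, efficiency: float = 0.8) -> float:
    """Return electrical output in kW for a given flow (m^3/s) and gross head (m)."""
    return RHO_WATER * G * flow_m3_s * head_m * efficiency / 1000.0

if __name__ == "__main__":
    print(f"Example output: {hydro_power_kw(flow_m3_s=0.25, head_m=300.0):.0f} kW")
```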
Power Line (Item E)
Transformers at the hydro generator raise 600 volts to 15,000 volts for transmission. Power is transmitted across Jervis Inlet, a distance of two miles (3 km), via an underwater power line consisting of three parallel conductors. On the Malibu side of the inlet, a transformer steps the 15,000 volts down to 120 and 208 volts for use at the camp.
History
Installation began in 2005.
References
Hydroelectric power stations in British Columbia
Turbines
Appropriate technology
Renewable energy in Canada | Malibu Hydro | [
"Chemistry"
] | 463 | [
"Turbines",
"Turbomachinery"
] |
12,346,323 | https://en.wikipedia.org/wiki/Resistivity%20logging | Resistivity logging is a method of well logging that works by characterizing the rock or sediment in a borehole by measuring its electrical resistivity. Resistivity is a fundamental material property which represents how strongly a material opposes the flow of electric current. In these logs, resistivity is measured using four electrical probes to eliminate the resistance of the contact leads. The log must run in holes containing electrically conductive mud or water, i.e., with enough ions present in the drilling fluid.
In the borehole fluids, the electrical charge carriers are only the ions (cations and anions) present in aqueous solution. In the absence of dissolved ions, water is a very poor electrical conductor: pure water is only very slightly dissociated by self-ionisation (at 25 °C, pKw = 14, so at pH = 7, [H+] = [OH−] = 10^−7 mol/L), and thus water itself does not significantly contribute to conduction in an aqueous solution. The resistivity of pure water at 25 °C is 18 MΩ·cm; equivalently, its conductivity (C = 1/R) is 0.055 μS/cm. The electrical charge carriers in aqueous solution are ions, not electrons as in metals. The most common minerals, such as quartz (SiO2) or calcite (CaCO3), found respectively in siliceous and carbonate formations, are electrical insulators. In mineral exploration, some minerals are semiconductors, e.g. hematite (Fe2O3), magnetite (Fe3O4), and chalcopyrite (CuFeS2), and when present in sufficiently large quantities in the ore body they can affect the resistivity of the host formation. However, in most common cases (oil and gas drilling, water-well drilling), the solid mineral phases do not contribute to the electrical conductivity: electricity is carried by ions dissolved in the pore water or in the water filling the cracks of hard rocks. If the pores of the rock are not saturated with water but also contain gases such as air above the water table, or gaseous hydrocarbons like methane and light alkanes, the conductivity also drops and the resistivity increases.
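As a small numerical check of the pure-water figures quoted above, the sketch below converts a resistivity into the corresponding conductivity using C = 1/R; only the unit handling is new here.

```python
# Resistivity-to-conductivity conversion (C = 1/R), reproducing the pure-water
# figure quoted above (18 MOhm*cm corresponds to about 0.055 microsiemens/cm).

def conductivity_uS_per_cm(resistivity_ohm_cm: float) -> float:
    """Convert a resistivity in ohm*cm to a conductivity in microsiemens/cm."""
    return 1.0e6 / resistivity_ohm_cm

if __name__ == "__main__":
    pure_water_resistivity_ohm_cm = 18.0e6  # 18 MOhm*cm at 25 degrees C
    print(f"{conductivity_uS_per_cm(pure_water_resistivity_ohm_cm):.3f} uS/cm")  # ~0.056
```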
Resistivity logging is used in mineral exploration (for example, exploration for iron and copper ore bodies), geological exploration (deep geological disposal, geothermal wells), and water-well drilling. It is an indispensable tool for formation evaluation in oil- and gas-well drilling. As mentioned above, most rock materials are essentially electrical insulators, while their enclosed fluids are electrical conductors. In contrast to aqueous solutions containing conducting ions, hydrocarbon fluids are almost infinitely resistive because they contain no electrical charge carriers: hydrocarbons do not dissociate into ions because of the covalent nature of their chemical bonds. When a formation is porous and contains salty water, the overall resistivity will be low. When the formation contains hydrocarbons, or has a very low porosity, its resistivity will be high. High resistivity values may therefore indicate a hydrocarbon-bearing formation.
In geological exploration and water-well drilling, resistivity measurements also allow the contrast between clay aquitards and sandy aquifers to be distinguished, because of their differences in porosity and pore-water conductivity and because of the cations present in the interlayer space of clay minerals, whose external electrical double layer is also much more developed than that of quartz.
While drilling, drilling fluids usually invade the formation, and the changes in resistivity caused by this invasion are measured by the tool in the invaded zone. For this reason, several resistivity tools with different investigation lengths are used to measure the formation resistivity. If water-based mud is used and oil is displaced, "deeper" resistivity logs (those of the "intact zone" sufficiently far from the disturbed zone around the borehole) will show lower conductivity than the invaded zone. If oil-based mud is used and water is displaced, deeper logs will show higher conductivity than the invaded zone. This provides not only an indication of the fluids present, but also, at least qualitatively, an indication of whether the formation is permeable.
See also
Electric logs (in: Formation evaluation)
References
Further reading
Apparao, A. (1997). Developments in geoelectrical methods. Taylor & Francis.
Well logging | Resistivity logging | [
"Engineering"
] | 904 | [
"Petroleum engineering",
"Well logging"
] |
12,347,137 | https://en.wikipedia.org/wiki/Sonic%20logging | Sonic logging is a well logging tool that provides a formation’s interval transit time, designated as , which is a measure of a how fast elastic seismic compressional and shear waves travel through the formations. Geologically, this capacity varies with many things including lithology and rock textures, most notably decreasing with an increasing effective porosity and increasing with an increasing effective confining stress. This means that a sonic log can be used to calculate the porosity, confining stress, or pore pressure of a formation if the seismic velocity of the rock matrix, , and pore fluid, , are known, which is very useful for hydrocarbon exploration.
Process of sonic logging
The velocity is calculated by measuring the travel time from the piezoelectric transmitter to the receiver, normally with the units microsecond per foot (a measure of slowness). To compensate for the variations in the drilling mud thickness, there are actually two receivers, one near and one far. This is because the travel time within the drilling mud will be common for both, so the travel time within the formation is given by:
Δt = tfar − tnear,
where tfar = travel time to the far receiver and tnear = travel time to the near receiver.
If it is necessary to compensate for tool tilt and variations in the borehole width then both up-down and down-up arrays can be used and an average can be calculated. Overall this gives a sonic log that can be made up of 1 or 2 pulse generators and 2 or 4 detectors, all located in single unit called a “sonde”, which is lowered down the well.
An additional way in which the sonic log tool can be altered is increasing or decreasing the separation between the source and receivers. This gives deeper penetration and overcomes the problem of low velocity zones posed by borehole wall damage.
Cycle skipping
The returning signal is a wavetrain and not a sharp pulse, so the detectors are only activated at a certain signal threshold. Sometimes, both detectors won’t be activated by the same peak (or trough) and the next peak (or trough) wave will activate one of them instead. This type of error is called cycle skipping and is easily identified because the time difference is equal to the time interval between successive pulse cycles.
Calculating porosity
Many relationships between travel time and porosity have been proposed, the most commonly accepted is the Wyllie time-average equation. The equation basically holds that the total travel time recorded on the log is the sum of the time the sonic wave spends travelling the solid part of the rock, called the rock matrix and the time spent travelling through the fluids in the pores. This equation is empirical and makes no allowance for the structure of the rock matrix or the connectivity of the pore spaces so extra corrections can often be added to it. The Wyllie time-average equation is:
1/V = φ/Vf + (1 − φ)/Vm,
where V = seismic velocity of the formation; Vf = seismic velocity of the pore fluid; Vm = seismic velocity of the rock matrix; φ = porosity.
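A minimal sketch of the porosity calculation implied by the Wyllie time-average equation follows; the velocities used in the example are generic formation, matrix and brine values chosen only for illustration.

```python
# Sonic (Wyllie time-average) porosity: 1/V = phi/Vf + (1 - phi)/Vm, rearranged for phi.
# The example velocities are generic illustration values, not log readings.

def wyllie_porosity(v_formation: float, v_matrix: float, v_fluid: float) -> float:
    """Porosity from formation, matrix and pore-fluid velocities (same units, e.g. m/s)."""
    return (1.0 / v_formation - 1.0 / v_matrix) / (1.0 / v_fluid - 1.0 / v_matrix)

if __name__ == "__main__":
    phi = wyllie_porosity(v_formation=3500.0, v_matrix=5500.0, v_fluid=1500.0)
    print(f"Estimated porosity: {phi:.2f}")  # roughly 0.21 for these values
```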
Accuracy
The accuracy of modern compressional and shear sonic logs obtained with wireline logging tools is now well established to be within 2% for boreholes that are less than 14 inches in diameter and within 5% for larger boreholes. Some suggest that the frequent disagreement between regular- and long-spaced log measurements means these logs are not accurate; that is not the case. Quite often there is drilling-induced damage or chemical alteration around the borehole that causes the near-borehole formation to be up to 15% slower than the deeper formation. This "gradient" in slowness can extend 2–3 feet. The long-spaced measurements (7.5–13.5 ft) always measure the deeper, unaltered formation velocity and should always be used instead of the shorter-offset logs. Discrepancies between seismic data and sonic log data are due to upscaling and anisotropy considerations, which can be handled by using Backus averaging on the sonic log data.
Some suggest that to investigate how the varying size of a borehole has affected a sonic log, the results can be plotted against those of a caliper log. However, this is usually prone to leading one to the wrong conclusions because the more compliant formations that are prone to washouts or diameter enlargements also inherently have "slower" velocities.
Calibrated sonic log
To improve the tie between well data and seismic data a "check-shot" survey is often used to generate a calibrated sonic log. A geophone, or array of geophones is lowered down the borehole, with a seismic source located at the surface. The seismic source is fired with the geophone(s) at a series of different depths, with the interval transit times being recorded. This is often done during the acquisition of a vertical seismic profile.
Use in mineral exploration
Sonic logs are also used in mineral exploration, especially exploration for iron and potassium.
See also
Well logging
References
Well logging
Petroleum geology | Sonic logging | [
"Chemistry",
"Engineering"
] | 1,004 | [
"Petroleum",
"Petroleum geology",
"Petroleum engineering",
"Well logging"
] |
12,347,279 | https://en.wikipedia.org/wiki/Density%20logging | Density logging is a well logging tool that can provide a continuous record of a formation's bulk density along the length of a borehole. In geology, bulk density is a function of the density of the minerals forming a rock (i.e. matrix) and the fluid enclosed in the pore spaces. This is one of three well logging tools that are commonly used to calculate porosity, the other two being sonic logging and neutron porosity logging
History & Principle
The tool was initially developed in the 1950s and became widely utilized across the hydrocarbon industry by the 1960s. A type of active nuclear tool, a radioactive source and detector are lowered down the borehole and the source emits medium-energy gamma rays into the formation. Radioactive sources are typically a directional Cs-137 source. These gamma rays interact with electrons in the formation and are scattered in an interaction known as Compton scattering. The number of scattered gamma rays that reach the detector, placed at a set distance from the emitter, is related to the formation's electron density, which itself is related to the formation's bulk density (ρb) via
ρe = ρb (2Z/A),
where Z is the atomic number and A is the molecular weight of the compound. For most elements Z/A is about 1/2 (except for hydrogen, where this ratio is about 1). The electron density (ρe) in g/cm3 determines the response of the density tool.
General tool design
The tool itself initially consisted of a radioactive source and a single detector, but this configuration is susceptible to the effects of the drilling fluid. In a similar way to how the sonic logging tool was improved to compensate for borehole effects, density logging now conventionally uses 2 or more detectors. In a 2 detector configuration, the short-spaced detector has a much shallower depth of investigation than the long-spaced detector so it is used to measure the effect that the drilling fluid has on the gamma ray detection. This result is then used to correct the long-spaced detector.
Inferring porosity from bulk density
Assuming that the measured bulk density (ρb) depends only on the matrix density (ρma) and the fluid density (ρfl), and that these values are known along the wellbore, the porosity (φ) can be inferred from the formula
φ = (ρma − ρb) / (ρma − ρfl)
Common values of matrix density (in g/cm3) are:
Quartz sand - 2.65
Limestone - 2.71
Dolomite - 2.87
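A short sketch of the density-porosity calculation using the formula and matrix densities listed above; the bulk-density reading in the example is hypothetical, and fresh water (1 g/cm3) is assumed as the pore fluid.

```python
# Density porosity: phi = (rho_matrix - rho_bulk) / (rho_matrix - rho_fluid).
# Matrix densities follow the list above; the bulk-density reading is a hypothetical example.

MATRIX_DENSITY = {"quartz sand": 2.65, "limestone": 2.71, "dolomite": 2.87}  # g/cm^3

def density_porosity(rho_bulk: float, rho_matrix: float, rho_fluid: float = 1.0) -> float:
    """Porosity from a bulk-density log reading (all densities in g/cm^3)."""
    return (rho_matrix - rho_bulk) / (rho_matrix - rho_fluid)

if __name__ == "__main__":
    phi = density_porosity(rho_bulk=2.40, rho_matrix=MATRIX_DENSITY["limestone"])
    print(f"Porosity: {phi:.2f}")  # about 0.18 for a fresh-water-filled limestone
```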
This method is the most reliable porosity indicator for sandstones and limestones because their matrix density is well known. On the other hand, the density of clay-rich rocks such as mudstone is highly variable, depending on the depositional environment, overburden pressure, type of clay mineral and many other factors. It can vary from 2.1 (montmorillonite) to 2.76 (chlorite), so this tool is not as useful for determining their porosity. A fluid bulk density of 1 g/cm3 is appropriate where the water is fresh, but highly saline water has a slightly higher density, and lower values should be used for hydrocarbon reservoirs, depending on the hydrocarbon density and residual saturation.
In some applications hydrocarbons are indicated by the presence of abnormally high log porosities.
See also
Sonic logging
References
Well logging
Drilling technology | Density logging | [
"Engineering"
] | 650 | [
"Petroleum engineering",
"Well logging"
] |
3,259,020 | https://en.wikipedia.org/wiki/Programmable%20interval%20timer | In computing and in embedded systems, a programmable interval timer (PIT) is a counter that generates an output signal when it reaches a programmed count. The output signal may trigger an interrupt.
Common features
PITs may be one-shot or periodic. One-shot timers will signal only once and then stop counting. Periodic timers signal every time they reach a specific value and then restart, thus producing a signal at periodic intervals. Periodic timers are typically used to invoke activities that must be performed at regular intervals.
Counters are usually programmed with fixed intervals that determine how long the counter will count before it will output a signal.
IBM PC compatible
The Intel 8253 PIT was the original timing device used on IBM PC compatibles. It used a 1.193182 MHz clock signal (one third of the color burst frequency used by NTSC, one twelfth of the system clock crystal oscillator, therefore one quarter of the 4.77 MHz CPU clock) and contains three timers. Timer 0 is used by Microsoft Windows (uniprocessor) and Linux as a system timer, timer 1 was historically used for dynamic random access memory refreshes and timer 2 for the PC speaker.
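Because the 8253/8254 simply divides its 1.193182 MHz input clock by a programmed 16-bit count, the reload value for a desired periodic rate can be computed as in the sketch below; the 100 Hz target is an arbitrary example.

```python
# Computing an 8253/8254 PIT reload value for a desired periodic output rate.
# The 100 Hz target frequency is an arbitrary example.

PIT_INPUT_HZ = 1_193_182  # nominal PIT input clock in Hz

def pit_divisor(target_hz: float) -> int:
    """Return the 16-bit reload count that best approximates the requested rate."""
    divisor = round(PIT_INPUT_HZ / target_hz)
    return max(1, min(divisor, 65536))  # a programmed count of 0 conventionally means 65536

if __name__ == "__main__":
    d = pit_divisor(100.0)
    print(f"divisor = {d}, actual rate = {PIT_INPUT_HZ / d:.3f} Hz")
```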
The LAPIC in newer Intel systems offers a higher-resolution (one microsecond) timer. This is used in preference to the PIT timer in Linux kernels starting with 2.6.18.
See also
High Precision Event Timer
Monostable multivibrator
NE555
References
External links
High Performance Windows Timers
Timing on the PC family under DOS
IBM PC compatibles
Digital electronics | Programmable interval timer | [
"Engineering"
] | 324 | [
"Electronic engineering",
"Digital electronics"
] |
3,259,030 | https://en.wikipedia.org/wiki/Polarimeter | A polarimeter is a scientific instrument used to measure optical rotation: the angle of rotation caused by passing linearly polarized light through an optically active substance.
Some chemical substances are optically active, and linearly polarized (uni-directional) light will rotate either to the left (counter-clockwise) or right (clockwise) when passed through these substances. The amount by which the light is rotated is known as the angle of rotation. The direction (clockwise or counterclockwise) and magnitude of the rotation reveals information about the sample's chiral properties such as the relative concentration of enantiomers present in the sample.
History
Polarization by reflection was discovered in 1808 by Étienne-Louis Malus (1775–1812).
Measuring principle
The ratio, the purity, and the concentration of two enantiomers can be measured via polarimetry. Enantiomers are characterized by their property of rotating the plane of linearly polarized light; such compounds are therefore called optically active and the property is referred to as optical rotation. Light sources such as an incandescent bulb, a tungsten-halogen lamp, or the sun emit electromagnetic waves at the frequencies of visible light. Their electric field oscillates in all possible planes relative to their direction of propagation. In contrast, the waves of linearly polarized light oscillate in parallel planes.
If light encounters a polarizer, only the part of the light that oscillates in the defined plane of the polarizer may pass through. That plane is called the plane of polarization. The plane of polarization is turned by optically active compounds. According to the direction in which the light is rotated, the enantiomer is referred to as dextro-rotatory or levo-rotatory.
The optical activity of enantiomers is additive. If different enantiomers exist together in one solution, their optical activity adds up. That is why racemates are optically inactive, as they nullify their clockwise and counter clockwise optical activities. The optical rotation is proportional to the concentration of the optically active substances in solution. Polarimeters may therefore be applied for concentration measurements of enantiomer-pure samples. With a known concentration of a sample, polarimeters may also be applied to determine the specific rotation when characterizing a new substance.
The specific rotation is a physical property and defined as the optical rotation α at a path length l of 1 dm, a concentration c of 10 g/L, a temperature T (usually 20 °C) and a light wavelength λ (usually sodium D line at 589.3 nm):
[α]λT = α / (l · c)
This tells us how much the plane of polarization is rotated when the ray of light passes through a specific amount of optically active molecules of a sample. Therefore, the optical rotation depends on temperature, concentration, wavelength, path length, and the substance being analyzed.
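A minimal sketch of this definition in code follows; the observed rotation, path length and concentration are made-up example readings, and the concentration is taken in g/mL with the path length in dm, one common convention.

```python
# Specific rotation from an observed rotation: [alpha] = alpha_obs / (l * c),
# with path length l in dm and concentration c in g/mL (one common convention).
# The example numbers are made-up readings.

def specific_rotation(alpha_obs_deg: float, path_dm: float, conc_g_per_ml: float) -> float:
    """Return the specific rotation in deg*mL/(g*dm)."""
    return alpha_obs_deg / (path_dm * conc_g_per_ml)

if __name__ == "__main__":
    print(specific_rotation(alpha_obs_deg=6.6, path_dm=1.0, conc_g_per_ml=0.10))  # 66.0
```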
Construction
The polarimeter is made up of two Nicol prisms (the polarizer and the analyzer). The polarizer is fixed and the analyzer can be rotated. The prisms may be thought of as slits S1 and S2, and the light waves may be considered to correspond to waves in a string. The polarizer S1 allows only those light waves which move in a single plane, causing the light to become plane polarized. When the analyzer is placed in a similar orientation, it allows the light waves coming from the polarizer to pass through it. When it is rotated through a right angle, no waves can pass through and the field appears dark. If a glass tube containing an optically active solution is now placed between the polarizer and analyzer, the solution rotates the plane of polarization through a certain angle, and the analyzer has to be rotated by the same angle to restore darkness.
Operation
Polarimeters measure this by passing monochromatic light through the first of two polarising plates, creating a polarized beam. This first plate is known as the polarizer. This beam is then rotated as it passes through the sample. After passing through the sample, a second polarizer, known as the analyzer, rotates either via manual rotation or automatic detection of the angle. When the analyzer is rotated such that all the light or no light can pass through, then one can find the angle of rotation which is equal to the angle θ by which the analyser was rotated in the former case, or 90-θ in the latter case.
Types of polarimeter
Laurent's half-shade polarimeter
When plane-polarised light passes through some crystals, the velocity of left-polarized light is different from that of the right-polarized light, thus the crystals are said to have two refractive indices, i.e. double refracting.
Construction: The polarimeter consists of a monochromatic source S which is placed at focal point of a convex lens L. Just after the convex lens there is a Nicol Prism P which acts as a polariser. H is a half shade device which divides the field of polarized light emerging out of the Nicol P into two halves, generally of unequal brightness. T is a glass tube in which an optically active solution is filled. The light, after passing through T, is allowed to fall on the analyzing Nicol A which can be rotated about the axis of the tube. The rotation of the analyzer can be measured with the help of a scale C.
Working principle: To understand the need of a half-shade device, let us suppose that it is not present. The position of the analyzer is adjusted so that the field of view is dark when the tube is empty. The position of the analyzer is noted on the circular scale. Now the tube is filled with the optically active solution and it is set in its proper position. The optically active solution rotates the plane of polarization of the light emerging out of the polarizer P by some angle, so the light is transmitted by analyzer A and the field of view of the telescope becomes bright. Now the analyzer is rotated by a finite angle so that the field of view of the telescope again becomes dark. This will happen only when the analyzer is rotated by the same angle by which the plane of polarization of light is rotated by the optically active solution.
The position of the analyzer is again noted. The difference of the two readings will give the angle of rotation of the plane of polarization.
A difficulty faced in the above procedure is that when analyzer is rotated for the total darkness, then it is attained gradually and hence it is difficult to find the exact position correctly for which complete darkness is obtained. To overcome the above difficulty, the half-shade device is introduced between polarizer P and the glass tube T.
Half shade device: It consists of two semicircular plates ACB and ADB. One half, ACB, is made of glass while the other half is made of quartz. Both halves are cemented together. The quartz is cut parallel to the optic axis. The thickness of the quartz is selected so that it introduces a path difference of λ/2 between the ordinary and extraordinary rays. The thickness of the glass is selected so that it absorbs the same amount of light as the quartz half.
Consider that the vibration of polarization is along OP. On passing through the glass half the vibrations remain along OP. But on passing through the quartz half these vibrations split into O- and E-components. The E-component is parallel to the optic axis while the O-component is perpendicular to the optic axis. The O-component travels faster in quartz, and hence on emergence the O-component will be along OD instead of along OC. Thus the components along OA and OD will combine to form a resultant vibration along OQ which makes the same angle with the optic axis as OP. Now if the principal plane of the analyzing Nicol is parallel to OP then the light will pass through the glass half unobstructed; the glass half will therefore be brighter than the quartz half, i.e. the glass half will be bright and the quartz half will be dark. Similarly, if the principal plane of the analyzing Nicol is parallel to OQ then the quartz half will be bright and the glass half will be dark.
When the principal plane of the analyzer is along AOB then both halves will be equally bright. On the other hand, if the principal plane of the analyzer is along DOC then both the halves will be equally dark.
Thus it is clear that if the analyzing Nicol is slightly disturbed from DOC then one half becomes brighter than the other. Hence by using the half shade device, one can measure the angle of rotation more accurately.
Determination of specific rotation: In order to determine a specific rotation of an optically active substance (say, sugar), the polarimeter tube is first filled with pure water and the analyzer is adjusted for equal darkness (both the halves should be equally dark) point. The position of the analyzer is noted with the help of the scale. Now the polarimeter tube is filled with a sugar solution of known concentration and again the analyzer is adjusted in such a way that again the equally dark point is achieved. The position of the analyzer is again noted. The difference of the two readings will give the angle of rotation θ. Hence, a specific rotation S is determined as S = θ/LC, where L is the optical path length and C is concentration of the substance.
Biquartz polarimeter
A biquartz polarimeter uses a biquartz plate, consisting of two semicircular plates of quartz, each of thickness 3.75 mm. One half consists of right-handed optically active quartz, while the other is left-handed optically active quartz.
Manual
The earliest polarimeters, which date back to the 1830s, required the user to physically rotate one polarizing element (the analyzer) whilst viewing through another static element (the detector). The detector was positioned at the opposite end of a tube containing the optically active sample, and the user used his or her eye to judge the "alignment" when the least light was observed. The angle of rotation was then read from a simple scale fixed to the moving polariser, to within a degree or so.
Although most manual polarimeters produced today still adopt this basic principle, the many developments applied to the original opto-mechanical design over the years have significantly improved measurement performance. The introduction of a half-wave plate increased "distinction sensitivity", whilst a precision glass scale with vernier drum facilitated the final reading to within ca. ±0.05º. Most modern manual polarimeters also incorporate a long-life yellow LED in place of the more costly sodium arc lamp as a light source.
Semi-automatic
Today, semi-automatic polarimeters are available. The operator views the image via a digital display and adjusts the analyzer angle with electronic controls.
Fully automatic
Fully automatic polarimeters are now widely used and simply require the user to press a button and wait for a digital readout. Fast automatic digital polarimeters yield an accurate result within a few seconds, regardless of the rotation angle of the sample. In addition, they provide continuous measurement, facilitating high-performance liquid chromatography and other kinetic investigations.
Another feature of modern polarimeters is the Faraday modulator. The Faraday modulator creates an alternating current magnetic field. It oscillates the plane of polarization to enhance the detection accuracy by allowing the point of maximal darkness to be passed through again and again and thus be determined with even more accuracy.
As the temperature of the sample has a significant influence on its optical rotation, modern polarimeters include Peltier elements to actively control the temperature. Special techniques such as temperature-controlled sample tubes reduce measuring errors and ease operation. Results can be transferred directly to computers or networks for automatic processing. Traditionally, accurate filling of the sample cell had to be checked outside the instrument, as an appropriate check from within the device was not possible. Nowadays a camera system can help to monitor the sample and the filling conditions in the sample cell. Furthermore, automatic-filling features introduced by a few companies are available on the market. When working with caustic chemicals, acids, and bases it can be beneficial not to load the polarimeter cell by hand. Both of these options help to avoid potential errors caused by bubbles or particles.
Sources of error
The angle of rotation of an optically active substance can be affected by:
Concentration of the sample
Wavelength of light passing through the sample (generally, angle of rotation and wavelength tend to be inversely proportional)
Temperature of the sample (generally the two are directly proportional)
Length of the sample cell (input by the user into most automatic polarimeters to ensure better accuracy)
Filling conditions (bubbles, temperature and concentration gradients)
Most modern polarimeters have methods for compensating or/and controlling these errors.
Calibration
Traditionally, a sucrose solution with a defined concentration was used to calibrate polarimeters, relating the amount of sugar molecules to the rotation of the light polarization. The International Commission for Uniform Methods of Sugar Analysis (ICUMSA) played a key role in unifying analytical methods for the sugar industry and set the standards for the International Sugar Scale (ISS) and the specifications for polarimeters used in the sugar industry. However, sugar solutions are prone to contamination and evaporation. Moreover, the optical rotation of a substance is very sensitive to temperature. A more reliable and stable standard was found: crystalline quartz, oriented and cut in a way that matches the optical rotation of a normal sugar solution, but without the disadvantages mentioned above. Quartz (silicon dioxide, SiO2) is a common mineral, a trigonal chemical compound of silicon and oxygen. Nowadays, quartz plates or quartz control plates of different thicknesses serve as standards to calibrate polarimeters and saccharimeters. In order to ensure reliable and comparable results, quartz plates can be calibrated and certified by metrology institutes. Alternatively, calibration may be checked using a polarization reference standard, which consists of a plate of quartz mounted in a holder perpendicular to the light path; such standards are commercially available with traceability to NIST. A calibration first consists of a preliminary test in which the fundamental calibration capability is checked. The quartz control plates must meet minimum requirements with respect to their dimensions, optical purity, flatness, parallelism of the faces and optical axis errors. After that, the actual measurement value - the optical rotation - is measured with a precision polarimeter. The measurement uncertainty of the polarimeter amounts to 0.001° (k=2).
Applications
Because many optically active chemicals, such as tartaric acid, are stereoisomers, a polarimeter can be used to identify which isomer is present in a sample – if it rotates polarized light to the left, it is a levo-isomer, and to the right, a dextro-isomer. It can also be used to measure the ratio of enantiomers in solutions.
The optical rotation is proportional to the concentration of the optically active substances in solution. Polarimetry may therefore be applied for concentration measurements of enantiomer-pure samples. With a known concentration of a sample, polarimetry may also be applied to determine the specific rotation (a physical property) when characterizing a new substance.
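As an illustration of estimating the enantiomer ratio mentioned above, the sketch below computes the optical purity (enantiomeric excess) of a sample from its observed specific rotation and the specific rotation of the pure enantiomer; both input values are hypothetical.

```python
# Optical purity / enantiomeric excess from specific rotations:
# ee (%) = 100 * [alpha]_observed / [alpha]_pure_enantiomer.
# Input values are hypothetical illustration numbers.

def enantiomeric_excess(observed_rotation: float, pure_enantiomer_rotation: float) -> float:
    """Return the enantiomeric excess in percent."""
    return 100.0 * observed_rotation / pure_enantiomer_rotation

if __name__ == "__main__":
    ee = enantiomeric_excess(observed_rotation=8.1, pure_enantiomer_rotation=12.0)
    major_fraction = (100.0 + ee) / 2.0  # percentage of the major enantiomer
    print(f"ee = {ee:.1f}%, major enantiomer = {major_fraction:.1f}%")
```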
Chemical industry
Many chemicals exhibit a specific rotation as a unique property (an intensive property like refractive index or Specific gravity) which can be used to distinguish it. Polarimeters can identify unknown samples based on this if other variables such as concentration and length of sample cell length are controlled or at least known. This is used in the chemical industry.
By the same token, if the specific rotation of a sample is already known, then the concentration and/or purity of a solution containing it can be calculated.
Most automatic polarimeters make this calculation automatically, given input on variables from the user.
Food, beverage and pharmaceutical industries
Concentration and purity measurements are especially important to determine product or ingredient quality in the food & beverage and pharmaceutical industries. Samples that display specific rotations that can be calculated for purity with a polarimeter include:
Steroids
Diuretics
Antibiotics
Narcotics
Vitamins
Analgesics
Amino acids
Essential oils
Polymers
Starches
Sugars
Polarimeters are used in the sugar industry for determining quality of both juice from sugar cane and the refined sucrose. Often, the sugar refineries use a modified polarimeter with a flow cell (and used in conjunction with a refractometer) called a saccharimeter. These instruments use the International Sugar Scale, as defined by the International Commission for Uniform Methods of Sugar Analysis (ICUMSA).
See also
Optical rotation
Polarimetry
Polarization
Chirality
Enantiomers
References
Polarization (waves)
Optical instruments
French inventions | Polarimeter | [
"Physics"
] | 3,476 | [
"Polarization (waves)",
"Astrophysics"
] |
3,259,068 | https://en.wikipedia.org/wiki/Fast%20folding%20algorithm | The Fast-Folding Algorithm (FFA) is a computational method primarily utilized in the domain of astronomy for detecting periodic signals.
FFA is designed to reveal repeating or cyclical patterns by "folding" data, which involves dividing the data set into numerous segments, aligning these segments to a common phase, and summing them together to enhance the signal of periodic events. This algorithm is particularly advantageous when dealing with non-uniformly sampled data or signals with a drifting period, i.e. signals whose frequency or period drifts over time rather than remaining perfectly stable.
A quintessential application of the FFA is in the detection and analysis of pulsars—highly magnetized, rotating neutron stars that emit beams of electromagnetic radiation. By employing the FFA, astronomers can sift through noisy data to identify the regular pulses of radiation emitted by these celestial bodies. Moreover, the Fast-Folding Algorithm is instrumental in detecting long-period signals, which is often a challenge for other algorithms like the FFT (Fast Fourier Transform) that operate under the assumption of a constant frequency. Through the process of folding and summing data segments, the FFA provides a robust mechanism for unveiling periodicities despite noisy observational data, thereby playing a pivotal role in advancing our understanding of pulsar properties and behaviors.
History of the FFA
The Fast Folding Algorithm (FFA) has its roots dating back to 1969 when it was introduced by Professor David H. Staelin from the Massachusetts Institute of Technology (MIT).
At the time, the scientific community was deeply involved in the study of pulsars, which are rapidly rotating neutron stars emitting beams of electromagnetic radiation. Professor Staelin recognized the potential of the FFA as a powerful instrument for detecting periodic signals within pulsar surveys. These surveys were not just about understanding pulsars; they also played a pivotal role in testing and validating Einstein's theory of general relativity, a cornerstone of astronomy. As the years progressed, the FFA saw various refinements, with researchers making tweaks and optimizations to enhance its efficiency and accuracy. Despite its potential, the FFA was long overshadowed by Fast Fourier Transform (FFT)-based techniques, which were the preferred choice for many in signal processing during that era. As a result, while the FFA showed promise, its use in the broader scientific community remained limited for several decades.
Technical Foundations of the FFA
The Fast Folding Algorithm (FFA) was initially developed as a method to search for periodic signals amidst noise in the time domain, in contrast with the FFT search technique that operates in the frequency domain. The primary advantage of the FFA is its efficiency in avoiding redundant summations (unnecessary repeated additions). Specifically, the FFA is much faster than standard folding at all possible trial periods, achieving this by performing summations in N×log2(N/p−1) steps rather than N×(N/p−1), where N is the number of samples in the time series and p is the trial folding period in units of samples. This efficiency arises because the logarithmic term log2(N/p−1) grows much more slowly than the linear term (N/p−1), keeping the number of steps manageable as N increases. The FFA method involves folding each time series at multiple periods, performing partial summations in a series of log2(p) stages, and combining those sums to fold the data with a trial period between p and p+1. This approach retains all harmonic structure, making it especially effective for identifying narrow-pulsed signals in the long-period regime. One of the FFA's unique features is its hierarchical approach to folding: breaking the data down into smaller chunks, folding these chunks, and then combining them. This method, combined with its inherent tolerance to noise and its adaptability to different types of data and hardware configurations, ensures the FFA remains a powerful tool for detecting periodic signals, especially in environments with significant noise or interference, which makes it especially useful for astronomical endeavours.
In signal processing, the fast folding algorithm is an efficient algorithm for the detection of approximately-periodic events within time series data. It computes superpositions of the signal modulo various window sizes simultaneously.
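A deliberately naive illustration of the folding idea follows: the series is cut into segments one trial period long and the segments are summed, so a signal repeating at that period reinforces while noise averages out. This brute-force version omits the shared partial sums that make the real FFA fast, and the synthetic data and trial period are assumptions for demonstration only.

```python
# Naive epoch folding at a single trial period (illustration only; the real FFA
# reuses partial sums across many trial periods instead of refolding from scratch).
import numpy as np

def fold(series: np.ndarray, period_samples: int) -> np.ndarray:
    """Sum consecutive segments of length period_samples and return the folded profile."""
    n_segments = len(series) // period_samples
    trimmed = series[: n_segments * period_samples]
    return trimmed.reshape(n_segments, period_samples).sum(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, true_period = 10_000, 97
    signal = np.zeros(n)
    signal[::true_period] = 5.0           # a narrow periodic pulse
    data = signal + rng.normal(size=n)    # pulse buried in noise
    profile = fold(data, true_period)
    print("Pulse phase bin:", int(np.argmax(profile)))  # expected: 0
```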
The FFA is best known for its use in the detection of pulsars, as popularised by SETI@home and Astropulse.
It was also used by the Breakthrough Listen Initiative during their 2023 Investigation for Periodic Spectral Signals campaign.
See also
Pulsar
References
External links
The search for unknown pulsars
Signal processing | Fast folding algorithm | [
"Technology",
"Engineering"
] | 968 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
3,260,134 | https://en.wikipedia.org/wiki/Nesiritide | Nesiritide, sold under the brand name Natrecor, is the recombinant form of the 32 amino acid human B-type natriuretic peptide, which is normally produced by the ventricular myocardium. Nesiritide works to facilitate cardiovascular fluid homeostasis through counterregulation of the renin–angiotensin–aldosterone system, stimulating cyclic guanosine monophosphate, leading to smooth muscle cell relaxation.
Nesiritide was believed initially to be beneficial for acute decompensated congestive heart failure. It received approval from the United States' Food and Drug Administration for this purpose in 2001 after initial non-approval. In July 2011, the results of the largest study so far for nesiritide was published in The New England Journal of Medicine. The study failed to show a difference between nesiritide and placebo on mortality or re-hospitalizations.
Administration
Nesiritide is administered intravenously only, usually as a bolus followed by an IV infusion. For most adults and the elderly, a typical dosage is a 2 μg/kg bolus followed by a continuous IV infusion of 0.01 μg/kg/min. This may be increased every three hours up to a maximum of 0.03 μg/kg/min.
Controversy
In 2005, after several academic papers published by Jonathan Sackner-Bernstein on the efficacy and side effects of nesiritide, Johnson & Johnson met with the FDA and altered its stated plans for the drug and agreed to revise its labeling.
Heart doctors at the Cleveland Clinic then voted unanimously not to permit the prescription of the drug to its patients. Johnson and Johnson convened a panel of experts whose advice included the recommendation to conduct the large-scale clinical trial that was subsequently published in 2011. Following this, the United States Department of Justice announced an inquiry into the marketing of the drug that led to a fine against the Scios unit of J&J.
Side effects
Common side effects include:
Low blood pressure (11% of patients)
Headache
Nausea
Slow heart rate
Kidney failure
More rare side effects include:
Confusion
Paresthesia
Somnolence
Tremor
References
Further reading
External links
Natrecor.com
Peptides
Drugs acting on the cardiovascular system
Drugs developed by Johnson & Johnson | Nesiritide | [
"Chemistry"
] | 466 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
3,260,340 | https://en.wikipedia.org/wiki/Shear%20thinning | In rheology, shear thinning is the non-Newtonian behavior of fluids whose viscosity decreases under shear strain. It is sometimes considered synonymous for pseudo-plastic behaviour, and is usually defined as excluding time-dependent effects, such as thixotropy.
Shear thinning is the most common type of non-Newtonian behavior of fluids and is seen in many industrial and everyday applications. Although shear thinning is generally not observed in pure liquids with low molecular mass or ideal solutions of small molecules like sucrose or sodium chloride, it is often observed in polymer solutions and molten polymers, as well as complex fluids and suspensions like ketchup, whipped cream, blood, paint, and nail polish.
Theories behind shear-thinning behaviour
Though the exact cause of shear thinning is not fully understood, it is widely regarded to be the effect of small structural changes within the fluid, such that microscale geometries within the fluid rearrange to facilitate shearing. In colloid systems, phase separation during flow leads to shear thinning. In polymer systems such as polymer melts and solutions, shear thinning is caused by the disentanglement of polymer chains during flow. At rest, high molecular weight polymers are entangled and randomly oriented. However, when undergoing agitation at a high enough rate, these highly anisotropic polymer chains start to disentangle and align along the direction of the shear force. This leads to less molecular/particle interaction and a larger amount of free space, decreasing the viscosity.
Power law model
At both sufficiently high and very low shear rates, viscosity of a polymer system is independent of the shear rate. At high shear rates, polymers are entirely disentangled and the viscosity value of the system plateaus at η∞, or the infinite shear viscosity plateau. At low shear rates, the shear is too low to be impeded by entanglements and the viscosity value of the system is η0, or the zero shear rate viscosity. The value of η∞ represents the lowest viscosity attainable and may be orders of magnitude lower than η0, depending on the degree of shear thinning.
Viscosity is plotted against shear rate in a log(η) vs. log(dγ/dt) plot, where the linear region is the shear-thinning regime and can be expressed using the Ostwald-de Waele power law equation:
τ = K (dγ/dt)^n
where τ is the shear stress, dγ/dt is the shear rate, K is the flow consistency index and n is the flow behaviour index (n < 1 for shear-thinning fluids).
The Ostwald-de Waele equation can be written in a logarithmic form:
log(τ) = log(K) + n log(dγ/dt)
The apparent viscosity is defined as η = τ/(dγ/dt), and this may be plugged into the Ostwald equation to yield a second power-law equation for the apparent viscosity:
η = K (dγ/dt)^(n − 1)
This expression can also be used to describe dilatant (shear thickening) behaviour, where the value of n is greater than 1.
Herschel–Bulkley model
Bingham plastics require a critical shear stress to be exceeded in order to start flowing. This behaviour is usually seen in polymer/silica micro- and nanocomposites, where the formation of a silica network in the material provides a solid-like response at low shear stress. The shear-thinning behaviour of plastic fluids can be described with the Herschel-Bulkley model, which adds a threshold (yield) shear stress τ0 to the Ostwald equation:
τ = τ0 + K (dγ/dt)^n
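A short sketch evaluating the apparent viscosity of power-law and Herschel-Bulkley fluids over a range of shear rates follows; the consistency index K, flow index n and yield stress values are arbitrary illustration numbers.

```python
# Apparent viscosity for the Ostwald-de Waele (power-law) and Herschel-Bulkley models.
# Parameter values (K, n, tau0) are arbitrary illustration numbers.

def power_law_viscosity(shear_rate: float, k: float, n: float) -> float:
    """eta = K * shear_rate**(n - 1); n < 1 gives shear thinning."""
    return k * shear_rate ** (n - 1.0)

def herschel_bulkley_viscosity(shear_rate: float, tau0: float, k: float, n: float) -> float:
    """eta = (tau0 + K * shear_rate**n) / shear_rate, valid once the yield stress is exceeded."""
    return (tau0 + k * shear_rate ** n) / shear_rate

if __name__ == "__main__":
    for rate in (0.1, 1.0, 10.0, 100.0):
        print(rate,
              round(power_law_viscosity(rate, k=2.0, n=0.5), 3),
              round(herschel_bulkley_viscosity(rate, tau0=1.0, k=2.0, n=0.5), 3))
```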
Relationship with thixotropy
Some authors consider shear thinning to be a special case of thixotropic behaviour, because the recovery of the microstructure of the liquid to its initial state will always require a non-zero time. When the recovery of viscosity after disturbance is very rapid however, the observed behaviour is classic shear thinning or pseudoplasticity, because as soon as the shear is removed, the viscosity returns to normal. When it takes a measurable time for the viscosity to recover, thixotropic behaviour is observed. When describing the viscosity of liquids, however, it is therefore useful to distinguish shear-thinning (pseudoplastic) behaviour from thixotropic behaviour, where the viscosity at all shear rates is decreased for some duration after agitation: both of these effects can often be seen separately in the same liquid.
Everyday examples
Wall paint is a pseudoplastic material. When modern wall paint is applied, the shear created by the brush or roller will allow it to thin and wet out the surface evenly. Once applied, the paint regains its higher viscosity, which avoids drips and runs.
Ketchup is a shear-thinning material, viscous when at rest, but flowing at speed when agitated by squeezing, shaking, or striking the bottle.
Whipped cream is also a shear-thinning material. When whipped cream is sprayed out of its canister, it flows out smoothly from the nozzle due to the low viscosity at high flow rate. However, after whipped cream is sprayed into a spoon, it does not flow and its increased viscosity allows it to be rigid.
See also
Time-dependent viscosity
Rheopecty: The longer the fluid is subjected to a shear strain, the higher the viscosity. Time-dependent shear thickening behavior.
Thixotropy: The longer a fluid is subjected to a shear strain, the lower its viscosity. It is a time-dependent shear thinning behavior.
Shear thickening: Similar to rheopecty, but independent of the passage of time.
Thickening agent
Paint thinner
External links
The Great Ketchup Mystery
References
Continuum mechanics
Rheology
Non-Newtonian fluids
Smart materials
Tribology | Shear thinning | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,151 | [
"Tribology",
"Continuum mechanics",
"Classical mechanics",
"Surface science",
"Materials science",
"Mechanical engineering",
"Smart materials",
"Rheology",
"Fluid dynamics"
] |
3,262,898 | https://en.wikipedia.org/wiki/Metallome | In biochemistry, the metallome is the distribution of metal ions in a cellular compartment. The term was coined in analogy with proteome as metallomics is the study of metallome: the "comprehensive analysis of the entirety of metal and metalloid species within a cell or tissue type". Therefore, metallomics can be considered a branch of metabolomics, even though the metals are not typically considered as metabolites.
An alternative definition treats "metallomes" as metalloproteins or any other metal-containing biomolecules, and "metallomics" as the study of such biomolecules.
Metallointeractome
In the study of metallomes, the transcriptome, the proteome and the metabolome together constitute the whole metallome. The metallome is studied in order to arrive at the metallointeractome.
Metallotranscriptome
The metallotranscriptome can be defined as the map of the entire transcriptome in the presence of biologically or environmentally relevant concentrations of an essential or toxic metal, respectively. The metallometabolome constitutes the complete pool of small metabolites in a cell at any given time. Together these give rise to the whole metallointeractome, knowledge of which is important in comparative metallomics dealing with toxicity and drug discovery.
See also
Bioinorganic chemistry
-omics
References
Sources
Systems biology
Metabolism
Bioinformatics
Biochemistry methods | Metallome | [
"Chemistry",
"Engineering",
"Biology"
] | 307 | [
"Biochemistry methods",
"Biological engineering",
"Bioinformatics",
"Cellular processes",
"Biochemistry",
"Metabolism",
"Systems biology"
] |
3,263,174 | https://en.wikipedia.org/wiki/CPK%20coloring | In chemistry, the CPK coloring (for Corey–Pauling–Koltun) is a popular color convention for distinguishing atoms of different chemical elements in molecular models.
History
August Wilhelm von Hofmann was apparently the first to introduce molecular models into organic chemistry, following August Kekule's introduction of the theory of chemical structure in 1858, and Alexander Crum Brown's introduction of printed structural formulas in 1861. At a Friday Evening Discourse at London's Royal Institution on April 7, 1865, he displayed molecular models of simple organic substances such as methane, ethane, and methyl chloride, which he had had constructed from differently colored table croquet balls connected together with thin brass tubes. Hofmann's original colour scheme (carbon = black, hydrogen = white, nitrogen = blue, oxygen = red, chlorine = green, and sulphur = yellow) has evolved into the later color schemes.
In 1952, Corey and Pauling published a description of space-filling models of proteins and other biomolecules that they had been building at Caltech. Their models represented atoms by faceted hardwood balls, painted in different bright colors to indicate the respective chemical elements. Their color schema included
White for hydrogen
Black for carbon
Sky blue for nitrogen
Red for oxygen
They also built smaller models using plastic balls with the same color schema.
In 1965 Koltun patented an improved version of the Corey and Pauling modeling technique. In his patent he mentions the following colors:
White for hydrogen
Black for carbon
Blue for nitrogen
Red for oxygen
Deep yellow for sulfur
Purple for phosphorus
Light, medium, medium dark, and dark green for the halogens (F, Cl, Br, I)
Silver for metals (Co, Fe, Ni, Cu)
Typical assignments
Typical CPK color assignments include:
Several of the CPK colors refer mnemonically to colors of the pure elements or notable compound. For example, hydrogen is a colorless gas, carbon as charcoal, graphite or coke is black, sulfur powder is yellow, chlorine is a greenish gas, bromine is a dark red liquid, iodine in ether is violet, amorphous phosphorus is red, rust is dark orange-red, etc. For some colors, such as those of oxygen and nitrogen, the inspiration is less clear. Perhaps red for oxygen is inspired by the fact that oxygen is normally required for combustion or that the oxygen-bearing chemical in blood, hemoglobin, is bright red, and the blue for nitrogen by the fact that nitrogen is the main component of Earth's atmosphere, which appears to human eyes as being colored sky blue.
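For use in scripts, the commonly quoted assignments from the lists above can be captured in a small lookup table. The hex shades below are approximate choices, since the exact colors vary between programs (see the comparison further down), and elements not listed simply fall back to a default.

```python
# Approximate CPK colors for a few common elements, following the assignments
# described above. Hex shades are approximate; individual programs differ.

CPK_COLORS = {
    "H": "#FFFFFF",   # white
    "C": "#000000",   # black (often rendered grey)
    "N": "#0000FF",   # blue
    "O": "#FF0000",   # red
    "S": "#FFFF00",   # yellow
    "P": "#FFA500",   # orange in many schemes (purple in Koltun's patent)
    "Cl": "#00FF00",  # green
    "Br": "#8B0000",  # dark red
    "I": "#9400D3",   # dark violet
}

def color_of(element_symbol: str, default: str = "#FFC0CB") -> str:
    """Return an approximate CPK hex color; unknown elements get the default shade."""
    return CPK_COLORS.get(element_symbol, default)

if __name__ == "__main__":
    print(color_of("O"), color_of("Ti"))
```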
It is likely that the CPK colours were inspired by models from the nineteenth century. In 1865, August Wilhelm von Hofmann, in a talk at the Royal Institution in London, used models made from croquet balls to illustrate valence, so he used the coloured balls available to him. (At the time, croquet was the most popular sport in England, so the balls were plentiful.) "On the Combining Power of Atoms", Chemical News, 12 (1865), 176–9, 189, states that "Hofmann, at a lecture given at the Royal Institution in April 1865 made use of croquet balls of different colours to represent various kinds of atoms (e.g. carbon black, hydrogen white, chlorine green, 'fiery' oxygen red, nitrogen blue)."
Modern variants
The following table shows colors assigned to each element by some popular software products.
Column C is the original assignment by Corey and Pauling.
Column K is that of Koltun's patent.
Column J is the color scheme used by the molecular visualizer Jmol.
Column R is the scheme used by Rasmol; when two colors are shown, the second one is valid for versions 2.7.3 and later.
Column P consists of the colors in the PubChem database managed by the United States National Institute of Health.
All colors are approximate and may depend on the display hardware and viewing conditions.
See also
Ball-and-stick model
Molecular graphics
Software for molecular modeling
References
External links
Physical Molecular Models
Color codes
Molecular modelling | CPK coloring | [
"Chemistry"
] | 855 | [
"Theoretical chemistry",
"Molecular modelling",
"Molecular physics"
] |
3,263,222 | https://en.wikipedia.org/wiki/Multi-Object%20Spectrometer | A multi-object spectrometer is a type of optical spectrometer capable of simultaneously acquiring the spectra of multiple separate objects in its field of view. It is used in astronomical spectroscopy and is related to long-slit spectroscopy. This technique became available in the 1980s.
Description
The term multi-object spectrograph is commonly used for spectrographs using a bundle of fibers to image part of the field. The entrance of the fibers is at the focal plane of the imaging instrument. The bundle is then reshaped; the individual fibers are aligned at the entrance slit of a spectrometer, dispersing the light on a detector.
This technique is closely related to integral field spectrography (IFS), more specifically to fiber-IFS. It is a form of snapshot hyperspectral imaging, itself a part of imaging spectroscopy.
Apertures
Typically, the apertures of multi-object spectrographs can be modified to fit the needs of the given observation.
For example, the MOSFIRE (Multi-Object Spectrometer for Infra-Red Exploration
) instrument on the W. M. Keck Observatory contains the Configurable Slit Unit (CSU) allowing arbitrary positioning of up to forty-six 18 cm slits by moving opposable bars.
Some fiber-fed spectrographs, such as the Large Sky Area Multi-Object Fibre Spectroscopic Telescope (LAMOST), can move the fibers to desired positions. The LAMOST moves its 4000 fibers separately within designated areas for the requirements of a measurement, and can correct positioning errors in real time.
The James Webb Space Telescope uses a fixed Micro-Shutter Assembly (MSA), an array of nearly 250000 5.1 mm by 11.7 mm shutters that can independently be opened or closed to change the location of the open slits on the device.
Uses in telescopes
Ground-based instruments
Instruments with multi-object spectrometry capabilities are available on most 8-10 meter-class ground-based observatories. For example, the Large Binocular Telescope, W. M. Keck Observatory, Gran Telescopio Canarias, Gemini Observatory, New Technology Telescope, William Herschel Telescope, UK Schmidt Telescope and LAMOST include such systems.
Four instruments in the Very Large Telescope, including the KMOS (K-band multi-object spectrograph) and the VIMOS (Visible Multi Object Spectrograph) instruments, have multi-object spectroscopic capabilities.
Space-based instruments
The Hubble Space Telescope operated the NICMOS (Near Infrared Camera and Multi-Object Spectrometer) from 1997 to 1999 and from 2002 to 2008.
The James Webb Space Telescope's NIRSpec (Near-Infrared Spectrograph) instrument is a multi-object spectrometer.
References
Observational astronomy
Astronomical spectroscopy | Multi-Object Spectrometer | [
"Physics",
"Chemistry",
"Astronomy"
] | 580 | [
"Spectrum (physical sciences)",
"Observational astronomy",
"Astrophysics",
"Astronomical spectroscopy",
"Spectroscopy",
"Astronomical sub-disciplines"
] |
3,264,380 | https://en.wikipedia.org/wiki/Bacterial%20translation | Bacterial translation is the process by which messenger RNA is translated into proteins in bacteria.
Initiation
Initiation of translation in bacteria involves the assembly of the components of the translation system, which are: the two ribosomal subunits (50S and 30S subunits); the mature mRNA to be translated; the tRNA charged with N-formylmethionine (the first amino acid in the nascent peptide); guanosine triphosphate (GTP) as a source of energy, and the three prokaryotic initiation factors IF1, IF2, and IF3, which help the assembly of the initiation complex. Variations in the mechanism can be anticipated.
The ribosome has three active sites: the A site, the P site, and the E site. The A site is the point of entry for the aminoacyl tRNA (except for the first aminoacyl tRNA, which enters at the P site). The P site is where the peptidyl tRNA is formed in the ribosome. The E site is the exit site of the now uncharged tRNA after it gives its amino acid to the growing peptide chain.
Canonical initiation: Shine-Dalgarno sequence
The majority of mRNAs in E. coli are prefaced with a Shine-Dalgarno (SD) sequence. The SD sequence is recognized by a complementary "anti-SD" region on the 16S rRNA component of the 30S subunit. In the canonical model, the 30S ribosome is first joined up with the three initiation factors, forming an unstable "pre-initiation complex". The mRNA then pairs up with this anti-SD region, causing it to form a double-stranded RNA structure, roughly positioning the start codon at the P site. An initiating tRNAfMet arrives and is positioned with the help of IF2, starting the translation.
There are a lot of uncertainties even in the canonical model. The initiation site has been shown to be not strictly limited to AUG. Well-known coding regions that do not have AUG initiation codons are those of lacI (GUG) and lacA (UUG) in the E. coli lac operon. Two studies have independently shown that 17 or more non-AUG start codons may initiate translation in E. coli. Nevertheless, AUG seems to at least be the strongest initiation codon among all possibilities.
The SD sequence also does not appear strictly necessary, as a wide range of mRNAs lack them and are still translated, with an entire phylum of bacteria (Bacteroidetes) using no such sequence. Simply SD followed by AUG is also not sufficient to initiate translation. It does, at least, function as a very important initiating signal in E. coli.
70S scanning model
When translating a polycistronic mRNA, a 70S ribosome ends translation at a stop codon. It is now shown that instead of immediately splitting into its two halves, the ribosome can "scan" forward until it hits another Shine–Dalgarno sequence and the downstream initiation codon, initiating another translation with the help of IF2 and IF3. This mode is thought to be important for the translation of genes that are clustered in poly-cistronic operons, where the canonical binding mode can be disruptive due to small distances between neighboring genes on the same mRNA molecule.
Leaderless initiation
A number of bacterial mRNAs have no 5'UTR whatsoever, or a very short one. The complete 70S ribosome, with the help of IF2 (recruiting fMet-tRNA), can simply start translating such a "leaderless" mRNA.
A number of factors modify the efficiency of leaderless initiation. A 5' phosphate group attached to the start codon seems near-essential. AUG is strongly preferred in E. coli, but not necessarily in other species. IF3 inhibits leaderless initiation. A longer 5'UTR or one with significant secondary structure also inhibits leaderless initiation.
Elongation
Elongation of the polypeptide chain involves addition of amino acids to the carboxyl end of the growing chain. The growing protein exits the ribosome through the polypeptide exit tunnel in the large subunit.
Elongation starts when the fMet-tRNA enters the P site, causing a conformational change which opens the A site for the new aminoacyl-tRNA to bind. This binding is facilitated by elongation factor-Tu (EF-Tu), a small GTPase. For fast and accurate recognition of the appropriate tRNA, the ribosome utilizes large conformational changes (conformational proofreading).
Now the P site contains the beginning of the peptide chain of the protein to be encoded and the A site has the next amino acid to be added to the peptide chain. The growing polypeptide connected to the tRNA in the P site is detached from the tRNA in the P site and a peptide bond is formed between the last amino acids of the polypeptide and the amino acid still attached to the tRNA in the A site. This process, known as peptide bond formation, is catalyzed by a ribozyme (the 23S ribosomal RNA in the 50S ribosomal subunit). Now, the A site has the newly formed peptide, while the P site has an uncharged tRNA (tRNA with no amino acids). The newly formed peptide in the A site tRNA is known as dipeptide and the whole assembly is called dipeptidyl-tRNA. The tRNA in the P site minus the amino acid is known to be deacylated. In the final stage of elongation, called translocation, the deacylated tRNA (in the P site) and the dipeptidyl-tRNA (in the A site) along with its corresponding codons move to the E and P sites, respectively, and a new codon moves into the A site. This process is catalyzed by elongation factor G (EF-G). The deacylated tRNA at the E site is released from the ribosome during the next A-site occupation by an aminoacyl-tRNA again facilitated by EF-Tu.
The ribosome continues to translate the remaining codons on the mRNA as more aminoacyl-tRNAs bind to the A site, until the ribosome reaches a stop codon on the mRNA (UAA, UGA, or UAG).
The translation machinery works relatively slowly compared to the enzyme systems that catalyze DNA replication. Proteins in bacteria are synthesized at a rate of only 18 amino acid residues per second, whereas bacterial replisomes synthesize DNA at a rate of 1000 nucleotides per second. This difference in rate reflects, in part, the difference between polymerizing four types of nucleotides to make nucleic acids and polymerizing 20 types of amino acids to make proteins. Testing and rejecting incorrect aminoacyl-tRNA molecules takes time and slows protein synthesis. In bacteria, translation initiation occurs as soon as the 5' end of an mRNA is synthesized, and translation and transcription are coupled. This is not possible in eukaryotes because transcription and translation are carried out in separate compartments of the cell (the nucleus and cytoplasm).
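As a back-of-the-envelope illustration of the rates quoted above, the short Python sketch below compares the time needed to translate a hypothetical 300-residue protein with the time needed to replicate its coding sequence. The protein length is an arbitrary example, and the ribosome-spacing figure anticipates the Polysomes section below.

protein_length_aa = 300                    # illustrative protein size, in residues
coding_length_nt = 3 * protein_length_aa   # three nucleotides per codon

translation_rate_aa_per_s = 18     # bacterial ribosome, as quoted above
replication_rate_nt_per_s = 1000   # bacterial replisome, as quoted above

print("time to translate: %.1f s" % (protein_length_aa / translation_rate_aa_per_s))
print("time to replicate: %.1f s" % (coding_length_nt / replication_rate_nt_per_s))

# With ribosomes no closer than ~35 nucleotides apart (see Polysomes below),
# the same transcript can carry at most roughly this many ribosomes at once:
print("max ribosomes on transcript:", coding_length_nt // 35)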
Termination
Termination occurs when one of the three termination codons
moves into the A site. These codons are not recognized by any tRNAs. Instead, they are recognized by proteins called release factors, namely RF1 (recognizing the UAA and UAG stop codons) or RF2 (recognizing the UAA and UGA stop codons). These factors trigger the hydrolysis of the ester bond in peptidyl-tRNA and the release of the newly synthesized protein from the ribosome. A third release factor RF-3 catalyzes the release of RF-1 and RF-2 at the end of the termination process.
Recycling
The post-termination complex formed by the end of the termination step consists of mRNA with the termination codon at the A-site, an uncharged tRNA in the P site, and the intact 70S ribosome. Ribosome recycling step is responsible for the disassembly of the post-termination ribosomal complex. Once the nascent protein is released in termination, Ribosome Recycling Factor and Elongation Factor G (EF-G) function to release mRNA and tRNAs from ribosomes and dissociate the 70S ribosome into the 30S and 50S subunits. IF3 then replaces the deacylated tRNA releasing the mRNA. All translational components are now free for additional rounds of translation.
Depending on the tRNA, IF1–IF3 may also perform recycling.
Polysomes
Translation is carried out by more than one ribosome simultaneously. Because of the relatively large size of ribosomes, they can only attach to sites on the mRNA that are at least 35 nucleotides apart. The complex of one mRNA and a number of ribosomes is called a polysome or polyribosome.
Regulation of translation
When bacterial cells run out of nutrients, they enter stationary phase and downregulate protein synthesis. Several processes mediate this transition. For instance, in E. coli, 70S ribosomes form 90S dimers upon binding with a small 6.5 kDa protein, ribosome modulation factor RMF. These intermediate ribosome dimers can subsequently bind a hibernation promotion factor (the 10.8 kDa protein, HPF) molecule to form a mature 100S ribosomal particle, in which the dimerization interface is made by the two 30S subunits of the two participating ribosomes. The ribosome dimers represent a hibernation state and are translationally inactive. A third protein that can bind to ribosomes when E. coli cells enter the stationary phase is YfiA (previously known as RaiA). HPF and YfiA are structurally similar, and both proteins can bind to the catalytic A- and P-sites of the ribosome. RMF blocks ribosome binding to mRNA by preventing interaction of the messenger with 16S rRNA. When bound to the ribosomes the C-terminal tail of E. coli YfiA interferes with the binding of RMF, thus preventing dimerization and resulting in the formation of translationally inactive monomeric 70S ribosomes.
In addition to ribosome dimerization, the joining of the two ribosomal subunits can be blocked by RsfS (formerly called RsfA or YbeB). RsfS binds to L14, a protein of the large ribosomal subunit, and thereby blocks joining of the small subunit to form a functional 70S ribosome, slowing down or blocking translation entirely. RsfS proteins are found in almost all eubacteria (but not archaea) and homologs are present in mitochondria and chloroplasts (where they are called C7orf30 and iojap, respectively). However, it is not known yet how the expression or activity of RsfS is regulated.
Another ribosome-dissociation factor in Escherichia coli is HflX, previously a GTPase of unknown function. Zhang et al. (2015) showed that HflX is a heat shock–induced ribosome-splitting factor capable of dissociating vacant as well as mRNA-associated ribosomes. The N-terminal effector domain of HflX binds to the peptidyl transferase center in a strikingly similar manner as that of the class I release factors and induces dramatic conformational changes in central intersubunit bridges, thus promoting subunit dissociation. Accordingly, loss of HflX results in an increase in stalled ribosomes upon heat shock and possibly other stress conditions.
Effect of antibiotics
Several antibiotics exert their action by targeting the translation process in bacteria. They exploit the differences between prokaryotic and eukaryotic translation mechanisms to selectively inhibit protein synthesis in bacteria without affecting the host.
See also
Prokaryotic initiation factors
Prokaryotic elongation factors
References
Molecular biology
Protein biosynthesis
Gene expression | Bacterial translation | [
"Chemistry",
"Biology"
] | 2,530 | [
"Protein biosynthesis",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
3,264,389 | https://en.wikipedia.org/wiki/Eukaryotic%20translation | Eukaryotic translation is the biological process by which messenger RNA is translated into proteins in eukaryotes. It consists of four phases: initiation, elongation, termination, and recycling.
Initiation
Translation initiation is the process by which the ribosome and its associated factors bind to an mRNA and are assembled at the start codon. This process is defined as either cap-dependent, in which the ribosome binds initially at the 5' cap and then travels to the start codon, or as cap-independent, where the ribosome does not initially bind the 5' cap.
Cap-dependent initiation
Initiation of translation usually involves the interaction of certain key proteins, the initiation factors, with a special tag bound to the 5'-end of an mRNA molecule, the 5' cap, as well as with the 5' UTR. These proteins bind the small (40S) ribosomal subunit and hold the mRNA in place.
eIF3 is associated with the 40S ribosomal subunit and plays a role in keeping the large (60S) ribosomal subunit from prematurely binding. eIF3 also interacts with the eIF4F complex, which consists of three other initiation factors: eIF4A, eIF4E, and eIF4G. eIF4G is a scaffolding protein that directly associates with both eIF3 and the other two components. eIF4E is the cap-binding protein. Binding of the cap by eIF4E is often considered the rate-limiting step of cap-dependent initiation, and the concentration of eIF4E is a regulatory nexus of translational control. Certain viruses cleave a portion of eIF4G that binds eIF4E, thus preventing cap-dependent translation to hijack the host machinery in favor of the viral (cap-independent) messages. eIF4A is an ATP-dependent RNA helicase that aids the ribosome by resolving certain secondary structures formed along the mRNA transcript. Recent structural biology results also indicated that a second eIF4A protein can simultaneously associate with the initiation complex, specifically interacting with eIF3. The poly(A)-binding protein (PABP) also associates with the eIF4F complex via eIF4G, and binds the poly-A tail of most eukaryotic mRNA molecules. This protein has been implicated in playing a role in circularization of the mRNA during translation.
This 43S preinitiation complex (43S PIC) accompanied by the protein factors moves along the mRNA chain toward its 3'-end, in a process known as 'scanning', to reach the start codon (typically AUG). In eukaryotes and archaea, the amino acid encoded by the start codon is methionine. The Met-charged initiator tRNA (Met-tRNAiMet) is brought to the P-site of the small ribosomal subunit by eukaryotic initiation factor 2 (eIF2). It hydrolyzes GTP, and signals for the dissociation of several factors from the small ribosomal subunit, eventually leading to the association of the large subunit (or the 60S subunit). The complete ribosome (80S) then commences translation elongation.
Regulation of protein synthesis is partly influenced by phosphorylation of eIF2 (via the α subunit), which is a part of the eIF2-GTP-Met-tRNAiMet ternary complex (eIF2-TC). When large numbers of eIF2 are phosphorylated, protein synthesis is inhibited. This occurs under amino acid starvation or after viral infection. However, a small fraction of this initiation factor is naturally phosphorylated. Another regulator is 4EBP, which binds to the initiation factor eIF4E and inhibits its interactions with eIF4G, thus preventing cap-dependent initiation. To oppose the effects of 4EBP, growth factors phosphorylate 4EBP, reducing its affinity for eIF4E and permitting protein synthesis.
While protein synthesis is globally regulated by modulating the expression of key initiation factors as well as the number of ribosomes, individual mRNAs can have different translation rates due to the presence of regulatory sequence elements. This has been shown to be important in a variety of settings including yeast meiosis and ethylene response in plants. In addition, recent work in yeast and humans suggest that evolutionary divergence in cis-regulatory sequences can impact translation regulation. Additionally, RNA helicases such as DHX29 and Ded1/DDX3 may participate in the process of translation initiation, especially for mRNAs with structured 5'UTRs.
Cap-independent initiation
The best-studied example of cap-independent translation initiation in eukaryotes uses the internal ribosome entry site (IRES). Unlike cap-dependent translation, cap-independent translation does not require a 5' cap to initiate scanning from the 5' end of the mRNA until the start codon. The ribosome can localize to the start site by direct binding, initiation factors, and/or ITAFs (IRES trans-acting factors) bypassing the need to scan the entire 5' UTR. This method of translation is important in conditions that require the translation of specific mRNAs during cellular stress, when overall translation is reduced. Examples include factors responding to apoptosis and stress-induced responses.
Elongation
Elongation depends on eukaryotic elongation factors. At the end of the initiation step, the mRNA is positioned so that the next codon can be translated during the elongation stage of protein synthesis. The initiator tRNA occupies the P site in the ribosome, and the A site is ready to receive an aminoacyl-tRNA. During chain elongation, each additional amino acid is added to the nascent polypeptide chain in a three-step microcycle. The steps in this microcycle are (1) positioning the correct aminoacyl-tRNA in the A site of the ribosome, which is brought into that site by eEF1, (2) forming the peptide bond, and (3) shifting the mRNA by one codon relative to the ribosome with the help of eEF2.
Unlike bacteria, in which translation initiation occurs as soon as the 5' end of an mRNA is synthesized, in eukaryotes, such tight coupling between transcription and translation is not possible because transcription and translation are carried out in separate compartments of the cell (the nucleus and cytoplasm). Eukaryotic mRNA precursors must be processed in the nucleus (e.g., capping, polyadenylation, splicing) in ribosomes before they are exported to the cytoplasm for translation.
Translation can also be affected by ribosomal pausing, which can trigger endonucleolytic attack of the tRNA, a process termed mRNA no-go decay. Ribosomal pausing also aids co-translational folding of the nascent polypeptide on the ribosome, and delays protein translation while it is encoding tRNA. This can trigger ribosomal frameshifting.
Termination
Termination of elongation depends on eukaryotic release factors. The process is similar to that of bacterial termination, but unlike bacterial termination, there is a universal release factor, eRF1, that recognizes all three stop codons. Upon termination, the ribosome is disassembled and the completed polypeptide is released. eRF3 is a ribosome-dependent GTPase that helps eRF1 release the completed polypeptide. The human genome encodes a few genes whose mRNA stop codon are surprisingly leaky: In these genes, termination of translation is inefficient due to special RNA bases in the vicinity of the stop codon. Leaky termination in these genes leads to translational readthrough of up to 10% of the stop codons of these genes. Some of these genes encode functional protein domains in their readthrough extension so that new protein isoforms can arise. This process has been termed 'functional translational readthrough'.
Regulation and modification of translation
Translation is one of the key energy consumers in cells, hence it is strictly regulated. Numerous mechanisms have evolved that control and regulate translation in eukaryotes as well as prokaryotes. Regulation of translation can impact the global rate of protein synthesis which is closely coupled to the metabolic and proliferative state of a cell. To delve deeper into this intricate process, scientists typically use a technique known as ribosome profiling. This method enables researchers to take a snapshot of the translatome, showing which parts of the mRNA are being translated into proteins by ribosomes at a given time. Ribosome profiling provides valuable insights into translation dynamics, revealing the complex interplay between gene sequence, mRNA structure, and translation regulation. Expanding on this concept, a more recent development is single-cell ribosome profiling, a technique that allows us to study the translation process at the resolution of individual cells. Single-cell ribosome profiling has the potential to shed light on the heterogeneous nature of cells, leading to a more nuanced understanding of how translation regulation can impact cell behavior, metabolic state, and responsiveness to various stimuli or conditions.
Amino acid substitution
In some cells certain amino acids can be depleted and thus affect translation efficiency. For instance, activated T cells secrete interferon-γ which triggers intracellular tryptophan shortage by upregulating the indoleamine 2,3-dioxygenase 1 (IDO1) enzyme. Surprisingly, despite tryptophan depletion, in-frame protein synthesis continues across tryptophan codons. This is achieved by incorporation of phenylalanine instead of tryptophan. The resulting peptides are called W>F "substitutants". Such W>F substitutants are abundant in certain cancer types and have been associated with increased IDO1 expression. Functionally, W>F substitutants can impair protein activity.
See also
40S
60S
80S
Eukaryotic initiation factor
Eukaryotic elongation factors
Eukaryotic release factors
References
External links
Animation at wku.edu
Animations at nobelprize.org
Molecular biology
Protein biosynthesis
Gene expression
Eukaryote genetics | Eukaryotic translation | [
"Chemistry",
"Biology"
] | 2,174 | [
"Protein biosynthesis",
"Eukaryote genetics",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Genetics by type of organism"
] |
3,264,579 | https://en.wikipedia.org/wiki/Initial%20mass%20function | In astronomy, the initial mass function (IMF) is an empirical function that describes the initial distribution of masses for a population of stars during star formation. IMF not only describes the formation and evolution of individual stars, it also serves as an important link that describes the formation and evolution of galaxies.
The IMF is often given as a probability density function (PDF) that describes the probability of a star that has a certain mass during its formation. It differs from the present-day mass function (PDMF), which describes the current distribution of masses of stars, such as red giants, white dwarfs, neutron stars, and black holes, after some time of evolution away from the main sequence stars and after a certain amount of mass loss. Since there are not enough young clusters of stars available for the calculation of IMF, PDMF is used instead and the results are extrapolated back to IMF. IMF and PDMF can be linked through the "stellar creation function". Stellar creation function is defined as the number of stars per unit volume of space in a mass range and a time interval. In the case that all the main sequence stars have greater lifetimes than the galaxy, IMF and PDMF are equivalent. Similarly, IMF and PDMF are equivalent in brown dwarfs due to their unlimited lifetimes.
The properties and evolution of a star are closely related to its mass, so the IMF is an important diagnostic tool for astronomers studying large quantities of stars. For example, the initial mass of a star is the primary factor of determining its colour, luminosity, radius, radiation spectrum, and quantity of materials and energy it emitted into interstellar space during its lifetime. At low masses, the IMF sets the Milky Way Galaxy mass budget and the number of substellar objects that form. At intermediate masses, the IMF controls chemical enrichment of the interstellar medium. At high masses, the IMF sets the number of core collapse supernovae that occur and therefore the kinetic energy feedback.
The IMF is relatively invariant from one group of stars to another, though some observations suggest that the IMF is different in different environments, and potentially dramatically different in early galaxies.
Development
The mass of a star can only be directly determined by applying Kepler's third law to a binary star system. However, the number of binary systems that can be directly observed is low, so there are not enough samples to estimate the initial mass function. Therefore, the stellar luminosity function is used to derive a mass function (a present-day mass function, PDMF) by applying the mass–luminosity relation. The luminosity function requires accurate determination of distances, and the most straightforward way is by measuring stellar parallax within 20 parsecs from the earth. Although short distances yield a smaller number of samples with greater uncertainty of distances for stars with faint magnitudes (with a magnitude > 12 in the visual band), it reduces the error of distances for nearby stars, and allows accurate determination of binary star systems. Since the magnitude of a star varies with its age, the determination of the mass–luminosity relation should also take into account its age. For stars with masses above , it takes more than 10 billion years for their magnitude to increase substantially. For low-mass stars with masses below , it takes 5 × 10⁸ years to reach the main sequence.
The IMF is often stated in terms of a series of power laws, where N(m) dm (sometimes also represented as ξ(m) dm), the number of stars with masses in the range m to m + dm within a specified volume of space, is proportional to m^(−α), where α is a dimensionless exponent.
Commonly used forms of the IMF are the Kroupa (2001) broken power law and the Chabrier (2003) log-normal.
Salpeter (1955)
Edwin E. Salpeter is the first astrophysicist who attempted to quantify the IMF by applying a power law in his equations. His work is based upon sun-like stars that can be easily observed with great accuracy. Salpeter defined the mass function as the number of stars in a volume of space, observed at a given time, per logarithmic mass interval. His work enabled a large number of theoretical parameters to be included in the equation while converging all these parameters into an exponent of 2.35. The Salpeter IMF is
ξ(m) Δm = ξ0 (m / M☉)^(−2.35) (Δm / M☉),
where ξ0 is a constant relating to the local stellar density.
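A minimal numerical sketch of this power law is given below in Python. The normalisation ξ0 is set to 1 here (in practice it is fixed by the local stellar density), and the mass limits in the example are arbitrary illustrative choices.

import numpy as np

ALPHA = 2.35   # Salpeter exponent
XI0 = 1.0      # arbitrary normalisation; set by the local stellar density in practice

def salpeter_imf(m):
    # Number of stars per unit mass interval at mass m (in solar masses).
    return XI0 * np.asarray(m, dtype=float) ** (-ALPHA)

def star_count(m_low, m_high):
    # Analytic integral of the power law between two masses.
    return XI0 / (ALPHA - 1.0) * (m_low ** (1.0 - ALPHA) - m_high ** (1.0 - ALPHA))

# Under this IMF, stars of 0.1–1 solar mass outnumber 1–100 solar mass stars ~20:1.
print(star_count(0.1, 1.0) / star_count(1.0, 100.0))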
Miller–Scalo (1979)
Glenn E. Miller and John M. Scalo extended the work of Salpeter, by suggesting that the IMF "flattened" () when stellar masses fell below .
Kroupa (2002)
Pavel Kroupa kept α = 2.3 between 0.5 M☉ and 1 M☉, but introduced α = 1.3 between 0.08 and 0.5 M☉ and α = 0.3 below 0.08 M☉. Above 1 M☉, correcting for unresolved binary stars also adds a fourth domain with α = 2.7.
Chabrier (2003)
Gilles Chabrier gave the following expression for the density of individual stars in the Galactic disk, in units of pc⁻³:
This expression is log-normal, meaning that the logarithm of the mass follows a Gaussian distribution up to 1 M☉.
For stellar systems (namely binaries), he gave:
Slope
The initial mass function is typically graphed on a logarithmic scale of log(N) vs log(m). Such plots give approximately straight lines with a slope Γ equal to 1 − α. Hence Γ is often called the slope of the initial mass function. The present-day mass function, for coeval formation, has the same slope except that it rolls off at higher masses which have evolved away from the main sequence.
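The following Python sketch illustrates the relationship: masses drawn from a Salpeter-like power law (inverse-transform sampling over an arbitrary mass range) are binned in log(m), and a straight-line fit to log(N) per bin recovers a slope close to Γ = 1 − α ≈ −1.35. All numerical choices here are illustrative.

import numpy as np

rng = np.random.default_rng(0)
alpha, m_min, m_max = 2.35, 0.1, 100.0   # illustrative choices

# Inverse-transform sampling of p(m) proportional to m**(-alpha) on [m_min, m_max].
u = rng.random(200_000)
a, b = m_min ** (1 - alpha), m_max ** (1 - alpha)
masses = (a + u * (b - a)) ** (1 / (1 - alpha))

# Bin in log10(m) and fit a line to log10(N) per bin.
counts, edges = np.histogram(np.log10(masses), bins=30)
centres = 0.5 * (edges[:-1] + edges[1:])
keep = counts > 0
slope = np.polyfit(centres[keep], np.log10(counts[keep]), 1)[0]
print("fitted slope:", round(slope, 2), "expected:", round(1 - alpha, 2))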
Uncertainties
There are large uncertainties concerning the substellar region. In particular, the classical assumption of a single IMF covering the whole substellar and stellar mass range is being questioned, in favor of a two-component IMF to account for possible different formation modes for substellar objects—one IMF covering brown dwarfs and very-low-mass stars, and another ranging from the higher-mass brown dwarfs to the most massive stars. This leads to an overlap region approximately between where both formation modes may account for bodies in this mass range.
Variation
The possible variation of the IMF affects our interpretation of the galaxy signals and the estimation of cosmic star formation history thus is important to consider.
In theory, the IMF should vary with different star-forming conditions. Higher ambient temperature increases the mass of collapsing gas clouds (Jeans mass); lower gas metallicity reduces the radiation pressure, thus making the accretion of gas easier; both lead to more massive stars being formed in a star cluster. The galaxy-wide IMF can be different from the star-cluster scale IMF and may systematically change with the galaxy star formation history.
Measurements of the local universe where single stars can be resolved are consistent with an invariant IMF, but the conclusion suffers from large measurement uncertainty due to the small number of massive stars and difficulties in distinguishing binary systems from single stars. Thus the effect of IMF variation is not prominent enough to be observed in the local universe. However, recent photometric surveys across cosmic time do suggest a potentially systematic variation of the IMF at high redshift.
Systems formed at much earlier times or further from the galactic neighborhood, where star formation activity can be hundreds or even thousands of times stronger than in the current Milky Way, may give a better understanding. It has been consistently reported both for star clusters and galaxies that there seems to be a systematic variation of the IMF. However, the measurements are less direct. For star clusters the IMF may change over time due to complicated dynamical evolution.
Origin of the Stellar IMF
Recent studies have suggested that filamentary structures in molecular clouds play a crucial role in the initial conditions of star formation and the origin of the stellar IMF. Herschel observations of the California giant molecular cloud show that both the prestellar core mass function (CMF) and the filament line mass function (FLMF) follow power-law distributions at the high-mass end, consistent with the Salpeter power-law IMF. Specifically, the CMF follows for masses greater than , and the FLMF follows for filament line masses greater than . Recent research suggests that the global prestellar CMF in molecular clouds is the result of the integration of CMFs generated by individual thermally supercritical filaments, which indicates a tight connection between the FLMF and the CMF/IMF, supporting the idea that filamentary structures are a critical evolutionary step in establishing a Salpeter-like mass function.
References
Notes
Further reading
External links
Stellar astronomy
Equations of astronomy
Mass
Concepts in stellar astronomy | Initial mass function | [
"Physics",
"Astronomy",
"Mathematics"
] | 1,750 | [
"Scalar physical quantities",
"Astronomical sub-disciplines",
"Physical quantities",
"Concepts in astrophysics",
"Concepts in astronomy",
"Quantity",
"Mass",
"Size",
"Equations of astronomy",
"Concepts in stellar astronomy",
"Wikipedia categories named after physical quantities",
"Matter",
"... |
16,774,174 | https://en.wikipedia.org/wiki/Reflectin | Reflectins are a family of intrinsically disordered proteins evolved by a certain number of cephalopods including Euprymna scolopes and Doryteuthis opalescens to produce iridescent camouflage and signaling. The recently identified protein family is enriched in aromatic and sulfur-containing amino acids, and is utilized by certain cephalopods to refract incident light in their environment. The reflectin protein is responsible for dynamic pigmentation and iridescence in organisms. This process is "dynamic" due to its reversible properties, allowing reflectin to change an organism's appearance in response to external factors such as needing to camouflage or send warning signals.
Reflectin proteins are likely distributed in the outer layer of cells called "sheath cells" that surround an organism's pigment cells, also known as chromatocytes. Specific sequences of reflectin enable cephalopods to communicate and camouflage themselves by adjusting color and reflectivity.
Origin
Reflectin is presumed to have originated from a type of transposon (nicknamed jumping genes), which is a DNA sequence that can change positions within genetic material by encoding an enzyme. The encoded enzyme detaches the transposon from one location in a genome and ligates (binds) it to another. "Jumps" of a transposon can create or reverse mutations that alter a cell's genetic identity, which can result in new characteristics. This process can be thought of as a "cut and paste" mechanism. Transposons' ability to adapt in a genome and quickly shift its identity is a property that closely resembles the behavior of reflectin.
An additional ancestor could be the symbiotic bacterium Vibrio fischeri (also called Aliivibrio fischeri), which is bioluminescent (produces and emits light) and often found in symbiotic relationships. As reflectin and Vibrio fischeri share similar functions, such as producing an iridescent appearance in organisms, it is also thought that, just like Vibrio fischeri, reflectin is symbiotic and is used by cephalopods to interact with their environment.
Structure
Reflectin is a disordered protein made up of conserved amino acid sequences. Each sequence includes a combination of standard and sulphur-containing amino acids. Although the basic structure can be deduced, the exact molecular structure is yet to be determined. Light interacting properties of reflectin can be attributed to its ordered hierarchical structure and hydrogen bonding.
Reflectin in membranes
Reflectin proteins make up the majority of Bragg reflectors, which are formed by invaginations of the cell membrane. Bragg reflectors are responsible for reflecting color in a type of skin cell called an iridocyte. Reflectors are composed of periodically stacked lamellae, which are thin layers of tissue bound to a membrane. The color and brightness of light reflected by many species are determined by the thickness, spacing, and refractive index (how fast light can travel through the membrane) of the Bragg lamellae. A change in membrane thickness triggers an outflow of water from the Bragg lamellae, essentially dehydrating them, increasing their refractive index and decreasing their thickness and spacing. This results in an increase in reflectance from the Bragg lamellae, and a change in color of the reflected light. This change additionally allows initially transparent cells to increase in brightness.
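As a rough physical illustration (not based on measured cephalopod values), the short Python sketch below applies the first-order, normal-incidence Bragg-stack relation (peak wavelength ≈ 2(n1·d1 + n2·d2)) to show how thinner, higher-index lamellae shift the reflected peak toward shorter wavelengths. All thicknesses and refractive indices here are assumed, illustrative numbers.

def bragg_peak_nm(n_plate, d_plate_nm, n_space, d_space_nm, order=1):
    # Reflectance peak of a periodic two-layer stack at normal incidence:
    # order * wavelength = 2 * (n1*d1 + n2*d2).
    return 2.0 * (n_plate * d_plate_nm + n_space * d_space_nm) / order

# Hydrated lamellae: thicker, lower-index plates -> longer (redder) peak (values assumed).
print(bragg_peak_nm(1.44, 120, 1.33, 120))   # ~665 nm
# Condensed lamellae: thinner, higher-index plates -> shorter (bluer) peak (values assumed).
print(bragg_peak_nm(1.56, 90, 1.33, 80))     # ~494 nm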
Mechanisms
Reflectin is able to receive information from signals for a continuous process to fine-tune the osmotic pressure of sub-cellular structures of cephalopods. This ongoing process is used to regulate photonic behavior, or in other words, control how an organism changes color. The components of reflectin carry a very strong positive charge. Nerve signals are sent to iridophore cells (also called chromatophores) which are pigment-containing cells that add a negative charge to reflectin. With the charges balanced, the protein folds up to expose a sticky surface, causing reflectin molecules to clump together. This process repeats until enough reflectin proteins have accumulated to change the fluid pressure of the membrane of the cell walls. The thickness of the membrane reduces as water escapes, a process that changes the wavelength of light reflected. By adapting an organism's membrane to reflect different wavelengths, reflectin allows cephalopods to shift between different colors of red, yellow, green, and blue, as well as adjust the brightness of the projected color.
Current Research
Research teams at the ICB (Institute for Collaborative Biotechnologies) discovered that reflectin assembly can be electrically fine-tuned, suggesting a new biotic-abiotic approach to controlling protein machines similar to reflectin.
Work by researchers at the University of California, Santa Barbara (UCSB) may have implications for molecular engineering based on mechanisms similar to the transformations controlled by reflectin. Discoveries about reflectin may even point the way towards treatments for Alzheimer's disease. The processes used by reflectin are similar to those seen when proteins assemble in the brain during the progression of protein-related diseases like Alzheimer's and Parkinson's, and understanding them may suggest how such brain-damaging pathology might be reversed.
Researchers think that the reversible mechanisms used by the reflectin protein may be replicated to develop dynamic living human cells and tissues. These findings could be applied to the development of biophotonic tools used in materials science and bioengineering, such as the optical engineering of human cells.
Based on reflectin's role in camouflaging cephalopods, researchers believe it is possible to use it as a material for the growth of human neural stem and progenitor cells.
Use in bioengineering
Reflectins have been heterologously expressed in mammalian cells to change their refractive index.
References
Further reading
Molluscan proteins
Marine biology
Optical phenomena
Bobtail squid | Reflectin | [
"Physics",
"Biology"
] | 1,181 | [
"Optical phenomena",
"Physical phenomena",
"Marine biology"
] |
16,774,511 | https://en.wikipedia.org/wiki/Fluting%20%28architecture%29 | Fluting in architecture and the decorative arts consists of shallow grooves running along a surface. The term typically refers to the curved grooves (flutes) running vertically on a column shaft or a pilaster, but is not restricted to those two applications. If the scoops taken out of the material meet in a sharp ridge, the ridge is called an arris. If the raised ridge between two flutes appears flat, the ridge is a . Fluted columns are common in the tradition of classical architecture but were not invented by the ancient Greeks, but rather passed down or learned from the Mycenaeans or the Egyptians.
Especially in stone architecture, fluting distinguishes the column shafts and pilasters visually from plain masonry walls behind. Fluting promotes a play of light on a column which helps the column appear more perfectly round than a smooth column. As a strong vertical element it also has the visual effect of minimizing any horizontal joints. Greek architects viewed rhythm as an important design element. As such, fluting was often used on buildings and temples to increase the sense of rhythm. It may also be incorporated in columns to make them look thinner, lighter, and more elegant.
It is generally agreed that fluting was used on wooden columns (none of which have survived) before it was used on stone; applying concave fluting with a curved adze to wooden columns made from tree trunks would have been relatively easy. Convex fluting was probably intended to imitate plant forms. Minoan and Mycenaean architecture used both, but Greek and Roman architecture used the concave style almost exclusively.
Fluting was very common in formal ancient Greek architecture, and compulsory in the Greek Doric order. It was optional for the Ionic and Corinthian orders. In Roman architecture it was used a good deal less, and effectively disappeared in European medieval architecture. It was revived in Renaissance architecture, without becoming usual, but in Neoclassical architecture once again became very common in larger buildings. Throughout all this, fluting was used in several of the decorative arts in various media.
Cabled fluting
If the flutes (hollowed-out grooves) are partly re-filled with moulding, this form of decorated fluting is cabled fluting, ribbed fluting, rudenture, stopped fluting or stop-fluting. Cabling refers to this or cable molding.
When this occurs in columns, it is on roughly the lower third of the grooves. This decorative element is not used in Doric order columns. Cabled fluting may have been used to prevent wear and damage to the sharp edges of the flutes along the bottom part of the column.
Spiral fluting
Spiral fluting is a rather rare style in Roman architecture, and even rarer in the later classical tradition. However, it was in fashion in the Eastern Roman Empire between about 100 and 250 AD.
What is in effect horizontal "fluting" is sometimes applied, in particular to parts of the bases of columns. It tends to be called "banding".
Applications
Fluted columns in the Doric order of classical architecture have 20 flutes. Ionic, Corinthian, and Composite columns traditionally have 24. Fluting is never used on Tuscan order columns. Flat-faced pilasters generally have between five and seven flutes.
Fluting is always applied exclusively to the shaft of the column, and may run either the entire shaft length from the base to the capital, or with the lower third of the column shaft filled. The latter application is used to complement the entasis of the column, which begins one third of the way up from the bottom of the shaft.
Fluting might be applied to freestanding, structural columns, as well as engaged columns and decorative pilasters.
By period
Egyptian architecture
Ancient Egyptian architecture used fluting in many buildings; most often the flutes are convex rather than concave, so the effect is the inverse of Greek fluting. Fluting is generally with the intention of making the column look like a bundle of plant stems, and the "papyriform column" is one of several types, which did not become standardized into "orders" in the Greek way. Often vertical fluting is interrupted by horizontal bands, suggesting binding holding a group of stems together.
One of the earliest remaining examples of fluting in limestone columns can be seen at Djoser's necropolis in Saqqara, built by Imhotep in the 27th century BC. The Temple of Luxor, mostly about 1400 BC, has different types in different areas. In some types only part of the shaft is fluted; some columns at Luxor have five different zones of vertical fluting or horizontal banding.
Some of the smaller columns at the Temple of Hatshepsut, Deir el-Bahari, Egypt, 1470 BC bear a considerable resemblance to the Greek Doric column, although the capitals are plain square blocks. The columns taper slightly and have broad flutes that disappear into the floor. It has been suggested that columns of this type influenced the Greeks.
Persian architecture
Persian columns do not follow the Classical orders, but were developed during the Achaemenid Empire in ancient Persia, over roughly the same period that Doric temples developed in Greece. The ruins of Persepolis, Iran, where examples can be most clearly be seen, are probably mostly from the 6th century BC. In grand settings the columns are usually fluted, with tall capitals featuring two highly decorated animals, and column bases of various types.
The flutes are shallow, with arrises, like the Greek Doric, but they are more numerous, and therefore narrower. The large columns at Persepolis have as many as 40 or 48 flutes, with smaller columns elsewhere 32; the width of a flute is kept fairly constant, so the number of flutes increases with the girth of the column, in contrast to the Greek practice of keeping the number of flutes on a column constant and varying the width of the flute. The early Doric temples seem to have had a similar principle, before 20 flutes became the convention.
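The arithmetic behind this contrast can be sketched in a few lines of Python; the column diameters and the assumed Persian flute width of roughly 10 cm are illustrative numbers, not measurements of any particular building.

import math

def flute_width_m(diameter_m, n_flutes):
    # Approximate arc width of one flute on a circular shaft.
    return math.pi * diameter_m / n_flutes

# Greek Doric convention: 20 flutes regardless of girth, so flute width grows with diameter.
for d in (0.9, 1.2, 1.9):
    print("Doric, diameter %.1f m: flute about %.0f cm wide" % (d, 100 * flute_width_m(d, 20)))

# Persian convention: flute width held near ~10 cm, so the number of flutes grows instead.
for d in (0.9, 1.2, 1.9):
    print("Persian, diameter %.1f m: about %d flutes" % (d, round(math.pi * d / 0.10)))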
Fluting is also found in other parts of the classical Persian column. The bases are often fluted, and the "bell" part of the capital has stylized plant ornament that comes close to fluting. Above this there is usually a tall section with four flat fluted volutes.
Classical architecture
Fluting was used in both Greek and Roman architecture, especially for temples, but then became rare in Byzantine architecture, where the emphasis was on fine coloured stone, and the architecture of the Middle Ages in the West.
Greek architecture
Columns in buildings of the Doric order were almost always fluted; the unfluted columns of the temple of Segesta in Sicily are one of the reasons that archaeologists believe the temple was never completed, probably because of war. They demonstrate that the plain columns, made of several circular "drums", were put into place before the flutes were carved to ensure the grooves matched up perfectly.
But the flutes of the top and bottom drums appear to have been started, to give a guide for the rest. A now isolated Ionic column at the Temple of Apollo, Didyma shows this; only part of the top drum has been fluted. Another unfinished Ionic drum section in the agora at Kos has been marked up for fluting, which never took place. In both of these examples there are rather wide margins outside the fluting to the roughly finished surface. There has been considerable modern exploration of the mathematical techniques used to create models of templates for fluting. The practical problems for the masons were increased by the variable girth of the shafts, which both tapered overall and had the entasis swelling in the middle.
Greek masons had also to allow for the various refinements, or subtle departures from the apparent geometry of the design, that Greek architects introduced. These include entasis, swelling in the middle part of the shaft, tapering at the top of the shaft, and a slight slant to the whole column. In the Parthenon the depth of the flutes increases towards the top of the shafts.
In the earliest Doric examples the columns are rather slim, and often only have 16 flutes. By the mid-6th century BC shafts were thicker, and 20 became settled as the number of flutes, thereafter very rarely deviated from when using the Doric order. This fixing of the number seems to have happened while "Temple C" at Selinus was being built, around 550 BC, as there is a mixture of 16 and 20 flutes.
In some buildings, especially secular stoas and the like, the bottom of the shaft might be left smooth up to about the height of a man. Greek Doric columns had no base, and this prevented the flutes, which ended in a sharp arris, being worn down by people brushing past. The flutes continue right down to the base of the column, and at the top usually pass through three very narrow bands cut into the stone before reaching the base of the capital, where the shaft swells slightly. The flutes were carved by making an initial narrow cut to the appropriate depth in the centre of each flute, then shaping the curved sides. By the time of the second Heraion of Samos, perhaps around 550 BC, lathes were being used.
Fluting is treated as optional in Ionic and Corinthian buildings, or perhaps was sometimes left for later if money was running short; in some buildings the fluting was probably carved long after the initial "completion".
The fluting used for the Ionic and Corinthian orders was slightly different, normally with fillets between the flutes, that may appear flat, but actually follow the curvature of the column. Despite Ionic columns of a given height being slimmer than Doric ones, they have more flutes, with 24 being settled on as the standard, after early experiments. These took the number as high as 48 in some columns in the second building of the Temple of Artemis at Ephesus in Turkey, one of the earliest "really large Greek temples", of about 550 BC.
Ionic and Corinthian flutes are also deeper, some approaching a semi-circle, and are usually terminated at the top and bottom by a semi-circular scoop, followed by a small distance where the column has its full circular profile, or indeed swells. These orders always have a base to the columns, often an elaborate one.
Roman architecture
While Greek temples employed columns for load-bearing purposes, Roman architects often used columns more as decorative elements. They tend to use fluting less often than the Greeks in the Ionic and Corinthian orders, and to mix fluted and unfluted columns in the same building more often. The external columns on the Colosseum, which use the three classical orders on different levels, are not fluted, nor are the large monolithic granite Corinthian columns of the portico of the Pantheon, Rome, a very grand temple, though many columns in the interior are.
However, it is possible that in some buildings fluting in stucco, "so much used and so rarely preserved" according to J. B. Ward-Perkins, was applied to stone columns. Roman Doric columns "nearly always" have a base, although Vitruvius does not insist on one.
Fluted Corinthian columns perhaps became associated with imperial grandeur. Even rather small provincial caesariums, or temples of the Imperial cult have them on their porches, as do imperial triumphal arches. Examples of temples include the Maison carrée, the Roman Temple of Évora, and Temple of Augustus, Barcelona in provincial centres, as well as the much larger temples in Rome, such as the Temple of Vespasian and Titus. However the Temple of Augustus, Pula has plain Corinthian columns. Triumphal arches with fluting include the Arch of Augustus in Rimini, and the one in Susa, Arch of Trajan in Ancona, and all the imperial arches in Rome. Large temples with unfluted columns include the Temple of Saturn (Ionic, and a late rebuilding), the Temple of Venus and Rome, and others in the Roman Forum.
Indian architecture
Sections of column shafts with relatively shallow vertical concave fluting were used in India, especially in early rock-cut architecture, as at the Buddhist Ajanta Caves. They were typically mixed with horizontal bands of more complex ornament, such as garlands or floral scrolls. These were useful for covering what might be awkward transitions between different zones. Spiral fluting is sometimes found in the same way, as inside Cave 26 at Ajanta, from the late 5th or early 6th century.
Similar visual effects are more often achieved by giving column shafts several flat faces. The Heliodorus pillar of about 113 BC has three different zones with 8, 16 and 32 flat faces (lowest first), with a round zone above that.
Fluting was also used in capitals, in contrast to the Greco-Roman tradition. The "bell" capitals of the Ashoka columns are fluted, as are the flatter capitals in Cave 26 of the Ajanta Caves. In the Ashoka columns the flutes are stylized leaves, clinging to the bell, with round bottoms.
Chinese architecture
Fluted columns, some with entasis, were one of the options available to Chinese architects and cave-carvers (survivals are mostly in Buddhist rock-carved shrines) in the 3rd to 6th centuries AD. Some engaged columns were also topped by quasi-capital with volutes, but usually curling up, rather than down as in the Ionic; in some cases these were also at the bottom of the shaft. The possibility of influence, perhaps indirect, from the Greco-Roman world has been discussed by scholars. However, vertical fluting cannot be called a common form of decoration.
Byzantine and medieval European architecture
In Byzantine architecture columns were mostly relatively small and functional rather than decorative. They were used to support galleries, ciboriums over altars and the like. Byzantine taste appreciated rare and expensive types of stone, and like to see these in round and polished form. Even ancient columns re-used as spolia were probably smoothed down if fluted, as they are so rarely seen in Byzantine buildings.
Columns continued to be important in Romanesque and Gothic architecture, often engaged or clustered together in bunches. But the shafts are almost always plain. An exception is two of the large columns ("piers") in the nave of Durham Cathedral (c. 1120s). These have a distinctive format of alternating convex and concave flutes. These were carved on the stones before the pier was erected.
The entrance of the Castel del Monte, Apulia, Italy, an imperial castle from the 1240s, has very thin fluted pilasters under a pediment, in an early and rather shaky attempt to revive classical forms.
Renaissance architecture
The revival of classical architectural elements, including Classical order columns, was central to Renaissance architecture, built between the 15th and 17th centuries in Europe. But columns were used sparingly in the Early Renaissance, except for courtyard arcades, and fluting is slow to appear.
The Pazzi Chapel in Florence by Filippo Brunelleschi (1429) has plain columns (outside) but cable-fluted pilasters inside and out. A similar mixture is seen in St Peter's Basilica in Rome, where the giant order columns on the facade are plain, but the main pilasters in the interior are cable-fluted, and smaller columns, for example framing the doors, are fluted.
Plain columns and fluted pilasters became a common mixture, not least because at least the internal pilasters are often stucco over brick, making fluting much easier and cheaper than carving in stone.
Although, like other Renaissance manuals, I quattro libri dell'architettura by Andrea Palladio (1570) recommended and illustrated the conventional Vitruvian styles of fluting, in his own buildings Palladio very rarely used fluting; in the Doric and Corinthian orders, his shafts are "almost never fluted", and in the Ionic he "never used fluted shafts".
Neoclassical architecture
Fluting dramatically returned to European architecture in the late 18th century with Neoclassical architecture, especially Greek Revival architecture. By this time publications which measured and illustrated authentic Greek Doric buildings were available, and a stark Doric look became fashionable in Germany (where it was partly a gesture against over-elegant French styles), Britain and the United States. Fluting became more common, even usual for grand buildings, even in the Ionic and Corinthian orders.
A gentler version of the style is exemplified throughout many government buildings and monuments in the United States, though some buildings like the Lincoln Memorial in Washington, D.C. (1922), continued to use Greek Doric with no bases to the columns. In the 20th century New Classical architecture made considerable use of fluting.
Decorative arts
Fluting, very often convex, is also found in various media in the decorative arts, including metalware, wooden furniture, glass and pottery. It was common in English cut glass of the Georgian period. In metal plate armour, fluting was very practical, strengthening the plate against heavy blows. It was especially common in the early 16th-century style called Maximilian armour, after Maximilian I, Holy Roman Emperor.
See also
Fluting (geology)
Solomonic column
Gadrooning: curving convex fluting
Reeding: the opposite of fluting
Molding (decorative)
Notes
References
Irwin, John, "The Heliodorus Pillar: A Fresh Appraisal", AARP, Art and Archaeology Research Papers, December, 1974, Internet archive, (also published in Purātattva, 8, 1975–1976, pp. 166–178)
Lawrence, A. W., Greek Architecture, 1957, Penguin, Pelican history of art
Semper, Gottfried, Style in the technical and tectonic arts, or, Practical aesthetics, 2004 translation of Der Stil in der technischen und tektonischen Künsten (1860–62), Getty Research Institute, ISBN 9780892365975, google books
Steinhardt, Nancy Shatzman, Chinese Architecture in an Age of Turmoil, 200-600, University of Hawaii Press, 2014, ISBN 9780824838232, google books
Summerson, John, The Classical Language of Architecture, 1980 edition, Thames and Hudson World of Art series,
Ward-Perkins, John Bryan, Roman Imperial Architecture, 1981, Penguin Books,
Watkin, David, A History of Western Architecture, 1986, Barrie & Jenkins,
External links
University of Pittsburgh - "fluting" from the Medieval Art and Architecture glossary
Architectural elements
Decorative arts | Fluting (architecture) | [
"Technology",
"Engineering"
] | 3,817 | [
"Building engineering",
"Architectural elements",
"Components",
"Architecture"
] |
16,779,974 | https://en.wikipedia.org/wiki/Rubidium%20iodide | Rubidium iodide is a salt of rubidium and iodine, with the chemical formula RbI. It is a white solid with a melting point of 642 °C.
Preparation
Rubidium iodide can be synthesized in several ways. One is the neutralization of rubidium hydroxide with hydriodic acid (hydrogen iodide):
RbOH + HI → RbI + H2O
Another method is to neutralize rubidium carbonate with hydriodic acid:
Rb2CO3 + 2 HI → 2 RbI + H2O + CO2
Another method is to react rubidium metal directly with iodine, but because rubidium metal is very expensive, this is the least commonly used method. In addition, rubidium reacts violently with halogens, burning as it does so:
2 Rb + I2 → 2 RbI
Properties
Rubidium iodide forms colorless crystals and has a red-violet flame color. The refractive index of the crystals is nD = 1.6474. It reacts with halogens to form polyhalides such as RbI3, RbICl2 and RbICl4. It is easily soluble in water, liquid ammonia and sulfuric acid; with ammonia and sulfur dioxide it forms the solvates RbI·6NH3 and RbI·3SO2.
The standard enthalpy of formation of rubidium iodide is ΔfH0298 = −328.7 kJ mol−1, the standard free enthalpy of formation ΔG0298 = −325.7 kJ mol−1, and the standard molar entropy S0298 = 118.11 J K−1·mol−1.
Rubidium iodide has a sodium chloride structure; its lattice constant is a = 7.326 Å, and the Rb–I bond length is 3.66 Å.
Applications
Rubidium iodide is used as a component of eye drops; in Romania it is sold under the name Rubjovit® (containing 8 mg/ml RbI). Another such product is Polijodurato®. However, some studies indicate that rubidium iodide has allergy-triggering and inflammation-causing side effects. Homeopathic products containing rubidium iodide are available under the name 'Rubidium iodatum'. Towards the end of the 19th century it was used to treat syphilis.
It has found occasional use in organic synthesis, for example for the targeted saponification of a polymethylated phosphate.
References
Bibliography
CRC Handbook of Chemistry and Physics, 77th edition
Rubidium compounds
Iodides
Metal halides
Alkali metal iodides
Rock salt crystal structure | Rubidium iodide | [
"Chemistry"
] | 522 | [
"Inorganic compounds",
"Metal halides",
"Salts"
] |
4,426,385 | https://en.wikipedia.org/wiki/Aridity%20index | An aridity index (AI) is a numerical indicator of the degree of dryness of the climate at a given location. The American Meteorological Society defined it in meteorology and climatology, as "the degree to which a climate lacks effective, life-promoting moisture". Aridity is different from drought because aridity is permanent whereas drought is temporary. A number of aridity indices have been proposed (see below); these indicators serve to identify, locate or delimit regions that suffer from a deficit of available water, a condition that can severely affect the effective use of the land for such activities as agriculture or stock-farming.
Historical background and indices
Köppen
At the turn of the 20th century, Wladimir Köppen and Rudolf Geiger developed the concept of a climate classification where arid regions were defined as those places where the annual rainfall accumulation (in centimetres) is less than a threshold R, where:
R = 2t if rainfall occurs mainly in the cold season,
R = 2(t + 7) if rainfall is evenly distributed throughout the year, and
R = 2(t + 14) if rainfall occurs mainly in the hot season,
where t is the mean annual temperature in degrees Celsius.
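As a worked illustration, the short Python sketch below evaluates this threshold; the function name and inputs are illustrative, and the coefficients are those stated above.

def koppen_aridity_threshold_cm(mean_annual_temp_c, rainfall_season):
    # Annual rainfall (in centimetres) below which a climate counts as arid,
    # using the three seasonal cases given above.
    if rainfall_season == "cold":    # rain falls mainly in the cold season
        return 2 * mean_annual_temp_c
    if rainfall_season == "even":    # rain is evenly distributed over the year
        return 2 * (mean_annual_temp_c + 7)
    if rainfall_season == "hot":     # rain falls mainly in the hot season
        return 2 * (mean_annual_temp_c + 14)
    raise ValueError("rainfall_season must be 'cold', 'even' or 'hot'")

# Example: with a mean annual temperature of 18 °C and mainly cold-season rain,
# a location is arid if it receives less than 36 cm of rain per year.
print(koppen_aridity_threshold_cm(18, "cold"))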
This was one of the first attempts at defining an aridity index, one that reflects the effects of the thermal regime and the amount and distribution of precipitation in determining the native vegetation possible in an area. It recognizes the significance of temperature in allowing colder places such as northern Canada to be seen as humid with the same level of precipitation as some tropical deserts because of lower levels of potential evapotranspiration in colder places. In the subtropics, the allowance for the distribution of rainfall between warm and cold seasons recognizes that winter rainfall is more effective for plant growth that can flourish in the winter and go dormant in the summer than the same amount of summer rainfall during a warm-to-hot season. Thus a place like Athens, Greece that gets most of its rainfall in winter can be considered to have a humid climate (as attested in lush foliage) with roughly the same amount of rainfall that imposes semi-desert conditions in Midland, Texas, where rainfall largely occurs in the summer.
Thornthwaite
In 1948, C. W. Thornthwaite proposed an AI defined as:
AI = 100 d / n
where the water deficiency d is calculated as the sum of the monthly differences between precipitation and potential evapotranspiration for those months when the normal precipitation is less than the normal evapotranspiration; and where n stands for the sum of monthly values of potential evapotranspiration for the deficient months (after Huschke, 1959). This AI was later used by Meigs (1961) to delineate the arid zones of the world in the context of the UNESCO Arid Zone Research programme.
United Nations Environment Programme
In the preparations leading to the 1977 UN Conference on Desertification (UNCOD), the United Nations Environment Programme (UNEP) issued a dryness map based on a different aridity index, proposed originally by Mikhail Ivanovich Budyko (1958) and defined as follows:
AI = Rn / (L·P)
where Rn is the mean annual net radiation (also known as the net radiation balance), P is the mean annual precipitation, and L is the latent heat of vaporization for water. Note that this index is dimensionless and that the variables Rn, P and L can be expressed in any system of units that is self-consistent.
More recently, in 1992, UNEP adopted yet another index of aridity, defined as:
AI = P / PET
where PET is the potential evapotranspiration and P is the average annual precipitation (UNEP, 1992). Here also, PET and P must be expressed in the same units, e.g., in millimetres. In this latter case, the boundaries that define the various degrees of aridity are approximately as follows: hyperarid AI < 0.05, arid 0.05 ≤ AI < 0.20, semi-arid 0.20 ≤ AI < 0.50, and dry sub-humid 0.50 ≤ AI < 0.65.
As this index increases with wetter conditions, some hydrologists refer to this as a humidity index.
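A minimal Python sketch of the 1992 UNEP index and the aridity classes listed above; the function names and sample values are illustrative.

def unep_aridity_index(precipitation_mm, potential_evapotranspiration_mm):
    # AI = P / PET, with both quantities in the same units (here millimetres).
    return precipitation_mm / potential_evapotranspiration_mm

def aridity_class(ai):
    # Classification boundaries as given above.
    if ai < 0.05:
        return "hyperarid"
    if ai < 0.20:
        return "arid"
    if ai < 0.50:
        return "semi-arid"
    if ai < 0.65:
        return "dry sub-humid"
    return "humid"

ai = unep_aridity_index(250.0, 1800.0)   # e.g. 250 mm of rain, 1800 mm of PET
print(round(ai, 3), aridity_class(ai))   # 0.139 arid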
See also
Climate classification
Aridification
Desertification
Drought
References
Huschke, Ralph E. (1959) Glossary of Meteorology, American Meteorological Society, Boston, Second printing-1970.
McIntosh, D. H. (1972) Meteorological Glossary, Her Majesty's Stationery Office, Met. O. 842, A.P. 897, 319 p.
Climatology
Hydrology | Aridity index | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 854 | [
"Hydrology",
"Environmental engineering"
] |
4,426,872 | https://en.wikipedia.org/wiki/Co-stimulation | Co-stimulation is a secondary signal which immune cells rely on to activate an immune response in the presence of an antigen-presenting cell. In the case of T cells, two stimuli are required to fully activate their immune response. During the activation of lymphocytes, co-stimulation is often crucial to the development of an effective immune response. Co-stimulation is required in addition to the antigen-specific signal from their antigen receptors.
T cell co-stimulation
T cells require two signals to become fully activated. A first signal, which is antigen-specific, is provided through the T cell receptor (TCR) which interacts with peptide-MHC molecules on the membrane of an antigen presenting cell (APC). A second signal, the co-stimulatory signal, is antigen nonspecific and is provided by the interaction between co-stimulatory molecules expressed on the membrane of the APC and the T cell. This interaction promotes and enhances the TCR signaling, but can also be bi-directional. The co-stimulatory signal is necessary for T cell proliferation, differentiation and survival. Activation of T cells without co-stimulation may lead to the unresponsiveness of the T cell (also called anergy), apoptosis or the acquisition of the immune tolerance.
The counterpart of the co-stimulatory signal is a (co-)inhibitory signal, where inhibitory molecules interact with different signaling pathways in order to arrest T cell activation. The best-known inhibitory molecules are CTLA-4 and PD-1, which are targeted in cancer immunotherapy.
In T cell biology there are several co-stimulatory molecules from different protein families. The most studied are those belonging to the immunoglobulin superfamily (IgSF) (such as CD28, B7, ICOS, CD226 or CRTAM) and the TNF receptor superfamily (TNFRSF) (such as 4-1BB, OX40, CD27, GITR, HVEM, CD40, BAFFR, BAFF and others). Additionally, some co-stimulatory molecules belong to the TIM family, the CD2/SLAM family or the BTN/BTN-like family.
The surface expression of different co-stimulatory molecules is regulated on a transcriptional and post-transcriptional level, but also by endocytosis. The dynamics of the receptor expression usually depends on the cell state. Some molecules are permanently expressed on non-stimulated cells, such as CD28, others only after TCR triggering, for example 41-BB or CD27.
Mechanism of function
Generally, the mechanism of function of co-stimulatory molecules is based on the overlap of their signaling pathway with the primary (TCR) signal and the induction of other, distal pathways often using different routes, leading to the enhancement of TCR signal and expression of effector genes. Additionally, co-stimulatory signaling can also have a unique outcome.
An example of an IgSF molecule is CD28, one of the most important co-stimulatory molecules expressed on T cells, which interacts predominantly with CD80 (B7.1) and CD86 (B7.2), but also with B7-H2 (ICOS-L) in humans, present on the membrane of activated APCs. It is constitutively localized, among other important T cell signaling molecules, in the central SMAC (supramolecular activation complex) of the immunological synapse. Its signaling is involved in the recruitment of protein kinase C θ (PKCθ), Ras GEF and Ras GRP to the synapse. Moreover, it induces the activity of the NFAT and NFκB transcription factors through interaction with lymphocyte cell-specific protein-tyrosine kinase (LCK) and GRB2 and/or activation of phosphoinositide 3-kinase (PI3K), resulting in Akt kinase activation and promoting T cell proliferation and IL-2 production. Additionally, it is involved in other biochemical functions of the cell, including T cell metabolism, post-translational protein modifications and cytoskeletal remodeling.
Another costimulatory receptor expressed on T cells is ICOS (Inducible Costimulator), which interacts with ICOS-L expressed mainly on the APCs. This receptor is genetically closely related to CD28 but cannot substitute for its function. Among many similarities with CD28, it also induces Akt activity through PI3K activation and promotes proliferation. However, there are differences in these pathways, which contribute to the disparity between CD28 and ICOS signaling.
Signaling through co-stimulatory molecules from the TNFRSF often involves the interaction with TRAF adaptor proteins to enhance T cell stimulation. For instance, 4-1BB (CD137; TNFRSF9) is a signaling molecule expressed mainly on T cells, but also on NK cells. Due to extracellular galectin 9 binding, 4-1BB complexes are kept preassembled on the membrane. It interacts with the TRAF1 and TRAF2 adaptor proteins, which are involved in a pathway eventually leading to NFκB translocation to the nucleus, as well as the MAPK/ERK pathway.
OX40 (CD134; TNFRSF4) is another co-stimulatory molecule expressed after T cell activation, but in the later timepoints, since it inhibits apoptosis and increases survival rate several days after the stimulation.
Co-stimulation in different T cell types
CD28 is important practically for all T cell types, but some other co-stimulatory molecules are expressed in some cell types more than in others.
CD2 was shown to prime naive T cells (TN) even without CD28 or TCR. Also, CD27 is a receptor constitutively expressed on TN (its expression is downregulated upon TCR stimulation) and enhances T cell proliferation.
The differentiation of T helper cells (TH) into different subsets also partially depends on their co-stimulatory molecules. TIM1, TIM4, ICOS, CD3 or DR3 and several molecules from the SLAM family were shown to induce polarization towards TH2. In contrast, CD27 and HVEM promote TH1 polarization. OX40 and ICOS expression has been linked to T follicular helper (TFH) differentiation and maintenance. Regulatory T cells (TREG) need the CD28 signal for their generation and the ICOS signal for their peripheral maintenance and survival. In contrast, HVEM, GITR and CD30 suppress their activity.
Effector T cells are mainly regulated by TNFRSF molecules, such as 4-1BB, CD27, OX40, DR3 or GITR, which enhance their proliferation and survival.
Memory T cells (TM) were also shown to require co-stimulatory signals. Apart from CD28, also ICOS, 4-1BB, OX40, TIM3, CD30, BTLA and CD27 were shown to play a role in the proper formation and later signaling of TM.
B cell co-stimulation
B cell binds antigens with its BCR (a membrane-bound antibody), which transfers intracellular signals to the B cell as well as inducing the B cell to engulf the antigen, process it, and present it on the MHC II molecules. The latter case induces recognition by antigen-specific Th2 cells or Tfh cells, leading to activation of the B cell through binding of TCR to the MHC-antigen complex. It is followed by synthesis and presentation of CD40L (CD154) on the Th2 cell, which binds to CD40 on the B cell, thus the Th2 cell can co-stimulate the B cell. Without this co-stimulation the B cell cannot proliferate further.
Co-stimulation for B cells is provided alternatively by complement receptors. Microbes may activate the complement system directly, and complement component C3b binds to microbes. C3b is degraded into the fragment iC3b (an inactive derivative of C3b), then cleaved to C3dg, and finally to C3d, all of which continue to bind to the microbial surface. B cells express the complement receptor CR2 (CD21), which binds iC3b, C3dg, or C3d. This additional binding makes the B cells 100- to 10,000-fold more sensitive to antigen. CR2 on mature B cells forms a complex with CD19 and CD81; this complex is called the B cell coreceptor complex because of this enhancement of sensitivity to the antigen.
Applications
Abatacept (Orencia) is a T cell co-stimulation modulator approved for the treatment of rheumatoid arthritis. The cytokines secreted by activated T cells are thought to both initiate and propagate the immunologically driven inflammation associated with rheumatoid arthritis. Orencia, a soluble fusion protein, works by altering the co-stimulatory signal required for full T-cell activation. Belatacept is another novel molecule which is being tested as an anti-rejection medication for use in renal transplantation.
A new co-stimulatory superagonistic drug, TGN1412, was the subject of a clinical trial at Northwick Park Hospital, London. The trial became surrounded in controversy as the six volunteers became seriously ill within minutes of being given the drug.
In essence, the co-stimulatory molecules function as "flashing red lights" that interact with the T cell, communicating that the material being presented by the dendritic cell material indicates danger. Dendritic cells displaying co-stimulatory molecules while presenting antigen are able to activate T cells. In contrast, T cells that recognize antigen presented by a dendritic cell not displaying co-stimulatory molecules are generally driven to apoptosis, or may become unresponsive to future encounters with the antigen.
References
Immunology | Co-stimulation | [
"Biology"
] | 2,096 | [
"Immunology"
] |
4,427,051 | https://en.wikipedia.org/wiki/Soil%20consolidation | Soil consolidation refers to the mechanical process by which soil changes volume gradually in response to a change in pressure. This happens because soil is a three-phase material, comprising soil grains and pore fluid, usually groundwater. When soil saturated with water is subjected to an increase in pressure, the high volumetric stiffness of water compared to the soil matrix means that the water initially absorbs all the change in pressure without changing volume, creating excess pore water pressure. As water diffuses away from regions of high pressure due to seepage, the soil matrix gradually takes up the pressure change and shrinks in volume. The theoretical framework of consolidation is therefore closely related to the concepts of effective stress and hydraulic conductivity. The first modern theoretical models were proposed about a century ago, following two different approaches, by Karl Terzaghi and Paul Fillunger. Terzaghi's model is currently the most widely used in engineering practice and is based on the diffusion equation.
In the narrow sense, "consolidation" refers strictly to this delayed volumetric response to pressure change due to gradual movement of water. Some publications also use "consolidation" in the broad sense, to refer to any process by which soil changes volume due to a change in applied pressure. This broader definition encompasses the overall concept of soil compaction, subsidence, and heave. Some types of soil, mainly those rich in organic matter, show significant creep, whereby the soil changes volume slowly at constant effective stress over a longer time-scale than consolidation due to the diffusion of water. To distinguish between the two mechanisms, "primary consolidation" refers to consolidation due to dissipation of excess water pressure, while "secondary consolidation" refers to the creep process.
The effects of consolidation are most conspicuous where a building sits over a layer of soil with low stiffness and low permeability, such as marine clay, leading to large settlement over many years. Types of construction project where consolidation often poses technical risk include land reclamation, the construction of embankments, and tunnel and basement excavation in clay.
Geotechnical engineers use oedometers to quantify the effects of consolidation. In an oedometer test, a series of known pressures are applied to a thin disc of soil sample, and the change of sample thickness with time is recorded. This allows the consolidation characteristics of the soil to be quantified in terms of the coefficient of consolidation () and hydraulic conductivity ().
Clays undergo consolidation settlement not only by the action of external loads (surcharge loads) but also under its own weight or weight of soils that exist above the clay.
Clays also undergo settlement when dewatered (groundwater pumping) because the effective stress on the clay increases.
Coarse-grained soils do not undergo consolidation settlement due to relatively high hydraulic conductivity compared to clays. Instead, coarse-grained soils undergo the immediate settlement.
History and terminology
The first modern theoretical models for soil consolidation were proposed in the 1920s by Terzaghi and Fillunger, following two substantially different approaches. The former was based on diffusion equations in Eulerian notation, whereas the latter applied Newton's law locally to both the liquid and the solid phase, with the main variables, such as partial pressure, porosity and local velocity, treated by means of mixture theory. Terzaghi took an engineering approach to the problem of soil consolidation and provided simplified models that are still widely used in engineering practice today, whereas Fillunger took a rigorous approach and provided mathematical models that paid particular attention to the methods of local averaging of the variables involved. Fillunger's model was very abstract and involved variables that were difficult to measure experimentally, and it was therefore not applicable to the study of real cases by engineers and designers. Nevertheless, it provided the basis for advanced theoretical studies of particularly complex problems. Owing to their different approaches to the problem of consolidation, a bitter scientific dispute arose between the two scientists, which unfortunately came to a tragic end in 1937. After Fillunger's suicide, his theoretical results were forgotten for decades, whereas the methods proposed by Terzaghi found widespread diffusion among scientists and professionals. In the following decades Biot fully developed the three-dimensional theory of soil consolidation, extending the one-dimensional model previously proposed by Terzaghi to more general hypotheses and introducing the set of basic equations of poroelasticity. Today, Terzaghi's one-dimensional model is still the most widely used by engineers for its conceptual simplicity and because it is based on experimental data, such as oedometer tests, which are relatively simple, reliable and inexpensive and for which theoretical solutions in closed form are well known. According to the "father of soil mechanics", Karl von Terzaghi, consolidation is "any process which involves a decrease in water content of saturated soil without replacement of water by air". More generally, consolidation refers to the process by which soils change volume in response to a change in pressure, encompassing both compaction and swelling.
Magnitude of volume change
Consolidation is the process in which reduction in volume takes place by the gradual expulsion or absorption of water under long-term static loads.
When stress is applied to a soil, it causes the soil particles to pack together more tightly. When this occurs in a soil that is saturated with water, water will be squeezed out of the soil. The magnitude of consolidation can be predicted by many different methods. In the classical method developed by Terzaghi, soils are tested with an oedometer test to determine their compressibility. In most theoretical formulations, a logarithmic relationship is assumed between the volume of the soil sample and the effective stress carried by the soil particles. The constant of proportionality (change in void ratio per order of magnitude change in effective stress) is known as the compression index, given the symbol λ when calculated in natural logarithm and Cc when calculated in base-10 logarithm.
This can be expressed in the following equation, which is used to estimate the volume change of a soil layer:
δc = (Cc / (1 + e0)) H log10(σzf / σz0)
where
δc is the settlement due to consolidation.
Cc is the compression index.
e0 is the initial void ratio.
H is the height of the compressible soil.
σzf is the final vertical stress.
σz0 is the initial vertical stress.
When stress is removed from a consolidated soil, the soil will rebound, regaining some of the volume it had lost in the consolidation process. If the stress is reapplied, the soil will consolidate again along a recompression curve, defined by the recompression index. The gradients of the swelling and recompression lines on a plot of void ratio against the logarithm of effective stress are often idealised to take the same value, known as the "swelling index" (given the symbol κ when calculated in natural logarithm and Cs when calculated in base-10 logarithm).
Cc can be replaced by Cr (the recompression index) for use in overconsolidated soils where the final effective stress is less than the preconsolidation stress. When the final effective stress is greater than the preconsolidation stress, the two equations must be used in combination to model both the recompression portion and the virgin compression portion of the consolidation process, as follows,
δc = (Cr / (1 + e0)) H log10(σzc / σz0) + (Cc / (1 + e0)) H log10(σzf / σzc)
where σzc is the preconsolidation stress of the soil.
This method assumes consolidation occurs in only one dimension. Laboratory data are used to construct a plot of strain or void ratio versus effective stress, with the effective stress axis on a logarithmic scale. The plot's slope is the compression index or recompression index. The equation for consolidation settlement of a normally consolidated soil can then be determined to be:
δc = (Cc / (1 + e0)) H log10(σzf / σz0)
A soil which has had its load removed is considered to be "overconsolidated". This is the case for soils that have previously had glaciers on them or that have been affected by land subsidence. The highest stress that it has been subjected to is termed the "preconsolidation stress". The "over-consolidation ratio" (OCR) is defined as the highest stress experienced divided by the current stress. A soil that is currently experiencing its highest stress is said to be "normally consolidated" and has an OCR of one. A soil could be considered "underconsolidated" or "unconsolidated" immediately after a new load is applied but before the excess pore water pressure has dissipated. Occasionally, soil strata formed by natural deposition in rivers and seas may exist at an exceptionally low density that is impossible to achieve in an oedometer; this process is known as "intrinsic consolidation".
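A minimal numerical sketch of the settlement equations above, covering both the normally consolidated and the overconsolidated case; the function name and the sample soil parameters are illustrative and do not come from any standard geotechnical library.

import math

def consolidation_settlement(H, e0, Cc, Cr, sigma_z0, sigma_zf, sigma_zc):
    # H: layer thickness, e0: initial void ratio,
    # Cc, Cr: compression and recompression indices (base-10),
    # sigma_z0, sigma_zf: initial and final vertical effective stresses,
    # sigma_zc: preconsolidation stress.
    if sigma_zf <= sigma_zc:
        # loading stays entirely on the recompression line
        return Cr * H / (1 + e0) * math.log10(sigma_zf / sigma_z0)
    # recompression up to the preconsolidation stress, then virgin compression;
    # for a normally consolidated soil sigma_zc = sigma_z0 and the first term vanishes
    return (Cr * H / (1 + e0) * math.log10(sigma_zc / sigma_z0)
            + Cc * H / (1 + e0) * math.log10(sigma_zf / sigma_zc))

# Example: a 4 m clay layer with e0 = 1.1, Cc = 0.40, Cr = 0.05,
# loaded from 50 kPa to 120 kPa with a preconsolidation stress of 80 kPa.
print(consolidation_settlement(4.0, 1.1, 0.40, 0.05, 50.0, 120.0, 80.0))  # ~0.15 m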
Time dependency
Spring analogy
The process of consolidation is often explained with an idealized system composed of a spring, a container with a hole in its cover, and water. In this system, the spring represents the compressibility or the structure of the soil itself, and the water which fills the container represents the pore water in the soil.
The container is completely filled with water, and the hole is closed. (Fully saturated soil)
A load is applied onto the cover, while the hole is still unopened. At this stage, only the water resists the applied load. (Development of excess pore water pressure)
As soon as the hole is opened, water starts to drain out through the hole and the spring shortens. (Drainage of excess pore water pressure)
After some time, the drainage of water no longer occurs. Now, the spring alone resists the applied load. (Full dissipation of excess pore water pressure. End of consolidation)
Analytical formulation of consolidation rate
The time for consolidation to occur can be predicted. Sometimes consolidation can take years. This is especially true in saturated clays because their hydraulic conductivity is extremely low, and this causes the water to take an exceptionally long time to drain out of the soil. While drainage is occurring, the pore water pressure is greater than normal because it is carrying part of the applied stress (as opposed to the soil particles).
Tv = Cv·t / Hdr²
where Tv is the dimensionless time factor;
Hdr is the average longest drain path during consolidation;
t is the time at measurement;
Cv is the coefficient of consolidation, found using the log method with Cv = 0.197 Hdr² / t50, or the root method with Cv = 1.129 Hdr² / t95;
t50 is the time duration to 50% deformation (consolidation) and t95 is the time duration to 95% deformation;
T50 = 0.197 and T95 = 1.129 are the corresponding constants.
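The following Python sketch (illustrative names and sample values) shows how a coefficient of consolidation fitted from an oedometer test can be used with the time-factor relation above to estimate consolidation times in the field.

def coefficient_of_consolidation_log_method(Hdr_lab, t50_lab):
    # Log-time fitting: Cv = 0.197 * Hdr^2 / t50
    return 0.197 * Hdr_lab**2 / t50_lab

def consolidation_time(Tv, Hdr_field, Cv):
    # Tv = Cv * t / Hdr^2  =>  t = Tv * Hdr^2 / Cv
    return Tv * Hdr_field**2 / Cv

# Example: a 20 mm oedometer sample drained top and bottom (Hdr = 0.01 m)
# reaches 50% consolidation after 300 s.
Cv = coefficient_of_consolidation_log_method(0.01, 300.0)      # m^2/s
# Time for 95% consolidation (Tv = 1.129) of a 4 m field layer drained on one side only:
t95_field = consolidation_time(1.129, 4.0, Cv)
print(Cv, t95_field / (3600 * 24 * 365), "years")              # roughly 8-9 years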
Creep
The theoretical formulation above assumes that time-dependent volume change of a soil unit only depends on changes in effective stress due to the gradual restoration of steady-state pore water pressure. This is the case for most types of sand and clay with low amounts of organic material. However, in soils with a high amount of organic material such as peat, the phenomenon of creep also occurs, whereby the soil changes volume gradually at constant effective stress. Soil creep is typically caused by viscous behavior of the clay-water system and compression of organic matter.
This process of creep is sometimes known as "secondary consolidation" or "secondary compression" because it also involves gradual change of soil volume in response to an application of load; the designation "secondary" distinguishes it from "primary consolidation", which refers to volume change due to dissipation of excess pore water pressure. Creep typically takes place over a longer time-scale than (primary) consolidation, such that even after the restoration of hydrostatic pressure some compression of soil takes place at slow rate.
Analytically, the rate of creep is assumed to decay exponentially with time since application of load, giving the following expression for the creep settlement Ss:
Ss = (Ca / (1 + e0)) H0 log10(t / t95)
where H0 is the height of the consolidating medium;
e0 is the initial void ratio;
Ca is the secondary compression index;
t is the length of time after consolidation considered;
and t95 is the length of time for achieving 95% consolidation.
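The secondary-compression relation above can be evaluated directly; this Python sketch (illustrative names and sample values) gives the creep settlement accumulated between the end of primary consolidation and a later time.

import math

def secondary_compression_settlement(H0, e0, Ca, t, t95):
    # Ss = Ca * H0 / (1 + e0) * log10(t / t95), valid for t > t95
    return Ca * H0 / (1 + e0) * math.log10(t / t95)

# Example: a 4 m organic clay layer with e0 = 2.0 and Ca = 0.03 that reached
# 95% primary consolidation after 2 years, evaluated 20 years after loading.
print(secondary_compression_settlement(4.0, 2.0, 0.03, 20.0, 2.0))  # 0.04 m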
Deformation characteristics of consolidation
Coefficient of compressibility
The compressibility of saturated specimens of clay minerals increases in the order kaolinite < illite < smectite. The compression index Cc, which is defined as the change in void ratio per 10-fold increase in consolidation pressure, is in the range of 0.19 to 0.28 for kaolinite, 0.50 to 1.10 for illite, and 1.0 to 2.6 for montmorillonite, for different ionic forms. The more compressible the clay, the more pronounced the influences of cation type and electrolyte concentration on compressibility.
Coefficient of volume compressibility
See also
Compaction (geology)
Settlement (structural)
Soil mechanics
Vacuum consolidation
References
Bibliography
Soil mechanics
Sedimentology | Soil consolidation | [
"Physics"
] | 2,609 | [
"Soil mechanics",
"Applied and interdisciplinary physics"
] |
4,429,175 | https://en.wikipedia.org/wiki/Mixed-use%20development | Mixed use is a type of urban development, urban design, urban planning and/or a zoning classification that blends multiple uses, such as residential, commercial, cultural, institutional, or entertainment, into one space, where those functions are to some degree physically and functionally integrated, and that provides pedestrian connections. Mixed-use development may be applied to a single building, a block or neighborhood, or in zoning policy across an entire city or other administrative unit. These projects may be completed by a private developer, (quasi-)governmental agency, or a combination thereof. A mixed-use development may be a new construction, reuse of an existing building or brownfield site, or a combination.
Use in North America vs. Europe
Traditionally, human settlements have developed in mixed-use patterns. However, with industrialization, governmental zoning regulations were introduced to separate different functions, such as manufacturing, from residential areas. Public health concerns and the protection of property values stood as the motivation behind this separation.
In the United States, the practice of zoning for single-family residential use was instigated to safeguard communities from negative externalities, including air, noise, and light pollution, associated with heavier industrial practices. These zones were also constructed to alleviate racial and class tensions.
The heyday of separate-use zoning in the United States came after World War II when planner and New York City Parks Commissioner, Robert Moses, championed superhighways to break up functions and neighborhoods of the city. The antithesis to these practices came from activist and writer, Jane Jacobs, who was a major proponent of mixed-use zoning, believing it played a key role in creating an organic, diverse, and vibrant streetscape. These two figures went head-to-head during much of the 1960s. Since the 1990s, mixed-use zoning has once again become desirable as it works to combat urban sprawl and increase economic vitality.
In most of Europe, government policy has encouraged the continuation of the city center's role as a main location for business, retail, restaurant, and entertainment activity, unlike in the United States where zoning actively discouraged such mixed use for many decades. In England, for example, hotels are included under the same umbrella as "residential," rather than commercial as they are classified under in the US. France similarly gravitates towards mixed-use as much of Paris is simply zoned to be "General Urban," allowing for a variety of uses. Even zones that house the mansions and villas of the aristocrats focus on historical and architectural preservation rather than single family zoning. Single family zoning is also absent in Germany and Russia where zoning codes make no distinction between different types of housing.
America's attachment to private property and the traditional 1950s suburban home, as well as deep racial and class divides, have marked the divergence in mixed-use zoning between the continents. As a result, much of Europe's central cities are mixed use "by default" and the term "mixed-use" is much more relevant regarding new areas of the city where an effort is made to mix residential and commercial activities – such as in Amsterdam's Eastern Docklands.
Contexts
Expanded use of mixed-use zoning and mixed-use developments may be found in a variety of contexts, such as the following (multiple such contexts might apply to one particular project or situation):
as part of smart growth planning strategies
in traditional urban neighborhoods, as part of urban renewal and/or infill, i.e., upgrading the buildings and public spaces and amenities of the neighborhood to provide more and/or better housing and a better quality of life—examples include Barracks Row in Washington, D.C., and East Liberty, Pittsburgh
in traditional suburbs, adding one or more mixed-use developments to provide a new or more prominent "downtown" for the community–examples include new projects in downtown Bethesda, Maryland, an inner suburb of Washington, D.C., and the Excelsior & Grand complex in St. Louis Park, Minnesota, an inner suburb of Minneapolis
greenfield developments, i.e., new construction on previously undeveloped land, particularly at the edge of metropolitan areas and in their exurbs, often as part of creating a relatively denser center for the community—an edge city, or part of one, zoned for mixed use, in the 2010s often labeled "urban villages" (examples include Avalon in Alpharetta, Georgia, and Halcyon in Forsyth County, Georgia, at the edge of the Atlanta metropolitan area)
the repurposing of shopping malls and intensification of development around them, particularly as many shopping malls' retail sales, and ability to rent space to retailers, decrease as part of the 2010s retail apocalypse
Any of the above contexts may also include parallel contexts such as:
Transit-oriented development—for example in Los Angeles and San Diego, where the cities made across-the-board zoning law changes permitting denser development within a certain distance of certain types of transit stations, with the primary aim of increasing the amount and affordability of housing
Older cities such as Chicago and San Francisco have historic preservation policies that sometimes offer more flexibility for older buildings to be used for purposes other than what they were originally zoned for, with the aim of preserving historic architecture
Benefits
Economic
Mixed-use developments are home to significant employment and housing opportunities. Many of these projects are already located in established downtown districts, meaning that development of public transit systems is incentivized in these regions. By taking undervalued and underutilized land, often former heavy industrial, developers can repurpose it to increase land and property values. These projects also increase housing variety, density, and oftentimes affordability through their focus on multifamily, rather than single-family housing compounds. A more equal balance between the supply and demand of jobs and housing is also found in these districts.
Social
This development pattern is centered around the idea of "live, work, play," transforming buildings and neighborhoods into multi-use entities. Efficiency, productivity, and quality of life are also increased with regards to workplaces holding a plethora of amenities. Examples include gyms, restaurants, bars, and shopping. Mixed-use neighborhoods promote community and socialization through their bringing together of employees, visitors, and residents. A distinctive character and sense-of-place is created by transforming single use districts that may run for eight hours a day (ex. commercial office buildings running 9am - 5pm) into communities that can run eighteen hours a day through the addition of cafes, restaurants, bars, and nightclubs. Safety of neighborhoods in turn may be increased as people stay out on the streets for longer hours.
Environmental
Mixed-use neighborhoods and buildings have a strong ability to adapt to changing social and economic environments. When the COVID-19 pandemic hit, New York retailers located on long, commercially oriented blocks suffered severely as they were no longer attracting an audience of passersby. By combining multiple functions into one building or development, mixed-use districts can build resiliency through their ability to attract and maintain visitors.
More sustainable transportation practices are also fostered. A study of Guangzhou, China, done by the Journal of Geographical Information Science, found that taxis located in regions where buildings housed a greater variety of functions had greatly reduced traveling distances. Shorter traveling distances, in turn, support the use of micro-mobility. Pedestrian and bike-friendly infrastructure are fostered due to increased density and reduced distances between housing, workplaces, retail businesses, and other amenities and destinations. Additionally, mixed-use projects promote health and wellness, as these developments often provide better access (whether it be by foot, bicycle, or transit) to farmer's markets and grocery stores. However, hybrid metropolises, areas that have large and tall buildings which accommodate a combination of public and private interests, do not show a decrease in carbon emissions in comparison to metropolitan areas that have a low, dense configuration. This is possibly because hybrid metropolises are prone to attract car traffic from visitors.
Drawbacks
Equity
Due to the speculative nature of large scale real estate developments, mega-mixed-use projects often fall short on meeting equity and affordability goals. High-end residential, upscale retail, and Class A office spaces appealing to high-profile tenants are often prioritized due to their speculative potential. There is also a trend towards making residential spaces in mixed-use developments to be condominiums, rather than rental spaces. A study done by the Journal of the American Planning Association found that a focus on homeownership predominantly excludes individuals working in public services, trades, cultural, sales and service, and manufacturing occupations from living in amenity-rich city centers. Despite incentives like density bonuses, municipalities and developers rarely put a significant focus on affordable housing provisions in these plans.
Financing
Mixed-use buildings can be risky given that there are multiple tenants residing in one development. Mega-mixed-use projects, like Hudson Yards, are also extremely expensive. This development has cost the City of New York over 2.2 billion dollars. Critics argue that taxpayer dollars could better serve the general public if spent elsewhere. Additionally, mixed-use developments, as a catalyst for economic growth, may not serve their intended purpose if they simply shift economic activity, rather than create it. A study done by Jones Lang LaSalle Incorporated (JLL) found that "90 percent of Hudson Yards' new office tenants relocated from Midtown."
Types of contemporary mixed-use zoning
Some of the more frequent mixed-use scenarios in the United States are:
Neighborhood commercial zoning – convenience goods and services, such as convenience stores, permitted in otherwise strictly residential areas
Main Street residential/commercial – two to three-story buildings with residential units above and commercial units on the ground floor facing the street
Urban residential/commercial – multi-story residential buildings with commercial and civic uses on ground floor
Office convenience – office buildings with small retail and service uses oriented to the office workers
Office/residential – multi-family residential units within office building(s)
Shopping mall conversion – residential and/or office units added (adjacent) to an existing standalone shopping mall
Retail district retrofit – retrofitting of a suburban retail area to a more village-like appearance and mix of uses
Live/work – residents can operate small businesses on the ground floor of the building where they live
Studio/light industrial – residents may operate studios or small workshops in the building where they live
Hotel/residence – mix hotel space and high-end multi-family residential
Parking structure with ground-floor retail
Single-family detached home district with standalone shopping center
Examples of cities' mixed-use planning policies
Australia
The first large-scale attempt to create mixed-use development in Australia was the Sydney Region Outline Plan, a plan that identified Sydney's need to decentralise and organise its growth around the metropolitan area. Its main objective was to control the city's rapid post-war population growth by introducing growth corridors and economic centres that would help prevent uncontrolled sprawl and the overuse of the car as a means of transport. Several city centres such as Parramatta or Campbelltown benefited from these policies, creating economic hubs with their own inner-city amenities along Sydney's main thoroughfares.
Subsequent plans complemented the initial one with new policies focused on economic and urban renewal issues. In particular, the 1988 Plan was designed in collaboration with a transport strategy and was the first to recommend higher development densities. Since then, Australian planning authorities have given greater priority to mixed-use development of inner-city industrial land as a way of revitalising areas neglected by the decline in manufacturing, consolidating and densifying the previously underpopulated urban centres. This new urban planning approach has had a significant impact on the use of land parcels in major Australian cities: according to 2021 data from the Australian Bureau of Statistics, mixed zoning already accounts for more than 9% of new housing approvals.
Canada
One of the first cities to adopt a policy on mixed-use development is Toronto. The local government first played a role in 1986 with a zoning bylaw that allowed commercial and residential units to be mixed. At the time, Toronto was in the early stages of planning a focus on mixed-use development, owing to the growing popularity of social housing. The law has since been updated, most recently in 2013, shifting much of its focus outside the downtown area, which has been a part of the main city since 1998. With the regulations in place, the city has overseen the development of high-rise condominiums throughout the city with amenities and transit stops nearby. Toronto's policies of mixed-use development have inspired other North American cities in Canada and the United States to bring about similar changes.
One example of a Toronto mixed-use development is Mirvish Village by architect Gregory Henriquez. Located at Bloor and Bathurst Street, a significant intersection in Toronto, portions of the Mirvish Village project site are zoned as "commercial residential" and others as "mixed commercial residential". Within the City of Toronto's zoning by-laws, commercial residential includes "a range of commercial, residential and institutional uses, as well as parks." Mirvish Village's programmatic uses include rental apartments, a public market, and small-unit retail, while also preserving 23 of 27 heritage houses on site. The project is notable for its public consultation process, which was lauded by Toronto city officials. Architect Henriquez and the developer had previously collaborated on mixed-use projects in Vancouver, British Columbia, including the successful Woodward's Redevelopment.
United States
In the United States, the Environmental Protection Agency (EPA) collaborates with local governments by providing researchers with new data that estimate how a city can be affected by mixed-use development. By making these models available in spreadsheet form, the EPA makes it much easier for municipalities and developers to estimate the traffic associated with mixed-use spaces. The linked models, also used as a resource tool, measure the geography, demographics, and land-use characteristics of a city. The EPA has conducted an analysis of six major metropolitan areas using land use, household surveys, and GIS databases. States such as California, Washington, New Mexico, and Virginia have adopted this standard as statewide policy when assessing how urban developments can impact traffic. Preconditions for the success of mixed-use developments are employment, population, and consumer spending; these three preconditions ensure that a development can attract quality tenants and achieve financial success. Other factors determining the success of a mixed-use development are proximity, production time, and costs in the surrounding market.
Portland
Mixed-use zoning has been implemented in Portland, Oregon, since the early 1990s, when the local government wanted to reduce the then-dominant car-oriented development style. The Metropolitan Area Express, Portland's light rail system, encourages the mixing of residential, commercial, and work spaces into one zone. With this one-zoning-type planning system, the use of land at increased densities provides a return in public investments throughout the city. Main street corridors provide flexible building heights and high density uses to enable "gathering places".
Hudson Yards, NYC
The Hudson Yards project is the largest project in the US ever to be financed by TIF (tax increment financing) subsidies. It did not require voter approval, nor did it have to go through the city's traditional budgeting process. Rather, the project is financed by future property taxes and the EB-5 visa program, which provides visas to overseas investors in exchange for placing a minimum of $500,000 into US real estate.
See also
Notes
Further reading
Reclaiming the City, 1997, Andy Coupland
"Mixed use development, practice and potential", Department for Communities and Local Government, UK Government
What is functional mix?, Planning Theory and Practice 18(2):249-267 · February 2017
External links
Commercial real estate
Residential real estate
Sustainable urban planning
Sustainable transport
Urban design
Zoning
Shopping malls by type
New Urbanism | Mixed-use development | [
"Physics",
"Engineering"
] | 3,249 | [
"Zoning",
"Physical systems",
"Transport",
"Sustainable transport",
"Construction"
] |
4,430,013 | https://en.wikipedia.org/wiki/SAIDI | The System Average Interruption Duration Index (SAIDI) is commonly used as a reliability index by electric power utilities. SAIDI is the average cumulative outage duration for each customer served, and is calculated as:
SAIDI = (Σ Ui Ni) / NT
where Ni is the number of customers and Ui is the annual outage time for location i, and NT is the total number of customers served. In other words,
SAIDI = (total duration of all customer interruptions) / (total number of customers served).
SAIDI is measured in units of time, often minutes or hours; it is usually measured over the course of a year. According to IEEE Standard 1366–1998, the median value for North American utilities is approximately 1.50 hours. According to the U.S. Energy Information Administration Annual Electric Power Industry Report, it is 2.0 hours, rising to the range of 3.5 to 8 hours, when "major events" are included.
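A minimal Python sketch of the calculation defined above; the helper function and the sample outage data are purely illustrative.

def saidi(outages, total_customers):
    # 'outages' is a list of (customers_affected, outage_duration_hours) pairs,
    # one entry per sustained interruption in the reporting period.
    customer_hours = sum(n * duration for n, duration in outages)
    return customer_hours / total_customers

# Example: three outages in a year for a utility serving 100,000 customers.
events = [(5000, 2.0), (20000, 0.5), (1000, 8.0)]
print(saidi(events, 100000))   # 0.28 hours per customer served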
Comparison of SAIDI by country
The following is a table of SAIDI for different countries, calculated using the methodology in the World Bank's Doing Business 2016-2020 studies:
References
External links
EIA Reliability Metrics video by the U.S. Energy Information Administration on YouTube
Electric power
Reliability indices | SAIDI | [
"Physics",
"Engineering"
] | 222 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
4,430,027 | https://en.wikipedia.org/wiki/SAIFI | The System Average Interruption Frequency Index (SAIFI) is commonly used as a reliability index by electric power utilities. SAIFI is the average number of interruptions that a customer would experience, and is calculated as
SAIFI = (Σ λi Ni) / NT
where λi is the failure rate and Ni is the number of customers for location i, and NT is the total number of customers served. In other words,
SAIFI = (total number of customer interruptions) / (total number of customers served).
SAIFI is measured in units of interruptions per customer. It is usually measured over the course of a year, and according to IEEE Standard 1366-1998 the median value for North American utilities is approximately 1.10 interruptions per customer.
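A minimal Python sketch of the calculation defined above; the helper function and the sample data are purely illustrative.

def saifi(customers_affected_per_outage, total_customers):
    # Total number of customer interruptions divided by total customers served.
    return sum(customers_affected_per_outage) / total_customers

# Example: three outages affecting 5,000, 20,000 and 1,000 customers
# for a utility serving 100,000 customers.
print(saifi([5000, 20000, 1000], 100000))   # 0.26 interruptions per customer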
Sources
Electric power
Reliability indices | SAIFI | [
"Physics",
"Engineering"
] | 115 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
4,430,041 | https://en.wikipedia.org/wiki/CAIDI | The Customer Average Interruption Duration Index (CAIDI) is a reliability index commonly used by electric power utilities. It is related to SAIDI and SAIFI, and is calculated as
CAIDI = SAIDI / SAIFI = (Σ Ui Ni) / (Σ λi Ni)
where λi is the failure rate, Ni is the number of customers, and Ui is the annual outage time for location i. In other words,
CAIDI = (total duration of customer interruptions) / (total number of customer interruptions).
CAIDI gives the average outage duration that any given customer would experience. CAIDI can also be viewed as the average restoration time.
CAIDI is measured in units of time, often minutes or hours. It is usually measured over the course of a year, and according to IEEE Standard 1366-1998 the median value for North American utilities is approximately 1.36 hours.
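A minimal Python sketch of the calculation defined above; the helper function and the sample outage data are purely illustrative.

def caidi(outages):
    # 'outages' is a list of (customers_affected, outage_duration_hours) pairs.
    customer_hours = sum(n * d for n, d in outages)
    customer_interruptions = sum(n for n, _ in outages)
    return customer_hours / customer_interruptions

# Example: three outages in a year; CAIDI is the average restoration time.
events = [(5000, 2.0), (20000, 0.5), (1000, 8.0)]
print(caidi(events))   # ~1.08 hours per interrupted customer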
References
Electric power
Reliability indices | CAIDI | [
"Physics",
"Engineering"
] | 143 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
4,432,006 | https://en.wikipedia.org/wiki/Molecular%20Modelling%20Toolkit | The Molecular Modelling Toolkit (MMTK) is an open-source software package written in Python, which performs common tasks in molecular modelling.
MMTK consists of about 18,000 lines of Python code, 12,000 lines of hand-written C code, and some machine-generated C code.
Features
construction of molecular systems, with special support for proteins and nucleic acids
infinite systems or periodic boundary conditions (orthorhombic elementary cells)
common geometrical operations on coordinates
rigid-body fits
visualization using external PDB and VRML viewers; animation of dynamics trajectories and normal modes
the AMBER 94 force field, with several options for handling electrostatic interactions
a deformation force field for fast normal mode calculations on proteins
energy minimization (steepest descent and conjugate gradient)
molecular dynamics (with optional thermostat, barostat, and distance constraints)
normal mode analysis
trajectory operations
point charge fits
molecular surface calculations
interfaces to other programs
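The package is driven from short Python scripts. The sketch below follows the style of the examples in MMTK's documentation, but the module and class names used here (InfiniteUniverse, Molecule, Vector, Amber94ForceField, SteepestDescentMinimizer) are reproduced from memory and should be checked against the manual of the installed MMTK version.

# Build a small system with the AMBER 94 force field and minimize its energy.
# (Names as remembered from the MMTK manual; verify against the documentation.)
from MMTK import InfiniteUniverse, Molecule, Vector
from MMTK.ForceFields import Amber94ForceField
from MMTK.Minimization import SteepestDescentMinimizer

universe = InfiniteUniverse(Amber94ForceField())
universe.addObject(Molecule('water', position=Vector(0., 0., 0.)))

minimizer = SteepestDescentMinimizer(universe)
minimizer(steps=100)
print(universe.energy())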
See also
Software for molecular mechanics modelling
References
External links
Background information
Molecular modelling software
Molecular dynamics software
Python (programming language) software | Molecular Modelling Toolkit | [
"Chemistry",
"Biology"
] | 221 | [
"Molecular dynamics software",
"Molecular modelling software",
"Molecular physics",
"Bioinformatics stubs",
"Computational chemistry software",
"Biotechnology stubs",
"Biochemistry stubs",
"Molecular modelling",
"Molecular dynamics",
"Bioinformatics",
"Molecular physics stubs"
] |
4,432,401 | https://en.wikipedia.org/wiki/Euryhaline | Euryhaline organisms are able to adapt to a wide range of salinities. An example of a euryhaline fish is the short-finned molly, Poecilia sphenops, which can live in fresh water, brackish water, or salt water.
The green crab (Carcinus maenas) is an example of a euryhaline invertebrate that can live in salt and brackish water. Euryhaline organisms are commonly found in habitats such as estuaries and tide pools where the salinity changes regularly. However, some organisms are euryhaline because their life cycle involves migration between freshwater and marine environments, as is the case with salmon and eels.
The opposite of euryhaline organisms are stenohaline ones, which can only survive within a narrow range of salinities. Most freshwater organisms are stenohaline, and will die in seawater, and similarly most marine organisms are stenohaline, and cannot live in fresh water.
Osmoregulation
Osmoregulation is the active process by which an organism maintains its level of water content. The osmotic pressure in the body is homeostatically regulated in such a manner that it keeps the organism's fluids from becoming too diluted or too concentrated. Osmotic pressure is a measure of the tendency of water to move into one solution from another by osmosis.
Two major types of osmoregulation are osmoconformers and osmoregulators. Osmoconformers match their body osmolarity to their environment actively or passively. Most marine invertebrates are osmoconformers, although their ionic composition may be different from that of seawater.
Osmoregulators tightly regulate their body osmolarity, which always stays constant, and are more common in the animal kingdom. Osmoregulators actively control salt concentrations despite the salt concentrations in the environment. An example is freshwater fish. The gills actively take up salt from the environment by the use of mitochondria-rich cells. Water will diffuse into the fish, so it excretes a very hypotonic (dilute) urine to expel all the excess water. A marine fish has an internal osmotic concentration lower than that of the surrounding seawater, so it tends to lose water (to the more negative surroundings) and gain salt. It actively excretes salt out from the gills. Most fish are stenohaline, which means they are restricted to either salt or fresh water and cannot survive in water with a different salt concentration than they are adapted to. However, some fish show a tremendous ability to effectively osmoregulate across a broad range of salinities; fish with this ability are known as euryhaline species, e.g., salmon. Salmon inhabit two utterly disparate environments, marine and fresh water, and adapt to both through behavioral and physiological modifications.
Some marine fish, like sharks, have adopted a different, efficient mechanism to conserve water, i.e., osmoregulation. They retain urea in their blood in relatively high concentration. Urea is damaging to living tissue, so to cope with this problem some fish retain trimethylamine oxide, which counteracts urea's toxicity. Sharks, having a slightly higher solute concentration (i.e., above 1000 mOsm, the solute concentration of seawater), do not drink water like marine fish.
Euryhaline fish
The level of salinity in intertidal zones can also be quite variable. Low salinities can be caused by rainwater or river inputs of freshwater. Estuarine species must be especially euryhaline, or able to tolerate a wide range of salinities. High salinities occur in locations with high evaporation rates, such as in salt marshes and high intertidal pools. Shading by plants, especially in the salt marsh, can slow evaporation and thus ameliorate salinity stress. In addition, salt marsh plants tolerate high salinities by several physiological mechanisms, including excreting salt through salt glands and preventing salt uptake into the roots.
Despite having a regular freshwater presence, the Atlantic stingray is physiologically euryhaline and no population has evolved the specialized osmoregulatory mechanisms found in the river stingrays of the family Potamotrygonidae. This may be due to the relatively recent date of freshwater colonization (under one million years), and/or possibly incomplete genetic isolation of the freshwater populations, as they remain capable of surviving in salt water. Freshwater Atlantic stingrays have only 30-50% the concentration of urea and other osmolytes in their blood compared to marine populations. However, the osmotic pressure between their internal fluids and external environment still causes water to diffuse into their bodies, and they must produce large quantities of dilute urine (at 10 times the rate of marine individuals) to compensate.
Partial list
Atlantic stingray
Bull shark
Green chromide
Herring
Lamprey
Mummichog
Molly
Guppy
Puffer fish
Salmon
Shad
Striped bass
Sturgeon
Tilapia
Trout
Barramundi
Mangrove jack
White perch
Killifish
Desert pupfish
Other euryhaline organisms
See also
Fish migration
Osmoregulation
Stenohaline
Osmoconformer
References
Aquatic ecology | Euryhaline | [
"Biology"
] | 1,110 | [
"Aquatic ecology",
"Ecosystems"
] |
4,432,469 | https://en.wikipedia.org/wiki/Interference%20lithography | Interference lithography (or holographic lithography) is a technique that uses coherent light (such as light from a laser) for patterning regular arrays of fine features without the use of complex optical systems or photomasks.
Basic principle
The basic principle is the same as in interferometry or holography. An interference pattern between two or more coherent light waves is set up and recorded in a recording layer (photoresist). This interference pattern consists of a periodic series of fringes representing intensity minima and maxima. Upon post-exposure photolithographic processing, a photoresist pattern corresponding to the periodic intensity pattern emerges.
For 2-beam interference, the fringe-to-fringe spacing or period is given by λ/(2 sin(θ/2)), where λ is the wavelength and θ is the angle between the two interfering waves. The minimum period achievable is then half the wavelength.
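As a quick numerical check of the relation above (a minimal sketch; the wavelength and beam angles are illustrative assumptions, not values taken from this article):

```python
import math

def fringe_period(wavelength_nm: float, angle_deg: float) -> float:
    """Period of a 2-beam interference pattern.

    wavelength_nm: vacuum wavelength of the light
    angle_deg: full angle between the two interfering waves
    """
    theta = math.radians(angle_deg)
    return wavelength_nm / (2.0 * math.sin(theta / 2.0))

# Example: a 193 nm source with the beams crossing at 120 degrees
print(fringe_period(193, 120))   # ~111 nm
# As the angle approaches 180 degrees, the period approaches half the wavelength
print(fringe_period(193, 180))   # 96.5 nm
```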
By using 3-beam interference, arrays with hexagonal symmetry can be generated, while with 4 beams, arrays with rectangular symmetry or 3D photonic crystals are generated. With multi-wave interference (by inserting a diffuser into the optical path), aperiodic patterns with a defined spatial frequency spectrum can be originated. Hence, by superimposing different beam combinations, different patterns are made possible.
Coherence requirements
For interference lithography to be successful, coherence requirements must be met. First, a spatially coherent light source must be used. This is effectively a point light source in combination with a collimating lens. A laser or synchrotron beam is also often used directly without additional collimation. The spatial coherence guarantees a uniform wavefront prior to beam splitting. Second, it is preferred to use a monochromatic or temporally coherent light source. This is readily achieved with a laser but broadband sources would require a filter. The monochromatic requirement can be lifted if a diffraction grating is used as a beam splitter, since different wavelengths would diffract into different angles but eventually recombine anyway. Even in this case, spatial coherence and normal incidence would still be required.
Beam splitter
Coherent light must be split into two or more beams prior to being recombined in order to achieve interference. Typical methods for beam splitting are Lloyd's mirrors, prisms and diffraction gratings.
Electron holographic lithography
The technique is readily extendible to electron waves as well, as demonstrated by the practice of electron holography. Spacings of a few nanometers or even less than a nanometer have been reported using electron holograms. This is because the wavelength of an electron is always shorter than for a photon of the same energy. The wavelength of an electron is given by the de Broglie relation λ = h/p, where h is the Planck constant and p is the electron momentum. For example, a 1 keV electron has a wavelength of slightly less than 0.04 nm. A 5 eV electron has a wavelength of 0.55 nm. This yields X-ray-like resolution without depositing significant energy. To guard against charging, electrons must be able to penetrate far enough to reach the conducting substrate.
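A short script makes the quoted electron wavelengths concrete (a minimal sketch using the non-relativistic approximation E = p²/2m, which is adequate at these low energies):

```python
import math

# Physical constants (SI)
H = 6.62607015e-34       # Planck constant, J*s
M_E = 9.1093837015e-31   # electron rest mass, kg
EV = 1.602176634e-19     # 1 eV in joules

def electron_wavelength_nm(energy_ev: float) -> float:
    """Non-relativistic de Broglie wavelength of an electron, lambda = h / p."""
    p = math.sqrt(2.0 * M_E * energy_ev * EV)   # momentum from E = p^2 / (2m)
    return H / p * 1e9                          # metres -> nanometres

print(electron_wavelength_nm(1000))  # ~0.039 nm for a 1 keV electron
print(electron_wavelength_nm(5))     # ~0.55 nm for a 5 eV electron
```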
A fundamental concern for using low-energy electrons (≪100 eV) with this technique is their natural tendency to repel one another due to Coulomb forces as well as Fermi–Dirac statistics, though electron anti-bunching has been verified only in a single case.
Atom holographic lithography
The interference of atomic de Broglie waves is also possible provided one can obtain coherent beams of cooled atoms. The momentum of an atom is even larger than for electrons or photons, allowing even smaller wavelengths, per the de Broglie relation. Generally the wavelength will be smaller than the diameter of the atom itself.
Uses
The benefit of using interference lithography is the quick generation of dense features over a wide area without loss of focus. Seamless diffraction gratings on areas of more than one square meter have been originated by interference lithography. Hence, it is commonly used for the origination of master structures for subsequent micro or nano replication processes (e.g. nanoimprint lithography) or for testing photoresist processes for lithography techniques based on new wavelengths (e.g., EUV or 193 nm immersion). In addition, interfering laser beams of high-power pulsed lasers provide the opportunity of applying a direct treatment of the material's surface (including metals, ceramics and polymers) based on photothermal and/or photochemical mechanisms. Due to the above-mentioned characteristics, this method has been called in this case "Direct Laser Interference Patterning" (DLIP). Using DLIP, substrates can be structured directly in one step, obtaining a periodic array over large areas in a few seconds. Such patterned surfaces can be used for different applications including tribology (wear and friction reduction), photovoltaics (increased photocurrent), or biotechnology.
Electron interference lithography may be used for patterns which normally take too long for conventional electron beam lithography to generate.
The drawback of interference lithography is that it is limited to patterning arrayed features or uniformly distributed aperiodic patterns only. Hence, for drawing arbitrarily shaped patterns, other photolithography techniques are required. In addition, non-optical effects, such as secondary electrons from ionizing radiation or photoacid generation and diffusion, cannot be avoided by interference lithography. For instance, the secondary electron range is roughly indicated by the width of carbon contamination (~20 nm) at the surface induced by a focused (2 nm) electron beam. This indicates that the lithographic patterning of 20 nm half-pitch features or smaller will be significantly affected by factors other than the interference pattern, such as the cleanliness of the vacuum.
References
External links
Large-area patterning using interference and nanoimprint lithography
Interference lithography at Fraunhofer ISE
Lithography (microfabrication) | Interference lithography | [
"Materials_science"
] | 1,249 | [
"Nanotechnology",
"Microtechnology",
"Lithography (microfabrication)"
] |
4,432,542 | https://en.wikipedia.org/wiki/Renewable%20heat | Renewable heat is an application of renewable energy referring to the generation of heat from renewable sources; for example, feeding radiators with water warmed by focused solar radiation rather than by a fossil fuel boiler. Renewable heat technologies include renewable biofuels, solar heating, geothermal heating, heat pumps and heat exchangers. Insulation is almost always an important factor in how renewable heating is implemented.
Many colder countries consume more energy for heating than for supplying electricity. For example, in 2005 the United Kingdom consumed 354 TWh of electric power, but had a heat requirement of 907 TWh, the majority of which (81%) was met using gas. The residential sector alone consumed 550 TWh of energy for heating, mainly derived from methane. Almost half of the final energy consumed in the UK (49%) was in the form of heat, of which 70% was used by households and in commercial and public buildings. Households used heat mainly for space heating (69%).
The relative competitiveness of renewable electricity and renewable heat depends on a nation's approach to energy and environment policy. In some countries renewable heat is hindered by subsidies for fossil fuelled heat. In those countries, such as Sweden, Denmark and Finland, where government intervention has been closest to a technology-neutral form of carbon valuation (i.e. carbon and energy taxes), renewable heat has played the leading role in a very substantial renewable contribution to final energy consumption. In those countries, such as Germany, Spain, the US, and the UK, where government intervention has been set at different levels for different technologies, uses and scales, the contributions of renewable heat and renewable electricity technologies have depended on the relative levels of support, and have resulted generally in a lower renewable contribution to final energy consumption.
Leading renewable heat technologies
Solar heating
Solar heating is a style of building construction which uses the energy of summer or winter sunshine to provide an economic supply of primary or supplementary heat to a structure. The heat can be used for both space heating (see solar air heat) and water heating (see solar hot water). Solar heating design is divided into two groups:
Passive solar heating relies on the design and structure of the house to collect heat. Passive solar building design must also consider the storage and distribution of heat, which may be accomplished passively, or use air ducting to draw heat actively to the foundation of the building for storage. One such design was measured raising the indoor temperature of a house substantially on a partially sunny winter day (-7 °C or 19 °F outside), and it is claimed that the system provides passively for the bulk of the building's heating. The home cost $125 per square foot (or 370 m2 at $1,351/m2), similar to the cost of a traditional new home.
Active solar heating uses pumps to move air or a liquid from the solar collector into the building or storage area. Applications such as solar air heating and solar water heating typically capture solar heat in panels which can then be used for applications such as space heating and supplementation of residential water heaters. In contrast to photovoltaic panels, which are used to generate electricity, solar heating panels are less expensive and capture a much higher proportion of the sun's energy.
Solar heating systems usually require a small supplementary backup heating system, either conventional or renewable.
Geothermal heating
Geothermal energy is accessed by drilling water or steam wells in a process similar to drilling for oil. Geothermal energy is an enormous, underused heat and power resource that is clean (emits little or no greenhouse gases), reliable (average system availability of 95%), and homegrown (making populations less dependent on oil).
The earth absorbs the sun's energy and stores it as heat in the oceans and underground. Below a certain depth, the ground temperature remains roughly constant all year round, at a value that depends on where on Earth you live. A geothermal heating system takes advantage of this consistent temperature found below the Earth's surface and uses it to heat and cool buildings. The system is made up of a series of pipes installed underground, connected to pipes in a building. A pump circulates liquid through the circuit. In the winter the fluid in the pipe absorbs the heat of the earth and uses it to heat the building. In the summer the fluid absorbs heat from the building and disposes of it in the earth.
Heat pumps
Heat pumps use work to move heat from one place to another, and can be used for both heating and air conditioning. Though capital intensive, heat pumps are economical to run and can be powered by renewable electricity. Two common types of heat pump are air source heat pumps (ASHP) and ground-source heat pumps (GSHP), depending on whether heat is transferred from the air or from the ground. Air source heat pumps are not effective when the outside air temperature is lower than about -15 °C, while ground-source heat pumps are not affected. The efficiency of a heat pump is measured by the coefficient of performance (CoP): For every unit of electricity used to pump the heat, an air source heat pump generates 2.5 to 3 units of heat (i.e. it has a CoP of 2.5 to 3), whereas a GSHP generates 3 to 3.5 units of heat. Based on current fuel prices for the United Kingdom, assuming a CoP of 3–4, a GSHP is sometimes a cheaper form of space heating than electric, oil, and solid fuel heating. Heat pumps can be linked to an interseasonal thermal energy storage (hot or cold), doubling the CoP from 4 to 8 by extracting heat from warmer ground.
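As a rough illustration of how the CoP translates into running cost (a minimal sketch; the fuel prices and boiler efficiency are illustrative assumptions, not figures from this article):

```python
def heat_cost_per_kwh(fuel_price_per_kwh: float, efficiency: float) -> float:
    """Cost of one kWh of delivered heat.

    efficiency: CoP for a heat pump, or combustion efficiency for a boiler.
    """
    return fuel_price_per_kwh / efficiency

# Illustrative unit prices: electricity 0.30 per kWh, gas 0.08 per kWh
elec, gas = 0.30, 0.08
print(heat_cost_per_kwh(elec, 3.0))   # air source heat pump, CoP 3  -> 0.10 per kWh of heat
print(heat_cost_per_kwh(elec, 4.0))   # ground-source heat pump, CoP 4 -> 0.075
print(heat_cost_per_kwh(gas, 0.90))   # condensing gas boiler at 90%  -> ~0.089
```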
Interseasonal heat transfer
A heat pump with Interseasonal Heat Transfer combines active solar collection to store surplus summer heat in thermal banks with ground-source heat pumps to extract it for space heating in winter. This reduces the "Lift" needed and doubles the CoP of the heat pump because the pump starts with warmth from the thermal bank in place of cold from the ground.
CoP and lift
A heat pump CoP increases as the temperature difference, or "Lift", decreases between heat source and destination. The CoP can be maximized at design time by choosing a heating system requiring only a low final water temperature (e.g., underfloor heating), and by choosing a heat source with a high average temperature (e.g., the ground). Domestic hot water (DHW) and conventional radiators require high water temperatures, affecting the choice of heat pump technology. Low temperature radiators provide an alternative to conventional radiators.
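One common way to quantify the effect of lift, not discussed in this article but useful for illustrating the trend, is the ideal (Carnot) limit on heating CoP; real heat pumps achieve only a fraction of this bound. A minimal sketch:

```python
def carnot_heating_cop(source_c: float, sink_c: float) -> float:
    """Ideal (Carnot) heating CoP; real heat pumps reach only a fraction of this."""
    t_hot = sink_c + 273.15    # temperature of the heat destination, in kelvin
    t_cold = source_c + 273.15 # temperature of the heat source, in kelvin
    return t_hot / (t_hot - t_cold)

# Smaller lift -> higher CoP
print(carnot_heating_cop(-5, 55))   # cold air source, hot radiators:      ~5.5
print(carnot_heating_cop(10, 35))   # ground source, underfloor heating:  ~12.3
```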
Resistive electrical heating
Renewable electricity can be generated by hydropower, solar, wind, geothermal and by burning biomass. In a few countries where renewable electricity is inexpensive, resistance heating is common. In countries like Denmark where electricity is expensive, it is not permitted to install electric heating as the main heat source. Wind turbines produce more output at night, when demand for electricity is low; storage heaters consume this lower-cost electricity at night and give off heat during the day.
Wood-pellet heating
Wood-pellet heating and other types of wood heating systems have achieved their greatest success in heating premises that are off the gas grid, typically being previously heated using heating oil or coal. Solid wood fuel requires a large amount of dedicated storage space, and the specialized heating systems can be expensive (though grant schemes are available in many European countries to offset this capital cost.) Low fuel costs mean that wood fuelled heating in Europe is frequently able to achieve a payback period of less than 3 to 5 years. Because of the large fuel storage requirement wood fuel can be less attractive in urban residential scenarios, or for premises connected to the gas grid (though rising gas prices and uncertainty of supply mean that wood fuel is becoming more competitive.) There is also growing concern over the air pollution from wood heating versus oil or gas heat, especially the fine particulates.
Wood-stove heating
Burning wood fuel in an open fire is both extremely inefficient (0-20%) and polluting due to low temperature partial combustion. In the same way that a drafty building loses heat through loss of warm air through poor sealing, an open fire is responsible for large heat losses by drawing very large volumes of warm air out of the building.
Modern wood stove designs allow for more efficient combustion and then heat extraction. In the United States, new wood stoves are certified by the U.S. Environmental Protection Agency (EPA) and burn cleaner and more efficiently (the overall efficiency is 60-80%) and draw smaller volumes of warm air from the building.
"Cleaner" should not, however, be confused with clean. An Australian study of real-life emissions from woodheaters satisfying the current Australian standard, found that particle emissions averaged 9.4 g/kg wood burned (range 2.6 to 21.7). A heater with average wood consumption of 4 tonnes per year therefore emits 37.6 kg of PM2.5, i.e. particles less than 2.5 micrometers. This can be compared with a passenger car satisfying the current Euro 5 standards (introduced September 2009) of 0.005 g/km. So one new wood heater emits as much PM2.5 per year as 367 passenger cars each driving 20,000 km a year. A recent European study identified PM2.5 as the most health-hazardous air pollutant, causing an estimated 492,000 premature deaths. The next worst pollutant, ozone, is responsible for 21,000 premature deaths.
Because of the problems with pollution, the Australian Lung Foundation recommends using alternative means for climate control. The American Lung Association "strongly recommends using cleaner, less toxic sources of heat. Converting a wood-burning fireplace or stove to use either natural gas or propane will eliminate exposure to the dangerous toxins wood burning generates including dioxin, arsenic and formaldehyde."
"Renewable" should not be confused with "greenhouse neutral". A recent peer-reviewed paper found that, even if burning firewood from a sustainable supply, methane emissions from a typical Australian wood heater satisfying the current standard cause more global warming than heating the same house with gas. However, because a large proportion of firewood sold in Australia is not from sustainable supplies, Australian households that use wood heating often cause more global warming than heating three similar homes with gas.
High efficiency stoves should meet the following design criteria:
Well sealed and precisely calibrated to draw a low yet sufficient volume of air. Air-flow restriction is critical; a lower inflow of cold air cools the furnace less (a higher temperature is thus achieved). It also allows greater time for extraction of heat from the exhaust gas, and draws less heat from the building.
The furnace must be well insulated to increase combustion temperature, and thus completeness.
A well insulated furnace radiates little heat. Thus heat must be extracted instead from the exhaust gas duct. Heat absorption efficiencies are higher when the heat-exchange duct is longer, and when the flow of exhaust gas is slower.
In many designs, the heat-exchange duct is built of a very large mass of heat-absorbing brick or stone. This design causes the absorbed heat to be emitted over a longer period - typically a day.
Renewable natural gas
Renewable natural gas is defined as gas obtained from biomass which is upgraded to a quality similar to natural gas. By upgrading the quality to that of natural gas, it becomes possible to distribute the gas to customers via the existing gas grid. According to the Energy research Centre of the Netherlands, renewable natural gas is 'cheaper than alternatives where biomass is used in a combined heat and power plant or local combustion plant'. Energy unit costs are lowered through 'favourable scale and operating hours', and end-user capital costs eliminated through distribution via the existing gas grid.
Energy efficiency
Renewable heat goes hand in hand with energy efficiency. Indeed, renewable heating projects depend heavily on energy efficiency for their success: in the case of solar heating, to cut reliance on supplementary heating; in the case of wood fuel heating, to cut the cost of wood purchased and the volume stored; and in the case of heat pumps, to reduce the size of and investment in the heat pump and heat sink, and the electricity costs.
Two main types of improvement can be made to a building's energy efficiency:
Insulation
Improvements to insulation can cut energy consumption greatly, making a space cheaper to heat and to cool. However existing housing can often be difficult or expensive to improve. Newer buildings can benefit from many of the techniques of superinsulation. Older buildings can benefit from several kinds of improvement:
Solid wall insulation: A building with solid walls can benefit from internal or external insulation. External wall insulation involves adding decorative weather-proof insulating panels or other treatment to the outside of the wall. Alternatively, internal wall insulation can be applied using ready-made insulation/plaster board laminates, or other methods. Thicknesses of internal or external insulation typically range between 50 and 100 mm.
Cavity wall insulation: A building with cavity walls can benefit from insulation pumped into the cavity. This form of insulation is very cost effective.
Programmable thermostats allow heating and cooling of a room to be switched off depending on the time, day of the week, and temperature. A bedroom, for example, does not need to be heated during the day, while a living room does not need to be heated during the night.
Roof insulation
Insulated windows and doors
Draught proofing
Underfloor heating
Underfloor heating may sometimes be more energy efficient than traditional methods of heating:
Water circulates within the system at low temperatures (35 °C - 50 °C) making gas boilers, wood fired boilers, and heat pumps significantly more efficient.
Rooms with underfloor heating are cooler near the ceiling, where heat is not required, but warmer underfoot, where comfort is most required.
Traditional radiators are frequently positioned underneath poorly insulated windows, heating them unnecessarily.
Waste-water heat recovery
It is possible to recover significant amounts of heat from waste hot water via hot water heat recycling. The major consumers of hot water are sinks, showers, baths, dishwashers, and clothes washers. On average, 30% of a property's domestic hot water is used for showering. Incoming fresh water is typically far colder than the waste water from a shower. An inexpensive heat exchanger recovers on average about 40% of the heat that would normally be wasted, by warming incoming cold fresh water with heat from outgoing waste water.
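As a rough worked example of the savings involved (a minimal sketch; the shower volume and temperatures are illustrative assumptions, not figures from this article):

```python
def shower_heat_recovered_kwh(litres: float, t_hot_c: float, t_cold_c: float,
                              recovery_fraction: float = 0.4) -> float:
    """Heat recovered from shower drain water by a drain-water heat exchanger."""
    c_water = 4186.0                       # J/(kg*K); 1 litre of water is ~1 kg
    q_joules = litres * c_water * (t_hot_c - t_cold_c) * recovery_fraction
    return q_joules / 3.6e6                # joules -> kWh

# Illustrative figures: a 60 L shower at 38 C, mains water entering at 10 C
print(shower_heat_recovered_kwh(60, 38, 10))   # ~0.78 kWh recovered per shower
```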
Heat recovery ventilation
Heat recovery ventilation (HRV) is an energy recovery ventilation system which works between two air sources at different temperatures. By recovering the residual heat in the exhaust gas, the fresh air introduced into the air conditioning system is preheated.
See also
References
External links
Heat pumps based on R744 (CO2) FAQ
Heat pumps Long Awaited Way out of the Global Warming - Information from Heat Pump & Thermal Storage Technology Center of Japan
Department of Trade and Industry, 2005 study on Renewable Heat
Renewable Heat combining asphalt solar collectors, thermal banks and ground source heat pumps.
Energy Saving Trust information on Home Insulation
The Gill report on biomass in the UK - download
Solid wall insulation
Cavity wall insulation
Energy economics
Energy conservation
Heating
Low-energy building
Residential heating
Renewable energy technology
Sustainable technologies
Sustainable building
Sustainable architecture
Sustainable energy | Renewable heat | [
"Engineering",
"Environmental_science"
] | 3,130 | [
"Sustainable building",
"Sustainable architecture",
"Building engineering",
"Energy economics",
"Construction",
"Environmental social science",
"Architecture"
] |
4,432,875 | https://en.wikipedia.org/wiki/Goodyear%20Airdock | The Goodyear Airdock is a construction and storage airship hangar in Akron, Ohio. At its completion in 1929, it was the largest building in the world without interior supports.
Description
The building has a unique shape which has been described as "half a silkworm's cocoon, cut in half the long way." It is long, wide, and high, supported by 13 steel arches. There is 364,000 square feet (34 000 m2) of unobstructed floor space, or an area larger than 8 football fields side-by-side. The airdock has a volume of 55 million cubic feet (or about 1.5 million cubic meters). A control tower and radio aerial sit at its northeast end. At each end of the building are two huge semi-spherical doors that each weigh 600 tons (544 000 kg). At the top, the doors are fastened by hollow forged pins in diameter and long. The doors roll on 40 wheels along specially-designed curved railroad tracks, each powered by an individual power plant that can open the doors in about 5 minutes.
The airdock is so large that temperature changes within the structure can be very different from that on the outside of the structure. To accommodate these fluctuations, which could cause structural damage, a row of 12 windows off the ground was installed. Furthermore, the entire structure is mounted on rollers to compensate for expansion or contraction resulting from temperature changes. When the humidity is high in the Airdock, a sudden change in temperature causes condensation. This condensation falls in a mist, creating the illusion of rain, according to the designer.
History
In 1929, Goodyear Zeppelin Corporation, later Goodyear Aerospace, sought a structure in which "lighter-than-air" ships (later known as airships, dirigibles, and blimps) could be constructed. The company commissioned Karl Arnstein of Akron, Ohio, whose design was inspired by the blueprints of the first aerodynamic-shaped airship hangar, built in 1913 in Dresden, Germany.
Construction took place from April 20 to November 25, 1929, at a cost of $2.2 million.
The first two airships to be constructed and launched at the airdock were the USS Akron, in 1931, and its sister ship, the USS Macon.
When World War II broke out, enclosed production areas were desperately needed, and the airdock was used for building airships. The last airship built in the airdock was the U.S. Navy's ZPG-3W in 1960. The building later housed the photographic division of the Goodyear Aerospace Corporation.
In 1980, the Goodyear Airdock was designated a Historic Civil Engineering Landmark by the American Society of Civil Engineers.
The airdock served as the site of the 1986 kickoff rally for the United Way of Summit County, where more than 350,000 members of the public visited. Bill Clinton spoke there during his 1992 election campaign.
In 1987, the Loral Corporation purchased Goodyear Aerospace and the Goodyear Airdock as a result of James Goldsmith's greenmailing of Goodyear. The Loral Corporation (and its holdings, including the Goodyear Airdock) was purchased by Lockheed Martin in 1996.
California company LTA Research and Exploration, together with the University of Akron, plans to use the airdock to develop electric-powered airships.
The airdock is not open to the public, but it can be seen by those traveling on U.S. Route 224 east of downtown Akron.
See also
Airship hangar
Hangar No. 1, Lakehurst Naval Air Station
Hangar One (Mountain View, California)
Weeksville Dirigible Hangar
Bartolomeu de Gusmão Airport
MCAS Tustin
References
External links
National Park Service history of the Goodyear Airdock
Facts and figures
Aviation: From Sand Dunes to Sonic Booms, a National Park Service Discover Our Shared Heritage Travel Itinerary
Ohio and Erie Canal National Heritage Corridor, a National Park Service Discover Our Shared Heritage Travel Itinerary
Airship hangars
Airships of the United States
Historic American Engineering Record in Ohio
Industrial buildings and structures on the National Register of Historic Places in Ohio
National Register of Historic Places in Summit County, Ohio
Transportation in Akron, Ohio
Buildings and structures in Akron, Ohio
Transport infrastructure completed in 1929
Airdock
Historic Civil Engineering Landmarks
Aircraft hangars on the National Register of Historic Places | Goodyear Airdock | [
"Engineering"
] | 886 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
4,433,215 | https://en.wikipedia.org/wiki/Submarine%20depth%20ratings | Depth ratings are primary design parameters and measures of a submarine's ability to operate underwater. The depths to which submarines can dive are limited by the strengths of their hulls.
Ratings
The hull of a submarine must be able to withstand the forces created by the outside water pressure being greater than the inside air pressure. The outside water pressure increases with depth, and so the stresses on the hull also increase with depth. Each 10 metres (33 feet) of depth puts another atmosphere (1 bar, 14.7 psi, 101 kPa) of pressure on the hull, so at 300 metres, for example, the hull is withstanding roughly 30 atmospheres of water pressure.
Test depth
This is the maximum depth at which a submarine is permitted to operate under normal peacetime circumstances, and is tested during sea trials. The test depth is set at two-thirds (0.66) of the design depth for United States Navy submarines, while the Royal Navy sets test depth at 4/7 (0.57) the design depth, and the German Navy sets it at exactly one-half (0.50) of design depth.
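The pressure rule and the navy-specific test-depth fractions above can be combined in a short calculation (a minimal sketch; the 400 m design depth is an illustrative assumption):

```python
ATM_PER_METRE = 1.0 / 10.0     # roughly one extra atmosphere per 10 m of seawater

def gauge_pressure_atm(depth_m: float) -> float:
    """Approximate water pressure on the hull above surface pressure, in atmospheres."""
    return depth_m * ATM_PER_METRE

def test_depth_m(design_depth_m: float, navy: str) -> float:
    """Test depth as a fraction of design depth, per the figures quoted above."""
    fraction = {"us": 2 / 3, "uk": 4 / 7, "germany": 1 / 2}[navy]
    return design_depth_m * fraction

print(gauge_pressure_atm(300))          # ~30 atm at 300 m
print(test_depth_m(400, "us"))          # ~267 m for an illustrative 400 m design depth
print(test_depth_m(400, "uk"))          # ~229 m
print(test_depth_m(400, "germany"))     # 200 m
```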
Operating depth
Also known as the maximum operating depth (or the never-exceed depth), this is the maximum depth at which a submarine is allowed to operate under any (e.g. battle) conditions.
Design depth
The nominal depth listed in the submarine's specifications. From it the designers calculate the thickness of the hull metal, the boat's displacement, and many other related factors.
Crush depth
Sometimes referred to as the "collapse depth" in the United States, this is the submerged depth at which the submarine implodes due to water pressure. Technically speaking, the crush depth should be the same as the design depth, but in practice is usually somewhat deeper. This is the result of compounding safety margins throughout the production chain, where at each point an effort is made to at least slightly exceed the required specifications to account for imperceptible material defects or variations in machining tolerances.
A submarine, by definition, cannot exceed crush depth without being crushed. However, when a prediction is made as to what a submarine's crush depth might be, that prediction may subsequently be mistaken for the actual crush depth of the submarine. Such misunderstandings, compounded by errors in translation and general confusion as to what the various depth ratings mean, have resulted in multiple erroneous accounts of submarines not being crushed at their crush depth.
Notably, several World War II submarines reported that, due to flooding or mechanical failure, they had gone below crush depth, before successfully resurfacing after having the failure repaired or the water pumped out. In these cases, the "crush depth" is always either a mistranslated official "safe" or design depth (i.e. the test depth, or the maximum operating depth) or a prior (incorrect) estimate of what the crush depth might be. World War II German U-boats of the types VII and IX generally imploded at depths of roughly 200 to 280 metres.
See also
HY-80 steel
USS Thresher (SSN-593) – a submarine that likely imploded after reaching its crush depth
References
Pressure vessels
Failure
Pressure
Submarine design | Submarine depth ratings | [
"Physics",
"Chemistry",
"Engineering"
] | 637 | [
"Structural engineering",
"Scalar physical quantities",
"Mechanical quantities",
"Physical quantities",
"Chemical equipment",
"Pressure",
"Physical systems",
"Hydraulics",
"Wikipedia categories named after physical quantities",
"Pressure vessels"
] |
7,677,352 | https://en.wikipedia.org/wiki/Scleronomous | A mechanical system is scleronomous if the equations of constraints do not contain the time as an explicit variable and the equation of constraints can be described by generalized coordinates. Such constraints are called scleronomic constraints. The opposite of scleronomous is rheonomous.
Application
In 3-D space, a particle with mass m and velocity \mathbf{v} has kinetic energy
T = \tfrac{1}{2} m v^2 .
Velocity is the derivative of position \mathbf{r} with respect to time t. Using the chain rule for several variables,
\mathbf{v} = \frac{d\mathbf{r}}{dt} = \sum_i \frac{\partial \mathbf{r}}{\partial q_i} \dot{q}_i + \frac{\partial \mathbf{r}}{\partial t} ,
where q_i are generalized coordinates.
Therefore,
T = \tfrac{1}{2} m \left( \sum_i \frac{\partial \mathbf{r}}{\partial q_i} \dot{q}_i + \frac{\partial \mathbf{r}}{\partial t} \right)^2 .
Rearranging the terms carefully,
T = T_0 + T_1 + T_2 ,
where T_0, T_1, T_2 are respectively homogeneous functions of degree 0, 1, and 2 in the generalized velocities. If this system is scleronomous, then the position does not depend explicitly on time:
\frac{\partial \mathbf{r}}{\partial t} = 0 .
Therefore, only the T_2 term does not vanish:
T = T_2 .
Kinetic energy is then a homogeneous function of degree 2 in the generalized velocities.
Example: pendulum
As shown at right, a simple pendulum is a system composed of a weight and a string. The string is attached at the top end to a pivot and at the bottom end to a weight. Being inextensible, the string's length is a constant. Therefore, this system is scleronomous; it obeys the scleronomic constraint
\sqrt{x^2 + y^2} - L = 0 ,
where (x, y) is the position of the weight and L is the length of the string.
Take a more complicated example. Referring to the next figure at right, assume the top end of the string is attached to a pivot point undergoing simple harmonic motion
x_t = x_0 \cos \omega t ,
where x_0 is the amplitude, \omega is the angular frequency, and t is time.
Although the top end of the string is not fixed, the length of this inextensible string is still a constant. The distance between the top end and the weight must stay the same. Therefore, this system is rheonomous, as it obeys a constraint explicitly dependent on time:
\sqrt{(x - x_0 \cos \omega t)^2 + y^2} - L = 0 .
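The distinction can also be checked symbolically (a minimal sketch using the sympy library; the Cartesian coordinates x, y and the symbols L, x_0, omega follow the constraints written above):

```python
import sympy as sp

t = sp.symbols('t')
L, x0, w = sp.symbols('L x_0 omega', positive=True)
x, y = sp.Function('x')(t), sp.Function('y')(t)

# Fixed pivot: the constraint has no explicit time dependence -> scleronomic
f_fixed = sp.sqrt(x**2 + y**2) - L

# Pivot oscillating as x0*cos(w*t): the constraint depends explicitly on t -> rheonomic
f_moving = sp.sqrt((x - x0 * sp.cos(w * t))**2 + y**2) - L

def explicit_time_dependence(f):
    # Differentiate with respect to t while holding the coordinates fixed
    xs, ys = sp.symbols('xs ys')
    return sp.simplify(sp.diff(f.subs({x: xs, y: ys}), t))

print(explicit_time_dependence(f_fixed))    # 0        -> scleronomic constraint
print(explicit_time_dependence(f_moving))   # nonzero  -> rheonomic constraint
```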
See also
Lagrangian mechanics
Holonomic system
Nonholonomic system
Rheonomous
Mass matrix
References
Mechanics
Classical mechanics
Lagrangian mechanics
de:Skleronom | Scleronomous | [
"Physics",
"Mathematics",
"Engineering"
] | 401 | [
"Lagrangian mechanics",
"Classical mechanics",
"Mechanics",
"Mechanical engineering",
"Dynamical systems"
] |
7,677,370 | https://en.wikipedia.org/wiki/Rheonomous | A mechanical system is rheonomous if its equations of constraints contain the time as an explicit variable. Such constraints are called rheonomic constraints. The opposite of rheonomous is scleronomous.
Example: simple 2D pendulum
As shown at right, a simple pendulum is a system composed of a weight and a string. The string is attached at the top end to a pivot and at the bottom end to a weight. Being inextensible, the string has a constant length. Therefore, this system is scleronomous; it obeys the scleronomic constraint
\sqrt{x^2 + y^2} - L = 0 ,
where (x, y) is the position of the weight and L the length of the string.
The situation changes if the pivot point is moving, e.g. undergoing a simple harmonic motion
x_t = x_0 \cos \omega t ,
where x_0 is the amplitude, \omega the angular frequency, and t time.
Although the top end of the string is not fixed, the length of this inextensible string is still a constant. The distance between the top end and the weight must stay the same. Therefore, this system is rheonomous; it obeys the rheonomic constraint
\sqrt{(x - x_0 \cos \omega t)^2 + y^2} - L = 0 .
See also
Lagrangian mechanics
Holonomic constraints
References
Mechanics
Classical mechanics
Lagrangian mechanics | Rheonomous | [
"Physics",
"Mathematics",
"Engineering"
] | 250 | [
"Lagrangian mechanics",
"Classical mechanics",
"Mechanics",
"Mechanical engineering",
"Dynamical systems"
] |
7,682,407 | https://en.wikipedia.org/wiki/Plaster%20veneer | Plaster veneer (American English) or plaster skim (British English) is a construction methodology for surfacing interior walls, by applying a thin layer of plaster over a substrate—typically over specially formulated gypsum board base, similar in nature to drywall.
History
Until the mid twentieth century, it was standard practice in Western construction to surface interior walls using wooden lath and a layer of plaster about a half-inch thick ("lath and plaster"). Later, drywall became a standard. Typically, drywall is surfaced using the "mud-and-tape" method, where non-adhesive paper or mesh tape and drywall joint compound ("mud") is used to fill joints, cover nail heads, and repair any flaws.
Plaster veneer was developed as a way of taking advantage of the reduced labor of modern drywall, while providing a genuine plaster surface for a wall.
Applications
In much of the world, plaster veneer is a very rare wall surface. Consequently, it can be difficult to find a local trade worker skilled in the practice. However, in some regions, such as Ireland, or Massachusetts this situation is reversed, with plaster veneer a common standard, and mud-and-tape the less common alternative.
Plaster veneer is well-suited to the renovation of older buildings, since it is an easier option than full re-creation of the original lath and plaster. The veneer surface will closely mimic antique walls, with their hand-applied variations. In contrast, properly finished mud-and-tape drywall can be very planar, and industrially uniform in character. Drywall feels relatively warm and soft to the touch, while plaster feels cooler and very hard. Consequently, plaster veneer might be an appropriate choice in the renovation of an older house with existing lath-and-plaster walls.
Bare mud-and-tape drywall is generally only acceptable as a final decorating finish in utility spaces such as attics or garages. In most rooms, such walls are finished with paint or wallpaper. Plaster veneer walls are usually similarly decorated, but unpainted plaster can also serve as a finish. Because bare plaster can be appealing to the touch, and paint would add an additional layer, some decorators opt to leave exposed plaster in some or all of a room, as a creative choice. In such cases, if the plaster's natural color is not desired, tints can be added as part of the mixing process, or can be introduced unevenly for artistic color effects.
Methodology
The plaster veneer method begins with the hanging of specially designated drywall ("blueboard"), in the conventional manner. N.B. In North America, the color of the face paper of drywall indicates its intended application: white for standard, green for moisture resistance, etc. Plaster veneer wallboard is blue or gray. Plaster veneer may also be applied to ordinary drywall, or over existing walls, but this requires "gluing" the existing wall surface by painting on a special adhesive compound, and then applying a thin layer of "base coat" plaster. After the blueboard or base coat covers the interior of a room, the finishing plaster is mixed in batches (typically about 5 gallons), by blending plaster powder with water, to the ideal consistency. Working quickly, a thin layer (usually one to three millimeters) of finish plaster is applied over a wall face before the plaster begins to congeal in the bucket. Over a period of a few hours, as the plaster chemically sets on the wall, it is periodically smoothed or textured using hand trowels, until the desired finish is achieved. When a wall face has sufficiently set, an adjoining face can be safely applied. After the plaster has fully set, it is allowed to cure for a period of days to weeks, permitting excess moisture to escape into the air. If the plaster is no more than about three millimeters thick, two to three days' cure time is usually sufficient. A skilled eye can see areas where moisture is lingering until the curing is complete. After the walls are fully cured, they are ideal for painting or papering.
Advantages
Some considerations favor plaster veneer over mud-and-tape drywall.
Moisture resistance: Once cured, plaster is an effective water barrier. By contrast, unprotected drywall and joint compound absorb water, causing sagging, bloating, or complete structural failure. As a consequence of plaster's inherent water shedding properties, it is a very effective water and mildew barrier.
No sanding: Plaster is typically applied in one work session per wall or per several non-adjoining walls. The smoothness or texture is achieved by working the plaster as it sets, over a period of up to five hours. By contrast, drywall is typically sanded or otherwise mechanically smoothed as the final step of the wall surfacing process. The fine dust particles created can be difficult to clean and dangerous to breathe.
Pleasing surface: Bare plaster can be a beautiful interior surface. The natural color of veneer plaster is a mottled white. When applied for maximum smoothness, it can result in a hard, mirror-like surface, which masks the mechanical uniformity of the drywall with the subtly organic form of a hand-applied layer. Tinting can be added to the wet plaster for color effects.
Durable surface: Plaster veneer results in a harder and more durable surface than drywall. Scuffs and gouges are less likely.
Quicker: The overall calendar time from beginning to end for a plaster veneer project is typically slightly shorter than for conventional drywall. (N.B. The overall labor time is usually less with mud-and-tape drywall.) This is because drywall joint compound is applied in at least three phases, followed by sanding. Some drywall joint compounds ("hot mud") set chemically, allowing rapid re-coating, but these compounds can make sanding more difficult. By contrast, each wall in a plaster veneer project is applied as a single task, and allowed to set and cure without intervention.
Disadvantages
Some considerations make plaster veneer a less appropriate choice than mud-and-tape drywall.
Rare technique: In most locales, there are few trade professionals with the skills for plaster veneer. Where drywall is much more common, it can be difficult even to acquire plastering tools and materials from local suppliers. After a wall is surfaced with plaster, any future work on that wall should ideally be done by a craftsperson familiar with plaster veneer.
Application time commitment: Once a plaster wall is begun, it must be completed before the worker can stop. This time commitment makes scheduling each task more critical, and can cause a worker to end the work day early or late.
Cure time: Only a 24-hour wait is needed before decorating over "one coat" veneer. A longer cure applies to two- or three-coat plastering that may include lime in the finish; it is recommended to wait 28 days before painting a lime finish wall, although this can be shortened by using an oil-based primer once the plaster is dry. Plastering walls with one coat can actually be faster than taping, since taping requires three separate coats that each dry for about 24 hours. If heavy patchwork is involved there is also additional wait time, but that is typically drying time rather than curing time.
More expensive: In comparable settings, the costs of plaster veneer walls is higher than mud-and-tape drywall walls. That is, a skilled tradesman working start to finish at full efficiency can surface mud-and-tape with somewhat less labor than with plaster veneer. Additionally, plaster veneer uses significantly more plaster material than the amount of joint compound in a typical mud-and-tape wall.
References
Wallcoverings
Plastering | Plaster veneer | [
"Chemistry",
"Engineering"
] | 1,624 | [
"Building engineering",
"Coatings",
"Plastering"
] |
7,683,001 | https://en.wikipedia.org/wiki/Nef%20reaction | In organic chemistry, the Nef reaction is an organic reaction describing the acid hydrolysis of a salt of a primary or secondary nitroalkane () to an aldehyde () or a ketone () and nitrous oxide (). The reaction has been the subject of several literature reviews.
The reaction was reported in 1894 by the chemist John Ulric Nef, who treated the sodium salt of nitroethane with sulfuric acid resulting in an 85–89% yield of nitrous oxide and at least 70% yield of acetaldehyde. However, the reaction was pioneered a year earlier in 1893 by Konovalov, who converted the potassium salt of 1-phenylnitroethane with sulfuric acid to acetophenone.
Reaction mechanism
The reaction mechanism starting from the nitronate salt as the resonance structures 1a and 1b is depicted below:
The salt is protonated forming the nitronic acid 2 (in some cases these nitronates have been isolated) and once more to the iminium ion 3. This intermediate is attacked by water in a nucleophilic addition forming 4 which loses a proton and then water to the 1-nitroso-alkanol 5 which is believed to be responsible for the deep-blue color of the reaction mixture in many Nef reactions. This intermediate rearranges to hyponitrous acid 6 (forming nitrous oxide 6c through 6b) and the oxonium ion 7 which loses a proton to form the carbonyl compound.
Note that formation of the nitronate salt from the nitro compound requires an alpha hydrogen atom and therefore the reaction fails with tertiary nitro compounds.
Scope
Nef-type reactions are frequently encountered in organic synthesis, because they turn the Henry reaction into a convenient method for functionalization at the β and γ locations. Thus, for example, the reaction is combined with the Michael reaction in the synthesis of the γ-keto-carbonyl methyl 3-acetyl-5-oxohexanoate, itself a cyclopentenone intermediate:
In carbohydrate chemistry, they are a chain-extension method for aldoses, as in the isotope labeling of 14C-D-mannose and 14C-D-glucose from D-arabinose and 14C-nitromethane (the first step here is a Henry reaction):
The opposite reaction is the Wohl degradation.
Variants
Nef's original protocol, using concentrated sulfuric acid, has been described as "violent". Strong-acid hydrolysis without the intermediate salt stage results in the formation of carboxylic acids and hydroxylamine salts, but Lewis acids such as tin(IV) chloride and iron(III) chloride give a clean hydrolysis. Alternatively, strong oxidizing agents, such as oxone, ozone, or permanganates, will cleave the nitronate tautomer at the double bond to form a carbonyl and nitrate. Oxophilic reductants, such as titanium salts, will reduce the nitronate to a hydrolysis-susceptible imine, but less selective reductants give the amine instead.
References
Substitution reactions
Name reactions | Nef reaction | [
"Chemistry"
] | 663 | [
"Name reactions"
] |
7,683,674 | https://en.wikipedia.org/wiki/Chabauty%20topology | In mathematics, the Chabauty topology is a certain topological structure introduced in 1950 by Claude Chabauty, on the set of all closed subgroups of a locally compact group G.
The intuitive idea may be seen in the case of the set of all lattices in a Euclidean space E. There these are only certain of the closed subgroups: others can be found by in a sense taking limiting cases or degenerating a certain sequence of lattices. One can find linear subspaces or discrete groups that are lattices in a subspace, depending on how one takes a limit. This phenomenon suggests that the set of all closed subgroups carries a useful topology. It is also linked to the Hausdorff topology for closed subsets of metric spaces.
This topology can be derived from the Vietoris topology construction, a topological structure on all non-empty subsets of a space. More precisely, it is an adaptation of the Fell topology construction, which itself derives from the Vietoris topology concept.
References
Claude Chabauty, Limite d'ensembles et géométrie des nombres. Bulletin de la Société Mathématique de France, 78 (1950), p. 143-151
Topological groups | Chabauty topology | [
"Mathematics"
] | 247 | [
"Topological spaces",
"Space (mathematics)",
"Topological groups"
] |
7,684,225 | https://en.wikipedia.org/wiki/Calicheamicin | The calicheamicins are a class of enediyne antitumor antibiotics derived from the bacterium Micromonospora echinospora, with calicheamicin γ1 being the most notable. It was isolated originally in the mid-1980s from the chalky soil, or "caliche pits", located in Kerrville, Texas. The sample was collected by a scientist working for Lederle Labs. It is extremely toxic to all cells and, in 2000, a CD33 antigen-targeted immunoconjugate N-acetyl dimethyl hydrazide calicheamicin was developed and marketed as targeted therapy against the non-solid tumor cancer acute myeloid leukemia (AML). A second calicheamicin-linked monoclonal antibody, inotuzumab ozogamicin (marketed as Besponsa), an anti-CD22-directed antibody-drug conjugate, was approved by the U.S. Food and Drug Administration on August 17, 2017, for use in the treatment of adults with relapsed or refractory B-cell precursor acute lymphoblastic leukemia. Calicheamicin γ1 and the related enediyne esperamicin are two of the most potent antitumor agents known.
Mechanism of toxicity
Calicheamicins target DNA and cause strand scission. Calicheamicins bind with DNA in the minor groove, wherein they then undergo a reaction analogous to the Bergman cyclization to generate a diradical species. This diradical, 1,4-didehydrobenzene, then abstracts hydrogen atoms from the deoxyribose (sugar) backbone of DNA, which ultimately leads to strand scission. The specificity of binding of calicheamicin to the minor groove of DNA was demonstrated by Crothers et al. (1999) to be due to the aryltetrasaccharide group of the molecule.
Biosynthesis
The core metabolic pathway for biosynthesis of this molecule resembles that of other characterized enediyne compounds and occurs via an iterative polyketide synthase (PKS) pathway. This type I PKS loads Acetyl-CoA and then repeatedly adds a total of seven Malonyl-CoAs. The growing polyketide is acted upon by the ketoreductase domain (KR) and dehydratase domain (DH) during each iteration to produce a 15-carbon polyene, which is then processed by accessory enzymes to yield the putative enediyne core of calicheamicin. Maturation of the polyketide core is anticipated to occur by the action of additional enzymes to provide a calicheamicinone-like intermediate as a substrate for subsequent glycosylation.
Glycosylation of calicheamicinone requires 4 glycosyltransferases (CalG1-4) and one acyltransferase (CalO4), each recognizing a specific sugar nucleotide or orsellinic acid substrate. Ground-breaking biochemical studies of CalG1-G4 by Thorson and coworkers revealed the reactions catalyzed by these glycosyltransferases to be highly reversible. This was a paradigm shift in the context of glycosyltransferase catalysis and Thorson and coworkers went on to demonstrate this to be a general phenomenon that could be exploited for sugar nucleotide synthesis and 'glycorandomization'. The structures of all four glycosyltransferases were also reported by the same group, revealing a conserved calicheamicin binding motif that coordinates the enediyne backbone through interactions with aromatic residues. The catalytic site of CalG1, CalG3, and CalG4 was shown to possess a highly conserved catalytic dyad of histidine and aspartate which promotes nucleophilic attack on the acceptor hydroxyl group of calicheamicin intermediates. Notably, this motif is absent from CalG2, suggesting a different catalytic mechanism in this enzyme.
Resistance
Calicheamicin displays unbiased toxicity to bacteria, fungi, viruses, and eukaryotic cells and organisms, which raises questions as to how the calicheamicin-producing Micromonospora manages not to poison itself. An answer to this question was presented in 2003 when Thorson and coworkers presented the first known example of a "self-sacrifice" resistance mechanism encoded by the gene calC from the calicheamicin biosynthetic gene cluster. In this study, the scientists revealed calicheamicin to cleave the protein CalC site-specifically, destroying both the calicheamicin and the CalC protein, thereby preventing DNA damage. The same group went on to solve the structure of CalC and, more recently, in collaboration with scientists from the Center for Pharmaceutical Research and Innovation (CPRI), discover structural or functional homologs encoded by genes in the calicheamicin gene cluster previously listed as encoding unknown function. In this latter study, the authors suggest that CalC homologs may serve in a biosynthetic capacity as the long-sought-after polyketide cyclases required to fold or cyclize early intermediates en route to calicheamicin.
History
It has been proposed that Alexander the Great was poisoned by drinking the water of the river Mavroneri (identified with the mythological River Styx) which is postulated to have been contaminated by this compound. However, toxicologists believe an extensive knowledge of biological chemistry would have been requisite for any application of this poison in antiquity.
See also
Antibody-drug conjugates using calicheamicins as cytotoxic agents:
Gemtuzumab ozogamicin
Inotuzumab ozogamicin
References
Cancer research
Enediynes
Polyketide antibiotics
Halogen-containing natural products
Iodobenzene derivatives
Benzoate esters
Thioesters
Pyrogallol ethers
Tertiary alcohols
Amines
Carbamates
Methyl esters
Glycerols
Acetals
Ten-membered rings
Micromonosporaceae | Calicheamicin | [
"Chemistry"
] | 1,297 | [
"Acetals",
"Functional groups",
"Thioesters",
"Amines",
"Bases (chemistry)"
] |
7,684,241 | https://en.wikipedia.org/wiki/Asymmetric%20Warfare%20Group | The Asymmetric Warfare Group was a United States Army special mission unit created during the War on Terrorism to mitigate various threats with regard to asymmetric warfare. The unit was headquartered at Fort Meade, Maryland and had a training facility (the Asymmetric Warfare Training Center) at Fort A.P. Hill, Virginia which was specialized in breaching and subterranean warfare. The unit provided the linkage between Training and Doctrine Command (TRADOC) and the operational Army, and reported directly to the commanding general of TRADOC.
In March 2021, the AWG held a casing of the colors ceremony and officially inactivated.
Organization
The Asymmetric Warfare Group was made up by a headquarters and headquarters detachment and four squadrons:
Able Squadron (Operations)
Baker Squadron (Operations)
Charlie Squadron (Operations)
Dog Squadron (Concepts & Integration)
Easy Squadron (Training)
Each squadron was commanded by a Lieutenant Colonel and subsequently divided into troops each commanded by a Major. AWG maintained forward deployed subject matter experts with all of the major combatant commands. Consisting of Army servicemembers, Department of the Army civilians, and contracted subject matter experts, the unit held an authorized strength of 377.
Mission
The U.S. Army Asymmetric Warfare Group (AWG) provided operational advisory assistance in support of Army and joint force commanders to enhance the combat effectiveness of the operating force and enable the defeat of asymmetric threats. AWG was the Army’s focal point for identifying asymmetric threats, enemy vulnerabilities and friendly capability gaps through first-hand observations. AWG key tasks include supporting Army and Joint Force Commanders by advising and assisting predeployment and in-theater forces; deploying and sustaining AWG forces worldwide to observe, assess, and disseminate information with regard to asymmetric threats; assisting in the identification, development, integration, and transition of material and non-material solutions for both offensive and defensive countermeasures; influencing culture to form a more innovative and adaptive force; and assessing, selecting, and training unit members.
AWG was designed to rapidly identify, develop, assess and disseminate solutions -- both physical products, and doctrinal improvements -- across the full spectrum of organizations in order to mitigate asymmetric vulnerabilities through first-hand observations and the deployment of civilian and military subject-matter experts directly into the field in-theater.
AWG in particular was tasked with countering the asymmetric threat of improvised explosive device (IED) proliferation against conventional and special operations forces. AWG would embed subject matter experts within combat units, observe both friendly and enemy tactics, techniques, and procedures (TTPs) and best practices, and provide advisory assistance and equipment improvement recommendations to mitigate the threat. AWG would then disseminate those best practices throughout the Army (more rapidly than through a traditional approval and publication process). AWG would also recommend "material solutions" -- either commercial off-the-shelf products, or modifications to existing military equipment, to mitigate these threats. For instance, in the counter-IED mission, AWG was tasked with solving challenges faced by Army units conducting route clearance patrols in Iraq, who were being exposed to IED risk when removing debris from roads; as a material solution, AWG developed tools such as "Iron Scrape" to rapidly clear the debris and remove the potential IED hiding spots. Similarly, another material solution called "Air Digger" was developed to allow Buffalo armored vehicles used on the route clearance patrols to clear dirt and debris from a suspected IED without triggering a detonation that would compromise forensic evidence (to track, locate, and neutralize the bomb maker).
AWG also regularly deployed in support of JCETs and assisted with the training of foreign SOF forces in foreign internal defense.
History
The U.S. Army Asymmetric Warfare Group (AWG) was charged with identifying Army and joint force capability gaps to DOTMLPF-P, and developing solutions to those gaps. It further seeks to identify enemy threats and develop methods to defeat those threats.
2016 marked the group's 10th anniversary. In January, 2006, the AWG was established as a Field Operating Agency under the operational control of the Deputy Chief of Staff, G-3/5/7, Headquarters, Department of the Army. The AWG was activated on March 8, 2006, at Fort Meade, MD. The AWG was assigned to the TRADOC on November 11, 2011 as a direct reporting unit to the commanding general. The assignment to TRADOC enabled enhanced cooperation with the Army Capabilities Integration Center, Combined Arms Center, and the Centers of Excellence. Since 2011, AWG had experienced a significant growth in Operational Advisory support missions, and activated its third Operational Squadron in 2013. With this enhanced capacity, AWG provided observations, analysis, and solution development into both the operational and institutional forces of the Army. AWG's operational advisors deployed globally to complex operating environments to understand the current and emerging challenges to anticipate the character of future conflict. While their focus was on assisting the operating force, they ensured that any lessons learned were passed to the institutional Army for long-term integration and to enhance the development of our generating force.
AWG conducted vulnerability assessments to identify security risks for the Army, bridged the skill and knowledge gap between special operations forces and the regular Army, and assisted the Army in developing and implementing the Army Learning Model through its unique instructional methods, known as Adaptive Soldier Leader Training and Education (ASLTE).
Assessed current and emerging threats: AWG determined friendly force vulnerabilities and enemy threat capabilities and then developed programs of instruction to raise situational awareness and understanding of identified problems. AWG then sought to integrate the programs of instruction into Combat Training Center (e.g., National Training Center) rotations, as well as in-theater Joint Reception, Staging, Onward Movement, and Integration (JRSOI) training. JRSOI is the process that transitions deploying or redeploying forces, consisting of personnel, equipment, and materiel, into forces capable of meeting the combatant commander's (CCDR's) operational requirements, or returns them to their parent organization or service. In addition, AWG evaluated engagement tactics, techniques, and procedures (TTPs) and worked with the Fires Center of Excellence to institutionalize its findings. AWG also created a compendium of Islamic State of Iraq and the Levant (ISIL) techniques and procedures, probable scenarios, and ways to counter them, which was distributed in theater and is available through the Center for Army Lessons Learned (CALL). AWG developed a similar product addressing Russian hybrid warfare, based on observations from Ukraine.
Understanding a Complex Operating Environment: through its support to Regionally Aligned Forces and Special Operations Forces, AWG identified the requirement to bring jungle skills back into the U.S. Army. AWG incorporated lessons learned into the 25th Infantry Division's Lightning Academy and created handbooks for CALL. In addition, observations and trend analysis from globally deployed operational advisors identified the requirement to raise awareness of tunneling and subterranean operations, and AWG created handbooks and references for distribution by CALL. AWG continuously deployed operational advisors in support of theater security cooperation plans and events to identify how the complex environment was changing around the world. After working with Combined Joint Task Force – Horn of Africa, AWG developed a handbook to inform future leaders and planners of the operational challenges a CJTF faces in an environment where the Department of Defense (DoD) is not the lead agency.
On October 2, 2020, it was announced that the Army planned to close the AWG by September 30, 2021. On May 13, 2021, the AWG was officially inactivated.
References
External links
U.S. Army Asymmetric Warfare Group command
United States Army Professional Writing Collection description of unit
Official U.S. Army Asymmetric Warfare Group website
STAND TO!
Groups of the United States Army
Warfare group | Asymmetric Warfare Group | [
"Physics"
] | 1,625 | [
"Symmetry",
"Asymmetry"
] |
7,684,634 | https://en.wikipedia.org/wiki/Factor-critical%20graph | In graph theory, a mathematical discipline, a factor-critical graph (or hypomatchable graph) is a graph with n vertices in which every induced subgraph of n − 1 vertices has a perfect matching. (A perfect matching in a graph is a subset of its edges with the property that each vertex of the graph is the endpoint of exactly one of the edges in the subset.)
A matching that covers all but one vertex of a graph is called a near-perfect matching. So equivalently, a factor-critical graph is a graph in which there are near-perfect matchings that avoid every possible vertex.
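To make the definition concrete, the following is a minimal, self-contained sketch in plain Python (hypothetical function names, brute force and exponential in the worst case, so only suitable for very small graphs): it deletes each vertex in turn and checks by backtracking whether the remaining vertices admit a perfect matching.

```python
def has_perfect_matching(vertices, edges):
    """Backtracking test for a perfect matching on the given vertex set."""
    vertices = sorted(vertices)
    if len(vertices) % 2 == 1:
        return False
    if not vertices:
        return True
    v = vertices[0]
    rest = vertices[1:]
    # Try to match v with each remaining neighbour, then recurse on the rest.
    for u in rest:
        if (v, u) in edges or (u, v) in edges:
            remaining = [w for w in rest if w != u]
            if has_perfect_matching(remaining, edges):
                return True
    return False

def is_factor_critical(vertices, edges):
    """A graph is factor-critical iff deleting any single vertex
    leaves a graph with a perfect matching."""
    for v in vertices:
        kept = [u for u in vertices if u != v]
        kept_edges = {(a, b) for (a, b) in edges if a != v and b != v}
        if not has_perfect_matching(kept, kept_edges):
            return False
    return True

# Example: the 5-cycle C5 is factor-critical; the 4-cycle C4 is not.
c5 = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)}
c4 = {(0, 1), (1, 2), (2, 3), (3, 0)}
print(is_factor_critical(range(5), c5))  # True
print(is_factor_critical(range(4), c4))  # False
```

On the 5-cycle this reports factor-criticality, while on the 4-cycle (an even number of vertices) it does not, in line with the examples below.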
Examples
Any odd-length cycle graph is factor-critical, as is any complete graph with an odd number of vertices. More generally, every Hamiltonian graph with an odd number of vertices is factor-critical. The friendship graphs (graphs formed by connecting a collection of triangles at a single common vertex) provide examples of graphs that are factor-critical but not Hamiltonian.
If a graph G is factor-critical, then so is the Mycielskian of G. For instance, the Grötzsch graph, the Mycielskian of a five-vertex cycle graph, is factor-critical.
Every 2-vertex-connected claw-free graph with an odd number of vertices is factor-critical. For instance, the 11-vertex graph formed by removing a vertex from the regular icosahedron (the graph of the gyroelongated pentagonal pyramid) is both 2-connected and claw-free, so it is factor-critical. This result follows directly from the more fundamental theorem that every connected claw-free graph with an even number of vertices has a perfect matching.
Characterizations
Factor-critical graphs may be characterized in several different ways, other than their definition as graphs in which each vertex deletion allows for a perfect matching:
Tibor Gallai proved that a graph is factor-critical if and only if it is connected and, for each vertex v of the graph, there exists a maximum matching that does not include v. It follows from these properties that the graph must have an odd number of vertices and that every maximum matching must match all but one vertex.
László Lovász proved that a graph is factor-critical if and only if it has an odd ear decomposition, a partition of its edges into a sequence of subgraphs, each of which is an odd-length path or cycle, with the first in the sequence being a cycle, each path in the sequence having both endpoints but no interior points on vertices in previous subgraphs, and each cycle other than the first in the sequence having exactly one vertex in previous subgraphs. For instance, the graph in the illustration may be partitioned in this way into a cycle of five edges and a path of three edges. In the case that a near-perfect matching of the factor-critical graph is also given, the ear decomposition can be chosen in such a way that each ear alternates between matched and unmatched edges.
A graph is also factor-critical if and only if it can be reduced to a single vertex by a sequence of contractions of odd-length cycles. Moreover, in this characterization, it is possible to choose each cycle in the sequence so that it contains the vertex formed by the contraction of the previous cycle. For instance, if one contracts the ears of an ear decomposition, in the order given by the decomposition, then at the time each ear is contracted it forms an odd cycle, so the ear decomposition characterization may be used to find a sequence of odd cycles to contract. Conversely from a sequence of odd cycle contractions, each containing the vertex formed from the previous contraction, one may form an ear decomposition in which the ears are the sets of edges contracted in each step.
Suppose that a graph G is given together with a choice of a vertex v and a matching M that covers all vertices other than v. Then G is factor-critical if and only if there is a set of paths in G, alternating between matched and unmatched edges, that connect v to each of the other vertices in G. Based on this property, it is possible to determine in linear time whether a graph with a given near-perfect matching is factor-critical.
Properties
Factor-critical graphs must always have an odd number of vertices, and must be 2-edge-connected (that is, they cannot have any bridges). However, they are not necessarily 2-vertex-connected; the friendship graphs provide a counterexample. It is not possible for a factor-critical graph to be bipartite, because in a bipartite graph with a near-perfect matching, the only vertices that can be deleted to produce a perfectly matchable graph are the ones on the larger side of the bipartition.
Every 2-vertex-connected factor-critical graph with m edges has at least m different near-perfect matchings, and more generally every factor-critical graph with m edges and c blocks (2-vertex-connected components) has at least m − c + 1 different near-perfect matchings. The graphs for which these bounds are tight may be characterized by having odd ear decompositions of a specific form.
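As a small sanity check of the first bound (a hedged sketch with hypothetical names, not taken from the cited sources): the 5-cycle is 2-vertex-connected and factor-critical with m = 5 edges, and a brute-force count finds exactly five near-perfect matchings, so the bound is met with equality there.

```python
from itertools import combinations

def count_near_perfect_matchings(n, edges):
    """Count matchings that cover all but exactly one of the n vertices
    (only sensible when n is odd). Brute force over edge subsets."""
    k = (n - 1) // 2          # a near-perfect matching has (n - 1) / 2 edges
    count = 0
    for subset in combinations(edges, k):
        covered = [v for e in subset for v in e]
        if len(set(covered)) == 2 * k:   # no two chosen edges share a vertex
            count += 1
    return count

c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(count_near_perfect_matchings(5, c5))  # 5, matching the m = 5 bound
```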
Any connected graph may be transformed into a factor-critical graph by contracting sufficiently many of its edges. The minimal sets of edges that need to be contracted to make a given graph factor-critical form the bases of a matroid, a fact that implies that a greedy algorithm may be used to find the minimum weight set of edges to contract to make a graph factor-critical, in polynomial time.
Applications
A blossom is a factor-critical subgraph of a larger graph. Blossoms play a key role in Jack Edmonds' algorithms for maximum matching and minimum weight perfect matching in non-bipartite graphs.
In polyhedral combinatorics, factor-critical graphs play an important role in describing facets of the matching polytope of a given graph.
Generalizations and related concepts
A graph G with n vertices is said to be k-factor-critical if every subset of n − k vertices has a perfect matching. Under this definition, a hypomatchable graph is 1-factor-critical. Even more generally, a graph is (k, r)-factor-critical if every subset of n − k vertices has an r-factor, that is, it is the vertex set of an r-regular subgraph of the given graph.
A critical graph (without qualification) is usually assumed to mean a graph for which removing each of its vertices reduces the number of colors it needs in a graph coloring. The concept of criticality has been used much more generally in graph theory to refer to graphs for which removing each possible vertex changes or does not change some relevant property of the graph. A matching-critical graph is a graph for which the removal of any vertex does not change the size of a maximum matching; by Gallai's characterization, the matching-critical graphs are exactly the graphs in which every connected component is factor-critical. The complement graph of a critical graph is necessarily matching-critical, a fact that was used by Gallai to prove lower bounds on the number of vertices in a critical graph.
Beyond graph theory, the concept of factor-criticality has been extended to matroids by defining a type of ear decomposition on matroids and defining a matroid to be factor-critical if it has an ear decomposition in which all ears are odd.
References
Graph families
Matching (graph theory) | Factor-critical graph | [
"Mathematics"
] | 1,483 | [
"Matching (graph theory)",
"Mathematical relations",
"Graph theory"
] |
7,685,023 | https://en.wikipedia.org/wiki/CLHEP | CLHEP (short for A Class Library for High Energy Physics) is a C++ library that provides utility classes for general numerical programming, vector arithmetic, geometry, pseudorandom number generation, and linear algebra, specifically targeted for high energy physics simulation and analysis software.
The project is hosted by CERN and currently managed by a collaboration of researchers from CERN and other physics research laboratories and academic institutions. According to the project's website, CLHEP is in maintenance mode (accepting bug fixes but no further development is expected).
CLHEP was proposed by Swedish physicist Leif Lönnblad in 1992 at a Conference on Computing in High-Energy Physics. Lönnblad is still involved in maintaining CLHEP.
The project has more recently accepted contributions from other projects built on top of CLHEP, including the physics packages Geant4 and ZOOM, and the BaBar experiment at SLAC.
See also
Geant4, a software using CLHEP
FreeHEP, a similar library to CLHEP
COLT, a Java package for High Performance Scientific and Technical Computing, provided by CERN.
References
External links
Project CLHEP website
CLHEP User Guide
CLHEP at CERN
Physics software
CERN software | CLHEP | [
"Physics"
] | 254 | [
"Physics software",
"Computational physics stubs",
"Computational physics"
] |
7,685,557 | https://en.wikipedia.org/wiki/ABBYY%20FineReader | ABBYY FineReader PDF is an optical character recognition (OCR) application developed by ABBYY. First released in 1993, the program runs on Microsoft Windows (Windows 7 or later) and Apple macOS (10.12 Sierra or later). Since v15, the Windows version can also edit PDF files.
Users can use the program to convert image documents (photos, scans, PDF files) and screen captures into editable file formats, including Microsoft Word, Microsoft Excel, Microsoft PowerPoint, Rich Text Format, HTML, PDF/A, searchable PDF, CSV and txt (plain text) files. Since Version 11, files can be saved in the DjVu format. Since Version 15, the program recognizes text in 192 languages and has a built-in spell check for 48 of them.
FineReader recognizes new characters in several ways. Users can train the app on characters, adding them to the recognition alphabet. Users can select characters from a list and add them to the alphabet of a selected language (for example, adding certain Icelandic characters to a German alphabet for a German text describing Iceland). Finally, users can add domain-specific vocabulary to FineReader's built-in lexicon.
The program also enables users to compare documents, add annotations and comments, and schedule batch processing.
There were more than 20 million users of ABBYY FineReader worldwide. ABBYY licenses the embedded OCR technology to various companies, including Fujitsu, Panasonic, Xerox, Plustek, and Samsung.
In February 2020, version 15 of the software was rated "Highest-quality OCR on the market" by PC Magazine.
References
External links
Official Linux website
Optical character recognition software
MacOS graphics-related software
MacOS text-related software
Windows graphics-related software
Windows text-related software
Linux text-related software
Graphics-related software for Linux
PDF software
PDF readers
Desktop publishing software
Automation software | ABBYY FineReader | [
"Engineering"
] | 393 | [
"Automation software",
"Automation"
] |
7,686,258 | https://en.wikipedia.org/wiki/Transmission%20solenoid | A transmission solenoid or cylinoid is an electro-hydraulic valve that controls fluid flow into and throughout an automatic transmission. Solenoids can be normally open or normally closed. They operate via a voltage or current supplied by the transmission computer or controller. Transmission solenoids are usually installed in a transmission valve body, transmission control unit, or transmission control module.
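As a rough illustration of how a controller sets an intermediate response with the pulse-width modulated type listed under Types below, the time-averaged coil current is simply the duty cycle times the full-on current. The sketch below uses invented example values, ignores coil inductance and hydraulic dynamics, and is not drawn from any particular transmission.

```python
def average_solenoid_current(supply_voltage, coil_resistance_ohms, duty_cycle):
    """Steady-state average current of a PWM-driven solenoid coil,
    treating the coil as purely resistive: I_avg = D * V / R."""
    full_on_current = supply_voltage / coil_resistance_ohms
    return duty_cycle * full_on_current

# Hypothetical numbers: 12 V supply, 5-ohm coil, 40% duty cycle.
print(average_solenoid_current(12.0, 5.0, 0.40))  # 0.96 A
```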
Types
Variable force solenoid
On-off solenoid
Pulse-width modulated solenoid
Low leak variable bleed solenoid
Manufacturers
American Axle
ZF
TREMEC
BorgWarner
Eaton
Bosch
Hilite Industries
Saturn Engineering and Electronics
TLX Technologies
References
Automotive transmission technologies
Valves | Transmission solenoid | [
"Physics",
"Chemistry"
] | 132 | [
"Physical systems",
"Valves",
"Hydraulics",
"Piping"
] |
7,687,767 | https://en.wikipedia.org/wiki/Forster%E2%80%93Decker%20method | The Forster–Decker method is a series of chemical reactions that have the effect of mono-alkylating a primary amine (1), forming a secondary amine (6). The process occurs by way of transient formation of an imine (3) that undergoes the actual alkylation reaction.
Process stages
Conversion of the primary amine to an imine (Schiff base) using an aldehyde.
Alkylation of the imine using an alkyl halide, forming an iminium ion.
Hydrolysis of the iminium, releasing the secondary amine and regenerating the aldehyde.
Because the actual alkylation occurs on the imine, over-alkylation is not possible. Therefore, this method does not suffer from side-reactions such as formation of tertiary amines as a simple SN2-type process can.
See also
Reductive amination
References
Substitution reactions
Name reactions | Forster–Decker method | [
"Chemistry"
] | 191 | [
"Name reactions",
"Chemical reaction stubs"
] |
13,961,495 | https://en.wikipedia.org/wiki/Infor%20XA | Infor XA is commercial ERP software used to control the operations of manufacturing companies. Its prior name, MAPICS, is an acronym for Manufacturing, Accounting and Production Information Control Systems. MAPICS was created by IBM. The product is now owned by Infor Global Solutions.
Originally, all MAPICS code ran only on IBM midrange systems such as the IBM System/34, System/36, System/38, and the IBM AS/400, and on succeeding versions of the platform - currently IBM i on IBM Power Systems. Early versions were written in IBM RPG, augmented with Control Language programs. IBM's version of SQL is also used with the operating system's integrated database, Db2 for i. More recent development efforts have added object-oriented components written in the Java programming language, which extend a portion of the XA product to servers running Java.
However, the Infor XA product still requires the IBM i operating system. The Java components provide an application runtime which allow user customizations, a rich user interface, an optional web-based interface as well as support for XML interfaces.
Timeline
See also
List of ERP software packages
List of ERP vendors
References
Industrial automation
ERP software
IBM software | Infor XA | [
"Engineering"
] | 237 | [
"Industrial automation",
"Automation",
"Industrial engineering"
] |
13,963,236 | https://en.wikipedia.org/wiki/Fesoterodine | Fesoterodine (INN, used as the fumarate under the brand name Toviaz) is an antimuscarinic drug developed by Schwarz Pharma AG to treat overactive bladder syndrome (OAB). It was approved by the European Medicines Agency in April 2007, the US Food and Drug Administration on October 31, 2008 and Health Canada on February 9, 2012.
Fesoterodine is a prodrug. It is broken down into its active metabolite, desfesoterodine, by plasma esterases.
Efficacy
Fesoterodine has the advantage of allowing more flexible dosing than other muscarinic antagonists. Its tolerability and side effects are similar to those of other muscarinic antagonists, and as a newer drug it seems unlikely to change treatment practices for overactive bladder substantially.
A Japanese study from 2017 showed that urgency and urge incontinence improved after 3 days of administration of the drug, with full efficacy able to be judged after 7 days of administration. Overactive bladder was found to be resolved in 88% of patients after seven days of use.
References
Diisopropylamino compounds
Drugs developed by Pfizer
Isobutyrate esters
M1 receptor antagonists
M2 receptor antagonists
M3 receptor antagonists
M4 receptor antagonists
M5 receptor antagonists
Phenol esters
Primary alcohols
Prodrugs | Fesoterodine | [
"Chemistry"
] | 287 | [
"Chemicals in medicine",
"Prodrugs"
] |
13,963,625 | https://en.wikipedia.org/wiki/Brabender%20plastograph | The plastograph, or Brabender plastograph, is a device for the continuous observation of torque in the shearing of a polymer with a range of temperatures and shear rates. The generic device records lubricity, plasticity, scorch, cure, shear, and heat stability.
Perhaps the most popular application of the plastograph is in baking, where it is known as a Farinograph.
It was designed by Carl Wilhelm Brabender and produced by Brabender Industries, founded in 1923.
External links
Official website
Polymers | Brabender plastograph | [
"Chemistry",
"Materials_science"
] | 117 | [
"Polymers",
"Polymer chemistry"
] |
13,966,153 | https://en.wikipedia.org/wiki/MANET%20database | The Molecular Ancestry Network (MANET) database is a bioinformatics database that maps evolutionary relationships of protein architectures directly onto biological networks. It was originally developed by Hee Shin Kim, Jay E. Mittenthal and Gustavo Caetano-Anolles in the Department of Crop Sciences of the University of Illinois at Urbana-Champaign.
MANET traces, for example, the ancestry of individual metabolic enzymes with bioinformatic, phylogenetic, and statistical methods. MANET currently links information in the Structural Classification of Proteins (SCOP) database, the metabolic pathways database of the Kyoto Encyclopedia of Genes and Genomes (KEGG), and phylogenetic reconstructions describing the evolution of protein fold architecture at a universal level. The database has been updated to reflect the evolution of metabolism at the level of protein fold families. MANET literally "paints" the ancestries of enzymes derived from rooted phylogenetic trees directly onto over one hundred metabolic pathway representations, paying homage to one of the fathers of impressionism. It also provides numerous functionalities that enable searching for specific protein folds with defined ancestry values, displaying the distribution of enzymes that are painted, and exploring quantitative details describing individual protein folds. This permits the study of global and local metabolic network architectures and the extraction of evolutionary patterns at global and local levels.
A statistical analysis of the data in MANET showed, for example, a patchy distribution of ancestry values assigned to protein folds in each subnetwork, indicating that the evolution of metabolism occurred globally by widespread recruitment of enzymes. MANET was recently used to sort out enzymatic recruitment processes in metabolic networks and to propose that modern metabolism originated in the purine nucleotide metabolic subnetwork. The database is useful for the study of metabolic evolution.
External links
Molecular Ancestry Network (MANET) database
References
Biochemistry databases
Metabolomic databases | MANET database | [
"Chemistry",
"Biology"
] | 365 | [
"Biochemistry",
"Biochemistry databases"
] |