id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
4,145,225 | https://en.wikipedia.org/wiki/Great%20disnub%20dirhombidodecahedron | In geometry, the great disnub dirhombidodecahedron, also called Skilling's figure, is a degenerate uniform star polyhedron.
It was proven in 1970 that there are only 75 uniform polyhedra other than the infinite families of prisms and antiprisms. John Skilling discovered another degenerate example, the great disnub dirhombidodecahedron, by relaxing the condition that edges must be single. More precisely, he allowed any even number of faces to meet at each edge, as long as the set of faces couldn't be separated into two connected sets (Skilling, 1975). Due to its geometric realization having some double edges where 4 faces meet, it is considered a degenerate uniform polyhedron but not strictly a uniform polyhedron.
The number of edges is ambiguous, because the underlying abstract polyhedron has 360 edges, but 120 pairs of these have the same image in the geometric realization, so that the geometric realization has 120 single edges and 120 double edges where 4 faces meet, for a total of 240 edges. The Euler characteristic of the abstract polyhedron is −96. If the pairs of coinciding edges in the geometric realization are considered to be single edges, then it has only 240 edges and Euler characteristic 24.
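A quick numeric check of these two Euler characteristics, assuming the standard tallies for Skilling's figure of 60 vertices and 204 faces (120 triangles, 60 squares and 24 pentagrams), which are not stated explicitly above:

```python
# Sanity check of the Euler characteristics quoted above, assuming the
# standard counts for Skilling's figure: 60 vertices and 204 faces
# (120 triangles + 60 squares + 24 pentagrams).
vertices = 60
faces = 120 + 60 + 24          # 204 faces in total

abstract_edges = 360           # edges of the underlying abstract polyhedron
realized_edges = 240           # 120 single edges + 120 coincident (double) edges

euler = lambda v, e, f: v - e + f

print(euler(vertices, abstract_edges, faces))   # -96, the abstract polyhedron
print(euler(vertices, realized_edges, faces))   #  24, counting coincident edges once
```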
The vertex figure has 4 square faces passing through the center of the model.
It may be constructed as the exclusive or (blend) of the great dirhombicosidodecahedron and compound of twenty octahedra.
Related polyhedra
It shares the same edge arrangement as the great dirhombicosidodecahedron, but has a different set of triangular faces. The vertices and edges are also shared with the uniform compounds of twenty octahedra or twenty tetrahemihexahedra. 180 of the edges are shared with the great snub dodecicosidodecahedron.
Dual polyhedron
The dual of the great disnub dirhombidodecahedron is called the great disnub dirhombidodecacron. It is a nonconvex infinite isohedral polyhedron.
Like the visually identical great dirhombicosidodecacron in Magnus Wenninger's Dual Models, it is represented with intersecting infinite prisms passing through the model center, cut off at a certain point that is convenient for the maker. Wenninger suggested these figures are members of a new class of stellation polyhedra, called stellation to infinity. However, he also acknowledged that strictly speaking they are not polyhedra because their construction does not conform to the usual definitions.
Gallery
See also
List of uniform polyhedra
References
Skilling, J. (1975). "The complete set of uniform polyhedra". Philosophical Transactions of the Royal Society of London, Series A. 278: 111–135.
http://www.software3d.com/MillersMonster.php
External links
http://www.orchidpalms.com/polyhedra/uniform/skilling.htm
http://www.georgehart.com/virtual-polyhedra/great_disnub_dirhombidodecahedron.html
Uniform polyhedra | Great disnub dirhombidodecahedron | [
"Physics"
] | 631 | [
"Uniform polytopes",
"Uniform polyhedra",
"Symmetry"
] |
4,145,551 | https://en.wikipedia.org/wiki/Iron%20fertilization | Iron fertilization is the intentional introduction of iron-containing compounds (like iron sulfate) to iron-poor areas of the ocean surface to stimulate phytoplankton production. This is intended to enhance biological productivity and/or accelerate carbon dioxide (CO2) sequestration from the atmosphere. Iron is a trace element necessary for photosynthesis in plants. It is highly insoluble in sea water and in a variety of locations is the limiting nutrient for phytoplankton growth. Large algal blooms can be created by supplying iron to iron-deficient ocean waters. These blooms can nourish other organisms.
Ocean iron fertilization is an example of a geoengineering technique. Iron fertilization attempts to encourage phytoplankton growth, which removes carbon from the atmosphere for at least a period of time. This technique is controversial because there is limited understanding of its complete effects on the marine ecosystem, including side effects and possibly large deviations from expected behavior. Such effects potentially include release of nitrogen oxides and disruption of the ocean's nutrient balance. Controversy remains over the effectiveness of atmospheric CO2 sequestration and over ecological effects. Since 1990, 13 major large-scale experiments have been carried out to evaluate the efficiency and possible consequences of iron fertilization in ocean waters. A study in 2017 concluded that the method was unproven: the sequestering efficiency was low, sometimes no effect was seen, and the amount of iron needed to make even a small cut in carbon emissions would be on the order of millions of tons per year. However, since 2021 interest in the potential of iron fertilization has been renewed, among other things by a white paper from NOAA, the US National Oceanic and Atmospheric Administration, which rated iron fertilization as having "moderate potential for cost, scalability and how long carbon might be stored compared to other marine sequestration ideas".
Approximately 25 per cent of the ocean surface has ample macronutrients, with little plant biomass (as defined by chlorophyll). The production in these high-nutrient low-chlorophyll (HNLC) waters is primarily limited by micronutrients, especially iron. The cost of distributing iron over large ocean areas is large compared with the expected value of carbon credits. Research in the early 2020s suggested that it could only permanently sequester a small amount of carbon.
Process
Role of iron in carbon sequestration
Ocean iron fertilization is an example of a geoengineering technique that involves the intentional introduction of iron-rich deposits into oceans, and is aimed at enhancing the biological productivity of organisms in ocean waters in order to increase carbon dioxide (CO2) uptake from the atmosphere, possibly mitigating its global warming effects. Iron is a trace element in the ocean and its presence is vital for photosynthesis in plants, and in particular phytoplankton, as it has been shown that iron deficiency can limit ocean productivity and phytoplankton growth. For this reason, the "iron hypothesis" was put forward by Martin in the late 1980s, suggesting that changes in the iron supply to iron-deficient seawater could trigger plankton blooms and have a significant effect on the concentration of atmospheric carbon dioxide by altering rates of carbon sequestration. In fact, fertilization is an important process that occurs naturally in ocean waters. For instance, upwellings of ocean currents can bring nutrient-rich sediments to the surface. Another example is the transfer of iron-rich minerals, dust, and volcanic ash over long distances by rivers, glaciers, or wind. Moreover, it has been suggested that whales can transfer iron to the surface waters, where plankton can take it up to grow. It has been shown that the reduction in the number of sperm whales in the Southern Ocean has resulted in a 200,000 tonnes/yr decrease in atmospheric carbon uptake, possibly due to limited phytoplankton growth.
Carbon sequestration by phytoplankton
Phytoplankton are photosynthetic: they need sunlight and nutrients to grow, and take up carbon dioxide in the process. Plankton can take up and sequester atmospheric carbon by generating calcium carbonate or silica skeletons. When these organisms die they sink to the ocean floor, where their skeletons can form a major component of the carbon-rich deep-sea precipitation, thousands of meters below plankton blooms, known as marine snow. Nonetheless, by definition, carbon is only considered "sequestered" when it is deposited on the ocean floor, where it can be retained for millions of years. However, most of the carbon-rich biomass generated by plankton is generally consumed by other organisms (small fish, zooplankton, etc.), and a substantial part of the remaining deposits that sink beneath plankton blooms may be re-dissolved in the water and transported back to the surface, where the carbon eventually returns to the atmosphere, nullifying any intended sequestration effect. Nevertheless, supporters of the idea of iron fertilization believe that carbon sequestration should be re-defined over much shorter time frames and claim that, since the carbon is suspended in the deep ocean, it is effectively isolated from the atmosphere for hundreds of years, and thus can be effectively sequestered.
Efficiency and concerns
Assuming ideal conditions, the upper estimate for the possible effect of iron fertilization in slowing global warming is about 0.3 W/m2 of globally averaged negative forcing, which could offset roughly 15–20% of current anthropogenic emissions. However, although this approach could be viewed as an easy option to lower the concentration of CO2 in the atmosphere, ocean iron fertilization is still quite controversial and highly debated due to possible negative consequences for marine ecosystems. Research in this area has suggested that fertilization through the deposition of large quantities of iron-rich dust into the ocean can significantly disrupt the ocean's nutrient balance and cause major complications in the food chain for other marine organisms.
Methods
There are two ways of performing artificial iron fertilization: ship-based deployment directly into the ocean and atmospheric deployment.
Ship based deployment
Trials of ocean fertilization using iron sulphate added directly to the surface water from ships are described in detail in the experiment section below.
Atmospheric sourcing
Iron-rich dust rising into the atmosphere is a primary source of ocean iron fertilization. For example, wind blown dust from the Sahara desert fertilizes the Atlantic Ocean and the Amazon rainforest. The naturally occurring iron oxide in atmospheric dust reacts with hydrogen chloride from sea spray to produce iron chloride, which degrades methane and other greenhouse gases, brightens clouds and eventually falls with the rain in low concentration across a wide area of the globe. Unlike ship based deployment, no trials have been performed of increasing the natural level of atmospheric iron. Expanding this atmospheric source of iron could complement ship-based deployment.
One proposal is to boost the atmospheric iron level with iron salt aerosol. Iron(III) chloride added to the troposphere could increase natural cooling effects including methane removal, cloud brightening and ocean fertilization, helping to prevent or reverse global warming.
Experiments
Martin hypothesized that increasing phytoplankton photosynthesis could slow or even reverse global warming by sequestering CO2 in the sea. He died shortly thereafter during preparations for Ironex I, a proof-of-concept research voyage, which was successfully carried out near the Galapagos Islands in 1993 by his colleagues at Moss Landing Marine Laboratories. Thereafter 12 international ocean studies examined the phenomenon:
Ironex II, 1995
SOIREE (Southern Ocean Iron Release Experiment), 1999
EisenEx (Iron Experiment), 2000
SEEDS (Subarctic Pacific Iron Experiment for Ecosystem Dynamics Study), 2001
SOFeX (Southern Ocean Iron Experiments - North & South), 2002
SERIES (Subarctic Ecosystem Response to Iron Enrichment Study), 2002
SEEDS-II, 2004
EIFEX (European Iron Fertilization Experiment), 2004. This successful experiment, conducted in a mesoscale ocean eddy in the South Atlantic, resulted in a bloom of diatoms, a large portion of which died and sank to the ocean floor when fertilization ended. In contrast to the LOHAFEX experiment, also conducted in a mesoscale eddy, the ocean in the selected area contained enough dissolved silicon for the diatoms to flourish.
CROZEX (CROZet natural iron bloom and Export experiment), 2005
A pilot project planned by Planktos, a U.S. company, was cancelled in 2008 for lack of funding. The company blamed environmental organizations for the failure.
LOHAFEX (Indian and German Iron Fertilization Experiment), 2009. Despite widespread opposition to LOHAFEX, on 26 January 2009 the German Federal Ministry of Education and Research (BMBF) gave clearance. The experiment was carried out in waters low in silicic acid, an essential nutrient for diatom growth, which affected sequestration efficacy. A portion of the southwest Atlantic was fertilized with iron sulfate, triggering a large phytoplankton bloom. In the absence of diatoms, a relatively small amount of carbon was sequestered, because other phytoplankton are vulnerable to predation by zooplankton and do not sink rapidly upon death. These poor sequestration results led to suggestions that fertilization is not an effective carbon mitigation strategy in general. However, prior ocean fertilization experiments in high-silica locations revealed much higher carbon sequestration rates because of diatom growth. LOHAFEX confirmed that sequestration potential depends strongly on appropriate siting.
Haida Salmon Restoration Corporation (HSRC), 2012 - funded by the Old Massett Haida band and managed by Russ George - dumped 100 tonnes of iron sulphate into the Pacific, into an eddy west of the islands of Haida Gwaii. This resulted in increased algae growth over a wide area. Critics alleged that George's actions violated the United Nations Convention on Biological Diversity (CBD) and the London Convention on the dumping of wastes at sea, which prohibited such geoengineering experiments. On 15 July 2014, the resulting scientific data was made available to the public.
John Martin, director of the Moss Landing Marine Laboratories, hypothesized that the low levels of phytoplankton in these regions are due to a lack of iron. In 1989 he tested this hypothesis (known as the Iron Hypothesis) by an experiment using samples of clean water from Antarctica. Iron was added to some of these samples. After several days the phytoplankton in the samples with iron fertilization grew much more than in the untreated samples. This led Martin to speculate that increased iron concentrations in the oceans could partly explain past ice ages.
IRONEX I
This experiment was followed by a larger field experiment (IRONEX I) where 445 kg of iron was added to a patch of ocean near the Galápagos Islands. The levels of phytoplankton increased three times in the experimental area. The success of this experiment and others led to proposals to use this technique to remove carbon dioxide from the atmosphere.
EisenEx
In 2000 and 2004, iron sulfate was discharged during the EisenEx and EIFEX experiments. 10 to 20 percent of the resulting algal bloom died and sank to the sea floor.
Commercial projects
Planktos was a US company that abandoned its plans to conduct 6 iron fertilization cruises from 2007 to 2009, each of which would have dissolved up to 100 tons of iron over a 10,000 km2 area of ocean. Their ship Weatherbird II was refused entry to the port of Las Palmas in the Canary Islands where it was to take on provisions and scientific equipment.
In 2007 commercial companies such as Climos and GreenSea Ventures and the Australian-based Ocean Nourishment Corporation, planned to engage in fertilization projects. These companies invited green co-sponsors to finance their activities in return for provision of carbon credits to offset investors' CO2 emissions.
LOHAFEX
LOHAFEX was an experiment initiated by the German Federal Ministry of Research and carried out by the German Alfred Wegener Institute (AWI) in 2009 to study fertilization in the South Atlantic. India was also involved.
As part of the experiment, the German research vessel Polarstern deposited 6 tons of ferrous sulfate in an area of 300 square kilometers. It was expected that the material would distribute through the upper layer of water and trigger an algal bloom. A significant part of the carbon dioxide dissolved in sea water would then be bound by the emerging bloom and sink to the ocean floor.
The Federal Environment Ministry called for the experiment to halt, partly because environmentalists predicted damage to marine plants. Others predicted long-term effects that would not be detectable during short-term observation or that this would encourage large-scale ecosystem manipulation.
2012
A 2012 study deposited iron fertilizer in an eddy near Antarctica. The resulting algal bloom sent a significant amount of carbon into the deep ocean, where it was expected to remain for centuries to millennia. The eddy was chosen because it offered a largely self-contained test system.
As of day 24, nutrients, including nitrogen, phosphorus and silicic acid that diatoms use to construct their shells, declined. Dissolved inorganic carbon concentrations were reduced below equilibrium with atmospheric CO2. In surface water, particulate organic matter (algal remains) including silica and chlorophyll increased.
After day 24, however, the particulate matter sank toward the ocean floor. Each iron atom converted at least 13,000 carbon atoms into algae. At least half of the organic matter sank far below the surface layer.
Haida Gwaii project
In July 2012, the Haida Salmon Restoration Corporation dispersed 100 tonnes of iron sulphate dust into the Pacific Ocean several hundred miles west of the islands of Haida Gwaii. The Old Massett Village Council financed the action as a salmon enhancement project with $2.5 million in village funds. The concept was that the formerly iron-deficient waters would produce more phytoplankton that would in turn serve as a "pasture" to feed salmon. Then-CEO Russ George hoped to sell carbon offsets to recover the costs. The project was accompanied by charges of unscientific procedures and recklessness. George contended that 100 tons was negligible compared to what naturally enters the ocean.
Some environmentalists called the dumping a "blatant violation" of two international moratoria. George said that the Old Massett Village Council and its lawyers approved the effort and at least seven Canadian agencies were aware of it.
According to George, the 2013 salmon runs increased from 50 million to 226 million fish. However, many experts contend that changes in fishery stocks since 2012 cannot necessarily be attributed to the 2012 iron fertilization; many factors contribute to predictive models, and most data from the experiment are considered to be of questionable scientific value.
On 15 July 2014, the data gathered during the project were made publicly available under the ODbL license.
Experiments with iron-coated rice husks in Arabian Sea
In 2022, a UK/India research team planned to place iron-coated rice husks in the Arabian Sea, to test whether increasing time at the surface can stimulate a bloom using less iron. The iron was to be confined within a plastic bag reaching from the surface several kilometers down to the sea bottom. The Centre for Climate Repair at the University of Cambridge, along with India's Institute of Maritime Studies, assessed the impact of iron seeding in another experiment. They spread iron-coated rice husks across an area of the Arabian Sea. Iron is a limiting nutrient in many ocean waters. They hoped that the iron would fertilize algae, which would bolster the bottom of the marine food chain and sequester carbon as uneaten algae died. The experiment was destroyed by a storm, leaving inconclusive results.
Science
The maximum possible result from iron fertilization, assuming the most favourable conditions and disregarding practical considerations, is 0.29 W/m2 of globally averaged negative forcing, offsetting 1/6 of current levels of anthropogenic emissions. These benefits have been called into question by research suggesting that fertilization with iron may deplete other essential nutrients in the seawater causing reduced phytoplankton growth elsewhere — in other words, that iron concentrations limit growth more locally than they do on a global scale.
Ocean fertilization occurs naturally when upwellings bring nutrient-rich water to the surface, as occurs when ocean currents meet an ocean bank or a sea mount. This form of fertilization produces the world's largest marine habitats. Fertilization can also occur when weather carries wind blown dust long distances over the ocean, or iron-rich minerals are carried into the ocean by glaciers, rivers and icebergs.
Role of iron
About 70% of the world's surface is covered in oceans. The part of these where light can penetrate is inhabited by algae (and other marine life). In some oceans, algae growth and reproduction is limited by the amount of iron. Iron is a vital micronutrient for phytoplankton growth and photosynthesis that has historically been delivered to the pelagic sea by dust storms from arid lands. This Aeolian dust contains 3–5% iron and its deposition has fallen nearly 25% in recent decades.
The Redfield ratio describes the relative atomic concentrations of critical nutrients in plankton biomass and is conventionally written "106 C: 16 N: 1 P." This expresses the fact that one atom of phosphorus and 16 of nitrogen are required to "fix" 106 carbon atoms (or 106 molecules of CO2). Research expanded this constant to "106 C: 16 N: 1 P: .001 Fe", signifying that in iron-deficient conditions each atom of iron can fix 106,000 atoms of carbon, or, on a mass basis, each kilogram of iron can fix 83,000 kg of carbon dioxide. The 2004 EIFEX experiment reported a carbon dioxide to iron export ratio of nearly 3000 to 1. The atomic ratio would be approximately: "3000 C: 58,000 N: 3,600 P: 1 Fe".
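The mass-basis figure follows directly from the extended ratio and the atomic masses of iron and carbon dioxide; a minimal sketch of the arithmetic (the molar masses are assumptions supplied here, not values from the text):

```python
# Rough check of the "1 kg Fe fixes ~83,000 kg CO2" figure implied by the
# extended Redfield ratio 106 C : 16 N : 1 P : 0.001 Fe.
C_PER_FE_ATOMS = 106 / 0.001          # ~106,000 carbon atoms fixed per iron atom

M_FE = 55.845                          # g/mol, iron (assumed standard value)
M_CO2 = 44.01                          # g/mol, carbon dioxide (assumed standard value)

# kilograms of CO2 fixed per kilogram of iron (the gram units cancel)
co2_per_kg_fe = C_PER_FE_ATOMS * M_CO2 / M_FE
print(round(co2_per_kg_fe))            # about 83,500, consistent with the ~83,000 kg quoted
```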
Therefore, small amounts of iron (measured by mass parts per trillion) in HNLC zones can trigger large phytoplankton blooms on the order of 100,000 kilograms of plankton per kilogram of iron. The size of the iron particles is critical. Particles of 0.5–1 micrometer or less seem to be ideal both in terms of sink rate and bioavailability. Particles this small are easier for cyanobacteria and other phytoplankton to incorporate and the churning of surface waters keeps them in the euphotic or sunlit biologically active depths without sinking for long periods. One way to add small amounts of iron to HNLC zones would be Atmospheric Methane Removal.
Atmospheric deposition is an important iron source. Satellite images and data (such as PODLER, MODIS, MSIR) combined with back-trajectory analyses identified natural sources of iron–containing dust. Iron-bearing dusts erode from soil and are transported by wind. Although most dust sources are situated in the Northern Hemisphere, the largest dust sources are located in northern and southern Africa, North America, central Asia and Australia.
Heterogeneous chemical reactions in the atmosphere modify the speciation of iron in dust and may affect the bioavailability of deposited iron. The soluble form of iron is much higher in aerosols than in soil (~0.5%). Several photo-chemical interactions with dissolved organic acids increase iron solubility in aerosols. Among these, photochemical reduction of oxalate-bound Fe(III) from iron-containing minerals is important. The organic ligand forms a surface complex with the Fe (III) metal center of an iron-containing mineral (such as hematite or goethite). On exposure to solar radiation the complex is converted to an excited energy state in which the ligand, acting as bridge and an electron donor, supplies an electron to Fe(III) producing soluble Fe(II). Consistent with this, studies documented a distinct diel variation in the concentrations of Fe (II) and Fe(III) in which daytime Fe(II) concentrations exceed those of Fe(III).
Volcanic ash as an iron source
Volcanic ash has a significant role in supplying the world's oceans with iron. Volcanic ash is composed of glass shards, pyrogenic minerals, lithic particles and other forms of ash that release nutrients at different rates depending on structure and the type of reaction caused by contact with water.
Increases of biogenic opal in the sediment record are associated with increased iron accumulation over the last million years. In August 2008, an eruption in the Aleutian Islands deposited ash in the nutrient-limited Northeast Pacific. This ash and iron deposition resulted in one of the largest phytoplankton blooms observed in the subarctic.
Carbon sequestration
Previous instances of biological carbon sequestration triggered major climatic changes, lowering the temperature of the planet, such as the Azolla event. Plankton that generate calcium carbonate or silica skeletons, such as diatoms, coccolithophores and foraminifera, account for most direct sequestration. When these organisms die their skeletons sink relatively quickly and form a major component of the carbon-rich deep sea precipitation known as marine snow. Marine snow also includes fish fecal pellets and other organic detritus, and steadily falls thousands of meters below active plankton blooms.
Of the carbon-rich biomass generated by plankton blooms, half (or more) is generally consumed by grazing organisms (zooplankton, krill, small fish, etc.), but 20 to 30% sinks into the colder water strata below the thermocline. Much of this fixed carbon continues into the abyss, but a substantial percentage is redissolved and remineralized. At this depth, however, this carbon is now suspended in deep currents and effectively isolated from the atmosphere for centuries.
Analysis and quantification
Evaluation of the biological effects and verification of the amount of carbon actually sequestered by any particular bloom involves a variety of measurements, combining ship-borne and remote sampling, submarine filtration traps, tracking buoy spectroscopy and satellite telemetry. Unpredictable ocean currents can remove experimental iron patches from the pelagic zone, invalidating the experiment.
The potential of fertilization to tackle global warming is illustrated by the following figures. If phytoplankton converted all the nitrate and phosphate present in the surface mixed layer across the entire Antarctic circumpolar current into organic carbon, the resulting carbon dioxide deficit could be compensated by uptake from the atmosphere amounting to about 0.8 to 1.4 gigatonnes of carbon per year. This quantity is comparable in magnitude to annual anthropogenic emissions from fossil fuel combustion of approximately 6 gigatonnes of carbon. The Antarctic circumpolar current region is one of several in which iron fertilization could be conducted, with the Galápagos Islands area being another potentially suitable location.
Dimethyl sulfide and clouds
Some species of plankton produce dimethyl sulfide (DMS), a portion of which enters the atmosphere where it is oxidized by hydroxyl radicals (OH), atomic chlorine (Cl) and bromine monoxide (BrO) to form sulfate particles, and potentially increase cloud cover. This may increase the albedo of the planet and so cause cooling—this proposed mechanism is central to the CLAW hypothesis. This is one of the examples used by James Lovelock to illustrate his Gaia hypothesis.
During SOFeX, DMS concentrations increased by a factor of four inside the fertilized patch. Wide-scale iron fertilization of the Southern Ocean could lead to significant sulfur-triggered cooling in addition to that due to CO2 uptake and that due to the ocean's albedo increase; however, the amount of cooling by this particular effect is very uncertain.
Financial opportunities
Beginning with the Kyoto Protocol, several countries and the European Union established carbon offset markets which trade certified emission reduction credits (CERs) and other types of carbon credit instruments. In 2007 CERs sold for approximately €15–20/ton CO2. Iron fertilization is relatively inexpensive compared to scrubbing, direct injection and other industrial approaches, and can theoretically sequester CO2 for less than €5/ton, creating a substantial return. In August 2010, Russia established a minimum price of €10/ton for offsets to reduce uncertainty for offset providers. Scientists have reported a 6–12% decline in global plankton production since 1980. A full-scale plankton restoration program could regenerate approximately 3–5 billion tons of sequestration capacity worth €50–100 billion in carbon offset value. However, a 2013 study indicates the cost versus benefits of iron fertilization puts it behind carbon capture and storage and carbon taxes.
Debate
While ocean iron fertilization could represent a potent means to slow global warming, there is ongoing debate about the efficacy of this strategy and its potential adverse effects.
Precautionary principle
The precautionary principle (PP) is a proposed guideline regarding environmental conservation. According to an article published in 2021, "The PP means that when it is scientifically plausible that human activities may lead to morally unacceptable harm, actions shall be taken to avoid or diminish that harm: uncertainty should not be an excuse to delay action." Based on this principle, and because there is little data quantifying the effects of iron fertilization, it is the responsibility of leaders in this field to avoid the harmful effects of this procedure. This school of thought is one argument against using iron fertilization on a wide scale, at least until more data are available to analyze its repercussions.
Ecological issues
Critics are concerned that fertilization will create harmful algal blooms (HABs), as many toxic algae are favored when iron is deposited into the marine ecosystem. A 2010 study of iron fertilization in an oceanic high-nitrate, low-chlorophyll environment, however, found that fertilized Pseudo-nitzschia diatom spp., which are generally nontoxic in the open ocean, began producing toxic levels of domoic acid. Even short-lived blooms containing such toxins could have detrimental effects on marine food webs. Most species of phytoplankton are harmless or beneficial, given that they constitute the base of the marine food chain. Fertilization increases phytoplankton only in the open oceans (far from shore) where iron deficiency is substantial. Most coastal waters are replete with iron, and adding more has no useful effect. Further, it has been shown that there are often higher mineralization rates with iron fertilization, leading to a turnover of the plankton masses that are produced. This results in no beneficial effects and actually causes an increase in CO2.
Finally, a 2010 study showed that iron enrichment stimulates toxic diatom production in high-nitrate, low-chlorophyll areas which, the authors argue, raises "serious concerns over the net benefit and sustainability of large-scale iron fertilizations". Nitrogen released by cetaceans and iron chelate are a significant benefit to the marine food chain in addition to sequestering carbon for long periods of time.
Ocean acidification
A 2009 study tested the potential of iron fertilization to reduce both atmospheric CO2 and ocean acidity using a global ocean carbon model. The study found that, "Our simulations show that ocean iron fertilization, even in the extreme scenario by depleting global surface macronutrient concentration to zero at all time, has a minor effect on mitigating CO2-induced acidification at the surface ocean." Unfortunately, the impact on ocean acidification would likely not change due to the low effects that iron fertilization has on CO2 levels.
History
Consideration of iron's importance to phytoplankton growth and photosynthesis dates to the 1930s, when the British marine biologist Thomas John Hart, working in the Southern Ocean, speculated - in "On the phytoplankton of the South-West Atlantic and Bellingshausen Sea, 1929-31" - that the great "desolate zones" (areas apparently rich in nutrients, but lacking in phytoplankton activity or other sea life) might be iron-deficient. Hart returned to this issue in a 1942 paper entitled "Phytoplankton periodicity in Antarctic surface waters", but little other scientific discussion was recorded until the 1980s, when oceanographer John Martin of the Moss Landing Marine Laboratories renewed controversy on the topic with his marine water nutrient analyses. His studies supported Hart's hypothesis. These "desolate" regions came to be called "high-nutrient, low-chlorophyll regions" (HNLC).
John Gribbin was the first scientist to publicly suggest that climate change could be reduced by adding large amounts of soluble iron to the oceans. Martin's 1988 quip four months later at Woods Hole Oceanographic Institution, "Give me a half a tanker of iron and I will give you an ice age," drove a decade of research.
The findings suggested that iron deficiency was limiting ocean productivity and offered an approach to mitigating climate change as well. Perhaps the most dramatic support for Martin's hypothesis came with the 1991 eruption of Mount Pinatubo in the Philippines. Environmental scientist Andrew Watson analyzed global data from that eruption and calculated that it deposited approximately 40,000 tons of iron dust into oceans worldwide. This single fertilization event preceded an easily observed global decline in atmospheric CO2 and a parallel pulsed increase in oxygen levels.
The parties to the London Dumping Convention adopted a non-binding resolution in 2008 on fertilization (labeled LC-LP.1(2008)). The resolution states that ocean fertilization activities, other than legitimate scientific research, "should be considered as contrary to the aims of the Convention and Protocol and do not currently qualify for any exemption from the definition of dumping". An Assessment Framework for Scientific Research Involving Ocean Fertilization, regulating the dumping of wastes at sea (labeled LC-LP.2(2010)) was adopted by the Contracting Parties to the Convention in October 2010 (LC 32/LP 5).
Multiple ocean labs, scientists and businesses have explored fertilization. Beginning in 1993, thirteen research teams completed ocean trials demonstrating that phytoplankton blooms can be stimulated by iron augmentation. Controversy remains over the effectiveness of atmospheric CO2 sequestration and over ecological effects. Trials of ocean iron fertilization took place in 2009 in the South Atlantic by project LOHAFEX, and in July 2012 in the North Pacific off the coast of British Columbia, Canada, by the Haida Salmon Restoration Corporation (HSRC).
See also
Carbon dioxide sink
Iron chelate
Ocean pipes
Liebig's law of the minimum
Iron cycle
References
Aquatic ecology
Planetary engineering
Climate engineering
Carbon dioxide removal
Climate change policy
Ecological restoration | Iron fertilization | [
"Chemistry",
"Engineering",
"Biology"
] | 6,405 | [
"Planetary engineering",
"Ecological restoration",
"Geoengineering",
"Ecosystems",
"Environmental engineering",
"Aquatic ecology"
] |
4,145,906 | https://en.wikipedia.org/wiki/Corpuscularianism | Corpuscularianism, also known as corpuscularism (), is a set of theories that explain natural transformations as a result of the interaction of particles (minima naturalia, partes exiles, partes parvae, particulae, and semina). It differs from atomism in that corpuscles are usually endowed with a property of their own and are further divisible, while atoms are neither. Although often associated with the emergence of early modern mechanical philosophy, and especially with the names of Thomas Hobbes, René Descartes, Pierre Gassendi, Robert Boyle, Isaac Newton, and John Locke, corpuscularian theories can be found throughout the history of Western philosophy.
Overview
Corpuscles vs. atoms
Corpuscularianism is similar to the theory of atomism, except that where atoms were supposed to be indivisible, corpuscles could in principle be divided. In this manner, for example, it was theorized that mercury could penetrate into metals and modify their inner structure, a step on the way towards the production of gold by transmutation.
Perceived vs. real properties
Corpuscularianism was associated by its leading proponents with the idea that some of the apparent properties of objects are artifacts of the perceiving mind, that is, "secondary" qualities as distinguished from "primary" qualities. Corpuscles were thought to be unobservable and to have a very limited number of basic properties, such as size, shape, and motion.
Thomas Hobbes
The philosopher Thomas Hobbes used corpuscularianism to justify his political theories in Leviathan. It was used by Newton in his development of the corpuscular theory of light, while Boyle used it to develop his mechanical corpuscular philosophy, which laid the foundations for the Chemical Revolution.
Robert Boyle
Corpuscularianism remained a dominant theory for centuries and was blended with alchemy by early scientists such as Robert Boyle and Isaac Newton in the 17th century. In his work The Sceptical Chymist (1661), Boyle abandoned the Aristotelian ideas of the classical elements—earth, water, air, and fire—in favor of corpuscularianism. In his later work, The Origin of Forms and Qualities (1666), Boyle used corpuscularianism to explain all of the major Aristotelian concepts, marking a departure from traditional Aristotelianism.
Light corpuscles
Alchemical corpuscularianism
William R. Newman traces the origins of corpuscularianism to the fourth book of Aristotle's Meteorology. The "dry" and "moist" exhalations of Aristotle became the alchemical 'sulfur' and 'mercury' of the eighth-century Islamic alchemist, Jābir ibn Hayyān (died c. 806–816). Pseudo-Geber's Summa perfectionis contains an alchemical theory in which unified sulfur and mercury corpuscles, differing in purity, size, and relative proportions, form the basis of a much more complicated process.
Importance to the development of modern scientific theory
Several of the principles which corpuscularianism proposed became tenets of modern chemistry.
The idea that compounds can have secondary properties that differ from the properties of the elements which are combined to make them became the basis of molecular chemistry.
The idea that the same elements can be predictably combined in different ratios using different methods to create compounds with radically different properties became the basis of stoichiometry, crystallography, and established studies of chemical synthesis.
The ability of chemical processes to alter the composition of an object without significantly altering its form is the basis of fossil theory via mineralization and the understanding of numerous metallurgical, biological, and geological processes.
See also
Atomic theory
Atomism
Classical element
History of chemistry
References
Bibliography
Further reading
Atomism
History of chemistry
13th century in science
Metaphysical theories
Particles | Corpuscularianism | [
"Physics"
] | 774 | [
"Particles",
"Physical objects",
"Matter"
] |
329,400 | https://en.wikipedia.org/wiki/Solid%20of%20revolution | In geometry, a solid of revolution is a solid figure obtained by rotating a plane figure around some straight line (the axis of revolution), which may not intersect the generatrix (except at its boundary). The surface created by this revolution and which bounds the solid is the surface of revolution.
Assuming that the curve does not cross the axis, the solid's volume is equal to the length of the circle described by the figure's centroid multiplied by the figure's area (Pappus's second centroid theorem).
A representative disc is a three-dimensional volume element of a solid of revolution. The element is created by rotating a line segment (of length $w$) around some axis (located $r$ units away), so that a cylindrical volume of $\pi r^2 w$ units is enclosed.
Finding the volume
Two common methods for finding the volume of a solid of revolution are the disc method and the shell method of integration. To apply these methods, it is easiest to draw the graph in question; identify the area that is to be revolved about the axis of revolution; determine the volume of either a disc-shaped slice of the solid, with thickness $\delta x$, or a cylindrical shell of width $\delta x$; and then find the limiting sum of these volumes as $\delta x$ approaches 0, a value which may be found by evaluating a suitable integral. A more rigorous justification can be given by attempting to evaluate a triple integral in cylindrical coordinates with two different orders of integration.
Disc method
The disc method is used when the slice that was drawn is perpendicular to the axis of revolution; i.e. when integrating parallel to the axis of revolution.
The volume of the solid formed by rotating the area between the curves of $f(y)$ and $g(y)$ and the lines $y = a$ and $y = b$ about the $y$-axis is given by
$$V = \pi \int_a^b \left| f(y)^2 - g(y)^2 \right| \, dy. \tag{1}$$
If $g(y) = 0$ (e.g. revolving an area between the curve and the $y$-axis), this reduces to:
$$V = \pi \int_a^b f(y)^2 \, dy.$$
The method can be visualized by considering a thin horizontal rectangle at $y$, between $x = g(y)$ and $x = f(y)$, and revolving it about the $y$-axis; it forms a ring (or disc in the case that $g(y) = 0$), with outer radius $f(y)$ and inner radius $g(y)$. The area of a ring is $\pi \left( R^2 - r^2 \right)$, where $R$ is the outer radius (in this case $f(y)$) and $r$ is the inner radius (in this case $g(y)$). The volume of each infinitesimal disc is therefore $\pi \left( f(y)^2 - g(y)^2 \right) dy$. The limit of the Riemann sum of the volumes of the discs between $a$ and $b$ becomes integral (1).
Assuming the applicability of Fubini's theorem and the multivariate change of variables formula, the disc method may be derived in a straightforward manner by (denoting the solid as $D$):
$$V = \iiint_D dV = \int_a^b \int_{g(y)}^{f(y)} \int_0^{2\pi} r \, d\theta \, dr \, dy = \int_a^b \pi \left( f(y)^2 - g(y)^2 \right) dy.$$
Shell method of integration
The shell method (sometimes referred to as the "cylinder method") is used when the slice that was drawn is parallel to the axis of revolution; i.e. when integrating perpendicular to the axis of revolution.
The volume of the solid formed by rotating the area between the curves of $f(x)$ and $g(x)$ and the lines $x = a$ and $x = b$ about the $y$-axis is given by
$$V = 2\pi \int_a^b x \left| f(x) - g(x) \right| \, dx.$$
If $g(x) = 0$ (e.g. revolving an area between the curve and the $x$-axis), this reduces to:
$$V = 2\pi \int_a^b x \, f(x) \, dx.$$
The method can be visualized by considering a thin vertical rectangle at $x$ with height $f(x) - g(x)$, and revolving it about the $y$-axis; it forms a cylindrical shell. The lateral surface area of a cylinder is $2\pi r h$, where $r$ is the radius (in this case $x$), and $h$ is the height (in this case $f(x) - g(x)$). Summing up all of the surface areas along the interval gives the total volume.
This method may be derived with the same triple integral, this time with a different order of integration:
$$V = \iiint_D dV = \int_a^b \int_{g(x)}^{f(x)} \int_0^{2\pi} x \, d\theta \, dy \, dx = 2\pi \int_a^b x \left( f(x) - g(x) \right) dx.$$
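A minimal numeric sketch comparing the two methods on the same solid, the hemisphere obtained by revolving the quarter disc under $y = \sqrt{R^2 - x^2}$, $0 \le x \le R$, about the $y$-axis, and checking both against the closed-form volume $2\pi R^3/3$; the choice of curve and radius is illustrative only:

```python
import math

R = 2.0
f = lambda x: math.sqrt(max(R**2 - x**2, 0.0))   # quarter-circle boundary

def integrate(g, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# Shell method about the y-axis: V = 2*pi * integral of x*f(x) dx
v_shell = 2 * math.pi * integrate(lambda x: x * f(x), 0.0, R)

# Disc method, slicing perpendicular to the y-axis: x = sqrt(R^2 - y^2),
# so V = pi * integral of (R^2 - y^2) dy
v_disc = math.pi * integrate(lambda y: R**2 - y**2, 0.0, R)

print(v_shell, v_disc, 2 * math.pi * R**3 / 3)   # all three agree (about 16.755)
```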
Parametric form
When a curve is defined by its parametric form $(x(t), y(t))$ in some interval $[a, b]$, the volumes of the solids generated by revolving the curve around the $x$-axis or the $y$-axis are given by
$$V_x = \pi \int_a^b y(t)^2 \, \frac{dx}{dt} \, dt, \qquad V_y = \pi \int_a^b x(t)^2 \, \frac{dy}{dt} \, dt.$$
Under the same circumstances the areas of the surfaces of the solids generated by revolving the curve around the $x$-axis or the $y$-axis are given by
$$A_x = 2\pi \int_a^b y(t) \sqrt{\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2}\, dt, \qquad A_y = 2\pi \int_a^b x(t) \sqrt{\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2}\, dt.$$
This can also be derived from multivariable integration. If a plane curve is given by $\langle x(t), y(t) \rangle$ then its corresponding surface of revolution when revolved around the $x$-axis has Cartesian coordinates given by $\mathbf{r}(t, \theta) = \langle x(t),\, y(t)\cos(\theta),\, y(t)\sin(\theta) \rangle$ with $0 \leq \theta \leq 2\pi$. Then the surface area is given by the surface integral
$$A_x = \iint_S dS = \iint_{[a,b] \times [0, 2\pi]} \left\| \frac{\partial \mathbf{r}}{\partial t} \times \frac{\partial \mathbf{r}}{\partial \theta} \right\| \, d\theta \, dt.$$
Computing the partial derivatives yields
$$\frac{\partial \mathbf{r}}{\partial t} = \left\langle \frac{dx}{dt},\, \frac{dy}{dt}\cos(\theta),\, \frac{dy}{dt}\sin(\theta) \right\rangle, \qquad \frac{\partial \mathbf{r}}{\partial \theta} = \left\langle 0,\, -y(t)\sin(\theta),\, y(t)\cos(\theta) \right\rangle,$$
and computing the cross product yields
$$\frac{\partial \mathbf{r}}{\partial t} \times \frac{\partial \mathbf{r}}{\partial \theta} = \left\langle y\frac{dy}{dt},\, -y\frac{dx}{dt}\cos(\theta),\, -y\frac{dx}{dt}\sin(\theta) \right\rangle,$$
where the trigonometric identity $\sin^2(\theta) + \cos^2(\theta) = 1$ was used. With this cross product, we get
$$A_x = \iint \left\| \frac{\partial \mathbf{r}}{\partial t} \times \frac{\partial \mathbf{r}}{\partial \theta} \right\| d\theta\, dt = \iint y \sqrt{\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2} \, d\theta \, dt = 2\pi \int_a^b y(t) \sqrt{\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2}\, dt,$$
where the same trigonometric identity was used again. The derivation for a surface obtained by revolving around the y-axis is similar.
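As an illustrative check of the parametric surface-area formula (the parametrization and radius below are assumptions, not from the text): revolving the half-circle $x(t) = R\cos t$, $y(t) = R\sin t$, $0 \le t \le \pi$, about the $x$-axis should reproduce the sphere's surface area $4\pi R^2$.

```python
import math

R = 3.0
n = 200_000
a, b = 0.0, math.pi

# Surface area of revolution about the x-axis for a parametric curve:
# A_x = 2*pi * integral of y(t) * sqrt(x'(t)^2 + y'(t)^2) dt
total = 0.0
h = (b - a) / n
for i in range(n):
    t = a + (i + 0.5) * h
    y = R * math.sin(t)
    dxdt = -R * math.sin(t)
    dydt = R * math.cos(t)
    total += y * math.sqrt(dxdt**2 + dydt**2) * h

area = 2 * math.pi * total
print(area, 4 * math.pi * R**2)   # both about 113.097, the surface area of a sphere
```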
Polar form
For a polar curve $r = f(\theta)$ where $\alpha \leq \theta \leq \beta$, the volumes of the solids generated by revolving the curve around the $x$-axis or $y$-axis are
$$V_x = \frac{2\pi}{3} \int_\alpha^\beta r^3 \sin\theta \, d\theta, \qquad V_y = \frac{2\pi}{3} \int_\alpha^\beta r^3 \cos\theta \, d\theta.$$
The areas of the surfaces of the solids generated by revolving the curve around the $x$-axis or the $y$-axis are given by
$$A_x = 2\pi \int_\alpha^\beta r \sin\theta \sqrt{r^2 + \left(\frac{dr}{d\theta}\right)^2}\, d\theta, \qquad A_y = 2\pi \int_\alpha^\beta r \cos\theta \sqrt{r^2 + \left(\frac{dr}{d\theta}\right)^2}\, d\theta.$$
See also
Gabriel's Horn
Guldinus theorem
Pseudosphere
Surface of revolution
Ungula
Notes
References
Integral calculus
Solids | Solid of revolution | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 959 | [
"Calculus",
"Phases of matter",
"Condensed matter physics",
"Integral calculus",
"Solids",
"Matter"
] |
330,017 | https://en.wikipedia.org/wiki/Discretization | In applied mathematics, discretization is the process of transferring continuous functions, models, variables, and equations into discrete counterparts. This process is usually carried out as a first step toward making them suitable for numerical evaluation and implementation on digital computers. Dichotomization is the special case of discretization in which the number of discrete classes is 2, which can approximate a continuous variable as a binary variable (creating a dichotomy for modeling purposes, as in binary classification).
Discretization is also related to discrete mathematics, and is an important component of granular computing. In this context, discretization may also refer to modification of variable or category granularity, as when multiple discrete variables are aggregated or multiple discrete categories fused.
Whenever continuous data is discretized, there is always some amount of discretization error. The goal is to reduce the amount to a level considered negligible for the modeling purposes at hand.
The terms discretization and quantization often have the same denotation but not always identical connotations. (Specifically, the two terms share a semantic field.) The same is true of discretization error and quantization error.
Mathematical methods relating to discretization include the Euler–Maruyama method and the zero-order hold.
Discretization of linear state space models
Discretization is also concerned with the transformation of continuous differential equations into discrete difference equations, suitable for numerical computing.
The following continuous-time state space model
$$\dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{B}\mathbf{u}(t) + \mathbf{w}(t)$$
$$\mathbf{y}(t) = \mathbf{C}\mathbf{x}(t) + \mathbf{D}\mathbf{u}(t) + \mathbf{v}(t)$$
where $\mathbf{w}(t)$ and $\mathbf{v}(t)$ are continuous zero-mean white noise sources with power spectral densities
$$\mathbf{w}(t) \sim N(0, \mathbf{Q}), \qquad \mathbf{v}(t) \sim N(0, \mathbf{R})$$
can be discretized, assuming zero-order hold for the input $\mathbf{u}$ and continuous integration for the noise $\mathbf{w}$, to
$$\mathbf{x}[k+1] = \mathbf{A}_d \mathbf{x}[k] + \mathbf{B}_d \mathbf{u}[k] + \mathbf{w}[k]$$
$$\mathbf{y}[k] = \mathbf{C}_d \mathbf{x}[k] + \mathbf{D}_d \mathbf{u}[k] + \mathbf{v}[k]$$
with covariances
$$\mathbf{w}[k] \sim N(0, \mathbf{Q}_d), \qquad \mathbf{v}[k] \sim N(0, \mathbf{R}_d)$$
where
$$\mathbf{A}_d = e^{\mathbf{A}T}, \qquad \mathbf{B}_d = \left( \int_0^T e^{\mathbf{A}\tau} \, d\tau \right) \mathbf{B}, \qquad \mathbf{C}_d = \mathbf{C}, \qquad \mathbf{D}_d = \mathbf{D},$$
$$\mathbf{Q}_d = \int_0^T e^{\mathbf{A}\tau}\, \mathbf{Q}\, e^{\mathbf{A}^\top \tau} \, d\tau, \qquad \mathbf{R}_d = \frac{\mathbf{R}}{T},$$
and $T$ is the sample time. If $\mathbf{A}$ is nonsingular,
$$\mathbf{B}_d = \mathbf{A}^{-1}\left( \mathbf{A}_d - \mathbf{I} \right) \mathbf{B}.$$
The equation for the discretized measurement noise is a consequence of the continuous measurement noise being defined with a power spectral density.
A clever trick to compute $\mathbf{A}_d$ and $\mathbf{B}_d$ in one step is by utilizing the following property:
$$e^{\begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} T} = \begin{bmatrix} \mathbf{A}_d & \mathbf{B}_d \\ \mathbf{0} & \mathbf{I} \end{bmatrix},$$
where $\mathbf{A}_d$ and $\mathbf{B}_d$ are the discretized state-space matrices.
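A minimal sketch of this trick in Python using scipy.linalg.expm; the two-state system and sample time below are arbitrary illustrative choices, not from the original text:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative continuous-time system x' = A x + B u and sample time T
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
T = 0.1

n, m = A.shape[0], B.shape[1]

# Build the augmented matrix [[A, B], [0, 0]] and exponentiate it once
M = np.zeros((n + m, n + m))
M[:n, :n] = A
M[:n, n:] = B
Phi = expm(M * T)

Ad = Phi[:n, :n]          # discretized state matrix, exp(A*T)
Bd = Phi[:n, n:]          # discretized input matrix, (integral of exp(A*tau) dtau) * B

# Cross-check against the closed form Bd = A^{-1} (Ad - I) B, valid here since A is nonsingular
Bd_check = np.linalg.solve(A, (Ad - np.eye(n)) @ B)
print(np.allclose(Bd, Bd_check))   # True
```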
Discretization of process noise
Numerical evaluation of $\mathbf{Q}_d$ is a bit trickier due to the matrix exponential integral. It can, however, be computed by first constructing a matrix and computing the exponential of it:
$$\mathbf{F} = \begin{bmatrix} -\mathbf{A} & \mathbf{Q} \\ \mathbf{0} & \mathbf{A}^\top \end{bmatrix} T, \qquad \mathbf{G} = e^{\mathbf{F}} = \begin{bmatrix} \dots & \mathbf{A}_d^{-1} \mathbf{Q}_d \\ \mathbf{0} & \mathbf{A}_d^\top \end{bmatrix}.$$
The discretized process noise is then evaluated by multiplying the transpose of the lower-right partition of $\mathbf{G}$ with the upper-right partition of $\mathbf{G}$:
$$\mathbf{Q}_d = \left( \mathbf{A}_d^\top \right)^\top \left( \mathbf{A}_d^{-1} \mathbf{Q}_d \right).$$
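A sketch of this construction (often attributed to Van Loan), continuing the same illustrative system as above with an assumed spectral density Q; the specific numbers are examples only:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Q = np.diag([0.0, 0.5])   # illustrative continuous process-noise spectral density
T = 0.1
n = A.shape[0]

# Augmented matrix [[-A, Q], [0, A^T]] * T, then a single matrix exponential
F = np.zeros((2 * n, 2 * n))
F[:n, :n] = -A
F[:n, n:] = Q
F[n:, n:] = A.T
G = expm(F * T)

Ad = G[n:, n:].T                 # transpose of the lower-right block = exp(A*T)
Qd = Ad @ G[:n, n:]              # upper-right block is Ad^{-1} Qd, so premultiply by Ad

print(Qd)                         # discretized process-noise covariance
print(np.allclose(Qd, Qd.T))      # the covariance comes out symmetric, as expected
```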
Derivation
Starting with the continuous model
$$\dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{B}\mathbf{u}(t)$$
we know that the matrix exponential is
$$\frac{d}{dt} e^{\mathbf{A}t} = \mathbf{A} e^{\mathbf{A}t} = e^{\mathbf{A}t} \mathbf{A}$$
and by premultiplying the model we get
$$e^{-\mathbf{A}t} \dot{\mathbf{x}}(t) = e^{-\mathbf{A}t} \mathbf{A}\mathbf{x}(t) + e^{-\mathbf{A}t} \mathbf{B}\mathbf{u}(t)$$
which we recognize as
$$\frac{d}{dt} \left( e^{-\mathbf{A}t} \mathbf{x}(t) \right) = e^{-\mathbf{A}t} \mathbf{B}\mathbf{u}(t)$$
and by integrating,
$$e^{-\mathbf{A}t} \mathbf{x}(t) - e^{0} \mathbf{x}(0) = \int_0^t e^{-\mathbf{A}\tau} \mathbf{B}\mathbf{u}(\tau) \, d\tau$$
$$\mathbf{x}(t) = e^{\mathbf{A}t} \mathbf{x}(0) + \int_0^t e^{\mathbf{A}(t - \tau)} \mathbf{B}\mathbf{u}(\tau) \, d\tau$$
which is an analytical solution to the continuous model.
Now we want to discretise the above expression. We assume that $\mathbf{u}$ is constant during each timestep.
$$\mathbf{x}[k] \equiv \mathbf{x}(kT)$$
$$\mathbf{x}[k] = e^{\mathbf{A}kT} \mathbf{x}(0) + \int_0^{kT} e^{\mathbf{A}(kT - \tau)} \mathbf{B}\mathbf{u}(\tau) \, d\tau$$
$$\mathbf{x}[k+1] = e^{\mathbf{A}(k+1)T} \mathbf{x}(0) + \int_0^{(k+1)T} e^{\mathbf{A}\left( (k+1)T - \tau \right)} \mathbf{B}\mathbf{u}(\tau) \, d\tau$$
$$\mathbf{x}[k+1] = e^{\mathbf{A}T} \left[ e^{\mathbf{A}kT} \mathbf{x}(0) + \int_0^{kT} e^{\mathbf{A}(kT - \tau)} \mathbf{B}\mathbf{u}(\tau) \, d\tau \right] + \int_{kT}^{(k+1)T} e^{\mathbf{A}\left( (k+1)T - \tau \right)} \mathbf{B}\mathbf{u}(\tau) \, d\tau$$
We recognize the bracketed expression as $\mathbf{x}[k]$, and the second term can be simplified by substituting with the function $v(\tau) = (k+1)T - \tau$. Note that $d\tau = -dv$. We also assume that $\mathbf{u}$ is constant during the integral, which in turn yields
$$\mathbf{x}[k+1] = e^{\mathbf{A}T} \mathbf{x}[k] + \left( \int_0^T e^{\mathbf{A}v} \, dv \right) \mathbf{B}\mathbf{u}[k]$$
which is an exact solution to the discretization problem.
When $\mathbf{A}$ is singular, the latter expression can still be used by replacing $e^{\mathbf{A}T}$ by its Taylor expansion,
$$e^{\mathbf{A}T} = \sum_{k=0}^{\infty} \frac{1}{k!} \left( \mathbf{A}T \right)^k.$$
This yields
$$\mathbf{x}[k+1] = e^{\mathbf{A}T} \mathbf{x}[k] + \left( \sum_{k=1}^{\infty} \frac{1}{k!} \mathbf{A}^{k-1} T^k \right) \mathbf{B}\mathbf{u}[k],$$
which is the form used in practice.
Approximations
Exact discretization may sometimes be intractable due to the heavy matrix exponential and integral operations involved. It is much easier to calculate an approximate discrete model, based on $e^{\mathbf{A}T} \approx \mathbf{I} + \mathbf{A}T$ for small timesteps $T$. The approximate solution then becomes:
$$\mathbf{x}[k+1] \approx \left( \mathbf{I} + \mathbf{A}T \right) \mathbf{x}[k] + T \mathbf{B}\mathbf{u}[k].$$
This is also known as the Euler method, which is also known as the forward Euler method. Other possible approximations are $e^{\mathbf{A}T} \approx \left( \mathbf{I} - \mathbf{A}T \right)^{-1}$, otherwise known as the backward Euler method, and $e^{\mathbf{A}T} \approx \left( \mathbf{I} + \tfrac{1}{2}\mathbf{A}T \right) \left( \mathbf{I} - \tfrac{1}{2}\mathbf{A}T \right)^{-1}$, which is known as the bilinear transform, or Tustin transform. Each of these approximations has different stability properties. The bilinear transform preserves the instability of the continuous-time system.
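A short sketch comparing these approximations with the exact matrix exponential for a small illustrative system; the matrices and sample time are arbitrary choices, not from the original text:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
T = 0.1
I = np.eye(2)

exact    = expm(A * T)                                        # exact discretization
forward  = I + A * T                                          # forward Euler
backward = np.linalg.inv(I - A * T)                           # backward Euler
tustin   = (I + A * T / 2) @ np.linalg.inv(I - A * T / 2)     # bilinear (Tustin) transform

for name, Ad in [("forward", forward), ("backward", backward), ("tustin", tustin)]:
    # maximum elementwise error against the exact discretization
    print(name, np.max(np.abs(Ad - exact)))   # Tustin is typically the closest for small T
```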
Discretization of continuous features
In statistics and machine learning, discretization refers to the process of converting continuous features or variables to discretized or nominal features. This can be useful when creating probability mass functions.
Discretization of smooth functions
In generalized functions theory, discretization arises as a particular case of the Convolution Theorem on tempered distributions
$$\mathcal{F}\{ f \cdot \operatorname{III} \} = \mathcal{F}\{ f \} * \operatorname{III}$$
$$\mathcal{F}\{ \alpha * \operatorname{III} \} = \mathcal{F}\{ \alpha \} \cdot \operatorname{III}$$
where $\operatorname{III}$ is the Dirac comb, $\cdot \operatorname{III}$ is discretization, $* \operatorname{III}$ is periodization, $f$ is a rapidly decreasing tempered distribution (e.g. a Dirac delta function $\delta$ or any other compactly supported function), $\alpha$ is a smooth, slowly growing ordinary function (e.g. the function that is constantly $1$ or any other band-limited function) and $\mathcal{F}$ is the (unitary, ordinary frequency) Fourier transform.
Functions which are not smooth can be made smooth using a mollifier prior to discretization.
As an example, discretization of the function that is constantly $1$ yields the sequence $[\ldots, 1, 1, 1, \ldots]$ which, interpreted as the coefficients of a linear combination of Dirac delta functions, forms a Dirac comb. If additionally truncation is applied, one obtains finite sequences, e.g. $[1, 1, 1, 1]$. They are discrete in both time and frequency.
See also
Discrete event simulation
Discrete space
Discrete time and continuous time
Finite difference method
Finite volume method for unsteady flow
Interpolation
Smoothing
Stochastic simulation
Time-scale calculus
References
Further reading
External links
Discretization in Geometry and Dynamics: research on the discretization of differential geometry and dynamics
Numerical analysis
Applied mathematics
Functional analysis
Iterative methods
Control theory | Discretization | [
"Mathematics"
] | 1,062 | [
"Functions and mappings",
"Functional analysis",
"Applied mathematics",
"Control theory",
"Mathematical objects",
"Computational mathematics",
"Mathematical relations",
"Numerical analysis",
"Approximations",
"Dynamical systems"
] |
330,206 | https://en.wikipedia.org/wiki/Differentiable%20function | In mathematics, a differentiable function of one real variable is a function whose derivative exists at each point in its domain. In other words, the graph of a differentiable function has a non-vertical tangent line at each interior point in its domain. A differentiable function is smooth (the function is locally well approximated as a linear function at each interior point) and does not contain any break, angle, or cusp.
If $x_0$ is an interior point in the domain of a function $f$, then $f$ is said to be differentiable at $x_0$ if the derivative $f'(x_0)$ exists. In other words, the graph of $f$ has a non-vertical tangent line at the point $(x_0, f(x_0))$. $f$ is said to be differentiable on $U$ if it is differentiable at every point of $U$. $f$ is said to be continuously differentiable if its derivative is also a continuous function over the domain of the function $f$. Generally speaking, $f$ is said to be of class $C^k$ if its first $k$ derivatives $f'(x), f''(x), \ldots, f^{(k)}(x)$ exist and are continuous over the domain of the function $f$.
For a multivariable function, as shown here, the differentiability of it is something more complex than the existence of the partial derivatives of it.
Differentiability of real functions of one variable
A function $f: U \to \mathbb{R}$, defined on an open set $U \subset \mathbb{R}$, is said to be differentiable at $a \in U$ if the derivative
$$f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}$$
exists. This implies that the function is continuous at $a$.
This function $f$ is said to be differentiable on $U$ if it is differentiable at every point of $U$. In this case, the derivative of $f$ is thus a function from $U$ into $\mathbb{R}$.
A continuous function is not necessarily differentiable, but a differentiable function is necessarily continuous (at every point where it is differentiable) as is shown below (in the section Differentiability and continuity). A function is said to be continuously differentiable if its derivative is also a continuous function; there exist functions that are differentiable but not continuously differentiable (an example is given in the section Differentiability classes).
Differentiability and continuity
If is differentiable at a point , then must also be continuous at . In particular, any differentiable function must be continuous at every point in its domain. The converse does not hold: a continuous function need not be differentiable. For example, a function with a bend, cusp, or vertical tangent may be continuous, but fails to be differentiable at the location of the anomaly.
Most functions that occur in practice have derivatives at all points or at almost every point. However, a result of Stefan Banach states that the set of functions that have a derivative at some point is a meagre set in the space of all continuous functions. Informally, this means that differentiable functions are very atypical among continuous functions. The first known example of a function that is continuous everywhere but differentiable nowhere is the Weierstrass function.
Differentiability classes
A function $f$ is said to be of class $C^1$ (continuously differentiable) if the derivative $f'(x)$ exists and is itself a continuous function. Although the derivative of a differentiable function never has a jump discontinuity, it is possible for the derivative to have an essential discontinuity. For example, the function
$$f(x) = \begin{cases} x^2 \sin\!\left( \tfrac{1}{x} \right) & \text{if } x \neq 0 \\ 0 & \text{if } x = 0 \end{cases}$$
is differentiable at 0, since
$$f'(0) = \lim_{h \to 0} \frac{h^2 \sin\!\left( \tfrac{1}{h} \right) - 0}{h} = \lim_{h \to 0} h \sin\!\left( \tfrac{1}{h} \right) = 0$$
exists. However, for $x \neq 0$, differentiation rules imply
$$f'(x) = 2x \sin\!\left( \tfrac{1}{x} \right) - \cos\!\left( \tfrac{1}{x} \right),$$
which has no limit as $x \to 0$. Thus, this example shows the existence of a function that is differentiable but not continuously differentiable (i.e., the derivative is not a continuous function). Nevertheless, Darboux's theorem implies that the derivative of any function satisfies the conclusion of the intermediate value theorem.
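A small numeric illustration of this example (the sample points are arbitrary): the difference quotients at 0 shrink to 0, while the derivative $2x \sin(1/x) - \cos(1/x)$ keeps taking values near $-1$ and $+1$ arbitrarily close to 0.

```python
import math

def f(x):
    return x**2 * math.sin(1.0 / x) if x != 0 else 0.0

def fprime(x):
    # derivative for x != 0, from the product and chain rules
    return 2 * x * math.sin(1.0 / x) - math.cos(1.0 / x)

# Difference quotients at 0 tend to 0, so f'(0) = 0 exists
for h in (1e-1, 1e-3, 1e-5):
    print(h, f(h) / h)

# ...but f' has no limit at 0: it keeps hitting values near -1 and +1
for k in (10, 1000, 100000):
    print(fprime(1.0 / (2 * math.pi * k)),            # about -1 (cosine term equals 1)
          fprime(1.0 / (2 * math.pi * k + math.pi)))  # about +1 (cosine term equals -1)
```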
Similarly to how continuous functions are said to be of class $C^0$, continuously differentiable functions are sometimes said to be of class $C^1$. A function is of class $C^2$ if the first and second derivative of the function both exist and are continuous. More generally, a function is said to be of class $C^k$ if the first $k$ derivatives $f'(x), f''(x), \ldots, f^{(k)}(x)$ all exist and are continuous. If derivatives $f^{(n)}$ exist for all positive integers $n$, the function is smooth or, equivalently, of class $C^\infty$.
Differentiability in higher dimensions
A function of several real variables $f: \mathbb{R}^m \to \mathbb{R}^n$ is said to be differentiable at a point $\mathbf{x}_0$ if there exists a linear map $J: \mathbb{R}^m \to \mathbb{R}^n$ such that
$$\lim_{\mathbf{h} \to \mathbf{0}} \frac{\left\| f(\mathbf{x}_0 + \mathbf{h}) - f(\mathbf{x}_0) - J(\mathbf{h}) \right\|_{\mathbb{R}^n}}{\left\| \mathbf{h} \right\|_{\mathbb{R}^m}} = 0.$$
If a function is differentiable at $\mathbf{x}_0$, then all of the partial derivatives exist at $\mathbf{x}_0$, and the linear map $J$ is given by the Jacobian matrix, an $n \times m$ matrix in this case. A similar formulation of the higher-dimensional derivative is provided by the fundamental increment lemma found in single-variable calculus.
If all the partial derivatives of a function exist in a neighborhood of a point $\mathbf{x}_0$ and are continuous at the point $\mathbf{x}_0$, then the function is differentiable at that point $\mathbf{x}_0$.
However, the existence of the partial derivatives (or even of all the directional derivatives) does not guarantee that a function is differentiable at a point. For example, the function $f: \mathbb{R}^2 \to \mathbb{R}$ defined by
$$f(x, y) = \begin{cases} x & \text{if } y \neq x^2 \\ 0 & \text{if } y = x^2 \end{cases}$$
is not differentiable at $(0, 0)$, but all of the partial derivatives and directional derivatives exist at this point. For a continuous example, the function
$$f(x, y) = \begin{cases} \dfrac{y^3}{x^2 + y^2} & \text{if } (x, y) \neq (0, 0) \\ 0 & \text{if } (x, y) = (0, 0) \end{cases}$$
is not differentiable at $(0, 0)$, but again all of the partial derivatives and directional derivatives exist.
Differentiability in complex analysis
In complex analysis, complex-differentiability is defined using the same definition as single-variable real functions. This is allowed by the possibility of dividing complex numbers. So, a function $f: \mathbb{C} \to \mathbb{C}$ is said to be differentiable at $x = a$ when
$$f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}.$$
Although this definition looks similar to the differentiability of single-variable real functions, it is however a more restrictive condition. A function $f: \mathbb{C} \to \mathbb{C}$, that is complex-differentiable at a point $x = a$, is automatically differentiable at that point, when viewed as a function $f: \mathbb{R}^2 \to \mathbb{R}^2$. This is because the complex-differentiability implies that
$$\lim_{h \to 0} \frac{\left| f(a+h) - f(a) - f'(a) h \right|}{|h|} = 0.$$
However, a function $f: \mathbb{C} \to \mathbb{C}$ can be differentiable as a multi-variable function, while not being complex-differentiable. For example, $f(z) = \frac{z + \bar{z}}{2}$ is differentiable at every point, viewed as the 2-variable real function $f(x, y) = x$, but it is not complex-differentiable at any point because the limit $\lim_{h \to 0} \frac{f(z + h) - f(z)}{h}$ does not exist (the limit depends on the angle of approach).
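A quick numeric sketch of that angle dependence for $f(z) = \frac{z + \bar{z}}{2} = \operatorname{Re} z$ (the step sizes below are arbitrary): the difference quotient at 0 is 1 along the real axis, 0 along the imaginary axis, and $(1 - i)/2$ along the diagonal, so no single limit exists.

```python
# Difference quotient (f(z0 + h) - f(z0)) / h for f(z) = Re(z), at z0 = 0,
# approaching along different directions in the complex plane.
f = lambda z: complex(z.real, 0.0)   # (z + conj(z)) / 2

for h in (1e-3 + 0j, 1e-3j, 1e-3 + 1e-3j):
    print(h, (f(h) - f(0)) / h)
# Along the real axis the quotient is 1, along the imaginary axis it is 0,
# and along the diagonal it is (1 - 1j)/2: the value depends on the direction.
```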
Any function that is complex-differentiable in a neighborhood of a point is called holomorphic at that point. Such a function is necessarily infinitely differentiable, and in fact analytic.
Differentiable functions on manifolds
If M is a differentiable manifold, a real or complex-valued function f on M is said to be differentiable at a point p if it is differentiable with respect to some (or any) coordinate chart defined around p. If M and N are differentiable manifolds, a function f: M → N is said to be differentiable at a point p if it is differentiable with respect to some (or any) coordinate charts defined around p and f(p).
See also
Generalizations of the derivative
Semi-differentiability
Differentiable programming
References
Multivariable calculus
Smooth functions | Differentiable function | [
"Mathematics"
] | 1,326 | [
"Multivariable calculus",
"Calculus"
] |
330,361 | https://en.wikipedia.org/wiki/Thyroid-stimulating%20hormone | Thyroid-stimulating hormone (also known as thyrotropin, thyrotropic hormone, or abbreviated TSH) is a pituitary hormone that stimulates the thyroid gland to produce thyroxine (T4), and then triiodothyronine (T3) which stimulates the metabolism of almost every tissue in the body. It is a glycoprotein hormone produced by thyrotrope cells in the anterior pituitary gland, which regulates the endocrine function of the thyroid.
Physiology
Hormone levels
TSH (with a half-life of about an hour) stimulates the thyroid gland to secrete the hormone thyroxine (T4), which has only a slight effect on metabolism. T4 is converted to triiodothyronine (T3), which is the active hormone that stimulates metabolism. About 80% of this conversion is in the liver and other organs, and 20% in the thyroid itself.
TSH is secreted throughout life but particularly reaches high levels during the periods of rapid growth and development, as well as in response to stress.
The hypothalamus, in the base of the brain, produces thyrotropin-releasing hormone (TRH). TRH stimulates the anterior pituitary gland to produce TSH.
Somatostatin is also produced by the hypothalamus, and has an opposite effect on the pituitary production of TSH, decreasing or inhibiting its release.
The concentration of thyroid hormones (T3 and T4) in the blood regulates the pituitary release of TSH; when T3 and T4 concentrations are low, the production of TSH is increased, and, conversely, when T3 and T4 concentrations are high, TSH production is decreased. This is an example of a negative feedback loop. Any inappropriateness of measured values, for instance a low-normal TSH together with a low-normal T4 may signal tertiary (central) disease and a TSH to TRH pathology. Elevated reverse T3 (RT3) together with low-normal TSH and low-normal T3, T4 values, which is regarded as indicative for euthyroid sick syndrome, may also have to be investigated for chronic subacute thyroiditis (SAT) with output of subpotent hormones. Absence of antibodies in patients with diagnoses of an autoimmune thyroid in their past would always be suspicious for development to SAT even in the presence of a normal TSH because there is no known recovery from autoimmunity.
For clinical interpretation of laboratory results it is important to acknowledge that TSH is released in a pulsatile manner resulting in both circadian and ultradian rhythms of its serum concentrations.
Subunits
TSH is a glycoprotein and consists of two subunits, the alpha and the beta subunit.
The α (alpha) subunit (i.e., chorionic gonadotropin alpha) is nearly identical to that of human chorionic gonadotropin (hCG), luteinizing hormone (LH), and follicle-stimulating hormone (FSH). The α subunit is thought to be the effector region responsible for stimulation of adenylate cyclase (involved in the generation of cAMP). The α chain has a 92-amino acid sequence.
The β (beta) subunit (TSHB) is unique to TSH, and therefore determines its receptor specificity. The β chain has a 118-amino acid sequence.
The TSH receptor
The TSH receptor is found mainly on thyroid follicular cells. Stimulation of the receptor increases T3 and T4 production and secretion. This occurs through stimulation of six steps in thyroid hormone synthesis: (1) Up-regulating the activity of the sodium-iodide symporter (NIS) on the basolateral membrane of thyroid follicular cells, thereby increasing intracellular concentrations of iodine (iodine trapping). (2) Stimulating iodination of thyroglobulin in the follicular lumen, a precursor protein of thyroid hormone. (3) Stimulating the conjugation of iodinated tyrosine residues. This leads to the formation of thyroxine (T4) and triiodothyronine (T3) that remain attached to the thyroglobulin protein. (4) Increasing endocytosis of the iodinated thyroglobulin protein across the apical membrane back into the follicular cell. (5) Stimulating proteolysis of iodinated thyroglobulin to form free thyroxine (T4) and triiodothyronine (T3). (6) Secretion of thyroxine (T4) and triiodothyronine (T3) across the basolateral membrane of follicular cells to enter the circulation. This occurs by an unknown mechanism.
Stimulating antibodies to the TSH receptor mimic TSH and cause Graves' disease. In addition, hCG shows some cross-reactivity to the TSH receptor and therefore can stimulate production of thyroid hormones. In pregnancy, prolonged high concentrations of hCG can produce a transient condition termed gestational hyperthyroidism. This is also the mechanism of trophoblastic tumors increasing the production of thyroid hormones.
Applications
Diagnostics
Reference ranges for TSH may vary slightly, depending on the method of analysis, and do not necessarily equate to cut-offs for diagnosing thyroid dysfunction. In the UK, guidelines issued by the Association for Clinical Biochemistry suggest a reference range of 0.4–4.0 μIU/mL (or mIU/L). The National Academy of Clinical Biochemistry (NACB) stated that it expected the reference range for adults to be reduced to 0.4–2.5 μIU/mL, because research had shown that adults with an initially measured TSH level of over 2.0 μIU/mL had "an increased odds ratio of developing hypothyroidism over the [following] 20 years, especially if thyroid antibodies were elevated".
TSH concentrations in children are normally higher than in adults. In 2002, the NACB recommended age-related reference limits starting from about 1.3 to 19 μIU/mL for normal-term infants at birth, dropping to 0.6–10 μIU/mL at 10 weeks old, 0.4–7.0 μIU/mL at 14 months and gradually dropping during childhood and puberty to adult levels, 0.3–3.0 μIU/mL.
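The age-banded reference limits quoted above can be captured in a small lookup table. The sketch below is illustrative only, not clinical guidance: it restates the NACB figures from this section, and the exact age-band boundaries chosen here are an assumption.

```python
# Approximate TSH reference ranges (uIU/mL) restated from the NACB figures above.
# The band edges (in days) are illustrative assumptions, not a clinical standard.
TSH_REFERENCE_RANGES = [
    (7,     (1.3, 19.0)),   # normal-term infants around birth
    (70,    (0.6, 10.0)),   # about 10 weeks old
    (425,   (0.4, 7.0)),    # about 14 months
    (6570,  (0.4, 4.0)),    # childhood/adolescence: gradually approaching adult values
    (None,  (0.3, 3.0)),    # adult
]

def tsh_reference_range(age_days: int) -> tuple[float, float]:
    """Return the (low, high) TSH reference range for a given age in days."""
    for upper_age, rng in TSH_REFERENCE_RANGES:
        if upper_age is None or age_days <= upper_age:
            return rng
    return TSH_REFERENCE_RANGES[-1][1]

print(tsh_reference_range(3))       # newborn
print(tsh_reference_range(10_000))  # adult
```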
Diagnosis of disease
TSH concentrations are measured as part of a thyroid function test in patients suspected of having an excess (hyperthyroidism) or deficiency (hypothyroidism) of thyroid hormones. Interpretation of the results depends on both the TSH and T4 concentrations. In some situations measurement of T3 may also be useful.
A TSH assay is now also the recommended screening tool for thyroid disease. Recent advances in increasing the sensitivity of the TSH assay make it a better screening tool than free T4.
Monitoring
The therapeutic target range TSH level for patients on treatment ranges between 0.3 and 3.0 μIU/mL.
For hypothyroid patients on thyroxine, measurement of TSH alone is generally considered sufficient. An increase in TSH above the normal range indicates under-replacement or poor compliance with therapy. A significant reduction in TSH suggests over-treatment. In both cases, a change in dose may be required. A low or low-normal TSH value may also signal pituitary disease in the absence of replacement.
For hyperthyroid patients, both TSH and T4 are usually monitored. In pregnancy, TSH measurements do not seem to be a good marker for the well-known association of maternal thyroid hormone availability with offspring neurocognitive development.
TSH distribution progressively shifts toward higher concentrations with age.
Difficulties with interpretation of TSH measurement
Heterophile antibodies (which include human anti-mouse antibodies (HAMA) and Rheumatoid Factor (RF)), which bind weakly to the test assay's animal antibodies, causing a higher (or less commonly lower) TSH result than the actual true TSH level. Although the standard lab assay panels are designed to remove moderate levels of heterophilic antibodies, these fail to remove higher antibody levels. "Dr. Baumann [from Mayo Clinic] and her colleagues found that 4.4 percent of the hundreds of samples she tested were affected by heterophile antibodies.........The hallmark of this condition is a discrepancy between TSH value and free T4 value, and most important between laboratory values and patient's conditions. Endocrinologists, in particular, should be on alert for this."
Macro-TSH - endogenous antibodies bind to TSH reducing its activity, so the pituitary gland would need to produce more TSH to obtain the same overall level of TSH activity.
TSH Isomers - natural variations of the TSH molecule, which have lower activity, so the pituitary gland would need to produce more TSH to obtain the same overall level of TSH activity.
The same TSH concentration may have a different meaning whether it is used for diagnosis of thyroid dysfunction or for monitoring of substitution therapy with levothyroxine. Reasons for this lack of generalisation are Simpson's paradox and the fact that the TSH-T3 shunt is disrupted in treated hypothyroidism, so that the shape of the relation between free T4 and TSH concentration is distorted.
Therapeutic
Synthetic recombinant human TSH alpha (rhTSHα or simply rhTSH) or thyrotropin alfa (INN) is manufactured by Genzyme Corp under the trade name Thyrogen. It is used to manipulate endocrine function of thyroid-derived cells, as part of the diagnosis and treatment of thyroid cancer.
A Cochrane review compared treatments using recombinant human thyrotropin-aided radioactive iodine to radioactive iodine alone. In this review it was found that the recombinant human thyrotropin-aided radioactive iodine appeared to lead to a greater reduction in thyroid volume, at the cost of an increased risk of hypothyroidism. No conclusive data on changes in quality of life with either treatment were found.
History
In 1916, Bennett M. Allen and Philip E. Smith found that the pituitary contained a thyrotropic substance. The first standardised purification protocol for this thyrotropic hormone was described by Charles George Lambie and Victor Trikojus, working at the University of Sydney in 1937.
References
External links
TSH at Lab Tests Online
Anterior pituitary hormones
Glycoproteins
Hormones of the hypothalamus-pituitary-thyroid axis
Human hormones
Peptide hormones
Pituitary gland
Sanofi
Thyroid | Thyroid-stimulating hormone | [
"Chemistry"
] | 2,275 | [
"Glycoproteins",
"Glycobiology"
] |
7,043,646 | https://en.wikipedia.org/wiki/Quantum%20game%20theory | Quantum game theory is an extension of classical game theory to the quantum domain. It differs from classical game theory in three primary ways:
Superposed initial states,
Quantum entanglement of initial states,
Superposition of strategies to be used on the initial states.
This theory is based on the physics of information much like quantum computing.
History
In 1969, John Clauser, Michael Horne, Abner Shimony, and Richard Holt (often referred to collectively as "CHSH") wrote an often-cited paper describing experiments which could be used to prove Bell's theorem. In one part of this paper, they describe a game where a player could have a better chance of winning by using quantum strategies than would be possible classically. While game theory was not explicitly mentioned in this paper, it is an early outline of how quantum entanglement could be used to alter a game.
In 1999, a professor in the math department at the University of California at San Diego named David A. Meyer first published Quantum Strategies which details a quantum version of the classical game theory game, matching pennies. In the quantum version, players are allowed access to quantum signals through the phenomenon of quantum entanglement.
Since Meyer's paper, many papers have been published exploring quantum games and the way that quantum strategies could be used in games that have been commonly studied in classical game theory.
Superposed initial states
The information transfer that occurs during a game can be viewed as a physical process.
In the simplest case of a classical game between two players with two strategies each, both the players can use a bit (a '0' or a '1') to convey their choice of strategy. A popular example of such a game is the prisoners' dilemma, where each of the convicts can either cooperate or defect: withholding knowledge or revealing that the other committed the crime. In the quantum version of the game, the bit is replaced by the qubit, which is a quantum superposition of two or more base states. In the case of a two-strategy game this can be physically implemented by the use of an entity like the electron which has a superposed spin state, with the base states being +1/2 (plus half) and −1/2 (minus half). Each of the spin states can be used to represent each of the two strategies available to the players. When a measurement is made on the electron, it collapses to one of the base states, thus conveying the strategy used by the player.
Entangled initial states
The set of qubits which are initially provided to each of the players (to be used to convey their choice of strategy) may be entangled. For instance, an entangled pair of qubits implies that an operation performed on one of the qubits, affects the other qubit as well, thus altering the expected pay-offs of the game. A simple example of this is a quantum version of the Two-up coin game in which the coins are entangled.
Superposition of strategies to be used on initial states
The job of a player in a game is to choose a strategy. In terms of bits this means that the player has to choose between 'flipping' the bit to its opposite state or leaving its current state untouched. When extended to the quantum domain this implies that the player can rotate the qubit to a new state, thus changing the probability amplitudes of each of the base states. Such operations on the qubits are required to be unitary transformations on the initial state of the qubit. This is different from the classical procedure which chooses the strategies with some statistical probabilities.
Multiplayer games
Introducing quantum information into multiplayer games allows a new type of "equilibrium strategy" which is not found in traditional games. The entanglement of players' choices can have the effect of a contract by preventing players from profiting from other player's betrayal.
Quantum Prisoner's Dilemma
The Classical Prisoner's Dilemma is a game played between two players with a choice to cooperate with or betray their opponent. Classically, the dominant strategy is to always choose betrayal. When both players choose this strategy every turn, they each ensure a suboptimal profit, but cannot lose, and the game is said to have reached a Nash equilibrium. Profit would be maximized for both players if each chose to cooperate every turn, but this is not the rational choice, thus a suboptimal solution is the dominant outcome. In the Quantum Prisoner's Dilemma, both parties choosing to betray each other is still an equilibrium, however, there can also exist multiple Nash equilibriums that vary based on the entanglement of the initial states. In the case where the states are only slightly entangled, there exists a certain unitary operation for Alice so that if Bob chooses betrayal every turn, Alice will actually gain more profit than Bob and vice versa. Thus, a profitable equilibrium can be reached in 2 additional ways. The case where the initial state is most entangled shows the most change from the classical game. In this version of the game, Alice and Bob each have an operator Q that allows for a payout equal to mutual cooperation with no risk of betrayal. This is a Nash equilibrium that also happens to be Pareto optimal.
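The Eisert–Wilkens–Lewenstein (EWL) construction underlying this description can be simulated directly with small matrices. The sketch below is an illustration under the standard EWL conventions; the specific payoff numbers (3, 0, 5, 1) are the usual textbook choices and are assumed here rather than taken from this section. It computes both players' payoffs at maximal entanglement for a few strategy pairs, including both players using the "quantum" move Q.

```python
import numpy as np

# Eisert-Wilkens-Lewenstein (EWL) quantization of the Prisoner's Dilemma (sketch).
C = np.eye(2, dtype=complex)                       # cooperate
D = np.array([[0, 1], [-1, 0]], dtype=complex)     # defect
Q = np.array([[1j, 0], [0, -1j]], dtype=complex)   # the "quantum" move

gamma = np.pi / 2                                  # maximal entanglement
J = np.cos(gamma / 2) * np.eye(4) + 1j * np.sin(gamma / 2) * np.kron(D, D)
Jdag = J.conj().T

# Payoffs for outcomes |00>, |01>, |10>, |11> = (CC, CD, DC, DD), assumed values.
payoff_alice = np.array([3, 0, 5, 1])
payoff_bob   = np.array([3, 5, 0, 1])

def payoffs(U_alice, U_bob):
    psi0 = np.array([1, 0, 0, 0], dtype=complex)        # |00> = both cooperate
    psi = Jdag @ np.kron(U_alice, U_bob) @ J @ psi0      # EWL circuit
    probs = np.abs(psi) ** 2
    return float(probs @ payoff_alice), float(probs @ payoff_bob)

for name, (ua, ub) in {"C vs C": (C, C), "D vs D": (D, D),
                       "Q vs Q": (Q, Q), "Q vs D": (Q, D)}.items():
    print(name, "->", payoffs(ua, ub))
```

Running it shows (Q, Q) recovering the mutual-cooperation payoff and Q punishing a defecting opponent, consistent with the equilibria described above.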
Additionally, the quantum version of the Prisoner's Dilemma differs greatly from the classical version when the game is of unknown or infinite length. Classically, the infinite Prisoner's Dilemma has no defined fixed strategy but in the quantum version it is possible to develop an equilibrium strategy.
Quantum Volunteer's Dilemma
The Volunteer's dilemma is a well-known game in game theory that models the conflict players face when deciding whether to volunteer for a collective benefit, knowing that volunteering incurs a personal cost. One significant variant, introduced by Weesie and Franzen in 1998, involves cost-sharing among volunteers. In this variant of the volunteer's dilemma, if there is no volunteer, all players receive a payoff of 0. If there is at least one volunteer, the reward of b units is distributed to all players, while the total cost of c units incurred by volunteering is divided equally among all the volunteers. It is shown that in the classical mixed-strategy setting there is a unique symmetric Nash equilibrium, obtained by setting each player's probability of volunteering to the unique root in the open interval (0,1) of a certain degree-n polynomial.
In 2024, a quantum variant of the classical volunteer's dilemma with b=2 and c=1 was introduced and studied, generalizing the classical setting by allowing players to utilize quantum strategies. This is achieved by employing the Eisert–Wilkens–Lewenstein quantization framework. In this setting, the players receive an entangled n-qubit state, with each player controlling one qubit. Each player's decision can be viewed as determining two angles. Symmetric Nash equilibria at which every player volunteers are exhibited, and these equilibria are Pareto optimal. Furthermore, the payoff at the quantum Nash equilibrium is shown to be higher than the payoff at the Nash equilibrium of the classical setting.
Quantum Card Game
A classically unfair card game can be played as follows: There are two players, Alice and Bob. Alice has three cards: one has a star on both sides, one has a diamond on both sides, and one has a star on one side and a diamond on the other side. Alice places the three cards in a box and shakes it up, then Bob draws a card so that both players can only see one side of the card. If the card has the same markings on both sides, Alice wins. But if the card has different markings on each side, Bob wins. Clearly, this is an unfair game, where Alice has a probability of winning of 2/3 and Bob has a probability of winning of 1/3. Alice gives Bob one chance to "operate" on the box and then allows him to withdraw from the game if he would like, but he can only classically obtain information on one card from this operation, so the game is still unfair.
However, Alice and Bob can play a version of this game adjusted to allow for quantum strategies. If we describe the state of a card with a diamond facing up as $|0\rangle$ and the state where the star is facing up as $|1\rangle$, then after shaking the box up, we can describe the state of the face-up parts of the three cards as $|k_1\rangle|k_2\rangle|k_3\rangle$,
where each $k_i$ is either 0 or 1.
Now, Bob can take advantage of his ability to operate on the box by constructing a machine as follows: First, he has a unitary matrix $U_i$ acting on a probe qubit, equal to the identity $I$ if $k_i$ is 0 and to the phase flip $Z$ if $k_i$ is 1. He then creates his machine by putting this matrix between two Hadamard gates, so his machine applies $H U_i H$.
This machine operating on the state $|0\rangle$ gives $H U_i H |0\rangle = |k_i\rangle$.
So if Bob inputs $|0\rangle|0\rangle|0\rangle$ to his machine, he obtains $|k_1\rangle|k_2\rangle|k_3\rangle$,
and he knows the state (i.e. the mark facing up) of all three of the cards. From here, Bob can draw one card, and then choose to either withdraw, or keep playing the game. Based on the first card that he draws, he can know from his knowledge of the face-up values of the cards whether or not he has drawn a card that will give him even chances of winning going forward (in which case he can continue to play a fair game) or if he has drawn the card that will guarantee that he loses the game. In this way, he can make the game fair for himself.
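The "operate once, learn everything" step can be checked numerically. The sketch below assumes the reading of the construction given above, namely that for card $i$ the box acts on a probe qubit as a phase flip $Z^{k_i}$; under that assumption, sandwiching the oracle between two Hadamard gates maps $|0\rangle$ to $|k_i\rangle$, so one pass reveals every $k_i$.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
Z = np.diag([1, -1])                           # phase flip
I2 = np.eye(2)

def machine(k: int) -> np.ndarray:
    """Bob's machine for one card: H (Z^k) H (assumed phase-flip oracle)."""
    oracle = Z if k == 1 else I2
    return H @ oracle @ H

# For each hidden card value k, feeding |0> through the machine returns |k>.
ket0 = np.array([1.0, 0.0])
for hidden in [(0, 1, 1), (1, 0, 1)]:          # example hidden card configurations
    readout = tuple(int(np.argmax(np.abs(machine(k) @ ket0) ** 2)) for k in hidden)
    print(f"hidden {hidden} -> readout {readout}")
```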
This is an example of a game where a quantum strategy can make a game fair for one player when it would be unfair for them with classical strategies.
Quantum Chess
Quantum Chess was first developed by a graduate student at the University of Southern California named Chris Cantwell. His motivation to develop the game was to expose non-physicists to the world of quantum mechanics.
The game uses the same pieces as classical chess (8 pawns, 2 knights, 2 bishops, 2 rooks, 1 queen, 1 king) and is won in the same manner (by capturing the opponent's king). However, the pieces are allowed to obey laws of quantum mechanics such as superposition. By allowing the introduction of superposition, it becomes possible for a piece to occupy more than one square at a time. The movement rules for each piece are the same as in classical chess.
The biggest difference between quantum chess and classical chess is the check rule. Check is not included in quantum chess because it is possible for the king, as well as all other pieces, to occupy multiple squares on the board at once. Another difference is the concept of moving to an occupied space. Superposition also allows two pieces to share a space or move through each other.
Capturing an opponent's piece is also slightly different in quantum chess than in classical chess. Quantum chess uses quantum measurement as a method of capturing. When attempting to capture an opponent's piece, a measurement is made to determine the probability of whether or not the space is occupied and if the path is blocked. If the probability is favorable, a move can be made to capture.
PQ Penny Flip Game
The PQ penny flip game involves two players: Captain Picard and Q. Q places a penny in a box, then they take turns (Q, then Picard, then Q) either flipping or not flipping the penny without revealing its state to either player. After these three moves have been made, Q wins if the penny is heads up, and Picard if the penny is face down.
The classical Nash Equilibrium has both players taking a mixed strategy with each move having a 50% chance of either flipping or not flipping the penny, and Picard and Q will each win the game 50% of the time using classical strategies.
Allowing Q to use quantum strategies, namely applying a Hadamard gate to the state of the penny, places it into a superposition of face up and down, represented by the quantum state
$$\tfrac{1}{\sqrt{2}}\left(|H\rangle + |T\rangle\right).$$
In this state, if Picard does not flip the penny, then the state remains unchanged, and flipping the penny puts it into the state
$$\tfrac{1}{\sqrt{2}}\left(|T\rangle + |H\rangle\right),$$
which is the same superposition.
Then, no matter Picard's move, Q can once again apply a Hadamard gate to the superposition which results in the penny being face up. In this way the quantization of Q's strategy guarantees a win against a player constrained by classical strategies.
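The whole exchange fits in a few lines of linear algebra. The sketch below (an illustration of the argument above, with heads encoded as $|0\rangle$ and tails as $|1\rangle$) checks that once Q applies a Hadamard, Picard's flip (X) or no-flip (I) leaves the superposition unchanged, and Q's second Hadamard always returns the penny to heads.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Q's move: Hadamard
X = np.array([[0, 1], [1, 0]])                 # Picard flips the penny
I2 = np.eye(2)                                 # Picard leaves it alone

heads = np.array([1.0, 0.0])                   # |0> = heads up

for name, picard_move in (("flip", X), ("no flip", I2)):
    final = H @ picard_move @ H @ heads        # Q, then Picard, then Q
    p_heads = abs(final[0]) ** 2
    print(f"Picard plays {name:7s}: P(heads) = {p_heads:.3f}")   # always 1.000
```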
This game is exemplary of how applying quantum strategies to classical games can shift an otherwise fair game in favor of the player using quantum strategies.
Quantum minimax theorems
The concepts of a quantum player, a zero-sum quantum game and the associated expected payoff were defined by A. Boukas in 1999 (for finite games) and in 2020 by L. Accardi and A. Boukas (for infinite games) within the framework of the spectral theorem for self-adjoint operators on Hilbert spaces. Quantum versions of Von Neumann's minimax theorem were proved.
Paradoxes
Quantum game theory also offers a solution to Newcomb's Paradox.
Take the two boxes offered in Newcomb's game to be coupled, so that the contents of box 2 depend on whether the ignorant player takes box 1. Quantum game theory enables a situation in which foreknowledge by the otherwise omniscient player is not required in order to achieve the expected outcome: the otherwise omniscient player operates on the state of the two boxes using a Hadamard gate, then sets up a device that operates on the state defined by the two boxes with a second Hadamard gate after the ignorant player's choice. Then, no matter the pure or mixed strategy that the ignorant player uses, the ignorant player's choice will lead to its corresponding outcome as defined by the premise of the game, because choosing a strategy for the game and then changing it to fool the otherwise omniscient player (corresponding to operating on the game state using a NOT gate) cannot give the ignorant player an additional advantage: the two Hadamard operations ensure that the only two outcomes are those defined by the chosen strategy. In this way, the expected situation is achieved no matter the ignorant player's strategy, without requiring a system knowledgeable about that player's future.
See also
Quantum tic-tac-toe: not a quantum game in the sense above, but a pedagogical tool based on metaphors for quantum mechanics
Quantum pseudo-telepathy
Quantum refereed game
CHSH game
Jan Sładkowski
Jens Eisert
References
Further reading
Danaci, Onur; Zhang, Wenlei; Coleman, Robert; Djakam, William; Amoo, Michaela; Glasser, Ryan T.; Kirby, Brian T.; N'Gom, Moussa; Searles, Thomas A. (2023-02-28), ManQala: Game-Inspired Strategies for Quantum State Engineering, doi:10.48550/arXiv.2302.14582, retrieved 2024-12-06
Quantum information science
Game theory | Quantum game theory | [
"Mathematics"
] | 3,077 | [
"Quantum game theory",
"Game theory"
] |
7,043,844 | https://en.wikipedia.org/wiki/Ideally%20hard%20superconductor | An ideally hard superconductor is a type II superconductor material with an infinite pinning force. In the external magnetic field it behaves like an ideal diamagnet if the field is switched on when the material is in the superconducting state, so-called "zero field cooled" (ZFC) regime. In the field cooled (FC) regime, the ideally hard superconductor screens perfectly the change of the magnetic field rather than the magnetic field itself. Its magnetization behavior can be described by Bean's critical state model.
The ideally hard superconductor is a good approximation for the melt-textured high temperature superconductors (HTSC) used in large scale HTSC applications such as flywheels, HTSC bearings, HTSC motors, etc.
See also
Frozen mirror image method
Bean's critical state model
References
Superconductivity
Magnetism | Ideally hard superconductor | [
"Physics",
"Materials_science",
"Engineering"
] | 191 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
7,044,083 | https://en.wikipedia.org/wiki/Electrical%20system%20of%20the%20International%20Space%20Station | The electrical system of the International Space Station is a critical part of the International Space Station (ISS) as it allows the operation of essential life-support systems, safe operation of the station, operation of science equipment, as well as improving crew comfort. The ISS electrical system uses solar cells to directly convert sunlight to electricity. Large numbers of cells are assembled in arrays to produce high power levels. This method of harnessing solar power is called photovoltaics.
The process of collecting sunlight, converting it to electricity, and managing and distributing this electricity builds up excess heat that can damage spacecraft equipment. This heat must be eliminated for reliable operation of the space station in orbit. The ISS power system uses radiators to dissipate the heat away from the spacecraft. The radiators are shaded from sunlight and aligned toward the cold void of deep space.
Solar array wing
Each ISS solar array wing (often abbreviated "SAW") consists of two retractable "blankets" of solar cells with a mast between them. Each wing is the largest ever deployed in space, weighing over 2,400 pounds and using nearly 33,000 solar cells, each measuring 8 cm square, with 4,100 diodes. When fully extended, each is in length and wide. Each SAW is capable of generating nearly 31 kilowatts (kW) of direct current power. When retracted, each wing folds into a solar array blanket box just high and in length.
Altogether, the eight solar array wings can generate about 240 kilowatts in direct sunlight, or about 84 to 120 kilowatts average power (cycling between sunlight and shade).
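A rough consistency check of those figures (an illustrative sketch; the round-trip battery efficiency is an assumed number, and losses from array degradation, off-pointing and power conversion are ignored): time-averaging ~240 kW over the sunlit 55 minutes of a 90-minute orbit, with the eclipse served from batteries, gives an upper bound on continuous load of roughly 130-150 kW, which is why the operational average quoted above (84 to 120 kW) is lower once those real losses are included.

```python
# Rough orbit-average power estimate for the ISS arrays (illustrative only).
peak_power_kw = 240.0          # in direct sunlight (from the text)
orbit_min = 90.0               # orbital period (from the text)
eclipse_min = 35.0             # time in shadow per orbit (from the text)
battery_roundtrip_eff = 0.80   # ASSUMED charge/discharge efficiency

t_sun, t_ecl = orbit_min - eclipse_min, eclipse_min
sun_fraction = t_sun / orbit_min

# Power available to loads if the same load is carried day and night and the
# eclipse is served from batteries charged during the sunlit portion:
#   P_load * (t_sun + t_ecl / eff) = P_peak * t_sun
continuous_load_kw = peak_power_kw * t_sun / (t_sun + t_ecl / battery_roundtrip_eff)

print(f"sunlit fraction of orbit:   {sun_fraction:.2f}")
print(f"simple time average:        {peak_power_kw * sun_fraction:.0f} kW")
print(f"continuous load upper bound: {continuous_load_kw:.0f} kW")
```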
The solar arrays normally track the Sun, with the "alpha gimbal" used as the primary rotation to follow the Sun as the space station moves around the Earth, and the "beta gimbal" used to adjust for the angle of the space station's orbit to the ecliptic. Several different tracking modes are used in operations, ranging from full Sun-tracking, to the drag-reduction mode (night glider and Sun slicer modes), to a drag-maximization mode used to lower the altitude.
Over time, the photovoltaic cells on the wings have degraded gradually, having been designed for a 15-year service life. This is especially noticeable with the first arrays to launch, with the P6 and P4 Trusses in 2000 (STS-97) and 2006 (STS-115).
STS-117 delivered the S4 truss and solar arrays in 2007.
STS-119 (ISS assembly flight 15A) delivered the S6 truss along with the fourth set of solar arrays and batteries to the station during March 2009.
To augment the oldest wings, NASA launched three pairs of large-scale versions of the ISS Roll Out Solar Array (IROSA) aboard three SpaceX Dragon 2 cargo launches from early June 2021 to early June 2023, SpaceX CRS-22, CRS-26 and CRS-28. These arrays were deployed along the central part of the wings up to two thirds of its length. Work to install iROSA's support brackets on the truss mast cans holding the Solar Array Wings was initiated by the crew members of Expedition 64 in late February 2021. After the first pair of arrays were delivered in early June, a spacewalk on 16 June by Shane Kimbrough and Thomas Pesquet of Expedition 65 to place one iROSA on the 2B power channel and mast can of the P6 truss ended early due to technical difficulties with the array's deployment.
The 20 June spacewalk saw the first iROSA's successful deployment and connection to the station's power system. The 25 June spacewalk saw the astronauts successfully install and deploy the second iROSA on the 4B mast can opposite the first iROSA.
The next pair of panels were launched on 26 November 2022. Astronauts Josh Cassada and Frank Rubio of Expedition 68 installed each one on the 3A power channel and mast can on the S4 segment, and the 4A power channel and mast can on the P4 truss segments, on 3 and 22 December 2022, respectively.
The third pair of panels were launched on 5 June 2023. On 9 June, astronauts Steve Bowen and Warren Hoburg of Expedition 69 installed the fifth iROSA on the 1A power channel and mast can on the S4 truss segment. On 15 June, Bowen and Hoburg installed the sixth iROSA on the 1B power channel and mast can on the S6 truss segment.
The last pair of iROSAs, the seventh and eighth, are planned to be installed on the 2A and 3B power channels on the P4 and S6 truss segments in 2025.
Batteries
Since the station is often not in direct sunlight, it relies on rechargeable lithium-ion batteries (initially nickel-hydrogen batteries) to provide continuous power during the "eclipse" part of the orbit (35 minutes of every 90 minute orbit).
Each battery assembly, situated on the S4, P4, S6, and P6 Trusses, consists of 24 lightweight lithium-ion battery cells and associated electrical and mechanical equipment. Each battery assembly has a nameplate capacity of 110 Ah (396,000 C; originally 81 Ah). This power is fed to the ISS via the BCDU and DCSU respectively.
The batteries ensure that the station is never without power to sustain life-support systems and experiments. During the sunlight part of the orbit, the batteries are recharged. The nickel-hydrogen batteries and the battery charge/discharge units were manufactured by Space Systems/Loral (SS/L), under contract to Boeing. Ni-H2 batteries on the P6 truss were replaced in 2009 and 2010 with more Ni-H2 batteries brought by Space Shuttle missions. The nickel-hydrogen batteries had a design life of 6.5 years and could exceed 38,000 charge/discharge cycles at 35% depth of discharge. They were replaced multiple times during the expected 30-year life of the station. Each battery measured and weighed .
From 2017 to 2021, the nickel-hydrogen batteries were replaced by lithium-ion batteries. On January 6, 2017, Expedition 50 members Shane Kimbrough and Peggy Whitson began the process of converting some of the oldest batteries on the ISS to the new lithium-ion batteries. Expedition 64 members Victor J. Glover and Michael S. Hopkins concluded the campaign on February 1, 2021. There are a number of differences between the two battery technologies. One difference is that the lithium-ion batteries can handle twice the charge, so only half as many lithium-ion batteries were needed during replacement. Also, the lithium-ion batteries are smaller than the older nickel-hydrogen batteries. Although Li-ion batteries typically have shorter lifetimes than Ni-H2 batteries as they cannot sustain as many charge/discharge cycles before suffering notable degradation, the ISS Li-ion batteries have been designed for 60,000 cycles and ten years of lifetime, much longer than the original Ni-H2 batteries' design life span of 6.5 years.
Power management and distribution
The power management and distribution subsystem operates at a primary bus voltage set to Vmp, the peak power point of the solar arrays. Vmp was 160 volts DC (direct current). It can change over time as the arrays degrade from ionizing radiation. Microprocessor-controlled switches control the distribution of primary power throughout the station.
The battery charge/discharge units (BCDUs) regulate the amount of charge put into the battery. Each BCDU can regulate discharge current from two battery ORUs (each with 38 series-connected Ni-H2 cells), and can provide up to 6.6 kW to the Space Station. During insolation, the BCDU provides charge current to the batteries and controls the amount of battery overcharge. Each day, the BCDU and batteries undergo sixteen charge/discharge cycles. The Space Station has 24 BCDUs, each weighing 100 kg. The BCDUs are provided by SS/L
Sequential shunt unit (SSU)
Eighty-two separate solar array strings feed a sequential shunt unit (SSU) that provides coarse voltage regulation at the desired Vmp. The SSU applies a "dummy" (resistive) load that increases as the station's load decreases (and vice versa) so the array operates at a constant voltage and load. The SSUs are provided by SS/L.
DC-to-DC conversion
DC-to-DC converter units supply the secondary power system at a constant 124.5 volts DC, allowing the primary bus voltage to track the peak power point of the solar arrays.
Thermal control
The thermal control system regulates the temperature of the main power distribution electronics and the batteries and associated control electronics. Details on this subsystem can be found in the article External Active Thermal Control System.
Station to shuttle power transfer system
From 2007 the Station-to-Shuttle Power Transfer System (SSPTS; pronounced spits) allowed a docked Space Shuttle to make use of power provided by the International Space Station's solar arrays. Use of this system reduced usage of a shuttle's on-board power-generating fuel cells, allowing it to stay docked to the space station for an additional four days.
SSPTS was a shuttle upgrade that replaced the Assembly Power Converter Unit (APCU) with a new device called the Power Transfer Unit (PTU). The APCU had the capacity to convert shuttle 28 VDC main bus power to 124 VDC compatible with ISS's 120 VDC power system. This was used in the initial construction of the space station to augment the power available from the Russian Zvezda service module. The PTU adds to this the capability to convert the 120 VDC supplied by the ISS to the orbiter's 28 VDC main bus power. It is capable of transferring up to 8 kW of power from the space station to the orbiter. With this upgrade both the shuttle and the ISS were able to use each other's power systems when needed, though the ISS never again required the use of an orbiter's power systems.
In December 2006, during mission STS-116, PMA-2 (then at the forward end of the Destiny module) was rewired to allow for the use of the SSPTS. The first mission to make actual use of the system was STS-118 with Space Shuttle Endeavour.
Only Discovery and Endeavour were equipped with the SSPTS. Atlantis was the only surviving shuttle not equipped with the SSPTS, so it could only go on shorter length missions than the rest of the fleet.
References
External links
NASA Glenn Contributions to the International Space Station (ISS) Electrical Power System
https://ntrs.nasa.gov/citations/20110015485
Components of the International Space Station
Electrical systems
Solar power and space | Electrical system of the International Space Station | [
"Physics"
] | 2,209 | [
"Physical systems",
"Electrical systems"
] |
7,044,429 | https://en.wikipedia.org/wiki/Absolute%20irreducibility | In mathematics, a multivariate polynomial defined over the rational numbers is absolutely irreducible if it is irreducible over the complex field. For example, $x^2 + y^2 - 1$ is absolutely irreducible, but while $x^2 + y^2$ is irreducible over the integers and the reals, it is reducible over the complex numbers as $x^2 + y^2 = (x + iy)(x - iy)$ and thus not absolutely irreducible.
More generally, a polynomial defined over a field K is absolutely irreducible if it is irreducible over every algebraic extension of K, and an affine algebraic set defined by equations with coefficients in a field K is absolutely irreducible if it is not the union of two algebraic sets defined by equations in an algebraically closed extension of K. In other words, an absolutely irreducible algebraic set is a synonym of an algebraic variety, which emphasizes that the coefficients of the defining equations may not belong to an algebraically closed field.
Absolutely irreducible is also applied, with the same meaning, to linear representations of algebraic groups.
In all cases, being absolutely irreducible is the same as being irreducible over the algebraic closure of the ground field.
Examples
A univariate polynomial of degree greater than or equal to 2 is never absolutely irreducible, due to the fundamental theorem of algebra.
The irreducible two-dimensional representation of the symmetric group S3 of order 6, originally defined over the field of rational numbers, is absolutely irreducible.
The representation of the circle group by rotations in the plane is irreducible (over the field of real numbers), but is not absolutely irreducible. After extending the field to complex numbers, it splits into two irreducible components. This is to be expected, since the circle group is commutative and it is known that all irreducible representations of commutative groups over an algebraically closed field are one-dimensional.
The real algebraic variety defined by the equation
$$x^2 + y^2 = 1$$
is absolutely irreducible. It is the ordinary circle over the reals and remains an irreducible conic section over the field of complex numbers. Absolute irreducibility more generally holds over any field not of characteristic two. In characteristic two, the equation is equivalent to (x + y −1)2 = 0. Hence it defines the double line x + y =1, which is a non-reduced scheme.
The algebraic variety given by the equation
$$x^2 + y^2 = 0$$
is not absolutely irreducible. Indeed, the left hand side can be factored as
$$x^2 + y^2 = (x + iy)(x - iy),$$
where $i$ is a square root of −1.
Therefore, this algebraic variety consists of two lines intersecting at the origin and is not absolutely irreducible. This holds either already over the ground field, if −1 is a square, or over the quadratic extension obtained by adjoining i.
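The two examples can be checked mechanically with a computer algebra system. The sketch below (illustrative only) asks SymPy to factor the polynomials over the rationals and again after adjoining $i$; adjoining $i$ is enough to exhibit the factorization of the second example, though by itself it does not prove absolute irreducibility of the first.

```python
from sympy import symbols, I, factor

x, y = symbols("x y")

# x**2 + y**2 - 1 stays irreducible even over an extension containing i ...
print(factor(x**2 + y**2 - 1, extension=I))   # x**2 + y**2 - 1

# ... while x**2 + y**2 is irreducible over Q but splits once i is adjoined.
print(factor(x**2 + y**2))                    # x**2 + y**2
print(factor(x**2 + y**2, extension=I))       # (x - I*y)*(x + I*y)
```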
References
Algebraic geometry
Representation theory | Absolute irreducibility | [
"Mathematics"
] | 559 | [
"Representation theory",
"Fields of abstract algebra",
"Algebraic geometry"
] |
7,045,361 | https://en.wikipedia.org/wiki/Biomedical%20Engineering%20Society | BMES (the Biomedical Engineering Society) is the professional society for students, faculty, researchers and industry working in the broad area of biomedical engineering. BMES is the leading biomedical engineering society in the United States and was founded on February 1, 1968 "to promote the increase of biomedical engineering knowledge and its utilization." There are 7,000 members in 2018.
Since 1972, the society has published an academic journal, the Annals of Biomedical Engineering (online archive).
History
The BMES was first established in Illinois on February 1, 1968 as a non-profit organization that aims to serve the biomedical engineering students, academics, researchers, and professionals. Upon establishing the organization it first had 171 founding members and 89 charter members.
The BMES held its first meeting on April 17, 1968 with cooperation of the American Societies for Experimental Biology at the Ritz-Carlton Hotel in Atlantic City, NJ.
References
External links
Engineering Society
Biomedical engineering
Biomedical Engineering Society | Biomedical Engineering Society | [
"Engineering",
"Biology"
] | 190 | [
"Biological engineering",
"Bioengineering stubs",
"Biomedical engineering",
"Biotechnology stubs",
"Medical technology stubs",
"Medical technology"
] |
7,045,490 | https://en.wikipedia.org/wiki/Dual%20abelian%20variety | In mathematics, a dual abelian variety can be defined from an abelian variety A, defined over a field k. A 1-dimensional abelian variety is an elliptic curve, and every elliptic curve is isomorphic to its dual, but this fails for higher-dimensional abelian varieties, so the concept of dual becomes more interesting in higher dimensions.
Definition
Let A be an abelian variety over a field k. We define $\operatorname{Pic}^0(A) \subset \operatorname{Pic}(A)$ to be the subgroup consisting of line bundles L such that $m^* L \cong p_1^* L \otimes p_2^* L$, where $m, p_1, p_2: A \times A \to A$ are the multiplication map and the two projection maps respectively. An element of $\operatorname{Pic}^0(A)$ is called a degree 0 line bundle on A.
To A one then associates a dual abelian variety Av (over the same field), which is the solution to the following moduli problem. A family of degree 0 line bundles parametrized by a k-variety T is defined to be a line bundle L on
A×T such that
for all , the restriction of L to A×{t} is a degree 0 line bundle,
the restriction of L to {0}×T is a trivial line bundle (here 0 is the identity of A).
Then there is a variety Av and a line bundle , called the Poincaré bundle, which is a family of degree 0 line bundles parametrized by Av in the sense of the above definition. Moreover, this family is universal, that is, to any family L parametrized by T is associated a unique morphism f: T → Av so that L is isomorphic to the pullback of P along the morphism 1A×f: A×T → A×Av. Applying this to the case when T is a point, we see that the points of Av correspond to line bundles of degree 0 on A, so there is a natural group operation on Av given by tensor product of line bundles, which makes it into an abelian variety.
In the language of representable functors one can state the above result as follows. The contravariant functor, which associates to each k-variety T the set of families of degree 0 line bundles parametrised by T and to each k-morphism f: T' → T the mapping induced by the pullback with f, is representable. The universal element representing this functor is the pair (Av, P).
This association is a duality in the sense that there is a natural isomorphism between the double dual Avv and A (defined via the Poincaré bundle) and that it is contravariant functorial, i.e. it associates to all morphisms f: A → B dual morphisms fv: Bv → Av in a compatible way. The n-torsion of an abelian variety and the n-torsion of its dual are dual to each other when n is coprime to the characteristic of the base. In general - for all n - the n-torsion group schemes of dual abelian varieties are Cartier duals of each other. This generalizes the Weil pairing for elliptic curves.
History
The theory was first put into a good form when K was the field of complex numbers. In that case there is a general form of duality between the Albanese variety of a complete variety V, and its Picard variety; this was realised, for definitions in terms of complex tori, as soon as André Weil had given a general definition of Albanese variety. For an abelian variety A, the Albanese variety is A itself, so the dual should be Pic0(A), the connected component of the identity element of what in contemporary terminology is the Picard scheme.
For the case of the Jacobian variety J of a compact Riemann surface C, the choice of a principal polarization of J gives rise to an identification of J with its own Picard variety. This in a sense is just a consequence of Abel's theorem. For general abelian varieties, still over the complex numbers, A is in the same isogeny class as its dual. An explicit isogeny can be constructed by use of an invertible sheaf L on A (i.e. in this case a holomorphic line bundle), when the subgroup
K(L)
of translations on L that take L into an isomorphic copy is itself finite. In that case, the quotient
A/K(L)
is isomorphic to the dual abelian variety Av.
This construction of Av extends to any field K of characteristic zero. In terms of this definition, the Poincaré bundle, a universal line bundle can be defined on
A × Av.
The construction when K has characteristic p uses scheme theory. The definition of K(L) has to be in terms of a group scheme that is a scheme-theoretic stabilizer, and the quotient taken is now a quotient by a subgroup scheme.
The Dual Isogeny
Let $f: A \to B$ be an isogeny of abelian varieties. (That is, $f$ is finite-to-one and surjective.) We will construct a dual isogeny $f^\vee: B^\vee \to A^\vee$ using the functorial description of $A^\vee$, which says that the data of a map $B^\vee \to A^\vee$ is the same as giving a family of degree zero line bundles on $A$, parametrized by $B^\vee$.
To this end, consider the isogeny $f \times 1_{B^\vee}: A \times B^\vee \to B \times B^\vee$ and the pullback $(f \times 1_{B^\vee})^* P_B$, where $P_B$ is the Poincaré line bundle for $B$. This is then the required family of degree zero line bundles on $A$.
By the aforementioned functorial description, there is then a morphism $f^\vee: B^\vee \to A^\vee$ so that $(1_A \times f^\vee)^* P_A \cong (f \times 1_{B^\vee})^* P_B$. One can show using this description that this map is an isogeny of the same degree as $f$, and that $(f^\vee)^\vee = f$.
Hence, we obtain a contravariant endofunctor on the category of abelian varieties which squares to the identity. This kind of functor is often called a dualizing functor.
Mukai's Theorem
A celebrated theorem of Mukai states that there is an isomorphism of derived categories $D^b(A) \cong D^b(A^\vee)$, where $D^b(X)$ denotes the bounded derived category of coherent sheaves on X. Historically, this was the first use of the Fourier-Mukai transform and shows that the bounded derived category cannot necessarily distinguish non-isomorphic varieties.
Recall that if X and Y are varieties, and $K \in D^b(X \times Y)$ is a complex of coherent sheaves, we define the Fourier-Mukai transform $\Phi_K: D^b(X) \to D^b(Y)$ to be the composition $\Phi_K(\cdot) = Rq_*\left(p^*(\cdot) \otimes^{L} K\right)$, where p and q are the projections onto X and Y respectively.
Note that $p$ is flat and hence $p^*$ is exact on the level of coherent sheaves, and in applications the kernel $K$ is often a line bundle so one may usually leave the left derived functors underived in the above expression. Note also that one can analogously define a Fourier-Mukai transform $D^b(Y) \to D^b(X)$ using the same kernel, by just interchanging the projection maps in the formula.
The statement of Mukai's theorem is then as follows. Theorem: Let A be an abelian variety of dimension g and $P$ the Poincaré line bundle on $A \times A^\vee$. Then, $\Phi_P^{A^\vee \to A} \circ \Phi_P^{A \to A^\vee} \cong (-1_A)^*[-g]$, where $(-1_A)$ is the inversion map, and $[-g]$ is the shift functor. In particular, $\Phi_P^{A \to A^\vee}$ is an isomorphism.
Notes
References
Abelian varieties
Abelian variety | Dual abelian variety | [
"Mathematics"
] | 1,432 | [
"Mathematical structures",
"Category theory",
"Duality theories",
"Geometry"
] |
10,872,064 | https://en.wikipedia.org/wiki/Hardmask | A hardmask is a material used in semiconductor processing as an etch mask instead of a polymer or other organic "soft" resist material.
Hardmasks are necessary when the material being etched is itself an organic polymer. Anything used to etch this material will also etch the photoresist being used to define its patterning since that is also an organic polymer. This arises, for instance, in the patterning of low-κ dielectric insulation layers used in VLSI fabrication. Polymers tend to be etched easily by oxygen, fluorine, chlorine and other reactive gases used in plasma etching.
Use of a hardmask involves an additional deposition process, and hence additional cost. First, the hardmask material is deposited and etched into the required pattern using a standard photoresist process. Following that the underlying material can be etched through the hardmask. Finally the hardmask is removed with a further etching process.
Hardmask materials can be metal or dielectric. Silicon based masks such as silicon dioxide or silicon carbide are usually used for etching low-κ dielectrics. However, SiOCH (carbon doped hydrogenated silicon oxide), a material used to insulate copper interconnects, requires an etchant that attacks silicon compounds. For this material, metal or amorphous carbon hardmasks are used. The most common metal for hardmasks is titanium nitride, but tantalum nitride has also been used.
References
Bibliography
Shi, Hualing; Shamiryan, Denis; de Marneffe, Jean-François; Huang, Huai; Ho, Paul S.; Baklanov, Mikhail R., "Plasma processing of low-κ dielectrics", ch. 3 in, Baklanov, Mikhail; Ho, Paul S.; Zschech, Ehrenfried (eds), Advanced Interconnects for ULSI Technology, John Wiley & Sons, 2012 .
Wong, T.; Ligatchev, V.; Rusli, R., "Structural properties and defect characterisation of plasma deposited carbon doped silicon oxide low-k dielectric films", pp. 133–141 in, Mathad, G.S. (ed); Baker, B.C.; Reidesma-Simpson, C.; Rathore, H.S.; Ritzdorf, T.L. (asst. eds), Copper Interconnects, New Contact Metallurgies, Structures, and Low-k Interlevel Dielectrics: Proceedings of the International Symposium, The Electrochemical Society, 2003
Semiconductor device fabrication | Hardmask | [
"Materials_science",
"Engineering"
] | 553 | [
"Semiconductor device fabrication",
"Materials science stubs",
"Materials science",
"Microtechnology"
] |
10,873,846 | https://en.wikipedia.org/wiki/V%C3%A1clav%20Chv%C3%A1tal | Václav (Vašek) Chvátal () is a Professor Emeritus in the Department of Computer Science and Software Engineering at Concordia University in Montreal, Quebec, Canada, and a visiting professor at Charles University in Prague. He has published extensively on topics in graph theory, combinatorics, and combinatorial optimization.
Biography
Chvátal was born in 1946 in Prague and educated in mathematics at Charles University in Prague, where he studied under the supervision of Zdeněk Hedrlín. He fled Czechoslovakia in 1968, three days after the Soviet invasion, and completed his Ph.D. in Mathematics at the University of Waterloo, under the supervision of Crispin St. J. A. Nash-Williams, in the fall of 1970. Subsequently, he took positions at McGill University (1971 and 1978–1986), Stanford University (1972 and 1974–1977), the Université de Montréal (1972–1974 and 1977–1978), and Rutgers University (1986–2004) before returning to Montreal for the Canada Research Chair in Combinatorial Optimization
at Concordia (2004–2011) and the Canada Research Chair in Discrete Mathematics (2011–2014) till his retirement.
Research
Chvátal first learned of graph theory in 1964, on finding a book by Claude Berge in a Plzeň bookstore and much of his research involves graph theory:
His first mathematical publication, at the age of 19, concerned directed graphs that cannot be mapped to themselves by any nontrivial graph homomorphism
Another graph-theoretic result of Chvátal was the 1970 construction of the smallest possible triangle-free graph that is both 4-chromatic and 4-regular, now known as the Chvátal graph.
A 1972 paper relating Hamiltonian cycles to connectivity and maximum independent set size of a graph, earned Chvátal his Erdős number of 1. Specifically, if there exists an s such that a given graph is s-vertex-connected and has no (s + 1)-vertex independent set, the graph must be Hamiltonian. Avis et al. tell the story of Chvátal and Erdős working out this result over the course of a long road trip, and later thanking Louise Guy "for her steady driving."
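The Chvátal–Erdős condition is easy to test on small graphs. The sketch below is illustrative: it uses networkx and brute-forces the independence number via cliques of the complement, so it is only suitable for small graphs.

```python
import networkx as nx

def satisfies_chvatal_erdos(G: nx.Graph) -> bool:
    """Check kappa(G) >= alpha(G): connectivity at least the independence number."""
    kappa = nx.node_connectivity(G)
    # independence number = clique number of the complement (brute force; small graphs only)
    alpha = max(len(c) for c in nx.find_cliques(nx.complement(G)))
    print(f"kappa = {kappa}, alpha = {alpha}")
    return kappa >= alpha

# C5 (5-cycle): kappa = 2, alpha = 2 -> condition holds, and C5 is Hamiltonian.
print(satisfies_chvatal_erdos(nx.cycle_graph(5)))

# Petersen graph: kappa = 3, alpha = 4 -> condition fails (the graph is indeed
# non-Hamiltonian, though failing the condition alone does not prove that).
print(satisfies_chvatal_erdos(nx.petersen_graph()))
```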
In a 1973 paper, Chvátal introduced the concept of graph toughness, a measure of graph connectivity that is closely connected to the existence of Hamiltonian cycles. A graph is t-tough if, for every k greater than 1, the removal of fewer than tk vertices leaves fewer than k connected components in the remaining subgraph. For instance, in a graph with a Hamiltonian cycle, the removal of any nonempty set of vertices partitions the cycle into at most as many pieces as the number of removed vertices, so Hamiltonian graphs are 1-tough. Chvátal conjectured that 3/2-tough graphs, and later that 2-tough graphs, are always Hamiltonian; despite later researchers finding counterexamples to these conjectures, it still remains open whether some constant bound on the graph toughness is enough to guarantee Hamiltonicity.
Some of Chvátal's work concerns families of sets, or equivalently hypergraphs, a subject already occurring in his Ph.D. thesis, where he also studied Ramsey theory.
In a 1972 conjecture that Erdős called "surprising" and "beautiful", and that remains open (with a $10 prize offered by Chvátal for its solution) he suggested that, in any family of sets closed under the operation of taking subsets, the largest pairwise-intersecting subfamily may always be found by choosing an element of one of the sets and keeping all sets containing that element.
In 1979, he studied a weighted version of the set cover problem, and proved that a greedy algorithm provides good approximations to the optimal solution, generalizing previous unweighted results by David S. Johnson (J. Comp. Sys. Sci. 1974) and László Lovász (Discrete Math. 1975).
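Chvátal's greedy rule for weighted set cover repeatedly picks the set minimizing (weight / number of newly covered elements); his analysis bounds the resulting cost by an H_d factor of the optimum, where d is the largest set size. A minimal sketch with made-up data, not taken from the source:

```python
def greedy_weighted_set_cover(universe, sets, weights):
    """Chvátal's greedy heuristic: repeatedly take the set with the best
    weight-per-newly-covered-element ratio until everything is covered."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (name for name in sets if uncovered & sets[name]),
            key=lambda name: weights[name] / len(uncovered & sets[name]),
        )
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

universe = range(1, 8)
sets = {"A": {1, 2, 3, 4}, "B": {4, 5, 6}, "C": {6, 7}, "D": {1, 5, 7}}
weights = {"A": 4.0, "B": 3.0, "C": 1.5, "D": 2.0}

cover = greedy_weighted_set_cover(universe, sets, weights)
d = max(len(s) for s in sets.values())
print("greedy cover:", cover, "cost:", sum(weights[n] for n in cover))
print("guarantee: within H_d =", sum(1 / k for k in range(1, d + 1)), "of optimal")
```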
Chvátal first became interested in linear programming through the influence of Jack Edmonds while Chvátal was a student at Waterloo. He quickly recognized the importance of cutting planes for attacking combinatorial optimization problems such as computing maximum independent sets and, in particular, introduced the notion of a cutting-plane proof. At Stanford in the 1970s, he began writing his popular textbook, Linear Programming, which was published in 1983.
Cutting planes lie at the heart of the branch and cut method used by efficient solvers for the traveling salesman problem. Between 1988 and 2005, the team of David L. Applegate, Robert E. Bixby, Vašek Chvátal, and William J. Cook developed one such solver, Concorde. The team was awarded The Beale-Orchard-Hays Prize for Excellence in Computational Mathematical Programming in 2000 for their ten-page paper enumerating some of Concorde's refinements of the branch and cut method that led to the solution of a 13,509-city instance and it was awarded the Frederick W. Lanchester Prize in 2007 for their book, The Traveling Salesman Problem: A Computational Study.
Chvátal is also known for proving the art gallery theorem, for researching a self-describing digital sequence, for his work with David Sankoff on the Chvátal–Sankoff constants controlling the behavior of the longest common subsequence problem on random inputs, and for his work with Endre Szemerédi on hard instances for resolution theorem proving.
Books
Japanese translation published by Keigaku Shuppan, Tokyo, 1986.
See also
List of University of Waterloo people
References
External links
Chvátal's website on encs.concordia.ca
1946 births
Living people
Scientists from Prague
Canadian mathematicians
Canadian people of Czech descent
Czech mathematicians
Czechoslovak emigrants to Canada
Canada Research Chairs
Combinatorialists
University of Waterloo alumni
Charles University alumni
Academic staff of Concordia University
John von Neumann Theory Prize winners | Václav Chvátal | [
"Mathematics"
] | 1,203 | [
"Combinatorialists",
"Combinatorics"
] |
10,874,021 | https://en.wikipedia.org/wiki/Moldova%20Steel%20Works | Moldova Steel Works (; ) is a steel-producing company in Rîbnița, in the unrecognized state of Transnistria. It accounts for more than half of Transnistrian total industrial output.
Moldova Steel Works was founded in 1985 for the reprocessing of scrap metal. In 1998, a majority of its shares was sold to the Russian energy company Itera, and 28.8% of the shares was given to the employees of the company. Production peaked in 2000. In 2004, 90% of the shares was acquired by the "Austro-Ukrainian Hares Group" of Hares Youssef. Moldova Steel Works became owned by a group of Russian–Ukrainian oligarchs, including, in addition to Hares Youssef, Hryhoriy Surkis, Ihor Kolomoyskyi, Alisher Usmanov, Vadym Novynskyi and Rinat Akhmetov. Later the Russian company Metalloinvest, controlled by Alisher Usmanov and Vasily Anisimov, became the owner of the company. In 2015, ownership was returned to the Transnistrian authorities for a symbolic price.
On 14 May 2018, the government of Ukraine included Moldova Steel Works in the list of sanctioned companies, but excluded it from the list on 19 March 2019 after a request by Moldovan Prime Minister Pavel Filip.
The initial annual production capacity of the company was 684,000 tonnes of crude steel and 500,000 tonnes of rolled products. Later the capacity was reported to be around 1,000,000 tonnes of steel and 1,000,000 tonnes of rolled products. In 2018, it produced almost 502,900 tonnes of steel and 497,900 tonnes of rolled goods.
References
External links
Companies of Transnistria
Steel companies of Moldova
Iron and steel mills
Rîbnița
Moldavian Soviet Socialist Republic
1985 establishments in the Soviet Union | Moldova Steel Works | [
"Chemistry"
] | 383 | [
"Iron and steel mills",
"Metallurgical facilities"
] |
10,874,176 | https://en.wikipedia.org/wiki/Mullion%20wall | A mullion wall is a structural system in which the load of the floor slab is taken by prefabricated panels around the perimeter. Visually, the effect is similar to the stone-mullioned windows of Perpendicular Gothic or Elizabethan architecture.
The technology was devised by George Grenfell Baines and the engineer Felix Samuely in order to cope with material shortages at the Thomas Linacre School, Wigan (1952) and refined at the Shell Offices, Stanlow (1956), the Derby Colleges of Technology and Art (1956–64) and Manchester University Humanities Building (1961–67).
A similar concept to the mullion wall was adopted by Eero Saarinen at the US Embassy, London (1955–60) and by Minoru Yamasaki at the World Trade Center, New York (1966–73).
See also
Curtain wall
References
Structural system
Types of wall | Mullion wall | [
"Technology",
"Engineering"
] | 183 | [
"Structural system",
"Types of wall",
"Structural engineering",
"Building engineering"
] |
10,874,478 | https://en.wikipedia.org/wiki/Rope%20caulk | Rope caulk or caulking cord is a type of pliable putty or caulking formed into a rope-like shape. It is typically off-white in color, relatively odorless, and stays pliable for an extended period of time.
Rope caulk can be used as caulking or weatherstripping around conventional windows installed in conventional wooden or metal frames (see glazing). It is also used as a form for epoxy work, since epoxy does not adhere to this material.
Rope caulk has also been applied to the metallic structure supporting the magnet for a dynamic speaker to cut unwanted resonance of the metal structure, leading to improved speaker performance. It has also been used as a sonic damping material in sensitive phonograph components.
History
Mortite brand rope caulk was introduced by the J.W. Mortell Co. of Kankakee, Illinois in the 1940s, and called "pliable plastic tape". The trademark application was filed in March, 1943. It was later marketed as "caulking cord". The company was later acquired by Thermwell Products.
Mortite
Mortite putty is a brand of rope caulk marketed under the Frost King brand. Its primary ingredient is titanium dioxide; it has a specific gravity of 1.34.
It is listed by the state of California as containing ingredients known to the state to cause cancer or adversely affect reproductive health (a "P65 Warning").
Notes
Plastics
Building engineering | Rope caulk | [
"Physics",
"Engineering"
] | 311 | [
"Building engineering",
"Unsolved problems in physics",
"Architecture",
"Civil engineering",
"Amorphous solids",
"Plastics"
] |
10,875,676 | https://en.wikipedia.org/wiki/Particle%20physics%20in%20cosmology | Particle physics is the study of the interactions of elementary particles at high energies, whilst physical cosmology studies the universe as a single physical entity. The interface between these two fields is sometimes referred to as particle cosmology. Particle physics must be taken into account in cosmological models of the early universe, when the average energy density was very high. The processes of particle pair production, scattering and decay influence the cosmology.
As a rough approximation, a particle scattering or decay process is important at a particular cosmological epoch if its time scale is shorter than or similar to the time scale of the universe's expansion. The latter quantity is 1/H, where H is the time-dependent Hubble parameter. This is roughly equal to the age of the universe at that time.
For example, the pion has a mean lifetime to decay of about 26 nanoseconds. This means that particle physics processes involving pion decay can be neglected until roughly that much time has passed since the Big Bang.
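As a rough worked check of this criterion (reconstructed here in standard notation; the symbol H for the Hubble parameter is used as above): a process with time scale \tau matters at cosmic time t when, roughly, \tau \lesssim H^{-1}(t) \approx t. For the pion, \tau_{\pi} \approx 2.6\times 10^{-8}\ \mathrm{s}, so pion decay only becomes an important process once the universe is older than roughly 10^{-8} seconds.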
Cosmological observations of phenomena such as the cosmic microwave background and the cosmic abundance of elements, together with the predictions of the Standard Model of particle physics, place constraints on the physical conditions in the early universe. The success of the Standard Model at explaining these observations supports its validity under conditions beyond those which can be produced in a laboratory. Conversely, phenomena discovered through cosmological observations, such as dark matter and baryon asymmetry, suggest the presence of physics that goes beyond the Standard Model.
Further reading
Bergström, Lars & Goobar, Ariel (2004); Cosmology and Particle Astrophysics, 2nd ed. Springer Verlag. .
Branco, G. C., Shafi, Q., & Silva-Marcos, J. I. (2001). Recent developments in particle physics and cosmology. Dordrecht: Kluwer Academic.
Collins, P. D. B. (2007). Particle physics and cosmology. New York: John Wiley & Sons.
Kazakov, D. I., & Smadja, G. (2005). Particle physics and cosmology the interface. NATO science series, v. 188. Dordecht: Springer.
External links
Center for Particle Cosmology at the University of Pennsylvania
Physical cosmology
Particle physics | Particle physics in cosmology | [
"Physics",
"Astronomy"
] | 467 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Astrophysics",
"Particle physics",
"Particle physics stubs",
"Physical cosmology"
] |
10,877,079 | https://en.wikipedia.org/wiki/Criegee%20intermediate | A Criegee intermediate (also called a Criegee zwitterion or Criegee biradical) is a carbonyl oxide with two charge centers. These chemicals may react with sulfur dioxide and nitrogen oxides in the Earth's atmosphere, and are implicated in the formation of aerosols, which are an important factor in controlling global climate. Criegee intermediates are also an important source of OH (hydroxyl radicals). OH radicals are the most important oxidant in the troposphere, and are important in controlling air quality and pollution.
The formation of this sort of structure was first postulated in the 1950s by Rudolf Criegee, for whom it is named. It was not until 2012 that direct detection of such chemicals was reported. Infrared spectroscopy suggests the electronic structure has a substantially zwitterionic character rather than the biradical character that had previously been proposed.
Formation
Criegee intermediates are formed by the gas-phase reactions of alkenes and ozone in the Earth's atmosphere. Ozone adds across the carbon–carbon double bond of the alkene to form a molozonide, which then decomposes to produce a carbonyl (RR'CO) and a carbonyl oxide. The latter is known as the Criegee intermediate.
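Schematically, the sequence described above can be sketched as follows (a generic reconstruction; R and R' stand for arbitrary substituents):

R2C=CR'2 + O3 → primary ozonide (molozonide) → R2C=O (carbonyl) + R'2C=O–O (carbonyl oxide, the Criegee intermediate)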
The alkene ozonolysis reaction is extremely exothermic, releasing a substantial amount of excess energy. Therefore, the Criegee intermediates are formed with a large amount of internal energy.
Removal
When Criegee intermediates are formed, some portion of them will undergo prompt unimolecular decay, producing OH radicals and other products. However, they may instead become stabilized by interactions with other molecules or react with other chemicals to give different products.
Criegee intermediates may be collisionally stabilized via collisions with other molecules in the atmosphere. These stabilized Criegee intermediates may then undergo thermal unimolecular decay to OH radicals and other products, or may undergo bimolecular reactions with other atmospheric species.
In the ozonolysis reaction sequence, the Criegee intermediate reacts with another carbonyl compound (generally the aldehyde or ketone byproduct of the Criegee-intermediate formation reaction itself) to form an ozonide (1,2,4-trioxolane).
References
Free radicals
Chemical bonding
Environmental chemistry
Climate change mitigation | Criegee intermediate | [
"Physics",
"Chemistry",
"Materials_science",
"Biology",
"Environmental_science"
] | 488 | [
"Free radicals",
"Environmental chemistry",
"Senescence",
"Condensed matter physics",
"Biomolecules",
"nan",
"Chemical bonding"
] |
10,883,143 | https://en.wikipedia.org/wiki/Register%20transfer%20notation | Register Transfer Notation (or RTN) is a way of specifying the behavior of a digital synchronous circuit. It is said to be a specification language for this reason. Register Transfer Languages (or RTL, where the L sometimes stands for Level of abstraction) are similar to Register Transfer Notation and used to describe much the same thing, however they are of a synthesizable format and more similar to a standard computer programming language, like C.
RTN may be written as either abstract or concrete. Abstract RTN is a generic notation which does not have any specific machine implementation details. In contrast, concrete RTN is a notation which does implement specifics of the machine for which it is designed.
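For illustration only (a hypothetical sketch, not taken from any particular machine description), an instruction fetch might be written in abstract RTN purely in terms of architectural registers, and in concrete RTN in terms of the implementation registers of a specific datapath:

Abstract RTN:  IR ← M[PC]; PC ← PC + 1
Concrete RTN:  MAR ← PC; MDR ← M[MAR]; IR ← MDR; PC ← PC + 1

The abstract form says only what information moves; the concrete form commits to the memory address and data registers (here assumed to be MAR and MDR) that a given implementation actually provides.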
The possible locations in which transfer of information occurs are:
Memory-location
Processor Register
Registers in I/O device
References
Hardware description languages | Register transfer notation | [
"Engineering"
] | 172 | [
"Electronic engineering",
"Hardware description languages"
] |
10,883,352 | https://en.wikipedia.org/wiki/Winged%20infusion%20set | A winged infusion set—also known as "butterfly" or "scalp vein" set—is a device specialized for venipuncture: i.e. for accessing a superficial vein or artery for either intravenous injection or phlebotomy. It consists, from front to rear, of a hypodermic needle, two bilateral flexible "wings", flexible small-bore transparent tubing (often 20–35 cm long), and lastly a connector (often female Luer). This connector attaches to another device: e.g. syringe, vacuum tube holder/hub, or extension tubing from an infusion pump or gravity-fed infusion/transfusion bag/bottle.
Newer models include a slide and lock safety device slid over the needle after use, which helps prevent accidental needlestick injury and reuse of used needles, which can transmit infectious disease such as HIV and viral hepatitis.
Use
During venipuncture, the butterfly is held by its wings between thumb and index finger. This grasp very close to the needle facilitates precise placement. The needle is generally inserted toward the vein at a shallow angle, made possible by the set's design. When the needle enters the vein, venous blood pressure generally forces a small amount of blood into the set's transparent tubing providing a visual sign, called the "flash" or "flashback", that lets the practitioner know that the needle is actually inside of a vein.
The butterfly offers advantages over a simple straight needle. The butterfly's flexible tubing reaches more body surface and tolerates more patient movement. The butterfly's precise placement facilitates venipuncture of thin, "rolling", fragile, or otherwise poorly accessible veins. The butterfly's shallow-angle insertion design facilitates venipuncture of very superficial veins, e.g. hand, wrist, or scalp veins (hence name "scalp vein" set).
Needle size
Butterflies are commonly available in 18-27 gauge bore, 21G and 23G being most popular.
In phlebotomy, there is widespread avoidance of 25G and 27G butterflies based on belief that such small-bore needles hemolyze and/or clot blood samples and hence invalidate blood tests. Contrary to this belief, theoretical calculation and in vitro experiment both showed the exact opposite: namely, that shear stress and hence hemolysis decrease with decreasing needle bore (but the decrease can be clinically insignificant). In agreement with these results, a subsequent clinical trial found that 21G, 23G, and 25G butterflies connected directly to vacuum tubes caused the same amount of hemolysis and gave the same coagulation panel test results.
References
Medical equipment | Winged infusion set | [
"Biology"
] | 556 | [
"Medical equipment",
"Medical technology"
] |
994,039 | https://en.wikipedia.org/wiki/Mobile%20office | A mobile office is an office built within a truck, motorhome, trailer or shipping container. The term is also used for people who don't work at a physical office location but instead carry their office materials with them. The mobile office can allow businesses to cut costs and avoid building physical locations where it would be too costly or simply unnecessary.
See also
Mobile home
Virtual office
References
Office work
Construction
Portable buildings and shelters | Mobile office | [
"Engineering"
] | 86 | [
"Construction"
] |
995,019 | https://en.wikipedia.org/wiki/Mass%20gap | In quantum field theory, the mass gap is the difference in energy between the lowest energy state, the vacuum, and the next lowest energy state. The energy of the vacuum is zero by definition, and assuming that all energy states can be thought of as particles in plane-waves, the mass gap is the mass of the lightest particle.
Since the energies of exact (i.e. nonperturbative) energy eigenstates are spread out and therefore are not technically eigenstates, a more precise definition is that the mass gap is the greatest lower bound of the energy of any state which is orthogonal to the vacuum.
The analog of a mass gap in many-body physics on a discrete lattice arises from a gapped Hamiltonian.
Mathematical definitions
For a given real-valued quantum field, defined at each spacetime point, we can say that the theory has a mass gap if the two-point function has the property
with being the lowest energy value in the spectrum of the Hamiltonian and thus the mass gap. This quantity, easy to generalize to other fields, is what is generally measured in lattice computations. It was proved in this way that Yang–Mills theory develops a mass gap on a lattice. The corresponding time-ordered value, the propagator, will have the property
with the constant being finite. A typical example is offered by a free massive particle and, in this case, the constant has the value 1/m2. In the same limit, the propagator for a massless particle is singular.
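The omitted expressions take roughly the following form in a common convention (a hedged reconstruction; \Delta denotes the mass gap and A_n are constants):

\langle 0 | \phi(0,t)\,\phi(0,0) | 0 \rangle \sim \sum_n A_n\, e^{-\Delta_n t}, \qquad \Delta = \min_n \Delta_n > 0,

while the momentum-space propagator stays finite at zero momentum, \lim_{p \to 0} \Delta(p^2) = \mathrm{const}. For a free particle of mass m one has \Delta(p^2) = 1/(p^2 + m^2), giving the value 1/m^2 quoted above, whereas a massless propagator \sim 1/p^2 is singular in this limit.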
Examples from classical theories
An example of mass gap arising for massless theories, already at the classical level, can be seen in spontaneous breaking of symmetry or the Higgs mechanism. In the former case, one has to cope with the appearance of massless excitations, Goldstone bosons, that are removed in the latter case due to gauge freedom. Quantization preserves this gauge freedom property.
A quartic massless scalar field theory develops a mass gap already at classical level. Consider the equation
This equation has the exact solution
—where and are integration constants, and sn is a Jacobi elliptic function—provided
At the classical level, a mass gap appears while, at quantum level, one has a tower of excitations, and this property of the theory is preserved after quantization in the limit of momenta going to zero.
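A plausible reconstruction of the omitted equation and its solution (the symbols \mu, \theta and p are assumed notation):

\partial^\mu \partial_\mu \phi + \lambda \phi^3 = 0,

\phi(x) = \mu \left( \tfrac{2}{\lambda} \right)^{1/4} \operatorname{sn}(p \cdot x + \theta,\, i), \qquad \text{provided } p^2 = \mu^2 \sqrt{\lambda/2}.

The condition on p^2 is a massive dispersion relation even though no mass term appears in the equation, which is the classical mass gap referred to in the text.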
Yang–Mills theory
While lattice computations have suggested that Yang–Mills theory indeed has a mass gap and a tower of excitations, a theoretical proof is still missing. This is one of the Clay Institute Millennium problems and it remains an open problem. Such states for Yang–Mills theory should be physical states, named glueballs, and should be observable in the laboratory.
Källén–Lehmann representation
If the Källén–Lehmann spectral representation holds (at this stage we exclude gauge theories), the spectral density function can take a very simple form, with a discrete spectrum starting with a mass gap
being the contribution from multi-particle part of the spectrum. In this case, the propagator will take the simple form
being approximately the starting point of the multi-particle sector. Now, using the fact that
we arrive at the following conclusion for the constants in the spectral density
.
This could not be true in a gauge theory. Rather it must be proved that a Källén–Lehmann representation for the propagator holds also for this case. Absence of multi-particle contributions implies that the theory is trivial, as no bound states appear in the theory and so there is no interaction, even if the theory has a mass gap. In this case we have immediately the propagator just setting in the formulas above.
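A generic sketch of the omitted formulas (hedged; exact normalizations depend on conventions):

\rho(\mu^2) = Z\, \delta(\mu^2 - m^2) + \rho_c(\mu^2),

where \rho_c is the multi-particle contribution starting at a threshold above m^2, so that

\Delta(p) \approx \frac{Z}{p^2 - m^2 + i\epsilon} + \text{multi-particle terms},

and the normalization \int d\mu^2\, \rho(\mu^2) = 1 then bounds the constant, Z \le 1, with equality when the multi-particle part is absent, which is the free (trivial) case mentioned in the text.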
See also
Coleman–Mandula theorem
Scalar field theory
References
External links
Sadun, Lorenzo. Yang-Mills and the Mass Gap. Video lecture outlining the nature of the mass gap problem within the Yang-Mills formulation.
Mass gaps for scalar field theories on Dispersive Wiki
Quantum field theory | Mass gap | [
"Physics"
] | 812 | [
"Quantum field theory",
"Quantum mechanics"
] |
995,169 | https://en.wikipedia.org/wiki/Landau%20theory | Landau theory (also known as Ginzburg–Landau theory, despite the confusing name) in physics is a theory that Lev Landau introduced in an attempt to formulate a general theory of continuous (i.e., second-order) phase transitions. It can also be adapted to systems under externally-applied fields, and used as a quantitative model for discontinuous (i.e., first-order) transitions. Although the theory has now been superseded by the renormalization group and scaling theory formulations, it remains an exceptionally broad and powerful framework for phase transitions, and the associated concept of the order parameter as a descriptor of the essential character of the transition has proven transformative.
Mean-field formulation (no long-range correlation)
Landau was motivated to suggest that the free energy of any system should obey two conditions:
Be analytic in the order parameter and its gradients.
Obey the symmetry of the Hamiltonian.
Given these two conditions, one can write down (in the vicinity of the critical temperature, Tc) a phenomenological expression for the free energy as a Taylor expansion in the order parameter.
Second-order transitions
Consider a system that breaks some symmetry below a phase transition, which is characterized by an order parameter . This order parameter is a measure of the order before and after a phase transition; the order parameter is often zero above some critical temperature and non-zero below the critical temperature. In a simple ferromagnetic system like the Ising model, the order parameter is characterized by the net magnetization , which becomes spontaneously non-zero below a critical temperature . In Landau theory, one considers a free energy functional that is an analytic function of the order parameter. In many systems with certain symmetries, the free energy will only be a function of even powers of the order parameter, for which it can be expressed as the series expansion
In general, there are higher order terms present in the free energy, but it is a reasonable approximation to consider the series to fourth order in the order parameter, as long as the order parameter is small. For the system to be thermodynamically stable (that is, the system does not seek an infinite order parameter to minimize the energy), the coefficient of the highest even power of the order parameter must be positive, so . For simplicity, one can assume that , a constant, near the critical temperature. Furthermore, since changes sign above and below the critical temperature, one can likewise expand , where it is assumed that for the high-temperature phase while for the low-temperature phase, for a transition to occur. With these assumptions, minimizing the free energy with respect to the order parameter requires
The solution to the order parameter that satisfies this condition is either , or
It is clear that this solution only exists for , otherwise is the only solution. Indeed, is the minimum solution for , but the solution minimizes the free energy for , and thus is a stable phase. Furthermore, the order parameter follows the relation
below the critical temperature, indicating a critical exponent β = 1/2 for this Landau mean-field model.
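Written out in one common convention, the omitted free energy and minimization condition read (a reconstruction; a_0 and b are assumed coefficient names):

F(\eta) = F_0 + a_0 (T - T_c)\, \eta^2 + \tfrac{b}{2}\, \eta^4,

\frac{\partial F}{\partial \eta} = 2 a_0 (T - T_c)\, \eta + 2 b\, \eta^3 = 0,

with solutions \eta = 0 or \eta^2 = \tfrac{a_0}{b} (T_c - T), so that below T_c the order parameter grows as \eta \propto (T_c - T)^{1/2}, i.e. \beta = 1/2.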
The free-energy will vary as a function of temperature given by
From the free energy, one can compute the specific heat,
which has a finite jump at the critical temperature. This finite jump is not associated with latent heat, since the entropy (the first derivative of the free energy) remains continuous through the transition. It is also noteworthy that the discontinuity in the specific heat is related to the discontinuity in the second derivative of the free energy, which is characteristic of a second-order phase transition. Furthermore, the fact that the specific heat has no divergence or cusp at the critical point indicates that the corresponding critical exponent is α = 0.
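With the same convention as above, the omitted results are roughly (hedged reconstruction):

F_{\min}(T) = F_0 - \frac{a_0^2 (T - T_c)^2}{2b} \ \ (T < T_c), \qquad F_{\min}(T) = F_0 \ \ (T > T_c),

c = -T\, \frac{\partial^2 F_{\min}}{\partial T^2} \ \Rightarrow\ \Delta c = \frac{a_0^2\, T_c}{b},

a finite jump with no divergence, consistent with a mean-field specific-heat exponent \alpha = 0.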
Irreducible representations
Landau expanded his theory to consider the restraints that it imposes on the symmetries before and after a transition of second order. They need to comply with a number of requirements:
The distorted (or ordered) symmetry needs to be a subgroup of the higher one.
The order parameter that embodies the distortion needs to transform as a single irreducible representation (irrep) of the parent symmetry
The irrep should not contain a third order invariant
If the irrep allows for more than one fourth order invariant, the resulting symmetry minimizes a linear combination of these invariants
In the latter case more than one daughter structure should be reachable through a continuous transition. Good examples of this are the structure of MnP (space group Cmca) and the low-temperature structure of NbS (space group P63mc). They are both daughters of the NiAs structure and their distortions transform according to the same irrep of that space group.
Applied fields
In many systems, one can consider a perturbing field that couples linearly to the order parameter. For example, in the case of a classical dipole moment , the energy of the dipole-field system is . In the general case, one can assume an energy shift of due to the coupling of the order parameter to the applied field , and the Landau free energy will change as a result:
In this case, the minimization condition is
One immediate consequence of this equation and its solution is that, if the applied field is non-zero, then the magnetization is non-zero at any temperature. This implies there is no longer a spontaneous symmetry breaking that occurs at any temperature. Furthermore, some interesting thermodynamic and universal quantities can be obtained from this above condition. For example, at the critical temperature where , one can find the dependence of the order parameter on the external field:
indicating a critical exponent .
Furthermore, from the above condition, it is possible to find the zero-field susceptibility , which must satisfy
In this case, recalling in the zero-field case that at low temperatures, while for temperatures above the critical temperature, the zero-field susceptibility therefore has the following temperature dependence:
which is reminiscent of the Curie-Weiss law for the temperature dependence of magnetic susceptibility in magnetic materials, and yields the mean-field critical exponent .
It is noteworthy that although the critical exponents so obtained are incorrect for many models and systems, they correctly satisfy various exponent equalities such as the Rushbrooke equality: α + 2β + γ = 2.
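With the same quartic free energy plus a coupling term -h\eta, the omitted relations work out as follows (reconstruction in the convention used above):

\text{at } T = T_c:\ \ 2b\,\eta^3 = h \ \Rightarrow\ \eta = \left( \frac{h}{2b} \right)^{1/3}, \quad \text{so } \delta = 3,

\chi = \left.\frac{\partial \eta}{\partial h}\right|_{h \to 0} = \frac{1}{2 a_0 (T - T_c)} \ (T > T_c), \qquad \chi = \frac{1}{4 a_0 (T_c - T)} \ (T < T_c), \quad \text{so } \gamma = 1,

and indeed \alpha + 2\beta + \gamma = 0 + 2\cdot\tfrac12 + 1 = 2, the Rushbrooke equality.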
First-order transitions
Landau theory can also be used to study first-order transitions. There are two different formulations, depending on whether or not the system is symmetric under a change in sign of the order parameter.
I. Symmetric Case
Here we consider the case where the system has a symmetry and the energy is invariant when the order parameter changes sign.
A first-order transition will arise if the quartic term in is negative. To ensure that the free energy remains positive at large , one must carry the free-energy expansion to sixth-order,
where , and is some temperature at which changes sign. We denote this temperature by and not , since it will emerge below that it is not the temperature of the first-order transition, and since there is no critical point, the notion of a "critical temperature" is misleading to begin with. and are positive coefficients.
We analyze this free energy functional as follows: (i) For , the and terms are concave upward for all , while the term is concave downward. Thus for sufficiently high temperatures is concave upward for all , and the equilibrium solution is . (ii) For , both the and terms are negative, so is a local maximum, and the minimum of is at some non-zero value , with
. (iii) For just above , turns into a local minimum, but the minimum at continues to be the global minimum since it has a lower free energy. It follows that as the temperature is raised above , the global minimum cannot continuously evolve from
to 0. Rather, at some intermediate temperature , the minima at and must become degenerate. For , the global minimum will jump discontinuously from to 0.
To find , we demand that free energy be zero at (just like the solution), and furthermore that this point should be a local minimum. These two conditions yield two equations,
which are satisfied when . The same equations also imply that . That is,
From this analysis both points made above can be seen explicitly. First, the order parameter suffers a discontinuous jump from to 0. Second, the transition temperature is not the same as the temperature where vanishes.
At temperatures below the transition temperature, , the order parameter is given by
which is plotted to the right. This shows the clear discontinuity associated with the order parameter as a function of the temperature. To further demonstrate that the transition is first-order, one can show that the free energy for this order parameter is continuous at the transition temperature , but its first derivative (the entropy) suffers from a discontinuity, reflecting the existence of a non-zero latent heat.
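In one standard parametrization the omitted quantities can be reconstructed as follows (the coefficient names a_0, b, c are assumptions):

F(\eta) = a_0 (T - T_0)\, \eta^2 - b\, \eta^4 + c\, \eta^6, \qquad b, c > 0,

and imposing F(\eta_0) = 0 together with F'(\eta_0) = 0 gives

\eta_0^2 = \frac{b}{2c}, \qquad a_0 (T^* - T_0) = \frac{b^2}{4c} \ \Rightarrow\ T^* = T_0 + \frac{b^2}{4 a_0 c},

so the transition temperature T^* lies above T_0, and at T^* the order parameter jumps discontinuously between \eta_0 and 0.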
II. Nonsymmetric Case
Next we consider the case where the system does not have a symmetry. In this case there is no reason to keep only even powers of in the expansion of , and a cubic term must be allowed (The linear term can always be eliminated by a shift + constant.) We thus consider a free energy functional
Once again , and are all positive. The sign of the cubic term can always be chosen to be negative as we have done by reversing the sign of if necessary.
We analyze this free energy functional as follows: (i) For , we have a local maximum at , and since the free energy is bounded below, there must be two local minima at nonzero values and . The cubic term ensures that is the global minimum since it is deeper. (ii) For just above , the minimum at disappears, the maximum at turns into a local minimum, but the minimum at persists and continues to be the global minimum. As the temperature is further raised, rises until it equals zero at some temperature . At we get a discontinuous jump in the global minimum from to 0. (The minima cannot coalesce for that would require the first three derivatives of to vanish at .)
To find , we demand that free energy be zero at (just like the solution), and furthermore that this point should be a local minimum. These two conditions yield two equations,
which are satisfied when . The same equations also imply that . That is,
As in the symmetric case the order parameter suffers a discontinuous jump from to 0. Second, the transition temperature is not the same as the temperature where vanishes.
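Analogously for the cubic case, a hedged reconstruction of the omitted results (same assumed coefficient names):

F(\eta) = a_0 (T - T_0)\, \eta^2 - b\, \eta^3 + c\, \eta^4,

and the two conditions F(\eta_0) = 0, F'(\eta_0) = 0 give

\eta_0 = \frac{b}{2c}, \qquad a_0 (T_c - T_0) = \frac{b^2}{4c},

so again the first-order transition occurs above the temperature T_0 at which the quadratic coefficient changes sign.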
Applications
It was known experimentally that the liquid–gas coexistence curve and the ferromagnet magnetization curve both exhibited a scaling relation of the form , where was mysteriously the same for both systems. This is the phenomenon of universality. It was also known that simple liquid–gas models are exactly mappable to simple magnetic models, which implied that the two systems possess the same symmetries. It then followed from Landau theory why these two apparently disparate systems should have the same critical exponents, despite having different microscopic parameters. It is now known that the phenomenon of universality arises for other reasons (see Renormalization group). In fact, Landau theory predicts the incorrect critical exponents for the Ising and liquid–gas systems.
The great virtue of Landau theory is that it makes specific predictions for what kind of non-analytic behavior one should see when the underlying free energy is analytic. Then, all the non-analyticity at the critical point, the critical exponents, are because the equilibrium value of the order parameter changes non-analytically, as a square root, whenever the free energy loses its unique minimum.
The extension of Landau theory to include fluctuations in the order parameter shows that Landau theory is only strictly valid near the critical points of ordinary systems with spatial dimensions higher than 4. This is the upper critical dimension, and it can be much higher than four in more finely tuned phase transitions. In Mukamel's analysis of the isotropic Lifshitz point, the critical dimension is 8. This is because Landau theory is a mean field theory, and does not include long-range correlations.
This theory does not explain non-analyticity at the critical point, but when applied to superfluid and superconductor phase transition, Landau's theory provided inspiration for another theory, the Ginzburg–Landau theory of superconductivity.
Including long-range correlations
Consider the Ising model free energy above. Assume that the order parameter and external magnetic field, , may have spatial variations. Now, the free energy of the system can be assumed to take the following modified form:
where is the total spatial dimensionality. So,
Assume that, for a localized external magnetic perturbation , the order parameter takes the form . Then,
That is, the fluctuation in the order parameter corresponds to the order-order correlation. Hence, neglecting this fluctuation (like in the earlier mean-field approach) corresponds to neglecting the order-order correlation, which diverges near the critical point.
One can also solve for , from which the scaling exponent, , for the correlation length can be deduced. From these, the Ginzburg criterion for the upper critical dimension for the validity of the Ising mean-field Landau theory (the one without long-range correlation) can be calculated as:
In our current Ising model, mean-field Landau theory gives and so, it (the Ising mean-field Landau theory) is valid only for spatial dimensionality greater than or equal to 4 (at the marginal values of , there are small corrections to the exponents). This modified version of mean-field Landau theory is sometimes also referred to as the Landau–Ginzburg theory of Ising phase transitions. As a clarification, there is also a Ginzburg–Landau theory specific to superconductivity phase transition, which also includes fluctuations.
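A hedged sketch of the omitted expressions, in a standard convention (the gradient coefficient γ and the correlation length ξ are assumed notation):

F[\eta] = \int d^d x \left[ a_0 (T - T_c)\, \eta^2 + \tfrac{b}{2}\, \eta^4 + \tfrac{\gamma}{2} (\nabla \eta)^2 - h(x)\, \eta(x) \right],

for which a localized perturbation h(x) \propto \delta^d(x) produces, above T_c, a response \eta(x) decaying as e^{-|x|/\xi} (up to power-law prefactors), with

\xi \propto |T - T_c|^{-1/2}, \quad \text{i.e. } \nu = \tfrac12.

The Ginzburg criterion built from these mean-field exponents then yields the upper critical dimension d = 4 quoted in the text for the Ising/quartic case.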
See also
Ginzburg–Landau theory
Landau–de Gennes theory
Ginzburg criterion
Stuart–Landau equation
Footnotes
Further reading
Landau L.D. Collected Papers (Nauka, Moscow, 1969)
Michael C. Cross, Landau theory of second order phase transitions, (Caltech statistical mechanics lecture notes).
Yukhnovskii, I R, Phase Transitions of the Second Order – Collective Variables Method, World Scientific, 1987,
Statistical mechanics
Phase transitions
Lev Landau | Landau theory | [
"Physics",
"Chemistry"
] | 2,956 | [
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Critical phenomena",
"Statistical mechanics",
"Matter"
] |
995,903 | https://en.wikipedia.org/wiki/Mesna | Mesna, sold under the brand name Mesnex among others, is a medication used in those taking cyclophosphamide or ifosfamide to decrease the risk of bleeding from the bladder. It is used either by mouth or injection into a vein.
Common side effects include headache, vomiting, sleepiness, loss of appetite, cough, rash, and joint pain. Serious side effects include allergic reactions. Use during pregnancy appears to be safe for the baby but this use has not been well studied. Mesna is an organosulfur compound. It works by altering the breakdown products of cyclophosphamide and ifosfamide found in the urine making them less toxic.
Mesna was approved for medical use in the United States in 1988. It is on the World Health Organization's List of Essential Medicines.
Medical uses
Chemotherapy adjuvant
Mesna is used therapeutically to reduce the incidence of haemorrhagic cystitis and haematuria when a patient receives ifosfamide or cyclophosphamide for cancer chemotherapy. These two anticancer agents, in vivo, may be converted to urotoxic metabolites, such as acrolein.
Mesna helps to detoxify these metabolites by reaction of its sulfhydryl group with α,β-unsaturated carbonyl-containing compounds such as acrolein. This reaction is known as a Michael addition. Mesna also increases urinary excretion of cysteine.
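Schematically, the thia-Michael addition described above can be written as (a generic reconstruction, not a quoted mechanism):

HS–CH2CH2–SO3⁻ (mesna) + CH2=CH–CHO (acrolein) → OHC–CH2–CH2–S–CH2CH2–SO3⁻,

i.e. the thiol sulfur adds to the β-carbon of the α,β-unsaturated aldehyde, giving a water-soluble thioether conjugate that is no longer urotoxic and is excreted in the urine.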
Other
Outside North America, mesna is also used as a mucolytic agent, working in the same way as acetylcysteine; it is sold for this indication as Mistabron and Mistabronco.
Administration
It is administered intravenously or orally (through the mouth). The IV mesna infusions would be given with IV ifosfamide, while oral mesna would be given with oral cyclophosphamide. The oral doses must be double the intravenous (IV) mesna dose due to bioavailability issues. The oral preparation allows patients to leave the hospital sooner, instead of staying four to five days for all the IV mesna infusions.
Mechanism of action
Mesna reduces the toxicity of urotoxic compounds that may form after chemotherapy administration. Mesna is a water-soluble compound with antioxidant properties, and is given concomitantly with the chemotherapeutic agents cyclophosphamide and ifosfamide. Mesna concentrates in the bladder where acrolein accumulates after administration of chemotherapy and through a Michael addition, forms a conjugate with acrolein and other urotoxic metabolites. This conjugation reaction inactivates the urotoxic compounds to harmless metabolites. The metabolites are then excreted in the urine.
Names
It is marketed by Baxter as Uromitexan and Mesnex. The name of the substance is an acronym for 2-mercaptoethane sulfonate Na (Na being the chemical symbol for sodium).
See also
Coenzyme M—a coenzyme with the same structure used by methanogenic bacteria
References
External links
BC Cancer Agency
NIH/MedlinePlus patient information
Chemotherapeutic adjuvants
Thiols
Expectorants
Organic sodium salts
World Health Organization essential medicines
Wikipedia medicine articles ready to translate
Sulfonates | Mesna | [
"Chemistry"
] | 733 | [
"Organic compounds",
"Organic sodium salts",
"Thiols",
"Salts"
] |
996,678 | https://en.wikipedia.org/wiki/MPU-401 | The MPU-401, where MPU stands for MIDI Processing Unit, is an important but now obsolete interface for connecting MIDI-equipped electronic music hardware to personal computers. It was designed by Roland Corporation, which also co-authored the MIDI standard.
Design
Released around 1984, the original MPU-401 was an external breakout box providing MIDI IN/MIDI OUT/MIDI THRU/TAPE IN/TAPE OUT/MIDI SYNC connectors, for use with a separately-sold interface card/cartridge ("MPU-401 interface kit") inserted into a computer system. For this setup, the following "interface kits" were made:
MIF-APL: For the Apple II
MIF-C64: For the Commodore 64
MIF-FM7: For the Fujitsu FM-7
MIF-IPC: For the IBM PC/IBM XT. It turned out not to work reliably with 286 and faster processors. Early versions of the actual PCB had IF-MIDI/IBM as a silk screen.
MIF-IPC-A: For the IBM AT, works with PC and XT as well.
Xanadu MUSICOM IFM-PC: For the IBM PC / IBM XT / IBM AT. This was a third party MIDI card, incorporating the MIF-IPC(-A) and additional functionality that was coupled with the OEM Roland MPU-401 BOB. It also had a mini audio jack on the PCB.
MIF-PC8: For the NEC PC-88
MIF-PC98: For the NEC PC-98
MIF-X1: For the Sharp X1
MIF-AMG: For the Amiga, from Musicsoft
In 2014 hobbyists built clones of the MIF-IPC-A card for PCs.
Variants
Later, Roland would put most of the electronics originally found in the breakout box onto the interface card itself, thus reducing the size of the breakout box. Products released in this manner:
MPU-401N: an external interface, specifically designed for use with the NEC PC-98 series notebook computers. This breakout-box unit features a special COMPUTER IN port for direct connection to the computer's 110-pin expansion bus. METRONOME OUT connector was added. Released in Japan only.
MPU-IPC: for the IBM PC/IBM XT/IBM AT and compatibles (8 bit ISA). It had a 25-pin female connector for the breakout box, even though only nine pins were used, and only seven were functionally different: both 5V and ground use two pins each.
MPU-IPC-T: for the IBM PC/IBM XT/IBM AT and compatibles (8-bit ISA). The MIDI SYNC connector was removed from this Taiwanese-manufactured model, and the previously hardcoded I/O address and IRQ could be set to different values with jumpers. The break-out box has three DIN connectors for MIDI (1xIN and 2xOUT) plus three 3.5mm mini jack connectors (TAPE IN, TAPE OUT and METRONOME OUT).
MPU-IMC: for the IBM PS/2's Micro Channel architecture bus. In earlier models both I/O address and IRQ were hardcoded to IRQ 2 (causing serious problems with the hard disk as it also uses that IRQ); in later models the IRQ could be set with a jumper. It had a 9-pin female connector for the breakout box. Due to the incompatibility of IRQ 2/9 (and potentially I/O addresses) between the MPU-IMC and IBM PS/2 MCA models, certain games will not work with MPU-401.
S-MPU/AT (Super MPU): for the IBM AT and compatibles (16-bit ISA). It had a Mini-DIN female connector for the breakout box. The MIDI SYNC, TAPE IN, TAPE OUT, METRONOME OUT connectors was removed, but a second MIDI IN connector was added. An application to assign resources (plug and play) must be run to use the card in DOS. This application is not a TSR (it does not take up conventional memory).
S-MPU-IIAT (Super MPU II): for the IBM or compatible Plug and Play PC computers (16 bit ISA). It had a Mini-DIN female connector for the breakout box with two MIDI In connectors and two MIDI Out connectors. An application to assign resources (plug and play) must be run to use the card in DOS. This application is not a TSR (it does not take up precious conventional memory).
S-MPU/FMT: For FM Towns
LAPC-I: for the IBM PC and compatibles. Includes the Roland CM-32L sound source. A breakout box for this card, the MCB-1, was sold separately.
LAPC-N: for the NEC PC-98. Includes the Roland CM-32LN sound source. A breakout box for this card, the MCB-2, was sold separately.
RAP-10: for the IBM AT and compatibles (16 bit ISA). General midi sound source only. MPU-401 UART mode only. A breakout box for this card, the MCB-10, was sold separately.
SCP-55: for the IBM and compatible laptops (PCMCIA). Includes the Roland SC-55 sound source. A breakout box for this card, the MCB-3, was sold separately. MPU-401 UART mode only.
Still later, Roland would get rid of the breakout box completely and put all connectors on the back of the interface card itself. Products released in this manner:
MPU-APL: for the Apple II. Single-card combination of the MIF-APL interface and MPU-401, featuring MIDI IN, OUT, and SYNC connectors.
MPU-401AT: for IBM AT and "100% compatibles". Includes a connector for Wavetable daughterboards.
MPU-PC98: for the NEC PC-98
MPU-PC98II: for the NEC PC-98
S-MPU/PC (Super MPU PC-98): for the NEC PC-98
S-MPU/2N (Super MPU II N): for the NEC PC-98
SCC-1: for the IBM PC and compatibles. Includes the Roland SC-55 sound source.
GPPC-N & GPPC-NA: for the NEC PC-98. Includes the Roland SC-55 sound source.
Clones
By the late 1980s other manufacturers of PCBs developed intelligent MPU-401 clones. Some of these, like Voyetra, were equipped with Roland chips whereas most had reverse-engineered ROMs (Midiman / Music Quest).
Examples:
Midiman MM-401 (8BIT, non Roland chip set, also sold as part of the Midiman PC Desktop Music Kit)
Midi System, Inc. MDR-401, non Roland chip set
Computer Music Supply CMS-401 (8BIT, non Roland chip set)
Music Quest PC MIDI Card / MQX-16s / MQX-32m (8 & 16BIT, non Roland chip set)
Voyetra V-400x / OP-400x (V-4000, V4001, 8BIT, Roland chip set)
MIDI LAND DX-401 (non Roland chipset) & MD-401 (non Roland chipset)
Data Soft DS-401 (non Roland chipset)
In 2015 hobbyists developed a Music Quest PC MIDI Card 8BIT clone. In 2017/2018 hobbyists developed a revision of the Music Quest PC MIDI Card 8BIT clone that includes a wavetable header in analogy of the Roland MPU-401AT.
Modes
The MPU-401 can work in two modes, normal mode and UART mode. "Normal mode" would provide the host system with an 8-track sequencer, MIDI clock output, SYNC 24 signal output, Tape Sync and a metronome; as a result of these features, it is often called "intelligent mode". Compare this to UART mode, which reduces the MPU-401 to simply relaying incoming and outgoing MIDI data bytes.
As computers became more powerful, the features offered in "intelligent mode" became obsolete. Implementing these in the host system's software was more efficient. Specific hardware was no longer required. As a result, the UART mode became the dominant mode of operation. Early UART MPU-401 capable cards were still advertised as MPU-401 compatible.
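To make concrete how little is involved in UART mode, the following is a minimal DOS-style C sketch. It assumes the card's common default resources (data port 0x330, command/status port 0x331) and the widely documented commands 0xFF (reset) and 0x3F (enter UART mode), each acknowledged with 0xFE; the outportb/inportb helpers, the port constants and the busy-wait polling are illustrative assumptions rather than Roland reference code.

#include <dos.h>                      /* outportb/inportb on DOS compilers such as Turbo C */

#define MPU_DATA 0x330                /* assumed default data port */
#define MPU_STAT 0x331                /* status (read) / command (write) port */

static void mpu_wait_ready(void) { while (inportb(MPU_STAT) & 0x40) ; }  /* bit 6 clear: ready to accept a byte */
static void mpu_wait_data(void)  { while (inportb(MPU_STAT) & 0x80) ; }  /* bit 7 clear: a byte is available */

static void mpu_command(unsigned char cmd)
{
    mpu_wait_ready();
    outportb(MPU_STAT, cmd);          /* commands go to the command/status port */
    mpu_wait_data();
    (void)inportb(MPU_DATA);          /* discard the 0xFE acknowledge byte */
}

int main(void)
{
    mpu_command(0xFF);                /* reset the interface */
    mpu_command(0x3F);                /* leave intelligent mode, enter UART mode */

    /* in UART mode the card simply relays raw MIDI bytes written to the data port */
    mpu_wait_ready(); outportb(MPU_DATA, 0x90);   /* note-on, MIDI channel 1 */
    mpu_wait_ready(); outportb(MPU_DATA, 60);     /* middle C */
    mpu_wait_ready(); outportb(MPU_DATA, 0x7F);   /* full velocity */
    return 0;
}

Once switched in this way the card stays in UART mode until reset, which is why later hardware could offer basic "MPU-401 compatibility" by implementing little more than these two ports.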
SoftMPU
In the mid-2010s, a hobbyist software interface, SoftMPU, was written that upgrades UART (non-intelligent) MPU-401 interfaces to an intelligent MPU-401 interface; however, it only works under MS-DOS. It also does not work for all games. In particular, early Sierra games, such as Jones in the Fast Lane, will not work with SoftMPU.
HardMPU
In 2015, a PCB (HardMPU) was developed that incorporates SoftMPU as logic on hardware (so that the PC's CPU does not have to process intelligent MIDI). Currently HardMPU only supports playback and not recording.
Contemporary interfaces
Physical MIDI connections are increasingly replaced with the USB interface, and a USB to MIDI converter in order to drive musical peripherals which do not yet have their own USB ports. Often, peripherals are able to accept MIDI input through USB and convert it for the traditional DIN connectors. While MPU-401 support is no longer included in Windows Vista, a driver is available on Windows Update. As of 2011, the interface was still supported by Linux and Mac OS X.
References
External links
'Card Times' - Sound on Sound magazine, Nov 1996
SoftMPU
Louis Ohland's PS/2 Archiveshere
Computer hardware standards
MIDI
Obsolete technologies
Music sequencers | MPU-401 | [
"Technology",
"Engineering"
] | 2,061 | [
"Computer standards",
"Computer hardware standards",
"Music sequencers",
"Automation"
] |
16,042,253 | https://en.wikipedia.org/wiki/R.%20Graham%20Cooks | Robert Graham Cooks is the Henry Bohn Hass Distinguished Professor of Chemistry in the Aston Laboratories for Mass Spectrometry at Purdue University. He is an ISI Highly Cited Chemist, with over 1,000 publications and an H-index of 150.
Education
Cooks received a bachelor of science and master of science degrees from the University of Natal in South Africa in 1961 and 1963, respectively. He received a Ph.D. from the University of Natal in 1965 and a second Ph.D. from Cambridge University in 1967, where he worked with Peter Sykes. He then did post-doctoral work at Cambridge with Dudley Williams.
Career
Cooks became an Assistant Professor at Kansas State University from 1968 to 1971. In 1971, he took a position at Purdue University. He became a Professor of Chemistry in 1980 and was appointed the Henry Bohn Hass Distinguished Professor in 1990.
Cooks was co-editor of the Annual Review of Analytical Chemistry from 2013-2017.
Select research interests
Research in Cooks' laboratory (the Aston Laboratories) has contributed to a diverse assortment of areas within mass spectrometry, ranging from fundamental research to instrument and method development to applications. Cooks' research interests over the course of his career have included the study of gas-phase ion chemistry, tandem mass spectrometry, angle-resolved mass spectrometry and energy-resolved mass spectrometry (ERMS); dissociation processes, including collision-induced dissociation (CID), surface-induced dissociation (SID), and photodissociation (PD); and desorption processes, including secondary ion mass spectrometry (SIMS), laser desorption ionization (LD) and desorption electrospray ionization (DESI).
His research has ranged through areas from preparative mass spectrometry, ionization techniques and quadrupole ion traps (QITs) and related technologies to as far afield as abiogenesis (also known as "the origin of life") via homochirality.
Awards and fellowships
1984 ACS Analytical Division's Chemical Instrumentation Award
1985 Thomson Medal for International Service to Mass Spectrometry
1990 and 1995 NSF Special Creativity Award
1991 Frank H. Field & Joe Franklin Award, (ACS Award for Mass Spectrometry)
1997 Fisher Award (ACS Award for Analytical Chemistry)
2006 Distinguished Contribution in Mass Spectrometry Award
2008 Robert Boyle Prize for Analytical Science
2012 F.A. Cotton Medal for Excellence in Chemical Research of the American Chemical Society
2013 Dreyfus Prize in the Chemical Sciences
2014 ACS Nobel Laureate Signature Award for Graduate Education in Chemistry, shared with graduate student Livia S. Eberlin
2015 Member, National Academy of Sciences
2017 Aston Medal, British Mass Spectrometry Society
See also
Desorption electrospray ionization
MIKES
Orbitrap
References
External links
Aston Labs
Living people
21st-century American chemists
Mass spectrometrists
Purdue University faculty
Year of birth missing (living people)
Thomson Medal recipients
Annual Reviews (publisher) editors | R. Graham Cooks | [
"Physics",
"Chemistry"
] | 622 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
16,042,518 | https://en.wikipedia.org/wiki/Michael%20T.%20Bowers | Michael T. Bowers (born 1939) is an American mass spectrometrist, a professor in the department of chemistry and biochemistry at the University of California, Santa Barbara faculty.
Career
He studied at Gonzaga University, Spokane, Washington, earning his B.S. in 1962, and then earned a Ph.D. from the University of Illinois (with W.H. Flygare) in 1966.
He worked at the Jet Propulsion Laboratory in California for 2 years before joining UC Santa Barbara in 1968, where he was appointed full professor in 1976.
The Bowers group uses mass spectrometry and ion mobility spectrometry to study gaseous species and determine their structure, reaction dynamics and mechanisms.
Awards
Fellow of the American Chemical Society (ACS)
1987 Elected Fellow of the American Physical Society "for outstanding contributions both theoretically and experimentally on the Mechanism and Dynamics of Ion-Molecule Reactions"
1994 Guggenheim Fellowship
1994 Fellow of the American Association for the Advancement of Science
1996 Frank H. Field and Joe L. Franklin Award of the American Chemical Society
1997 Thomson Medal of the International Mass Spectrometry Foundation
2004 Distinguished Contribution in Mass Spectrometry Award
See also
Gas phase ion chemistry
References
External links
Bowers Group Page
Vita
1939 births
Living people
Gonzaga University alumni
University of Illinois alumni
21st-century American chemists
Mass spectrometrists
University of California, Santa Barbara faculty
Fellows of the American Chemical Society
Fellows of the American Physical Society
Thomson Medal recipients
Fellows of the American Association for the Advancement of Science | Michael T. Bowers | [
"Physics",
"Chemistry"
] | 305 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
16,042,722 | https://en.wikipedia.org/wiki/Fred%20McLafferty | Fred Warren McLafferty (May 11, 1923 − December 26, 2021) was an American chemist known for his work in mass spectrometry. He is best known for the McLafferty rearrangement reaction that was observed with mass spectrometry. With Roland Gohlke, he pioneered the technique of gas chromatography–mass spectrometry. He is also known for electron-capture dissociation, a method of fragmenting gas-phase ions.
Early life and education
Fred McLafferty was born in Evanston, Illinois in 1923, but attended grade school in Omaha, Nebraska, graduating from Omaha North High School in 1940. The urgent requirements of World War II accelerated his undergraduate studies at the University of Nebraska; he obtained his B.S. degree in 1943 and thereafter entered the US armed forces. He served in western Europe during the invasion of Germany and was awarded the Combat Infantryman Badge, a Purple Heart, five Bronze Star Medals and a Presidential Unit Citation.
He returned to the University of Nebraska in late 1945 and completed his M.S. degree in 1947. He went on to work under William Miller at Cornell University where he earned his Ph.D. in 1950. He went on to a postdoctoral researcher position at the University of Iowa with R.L. Shriner.
Dow Chemical
He took a position at Dow Chemical in Midland, Michigan in 1950 and was in charge of mass spectrometry and gas chromatography from 1950 to 1956. In 1953-1956, he started collecting reference mass spectra whenever the instruments were not in use.
In 1956, he became the Director of Dow's Eastern Research Lab in Framingham, Massachusetts. During this time, he developed the first GC/MS instruments and analyzed the company's reference collection of spectra he himself founded. This allowed him to work out techniques for determining the structure of organic molecules by mass spectrometry, most notably in the discovery of what is now known as the McLafferty rearrangement.
Academic career
From 1964 to 1968, he was Professor of Chemistry at Purdue University. In 1968, he returned to his alma mater, Cornell University, to become the Peter J. W. Debye Professor of Chemistry. He was elected to the United States National Academy of Sciences in 1982. While at Cornell, McLafferty assembled one of the first comprehensive databases of mass spectra and pioneered artificial intelligence techniques to interpret GC/MS results. His PBM STIRS program has widespread use to save hours of time-consuming work otherwise required to manually analyze GC/MS results.
Personal life and death
McLafferty died in Ithaca, New York, on December 26, 2021, at the age of 98.
Honors and awards
1971 ACS Award in Chemical Instrumentation
1981 ACS Award in Analytical Chemistry
1984 William H. Nichols Medal
1985 Oesper Award
1985 J. J. Thomson Gold Medal by International Mass Spectrometry Society
1987 Pittsburgh Analytical Chemistry Award
1989 Field and Franklin Award for Mass Spectrometry
1989 University of Naples Gold Medal
1992 Robert Boyle Gold Medal by the Royal Society of Chemistry
1996 Chemical Pioneer Award from the American Institute of Chemists
1997 Bijvoet Medal of the Bijvoet Center for Biomolecular Research.
1999 J. Heyrovsky Medal by the Czech Academy of Sciences
2000 G. Natta Gold Medal by Italian Chemical Society
2001 Torbern Bergman Medal by the Swedish Chemical Society
2003 John B. Fenn Distinguished Contribution in Mass Spectrometry by the American Society for Mass Spectrometry (ASMS)
2004 Lavoisier Medal by the French Chemical Society
2006 Pehr Edman Award by the International Association for Protein Structure
2015 Nakanishi Prize from the American Chemical Society
2019 American Chemical Society designated a National Historic Chemical Landmark in Midland, MI for the demonstration of the first operating GC-MS by Fred McLafferty and Roland Gohlke.
References
Bibliography
External links
A Conversation with Fred W. McLafferty 2006, 90 minute video, for Cornell University.
1923 births
2021 deaths
21st-century American chemists
Mass spectrometrists
Purdue University faculty
Cornell University alumni
Cornell University faculty
Members of the United States National Academy of Sciences
Dow Chemical Company employees
Bijvoet Medal recipients
Thomson Medal recipients
Omaha North High School alumni
People from Evanston, Illinois
United States Army personnel of World War II | Fred McLafferty | [
"Physics",
"Chemistry"
] | 891 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
16,043,047 | https://en.wikipedia.org/wiki/Jesse%20L.%20Beauchamp | Jesse L. Beauchamp (born 1942) is the Charles and Mary Ferkel Professor of Chemistry at the California Institute of Technology.
Early life and education
1964 B.S. California Institute of Technology
1967 Ph.D. Harvard University
Research interests
Development of novel mass spectrometric techniques in biochemistry.
Awards
In 1978 he received the ACS Award in Pure Chemistry from the American Chemical Society and in 1981 was elected to the National Academy of Sciences. In 1999 he received the Peter Debye Award in Physical Chemistry from the American Chemical Society and was again honored in 2003 with the Field and Franklin Award in Mass Spectrometry. In 2007 he received the Distinguished Contribution Award from the American Society for Mass Spectrometry for the original development and chemical applications of ion cyclotron resonance spectroscopy.
Former students
Charles A. Wight – President of Weber State University
Frances Houle (1979) – Director of JCAP North
Terry B. McMahon (1974) – Professor of chemistry at the University of Waterloo
Peter B. Armentrout (1980) – Professor of chemistry at the University of Utah
David Dearden (1989) – Chemistry and Biochemistry department chair at BYU
Elaine Marzluff (1995) – Chemistry department chair at Grinnell College
References
External links
Beauchamp Research Group at Caltech
CCE website
21st-century American chemists
Mass spectrometrists
Living people
California Institute of Technology faculty
1942 births
Members of the United States National Academy of Sciences
Harvard University alumni
California Institute of Technology alumni | Jesse L. Beauchamp | [
"Physics",
"Chemistry"
] | 303 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
16,045,068 | https://en.wikipedia.org/wiki/Heavy%20meromyosin | Heavy meromyosin (HMM) is the larger of the two fragments obtained from the muscle protein myosin II following limited proteolysis by trypsin or chymotrypsin. HMM contains two domains S-1 and S-2, S-1 contains is the globular head that can bind to actin while the S-2 domain projects at and angle from light meromyosin (LMM) connecting the two meromyosin fragments.
HMM is used to determine the polarity of actin filaments by decorating them with HMM then viewing them under the electron microscope.
References
Motor proteins | Heavy meromyosin | [
"Chemistry",
"Biology"
] | 132 | [
"Biotechnology stubs",
"Motor proteins",
"Biochemistry stubs",
"Molecular machines",
"Biochemistry"
] |
16,046,286 | https://en.wikipedia.org/wiki/Microvoid%20coalescence | Microvoid coalescence (MVC) is a high energy microscopic fracture mechanism observed in the majority of metallic alloys and in some engineering plastics.
Fracture process
MVC proceeds in three stages: nucleation, growth, and coalescence of microvoids. The nucleation of microvoids can be caused by particle cracking or interfacial failure between precipitate particles and the matrix. Additionally, microvoids often form at grain boundaries or inclusions within the material. Microvoids grow during plastic flow of the matrix, and microvoids coalesce when adjacent microvoids link together or the material between microvoids experiences necking. Microvoid coalescence leads to fracture. Void growth rates can be predicted assuming continuum plasticity using the Rice-Tracey model:
where is a constant typically equal to 0.283 (but dependent upon the stress triaxiality), is the yield stress, is the mean stress, is the equivalent von Mises plastic strain, and is the particle size; the exponential factor reflects the dependence on the stress triaxiality.
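The omitted growth law is commonly quoted in the following form (a hedged reconstruction; the symbol names are assumed):

\frac{dR}{R} = \alpha \, \exp\!\left( \frac{3 \sigma_m}{2 \sigma_y} \right) d\bar{\varepsilon}_p, \qquad \text{equivalently} \qquad \ln\!\frac{R}{R_0} = \alpha \int_0^{\bar{\varepsilon}_p} \exp\!\left( \frac{3 \sigma_m}{2 \sigma_y} \right) d\bar{\varepsilon},

with \alpha \approx 0.283, \sigma_m the mean (hydrostatic) stress, \sigma_y the yield stress, \bar{\varepsilon}_p the equivalent von Mises plastic strain, and R_0 the initial void radius; the exponential term is what makes void growth so sensitive to stress triaxiality.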
Fracture surface morphologies
MVC can result in three distinct fracture morphologies based on the type of loading at failure. Tensile loading results in equiaxed dimples, which are spherical depressions a few micrometres in diameter that coalesce normal to the loading axis. Shear stresses will result in elongated dimples, which are parabolic depressions that coalesce in planes of maximum shear stress. The depressions point back to the crack origin, and shear-influenced failure will produce depressions that point in opposite directions on opposing fracture surfaces. Combined tension and bending will also produce the elongated dimple morphology, but the directions of the depressions will be in the same direction on both fracture surfaces.
References
Fracture mechanics
Materials degradation | Microvoid coalescence | [
"Materials_science",
"Engineering"
] | 365 | [
"Structural engineering",
"Materials degradation",
"Materials science",
"Fracture mechanics"
] |
2,185,977 | https://en.wikipedia.org/wiki/Nanotomography | Nanotomography, much like its related modalities tomography and microtomography, uses x-rays to create cross-sections from a 3D-object that later can be used to recreate a virtual model without destroying the original model, applying Nondestructive testing. The term nano is used to indicate that the pixel sizes of the cross-sections are in the nanometer range
Nano-CT beamlines have been built at 3rd-generation synchrotron radiation facilities, including the Advanced Photon Source of Argonne National Laboratory, SPring-8, and the ESRF, since the early 2000s. They have been applied to a wide variety of three-dimensional visualization studies, such as those of comet samples returned by the Stardust mission, mechanical degradation in lithium-ion batteries, and neuron deformation in schizophrenic brains.
Although a lot of research is done to create nano-CT scanners, currently there are only a few available commercially. The SkyScan-2011 has a range of about 150 to 250 nanometers per pixel with a resolution of 400 nm and a field of view (FOV) of 200 micrometers. The Xradia nanoXCT has a spatial resolution of better than 50 nm and a FOV of 16 micrometers.
At Ghent University, the UGCT team developed a nano-CT scanner based on commercially available components. The UGCT facility is an open nano-CT facility giving access to scientists from universities, institutes and industry.
References
Medical imaging
Microscopes | Nanotomography | [
"Chemistry",
"Technology",
"Engineering"
] | 309 | [
"Microscopes",
"Measuring instruments",
"Microscopy"
] |
2,186,113 | https://en.wikipedia.org/wiki/Okadaic%20acid | Okadaic acid, C44H68O13, is a toxin produced by several species of dinoflagellates, and is known to accumulate in both marine sponges and shellfish. One of the primary causes of diarrhetic shellfish poisoning, okadaic acid is a potent inhibitor of specific protein phosphatases and is known to have a variety of negative effects on cells. A polyketide, polyether derivative of a C38 fatty acid, okadaic acid and other members of its family have shed light on many biological processes, both with respect to dinoflagellate polyketide synthesis and the role of protein phosphatases in cell growth.
History
As early as 1961, reports of gastrointestinal disorders following the consumption of cooked mussels appeared in both the Netherlands and Los Lagos. Attempts were made to determine the source of the symptoms; however, they failed to elucidate the true culprit, instead implicating a species of microplanktonic dinoflagellates. In the summers of the late 1970s, a series of food poisoning outbreaks in Japan led to the discovery of a new type of shellfish poisoning. Named for the most prominent symptoms, the new Diarrhetic Shellfish Poisoning (DSP) only affected the northern portion of Honshu during 1976; however, by 1977 large cities such as Tokyo and Yokohama were affected. Research into the shellfish consumed in the affected regions showed that a fat-soluble toxin was responsible for the 164 documented cases, and this toxin was traced to mussels and scallops harvested in the Miyagi prefecture. In northeastern Japan, a legend had existed that during the season of paulownia flowers, shellfish can be poisonous. Studies following this outbreak showed that the toxicity of these mussels and scallops appeared and increased during the months of June and July, and all but disappeared between August and October.
Elsewhere in Japan, in 1975 Fujisawa pharmaceutical company observed that the extract of a black sponge, Halichondria okadai, was a potent cytotoxin, and was dubbed Halichondrine-A. In 1981, the structure of one such toxin, okadaic acid, was determined after it was extracted from both the black sponge in Japan, Halichondria okadai, for which it was named, and a sponge in the Florida Keys, Halichondria melanodocia. Okadaic acid sparked research both for its cytotoxic feature and for being the first reported marine ionophore.
One of the toxic culprits of DSP, dinophysistoxin-1 (DTX-1), named for one of the organisms implicated in its production, Dinophysis fortii, was compared to and shown to be very chemically similar to okadaic acid several years later, and okadaic acid itself was implicated in DSP around the same time. Since its initial discovery, reports of DSP have spread throughout the world, and are especially concentrated in Japan, South America and Europe.
Synthesis
Derivatives
Okadaic acid (OA) and its derivatives, the dinophysistoxins (DTX), are members of a group of molecules called polyketides. The complex structure of these molecules include multiple spiroketals, along with fused ether rings.
Biosynthesis
Being polyketides, the okadaic acid family of molecules are synthesized by dinoflagellates via polyketide synthase (PKS). However unlike the majority of polyketides, the dinoflagellate group of polyketides undergo a variety of unusual modifications. Okadaic acid and its derivatives are some of the most well studied of these polyketides, and research on these molecules via isotopic labeling has helped to elucidate some of those modifications.
Okadaic acid is formed from a starter unit of glycolate, found at carbons 37 and 38, and all subsequent carbons in the chain are derived from acetate. Because polyketide synthesis is similar to fatty acid synthesis, during chain extension the molecule may undergo reduction of the ketone, dehydration, and reduction of the olefin. Failure to perform one or more of these three steps, combined with several unusual reactions, is what allows for the formation of the functionality of okadaic acid. Carbon deletion and addition at the alpha and beta positions comprise the other transformations present in the okadaic acid biosynthesis.
Carbon deletion occurs by way of a Favorskii rearrangement and subsequent decarboxylation. Attack of a ketone in the growing chain by enzyme-bound acetates, and subsequent decarboxylation/dehydration results in an olefin replacing the ketone, in both alpha and beta alkylation. After this the olefin can isomerize to more thermodynamically stable positions, or can be activated for cyclizations, in order to produce the natural product.
Laboratory syntheses
To date, several studies have been performed toward the synthesis of okadaic acid and its derivatives. Three total syntheses of okadaic acid have been achieved, along with many more formal syntheses and several total syntheses of the other dinophysistoxins. The first total synthesis of okadaic acid was completed in 1986 by Isobe et al., just 5 years after the molecule's structure was elucidated. The next two were completed in 1997 and 1998 by the Forsyth and Ley groups respectively.
In Isobe's synthesis, the molecule was broken into 3 pieces, along the C14-C15 bonds, and the C27-C28 bonds. This formed fragments A, B, and C, which were all synthesized separately, after which the B and C fragments were combined, and then combined with the A fragment. This synthesis contained 106 steps, with a longest linear sequence of 54 steps. The precursors to all three fragments were all glucose derivatives obtained from the chiral pool. Spiroketals were obtained from precursor ketone diols, and were therefore formed thermally in acid.
Similar to Isobe's synthesis, the Forsyth synthesis sought to reduce the number of steps, and to increase potential for designing analogues late in the synthesis. To do this, Forsyth et al. designed the synthesis to allow for structural changes and installation of important functional groups before large pieces were joined. Their resulting synthesis was 3% yielding, with 26 steps in the longest linear sequence. As above, spiroketalization was performed thermodynamically with introduction of acid.
Ley's synthesis of okadaic acid is most unlike its predecessors, although it still contains similar motifs. Like the others, this synthesis divided okadaic acid into three components along the acyclic segments. However, designed to display new techniques developed in their group, Ley's synthesis included forming the spiroketals using (diphenylphosphineoxide)-tetrahydrofuran and (phenylsulfonyl)-tetrahydropyrans, allowing for more mild conditions. Similar to those above, a portion of the stereochemistry in the molecule was set by starting materials obtained from the chiral pool, in this case mannose.
Biology
Mechanism of action
Okadaic acid (OA) and its relatives are known to strongly inhibit protein phosphatases, specifically serine/threonine phosphatases. Furthermore, of the 4 such phosphatases, okadaic acid and its relatives specifically target protein phosphatase 1 (PP1) and protein phosphatase 2A (PP2A), at the exclusion of the other two, with dissociation constants for the two proteins of 150 nM and 30 pM respectively. Because of this, this class of molecules has been used to study the action of these phosphatases in cells. Once OA binds to the phosphatase protein(s), it results in hyperphosphorylation of specific proteins within the afflicted cell, which in turn reduces control over sodium secretion and solute permeability of the cell. Affinity between okadaic acid and its derivatives and PP2A has been tested, and it was shown that the only derivative with a lower dissociation constant, and therefore higher affinity, was DTX1, which has been shown to be 1.6 times stronger. Furthermore, for the purpose of determining the toxicity of mixtures of different okadaic acid derivatives, inhibitory equivalency factors for the relatives of okadaic acid have been studied. In wild type PP2A, the inhibitory equivalency relative to okadaic acid were 0.9 for DTX-1 and 0.6 for DTX-2.
Toxicology
The main route of exposure to DSP from okadaic acid and its relatives is through the consumption of shellfish. It was initially shown that the toxic agents responsible for DSP tend to be most concentrated in the hepatopancreas, followed by the gills for certain shellfish. The symptoms for diarrhetic shellfish poisoning include intense diarrhea and severe abdominal pains, and rarely nausea and vomiting, and they tend to occur anytime between 30 minutes and at most 12 hours after consuming toxic shellfish. It has been estimated that it takes roughly 40 μg of okadaic acid to trigger diarrhea in adult humans.
Medical uses
Because of its inhibitory effects in phosphatases, okadaic acid has shown promise in the world of medicine for numerous potential uses. During its initial discovery, okadaic acid, specifically the crude source extract, showed potent inhibition of cancer cells, and so initial interest in the family of molecules tended to center around that feature. However, it was shown that the more cytotoxic component of H. okadai was actually a separate family of compounds, the Halichondrines, and as such research into the cytotoxicity of okadaic acid decreased. However, the unique function of okadaic acid upon cells maintained biological interest in the molecule. Okadaic acid has been shown to have neurotoxic, immunotoxic, and embryotoxic effects. Furthermore, in two-stage carcinogenesis of mouse skin, the molecule and its relatives have been shown to have tumor promoting effects. Because of this, the effects of okadaic acid on Alzheimer's, AIDS, diabetes, and other human diseases have been studied.
See also
Canadian Reference Materials
Brevetoxin
Ciguatoxin
Domoic acid
Saxitoxin
Tetrodotoxin
Rubratoxin
References
External links
Carboxylic acids
Laxatives
Phycotoxins
Polyketides
Polyether toxins
Spiro compounds
Oxygen heterocycles
Phosphatase inhibitors | Okadaic acid | [
"Chemistry"
] | 2,270 | [
"Biomolecules by chemical classification",
"Natural products",
"Toxins by chemical classification",
"Polyether toxins",
"Carboxylic acids",
"Functional groups",
"Organic compounds",
"Polyketides",
"Spiro compounds"
] |
2,186,198 | https://en.wikipedia.org/wiki/Parc%20de%20la%20Villette | The Parc de la Villette () is the third-largest park in Paris, in area, located at the northeastern edge of the city in the 19th arrondissement. The park houses one of the largest concentrations of cultural venues in Paris, including the Cité des Sciences et de l'Industrie (City of Science and Industry, Europe's largest science museum), three major concert venues, and the prestigious Conservatoire de Paris.
Parc de la Villette is served by Paris Métro stations Corentin Cariou on Line 7 and Porte de Pantin on Line 5.
History
The park was designed by Bernard Tschumi, a French architect of Swiss origin, who built it from 1984 to 1987 in partnership with Colin Fournier, on the site of the huge Parisian abattoirs (slaughterhouses) and the national wholesale meat market, as part of an urban redevelopment project. The slaughterhouses, built in 1867 on the instructions of Napoléon III, had been cleared away and relocated in 1974. Tschumi won a major design competition in 1982–83 for the park as part of the Grands Projets of François Mitterrand, and sought the opinions of the deconstructionist philosopher Jacques Derrida in the preparation of his design proposal.
Since the creation of the park, museums, concert halls, and theatres have been designed by several noted contemporary architects, including Christian de Portzamparc, Adrien Fainsilber, Philippe Chaix, Jean-Paul Morel and Gérard Chamayou, in addition to Tschumi himself.
Park attractions
The park houses museums, concert halls, live performance stages, and theatres, as well as playgrounds for children, and thirty-five architectural follies. These include:
Cité des Sciences et de l'Industrie (City of Science and Industry), the largest science museum in Europe; also home of Vill'Up, a shopping centre opened in November 2016 with the world's largest indoor pulsed-air free-fall flight simulator, 14 m high, and several cinemas (IMAX, 4DX and dynamic);
La Géode, an IMAX theatre inside a geodesic dome;
Cité de la musique (City of Music), a museum of historical musical instruments with a concert hall, also home of the Conservatoire de Paris;
Philharmonie de Paris, a new symphony hall with 2,400 seats for orchestral works, jazz, and world music designed by Jean Nouvel, opened since January 2015.
Grande halle de la Villette, a historical cast iron & glass abattoir that now holds fairs, festive cultural events, and other programming;
Le Zénith, a concert arena with 6,300 seats for rock and pop music;
L'Argonaute, a 50 m long decommissioned military submarine;
Cabaret Sauvage, a flexible small concert stage with 600 to 1,200 seats, designed by Méziane Azaïche in 1997;
Le Trabendo, a contemporary venue for pop, rock, folk music, and jazz with 700 seats;
Théâtre Paris-Villette, a small actors' theatre and acting workshop with 211 seats;
Le Hall de la Chanson (at Pavillon du Charolais), theatre dedicated to French song with 140 seats
WIP Villette, "Work In Progress–Maison de la Villette," a space dedicated to Hip-Hop culture, social theatre, art work initiatives, and cultural democracy;
Espace Chapiteaux, a permanent space under a tent for contemporary circus, resident and touring companies perform;
Pavillon Paul-Delouvrier, a chic contemporary event space for conferences, workshops, and social events designed by Oscar Tusquets;
Centre équestre de la Villette, equestrian center with numerous year-round events.
Cinéma en plein air, an outdoor movie theatre, site of an annual film festival;
Le TARMAC (former Théâtre de l'Est Parisien), venue for world performance art and dance companies touring from "La Francophonie", has moved to 159 avenue Gambetta in the 20th arrondissement.
Tourism
Since its completion in 1987, the Parc de la Villette has become a popular attraction for Paris residents and international travelers alike. An estimated 10 million people visit the park each year to take part in an array of cultural activities. With its collection of museums, theatres, architectural follies, themed gardens, and open spaces for exploration and activity, the park has created an area that relates to both adults and children.
Designed by Bernard Tschumi, the park is meant to be a place inspired by the post-modernist architectural ideas of deconstructivism. Tschumi's design was in partial response to the philosophies of Jacques Derrida, acting as an architectural experiment in space (through a reflection on Plato's Khôra), form, and how those relate a person's ability to recognize and interact. According to Tschumi, the intention of the park was to create space for activity and interaction, rather than adopt the conventional park mantra of ordered relaxation and self-indulgence. The vast expanse of the park allows for visitors to walk about the site with a sense of freedom and opportunity for exploration and discovery.
The design of the park is organized into a series of points, lines, and surfaces. These categories of spatial relation and formulation are used in Tschumi's design to act as a means of deconstructing the traditional views of how a park is conventionally meant to exist.
Activities
The Parc de la Villette boasts activities that engage all people of all ages and cultural backgrounds. The park is a contemporary melting pot of cultural expression where local artists and musicians produce exhibits and performances. On the periphery of the park lies the Cité des Sciences et de l'Industrie, the largest science museum in Europe. There are a convention center and an I-MAX theatre. The park acts as a connection between these exterior functions. Concerts are scheduled year-round, hosting local and mainstream musicians. Dividing the park is the Canal de l'Ourcq, which has boat tours that transport visitors around the park and to other sites in Paris. Festivals are common in the park along with artist conventions and shows by performers.
The Parc de la Villette hosts an annual open-air film festival. In 2010 the festival's theme was "To Be 20" ("Avoir 20 ans") and featured films about youth and self-discovery around the age of 20. In 2010 films were shown by American filmmakers Woody Allen and Sofia Coppola as well as French and international filmmakers.
Gardens
The Parc de la Villette has a collection of ten themed gardens that attract a large number of the park's visitors. Each garden is created with a different representation of architectural deconstructionism and tries to create space through playfully sculptural and clever means. While some of the gardens are minimalist in design, others are clearly constructed with children in mind.
The "Jardin du Dragon" (The Garden of the Dragon) is home to a large sculptural steel dragon that has an 80-foot slide for children to play on.
The "Jardin de Bambou" (Bamboo Garden) at the Parc de la Villette was designed by Alexandre Chemetoff, winner of the Grand Prix de l'urbanisme (2000).
The "Jardin de la Treille" (Trellis Garden), designed by Gilles Vexlard and Laurence Vacherot. Vines and creepers climb along a roof trellis above 90 small fountains, designed so that only their murmur is heard from between the grape vines.
Seven "Sculptures de visées" (Sculptures Bachelard) by Jean-Max Albert are installed around the garden, and an anamorphic reflection is displayed in a small pool.
The gardens range in function; where some gardens are meant for active engagement, others exist to play off of curiosity and investigation or merely allow for relaxation.
Follies
Probably the most iconic pieces of the park, the follies act as architectural representations of deconstruction. In architecture, a folly (in French, folie) is a building constructed primarily for decoration, but suggesting by its appearance some other purpose, or so extravagant that it transcends the normal range of garden ornaments or other class of building to which it belongs. Architecturally, the follies are meant to act as points of reference that help visitors gain a sense of direction and navigate throughout the space. Twenty-six follies, made of metal and painted bright red, are placed on a grid and offer a distinct organization to the park. Each is identified by a name and a code letter-number.
While the follies are meant to exist in a deconstructive vacuum without historical relation, many have found connections between the steel structures and the previous buildings that were part of the old industrial fabric of the area. Today, the follies remain as cues to organization and direction for park visitors. Some of them house restaurants, information centers, and other functions associated with the park's needs.
Architectural deconstructivism and the park
There have been many criticisms of the innovative design of the park since its original completion. To some, the park has little concern with the human scale of park functions and the vast open space seem to challenge the expectation that visitors may have of an urban park. Bernard Tschumi designed the Parc de la Villette with the intention of creating a space that exists in a vacuum, something without historical precedent. The park strives to strip down the signage and conventional representations that have infiltrated architectural design and allow for the existence of a “non-place.” This non-place, envisioned by Tschumi, is the most appropriate example of space and provides a truly honest relationship between the subject and the object.
Visitors view and react to the plan, landscaping, and sculptural pieces without the ability to cross-reference them with previous works of historical architecture. The design of the park capitalizes on the innate qualities that are illustrated within architectural deconstructivism. By allowing visitors to experience the architecture of the park within this constructed vacuum, the time, recognitions, and activities that take place in that space begin to acquire a more vivid and authentic nature. The park is not acting as a spectacle; it is not an example of traditional park design such as New York City's Central Park. The Parc de la Villette strives to act as merely a frame for other cultural interaction.
The park embodies anti-tourism, not allowing visitors to breeze through the site and pick and choose the sites they want to see. Upon arrival in the park, visitors are thrust into a world that is not defined by conventional architectural relationships. The frame of the park, due to its roots in deconstructivism, tries to change and react to the functions that it holds within.
See also
List of tourist attractions in Paris
World Architecture Survey
References
External links
Parc de la Villette website
Galinsky: Parc de la Villette
Archidose: Parc de la Villette
Review essay on Parc de la Villette
Images and Links Resource collection
Follies Parc de la Villette 3D model of two of the Follies
19th arrondissement of Paris
Villette, Parc de la
Deconstructivism
Landscape architecture
Bernard Tschumi buildings | Parc de la Villette | [
"Engineering"
] | 2,344 | [
"Landscape architecture",
"Architecture"
] |
2,186,993 | https://en.wikipedia.org/wiki/Bioprocessor | A bioprocessor is a miniaturized bioreactor capable of culturing mammalian, insect and microbial cells. Bioprocessors are capable of mimicking performance of large-scale bioreactors, hence making them ideal for laboratory scale experimentation of cell culture processes. Bioprocessors are also used for concentrating bioparticles (such as cells) in bioanalytical systems. Microfluidic processes such as electrophoresis can be implemented by bioprocessors to aid in DNA isolation and purification.
References
Biochemical engineering
Biotechnology | Bioprocessor | [
"Chemistry",
"Engineering",
"Biology"
] | 118 | [
"Biological engineering",
"Chemical engineering",
"Biotechnology stubs",
"Biochemical engineering",
"Biotechnology",
"nan",
"Biochemistry"
] |
2,187,251 | https://en.wikipedia.org/wiki/Decade%20%28log%20scale%29 | One decade (symbol dec) is a unit for measuring ratios on a logarithmic scale, with one decade corresponding to a ratio of 10 between two numbers.
Example: Scientific notation
When a real number like .007 is denoted alternatively by 7.0 × 10^-3 then it is said that the number is represented in scientific notation. More generally, to write a number in the form a × 10^b, where 1 ≤ a < 10 and b is an integer, is to express it in scientific notation, and a is called the significand or the mantissa, and b is its exponent. The numbers so expressible with an exponent equal to b span a single decade, from 10^b up to (but not including) 10^(b+1).
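As a small illustration of this decomposition, the following Python sketch (with an illustrative function name) splits a positive number into its mantissa and exponent.

```python
import math

def to_scientific(x):
    """Decompose x > 0 as a * 10**b with 1 <= a < 10 and integer b."""
    b = math.floor(math.log10(x))
    a = x / 10 ** b
    return a, b

print(to_scientific(0.007))   # roughly (7.0, -3)
print(to_scientific(31.4))    # roughly (3.14, 1)
```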
Frequency measurement
Decades are especially useful when describing frequency response of electronic systems, such as audio amplifiers and filters.
Calculations
The factor-of-ten in a decade can be in either direction: so one decade up from 100 Hz is 1000 Hz, and one decade down is 10 Hz. The factor-of-ten is what is important, not the unit used, so 3.14 rad/s is one decade down from 31.4 rad/s.
To determine the number of decades between two frequencies ($f_1$ and $f_2$), use the logarithm of the ratio of the two values:
$\left|\log_{10}(f_2/f_1)\right|$ decades
or, using natural logarithms:
$\left|\ln(f_2/f_1)/\ln(10)\right|$ decades
How many decades is it from 15 rad/s to 150,000 rad/s?
$\left|\log_{10}(150{,}000/15)\right| = 4$ decades
How many decades is it from 3.2 GHz to 4.7 MHz?
$\left|\log_{10}\left(4.7\times10^{6}/3.2\times10^{9}\right)\right| \approx 2.83$ decades
How many decades is one octave?
One octave is a factor of 2, so $\log_{10}(2) \approx 0.301$ decades per octave (a decade is a just major third plus three octaves: $10/1 = 5/4 \times 2^3$)
To find out what frequency is a certain number of decades from the original frequency, multiply by appropriate powers of 10:
What is 3 decades down from 220 Hz?
$220 \times 10^{-3} = 0.22$ Hz
What is 1.5 decades up from 10 Hz?
$10 \times 10^{1.5} \approx 316.23$ Hz
To find out the size of a step for a certain number of frequencies per decade, raise 10 to the power of the inverse of the number of steps:
What is the step size for 30 steps per decade?
$10^{1/30} \approx 1.079775$ – or each step is 7.9775% larger than the last.
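The worked examples above can be reproduced with a few lines of Python; the function names below are illustrative only.

```python
import math

def decades_between(f1, f2):
    """Number of decades from f1 to f2 (negative means downward)."""
    return math.log10(f2 / f1)

def shift_by_decades(f, n):
    """Frequency n decades away from f (n may be negative or fractional)."""
    return f * 10 ** n

def step_size(steps_per_decade):
    """Multiplicative factor between successive steps."""
    return 10 ** (1 / steps_per_decade)

print(decades_between(15, 150_000))    # 4.0
print(decades_between(3.2e9, 4.7e6))   # about -2.83
print(shift_by_decades(220, -3))       # 0.22
print(shift_by_decades(10, 1.5))       # about 316.23
print(step_size(30))                   # about 1.0798
```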
Graphical representation and analysis
Decades on a logarithmic scale, rather than unit steps (steps of 1) or other linear scale, are commonly used on the horizontal axis when representing the frequency response of electronic circuits in graphical form, such as in Bode plots, since depicting large frequency ranges on a linear scale is often not practical. For example, an audio amplifier will usually have a frequency band ranging from 20 Hz to 20 kHz, and representing the entire band using a decade log scale is very convenient. Typically the graph for such a representation would begin at 1 Hz (10^0) and go up to perhaps 100 kHz (10^5), to comfortably include the full audio band on standard-sized graph paper. Over the same distance on a linear scale with 10 as the major step size, by contrast, the axis would only span 0 to 50.
Electronic frequency responses are often described in terms of "per decade". The example Bode plot shows a slope of −20 dB/decade in the stopband, which means that for every factor-of-ten increase in frequency (going from 10 rad/s to 100 rad/s in the figure), the gain decreases by 20 dB.
See also
Slide rule
One-third octave
Frequency level
Octave
Savart
Order of magnitude
References
Charts
Units of level | Decade (log scale) | [
"Physics",
"Mathematics"
] | 724 | [
"Physical quantities",
"Units of level",
"Quantity",
"Logarithmic scales of measurement",
"Units of measurement"
] |
2,187,847 | https://en.wikipedia.org/wiki/Complex%20Lie%20group | In geometry, a complex Lie group is a Lie group over the complex numbers; i.e., it is a complex-analytic manifold that is also a group in such a way that the group multiplication and inversion are holomorphic. Basic examples are $GL_n(\mathbb{C})$, the general linear groups over the complex numbers. A connected compact complex Lie group is precisely a complex torus (not to be confused with the complex Lie group $\mathbb{C}^*$). Any finite group may be given the structure of a complex Lie group. A complex semisimple Lie group is a linear algebraic group.
The Lie algebra of a complex Lie group is a complex Lie algebra.
Examples
A finite-dimensional vector space over the complex numbers (in particular, complex Lie algebra) is a complex Lie group in an obvious way.
A connected compact complex Lie group A of dimension g is of the form $\mathbb{C}^g/L$, a complex torus, where L is a discrete subgroup of rank 2g. Indeed, its Lie algebra $\mathfrak{a}$ can be shown to be abelian, and the exponential map $\mathfrak{a} \to A$ is then a surjective morphism of complex Lie groups, showing A is of the form described.
is an example of a surjective homomorphism of complex Lie groups that does not come from a morphism of algebraic groups. Since , this is also an example of a representation of a complex Lie group that is not algebraic.
Let X be a compact complex manifold. Then, analogous to the real case, the group of biholomorphic automorphisms of X is a complex Lie group whose Lie algebra is the space of holomorphic vector fields on X.
Let K be a connected compact Lie group. Then there exists a unique connected complex Lie group G such that (i) the Lie algebra of G is the complexification of the Lie algebra of K, and (ii) K is a maximal compact subgroup of G. It is called the complexification of K. For example, $GL_n(\mathbb{C})$ is the complexification of the unitary group $U(n)$. If K is acting on a compact Kähler manifold X, then the action of K extends to that of G.
Linear algebraic group associated to a complex semisimple Lie group
Let G be a complex semisimple Lie group. Then G admits a natural structure of a linear algebraic group as follows: let be the ring of holomorphic functions f on G such that spans a finite-dimensional vector space inside the ring of holomorphic functions on G (here G acts by left translation: ). Then is the linear algebraic group that, when viewed as a complex manifold, is the original G. More concretely, choose a faithful representation of G. Then is Zariski-closed in .
References
Lie groups
Manifolds | Complex Lie group | [
"Mathematics"
] | 510 | [
"Lie groups",
"Mathematical structures",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Algebraic structures",
"Geometry",
"Geometry stubs",
"Manifolds"
] |
2,189,901 | https://en.wikipedia.org/wiki/Microstructure | Microstructure is the very small scale structure of a material, defined as the structure of a prepared surface of material as revealed by an optical microscope above 25× magnification. The microstructure of a material (such as metals, polymers, ceramics or composites) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high/low temperature behaviour or wear resistance. These properties in turn govern the application of these materials in industrial practice.
Microstructure at scales smaller than can be viewed with optical microscopes is often called nanostructure, while the structure in which individual atoms are arranged is known as crystal structure. The nanostructure of biological specimens is referred to as ultrastructure. A microstructure's influence on the mechanical and physical properties of a material is primarily governed by the different defects present or absent in the structure. These defects can take many forms, but the primary ones are pores. While such pores play a very important role in defining the characteristics of a material, so does its composition. In fact, for many materials, different phases can exist at the same time. These phases have different properties and, if managed correctly, can prevent the fracture of the material.
Methods
The concept of microstructure is observable in macrostructural features in commonplace objects. Galvanized steel, such as the casing of a lamp post or road divider, exhibits a non-uniformly colored patchwork of interlocking polygons of different shades of grey or silver. Each polygon is a single crystal of zinc adhering to the surface of the steel beneath. Zinc and lead are two common metals which form large crystals (grains) visible to the naked eye. The atoms in each grain are organized into one of seven 3D stacking arrangements or crystal lattices (cubic, tetragonal, hexagonal, monoclinic, triclinic, rhombohedral and orthorhombic). The direction of alignment of the matrices differs between adjacent crystals, leading to variance in the reflectivity of each presented face of the interlocked grains on the galvanized surface. The average grain size can be controlled by processing conditions and composition, and most alloys consist of much smaller grains not visible to the naked eye. This is to increase the strength of the material (see Hall-Petch strengthening).
Microstructure characterizations
To quantify microstructural features, both morphological and material property must be characterized. Image processing is a robust technique for determination of morphological features such as volume fraction, inclusion morphology, void and crystal orientations. To acquire micrographs, optical as well as electron microscopy are commonly used.
To determine material property, Nanoindentation is a robust technique for determination of properties in micron and submicron level for which conventional testing are not feasible. Conventional mechanical testing such as tensile testing or dynamic mechanical analysis (DMA) can only return macroscopic properties without any indication of microstructural properties. However, nanoindentation can be used for determination of local microstructural properties of homogeneous as well as heterogeneous materials. Microstructures can also be characterized using high-order statistical models through which a set of complicated statistical properties are extracted from the images. Then, these properties can be used to produce various other stochastic models.
Microstructure generation
Microstructure generation is also known as stochastic microstructure reconstruction.
Computer-simulated microstructures are generated to replicate the microstructural features of actual microstructures. Such microstructures are referred to as synthetic microstructures. Synthetic microstructures are used to investigate what microstructural feature is important for a given property. To ensure statistical equivalence between generated and actual microstructures, microstructures are modified after generation to match the statistics of an actual microstructure. Such procedure enables generation of theoretically infinite number of computer simulated microstructures that are statistically the same (have the same statistics) but stochastically different (have different configurations).
Influence of pores and composition
A pore in a microstructure, unless desired, is a disadvantage for the properties. In nearly all materials, a pore will be the starting point for the rupture of the material; it is the initiation point for cracks. Furthermore, a pore is usually quite hard to get rid of. The techniques described later involve high-temperature processing; however, even those processes can sometimes make a pore even bigger. Pores with a large coordination number (surrounded by many particles) tend to grow during the thermal process, because the thermal energy is converted into a driving force for the growth of the surrounding particles, which enlarges the pore, since the high coordination number prevents growth towards the pore.
For many materials, it can be seen from their phase diagram that multiple phases can exist at the same time. Those different phases might exhibit different crystal structure, thus exhibiting different mechanical properties. Furthermore, these different phases also exhibit a different microstructure (grain size, orientation). This can also improve some mechanical properties as crack deflection can occur, thus pushing the ultimate breakdown further as it creates a more tortuous crack path in the coarser microstructure.
Improvement techniques
In some cases, simply changing the way the material is processed can influence the microstructure. An example is the titanium alloy TiAl6V4. Its microstructure and mechanical properties are enhanced using SLM (selective laser melting), a 3D printing technique in which powder particles are melted together with a high-powered laser. Other conventional techniques for improving the microstructure are thermal processes. Those processes rely on the principle that an increase in temperature will induce the reduction or annihilation of pores. Hot isostatic pressing (HIP) is a manufacturing process used to reduce the porosity of metals and increase the density of many ceramic materials. This improves the material's mechanical properties and workability.
The HIP process exposes the desired material to an isostatic gas pressure as well as high temperature in a sealed vessel (high pressure). The gas used during this process is mostly Argon. The gas needs to be chemically inert so that no reaction occurs between it and the sample. The pressure is achieved by simply applying heat to the hermetically sealed vessel. However, some systems also associate gas pumping to the process to achieve the required pressure level. The pressure applied on the materials is equal and comes from all directions (hence the term “isostatic”). When castings are treated with HIP, the simultaneous application of heat and pressure eliminates internal voids and microporosity through a combination of plastic deformation, creep, and diffusion bonding; this process improves fatigue resistance of the component.
See also
References
External links
Materials science
Metallurgy | Microstructure | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,447 | [
"Metallurgy",
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
9,363,637 | https://en.wikipedia.org/wiki/Equivalent%20dumping%20coefficient | An equivalent dumping coefficient is a mathematical coefficient used in the calculation of the energy dissipated when a structure moves. As a civil engineering term, it defines the percentage of a cycle of oscillation that is absorbed (converted to heat by friction) by the structure or sub-structure under analysis. Usually it is assumed that the equivalent dumping coefficient is linear, which is to say invariant with respect to oscillatory amplitude. Modern seismic studies have shown this not to be a satisfactory assumption for larger civic structures, and have developed sophisticated amplitude- and frequency-dependent functions for the equivalent dumping coefficient.
When a building moves, the materials it is made from absorb a fraction of the kinetic energy (this is especially true of concrete) due primarily to friction and to viscous or elastomeric resistance which convert motion or kinetic energy to heat.
References
Energy (physics) | Equivalent dumping coefficient | [
"Physics",
"Mathematics"
] | 170 | [
"Energy (physics)",
"Wikipedia categories named after physical quantities",
"Quantity",
"Physical quantities"
] |
9,364,565 | https://en.wikipedia.org/wiki/Technora | Technora is an aramid that is useful for a variety of applications that require high strength or chemical resistance. It is a brand name of the company Teijin Aramid.
Technora was used on January 25, 2004 to suspend the NASA Mars rover Opportunity from its parachute during descent.
It was also later used by NASA as one of the materials, combined with nylon and Kevlar, making up the parachute that was used to perform a braking manoeuvre during atmospheric entry of the rover Perseverance that landed on Mars on February 18, 2021.
Production
Technora is produced by condensation polymerization of terephthaloyl chloride (TCl) with a mixture of p-phenylenediamine (PPD) and 3,4'-diaminodiphenylether (3,4'-ODA). The polymer is closely related to Teijin Aramids's Twaron or DuPont's Kevlar. Technora is derived from two different diamines, 3,4'-ODA and PPD, whereas Twaron is derived from PPD alone. Because only one amide solvent is used in this very straightforward procedure, spinning can be completed immediately after polymer synthesis.
Physical properties
Technora has a better strength-to-weight ratio than steel. It also has fire-resistant properties, which can be beneficial.
Major industrial uses
Automotive and other industries:
Turbo hoses
high pressure hoses
Timing and V-belts
mechanical rubber goods reinforcement
Linear tension
Optical fiber cables (OFC)
Ram air parachute suspension lines
ropes, wire ropes and cables
Umbilical cables
Electrical mechanical cable (EMC)
Windsurfing sails
Hangglider sails
Drumheads
Personal protective equipment
Poi (performance art)
See also
Vectran
References
Synthetic fibers
Materials
Organic polymers
Brand name materials
Cables | Technora | [
"Physics",
"Chemistry"
] | 376 | [
"Organic polymers",
"Synthetic fibers",
"Synthetic materials",
"Organic compounds",
"Materials",
"Matter"
] |
9,366,097 | https://en.wikipedia.org/wiki/Truncation%20error | In numerical analysis and scientific computing, truncation error is an error caused by approximating a mathematical process.
Examples
Infinite series
A summation series for $e^x$ is given by an infinite series such as
$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots$
In reality, we can only use a finite number of these terms as it would take an infinite amount of computational time to make use of all of them. So let's suppose we use only three terms of the series; then
$e^x \approx 1 + x + \frac{x^2}{2!}$
In this case, the truncation error is
$\frac{x^3}{3!} + \frac{x^4}{4!} + \cdots$
Example A:
Given an infinite geometric series, find the truncation error if only the first three terms of the series are used.
Solution
The sum of an infinite geometrical series
$S = a + ar + ar^2 + \cdots, \qquad |r| < 1,$
is given by
$S = \frac{a}{1-r}$
Using only the first three terms of the series gives the partial sum $S_3 = a + ar + ar^2$. The truncation error hence is
$E = S - S_3 = \frac{ar^3}{1-r}$
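A quick Python check of this kind of truncation error, using an illustrative geometric series with a = 1 and r = 0.75 (numbers chosen here purely for demonstration):

```python
def geometric_truncation_error(a, r, n_terms):
    """Truncation error of summing only the first n_terms of a + a*r + a*r**2 + ..."""
    exact = a / (1 - r)                               # valid for |r| < 1
    partial = sum(a * r ** k for k in range(n_terms))
    return exact - partial

print(geometric_truncation_error(a=1.0, r=0.75, n_terms=3))  # 1.6875
```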
Differentiation
The definition of the exact first derivative of the function $f(x)$ is given by
$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$
However, if we are calculating the derivative numerically, the step size $h$ has to be finite. The error caused by choosing $h$ to be finite is a truncation error in the mathematical process of differentiation.
Example A:
Find the truncation error in calculating the first derivative of a given function at a given point using a finite step size $h$.
Solution:
The exact value of the first derivative is obtained analytically from the function, while the approximate value is given by the forward-difference formula
$f'(x) \approx \frac{f(x+h) - f(x)}{h}$
The truncation error hence is the difference between the exact derivative and this finite-step approximation.
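The same calculation can be carried out numerically. The following Python sketch uses an illustrative function, point and step size of its own, since the specific values are not reproduced above.

```python
def forward_difference(f, x, h):
    """Forward-difference approximation of f'(x) with finite step size h."""
    return (f(x + h) - f(x)) / h

# Illustrative example: f(x) = x**3, whose exact derivative is 3*x**2
f = lambda x: x ** 3
x, h = 2.0, 0.25

exact = 3 * x ** 2                       # 12.0
approx = forward_difference(f, x, h)     # 13.5625
print(approx, exact - approx)            # truncation error -1.5625
```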
Integration
The definition of the exact integral of a function $f(x)$ from $a$ to $b$ is given as follows.
Let $f$ be a function defined on a closed interval $I = [a, b]$ of the real numbers, and let
$P = \{[x_0, x_1], [x_1, x_2], \ldots, [x_{n-1}, x_n]\}$
be a partition of I, where
$a = x_0 < x_1 < x_2 < \cdots < x_n = b$
Then the exact integral is
$\int_a^b f(x)\,\mathrm{d}x = \lim_{n \to \infty} \sum_{i=1}^{n} f(x_i^*)\,\Delta x_i$
where $\Delta x_i = x_i - x_{i-1}$ and $x_i^* \in [x_{i-1}, x_i]$.
This implies that we are finding the area under the curve using infinite rectangles. However, if we are calculating the integral numerically, we can only use a finite number of rectangles. The error caused by choosing a finite number of rectangles as opposed to an infinite number of them is a truncation error in the mathematical process of integration.
Example A.
For a given definite integral, find the truncation error if a two-segment left-hand Riemann sum is used with segments of equal width.
Solution
The exact value is obtained analytically from the antiderivative. Using two rectangles of equal width to approximate the area under the curve gives the approximate value of the integral, and the truncation error is the difference between the exact and approximate values.
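A two-segment left-hand Riemann sum can be checked with a few lines of Python; the integrand and limits below are illustrative choices rather than the ones used in the original example.

```python
def left_riemann(f, a, b, n):
    """Left-hand Riemann sum of f over [a, b] with n segments of equal width."""
    width = (b - a) / n
    return sum(f(a + i * width) for i in range(n)) * width

# Illustrative example: integral of x**2 from 3 to 9, exact value 234
f = lambda x: x ** 2
exact = 9 ** 3 / 3 - 3 ** 3 / 3       # antiderivative x**3 / 3
approx = left_riemann(f, 3, 9, 2)     # 135.0
print(approx, exact - approx)         # truncation error 99.0
```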
Occasionally, by mistake, round-off error (the consequence of using finite-precision floating-point numbers on computers) is also called truncation error, especially if the number is rounded by chopping. That is not the correct use of "truncation error"; however, calling it truncating a number may be acceptable.
Addition
Truncation error can cause $(A + B) + C \neq A + (B + C)$ within a computer when $A = -B$ for an extremely large $B$ and $C = 1$, because $(A + B) + C = 1$ (like it should), while $A + (B + C) = 0$, since $B + C$ rounds back to $B$. Here, $A + (B + C)$ has a truncation error equal to 1. This truncation error occurs because computers do not store the least significant digits of an extremely large number.
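This loss of associativity is easy to reproduce in double-precision floating point; the magnitudes below are illustrative.

```python
a, b, c = -1.0e25, 1.0e25, 1.0

print((a + b) + c)   # 1.0  -- the cancellation happens first, so c survives
print(a + (b + c))   # 0.0  -- c is truncated away when added to the huge b
```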
See also
Quantization error
References
Numerical analysis | Truncation error | [
"Mathematics"
] | 565 | [
"Computational mathematics",
"Mathematical relations",
"Approximations",
"Numerical analysis"
] |
9,367,526 | https://en.wikipedia.org/wiki/EcoProfit | ECOPROFIT (in full Ecological Project for Integrated Environmental Protection, in German Ökoprofit) was developed in 1991 in Graz, Austria by the Environmental Office of the City of Graz and Graz University of Technology.
ECOPROFIT is a cooperative approach between the regional authority and local companies with the goal of reducing cost for waste, raw materials, water, and energy. Reductions in these areas also reduce environmental aspects of businesses. The model addresses production companies as well as hospitals, hotels, service companies and tradespeople.
Important elements of ECOPROFIT are workshops in cleaner production and individual consulting by experienced consultants. After the first year the companies are audited (legal compliance, environmental performance, environmental programme) and receive an official award from the City. A number of companies at the same time go for certification according to ISO 14001.
Additionally, most of the companies join the so-called ECOPROFIT CLUB. In regular workshop meetings they receive an exchange of experience and update their knowledge on environmental law and new organisational and technical development. The companies also receive support by consultants in the identification and implementation of new measures.
The ECOPROFIT approach as a model of cooperation of the community with regional companies is used in 19 countries on 4 continents.
Austria (Graz, Vienna, Vorarlberg, Klagenfurt),
Germany (Munich, Berlin, Hamburg, Dortmund, Aachen, and 60 more cities),
Slovenia (Ljubljana, Maribor),
Italy (Modena),
Hungary (Pécs),
India (Gurgaon),
Colombia (Bucaramanga, Medellín),
Korea (Incheon, Busan),
China (Panzhihua), and others.
More than 5,000 companies worldwide participate in ECOPROFIT projects, most of them in Austria and Germany.
In Graz, a city with 260,000 inhabitants, approximately 2 million euros are saved annually to date. Results include (as of 2002, first-year savings): 100,000 m³ of water, 2 GWh of electricity, 0.5 GWh of process heat, and 700 tons of solid waste. This makes ECOPROFIT an important means to reduce the industrial contribution to global warming. In the German projects, more than 100,000 tons of carbon dioxide are saved annually.
References
External links
ECOPROFIT Platform
CPC Austria GmbH - ECOPROFIT Training and Distribution Center
Stenum GmbH - International ECOPROFIT Trainer / Consulter
City of Graz - Regional ECOPROFIT homepage
Environmental mitigation
Environmental economics | EcoProfit | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 524 | [
"Environmental economics",
"Environmental mitigation",
"Environmental social science",
"Environmental engineering"
] |
9,368,062 | https://en.wikipedia.org/wiki/Magnetic%20tweezers | Magnetic tweezers (MT) are scientific instruments for the manipulation and characterization of biomolecules or polymers. These apparatus exert forces and torques on individual molecules or groups of molecules. They can be used to measure the tensile strength of molecules or the force generated by them.
Most commonly magnetic tweezers are used to study mechanical properties of biological macromolecules like DNA or proteins in single-molecule experiments. Other applications are the rheology of soft matter, and studies of force-regulated processes in living cells. Forces are typically on the order of pico- to nanonewtons (pN to nN). Due to their simple architecture, magnetic tweezers are a popular biophysical tool.
In experiments, the molecule of interest is attached to a magnetic microparticle. The magnetic tweezer is equipped with magnets that are used to manipulate the magnetic particles whose position is measured with the help of video microscopy.
Construction principle and physics of magnetic tweezers
A magnetic tweezers apparatus consists of magnetic micro-particles, which can be manipulated with the help of an external magnetic field. The position of the magnetic particles is then determined by a microscopic objective with a camera.
Magnetic particles
Magnetic particles for the operation in magnetic tweezers come with a wide range of properties and have to be chosen according to the intended application. Two basic types of magnetic particles are described in the following paragraphs; however there are also others like magnetic nanoparticles in ferrofluids, which allow experiments inside a cell.
Superparamagnetic beads
Superparamagnetic beads are commercially available with a number of different characteristics. The most common is the use of spherical particles of a diameter in the micrometer range. They consist of a porous latex matrix in which magnetic nanoparticles have been embedded. Latex is auto-fluorescent and may therefore be advantageous for the imaging of their position. Irregular shaped particles present a larger surface and hence a higher probability to bind to the molecules to be studied. The coating of the microbeads may also contain ligands able to attach the molecules of interest. For example, the coating may contain streptavidin which couples strongly to biotin, which itself may be bound to the molecules of interest.
When exposed to an external magnetic field, these microbeads become magnetized. The induced magnetic moment $\vec{m}$ is proportional to a weak external magnetic field $\vec{B}$:
$\vec{m} = \frac{V\chi}{\mu_0}\,\vec{B}$
where $\mu_0$ is the vacuum permeability. It is also proportional to the volume $V$ of the microspheres, which stems from the fact that the number of magnetic nanoparticles scales with the size of the bead. The magnetic susceptibility $\chi$ is assumed to be scalar in this first estimation and may be calculated by $\chi = \mu_r - 1$, where $\mu_r$ is the relative permeability. In a strong external field, the induced magnetic moment saturates at a material-dependent value $\vec{m}_{sat}$. The force experienced by a microbead can be derived from the potential $U = -\frac{1}{2}\vec{m}\cdot\vec{B}$ of this magnetic moment in an outer magnetic field:
$\vec{F} = \frac{1}{2}\,\nabla\left(\vec{m}\cdot\vec{B}\right)$
The outer magnetic field can be evaluated numerically with the help of finite element analysis or by simply measuring the magnetic field with the help of a Hall effect sensor. Theoretically it would be possible to calculate the force on the beads with these formulae; however the results are not very reliable due to uncertainties of the involved variables, but they allow estimating the order of magnitude and help to better understand the system. More accurate numerical values can be obtained considering the Brownian motion of the beads.
Due to anisotropies in the stochastic distribution of the nanoparticles within the microbead, the magnetic moment is not perfectly aligned with the outer magnetic field, i.e. the magnetic susceptibility tensor cannot be reduced to a scalar. For this reason, the beads are also subjected to a torque $\vec{T}$ which tries to align $\vec{m}$ and $\vec{B}$:
$\vec{T} = \vec{m} \times \vec{B}$
The torques generated by this method are typically much greater than what is necessary to twist the molecules of interest.
Ferromagnetic nanowires
The use of ferromagnetic nanowires for the operation of magnetic tweezers enlarges their experimental application range. The length of these wires typically is in the order of tens of nanometers up to tens of micrometers, which is much larger than their diameter. In comparison with superparamagnetic beads, they allow the application of much larger forces and torques. In addition to that, they present a remnant magnetic moment. This allows the operation in weak magnetic field strengths. It is possible to produce nanowires with surface segments that present different chemical properties, which allows controlling the position where the studied molecules can bind to the wire.
Magnets
To be able to exert torques on the microbeads at least two magnets are necessary, but many other configurations have been realized, ranging from only one magnet that only pulls the magnetic microbeads to a system of six electromagnets that allows fully controlling the 3-dimensional position and rotation via a digital feedback loop. The magnetic field strength decreases roughly exponentially with the distance from the axis linking the two magnets, on a typical scale of about the width of the gap between the magnets. Since this scale is rather large in comparison to the distances the microbead moves in an experiment, the force acting on it may be treated as constant. Therefore, magnetic tweezers are passive force clamps due to the nature of their construction, in contrast to optical tweezers, although they may be used as position clamps, too, when combined with a feedback loop. The field strength may be increased by sharpening the pole face of the magnet which, however, also diminishes the area where the field may be considered as constant. An iron ring connecting the outer poles of the magnets may help to reduce stray fields. Magnetic tweezers can be operated with both permanent magnets and electromagnets. The two techniques have their specific advantages.
Permanent Magnets
Permanent magnets of magnetic tweezers are usually out of rare earth materials, like neodymium and can reach field strengths exceeding 1.3 Tesla. The force on the beads may be controlled by moving the magnets along the vertical axis. Moving them up decreases the field strength at the position of the bead and vice versa. Torques on the magnetic beads may be exerted by turning the magnets around the vertical axis to change the direction of the field. The size of the magnets is in the order of millimeters as well as their spacing.
Electromagnets
The use of electromagnets in magnetic tweezers has the advantage that the field strength and direction can be changed just by adjusting the amplitude and the phase of the current for the magnets. For this reason, the magnets do not need to be moved which allows a faster control of the system and reduces mechanical noise. In order to increase the maximum field strength, a core of a soft paramagnetic material with high saturation and low remanence may be added to the solenoid. In any case, however, the typical field strengths are much lower compared to those of permanent magnets of comparable size. Additionally, using electromagnets requires high currents that produce heat that may necessitate a cooling system.
Bead tracking system
The displacement of the magnetic beads corresponds to the response of the system to the imposed magnetic field and hence needs to be precisely measured: In a typical set-up, the experimental volume is illuminated from the top so that the beads produce diffraction rings in the focal plane of an objective which is placed under the tethering surface. The diffraction pattern is then recorded by a CCD camera. The image can be analyzed in real time by a computer. The detection of the position in the plane of the tethering surface is not complicated since it corresponds to the center of the diffraction rings. The precision can be up to a few nanometers. For the position along the vertical axis, the diffraction pattern needs to be compared to reference images, which show the diffraction pattern of the considered bead at a number of known distances from the focal plane. These calibration images are obtained by keeping a bead fixed while displacing the objective, i.e. the focal plane, with the help of piezoelectric elements by known distances. With the help of interpolation, the resolution can reach a precision of up to 10 nm along this axis. The obtained coordinates may be used as input for a digital feedback loop that controls the magnetic field strength, for example, in order to keep the bead at a certain position.
Non-magnetic beads are usually also added to the sample as a reference to provide a background displacement vector. They have a different diameter as the magnetic beads so that they are optically distinguishable. This is necessary to detect potential drift of the fluid. For example, if the density of magnetic particles is too high, they may drag the surrounding viscous fluid with them. The displacement vector of a magnetic bead can be determined by subtracting its initial position vector and this background displacement vector from its current position.
Force Calibration
The force that is exerted by the magnetic field on the magnetic beads can be determined by considering thermal fluctuations of the bead in the horizontal plane: The problem is rotationally symmetric with respect to the vertical axis; hereafter one arbitrarily picked direction in the symmetry plane is called $x$. The analysis is the same for the direction orthogonal to the x-direction and may be used to increase precision. If the bead leaves its equilibrium position on the $x$-axis by $\delta x$ due to thermal fluctuations, it will be subjected to a restoring force that increases linearly with $\delta x$ in the first-order approximation. Considering only absolute values of the involved vectors, it is geometrically clear that the proportionality constant is the force $F$ exerted by the magnets over the length $l$ of the molecule that keeps the bead anchored to the tethering surface:
$|F_x| = \frac{F}{l}\,\delta x$
The equipartition theorem states that the mean energy that is stored in this "spring" is equal to $\frac{1}{2}k_B T$ per degree of freedom. Since only one direction is considered here, the potential energy of the system reads:
$\langle E_p \rangle = \frac{1}{2}\,\frac{F}{l}\,\langle \delta x^2 \rangle = \frac{1}{2} k_B T$
From this, a first estimate for the force acting on the bead can be deduced:
$F = \frac{k_B T\, l}{\langle \delta x^2 \rangle}$
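A minimal Python sketch of this equipartition-based calibration is shown below; the variable names, temperature and synthetic fluctuation data are illustrative assumptions only.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant in J/K

def equipartition_force(x_positions, tether_length, temperature=298.0):
    """Estimate the stretching force from transverse bead fluctuations.

    x_positions   -- drift-corrected bead positions along x (metres)
    tether_length -- extension of the tether (metres)
    temperature   -- absolute temperature (kelvin)
    """
    variance = np.var(x_positions)                     # <delta x^2>
    return KB * temperature * tether_length / variance

# Synthetic data: 1 um tether, ~20 nm rms transverse fluctuations
rng = np.random.default_rng(0)
x = rng.normal(0.0, 20e-9, size=10_000)
print(equipartition_force(x, tether_length=1.0e-6))    # on the order of 10 pN
```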
For a more accurate calibration, however, an analysis in Fourier space is necessary. The power spectral density of the position of the bead is experimentally available. A theoretical expression for this spectrum is derived in the following, which can then be fitted to the experimental curve in order to obtain the force exerted by the magnets on the bead as a fitting parameter. By definition this spectrum is the squared modulus of the Fourier transform of the position $\hat{x}(f)$ over the spectral bandwidth $\Delta f$:
$P_x(f) = \frac{|\hat{x}(f)|^2}{\Delta f}$
$\hat{x}(f)$ can be obtained considering the equation of motion for a bead of mass $m$:
$m\,\ddot{x} + 6\pi\eta r\,\dot{x} + \frac{F}{l}\,x = F_{stoch}(t)$
The term $6\pi\eta r\,\dot{x}$ corresponds to the Stokes friction force for a spherical particle of radius $r$ in a medium of viscosity $\eta$, and $\frac{F}{l}\,x$ is the restoring force, which is opposed to the stochastic force $F_{stoch}$ due to the Brownian motion. Here, one may neglect the inertial term $m\,\ddot{x}$, because the system is in a regime of very low Reynolds number.
The equation of motion can be Fourier transformed inserting the driving force and the position in Fourier space:
$\hat{F}_{stoch}(f) = \frac{F}{l}\,\hat{x}(f) + 6\pi\eta r\; i\,2\pi f\,\hat{x}(f)$
This leads to:
$\hat{x}(f) = \frac{\hat{F}_{stoch}(f)}{\frac{F}{l} + i\,2\pi f\cdot 6\pi\eta r}$
The power spectral density of the stochastic force can be derived by using the equipartition theorem and the fact that Brownian collisions are completely uncorrelated:
This corresponds to the Fluctuation-dissipation theorem. With that expression, it is possible to give a theoretical expression for the power spectrum:
The only unknown in this expression, , can be determined by fitting this expression to the experimental power spectrum. For more accurate results, one may subtract the effect due to finite camera integration time from the experimental spectrum before doing the fit.
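A rough sketch of such a spectral fit in Python is given below, assuming the bead radius, medium viscosity and tether length are known; the function and variable names are illustrative only. Since the force is extracted from the corner frequency of the Lorentzian, the overall normalization convention of the measured spectrum drops out of the result.

    import numpy as np
    from scipy import optimize, signal

    def lorentzian(f, amplitude, f_c):
        # Power spectrum of an overdamped tethered bead: P(f) = amplitude / (f_c**2 + f**2)
        return amplitude / (f_c ** 2 + f ** 2)

    def force_from_spectrum(x, sample_rate, bead_radius, viscosity, tether_length):
        """Fit a Lorentzian to the measured power spectrum and return the force in newtons."""
        f, pxx = signal.welch(x, fs=sample_rate, nperseg=4096)
        mask = f > 0                                         # discard the DC bin
        (amplitude, f_c), _ = optimize.curve_fit(
            lorentzian, f[mask], pxx[mask], p0=(pxx[mask].max(), 10.0))
        gamma = 6.0 * np.pi * viscosity * bead_radius        # Stokes drag coefficient
        stiffness = 2.0 * np.pi * gamma * abs(f_c)           # k = F / l = 2*pi*gamma*f_c
        return stiffness * tether_length                     # F = k * l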
Another force calibration method is to use the viscous drag of the microbeads: To this end, the microbeads are pulled through the viscous medium while recording their position. Since the Reynolds number for the system is very low, it is possible to apply Stokes' law to calculate the friction force, which is in equilibrium with the force exerted by the magnets:

$F = 6\pi\eta r\,v$.

The velocity $v$ can be determined from the recorded position values. The force obtained via this formula can then be related to a given configuration of the magnets, which may serve as a calibration.
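For this drag-based calibration the computation itself is a one-line application of Stokes' law; the sketch below uses placeholder values for the viscosity, bead radius and measured velocity.

    import numpy as np

    def stokes_force(viscosity, bead_radius, velocity):
        """Stokes drag F = 6*pi*eta*r*v, valid in the low-Reynolds-number regime."""
        return 6.0 * np.pi * viscosity * bead_radius * velocity

    # Example: a 1.4 micrometre bead dragged at 5 micrometres per second through water
    print(stokes_force(1e-3, 0.7e-6, 5e-6))    # about 6.6e-14 N, i.e. roughly 0.07 pN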
Typical experimental set-up
This section gives an example of an experiment carried out by Strick, Allemand and Croquette with the help of magnetic tweezers. A double-stranded DNA molecule is fixed with multiple binding sites on one end to a glass surface and on the other to a magnetic micro bead, which can be manipulated in a magnetic tweezers apparatus. By turning the magnets, torsional stress can be applied to the DNA molecule. Rotations in the sense of the DNA helix are counted positively and vice versa. While twisting, the magnetic tweezers also allow stretching of the DNA molecule. In this way, torsion–extension curves may be recorded at different stretching forces. For low forces (less than about 0.5 pN), the DNA forms supercoils, so-called plectonemes, which decrease the extension of the DNA molecule quite symmetrically for positive and negative twists. Increasing the pulling force already increases the extension at zero imposed torsion. Positive twists lead again to plectoneme formation that reduces the extension. Negative twist, however, does not change the extension of the DNA molecule much. This can be interpreted as the separation of the two strands, which corresponds to the denaturation of the molecule. In the high-force regime, the extension is nearly independent of the applied torsional stress. The interpretation is the appearance of local regions of highly overwound DNA. An important parameter of this experiment is also the ionic strength of the solution, which affects the critical values of the pulling force that separate the three force regimes.
History and development
Applying magnetic theory to the study of biology is a biophysical technique that started to appear in Germany in the early 1920s. Possibly the first demonstration was published by Alfred Heilbronn in 1922; his work looked at the viscosity of protoplasts. The following year, Freundlich and Seifriz explored rheology in echinoderm eggs. Both studies involved inserting magnetic particles into cells and observing the resulting movement in a magnetic field gradient.
In 1949 at Cambridge University, Francis Crick and Arthur Hughes demonstrated a novel use of the technique, calling it "The Magnetic Particle Method." The idea, which originally came from Dr. Honor Fell, was that tiny magnetic beads, phagocytosed by whole cells grown in culture, could be manipulated by an external magnetic field. The tissue culture was allowed to grow in the presence of the magnetic material, and cells that contained a magnetic particle could be seen with a high power microscope. As the magnetic particle was moved through the cell by a magnetic field, measurements about the physical properties of the cytoplasm were made. Although some of their methods and measurements were self-admittedly crude, their work demonstrated the usefulness of magnetic field particle manipulation and paved the way for further developments of this technique. The magnetic particle phagocytosis method continued to be used for many years to research cytoplasm rheology and other physical properties in whole cells.
An innovation in the 1990s led to an expansion of the technique's usefulness in a way that was similar to the then-emerging optical tweezers method. Chemically linking an individual DNA molecule between a magnetic bead and a glass slide allowed researchers to manipulate a single DNA molecule with an external magnetic field. Upon application of torsional forces to the molecule, deviations from free-form movement could be measured against theoretical standard force curves or Brownian motion analysis. This provided insight into structural and mechanical properties of DNA, such as elasticity.
Magnetic tweezers as an experimental technique has become exceptionally diverse in use and application. More recently, even newer methods have been introduced or proposed. Since 2002, the potential for experiments involving many tethering molecules and parallel magnetic beads has been explored, shedding light on interaction mechanics, especially in the case of DNA-binding proteins. A technique was published in 2005 that involved coating a magnetic bead with a molecular receptor and the glass slide with its ligand. This allows for a unique look at receptor-ligand dissociation force. In 2007, a new method for magnetically manipulating whole cells was developed by Kollmannsberger and Fabry. The technique involves attaching beads to the extracellular matrix and manipulating the cell from the outside of the membrane to look at structural elasticity. This method continues to be used as a means of studying rheology, as well as cellular structural proteins. A study that appeared in 2013 used magnetic tweezers to mechanically measure the unwinding and rewinding of a single neuronal SNARE complex by tethering the entire complex between a magnetic bead and the slide, and then using the applied magnetic field force to pull the complex apart.
Biological applications
Magnetic tweezer rheology
Magnetic tweezers can be used to measure mechanical properties such as rheology, the study of matter flow and elasticity, in whole cells. The phagocytosis method previously described is useful for capturing a magnetic bead inside a cell. Measuring the movement of the beads inside the cell in response to manipulation from the external magnetic field yields information on the physical environment inside the cell and internal media rheology: viscosity of the cytoplasm, rigidity of internal structure, and ease of particle flow.
A whole cell may also be magnetically manipulated by attaching a magnetic bead to the extracellular matrix via fibronectin-coated magnetic beads. Fibronectin is a protein that will bind to extracellular membrane proteins. This technique allows for measurements of cell stiffness and provides insights into the functioning of structural proteins. The schematic shown at right depicts the experimental setup devised by Bonakdar and Schilling, et al. (2015) for studying the structural protein plectin in mouse cells. Stiffness was measured as proportional to bead position in response to external magnetic manipulation.
Single-molecule experiments
The use of magnetic tweezers as a single-molecule method has been decidedly the most common in recent years. Through the single-molecule method, magnetic tweezers provide a close look into the physical and mechanical properties of biological macromolecules. Similar to other single-molecule methods, such as optical tweezers, this method provides a way to isolate and manipulate an individual molecule free from the influences of surrounding molecules. Here, the magnetic bead is attached to a tethering surface by the molecule of interest. DNA or RNA may be tethered in either single-stranded or double-stranded form, or entire structural motifs can be tethered, such as DNA Holliday junctions, DNA hairpins, or entire nucleosomes and chromatin. By acting upon the magnetic bead with the magnetic field, different types of torsional force can be applied to study intra-DNA interactions, as well as interactions with topoisomerases or histones in chromosomes.
Single-complex studies
Magnetic tweezers go beyond the capabilities of other single-molecule methods, however, in that interactions between and within complexes can also be observed. This has allowed recent advances in understanding more about DNA-binding proteins, receptor-ligand interactions, and restriction enzyme cleavage. A more recent application of magnetic tweezers is seen in single-complex studies. With the help of DNA as the tethering agent, an entire molecular complex may be attached between the bead and the tethering surface. In exactly the same way as with pulling a DNA hairpin apart by applying a force to the magnetic bead, an entire complex can be pulled apart and force required for the dissociation can be measured. This is also similar to the method of pulling apart receptor-ligand interactions with magnetic tweezers to measure dissociation force.
Comparison to other techniques
This section compares the features of magnetic tweezers with those of the most important other single-molecule experimental methods: optical tweezers and atomic force microscopy. The magnetic interaction is highly specific to the superparamagnetic microbeads used. The magnetic field has practically no effect on the rest of the sample. Optical tweezers have the problem that the laser beam may also interact with other particles of the biological sample due to contrasts in the refractive index. In addition to that, the laser may cause photodamage and sample heating. In the case of atomic force microscopy, it may also be hard to discriminate the interaction of the tip with the studied molecule from other nonspecific interactions.
Thanks to the low trap stiffness, the range of forces accessible with magnetic tweezers is lower in comparison with the two other techniques. The possibility to exert torque with magnetic tweezers is not unique: optical tweezers may also offer this feature when operated with birefringent microbeads in combination with a circularly polarized laser beam.
Another advantage of magnetic tweezers is that it is easy to carry out in parallel many single molecule measurements.
An important drawback of magnetic tweezers is the low temporal and spatial resolution due to the data acquisition via video-microscopy. However, with the addition of a high-speed camera, the temporal and spatial resolution has been demonstrated to reach the Angstrom-level.
References
Further reading
Biophysics
Measuring instruments
Particle traps | Magnetic tweezers | [
"Physics",
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 4,427 | [
"Molecular physics",
"Applied and interdisciplinary physics",
"Measuring instruments",
"Particle traps",
"Biophysics"
] |
9,373,204 | https://en.wikipedia.org/wiki/General%20set%20theory | General set theory (GST) is George Boolos's (1998) name for a fragment of the axiomatic set theory Z. GST is sufficient for all mathematics not requiring infinite sets, and is the weakest known set theory whose theorems include the Peano axioms.
Ontology
The ontology of GST is identical to that of ZFC, and hence is thoroughly canonical. GST features a single primitive ontological notion, that of set, and a single ontological assumption, namely that all individuals in the universe of discourse (hence all mathematical objects) are sets. There is a single primitive binary relation, set membership; that set a is a member of set b is written a ∈ b (usually read "a is an element of b").
Axioms
The symbolic axioms below are from Boolos (1998: 196), and govern how sets behave and interact.
As with Z, the background logic for GST is first order logic with identity. Indeed, GST is the fragment of Z obtained by omitting the axioms Union, Power Set, Elementary Sets (essentially Pairing) and Infinity and then taking a theorem of Z, Adjunction, as an axiom.
The natural language versions of the axioms are intended to aid the intuition.
1) Axiom of Extensionality: The sets x and y are the same set if they have the same members.

∀x∀y[∀z(z ∈ x ↔ z ∈ y) → x = y]
The converse of this axiom follows from the substitution property of equality.
2) Axiom Schema of Specification (or Separation or Restricted Comprehension): If z is a set and is any property which may be satisfied by all, some, or no elements of z, then there exists a subset y of z containing just those elements x in z which satisfy the property . The restriction to z is necessary to avoid Russell's paradox and its variants. More formally, let be any formula in the language of GST in which x may occur freely and y does not. Then all instances of the following schema are axioms:

∃y∀x[x ∈ y ↔ (x ∈ z ∧ φ(x))]
3) Axiom of Adjunction: If x and y are sets, then there exists a set w, the adjunction of x and y, whose members are just y and the members of x.

∀x∀y∃w∀z[z ∈ w ↔ (z ∈ x ∨ z = y)]
Adjunction refers to an elementary operation on two sets, and has no bearing on the use of that term elsewhere in mathematics, including in category theory.
ST is GST with the axiom schema of specification replaced by the axiom of empty set:

∃x∀y ¬(y ∈ x)
Discussion
Metamathematics
Note that Specification is an axiom schema. The theory given by these axioms is not finitely axiomatizable. Montague (1961) showed that ZFC is not finitely axiomatizable, and his argument carries over to GST. Hence any axiomatization of GST must include at least one axiom schema.
With its simple axioms, GST is also immune to the three great antinomies of naïve set theory: Russell's, Burali-Forti's, and Cantor's.
GST is interpretable in relation algebra because no part of any GST axiom lies in the scope of more than three quantifiers. This is the necessary and sufficient condition given in Tarski and Givant (1987).
Peano arithmetic
Setting φ(x) in Separation to x≠x, and assuming that the domain is nonempty, assures the existence of the empty set ∅. Adjunction implies that if x is a set, then so is its successor x ∪ {x}. Given Adjunction, the usual construction of the successor ordinals from the empty set can proceed, one in which the natural numbers are defined as ∅, ∅ ∪ {∅}, (∅ ∪ {∅}) ∪ {∅ ∪ {∅}}, …, so that 0 = ∅ and n + 1 = n ∪ {n}. See Peano's axioms.
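The construction can be mimicked with hereditarily finite sets in code. The following Python sketch, which is only an illustration and not part of Boolos's presentation, models sets as frozensets, implements Adjunction, and builds the first few von Neumann natural numbers.

    EMPTY = frozenset()

    def adjoin(x, y):
        """Adjunction: the set whose members are y together with the members of x."""
        return x | frozenset([y])

    def successor(x):
        """Successor S(x) = x U {x}, obtained by adjoining x to itself."""
        return adjoin(x, x)

    # 0 = {}, 1 = {0}, 2 = {0, 1}, 3 = {0, 1, 2}, ...
    numbers = [EMPTY]
    for _ in range(3):
        numbers.append(successor(numbers[-1]))

    print(len(numbers[3]))   # prints 3: the von Neumann numeral n has exactly n members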
GST is mutually interpretable with Peano arithmetic (thus it has the same proof-theoretic strength as PA).
The most remarkable fact about ST (and hence GST), is that these tiny fragments of set theory give rise to such rich metamathematics. While ST is a small fragment of the well-known canonical set theories ZFC and NBG, ST interprets Robinson arithmetic (Q), so that ST inherits the nontrivial metamathematics of Q. For example, ST is essentially undecidable because Q is, and every consistent theory whose theorems include the ST axioms is also essentially undecidable. This includes GST and every axiomatic set theory worth thinking about, assuming these are consistent. In fact, the undecidability of ST implies the undecidability of first-order logic with a single binary predicate letter.
Q is also incomplete in the sense of Gödel's incompleteness theorem. Any axiomatizable theory, such as ST and GST, whose theorems include the Q axioms is likewise incomplete. Moreover, the consistency of GST cannot be proved within GST itself, unless GST is in fact inconsistent.
Infinite sets
Given any model M of ZFC, the collection of hereditarily finite sets in M will satisfy the GST axioms. Therefore, GST cannot prove the existence of even a countable infinite set, that is, of a set whose cardinality is ℵ0. Even if GST did afford a countably infinite set, GST could not prove the existence of a set whose cardinality is 2^ℵ0, because GST lacks the axiom of power set. Hence GST cannot ground analysis and geometry, and is too weak to serve as a foundation for mathematics.
History
Boolos was interested in GST only as a fragment of Z that is just powerful enough to interpret Peano arithmetic. He never lingered over GST, only mentioning it briefly in several papers discussing the systems of Frege's Grundlagen and Grundgesetze, and how they could be modified to eliminate Russell's paradox. The system Aξ'[δ0] in Tarski and Givant (1987: 223) is essentially GST with an axiom schema of induction replacing Specification, and with the existence of an empty set explicitly assumed.
GST is called STZ in Burgess (2005), p. 223. Burgess's theory ST is GST with Empty Set replacing the axiom schema of specification. That the letters "ST" also appear in "GST" is a coincidence.
Footnotes
References
George Boolos (1999) Logic, Logic, and Logic. Harvard Univ. Press.
Burgess, John, 2005. Fixing Frege. Princeton Univ. Press.
Collins, George E., and Daniel, J. D. (1970). "On the interpretability of arithmetic in set theory". Notre Dame Journal of Formal Logic, 11 (4): 477–483.
Richard Montague (1961) "Semantical closure and non-finite axiomatizability" in Infinistic Methods. Warsaw: 45-69.
Alfred Tarski, Andrzej Mostowski, and Raphael Robinson (1953) Undecidable Theories. North Holland.
Tarski, A., and Givant, Steven (1987) A Formalization of Set Theory without Variables. Providence RI: AMS Colloquium Publications, v. 41.
External links
Stanford Encyclopedia of Philosophy: Set Theory—by Thomas Jech.
Systems of set theory
Z notation | General set theory | [
"Mathematics"
] | 1,497 | [
"Z notation"
] |
9,374,505 | https://en.wikipedia.org/wiki/Schild%20equation | In pharmacology, Schild regression analysis, based upon the Schild equation, both named for Heinz Otto Schild, are tools for studying the effects of agonists and antagonists on the response caused by the receptor or on ligand-receptor binding.
Concept
Dose-response curves can be constructed to describe response or ligand-receptor complex formation as a function of the ligand concentration. Antagonists make it harder to form these complexes by inhibiting interactions of the ligand with its receptor. This is seen as a change in the dose response curve: typically a rightward shift or a lowered maximum. A reversible competitive antagonist should cause a rightward shift in the dose response curve, such that the new curve is parallel to the old one and the maximum is unchanged. This is because reversible competitive antagonists are surmountable antagonists. The magnitude of the rightward shift can be quantified with the dose ratio, r. The dose ratio r is the dose of agonist required for half maximal response with the antagonist present divided by the dose of agonist required for half maximal response without antagonist ("control"). In other words, it is the ratio of the EC50s of the inhibited and un-inhibited curves. Thus, r reflects both the strength of an antagonist and the concentration of the antagonist that was applied. An equation derived from the Gaddum equation can be used to relate r to [B], as follows:

r = 1 + [B]/KB

where
r is the dose ratio
[B] is the concentration of the antagonist
KB is the equilibrium constant of the binding of the antagonist to the receptor
A Schild plot is a double logarithmic plot, typically with log(r − 1) as the ordinate and log[B] as the abscissa. This is done by taking the base-10 logarithm of both sides of the previous equation after subtracting 1:

log(r − 1) = log[B] − log KB

This equation is linear with respect to log[B], allowing for easy construction of graphs without computations. This was particularly valuable before the use of computers in pharmacology became widespread. The y-intercept of the equation represents the negative logarithm of KB and can be used to quantify the strength of the antagonist.
These experiments must be carried out on a very wide range (therefore the logarithmic scale) as the mechanisms differ over a large scale, such as at high concentration of drug.
The fitting of the Schild plot to observed data points can be done with regression analysis.
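As an illustration of such a regression, the Python sketch below fits a straight line to log(r − 1) versus log[B] and reads the antagonist's KB off the x-intercept; the EC50 values and antagonist concentrations are invented placeholder data, not measurements.

    import numpy as np

    # Hypothetical EC50 of the agonist without antagonist and at three antagonist
    # concentrations [B] (all concentrations in mol/L)
    ec50_control = 1.0e-8
    antagonist = np.array([1.0e-8, 1.0e-7, 1.0e-6])
    ec50_shifted = np.array([2.0e-8, 1.1e-7, 1.0e-6])

    r = ec50_shifted / ec50_control            # dose ratios
    x = np.log10(antagonist)
    y = np.log10(r - 1)

    slope, intercept = np.polyfit(x, y, 1)     # linear Schild regression
    x_intercept = -intercept / slope           # equals log10(KB) when the slope is 1
    pA2 = -x_intercept
    K_B = 10 ** x_intercept

    print(slope, pA2, K_B)   # a slope near 1 is consistent with simple competitive antagonism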
Schild regression for ligand binding
Although most experiments use cellular response as a measure of the effect, the effect is, in essence, a result of the binding kinetics; so, in order to illustrate the mechanism, ligand binding is used. A ligand A will bind to a receptor R according to an equilibrium constant Kd:

A + R ⇌ AR,  with Kd = k−1/k1 = [A][R]/[AR]
Although the equilibrium constant is more meaningful, texts often mention its inverse, the affinity constant (Kaff = k1/k−1): A better binding means an increase of binding affinity.
The equation for simple ligand binding to a single homogeneous receptor is

[AR] = [R]total[A] / ([A] + Kd)

This is the Hill-Langmuir equation, which is practically the Hill equation described for the agonist binding. In chemistry, this relationship is called the Langmuir equation, which describes the adsorption of molecules onto sites of a surface (see adsorption).

[R]total is the total number of binding sites, and when the equation is plotted it is the horizontal asymptote to which the plot tends; more binding sites will be occupied as the ligand concentration increases, but there will never be 100% occupancy. The binding affinity Kd is the concentration needed to occupy 50% of the sites; the lower this value is, the easier it is for the ligand to occupy the binding site.
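A minimal numerical illustration of this saturation behaviour, with an arbitrary Kd and number of sites chosen purely for the example, is given below.

    def bound(ligand, k_d, total_sites=1.0):
        """Hill-Langmuir binding: occupied sites as a function of the free ligand concentration."""
        return total_sites * ligand / (ligand + k_d)

    k_d = 1.0e-9                                   # placeholder dissociation constant in mol/L
    for conc in (0.1 * k_d, k_d, 10 * k_d, 100 * k_d):
        print(conc, bound(conc, k_d))              # about 9%, 50%, 91% and 99% occupancy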
The binding of the ligand to the receptor at equilibrium follows the same kinetics as an enzyme at steady-state (Michaelis–Menten equation) without the conversion of the bound substrate to product.
Agonists and antagonists can have various effects on ligand binding. They can change the maximum number of binding sites, the affinity of the ligand to the receptor, both effects together or even more bizarre effects when the system being studied is more intact, such as in tissue samples. (Tissue absorption, desensitization, and other non equilibrium steady-state can be a problem.)
A surmountable drug changes the binding affinity:
competitive ligand: the apparent dissociation constant increases to Kd(1 + [B]/KB)
cooperative allosteric ligand:
A nonsurmountable drug changes the maximum binding:
noncompetitive binding:
irreversible binding
The Schild regression can also reveal whether there is more than one type of receptor, and it can show whether the experiment was done incorrectly, for example if the system has not reached equilibrium.
Radioligand binding assays
The first radio-receptor assay (RRA) was done in 1970 by Lefkowitz et al., using a radiolabeled hormone to determine the binding affinity for its receptor.
A radio-receptor assay requires the separation of the bound from the free ligand. This is done by filtration, centrifugation or dialysis.
A method that does not require separation is the scintillation proximity assay, which relies on the fact that β-rays from 3H travel extremely short distances. The receptors are bound to beads coated with a polyhydroxy scintillator. Only the bound ligands are detected.
Today, the fluorescence method is preferred to radioactive materials due to a much lower cost, lower hazard, and the possibility of multiplexing the reactions in a high-throughput manner. One problem is that fluorescently labeled ligands have to bear a bulky fluorophore that may hinder the ligand binding. Therefore, the fluorophore used, the length of the linker, and its position must be carefully selected.
An example is by using FRET, where the ligand's fluorophore transfers its energy to the fluorophore of an antibody raised against the receptor.
Other detection methods such as surface plasmon resonance do not even require fluorophores.
See also
Dose-response relationship
References
Further reading
External links
curvefit.com - Dose-response curves in the presence of antagonists, for a clear explanation.
Pharmacology articles needing expert attention
Pharmacodynamics
Biochemistry methods | Schild equation | [
"Chemistry",
"Biology"
] | 1,273 | [
"Biochemistry methods",
"Pharmacology",
"Pharmacodynamics",
"Biochemistry"
] |
14,515,477 | https://en.wikipedia.org/wiki/GPR161 | G-protein coupled receptor 161 is a protein that in humans is encoded by the GPR161 gene.
References
Further reading
G protein-coupled receptors | GPR161 | [
"Chemistry"
] | 32 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,517,273 | https://en.wikipedia.org/wiki/GPR56 | G protein-coupled receptor 56 also known as TM7XN1 is a protein encoded by the ADGRG1 gene. GPR56 is a member of the adhesion GPCR family.
Adhesion GPCRs are characterized by an extended extracellular region often possessing N-terminal protein modules that is linked to a TM7 region via a domain known as the GPCR-Autoproteolysis INducing (GAIN) domain.
GPR56 is expressed in liver, muscle, tendon, neural, and cytotoxic lymphoid cells in human as well as in hematopoietic precursor, muscle, and developing neural cells in the mouse.
GPR56 has been shown to have numerous role in cell guidance/adhesion as exemplified by its roles in tumour inhibition and neuron development. More recently it has been shown to be a marker for cytotoxic T cells and a subgroup of Natural killer cells.
Ligands
GPR56 binds transglutaminase 2 to suppress tumor metastasis and binds collagen III to regulate cortical development and lamination.
Signaling
GPR56 couples to Gαq/11 protein upon association with the tetraspanins CD9 and CD81. Forced GPR56 expression activates NF-kB, PAI-1, and TCF transcriptional response elements. The splicing of GPR56 induces tumorigenic responses as a result of activating the transcription of genes, such as COX2, iNOS, and VEGF. GPR56 couples to the Gα12/13 protein and activates RhoA and mammalian target of rapamycin (mTOR) pathway upon ligand binding. Lack of the N-terminal fragment (NTF) of GPR56 causes stronger RhoA signaling and β-arrestin accumulation, leading to extensive ubiquitination of the C-terminal fragment (CTF). Finally, GPR56 suppresses PKCα activation to regulate angiogenesis.
Function
Studies in the hematopoietic system disclosed that during endothelial to hematopoietic stem cell transition, Gpr56 is a transcriptional target of the heptad complex of hematopoietic transcription factors, and is required for hematopoietic cluster formation. Recently, two studies showed that GPR56, is a cell autonomous regulator of oligodendrocyte development through Gα12/13 proteins and Rho activation. Della Chiesa et al. demonstrate that GPR56 is expressed on CD56dull natural killer (NK) cells. Lin and Hamann's group show all human cytotoxic lymphocytes, including CD56dull NK cells and CD27–CD45RA+ effector-type CD8+ T cells, express GPR56.
Clinical significance
GPR56 was the first adhesion GPCR causally linked to a disease. Loss-of-function mutations in GPR56 cause a severe cortical malformation known as bilateral frontoparietal polymicrogyria (BFPP). Investigating the pathological mechanism of disease-associated GPR56 mutations in BFPP has provided mechanistic insights into the functioning of adhesion GPCRs. Researchers demonstrated that disease-associated GPR56 mutations cause BFPP via multiple mechanisms. Li et al. demonstrated that GPR56 regulates pial basement membrane (BM) organization during cortical development. Disruption of the Gpr56 gene in mice leads to neuronal malformation in the cerebral cortex, which resulted in 4 critical pathological morphologies: defective pial BM, abnormal localized radial glial endfeet, malpositioned Cajal-Retzius cells, and overmigrated neurons. Furthermore, the interaction of GPR56 and collagen III inhibits neural migration to regulate lamination of the cerebral cortex. Next to GPR56, the α3β1 integrin is also involved in pial BM maintenance. Study from Itga3 (α3 integrin)/Gpr56 double knockout mice showed increased neuronal overmigration compared to Gpr56 single knockout mice, indicating cooperation of GPR56 and α3β1 integrin in modulation of the development of the cerebral cortex. More recently, the Walsh laboratory showed that alternative splicing of GPR56 regulates regional cerebral cortical patterning.
In depression patients, blood GPR56 mRNA expression increases only in responders and not non-responders to serotonin-norepinephrine reuptake inhibitor treatment. Furthermore, GPR56 was down-regulated in the prefrontal cortex of individuals with depression who died by suicide.
Outside the nervous system, GPR56 has been linked to muscle function and male fertility. The expression of GPR56 is upregulated during early differentiation of human myoblasts. Investigation of Gpr56 knockout mice and BFPP patients showed that GPR56 is required for in vitro myoblast fusion via signaling of serum response factor (SRF) and nuclear factor of activated T-cell (NFAT), but is not essential for muscle development in vivo. Additionally, GPR56 is a transcriptional target of peroxisome proliferator-activated receptor gamma coactivator 1-alpha 4 and regulates overload-induced muscle hypertrophy through Gα12/13 and mTOR signaling. In addition, studies of knockout mice revealed that GPR56 is involved in testis development and male fertility. In melanocytic cells GPR56 gene expression may be regulated by MITF.
Mutations in GPR56 cause the brain developmental disorder BFPP, characterized by disordered cortical lamination in frontal cortex. Mice lacking expression of GPR56 develop a comparable phenotype. Furthermore, loss of GPR56 leads to reduced fertility in male mice, resulting from a defect in seminiferous tubule development. GPR56 is expressed in glioblastoma/astrocytoma as well as in esophageal squamous cell, breast, colon, non-small cell lung, ovarian, and pancreatic carcinoma. GPR56 was shown to localize together with α-actinin at the leading edge of membrane filopodia in glioblastoma cells, suggesting a role in cell adhesion/migration. In addition, recombinant GPR56-NTF protein interacts with glioma cells to inhibit cellular adhesion. Inactivation of Von Hippel-Lindau (VHL) tumor-suppressor gene and hypoxia suppressed GPR56 in a renal cell carcinoma cell line, but hypoxia influenced GPR56 expression in breast or bladder cancer cell lines. GPR56 is a target gene for vezatin, an adherens junctions transmembrane protein, which is a tumor suppressor in gastric cancer. Xu et al. used an in vivo metastatic model of human melanoma to show that GPR56 is downregulated in highly metastatic cells. Later, by ectopic expression and RNA interference they confirmed that GPR56 inhibits melanoma tumor growth and metastasis. Silenced expression of GPR56 in HeLa cells enhanced apoptosis and anoikis, but suppressed anchorage-independent growth and cell adhesion. High ecotropic viral integration site-1 acute myeloid leukemia (EVI1-high AML) expresses GPR56 that was found to be a transcriptional target of EVI1. Silencing expression of GPR56 decreases adhesion, cell growth and induces apoptosis through reduced RhoA signaling. GPR56 suppresses the angiogenesis and melanoma growth through inhibition of vascular endothelial growth factor (VEGF) via PKCα signaling pathway. Furthermore, GPR56 expression was found to be negatively correlated with the malignancy of melanomas in human patients.
References
External links
Adhesion GPCR consortium
GeneReviews/NIH/NCBI/UW entry on Polymicrogyria Overview
G protein-coupled receptors | GPR56 | [
"Chemistry"
] | 1,705 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,517,309 | https://en.wikipedia.org/wiki/GPR83 | Probable G-protein coupled receptor 83 is a protein that in humans is encoded by the GPR83 gene.
References
Further reading
G protein-coupled receptors | GPR83 | [
"Chemistry"
] | 33 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,517,351 | https://en.wikipedia.org/wiki/GPR162 | Probable G-protein coupled receptor 162 is a protein that in humans is encoded by the GPR162 gene.
This gene was identified upon genomic analysis of a gene-dense region at human chromosome 12p13. It appears to be mainly expressed in the brain; however, its function is not known. Alternatively spliced transcript variants encoding different isoforms have been identified.
References
Further reading
G protein-coupled receptors | GPR162 | [
"Chemistry"
] | 86 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,517,517 | https://en.wikipedia.org/wiki/GPR3 | G-protein coupled receptor 3 is a protein that in humans is encoded by the GPR3 gene. The protein encoded by this gene is a member of the G protein-coupled receptor family of transmembrane receptors and is involved in signal transduction.
GPR3 mRNA is broadly expressed in neurons in various brain regions, including the cortex, thalamus, hypothalamus, amygdala, hippocampus, pituitary, and cerebellum. GPR3 mRNA is also expressed in the eye, lung, kidney, liver, testes, and ovary, among other tissues.
In many cases, individuals afflicted by Alzheimer's disease show overexpression of the GPR3 protein in their neurons.
Function
GPR3 activates adenylate cyclase in the absence of ligand. GPR3 was first described as a constitutive activator of adenylate cyclase. This constitutive activity could be due to stimulation by a ubiquitous ligand that may be free, membrane-bound, or membrane-derived. Alternatively, they propose that this could also be due to basal Gs coupling. Various groups have since supported this initial finding of GPR3 constitutive activation and have proceeded to show similar Gs activity in GPR6 and GPR12.
GPR3 is expressed in mammalian oocytes where it maintains meiotic arrest and is thought to be a communication link between oocytes and the surrounding somatic tissue. It has been proposed that sphingosine 1-phosphate (S1P) and sphingosylphosphorylcholine (SPC) are GPR3 ligands, however this result was not confirmed in a β-arrestin recruitment assay.
Mice lacking GPR3 were found to develop late-onset obesity owing to decreased UCP-1 expression in brown adipose tissue and reduced thermogenic capacity.
Brown adipose tissue Activation
Brown adipose tissue (BAT), in contrast to bona fide white fat, can dissipate significant amounts of chemical energy through uncoupled respiration and heat production (thermogenesis). Metabolic substrates are consumed to fuel mitochondrial futile cycles and uncoupling protein 1 (UCP1)-dependent respiration to ultimately convert chemical energy to heat. Gs-signaling stimulates the recruitment of thermogenically competent beige adipocytes in the subcutaneous adipose depots.
Exposure to environmental cold stimulates thermogenic catabolism of lipids and carbohydrates in brown adipose tissue (BAT).
BAT activation is predominantly ascribed to the Gs-coupled family, which signals through increased cyclic AMP (cAMP). This class is exemplified by the β-adrenergic receptors (ADRB1, ADRB2, and ADRB3), which represent the canonical means of sympathetic, ligand-mediated thermogenic control.
However, in the case of Gpr3, cold exposure increases the expression of this constitutively active receptor, which possesses innate signaling capacity and, thus, can modulate cAMP levels and thermogenic output without a ligand.
Gpr3 expression must be kept at extremely low basal levels until there is a thermogenic demand. Mimicking the cold induction of Gpr3 is then sufficient to drive and maintain elevated BAT activity even under conditions of little or no sympathetic tone.
To prove this, OS Johansen and colleagues developed a conditional gain-of-function model (Gpr3 TTG) for robust and sustained genetic manipulation of Gpr3 in vitro and in vivo.
Gpr3 TTG mice were crossed with mice to facilitate overexpression of Gpr3 in isolated primary brown and subcutaneous white adipocytes. Gpr3 overexpression significantly increased the expression of thermogenic genes, fatty acid uptake, and basal and leak mitochondrial respiration.
Gpr3 overexpression in their primary adipocyte model suppressed expression of the β-adrenergic receptors, further supporting a counter-regulatory interaction between GPR3 and other Gs-coupled receptors.
BAT-specific overexpression of Gpr3 (C-3BO) mice were completely protected from developing diet-induced obesity despite maintaining comparable levels of food intake, C-3BO mice maintained elevated whole-body energy expenditure as well as darker brown BAT depots and higher thermogenic gene expression.
Reproductive system
In mammalian oocytes, the process of meiotic arrest and meiotic maturation is controlled in large part by cAMP concentrations in the cell. When cAMP levels in the cell decrease, the process of meiosis resumes, and this precedes germinal vesicle breakdown. It is proposed that GPR3 is implicated in cAMP signaling in oocytes, since this is consistent with the observation that their mRNA expression is reduced when cAMP is chronically increased in oocytes. The constitutive activity of these receptors is sufficient to prevent maturation in mouse oocytes, and it has been shown that their activity is also sufficient for maintaining the meiotic arrest in the follicle.
Brain cells
GPR3 mRNA is broadly expressed in neurons in various brain regions, including the cortex, thalamus, hypothalamus, amygdala, hippocampus, pituitary, and cerebellum. Notably, the GPR3 protein is overexpressed in neurons in post-mortem brain tissue sections from individuals afflicted by Alzheimer's disease. In a study on a mouse model of Alzheimer's disease, it was shown that disrupting the expression of GPR3 reduced the accumulation of amyloid plaque on neurons, alleviating symptoms of Alzheimer's disease.
Ligands
GPR3 is largely known as an orphan G protein-coupled receptor. Even though it does not have any endogenous ligands there is research being conducted to find non-endogenous agonists for the receptor.
Agonists
Sphingosine 1-phosphate
The molecule Sphingosine 1-phosphate (S1P) is a signaling lipid that exists in the extracellular plasma, its synthesis is catalysed by sphingosine kinases (SphKs). The molecule is reported to have high affinity to the GPR3 receptor. The proposed ligand activates the Gs signaling pathway in oocytes.
Diphenyleneiodonium chloride
Diphenyleneiodonium chloride (DPI) is an inhibitor of NADPH oxidase and a potent, irreversible, and time and temperature-dependent iNOS/eNOS inhibitor. Diphenyleneiodonium chloride (DPI) also functions as a TRPA1 activator and selectively inhibits intracellular reactive oxygen species (ROS). Diphenyleneiodonium chloride (DPI) was identified as a novel agonist of GPR3 with weak or no cross-reactivity with other GPCRs. DPI was further characterized to activate several GPR3-mediated signal transduction pathways, including Ca(2+) mobilization, cAMP accumulation, membrane recruitment of β-arrestin2, and receptor desensitization.
Inverse agonists
Cannabidiol
Cannabidiol (CBD) is a phytocannabinoid found in the cannabis plant. This compound is connected to improving anxiety, cognition, and pain. Although it is an orphan receptor, GPR3 is phylogenetically most closely related to the cannabinoid receptors. Using β-arrestin2 recruitment and cAMP accumulation assays, it was recently found that cannabidiol is an inverse agonist for GPR3. The effects of this inverse agonism are still unknown.
Evolution
Paralogues
Source:
GPR6
GPR12
S1PR5
LPAR2
S1PR1
S1PR2
LPAR3
LPAR1
CNR1
S1PR3
MC3R
MC4R
S1PR4
MC5R
MC1R
CNR2
MC2R
GPR119
References
Further reading
G protein-coupled receptors | GPR3 | [
"Chemistry"
] | 1,663 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,517,625 | https://en.wikipedia.org/wiki/Melatonin%20receptor%201B | Melatonin receptor 1B, also known as MTNR1B, is a protein that in humans is encoded by the MTNR1B gene.
Function
This gene encodes the MT2 protein, one of two high-affinity forms of a receptor for melatonin, the primary hormone secreted by the pineal gland. This gene product is an integral membrane protein that is a G-protein coupled, 7-transmembrane receptor. It is found primarily in the retina and brain; however, this detection requires RT-PCR. It is thought to participate in light-dependent functions in the retina and may be involved in the neurobiological effects of melatonin. Besides the brain and retina this receptor is expressed on the bone forming cells where it regulates their function in depositing bone.
Clinical significance
Several studies have identified MTNR1B receptor mutations that are associated with increased average blood sugar level and around a 20 percent elevated risk of developing type 2 diabetes. MTNR1B mRNA is expressed in human islets, and immunocytochemistry confirms that it is primarily localized in beta cells in islets.
Ligands
The following MT2R ligands have selectivity over MT1R:
Compound 3d: antagonist with sub-nM affinity
Compound 18f: antagonist and compound 18g partial agonist: sub-nM affinity, >100-fold selectivity over MT1
Compound 14: antagonist
Compound 13: agonist
See also
Melatonin receptor
Discovery and development of melatonin receptor agonists
References
Further reading
External links
G protein-coupled receptors
Human proteins
1B | Melatonin receptor 1B | [
"Chemistry"
] | 327 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,522,267 | https://en.wikipedia.org/wiki/Cysteinyl%20leukotriene%20receptor%202 | Cysteinyl leukotriene receptor 2, also termed CYSLTR2, is a receptor for cysteinyl leukotrienes (LT) (see leukotrienes#Cysteinyl leukotrienes). CYSLTR2, by binding these cysteinyl LTs (CysLTs; viz, LTC4, LTD4, and to a much lesser extent, LTE4) contributes to mediating various allergic and hypersensitivity reactions in humans. However, the first discovered receptor for these CsLTs, cysteinyl leukotriene receptor 1 (CysLTR1), appears to play the major role in mediating these reactions.
Gene
The human gene maps to the long arm of chromosome 13 at position 13q14, a chromosomal region that has long been linked to asthma and other allergic diseases. The gene consists of four exons with all introns located in the genes' 5' UTR region and the entire coding region located in the last exon. CysLTR2 encodes a protein composed of 347 amino acids and shows only modest similarity to the CysLTR1 gene in that its protein shares only 31% amino acid identity with the CysLTR1 protein.
Receptor
CysLTR2 mRNA is co-expressed along with CysLTR1 in human blood eosinophils and platelets, and tissue mast cells, macrophages, airway epithelial cells, and vascular endothelial cells. It is also expressed without CysLTR1 throughout the heart, including Purkinje cells, the adrenal gland, and brain as well as some vascular endothelial, airway epithelial, and smooth muscle cells.
CysLTR2, similar to CysLTR1, is a G protein–coupled receptor that links to, and when bound to its CysLT ligands activates, the Gq alpha subunit and/or Gi alpha subunit of its coupled G protein, depending on the cell type. Acting through these G proteins and their subunits, ligand-bound CysLTR2 activates a series of pathways that lead to cell function (see Gq alpha subunit#function and Gi alpha subunit#function for details); the order of potency of the cysLTs in stimulating CysLTR2 is LTD4=LTC4>LTE4, with LTE4 probably lacking sufficient potency to have much activity that operates through CysLTR2 in vivo. By comparison, the stimulating potencies of these CysLTs for CysLTR1 are LTD4>LTC4>LTE4, with LTD4 showing 10-fold greater potency on CysLTR1 than CysLTR2. Perhaps related to this difference in CysLT sensitivities, cells co-expressing CysLTR2 and CysLTR1 may exhibit lower sensitivity to LTD4 than do cells expressing only CysLTR1; in consequence, CysLTR2 has been suggested to dampen CysLTR1's activities.
In addition to CysLTR1, GPR99 (also termed the oxoglutarate receptor or, sometimes, CysLTR3) appears to be an important receptor for CysLTs, particularly for LTE4: the CysLTs show relative potencies of LTE4>LTC4>LTD4 in stimulating GPR99-bearing cells and GPR99-deficient mice exhibit a dose-dependent loss of vascular permeability responses in skin to LTE4 but not to LTC4 or LTD4.
Other studies on model cells for allergy have defined GPR17 (also termed the uracil nucleotide/cysteinyl leukotriene receptor) as a receptor not only for uracil nucleotides but also for CysLTs, with CysLTs having the following potencies LTD4>LTC4>LTE4 in stimulating GPR17-bearing cells. However, recent studies also working with model cells involved in allergy find that GPR17-bearing cells do not respond to these CysLTs (or uracil nucleotides). Rather, they find that: a) cells expressing both CysLTR1 and GPR17 receptors exhibit a marked reduction in binding and responding to LTD4 and b) mice lacking GPR17 are hyper-responsive to IgE in a model for passive cutaneous anaphylaxis. The latter studies conclude that GPR17 acts to inhibit CysLTR1. Finally, and in striking contrast to these studies, repeated studies on neural tissues find that oligodendrocyte progenitor cells express GPR17 and respond through this receptor to LTC4, LTD4, and certain purines (see GPR17#Function).
CysLTR2 inhibitors
There are as yet no selective inhibitors of CysLTR2 that are in clinical use (see Clinical significance section below). However, Gemilukast (ONO-6950) reportedly inhibits both CysLTR1 and CysLTR2. The drug is currently being evaluated in phase II trials for the treatment of asthma.
CysLTR2 polymorphism
Polymorphism in the CysLTR2 gene resulting in a single amino acid substitution, M201V (i.e. amino acid methionine changed for valine at the 201 position of the CysLTR2 protein), has been negatively associated in transmission disequilibrium testing with the inheritance of asthma in separate populations of: a) white and African-Americans from 359 families with a high prevalence of asthma in Denmark and Minnesota, USA, and b) 384 families with a high prevalence of asthma from the Genetics of Asthma International Network. The M201V CysLTR2 variant exhibits decreased responsiveness to LTD4, suggesting that this hypo-responsiveness underlies its asthma transmission-protecting effect. A -1220A>C (i.e. nucleotide adenine substituted for cytosine at position 1220 upstream from the transcription start site) gene polymorphism variant in intron III of the upstream region of CysLTR2 has been associated significantly with the development of asthma in a Japanese population; the impact of this polymorphism on the gene's expression or product has not been determined. These results suggest that CYSLTR2 contributes to the etiology and development of asthma and that drugs targeting CYSLTR2 may work in a manner that differs from those of CYSLTR1 antagonists.
Clinical significance
The CysLT-induced activation of CysLTR2 induces many of the same in vitro responses of cells involved in allergic reactions, as well as the in vivo allergic responses in animal models, as that induced by CysLT-induced activation of CysLTR1 (see Cysteinyl leukotriene receptor 1#Receptor). However, CysLTR2 requires 10-fold higher concentrations of LTD4, the most potent cysLT for CysLTR1, to activate. Furthermore, the allergic and hypersensitivity responses of humans and animal models are significantly reduced by chronic treatment with Montelukast, Zafirlukast, and Pranlukast, drugs which are selective receptor antagonists of CysLTR1 but not CysLTR2. Models of allergic reactions in Cysltr2-deficient mice as well as in a human mast cell line indicate that mouse Cysltr2 and its human homolog CysLTR2 act to inhibit Cysltr1 and CysLTR1, respectively, and therefore suggest that CysLTR2 may similarly inhibit CysLTR1 in human allergic diseases. The role of CysLTR2 in the allergic and hypersensitivity diseases of humans must await the development of selective CysLTR2 inhibitors.
See also
Cysteinyl leukotriene receptor 1
Eicosanoid receptor
GPR99
References
Further reading
External links
G protein-coupled receptors | Cysteinyl leukotriene receptor 2 | [
"Chemistry"
] | 1,633 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,523,851 | https://en.wikipedia.org/wiki/Cremona%20diagram | The Cremona diagram, also known as the Cremona-Maxwell method, is a graphical method used in statics of trusses to determine the forces in members (graphic statics). The method was developed by the Italian mathematician Luigi Cremona. However, recognizable Cremona diagrams appeared as early as 1725, in Pierre Varignon's posthumously published work, Nouvelle Méchanique ou Statique.
In the Cremona method, first the external forces and reactions are drawn (to scale) forming a vertical line in the lower right side of the picture. This is the sum of all the force vectors and is equal to zero as there is mechanical equilibrium.
Since the equilibrium holds for the external forces on the entire truss construction, it also holds for the internal forces acting on each joint. For a joint to be at rest the sum of the forces on a joint must also be equal to zero. Starting at joint A, the internal forces can be found by drawing lines in the Cremona diagram representing the forces in the members 1 and 4, going clockwise; VA (going up), load at A (going down), force in member 1 (going down/left), member 4 (going up/right) and closing with VA. As the force in member 1 is towards the joint, the member is under compression; as the force in member 4 is away from the joint, member 4 is under tension. The length of the lines for members 1 and 4 in the diagram, multiplied by the chosen scale factor, is the magnitude of the force in members 1 and 4.
Now, in the same way the forces in members 2 and 6 can be found for joint C; force in member 1 (going up/right), force in C going down, force in 2 (going down/left), force in 6 (going up/left) and closing with the force in member 1.
The same steps can be taken for joints D, H and E resulting in the complete Cremona diagram where the internal forces in all members are known.
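The joint-by-joint equilibrium that the Cremona diagram solves graphically can also be solved numerically. The Python sketch below treats a single hypothetical joint with two members at 45° and 135° carrying a 10 kN downward load; the geometry and load are purely illustrative and are not taken from the truss described above.

    import numpy as np

    # Unit vectors of the two members leaving the joint (at 45 and 135 degrees from the x-axis)
    directions = np.array([
        [np.cos(np.radians(45.0)),  np.sin(np.radians(45.0))],
        [np.cos(np.radians(135.0)), np.sin(np.radians(135.0))],
    ])

    load = np.array([0.0, -10.0])      # 10 kN acting straight down on the joint

    # Equilibrium at the joint: sum of member forces along their unit vectors plus the load is zero
    member_forces = np.linalg.solve(directions.T, -load)

    print(member_forces)               # positive values mean tension, negative values compression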
In a next phase the forces caused by wind must be considered. Wind will cause pressure on the upwind side of a roof (and truss) and suction on the downwind side. This will translate to asymmetrical loads but the Cremona method is the same. Wind force may introduce larger forces in the individual truss members than the static vertical loads.
References
Mechanics
Structural system
Diagrams | Cremona diagram | [
"Physics",
"Technology",
"Engineering"
] | 499 | [
"Structural engineering",
"Building engineering",
"Classical mechanics stubs",
"Classical mechanics",
"Structural system",
"Mechanics",
"Mechanical engineering"
] |
11,848,175 | https://en.wikipedia.org/wiki/Fenna%E2%80%93Matthews%E2%80%93Olson%20complex | The Fenna–Matthews–Olson (FMO) complex is a water-soluble complex and was the first pigment-protein complex (PPC) to be structure analyzed by x-ray spectroscopy. It appears in green sulfur bacteria and mediates the excitation energy transfer from light-harvesting chlorosomes to the membrane-embedded bacterial reaction center (bRC). Its structure is trimeric (C3-symmetry). Each of the three monomers contains eight bacteriochlorophyll a (BChl a) molecules. They are bound to the protein scaffold via chelation of their central magnesium atom either to amino acids of the protein (mostly histidine) or water-bridged oxygen atoms (only one BChl a of each monomer).
Since the structure is available, calculating structure-based optical spectra is possible for comparison with experimental optical spectra. In the simplest case only the excitonic coupling of the BChls is taken into account. More realistic theories consider pigment-protein coupling. An important property is the local transition energy (site energy) of the BChls, different for each, due to their individual local protein environment. The site energies of the BChls determine the direction of the energy flow.
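In the simplest excitonic picture, such a calculation amounts to diagonalizing a small matrix whose diagonal holds the site energies and whose off-diagonal elements hold the excitonic couplings. The Python sketch below does this for a three-pigment toy system; the numbers are invented for the example (the real FMO monomer has eight BChl a sites, and published site energies and couplings differ between parameterizations).

    import numpy as np

    # Toy exciton Hamiltonian in cm^-1: site energies on the diagonal,
    # excitonic couplings between pigments off the diagonal (illustrative values only)
    H = np.array([
        [12400.0,   -90.0,     6.0],
        [  -90.0, 12520.0,    30.0],
        [    6.0,    30.0, 12200.0],
    ])

    exciton_energies, exciton_states = np.linalg.eigh(H)

    print(exciton_energies)        # exciton transition energies in cm^-1
    print(exciton_states[:, 0])    # site amplitudes of the lowest exciton state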
Some structural information on the FMO-RC super complex is available, which was obtained by electron microscopy and linear dichroism spectra measured on FMO trimers and FMO-RC complexes. From these measurements, two orientations of the FMO complex relative to the RC are possible. The orientation with BChl 3 and 4 close to the RC and BChl 1 and 6 (following Fenna and Matthews' original numbering) oriented towards the chlorosomes is useful for efficient energy transfer.
Test object
The complex is the simplest PPC appearing in nature and therefore a suitable test object for the development of methods that can be transferred to more complex systems like photosystem I. Engel and co-workers observed that the FMO complex exhibits remarkably long quantum coherence, but after about a decade of debate, it was shown that this quantum coherence has no significance to the functioning of the complex. Furthermore, it was shown that the reported long lived oscillations observed in the spectra are solely due to groundstate vibrational dynamics and do not reflect any energy transfer dynamics.
Quantum light harvesting
Light harvesting in photosynthesis employs both classical and quantum mechanical processes with an energy efficiency of almost 100 percent. For light to produce energy in classical processes, photons must reach reaction sites before their energy dissipates in less than one nanosecond. In photosynthetic processes, this is not possible. Because energy can exist in a superposition of states, it can travel all routes within a material at the same time. When a photon finds the correct destination, the superposition collapses, making the energy available. However, no purely quantum process can be wholly responsible, because some quantum processes slow down the movement of quantized objects through networks. Anderson localization prevents the spread of quantum states in random media. Because the state acts like a wave, it is vulnerable to disruptive interference effects. Another issue is the quantum zeno effect, in which an unstable state never changes if it is continuously measured/watched, because watching constantly nudges the state, preventing it from collapsing.
Interactions between quantum states and the environment act like measurements. The classical interaction with the environment changes the wave-like nature of the quantum state just enough to prevent Anderson localisation, while the quantum zeno effect extends the quantum state's lifetime, allowing it to reach the reaction centre. The proposed long lifetime of quantum coherence in the FMO influenced many scientists to investigate quantum coherence in the system, with Engel's 2007 paper being cited over 1500 times within 5 years of its publication. The proposal of Engel is still debated in literature with the suggestion that the original experiments were interpreted incorrectly assigning the spectral oscillations to electronic coherences instead of ground-state vibrational coherences, which will naturally be expected to live longer due to the narrower spectral width of vibrational transitions.
Computing
The problem of finding a reaction centre in a protein matrix is formally equivalent to many problems in computing. Mapping computing problems onto reaction center searches may allow light harvesting to work as a computational device, improving computational speeds at room temperature, yielding 100-1000x efficiency.
References
Photosynthesis
Phototrophic bacteria
Photosynthetic pigments | Fenna–Matthews–Olson complex | [
"Chemistry",
"Biology"
] | 918 | [
"Photosynthetic pigments",
"Photosynthesis",
"Bacteria",
"Phototrophic bacteria",
"Biochemistry"
] |
11,848,801 | https://en.wikipedia.org/wiki/Keldysh%20formalism | In non-equilibrium physics, the Keldysh formalism or Keldysh–Schwinger formalism is a general framework for describing the quantum mechanical evolution of a system in a non-equilibrium state or systems subject to time varying external fields (electrical field, magnetic field etc.). Historically, it was foreshadowed by the work of Julian Schwinger and proposed almost simultaneously by Leonid Keldysh and, separately, Leo Kadanoff and Gordon Baym. It was further developed by later contributors such as O. V. Konstantinov and V. I. Perel.
Extensions to driven-dissipative open quantum systems is given not only for bosonic systems, but also for fermionic systems.
The Keldysh formalism provides a systematic way to study non-equilibrium systems, usually based on the two-point functions corresponding to excitations in the system. The main mathematical object in the Keldysh formalism is the non-equilibrium Green's function (NEGF), which is a two-point function of particle fields. In this way, it resembles the Matsubara formalism, which is based on equilibrium Green functions in imaginary-time and treats only equilibrium systems.
Time evolution of a quantum system
Consider a general quantum mechanical system. This system has the Hamiltonian $H_0$. Let the initial state of the system be the pure state $|\psi\rangle$. If we now add a time-dependent perturbation to this Hamiltonian, say $H'(t)$, the full Hamiltonian is $H(t) = H_0 + H'(t)$ and hence the system will evolve in time under the full Hamiltonian. In this section, we will see how time evolution actually works in quantum mechanics.
Consider a Hermitian operator $A$. In the Heisenberg picture of quantum mechanics, this operator is time-dependent and the state is not. The expectation value of the operator is given by

$\langle A(t)\rangle = \langle\psi|\, A_H(t)\, |\psi\rangle$

where, due to time evolution of operators in the Heisenberg picture, $A_H(t) = U^\dagger(t,0)\, A\, U(t,0)$. The time-evolution unitary operator is the time-ordered exponential of an integral, $U(t,0) = \mathcal{T}\exp\!\left(-i\int_0^t H(t')\,dt'\right)$, in units where $\hbar = 1$. (Note that if the Hamiltonian at one time commutes with the Hamiltonian at different times, then this can be simplified to $U(t,0) = \exp\!\left(-i\int_0^t H(t')\,dt'\right)$.)
For perturbative quantum mechanics and quantum field theory, it is often more convenient to use the interaction picture. The interaction picture operator is
where . Then, defining we have
Since the time-evolution unitary operators satisfy , the above expression can be rewritten as
,
or with replaced by any time value greater than .
Path ordering on the Keldysh contour
We can write the above expression more succinctly by, purely formally, replacing each operator with a contour-ordered operator , such that parametrizes the contour path on the time axis starting at , proceeding to , and then returning to . This path is known as the Keldysh contour. has the same operator action as (where is the time value corresponding to ) but also has the additional information of (that is, strictly speaking if , even if for the corresponding times ).
Then we can introduce notation of path ordering on this contour, by defining , where is a permutation such that , and the plus and minus signs are for bosonic and fermionic operators respectively. Note that this is a generalization of time ordering.
With this notation, the above time evolution is written as
Where corresponds to the time on the forward branch of the Keldysh contour, and the integral over goes over the entire Keldysh contour. For the rest of this article, as is conventional, we will usually simply use the notation for where is the time corresponding to , and whether is on the forward or reverse branch is inferred from context.
Keldysh diagrammatic technique for Green's functions
The non-equilibrium Green's function is defined as .
Or, in the interaction picture, . We can expand the exponential as a Taylor series to obtain the perturbation series
.
This is the same procedure as in equilibrium diagrammatic perturbation theory, but with the important difference that both forward and reverse contour branches are included.
If, as is often the case, is a polynomial or series as a function of the elementary fields , we can organize this perturbation series into monomial terms and apply all possible Wick pairings to the fields in each monomial, obtaining a summation of Feynman diagrams. However, the edges of the Feynman diagram correspond to different propagators depending on whether the paired operators come from the forward or reverse branches. Namely,
where the anti-time ordering orders operators in the opposite way as time ordering and the sign in is for bosonic or fermionic fields. Note that is the propagator used in ordinary ground state theory.
Thus, Feynman diagrams for correlation functions can be drawn and their values computed the same way as in ground state theory, except with the following modifications to the Feynman rules: Each internal vertex of the diagram is labeled with either or , while external vertices are labelled with . Then each (unrenormalized) edge directed from a vertex (with position , time and sign ) to a vertex (with position , time and sign ) corresponds to the propagator . Then the diagram values for each choice of signs (there are such choices, where is the number of internal vertices) are all added up to find the total value of the diagram.
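A minimal numerical sketch may help make the contour propagators concrete. The example below assumes a single free fermionic level with energy ε at inverse temperature β (ħ = 1) and one common sign convention, e.g. G^<(t,t′) = +i⟨c†(t′)c(t)⟩; conventions differ between texts, so the signs should be read as an assumption rather than as the article's definition.

```python
import numpy as np

# Single free fermionic level: energy eps, inverse temperature beta (hbar = 1).
eps, beta = 0.7, 5.0                       # assumed example parameters
nF = 1.0 / (np.exp(beta * eps) + 1.0)      # Fermi occupation of the level

t = np.linspace(-5, 5, 2001)               # relative time t - t'
phase = np.exp(-1j * eps * t)
theta = (t >= 0).astype(float)             # Heaviside step

G_less     = +1j * nF * phase              # G^<  ~  +i <c^dag(t') c(t)>
G_greater  = -1j * (1 - nF) * phase        # G^>  ~  -i <c(t) c^dag(t')>
G_time     = theta * G_greater + (1 - theta) * G_less    # time-ordered
G_antitime = theta * G_less + (1 - theta) * G_greater    # anti-time-ordered
G_ret      = theta * (G_greater - G_less)  # retarded propagator

# Identity that holds in any convention: G^T + G^Tbar = G^> + G^<
assert np.allclose(G_time + G_antitime, G_greater + G_less)
print("G^R(0+) =", G_ret[t >= 0][0])       # equals -i, independent of temperature
```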
See also
Spin Hall effect
Kondo effect
References
Other
Gianluca Stefanucci and Robert van Leeuwen (2013). "Nonequilibrium Many-Body Theory of Quantum Systems: A Modern Introduction" (Cambridge University Press, 2013). DOI: https://doi.org/10.1017/CBO9781139023979
Robert van Leeuwen, Nils Erik Dahlen, Gianluca Stefanucci, Carl-Olof Almbladh and Ulf von Barth, "Introduction to the Keldysh Formalism", Lectures Notes in Physics 706, 33 (2006). arXiv:cond-mat/0506130
Condensed matter physics
Electromagnetism | Keldysh formalism | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,256 | [
"Electromagnetism",
"Physical phenomena",
"Phases of matter",
"Materials science",
"Fundamental interactions",
"Condensed matter physics",
"Matter"
] |
11,849,133 | https://en.wikipedia.org/wiki/Berkovich%20tip | A Berkovich tip is a type of nanoindenter tip used for testing the indentation hardness of a material. It is a three-sided pyramid which is geometrically self-similar. The popular Berkovich now has a very flat profile, with a total included angle of 142.3° and a half angle of 65.27°, measured from the axis to one of the pyramid flats. This Berkovich tip has the same projected area-to-depth ratio as a Vickers indenter. The original tip shape was invented by Russian scientist E. S. Berkovich in the USSR around 1950, which has a half angle of 65.03°.
As it is three-sided, it is easier to grind these tips to a sharp point and so is more readily employed for nanoindentation tests. It is typically used to measure bulk materials and films greater than thick.
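For the modified Berkovich geometry, the projected contact area is commonly approximated by A ≈ 24.5 h_c², where h_c is the contact depth. The short sketch below uses this ideal-tip relation (ignoring tip rounding and Oliver–Pharr calibration corrections, which real analyses include) to estimate indentation hardness from an assumed peak load and contact depth.

```python
# Hedged sketch: hardness from an ideal modified-Berkovich area function.
# A(h_c) = 24.5 * h_c^2 ignores tip rounding; real analyses calibrate A(h_c).
def berkovich_projected_area(h_c_m):
    """Projected contact area (m^2) for contact depth h_c (m), ideal tip."""
    return 24.5 * h_c_m ** 2

def indentation_hardness(P_max_N, h_c_m):
    """Hardness H = P_max / A(h_c), returned in GPa."""
    return P_max_N / berkovich_projected_area(h_c_m) / 1e9

# Assumed example numbers: 10 mN peak load at 300 nm contact depth.
print(f"H = {indentation_hardness(10e-3, 300e-9):.2f} GPa")
```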
References
Hardness tests
Soviet inventions
Russian inventions | Berkovich tip | [
"Materials_science"
] | 189 | [
"Hardness tests",
"Materials testing"
] |
11,849,160 | https://en.wikipedia.org/wiki/Tyrosine%20kinase%202 | Non-receptor tyrosine-protein kinase TYK2 is an enzyme that in humans is encoded by the TYK2 gene.
TYK2 was the first member of the JAK family that was described (the other members are JAK1, JAK2, and JAK3). It has been implicated in IFN-α, IL-6, IL-10 and IL-12 signaling.
Function
This gene encodes a member of the tyrosine kinase family and, more specifically, of the Janus kinase (JAK) protein family. The protein associates with the cytoplasmic domains of type I and type II cytokine receptors and propagates cytokine signals by phosphorylating receptor subunits. It is also a component of both the type I and type III interferon signaling pathways. As such, it may play a role in anti-viral immunity.
Cytokines play pivotal roles in immunity and inflammation by regulating the survival, proliferation, differentiation, and function of immune cells, as well as cells from other organ systems. Hence, targeting cytokines and their receptors is an effective means of treating such disorders. Type I and II cytokine receptors associate with Janus family kinases (JAKs) to affect intracellular signaling. Cytokines including interleukins, interferons and hemopoietins activate the Janus kinases, which associate with their cognate receptors.
The mammalian JAK family has four members: JAK1, JAK2, JAK3 and tyrosine kinase 2 (TYK2). The connection between Jaks and cytokine signaling was first revealed when a screen for genes involved in interferon type I (IFN-1) signaling identified TYK2 as an essential element, which is activated by an array of cytokine receptors. TYK2 has broader and profound functions in humans than previously appreciated on the basis of analysis of murine models, which indicate that TYK2 functions primarily in IL-12 and type I-IFN signaling. TYK2 deficiency has more dramatic effects in human cells than in mouse cells. However, in addition to IFN-α and -β and IL-12 signaling, TYK2 has major effects on the transduction of IL-23, IL-10, and IL-6 signals. Since, IL-6 signals through the gp-130 receptor-chain that is common to a large family of cytokines, including IL-6, IL-11, IL-27, IL-31, oncostatin M (OSM), ciliary neurotrophic factor, cardiotrophin 1, cardiotrophin-like cytokine, and LIF, TYK2 might also affect signaling through these cytokines. Recently, it has been recognized that IL-12 and IL-23 share ligand and receptor subunits that activate TYK2. IL-10 is a critical anti-inflammatory cytokine, and IL-10−/− mice suffer from fatal, systemic autoimmune disease.
TYK2 is activated by IL-10, and its deficiency affects the ability to generate and respond to IL-10. Under physiological conditions, immune cells are, in general, regulated by the action of many cytokines and it has become clear that cross-talk between different cytokine-signalling pathways is involved in the regulation of the JAK–STAT pathway.
Role in inflammation
It is now widely accepted that atherosclerosis is a result of cellular and molecular events characteristic of inflammation. Vascular inflammation can be caused by upregulation of Ang-II, which is produced locally by inflamed vessels and induces synthesis and secretion of IL-6, a cytokine responsible for induction of angiotensinogen synthesis in the liver through the JAK/STAT3 pathway, which is activated through high-affinity membrane protein receptors on target cells, termed the IL-6R-chain, recruiting gp-130 that is associated with tyrosine kinases (JAKs 1/2 and TYK2 kinase). The cytokines IL-4 and IL-13 are elevated in the lungs of chronic asthmatics. Signalling through IL-4/IL-13 complexes is thought to occur through the IL-4Rα-chain, which is responsible for activation of the JAK-1 and TYK2 kinases. A role of TYK2 in rheumatoid arthritis is directly observed in TYK2-deficient mice, which were resistant to experimental arthritis. TYK2−/− mice displayed a lack of responsiveness to a small amount of IFN-α, but they respond normally to a high concentration of IFN-α/β. In addition, these mice respond normally to IL-6 and IL-10, suggesting that TYK2 is dispensable for mediating IL-6 and IL-10 signaling and does not play a major role in IFN-α signaling. Although TYK2−/− mice are phenotypically normal, they exhibit abnormal responses to inflammatory challenges in a variety of cells isolated from TYK2−/− mice. The most remarkable phenotype observed in TYK2-deficient macrophages was a lack of nitric oxide production upon stimulation with LPS. Further elucidation of the molecular mechanisms of LPS signaling showed that TYK2 and IFN-β deficiency leads to resistance to LPS-induced endotoxin shock, whereas STAT1-deficient mice are susceptible. Development of a TYK2 inhibitor appears to be a rational approach in drug discovery.
Clinical significance
A mutation in this gene has been associated with hyperimmunoglobulin E syndrome (HIES), a primary immunodeficiency characterized by elevated serum immunoglobulin E.
TYK2 appears to play a central role in the inflammatory cascade responses in the pathogenesis of immune-mediated inflammatory diseases such as psoriasis. The drug deucravacitinib (marketed as Sotyktu), a small-molecule TYK2 inhibitor, was approved for moderate-to-severe plaque psoriasis in 2022.
The P1104A allele of TYK2 has been shown to increase risk of tuberculosis when carried as a homozygote; population genetic analyses suggest that the arrival of tuberculosis in Europe drove the frequency of that allele down three-fold about 2,000 years before present.
Interactions
Tyrosine kinase 2 has been shown to interact with FYN, PTPN6, IFNAR1, Ku80 and GNB2L1.
References
Further reading
Signal transduction
Tyrosine kinases | Tyrosine kinase 2 | [
"Chemistry",
"Biology"
] | 1,382 | [
"Biochemistry",
"Neurochemistry",
"Signal transduction"
] |
11,849,358 | https://en.wikipedia.org/wiki/ETV6 | ETV6 (i.e. translocation-Ets-leukemia virus) protein is a transcription factor that in humans is encoded by the ETV6 (previously known as TEL) gene. The ETV6 protein regulates the development and growth of diverse cell types, particularly those of hematological tissues. However, its gene, ETV6 frequently suffers various mutations that lead to an array of potentially lethal cancers, i.e., ETV6 is a clinically significant proto-oncogene in that it can fuse with other genes to drive the development and/or progression of certain cancers. However, ETV6 is also an anti-oncogene or tumor suppressor gene in that mutations in it that encode for a truncated and therefore inactive protein are also associated with certain types of cancers.
Gene
The human ETV6 gene is located at position "13.2" on the short (i.e. "p") arm of chromosome 12, i.e. its notated position is 12p13.2. The gene has 8 exons and two start codons, one located at exon 1 at the start of the gene and an alternative located upstream of exon 3. ETV6 codes for a full length protein consisting of 452 amino acids; the gene is expressed in virtually all cell types and tissues. Mice depleted of the ETV6 gene by Gene knockout die between day 10.5 and 11.5 of embryonic life with defective yolk sac angiogenesis and extensive losses in mesenchymal and neural cells due to apoptosis. Other genetic manipulation studies in mice indicate that the gene is required for the development and maintenance of bone marrow-based blood cell formation and the vascular network.
Protein
The human ETV6 protein is a member of the ETS transcription factor family; however, it more often acts to inhibit than stimulate transcription of its target genes. ETV6 protein contains 3 domains: a) the pointed N-terminal (i.e. PNT) domain which forms oligomer partners with itself as well as other transcription factors (e.g. FLI1) and is required for ETV6's transcriptional repressing activity; b) the central regulatory domain; and c) the C-terminal DNA-binding domain, ETS, which binds to the consensus DNA sequence, 5-GGAA/T-3 within a 9-to-10 bp sequence, in the target genes it regulates. ETV6 interacts with other proteins that regulate the differentiation and growth of cells. It binds to and thereby inhibits FLI1, another member of the ETS transcription factor family, which is active in promoting the maturation of blood platelet-forming megakaryocytes and blocking the Cellular differentiation of erythroblasts into red blood cells; this results in the excessive proliferation and abnormal morphology of erythroblasts. ETV6 likewise binds to HTATIP, a histone acetyl transferase that regulates the expression of various genes involved in gene transcription, DNA repair, and cellular apoptosis; this binding promotes the transcription-repressing activity of ETV6.
Medical significance
Inherited mutations
Rare missense and other loss-of-function mutations in ETV6 cause thrombocytopenia 5, an autosomal dominant familial disease characterized by variable thrombocytopenia (blood platelet counts from 5% to 90% of normal), mild to modest bleeding tendencies, and bone marrow biopsy findings of abnormal-appearing megakaryocytes (i.e. nuclei with fewer than the normal number of lobulations) and red cell macrocytosis. Thrombocytopenia 5 is associated with an increased incidence of developing hematological (e.g. chronic myelomonocytic leukemia, acute myelocytic leukemia, B cell acute lymphoblastic leukemia, mixed phenotype acute leukemia, myelodysplastic syndrome, and multiple myeloma) and non-hematological (e.g. skin and colon) cancers as well as non-malignant diseases such as refractory anemia, myopathies, and gastroesophageal reflux disease.
Two unrelated kindreds were found to have autosomal dominant inherited mutations in the ETV6 gene: one family with a germline DNA substitution termed L349P that led to replacing leucine with proline at amino acid 349 in the DNA-binding domain of ETV6, and a second, termed N385fs, in which germline DNA lost five base pairs of ETV6, yielding a truncated ETV6 protein. Both mutant proteins failed to enter cell nuclei normally and had a reduced capacity to target genes regulated by the normal ETV6 protein. Afflicted members of these families had low platelet counts (i.e. thrombocytopenia) and acute lymphoblastic leukemia. Fifteen members of the two kindreds had thrombocytopenia, five of whom also had acute lymphoblastic leukemia. The L349P kindred also had one family member with renal cell carcinoma and another family member with duodenal cancer. The relationship of these two cancers to the L349P mutation has not been investigated. In all events, these two familial thrombocytopenia syndromes appear distinctly different from the thrombocytopenia 5 syndrome.
Treatment
Family members with thrombocytopenia 5 need to be regularly monitored with complete blood count and blood smear screenings to detect the early changes brought on by the malignant transformations of this disease into hematological neoplasms. Patients who developed these transformations have generally been treated similarly to patients who have the same hematological neoplasms but on a non-familial basis. Patients developing non-malignant hematological or non-hematological solid tumor manifestations of thrombocytopenia 5 are also treated like patients with the same but non-familial disease.
The acute lymphoblastic leukemia associated with L349P or N385fs mutations in ETV6 appeared far less sensitive to standard chemotherapy for acute lymphoblastic leukemia, with 2 of 3 affected family members moving rather quickly from chemotherapy to bone marrow transplantation and the third family member expiring. This suggests that these mutation-related forms of acute lymphoblastic leukemia require aggressive therapy.
Acquired mutations
The ETV6 gene is prone to develop a wide range of acquired mutations in hematological precursor cells that lead to various types of leukemia and/or lymphoma. It may also suffer a smaller number of mutations in non-hematological tissues that lead to solid tumors. These mutations involve chromosome translocations which fuse ETV6, located on the short (i.e. "p") arm of chromosome 12 ("q" denotes the long arm) at position 13.2 (site notation: 12p13.2), to a second gene on another chromosome or, more rarely, on its own chromosome. This creates a fusion gene of the oncogene category which encodes a chimeric protein that promotes the malignant growth of its parent cells. It may be unclear which portion of the newly formed oncoprotein contributes to the ensuing malignancy, but fusions between ETV6 and proteins with tyrosine kinase activity generally convert a protein with tightly regulated tyrosine kinase activity into an uncontrolled and continuously active tyrosine kinase that thereby promotes the malignant transformation of its parent cells.
Hematological malignancies
The following table lists the more frequently occurring genes to which ETV6 fuses, the function of these genes, these genes' chromosomal locations, the notation designating the most common sites of the translocations of these fused genes, and the malignancies resulting from these translocations. These translocation mutations commonly occur in pluripotent hematopoietic stem cells that differentiate into various types of mature hematological cells. Consequently, a given mutation may lead to various types of hematological malignancies. The table includes abbreviations for tyrosine kinase receptor (TK receptor), non-receptor tyrosine kinase (non-receptor TK), homeobox protein type of transcription factor (homeobox protein), acute lymphocytic leukemia (ALL), Philadelphia chromosome negative chronic myelogenous leukemia (Ph(-)CML), myelodysplastic syndrome (MDS), myeloproliferative neoplasm (MPN), and acute myeloid leukemia (AML). The presence of ETV6 gene mutations in myelodysplastic syndromes is associated with shortened survival.
In addition to the fusion gene-producing translocations given in the table, ETV6 has been reported to fuse with other genes in very rare cases (i.e. 1-10 published reports). These translocations lead to one or more of the same types of hematological malignancies listed in the table. Thus, the ETV6 gene reportedly forms translocation-induced fusion genes with: a) tyrosine kinase receptor gene FGFR3; b) non-receptor tyrosine kinase genes ABL2, NTRK3, JAK2, SYK, FRK, and LYN; c) transcription factor genes MN1 and PER1; d) homeobox protein transcription factor CDX2; e) Protein tyrosine phosphatase receptor-type R gene PTPRR; f) transcriptional coactivator for nuclear hormone receptors gene NCOA2; f) Immunoglobulin heavy chain gene IGH; g) enzyme genes TTL (adds and removes tyrosine residues on α-tubulin), GOT1 (an Aspartate transaminase), and ACSL6 (a Long-chain-fatty-acid—CoA ligase); h) transporter gene ARNT (binds to ligand-bound aryl hydrocarbon receptor to aid in its movement to the nucleus where it promotes the expression of genes involved in xenobiotic metabolism); i) unknown function genes CHIC2, MDS2, FCHO2 and BAZ2A.; and j) non-annotated gene STL (which has no long open reading frame).
At least 9 frameshift mutations in the ETV6 gene have been associated with ~12% of adult T cell acute lymphoblastic leukemia cases. These mutations involve insertions or deletions in the gene that lead to its encoding a truncated and therefore inactive ETV6 protein. These mutations commonly occur alongside mutations in another oncogene, NOTCH1, which is associated with T cell acute lymphoblastic lymphoma quite independently of ETV6. It is suggested that suppressor mutations in the ETV6 gene may be a contributing factor in the development and/or progression of this leukemia type.
Treatment
Patients developing hematological malignancies secondary to fusion of the ETV6 gene with receptor tyrosine kinase or non-receptor tyrosine kinase genes may be sensitive to therapy with tyrosine kinase inhibitors. For example, patients with clonal eosinophilias due to PDGFRA or PDGFRB fusion genes, which are highly sensitive to the tyrosine kinase inhibitor imatinib (Gleevec), experience long-term, complete remission when treated with it. Larotrectinib, entrectinib, merestinib, and several other broadly acting tyrosine kinase inhibitors target NTRK3. Many of these drugs are in phase 1 or phase 2 clinical trials for the treatment of ETV6-NTRK3-related solid tumors and may ultimately prove useful for treating hematologic malignancies associated with this fusion gene. Clinical trials have found that the first-generation tyrosine kinase inhibitors sorafenib, sunitinib, midostaurin, and lestaurtinib show some promise in treating acute myelogenous leukemia associated with the FLT3-TKI fusion gene; the second-generation tyrosine kinase inhibitors quizartinib and crenolanib, which are highly selective in inhibiting the FLT3 protein, have shown significant promise in treating relapsed and refractory acute myelogenous leukemia related to the FLT3-TKI fusion gene. One patient with an ETV6-FLT3-related myeloid/lymphoid neoplasm obtained a short-term remission on sunitinib and, following relapse, on sorafenib, suggesting that the cited FLT3 tyrosine kinase inhibitors may prove useful for treating ETV6-FLT3-related hematologic malignancies. Two patients suffering hematologic malignancies related to PCM1-JAK2 or BCR-JAK2 fusion genes experienced complete and cytogenetic remissions in response to the tyrosine kinase inhibitor ruxolitinib; while both remissions were short-term (12 months), these results suggest that tyrosine kinase inhibitors that target JAK2 may be of some use for treating hematologic malignancies associated with ETV6-JAK2 fusion genes. An inhibitor of SYK tyrosine kinase, TAK-659, is currently undergoing phase I clinical trials for advanced lymphoma malignancies and may prove to be useful in treating this disease when associated with the ETV6-SYK fusion gene. It is possible that hematological malignancies associated with ETV6 gene fusions to either the SYK or FRK tyrosine kinase genes may someday be shown to be susceptible to tyrosine kinase inhibitor therapy. However, children with ETV6-RUNX1-associated acute lymphoblastic leukemia are in an especially good-risk subgroup and therefore have been almost uniformly treated with standard-risk chemotherapy protocols.
Hematological malignancies associated with ETV6 gene fusions to other transcription factor genes appear to reflect a loss or gain of function of ETV6 and/or the other genes in regulating expression of their target genes; this results in the formation or lack of formation of products which influence cell growth, proliferation, and/or survival. In vitro studies of the ETV6-RUNX, ETV6-MN1, ETV6-PER1, and ETV6-MECOM fusion genes support this notion. Thus, the ETV6-MECOM fusion gene is overexpressed because it is driven by the promoter derived from ETV6, whereas the ETV6-RUNX, ETV6-MN1, and ETV6-PER1 fusion genes produce chimeric proteins which lack ETV6 protein's gene-suppressing activity. The chimeric protein products of ETV6 gene fusions with ARNT, TTL, BAZ2A, FCHO2, MDS2, and CHIC2 likewise lack ETV6 protein's transcription factor activity. Gene fusions between ETV6 and the homeobox genes (i.e. CDX2, PAX5, and MNX1) produce chimeric proteins which lack either ETV6's and/or CDX2's, PAX5's, or MNX1's transcription factor activity. In all events, hematological malignancies associated with these fusion genes have been treated with standard chemotherapy protocols selected on the basis of the malignancy's phenotype.
Solid Tumors
Mutations in the ETV6 gene are also associated with solid tumors. In particular, the ETV6-NTRK3 fusion gene occurs in and is thought or proposed to drive certain types of cancers. These cancers include secretory breast cancer (also termed juvenile breast cancer), mammary analogue secretory carcinoma of the parotid and other salivary glands, congenital fibrosarcoma, congenital mesoblastic nephroma, inflammatory myofibroblastic tumor, and radiation-induced papillary thyroid carcinoma.
Treatment
The treatment of ETV6 gene-associated solid tumors has not advanced as far as that for ETV6 gene-associated hematological malignancies. It is proposed that tyrosine kinase inhibitors with specificity for NTRK3's tyrosine kinase activity in ETV6-NTRK3 gene-associated solid tumors may be of therapeutic usefulness. Entrectinib, a pan-NTRK as well as an ALK and ROS1 tyrosine kinase inhibitor has been found useful in treating a single patient with ETV6-NRTK3 fusion gene-associated mammary analogue secretory carcinoma and lends support to the clinical development of NTRK3-directed tyrosine kinase inhibitors to treat ETV6-NTRK3 fusion protein associated malignancies. Three clinical trials are in the recruitment phase for determining the efficacy of treating a wide range of solid tumors associated with mutated, overactive tyrosine kinase proteins, including the ETV6-TRK3 protein, with larotrectinib, a non-selective inhibitor of NTRK1, NTRK2, and NTRK3 tyrosine kinases.
See also
ETV6-NTRK3 gene fusion
TEL-JAK2
References
Further reading
External links
Drosophila anterior open - The Interactive Fly
Oncogenes
Tyrosine kinases
Transcription factors | ETV6 | [
"Chemistry",
"Biology"
] | 3,650 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
11,850,008 | https://en.wikipedia.org/wiki/Deluge%20gun | A deluge gun, fire monitor, master stream or deck gun is an aimable controllable high-capacity water jet used for manual firefighting or automatic fire protection systems. Deluge guns are often designed to accommodate foam which has been injected in the upstream piping.
Installation
Deluge guns are often fitted to fire boats, tug boats, and atop large fire trucks for use in manual firefighting, where they can be aimed and operated by one firefighter and are used to deliver water or foam from outside the immediate area of the fire. Deluge guns are sometimes installed in fixed fire protection systems to protect high hazards, such as aviation hangars and helicopter landing pads. Similarly, facilities with highly flammable material such as oil refineries may have permanently-installed deluge guns. Most apparatus-mounted deluge guns can be directed by a single firefighter, compared to a standard fire hose which normally requires several. Deluge guns can be automatically positioned for fixed systems, or may have portable designs. The latter option enables a firefighter to set up the gun to apply water to a blaze, before leaving it in place to attend to other tasks.
Capacity
A deluge gun can discharge per minute or more. A master stream is a fire service term for a water stream of per minute or greater. It is delivered by a master stream device, such as a deck gun, deluge gun, or fire monitor. Master streams are often found at the end of aerial ladders, tele-squirt nozzles, or monitor nozzles. The high pressure that they require renders them unsuitable for handline use.
Risks
A master stream brings with it many risks when used in an urban setting. A master stream should never be fired into an occupied building, as the force could knock down a supporting wall and crush victims. Also, the steam from the high volume of water delivered could cause a blowout or displace oxygen from an enclosed area, creating a risk of asphyxiation.
See also
Standpipe (firefighting)
Water cannon
Water gun
Water salutes, often carried out with deluge guns
References
US Patent for improved mobile fire apparatus
ABS Rules for Steel Vessels 2007 5C.9.11/3 Specific Vessel Types- Chemical Carriers, Fire Protection and Fire Extinction
Fire protection
Firefighting equipment | Deluge gun | [
"Engineering"
] | 470 | [
"Building engineering",
"Fire protection"
] |
3,022,936 | https://en.wikipedia.org/wiki/Asphaltene | Asphaltenes are molecular substances that are found in crude oil, along with resins, aromatic hydrocarbons, and saturates (i.e. saturated hydrocarbons such as alkanes). The word "asphaltene" was coined by Jean-Baptiste Boussingault in 1837 when he noticed that the distillation residue of some bitumens had asphalt-like properties. Asphaltenes in the form of asphalt or bitumen products from oil refineries are used as paving materials on roads, shingles for roofs, and waterproof coatings on building foundations.
Composition
Asphaltenes consist primarily of carbon, hydrogen, nitrogen, oxygen, and sulfur, as well as trace amounts of vanadium and nickel. The C:H ratio is approximately 1:1.2, depending on the asphaltene source. Asphaltenes are defined operationally as the n-heptane (C7H16)-insoluble, toluene (C7H8)-soluble component of a carbonaceous material such as crude oil, bitumen, or coal. Asphaltenes have been shown to have a distribution of molecular masses in the range of 400 u to 1500 u, but the average and maximum values are difficult to determine due to aggregation of the molecules in solution.
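Because the definition is operational, asphaltene content is usually reported gravimetrically. The toy calculation below assumes hypothetical masses for a crude-oil sample and its n-heptane-insoluble, toluene-soluble fraction, and simply expresses the ratio as a weight percentage; the numbers are illustrative, not measured values.

```python
def asphaltene_wt_percent(mass_sample_g, mass_insoluble_toluene_soluble_g):
    """Weight percent of asphaltenes from a gravimetric n-heptane precipitation."""
    return 100.0 * mass_insoluble_toluene_soluble_g / mass_sample_g

# Assumed example: 10.0 g of crude oil yields 0.85 g of n-heptane-insoluble,
# toluene-soluble material after washing and drying.
print(f"{asphaltene_wt_percent(10.0, 0.85):.1f} wt% asphaltenes")
```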
Analysis
The molecular structure of asphaltenes is difficult to determine because the molecules tend to stick together in solution. These materials are extremely complex mixtures containing hundreds or even thousands of individual chemical species. Asphaltenes do not have a specific chemical formula: individual molecules can vary in the number of atoms contained in the structure, and the average chemical formula can depend on the source. Although they have been subjected to modern analytical methods, including SARA, mass spectrometry, electron paramagnetic resonance and nuclear magnetic resonance, the exact molecular structures are difficult to determine. Given this limitation, asphaltenes are composed mainly of polyaromatic carbon ring units with oxygen, nitrogen, and sulfur heteroatoms, combined with trace amounts of heavy metals, particularly chelated vanadium and nickel, and aliphatic side chains of various lengths. Many asphaltenes from crude oils around the world contain similar ring units, as well as polar and non-polar groups, which are linked together to make highly diverse large molecules.
Asphaltenes after heating have been subdivided into nonvolatile (heterocyclic N and S species) and volatile (paraffins + olefins, benzenes, naphthalenes, phenanthrenes, and several others) fractions. Speight reports a simplified representation of the separation of petroleum into the following six major fractions: volatile saturates, volatile aromatics, nonvolatile saturates, nonvolatile aromatics, resins and asphaltenes. He also reports arbitrarily defined physical boundaries for petroleum using carbon number and boiling point.
Geochemistry
Asphaltenes are today widely recognised as dispersed, chemically altered fragments of kerogen, which migrated out of the source rock for the oil, during oil catagenesis. Asphaltenes had been thought to be held in solution in oil by resins (similar structure and chemistry, but smaller), but recent data shows that this is incorrect. Indeed, it has recently been suggested that asphaltenes are nanocolloidally suspended in crude oil and in toluene solutions of sufficient concentrations. In any event, for low surface tension liquids, such as alkanes and toluene, surfactants are not necessary to maintain nanocolloidal suspensions of asphaltenes.
The nickel to vanadium ratio of asphaltenes reflect the pH and Eh conditions of the paleo-depositional environment of the source rock for oil (Lewan, 1980;1984), and this ratio is, therefore, in use in the petroleum industry for oil-oil correlation and for identification of potential source rocks for oil exploration.
Occurrence
Heavy oils, oil sands, bitumen and biodegraded oils (as bacteria cannot assimilate asphaltenes, but readily consume saturated hydrocarbons and certain aromatic hydrocarbon isomers – enzymatically controlled) contain much higher proportions of asphaltenes than do medium-API oils or light oils. Condensates are virtually devoid of asphaltenes.
Measurement
Because the number of electron spins per gram is constant for a particular species of asphaltene, the quantity of asphaltene in an oil can be determined by measuring its paramagnetic signature (EPR). Measuring the EPR signature of the oil at the wellhead as the oil is produced then gives a direct indication of whether the amount of asphaltene is changing (e.g. because of precipitation or sloughing in the tubing below).
In addition, asphaltene aggregation, precipitation or deposition can sometimes be predicted by modeling or machine learning methods and can be measured in the laboratory using imaging methods or filtration.
Production problems
Asphaltenes impart high viscosity to crude oils, negatively impacting production. Furthermore, the variable asphaltene concentration in crude oils within individual reservoirs creates a myriad of production problems.
Heat exchanger fouling
Asphaltenes are known to be one of the largest causes of fouling in the heat exchangers of the crude oil distillation preheat train. They are present within micelles in crude oil, which can be broken down by reaction with paraffins under high temperature. Once the protective micelle has been removed polar asphaltenes agglomerate and are transported to the tube walls, where they can stick and form a foulant layer.
Asphaltene removal
Chemical treatments for removing asphaltene include:
Solvents
Dispersants/solvents
Oil/dispersants/solvents
The dispersant/solvent approach is used for removing asphaltenes from formation minerals. Continuous treating may be required to inhibit asphaltene deposition in the tubing. Batch treatments are common for dehydration equipment and tank bottoms. There are also asphaltene precipitation inhibitors that can be used by continuous treatment or squeeze treatments.
See also
Tholin
References
External links
An in-depth article on asphaltenes from OilfieldWiki.com, the oilfield encyclopedia
Article regarding asphaltene fouling by Irwin A. Wiehe
Asphaltene Aggregation from Crude Oils and Model Systems Studied by High-Pressure NIR Spectroscopy (Source : American Chemical Society)
A comprehensive website about asphaltene and its role in petroleum fouling by Prof. GA Mansoori at the Univ. of Illinois at Chicago
Petroleum production
Asphalt | Asphaltene | [
"Physics",
"Chemistry"
] | 1,312 | [
"Amorphous solids",
"Asphalt",
"Unsolved problems in physics",
"Chemical mixtures"
] |
3,024,615 | https://en.wikipedia.org/wiki/Finite%20potential%20well | The finite potential well (also known as the finite square well) is a concept from quantum mechanics. It is an extension of the infinite potential well, in which a particle is confined to a "box", but one which has finite potential "walls". Unlike the infinite potential well, there is a probability associated with the particle being found outside the box. The quantum mechanical interpretation is unlike the classical interpretation, where if the total energy of the particle is less than the potential energy barrier of the walls it cannot be found outside the box. In the quantum interpretation, there is a non-zero probability of the particle being outside the box even when the energy of the particle is less than the potential energy barrier of the walls (cf quantum tunnelling).
Particle in a one-dimensional potential well
For the one-dimensional case on the x-axis, the time-independent Schrödinger equation can be written as:
where
is the reduced Planck constant,
is the mass of the particle,
is the potential energy at each point x,
is the (complex valued) wavefunction, or "eigenfunction", and
is the energy, a real number, sometimes called eigenenergy.
For the case of the particle in a one-dimensional box of length L, the potential is outside the box, and zero for x between and . The wavefunction is composed of different wavefunctions; depending on whether x is inside or outside of the box, such that:
Inside the box
For the region inside the box, V(x) = 0 and Equation 1 reduces to
resembling the time-independent free Schrödinger equation, hence
Letting
the equation becomes
with a general solution of
where A and B can be any complex numbers, and k can be any real number.
Outside the box
For the region outside of the box, since the potential is constant, and equation becomes:
There are two possible families of solutions, depending on whether E is less than (the particle is in a bound state) or E is greater than (the particle is in an unbounded state).
If we solve the time-independent Schrödinger equation for an energy , letting such that
then the solution has the same form as the inside-well case:
and, hence, will be oscillatory both inside and outside the well. Thus, the solution is never square integrable; that is, it is always a non-normalizable state. This does not mean, however, that it is impossible for a quantum particle to have energy greater than , it merely means that the system has continuous spectrum above , i.e., the non-normalizable states still contribute to the continuous part of the spectrum as generalized eigenfunctions of an unbounded operator.
This analysis will focus on the bound state, where . Letting
produces
where the general solution is exponential:
Similarly, for the other region outside the box:
Now in order to find the specific solution for the problem at hand, we must specify the appropriate boundary conditions and find the values for A, B, F, G, H and I that satisfy those conditions.
Finding wavefunctions for the bound state
Solutions to the Schrödinger equation must be continuous, and continuously differentiable. These requirements are boundary conditions on the differential equations previously derived, that is, the matching conditions between the solutions inside and outside the well.
In this case, the finite potential well is symmetrical, so symmetry can be exploited to reduce the necessary calculations.
Summarizing the previous sections:
where we found , , and to be:
We see that as goes to , the term goes to infinity. Likewise, as goes to , the term goes to infinity. In order for the wave function to be square integrable, we must set , and we have:
and
Next, we know that the overall function must be continuous and differentiable. In other words, the values of the functions and their derivatives must match up at the dividing points:
These equations have two sorts of solutions, symmetric, for which and , and antisymmetric, for which and . For the symmetric case we get
so taking the ratio gives
Similarly for the antisymmetric case we get
Recall that both and depend on the energy. What we have found is that the continuity conditions cannot be satisfied for an arbitrary value of the energy; because that is a result of the infinite potential well case. Thus, only certain energy values, which are solutions to one or the other of these two equations, are allowed. Hence we find that the energy levels of the system below are discrete; the corresponding eigenfunctions are bound states. (By contrast, for the energy levels above are continuous.)
The energy equations cannot be solved analytically. Nevertheless, we will see that in the symmetric case, there always exists at least one bound state, even if the well is very shallow.
Graphical or numerical solutions to the energy equations are aided by rewriting them a little and it should be mentioned that a nice approximation method has been found by Lima which works for any pair of parameters and . If we introduce the dimensionless variables and , and note from the definitions of and that , where , the master equations read
In the plot to the right, for , solutions exist where the blue semicircle intersects the purple or grey curves ( and ). Each purple or grey curve represents a possible solution, within the range . The total number of solutions, , (i.e., the number of purple/grey curves that are intersected by the blue circle) is therefore determined by dividing the radius of the blue circle, , by the range of each solution and using the floor or ceiling functions:
In this case there are exactly three solutions, since .
and , with the corresponding energies
If we want, we can go back and find the values of the constants in the equations now (we also need to impose the normalisation condition). On the right we show the energy levels and wave functions in this case (where ).
We note that however small is (however shallow or narrow the well), there is always at least one bound state.
Two special cases are worth noting. As the height of the potential becomes large, , the radius of the semicircle gets larger and the roots get closer and closer to the values , and we recover the case of the infinite square well.
The other case is that of a very narrow, deep well - specifically the case and with fixed. As it will tend to zero, and so there will only be one bound state. The approximate solution is then , and the energy tends to . But this is just the energy of the bound state of a Delta function potential of strength , as it should be.
A simpler graphical solution for the energy levels can be obtained by normalizing the potential and the energy through multiplication by . The normalized quantities are
giving directly the relation between the allowed couples as
for the even and odd parity wave functions, respectively. In the previous equations only the positive derivative parts of the functions have to be considered. The chart giving directly the allowed couples is reported in the figure.
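As a concrete illustration of the graphical solution, the Python sketch below solves the even- and odd-parity transcendental equations numerically for an electron in an assumed well (full width L = 1 nm, depth V0 = 5 eV). It uses the common textbook variables z = kL/2 and z0 = (L/2)√(2mV0)/ħ, which play the role of the "circle radius" above; the specific parameter values and the notation are assumptions for the example and may differ from the conventions used in the text.

```python
import numpy as np
from scipy.optimize import brentq

hbar = 1.054571817e-34   # J*s
m = 9.1093837015e-31     # electron mass, kg
eV = 1.602176634e-19     # J

L = 1e-9      # full well width (assumed example: 1 nm)
V0 = 5 * eV   # well depth     (assumed example: 5 eV)

# z = k*L/2; z0 is the "circle radius" in the graphical solution.
z0 = (L / 2) * np.sqrt(2 * m * V0) / hbar

def even(z):   # symmetric states:   z tan z = sqrt(z0^2 - z^2)
    return z * np.tan(z) - np.sqrt(z0**2 - z**2)

def odd(z):    # antisymmetric states: -z cot z = sqrt(z0^2 - z^2)
    return -z / np.tan(z) - np.sqrt(z0**2 - z**2)

def roots(f, n_grid=20000):
    """Bracket sign changes on a fine grid and refine with brentq,
    discarding spurious brackets that straddle a tangent pole."""
    zs, grid = [], np.linspace(1e-6, z0 - 1e-6, n_grid)
    vals = f(grid)
    for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if np.isfinite(fa) and np.isfinite(fb) and fa * fb < 0:
            z = brentq(f, a, b)
            if abs(f(z)) < 1e-6:
                zs.append(z)
    return zs

for parity, f in (("even", even), ("odd", odd)):
    for z in roots(f):
        E = 2 * hbar**2 * z**2 / (m * L**2)   # from k = 2z/L and E = hbar^2 k^2 / 2m
        print(f"{parity:>4} state: E = {E / eV:6.3f} eV  (V0 = {V0 / eV:.1f} eV)")
```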
Asymmetric well
Consider a one-dimensional asymmetric potential well given by the potential
with . The corresponding solution for the wave function with is found to be
and
The energy levels are determined once is solved as a root of the following transcendental equation
where . The existence of a root to the above equation is not always guaranteed; for example, one can always find a value of so small that, for given values of and , there exists no discrete energy level. The results of the symmetrical well are obtained from the above equation by setting .
Particle in a spherical potential well
Consider the following spherical potential well
where is the radius from the origin. The solution for the wavefunction with zero angular momentum () and with an energy is given by
satisfying the condition
This equation does not always have a solution indicating that in some cases, there are no bound states. The minimum depth of the potential well for which the bound state first appears at is given by
which increases with decreasing well radius . Thus, bound states are not possible if the well is sufficiently shallow and narrow. For well depth slightly exceeding the minimum value, i.e., for , the ground state energy (since we are considering case) is given by
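A commonly quoted form of this minimum depth for the ℓ = 0 spherical well of radius a is V_min = π²ħ²/(8ma²), i.e. the well must satisfy √(2mV0)·a/ħ ≥ π/2 for a bound state to exist. The short check below simply evaluates that expression for an electron in a few assumed well radii; it illustrates the 1/a² scaling and is not a substitute for the full derivation.

```python
import numpy as np

hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # electron mass, kg
eV = 1.602176634e-19     # J

def v_min(a_m, m=m_e):
    """Minimum depth of an l = 0 spherical well of radius a to hold one bound state."""
    return np.pi**2 * hbar**2 / (8 * m * a_m**2)

for a_nm in (0.5, 1.0, 2.0):               # assumed example radii
    print(f"a = {a_nm} nm  ->  V_min = {v_min(a_nm * 1e-9) / eV:.3f} eV")
# Halving the radius quadruples the required depth, as the 1/a^2 scaling implies.
```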
Spherically symmetric annular well
The results above can be used to show that, as in the one-dimensional case, there are two bound states in a spherical cavity, since spherical coordinates make the radius equivalent in any direction.
The ground state (n = 1) of a spherically symmetric potential will always have zero orbital angular momentum (ℓ = n−1), and the reduced wave function satisfies the equation
where is the radial part of the wave function. Notice that for n = 1 the angular part is constant (ℓ = 0).
This is identical to the one-dimensional equation, except for the boundary conditions. As before,
The energy levels for
are determined once is solved as a root of the following transcendental equation
where
The existence of a root to the above equation is always guaranteed. The results always have spherical symmetry. The solution fulfils the condition that the wave experiences no potential inside the sphere: .
A different differential equation applies when ℓ ≠ 0; as in the sections above, it is:
The solution can be brought, by suitable changes of variable and function, into a Bessel-like differential equation, whose solution is:
where , and are the spherical Bessel, Neumann and Hankel functions respectively, and could be rewritten as functions of the standard Bessel functions.
The energy levels for
are determined once is solved as a root of the following transcendental equation
where
These two transcendental equations are also solutions:
and also,
The existence of roots to the above equations is always guaranteed. The results always have spherical symmetry.
See also
Potential well
Delta function potential
Infinite potential well
Semicircle potential well
Quantum tunnelling
Rectangular potential barrier
References
Further reading
Quantum mechanical potentials
Quantum models
Exactly solvable models | Finite potential well | [
"Physics"
] | 2,028 | [
"Quantum models",
"Quantum mechanical potentials",
"Quantum mechanics"
] |
3,025,108 | https://en.wikipedia.org/wiki/Seapost%20Service | A Seapost was a mail compartment aboard an ocean-going vessel wherein international exchange mail was distributed. The first American service of this type was the U.S.-German Seapost, which began operating in 1891 on the S.S. Havel North German Lloyd Line. The service rapidly expanded with routes to Great Britain, Central America, South America, and Asia. The Seapost service still employed fifty-five clerks in early 1941. The last route of this type (to South America) was terminated October 19, 1941, due to unsafe wartime conditions on the Atlantic Ocean. The few remaining Seapost clerks transferred to branches of the Railway Mail Service (RMS). Seapost operations for the US Post Office Department were supervised from a New York City, New York, office.
Seapost offices were also operated by the postal authorities of France, Germany, Great Britain, Italy, Japan and New Zealand.
Sources
Wilking, Clarence. (1985) The Railway Mail Service, Railway Mail Service Library, Boyce, Virginia. Available as an MS Word file at http://www.railwaymailservicelibrary.org/articles/THE_RMS.DOC
United States Sea Post Cancellations Part 1 Transatlantic Routes, Edited by Philip Cockrill, Cockrill Series Booklet No 54
Seaposts of the USA by Roger Hosking, Published by the TPO & Seapost Society, September 2008
External links
TPO and Seapost Society for all collectors of Rail and Ship Mail worldwide
Postal systems
Philatelic terminology | Seapost Service | [
"Technology"
] | 308 | [
"Transport systems",
"Postal systems"
] |
3,027,037 | https://en.wikipedia.org/wiki/Bosonization | In theoretical condensed matter physics and quantum field theory, bosonization is a mathematical procedure by which a system of interacting fermions in (1+1) dimensions can be transformed to a system of massless, non-interacting bosons.
The method of bosonization was conceived independently by particle physicists Sidney Coleman and Stanley Mandelstam; and condensed matter physicists Daniel C. Mattis and Alan Luther in 1975.
In particle physics, however, the boson is interacting, cf. the sine-Gordon model, and notably through topological interactions, cf. the Wess–Zumino–Witten model.
The basic physical idea behind bosonization is that particle-hole excitations are bosonic in character. However, it was shown by Tomonaga in 1950 that this principle is only valid in one-dimensional systems. Bosonization is an effective field theory that focuses on low-energy excitations.
Mathematical descriptions
A pair of chiral fermions , one being the conjugate variable of the other, can be described in terms of a chiral boson
where the currents of these two models are related by
where composite operators must be defined by a regularization and a subsequent renormalization.
Examples
In particle physics
The standard example in particle physics, for a Dirac field in (1+1) dimensions, is the equivalence between the massive Thirring model (MTM) and the quantum Sine-Gordon model. Sidney Coleman showed the Thirring model is S-dual to the sine-Gordon model. The fundamental fermions of the Thirring model correspond to the solitons (bosons) of the sine-Gordon model.
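In one common normalization (conventions for the couplings differ between references, so this should be read as a representative form rather than the unique one), Coleman's correspondence relates the sine-Gordon coupling β to the Thirring coupling constant g by

```latex
\frac{\beta^{2}}{4\pi} \;=\; \frac{1}{1 + g/\pi}\,,
```

so the free-fermion point g = 0 corresponds to β² = 4π, and the sine-Gordon soliton carries the charge of the Thirring fermion.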
In condensed matter
The Luttinger liquid model, proposed by Tomonaga and reformulated by J.M. Luttinger, describes electrons in one-dimensional electrical conductors under second-order interactions. Daniel C. Mattis and Elliott H. Lieb proved in 1965 that the electron excitations of this model could be described in terms of bosons. The response of the electron density to an external perturbation can be treated as plasmonic waves. This model predicts the emergence of spin–charge separation.
See also
Holstein–Primakoff transformation
References
Quantum field theory
Condensed matter physics | Bosonization | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 461 | [
"Quantum field theory",
"Matter",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"Quantum physics stubs"
] |
3,027,077 | https://en.wikipedia.org/wiki/Helicity%20%28particle%20physics%29 | In physics, helicity is the projection of the spin onto the direction of momentum.
Mathematically, helicity is the sign of the projection of the spin vector onto the momentum vector: "left" is negative, "right" is positive.
Overview
The angular momentum J is the sum of an orbital angular momentum L and a spin S. The relationship between orbital angular momentum L, the position operator r and the linear momentum (orbit part) p is
so L's component in the direction of p is zero. Thus, helicity is just the projection of the spin onto the direction of linear momentum. The helicity of a particle is positive ("right-handed") if the direction of its spin is the same as the direction of its motion and negative ("left-handed") if opposite.
Helicity is conserved. That is, the helicity commutes with the Hamiltonian, and thus, in the absence of external forces, is time-invariant. It is also rotationally invariant, in that a rotation applied to the system leaves the helicity unchanged. Helicity, however, is not Lorentz invariant; under the action of a Lorentz boost, the helicity may change sign. Consider, for example, a baseball, pitched as a gyroball, so that its spin axis is aligned with the direction of the pitch. It will have one helicity with respect to the point of view of the players on the field, but would appear to have a flipped helicity in any frame moving faster than the ball.
Comparison with chirality
In this sense, helicity can be contrasted
to chirality, which is Lorentz invariant, but is not a constant of motion for massive particles. For massless particles, the two coincide: The helicity is equal to the chirality, both are Lorentz invariant, and both are constants of motion.
In quantum mechanics, angular momentum is quantized, and thus helicity is quantized as well. Because the eigenvalues of spin with respect to an axis have discrete values, the eigenvalues of helicity are also discrete. For a massive particle of spin S, the eigenvalues of helicity are S, S − 1, S − 2, ..., −S.
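For spin 1/2 this quantization is easy to check explicitly: the helicity operator is S·p̂ = (ħ/2) σ·p̂, whose eigenvalues are ±ħ/2 for any direction of the momentum. The numpy sketch below (ħ set to 1, with an arbitrarily chosen example momentum) diagonalizes σ·p̂/2 to confirm this.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

p = np.array([1.0, 2.0, 2.0])        # assumed example momentum
n = p / np.linalg.norm(p)            # unit vector along the momentum

helicity = 0.5 * (n[0] * sx + n[1] * sy + n[2] * sz)   # S.p_hat with hbar = 1

vals, vecs = np.linalg.eigh(helicity)
print(np.round(vals, 12))            # -> [-0.5  0.5], whatever direction is chosen
```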
For massless particles, not all of spin eigenvalues correspond to physically meaningful degrees of freedom: For example, the photon is a massless spin 1 particle with helicity eigenvalues −1 and +1, but the eigenvalue 0 is not physically present.
All known spin particles have non-zero mass; however, for hypothetical massless spin particles (the Weyl spinors), helicity is equivalent to the chirality operator multiplied by . By contrast, for massive particles, distinct chirality states (e.g., as occur in the weak interaction charges) have both positive and negative helicity components, in ratios proportional to the mass of the particle.
A treatment of the helicity of gravitational waves can be found in Weinberg.
In summary, they come in only two forms: +2 and −2, while the +1, 0 and −1 helicities are "non-dynamical" (they can be removed by a gauge transformation).
Little group
In 3 + 1 dimensions, the little group for a massless particle is the double cover of SE(2). This has unitary representations which are invariant under the SE(2) "translations" and transform as under an SE(2) rotation by . This is the helicity representation. There is also another unitary representation which transforms non-trivially under the SE(2) translations. This is the continuous spin representation.
In dimensions, the little group is the double cover of SE() (the case where is more complicated because of anyons, etc.). As before, there are unitary representations which don't transform under the SE() "translations" (the "standard" representations) and "continuous spin" representations.
See also
Chirality (physics)
Helicity basis
Gyroball, a macroscopic object (specifically a baseball) exhibiting an analogous phenomenon
Wigner's classification
Pauli–Lubanski pseudovector
References
Other sources
Quantum field theory | Helicity (particle physics) | [
"Physics"
] | 883 | [
"Quantum field theory",
"Quantum mechanics"
] |
3,027,310 | https://en.wikipedia.org/wiki/Kretschmann%20scalar | In the theory of Lorentzian manifolds, particularly in the context of applications to general relativity, the Kretschmann scalar is a quadratic scalar invariant. It was introduced by Erich Kretschmann.
Definition
The Kretschmann invariant is
K = R_{abcd} R^{abcd},
where R_{abcd} is the Riemann curvature tensor (itself built from the Christoffel symbols and their first derivatives). Because it is a sum of squares of tensor components, this is a quadratic invariant.
Einstein summation convention with raised and lowered indices is used above and throughout the article. An explicit summation expression is
Examples
For a Schwarzschild black hole of mass M, the Kretschmann scalar is
K = 48 G^2 M^2 / (c^4 r^6),
where G is the gravitational constant, c is the speed of light, and r is the radial Schwarzschild coordinate.
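A quick numerical illustration of this expression (assuming the form K = 48G²M²/(c⁴r⁶) with SI constants and a solar-mass example): evaluating it at the event horizon r_s = 2GM/c² shows that the Kretschmann scalar stays finite there while diverging as r → 0, which is one way to see that the horizon is only a coordinate singularity.

```python
G = 6.67430e-11        # m^3 kg^-1 s^-2
c = 299_792_458.0      # m/s
M_sun = 1.98892e30     # kg (assumed example mass)

def kretschmann(M, r):
    """K = 48 G^2 M^2 / (c^4 r^6) for the Schwarzschild solution, in m^-4."""
    return 48 * G**2 * M**2 / (c**4 * r**6)

M = M_sun
r_s = 2 * G * M / c**2                      # Schwarzschild radius (~2.95 km)
print(f"r_s = {r_s / 1e3:.2f} km")
print(f"K at the horizon : {kretschmann(M, r_s):.3e} m^-4  (finite)")
print(f"K at r = r_s/100 : {kretschmann(M, r_s / 100):.3e} m^-4  (grows as r^-6)")
```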
For a general FRW spacetime with metric
the Kretschmann scalar is
Relation to other invariants
Another possible invariant (which has been employed for example in writing the gravitational term of the Lagrangian for some higher-order gravity theories) is
C_{abcd} C^{abcd},
where C_{abcd} is the Weyl tensor, the conformal curvature tensor which is also the completely traceless part of the Riemann tensor. In d dimensions this is related to the Kretschmann invariant by
C_{abcd} C^{abcd} = R_{abcd} R^{abcd} − (4/(d − 2)) R_{ab} R^{ab} + (2/((d − 1)(d − 2))) R^2,
where R_{ab} is the Ricci curvature tensor and R is the Ricci scalar curvature (obtained by taking successive traces of the Riemann tensor). The Ricci tensor vanishes in vacuum spacetimes (such as the Schwarzschild solution mentioned above), and hence there the Riemann tensor and the Weyl tensor coincide, as do their invariants.
Gauge theory invariants
The Kretschmann scalar and the Chern-Pontryagin scalar
where is the left dual of the Riemann tensor, are mathematically analogous (to some extent, physically analogous) to the familiar invariants of the electromagnetic field tensor
Generalising from the gauge theory of electromagnetism to general non-abelian gauge theory, the first of these invariants is
,
an expression proportional to the Yang–Mills Lagrangian. Here is the curvature of a covariant derivative, and is a trace form. The Kretschmann scalar arises from taking the connection to be on the frame bundle.
See also
Carminati-McLenaghan invariants, for a set of invariants
Classification of electromagnetic fields, for more about the invariants of the electromagnetic field tensor
Curvature invariant, for curvature invariants in Riemannian and pseudo-Riemannian geometry in general
Curvature invariant (general relativity)
Ricci decomposition, for more about the Riemann and Weyl tensor
References
Further reading
Riemannian geometry
Lorentzian manifolds
Tensors in general relativity | Kretschmann scalar | [
"Physics",
"Engineering"
] | 523 | [
"Tensors in general relativity",
"Tensors",
"Tensor physical quantities",
"Physical quantities"
] |
3,027,800 | https://en.wikipedia.org/wiki/Methylglyoxal | Methylglyoxal (MGO) is the organic compound with the formula CH3C(O)CHO. It is a reduced derivative of pyruvic acid. It is a reactive compound that is implicated in the biology of diabetes. Methylglyoxal is produced industrially by degradation of carbohydrates using overexpressed methylglyoxal synthase.
Chemical structure
Gaseous methylglyoxal has two carbonyl groups: an aldehyde and a ketone. In the presence of water, it exists as hydrates and oligomers. The formation of these hydrates is indicative of the high reactivity of MGO, which is relevant to its biological behavior.
Biochemistry
Biosynthesis and biodegradation
In organisms, methylglyoxal is formed as a side-product of several metabolic pathways. Methylglyoxal mainly arises as side products of glycolysis involving glyceraldehyde-3-phosphate and dihydroxyacetone phosphate. It is also thought to arise via the degradation of acetone and threonine. Illustrative of the myriad pathways to MGO, aristolochic acid caused 12-fold increase of methylglyoxal from 18 to 231 μg/mg of kidney protein in poisoned mice. It may form from 3-aminoacetone, which is an intermediate of threonine catabolism, as well as through lipid peroxidation. However, the most important source is glycolysis. Here, methylglyoxal arises from nonenzymatic phosphate elimination from glyceraldehyde phosphate and dihydroxyacetone phosphate (DHAP), two intermediates of glycolysis. This conversion is the basis of a potential biotechnological route to the commodity chemical 1,2-propanediol.
Since methylglyoxal is highly cytotoxic, several detoxification mechanisms have evolved. One of these is the glyoxalase system. Methylglyoxal is detoxified by glutathione. Glutathione reacts with methylglyoxal to give a hemithioacetal, which is converted into S-D-lactoylglutathione by glyoxalase I. This thioester is hydrolyzed to D-lactate by glyoxalase II.
Biochemical function
Methylglyoxal is involved in the formation of advanced glycation end products (AGEs). In this process, methylglyoxal reacts with free amino groups of lysine and arginine and with thiol groups of cysteine forming AGEs. Histones are also heavily susceptible to modification by methylglyoxal and these modifications are elevated in breast cancer.
DNA damage is induced by reactive carbonyls, principally methylglyoxal and glyoxal, at a frequency similar to that of oxidative DNA damage. Such damage, referred to as DNA glycation, can cause mutation, breaks in DNA and cytotoxicity. In humans, the protein DJ-1 (also named PARK7) has a key role in the repair of glycated DNA bases.
Biomedical aspects
Due to increased blood glucose levels, methylglyoxal has higher concentrations in diabetics and has been linked to arterial atherogenesis. Damage by methylglyoxal to low-density lipoprotein through glycation causes a fourfold increase of atherogenesis in diabetics. Methylglyoxal binds directly to nerve endings and thereby increases the chronic extremity soreness in diabetic neuropathy.
Occurrence, other
Methylglyoxal is a component of some kinds of honey, including manuka honey; it appears to have activity against E. coli and S. aureus and may help prevent formation of biofilms formed by P. aeruginosa.
Research suggests that methylglyoxal contained in honey does not cause an increased formation of advanced glycation end products (AGEs) in healthy persons.
See also
Dicarbonyl
1,2-Dicarbonyl, methylglyoxal can be classified as an 1,2-dicarbonyl
References
Aldehydes
Endogenous aldehydes
GABAA receptor agonists
Metabolism
Conjugated ketones
Advanced glycation end-products | Methylglyoxal | [
"Chemistry",
"Biology"
] | 896 | [
"Biomolecules by chemical classification",
"Carbohydrates",
"Senescence",
"Endogenous aldehydes",
"Biomolecules",
"Cellular processes",
"Biochemistry",
"Advanced glycation end-products",
"Metabolism"
] |
13,370,236 | https://en.wikipedia.org/wiki/NORBIT | In electronics, the NORBIT family of modules is a very early form (since 1960) of digital logic developed by Philips (and also provided through and Mullard) that uses modules containing discrete components to build logic function blocks in resistor–transistor logic (RTL) or diode–transistor logic (DTL) technology.
Overview
The system was originally conceived as building blocks for solid-state hard-wired programmed logic controllers (the predecessors of programmable logic controllers (PLC)) to replace electro-mechanical relay logic in industrial control systems for process control and automation applications, similar to early Telefunken/AEG Logistat, Siemens Simatic, Brown, Boveri & Cie, ACEC Logacec or Estacord systems.
Each available logical function was recognizable by the color of its plastic container, black, blue, red, green, violet, etc. The most important circuit block contained a NOR gate (hence the name), but there were also blocks containing drivers, and a timer circuit similar to the later 555 timer IC.
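For readers unfamiliar with why a single NOR block is such a useful building element, the following small Python sketch (purely illustrative, not taken from any Philips documentation) shows that NOT, OR and AND can all be composed from NOR alone, which is why cabinets of hard-wired NORBIT logic could implement arbitrary control functions.

```python
# Minimal illustration of NOR-gate universality.
def NOR(a, b):
    return int(not (a or b))

def NOT(a):
    return NOR(a, a)            # NOR with both inputs tied together

def OR(a, b):
    return NOT(NOR(a, b))       # invert the NOR output

def AND(a, b):
    return NOR(NOT(a), NOT(b))  # De Morgan: a AND b = NOR(NOT a, NOT b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```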
The original Norbit modules of the YL 6000 series introduced in 1960 had potted single in-line packages with up to ten long flying leads arranged in two groups of up to five leads in a row. These modules were specified for frequencies of less than 1 kHz at ±24 V supply.
Also available in 1960 were so-called Combi-Element modules in single in-line packages with ten evenly spaced stiff leads in a row (5.08 mm / 0.2-inch pitch) for mounting on a PCB. They were grouped in the 1-series (aka "100 kHz series") with ±6 V supply. The newer 10-series and 20-series had similarly sized packages, but came with an additional parallel row of nine staggered leads for a total of 19 leads. The 10-series uses germanium alloy transistors, whereas in the 20-series silicon planar transistors are used for a higher cut-off frequency of up to 1 MHz (vs. 30 kHz) and a higher allowed temperature range of +85 °C (vs. +55 °C).
In 1967, the Philips/Mullard NORBIT 2 aka Valvo NORBIT-S family of modules was introduced, first consisting of the 60-series for frequencies up to 10 kHz at a single supply voltage of 24 V only. Later, the 61-series, containing thyristor trigger and control modules, was added. A 90-series became available in the mid-1970s as well. There were three basic types contained in a large (one-by-two-inch) 17-pin dual in-line package, with nine pins spaced 5.08 mm (0.2-inch) on one side and eight staggered pins on the other side.
Modules
Original Norbit family
YL 6000 series
YL6000 - NOR gate (red) ("NOR")
YL6001 - Emitter follower (yellow) ("EF")
YL6004 - High power output (Double-sized module) ("HP")
YL6005, YL6005/00 - Counter unit (triple binary) ("3C") (violet)
YL6005/05 - Single divide by 2 counter (violet) ("1C")
YL6006 - Timer (brown) ("TU")
YL6007 - Chassis ("CU")
YL6008 - Medium power output (orange) ("MP")
YL6009 - Low power output (white) ("LP")
YL6010 - Photo-electric detector head ("PD")
YL6011 - Photo-electric lamp head ("PL")
YL6012 - Twin 2-input NOR gate (black) ("2.2 NOR")
YL 6100 series
YL6101 - Rectifier unit, 3…39V 1A
YL6102 - Rectifier unit, 3…39V 5A
YL6103/00 - Regulator unit, 6…30V 250mA
YL6103/01 - Regulator unit, 1…6V 250mA
YL6104 - Longitudinal link for regulator unit
YL6105 - Regulator unit, 6V 150mA
88930 Relay series
Used to control relays using variable-length pulse sequences (as with telephone pulse dialing).
88930/30 - Input/Output unit. Filters an input pulse string and can drive two command circuits and two relay units. Contains 1×/48, 2×/51, and 2×/57.
88930/33 - Primary pulse counting unit (dual command). Can trigger two different signals via two different pulse sequences. The number of pulses that will trigger each command is configurable.
88930/36 - Dual command unit. Adds two additional commands to the /33.
88930/37 - Quad command unit. Adds four additional commands to the /33.
88930/39 - Output unit. Can drive two command circuits (in /36 or /37 command units) plus two /60 relay units. Contains 2×/51 and 2×/57.
88930/42 - Empty unit. For adding custom circuitry; comprises an empty housing, connector, and blank circuit board.
88930/48 - Pulse shaper unit for /33 (no housing)
88930/51 - Command preparation unit (no housing). For providing input to command units.
88930/54 - Reset unit
88930/57 - Relay amplifier unit (no housing). For driving a low-impedance relay such as the /60 relay block unit.
88930/60 - Relay block unit. Double-pole, double-throw 250V 2A relay. Accepts a /57 relay amplifier unit.
88930/64 - Power supply unit. Provides 280V 45mA, 150V 2mA, 24V 750mA, and 15V 120mA.
Combi-Element family
1-series / B890000 series
B893000, B164903 - Twin 3-input AND gates (orange) ("2.3A1", "2x3N1")
B893001, B164904 - Twin 2-input AND gates (orange) ("2.2A1", "2x2N1")
B893002, 2P72729 - Twin 3-input OR gates (orange) ("2.3O1", "23O1", "2x3P1")
B893003, 2P72730 - Twin 2-input OR gates (orange) ("2.2O1", "22O1", "2x2P1")
B894002, B164910 - Twin inverter amplifier (yellow) ("2IA1", "2.IA1", "2xIA1")
B894005, 2P72728 - Twin inverter amplifier (yellow) ("2IA2", "2xIA2")
B894001, B164909 - Twin emitter follower (yellow) ("2EF1", 2xEF1")
B894003, 2P72727 - Twin emitter follower (yellow) ("2EF2", "2xEF2")
B894000, B164907 - Emitter follower/inverter amplifier (yellow) ("EF1/IA1")
B895000, B164901 - Pulse shaper (Schmitt trigger + amplifier) (green) ("PS1")
B895001, B164908 - One-shot multivibrator ("OS1")
B895003 - One-shot multivibrator ("OS2")
B892000, B164902 - Flip-flop (red) ("FF1")
B892001, 2P72707 - Shift-register Flip-flop (red) ("FF2")
B892002 - Flip-flop (red) ("FF3")
B892003 - Flip-flop (red) ("FF4")
B893004, 2P72726 - Pulse logic (orange) ("PL1", "2xPL1")
B893007 - Pulse logic (orange) ("2xPL2")
B885000, B164911 - Decade counter ("DC1")
B890000 - Power amplifier ("PA1")
B896000 - Twin selector switch for core memories ("2SS1")
B893005 - Selection gate for core memories ("SG1")
2P72732 - Pulse generator for core memories ("PG1")
2P72731 - Read amplifier for core memories ("RA1")
10-series
2P73701 - Flip-flop ("FF10")
2P73702 - Flip-flop ("FF11")
2P73703 - Flip-flop / Bistable multivibrator with built-in trigger gates and set-reset inputs (black) ("FF12")
Dual trigger gate ("2.TG13")
Dual trigger gate ("2.TG14")
Quadruple trigger gate ("4.TG15")
Dual positive gate inverter amplifier ("2.GI10")
Dual positive gate inverter amplifier ("2.GI11")
Dual positive gate inverter amplifier ("2.GI12")
Gate amplifier ("GA11")
One-shot multivibrator ("OS11")
Timer unit ("TU10")
Pulse driver ("PD11")
Relay driver ("RD10")
Relay driver ("RD11")
Power amplifier ("PA10")
Pulse shaper ("PS10")
Numerical indicator tube driver ("ID10")
20-series
2P73710 - ("2.GI12", "2GI12")
Norbit 2 / Norbit-S family
60-series
2NOR60, 2.NOR60 - Twin NOR (black)
4NOR60, 4.NOR60 - Quadruple NOR (black)
2.IA60, 2IA60 - Twin inverter amplifier for low power output (blue)
LPA60 - Twin low power output
2.LPA60, 2LPA60 - Twin low power output (blue)
PA60 - Medium power output (blue)
HPA60 - High power output (black)
2.SF60, 2SF60 - Twin input switch filter (green)
TU60 - Timer (red)
FF60 - Flip-flop
GLD60 - Grounded load driver (black)
61-series
TT61 - Trigger transformer
UPA61 - Universal power amplifier
RSA61 - Rectifier and synchroniser
DOA61 - Differential operational amplifier
2NOR61, 2.NOR61 - Twin NOR
90-series
PS90 - Pulse shaper (green)
FF90 - Flip-flop (red)
2TG90, 2.TG90 - Twin trigger gate (red)
Accessories
PSU61 - Power supply
PCB60 - Printed wiring board
MC60 - Mounting chassis
UMC60 - Universal mounting chassis
MB60 - Mounting bar
Photo gallery
See also
Logic family
fischertechnik
Notes
References
Further reading
(43 pages) (NB. Also part of the Valvo-Handbuch 1962 pages 83–125.)
(253 pages) (NB. Contains a chapter about Norbit modules as well.)
(25 pages)
External links
Logic families
Digital electronics
Solid state engineering
Industrial automation
Control engineering | NORBIT | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,514 | [
"Digital electronics",
"Industrial engineering",
"Automation",
"Electronic engineering",
"Control engineering",
"Condensed matter physics",
"Industrial automation",
"Solid state engineering"
] |
13,371,195 | https://en.wikipedia.org/wiki/ANOVA%E2%80%93simultaneous%20component%20analysis | In computational biology and bioinformatics, analysis of variance – simultaneous component analysis (ASCA or ANOVA–SCA) is a method that partitions variation and enables interpretation of these partitions by SCA, a method that is similar to principal components analysis (PCA). Analysis of variance (ANOVA) is a collection of statistical models and their associated estimation procedures used to analyze differences. Statistical coupling analysis (SCA) is a technique used in bioinformatics to measure covariation between pairs of amino acids in a protein multiple sequence alignment (MSA).
This method is a multivariate or even megavariate extension of analysis of variance (ANOVA). The variation partitioning is similar to ANOVA. Each partition matches all variation induced by an effect or factor, usually a treatment regime or experimental condition. The calculated effect partitions are called effect estimates. Because even the effect estimates are multivariate, interpretation of these effect estimates is not intuitive. By applying SCA to the effect estimates one gets a simple, interpretable result (Vis, Westerhuis, Smilde and van der Greef (2007), "Statistical validation of megavariate effects in ASCA", BMC Bioinformatics 8:322). In the case of more than one effect, this method estimates the effects in such a way that the different effects are not correlated.
Details
Many research areas see increasingly large numbers of variables in only a few samples. The low sample-to-variable ratio creates problems known as multicollinearity and singularity. Because of this, most traditional multivariate statistical methods cannot be applied.
ASCA algorithm
This section details how to calculate the ASCA model for the case of two main effects with one interaction effect; the same rationale extends readily to more main effects and more interaction effects. If the first effect is time and the second effect is dosage, only the interaction between time and dosage exists. We assume there are four time points and three dosage levels.
Let X be a matrix that holds the data. X is mean centered, thus having zero mean columns. Let A and B denote the main effects and AB the interaction of these effects. Two main effects in a biological experiment can be time (A) and pH (B), and these two effects may interact. In designing such experiments one controls the main effects to several (at least two) levels. The different levels of an effect can be referred to as A1, A2, A3 and A4, representing 2, 3, 4, 5 hours from the start of the experiment. The same thing holds for effect B, for example, pH 6, pH 7 and pH 8 can be considered effect levels.
A and B are required to be balanced if the effect estimates need to be orthogonal and the partitioning unique. Matrix E holds the information that is not assigned to any effect. The partitioning gives the following notation:
X = A + B + AB + E.
Calculating main effect estimate A (or B)
Find all rows that correspond to effect A level 1 and average these rows. The result is a vector. Repeat this for the other effect levels. Make a new matrix of the same size as X and place the calculated averages in the matching rows. That is, give all rows that match effect A level 1 the average of effect A level 1.
After completing the level estimates for the effect, perform an SCA. The scores of this SCA are the sample deviations for the effect, the important variables of this effect are in the weights of the SCA loading vector.
Calculating interaction effect estimate AB
Estimating the interaction effect is similar to estimating main effects. The difference is that for interaction estimates the rows that match effect A level 1 are combined with effect B level 1, and all combinations of effects and levels are cycled through. In our example setting, with four time points and three dosage levels, there are 12 interaction sets {A1B1, A1B2, A2B1, A2B2, and so on}. It is important to deflate (remove) the main effects before estimating the interaction effect. A numerical sketch of the whole procedure is given below.
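The sketch below is a minimal numpy illustration of the procedure just described, using a hypothetical balanced design (4 time points × 3 dosage levels × 2 replicates, 20 variables); all names and numbers are illustrative assumptions, not part of any reference implementation. Each effect matrix is built by replacing rows with their level averages, the interaction is deflated of the main effects, and SCA of an effect estimate is carried out as an SVD.

```python
import numpy as np

def effect_matrix(X, labels):
    """Replace each row of X by the mean of all rows sharing the same factor level."""
    M = np.zeros_like(X)
    for lv in np.unique(labels):
        idx = labels == lv
        M[idx] = X[idx].mean(axis=0)
    return M

rng = np.random.default_rng(0)
time = np.repeat(np.arange(4), 6)                # effect A: 4 time points
dose = np.tile(np.repeat(np.arange(3), 2), 4)    # effect B: 3 dosage levels, 2 replicates
X = rng.normal(size=(24, 20))
X = X - X.mean(axis=0)                           # column mean centering

A = effect_matrix(X, time)                       # main effect A estimate
B = effect_matrix(X, dose)                       # main effect B estimate
AB = effect_matrix(X, time * 3 + dose) - A - B   # interaction, main effects deflated
E = X - A - B - AB                               # residual

# SCA on an effect estimate is carried out here as an SVD (scores T, loadings P)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
T, P = U * s, Vt.T

# fraction of the total variation captured by each partition (balanced design)
total = np.sum(X**2)
print({name: round(np.sum(M**2) / total, 2)
       for name, M in [("A", A), ("B", B), ("AB", AB), ("E", E)]})
```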
SCA on partitions A, B and AB
Simultaneous component analysis is mathematically identical to PCA, but is semantically different in that it models different objects or subjects at the same time.
The standard notation for a SCA – and PCA – model is
X = T P' + E,
where X is the data, T are the component scores and P are the component loadings. E is the residual or error matrix. Because ASCA models the variation partitions by SCA, the model for the effect estimates looks like this:
A = T_a P_a' + E_a,  B = T_b P_b' + E_b,  AB = T_ab P_ab' + E_ab.
Note that every partition has its own error matrix. However, algebra dictates that in a balanced mean centered data set every two level system is of rank 1. This results in zero errors, since any rank 1 matrix can be written as the product of a single component score and loading vector.
The full ASCA model with two effects and interaction, including the SCA, looks like this:
Decomposition: X = T_a P_a' + T_b P_b' + T_ab P_ab' + E.
Time as an effect
Because 'time' is treated as a qualitative factor in the ANOVA decomposition preceding ASCA, a nonlinear multivariate time trajectory can be modeled. An example of this is shown in Figure 10 of this reference.
References
Analysis of variance
Bioinformatics | ANOVA–simultaneous component analysis | [
"Engineering",
"Biology"
] | 1,095 | [
"Bioinformatics",
"Biological engineering"
] |
13,371,925 | https://en.wikipedia.org/wiki/Instant | In physics and the philosophy of science, instant refers to an infinitesimal interval in time, whose passage is instantaneous. In ordinary speech, an instant has been defined as "a point or very short space of time," a notion deriving from its etymological source, the Latin verb instare, from in- + stare ('to stand'), meaning 'to stand upon or near.'
The continuous nature of time and its infinite divisibility was addressed by Aristotle in his Physics, where he wrote on Zeno's paradoxes. The philosopher and mathematician Bertrand Russell was still seeking to define the exact nature of an instant thousands of years later. In 2024, John William Stafford used algorithms to demonstrate that a time difference of zero could theoretically continue to expand (in various ways) to infinity, and subsequently described a new concept that he referred to as instantaneous. He concluded by stating that instantaneous is, with respect to the measurement of time, mutually exclusive. In addition, a theoretical model of multiple Universes was proposed which exist within the context of instantaneous.
The smallest time interval certified in regulated measurements is on the order of 397 zeptoseconds (397 × 10−21 seconds).
18th and 19th century usage
Instant (usually abbreviated in print to inst.) can be used to indicate "Of the current month". For example, "the 11th inst." means the 11th day of the current month, whether that date is in the past, or the future, from the date of publication.
See also
Infinitesimal
Planck time
Present
References
Time | Instant | [
"Physics",
"Mathematics"
] | 330 | [
"Physical quantities",
"Time",
"Time stubs",
"Quantity",
"Spacetime",
"Wikipedia categories named after physical quantities"
] |
13,372,642 | https://en.wikipedia.org/wiki/Toroidal%20ring%20model | The toroidal ring model, known originally as the Parson magneton or magnetic electron, is a physical model of subatomic particles. It is also known as the plasmoid ring, vortex ring, or helicon ring. This physical model treated electrons and protons as elementary particles, and was first proposed by Alfred Lauck Parson in 1915.
Theory
Instead of a single orbiting charge, the toroidal ring was conceived as a collection of infinitesimal charge elements, which orbited or circulated along a common continuous path or "loop". In general, this path of charge could assume any shape, but tended toward a circular form due to internal repulsive electromagnetic forces. In this configuration the charge elements circulated, but the ring as a whole did not radiate due to changes in electric or magnetic fields since it remained stationary. The ring produced an overall magnetic field ("spin") due to the current of the moving charge elements. These elements circulated around the ring at the speed of light c, but at frequency ν = c/2πR, which depended inversely on the radius R. The ring's inertial energy increased when compressed, like a spring, and was also inversely proportional to its radius, and therefore proportional to its frequency ν. The theory claimed that the proportionality constant was the Planck constant h, the conserved angular momentum of the ring.
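As a rough numerical illustration of these relations (an illustrative assumption, not a claim from the historical literature), setting the ring's inertial energy equal to the electron rest energy in E = hν = hc/(2πR) = ħc/R gives a ring radius equal to the reduced Compton wavelength:

```python
# Back-of-the-envelope check of the model's own relations; the choice of the
# electron rest energy as the ring energy is an assumption for illustration.
import scipy.constants as const

E = const.m_e * const.c**2            # electron rest energy, J
R = const.hbar * const.c / E          # from E = h*nu with nu = c/(2*pi*R)
print(f"{R:.3e} m")                   # ~3.862e-13 m, the reduced Compton wavelength
```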
According to the model, electrons or protons could be viewed as bundles of "fibers" or "plasmoids" with total charge ±e. The electrostatic repulsion force between charge elements of the same sign was balanced by the magnetic attraction force between the parallel currents in the fibers of a bundle, per Ampère's law. These fibers twisted around the torus of the ring as they progressed around its radius, forming a Slinky-like helix. Circuit completion demanded that each helical plasmoid fiber twisted around the ring an integer number of times as it proceeded around the ring. This requirement was thought to account for "quantum" values of angular momentum and radiation. Chirality demanded the number of fibers to be odd, probably three, like a rope. The helicity of the twist was thought to distinguish the electron from the proton.
The toroidal or "helicon" model did not demand a constant radius or inertial energy for a particle. In general its shape, size, and motion adjusted according to the external electromagnetic fields from its environment. These adjustments or reactions to external field changes constituted the emission or absorption of radiation for the particle. The model, then, claimed to explain how particles linked together to form atoms.
History
Beginnings
The development of the helicon or toroidal ring began with André-Marie Ampère, who in 1823 proposed tiny magnetic "loops of charge" to explain the attractive force between current elements. In that same era Carl Friedrich Gauss and Michael Faraday also uncovered foundational laws of classical electrodynamics, later collected by James Maxwell as Maxwell's equations. When Maxwell expressed the laws of Gauss, Faraday, and Ampère in differential form, he assumed point particles, an assumption that remains foundational to relativity theory and quantum mechanics today. In 1867 Lord Kelvin suggested that the vortex rings of a perfect fluid discovered by Hermann von Helmholtz represented "the only true atoms". Then shortly before 1900, as scientists still debated over the very existence of atoms, J. J. Thomson and Ernest Rutherford sparked a revolution with experiments confirming the existence and properties of electrons, protons, and nuclei. Max Planck added to the fire when he solved the blackbody radiation problem by assuming not only discrete particles, but discrete frequencies of radiation emanating from these "particles" or "resonators". Planck's famous paper, which incidentally calculated both the Planck constant h and the Boltzmann constant kB, suggested that something in the "resonators" themselves provided these discrete frequencies.
Numerous theories about the structure of the atom developed in the wake of all the new information, of which the 1913 model of Niels Bohr came to predominate. The Bohr model proposed electrons in circular orbit around the nucleus with quantized values of angular momentum. Instead of radiating energy continuously, as classical electrodynamics demanded from an accelerating charge, Bohr's electron radiated discretely when it "leaped" from one state of angular momentum to another.
Parson magneton
In 1915, Alfred Lauck Parson proposed his "magneton" as an improvement over the Bohr model, depicting finite-sized particles with the ability to maintain stability and emit and absorb radiation from electromagnetic waves. At about the same time Leigh Page developed a classical theory of blackbody radiation assuming rotating "oscillators", able to store energy without radiating. Gilbert N. Lewis was inspired in part by Parson's model in developing his theory of chemical bonding. Then David L. Webster wrote three papers connecting Parson's magneton with Page's oscillator and explaining mass and alpha scattering in terms of the magneton. In 1917 Lars O. Grondahl confirmed the model with his experiments on free electrons in iron wires. Parson's theory next attracted the attention of Arthur Compton, who wrote a series of papers on the properties of the electron, and H. Stanley Allen, whose papers also argued for a "ring electron".
Current status
The aspect of the Parson magneton with the most experimental relevance (and the aspect investigated by Grondahl and Webster) was the existence of an electron magnetic dipole moment; this dipole moment is indeed present. However, later work by Paul Dirac and Alfred Landé showed that a pointlike particle could have an intrinsic quantum spin, and also a magnetic moment. The highly successful modern theory, Standard Model of particle physics describes a pointlike electron with an intrinsic spin and magnetic moment. On the other hand, the usual assertion that an electron is pointlike may be conventionally associated only with a "bare" electron. The pointlike electron would have a diverging electromagnetic field, which should create a strong vacuum polarization. In accordance with QED, deviations from the Coulomb law are predicted at Compton scale distances from the centre of electron, 10−11 cm. Virtual processes in the Compton region determine the spin of electron and renormalization of its charge and mass. It shows that the Compton region of the electron should be considered as a coherent whole with its pointlike core, forming a physical ("dressed") electron. Notice that the Dirac theory of electron also exhibits the peculiar behaviour of the Compton region. In particular, electrons display zitterbewegung at the Compton scale. From this point of view, the ring model does not contradict QED or the Dirac theory and some versions could possibly be used to incorporate gravity in quantum theory.
The question of whether the electron has a substructure of any sort must be decided by experiment. All experiments to date agree with the Standard Model of the electron, with no substructure, ring-like or otherwise. The two major approaches are high-energy electron–positron scattering and high-precision atomic tests of quantum electrodynamics, both of which agree that the electron is point-like at resolutions down to 10−20 m. At present, the Compton region of virtual processes, 10−11 cm across, is not exhibited in the high-energy experiments on electron–positron scattering.
Nikodem Popławski used the Papapetrou method of multipole expansion to show that torsion modifies Burinskii's model of the Dirac electron by replacing the Kerr–Newman singular ring of the Compton size with a toroidal structure with the outer radius of the Compton size and the inner radius of the Cartan size (10−27 m) in the Einstein–Cartan theory of gravity.
References
Further reading
David L. Bergman, J. Paul Wesley ; Spinning Charged Ring Model of Electron Yielding Anomalous Magnetic Moment, Galilean Electrodynamics. Vol. 1, 63-67 (Sept./Oct. 1990).
Particle physics
Nuclear physics
Obsolete theories in physics | Toroidal ring model | [
"Physics"
] | 1,673 | [
"Obsolete theories in physics",
"Theoretical physics",
"Particle physics",
"Nuclear physics"
] |
13,373,895 | https://en.wikipedia.org/wiki/Diaphragm%20%28structural%20system%29 | In structural engineering, a diaphragm is a structural element that transmits lateral loads to the vertical resisting elements of a structure (such as shear walls or frames). Diaphragms are typically horizontal but can be sloped in a gable roof on a wood structure or concrete ramp in a parking garage. The diaphragm forces tend to be transferred to the vertical resisting elements primarily through in-plane shear stress. The most common lateral loads to be resisted are those resulting from wind and earthquake actions, but other lateral loads such as lateral earth pressure or hydrostatic pressure can also be resisted by diaphragm action.
The diaphragm of a structure often does double duty as the floor system or roof system in a building, or the deck of a bridge, which simultaneously supports gravity loads.
Parts of a diaphragm include:
the collector (or membrane), used as a shear panel to carry in-plane shear
The drag strut member, used to transfer the load to the shear walls or frames
the chord, used to resist the tension and compression forces that develop in the diaphragm since the collector is usually incapable of handling these loads alone
Diaphragms are usually constructed of plywood or oriented strand board in timber construction; metal deck or composite metal deck in steel construction; or a concrete slab in concrete construction.
The two primary types of diaphragm are flexible and rigid. Flexible diaphragms resist lateral forces depending on the tributary area, irrespective of the flexibility of the members they are transferring force to. On the other hand, rigid diaphragms transfer load to frames or shear walls depending on their flexibility and their location in the structure. Diaphragms that cannot be classified as either flexible or rigid are referred to as semirigid. The flexibility of a diaphragm affects the distribution of lateral forces to the vertical components of the lateral force-resisting elements in a structure.
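The difference between the two idealisations can be shown with a small sketch (all numbers are hypothetical, and torsion from any offset between the centre of mass and the centre of rigidity is ignored): a flexible diaphragm sends load to each wall in proportion to tributary area, a rigid one in proportion to relative stiffness.

```python
# Illustrative storey-shear distribution to two shear walls under the two idealisations.
V = 100.0                                        # total lateral load, kN (assumed)
trib_area = {"wall_1": 40.0, "wall_2": 60.0}     # tributary areas, m^2 (flexible case)
stiffness = {"wall_1": 3.0, "wall_2": 1.0}       # relative rigidities (rigid case)

def distribute(total_load, weights):
    s = sum(weights.values())
    return {k: round(total_load * w / s, 1) for k, w in weights.items()}

print("flexible:", distribute(V, trib_area))     # load follows tributary area
print("rigid:   ", distribute(V, stiffness))     # load follows relative stiffness
```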
References
Structural system
Floors | Diaphragm (structural system) | [
"Technology",
"Engineering"
] | 404 | [
"Structural engineering",
"Building engineering",
"Floors",
"Structural system",
"Architecture stubs",
"Architecture"
] |
13,376,002 | https://en.wikipedia.org/wiki/Optical%20modulation%20amplitude | In telecommunications, optical modulation amplitude (OMA) is the difference between two optical power levels, of a digital signal generated by an optical source, e.g., a laser diode.
It is given by
OMA = P1 − P0,
where P1 is the optical power level generated when the light source is "on," and P0 is the power level generated when the light source is "off." The OMA may be specified in peak-to-peak mW.
The OMA can be related to the average power P_avg = (P1 + P0)/2 and the extinction ratio r_e = P1/P0 by
OMA = 2 P_avg (r_e − 1)/(r_e + 1).
In the limit of a high extinction ratio, OMA ≈ 2 P_avg. However, OMA is often used to express the effective usable modulation in a signal when the extinction ratio is not high and this approximation may not be valid.
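A short sketch of the relation above (the function name and the example numbers are illustrative assumptions):

```python
# OMA from average power and extinction ratio, using OMA = 2*P_avg*(r_e - 1)/(r_e + 1).
def oma_from_avg_and_er(p_avg_mw, er_db):
    r_e = 10 ** (er_db / 10.0)                  # extinction ratio as a linear power ratio
    return 2.0 * p_avg_mw * (r_e - 1.0) / (r_e + 1.0)

print(round(oma_from_avg_and_er(1.0, 6.0), 3))   # 1 mW average, 6 dB ER -> ~1.197 mW
print(round(oma_from_avg_and_er(1.0, 30.0), 3))  # high ER: OMA approaches 2 * P_avg
```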
External links
OMA presentation by Optillion, New Orleans, September 2000
Optical communications | Optical modulation amplitude | [
"Engineering"
] | 166 | [
"Optical communications",
"Telecommunications engineering"
] |
8,707,155 | https://en.wikipedia.org/wiki/Statistical%20shape%20analysis | Statistical shape analysis is an analysis of the geometrical properties of some given set of shapes by statistical methods. For instance, it could be used to quantify differences between male and female gorilla skull shapes, normal and pathological bone shapes, leaf outlines with and without herbivory by insects, etc. Important aspects of shape analysis are to obtain a measure of distance between shapes, to estimate mean shapes from (possibly random) samples, to estimate shape variability within samples, to perform clustering and to test for differences between shapes. One of the main methods used is principal component analysis (PCA). Statistical shape analysis has applications in various fields, including medical imaging, computer vision, computational anatomy, sensor measurement, and geographical profiling.
Landmark-based techniques
In the point distribution model, a shape is determined by a finite set of coordinate points, known as landmark points. These landmark points often correspond to important identifiable features such as the corners of the eyes. Once the points are collected, some form of registration is undertaken. This can be the baseline registration used by Fred Bookstein for geometric morphometrics in anthropology, or an approach like Procrustes analysis, which finds an average shape.
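As a small illustration of landmark registration, the following sketch uses SciPy's ordinary Procrustes analysis to superimpose two hypothetical landmark configurations; the remaining "disparity" is the shape difference left after translation, scaling and rotation are removed. The landmark data here are randomly generated assumptions, not measurements.

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(1)
shape_a = rng.normal(size=(6, 2))                      # 6 (x, y) landmarks on specimen A
theta = 0.4                                            # rotate, scale, translate, add noise
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
shape_b = 1.7 * shape_a @ R.T + np.array([3.0, -2.0]) + 0.05 * rng.normal(size=(6, 2))

# Procrustes superimposition removes translation, scale and rotation
mtx1, mtx2, disparity = procrustes(shape_a, shape_b)
print(round(disparity, 4))                             # small residual = similar shapes
```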
David George Kendall investigated the statistical distribution of the shape of triangles, and represented each triangle by a point on a sphere. He used this distribution on the sphere to investigate ley lines and whether three stones were more likely to be co-linear than might be expected. Statistical distributions like the Kent distribution can be used to analyse distributions on such shape spaces.
Alternatively, shapes can be represented by curves or surfaces representing their contours, by the spatial region they occupy.
Shape deformations
Differences between shapes can be quantified by investigating deformations transforming one shape into another. In particular a diffeomorphism preserves smoothness in the deformation. This was pioneered in D'Arcy Thompson's On Growth and Form before the advent of computers. Deformations can be interpreted as resulting from a force applied to the shape. Mathematically, a deformation is defined as a mapping from a shape x to a shape y by a transformation function Φ, i.e., y = Φ(x). Given a notion of the size of deformations, the distance between two shapes can be defined as the size of the smallest deformation between these shapes.
Diffeomorphometry is the comparison of shapes and forms with a metric structure based on diffeomorphisms, and is central to the field of computational anatomy. Diffeomorphic registration, introduced in the 1990s, is now an important player; ANTS, DARTEL, DEMONS, LDDMM, StationaryLDDMM, and FastLDDMM are examples of actively used computational code bases for constructing correspondences between coordinate systems based on sparse features and dense images. Voxel-based morphometry (VBM) is an important technology built on many of these principles. Methods based on diffeomorphic flows are also used. For example, deformations could be diffeomorphisms of the ambient space, resulting in the LDDMM (Large Deformation Diffeomorphic Metric Mapping) framework for shape comparison.
See also
Active shape model
Geometric data analysis
Shape analysis (disambiguation)
Procrustes analysis
Computational anatomy
Large Deformation Diffeomorphic Metric Mapping
Bayesian Estimation of Templates in Computational Anatomy
Bayesian model of computational anatomy
3D Face Morphable Model
References
Statistical data types
Spatial analysis
Computer vision
Geometric shapes | Statistical shape analysis | [
"Physics",
"Mathematics",
"Engineering"
] | 701 | [
"Geometric shapes",
"Packaging machinery",
"Mathematical objects",
"Spatial analysis",
"Space",
"Geometric objects",
"Artificial intelligence engineering",
"Spacetime",
"Computer vision"
] |
8,712,675 | https://en.wikipedia.org/wiki/Hamming%287%2C4%29 | In coding theory, Hamming(7,4) is a linear error-correcting code that encodes four bits of data into seven bits by adding three parity bits. It is a member of a larger family of Hamming codes, but the term Hamming code often refers to this specific code that Richard W. Hamming introduced in 1950. At the time, Hamming worked at Bell Telephone Laboratories and was frustrated with the error-prone punched card reader, which is why he started working on error-correcting codes.
The Hamming code adds three additional check bits to every four data bits of the message. Hamming's (7,4) algorithm can correct any single-bit error, or detect all single-bit and two-bit errors. In other words, the minimal Hamming distance between any two correct codewords is 3, and received words can be correctly decoded if they are at a distance of at most one from the codeword that was transmitted by the sender. This means that for transmission medium situations where burst errors do not occur, Hamming's (7,4) code is effective (as the medium would have to be extremely noisy for two out of seven bits to be flipped).
In quantum information, the Hamming (7,4) is used as the base for the Steane code, a type of CSS code used for quantum error correction.
Goal
The goal of the Hamming codes is to create a set of parity bits that overlap so that a single-bit error in a data bit or a parity bit can be detected and corrected. While multiple overlaps can be created, the general method is presented in Hamming codes.
{| class="wikitable"
|-
!Bit # !! 1 !! 2 !! 3 !! 4 !! 5 !! 6 !! 7
|-
!Transmitted bit !! p1 !! p2 !! d1 !! p3 !! d2 !! d3 !! d4
|-
! p1
| Yes || No || Yes || No || Yes || No || Yes
|-
! p2
| No || Yes || Yes || No || No || Yes || Yes
|-
! p3
| No || No || No || Yes || Yes || Yes || Yes
|}
This table describes which parity bits cover which transmitted bits in the encoded word. For example, p2 provides an even parity for bits 2, 3, 6, and 7. It also details which transmitted bit is covered by which parity bit by reading the column. For example, d1 is covered by p1 and p2 but not p3. This table will have a striking resemblance to the parity-check matrix (H) in the next section.
Furthermore, if the parity columns in the above table were removed
{| class="wikitable"
|-
! !! d1 !! d2 !! d3 !! d4
|-
! p1
| Yes || Yes || No || Yes
|-
! p2
| Yes || No || Yes || Yes
|-
! p3
| No || Yes || Yes || Yes
|}
then resemblance to rows 1, 2, and 4 of the code generator matrix (G) below will also be evident.
So, by picking the parity bit coverage correctly, all errors with a Hamming distance of 1 can be detected and corrected, which is the point of using a Hamming code.
Hamming matrices
Hamming codes can be computed in linear algebra terms through matrices because Hamming codes are linear codes. For the purposes of Hamming codes, two Hamming matrices can be defined: the code generator matrix G and the parity-check matrix H:
G^T = ( (1 1 1 0 0 0 0), (1 0 0 1 1 0 0), (0 1 0 1 0 1 0), (1 1 0 1 0 0 1) ),
H = ( (1 0 1 0 1 0 1), (0 1 1 0 0 1 1), (0 0 0 1 1 1 1) ).
As mentioned above, rows 1, 2, and 4 of G should look familiar as they map the data bits to their parity bits:
p1 covers d1, d2, d4
p2 covers d1, d3, d4
p3 covers d2, d3, d4
The remaining rows (3, 5, 6, 7) map the data bits to their positions in encoded form, and each of those rows contains only a single 1, so it is an identical copy of the corresponding data bit. In fact, these four rows are linearly independent and form the identity matrix (by design, not coincidence).
Also as mentioned above, the three rows of H should be familiar. These rows are used to compute the syndrome vector at the receiving end and if the syndrome vector is the null vector (all zeros) then the received word is error-free; if non-zero then the value indicates which bit has been flipped.
The four data bits, assembled as a vector p, are pre-multiplied by G (i.e., Gp) and taken modulo 2 to yield the encoded value that is transmitted. The original 4 data bits are converted to seven bits (hence the name "Hamming(7,4)") with three parity bits added to ensure even parity using the above data bit coverages. The first table above shows the mapping between each data and parity bit into its final bit position (1 through 7) but this can also be presented in a Venn diagram. The first diagram in this article shows three circles (one for each parity bit) and encloses data bits that each parity bit covers. The second diagram (shown to the right) is identical but, instead, the bit positions are marked.
For the remainder of this section, the following 4 bits (shown as a column vector) will be used as a running example: p = (1, 0, 1, 1)^T, i.e. d1 = 1, d2 = 0, d3 = 1, d4 = 1.
Channel coding
Suppose we want to transmit this data (1011) over a noisy communications channel. Specifically, a binary symmetric channel meaning that error corruption does not favor either zero or one (it is symmetric in causing errors). Furthermore, all source vectors are assumed to be equiprobable. We take the product of G and p, with entries modulo 2, to determine the transmitted codeword x: x = Gp mod 2 = (0, 1, 1, 0, 0, 1, 1)^T.
This means that 0110011 would be transmitted instead of transmitting 1011.
Programmers concerned about multiplication should observe that each row of the result is the least significant bit of the Population Count of set bits resulting from the row and column being Bitwise ANDed together rather than multiplied.
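The encoding step of the running example can be reproduced with a few lines of NumPy (an illustrative sketch, with G written out row by row as implied by the coverage table above):

```python
import numpy as np

# Code generator matrix G: rows 1, 2 and 4 are the parity rows, the rest copy the data.
G = np.array([[1, 1, 0, 1],    # p1 = d1 + d2 + d4
              [1, 0, 1, 1],    # p2 = d1 + d3 + d4
              [1, 0, 0, 0],    # d1
              [0, 1, 1, 1],    # p3 = d2 + d3 + d4
              [0, 1, 0, 0],    # d2
              [0, 0, 1, 0],    # d3
              [0, 0, 0, 1]])   # d4

p = np.array([1, 0, 1, 1])     # the running example data
x = G @ p % 2                  # matrix product taken modulo 2
print(x)                       # [0 1 1 0 0 1 1]  ->  0110011, as above
```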
In the adjacent diagram, the seven bits of the encoded word are inserted into their respective locations; from inspection it is clear that the parity of the red, green, and blue circles are even:
red circle has two 1's
green circle has two 1's
blue circle has four 1's
What will be shown shortly is that if, during transmission, a bit is flipped then the parity of two or all three circles will be incorrect and the errored bit can be determined (even if one of the parity bits) by knowing that the parity of all three of these circles should be even.
Parity check
If no error occurs during transmission, then the received codeword r is identical to the transmitted codeword x: r = x = (0, 1, 1, 0, 0, 1, 1)^T.
The receiver multiplies H and r to obtain the syndrome vector z, which indicates whether an error has occurred, and if so, for which codeword bit. Performing this multiplication (again, entries modulo 2) gives z = Hr mod 2 = (0, 0, 0)^T.
Since the syndrome z is the null vector, the receiver can conclude that no error has occurred. This conclusion is based on the observation that when the data vector is multiplied by G, a change of basis occurs into a vector subspace that is the kernel of H. As long as nothing happens during transmission, r will remain in the kernel of H and the multiplication will yield the null vector.
Error correction
Otherwise, suppose a single bit error has occurred, so that we can write
r = x + e_i
modulo 2, where e_i is the i-th unit vector, that is, a zero vector with a 1 in the i-th place, counting from 1.
Thus the above expression signifies a single bit error in the i-th place.
Now, if we multiply this vector by H: Hr = H(x + e_i) = Hx + He_i.
Since x is the transmitted data, it is without error, and as a result, the product of H and x is zero. Thus Hr = He_i.
Now, the product of H with the i-th standard basis vector picks out the i-th column of H, so we know the error occurs in the place corresponding to that column of H.
For example, suppose we have introduced a bit error on bit #5, so that r = x + e_5 = (0, 1, 1, 0, 1, 1, 1)^T.
The diagram to the right shows the bit error (shown in blue text) and the bad parity created (shown in red text) in the red and green circles. The bit error can be detected by computing the parity of the red, green, and blue circles. If a bad parity is detected then the data bit that overlaps only the bad parity circles is the bit with the error. In the above example, the red and green circles have bad parity so the bit corresponding to the intersection of red and green but not blue indicates the errored bit.
Now,
z = Hr mod 2 = (1, 0, 1)^T,
which corresponds to the fifth column of H. Furthermore, the general algorithm used (see Hamming code#General algorithm) was intentional in its construction so that the syndrome of 101 corresponds to the binary value of 5, which indicates the fifth bit was corrupted. Thus, an error has been detected in bit 5, and can be corrected (simply flip or negate its value): r_corrected = (0, 1, 1, 0, 0, 1, 1)^T.
This corrected received value now indeed matches the transmitted value x from above.
Decoding
Once the received vector has been determined to be error-free or corrected if an error occurred (assuming only zero or one bit errors are possible) then the received data needs to be decoded back into the original four bits.
First, define a matrix R that selects the data-bit positions 3, 5, 6 and 7 of the codeword:
R = ( (0 0 1 0 0 0 0), (0 0 0 0 1 0 0), (0 0 0 0 0 1 0), (0 0 0 0 0 0 1) ).
Then the received value, p_r, is equal to Rr. Using the running example from above, p_r = Rr = (1, 0, 1, 1)^T, which recovers the original four data bits.
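Continuing the sketch from the channel-coding section, the lines below introduce a single-bit error in bit #5, locate it from the syndrome, correct it, and decode the data bits with R:

```python
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],   # parity check p1
              [0, 1, 1, 0, 0, 1, 1],   # parity check p2
              [0, 0, 0, 1, 1, 1, 1]])  # parity check p3
R = np.array([[0, 0, 1, 0, 0, 0, 0],   # pick out d1, d2, d3, d4
              [0, 0, 0, 0, 1, 0, 0],
              [0, 0, 0, 0, 0, 1, 0],
              [0, 0, 0, 0, 0, 0, 1]])

r = np.array([0, 1, 1, 0, 0, 1, 1])    # transmitted codeword from the example
r[4] ^= 1                              # introduce a single-bit error in bit #5
z = H @ r % 2                          # syndrome
pos = z @ [1, 2, 4]                    # syndrome read as a binary bit position
if pos:
    r[pos - 1] ^= 1                    # flip the indicated bit back
print(R @ r % 2)                       # [1 0 1 1] -- the original data
```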
Multiple bit errors
It is not difficult to show that only single bit errors can be corrected using this scheme. Alternatively, Hamming codes can be used to detect single and double bit errors, by merely noting that the product of H is nonzero whenever errors have occurred. In the adjacent diagram, bits 4 and 5 were flipped. This yields only one circle (green) with an invalid parity but the errors are not recoverable.
However, the Hamming (7,4) and similar Hamming codes cannot distinguish between single-bit errors and two-bit errors. That is, two-bit errors appear the same as one-bit errors. If error correction is performed on a two-bit error the result will be incorrect.
Similarly, Hamming codes cannot detect or recover from an arbitrary three-bit error; Consider the diagram: if the bit in the green circle (colored red) were 1, the parity checking would return the null vector, indicating that there is no error in the codeword.
All codewords
Since the source is only 4 bits then there are only 16 possible transmitted words. Included is the eight-bit value if an extra parity bit is used (see Hamming(7,4) code with an additional parity bit). (The data bits are shown in blue; the parity bits are shown in red; and the extra parity bit shown in green.)
E7 lattice
The Hamming(7,4) code is closely related to the E7 lattice and, in fact, can be used to construct it, or more precisely, its dual lattice E7∗ (a similar construction for E7 uses the dual code [7,3,4]2). In particular, taking the set of all vectors x in Z7 with x congruent (modulo 2) to a codeword of Hamming(7,4), and rescaling by 1/, gives the lattice E7∗
This is a particular instance of a more general relation between lattices and codes. For instance, the extended (8,4)-Hamming code, which arises from the addition of a parity bit, is also related to the E8 lattice.
References
External links
A programming problem about the Hamming Code(7,4)
Coding theory
Error detection and correction
Computer arithmetic | Hamming(7,4) | [
"Mathematics",
"Engineering"
] | 2,374 | [
"Discrete mathematics",
"Coding theory",
"Reliability engineering",
"Error detection and correction",
"Computer arithmetic",
"Arithmetic"
] |
8,715,410 | https://en.wikipedia.org/wiki/Surface-enhanced%20laser%20desorption/ionization | Surface-enhanced laser desorption/ionization (SELDI) is a soft ionization method in mass spectrometry (MS) used for the analysis of protein mixtures. It is a variation of matrix-assisted laser desorption/ionization (MALDI). In MALDI, the sample is mixed with a matrix material and applied to a metal plate before irradiation by a laser, whereas in SELDI, proteins of interest in a sample become bound to a surface before MS analysis. The sample surface is a key component in the purification, desorption, and ionization of the sample. SELDI is typically used with time-of-flight (TOF) mass spectrometers and is used to detect proteins in tissue samples, blood, urine, or other clinical samples, however, SELDI technology can potentially be used in any application by simply modifying the sample surface.
Sample preparation and instrumentation
SELDI can be seen as a combination of solid-phase chromatography and TOF-MS. The sample is applied to a modified chip surface, which allows for the specific binding of proteins from the sample to the surface. Contaminants and unbound proteins are then washed away. After washing the sample, an energy absorbing matrix, such as sinapinic acid (SPA) or α-Cyano-4-hydroxycinnamic acid (CHCA), is applied to the surface and allowed to crystallize with the sample. Alternatively, the matrix can be attached to the sample surface by covalent modification or adsorption before the sample is applied. The sample is then irradiated by a pulsed laser, causing ablation and desorption of the sample and matrix.
SELDI-TOF-MS
Samples spotted on a SELDI surface are typically analyzed using time-of-flight mass spectrometry. An irradiating laser ionizes peptides from crystals of the sample/matrix mixture. The matrix absorbs the energy of the laser pulse, preventing destruction of the molecule, and transfers charge to the sample molecules, forming ions. The ions are then briefly accelerated through an electric potential and travel down a field-free flight tube where they are separated by their velocity differences. The mass-to-charge ratio of each ion can be determined from the length of the tube, the kinetic energy given to ions by the electric field, and the velocity of the ions in the tube. The velocity of the ions is inversely proportional to the square root of the mass-to-charge ratio of the ion; ions with low mass-to-charge ratios are detected earlier than ions with high mass-to-charge ratios.
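The flight-time relation described above can be illustrated with a short sketch; the accelerating potential and flight-tube length are assumed, hypothetical instrument values. Heavier singly charged ions arrive later in proportion to the square root of m/z.

```python
import math

e = 1.602176634e-19      # elementary charge, C
u = 1.66053906660e-27    # atomic mass unit, kg
U = 20_000.0             # accelerating potential, V (assumed)
L = 1.5                  # flight-tube length, m (assumed)

def flight_time_us(mz):
    """Flight time in microseconds for a singly charged ion of m/z (in Da)."""
    v = math.sqrt(2 * e * U / (mz * u))   # z*e*U = (1/2) m v^2
    return 1e6 * L / v

for mz in (1_000, 10_000, 20_000):
    print(mz, "Da ->", round(flight_time_us(mz), 2), "us")
```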
SELDI surface
The binding of proteins to the SELDI surface acts as a solid-phase chromatographic separation step, and as a result, the proteins attached to the surface are easier to analyze. The surface is composed primarily of materials with a variety of physico-chemical characteristics, metal ions, or anion or cation exchangers. Common surfaces include CM10 (weak cation exchange), H50 (hydrophobic surface, similar to C6-C12 reverse phase chromatography), IMAC30 (metal-binding surface), and Q10 (strong anion exchange). SELDI surfaces can also be modified to study DNA-protein binding, antibody-antigen assays, and receptor-ligand interactions.
Additional surface methods
The SELDI process is a combination of surface-enhanced neat desorption (SEND), surface-enhanced affinity-capture (SEAC), and surface-enhanced photolabile attachment and release (SEPAR) mass spectrometry. With SEND, analytes can be desorbed and ionized without adding a matrix; the matrix is incorporated into the sample surface. In SEAC, the sample surface is modified to bind the analyte of interest for analysis with laser desorption/ionization mass spectrometry (LDI-MS). SEPAR is a combination of SEND and SEAC; the modified sample surface also acts as an energy absorbing matrix for ionization.
History
SELDI technology was developed by T. William Hutchens and Tai-Tung Yip at Baylor College of Medicine in 1993. Hutchens and Yip attached single-stranded DNA to agarose beads and used the beads to capture lactoferrin, an iron-binding glycoprotein, from preterm infant urine. The beads were incubated in the sample and then removed, washed, and analyzed with a MALDI-MS probe tip. This research led to the idea that MALDI surfaces could be derivatized with SEAC devices; the technique was later described by Hutchens and Yip in 1998.
SELDI technology was first commercialized by Ciphergen Biosystems in 1997 as the ProteinChip system, and is now produced and marketed by Bio-Rad Laboratories.
Applications
SELDI technology can potentially be used in any application by modifying the SELDI surface. SELDI-TOF-MS is optimal for analyzing low molecular weight proteins (<20 kDa) in a variety of biological materials, such as tissue samples, blood, urine, and serum. This technique is often used in combination with immunoblotting and immunohistochemistry as a diagnostic tool to aid in the detection of biomarkers for diseases, and has also been applied to the diagnosis of cancer and neurological disorders. SELDI-TOF-MS has been used in biomarker discovery for lung, breast, liver, colon, pancreatic, bladder, kidney, cervical, ovarian, and prostate cancers. SELDI technology is most widely used in biomarker discovery to compare protein levels in serum samples from healthy and diseased patients. Serum studies allow for a minimally invasive approach to disease monitoring in patients and are useful in the early detection and diagnosis of diseases and neurological disorders, such as amyotrophic lateral sclerosis (ALS) and Alzheimer's.
SELDI-TOF-MS can also be used in biological applications to detect post-translationally modified proteins and to study phosphorylation states of proteins.
Advantages
A major advantage of the SELDI process is the chromatographic separation step. While liquid chromatography-mass spectrometry (LC-MS) is based on the elution of analytes in the separated sample, separation in SELDI is based on retention. Any sample components that interfere with analytical measurements, such as salts, detergents, and buffers, are washed away before analysis with mass spectrometry. Only the analytes that are bound to the surface are analyzed, reducing the overall complexity of the sample. As a result, there is an increased probability of detecting analytes that are present in lower concentrations. Because of the initial separation step, protein profiles can be obtained from samples of as few as 25-50 cells.
In biological applications, SELDI-TOF-MS has a major advantage in that the technique does not require the use of radioactive isotopes. Furthermore, an assay can be sampled at multiple time points during an experiment. Additionally, in proteomics, the biomarker discovery, identification, and validation steps can all be done on the SELDI surface.
Limitations
SELDI is often criticized for its reproducibility due to differences in the mass spectra obtained when using different batches of chip surfaces. While the method has been successful with analyzing low molecular weight proteins, consistent results have not been obtained when analyzing high molecular weight proteins. There also exists a potential for sample bias, as nonspecific absorption matrices favor the binding of analytes with higher abundances in the sample at the expense of less abundant analytes. While SELDI-TOF-MS has detection limits in the femtomolar range, the baseline signal in the spectra varies and noise due to the matrix is maximal below 2000 Da, with Ciphergen Biosystems suggesting to ignore spectral peaks below 2000 Da.
See also
Soft laser desorption
List of mass spectrometry software
References
Proteomics
Biochemistry methods
Proteins
Ion source | Surface-enhanced laser desorption/ionization | [
"Physics",
"Chemistry",
"Biology"
] | 1,692 | [
"Biochemistry methods",
"Biomolecules by chemical classification",
"Spectrum (physical sciences)",
"Ion source",
"Mass spectrometry",
"Molecular biology",
"Biochemistry",
"Proteins"
] |
5,522,946 | https://en.wikipedia.org/wiki/Institute%20of%20Transportation%20Engineers | The Institute of Transportation Engineers (ITE) is an international educational and scientific association of transportation professionals who are responsible for meeting mobility and safety needs. ITE facilitates the application of technology and scientific principles to research, planning, functional design, implementation, operation, policy development, and management for any mode of ground transportation.
History
The organization was formed in October 1930 amid growing public demand for experts to alleviate traffic congestion and the frequency of crashes that came from the rapid development of automotive transportation. Various national and regional conferences called for discussions of traffic problems. These discussions led to a group of transportation engineers starting the creation of the first professional traffic society. A meeting took place in Pittsburgh on October 2, 1930, where a tentative draft of the organization's constitution and by-laws came to fruition. The constitution and by-laws were later adopted at a meeting in New York on January 20, 1931. The first chapter of the Institute of Traffic Engineers was established consisting of 30 men with Ernest P. Goodrich as its first president.
The organization consists of 10 districts, 62 sections, and 30 chapters from various parts of the world.
Transportation Professional Certification Board
ITE founded the Transportation Professional Certification Board Inc. (TPCB) in 1996 as an autonomous certification body. TPCB facilitates multiple testing and certification pathways for transportation professionals.
Standards development
ITE is also a standards development organization designated by the United States Department of Transportation (USDOT). One of the current standardization efforts is the advanced transportation controller. ITE is also known for publishing articles about trip generation, parking generation, parking demand, and various transportation-related material through ITE Journal, a monthly publication.
Criticism
Urbanists such as Jeff Speck have criticized ITE standards for encouraging towns to build more, wider streets making pedestrians less safe and cities less walkable. Donald Shoup in his book The High Cost of Free Parking argues that the ITE Trip Generation Manual estimates give towns the false confidence to regulate minimum parking requirements which reinforce sprawl.
See also
National Transportation Communications for Intelligent Transportation System Protocol (NTCIP)
Canadian Institute of Transportation Engineers
References
External links
Transportation engineering
Organizations based in Washington, D.C.
Road transport organizations
Organizations established in 1930
Engineering organizations
Transportation organizations based in the United States | Institute of Transportation Engineers | [
"Engineering"
] | 454 | [
"Transportation engineering",
"Civil engineering",
"nan",
"Industrial engineering"
] |
5,524,046 | https://en.wikipedia.org/wiki/National%20Atmospheric%20Release%20Advisory%20Center | The National Atmospheric Release Advisory Center (NARAC) is located at the University of California's Lawrence Livermore National Laboratory. It is a national support and resource center for planning, real-time assessment, emergency response, and detailed studies of incidents involving a wide variety of hazards, including nuclear, radiological, chemical, biological, and natural emissions.
NARAC provides tools and services to federal, state and local governments, that map the probable spread of hazardous material accidentally or intentionally released into the atmosphere.
NARAC provides atmospheric plume predictions in time for an emergency manager to decide if protective action is necessary to protect the health and safety of people in affected areas.
The NARAC facility includes
Scientific and technical staff who provide support and training for NARAC tools, as well as quality assurance and detailed analysis of atmospheric releases.
24 hour x 7 day on-duty or on-call staff.
Training facility.
An Operations Center with uninterruptible power, backup power generators, and robust computer systems.
Links to over 100 emergency operations centers in the U.S.
A team of research and operational staff with expertise in atmospheric research, operational meteorology, numerical modeling, computer science, software engineering, geographical information systems, computer graphics, hazardous material (radiological, chemical, biological) properties and effects.
The Emergency Response System: Real time dispersion modeling
The NARAC emergency response central modeling system consists of an integrated suite of meteorological and atmospheric dispersion models. The meteorological data assimilation model, ADAPT, constructs fields of such variables as the mean winds, pressure, precipitation, temperature, and turbulence. Non-divergent wind fields are produced by a procedure based on the variational principle and a finite-element discretization. The dispersion model, LODI, solves the 3-D advection-diffusion equation using a Lagrangian stochastic, Monte Carlo method. LODI includes methods for simulating the processes of mean wind advection, turbulent diffusion, radioactive decay and production, bio-agent degradation, first-order chemical reactions, wet deposition, gravitational settling, dry deposition, and buoyant/momentum plume rise.
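The Lagrangian stochastic (Monte Carlo) dispersion idea behind LODI can be illustrated by a toy two-dimensional random-walk sketch. This is purely illustrative and is in no way NARAC's actual code; every number in it (wind, diffusivity, particle count, duration) is an assumption.

```python
import numpy as np

rng = np.random.default_rng(42)
n_particles, n_steps, dt = 10_000, 600, 1.0      # hypothetical release parameters
u, v = 4.0, 0.5                                  # mean wind components, m/s (assumed)
K = 10.0                                         # eddy diffusivity, m^2/s (assumed)

pos = np.zeros((n_particles, 2))                 # all particles released at the origin
sigma = np.sqrt(2.0 * K * dt)                    # random-walk step implied by diffusivity
for _ in range(n_steps):
    pos += np.array([u, v]) * dt                                 # advection by mean wind
    pos += rng.normal(scale=sigma, size=pos.shape)               # turbulent diffusion

# plume centre and spread after 10 simulated minutes; concentrations could be
# estimated by binning particle counts on a grid downwind of the source
print("centre:", pos.mean(axis=0), "spread:", pos.std(axis=0))
```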
The models are coupled to NARAC databases providing topography, geographical data, chemical-biological-nuclear agent properties and health risk levels, real-time meteorological observational data, and global and mesoscale forecast model predictions. The NARAC modeling system also includes an in-house version of the Naval Research Laboratory's mesoscale weather forecast model COAMPS.
See also
Materials MASINT
Accidental release source terms
Air Resources Laboratory
Air Quality Modeling Group
Atmospheric dispersion modeling
Department of Public Safety
National Center for Atmospheric Research
University Corporation for Atmospheric Research
References
External links and sources
National Atmospheric Release Advisory Center (official website)
Lawrence Livermore National Laboratory (official website)
Atmospheric dispersion modeling
Lawrence Livermore National Laboratory | National Atmospheric Release Advisory Center | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 579 | [
"Atmospheric dispersion modeling",
"Environmental modelling",
"Environmental engineering"
] |
5,529,638 | https://en.wikipedia.org/wiki/Stress%E2%80%93energy%E2%80%93momentum%20pseudotensor | In the theory of general relativity, a stress–energy–momentum pseudotensor, such as the Landau–Lifshitz pseudotensor, is an extension of the non-gravitational stress–energy tensor that incorporates the energy–momentum of gravity. It allows the energy–momentum of a system of gravitating matter to be defined. In particular it allows the total of matter plus the gravitating energy–momentum to form a conserved current within the framework of general relativity, so that the total energy–momentum crossing the hypersurface (3-dimensional boundary) of any compact space–time hypervolume (4-dimensional submanifold) vanishes.
Some people (such as Erwin Schrödinger) have objected to this derivation on the grounds that pseudotensors are inappropriate objects in general relativity, but the conservation law only requires the use of the 4-divergence of a pseudotensor which is, in this case, a tensor (which also vanishes). Mathematical developments in the 1980s have allowed pseudotensors to be understood as sections of jet bundles, thus providing a firm theoretical foundation for the concept of pseudotensors in general relativity.
Landau–Lifshitz pseudotensor
The Landau–Lifshitz pseudotensor, a stress–energy–momentum pseudotensor for gravity, when combined with terms for matter (including photons and neutrinos), allows the energy–momentum conservation laws to be extended into general relativity.
Requirements
Landau and Lifshitz were led by four requirements in their search for a gravitational energy momentum pseudotensor, :
that it be constructed entirely from the metric tensor, so as to be purely geometrical or gravitational in origin.
that it be index symmetric, i.e. , (to conserve angular momentum)
that, when added to the stress–energy tensor of matter, , its total ordinary 4-divergence (, not ) vanishes so that we have a conserved expression for the total stress–energy–momentum. (This is required of any conserved current.)
that it vanish locally in an inertial frame of reference (which requires that it only contains first order and not second or higher order derivatives of the metric). This is because the equivalence principle requires that the gravitational force field, the Christoffel symbols, vanish locally in some frames. If gravitational energy is a function of its force field, as is usual for other forces, then the associated gravitational pseudotensor should also vanish locally.
Definition
Landau and Lifshitz showed that there is a unique construction that satisfies these requirements, namely
where:
Gμν is the Einstein tensor (which is constructed from the metric)
gμν is the inverse of the metric tensor, gμν
is the determinant of the metric tensor. , hence its appearance as .
are partial derivatives, not covariant derivatives
is the Einstein gravitational constant
G is the Newtonian constant of gravitation
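With the symbols listed above, this construction is commonly quoted in the following form (a widely used textbook convention, reproduced here only for orientation; signs and the placement of factors of c vary between sources):

$$
t_{\mathrm{LL}}^{\mu\nu} \;=\; -\frac{c^{4}}{8\pi G}\,G^{\mu\nu}
\;+\; \frac{c^{4}}{16\pi G\,(-g)}\,
\partial_{\alpha}\partial_{\beta}\!\left[(-g)\bigl(g^{\mu\nu}g^{\alpha\beta}-g^{\mu\alpha}g^{\nu\beta}\bigr)\right]
$$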
Verification
Examining the 4 requirement conditions we can see that the first 3 are relatively easy to demonstrate:
Since the Einstein tensor, , is itself constructed from the metric, so therefore is
Since the Einstein tensor, , is symmetric so is since the additional terms are symmetric by inspection.
The Landau–Lifshitz pseudotensor is constructed so that when added to the stress–energy tensor of matter, , its total 4-divergence vanishes: . This follows from the cancellation of the Einstein tensor, , with the stress–energy tensor, by the Einstein field equations; the remaining term vanishes algebraically due to the commutativity of partial derivatives applied across antisymmetric indices.
The Landau–Lifshitz pseudotensor appears to include second derivative terms in the metric, but in fact the explicit second derivative terms in the pseudotensor cancel with the implicit second derivative terms contained within the Einstein tensor, . This is more evident when the pseudotensor is directly expressed in terms of the metric tensor or the Levi-Civita connection; only the first derivative terms in the metric survive and these vanish where the frame is locally inertial at any chosen point. As a result, the entire pseudotensor vanishes locally (again, at any chosen point) , which demonstrates the delocalisation of gravitational energy–momentum.
Cosmological constant
When the Landau–Lifshitz pseudotensor was formulated it was commonly assumed that the cosmological constant, , was zero. Nowadays, that assumption is suspect, and the expression frequently gains a term, giving:
This is necessary for consistency with the Einstein field equations.
Metric and affine connection versions
Landau and Lifshitz also provide two equivalent but longer expressions for the Landau–Lifshitz pseudotensor:
Metric tensor version:
Affine connection version:
This definition of energy–momentum is covariantly applicable not just under Lorentz transformations, but also under general coordinate transformations.
Einstein pseudotensor
This pseudotensor was originally developed by Albert Einstein.
Paul Dirac showed that the mixed Einstein pseudotensor
satisfies a conservation law
Clearly this pseudotensor for gravitational stress–energy is constructed exclusively from the metric tensor and its first derivatives. Consequently, it vanishes at any event when the coordinate system is chosen to make the first derivatives of the metric vanish because each term in the pseudotensor is quadratic in the first derivatives of the metric tensor field. However it is not symmetric, and is therefore not suitable as a basis for defining the angular momentum.
See also
Bel–Robinson tensor
Gravitational wave
Notes
References
Tensors
Tensors in general relativity | Stress–energy–momentum pseudotensor | [
"Physics",
"Engineering"
] | 1,137 | [
"Tensors in general relativity",
"Tensors",
"Tensor physical quantities",
"Physical quantities"
] |
5,529,757 | https://en.wikipedia.org/wiki/Fundamental%20thermodynamic%20relation | In thermodynamics, the fundamental thermodynamic relation comprises four fundamental equations which demonstrate how four important thermodynamic quantities depend on variables that can be controlled and measured experimentally. Thus, they are essentially equations of state, and using the fundamental equations, experimental data can be used to determine sought-after quantities like G (Gibbs free energy) or H (enthalpy). The relation is generally expressed as an infinitesimal change in internal energy in terms of infinitesimal changes in entropy and volume for a closed system in thermal equilibrium in the following way:
dU = T dS − P dV
Here, U is internal energy, T is absolute temperature, S is entropy, P is pressure, and V is volume.
This is only one expression of the fundamental thermodynamic relation. It may be expressed in other ways, using different variables (e.g. using thermodynamic potentials). For example, the fundamental relation may be expressed in terms of the enthalpy H as
dH = T dS + V dP,
in terms of the Helmholtz free energy F as
dF = −S dT − P dV,
and in terms of the Gibbs free energy G as
dG = −S dT + V dP.
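As a quick consistency check of these relations, the sketch below (an illustration, not from the article) starts from a fundamental equation U(S, V) for a monatomic ideal gas with the gas constant and amount of substance set to 1, reads off T and P from the partial derivatives dictated by dU = T dS − P dV, and recovers the ideal gas law.
```python
import sympy as sp

# U(S, V) for a monatomic ideal gas, constants set to 1 for brevity
# (a Sackur-Tetrode-like form).  The fundamental relation dU = T dS - P dV
# identifies T = (dU/dS)_V and P = -(dU/dV)_S.
S, V = sp.symbols('S V', positive=True)
U = sp.Rational(3, 2) * sp.exp(sp.Rational(2, 3) * S) / V**sp.Rational(2, 3)

T = sp.diff(U, S)        # temperature from (dU/dS)_V
P = -sp.diff(U, V)       # pressure from -(dU/dV)_S

print(sp.simplify(P * V - T))                    # 0  ->  P V = T, the ideal gas law
print(sp.simplify(U - sp.Rational(3, 2) * T))    # 0  ->  U = (3/2) T, as expected
```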
The first and second laws of thermodynamics
The first law of thermodynamics states that:
where and are infinitesimal amounts of heat supplied to the system by its surroundings and work done by the system on its surroundings, respectively.
According to the second law of thermodynamics we have for a reversible process:
Hence:
By substituting this into the first law, we have:
Letting be reversible pressure-volume work done by the system on its surroundings,
we have:
This equation has been derived in the case of reversible changes. However, since U, S, and V are thermodynamic state functions that depend on only the initial and final states of a thermodynamic process, the above relation holds also for non-reversible changes. If the composition, i.e. the amounts of the chemical components, in a system of uniform temperature and pressure can also change, e.g. due to a chemical reaction, the fundamental thermodynamic relation generalizes to:
The are the chemical potentials corresponding to particles of type .
If the system has more external parameters than just the volume that can change, the fundamental thermodynamic relation generalizes to
Here the are the generalized forces corresponding to the external parameters . (The negative sign used with pressure is unusual and arises because pressure represents a compressive stress that tends to decrease volume. Other generalized forces tend to increase their conjugate displacements.)
Relationship to statistical mechanics
The fundamental thermodynamic relation and statistical mechanical principles can be derived from one another.
Derivation from statistical mechanical principles
The above derivation uses the first and second laws of thermodynamics. The first law of thermodynamics is essentially a definition of heat, i.e. heat is the change in the internal energy of a system that is not caused by a change of the external parameters of the system.
However, the second law of thermodynamics is not a defining relation for the entropy. The fundamental definition of entropy of an isolated system containing an amount of energy is:
where is the number of quantum states in a small interval between and . Here is a macroscopically small energy interval that is kept fixed. Strictly speaking this means that the entropy depends on the choice of . However, in the thermodynamic limit (i.e. in the limit of infinitely large system size), the specific entropy (entropy per unit volume or per unit mass) does not depend on . The entropy is thus a measure of the uncertainty about exactly which quantum state the system is in, given that we know its energy to be in some interval of size .
Deriving the fundamental thermodynamic relation from first principles thus amounts to proving that the above definition of entropy implies that for reversible processes we have:
The fundamental assumption of statistical mechanics is that all the states at a particular energy are equally likely. This allows us to extract all the thermodynamical quantities of interest. The temperature is defined as:
This definition can be derived from the microcanonical ensemble, which is a system of a constant number of particles, a constant volume and that does not exchange energy with its environment. Suppose that the system has some external parameter, x, that can be changed. In general, the energy eigenstates of the system will depend on x. According to the adiabatic theorem of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in.
The generalized force, X, corresponding to the external parameter x is defined such that is the work performed by the system if x is increased by an amount dx. E.g., if x is the volume, then X is the pressure. The generalized force for a system known to be in energy eigenstate is given by:
Since the system can be in any energy eigenstate within an interval of , we define the generalized force for the system as the expectation value of the above expression:
To evaluate the average, we partition the energy eigenstates by counting how many of them have a value for within a range between and . Calling this number , we have:
The average defining the generalized force can now be written:
We can relate this to the derivative of the entropy with respect to x at constant energy E as follows. Suppose we change x to x + dx. Then will change because the energy eigenstates depend on x, causing energy eigenstates to move into or out of the range between and . Let's focus again on the energy eigenstates for which lies within the range between and . Since these energy eigenstates increase in energy by Y dx, all such energy eigenstates that are in the interval ranging from E − Y dx to E move from below E to above E. There are
such energy eigenstates. If , all these energy eigenstates will move into the range between and and contribute to an increase in . The number of energy eigenstates that move from below to above is, of course, given by . The difference
is thus the net contribution to the increase in . Note that if Y dx is larger than there will be energy eigenstates that move from below to above . They are counted in both and , therefore the above expression is also valid in that case.
Expressing the above expression as a derivative with respect to E and summing over Y yields the expression:
The logarithmic derivative of with respect to x is thus given by:
The first term is intensive, i.e. it does not scale with system size. In contrast, the last term scales as the inverse system size and thus vanishes in the thermodynamic limit. We have thus found that:
Combining this with
Gives:
which we can write as:
Derivation of statistical mechanical principles from the fundamental thermodynamic relation
It has been shown that the fundamental thermodynamic relation together with the following three postulates
is sufficient to build the theory of statistical mechanics without the equal a priori probability postulate.
For example, in order to derive the Boltzmann distribution, we assume the probability density of microstate satisfies . The normalization factor (partition function) is therefore
The entropy is therefore given by
If we change the temperature by while keeping the volume of the system constant, the change of entropy satisfies
where
Considering that
we have
From the fundamental thermodynamic relation, we have
Since we kept constant when perturbing , we have . Combining the equations above, we have
Physics laws should be universal, i.e., the above equation must hold for arbitrary systems, and the only way for this to happen is
That is
It has been shown that the third postulate in the above formalism can be replaced by the following:
However, the mathematical derivation will be much more complicated.
References
External links
The Fundamental Thermodynamic Relation
Thermodynamics
Statistical mechanics
Thermodynamic equations | Fundamental thermodynamic relation | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,698 | [
"Thermodynamic equations",
"Equations of physics",
"Thermodynamics",
"Statistical mechanics",
"Dynamical systems"
] |
7,051,723 | https://en.wikipedia.org/wiki/Hazard%20and%20operability%20study | A hazard and operability study (HAZOP) is a structured and systematic examination of a complex system, usually a process facility, in order to identify hazards to personnel, equipment or the environment, as well as operability problems that could affect operations efficiency. It is the foremost hazard identification tool in the domain of process safety. The intention of performing a HAZOP is to review the design to pick up design and engineering issues that may otherwise not have been found. The technique is based on breaking the overall complex design of the process into a number of simpler sections called nodes which are then individually reviewed. It is carried out by a suitably experienced multi-disciplinary team during a series of meetings. The HAZOP technique is qualitative and aims to stimulate the imagination of participants to identify potential hazards and operability problems. Structure and direction are given to the review process by applying standardized guideword prompts to the review of each node. A relevant IEC standard calls for team members to display 'intuition and good judgement' and for the meetings to be held in "an atmosphere of critical thinking in a frank and open atmosphere [sic]."
The HAZOP technique was initially developed for systems involving the treatment of a fluid medium or other material flow in the process industries, where it is now a major element of process safety management. It was later expanded to the analysis of batch reactions and process plant operational procedures. Recently, it has been used in domains other than or only loosely related to the process industries, namely: software applications including programmable electronic systems; software and code development; systems involving the movement of people by transport modes such as road, rail, and air; assessing administrative procedures in different industries; assessing medical devices; etc. This article focuses on the technique as it is used in the process industries.
History
The technique is generally considered to have originated in the Heavy Organic Chemicals Division of Imperial Chemical Industries (ICI), which was then a major British and international chemical company.
Its origins have been described by Trevor Kletz, who was the company's safety advisor from 1968 to 1982. In 1963 a team of three people met for three days a week for four months to study the design of a new phenol plant. They started with a technique called critical examination which asked for alternatives but changed this to look for deviations. The method was further refined within the company, under the name operability studies, and became the third stage of its hazard analysis procedure (the first two being done at the conceptual and specification stages) when the first detailed design was produced.
In 1974 a one-week safety course including this procedure was offered by the Institution of Chemical Engineers (IChemE) at Teesside Polytechnic. Coming shortly after the Flixborough disaster, the course was fully booked, as were ones in the next few years. In the same year the first paper in the open literature was also published. In 1977 the Chemical Industries Association published a guide. Up to this time the term 'HAZOP' had not been used in formal publications. The first to do this was Kletz in 1983, with what were essentially the course notes (revised and updated) from the IChemE courses. By this time, hazard and operability studies had become an expected part of chemical engineering degree courses in the UK.
Nowadays, regulators and the process industry at large (including operators and contractors) consider HAZOP a strictly necessary step of project development, at the very least during the detailed design phase.
Method
The method is applied to complex processes, for which sufficient design information is available and not likely to change significantly. This range of data should be explicitly identified and taken as the "design intent" basis for the HAZOP study. For example, a prudent designer will have allowed for foreseeable variations within the process, creating a larger design envelope than just the basic requirements, and the HAZOP will be looking at ways in which this might not be sufficient.
A common use of the HAZOP is relatively early in the detailed design of a plant or process. However, it can also be applied at other stages, including the later operational life of existing plants, in which case it is usefully applied as a revalidation tool to ensure that inadequately managed changes have not crept in since first plant start-up. Where design information is not fully available, such as during front-end loading, a coarse HAZOP can be conducted; however, where a design is required to have a HAZOP performed to meet legislative or regulatory requirements, such an early exercise cannot be considered sufficient and a later, detailed design HAZOP also becomes necessary.
For process plants, identifiable sections (nodes) are chosen so that for each a meaningful design intent can be specified. They are commonly indicated on piping and instrumentation diagrams (P&IDs) and process flow diagrams (PFDs). P&IDs in particular are the foremost reference document for conducting a HAZOP. The extent of each node should be appropriate to the complexity of the system and the magnitude of the hazards it might pose. However, it will also need to balance between "too large and complex" (fewer nodes, but the team members may not be able to consider issues within the whole node at once) and "too small and simple" (many trivial and repetitive nodes, each of which has to be reviewed independently and documented).
For each node, in turn, the HAZOP team uses a list of standardized guidewords and process parameters to identify potential deviations from the design intent. For each deviation, the team identifies feasible causes and likely consequences then decides (with confirmation by risk analysis where necessary, e.g., by way of an agreed upon risk matrix) whether the existing safeguards are sufficient, or whether an action or recommendation to install additional safeguards or put in place administrative controls is necessary to reduce the risks to an acceptable level.
The degree of preparation for the HAZOP is critical to the overall success of the review: "frozen" design information should be provided to the team members with time for them to familiarize themselves with the process, an adequate schedule should be allowed for the performance of the HAZOP, and the best-suited team members should be provided for each role. Those scheduling a HAZOP should take into account the review scope, the number of nodes to be reviewed, the provision of completed design drawings and documentation, and the need to maintain team performance over an extended time-frame. The team members may also need to perform some of their normal tasks during this period, and they can tend to lose focus unless adequate time is allowed for them to refresh their mental capabilities.
The team meetings should be managed by an independent, trained HAZOP facilitator (also referred to as HAZOP leader or chairperson), who is responsible for the overall quality of the review, partnered with a dedicated scribe to minute the meetings. As the IEC standard puts it:The success of the study strongly depends on the alertness and concentration of the team members and it is therefore important that the sessions are not too long and that there are appropriate intervals between sessions. How these requirements are achieved is ultimately the responsibility of the study leader. For a medium-sized chemical plant, where the total number of items to be considered is around 1200 pieces of equipment and piping, about 40 such meetings would be needed. Various software programs are now available to assist in the management and scribing of the workshop.
Guidewords and parameters
In order to identify deviations, the team applies (systematically, i.e. in a given order) a set of guidewords to each node in the process. To prompt discussion, or to ensure completeness, appropriate process parameters are considered in turn, which apply to the design intent. Typical parameters are flow (or flowrate), temperature, pressure, level, composition, etc. The IEC standard notes guidewords should be chosen that are appropriate to the study, neither too specific (limiting ideas and discussion) nor too general (allowing loss of focus). A fairly standard set of guidewords (given as an example in the standard) is as follows:
Where a guide word is meaningfully applicable to a parameter (e.g., "no flow", "more temperature"), their combination should be recorded as a credible potential deviation from the design intent that requires review.
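A minimal sketch of how candidate deviations can be enumerated by crossing guidewords with parameters, as a review team or its supporting software might do; the guideword and parameter lists and the screening of non-meaningful pairs below are illustrative only, not a definitive set.
```python
from itertools import product

# Typical (illustrative) guidewords and process parameters; real studies tailor both.
guidewords = ["No", "More", "Less", "As well as", "Part of", "Reverse", "Other than"]
parameters = ["flow", "temperature", "pressure", "level"]

# Not every combination is physically meaningful ("reverse temperature" is not),
# so a simple exclusion set stands in for the team's screening judgement.
not_meaningful = {
    ("No", "temperature"),
    ("Reverse", "temperature"),
    ("Reverse", "pressure"),
    ("Reverse", "level"),
}

deviations = [f"{g} {p}" for g, p in product(guidewords, parameters)
              if (g, p) not in not_meaningful]

for d in deviations[:5]:
    print(d)                      # e.g. "No flow", "No pressure", ...
print(f"... {len(deviations)} candidate deviations to review for this node")
```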
The following table gives an overview of commonly used guideword-parameter pairs (deviations) and common interpretations of them.
Once the causes and effects of any potential hazards have been established, the system being studied can then be modified to improve its safety. The modified design should then be subject to a formal HAZOP close-out, to ensure that no new problems have been added.
HAZOP team
A HAZOP study is a team effort. The team should be as small as practicable while still having the relevant skills and experience. Where a system has been designed by a contractor, the HAZOP team should contain personnel from both the contractor and the client company. A minimum team size of five is recommended. In a large process there will be many HAZOP meetings and the individuals within the team may change, as different specialists and deputies will be required for the various roles. As many as 20 individuals may be involved. Each team member should have a definite role as follows:
In earlier publications it was suggested that the study leader could also be the recorder but separate roles are now generally recommended.
The use of computers and projector screens enhances the recording of meeting minutes (the team can see what is minuted and ensure that it is accurate), the display of P&IDs for the team to review, the provision of supplemental documented information to the team and the logging of non-HAZOP issues that may arise during the review, e.g., drawing/document corrections and clarifications. Specialist software is now available from several suppliers to support the recording of meeting minutes and tracking the completion of recommended actions.
See also
Hazard analysis
Hazard analysis and critical control points
HAZID
Process safety management
Risk assessment
Safety engineering
Workplace safety standards
Notes
References
Further reading
Explanation by a software supplier:
Process safety | Hazard and operability study | [
"Chemistry",
"Engineering"
] | 2,067 | [
"Chemical process engineering",
"Safety engineering",
"Process safety"
] |
7,052,669 | https://en.wikipedia.org/wiki/Distillation%20Design | Distillation Design is a book which provides complete coverage of the design of industrial distillation columns for the petroleum refining, chemical and petrochemical plants, natural gas processing, pharmaceutical, food and alcohol distilling industries. It has been a classical chemical engineering textbook since it was first published in February 1992.
The subjects covered in the book include:
Vapor–liquid equilibrium (VLE): Vapor–liquid K values, relative volatilities, ideal and non-ideal systems, phase diagrams, calculating bubble points and dew points (a minimal bubble-point sketch appears after this list)
Key fractional distillation concepts: theoretical stages, x-y diagrams, multicomponent distillation, column composition and temperature profiles
Process design and optimization: minimum reflux and minimum stages, optimum reflux, short-cut methods, feed entry location
Rigorous calculation methods: Bubble point method, sum rates method, numerical methods (Newton–Raphson technique), inside out method, relaxation method, other methods
Batch distillation: Simple distillation, constant reflux, varying reflux, time and boilup requirements
Tray design and tray efficiency: tray types, tray capacities, tray hydraulic parameters, tray sizing and determination of column diameter, point and tray efficiencies, tray efficiency prediction and scaleup
Packing design and packing efficiency: packing types, packing hydraulics and capacities, determination of packing efficiency by transfer unit method and by HETP method, packed column sizing
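As a small example of the kind of VLE calculation covered in the book, the sketch below finds the bubble point of an ideal benzene–toluene mixture using Raoult's law and approximate literature Antoine coefficients; it uses plain bisection rather than any of the book's specific algorithms.
```python
# Bubble point of an ideal binary mixture (Raoult's law).
# Antoine coefficients are approximate literature values (log10 P [mmHg], T [degC]).
antoine = {
    "benzene": (6.90565, 1211.033, 220.790),
    "toluene": (6.95464, 1344.800, 219.480),
}
x = {"benzene": 0.5, "toluene": 0.5}   # liquid mole fractions
P_total = 760.0                        # mmHg

def psat(component, T):
    A, B, C = antoine[component]
    return 10 ** (A - B / (T + C))

def residual(T):
    # At the bubble point, sum(x_i * Psat_i(T)) equals the total pressure.
    return sum(x[c] * psat(c, T) for c in x) - P_total

# Bisection between the two pure-component boiling points (~80 and ~111 degC).
lo, hi = 80.0, 111.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0.0:
        hi = mid
    else:
        lo = mid

T_bubble = 0.5 * (lo + hi)
y = {c: x[c] * psat(c, T_bubble) / P_total for c in x}   # vapor composition
y_rounded = {c: round(v, 3) for c, v in y.items()}
print(f"bubble point ~ {T_bubble:.1f} degC, vapor composition {y_rounded}")
```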
See also
External links
McGraw Hill website page
Distillation
Engineering textbooks
Science books
Technology books | Distillation Design | [
"Chemistry"
] | 308 | [
"Distillation",
"Separation processes"
] |
7,054,016 | https://en.wikipedia.org/wiki/Thermomagnetic%20convection | Ferrofluids can be used to transfer heat, since heat and mass transport in such magnetic fluids can be controlled using an external magnetic field.
B. A. Finlayson first explained in 1970 (in his paper "Convective instability of ferromagnetic fluids", Journal of Fluid Mechanics, 40:753-767) how an external magnetic field imposed on a ferrofluid with varying magnetic susceptibility, e.g., due to a temperature gradient, results in a nonuniform magnetic body force, which leads to thermomagnetic convection. This form of heat transfer can be useful for cases where conventional convection fails to provide adequate heat transfer, e.g., in miniature microscale devices or under reduced gravity conditions.
The Ozoe group has studied thermomagnetic convection both experimentally and numerically. They showed how to enhance, suppress, and invert the convection modes. They have also carried out scaling analysis for paramagnetic fluids in microgravity conditions.
A comprehensive review of thermomagnetic convection (in A. Mukhopadhyay, R. Ganguly, S. Sen, and I. K. Puri, "Scaling analysis to characterize thermomagnetic convection", International Journal of Heat and Mass Transfer 48:3485-3492, (2005)) also shows that this form of convection can be correlated with a dimensionless magnetic Rayleigh number. Subsequently, this group explained that fluid motion occurs due to a Kelvin body force with two terms. The first term can be treated as a magnetostatic pressure. In contrast, the second is important only if there is a spatial gradient of the fluid susceptibility, e.g., in a non-isothermal system. The colder fluid that has a larger magnetic susceptibility is attracted towards regions with larger field strength during thermomagnetic convection, which displaces warmer fluid of lower susceptibility. They showed that thermomagnetic convection can be correlated with a dimensionless magnetic Rayleigh number. Heat transfer due to this form of convection can be much more effective than buoyancy-induced convection for systems with small dimensions.
The ferrofluid magnetization depends on the local value of the applied magnetic field H and on the fluid magnetic susceptibility. In a ferrofluid flow with spatially varying temperature, the susceptibility is a function of the temperature. This produces a force that can be expressed in the Navier–Stokes or momentum equation governing fluid flow as the "Kelvin body force (KBF)". Recently, Kumar et al. shed new light on the 20-plus-year-old question of the appropriate tensor form of the Kelvin body force in ferrofluids.
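For a linearly magnetizable fluid (magnetization M = χH) in a curl-free applied field, the Kelvin body force is commonly written in the two-term form described above; this is a standard ferrohydrodynamics expression given here only for orientation, and conventions for the permeability factor vary between sources:

$$
\mathbf{f}_{K} \;=\; \mu_{0}\,\chi\,(\mathbf{H}\cdot\nabla)\mathbf{H}
\;=\; \nabla\!\left(\tfrac{1}{2}\mu_{0}\chi H^{2}\right) \;-\; \tfrac{1}{2}\mu_{0}H^{2}\,\nabla\chi
$$

The first (gradient) term can be absorbed into the pressure, while the second term can drive flow only where the susceptibility, and hence the temperature, varies in space.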
The KBF creates a static pressure field that is symmetric about a magnet, e.g., a line dipole, that produces a curl-free force field, i.e., curl(ℑ) = 0 for constant temperature flow. Such a symmetric field does not alter the velocity. However, if the temperature distribution about the imposed magnetic field is asymmetric, so is the KBF in which case curl(ℑ) ≠ 0. Such an asymmetric body force leads to ferrofluid motion across isotherms.
References
Convection
Magnetism
Continuum mechanics | Thermomagnetic convection | [
"Physics",
"Chemistry"
] | 696 | [
"Transport phenomena",
"Physical phenomena",
"Continuum mechanics",
"Classical mechanics",
"Convection",
"Thermodynamics"
] |
7,204,363 | https://en.wikipedia.org/wiki/Interchange%20of%20limiting%20operations | In mathematics, the study of interchange of limiting operations is one of the major concerns of mathematical analysis, in that two given limiting operations, say L and M, cannot be assumed to give the same result when applied in either order. One of the historical sources for this theory is the study of trigonometric series.
Formulation
In symbols, the assumption
LM = ML,
where the left-hand side means that M is applied first, then L, and vice versa on the right-hand side, is not a valid equation between mathematical operators, under all circumstances and for all operands. An algebraist would say that the operations do not commute. The approach taken in analysis is somewhat different. Conclusions that assume limiting operations do 'commute' are called formal. The analyst tries to delineate conditions under which such conclusions are valid; in other words mathematical rigour is established by the specification of some set of sufficient conditions for the formal analysis to hold. This approach justifies, for example, the notion of uniform convergence. It is relatively rare for such sufficient conditions to be also necessary, so that a sharper piece of analysis may extend the domain of validity of formal results.
Professionally speaking, therefore, analysts push the envelope of techniques, and expand the meaning of well-behaved for a given context. G. H. Hardy wrote that "The problem of deciding whether two given limit operations are commutative is one of the most important in mathematics". An opinion apparently not in favour of the piece-wise approach, but of leaving analysis at the level of heuristic, was that of Richard Courant.
Examples
Examples abound, one of the simplest being that for a double sequence a_{m,n}: it is not necessarily the case that the operations of taking the limits as m → ∞ and as n → ∞ can be freely interchanged. For example take
a_{m,n} = 2^{m − n}
in which taking the limit first with respect to n gives 0, and with respect to m gives ∞.
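A short numerical illustration of this example (the particular index values are arbitrary):
```python
# a(m, n) = 2**(m - n): the two iterated limits disagree.
def a(m, n):
    return 2.0 ** (m - n)

m_fixed = 5
print([a(m_fixed, n) for n in (10, 20, 40)])   # shrinks toward 0 as n grows
n_fixed = 5
print([a(m, n_fixed) for m in (10, 20, 40)])   # blows up toward infinity as m grows
```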
Many of the fundamental results of infinitesimal calculus also fall into this category: the symmetry of partial derivatives, differentiation under the integral sign, and Fubini's theorem deal with the interchange of differentiation and integration operators.
One of the major reasons why the Lebesgue integral is used is that theorems exist, such as the dominated convergence theorem, that give sufficient conditions under which integration and limit operation can be interchanged. Necessary and sufficient conditions for this interchange were discovered by Federico Cafiero.
List of related theorems
Interchange of limits:
Moore-Osgood theorem
Interchange of limit and infinite summation:
Tannery's theorem
Interchange of limit and derivatives:
If a sequence of functions converges at at least one point and the derivatives converge uniformly, then converges uniformly as well, say to some function and the limiting function of the derivatives is . While this is often shown using the mean value theorem for real-valued functions, the same method can be applied for higher-dimensional functions by using the mean value inequality instead.
Interchange of partial derivatives:
Schwarz's theorem
Interchange of integrals:
Fubini's theorem
Interchange of limit and integral:
Dominated convergence theorem
Vitali convergence theorem
Fichera convergence theorem
Cafiero convergence theorem
Fatou's lemma
Monotone convergence theorem for integrals (Beppo Levi's lemma)
Interchange of derivative and integral:
Leibniz integral rule
See also
Iterated limit
Uniform convergence
Notes
Mathematical analysis
Limits (mathematics) | Interchange of limiting operations | [
"Mathematics"
] | 706 | [
"Mathematical analysis"
] |
7,204,577 | https://en.wikipedia.org/wiki/Pullback%20attractor | In mathematics, the attractor of a random dynamical system may be loosely thought of as a set to which the system evolves after a long enough time. The basic idea is the same as for a deterministic dynamical system, but requires careful treatment because random dynamical systems are necessarily non-autonomous. This requires one to consider the notion of a pullback attractor or attractor in the pullback sense.
Set-up and motivation
Consider a random dynamical system on a complete separable metric space , where the noise is chosen from a probability space with base flow .
A naïve definition of an attractor for this random dynamical system would be to require that for any initial condition , as . This definition is far too limited, especially in dimensions higher than one. A more plausible definition, modelled on the idea of an omega-limit set, would be to say that a point lies in the attractor if and only if there exists an initial condition, , and there is a sequence of times such that
as .
This is not too far from a working definition. However, we have not yet considered the effect of the noise , which makes the system non-autonomous (i.e. it depends explicitly on time). For technical reasons, it becomes necessary to do the following: instead of looking seconds into the "future", and considering the limit as , one "rewinds" the noise seconds into the "past", and evolves the system through seconds using the same initial condition. That is, one is interested in the pullback limit
.
So, for example, in the pullback sense, the omega-limit set for a (possibly random) set is the random set
Equivalently, this may be written as
Importantly, in the case of a deterministic dynamical system (one without noise), the pullback limit coincides with the deterministic forward limit, so it is meaningful to compare deterministic and random omega-limit sets, attractors, and so forth.
Several examples of pullback attractors of non-autonomous dynamical systems are presented analytically and numerically.
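The pullback idea can also be illustrated numerically. The toy sketch below (not one of the referenced examples) integrates the scalar system dx/dt = −x + noise(t) with one fixed noise realization, starts the integration further and further in the past, and always observes the state at time 0; as the start time recedes, the observed value becomes independent of the initial condition.
```python
import numpy as np

# Pullback convergence sketch for dx/dt = -x + noise(t) (Euler discretisation).
rng = np.random.default_rng(42)
dt = 0.01
T_max = 40.0
n_max = int(T_max / dt)
noise = rng.normal(0.0, 1.0, n_max)      # one fixed driving path on [-T_max, 0]

def state_at_zero(T, x0):
    """Start at time -T with value x0 and integrate forward to time 0."""
    n = int(T / dt)
    x = x0
    for k in range(n_max - n, n_max):    # reuse the *same* noise samples every run
        x += (-x + noise[k]) * dt
    return x

for T in (2.0, 5.0, 10.0, 20.0, 40.0):
    print(T, state_at_zero(T, x0=5.0), state_at_zero(T, x0=-5.0))
# As T grows, the two columns agree: the value at time 0 no longer depends on
# the initial condition, which is the pullback limit.
```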
Definition
The pullback attractor (or random global attractor) for a random dynamical system is a -almost surely unique random set such that
is a random compact set: is almost surely compact and is a -measurable function for every ;
is invariant: for all almost surely;
is attractive: for any deterministic bounded set ,
almost surely.
There is a slight abuse of notation in the above: the first use of "dist" refers to the Hausdorff semi-distance from a point to a set,
whereas the second use of "dist" refers to the Hausdorff semi-distance between two sets,
As noted in the previous section, in the absence of noise, this definition of attractor coincides with the deterministic definition of the attractor as the minimal compact invariant set that attracts all bounded deterministic sets.
Theorems relating omega-limit sets to attractors
The attractor as a union of omega-limit sets
If a random dynamical system has a compact random absorbing set , then the random global attractor is given by
where the union is taken over all bounded sets .
Bounding the attractor within a deterministic set
Crauel (1999) proved that if the base flow is ergodic and is a deterministic compact set with
then -almost surely.
References
Further reading
Random dynamical systems | Pullback attractor | [
"Mathematics"
] | 705 | [
"Random dynamical systems",
"Dynamical systems"
] |
7,204,602 | https://en.wikipedia.org/wiki/Pfister%20form | In mathematics, a Pfister form is a particular kind of quadratic form, introduced by Albrecht Pfister in 1965. In what follows, quadratic forms are considered over a field F of characteristic not 2. For a natural number n, an n-fold Pfister form over F is a quadratic form of dimension 2n that can be written as a tensor product of quadratic forms
for some nonzero elements a1, ..., an of F. (Some authors omit the signs in this definition; the notation here simplifies the relation to Milnor K-theory, discussed below.) An n-fold Pfister form can also be constructed inductively from an (n−1)-fold Pfister form q and a nonzero element a of F, as .
So the 1-fold and 2-fold Pfister forms look like:
⟨⟨a⟩⟩ = ⟨1, −a⟩ and ⟨⟨a, b⟩⟩ = ⟨1, −a, −b, ab⟩, where ⟨c1, ..., cm⟩ denotes the diagonal quadratic form c1x1² + ... + cmxm².
For n ≤ 3, the n-fold Pfister forms are norm forms of composition algebras. In that case, two n-fold Pfister forms are isomorphic if and only if the corresponding composition algebras are isomorphic. In particular, this gives the classification of octonion algebras.
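A small computational sketch (illustrative only) of the tensor-product construction: the diagonal entries of the n-fold form are the products of subsets of {−a1, ..., −an}, so the form has dimension 2^n.
```python
def pfister_diagonal(coeffs):
    """Diagonal entries of <<a_1, ..., a_n>> = <1, -a_1> x ... x <1, -a_n>."""
    entries = [1]
    for a in coeffs:
        entries = [e * f for f in (1, -a) for e in entries]
    return entries

print(pfister_diagonal([2, 3]))           # [1, -2, -3, 6], dimension 2**2 = 4
print(len(pfister_diagonal([2, 3, 5])))   # 8 = 2**3

def evaluate(entries, x):
    """Value of the diagonal form at a vector x."""
    return sum(c * xi * xi for c, xi in zip(entries, x))

print(evaluate(pfister_diagonal([2, 3]), [1, 1, 1, 1]))   # 1 - 2 - 3 + 6 = 2
```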
The n-fold Pfister forms additively generate the n-th power I n of the fundamental ideal of the Witt ring of F.
Characterizations
A quadratic form q over a field F is multiplicative if, for vectors of indeterminates x and y, we can write q(x).q(y) = q(z) for some vector z of rational functions in the x and y over F. Isotropic quadratic forms are multiplicative. For anisotropic quadratic forms, Pfister forms are multiplicative, and conversely.
For n-fold Pfister forms with n ≤ 3, this had been known since the 19th century; in that case z can be taken to be bilinear in x and y, by the properties of composition algebras. It was a remarkable discovery by Pfister that n-fold Pfister forms for all n are multiplicative in the more general sense here, involving rational functions. For example, he deduced that for any field F and any natural number n, the set of sums of 2n squares in F is closed under multiplication, using that
the quadratic form
is an n-fold Pfister form (namely, ).
Another striking feature of Pfister forms is that every isotropic Pfister form is in fact hyperbolic, that is, isomorphic to a direct sum of copies of the hyperbolic plane . This property also characterizes Pfister forms, as follows: If q is an anisotropic quadratic form over a field F, and if q becomes hyperbolic over every extension field E such that q becomes isotropic over E, then q is isomorphic to aφ for some nonzero a in F and some Pfister form φ over F.
Connection with K-theory
Let kn(F) be the n-th Milnor K-group modulo 2. There is a homomorphism from kn(F) to the quotient In/In+1 in the Witt ring of F, given by
where the image is an n-fold Pfister form. The homomorphism is surjective, since the Pfister forms additively generate In. One part of the Milnor conjecture, proved by Orlov, Vishik and Voevodsky, states that this homomorphism is in fact an isomorphism . That gives an explicit description of the abelian group In/In+1 by generators and relations. The other part of the Milnor conjecture, proved by Voevodsky, says that kn(F) (and hence In/In+1) maps isomorphically to the Galois cohomology group Hn(F, F2).
Pfister neighbors
A Pfister neighbor is an anisotropic form σ which is isomorphic to a subform of aφ for some nonzero a in F and some Pfister form φ with dim φ < 2 dim σ. The associated Pfister form φ is determined up to isomorphism by σ. Every anisotropic form of dimension 3 is a Pfister neighbor; an anisotropic form of dimension 4 is a Pfister neighbor if and only if its discriminant in F*/(F*)2 is trivial. A field F has the property that every 5-dimensional anisotropic form over F is a Pfister neighbor if and only if it is a linked field.
Notes
References
, Ch. 10
Quadratic forms | Pfister form | [
"Mathematics"
] | 976 | [
"Quadratic forms",
"Number theory"
] |
7,204,666 | https://en.wikipedia.org/wiki/Norm%20form | In mathematics, a norm form is a homogeneous form in n variables constructed from the field norm of a field extension L/K of degree n. That is, writing N for the norm mapping to K, and selecting a basis e1, ..., en for L as a vector space over K, the form is given by
N(x1e1 + ... + xnen)
in variables x1, ..., xn.
In number theory norm forms are studied as Diophantine equations, where they generalize, for example, the Pell equation. For this application the field K is usually the rational number field, the field L is an algebraic number field, and the basis is taken of some order in the ring of integers OL of L.
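As a small symbolic example (not from the article), the norm form of the real quadratic field Q(√5) with respect to the basis {1, √5} can be computed directly, and setting it equal to 1 gives a Pell equation.
```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
d = 5

# N(x1 + x2*sqrt(d)) is the product of the element with its conjugate.
alpha = x1 + x2 * sp.sqrt(d)
norm_form = sp.expand(alpha * (x1 - x2 * sp.sqrt(d)))
print(norm_form)                          # x1**2 - 5*x2**2

# norm_form = 1 is the Pell equation x1**2 - 5*x2**2 = 1; (9, 4) is a solution.
print(norm_form.subs({x1: 9, x2: 4}))     # 1
```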
See also
Trace form
References
Field (mathematics)
Diophantine equations
Homogeneous polynomials | Norm form | [
"Mathematics"
] | 177 | [
"Diophantine equations",
"Mathematical objects",
"Equations",
"Number theory"
] |
7,204,725 | https://en.wikipedia.org/wiki/Base%20flow%20%28random%20dynamical%20systems%29 | In mathematics, the base flow of a random dynamical system is the dynamical system defined on the "noise" probability space that describes how to "fast forward" or "rewind" the noise when one wishes to change the time at which one "starts" the random dynamical system.
Definition
In the definition of a random dynamical system, one is given a family of maps on a probability space . The measure-preserving dynamical system is known as the base flow of the random dynamical system. The maps are often known as shift maps since they "shift" time. The base flow is often ergodic.
The parameter may be chosen to run over
(a two-sided continuous-time dynamical system);
(a one-sided continuous-time dynamical system);
(a two-sided discrete-time dynamical system);
(a one-sided discrete-time dynamical system).
Each map is required
to be a -measurable function: for all ,
to preserve the measure : for all , .
Furthermore, as a family, the maps satisfy the relations
, the identity function on ;
for all and for which the three maps in this expression are defined. In particular, if exists.
In other words, the maps form a commutative monoid (in the cases and ) or a commutative group (in the cases and ).
Example
In the case of random dynamical system driven by a Wiener process , where is the two-sided classical Wiener space, the base flow would be given by
.
This can be read as saying that "starts the noise at time instead of time 0".
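A discrete sketch of this shift map acting on a sampled Wiener path (the grid, seed and horizon are arbitrary choices for the illustration), including a check of the composition property on the grid:
```python
import numpy as np

# (theta_s w)(t) = w(t + s) - w(s): restart the noise at time s and re-zero it there.
rng = np.random.default_rng(1)
dt = 0.01
t = np.arange(0.0, 10.0 + dt, dt)
increments = rng.normal(0.0, np.sqrt(dt), len(t) - 1)
w = np.concatenate(([0.0], np.cumsum(increments)))   # sampled Wiener path on [0, 10]

def shift(path, s, step=dt):
    k = int(round(s / step))
    return path[k:] - path[k]            # shifted path, again starting from 0

print(shift(w, 2.0)[0])                               # 0.0: the shifted path starts at zero
# Composition on the grid: shifting by 2 and then by 3 equals shifting by 5.
print(np.allclose(shift(shift(w, 2.0), 3.0), shift(w, 5.0)))   # True
```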
References
Random dynamical systems | Base flow (random dynamical systems) | [
"Mathematics"
] | 338 | [
"Random dynamical systems",
"Dynamical systems"
] |
7,205,021 | https://en.wikipedia.org/wiki/Ettringite | Ettringite is a hydrous calcium aluminium sulfate mineral with formula: . It is a colorless to yellow mineral crystallizing in the trigonal system. The prismatic crystals are typically colorless, turning white on partial dehydration. It is part of the ettringite-group which includes other sulfates such as thaumasite and bentorite.
Discovery and occurrence
Ettringite was first described in 1874 by , for an occurrence near the Ettringer Bellerberg Volcano, Ettringen, Rheinland-Pfalz, Germany. It occurs within metamorphically altered limestone adjacent to igneous intrusive rocks or within xenoliths. It also occurs as weathering crusts on larnite in the Hatrurim Formation of Israel. It occurs associated with portlandite, afwillite and hydrocalumite at Scawt Hill, Ireland and with afwillite, hydrocalumite, mayenite and gypsum in the Hatrurim Formation. It has also been reported from the Zeilberg quarry, Maroldsweisach, Bavaria; at Boisséjour, near Clermont-Ferrand, Puy-de-Dôme, Auvergne, France; the N’Chwaning mine, Kuruman district, Cape Province, South Africa; in the US, occurrences were found in spurrite-merwinite-gehlenite skarn at the 910 level of the Commercial quarry, Crestmore, Riverside County, California and in the Lucky Cuss mine, Tombstone, Arizona.
Ettringite is also sometimes referred to in the older French literature as Candelot salt, or Candlot salt.
Occurrence in cement
In concrete chemistry, ettringite is a hexacalcium aluminate trisulfate hydrate, of general formula when noted as oxides:
or
.
Ettringite is formed in the hydrated Portland cement system as a result of the reaction of tricalcium aluminate () with calcium sulfate, both present in Portland cement.
The addition of gypsum () to clinker during the grinding operation to obtain the crushed powder of Portland cement is essential to avoid the flash setting of concrete during its early hydration. Indeed, the tricalcium aluminate () is the most reactive phase of the four main mineral phases present in Portland cement (, , , and ). hydration is very exothermic and also occurs very fast in the fresh concrete mix as the temperature quickly increases with the progress of the hydration reaction. The effect of gypsum addition is to promote the formation of a thin impervious film of ettringite at the surface of the grains, passivating their surface, and so slowing down their hydration. The addition of gypsum to Portland cement is needed to control the concrete setting.
Ettringite, the most prominent representative of AFt phases or (), can also be synthesized in aqueous solution by reacting stoichiometric amounts of calcium oxide, aluminium oxide, and sulfate.
In the cement system, the presence of ettringite depends on the ratio of calcium sulfate to tri-calcium aluminate (); when this ratio is low, ettringite forms during early hydration and then converts to the calcium aluminate monosulfate (AFm phases or ()). When the ratio is intermediate, only a portion of the ettringite converts to AFm and both can coexist, while ettringite is unlikely to convert to AFm at high ratios.
The following standard abbreviations are used to designate the different oxide phases in the cement chemist notation (CCN):
C = CaO
S =
A =
F =
=
H =
K =
N =
m = mono
t = tri
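A small illustrative helper (not part of the article) that expands phases written in this notation back into oxide formulas; the sulfate symbol is spelled "S_bar" below because the overlined S cannot be typed in plain code, and the ettringite phase in the example uses the C6AS̄3H32 form quoted in cement chemistry texts.
```python
# Cement chemist notation (CCN) symbols and the oxides they stand for.
ccn_oxides = {
    "C": "CaO",
    "S": "SiO2",
    "A": "Al2O3",
    "F": "Fe2O3",
    "S_bar": "SO3",   # overlined S, i.e. sulfate
    "H": "H2O",
}

def expand(phase):
    """Expand a CCN phase given as (symbol, count) pairs into an oxide formula."""
    return " . ".join(f"{n} {ccn_oxides[sym]}" if n > 1 else ccn_oxides[sym]
                      for sym, n in phase)

# Tricalcium aluminate (C3A) and ettringite (C6 A S_bar3 H32):
print(expand([("C", 3), ("A", 1)]))                            # 3 CaO . Al2O3
print(expand([("C", 6), ("A", 1), ("S_bar", 3), ("H", 32)]))   # 6 CaO . Al2O3 . 3 SO3 . 32 H2O
```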
AFt and AFm phases
AFt: abbreviation for "alumina, ferric oxide, tri-substituted" or (). It represents a group of calcium aluminate hydrates. AFt has the general formula where X represents a doubly charged anion or, sometimes, two singly charged anions. Ettringite is the most common and prominent member of the AFt group (X in this case denoting sulfate), and often simply called Alumina Ferrite tri-sulfate (AFt).
AFm: abbreviation for "alumina, ferric oxide, mono-substituted" or (). It represents another group of calcium aluminate hydrates with general formula where X represents a singly charged anion or 'half' a doubly charged anion. X may be one of many anions. The most important anions involved in Portland cement hydration are hydroxyl (), sulfate (), and carbonate ().
Structure
The mineral ettringite has a columnar structure that runs parallel to the c axis – the needle axis – with the sulfate ions and water molecules lying between the columns; the space group is P31c. The ettringite crystal system is trigonal, the crystals are elongated and needle-like in shape, and disorder or twinning is common, which affects the intercolumn material. The first X-ray diffraction crystallographic study was done by Bannister, Hey and Bernal (1936), who found that the crystal unit cell is hexagonal with a = 11.26 Å and c = 21.48 Å, space group /mmc and Z = 2, where Z is the number of formula units per unit cell. From observations on dehydration and chemical formulas there were suggestions of the structure being composed of and , where between them lie ions and molecules. Further X-ray studies ensued, namely Wellin (1956), which determined the crystal structure of thaumasite, and Besjak and Jelenic (1966), which gave confirmation of the structural nature of ettringite.
An ettringite sample extracted from Scawt Hill was analysed by C. E. Tilley, the crystal was , with specific gravity of , possessed five prism faces of the form m{100} and a small face a{110}, with no pyramidal or basal faces. Upon X-ray diffraction a Laue diagram along the c-axis revealed a hexagonal axis with vertical planes of symmetry, this study showed that the structure has a hexagonal and not a rhombohedral lattice. Further studies conducted on synthetic ettringite by use of X-ray and powder diffraction confirmed earlier assumptions and analyses.
Upon analyzing the structure of both ettringite and thaumasite, it was deduced that both minerals have hexagonal structures, but different space groups.
Ettringite crystals have space group P31c with a = 11.224 Å, c = 21.108 Å, while thaumasite crystals fall into space group P63 with a = 11.04 Å, c = 10.39 Å. While these two minerals form a solid solution, the difference in space groups leads to discontinuities in unit cell parameters. Differences between the structures of ettringite and thaumasite arise from the columns of cations and anions. Ettringite cation columns are composed of , which run parallel to the c axis, with the other columns of sulfate anions and water molecules in channels parallel to these columns. In contrast, thaumasite, containing a hexacoordinated silicon complex of (a rare octahedral configuration for Si), consists of a cylindrical column of along the c axis, with sulfate and carbonate anions in channels between these columns, which contain water molecules as well.
Further research
Ongoing research on ettringite and cement phase minerals is performed to find new ways to immobilize toxic anions (e.g., borate, selenate and arsenate) and heavy metals to avoid their dispersion in soils and the environment; this can be achieved by using the proper cement phases whose crystal lattice can accommodate these elements. For example, copper immobilization at high pH can be achieved through the formation of C-S-H/C-A-H and ettringite. The crystal structure of ettringite Ca6Al2(SO4)3(OH)12·26H2O can incorporate a variety of divalent ions: Cu2+, Pb2+, Cd2+ and Zn2+, which can substitute for Ca2+.
See also
Cement
Cement chemists notation
Concrete
References
Aluminium minerals
Calcium minerals
Cement
Concrete
Sulfate minerals
Geology of Riverside County, California
Crestmore Heights, California
Trigonal minerals
Minerals in space group 159
Minerals described in 1874 | Ettringite | [
"Chemistry",
"Engineering"
] | 1,799 | [
"Structural engineering",
"Concrete",
"Hydrate minerals",
"Hydrates"
] |
7,206,824 | https://en.wikipedia.org/wiki/Truck%20scale | A truck scale (US), weighbridge (non-US) or railroad scale is a large set of scales, usually mounted permanently on a concrete foundation, that is used to weigh entire rail or road vehicles and their contents. By weighing the vehicle both empty and when loaded, the load carried by the vehicle can be calculated.
The key components that a weighbridge uses to make the weight measurement are load cells.
Weight certification in the United States
Commercial scales have to be National Type Evaluation Program (NTEP) approved or certified. The certification is issued by the National Conference on Weights and Measures (NCWM), in accordance with the National Institute of Standards and Technology (NIST) "Handbook 44" specifications and tolerances, through Conformity Assessment and the Verified Conformity Assessment Program (VCAP).
Legal for trade
Handbook 44: General Code paragraph G-A.1.; and the NIST Handbook 130 (Uniform Weights and Measures Law; Section 1.13.) define Commercial Weighing and Measuring Equipment as follows;
NTEP approved scales are generally considered those scales which are intended by the manufacturer for use in commercial
applications where products are sold by weight. NTEP Approved is also known as Legal for Trade or compliant with Handbook 44. NTEP scales are commonly used for applications ranging from weighing cold cuts at the deli and fruit at the roadside farm stand, to determining shipping costs at shipping centers, to weighing gold and silver, and more.
Rail weighbridge
A rail weighbridge is used to weigh rolling stock, including railroad cars, goods wagons and locomotives, empty or loaded. When loaded, the net weight of the cargo is the gross weight less the tare weight, when known. It is also used to weigh trams.
There are different types, but all of them have electronic sensors built into the track that measure the weight. All designs have in common that there must be a sufficient approach and departure distance in front of and behind the respective scale. All of them can measure independently of the direction of travel and whether the train is being pushed or pulled.
In principle, a distinction is made between three different types of construction:
1. Dynamic track weighbridge
The dynamic weighbridge consists of one or more weighbridges that can be connected together. The construction of the weighbridge is similar to the static track scales with load cells and weighing platform. The rails are applied to the weighing platform and are designed with rail bevelling. Rail switches are integrated into the rails to detect the position of the wagons on the scale. Together with the weighing terminal and the software, the weight of the individual wagons or the bogies is determined dynamically during the passage at up to 10 km/h.
Advantages:
Weighing accuracy class up to 0.2 for individual wagon weights in accordance with calibration regulations and OIML-R 106,
Due to the modular design, liquids can also be dynamically weighed in a verifiable manner,
Suitable as a static reference scale for calibration, thus saving costs with every recalibration,
A weighbridge is very robust and durable due to its construction like a static track scale.
Disadvantages:
No determination of wheel load and axle loads, however, the design can be expanded to include integrated axle load and wheel load measurement with force sensors in the track.
2. Dynamic track scales with strain gauges in the track
For dynamic track scales with force sensors, several force sensors are drilled and pressed into the track. When a train passes over the scale at up to 30 km/h, the rail is deformed by the mass of the vehicle. The change in material stress deforms the sensor, in which strain gauges are mounted as in a classic load cell. The weight of the individual wheelset or bogie can thus be calculated from the specific deformation behaviour of the rail.
Advantages:
Can be used as a wheel load scale and axle load scale,
Higher measuring speeds possible than with the other two designs,
Comparatively inexpensive due to the use of only a small amount of hardware and little track construction work.
Disadvantages:
Not calibratable,
Accuracy depends on passing speed,
Can only be used for solids.
3. Dynamic track scales based on weighing sleepers
A dynamic track scale based on weighing sleepers is, like the scale with strain gauges in the rail, a gapless construction without rail cuts. In simple terms, several sleepers are removed from the track and replaced by weighing sleepers in which load cells are installed. In contrast to the weighbridge design, the gapless (and thus force-coupled) construction means that the scale cannot be adjusted statically and can only operate dynamically. This requires a very stable substructure without a jump in stiffness. The difference from the scale with strain gauges in the rail is that calibratable sensors can be used for this variant, so the scale itself is calibratable.
Advantages:
Weighing accuracy class up to 0.2 for individual wagon weights in accordance with calibration regulations and OIML-R 106,
As with the scales with strain gauges in the rail, the hardware volume is low,
Modular design also enables legal-for-trade dynamic weighing of liquids.
Disadvantages:
Static reference scale required for dynamic calibration, which increases the costs for recalibration,
Costly substructure/track construction work required (to ensure long-term stability, a resin-based ballast bonding is usually used for the weighing track. A procedure that creates an almost fixed track).
Types
Electronic (deep pit type)
Electronic (pit less type)
Digital (deep pit type)
Digital (shallow pit)
Digital (pit less type)
Rail Weighbridge
Movable Weighbridge
Mechanical weighbridge
Mechanical (digital type)
Electro-mechanical
Portable weighbridge
Axle scales
Portable ramp end scales
In-motion weighbridge
Design concept
Truck scales can be surface mounted with a ramp leading up a short distance and the weighing equipment underneath or they can be pit mounted with the weighing equipment and platform in a pit so that the weighing surface is level with the road. They are typically built from steel or concrete and by nature are extremely robust.
In earlier versions the bridge is installed over a rectangular pit that contains levers that ultimately connect to a balance mechanism. The most complex portion of this type is the arrangement of levers underneath the weighbridge, since the response of the scale must be independent of the distribution of the load. Modern devices use multiple load cells that connect to electronic equipment that totalizes the sensor inputs. In either type of semi-permanent scale the weight readings are typically recorded in a nearby hut or office.
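A minimal sketch of that totalization step, with made-up cell readings and tare weight, might look as follows; a real indicator also applies per-cell calibration factors and filtering before reporting a weight.

```python
# Illustrative sketch of load-cell totalization on a weighbridge: the gross
# weight is the sum of all cell outputs, and the net cargo weight is the
# gross weight minus the known tare weight.  All numbers are made-up examples.

def totalize(cell_readings_kg):
    """Sum the outputs of every load cell under the weighbridge deck."""
    return sum(cell_readings_kg)

def net_weight(gross_kg, tare_kg):
    """Net cargo weight = gross vehicle weight minus tare (empty) weight."""
    return gross_kg - tare_kg

cells = [8120.0, 8340.0, 9975.0, 10105.0, 3210.0, 3250.0]   # six-cell deck
gross = totalize(cells)
print(f"gross: {gross:.0f} kg, net: {net_weight(gross, tare_kg=14500.0):.0f} kg")
```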
Many weighbridges are now linked to a PC which runs truck scale software capable of printing tickets and providing reporting features.
Uses
Truck scales can be used for two main purposes:
Selling or charging by weight over the bridge (Trade Approved)
Check weighing both axle weights and gross vehicle weights. This helps to stop axle overloading and possible heavy fines.
They are used in industries that manufacture or move bulk items, such as in mines or quarries, garbage dumps / recycling centers, bulk liquid and powder movement, household goods, and electrical equipment. Since the weight of the vehicle carrying the goods is known (and can be ascertained quickly if it is not known by the simple expedient of weighing the empty vehicle) they are a quick and easy way to measure the flow of bulk goods in and out of different locations.
A single axle truck scale or axle weighing system can be used to check individual axle weights and gross vehicle weights to determine whether the vehicle is safe to travel on the public highway without being stopped and fined by the authorities for being overloaded. Similar to the full size truck scale these systems can be pit mounted with the weighing surface flush to the level of the roadway or surface mounted.
For many uses (such as at police over the road truck weigh stations or temporary road intercepts) weighbridges have been largely supplanted by simple and thin electronic weigh cells, over which a vehicle is slowly driven. A computer records the output of the cell and accumulates the total vehicle weight. By weighing the force of each axle it can be assured that the vehicle is within statutory limits, which typically will impose a total vehicle weight, a maximum weight within an axle span limit and an individual axle limit. The former two limits ensure the safety of bridges while the latter protects the road surface.
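The check against statutory limits can be illustrated with the following sketch; the three limit values are placeholders chosen for the example rather than any jurisdiction's actual rules.

```python
# Sketch of the compliance check described above.  Each axle weight is
# compared against an individual axle limit, adjacent axles against a group
# limit, and the sum against a gross vehicle weight limit.  The limits below
# are placeholder values, not real statutory figures.

AXLE_LIMIT_KG = 10_000    # assumed individual axle limit
GROUP_LIMIT_KG = 18_000   # assumed limit for two adjacent axles
GROSS_LIMIT_KG = 40_000   # assumed gross vehicle weight limit

def check_vehicle(axle_weights_kg):
    violations = []
    if sum(axle_weights_kg) > GROSS_LIMIT_KG:
        violations.append("gross vehicle weight exceeded")
    for i, w in enumerate(axle_weights_kg, start=1):
        if w > AXLE_LIMIT_KG:
            violations.append(f"axle {i} over individual limit")
    for i in range(len(axle_weights_kg) - 1):
        if axle_weights_kg[i] + axle_weights_kg[i + 1] > GROUP_LIMIT_KG:
            violations.append(f"axles {i + 1}-{i + 2} over group limit")
    return violations or ["within limits"]

print(check_vehicle([6500, 9800, 9900, 9700]))   # -> ['axles 2-3 over group limit']
```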
Portable versions
Portable truck scales can also be found in use around the world. A portable truck scale has a lower framework that can be placed on non-typical surfaces such as dirt. These scales retain the same level of accuracy as a pit-type scale, with accuracy of up to ±1%. The first recorded portable truck scales in the US were units operated by the Weight Patrol of the Los Angeles Motor Patrol in 1929. Four such weighing units were used, one under each of the truck's wheels. Each unit could record up to .
Technological advancement
Digital load cells: Digital load cells have replaced traditional analog ones due to their superior accuracy, faster response times, and better resistance to environmental factors. These load cells offer real-time weight data with reduced signal interference.
Weighbridge software integration: Weighbridge software has been developed to streamline data collection, analysis, and reporting. This software simplifies integration with other business systems, improving compliance tracking, inventory management, and billing.
Remote monitoring and connectivity: Weighbridges now feature remote monitoring capabilities, allowing users to access weight data and system status in real time from a distance. This feature enhances efficiency by providing preventive maintenance and troubleshooting capabilities.
In-motion weighing: In-motion weighbridge systems have revolutionized truck weighing by allowing vehicles to be weighed while moving slowly over the scale. This eliminates the need to stop for weighing, improving traffic flow and saving time.
RFID technology: RFID technology is being integrated into weighbridge systems to automate the identification of vehicles and goods. This improves data accuracy, speeds up the weighing process, and reduces errors.
Imaging: Advanced camera systems capture images of vehicles and their loads during the weighing process. This visual evidence can be useful in dispute resolution, record-keeping, and verification.
Data analytics and reporting: Weighbridge technology now includes powerful data analytics tools that help organizations draw insights from weight data. These insights can aid in making informed operational decisions, identifying patterns, and optimizing load distribution.
Mobile apps and cloud integration: Mobile applications allow users to interact with weighbridge systems remotely and access reports, alerts, and real-time weight data. Integration with the cloud ensures secure data storage and cross-platform accessibility.
Sustainability: Weighbridge designs now incorporate solar-powered systems and energy-saving components to minimize their environmental impact.
Enhanced durability and construction: Weighbridge construction materials have advanced to withstand heavy usage, harsh weather conditions, and corrosive environments, resulting in longer lifespans and reduced maintenance requirements.
See also
On-board scale
Tare weight
Weigh lock
Weigh station
Weighing scale
References
Bridges
Weighing instruments
Measuring instruments | Truck scale | [
"Physics",
"Technology",
"Engineering"
] | 2,241 | [
"Structural engineering",
"Weighing instruments",
"Mass",
"Measuring instruments",
"Bridges",
"Matter"
] |
7,208,118 | https://en.wikipedia.org/wiki/Stellar%20mass%20loss | Stellar mass loss is a phenomenon observed in stars by which stars lose some mass over their lives. Mass loss can be caused by triggering events that cause the sudden ejection of a large portion of the star's mass. It can also occur when a star gradually loses material to a binary companion or due to strong stellar winds. Massive stars are particularly susceptible to losing mass in the later stages of evolution. The amount and rate of mass loss varies widely based on numerous factors.
Stellar mass loss plays a very important role in stellar evolution, the composition of the interstellar medium, nucleosynthesis as well as understanding the populations of stars in clusters and galaxies.
Causes
Every star undergoes some mass loss in its lifetime. This could be caused by its own stellar wind, or by interactions with the outside environment. Additionally, massive stars are particularly vulnerable to significant mass loss and can be influenced by a number of factors, including:
Gravitational attraction of a binary companion
Coronal mass ejection-type events
Ascension to red giant or red supergiant status
Some of these causes are discussed below, along with the consequences of such phenomenon.
Solar wind
The solar wind is a stream of plasma released from the upper atmosphere of the Sun. The high temperatures of the corona allow charged particles and other atomic nuclei to gain the energy needed to escape the Sun's gravity. The sun loses mass due to the solar wind at a very small rate, solar masses per year.
The solar wind carries trace amounts of the nuclei of heavy elements fused in the core of the sun, revealing the inner workings of the sun while also carrying information about the solar magnetic field. In 2021, the Parker Solar Probe measured 'sound speed' and magnetic properties of the solar wind plasma environment.
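As a rough sense of scale, the sketch below integrates a constant wind mass-loss rate over time; the rate of about 2 × 10⁻¹⁴ solar masses per year is a commonly quoted order-of-magnitude figure assumed here for illustration and is not taken from this article.

```python
# Rough sketch of cumulative solar-wind mass loss at a constant rate.  The
# rate below is an assumed order-of-magnitude value, not a figure from the
# article.

SOLAR_MASS_KG = 1.989e30
ASSUMED_RATE_MSUN_PER_YR = 2e-14   # assumption for illustration

def mass_lost(years, rate=ASSUMED_RATE_MSUN_PER_YR):
    """Total mass shed over 'years', returned in solar masses and kilograms."""
    msun = rate * years
    return msun, msun * SOLAR_MASS_KG

msun, kg = mass_lost(4.6e9)   # roughly the Sun's current age
print(f"~{msun:.1e} solar masses (~{kg:.1e} kg) over 4.6 billion years")
```

Even integrated over billions of years, this amounts to well under a tenth of a percent of the Sun's mass, consistent with the description of the rate as very small.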
Binary mass transfer
Often when a star is a member of a pair of close-orbiting binary stars, the tidal attraction of the gases near the center of mass is sufficient to pull gas from one star onto its partner. This effect is especially prominent when the partner is a white dwarf, neutron star, or black hole. Mass loss in binary systems has particularly interesting outcomes. If the secondary star in the system overflows its Roche lobe, it loses mass to the primary, greatly altering the evolution of both stars. If the primary star is a white dwarf, the system can develop into a Type Ia supernova. An alternative scenario for the same system is the formation of a cataclysmic variable, or nova. If the accreting star is a neutron star or a black hole, the resultant system is an X-ray binary.
A study in 2012 found that more than 70% of all massive stars exchange mass with a companion which leads to a binary merger in one-third of the cases. Since the trajectory of evolution of these stars is greatly altered due to the mass loss to the companion, models of stellar evolution are focusing on replicating these observations.
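The onset of Roche-lobe overflow is often estimated with Eggleton's (1983) approximation for the effective Roche-lobe radius; the short sketch below applies that standard formula to assumed example masses and separation, purely for illustration.

```python
# Sketch: Eggleton's approximation for the effective Roche-lobe radius of the
# donor star.  The masses and separation used are arbitrary example values.

import math

def roche_lobe_radius(separation, m_donor, m_companion):
    """Effective Roche-lobe radius of the donor, in the same units as 'separation'."""
    q = m_donor / m_companion
    q23 = q ** (2 / 3)
    return separation * 0.49 * q23 / (0.6 * q23 + math.log(1 + q ** (1 / 3)))

# Example: a 1.2 solar-mass donor and a 0.8 solar-mass companion, 5 solar radii apart.
r_lobe = roche_lobe_radius(separation=5.0, m_donor=1.2, m_companion=0.8)
print(f"donor Roche-lobe radius ≈ {r_lobe:.2f} solar radii")
# Mass transfer can begin once the donor's radius grows beyond this value.
```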
Mass ejection
Certain classes of stars, especially Wolf-Rayet stars, are sufficiently massive that, as they evolve, their radius increases, weakening their hold on their upper layers and allowing small disturbances to blast large amounts of the outer layers into space. Events such as solar flares and coronal mass ejections are mere blips on the mass-loss scale for low-mass stars (like our sun), but on massive stars such as Wolf-Rayet stars the equivalent events cause catastrophic ejection of stellar material into space.
Such stars lose mass prodigiously, spending much of their lives shedding material into the surrounding interstellar medium. As they are stripped of their hydrogen envelopes, they go on to expel heavier elements such as helium, carbon, nitrogen and oxygen, with some of the most massive stars putting out even heavier elements up to aluminum.
Red giant mass loss
Stars which have entered the red giant phase are notorious for rapid mass loss. As above, the gravitational hold on the upper layers is weakened, and they may be shed into space by violent events such as the beginning of a helium flash in the core. The final stage of a red giant's life will also result in prodigious mass loss as the star loses its outer layers to form a planetary nebula.
The structures of these nebulae provide insight into the history of the mass loss of the star. Over-densities and under-densities reveal the periods where the star was actively losing mass while the distribution of these clumps in space hints at the physical cause of the loss. Uniform spherical shells in the nebula point towards symmetric stellar winds while asymmetry and lack of uniform structure point to mass ejections and stellar flares as the cause.
This phenomenon takes on a new scale when looking at AGB stars. Stars found on the Asymptotic giant branch of the Hertzsprung–Russell diagram are the most prone to mass loss in the later stages of their evolution compared to others. This phase is when the most amount of mass is lost for a single star that does not go on to explode in a supernova.
See also
Red giant
Red supergiant
Betelgeuse
Coronal mass ejection
Helium flash
External links and further reading
Simulation of a Red Supergiant displaying instability and mass loss
A Review of Stellar Mass Loss in Massive Stars
Effects of Mass Loss of Intermediate stars on the Interstellar Medium
References
Concepts in stellar astronomy
Stellar phenomena | Stellar mass loss | [
"Physics"
] | 1,088 | [
"Concepts in stellar astronomy",
"Physical phenomena",
"Stellar phenomena",
"Concepts in astrophysics"
] |
7,209,279 | https://en.wikipedia.org/wiki/Glass%20fiber%20reinforced%20concrete | Glass fiber reinforced concrete (GFRC) is a type of fiber-reinforced concrete. The product is also known as glassfibre reinforced concrete or GRC in British English. Glass fiber concretes are mainly used in exterior building façade panels and as architectural precast concrete. Somewhat similar materials are fiber cement siding and cement boards.
Composition
GRC (glass fibre-reinforced concrete) consists of high-strength, alkali-resistant glass fibre embedded in a concrete matrix. In this form, both fibres and matrix retain their physical and chemical identities, while offering a synergistic combination of properties that cannot be achieved with either of the components acting alone. In general, the fibres are the principal load-carrying members, while the surrounding matrix keeps them in the desired locations and orientation, acts as a load-transfer medium between the fibres, and protects them from environmental damage. The fibres thus provide reinforcement for the matrix and serve other useful functions in fibre-reinforced composite materials. Glass fibres can be incorporated into the matrix either in continuous or discontinuous (chopped) lengths.
Durability was poor with the original type of glass fibres since the alkalinity of cement reacts with its silica. In the 1970s alkali-resistant glass fibres were commercialized. Alkali resistance is achieved by adding zirconia to the glass; the higher the zirconia content, the better the resistance to alkali attack. AR glass fibres should have a zirconia content of more than 16% to be in compliance with internationally recognized specifications (EN, ASTM, PCI, GRCA, etc.).
Laminates
A widely used application for fibre-reinforced concrete is structural laminate, obtained by adhering and consolidating thin layers of fibres and matrix into the desired thickness. The fibre orientation in each layer as well as the stacking sequence of various layers can be controlled to generate a wide range of physical and mechanical properties for the composite laminate. GFRC cast without steel framing is commonly used for purely decorative applications such as window trims, decorative columns, exterior friezes, or limestone-like wall panels.
Properties
The design of glass-fibre-reinforced concrete panels uses a knowledge of its basic properties under tensile, compressive, bending and shear forces, coupled with estimates of behavior under secondary loading effects such as creep, thermal response and moisture movement.
There are a number of differences between structural metal and fibre-reinforced composites. For example, metals in general exhibit yielding and plastic deformation, whereas most fibre-reinforced composites are elastic in their tensile stress-strain characteristics. However, the dissimilar nature of these materials provides mechanisms for high-energy absorption on a microscopic scale comparable to the yielding process. Depending on the type and severity of external loads, a composite laminate may exhibit gradual deterioration in properties but usually does not fail in a catastrophic manner. Mechanisms of damage development and growth in metal and composite structure are also quite different. Other important characteristics of many fibre-reinforced composites are their non-corroding behavior, high damping capacity and low coefficients of thermal expansion.
Glass-fibre-reinforced concrete architectural panels have the general appearance of pre-cast concrete panels, but differ in several significant ways. For example, the GFRC panels, on average, weigh substantially less than pre-cast concrete panels due to their reduced thickness. Their low weight decreases loads superimposed on the building’s structural components making construction of the building frame more economical.
Sandwich panels
A sandwich panel is a composite of three or more materials bonded together to form a structural panel. It takes advantage of the shear strength of a low density core material and the high compressive and tensile strengths of the GFRC facing to obtain high strength-to-weight ratios.
The theory of sandwich panels and functions of the individual components may be described by making an analogy to an I-beam. The core in a sandwich panel is comparable to the web of an I-beam, which supports the flanges and allows them to act as a unit. The web of the I-beam and the core of the sandwich panels carry the beam shear stresses. The core in a sandwich panel differs from the web of an I-beam in that it maintains continuous support for the facings, allowing the facings to be worked up to or above their yield strength without crimping or buckling. Obviously, the bonds between the core and facings must be capable of transmitting shear loads between these two components, thus making the entire structure an integral unit.
The load-carrying capacity of a sandwich panel can be increased dramatically by introducing light steel framing. Light steel stud framing is similar to conventional steel stud framing for walls, except that the frame is encased in a concrete product. Here, the sides of the steel frame are covered with two or more layers of GFRC, depending on the type and magnitude of external loads. The strong and rigid GFRC provides full lateral support on both sides of the studs, preventing them from twisting and buckling laterally. The resulting panel is lightweight in comparison with traditionally reinforced concrete, yet is strong and durable and can be easily handled.
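To make the I-beam analogy concrete, the following sketch uses the common thin-face approximation for a symmetric sandwich strip, in which the facings dominate the bending stiffness and the core's own bending contribution is neglected; the facing modulus and dimensions are assumed example values, not properties quoted here.

```python
# Sketch of the thin-face sandwich approximation D ≈ E_f · b · t_f · d² / 2,
# where d is the distance between facing centroids and the core's own bending
# stiffness is neglected.  All numbers are assumed example values.

def sandwich_rigidity(e_face, t_face, core_thickness, width):
    """Approximate flexural rigidity (N·m²) of a symmetric sandwich strip."""
    d = core_thickness + t_face          # distance between facing centroids
    return e_face * width * t_face * d ** 2 / 2

# 10 mm GFRC skins (assumed E ≈ 20 GPa) on a 60 mm core, per metre of width.
print(f"D ≈ {sandwich_rigidity(20e9, 0.010, 0.060, 1.0):.2e} N·m²")
```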
Technical specifications
GFRC Material Properties
Typical strength properties of GRC
Uses
GFRC is highly versatile and has a large number of use cases due to its strength, low weight, and design flexibility. The material is most commonly encountered in the construction industry. It is used in demanding applications such as architectural cladding hung several stories above sidewalks, as well as for aesthetic items such as interior furniture pieces (for example GFRC coffee tables), GRC jali and elevation screens. Glass fiber reinforcement not only reduces the cost of a concrete element but also enhances its strength.
References
Concrete
Composite materials
Fibre-reinforced cementitious materials | Glass fiber reinforced concrete | [
"Physics",
"Engineering"
] | 1,230 | [
"Structural engineering",
"Composite materials",
"Materials",
"Concrete",
"Matter"
] |
7,209,369 | https://en.wikipedia.org/wiki/Fiber-reinforced%20concrete | Fiber-reinforced concrete or fibre-reinforced concrete (FRC) is concrete containing fibrous material which increases its structural integrity. It contains short discrete fibers that are uniformly distributed and randomly oriented. Fibers include steel fibers, glass fibers, synthetic fibers and natural fibers – each of which lend varying properties to the concrete. In addition, the character of fiber-reinforced concrete changes with varying concretes, fiber materials, geometries, distribution, orientation, and densities.
Historical perspective
The concept of using fibers as reinforcement is not new. Fibers have been used as reinforcement since ancient times. Historically, horsehair was used in mortar and straw in mudbricks. In the 1900s, asbestos fibers were used in concrete. In the 1950s, the concept of composite materials came into being and fiber-reinforced concrete was one of the topics of interest. Once the health risks associated with asbestos were discovered, there was a need to find a replacement for the substance in concrete and other building materials. By the 1960s, steel, glass (GFRC), and synthetic (such as polypropylene) fibers were used in concrete. Research into new fiber-reinforced concretes continues today.
Fibers are usually used in concrete to control cracking due to plastic shrinkage and to drying shrinkage. They also reduce the permeability of concrete and thus reduce bleeding of water. Some types of fibers produce greater impact, abrasion, and shatter resistance in concrete. Larger steel or synthetic fibers can replace rebar or steel completely in certain situations. Fiber-reinforced concrete has all but completely replaced rebar in the underground construction industry, for example in tunnel segments, where almost all tunnel linings are fiber-reinforced in lieu of rebar. This may, in part, be due to issues relating to oxidation or corrosion of steel reinforcement, which can occur in climates subject to water or intense and repeated moisture (see the Surfside building collapse). Indeed, some fibers actually reduce the compressive strength of concrete. Lignocellulosic fibers in a cement matrix can degrade due to the hydrolysis of lignin and hemicelluloses.
The amount of fibers added to a concrete mix is expressed as a percentage of the total volume of the composite (concrete and fibers), termed "volume fraction" (Vf). Vf typically ranges from 0.1 to 3%. The aspect ratio (l/d) is calculated by dividing fiber length (l) by its diameter (d). Fibers with a non-circular cross section use an equivalent diameter for the calculation of aspect ratio. If the fiber's modulus of elasticity is higher than that of the matrix (concrete or mortar binder), the fibers help to carry the load by increasing the tensile strength of the material. Increasing the aspect ratio of the fiber usually increases the flexural strength and toughness of the matrix. Longer fibers anchor better in the matrix, and a finer diameter increases the fiber count. To ensure that each fiber strand is effective, it is recommended to use fibers longer than the maximum aggregate size. Normal concrete contains aggregate of an equivalent diameter that makes up 35–45% of the concrete; fibers longer than are more effective. However, fibers that are too long and not properly treated at the time of processing tend to "ball" in the mix and create workability problems.
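The two mix-design quantities just defined can be illustrated with a short sketch; the fiber dimensions and density below are assumed example values for a steel fiber.

```python
# Sketch of the mix-design quantities defined above: aspect ratio l/d and the
# fiber dosage (kg per m³ of composite) implied by a chosen volume fraction Vf.
# Fiber dimensions and density are assumed example values.

def aspect_ratio(length_mm, diameter_mm):
    return length_mm / diameter_mm

def dosage_kg_per_m3(vf_percent, fiber_density_kg_m3):
    """Fiber mass per cubic metre of composite for a given volume fraction Vf."""
    return (vf_percent / 100.0) * fiber_density_kg_m3

# Example: a 50 mm long, 0.75 mm diameter steel fiber (density ~7850 kg/m³).
print(f"aspect ratio l/d ≈ {aspect_ratio(50, 0.75):.0f}")
print(f"dosage at Vf = 0.5%: {dosage_kg_per_m3(0.5, 7850):.0f} kg/m³")
```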
Fibers are added for long-term durability of concrete. Glass and polyester fibers can decompose in the alkaline conditions of concrete, and therefore require various additives or surface treatments to remain durable.
The High Speed 1 tunnel linings incorporated concrete containing 1 kg/m3 or more of polypropylene fibers, of diameter 18 and 32 μm, giving the benefits noted below. Adding fine-diameter polypropylene fibers not only provides reinforcement in the tunnel lining, but also prevents "spalling" and damage to the lining in the event of an accidental fire.
Benefits
Glass fibers can:
Improve concrete strength at low cost.
Add tensile reinforcement in all directions, unlike rebar.
Add a decorative look as they are visible in the finished concrete surface.
Polypropylene and nylon fibers can:
Improve mix cohesion, improving pumpability over long distances
Improve freeze-thaw resistance
Improve resistance to explosive spalling in case of a severe fire
Improve impact- and abrasion-resistance
Increase resistance to plastic shrinkage during curing
Improve structural strength
Reduce steel reinforcement requirements
Improve ductility
Reduce crack widths and control the crack widths tightly, thus improving durability
Steel fibers can:
Improve structural strength
Reduce steel reinforcement requirements
Reduce crack widths and control the crack widths tightly, thus improving durability
Improve impact- and abrasion-resistance
Improve freeze-thaw resistance
Natural (lignocellulosic, LC) fibers and/or particles can:
Improve ductility
Contribute to crack control via bridging
Reduce the negative environmental impact of the materials (GWP - global warming potential)
Reduce weight
LC (plant-based) fibers and particles can degrade in a cement matrix
Blends of both steel and polymeric fibers are often used in construction projects in order to combine the benefits of both products; structural improvements provided by steel fibers and the resistance to explosive spalling and plastic shrinkage improvements provided by polymeric fibers.
In certain specific circumstances, steel fiber or macro synthetic fibers can entirely replace traditional steel reinforcement bar ("rebar") in reinforced concrete. This is most common in industrial flooring but also in some other precasting applications. Typically, these are corroborated with laboratory testing to confirm that performance requirements are met. Care should be taken to ensure that local design code requirements are also met, which may impose minimum quantities of steel reinforcement within the concrete. There are increasing numbers of tunnelling projects using precast lining segments reinforced only with steel fibers.
Micro-rebar has also been recently tested and approved to replace traditional reinforcement in vertical walls designed in accordance with ACI 318 Chapter 14.
Some developments
At least half of the concrete in a typical building component serves to protect the steel reinforcement from corrosion. Concrete using only fiber as reinforcement can therefore result in savings of concrete, and thereby in a reduction of the greenhouse gas emissions associated with it. FRC can be molded into many shapes, giving designers and engineers greater flexibility.
High performance FRC (HPFRC) claims it can sustain strain-hardening up to several percent strain, resulting in a material ductility of at least two orders of magnitude higher when compared to normal concrete or standard fiber-reinforced concrete.
HPFRC also claims a unique cracking behavior. When loaded beyond the elastic range, HPFRC maintains crack widths below 100 μm, even when deformed to several percent tensile strain. Field trials of HPFRC with the Michigan Department of Transportation nevertheless resulted in early-age cracking.
Recent studies on high-performance fiber-reinforced concrete in a bridge deck found that adding fibers provided residual strength and controlled cracking. There were fewer and narrower cracks in the FRC even though the FRC had more shrinkage than the control. Residual strength is directly proportional to the fiber content.
The use of natural fibers has become a topic of research mainly due to the expected positive environmental impact, recyclability, and economy. The degradation of natural fibers and particles in a cement matrix is a concern.
Some studies were performed using waste carpet fibers in concrete as an environmentally friendly use of recycled carpet waste. A carpet typically consists of two layers of backing (usually fabric from polypropylene tape yarns), joined by CaCO3 filled styrene-butadiene latex rubber (SBR), and face fibers (majority being nylon 6 and nylon 66 textured yarns). Such nylon and polypropylene fibers can be used for concrete reinforcement. Other ideas are emerging to use recycled materials as fibers: recycled polyethylene terephthalate (PET) fiber, for example.
Standards
International
The following are several international standards for fiber-reinforced concrete:
BS EN 14889-1:2006 – Fibres for Concrete. Steel Fibres. Definitions, specifications & conformity
BS EN 14845-1:2007 – Test methods for fibres in concrete
ASTM A820-16 – Standard Specification for Steel Fibers for Fiber-Reinforced Concrete (superseded)
ASTM C1116/C1116M - Standard Specification for Fiber-Reinforced Concrete
ASTM C1018-97 – Standard Test Method for Flexural Toughness and First-Crack Strength of Fiber-Reinforced Concrete (Using Beam With Third-Point Loading) (Withdrawn 2006)
Canada
CSA A23.1-19 Annex U - Ultra High Performance Concrete (with and without Fiber Reinforcement)
CSA S6-19, 8.1 - Design Guideline for Ultra High Performance Concrete
See also
Fiber-reinforced plastic
Glass-reinforced plastic
Reinforced concrete
Steel fibre-reinforced shotcrete
Textile-reinforced concrete
References
Citations
Books
Composite materials
Reinforced concrete
Glass applications
Building materials
Fibre-reinforced cementitious materials | Fiber-reinforced concrete | [
"Physics",
"Engineering"
] | 1,795 | [
"Building engineering",
"Composite materials",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
7,212,817 | https://en.wikipedia.org/wiki/Crystallization%20adjutant | A crystallization adjutant is a material used to promote crystallization, normally in a context where a material does not crystallize naturally from a pure solution.
Additives in Macromolecular Crystallization
In macromolecular crystallography, the term additive is used instead of adjutant. An additive can either interact directly with the protein, and become incorporated at a fixed position in the resulting crystal or have a role within the disordered solvent, that in protein crystals constitute roughly 50% of the lattice volume.
Polyethylene glycols of various molecular weights and high-ionic strength salts such as ammonium sulfate and sodium citrate that induce protein precipitation when used in high concentrations are classified as precipitants, while certain other salts such as zinc sulfate or calcium sulfate that may cause a protein to precipitate vigorously even when used in small amounts are considered adjutants. Crystallization adjutants are considered additives when they are effective at relatively low concentrations.
The distinction between buffers and adjutants is also fuzzy. Buffer molecules can become part of the lattice (for example, HEPES becomes incorporated in crystals of human neutrophil collagenase), but their main use is to maintain the rather precise pH requirements for crystallization that many proteins have. Commonly used buffers such as citrate have a high ionic strength, and at typical buffer concentrations they also act as precipitants. Various species such as Ca2+ and Zn2+ are a biological requirement for certain proteins to fold correctly, and certain co-factors are needed to maintain a well-defined conformation. Certain strategies, such as replacing precipitants and buffers with others intended to have a similar effect, have been used to differentiate between the roles played in protein crystallization by the various components of the crystallization solution.
Additives for Membrane Protein Crystallization
For membrane proteins, the situation is more complicated because the system that is being crystallized is not the membrane protein itself but the micellar system in which the membrane protein is embedded.
The size of the protein-detergent mixed micelles is affected by both additives and detergents, which strongly influences the crystals obtained. In addition to varying the concentration of primary detergents, additives (lipids and alcohols) and secondary detergents can be used to modulate the size and shape of the detergent micelles. By reducing the size of the mixed micelles, lattice-forming protein-protein contacts are encouraged. Lipid cubic phases, spontaneous self-assembling liquid crystals or lipid mesophases have been used successfully in the crystallization of integral membrane proteins.
Temperature, salts, detergents, various additives are used in this system to tailor the cubic phase to suit the target protein. Typical detergents used are n-dodecyl-β-d-maltopyranoside, n-decyl-β-d-glucopyranoside, lauryldimethylamine oxide LDAO, n-hexyl-β-d-glucopyranoside, n-nonyl-β-d-glucopyranoside and n-octyl-β-d-glucopyranoside; the various lipids are dioleoyl phosphatidylcholine, dioleoyl phosphatidylethanolamine and monoolein.
References
External links
A list of adjutants from a German Crystallography laboratory
The 'Jeffamine' group of compounds, a number of which are commonly used adjutants
Crystallography | Crystallization adjutant | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 732 | [
"Crystallography",
"Condensed matter physics",
"Materials science"
] |
1,547,057 | https://en.wikipedia.org/wiki/EHealth | eHealth describes healthcare services which are supported by digital processes, communication or technology such as electronic prescribing, Telehealth, or Electronic Health Records (EHRs). The term "eHealth" originated in the 1990s, initially conceived as "Internet medicine," but has since evolved to have a broader range of technologies and innovations aimed at enhancing healthcare delivery and accessibility. According to the World Health Organization (WHO), eHealth encompasses not only internet-based healthcare services but also modern advancements such as artificial intelligence, mHealth (mobile health), and telehealth, which collectively aim to improve accessibility and efficiency in healthcare delivery. Usage of the term varies widely. A study in 2005 found 51 unique definitions of eHealth, reflecting its diverse applications and interpretations. While some argue that it is interchangeable with health informatics as a broad term covering electronic/digital processes in health, others use it in the narrower sense of healthcare practice specifically facilitated by the Internet. It also includes health applications and links on mobile phones, referred to as mHealth or m-Health. . Key components of eHealth include electronic health records (EHRs), telemedicine, health information exchange, mobile health applications, wearable devices, and online health information. For example, diabetes monitoring apps allow patients to track health metrics in real time, bridging the gap between home and clinical care. These technologies enable healthcare providers, patients, and other stakeholders to access, manage, and exchange health information more effectively, leading to improved communication, decision-making, and overall healthcare outcomes.
Types
The term can encompass a range of services or systems that are at the edge of medicine/healthcare and information technology, including:
Electronic health record: enabling the communication of patient data between different healthcare professionals (GPs, specialists etc.);
Computerized physician order entry: a means of requesting diagnostic tests and treatments electronically and receiving the results
ePrescribing: access to prescribing options, printing prescriptions to patients and sometimes electronic transmission of prescriptions from doctors to pharmacists
Clinical decision support system: providing information electronically about protocols and standards for healthcare professionals to use in diagnosing and treating patients
Telemedicine: physical and psychological diagnosis and treatments at a distance, including telemonitoring of patients functions and videoconferencing;
Telerehabilitation: providing rehabilitation services over a distance through telecommunications.
Telesurgery: use robots and wireless communication to perform surgery remotely.
Teledentistry: exchange clinical information and images over a distance.
Consumer health informatics: use of electronic resources on medical topics by healthy individuals or patients;
Health knowledge management: e.g. in an overview of latest medical journals, best practice guidelines or epidemiological tracking (examples include physician resources such as Medscape and MDLinx);
Virtual healthcare teams: consisting of healthcare professionals who collaborate and share information on patients through digital equipment (for transmural care)
mHealth or m-Health: includes the use of mobile devices in collecting aggregate and patient-level health data, providing healthcare information to practitioners, researchers, and patients, real-time monitoring of patient vitals, and direct provision of care (via mobile telemedicine);
Medical research using grids: powerful computing and data management capabilities to handle large amounts of heterogeneous data.
Health informatics / healthcare information systems: also often refer to software solutions for appointment scheduling, patient data management, work schedule management and other administrative tasks surrounding health. These can include integrated data-collection platforms for devices and standards, which require extended research.
Internet Based Sources for Public Health Surveillance (Infoveillance).
Contested Definition
Several authors have noted the variable usage in the term; from being specific to the use of the Internet in healthcare to being generally around any use of computers in healthcare. Various authors have considered the evolution of the term and its usage and how this maps to changes in health informatics and healthcare generally. Oh et al., in a 2005 systematic review of the term's usage, offered the definition of eHealth as a set of technological themes in health today, more specifically based on commerce, activities, stakeholders, outcomes, locations, or perspectives. One thing that all sources seem to agree on is that e-health initiatives do not originate with the patient, though the patient may be a member of a patient organization that seeks to do this, as in the e-Patient movement.
eHealth literacy
eHealth literacy is defined as "the ability to seek, find, understand and appraise health information from electronic sources and apply knowledge gained to addressing or solving a health problem." This concept encompasses six types of literacy: traditional (literacy and numeracy), information, media, health, computer, and scientific. Of these, media and computer literacies are unique to the Internet context. eHealth media literacy includes awareness of media bias, the ability to discern both explicit and implicit meanings from media messages, and the capability to derive accurate information from digital content.
While eHealth literacy involves the ability to use technology, it is extremely important to have the skills to critically evaluate online health information; this makes media literacy a critical part of successfully using eHealth. Having the composite skills of eHealth literacy allows health consumers to achieve positive outcomes from using the Internet for health purposes. eHealth literacy has the potential both to protect consumers from harm and to empower them to participate fully in informed health-related decision making. People with high levels of eHealth literacy are also more aware of the risk of encountering unreliable information on the Internet. On the other hand, the extension of digital resources to the health domain in the form of eHealth literacy can also create new gaps between health consumers. eHealth literacy hinges not on mere access to technology, but rather on the skill to apply the accessed knowledge. The effectiveness of eHealth also relies heavily on the ease of use of the technology employed by the patient: a high understanding of technology will not overcome the obstacles of overcomplicated equipment for patients who are physically or mentally impaired.
The population of elderly people surpassed the number of children for the first time in history in 2018. A more multi-faceted approach is necessary for this age group, because they are more susceptible to chronic disease, contraindications of medication, and other age-related setbacks like forgetfulness. eHealth offers services that can be very helpful in all of these scenarios, making an elderly patient's quality of life substantially better with proper use.
Data exchange
One of the factors hindering the widespread acceptance of e-health tools is the concern about privacy, particularly regarding EPRs (Electronic patient record). This main concern has to do with the confidentiality of the data, as well as non-confidential data that may be vulnerable to unauthorized access. Each medical practice has its own jargon and diagnostic tools, so to standardize the exchange of information, various coding schemes may be used in combination with international medical standards. Systems that deal with these transfers are often referred to as Health Information Exchange (HIE). Of the forms of e-health already mentioned, there are roughly two types; front-end data exchange and back-end exchange.
Front-end exchange typically involves the patient, while back-end exchange does not. A common example of a rather simple front-end exchange is a patient sending a photo taken by mobile phone of a healing wound and sending it via email to the family doctor for control. Such an action may avoid the cost of an expensive visit to the hospital.
A common example of a back-end exchange is when a patient on vacation visits a doctor who then may request access to the patient's health records, such as medicine prescriptions, x-ray photographs, or blood test results. Such an action may reveal allergies or other prior conditions that are relevant to the visit.
Thesaurus
Successful e-health initiatives such as e-Diabetes have shown that for data exchange to be facilitated either at the front-end or the back-end, a common thesaurus is needed for terms of reference. Various medical practices in chronic patient care (such as for diabetic patients) already have a well defined set of terms and actions, which makes standard communication exchange easier, whether the exchange is initiated by the patient or the caregiver.
In general, explanatory diagnostic information (such as the standard ICD-10) may be exchanged insecurely, and private information (such as personal information from the patient) must be secured. E-health manages both flows of information, while ensuring the quality of the data exchange.
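A minimal sketch of that separation, using an invented record and field names, is shown below; real exchanges use standardized message formats and proper encryption rather than this simplified split.

```python
# Illustrative split of a patient record into explanatory diagnostic data,
# which may travel openly, and private personal data, which must be secured.
# The field names, values and ICD-10 code are invented example data.

record = {
    "patient_name": "Jane Doe",           # private
    "date_of_birth": "1980-04-02",        # private
    "icd10_code": "E11.9",                # explanatory diagnostic information
    "prescription": "example medication"  # private
}

PRIVATE_FIELDS = {"patient_name", "date_of_birth", "prescription"}

shareable = {k: v for k, v in record.items() if k not in PRIVATE_FIELDS}
secured = {k: v for k, v in record.items() if k in PRIVATE_FIELDS}

print("may be exchanged openly:", shareable)
print("must go over a secured channel:", secured)
```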
Early adopters
Patients living with long term conditions (also called chronic conditions) over time often acquire a high level of knowledge about the processes involved in their own care, and often develop a routine in coping with their condition. For these types of routine patients, front-end e-health solutions tend to be relatively easy to implement.
E-mental health
E-mental health is frequently used to refer to internet based interventions and support for mental health conditions. However, it can also refer to the use of information and communication technologies that also includes the use of social media, landline and mobile phones. These services can range from providing information to offering peer support, computer-based programs, virtual applications, games, and real-time interaction with trained clinicians. Additionally, services can be delivered through telephones and interactive voice response (IVR).
Mental disorders, including alcohol and drug use disorders, mood disorders such as depression, dementia, schizophrenia, and anxiety disorders can all be addressed through e-mental health services. The majority of e-mental health interventions have focused on the treatment of depression and anxiety. There are also e-mental health programs available for other interventions such as smoking cessation, gambling, and post-disaster mental health.
Advantages and disadvantages
E-mental health has a number of advantages such as being low cost, easily accessible and providing anonymity to users. However, there are also a number of disadvantages such as concerns regarding treatment credibility, user privacy and confidentiality. Online security involves the implementation of appropriate safeguards to protect user privacy and confidentiality. This includes appropriate collection and handling of user data, the protection of data from unauthorized access and modification, and the safe storage of data. Technical difficulties are another potential disadvantage: with almost all forms of technology there will be unintended difficulties or malfunctions, and tablets, computers, and wireless medical devices are no exception. eHealth is also heavily dependent on the patient having a functional Wi-Fi connection, an issue that often cannot be resolved without expert help.
E-mental health has been gaining momentum in the academic research as well as practical arenas in a wide variety of disciplines such as psychology, clinical social work, family and marriage therapy, and mental health counseling. Testifying to this momentum, the E-Mental Health movement has its own international organization, the International Society for Mental Health Online. However, e-Mental health implementation into clinical practice and healthcare systems remains limited and fragmented.
Programs
There are at least five programs currently available to treat anxiety and depression. Several programs have been identified by the UK National Institute for Health and Care Excellence as cost effective for use in primary care. These include Fearfighter, a text based cognitive behavioral therapy program to treat people with phobias, and Beating the Blues, an interactive text, cartoon and video CBT program for anxiety and depression. Two programs have been supported for use in primary care by the Australian Government. The first is Anxiety Online, a text based program for the anxiety, depressive and eating disorders, and the second is THIS WAY UP, a set of interactive text, cartoon and video programs for the anxiety and depressive disorders. Another is iFightDepression a multilingual, free to use, web-based tool for self-management of less severe forms of depression, for use under guidance of a GP or psychotherapist.
There are a number of online programs relating to smoking cessation. QuitCoach is a personalised quit plan based on the users response to questions regarding giving up smoking and tailored individually each time the user logs into the site. Freedom From Smoking takes users through lessons that are grouped into modules that provide information and assignments to complete. The modules guide participants through steps such as preparing to quit smoking, stopping smoking and preventing relapse.
Other internet programs have been developed specifically as part of research into treatment for specific disorders. For example, an online self-directed therapy for problem gambling was developed to specifically test this as a method of treatment. All participants were given access to a website. The treatment group was provided with behavioural and cognitive strategies to reduce or quit gambling. This was presented in the form of a workbook which encouraged participants to self-monitor their gambling by maintaining an online log of gambling and gambling urges. Participants could also use a smartphone application to collect self-monitoring information. Finally participants could also choose to receive motivational email or text reminders of their progress and goals.
An internet-based intervention was also developed for use after Hurricane Ike in 2009. During this study, 1,249 disaster-affected adults were randomly recruited to take part in the intervention. Participants were given a structured interview and then invited to access the web intervention using a unique password. Access to the website was provided for a four-month period. As participants accessed the site they were randomly assigned either to the intervention or to a comparison condition. Those assigned to the intervention were provided with modules consisting of information regarding effective coping strategies to manage mental health and health-risk behaviour.
eHealth programs have been found to be effective in treating borderline personality disorder (BPD).
Cybermedicine
Cybermedicine is the use of the Internet to deliver medical services, such as medical consultations and drug prescriptions. It is the successor to telemedicine, wherein doctors would consult and treat patients remotely via telephone or fax.
Cybermedicine is already being used in small projects where images are transmitted from a primary care setting to a medical specialist, who comments on the case and suggests which intervention might benefit the patient. A field that lends itself to this approach is dermatology, where images of an eruption are communicated to a hospital specialist who determines if referral is necessary.
The field has also expanded to include online "ask the doctor" services that allow patients direct, paid access to consultations (with varying degrees of depth) with medical professionals (examples include Bundoo.com, Teladoc, and Ask The Doctor).
A Cyber Doctor, known in the UK as a Cyber Physician, is a medical professional who does consultation via the internet, treating virtual patients, who may never meet face to face. This is a new area of medicine which has been utilized by the armed forces and teaching hospitals offering online consultation to patients before making their decision to travel for unique medical treatment only offered at a particular medical facility.
Self-monitoring healthcare devices
Self-monitoring is the use of sensors or tools which are readily available to the general public to track and record personal data. The sensors are usually wearable devices and the tools are digitally available through mobile device applications. Self-monitoring devices were created for the purpose of allowing personal data to be instantly available to the individual to be analyzed. As of now, fitness and health monitoring are the most popular applications for self-monitoring devices. The biggest benefit to self-monitoring devices is the elimination of the necessity for third party hospitals to run tests, which are both expensive and lengthy. These devices are an important advancement in the field of personal health management. Self-monitoring devices, like fitness trackers, have also been shown to help manage chronic diseases, providing users with real-time data that supports ongoing care and better disease management.
Self-monitoring healthcare devices exist in many forms. An example is the Nike+ FuelBand, which is a modified version of the original pedometer. This device is wearable on the wrist and allows one to set a personal goal for a daily energy burn. It records the calories burned and the number of steps taken for each day while simultaneously functioning as a watch. To add to the ease of the user interface, it includes both numeric and visual indicators of whether or not the individual has achieved his or her daily goal. Finally, it is also synced to an iPhone app which allows for tracking and sharing of personal record and achievements.
Other monitoring devices have more medical relevance. A well-known device of this type is the blood glucose monitor. The use of this device is restricted to diabetic patients and allows users to measure the blood glucose levels in their body. It is extremely quantitative and the results are available instantaneously. However, this device is not as independent of a self-monitoring device as the Nike+ Fuelband because it requires some patient education before use. One needs to be able to make connections between the levels of glucose and the effect of diet and exercise. In addition, the users must also understand how the treatment should be adjusted based on the results. In other words, the results are not just static measurements.
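A small sketch of the kind of feedback such a device or its companion app can give is shown below; the target range and readings are assumed example numbers, not clinical guidance.

```python
# Sketch of self-monitoring feedback: flag readings that fall outside a
# target range so the user can connect results to diet, exercise or treatment.
# The range and readings are assumed example numbers, not clinical guidance.

TARGET_RANGE_MG_DL = (70, 180)   # assumed illustrative target range

def flag(readings_mg_dl, low=TARGET_RANGE_MG_DL[0], high=TARGET_RANGE_MG_DL[1]):
    """Return (reading, status) pairs for a list of glucose readings."""
    return [(r, "low" if r < low else "high" if r > high else "in range")
            for r in readings_mg_dl]

for reading, status in flag([64, 95, 142, 201]):
    print(f"{reading:>3} mg/dL -> {status}")
```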
The demand for self-monitoring health devices is skyrocketing, as wireless health technologies have become especially popular in the last few years. In fact, it is expected that by 2016, self-monitoring health devices will account for 80% of wireless medical devices. The key selling point for these devices is the mobility of information for consumers. The accessibility of mobile devices such as smartphones and tablets has increased significantly within the past decade. This has made it easier for users to access real-time information in a number of peripheral devices.
There are still many future improvements for self-monitoring healthcare devices. Although most of these wearable devices have been excellent at providing direct data to the individual user, the biggest task which remains at hand is how to effectively use this data. Although the blood glucose monitor allows the user to take action based on the results, measurements such as the pulse rate, EKG signals, and calories do not necessarily serve to actively guide an individual's personal healthcare management. Consumers are interested in qualitative feedback in addition to the quantitative measurements recorded by the devices. Integrating self-monitoring devices with healthcare providers can help close this gap by allowing healthcare professionals to track their patients' data remotely, which in turn allows for more personalized care and timely interventions.
eHealth During COVID-19
The pandemic that impacted the entire world made it extremely difficult for vast amounts of people to receive adequate healthcare in person. Elderly citizens and people with chronic health conditions were at more risk than the average healthy human, therefore they were more adversely affected than most. The switch from in-person to telehealth appointments and interventions was necessary to reduce the risks of spreading and/or contracting the disease. The forced use of telehealth during the pandemic highlighted its strengths and weaknesses, which accelerated the progression of this medium. The user feedback on eHealth during the COVID-19 pandemic was very positive, and consequently many patients and healthcare providers reported that they will continue to use this method of healthcare following the pandemic.
In developing countries
eHealth in general, and telemedicine in particular, is a vital resource for remote regions of emerging and developing countries but is often difficult to establish because of the lack of communications infrastructure. For example, in Benin, hospitals can often become inaccessible due to flooding during the rainy season, and across Africa the low population density, along with severe weather conditions and the difficult financial situation in many African states, has meant that the majority of the African people are badly disadvantaged in medical care. Telemedicine in Nepal is becoming a popular tool to improve healthcare delivery in the face of difficult terrain. In many regions there is not only a significant lack of facilities and trained health professionals, but also no access to eHealth because there is no internet access in remote villages, or even a reliable electricity supply.
Approximately 13 percent of people who live in Kenya have health insurance. A majority of the total health expenditure in sub-Saharan Africa was paid out-of-pocket, which forces millions into poverty yearly. A Kenyan service by the name of M-PESA may offer a solution to this problem. This mobile platform provides full transparency of patients' needs and allows access to medical products and the ability to efficiently manage their funding.
Internet connectivity, and the benefits of eHealth, can be brought to these regions using satellite broadband technology, and satellite is often the only solution where terrestrial access may be limited, or poor quality, and one that can provide a fast connection over a vast coverage area.
Evaluation
While eHealth has become an indispensable facet of healthcare in the past 5 years, there are still barriers preventing it from reaching its full potential. Knowledge of the socio-economic performance of eHealth is limited, and findings from evaluations are often challenging to transfer to other settings. Socio-economic evaluations of some narrow types of mHealth can rely on health economic methodologies, but larger scale eHealth may have too many variables, and tortuous, intangible cause and effect links may need a wider approach. There are no international guidelines for the usage of eHealth due to many variables such as ignorance on the matter, infrastructure issues, quality of healthcare professionals and lack of healthcare plans. It should also be stated that the effectiveness of eHealth is also dependent on the patient's condition. Some researchers believe that online healthcare may be most efficient as a supplement to in-person care.
See also
Personal Science
Human Enhancement
Quantified self
Center for Telehealth and E-Health Law
eHealthInsurance
EUDRANET
European Institute for Health Records
Health 2.0
Telehealth
Seth Roberts
References
Further reading
External links
Health informatics
Telemedicine | EHealth | [
"Biology"
] | 4,473 | [
"Health informatics",
"Medical technology"
] |
1,547,157 | https://en.wikipedia.org/wiki/Population%20size | In population genetics and population ecology, population size (usually denoted N) is a countable quantity representing the number of individual organisms in a population. Population size is directly associated with the amount of genetic drift, and is the underlying cause of effects like population bottlenecks and the founder effect. Genetic drift is the major cause of the loss of genetic diversity within populations; it drives fixation and can potentially lead to speciation events.
Genetic drift
Of the five conditions required to maintain Hardy-Weinberg equilibrium, the assumption of infinite population size will always be violated; this means that some degree of genetic drift is always occurring. Smaller population size leads to increased genetic drift, and it has been hypothesized that this gives small populations an evolutionary advantage in the acquisition of genome complexity. An alternative hypothesis posits that while genetic drift plays a larger role in small populations developing complexity, selection is the mechanism by which large populations develop complexity.
Population bottlenecks and founder effect
Population bottlenecks occur when population size is sharply reduced for a short period of time, decreasing the genetic diversity of the population.
The founder effect occurs when a few individuals from a larger population establish a new population; it also decreases genetic diversity and was originally outlined by Ernst Mayr. The founder effect is a special case of genetic drift, as the smaller founding population has decreased genetic diversity, which will move alleles within the population more rapidly towards fixation.
Modeling genetic drift
Genetic drift is typically modeled in lab environments using bacterial populations or digital simulation. In digital organisms, a generated population undergoes evolution based on varying parameters, including differential fitness, variation, and heredity set for individual organisms.
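As a concrete illustration of such a digital simulation, the sketch below implements a minimal Wright-Fisher model in Python. It is only a toy example: the population sizes, starting allele frequency, generation count and random seed are arbitrary choices for illustration and do not correspond to the studies discussed here.

```python
import random

def wright_fisher(pop_size, p0=0.5, generations=200, seed=1):
    """Simulate drift of one biallelic locus in an idealized population.

    Each generation, 2*pop_size gene copies are drawn from the previous
    generation's allele frequency (the Wright-Fisher model); drift is the
    only force acting. Returns the allele-frequency trajectory.
    """
    rng = random.Random(seed)
    freqs = [p0]
    p = p0
    for _ in range(generations):
        # Binomial sampling of 2N gene copies from frequency p.
        copies = sum(rng.random() < p for _ in range(2 * pop_size))
        p = copies / (2 * pop_size)
        freqs.append(p)
        if p in (0.0, 1.0):   # allele lost or fixed
            break
    return freqs

# Compare drift in a small and a large population.
for n in (20, 2000):
    traj = wright_fisher(n)
    print(f"N = {n:5d}: final frequency {traj[-1]:.2f} after {len(traj) - 1} generations")
```

Typically the small population's allele frequency wanders widely and may reach fixation or loss within the simulated generations, while the large population's frequency stays close to its starting value, matching the qualitative relationship between population size and drift described above.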
Rozen et al. used separate bacterial strains on two different media, one with simple nutrient components and one with nutrients noted to help bacterial populations evolve more heterogeneity. A digital simulation based on the bacterial experimental design was also used, with assorted assignments of fitness and effective population sizes comparable to those of the bacteria, based on both small and large population designations. In both simple and complex environments, smaller populations demonstrated greater population variation than larger populations, which showed no significant fitness diversity. Smaller populations had increased fitness and adapted more rapidly in the complex environment, while large populations adapted faster than small populations in the simple environment. These data demonstrate that the consequences of increased variation within small populations are dependent on the environment: more challenging or complex environments allow the variance present within small populations to confer greater advantage. The analysis demonstrates that smaller populations derive more significant levels of fitness from heterogeneity within the group regardless of the complexity of the environment; adaptive responses are increased in more complex environments. Adaptations in asexual populations are also not limited by mutations, as genetic variation within these populations can drive adaptation. Although small populations tend to face more challenges because of limited access to widespread beneficial mutations, adaptation within these populations is less predictable and allows them to be more plastic in their environmental responses. Fitness increase over time in small asexual populations is known to be strongly positively correlated with population size and mutation rate, and the fixation probability of a beneficial mutation is inversely related to population size and mutation rate.
LaBar and Adami used digital haploid organisms to assess differing strategies for accumulating genomic complexity. This study demonstrated that both drift and selection are effective in small and large populations, respectively, but that this success depends on several factors. Data from the observation of insertion mutations in this digital system demonstrate that small populations evolve larger genome sizes through fixation of deleterious mutations, while large populations evolve larger genome sizes through fixation of beneficial mutations. Small populations were noted to have an advantage in attaining full genomic complexity due to drift-driven phenotypic complexity. When deletion mutations were simulated, only the largest populations had any significant fitness advantage. These simulations demonstrate that smaller populations fix deleterious mutations through increased genetic drift; this advantage is likely limited by high rates of extinction. Larger populations evolve complexity through mutations that increase the expression of particular genes; removal of deleterious alleles does not limit the development of more complex genomes in the larger groups, and a large number of insertion mutations resulting in beneficial or non-functional genomic elements were not required. When deletion mutations occur more frequently, the largest populations have an advantage, which suggests that larger populations generally have an evolutionary advantage for the development of new traits.
Critical Mutation Rate
Critical mutation rate, or error threshold, limits the number of mutations that can exist within a self-replicating molecule before genetic information is destroyed in later generations.
Contrary to the findings of previous studies, the critical mutation rate has been noted to be dependent on population size in both haploid and diploid populations. When populations have fewer than 100 individuals, the critical mutation rate can be exceeded, leading to a loss of genetic material that results in further population decline and an increased likelihood of extinction. This ‘speed limit’ is common within small, adapted asexual populations and is independent of mutation rate.
Effective population size (Ne)
The effective population size (Ne) is defined as "the number of breeding individuals in an idealized population that would show the same amount of dispersion of allele frequencies under random genetic drift or the same amount of inbreeding as the population under consideration." Ne is usually less than N (the absolute population size) and this has important applications in conservation genetics.
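One classical illustration of why Ne is usually smaller than N is a population whose census size fluctuates between generations: under this standard approximation, Ne is the harmonic mean of the per-generation sizes, so brief bottlenecks dominate the result. The Python snippet below computes this estimate for invented census numbers; it is only one of several standard Ne approximations (others account for unequal sex ratios or variance in offspring number).

```python
def harmonic_mean_ne(census_sizes):
    """Effective population size under fluctuating census size.

    Classical approximation: Ne is the harmonic mean of the per-generation
    census sizes, so generations with very few breeders dominate.
    """
    return len(census_sizes) / sum(1.0 / n for n in census_sizes)

# Hypothetical census sizes with a one-generation bottleneck.
sizes = [1000, 1000, 10, 1000, 1000]
print(f"arithmetic mean N  = {sum(sizes) / len(sizes):.0f}")
print(f"harmonic-mean Ne  ~= {harmonic_mean_ne(sizes):.0f}")
```

With these example numbers the arithmetic mean is about 800 individuals, while the harmonic-mean Ne is only about 48, showing how strongly a single bottleneck depresses the effective population size.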
Overpopulation describes any case in which the population of a species exceeds the carrying capacity of its ecological niche.
See also
Carrying capacity
Holocene extinction event
Lists of organisms by population
Overpopulation
Population growth rate
References
Ecological metrics
Population genetics
Countable quantities | Population size | [
"Physics",
"Mathematics"
] | 1,128 | [
"Scalar physical quantities",
"Physical quantities",
"Metrics",
"Quantity",
"Ecological metrics",
"Dimensionless quantities",
"Countable quantities"
] |
1,547,519 | https://en.wikipedia.org/wiki/Acierage | Metal plating | Acierage | [
"Chemistry"
] | 5 | [
"Metallurgical processes",
"Coatings",
"Metal plating"
] |
1,548,510 | https://en.wikipedia.org/wiki/Fractional-order%20control | Fractional-order control (FOC) is a field of control theory that uses the fractional-order integrator as part of the control system design toolkit. The use of fractional calculus can improve and generalize well-established control methods and strategies.
The fundamental advantage of FOC is that the fractional-order integrator weights past inputs with a function that decays with a power-law tail, so the entire history of the signal contributes to each iteration of the control algorithm. This creates a "distribution of time constants", with the result that the system has no single dominant time constant or resonance frequency.
In fact, the fractional integral operator, whose Laplace-domain representation is $1/s^{\lambda}$ with non-integer $\lambda$, is different from any integer-order rational transfer function, in the sense that it is a non-local operator that possesses an infinite memory and takes into account the whole history of its input signal.
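This power-law memory can be made concrete with a discrete approximation. The Python sketch below uses the Grünwald–Letnikov weights to approximate a fractional-order integral; the order, step size and test input are arbitrary illustrative choices, and a practical controller would normally use a truncated or band-limited approximation rather than this direct O(n²) sum.

```python
def gl_fractional_integral(signal, alpha, h):
    """Grünwald-Letnikov approximation of the order-alpha fractional integral.

    The weights w_k decay like a power law (~ k**(alpha - 1)), so every past
    sample keeps contributing to the current output -- the "infinite memory"
    of the fractional integrator. alpha = 1 reduces to an ordinary
    rectangle-rule integral.
    """
    n = len(signal)
    # Recurrence for the Grünwald-Letnikov weights of order -alpha (integration).
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 + alpha) / k)
    out = []
    for i in range(n):
        acc = sum(w[k] * signal[i - k] for k in range(i + 1))
        out.append(h**alpha * acc)
    return out

# Fractional (alpha = 0.5) vs. ordinary (alpha = 1) integral of a unit step.
h = 0.01
step = [1.0] * 500
half = gl_fractional_integral(step, 0.5, h)
full = gl_fractional_integral(step, 1.0, h)
print(f"near t = 5 s: I^0.5 of step ~= {half[-1]:.3f}, I^1 of step ~= {full[-1]:.3f}")
```

For alpha = 1 the weights are all equal to 1 and the ordinary running integral is recovered; for 0 < alpha < 1 the weights never reach zero, which is the discrete counterpart of the "distribution of time constants" described above.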
Fractional-order control shows promise in many controlled environments that suffer from the classical problems of overshoot and resonance, as well as time diffuse applications such as thermal dissipation and chemical mixing. Fractional-order control has also been demonstrated to be capable of suppressing chaotic behaviors in mathematical models of, for example, muscular blood vessels and robotics.
Initiated in the 1980s by Prof. Oustaloup's group, the CRONE approach is one of the most developed control-system design methodologies that exploit fractional-order operator properties.
See also
Differintegral
Fractional calculus
Fractional-order system
External links
Dr. YangQuan Chen's latest homepage for the applied fractional calculus (AFC)
Dr. YangQuan Chen's page about fractional calculus on Google Sites
References
Control theory
Cybernetics | Fractional-order control | [
"Mathematics"
] | 357 | [
"Applied mathematics",
"Control theory",
"Applied mathematics stubs",
"Dynamical systems"
] |
1,550,674 | https://en.wikipedia.org/wiki/Radial%20stress | Radial stress is stress toward or away from the central axis of a component.
Pressure vessels
The walls of pressure vessels generally undergo triaxial loading. For cylindrical pressure vessels, the normal loads on a wall element are longitudinal stress, circumferential (hoop) stress and radial stress.
The radial stress for a thick-walled cylinder is equal and opposite to the gauge pressure on the inside surface, and zero on the outside surface. The circumferential stress and longitudinal stresses are usually much larger for pressure vessels, and so for thin-walled instances, radial stress is usually neglected.
Formula
The radial stress for a thick-walled pipe at a radial distance $r$ from the central axis is given by

$$\sigma_r = \frac{p_i r_i^2 - p_o r_o^2}{r_o^2 - r_i^2} + \frac{(p_o - p_i)\, r_i^2\, r_o^2}{(r_o^2 - r_i^2)\, r^2}$$

where $r_i$ is the inner radius, $r_o$ is the outer radius, $p_i$ is the inner absolute pressure and $p_o$ is the outer absolute pressure. The radial stress equals $-p_i$ at the inside surface ($r = r_i$) and $-p_o$ at the outside surface ($r = r_o$), so its maximum magnitude occurs at the inside surface; relative to the outside pressure, this maximum corresponds to the gauge pressure $p_i - p_o$.
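As a numerical check of the formula, the Python sketch below evaluates the radial stress across the wall of a hypothetical thick-walled cylinder; the radii and pressures are invented example values. At the inner and outer radii the result reduces to −p_i and −p_o respectively, as stated above.

```python
def radial_stress(r, r_i, r_o, p_i, p_o):
    """Lamé radial stress in a thick-walled cylinder at radius r (compression negative).

    Equals -p_i at the inner surface (r = r_i) and -p_o at the outer surface (r = r_o).
    """
    a = (p_i * r_i**2 - p_o * r_o**2) / (r_o**2 - r_i**2)
    b = (p_o - p_i) * r_i**2 * r_o**2 / ((r_o**2 - r_i**2) * r**2)
    return a + b

# Hypothetical vessel: 50 mm inner radius, 80 mm outer radius, 20 MPa inside, 0.1 MPa outside.
r_i, r_o, p_i, p_o = 0.050, 0.080, 20e6, 0.1e6
for r in (r_i, 0.065, r_o):
    print(f"r = {r * 1000:4.0f} mm: sigma_r = {radial_stress(r, r_i, r_o, p_i, p_o) / 1e6:7.2f} MPa")
```

Running the example prints roughly −20 MPa at the inner surface and −0.1 MPa at the outer surface, with intermediate values in between, consistent with the boundary conditions of the Lamé solution.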
References
Solid mechanics | Radial stress | [
"Physics"
] | 186 | [
"Solid mechanics",
"Mechanics"
] |